This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2020-01-03
Channels
- # announcements (2)
- # babashka (66)
- # beginners (225)
- # braveandtrue (1)
- # calva (14)
- # circleci (1)
- # clj-kondo (36)
- # cljsrn (3)
- # clojure (423)
- # clojure-finland (7)
- # clojure-nl (1)
- # clojure-spec (14)
- # clojure-survey (41)
- # clojure-sweden (2)
- # clojure-uk (13)
- # clojurescript (59)
- # community-development (10)
- # cursive (2)
- # datascript (14)
- # datomic (63)
- # events (3)
- # expound (8)
- # figwheel-main (6)
- # kaocha (8)
- # luminus (6)
- # malli (1)
- # nrepl (2)
- # off-topic (51)
- # other-lisps (3)
- # reagent (16)
- # shadow-cljs (44)
- # spacemacs (7)
- # sql (22)
- # vrac (1)
Clojure primitive vectors created using, e.g. (vector-of :long ...)
contain small Java arrays of primitive values at the "leaves", each of which individually should be well positioned for cache locality. Different leaves might not be in adjacent cache lines, but except for cache line prefetch optimizations in processors that shouldn't make much difference.
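A quick hedged sketch of what that looks like in practice (the names here are just for the example):

```clojure
;; A primitive vector stores unboxed longs in small arrays at the
;; leaves; a regular vector stores boxed java.lang.Long references.
(def pv (vector-of :long 1 2 3))

;; From the API's point of view it behaves like a normal vector
;; (values are boxed on the way out), but the storage is primitive.
(nth pv 1)                        ;; => 2
(conj pv 4)                       ;; => [1 2 3 4]
(instance? clojure.core.Vec pv)   ;; => true
```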
I would consider using an array with the typical power of two increase in size and copy whenever it overflows
although my personal belief is that if you're ever at that level of tweaky performance you're really close to just making random noise anyway 🙂
And the compacting GC will also optimize cache hits, because it compacts in allocation order
since it sounds like the only reason you are avoiding primitive arrays is because you don't know what your final size will be
The switch to ArrayList over vector was pretty good. Now I'm only 3 times slower than the C++ impl
it keeps an array, and doubles its size and copies the contents over if it needs more capacity
so if you do that bookkeeping, you can do with a primitive array instead of an Object array
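A hedged sketch of that bookkeeping: a primitive long-array that doubles its capacity on overflow, ArrayList-style, with no boxing. `grow-if-needed` and `squares` are made-up names for the example.

```clojure
;; Double the backing array when the next index would overflow it.
(defn grow-if-needed ^longs [^longs arr ^long n]
  (if (< n (alength arr))
    arr
    (java.util.Arrays/copyOf arr (* 2 (alength arr)))))

(def ^longs squares
  (loop [^longs arr (long-array 8), n 0]
    (if (< n 100)
      (let [arr (grow-if-needed arr n)]
        (aset arr n (* n n))
        (recur arr (inc n)))
      ;; trim the backing array down to the filled size at the end
      (java.util.Arrays/copyOf arr n))))
```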
It's difficult to measure the full impact of boxing (or anything that allocates) because much of the overhead is deferred until GC time. It all depends on whether GC is a performance factor for your workload. But you can be sure you will reduce overhead to some degree by not boxing. I have done what @hiredman recommends with success in many situations.
Meh, I just went with an array of a size that I know is big enough, even though it's possibly too big
java.util.Arrays.copyOf methods make copying easy, if you need it.
The slowdown in Clojure is the call to uncheckedLongCast. I can't say the exact difference, but ArrayList&lt;Object&gt; get is still probably slower than ArrayList&lt;Long&gt;
it does, it will result in a checkcast jvm instruction, but the jvm is typically very good at optimizing that instruction
There are security concerns with stuff like that
Well, the RT.uncheckedLongCast does have a cost in Clojure, about 47% of my profiled samples
That sounds ... unlikely to be right
I mean, I am at under 200ms and need to sample at 1000 nanoseconds to even see what's taking the most time
Having spent many hours of my life looking at profilers, I find it’s good to have a healthy distrust of them :)
I have experimental "builder" support in org.clojure/java.data
now on git and it would be great if some folks could test it before I cut an actual release with it in... add this to your deps.edn
org.clojure/java.data {:git/url ""
                       :sha "20d8562bf451f41bf58ac0b38401648ba21232a3"}
and then
(require '[clojure.java.data.builder :as builder])
(builder/to-java some.buildable.Thing {:foo "bar"})
By default it looks for some.buildable.Thing$Builder
, constructs it, calls the various property methods on it (`(.foo builder "bar")` in this case) and then builds it (by default calling (.build builder)
with the updated builder). It assumes each builder property method returns the updated builder instance (as opposed to regular clojure.java.data/to-java
which assumes setters return void and mutate the original instance). Example:
(builder/to-java java.util.Locale {:language "en"})
(it supports both setLanguage
and language
style property methods, as long as they return the builder instance)
I've also seen some builders that use withLanguage
. Not sure if you are interested in supporting that scenario.
@U0CJ19XAM interesting. Yes probably worth supporting somehow
Added support for that. We'll see if it causes any weirdness in the wild.
async-profiler upon which clj-async-profiler is based is nice in that it has fairly extensive documentation describing "safe point bias", which is at least one way that some other JVM profilers are misleading that async-profiler is not. But its documentation is very engineering-oriented towards caution, if not distrust.
What's an easy way to create a non-caching seq? Or something similar? Basically, a generator? But it just needs to remember the previous state, not all of it. I normally use an iterator, but that remembers the whole sequence, no?
Many Java Iterator implementations are mutable objects, I thought, and thus are very limited in the amount of memory they allocate.
e.g. a single object with a few dozen bytes
eduction?
Or maybe I'm just thinking of something where you call .next and get the next thing, forgetting what came before, but still continuing from where it left off...
so something built via iterate or reductions? d'oh, n/m those cache
Most Clojure seq implementations are immutable objects, so every call to next/rest returns a freshly allocated object. I am not sure exactly what you are looking for, but if you are looking to avoid allocating one object per sequence element, Clojure seq implementations are not what you want. They are good in that they are immutable, but not for minimizing memory allocation.
I'm not too sure what I want. It might not make sense. But I'm thinking I have a loop, and on every iteration, I want a number that increments by 10, let's say (this is an example). So sure, I can (loop [i 0] (let [num (* 10 i)] ... (recur (inc i))))
But, I'm thinking, well if I had an iterator that every time I looped I could call getNext on or something
You can also (recur (+ i 10))
in that example, without num
(iterate #(+ 10 %) 1)
kind of does it, but not quite, since it will cache all prior results
iterate doesn't cache, but anything with a handle to the head will
if you don't bind it, it will gc-churn but won't blow up space usage
that is, if you don't bind it in a way that escapes scope - the compiler is smart about letting lazy nodes be gc'd if it can prove they can't escape
When you say you are trying to avoid caching prior results, what exactly is it you are trying to avoid? The allocation of all of those objects, or something else? Because if allocating many objects that are collectable garbage before you get to the end of the sequence is good enough, then just don't "hold on to the head", and every element becomes collectable garbage as soon as you move past it.
Like: (loop [ten (iterate #(+ 10 %) 1)] (let [num (first ten)] ... (recur (next ten)))
Hum, yes, I think I was overthinking and assuming that this would not get garbage collected
I am nearly certain that every element of the return value of iterate is collectable garbage as soon as you go to the next loop iteration (in that code snippet as shown at least -- it is easy to hold on to the head of the sequence if you try to, and occasionally by surprise if you are not aware of the issue)
Are the parens matched the way you want there?
Yes, that’s a transducer
I haven't grokked eduction enough to answer, sorry.
Eductions are usually reduced
As an aside that may be irrelevant for why you are asking, but I am thinking about since you were worried about the performance differences between doing boxing/unboxing operations on primitive values earlier -- if you are that worried about performance, then creating Clojure sequences is probably something you also want to avoid, because of all of the object allocation, and later GC, they require.
as the doc string says, it returns something that is "reducible/iterable" (notably, not seqable)
actually, I guess it is seqable, I forgot about some of the details here
haven't used korma in a while, what's the more "proper" way to do (korma/where "create_time < now()")
it implements Iterable
what are you actually trying to do?
we're just taking a random walk afaict
I guess I'm just trying to loop over a collection applying a transducer but using loop/recur
why are you constraining to loop/recur?
what are you actually trying to do?
(let [iter (.iterator (eduction (map #(* 10 %)) (range 10)))]
(loop [element (when (.hasNext iter) (.next iter))]
(println element)
(when (.hasNext iter)
(recur (.next iter)))))
iterators are from a different unpleasant galaxy and you should seek to avoid them when possible
Except perhaps when you are trying to eke out the last bit of performance of some inner loop 🙂
Well, I need to loop over something, and I need 3 different values at each iteration. And one of those values I want to use a chain of transformation to compute the next one, but avoid creating intermediate results like with seq.
which is partly what reduce
is intended to help with, while hiding the mutable parts from your code.
So sometimes I do that with reduce where I wrap things in a vector and destructure, but the reduce that takes xf applies the transducer to the whole vector
you can exit reduce early with reduced
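A quick sketch of that early exit (the numbers are arbitrary):

```clojure
;; reduce can short-circuit with `reduced`, so even an infinite
;; input is fine -- no loop/recur needed for an early exit.
(reduce (fn [acc x]
          (if (> acc 100)
            (reduced acc)  ;; stop here and return acc as the result
            (+ acc x)))
        0
        (range))
;; => 105 (the first running sum over 100)
```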
I'm probably overengineering trying to not just do it all in the loop, which is what I have now. But I was wondering if I could extract some of that logic out
This is a contrived example, so mostly right now I'm just experimenting and exploring Clojure, not doing anything serious, but:
(loop [i 0 letters (map clojure.string/upper-case (cycle ["a" "b" "c" "d"]))]
(when (< i 10)
(let [next-letter (first letters)]
(println i ":" next-letter)
(recur (inc i) (next letters)))))
And it seems eduction is the trick:
(loop [i 0 letters (eduction (comp (filter #(or (= "b" %) (= "d" %)))
(map clojure.string/upper-case))
(cycle ["a" "b" "c" "d"]))]
(when (< i 10)
(let [next-letter (first letters)]
(println i ":" next-letter)
(recur (inc i) (next letters)))))
But I'm not sure what it does under the hood. Does it somehow just iterate statefully?
yes, it's an iterator
but you don't even need that, you can just use transduce for this
So it'd be the same as this:
(let [iter (.iterator (eduction (comp (filter #(or (= "b" %) (= "d" %)))
(map clojure.string/upper-case))
(cycle ["a" "b" "c" "d"])))]
(loop [i 0]
(when (< i 10)
(let [next-letter (when (.hasNext iter) (.next iter))]
(println i ":" next-letter)
(recur (inc i))))))
The (when (< i 10) ...)
is basically a take
. And map-indexed
gives you access to the index. So, you can push almost all of the logic into transducers and use run!
(which uses reduce
) over an eduction, like this:
(let [fmt-line #(str % " : " (clojure.string/upper-case %2))
xform (comp (filter #(or (= "b" %) (= "d" %)))
(take 10)
(map-indexed fmt-line))]
(->> (cycle ["a" "b" "c" "d"])
(eduction xform)
(run! println)))
Won't help with boxed nums though.
This is fully eager and produces no intermediate collections. (Except for the one that cycle
produces, of course)
Thanks. Ya, my actual algorithm was more complex, and the i wasn't simply an incremental count. Also, I was doing math with it inside the loop and wanted it to be primitive
Using the iterator was still a bit faster, but negligible compared to its extra ugliness
And either way, having an xf proved too slow for me. I had to instead compute the transformation myself as part of my loop in a more optimal way
you don't need eduction here
transduce takes an iterable source (like cycle), looks at each element in turn, applies a transducer chain, and passes the results to a final function f that can do whatever you want
you can combine transducers for map, filter, take, and map-indexed to do everything here
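A sketch of that combination: the same pipeline as the eduction example above, but with transduce and conj collecting the results directly.

```clojure
;; filter -> upper-case -> take 10 -> index each line, all in one
;; transducer chain; `take` makes the infinite cycle terminate.
(transduce (comp (filter #(or (= "b" %) (= "d" %)))
                 (map clojure.string/upper-case)
                 (take 10)
                 (map-indexed #(str %1 " : " %2)))
           conj
           []
           (cycle ["a" "b" "c" "d"]))
;; => ["0 : B" "1 : D" "2 : B" ... "9 : D"]
```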
then it's a constant, so who cares
Well, so part of what I was optimizing, math is applied to i as well, and I needed that to be primitive
none of that is going to work
transducers apply functions and anything will be boxed
if you want primitive loops, you have to loop/recur
Right, I'm talking inside my loop/recur, I have other bindings which I want to have as primitive
well, then probably I'd use your eduction with first/next above (not the java interop to iterator blech) or just use seqs - you're not holding onto the head, so who cares
seqs are designed to gc behind the iteration if you're not holding the head
or sequence
if you want to use transducers
Ya, I'll probably use seqs. I was just trying to see if transducers could be used as well. So if not for this, what would be a good use case for eduction?
it's mostly used to return a delayed reduction over an external resource (file, result set, etc)
so you can return it from a function without it having done any work yet, then reduce it elsewhere, and know that when the reduce is complete, the resource is no longer needed and can be closed
the faq entry https://clojure.org/guides/faq#transducers_vs_seqs talks about this re resource control
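A hedged sketch of that pattern; the temp file and the names here are invented for the example.

```clojure
(require '[clojure.java.io :as io])

;; a small throwaway file so the example is self-contained
(def tmp (doto (java.io.File/createTempFile "demo" ".txt")
           (spit "one\ntwo\nthree\n")))

(defn line-lengths
  "Returns a delayed reduction over rdr's lines; no work happens
  until someone reduces it."
  [rdr]
  (eduction (map count) (line-seq rdr)))

;; The reader is only needed for the duration of the reduce, and
;; can be closed as soon as it completes.
(def total
  (with-open [rdr (io/reader tmp)]
    (reduce + 0 (line-lengths rdr))))
;; total => 11
```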
anyone ever played with javazoom.jl.Player? Can't seem to get any audio output from playing an mp3. Ideally looking for some java that can play and edit mp3s if anyone knows of anything offhand
when used with first and next, the eduction is still getting put into a sequence, so I'm just not sure it's buying you much
FYI, eduction with first/next seems about 60% faster than using plain sequences in my case
So I'm guessing there's performance benefit to using sequence with transducer over just using sequences directly
Most likely my guess is that it reduces the number of intermediate seqs, as the transducer steps don't need one, even though there is one between calls to next
(this has been a fascinating discussion this evening so thank you @didibus and everyone else!)
Hello guys, I try to use cljs-ajax
for file uploading. Everything seems to work, but on the server side I get
{... :form-params {}, :websocket? false, :session/key nil, :query-params {}, :content-type "multipart/form-data; boundary=----WebKitFormBoundaryoWBHFfHqkdAWW8Tt", :character-encoding "utf8", :uri "/send-files", :server-name "localhost", :query-string nil, :path-params {}, :body #object[org.httpkit.BytesInputStream 0x22b4bc94 "BytesInputStream[len=100003]"], :multipart-params {}, :scheme :http, :request-method :post, :session {}...}
instead of
{...
:params
{"file" {:filename "words.txt"
:content-type "text/plain"
:tempfile #object[.File ...]
:size 51}}
...}
I use multipart-params, what am I doing wrong? Or how can I convert BytesInputStream into something usable? I want to save multiple files. Any help is appreciated.
@paul931224 It's been a long time since I uploaded a file, but IIRC tempfile is just a File object where the actual content is stored in a temp folder. You can just use that as it is. For instance you could create an input-stream from a file object as shown here: https://clojuredocs.org/clojure.java.io/input-stream
I don’t understand it fully, is object[org.httpkit.BytesInputStream 0x22b4bc94 "BytesInputStream[len=100003]"]
a file object? Where is it stored? I did it a while ago as well, but I remember getting a map with the name, size and temporary file; now I only get a BytesInputStream
Ah sorry, I thought you got the request map from your second example.
So you have a org.httpkit.BytesInputStream
instead. According to the code: https://github.com/http-kit/http-kit/blob/master/src/java/org/httpkit/BytesInputStream.java this extends InputStream,
so you can do everything with it that you can do with an InputStream. For instance, use the example I pasted before: https://clojuredocs.org/clojure.java.io/input-stream
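A hedged sketch of saving such a stream: any InputStream (including http-kit's BytesInputStream) can be written straight to a file with clojure.java.io/copy. `save-body!` is an invented helper name, and a plain in-memory stream stands in for the request body here.

```clojure
(require '[clojure.java.io :as io])

(defn save-body!
  "Copies an InputStream to the given File."
  [body file]
  (io/copy body file))

;; usage sketch
(def out (java.io.File/createTempFile "upload" ".txt"))
(save-body! (java.io.ByteArrayInputStream. (.getBytes "hello")) out)
(slurp out)  ;; => "hello"
```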
You can see the test for multipart handlers here: https://github.com/http-kit/http-kit/blob/a21d17cbce48eb2490c828c2ccf357dcf56c2e03/test/org/httpkit/server_test.clj#L46
The test server makes use of http://weavejester.github.io/compojure/compojure.handler.html#var-site middlewares which includes wrap-multipart-params
So I guess that you are missing the multipart params parser in your middleware chain.
Well, meanwhile I am trying to save my files from the input-stream; still, getting the tempfile location would be easier, so here is my code. I didn’t miss the multipart-params wrapper, but it still comes back with only the body; the params and multipart-params keys are empty maps
I mean, wrap-defaults contains the multipart-params wrapper, but I tried to add it explicitly, didn’t work either
Hello guys. Is it possible to loop
into a map of maps in Clojure? I have a map of addresses and I would like to, for each element in this map, add a new key/value pair to each address in addresses.
foreach (address : addresses){
address.put (newKey, "NewValue")
}
Kind of it. Sorry for OO thinking.
Assuming the addresses
map contains some arbitrary keys with actual addresses being the values, one way to do it would be:
(reduce (fn [acc k]
(assoc-in acc [k new-key] "NewValue"))
addresses (keys addresses))
{:facturation {:street "name"}
:main {:street "name"}}
The map is kinda like this
If you do some operations within deep nested structures often, you may want to use https://github.com/redplanetlabs/specter
With it, the code, I think, would be:
(s/transform s/MAP-VALS #(assoc % new-key "NewValue") addresses)
@UCPS050BV What's the difference? You would have two assoc
instead of one assoc-in
.
Although in the general case, you can't replace the first usage of addresses
with just {}
. But we've seen the example data, so that should be OK.
BTW another pretty common way is via into
:
(into (empty addresses)
(map (fn [[k a]]
[k (assoc a new-key "NewValue")]))
addresses)
Mapping over vals is a pretty common thing to do, lots of util libraries have a "map-vals" function
Medley is one of the more popular ones I think:
(medley/map-vals #(assoc % :new-key "new-value") addresses)
What you want is to get a map where a particular key-value pair is associated into every value of the input map. Let's say, we have a function f
that associates said key-value pair into a map (`#(assoc % newKey "NewValue")` does that), then it's just a matter of applying this function to every value in the input map. Maps in Clojure can be treated as sequences of key value pairs. So you can use map
to do that, like this: (map (fn [[k v]] [k (f v)]) input-map)
. Note we're using destructuring to get to the key and value in the kv-pair. What this returns is a sequence of kv-pairs with the transformation done. Now all we need to do is to pour these kv-pairs into a map, which we can do with into
, making the whole thing this:
(into {} (map (fn [[k v]] [k (assoc v newKey "NewValue")]) input-map))
. Now this works, but in case your input map is a sorted map, this code will return a hashmap. In order to preserve the exact type. What we can do, is to pour into a empty instance of the same type as your input map. Now the final version is this:
(->> input-map
(map (fn [[k v]]
[k (assoc v newKey "NewValue")]))
(into (empty input-map)))
If you're doing this kind of thing a lot, then you can extract map-vals
as a separate function of course.
@U883WCP5Z A small note - in the code block using ->>
, it's better to replace it with a transducer.
I thought transducers are a bit too advanced, and unless somebody is worried about performance, or re-usability, not tend to suggest them.
I’m educating myself with some 4clojure problems. Things were smooth until I hit something like a wall. Not going to spoil which problem it is, but I find myself wanting to un-partition a sequence that I have first partitioned with sliding windows of two (so (partition 2 1 coll)
), and then filtered. Throw me a hint, anyone, please. ❤️
(reduce concat coll)
? just a tip, I may be wrong :D
I’ve tried that. 😃 It leaves me with the duplicates. ((x y) (y z)) => ( x y y z)
. I want it to become (x y z).
Let’s say I started with (x y z)
, then partitioned it so that I have ((x y) (y z))
. I want to get back to (x y z)
.
well it is one step from duplicates (set (reduce concat coll))
for the first time I didn’t understand what “sliding windows of two” means, but it is clear now 😄
oh wait, set
won’t be good, I guess you may want to keep duplicates, just not the ones which are one after another?
Maybe this does it: (fn [c] [(first c) (map second (rest c))])
, not sure if it is general enough…
(conj (map first coll) (last (last coll)))
oh yes, almost the same solutions 😄
why rest
though? don't you need the second element from the first one in the coll?
An alternative solution that returns a vector:
(into (vec (first coll))
(map second)
(rest coll))
oh, my bad, I imagined (first (first coll))
, also you are totally right with the order
well I guess it won’t get nicer, but I think it is general enough
Ah, thanks. Let’s see if this unlocks the problem for me. I have a feeling I am making it more complicated than it should be….
It definitely sounds like it, given that you first partition, then de-partition. :) Feels like maybe transducers would be able to help you.
Also check out dedupe
and distinct
.
There:
(fn [input]
(let [length-groups
(->> input
(partition 2 1)
(partition-by (partial apply <))
(filter #(apply < (first %)))
(group-by count))]
(if-not (empty? length-groups)
(->> (get length-groups (apply max (keys length-groups)))
first
(#(concat (first %) (map second (rest %)))))
'())))
Looking at the other solutions, mine doesn’t seem very complicated. Probably just a quite tricky problem.
Also, (not (empty? ...))
is an anti-pattern. Replace it with just (seq ...)
. Or maybe use not-empty
when you create the collection itself. Something like (if-some [coll (not-empty ...)] ...)
.
Thanks. Will try some of the suggestions for not-empty. Why is it an anti-pattern the way I did it? Also, what’s the problem with ->>
together with (#(…))
?
Just look at the implementation of empty?
. :) Also, try using any CLJ linter on (not (empty? ...))
.
By using (#(...))
, you create a whole new function for the sole purpose of immediately applying it. There's no point in doing that when you can just run its body without creating the function.
clj-kondo
does not complain about my current construct, but will try with the spelled out one now…
Like so now:
(fn [input]
(let [length-groups (not-empty (->> input
(partition 2 1)
(partition-by (partial apply <))
(filter #(apply < (first %)))
(group-by count)))]
(if length-groups
(as-> (get length-groups (apply max (keys length-groups))) v
(first v)
(concat (first v) (map second (rest v))))
'())))
Feels like I could get rid of the let
now, somehow, but maybe that’s just a mirage.
Oh, it's even in the docstring of empty?
: "Please use the idiom (seq x) rather than (not (empty? x))".
Yes, you can absolutely get rid of let
by replacing if
with if-some
and moving the bindings block inside it.
Also, I meant using a combination of ->
and as->
:
(-> length-groups
(get (reduce max (keys length-groups)))
first
(as-> $ (concat (first $) (map second (rest $)))))
> Feels like maybe transducers would be able to help you.
I saw in some other thread that the presence of ->>
would indicate that transducer could be used. But I don’t know anything about transducers. Will have to investigate first.
Although, it doesn't really simplify it. Maybe it's worth extracting (concat (first $) (map second (rest $)))
into its own function, like de-partition
or something.
java.lang.RuntimeException: Unable to resolve symbol: as-> in this context
Too old 4Clojure…
https://clojure.atlassian.net/browse/CLJ-1293 this just caught me out a little - was writing try/catch code and not having a portable catch everything
meant digging around a bit. :default
or similar would be a helpful idiom.
You can vote for that issue at https://ask.clojure.org/index.php/1953/support-try-catch-default-for-portable-catch-all
I feel like I'm missing something obvious. I'm trying to update *data-readers*
similar to how it's done in clojure.core
(https://github.com/clojure/clojure/blob/clojure-1.9.0/src/clj/clojure/core.clj#L7753), but alter-var-root
, while apparently returning the correct value, doesn't change the map used to resolve the reader tag. Yet that value is getting stored somewhere, since a second call shows the change I made is getting passed to alter-var-root
. set!
seems to work, but I'm assuming there's some reason clojure.core
uses alter-var-root
instead.
you can think of bindings as a stack - the root binding, followed by all the thread-local bindings above
I see, thanks. I'm working on a macro that defines tagged literals. Nice-to-have would be updating *data-readers*
in the REPL, rather than having to change the data_readers.clj
file and restarting. Is there any way to achieve this? Seems like clojure.main
would have to run again no matter what to establish the thread-local bindings.
you can call set!
(set! *data-readers* (merge *data-readers* { ... }))
this only works because *data-readers*
has been bound in clojure.main
that may not be true of every repl environment, although I think it is in most
It only seems to work if the reader tag is accessed in the same context (thread?) as set!
was called. I can have a namespace that sets *data-readers*
and uses the tag, and calling require
will run that code fine. But *data-readers*
is unchanged in user
(or whatever ns is current), and attempts to use the tag fail.
am I the only one terrified by the idea of dynamically adjusting data readers in a macro? 😛
generally, macros that alter the runtime are a bad idea as compile time may be totally removed from execution time if AOT'ed
Hmmm, good point. I assume that's the rationale for data_readers.clj
?
you're overriding the local thread binding, not the global root
you started with alter-var-root, and you could do both - alter the data-readers root, and set! the current thread
it depends what your actual goal is here
you might also want to look instead at *default-data-reader-fn*
(a fallback function used if a tag isn't found)
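A hedged sketch of that fallback: with it installed, unknown tags read as data instead of throwing. (In a REPL you'd likely need set! instead, for the thread-binding reasons discussed here.)

```clojure
;; tagged-literal already has the (fn [tag form] ...) shape the
;; fallback expects, so it can be used directly.
(alter-var-root #'*default-data-reader-fn*
                (constantly tagged-literal))

(def v (read-string "#my/unknown {:a 1}"))
(:tag v)   ;; => my/unknown
(:form v)  ;; => {:a 1}
```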
*default-data-reader-fn*
is also bound by clojure.main
, so I assume would have the same issue.
Thanks. Just seeing if there was way to define the reader tag dynamically, without modifying data_readers.clj
, and that would work everywhere. May not be worth the effort. For developing custom tags, set!
is probably sufficient, since you can iterate and test the definition in a namespace until happy, then update data_readers.clj
.
Is there any language that offers a clojure like REPL and also types? With Clojure, I find it difficult to refactor code at > 5k LOC; but in compiled languages, waiting 5-10 seconds to recompile after each edit loses the 'immediate feedback'
I only know of other Lisps that do this. And both of them are optionally typed, which doesn't necessarily solve your issue.
The challenge is that for any mandatory full program type checking the full program has to always be known. But the REPL doesn't know the full program. It knows only form by form. So the best you could do is a REPL that is project aware somehow. But then, the second issue is that type annotations make it really annoying to use the REPL, and type inference on partial code is hard as well
why do you find it difficult to refactor code?
@alexmiller: In OCaml / Haskell / Rust / C++, if I add or remove a field from a struct/class, the compiler will complain at me and force me to fix the corresponding lines that (1) construct the struct without the field, (2) use the field of the struct. In Clojure, the code 'compiles' fine, but we get silent nils that blow up at runtime -- often not where the nil should have been initialized / read, but later on, after it's been passed around in various functions.
By silent nils it sounds like you have existing code that is extracting fields that no longer exist. Tools exist in Cursive, CIDER, etc to find usages of a field - have you tried using those to find uses before removal?
There's also a bit of the fact that you'd now be writing OOP-ish code. Or more specifically, ADT-ish code. Since you now need custom data types to model your data instead of data-structures
If you're willing to go there and I don't think it's a good idea. I wonder... I think you could create a macro that generates a struct like that with knowledge of keys, and a corresponding get-entity-key and set-entity-key functions for it. Now if you remove a key the corresponding get-entity-key and set-entity-key fns would no longer exist and cause a compile error
Imo, you’re then giving up a lot of the Clojure value prop
Are there any recommendations to do http calls over unix sockets like we do in the following curl command from clojure?
$ curl --unix-socket /var/run/docker.sock
[
{
"Containers": -1,
"Created": 1577395270,
"Id": "sha256:ce6c1e7ac56533e2742030f033cf0d8cf0adc996c7bb87453eb5adc266b2ef2e",
"Labels": null,
"ParentId": "",
"RepoDigests": [
"busybox@sha256:7fe0cb3632d9ea7b2a9ab4427e339e01f7cdfeff50674804cb8946664976c610"
],
"RepoTags": [
"busybox:musl"
],
"SharedSize": -1,
"Size": 1461385,
"VirtualSize": 1461385
}
]
clj-http as far as I understand doesn't support unix sockets?
@dpsutton: Is there any particular youtube video / talk that shows off the Elm REPL? I'm having trouble finding anything that matches the experience of figwheel / devcards / ...
@alexmiller: adding to the point about +/- on struct fields: similarly, if I reorder args in a function, or add/remove args from functions, the compiler once again tells me at compile time and forces me to fix it. In Clojure, these would be runtime errors, often in the wrong place, if the args are maps, which can sort of be interchanged, with many reads becoming nils.
you may not see compilation until runtime though
Ya, I have a AOT compile step used only as a linter specially for this. Also clj-kondo should catch wrong arity inside editor as well
specs can help with some of that
Spec is great in that it can test conditions that a standard type system can't. However, when I tried using spec, I found myself often writing specs of the form: this function takes arg1 of shape A, arg2 of shape B, arg3 of shape C, and produces a result of shape D, in which case, if I just used a typed language and defined types/unions A, B, C, and D, the compiler could handle it automatically for me
defn-spec/orchestra is nice for what you are describing
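A hedged sketch of what that looks like with plain clojure.spec; the specs and `add-city` are hypothetical names, just to illustrate the shape being described: arg shapes in, ret shape out, checked at the call site once instrumented.

```clojure
(require '[clojure.spec.alpha :as s]
         '[clojure.spec.test.alpha :as stest])

(s/def ::street string?)
(s/def ::address (s/keys :req-un [::street]))

(defn add-city [addr city]
  (assoc addr :city city))

(s/fdef add-city
  :args (s/cat :addr ::address :city string?)
  :ret map?)

;; instrument checks :args on every call during development
(stest/instrument `add-city)

(add-city {:street "Main St"} "Oslo")
;; => {:street "Main St", :city "Oslo"}
;; (add-city {} "Oslo") would now throw a spec error for the bad arg
```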
Heavy specing (and a slightly unconventional kind: "closed keys" checks, etc.) can solve 80% of the problem. The remaining 20% can be killed by the realisation that this popular notion of "refactoring" in reality tends to mean "mutate this big complex ball into another complex ball". If your program is made of simple atomic things, refactoring simply means a different composition of those atoms. Related: https://twitter.com/elementsofclj/status/1210613819975835648
As much as I agree in principle this is not how it happens in practice and even with well-written code you will sometimes miss a thing or two. Tests help in that case. That said, this is a dynamically typed programming language. It's good in the small, and I'm sure there are plenty of places that use it in the large (i.e., big single codebases) but like all other dynamically typed languages it will hit a ceiling in size wrt "moldability" beyond which making larger changes to the codebase will be more difficult than to a statically typed one.
interestingly, IMO hitting that ceiling is a smell per se i.e. if a codebase becomes too large, it failed at modularity IOW, I'd say that there's no such thing as a big atom. One is either programming with atoms or not
I agree. I guess I am simply trying to point out that there are plenty of such failures out in the wild, even in Clojure land, and that dealing with big failures is easier when you have a static type system to hand.
Respected! Personally I try to avoid complexities upfront at all costs, but using a static lang is also an option.
I deal with big Scala, Java and Kotlin failures a lot, and every time I deal with them similarly to how I'd deal with Clojure failures (though I've never had one of those yet). Which is: you need to create an Anti-Corruption Layer and slowly replace the component with a new one
The idea that all you need to fix is just removing one field from A and adding it to B is false. The failure is often deep-rooted: the data is modeled wrong, the whole class hierarchy is broken, the database is corrupted, the tests assert wrong behavior or don't exist, and when you look into it, you realize that all the logic was moved to some configuration anyway, which isn't type checked
There are magic strings everywhere for trying to rebuild dynamic behavior that the type-checked langs don't naturally offer, etc.
When I think of adding/removing fields as refactoring, I don't think of it as fixing a bug but rather a change due to a new feature or requirement. And that happens a lot for me. To me there is no question that static types help with this and therefore there is a drawback to using dynamic types. But dynamic types have many strengths as well. It's just a trade-off. The bottom line for me is that Clojure was designed quite well with dynamic types in mind, and I get a lot of benefit from that design. And for me those benefits are worth the price you pay with dynamic typing.
I guess I honestly don't see how static types help with that, and I've mostly only programmed with static typed language prior to Clojure.
In Java, IntelliJ refactorings are often unreliable. So I always do a find-and-replace pass afterwards to be sure it didn't miss anything. It often struggles with cross-project changes, it ignores comments, you never know if reflection access wasn't used somewhere, etc. And if the IntelliJ project wasn't configured properly it could have ignored files; sometimes it chokes on Lombok, or where code generation is being used, etc.
Do you have a more concrete example, maybe give a scenario? Where you relied on static types to help you add a new feature? And how?
In Java I try to use final fields in classes, and when I add a new field I get compiler errors telling me where I have neglected to initialize the field. And obviously when you remove a field you'll get compiler errors for all references to it. Refactoring doesn't necessarily mean automatic refactoring in the IDE, although I think I've had better luck with IntelliJ than you have.
For me, when using a dynamically typed language I have to be more careful coding and do more frequent testing as I code, however the code is simpler and easier to read. With static types, the compiler catches more problems for me, but I also spend more time fighting with it to force my solution into something that fits the type system.
It is a mistake to think that with static types the compiler catches all the errors, and I doubt that many people really believe that. It just catches more errors than with dynamic types.
I don't want to sound like I'm bashing static types. Being able to have static analysis of code and strong guarantees of certain properties has value, and the trade-offs come from the constraints it imposes and the additional annotations needed for it.
It's interesting that you bring up final on the variable, since I wouldn't really consider that related to types. I think that's an interesting thing this points out: there is more than just types. For example, it is a compile error in Clojure to call a function with the wrong number of arguments. Similarly, it is a compile error in Clojure to declare a let binding without a value. And I got curious whether this was valid: (def a)
. Surprisingly, this is not a compile error. Actually, it is pretty interesting: it creates an unbound Var, whose value is the Unbound object. I would have thought maybe it would just default to null, but it doesn't.
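A quick REPL sketch of that behavior, for anyone who wants to check it:

```clojure
;; (def a) with no init creates the Var but leaves it unbound.
;; Dereferencing it yields an Unbound sentinel object, not nil.
(def a)

(bound? #'a)
;; => false

(instance? clojure.lang.Var$Unbound @a)
;; => true
```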
P.S.: I don't think you need to declare the value final in Java. If you try to use an uninitialized variable it'll be a compile error. The difference is final will complain if it is never initialized, or if it isn't immediately after it is declared.
I meant final class fields, not variables.
@lkschubert8 From what I've read, defn-spec/orchestra is automatic instrumentation, and instrumentation sounds like "run time type checking" -- so what happens if we pass it a vec with 10M elements -- will it, right before the function is called, run through all the elements and check that they all have the same shape?
I would recommend instrumenting only in dev/test
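For what it's worth, the instrumentation cost on huge collections also depends on which collection spec you use: s/coll-of conforms every element, while s/every only samples up to s/*coll-check-limit* elements (101 by default). A sketch, with made-up names:

```clojure
(require '[clojure.spec.alpha :as s]
         '[clojure.spec.test.alpha :as st])

;; s/every samples rather than exhaustively checking, so an
;; instrumented call stays cheap even on a 10M-element vector.
;; Use s/coll-of instead if you want every element validated.
(s/def ::big-input (s/every int?))

(s/fdef sum-all
  :args (s/cat :xs ::big-input)
  :ret  int?)

(defn sum-all [xs] (reduce + 0 xs))

(st/instrument `sum-all)

(sum-all (vec (range 10)))
;; => 45, with the :args spec checked (sampled) on entry
```

Note that instrument only checks :args, not :ret; ret specs are exercised by st/check instead.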
@sgerguri Right, just to be clear, I'm not bashing clojure. I love clojure for "exploratory programming", and am not happy that right now, every tiny edit takes 5-10 seconds to recompile. This is why I really want a setup where I can use Clojure as the "glue/scripting" language, with a typed language building up the "primitives."
Yeah, that's the trade-off that you get with static types. Pick your poison. I like Clojure precisely because it allows me to have a conversation with the runtime in realtime, and I feel like I have enough experience to not shoot myself in the foot in the process too much.
I think better error messages/stack traces would certainly help, though with some exposure (and a little bit of subsequent experimenting in REPL) you will usually be able to pinpoint the issue fairly quickly.
we've spent a ton of time in 1.10.0 and 1.10.1 improving the error messages and narrowing the provided info to provide less noise and more signal. I'd be interested if you have feedback on that.
When I do get exceptions I tend to get similar low-level stack traces as before, though in our case it tended to be a function of interfacing with the Java wrappers we were using, or things not being initialised properly (we use mount). We've had some type-level mismatches after code rejigging (I hesitate to call it refactoring after the above conversation! :)) that would simply tell us that type X is being used but type Y is expected, though it wouldn't say exactly where it happens - so one still relies on fishing out the correct line from the stack trace, and then the rest of the difficulty tends to be associated with stacked transformations happening in a single place in the code. I know error messages are something people have been complaining about for a long time, so I don't want to sound ungrateful for the effort the core team has put into improving them.
I'm curious where you're seeing stack traces when an exception happens? Generally, you shouldn't as of Clojure 1.10.1 unless a tool you're using is doing that.
that is, are you seeing this in your editor (and if so, which one), or with lein (and if so, are you using Clojure 1.10.1), or somewhere else (where?)
I'm trying to assess whether you're seeing the stuff we've worked on or a different experience that is shaped by a tool
I think there were three cases when it happened:
• REPL startup
• State initialization through mount
• Tests - as mentioned all of our tests are property-based, so some of the generated data may trigger an exception somewhere in the codepath it is exercising
I may also be mixing the concept of low-level or indirectly-indicative errors together with the full stack trace - I remember we would sometimes get an error, though if I recall correctly that would simply be the error message from the stack trace that was being hidden away. In such cases I would almost always pull out the full stack trace anyway (through clojure.repl/pst *e) as I would need to see the callpath to identify where the actual problem was.
Other times, I would actually get the entire stack trace - typically during test runs or when initialising a REPL through Cursive. For test runs, this would happen both when running tests directly from Cursive as well as through leiningen. Interestingly, however, Cursive itself would sometimes freeze in the REPL without showing the stacktrace/exception, something which leiningen wouldn't do.
Ya, you're probably not talking about stack trace which is the full list of a called b called c etc. But more that the exception cause and message are Java specific and not always precise to the real issue in your Clojure code
I know how to interpret those, and still sometimes it takes me way too long to realize that I was, say, doing (i < 10)
by mistake
Yes, I know what stack traces are, thanks. 😉 I think the point is that even though the full stack trace may be hidden the error message is still not custom to the context and the raw stack trace error message is being shown.
Well, I meant that I understood you didn't mean stacktrace per se, since those are hidden now, but the exception cause
I have the same feeling, while the noise is removed a bit from the exceptions, the exceptions themselves haven't improved much
But I think it's a pretty hard problem. Because to add better error messages you'd need to add more checks and those would slow the code down.
well, it depends
Luckily enough our input, representation and output data is fully specced-out so we never actually hit this in production, always during testing/changing code, and then one can interactively zoom in on the issue with a helpful piece of random, generated data. I appreciate not every codebase is going to be hot on this particular approach, however.
Also, some of that is down to our preference for using higher-level properties in our tests - laws that should be upheld in protocol implementations, or properties for bigger/more important functions. We don't directly test everything so when the code changes and a high-level property like this blows up it can take some experimentation in the repl to find out exactly what went wrong.
Hi all, I could use your help designing a macro I’m building for use of CLJS in AWS Lambda. Right now you can set up a lambda like you would a function:
(deflambda fn1 [event context]
(response/ok
{:foo "bar"}))
But it would be nice to allow the addition of middleware.
With the current macro, it would have to be this:
(deflambda fn2 [event context]
((-> fn2-handler
(m/wrap-content-type "application/transit+json"))))
But that explicit function call seems off design-wise.
How do you think it would be best to handle the middleware case? My first thought was for the deflambda body to accept a value or a function. But is that idiomatic?
The Lambda would never return a function, so I think making it so that if it does, you treat it as middleware, is probably fine
But to be honest I'm confused about the middleware here. You mean to say you could set up the deflambda to automatically pass the return value to a middleware chain before returning it?
If so, and given it's a macro, you can do something more syntax based like: `(deflambda fn1 [event context] :middleware [middleware1 middleware2] body)`
Or even, if you want an input middleware chain and an output one, you could have :in-middleware and :out-middleware. Both optional.
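One way the `:middleware` syntax suggested above could look. This deflambda is a hypothetical sketch, not the asker's real macro, and the middleware here are assumed to be ring-style handler wrappers:

```clojure
;; Hypothetical deflambda supporting an optional :middleware vector.
;; Middleware are fns of handler -> handler, applied so the leftmost
;; entry ends up outermost, like ring's wrap-* conventions.
(defmacro deflambda
  [name argv & body]
  (let [[opts body] (if (= :middleware (first body))
                      [{:middleware (second body)} (drop 2 body)]
                      [{} body])
        handler `(fn ~argv ~@body)]
    `(def ~name
       (reduce (fn [h# mw#] (mw# h#))
               ~handler
               ~(vec (reverse (:middleware opts)))))))

;; usage sketch:
;; (deflambda fn1 [event context]
;;   :middleware [wrap-content-type]
;;   (response/ok {:foo "bar"}))
```

With no :middleware key the macro degenerates to a plain def of the handler fn, so the existing call sites keep working.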
I wondered how clojure
determined when parsing of its own command line args is done and when the user arguments start
Yep - it’s really expected to be used with -m
JIRA if you have an account there, otherwise http://ask.clojure.org
There are tags for all the contrib libs on http://ask.clojure.org
https://clojure.atlassian.net/projects/TLOG/issues/?filter=allopenissues if you use JIRA
@hhausman I'm curious as to what bug fix and/or feature request you have in mind for c.t.l., since we're heavy users of it at work.
first, wow, http://ask.clojure.org is rad! Glad now that I asked. @seancorfield - https://ask.clojure.org/index.php/8985/can-i-suppress-low-level-logs-with-tools-logging
tools.logging is really a facade on top of a number of java logging libraries. for things not covered by the facade (which is basically everything except actually producing log messages) you'll need to interact with whatever concrete java logging library is being used
In that case, I guess my feature request is to augment the facade to include functionality for temporarily disabling certain log levels (similar to with-level in timbre).
Maybe it's possible, but I don't think so. Because timbre is both a logger and a log interface, while tools.logging makes use of multiple log frameworks, and I believe they don't all support that feature.
I've got a deftype and I want to define how it gets compared by = to another object. I can't figure out the right protocol and methods to support for that. It's clojure.lang.Associative and clojure.lang.IPersistentCollection but not updatable.
https://github.com/clojure/clojure/blob/master/src/jvm/clojure/lang/Util.java#L24 is where clojure.core/= bottoms out
https://github.com/clojure/clojure/blob/master/src/jvm/clojure/lang/APersistentMap.java#L73 is the method that will eventually get called when comparing against one of clojures built in maps
it would be equals and hashCode on Object, but deftype might interfere with defining those
user=> (deftype Easy [] Object (equals [_ _] true) (hashCode [_] 0))
user.Easy
user=> (= (->Easy) 0)
true
user=> (= (->Easy) 1)
true
it clearly isn't happening above - I see, you mean if you also implemented those interfaces, which deftype doesn't do out of the box...
if you redefine = without redefining hashCode as well, be prepared for bad behavior in associative collections
(see for example existing weird behavior of Double/NaN when used as a key or set member)
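To make the above concrete, here is a minimal sketch of a deftype whose equals and hashCode agree, with hasheq added so clojure.core/hash agrees too. The type and its field are made up for illustration:

```clojure
;; Sketch: a one-field deftype that behaves consistently under both
;; Java equality (equals/hashCode) and Clojure's = and hash.
;; equals and hashCode must agree, or sets and map keys misbehave.
(deftype Point [x]
  Object
  (equals [_ other]
    (and (instance? Point other)
         (= x (.-x ^Point other))))
  (hashCode [_] (hash x))
  clojure.lang.IHashEq
  (hasheq [_] (hash x)))
```

Since Point is not an IPersistentCollection, clojure.core/= falls through to .equals here, so implementing equals is enough for = in this case; collection-like types additionally want equiv, as in the core.rrb-vector example linked below.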
The core.rrb-vector library is one example of implementing a custom vector-like class in Clojure. It has deftype Vec in the source file I will link in this message, and implements equals and hashCode methods for Java compatibility, and hasheq (for clojure.core/hash) and equiv (for clojure.core/=): https://github.com/clojure/core.rrb-vector/blob/master/src/main/clojure/clojure/core/rrb_vector/rrbt.clj#L473
If your collection is more like a map or set than a vector, then obviously don't copy and paste those implementations -- you would want to check for other interfaces/Java-collection-classes than that code does for vectors and vector-like things.
is there a friendly way in clojure to do the same thing as tree-seq but with pre-order style (like prewalk)?
clojure.walk/prewalk sorry, misread
isn't tree-seq already pre-order? you see a parent first, then its children, recursively
problem with tree-seq is it is post-order (depth-first) and it does not go into nodes in a vector .. i need to hit all the nodes prewalk-style
maybe i should just look at the code for tree-seq and substitute postwalk for prewalk
it definitely goes into vector nodes (as long as your predicate accepts them), and depth vs. breadth is an entirely separate issue
ok .. let me check on that
it doesn't use any clojure.walk function
it's just a recursive lazy function
you could switch the nesting to do breadth before depth - it's a small function, easy to modify
user=> (source tree-seq)
(defn tree-seq
"Returns a lazy sequence of the nodes in a tree, via a depth-first walk.
branch? must be a fn of one arg that returns true if passed a node
that can have children (but may not). children must be a fn of one
arg that returns a sequence of the children. Will only be called on
nodes for which branch? returns true. Root is the root node of the
tree."
{:added "1.0"
:static true}
[branch? children root]
(let [walk (fn walk [node]
(lazy-seq
(cons node
(when (branch? node)
(mapcat walk (children node))))))]
(walk root)))
nil
got it .. working on that now
this appears to work
(defn breadth-tree-seq
[branch? children root]
(let [walk (fn walk [node]
(let [branches (when (branch? node)
(children node))]
(concat branches
(mapcat walk branches))))]
(cons root (walk root))))
no, it returns all the leaves just like tree-seq does. (concat branches ...) ensures that either branches aren't leaves and they are recursed on, or they are leaves and thus in the output
it doesn't include "leaves" whose parent returns false for branch?, just like tree-seq
i'm not convinced. the cons node will get the leaf from tree-seq but there's no equivalent that i can see in the breadth version
it always concats branches into the output
that's the equivalent
(and starts with (cons node ...) at the top level)
anyway, running it proves it works too
user=> (pprint (breadth-tree-seq coll? seq [1 #{2 {:a 3 :b 4}} [5 6]]))
([1 #{{:a 3, :b 4} 2} [5 6]]
1
#{{:a 3, :b 4} 2}
[5 6]
{:a 3, :b 4}
2
[:a 3]
[:b 4]
:a
3
:b
4
5
6)
nil
(edited to make order more obvious) am i just losing marbles? i am trying
(tree-seq #(or (map? %) (vector? %)) vals tree)
and tree-seq is not loving that. (tree-seq map? vals tree) works fine
vals surely doesn't work on vectors
oh .. ok .. hold on
yeah .. that did it .. thanks @noisesmith
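For the record, the fix for the original goal (recursing into both maps and vectors) is a children fn that branches on node type, since vals throws on vectors. A small sketch with a made-up input:

```clojure
;; vals only works on maps, so give tree-seq a children fn that
;; handles every node type the branch? predicate accepts.
(defn node-children [node]
  (if (map? node) (vals node) (seq node)))

(tree-seq #(or (map? %) (vector? %)) node-children
          {:a [1 2] :b {:c 3}})
;; => ({:a [1 2], :b {:c 3}} [1 2] 1 2 {:c 3} 3)
```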
does anyone know how to deal with this situation: I am adding a test dependency to a lib that is also needed for the tests of the calling project, but not in prod. if i only add this dependency to the :dev profile, then it is not available when the calling project's tests run. the best i found so far is https://github.com/technomancy/leiningen/blob/master/doc/PROFILES.md#dynamic-eval but it seems like a hack. Thanks!
@antonmos is using lein with-profile +dev test
an option?
that means "use the normal profiles, plus dev, for this task"
then I don't understand where the problem is here
oh, yeah, then I agree with @hiredman here that you shouldn't use test deps of your deps - that smells like a design problem
this might be what i need https://github.com/technomancy/leiningen/blob/master/doc/PROFILES.md#profile-metadata
it provides support for talking to a db as well as support writing integration tests that need that db
at my company we solve this by the depended project defining a lib of test helpers
yeah, I mean, my two reactions are project B should depend on X for its tests then, and also something is broken in the design that makes X required for both
from one repo, you can get two artifacts: the lib itself, and the test helper lib
via separate jar / deploy profiles
if B is doing that stuff, it should depend on the library that gives it that stuff (X)
@noisesmith @hiredman do you by any chance have a link to an example project that publishes multiple jars from one repo?
I thought I had one, but the reusable test stuff was folded in with the rest of the project
@antonmos We do it at work, but unfortunately that's closed source. We have a monorepo, with about 30 subprojects, managed with CLI/`deps.edn`, and we create JARs from 12 of those subprojects. I don't know how you'd do it with lein
but it's fairly straightforward with CLI/`deps.edn` (and we used to do it with Boot pretty easily too before we switched to the CLI stuff).
proof it works
all I had to add was the :test-jar profile, otherwise a default project
(then invoke the profile in the jar task, of course)
(and there's how you do it with Leiningen 🙂 -- thanks @noisesmith )
this is likely the best way to do a clojure / lein monorepo - have a separate subdirectory for each app etc.
and then use source-paths to define the set of in-repo libs that go in each app's jar, and optionally per-profile deps for each if they want different deps
(the other approach I've seen is a modules plugin that does some magic to make version numbers match up, which is kind of messy)
looks like the core.clj is included in both jars, i think i can add :jar-exclusions within the :default profile
or the ^:replace
metadata on :source-paths
I think the replace option is more parsimonious (matches semantically what you want)
yeah, :source-paths ^:replace ["test"]
fixes it
as an aside I think leiningen suffers from far too many "good enough" but inelegant configs, and then other configs being copy pasted - and it's powerful enough to keep iterating and getting uglier and uglier setups that way... perhaps this could be called the javascript problem haha
or the makefile problem
the other thing you want is to override the project name inside the profile (I think that's possible...)