This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2020-08-05
Channels
- # all-the-channels (1)
- # announcements (3)
- # asami (13)
- # beginners (227)
- # calva (2)
- # chlorine-clover (8)
- # cider (7)
- # clj-kondo (4)
- # cljs-dev (21)
- # cljsrn (8)
- # clojure (64)
- # clojure-europe (39)
- # clojure-france (2)
- # clojure-italy (3)
- # clojure-losangeles (1)
- # clojure-nl (20)
- # clojure-uk (8)
- # clojurescript (24)
- # conjure (12)
- # datalog (9)
- # datomic (24)
- # emacs (8)
- # figwheel-main (1)
- # fulcro (15)
- # jobs-discuss (1)
- # malli (5)
- # meander (3)
- # off-topic (26)
- # re-frame (15)
- # reagent (10)
- # reitit (1)
- # reveal (1)
- # sci (15)
- # shadow-cljs (25)
- # spacemacs (7)
- # sql (3)
- # xtdb (1)
My use case was {:ref my-atom} in reagent. Today we have to create our own functions to reset! them, and it is annoying for both syntax and performance. I figured it would be convenient to use them as functions in such cases if there weren't any big downsides, but I see now why it may be a bad idea in general.
In reagent, reset! is common, so yeah, I may just make a new mutable container with that behavior for IFn to test it out.
It sounds just like the already existing React/createRef, which is intended to be used with :ref.
As far as making it stand out, @smith.adriane, I see your point, but there are other ways to do that, like a naming convention. For example, I've seen some people name all their atoms like !my-atom, or similar.
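The callable container mentioned above could be sketched like this. This is a JVM Clojure sketch; FnAtom and fn-atom are invented names for illustration, and in ClojureScript you would implement the IDeref/IFn protocols via -deref/-invoke instead:

```clojure
;; Hypothetical sketch of a mutable container that acts like reset! when
;; called as a function, so it can be passed directly as a :ref callback.
;; The names FnAtom and fn-atom are invented for illustration.
(deftype FnAtom [^:volatile-mutable value]
  clojure.lang.IDeref
  (deref [_] value)
  clojure.lang.IFn
  (invoke [_ x]
    (set! value x)  ; calling the container stores the new value
    x))

(defn fn-atom [init] (FnAtom. init))

;; Usage: (def !node (fn-atom nil)), then pass !node as :ref;
;; React calls (!node dom-node), and @!node reads it back.
```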
{:deps {org.clojure/clojure {:mvn/version "1.10.1"}
        ;; FOP 2.4 and 2.5 depend on [com.sun.media/jai-codec "1.1.3"], which is not OSS
        org.apache.xmlgraphics/fop {:mvn/version "2.5"
                                    :exclusions [com.sun.media/jai-codec
                                                 javax.media/jai-codec
                                                 javax.media/jai-core]}
        org.apache.xmlgraphics/fop-core {:mvn/version "2.5"
                                         :exclusions [com.sun.media/jai-codec
                                                      javax.media/jai-codec
                                                      javax.media/jai-core]}
        org.soulspace.clj/clj.java {:mvn/version "0.8.0"}}}
the trace you posted doesn't show those exclusions getting included in the path so I may be confused what you're asking about
are you just saying that you need to specify the exclusions on both top-level libs for the exclusion to take effect? if so, then that is the expected behavior currently (there are no global excludes)
any idea, why it is not enough to have the exclusions on fop? If I don't include fop-core with the exclusions, the jai stuff will be included. 😞
can you use -Strace and post the emitted trace.edn file here as a snippet?
Here is the trace.edn with the fop-core dep commented out
is there a library that can take a datomic/datascript-ish schema, and use it to flatten nested maps/"entities"?
I came across this fairly common implementation to “dechunk” a chunked seq so that only one item is realized at a time. The impl is typically this:
(defn dechunk
  [xs]
  (lazy-seq
    (when-first [x xs]
      (cons x
            (dechunk (rest xs))))))
I found this to be interesting, so I tested it out first with:
(first (dechunk (map prn (range 10))))
Which printed out all 10 times, so dechunk didn’t seem to help.
I did notice, more obviously, this did work:
(first (map prn (dechunk (range 10))))
So then I thought, you just cannot apply any fn that may realize the chunk before you have wrapped it in dechunk. However, then I noticed this:
(first (map prn (map inc (dechunk (range 10)))))
this ends up working again - only prints 1 item.
Same goes with
(first (map prn (filter even? (dechunk (range 10)))))
Any explanation?
dechunk can't change map's chunked behavior if it doesn't wrap map's input
IMHO if you need dechunk you are better off swapping in explicitly imperative logic instead of your lazy code somewhere, lazy-seqs are not good queues, but agents and channels are
pretending something imperative isn't imperative leads to bugs, and agents and channels are very nice imperative abstractions
more clarity about that first part: map detects a chunkable input and uses a performance optimization that calculates a chunk at a time; dechunk returns a new lazy seq that isn't chunkable, and map over that non-chunked seq doesn't return a chunkable one either
in fact that makes me wonder why one doesn't replace dechunk with (partial map identity)
(except for the design concerns already stated indicating you shouldn't be using laziness in the first place)
oh - I guess I misunderstood his last part then
every lazy-seq operation returns a new lazy seq that refers to the previous, so as you force it you force the previous
so dechunking a lazy seq returns a new lazy seq that is not chunked, but it can't go back and make the previous lazy seqs not chunked
so if you look at all of the examples above, when the prn is after the dechunk you get one at a time, when prn is before you get multiple
I still don’t understand why it works/doesn’t work in some cases. prn isn’t really special here I’d think.
Dechunk consumes a sequence and returns a new sequence built from elements of that consumed sequence
Consuming a chunked sequence will realize in chunks, no matter how you consume it, because that is how it is constructed
So dechunk gives you a sequence that chunking aware operations (map, for, etc) will not treat as chunked
But if the input to dechunk is chunked, that sequence can only be realized in chunks, and the chunks pulled apart
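The explanation above can be checked at the REPL with chunked-seq? (a real clojure.core predicate), using the dechunk from earlier:

```clojure
(defn dechunk
  [xs]
  (lazy-seq
    (when-first [x xs]
      (cons x (dechunk (rest xs))))))

;; range produces a chunked seq; dechunk's output is not chunked,
;; and neither is anything mapped over that output.
(chunked-seq? (seq (range 10)))                      ;; => true
(chunked-seq? (seq (dechunk (range 10))))            ;; => false
(chunked-seq? (seq (map inc (dechunk (range 10)))))  ;; => false
```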
@U0NCTKEV8 good explanation and sort of what I was thinking. So doesn’t this mean that dechunk isn’t really a solution?
It’s posted all over the place online (Stack Overflow, blogs, etc.), but really, if you are given a chunked seq, you can’t make it suddenly not chunked anymore.
Is that correct? It seems to align with what you’re saying. You are only a consumer at the point of calling dechunk. If the seq given is chunked, you will still realize it in chunks
It’s almost like you can make chunk-aware things later in the pipeline no longer chunk, but you cannot prevent the original chunked seq from being realized in chunks. So it could have some uses depending on what was being dechunked.
Yes, it can be useful, you just need to call it as soon as possible (e.g. immediately on range or a vector) to make sure your side effecting operations happen one at a time
Indeed. Interesting. I just came across this sort of thing being used somewhere and became curious. I try to avoid needing this type of thing if possible.
(->> (range 10)
     (map prn)
     (dechunk)
     (first))
;; prints 0 through 9: map consumed the chunked range before dechunk ran

(->> (range 10)
     (dechunk)
     (map prn)
     (first))
;; prints only 0: map sees a non-chunked seq
Is there a way to compare two maps, using == instead of = for any entries that happen to be numeric?
you could do it by mapping over two tree-seqs I bet...
Yeah. Just checking if there's a better way! It seems like a common use case, but then I guess this is the first time I've needed it 😁
tree seq looks nifty!
what I have so far uses (comp sort seq), but I can't just use sort because the keys are heterogeneous
Well, here's an idea... is there a way I can just cast all numbers in a map to double?
haha, that's easier: clojure.walk/postwalk
awesome 👍
much easier
user=> (walk/postwalk #(if (number? %) (double %) %) {:a 0 :e {:b 1}})
{:a 0.0, :e {:b 1.0}}
walk should really have a (defn transform-type [pred f] (fn [x] (if (pred x) (f x) x))) type thing
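That helper might look like this (transform-type is the name proposed above, not an existing clojure.walk function):

```clojure
(require '[clojure.walk :as walk])

;; The proposed helper: wraps f so it only applies when pred matches.
(defn transform-type [pred f]
  (fn [x] (if (pred x) (f x) x)))

;; Reproduces the postwalk example from earlier:
(walk/postwalk (transform-type number? double) {:a 0 :e {:b 1}})
;; => {:a 0.0, :e {:b 1.0}}
```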
Arity question regarding a more generalized version of this function:
(defn maybe-apply
  [pred f x]
  (if (pred x)
    (f x)
    x))
Would it be better for x to be the first or last argument?
Why first? Because of ->, which is coherent with how other value-transforming functions work (assoc, conj, etc.)
pred, f, x goes from most general to most specific, making it friendly to partial
if you want threading, this is nearly cond->, which can be used inside ->
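To illustrate the cond-> point: (maybe-apply pred f x) is roughly (cond-> x (pred x) f), and cond-> composes naturally inside -> (a sketch using the maybe-apply from above):

```clojure
(defn maybe-apply
  [pred f x]
  (if (pred x)
    (f x)
    x))

;; cond-> applies the step only when the test passes:
(= (maybe-apply odd? inc 5)
   (cond-> 5 (odd? 5) inc))  ;; => true, both yield 6

;; and it threads inside a thread-first pipeline:
(-> 5
    (cond-> (odd? 5) inc)
    (* 10))                  ;; => 60
```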
I feel like I've written so many variations of that, and always for clojure.walk
Hey! Passing test!
That's great, I'm definitely saving this code snippet to an accessible location