This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2023-02-03
Channels
- # announcements (1)
- # babashka (31)
- # babashka-sci-dev (53)
- # beginners (33)
- # calva (54)
- # cider (15)
- # clj-kondo (9)
- # clojure (115)
- # clojure-dev (19)
- # clojure-europe (21)
- # clojure-nl (1)
- # clojure-norway (78)
- # clojurescript (10)
- # clr (9)
- # community-development (9)
- # core-async (24)
- # cursive (18)
- # datomic (59)
- # emacs (43)
- # figwheel-main (2)
- # fulcro (4)
- # graphql (4)
- # malli (7)
- # meander (12)
- # nbb (14)
- # off-topic (22)
- # polylith (8)
- # re-frame (5)
- # reitit (3)
- # releases (1)
- # shadow-cljs (36)
- # sql (1)
- # tools-build (23)
- # xtdb (13)
Just stumbled upon https://ask.clojure.org/index.php/9010/distinct-should-support-sets. Basically I have a fn that takes a coll and I wanted to be sure it was distinct regardless of type. I see this question has no Jira ticket on it. Any chance of that happening?
btw, the transducer version will work with sets
it provides a solution now in lieu of a change
but I will file a ticket for it
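For reference, a minimal sketch of the transducer workaround mentioned above (the function name `ensure-distinct` is mine, not from the thread):

```clojure
;; The 1-arity (distinct) returns a transducer, which works over any
;; reducible input -- sets included -- unlike the seq-returning arity
;; discussed in the linked question.
(defn ensure-distinct [coll]
  (into [] (distinct) coll))

(ensure-distinct [1 1 2 3 3]) ;=> [1 2 3]
(ensure-distinct #{1 2 3})    ;=> vector of 1, 2, 3 in set-iteration order
```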
I notice that there is an opportunity to reduce a lot of allocations in int-maps during iteration, if we have Leaf nodes (https://github.com/clojure/data.int-map/blob/master/src/main/java/clojure/data/int_map/Nodes.java#L567) extend clojure.lang.MapEntry (since that is basically what they are anyway). Then its iterator can return this instead of new MapEntry(key, value) (https://github.com/clojure/data.int-map/blob/master/src/main/java/clojure/data/int_map/Nodes.java#L596), and similarly its reduce method can invoke on this rather than new MapEntry(key, value) (https://github.com/clojure/data.int-map/blob/master/src/main/java/clojure/data/int_map/Nodes.java#L614)
On an int-map of 50k entries I observe about a 40% speedup in reduce by doing that
a complication is that int MapEntry.count() { return 2 } clashes in both meaning and signature with long INode.count() (which should return 1), so INode.count would need to be renamed to something like nodeCount()
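A rough sketch of the idea, using java.util.AbstractMap.SimpleImmutableEntry in place of clojure.lang.MapEntry so the snippet is self-contained; the class and method names are illustrative, not the actual Nodes.java code:

```java
import java.util.AbstractMap;
import java.util.Iterator;
import java.util.Map;

// Hypothetical sketch: a Leaf node that *is* its own Map.Entry, so
// iteration and reduce can hand out `this` instead of allocating a
// fresh MapEntry per element.
class Leaf extends AbstractMap.SimpleImmutableEntry<Long, Object> {
    Leaf(long key, Object value) {
        super(key, value);
    }

    // The leaf's iterator yields the node itself: zero allocation
    // per element, versus `new MapEntry(key, value)` today.
    Iterator<Map.Entry<Long, Object>> entryIterator() {
        final Leaf self = this;
        return new Iterator<Map.Entry<Long, Object>>() {
            private boolean done = false;
            public boolean hasNext() { return !done; }
            public Map.Entry<Long, Object> next() { done = true; return self; }
        };
    }
}

public class LeafDemo {
    public static void main(String[] args) {
        Leaf leaf = new Leaf(42L, "v");
        Map.Entry<Long, Object> e = leaf.entryIterator().next();
        // Identity check: the entry returned IS the leaf node.
        System.out.println(e == leaf);                        // true
        System.out.println(e.getKey() + "=" + e.getValue());  // 42=v
    }
}
```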
I don't have a way to check out against the latest build of Clojure(JVM), but I'm wondering about the effects of commit https://github.com/clojure/clojure/commit/b2366fa5c748f9d600879c3e0b549e631a5b386f on LongRange chunking. Prior to this commit LongRange had similar logic to what is still in Range for chunking, namely forceChunk would make sure that the LongChunk created had size no more than CHUNK_SIZE = 32. In the new code, the LongChunk will have size equal to the size of the LongRange. I'm guessing this will be a surprise to anyone trying (take 1 (map f (range 1000000))), who likely will expect f to be called at most 32 times. (I'm basing this on comparing my current install of clj = 1.11.1 against the latest ClojureCLR, which has this change. I used a side-effecting f. For clj, f is called 32 times. For ClojureCLR, well, I only tested it against 100 and not 1,000,000. Tracing through my code vs the current JVM code, I don't think the problem is just on my side.)
this has been changed back in master
or maybe it hasn't been ok'ed yet
doesn't show in the repo yet. My usual assumption is that something is broken in my code. It took me a long time to think of checking the commit history on that file.
it actually had a missing field set, so it was not showing up in the right list. Thanks! ;)