This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2019-08-04
Channels
- # announcements (2)
- # beginners (24)
- # cider (53)
- # clara (4)
- # clj-kondo (3)
- # cljdoc (2)
- # clojars (1)
- # clojure (17)
- # clojure-dev (48)
- # clojure-russia (14)
- # clojure-uk (10)
- # clojuredesign-podcast (5)
- # clojurescript (11)
- # cursive (4)
- # events (5)
- # joker (1)
- # juxt (1)
- # kaocha (1)
- # re-frame (13)
- # reagent (1)
- # reitit (2)
- # sql (28)
has anyone done any experiments with the inline classes in Valhalla early access and Clojure’s persistent data structures? 🙂
He was talking about it at the JVM Language Summit
I haven't had a chance to do anything except think about how it could apply within PersistentHashMap
Brian Goetz was strongly encouraging experimentation -- it's ready for that https://wiki.openjdk.java.net/display/valhalla/LW2
the Vector API is very exciting for the JVM. I think we'll be able to use the Vector API, but I don't think it will be very performant unless we can write our functions in such a way that large areas of code have the proper Vector types exposed -- that way the JVM will optimize through it
I'm not sure whether it would work as well with intervening casts to/from Object as with IFn
I guess if you arrange a fat method body using a bunch of macros, where all the locals are typed Vectors, that might work
yeah, I think I will have to jump through some major hoops to make it work, but that’s the fun in it! 🙂
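The hazard of intervening `Object` casts mentioned above can be illustrated with plain boxing (a stand-in example, not the Vector API itself, since that needs the incubator module): when every value round-trips through `Object`, the JIT can't keep values in registers the way it can when the locals are concretely typed.

```java
// Illustration of the "typed locals" point above using boxing as an analogy:
// casts through Object defeat scalar optimizations much as they would defeat
// Vector-type fusion. Hypothetical example, not from any library.
public class TypedLocals {
    // Every element is cast from Object and unboxed on each iteration.
    static long sumBoxed(Object[] xs) {
        long total = 0;
        for (Object x : xs) {
            total += (Long) x;   // cast + unbox per element
        }
        return total;
    }

    // Same loop with a concretely typed local: the JIT sees longs throughout.
    static long sumPrimitive(long[] xs) {
        long total = 0;
        for (long x : xs) {
            total += x;
        }
        return total;
    }

    public static void main(String[] args) {
        Object[] boxed = new Object[1000];
        long[] prim = new long[1000];
        for (int i = 0; i < 1000; i++) { boxed[i] = (long) i; prim[i] = i; }
        System.out.println(sumBoxed(boxed));
        System.out.println(sumPrimitive(prim));
    }
}
```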
I wish we could express SIMD crypto routines with the Vector API, but it's not timing-attack safe to do it within a JIT, unless there was a way to tell hotspot not to do timing-unsafe xforms within a region of code
to be honest I don’t understand how it’s possible to write timing-sensitive code on the JVM at all ¯\_(ツ)_/¯
this is a pretty cool talk by the guy who did the ECC implementation in javax.crypto where he talks about timing-dependence etc: https://www.youtube.com/watch?v=5kj_GT6qvYI
> This relates to my comment that we need a way for the Vector runtime to "crack" the lambdas passed to HOF API points like Vector.reduce. If we had the equivalent of C# expression trees, we could treat chains of vector ops as queries to be optimized, when executing a terminal operation (such as Vector.intoArray or hypothetical Vector.collect). A vector expression could be cooked into some kind of IR, and then instruction-selected into AVX code.
Even without using inline classes, I think it might be worth experimenting, at least for Clojure vectors, with trees that have no PersistentVector$Node objects, only Object arrays. It seems there are twice as many levels of indirection as there need to be.
would that make transients harder?
I do not believe so. You would still need the 'edit' fields, but they could be tucked away in an extra array element of the Object arrays, at a fixed index, e.g. index 32.
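The layout being floated above could be sketched roughly like this (an illustration of the idea, assuming a hypothetical trie whose interior nodes are plain `Object[]` arrays; this is not Clojure's actual `PersistentVector` code):

```java
// Sketch: a persistent-vector-style trie where children are plain Object[]
// arrays instead of Node objects that each wrap a separate Object[],
// so lookup follows one pointer per level instead of two.
public class ArrayTrie {
    static final int BITS = 5;          // 32-way branching, as in PersistentVector
    static final int WIDTH = 1 << BITS; // 32

    // Look up element i in a tree of the given shift (levels * BITS).
    static Object lookup(Object[] root, int shift, int i) {
        Object[] node = root;
        for (int level = shift; level > 0; level -= BITS) {
            node = (Object[]) node[(i >>> level) & (WIDTH - 1)];
        }
        return node[i & (WIDTH - 1)];
    }

    public static void main(String[] args) {
        // Two-level tree holding 64 elements: root -> two leaf arrays.
        Object[] leaf0 = new Object[WIDTH], leaf1 = new Object[WIDTH];
        for (int i = 0; i < WIDTH; i++) { leaf0[i] = i; leaf1[i] = WIDTH + i; }
        Object[] root = new Object[WIDTH];
        root[0] = leaf0;
        root[1] = leaf1;
        System.out.println(lookup(root, BITS, 40));
    }
}
```

The 'edit' field for transients could live at an extra fixed index (e.g. 32) of each array, as suggested above.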
isn't the length-32 thing special for cache compatibility?
I may do an experiment with this starting from core.rrb-vector's implementation, to see whether it gains any performance.
12 to 16 bytes of Object header at the beginning, plus 32*4 bytes for the 32-element Object array elements themselves, doesn't fit into any cache line size I have seen (32 or 64 bytes are common?)
¯\_(ツ)_/¯
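A back-of-envelope check of the sizes in the message above (assumed values: 12-16 byte array header, 4-byte compressed references):

```java
// Rough arithmetic for the size of a 32-element Object[] on a 64-bit JVM
// with compressed oops. Assumed header sizes; actual layout varies by JVM.
public class ArraySize {
    public static void main(String[] args) {
        int headerLow = 12, headerHigh = 16;  // typical array header range
        int refBytes = 4;                     // compressed references
        int elems = 32;
        int low = headerLow + elems * refBytes;
        int high = headerHigh + elems * refBytes;
        System.out.println(low + " to " + high + " bytes");
        // Neither fits a 64-byte cache line; a 32-element Object[] spans
        // roughly 3 lines, so each tree level touches more than one line.
    }
}
```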
It's an experiment thing, just based on a hunch that following 2 arbitrary pointers per tree level is likely more expensive in the common case than following 1, with the changes I have in mind. It probably will not actually improve things by a 2-to-1 factor in the common case, e.g. small arrays.
The Fingerhut Conjecture
Exactly! I will resist the urge to store data in NaN's 🙂
that's terrible
I almost wish to apologize for infecting your brain with that word.
I don't endorse any of this
I am pretty sure 32 was a good tradeoff choice - larger would reduce lookup times, but at the cost of increasing assoc/add times.
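That tradeoff can be modeled crudely (illustrative numbers only, ignoring constant factors and caching): lookup cost grows with tree depth, roughly log base b of n, while assoc must path-copy one b-element array per level.

```java
// Rough model of the branching-factor tradeoff described above.
// Depth drives lookup cost; depth * width approximates refs copied per assoc.
public class BranchingTradeoff {
    static int depth(long n, int b) {
        int d = 1;
        long cap = b;
        while (cap < n) { cap *= b; d++; }
        return d;
    }

    public static void main(String[] args) {
        long n = 1_000_000;
        for (int b : new int[] {2, 32, 1024}) {
            int d = depth(n, b);
            // assoc copies one b-wide array per level: roughly d * b refs
            System.out.println("b=" + b + " depth=" + d + " copy=" + (d * b));
        }
    }
}
```

At a million elements, b=2 gives depth 20 (copy ~40 refs), b=32 gives depth 4 (copy ~128 refs), and b=1024 gives depth 2 but copies ~2048 refs per assoc, which is why 32 sits in a sweet spot.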