#clojure-dev
2019-02-26
borkdude09:02:24

OK, with these changes: https://github.com/clojure/clojure/compare/master...borkdude:map-optimizations And this benchmark:

clj -Sdeps '{:deps {org.clojure/clojure {:mvn/version "1.11.0-master-SNAPSHOT"} criterium/criterium {:mvn/version "RELEASE"}}}'
user=> (use 'criterium.core)
nil
user=> (defn merge-benchmark [size]
         (let [m1 (zipmap (range size) (range size))
               m2 (zipmap (range size (* 2 size)) (range size (* 2 size)))]
           (println (type m1) (type m2))
           (quick-bench (merge m1 m2))))
#'user/merge-benchmark
It starts to pay off with maps of more than ~64 kv-pairs. Performance is roughly the same between 16 and 64 elements, and maybe a tad slower below 16 elements.

borkdude09:02:33

so for a 2000-element map it would mean a speedup of 35%

mpenet09:02:36

small maps are super common though, e.g. in Ring

borkdude09:02:25

Yeah. Maybe doing a count on the seq before applying the optimization would make sense?
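As a sketch of that idea (the 64-entry threshold is taken from the benchmark numbers above, and the name `merge2` is illustrative, not part of clojure.core):

```clojure
;; Hypothetical size-based dispatch: pick the merge strategy by counting
;; both maps first. Counting a persistent map is O(1).
(defn merge2 [m1 m2]
  (if (and (> (count m1) 64) (> (count m2) 64))
    ;; big maps: batch all updates through a single transient
    (persistent! (reduce-kv assoc! (transient m1) m2))
    ;; small maps: the plain persistent path (conj merges map entries)
    (conj m1 m2)))
```

Entries of `m2` win on conflicts, matching `clojure.core/merge`.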

borkdude09:02:05

Or we could just decide that this is not worth optimizing; chances are that Rich has already thought about this case.

borkdude09:02:40

For PersistentArrayMap the override could be removed, since those are always small maps.
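That matches how array maps behave at the REPL (assuming current Clojure's promotion threshold of 8 entries):

```clojure
;; Array maps are only used while small: past the threshold
;; (8 entries in current Clojure), map operations return a hash map.
(def small-m (zipmap (range 8) (range 8)))
(def big-m   (zipmap (range 9) (range 9)))
(type small-m) ;; => clojure.lang.PersistentArrayMap
(type big-m)   ;; => clojure.lang.PersistentHashMap
```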

borkdude10:02:15

The solution with counting up front makes it slower in all cases, so the trade-off is optimizing for big maps (> 64 entries) versus smaller ones (< 64). Since small maps are very common in Clojure, the optimization is probably off the table.

borkdude10:02:54

People using big maps can probably write their own transient merge.
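A minimal sketch of such a transient merge (the name `transient-merge` is hypothetical; it mimics `clojure.core/merge`'s semantics, including rightmost-map precedence and nil handling):

```clojure
;; Sketch of a transient-based merge for large maps. The name
;; transient-merge is hypothetical; it is not part of clojure.core.
(defn transient-merge
  "Like clojure.core/merge, but batches all updates through a single
  transient map instead of assoc'ing a new persistent map per entry."
  [& maps]
  (when (some identity maps)
    (persistent!
     (reduce (fn [acc m]
               (if m (reduce-kv assoc! acc m) acc))
             (transient (or (first maps) {}))
             (rest maps)))))

;; Later maps win on key conflicts, as with merge:
(transient-merge {:a 1} {:b 2} {:a 3}) ;; => {:a 3, :b 2}
```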