2017-12-13
given a vector v and an int n, what is the fastest way to drop the last n elements of v?
i.e. v=[1,2,3,4], n=1 -> [1,2,3]
you could benchmark (into [] (subvec v ...))
vs (nth (iterate pop v) n)
I wonder if vectors are smart about using subvecs via structural sharing
https://clojuredocs.org/clojure.core/subvec <-- O(1) time according to docs
right what I am asking is how expensive making a vector out of a subvec is
that may or may not be faster than making a vector out of a list
I thought subvec returned a vector. If so, where is the problem of "how expensive making a vector out of a subvec is"?
it doesn’t
oh wait, it isn’t a vector, but might as well be one, never mind!
Nothing ready AFAIK, and I don't know how fast (apply vector (drop-last 1 [1 2 3 4])) would be.
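As a rough sketch of the subvec approach discussed above (untested; subvec is documented as O(1), and drop-last-n is a made-up name):
(defn drop-last-n
  "Drops the last n elements of vector v, staying a vector (view)."
  [v n]
  (subvec v 0 (max 0 (- (count v) n))))

(drop-last-n [1 2 3 4] 1)
;; => [1 2 3]

;; if a fresh (non-view) vector is wanted:
(into [] (subvec [1 2 3 4] 0 3))
;; => [1 2 3]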
hello guys
what is the best way to handle deeply nested maps in Clojure, affecting only the innermost map? For example:
{:t1 {:f1 {:l1 {:name :hello}}} :t2 {:fff2 {:ll2 {:name :world}}}}
@abdullahibra did you have a look at https://github.com/nathanmarz/specter already?
what I want to achieve is to update the innermost map without affecting the others
(assoc-in map [:t1 :f1 :l1 :name] "goodbye")
@dabra that's true if the keys are equivalent
but they aren't
are transducers helpful here?
what are you trying to change?
the innermost map only, without affecting the outer structure
clojure.walk/postwalk should do it
recursive traversal of the tree
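A minimal clojure.walk/postwalk sketch along those lines (:goodbye is just an illustrative replacement value):
(require '[clojure.walk :as walk])

(def m {:t1 {:f1 {:l1 {:name :hello}}} :t2 {:fff2 {:ll2 {:name :world}}}})

;; rewrite only the innermost maps, i.e. the ones that carry a :name key
(walk/postwalk
  (fn [x]
    (if (and (map? x) (contains? x :name))
      (assoc x :name :goodbye)
      x))
  m)
;; => {:t1 {:f1 {:l1 {:name :goodbye}}} :t2 {:fff2 {:ll2 {:name :goodbye}}}}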
okay good
thanks guys
If you have more cases like this, I would still like to encourage you to check out specter. But of course it might also be that you prefer to keep going with vanilla Clojure! 👍
@nblumoe thanks, specter seems very cool, and I guess it solves my issue with handling inner data structures. Is there any connection between specter and transducers?
haha, lol. I didn’t see that there was a reference to specter already just a few lines above. 😄 Wow, I was confused 😄
@nblumoe can you help with a specter question?
(s/select [MAP-VALS MAP-VALS MAP-VALS :title (filterer (fn [s] (empty? s)))] res), I'm trying to select the hash maps which have a non-empty :title
i got this: IllegalArgumentException Don't know how to create ISeq from: java.lang.Character clojure.lang.RT.seqFrom (RT.java:542)
oh got it
(s/select [MAP-VALS MAP-VALS MAP-VALS :title (fn [s] (empty? s))] res)
without filterer
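Since the stated goal was non-empty titles, a sketch in that direction might use (complement empty?) instead, or selected? if the surrounding maps rather than the title strings are wanted (untested, assuming s aliases com.rpl.specter):
;; keep only the non-empty title strings
(s/select [MAP-VALS MAP-VALS MAP-VALS :title (complement empty?)] res)

;; keep the innermost maps whose :title is non-empty
(s/select [MAP-VALS MAP-VALS MAP-VALS (s/selected? :title (complement empty?))] res)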
@nblumoe thanks for pointing me to this nice lib
another question
(s/transform [MAP-VALS MAP-VALS MAP-VALS (fn [h] ((comp not nil?) (re-find #"wow" (:title h))))] identity res) why do I not get just the titles with the "wow" string? I got all values instead
I only want to keep those which have the "wow" string
is there a good way to measure how many threads are calling a function at the same time?
i'm now abusing java.util.concurrent.Semaphore with more permits than i will ever need, does anyone know a better way?
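Not sure it's better than the semaphore, but a minimal sketch with a plain atom that tracks current and peak concurrent callers (all names made up):
(def in-flight (atom 0))
(def peak (atom 0))

(defn with-concurrency-count
  "Wraps a call to f, counting how many callers are inside it right now."
  [f & args]
  (let [n (swap! in-flight inc)]
    (swap! peak max n)                  ; remember the highest concurrency observed
    (try
      (apply f args)
      (finally
        (swap! in-flight dec)))))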
I notice that https://clojure.org/guides/getting_started uses an unversioned install script, this is problematic for reproducible builds. Is there an alternative url?
Looks like the expectation is that the linux install script will not be used by packages. But the contents will be replicated into a build step. Got it.
It is versioned, and there is actually a versioned script in the same location as well. I would like it to be used by packages if possible rather than replicating stuff. If you’re working on something, please let me know.
@abdullahibra what's the input data for that code?
i'm in another situation now: if I have a set of hash maps like this: [{:s1 "cool" :p1 [{:name "hello"} {:name "world"}]}, {:s2 "cool2" :p1 [{:name "wow"} {:name "world"}]}]
how can I get the full hash maps which contain "wow" at this path: [MAP-VALS :p1 :name]?
so in this case I should get: [{:s2 "cool2" :p1 [{:name "wow"} {:name "world"}]}]
i think this is what you're looking for: (select [ALL (selected? :p1 ALL :name (pred= "wow"))] data)
@nathanmarz very good thanks 🙂
and thank you for specter
@abdullahibra sure thing, feel free to jump into the #specter channel if you have more questions
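For completeness, the suggested path applied to the data above (treat as a sketch; s is assumed to alias com.rpl.specter):
(def data
  [{:s1 "cool"  :p1 [{:name "hello"} {:name "world"}]}
   {:s2 "cool2" :p1 [{:name "wow"}   {:name "world"}]}])

(s/select [s/ALL (s/selected? :p1 s/ALL :name (s/pred= "wow"))] data)
;; => [{:s2 "cool2", :p1 [{:name "wow"} {:name "world"}]}]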
Caused by: java.lang.OutOfMemoryError: Java heap space
at java.util.Arrays.copyOf(Arrays.java:3332)
at java.lang.AbstractStringBuilder.ensureCapacityInternal(AbstractStringBuilder.java:124)
Running into this ^ in a project where I previously never had issues with heap space. The stacktrace seems to indicate that I’m trying to print an infinite seq or something like that, but there’s nothing in the stacktrace telling me where it’s coming from. Any suggestions on how I could debug this? (full stacktrace: http://sprunge.us/UUGN)
@martinklepsch I'd be tempted to say to look into a sudden growth in size of some external data dependency, assuming your code depends on one and you didn't change anything
then you can also reduce the size of the heap, enable heap dumps, and be ready to crawl through the logs, but the few times I had to deal with that, correlating with metrics/commits from when it started was enough
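As a concrete example of the "smaller heap plus heap dumps" step, assuming a Leiningen project (the flags themselves are standard HotSpot options):
;; project.clj
:jvm-opts ["-Xmx512m"                        ; deliberately small heap so the OOM reproduces faster
           "-XX:+HeapDumpOnOutOfMemoryError" ; write an .hprof when the OOM hits
           "-XX:HeapDumpPath=./dumps"]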
To my surprise I can’t find any commit that does not produce an OOM exception. 😞 I did take a heap dump and there are about 1.1M maps with dates and all kinds of things so these seem to be retained somehow, either because they are printed as the stacktrace seems to indicate or because I’m somehow holding on to the head of a sequence. Do you have any recommendations to figure out where this is happening?
I’m really stunned that this issue seems to appear even with older commits
@martinklepsch do you have access to a memory profiler (like yourkit?) they can often perform memory retention analysis. "1.4mil dates, held by these two references..."
I have no idea what happened but the issue seems to just have gone away, what the heck? 😳
Is there a variant of partition where, instead of asking for partitions of size N, I can ask for N partitions?
@cjsauer Not that I'm aware of, you'll have to make it yourself. It's presumably not included because in order to have exactly N partitions of roughly equal size, you have to know the size of the entire collection ahead of time, which means it's inherently unable to be lazy
the reason why it's not in core is that your function needs to know the length of the collection
well, what @tanzoniteblack said :)
@bronsa @tanzoniteblack thanks guys. Ended up here:
;; adapted from
(defn chunkify
  [values n]
  (let [cnt (count values)
        [k m] [(quot cnt n) (rem cnt n)]]
    (for [i (range n)]
      (subvec values
              (+ (* i k) (min i m))
              (+ (* (inc i) k) (min (inc i) m))))))
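If I'm reading the arithmetic right, that yields exactly n chunks whose sizes differ by at most one, e.g.:
(chunkify [1 2 3 4 5] 3)
;; => ([1 2] [3 4] [5])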
wait, am I missing something, why concat?
oh, if I’m not mistaken that concat is a noop
something like
(defn partition-n-groups [n coll]
(partition-all (int (/ (count coll) n)) coll))
@tanzoniteblack counterexample:
(partition-n-groups 3 [1 2 3 4])
yeah, I think you want to round up instead of down
agreed
(defn partition-n-groups [n coll]
(partition-all (Math/ceil (/ (count coll) n)) coll))
it’s still off
(ins)user=> (defn partition-n-groups [n coll]
(partition-all (Math/ceil (/ (count coll) n)) coll))
#'user/partition-n-groups
(ins)user=> (partition-n-groups 3 [1 2 3 4])
((1 2) (3 4))
@cjsauer Well what should it return? The last element dropped, or the last element merged into the last collection?
As far as grouping goes, I'm not too picky. It's fine if an element ends up in any grouping.
@cjsauer are you trying to guarantee that you 1.) don't drop any elements and 2.) have exactly n groups in the output?
@tanzoniteblack exactly that. I'm specifically dealing with n = 3, and there are guaranteed to be >= 3 elements in the input coll
@cjsauer interesting problem - I think this solves it
(defn partition-n
[n coll]
(let [c (count coll)
r (rem c n)
q (quot c n)
N (+ q (if (zero? r) 0 1))]
(partition-all N coll)))
perhaps N should be (if (zero? r) q (inc q))
(same result)
Can I have a spec conformer for a map that will cherry-pick required keys? Something like:
(s/def ::bid (s/merge (s/keys :req-un [::title ::value])
(s/map-of #{::title ::value} any?)))
(s/def ::safe-bid
(s/and
(s/conformer #(select-keys % #{::title ::value}))
::bid))
(s/conform ::safe-bid {:title "reveal" :value 0.1 :extra "foo"})
That would return: `=> {:title "reveal" :value 0.1}`
@noisesmith ah that's much cleaner
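Back on the spec conformer question: one wrinkle is that with :req-un the map's keys are unqualified, so the select-keys call probably wants the unqualified keys too. A simplified sketch that conforms as hoped, assuming ::title and ::value specs exist:
(s/def ::title string?)
(s/def ::value number?)
(s/def ::bid (s/keys :req-un [::title ::value]))

(s/def ::safe-bid
  (s/and (s/conformer #(select-keys % [:title :value])) ; unqualified keys to match :req-un
         ::bid))

(s/conform ::safe-bid {:title "reveal" :value 0.1 :extra "foo"})
;; => {:title "reveal", :value 0.1}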
could maybe use longer names, but hey it’s so mathy maybe one letter names are OK haha
@noisesmith hmm...I'm getting:
=> (partition-n 3 [1 2 3 4])
((1 2) (3 4))
clearly I didn’t try enough test cases
@cjsauer aha - there’s no way for partition-all to return a collection of length 3 if given a collection of length 4
I bet there’s still an elegant solution though…
what is the python version?
@tanzoniteblack hm, not sure
Is there a Clojure DSL for specifying parallel computations on tensors that: on Desktop, compiles to CUDA/OpenCL and on CLJS, compiles to WebAssembly? 🙂
"parallel Tensor Ops" seems like it should be high enough level that it's possible to efficiently target all those platforms
Are there clojure x videos up anywhere?
Thanks for the reminder - I haven't had a chance to watch them yet and was looking for something to watch later 🙂
Thanks!
@cjsauer I made a sequence version of n-partitions, by re-using the math from the python/vector version. I wouldn't really call it an improvement in clarity though.
(defn n-partitions
[n coll]
(let [cnt (count coll)
[q r] [(quot cnt n) (rem cnt n)]
partition-size (fn [i]
(- (+ (* (inc i) q) (min (inc i) r))
(+ (* i q) (min i r))))]
(->> (range n)
(reductions (fn [[_ more] i]
(split-at (partition-size i) more))
[nil coll])
(rest)
(map first))))
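Against the earlier counterexample this one does produce exactly n groups (if I've traced it correctly):
(n-partitions 3 [1 2 3 4])
;; => ((1 2) (3) (4))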
sad :( at least in the end i ended up refactoring the code to use simple functions instead, because multimethods weren't the correct approach
I found a weird thing with some. I rewrote it using reduce and found it was much faster:
(defn find-first
[pred vals]
(reduce
(fn [_ v]
(when (pred v)
(reduced v)))
nil
vals))
(time (some identity (repeat 10000000 nil))) ;; 250 ms
(time (find-first identity (repeat 10000000 nil))) ;; 40 ms
@borkdude Yeah, some uses first/next which isn't all that fast. Reduce is usually faster. I see a 2x factor
(time (some #(when (> ^long % 10000000) %) (range))) ;; 558 ms
(time (find-first #(> ^long % 10000000) (range))) ;; 95 ms
re: find-first transducer version: https://github.com/weavejester/medley/blob/1.0.0/src/medley/core.cljc#L6
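For reference, a rough transducer-flavoured sketch of the same idea, built from transduce and reduced (not medley's exact code, and only lightly checked):
(defn find-first-xf
  "Returns the first element of coll matching pred, or nil."
  [pred coll]
  (transduce (filter pred)
             (completing (fn [_ v] (reduced v)))
             nil
             coll))

(find-first-xf #(> ^long % 10000000) (range))
;; => 10000001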
Is there a Clojure killer application? For example, Scala has the renowned Akka.
Clojure is the killer application. It sells itself; you don't really need anything else. For example, since you mentioned Akka, you'd probably be interested in core.async, though it's just a part of Clojure.
In terms of things made in Clojure to be used as frameworks from other languages, I guess @U5LPUJ7AP pointed to some: Datomic and Onyx. I'll add Apache Storm, Overtone, Cascalog, Riemann, Puppet, Metabase, Alda, Transit, Datascript and Quil to that list.
I can't think of anything that would cause people to switch to Clojure just to use it...
(mind you, Akka and Play are both probably used more by Java devs now?)
(fn [v n] [(take n v) (drop n v)]) ^-- is there a builtin for this? to 'split' a vector at a specified index
we have Datomic and Onyx, and of course ClojureScript, which has the obvious advantage of allowing you to avoid JS 😛
also you could use subvec if you want a faster op that retains vectorness
user=> (let [v [1 2 3 4 5 6]] [(subvec v 0 3) (subvec v 3 (count v))])
[[1 2 3] [4 5 6]]
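For the lazy-seq case there is also clojure.core/split-at, which returns exactly the [(take n coll) (drop n coll)] pair; a vector-preserving wrapper over the subvec version might look like this (splitv is a made-up name):
(split-at 3 [1 2 3 4 5 6])
;; => [(1 2 3) (4 5 6)]

(defn splitv
  "Splits vector v at index n, keeping both halves as vectors."
  [v n]
  [(subvec v 0 n) (subvec v n)])

(splitv [1 2 3 4 5 6] 3)
;; => [[1 2 3] [4 5 6]]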