This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2018-06-12
Channels
- # beginners (36)
- # boot (11)
- # cider (10)
- # cljs-dev (10)
- # cljsrn (3)
- # clojure (103)
- # clojure-greece (1)
- # clojure-italy (16)
- # clojure-nl (3)
- # clojure-spec (59)
- # clojure-uk (129)
- # clojurescript (125)
- # data-science (29)
- # datomic (30)
- # emacs (12)
- # events (5)
- # fulcro (61)
- # graphql (5)
- # keechma (3)
- # leiningen (9)
- # luminus (7)
- # onyx (26)
- # re-frame (3)
- # reagent (56)
- # reitit (25)
- # ring-swagger (16)
- # shadow-cljs (44)
- # spacemacs (4)
- # specter (2)
- # tools-deps (7)
- # vim (8)
anyone using re-frame? Is there a way to turn off the re-frame.core/debug interceptor in production?
jdbc question: if I use jdbc/query to execute a postgres SELECT … FOR UPDATE query within a jdbc/with-db-transaction, will I have locks on the selected rows for the remainder of the transaction? Or do I need to use something other than jdbc/query? The expected behavior isn’t clear to me from the docs.
@nick652, afaik the behavior should be the same as in a psql session wrapped in "BEGIN TRANSACTION ... COMMIT"
Okay thanks. So you wouldn’t expect that I need to use jdbc/execute! rather than jdbc/query for the SELECT...FOR UPDATE query within that transaction?
according to the docstring, execute! is for INSERT, UPDATE etc., not SELECT: >>> perform a general (non-select) SQL operation
but it's a good idea to verify this from the repl to familiarize yourself with how postgres transactions and jdbc work
Yeah that part is clear. My uncertainty is because the postgres docs suggest that from a concurrency standpoint, SELECT FOR UPDATE acts more like an UPDATE than it does a SELECT. Although that’s an implementation detail of postgres, it’s not clear to me if jdbc/execute! is performing some sort of setup required to enable the UPDATE-like concurrency aspects of the query.
In any case, you’re right that writing tests is necessary on my end to confirm. Just helps to have some idea of what to expect going in.
From the perspective of Clojure's JDBC, the important thing is whether a result set is returned or not. Execute just returns the number of affected rows, not the actual result set. Even a "UPDATE .. WHERE x = y RETURNING *" would be more appropriate to do using jdbc/query
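A minimal sketch of the point above, using the clojure.java.jdbc 0.7.x API; the `db-spec` value and the `accounts` table/column names are made-up placeholders:

```clojure
(require '[clojure.java.jdbc :as jdbc])

(jdbc/with-db-transaction [tx db-spec]
  ;; jdbc/query is fine for SELECT ... FOR UPDATE: it returns the result
  ;; set, and postgres holds the row locks for the transaction `tx`
  ;; until commit/rollback.
  (let [rows (jdbc/query tx ["SELECT * FROM accounts WHERE id = ? FOR UPDATE" 42])]
    ;; other transactions trying to lock the same rows block here
    (jdbc/update! tx :accounts {:balance 0} ["id = ?" 42])
    rows))
```

jdbc/execute! would run the same SQL but discard the result set, which is why jdbc/query is the better fit even for this UPDATE-like SELECT.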
@U0M8Y3G6N that clears it up completely, thanks!
Hi! In my system I shell out to the libreoffice executable to generate a PDF from a Word file. Occasionally it (libreoffice) hangs or chokes on the docx passed in. In such a case, my threads fill up, and at some point my uberjar process halts. Is there a way to time out the shell call, and when it does, stop the thread? What would be idiomatic?
You could try derefing the future with a timeout. If it times out, future-cancel the future
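A minimal sketch of that suggestion; the 30-second timeout and the libreoffice arguments are made-up placeholders. Note that future-cancel interrupts the thread running the future, but (as discussed below) it will not necessarily kill the spawned libreoffice OS process:

```clojure
(require '[clojure.java.shell :as sh])

(let [f (future (sh/sh "libreoffice" "--headless" "--convert-to" "pdf" "in.docx"))
      ;; deref with a timeout: returns ::timeout if nothing arrives in 30s
      result (deref f 30000 ::timeout)]
  (when (= result ::timeout)
    ;; frees the thread, but the child OS process may live on
    (future-cancel f))
  result)
```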
ie. if the passed-in function hangs forever… are resources being freed up? I haven’t got enough knowledge of the runtime internals, I’m afraid…
Hmm.. Not sure if core async will clean up the parked thread in jvm
My feeling is you may need to isolate each shell call on its own thread instead of a pool, and kill the thread on timeout.
I'm not sure if this can be done using a thread pool. Actually, now I'm not sure if future-cancel can terminate the shell.. Would have to test
Make a thread or future shell out a long sleep command, then future cancel it?
Then ps and see if it kills the sleep
Otherwise your LibreOffice process won't get killed and you risk running out of ram?
A hackier workaround outside the box is to have a cron job regularly look for stuck LibreOffice process and kill them.. Haha
i was even thinking about moving that call to a different server, and make some sort of http-call in between them
I am going to wrap the libreoffice call in a .sh script and call that instead from clojure. In that shell script, I use GNU Coreutils timeout: http://www.gnu.org/software/coreutils/manual/html_node/timeout-invocation.html
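A sketch of such a wrapper script; the libreoffice flags, the 60s limit, and the positional arguments are illustrative assumptions:

```sh
#!/bin/sh
# Run the conversion under coreutils timeout: send TERM after 60s,
# and KILL 10s later if the process is still alive.
timeout --kill-after=10s 60s \
  libreoffice --headless --convert-to pdf "$1" --outdir "$2"
# timeout exits with status 124 when the command was killed for timing out
```

Checking for exit status 124 in the calling Clojure code distinguishes a hung conversion from an ordinary failure.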
Hey there. Can anyone point out some resources on how to integrate compojure with ring swagger?
https://github.com/metosin/compojure-api does that. Or did you mean integrating ring-swagger into existing compojure codebase? There’s #ring-swagger where you might find help.
Does anybody know where the conj is this year?
Yeah I am trying to figure out how to do it without compojure api cause I have got some legacy routes... but would you suggest wrapping everything with compojure-api?
@gfredericks we are finalizing contracts now and hope to announce soon
@andreasp1994 AFAIK compojure-api supports vanilla compojure out of box so starting a new project with c-api and migrating the legacy routes there could work.
So if I just remove the compojure dependencies and add compojure-api, everything will be supported?
@andreasp1994 mostly, yes. Compojure-api just wraps Compojure, so the underlying routing engine is the same.
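A minimal sketch of mixing the two, assuming metosin/compojure-api 1.1.x is on the classpath; the routes and paths are made up for illustration:

```clojure
(require '[compojure.api.sweet :as sweet]
         '[compojure.core :as c]
         '[ring.util.http-response :refer [ok]])

(def legacy-routes
  ;; existing vanilla compojure handlers keep working unchanged
  (c/routes
    (c/GET "/old" [] "legacy")))

(def app
  (sweet/api
    {:swagger {:ui "/api-docs" :spec "/swagger.json"}}
    ;; documented route: appears in the generated swagger spec
    (sweet/GET "/ping" [] (ok {:pong true}))
    ;; legacy compojure routes mounted alongside (undocumented)
    legacy-routes))
```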
Hey, I am trying out compojure-api but I am getting this error when doing a POST request: `"title": "(not (instance? java.lang.String a-java.lang.Class))"` Any ideas what this means?
Oh alright sorry
I am following the compojure-api examples from here https://github.com/metosin/compojure-api/tree/master/examples/thingie/src/examples
Looking at pmap more closely, it seems that n is not the degree of parallelism: https://github.com/clojure/clojure/blob/clojure-1.8.0/src/clj/clojure/core.clj#L6735 Instead, pmap launches as many threads (`future`s) as possible to process the elements of coll. What is the purpose of n, then?
the laziness there means n is actually controlling the launch rate
if I read it correctly
no, notice that step uses n, and that n items are dropped from the input when passed to step the first time
so rets is lazy, as each item is forced a future starts for that item
the control of parallelism is hidden in the control of realization of elements of rets
the thing with lazy-seq and cons inside step is kind of odd and I'm not sure why it's constructed like that
For n to be the degree of parallelism, there'd have to be some point where new threads (`future`s) are not launched until n existing ones have been derefed. I don't see that anywhere.
it's hidden in the laziness. (do (map #(future ....) coll) nil)
starts no futures
since there's a map call that creates futures, the forcing of items coming from that call (called rets here), controls the parallelism
it is complicated, and interacts with things like chunked-seqs, but n is sort of the amount of eager parallelism
Let's say n = 3 and coll = (range 9). I'll use <i> to indicate (future (f i)). Then we end up w/ (lazy-seq (cons @<0> (lazy-seq (cons @<1> ... (map deref '(<6> <7> <8>)), correct?
one thing missing here is that the destructure in the args list forces at least one item
the seq coll could be doing a force as well
it's not just derefs that matter here, it's the first realization of an item from ret (even if you are just checking if it's there)
if you have (let [[x & ys] ret] ...) x has been forced even if deref isn't called yet
range is chunked, so if you pmap over range you will get chunk size number of futures
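The chunking effect can be seen without pmap at all: realizing one element of a lazily mapped chunked seq realizes the whole 32-element chunk, so mapping future over a range starts 32 futures at once.

```clojure
;; chunked-seq demo: range produces 32-element chunks, so forcing the
;; first element of the mapped seq runs the function 32 times
(def realized (atom 0))

(first (map (fn [x] (swap! realized inc) x) (range 100)))

@realized ;; => 32
```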
the derefs don't matter at all, the derefs don't cause futures to run or not, they just retrieve the value, so in the analysis of when a futures run, they just don't matter
I don't see that n or chunk size has anything to do w/ how many futures run at a time. Chunk size may affect how many are launched at a time, but not how many are in flight. Is that a correct understanding?
why would number in flight be in any way independent of how many are launched?
the deref call blocks, which is a backpressure on the number in flight
and agreed, this is weird code that probably isn't doing what anyone actually wants from it (and it's hard to even know what it's doing precisely due to the structure)
What I'm saying is that I was expecting n to be the degree of parallelism, i.e. how many threads are actually being used to process the collection. But pmap uses future, which uses Agent/soloExecutor, which is a CachedThreadPool (no limit to the number of threads running simultaneously). I wonder if the intent w/ n is to use the Agent/pooledExecutor instead, as that is a FixedThreadPool with 2 + Runtime.getRuntime().availableProcessors() threads.
what I'm saying is that laziness is controlling the number of threads in flight in pmap, it's not eagerly starting futures across your full collection
and n is affecting the laziness due to its usage inside step, but due to the complex code the relationship is hard to determine
This may be slightly easier to follow
user=> (def n (atom 0))
#'user/n
user=> (defn slow-f [_] (swap! n inc) (Thread/sleep 1000) (swap! n dec))
#'user/slow-f
user=> (doall (pmap slow-f (range 50)))
(30 29 31 23 28 27 26 25 22 24 21 16 13 18 17 20 14 15 11 10 9 4 8 12 19 7 6 5 2 0 1 3 16 17 15 14 13 11 12 10 9 8 6 7 5 0 4 1 3 2)
user=> (doall (pmap slow-f (range 100)))
(29 31 30 28 25 27 26 24 20 23 22 21 19 18 16 17 15 14 11 13 12 10 9 8 7 6 4 3 5 2 31 30 31 27 30 29 28 24 26 25 23 22 21 20 19 16 18 17 15 13 12 14 11 10 9 6 8 7 5 4 3 2 0 1 28 27 29 26 30 31 25 23 22 24 21 19 20 18 17 14 16 15 13 10 12 11 9 8 7 6 3 5 4 2 0 1 0 1 2 3)
user=> (doall (pmap slow-f (into [] (range 50))))
(31 27 29 28 30 26 25 24 22 23 21 19 20 18 17 16 14 15 13 11 12 9 10 8 6 7 4 5 3 2 0 1 15 14 17 13 16 11 12 10 9 8 5 6 4 3 7 1 2 0)
user=> (doall (pmap slow-f (into [] (range 100))))
(30 31 28 27 24 29 25 26 23 22 21 20 19 17 18 16 15 14 13 12 11 9 10 8 7 6 5 0 4 2 3 1 28 26 30 31 25 29 27 23 22 24 20 21 19 18 17 16 15 14 11 13 12 10 8 9 6 7 4 5 0 3 2 1 25 27 16 31 29 26 30 28 24 21 23 22 20 19 15 18 17 13 14 12 11 9 10 8 7 5 6 3 4 5 4 6 3 2 1 0)
The maximum number in-flight is 32.
(which is the chunk size) <--- this is wrong
(the result is the sequence of one less than the in-flight number at each point)
B/c w/ each chunk we block until all the `future`s in the chunk have been derefed, correct?
notice what's happening with (rest s)
on line 6740, it seems like that's being done to keep some number of items ahead of the derefs
If I make slow-f faster, 100ms instead of 1s, then I get more functions in flight.
at first I was worried something was lost from s, but s only exists for forcing purposes
since everything in s is already in the first arg to step
user=> (defn slow-f [_] (swap! n inc) (Thread/sleep 100) (swap! n dec))
#'user/slow-f
user=> (doall (pmap slow-f (into [] (range 100))))
(26 28 29 30 24 31 27 25 21 23 18 22 20 19 16 15 17 13 14 11 12 10 8 9 28 33 36 34 33 35 32 34 22 30 31 29 28 27 25 26 23 24 20 21 19 17 18 16 15 14 13 12 11 10 9 8 7 37 36 26 35 34 32 33 30 29 8 28 31 27 25 26 24 23 22 21 19 18 20 15 17 16 13 14 11 12 10 9 11 10 5 9 6 7 8 4 1 2 0 3)
Notice that I get up to 38 in-flight here.
given that s starts as (drop n coll), n decides how far the launches of new futures (forced out of laziness) run ahead of your derefs of results
in a weird way
Depending on how slow the function is and how large the collection is, I get somewhere between 32 and 38 in-flight (on a machine that reports 8 processors).
that into doesn't prevent chunking, but replacing range with iterate does
user=> (doall (pmap slow-f (take 100 (iterate inc 0))))
(5 3 2 1 0 6 4 6 5 4 3 2 1 0 6 5 4 3 1 2 0 5 1 6 4 3 2 0 6 5 4 3 2 1 0 4 5 0 1 2 6 3 5 4 6 3 2 1 0 6 5 4 3 2 1 0 5 6 2 4 3 2 1 6 6 4 6 5 3 2 6 6 6 4 5 2 3 6 6 3 2 4 5 6 6 6 3 5 6 4 2 6 6 5 6 4 3 2 1 0)
user=> (.. Runtime getRuntime availableProcessors)
4