This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2017-07-04
hi all -- I'm using tools.cli and it's not clear how to pass multiple arguments to an option -- eg if I want an option to become a vector of numbers
nevermind -- ended up taking it as a string and doing this: :parse-fn (fn [input] (into [] (map #(Integer/parseInt (str %)) input)))
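That parse-fn maps over the characters of the string, turning each digit into an int. A rough Java sketch of the more common variant, splitting the option value on commas (class and method names invented for illustration, not part of tools.cli):

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class ParseInts {
    // Take the whole option value as one string and split it into ints.
    static List<Integer> parseInts(String input) {
        return Arrays.stream(input.split(","))
                     .map(String::trim)
                     .map(Integer::parseInt)
                     .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        System.out.println(parseInts("1,2,3")); // [1, 2, 3]
    }
}
```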
This is rather odd, but has anyone experienced an issue where dereferencing agents of collections truncates some of the elements?
remember that send and send-off can return before their action finishes - or even before it starts sometimes
user=> (let [a (agent []) f #(send a conj nil) g (fn [] (count @a))] (dotimes [_ 10000] (f)) (repeatedly 10 g))
(6622 6687 6716 6735 6757 6776 6794 6825 6851 6880)
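The same race can be sketched in plain Java (a sketch, not from the conversation): a single-threaded executor plays the role of the agent's action queue, and submit returns before the action has necessarily run, so an early read undercounts.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class AsyncRace {
    // Submit n increments to a single-threaded queue (agent-like),
    // read the counter immediately, then drain and read again.
    static int runDemo(int n) throws InterruptedException {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        AtomicInteger counter = new AtomicInteger();
        for (int i = 0; i < n; i++) {
            pool.submit(counter::incrementAndGet); // returns before the action runs
        }
        System.out.println("early read: " + counter.get()); // often < n, like the derefs above
        pool.shutdown();
        pool.awaitTermination(30, TimeUnit.SECONDS);
        return counter.get(); // exactly n, once the queue has drained
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("after drain: " + runDemo(10_000));
    }
}
```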
@noisesmith that's got to be it. I think I just got confused because I'm creating and returning the agent from inside a macro. That's where I see the trouble. If I create a def
for the return value and then deref
that it waits just long enough for the last results
there's the function await
Oh...actually I didn't realize await
depends on the :done
key. I'm using an int-map
so no keywords...
what?
there's no magic :done key, await just lines up a lock that won't unlock until every pending action is done, then waits on it
(defn await
  "Blocks the current thread (indefinitely!) until all actions
  dispatched thus far, from this thread or agent, to the agent(s) have
  occurred. Will block on failed agents. Will never return if
  a failed agent is restarted with :clear-actions true."
  {:added "1.0"
   :static true}
  [& agents]
  (io! "await in transaction"
       (when *agent*
         (throw (new Exception "Can't await in agent action")))
       (let [latch (new java.util.concurrent.CountDownLatch (count agents))
             count-down (fn [agent] (. latch (countDown)) agent)]
         (doseq [agent agents]
           (send agent count-down))
         (. latch (await)))))
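The trick is that the count-down is queued *behind* each agent's pending actions, so the latch only opens once every queue has drained to that marker. A minimal Java sketch of the same pattern, with single-threaded executors standing in for agent queues (names invented for illustration):

```java
import java.util.List;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class AwaitLatch {
    // Each single-threaded executor stands in for one agent's action queue.
    static boolean awaitAll() throws InterruptedException {
        List<ExecutorService> agents = List.of(
                Executors.newSingleThreadExecutor(),
                Executors.newSingleThreadExecutor());
        // some work already pending on each queue
        agents.forEach(agent -> agent.submit(() -> { }));

        CountDownLatch latch = new CountDownLatch(agents.size());
        // enqueue the count-down behind the pending work,
        // exactly as await sends count-down to each agent
        agents.forEach(agent -> agent.submit(latch::countDown));
        latch.await(); // unblocks only after every queue reaches its marker
        agents.forEach(ExecutorService::shutdown);
        return true;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("all pending actions done: " + awaitAll());
    }
}
```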
Oh, ok. I was confused by Clojure Docs: https://clojuredocs.org/clojure.core/await
one possibility is that something is calling send after you call await?
it does do something - it waits on all actions that have been submitted before it is called - but you need something different if you want to wait on things pdiff might initiate after the point await is called
also calling def on the output of gensym is super weird - why not a let block?
def always creates namespace level bindings, using def with a gensym is weird because how would you even know which thing to look for later?
let is for locals, so I would expect let instead in a scenario like this
that has nothing to do with it
def creates a new var in your namespace
oh - I see, you create the gensym def once
that guarantees that pdiff-once is a race condition if it's used in two threads though
What’s the purpose of the macro?
that's another good question
The purpose of the macro is to automate three calls in one: creating the agent, calling pdiff, and returning the contents of the agent
if two threads use that macro, the second one will replace the data used by the first
it's extremely unsafe
Okay, but why not:
(defn pdiff-once [poly order]
  (let [tape (agent (i/int-map))]
    (pdiff poly tape order)
    (await tape)
    @tape))
why this:
router=> (if-let [{params :params} {}] (str "params:" params ".") "NOT-defined")
"params:."
@fenton because {} is not falsey.
weavejester: let me try that. I'm not sure it'll work, but I'm relatively new to agents.
(if-let [params (:params {})] ...)
@weavejester : thanks 🙂
With if-let, the expression on the right needs to be nil
or false
to fail
@sophiago You could also use a promise.
It depends what pdiff
looks like, but a promise is more usual.
I assumed an agent was being used because there were multiple alterations
but doing that in a new thread makes using an agent problematic...
The name pdiff-once
also suggests a memoize, but it depends what you’re trying to do.
Could you give us an idea of what pdiff
is doing, @sophiago ?
if await returns, that means those calls weren't even made before await was
pdiff
is parallel diff I assume.
you need some other way to know pdiff is done
you said "await isn't working" which I might have misinterpreted
I said, "I think await
isn't doing anything since it can't know whether pdiff
has finished"
Would it be possible to post the definition of pdiff
as well?
it could, if there was a way to ensure you don't return from pdiff until it makes all its send calls for example
I’m not clear on what pdiff
is doing, or why you’re using an agent. I assume it’s doing something in parallel.
@sophiago do you get your functionality working without all the parallel stuff? Just plain idiomatic single-threaded clojure
Does anyone know how to obtain the Set-Cookie
header from http-kit
, when you pass the Cookie
header in the call as well? E.g.,
(defn visit-url [{:keys [cookies url] :as context}]
  (let [result-chan (chan)
        check-result (fn [{:keys [status] :as response}]
                       ;; TODO: get new cookies here... :/ not visible in response
                       (log/error "RESPONSE" response) ; => no `Set-Cookie`
                       (go (>! result-chan (= status 200))))]
    (http/get url
              {:headers {"Accept" "text/html"
                         "Cookie" cookies} ; cookies is a string of earlier-obtained cookies
               :follow-redirects false}
              check-result)
    result-chan))
question: how to find a dependency a leiningen plugin pulls in at runtime? I've run lein deps :tree
on the most obvious dependencies, and I don't see where a particular version of ring is coming from. I've sprinkled :exclusions
all round. Yet I find the dependency in target/stale/leiningen.core.classpath.extract-native-dependencies
...
(context: trying to upgrade gorilla repl to 1.9, but it barfs on an old version of ring)
happens when I do lein gorilla :port 9000
, yet neither gorilla-repl nor lein-gorilla seem to need it
have you tried deleting target
?
I've deleted all the content yes
lein deps :tree
should work, unless gorilla-repl does something funky
you can issue a global exclusion for ring
hm. the problem being that lein-gorilla does need ring, just not that old one ...
that's ok, you can specify the latest version
exclusion just means "ignore whatever transitive dependencies there are relating to this"
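As a hypothetical sketch (group/artifact ids and version numbers invented for illustration, not taken from gorilla-repl itself), a global exclusion plus a pinned version in project.clj might look like:

```clojure
;; hypothetical project.clj fragment
(defproject my-app "0.1.0-SNAPSHOT"
  :dependencies [[org.clojure/clojure "1.9.0"]
                 [ring "1.6.1"]]   ; pin the ring version you actually want
  :exclusions [ring]               ; global: ignore all transitive deps on ring
  :plugins [[lein-gorilla "0.4.0"]])
```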
thank you, will try that 🙂
to look at plugin deps there's a separate plugin tree command lein deps :plugin-tree
@noisesmith did not know that
@noisesmith nice one, that should help
thank you!
well, it looks like it's something funky alright
(as in it still doesn't show up)
question-- does anyone have a homoiconic datetime solution/approach that they use in their projs?
I use only java.util.Date and js/Date
just everything UTC -> epoch @mpenet ?
with the occasional goog.time and clj-time for formatting
date as number works well as long as you don't need to be super precise with "date math"
not a fan of #object[java.time.LocalDate ...]
seems strange those don’t get read
that would require clojure to require java 8 right?
I guess it could do that conditionally?
would be interesting to have those dump into a constructor/factory format like (java.time.LocalDate. & args)
the more likely option would be what clojure does today with java.util.Date - using a literal representation that the reader accepts
oh rllllly
i did not know that
user=> (java.util.Date.)
#inst "2017-07-04T13:25:33.376-00:00"
user=> #inst "1492-01-10T12:11:44.000-00:00"
#inst "1492-01-10T12:11:44.000-00:00"
Date is a less than great API, but clojure makes it readable
does this #inst
mean https://clojure.github.io/clojure/clojure.instant-api.html?
no, it's how Date objects are printed, it's the instant reader
I confirm the fishiness, was pulling a dependency in the code
so that was fun
this is cool. didnt know about #inst
and #uuid
and stuff
hello everyone, i have 4 heavy database queries that are running sequentially
(benefit-db/transition-benefits-to-ongoing db/db-spec)
(benefit-db/transition-benefits-to-consumed db/db-spec)
(benefit-db/transition-benefits-to-ended db/db-spec)
is there an easy way to run these guys in parallel and do something else when they all finish?
plins: (let [results (map deref [(future trans1) (future trans2) (future trans3)])] do-something-else)
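The future-and-deref pattern above has a direct JVM analogue in CompletableFuture; a hedged Java sketch (the query bodies are stand-ins for the real database calls):

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.stream.Collectors;

public class ParallelQueries {
    static String query(String name) {
        return name + " done"; // stand-in for one heavy database call
    }

    // Start all three in parallel, block until every one has finished,
    // then hand back the results -- the (map deref [(future ...)]) pattern.
    static List<String> runAll() {
        List<CompletableFuture<String>> futures = List.of(
                CompletableFuture.supplyAsync(() -> query("ongoing")),
                CompletableFuture.supplyAsync(() -> query("consumed")),
                CompletableFuture.supplyAsync(() -> query("ended")));
        CompletableFuture.allOf(futures.toArray(new CompletableFuture[0])).join();
        return futures.stream().map(CompletableFuture::join).collect(Collectors.toList());
    }

    public static void main(String[] args) {
        runAll().forEach(System.out::println);
        // ...do-something-else here, once all have finished
    }
}
```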
hey all… for a number of strange reasons I want to create an instance of a class whose name I only have as a string. How would I instantiate one?
tho (.newInstance (Class/forName "foo.bar.baz"))
only seems to have a 0-args sig
ah, you have to use .getDeclaredConstructor
on the Class
object to get the constructor method and then call that
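A minimal Java sketch of that reflection path, using java.lang.StringBuilder as a stand-in for the class name you only have as a string:

```java
import java.lang.reflect.Constructor;

public class ReflectNew {
    // Instantiate a class known only by its string name, passing one
    // String argument -- the case Class.newInstance() can't cover.
    static Object instantiate(String className, String arg) throws Exception {
        Class<?> clazz = Class.forName(className);
        Constructor<?> ctor = clazz.getDeclaredConstructor(String.class);
        return ctor.newInstance(arg);
    }

    public static void main(String[] args) throws Exception {
        Object sb = instantiate("java.lang.StringBuilder", "hello");
        System.out.println(sb); // prints "hello"
    }
}
```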
Do multiple threads just reading from (derefing) an atom that has a constant value block or cause any kind of contention?
that is kind of complicated, they are different things. my intuition would be that vars would be ever so slightly faster before the jit has kicked in, and an atom would be ever so slightly faster after
understood. the story is that we have a system with config data in atoms, which are constant after configuration, and our multicore scaling is bad. the question came up of whether billions of accesses to constant-valued atoms might be causing contention. the alternative we're considering isn't actually to use a var (actually, we'd get rid of the vars that currently hold the atoms), but just to pass all of the config data as an argument throughout the system. it'll be a relatively big job to try this, so i'm trying to figure out if the underlying theory is even true, that reading (derefing) the atom in a var can cause contention among threads.
atoms contain an AtomicReference https://github.com/clojure/clojure/blob/master/src/jvm/clojure/lang/Atom.java#L20 and use the get method to access their value https://docs.oracle.com/javase/7/docs/api/java/util/concurrent/atomic/AtomicReference.html#get()
get on an AtomicReference is the same overhead as reading a volatile, which sources claim is cheap
this should also be easy to microbenchmark though
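A small Java sketch of why the read side is cheap: AtomicReference.get() is just a volatile read, so concurrent readers of a constant value never block each other (thread and iteration counts are arbitrary):

```java
import java.util.concurrent.atomic.AtomicReference;

public class AtomicReadDemo {
    // Hammer a constant-valued AtomicReference from several reader threads.
    // get() is a volatile read -- no lock, so readers don't contend.
    static long readMany(int threads, int readsPerThread) throws InterruptedException {
        AtomicReference<Integer> config = new AtomicReference<>(42);
        long[] sums = new long[threads];
        Thread[] readers = new Thread[threads];
        for (int i = 0; i < threads; i++) {
            final int slot = i;
            readers[i] = new Thread(() -> {
                long sum = 0;
                for (int j = 0; j < readsPerThread; j++) sum += config.get();
                sums[slot] = sum;
            });
            readers[i].start();
        }
        long total = 0;
        for (int i = 0; i < threads; i++) { readers[i].join(); total += sums[i]; }
        return total;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("sum over all readers: " + readMany(8, 1_000_000));
    }
}
```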
if your configs are in global atoms, then you are dereferencing both the var containing the atom and the atom
that's a good point, based on profiling I've done I'd expect the var lookup to be more expensive than the atom deref
for vars vs. atoms, without the jit, my guess is the atoms still have to traverse a few extra pointers for access, but with the jit I expect it is a regular reference that just uses atomic instructions
fwiw the differences are pretty small once the JIT has kicked in
user=> (let [a (atom 42)] (reduce (fn [_ ft] @ft) (repeatedly 10000 #(future @a))))
42
user=> (let [a (atom 42)] (crit/bench (reduce (fn [_ ft] @ft) (repeatedly 10000 #(future @a)))))
Evaluation count : 900 in 60 samples of 15 calls.
Execution time mean : 91.621587 ms
Execution time std-deviation : 11.931053 ms
Execution time lower quantile : 57.841222 ms ( 2.5%)
Execution time upper quantile : 106.129733 ms (97.5%)
Overhead used : 1.534327 ns
Found 4 outliers in 60 samples (6.6667 %)
low-severe 4 (6.6667 %)
Variance from outliers : 80.6424 % Variance is severely inflated by outliers
nil
user=> (def a (atom 42))
#'user/a
user=> (crit/bench (reduce (fn [_ ft] @ft) (repeatedly 10000 #(future @a))))
Evaluation count : 600 in 60 samples of 10 calls.
Execution time mean : 97.601813 ms
Execution time std-deviation : 11.193445 ms
Execution time lower quantile : 73.574920 ms ( 2.5%)
Execution time upper quantile : 116.281827 ms (97.5%)
Overhead used : 1.534327 ns
Found 1 outliers in 60 samples (1.6667 %)
low-severe 1 (1.6667 %)
Variance from outliers : 75.5029 % Variance is severely inflated by outliers
nil
user=> (crit/bench (reduce (fn [_ ft] @ft) (repeatedly 10000 #(future *clojure-version*))))
Evaluation count : 780 in 60 samples of 13 calls.
Execution time mean : 99.271467 ms
Execution time std-deviation : 8.236914 ms
Execution time lower quantile : 78.021114 ms ( 2.5%)
Execution time upper quantile : 111.492377 ms (97.5%)
Overhead used : 1.534327 ns
Found 4 outliers in 60 samples (6.6667 %)
low-severe 1 (1.6667 %)
low-mild 3 (5.0000 %)
Variance from outliers : 61.8161 % Variance is severely inflated by outliers
nil
that's comparing - direct access to atom and deref, access of atom in var and deref, and access to a value in a var
the futures are because there was concern about contention
obviously what you should do is preprocess your config into a primitive array, and generate macros for accessing each config value that expand into agets of the right array slot
all very interesting! point well taken about checking data before optimizing, and we have a lot of work to do on that front
it seems from the discussion here this use of atoms in vars probably isn't producing contention, right?
right- no evidence of contention there at all
fwiw the alternative we're considering would be function args, rather than vars, with the config passed everywhere
I realize I didn't benchmark just having a data literal - checking that now to see if there's any difference worth noticing
but really profiling your own code and looking for the actual perf bottlenecks is the way to go
yeah - replacing @a with a literal 42 (but still reducing over futures etc.) ends up taking the same (actually slightly longer, but within the measurement epsilon so not meaningful)