This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2020-06-12
Channels
- # aleph (1)
- # aws (2)
- # babashka (44)
- # beginners (178)
- # biff (12)
- # calva (22)
- # chlorine-clover (60)
- # cider (1)
- # clj-kondo (9)
- # cljdoc (6)
- # cljs-dev (37)
- # cljss (2)
- # clojure (43)
- # clojure-europe (3)
- # clojure-finland (23)
- # clojure-italy (1)
- # clojure-nl (4)
- # clojure-norway (3)
- # clojure-spec (56)
- # clojure-uk (148)
- # clojuredesign-podcast (1)
- # clojurescript (11)
- # conjure (5)
- # core-async (22)
- # cursive (9)
- # datascript (5)
- # datomic (4)
- # duct (8)
- # emotion-cljs (2)
- # figwheel-main (15)
- # fulcro (53)
- # graalvm (68)
- # helix (2)
- # jackdaw (1)
- # kaocha (9)
- # lambdaisland (1)
- # malli (10)
- # meander (2)
- # news-and-articles (1)
- # observability (12)
- # off-topic (17)
- # pathom (1)
- # pedestal (25)
- # practicalli (1)
- # protojure (4)
- # re-frame (2)
- # reagent (57)
- # reitit (1)
- # releases (2)
- # shadow-cljs (69)
- # specter (6)
- # tools-deps (10)
- # vim (16)
- # vscode (4)
- # yada (3)
Has anyone tried Clojure on the Erlang VM?
@danieltanfh95 You mean this http://clojerl.org/ ?
I have no experience with it myself. I thought there might be a channel here dedicated to it, but there isn't.
I followed https://clojure.org/guides/dev_startup_time but I'm not sure whether it worked. Is there a way to tell when Clojure is loading the .class rather than the .clj?
Heads up -> I had a recent issue with this approach. I AOT compiled a lib at version 0.12, which was pretty old. I then updated the lib to version 0.13, which was newer, but also older than the day I AOT compiled version 0.12. So my deps.edn said 0.13 but my classes folder said 0.12, and it took precedence. Makes sense if you think about what @alexmiller said, but it was still a real head-scratcher.
I was considering writing a lib that would attempt to cache against a set of libs, and then pick on a lib-by-lib basis which namespaces to compile.
I have worked on a couple prototypes for actually building this into clj (or even clojure itself). Rich and I keep coming back to it. eventually we will figure it out. :)
@U7PBP4UVA anytime you update deps is a good time to drop your aot classes cache
@U09LZR36F you need to be careful about order of compilation wrt protocols in particular. compiling from your main ns gives you the right ordering automatically usually. compiling whole/partial libs proactively is tricky (although I think possible if you did a reverse topo-sort)
@alexmiller I have no main ns, all my namespaces are loaded dynamically! What's the problem around protocols?
@alexmiller the funny thing is that you do it once, then forget about it because it’s out of the way :)
well that's why doing it automatically would be nice! :) clj knows when it has to drop the cp cache already
I spent a couple days on this last week and got it about 90% of the way there but there are some non-obvious complexities. set aside for the moment.
I'm really excited for this feature because I have very old devices, and things are pretty slow because of it.
is it faster?
I can't think of a definitive way to actually tell; that logic is all bound up in load so I don't think there's any way to directly invoke or observe it
@alexmiller I haven't measured, but it doesn't feel significantly faster.
the classes should be used if a) they are on the classpath and b) their timestamp is newer than the equivalent clj file
Depending on what you're doing, it may not seem much different.
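The timestamp rule described above can be sketched as a quick check. This is a made-up helper for illustration, not part of Clojure's loader; it just mirrors the condition Clojure applies when deciding between a compiled class and its source:

```clojure
(import '(java.io File))

(defn compiled-fresh?
  "True when the compiled .class file exists and is at least as new
   as its source .clj file -- the condition under which Clojure will
   load the compiled class instead of recompiling the source."
  [src-path class-path]
  (let [src (File. ^String src-path)
        cls (File. ^String class-path)]
    (and (.exists cls)
         (>= (.lastModified cls) (.lastModified src)))))
```

This is also why the stale-AOT-cache problem above happens: an old compiled class can still be "newer" than an even older source file from a previous lib version.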
Is someone else having problems with clojars? I just saw this in my CI:
Downloading: clojure-csv/clojure-csv/2.0.2/clojure-csv-2.0.2.jar from clojars
Error building classpath. Could not transfer artifact ring-middleware-format:ring-middleware-format:jar:0.7.4 from/to clojars ( ): connect timed out
So I have a namespace foo and a test namespace foo-test in an internal utils/stdlib project.
I can run the tests in the project and the project tests pass and build in CI.
When I include this in my main project I get a FileNotFoundException for foo. This internal utils project has been working up until now.
Has anyone seen this before?
Hello! I am trying Datomic in a project. I got this query here:
[:find ?e ?date
:in $ ?user
:where
[?e :period/user ?user]
[?e :period/start ?date]
(not [?e :period/end])]
And I would like to get only the max value found for ?date
I tried:
[:find ?e ?date
:in $ ?user
:where
[?e :period/user ?user]
(max [?e :period/start ?date])
(not [?e :period/end])]
But this does not work... what is the correct way to do it?
Thanks in advance!
I was thinking of something like [(tuple ?date ?e) ?tup]
then finding the max of ?tup
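For what it's worth, in Datomic's Datalog, aggregates such as max belong in the :find clause rather than in :where. A sketch of what the query above might become, under that assumption (untested here; note that also binding ?e in :find would group the max per entity rather than give one overall max):

```clojure
;; aggregate in :find -- with only (max ?date), this yields the single
;; latest :period/start for the given user across open periods
[:find (max ?date)
 :in $ ?user
 :where
 [?e :period/user ?user]
 [?e :period/start ?date]
 (not [?e :period/end])]
```

Recovering the entity that owns that max date would then need a second lookup, or the tuple trick mentioned above.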
I’m trying to write a reasonably sized multiagent sim and someone suggested that I look at the ants demo. A more scaled up version of that does look like what I’m looking for, but I tried upping the sizes and it’s not doing too well. Should I be using the same constructs in a different way? Or would for example, core.async, something event driven or something else be a better fit?
Hmm digging into agents the absence of coordination means I’m really unclear as to how to use these in a sim context, unless I put all of my entities inside a collection wrapped in an agent?
yeah, you could give each agent entity an id then use a map inside a single agent whose state is altered asynchronously.
Should I read that as: give each entity an id and then use a map inside a single Clojure agent whose state is altered asynchronously?
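The "one agent holding a map of entities" idea can be sketched like this. The entity ids and the move function are invented for illustration:

```clojure
;; a single agent holds the whole world as a map keyed by entity id
(def world (agent {:ant-1 {:x 0 :y 0}
                   :ant-2 {:x 5 :y 5}}))

(defn move
  "Pure function from world-state to world-state; the agent applies it."
  [entities id dx dy]
  (-> entities
      (update-in [id :x] + dx)
      (update-in [id :y] + dy)))

(send world move :ant-1 1 0)  ; queued, applied asynchronously
(await world)                 ; block until pending actions complete
(get @world :ant-1)           ;=> {:x 1, :y 0}
```

Actions sent to one agent are serialized, so there is no coordination problem: every update sees the whole world.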
The ants demo as demo’d by Rich was as much a demo of using agents as it is STM: each ant is an agent, and the world is a 2d array of STM refs.
Not sure about the main cause of the scalability issues in that code; most likely the use of send-off, which I suspect means a system thread per ant. Changing that to send would mean you get pooled threads; though I suspect send-off has been used for a reason.
I’m thinking using a pool (via send) might mean there’s a potential for deadlocking on sync.
It seems to be around the thread sleep? Specifically:
(. Thread (sleep ant-sleep-ms))
I’ve not done stuff with threads in a while, and none of it in Clojure 😊; most of the higher-level stuff makes it unnecessary, so I'm not certain…
Reading clojuredocs there’s a bit more of an explanation (http://clojuredocs.org/clojure.core/send-off), fordsfords’s comment is particularly instructive =)…
So send vs send-off is more of a question between should the threadpool provided be fixed or grow?
So altering send-off to send for the behaviour function introduces stuttering in the animation. Not sure that’s the reason Rich chose to use send-off, but it’s there…
The send version doesn’t have the same scaling problem, although we now get a fun conga-line of ants =)…
Reducing ant-sleep-ms to 4 creates a Mexican wave.
I tried popping a scheduler inside behave so that we don’t get that thread-creation overhead.
I.e. replace this:
(. Thread (sleep ant-sleep-ms))
with this:
(import '(java.util.concurrent Executors TimeUnit))
(def scheduler (Executors/newScheduledThreadPool 4))
(defn sleep-for [ms f]
  (.scheduleWithFixedDelay scheduler f 0 ms TimeUnit/MILLISECONDS))
But it’s not quite working…
I think at this point you are already stepping out of the general question frame and going too deep into the model for any of us to help you more, sorry 😞
No problem :)... I'll see if I can figure it out. I mean the underlying STM is there, just need to work out how to use it properly ;)...
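For what it's worth, one way the scheduler idea above could be made to work is to schedule the *next* send rather than sleeping inside the agent action. `schedule-next` is a hypothetical helper, and `behave` stands in for the ants demo's behaviour function:

```clojure
(import '(java.util.concurrent Executors TimeUnit))

(def scheduler (Executors/newScheduledThreadPool 4))

(defn schedule-next
  "Hypothetical helper: re-dispatch `behave` to `ant` after `ms`
   milliseconds, instead of calling Thread/sleep inside the action.
   No agent thread is tied up while waiting."
  [ant behave ms]
  (.schedule scheduler
             ^Runnable #(send ant behave)
             (long ms)
             TimeUnit/MILLISECONDS))
```

The action itself would then end by calling `schedule-next` on its own agent, giving a self-perpetuating loop without blocked threads.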
> It seems to be around the thread sleep? Specifically:
> (. Thread (sleep ant-sleep-ms))
I’ve not run this example for years, but I suspect that’s a symptom rather than a cause. The sleep just blocks the thread and yields to another. What error are you getting? I suspect you’re just running out of memory, as the allocated stack on the JVM is IIRC quite large.
> So send vs send-off is more of a question between should the threadpool provided be fixed or grow?
That’s one differentiator, yes. The other is that if you have a pool with limits (and you’re using send), threads in that pool blocking on I/O aren’t returned, so jobs being submitted to the pool can block. send-off doesn’t suffer this, as threads are truly independent.
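The distinction being drawn here can be sketched on a single agent. The rule of thumb shown in the comments is the standard one: send for CPU-bound actions, send-off for anything that might block:

```clojure
(def counter (agent 0))

;; send: runs on a bounded, shared thread pool -- keep actions
;; non-blocking, or they can starve every other agent using `send`
(send counter inc)

;; send-off: runs on an expanding pool -- safe for blocking work
;; like I/O or sleeps, at the cost of potentially many threads
(send-off counter (fn [n] (Thread/sleep 50) (inc n)))

(await counter)  ; block until both queued actions have run
@counter         ;=> 2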
> So altering send-off to send for the behaviour function introduces stuttering in the animation. Not sure that’s the reason rich chose to use send-off, but it’s there…
I’d expect this; you’re most likely just getting more contention across competing threads. A thread pool will have less contention and greater throughput, as there’s less setup/teardown and a restricted number of contending threads (with the rest queued).
I suspect the reason Rich chose send-off is because agents can block on updates to the STM refs, so send-off prevents starving new work (send-offs) with other agents blocked on ref updates. I guess it might be possible for deadlock to occur in this case, if you were unlucky. send-off should never deadlock, though it suffers from contention through creating too many threads.
For this reason I don’t think it would be wise to mix the STM with core async go block threads.
I think if you want tens of thousands of threads you need to use fibres or delimited continuations, or core.async go blocks… but as I said, I wouldn’t want to mix them with the STM, which is blocking: a single STM transaction could block the thousands of fibres sharing an O/S thread, and if the wrong set were blocked you could possibly deadlock.
I could be wrong though… but that’s my hunch anyway.
FYI: Most folk don’t use the STM, and instead sync updates into a single atom. The STM is more useful (but still not frequently used) when a library provides state; and another library provides state… and you want to transact on them both. This doesn’t normally happen, because in clojure we tend to consider state an application concern; so most libraries avoid assigning vars to hold it; and leave that up to the app. And if you control the whole app; you can just use an atom.
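The "single atom" pattern mentioned above looks something like this. The state keys are made up for illustration:

```clojure
;; all app state in one map, inside one atom
(def app-state (atom {:users {} :config {:theme :dark}}))

;; swap! applies the function atomically; because everything lives in
;; one map, related keys always update together -- no STM needed
(swap! app-state assoc-in [:users "ada"] {:active? true})

(get-in @app-state [:users "ada"]) ;=> {:active? true}
```

The trade-off is exactly as described: this works as long as one place owns all the state, which is the common case when the app (not its libraries) holds state.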
The specific error was:
[258.039s][warning][os,thread] Failed to start thread - pthread_create failed (EAGAIN) for attributes: stacksize: 1024k, guardsize: 4k, detached.
Ok, so perhaps core.async might be a better fit for this then. Thanks @U06HHF230!
Yeah, core.async should let you have large numbers of independent lightweight processes. It will require a different architecture though. Be careful not to inadvertently block any of the go blocks, and communicate with any blocking processes by hiding them behind threads and talking to them with channels.
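A rough sketch of the core.async shape being suggested, assuming org.clojure/core.async is on the classpath; the entity names and channel layout are invented for illustration:

```clojure
(require '[clojure.core.async :as async :refer [go-loop <! >! timeout chan]])

(defn spawn-ant
  "Each ant is a go block. Parking on (timeout ...) hands the thread
   back to the pool, so thousands of ants share a handful of threads --
   unlike one OS thread per agent with send-off."
  [id moves]
  (go-loop [step 0]
    (<! (timeout 10))     ; parks; does NOT block an OS thread
    (>! moves [id step])
    (when (< step 100)
      (recur (inc step)))))

(def moves (chan 1024))

;; a consumer draining the channel, so producers don't park forever
(go-loop []
  (when-let [[id step] (<! moves)]
    ;; apply the move to the world state here
    (recur)))

(doseq [id (range 1000)] (spawn-ant id moves))
```

As noted above, any genuinely blocking work (STM transactions, I/O) should sit behind `async/thread` and talk to the go blocks over channels, never run inside them.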