#clojure
2020-06-12
Daniel Tan04:06:19

has anyone tried clojure on the erlang vm

seancorfield05:06:02

I have no experience with it myself. I thought there might be a channel here dedicated to it, but there isn't.

mpenet07:06:05

there's an Erlang slack (erlanger), I'd guess that's the place to ask

dominicm09:06:14

I followed https://clojure.org/guides/dev_startup_time but I'm not sure if it's worked. Is there a way to know when clojure is reading the .class rather than the .clj?

orestis14:06:46

Heads up -> I had a recent issue with this approach. I AOT compiled a lib at version 0.12, which was pretty old. I then updated the lib to version 0.13, which was newer, but still released before the day I had AOT compiled 0.12. So my deps.edn said 0.13, but my classes folder said 0.12, and the classes took precedence. Makes sense if you think about what @alexmiller said, but it was still a real head scratcher.

dominicm14:06:11

I was considering writing a lib that would attempt to cache against a set of libs, and then pick on a lib-by-lib basis which namespaces to compile.

Alex Miller (Clojure team)15:06:51

I have worked on a couple of prototypes for actually building this into clj (or even Clojure itself). Rich and I keep coming back to it. Eventually we will figure it out. :)

Alex Miller (Clojure team)15:06:23

@U7PBP4UVA anytime you update deps is a good time to drop your aot classes cache

Alex Miller (Clojure team)15:06:33

@U09LZR36F you need to be careful about the order of compilation wrt protocols in particular. Compiling from your main ns usually gives you the right ordering automatically. Compiling whole/partial libs proactively is tricky (although I think possible if you did a reverse topo-sort)
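A minimal sketch of the hazard Alex describes, with hypothetical namespaces collapsed into one file for illustration:

```clojure
;; Imagine this lives in ns app.protocols:
(defprotocol Greet
  (hello [this]))

;; ...and this in ns app.impl, AOT-compiled against app.protocols:
(defrecord Greeter []
  Greet
  (hello [_] "hi"))

;; If app.protocols alone is re-AOT-compiled later, defprotocol emits a
;; fresh Greet interface class, while the stale Greeter class files still
;; reference the old one, producing "No implementation of method" style
;; errors at runtime. Compiling from the main ns recompiles both in
;; dependency order, so the classes stay consistent.
(assert (= "hi" (hello (->Greeter))))
```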

dominicm15:06:28

@alexmiller I have no main ns, all my namespaces are loaded dynamically! What's the problem around protocols?

dominicm15:06:05

I see, interesting.

orestis18:06:56

@alexmiller the funny thing is that you do it once, then forget about it because it’s out of the way :)

Alex Miller (Clojure team)18:06:01

well that's why doing it automatically would be nice! :) clj knows when it has to drop the cp cache already

Alex Miller (Clojure team)18:06:08

I spent a couple days on this last week and got it about 90% of the way there but there are some non-obvious complexities. set aside for the moment.

dominicm19:06:09

Well, in that case I guess I won't do it that way.

dominicm19:06:52

I'm really excited for this feature because I have very old devices, and things are pretty slow because of it.

Alex Miller (Clojure team)11:06:47

I can't think of a definitive way to actually tell; that logic is all bound up in load so I don't think there's any way to directly invoke or observe it

dominicm12:06:03

@alexmiller I haven't measured, but it doesn't feel significantly faster.

Alex Miller (Clojure team)13:06:28

the classes should be used if a) they are on the classpath and b) their timestamp is newer than the equivalent clj file

Alex Miller (Clojure team)13:06:31

depending on what you're doing, it may not seem much different
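There's no definitive flag for this, but as a rough heuristic you can apply the same rule yourself by comparing what's on the classpath. A sketch (the ns path "my/app" is hypothetical; this mirrors the timestamp rule above rather than Clojure's actual load internals):

```clojure
(require '[clojure.java.io :as io])

;; Rough heuristic only: the compiled class should win when it exists on
;; the classpath and is at least as new as the equivalent .clj source.
(defn compiled-will-win? [ns-path]
  (let [src (io/resource (str ns-path ".clj"))
        cls (io/resource (str ns-path "__init.class"))]
    (boolean
     (and cls
          (or (nil? src)
              (>= (.getLastModified (.openConnection cls))
                  (.getLastModified (.openConnection src))))))))

;; clojure.core itself ships AOT-compiled inside the Clojure jar:
(compiled-will-win? "clojure/core")
```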

sveri16:06:52

Is someone else having problems with clojars? I just saw this in my CI:

Downloading: clojure-csv/clojure-csv/2.0.2/clojure-csv-2.0.2.jar from clojars

Error building classpath. Could not transfer artifact ring-middleware-format:ring-middleware-format:jar:0.7.4 from/to clojars (): connect timed out

djtango17:06:45

So I have a namespace foo and a test namespace foo-test in an internal utils/stdlib project. I can run the tests in that project, and they pass and build in CI. When I include it in my main project I get a FileNotFoundException for foo. This internal utils project has been working up until now. Has anyone seen this before?

kishima21:06:53

Hello! I am trying Datomic in a project. I got this query here:

[:find ?e ?date
         :in $ ?user
         :where
         [?e :period/user ?user]
         [?e :period/start ?date]
         (not [?e :period/end])]
And I would like to get only the max value found for ?date. I tried:
[:find ?e ?date
         :in $ ?user
         :where
         [?e :period/user ?user]
         (max [?e :period/start ?date])
         (not [?e :period/end])]
But this does not work... what is the correct way to do it? Thanks in advance!

kishima21:06:21

Hi, Ben, sorry, I think I don't get the idea...

Ben Sless06:06:04

I was thinking of something like [(tuple ?date ?e) ?tup] then finding the max of ?tup
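A sketch of that idea, assuming Datomic's built-in tuple function and relying on tuples (vectors) comparing element-by-element so the latest ?date wins; untested against a live db, and d/q, db, and user-id are the usual Datomic API names, not from this thread:

```clojure
;; Aggregates belong in :find, not :where, which is why the (max ...)
;; clause above fails. Pair each date with its entity, then take the max
;; of the pairs; the trailing . asks for a single scalar result.
(def newest-open-period-q
  '[:find (max ?tup) .
    :in $ ?user
    :where
    [?e :period/user ?user]
    [?e :period/start ?date]
    (not [?e :period/end])
    [(tuple ?date ?e) ?tup]])

;; usage (hypothetical): (d/q newest-open-period-q db user-id)
```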

folcon23:06:36

I’m trying to write a reasonably sized multiagent sim and someone suggested that I look at the ants demo. A more scaled up version of that does look like what I’m looking for, but I tried upping the sizes and it’s not doing too well. Should I be using the same constructs in a different way? Or would for example, core.async, something event driven or something else be a better fit?

folcon13:06:39

Hmm, digging into agents, the absence of coordination means I'm really unclear how to use these in a sim context, unless I put all of my entities inside a collection wrapped in a single agent?

carocad09:06:58

yeah, you could give each entity an id, then use a map inside a single agent whose state is altered asynchronously.

folcon10:06:58

Should I read that as: give each entity an id and then use a map inside a single Clojure agent whose state is altered asynchronously?

✔️ 3
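A minimal sketch of that single-agent approach (the entity shape and move function are made up for illustration):

```clojure
;; All entities live in one map keyed by id; updates are pure functions
;; sent to a single agent, so they apply asynchronously but one at a time.
(def world (agent {1 {:x 0 :y 0}
                   2 {:x 5 :y 5}}))

(defn move [entities id dx dy]
  (update entities id #(-> % (update :x + dx) (update :y + dy))))

(send world move 1 1 0)
(await world)      ; block until actions sent from this thread have run
(get @world 1)     ;=> {:x 1, :y 0}
```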
rickmoynihan08:06:04

The ants demo, as demo'd by Rich, was as much a demo of using agents as it was of the STM: each ant is an agent, and the world is a 2d array of STM refs. Not sure about the main cause of the scalability issues in that code; most likely the use of send-off, which I suspect means a system thread per ant. Changing that to send would mean you get pooled threads, though I suspect send-off has been used for a reason. I'm thinking using a pool (via send) might mean there's a potential for deadlocking on sync.
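For reference, the send/send-off distinction in miniature (a sketch, not from the ants source):

```clojure
;; send     -> fixed thread pool, intended for CPU-bound actions
;; send-off -> expandable pool that can grow a thread per action,
;;             intended for actions that may block (like sleeping ants)
(def counter (agent 0))

(send counter inc)                     ; runs on a pooled thread
(send-off counter (fn [n]
                    (Thread/sleep 10)  ; blocking is acceptable here
                    (inc n)))
(await counter)
@counter ;=> 2
```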

folcon11:06:52

It seems to be around the thread sleep? Specifically:

(. Thread (sleep ant-sleep-ms))
I’ve not done stuff with threads in a while, and none of it in clojure 😊, most of the higher level stuff makes it not necessary, so not certain… Reading clojuredocs there’s a bit more of an explanation (http://clojuredocs.org/clojure.core/send-off), fordsfords’s comment is particularly instructive =)… So send vs send-off is more of a question between should the threadpool provided be fixed or grow? So altering send-off to send for the behaviour function introduces stuttering in the animation. Not sure that’s the reason rich chose to use send-off, but it’s there… The send version doesn’t have the same scaling problem, although we now get a fun conga-line of ants =)… Reducing ant-sleep-ms to 4 creates a mexican-wave. I tried popping a scheduler inside behave so that we don’t get that thread creation overhead: IE replace this:
(. Thread (sleep ant-sleep-ms))
with this:
(import '(java.util.concurrent Executors TimeUnit))

(def scheduler (Executors/newScheduledThreadPool 4))

(defn sleep-for [ms f]
  (.scheduleWithFixedDelay scheduler f 0 ms TimeUnit/MILLISECONDS))
But it’s not quite working…

carocad09:06:03

I think at this point you're already stepping outside the general question and going too deep into the model for any of us to help you more, sorry 😞

folcon09:06:40

No problem :)... I'll see if I can figure it out. I mean the underlying STM is there, just need to work out how to use it properly ;)...

rickmoynihan17:06:47

> It seems to be around the thread sleep? Specifically:
> (. Thread (sleep ant-sleep-ms))
I've not run this example for years, but I suspect that's a symptom rather than a cause. The sleep just blocks the thread and yields to another thread. What error are you getting? I suspect you're just running out of memory, as the stack allocated per thread on the JVM is IIRC quite large.

> So send vs send-off is more of a question between should the threadpool provided be fixed or grow?
That's one differentiator, yes. The other is that if you have a pool with limits (and you're using send), threads in that pool blocking on I/O aren't returned, so jobs being submitted to the pool can block. send-off doesn't suffer this, as its threads are truly independent.

> So altering send-off to send for the behaviour function introduces stuttering in the animation. Not sure that's the reason rich chose to use send-off, but it's there…
I'd expect this; you're most likely just getting more contention across competing threads. A thread pool will have less contention and greater throughput, as there's less setup/teardown and a restricted number of contending threads (with the rest queued). I suspect the reason Rich chose send-off is that agents can block on updates to the STM refs, so send-off prevents starving new work (send-offs) with other agents blocked on ref updates. I guess it might be possible for deadlock to occur in this case, if you were unlucky. send-off should never deadlock, though it suffers from contention through creating too many threads. For this reason I don't think it would be wise to mix the STM with core.async go-block threads.

rickmoynihan17:06:37

I think if you want tens of thousands of threads you need to use fibres, delimited continuations, or core.async go blocks… but as I said, I wouldn't want to mix them with the STM, which is blocking: a single STM transaction could block the thousands of fibres sharing an O/S thread, and if the wrong set were blocked you could possibly deadlock.

rickmoynihan17:06:10

I could be wrong though… but that’s my hunch anyway.

rickmoynihan17:06:48

FYI: Most folk don’t use the STM, and instead sync updates into a single atom. The STM is more useful (but still not frequently used) when a library provides state; and another library provides state… and you want to transact on them both. This doesn’t normally happen, because in clojure we tend to consider state an application concern; so most libraries avoid assigning vars to hold it; and leave that up to the app. And if you control the whole app; you can just use an atom.
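What "sync updates into a single atom" usually looks like in practice (a sketch with made-up state):

```clojure
;; The whole app's state lives in one atom; swap! applies pure updates
;; atomically, so no STM transaction is needed when there's only one
;; identity to coordinate.
(def app-state (atom {:users {} :session-count 0}))

(swap! app-state assoc-in [:users 1] {:name "ada"})
(swap! app-state update :session-count inc)

(:session-count @app-state) ;=> 1
```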

folcon18:06:21

The specific error was:

[258.039s][warning][os,thread] Failed to start thread - pthread_create failed (EAGAIN) for attributes: stacksize: 1024k, guardsize: 4k, detached.
Ok so perhaps core.async might be a better fit for this then. Thanks @U06HHF230!

rickmoynihan06:06:10

Yeah, core.async should let you have large numbers of independent threads. It will require a different architecture though. Be careful not to inadvertently block any of the go blocks; communicate with any blocking processes by hiding them behind threads and talking to them over channels.
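A sketch of what that shape might look like for the sim. It needs the org.clojure/core.async dependency on the classpath, and all the names here are made up for illustration:

```clojure
(require '[clojure.core.async :as a :refer [go-loop <! >! timeout]])

;; Each "ant" is a go loop that parks on a timeout channel instead of
;; sleeping an OS thread, so thousands of ants can share a small pool.
(defn ant [id moves-ch steps]
  (go-loop [n 0]
    (when (< n steps)
      (<! (timeout 5))      ; parks the go block; no thread is blocked
      (>! moves-ch [id n])  ; report a move over a channel
      (recur (inc n)))))

;; Blocking work (rendering, I/O) would live behind a/thread and be
;; reached via channels, never called from inside a go block.
```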