This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2022-08-26
Channels
- # announcements (1)
- # beginners (42)
- # biff (11)
- # calva (15)
- # cider (3)
- # clj-http-lite (3)
- # clojure (52)
- # clojure-europe (16)
- # clojure-nl (1)
- # clojure-norway (39)
- # clojure-uk (4)
- # clojurescript (52)
- # code-reviews (13)
- # conjure (1)
- # cursive (4)
- # data-science (1)
- # datomic (5)
- # emacs (6)
- # events (3)
- # graalvm (5)
- # hyperfiddle (7)
- # kaocha (14)
- # lsp (11)
- # malli (3)
- # nbb (13)
- # off-topic (87)
- # pathom (15)
- # polylith (23)
- # portal (5)
- # reitit (4)
- # shadow-cljs (110)
- # squint (114)
- # testing (1)
- # vim (13)
is there a way of getting a function's cljs source string in a cljs REPL? like (clojure.repl/source-fn 'clojure.core/map) in Clojure?
You can use cljs.repl/source. It doesn't return a string though, it just prints to the REPL.
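For example, at a ClojureScript REPL (source is a macro, so the require is needed first):

```clojure
cljs.user=> (require 'cljs.repl)
cljs.user=> (cljs.repl/source map)
;; prints the source of cljs.core/map to the REPL and returns nil
```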
nice, thanks
I haven't tried it yet but I really want to: https://github.com/johnmn3/cljs-thread (just be aware of all the performance limitations due to slow "thread" startup and message passing)
you can use the node child_process package to spawn additional node processes and communicate with them, but that's nowhere near pmap
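A minimal sketch of the child_process approach (worker.js is a hypothetical script that listens with process.on "message" and replies with process.send):

```clojure
(ns example.fork
  (:require ["child_process" :as cp]))

;; fork starts a separate node process with an IPC channel attached
(def worker (cp/fork "worker.js"))

;; everything sent over the channel is serialized, so only plain
;; data crosses the process boundary -- there is no shared memory
(.on worker "message"
     (fn [msg] (js/console.log "from worker:" msg)))

(.send worker #js {:op "compute" :arg 42})
```

The serialization overhead is part of why this is nowhere near pmap for fine-grained work.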
I have a use case… oh god this is so hard to explain, but I will try to make it short 🙂
The third-party library returns things asynchronously. I have to process data in the right order and I have to wait until each part of data processing ends to do some action. The point is this third-party library takes very long to process things, so I would like to do it in parallel, but get data in the order it was on input.
pmap does exactly what I want. It returns data in the right order and processes things much faster for functions which take more time, because they query other servers. So I don't care about CPU limitations too much, because the IO operations take the most time. The fn doesn't use a lot of CPU.
Will https://github.com/johnmn3/cljs-thread#pmap work for this use case or do you recommend something else @U05224H0W?
I am trying to solve it with async, but the code is so complex and still not as efficient as pmap would be.
if you are doing io then you don't need more threads because js is already going to park the io and continue processing stuff on the event loop.
To summarize: 1) IO-blocking operations 2) prepare the data from 1) in parallel to make it faster 3) keep the order as it was on input 4) run a fn after each input is processed in the right order. So I have to wait until the first item in order is ready, get it, process it, and run the fn. But I want the next items to be prepared in parallel. 5) run a fn at the end
define "IO blocking operations". if you truly block the only option is to use additional processes since no other work can occur while blocking
> define “IO blocking operations”.
1) I have to run the same fn N times, each time with different parameters.
2) Wait until all of them finish and reduce them together. Each fn from 1) returns data collected from the network (third-party service, IO-blocking operation).
3) Process them.
4) After 3), run a callback fn.
Repeat the process X times, where X can be thousands.
I can’t process the next item until I finish processing the previous one.
I have to call a fn which is async and can return at a random moment. Because of that I have to use async/chan to control the order of processing. But in the meantime I would like to prepare the data to process, in the right order. I am very limited here, because the fn which I have to call is async and I have to control the order.
I believe it is hard to imagine the use case with all the corner cases. This is quite a hard example to explain with all the details.
The problem is the fn in the library is asynchronous with control of the order of execution, but this is really a synchronous operation and has to be done in the right order, with data prepared in advance to make it faster. And what makes it even harder: after each "partition" of N data I have to run some fn. So I have to control the order at every moment. Hard to explain, sorry.
you could assign an incrementing number to each task. When a result arrives you check whether all previous numbers have been handled; otherwise store it in a map or something and wait until all the previous ones arrive
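A sketch of that sequence-number idea (all names here are made up; handle! is whatever must run on each result, in order):

```clojure
;; results that arrived early, keyed by task id,
;; plus the id we are allowed to emit next
(defonce state (atom {:pending {} :next-id 0}))

(defn on-result [id result handle!]
  (swap! state assoc-in [:pending id] result)
  ;; drain every consecutive id starting from :next-id
  (loop []
    (let [{:keys [pending next-id]} @state]
      (when (contains? pending next-id)
        (handle! (get pending next-id))
        (swap! state (fn [s]
                       (-> s
                           (update :pending dissoc next-id)
                           (update :next-id inc))))
        (recur)))))
```

Since JS is single-threaded, no locking is needed. Note that this buffers early arrivals in the pending map, so you still have to limit how many tasks you start at once.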
> so its not really blocking you just have coordination problem? You could say that. But I can't run them without control, because it will end with Out Of Memory.
> do you have your entire thousands of inputs up front or is this a streaming system? Each call of 1) from https://clojurians.slack.com/archives/C03S1L9DN/p1661523274103949?thread_ts=1661518948.395909&cid=C03S1L9DN gets a chunk of data from the network. I operate on these chunks. I know all the inputs in advance, but I don't know the outputs.
https://clojurians.slack.com/archives/C03S1L9DN/p1661523816074659?thread_ts=1661518948.395909&cid=C03S1L9DN The code is already complex. I don’t want to create a monster 🙂
I will think about whether I can figure out something with pipeline-async. I don’t remember how it works.
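For reference, pipeline-async is basically built for this shape: bounded parallelism with results delivered in input order. A hedged sketch, where third-party-call, all-inputs and process-result are placeholders for the real code:

```clojure
(require '[cljs.core.async :as a :refer [chan close! <! pipeline-async]])

;; adapt the async third-party fn to pipeline-async's contract:
;; put the result on `out`, then close `out` to signal completion
(defn fetch-async [v out]
  (third-party-call v (fn [result]
                        (a/put! out result)
                        (close! out))))

(let [in  (chan)
      out (chan)]
  ;; at most 8 calls in flight; `out` yields results
  ;; in the same order the inputs were put on `in`
  (pipeline-async 8 out fetch-async in)
  (a/onto-chan! in all-inputs)   ;; closes `in` when done
  (a/go-loop []
    (when-some [result (<! out)]
      (process-result result)
      (recur))))
```

The parallelism number also caps memory use: only that many chunks are being prepared at any moment.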
could you promise chain? ⚠️ pseudo-code
(let [p (->> some-list
             (map start-worker-get-promise)
             (reduce (fn [prev next]
                       (-> prev
                           (.then process-work)
                           ;; ignore prev's processed value, move on to
                           ;; the next promise in the chain
                           (.then (fn [_] next))))))]
  (-> p
      (.then process-work)   ;; process the last result too
      (.then do-checkin)))
but how do you control not preparing too much data at once? It will throw Out Of Memory. I have to prepare the next N items, but not more.
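One simple way to bound that without core.async is a sliding window over plain promises: prime at most n tasks, and start the next one only when one settles. A sketch with hypothetical names (start-task takes an input and returns a promise):

```clojure
(defn run-limited
  "Runs (start-task input) over inputs with at most n promises
   in flight at once; resolves with the results in input order."
  [n start-task inputs]
  (js/Promise.
   (fn [resolve _reject]
     (let [inputs  (vec inputs)
           total   (count inputs)
           results (atom (vec (repeat total nil)))
           next-i  (atom 0)
           done    (atom 0)]
       (letfn [(launch []
                 (when (< @next-i total)
                   (let [i @next-i]
                     (swap! next-i inc)
                     (.then (start-task (nth inputs i))
                            (fn [r]
                              (swap! results assoc i r) ;; keep input order
                              (swap! done inc)
                              (if (= @done total)
                                (resolve @results)
                                (launch)))))))]
         (if (zero? total)
           (resolve [])
           ;; prime the window with up to n tasks
           (dotimes [_ (min n total)] (launch))))))))
```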
kinda hard to answer questions like that without knowing more about what you are doing with the data as you accumulate it.
I found promise-chan has some issues with freeing memory. I can’t explain it, but the same code works with chan, yet with promise-chan it throws Out Of Memory. Even if I close! the chan manually and immediately. Something is very wrong with promise-chan and memory. I don’t see any logical explanation for what is happening. It has to be some kind of bug in freeing references for the garbage collector.
> kinda hard to answer questions like that without knowing more about what you are doing with the data as you accumulate it. I know, but the use case is complex and hard enough… It is just not possible to explain on Slack in words.
but yeah, I am looking for a solution because promise-chan has a bug / issue with releasing memory.
I personally have tried to use it a couple of times but find it makes for obtuse code, so I have yet to actually PR something with it
In general I am trying to avoid async in all possible ways. But here it is not possible, because a lot of code already exists.
Hello! Since upgrading my macOS to 12.5.1 yesterday, I’ve been having issues with figwheel: when started, it gets to the point of printing Successfully compiled build … (the result is correctly written on disk). After that it’s just stuck consuming 100% CPU for ~half an hour, and then throws an OOM exception. The stacktrace is not very useful; it seems it’s trying to create a huge string for some reason and then runs out of memory. Any ideas, or has anyone had a similar issue?
FWIW I tried cleaning up various different parts of my setup: reinstalled the jvm & leiningen, pulled fresh repo, cleared node_modules, ~/.m2 etc all to no avail
I'm not going to update to 12.5.1 yet... even though I don't use figwheel at the moment!
@UHJQAD8BW What version of figwheel-main are you using? The Hawk filewatcher was broken by one of the macOS updates and Figwheel switched to Beholder in 0.2.14, which may fix the problem.