
is there a way of getting a function's cljs source string in a cljs repl? like (clojure.repl/source-fn 'clojure.core/map) in clojure?


You can use cljs.repl/source. It doesn't return a string though, just prints to the repl.
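For example, something like this at a cljs REPL (a sketch — the exact require form can vary between REPLs, since cljs.repl/source is a macro and some REPLs refer it automatically):

```clojure
;; at the cljs REPL prompt
(require '[cljs.repl :refer-macros [source]])

(source map)
;; prints the source of cljs.core/map to the REPL and returns nil
```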


nice, thanks


What is the closest solution to pmap in ClojureScript (with node)?


i haven't tried it yet but i really want to: (just be aware of all the performance limitations due to slow "thread" startup and message passing)

👍 1

there is none since there are no real threads with shared memory


you can use the node child_process package to spawn additional node processes and communicate with them, but that's nowhere near pmap
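A minimal sketch of that approach from cljs on node — `worker.js` is a hypothetical script that listens for messages and sends results back:

```clojure
;; spawn a separate node process and talk to it via message passing
(def child-process (js/require "child_process"))

(def worker (.fork child-process "worker.js"))  ; worker.js is hypothetical

(.on worker "message"
     (fn [msg] (js/console.log "result from worker:" msg)))

(.send worker #js {:task "compute" :input 42})
```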


I have a use case… oh god this is so hard to explain, but I will try to make it short 🙂 The third-party library returns things asynchronously. I have to process data in the right order, and I have to wait until each part of the data processing ends before doing some action. The point is this third-party library takes very long to process things, so I would like to run it in parallel but get the data back in the order it was on input. pmap does exactly what I want: it returns data in the right order and is much faster for functions which take more time, because they query other servers.


So I don't care about CPU limitations too much, because the IO operations take most of the time. The fn doesn't use a lot of CPU.


Will it work for this use case, or do you recommend something else @U05224H0W?


I am trying to solve it with core.async, but the code is so complex and still not as efficient as pmap would be.


have you looked into using pipeline-async with your core.async code?
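Something along these lines — a sketch where `fetch-item` stands in for the third-party async call (the names are assumptions; `to-chan!` needs a recent core.async, older versions spell it `to-chan`):

```clojure
(ns example.pipeline
  (:require [cljs.core.async :as a :refer [chan put! close!]]))

;; stand-in for the third-party async fn: calls back some time later
(defn fetch-item [params callback]
  (js/setTimeout #(callback {:params params :done true}) (rand-int 100)))

(defn process-all [inputs]
  (let [in  (a/to-chan! inputs)
        out (chan)]
    ;; at most 8 calls in flight at once; results arrive on `out`
    ;; in the same order as `inputs`, which also bounds memory use
    (a/pipeline-async 8
                      out
                      (fn [params result-ch]
                        (fetch-item params
                                    (fn [res]
                                      (put! result-ch res)
                                      (close! result-ch))))
                      in)
    out))
```

The parallelism argument (8 here) is what keeps the number of in-flight requests bounded while still delivering results in input order.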


if you are doing io then you don't need more threads because js is already going to park the io and continue processing stuff on the event loop.


to summarize: 1) IO-bound operations 2) prepare the data from 1) in parallel to make it faster 3) keep the same order as the input 4) run a fn after each input is processed, in the right order. So I have to wait until the first item in order is ready, get it, process it, and run the fn, but I want the next items to be prepared in parallel. 5) run a fn at the end


pmap in Clojure just does it


let me recall how pipeline-async works. I was using it a long time ago.


define "IO blocking operations". if you truly block the only option is to use additional processes since no other work can occur while blocking


> define "IO blocking operations". 1) I have to run the same fn N times, each time with different parameters. 2) Wait until all of them finish and reduce the results together. Each fn from 1) returns data collected over the network (a third-party service, an IO operation). 3) process them 4) after 3) run a callback fn. Repeat the process X times, where X can be thousands.
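Steps 1) and 2) on their own look a lot like js/Promise.all — a sketch, where `fetch-part` and `combine` are hypothetical names for the async fn and the reduce step:

```clojure
;; run the same async fn once per parameter set, wait for all, reduce
(defn run-batch [param-sets callback]
  (-> (js/Promise.all (into-array (map fetch-part param-sets)))
      (.then (fn [results]
               (callback (reduce combine (array-seq results)))))))
```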


in node all IO is async and non-blocking by default


so what are you doing that makes it blocking?


i think he just means io bound


I can't process the next item until I finish processing the previous one. I have to call a fn which is async and can return at a random moment. Because of that I have to use async/chan to control the order of processing. But in the meantime I would like the data for the next items to be prepared, in the right order. I am very limited here, because the fn I have to call is async and I have to control the order.


I believe it is hard to imagine the use case with all its corner cases. It is quite a hard example to explain in full detail.


The problem is that the fn in the library is asynchronous, but the execution order has to be controlled: it is really a synchronous operation and has to be done in the right order, just with the data prepared in advance to make it faster.


And what makes it even harder: after each "partition" of N items I have to run some fn. So I have to control the order at every moment. Hard to explain, sorry.


so it's not really blocking, you just have a coordination problem?


do you have your entire thousands of inputs up front or is this a streaming system?


you could assign an incrementing number for each task. when the result arrives you check if all previous numbers have been handled, otherwise store it in a map or something and wait until all previous ones arrive
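That counter idea can be sketched like this (all names are made up for illustration):

```clojure
(defonce pending      (atom {}))  ; seq number -> result, waiting its turn
(defonce next-to-emit (atom 0))   ; next seq number allowed downstream

(defn handle-result
  "Call with each task's sequence number `n` and its `result` as they
   arrive (in any order); `emit!` receives results strictly in order."
  [n result emit!]
  (swap! pending assoc n result)
  ;; drain every buffered result that is now next in line
  (loop []
    (let [i @next-to-emit]
      (when (contains? @pending i)
        (let [r (get @pending i)]
          (swap! pending dissoc i)
          (swap! next-to-emit inc)
          (emit! r)
          (recur))))))
```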


> so it's not really blocking, you just have a coordination problem? We can say that. But I can't run them without control, because it would end with Out Of Memory.


> do you have your entire thousands of inputs up front or is this a streaming system? Each call of 1) gets a chunk of data from the network. I operate on these chunks. I know all the inputs in advance, but I don't know the outputs.


I will see if I can figure out something with pipeline-async. I don't remember how it works.


I have used this style of "coordination" many times and it is trivial code


could you promise chain? ⚠️ pseudo-code

(-> (->> some-list
         (map start-worker-get-promise)   ; kick off the work, collect promises
         (reduce (fn [prev next]
                   (-> prev
                       (.then process-work)
                       (.then (fn [_] next))))))
    (.then do-checkin))


it sounds exactly like the pipeline-async use case


but how do you control it so you don't prepare too much data at once? That will throw Out Of Memory. I have to prepare the next N items, but not more.


kinda hard to answer questions like that without knowing more about what you are doing with the data as you accumulate it.


I found promise-chan has some issues with freeing memory. I can't explain it, but the same code works with chan and throws Out Of Memory with promise-chan, even if I close! the chan manually and immediately. Something is very wrong with promise-chan and memory. I don't see any logical explanation for what is happening. It has to be some kind of bug where references aren't freed for the garbage collector.


> kinda hard to answer questions like that without knowing more about what you are doing with the data as you accumulate it. I know, but the use case is complex enough that it is just not possible to explain on slack in words.


but yeah, I am looking for a solution because promise-chan has a bug / issue with releasing memory.


*at least in cljs on node. I didn't try it in Java.


so BTW I recommend not using promise-chan at all in cljs


But pipeline-async looks promising 👍 Thank you.

👌 1

i personally have tried to use it a couple of times but find it makes for obtuse code, so i have yet to actually PR something with it


In general I am trying to avoid async in all possible ways. But here it is not possible, because a lot of code already exists.


At least in Java I found it is always better not to use async than to use it 🙂


besides one very specific use case which needs strong control over processing


Hello! Since upgrading my macOS to 12.5.1 yesterday, I've been having issues with figwheel: when started, it gets to the point of printing Successfully compiled build … (the result is correctly written to disk). After that it's just stuck consuming 100% CPU for ~half an hour, and then throws an OOM exception. The stacktrace is not very useful; it seems it's trying to create a huge string for some reason and then runs out of memory. Any ideas, or has anyone had a similar issue?


FWIW I tried cleaning up various parts of my setup: reinstalled the JVM & Leiningen, pulled a fresh repo, cleared node_modules, ~/.m2 etc., all to no avail


I'm not going to update to 12.5.1 yet... even though I don't use figwheel at the moment!


@UHJQAD8BW What version of figwheel-main are you using? The Hawk filewatcher was broken by one of the macOS updates and Figwheel switched to Beholder in 0.2.14 which may fix the problem.