This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2022-01-16
Channels
- # babashka (48)
- # beginners (44)
- # biff (3)
- # calva (1)
- # cider (42)
- # clj-kondo (8)
- # clojure (93)
- # clojure-australia (2)
- # clojure-europe (3)
- # clojure-taiwan (1)
- # clojurescript (10)
- # conjure (4)
- # deps-new (5)
- # joker (9)
- # lsp (12)
- # membrane (16)
- # minecraft (12)
- # missionary (4)
- # mount (3)
- # off-topic (60)
- # reitit (9)
- # releases (13)
- # ring-swagger (1)
- # shadow-cljs (18)
- # sql (67)
- # tools-deps (2)
Hi @borkdude You helped me create processes with Babashka earlier to start bash jobs. You even showed me how I could add a callback function to these processes which tells me when a bash job is complete and processes its results. My problem now: these callback functions seem to continue running in the same thread started for the bash process. So in the end lots of futures are created and my regular execution happens in the futures too. Do you know a workaround for this? I.e. having the callback function pass execution over to the main thread instead of continuing in the new thread?
Sorry, yes of course. I thought this was a standard pattern
Most of these additions are my own, but this should give you the gist
(I added the snippet in a new post since it was hard to read in the replies sidebar)
I'll move it again
I think the problem is that everything that happens as a result of
(reset! app-state/jobresult [jobid (.exitValue p) job])))))
happens in the new thread. This is where I'd really like to pass control over to my normal program.
I understand that this might not be something that babashka can help with though :)
To have the exitValue you need to wait for the process to finish, so then you could just not use onExit at all, and just use waitFor
Thanks. But will the bash job be blocking then? Or can I run multiple bash jobs at the same time?
process
isn't blocking by default, unless you block (deref) yourself. can you explain in words what you are trying to accomplish again?
Sorry. I want to run bash jobs, possibly multiple, often in parallel. When they finish I want to process the result and update my DAG and start the next jobs ready for execution.
yeah ok, so you could have three processes, start them in parallel with (def p1 (process ["bash"])) .. (def p3 ..)
then at the end you can collect the results by just walking over the processes with deref, to wait for all of them to finish: `(mapv deref [p1 p2 p3])`
if you want to see the exit value of the first returned one, you could use promises or core.async in combination with .onExit
but if you want the combined result of all of them, then it doesn't matter, you have to wait for all of them anyway
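A minimal sketch of that pattern with babashka.process (the `sleep` commands stand in for real jobs):

```clojure
(require '[babashka.process :refer [process]])

;; start three jobs in parallel; process returns immediately
(def p1 (process ["sleep" "1"]))
(def p2 (process ["sleep" "1"]))
(def p3 (process ["sleep" "1"]))

;; deref blocks until a process exits, so this waits for all of them
(def results (mapv deref [p1 p2 p3]))

;; each result carries its exit code
(mapv :exit results)
;; => [0 0 0]
```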
I see that I have lots to learn and that this might not be babashka related, but I appreciate your help :)
or you could update an atom in .onExit:
(def results (atom []))
(.onExit (reify Function (swap! results ..))
The problem is that I am trying to create something like Make, but suited to my needs. The program does not know ahead of time which jobs will finish when, so I need to have a program running that dispatches jobs one by one, collects their results, and sees what new jobs can be run.
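The snippet above is abbreviated; a fuller sketch might look like this (using babashka.process and Java's CompletableFuture API — note the callback runs on a pooled thread, not the main thread, which is exactly the thread-hopping being discussed here):

```clojure
(require '[babashka.process :refer [process]])
(import '[java.util.function Function])

(def results (atom []))

(def p (process ["sleep" "1"]))

;; .onExit returns a CompletableFuture<Process>; the callback runs on a
;; pooled thread when the process finishes, not on the main thread
(def fut
  (-> (.onExit (:proc p))
      (.thenApply (reify Function
                    (apply [_ proc]
                      (swap! results conj (.exitValue proc)))))))

(.get fut)  ;; block until the callback has run
@results
;; => [0]
```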
Your solution is what I have tried:
(reset! app-state/jobresult [jobid (.exitValue p) job])
But the code started by the watcher of app-state/jobresult seems to run in the future
Okay, this is my first foray into concurrency. But now I know the limitations. Thanks!
> obviously
I wasn't certain, but now I know
There are only two possibilities: either you run something async and handle the result async, or you block and do something with the result on the main thread.
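One common way to do the async-but-handle-on-main-thread handoff (a sketch, not from this thread; names like `on-job-done` are made up) is a blocking queue: callbacks only enqueue results, and the main loop blocks on the queue and does the real work:

```clojure
(import '[java.util.concurrent LinkedBlockingQueue])

(def job-results (LinkedBlockingQueue.))

;; called from a callback thread (e.g. .onExit or an atom watcher):
;; it only enqueues and returns, doing no real work on that thread
(defn on-job-done [jobid exit-code]
  (.put job-results [jobid exit-code]))

;; the main loop blocks here; all result handling (updating the DAG,
;; starting the next jobs) happens on the main thread
(defn handle-next-result []
  (let [[jobid exit-code] (.take job-results)]
    (println "job" jobid "finished with exit code" exit-code)
    [jobid exit-code]))
```

Simulating a callback with `(on-job-done 42 0)` and then calling `(handle-next-result)` yields `[42 0]` on the calling thread.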
I should really look at using babashka tasks as a submodule then. I'm writing something more involved (look up nextflow or snakemake if interested) but I'm all for code-reuse. It seems like babashka/tasks has the ability to call functions upon starting a job and finishing it. This would allow me to run arbitrarily complex code around the workflow...
Thanks for the info. I'll start to learn about babashka tasks and think about how I can use it as a task runner within a larger program 😄
I want to run babashka tasks to run programs in parallel. These often write output to screen that I want to show. However, I want to tell which program wrote which lines to screen (and perhaps whether these were directed to stdout or stderr). Is this possible in babashka tasks? So that instead of seeing
<output from program 1>
<output from program 3>
...
I could have babashka explain who wrote what? Like:
(program 1, time, stderr): output
(program 3, time, stdout): output
Where (program 1, time, stderr)
is user-configurable. If this is not possible, is this something you would consider adding?
Perhaps worth a try. You can open a GitHub Discussion about this idea so we can maybe discuss it further and others can respond there/upvote the idea. https://github.com/babashka/babashka/discussions
Come to think of it, all that is needed is to have an option to send the stdout/stderr to tap, and then it could be handled there.
If you're shelling out, perhaps there is also a unix tool that you can filter output through and which prepends something to it, for now
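For example, with sed (the label text is arbitrary, and `./myprog` is a placeholder):

```shell
# prefix every stdout line with a label; the label is arbitrary
echo "job started" | sed 's/^/(program 1, stdout): /'
# prints: (program 1, stdout): job started

# with bash process substitution, stderr can get its own prefix:
# ./myprog 2> >(sed 's/^/(program 1, stderr): /' >&2) | sed 's/^/(program 1, stdout): /'
```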
Also a short question about the docs: > The `current-task` function returns a map representing the currently running task. This function is typically used in the `:enter` and `:leave` hooks. What does it return when multiple jobs are running?
in a task, it always returns the current task. there is always one current task, just like there is always one "the current thread"
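For reference, a bb.edn sketch using `current-task` in global `:enter`/`:leave` hooks (the task name here is invented):

```clojure
;; bb.edn (sketch; the task name is invented)
{:tasks
 {:enter (println "starting:" (:name (current-task)))
  :leave (println "done:" (:name (current-task)))
  hello  {:task (println "hello")}}}
```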
I thought multiple tasks could be run in parallel with the parallel flag