This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
- # announcements (4)
- # babashka (4)
- # beginners (164)
- # calva (47)
- # cider (1)
- # cljs-dev (29)
- # cljsrn (3)
- # clojure (137)
- # clojure-europe (23)
- # clojure-nl (3)
- # clojure-spec (7)
- # clojure-uk (44)
- # clojurescript (35)
- # component (8)
- # conjure (119)
- # cursive (32)
- # datomic (12)
- # emacs (31)
- # figwheel-main (36)
- # graalvm (10)
- # jobs (2)
- # kaocha (1)
- # lein-figwheel (3)
- # meander (15)
- # mount (3)
- # off-topic (9)
- # pathom (8)
- # quil (4)
- # re-frame (13)
- # reagent (15)
- # remote-jobs (10)
- # shadow-cljs (128)
- # slack-help (2)
- # spacemacs (8)
- # test-check (6)
- # xtdb (6)
Hey Clojurians!! Nice to meet you all! Is anyone in Europe here? I have a remote long-term project with a leading client where I’m looking for an experienced Clojure/ClojureScript developer with Java microservices BE and AWS skills too. If anyone is interested in a highly paid remote role, please get in touch with me ASAP 🙂 Have a nice day, hope everyone is staying safe. Many Thanks Daniel
Oh, I see you have. You should remove this message, as it's not the appropriate channel.
any recommendations on a mature http client that supports core.async by default? it's server side, so it doesn't need to be compatible with CLJS
Hi, this is a post that may help you. http://martintrojer.github.io/clojure/2013/07/07/coreasync-and-blocking-io
requests with cognitect.http-client return a channel, but cognitect.http-client is designed for a very specific use-case
Separately, I've been working on adapting the client within Cognitect Labs' aws-api to support broader use-cases.
Apple's ServiceTalk seems like a robust foundation https://apple.github.io/servicetalk/servicetalk-examples/SNAPSHOT/http/index.html
if I request data from 8 different web services and the requests don't depend on each other it's nice if something goes off and gets all that concurrently
@lilactown real world use case: to get a result for a client, I need data from 5 apis in parallel - none of the requests wait on results from the others
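That fan-out might look like the following minimal sketch in core.async; `fetch` is a hypothetical stand-in for a real HTTP call, and `async/thread` gives each call its own thread plus a channel carrying the result.

```clojure
(require '[clojure.core.async :as async :refer [thread <!!]])

(defn fetch
  "Hypothetical stand-in for a blocking HTTP request."
  [url]
  {:url url :status 200})

(defn fetch-all
  "Start one request per url before taking any result, so the calls
  run concurrently; returns the results in input order."
  [urls]
  (->> urls
       (mapv #(thread (fetch %)))  ; launch everything first
       (mapv <!!)))                ; then wait for each

(fetch-all ["http://a" "http://b" "http://c" "http://d" "http://e"])
```

The important detail is doing the two `mapv` passes separately: starting all the threads before blocking on any of them is what makes the requests overlap.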
there's a paradigm where all IO is async by default, and you have to be explicit about what things you want to wait for. it's pretty popular
right, but shouldn’t how the http call gets handled (asynchronous, synchronous, channels, which thread, etc) be left to the consumer of the http library rather than bundled with the http library itself?
there's a decent argument that syncing on something async has fewer gotchas than going async on something sync
we do have good concurrency tools (futures, core.async, thread pool executors) but there are gotchas
yeah, I’m just pattern matching on the rule of thumb that “io inside of a go block is bad”
sounds like having async IO that returns a channel representation of that async process is what we’re talking about, not actually running the request inside of a go?
I was recently looking at guile fibers, it's an interesting compare / contrast (instead of `go` compiling your code into a state machine, they can use continuations; just like us they need to worry about blocking calls eating up their thread pool)
https://wingolog.org/archives/2017/06/29/a-new-concurrent-ml is a very good blog post covering some of that work in guile
fibers/continuations at the language level seem like such a good idea for the 80% case, we just need some behemoth to throw a bunch of money at making it good enough at the 20% where it kinda sucks rn
well, they have continuations at the language level, fibers are userspace just like our go blocks are
they fixed multiple bugs in their compiler (and more to come) due to problems exposed by using fibers as intended, which is cool
one thing I like - instead of our `go`, which puts your code into a singleton async scheduler, fibers have `run-fibers`, which creates a new scheduler to run the body in - so for example two libraries can each use their own fiber scheduler if they are independently async and don't need to coordinate
I'm not sure how practical that is. The reality is that you really want a single thread pool in your application; you can't keep reserving cores*2 threads for each library ad infinitum. In theory I'm guessing core.async could do that by not using a singleton?
probably - I'm not sure how much (if any) of the current code relies on there being a single instance of the thread pool - eg. what would happen if one channel was used in two go blocks inside different schedulers? since the continuation of a go block is expressed as a channel callback, I'm not sure how that should work
in fact, the least disruptive thing would be channels attached to schedulers, not go blocks, but that's very counterintuitive
you can avoid using either of them and submit normal functions to an ExecutorService that yield results on channels when done
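A sketch of that approach, using nothing beyond core.async and java.util.concurrent (`submit-chan` is an illustrative name, not an existing API):

```clojure
(require '[clojure.core.async :as async :refer [chan put! close! <!!]])
(import '(java.util.concurrent ExecutorService Executors))

(defn submit-chan
  "Run f on the given executor; returns a channel that yields f's
  result and then closes."
  [^ExecutorService exec f]
  (let [out (chan 1)]
    (.submit exec ^Runnable (fn []
                              (put! out (f))
                              (close! out)))
    out))

(def exec (Executors/newFixedThreadPool 4))
(<!! (submit-chan exec #(+ 1 2)))   ; => 3
(.shutdown exec)
```

No `go` or `async/thread` involved: the executor does the scheduling, and the channel is just the handoff point, so all the usual core.async coordination still applies to the result.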
I suspect grouping fibers like guile does is more about the ability to wait for them all to finish than it is about scheduling
I think guile is single thread, so having multiple "schedulers" on top of a single thread would be really odd
having a process return a channel allows you to use all the coordination facilities in core.async
I believe you; I'm very #beginners in my understanding of Java threads / Clojure core.async.
I was thinking of sort of the general core.async case where you want to throw some computation to be scheduled to be run, and it might await other channels
I thought we were talking about the general case - I put some code in a `go` block and it gets scheduled to run on some thread pool or what have you
isn't it more you can do blocking IO there, but you should not, because that thread pool has been sized specifically for doing compute?
and when you said that go/thread isn't relevant, and you can create your own function/macro that runs the code on the executor, it sounded like you were saying the solution was to essentially create your own `go` construct that runs on the executor of your choice
I wrote a process that crawls a filesystem, looking for all video files, and launches ffmpeg using the above code to extract metadata from all the videos.
the process runs max 20 `ffmpeg` calls in parallel
another process in the middle takes video files and shells out to `ffmpeg`, subject to the concurrency limit
so I can answer questions like "What files have h.264 streams that are more than 720p?", etc.
HTTP is a little bit complex though. If you're sending JSON or receiving JSON, in which thread pool does the JSON get written/read?
I'm still a little confused, since it didn't sound like we were talking about I/O before
it does make sense paired with the previous part of the convo when I was confused about a question about http requests and core.async
I thought you were replying to > in fact, the least disruptive thing would be channels attached to schedulers, not go blocks, but that's very counterintuitive
which it actually sounds like is what happens in practice, but channels returned by go blocks are inherently tied to whatever thread pool core.async comes with by default, which it sounds like is different than guile's fibers that allow you to pick the scheduler at runtime
the jvm has had a default threadpool for a while now too (https://docs.oracle.com/javase/8/docs/api/java/util/concurrent/ForkJoinPool.html#commonPool--) which core.async and clojure don't use
I thought you could do all of the channel operations on whichever thread you want with the blocking (`!!` suffix) operations
the channels directly add thunks invoking callbacks to core.async's threadpool, but an alternative implementation would be the channels directly invoking the callbacks and relying on them to do work on whatever threadpool they want
my understanding was that if you want to use a default threadpool, then use `go` blocks. if you want more control over scheduling and threads, use the blocking operations (`!!` suffix)
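That rule of thumb as a minimal sketch: parking ops (`<!` / `>!`) inside `go` on the shared pool, blocking ops (`!!`) on threads you manage yourself.

```clojure
(require '[clojure.core.async :as async :refer [chan go thread <! >! <!! >!!]])

;; parking ops inside go: cooperative, runs on core.async's fixed pool
(let [c (chan)]
  (go (>! c :from-go))   ; parks (doesn't pin a thread) until taken
  (<!! c))               ; => :from-go

;; blocking ops on a thread you control: fine to actually block here
(let [c (chan)]
  (thread (>!! c :from-thread))  ; real blocking put on its own thread
  (<!! c))                       ; => :from-thread
```

Either way the consumer just sees a channel, which is why the two styles compose.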
I imagine it was just luck of the draw that channels ended up putting stuff on dispatch instead of the other way
but there would even be some benefits to having the callbacks control the threadpool queueing
right now the way real blocking operations are implemented is a callback is attached to the channel which fulfills a promise when run, and the operation blocks on that promise
that callback still gets run on the core.async pool, so even if you are only ever doing real blocking stuff you are running lots of small little tasks on the core.async pool
if the callback got to decide if it ran on an executor or not it could skip the executor for just fulfilling the promise
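A simplified sketch of the mechanism being described (not core.async's actual source): the blocking op is just a callback that fulfills a promise, plus a deref.

```clojure
(require '[clojure.core.async :as async :refer [chan put! take!]])

(defn blocking-take
  "Block the calling thread until ch yields a value, built only from
  the async callback API plus a promise."
  [ch]
  (let [p (promise)]
    (take! ch (fn [v] (deliver p v)))  ; callback fulfills the promise
    @p))                               ; the caller blocks here

(let [c (chan 1)]
  (put! c 42)
  (blocking-take c))   ; => 42
```

When the value isn't already available, real core.async runs that callback on its dispatch pool once the value arrives, which is exactly the small-task overhead described above.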
@smith.adriane https://groups.google.com/d/msg/clojure/yUIl2tl4HoM/awXms4n4AAAJ > In general, you should never directly or indirectly use blocking IO operations in go block threads. The go block threads are a fixed pool of (by default 8) threads. If you block enough of these threads, you lock up the pool, potentially in unrecoverable ways. > This release contains a new Java system property (intended primarily for development use) that will throw if core.async blocking operations (anything ending in "!!") are used in a go block. The exception will be thrown in a go block thread which by default will bubble up to the ThreadGroup's uncaught exception handler and get printed to stderr. You can also set Thread.setDefaultUncaughtExceptionHandler() if you want to do something else. Note that this only catches one set of blocking calls, other blocking IO is equally as problematic and will not be caught with this flag.
(That's not the latest release anymore, since @alexmiller released more good stuff this morning)
:thumbsup:. I wasn’t recommending using blocking operations inside of `go` blocks. Is it still reasonable to use your own threads with blocking operations if you want more control over scheduling and threads?
In the system I mentioned above: the filesystem pump uses async/thread; the ffmpeg stuff is a CompletableFuture that dumps onto a channel; the concurrency limiter in the middle is a single go block; the main thread reads the results of the video metadata extraction
outside of cljs, I typically avoid using `go` blocks except for temporary solutions
@ghadi do you mind sharing what the concurrency limiter looks like? I'm trying to think of what that looks like with CompletableFutures
its input, like clojure.core.async/pipeline-async, is a function that returns a channel
"Runs asynchronous function 'af' on each input from channel 'in', producing results to channel 'out'. af is presumed to return a channel. Input order is *not* preserved. Runs af with maximum 'max' concurrency. max can be an integer or a function returning an integer (allowing dynamic concurrency control). close?, default true, controls whether the output channel is closed upon completion"
af that I pass in, is https://clojurians.slack.com/archives/C03S1KBA2/p1589230848408600
which uses a CompletableFuture internally, because that's what java offers you when shelling out
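A sketch of bridging such a CompletableFuture onto a channel (illustrative names, not the code linked above; `Process/onExit` is the Java 9+ API that hands you a CompletableFuture when shelling out):

```clojure
(require '[clojure.core.async :as async :refer [chan put! close! <!!]])
(import '(java.util.concurrent CompletableFuture)
        '(java.util.function BiConsumer))

(defn cf->chan
  "Return a channel that yields the future's result and then closes;
  on exception the channel simply closes."
  [^CompletableFuture cf]
  (let [out (chan 1)]
    (.whenComplete cf
                   (reify BiConsumer
                     (accept [_ v _ex]
                       (when (some? v) (put! out v))
                       (close! out))))
    out))

(<!! (cf->chan (CompletableFuture/completedFuture :done)))  ; => :done
```

The callback fires on whatever thread completes the future, so no core.async pool thread is ever tied up waiting on the process.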
why not use clojure.core.async/pipeline-async? 1) I don't care about preserving input order 2) there is a tiny bug (maybe?) in pipeline-async's concurrency limit, where you might get N+2 tasks in flight
the limiter looks very similar to `pmax` in here https://stuartsierra.com/2013/12/08/parallel-processing-with-core-async
the key here is not caring about anything about `af` besides that it takes an input and returns you a channel with the eventual result
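A sketch close in spirit to that `pmax` (see the linked post for the original); all it assumes about `af` is input in, channel of one eventual result out:

```clojure
(require '[clojure.core.async :as async
           :refer [chan go go-loop >! <!! alts! close!]])

(defn limit-concurrency
  "Run af (input -> channel of one result) over values from in with at
  most max in flight; unordered results go to out, which closes when
  the input is drained and every task has finished."
  [max af in out]
  (go-loop [tasks #{in}]
    (if (seq tasks)
      (let [[v ch] (alts! (vec tasks))]
        (if (= ch in)
          (if (nil? v)
            (recur (disj tasks in))                 ; input closed
            (recur (conj (if (= max (count tasks))  ; at the limit:
                           (disj tasks in)          ; pause reading input
                           tasks)
                         (af v))))
          (do (when (some? v) (>! out v))           ; a task finished
              (recur (-> tasks (disj ch) (conj in))))))
      (close! out))))

(let [in  (chan)
      out (chan)]
  (async/onto-chan! in [1 2 3 4 5])                 ; feed and close input
  (limit-concurrency 2 (fn [x] (go (* x 10))) in out)
  (<!! (async/into #{} out)))
;; => #{10 20 30 40 50}
```

The `alts!` over the input channel plus the in-flight task channels is what enforces the limit: while `max` tasks are running, the input channel is dropped from the set, so no new work is pulled until something completes.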