Jonas Svalin 12:07:04

Hi 👋 I’m working on an API which will have a long-running process, but should respond immediately upon receiving the request. If we pretend that the resource is called operations, the flow would be something like:

1. POST /operations
2. Async process to perform the operation is kicked off
3. Responds with 201 and { status: "inProgress" }
4. GET /operations/1 eventually responds with { status: "completed" }

How would I best achieve this? Is it okay to just use thread like (thread (perform-operation…)) and respond immediately, or can that cause issues? The docs for thread state:

"Executes the body in another thread, returning immediately to the
  calling thread. Returns a channel which will receive the result of
  the body when completed, then close."
If I don’t handle the returned channel, will that cause a memory leak or will it be cleaned up after it completes?
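For context, a minimal sketch of the thread-per-request approach being asked about (handler shape, perform-operation, and the operations atom are illustrative names, not from this thread). As to the leak question: the channel returned by thread buffers its single result, so the body never blocks on delivery, and once nothing references the channel it is garbage-collected like any other object; the main cost of ignoring it is that errors in the body go unobserved.

```clojure
(require '[clojure.core.async :refer [thread]])

;; Hypothetical in-memory store of operation statuses.
(defonce operations (atom {}))

(defn perform-operation [op-id]
  ;; Stand-in for the long-running work.
  (Thread/sleep 5000)
  (swap! operations assoc-in [op-id :status] "completed"))

(defn create-operation-handler [_request]
  (let [op-id (str (java.util.UUID/randomUUID))]
    (swap! operations assoc op-id {:status "inProgress"})
    ;; thread returns a channel carrying the body's result;
    ;; if it is never read, the channel is simply GC'd later.
    (thread (perform-operation op-id))
    {:status 201 :body {:id op-id :status "inProgress"}}))
```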


I would put! that request on a shared channel instead, so that you get backpressure via the buffer, and spawn some number of threads that <!! from that channel.
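A sketch of that suggested pattern, assuming illustrative names (work-ch, start-workers!, handle-op): a bounded channel holds queued work, and a fixed pool of threads consumes from it with <!!.

```clojure
(require '[clojure.core.async :refer [chan thread <!!]])

;; Buffer size bounds how many requests can be queued.
(def work-ch (chan 100))

(defn start-workers! [n handle-op]
  (dotimes [_ n]
    (thread
      (loop []
        ;; <!! blocks until work arrives; returns nil once the
        ;; channel is closed and drained, which stops the worker.
        (when-let [op (<!! work-ch)]
          (handle-op op)
          (recur))))))
```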


put! is async; it does not “exert” backpressure per se



(put! port val)
(put! port val fn1)
(put! port val fn1 on-caller?)

Asynchronously puts a val into port, calling fn1 (if supplied) when
 complete, passing false iff port is already closed. nil values are
 not allowed. If on-caller? (default true) is true, and the put is
 immediately accepted, will call fn1 on calling thread.  Returns
 true unless port is already closed.


If you want backpressure by default (JVM-only), perhaps >!! can be used.


I didn't say that put! does "exert" backpressure. The channel is for backpressure (there is no unbounded chan). put! is actually also bounded, but it doesn't block - it will throw an exception if too many put!s are pending (more than 1000, I think). If you have a web server with a thread pool of size 100, then 100 pending >!! requests will make the whole server unusable, i.e. all other endpoints will be blocked as well. With put! it won't. And you can handle the error and return a message to the caller that too many requests are pending.
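A sketch of handling that error case (handler names and status codes are illustrative): put! never blocks the calling thread, and when the pending-put limit (1024 in current core.async) is exceeded it throws an AssertionError, which can be turned into a "too busy" response instead of stalling the server.

```clojure
(require '[clojure.core.async :refer [chan put!]])

(def work-ch (chan 100))

(defn enqueue-operation [op]
  (try
    (if (put! work-ch op)          ; true unless the channel is closed
      {:status 201 :body {:status "inProgress"}}
      {:status 503 :body {:error "service shutting down"}})
    (catch AssertionError _
      ;; Thrown once pending put!s exceed the channel's limit.
      {:status 503 :body {:error "too many pending requests"}})))
```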

👍 2
Jonas Svalin 07:07:16

@U011LEAEURE Thanks for the input, I will try that out!

Jonas Svalin 09:07:35

@U011LEAEURE Just as a side question: if I have full control over how many requests happen at any point in time, is there a scenario where I can skip the thread pool and channel approach and just spawn new threads every time (for simplicity's sake)? The argument being that I know by design it will never exceed X concurrent threads.


@U01SV1FBY6A Sure. But then you probably don't need core.async, just use futures.
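A sketch of the future-based alternative under that assumption (handler shape and names are illustrative): futures run on an unbounded cached thread pool, so this only stays safe if the concurrency is bounded by design, as described above.

```clojure
;; Hypothetical in-memory store of operation statuses.
(defonce operations (atom {}))

(defn create-operation-handler [perform-operation]
  (let [op-id (str (java.util.UUID/randomUUID))]
    (swap! operations assoc op-id {:status "inProgress"})
    ;; future returns immediately; the body runs on Clojure's
    ;; agent send-off pool.
    (future
      (perform-operation op-id)
      (swap! operations assoc op-id {:status "completed"}))
    {:status 201 :body {:id op-id :status "inProgress"}}))
```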

👌 2