2021-11-01
Channels
- # announcements (3)
- # babashka (20)
- # beginners (77)
- # calva (27)
- # cider (5)
- # clara (3)
- # clj-kondo (9)
- # cljs-dev (4)
- # cljsrn (5)
- # clojure (26)
- # clojure-europe (32)
- # clojure-italy (5)
- # clojure-nl (3)
- # clojure-uk (5)
- # clojurescript (25)
- # clojureverse-ops (4)
- # core-async (49)
- # cursive (15)
- # data-science (1)
- # datahike (4)
- # datomic (3)
- # docker (1)
- # events (1)
- # helix (5)
- # holy-lambda (3)
- # introduce-yourself (1)
- # jobs (1)
- # kaocha (2)
- # lsp (15)
- # malli (42)
- # off-topic (18)
- # pathom (18)
- # pedestal (12)
- # polylith (7)
- # rdf (1)
- # re-frame (22)
- # reitit (2)
- # releases (1)
- # remote-jobs (1)
- # rewrite-clj (33)
- # shadow-cljs (85)
- # spacemacs (3)
- # vim (12)
- # xtdb (29)
Morning!
gmawning!
Sorry for being a bit off topic, but I’m looking for either a word or a solution:
So, let’s say I have a slow, somewhat pure function slow which might receive multiple calls with the same arguments within a short timespan (in multiple threads, like a webserver), and I don’t want to do the slow stuff multiple times.
The obvious solution, if the calls came in with an interval longer than the execution time of slow, would be some sort of memoize, but here I want to “park” the calls after the first, until the first calculation is done, and then return the result of the calculation to all of the “parked” callers.
So, does anyone know what this would be called, and even better, whether there is a clojure lib/fn that implements this?
Memoize + future?
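A minimal in-process sketch of that memoize-plus-future idea (a hypothetical coalescing-memoize helper, not an existing lib): cache one delay per argument list, so concurrent callers with the same args all park on the same computation. A delay is used instead of a bare future so a retried swap! can’t kick off duplicate work; there is no eviction here, just the parking behaviour.

(defn coalescing-memoize [f]
  (let [cache (atom {})]                          ; args -> delay of result
    (fn [& args]
      @(get (swap! cache
                   (fn [c]
                     (if (contains? c args)
                       c
                       (assoc c args (delay (apply f args))))))
            args))))

;; usage: (def slow-cached (coalescing-memoize slow)) - the first caller runs
;; slow, later callers with the same args block until that delay is realised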
don't have a lib for it, but have used a combination of agents and manifold/deferred for this... each requestor gets a deferred of the value, and an agent send fn either just fulfils the deferred with the already existing value or creates it on the agent thread first
same would work with core.async channels if that's your preferred async mechanism
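Roughly what that agent-plus-deferred shape could look like with a plain Clojure promise standing in for a manifold deferred (slow* and the other names here are made up for illustration):

(def slow-cache (agent {}))                       ; args -> computed value

(defn request-slow [& args]
  (let [p (promise)]
    (send-off slow-cache
              (fn [cache]
                (if (contains? cache args)
                  (do (deliver p (get cache args)) cache)   ; already computed, just fulfil
                  (let [v (apply slow* args)]               ; compute on the agent thread
                    (deliver p v)
                    (assoc cache args v)))))
    p))                                           ; each requestor derefs its promise

send-off rather than send because the work blocks; a second request for the same args queues behind the first and is fulfilled straight from the cache once it runs.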
There is of course a complicating factor here: this is distributed over several api-nodes, so futures as such won’t handle it alone. We have redis as our inter-api cache thingy. I sort of have an idea of sticking something in redis, like pending, and then polling on that value and returning when it changes.
hmm - but you have to deal with failure then - if the first node sees nothing in redis, sets pending, but then dies, other nodes will see pending forever, unless you get more complicated
we have one similar situation where the slow objects are reasonably long-lived, and i chose to create one per process, just to avoid any IPC
otherwise, redis has transactions doesn't it - you could use one of those ?
although i have no idea how redis transactions are implemented (i.e. will a lock prevent other transactions proceeding until one is complete, which is what you would probably want, or will all transactions proceed but only one will commit), so that might not help
I have implemented something similar in Node.js with promises, but we only had one instance (backed by Redis). So the first request for X would create and locally cache a promise for its value, which all other requests for X would get. Eventually the promise gets fulfilled or times out (and is retried). With multiple nodes you either live with the request happening up to #-of-nodes times in parallel, or you need, I guess, some central coordinator. As suggested, perhaps Redis can help? I know it has time-to-live, so that can work as a safety mechanism to implement timeouts.
Here’s my implementation:
;; assumes the usual aliases: [clojure.core.async :as async], [taoensso.carmine :as car]

(defn- wait-until-result [redis k]
  ;; poll redis until the key is no longer "pending"; returns a channel
  ;; that eventually yields the result
  (async/go-loop []
    (let [result (car/wcar redis (car/get k))]
      (if (not= "pending" result)
        result
        (do (async/<! (async/timeout 100))
            (recur))))))

(defn set-if-empty [redis k v]
  ;; atomically set k to v (with a 30s TTL) only if it doesn't already exist;
  ;; returns the existing value, or nil if we just claimed the key
  (car/wcar redis
    (car/lua "if redis.call('exists', _:k) == 1 then return redis.call('get', _:k) else redis.call('setex', _:k, _:ttl, _:v); return nil; end"
             {:k k}
             {:v v
              :ttl 30})))

(defn do-heavy-work!
  [{:keys [redis] :as snapshot-service} ctx datasource]
  (let [result (set-if-empty redis datasource "pending")]
    (cond
      ;; nobody has started yet: we claimed the key, so do the work and publish it
      (nil? result) (let [result (do-heavy-work* datasource)]
                      (car/wcar redis (car/setex datasource 30 result))
                      result)
      ;; someone else is computing: returns a channel with the eventual result
      (= "pending" result) (wait-until-result redis datasource)
      ;; already computed: return the cached value
      :else result)))
So basically, it’s a distributed promise, but I’d love to get a more computer-sciency name for it.
sorry if this is too late, now you've solved your problem, @U04V5VAUN ... but there's https://github.com/clojure/core.memoize which is built on https://github.com/clojure/core.cache which would let you store args + results in redis. I've only used it for in memory memoization, but I think some people have done that. I know dpsutton has a related blog post that I've found useful : https://dev.to/dpsutton/exploring-the-core-cache-api-57al ... hope this is helpful in some way ... 😉
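For reference, a bare-bones in-memory use of core.memoize’s TTL cache might look like this (slow* is a placeholder for the real function; sharing the cache across nodes would additionally need a Redis-backed core.cache implementation, which isn’t shown):

(require '[clojure.core.memoize :as memo])

(def slow-cached
  (memo/ttl slow* :ttl/threshold 30000))          ; cache each result for 30s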
Here’s a question… I don’t want an API action to block the page load. If I fire it as a promise or a future I have to handle its dereferenced state eventually; I can’t fire and forget.
do you mean you want to fire+forget an action which will not result in any output to the page, so you don't want to wait on it ?
you can use a promise or future - just wrap it in a catch, so you don't get any unhandled exceptions, and forget about the result
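A minimal sketch of that fire-and-forget suggestion; do-api-action! is a placeholder for the real call:

(defn fire-and-forget! []
  (future
    (try
      (do-api-action!)
      (catch Exception e
        ;; swallow or log - the caller never derefs this future
        (println "api action failed:" (.getMessage e))))))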
Thank you @mccraigmccraig appreciate it. Hope you are keeping the best. We must get a catch up sometime.
The more I think about it, I’ve come across this before with @mccraigmccraig