
Hey team, I created a quick “delay” fn.

(import '(java.util Timer TimerTask))

(defn delay-f
  [f delay-ms]
  (let [timer (Timer.)
        task (atom nil)
        latest-batch (atom [])]
    (fn [& args]
      (swap! latest-batch conj args)
      (when-not @task
        (let [new-task (proxy [TimerTask] []
                         (run []
                           (f @latest-batch)
                           (reset! task nil)
                           (reset! latest-batch [])
                           (.purge timer)))]
          (reset! task new-task)
          (.schedule timer new-task delay-ms))))))
The idea is that, given a delay in ms, I batch the calls to f within that window and send them all out at once. I'm a bit nervous about my use of atoms and potential race conditions. If y'all have thoughts, I'd appreciate it!


Yeah there’s a race condition in it.


You have to lock around something.


Or use a single atom


And don't use reset!


I worry I can’t rely on atom’s swap! as f could have side-effects


A common pattern is to create a delay that does side effects, then swap in the delay if needed, and force whatever delay is there (the new one if swapped, the old one if not)
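A minimal sketch of that pattern, with hypothetical names (`pending`, `run-once!`), assuming a single atom holding either nil or a delay:

```clojure
;; Wrap the side effect in a delay, swap it in only if nothing is
;; pending, then force whatever delay is actually in the atom (the
;; new one if the swap installed it, the old one if not). `delay`
;; guarantees the body runs at most once even if forced concurrently.
(def pending (atom nil))

(defn run-once! [thunk]
  (let [d (delay (thunk))]
    (force (swap! pending #(or % d)))))
```

The swap function `#(or % d)` is pure, so it's safe to retry; the side effect lives inside the delay, not inside swap!.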


(import '(java.util Timer TimerTask))

(defn delay-f
  [f delay-ms]
  (let [timer        (Timer.)
        lock         (Object.)
        task         (volatile! nil)
        latest-batch (volatile! [])]
    (fn [& args]
      (locking lock
        (vswap! latest-batch conj args)
        (when-not @task
          (let [new-task (proxy [TimerTask] []
                           (run []
                             (locking lock
                               (f @latest-batch)
                               (vreset! task nil)
                               (vreset! latest-batch [])
                               (.purge timer))))]
            (vreset! task new-task)
            (.schedule timer new-task delay-ms)))))))


^just so you know how the lock can work


Fantastic, thank you @U3JH98J4R! One noob question: here, do you use volatile! strictly because it's faster than an atom, or for some other reason? Mainly, my assumption is that if I were to replace the volatiles there with atoms, things would still work because of the lock. If that's not the case, I'd love to know, to get a better grasp of how things work.


Things would still work, yeah. Volatile would just be faster since the lock makes CAS redundant


But I recommend nitro cold brew coffee, around 2 Liters, if you want to dive into JVM concurrency and properly understand volatile

😆 1

I'm not sure I do 100%


Awesome! thanks @U3JH98J4R — learning bit by bit!


One more noob question: would we need to add one more (locking inside the TimerTask? (Reason being that it does vreset! outside the initial lock.)


Timer is for one-off timers. DelayQueue is for delays in an ongoing stream of work.


oo. will look deeper, thanks!


(This seems more the latter, but I’m not exactly sure.)


a DelayQueue would still need a worker thread to service it; something like a scheduled executor bundles a delay queue with a thread pool


Timer is just kind of old fashioned, it predates the existence of java.util.concurrent
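A hypothetical sketch of the same batching on top of a ScheduledExecutorService, which (as noted above) bundles a delay queue with a thread pool; `delay-f-scheduled`, `pending?`, and `batch` are made-up names, not from the thread:

```clojure
(import '(java.util.concurrent Executors ThreadFactory TimeUnit))

(defn delay-f-scheduled
  [f delay-ms]
  (let [scheduler (Executors/newSingleThreadScheduledExecutor
                    (reify ThreadFactory   ; daemon thread so the JVM can exit
                      (newThread [_ r]
                        (doto (Thread. ^Runnable r)
                          (.setDaemon true)))))
        lock     (Object.)
        pending? (volatile! false)
        batch    (volatile! [])]
    (fn [& args]
      (locking lock
        (vswap! batch conj args)
        (when-not @pending?
          (vreset! pending? true)
          (.schedule scheduler
                     ^Runnable
                     (fn []
                       ;; take the batch under the lock, call f outside it
                       (let [b (locking lock
                                 (let [b @batch]
                                   (vreset! batch [])
                                   (vreset! pending? false)
                                   b))]
                         (f b)))
                     (long delay-ms)
                     TimeUnit/MILLISECONDS))))))
```

One design difference from the Timer version: here f runs outside the lock, so a slow f can't block callers that are adding to the next batch.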

Ben Sless 17:10:38

I'd also add an overflow condition, this system might leak memory. Also, if you close over the parameters you can retain the heads of lots of lazy sequences
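One way to sketch the overflow condition suggested above: flush as soon as the batch hits a cap instead of waiting for the timer, and realize the args with `vec` so no lazy-seq heads are retained. `delay-f-bounded` and `max-batch` are hypothetical names, and this is a sketch, not a hardened implementation:

```clojure
(import '(java.util Timer TimerTask))

(defn delay-f-bounded
  [f delay-ms max-batch]
  (let [timer  (Timer. true)   ; daemon timer thread
        lock   (Object.)
        batch  (volatile! [])
        task   (volatile! nil)
        flush! (fn []
                 (locking lock
                   (let [b @batch]
                     (vreset! batch [])
                     (when-let [t @task]
                       (.cancel ^TimerTask t))
                     (vreset! task nil)
                     (when (seq b)
                       (f b)))))]
    (fn [& args]
      (locking lock
        ;; (vec args) realizes the arg seq, avoiding held lazy heads
        (vswap! batch conj (vec args))
        (if (>= (count @batch) max-batch)
          (flush!)   ; overflow: flush immediately, don't wait for the timer
          (when-not @task
            (let [t (proxy [TimerTask] []
                      (run [] (flush!)))]
              (vreset! task t)
              (.schedule timer t (long delay-ms)))))))))
```

Cancelling the pending TimerTask on an early flush keeps a stale timer firing from re-sending; if one slips through anyway, flushing an empty batch is a no-op.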

❤️ 1