#code-reviews
2021-10-22
stopa 15:10:26

Hey team, I created a quick “delay” fn.

(import '(java.util Timer TimerTask))

(defn delay-f
  [f delay-ms]
  (let [timer        (Timer.)
        task         (atom nil)
        latest-batch (atom [])]
    (fn [& args]
      (swap! latest-batch conj args)
      (when-not @task
        (let [new-task (proxy [TimerTask] []
                         (run []
                           (f @latest-batch)
                           (reset! task nil)
                           (reset! latest-batch [])
                           (.purge timer)))]
          (reset! task new-task)
          (.schedule timer new-task delay-ms))))))
The idea is: given a delay in ms, calls to f within that window get batched, and the whole batch is sent out at once. I'm a bit nervous about my use of atoms and potential race conditions. If y'all have thoughts, I'd appreciate it!
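To see the batching behavior concretely, here is a runnable sketch: the fn from above, repeated with its java.util imports so the snippet runs on its own, plus a hypothetical usage where prn stands in for a real handler.

```clojure
(import '(java.util Timer TimerTask))

;; the delay-f from above, repeated so this snippet is self-contained
(defn delay-f
  [f delay-ms]
  (let [timer        (Timer.)
        task         (atom nil)
        latest-batch (atom [])]
    (fn [& args]
      (swap! latest-batch conj args)
      (when-not @task
        (let [new-task (proxy [TimerTask] []
                         (run []
                           (f @latest-batch)
                           (reset! task nil)
                           (reset! latest-batch [])
                           (.purge timer)))]
          (reset! task new-task)
          (.schedule timer new-task (long delay-ms)))))))

;; hypothetical usage: batch all calls made within a 50 ms window
(def debounced (delay-f prn 50))
(debounced 1)
(debounced 2)
(debounced 3)
;; ~50 ms later, f receives the whole batch of arg-lists, e.g. [(1) (2) (3)]
```

Note that f receives a vector of arg-lists (one list per call), not the flattened args.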

potetm 15:10:23

Yeah there’s a race condition in it.

potetm 15:10:57

You have to lock around something.

hiredman 15:10:03

Or use a single atom

hiredman 15:10:21

And don't use reset!

stopa 15:10:51

I worry I can't rely on atom's swap!, since swap! can retry its function and f could have side effects

hiredman 15:10:17

A common pattern is to create a delay that does side effects, then swap in the delay if needed, and force whatever delay is there (the new one if swapped, the old one if not)
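A sketch of what I take that pattern to mean (the names fire-once! and pending are mine, not from the thread): the side effect goes in a delay's body, swap! installs it only if nothing is pending, and force runs whichever delay won, at most once, even if swap! retries.

```clojure
;; hypothetical sketch of the delay + swap! + force pattern
(defn fire-once!
  "Run effect at most once, no matter how many threads race through here."
  [pending effect]
  (let [d      (delay (effect))                       ; side effect, deferred
        stored (swap! pending (fn [cur] (or cur d)))] ; keep any existing delay
    (force stored)))                                  ; run the installed delay once

;; usage: ten racing calls, the effect still runs exactly once
(def pending (atom nil))
(def hits (atom 0))
(dorun (pmap (fn [_] (fire-once! pending #(swap! hits inc))) (range 10)))
@hits  ; => 1
```

This works because swap! may retry the pure (or cur d) step, but the effect itself only runs inside force, and a delay's body runs at most once.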

emccue 17:10:00

(defn delay-f
  [f delay-ms]
  (let [timer        (Timer.)
        lock         (Object.)
        task         (volatile! nil)
        latest-batch (volatile! [])]
    (fn [& args]
      (locking lock
        (vswap! latest-batch conj args)
        (when-not @task
          (let [new-task (proxy [TimerTask] []
                           (run []
                             (locking lock
                               (f @latest-batch)
                               (vreset! task nil)
                               (vreset! latest-batch [])
                               (.purge timer))))]
            (vreset! task new-task)
            (.schedule timer new-task delay-ms)))))))

emccue 17:10:50

^just so you know how the lock can work

stopa 18:10:46

Fantastic — thank you @U3JH98J4R One noob question: here, do you use volatile! strictly because it's faster than an atom, or for some other reason? My assumption is that if I replaced the volatiles with atoms, things would still work because of the lock. If that's not the case, I'd love to know, to get a better grasp of how things work

emccue 20:10:08

Things would still work, yeah. Volatile would just be faster since the lock makes CAS redundant

emccue 20:10:00

But I recommend nitro cold brew coffee, around 2 liters, if you want to dive into JVM concurrency and properly understand volatile

😆 1
emccue 20:10:10

I'm not sure I do 100%

stopa 20:10:15

Awesome! Thanks @U3JH98J4R — learning bit by bit!

stopa 21:10:44

One more noob question: would we need one more (locking ...) inside the TimerTask? (Reason being that its run body does vreset! outside of the initial lock.)

potetm 15:10:35

Timer is for one-off timers. DelayQueue is for delays in an ongoing stream of work.

stopa 15:10:58

oo. will look deeper, thanks!

potetm 15:10:58

(This seems more the latter, but I’m not exactly sure.)

hiredman 16:10:45

A DelayQueue would still need a worker thread to service it; something like a ScheduledExecutorService bundles a delay queue with a thread pool
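For reference, a rough sketch of the same batching debounce on top of a ScheduledExecutorService, which pairs a delay queue with a thread pool as described above. The name delay-f2 and this particular structure are mine, not from the thread; the locking scheme mirrors the Timer version.

```clojure
(import '(java.util.concurrent Executors ScheduledExecutorService TimeUnit))

;; hypothetical sketch: same batching idea, scheduled on a
;; single-threaded ScheduledExecutorService instead of a Timer
(defn delay-f2
  [f delay-ms]
  (let [^ScheduledExecutorService exec
        (Executors/newSingleThreadScheduledExecutor)
        lock         (Object.)
        scheduled?   (volatile! false)
        latest-batch (volatile! [])]
    (fn [& args]
      (locking lock
        (vswap! latest-batch conj args)
        (when-not @scheduled?
          (vreset! scheduled? true)
          (let [flush! (fn []
                         (locking lock
                           (f @latest-batch)
                           (vreset! scheduled? false)
                           (vreset! latest-batch [])))]
            (.schedule exec ^Runnable flush!
                       (long delay-ms) TimeUnit/MILLISECONDS)))))))
```

The ^Runnable hint disambiguates between the Runnable and Callable overloads of .schedule, since Clojure fns implement both.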

hiredman 16:10:42

Timer is just kind of old-fashioned; it predates the existence of java.util.concurrent

Ben Sless 17:10:38

I'd also add an overflow condition; this system might leak memory. Also, since you close over the arguments, you can end up retaining the heads of lots of lazy sequences

❤️ 1
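One hypothetical way to add the overflow guard suggested above: bound the batch and drop calls past the cap. The max-batch name and the drop-on-overflow policy are mine; flushing early when full would be the other option.

```clojure
;; hypothetical sketch: a bounded conj, so a burst of calls can't grow
;; the batch (and everything its arg-lists retain) without limit
(def max-batch 1024)

(defn conj-bounded [batch args]
  (if (< (count batch) max-batch)
    (conj batch args)
    batch))  ; overflow: drop the call (flushing early is the alternative)

;; in the debouncer, (swap! latest-batch conj args)
;; would become    (swap! latest-batch conj-bounded args)
```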