#core-async
2019-07-11
souenzzo00:07:17

Are there any docs/references about how to "async-ify" a library like clojure.jdbc? I know that I need to create a worker pool, insert work into a queue, and wait on a channel. But I'm not sure about the wait part (who will give me a channel?)

noisesmith00:07:14

a decent pattern is to use one channel, with N threads consuming it, each running a function that takes two items: something describing the work, and a channel on which to put the result

noisesmith00:07:31

the N decides your maximum parallelization

noisesmith00:07:55

you don't need a separate queue - a channel is already a queue

noisesmith00:07:59

any consumer that wants a result would write a message tuple [params, c], then wait for a result on c
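The pattern described above might look like the following sketch. The names (`start-workers!`, `submit!`) and the fake query function are illustrative, not from any library:

```clojure
(require '[clojure.core.async :as a])

(defn start-workers!
  "Spin up n threads that read [params reply-chan] tuples from work-chan,
   apply f (a blocking function) to params, and put the result on reply-chan."
  [work-chan f n]
  (dotimes [_ n]
    (a/thread
      (loop []
        ;; a closed work-chan yields nil, which shuts the worker down
        (when-let [[params reply] (a/<!! work-chan)]
          (a/>!! reply (f params))
          (a/close! reply)
          (recur))))))

(defn submit!
  "Enqueue params and return a channel that will yield the result."
  [work-chan params]
  (let [reply (a/chan 1)]
    (a/put! work-chan [params reply])
    reply))

;; usage: 4 workers wrapping a pretend blocking query
(def work (a/chan 16))
(start-workers! work (fn [q] (str "result for " q)) 4)
(a/<!! (submit! work "select 1"))  ; => "result for select 1"
```

Closing the reply channel after the single result lets callers distinguish "one result, then done" from a worker that is still busy.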

hiredman00:07:06

You can also use async/thread or pipeline-blocking for all your IO; you don't need to set up your own threadpool
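For example, the pipeline-blocking suggestion might look like this sketch, where `slow-query` is a stand-in for a real blocking JDBC call:

```clojure
(require '[clojure.core.async :as a])

(defn slow-query [id]
  (Thread/sleep 20)                 ; pretend this is blocking I/O
  {:id id})

(let [in  (a/to-chan (range 10))    ; feed 10 "queries"
      out (a/chan 10)]
  ;; at most 4 blocking calls run at once, on threads managed for you
  (a/pipeline-blocking 4 out (map slow-query) in)
  (a/<!! (a/into [] out)))
;; => [{:id 0} {:id 1} ... {:id 9}]  (pipeline preserves input order)
```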

souenzzo00:07:56

async/thread is nice! I will use it ASAP

Yehonathan Sharvit13:07:08

Hi there, a question related to the performance of go processes. I am executing similar code in a thread vs. a go process and it seems that the go process takes more time. Here is my code:

(require '[clojure.core.async :refer [chan put! thread go <! <!!]])

(do
  (let [c (chan)
        _ (put! c 2)
        t (. System (nanoTime))]
    (thread
      (let [val (<!! c)
            elapsed (- (. System (nanoTime)) t)]
        (println (str "taken value, delivery time with thread -- " val " time: " (float (/ elapsed 1e6)) "ms" "\n")))))

  (let [c (chan)
        _ (put! c 1)
        t (. System (nanoTime))]
    (go (let [val (<! c)
              elapsed (- (. System (nanoTime)) t)]
          (println (str "taken value, delivery time with go: -- " val " time: " (float (/ elapsed 1e6)) "ms" "\n"))))))
The output is:
taken value, delivery time with thread -- 2 time: 0.228456ms
taken value, delivery time with go: -- 1 time: 1.516011ms
I was expecting go processes to run faster than threads.

mpenet13:07:47

a go block is nothing more than a "coordination block"; in the end it all runs in threads, it's not a lightweight thread per se

mpenet13:07:06

thread has no bookkeeping to do about parking put/take, it just runs the body in a thread, returns a chan with the result and bye.

ghadi14:07:11

@viebel the benchmark is flawed to the point of not telling you anything.
> expecting go processes to run faster than threads.
Why?

Yehonathan Sharvit14:07:46

because go processes are kind of lightweight threads

ghadi14:07:18

they still run within threads

ghadi14:07:34

they have to do more work to suspend and resume

mpenet14:07:37

it's just a coordination primitive that can run blocking stuff (wrapped in chan put/takes) on a fixed threadpool, like in go

Yehonathan Sharvit14:07:47

Within threads, yes but no need to create a new thread each time

ghadi14:07:58

you can run more of them than threads

mpenet14:07:04

if you have 1 blocking task and nothing else, for sure it will be slower

ghadi14:07:07

you can run 2 million go blocks, easy

ghadi14:07:13

can't do that with threads
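A sketch of that claim. It is kept to 1,000 blocks here because a single channel allows at most 1024 pending takes; with one channel per block you could go far higher:

```clojure
(require '[clojure.core.async :as a])

(let [n    1000                 ; well past the 8 dispatch threads
      gate (a/chan)
      done (atom 0)]
  (dotimes [_ n]
    (a/go (a/<! gate)           ; every block parks here, holding no thread
          (swap! done inc)))
  (a/close! gate)               ; closed channel: all parked takes get nil
  (Thread/sleep 200)            ; give the dispatch pool time to drain
  @done)
;; => 1000
```

Doing the same with 1,000 dedicated OS threads would cost roughly a megabyte of stack each; the parked go blocks are just small callbacks.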

ghadi14:07:23

but pound-for-pound, go blocks will be slower

alexmiller16:07:52

Pounds are not a unit of speed :)

ghadi14:07:12

your benchmark isn't actually doing much work, and it's incorrectly calling blocking I/O (`println`) within the go block

ghadi14:07:54

blocking that isn't channel operations is not recommended because it can cause deadlocks
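A common remedy, sketched here with a hypothetical `blocking-op` standing in for println/file/DB work: hand the blocking call to `thread` and only park inside the go block.

```clojure
(require '[clojure.core.async :as a])

(defn blocking-op []              ; stand-in for blocking I/O
  (Thread/sleep 10)
  :done)

;; Risky: (a/go (blocking-op)) would pin one of the 8 dispatch threads.
;; Better: run the blocking work on `thread`, park on its result channel.
(a/<!! (a/go
         (a/<! (a/thread (blocking-op)))))
;; => :done
```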

ghadi14:07:16

try some benchmarks that exercise more than 20000 processes

ghadi14:07:24

and things that have more operations per process: put, take, especially with unbuffered channels where they might not be available to proceed

Yehonathan Sharvit15:07:57

something is not clear to me: does a go block create a new thread or does it use a thread that is already ready to run?

Yehonathan Sharvit15:07:30

Let’s say I create 10 go blocks sequentially, will the thread be reused or will it be recreated?

markmarkmark15:07:30

core async dispatches its work on a thread pool that defaults to 8 threads.

markmarkmark15:07:52

if you created 10 go blocks sequentially it would likely create all 8 threads and would then begin reusing them as needed

Yehonathan Sharvit17:07:07

And between two go block executions, the thread from the thread pool is not kept alive. Right?

noisesmith17:07:27

it's in a waiting state where the vm won't try to execute it between running tasks

noisesmith17:07:42

it's still allocated, it just doesn't try to run

noisesmith17:07:03

(it might be more a question of whether the underlying OS tries to wake it up at that point, but regardless the execution dispatcher knows it's not ready to do anything)

Yehonathan Sharvit17:07:34

Is running a thread from a waiting state cheaper than creating a new thread or is it the same?

noisesmith17:07:54

it's cheaper, this is why we have thread pools - allocation and initialization aren't free

markmarkmark17:07:14

the threads in the core async dispatch pool are never destroyed once they've been created

markmarkmark17:07:24

unless there's an exception

markmarkmark17:07:32

in which case a new one will be created if it's needed

Yehonathan Sharvit17:07:40

Now I am back to my original question: if go blocks are cheaper to create than threads, why is running go blocks sequentially not faster than running threads sequentially?

noisesmith17:07:05

because creating the thread isn't the expensive part of running a task on a thread

noisesmith17:07:14

(or at least it shouldn't be)

noisesmith17:07:10

what core.async improves is the time it takes to context switch (thanks to reusing small callbacks on a small thread pool), and the correctness of code that coordinates light weight channel ops

noisesmith17:07:02

but if all you are doing is a sequence of actions (IO or CPU) and you aren't juggling async results, core.async can only match regular perf at best, and likely slows things down

markmarkmark17:07:15

@viebel in your example that you pasted earlier, what is the thread that is used? the core async thread macro?

Yehonathan Sharvit17:07:42

I have improved my example. Now it’s

(let [iterations 10000]
    [(let [t (. System (nanoTime))]
        (dotimes [_ iterations]
          (let [c (chan)
                _ (put! c 1)]
            (deref (future (<!! c)))))
        (let [elapsed (- (. System (nanoTime)) t)]
          (str (float  (/ elapsed 1e6)) "ms")))

     (let [t (. System (nanoTime))]
       (dotimes [_ iterations]
         (let [c (chan)
               _ (put! c 1)]
           (<!! (go (<! c)))))
       (let [elapsed (- (. System (nanoTime)) t)]
         (str (float  (/ elapsed 1e6)) "ms")))])

Yehonathan Sharvit17:07:57

It’s go vs future

ghadi18:07:05

sorry, this is still misconceived

Yehonathan Sharvit18:07:06

On my MacBook Pro, the result is:

["191.04631ms" "211.95337ms"]

Yehonathan Sharvit18:07:13

future is faster than go

Yehonathan Sharvit18:07:32

why is it misconceived @ghadi ?

ghadi18:07:30

For one, no real world task looks like this

ghadi18:07:49

deriving conclusions like "x is faster" isn't useful

ghadi18:07:34

none of the channels block on either test, they're already ready to read

ghadi18:07:18

I don't know what the goal is here

ghadi18:07:36

the same code in go vs ordinary block is going to be slower

ghadi18:07:04

it's clear if you macroexpand the go block that there are (small) costs to the go machinery itself

ghadi18:07:38

But, if a channel blocks in a thread or future, that thread is useless until the channel yields

ghadi18:07:45

not so with a go block

ghadi18:07:02

the thread that the go block is running upon will start running some other go block

ghadi18:07:22

The benchmark also waits for the go block to finish before scheduling another go block

ghadi18:07:34

which defeats the advantage

Yehonathan Sharvit18:07:20

@ghadi I understand that the real value of go blocks is when you run them in parallel. I brought up this sequential example in order to help understand how things work

Yehonathan Sharvit18:07:48

When I say future is faster than go, I don’t mean to “blame” go

Yehonathan Sharvit18:07:30

I was assuming that creating a thread was an expensive OS operation, and that’s why I thought go blocks would be faster than futures even in the sequential case

Yehonathan Sharvit18:07:46

I still don’t get it 100%

noisesmith18:07:03

future also uses a thread pool

Yehonathan Sharvit19:07:43

What about core.async thread? Does it also use a thread pool?

markmarkmark19:07:50

and the thread pools that future and core.async thread use are actually completely unbounded

robertfw23:07:39

I thought that the async threadpool by default is a fixed 8 thread pool?

markmarkmark23:07:47

the threadpool used by go blocks is by default fixed at 8 threads. the threadpool used by the thread macro is unbounded.
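For reference, a small sketch: the go dispatch pool's size is read from the `clojure.core.async.pool-size` system property when core.async first loads, so the default of 8 can be raised at JVM startup. Setting it after core.async is loaded has no effect:

```clojure
;; Start the JVM with e.g. -Dclojure.core.async.pool-size=16
;; to widen the go dispatch pool; unset, the default of 8 applies.
(or (System/getProperty "clojure.core.async.pool-size")
    "8 (default)")
```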

markmarkmark19:07:53

though, I guess them being unbounded doesn't matter as much in your example since you wait for the future before you make the next one

Yehonathan Sharvit19:07:11

What do you mean by unbounded?

markmarkmark19:07:41

there's no limit to how many threads they can start up

markmarkmark19:07:50

besides the obvious memory limits and whatnot

Yehonathan Sharvit19:07:35

I added a core.async/thread scenario

(let [iterations 100]
    [(let [t (. System (nanoTime))]
        (dotimes [_ iterations]
          (let [c (chan)
                _ (put! c 1)]
            (deref (future (<!! c)))))
        (let [elapsed (- (. System (nanoTime)) t)]
          (str (float (/ elapsed 1e6)) "ms")))

     (let [t (. System (nanoTime))]
       (dotimes [_ iterations]
         (let [c (chan)
               _ (put! c 1)]
           (<!! (thread (<!! c)))))
       (let [elapsed (- (. System (nanoTime)) t)]
         (str (float (/ elapsed 1e6)) "ms")))

     (let [t (. System (nanoTime))]
       (dotimes [_ iterations]
         (let [c (chan)
               _ (put! c 1)]
           (<!! (go (<! c)))))
       (let [elapsed (- (. System (nanoTime)) t)]
         (str (float (/ elapsed 1e6)) "ms")))])

Yehonathan Sharvit19:07:52

The results are: ["3.231052ms" "20.063465ms" "5.740195ms"]

Yehonathan Sharvit19:07:01

thread is much slower!

Yehonathan Sharvit19:07:38

And when I raise the number of iterations to 10,000 the thread code triggers an OutOfMemoryError