#clojure-europe
2020-10-19
slipset06:10:44

God morgen!

borkdude08:10:15

Morning! This weekend I tried to make the finishing touches on https://github.com/babashka/babashka.process.

🎉 6
borkdude09:10:02

Nifty feature:

👍 3
otfrom10:10:09

what do people reach for when they want to do lots of streaming calcs on the fly, but share the early stages of it (does this make sense?). I'm doing things where I'm parsing quite large files and then doing some basic cleaning of what comes out of the files and then I'm doing different things based on that processed input. Usually I reach for core.async for this and a mult on the core bit of processing with some kind of reduce on the last channels. I'd like if possible to use as many cores as possible given that the jobs are parallel, but I'd also like to keep the amount of memory down. Any thoughts?
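(A minimal sketch of the core.async mult approach otfrom mentions, with an invented scrub step standing in for the real parsing/cleaning:)

(require '[clojure.core.async :as a])

;; a hypothetical scrub step, shared by every downstream calc
(def scrub-xf (comp (map #(Long/parseLong %)) (filter even?)))

(let [src   (a/chan 32 scrub-xf)                  ; scrubbing happens exactly once, here
      m     (a/mult src)
      total (a/reduce + 0 (a/tap m (a/chan 32)))  ; downstream calc 1
      cnt   (a/reduce (fn [n _] (inc n)) 0 (a/tap m (a/chan 32)))] ; downstream calc 2
  (a/onto-chan! src ["1" "2" "3" "4"])            ; stand-in for the lines of the big file
  (println {:total (a/<!! total) :count (a/<!! cnt)}))
;; => {:total 6, :count 2}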

Ben Hammond10:10:17

so you have separate functions for:
• scrubbing data
• computing some kind of dispatching value
• multimethods to separately compute the different streams
glued together using a transducing map and parallelised using something like that blog post I linked to
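(Roughly the shape Ben describes; all names here are invented for illustration, and parallelising would then wrap this transducer, e.g. with the futures trick sketched further down:)

(defn scrub [rec] (update rec :qty #(Long/parseLong %)))

(defmulti handle :kind)                             ; dispatch value computed per record
(defmethod handle :sale   [rec] (assoc rec :total (* (:qty rec) (:price rec))))
(defmethod handle :refund [rec] (update rec :qty -))

(def pipeline-xf (comp (map scrub) (map handle)))   ; glued together as one transducer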

Ben Hammond10:10:26

I'd use transduce personally

dominicm10:10:27

It's a shame that parallel transducers never happened.

Ben Hammond10:10:41

oh hang on, there's a bit of a blog post on that

Ben Hammond10:10:07

essentially it funnels each value into its own future, then runs a few entries behind in order to give the futures a chance to execute before you deref them

Ben Hammond10:10:37

(you have to 'close off' the transducer by calling the arity-1 reducing function; if you forget to do that then you might lose some data)
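(A minimal sketch of that technique as I read Ben's description; it is not the blog post's actual code:)

(defn pmap-xf
  "Runs f in a future per input, only blocking on a result once n are in flight."
  [n f]
  (fn [rf]
    (let [in-flight (volatile! clojure.lang.PersistentQueue/EMPTY)]
      (fn
        ([] (rf))
        ([result]
         ;; the 'close off' step: drain the futures still in flight so none are lost
         (transduce (map deref) rf result @in-flight))
        ([result x]
         (vswap! in-flight conj (future (f x)))
         (if (> (count @in-flight) n)
           (let [fut (peek @in-flight)]
             (vswap! in-flight pop)
             (rf result @fut))
           result))))))

;; usage: (into [] (pmap-xf 8 expensive-fn) inputs)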

dominicm10:10:17

There's the core.async parallel transducer context :)
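(Presumably this refers to a/pipeline, which runs a transducer over a channel on n threads; a rough sketch, reusing the invented scrub-xf from above:)

(let [in  (a/chan 32)
      out (a/chan 32)]
  (a/pipeline 8 out scrub-xf in)        ; 8 parallel workers applying the transducer
  (a/onto-chan! in ["1" "2" "3" "4"])
  (a/<!! (a/reduce conj [] out)))
;; => [2 4]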

otfrom11:10:45

I think what I mean is that I like composing my code, but I don't have good ways of composing my results as they flow

otfrom11:10:16

so I can do

(comp
  (map this-fn)
  (filter other-fn))

borkdude11:10:35

@otfrom is sequence the right thing here? it caches results so then you will share the early results?
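(What borkdude is suggesting, as a sketch; rows and the two predicates are invented:)

;; sequence applies the xf once, lazily, and caches the result, so both consumers
;; share the early work (but the whole cached seq stays in memory once realised)
(def cleaned (sequence (comp (map this-fn) (filter other-fn)) rows))

(def chart-a (into [] (filter pred-a) cleaned))
(def chart-b (into [] (filter pred-b) cleaned))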

otfrom11:10:03

but I don't see a good way of using the results of that w/o doing transduce/into/reduce multiple times or going to core.async/mult

otfrom11:10:52

@borkdude sequence would be right if I were just after caching, but the data is large enough that I don't want all of the intermediate values in memory if I can avoid it

otfrom11:10:34

and at the end of my pipes I do a lot of side effecting (writing to files for me atm, tho it has been a database or msg stream/q in the past)

borkdude11:10:05

maybe use a database ;)

otfrom11:10:23

it might come to that

otfrom11:10:32

atm I can just run it multiple times and wait

otfrom11:10:49

or go for core.async which is my usual fave way of doing this kind of thing

Ben Hammond11:10:20

I don't see a problem with a (small) number of nested transduce/into/reduce calls as long as you don't let it get out of control

otfrom11:10:51

I don't either. I was just getting to the tipping point on that and was wondering how others were solving the problem

Ben Hammond11:10:14

I find core.async to be awkward to debug from the repl

Ben Hammond11:10:31

but perhaps that is because I haven't tried hard enough

borkdude11:10:29

core.async is optimized for correct programs

😆 6
😅 3
Ben Hammond11:10:31

all programs are wrong, some are useful

👍 3
otfrom11:10:01

(btw I find x/reduce and x/by-key to make my xf pipelines much more composable as it means I need the results of a transduce a lot less often)
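(For reference, that pattern with net.cgrand.xforms looks roughly like this; the field names are invented:)

(require '[net.cgrand.xforms :as x])

;; e.g. sum a :count per :area inside the pipeline itself, so the downstream
;; step receives {area total} entries instead of needing its own transduce
(def per-area-xf
  (comp (map scrub)
        (x/by-key :area :count (x/reduce + 0))))

(into {} per-area-xf raw-records)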

otfrom11:10:48

I've gotten reasonably good at debugging core.async issues, but I mostly do it by using the xfs in a non-async way to make sure they are right and then building them up piece by piece

Ben Hammond11:10:56

but I find the happy-hacking, carefree nature of the REPL to be Clojure's killer feature, certainly in terms of maintaining productivity and focus

dominicm11:10:46

@otfrom core.reducers is probably what you really want.

dominicm11:10:55

But no transducers there.
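(The reducers route would look something like this sketch; heavy-fn and raw-records are stand-ins:)

(require '[clojure.core.reducers :as r])

;; r/fold splits the work over the fork/join pool; the input must be foldable
;; (a vector or a map), and heavy-fn is just a stand-in
(r/fold + (r/map heavy-fn (vec raw-records)))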

Ben Hammond11:10:28

have you found https://clojure.org/reference/reducers to be useful, @dominicm? I've been underwhelmed

Ben Hammond11:10:25

I thought the most useful things were fjfork and fjjoin, and they're both declared as private https://github.com/clojure/clojure/blob/master/src/clj/clojure/core/reducers.clj#L32-L34

Ben Hammond11:10:30

er, so I mean I did not find clojure.core.reducers to add much value on top of the raw Java Fork/Join mechanism

dominicm12:10:09

I've never used the fork/join mechanism. It gives me a pretty easy API to just merge together some bits.

otfrom11:10:35

@dominicm it still doesn't give me a way to share my intermediate results. I've had good fun using them with memory mapped files though

Ben Hammond11:10:41

when you say 'share', how do you expect to retrieve the intermediate results?

otfrom11:10:30

I'm not entirely sure about the mechanism, but I'd like to avoid re-calculating everything each time and I'd like to avoid having everything in memory all at once

Ben Hammond11:10:59

wouldn't you just have a big map that you assoc interesting things on to

Ben Hammond11:10:30

oh you'd write to a temporary file then

Ben Hammond11:10:47

and assoc its filename into your big map

borkdude11:10:49

> oh you'd write to a temporary file then
I said, use a database.

mpenet13:10:57

sqlite also has decent JSON support if you want something more lightweight

ordnungswidrig12:10:13

Postgres is indeed cool.

ordnungswidrig12:10:29

Like “I can declare indexes on paths in json”-cool.

dominicm12:10:50

There's that cool library from Riverford that does that for maps

dominicm12:10:22

@otfrom maybe titanoboa?

otfrom13:10:08

hmm... my intermediate results don't have to go into a database tho. If I'm reading from a file, a lot of the time what I'll end up doing is reading line by line, then splitting the lines into multiple records using mapcat, and turning things into maps with useful keys and tweaked types at that point
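(Roughly that shape, with an invented column layout:)

(require '[clojure.string :as str])

(defn line->records [line]
  (let [[id & samples] (str/split line #",")]
    (map-indexed (fn [i v] {:id    id
                            :sim   (inc i)
                            :value (Double/parseDouble v)})
                 samples)))

(def parse-xf
  (comp (mapcat line->records)
        (filter (comp pos? :value))))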

otfrom13:10:41

so it is really just a stream by that point and not something I want to write into a database as I'd just dump the intermediate result anyway

otfrom13:10:15

most of the work is pretty boring by the time it gets to something that fits into a database

borkdude13:10:31

@otfrom maybe redis? ;) - neh, temp file sounds fine then

otfrom13:10:53

I don't want to write out to files either. The computation usually fits into memory if I'm reasonably thoughtful about it

otfrom13:10:01

it is just coming up today b/c I have 2 charts I need to make that do different filtering on the same scrubbed source data, which is reasonably big (500MB of csv) so takes about 1 minute to run. I'd just like it to stay around that number rather than creep up as I add new things that hang off the back of the reading and scrubbing

otfrom13:10:27

@dominicm I've been keeping an eye on titanoboa. That might be the way to go, but I've not looked into it enough

dominicm13:10:18

Me neither. I'd love to hear how you get on: whether it is or isn't a fit, why that is, etc.

otfrom13:10:28

@dominicm ah, looking at this I know why I'm not going to use it: https://github.com/mikub/titanoboa it is waaaay more than I need

Ben Hammond13:10:24

@otfrom: so you return a map that contains a bunch of atoms; as your code processes stuff, it swap!s intermediate values into that map. Isn't that the kind of thing you are talking yourself into?

otfrom13:10:51

no, b/c most of what I want to do is before I'd turn anything into a map

otfrom14:10:27

hmm... I think an eduction with a reducible that handles the lines/records from the file sounds like the way to go atm. I get the code composition if not the result composition
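(A sketch of the eduction-over-a-reducible idea: the reducible re-opens the file on every reduce, so nothing is cached but the composed xf is reused; parse-xf and the file name are stand-ins:)

(require '[clojure.java.io :as io])

(defn lines-reducible [path]
  (reify clojure.lang.IReduceInit
    (reduce [_ f init]
      (with-open [rdr (io/reader path)]
        (reduce f init (line-seq rdr))))))

(def cleaned (eduction parse-xf (lines-reducible "big-file.csv")))

;; each consumer re-runs the pipeline, but never holds it all in memory
(transduce (filter pred-a) conj [] cleaned)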

otfrom14:10:15

and then if I want to re-wire it later into core.async then I can as it doesn't sound like there is an alternative for this bit of it

otfrom14:10:35

once it is a map then going in and out of a database sounds reasonable

otfrom14:10:06

I suppose my issue is that I don't want to keep a database around as it is all batch, so I'd be creating a whole bunch of things just to throw them away when I produce my results

otfrom14:10:17

(I go months between getting new bits of input)

Ben Hammond14:10:00

You're worried about the expense of a database that you only use rarely?

borkdude14:10:33

@otfrom use redis + memory limit, if you're going to throw away the results afterwards?

otfrom14:10:37

from a dependency and "remembering how to do it" point of view

borkdude14:10:52

if the results are bigger than memory

otfrom14:10:07

the final results for the things I do are usually quite small. I'm almost always rolling up a bunch of simulations and pushing them through a t-digest to create histograms (iqr, medians, that kind of thing). The problem is to get a new histogram for a different cut you have to push all the data through the t-digest engine again

otfrom14:10:27

so I end up calculating lots of different counts per simulation and then pushing them through
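(A toy version of computing several cuts' counts in a single pass; the cut names and record shape are invented:)

(def cuts
  {:over-65   #(> (:age %) 65)
   :high-need #(= (:need %) :high)})

(defn count-cuts [sim-records]
  (reduce (fn [acc rec]
            (reduce-kv (fn [a k pred]
                         (cond-> a (pred rec) (update k (fnil inc 0))))
                       acc
                       cuts))
          {}
          sim-records))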

otfrom14:10:12

so, results tiny, input large-ish, but not large enough to need something like spark/titanoboa/other

otfrom14:10:11

I'm wondering if my route is eventually going to be http://tech.ml.dataset and lots of things on top of that as it seems to have lots of ways of doing fast memory mapped access
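(If it did go that way, the reading side might look something like this, assuming the current tech.v3.dataset API:)

(require '[tech.v3.dataset :as ds])

(def d (ds/->dataset "scrubbed.csv"))   ; columnar storage of the scrubbed data
(ds/row-count d)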

dominicm14:10:32

I wonder if onyx in single node mode would be good for this

dominicm14:10:27

I bet there's a java-y thing that's not too bad either.

dominicm14:10:02

No, not kafka :)

dominicm14:10:12

Kafka streams-alike though

dominicm15:10:04

But for a single instance, and presumably with some kind of web interface or something for intermediates.

mpenet15:10:46

ChronicleQueue is a decent embedded solution for queue persistence with Kafka-ish semantics

otfrom15:10:01

@mpenet and @borkdude ChronicleQueue and tape are on my "will use at some point" list

dominicm16:10:24

I suppose they're more single-threaded though; Kafka Streams is anyway, IIRC. But I'm no expert there.

otfrom23:10:42

I seem to be going down the core.async route
