#off-topic
2018-10-03
tristefigure07:10:12

not really listening to this, but the video is gold!

val_waeselynck11:10:30

Wondering if I'm getting banned for this

borkdude11:10:35

Typing only ) is even worse, you can't repair that

val_waeselynck12:10:38

> you can't repair that
You can, if you mutate.

john14:10:25

I wonder if TypedArray-backed persistent data structures could be operated on in parallel over the gpu with something like: https://github.com/gpujs/gpu.js

tbaldridge14:10:13

probably, but GPU support in JS is pretty primitive as of yet.

tbaldridge14:10:56

That's another hard problem with GPUs. AMD has pretty good OpenCL support, but CUDA blows OpenCL away as far as advanced features go.

tbaldridge14:10:41

And neither of those is supported super well from JS

john14:10:08

I'm just starting to scratch the surface. It seems like a very cool idea to explore though.

john14:10:25

so many cores...

john14:10:52

🤤

tbaldridge14:10:35

Might be worth breaking out a C++ compiler and playing with it a bit there. The new NVidia GPUs (10xx series) have some pretty insane features, and C++ is the best supported language. This is my favorite example. CUDA supports unified memory, so you can allocate some data on the CPU and then just pass a pointer to the GPU. The GPU driver will page in main memory as the GPU requires it:
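
A minimal sketch of that kind of unified-memory program, assuming a trivial element-wise `add` kernel (the kernel, sizes, and names here are illustrative, not the snippet originally shared):

```
#include <cstdio>
#include <cuda_runtime.h>

// Runs on the GPU: __global__ plus the <<<...>>> launch below.
__global__ void add(int n, float *x, float *y) {
  int i = blockIdx.x * blockDim.x + threadIdx.x;
  if (i < n) y[i] = x[i] + y[i];
}

int main() {
  int n = 1 << 20;
  float *x, *y;

  // Unified memory: allocated once, visible to both CPU and GPU.
  // The driver pages it over to the GPU as the kernel touches it.
  cudaMallocManaged(&x, n * sizeof(float));
  cudaMallocManaged(&y, n * sizeof(float));
  for (int i = 0; i < n; i++) { x[i] = 1.0f; y[i] = 2.0f; }

  // <<<blocks, threads-per-block>>>: run add() on the GPU.
  add<<<(n + 255) / 256, 256>>>(n, x, y);
  cudaDeviceSynchronize();

  printf("y[0] = %f\n", y[0]);  // expect 3.0
  cudaFree(x);
  cudaFree(y);
  return 0;
}
```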

tbaldridge14:10:01

the <<<...>>> thing says "this function runs on the GPU"

tbaldridge14:10:20

Someday I'll get around to figuring out how to write a GC on a GPU, then I can write a lisp interpreter 😛

john14:10:54

lol, we no need no stinking GC! Just make persistent collections smarter 😉

tbaldridge14:10:27

Heh, can't make them smarter when the key component is multiple threads reading the same data. Even refcounting doesn't work there.

john14:10:47

I've got a pie in the sky idea I'm working on

john14:10:14

Not sure if it's applicable to gpgpu though

tbaldridge14:10:20

what's the gist of the idea?

john14:10:36

Basically, cooperative alloc/free, where the threads cooperatively clean up neighboring pointers when they clean up their own. And it gradually defragments essentially

tbaldridge14:10:07

But how does a thread know when a given pointer is no longer in use?

john14:10:31

Well, I'm not yet sure it'll work, but I'm thinking the memory pool can have "top level" objects and sub-objects. And I'd only have to keep a list of links between threads and top-level objects. Threads would need to inc/dec top level object's ref counters. If a thread happens to dec one to 0, it is responsible for freeing the block.
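
A rough C++ sketch of that top-level-object refcounting, assuming blocks come out of a malloc-style shared pool (the struct layout and helper names are illustrative):

```
#include <atomic>
#include <cstdlib>

// Sketch: only "top level" blocks carry a refcount; sub-objects live
// inside the block and go away with it. Assumes the block was
// malloc'd out of the shared pool.
struct TopLevelBlock {
  std::atomic<int> refs{1};   // one ref held by the creating thread
  size_t size;
  // ... sub-objects / payload follow in the same allocation ...
};

// A thread increments before reading through the block.
void retain(TopLevelBlock *b) {
  b->refs.fetch_add(1, std::memory_order_relaxed);
}

// Whichever thread drops the count to zero frees the block.
void release(TopLevelBlock *b) {
  if (b->refs.fetch_sub(1, std::memory_order_acq_rel) == 1)
    std::free(b);
}
```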

john14:10:28

And all the logic would be implemented behind the swap/deref interfaces for the irefs that point to the top level objects

john14:10:31

I'd actually like to have a more thorough conversation with you about the details of the idea some time. Perhaps you could disabuse me of some of my misconceptions around memory management. I riffed a sorta neat, halfway finished wait-free array-buffer alloc/free mechanism though, which seems to enable some cool ideas.

tbaldridge14:10:10

sounds a bit like hazard pointers

tbaldridge14:10:38

Which I've thought could work fairly well with region allocation.
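
For reference, the core of the hazard-pointer idea fits in a few lines; this sketch assumes a fixed thread count and leaves out retire-list batching and the finer memory-ordering details:

```
#include <atomic>

constexpr int kMaxThreads = 8;
std::atomic<void*> hazard[kMaxThreads];   // one published slot per thread

// Before dereferencing a shared pointer, publish it in your slot and
// re-check that it hasn't been swapped out from under you.
void* protect(int tid, const std::atomic<void*>& src) {
  void* p;
  do {
    p = src.load();
    hazard[tid].store(p);
  } while (p != src.load());
  return p;
}

void clear(int tid) { hazard[tid].store(nullptr); }

// The reclaiming side only frees a node that no thread advertises.
bool safe_to_free(void* p) {
  for (int i = 0; i < kMaxThreads; i++)
    if (hazard[i].load() == p) return false;
  return true;
}
```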

tbaldridge14:10:43

But yes, the devil is in the details

john14:10:01

Aye, thanks for the hint there. That does seem like it's in the same problem space. I'll research that further.

tbaldridge14:10:24

one thing that can help here is to implement the ideas using a linked list. A list of cons cells is about 10 loc in any language, yet hits all major problems involved with immutable data memory management
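
A sketch of that testbed, with std::shared_ptr's atomic refcount standing in for whatever reclamation scheme is actually under test:

```
#include <memory>

// An immutable cons-cell list in ~10 lines. The empty list is nullptr.
struct Cons {
  int head;
  std::shared_ptr<const Cons> tail;
};

std::shared_ptr<const Cons> cons(int h, std::shared_ptr<const Cons> t) {
  return std::make_shared<const Cons>(Cons{h, std::move(t)});
}

// Structural sharing is what makes reclamation interesting:
//   auto xs = cons(1, cons(2, cons(3, nullptr)));
//   auto ys = cons(9, xs->tail);   // ys shares (2 3) with xs
```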

john14:10:46

Right, I'll probably implement that one first

john14:10:58

If you don't mind a fairly sloppy, thrown together video (at our Clojure meetup group), I touch on some of the general ideas here: https://youtu.be/G-VhNV5euSI The wait free alloc/free algo has been updated a bit since then though

john14:10:21

Basically, it's a pairwise array of [[address][length],[address][length]... and calling alloc on the shared array buffer will just scan the array left to right. First to find wins. Freeing uses atomics to coordinate between threads.
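
A loose C++ translation of that scan-and-claim loop; the slot layout, the length-of-zero "free" sentinel, and the CAS-to-claim step are assumptions here, and the part that picks which byte range of the backing shared buffer to hand out is omitted:

```
#include <atomic>
#include <cstdint>

// Sketch of a pairwise [address, length] table over a shared buffer.
// length == 0 marks a free slot; the encodings are illustrative.
constexpr int      kSlots = 64;
constexpr uint32_t kFree  = 0;

struct Slot {
  std::atomic<uint32_t> address{0};
  std::atomic<uint32_t> length{kFree};
};

Slot table[kSlots];

// alloc scans left to right; the first thread to CAS a free slot's
// length from 0 to the requested size wins that slot.
int alloc(uint32_t addr, uint32_t len) {
  for (int i = 0; i < kSlots; i++) {
    uint32_t expected = kFree;
    if (table[i].length.compare_exchange_strong(expected, len)) {
      table[i].address.store(addr);   // only the winner writes this
      return i;                       // slot index is the handle
    }
  }
  return -1;                          // table full
}

// Freeing publishes the slot as reusable; the atomics are what keep
// the claim/release race between threads well-defined.
void release(int slot) {
  table[slot].length.store(kFree);
}
```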

john14:10:53

Here's a version of the algo https://www.maria.cloud/gist/72fd24101ec3fac15c270a226576271d I have a slightly less broken version I haven't uploaded yet.

john14:10:31

And I still haven't even started to test it in multi threaded situations, so it's still vaporware-ish. But the general idea seems to have enough shape now to look at and critique.

john14:10:31

And I haven't fully articulated anywhere my ideas on how to do the ref-counting and the whole "top level" refs thing. I'll probably try to put together a design doc of some sort soon.

dpsutton18:10:00

seen a few people mention wasm. google unofficially has schism, a scheme to webassembly self-hosted compiler: https://github.com/google/schism

👀 8
dpsutton19:10:58

was thinking of you specifically @U050PJ2EU

john19:10:45

Clutch bro. I surprisingly hadn't found that gem before.

andy.fingerhut18:10:22

Yeah, I tried using schism, but the community seems so strongly divided on such major language issues... (that's a bad pun, BTW)

😆 4
mattly18:10:40

you could say there's a meta-schism

dpsutton19:10:21

@andy.fingerhut what do you mean? hadn't heard about this

andy.fingerhut19:10:24

I have never used the project you refer to. I was attempting to make a bad pun on the name of the project (look up schism in the dictionary)

😂 16
dpsutton19:10:39

haha whoops 🙂

Eric Ervin19:10:01

"We're sick and tired of your ism skism game" - Bob Marley