#off-topic
2018-02-07
andy.fingerhut01:02:10

When Rich Hickey uses "CAS" in his talks, I am assuming that means "Compare And Swap", but not necessarily just the processor instruction version of that idea?

andy.fingerhut01:02:50

And if that is correct, what is the more general form of it that is not a processor instruction? The moral equivalent in any concurrent system you are talking about?

andy.fingerhut01:02:53

Here is a link to several uses of CAS in his talks across the years, from matthiasn's GitHub repo of talk transcripts: https://github.com/matthiasn/talk-transcripts/search?utf8=%E2%9C%93&q=CAS&type=

andy.fingerhut02:02:41

In particular the reference to "CAS timelines" in the "Are We There Yet" talk is not familiar to me. That might mean "a causal timeline constructed between threads, based on the order in which they succeeded in updating a compare-and-swap word of state"?

qqq02:02:23

@andy.fingerhut: I do not know for certain what Rich is saying by STM vs CAS, but I can give a plausible theory. Do you want to hear it?

fellshard02:02:24

STM incorporates multiple values changing atomically, whereas CAS involves single values, if I'm not mistaken. Basically refs vs. atoms?

andy.fingerhut02:02:17

Do you have a pointer to some place where he is saying "STM vs CAS", or a few sentences in one of his talk transcripts that you are describing that way?

qqq02:02:30

Say we have some map M.
thread 1: read (:a M), read (:b M), do some computation, write to (:c M)
thread 2: write (:a M), write (:b M)
In the STM model, both of these are executing optimistically, but there is a potential "oh shit" if it happens that: thread 2 writes :a, thread 1 reads :a, thread 1 reads :b, thread 2 writes :b, thread 1 writes :c. So the STM somehow needs to detect these interleaved reads/writes and roll stuff back.
In the CAS model, we have:
thread 1: I read M_bar, I now want to write M_bar'
thread 2: I read M_foo, I want to write M_foo'
So basically both threads are saying: "if the current state of M is the state I read it as, then use this new value for M"
^-- this is all theory / my understanding / no idea if this is what Rich meant in the talk

qqq02:02:44

with regard to your other question, here we are doing "CAS" on an entire map, not just a single value

andy.fingerhut02:02:48

So I understand that CAS the instruction can be used to implement a pointer-flip atomically, so "if this pointer to a map is the one I have been calculating an updated version of, then here is the pointer to the new version to replace it with, otherwise return failure, and I will start all over again with the current latest map".

andy.fingerhut02:02:11

That I believe is the underpinning of a Clojure atom.

andy.fingerhut02:02:42

With which you can do atomic updates of arbitrarily complex data structures, but only single ones.
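The retry loop described in the last few messages can be sketched like this (Python for illustration — a Clojure atom actually sits on Java's AtomicReference and a hardware CAS instruction; every name here is made up):

```python
import threading

class Ref:
    """A toy compare-and-swap cell, a sketch of the idea behind an atom.

    The lock guards only the compare-and-set step itself; a real
    implementation uses a hardware CAS instruction instead."""
    def __init__(self, value):
        self._value = value
        self._lock = threading.Lock()

    def deref(self):
        return self._value

    def compare_and_set(self, expected, new):
        # Atomically: if the current value is still the one we read,
        # flip the pointer to the new value. Identity, not equality --
        # this is the "pointer-flip" described above.
        with self._lock:
            if self._value is expected:
                self._value = new
                return True
            return False

    def swap(self, f, *args):
        # The retry loop: read, compute a new value, try the flip.
        # On failure (another thread won the race), start all over
        # again from the current latest value.
        while True:
            old = self.deref()
            new = f(old, *args)
            if self.compare_and_set(old, new):
                return new

r = Ref({"count": 0})
r.swap(lambda m: {**m, "count": m["count"] + 1})
```

Because the value is an immutable snapshot replaced wholesale, the structure it points to can be arbitrarily complex — the CAS only ever touches the one reference.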

fellshard02:02:46

The 'start over and retry' is what makes it a CAS as opposed to what is traditionally called just 'atomic swap'

fellshard02:02:16

(or something like that, reaching back to ye olden days)

andy.fingerhut02:02:25

STM is coordinated updates across an arbitrary subset of your refs, which I haven't dug into all the details of, other than if I wanted to do that, I would want to do nothing else brain intensive while studying it for a month.

tbaldridge02:02:47

STM is basically a locking system, somewhat like the way core.async channels work

tbaldridge02:02:45

You do some work, tracking all the refs you need to read/write from, and when you're done you lock all the refs and make the changes. If the contents of the refs have changed between the reading and the final lock, roll everything back

tbaldridge02:02:47

There's a lot more in the implementation though since you have to lock the refs in the right order (to avoid deadlock), and Clojure STM has the ability to barge a lock. One thread can say "you're taking too long, stop it!" and take control of the refs from another thread.
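A toy version of that scheme — commit-time validation with ordered locking — can be sketched as follows (Python for illustration; this is not Clojure's actual MVCC implementation and omits barging, and every name here is invented):

```python
import threading

class STMRef:
    """A mutable cell with a version counter, used for conflict detection."""
    def __init__(self, value):
        self.value = value
        self.version = 0
        self.lock = threading.Lock()

class Tx:
    """One transaction attempt: record what we read, buffer what we write."""
    def __init__(self):
        self.reads = {}    # ref -> version observed at first read
        self.writes = {}   # ref -> pending new value

    def read(self, ref):
        if ref in self.writes:          # read-your-own-writes
            return self.writes[ref]
        self.reads.setdefault(ref, ref.version)
        return ref.value

    def write(self, ref, value):
        self.writes[ref] = value

    def commit(self):
        # Lock every touched ref in a consistent order to avoid deadlock.
        touched = sorted(set(self.reads) | set(self.writes), key=id)
        for ref in touched:
            ref.lock.acquire()
        try:
            # Validate: if anything we read has since changed, fail (retry).
            for ref, seen in self.reads.items():
                if ref.version != seen:
                    return False
            for ref, value in self.writes.items():
                ref.value = value
                ref.version += 1
            return True
        finally:
            for ref in touched:
                ref.lock.release()

def dosync(body, *refs):
    # Re-run the body until its commit succeeds.
    while True:
        tx = Tx()
        body(tx, *refs)
        if tx.commit():
            return
```

The version check is what detects the interleaved read/write hazard described earlier in the thread; the sorted locking order is the deadlock-avoidance point tbaldridge mentions.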

qqq02:02:35

performance debugging STM sounds like a nightmare

andy.fingerhut02:02:36

So in his "Language of the System" talk, he says this: "There's nothing about what I just said that is about Clojure, that is about memory, that is about locking. There is a little bit that's probably about CAS, but not CAS on the chip. It's a very, very general notion, and Datomic implements that notion at large, but you can also implement it yourself. And you're going to need to combine a couple things. You're going to need to combine naming values with some sort of reference and some sort of a la carte coordination."

noisesmith02:02:43

does STM imply an automatic retry on conflict, or is that just a clojure thing?

andy.fingerhut02:02:06

I am curious what the "very general notion" of CAS is meant here -- is it basically "any way of doing the equivalent of a Clojure atom swap! operation", whether it involves a compare-and-swap instruction in a chip, or some other method?

tbaldridge02:02:10

Right, so in the case of Datomic it's all a bit different. Values + single writer + some queue = Datomic's model

tbaldridge02:02:32

So in the case of Datomic on Cassandra, most of the data storage used is immutable but it uses Cassandra's conditional put to store the roots of the DB tree.

tbaldridge02:02:24

That doesn't require lock cmpxchg (x86 CAS, or whatever it's called), Cassandra just has a way of doing an atomic conditional put on a CRDT store

andy.fingerhut02:02:45

I mean, I can implement a little web service that has no parallelism at all within its implementation, and it implements a 'compare and swap' service that many clients could use, and it wouldn't need a CAS instruction on a chip, but it would implement that feature for its clients.

tbaldridge02:02:17

And if your clients used an immutable store, they could store the GUID for the root node in your service
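That single-threaded "CAS as a service" idea can be sketched like this (Python, in-process rather than an actual web service; all names hypothetical):

```python
class CasService:
    """A sketch of compare-and-swap as a service: because the server
    handles one request at a time, no hardware CAS instruction is
    needed, yet clients still get compare-and-swap semantics."""
    def __init__(self):
        self.store = {}

    def get(self, key):
        return self.store.get(key)

    def compare_and_swap(self, key, expected, new):
        # One request at a time, so this check-then-set is atomic
        # from the clients' point of view.
        if self.store.get(key) == expected:
            self.store[key] = new
            return True
        return False

svc = CasService()
svc.compare_and_swap("root", None, "guid-1")        # first client wins
ok = svc.compare_and_swap("root", None, "guid-2")   # stale expectation fails
```

A losing client re-reads the current root and retries — the same loop as with the chip instruction, just at a different scale.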

andy.fingerhut02:02:40

In his "Are We There Yet" talk, he uses the term "CAS timeline", e.g. here: "You can use CAS, which is essentially saying there’s one timeline per identity. Right? And it’s uncoordinated. It’s impossible to coordinate two things that are using CAS timelines, but CAS timelines are still useful."

andy.fingerhut02:02:09

My guess is that means something like "the causal sequential ordering you can see, after an execution occurs, that is the order in which multiple threads' CAS operations (or something like it, e.g. atom swap! operations) successfully complete."

tbaldridge02:02:05

I took it to mean that you can't do two swap! calls and guarantee consistency between two atoms.

tbaldridge02:02:24

At least not without an outside communication point (like a lock of some sort)

andy.fingerhut02:02:52

That makes sense, but it is the "CAS timeline" part that I haven't seen described elsewhere, and was trying to clarify what he means by that term.

tbaldridge02:02:38

yeah, and that's a phrase I've never heard before 😐

tbaldridge02:02:07

so I'd assume it's what you say, the succession of values seen by a location being updated via CAS.

tbaldridge02:02:46

But I guess that makes sense, CAS has a form of time. Update 1, Update 2, Update 3. That timeline is unique to that "cell" of memory.

andy.fingerhut02:02:20

Yeah, that seems to fit the way he is using it there.

tbaldridge02:02:49

On the STM note, Intel Processors used to have HTM, that worked a lot like Clojure's STM. And it'd roll back any changes if someone touched the data you were working on. Sadly it had so many bugs it was disabled for future work. AFAIK it's still disabled.

andy.fingerhut02:02:14

That definitely sounds like the kind of thing that is a few steps beyond what I would recommend putting in specialized hardware.

andy.fingerhut02:02:36

because of the difficulty of getting it right.

fellshard02:02:47

I'm recalling a paper that basically described that synchronizing multiple individual values, each on their own timeline, was irreducible; I think that was back when reading up on vector clocks?

Ryan Radomski15:02:37

Programming Erlang: Software for a Concurrent World is on the Clojure reading list. Does reading that book give perspective that's useful for Clojure developers?

mpenet15:02:24

it's quite dated

mpenet15:02:08

personally I enjoyed "Designing for Scalability with Erlang/OTP"

Ryan Radomski15:02:21

I understand the Erlang VM still has pretty efficient CSP to this day. Anything to glean there?

Ryan Radomski15:02:31

Is the O'Reilly book better than hearing it from Armstrong himself?

mpenet15:02:36

Yes, the book I mentioned is also written by reliable/knowledgeable authors, and it's certainly more up to date

mpenet15:02:38

I didn't read the one you mentioned but it seems more about the basics

mpenet15:02:23

actually I read it 🙂, I have the second edition, was fooled by the different cover

tbaldridge15:02:25

Erlang does actor based programming, not CSP

mpenet15:02:43

yes, totally different

tbaldridge15:02:21

Erlang is great at what it does, but most of the criticisms here are well founded: https://clojure.org/about/state#actors

mpenet15:02:34

it's not to be used in the same situations I guess

mpenet15:02:33

rh said it himself indirectly recently in one of his talks, you prolly wouldn't use clojure to write "phone switches"

mpenet15:02:56

even tho it's arguable, the memory model is so different, both have strength/weaknesses

Ryan Radomski15:02:14

Oh that's a really nice read

Ryan Radomski15:02:29

If I'm understanding correctly, an important disadvantage to consider is that you lose the ability to throw something on a channel and stop caring about what happens to it, which is a form of complexity

tbaldridge15:02:34

agreed, CSP doesn't work over networks, as you need locks and the like

tbaldridge15:02:49

actors aren't needed as much when you have a shared memory model with locks

chris15:02:21

OTP is 😘👌 though

tbaldridge15:02:18

@radomski That's true. Also, actors require a message to be sent in order to deref the actor. That's another cause of complexity

Ryan Radomski15:02:03

But on the other end, your messaging is complected with locality in CSP, from what I'm gleaning

Ryan Radomski15:02:14

So the golden rule of everything is a tradeoff

mpenet15:02:20

csp makes backpressure/flow control easy

mpenet15:02:34

in erlang you have to build that yourself since mailboxes are unbounded

tbaldridge15:02:42

As I've heard Rich say several times: "The first thing people do with Erlang is build a distributed queue".

mpenet15:02:47

easy to get into weird territory

Ryan Radomski15:02:13

A distributed queue sounds like a good middle ground to an outsider

tbaldridge15:02:20

There's a lot of truth in that ^. I've never reached for Erlang simply because stuff like Tibco, Rabbit and Kafka exist.

mpenet15:02:21

elixir has a few "behaviors" to fix that, or in erlang you have to use credit flow control and such

tbaldridge15:02:54

Elixir is cool for what it does, back when I did Erlang all we had was the prolog syntax. Now I love me some prolog, but dang Erlang code is ugly.

Ryan Radomski15:02:57

We don't have a centrally used actors library correct?

Ryan Radomski15:02:11

Prolog is extremely cool

mpenet15:02:31

I kinda like its syntax, it's simple. Elixir is full of macros

tbaldridge15:02:39

@radomski that's correct. Pulsar is about the closest I've seen. Although I think Akka works just fine with Clojure as well.

tbaldridge15:02:04

There's also a lot of confusion about what actors are. Erlang takes a rather loose view of actors

tbaldridge15:02:40

Notice that Erlang does a ton of stuff not mentioned here: https://en.wikipedia.org/wiki/Actor_model#Fundamental_concepts

Ryan Radomski15:02:43

I am understanding in terms of channels. Sort of a two-way channel with required locking

mpenet15:02:07

but there's no per actor gc in akka/pulsar right?

tbaldridge15:02:30

@mpenet correct, that's a big reason why Erlang is better at these sort of things.

mpenet15:02:52

yes, that's the big selling point of erlang imho

mpenet15:02:04

well, of BEAM

mpenet15:02:14

not many platforms have this

tbaldridge15:02:27

@radomski eh...CSP channels are a many-to-many queue with an optional buffer. Each message is delivered exactly once. If the channel doesn't have a buffer the channel works as a handoff between two threads.

tbaldridge15:02:03

Guaranteed delivery as well, so if you don't have a buffer, and a put succeeds, you know another thread has the message.
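Those channel semantics can be sketched with a stdlib queue (Python for illustration; queue.Queue is a buffered many-to-many queue, so a buffer of 1 only approximates core.async's unbuffered handoff):

```python
import queue
import threading

# Many producers, many consumers, each message delivered exactly once.
# The small buffer makes puts block when it is full -- backpressure.
ch = queue.Queue(maxsize=1)
results = []

def producer(ident):
    for i in range(3):
        ch.put((ident, i))   # blocks while the buffer is full

def consumer():
    for _ in range(6):
        results.append(ch.get())  # each item reaches exactly one consumer

threads = [threading.Thread(target=producer, args=(p,)) for p in ("a", "b")]
threads.append(threading.Thread(target=consumer))
for t in threads:
    t.start()
for t in threads:
    t.join()
```

When a put returns here, the message is in the channel or already taken — the delivery guarantee described above, in miniature.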

Ryan Radomski15:02:32

I understand that, but I'm still trying to grok actors

Ryan Radomski15:02:06

It looks like actors need a reference to each other

tbaldridge15:02:07

This definition is my favorite.

tbaldridge15:02:10

An actor is a computational entity that, in response to a message it receives, can concurrently: send a finite number of messages to other actors; create a finite number of new actors; designate the behavior to be used for the next message it receives. There is no assumed sequence to the above actions and they could be carried out in parallel.
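That definition can be sketched directly (Python for illustration; a toy with no supervision, remote naming, or per-actor GC — all names here are invented):

```python
import queue
import threading

class Actor:
    """A toy actor per the definition above: it owns a mailbox, and it
    can designate a new behavior for the next message in response to
    the current one."""
    def __init__(self, behavior):
        self.mailbox = queue.Queue()
        self.behavior = behavior
        threading.Thread(target=self._run, daemon=True).start()

    def send(self, msg):
        self.mailbox.put(msg)

    def _run(self):
        while True:
            msg = self.mailbox.get()
            try:
                if msg is None:          # poison pill: stop the actor
                    return
                nxt = self.behavior(self, msg)
                if nxt is not None:      # designate behavior for the next message
                    self.behavior = nxt
            finally:
                self.mailbox.task_done()

seen = []

def ignoring(actor, msg):
    pass                                 # drop everything from now on

def counting(actor, msg):
    seen.append(msg)
    if msg == "switch":
        return ignoring                  # change behavior via a message

a = Actor(counting)
for m in ("hello", "switch", "dropped", None):
    a.send(m)
a.mailbox.join()                         # wait until every message is handled
```

Note that the only handle anyone has on the actor is something to send to — there is no deref; you learn its state only by messaging it, which is the complexity point raised earlier.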

tbaldridge15:02:30

@radomski they need a name to send to; in many situations this is nothing more than an id

Ryan Radomski15:02:32

That's actually the one I'm staring at right now funny enough

tbaldridge15:02:33

Excellent discussion here, by one of the inventors of Actors systems, and the inventors of a few other languages: https://channel9.msdn.com/Shows/Going+Deep/Hewitt-Meijer-and-Szyperski-The-Actor-Model-everything-you-wanted-to-know-but-were-afraid-to-ask

Ryan Radomski15:02:35

When I send a message, I will always expect one back or there will be a failure?

orestis15:02:59

The low level send primitive has no hard guarantees. You have to build ACK in your protocols. OTP gives you some of those though.

orestis15:02:38

The thing is, you don’t know if the process you’re sending the message to is local or remote or across the network. Hence the uniformity of no guarantees.

orestis15:02:53

In practice, most people model with "generic servers", which OTP provides, and which have to acknowledge messages within a timeout (default is 5s).

orestis15:02:10

Of course, this means that calling code effectively blocks waiting for the response to come back. This works well if you had to block anyway, e.g. reading from a database in order to proceed.

orestis16:02:40

The way failures work is by declaring dependency trees between processes. If a process that you depend on dies (or becomes inaccessible), you get notified and most often propagate the crash upwards until someone restarts the whole tree.

Ryan Radomski16:02:58

Fault tolerance + decomplecting messaging from locality seem like 2 big pluses. I think I'm starting to understand why the model doesn't have core support though

Ryan Radomski16:02:03

https://github.com/onyx-platform/onyx and then I see this and want to play around with it

nooga21:02:31

@tbaldridge cool lisp-in-x vid, can’t wait for the next one