#off-topic
2019-04-24
john 13:04:21

Apparently, since the 1990s, network latency has been dropping exponentially faster than RAM access latency

john 13:04:07

So that raises the question: are our insecurities about treating "remote" state as "local" still well-founded? Network access latency today seems to be about as low as local memory access latency was in the 90s.

john 13:04:39

I mean, in reality, nothing within a computer is "local." Locality is just a category for "not very remote"

Daniel Hines 13:04:30

Alex Miller recently told me on this channel that this is a trap (if I understood him correctly)

🦑 4
john 13:04:50

Yeah, that's a super common refrain I think

john 13:04:19

I'm trying to push back on that philosophy a little bit... See if it needs to be updated

john 13:04:33

And I'm willing to be totally wrong on this hunch... I just think a re-assessment is warranted in 2019.

john 13:04:13

I mean, if you want to crunch some numbers ASAP, you want to be in RAM, or preferably in L1 or L2, sure

john 13:04:05

If I'm reading the above chart correctly, these days the gap between L1 and RAM latency is not much different from the gap between RAM and network latency. Yet for our purposes, we don't mind saying that L1 and RAM are both "local enough"
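For a rough sanity check of that comparison, here's a tiny Python sketch. The latency figures are assumed, order-of-magnitude guesses (in the spirit of the well-known "latency numbers every programmer should know" lists), not measurements from the chart above:

```python
# Assumed order-of-magnitude latencies in nanoseconds (illustrative guesses,
# not measurements; real numbers vary a lot by hardware and by network).
L1_NS = 1                 # L1 cache reference
RAM_NS = 100              # main memory reference
NET_NS = 500_000          # round trip within the same datacenter (~0.5 ms)

ram_over_l1 = RAM_NS / L1_NS    # how much slower RAM is than L1
net_over_ram = NET_NS / RAM_NS  # how much slower the network is than RAM

print(f"RAM vs L1:      {ram_over_l1:.0f}x")
print(f"network vs RAM: {net_over_ram:.0f}x")
```

With these particular guesses the network/RAM gap is still wider than the RAM/L1 gap, but the point stands that both are ratios on the same scale; plug in your own numbers.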

john 14:04:57

One salient data point: Google Stadia https://www.notebookcheck.net/Hands-on-Google-Stadia-testers-report-noticeable-lag-and-mild-image-artifacts.414778.0.html which can stream games at latencies just barely below human perception. So perhaps human perception should be the measure of whether a piece of "state" may be considered "local" or not.

Daniel Hines 14:04:47

I’m a total noob when it comes to these CS basics. But one difference that comes to mind between RAM and network packets is the chance for loss/corruption is higher for the latter, correct?

john 14:04:22

Right, reliability is a factor. "Locality" probably assumes a much higher reliability rate than the internet offers. But again, wrt human perception, a Google Stadia user doesn't care, since they can't play the game without the internet anyway.

john 15:04:23

We essentially have to draw the line at "locality" somewhere. And does that line keep on receding as memory access gets faster? Or, if network access one day becomes as fast as present day RAM access, will we then call that "local?" And on that note, looking backwards, has network latency already surpassed the speed of what we used to call local many years ago?

john 15:04:13

The only constant there seems to be human perception

➕ 4
andy.fingerhut 15:04:17

If all participating computers on the network remain up, and it's OK for your system to come to a screeching halt (or be restarted from the beginning) when one or more of them fail, then local vs. remote is no big deal. "Local" is already multiple chips in a system, and you probably never worry about making your software recover if one of the RAM chips fails in the middle.

andy.fingerhut 15:04:54

It's coping with partial failure, while still making the system continue to make progress with some assurance that its behavior is correct, that makes distributed systems much trickier to deal with.

andy.fingerhut 15:04:09

"Local" software would have the same challenges if you wanted to continue to make progress in the face of partial failures.

john 15:04:36

Agreed. But hardware does have a good amount of error detection and correction, giving us the illusion that all our computer pieces exist reliably and synchronously at one place and one time. TCP/IP takes us part of the way there, but yeah, I'd agree that the underlying platform that makes networked state appear local would need to have enough redundancy to maintain the illusion of locality.

andy.fingerhut 18:04:49

Yeah, the main place TCP/IP doesn't take you is when the connection breaks, or is delayed for 20 minutes, and you can't tell the difference between that and the other end being dead. Exactly the "trying to correctly make progress in the face of partial failure" difficulty.
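To illustrate that ambiguity, here's a minimal Python sketch (my framing, not anything from the thread): from this side of the wire, a timed-out connection attempt looks identical whether the peer is dead or the network is merely slow.

```python
import socket

def probe(host: str, port: int, timeout_s: float = 2.0) -> str:
    """Classify a connection attempt. Note the fundamental ambiguity:
    a timeout could mean the peer is dead OR the network is just slow;
    the caller has no way to distinguish the two cases."""
    try:
        with socket.create_connection((host, port), timeout=timeout_s):
            return "reachable"
    except socket.timeout:
        return "timeout: dead peer or slow network? can't tell"
    except OSError:
        return "unreachable or refused"
```

For example, `probe("127.0.0.1", 9)` will typically return "unreachable or refused" (nothing usually listens on the discard port), while a firewalled host that silently drops packets produces the ambiguous timeout case.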

hiredman 18:04:49

tcp, through a certain lens, looks very similar to some parts of consensus protocols, a sort of 2pc on an ordered stream of packets, and it has the same byzantine fault issues that consensus protocols have
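To make that "agreement on an ordered stream" framing concrete, here's a toy Python sketch (my own analogy, not real TCP): a receiver acknowledges only the longest gap-free prefix of sequence numbers, the way TCP's cumulative ACK commits a prefix of the byte stream.

```python
def cumulative_ack(received: set[int]) -> int:
    """Return the highest n such that every sequence number 1..n has arrived.
    Anything after the first gap is buffered but not yet 'agreed on',
    roughly how TCP's cumulative ACK only acknowledges a contiguous prefix."""
    n = 0
    while n + 1 in received:
        n += 1
    return n

print(cumulative_ack({1, 2, 3, 5, 6}))  # gap at 4, so only 1..3 are acked -> 3
```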

hiredman 18:04:40

with the google stadia example, I doubt there is any local state, the client is just a dumb terminal

andy.fingerhut 18:04:07

The simplest distributed system!

hiredman 18:04:21

so it doesn't tell you anything about the latency involved when peers have to agree to make progress

💯 4
john 19:04:59

Yeah, in the Stadia example, much of the complex crunching may be taking place on a single machine, so it may not be a perfect example for my argument.

john 20:04:40

And yeah, I don't think Stadia is using TCP