#clojure-uk
2020-11-19
dharrigan 07:11:55

Good Morning!

rlj 07:11:38

Mornin

👋 3
cddr 10:11:31

What's the etymology of "o/"? I've seen it before in IRC but never known exactly what it means

bronsa 10:11:09

it's a head with a raised arm waving hi

bronsa 10:11:14

back before emojis existed :)

cddr 10:11:36

Aha cool! Thanks

dominicm 10:11:30

There's no direct equivalent emoji, I'm surprised

lsnape 11:11:24

I also like the more jubilant double-arm raise \o/

🙌 6
joetague 12:11:34

Hopefully not too off-topic. I found Kleppmann’s Designing Data-Intensive Applications book very useful. Currently reading the notes for this: https://twitter.com/martinkl/status/1329051710019543041

alexlynham 12:11:25

he's a nice guy too, i had an interesting conversation with him on blockchain/distributed data protocols

cdpjenkins 12:11:27

Great book. I confess I only read part of it and only took in some of that but I learned an awful lot from it.

alexlynham 12:11:36

one of the only tech books i've ever found value in reading cover-to-cover tbh

👍 6
dominicm 12:11:41

I found it a real page turner

👍 6
alexlynham 14:11:03

as an aside this interview with him is ace

alexlynham 14:11:31

the section on blockchain tech is interesting too

Martin: Well, WebRTC is at a different level of the stack, since it’s intended mostly for connecting two people together who might be having a video call; in fact, the software we’re using for this interview right now may well be using WebRTC. And WebRTC does give you a data channel that you can use for sending arbitrary binary data over it, but building a full replication system on top of that is still quite a bit of work. And that’s something that Dat or IPFS do already.

You mentioned responsiveness — that is certainly one thing to think about. Say you wanted to build the next Google Docs in a decentralized way. With Google Docs, the unit of changes that you make is a single keystroke. Every single letter that you type on your keyboard may get sent in real time to your collaborators, which is great from the point of view of fast real-time collaboration. But it also means that over the course of writing a large document you might have hundreds of thousands of these single-character edits that accumulate, and a lot of these technologies right now are not very good at compressing this kind of editing data. You can keep all of the edits that you’ve ever made to your document, but even if you send just a hundred bytes for every single keystroke that you make and you write a slightly larger document with, say, 100,000 keystrokes, you suddenly now have 10 MB of data for a document that would only be a few tens of kilobytes normally. So we have this huge overhead for the amount of data that needs to be sent around, unless we get more clever at compressing and packaging up changes.

Rather than sending somebody the full list of every character that has ever been typed, we might just send the current state of the document, and after that we send any updates that have happened since. But a lot of these peer-to-peer systems don’t yet have a way of doing those state snapshots in a way that would be efficient enough to use them for something like Google Docs. This is actually an area I’m actively working on, trying to find better algorithms for synchronizing up different users for something like a text document, where we don’t want to keep every single keystroke because that would be too expensive, and we want to make more efficient use of the network bandwidth.
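A minimal Clojure sketch (this being #clojure-uk) of the back-of-the-envelope trade-off in the quote: replaying the full keystroke-level edit history to a new collaborator versus sending a state snapshot plus only the edits made since. The figures (~100 bytes per op, 100,000 keystrokes) are the illustrative numbers from the quote, and the function names are made up here; this is not Kleppmann's actual sync algorithm.

```
;; Illustrative numbers from the quote above -- assumptions, not measurements.
(def bytes-per-op 100)        ; assumed encoded size of one keystroke op
(def total-keystrokes 100000) ; the "slightly larger document" example

;; Naive sync: a new collaborator downloads the entire edit history.
(defn full-history-bytes []
  (* bytes-per-op total-keystrokes))

;; Snapshot sync: send the current document state once, then only the
;; ops made since that snapshot.
(defn snapshot-sync-bytes [snapshot-bytes ops-since-snapshot]
  (+ snapshot-bytes (* bytes-per-op ops-since-snapshot)))

(full-history-bytes)
;; => 10000000  ; the ~10 MB Kleppmann mentions

(snapshot-sync-bytes 50000 200)
;; => 70000    ; tens of KB: snapshot + recent edits
```

The hard part he points at isn't this arithmetic but making the snapshots themselves efficient to produce and exchange in a peer-to-peer setting, which is the open problem he describes working on above.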

alexlynham 14:11:32

he goes on to talk about compression and formal checking, it's interesting

👍 3