Drew Verlee01:04:11

Here is a post on using SQLite over HTTP. It talks about storing the DB on the client's/user's system, which I think is a really interesting idea. However, I don't understand how this changes the scaling story unless he is doing peer-to-peer. Also, I'll need to read it again to figure out how a client would know which new data to get.


It scales because it is serving static content. On the front end, SQLite tries to page data in from disk, but those reads are actually served via HTTP range requests.
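A rough sketch of that mechanism: SQLite stores its data in fixed-size pages, so a read of page N maps directly to a byte range that any static file host can serve via an HTTP Range request. The helper below is illustrative (the function name is made up, and it assumes the common 4 KiB page size):

```python
def range_header_for_page(page_number: int, page_size: int = 4096) -> str:
    """Return the HTTP Range header value covering one SQLite page.

    SQLite pages are 1-indexed, so page 1 starts at byte offset 0.
    """
    start = (page_number - 1) * page_size
    end = start + page_size - 1  # HTTP ranges are inclusive on both ends
    return f"bytes={start}-{end}"

# Page 1 holds the database header plus the first page of content.
print(range_header_for_page(1))   # bytes=0-4095
print(range_header_for_page(3))   # bytes=8192-12287
```

The server never runs any query logic; it just answers Range requests against a static file, which is why this is as cheap to host as any other static content.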

Drew Verlee01:04:13

How does that scale differently than a more traditional method? For example, if someone adds a new ANSI image, it has to travel from their client to a central server that other clients are aware of, then get picked up. Using SQL can streamline the communication, but I don't see how it changes the story around how much you have to hold on some set of master/central databases.

Drew Verlee01:04:03

I'll re-read it tomorrow and it will probably all make sense.


They mean scale as in "I can serve a lot of traffic". The compute for queries isn't happening on your database server or on your API. Serving static files is very cheap compared to that.

👍 1
Drew Verlee02:04:46

Thanks Jimmy, that makes sense.

Martynas Maciulevičius07:04:12

But well... would you download a 5 GB database just to view 20 records? Either the databases have to be compiled for each user, or the whole webpage has to be reasonably small. For instance, it could work for a personal blog or something like 4clojure where you have a bunch of small problems to solve. Also, apps that "require constant network connection" do this for data-collection reasons, not because it's not possible otherwise. It is possible; we saw this shift with Google Maps when they had a real offline version and then stopped doing it. Also, if we serve a SQLite DB and then want to recompile it, it either has to be our exact DB or we have to produce it. So it brings up a question about event ingestion, and SQLite alone probably can't handle that. So the front end not only needs to download the binary versioned blob, but also some non-ingested events, so that it can figure out what happened after the DB was produced.


There's also this absurd approach: "Persisting" SQLite database on IndexedDB in blocks, in a way that it can be faster than using IndexedDB alone!


@U028ART884X That's the clever thing about this. You only download the pages you need for the queries on demand. No big upfront download necessary
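To make that concrete: even when the database file spans thousands of pages, an indexed lookup only walks the handful of B-tree pages on the path from root to leaf, so over HTTP range requests only a few KiB get fetched. A minimal illustration using Python's built-in `sqlite3` module (the table and row counts are made up for the example):

```python
import sqlite3

# Build a database that spans many pages.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE posts (id INTEGER PRIMARY KEY, body TEXT)")
conn.executemany(
    "INSERT INTO posts (id, body) VALUES (?, ?)",
    [(i, "x" * 200) for i in range(10_000)],
)
conn.commit()

page_size = conn.execute("PRAGMA page_size").fetchone()[0]
page_count = conn.execute("PRAGMA page_count").fetchone()[0]
total_bytes = page_size * page_count
print(f"DB spans {page_count} pages ({total_bytes} bytes)")

# A primary-key lookup walks the table's B-tree: its depth grows
# logarithmically with row count, so only a few pages are touched
# no matter how large the file is.
row = conn.execute("SELECT body FROM posts WHERE id = ?", (1234,)).fetchone()
print(len(row[0]))  # 200
```

Here the whole file is megabytes, but the single-row query never needs to read most of it; that's the part that survives being moved to range requests over a static host.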

Martynas Maciulevičius15:04:59

So you're talking about personalized DBs. OK. But if you download the pages you need, then you simply do one large select. Yes, you do a single read from the disk, but you would do that anyway for each user that may need the data. Also, that could be used to render "viral" articles. But how many of those viral ones are there?


Can anyone versed in the deep lore tell me which became mainstream first - XML or Java?


My programming career started in 95 right when all this stuff was coming out. "Became mainstream" is a bit subjective, so I'll indulge in reminiscing.


My company implemented a real product in Java, released in 96 on jdk1.1. We were starting to look at XML shortly after that. In my judgement, Java exploded first, and in some ways it kinda dragged XML along with it towards the end of the 90s especially as "Enterprise Java" became a thing, and putting EVERYTHING into XML files was all the rage.

Serafeim Papastefanos06:04:59

Yes I also believe that Java was first

Cora (she/her)22:04:23

does html predate java?

Nom Nom Mousse03:04:50

I would have guessed them to be much further apart!