
@viebel, hello, and a question from a fan of your book (which I haven't finished yet, so this may already be answered). My concern is that when we aggregate data (herd it together, per the dictionary definition), it quickly becomes highly denormalized, and then it gets hard to update. How would you deal with this?

Yehonathan Sharvit 09:07:18

In my experience, inside applications we usually prefer to keep data denormalized: it makes reading the data much simpler. The cost is that when updating, you need to update it in multiple places. I'd be interested to hear thoughts from other folks.
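A minimal sketch of the trade-off being described, using plain Python dicts (all names here are invented for illustration): the author's name is copied into every book record, so reads are trivial, but renaming the author means touching every copy.

```python
# Denormalized data: the author's name is duplicated in each book record.
books = [
    {"title": "Data-Oriented Programming", "author_name": "Yehonathan Sharvit"},
    {"title": "Another Book", "author_name": "Yehonathan Sharvit"},
]

def rename_author(books, old_name, new_name):
    """Updating denormalized data means updating every record that holds a copy."""
    return [
        {**b, "author_name": new_name} if b["author_name"] == old_name else b
        for b in books
    ]

updated = rename_author(books, "Yehonathan Sharvit", "Y. Sharvit")
```

Reading stays a single lookup per record, but every write to a shared value fans out across the collection.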


What about making a kind of reactive layer, where the aggregated maps are essentially subscriptions and calculations derived from the base data, just like in the "Out of the Tar Pit" paper?

Yehonathan Sharvit 10:07:21

You could. I think that's what re-frame does.
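A rough sketch of the derived-data idea (not re-frame's actual API, which is ClojureScript): keep the base data normalized in a single app state, and recompute aggregated views from it on demand instead of storing denormalized copies. An update in one place is then visible everywhere the view is read.

```python
# Normalized base state: each fact lives in exactly one place.
state = {
    "authors": {1: {"name": "Yehonathan Sharvit"}},
    "books": {10: {"title": "Data-Oriented Programming", "author_id": 1}},
}

def books_with_authors(state):
    """Derived view: joins books to authors. Never stored, always recomputed."""
    return [
        {"title": b["title"], "author": state["authors"][b["author_id"]]["name"]}
        for b in state["books"].values()
    ]

# A single update to the base data...
state["authors"][1]["name"] = "Y. Sharvit"
# ...is reflected in every read of the derived view.
view = books_with_authors(state)
```

Re-frame adds memoization and change propagation on top of this, so views are only recomputed when their inputs change, but the core idea is the same.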


What about storing data in DataScript instead of maps, like re-posh does? It's a step away from the principle of using mostly generic data structures, but it lets you herd data without a schema for a while, when it's not yet clear what exactly will work.
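To illustrate the idea (this is only a toy sketch, nothing like DataScript's actual datom/query API): store facts as entity-attribute-value triples rather than nested maps, so you can accumulate heterogeneous data without committing to a map shape up front.

```python
# Facts as (entity, attribute, value) triples, DataScript-style in spirit.
facts = [
    (1, "book/title", "Data-Oriented Programming"),
    (1, "book/author", 2),
    (2, "author/name", "Yehonathan Sharvit"),
]

def query(facts, attr):
    """Return (entity, value) pairs for a given attribute."""
    return [(e, v) for (e, a, v) in facts if a == attr]

def entity(facts, eid):
    """Rebuild a map view of one entity from its triples."""
    return {a: v for (e, a, v) in facts if e == eid}
```

New attributes are just new triples; no existing structure has to change, which is what makes schema-less herding comfortable early on.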


Someone on Reddit mentioned Fulcro, which recalculates derived data. That's probably what I'm looking for. To me, using plain maps seems easy at first, but it gets much harder quite soon.