This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2022-07-28
Channels
- # asami (1)
- # aws (9)
- # babashka (16)
- # beginners (32)
- # calva (2)
- # clj-kondo (20)
- # cljdoc (6)
- # clojure (35)
- # clojure-dev (25)
- # clojure-europe (11)
- # clojure-india (1)
- # clojure-norway (2)
- # clojure-spec (26)
- # clojure-uk (1)
- # clojurescript (41)
- # conjure (3)
- # css (9)
- # cursive (18)
- # data-oriented-programming (6)
- # data-science (2)
- # emacs (47)
- # events (1)
- # fulcro (15)
- # graalvm (30)
- # gratitude (7)
- # honeysql (27)
- # inf-clojure (4)
- # introduce-yourself (2)
- # lsp (129)
- # malli (7)
- # missionary (21)
- # nbb (17)
- # off-topic (18)
- # re-frame (6)
- # releases (1)
- # shadow-cljs (120)
- # vim (7)
- # xtdb (15)
@viebel, hello and a question from a fan of your book, which I haven't finished yet, so the answer may already be in there. My concern is that when we aggregate data (herd it together, per the dictionary definition), it quickly becomes highly denormalized, and then it gets hard to update. How would you deal with this?
In my experience, inside applications we usually prefer to keep data denormalized, because it makes reading the data much simpler. The cost is that when updating data, one needs to update it in multiple places. I'd be interested to hear thoughts from other folks.
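A minimal sketch of the trade-off being described, using hypothetical book/author data (the names and shapes are invented for illustration): reading a denormalized nested map is a direct `get-in`, but because the author's details are copied into every book, renaming the author means walking every copy.

```clojure
;; Hypothetical denormalized data: each book embeds a copy of its author.
(def books
  [{:title "Data-Oriented Programming" :author {:id 1 :name "Yehonathan"}}
   {:title "Another Book"              :author {:id 1 :name "Yehonathan"}}])

;; Reading is simple and direct:
(get-in (first books) [:author :name])
;; => "Yehonathan"

;; But updating the author's name requires touching every copy:
(mapv (fn [book]
        (update book :author
                (fn [a] (if (= 1 (:id a)) (assoc a :name "Y. Sharvit") a))))
      books)
```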
What about building a kind of reactive layer, where the aggregated maps are essentially derived views, made of subscriptions and calculations over the underlying data, just like in the "Out of the Tar Pit" paper?
You could. I think that's what re-frame does.
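The derived-view idea from the thread can be sketched like this (a hypothetical illustration, not re-frame's actual API): keep one normalized source of truth and compute the denormalized, aggregated view as a pure function of it. In re-frame, subscriptions play the role of `books-view`, recomputing such views reactively when the app-db changes.

```clojure
;; Normalized source of truth: authors stored once, books reference them by id.
(def db
  {:authors {1 {:name "Yehonathan"}}
   :books   [{:title "Data-Oriented Programming" :author-id 1}]})

;; A derived view: joins each book with its author on demand.
(defn books-view [{:keys [authors books]}]
  (mapv (fn [book] (assoc book :author (authors (:author-id book))))
        books))

(books-view db)
;; => [{:title "Data-Oriented Programming" :author-id 1
;;      :author {:name "Yehonathan"}}]

;; Updating the author's name now happens in exactly one place,
;; and every derived view picks it up:
(books-view (assoc-in db [:authors 1 :name] "Y. Sharvit"))
```

This is the essential-state/derived-state split that "Out of the Tar Pit" advocates: only the normalized data is stored; everything aggregated is recomputed (and, in re-frame, memoized) from it.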