This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2023-12-15
Channels
- # adventofcode (46)
- # announcements (3)
- # aws (7)
- # babashka (47)
- # beginners (86)
- # calva (40)
- # cider (8)
- # clj-kondo (22)
- # clojure (63)
- # clojure-europe (16)
- # clojure-hungary (3)
- # clojure-nl (1)
- # clojure-norway (46)
- # clojure-sweden (1)
- # clojure-uk (3)
- # clojuredesign-podcast (2)
- # conjure (4)
- # datalevin (1)
- # events (1)
- # fulcro (5)
- # graalvm (4)
- # honeysql (8)
- # hyperfiddle (15)
- # music (1)
- # off-topic (5)
- # pathom (7)
- # pedestal (1)
- # polylith (3)
- # portal (19)
- # quil (1)
- # re-frame (36)
- # releases (1)
- # specter (3)
- # sql (3)
- # timbre (11)
- # tools-deps (4)
- # xtdb (55)
@neumann @nate I loved this episode. Your ability to describe coding and calling external APIs that are slow, scattered, and nested, through narration alone, is uncanny and awesome. 🙂 I found myself constantly nodding as you described making calls to the database and the MAM — especially when you described having to reassemble denormalized data and cache it locally. My own example of needing to cache data was using OpenAI’s API for a huge batch operation, as well as calling some sort of readability-score generation — in the latter case, I cached not because of cost, but because it was so computationally expensive. I calculated the readability score for hundreds of commits of my book manuscript, and it took tens of seconds per call. I remember generating all the scores using pmap, which took tens of minutes, and then caching them away so I never needed to do that computation again. Keep up the great work!
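(The pattern described above — fan out an expensive computation with pmap, then cache the results locally so it never runs twice — might be sketched in Clojure roughly like this. `readability-score` is a hypothetical stand-in for the real slow call, and the EDN-file cache is just one possible caching strategy.)

```clojure
(require '[clojure.java.io :as io])

;; Hypothetical stand-in for the real computation, which reportedly
;; took tens of seconds per manuscript commit.
(defn readability-score [text]
  (count text))

(defn score-all
  "Score every text in parallel with pmap, caching results to an EDN
  file so subsequent runs read from disk instead of recomputing."
  [texts cache-file]
  (if (.exists (io/file cache-file))
    (read-string (slurp cache-file))                ; reuse cached scores
    (let [scores (zipmap texts (pmap readability-score texts))]
      (spit cache-file (pr-str scores))             ; cache for next time
      scores)))
```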
@U6VPZS1EK Thanks so much. That's a great example! Saving all that time, but still having the results at hand in the REPL. Very cool! Thanks for sharing!