This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2022-03-09
I wonder if there's a way to do limits and offsets in Asami queries? I think there is some Datomic syntax using an outer map of {:query query :args args}
but it doesn’t seem that Asami supports this.
for now I'm just doing drop and take on a memoized function result to achieve something similar
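The workaround above can be sketched in a few lines. This is a minimal, self-contained illustration: run-query and paginate are hypothetical names standing in for an actual Asami query call (e.g. asami.core/q) and its post-processing, and the fixed result seq exists only so the example runs on its own.

```clojure
;; Hypothetical stand-in for an Asami query call; returns a fixed
;; (lazy) result sequence so the example is self-contained.
(defn run-query []
  (map (fn [i] [i (str "item-" i)]) (range 10)))

(defn paginate
  "Apply :offset and :limit to a (possibly lazy) result sequence
  using plain drop/take, emulating SQL-style LIMIT/OFFSET."
  [results {:keys [offset limit] :or {offset 0}}]
  (cond->> (drop offset results)
    limit (take limit)))

(paginate (run-query) {:offset 2 :limit 3})
;; => ([2 "item-2"] [3 "item-3"] [4 "item-4"])
```

Because drop and take are lazy, this composes cleanly with a lazy query result; memoizing the underlying query (as described above) avoids re-running it for each page at the cost of holding the full result.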
There isn't, but to be honest, that's how I'd implement it. It's more of an issue if it's a query syntax over a network connection (which will happen when I do SPARQL), but since it's embedded only, it doesn't apply.
Ok! No problem. I implemented sorting too. Thankfully, it wasn’t too hard in my case.
Actually, I wouldn't memoize… (when implementing internally) It would use core.cache on the query to get the plan, but the result would not be stored, because it's lazy. That means that the drop portion of a drop/take could involve re-execution, but this allows for large results that can't fit in memory.
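The design described here (cache the query plan, never the results) can be sketched as follows. This is an assumption-laden illustration, not Asami's actual internals: plan-query and execute-plan are hypothetical stand-ins, and a plain atom-backed map stands in for core.cache.

```clojure
;; Sketch: cache only the query *plan*, never the result seq, so large
;; lazy result sets are never pinned in memory. A real implementation
;; would use clojure.core.cache here; an atom map stands in.
(def plan-cache (atom {}))

(defn plan-query
  "Hypothetical expensive planning step."
  [query]
  {:plan (str "plan-for-" query)})

(defn execute-plan
  "Hypothetical execution step. Returns a lazy seq and runs afresh on
  every call — results are deliberately not cached."
  [plan db]
  (map vector (range 5) (repeat (:plan plan))))

(defn q*
  "Query entry point: reuse a cached plan when present, else plan and
  cache it, then (re-)execute lazily."
  [query db]
  (let [plan (or (get @plan-cache query)
                 (let [p (plan-query query)]
                   (swap! plan-cache assoc query p)
                   p))]
    (execute-plan plan db)))

;; drop/take over the results re-executes the plan rather than reading
;; a stored result set:
(take 2 (drop 1 (q* "[:find ?e ...]" nil)))
```

The trade-off is exactly the one stated above: paging deep into the results repeats work, but memory use stays bounded by the page, not the full result set.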