This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2023-02-08
Anyone use Datalevin for implementing an ECS 👀 or is there a better alternative?
Not many people write games in Clojure. But there are different kinds of games. People were not writing games in Java either, and yet we got Minecraft. So why not try?
Will try it first; I've already implemented an initial version. I like how efficient bulk updates can be.
It will get better. There is still a lot of low-hanging fruit for performance improvements that I try to pick in every release.
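The ECS idea above maps naturally onto Datalevin's Datalog API: each game entity becomes a Datalog entity and each component an attribute, so a "system" pass is just a query. Here is a minimal sketch assuming the datalevin dependency is on the classpath; the attribute names, values, and DB path are all hypothetical:

```clojure
(require '[datalevin.core :as d])

;; Open (or create) a Datalog DB; the path is illustrative.
(def conn (d/get-conn "/tmp/ecs-demo-db"))

;; Each game entity is a Datalog entity; each component is an attribute.
(d/transact! conn [{:component/position [0 0] :component/health 100}
                   {:component/position [5 3] :component/health 80}])

;; A "system" pass: fetch every entity carrying a position component.
(def positions
  (d/q '[:find ?e ?pos
         :where [?e :component/position ?pos]]
       (d/db conn)))

(d/close conn)
```

Bulk updates fit the same shape: a single `d/transact!` call with a large vector of entity maps is transacted atomically.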
Question about backups + DB dumps: is there any downside to invoking the functions defined in datalevin.main from a deps.edn alias instead of the standalone CLI? I know I need to add a few :extra-deps to load that namespace, but I figured this would be an effective way of invoking the copy and dump commands in a more platform-independent manner that ensures the backup always reflects the version of Datalevin in deps.edn instead of whatever version of dtlv I happen to have installed. Any pitfalls to watch out for with this approach?
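For reference, one hedged sketch of such an alias (the alias name and version number are illustrative, not prescriptive — pin whatever version your main deps.edn uses):

```clojure
;; deps.edn — a hypothetical :dtlv alias that runs datalevin.main directly,
;; so dumps/copies always use the project's pinned Datalevin version.
{:aliases
 {:dtlv {:extra-deps {datalevin/datalevin {:mvn/version "0.8.5"}}
         :main-opts ["-m" "datalevin.main"]}}}
```

It would then be invoked as `clojure -M:dtlv ...` with the same subcommands and arguments the standalone dtlv binary accepts.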
DB version upgrade question: I have some values that are encoded with non-default tagged literals in my DB (this may not be a great idea in general, but for now I have to live with that decision). When trying to read those back in with datalevin.main/load, I get a parse error. For this use case, would it be advisable to make datalevin.main/data-readers dynamic so users can rebind it like *data-readers* in other contexts? Or should I just do some post-processing on the decanted DB after using datalevin.main/dump, to clean up this data before I import it into another Datalevin DB?
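If the post-processing route is taken, plain clojure.edn can re-read the dumped data with explicit readers for the custom tags, without touching datalevin.main at all. A minimal sketch — the #my/interval tag and its parse function are hypothetical stand-ins for whatever tagged literals the DB actually contains:

```clojure
(require '[clojure.edn :as edn])

(defn read-with-tags
  "Read an EDN string, resolving a custom tag and preserving unknown tags."
  [s]
  (edn/read-string
   {:readers {'my/interval (fn [[a b]] {:start a :end b})}
    ;; keep unrecognized tags as tagged-literal objects instead of throwing
    :default tagged-literal}
   s))

(read-with-tags "#my/interval [1 5]")
;; => {:start 1, :end 5}
```

The same :readers map could be applied line by line over a dump file before re-importing the cleaned data.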