2022-09-15
# xtdb
I'm still thinking about supporting nodes being in multiple locations in my graph (https://clojurians.slack.com/archives/CG3AM2F7V/p1660853551111529). The complication is that some of a node's data should be location-specific. So, I've had a crazy idea – duplication 😅
{:xt/id 1 :children [2 3]}
{:xt/id 3 :children [4]}
{:xt/id 2 :wrapped-in [4] :global :a :local :b}
{:xt/id 4 :wraps 2 :global :a :local :c}
The reasoning is that multiple locations will be used infrequently (less than 1% of nodes) and that propagating the changes to all locations can be automated easily via :wraps and :wrapped-in. This should make graph traversal and queries simpler (and faster).
Is this a very bad idea?
idk if it is bad or not… but you could also make a “link document” like {:xt/id #uuid "…" :parent 4 :child 2}, or even {:xt/id {:p 4 :c 2} :parent 4 :child 2}, so the id also encodes the information and you can issue puts and not get duplicates (or delete without needing to know a separate uuid for the link)
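A minimal sketch of that link-document shape, assuming xtdb.api is aliased as xt and node is a started node (the names here are illustrative, not from the thread); because the id is deterministic, putting the same edge twice just overwrites the same doc:
(require '[xtdb.api :as xt])

;; write an edge document whose id encodes the edge itself
(xt/submit-tx node [[::xt/put {:xt/id {:p 4 :c 2} :parent 4 :child 2}]])

;; find the children of a parent by joining through the link docs
(xt/q (xt/db node)
      '{:find [child]
        :in [parent]
        :where [[link :parent parent]
                [link :child child]]}
      4)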
I'm not quite following. What I'm seeing is that this would introduce a join (two, actually)? Would this "link document" also be used for e.g. the 1 -> 2 relationship?
Maybe I should've said that I'm thinking of moving away from my current model (from the linked thread)
{:xt/id 1 :children [2 3]}
{:xt/id 3 :children [4]}
{:xt/id 2 :global :a :local :b}
{:xt/id 4 :wraps 2 :local :c}
because this complicates managing the graph to a surprising extent.
Maybe an equivalent way of thinking about your idea is that the :a in :global can become a document itself, yes? This I don't like because it would create two entities for each node, but it's only really necessary for less than 1% of nodes.
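If that reading is right, the factored alternative might look something like this (the :global-ref attribute and the id 100 are purely illustrative):
{:xt/id 2 :global-ref 100 :local :b}
{:xt/id 4 :wraps 2 :global-ref 100 :local :c}
{:xt/id 100 :global :a}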
the direct attribute referencing a child is most convenient if you want to do nested pull on the docs
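For reference, one level of nested pull over the direct :children attribute looks roughly like this in XTDB 1.x (node assumed started, as above):
;; pull node 1 together with selected attributes of its children
(xt/pull (xt/db node)
         [:xt/id :global :local {:children [:xt/id :local]}]
         1)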
> what is “best” depends heavily on the query and update patterns
Query-wise, I was fairly happy with my previous attempt. I've written my own 'pull', so there are no limitations there. But the update (and UI) part is proving much more challenging, and I periodically get fed up and try to think of a new model. I was avoiding duplication on principle, of course, but now it seems feasible (for my use case), and it seems I'm essentially sacrificing disk space for much simpler business logic. I wanted to verify with the community that the idea is not as silly as it sounds to me 🙂
so you need to update all related docs in the same tx, and match all of them to make sure the :wrapped-in and :wraps are consistent? or perhaps have a tx function for it
Yes, that sounds very doable. And more importantly – it's one-and-done. Previously, I had to write each CRUD op with great care.
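A sketch of such a transaction function under the duplicated model from earlier; the function id :update-global and the argument names are illustrative:
;; install the function as a document containing quoted code
(xt/submit-tx node
  [[::xt/put
    {:xt/id :update-global
     :xt/fn '(fn [ctx eid new-global]
               (let [db (xtdb.api/db ctx)
                     doc (xtdb.api/entity db eid)
                     dups (map #(xtdb.api/entity db %) (:wrapped-in doc))]
                 ;; rewrite the node and every duplicate atomically
                 (vec (for [d (cons doc dups)]
                        [:xtdb.api/put (assoc d :global new-global)]))))}]])

;; later: update node 2 and all of its locations in one call
(xt/submit-tx node [[::xt/fn :update-global 2 :new-value]])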
Hi all, the 1.22.0 release is out! See https://github.com/xtdb/xtdb/releases/tag/1.22.0
Thank you to everyone who has helped with raising issues and testing 🙏
And if you're on Twitter... https://twitter.com/xtdb_com/status/1570383459406974978
It seems there's more dev focus toward RocksDB than LMDB; perhaps we should switch to Rocks as well, now that the indexes need to be recreated anyway. Is that a fair assessment?
It's probably worth benchmarking for your use-case (e.g. LMDB can sometimes be multiple times faster for reads), but I think RocksDB wins as the better option for the average scenario, mostly because it has built-in compression and a more well-trodden scale-up story
fair enough, should be easy to do as I could spin up both rocksdb and lmdb instances from the same golden stores and do some measuring
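A rough sketch of that comparison with local KV stores; the golden-store/checkpoint wiring is omitted, the paths are illustrative, and query stands in for the workload under test:
(require '[xtdb.api :as xt]
         '[clojure.java.io :as io])

(defn kv-node [module dir]
  (xt/start-node
   {:xtdb/index-store    {:kv-store {:xtdb/module module :db-dir (io/file dir "idx")}}
    :xtdb/document-store {:kv-store {:xtdb/module module :db-dir (io/file dir "docs")}}
    :xtdb/tx-log         {:kv-store {:xtdb/module module :db-dir (io/file dir "log")}}}))

(with-open [rocks (kv-node 'xtdb.rocksdb/->kv-store "/tmp/bench-rocks")
            lmdb  (kv-node 'xtdb.lmdb/->kv-store "/tmp/bench-lmdb")]
  ;; after loading the same data into both nodes, time the same query
  (time (xt/q (xt/db rocks) query))
  (time (xt/q (xt/db lmdb) query)))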
There have been some Jackson deps changes with the 1.22 release, apparently; I tried simply updating the deps and my project won't start:
Caused by: java.lang.NoClassDefFoundError: com/fasterxml/jackson/core/exc/StreamReadException
Maybe something else (not XT) in your classpath is overriding it with an older version?
Caused by: xtdb.IllegalArgumentException: Error locating module
at xtdb.error$illegal_arg.invokeStatic(error.clj:12)
at xtdb.error$illegal_arg.invoke(error.clj:3)
at xtdb.system.ModuleRef$fn__32870.invoke(system.clj:109)
at xtdb.system.ModuleRef.prepare_dep(system.clj:106)
at xtdb.system$opts_reducer$f__32908.invoke(system.clj:131)
at xtdb.system$opts_reducer$f__32908.invoke(system.clj:130)
at clojure.lang.PersistentVector.reduce(PersistentVector.java:343)
at clojure.core$reduce.invokeStatic(core.clj:6827)
at clojure.core$reduce.invoke(core.clj:6810)
at xtdb.system$prep_system.invokeStatic(system.clj:156)
at xtdb.system$prep_system.invoke(system.clj:141)
at xtdb.system$prep_system.invokeStatic(system.clj:142)
at xtdb.system$prep_system.invoke(system.clj:141)
at xtdb.api$start_node.invokeStatic(api.clj:256)
at xtdb.api$start_node.invoke(api.clj:243)
...
Caused by: Syntax error compiling deftype* at (xtdb/rocksdb.clj:111:1).
at clojure.lang.Compiler.analyzeSeq(Compiler.java:7115)
at clojure.lang.Compiler.analyze(Compiler.java:6789)
at clojure.lang.Compiler.analyze(Compiler.java:6745)
at clojure.lang.Compiler$BodyExpr$Parser.parse(Compiler.java:6118)
at clojure.lang.Compiler$LetExpr$Parser.parse(Compiler.java:6436)
at clojure.lang.Compiler.analyzeSeq(Compiler.java:7107)
at clojure.lang.Compiler.analyze(Compiler.java:6789)
at clojure.lang.Compiler.analyze(Compiler.java:6745)
at clojure.lang.Compiler$BodyExpr$Parser.parse(Compiler.java:6120)
at clojure.lang.Compiler$FnMethod.parse(Compiler.java:5467)
at clojure.lang.Compiler$FnExpr.parse(Compiler.java:4029)
at clojure.lang.Compiler.analyzeSeq(Compiler.java:7105)
at clojure.lang.Compiler.analyze(Compiler.java:6789)
at clojure.lang.Compiler.eval(Compiler.java:7174)
at clojure.lang.Compiler.load(Compiler.java:7636)
at clojure.lang.RT.loadResourceScript(RT.java:381)
at clojure.lang.RT.loadResourceScript(RT.java:372)
at clojure.lang.RT.load(RT.java:459)
at clojure.lang.RT.load(RT.java:424)
at clojure.core$load$fn__6839.invoke(core.clj:6126)
at clojure.core$load.invokeStatic(core.clj:6125)
at clojure.core$load.doInvoke(core.clj:6109)
at clojure.lang.RestFn.invoke(RestFn.java:408)
at clojure.core$load_one.invokeStatic(core.clj:5908)
at clojure.core$load_one.invoke(core.clj:5903)
at clojure.core$load_lib$fn__6780.invoke(core.clj:5948)
at clojure.core$load_lib.invokeStatic(core.clj:5947)
at clojure.core$load_lib.doInvoke(core.clj:5928)
at clojure.lang.RestFn.applyTo(RestFn.java:142)
at clojure.core$apply.invokeStatic(core.clj:667)
at clojure.core$load_libs.invokeStatic(core.clj:5985)
at clojure.core$load_libs.doInvoke(core.clj:5969)
at clojure.lang.RestFn.applyTo(RestFn.java:137)
at clojure.core$apply.invokeStatic(core.clj:667)
at clojure.core$require.invokeStatic(core.clj:6007)
at clojure.core$require.doInvoke(core.clj:6007)
at clojure.lang.RestFn.applyTo(RestFn.java:137)
at clojure.core$apply.invokeStatic(core.clj:665)
at clojure.core$serialized_require.invokeStatic(core.clj:6079)
at clojure.core$requiring_resolve.invokeStatic(core.clj:6088)
at clojure.core$requiring_resolve.invoke(core.clj:6082)
at xtdb.system.ModuleRef$fn__32870.invoke(system.clj:107)
... 64 more
Caused by: java.lang.RuntimeException: No such var: kv/KvStoreTx
at clojure.lang.Util.runtimeException(Util.java:221)
at clojure.lang.Compiler.resolveIn(Compiler.java:7388)
at clojure.lang.Compiler.resolve(Compiler.java:7358)
at clojure.lang.Compiler$NewInstanceExpr.build(Compiler.java:8015)
at clojure.lang.Compiler$NewInstanceExpr$DeftypeParser.parse(Compiler.java:7935)
at clojure.lang.Compiler.analyzeSeq(Compiler.java:7107)
... 105 more
got this when trying to start the node
With the jackson upgrade, did you already have some other dependency? Or does this feel like a bug?
idk about the jackson thing, we are getting it indirectly; I think it's more the usual JVM-land issue that many libraries can't grow gracefully
got everything indexed (some 500k docs) on my local test database, ran some queries… it seems to work fine
traced our deps, and it appears that cambium.logback.json depends on Jackson 2.12
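One way to resolve that kind of clash is to pin the Jackson artifacts at the top level of deps.edn, where they take precedence over the 2.12 versions pulled in transitively. StreamReadException only appeared in Jackson 2.13, which fits an older 2.12 jar winning the resolution; "2.13.4" below is an assumed version, so match whatever xtdb 1.22.0 actually brings in:
;; deps.edn: top-level deps override transitive versions
{:deps
 {com.fasterxml.jackson.core/jackson-core        {:mvn/version "2.13.4"}
  com.fasterxml.jackson.core/jackson-databind    {:mvn/version "2.13.4"}
  com.fasterxml.jackson.core/jackson-annotations {:mvn/version "2.13.4"}}}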