
I'm still thinking about supporting nodes being in multiple locations in my graph (see the linked thread). The complication is that some of a node's data should be location-specific. So, I've had a crazy idea – duplication 😅

{:xt/id 1 :children [2 3]}
  {:xt/id 3 :children [4]}
  {:xt/id 2 :wrapped-in [4] :global :a :local :b}
  {:xt/id 4 :wraps 2        :global :a :local :c}
The reasoning is that multiple locations will be used infrequently (less than 1% of nodes) and that propagating the changes to all locations can be automated easily via :wraps and :wrapped-in. This should make graph traversal and queries simpler (and faster). Is this a very bad idea?


idk if it is bad or not… but you could also make a “link document” like {:xt/id #uuid "…" :parent 4 :child 2}


you could totally decouple the parent/child relationship from the documents themselves


or even {:xt/id {:p 4 :c 2} :parent 4 :child 2} so the id also encodes the information and you can issue puts and not get duplicates (or delete without needing to know a separate uuid for the link)
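a rough sketch of how that would look with XTDB 1.x transaction ops (assuming a started `node`) – because the composite map is the id, re-putting the same edge is idempotent and deleting needs no separate uuid lookup:

```clojure
(require '[xtdb.api :as xt])

;; put the link doc; its id *is* the edge, so repeated puts don't duplicate it
(xt/submit-tx node
  [[::xt/put {:xt/id {:p 4 :c 2} :parent 4 :child 2}]])

;; removing the edge later needs only the same composite id
(xt/submit-tx node
  [[::xt/delete {:p 4 :c 2}]])

;; querying children of 4 now goes through the link doc (the extra join)
(xt/q (xt/db node)
      '{:find [c]
        :in [p]
        :where [[l :parent p]
                [l :child c]]}
      4)
```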


I'm not quite following. What I'm seeing is that this would introduce a join (two, actually)? Would this "link document" also be used for e.g. the 1->2 relationship?


Maybe I should've said that I'm thinking of moving away from my current model (from the linked thread)

{:xt/id 1 :children [2 3]}
  {:xt/id 3 :children [4]}
  {:xt/id 2 :global :a :local :b}
  {:xt/id 4 :wraps 2 :local :c}
because this complicates managing the graph to a surprising extent.


Maybe an equivalent way of thinking about your idea is that the :a in :global could become a document itself, yes? I don't like this because it would create two entities for each node, even though it's only really necessary for less than 1% of nodes.


yes, a link doc would need an extra join when querying


what is “best” depends heavily on the query and update patterns


the direct attribute referencing a child is most convenient if you want to do nested pull on the docs


> what is “best” depends heavily on the query and update patterns

Query-wise, I was fairly happy with my previous attempt. I've written my own 'pull', so no limitations there. But the update (and UI) part is proving much more challenging, and I periodically get fed up and try to think of a new model. I was avoiding duplication on principle, but now it seems feasible (for my use case) – I'm essentially sacrificing disk space for much simpler business logic. I wanted to verify with the community that the idea is not as silly as it sounds to me 🙂


so you need to update all related docs in the same tx, and match all of them to make sure :wrapped-in and :wraps stay consistent? or perhaps have a tx function for it
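a sketch of what such a tx function could look like, assuming the XTDB 1.x :xt/fn API and a hypothetical :propagate-update function – the caller passes only the shared attributes, so each duplicate keeps its own :local data:

```clojure
(require '[xtdb.api :as xt])

;; install the tx function once; it merges the shared attrs into the
;; node itself and every duplicate listed under :wrapped-in, atomically
(xt/submit-tx node
  [[::xt/put
    {:xt/id :propagate-update
     :xt/fn '(fn [ctx eid shared-attrs]
               (let [db   (xtdb.api/db ctx)
                     doc  (xtdb.api/entity db eid)
                     dups (map #(xtdb.api/entity db %) (:wrapped-in doc))]
                 (vec
                  (for [d (cons doc dups)]
                    [::xt/put (merge d shared-attrs)]))))}]])

;; one call updates node 2 and its wrapper(s) in the same tx
(xt/submit-tx node [[::xt/fn :propagate-update 2 {:global :a2}]])
```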


Yes, that sounds very doable. And more importantly – it's one-and-done. Previously, I had to write each CRUD op with great care.


Hi all, the 1.22.0 release is out! Thank you to everyone who has helped with raising issues and testing 🙏 And if you're on Twitter...

🎉 7
doge 2

It seems there’s more dev focus toward RocksDB than LMDB – perhaps we should switch to Rocks as well, now that the indexes need to be recreated anyway. Is that a fair assessment?


It's probably worth benchmarking for your use case (e.g. LMDB can sometimes be multiple times faster for reads), but RocksDB wins in my mind as the better option for the average scenario – mostly because it has built-in compression and a more well-trodden scale-up story


fair enough, should be easy to do as I could spin up both rocksdb and lmdb instances from the same golden stores and do some measuring
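a minimal sketch of that comparison, assuming XTDB 1.x node configuration – only the index KV backend differs, the tx-log/document-store config (omitted here) would point at the shared golden stores, and the paths are placeholders:

```clojure
(require '[xtdb.api :as xt])

(defn start-with-kv
  "Start a node whose index-store uses the given KV module."
  [kv-module dir]
  (xt/start-node
   {:xtdb/index-store {:kv-store {:xtdb/module kv-module
                                  :db-dir (str dir "/indexes")}}
    ;; ... plus :xtdb/tx-log and :xtdb/document-store
    ;; pointing at the shared golden stores
    }))

(def rocks-node (start-with-kv 'xtdb.rocksdb/->kv-store "/tmp/bench-rocks"))
(def lmdb-node  (start-with-kv 'xtdb.lmdb/->kv-store   "/tmp/bench-lmdb"))

;; run the same query against both and compare timings
(time (xt/q (xt/db rocks-node) '{:find [e] :where [[e :xt/id]]}))
(time (xt/q (xt/db lmdb-node)  '{:find [e] :where [[e :xt/id]]}))
```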

blob_thumbs_up 1

There’ve been some jackson dep changes with the 1.22 release apparently – I tried simply updating the deps and my project won’t start

Caused by: java.lang.NoClassDefFoundError: com/fasterxml/jackson/core/exc/StreamReadException


will investigate more later


interesting, thanks for mentioning that, I'll take a look a little later


So we did bump jackson-* from "2.12.2" to "2.13.3"


Maybe something else (not XT) in your classpath is overriding it with an older version?


yeah, I got it running by explicitly depending on latest jackson core
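for reference, that kind of top-level pin in deps.edn might look like this (a sketch – 2.13.3 is the version XT 1.22 bumped to, per the message above):

```clojure
;; deps.edn: a direct top-level dep wins over any older transitive version
{:deps {com.xtdb/xtdb-core                       {:mvn/version "1.22.0"}
        com.fasterxml.jackson.core/jackson-core  {:mvn/version "2.13.3"}}}
```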


Caused by: xtdb.IllegalArgumentException: Error locating module
	at xtdb.error$illegal_arg.invokeStatic(error.clj:12)
	at xtdb.error$illegal_arg.invoke(error.clj:3)
	at xtdb.system.ModuleRef$fn__32870.invoke(system.clj:109)
	at xtdb.system.ModuleRef.prepare_dep(system.clj:106)
	at xtdb.system$opts_reducer$f__32908.invoke(system.clj:131)
	at xtdb.system$opts_reducer$f__32908.invoke(system.clj:130)
	at clojure.lang.PersistentVector.reduce(
	at clojure.core$reduce.invokeStatic(core.clj:6827)
	at clojure.core$reduce.invoke(core.clj:6810)
	at xtdb.system$prep_system.invokeStatic(system.clj:156)
	at xtdb.system$prep_system.invoke(system.clj:141)
	at xtdb.system$prep_system.invokeStatic(system.clj:142)
	at xtdb.system$prep_system.invoke(system.clj:141)
	at xtdb.api$start_node.invokeStatic(api.clj:256)
	at xtdb.api$start_node.invoke(api.clj:243)
Caused by: Syntax error compiling deftype* at (xtdb/rocksdb.clj:111:1).
	at clojure.lang.Compiler.analyzeSeq(
	at clojure.lang.Compiler.analyze(
	at clojure.lang.Compiler.analyze(
	at clojure.lang.Compiler$BodyExpr$Parser.parse(
	at clojure.lang.Compiler$LetExpr$Parser.parse(
	at clojure.lang.Compiler.analyzeSeq(
	at clojure.lang.Compiler.analyze(
	at clojure.lang.Compiler.analyze(
	at clojure.lang.Compiler$BodyExpr$Parser.parse(
	at clojure.lang.Compiler$FnMethod.parse(
	at clojure.lang.Compiler$FnExpr.parse(
	at clojure.lang.Compiler.analyzeSeq(
	at clojure.lang.Compiler.analyze(
	at clojure.lang.Compiler.eval(
	at clojure.lang.Compiler.load(
	at clojure.lang.RT.loadResourceScript(
	at clojure.lang.RT.loadResourceScript(
	at clojure.lang.RT.load(
	at clojure.lang.RT.load(
	at clojure.core$load$fn__6839.invoke(core.clj:6126)
	at clojure.core$load.invokeStatic(core.clj:6125)
	at clojure.core$load.doInvoke(core.clj:6109)
	at clojure.lang.RestFn.invoke(
	at clojure.core$load_one.invokeStatic(core.clj:5908)
	at clojure.core$load_one.invoke(core.clj:5903)
	at clojure.core$load_lib$fn__6780.invoke(core.clj:5948)
	at clojure.core$load_lib.invokeStatic(core.clj:5947)
	at clojure.core$load_lib.doInvoke(core.clj:5928)
	at clojure.lang.RestFn.applyTo(
	at clojure.core$apply.invokeStatic(core.clj:667)
	at clojure.core$load_libs.invokeStatic(core.clj:5985)
	at clojure.core$load_libs.doInvoke(core.clj:5969)
	at clojure.lang.RestFn.applyTo(
	at clojure.core$apply.invokeStatic(core.clj:667)
	at clojure.core$require.invokeStatic(core.clj:6007)
	at clojure.core$require.doInvoke(core.clj:6007)
	at clojure.lang.RestFn.applyTo(
	at clojure.core$apply.invokeStatic(core.clj:665)
	at clojure.core$serialized_require.invokeStatic(core.clj:6079)
	at clojure.core$requiring_resolve.invokeStatic(core.clj:6088)
	at clojure.core$requiring_resolve.invoke(core.clj:6082)
	at xtdb.system.ModuleRef$fn__32870.invoke(system.clj:107)
	... 64 more
Caused by: java.lang.RuntimeException: No such var: kv/KvStoreTx
	at clojure.lang.Util.runtimeException(
	at clojure.lang.Compiler.resolveIn(
	at clojure.lang.Compiler.resolve(
	at clojure.lang.Compiler$
	at clojure.lang.Compiler$NewInstanceExpr$DeftypeParser.parse(
	at clojure.lang.Compiler.analyzeSeq(
	... 105 more
got this when trying to start the node


Huh, have you upgraded the RocksDB dep also?


yes, all are 1.22.0 …but I’ll check my $HOME if there’s anything funky I’ve forgotten


With the jackson upgrade, did you already have some other dependency? Or does this feel like a bug?


Maybe try clearing your caches


PEBKAC, luckily – I did not in fact change all the deps, a multi-cursor edit fail


I got it up and running locally now, it is reindexing


idk about the jackson thing – we’re getting it indirectly. I think it’s more the usual JVM-land issue that many libraries can’t evolve gracefully


got everything indexed (some 500k docs) on my local test database, ran some queries… it seems to work fine


traced our deps, and it appears that cambium.logback.json depends on jackson 2.12
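an alternative to pinning jackson directly is excluding the stale transitive dep at its source – a deps.edn sketch (the cambium version shown is illustrative):

```clojure
;; deps.edn: strip cambium's older transitive jackson so XT's 2.13.x wins
{:deps {cambium/cambium.logback.json
        {:mvn/version "0.4.5"
         :exclusions [com.fasterxml.jackson.core/jackson-core
                      com.fasterxml.jackson.core/jackson-databind]}}}
```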


Ah, good to hear, thanks for reporting back. In extreme cases we have resorted to using MrAnderson for working around dep issues 🙂


xtdb-inspector at least works fine with 1.22.0 version

🙏 3