
Hi, I am evaluating xtdb. When I start an app and submit, query, or pull, everything works as documented. But if I call the exact same submits from http-kit requests (via pathom), tx-committed? checks succeed, but the transactions only show up in the tx-log — queries and pulls do not find the data. I have no experience programming Java, so I do not know what happens underneath. It feels like threads working with a common tx-log but writing to different document-stores and indices. Has anyone seen behavior like this and has a hint as to what I am missing? I use clojure 1.11.1, openjdk 16.0.2, http-kit 2.6.0, pathom 2.4.0 on FreeBSD 13.1. When I use transaction functions as per the docs, (xt/entity db eid) does not find prior entities either when called from the http-kit requests.


Hey @U15BH4U4V please can you share the config option map/file you are using? Is this using XT 1.21.0? Are you only running one node or are you attempting to run multiple nodes using the same/different config?


Thank you for replying @U899JBRPF. I kicked everything out and am using the empty map at the moment, to make sure I have no conflicting dependencies.

πŸ‘ 1

Hey again, to confirm, you are still seeing issues despite what you just wrote?


Yes, I had switched to an empty map before daring to post here ;)

πŸ‘ 1

Are there any potentially interesting log messages being printed? No "ingestion aborted" errors (etc.)?


Nothing I could find, though I switched xtdb namespaces to debug logging via the timbre slf4j appender. Definitely no errors.


The log output for the ok transactions vs. the identical http-kit-triggered not-ok transactions looked the same to me.


> It feels like threads working with a common tx-log but writing to different document-stores and indices.
How is your start-node function being called? Maybe there really are multiple unrelated instances being created accidentally. What does the attribute-stats API show?


Hello @U899JBRPF, I start xtdb

(defn mem-node []
  (let [node (xt/start-node {})]
    ;; ... logging etc.
    node))
before starting http-kit. http-kit calls a pathom parser of a fulcro app, and at this point node gets passed in. I pretty much replaced conn with node, coming from working with datahike and datalevin, where it works. For debugging purposes the function that calls submit-tx also places values into a debug atom, which leads to these findings: the node inside the http-kit request and the node outside (which works fine) are clojure.core/=. When I check xtdb.api/attribute-stats before and after submit-tx, they look the same. The one before the submit-tx looks good; the one after does not contain any changes from the first one, so submit-tx did not work. As mentioned before, if I use a transaction function it receives nil as old-entity. So it seems that attribute-stats sees what it needs, but submit-tx does not. Additionally, submit-tx does not put the result where it should.
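For reference, a minimal sketch of the before/after attribute-stats check described above (the :debug/doc id and :debug/value attribute are illustrative, and this assumes an in-memory node):

```clojure
(require '[xtdb.api :as xt])

(def node (xt/start-node {}))

;; attribute-stats before submitting anything
(def stats-before (xt/attribute-stats node))

;; submit a document and block until it is indexed
(->> [[::xt/put {:xt/id :debug/doc :debug/value 42}]]
     (xt/submit-tx node)
     (xt/await-tx node))

;; after indexing, the stats should include the new attribute
(def stats-after (xt/attribute-stats node))
;; stats-after should now mention :debug/value, unlike stats-before
```

If the "after" stats are unchanged even with the await-tx in place, the writes really are not reaching that node's indices.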


This should always be the same and only node. I run xtdb version 1.21.0.


The described failure happens on a startup on a freshly booted machine.


I tested this to rule out that my debugging, which of course restarts with a fresh db, connects to old remnants (maybe off heap).


Thanks for the new details - I'm perplexed πŸ˜…


if a submit-tx succeeds, the transaction should be visible via open-tx-log once you sync, and latest-completed-tx should match the tx-id in the submit-tx 'receipt' map


Have you checked those APIs also?
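That check could look something like this (a sketch, assuming node is the running XTDB node and the document is illustrative):

```clojure
(require '[xtdb.api :as xt])

(let [receipt (xt/submit-tx node [[::xt/put {:xt/id :check/doc}]])]
  ;; block until the node has indexed everything submitted so far
  (xt/sync node)
  ;; the latest completed tx-id should now match the receipt's tx-id
  (println "submitted:" (::xt/tx-id receipt)
           "completed:" (::xt/tx-id (xt/latest-completed-tx node))))
```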


(last (with-open [l (xt/open-tx-log node nil true)]
        (iterator-seq l)))
Looks correct to me a) within the function that calls submit-tx with sync, b) outside the http-kit requests. latest-completed-tx before submit-tx, after submit-tx, and later outside the http-kit request are fine too. The tx after the submit is one more/later than the one before.


I'm surprised that example executes safely (and doesn't crash). Certainly, doing it for real you should be careful to consume what you need from the seq within the with-open context, i.e.:

(with-open [l (xt/open-tx-log node nil true)]
  (doall (last (iterator-seq l))))


unfortunately I'm a bit lost trying to follow your descriptions here (undoubtedly my fault not yours though!), is there any way you could create a minimal end-to-end example of the behaviour, perhaps in a separate repository that you could share with me?


Yea. These tests just have two transactions on an empty db: one for setup, and the second one is the one that fails πŸ˜‰ It will take some time to think about how to separate this one out.


cool, well I'm eager to help πŸ™‚


out of interest, what are you evaluating XT against? Postgres?


The alternative at the moment is to use a datascript fork (datahike/datalevin) for state and create a transaction log from the actually changed triples, which these databases return when you transact. This can actually be done with about 50 lines of code, but data-wise one needs to think very carefully about what one is doing, since these dbs use integers as entity ids; if anything shifts, one needs to be prepared. So XTDB's ability to use {:person/id uuid} instead of an int looked enticing to me and would allow me to delete some code that translates between the two. My domain data has timestamps and is itself events, so even though I like the temporal bits, I have had no need for them so far. Also, XTDB would theoretically allow me to have a triple as the entity, which reduces my data-modeling constraints.


Cool, thanks for sharing the context! DataScript is great for prototyping and was definitely my own gateway to XT πŸ˜„


The main architect behind XT actually started with a datascript fork also


(although technically it's not a fork, since it's monkeypatching datascript as a library)


Maybe we should call it a descendant or something ...

πŸ˜„ 1

Cross posting here from #datascript as I have a better chance of getting an answer here:

[{:db/id 1, :co-ordinate {:x 0, :y 1, :z 2}}, {:db/id 2, :co-ordinate {:x 1, :y 1, :z 2}}]
How would I find all entities with a y co-ordinate of 1? Is there a way I can bind the internals of :co-ordinate as a part of a query?


in xtdb, values are only shallowly indexed, so you can't efficiently query for map values. It's recommended to instead flatten the map, e.g. {:db/id 1 :co-ordinate-x 0 :co-ordinate-y 1 :co-ordinate-z 2}
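A sketch of the flattened approach against XTDB (note XTDB documents use :xt/id rather than DataScript's :db/id; assumes an in-memory node):

```clojure
(require '[xtdb.api :as xt])

(def node (xt/start-node {}))

;; store the co-ordinates as flat, top-level attributes
(xt/await-tx node
  (xt/submit-tx node
    [[::xt/put {:xt/id 1 :co-ordinate-x 0 :co-ordinate-y 1 :co-ordinate-z 2}]
     [::xt/put {:xt/id 2 :co-ordinate-x 1 :co-ordinate-y 1 :co-ordinate-z 2}]]))

;; top-level attributes are indexed, so this lookup is efficient
(xt/q (xt/db node)
      '{:find [?e]
        :where [[?e :co-ordinate-y 1]]})
;; returns a set containing [1] and [2]
```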


The above answer is essentially correct; however, if you don't need the query to be fast (i.e. a full scan is acceptable), you can use regular Clojure functions like so:

[?e :co-ordinate ?c]
[(get ?c :y) ?y]
[(== ?y 1)]
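Wrapped into a complete query, those clauses might look like this (a sketch, assuming documents that keep the nested :co-ordinate map and a running node):

```clojure
;; binds the whole :co-ordinate map to ?c, then destructures it
;; with plain clojure.core/get inside the query — a full scan
(xt/q (xt/db node)
      '{:find [?e]
        :where [[?e :co-ordinate ?c]
                [(get ?c :y) ?y]
                [(== ?y 1)]]})
```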


Thanks a ton

blob_thumbs_up 1