#xtdb
2020-06-16
flefik 08:06:42

Is there a way to get more detail for the IllegalArgumentException "Invalid tx op" than just what the tx was? Like a reason why the tx was considered invalid?

jarohen 08:06:29

Hi @UAEFFG05B - is that being thrown over a remote connection? It normally comes with more information if it's local.

jarohen 08:06:52

If you can paste the transaction or a sample of it here (or feel free to DM me), we might be able to spot it. One common cause (I do it more often than I'd admit!) is forgetting to wrap the tx operation in its own vector:

(crux/submit-tx node [:crux.tx/put ...])   ; wrong - op not wrapped in a vector
(crux/submit-tx node [[:crux.tx/put ...]]) ; correct

flefik 08:06:45

Hi @U050V1N74, I did find the error. I'd done (crux/submit-tx node [[:crux.db/put ...]]) and it took me longer than I'd like to admit to find it. However, I think it'd be useful to get more detailed error messaging here, something along the lines of: Invalid tx op. Missing operation, expected crux.tx/put, ... found crux.db/put

jarohen 08:06:07

Ah, yep 🙂 I agree. We do have that information available as part of the ex-data in the cause chain, at least on local nodes, which should show up in your editor? Will see if we can get it forwarded over the HTTP client too - if you're not using the remote API, that's a different problem!
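
(For local nodes, a minimal sketch of pulling that ex-data out of the cause chain - the all-ex-data helper here is illustrative, not part of the Crux API:)

;; walk the cause chain of a caught exception and collect any
;; ex-data attached along the way (ex-cause requires Clojure 1.10+)
(defn all-ex-data [ex]
  (->> (iterate ex-cause ex)
       (take-while some?)
       (keep ex-data)))

(try
  (crux/submit-tx node [[:crux.db/put {:crux.db/id :foo}]]) ; invalid op
  (catch Exception e
    (run! prn (all-ex-data e))))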

flefik 10:06:39

oh it's in there already? I did not see it

flefik 10:06:44

let me dig a little more

ordnungswidrig 14:06:23

Beware that most logging libraries do not dump the ex-data in the cause chain.

ordnungswidrig 14:06:51

I had to fall back to a custom formatter for logback in a recent project to have it dumped.

hoppy 13:06:05

@taylor.jeremydavid - perhaps you could guide us as to what you cruxations would offer as a best practice for modelling something that looks like a ledger. Transactions have something to which they belong (aka [customer, part, revision]) and then we move these things around and want every movement logged. I was toying with some cutesy idea of having the base document, and then using valid time to model the log entries, but wanted to see if you have better thinking on this than I do.

refset 16:06:22

Hey @U19EVCEBV! I think working with a history of documents for a single "ledger entity" is a very sensible approach: it is the most succinct representation (if index size is a concern) and it is also very simple to make consistent (use match against entry n-1, and against nil for the head of the log, alongside your op). However, you may also find useful benefits in modelling each entry in the ledger as a distinct entity, implicitly grouped together with an incrementing index attribute, e.g. :ledger-index/id-123 (these can be efficiently range-scanned within a query), as you then also have the power to perform joins across the (domain) "transaction" history and to jump to a specific position in the ledger without scanning through entity-history (entity-history is based on transaction times, not an integer index count). The downside is that this would require transaction functions to keep consistent, but we are officially supporting transaction functions in the imminent 1.9 release 🙂 Do you think you may have requirements that warrant looking at the second suggestion in detail?
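
(For concreteness, a minimal sketch of the match-based single-entity approach - the entity id and the :seq and :balance attributes are hypothetical:)

;; append entry n by matching the expected current head (entry n-1),
;; so two concurrent writers cannot both succeed
(crux/submit-tx node
  [[:crux.tx/match :ledger-123 {:crux.db/id :ledger-123, :seq 1, :balance 100}]
   [:crux.tx/put {:crux.db/id :ledger-123, :seq 2, :balance 90}]])

;; the very first entry matches against nil, i.e. the entity must not exist yet
(crux/submit-tx node
  [[:crux.tx/match :ledger-123 nil]
   [:crux.tx/put {:crux.db/id :ledger-123, :seq 1, :balance 100}]])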

hoppy 16:06:20

still on the fence on that one. Index size is a concern, because the composition of the composite keys outweighs the contents of a transaction by a bunch. Maybe a solution to that concern (in general) might make this more interesting - like a document that resolves a multi-part key into a single one, and then chase that through the index (via your :ledger-index example)

refset 16:06:18

Yes, an extra composite-key->uuid lookup query prior to the main query may not be too painful, and I can see how the space savings could be worth the hassle at sufficient scale. It's probably worth validating that assumption with a modest benchmark before going too far down the path, though.

jarohen 16:06:29

Might be worth considering using v5 UUIDs (SHA1 hashes) as entity IDs in this case - you can save yourself the extra lookup
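
(One way to do that in Clojure, sketched here with the clj-uuid library - the key layout is hypothetical, and any RFC 4122 v5 implementation would do:)

(require '[clj-uuid :as uuid])

;; derive a deterministic entity id from the composite key: the same
;; [customer part revision] always hashes to the same v5 UUID, so no
;; separate composite-key->uuid lookup entity is needed
(defn ledger-entity-id [customer part revision]
  (uuid/v5 uuid/+namespace-url+ (pr-str [customer part revision])))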

hoppy 17:06:55

interesting angle. that and a sequence would get me there.

hoppy 17:06:07

or use the v5 uuid to identify the entity, and valid time to store the "row" of the transaction - since time is always in play with that?

jarohen 17:06:03

🤔

jarohen 17:06:46

With a ledger, you've naturally got immutability built in - I'd presume you're not going to be updating a transaction once it's been entered, except by virtue of a subsequent reversing transaction, maybe?

jarohen 17:06:37

If so, even though Crux does have the concept of valid time, I'd be tempted to not use it extensively for the ledger itself - you can insert the transaction when it arises, and then leave it be, as you would with a 'normal' db 🙂

jarohen 17:06:36

If an individual transaction does mutate over time, yes, by all means, replace the entity at a later valid time - I'm guessing the queries you'll want at that point are the latest values of all the transactions of an account, say?
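
(A minimal sketch of that correction, assuming a hypothetical transaction-entry entity - :crux.tx/put accepts an optional valid time:)

;; replace the entry at a later valid time; the original version
;; remains queryable at earlier valid times
(crux/submit-tx node
  [[:crux.tx/put
    {:crux.db/id :tx-456, :amount -25, :reversed? true}
    #inst "2020-06-16T12:00:00Z"]])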

jarohen 17:06:10

Safe to say, there's still plenty of use for bitemporality when it comes to naturally mutable customer/part data, for example 🙂

dcj 15:06:23

"cruxations" I love it! 🙂

jarohen 18:06:08

evening all - we've released Crux 1.8.5-alpha, mainly to fix a recent memory leak in our RocksDB integration (introduced in 1.8.4-alpha)

jarohen 18:06:36

in more exciting news, the 1.9.0 release is imminent - we'll have a lot more to tell you about soon 🙂
