This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
- # announcements (8)
- # aws (9)
- # babashka (26)
- # beginners (125)
- # calva (18)
- # chlorine-clover (2)
- # cider (12)
- # cljs-dev (6)
- # cljsrn (4)
- # clojure (134)
- # clojure-europe (31)
- # clojure-italy (2)
- # clojure-nl (14)
- # clojure-uk (83)
- # clojurescript (81)
- # conjure (4)
- # cursive (2)
- # datomic (145)
- # emacs (13)
- # events (3)
- # figwheel-main (14)
- # fulcro (30)
- # graalvm (23)
- # graphql (15)
- # helix (21)
- # jackdaw (20)
- # juxt (1)
- # lambdaisland (4)
- # leiningen (2)
- # malli (12)
- # meander (22)
- # observability (22)
- # off-topic (27)
- # pedestal (3)
- # re-frame (12)
- # reitit (1)
- # releases (2)
- # rewrite-clj (3)
- # shadow-cljs (67)
- # spacemacs (7)
- # sql (1)
- # tools-deps (19)
- # unrepl (2)
- # xtdb (25)
Is there a way to get more detail for the IllegalArgumentException "Invalid tx op" than what the tx was? Like a reason why the tx was considered invalid?
Hi @UAEFFG05B - is that being thrown over a remote connection? It normally comes with more information if it's local.
If you can paste the transaction or a sample of it here (or feel free to DM me), we might be able to spot it. One common cause (I do it more often than I'd admit!) is to not have passed the tx operation in its own vector:
(crux/submit-tx node [:crux.tx/put ...])   ; wrong
(crux/submit-tx node [[:crux.tx/put ...]]) ; correct
hi @U050V1N74 ,
i did find the error. I'd done:
(crux/submit-tx node [[:crux.db/put ...]])
and it took me longer than I'd like to admit to find it.
However I think it'd be useful to get more detailed error messaging here.
Something along the lines of:
Invalid tx op. Missing operation, expected crux.tx/put, ... found crux.db/put
Ah, yep 🙂 I agree. We do have that information available as part of the ex-data in the cause chain, at least on local nodes, which should show up in your editor? Will see if we can get it forwarded over the HTTP client too - if you're not using the remote API, that's a different problem!
Beware that most logging libraries do not dump the ex-data in the cause chain.
I had to fall back to a custom formatter for logback in a recent project to have it dumped.
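Since the advice above is about ex-data hiding in the cause chain, here is a minimal sketch (plain Clojure, no Crux or logback required) of pulling the ex-data out of every level yourself; the names are illustrative, not part of any library:

```clojure
;; Walk an exception's cause chain and collect each level's ex-data,
;; since default log formatters typically print only the messages.
(defn ex-data-chain
  "Returns a seq of {:message ... :data ...} maps, outermost first."
  [^Throwable t]
  (->> (iterate #(.getCause ^Throwable %) t)
       (take-while some?)
       (map (fn [^Throwable e]
              {:message (.getMessage e)
               :data    (ex-data e)}))))

;; Example with a nested ex-info:
(def ex (ex-info "Invalid tx op"
                 {:op :crux.db/put}
                 (ex-info "root cause" {:expected :crux.tx/put})))

(ex-data-chain ex)
;; => ({:message "Invalid tx op", :data {:op :crux.db/put}}
;;     {:message "root cause", :data {:expected :crux.tx/put}})
```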
@taylor.jeremydavid - perhaps you could offer some guidance as to what you Crux folks would consider best practice for modelling something that looks like a ledger. Transactions belong to something (e.g. [customer, part, revision]) and then we move these things around and want every movement logged. I was toying with the cutesy idea of having the base document, and then using valid time to model the log entries, but wanted to see if you have better thinking on this than I do.
Hey @U19EVCEBV! I think working with a history of documents for a single "ledger entity" is a very sensible approach: it is the most succinct representation (if index size is a concern) and it is also very simple to make consistent (use match against the n-1 document, or against nil for the head of the log, alongside your op)
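A hedged sketch of that match-based approach, assuming a `node` and hypothetical document shapes - the point is that the put only applies if the matched n-1 document (or nil, for the first entry) still holds:

```clojure
(require '[crux.api :as crux])

;; First entry: assert the ledger entity does not exist yet.
(crux/submit-tx node
  [[:crux.tx/match :ledger/acct-1 nil]
   [:crux.tx/put {:crux.db/id :ledger/acct-1
                  :entries    [{:amount 100}]}]])

;; Subsequent entry: match the exact current (n-1) document before appending,
;; so concurrent writers cannot both extend the same head.
(let [prev (crux/entity (crux/db node) :ledger/acct-1)]
  (crux/submit-tx node
    [[:crux.tx/match :ledger/acct-1 prev]
     [:crux.tx/put (update prev :entries conj {:amount -25})]]))
```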
However, you may also find useful benefits in modelling each entry in the ledger as a distinct entity, implicitly grouped together with an incrementing index attribute, e.g. :ledger-index/id-123 (these can be efficiently range-scanned within a query), as you then also have the power to perform joins across the (domain) "transaction" history and jump to a specific position in the ledger without scanning through entity-history (entity-history is based on transaction times, not an integer index count). The downside is that this would require transaction functions to keep consistent, but we are officially supporting transaction functions in the imminent 1.9 release 🙂
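A rough sketch of that second suggestion, under assumptions: it uses a head document as a counter (rather than a range scan, to keep it short), and the function id, attribute names, and document shapes are all hypothetical. Transaction functions are installed as documents and return the vector of ops to apply:

```clojure
(require '[crux.api :as crux])

;; Install the transaction function as a document.
(crux/submit-tx node
  [[:crux.tx/put
    {:crux.db/id :append-entry
     :crux.db/fn '(fn [ctx ledger-id entry]
                    (let [db   (crux.api/db ctx)
                          head (crux.api/entity db ledger-id)
                          n    (inc (get head :ledger/next-index 0))]
                      ;; bump the counter and write the indexed entry,
                      ;; atomically, inside one Crux transaction
                      [[:crux.tx/put (assoc head
                                            :crux.db/id ledger-id
                                            :ledger/next-index n)]
                       [:crux.tx/put (assoc entry
                                            :crux.db/id
                                            (keyword (str "entry-" (name ledger-id) "-" n))
                                            :ledger/id ledger-id
                                            :ledger/index n)]]))}]])

;; Append an entry atomically:
(crux/submit-tx node [[:crux.tx/fn :append-entry :acct-1 {:amount 100}]])
```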
Do you think you may have requirements that warrant looking at the second suggestion in detail?
still on the fence on that one. Index size is a concern, because of the composition of the composite keys, which outweigh the contents of a transaction by a bunch. Maybe a solution to that concern (in general) might make this more interesting - like a document that resolves a multi-part key into a single one, and then chasing that through the index (via your :ledger-index example)
Yes an extra composite-key->uuid lookup query prior to the main query may not be too painful, and I can see how the space savings could be worth the hassle at sufficient scale. It's probably worth validating that assumption with a modest benchmark before going too far down the path though.
Might be worth considering using v5 UUIDs (SHA1 hashes) as entity IDs in this case - you can save yourself the extra lookup
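One caveat worth flagging on the v5 idea: the JDK only ships version 3 (MD5) name-based UUIDs via `java.util.UUID/nameUUIDFromBytes` - a true v5 (SHA-1) UUID needs a small helper or a library - but the deterministic-ID idea is the same either way. A minimal sketch, with an assumed key shape:

```clojure
;; Deterministic, name-based ID from a composite key: the same
;; [customer part revision] always yields the same UUID, so no
;; separate composite-key->uuid lookup document is needed.
;; NB: nameUUIDFromBytes is UUID v3 (MD5), not v5 (SHA-1).
(defn composite-key->id [customer part revision]
  (java.util.UUID/nameUUIDFromBytes
   (.getBytes (pr-str [customer part revision]) "UTF-8")))

(composite-key->id "acme" "widget-7" 3)
;; calling again with the same args returns the same UUID
```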
or use the v5 uuid to identify the entity, and valid time to store the "row" of the transaction - since time is always in play with that?
With a ledger, you've naturally got immutability built in - I'd presume you're not going to be updating a transaction once it's been entered, except by virtue of a subsequent reversing transaction, maybe?
If so, even though Crux does have the concept of valid time, I'd be tempted to not use it extensively for the ledger itself - you can insert the transaction when it arises, and then leave it be, as you would with a 'normal' db 🙂
If an individual transaction does mutate over time, yes, by all means, replace the entity at a later valid time - I'm guessing the queries you'll want at that point are the latest values of all the transactions of an account, say?
Safe to say, there's still plenty of use for bitemporality when it comes to naturally mutable customer/part data, for example 🙂
evening all - we've released Crux 1.8.5-alpha, mainly to fix a recent memory leak in our RocksDB integration (introduced in 1.8.4-alpha)
release notes here: https://github.com/juxt/crux/releases/tag/20.06-1.8.5-alpha