
I'm pretty sure it's not an official archive, but it's a good record πŸ™‚

πŸ‘ 1

zulipchat also has an archive, which is searchable.

πŸ‘ 1

Hi, first of all, thanks for Crux! I read through, but left wondering what happens if a submitted transaction fails. What are the failure modes? What is the idiomatic way to check from application code if a transaction succeeded?

πŸ™ 1

tx-committed? on the node (ICruxAPI) tells you whether the transaction succeeded. We don't provide a programmatic means of finding out exactly why a transaction failed, but there is logging, and within a transaction function you can of course add your own custom logging
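A minimal sketch of that check, assuming a started Crux node bound to `node` (the document contents here are illustrative):

```clojure
(require '[crux.api :as crux])

;; Submitting returns immediately with a tx receipt (a map with the
;; tx-id and tx-time) -- it does NOT mean the tx has been indexed yet.
(def tx (crux/submit-tx node [[:crux.tx/put {:crux.db/id :my-doc
                                             :some/key   "some value"}]]))

;; Block until the node has indexed that transaction...
(crux/await-tx node tx)

;; ...then ask whether it actually committed.
(crux/tx-committed? node tx)   ; => true or false
```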


Failure modes for a match are strictly logical only, but a transaction function might also fail for other reasons: it can't compile, it throws an NPE, etc.
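To illustrate the logical failure mode, a sketch of a match op (entity IDs and document contents are made up):

```clojure
(require '[crux.api :as crux])

;; If the current :account-1 document doesn't equal the expected one,
;; the whole transaction aborts -- no exception, a purely logical failure.
(def tx (crux/submit-tx node
                        [[:crux.tx/match :account-1 {:crux.db/id :account-1
                                                     :balance    100}]
                         [:crux.tx/put {:crux.db/id :account-1
                                        :balance    90}]]))

(crux/await-tx node tx)
(crux/tx-committed? node tx)   ; false when the match didn't hold
```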


Thanks, I was looking for something along those lines in terms of development ergonomics, but I understand that this may not make sense in the context of Crux, i.e. optimistic, async-only tx processing

βœ”οΈ 1

Noted, we'll improve this area eventually. Thanks for the feedback πŸ™‚

πŸ‘ 1

To sum it up: I think a construct that allows submitting a transaction asynchronously and can be deref'd to check if it succeeded would be a useful addition to improve developer ergonomics.
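Nothing like that is built in, but a rough sketch of such a construct, wrapping the existing API in a future (the helper name `submit-tx-async` is hypothetical, not part of the Crux API):

```clojure
(require '[crux.api :as crux])

(defn submit-tx-async
  "Submit tx-ops and return a future that can be deref'd to find out
  whether the transaction eventually committed.
  (Hypothetical helper, not part of the Crux API.)"
  [node tx-ops]
  (let [tx (crux/submit-tx node tx-ops)]
    (future
      (crux/await-tx node tx)
      (crux/tx-committed? node tx))))

;; Usage:
;; @(submit-tx-async node [[:crux.tx/put {:crux.db/id :doc-1}]])
;; => true or false
```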

πŸ‘Œ 2

I have the following code:

 (crux/q
  (crux/db crux-node)
  {:find   [(list 'eql/project '?e query)]
   :where  '[[?e :user/e-mail ?e-mail]
             [?e :user/id]]
   :args   [{:?e-mail e-mail}]})
When running this I intermittently get an exception:
{:type java.lang.IllegalArgumentException
 :message No implementation of method: :id->buffer of protocol: #'crux.codec/IdToBuffer found for class: clojure.lang.PersistentList
 :at [clojure.core$_cache_protocol_fn invokeStatic core_deftype.clj 583]} 
It happens more often from a lein test than from the REPL. Should I open an issue for this, or is it something on my end? Note that it only seems to happen when there is no user with the given e-mail.


Hi @U1G8B7ZD3 - I'll take a quick look to see if it's something obvious, but if not, it's worth an issue - thanks πŸ™‚


Thanks. Let me know if an issue is needed, in that case I can provide a more complete stack trace


That'd be great, thanks πŸ™‚


Created an issue so that I don't lose track of it πŸ™‚


Is there a way to disable the indexer for an embedded crux/jdbc, i.e. not replay the tx log on start / not rely on KV index files?

Use case: one of our systems is write-only. It records events from various sources to a SQL database (Postgres), no Crux involved. I'd like to add a crux/put on each successfully upserted batch to store history. Other systems are read-and-write and would work with both the Postgres data and the Crux history, so the indexer is needed there. But for the write-only system: is it possible to avoid carrying a RocksDB index that this system doesn't need?

The reason it's troublesome is twofold: 1. these are millions of transactions in short periods of time: disk space; 2. the write system has several instances managed by Nomad: caring for the dir structure, docker binds, etc. complicates ops and is not really needed for the business


This was definitely under consideration at some point. You could probably make a no-op KV store protocol implementation.


Yep it was under consideration for a while...and then luckily we did something about it! It's called the "ingest client" API, for submitting transactions only: I think it should be perfect for your use case
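For anyone following along, using the ingest client looks roughly like this (Kafka-only at this point in the thread; the config keys, topic names, and bootstrap servers below are illustrative, so check the docs for your Crux version):

```clojure
(require '[crux.api :as crux])

;; An ingest client can submit transactions but never indexes them
;; locally, so it carries no RocksDB index.
(def ingest-client
  (crux/new-ingest-client {:crux.kafka/bootstrap-servers "localhost:9092"
                           :crux.kafka/tx-topic  "crux-tx-log"
                           :crux.kafka/doc-topic "crux-docs"}))

(crux/submit-tx ingest-client
                [[:crux.tx/put {:crux.db/id :event-1
                                :event/payload "..."}]])

(.close ingest-client)
```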


ahh, of course, sorry πŸ™‚ So it's currently hard-coded for Kafka but we could certainly get it working with JDBC too, with a small amount of effort, as all the plumbing exists already. I can't make promises on timelines right this moment - are you blocked on your testing without having it? Feel free to message me direct if you want to share details on timelines or whatever


nah, I don’t want to create more urgent work for you. Just knowing you are thinking about adding this for JDBC within a reasonable timeframe is good enough. I can do some gymnastics with those indices for now (at least I think I'll be able to, it will depend on their size)

βœ”οΈ 1

if I have time I’ll explore a no-op implementation as @U09LZR36F suggested. But I would of course prefer to have that working without the need for a plug )

πŸ‘ 1

agreed - indeed, have been removing the dependency on Kafka in the ingest client, currently queued up for the next major release

πŸ†’ 3
🀘 1

^ awesome, I clearly failed to properly read the memo on that, thanks for chiming in @U050V1N74 πŸ™‚

πŸ™ 1