
Does a stack trace like this indicate connectivity issues with the storage (Couchbase in this case)?

ExecutionException java.lang.RuntimeException: Exception waiting for value (
	java.util.concurrent.FutureTask.get (
	clojure.core/deref-future (core.clj:2180)
	clojure.core/future-call/reify--6320 (core.clj:6420)
	clojure.core/deref (core.clj:2200)
	datomic.catalog/get-catalog (catalog.clj:30)
	datomic.coordination/cluster-conf->resolved-conf (coordination.clj:160)
	datomic.cache/fn/reify--2419 (cache.clj:342)
	clojure.lang.RT.get (
	datomic.cache/lookup-cache/reify--2416 (cache.clj:287)
	datomic.cache/lookup-cache/reify--2416 (cache.clj:280)
	clojure.lang.RT.get (
Caused by:
RuntimeException Exception waiting for value
Caused by:
ExecutionException java.lang.RuntimeException: Cancelled
Caused by:
RuntimeException Cancelled


(it worked after a PagerDuty alert and automatic restart)


@donaldball: I’m fairly certain the data model and approach are a mismatch for what you’re trying to do re: book quality. Some of the changes I would make to the approach: (a) use pull to get the attr/value from a known entity id; (b) if you use keywords that have a corresponding integer value, add that value to the enums in the database — otherwise, use numbers in the database to store values and track their meaning in your application; (c) if you pass in an ordinal map, don’t resolve/get etc. in the query itself; use pull to get the quality ident from the db, then get the corresponding value and generate the transaction; (d) I wouldn’t do this in its own transaction function. If you need isolation, this is a good fit for the built-in transaction function cas — do the lookup work on the peer, since you don’t want that overhead in the transactor’s serialization.
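A minimal sketch of point (d): keep the lookup on the peer and let the built-in :db.fn/cas provide the isolation. Everything named here is an assumption for illustration (that :book/quality stores an integer ordinal per suggestion (b), plus the `conn` and `eid` names); this is not code from the gist.

```clojure
;; Sketch only: assumes :book/quality stores an integer ordinal.
(require '[datomic.api :as d])

(defn increase-book-quality!
  "Bump an entity's :book/quality by one. The read happens on the peer;
   :db.fn/cas aborts the transaction if another process changed the value
   in the meantime, giving isolation without a custom txn fn."
  [conn eid]
  (let [db  (d/db conn)
        old (:book/quality (d/pull db [:book/quality] eid))]
    @(d/transact conn [[:db.fn/cas eid :book/quality old (inc old)]])))
```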


@ljosa: it’s possible, but hard to diagnose conclusively without more information. Do you have more of the stack trace and/or logs (assuming you want to dig further)? Did you encounter this on a peer?


I think "it's possible" is a good enough answer. It only happened once, and yes, it was a peer. No other logs; the only thing to note is that this happened well after the peer had connected and had been running for some time. This peer fails fast when something goes wrong. Marathon restarted it, and all was fine after that.


@ljosa: Ok, sounds good. If you encounter anything similar in the future and opt for a deeper dive, feel free to ping me.


I’m trying to connect to a datomic instance but am getting HornetQException[errorType=SECURITY_EXCEPTION message=HQ119031: Unable to validate user: …elided…]. What does this mean?


@hugod - possible that you’re exceeding the peer count?


@bkamphaus: The peer count is the number of processes connecting to datomic?


As far as I know this is the only process connecting, but I can check that.


@hugod: right, i.e. on Free or Starter, the transactor + two peers. When running, the REST server and Console also count against the limit. You can also look through the transactor logs; you’ll see a message where the peers are logged with datomic.transactor - {:event :transactor/remote-ips …}


This is with datomic-pro, btw


By default, transactor logs go to the log/ subdir of the datomic dir, with files named by date.


I don’t see transactor/remote-ips anywhere in the logs


Which version are you on? I would expect to see even a blank message, e.g.:


2016-01-12 00:01:39.312 INFO  default    datomic.transactor - {:event :transactor/remote-ips, :ips #{}, :pid 15750, :tid 28}


Would two transactors running on the same table show this symptom?


Thanks, @bkamphaus. I realize there are probably better data models that would yield a better txn design, but I’m quite curious now what apparent constraint of txn fns I’m violating here.


@bkamphaus: This is datomic-pro-0.9.4880


@hugod: nope, two transactors can’t run live against the same table. With paid Pro, only one transactor can be active; the others will see it writing its heartbeat and enter standby. With Pro Starter/Free, the extra transactors will constantly fail with AlarmHeartbeatFailed :cause :conflict.


It may be that the remote-ips logging wasn’t yet included in that version (it’s over a year old at this point). If you can consider upgrading the transactor, it should include that logging and should still be compatible with the same peer lib version.


@bkamphaus: Thanks, I’ll assume that having two transactors is the cause of the issue for now, and test with just the one transactor up.


@donaldball: I can’t reproduce your failure using the code in your gist. The transaction function works for me, with two caveats: (1) I’m using the namespaced datomic.api/q for the query, and (2) I’m installing the db fn through the standard process, not using your deftxfn, e.g.

{:db/id (d/tempid :db.part/user)
 :db/ident :db.fn/increase-book-quality
 :db/fn #db/fn {:lang "clojure"
                :params [db eid attr quality]
                :code ...}}
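For completeness, a hedged sketch of invoking the fn once installed. The `conn`, `eid`, attribute, and quality value here are all assumptions for illustration:

```clojure
;; Invoke the installed db fn by its ident in a transaction; db is passed
;; implicitly as the first param, so only [eid attr quality] are supplied.
@(d/transact conn [[:db.fn/increase-book-quality eid :book/quality :good]])
```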


Cool, thanks, I’ll dig in a little deeper then


@bkamphaus: Related to my dev/mem issue yesterday, it seems like dev is swallowing the root cause of exceptions while mem is properly wrapping them. See: