#datomic
2017-04-17
celldee14:04:53

Can anyone point me to an example that explains how to obtain all of the results from a Datomic client api query that returns a significant number of results?

celldee14:04:05

I'm running a query and can get the first chunk of results but I'm not sure how to get subsequent chunks until I have the whole result set.

matthavener14:04:47

celldee: looks like you just pass the :offset param

celldee14:04:50

matthavener: do you have an example that you could show?

matthavener14:04:28

celldee: sorry, I don’t. I haven’t used the client API but I was curious so I read through the docs.

celldee14:04:04

matthavener: no worries. This is my first attempt to use it for querying. The client/q function seems to return a channel that I can take from, but it chunks results. I read the :offset parameter to mean that you want to skip a number of results.

celldee14:04:10

My query looks like this -
(def first-query {:query '[:find ?e ?id
                           :where [?e :active-chance/id]
                                  [?e :active-chance/lastName]
                                  [?e :active-chance/id ?id]]
                  :args [db]
                  :limit -1
                  :chunk 10000})

celldee14:04:42

If I leave out the :limit argument then I only get 1000 results returned and with the :chunk argument I get 10000 results back. I'm expecting over 90000 items in the result set, which is what I get when I run the query in the Datomic console.

celldee15:04:56

Also, if I leave out the :chunk argument then I get 1000 results

celldee15:04:15

I'm imagining a mechanism where I keep taking from the channel until the result set is exhausted. Just not sure how to do that. I suppose if I was more conversant with core.async this would be obvious, however, I'm a bit of a Clojure novice.

uwo15:04:20

While running an importer I’ll intermittently get :db.error/transactor-unavailable. We are pipelining our transactions, as in http://docs.datomic.com/best-practices.html#pipeline-transactions, with some logic to attempt a few retries with a 2 second timeout, but that doesn’t appear to be sufficient. What’s the right way to handle back pressure, or is this another issue?

uwo15:04:03

also, any advice on what to do if this occurs during an import? Critical failure, cannot continue: Heartbeat failed

uwo15:04:23

(this is with a dev: db)

uwo15:04:00

when the reference material says “be willing to wait for several minutes for a transaction to complete”, should I just use an exponential backoff retry when I get transactor unavailable?
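A rough sketch of such an exponential-backoff helper (this is a hypothetical utility, not part of the Datomic API; in practice you'd inspect the exception data to confirm it really is :db.error/transactor-unavailable before retrying):

```clojure
(defn retry-with-backoff
  "Call (f); if it throws, retry up to max-retries more times, doubling
  the wait each attempt starting from base-ms (e.g. 1000 -> 2000 -> 4000).
  Rethrows the last exception once retries are exhausted."
  [f max-retries base-ms]
  (loop [attempt 0]
    (let [result (try
                   {:ok (f)}
                   (catch Exception e
                     {:error e}))]
      (cond
        (contains? result :ok)   (:ok result)
        (>= attempt max-retries) (throw (:error result))
        :else (do (Thread/sleep (* base-ms (bit-shift-left 1 attempt)))
                  (recur (inc attempt)))))))

;; Hypothetical usage around a pipelined transact call:
;; (retry-with-backoff #(deref (d/transact conn tx-data)) 5 1000)
```

This gives waits of 1s, 2s, 4s, 8s, 16s for five retries, which stays within the "be willing to wait several minutes" guidance from the docs.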

marshall15:04:16

@celldee you can just repeatedly 'take' from the channel with the <!! operator

marshall15:04:33

Once it's empty it will return nil
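A minimal sketch of that drain loop, assuming core.async is on the classpath; `ch` would be the channel returned by the client API's q call, and each chunk is assumed to be a collection of result tuples:

```clojure
(require '[clojure.core.async :as async :refer [<!!]])

(defn drain-results
  "Blocking-take chunks from ch until it closes (<!! returns nil),
  accumulating everything into a single vector."
  [ch]
  (loop [acc []]
    (if-let [chunk (<!! ch)]
      (recur (into acc chunk))
      acc)))

;; Hypothetical usage:
;; (drain-results (client/q conn first-query))
```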

marshall15:04:56

@uwo heartbeat failure means the transactor is unable to write to storage. It will self-destruct as a result.

celldee15:04:57

@marshall Thanks. What would that look like please? I'm still trying to learn Clojure.

marshall15:04:12

That might help ^

celldee15:04:46

@marshall Thanks very much!

marshall15:04:48

Luke does a good job of covering the concepts of core async in those videos

alexandergunnarson18:04:54

Quick question — is it possible to query Datomic backed by DynamoDB without a transactor running? It doesn't look like it.

alexandergunnarson18:04:27

(datomic.api/connect "datomic:ddb://...")

org.apache.activemq.artemis.api.core.ActiveMQNotConnectedException: AMQ119007: Cannot connect to server(s). Tried with all available servers.
    type: #object[org.apache.activemq.artemis.api.core.ActiveMQExceptionType$3 0x2512bf7b "NOT_CONNECTED"]
                                        clojure.lang.ExceptionInfo: Error communicating with HOST localhost on PORT 4334

alexandergunnarson18:04:40

I know transactions aren't possible without one, of course, but I figured that queries might be, given that queries are still possible in the case of a transactor failure

luke20:04:10

@alexandergunnarson no, a transactor is required for queries. The fact that it continues to work (for some amount of time) after a transactor fails is an implementation detail, not something that should be depended upon.

alexandergunnarson20:04:32

Ah okay, makes sense. Thanks!

luke20:04:45

Datomic’s architecture could in theory support read-only models but it doesn’t right now

seancorfield20:04:22

Does that mean that in the time between the primary transactor failing and the standby transactor coming into play, you can’t run queries, even against the peers? So you (temporarily) lose reads as well as writes?

luke21:04:40

@seancorfield I’ve never seen a read fail in practice. I’d have to check whether that’s a guarantee or just the way it always works (eventually, a peer will “notice” that its transactor is dead and stop working, but that is longer than the failover window)

luke21:04:07

Good question for @jaret, he probably knows the answer.

luke21:04:48

actually @seancorfield I should have read the docs before answering: Datomic does indeed guarantee that reads are available during a failover: http://docs.datomic.com/ha.html

seancorfield21:04:04

OK, glad to hear that. I would have been (unpleasantly) surprised if that wasn’t the case 🙂