#datomic
2021-09-03
Twan09:09:24

Is ?sslmode=require respected on Postgres connections for the Datomic peer server? We're not sure if it is, but we'd like to enforce SSL on all our Postgres connections. So far, we've been unable to connect via SSL.
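[Editor's note: for reference, a sketch of what an SSL-enforcing storage URI could look like. The host, database name, and credentials are placeholders, and ssl/sslmode are standard pgjdbc connection parameters; whether the peer server actually honors them is exactly the open question above.]

```
# Hypothetical peer-server storage URI (placeholders throughout);
# ssl=true & sslmode=require are pgjdbc parameters, unconfirmed here.
datomic:sql://my-db?jdbc:postgresql://db-host:5432/datomic?user=datomic&password=...&ssl=true&sslmode=require
```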

Jakub Holý (HolyJak)17:09:33

Hi! Is it possible to use datomic.client.api against on-prem Datomic, from a peer server itself? (B/c I want to use ragtime.datomic, which uses the client API) https://docs.datomic.com/client-api/datomic.client.api.html#var-client it would seem I need to run a separate peer server?!

Jakub Holý (HolyJak)17:09:58

FYI typo in the official docs at https://docs.datomic.com/on-prem/overview/clients-and-peers.html > begin with Datomic dev-local, which [should be with?] the client library in-process.

kenny20:09:22

Is the d/datoms :eavt index guaranteed to be sorted in ascending order of :e? e.g., is the following true:

(= (sort-by :e (d/datoms db {:index :eavt})) (d/datoms db {:index :eavt}))

favila20:09:43

the name of the index is the sort order of the index

ghadi20:09:48

but eav are ascending, t is descending

kenny20:09:51

I'd like to iterate through the d/datoms eavt index as fast as possible, applying parallelism if possible. Are there any techniques for doing so? Using :offset & :limit to batch doesn't seem like an effective strategy since d/datoms will need to walk the whole datoms index regardless.

favila20:09:20

If you’re using the sync API, you should pretty much never use offset+limit

favila20:09:02

just keep consuming it

favila20:09:40

offset+limit will turn it into O(n²) (each batch re-walks the index from the start); just consuming it will continue from whatever chunk pointer it has

favila20:09:53

increasing :chunk can help too

favila20:09:32

for doing stuff in parallel, if you can use :AEVT instead, you can find all the :As and issue a d/datoms for each one

favila20:09:37

[:find ?a :where [:db.part/db :db.install/attribute ?a]]

favila20:09:16

that gets you all attributes. Then (d/datoms :aevt a1/2/3/n)
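[Editor's note: a sketch of the per-attribute fan-out favila describes, assuming an on-prem client db value. The attribute query and per-attribute d/datoms calls come from the messages above; the pmap-based parallelism and the helper names are illustrative choices, not an established API.]

```clojure
(require '[datomic.client.api :as d])

;; 1. Find every installed attribute (favila's query above).
(defn all-attrs [db]
  (map first
       (d/q '[:find ?a :where [:db.part/db :db.install/attribute ?a]] db)))

;; 2. Issue one :aevt d/datoms call per attribute, in parallel.
;;    pmap is a simple illustrative choice; a real pipeline might
;;    use an executor or core.async channels instead.
(defn datoms-by-attr [db]
  (pmap (fn [a]
          [a (seq (d/datoms db {:index :aevt :components [a]}))])
        (all-attrs db)))
```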

kenny21:09:05

Oh, that's an excellent idea! Thank you!!

kenny21:09:18

Why does changing the chunk size have an impact?

favila21:09:38

It sends more datoms at a time

favila21:09:55

It seems to be faster from experience 🤷 YMMV

kenny22:09:08

Does the sync client API officially support passing :chunk? From the code, I see that it happens to work right now since the sync arg map is just passed to the async api.

favila23:09:21

> https://docs.datomic.com/client-api/datomic.client.api.html functions are designed for convenience. They return a single collection or iterable and do not expose chunks directly. The chunk size argument is nevertheless available and relevant for performance tuning.

favila23:09:03

in addition, AFAIK the sync apis are implemented with the async ones, so this makes sense
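[Editor's note: concretely, passing :chunk through the sync API looks like this. The value 10000 is illustrative; per the client docs the chunk size defaults to a smaller value, and the best setting is workload-dependent.]

```clojure
;; :chunk flows through the sync arg map to the async implementation.
(d/datoms db {:index :eavt :chunk 10000})
```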

kenny23:09:38

Gosh, you're a master of the docs. IMO, should be a part of the actual API docs 🙂

kenny23:09:14

Curious, have you had to handle exceptions thrown while traversing d/datoms? I'm ending up with some wacky-feeling code:

(defn read-datoms-with-retry!
  [db argm dest-ch]
  ;; ExceptionInfo is clojure.lang.ExceptionInfo (import it in the ns form).
  (let [datoms  (d/datoms db argm)
        *offset (volatile! (:offset argm 0))]
    (try
      (doseq [d datoms]
        (async/>!! dest-ch d)
        (vswap! *offset inc))                 ; vswap!, not swap!, for volatiles
      (catch ExceptionInfo ex
        (if (retry/default-retriable? ex)
          (do
            (log/warn "Retryable anomaly while reading datoms. Retrying from offset..."
              :anomaly (ex-data ex)
              :offset @*offset)
            (read-datoms-with-retry! db (assoc argm :offset @*offset) dest-ch))
          (throw ex))))))

kenny23:09:06

The doseq is wrapped in try/catch because the seq's chunks are realized inside doseq, not in the d/datoms call itself.