#datomic
2018-01-22
Hendrik Poernama11:01:18

I'm trying to switch from the peer api to the client api in preparation for an eventual cloud migration. Is there an established best practice for passing data around business-logic functions? I used to pass almost everything as a Datomic entity (entity api). Now, if I pass entity ids around, I end up with scattered ad-hoc pull queries and sometimes pull the same entity multiple times via slightly different queries. Not sure if this is a good design.

val_waeselynck16:01:13

@U7Q9VAXPT Note that passing around lookup-refs is not sufficient to emulate Peer Entities: you also need to pass the database values, otherwise you may run into inconsistencies.
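A minimal sketch of that point, assuming the Client API, an existing `conn`, and made-up attribute names: capture one db value at the boundary and pass it alongside the ref, instead of calling `(d/db conn)` inside each function, so every read sees the same snapshot.

```clojure
(require '[datomic.client.api :as d])

;; :order/status and :order/id are hypothetical attributes.
;; Taking the db value as an argument keeps reads consistent with
;; whatever snapshot the caller is working from.
(defn order-status [db order-ref]
  (:order/status (d/pull db [:order/status] order-ref)))

;; Consistent: both reads use the same db value.
(let [db (d/db conn)]
  [(order-status db [:order/id "ord-1"])
   (order-status db [:order/id "ord-2"])])
```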

val_waeselynck16:01:46

You're definitely going to face dilemmas you didn't have on Peers, typically simplicity (I want my functions to have few dependencies on each other, and my queries to be about just one thing) vs performance (I don't want the N+1 problem). I advise you to do some benchmarking - you'll probably find, as I did, that a Datalog query usually has much more overhead than an entity lookup for the same amount of work.

val_waeselynck16:01:18

My intuition is that tools like GraphQL or other similar demand-driven query layers can alleviate a lot of this problem, because a lot of business logic can be expressed via derived attributes, and you can relatively easily build an efficient GraphQL server using a combination of asynchrony and batching - which is a good fit for a Datomic Client.

val_waeselynck16:01:24

GraphQL is for the read-side; as for the write-side, you usually have looser latency and throughput requirements for writes, so I wouldn't worry too much about the performance of that

val_waeselynck16:01:58

But I'm very curious to know what you find down that road.

Hendrik Poernama04:01:37

My first take on this: business functions take the db as their first argument and can take either an entity id or a lookup ref as additional arguments. I then have a set of functions that create a lookup ref from a name/uuid/natural key, and another set that resolve an entity id from a lookup ref (so far only needed for existence tests).
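A sketch of that convention, with hypothetical attribute names and the Client API assumed as `d`:

```clojure
(require '[datomic.client.api :as d])

;; Build a lookup ref from a natural key (:user/email is made up).
(defn user-ref [email]
  [:user/email email])

;; Resolve a lookup ref to an entity id; nil doubles as an
;; existence test, since an unmatched query returns no rows.
(defn resolve-eid [db [attr v]]
  (ffirst (d/q '[:find ?e
                 :in $ ?a ?v
                 :where [?e ?a ?v]]
               db attr v)))

;; Business functions take the db value first and accept either an
;; eid or a lookup ref, since pull resolves both.
(defn active-user? [db user]
  (boolean (:user/active (d/pull db [:user/active] user))))
```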

Hendrik Poernama04:01:26

I feel like this is a bit worse than N+1, because I'm seeing a lot of co-located on-demand pulls of essentially the same information, multiple times. Now that each pull is a network request, this worries me - especially since I'm using ring without an async handler...

Hendrik Poernama04:01:28

I think GraphQL clients work around this issue by locally caching every query result. Essentially almost what a peer is doing. So I could theoretically wrap pull with some custom memoize/caching if needed.
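A minimal sketch of such a wrapper, in plain Clojure: because a Datomic db value is immutable, caching on `[db pattern ref]` is safe for the cache's lifetime (e.g. one per request). `pull-fn` stands in for `datomic.client.api/pull`.

```clojure
(defn cached-pull-fn
  "Returns a pull-like function that caches results keyed on
  [db pattern ref], so repeated pulls of the same thing within one
  unit of work hit the network only once."
  [pull-fn]
  (let [cache (atom {})]
    (fn [db pattern ref]
      (let [k [db pattern ref]]
        ;; find (not get) so cached nil results also count as hits
        (if-let [hit (find @cache k)]
          (val hit)
          (let [result (pull-fn db pattern ref)]
            (swap! cache assoc k result)
            result))))))
```

For the simplest case, `(memoize d/pull)` would also work, but an explicit per-request cache avoids holding results (and db values) forever.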

Hendrik Poernama04:01:01

Writes are actually getting a bit more complicated if I'm designing for Datomic Cloud, where :db/cas is currently the only built-in transaction function. I can use cas like clojure's ensure, but I then have to handle retries explicitly.
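A hedged sketch of that retry loop, kept generic so the conflict test is injectable - exactly how a cas conflict surfaces (exception message vs. anomaly data) varies and should be verified against your client version:

```clojure
(defn with-cas-retry
  "Runs attempt! (a thunk that reads the current value and transacts
  a :db/cas op against it). Retries when cas-failed? recognizes the
  thrown exception as a compare-and-swap conflict; rethrows anything
  else, or a conflict once max-retries is exhausted."
  [attempt! cas-failed? max-retries]
  (loop [n 0]
    (let [r (try
              (attempt!)
              (catch Exception e
                (if (and (< n max-retries) (cas-failed? e))
                  ::retry
                  (throw e))))]
      (if (= ::retry r)
        (recur (inc n))
        r))))
```

With the real API, `attempt!` would `d/pull` the current value and `d/transact` `[[:db/cas eid attr old new]]`, re-reading on each retry - the ensure-like read has to happen inside the thunk.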

Hendrik Poernama05:01:37

Maybe I'm looking at this from the wrong angle. The client library is designed for a microservice architecture, and I should not worry about performance as long as it stays within the same algorithmic complexity - and just scale horizontally.

val_waeselynck06:01:31

Still, you may have a latency problem. Maybe you should fetch data once and pass data structures to functions that do a lot of validation

Hendrik Poernama11:01:58

I also tried passing lookup-refs around as entities, with the benefit of not having to do a separate eid lookup.

Hendrik Poernama11:01:14

Maybe pulls are cheap enough and I should not worry about it?

stuarthalloway12:01:41

you can do multiple pulls in a single query
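For instance (Client API, hypothetical attribute names), a collection binding lets one query pull many entities in a single round trip:

```clojure
(require '[datomic.client.api :as d])

;; One network request instead of N separate pulls:
;; bind a collection of eids/refs and pull each match.
(d/q '[:find (pull ?e [:order/id :order/status])
       :in $ [?e ...]
       :where [?e :order/id]]
     db order-eids)
```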

donmullen14:01:22

Is there sample code that shows a graceful way to handle getting this?

{:cognitect.anomalies/category :cognitect.anomalies/busy, :cognitect.anomalies/message "Busy rebuilding index", :dbs [{:database-id "954cc441-8125-45b1-a2d6-6547c985bfad", :t 1712, :next-t 1713, :history false}]}
I’m currently using tx-pipeline from https://docs.datomic.com/on-prem/best-practices.html - and calling the synchronous api via (client/transact conn {:tx-data data}). Wondering if I should switch to asynchronous, check for anomalies, wait and retry - or stick with synchronous and do the same. Hmm.. seems I should bite the bullet and pull in code from https://github.com/Datomic/mbrainz-importer - looks like the batch xform handles all the retries.
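A sketch of the synchronous-with-retry option: with the sync client api, the anomaly should arrive as ex-data on the thrown exception (an assumption worth verifying), so a wrapper can back off exponentially on `:cognitect.anomalies/busy`. `transact!` is a thunk wrapping `(client/transact conn {:tx-data data})`.

```clojure
(defn transact-with-backoff
  "Calls transact! (a thunk); on a :cognitect.anomalies/busy anomaly,
  sleeps with exponential backoff and retries, up to max-retries
  times. Other exceptions are rethrown immediately."
  [transact! {:keys [max-retries base-ms]
              :or   {max-retries 8 base-ms 100}}]
  (loop [n 0]
    (let [r (try
              (transact!)
              (catch Exception e
                (if (and (< n max-retries)
                         (= :cognitect.anomalies/busy
                            (:cognitect.anomalies/category (ex-data e))))
                  ::retry
                  (throw e))))]
      (if (= ::retry r)
        (do (Thread/sleep (long (* base-ms (Math/pow 2 n))))
            (recur (inc n)))
        r))))
```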

stuarthalloway16:01:32

I was mistaken in my comments before, please go by the (new) docs

val_waeselynck16:01:13

@stuarthalloway thanks for the clarification - have you been able to determine what happens when a client-side with'ed db is implicitly resolved via asOf on the server side?

stuarthalloway16:01:11

@val_waeselynck yeah, that is separate, will get back to you

asier17:01:00

Hi there. I have downloaded Datomic Starter and included the client library [com.datomic/client-pro "0.8.14"] in my project.clj. Now I get this error when running lein check:

asier17:01:34

I'm using Clojure 1.9.0

Alex Miller (Clojure team)17:01:04

Maybe the output of lein deps :tree would help shed some light on your dependencies.

souenzzo21:01:17

[datomic cloud] I have a library/framework and I want to allow the user to choose between the cloud and peer APIs. How can I do that? Are there future plans for this?