
I noticed some surprising behaviour with on-prem Datomic client 1.0.6202; only the first collection binding is resolved as entities. I have a query with two collection bindings containing lookup refs; the second collection binding is not resolved as entities. I can work around it by adding `[(datomic.api/entid $ ?y) ?z]` to my :where clause, or by manually constructing a relation binding. Is this to be expected?


@UHJH8MG6S Out of curiosity, is the binding that isn’t resolved first used in a clause where the attribute is not statically known?


Unless the query planner sees a static pattern like `[_ :literal-ref-attr ?y]`, it doesn't know that `?y` could possibly be resolved to an entity id. (IME)


so it just tries to match what you literally passed in


@U09R86PA4 I use a rule like this: `(my-rule? ?x ?y)`, where ?x and ?y are both expected to be bound, and provided using coll-bindings. The rule is effectively `[?x :attr ?y]`. When I execute the query an error is thrown (`[?y] not bound in clause: [email protected]`). When I reverse my :in arguments, the error changes to `[?x] not bound in clause: [email protected]`. @U0CJ19XAM I'll try to extend with a repro later 👍


Here's the repro of this behaviour using dev-tools 0.9.232. In this example ?parent is bound, ?child is not. I expected both to be bound.


Your destructuring doesn’t make sense to me. What is ?e2 supposed to be?


The only choices you have for destructuring are to take the value as a whole unchanged, to take it as a collection of single items, or to take it as a collection of relations.


I think you either want args `[db [ref1 ref2]]` :in `[$ [?e ...]]` or args `[db [ref1] [ref2]]` :in `[$ [?e ...] [?e2 ...]]`


You are doing args `[db [ref1 ref2]]` :in `[$ [[?e ...] [?e2 ...]]]`
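The distinction can be sketched like this; `ref1`/`ref2` stand in for the lookup refs from the repro, and `:person/parent` is borrowed from the example later in the thread:

```clojure
;; Two separate collection bindings: each arg is its own collection.
(d/q '{:find  [?e ?e2]
       :in    [$ [?e ...] [?e2 ...]]
       :where [[?e2 :person/parent ?e]]}
     db [ref1] [ref2])

;; One relation binding: a single arg, a collection of tuples,
;; each tuple destructured into [?e ?e2].
(d/q '{:find  [?e ?e2]
       :in    [$ [[?e ?e2]]]
       :where [[?e2 :person/parent ?e]]}
     db [[ref1 ref2]])

;; The shape from the repro, :in [$ [[?e ...] [?e2 ...]]], mixes the
;; two forms and isn't valid destructuring, though the parser
;; currently accepts it without complaint.
```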


The query parser isn’t catching this as a syntax error but it should


You're right; when I initially reported this issue I used
`[db [ref1] [ref2]] :in [$ [?e ...] [?e2 ...]]`
instead of the syntax in the repro. Let me get an example ready.


I updated my example to be more in line with my original report. Sorry for the noise 😅

Joe Lane 14:05:59

@UHJH8MG6S If you pass in sets of eids instead of lookup refs it appears to do what you're after.

(d/q {:query '{:find  [(pull ?parent [:person/name]) (pull ?child [:person/name])]
               :in    [$ [?parent ...] [?child ...]]
               :where [[?child :person/parent ?parent]]}
      :args  [(d/db conn)
              (into #{}
                    (map #(->> % (d/pull (d/db conn) [:db/id]) :db/id)
                         #{[:person/name "pete"]}))
              (into #{}
                    (map #(->> % (d/pull (d/db conn) [:db/id]) :db/id)
                         #{[:person/name "frank"]}))]})
;; Returns
;; => [[#:person{:name "pete"} #:person{:name "frank"}]]


Nice! I guess I can also use datomic.api/entid in the query to resolve the lookup refs. I initially expected both collection bindings to resolve the lookup refs, but it seems to resolve at most one collection. Is that intended?
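The entid workaround mentioned at the top of the thread might look like this; attribute and entity names are taken from the repro, and the first collection binding is left to the engine's automatic resolution:

```clojure
;; Resolve the second collection's lookup refs to entity ids inside
;; the query itself, rather than relying on the query engine to do it.
(d/q '{:find  [(pull ?parent [:person/name]) (pull ?child [:person/name])]
       :in    [$ [?parent ...] [?child-ref ...]]
       :where [[(datomic.api/entid $ ?child-ref) ?child]
               [?child :person/parent ?parent]]}
     (d/db conn)
     #{[:person/name "pete"]}
     #{[:person/name "frank"]})
```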

Joe Lane 14:05:21

I'll be creating an internal story to investigate this more deeply but for now you should consider it expected behavior.


is it generally acceptable for a query group to communicate with another query group? for example, one query group behind http direct that sources data from other (non internet facing) query groups?

Joe Lane 16:05:24

What does ".. sources data from... " mean?


perhaps via http, where each query group is a microservice

Joe Lane 16:05:08

Do you need a database for your internet-facing QG?

Joe Lane 16:05:07

What is pushing you towards "microservices"?


> Do you need a database for your internet-facing QG? i think so, yes. basically i have a few query groups all independently accessible via http-direct and serving their own rest APIs. i would like those query groups to remain as separate applications, but use only one exposed query group to handle my API requests and fetch data from other query groups as needed. for example: i have an existing query group that handles the customer api, and another existing query group that handles the billing api. i'd like all api requests to be routed through a single query group that will fetch data from the other query group APIs to build a response


> What is pushing you towards "microservices"? maybe microservice is too strong of a word here. the problem i'm trying to solve is that i have some independently managed APIs on different query groups, but a need to have information from many of them to make a proper decision about a response to the client. does that make sense?

Joe Lane 16:05:38

Have you considered using those n-other services as libraries in your new internet-facing application? This way you can:
• avoid a network hop
• retain all data in process
• return large result sets without placing a burden on those other services (you're avoiding fan-in)
• avoid needing to call d/sync or d/as-of to have a consistent view of the database
• autoscale your internet-facing application independently of those other services
• etc...


> Have you considered using those n-other services as libraries in your new internet-facing application? absolutely. we actually started that way but at the time (a few years ago) Ions had a very low max timeout on the deployment health check so we ended up splitting the APIs. services-as-libraries definitely has its benefits, and we're considering adopting polylith to help us move back in that direction. but in the case of at least one Query Group / API that we have, it must remain separate for business reasons (booo) and it's also considerably more resource intensive than the others.

Joe Lane 17:05:42

Ok, well there is nothing inherently wrong with QG-to-QG communication; you could even leverage the client API to issue a query against the other QG. You will incur additional overhead with this pattern, and you might want to measure it to make sure it works for your needs.
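A minimal sketch of what querying another query group via the client API might look like; the region, system, endpoint host, and db name below are all placeholders for your own deployment:

```clojure
(require '[datomic.client.api :as d])

;; Client config pointing at the other query group's endpoint.
;; All values here are deployment-specific placeholders.
(def other-qg-cfg
  {:server-type :ion
   :region      "us-east-1"
   :system      "my-system"
   :endpoint    "http://entry.my-system.us-east-1.datomic.net:8182/"})

(def client (d/client other-qg-cfg))
(def conn   (d/connect client {:db-name "my-db"}))

;; Issue a query against the other QG's view of the database.
(d/q '[:find ?name
       :where [_ :person/name ?name]]
     (d/db conn))
```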


good idea. so i suppose for the one separate query group i would leave HTTP Direct enabled and make HTTP requests to... the QG's load balancer? or would i still need to involve an API Gateway?

Joe Lane 18:05:38

That QG's LB endpoint on port 8184 is sufficient, no need to go back out through APIGW.
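Calling the other QG's HTTP Direct endpoint over its load balancer might then look like the following sketch; the hostname and route are placeholders, and it assumes clj-http (with cheshire) on the classpath for the JSON coercion:

```clojure
(require '[clj-http.client :as http])

;; Hit the other query group's HTTP Direct route via its load
;; balancer on port 8184, without going back out through APIGW.
;; Host and path are hypothetical examples.
(http/get "http://internal-qg-lb.example.internal:8184/billing/invoices"
          {:as :json})
```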


excellent, that's incredibly helpful. thanks again for your support


i understand that question is vague, but i'm generally aiming towards a micro service architecture