Luke Schubert 15:01:23

I think I'm misunderstanding something about :keys in a query

Luke Schubert 15:01:06

{:find [?id (max ?score) ?timestamp ?source]
 :keys [id score timestamp source]
 ...}
shouldn't that return [{:id some-val :score some-val ...}]?

Luke Schubert 15:01:51

as it stands now that query still returns a vec of vecs


Maybe you are using an older version?


nope that should have it


(datomic.api/q '{:find  [?a]
                 :keys  [a]
                 :where [[(ground [1 2 3]) [?a ...]]]})
=> [{:a 1} {:a 2} {:a 3}]
Works for me

Luke Schubert 16:01:30

alright let me try a smaller more isolated query

Luke Schubert 16:01:16

ahh I see what's wrong

Luke Schubert 16:01:41

it would appear mem doesn't support :keys

Luke Schubert 16:01:47

and the tests are using mem


woah, I would not have expected this to have any connection to the storage used


My query uses no dbs


are you sure that’s what’s going on?

Luke Schubert 16:01:32

actually I may have jumped to that a little quickly

Luke Schubert 16:01:58

ah my client lib is 0.9.5697

Luke Schubert 16:01:20

sorry about that

Luke Schubert 16:01:40

so this is weird

Luke Schubert 16:01:46

same query you ran

Luke Schubert 16:01:07

(datomic.api/q '{:find  [?a]
                 :keys  [a]
                 :where [[(ground [1 2 3]) [?a ...]]]})
=> #{[1] [2] [3]}


client-lib? you mean peer version?


0.9.5697 predates the :keys feature by a significant amount


so that is why it isn’t working

Luke Schubert 17:01:04

yup, thanks for the help


how can one retract an entity representing a tuple? when we try, we get an exception and the error message Invalid list form [ ... ]

(d/transact (client/get-conn)
            {:tx-data [[:db/retractEntity 123456789123]]})

Execution error (ExceptionInfo) at datomic.client.api.async/ares (async.clj:58).
Invalid list form: [9876543219876 5555123123123 111122223333444]

Joe Lane 17:01:17

Have you tried [:db/retract 123456789123 :my.tuple/attr [9876543219876 5555123123123 111122223333444]]

Joe Lane 17:01:13

I know I've done what you want before, I just can't remember the exact syntax.


unfortunately that doesn't work. datomic runs the transaction successfully but it's a noop.

Joe Lane 17:01:03

Can you show the schema for that attribute?


something like this, where all three attributes are references:

#:db{:ident       :edu/university+semester+course,
     :valueType   :db.type/tuple
     :cardinality :db.cardinality/one
     :unique      :db.unique/identity
     :tupleAttrs  [:edu/university :edu/semester :edu/course]}

Joe Lane 17:01:45

Are you using the latest datomic cloud release / latest client?

Joe Lane 18:01:40

From the bottom of the docs on composite tuples:
> Composite attributes are entirely managed by Datomic; you never assert or retract them yourself. Whenever you assert or retract any attribute that is part of a composite, Datomic will automatically populate the composite value.

Joe Lane 18:01:18

So doing the first retract suggestion I gave wouldn't make sense.
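
What the quoted docs imply, as a hedged Clojure sketch (here `reg-eid` and `course-eid` are hypothetical entity ids, and `conn` is obtained as in the snippets above): to change or remove a composite value you only ever touch its underlying attributes.

```clojure
;; Sketch only: the composite :edu/university+semester+course is never
;; asserted or retracted directly. Retracting an underlying attribute
;; (here the hypothetical :edu/course ref) is what prompts Datomic to
;; recompute -- or retract -- the composite value on its own.
(d/transact (client/get-conn)
            {:tx-data [[:db/retract reg-eid :edu/course course-eid]]})
```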


Hmm. So in our case, we have an entity with a component reference to these tuples. When we retract the "parent" entity, Datomic automatically attempts to retract the tuple (due to the component reference) and then fails due to the invalid list form error. I wonder if we've found ourselves in an edge case.

Joe Lane 18:01:36

So your schema attribute would be

#:db{:ident       :edu/university+semester+course,
     :valueType   :db.type/tuple
     :cardinality :db.cardinality/one
     :unique      :db.unique/identity
     :isComponent true
     :tupleAttrs  [:edu/university :edu/semester :edu/course]}


not quite, more like this:
1. a parent catalogue entity with a many/component attribute :catalogue/registrations, which references:
2. various registration entities which contain the :edu/university+semester+course tuple attribute (and of course the tuple's individual attributes and values of :edu/university, :edu/semester, and :edu/course)


so when retracting a catalogue entity, those component entities with the tuple attributes are unsuccessfully retracted

Joe Lane 18:01:51

Oh interesting. Does [:db/retract the-catalogue-eid :catalogue/registrations 123456789123] succeed?


let's find out! one sec.


yup, that works as expected. all we're doing is removing the relationship between the parent and the child which doesn't really affect the child with the tuple attribute.

Joe Lane 18:01:56

So now try retracting the entity with a tuple manually. This may determine if you have indeed found some edge case in using component entities with composite tuples.


still no luck


Hi. I really want to get into Datomic, but I'm having trouble understanding how to take advantage of it. I tried it initially for a website, but I need full-text search and it's not in the cloud version apparently (and not really recommended for on-premises). I ended up using DynamoDB for key lookup and ElasticSearch for querying, and this works well but I want to take another look at Datomic before the project gets too far along. Should I replace DynamoDB with Datomic (having Datomic sync with ElasticSearch) and hit Datomic for key lookups and non-FTS queries?


Interested in the same question - are there any examples or tips for integrating Datomic with ElasticSearch or CloudSearch?


same here. in the past i've seen proof-of-concept examples of spooling the transaction log to ElasticSearch, but no concrete examples of (re)building searchable documents based on changes to entities over time.


@jshaffer2112 from what i've read, Datomic <-> ElasticSearch should be sufficient without DDB in the middle if you don't mind gluing them together. Datomic is pretty performant when looking up ids. however, DDB does have the advantage of configurable triggers to automatically push changes to ElasticSearch when a row entry is modified.


i suppose you can do the same with transaction functions, so long as you remember to use them when needed 🙂


I'll look up transaction functions and try that. Datomic is definitely more work to get started, but I think it will be worth it down the road.


let us know what you find. according to the docs, transaction functions must be side-effect free, so i suppose their purity is in the eye of the beholder. 😉 hopefully someone here can chime in if they have some experience using transaction functions to sync data with other resources.


keep in mind tx fns execute even if the tx ultimately fails


Is it worth looking into CloudSearch instead of ElasticSearch?


you can subscribe to changes on the datomic db, and sync changes to ES that way


that would be a cleaner solution IMO than abusing transaction functions
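
One way to subscribe on-prem, sketched with the peer API's tx-report-queue; `index-datoms!` is a hypothetical function standing in for whatever pushes documents to ElasticSearch:

```clojure
(require '[datomic.api :as d])

;; Minimal on-prem sketch: block on the tx-report-queue and hand each
;; transaction's datoms to a hypothetical ES indexer. Each .take returns
;; a tx-report map with :tx-data, :db-before, and :db-after.
(defn sync-to-es! [conn index-datoms!]
  (let [queue (d/tx-report-queue conn)]
    (future
      (loop []
        (let [{:keys [db-after tx-data]} (.take queue)]
          ;; index-datoms! is an assumption, not a Datomic API
          (index-datoms! db-after tx-data))
        (recur)))))
```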


@chrisblom is that Datomic Cloud friendly?


i'm not sure, i've only worked with on-prem


the last time i checked the proposed solution was to periodically read the Cloud transaction log via a scheduled Lambda, store processed transactions in DDB, and push changes to ES. but that was a year ago, so maybe things have changed?

Joe Lane 20:01:58

After a transaction succeeds, why not initiate an indexing process with the tx-result by putting the new datoms on a queue to be indexed? Note, this assumes you don't need to query the Search index in-process with your query, and it also assumes you would be ok indexing the datoms as documents, not entity maps. IMO, indexing datoms is a smoother approach than updating documents.


why not sip the tx log?

Joe Lane 20:01:08

@ghadi I've done that approach in the past too and it's fantastic, I just figured you already had the datoms in hand.
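
A sketch of the log-sipping approach with the on-prem peer API, again assuming the hypothetical `index-datoms!` indexer; `last-t` would be the basis-t you last processed, stored somewhere durable:

```clojure
(require '[datomic.api :as d])

;; Sketch: periodically read every transaction since last-t from the log
;; and feed its datoms to the indexer. d/tx-range returns maps of
;; {:t <basis-t> :data <datoms asserted/retracted in that tx>}.
(defn sip-log! [conn last-t index-datoms!]
  (let [db (d/db conn)]
    (doseq [{:keys [data]} (d/tx-range (d/log conn) (inc last-t) nil)]
      (index-datoms! db data))))
```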


agreed. anecdotally, my issue with datom-level indexing is that i can't easily restrict access to indices based on authorization.

Joe Lane 20:01:40

How would you meet that requirement in any other system with elasticsearch?


i'm by no means an ES expert, so take what i say with a grain of salt. 🙂 in my project i have clearly defined permissions to entities, and so it's easy to reason about which ES index (and attached permissions) to push entities as whole documents. when sipping datoms off the tx log i have to do a lot of reconciliation of [E A V] to make sure they end up in the right place. i don't think any other system solves the problem better, and for me it boils down to challenging business requirements.