This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2020-01-03
Channels
- # announcements (2)
- # babashka (66)
- # beginners (225)
- # braveandtrue (1)
- # calva (14)
- # circleci (1)
- # clj-kondo (36)
- # cljsrn (3)
- # clojure (423)
- # clojure-finland (7)
- # clojure-nl (1)
- # clojure-spec (14)
- # clojure-survey (41)
- # clojure-sweden (2)
- # clojure-uk (13)
- # clojurescript (59)
- # community-development (10)
- # cursive (2)
- # datascript (14)
- # datomic (63)
- # events (3)
- # expound (8)
- # figwheel-main (6)
- # kaocha (8)
- # luminus (6)
- # malli (1)
- # nrepl (2)
- # off-topic (51)
- # other-lisps (3)
- # reagent (16)
- # shadow-cljs (44)
- # spacemacs (7)
- # sql (22)
- # vrac (1)
I think I'm misunderstanding something about :keys in a query
{:find [?id (max ?score) ?timestamp ?source]
 :keys [id score timestamp source]
 ...}
shouldn't that return [{:id some-val :score some-val ...}]? as it stands now, that query still returns a vec of vecs
0.9.5951?
(datomic.api/q '{:find [?a]
                 :keys [a]
                 :where [[(ground [1 2 3]) [?a ...]]]})
=> [{:a 1} {:a 2} {:a 3}]
Works for me
alright, let me try a smaller, more isolated query
ahh I see what's wrong
it would appear mem doesn't support :keys
and the tests are using mem
actually I may have jumped to that a little quickly
ah my client lib is 0.9.5697
sorry about that
so this is weird
same query you ran
(datomic.api/q '{:find [?a]
                 :keys [a]
                 :where [[(ground [1 2 3]) [?a ...]]]})
=> #{[1] [2] [3]}
yup, thanks for the help
how can one retract an entity representing a tuple? when we try, we get an exception and the error message Invalid list form [ ... ]
(d/transact (client/get-conn)
            {:tx-data [[:db/retractEntity 123456789123]]})
Execution error (ExceptionInfo) at datomic.client.api.async/ares (async.clj:58).
Invalid list form: [9876543219876 5555123123123 111122223333444]
Have you tried [:db/retract 123456789123 :my.tuple/attr [9876543219876 5555123123123 111122223333444]]
unfortunately that doesn't work. datomic runs the transaction successfully but it's a noop.
something like this, where all three attributes are references:
#:db{:ident :edu/university+semester+course
     :valueType :db.type/tuple
     :cardinality :db.cardinality/one
     :unique :db.unique/identity
     :tupleAttrs [:edu/university :edu/semester :edu/course]}
From the bottom of https://docs.datomic.com/cloud/schema/schema-reference.html#composite-tuples > Composite attributes are entirely managed by Datomic–you never assert or retract them yourself. Whenever you assert or retract any attribute that is part of a composite, Datomic will automatically populate the composite value.
Hmm. So in our case, we have an entity with a component reference to these tuples. When we retract the "parent" entity, Datomic automatically attempts to retract the tuple (due to the component reference) and then fails due to the invalid list form error. I wonder if we've found ourselves in an edge case.
So your schema attribute would be
#:db{:ident :edu/university+semester+course
     :valueType :db.type/tuple
     :cardinality :db.cardinality/one
     :unique :db.unique/identity
     :isComponent true
     :tupleAttrs [:edu/university :edu/semester :edu/course]}
Correct?
not quite, more like this:
1. a parent catalogue entity with a many/component attribute :catalogue/registrations which references:
2. various registration entities which contain the :edu/university+semester+course tuple attribute (and of course the tuple's individual attributes and values of :edu/university, :edu/semester, and :edu/course)
so when retracting a catalogue entity, those component entities with the tuple attributes are unsuccessfully retracted
Oh interesting. Does [:db/retract the-catalogue-eid :catalogue/registrations 123456789123] succeed?
yup, that works as expected. all we're doing is removing the relationship between the parent and the child which doesn't really affect the child with the tuple attribute.
So now try retracting the entity with a tuple manually. This may determine if you have indeed found some edge case in using component entities with composite tuples.
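something like this, as a sketch (child-eid is a hypothetical placeholder for the registration entity's id):

```clojure
;; retract the registration entity directly, rather than via the
;; parent catalogue's component cascade (child-eid is hypothetical)
(d/transact (client/get-conn)
            {:tx-data [[:db/retractEntity child-eid]]})
```

if the direct retraction fails with the same "Invalid list form" error, the problem is with retracting composite-tuple entities generally, not with the component cascade.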
Hi. I really want to get into Datomic, but I'm having trouble understanding how to take advantage of it. I tried it initially for a website, but I need full-text search and it's not in the cloud version apparently (and not really recommended for on-premises). I ended up using DynamoDB for key lookup and ElasticSearch for querying, and this works well but I want to take another look at Datomic before the project gets too far along. Should I replace DynamoDB with Datomic (having Datomic sync with ElasticSearch) and hit Datomic for key lookups and non-FTS queries?
Interested in the same question - are there any examples or tips for integrating Datomic with ElasticSearch or CloudSearch?
same here. in the past i've seen proof-of-concept examples of spooling the transaction log to ElasticSearch, but no concrete examples of (re)building searchable documents based on changes to entities over time.
@jshaffer2112 from what i've read, Datomic <-> ElasticSearch should be sufficient without DDB in the middle if you don't mind glueing them together. Datomic is pretty performant when looking up ids. however, DDB does have the advantage of configurable triggers to automatically push changes to ElasticSearch when a row entry is modified.
i suppose you can do the same with transaction functions, so long as you remember to use them when needed 🙂
I'll look up transaction functions and try that. Datomic is definitely more work to get started, but I think it will be worth it down the road.
let us know what you find. according to the docs, transaction functions must be side-effect free, so i suppose their purity is in the eye of the beholder. 😉 hopefully someone here can chime in if they have some experience using transaction functions to sync data with other resources.
Is it worth looking into CloudSearch instead of ElasticSearch?
@chrisblom is that Datomic Cloud friendly?
the last time i checked the proposed solution was to periodically read the Cloud transaction log via a scheduled Lambda, store processed transactions in DDB, and push changes to ES. but that was a year ago, so maybe things have changed?
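a minimal sketch of the log-reading piece, using the client API's tx-range (conn and start-t are assumptions, and the push to ElasticSearch is elided):

```clojure
(require '[datomic.client.api :as d])

;; collect the datoms from every transaction since start-t; a
;; scheduled Lambda could call this and push the result to ES
(defn datoms-since [conn start-t]
  (->> (d/tx-range conn {:start start-t :end nil})
       (mapcat :data)))
```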
After a transaction succeeds, why not initiate an indexing process with the tx-result by putting the new datoms on a queue to be indexed? Note, this assumes you don't need to query the Search index in-process with your query, and it also assumes you would be ok indexing the datoms as documents, not entity maps. IMO, indexing datoms is a smoother approach than updating documents.
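that approach might look roughly like this (enqueue-for-indexing! is a hypothetical function standing in for whatever queue you use):

```clojure
;; transact, then hand the freshly-minted datoms to an indexing
;; queue; the Search index is updated asynchronously, outside the
;; transaction itself
(let [{:keys [tx-data]} (d/transact conn {:tx-data my-tx-data})]
  (enqueue-for-indexing! tx-data))
```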
@ghadi I've done that approach in the past too and it's fantastic, I just figured you already had the datoms in hand.
agreed. anecdotally, my issue with datom-level indexing is that i can't easily restrict access to indices based on authorization.
i'm by no means an ES expert, so take what i say with a grain of salt. 🙂 in my project i have clearly defined permissions to entities, and so it's easy to reason about which ES index (and attached permissions) to push entities to as whole documents. when sipping datoms off the tx log i have to do a lot of reconciliation of [E A V] to make sure they end up in the right place. i don't think any other system solves the problem better, and for me it boils down to challenging business requirements.