2021-09-22
Regarding the Pull API and its last parameter, the entity id: I noticed that all the examples refer to an entity id, a number. But if something is a :db/ident or a :db.unique/identity attr, pull can take a more verbose param:
(d/pull db '[*] [:country/code "GB"]) ;; :country/code is an attr with :db.unique/identity
(d/pull db '[*] :temp.periodicity/daily) ;; :temp.periodicity/daily is :db/ident-based enum
Q1: What does this alternative notation mean? It looks like [:country/code "GB"] and :temp.periodicity/daily are interchangeable with a numeric eid. Are they?
On the other hand, if I want to pull an entity where the composite tuple is :db.unique/identity, it doesn't work similarly:
(d/pull db '[*] [:temp/location+periodicity+date [[:country/code "GB"] :temp.periodicity/daily #inst"2021-09-18"]]) ;; <= doesn't work
(d/pull db '[*] [:temp/location+periodicity+date [83562883711079 74766790688854 #inst"2021-09-18"]]) ;; <= works fine
Q2: Why doesn't it work the same way with composite tuples for the elements of the tuple?
Q2 has bitten me before too. There appear to be some cases where you can’t use a db/ident in place of an entity id.
Another case is :db/cas, when old-val can be an entity id, but not an ident.
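A hedged sketch of that :db/cas caveat, using the peer API; conn, order-eid and the :order/* attribute and idents are hypothetical names, not from this thread:
(let [db      (d/db conn)
      old-eid (d/entid db :order.status/pending)]   ;; resolve the ident to an eid first
  ;; the old value must be the resolved entity id; the new value is asserted
  ;; normally, where an ident resolves as usual for a ref attribute
  @(d/transact conn
     [[:db/cas order-eid :order/status old-eid :order.status/shipped]]))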
Q1: these are entity identifiers and they are all interchangeable in most contexts. d/entid (in the peer api) resolves them to ids. In transaction data they will be resolved at the transactor rather than the peer, which makes them useful for preventing update and delete races.
Q2: but there are exceptions, like inside tuple values. Another is in queries where the attribute is not statically known. The reason is that you need type information to know whether the slot in the tuple is a ref or not. (I’m sure Datomic could be extended to figure it out, but for whatever reason it hasn’t been.)
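To make Q1 concrete, a minimal sketch (peer API, assuming db is a database value as in the earlier snippets, and reusing the attribute, ident, and entity ids quoted in this thread) showing d/entid resolving each kind of entity identifier to the same numeric id:
(require '[datomic.api :as d])
(d/entid db [:country/code "GB"])      ;; => 83562883711079
(d/entid db :temp.periodicity/daily)   ;; => 74766790688854
(d/entid db 83562883711079)            ;; => 83562883711079
;; so these pulls are equivalent:
(d/pull db '[*] 83562883711079)
(d/pull db '[*] [:country/code "GB"])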
Thank you @U09R86PA4 @UDF11HLKC
Going back to this example:
(d/pull db '[*] [:temp/location+periodicity+date [[:country/code "GB"] :temp.periodicity/daily #inst"2021-09-18"]]) ;; <= doesn't work
(d/pull db '[*] [:temp/location+periodicity+date [83562883711079 74766790688854 #inst"2021-09-18"]]) ;; <= works fine
Since entity identifiers are not interchangeable in the context of a tuple, am I correct that this ⬇️ is the shortest way to pull such an entity's details (shortest without using rules):
(d/q '[:find (pull ?e [*])
:where
[?ecountry :country/code "GB"]
[?eperiodicity :db/ident :temp.periodicity/daily]
[(tuple ?ecountry ?eperiodicity #inst"2021-09-16") ?tup]
[?e :temp/location+periodicity+date ?tup]] (get-db))
I mean, the pull is just an example. But ultimately I need to find ?e, and for such a tuple I need to write 4 lines (4 facts, 4 data patterns - tbh I'm not sure how to name these :where vectors).
Since it is not that nice, people probably use rules, right?
(def rules
'[[(find-temp ?e ?country ?periodicityident ?date)
[?ecountry :country/code ?country]
[?eperiodicity :db/ident ?periodicityident]
[(tuple ?ecountry ?eperiodicity ?date) ?tup]
[?e :temp/location+periodicity+date ?tup]]])
(d/q '[:find (pull ?e [*])
:in $ %
:where
(find-temp ?e "GB" :temp.periodicity/daily #inst"2021-09-16")]
(get-db) rules)
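A shorter alternative, assuming the peer API and the schema from the examples above, is to resolve the tuple's ref components with d/entid up front and then use the composite-tuple lookup ref directly, for example:
(let [db          (get-db)
      country     (d/entid db [:country/code "GB"])
      periodicity (d/entid db :temp.periodicity/daily)]
  (d/pull db '[*]
          [:temp/location+periodicity+date
           [country periodicity #inst "2021-09-18"]]))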
Am I correct or am I still missing some bits here (in terms of accessing entities identified by composite tuples)?
I've recently seen
2021-09-22 06:49:19.112 INFO datomic.update - {:event :transactor/admin-command, :cmd :request-index, :arg "xxx-7-24fef96f-2e3f-4369-acb3-4ae67b5f91df", :result {:queued "xxx-7-24fef96f-2e3f-4369-acb3-4ae67b5f91df"}, :pid 13582, :tid 772}
2021-09-22 06:49:19.263 INFO datomic.update - {:index/requested-up-to-t 17072440, :pid 13582, :tid 129}
2021-09-22 06:49:19.655 ERROR datomic.process - {:message "Terminating process - Timed out waiting for log write", :pid 13582, :tid 776}
in our transactor logs. Does that mean that data-dir is not writable? Or is it something else?
What version of Datomic are you using? What underlying storage? Is this a new system or an existing system? If you'd like you can shoot me a support case by e-mailing [email protected] or [email protected] and we can share logs/config there and I can help. This message can be thrown for a variety of reasons: unavailability of storage, transactor failover, gc pauses, etc.
Thanks Jaret. This is version 1.0.6344. PostgreSQL is the underlying storage.
It's an "old" system that's been unstable for some time.
I got that error after issuing datomic.api/request-index. A repeated request-index also restarted the transactor.
After changing the data-dir to an absolute path and some minor Dockerfile changes, I did not get an error on request-index - it worked fine.
If this is the current behavior of Datomic, i.e. that an incorrect data-dir may restart the transactor at a future time, I think it should be fixed / detected on start-up that data-dir is not writable.
I will file an issue if the transactor stops working again. Thanks!
If you can't write to the data-dir the transactor will failover. Any "restart" is going to be whatever you've implemented to facilitate high availability.
What does "failover" concretely mean? We have a single container instance running, no HA configured. So far things looks good today.
Sorry, if you are only running a single transactor pointed to the storage system then the transactor will just fail. Failover occurs when you have two transactors (active and standby) monitoring storage heartbeat. You should see a reported :heartbeat failure -- unable to write heartbeat to storage. I will add we do recommend https://docs.datomic.com/on-prem/operation/deployment.html given that Datomic is a distributed system, and given proper process isolation you're going to encounter transactor failure at some point (i.e. network latency or GC pauses etc), and https://docs.datomic.com/on-prem/operation/ha.html is the way to provide resiliency.
Thanks Jaret. I've encountered another issue/error now:
2021-09-24 16:31:03.076 WARN datomic.update - {:message "Index creation failed", :db-id "pvo-backend-service-stage-2-4220cac7-8b82-4f5e-af48-fc52303bb641", :pid 25625, :tid 93}
java.lang.Error: Timed out waiting to segment log.
at datomic.update$process_request_index$fn__23960$fn__23961.invoke(update.clj:183)
at datomic.update$process_request_index$fn__23960.invoke(update.clj:181)
at clojure.core$binding_conveyor_fn$fn__5772.invoke(core.clj:2034)
at clojure.lang.AFn.call(AFn.java:18)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:829)
Not sure what/why/how this would happen?
Redeploying the container (to a different node I suppose, picked by Azure) solved the problem.
I don’t know what that means, but something makes me wonder if you’ve run out of disk space somewhere?
Hi, does somebody know how to solve this problem:
{:message "Data did not conform",
:class ExceptionInfo,
:data
#:clojure.spec.alpha{:problems
({:path [:local :home],
:pred clojure.core/string?,
:val nil,
:via
[:cognitect.s3-libs.specs/local
:cognitect.s3-libs.specs/home],
:in [:local :home]}),
:spec
#object[clojure.spec.alpha$map_spec_impl$reify__1998 0x5dfc2a4 "clojure.spec.alpha$map_spec_impl$reify__1998@5dfc2a4"],
I can't figure out what part of the data the error is pointing at.
You have a nested map {:local {:home ....}} - the predicate there is string? but it's getting nil (which is not valid). If nil should be allowed, then that spec should be (s/nilable string?) instead.
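A minimal sketch of that fix, with a stand-in spec name (the actual spec lives in cognitect.s3-libs.specs and isn't shown here):
(require '[clojure.spec.alpha :as s])
(s/def ::home (s/nilable string?))  ;; nil now conforms as well as strings
(s/valid? ::home nil)               ;; => true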
none of this seems related to datomic afaict
I don't know - what did you do to see the error?
@U064X3EF3 I use clojure -A:ion-dev '{:op :push :region "eu-central-1" }'
I don't have enough knowledge about what Datomic is doing there to answer that so will need to wait for someone from the Datomic team to look at it
Hello … with Datomic Analytics, is there any way to either:
• see the database as of a particular t
• when I run a query, get the t associated with the results that I’m seeing?
What do you use for migrations? In a typical scenario, prod is not the only deployment target. We might have separate dev, test, and pre-prod deployment targets, each of them holding a separate Datomic db. The goal? Keep the db schema as code, and apply it to each deployment target within a CI/CD pipeline. In an SQL world I would integrate something like https://github.com/flyway/flyway. Here, there is not much on this topic if you google it. I've found a couple of libraries like https://github.com/luchiniatwork/migrana and https://github.com/avescodes/conformity but, like the majority of clj libs, they don't look very active (I know, they might just work and nothing more is left to do). Do you use any of these, or maybe something else?
Not sure how that fits with db schema as code
If you want somewhat shorter schema syntax that fits in an edn file, you may look at https://github.com/ivarref/datomic-schema (written by myself, based off cognitect-labs/vase)
I think you've already made the correct call to do any transactions such as schema installation during the CI/CD pipeline
Lots of people do it during instance startup and that's just got so many problems
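A minimal sketch of that approach, assuming the peer API, a plain map-form schema.edn resource on the classpath (no reader tags), and placeholder names; re-transacting unchanged attribute definitions is idempotent, so running this on every deploy is safe:
(require '[clojure.edn :as edn]
         '[clojure.java.io :as io]
         '[datomic.api :as d])

(defn apply-schema!
  "Transacts the schema found in resources/schema.edn against uri."
  [uri]
  (let [conn   (d/connect uri)
        schema (edn/read-string (slurp (io/resource "schema.edn")))]
    @(d/transact conn schema)))

;; e.g. called from a -main step in the CI/CD pipeline:
;; (apply-schema! (System/getenv "DATOMIC_URI"))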
Very interesting. Wouldn't it be great to mention this on Datomic's doc site, e.g. under Operation?