This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2019-06-07
Channels
- # announcements (12)
- # aws (2)
- # beginners (233)
- # calva (68)
- # cider (23)
- # circleci (5)
- # clj-kondo (40)
- # cljsrn (4)
- # clojars (3)
- # clojure (200)
- # clojure-austin (1)
- # clojure-canada (1)
- # clojure-dev (16)
- # clojure-europe (1)
- # clojure-finland (1)
- # clojure-italy (4)
- # clojure-nl (16)
- # clojure-spec (3)
- # clojure-uk (102)
- # clojurescript (16)
- # cursive (14)
- # datomic (16)
- # figwheel-main (7)
- # graalvm (3)
- # hoplon (37)
- # jackdaw (23)
- # jobs-discuss (24)
- # joker (4)
- # kaocha (6)
- # keechma (64)
- # off-topic (66)
- # parinfer (1)
- # pedestal (7)
- # re-frame (7)
- # reagent (10)
- # reitit (45)
- # rewrite-clj (12)
- # shadow-cljs (1)
- # slack-help (8)
- # spacemacs (55)
- # sql (9)
- # tools-deps (9)
- # vim (7)
Is it possible to supply a default value in a historical query? I'm looking for something like
[(get-else $ ?e :a/b ?tx true ?default-value) ?val]
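For context, a minimal sketch of how get-else is normally combined with a historical view: get-else's actual signature does not take a ?tx argument, so the usual workaround is to run the query against an as-of database instead (names like b-or-default-as-of, eid, and t below are hypothetical):

```clojure
(require '[datomic.api :as d])

;; get-else supplies a default when :a/b is absent on the entity; to make
;; the query "historical", query an as-of view of the db rather than
;; passing ?tx into get-else.
(defn b-or-default-as-of
  "Value of :a/b for entity eid as of time/tx t, or default when absent."
  [conn eid t default]
  (d/q '[:find ?val .
         :in $ ?e ?default
         :where [(get-else $ ?e :a/b ?default) ?val]]
       (d/as-of (d/db conn) t)
       eid
       default))
```

This requires a running Datomic peer and connection, so it is a sketch rather than something runnable in isolation.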
Am I missing a jar?
Could not locate datomic/s3backup__init.class, datomic/s3backup.clj or datomic/s3backup.cljc on classpath.
We're on paid Pro. Thx.
Are you running bin/datomic backup-db from the root of your unzipped datomic distribution?
backup is a command line tool: https://docs.datomic.com/on-prem/backup.html#backing-up
Yes, with the command line it's working fine. I thought I could use (datomic.backup/backup [from-conn-uri to-storage-uri sse? progress differential?]) directly in my code.
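For reference, the documented way to back up an on-prem database is the CLI run from the distribution root, as in this sketch (both URIs are placeholders):

```
# Run from the root of the unzipped Datomic distribution.
# Source and destination URIs below are illustrative placeholders.
bin/datomic backup-db \
  datomic:dev://localhost:4334/my-db \
  file:/path/to/backups/my-db
```

This cannot run outside a Datomic distribution, so treat it as a command fragment rather than a tested script.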
Did some tweaking of object-cache-max and gc settings on my peer. Tweaking these settings doesn’t seem to impact the metrics that are worrying me. However, maybe what I’m seeing is completely normal for a datomic peer: lots of churn in eden space due to allocations from datomic.api/q. Overall, gc time seems insignificant but new gen gc count per second seems high. Objects look like they’re being allocated and immediately garbage collected. Is this normal?
Additionally, it doesn’t seem like the peer is pulling new segments during this time. Does this mean object-cache-max is large enough to hold the data being queried?
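For reference, on a peer the object cache size is controlled by the datomic.objectCacheMax JVM system property, and unified GC logging can be enabled to inspect eden-space churn; a sketch (the 1g value, heap sizes, jar, and main class are assumptions, not recommendations):

```
# Peer JVM flags: object cache size plus GC logging (JDK 9+ syntax)
# to observe new-gen collection frequency. Values are illustrative.
java -Ddatomic.objectCacheMax=1g \
     -Xmx4g -Xms4g \
     -Xlog:gc* \
     -cp my-app.jar my.app.main
```

This is a config fragment for a hypothetical peer process, not a tested command.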
Is there a recommended library for handling/running schema/data migrations for Datomic? I see the library called Conformity and am curious if there is a better one than that.
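For context, a minimal sketch of how Conformity is typically used: norms are declared as a map of named transactions, and ensure-conforms runs each one exactly once (the norm name and attribute below are hypothetical):

```clojure
(require '[datomic.api :as d]
         '[io.rkn.conformity :as c])

;; Norms map keyed by norm name; :txes is a vector of transactions.
(def norms
  {:my-app/add-user-schema
   {:txes [[{:db/ident       :user/email
             :db/valueType   :db.type/string
             :db/cardinality :db.cardinality/one
             :db/unique      :db.unique/identity}]]}})

;; ensure-conforms transacts each norm once and records that it ran,
;; so it is safe to call on every application startup.
;; conn is assumed to be an existing Datomic connection.
(c/ensure-conforms conn norms)
```

Because this needs a live Datomic connection, it is a sketch of the API shape rather than a runnable example.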
Anybody running into issues with an IonHttpDirectFailedToStart alert in Datomic Cloud?
On the following page: https://docs.datomic.com/on-prem/dev-setup.html, it reads
> To create a connection string, simply replace <DB-NAME> with a database name of your choice, e.g. “hello”
However, it’s not clear where I replace <DB-NAME>. I don’t see that anywhere in the .properties file.
Ok, I see now that that’s answered here: https://docs.datomic.com/on-prem/dev-setup.html#peer-server Just a note: the particular information quoted above seems out of order and caused me some confusion.
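For anyone else landing here: the <DB-NAME> substitution happens in the connection URI your peer (or peer-server) uses, not in the transactor .properties file; a sketch, using the example name from the docs:

```
# Dev-storage connection URI; "hello" is what replaces <DB-NAME>.
# Host and port are the transactor dev-protocol defaults.
datomic:dev://localhost:4334/hello
```

The host and port here assume the default dev transactor settings.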