This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2021-10-21
Channels
- # announcements (13)
- # babashka (29)
- # beginners (52)
- # calva (95)
- # cider (18)
- # clj-commons (7)
- # cljs-dev (42)
- # clojure (121)
- # clojure-australia (1)
- # clojure-dev (39)
- # clojure-europe (36)
- # clojure-france (4)
- # clojure-greece (1)
- # clojure-italy (20)
- # clojure-nl (3)
- # clojure-portugal (1)
- # clojure-uk (7)
- # clojurescript (47)
- # conjure (2)
- # cursive (9)
- # datalevin (5)
- # datascript (8)
- # datomic (66)
- # defnpodcast (2)
- # deps-new (5)
- # fulcro (18)
- # graalvm (21)
- # gratitude (9)
- # jobs (6)
- # jobs-discuss (17)
- # leiningen (3)
- # lsp (80)
- # lumo (1)
- # malli (9)
- # mount (2)
- # off-topic (16)
- # other-languages (8)
- # podcasts (19)
- # reitit (5)
- # remote-jobs (5)
- # shadow-cljs (29)
- # sql (5)
- # tools-deps (13)
- # vim (11)
- # xtdb (19)
Is there any possibility of having explain
functionality, to understand where the query spends most of its time?
I understand that there is no query planner (yet?), but having some explanation would help with manual query optimization.
No I didn't, thanks!
I haven't seen (contains? <input> <bound-variable>)
recommended for matching a list of input values.
On my machine it takes only 10% of the time that a regular list binding does.
Example here:
https://gist.github.com/ivarref/0d3d34eeeffbc4625d6120727368e405
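A minimal sketch of the comparison (the :person/name attribute and the `ids` set are illustrative, not taken from the gist):

```clojure
;; Sketch only — assumes a Datomic peer `db` value and a set of
;; entity ids `ids`; the attribute :person/name is a made-up example.

;; Regular collection binding: each value in the collection is
;; unified against ?e in turn.
(d/q '[:find ?e ?name
       :in $ [?e ...]
       :where [?e :person/name ?name]]
     db ids)

;; contains? predicate: pass the whole set once and filter with it.
;; The gist above reports this running in roughly 10% of the time of
;; the collection binding on the author's machine.
(d/q '[:find ?e ?name
       :in $ ?ids
       :where
       [?e :person/name ?name]
       [(contains? ?ids ?e)]]
     db ids)
```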
What is the Datomic way of getting the latest/newest entity of some type that has a :date
attribute?
Would the max aggregate function help you?
https://docs.datomic.com/cloud/query/query-data-reference.html#aggregates
My suggestion would be something like this:
(d/q '{:find [?e (max ?date)]
       :where [[?e :date ?date]]}
     db)
Note this is a database scan.
Is it best practice for such cases to write a custom aggregate function and use it in the :find clause?
The answer depends on your definition of newest. For example, if you define newest as
(d/q {:args  [db]
      :query '[:find (max ?tx)
               :where [?e :date _ ?tx true]]})
then you can work from there, retrieving all the entities that were touched in the same transaction using tx-range.
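A sketch of that two-step approach, assuming the on-prem peer API and an existing connection `conn` (the linked page below shows the Cloud equivalent; the :date attribute is from the question above):

```clojure
;; Step 1: find the newest transaction that asserted :date.
;; (5-tuple patterns with the `true` added-flag need a history db.)
(def newest-tx
  (ffirst
   (d/q {:args  [(d/history (d/db conn))]
         :query '[:find (max ?tx)
                  :where [?e :date _ ?tx true]]})))

;; Step 2: read everything touched by that transaction from the log.
;; tx-range accepts transaction ids (or t values / dates) as bounds.
(def tx-data
  (-> (d/log conn)
      (d/tx-range newest-tx (inc newest-tx))
      first
      :data))
```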
https://docs.datomic.com/cloud/time/log.html#tx-range
We are seeing occasionally, once or twice a day, that kv-cluster/read-val
takes slightly over 960 000 milliseconds.
It's almost always this "magic number" (960 000 ms, i.e. 16 minutes); normally it takes just a few milliseconds.
The segments are not large.
Anyone have experience with this scenario and/or have tips on how to fix it?
We are running the Datomic on-prem transactor (1.0.6344) in the Azure cloud.
Our backing managed PostgreSQL server has 3000 IOPS available.
I am considering trying to change datomic.readConcurrency
to a lower value.
Edit: and/or does anyone have experience reproducing such a problem?
Is there a simple way to clear the local Datomic cache, so that every query/pull has to read (a lot of) data?
Is the key of the read value consistent? You can look up that key in your Postgres table to see if it’s unusual in some way. I’ve also seen abnormally large fetches caused by gc pauses. Is there gc pressure on this peer? Or maybe this is a driver or Postgres timeout
Hm. What happens if you try to read too much?
After pushing datomic.readConcurrency=2
we are now (currently) seeing a bunch of "late responses"
With stacktraces such as:
pool-9-thread-1 state: WAITING
stacktrace:
[email protected]/jdk.internal.misc.Unsafe.park(Native Method)
[email protected]/java.util.concurrent.locks.LockSupport.park(LockSupport.java:211)
[email protected]/java.util.concurrent.FutureTask.awaitDone(FutureTask.java:447)
[email protected]/java.util.concurrent.FutureTask.get(FutureTask.java:190)
clojure.core$deref_future.invokeStatic(core.clj:2304)
clojure.core$future_call$reify__8477.deref(core.clj:6976)
clojure.core$deref.invokeStatic(core.clj:2324)
clojure.core$deref.invoke(core.clj:2310)
datomic.cluster$uncached_val_lookup$reify__3537.valAt(cluster.clj:192)
clojure.lang.RT.get(RT.java:791)
datomic.cache$double_lookup$reify__3308.valAt(cache.clj:358)
clojure.lang.RT.get(RT.java:791)
datomic.cache$lookup_transformer$reify__3299.valAt(cache.clj:245)
clojure.lang.RT.get(RT.java:791)
datomic.cache$lookup_cache$reify__3302.valAt(cache.clj:290)
clojure.lang.RT.get(RT.java:791)
datomic.common$getx.invokeStatic(common.clj:207)
datomic.common$getx.invoke(common.clj:203)
datomic.index.Index.seek(index.clj:555)
datomic.btset$seek.invokeStatic(btset.clj:399)
datomic.btset$seek.invoke(btset.clj:394)
datomic.db.Db.seekEAVT(db.clj:2344)
datomic.pull$next_a.invokeStatic(pull.clj:301)
datomic.pull$next_a.invoke(pull.clj:295)
datomic.pull$a_iter.invokeStatic(pull.clj:319)
datomic.pull$a_iter.invoke(pull.clj:316)
datomic.pull$pull_STAR_.invokeStatic(pull.clj:406)
datomic.pull$pull_STAR_.invoke(pull.clj:349)
clojure.lang.AFn.applyToHelper(AFn.java:171)
clojure.lang.AFn.applyTo(AFn.java:144)
clojure.core$apply.invokeStatic(core.clj:673)
clojure.core$partial$fn__5863.doInvoke(core.clj:2647)
clojure.lang.RestFn.invoke(RestFn.java:408)
clojure.core$mapv$fn__8468.invoke(core.clj:6914)
clojure.lang.PersistentVector.reduce(PersistentVector.java:343)
clojure.core$reduce.invokeStatic(core.clj:6829)
clojure.core$mapv.invokeStatic(core.clj:6905)
clojure.core$mapv.invoke(core.clj:6905)
datomic.pull$pull.invokeStatic(pull.clj:565)
datomic.pull$pull.invoke(pull.clj:509)
datomic.query$pull_fv$fn__8629$fn__8631.invoke(query.clj:712)
datomic.query$xf_tuple.invokeStatic(query.clj:698)
datomic.query$xf_tuple.invoke(query.clj:690)
datomic.query$q_STAR_$fn__8647.invoke(query.clj:761)
clojure.core$comp$fn__5825.invoke(core.clj:2573)
clojure.core$map$fn__5884.invoke(core.clj:2759)
clojure.lang.LazySeq.sval(LazySeq.java:42)
clojure.lang.LazySeq.seq(LazySeq.java:51)
clojure.lang.RT.seq(RT.java:535)
clojure.core$seq__5419.invokeStatic(core.clj:139)
clojure.core.protocols$seq_reduce.invokeStatic(protocols.clj:24)
clojure.core.protocols$fn__8168.invokeStatic(protocols.clj:75)
clojure.core.protocols$fn__8168.invoke(protocols.clj:75)
clojure.core.protocols$fn__8110$G__8105__8123.invoke(protocols.clj:13)
clojure.core$reduce.invokeStatic(core.clj:6830)
clojure.core$mapv.invokeStatic(core.clj:6905)
clojure.core$mapv.invoke(core.clj:6905)
datomic.query$q_STAR_.invokeStatic(query.clj:770)
datomic.query$q_STAR_.invoke(query.clj:742)
datomic.query$query_STAR_.invokeStatic(query.clj:783)
datomic.query$query_STAR_.invoke(query.clj:776)
datomic.query$query.invokeStatic(query.clj:803)
datomic.query$query.invoke(query.clj:796)
datomic.api$query.invokeStatic(api.clj:48)
datomic.api$query.invoke(api.clj:46)
Thank you @U09R86PA4 for your reply. I will reply more through tomorrow or later this evening.
I am also wondering if the 960 000 value is used somewhere deep down in Datomic...
“An entity is created the first time its id appears in the E position of a Datom.” Is this correct?
“For an entity id to appear in the A, V or TX positions of a datom it must first appear in the E position.” ?
There’s a mechanism for “minting” new entity ids from tempids, and that advances a counter to ensure uniqueness, but that doesn’t create an entity
“Entity ids are just numbers”: How does the database distinguish the contents of the V position in a datom between being a primitive and an entity ID?
Say I e.g. want to record the entity “Bill Clinton” into the database - can this new entity be “established” by adding a datom that just states an attribute value like “Bill” for the attribute :person/name-first, or can/should I establish the entity first without giving it any attribute value (if that’s at all possible)?
(i’m for sure in the process of unthinking the relational model…)
(putting on repeat)
“An entity cannot be put into the database without having an attribute value” Correct?
(question was, again, affected by the notion of a row - I guess..)
“Without any value, there’s no fact/datom.”
I see Datomic's entity model as a concrete application of the idea of a https://plato.stanford.edu/entries/object/#ConsOnto from metaphysics: > In addition to its properties, every object has as a constituent a bare particular (or ‘thin particular’ or ‘substratum’) that instantiates those properties. Bare particulars are ‘bare’ in at least this sense: unlike objects, they have no properties as parts. > > ... they are the subjects of properties or the items to which the properties are attached by instantiation or exemplification.
I like that connection. Have to read about it more. Do you recommend some book on the subject?
I found https://www.routledge.com/Metaphysics-A-Contemporary-Introduction/Loux-Crisp/p/book/9781138639348# an extremely clear and helpful text (I read the 3rd edition, which is available inexpensively as a paperback). It doesn't focus on substance theory in particular but it's the overview that introduced me to those ideas and allowed me to draw that connection. The Stanford Encyclopedia of Philosophy is a great resource in its own right, as well.
Hi, if I run an app on AWS EC2 (Fargate), how much RAM do I need to start a peer connection? 512 MB is not enough.
Caused by: java.lang.IllegalArgumentException: :db.error/not-enough-memory (datomic.objectCacheMax + datomic.memoryIndexMax) exceeds 75% of JVM RAM
@benjamin.schwerdtner You can set the object cache on the peer and on the transactor. Are you seeing the error specifically when you launch a peer?
memory-index-max=256m
object-cache-max=128m
either I made a mistake when setting it, or it still throws even with 1 GB of RAM
The peer builds the memory index from the log before the call to connect returns, and the object cache takes 50% of the remaining heap by default.
OK, so you are setting object-cache-max to 128m on the peer; the memory-index-max is set on the transactor.
And the memory index will rarely rise much above the memory-index-threshold, except during data imports.
If you're setting -Ddatomic.objectCacheMax
to a high value, you'll need to make sure your heap size (`-Xmx`) is large enough that memory-index-max plus object-cache-max fits below 75% of JVM RAM (as the error message indicates).
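As a worked check of that 75% rule against the settings quoted earlier (256m + 128m):

```clojure
;; 75% rule from the error message: the two caches together must fit
;; within 75% of the JVM heap.
(let [memory-index-max 256   ; MB, from the settings above
      object-cache-max 128]  ; MB
  (/ (+ memory-index-max object-cache-max) 0.75))
;; => 512.0, so -Xmx needs to be at least ~512 MB with these settings
```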
Now there are tradeoffs to not having a good sized object cache, and if you have some time I would encourage that you read through our docs on memory and capacity planning: https://docs.datomic.com/on-prem/overview/caching.html https://docs.datomic.com/on-prem/operation/capacity.html#peer-memory
Peers need a copy of the memory index (up to memory-index-max), their own object cache, and application memory. We have an example system at 4GB of RAM on all transactors, and you'll notice that the object cache and memory index max take up <75% of the memory:
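For reference, a transactor properties sketch in the shape of the capacity docs' 4GB example (treat the exact values as illustrative; check the linked pages for current recommendations):

```properties
# Sketch of transactor memory settings for a machine with a 4GB heap.
# 512m + 1g = 1.5g, comfortably under 75% of 4GB.
memory-index-threshold=32m
memory-index-max=512m
object-cache-max=1g
```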
Beginner question: how do I set a system property before loading anything? Is adding a call to System/setProperty at the top of my main file (before the ns form) correct/sufficient?
https://clojure.org/reference/deps_and_cli An example in one deps.edn:
:jvm-opts ["-Dfile.encoding=UTF-8" "-Dconf=dev-config.edn" "-Dclojure.spec.skip-macros=true" "-Xmx500m" "-Xss512k" "-XX:+UseG1GC" "-XX:MaxGCPauseMillis=50"]
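Those flags usually live under an alias; a minimal deps.edn sketch (the :dev alias name and the main namespace are made up for illustration):

```clojure
;; deps.edn — sketch only
{:aliases
 {:dev {:jvm-opts ["-Dconf=dev-config.edn" "-Xmx500m"]}}}
```

Run with `clj -M:dev -m my.app.main`. JVM options set this way are in place before any Clojure code loads, unlike System/setProperty calls in your source, which only run once their namespace is loaded.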