This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
- # adventofcode (1)
- # aleph (2)
- # beginners (28)
- # boot (26)
- # boot-dev (8)
- # cider (10)
- # clara (10)
- # cljs-dev (130)
- # cljs-experience (1)
- # cljsrn (12)
- # clojure (118)
- # clojure-austin (40)
- # clojure-boston (1)
- # clojure-chicago (1)
- # clojure-dusseldorf (1)
- # clojure-estonia (11)
- # clojure-france (1)
- # clojure-greece (3)
- # clojure-italy (19)
- # clojure-nl (1)
- # clojure-russia (1)
- # clojure-spec (19)
- # clojure-uk (34)
- # clojurescript (62)
- # core-logic (7)
- # cursive (11)
- # datomic (35)
- # emacs (15)
- # fulcro (264)
- # jobs (4)
- # leiningen (5)
- # midje (4)
- # off-topic (74)
- # onyx (27)
- # planck (14)
- # protorepl (4)
- # re-frame (37)
- # reagent (62)
- # rum (2)
- # shadow-cljs (171)
- # slack-help (5)
- # spacemacs (6)
- # specter (9)
Is anyone using Datomic for time-series data? I've been using Datomic for simple time-series data with a compound id (event id + attribute + timestamp), but I'm running into some issues with this approach (slow queries, no easy way to remove expired data), so I was wondering if and how other people solve this.
I'm thinking now that Datomic is not a good fit for this purpose, but I hope someone can prove me wrong.
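For context, the compound-id approach described above might look roughly like this. This is a minimal sketch, not the poster's actual schema: the attribute names and the delimiter in the key are hypothetical, and it assumes the Datomic peer library.

```clojure
(require '[datomic.api :as d])

;; Hypothetical schema: a unique string attribute acting as the
;; compound id (event id + attribute + timestamp), plus the value.
(def schema
  [{:db/ident       :ts/compound-id
    :db/valueType   :db.type/string
    :db/cardinality :db.cardinality/one
    :db/unique      :db.unique/identity}
   {:db/ident       :ts/value
    :db/valueType   :db.type/double
    :db/cardinality :db.cardinality/one}])

;; Writing one sample: the compound id is just a delimited string,
;; which is what makes expiring old data awkward -- there is no
;; range index over the timestamp component alone.
(defn sample-tx [event-id attr timestamp value]
  [{:ts/compound-id (str event-id "|" attr "|" timestamp)
    :ts/value       value}])
```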
In my experience Datomic is not a very good fit when you want fast aggregations - we offloaded all of ours to ElasticSearch. Datomic still plays an important role in that: making data synchronization easy
@U0P1MGUSX making sure the ElasticSearch materialized view gets updated correctly and efficiently.
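One common shape for that kind of synchronization is to consume Datomic's transaction report queue and update the materialized view from each transaction. A rough sketch, assuming the peer library; `index-into-es!` is a hypothetical function standing in for whatever pushes documents to ElasticSearch:

```clojure
(require '[datomic.api :as d])

(defn start-sync!
  "Consume the transaction report queue and feed every transaction
  to the indexer. The queue is a java.util.concurrent.BlockingQueue,
  so .take blocks until the next transaction arrives."
  [conn index-into-es!]
  (let [queue (d/tx-report-queue conn)]
    (future
      (loop []
        (let [{:keys [db-after tx-data]} (.take queue)]
          ;; tx-data holds the datoms asserted/retracted in this tx;
          ;; db-after lets the indexer pull complete documents.
          (index-into-es! db-after tx-data))
        (recur)))))
```

Because every change flows through the transactor, the queue gives a single ordered stream of updates, which is what makes keeping the ElasticSearch view consistent comparatively easy.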
Out of curiosity, has anyone already developed a bridge between Git's repository format and Datomic? Would there be any practical reason to do it?
I don't know if Datomic would be appropriate for this usage given the potentially huge size of the data, but I would definitely see an advantage in the gain in expressiveness of the queries we could run on imported Git repositories.
Hi folks, I'm getting this error:
`ActiveMQSecurityException AMQ119031: Unable to validate user` when trying to connect to a transactor running on Heroku. My understanding is that it's a licensing issue, but I'm using a Datomic Pro Starter Edition license, which allows unlimited peers. Does anybody have an idea what else could cause this?
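For reference, the peer's connection URI and the transactor's properties file have to agree on the endpoint the transactor advertises. A minimal transactor properties fragment looks like this (host names are placeholders, and the license key stays whatever the Starter email contained):

```
protocol=dev
host=localhost
port=4334
## license key from the Datomic Pro Starter registration email
license-key=...
## on platforms like Heroku, the publicly reachable address usually
## differs from the address the transactor binds to, so alt-host
## is what peers actually use to connect
alt-host=my-app.example.com
```

A mismatch between what the peer dials and what the transactor advertises is one non-licensing cause of connection failures worth ruling out.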
I have a question about capacity planning. My process compares millions of records, one by one (with some parallelism involved). The left-hand version of an entity is almost always already in the db, unless we encounter a 'new' one. Also, we will encounter each record only once. Is it fair to say that I should look for a way to disable any caching the Datomic peer (and indeed the transactor) might do for this use case?
So actual calls to `transact` are pretty rare, whereas reads from the db using `entity` happen 99% of the time...
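If the access pattern really is read-once, the peer's object cache can at least be bounded rather than disabled. Datomic exposes this through the `datomic.objectCacheMax` system property; a sketch, assuming it is set before the peer library initializes (the URI is a placeholder):

```clojure
;; Bound the peer's object cache to a small size. This must happen
;; before the peer library starts up -- equivalently, pass
;; -Ddatomic.objectCacheMax=32m on the JVM command line.
(System/setProperty "datomic.objectCacheMax" "32m")

(require '[datomic.api :as d])

(def conn (d/connect "datomic:dev://localhost:4334/my-db")) ; placeholder URI
```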
Or... I try to get as much of my database into memory as I can upfront, which leads to another question: how do I determine the size of my db?
"Size of db" inside SQL/Dynamo? Inside memory/running code? In `(d/datoms)` form? In backup form?
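One crude way to get at the `(d/datoms)` form of the question is to count datoms directly. A sketch using the peer API; note that it walks the entire `:eavt` index, so it can be slow on a large database:

```clojure
(require '[datomic.api :as d])

(defn datom-count
  "Count all datoms in the :eavt index -- a rough proxy for database
  size. Realizes the whole index lazily, so expect it to take a while
  on a big db."
  [db]
  (count (seq (d/datoms db :eavt))))
```

The datom count times an average bytes-per-datom estimate gives at least an order-of-magnitude figure for in-memory size, which is usually what capacity planning needs.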
Inside running code... I guess what I'm asking is what the ratio of sizeof(psql dump) to running memory is.
Alas, I'm pretty sure it won't fit. A single file I process is roughly 22 GB. That doesn't fit on my laptop 🙂
Not in mem, at least... But I would consider getting lots of memory for the production stage. It's just hard to know how much that would require.
I'm also interested.
In my case, it would be cool to know how much "peer memory" (50% of the JVM) is enough to fit a database that has a backup with
I'm not sure if any of this has changed, but I believe the docs mention being careful about making the heap too big, so that you don't introduce large GC pauses. The alternative is to use the memcached integration for larger memory caches.
I don't actually see it in the docs; it must have been support conversations. If you're async anyway, longer pauses might not be a big deal.
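The memcached integration mentioned above is configured on both sides: transactors use a `memcached=` entry in their properties file, and peers set a system property before connecting. A sketch of the peer side (host and port are placeholders):

```clojure
;; Point the peer at memcached before it connects; the transactor
;; uses the analogous memcached= entry in its properties file.
;; Host/port here are placeholders.
(System/setProperty "datomic.memcachedServers" "memcached-host:11211")
```

This keeps the hot segment cache off the JVM heap, which is exactly what sidesteps the large-heap GC-pause problem discussed above.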
@U0H4HJB08 thanks! I have indeed run into GC problems... I have decided to go the client-api route so as to be isolated from the peculiarities of the peer library for the high-traffic scenario I have.
@U0H4HJB08 In this use case I will never hit any of the caches, because I am hitting each entity in my database only once.
What is the proper syntax for using pull within a Clojure peer client query with defaults?
`[:find (pull ?e [:job/doc-num (:job/filing-date :default "")]) :in $ :where [?e :job/job-num "01"]]`
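For reference, the documented pull grammar expresses a default with the `default` expression wrapping the attribute, rather than the `(:attr :default v)` list form used in the question. A sketch of the corrected query, keeping the question's attribute names:

```clojure
;; Documented form: (default attr value) inside the pull pattern.
(def query
  '[:find (pull ?e [:job/doc-num (default :job/filing-date "")])
    :in $
    :where [?e :job/job-num "01"]])
```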