This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2020-02-19
Channels
- # announcements (10)
- # aws (3)
- # aws-lambda (1)
- # babashka (24)
- # beginners (57)
- # boot (5)
- # calva (20)
- # chlorine-clover (3)
- # cider (14)
- # clj-kondo (37)
- # clojars (17)
- # clojure (200)
- # clojure-dev (40)
- # clojure-europe (9)
- # clojure-france (7)
- # clojure-gamedev (5)
- # clojure-hungary (4)
- # clojure-italy (8)
- # clojure-losangeles (2)
- # clojure-nl (9)
- # clojure-uk (97)
- # clojurebridge (1)
- # clojured (3)
- # clojuredesign-podcast (23)
- # clojurescript (13)
- # code-reviews (2)
- # component (22)
- # core-typed (7)
- # cursive (64)
- # datascript (12)
- # datomic (60)
- # emacs (6)
- # fulcro (54)
- # graalvm (11)
- # graphql (3)
- # hoplon (25)
- # jobs (1)
- # joker (85)
- # juxt (5)
- # kaocha (10)
- # klipse (8)
- # malli (2)
- # off-topic (36)
- # parinfer (1)
- # pathom (1)
- # re-frame (9)
- # reagent (4)
- # reitit (1)
- # remote-jobs (1)
- # shadow-cljs (24)
- # spacemacs (1)
- # sql (39)
- # tools-deps (10)
- # tree-sitter (18)
- # xtdb (18)
FWIW I think the most complete library that is close to what you're asking for is https://github.com/denistakeda/re-posh
we have a lot of microservices that are currently exposed at various endpoints. I would like to be able to query our system, via Datalog, and get a response that contains data from multiple endpoints. E.g. if there's a books and an authors microservice, I'd be able to write a query on the client-side:
'[:find ?title ?author
:where
[?e :book/title ?title]
[?e :book/author ?author-id]
[?author-id :author/name ?author]]
and the service would query across the books and authors microservices to resolve the facts I want.
If you're going to make something to do this, I'd probably build it on top of pathom, since it compiles indexes.
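For context, a minimal sketch of what that could look like with pathom 2 connect resolvers; fetch-book! and fetch-author! are hypothetical stand-ins for HTTP calls to the two services:
(ns example.graph
  (:require [com.wsscode.pathom.core :as p]
            [com.wsscode.pathom.connect :as pc]))

;; hypothetical service clients; real versions would call the
;; books/authors microservice endpoints over HTTP
(defn fetch-book! [id]
  {:book/title "A Title" :book/author-id 42})

(defn fetch-author! [author-id]
  {:author/name "An Author"})

(pc/defresolver book-resolver [_ {:book/keys [id]}]
  {::pc/input  #{:book/id}
   ::pc/output [:book/title :book/author-id]}
  (fetch-book! id))

(pc/defresolver author-resolver [_ {:book/keys [author-id]}]
  {::pc/input  #{:book/author-id}
   ::pc/output [:author/name]}
  (fetch-author! author-id))

(def parser
  (p/parser
   {::p/env     {::p/reader [p/map-reader pc/reader2 pc/open-ident-reader]}
    ::p/plugins [(pc/connect-plugin {::pc/register [book-resolver author-resolver]})]}))

;; (parser {} [{[:book/id 1] [:book/title :author/name]}])
;; pathom chains book-resolver -> author-resolver to satisfy :author/name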
Datascript is (obviously) datalog implemented in the browser. Another interesting one is built into https://github.com/arachne-framework/factui , which builds an impressive datalog on top of clara rules (in the browser!).
To be clear, what we are talking about now is pretty far away from the initial question of: > has anyone used datascript as a client-side cache for datomic? Nothing wrong with that, per se; it just sounds like using datalog to query across n services is a very different (more general) problem than a client-side datomic cache.
yes, the next step of the idea is that I would like to handle caching of these queries on the client, so that it doesn't have to re-request datoms that have already been fetched from the microservices.
and my thinking was: what if my datalog-query-service responded with all of the datoms that were needed to resolve the query, and then the client transacted those into a local datascript db?
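A minimal sketch of that client side, assuming the service returns datoms as [e a v] triples (remapping server entity ids to client ids is glossed over here):
(ns cache.client
  (:require [datascript.core :as d]))

;; local cache db; the schema only needs entries for refs/cardinality
(def conn (d/create-conn {:book/author {:db/valueType :db.type/ref}}))

(defn cache-datoms!
  "Transacts server-returned [e a v] triples into the local DataScript db."
  [conn datoms]
  (d/transact! conn (mapv (fn [[e a v]] [:db/add e a v]) datoms)))

;; subsequent queries then run locally:
;; (d/q '[:find ?title :where [?e :book/title ?title]] @conn)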
I am looking for experience reports of using datascript to cache queries for a service that already uses datalog (datomic)
The closest thing I'm aware of is https://github.com/replikativ/datahike and the replikativ stack, an alternative attempt at building a JS/JVM distributed data stack. But I would also argue that your books and authors example sounds like you want to build a JS-based datomic peer (one that fetches and caches datoms and does joins locally):
'[:find ?title ?author
:in $book-db $author-db
:where
[$book-db ?e :book/title ?title]
[$book-db ?e :book/author ?author-id]
[$author-db ?a :author/id ?author-id]
[$author-db ?a :author/name ?author]]
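A query like that is run by passing each extra :in source positionally; a tiny usage sketch, assuming the quoted query above is bound to query and the two database values to book-db and author-db:
(d/q query book-db author-db)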
Hi, we have a memory issue (system crashes) because we get many records (over a million) and then sort by time.
{:db/id #db/id[:db.part/db]
:db/ident :lock/activities
:db/valueType :db.type/ref
:db/cardinality :db.cardinality/many
:db/isComponent true
:db.install/_attribute :db.part/db}
{:db/id #db/id[:db.part/db]
:db/ident :activity/author
:db/valueType :db.type/ref
:db/cardinality :db.cardinality/one
:db.install/_attribute :db.part/db}
{:db/id #db/id[:db.part/db]
:db/ident :activity/at
:db/valueType :db.type/instant
:db/cardinality :db.cardinality/one
:db/index true
:db.install/_attribute :db.part/db}
@asier does it crash if you just call (mapv #(get-activity-data %) (:lock/activities lock)) without sorting?
So, the issue isn't sorting then; maybe the issue is eagerly realizing a million+ entities in memory at once?
Do you need all results here? It seems like you simply can’t fit all activities in memory; what are you willing to give up?
If not, I think you need to rearrange your schema a bit so you can make a composite attr sorted how you want
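For what it's worth, a sketch of one such rearrangement using a composite tuple attribute (needs a recent Datomic; it also assumes a hypothetical :activity/lock ref from an activity back to its lock, since the schema above only shows the :lock/activities side):
;; hypothetical composite attr, kept sorted by [lock, timestamp] in AVET
{:db/ident       :activity/lock+at
 :db/valueType   :db.type/tuple
 :db/tupleAttrs  [:activity/lock :activity/at]
 :db/cardinality :db.cardinality/one
 :db/index       true}
;; a lock's activities can then be walked in time order lazily,
;; e.g. with d/index-range or (d/datoms db :avet :activity/lock+at)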
(sort #(compare (:activity/at %2)
                (:activity/at %1))
      (:lock/activities (or (d/entity db [:lock/id lock-id])
                            (d/entity db [:lock/serial-number lock-id]))))
how is this different from your get-activity-data? (Which I now realize you never showed us)
(defn get-activity-data
  "Gets the attributes from entity"
  [activity]
  ;; to-long presumably comes from clj-time.coerce
  {:at (to-long (:activity/at activity))
   :name (-> activity
             :activity/kind
             :activity-kind/name)
   :desc (-> activity
             :activity/kind
             :activity-kind/desc)
   :status (-> activity
               :activity/kind
               :activity-kind/status)
   :image (->> activity
               :activity/kind
               :activity-kind/image
               (str "assets/"))
   ;; some-> already returns nil when there is no author,
   ;; so the original (if ... {:author nil}) merge was unnecessary
   :author (some-> activity :activity/author :user/username)})
Ah, I see: you were building full result sets, and you couldn't fit those in memory. But you can fit just the things you sort by, e.g.
(->> (d/q '[:find ?activity ?at
:in $ ?lock
:where
[?lock :lock/activities ?activity]
[?activity :activity/at ?at]]
db lock-eid)
     (sort-by peek #(compare %2 %1)) ; newest first, matching the original sort
(into []
(comp
(map first)
(take 100))))
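Then only those 100 winning eids need to be realized as entities; a sketch, with top-100 standing for the vector the expression above returns:
(mapv #(get-activity-data (d/entity db %)) top-100)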
Hello! I'm trying to upgrade my system for the first time. I'm running a solo topology in Datomic Cloud. I've selected my root stack and used the formation template https://s3.amazonaws.com/datomic-cloud-1/cft/589-8846/datomic-storage-589-8846.json which I found from https://docs.datomic.com/cloud/releases.html, however the stack update has failed with the status reason Export with name XXX-MountTargetSecurityGroup is already exported by stack XXX-StorageXXX-XXX. I have never updated my system since I created it in August. Any guidance would be appreciated!
Have you looked at https://docs.datomic.com/cloud/operation/upgrading.html and https://docs.datomic.com/cloud/operation/split-stacks.html ?
I was using the upgrading.html page but I was not using a split stack system. Let me try splitting the stacks and see if the problem persists after that. Thanks @joe.lane
Where should I go from here? My system is now down with the EC2 instance terminated
so some part of that delete worked
I resolved the above issue by navigating to the ENIs page and deleting the ENIs manually
@brian.rogers Did you then split the stack and upgrade?
ANN: Datomic CLI Tools 0.10.81 now available: https://forum.datomic.com/t/datomic-cli-0-10-81-now-available/1363 Check out the video overview: https://docs.datomic.com/cloud/livetutorial/clitools.html
does it need some configuration? I'm getting Error building classpath. Could not find artifact com.datomic:tools.ops:jar:0.10.81 in central when trying to run the datomic command
If I add the datomic cloud s3 repo to deps I can get it to work:
:mvn/repos {"datomic-cloud" {:url ""}}