2018-01-11
@jamesvickers19515, I can't speak in a great deal of detail wrt. Datomic, but I will note, as someone responsible for scaling PostgreSQL professionally, that the PG case very much has a comparable set of bottlenecks -- you're still only able to do horizontal scaling for reads; and without time-based queries, PG makes it harder to correlate multiple reads consistently without a bunch of transaction locking to ensure you're holding a reference to a specific point in time.
(By "horizontal scaling for reads" in PG, I mean mechanisms such as a pool of secondaries doing streaming replication or PITR recovery.)
However, a pool of secondaries is only eventually consistent in PG, correct?
@val_waeselynck, any given secondary is internally consistent with some prior point in time that the master was at. That's effectively equivalent to what Datomic gives you, if your PG schema is built to allow point-in-time queries.
@U2V9F98N8 not sure I understand, let me ask specifically: if you write to the master then read from the secondary, are you guaranteed to read your writes?
You're guaranteed a read that accurately reflects a single point in time (in senses that aren't true with some "eventually consistent" databases), but not an up-to-date one.
that said, I've yet to have a write-heavy workload with Datomic, so I'm definitely not the best person to speak directly to the question.
Can I update a Datomic entity using the entity id? Something like {:client/name "Caleb" :client/eid 123412341234}? Or do I need to provide my own id?
it's :db/id
@caleb.macdonaldblack Not sure I understood your question, but you can do both [{:my-ent/id "fjdkslfjlk" :my-ent/name "THIS VALUE CHANGED"}]
(a.k.a. 'upsert') and [{:db/id 324252529624 :my-ent/name "THIS VALUE CHANGED"}]
@val_waeselynck Ah okay cheers. I didn’t know :client/id works too. Thanks!
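For reference, a quick sketch of both forms as transactions (assuming a peer connection `conn`, and that `:client/id` is declared `:db.unique/identity`; the values are placeholders):

```clojure
(require '[datomic.api :as d])

;; Upsert via a unique-identity attribute:
@(d/transact conn [{:client/id "abc-123" :client/name "Caleb"}])

;; Update an existing entity directly by its entity id:
@(d/transact conn [{:db/id 123412341234 :client/name "Caleb"}])
```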
Datomic 0.9.5661 is now available https://forum.datomic.com/t/datomic-0-9-5661-now-available/273
The peer API has pull-many, but the client API doesn't. What is the preferred way to pull-many through the client API if I want to avoid doing a bunch of client/pull requests?
I was looking at the query API; it allows a pull expression to be used, but I suspect that will only work for a single entity...
Basically, I am looking for the Datomic counterpart of select * from books where isbn in (1, 2, 3, 4)
@hans378 you can (d/q '[:find [(pull ?e pattern) ...] :in $ pattern [?e ...]] db '[*] [id1 id2])
but I think that (map (partial d/pull db pattern) [id1 id2]) is faster, since it doesn't need to "parse" the query
Not sure how the cache works on "clients" (I just use peers)... but sure, in-process is faster than HTTP.
because I am running a relatively short-lived batch process where every entity is touched only once
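Putting the two suggestions above together, a sketch of both approaches (shown with the peer API; the connection, database value, and entity ids are placeholders):

```clojure
(require '[datomic.api :as d])

(def db (d/db conn))                         ; assumes an existing peer connection
(def pattern '[*])
(def eids [17592186045418 17592186045419])   ; placeholder entity ids for the books

;; 1. A single query, binding the ids as a collection:
(d/q '[:find [(pull ?e pattern) ...]
       :in $ pattern [?e ...]]
     db pattern eids)

;; 2. Pulling each entity directly, skipping query parsing:
(mapv #(d/pull db pattern %) eids)
```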
Datomic's pull syntax and GraphQL queries seem super related, and all my front-end guys want to speak Graph...does anyone have experience in sort of "converting" graph queries into pull syntax? Or maybe there's an even more elegant solution to the problem. We've found Lacinia, but I can't help but feel that if you're using Datomic, all these schemas are just unnecessary.
I have developed a 'variant' of GraphQL for my application, with essentially the same read semantics as GraphQL. While directly converting GraphQL queries to pull patterns is appealing, as soon as you need nontrivial authorization logic, derived fields, or parameterized fields, Datomic pull is no longer powerful enough.
However, developing a basic GraphQL interpreter (as a recursive function) on top of the Datomic Entity API is rather straightforward, to the point you could maybe do it without Lacinia. But be aware that this basic interpreter may be too naive an algorithm - you may get performance issues as soon as some fields require a network call, and maybe even a Datalog query (Datalog queries have much more overhead than Entity lookups).
My point being that most production-ready GraphQL interpreters need some way of leaving room for optimizations. In the NodeJS world, this is done by making Field Resolvers asynchronous, which also lets you do batch queries with some wizardry; in Lacinia, this is done via sub-query previews. My strategy has been to give up on Field Resolvers (synchronous functions that compute a single field of a single entity) and adopt the more general 'Asynchronous Tabular Resolvers' (asynchronous functions that compute several fields of a selection of entities).
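As a rough illustration of the 'basic recursive interpreter over the Entity API' idea above (not the actual engine being discussed: the query shape, the :book/* attributes, and the function name are made up, and authorization, derived fields, and parameterized fields are ignored):

```clojure
(require '[datomic.api :as d])

(defn resolve-query
  "Resolves a nested query map (attribute -> nil or sub-query) against a
   Datomic entity. A nil sub-query returns the raw value; a map sub-query
   recurses into the referenced entity (or entities, for cardinality-many refs)."
  [entity query]
  (into {}
        (for [[attr subquery] query]
          [attr
           (let [v (get entity attr)]
             (cond
               (nil? subquery) v
               (set? v)        (mapv #(resolve-query % subquery) v)
               :else           (resolve-query v subquery)))])))

;; Usage sketch:
;; (resolve-query (d/entity db book-eid)
;;                {:book/isbn   nil
;;                 :book/author {:author/name nil}})
```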
You should also have a look at the work done for Om Next / Fulcro - the querying semantics are similar.
I may end up open-sourcing the query engine I made one of these days - let me know if you're interested.
@val_waeselynck I also started developing one, but it never got finished (my "main" app will not use it, just future plans for now)
I am working on one right now that we plan to open-source once it's stable
Hopefully soon
It has two components: one is a library that hooks up to the entity API and resolves stuff for you, and the other is a program that you point at your db and it extracts a Lacinia schema definition from it
There's also the umlaut project, which can take a GraphQL schema and produce a Datomic schema from it
@val_waeselynck appreciate the reply, I'd definitely be interested in seeing that code. For this specific project, I may be concluding that Datomic is just not the right tool at present... I really just need a straightforward way to expose my database to a GraphQL client, and unfortunately this doesn't feel "straightforward" enough given my project's extremely tight timeline. We may end up moving over to JS on the backend 😕 I've tried convincing my front-end guys to embrace the Datomic API completely and let go of Graph, but they are hell-bent on keeping their familiar tools... such is life.
@U6GFE9HS7 I haven't used Lacinia yet, but I do think it's still very straightforward with it 🙂 - if that's enough to tip the balance, you should try and sell the workflow aspects of Datomic to the client-side guys
We are doing something very similar to this at our company. No GraphQL though. The frontend subscribes to pull patterns by sending an HTTP request to the backend with the pull pattern and eid, and the backend responds with the result of that pull pattern. Additionally, the frontend is connected via SSE, so the backend pushes updates to the frontend any time datoms matching the pull patterns the client has subscribed to are updated in the DB.
@U083D6HK9 that's exactly what I've been spiking out recently on our latest project. Remove GraphQL as a middle-man, and just speak Datomic over the wire with real-time updates via the datom comparison you described. I'm unfamiliar with SSE though, and have been using WebSockets. SSE seems much more appropriate for this context, I'll have to look into it.
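A naive sketch of that comparison loop (peer API; the subscription shape and :send! callback are made up, and it simply re-runs each subscribed pull after every transaction and diffs the results, rather than matching tx-data against the patterns):

```clojure
(require '[datomic.api :as d])

(defn watch-subscriptions!
  "subs is a collection of {:eid ..., :pattern ..., :send! (fn [result] ...)}.
   Re-runs each subscribed pull after every transaction and pushes the new
   result (e.g. over SSE, via :send!) whenever it has changed."
  [conn subs]
  (let [queue (d/tx-report-queue conn)]
    (future
      (loop [previous {}]
        (let [{:keys [db-after]} (.take queue)
              current (into {}
                            (for [{:keys [eid pattern] :as sub} subs]
                              [sub (d/pull db-after pattern eid)]))]
          (doseq [[sub result] current
                  :when (not= result (get previous sub))]
            ((:send! sub) result))
          (recur current))))))
```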
Yep it’s pretty clean. You’ll likely need permissions, which are easy to implement using Datomic filters.
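For example, a minimal sketch of such a filter (the :client/owner attribute and the ownership rule are hypothetical, and a per-datom predicate like this has a cost; real permission logic would be more involved):

```clojure
(require '[datomic.api :as d])

(defn db-for-user
  "Returns a filtered view of db that only exposes datoms whose entity is
   either unowned or owned by user-eid."
  [db user-eid]
  (d/filter db
            (fn [unfiltered-db datom]
              (let [owner (:client/owner (d/entity unfiltered-db (:e datom)))]
                (or (nil? owner)
                    (= (:db/id owner) user-eid))))))
```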
I’ve checked this URL at least 50 times in the past month https://aws.amazon.com/marketplace/search/results?x=0&y=0&searchTerms=datomic&page=1&ref_=nav_search_box Did AWS give any updates re: the Datomic Cloud submission process? Is there an estimate of how long it will take?
Is the best (only) way to warm the peer cache and index on startup just to issue a bunch of queries like the ones you’ll be running from that peer?
Probably yes; maybe you could make something clever using the Log API to touch some of the last-used index segments... However, you should definitely try the Memcached approach first, as it requires zero effort.
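If you do go the query route, the sketch is simply 'run representative queries at startup' (the attributes here are placeholders for whatever the peer actually serves):

```clojure
(require '[datomic.api :as d])

(defn warm-peer!
  "Touches the index segments the real workload will hit by issuing a few
   representative queries at startup."
  [conn]
  (let [db (d/db conn)]
    (d/q '[:find (count ?e) . :where [?e :client/id]] db)
    (d/q '[:find (count ?e) . :where [?e :client/name]] db)
    :warmed))
```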
Side note about memcached: it's also very useful on dev machines, especially since the same memcached cache can be shared for dev (on your machine) and production (remote) databases
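For reference, pointing a peer at memcached is a one-line configuration (the property name is the documented one; host and port are placeholders):

```clojure
;; Usually passed to the peer JVM as a flag:
;;   -Ddatomic.memcachedServers=my-memcached-host:11211
;; or set programmatically before the peer connects:
(System/setProperty "datomic.memcachedServers" "my-memcached-host:11211")
```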
Are there any potential issues to consider when running the same memcached instance for staging and production environments of the same peer application?
They could be sharing most of their data if the staging data is obtained by restoring the production data; I cannot think of any issues except for the additional load.