This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2017-10-03
Channels
- # aleph (1)
- # beginners (42)
- # boot (34)
- # cider (157)
- # cljs-dev (12)
- # cljsrn (3)
- # clojure (165)
- # clojure-conj (1)
- # clojure-india (1)
- # clojure-italy (6)
- # clojure-russia (20)
- # clojure-spec (27)
- # clojure-uk (173)
- # clojurescript (116)
- # cursive (30)
- # datomic (87)
- # devcards (1)
- # docs (9)
- # emacs (2)
- # ethereum (2)
- # events (2)
- # fulcro (60)
- # graphql (10)
- # hoplon (2)
- # jobs-rus (6)
- # keechma (1)
- # lein-figwheel (9)
- # leiningen (36)
- # luminus (2)
- # mount (3)
- # off-topic (16)
- # om (14)
- # onyx (12)
- # pedestal (19)
- # portkey (107)
- # re-frame (9)
- # reagent (5)
- # ring (26)
- # shadow-cljs (149)
- # spacemacs (3)
- # sql (6)
@danielcompton good point, thanks
then let me rephrase: how do people deal with multiple DCs? Do people use some kind of hot-standby setup, or are there other methods I missed?
I am puzzled why a 'coordinator' component that gives a facade API for the application and works with multiple datomic dbs is discouraged. It would make datomic a viable option for applications that need to scale horizontally.
It hurts to lose an argument to some other product just because they said they scale indefinitely… Is it too much to expect a discussion on this sore point?
Who discourages this? It doesn't come out of the box, but I don't think it's discouraged? It's definitely more complicated though
Datomic is read-scale, not really write-scale. If you have a huge amount of data but low write volume, you can use one transactor with very large storage and multiple dbs. (Although be careful which storage backend you use: a SQL backend, e.g., puts everything for one transactor in one table.) If you have high write volume, you need multiple transactors, sharding, and probably have to live without cross-tx atomic commits.
Looking at Ignite: it's a more complex ops architecture (designed to run in a cluster with dedicated machines), I'm not sure how read-scaling works (I think you need more cluster members?), and it doesn't have time-travel. But it's definitely "bigger", i.e. if you have the resources you can scale it to more storage and higher write volumes than is possible with datomic.
when the in-memory grid databases talk about distributed transactions, they are dealing with similar concerns, right?
It depends on their model, but usually they are trying to write the same data to multiple nodes to ensure quorum and no conflicts
you don't want some to succeed and some to fail, but they're not part of the same storage
keeping Ignite aside for a moment, I am trying to understand: would solving this in Datomic not make datomic available for a lot more use cases?
If my data is split among 3 dbs, say I have one unit of work that needs to be stored across the three.
And all that 'code' is essentially domain-agnostic, right? As long as we come up with a standard way to express the needed metadata, anyone else can use it.
we use lots of dbs in the same application, but we don't split a unit of work across two dbs
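(A minimal sketch of that multi-db pattern, assuming the peer API; the URIs and attributes below are hypothetical. Datalog accepts several db values as separate data sources, so a read can join across dbs even though each unit of work is transacted against a single db.)

```clojure
(require '[datomic.api :as d])

;; hypothetical URIs: two independent dbs in one application
(def users-conn  (d/connect "datomic:dev://localhost:4334/users"))
(def orders-conn (d/connect "datomic:dev://localhost:4334/orders"))

;; a read-side join across both dbs; each db remains its own
;; transactional unit, so no cross-db atomic write is implied
(d/q '[:find ?email ?total
       :in $users $orders
       :where [$users  ?u :user/email  ?email]
              [$users  ?u :user/id     ?uid]
              [$orders ?o :order/user  ?uid]
              [$orders ?o :order/total ?total]]
     (d/db users-conn)
     (d/db orders-conn))
```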
and… it is exactly the "does not look like it is worth it" aspect that is puzzling to me. I understand it is complex, but it looks like the in-memory grid guys have done it… Maybe my understanding of what exactly they offer is poor.
ok, the trade-offs. I wanted to understand the trade-offs and the right questions to ask. Your explanation is helping me. Thank you very much; I appreciate all the help you provide to datomic users.
For us, we don't have big-data workloads, and immutability and history and easy administration are all very important
but it's not the only database we use. e.g. datomic is bad at fulltext search, so we pipe datomic data into elasticsearch
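(A rough sketch of one way to do that piping with the peer API's tx-report-queue; `index-into-es!` is a hypothetical stand-in for an Elasticsearch client call, not a real library function.)

```clojure
(require '[datomic.api :as d])

(defn pipe-to-elasticsearch!
  "Consume the connection's transaction report queue and forward each
  transaction's datoms to an external indexer (sketch only)."
  [conn index-into-es!]
  (let [queue (d/tx-report-queue conn)] ; blocking queue of tx reports
    (future
      (loop []
        ;; each report carries :db-before, :db-after and :tx-data
        (let [{:keys [db-after tx-data]} (.take queue)]
          (index-into-es! db-after tx-data))
        (recur)))))
```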
Here (http://docs.datomic.com/transactions.html#monitoring-transactions) it’s stated that the `:tx-data` key on a transaction result can be used as a db value for a query. When trying that trick with the client API, using the exact code example from the above link, I get `Exception Not supported: class datomic.client.impl.types.Datom com.cognitect.transit.impl.AbstractEmitter.marshal (AbstractEmitter.java:176)`. Has anyone successfully pumped the transaction result back into a query using the client API?
OK. But I was assuming the point of using tx-data is that it would obviate the need for a trip to the server.
sure, you may want to examine the specific datoms created for something. maybe to get the tx-inst or save off the txn id locally for something
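(For reference, with the peer API the documented trick does work: `:tx-data` is a collection of datoms and can be passed to a query as a data source. A sketch, with the `:db/doc` transaction purely as an illustration:)

```clojure
(require '[datomic.api :as d])

(let [{:keys [db-after tx-data]} @(d/transact conn [{:db/doc "demo entity"}])]
  ;; query only the datoms produced by this transaction,
  ;; with no further round trip to storage
  (d/q '[:find ?e ?v
         :in $ ?attr
         :where [?e ?attr ?v]]
       tx-data
       (d/entid db-after :db/doc)))
```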
I am trying to find the reverse links to an entity using the `datoms` fn via the `:vaet` index. I am not sure how to navigate the results: what does it return, and how do I handle those results?
It returns a seqable (i.e. you can call `seq` on it, or use something that does so automatically) that returns a lazy seq of datoms from the index you specified, with components matching what you specified in the args
Individual datoms have fields that can be destructured by position `[[e a v tx added?]]` or by key `{:keys [e a v tx added?]}`
```clojure
(->> (d/datoms db :vaet :db.cardinality/one)
     (take 5))
=>
(#datom[8 41 35 13194139533312 true]
 #datom[9 41 35 13194139533312 true]
 #datom[15 41 35 13194139533366 true]
 #datom[17 41 35 13194139533366 true]
 #datom[18 41 35 13194139533366 true])
```
I feel like accessing datom fields should be better documented. All I could find was this note:
> The datoms implement the [datomic.Datom](http://docs.datomic.com/javadoc/datomic/Datom.html) interface. In Clojure, they act as both sequential and associative collections, and can be destructured accordingly.
```clojure
(->> (d/datoms db :vaet :db.cardinality/one)
     (take 5)
     (map (fn [{:keys [e a v tx added]}]
            [e a v tx added])))
```
```clojure
(->> (d/datoms db :vaet :db.cardinality/one)
     (take 5)
     (map (fn [[e a v tx added]]
            [e a v tx added])))
```
```clojure
(->> (d/datoms db :vaet :db.cardinality/one)
     (take 5)
     (map (fn [[e a v tx added]]
            [e (d/ident db a) v tx added])))
```
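(Circling back to the original question, a small sketch of collecting the reverse links to one specific entity; `eid` is assumed to be an already-resolved entity id.)

```clojure
(defn reverse-links
  "Every entity that references `eid` through some :db.type/ref
  attribute, read straight from the :vaet index (sketch)."
  [db eid]
  (for [[e a _v tx] (d/datoms db :vaet eid)]
    {:referrer e
     :via      (d/ident db a) ; resolve attribute id to its keyword ident
     :tx       tx}))
```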