#datomic
2017-10-03
jwkoelewijn05:10:28

then let me rephrase: how do people deal with multiple DCs? Do people use some kind of hot-standby setup, or are there other methods I missed?

wistb13:10:41

I am puzzled why a 'coordinator' component that gives a facade API for the application and works with multiple Datomic dbs is discouraged. It would make Datomic a viable option for applications that need to scale horizontally.

wistb13:10:21

It may have to compromise on throughput which may be agreeable to some applications.

wistb13:10:41

It hurts to lose an argument to some other product just because they said they scale indefinitely... Is it too much to expect a discussion on this sore point?

favila15:10:47

Who discourages this? It doesn't come out of the box, but I don't think it's discouraged? It's definitely more complicated, though

favila15:10:15

ah I meant that to be a thread

favila15:10:51

Datomic is read-scale, not really write-scale. If you have a huge amount of data but low write volume, you can use one transactor with a very large storage and multiple dbs. (Although be careful what storage backend you use. A SQL backend, for example, puts everything for one transactor in one table.) If you have high write volume, you need multiple transactors, sharding, and probably have to live without cross-tx atomic commits.
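The sharding approach described here can be sketched concretely. This is a minimal, illustrative shard router in plain Clojure (all names here are hypothetical, not a Datomic API): each unit of work is routed to one of several databases by hashing a natural key, so each transactor only sees its own slice of the write volume.

```clojure
;; Illustrative sharding sketch: route each unit of work to one of
;; several Datomic databases by hashing its natural key. The shard
;; choice is pure Clojure; the database names are hypothetical.
(def shard-names ["inventory-0" "inventory-1" "inventory-2"])

(defn shard-for
  "Pick a shard index deterministically from an entity's natural key."
  [natural-key]
  (mod (hash natural-key) (count shard-names)))

(defn shard-db-name
  "Database name that owns the entity with this natural key."
  [natural-key]
  (nth shard-names (shard-for natural-key)))

;; The same key always routes to the same database, so an entity never
;; straddles shards -- which is why cross-shard atomic commits are lost.
(shard-db-name "sku-12345")
```

The trade-off favila mentions falls directly out of this: any unit of work touching keys in two different shards becomes two independent transactions.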

favila15:10:15

Both of these are just more awkward.

favila15:10:05

Looking at Ignite: it's a more complex ops architecture (designed to run in a cluster with dedicated machines), I'm not sure how read-scaling works (I think you need more cluster members?), and it doesn't have time-travel. But it's definitely "bigger", i.e. if you have the resources you can scale it to more storage and higher write volumes than is possible with Datomic

wistb15:10:08

when the in-memory grid databases talk about distributed transactions, they are dealing with similar concerns, right?

favila15:10:29

If that's really what you need, then that's probably a better fit

favila15:10:50

I don't think so re: distributed transactions

favila15:10:49

It depends on their model, but usually they are trying to write the same data to multiple nodes to ensure quorum and no conflicts

favila15:10:02

it's an artifact of clustering

favila15:10:27

when I talk distributed transactions, I mean more XD-like

favila15:10:45

you don't want some to succeed and some to fail, but they're not part of the same storage

wistb15:10:38

keeping Ignite aside for a moment, I am trying to understand: if this were solved in Datomic, wouldn't it make Datomic available for a lot more use cases?

favila15:10:53

solving what? high write volume?

wistb15:10:30

suppose I am OK with low write volume, but I need to handle a lot of data.

favila15:10:46

If you need to handle a lot of data, make more dbs

favila15:10:59

the same strategy that e.g. elasticsearch would take

wistb15:10:18

Say my data is split among 3 dbs and I have one unit of work that needs to be stored across the three.

wistb15:10:52

will that be one tx over 3 dbs (so that I get ACID over 3 dbs)?

favila15:10:06

Yeah, that doesn't seem to be a problem worth solving

favila15:10:28

you could do it with your own single-writer facade

favila15:10:36

no one else could write

favila15:10:50

and you could attach metadata to each tx to correlate them together

favila15:10:09

and you could have well-defined rollback behavior if one fails
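The facade favila sketches in the last few messages could look something like this. Everything below is hypothetical: `transact-fn` and `rollback-fn` stand in for per-database Datomic calls the application would supply, and `:facade/correlation` is an invented attribute name. The point is only the shape of the coordinate-then-compensate logic, not a real implementation.

```clojure
;; Hypothetical single-writer facade over several dbs. Each sub-tx gets
;; the same correlation id attached as tx metadata; if any db's transact
;; fails, previously committed dbs get a compensating rollback call.
(defn transact-all!
  "tx-by-db is a map of db-name -> tx-data. transact-fn and rollback-fn
  are supplied by the application. Returns {:ok? bool :correlation id}."
  [transact-fn rollback-fn tx-by-db]
  (let [correlation (java.util.UUID/randomUUID)]
    (loop [pending (seq tx-by-db)
           done []]
      (if-let [[db-name tx-data] (first pending)]
        (if (try
              (transact-fn db-name
                           ;; attach correlation metadata to the tx entity
                           (conj (vec tx-data)
                                 {:db/id "datomic.tx"
                                  :facade/correlation correlation}))
              true
              (catch Exception _ false))
          (recur (rest pending) (conj done db-name))
          ;; one db failed: compensate the ones that already committed
          (do (doseq [d done] (rollback-fn d correlation))
              {:ok? false :correlation correlation}))
        {:ok? true :correlation correlation}))))
```

Because Datomic data is immutable, the "rollback" here would itself be a compensating transaction (retractions), and as favila notes below, the edge cases around that (and around writers bypassing the facade) are where the real complexity lives.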

favila15:10:23

I mean, that doesn't look like it's worth it to me?

wistb15:10:07

And all that 'code' is essentially domain-agnostic, right? As long as we come up with a standard way to express the needed metadata, anyone else can use it.

favila15:10:36

there are lots of edge cases around rollbacks

favila15:10:09

and what if the dbs got out of sync because someone bypassed the writer facade?

favila15:10:18

I mean, yeah, I guess you could solve all those problems

favila15:10:49

we use lots of dbs in the same application, but we don't split a unit of work across two dbs

favila15:10:10

we never need a guarantee that two txs fail or succeed together

wistb15:10:50

and... it is exactly that "doesn't look like it's worth it" aspect that is puzzling to me. I understand it is complex, but it looks like the in-memory grid guys have done it... Maybe my understanding of what they exactly offer is poor.

favila15:10:09

they have done it by making different tradeoffs

favila15:10:24

they have clustered architectures, custom storages, mutable data

favila15:10:48

datomic has single writer, storage agnostic, immutable data

wistb15:10:03

OK, the trade-offs. I wanted to understand the trade-offs and the right questions to ask. Your explanation is helping me. Thank you very much. I appreciate all the help you provide to Datomic users.

favila15:10:03

glad to help. I'm not going to claim datomic is right for every problem

favila15:10:42

For us, we don't have big-data workloads, and immutability and history and easy administration are all very important

favila15:10:46

the low-friction, low-impedance api is good too (no complex sql orms)

favila15:10:26

but it's not the only database we use. e.g. datomic is bad at fulltext search, so we pipe datomic data into elasticsearch

wistb15:10:22

got it. thank you.

cch117:10:29

Here (http://docs.datomic.com/transactions.html#monitoring-transactions) it’s stated that the :tx-data key on a transaction result can be used as a db value for a query. When trying that trick with the client API, I get `Exception Not supported: class datomic.client.impl.types.Datom com.cognitect.transit.impl.AbstractEmitter.marshal (AbstractEmitter.java:176)` using the exact code example from the above link. Has anyone successfully pumped the transaction result back into a query using the client API?

marshall18:10:02

@cch1 You can use the :db-after as a db value for query, not the :tx-data

cch118:10:36

OK. But I was assuming the point of using tx-data is that it would obviate the need for a trip to the server.

cch118:10:59

Is there a use case for :tx-data in the client API?

marshall18:10:01

all queries go to the peer server

marshall18:10:05

if you’re using client

marshall18:10:22

doesn’t matter what the db is

marshall18:10:51

sure, you may want to examine the specific datoms created, maybe to get the tx-inst or save off the txn id locally for something
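That local inspection of `:tx-data` can be sketched like this. The datoms below are plain maps standing in for real client-API datoms (the entity/attribute ids are made up for illustration); the shape of the lookup is the same either way, and no round trip to the peer server is needed.

```clojure
;; Sketch: pulling the transaction entity id out of a transact result's
;; :tx-data without a query. tx-result stands in for the map returned by
;; a transact call; the datoms here are illustrative plain maps.
(def tx-result
  {:tx-data [{:e 13194139534312 :a 50 :v #inst "2017-10-03"
              :tx 13194139534312 :added true}
             {:e 17592186045418 :a 63 :v "new name"
              :tx 13194139534312 :added true}]})

(defn tx-id
  "Every datom in :tx-data carries the same :tx, so take it from the first."
  [tx-result]
  (-> tx-result :tx-data first :tx))

(tx-id tx-result) ;; => 13194139534312
```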

len19:10:40

I am trying to find the reverse links to an entity using the datoms fn via the :vaet index. I am not sure how to navigate the results: what does it return, and how do I handle those results?

favila19:10:14

It returns something seqable (i.e. you can call seq on it, or use something that does so automatically) that yields a lazy seq of datoms from the index you specified, with components matching what you specified in the args

favila19:10:37

Individual datoms have fields that can be destructured by position [[e a v tx added?]] or by key {:keys [e a v tx added?]}

len19:10:29

thanks looking into that now

favila19:10:43

typical use would be like this:

favila19:10:22

(->> (d/datoms db :vaet :db.cardinality/one)
     (take 5))
=>
(#datom[8 41 35 13194139533312 true]
 #datom[9 41 35 13194139533312 true]
 #datom[15 41 35 13194139533366 true]
 #datom[17 41 35 13194139533366 true]
 #datom[18 41 35 13194139533366 true])

len19:10:39

yes, that's what I am getting

favila19:10:47

This is all references TO the :db.cardinality/one entity

favila19:10:53

(which is id 35)

favila19:10:01

so note the "v" slot on all results is 35

len19:10:28

How do I decode the vals in the #datom vector?

favila19:10:53

you can get by position or key

len19:10:48

I see the attribute name is returned by id; do I have to look that up?

favila19:10:08

yes. this is the raw index, so there are no names or other niceties

len19:10:18

aah right

favila19:10:22

d/ident is your friend here

len19:10:26

makes sense

len19:10:54

thanks !

favila19:10:16

I feel like accessing datom fields should be better documented. All I could find was this note

favila19:10:37

> The datoms implement the [datomic.Datom](http://docs.datomic.com/javadoc/datomic/Datom.html) interface. In Clojure, they act as both sequential and associative collections, and can be destructured accordingly.

len19:10:34

right thanks

favila19:10:27

(->> (d/datoms db :vaet :db.cardinality/one)
       (take 5)
       (map (fn [{:keys [e a v tx added]}]
              [e a v tx added])))

favila19:10:41

(->> (d/datoms db :vaet :db.cardinality/one)
       (take 5)
       (map (fn [[e a v tx added]]
              [e a v tx added])))

len19:10:59

Was just converging on that 🙂

len19:10:24

(->> (d/datoms db :vaet :db.cardinality/one)
       (take 5)
       (map (fn [[e a v tx added]]
              [e (d/ident db a) v tx added])))

len20:10:42

@favila thanks, that works. Does the Datomic console synthesize the reverse keyword names like :entity/_link? I'm just not sure where they come from.

favila20:10:47

I don't use the console so I'm not sure

favila20:10:02

if their "datoms" feature has names, it's because it's calling ident

len20:10:18

How do I get a list of all the attributes in the system ?

favila20:10:31

the values of the :db.install/attribute attribute on the :db.part/db entity
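Following that pointer, the usual idiom is a query over `:db.install/attribute` on the `:db.part/db` entity. The query below is a sketch of that idiom; `db` would be a real database value from a connection, which is assumed here rather than shown.

```clojure
;; Query for every installed attribute's ident: attributes are the
;; values of :db.install/attribute on the :db.part/db entity.
(def all-attribute-idents-q
  '[:find [?ident ...]
    :where
    [:db.part/db :db.install/attribute ?a]
    [?a :db/ident ?ident]])

;; Usage (assuming a connection):
;; (d/q all-attribute-idents-q (d/db conn))
```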