#datomic
2017-01-10
eoliphant01:01:25

anyone here done much with CQRS/ES in Datomic? I’m still getting my head around many of Datomic’s ‘weird’ but good features (like I’ve finally gotten out of subconsciously ‘reducing server round trips’ lol). I’ve played around with using it to store projections, but the ‘time sense’ makes me think it might make sense to store the events themselves in Datomic. I’ve seen a couple of bits around the net: Bobby Calderwood’s talk about Datomic, Kafka, etc., and then there’s that Yuppiechef deal (it’s a little out of date but seems to have some cool ideas) that uses Datomic, Kafka and Onyx. Just wondering about any ‘in the trenches’ experience folks might have had

robert-stuttaford05:01:34

Datomic is a very natural fit for storing both events and their aggregations

val_waeselynck09:01:00

@robert-stuttaford do you store derived data in Datomic? How does it work?

robert-stuttaford09:01:36

as noHistory attrs on (what would, in SQL terms, be) join entities dedicated to keeping stats

robert-stuttaford09:01:56

we have an Onyx system which watches the tx log and does the calc and writes the results back to Datomic

robert-stuttaford09:01:19

allowing us to query across both events and aggregates as needed at read-time
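A minimal sketch of that tx-log-watching pattern (without Onyx), assuming a hypothetical recalc-stats that turns tx data into new assertions:

```clojure
(require '[datomic.api :as d])

;; Poll d/tx-range from the last basis-t we processed, derive
;; aggregates from each transaction's datoms, and transact the
;; results back. `recalc-stats` is hypothetical.
(defn watch-tx-log [conn start-t recalc-stats]
  (doseq [{:keys [t data]} (d/tx-range (d/log conn) start-t nil)]
    (when-let [tx-data (recalc-stats data)]
      @(d/transact conn tx-data))))
```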

robert-stuttaford09:01:02

a pragmatic approach: generate events, write queries over those events to satisfy views, then improve the perf of those queries by pre-calculating and storing intermediate results - whether as actual derived values, or as short-cut collections that embody multiple ref jumps and maybe entity status as well - e.g. a user group caching a collection of all active documents generated by all of its users, allowing a join from group straight to active documents
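A hedged sketch of what such a short-cut attribute might look like as schema - the :group/active-documents ident is hypothetical; :db/noHistory keeps the cache from accreting history as it gets rewritten:

```clojure
;; Hypothetical derived attribute: a group caches refs to all active
;; documents generated by its users. :db/noHistory means rewrites of
;; this cache don't pile up in the history index.
[{:db/ident       :group/active-documents
  :db/valueType   :db.type/ref
  :db/cardinality :db.cardinality/many
  :db/noHistory   true
  :db/doc         "Derived: active documents across the group's users."}]
```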

val_waeselynck09:01:14

@robert-stuttaford do you have a nice way of handling cache misses in such cases ?

robert-stuttaford10:01:58

not at the moment, unfortunately

dominicm10:01:25

@marshall Trying to lazily consume an extremely large sequence. We have to generate JSON/CSV reports which are essentially "give me 90% of the entities in the database", and we're seeing memory consumption go very high when doing so. The idea is to consume the sequence lazily so that we don't blow through all our memory, then do streamed encoding of JSON and pipe it out over HTTP immediately so there's no large string in memory either.

robert-stuttaford11:01:53

that’s a perfect use case for d/datoms @dominicm

dominicm11:01:47

@robert-stuttaford I thought so. Wanted to make sure I didn't end up wasting the advantage by doing d/datoms and then filtering in a slow way.

robert-stuttaford11:01:52

i’ve found transducers + sequence play very nicely with d/datoms
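A sketch of that combination, assuming a hypothetical keep? predicate and pulling each entity as it is consumed:

```clojure
(require '[datomic.api :as d])

;; Lazily walk the AEVT index for one attribute. sequence + a
;; transducer realizes datoms only as the consumer (e.g. a streaming
;; JSON encoder) pulls them, so the full result set never sits in
;; memory at once.
(defn lazy-entities [db attr keep?]
  (sequence (comp (map :e)
                  (map #(d/pull db '[*] %))
                  (filter keep?))
            (d/datoms db :aevt attr)))
```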

robert-stuttaford11:01:04

i guess it depends how complex your filtering is?

eoliphant13:01:16

@robert-stuttaford thanks for the input. So is your approach similar to that Yuppiechef POC, though you seem to be using Datomic as the event store as well? Any issues with scale?

eoliphant13:01:38

on the event store aspect that is

robert-stuttaford13:01:02

yes, we use Datomic for events too. it’s similar in principle to YC’s, but simpler because it’s really just Datomic and some Clojure apps, some of which are web facing, and some not

robert-stuttaford13:01:19

well, you’re really scaling the storage. we use DynamoDB

robert-stuttaford13:01:25

which is all-you-can-eat

eoliphant13:01:39

good point, and that’s our target prod storage

robert-stuttaford13:01:45

there is a comfort boundary at about 10bn datoms, but we’re so far away from that right now

eoliphant13:01:59

ok, will keep that in mind. What we’re working on is by no means ‘web scale’ lol. It’s more a new look at some traditional data-processing type stuff; the biggest challenge is probably getting the years of legacy data in

robert-stuttaford13:01:25

@stuarthalloway did a recent talk on writing ETL stuff with Datomic

eoliphant13:01:59

yeah I actually think that’s in my ‘watch later’ 🙂

eoliphant13:01:17

will check it out today

dominicm14:01:12

@robert-stuttaford I think depending on the filtering, it might be a job for reducers, as reducers can parallelize. Then in a final reduce I can write to a stream, I think. This is mostly me just planning ahead, so I'm not deep in the code yet

karol.adamiec15:01:13

when making a query with a lookup ref, if Datomic can not find the entity it throws quite nasty errors. any way to handle that? a regular query with :where just returns no results, but if done through a lookup ref it bombs…
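One workaround (a sketch, not the only way): resolve the lookup ref with d/entid first, which returns nil when no entity matches, rather than handing the lookup ref straight to the query:

```clojure
(require '[datomic.api :as d])

;; Resolve the lookup ref up front; d/entid returns nil for a
;; missing entity instead of throwing inside the query.
(defn q-by-lookup-ref [db query lookup-ref]
  (when-let [eid (d/entid db lookup-ref)]
    (d/q query db eid)))

;; usage: returns nil instead of blowing up when the email is unknown
(q-by-lookup-ref db
                 '[:find ?name . :in $ ?e :where [?e :user/name ?name]]
                 [:user/email "nobody@example.com"])
```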

isaac16:01:48

Why doesn’t Datomic provide reverse index access, e.g. d/reverse-datoms? (first (reverse (d/datoms ...))) is much slower than (first (d/datoms …))

rauh16:01:27

@isaac I thought seek-datoms with a nil could do that? But I haven't tried it myself

isaac16:01:02

I read the doc http://docs.datomic.com/clojure/#datomic.api/seek-datoms; it seems it doesn't have this feature?

rauh16:01:30

@isaac What are your datoms parameters?

isaac16:01:04

(first (datoms :avet :some/attr)) => min-value

rauh16:01:57

@isaac Have you tried whether Datomic is smart enough about last?

rauh16:01:03

What type is your attr?

isaac16:01:45

reverse is implemented via reduce1, and both last & reduce1 are implemented via recur

lellis16:01:01

Hello, I'm having problems with some characters in fulltext search:

(d/q '[:find ?e ?name
       :in $ ?search
       :where [?e :user/name]
       [(fulltext $ :user/name ?search) [[?e ?name]]]]
     db search)
When the search contains a ! (and some other chars, in some positions), an exception occurs. Is this foreseen (I did not find anything in the docs)? Is there a blacklist of characters?

rauh16:01:38

@isaac Why not use the max function? I'd expect that to be optimized

isaac16:01:46

reverse & last need to traverse the entire seq

rauh16:01:51

Yeah you're right

isaac16:01:41

I tried; max is even slower than last on datoms

marshall19:01:38

I believe there is a feature request for reverse iteration of indexes via datoms API

marshall19:01:54

I’d suggest you log in to the feedback portal and vote for it if you’re interested in that functionality

jaret19:01:09

You can get to the feedback portal from your my-datomic account page. It's in the top right: "suggest feature"

jaret19:01:18

could you try escaping the characters + - && || ! ( ) { } [ ] ^ " ~ * ? : \ / with a \, e.g. \(1\+1\)\:2?
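A sketch of an escaping helper along those lines (escaping each special character individually, which also covers the && and || operators):

```clojure
(require '[clojure.string :as str])

;; Prefix every Lucene special character with a backslash before the
;; string reaches fulltext. Escaping & and | one at a time also
;; covers && and ||.
(defn escape-fulltext [s]
  (str/replace s #"([+\-!(){}\[\]^\"~*?:\\/&|])" "\\\\$1"))
```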

souenzzo20:01:39

@jaret, that's exactly what the problem is! Is there some "right" way to do that? Do I need to do it in all my queries?

favila23:01:44

I remember there used to be a problem with tempid issuance where you could potentially get an id in the transactor which collided with some tempid the peer created and sent in the tx

favila23:01:52

was this ever resolved or a workaround found?