This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2017-01-10
Channels
- # beginners (97)
- # boot (77)
- # cider (7)
- # cljs-dev (47)
- # cljsrn (3)
- # clojure (125)
- # clojure-austin (5)
- # clojure-dusseldorf (1)
- # clojure-italy (4)
- # clojure-russia (91)
- # clojure-spec (80)
- # clojure-uk (54)
- # clojurescript (92)
- # core-async (6)
- # cursive (17)
- # datomic (56)
- # hoplon (7)
- # immutant (3)
- # liberator (3)
- # luminus (4)
- # off-topic (26)
- # om (41)
- # om-next (11)
- # pedestal (3)
- # perun (3)
- # protorepl (25)
- # re-frame (32)
- # reagent (33)
- # ring (46)
- # rum (3)
- # spacemacs (5)
- # specter (82)
- # test-check (16)
- # untangled (8)
- # yada (26)
anyone here done much with CQRS/ES in Datomic? I’m still getting my head around many of Datomic’s ‘weird’ but good features (like I’ve finally gotten out of subconsciously ‘reducing server round trips’ lol). I’ve played around with using it to store projections, but the ‘time sense’ makes me think it might make sense to store the events themselves in Datomic. I’ve seen a couple of bits around the net: Bobby Calderwood’s talk about Datomic, Kafka, etc., then there’s that Yuppiechef deal (it’s a little out of date but seems to have some cool ideas) that uses Datomic, Kafka and Onyx. Just wondering about any ‘in the trenches’ experience folks might have had
Datomic is a very natural fit for storing both events and their aggregations
@robert-stuttaford do you store derived data in Datomic? How does it work?
as noHistory attrs on (what would, in sql terms, be) join entities dedicated to keeping stats
we have an Onyx system which watches the tx log and does the calc and writes the results back to Datomic
allowing us to query across both events and aggregates as needed at read-time
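The tx-log-watching setup described above can be sketched in plain Clojure with `d/tx-report-queue` (the chat uses Onyx for this; here `aggregate-tx!` is a hypothetical stand-in for the calculation-and-write-back step):

```clojure
(require '[datomic.api :as d])

(defn watch-tx-log!
  "Consumes the transaction report queue on a background thread and
  hands each report to aggregate-tx!, which is expected to compute
  stats from the new datoms and transact the results back."
  [conn aggregate-tx!]
  (let [queue (d/tx-report-queue conn)]
    (future
      (loop []
        ;; each report is a map with :db-before, :db-after and :tx-data
        (let [{:keys [db-after tx-data]} (.take queue)]
          (aggregate-tx! conn db-after tx-data))
        (recur)))))
```

This needs a live peer connection, so it is a sketch rather than something runnable standalone; a production system would also want error handling and a way to stop the loop.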
@robert-stuttaford interesting!
a pragmatic approach, taken by generating events, writing queries over those events to satisfy views, and then improving the perf of those queries by pre-calculating and storing intermediate results - whether as actual derived values, or as short-cut collections that embody multiple ref jumps and maybe entity status as well - e.g. a user group caching a collection of all active documents generated by all of its users, allowing a join from group straight to active documents
@robert-stuttaford do you have a nice way of handling cache misses in such cases ?
not at the moment, unfortunately
@marshall Trying to lazily consume an extremely large sequence. We have to generate JSON/CSV reports which is essentially "give me 90% of the entities in the database" and seeing memory consumption go very high when doing so. The idea is to consume the sequence lazily so that we don't blow through all our memory. Then do streamed encoding of JSON and pipe it out over HTTP immediately so there's no large string in memory either.
that’s a perfect use case for d/datoms @dominicm
@robert-stuttaford I thought so. Wanted to make sure I didn't end up wasting the advantage by doing d/datoms
and then filtering in a slow way.
i’ve found transducers + sequence
play very nicely with d/datoms
i guess it depends how complex your filtering is?
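The transducers-plus-`sequence`-over-`d/datoms` approach might look like this sketch (attribute name and filter predicate are hypothetical; `d/datoms` walks an index lazily, and `sequence` realizes only a chunk at a time, so the full result set never sits in memory):

```clojure
(require '[datomic.api :as d])

(defn report-rows
  "Lazily streams rows for every entity asserting :report/value,
  using the :aevt index so one attribute's datoms come back in
  entity order."
  [db]
  (sequence
   (comp (map (fn [datom] {:entity (:e datom) :value (:v datom)}))
         (filter (fn [{:keys [value]}] (some? value))))
   (d/datoms db :aevt :report/value)))
```

Pairing this with a streaming JSON/CSV encoder that consumes the sequence row by row keeps the HTTP response incremental as well.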
@robert-stuttaford thanks for the input. So is your approach similar to that yuppiechef POC? though you seem to be using datomic as the event store as well. Any issues with scale?
yes, we use Datomic for events too. it’s similar in principle to YC’s, but simpler because it’s really just Datomic and some Clojure apps, some of which are web facing, and some not
well, you’re really scaling the storage. we use DynamoDB
which is all-you-can-eat
there is a comfort boundary at about 10bn datoms, but we’re so far away from that right now
ok, will keep that in mind, what we’re working on is by no means ‘web scale’ lol. It’s more a new look on some traditional data processing type stuff, biggest challenge is probably getting the years of legacy data in
@stuarthalloway did a recent talk on writing ETL stuff with Datomic
@robert-stuttaford I think depending on the filtering, it might be a job for reducers. As reducers can parallelize. Then in a final reducer I can write to a stream, I think. This is mostly me just planning ahead, so not deep in the code yet
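For the reducers idea, a tiny self-contained sketch of parallel filter-and-fold (note `r/fold` needs a foldable collection such as a vector, so a lazy datom seq would first have to be chunked into vectors; the data here is a stand-in):

```clojure
(require '[clojure.core.reducers :as r])

(defn total-of-evens
  "Filters and sums in parallel via fork/join; + serves as both the
  reducing and combining function."
  [v]
  (r/fold + (r/filter even? v)))

(total-of-evens (vec (range 1000))) ;; => 249500
```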
when making a query with a lookup ref, if Datomic cannot find the entity it throws quite nasty errors. any way to handle that? a regular query with :where just returns no results, but if done through a lookup ref it bombs…
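One workaround sketch (attribute names hypothetical): resolve the lookup ref up front with `d/entid`, which returns `nil` for a nonexistent entity instead of throwing, and only run the query when it resolves:

```clojure
(require '[datomic.api :as d])

(defn find-orders
  "Returns order entity ids for the user with the given email, or nil
  if no such user exists (instead of the query throwing on an
  unresolvable lookup ref)."
  [db email]
  (when-let [user-eid (d/entid db [:user/email email])]
    (d/q '[:find ?order
           :in $ ?user
           :where [?order :order/user ?user]]
         db user-eid)))
```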
Why doesn’t Datomic provide reverse index access, like d/reverse-datoms?
the (first (reverse (d/datoms ...)))
is slower than (first (d/datoms …))
I read the doc http://docs.datomic.com/clojure/#datomic.api/seek-datoms, and it seems it doesn’t have this feature?
Hello, I'm having problems with some characters in the fulltext
(d/q '[:find ?e ?name
:in $ ?search
:where [?e :user/name]
[(fulltext $ :user/name ?search) [[?e ?name]]]]
db search)
When the search
contains a !
(and some other chars, in some positions), an exception occurs. Is this foreseen (I did not find anything in the docs)? Is there a blacklist of characters?
I believe there is a feature request for reverse iteration of indexes via the datoms API
I’d suggest you log in to the feedback portal and vote for it if you’re interested in that functionality
You can get to the feedback portal from your my-datomic account page. It’s in the top right, "suggest feature"
@lellis could you try escaping the characters '+ - && || ! ( ) { } [ ] ^ " ~ * ? : \ /' with a '\' so '\(1\+1\)\:2'
@jaret, That's exactly what the problem is! Is there some "right" way to do that? Do I need to do it in all my queries?
@souenzzo you can check out https://clojuredocs.org/clojure.string/escape
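The suggested escaping can be wrapped in a small helper with `clojure.string/escape`, prefixing each Lucene special character with a backslash before the string reaches `fulltext` (the character set follows the list given in the chat; `&&` and `||` are handled by escaping each `&` and `|` individually):

```clojure
(require '[clojure.string :as str])

;; Characters with special meaning to Lucene's query parser,
;; per the list suggested above.
(def lucene-special-chars
  #{\+ \- \& \| \! \( \) \{ \} \[ \] \^ \" \~ \* \? \: \\ \/})

(defn escape-fulltext
  "Prefixes each Lucene special character in s with a backslash."
  [s]
  (str/escape s (into {}
                      (map (fn [c] [c (str \\ c)]))
                      lucene-special-chars)))

(escape-fulltext "(1+1):2") ;; => "\\(1\\+1\\)\\:2"
```

Applying this to user-supplied search strings at one choke point avoids having to repeat the escaping in every query.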