This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2015-11-09
@gurdas if you can share the schema it would help.
mainly, I want to verify the cardinality of ref attrs (am I ok to infer it from the plural vs. singular naming convention? i.e. node/tree, line/dataset = cardinality one; line/nodes = cardinality many?)
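That naming convention maps onto Datomic's `:db/cardinality` attribute. A minimal sketch of what the two cases look like in schema form, using the attribute names mentioned above (these are illustrative, not the actual schema being discussed):

```clojure
;; Two hypothetical ref attrs: a singular name as cardinality-one,
;; a plural name as cardinality-many. Schema shown in the classic
;; attribute-installation form used by Datomic at the time.
[{:db/id                 #db/id [:db.part/db]
  :db/ident              :line/dataset          ; singular -> one ref
  :db/valueType          :db.type/ref
  :db/cardinality        :db.cardinality/one
  :db.install/_attribute :db.part/db}
 {:db/id                 #db/id [:db.part/db]
  :db/ident              :line/nodes            ; plural -> many refs
  :db/valueType          :db.type/ref
  :db/cardinality        :db.cardinality/many
  :db.install/_attribute :db.part/db}]
```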
For those of you running on AWS + DynamoDB, how frequently do you see “transactor not available” as an intermittent error?
oh, fun. It looks like if you get the dynamo ‘throughput exceeded’ error, the transactor kills itself and starts over
@bkamphaus: Here's the schema i'm working with: https://gist.github.com/gurdasnijor/03e9ea105ed77775367c
Appreciate the help! Let me know if a dump of the datomic db i'm working with would help as well and I can get that out to you
@arohner: the transactor restarting is actually not a bad reaction to seeing errors
though it probably won't help much if the issue is due to dynamo's throughput limit
how did you find out? I find it hard to understand what the AMI does
there are logs on S3 but they only seem to be updated once a day
it's possible I didn't look close enough
I've also run into the issue that the AMI restarts continuously (every minute or so)
probably some misconfiguration of the auto-scaling group
arohner: we went through quite some fun with this. you have to get your write provisioning, memory-index-threshold, and memory-index-max values tuned
if your m-i-* values are too high, you can slam storage with a BIG amount of writes in an otherwise sleepy system
you want small, frequent indexing jobs, so a small m-i-threshold. ours is 32mb. @bkamphaus is a wizard at reading CloudWatch, so if you’re on Pro, you should totally spend an hour with him, and have him read your account’s entrails
we had transactor-not-available issues all over until we got this right, and we did have two instances where things had to restart
@robert-stuttaford: thanks. Not sure what my m-i-threshold is, I’ll check it out
the transactor will need sufficient write throughput to handle both incoming transactions as well as background indexing and heartbeats
@arohner: m-i-threshold is my laziness; i mean memory-index-threshold
as set out in your transactor.properties file when booting your transactor instances
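For reference, a sketch of what such a transactor.properties fragment might look like on DynamoDB. The values here are illustrative only (the 32mb threshold comes from the message above; the table name and other values are placeholders to tune against your own write provisioning):

```properties
# Illustrative DynamoDB transactor.properties fragment -- example values only
protocol=ddb
aws-dynamodb-table=your-datomic-table
aws-dynamodb-region=us-east-1

# Small threshold => small, frequent indexing jobs instead of
# rare, storage-slamming ones
memory-index-threshold=32m
memory-index-max=256m
object-cache-max=128m
```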
our write throughput is 400 😐
kind of absurd, I know, but I got an example app + transactor using PostgreSQL running on a free Heroku dyno: https://github.com/clojurous/shouter-datomic-heroku You can see it in action here: https://calm-castle-4835.herokuapp.com