This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
- # admin-announcements (2)
- # beginners (18)
- # boot (118)
- # cider (12)
- # cljs-dev (12)
- # cljsrn (24)
- # clojure (142)
- # clojure-art (4)
- # clojure-bangladesh (3)
- # clojure-ireland (1)
- # clojure-italy (7)
- # clojure-norway (4)
- # clojure-poland (207)
- # clojure-russia (101)
- # clojurescript (108)
- # clojurewerkz (2)
- # core-async (6)
- # css (8)
- # data-science (23)
- # datomic (31)
- # devcards (2)
- # emacs (8)
- # funcool (25)
- # hoplon (34)
- # immutant (78)
- # ldnclj (7)
- # lein-figwheel (4)
- # leiningen (6)
- # luminus (35)
- # off-topic (1)
- # om (119)
- # onyx (43)
- # parinfer (29)
- # proton (11)
- # re-frame (25)
- # remote-jobs (1)
- # slack-help (1)
- # spacemacs (3)
- # yada (10)
@bkamphaus: hi Ben, we use the official Datomic AMI for our transactors, and we've had a couple of problems with failover situations. For investigating, it's good to have ssh, Datadog and Papertrail on the boxes. Could we get a copy of the AMI (and, if you use Packer, maybe you could share the Packer file)? I've found that you delete the ssh init script during Datomic initialization, and I don't want to apply ugly hacks to the transactors. Thank you!
Btw, I'm able to install the datadog-agent and configure syslog (we need it for a realtime log stream) via UserData, but I'd prefer to have it baked in before launch. Running Datomic transactors is very important for our apps.
hi guys, I don't know if I understand reverse attribute navigation correctly or not. I have :project/city, and I'd expect that if I query :project/_city it would return all the projects that have the city. In my case it returns nil ...
@nxqd if you mean inside a query, don’t use a reverse ref - just swap the variables in the tuple, e.g., not `[?f :person/_friend ?p]`, just `[?p :person/friend ?f]`. Reverse navigation is for `pull`/`entity`; it's not needed for datalog.
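A sketch of the two styles, assuming a `:project/city` ref attribute and a known city entity id (names are illustrative):

```clojure
(require '[datomic.api :as d])

;; In datalog, just bind the ref attribute in the forward direction:
(d/q '[:find ?project
       :in $ ?city
       :where [?project :project/city ?city]]
     db city-id)

;; Reverse navigation (the underscore form) belongs in pull/entity,
;; navigating from the city back to the projects that reference it:
(d/pull db [:db/id {:project/_city [:db/id]}] city-id)
```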
@lowl4tency: are the failover situations recent? One issue, regarding streaming logging options, we’re considering this as a feature but there’s no way to hook this up with our provided AMI at present.
Some notes - we can only guarantee good shutdown behavior and metrics reporting for a well-behaved self-destruct. If something happens that causes the Datomic process to suspend or stop suddenly, we can’t guarantee the shutdown behavior for HA failover.
bkamphaus: we are using Papertrail. All we need is either a static log file name like datadog.log, or the opportunity to change logback.xml to write to syslog.
One sanity check you can put in place for the AMI is alarms around time window without HeartbeatMsec (ensure an active is up) or HeartMonitorMsec (ensure a standby is up).
This just gives you an indication to manually kill/force cycle the instance, and you lose logs doing this, but it’s more consistent with the disposable cloud assets deployment model to do so. Regarding diagnosis steps:
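One possible shape for such an alarm, sketched with the AWS CLI: alert when no HeartbeatMsec datapoints arrive within a window, meaning no active transactor is reporting. The metric namespace, period, and SNS topic ARN below are assumptions for illustration; check them against how your transactor actually publishes CloudWatch metrics.

```shell
# Assumed namespace and topic ARN -- adjust to your deployment.
# Missing data is treated as breaching, so silence trips the alarm.
aws cloudwatch put-metric-alarm \
  --alarm-name datomic-no-active-transactor \
  --namespace "Datomic" \
  --metric-name HeartbeatMsec \
  --statistic SampleCount \
  --period 300 \
  --evaluation-periods 1 \
  --threshold 1 \
  --comparison-operator LessThanThreshold \
  --treat-missing-data breaching \
  --alarm-actions arn:aws:sns:us-east-1:123456789012:ops-alerts
```

An analogous alarm on HeartMonitorMsec would cover the standby in the same way.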
We are definitely considering doing something like this, but we're not sure exactly what action we’ll take. It doesn’t provide a firm guarantee that you will know what happened or be able to diagnose it.
At present, if the JVM process dies without the transactor following its self-destruct process, the instance just goes dead and stops reporting metrics.
I understand, but if we could get realtime logs it would help us investigate and recognize what happened
… this just adds one more layer of status reporting that goes silent. There’s no guarantee the alarm or metric would come through there, either.
But having more options could possibly make it more likely. I just want to make sure it’s understood that this isn’t guaranteed to add more meaningful metrics: if e.g. the JVM segfaults, it just dies, the machine doesn’t cycle, and the logs are just another form of monitoring that goes silent without definitive answers.
The issue is not getting an alarm, the issue is understanding why it happened. We are not able to ssh into the transactor, we are not able to grab logs from the host; we can only restart instances, and no one guarantees we get the logs onto S3 correctly.
Maybe we have a small performance issue, or it's a DDB issue, or we're hitting something else. It's extremely important.
@lowl4tency: it may be that, for the level of investigative steps you want to take, the disposable transactor machine model with our provided AMI isn’t appropriate, and it’d make more sense to keep two transactor VMs up with your own process monitor and relaunch logic, with your own configured logging, etc.
bkamphaus: yeah, that sounds good, but I don't want to create our own AMI from scratch
We do intend to explore options for providing better logging/monitoring configuration for the AMI, so I don’t mean that as a “won’t fix” type response, it’s just that what we provide won’t be available immediately and there will always be some other degree of monitoring or logging you could do if it was your own managed machine. So the time/ops investments to do so may be worthwhile.
bkamphaus: I'm not even asking you to change your logic or processes, I'm just asking for the opportunity to change your AMI or add our own apps/scripts to it. Now, with ssh dropped, that's not so simple.
I'm realising how much work you have to do to get all this stuff together; if it could be a bit more flexible, it might make your work a bit easier too. If you just provided interfaces for customizing.
So Datomic does not have a data type for BLOB storage. But it does support "small" byte arrays and strings. Is there any documentation or guidance on how large is too large for "small"? 10 kB? 100 kB?
@alexisgallagher: it can be dependent on config - underlying storage, mem provisioning etc. As a rule of thumb, I’d try to stay below 100K max size (stay on the order of 10K) as a guideline.
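A sketch of how a "small" bytes attribute and an application-level size guard might look, following the rule of thumb above. The attribute name, the guard function, and the 100K ceiling are illustrative assumptions, not Datomic-enforced limits:

```clojure
;; Hypothetical schema for a small binary payload (e.g. a thumbnail).
(def thumbnail-schema
  [{:db/ident       :image/thumbnail
    :db/valueType   :db.type/bytes
    :db/cardinality :db.cardinality/one
    :db/doc         "Small binary payload; keep on the order of 10K."}])

;; Application-level guard: Datomic won't reject a large byte array
;; for you, so check before transacting.
(defn assert-small-blob [^bytes b]
  (when (> (alength b) (* 100 1024))
    (throw (ex-info "payload too large for a bytes attribute; use external blob storage"
                    {:size (alength b)})))
  b)
```

Anything substantially larger is usually better stored externally (e.g. S3) with a string attribute holding the key or URL.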