#datomic
2016-03-03
lowl4tency 05:03:43

@bkamphaus: hi Ben, we use the official Datomic AMI for our transactors, and we've had a couple of problems with failover. To investigate, it would help to have ssh, Datadog, and Papertrail on the boxes. Could we get a copy of the AMI (or, if you build it with Packer, maybe you could share the Packer file)? I've found that you delete the ssh init script during Datomic initialization, and I don't want to apply ugly hacks to the transactors simple_smile Thank you!

lowl4tency 05:03:26

Btw, I'm able to install the datadog-agent and configure syslog (we need it for a realtime log stream) via UserData, but I'd prefer to have it baked in before launch. Running Datomic transactors is very important for our apps.
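A sketch of what that UserData could look like, assuming an Amazon-Linux-style image with rsyslog; the Datadog API key and the Papertrail host/port are placeholders, and the install one-liner follows Datadog's documented pattern of the time:

```shell
#!/bin/bash
# Install the Datadog agent (DD_API_KEY is a placeholder).
DD_API_KEY=YOUR_API_KEY bash -c \
  "$(curl -L https://raw.githubusercontent.com/DataDog/dd-agent/master/packaging/datadog-agent/source/install_agent.sh)"

# Forward all syslog traffic to Papertrail (host and port are placeholders).
echo '*.* @logsN.papertrailapp.com:XXXXX' >> /etc/rsyslog.conf
service rsyslog restart
```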

jimmy 12:03:58

hi guys, I'm not sure I understand reverse attribute navigation correctly. I have :project/city, and if I query :project/_city I'd expect it to return all the projects that have the city, but in my case it returns nil ...

bkamphaus 15:03:47

@nxqd if you mean inside a query, don’t use reverse ref - just swap the variables in the tuple, e.g., not [?f :person/_friend ?p], just [?p :person/friend ?f]. Reverse navigation is for pull/`entity`, not needed for datalog.
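The swap bkamphaus describes, sketched with jimmy's :project/city attribute (a minimal illustration against a hypothetical db value, not a tested schema):

```clojure
;; Reverse navigation is for pull/entity:
(d/pull db [:project/_city] city-eid)

;; In a datalog query, keep the attribute forward and bind the
;; entity you want on the left of the tuple:
(d/q '[:find ?project
       :in $ ?city
       :where [?project :project/city ?city]]
     db city-eid)
```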

bkamphaus 15:03:31

@lowl4tency: are the failover situations recent? Regarding streaming logging options: we're considering this as a feature, but there's no way to hook it up with our provided AMI at present.

lowl4tency 15:03:57

bkamphaus: yeah, the last failover failure simple_smile

bkamphaus 15:03:36

Some notes - we can only guarantee good shutdown behavior and metrics reporting for a well-behaved self-destruct. If something happens that causes the Datomic process to suspend or stop suddenly, we can't guarantee the shutdown behavior for HA failover.

lowl4tency 15:03:44

bkamphaus: we are using Papertrail; all we need is either a stable log file name like datadog.log, or the ability to supply a logback.xml that writes to syslog
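If the transactor's logback.xml were customizable, a syslog appender along these lines would do it (a sketch using logback's stock SyslogAppender; the host, facility, and pattern here are assumptions):

```xml
<configuration>
  <appender name="SYSLOG" class="ch.qos.logback.classic.net.SyslogAppender">
    <syslogHost>localhost</syslogHost>
    <facility>USER</facility>
    <suffixPattern>datomic: %logger{20} %msg</suffixPattern>
  </appender>
  <root level="INFO">
    <appender-ref ref="SYSLOG"/>
  </root>
</configuration>
```

The local syslog daemon can then forward to Papertrail.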

bkamphaus 15:03:13

One sanity check you can put in place for the AMI is alarms around time window without HeartbeatMsec (ensure an active is up) or HeartMonitorMsec (ensure a standby is up).
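One way to sketch such an alarm with the AWS CLI; the namespace and SNS topic ARN here are placeholders you'd match to your transactor's CloudWatch configuration, and a silent metric surfaces as the INSUFFICIENT_DATA state:

```shell
aws cloudwatch put-metric-alarm \
  --alarm-name datomic-no-heartbeat \
  --namespace Datomic \
  --metric-name HeartbeatMsec \
  --statistic SampleCount \
  --period 300 \
  --evaluation-periods 1 \
  --threshold 1 \
  --comparison-operator LessThanThreshold \
  --alarm-actions arn:aws:sns:us-east-1:123456789012:ops-alerts \
  --insufficient-data-actions arn:aws:sns:us-east-1:123456789012:ops-alerts
```

An analogous alarm on HeartMonitorMsec covers the standby.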

lowl4tency 15:03:46

bkamphaus: can we get something like that in the next version of the AMI?

lowl4tency 15:03:55

Or maybe you could share your AMI?

lowl4tency 15:03:22

I'm afraid we'd have to change it for our purposes, or we'd have to create our own 😞

bkamphaus 15:03:26

This just gives you an indication to manually kill/force cycle the instance, and you lose logs doing this, but it’s more consistent with the disposable cloud assets deployment model to do so. Regarding diagnosis steps:

bkamphaus 15:03:55

We are definitely considering doing something like that, but we're not sure exactly what action we'll take. It doesn't provide a safe guarantee that you will know what happened or be able to diagnose it.

bkamphaus 15:03:25

At present, if a JVM process dies without the transactor following its self-destruct process, the instance goes dead and stops reporting metrics.

lowl4tency 15:03:43

I understand, but if we could get realtime logs it would help us investigate and work out what happened

bkamphaus 15:03:48

… this just adds one more layer of status reporting that goes silent. There’s no guarantee the alarm or metric would come through there, either.

bkamphaus 15:03:51

But having more options could make it more likely. I just want to make sure it's understood that this isn't guaranteed to add more meaningful metrics: if e.g. the JVM segfaults, it just dies, the machine doesn't cycle, and the logs are just another form of monitoring that goes silent without definitive answers.

lowl4tency 15:03:21

The issue isn't getting an alarm; the issue is understanding why it happened. We can't ssh into the transactor, we can't grab logs from the host, we can only restart instances, and no one guarantees that we get the logs onto S3 correctly

lowl4tency 15:03:08

Maybe we have a small performance problem, or it's a DDB issue, or we've hit something else. It's extremely important

lowl4tency 15:03:23

Right now the transactors are a black box for us

bkamphaus 15:03:14

@lowl4tency: it may be that, for the level of investigative steps you want to take, the disposable-transactor-machine model with our provided AMI isn't appropriate, and it'd make more sense to keep two transactor VMs up with your own process monitor and relaunch logic, your own configured logging, etc.

lowl4tency 15:03:29

bkamphaus: yeah, that sounds good, but I don't want to create my own AMI from scratch simple_smile

bkamphaus 15:03:34

We do intend to explore options for providing better logging/monitoring configuration for the AMI, so I don’t mean that as a “won’t fix” type response, it’s just that what we provide won’t be available immediately and there will always be some other degree of monitoring or logging you could do if it was your own managed machine. So the time/ops investments to do so may be worthwhile.

lowl4tency 15:03:04

bkamphaus: I'm not even asking you to change your logic or processes; I'm just asking for the ability to modify your AMI or add our own apps/scripts to it. With ssh dropped, that's not so simple right now.

lowl4tency 15:03:40

I can work through UserData, btw, but it feels like an ugly hack

lowl4tency 15:03:58

I realise how much work it takes you to get all this stuff together; if it were just a bit more flexible it might even make your own work a bit easier, if you provided interfaces for customizing simple_smile

alexisgallagher 21:03:39

So Datomic does not have a data type for BLOB storage. But it does support "small" byte arrays and strings. Is there any documentation or guidance on how large is too large for "small"? 10 kB? 100 kB?

bkamphaus 22:03:46

@alexisgallagher: it can be dependent on config - underlying storage, mem provisioning etc. As a rule of thumb, I’d try to stay below 100K max size (stay on the order of 10K) as a guideline.
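A common way to apply that guideline is to keep large payloads in external blob storage and put only a pointer in Datomic; a sketch with illustrative attribute names (not a schema from this thread):

```clojure
;; Large blobs: store an S3 (or similar) object key, not the bytes.
{:db/ident       :document/blob-key
 :db/valueType   :db.type/string
 :db/cardinality :db.cardinality/one}

;; Small payloads (on the order of 10K) can stay inline as bytes.
{:db/ident       :document/thumbnail
 :db/valueType   :db.type/bytes
 :db/cardinality :db.cardinality/one}
```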

alexisgallagher 22:03:26

really not very big then. good to know...