#onyx
2018-05-03
eoliphant13:05:30

Question: I’m prototyping out some ‘forward thinking’ on an architecture. I’ve played around with Onyx for some more traditional ‘move data around’ stuff, but I just noticed the http ‘adapter’ or whatever. I’ve been building out a command/event-based architecture where microservices take the commands, do their thing, persist reified transactions in Datomic with the appropriate event tag and metadata, and Onyx pulls them out of the back end. So now I’m wondering if I could just have Onyx accept the commands via http directly from browsers, etc.
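
For context, a rough sketch of what the “Onyx pulls it out of the back end” piece might look like, assuming the onyx-datomic plugin’s tx-log reader; `db-uri` and the batch size are placeholders, and exact keys can differ by plugin version:

```clojure
;; Hedged sketch: an input task that reads the Datomic transaction log,
;; assuming the onyx-datomic plugin. `db-uri` is a placeholder.
{:onyx/name :read-tx-log
 :onyx/plugin :onyx.plugin.datomic/read-log
 :onyx/type :input
 :onyx/medium :datomic
 :datomic/uri db-uri
 :checkpoint/key "command-event-reader" ; shared checkpoint key across restarts
 :checkpoint/force-reset? false
 :onyx/max-peers 1
 :onyx/batch-size 20
 :onyx/doc "Reads reified command/event transactions from the Datomic tx-log"}
```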

michaeldrogalis14:05:26

@eoliphant Yup, we've seen this done a few times. You can ask @robert-stuttaford about it in particular.

daniel-tcgplayer15:05:17

I'm running into a problem running multiple nodes in a cluster. The current behavior is as follows: I'll have the job running in a Docker instance with some tenancy ID, using a hosted ZK cluster, with the media driver in another process. The job will be running correctly (I can monitor its output), so I'll go to start another container with the job. I use the same config and startup (same ID), and once it starts it locks up (I only know this because the output source stops receiving items). Eventually, after a few minutes, the job on both containers dies.
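
For reference, a hedged sketch of the kind of config being described, where every container uses the same tenancy ID and ZooKeeper address so its peers join the same logical cluster; all values here are placeholders:

```clojure
;; Hedged sketch: both containers would start peers with the same
;; tenancy id and ZooKeeper address so they join one cluster.
(def peer-config
  {:zookeeper/address "zk1:2181,zk2:2181"              ; hosted ZK cluster (placeholder)
   :onyx/tenancy-id "my-shared-tenancy"                ; same ID on every node
   :onyx.peer/job-scheduler :onyx.job-scheduler/greedy
   :onyx.messaging/impl :aeron
   :onyx.messaging/peer-port 40200
   :onyx.messaging/bind-addr "localhost"})             ; in Docker, a routable address
```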

eoliphant16:05:50

I’ll ping him, @michaeldrogalis, thanks. Yeah, I read his older blog post about what they were doing, but I got the impression that Onyx was pulling stuff out of the ‘back’ lol of Datomic.

eoliphant16:05:28

This is the problem with the Clojure ecosystem… The Crisis of Too Much Cool Stuff lol

lucasbradstreet16:05:20

@innit29 do you have a copy of the onyx.log? We generally redirect output to stdout when running inside Docker.
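
A hedged sketch of that stdout redirect; `:onyx.log/config` takes a Timbre configuration map, and the other keys here are placeholders:

```clojure
(require '[taoensso.timbre :as timbre])

;; Hedged sketch: route Onyx's Timbre-based logging to stdout for Docker.
;; :onyx.log/config is merged into Onyx's logging configuration.
(def peer-config
  {:zookeeper/address "zk:2181"   ; placeholder
   :onyx/tenancy-id "my-tenancy"  ; placeholder
   :onyx.log/config {:min-level :info
                     :appenders {:println (timbre/println-appender {:stream :auto})}}})
```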

daniel-tcgplayer16:05:26

I do have a very verbose log from the containers. What's the best way to share that out?

lucasbradstreet16:05:35

A private gist works, but whatever you want to do.

daniel-tcgplayer16:05:12

I've got the log level on INFO, but it doesn't log anything useful. However, setting the log level to debug creates hundreds of MBs. Do you want the debug logs?

lucasbradstreet16:05:34

The debug logs likely won’t be helpful.

lucasbradstreet16:05:54

I’m surprised there isn’t anything interesting in info. I’d expect to at least see peers timing out

daniel-tcgplayer17:05:12

The peers eventually time out, which can be seen in the debug log, but nothing shows up in the info log. Maybe I've got logging misconfigured? Here are the logs: https://gist.github.com/dcrouch26/e9246600b1a01ad32dee19c2193fd2b5 https://gist.github.com/dcrouch26/89becf285f6b94082524fedc1cfac308

daniel-tcgplayer17:05:43

Shared memory gets pretty low, so that might be a thing

lucasbradstreet17:05:51

@daniel-tcgplayer k, there could be a few problems here

lucasbradstreet17:05:03

1. You will probably want to increase the shm-size of the container.
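
A hedged example of the kind of flag this refers to; the size and image name are placeholders you’d tune for your peer count:

```sh
# 512m is a placeholder starting point, not a recommendation;
# Aeron's log buffers live in /dev/shm inside the container.
docker run --shm-size=512m my-onyx-peer-image
```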

lucasbradstreet17:05:49

(unless the job is completing, but it doesn’t seem like that)

lucasbradstreet17:05:33

3. It doesn’t look like you’re getting the Onyx logging at all. Logging seems misconfigured - there should be a lot of other info-level Onyx logging.

lucasbradstreet17:05:06

Fixing 3 will help you decide on 2. Re: 2, you probably shouldn’t be using await-job-completion to decide when to shut down peers.
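
A hedged sketch of the pattern being warned against, using the standard onyx.api entry points; `peer-config`, `job-data`, `v-peers`, and `peer-group` are placeholders from whatever startup code each node runs:

```clojure
(require '[onyx.api])

;; Hedged sketch of the anti-pattern: block until the job "completes",
;; then tear the peers down. A streaming job never completes normally,
;; so this only unblocks when the job dies or is killed - at which
;; point every node shuts its peers down.
(let [{:keys [job-id]} (onyx.api/submit-job peer-config job-data)]
  (onyx.api/await-job-completion peer-config job-id)
  (doseq [v-peer v-peers]
    (onyx.api/shutdown-peer v-peer))
  (onyx.api/shutdown-peer-group peer-group))
```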

daniel-tcgplayer17:05:15

Thanks Lucas! I'll look into getting my Onyx logging re-enabled; something did seem fishy. And I'll get that shared memory bumped up. I'll post back when I've got all that done.

lucasbradstreet17:05:49

the shared memory issue is probably due to peers rebooting and the shm space not being reclaimed quickly enough. You’ll need a little extra to handle those reboots.

lucasbradstreet17:05:05

The shared memory requirements do go up as you go multi-node, since the peers all need connections to each other.