
Trying to think: if I am running a peer/media driver inside a container using straight-up Docker on a physical host, the bind addr should be the IP of the host, correct?


The Onyx bind address will be the internal IP in the media driver container


So the 172 address?


Sorry my brain is operating at half capacity right now


Should be. But you will need to expose the port, and you will probably need to advertise the external address via Onyx, otherwise the other peers won’t find it


Right, I am exposing the port for sure


Is that doable? Bind to the internal but expose the external? Is there a separate config for what gets exposed?


ah, external-addr
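For anyone landing here later, a peer-config along the lines discussed above might look like this. It is only a sketch: the tenancy id, ZooKeeper address, port, and IPs are all placeholders, not values from this conversation.

```clojure
;; Sketch of an Onyx peer-config for a media driver in a Docker
;; container: bind to the container-internal address, advertise the
;; host's address so other peers can reach it. All values are placeholders.
{:onyx/tenancy-id "my-tenancy"
 :zookeeper/address "zk:2181"
 :onyx.messaging/impl :aeron
 :onyx.messaging/peer-port 40200
 :onyx.messaging/bind-addr "172.17.0.2"     ; container-internal (172.x) IP
 :onyx.messaging/external-addr "10.0.0.5"}  ; host IP, published to peers
```

The port given as `:onyx.messaging/peer-port` would also need to be published on the host (e.g. with `-p` in `docker run`) for the advertised address to work.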


Any ideas why I would get this with a 2 GB shm?

message: "IllegalStateException: Insufficient usable storage for new log of length=50335744 in /dev/shm (shm)"


It’s probably because you need to specify --shm-size for your container


I have it set to 2 gig


And df shows 2 GB for shm when I exec into the container


Hmm. Can you try that Aeron property I sent you? It sounds like you have a lot of connections between nodes and are running out of memory. If you make it smaller it should help


Ok I will adjust it in the morning and let you know
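For anyone hitting the same error: the two knobs discussed above would be set roughly as below. The image name and class path are placeholders; the Aeron property shown is `aeron.term.buffer.length`, which controls the size of each term buffer (an Aeron log is three term buffers plus a metadata section, so the `length=50335744` in the error is consistent with 16 MB terms: 3 × 16,777,216 = 50,331,648, plus metadata).

```shell
# Give the container a larger /dev/shm (Docker's default is only 64 MB).
docker run --shm-size=2g my-peer-image        # image name is a placeholder

# And/or shrink Aeron's per-connection log buffers on the media driver,
# so many connections fit in the same shm. 2 MB terms shown as an example.
java -Daeron.term.buffer.length=2097152 -cp app.jar my.media-driver
```

Note that shm usage scales with the number of connections between nodes, which is why shrinking the term buffer helps even when df already shows 2 GB free.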


Is there a limitation in Dire (and Onyx for that matter) on using multimethods as tasks? I guess the function var name isn't friendly.


In Onyx, if I add a new job, is it possible for it to go back in time and execute the entire history? And if a record comes in late, can it update previous executions? (Say I have a rolling average and a new value comes in that should be in the middle of the recorded history.)


@frozenlock It’s been a while since I’ve looked at the checkpointing (not sure if it’s moved out of Zookeeper) but I see no reason why not. I used to do this a lot with Kafka topics when testing.


Out of curiosity: did anyone have a look at Apache Pulsar instead of Kafka?


@jasonbell Great. Guess I'll look more deeply into this rabbit hole then. Thanks 🙂


@frozenlock no problem at all, I need to look at things a lot closer over the next few weeks so if I see anything of interest I’ll post it here.


I would really appreciate it. I have a ton of time series that I would like to process, and I'm starting to think Onyx might just do it.


Hi all. We're still having a little trouble with Aeron. I noticed there is an option to use :atom as the messaging implementation, but when I try to run my tests I get the following error: "org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode for /onyx/1/log-parameters/log-parameters", indicating that the peers aren't starting (I think)


I also saw there used to be an option to use :netty. Is there any chance that is still usable?


@frozenlock You can do that if your input is durable in the sense that it allows repeated replays, yes.


@yonatanel Multimethods should work with Onyx. Dire I'm not sure about; I haven't picked it up in a really long time.


@ben.mumford We need to remove :netty -- I'm not sure about :atom; I think that ought to be removed, but I'm not sure if it's being used to do any testing. Definitely not something to modify.


We'll try to get that patched over today to avoid others wandering into that parameter. Thanks!


@michaeldrogalis May I ask what it means to have a durable input? Can it be a DB?


@frozenlock Something where you can repeatedly read historical contents. Like Kafka, or, yes, an append-only database for instance.


Onyx doesn't hang onto your data as it's processed for any longer than it needs to. It continually pulls it out of the target input medium and releases it after it makes its way in.


Currently working with Onyx, but management is asking us to justify that choice vs Flink or Kafka streams. Not having any luck finding much in the way of comparisons on the web. Does anybody have a slide or something from which I could grab a few bullet points?


@jasonbell might have some experience for you @dave.dixon


@dave.dixon There’s some older blog posts that might be of interest:


@jasonbell Excellent, thank you.


Most welcome.


So I'm working on this Onyx log -> Datomic service, and I'm unsure what the best approach is. I basically have two choices: (1) write Datomic data using my own data model, directly relevant and usable for my own system, using only those log entries that are relevant to me; or (2) build a complete mirror of the most recent replica state into Datomic. The second will be a time-consuming task, because the entire replica schema needs to be defined in Datomic, but it will be the most flexible. I can probably use onyx.extensions/replica-diff for this. I'm definitely erring towards the first option, translating the Onyx log into my own schema on the fly, but perhaps I'm missing something that makes it worth the effort to get the entire Onyx schema into Datomic


@lmergen that’s what we did, we just mapped the replica state -> what we cared about, and stuck it in the database


@lmergen on migration we went even further and only looked at the submit-job/kill-job entries, so we didn’t even have to play back the log using the same version of onyx. That at least told us which jobs were up.


yep, that makes sense


and now i understand why it's difficult to make such a component public, because it's so domain-specific 🙂


Yeah. That’s part of the reason we didn’t do it. I think having some code examples would be a good start


blog post might be the best format for something like this


That’d be great


A component with a configurable reaction, plus a migration component to which you pass tenancies and which plays them back to build state, might be good ways to split it up. I’ll be interested to see what the commonality is between what we’re both doing


> plus a migration component which you pass in tenancies and plays them back to build state
What do you mean by this?


Because that's pretty much what should happen by default, right?


At the moment I simply decoupled the log reading and the handling of the log entries using a multimethod
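A minimal sketch of that kind of decoupling, assuming log entries are maps whose operation sits under an :fn key with arguments under :args (the exact shape in your code may differ, and all names here are illustrative):

```clojure
;; Dispatch a log entry to a handler based on its operation. Each
;; handler decides what, if anything, to write to the database;
;; printing is used here as a stand-in for a real transaction.
(defmulti handle-entry :fn)

(defmethod handle-entry :submit-job [entry]
  (println "job submitted:" (get-in entry [:args :id])))

(defmethod handle-entry :kill-job [entry]
  (println "job killed:" (get-in entry [:args :job])))

;; Ignore every entry type we don't care about.
(defmethod handle-entry :default [_] nil)
```

This keeps the log-reading loop trivial (it just calls handle-entry on each entry) while letting the domain-specific translation live in the individual methods, which matches the "only look at submit-job/kill-job" approach mentioned above.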


Right I mean on your application side, so you can tell what jobs are up when you migrate between tenancies (resubmitting jobs, resuming state, etc)


Which is distinct from getting job / replica statuses


@lmergen I would really be interested in reading about this


I can feel the pressure to write about this mounting on my shoulders :)


@jasonbell @dave.dixon with respect to those shm space issues that you had, we’re doing some work to improve the situation and the defaults, to make it work better under a greater variety of workloads. The current defaults are a bit too tuned for big nodes / high throughputs.


I’ve been debugging some of @camechis’s similar issues, so I have a lot more data on it now.


that’s great news, thanks for letting me know


No worries 🙂.