This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
- # beginners (10)
- # boot (14)
- # cider (80)
- # clara (1)
- # cljs-dev (19)
- # cljsrn (7)
- # clojure (284)
- # clojure-france (4)
- # clojure-italy (57)
- # clojure-poland (8)
- # clojure-russia (10)
- # clojure-spec (65)
- # clojure-uk (155)
- # clojurescript (156)
- # code-reviews (6)
- # copenhagen-clojurians (16)
- # cursive (10)
- # datomic (10)
- # emacs (13)
- # euroclojure (1)
- # graphql (4)
- # jobs (2)
- # lein-figwheel (3)
- # luminus (4)
- # off-topic (2)
- # onyx (42)
- # parinfer (23)
- # pedestal (1)
- # protorepl (8)
- # re-frame (34)
- # reagent (17)
- # ring-swagger (5)
- # timbre (24)
- # vim (72)
- # yada (1)
Which of the configuration options influences the Aeron service call timeout? I'm trying to make this error go away in a heavily loaded dev environment:
io.aeron.exceptions.ConductorServiceTimeoutException: Timeout between service calls over 5000000000ns
I'm off to bed, and can't look up the exact property that will correspond to that message right now, but the timeout will definitely be listed in https://github.com/real-logic/aeron/wiki/Configuration-Options#media-driver-options
You can configure it via a Java property. Look for one that will correspond to that many ns
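Assuming the timeout is indeed settable as a JVM system property, the invocation would look roughly like this. The property name (and the placeholder jar/main class) are guesses, not confirmed; verify the exact name against the Aeron Configuration Options wiki page linked above:

```shell
# Sketch: raise the Aeron client conductor service timeout above the 5s default.
# The property name (aeron.client.liveness.timeout, value in nanoseconds) is an
# assumption -- confirm it against the Aeron Configuration Options wiki.
# my-app.jar / my.main are placeholders for your own artifact and entry point.
java -Daeron.client.liveness.timeout=10000000000 -cp my-app.jar my.main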
Hey all! I went through the Onyx challenges this weekend, and I'm very grateful for the tutorials. Very helpful! I have a few questions about best practices and patterns re: input tasks. I have a concrete use case for a personal project: ingesting JSON from a long-polling HTTP request. It looks to me like using a crontab to submit a job would fit best with the lein template app, but I have also seen talk of jobs that never stop (with what I assume is an :in task that runs on some kind of trigger/timer). What's the best way to build a long-poll type of input task?
@wildermuthn The long-polling story is a little awkward because Onyx expects input plugins to communicate with a medium that durably stores results and can recover from failure. IMO, doing the long polling, dumping the results into Kafka, and then feeding that into Onyx would work well.
Is the idea that each job ought to execute over a batch of long-polled results? I'm trying to connect what you're saying with the crontab.
Right, so I make a large number of HTTP requests to an external API every 30s, and want to process all the resulting data with an Onyx job.
You're saying: perhaps run those HTTP requests outside of Onyx, and put the results into Kafka?
Yeah. Those two activities sound pretty distinct. Kafka might not be the right storage choice now that I have a little more insight.
You could plop them into S3 and use the S3-file reader plugin to handle them.
Ok, but the idea is the data should go somewhere durable so any peer can grab from it? Gotcha.
Yeah, mostly so that if there’s a failure of any kind, you’ll be able to recover the original data.
In terms of the crontab, I'm looking to understand the best way to submit a job to the peers: whether it should be submitted and then hang until data is available (via Kafka/S3), or whether something should trigger the job externally (like a crontab).
@michaeldrogalis For pyroclast’s roaming, can arbitrary functions be included in the service composition?
So, I'm getting this error in one of our deployed environments on application startup:
Exception type: io.aeron.exceptions.RegistrationException. Exception message: Insufficient usable storage for new log of length=50332096 in /dev/shm (shm)
I'm starting the app using `with-test-env` because we currently have no need to run Onyx in distributed mode.
It works great locally, but apparently this box lacks sufficient space on /dev/shm
I tried setting `:onyx.log/file "/var/log/onyx.log"` in both my env-config and peer-config. Is that ignored for `with-test-env`? What should I do?
@stephenmhopper how many peers are you running on this node? If you have lots of peers connecting to each other, you'll find that they each need a buffer in /dev/shm, and you use up your space pretty quickly.
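For a sense of scale: the length=50332096 in the RegistrationException above is consistent with an Aeron log file laid out as three term buffers plus a small metadata section. The 16 MiB default term length is from Aeron's documentation; reading the remaining 448 bytes as the metadata section is an inference from this particular error message (the metadata size has varied across Aeron versions):

```python
# Each Aeron publication is backed by a log file in /dev/shm consisting of
# three term buffers plus a metadata section.
TERM_LENGTH = 16 * 1024 * 1024   # Aeron's default term buffer length: 16 MiB
LOG_META_DATA = 448              # metadata size implied by the error (assumption)

log_length = 3 * TERM_LENGTH + LOG_META_DATA
print(log_length)  # matches the length=50332096 in the RegistrationException
```

So every additional peer-to-peer publication costs roughly 48 MiB of /dev/shm at the default term length, which is why a node with many peers runs out quickly.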
@stephenmhopper you can either increase /dev/shm’s size, or you can decrease the term buffer size, e.g.
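For example, using the media-driver option for term buffer length listed on the Aeron Configuration Options wiki page (the jar name here is just a placeholder):

```shell
# Shrink Aeron's term buffer length from the 16 MiB default to 1 MiB, which
# shrinks each publication's log file in /dev/shm accordingly.
java -Daeron.term.buffer.length=1048576 -jar my-app.jar
```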
Hmm, it seems odd that you'd run out of space that quickly. Are you using Docker?
It has plenty of other space on other partitions though. Is there a way to use one of those instead?
also, I'm reading from and writing to AWS SQS queues and I'm getting permissions issues for my app profile
I'm not sure whether it's a red herring caused by the aforementioned /dev/shm issue, though.
I'm giving the profile ["sqs:SendMessage*","sqs:ReceiveMessage","sqs:DeleteMessage*","sqs:ChangeMessageVisibility*"]; does it need any other permissions on the queues?
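For reference, here are those actions as a single IAM policy statement. The two extra read-only actions (sqs:GetQueueUrl, sqs:GetQueueAttributes) are a guess at what a queue-consuming client commonly also needs, not a confirmed requirement of the plugin, and the wildcard Resource is a placeholder to narrow down to your queue ARNs:

```json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": [
      "sqs:SendMessage*",
      "sqs:ReceiveMessage",
      "sqs:DeleteMessage*",
      "sqs:ChangeMessageVisibility*",
      "sqs:GetQueueUrl",
      "sqs:GetQueueAttributes"
    ],
    "Resource": "arn:aws:sqs:*:*:*"
  }]
}
```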
Can you let us know how you go there? Once you figure it out, I'll add it to the docs.