This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2017-05-30
Channels
- # arachne (2)
- # beginners (8)
- # boot (19)
- # chestnut (2)
- # cider (1)
- # clara (1)
- # cljs-dev (31)
- # cljsrn (82)
- # clojure (163)
- # clojure-dusseldorf (7)
- # clojure-greece (1)
- # clojure-italy (4)
- # clojure-norway (3)
- # clojure-russia (24)
- # clojure-sg (5)
- # clojure-spec (6)
- # clojure-uk (42)
- # clojurescript (239)
- # core-async (4)
- # cursive (10)
- # data-science (18)
- # datascript (1)
- # datomic (110)
- # emacs (16)
- # euroclojure (1)
- # events (1)
- # figwheel (1)
- # hoplon (22)
- # keechma (2)
- # klipse (5)
- # lein-figwheel (3)
- # leiningen (7)
- # luminus (27)
- # melbourne (2)
- # mount (5)
- # nyc (7)
- # off-topic (35)
- # om (20)
- # onyx (49)
- # pedestal (41)
- # re-frame (31)
- # reagent (18)
- # remote-jobs (9)
- # ring (4)
- # ring-swagger (1)
- # spacemacs (6)
- # specter (6)
- # uncomplicate (3)
- # unrepl (9)
- # untangled (54)
- # yada (11)
Hi there! We have a function which submits all our jobs to Onyx, and we’re trying to check whether any tasks have already been submitted (to prevent duplicates). We’re using onyx.api/subscribe-to-log for this. The problem is that on a new tenancy there’s no log in ZooKeeper, and subscribe-to-log loops infinitely trying to connect. Is there a way to check if the log exists?
We’re on 0.9
Hello everyone. I'm stuck with onyx and onyx-kafka. I get
org.apache.kafka.common.config.ConfigException: No resolvable bootstrap urls given in bootstrap.servers
all the time from the Onyx peers. What am I doing wrong?
@maxk Are you sure Kafka is connected to your Zookeeper instance? I would double-check using the kafka-topics.sh script to make sure you can connect to ZK and list topics.
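For reference, the suggested check is a one-liner; a sketch, assuming Kafka's bin/ directory is on your PATH and ZooKeeper runs at localhost:2181 (substitute your own address):

```shell
# Lists the topics Kafka has registered in ZooKeeper. If this fails or
# returns nothing, Kafka is probably not connected to this ZK instance.
kafka-topics.sh --zookeeper localhost:2181 --list
```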
@gardnervickers, thank you. Looks like you're correct
@jetmind You can query ZK directly for your tenancy under /onyx
You pretty much always want to spin on waiting for the log to be created.
At least for Onyx
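Querying ZK directly for the tenancy could be sketched like this, using the Apache Curator client (which Onyx itself uses internally); the ZooKeeper address and tenancy id are placeholders for your own values:

```clojure
;; A rough sketch: check whether the Onyx log znode for a tenancy exists,
;; instead of spinning in subscribe-to-log. Assumes Curator is on the
;; classpath (it is, transitively, via Onyx).
(import '[org.apache.curator.framework CuratorFrameworkFactory]
        '[org.apache.curator.retry ExponentialBackoffRetry])

(defn tenancy-log-exists?
  "Returns true if a znode exists under /onyx for the given tenancy id."
  [zk-addr tenancy-id]
  (let [client (CuratorFrameworkFactory/newClient
                zk-addr (ExponentialBackoffRetry. 1000 3))]
    (try
      (.start client)
      ;; checkExists returns a Stat when the znode exists, nil otherwise.
      (some? (.. client (checkExists) (forPath (str "/onyx/" tenancy-id))))
      (finally
        (.close client)))))

;; (tenancy-log-exists? "127.0.0.1:2181" "my-tenancy")
```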
I started a job that uses onyx-datomic read-log to stream entities to ES on an empty Datomic db. I then populated Datomic with 500 entities, but onyx-datomic didn't read them. I killed the job and started a new one, and it streamed as expected.
Will read-log not pick up new entities as they are added? The job was still spinning, and datomic/log-end-tx is unset.
@devth It should. Are you sure it wasn’t reading off d/db?
read-log shouldn’t do that, but that’s just the first thing that comes to mind from the description
https://github.com/onyx-platform/onyx-datomic/blob/0.10.x/src/onyx/plugin/datomic.clj#L154
Sorry to interrupt, but what’s the best way to shut down an Onyx environment to ensure that synchronization operations complete? I’m using the latest version of the SQS plugin and with-test-env. I can write messages to an SQS queue with one job and then successfully read them off of that queue (out through a core.async channel) with another job. But the second job only sometimes makes the call back to SQS to delete the messages, and it seems to depend on how long I let the job sleep. Is there an onyx.api call for gracefully terminating a job? Can I just call kill-job with the job-id?
@devth Can’t say I’m sure what’s going on there off the top of my head. If you manually open up a log connection to Datomic, will it also follow the new entities?
I mean, it should. I’m just hard pressed to suggest what else could be going on there.
I think it polls via loop/recur. The 0.9.x code is https://github.com/onyx-platform/onyx-datomic/blob/0.9.x/src/onyx/plugin/datomic.clj#L301
@stephenmhopper kill-job will do the trick. The job will shut down gracefully and invoke lifecycle/task-stop to finish out operations.
Ah, you’re on 0.9?
0.9 Datomic is pretty actively used
@michaeldrogalis Yeah, that’s what I thought, but it doesn’t seem to always send the delete message back to SQS. I’ll submit a bug for @lucasbradstreet to look at it
Strange. I'll have to do some more testing and observing to see what's really going on.
@stephenmhopper 0.9 or 0.10?
Okay thanks - was just curious
We’ll need to get a look at it later in the week, LB is on vacation
He released a 0.10.0.0-20170526.222812-24 version of the SQS plugin, as the current beta version wasn’t calling delete messages at all due to a bug.
You’re just seeing the last few messages on SQS not get cleared as the job spins down?
@stephenmhopper I could see how that would happen. You’ll probably have to wait for the next epoch to have completed before you can safely shutdown. We don’t have any helper functions for this yet, but you could make successive calls to https://github.com/onyx-platform/onyx/blob/0.10.x/src/onyx/api.clj#L262 and check the epoch on each?
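The successive-calls idea might look something like this; a rough sketch, assuming the linked function is onyx.api/job-snapshot-coordinates (0.10.x) and that peer-config, tenancy-id, and job-id are already in scope:

```clojure
;; Poll the job's checkpoint coordinates until the :epoch advances past
;; where it was when we started, i.e. until at least one more full epoch
;; has completed. All three arguments are placeholders for your own values.
(require '[onyx.api])

(defn wait-for-next-epoch!
  [peer-config tenancy-id job-id]
  (let [start (:epoch (onyx.api/job-snapshot-coordinates
                       peer-config tenancy-id job-id))]
    (loop []
      (let [{:keys [epoch]} (onyx.api/job-snapshot-coordinates
                             peer-config tenancy-id job-id)]
        (when-not (and epoch (> epoch (or start -1)))
          (Thread/sleep 500)
          (recur))))))

;; Once the epoch has moved on, killing the job should be safer:
;; (onyx.api/kill-job peer-config job-id)
```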
@lucasbradstreet ah, so I just have to wait for the next epoch and then call kill-job for now?
that said, it should be finishing the epoch before it shuts down
though not if you’re kill-job’ing
@lucasbradstreet Is there a way to ask it nicely to finish the epoch and shut down?
Can I just close the core.async :out channel?
That’s the responsibility of the input plugin, and there’s currently no way to do it with the SQS plugin. I could see adding a kill-job variant that completes the job on the next epoch, though.
@lucasbradstreet okay, cool. Checking the epoch and waiting for it to change seems to be working. Should I update the tests for the SQS plugin to do this instead of just sleeping for a few seconds?
That’d be great
The sleeping was obviously a hack.
@stephenmhopper you might be interested in this https://help.github.com/articles/ignoring-files/#create-a-global-gitignore
specifically the global gitignore
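Beyond the linked article, the setup is a one-time git config change; a minimal sketch, assuming ~/.gitignore_global as the file name (any path works) and IntelliJ artifacts as the patterns to ignore:

```shell
# Create a personal ignore file and add editor-specific patterns that
# shouldn't live in any individual project's .gitignore.
echo ".idea/" >> ~/.gitignore_global
echo "*.iml"  >> ~/.gitignore_global

# Point git at it globally; this applies to every repo on the machine.
git config --global core.excludesfile ~/.gitignore_global
```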
@lucasbradstreet I might have to do that for open source projects. For professional / day-job work, I’ve always tried to avoid a global .gitignore so that I can ensure environment consistency for everyone on my team, but with open source projects, having a global .gitignore might save me some time. Thank you
Ah, that makes a lot of sense