2016-10-18
That's right, you can do them independently
Hey all! I have a weird problem: my Django app writes something to the database, sends an event about it to Kafka, and then commits the transaction to the database.
My Onyx app is so fast (obviously) that it reads the event from Kafka before the row appears in the database, which throws an error and kills the job. I wonder what's the best way to deal with that.
It seems that there is no way to pass anything from the handle-exception lifecycle to the task, so there is no way to say 'retry this task 3 times before giving up', right? I can of course hardcode that inside the task itself...
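For the "hardcode it in the task" route, a hand-rolled retry inside the task function could look roughly like this. A minimal sketch: `lookup-row`, the task name, and the retry/backoff numbers are made up for illustration, not anything Onyx provides.
```clojure
(ns my.app.tasks)

;; Hypothetical DB lookup; returns nil while the Django transaction
;; has not committed yet.
(defn lookup-row [id]
  ;; ... query the database here ...
  nil)

(defn enrich-from-db
  "Onyx task function: retries the DB lookup a few times before giving up,
  to ride out the race with the Django commit."
  [{:keys [id] :as segment}]
  (loop [attempt 1]
    (if-let [row (lookup-row id)]
      (assoc segment :row row)
      (if (< attempt 3)
        (do (Thread/sleep 200) ; crude fixed backoff between attempts
            (recur (inc attempt)))
        (throw (ex-info "Row never appeared in the DB" {:id id}))))))
```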
Could this be a solution? http://www.onyxplatform.org/docs/cheat-sheet/latest/#flow-conditions-entry/:flow/action
Limiting retries is certainly a bit of a problem with the current design. Could you stick a timestamp on the message (I think Kafka can do timestamps now too) and make the flow conditions drop it after some time period?
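In flow-condition terms, that suggestion might look something like the sketch below: one condition that drops segments past a cut-off age, and one that retries fresher ones from the input source via :flow/action :retry (the thing the cheat-sheet link above points at). The task name, the :sent-at key, and the predicates are assumptions, and I may be glossing over constraints on how :flow/thrown-exception? combines with :flow/to and :flow/action, so check the cheat sheet before copying this.
```clojure
(ns my.app.flow)

;; Flow-condition predicates take [event old-segment new-segment all-new].
;; :sent-at is assumed to be a timestamp the Django app put on the message.

(defn expired?
  "True when the segment has been in flight for more than 60 seconds."
  [event old-segment new-segment all-new]
  (> (- (System/currentTimeMillis) (:sent-at old-segment)) 60000))

(defn fresh? [event old-segment new-segment all-new]
  (not (expired? event old-segment new-segment all-new)))

(def flow-conditions
  [;; Too old: route the segment nowhere, i.e. drop it.
   {:flow/from :enrich-from-db
    :flow/to :none
    :flow/short-circuit? true
    :flow/thrown-exception? true
    :flow/predicate ::expired?
    :flow/doc "Give up on segments older than 60s."}
   ;; Still fresh and the task threw (row not committed yet):
   ;; retry the segment from the Kafka input source.
   {:flow/from :enrich-from-db
    :flow/to nil
    :flow/short-circuit? true
    :flow/thrown-exception? true
    :flow/predicate ::fresh?
    :flow/action :retry
    :flow/doc "Row not in the DB yet; retry from the input."}])
```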
@asolovyov Why do you commit the transaction after sending to kafka? What happens if sending to kafka fails, or if committing fails after successfully sending to kafka?
Need to push back the release of the thing I mentioned yesterday, apologies. Needs a few more days.
We're basically coming out with some new test/single-node facilities. They're quite nice to use, but need to be completely finished before being useful. Hit a snag last night trying to complete it.
@yonatanel: you're asking hard questions. Too hard for our old little django app :)
@lucasbradstreet: that's a great idea! Thanks!
@asolovyov Maybe you can batch the input into X-second windows and then delay the processing of each window a bit.
Though I don't believe :onyx/batch-fn? supports time-based batches, and I didn't see an option to link a window accumulation to a task.
You can control an upper bound on the amount of time to wait for a batch to be read with :onyx/batch-timeout, but I don't think that's what you want here.
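For reference, :onyx/batch-timeout is just a knob on the catalog entry for the input task, something along these lines (the task name, values, and plugin keyword are illustrative, and the plugin-specific keys are omitted):
```clojure
{:onyx/name :read-from-kafka
 :onyx/plugin :onyx.plugin.kafka/read-messages
 :onyx/type :input
 :onyx/medium :kafka
 :onyx/batch-size 100
 ;; Upper bound, in ms, on how long a peer waits for a full batch
 ;; before processing whatever it has read so far.
 :onyx/batch-timeout 500
 ;; ... plugin-specific keys (topic, deserializer, etc.) go here ...
 }
```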
I think I've got an idea how to fix django, a hack, but whatever, I plan to throw it out sooner or later :)
Got things done a little quicker than we thought we would. Blog post tomorrow, but worth checking out today.
Happy to announce onyx-local-rt, an alternate runtime for Onyx that is pure and deterministic, and runs in ClojureScript: https://github.com/onyx-platform/onyx-local-rt
Almost all API features of the distributed runtime work, so you can switch your tests over to use the local-rt. Tests go from ~1200ms down to < 10ms on my machine.
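For the curious, running a job through onyx-local-rt looks roughly like this (adapted from the README linked above; see it for the authoritative API):
```clojure
(ns my.ns
  (:require [onyx-local-rt.api :as api]))

;; ^:export so the function is resolvable when compiled to ClojureScript.
(defn ^:export my-inc [segment]
  (update-in segment [:n] inc))

(def job
  {:workflow [[:in :inc] [:inc :out]]
   :catalog [{:onyx/name :in
              :onyx/type :input
              :onyx/batch-size 20}
             {:onyx/name :inc
              :onyx/type :function
              :onyx/fn ::my-inc
              :onyx/batch-size 20}
             {:onyx/name :out
              :onyx/type :output
              :onyx/batch-size 20}]
   :lifecycles []})

(-> (api/init job)
    (api/new-segment :in {:n 41})
    (api/drain)
    (api/stop)
    (api/env-summary))
;; => a summary map whose :out task :outputs contain {:n 42}
```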
@camechis Half because "why not", since mostly all we had to do was convert files from clj to cljc, half because we get a lot of people who like the programming model but don't need the parallelism or fault tolerance.
Onyx in the browser as a teaching tool for example would be a nice excuse to learn some Om.
I almost want to switch learn-onyx over to use it so newcomers don't need to futz around with ZooKeeper or BookKeeper.
It's pretty snappy; we've done a lot of work to make it smooth. Probably a good idea to just leave it alone.
But, anyhow. I'll write a longer post tomorrow. onyx-local-rt should be useful for testing, teaching, in-browser usage, and single-node usage.