2016-12-19
Hi all! I'd like to describe my situation so someone can help me. I'm using :onyx/plugin :onyx.plugin.datomic/read-log in two jobs. It all works, but I have a question about :datomic/log-start-tx. I'm using (d/basis-t (d/db @conn)) to keep tracking the Datomic log without reprocessing everything when the app restarts. But if the job goes down for some reason and the application keeps transacting against Datomic for 3 minutes before I restart the app, how can I track that missed data? When (d/basis-t (d/db @conn)) runs again on restart, it will return the newest value and I'll lose the 3-minute window of data. Any idea how I can handle this situation?
@lellis Does this help? From the onyx-datomic readme: "Log read checkpointing is per job - i.e. if a virtual peer crashes, and a new one is allocated to the task, the new virtual peer will restart reading the log at the highest acked point. If a new job is started, this checkpoint information will not be used. In order to persist checkpoint information between jobs, add :checkpoint/key "somekey" to the task-map. This will persist checkpoint information for the cluster (on a given :onyx/tenancy-id) under the key, ensuring that any new jobs restart at the checkpoint. This is useful if the cluster needs to be restarted, or a job is killed and a new one is created in its place."
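For concreteness, a read-log catalog entry with a persistent checkpoint might look roughly like this (a sketch based on the onyx-datomic readme; the URI, checkpoint key, and batch size are placeholders):

```clojure
;; Sketch of a read-log input task with a persistent checkpoint.
;; The URI, :checkpoint/key value, and batch size below are placeholders.
{:onyx/name :read-datomic-log
 :onyx/plugin :onyx.plugin.datomic/read-log
 :onyx/type :input
 :onyx/medium :datomic
 :datomic/uri "datomic:free://localhost:4334/my-db"
 :checkpoint/key "my-global-checkpoint"  ; new jobs on this :onyx/tenancy-id resume here
 :checkpoint/force-reset? false          ; don't discard the stored checkpoint on start
 :onyx/max-peers 1
 :onyx/batch-size 20
 :onyx/doc "Reads the Datomic transaction log via the d/log API"}
```

With the checkpoint stored under a fixed key, a replacement job picks up from the highest acked point rather than from a freshly computed basis-t, which covers the 3-minute gap described above.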
Can onyx-kafka use a real kafka server but an in-memory zookeeper, so when I shut the env down it will forget commits?
@yonatanel Yeah, it can, but that’s agnostic to onyx-kafka.
You'll just need to make sure your startup and shutdown order for Kafka and ZooKeeper is correct, and you should be okay.
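One way to enforce that ordering (an illustrative sketch, not from the thread) is Stuart Sierra's component library, where declared dependencies control start and stop order:

```clojure
(require '[com.stuartsierra.component :as component])

(defrecord ZooKeeper []
  component/Lifecycle
  (start [this]
    ;; placeholder: start an in-memory/embedded ZooKeeper here
    (println "starting ZooKeeper")
    this)
  (stop [this]
    (println "stopping ZooKeeper")
    this))

(defrecord Kafka [zookeeper]
  component/Lifecycle
  (start [this]
    ;; placeholder: start or connect to Kafka here, after ZooKeeper is up
    (println "starting Kafka")
    this)
  (stop [this]
    (println "stopping Kafka")
    this))

(def system
  (component/system-map
   :zookeeper (->ZooKeeper)
   :kafka     (component/using (->Kafka nil) [:zookeeper])))

;; (component/start system) brings up :zookeeper before :kafka;
;; (component/stop ...) shuts them down in the reverse order.
```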
@lucasbradstreet or anyone, I'm getting this error while using embedded-kafka. Could it be because franzy-embedded is a dependency only in the :dev profile of the kafka plugin? #error { :cause Could not locate franzy/embedded/component__init.class or franzy/embedded/component.clj on classpath. :via [{:type clojure.lang.Compiler$CompilerException :message java.io.FileNotFoundException: Could not locate franzy/embedded/component__init.class or franzy/embedded/component.clj on classpath., compiling:(onyx/kafka/embedded_server.clj:1:1)
@yonatanel That sounds right, yeah. Are you running from within an uberjar at this point?
Yep, that’ll be right. By the way, we’ve moved away from embedding Kafka, as it’s pretty finicky/brittle.
Actually, what I said doesn't make sense. If I'm in the :dev profile, using the embedded-kafka lib as it was published to Clojars, will the lib contain its dev dependencies when it's downloaded to my machine?
No. A library’s dev dependencies don’t go out with it to Clojars.
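For illustration (a sketch, not from the thread): in a Leiningen project.clj, anything under the :dev profile is active locally but left out of the published pom, so consumers never pull it transitively:

```clojure
;; Sketch of a Leiningen project.clj; coordinates and versions are placeholders.
(defproject my-lib "0.1.0-SNAPSHOT"
  ;; These go into the published pom and are pulled in by consumers:
  :dependencies [[org.clojure/clojure "1.8.0"]]
  ;; These are only active locally (e.g. `lein repl`, `lein test`) and are
  ;; NOT part of the published artifact's dependencies:
  :profiles {:dev {:dependencies [[ymilky/franzy-embedded "0.0.1"]]}})
```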
Still happens when I add franzy-embedded to my main dependencies, just above the onyx-kafka plugin. I get another error now: #error { :cause No matching ctor found for class kafka.server.KafkaServer. Maybe I should move away from embedded-kafka too.
To be honest, we’ve tended to find it preferable to run Kafka inside Docker.
Easy enough to blow the entire thing away and get a new one, especially if you put ZooKeeper inside of it too
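As an illustration (not from the thread), the spotify/kafka Docker image from that era bundled Kafka and ZooKeeper in a single container, which makes it trivial to destroy and recreate a clean environment:

```sh
# Hypothetical invocation: host/port values are placeholders for a local dev setup.
docker run -d --name kafka \
  -p 2181:2181 -p 9092:9092 \
  --env ADVERTISED_HOST=localhost --env ADVERTISED_PORT=9092 \
  spotify/kafka
```

Then `docker rm -f kafka` wipes both Kafka and ZooKeeper state in one step.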