This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
Hi all, we're trying to get the onyx-cheatsheet to run locally, but no luck so far: ClojureScript errors etc. Is the README up to date?
Hi @jeroenvandijk. It should work, though I remember there being some problems viewing the page if you accidentally use the Jekyll index.html meant for our website. I'll give it a go when I'm on a computer shortly. Is ClojureScript throwing an error on compile, or is it an error loading the page?
yeah it starts with an assertion error on
:source-map "resources/public/js/app.js.map"; after setting it to
:source-map true it does compile
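For reference, the likely cause: with :optimizations :none, the ClojureScript compiler asserts that :source-map is a boolean (a file path is only valid with the whitespace/simple/advanced optimization levels). A hedged sketch of the relevant cljsbuild options; the paths are illustrative, not the cheat sheet's actual ones:

```clojure
;; Hypothetical cljsbuild fragment; paths are illustrative.
;; With :optimizations :none, :source-map must be a boolean --
;; passing a path string triggers the assertion error above.
{:cljsbuild
 {:builds
  [{:id "dev"
    :source-paths ["src"]
    :compiler {:output-to "resources/public/js/app.js"
               :output-dir "resources/public/js/out"
               :optimizations :none
               :source-map true}}]}}
```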
(browser-repl) command gives
IllegalArgumentException No value supplied for key: weasel.repl.websocket.WebsocketEnv@49ac47e0 clojure.lang.PersistentHashMap.create (PersistentHashMap.java:77)
and when I open
localhost:10555 I see in the console something about
Browser repl is likely broken. I haven't used it in a long time. I mostly just use figwheel when I'm working on the cheat sheet
All sounds pretty broken. The way we do it is to pull the http://onyx-platform.io repo, and then run build-site.sh which will build the latest cheat sheet
You may have noticed while looking at the project.clj that advanced compilation isn't currently used either
yeah I’m not up to date with the latest cljsbuild settings so I wasn’t sure what is good and bad
The whole thing could do with some more love but I guess we're doing OK overall
yeah well this is good enough for us I think. If we find fixes we’ll let you know
@lucasbradstreet just confirming that
:zookeeper/address needs to include the port for each server in the csv?
If I remember correctly, it’ll default to 2181 if you don’t provide them. I’ve always just tried to be safe and include the ports too
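For anyone following along, a hedged sketch of what that peer-config entry looks like; the hostnames are illustrative:

```clojure
;; Hypothetical peer-config fragment; hostnames are made up.
;; :zookeeper/address takes a comma-separated list of host:port pairs.
;; 2181 is ZooKeeper's default client port; listing it explicitly
;; avoids depending on any defaulting behaviour.
{:zookeeper/address "zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181"}
```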
INFO [onyx.log.zookeeper] - Starting ZooKeeper client connection. If Onyx hangs here it may indicate a difficulty connecting to ZooKeeper.
INFO [onyx.log.zookeeper] - Stopping ZooKeeper client connection
mmm. i can telnet from an Onyx node to the ZK node address that Onyx is being given on 2181, so it's open. is this a fair test to ensure Onyx can reach ZK?
Are you seeing any exceptions? The only logging / failure I can see is those two info lines which look OK
if i see 'stopping zookeeper' after 'starting zookeeper. if hang, conn issues', and then i get a job-id back, then that's success, right?
we're also reading the datomic log to determine a start position for the read-log catalog entry which takes some time
ok. make it right, make it fast, make it pretty. i'm happy that i've got the first one done!
Yeah, you can start to move some of that data out of the job data and load it via before-task-start. @michaeldrogalis is keen to get some of these chunks in S3 soon, which should help there a bit
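A rough sketch of that idea, assuming Onyx's :lifecycle/before-task-start hook; load-lookup-table! and the other names here are hypothetical, not part of Onyx:

```clojure
;; Hedged sketch: load heavyweight data at task start instead of
;; embedding it in the submitted job map. load-lookup-table! is a
;; hypothetical loader (e.g. an S3 or database fetch).
(defn inject-lookup-table [event lifecycle]
  {:onyx.core/params [(load-lookup-table!)]})

(def lookup-table-lifecycle-calls
  {:lifecycle/before-task-start inject-lookup-table})

;; Referenced from the job's :lifecycles entries, e.g.
;; {:lifecycle/task :my-task
;;  :lifecycle/calls ::lookup-table-lifecycle-calls}
```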
looks like all that time is in building the job - which probably means slow Datomic start up. i'll dig
another 3 minutes to boot the peers up, which is the same op - getting that start-t for the catalog
2016-08-26 09:57:52.934 INFO - Starting Jobs for :onyx/tenancy-id highstorm-prod-be112dc3b065dba1065d795a4776c6ef41c73e5e
2016-08-26 10:00:36.373 INFO - :read-log Start t 1000 ( 26831 behind ) tx 13194139534312 #inst "2013-01-15T00:00:00.000-00:00"
the memcached cluster should have covered that second one, but perhaps the operation we're doing doesn't benefit from the cache
@jeroenvandijk I got interactive dev with the cheat sheet going again. You should be able to pull and follow the README to make it work now (after doing a lein clean just to be safe)
we're going to have to rewrite that code to scan
d/log backwards from the present moment
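For context, a hedged sketch of the slow forward approach being replaced (assumes the Datomic peer API; function name is illustrative). d/tx-range with nil bounds walks the whole log from the start, which is what makes job start-up slow here; the fix discussed above is to scan from the present backwards instead:

```clojure
;; Hedged sketch: naive forward scan over the Datomic log.
(require '[datomic.api :as d])

(defn first-log-t
  "t of the earliest transaction in the log (forward scan)."
  [conn]
  (let [log (d/log conn)]
    (:t (first (d/tx-range log nil nil)))))
```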
@lucasbradstreet forgive me, we may have had this conversation before. is there a top-level metric where we can graph active peers over time?
There are metrics for when peers come online and go offline, but you can't just sum each and get a final figure, because sometimes peers will go offline by crashing and won't write their metric. This will be easier once the new query server can be embedded in the peer group. Then you'll be able to query the nodes to see how many peers each one thinks it has. It still won't be easy to push this data to metrics since it'll be more pull-based, though. Short answer: no, not without grabbing it from a peer query server, which is in libonyx / 0.9.10
your answer shows the great forethought and planning you guys put into this, as always 🙂
One downside of being masterless is that you don't have a single place to report this stuff from. We've got solutions in the works which will be even better though :)
Suppose you have several event-filtering jobs and several more windowed-aggregation jobs: do you put them in the same repo? What are people's thoughts on what constitutes an organized code structure for Onyx jobs? Any thoughts welcome
For what it's worth, internally we're essentially building a very large Onyx system, and we're satisfied with how task bundles are scaling to high numbers of tasks, organization-wise.
So, one big repo with many different types of jobs (in various states of development) ?
I'm noticing that a lot of my flow-condition predicates are similar, e.g.
(defn event-a? [event old-segment new-segment all-new-segments]
  (= :a (:event-type new-segment)))
(defn event-b? [event old-segment new-segment all-new-segments]
  (= :b (:event-type new-segment)))
I could write a macro, but I kinda wish there was a way to use partial to reduce the boilerplate instead... maybe there's a more elegant way?
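One hedged option that avoids a macro: since Onyx resolves flow predicates to vars by keyword, a higher-order constructor plus def gives partial-like reuse. event-type? here is a hypothetical helper, not part of Onyx:

```clojure
;; Hedged sketch: a predicate constructor instead of a macro.
;; Each def'd var still resolves from a flow-condition keyword.
(defn event-type? [expected]
  (fn [event old-segment new-segment all-new-segments]
    (= expected (:event-type new-segment))))

(def event-a? (event-type? :a))
(def event-b? (event-type? :b))
```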