This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2016-08-23
Hi all, we are experimenting with an information model for our own application in order to generate specs, documentation, and maybe later, if successful, APIs etc. We’re trying to connect the dots between onyx-cheat-sheet, onyx core, and static analysis, but some things don’t directly add up. For instance, I was wondering whether :restrictions
is used for validation, and if so, how. Is there some resource that explains the way you have designed the information model?
it seems that :restrictions entries are only simple developer-friendly descriptions, not used during schema processing
That’s right. We kinda skip building predicates out of the restrictions key, and build them with schema in schema.clj. :restrictions is used for documentation / error messages when these predicates fail
The way we built it is we built the schemas first, and then we went and built the information model, and built as much of the schema as we could from it easily. The pieces that seemed like they’d take too much time we left for later, and just added as documentation in the information model. We’ll get around to finishing the job at some point. Maybe once spec is released.
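To illustrate the split described above, here is a hedged sketch of what such a documentation-plus-schema arrangement can look like. The keys and names below are illustrative only, not the actual onyx information model:

```clojure
;; Hypothetical sketch: the information model carries docs and
;; human-readable :restrictions strings, while the validation predicate
;; lives separately (in Onyx's case, in schema.clj). Illustrative only.
(def model
  {:catalog-entry
   {:onyx/batch-size
    {:doc "The number of segments a peer reads per batch."
     :type :integer
     :restrictions ["Value must be greater than 0."]}}})

;; The predicate that actually enforces the restriction is defined
;; independently; :restrictions is only surfaced in docs/error messages.
(defn valid-batch-size? [n]
  (and (integer? n) (pos? n)))
```

This mirrors the division of labour described above: predicates do the checking, the model text explains the failure to a human.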
Thanks 🙂
I’m having trouble submitting my job to a peer running in a docker container. It seems the catalog fails to load with:
clojure.lang.ExceptionInfo: Could not resolve symbol on the classpath, did you require the file that contains the symbol : :onyx.plugin.sql/read-rows
INFO [onyx.log.failure-detector:36] - Starting peer failure detector
INFO [onyx.log.failure-detector:30] - Stopping peer failure detector monitor thread
Did you require onyx.plugin.sql in the namespace that your peer starts up in?
i.e. the one that calls start-peers
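The failure above is a classpath/require issue: the catalog refers to the plugin by a fully-qualified keyword, which is only resolvable if the namespace has been loaded in the peer process. A minimal sketch of that resolution step (`kw->fn` is an illustrative helper, not the Onyx internal):

```clojure
;; Illustrative helper showing how a namespaced keyword such as
;; :onyx.plugin.sql/read-rows can be resolved to a function var.
;; If the namespace was never required on the peer's classpath,
;; this step fails, producing an error like the one above.
(defn kw->fn [kw]
  (let [ns-sym (symbol (namespace kw))]
    (require ns-sym)                          ;; load the namespace first
    @(ns-resolve ns-sym (symbol (name kw)))))

;; Demonstrated with a core namespace rather than the sql plugin:
((kw->fn :clojure.string/upper-case) "onyx")  ;; => "ONYX"
```

Requiring `onyx.plugin.sql` in the namespace that calls `start-peers` guarantees the namespace is loaded before any catalog entry tries to resolve into it.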
thx 🙂
i did a simple repl dev implementation on top of a new onyx-template
project https://www.refheap.com/122498 https://www.refheap.com/122499 @lucasbradstreet - happy to turn it into an onyx-template
PR if you want ?
That does look good, and would be useful. I’d like to hear what @gardnervickers thinks since he’s behind the job registration bits, and I’d like to make sure it fits in with his vision before you make the effort
sure - it doesn't quite fit with the lib-onyx/start-peer fn as it is now (I had to replicate its functionality because it doesn't return anything), so it might make sense to adapt that too
@lucasbradstreet: Well I didn’t think it was possible but we made it worse, lol
Hah. OK
Any idea what you changed?
Well we attempted to roll our own create-state-update / apply-state-update. Not sure if we did something really bad here
Ah k. So tell me what you’re doing in each
so in the create we check to see if we have seen a key before (initialize it if not) or pull the old one out of the state, and then apply the changes
in the apply, all we are doing is checking the change log entry type and writing the :set-value
And is the set-value the coll of segments?
Maybe you could paste the aggregation in here, or to me in a PM, assuming it’s not doing much that’s secret
k, basically the main issue would be if you’re returning something bigger from create-state-update than the segment that created it
so before we had something like this because we were doing a collect by key
{:key  #{seg1 seg2 seg3}
 :key1 #{seg1 seg2 seg3}}
then in the trigger we collapse these into one structure. Now we are trying to merge each segment by key into one, updating certain structures in the map
mmm. I think pretty much all that should be in create-state-update should be the returned segment
unless you’re cutting it down
In which case it should be something like (select-keys X segment)
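Putting the advice above together, here is a hedged sketch of a collect-by-key style aggregation in the shape being discussed (0.9.x-era API; the function names are illustrative, the real built-ins live in onyx.windowing.aggregation):

```clojure
;; Sketch of a keyed aggregation, assuming the 0.9.x change-log style.
;; create-state-update should return only a small change-log entry
;; (not the whole accumulated state); apply-state-update applies that
;; entry to the window state. init provides the empty state only.
(defn init [window] {})

(defn create-state-update [window state segment]
  (let [k (:key segment)]
    ;; return the delta for this key
    [:set-value k (conj (get state k #{}) segment)]))

(defn apply-state-update [window state [op k v]]
  (case op
    :set-value (assoc state k v)))

;; applying two segments for the same key accumulates them:
(let [s0 (init nil)
      s1 (apply-state-update nil s0 (create-state-update nil s0 {:key :a :n 1}))
      s2 (apply-state-update nil s1 (create-state-update nil s1 {:key :a :n 2}))]
  s2)
;; s2 now holds both segments in the set under :a
```

Note the caveat raised above still applies: returning the full per-key set from create-state-update is only acceptable when the sets stay small, since the change-log entry is what gets journaled.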
so we are taking a look at how collect-by-key in the aggregation code works and are trying to do a slightly modified form of it. One thing we noticed is the definition of a super-fn. Is that something we should be doing? Not sure we understand what it does, or more precisely its use case
super-aggregation functions take two aggregates and merge them together, which is needed for Session windows.
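As a sketch: for a set-valued keyed state like the one above, merging two aggregates (as happens when two session-window extents fuse) can be as simple as unioning the per-key sets. This is an illustrative example, not the Onyx built-in:

```clojure
(require '[clojure.set :as set])

;; Illustrative super-aggregation: merge two window states by
;; unioning the per-key sets of segments.
(defn super-aggregation [window state-1 state-2]
  (merge-with set/union state-1 state-2))

(super-aggregation nil {:a #{1 2}} {:a #{3} :b #{4}})
;; => {:a #{1 2 3}, :b #{4}}
```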
how often should :aggregation/init run? We are running a sample with our data (5 segments which should aggregate) but it looks like the init function is getting called multiple times, blowing away our state
This is going to sound like a really daft question (and it’s probably down to my lack of docker experience) but has anyone got a peer running from docker but relying on either a standalone zookeeper or a zookeeper in another docker container?
That will set up a docker-compose file running external zookeeper
Yea so that actually launches zookeeper in another docker container https://github.com/onyx-platform/onyx-template/blob/0.9.x/src/leiningen/new/onyx_app/docker-compose.yaml
yeah that’s fine but I don’t want that to happen, one will be already running, I just want it to join in the fun
doing a docker run --link zookeeper:zookeeper peerimage
then complains about the network host
Error response from daemon: Conflicting options: host type networking can't be used with links. This would result in undefined behavior.
No, that’s a limitation of docker when using net:host
You can’t use links with net:host networking.
You CAN however run your peer container in net:host mode and set your zookeeper address to <my_docker_ip>:<my_zookeeper_port>
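Concretely, that could look like the following command sketch. The IP, port, and env var name are illustrative, and how the peer reads its ZooKeeper address depends on your app's config:

```shell
# Peer container in host networking mode, pointed at an already-running
# ZooKeeper by address instead of via --link (which net:host forbids).
docker run --net=host \
  -e ZOOKEEPER_ADDR="192.168.99.100:2181" \
  peerimage
```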
Oh and thanks for the tweet a little earlier @lucasbradstreet 🙂
cough dcos install kubernetes
cough 😉