This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2017-04-17
Channels
- # bangalore-clj (2)
- # beginners (202)
- # boot (18)
- # cljs-dev (8)
- # cljsjs (7)
- # cljsrn (4)
- # clojars (2)
- # clojure (401)
- # clojure-boston (2)
- # clojure-dusseldorf (1)
- # clojure-gamedev (36)
- # clojure-greece (2)
- # clojure-italy (1)
- # clojure-russia (16)
- # clojure-spec (27)
- # clojure-uk (7)
- # clojurescript (68)
- # core-async (16)
- # cursive (25)
- # datascript (1)
- # datomic (34)
- # funcool (1)
- # hoplon (1)
- # interop (1)
- # klipse (1)
- # leiningen (2)
- # lumo (75)
- # off-topic (17)
- # om-next (2)
- # onyx (66)
- # re-frame (18)
- # reagent (2)
- # ring-swagger (11)
- # spacemacs (1)
- # specter (1)
- # timbre (3)
- # untangled (48)
- # yada (7)
getting ready to run onyx in staging / prod and starting to think about how we want to operate our onyx app:
- should it submit its job automatically on startup?
- should it idempotently ensure Elasticsearch has the proper index and mapping, and install them if not?
i can imagine a few ways of operating an onyx app:
- api calls / CLI
- embedded nREPL
- web dashboard
- automation (no ops)
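The "submit automatically on startup" option above can be sketched in Clojure. This is a minimal, hypothetical sketch (the peer-config values, job shape, and UUID are illustrative, not from the source); `onyx.api/submit-job` is the real entry point, and supplying a fixed job id via `:metadata` is how Onyx supports idempotent resubmission — verify against the docs for your Onyx version:

```clojure
(ns myapp.boot
  (:require [onyx.api]))

;; Hypothetical peer-config; values are illustrative only.
(def peer-config
  {:zookeeper/address       "zk:2181"
   :onyx/tenancy-id         "my-tenancy"
   :onyx.peer/job-scheduler :onyx.job-scheduler/greedy
   :onyx.messaging/impl     :aeron})

(def the-job
  {:workflow       [[:read-input :write-es]]
   :catalog        []   ; task entries elided in this sketch
   :lifecycles     []
   :task-scheduler :onyx.task-scheduler/balanced
   ;; A fixed :job-id should make resubmission on pod restart a no-op
   ;; rather than a duplicate job (check your Onyx version's docs).
   :metadata       {:job-id #uuid "00000000-0000-0000-0000-000000000001"}})

(defn -main [& _]
  (onyx.api/submit-job peer-config the-job))
```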
I am curious, did you write an elasticsearch plugin? It's on my near-future to-do list.
@mpenet my coworker rewrote https://github.com/onyx-platform/onyx-elasticsearch using your spandex client for 5.x. (he hasn't published it yet)
@devth I wrote this with open-sourcing it in mind, maybe after a few more iterations on the code quality?
@devth I’ve done all 4 of those before. 🙂 The usual answer - it depends.
i think i like the idea of "no ops". we're running in Kubernetes. no-ops allows you to spin up ad-hoc envs and not have to worry about manually provisioning stuff.
@devth same here, are you using Helm?
Nice, we've been loving being able to spin up ad hoc environments and have everything work with automatic TLS certificate procurement and ingress routing.
Haven't heard it called "no-ops" before, I like that!
we use kube-lego for external, vault for internal. haven't seen many other options for internal.
We use Traefik for ingress instead of multiple external load balancers. They have an ACME option for LetsEncrypt.
Yea, it's served us well so far. A similar setup is possible with Nginx, but it's not an out-of-the-box kind of experience like Traefik.
We're running on AWS in multiple regions, one cluster using kops
and another that's self-hosted with bootkube
having run these types of apps in production myself for quite some time, i found out the hard way that job management should ideally be explicit
also, think about compatibility with versions of your data — you can decide not to do anything with it yet, but it’s worth it to have at least some idea how you want to manage backwards incompatible upgrades of your streaming jobs
decided to see what would happen if i recursively killed the /onyx znode in zk while onyx was running. it quickly restarted all of its peers, recreated /onyx/my-tenancy/{bunch of sub nodes}, then the pod died with [s6-finish] sending all processes the KILL signal and exiting
and a new pod was created. :thinking_face:
@devth Erm, yeahhh don’t do that. 🙂
You can wipe out all the peers, but dropping /onyx/ while it’s actively running is pretty bad.
Yeah. You can also invoke the garbage collector.
Technically we should be able to bounce back from the /onyx zk node being dropped. We don’t have a Jepsen test for that one though, since it’s akin to wiping out database transaction logs while a database server is still running.
Should be most of what you need.
It’s all fine for a ZK node to go down, or for ZooKeeper to temporarily become unavailable, but deleting those contents is a much more catastrophic fault, yes.
I’d recommend rotating your tenancy, :onyx/tenancy-id, and after it transitions, you can rm the old tenancy under /onyx
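The tenancy-rotation cleanup described above could look like this at the ZooKeeper CLI. The server address and tenancy names are hypothetical; `deleteall` is the ZooKeeper 3.5+ recursive delete (older releases used `rmr`):

```shell
# After switching :onyx/tenancy-id (e.g. "tenancy-v1" -> "tenancy-v2") and
# confirming the new tenancy's peers are healthy, remove the old subtree.
zkCli.sh -server zk:2181 deleteall /onyx/tenancy-v1
```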
i wonder if your search results would improve if you added "Onyx Platform" to the <title>. right now: <title>User Guide</title>
Probably would help. 🙂 Send a PR our way?
I can get to it later tonight if not.
do we really need s6-overlay when we have K8S to ensure the process is always up and running?
@devth Not if you're running the media driver in a separate container.
oh right. i'm using embedded right now but will switch to separate. haven't considered a separate container in the same pod but maybe that makes sense.
not sure if s6 is hiding anything. it just shut down / restarted but i have no idea why
Yea just make sure they share a /dev/shm memory volume.
e.g.
[cont-finish.d] executing container finish scripts...
[cont-finish.d] done.
[s6-finish] syncing disks.
[s6-finish] sending all processes the TERM signal.
S6 is just to get around the PID1 reaping problem.
@devth using a Memory volume would offer better perf.
I believe emptyDir writes to disk still
don't think you can share /dev/shm across multiple containers yet https://github.com/kubernetes/kubernetes/issues/4823
You can just mount a mem volume across containers at /dev/shm and it's functionally the same
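The shared memory volume described above can be sketched as a pod spec. This is an illustrative fragment, not from the source: image names are hypothetical, and `emptyDir: {medium: Memory}` is the documented Kubernetes way to get a tmpfs-backed volume both containers can mount at /dev/shm:

```yaml
# Illustrative pod: the Onyx peer and the Aeron media driver share a
# Memory-medium emptyDir mounted at /dev/shm for IPC buffers.
apiVersion: v1
kind: Pod
metadata:
  name: onyx-peer
spec:
  volumes:
    - name: shm
      emptyDir:
        medium: Memory
  containers:
    - name: peer
      image: myorg/onyx-app            # hypothetical image
      volumeMounts:
        - name: shm
          mountPath: /dev/shm
    - name: media-driver
      image: myorg/aeron-media-driver  # hypothetical image
      volumeMounts:
        - name: shm
          mountPath: /dev/shm
```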