Hi, maybe someone can help with this. I have a live stream of stock prices (1,000/sec), and I want to reduce this stream to store just one price per stock per minute in a db (e.g. the first price in that minute interval). What's the right Onyx function/window/trigger combo to do this? I do have something running in Onyx, but can't help but feel I've got it wrong - I've had to write a custom aggregate and use timer triggers...
@kjothen: you should be able to use group-by-key with a simple aggregation that just returns the new value from both the apply and the create fns. Then use a timed trigger like you are doing.
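The "aggregation that just returns the new value from both the apply and the create fns" suggested above might be sketched like this, using Onyx's custom-aggregation map shape (`:aggregation/init`, `:aggregation/create-state-update`, `:aggregation/apply-state-update`); the namespace and names are illustrative, and the exact keys should be checked against your Onyx version:

```clojure
(ns my.app.aggregations)

;; State is simply the most recent segment seen for the group.
(defn create-state-update
  "The state update is the new segment itself."
  [window state segment]
  segment)

(defn apply-state-update
  "Applying an update replaces whatever state was there before."
  [window state update]
  update)

(def ^:export latest-value
  {:aggregation/init (fn [window] nil)
   :aggregation/create-state-update create-state-update
   :aggregation/apply-state-update apply-state-update})
```

With `:onyx/group-by-key` on the stock symbol, each group independently holds just its latest segment, and a timed trigger can sync that state out.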
Hi, does anyone have experience loading JSON data into S3 and copying it to Redshift?
@eelke Do you even need Onyx? http://docs.aws.amazon.com/redshift/latest/dg/copy-usage_notes-copy-from-json.html
I suppose if you were looking to alter the data between S3 and Redshift, Onyx would be viable again.
Ah sorry, my intention is to load data from Kafka into S3 using Onyx, then indeed use the COPY command to load from S3 into Redshift. I actually had a very minor issue with the serializer-fn writing to S3 in the right JSON format so that the COPY command could deal with it. That is now resolved. Anyway, thanks for the swift response.
@kjothen group-by-key on the stock symbol, 1 minute timer triggers, and global windows. Custom aggregate seems okay here since you need some logic to determine that you’re seeing the “latest” stock value, since data order isn’t guaranteed.
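The combination described here (group-by-key on the symbol, global window, one-minute timer trigger) could be wired up roughly like this. All names are illustrative, and the map keys follow the 0.9-era Onyx information model, so verify them against your version's docs:

```clojure
;; Catalog entry: group segments by stock symbol so window state is per-symbol.
{:onyx/name :process-prices
 :onyx/fn :my.app/identity-fn        ; illustrative pass-through task fn
 :onyx/type :function
 :onyx/group-by-key :symbol
 :onyx/flux-policy :kill
 :onyx/min-peers 1
 :onyx/batch-size 20}

;; Global window with a custom "keep the latest segment" aggregation.
{:window/id :latest-price
 :window/task :process-prices
 :window/type :global
 :window/aggregation :my.app.aggregations/latest-value}

;; Timer trigger firing every minute, syncing window state to the DB.
{:trigger/window-id :latest-price
 :trigger/on :timer
 :trigger/period [1 :minutes]
 :trigger/refinement :accumulating
 :trigger/sync :my.app/write-to-db!} ; illustrative sync fn
```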
@michaeldrogalis the problem with the global window is that I'm not guaranteed to capture the first stock price in that minute interval. That is, the timer fires every minute, not on the minute. Using a fixed one-minute, accumulating window with a custom aggregation does work, though. However, memory usage grows unbounded, I think - one window per stock per minute, with one segment in each.
@kjothen You can use a discarding refinement mode on the trigger to dump unused state. Is there an attribute of the data that signifies that you’re looking at the “right” stock price for that interval?
@michaeldrogalis the discarding refinement mode dumps all state for that window, no? In my case, the right stock price is that with the earliest timestamp in the minute interval. But it's a general sampling problem I think.
@kjothen You can use two triggers on a window. Use a predicate trigger with a discarding refinement to ditch old state, and a timer trigger with an accumulating refinement to sync it to the DB, perhaps?
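The two-trigger idea could look like the sketch below, assuming the "predicate trigger" maps onto Onyx's punctuation trigger (which takes a `:trigger/pred`); all names and the predicate fn are hypothetical, and the trigger keys should be checked against your Onyx version:

```clojure
;; Trigger 1: predicate (punctuation) trigger with a discarding refinement,
;; to ditch window state once the interval's sample has been captured.
{:trigger/window-id :latest-price
 :trigger/on :punctuation
 :trigger/pred :my.app/interval-complete?  ; hypothetical predicate fn
 :trigger/refinement :discarding
 :trigger/sync :my.app/noop-sync!}

;; Trigger 2: timer trigger with an accumulating refinement,
;; to sync the current state to the DB once a minute.
{:trigger/window-id :latest-price
 :trigger/on :timer
 :trigger/period [1 :minutes]
 :trigger/refinement :accumulating
 :trigger/sync :my.app/write-to-db!}
```

Both triggers attach to the same `:trigger/window-id`, so the discarding trigger bounds memory while the timer trigger handles persistence.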
So in effect, every stock symbol has its own completely isolated window and trigger set
@michaeldrogalis I hadn't considered two triggers, neat. Whilst writing a custom aggregation was fun, I think it would be useful to bundle min-segment and max-segment aggregations in the platform, to pin an entire segment, not just a value. Thanks for your help!
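A `min-segment` aggregation of the kind proposed here might look roughly like the following. This is a hypothetical sketch, not part of Onyx; the `:timestamp` comparison key is an assumption standing in for whatever the segment is "pinned" by:

```clojure
;; Keep the entire segment whose :timestamp is smallest, not just the value.
(def ^:export min-segment
  {:aggregation/init (fn [window] nil)
   :aggregation/create-state-update
   (fn [window state segment] segment)
   :aggregation/apply-state-update
   (fn [window state segment]
     ;; First segment wins until a strictly earlier one arrives.
     (if (or (nil? state)
             (< (:timestamp segment) (:timestamp state)))
       segment
       state))})
```

A `max-segment` variant would flip the comparison.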
@kjothen Can you either open an issue with that thought or send the code in via a PR? Seems like a reasonable addition
i have some failure-prone network operations which i want to partition by a key and implement a "latest op for a key wins" strategy - i.e. whenever a new op arrives for a key X, forget about any already incomplete ops for X and focus on the new op, otherwise keep re-trying an incomplete op until it succeeds or the give-up threshold is reached. the ops themselves are not serializable data (they are cancellable promises), but are described by serializable data (clojure maps) - this seems like something i might be able to jam into onyx aggregations, but i'm not sure - does it seem feasible ?
@mccraigmccraig I can think of a few features to combine that get you 80% of the way, but it feels like a paradigm mismatch. I think this would be better served by a Workflow Engine, something like Amazon SWF.
thanks @michaeldrogalis i'll take a look at SWF - but what were the onyx features you were thinking of ?
You coulddd use a window and write an aggregate to only ever keep the last operation map, and store the promise on the Event map, then use a segment trigger of size 1
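That suggestion - a window whose aggregate only ever keeps the last operation map, fired by a segment trigger of size 1 - might be sketched like this; names are illustrative and the trigger keys follow the 0.9-era Onyx model:

```clojure
;; Aggregation that retains only the most recent op description
;; (a serializable map; the cancellable promise itself lives elsewhere,
;; e.g. on the event map).
(def ^:export last-op
  {:aggregation/init (fn [window] nil)
   :aggregation/create-state-update (fn [window state segment] segment)
   :aggregation/apply-state-update (fn [window state update] update)})

;; Segment trigger of size 1: fires on every new segment, so each new op
;; for a key immediately supersedes the previous incomplete one.
{:trigger/window-id :op-window
 :trigger/on :segment
 :trigger/threshold [1 :elements]
 :trigger/refinement :accumulating
 :trigger/sync :my.app/start-or-replace-op!} ; hypothetical sync fn
```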
Does it matter if two operations are running concurrently for a period? Is this the kind of thing where you need an absolute guarantee that no two operations are ever running at the same time?
Read up a little, you want the aggregation that @kjothen wrote earlier today.
no, absolute guarantees are not necessary - the ops are push-notifications partitioned by device, and there are occasional upstream errors which cause missing notifications, so i want to have some retries, but at the same time i don't want to keep retrying a notification which is out of date (since the notifications carry badge-counts with them)
Use a window to keep one op around, use a trigger to transition between ops. Should work okay for that
excellent - i will probably take a little mismatch over adding another major system component 🙂