
when storing aggregates to an outside store (say, redis), are there any tools onyx provides to ensure idempotence for this?


for example, i’m keeping track of an ever-increasing counter, and my trigger is set to discarding — what would be the best way to ensure consistency (assuming the data store is able to do transactions)?


i think that’s what i’m talking about


i probably need to implement some additional mechanism on top of onyx to achieve this, eh? or does onyx provide some tricks internally to achieve this (e.g. inspecting any of the maps that are provided to the trigger/emit / trigger/sync functions)


i’m probably approaching this from the wrong angle, though… another approach might be CQRS-like snapshotting and onyx resume points 🙂


ah, i think i’ve figured this out mentally by now — i think i should treat the last event id as the ‘version’ of the aggregate, and use that as a way to achieve consistency — if i ever need to rebuild the aggregates, i can seek towards that event id


my thinking problem was that i was treating the aggregates as a source of truth, while in fact they are a proxy of the truth
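A minimal sketch of that idea, with an in-memory dict standing in for Redis and all names illustrative: record the last applied event id as the aggregate’s version, and skip any write whose event id isn’t newer, so replayed trigger firings become idempotent. In Redis the check-and-write would run inside a transaction (e.g. WATCH/MULTI/EXEC or a Lua script).

```python
# Sketch: version the aggregate by the last event id it has absorbed.
# `store` stands in for Redis; in Redis this check-and-set would need
# to run atomically (transaction or Lua script).

def apply_update(store, key, event_id, delta):
    """Apply `delta` to the counter at `key` only if `event_id` is new."""
    entry = store.get(key, {"version": -1, "count": 0})
    if event_id <= entry["version"]:
        return False  # already applied -- a replayed trigger firing
    store[key] = {"version": event_id, "count": entry["count"] + delta}
    return True

store = {}
apply_update(store, "clicks", 1, 5)
apply_update(store, "clicks", 2, 3)
apply_update(store, "clicks", 2, 3)  # replay of event 2: ignored
```

Rebuilding the aggregate is then just replaying events from the stored version onward; anything at or below it is a no-op.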


@lmergen Nice conclusion. 🙂


Onyx can’t do much for you at the edges when you start talking to external storage providers as far as idempotency goes. Some design work is needed there - you’re on the right path. 😄


yeah, i believe i read something about this in one of greg young’s documents


He writes quality material.


hmmm, i’m almost thinking this might be related to ABS…


or rather, i might want to use some data of ABS to use the same barrier/epoch for my aggregate versions


@anujsays Case-2 would attempt to put a function in an Onyx job. Jobs are strictly data.


We do it for concision, at any rate. Most real use cases of lifecycles have chains of calls that are a bit pointless to respecify every time.


Ok, I understand now. Thanks @michaeldrogalis


@lmergen If you write an output plugin you can actually be told the current replica version and barrier epoch


and when it needs to recover, it’ll restore to a particular replica version and epoch, so you can determine whether any data is out of date at that point
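One way to use that (an illustrative sketch, not the actual plugin API): tag each write with the (replica-version, epoch) pair it came from, and after recovery treat anything written under coordinates newer than the restored checkpoint as out of date.

```python
# Sketch: tag each write with the (replica-version, epoch) coordinates
# it was produced under, so after recovery any entry written past the
# restored checkpoint can be detected as stale and discarded/rewritten.

def write(store, key, value, coords):
    store[key] = {"value": value, "coords": coords}

def is_stale(entry, recovered_coords):
    """An entry is stale if it was written after the recovered checkpoint."""
    return entry["coords"] > recovered_coords  # tuples compare lexicographically

store = {}
write(store, "total", 10, (3, 7))   # replica-version 3, epoch 7
write(store, "total", 12, (3, 9))   # written past the last checkpoint

recovered = (3, 8)                  # coordinates restored on recovery
stale = is_stale(store["total"], recovered)
```

Lexicographic comparison of the pair works because the replica version dominates the epoch, mirroring how a recovery rewinds to a specific replica version and epoch.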


We just landed a patch in master that extends the public API to optionally receive a persistent Onyx client, rather than reconstructing one from scratch on each call:


This enables rapid, successive calls to submit-job, kill-job, etc.