@tony.kay taking a step back, this is consistent with where I eventually want to get: complete asynchrony between the mutations (commands) and the reads, not even returning the tempid mapping.
We eventually got there too with our stuff. I think we just removed the widget/load-data-stream action in the example above and rely on push notifications via Sente now. Tempids are still useful there though - having a tempid that auto-resolves to a real id makes the push commands dead simple - we just listen on the channel for updates by id and swap those into place.
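The "listen for updates by id and swap those into place" step could be sketched like this in plain Clojure (the map shape, `app-state`, and `resolve-tempid` are illustrative assumptions, not Untangled or Sente API):

```clojure
;; Illustrative only: an app-state atom holding a widget keyed by its tempid.
(def app-state
  (atom {:widgets {:tmp-1 {:id :tmp-1 :status :pending}}}))

(defn resolve-tempid
  "Rekey an entity from its client-side tempid to the real id the server pushed."
  [state tempid real-id]
  (let [entity (get-in state [:widgets tempid])]
    (-> state
        (update :widgets dissoc tempid)
        (assoc-in [:widgets real-id] (assoc entity :id real-id)))))

;; On a pushed message like {:tempid :tmp-1 :id 42}:
(swap! app-state resolve-tempid :tmp-1 42)
;; @app-state => {:widgets {42 {:id 42 :status :pending}}}
```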
@therabidbanana yeah, I still want to have tempids 🙂 Just not returning them on the initial mutate command. Like you describe, passing the tempid maps in a separate push once the real ids have been generated on the server/back end.
Ah, we haven't had to do anything like that yet - all our commands return quickly enough that we're fine with tempids coming back in the initial mutate command
In some scenarios we do assign a randomly generated uuid to a blob of data that takes a background job and 5-10 second wait to fetch, so that we can return that id immediately
I'm thinking of a situation where you're writing the commands to an event queue (think Kafka), then consumers coming along and doing the actual work associated with the command.
I'd probably still go with a tempid resolving in that scenario - resolving implies that the event was successfully written to kafka.
My app is nowhere near requiring that kind of system, but I have worked in environments like that, and I love the decoupling.
Then the real id is what's stored and communicated by the command consumers. I'd handle it that way mainly because tempids are ephemeral and will disappear on refresh - maybe that's not a concern in your scenario though
So our background job scenario is one of those kinds of cases - we use the uuid as the primary key for the data stream so we can communicate it back immediately
And we set a "status" on that entity with pending so we know it's not actually there yet
That lets us refresh in pending state, you pull from the db that the stream is still pending, and have an id to listen to for further push events.
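The pending-status approach described above - assign a UUID up front, mark the entity `:pending`, fill it in when the background job pushes - might look roughly like this (entity shape and function names are hypothetical):

```clojure
(defn pending-stream
  "Create a data-stream entity that is addressable by id immediately,
   even though the background job hasn't produced its data yet."
  [id]
  {:id id :status :pending :data nil})

(defn apply-push
  "When the background job finishes, a push event fills in the data."
  [stream data]
  (assoc stream :status :ready :data data))

(-> (pending-stream "stream-123")
    (apply-push [10 20 30]))
;; => {:id "stream-123" :status :ready :data [10 20 30]}
```

Because the id exists from the start, a page refresh can re-read the `:pending` entity from the db and keep listening for pushes on the same id.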
It's a cross-network advertising reporting dashboard (https://www.adstage.io/reporting/)
So far, yes, very happy. There have been a few rough edges but overall it feels simpler to work with than what our other products have used (frontend Ember / backend Ruby on Rails).
We had the advantage of being able to use our existing platform and pull reports from there. I'm not sure I'd have been as happy building out the integrations with 5 separate networks in Clojure land - there are a few good Ruby gems for working with those APIs
I've historically been a backend/data guy. Nearly all of my front-end stuff has been nodejs/express. And while that's been really easy to get started with, I've found it's generally a mess going forward.
Untangled's networking queue on top of that, which forces sequential network requests where it makes sense (most of the time, generally), helps a lot with the weird edge-case stuff we've seen in Ember too.
Our server for this product takes in the om reads/writes, and when we need a data blob from the Ruby platform, that's where we fetch the datastream in a background job
So you can build an entire report without talking to our Ruby app - it'll just show all the widgets as pending until those background jobs clear.
It let us iterate very quickly at the beginning, because we could just make a dumb worker that returned the same data every time
Then we got it talking and just pointed it to our production app, since it's read-only
Nice! We got one set up a week or so ago - we're still waiting to catch our first session in the wild though. 😄
Heh - yes indeed - but I don't use our production app much. Also I was not aware of the term Musa until this moment.
@grzm You're biting off quite a bit of complexity there. I like the event stream model, too, but reasoning about tempids becomes really difficult if you don't at least do some kind of id step ASAP.
Optimistically add an item. You have a tempid. Now the user can, at any time, delete it. Do you want the tempid going over the network as the id for that request?
With Untangled/Om (and tempid reassign on return and sequential processing) the "right thing" happens.
If you defer remapping, then you have to add extra logic on the server-side. Of course, that could be as simple as using the tempid as a permanent natural key...but that bloats your data a bit
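For reference, the "right thing" above relies on the server returning a tempid→real-id map that the client then migrates through its app database. Here's a rough, simplified illustration of that rewrite in plain Clojure - this is not Om's actual migrate implementation, just the idea of replacing every occurrence of a tempid:

```clojure
(require '[clojure.walk :refer [postwalk]])

(defn migrate-tempids
  "Replace every occurrence of a tempid (whether map key or value)
   with its server-assigned real id."
  [state tempids]
  (postwalk #(get tempids % %) state))

(migrate-tempids
  {:items    {:tmp-7 {:id :tmp-7 :label "new item"}}
   :selected :tmp-7}
  {:tmp-7 42})
;; => {:items {42 {:id 42 :label "new item"}} :selected 42}
```

Once this rewrite has run, a later delete (or any other command) naturally goes over the network with the real id.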
Also remember that with Untangled you get optimistic UI updates, so the user gets immediate feedback, even if your backend takes a while to produce the result. So nothing says you can't do your event stream model and just don't respond until the remap is ready...assuming you can get the response in less than the network timeouts (e.g. 30 seconds)
I think there is room for expansion in Untangled's network stack as well. The sequential thing isn't always appropriate (e.g. blocking future reads because some sequence is pending in the queue). The :parallel thing helps, but then you don't have sequencing on those reads (which is ok most of the time).
Om supports multiple remotes (e.g. :remote true could instead be :remote :A). When we add support for multiple remotes, this could give you a way to use "alternate queues" based on which remote... and the remotes could technically all point to one specific server (you just get multiple queues)
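A sketch of what per-remote queues might look like, assuming each pending transaction is tagged with the remote its parser returned (the names and entry shape here are hypothetical, not the Om/Untangled API):

```clojure
(defn route-by-remote
  "Group pending transactions into one queue per remote, so each remote
   keeps its own sequential ordering independently of the others."
  [entries]
  (group-by :remote entries))

(route-by-remote
  [{:remote :A :tx '[(widget/save)]}
   {:remote :B :tx '[(stream/load)]}
   {:remote :A :tx '[(widget/delete)]}])
;; :A gets a two-entry queue, :B a one-entry queue
```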
The CQRS stuff I'm not looking at doing any time soon, but something I want to keep in mind as I consider different architectural decisions.
welcome. The stack is new enough that some of these things are still being explored.
Some have not been added because no one has yet shown the actual need, but that doesn't mean they are not needed for a fully general solution.
supporting multiple remotes for queries (via a parameter to load-data) would also give these same benefits.
On the sequencing thing @tony.kay - what we've seen is that what we'd want :parallel true to do when we use it is go into the queue and, when its turn comes up, run without blocking the items behind it.
Often we'd want to do a slow read on something after a mutation that creates that thing (like data streams), and we'd make that read parallel
@therabidbanana Yeah, the "correct" behavior is supported by what we have, but it is obvious that (usually for optimization) there are other queue cases we should support.
It's all in front of an easy abstraction, so it should be pretty easy to add whatever is needed. Just keeping the API clean and simple is the main concern.