#untangled
2016-06-28
cjmurphy14:06:21

What are the old and new style app states?

currentoor15:06:45

@cjmurphy: i was wondering that too, but too embarrassed to ask

wilkerlucio15:06:32

I'm wondering the same thing about old vs new style

wilkerlucio15:06:49

I didn't even know there was a new one 😅

ethangracer16:06:53

@currentoor @cjmurphy @wilkerlucio maybe referring to using the InitialAppState protocol vs. not?

currentoor16:06:04

@ethangracer: yeah i think so, i was on vacation for a little while and i think i missed all the discussions

ethangracer16:06:19

oh, you’re referring to tony’s comment. I was with him when he fixed that bug, I can confirm that’s what he meant

currentoor16:06:33

cool, thanks!

currentoor19:06:19

so i realize untangled makes extensive use of db->tree, but i was wondering: if i had the ability to write some of the read functions in untangled-client, then could i use datascript for my persistent models?

currentoor19:06:14

right now, on the server i compute timing-related data based on transaction times, via datalog queries, but on the client i have to treat this derived data as actual model attributes

currentoor19:06:23

for example dashboard/created-at

tony.kay19:06:02

You can always use anything you want. When you want to show something it needs to be in the app database. Nothing says you can't have a mutation that uses something (e.g. a datascript db) as a source of information that you put in the UI database.

tony.kay19:06:35

I think doing it via the parser adds a complexity that you probably don't want. Nor do you want the datascript (very very slow) running queries on demand at every frame render

tony.kay19:06:49

you end up putting in memoization to make the render fast again

tony.kay20:06:16

why not just make the memoization a step from datascript to UI db at a designated point of interest?

tony.kay20:06:16

Untangled, in general, is not a fan of using an alternate thing as the app db, nor having your engineers augment a parser. If you want to do that, raw Om might be better.

tony.kay20:06:23

But the approach I outline (treating the app db as a UI graph db) lets you do everything you want (it is completely general), but is more direct (easier to trace deterministic steps from point A to B)

tony.kay20:06:17

With a parser, you switch to another paradigm altogether (transform some arbitrary data into a tree). With a clear app db everything is just a data transform on a simple graph, and the UI query walks the graph to make the tree.

tony.kay20:06:33

the source of that data can be anything

tony.kay20:06:08

So, I recommend treating your mutations as Graph DB -> mutation -> Graph DB, your queries as Query ---(Graph DB)---> Tree. Thus, the only black magic is completely contained in your mutations, which are always treated as a consistent kind of function.
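As a concrete illustration of the shape Tony describes, here is a minimal sketch of a mutation that uses a datascript db purely as a source of information and evolves the UI graph db with the result. The mutation name `dashboard/refresh-timings`, the `:dashboard/by-id` table, and the query attributes are hypothetical; it assumes the `untangled.client.mutations` multimethod and `datascript.core` APIs:

```clojure
(ns app.mutations
  (:require [untangled.client.mutations :as m]
            [datascript.core :as d]))

;; Hypothetical sketch: datascript is just a source of information.
;; The mutation itself is still Graph DB -> mutation -> Graph DB.
(defmethod m/mutate 'dashboard/refresh-timings
  [{:keys [state]} _ {:keys [id conn]}]
  {:action
   (fn []
     ;; Query datascript (or any other source) for the derived value...
     (let [created-at (d/q '[:find ?t .
                             :in $ ?id
                             :where [?e :dashboard/id ?id]
                                    [?e :dashboard/created-at ?t]]
                           @conn id)]
       ;; ...then evolve the normalized UI graph db with it.
       (swap! state assoc-in
              [:dashboard/by-id id :dashboard/created-at] created-at)))})
```

The datascript query runs once, inside the mutation, rather than on every frame render.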

tony.kay20:06:56

Remote queries pull tree data into the graph db, and your post-mutation is just like any other mutation: evolve the graph

tony.kay20:06:43

If you happen to do some alternate network IO, populate a datascript db, and want to use that in a mutation to evolve the graph: fine

currentoor20:06:56

yeah that sounds like a really good approach

tony.kay20:06:05

keeps things ultra simple

currentoor20:06:38

yeah, using datascript sounds sexier initially but i can see why this is simpler

currentoor20:06:57

shared queries on client and server etc

tony.kay20:06:03

Yeah, I initially loved the idea. Then saw the performance and other disadvantages

tony.kay20:06:25

you already have shared queries on client/server with db->tree

currentoor20:06:51

what do you mean?

tony.kay20:06:05

just saying the UI query syntax is a Datomic pull subset

tony.kay20:06:42

The one place where it might be really nice: mimic your schema on client/server, subscribe to tx on server, push entity updates to datascript db.

tony.kay20:06:56

could easily write a Meteor-like client/server subscription data model

currentoor20:06:23

in my experience the pull syntax occasionally comes up short and we have to supplement it, for example to get dashboard/last-edit-on i use the tx time on the server

currentoor20:06:31

do you find yourself doing that?

tony.kay20:06:42

Sure, of course.

tony.kay20:06:07

The pull syntax is a convenience, and the prime advantage is tuning chattiness of results via the UI.

tony.kay20:06:22

I think an alternative to Datascript, though, might be adding something that can subscribe to a rooted pull fragment...such that server push could re-push some pull query when anything is touched on the db that was included in the original pull result.

tony.kay20:06:32

(rooted, say, at an ident, which in turn corresponds to a particular datomic entity)

tony.kay20:06:14

eliminates the need for Datascript, and gives you instant UI update on server push

currentoor20:06:18

to get the Meteor-like subscription model we were thinking about annotating datomic transactions with client mutation literals. so, for example, when dashboard/edit happens on the server, the transaction also gets a field for "[(dashboard/load-by-id {:id 3})]"; then, via datomic's transaction report, we read this client mutation literal and broadcast it out to clients

currentoor20:06:24

the clients can then transact these mutations, which will trigger the appropriate (re-)reads

currentoor20:06:54

not as nice as pushing the datoms but seems simpler
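The annotation idea could look roughly like this. `:client/mutation` is an assumed string-valued attribute that would need to be installed in the schema; `d/tempid` on `:db.part/tx` is the standard way (in Datomic of this era) to reify extra data onto the transaction entity itself:

```clojure
(ns app.tx-annotations
  (:require [datomic.api :as d]
            [clojure.edn :as edn]))

;; Hypothetical sketch: attach a client mutation literal (as a string)
;; to the reified transaction entity.
(defn edit-dashboard-tx [dash-id new-name]
  [[:db/add [:dashboard/id dash-id] :dashboard/name new-name]
   {:db/id           (d/tempid :db.part/tx)
    :client/mutation (pr-str [(list 'dashboard/load-by-id {:id dash-id})])}])

;; On the tx-report-queue side: read the literal back out so it can be
;; broadcast to clients, which transact it as plain data.
(defn mutation-literal [tx-report]
  (let [db     (:db-after tx-report)
        tx-eid (-> tx-report :tx-data first :tx)]
    (some-> (d/entity db tx-eid) :client/mutation edn/read-string)))
```

As currentoor notes below, the literal is just data on the wire; the actual logic lives in the client's mutation functions.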

tony.kay20:06:18

My initial reaction is "client code injection...erm" and "code in transaction data...erm"

currentoor20:06:20

i see, i thought of them as data, since they are just the names of mutations and some params, the actual logic lives in the client's mutation functions

tony.kay20:06:27

that is true

tony.kay20:06:35

Why not have clients "subscribe" to their server by IDs of interest. Then when the tx log includes those ids you could trigger the update.

tony.kay20:06:04

any number of servers supported, and you don't have to pass extra data through the transactor/db, which are more limited resources.

tony.kay20:06:41

The ids affected are already included in tx log

tony.kay20:06:05

The subscription could include the "here's what you should tell me when ID n changes"

tony.kay20:06:14

then it is just an in-memory component/store

tony.kay20:06:33

which is fine because websockets will already require you being attached to a specific server.

tony.kay20:06:42

subscriptions can be dropped on disconnect events
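The in-memory store Tony describes can be sketched in plain Clojure. All names here are hypothetical; `client-id` would identify a websocket connection:

```clojure
(require '[clojure.set :as set])

;; Minimal in-memory subscription registry: client-id -> #{entity ids}.
(defonce subscriptions (atom {}))

(defn subscribe! [client-id ids]
  (swap! subscriptions assoc client-id (set ids)))

(defn unsubscribe!
  "Drop a client's subscriptions, e.g. on a websocket disconnect event."
  [client-id]
  (swap! subscriptions dissoc client-id))

(defn clients-to-refresh
  "Given the set of entity ids touched by a tx, return the client ids
  whose subscription sets have a non-empty intersection with it."
  [touched-ids]
  (for [[client-id ids] @subscriptions
        :when (seq (set/intersection ids (set touched-ids)))]
    client-id))
```

A tx-log watcher would call `clients-to-refresh` with the ids from each tx report and push updates to the matching connections.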

currentoor20:06:40

IDs in this case being datomic entity ids?

tony.kay20:06:09

which is also what I was referring to in my "subscribe to a pull". When a pull runs, you can derive all IDs of entities involved

tony.kay20:06:23

then you can watch for any of those to change and re-run the pull

tony.kay20:06:58

(and update the sub, since the refs could change)

tony.kay20:06:50

gives you a fully general and reusable subscription system tied to UI queries rooted with an ident

tony.kay20:06:28

Could even tie sub/unsub to component mount/unmount

currentoor20:06:13

ok, just to make sure i understand the first part: on the server i would have a set of IDs associated with each open websocket, and then for any tx reports i push the updated IDs out to all the clients that are associated with those IDs

tony.kay20:06:47

Depends on which version of the idea you're asking about 🙂

tony.kay20:06:15

We did a tech spike on this, so here's roughly what we did, as steps:

tony.kay20:06:09

On subscribe (which is client-centric):

1. The subscription is for a query of the form `[{[:db/id 4] [:subquery :props]}]`, of arbitrary depth. Any joins should include :db/id in the requested attributes.
2. On the server, run the pull and recursively walk the result, recording the ids into a set.
3. Place that set into a subscription record for that client.
4. Return the result to the client.

A thread watching the datomic tx log, on each tx:

1. Get the set of IDs that have changed.
2. Find all clients whose subscription sets have a non-zero intersection.
3. Re-run those pulls (as if starting at step 2 in the first list of steps).
4. Push the result and query to the client (which can use om/merge! with that data and query to make app state change properly).
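The "recursively walk the result, recording the ids" step is plain Clojure over a standard Datomic pull result, so a sketch of it is easy to show (function name is hypothetical):

```clojure
;; Walk a Datomic pull result and collect every :db/id into a set.
;; Nested joins appear as maps or vectors of maps, so recurse into both.
(defn collect-ids [pull-result]
  (cond
    (map? pull-result)
    (into (if-some [id (:db/id pull-result)] #{id} #{})
          (mapcat collect-ids (vals pull-result)))

    (sequential? pull-result)
    (into #{} (mapcat collect-ids pull-result))

    :else #{}))
```

The resulting set is what goes into the client's subscription record; the tx-log thread intersects it with the ids touched by each transaction.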

tony.kay20:06:23

We didn't need this yet, so we never finished the implementation. More pressing things needed. The proof of concept seemed solid, though.

tony.kay20:06:29

A "subscribe to a specific entity" mechanism might be useful as well.

tony.kay20:06:55

but really that is just the most simple case of the above

tony.kay20:06:10

(do a join on an ident where the query just contains props for that one entity)

tony.kay20:06:47

If it really is a leaf entity, then you're saying "keep this entity up-to-date". If there is a graph, then the pull query says "keep this graph of objects up-to-date". Additional fun realizations: Your pull query need not actually be on the UI...you could subscribe however you want. One caveat: refresh of UI on server push might require explicit re-rendering stuff. Not sure under what conditions you'd need to help.

tony.kay20:06:55

The ident would definitely refresh components with that ident. You might have to do a no-op transact! with follow-on reads to get related/derived stuff.

tony.kay20:06:21

Optimizing the "find clients to refresh" could also get CPU heavy, though you could heavily index things to make that fast.

currentoor20:06:32

wow that was very thorough, thanks for sharing!

currentoor20:06:53

i'll talk to my team and see if we can do some of this

therabidbanana20:06:55

So in the above scenario is [:db/id 4] an ident? (Normally ours would look something like [:widget/by-id 4], but then the server isn't really going to know what that means)

tony.kay22:06:45

@therabidbanana: It is trivial to use whatever you want for the keyword on the client side, and know that it really means :db/id on the server

tony.kay22:06:07

So, subscription could just ignore the specifics of the keyword, and know that the number has db/id meaning

tony.kay22:06:37

So yes, it is an Om ident.
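The translation being described is tiny; a hypothetical server-side helper:

```clojure
;; Ignore the specific keyword of an incoming Om ident and treat the
;; second element as having :db/id meaning on the server.
(defn ident->lookup
  "Turn a client ident like [:widget/by-id 4] into [:db/id 4]."
  [[_ id]]
  [:db/id id])
```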

therabidbanana22:06:06

Yeah, that's what I was imagining we'd do, just pretend it means :db/id on the server

therabidbanana22:06:29

Cool, I think the rest makes sense, we're going to try and get a simple proof of concept going

therabidbanana22:06:21

Though unfortunately it turns out sente relies on CLJX, which we don't have in our boot builds yet, so first some yak shaving. 🙂

tony.kay22:06:33

what, no emoticon for naked yaks?

jasonjckn23:06:45

what's the type inside :handlers?

jasonjckn23:06:47

(defn make-system []
  (core/make-untangled-server
   :parser (om/parser {:read logging-read :mutate logging-mutate})
   :parser-injections #{:elasticsearch}
   :extra-routes {:routes ["/login" :login]
                  :handlers {:handlers (fn [req] )}}))

jasonjckn23:06:48

this is working-ish

jasonjckn23:06:50

:extra-routes {:routes ["/login" :login]
                  :handlers {:login (fn [req _ _] (resp/response "Login"))} }