#untangled
2016-06-01
graemenelson14:06:57

Is there a way to make the untangled client mutations work with different remotes? For example, let’s say most of my mutations go to endpoint api, but there’s one mutation where I want to use a 3rd party jsonp endpoint.

ethangracer15:06:34

@graemenelson: are you talking only about data fetch? the only built in mutations that hit a remote have to do with data fetch and network error fallbacks

graemenelson16:06:32

@ethangracer: Yes, it would be a data fetch to the http://github.com/users endpoint — does that make sense?

ethangracer16:06:34

@graemenelson: I believe that, as implemented, we only support one remote. we haven’t had a need for more yet. however, you can build a custom UntangledNetwork which could handle multiple remotes. is that correct @tony.kay?

graemenelson16:06:10

I was just looking at that yesterday, and it looks like I could build a custom UntangledNetwork. I just wanted to make sure I wasn’t missing something easy, since I am pretty new to clojure/clojurescript as well
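A custom network that routes between remotes might be sketched like this. This is only an illustration: the exact shape of the `UntangledNetwork` protocol should be checked against `untangled.client.network` (the `send` arity with ok/error callbacks is assumed here), and the routing rule and `app/jsonp-fetch` mutation name are hypothetical.

```clojure
(ns example.multi-remote
  (:require [untangled.client.network :as net]))

;; Wrap two real networks and pick one per request (sketch, assumed protocol).
(defrecord MultiRemoteNetwork [default-net jsonp-net]
  net/UntangledNetwork
  (send [this edn ok-fn error-fn]
    (let [mutation-names (keep #(when (seq? %) (first %)) edn)]
      ;; Route anything naming the (hypothetical) jsonp mutation elsewhere.
      (if (some #{'app/jsonp-fetch} mutation-names)
        (net/send jsonp-net edn ok-fn error-fn)
        (net/send default-net edn ok-fn error-fn)))))
```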

tony.kay17:06:52

@graemenelson: @ethangracer So, there was always a plan to support the standard Om idea of multiple remotes. Just haven't needed it yet, so have not dedicated resources to it.

tony.kay17:06:12

I'd have to re-wrap my head around what is needed, but it isn't much

tony.kay17:06:27

don't want you to have to write custom network bit...it is relatively minor to support any number of remotes

tony.kay17:06:19

However, JSON endpoints are not part of the Om story...those would be your own networking code, not integrated with Untangled, over which you hit an endpoint and then call merge! on the reconciler after converting the response to a reasonable bit of data you want to merge.

tony.kay17:06:59

So, you'd do a normal XHRIO kind of thing, and in your callback transform to Om db format, then merge.
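In ClojureScript that "XHRIO, transform, merge" flow might look roughly like the following. The endpoint, the response shape, and the `:users/by-id` db layout are illustrative assumptions; `om.next/merge!` is the reconciler merge being described.

```clojure
(ns example.json-fetch
  (:require [om.next :as om]
            [goog.net.XhrIo :as xhr]))

(defn fetch-users!
  "Hit a non-Om JSON endpoint ourselves, convert the response into Om db
   format, and merge it into the reconciler. URL and db shape are examples."
  [reconciler]
  (xhr/send "https://api.github.com/users"
            (fn [event]
              (let [users (js->clj (.getResponseJson (.-target event))
                                   :keywordize-keys true)
                    ;; transform to a reasonable bit of db-format data
                    by-id (into {} (map (juxt :id identity)) users)]
                (om/merge! reconciler {:users/by-id by-id})))))
```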

tony.kay17:06:30

Om/Untangled remotes and networking are meant to run UI-based queries in Om query syntax, and get back Om-structured responses

graemenelson17:06:19

Thanks @tony.kay that makes sense. And I wasn’t going to jump to making a custom networking, I could simulate what I need just using the app state for now

tony.kay21:06:30

@currentoor: @therabidbanana @ethangracer 0.4.11-SNAPSHOT just updated on clojars. Eliminated most of the prewalks from the code. This version should be much faster on render after query.

tony.kay21:06:42

The notable exceptions:
- If you use recursion in your queries (that can be optimized...but I didn't get to it)
- Tempid replacement still has to walk all of the state, so if you get back a tempid remap from the server, it could cause a delay...not sure how to fix that one in the general case.

tony.kay21:06:34

TodoMVC works with the new version, all tests pass (including the new ones), and I'd love it if anyone can try it out on an app and give feedback.

currentoor21:06:53

@tony.kay: awesome! We were dreading this issue today in our planning meeting 😄

tony.kay21:06:02

My fingers are crossed. I cannot imagine a method that could make it any faster in the general case. The tempid remapping problem is probably going to require us to make some restrictions on where you can put tempids in the state, or allow ppl to plug in their own tempid remap function.

tony.kay21:06:18

I'll probably just opt for the latter

tony.kay21:06:39

The new sweep-merge should be blazingly fast

tony.kay21:06:06

(compared to the old)...and a silly short-circuit was missing for tempid remap, which was making it run every time, even when it wasn't needed.

tony.kay21:06:44

So, everything you reported should now be "negligible"...which means now you'll find something else 😉

currentoor21:06:29

Yeah I'd be happy to do that.

tony.kay21:06:44

find something else? 😄

tony.kay21:06:46

Also removed the need for specter in client

tony.kay21:06:55

So that cleaned up deps nicely

currentoor21:06:43

@therabidbanana: is going to try it in a few minutes.

currentoor21:06:20

we'll post our results in the issue

therabidbanana22:06:20

One generalizable thing you might be able to do for tempid replacement - though I'm not sure how you'd achieve this - only go as deep as the queries go? Essentially our widgets have a key :widget/current-data-stream - inside that key is a giant nested hash representing the report, but we just pass that raw object to our components without describing what might be in those with om queries. Tempids could never live past that point, and that's the vast majority of our state

tony.kay22:06:39

Yes, but which query?

tony.kay22:06:55

If you did a mutation, I don't know where all you possibly put the thing you made

tony.kay22:06:01

the mutation isn't a query

tony.kay22:06:12

And you may not have everything on the UI, so the root query isn't good enough

tony.kay22:06:34

you could have peppered idents all over creation

therabidbanana22:06:44

I believe om maintains an index of the queries - but that's beyond my current understanding of om internals. 🙂

tony.kay22:06:00

queries for on-screen components

tony.kay22:06:17

your app state is a database, and can have stuff that nothing is currently querying for

tony.kay22:06:42

But I had the same thought: leverage the UI query....I think the right thing is just to make it pluggable

therabidbanana22:06:15

Agreed - instead of some generic thing, that'd be much simpler - we only have like 2 places in our app state temp ids could live.

therabidbanana22:06:07

Getting some baseline numbers now, then I'll try the latest snapshot to compare.

tony.kay22:06:14

If we rewrite:
1. The network queue
2. The database tables
then the only thing left is idents. Finding those is the main trick. The expense is walking down stuff that we don't care about...like report data. Perhaps if we augment those with a metadata marker (as an optimization) we could avoid large swaths

therabidbanana22:06:22

Yeah - that's true. I'd even be okay with some sort of special keyword namespace like ui/ (blob/? )

tony.kay22:06:33

we could do something as stupid-simple as let you put :untangled/no-tempids in the state of a map, which would keep algorithms from scanning them more deeply
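The `:untangled/no-tempids` idea could be sketched as a plain recursive walk that prunes marked subtrees. This is purely illustrative (not Untangled code), and plain keywords stand in for real tempids:

```clojure
(defn remap-tempids
  "Replace tempids (keys of tempid->real) anywhere in `state`, but skip any
   map that marks itself with :untangled/no-tempids -- e.g. big report blobs."
  [state tempid->real]
  (cond
    ;; opt-out marker: don't scan this subtree at all
    (and (map? state) (:untangled/no-tempids state))
    state

    ;; maps: remap both keys (e.g. table ids) and values
    (map? state)
    (into {}
          (map (fn [[k v]] [(get tempid->real k k)
                            (remap-tempids v tempid->real)]))
          state)

    ;; vectors: remap each element, preserving order
    (vector? state)
    (mapv #(remap-tempids % tempid->real) state)

    ;; leaves: swap a tempid for its real id if present
    :else
    (get tempid->real state state)))
```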

therabidbanana22:06:51

That'd also work

tony.kay22:06:51

jinx, I think

tony.kay22:06:57

roughly the same, no?

therabidbanana22:06:13

Yeah, essentially same thing

therabidbanana22:06:20

Just stylistic choice at that point

jasonjckn22:06:40

i think it would be useful if the read parser whenever it encountered ":widget/xyz" (anything with keyword namespace "widget") it assumes this is a join, and automatically calls the parser recursively on the query inside of it. The current functionality tries to locate keyval {:widget/xyz ...} in the local state first, and if no such key exists the inner query won't be recursed on

jasonjckn22:06:58

it seems to me that a lot of times you just have a lot of boilerplate from nested widgets

jasonjckn22:06:10

but they have links in the inner query, and that's all you care about

tony.kay22:06:06

@jasonjckn: I do not follow what you are talking about. Not talking about reads...talking about tempid replacement in app state.

jasonjckn22:06:37

my conversation didn't connect with the above 🙂

jasonjckn22:06:00

I didn't even read what came before what I wrote, was just talking about a useful feature in the read parser

tony.kay22:06:12

we don't have a read parser

tony.kay22:06:17

We use Om's db->tree

tony.kay22:06:28

and the established Om query syntax/interpretation
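For reference, `db->tree` denormalizes a query against the app database, following idents through the top-level tables. A REPL sketch (assuming Om Next, `org.omcljs/om`, is on the classpath; the data is made up):

```clojure
(require '[om.next :as om])

(def db
  {:items           [[:todo-item/by-id 1] [:todo-item/by-id 2]]
   :todo-item/by-id {1 {:id 1 :title "foo"}
                     2 {:id 2 :title "bar"}}})

;; db->tree takes [query data refs]; the db serves as both data and refs here.
(om/db->tree [{:items [:id :title]}] db db)
;; => {:items [{:id 1, :title "foo"} {:id 2, :title "bar"}]}
```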

jasonjckn22:06:39

nods yah I was suggesting modifying that slightly

jasonjckn22:06:48

it was just to reduce some boilerplate code, it's okay, nevermind 🙂 it comes down to trying to get more syntactic sugar at the end of the day

jasonjckn22:06:00

I can understand why it's just vanilla db->tree

tony.kay22:06:41

Yeah...if the general purpose tool works well, cluttering it with sugar can really make messes. Need very compelling reasons.

tony.kay22:06:47

If you could describe the boilerplate you're trying to clean up, perhaps we can brainstorm a general solution

tony.kay22:06:41

@therabidbanana: you seeing breakage? I'm trying to run a bigger app and having problems with SNAPSHOT

jasonjckn22:06:20

Well let me try explaining with this example: let's say in todomvc, we're trying to simplify the application state from

:todos {:list/title  ""
        :list/items  []
        :list/filter :none}

to merely

:todo-item/by-id {1 {:title "foo" :id 1}, 2 {:title "bar" :id 2}}

we completely remove

:list/items []

in this example, and we should be able to build the app this way, for example to query all the todo items we could use the link

[:todo-item/by-id '_]

so now the query expression for TodoList is

[{[:todo-item/by-id '_] ~(om/get-query TodoItem)}]

in the root component query expression we still need {:items nil} to appear in the application state, just to write the root query expression

[{:items ~(om/get-query TodoList)}]

so the :items is redundant, i was trying to propose anything with "widget" namespace is a kind of blank container for an inner query expression that is rerooted in a sense

tony.kay22:06:45

mmmm. no. The link query cannot be used to query a table as if it were a to-many join. If it works, it is by accident and not design...such a map has an undefined order, so what would you even expect? If you change the table, your list order can totally change on the fly.

jasonjckn22:06:56

if I understand you correctly

{[:todo-item/by-id 43] '[*]}
is valid and should work but
{[:todo-item/by-id '_] '[*]}
is undefined?

jasonjckn22:06:03

they both work in practice

jasonjckn22:06:11

when I tested them

tony.kay22:06:47

The first indicates you want a specific item. The second treats the db table (a map) as a collection...which as I said might accidentally work, but it will not work the way you want

tony.kay22:06:56

e.g. maintain a predictable order

jasonjckn22:06:22

well my example is flawed I know, but maybe you can still use it to understand my point around the boilerplate

tony.kay22:06:32

I'm surprised it works at all, actually...since a map goes to k,v pairs when you treat it as a sequence

tony.kay22:06:13

There is no boilerplate...the vector is needed state...it represents the desired order

tony.kay22:06:47

:todos is required state...it has the title of the todo list and current sort order

tony.kay22:06:12

so, I see no boilerplate

therabidbanana22:06:25

@tony.kay: Near as I can tell everything is working great - and it's notably faster

tony.kay22:06:44

@therabidbanana: Great! Not sure what my issue is

tony.kay22:06:52

is it fast enough?

therabidbanana22:06:46

It's never fast enough. 😈 But yeah - definitely more usable now. Before there was a noticeable lag whenever you did a mutation and already had some reports merged into the app state

tony.kay22:06:30

Hm...if you want to profile it some more, perhaps there is something else to tune.

therabidbanana22:06:06

(Just giving you a hard time. 🙂 This is great, thanks for turning around with a new solution so quickly when the last snapshot didn't seem to do it)

tony.kay22:06:30

Sure...it was affecting us too, so it had dual motivation

therabidbanana22:06:05

I think the next thing I'm seeing as a target to speed things up is the force rerender when ajax comes back? I know you said that's required for post mutations - but we rarely have those - wonder if we could short-circuit around that most of the time?

therabidbanana22:06:10

Might be reading that wrong though. I'll have to dig in more and see what I can find.

tony.kay22:06:19

Hm. Hadn't thought that would be very slow, since there is a lot of Om short-circuiting in that path

tony.kay22:06:51

It would be trivial to short-circuit that if there were no post-mutations.

tony.kay23:06:46

It seems like we should be able to do something that mitigates needing that at all

therabidbanana23:06:35

Oh, this is one in mark-loading I'm seeing.

therabidbanana23:06:52

A force-rerender call

tony.kay23:06:43

Oooo. Maybe we could turn those into follow-on Om reads

tony.kay23:06:50

Yeah, that might work
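The follow-on read idea: keywords listed after the mutations in an `om/transact!` expression tell Om which query keys changed, so only components whose queries involve those keys re-render, instead of forcing a root render. A one-line sketch (the mutation name and keys are hypothetical):

```clojure
;; app/loaded, :ui/loading-data, and :widget/data are made-up names; the
;; trailing keywords are Om Next "follow-on reads" for targeted re-render.
(om/transact! reconciler '[(app/loaded {:id 42}) :ui/loading-data :widget/data])
```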

therabidbanana23:06:06

But the ajax callback is definitely always suffering

therabidbanana23:06:27

Found it in the timeline - 285ms for force_root_render there

tony.kay23:06:30

and rerender is the CPU suck?

therabidbanana23:06:30

Used to be the merge function was equaling/overpowering that

therabidbanana23:06:42

But now it's the majority of the loaded callback time from the looks of it

therabidbanana23:06:11

Also in mark-loading:

tony.kay23:06:15

so, 134ms is pretty small

tony.kay23:06:36

almost 10fps...

tony.kay23:06:57

But I guess if you're clicking through an interface rapidly, it is noticeable?

therabidbanana23:06:20

Much less so now, for sure - but it's still just within the realm of noticeable for some things - we have these pretty animated loaders (svg I think?) and they pause when the loaded callback activates. Not as critical any more - you just opened the door with "if you want to profile it more." 🙂

tony.kay23:06:43

Ah. OK, well, if that root re-render is the only thing left, then perhaps my follow-on read idea will work. Could you break the re-render timing down a bit more? E.g. get-query cost, etc.

tony.kay23:06:12

I'm guessing get-query is cheap...but what does the db->tree cost, for example.

tony.kay23:06:27

Also, React is in dev mode, which is much slower (I hear)...If it turns out that the vdom stuff is the slow bit, then it is possible that a production compile might be a lot faster...but again, knowing where you're seeing lag on a granular level is ideal.

therabidbanana23:06:09

Yeah, that's always the trouble - production compile makes it so hard to figure out where the performance bugs are. I haven't tried one with these latest changes, but I could do that.

therabidbanana23:06:22

get-query, as you suspect, only 17ms

therabidbanana23:06:39

The db->tree stuff appears to be about 75ms

tony.kay23:06:07

hm...so, yeah...React isn't going to help

tony.kay23:06:13

you're already up to abt 80% of the time

therabidbanana23:06:20

Then like 190ms in react land

tony.kay23:06:38

oh, your image had 135ms total

therabidbanana23:06:39

(This is the loaded-callback one - 285ms total)

therabidbanana23:06:01

Somehow the mark-loading case was much less

therabidbanana23:06:15

Might be some garbage collection or something?

tony.kay23:06:43

so, the follow-on read idea might help. It would reduce the queries down to targeted components (if they have idents) and would eliminate a lot of the React work.

tony.kay23:06:41

We'd just need to derive which components need to re-render

tony.kay23:06:52

ok...I have a thing to go to. I'll note this stuff in the issue, and see what we can come up with

tony.kay23:06:07

seems plausible to fix

therabidbanana23:06:27

Let me try in production mode, might be chasing ghosts here. I could log an issue if it's still there and attach screenshots like the above if that'd be helpful

tony.kay23:06:43

More info is always welcome