#pathom
2019-09-23
cjsauer 17:09:54

> This is my first look at pathom internals, but for some reason performing resolution in order to compute outputs feels…weird

Just had a thought: would datascript integration be better implemented as a custom reader? I can’t help but feel that this approach is similar to the built-in map-reader. Datascript appears to omit values it doesn’t have when using pull. So then I imagine a custom reader could be developed that simply performs the pull on the root/parent query, and then just hands that map to the normal map-reader…and resolution would continue normally from there.

wilkerlucio 18:09:05

@cjsauer I guess that can work; a concern I can think of is that Pathom does a lot of merging/changing of the map as it progresses. There is a risk that at some point it would try to merge and get a regular map out, and then you can't keep pulling from datascript from there.

cjsauer 18:09:46

I’m imagining the datascript pull happening once, only on the root query. Datascript’s pull is roughly a resolver parser on its own, meaning it can digest eql queries out-of-the-box. So I suppose maybe a reader isn’t the correct place for this, because they are called on every step of resolution, correct?

wilkerlucio 18:09:22

ah yeah, that's how people have been doing datomic so far, and it should work well with datascript too

wilkerlucio 18:09:41

and helps with auto-complete (as you declare the resolver outputs)

cjsauer 18:09:22

Part of the challenge here, and where my understanding of Pathom is lacking, is that you can’t know the outputs until you’ve already performed the pull. With Datomic, the schema is all-knowing, but with datascript, the schema is completely optional.

cjsauer 18:09:11

Is there a way to do this without declaring the outputs? I think I remember reading that you can include data in the env that you already have in-hand. Maybe I could perform the pull, assoc that result into the env/context, and then invoke the parser. The map-reader would then pull what exists in the env, and will use resolvers for everything else.
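The idea above can be sketched in plain Clojure. This is a hypothetical illustration, not pathom's actual reader API: a reader first consults an entity map that was pulled once and placed in the env, and falls back to a resolver only for keys the pull didn't produce.

```clojure
;; Sketch only: `map-first-reader`, `:entity`, and `:resolvers` are
;; made-up names for illustration, not pathom internals.
(defn map-first-reader
  "entity is the map produced by a single datascript pull on the root
   query; resolvers is a map of {key (fn [env] value)} fallbacks."
  [{:keys [entity resolvers] :as env} k]
  (if (contains? entity k)
    (get entity k)
    (when-let [resolve (get resolvers k)]
      (resolve env))))

(def env
  {:entity    {:user/name "Ada"}                 ; came from a pull
   :resolvers {:user/greeting
               (fn [{:keys [entity]}]
                 (str "Hello, " (:user/name entity) "!"))}})

(map-first-reader env :user/name)      ;; => "Ada"
(map-first-reader env :user/greeting)  ;; => "Hello, Ada!"
```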

cjsauer 18:09:06

I’m unsure though whether pathom lets you specify some attributes in ::pc/input without also specifying those attributes in some respective ::pc/output

wilkerlucio 18:09:28

@cjsauer yeah, it's a bit of a tricky thing; if we don't know anything ahead of time we can't predict the paths

wilkerlucio 18:09:23

well, it can work, you just won't get proper auto-complete

cjsauer 18:09:32

From your datomic plugin, I saw that ::pc/compute-output is a possible hook for plugins, so I suppose a datascript plugin could dynamically compute the output from the pull result. Is there an existing function that takes a map and returns its respective EQL query? The ::pc/compute-output hook for datascript would then be something like (-> (pull db parent-query ident) map->query), where an example of map->query would be:

(map->query {:a 1 :b 2 :c {:d 3}})
;; => [:a :b {:c [:d]}]

Is this the return value that ::pc/compute-output is expecting?

wilkerlucio 18:09:26

@cjsauer there is pc/data->shape, which does what your map->query is doing. An issue I see is that you would then have to compute it multiple times; depending on the query size that may be unwanted (and consider that compute-output may be called multiple times in some edge cases). Also, compute-output has nothing to do with auto-complete; that is still a separate thing, provided via the ::pc/index-io.
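The shape of that transformation can be sketched in plain Clojure. This is a minimal illustration of the map->query idea from the earlier message, not the library's pc/data->shape implementation:

```clojure
(defn map->query
  "Turn a map of data into the EQL query that would produce it.
   Nested maps become joins; everything else becomes a plain prop."
  [m]
  (into []
        (map (fn [[k v]]
               (cond
                 ;; nested map: a to-one join
                 (map? v) {k (map->query v)}
                 ;; sequence of maps: a to-many join, derive the
                 ;; subquery from the first element
                 (and (sequential? v) (map? (first v)))
                 {k (map->query (first v))}
                 ;; anything else: a plain attribute
                 :else k)))
        m))

(map->query {:a 1 :b 2 :c {:d 3}})
;; => [:a :b {:c [:d]}]
```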

cjsauer 18:09:37

I see. So the index would still be incomplete with this solution.

cjsauer 18:09:01

I also don’t like it because it means that output computation depends on the db…which feels wrong.

wilkerlucio 18:09:04

yeah, auto-complete currently doesn't have support for a dynamic schema like datascript, the keys have to be known ahead of time

wilkerlucio 18:09:34

I still have to complete the datomic impl on that; auto-complete is not there yet (it needs the index-io filled in, but that's trivial to implement in the datomic case)

wilkerlucio 18:09:17

maybe it would be a good idea to have a schema for datascript anyway? It can be as simple as listing the keys you want to use there

cjsauer 18:09:04

Yeah that seems to be the direction this is leaning. Having a schema is probably good practice anyway…plus with a datomic backend the respective datascript schema is trivial to derive.
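That derivation could look something like the following sketch (datomic->datascript-schema is a hypothetical name, and it assumes the Datomic schema is in hand as a seq of attribute maps). Datascript only needs the schema entries that affect its indexing: ref value types, cardinality-many, uniqueness, and components.

```clojure
;; Sketch: keep only the schema facts datascript cares about.
(defn datomic->datascript-schema
  [datomic-attrs]
  (into {}
        (keep (fn [{:db/keys [ident valueType cardinality unique isComponent]}]
                (let [entry (cond-> {}
                              (= valueType :db.type/ref)
                              (assoc :db/valueType :db.type/ref)

                              (= cardinality :db.cardinality/many)
                              (assoc :db/cardinality :db.cardinality/many)

                              unique      (assoc :db/unique unique)
                              isComponent (assoc :db/isComponent true))]
                  ;; attributes with no indexing-relevant facts are
                  ;; simply omitted; datascript doesn't need them
                  (when (seq entry)
                    [ident entry]))))
        datomic-attrs))

(datomic->datascript-schema
 [{:db/ident       :user/friends
   :db/valueType   :db.type/ref
   :db/cardinality :db.cardinality/many}
  {:db/ident       :user/email
   :db/valueType   :db.type/string
   :db/cardinality :db.cardinality/one
   :db/unique      :db.unique/identity}])
;; => {:user/friends {:db/valueType :db.type/ref
;;                    :db/cardinality :db.cardinality/many}
;;     :user/email   {:db/unique :db.unique/identity}}
```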

cjsauer 18:09:14

Would just be super cool to automate the parts of app state that are client-only

wilkerlucio 19:09:55

yup. And just out of curiosity: so you are using datomic on the server, and also keeping a partial datascript copy on the client?

cjsauer 19:09:32

Yeah, and with these two pathom plugins I could avoid writing a substantial number of resolvers. Instead, I’d just need to specify my attributes in a central schema, and would get all of this query power “for free”, with the ability to query for out-of-band/derived data using hand-written resolvers. The reason for datascript on the client is drastically simpler client-side merges of remote data (literally just a transact! call), and also the ability to use datalog in resolvers is pretty sweet. Not to mention unified server/client paradigms.

cjsauer 19:09:53

I used to develop apps with Meteor, and ever since then the idea of using a matching client-side database (in Meteor’s case it was Mongo + MiniMongo) has been very attractive to me. It allows the framework to do additional heavy-lifting in certain cases.

wilkerlucio 19:09:54

interesting, do you already have some feelings about how the Fulcro + Pathom approach compares with Meteor? I'm especially interested in cool features that you had in Meteor and miss with Fulcro + Pathom

cjsauer 19:09:33

One of the coolest features that Meteor had was the ability to have real-time data sync working in about 30 seconds. They would tail the Mongo op-log, and sync database changes over a ws in real-time. Because the client was also using (Mini)Mongo, reconciling those changes was really simple. I haven’t used Fulcro enough to know what kind of real-time sync capability it has out-of-the-box, but that was surely a super power in Meteor. With the default app you could make a change in one browser, and see it reflected in another. The analog with this setup would be taking the :datoms from datomic’s tx-result map, and then syncing those down a ws to datascript. Obviously you’d need to be careful with syncing private datoms down the wire, but I think these security concerns could be encoded in the schema.

wilkerlucio 19:09:17

the hard part of real time is more in the realm of consistency. In theory you could just do a query and "watch" it; the problem becomes: how do I know that the query's result updated?

kszabo 09:09:09

I think reactive dataflows solve this exact problem: https://github.com/sixthnormal/clj-3df

wilkerlucio 19:09:45

if you have a way to answer that, pulling the data and populating the UI is trivial, but that question is hard to answer, hehe

cjsauer 20:09:13

Yeah definitely. I think the scope of that problem can at least be limited. Rather than watching general queries (hard problem), you could instead just watch specific entities (which is just an ident). The latter solution, while less general, is much much easier to implement. I think Meteor also made this simplification, but they may have indeed added some kind of query-watching feature. Being able to subscribe to individual entities, while maybe not totally optimal, seems like it would solve 90% of cases when you really just want entity syncing.

wilkerlucio 21:09:56

yeah, if I were to port that idea, I would subscribe to idents, like [:my-company.product/id 123]
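A minimal sketch of that ident-subscription idea in plain Clojure, using an atom of {ident entity} as a stand-in for the client database and a direct callback where a real system would push the changed entity over a websocket (all names here are hypothetical):

```clojure
;; {ident #{callback}} — who is listening to which entity
(def subscriptions (atom {}))

(defn subscribe! [ident callback]
  (swap! subscriptions update ident (fnil conj #{}) callback))

;; stand-in for the client db: a map of ident -> entity map
(def db (atom {}))

;; on every db change, notify listeners of each subscribed ident
;; whose entity actually changed
(add-watch db ::sync
  (fn [_key _ref old new]
    (doseq [[ident callbacks] @subscriptions
            :when (not= (get old ident) (get new ident))
            cb callbacks]
      (cb ident (get new ident)))))

(def seen (atom []))
(subscribe! [:product/id 123]
            (fn [ident entity] (swap! seen conj [ident entity])))

(swap! db assoc [:product/id 123]
       {:product/id 123 :product/name "Widget"})
@seen
;; => [[[:product/id 123] {:product/id 123, :product/name "Widget"}]]
```

Because the subscription key is just an ident, the consistency question collapses to a map lookup and equality check per transaction, which is the simplification being discussed above.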