
@hmaurer I think about that a lot, it has some nice properties 🙂


I imagine it could work as a solution to the "business logic invocation problem" — in other words, a big controller that eliminates the need to write manual controllers, delegating that to attribute relationships instead


I guess we can imagine a parallel with memory management: yes, we could optimise memory usage just for our programs, and that was really fast, but at the cost of so many issues around bad memory management. Can you imagine the idea of attribute relationships alleviating all this "bad input passing" that happens inside your code (especially around controller code)?


@wilkerlucio yeah exactly. I am writing an application which involves passing a map of information down a sequence of “steps” to derive more information. Each step may use information from previous steps and may add information to the map. While I was writing it, it struck me that it felt a lot like Pathom Connect
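That step pipeline can be sketched in a few lines — a minimal Python illustration of the idea, where each step declares the keys it needs and the keys it provides, much like resolvers do; the names and decorator here are made up for illustration, not Pathom's API:

```python
# Each "step" reads keys from the map and merges new keys back in,
# similar in spirit to Pathom Connect resolvers (illustrative only).

def step(requires, provides):
    """Decorator attaching declared input/output keys to a step function."""
    def wrap(fn):
        fn.requires = set(requires)
        fn.provides = set(provides)
        return fn
    return wrap

@step(requires={"user/id"}, provides={"user/email"})
def fetch_email(data):
    return {"user/email": f"user{data['user/id']}@example.com"}

@step(requires={"user/email"}, provides={"user/domain"})
def extract_domain(data):
    return {"user/domain": data["user/email"].split("@")[1]}

def run(steps, data):
    """Run each step whose inputs are satisfied, merging its outputs in."""
    data = dict(data)
    for s in steps:
        if s.requires <= data.keys():
            data.update(s(data))
    return data

result = run([fetch_email, extract_domain], {"user/id": 7})
# result now contains user/id, user/email and user/domain
```

The interesting part is that the declared `requires`/`provides` sets are exactly the information a planner would need to order the steps automatically, which is where the resemblance to Connect comes from.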


I guess you might quickly get into the “planning” area of AIs though, concerned with getting to some objective by chaining actions in a certain way 😛


one other area where I think it has interesting applications is in systems at large: if each service could expose a graph API that can later be merged with the graph APIs of other services, then the composed index could be exposed to everyone, so every service knows how to get any info, and any information needed can be accessed directly by the service that needs it


having proper unique long names for the keywords can make this possible and efficient, I think


hmm and so it would orchestrate “hopping” between different services to get information?


yes, correct, since with the index the client knows all the required steps/paths to traverse the data


if we imagine a system with 100+ services, then it gets unwieldy for each service to manually call the others to update its index, but the system could have one service dedicated to doing that


then when services deploy, they send the new updated index to C (let's say C is the service that knows the indexes)


so C knows about every service and their indexes, and C could even make separate "resolver groups", picking and choosing which indexes to merge (so different parts of the system could request different resolver views in some way)


so other services can request an index for a given resolver group and use that to navigate through the system (both for requesting information and for sending mutations)
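A rough Python sketch of what that dedicated "C" service could look like — services publish their indexes on deploy, C merges them into named resolver groups, and clients fetch a merged index to plan their hops. All names and the index shape here are assumptions for illustration, not Pathom's actual index format:

```python
# Illustrative sketch of a central index registry ("C" in the discussion).
# Index shape assumed: {attribute: [required input attributes]}.

class IndexRegistry:
    def __init__(self):
        self.indexes = {}   # service name -> that service's index
        self.groups = {}    # group name -> list of service names

    def publish(self, service, index):
        """Called by a service on deploy with its freshly updated index."""
        self.indexes[service] = index

    def define_group(self, name, services):
        """Pick and choose which service indexes belong to a resolver group."""
        self.groups[name] = services

    def merged_index(self, group):
        """Merge the group's indexes; clients use this to route attribute lookups."""
        merged = {}
        for service in self.groups[group]:
            for attr, inputs in self.indexes[service].items():
                merged[attr] = {"inputs": inputs, "service": service}
        return merged

registry = IndexRegistry()
registry.publish("users", {"user/email": ["user/id"]})
registry.publish("billing", {"invoice/total": ["invoice/id"]})
registry.define_group("public", ["users", "billing"])
index = registry.merged_index("public")
# index tells any client which service can resolve each attribute,
# and what inputs that service needs
```

In a real deployment the `publish` call would be an HTTP/queue endpoint hit from each service's deploy hook, but the merge logic is the core of the idea.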


does that make sense?


@wilkerlucio Yep it does. It’s an interesting approach for sure :thinking_face:


that's part of what motivates me to create that exploration graph we were talking about, so this network can be made visible


I look forward to seeing that 🙂 I wonder how far you could push it as well… Like, say you have records in a database that can be rendered as PDF documents stored on S3. Could you somehow encode in a graph that these PDFs should be kept in sync (so you don’t have to wait for generation when you request the PDF).


e.g. pre-compute parts of the graph


Although this might get into harder territory, if you want to “react” to changes in the data graph


Because currently Pathom Connect works by running the computations (resolvers) on the fly


subscriptions could be interesting


and they could have a simple initial implementation, something like what Fulcro does for refreshing


meaning that when some mutation occurs, it tells the system to "refresh" some set of keywords


and any subscription on those keywords gets a new load


and that would also refresh all derived keywords


it's different from reacting, but simpler to implement


(by derived I mean derived through resolvers, not Clojure's derive)
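The refresh idea above can be sketched quite compactly — a mutation reports which keywords changed, and every subscription on those keywords, or on keywords derived from them through resolvers, gets notified to reload. This is a hypothetical Python illustration; the `derived_from` mapping stands in for what a resolver index would tell you:

```python
# Sketch of the Fulcro-style refresh mechanism discussed above
# (illustrative only; not Fulcro's or Pathom's actual API).

from collections import defaultdict

class RefreshBus:
    def __init__(self, derived_from):
        # derived_from: input keyword -> keywords resolvers derive from it
        self.derived_from = derived_from
        self.subs = defaultdict(list)  # keyword -> subscriber callbacks

    def subscribe(self, keyword, callback):
        self.subs[keyword].append(callback)

    def affected(self, keywords):
        """Changed keywords plus everything transitively derived from them."""
        seen, stack = set(), list(keywords)
        while stack:
            k = stack.pop()
            if k not in seen:
                seen.add(k)
                stack.extend(self.derived_from.get(k, []))
        return seen

    def mutate(self, changed):
        """A mutation tells the system which keywords to refresh."""
        for k in self.affected(changed):
            for cb in self.subs[k]:
                cb(k)

bus = RefreshBus({"user/name": ["user/display-name"]})
refreshed = []
bus.subscribe("user/display-name", refreshed.append)
bus.mutate(["user/name"])
# the subscription on the derived keyword was refreshed, even though
# only the base keyword was reported as changed
```

Note this isn't fine-grained reactivity: it just invalidates and reloads, which is exactly why it's simpler to implement.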


yup, all the processes