2022-02-03
Channels
- # announcements (8)
- # aws (2)
- # babashka (16)
- # beginners (173)
- # calva (13)
- # cider (4)
- # cljfx (6)
- # cljs-dev (108)
- # clojure (63)
- # clojure-australia (2)
- # clojure-dev (10)
- # clojure-europe (73)
- # clojure-italy (8)
- # clojure-nl (4)
- # clojure-norway (5)
- # clojure-uk (4)
- # clojurescript (49)
- # clojureverse-ops (4)
- # community-development (3)
- # core-async (23)
- # cursive (3)
- # data-science (5)
- # datomic (25)
- # emacs (3)
- # events (1)
- # fulcro (13)
- # helix (5)
- # introduce-yourself (1)
- # lein-figwheel (1)
- # lsp (36)
- # malli (1)
- # meander (2)
- # membrane (4)
- # music (8)
- # nextjournal (51)
- # off-topic (47)
- # other-languages (5)
- # pathom (31)
- # pedestal (5)
- # planck (14)
- # polylith (5)
- # portal (1)
- # re-frame (30)
- # react (2)
- # reagent (24)
- # releases (1)
- # rewrite-clj (18)
- # ring (9)
- # sci (33)
- # shadow-cljs (49)
- # testing (3)
- # tools-build (21)
- # tools-deps (29)
- # vim (19)
- # web-security (1)
- # xtdb (12)
Hi, I’m wondering if this is expected behavior of priority. Here’s an example:
(-> {}
    (pci/register [@(pco/defresolver nested-single-attr [{in :input}]
                      {::pco/output   [{:output [:sub-out-1]}]
                       ::pco/priority 1}
                      (prn "called single")
                      {:output {:sub-out-1 (format "(single-sub-1 %s)" in)}})
                   @(pco/defresolver nested-extra-attrs [{in :input}]
                      {::pco/output [{:output [:sub-out-1
                                               :sub-out-2]}]}
                      (prn "called double")
                      {:output {:sub-out-1 (format "(double-sub-1 %s)" in)
                                :sub-out-2 (format "(double-sub-2 %s)" in)}})])
    (p.eql/process [{[:input "my-input"]
                     [{:output [:sub-out-1
                                :sub-out-2]}]}]))
returns
{[:input "my-input"]
 {:output
  {:sub-out-1 "(double-sub-1 my-input)", :sub-out-2 "(double-sub-2 my-input)"}}}
always, but when nested-single-attr has higher priority than nested-extra-attrs, it always prints “called single” and “called double”. Otherwise, it only prints “called double”. I’d hope there’d be a configuration so it only prints “called double”. But it seems like resolution happens individually for the nested attributes, i.e. it selects a path for :sub-out-1 and then selects one for :sub-out-2 if the resolver for :sub-out-1 didn’t already resolve :sub-out-2. I’d hope it would determine earlier that the “cost” of the whole query would be higher when calling nested-single-attr first, because that implies an additional call to nested-extra-attrs. Maybe this is a topic for https://github.com/wilkerlucio/pathom3/discussions/57? But it might be something different entirely.
this is an interesting case. Pathom will try its best to fulfill the demand, so in this case, when single has priority, it’s called first, but there is still a need for :sub-out-2, which will cause the other resolver to get called as well.
it’s arguable whether Pathom should optimize in this case. For example, imagine nested-extra-attrs is some sort of cache that’s a fallback for :sub-out-1; in that scenario it is desirable to still call both resolvers.
I don't really like the current priority system; as described in the discussion you pointed to, it doesn't work that well
I plan to make the default priority work the same way Pathom 2 did, based on path cost (which is dynamically computed as resolvers run)
I hope some better ideas come up in the future for dealing with manual priority
Could one get a cost estimation at planning time? Like in this case, it’d be nice if the planner could see that the query requires both resolvers, sum the costs of nested-single-attr and nested-extra-attrs, and compare that to the alternative of just calling nested-extra-attrs. I understand the cache argument being applicable when the outputs specified in the two resolvers are the same and, at runtime, the prioritized resolver doesn’t return one of the attrs, so it falls back to calling the other. But in this case, it should know ahead of time that the latter resolver will need to be called regardless, since the specified output of the first resolver doesn’t satisfy the query.
it is possible to calculate the cost at planning time; one caveat to be careful about is that plans in Pathom 3 are expected to be cached, and since the cost is dynamic, we need to be sure that's a separate step. The simplest solution is to always do it at run time.
also, priority never modifies the plan, the plan is always the same; priority is about which branch of an OR node (which indicates alternatives for something) should be taken first
that said, Pathom 3's priority algorithm is pluggable, so you can write a custom one and test some alternatives
Does anyone know what would be the equivalent or closest approximation of pathom 2's ::p/wrap-parser in Pathom 3? Is it ::pcr/wrap-root-run-graph!?
:com.wsscode.pathom3.interface.eql/wrap-process-ast
I think this needs a new helper, I have made one for the viz connector, feels like this should be ported back to Pathom 3
here is the ws connector code you can copy to do that:
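(The actual ws connector snippet isn't captured in this log. For orientation only, here is a rough, hypothetical sketch of what a wrap-process-ast plugin entry can look like, assuming the Pathom 3 plugin API and that the wrapped function receives the env and the query AST; verify both against the docs.)

(ns example.wrap-process-ast
  (:require [com.wsscode.pathom3.interface.eql :as p.eql]
            [com.wsscode.pathom3.plugin :as p.plugin]))

(def log-process-plugin
  {::p.plugin/id `log-process
   ;; runs around the whole EQL request, similar in spirit to pathom 2's wrap-parser
   ::p.eql/wrap-process-ast
   (fn [process-ast]
     (fn [env ast]
       (println "processing AST:" (pr-str ast))
       (process-ast env ast)))})

;; usage sketch: (p.plugin/register env log-process-plugin)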
in pathom 2, what's the difference between parser and async-parser in terms of resolver returns? If I use async, must I return a chan? Can I return a chan?
in pathom 2, when using an async or parallel parser, your resolvers need to return either:
• something that satisfies chan?
https://github.com/wilkerlucio/wsscode-async/blob/master/src/com/wsscode/async/async_cljs.cljs#L7
• or something that is a map
We can confirm this information on this line of code:
https://github.com/wilkerlucio/pathom/blob/main/src/com/wsscode/pathom/parser.cljc#L282
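For illustration, a minimal sketch of both return styles, assuming the standard pathom 2 connect API (com.wsscode.pathom.connect/defresolver) and a core.async based async parser; the resolver and attribute names here are made up:

(ns example.async-resolvers
  (:require [clojure.core.async :as async :refer [go]]
            [com.wsscode.pathom.connect :as pc]))

;; returning a plain map works with any parser
(pc/defresolver user-name [_ {:user/keys [id]}]
  {::pc/input  #{:user/id}
   ::pc/output [:user/name]}
  {:user/name (str "user-" id)})

;; with the async (or parallel) parser, a resolver may instead return a
;; channel; the parser takes the resulting map from it
(pc/defresolver user-email [_ {:user/keys [id]}]
  {::pc/input  #{:user/id}
   ::pc/output [:user/email]}
  (go
    (async/<! (async/timeout 10)) ; simulate async work
    {:user/email (str "user-" id "@example.com")}))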
I assume that returning a Promesa promise won't work though
in Pathom 3 everything is driven by Promesa promises; they proved to be much more efficient than core.async for the particular way Pathom processes things, except for the parallel processor, which does use core.async for queuing
core.async is a more general tool; channels with queues are a very general approach to async… promises are a special case. For most async flows, if you don't need queues, I assume the CompletableFuture that Promesa wraps should be more efficient. Did you test performance @U066U8JQJ?
yes, performance using promises is far superior to using promise-chan, I guess especially the error handling/error propagation part
I tried for a long time to just use core.async for everything, but after the experience with Promesa I now see two distinct concepts: for single-hit async processes it's way better to use a focused tool like Promesa (CompletableFuture). I still like to use core.async when I need an async queue, but for a single hit I'll always pick promises
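As a small illustration of that Promesa-driven flow, here is a sketch of a Pathom 3 resolver returning a Promesa promise and being run through the async EQL interface; the resolver and attribute names are made up, so treat it as an assumption-laden example rather than canonical usage:

(ns example.async-pathom3
  (:require [com.wsscode.pathom3.connect.indexes :as pci]
            [com.wsscode.pathom3.connect.operation :as pco]
            [com.wsscode.pathom3.interface.async.eql :as p.a.eql]
            [promesa.core :as p]))

(pco/defresolver slow-greeting [{:keys [user-id]}]
  {::pco/output [:greeting]}
  ;; returning a Promesa promise; the async runner awaits it
  (p/then (p/delay 10)
          (fn [_] {:greeting (str "Hello, user " user-id)})))

(def env (pci/register [slow-greeting]))

(comment
  ;; process returns a promise of the result map
  @(p.a.eql/process env {:user-id 1} [:greeting]))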
no, did you find a situation with some sort of locking going on?
Oops, sorry I forgot to respond to this. No, I was converting some test cases and it appears this key was used to set some sort of global timeout to abort an async resolver that took too long to produce a result
the reason this was created initially was more about not letting Pathom screw itself up; it was there to prevent hard-to-find lock-ups in the process (usually due to Pathom's own fault)
for Pathom 3, so far I prefer to leave this in the users' hands; one way to do something similar is to write a wrap-resolve plugin and, for any async thing returned, wrap it in a timeout
that said, the parallel processor is still new to Pathom 3, and more info/use cases will better tell the direction for handling these kinds of problems; feedback is much appreciated
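Since the wrap-resolve-plus-timeout idea comes up just above, here is a rough sketch of that kind of plugin, assuming the Pathom 3 plugin API and Promesa's timeout helper; the wrapper signature (env, input) and the 2000 ms value are assumptions to verify against the docs:

(ns example.resolver-timeout
  (:require [com.wsscode.pathom3.connect.runner :as pcr]
            [com.wsscode.pathom3.plugin :as p.plugin]
            [promesa.core :as p]))

(def resolver-timeout-plugin
  {::p.plugin/id `resolver-timeout
   ;; wrap every resolver call; if it returns a promise, cap how long we wait
   ::pcr/wrap-resolve
   (fn [resolve]
     (fn [env input]
       (let [result (resolve env input)]
         (if (p/promise? result)
           (p/timeout result 2000) ; rejects if not resolved within 2s
           result))))})

;; usage sketch: (p.plugin/register env resolver-timeout-plugin)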