This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2021-07-16
Channels
- # announcements (3)
- # babashka (25)
- # beginners (71)
- # calva (18)
- # clj-kondo (52)
- # cljs-dev (94)
- # cljsrn (12)
- # clojure (33)
- # clojure-europe (52)
- # clojure-nl (2)
- # clojure-uk (27)
- # clojurescript (18)
- # clojureverse-ops (4)
- # datomic (64)
- # deps-new (27)
- # depstar (5)
- # events (5)
- # fulcro (5)
- # graalvm (12)
- # graalvm-mobile (82)
- # helix (2)
- # introduce-yourself (1)
- # juxt (5)
- # lsp (10)
- # malli (7)
- # missionary (1)
- # off-topic (41)
- # pathom (69)
- # pedestal (6)
- # re-frame (4)
- # reagent (8)
- # releases (9)
- # remote-jobs (8)
- # shadow-cljs (3)
- # sql (46)
- # tools-deps (44)
- # uncomplicate (1)
- # vim (83)
Should we expect the planner to discover cycles in the attribute graph? I accidentally created this situation and ended up with a StackOverflowError. I'm not too surprised that the planner doesn't detect it, but I thought I would ask.
the planner does detect cycles https://github.com/wilkerlucio/pathom3/blob/master/test/com/wsscode/pathom3/connect/planner_test.cljc#L440
it could be stack overflowing because the branching is too deep
can you make a repro?
there is another debug method using the snapshots
if you use pcp/compute-plan-snapshots, even if there is an exception it still captures the snapshots as it goes, so you can see the steps before the failure
I just remembered that I had a case like that in Pathom 2, and a simple thing that may work for your case is increasing the stack size on the JVM
(in case it's not a cycle loop of some kind)
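A quick sketch of what that can look like with the Clojure CLI (the alias name and the `-Xss` value here are assumptions; tune the size to your case):

```clojure
;; deps.edn — :big-stack is a hypothetical alias name
{:aliases
 {:big-stack {:jvm-opts ["-Xss16m"]}}} ; 16 MB thread stacks instead of the default
```

Then run with `clojure -M:big-stack ...` so planning happens on a thread with the larger stack.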
Thanks. I'll look into that. In my particular case, the resolver has an input join where the joined attributes are also output from the same resolver.
I haven't tried a repro case yet but I'm pretty sure that no acyclic plan can be created
does it have nested inputs?
Here's the resolver input/output declaration:
{::pco/input    [:event :portfolioKey :appKey :event/type
                 {:aliased.event/events [:event/name-expr :event/filter-expr]}]
 ::pco/output   [:event/filter-expr]
 ::pco/priority 1}
If I remove :event/filter-expr from the nested input, there is no stack overflow.
I'm working on a repro case now
thanks to your example I was able to reproduce it here, and yes, Pathom is missing cycle detection when it's about nested inputs
Fixed on main
Oh great! You just saved me an hour of creating a repro case. Thanks 😊
please note main has a big breaking change related to the strict requests, other than that it should be good to go
check the changelog for more details on the changes
Thanks. I have played with strict mode and discovered that I have some application fixes to make. I think I'll just turn off strict mode for now
Hi, @wilkerlucio! Recently, I was investigating what I think is a memory leak in the Chlorine project and found the situation below.
When I tried to go deeper, every node that consumes more memory points to $com$wsscode$pathom3$connect$planner$compute_missing_chain_deps$$. I'm only asking if you have anything in mind, like "ok, maybe I'll have to refactor this code in the future", so I can try to check if that's the code that's giving problems, etc... 🙂
(Just to be clear: I'm not sure that Pathom is at fault here yet, I'm just debugging 😄)
one thing about planning: did you set up a persistent cache for planning? that can reduce a lot of the cost of planning, given queries are usually consistent (same queries, which means same plan result, even if the input/output data is different)
Not really. In fact, wouldn't setting up a persistent cache take a bigger hit on memory?
could be, but it would avoid the process of compute-run-graph completely (just enter, read cache, exit); I expect that to be a good tradeoff for most usages
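The tradeoff described here is essentially memoization: pay the planning cost once per distinct query, then just read the cache on every later request. A generic plain-Clojure sketch of that idea (not Pathom's actual cache API; `cached` and `plan` are hypothetical names):

```clojure
(defn cached
  "Wrap f so results are memoized in the given atom, keyed by args.
  An explicit atom keeps the cache lifetime under the caller's control."
  [cache* f]
  (fn [& args]
    (if-let [hit (find @cache* args)]
      (val hit)
      (let [result (apply f args)]
        (swap! cache* assoc args result)
        result))))

(def planner-calls (atom 0))

(def plan
  (cached (atom {})
          (fn [query shape]
            (swap! planner-calls inc)          ; expensive planning happens here
            {:plan-for query :given-shape shape})))

(plan [:foo] {:bar {}})
(plan [:foo] {:bar {}})  ; cache hit: enter, read cache, exit
@planner-calls           ; => 1
```

Repeated identical queries only pay the planning cost the first time.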
A question: if I have 2 resolvers for the same data, and one sometimes fails and sometimes doesn't, does the persistent cache for planning take that into account when resolving things?
(the one that fails has higher priority)
the plan is a function of indexes + query + available data shape
(not the data specifically, just the shape of it; for instance, if you have the data {:foo "bar" :baz {:deep "data"}}, the shape of that is {:foo {} :baz {:deep {}}}, so similar data like {:foo "other thing" :baz {:deep "different here"}} will have the same shape, although the values are different)
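The shape idea can be sketched in a few lines of plain Clojure (`data->shape` is a hypothetical helper for illustration, not Pathom's API):

```clojure
(defn data->shape
  "Keep only the nested key structure of a map, discarding the values."
  [m]
  (if (map? m)
    (into {} (map (fn [[k v]] [k (data->shape v)])) m)
    {}))

(data->shape {:foo "bar" :baz {:deep "data"}})
;; => {:foo {} :baz {:deep {}}}

;; different values, same shape
(= (data->shape {:foo "bar" :baz {:deep "data"}})
   (data->shape {:foo "other thing" :baz {:deep "different here"}}))
;; => true
```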
so, if something fails sometimes, that's always after the planning is done
because the plan always has every possible option
in other words: the plan is the graph before running it
and the plan never changes after it starts running
so for the case of 2 resolvers for the same data, the plan will always include both options, connected via an OR node
has anyone thought of using pathom to replace integrant? connecting integrant pieces together feels too manual
I think my thing is a little different actually since it's an integrant-like API instead of component. do you use summon in prod anywhere?
To be honest, I deprecated this library. The idea is nice, but dealing with "smart connect" AND "state management" at the same time is really complicated.
@U797MAJ8M interesting approach you took for the halting; one suggestion: you could put done as part of the env instead of a global variable (an atom in env), so you could control it better, and maybe add a way to avoid starting an already started system (or maybe an auto-restart in that case? by halting everything and starting over)
I made a few changes actually, I think it's actually very ergonomic
(nx/bind! server [{::keys [ring-handler aleph-opts executor]}]
  {::nx/halt #(.close %)}
  (start-server ring-handler (assoc aleph-opts :executor executor)))

(nx/bind! executor []
  {::nx/halt (fn [x]
               (.shutdown x)
               (.awaitTermination x 15 java.util.concurrent.TimeUnit/SECONDS))}
  (utilization-executor 0.9 256 {}))

(nx/init {:app.server/ring-handler (reitit.ring/create-default-handler)
          :app.server/aleph-opts   {:port 1234}}
         [:app.server/server])
to start an aleph server + executor. Problem is I can only call nx/init in the same namespace as the one in which I define the resolvers (`nx/bind!`), because I don't know how to get Pathom to know what start-server, inside the :resolve fn, is at resolver run time; to the macro it's just a symbol like the ones from pco/input
what if you put start-server on the env?
like
(pco/defresolver resolver [{::keys [start-server]} _]
  (start-server))

(p.eql/process
  (-> {::start-server (fn [] "implementation")}
      (pci/register resolver))
  query)
are you familiar with the resolver taking 2 arguments?
yeah, but the defresolver body doesn't have to be a pure function of env + params, right?
mostly they should be, caching and logging are common exceptions
to what end are you considering the pure sense of it?
you don't have to put them into env to use them inside the resolver, though maybe that is a good idea? looking through my code to see how often that even happens
correct, and most of the time won't be in the env
it's nice to put things on the env to create some indirection, in case you want to fill that in later
most common when writing some sort of generic resolver, or to place things like database connections, this gets a bit meta in your case because you are building the config itself on top of that
are you using the result of this setup as config for another Pathom thing, or as a stand alone system build up / shut down?
I actually have no idea how Pathom even manages to do this... how does the runner know to run the :resolve fn as if it were in the same namespace where it's defined?
the fn binding happens when we make the resolver
looking at the resolver form:
(pco/resolver 'foo
  {::pco/output [:foo]}
  (fn [env input]
    ...))
the resolver stores that fn defined there, so the bindings are all in place
makes sense?
ahh, finally got it. that's a good hint, the eval needs to be done in resolver instead of defresolver
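The point about fn bindings can be illustrated with a toy stand-in (a sketch of the closure behavior, not Pathom's actual internals; `make-resolver` is a hypothetical name):

```clojure
(defn make-resolver
  "Toy version of pco/resolver: it just stores the fn it is given."
  [resolve-fn]
  {:resolve resolve-fn})

;; start-server is resolved lexically when the fn is created,
;; so the stored fn keeps working wherever the resolver is later invoked
(let [start-server (fn [] "server started")
      r (make-resolver (fn [env input] (start-server)))]
  ((:resolve r) {} {}))
;; => "server started"
```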
going back to what you said about putting done in env, can I dynamically modify env from a resolver call in Pathom 3?