This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2019-09-20
Channels
- # announcements (5)
- # beginners (37)
- # calva (3)
- # cider (23)
- # clojure (98)
- # clojure-dev (16)
- # clojure-europe (5)
- # clojure-italy (4)
- # clojure-nl (5)
- # clojure-spec (7)
- # clojure-uk (52)
- # clojurescript (14)
- # cursive (15)
- # data-science (1)
- # datomic (20)
- # emacs (7)
- # flambo (2)
- # fulcro (10)
- # jackdaw (1)
- # jobs (3)
- # joker (2)
- # juxt (3)
- # keechma (3)
- # leiningen (8)
- # luminus (3)
- # music (1)
- # off-topic (83)
- # pathom (19)
- # re-frame (19)
- # reitit (4)
- # shadow-cljs (76)
- # spacemacs (95)
- # tools-deps (16)
any ideas to improve the performance of trivial resolvers (think alias-resolver) when using core.async's thread pool for execution?
My use case is that my entity has a lot of keys, and I wrote an alias resolver for each one. Pathom is essentially doing the job of clojure.set/rename-keys
at this point, but the context switches kill the performance (since there are a lot of tiny resolvers to execute)
I could write a single resolver that does the job with clojure.set/rename-keys,
but since there is no support for optional inputs (there is an issue open for that) it currently cannot be expressed
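The situation described above might look roughly like this (a sketch using Pathom 2's `com.wsscode.pathom.connect` API; the `:legacy`/`:entity` key names are made up for illustration):

```clojure
(ns example.aliases
  (:require [com.wsscode.pathom.connect :as pc]
            [clojure.set :as set]))

;; Many tiny alias resolvers, one per key -- under the parallel
;; parser each one pays a thread-pool/context-switch cost:
(def alias-resolvers
  [(pc/alias-resolver :legacy/name  :entity/name)
   (pc/alias-resolver :legacy/email :entity/email)]) ; ...and many more

;; The single-resolver equivalent is just rename-keys in one pass --
;; but it requires ALL inputs to be present, which is why optional
;; inputs would be needed to express it correctly:
(pc/defresolver renamed-entity [_ input]
  {::pc/input  #{:legacy/name :legacy/email}
   ::pc/output [:entity/name :entity/email]}
  (set/rename-keys input {:legacy/name  :entity/name
                          :legacy/email :entity/email}))
```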
@thenonameguy one question first, why do you need so many renames? if you are loading from some source that doesn't have them, is it possible to convert them when you fetch it the first time?
ok, that's a valid case, it's just not optimized if you're using a lot of them
another thing to consider is, do you need the parallel parser? although I keep recommending it as the default, I'm starting to believe that for most people the serial parser may be a better option
the question is, do the queries you run have parallelism opportunities? usually if it's a single data source (or just a few) that may not be the case, and then you are just paying the higher overhead and complexity price of the parallel parser
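For reference, switching between the two in Pathom 2 is mostly a matter of which parser constructor you call (a sketch; the reader/plugin configuration here is abbreviated and may need adjusting for a real setup):

```clojure
(ns example.parsers
  (:require [com.wsscode.pathom.core :as p]
            [com.wsscode.pathom.connect :as pc]))

(def registry []) ; your resolvers go here

;; Parallel parser: resolvers run via core.async, worthwhile when
;; queries can fan out across independent data sources.
(def parallel-parser
  (p/parallel-parser
    {::p/env     {::p/reader [p/map-reader pc/parallel-reader]}
     ::p/plugins [(pc/connect-plugin {::pc/register registry})]}))

;; Serial parser: same registry, plain synchronous execution --
;; much lower overhead when there is little real parallelism.
(def serial-parser
  (p/parser
    {::p/env     {::p/reader [p/map-reader pc/reader2]}
     ::p/plugins [(pc/connect-plugin {::pc/register registry})]}))
```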
I think a better alternative would be to tag certain resolvers as ‘sync’ in the resolver map. These resolvers could then be evaluated inline in the go-loop to avoid the asynchronous overhead.
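A rough shape for that proposal (note: `::sync?` is not a real Pathom key, and the dispatch function is purely illustrative):

```clojure
(ns example.sync-tag
  (:require [com.wsscode.pathom.connect :as pc]
            [clojure.core.async :as async]))

;; A trivial resolver tagged as safe to run inline:
(pc/defresolver cheap-alias [_ {:legacy/keys [name]}]
  {::pc/input  #{:legacy/name}
   ::pc/output [:entity/name]
   ::sync?     true} ; hypothetical hint: skip the thread pool
  {:entity/name name})

;; The runner could then branch on the tag, roughly:
(defn run-resolver [env resolver-config resolve-fn input]
  (if (::sync? resolver-config)
    (async/go (resolve-fn env input))       ; inline in the go-loop
    (async/thread (resolve-fn env input)))) ; current off-loading
```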
yeah, I was imagining that this could be realised automatically: after the first run pathom could "notice" that a resolver is sync and fast, and use that information to automatically skip the thread pool
what do you think?
I think that works, also this improvement could be automatically applied to some resolver helpers like alias-resolver*
(if you don’t consider this a breaking change)
an alternative would be to use the weights of resolvers and have a cutoff, where any resolver with resolver-weight < weight-cutoff
is executed serially
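The weight-cutoff variant could be sketched like this (Pathom 2 does track resolver weights internally, but this dispatch logic and the threshold value are hypothetical):

```clojure
(ns example.weights
  (:require [clojure.core.async :as async]))

(def weight-cutoff 5) ; arbitrary threshold, e.g. average ms per call

(defn run-resolver [resolve-fn env input resolver-weight]
  (if (< resolver-weight weight-cutoff)
    (async/go (resolve-fn env input))       ; cheap: run inline
    (async/thread (resolve-fn env input)))) ; heavy: use the pool
```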
yeah, that's what I was thinking 🙂
an even bolder impl could cache "common paths", kind of like an internal JIT: take long processing chains and pre-compile the fns
yeah, I guess it does, but there you have to manually build the transformation pipeline; what I'm thinking here is pathom seeing common paths (because they will naturally tend to repeat) and caching them based on usage, makes sense?
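One way to picture the "common paths" idea in plain Clojure (entirely hypothetical, not a Pathom API): compose a frequently seen chain of resolver fns into one function and cache it by path:

```clojure
(ns example.path-cache)

(defonce compiled-paths (atom {}))

(defn compile-path
  "Compose a chain of single-arg resolver fns into one fn,
  applied left to right."
  [resolver-fns]
  (reduce comp identity (reverse resolver-fns)))

(defn run-path
  "Run `resolver-fns` over `input`, caching the composed fn
  under `path-key` so repeated paths skip recomposition."
  [path-key resolver-fns input]
  (let [f (or (@compiled-paths path-key)
              (let [f (compile-path resolver-fns)]
                (swap! compiled-paths assoc path-key f)
                f))]
    (f input)))
```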