This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2021-07-28
Channels
- # announcements (12)
- # babashka (87)
- # beginners (84)
- # calva (22)
- # circleci (4)
- # clj-kondo (46)
- # cljdoc (6)
- # cljsrn (15)
- # clojure (87)
- # clojure-europe (18)
- # clojure-uk (7)
- # clojurescript (20)
- # community-development (3)
- # conjure (1)
- # cursive (13)
- # datomic (14)
- # events (7)
- # fulcro (27)
- # graphql (31)
- # helix (8)
- # jobs-discuss (1)
- # lsp (43)
- # malli (11)
- # meander (64)
- # off-topic (7)
- # pathom (26)
- # polylith (9)
- # practicalli (2)
- # re-frame (33)
- # reagent (2)
- # reitit (5)
- # releases (2)
- # rewrite-clj (2)
- # shadow-cljs (69)
- # specter (5)
- # sql (1)
- # tools-deps (85)
- # tree-sitter (1)
- # vim (3)
We use lacinia with the component library and reloaded workflow. We have a component for pedestal and one for the schema-provider. It has been great so far but the component has a lot of dependencies (> 150) which makes reloading slow when one of the resolvers is changed. Any tips on how to improve this?
I am aware that lacinia can resolve vars when they're passed (to skip compiling the schema on each reset), but in my testing this only shaved 2 seconds off; the biggest factor seems to be reloading the files
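For context, the var-passing trick mentioned above looks roughly like this. A hedged sketch only: the schema shape and the namespace/fn names (my.app.resolvers/get-user etc.) are made up; util/attach-resolvers is lacinia's helper for wiring resolvers into an EDN schema.

```clojure
(ns my.app.schema
  (:require [com.walmartlabs.lacinia.schema :as schema]
            [com.walmartlabs.lacinia.util :as util]))

(def edn-schema
  {:objects {:User {:fields {:id   {:type 'ID}
                             :name {:type 'String}}}}
   :queries {:user {:type    :User
                    :args    {:id {:type 'ID}}
                    :resolve :query/user}}})

(def compiled
  (-> edn-schema
      ;; passing the var #'get-user (not the fn value) means a
      ;; re-evaluated resolver takes effect on the next query,
      ;; with no schema recompile needed
      (util/attach-resolvers {:query/user #'my.app.resolvers/get-user})
      schema/compile))
```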
if the bottleneck is code reloading itself (as opposed to starting/stopping) you might benefit from parallelisation
I've used a WIP parallel impl of refresh for the whole year; it works nicely enough that I can forget about it
But I should fix/test a couple things before sharing it
Other than that, another line of research is inspecting the "dependency paths" (from ns1 to ns2) and verifying whether they make sense. If it doesn't make sense to refresh/reset the ns2 ns/component whenever ns1 changes, you might have an architectural problem. At the same time, the whole Reloaded workflow is very pessimistic when it comes to considering x a dependency of y, so it's not rare at all to end up "reloading the world" whenever something changes
Yeah it's strictly reloading; the reset takes 1-2 seconds longer than a similar refresh. The dependencies all "make sense", or at least, I think they do 😬. Currently all types that need resolvers have their own namespaces. They are not inter-connected at all, except that the graphql-component which provides the schema needs a reference to all those resolvers. Which means that if a resolver changes, the change propagates to graphql-component -> all other resolvers. I made a POC which requires all resolvers lazily in the graphql-component, which 'fixes' the reload behaviour, but requires a lookup/reload of the resolver before resolving the schema in development (which has its own downsides)
under this scenario if resolver1 changes, the component ns should be reloaded, and the user ns too
but that should be it, right? i.e. resolver2 has no reason to be reloaded unless there's something like a reference in the other direction, such that resolver2 depends (directly or not) on the graphql component
it can be good to try to identify these sorts of unnecessary dependencies. Sometimes they happen due to bad modularization
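One way to inspect those dependency paths is with tools.namespace's own graph utilities. A minimal sketch, assuming sources live under "src"; my.app.resolver1 is a made-up ns standing in for one of your resolvers:

```clojure
(ns user
  (:require [clojure.java.io :as io]
            [clojure.tools.namespace.find :as find]
            [clojure.tools.namespace.parse :as parse]
            [clojure.tools.namespace.dependency :as dep]))

(defn dep-graph
  "Builds a namespace dependency graph from the ns declarations on disk."
  []
  (reduce (fn [graph decl]
            (let [ns-sym (parse/name-from-ns-decl decl)]
              ;; record ns-sym -> each of its required namespaces
              (reduce #(dep/depend %1 ns-sym %2)
                      graph
                      (parse/deps-from-ns-decl decl))))
          (dep/graph)
          (find/find-ns-decls-in-dir (io/file "src"))))

(comment
  ;; everything that would be reloaded when my.app.resolver1 changes:
  (dep/transitive-dependents (dep-graph) 'my.app.resolver1))
```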
I checked just now, and when I change resolver1 at my work codebase, none of the other resolvers get refreshed. That's good :thumbsup:
-test namespaces from disparate resolvers get refreshed though. That's because they all depend on a central system ns which depends on graphql and then on resolver1. That also makes sense; it's a good use case for parallelization (because normally nothing depends on -test namespaces)
Hmmm, i thought that resolver2 would be refreshed because it's a dependency of the graphql.component; but maybe there's another reason why it's reloaded. I'll have to do some more experimenting. Thanks for thinking along!
❤️ Thanks for that direction. It was indeed our test fixture's dependency on the system that made it circular and loaded every resolver as a result. Now I'll take a look at whether I can fix that
Success ⚡, we had a fixture which built the system; building it is now deferred until the test's runtime instead of compile time, which speeds up loading the test and also removes the need for the compile-time dependency. Thanks again!
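A sketch of what that deferral can look like. All namespace and fn names here are assumptions; the point is that requiring-resolve pushes both the require and the var lookup from load time into the fixture's runtime, so loading the test ns no longer drags in the whole system:

```clojure
(ns my.app.resolver1-test
  (:require [clojure.test :refer [use-fixtures]]))

;; before: (def system (my.app.system/new-system))
;; -> forced loading the entire system graph just to compile this ns

(def ^:dynamic *system* nil)

(defn with-system [f]
  ;; resolved only when the fixture actually runs
  (let [new-system (requiring-resolve 'my.app.system/new-system)
        start      (requiring-resolve 'com.stuartsierra.component/start)
        stop       (requiring-resolve 'com.stuartsierra.component/stop)
        sys        (start (new-system))]
    (try
      (binding [*system* sys] (f))
      (finally (stop sys)))))

(use-fixtures :once with-system)
```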
it would be interesting to lint for cycles 👀 circular dependencies are already forbidden by t.n itself, but as we saw here, there are non-critical cycles that can get big without one noticing
It has been a bother for a while, obviously growing over time. Pinpointing it was a bit of a head-scratcher though. Maybe a linter could've helped here
Does Lacinia have a way of generating a GraphQL schema? My org wants it for registering with Backstage.
No, but since it's just Clojure/EDN I think it's easy to create one based on something. I guess introspecting PostgreSQL and creating the schema and resolvers based on that should be doable. I know with Kotlin it can be done with annotations on functions, but in that case they are fully typed. Maybe something with spec could also work.
Proposal: com.walmartlabs.lacinia.schema/graphql-sdl or com.walmartlabs.lacinia.schema/graphql-sdl-text.
We have used a 3rd-party tool, pointing it at our /graphql http endpoint, and it generated the schema for us. I don't recall the name; we ended up not needing it.
We have also worked around this with a tool that gets the schema from the API endpoint (via the IntrospectionQuery, the same way that GraphiQL loads the schema).
It's "good enough" for us and not too much overhead, because the development server is always running and we have a simple shell script to dump the schema. I can provide details/pointers if anybody wants to implement the same workaround.
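For anyone wanting a self-contained version of that workaround, here's a hedged sketch using only the JDK's HTTP client, no extra deps. The endpoint URL is an assumption, and the introspection query is deliberately trimmed down; a real dump would send GraphiQL's full IntrospectionQuery:

```clojure
(ns user.dump-schema
  (:import (java.net URI)
           (java.net.http HttpClient HttpRequest
                          HttpRequest$BodyPublishers
                          HttpResponse$BodyHandlers)))

(def introspection-query
  ;; minimal __schema query; swap in the full IntrospectionQuery for real use
  "{\"query\": \"{ __schema { types { name kind } } }\"}")

(defn dump-schema!
  "POSTs an introspection query to a running dev server and writes
   the raw JSON response to out-file."
  [endpoint out-file]
  (let [req  (-> (HttpRequest/newBuilder (URI/create endpoint))
                 (.header "Content-Type" "application/json")
                 (.POST (HttpRequest$BodyPublishers/ofString introspection-query))
                 (.build))
        resp (.send (HttpClient/newHttpClient)
                    req
                    (HttpResponse$BodyHandlers/ofString))]
    (spit out-file (.body resp))))

(comment
  (dump-schema! "http://localhost:8888/graphql" "schema.json"))
```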
I was pointed to an npm package that does this, and am considering it. It's possible to wrap this node package, boot the system in CI, and fail the build if the user did not commit any required updates to the schema file. But this makes development more awkward.
I started to work on this at one point, but determined that for Walmart’s needs, starting with an SDL document (rather than generating one from EDN) worked better in the overall picture. It would still be a valuable addition to be able to easily convert back from EDN to SDL.