This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2019-10-21
Channels
- # announcements (10)
- # aws (38)
- # beginners (220)
- # calva (2)
- # cider (26)
- # clj-kondo (194)
- # cljs-dev (4)
- # clojure (190)
- # clojure-dev (7)
- # clojure-europe (3)
- # clojure-italy (6)
- # clojure-nl (4)
- # clojure-uk (8)
- # clojured (1)
- # clojurescript (29)
- # code-reviews (31)
- # community-development (9)
- # core-async (24)
- # cursive (38)
- # data-science (51)
- # datomic (52)
- # dirac (2)
- # emacs (3)
- # events (1)
- # figwheel-main (4)
- # fulcro (49)
- # graphql (13)
- # heroku (1)
- # hoplon (19)
- # immutant (3)
- # leiningen (1)
- # nrepl (59)
- # off-topic (12)
- # onyx (2)
- # pathom (51)
- # reitit (15)
- # shadow-cljs (88)
- # spacemacs (6)
- # sql (3)
- # tools-deps (107)
- # xtdb (11)
I’m struggling with hooking up a GraphQL API (on the backend), and I could really use some help at this point. The GraphQL API acts as an interface to an ElasticSearch cluster. It’s purely read-only, no mutations. To start at the end, I’d like to pass this as an MVP and receive a response:
[{(:zd/random_articles
{:sample_size 3})
[:article_doi]}]
This is probably the simplest endpoint in our API. It takes an int and returns the equivalent number of random articles.
This is the setup:
(defonce indexes (atom {}))
(def zd-gql
{::p.http/driver p.http.clj-http/request-async
::p.http/headers {"x-auth-token" (auth-token)}
::pcg/url api-url
::pcg/prefix "zd"
::pcg/ident-map <something, haven't worked it out yet>})
(defn load-graphql-index! []
(pcg/load-index zd-gql indexes))
(I call load-graphql-index! at server startup, and discovered that it has to be called after the parser is initialized, or it doesn’t work.)
This is the parser:
(def pathom-parser
(p/parallel-parser
{::p/env {::p/reader [p/map-reader
pc/parallel-reader
pc/open-ident-reader
p/env-placeholder-reader]
::p/placeholder-prefixes #{">"}
::p.http/driver p.http.clj-http/request-async}
::p/mutate pc/mutate-async
::p/plugins [(pc/connect-plugin {; we can specify the index for the connect plugin to use
; instead of creating a new one internally
::pc/indexes indexes
::pc/register resolvers})
p/error-handler-plugin
p/request-cache-plugin
p/trace-plugin]}))
Almost copied wholesale from the documentation, except that I register some additional resolvers that are dummy-esque for now, but will later interface with a database to get/set user details and similar.
On the frontend, we have Fulcro.
This is the entry point:
(defn api-parser [query]
(<!! (pathom-parser {} query)))
(Since the server already makes the entire thing async, all the async stuff above is obviously redundant, as evidenced by the <!! in api-parser, but I’m trying to stay close to the example to avoid confusing myself.)
1. Errors
When I run query->graphql on,
[{(:zd/random_articles
{:sample_size 3})
[:article_doi]}]
It looks right, spitting out,
query {
random_articles(sample_size: 3) {
article_doi
}
}
Which I’ve passed directly to the GraphQL API to verify that it should be returning results. It does.
When running it in the Fulcro Inspector (and thus through Connect etc.), I get,
{:zd/random_articles :com.wsscode.pathom.core/reader-error,
:com.wsscode.pathom.core/errors
{[:zd/random_articles]
{:message "Failed to parse GraphQL query.",
:extensions
{:errors
[{:locations [{:line 3, :column nil}],
:message
"mismatched input '}' expecting {'query', 'mutation', 'subscription', '...', NameId}"}]}}}}
What could be the reason for this? It looks like the GraphQL API is receiving something odd, but could it be something else? Do I need to write a resolver manually or some such? Can I log the outgoing calls to the GraphQL API? (`:com.wsscode.pathom/trace` didn’t help me produce insight into what is happening.)
2. Autocomplete
I’ve noticed that Fulcro Inspector doesn’t expose the GraphQL queries for autocompletion, but rather just [:com.wsscode.pathom.connect/indexes :com.wsscode.pathom.connect/resolver-weights :com.wsscode.pathom.connect/resolver-weights-sorted :com.wsscode.pathom/trace]. Is this down to me needing to create resolvers for the GraphQL queries, or should it be able to autocomplete from knowledge of the GraphQL schema? If I need to describe resolvers for each query to the GraphQL API, what should they return? The (pc/defresolver repositories …) in the example looks like it’s there purely to autocomplete some examples? Correct me if I’m wrong.
Terribly sorry for the humongous post.
Scratch both questions 1 and 2.
(defonce indexes (atom (<!! (pcg/load-index zd-gql))))
seems to have taken care of both the problem of loading the GraphQL index in order, and the autocomplete. Upon having autocomplete, I realized that the correct way to express the query is:
[{(:zd/random_articles
{:sample_size 3})
[:zd.Article/article_doi]}]
And wow, it feels absolutely great to not fiddle around with prebaked .gql files on the backend.
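[Editor’s note: with the index loaded eagerly as above, the whole round trip can be exercised from the REPL along these lines. This is a sketch reusing the names defined earlier in the thread; the exact result shape depends on the API.]

```clojure
(require '[clojure.core.async :refer [<!!]])

;; Blocks until the parallel parser resolves the query through Connect,
;; which translates it to GraphQL and hits the API.
(<!! (pathom-parser {}
       [{(:zd/random_articles {:sample_size 3})
         [:zd.Article/article_doi]}]))
;; returns a map with :zd/random_articles pointing at a vector of article maps
```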
hello @henrik, it seems you figured that out, did you also get to understand how to set up the ::pcg/ident-map?
Am I right in assuming that ident-map should be used for “ident-like” stuff, like the ISBN of a book, for example? You wouldn’t use it for sample_size in my small example?
sample_size is certainly a parameter, but it doesn’t refer to anything that is unique about the result.
Alright, I think I understand the intention behind ident-map now.
In our API, most functions are variable arity. I.e., they don’t take the equivalent of [:some/id 1], they take the equivalent of [:some/id [1]] (to fetch an arbitrary number of entities in one go). Is there a way to adapt for this?
for that I think just using the regular entry points as you’re doing is the way to go
the ident is really for identifying things
you probably will need it later; if you want your UI to do local data updates, those are usually grounded on a single entity, and then the ident lookup syntax will come in handy to use with Fulcro
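[Editor’s note: a minimal sketch of such a “regular entry point”: a global (no-input) resolver whose parameter is a vector of ids, read from the query params carried in the env’s AST. The resolver name, the :ids parameter, and the fetch-articles helper are all hypothetical.]

```clojure
;; Hypothetical: a global resolver taking a vector of ids as a query
;; parameter, mirroring the [:some/id [1]] style of the API.
;; `fetch-articles` is a made-up helper standing in for the real API call.
(pc/defresolver articles-by-ids [env _]
  {::pc/output [{:zd/articles-by-ids [:zd.Article/article_doi]}]}
  (let [ids (-> env :ast :params :ids)] ; params arrive via the query AST
    {:zd/articles-by-ids (fetch-articles ids)}))

;; queried as:
;; [{(:zd/articles-by-ids {:ids [1 2 3]}) [:zd.Article/article_doi]}]
```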
Is there a good way to deal with repeating patterns in EQL queries? I have an argument that can grow rather large, and tends to be repeated two or three times in a query in the worst case scenario. When talking directly to the GraphQL endpoint, I would declare it a variable and send it once, however many times it would be used in the query. I don’t know if this can be expressed in EQL.
The point is to decrease the size on the wire, generating the query itself is straightforward.
GraphQL implemented it to get around the fact that the system is kind of stupid, but as a side effect, it can be used to shrink the size of calls on the wire in the case that the variable holds data that is repeated two or more times in the query.
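[Editor’s note: for reference, the GraphQL variables mechanism being discussed looks roughly like this; the field and variable names are illustrative, not from the API in this thread. The (potentially large) value travels once, in the variables payload, however many times `$filter` appears in the query body.]

```graphql
query Articles($filter: ArticleFilter!) {
  recent:  random_articles(filter: $filter) { article_doi }
  sampled: random_articles(filter: $filter) { article_doi }
}
```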
but it would not be something hard to implement; we are getting by with some pretty big queries, and it helps that transit can compress the keywords 🙂
Yeah, that helps (in my case) frontend -> backend, but not backend -> API. It’s not as big of a problem since both backend and API are sitting in the same AWS region and AZ. But you know, it adds up when there are enough users. 🙃
humm, so you mean to call graphql views using the eql syntax?
So far, everything has been custom (classic REST) between frontend -> backend, with GraphQL queries hardcoded in .gql files. That’s not great, and I can’t call the API directly from the frontend for reasons.
I sat down and watched your talk, Scaling Full-Stack Applications, and realize I totally do not want to do this. 😅
Now I just want to send the query for a component and have Pathom figure out how to retrieve that data 🙂
thanks 🙂
Joining @schmee on the question: is there a way to address “N+1 problem” for SQL queries in Pathom? This one problem was pretty much the reason we migrated off ORM to writing raw SQL queries in HugSQL. Would be a life changer!
Well, it seems like it’s solved in the sense that it doesn’t make a request for every foreign key, which is a common problem for ORMs. Though I wonder how it could make an efficient join in the app’s memory without loading the whole tables and without db indexes.
ahh! I don’t think my tables had foreign keys, I’ll add that and see how it affects the result
it is possible to work something out using batch resolvers; for one depth level it’s simple, for more it may require some thinking
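[Editor’s note: a minimal sketch of the batch-resolver approach for one depth level, using Pathom 2 Connect’s ::pc/batch? flag. The select-people-by-ids helper and the :person/* attributes are made up; when batching kicks in, N rows resolve with a single IN query instead of N queries.]

```clojure
;; Hypothetical sketch: batching the classic N+1 on :person/name.
;; `select-people-by-ids` is a made-up helper issuing one
;; SELECT ... WHERE id IN (...) query.
(pc/defresolver person-name [env input]
  {::pc/input  #{:person/id}
   ::pc/output [:person/name]
   ::pc/batch? true}
  (if (sequential? input)
    ;; batched call: input is a vector of {:person/id ...} maps;
    ;; batch-restore-sort realigns the rows with the input order
    (let [rows (select-people-by-ids (map :person/id input))]
      (pc/batch-restore-sort {::pc/inputs input
                              ::pc/key    :person/id}
                             rows))
    (first (select-people-by-ids [(:person/id input)]))))
```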
if it doesn’t work out for you, try Hasura and integrate with it through the GraphQL integration of Pathom: https://github.com/walkable-server/walkable/issues/153
@U066U8JQJ is optimizations for SQL DBs an interesting use-case for Pathom or would you consider it “out of scope”?
it is interesting for pathom, yes, I think walkable already does a nice job, what's missing is integrating that with the connect ecosystem, so they can compose on top of each other. I'm currently working to try to make those integrations easier, not just for sql, but graphql, pathom to pathom, and any other dynamic source; currently there is a ton of work to get those things "right"
> what’s missing is integrating that with the connect ecosystem, so they can compose on top of each other
agree 100%, awesome that you are working on this 🙂
I keep thinking about whether Pathom should move to a more "planned runner" approach. In the first versions it was purely recursive, meaning it decided what to do right before doing it; currently it does planning "per-attribute", which fixed a bunch of infinite-loop cases, and more and more I see benefits that could emerge from doing a "full planning" ahead of time
right now the algorithm has to reconcile a bunch of those, and when we try to integrate graph <-> graph things it ends up having to recompute paths a lot of times; it also forces every resolver implementer to have to deal with that
is it possible currently to check in a resolver what data was requested through it (through env or something else)? that would also help a lot when optimizing
@schmee not directly, what you have is just the parent-query, but I think that's something I can add
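[Editor’s note: a sketch of reading that parent-query from the env in a resolver, e.g. to select only the columns that were actually asked for. The fetch-person helper and :person/* attributes are hypothetical, and the exact env key may vary across Pathom versions.]

```clojure
;; Sketch: inspecting the query that triggered this resolver via the env.
;; `fetch-person` is a made-up helper that could use `wanted` to build a
;; SELECT with only the requested columns.
(pc/defresolver person [env {:person/keys [id]}]
  {::pc/input  #{:person/id}
   ::pc/output [:person/name :person/email]}
  (let [wanted (:com.wsscode.pathom.core/parent-query env)]
    (fetch-person id wanted)))
```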
I'm currently changing the internal format of plan paths; it's currently a vector with two items (which attribute it's going for, and which resolver it will call), and I'm turning that into a map so more information can be made available there
@U08E8UGF7 thanks for the suggestion! Unfortunately, we’re not on Postgres and there’s no way to migrate. I’ve heard of Hasura, and your idea is really neat. I wonder if anyone has done it that way; seems like a lot of indirection.
@U066U8JQJ cool, that would be really useful for me at least! 🙂