#graphql
2017-09-06
andrewtropin 13:09:58

@hlship Can you include "-" in the spec for identifiers? Maybe add an explicit flag to the compile function, or some other mechanism. It would allow all internals to be kebab-cased, while requests and responses are automatically transformed to/from camelCase for Accept/Content-Type application/json and kebab-case for application/edn. That's a setup which would not break GraphiQL and would allow keeping an existing Clojure codebase untouched while adding a GraphQL endpoint.

dominicm 13:09:13

Key transformation seems like a step that can be done by users of the library, no?

andrewtropin 14:09:19

Partially yes, but I really don't want to transform keys on each resolver call. What I want is to do the transformation only when the app receives a request and sends a response. Inside the application/resolvers it is expensive and error-prone to do tons of transformations.
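One way to keep the transformation at the boundary is a Ring-style middleware that rewrites keys once on the way in and once on the way out. This is only a sketch of the idea being discussed, not Lacinia functionality; all names here (`wrap-key-casing`, `transform-keys`, etc.) are illustrative.

```clojure
(ns example.key-transform
  (:require [clojure.string :as str]
            [clojure.walk :as walk]))

(defn camel->kebab
  "Turn :fieldName into :field-name."
  [k]
  (keyword (str/lower-case
             (str/replace (name k) #"([a-z0-9])([A-Z])" "$1-$2"))))

(defn kebab->camel
  "Turn :field-name into :fieldName."
  [k]
  (let [[head & tail] (str/split (name k) #"-")]
    (keyword (apply str head (map str/capitalize tail)))))

(defn transform-keys
  "Apply f to every keyword in a nested structure."
  [f m]
  (walk/postwalk (fn [x] (if (keyword? x) (f x) x)) m))

;; Hypothetical Ring middleware: kebab-case keys on the way in, so
;; resolvers only ever see kebab-case; camelCase on the way out for JSON.
(defn wrap-key-casing [handler]
  (fn [request]
    (let [response (handler (update request :body #(transform-keys camel->kebab %)))]
      (update response :body #(transform-keys kebab->camel %)))))
```

With something like this in place, the resolvers themselves never need to know which casing the client asked for.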

stijn 14:09:13

@andrewtropin we translate our namespaced keywords back and forth between Datomic / lacinia. We do this by annotating the lacinia schema (e.g. adding a :namespace to each type and using that information in the resolver to determine the Datomic attribute) and wrapping each resolver on the way out with a function that strips the keywords.

stijn 14:09:10

That way, inside each resolver you only handle namespaced keywords. You can do the same with casing.
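The wrapping step described above might look something like this. This is a guess at the shape of the approach, not stijn's actual code; `wrap-resolver` and `strip-namespaces` are assumed names.

```clojure
(defn strip-namespaces
  "Turn {:human/name \"Luke\"} into {:name \"Luke\"}."
  [m]
  (into {} (map (fn [[k v]] [(keyword (name k)) v]) m)))

(defn wrap-resolver
  "Resolvers work with namespaced keywords internally; the GraphQL
   result sees only the plain field names."
  [resolver]
  (fn [context args value]
    (let [result (resolver context args value)]
      (if (map? result)
        (strip-namespaces result)
        result))))
```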

stijn 14:09:41

@hlship We only compared alumbra and lacinia, and the reasons we chose lacinia were: 1/ the schema is EDN. This has proven very flexible, e.g. for implementing generic types and automatically generating basic mutations/queries from the schema. 2/ I didn't really grasp claro (which is used by alumbra), but that might not be a very good reason as I didn't look at it long enough.

andrewtropin 14:09:00

@stijn Did you see that lacinia generates namespaced keywords itself? maybe useful in some cases.

{:type {:kind :root, :type :String},
 :field-name :name,
 :qualified-field-name :human/name,
 :args nil,
 :type-name :human,
Can you provide some code snippets of your translation workflow?

hlship 16:09:57

Just want to point out that the structure of the compiled GraphQL schema is subject to change without notice. That's why there's the preview API.

stijn 14:09:58

so first of all we set the default-resolver to this

stijn 14:09:02

;; Assumes requires along the lines of:
;;   [com.walmartlabs.lacinia.resolve :as resolve]
;;   (:import (datomic Entity))
(defn default-resolver
  [field-name]
  ^ResolverResult (fn [{:keys [::auth/db]} args v]
                    (resolve/resolve-as
                      ;; Datomic entities carry namespaced attributes, so
                      ;; look up the namespaced variant of the field name;
                      ;; plain maps use the field name as-is.
                      (if (instance? Entity v)
                        (get v (keyword->namespaced db v field-name))
                        (get v field-name)))))

stijn 14:09:35

If anything that comes back is a Datomic entity, we try to get the namespaced variant of the field-name on the entity.

stijn 14:09:09

That alone already solves most of the translation.

stijn 14:09:23

Another place where you have to translate is in the input of mutations; what we do there is add a :namespace attribute to the input-objects and apply it to each field.
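The mutation-input side might be sketched as follows, assuming the input-object's annotated :namespace is available when the mutation resolver runs. `qualify-input` is an assumed name, not part of Lacinia or stijn's code.

```clojure
(defn qualify-input
  "Qualify each incoming field with the type's namespace, so
   {:name \"Luke\"} plus :human becomes {:human/name \"Luke\"},
   ready to hand to Datomic."
  [ns-kw input]
  (into {}
        (map (fn [[k v]] [(keyword (name ns-kw) (name k)) v])
             input)))
```

Usage would look like `(qualify-input :human {:name "Luke"})`, producing the namespaced map a Datomic transaction expects.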

stijn 14:09:15

Some context: we are building a new thing, so we match our lacinia schema with the Datomic schema. If you can't do this, it all becomes a bit more complex.

andrewtropin 14:09:26

Yep, we already have a codebase designed with a REST API in mind. Seems like there are too many places where translations will be needed.

hlship 22:09:54

FYI we're testing Lacinia 0.21.0 now in our staging environment. Once it's running in production (a few days from now) we'll do an official release. We weren't happy with some issues in 0.20.0 (in some cases, the code would fail because org.clojure/test.check was not present), so we're "eating our own dog food" using an internal release candidate.