This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2019-11-28
Channels
- # announcements (1)
- # babashka (9)
- # beginners (82)
- # calva (6)
- # cider (3)
- # clj-kondo (69)
- # cljdoc (4)
- # cljs-dev (10)
- # cljsrn (2)
- # clojure (74)
- # clojure-europe (11)
- # clojure-italy (9)
- # clojure-nl (15)
- # clojure-spec (18)
- # clojure-uk (89)
- # code-reviews (8)
- # core-async (42)
- # cursive (22)
- # datomic (26)
- # fulcro (13)
- # graalvm (33)
- # graphql (1)
- # leiningen (20)
- # malli (19)
- # music (1)
- # off-topic (4)
- # pathom (56)
- # re-frame (3)
- # reitit (26)
- # shadow-cljs (40)
- # spacemacs (5)
- # tools-deps (25)
@wilkerlucio I am trying to update the env
in a mutation, so that I get the new db in the mutation-join.
When I return just {::p/env env}
I get a NullPointerException from the reader.
Is there anything I have to consider about where to update the env?
The Exception:
#:decide.model.proposal{new-proposal #:com.wsscode.pathom.parser{:error #error {
:cause nil
:via
[{:type java.lang.NullPointerException
:message nil
:at [com.wsscode.pathom.core$join invokeStatic "core.cljc" 423]}]
:trace
[[com.wsscode.pathom.core$join invokeStatic "core.cljc" 423]
[com.wsscode.pathom.core$join invoke "core.cljc" 361]
[com.wsscode.pathom.core$join invokeStatic "core.cljc" 370]
[com.wsscode.pathom.core$join invoke "core.cljc" 361]
[com.wsscode.pathom.connect$mutate_async$fn__28020$fn__28104$state_machine__8372__auto____28135$fn__28138 invoke "connect.cljc" 1516]
[com.wsscode.pathom.connect$mutate_async$fn__28020$fn__28104$state_machine__8372__auto____28135 invoke "connect.cljc" 1511]
[clojure.core.async.impl.ioc_macros$run_state_machine invokeStatic "ioc_macros.clj" 973]
[clojure.core.async.impl.ioc_macros$run_state_machine invoke "ioc_macros.clj" 972]
[clojure.core.async.impl.ioc_macros$run_state_machine_wrapped invokeStatic "ioc_macros.clj" 977]
[clojure.core.async.impl.ioc_macros$run_state_machine_wrapped invoke "ioc_macros.clj" 975]
[com.wsscode.pathom.connect$mutate_async$fn__28020$fn__28104 invoke "connect.cljc" 1511]
[clojure.lang.AFn run "AFn.java" 22]
[java.util.concurrent.ThreadPoolExecutor runWorker "ThreadPoolExecutor.java" 1128]
[java.util.concurrent.ThreadPoolExecutor$Worker run "ThreadPoolExecutor.java" 628]
[clojure.core.async.impl.concurrent$counted_thread_factory$reify__3036$fn__3037 invoke "concurrent.clj" 29]
[clojure.lang.AFn run "AFn.java" 22]
[java.lang.Thread run "Thread.java" 834]]}}}
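For context, the pattern under discussion might look like this (a sketch, assuming Pathom Connect; the mutation name matches the error above, but `transact!` and the `::app/db` key are hypothetical):

```clojure
(ns decide.model.proposal
  (:require [com.wsscode.pathom.core :as p]
            [com.wsscode.pathom.connect :as pc]))

(pc/defmutation new-proposal [env params]
  {::pc/sym 'decide.model.proposal/new-proposal}
  ;; transact! is a hypothetical function returning the new db value
  (let [db' (transact! (::app/db env) params)]
    ;; Returning ::p/env asks the parser to use this augmented env
    ;; when processing the mutation join, so readers see the new db.
    {::p/env (assoc env ::app/db db')}))
```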
@U4VT24ZM3 thanks for the report, I think it's a bug, I never used that in a mutation context
checking on it
Glad I could help. And good to know that I don't have to search any further for what I have done wrong.
just found it 🙂
just gonna write some tests to cover it and send the fix soon
Nice! 👍
are you using deps? if you are, can you try pulling from master and see if it works for you?
Will do. One moment.
Works. 👍
thanks, I'll cut a new release soon
Hi, is there an alias-resolver that switches the E and V in EAV? I want to declare that
person :person/notes note
is equivalent to or can be resolved via
note :note/person person
Is this even possible?
@magra Is this a pathom question? In Datomic you can make reverse lookups. https://docs.datomic.com/cloud/query/query-pull.html#reverse-lookup
I write lots of queries like [:person/id {:person/notes [:note/id :note/person]}] because they preload the normalized DB in the app with the 'backlinks'. At the moment my resolver queries the db for :note/_person, dissocs that, and then assocs it as :person/notes. But half the time it hits the DB again to get :note/person because it does not 'see' the relationship between :note/person and :note/_person.
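The resolver being described might be sketched like this (assuming Pathom Connect and a Datomic db in the env; the env key and attribute names follow the messages above, everything else is hypothetical). Including :note/person in each returned note is one way to keep Pathom from going back to the DB for it:

```clojure
(ns example.resolvers
  (:require [com.wsscode.pathom.connect :as pc]
            [datomic.api :as d]))

(pc/defresolver person-notes [{:keys [db]} {:person/keys [id]}]
  {::pc/input  #{:person/id}
   ::pc/output [{:person/notes [:note/id :note/person]}]}
  ;; Pull the reverse ref :note/_person, then re-key it as :person/notes.
  (let [notes (:note/_person
               (d/pull db [{:note/_person [:note/id]}] [:person/id id]))]
    {:person/notes
     ;; Attach :note/person explicitly so Pathom already 'sees' the
     ;; reverse relationship and does not hit the DB again for it.
     (mapv #(assoc % :note/person {:person/id id}) notes)}))
```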
@magra I think to get efficient with that, the ideal would be to send the reverse lookup directly to Datomic as part of the pull request, have you tried https://github.com/wilkerlucio/pathom-datomic ? still very very alpha, but I think it supports that, have to try and check
It has been some time since I checked that. At the moment I do send it directly to the db. If that is the most efficient I will keep it that way. Thanks!!
that's more true for datomic cloud, on prem it makes little difference
yeah, I'm excited for the new query planner I'm working on, it will be a great improvement to the dynamic resolvers integration story (that includes Datomic, GraphQL, Pathom <> Pathom, SQL...)
What's the Pathom story around queries with aggregations? I'm working on an analytics product where the user can create arbitrary queries to slice and dice data. We're suffering with my badly designed DSL to describe aggregate queries and I'd love to switch to something much better thought through and battle tested
Pathom com.wsscode/pathom "2.2.27"
was just released, this fixes a bug when trying to augment env with connect mutations, also adds support for docstrings (they become data on the resolver map) on pc/defresolver
hello @mark340, I personally don't have much experience doing it, but I would use EQL parameters to do it, they are an open dimension of information you can use to describe anything you want, you can take Walkable interface as inspiration: https://walkable.gitlab.io/aggregators.html
one suggestion I have if you go in that direction, use namespaced keywords in your interface definition (the names of the parameters), this way keep it open to integrate with other possible definitions you may want to use in the future
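The suggestion above could look something like this in EQL (a sketch only; the parameter names are hypothetical, namespaced as recommended):

```clojure
;; EQL parameters ride along with a join by wrapping it in a list:
;; (join-expression params-map). A hypothetical aggregate query:
[({:sales/by-region [:region :total]}
  {:my.app.agg/group-by  [:region]
   :my.app.agg/aggregate {:total [:sum :amount]}
   :my.app.agg/where     {:year 2019}})]
```

Because the parameter keys are namespaced (:my.app.agg/...), they can coexist with parameters from other libraries or future conventions without clashing.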
Cool. I'll check this out. Thanks!
One more thing: For the most part, our schema is arbitrary key-value pairs. In order to support this in Pathom, I guess I'd write my own resolver that knew how to convert an EDN key-value pair into the appropriate SQL addressing?
there's some room for options in this space, it really depends how you wanna go about modeling it
so just to see if we are on the same page, your target database is some SQL, is that correct?
Yep. More specifically, our tables have arbitrarily named columns. It's not too weird from the database perspective. It's just that the UI queries are metadata driven
given this, you want a solution that automatically makes all the table options available, or something more controlled on the API side?
What does "table options" mean? Generally, we're trying to provide a high degree of flexibility to the API
I mean, you can try to do things like "introspect my table schema, and generate the resolvers for all of it", or more like: "ok, I'm going to give the user this specific list with these specific fields, and via params I'll customize just filter/aggregation on top of this specific thing"
makes sense?
Yes. Much more like introspection.
But perhaps the implementation matters
We have a metadata that describes all the columns available to the user
*metadata table
yup, the introspection version is much harder to get right
Unlike naive introspection using the db's information schemas, we have a very rich understanding of the columns, their types, etc
I guess it may be easier if we start talking more concretely, hehe, what kind of queries would you like to support in the system?
or putting more simply, what are the user inputs?
The user inputs are group-by columns, aggregate columns and functions, and a where clause. the one complication is that the where clause can reference another table that must be joined in
Oh and time range
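One hypothetical way to model those inputs and translate them into parameterized EQL (a sketch; every key and attribute name here is invented for illustration):

```clojure
;; A hypothetical UI input: group-by columns, aggregates, a where
;; clause (possibly touching a joined table), and a time range.
(def user-input
  {:group-by   [:order/day-of-week]
   :aggregate  {:order/total [:sum :order/amount]}
   :where      {:customer/region "EU"}
   :time-range [#inst "2019-01-01" #inst "2019-12-01"]})

;; Translate the input into a single parameterized EQL join.
;; The :my.app/* parameter names are hypothetical, namespaced so they
;; stay open to integration with other parameter conventions.
(defn input->eql [{:keys [group-by aggregate where time-range]}]
  [(list {:report/orders (vec (concat group-by (keys aggregate)))}
         {:my.app/group-by   group-by
          :my.app/aggregate  aggregate
          :my.app/where      where
          :my.app/time-range time-range})])
```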
ok, so it's some sort of graphical SQL interface
exactly
The generated SQL can be complicated due to UI niceties. For example, the user may ask for some aggregation grouped by day of week. If the data happens not to have Sunday, for example, we still want the result set to include Sunday with some default value (null or zero)
seems like Walkable is a good option for you, the bad part is that it currently doesn't integrate with Connect, so you don't get the graph traversal and auto-complete things
but it seems to already have a lot of the things you are describing: aggregations, joins, etc...
so you can convert your user input directly into some EQL representation supported by Walkable, and let it run it
Ok. Auto complete isn't a requirement - at least, that is being handled by a separate system
That sounds promising. I'll investigate.