#pathom
2019-07-27
mdhaney18:07:01

I’m using pathom with Datomic, and I have a couple of questions on how to optimally do things.

1) I started out injecting the current db into the environment, because in general you want each resolver using the same snapshot of the db. This breaks with mutation joins, however, because when the resolvers are run on the returned query, they are using the db from before any transactions that ran in the mutation. To get around this, you could just grab the current db in each resolver, but then you run the risk of data changing between one resolver and another.

What I ended up doing is creating a global resolver that injects the current db under a special key. Then in all my “context” resolvers (i.e. ones that take an ident and establish the context of the graph traversal) I set this special db key as an input in addition to the ident input. When I do a mutation, I set the special db key to the :db-after value from the transaction, so subsequent resolvers use the updated database. This works great, but I just wanted to know if there’s an easier way to do this, since I’m not 100% up to speed on all the features in pathom.
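Roughly what that looks like in code (an untested sketch with made-up names, pathom 2 connect style; `:person/id` and the tx shape are just for illustration):

```clojure
(ns example.resolvers
  "Sketch of the special-db-key pattern; all names are hypothetical."
  (:require [com.wsscode.pathom.connect :as pc]
            [datomic.api :as d]))

;; Global resolver: injects the current db under a special key.
(pc/defresolver current-db [{:keys [conn]} _]
  {::pc/output [::db]}
  {::db (d/db conn)})

;; Context resolver: takes the ident *and* the special db key as input,
;; so it reads from whichever db value is in play at that point.
(pc/defresolver person-by-id [_ {:keys [person/id] ::keys [db]}]
  {::pc/input  #{:person/id ::db}
   ::pc/output [:person/name :person/email]}
  (d/pull db [:person/name :person/email] [:person/id id]))

;; Mutation: returns :db-after under the special key, so resolvers running
;; on the mutation-join query see the post-transaction db.
(pc/defmutation update-person! [{:keys [conn]} {:keys [person/id] :as params}]
  {::pc/output [:person/id ::db]}
  (let [{:keys [db-after]} @(d/transact conn [params])] ; params as a tx map
    {:person/id id
     ::db       db-after}))
```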

eoliphant18:07:48

I did something similar. I talked to @wilkerlucio about this, and there’s no easy answer: you generally want the nice stable db that Datomic gives you, but for a single mutation join you need the new db value, never mind once things are going in parallel.

mdhaney18:07:45

Yeah, the only other way I can think of to do it would be to update the db in the environment after the mutation is processed but before the join query is parsed. But I didn’t see anything in the docs to indicate it was possible to change the environment like that.

eoliphant18:07:34

Yeah, I created a couple of funcs that create an atom that gets passed in with the db and conn, then a plugin that would swap in a new db value if a mutation was processed. I was trying to make it smarter, to only do the swap if there was in fact a mutation in the query, but never got around to finishing that.
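The rough shape was something like this (from memory, so an untested sketch; the `:db*` atom in the env and the `:db-after` convention in mutation results are just how I wired it, assuming pathom 2’s `::p/wrap-mutate` hook):

```clojure
(ns example.db-plugin
  "Sketch of a plugin that swaps a fresh db value in after mutations."
  (:require [com.wsscode.pathom.core :as p]))

;; Assumes the env carries {:conn conn :db* (atom (d/db conn))}
;; and that resolvers deref :db* instead of holding a db directly.
(def swap-db-plugin
  {::p/wrap-mutate
   (fn [mutate]
     (fn [env k params]
       (let [out (mutate env k params)]
         ;; parser mutations return {:action (fn [] ...)}; wrap the action
         ;; so that after it runs, the db atom is reset to the :db-after
         ;; that the mutation (by convention here) includes in its result
         (cond-> out
           (:action out)
           (update :action
                   (fn [action]
                     (fn []
                       (let [result (action)]
                         (when-let [db-after (:db-after result)]
                           (reset! (:db* env) db-after))
                         result))))))))})

;; Used like any other plugin:
;; (p/parser {::p/plugins [swap-db-plugin ...]})
```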

mdhaney18:07:53

Hmm, that’s a good idea. I’ll keep that in mind if I run into any problems with my current approach.

eoliphant18:07:04

I’ll scare it up and send it over when I get a chance

👍 4
mdhaney19:07:52

That would be great. I wouldn’t mind learning more about plugins anyway, since all this Datomic stuff could be a useful plugin to package up and reuse on future projects.

eoliphant18:07:15

learned more about plugins doing it 😉

eoliphant18:07:19

It presents a sorta “NP-hard”/Morpheus-lol problem: “What is current?” On the one hand, the whole point is to let resolvers go off and do their thing; on the other, there’s a sorta implicit outside expectation of ‘consistent state’ in terms of the response.

mdhaney19:07:40

2) Second question: you have an entity with a bunch of “regular” attributes (i.e. things that don’t need any special processing or require a separate resolver) - what’s the best way to structure the resolvers?

What I’ve been doing is one resolver that does a Datomic pull for all the possible fields and returns them, and my understanding is that pathom will just pick the fields it needs and ignore the rest. Not bad, and on Cloud probably the way to go so you don’t have too many round trips. I’m on Peer, though, and it does seem inefficient to pull say 30-40 fields if the client is only asking for 2.

So one thing I could do is have the resolver that resolves the ident return the actual Datomic entity. Then for each field, have a resolver that takes the entity as input and returns that field. This could easily be wrapped up in a couple of macros so it’s not a pain creating all those little resolvers.

I’m trying to decide if it’s worth it or not, performance-wise. It’s cheap to retrieve the entity, and then the attributes are pulled lazily, so pathom can quickly get each separate attribute in parallel, vs. having to wait for all the attributes to be pulled before it can get any of them. I might have to just benchmark and compare the two approaches, but if someone has already tried, that would save me the time. 😉
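For the record, the macro idea I have in mind is roughly this (untested sketch, hypothetical names; the entity lives under a special key so each tiny resolver can read one attribute lazily):

```clojure
(ns example.entity-resolvers
  "Sketch of per-attribute resolvers generated from a Datomic entity."
  (:require [com.wsscode.pathom.connect :as pc]
            [datomic.api :as d]))

;; Ident resolver: returns the lazy Datomic entity under a special key.
(pc/defresolver person-entity [{:keys [db]} {:keys [person/id]}]
  {::pc/input  #{:person/id}
   ::pc/output [::person-entity]}
  {::person-entity (d/entity db [:person/id id])})

;; One tiny resolver per attribute, reading lazily off the entity.
(defmacro def-entity-attr-resolvers
  [entity-key & attrs]
  `(do
     ~@(for [attr attrs]
         `(pc/defresolver ~(symbol (str (namespace attr) "-" (name attr) "-resolver"))
            [_# input#]
            {::pc/input  #{~entity-key}
             ::pc/output [~attr]}
            {~attr (get (get input# ~entity-key) ~attr)}))))

(def-entity-attr-resolvers ::person-entity
  :person/name :person/email :person/age)
```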

eoliphant19:07:23

I’d been playing around with some Datomic helper functions that actually only query for the fields requested in the resolver.
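The core trick was reading the sub-query pathom is currently processing out of the env and turning it into the pull pattern. Something in this direction (simplified, untested sketch; the namespace filtering is naive):

```clojure
(ns example.dynamic-pull
  "Sketch: build the Datomic pull selector from the requested query."
  (:require [com.wsscode.pathom.core :as p]
            [com.wsscode.pathom.connect :as pc]
            [datomic.api :as d]))

(pc/defresolver person-fields
  [{:keys [db] ::p/keys [parent-query]} {:keys [person/id]}]
  {::pc/input  #{:person/id}
   ::pc/output [:person/name :person/email :person/age]}
  ;; ::p/parent-query holds the EQL sub-query at this point in the parse;
  ;; keep only plain props in this entity's namespace and pull just those.
  (let [wanted  (filter #(and (keyword? %) (= "person" (namespace %)))
                        parent-query)
        pattern (or (seq wanted) [:person/name])]
    (d/pull db (vec pattern) [:person/id id])))
```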

eoliphant19:07:05

They worked fairly well, though I was running into some limits of generalization lol

eoliphant19:07:20

something like that might be the ticket for you as well.

eoliphant19:07:47

And of course, even with what you’re doing, once the segments or whatever are ‘hot’ in the peer, Datomic generally does a good job of keeping them up to date, so perhaps other than the first hit, the perf wouldn’t be too bad? We’ve moved to Cloud but still have a few on-prem based services.

mdhaney19:07:21

Yeah, we started with Cloud on this project, but then we saw the pricing for the production deployment - yikes!

eoliphant19:07:00

Yeah, the min is around $200 a month. But if you don’t need it, there are some tricks you can use, like a lambda warmer, that make Solo an option for ‘production’ deployments; you just don’t get the scalability/reliability stuff.

mdhaney19:07:00

Really? The numbers I saw were around $450/month. That was about half for instances (the minimum they recommended) and the other half was the license. With Peer, we can start around $125/month and don’t have to worry about the license for the first year.

eoliphant19:07:37

Having said that, with the Ions, if you can move your code over, your TCO numbers might line up. Hmm, yeah, it shouldn’t be that high. The principal cost is the i3s, which are a little over $100/month each, and the license was just a bit more, I thought. Let me double-check our billing when I get a chance.

eoliphant19:07:41

Yeah, I ended up moving our on-prems to i3s with valcache enabled.

mdhaney19:07:00

I was thrown off because the pricing calculator on AWS marketplace was broken. When I saw the real price, my jaw dropped. It was basically the same as the instances.

eoliphant19:07:48

Yeah, you’re right, my bad. On an i3.large it’s $226/month per instance.

mdhaney19:07:16

☹️

eoliphant19:07:36

gotta double check

eoliphant19:07:47

I know there are more options for running query group nodes

mdhaney19:07:58

I’m sure that setup has a lot of capacity. The thing is, my customer is bootstrapping this app and needs to get to where ad revenue covers costs ASAP. So we need as cheap as possible and then scale up as needed.

eoliphant19:07:09

but I’m not sure if you can, say, pick a non-i3 for the transactors

eoliphant19:07:15

yeah totally get that

eoliphant19:07:45

and that’s a bit more to recoup, even factoring in the ability to eliminate app servers etc

mdhaney19:07:54

There are a lot of things I like about Peer, so not too bad to go back. It definitely made it easier to use web sockets.

eoliphant19:07:25

yeah it’s only i3.xxx instances for production transactors

eoliphant19:07:46

yeah there are a few things i miss for sure with the peer api

eoliphant19:07:54

d/filter, etc for sure

mdhaney19:07:41

The transaction log is useful too. I’m using it to send push updates to clients through web sockets.
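On Peer that part is pretty direct; the shape is roughly this (sketch only; `send-to-clients!` is a stand-in for whatever websocket broadcast fn you have):

```clojure
(ns example.push
  "Sketch of pushing transaction updates to websocket clients (Peer API)."
  (:require [datomic.api :as d]))

(defn start-push-loop!
  "Consumes the transaction report queue on a background thread and
   forwards each report's tx-data to connected clients."
  [conn send-to-clients!]
  (let [queue (d/tx-report-queue conn)]
    (doto (Thread.
           (fn []
             (loop []
               ;; blocks until the next transaction report arrives
               (let [{:keys [tx-data db-after]} (.take queue)]
                 (send-to-clients! {:tx-data (seq tx-data)
                                    :basis-t (d/basis-t db-after)})
                 (recur)))))
      (.setDaemon true)
      (.start))))
```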

eoliphant19:07:49

Yeah, I sorta get why it’s not in Cloud, as you can pretty easily roll your own. We did; it shoots stuff off to AWS whatever, but it still would have been nice to have native support. We started off doing websocket support by streaming log stuff out, then through AWS IoT, etc. etc. A bit roundabout, but it worked. Now they have the HTTP Direct stuff, though, so we’re looking at simplifying that bit.

Mark Addleman22:07:45

I'm curious what your approach to emulating the Tx Log is. Do you just wrap Datomic's transact fn with something that pushes the results to a queue?

eoliphant01:07:06

I have a lambda that you call; it checks the offset, reads the log entries since the offset, calling fns we've configured, then saves the offset. You can only get per-minute resolution using CloudWatch Events, but there's a trick you can do with Step Functions to, say, call it every 10 secs, etc.
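The core of it is roughly this (sketch; `load-offset!`/`save-offset!` are hypothetical, e.g. backed by DynamoDB, and `handlers` are the configured fns):

```clojure
(ns example.txlog-poller
  "Sketch of a polled tx-log reader for Datomic Cloud (client API)."
  (:require [datomic.client.api :as d]))

(defn poll-tx-log!
  "Reads tx-log entries since the saved offset (a t value), calls each
   configured handler on every entry, then persists the new offset."
  [conn handlers load-offset! save-offset!]
  (let [start   (load-offset!)
        ;; :start is inclusive; :limit -1 lifts the default cap of 1000
        entries (vec (d/tx-range conn {:start start :limit -1}))]
    (doseq [entry entries          ; each entry is a map with :t and :data
            handle! handlers]
      (handle! entry))
    (when-let [last-t (:t (last entries))]
      (save-offset! (inc last-t)))))
```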

👍 4
daniel.spaniel23:08:19

@U380J7PAQ are you doing websocket things with a Datomic Ion server?

eoliphant22:08:36

Hey @UEC6M0NNB, sorry, I was traveling. Yeah, I’m actually working on tryna get websockets/SSE working with HTTP Direct as we speak

daniel.spaniel22:08:05

Oh boy.. if you get something, let me know. I would love to see how you did it (we have been puzzled for a while on this one)

eoliphant22:08:19

Will do 🙂