#datomic
2019-05-28
hadils00:05:04

Hi @joe.lane. I am having classpath difficulties with Jetty. Did you run into this problem and, if so, how did you fix it?

hadils00:05:02

Hi @joe.lane here's my dev code:

(ns stackz.dev
  (:require [stackz.graphql :as graphql]
            [com.walmartlabs.lacinia.pedestal :as lacinia]
            [io.pedestal.http :as http]))

(def service (lacinia/service-map (graphql/load-schema) {:graphiql true}))

(defonce runnable-service (http/create-server service))
Here's what's in my REPL:
(http/start dev/runnable-service)
NoSuchMethodError org.eclipse.jetty.http.pathmap.PathMappings.put(Lorg/eclipse/jetty/http/pathmap/PathSpec;Ljava/lang/Object;)Z  org.eclipse.jetty.servlet.ServletHandler.updateMappings (ServletHandler.java:1430)
Any suggestions?

hadils00:05:40

Nvm, I added jetty-servlet to my :dev dependencies and it works!
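For anyone hitting the same NoSuchMethodError: it usually means mismatched Jetty artifact versions on the classpath. A minimal sketch of the fix in deps.edn (the version number here is an assumption; align it with whatever jetty-server version your Pedestal Jetty adapter brings in):

```clojure
;; deps.edn (sketch; the version is an assumption -- match your jetty-server version)
{:aliases
 {:dev
  {:extra-deps
   {org.eclipse.jetty/jetty-servlet {:mvn/version "9.4.18.v20190429"}}}}}
```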

kardan05:05:11

Anyone have an idea why I’m seeing an error like

"Cause": "No implementation of method: :->bbuf of protocol: #'datomic.ion.lambda.api-gateway/ToBbuf found for class: java.io.File"
https://docs.datomic.com/cloud/ions/ions-reference.html#signatures made me think that a response below would be ok
{:status 200,
 :headers
 {"Content-Length" "69911",
  "Last-Modified" "Mon, 27 May 2019 10:22:28 GMT",
  "Content-Type" "text/javascript"},
 :body
 #object[java.io.File 0x585ec33a "/..../public/js/main.js"]}

But I guess the “File” reference is for a lambda ion type and this is a “web service”. But I’m unsure of the “spec” of a “web service”.

kardan05:05:16

I’m trying to serve compiled cljs from resources hosted in the Ion via API Gateway and Lambda (no network load balancer)

kardan05:05:41

https://docs.datomic.com/cloud/troubleshooting.html#org846f16f “Lambda ions must return a String, InputStream, ByteBuffer, or File. Function signatures for all ion types can be found in the ion reference.” <- makes me confused

Joe Lane06:05:22

@kardan any reason you don’t want to just serve the content out of a cdn like cloudfront backed by s3 instead?

Joe Lane06:05:08

Thats the solution we’ve used at work and it’s worked very well.

kardan06:05:37

Maybe not a good reason. I just started this way and thought it would be nice to have more control: maybe SSR, only one Ion to deploy when upgrading, etc. But I’m dabbling with Ions & cljs at this point. We have everything in K8s and behind a lot of infrastructure at work, so the idea of having only one thing to think about sounded “nice”. But maybe the API Gateway should only host the API… 🙂

henrik05:05:48

FWIW, I'm doing exactly that. There's a Clojure/ClojureScript webapp running as an Ion, hooked up to API Gateway via HTTP Direct. Rather than serving resources from S3, I've stuck the resultant API Gateway URL in CloudFront, which means that the CloudFront edge network is taking care of caching and serving static resources, rather than S3. Updating the app is just a matter of pushing and deploying the Ion and invalidating the CloudFront cache.

👍 4
steveb8n07:05:07

@kardan I do the same as Joe is suggesting. My CI server uploads my CLJS artifact to s3 and updates an SSM param with the new url. the host page (and Ion) reads the SSM param and that’s it. I do this to set long cache expiration headers on the CLJS file so it’s only downloaded once

kardan07:05:39

ok, thanks for the pointers. Need to read up on SSM params.

kardan08:05:48

For what it’s worth, I did a little slurping of File bodies in an interceptor and got things to work for now. Maybe S3 would be better for proper production usage, but since I’m only exploring I might not make the investment
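In case it helps anyone else, here’s a minimal sketch of that interceptor approach. The names (file-body->string, slurp-file-body) are my own invention; the idea is just to slurp any java.io.File response body into a String before the ion adapter tries to serialize it:

```clojure
(ns example.file-body)

;; Hypothetical helper: if the response :body is a java.io.File, slurp it
;; into a String so the API Gateway ion adapter can serialize it.
(defn file-body->string [response]
  (if (instance? java.io.File (:body response))
    (update response :body slurp)
    response))

;; Sketch of a Pedestal leave-interceptor wrapping the helper.
(def slurp-file-body
  {:name  ::slurp-file-body
   :leave (fn [context]
            (update context :response file-body->string))})
```

Note this reads the whole file into memory, which is fine for small compiled cljs assets but not for large files.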

Ben Hammond11:05:07

I've been using io.rkn/conformity {:mvn/version "0.5.1"} to manage schema updates on a local dev Datomic. I'm experimenting with Datomic Cloud; conformity relies upon the peer API, so I don't expect it to work. Is there an equivalent for Datomic Cloud?
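One common approach (a sketch, not an official conformity port): Datomic schema transactions are idempotent for additive changes, so you can simply re-transact your schema on startup with the client API. The names here (ensure-schema!, the :user/email attribute) are illustrative only:

```clojure
(ns example.schema
  (:require [datomic.client.api :as d]))

;; Illustrative schema; re-transacting identical attribute definitions
;; is a no-op, which covers the common conformity use case.
(def schema
  [{:db/ident       :user/email
    :db/valueType   :db.type/string
    :db/cardinality :db.cardinality/one
    :db/unique      :db.unique/identity}])

(defn ensure-schema!
  "Transact schema against a Datomic Cloud connection. Safe to call
  repeatedly for additive changes; renames and data migrations still
  need hand-written, tracked logic (what conformity's norms provide)."
  [conn]
  (d/transact conn {:tx-data schema}))
```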

joshkh16:05:49

no big deal, just pointing out a dead link on: https://docs.datomic.com/cloud/ions/ions.html#how-bond the very last bullet's HTTP Direct link returns a 404: https://docs.datomic.com/cloud/ions/ions-http-direct.html

marshall16:05:41

@joshkh thanks i’ll fix it

Christian20:05:12

Hola, I'm trying to export an entire table to csv (blasphemous, I know) and I'm having a little trouble. I'm using https://github.com/bostonaholic/datomic-export, but my tables are a couple of gigs in size and eventually my process dies with a heap error.

jaret20:05:02

No experience using datomic-export, but can you give the process more memory and see if you can get through? I am assuming you’re going OOM.

favila20:05:21

what do you mean by "an entire table"?

favila21:05:08

looking at its "entity puller", it looks like it reads all entities into memory

favila21:05:26

the "distinct" there

favila21:05:49

so your :e set may be very big

👍 4
favila21:05:52

set of entity ids

favila21:05:48

after that it's lazy though, so probably with enough memory you could do it

favila21:05:24

however you say "tables" so I suspect you have something particular in mind which you might be able to do in a fully incremental manner

Christian21:05:09

Well, yeah, we're definitely not doing anything idiomatically, so we're just treating each... database? as its own table.

Christian21:05:37

As in, all entities in a given db have the same set of attributes.

Christian21:05:16

Sorry, I'm new to this project and my first task is to 'get data out' so I'm still learning the vocabulary.

Christian21:05:38

My lisp is also about 15 years rusty so it's a slog.

Christian21:05:19

@U09R86PA4 Does that make sense as a way to iterate through the set of entities? Pulling distinct of :e? It seems a bit weird to me, wouldn't the index already have the data? Couldn't you just reconstruct the entities after the d/datoms call?

favila21:05:48

it's getting all entities which have any of the specified attributes, so that's why it walks over :aevt indexes and why it gets :e

favila21:05:42

if there's one attribute you know all the entities you are interested in have and only those entities have, you can do this:

Christian21:05:59

Ahh, I see. We've been calling it with a list of all 'columns' (attributes) we expect on a given entity.

favila21:05:31

entities have no "schema" like tables do, so it isn't generally safe to assume anything about what entities an attribute has

favila21:05:07

I think it's probably unusual for all user-partition entities of a db to all have the same attrs on them

Christian21:05:23

Right, the folks who put together the initial system basically treated it like a traditional table.

Christian21:05:42

Definitely there's nothing properly idiomatic in this system. I think you'd be horrified.

Christian21:05:21

Sorry though, you were going to say what I could do if there was a single attribute contained on all entities? (which there is)

favila21:05:28

sorry got interrupted

favila21:05:31

(with-open [csvfile (clojure.java.io/writer the-file)]
  (clojure.data.csv/write-csv csvfile [["column1" "column2" "etc"]])
  (->> (d/datoms db :aevt :the-cardinality-one-attr)
       (map (fn [[e]]
              (d/pull db my-pull-expression-with-all-attrs e)))
       (map (juxt :attr-in-coll-1 :attr-in-coll2 ,,,))
       (clojure.data.csv/write-csv csvfile)))

favila21:05:58

at the end of the day that project is a fancy flexible wrapper around this core process

favila21:05:38

since you can make a simplifying assumption about what entities to pull, you can do it with a single index seek and no set-construction

favila21:05:05

this sketch won't have the largest possible throughput, but it will have very bounded memory use

favila21:05:10

The d/datoms gets the entities you are interested in based on the attr you know they all have an assertion for. If it's cardinality-one, you know that the entities are unique already as you seek

favila21:05:18

the first map gets all the attrs

favila21:05:54

the second map formats the result of the pull for the csv (i.e. arranges into a flat list of columns)

favila21:05:11

that's all the customization you need

Christian21:05:52

Ohhhhh excellent, I see.

Christian21:05:33

I don't need throughput (or a fully realized sketch) but what would you do conceptually if throughput was important here?

favila21:05:56

add batching and parallelization

favila21:05:08

e.g. seek over datoms, group them into large bundles, perform d/pull-many in parallel over many bundles at once
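A sketch of that batched, parallel variant (assuming the peer API; `db`, `:the-attr`, `format-row`, and the batch size are all placeholders to tune):

```clojure
;; Sketch: bounded-memory, higher-throughput export.
;; Seek the index lazily, bundle entity ids, pull bundles in parallel.
(->> (d/datoms db :aevt :the-attr)
     (map :e)                        ; entity ids, unique for cardinality-one
     (partition-all 1000)            ; group into large bundles
     (pmap (fn [eids]
             (d/pull-many db '[*] eids)))  ; parallel IO against Datomic
     (mapcat (fn [entities]
               (map format-row entities))) ; flatten pulls into csv rows
     (clojure.data.csv/write-csv csvfile))
```

Only a batch or two is realized at a time, so memory stays bounded while Datomic does as much IO as possible.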

favila21:05:20

anything to get datomic to do as much IO as possible

Christian21:05:52

So you'd still crawl the index in that case?

favila21:05:54

if parallelizing the pull was not enough I'd try to partition that first index seek, but I think most of the work will be in the d/pull

Christian21:05:09

Interesting.

favila21:05:22

(->> (d/datoms db :avet :the-attr)
     (reduce (fn [c _] (inc c)) 0))
will give you a quick count of how many entities you are dealing with

favila21:05:44

also an idea of how long it takes just to seek the index without doing any other work

Christian21:05:46

I made an attempt to do this as a pull expression passed to find, but I got even worse memory performance. Conceptually, what's different in that case?

Christian22:05:07

I'm having a hard time conceptualizing performance here compared to a trad db.

favila22:05:17

queries work in parallel aggressively, but they hold the entire result-set in memory

favila22:05:31

I would not hold the entire result set in memory

favila22:05:48

d/datoms is lazy

favila22:05:57

as is d/index-range

favila22:05:09

d/query is not

favila22:05:15

result set must fit in memory

favila22:05:04

so high throughput with bounded memory involves doing something in between

Christian22:05:09

Fascinating. I wish I had more time with this rather than my first project being to put tabular data into a table

Christian20:05:27

Any suggestions on how to do this?

hadils23:05:40

Hi @joe.lane. I cannot get my Datomic ion provider to work with Lacinia. Did you use lacinia or lacinia-pedestal? If the former, don't you have to write your own interceptor? I get a 404 error on / and a 500 error on the /graphql endpoint.

henrik05:05:26

The 404 error on / might be because you don't have a method configured for / in API Gateway.

Joe Lane23:05:09

I used the latter