
Hi @lanejo01. I am having classpath difficulties with Jetty. Did you run into this problem and, if so, how did you fix it?


Hi @lanejo01 here's my dev code:

(ns dev
  (:require [stackz.graphql :as graphql]
            [com.walmartlabs.lacinia.pedestal :as lacinia]
            [io.pedestal.http :as http]))

(def service (lacinia/service-map (graphql/load-schema) {:graphiql true}))

(defonce runnable-service (http/create-server service))
Here's what's in my REPL:
(http/start dev/runnable-service)
NoSuchMethodError org.eclipse.jetty.http.pathmap.PathMappings.put(Lorg/eclipse/jetty/http/pathmap/PathSpec;Ljava/lang/Object;)Z  org.eclipse.jetty.servlet.ServletHandler.updateMappings (
Any suggestions?


Nvm I added the jetty-servlet into my :dev dependency and it works!
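For context, a hedged sketch of what that fix could look like in a deps.edn :dev alias. The artifact coordinates are the real jetty-servlet ones, but the version shown is an assumption; it must match the Jetty version that Pedestal already pulls in, otherwise you get exactly the kind of NoSuchMethodError above:

```clojure
;; deps.edn -- the version here is an assumption; pick the one that
;; matches the jetty-server/jetty-http jars already on the classpath.
{:aliases
 {:dev {:extra-deps
        {org.eclipse.jetty/jetty-servlet {:mvn/version "9.4.18.v20190429"}}}}}
```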


Anyone have an idea why I’m seeing an error like

"Cause": "No implementation of method: :->bbuf of protocol: #'datomic.ion.lambda.api-gateway/ToBbuf found for class:"

It made me think that a response like the one below would be OK:
{:status 200,
 :headers {"Content-Length" "69911",
           "Last-Modified" "Mon, 27 May 2019 10:22:28 GMT",
           "Content-Type" "text/javascript"},
 :body #object[java.io.File 0x585ec33a "/..../public/js/main.js"]}

But I guess the “File” reference is for a lambda ion type and this is a “web service”. But I’m unsure of the “spec” of a “web service”.


I’m trying to serve compiled cljs from resources hosted in the Ion via API Gateway and Lambda (no Network Load Balancer)

kardan05:05:41 “Lambda ions must return a String, InputStream, ByteBuffer, or File. Function signatures for all ion types can be found in the ion reference.” <- makes me confused

Joe Lane06:05:22

@kardan any reason you don’t want to just serve the content out of a cdn like cloudfront backed by s3 instead?

Joe Lane06:05:08

That’s the solution we’ve used at work and it’s worked very well.


Maybe not a good reason. I just started this way and thought it would be nice to have more control: maybe SSR, and only one Ion to deploy when upgrading, etc. But I’m dabbling with Ions & cljs at this point. We have everything in K8s and behind a lot of infrastructure at work, so the idea of having only one thing to think about sounded “nice”. But maybe the API Gateway should only host the API… 🙂


FWIW, I'm doing exactly that. There's a Clojure/ClojureScript webapp running as an Ion, hooked up to API Gateway via HTTP Direct. Rather than serving resources from S3, I've stuck the resultant API Gateway URL in CloudFront, which means that the CloudFront edge network is taking care of caching and serving static resources, rather than S3. Updating the app is just a matter of pushing and deploying the Ion and invalidating the CloudFront cache.

👍 4

@kardan I do the same as Joe is suggesting. My CI server uploads my CLJS artifact to S3 and updates an SSM param with the new URL. The host page (and Ion) reads the SSM param, and that’s it. I do this to set long cache-expiration headers on the CLJS file so it’s only downloaded once.


ok, thanks for the pointers. Need to read up on SSM params.


For what it’s worth, I did a little slurping of File types in an interceptor and got things to work for now. Maybe S3 would be better for proper production usage, but since I’m only exploring I might not make the investment.
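A minimal sketch of the kind of interceptor described, assuming a Pedestal leave-interceptor; the interceptor name is hypothetical, and this is one plausible reading of "slurping on File types", not the actual code:

```clojure
;; Hypothetical sketch: a Pedestal leave interceptor that turns
;; java.io.File response bodies into Strings, since Strings are
;; one of the types lambda ions know how to return.
(def slurp-file-body
  {:name ::slurp-file-body
   :leave (fn [context]
            (let [body (get-in context [:response :body])]
              (if (instance? java.io.File body)
                (assoc-in context [:response :body] (slurp body))
                context)))})
```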

Ben Hammond11:05:07

I've been using io.rkn/conformity {:mvn/version "0.5.1"} to manage schema updates on a local dev Datomic. I'm experimenting with Datomic Cloud; conformity relies upon the peer API, so I don't expect it to work. Is there an equivalent for Datomic Cloud?


No big deal, just pointing out a dead link: the very last bullet's HTTP Direct link returns a 404.


@joshkh thanks i’ll fix it


Hola, I'm trying to export an entire table to CSV (blasphemous, I know) and I'm having a little trouble. I'm using datomic-export, but my tables are a couple of gigs in size and eventually my process dies with a heap error.


No experience using datomic-export, but can you give the process more memory and see if you can get through? I am assuming you’re going OOM.


what do you mean by "an entire table"?


looking at its "entity puller", it looks like it reads all entities into memory


the "distinct" there


so your :e set may be very big

👍 4

set of entity ids


after that it's lazy though, so probably with enough memory you could do it


however you say "tables" so I suspect you have something particular in mind which you might be able to do in a fully incremental manner


Well, yeah, we're definitely not doing anything idiomatically, so we're just treating each... database? as its own table.


As in, all entities in a given db have the same set of attributes.


Sorry, I'm new to this project and my first task is to 'get data out' so I'm still learning the vocabulary.


My lisp is also about 15 years rusty so it's a slog.


@U09R86PA4 Does that make sense as a way to iterate through the set of entities? Pulling distinct of :e? It seems a bit weird to me, wouldn't the index already have the data? Couldn't you just reconstruct the entities after the d/datoms call?


it's getting all entities which have any of the specified attributes, so that's why it walks over :aevt indexes and why it gets :e
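A minimal sketch of that entity-puller strategy as described, assuming the peer API with datomic.api aliased as d; the attribute names are hypothetical stand-ins for the "columns" passed to the tool. The distinct :e set built here is what grows with the database and causes the heap pressure:

```clojure
;; Hypothetical attrs standing in for the "columns" given to the tool.
(def attrs [:col/a :col/b :col/c])

;; Walk the :aevt index for each attr and accumulate the distinct set
;; of entity ids -- this entire set lives in memory at once.
(into #{}
      (mapcat (fn [attr]
                (map (fn [[e]] e) (d/datoms db :aevt attr))))
      attrs)
```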


if there's one attribute you know all the entities you are interested in have and only those entities have, you can do this:


Ahh, I see. We've been calling it with a list of all 'columns' (attributes) we expect on a given entity.


entities have no "schema" like tables do, so it isn't generally safe to assume anything about what entities an attribute has


I think it's probably unusual for all user-partition entities of a db to all have the same attrs on them


Right, the folks who put together the initial system basically treated it like a traditional table.


Definitely there's nothing properly idiomatic in this system. I think you'd be horrified.


Sorry though, you were going to say what I could do if there was a single attribute contained on all entities? (which there is)


sorry got interrupted


;; assumes (require '[clojure.java.io :as io] '[clojure.data.csv :as csv])
(with-open [csvfile (io/writer the-file)]
  (csv/write-csv csvfile [["column1" "column2" "etc"]])
  (->> (d/datoms db :aevt :the-cardinality-one-attr)
       (map (fn [[e]]
              (d/pull db my-pull-expression-with-all-attrs e)))
       (map (juxt :attr-in-coll-1 :attr-in-coll2 ,,,))
       (csv/write-csv csvfile)))


at the end of the day that project is a fancy flexible wrapper around this core process


since you can make a simplifying assumption about what entities to pull, you can do it with a single index seek and no set-construction


this sketch won't have the largest possible throughput, but it will have very bounded memory use


The d/datoms gets the entities you are interested in based on the attr you know they all have an assertion for. If it's cardinality-one, you know that the entities are unique already as you seek


the first map gets all the attrs


the second map formats the result of the pull for the csv (i.e. arranges into a flat list of columns)


that's all the customization you need


Ohhhhh excellent, I see.


I don't need throughput (or a fully realized sketch) but what would you do conceptually if throughput was important here?


add batching and parallelization


e.g. seek over datoms, group them into large bundles, perform d/pull-many in parallel over many bundles at once
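As a sketch of that batching idea, assuming the peer API and reusing the placeholder names from the earlier snippet; the bundle size is an arbitrary starting point to tune:

```clojure
;; Seek the index once, bundle entity ids, and pull bundles in
;; parallel with d/pull-many. Still lazy end to end, so memory
;; stays bounded by the bundle size times the pmap window.
(->> (d/datoms db :aevt :the-cardinality-one-attr)
     (map (fn [[e]] e))          ; entity ids from the index seek
     (partition-all 1000)        ; large bundles
     (pmap #(d/pull-many db my-pull-expression-with-all-attrs %))
     (mapcat identity))          ; flatten back to a seq of pulled maps
```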


anything to get datomic to do as much IO as possible


So you'd still crawl the index in that case?


if parallelizing the pull was not enough I'd try to partition that first index seek, but I think most of the work will be in the d/pull




(->> (d/datoms db :avet :the-attr)
     (reduce (fn [c _] (inc c)) 0))
will give you a quick count of how many entities you are dealing with


also an idea of how long it takes just to seek the index without doing any other work


I made an attempt to do this as a pull expression passed to find, but I got even worse memory performance. Conceptually, what's different in that case?


I'm having a hard time conceptualizing performance here compared to a trad db.


queries work in parallel aggressively, but they hold the entire result set in memory


I would not hold the entire result set in memory


d/datoms is lazy


as is d/index-range


d/query is not


result set must fit in memory


so high throughput with bounded memory involves doing something in between
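Concretely, the contrast looks something like this (attr name hypothetical, peer API assumed): the d/q form materializes its whole result set before returning, while the d/datoms form streams the index one datom at a time:

```clojure
;; Eager: the full set of ?e bindings is realized in memory.
(d/q '[:find (count ?e) . :where [?e :the-attr]] db)

;; Lazy: bounded memory, one datom at a time.
(reduce (fn [c _] (inc c)) 0 (d/datoms db :aevt :the-attr))
```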


Fascinating. I wish I had more time with this rather than my first project being to put tabular data into a table


Any suggestions on how to do this?


Hi @lanejo01. I cannot get my Datomic Ion provider to work with Lacinia. Did you use lacinia or lacinia-pedestal? If the former, don't you have to write your own interceptor? I get a 404 error on / and a 500 error on the /graphql extension.


The 404 error on / might be because you don't have a method configured for / in API Gateway.

Joe Lane23:05:09

I used the latter