This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2021-08-01
Channels
- # datomic (51)
here https://docs.datomic.com/on-prem/getting-started/connect-to-a-database.html it says “For now, you will use the “mem” storage” and uses -d hello,datomic:. How does one use a storage that persists?
I guessed :sql: and did:
bin/run -m datomic.peer-server -h localhost -p 8998 -a myaccesskey,mysecret -d humboi,datomic:
but I get:
Execution error (Exceptions$IllegalArgumentExceptionInfo) at datomic.error/arg (error.clj:79).
:db.error/invalid-sql-connection Must supply jdbc url in uri, or DataSource or Callable<Connection> in protocolObject arg to Peer.connect
Where do I get the jdbc url, and how do I supply it exactly?
Datomic's peer architecture is “bring your own storage”, so the jdbc url (if using a sql storage) is specific to whatever db you have set up separately. You can use the dev storage as a persisting store just for testing/experimentation or even very light production use; otherwise use one of the others.
Some jdbc url examples are in the datomic.api/connect docstring, but specifics depend on the jdbc driver you are using; check its documentation.
Eg documentation for postgresql jdbc uri: https://jdbc.postgresql.org/documentation/80/connect.html
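Putting those pieces together, a persistent SQL URI has the shape datomic:sql://<db-name>?<jdbc-url>. A sketch using a local Postgres (the host, database name, and credentials here are illustrative, not from this thread):

```
datomic:sql://humboi?jdbc:postgresql://localhost:5432/datomic?user=datomic&password=datomic
```

The whole URI needs to be quoted when passed on the command line, since it contains ? and & characters that the shell treats specially.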
This is what I get:
[email protected] datomic-pro-1.0.6269 % bin/run -m datomic.peer-server -h localhost -p 8998 -a myaccesskey,mysecret -d humboi,datomic:
[1] 36983
zsh: no matches found: humboi,datomic:
[email protected] datomic-pro-1.0.6269 %
[1] + exit 1 bin/run -m datomic.peer-server -h localhost -p 8998 -a myaccesskey,mysecret -
zsh: no matches found
this is an error from your shell; ? is a wildcard character. Quote the url
bin/run -m datomic.peer-server -h localhost -p 8998 -a myaccesskey,mysecret -d humboi,"datomic:"
Execution error at datomic.peer/get-connection$fn (peer.clj:661).
Could not find humboi in catalog
(datomic.api/create-database "datomic:")
in a repl using the peer (not client) api https://docs.datomic.com/on-prem/peer/peer-getting-started.html#connecting
Execution error (Exceptions$IllegalArgumentExceptionInfo) at datomic.error/arg (error.clj:79).
:db.error/read-transactor-location-failed Could not read transactor location from storage
Also, the repl route seems to be tricky to do from a dockerfile when creating a datomic image. Is there a command-line way to do this?
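One hedged command-line option (assuming the datomic peer library is on the Clojure CLI's classpath; the URI here is illustrative) is to evaluate the create call directly from the shell, e.g. in a Dockerfile RUN step:

```
clojure -M -e "(require '[datomic.api :as d]) (d/create-database \"datomic:sql://humboi?jdbc:postgresql://localhost:5432/datomic\")"
```

This is just d/create-database from the peer api wrapped in a one-liner; it still requires a running transactor and storage to succeed.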
this exception suggests to me that either postgres isn’t running, or the process can’t communicate with it, or the transactor isn’t running
This is a very useful post for anyone transitioning their solo topology to prod as part of the latest release of datomic. https://forum.datomic.com/t/experience-report-updating-from-solo-to-datomic-cloud-884-9095/1913/3 Is the "Lambda proxy" mentioned (in the above post) the lambda endpoint? https://docs.datomic.com/cloud/ions/ions-tutorial.html#configure-entry-points. Or is the lambda endpoint option still available?
Lambda proxy for http requests is redundant now that http-direct is available for all instances. I know you're working through a websockets scenario, which is why I said http requests above :)
Should I be using the API Gateway integration type of HTTP with "use HTTP proxy integration" with http direct? Proxy so I can get the connection id. That makes sense to me, but my first try (sending a ws connect request) didn't seem to register (returned a 400). So I'm thinking it over 🙂
hmm the http request type might need to be a GET not a POST.
If I have to use http-direct for websockets, since they're different protocols I would need something to do translation. Given aws offers HTTP as an integration type for websockets, I assume I can just configure the endpoint as the IonApiGatewayEndpoint.
or I'm just too impatient and aws hadn't deployed it yet; the version without the proxy, at least, is getting through
Hi, I'm the author of the post. Under the "old" datomic I knew how to make regular Ion Lambdas and Ion Lambda proxies work, but didn't succeed in configuring Http Direct in a prod stack. Things at AWS had evolved very differently from the Datomic docs. Our Websockets were handled through the Ion Lambda proxies, and worked fine that way. It really helped me to use the Replion documentation to set up a remote REPL to debug it until it worked. Wscat was a very useful CLI tool to debug connections from the other side too. When I migrated our stacks to the new Datomic, I moved our HTTP Ion Lambda proxy to become our HTTP Direct entry point, and left out the Websocket lambda proxy. I didn't try migrating it in any way, since we stopped using it a few months ago, so we didn't need to make sure it still works under the new Datomic. Therefore I'm not sure how I would approach a solution for both HTTP and WS through HTTP Direct, or whether AWS supports it yet.
@U0514DPR7 Thanks for the feedback. I'm finding some time here and there to set up websockets using datomic ions, and I'm going to try to share my experience in case it helps others. Also, as a form of self promotion, I'm looking for a job 🙂 I'm nearly sure what I'll end up with is two api gateways. The browser client would send a message to the one configured to take a websocket request (pictured above), which would forward it to the HTTP url of the apigateway I assume datomic is creating for us. I'm not sure what this might mean under the hood in terms of performance; I feel that's always part of the challenge with cloud and high-level abstractions: the second you're off the happiest of paths, things get odd.
This is the endpoint I had ionized with the previous Datomic when I made it work:
(defn wss
  "Web handler that returns the websocket world."
  ;; Assumed context (definitions not shown in the original message):
  ;; `start/start-once!` initializes app state once, `log` is the project's
  ;; logging namespace, and `OK` wraps a body in a 200 ring-style response.
  [req]
  (try
    (start/start-once!)
    (let [ctx    (get-in req [:datomic.ion.edn.api-gateway/data :requestContext] {})
          body   (get-in req [:datomic.ion.edn.api-gateway/data :body])
          stage  (get ctx :stage "stg")
          e-type (get ctx :eventType nil)
          action (get ctx :routeKey :default)
          connId (get ctx :connectionId nil)
          _      (log/info {:op "WssProxy" :step action})]
      (case action
        "message"     (printf "%s: Conn %s: Event %s, Action %s, Body: %s" stage connId e-type action body)
        "$connect"    (printf "%s: Conn %s: Event %s, Action %s, Body: %s" stage connId e-type action body)
        "$disconnect" (printf "%s: Conn %s: Event %s, Action %s, Body: %s" stage connId e-type action body)
        "$default"    (printf "%s: Conn %s: Event %s, Action %s, Body: %s" stage connId e-type action body)
        (printf "IMPOSSIBLE DEFAULT: %s: Conn %s: Event %s, Action %s, Body: %s" stage connId e-type action body))
      (OK (:body req "This response value isn't idiomatically required for the Websocket World
unless we activate the integration response in AWS API Gateway!")))
    (catch Throwable t
      (log/alert {:op "SystemStartupError" :ex t}) ; TODO separate startup vs handling
      (throw t))))
With the new datomic it would also work, except for those (get-in req [...]) in the let, because you will no longer use the ionize function to wrap this handler.
So from your WS Gateway to your HTTP Gateway, you should make sure you transport the action and connectionId that the HTTP endpoint will have to use.
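For a WebSocket API with an HTTP integration, one way to carry those values across is an integration request mapping template. A sketch using API Gateway's documented $context variables for WebSocket routes (the field names themselves are illustrative, not from the thread):

```
{
  "action": "$context.routeKey",
  "connectionId": "$context.connectionId",
  "body": $input.json('$')
}
```

The HTTP endpoint then reads action and connectionId out of the forwarded request body instead of the request context.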
We liked how API Gateway maintained WS connections and translated them to HTTP calls to our handler, so that our app almost wouldn't need to know it was responding to WS clients.
With that said, I expect your WS Gateway will be configured thusly:
This is the way that AWS will manage connections for you. But I hope I'm not misleading you. You seem to know some more than me where you plan to go!
thanks. Yeah, the part I'm working on now is that translation. The aws docs are clear on what to do in general, but I'm not sure what the request template should be. This setup (see pic) gives me a request with a body of "java.io.BufferedInputStream@...", which I can probably handle in the app; I'm trying the other way to do content handling (convert to text -> convert to binary) before I do more investigation, as it's faster to just try. I have a rudimentary understanding of what I'm doing. I need to be more certain what datomic is doing for me. I assume it's setting up an http api gateway, and aws seems to allow websocket apis to pass to http endpoints, but how does that compare to using the lambda? I assume they both just redirect, but one has load balancing? Would that mean I could load balance at both points?
I hope someone more knowledgeable in this area steps in. 🙂
there are no examples online of someone using the API-type=websockets-API with the Integration-type=http. My favorite part is the docs on it are like "select X if you want X". Oh, thanks, that's really useful, aws. Then the links go in a circle; it's like the links are trying to pass the buck.
my next step is to verify that I can't use the ion-config > lambdas option with lambda proxy, like everyone else on the planet 🙂
yep, lambda proxy works just fine.
hm, ok. so clearly my http gateways weren't doing anything, because I passed them my ClientApiGatewayEndpoint and not the IonApiGatewayEndpoint.
Ok, so this setup for http does work; I got it right the first time, but I didn't note that content handling was passthrough. Every other selection seems to result in the request not making it to the application. Once you select and save something other than passthrough, it won't let you select it again!
huzzah, ok, I just have to slurp the body of the request, which is a BufferedInputStream
🎉 And thanks for the guide you wrote here: https://forum.datomic.com/t/websocket-guide-wip/1916
thanks @U0514DPR7.
And thanks for your feedback. I'm curious: is it sufficient to just slurp the request body? Asking because I'm getting an error whose cause is "stream closed"; see the full error below. Honestly, I have avoided thinking about some of the finer points of io and just defaulted to using slurp whenever possible, which seems to be nearly always :0
IonHttpDirectException (Event, Tid 111, Timestamp 1628020795559)
java.io.IOException: Stream closed
	at java.io.BufferedInputStream.getBufIfOpen(BufferedInputStream.java:176)
	at java.io.BufferedInputStream.read(BufferedInputStream.java:342)
	at sun.nio.cs.StreamDecoder.readBytes(StreamDecoder.java:284)
	at sun.nio.cs.StreamDecoder.implRead(StreamDecoder.java:326)
	at sun.nio.cs.StreamDecoder.read(StreamDecoder.java:178)
	at java.io.InputStreamReader.read(InputStreamReader.java:181)
	at java.io.BufferedReader.fill(BufferedReader.java:161)
	at java.io.BufferedReader.read1(BufferedReader.java:212)
	at java.io.BufferedReader.read(BufferedReader.java:287)
	at java.io.Reader.read(Reader.java:229)
	at $fn__11564.invokeStatic(io.clj:337)
	at $fn__11564.invoke(io.clj:334)
	at clojure.lang.MultiFn.invoke(MultiFn.java:239)
	at $copy.invokeStatic(io.clj:406)
	at $copy.doInvoke(io.clj:391)
	at clojure.lang.RestFn.invoke(RestFn.java:425)
	at clojure.core$slurp.invokeStatic(core.clj:6956)
	at clojure.core$slurp.doInvoke(core.clj:6947)
	at clojure.lang.RestFn.invoke(RestFn.java:410)
	at tomatto.backend.datomic.ion.websocket$http_handler.invokeStatic(websocket.clj:28)
	at tomatto.backend.datomic.ion.websocket$http_handler.invoke(websocket.clj:26)
	at clojure.lang.Var.invoke(Var.java:384)
	at datomic.ion.http_direct$invoke_ion.invokeStatic(http_direct.clj:79)
	at datomic.ion.http_direct$invoke_ion.invoke(http_direct.clj:72)
	at datomic.ion.http_direct$processing_callback$fn__11670.invoke(http_direct.clj:91)
	at cognitect.http_endpoint.Endpoint$fn__13285$fn__13286$fn__13287.invoke(http_endpoint.clj:181)
	at clojure.core$binding_conveyor_fn$fn__5773.invoke(core.clj:2034)
	at clojure.lang.AFn.call(AFn.java:18)
	at java.util.concurrent.FutureTask.run(FutureTask.java:264)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
	at java.lang.Thread.run(Thread.java:829)
Cause: Stream closed
hmm, is the error telling me that by the time I try to slurp the body/stream it's already closed? maybe you can only slurp once, for reasons.
@U0DJ4T5U1 That's standard InputStream behavior https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/io/BufferedInputStream.html#close()
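Concretely, the usual fix is to read the body exactly once and reuse the resulting string. A minimal sketch with a plain Ring-style map (the handler name is hypothetical, not from the thread):

```clojure
;; A request body InputStream can be consumed only once; slurp it a single
;; time and pass the string around instead of re-reading the stream.
(defn http-handler [{:keys [body]}]
  (let [body-str (when body (slurp body))] ; first and only read of the stream
    {:status  200
     :headers {"Content-Type" "text/plain"}
     :body    (str "echo: " body-str)}))
```

Anything else that needs the body (logging, parsing, validation) should work from body-str rather than touching the stream again.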
fair enough. I'm surprised I never ran into that before.
thanks!