
How do I connect to the postgres storage in datomic? I ran the following commands:

psql -f bin/sql/postgres-db.sql -U postgres

psql -f bin/sql/postgres-table.sql -U postgres -d datomic

psql -f bin/sql/postgres-user.sql -U postgres -d datomic


And when I run:

bin/run -m datomic.peer-server -h localhost -p 8998 -a myaccesskey,mysecret -d humboi,datomic:
I get:
[1] 38331
zsh: no matches found: humboi,datomic:
[email protected] datomic-pro-1.0.6269 %
[1]  + exit 1     bin/run -m datomic.peer-server -h localhost -p 8998 -a myaccesskey,mysecret -
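A likely cause of the `zsh: no matches found` error: the full connection URI contains a `?` (the JDBC query string), and zsh treats an unquoted `?` as a glob pattern. Quoting the `-d` argument avoids this. The URI below is a placeholder for illustration, not the asker's real one:

```shell
# Placeholder URI: zsh globs an unquoted '?', so quote the -d argument.
db_arg='humboi,datomic:sql://humboi?jdbc:postgresql://localhost:5432/datomic'

# Quoted, the literal string reaches the program unchanged:
printf '%s\n' "$db_arg"

# Then, e.g.:
# bin/run -m datomic.peer-server -h localhost -p 8998 -a myaccesskey,mysecret -d "$db_arg"
```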


@U01F1TM2FD5 you should run a transactor against your postgres storage. It looks like you are running peer-server (which you can do once you have a transactor up and running and a DB created for the peer-server to serve). In addition, I have an example of getting Postgres and MySQL storage up and running.


I hope that helps. Shoot me a support e-mail at <mailto:[email protected]|[email protected]> if you run into any issues 🙂


So this is how I’m running the transactor: bin/transactor config/samples/


And it starts:

Launching with Java options -server -Xms1g -Xmx1g -XX:+UseG1GC -XX:MaxGCPauseMillis=50
Starting datomic:sql://<DB-NAME>?jdbc:, you may need to change the user and password parameters to work with your jdbc driver ...
System started datomic:sql://<DB-NAME>?jdbc:, you may need to change the user and password parameters to work with your jdbc driver


This is the config:

protocol=sql
host=localhost
port=8998
###################################################################
# See
license-key=foobar
###################################################################
# See
sql-url=jdbc:
sql-user=datomic
sql-password=datomic
## The Postgres driver is included with Datomic. For other SQL
## databases, you will need to install the driver on the
## transactor classpath, by copying the file into lib/,
## and place the driver on your peer's classpath.
sql-driver-class=org.postgresql.Driver
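For reference, a filled-in Postgres connection section typically looks like the sketch below. The host, port, and credentials are placeholders matching the defaults created by the bundled `bin/sql` scripts, not values confirmed by this thread:

```properties
sql-url=jdbc:postgresql://localhost:5432/datomic
sql-user=datomic
sql-password=datomic
sql-driver-class=org.postgresql.Driver
```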


This is the config and the conn:

;; as environment variables
(defn cfg [] {:server-type :peer-server
              :access-key "myaccesskey"
              :secret "mysecret"
              :endpoint "datomic:"
              :validate-hostnames false})

(def *conn
  "Get shared connection."
  (delay (d/connect (d/client (cfg)) {:db-name "humboi"})))
But the transactions don’t seem to be working


Gives invalid connection config


I tried “localhost:8998” too for endpoint but that didn’t work either


You are trying to connect via peer-server. You need to connect with a peer and create a DB first, in order for the peer-server to be able to serve it.


Launch a peer against the transactor (i.e. from a REPL) and use the peer library to create a DB, like:


(require '[datomic.api :as d])
(def uri "datomic:")
(d/create-database uri)


Then you can standup your peer-server against that DB.


And you'll have the endpoint for your config map
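Once the DB exists, a peer-server client config typically points `:endpoint` at the host:port the peer-server was started with. This is a sketch assuming the earlier command line (`-h localhost -p 8998 -a myaccesskey,mysecret`); it requires the Datomic client library on the classpath:

```clojure
(require '[datomic.client.api :as d])

;; Sketch: :endpoint is host:port of the running peer-server,
;; not a datomic: URI.
(def client
  (d/client {:server-type :peer-server
             :access-key "myaccesskey"
             :secret "mysecret"
             :endpoint "localhost:8998"
             :validate-hostnames false}))

(def conn (d/connect client {:db-name "humboi"}))
```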


Please help me. So I have this web server. Should it use the peer or the client library?


There’s just a web server and the datomic server


If I use the client library in the web server, then I have to figure out a way to create the database in the dockerfile of the datomic server right?


But if I use the peer library, then I don’t have to create the database in the dockerfile and can create the database when the app starts?


What if both are running on kubernetes pods and the datomic server uses a Persistent Volume Claim? If I create the database on startup every time I deploy the cluster again, wouldn’t it overwrite what was already written to the persistent volume?


create-database is idempotent, so it is safe to run repeatedly (it will return false and do nothing if the db already exists). That said, it doesn’t make sense to me to do it this way, because it’s persistent state that’s a prerequisite to the entire system running. Just like you don’t put schemas/create-tables/create-auth, etc. into the startup of postgres, it doesn’t make sense to put db creation into the startup of the transactor. (Besides, an empty newly-created db is likely not usable by your application in practice anyway--it probably needs schema and some data.)
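The idempotence can be seen with the peer library against an in-memory URI (a sketch for illustration; requires datomic.api on the classpath):

```clojure
(require '[datomic.api :as d])

;; In-memory storage, for illustration only.
(def uri "datomic:mem://example")

(d/create-database uri) ;; => true  (created)
(d/create-database uri) ;; => false (already exists; no-op)
```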


It does make sense to create a db while creating the datomic image, because my web server that uses the client api cannot create a db when it goes up. So the db will be created by the datomic image upon startup, and the client can then read from and write to the db.

Tatiana Kondratevich 13:08:49

Hi, all! I'm currently following datomic-ions tutorial mentioned in documentation. I've noticed an :allow keyword in ion-config.edn with a predicate under it. However, there's no info on it in the docs.

{:allow [datomic.ion.starter.attributes/valid-sku?]
 :lambdas {:ensure-sample-dataset
           {:fn datomic.ion.starter.lambdas/ensure-sample-dataset
            :description "creates database and transacts sample data"}
           :get-schema
           {:fn datomic.ion.starter.lambdas/get-schema
            :description "returns the schema for the Datomic docs tutorial"}
           :get-items-by-type
           {:fn datomic.ion.starter.lambdas/get-items-by-type
            :description "return inventory items by type"}}
 :http-direct {:handler-fn datomic.ion.starter.http/get-items-by-type}
 :app-name "reltest-781-prod"}
Would be grateful if anyone could explain this for me: what is this keyword responsible for, what can be used under it, etc. Thanks! Link to the repo:


> :allow is a vector of fully qualified symbols naming functions. When you deploy an application, Datomic will automatically require all the namespaces mentioned under `:allow`.
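As a concrete illustration, listing `valid-sku?` under `:allow` is what lets a query invoke it; calling a function not on the list fails at runtime with an error telling you to allow it. A sketch, where `:inv/sku` is a hypothetical attribute name:

```clojure
;; Sketch: the predicate from :allow used inside a query.
;; :inv/sku is a hypothetical attribute; db is a database value.
(d/q '[:find ?sku
       :where
       [_ :inv/sku ?sku]
       [(datomic.ion.starter.attributes/valid-sku? ?sku)]]
     db)
```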

Tatiana Kondratevich 13:08:57

@U09R86PA4 thanks! somehow I wasn't able to find this through search bar

Daniel Jomphe 14:08:44

Also, note that the first time you use an unallowed function in a query or transaction function, you'll see an error appear, telling you that you should allow it. That's how you'll know you stepped out of the sandbox and must take action. For me, the first time it happened was when I used a function in the clojure.string namespace!


I’m trying to install datomic peer with leiningen:

[com.datomic/datomic-pro "1.0.6316"]
in :dependencies. But:
Could not find artifact com.datomic:datomic-pro:jar:1.0.6316 in central ()
Could not find artifact com.datomic:datomic-pro:jar:1.0.6316 in clojars ()


It’s not in Maven Central but in a credentialed repository. See for instructions for various build tools (including lein)
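The lein side of that setup looks roughly like the sketch below. The repository URL is the one my.datomic.com gives you; it is shown here as an assumption, not taken from this thread:

```clojure
;; project.clj sketch: credentialed repo plus gpg-encrypted credentials file.
:repositories {"my.datomic.com" {:url   "https://my.datomic.com/repo"
                                 :creds :gpg}}
```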


{#"my\.datomic\.com" {:username ""
                      :password "foo"}}
gpg --default-recipient-self -e \
~/.lein/credentials.clj > ~/.lein/credentials.clj.gpg
[email protected] humboi % lein repl
gpg: public key decryption failed: Inappropriate ioctl for device
gpg: decryption failed: Inappropriate ioctl for device
Could not decrypt credentials from /Users/prikshetsharma/.lein/credentials.clj.gpg
gpg: public key decryption failed: Inappropriate ioctl for device
gpg: decryption failed: Inappropriate ioctl for device


gpg is looking for a device to read your passphrase from and failing. (BTW, you shouldn’t have pasted your password above)

😂 2

I don’t understand


Doing export GPG_TTY=$(tty) and then running lein repl gives:


Please enter the passphrase to unlock the OpenPGP secret key:


Where do I find this password?


and also how would this work in the context of running it in a dockerfile?


because you can’t type in a password after doing lein uberjar in a dockerfile?
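For Docker/CI builds where there is no terminal for a gpg passphrase, Leiningen also supports reading repository credentials from environment variables, which sidesteps gpg entirely. A sketch (the env var names are a choice, not prescribed):

```clojure
;; project.clj sketch: credentials from env vars instead of credentials.clj.gpg.
;; Set DATOMIC_USERNAME and DATOMIC_PASSWORD in the build environment;
;; :env/datomic_username resolves to the DATOMIC_USERNAME variable.
:repositories {"my.datomic.com"
               {:url      "https://my.datomic.com/repo"
                :username :env/datomic_username
                :password :env/datomic_password}}
```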


Is there a way to dynamically compose d/q where clauses? I keep running into issues assembling the map because of the pesky ?e's floating around.


The map form is easier to construct: {:find [?a ?b ?c] :where [[?a :foo ?b] [?b :bar ?c]]} rather than [:find ?a ?b ?c :where [?a :foo ?b] [?b :bar ?c]]


(cond-> [] condition (conj clause clause2) …) is handy
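Putting the two tips together, a sketch of conditionally assembling the map form (attribute names are hypothetical):

```clojure
;; Build the query as plain data; add :in/:where entries only when needed.
(defn items-query [{:keys [type]}]
  (cond-> {:find  '[?e ?sku]
           :in    '[$]
           :where '[[?e :item/sku ?sku]]}
    type (-> (update :in conj '?type)
             (update :where conj '[?e :item/type ?type]))))

;; (items-query {})             ; base query
;; (items-query {:type :shirt}) ; adds the ?type binding and clause;
;;                              ; then pass :shirt as the extra d/q input
```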


other than that it’s Just Data, what specific pain are you hitting?


That's actually what I just stumbled on


I was trying to do the conj inside the form instead of using it to construct the form


Has anyone run into an issue where they can connect to a cloud system, but running import-cloud throws a :not-found anomaly stating that the configured endpoint “nodename nor servname provided, or not known”?


The fact that I’m running an older version of datomic cloud may likely have an effect here, but upgrading to the latest dev-local doesn’t resolve the issue.


I have the SOCKS proxy up and running (I’m able to connect using regular client)


The ExceptionInfo contains a :config key with a map with a :server-type :cloud instead of the :server-type :ion that I specify when calling import-cloud


Ok. Found the answer: I guess the required :proxy-port option got (rightly) dropped from the documentation.

Drew Verlee 21:08:57

Datomic cloud question: my websocket connect call will correctly connect through my aws lambda proxy to a handler and back. But not my Http none proxy API gateway. The request reaches the app handler, no errors are thrown, but I get a 500 response code. I'm going to try to get more visibility on what's going on at the apigateway layer. Ideas appreciated.

Joe Lane 22:08:00

What is: > But not my H} none proxy.


Turn on logs/tracing in your deployed stage in API Gateway and add logging to your functions that are being called. That allows a good deal of visibility as to where and what the error is.

Drew Verlee 22:08:18

@U0CJ19XAM sorry, not sure how that happened. The http proxy API gateway*

Drew Verlee 22:08:13

@U0508JRJC yep. That's where I think I should go next, thanks for the suggestion.

Drew Verlee 13:08:19

It's clear I need to do more configuration, but it's also becoming clearer that this level of configuration (request and response integration/templating) at the aws level isn't ideal. I feel like the proxy should be the way to go. The websocket can make a connection now, though; it was clear once I could see the logs what the issues were.