2021-08-03
Channels
- # announcements (12)
- # babashka (36)
- # beginners (126)
- # calva (26)
- # cider (10)
- # clj-kondo (71)
- # cljdoc (3)
- # cljsrn (2)
- # clojure (232)
- # clojure-australia (1)
- # clojure-europe (11)
- # clojure-france (20)
- # clojure-nl (4)
- # clojure-norway (1)
- # clojure-serbia (4)
- # clojure-uk (6)
- # clojurescript (62)
- # conjure (5)
- # cursive (12)
- # data-science (1)
- # datomic (57)
- # deps-new (1)
- # duct (3)
- # emacs (5)
- # events (8)
- # fulcro (6)
- # graalvm (5)
- # helix (3)
- # jobs (6)
- # jobs-discuss (3)
- # kaocha (4)
- # lsp (128)
- # malli (12)
- # missionary (22)
- # off-topic (1)
- # pathom (7)
- # polylith (27)
- # quil (1)
- # re-frame (20)
- # react (9)
- # reitit (12)
- # releases (8)
- # remote-jobs (3)
- # sci (3)
- # shadow-cljs (9)
- # spacemacs (10)
- # tools-deps (7)
- # vim (7)
- # xtdb (14)
How do I connect to the postgres storage in datomic? I ran the following commands:
psql -f bin/sql/postgres-db.sql -U postgres
psql -f bin/sql/postgres-table.sql -U postgres -d datomic
psql -f bin/sql/postgres-user.sql -U postgres -d datomic
And when I run:
bin/run -m datomic.peer-server -h localhost -p 8998 -a myaccesskey,mysecret -d humboi,datomic:
I get:
[1] 38331
zsh: no matches found: humboi,datomic:
[email protected] datomic-pro-1.0.6269 %
[1] + exit 1 bin/run -m datomic.peer-server -h localhost -p 8998 -a myaccesskey,mysecret -
@U01F1TM2FD5 you should run a transactor against your postgres storage. It looks like you are running peer-server (which you can do once you have a transactor up and running and a DB created for the peer-server to serve). In addition to https://docs.datomic.com/on-prem/overview/storage.html#sql-database I have a guide (https://jaretbinford.github.io/SQL-Storage/) on getting Postgres and MySQL storage up and running.
I hope that helps. Shoot me a support e-mail at [email protected] if you run into any issues 🙂
So this is how I’m running the transactor: bin/transactor config/samples/sql-transactor-template.properties
And it starts:
Launching with Java options -server -Xms1g -Xmx1g -XX:+UseG1GC -XX:MaxGCPauseMillis=50
Starting datomic:sql://<DB-NAME>?jdbc:, you may need to change the user and password parameters to work with your jdbc driver ...
System started datomic:sql://<DB-NAME>?jdbc:, you may need to change the user and password parameters to work with your jdbc driver
This is the config:
protocol=sql
host=localhost
port=8998


###################################################################
# See

license-key=foobar


###################################################################
# See

sql-url=jdbc:
sql-user=datomic
sql-password=datomic

## The Postgres driver is included with Datomic. For other SQL
## databases, you will need to install the driver on the
## transactor classpath, by copying the file into lib/,
## and place the driver on your peer's classpath.
sql-driver-class=org.postgresql.Driver
This is the config and the conn:
;; as environment variables
(defn cfg [] {:server-type :peer-server
              :access-key "myaccesskey"
              :secret "mysecret"
              :endpoint "datomic:"
              :validate-hostnames false})

(def *conn
  "Get shared connection."
  (delay (d/connect (d/client (cfg)) {:db-name "humboi"})))
But the transactions don’t seem to be working. Gives invalid connection config
I tried “localhost:8998” too for endpoint but that didn’t work either
You are trying to connect via peer-server. You need to connect and create a DB in order to be able to serve a DB.
Launch a peer against the transactor (i.e. a REPL) and use the peer library to create a DB like:
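(The snippet itself isn’t in the archive; what follows is a minimal sketch of that step, with the jdbc URL, user, and password as placeholders matching the sample config above.)
(require '[datomic.api :as d])

;; datomic:sql://<db-name>?<jdbc-url> — adjust to your storage
(def db-uri
  "datomic:sql://humboi?jdbc:postgresql://localhost:5432/datomic?user=datomic&password=datomic")

(d/create-database db-uri)   ;; => true on the first run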
Please help me. So I have this web server. Should it use the peer or the client library?
There’s just a web server and the datomic server
If I use the client library in the web server, then I have to figure out a way to create the database in the dockerfile of the datomic server right?
But if I use the peer library, then I don’t have to create the database in the dockerfile and can create the database when the app starts?
What if both are running on kubernetes pods and the datomic server uses a Persistent Volume Claim? If I create the database on startup every time I deploy the cluster again, wouldn’t it overwrite what was already written in the persistent volume?
create-database is idempotent, so it is safe to run repeatedly (it will return false and do nothing if the db already exists). That said, it doesn’t make sense to me to do it this way, because it’s persistent state that’s a prerequisite to the entire system running. Just like you don’t put schemas/create-tables/create-auth etc. into the startup of postgres, it doesn’t make sense to put db creation into the startup of the transactor. (Besides, an empty newly-created db is likely not usable by your application in practice anyway--it probably needs schema and some data.)
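(For reference, a sketch of what such a one-off setup step looks like from a peer; the URI and the schema attribute are illustrative, not from the thread.)
(require '[datomic.api :as d])

(def db-uri
  "datomic:sql://humboi?jdbc:postgresql://localhost:5432/datomic?user=datomic&password=datomic")

(d/create-database db-uri)   ;; true the first time, false (no-op) afterwards

;; Re-asserting an identical attribute definition adds no new datoms,
;; so the schema step is also safe to re-run.
(let [conn (d/connect db-uri)]
  @(d/transact conn [{:db/ident       :item/sku
                      :db/valueType   :db.type/string
                      :db/cardinality :db.cardinality/one
                      :db/unique      :db.unique/identity}]))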
it does make sense to create a db while creating the datomic image because my web server that uses the client api cannot create a db when it goes up, so the db will be created by the datomic image upon startup and the client can then read and write on the db.
Hi, all! I'm currently following the datomic-ions tutorial mentioned in the documentation. I've noticed an :allow keyword in ion-config.edn with a predicate under it. However, there's no info on it in the docs.
{:allow [datomic.ion.starter.attributes/valid-sku?]
:lambdas {:ensure-sample-dataset
{:fn datomic.ion.starter.lambdas/ensure-sample-dataset
:description "creates database and transacts sample data"}
:get-schema
{:fn datomic.ion.starter.lambdas/get-schema
:description "returns the schema for the Datomic docs tutorial"}
:get-items-by-type
{:fn datomic.ion.starter.lambdas/get-items-by-type
:description "return inventory items by type"}}
:http-direct {:handler-fn datomic.ion.starter.http/get-items-by-type}
:app-name "reltest-781-prod"}
Would be grateful if anyone could explain this for me: what this keyword is responsible for, what can be used under it, etc.
Thanks!
Link to the repo: https://github.com/Datomic/ion-starter
:allow is a vector of fully qualified symbols naming query functions (https://docs.datomic.com/cloud/query/query-data-reference.html#deploying) or transaction functions (https://docs.datomic.com/cloud/transactions/transaction-functions.html). When you deploy an application, Datomic will automatically require all the namespaces mentioned under `:allow`.
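(For example, once datomic.ion.starter.attributes/valid-sku? is listed under :allow, it can be called as a query predicate. A sketch; the :inv/sku attribute is assumed from the tutorial schema.)
(d/q '[:find ?sku
       :where
       [_ :inv/sku ?sku]
       [(datomic.ion.starter.attributes/valid-sku? ?sku)]]
     db)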
@U09R86PA4 thanks! somehow I wasn't able to find this through search bar
Also, note that the first time you use an unallowed function in a query or transaction function, you'll see an error appear, telling you that you should allow it. That's how you'll know you stepped out of the sandbox and must take action.
For me, the first time it happened was when I used a function in the clojure.string namespace!
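(In that case the fix is another entry in the same vector — a sketch, with clojure.string/starts-with? as an arbitrary example and the rest of ion-config.edn unchanged.)
{:allow [datomic.ion.starter.attributes/valid-sku?
         clojure.string/starts-with?]
 ;; :lambdas, :http-direct and :app-name as before
 }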
I’m trying to install datomic peer with leiningen:
[com.datomic/datomic-pro "1.0.6316"]
in :dependencies. But:
Could not find artifact com.datomic:datomic-pro:jar:1.0.6316 in central ( )
Could not find artifact com.datomic:datomic-pro:jar:1.0.6316 in clojars ( )
It’s not in maven central but in a credentialed repository. See http://my.datomic.com for instructions for various build tools (including lein)
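(Roughly, the Leiningen side looks like this; the repo URL follows the my.datomic.com instructions, and :creds :gpg is Leiningen’s hook for reading ~/.lein/credentials.clj.gpg.)
;; in project.clj (or a profile)
:repositories {"my.datomic.com" {:url   "https://my.datomic.com/repo"
                                 :creds :gpg}}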
{#"my\.datomic\.com" {:username ""
:password "foo"}}
and
gpg --default-recipient-self -e \
~/.lein/credentials.clj > ~/.lein/credentials.clj.gpg
gives:
[email protected] humboi % lein repl
gpg: public key decryption failed: Inappropriate ioctl for device
gpg: decryption failed: Inappropriate ioctl for device
Could not decrypt credentials from /Users/prikshetsharma/.lein/credentials.clj.gpg
gpg: public key decryption failed: Inappropriate ioctl for device
gpg: decryption failed: Inappropriate ioctl for device
gpg is looking for a device to read your passphrase from and failing. (BTW, you shouldn’t have pasted your password above)
I don’t understand
Doing export GPG_TTY=$(tty) and then running lein repl gives:
Please enter the passphrase to unlock the OpenPGP secret key:
Where do I find this password?
and also how would this work in the context of running it in a dockerfile?
because you can’t type in a password after doing lein uberjar in a dockerfile?
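(One way around interactive gpg in a container is Leiningen’s environment-variable credentials; the variable names below are placeholders you’d inject into the image or CI.)
;; in project.clj — lein resolves :env/datomic_username from $DATOMIC_USERNAME, etc.
:repositories {"my.datomic.com" {:url      "https://my.datomic.com/repo"
                                 :username :env/datomic_username
                                 :password :env/datomic_password}}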
Is there a way to dynamically compose d/q where clauses? I keep running into issues assembling the map because of the pesky ?e's floating around.
The map form is easier to construct {:find [?a ?b ?c] :where [[?a :foo ?b][?b :bar ?c]]}
not [:find ?a ?b ?c :where [?a :foo ?b][?b :bar ?c]]
I was trying to do the conj inside the form, instead of using it to construct the form
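(A sketch of composing the map form with plain data manipulation; the attributes mirror the example above.)
(def base-query
  '{:find  [?a ?b ?c]
    :where [[?a :foo ?b]]})

(defn add-clause [query clause]
  (update query :where conj clause))

(d/q (add-clause base-query '[?b :bar ?c]) db)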
Has anyone run into an issue where they can connect to a cloud system, but running import-cloud throws a :not-found anomaly stating that the configured endpoint “nodename nor servname provided, or not known”?
The fact that I’m running an older version of datomic cloud may well have an effect here, but upgrading to the latest dev-local doesn’t resolve the issue.
The ExceptionInfo contains a :config key with a map with a :server-type :cloud instead of the :server-type :ion that I specify when calling import-cloud
Ok. Found this answer on http://ask.datomic.com. I guess the required :proxy-port option got (rightly) dropped from the documentation.
Datomic cloud question: my websocket connect call will correctly connect through my aws lambda proxy to a handler and back. But not my Http none proxy API gateway. The request reaches the app handler, no errors are thrown, but I get a 500 response code. I'm going to try to get more visibility on what's going on at the apigateway layer. Ideas appreciated.
Turn on logs/tracing in your deployed stage in API Gateway and add logging to your functions that are being called. That allows a good deal of visibility as to where and what the error is.
@U0CJ19XAM sorry, not sure how that happened. The http proxy API gateway*
@U0508JRJC yep. That's where I think I should go next, thanks for the suggestion.
It's clear I need to do more configuration, but it's also becoming clearer that this level of configuration (request and response integration/templating) at the AWS level isn't ideal. I feel like the proxy should be the way to go. But the websocket can make a connection now; it was clear once I could see the logs what the issues were.