2020-12-03
Channels
- # adventofcode (151)
- # asami (34)
- # babashka (43)
- # beginners (111)
- # cider (2)
- # clj-kondo (6)
- # cljdoc (12)
- # clojure (140)
- # clojure-australia (10)
- # clojure-europe (14)
- # clojure-france (5)
- # clojure-gamedev (5)
- # clojure-nl (4)
- # clojure-uk (10)
- # clojurescript (20)
- # community-development (9)
- # conjure (1)
- # core-async (4)
- # cryogen (3)
- # cursive (2)
- # datomic (17)
- # emacs (9)
- # events (1)
- # fulcro (27)
- # juxt (8)
- # kaocha (2)
- # lambdaisland (14)
- # off-topic (23)
- # pathom (37)
- # pedestal (2)
- # re-frame (8)
- # reagent (8)
- # reclojure (9)
- # reitit (5)
- # reveal (34)
- # shadow-cljs (27)
- # spacemacs (10)
- # tools-deps (123)
- # vim (28)
- # xtdb (17)
I don't suppose there's a way to make a peer server serve up a database that was created after the peer server started? (apart from restarting the peer server)
No, you would have to restart. What is the use case for doing this? Perhaps this is something we should consider adding a feature for. As an aside, you can pass multiple -d options to serve multiple databases, and the peer server can also serve in-memory dbs.
this is for a multi-tenant system where one tenant = one db. We are setting up analytics and still figuring out what our setup will look like. It's appealing to have a single peer server = single catalog, but then we would have to restart it when adding a tenant.
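For reference, a minimal sketch of what serving several databases from one peer server can look like; the endpoint, access key/secret, and tenant database names below are illustrative, not taken from this thread:
```clojure
;; Peer server started with one -d option per database (illustrative), e.g.:
;;   bin/run -m datomic.peer-server -h localhost -p 8998 -a myaccesskey,mysecret \
;;     -d tenant-a,datomic:dev://localhost:4334/tenant-a \
;;     -d tenant-b,datomic:mem://tenant-b
(require '[datomic.client.api :as d])

(def client
  (d/client {:server-type :peer-server
             :endpoint    "localhost:8998"
             :access-key  "myaccesskey"
             :secret      "mysecret"
             :validate-hostnames false}))

;; Each -d entry becomes a separately addressable db-name on the same server;
;; adding a new tenant db currently means adding another -d and restarting.
(def conn-a (d/connect client {:db-name "tenant-a"}))
(def conn-b (d/connect client {:db-name "tenant-b"}))
```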
Can you clone a full Datomic setup by copying the underlying storage over? For example, by taking Postgres dumps and importing them afterwards?
Are you talking about on-prem or cloud? In on-prem, the supported method would be backup/restore. You can even use backup and restore to move between underlying storages: https://docs.datomic.com/on-prem/backup.html Please note that Datomic backup/restore is not intended as a tool for "forking" a DB, but you can restore into a URI that already points to a different point-in-time for the same database. You cannot restore into a URI that points to a different database.
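To make the backup/restore route concrete, a sketch based on the linked docs; the storage URIs and backup path are placeholders, not the asker's actual setup:
```clojure
;; Back up from the old storage and restore into the new one (same db name).
;; Run from the Datomic on-prem distribution directory:
;;
;;   bin/datomic backup-db \
;;     "datomic:sql://mydb?jdbc:postgresql://old-host:5432/datomic?user=datomic&password=..." \
;;     "file:/backups/mydb"
;;
;;   bin/datomic restore-db \
;;     "file:/backups/mydb" \
;;     "datomic:sql://mydb?jdbc:postgresql://new-host:5432/datomic?user=datomic&password=..."
;;
;; Peers then connect against the new storage:
(require '[datomic.api :as d])
(def conn
  (d/connect "datomic:sql://mydb?jdbc:postgresql://new-host:5432/datomic?user=datomic&password=..."))
```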
We did copy the Postgres data with pg_dump & psql -f, but now we seem to end up with partial data, with entries that consist of :datomic-replication/source-eid. Is that expected, or are we expecting something that cannot work?
Ah, we were apparently shadowing our own databases at both the storage and the Datomic level. It turns out this is possible, btw 🙂
It's possible if your storage backups are atomic, consistent backups (no read-uncommitted or other read anomalies). Not all storages can do that (DynamoDB), and not all do it by default (MySQL?), so just be careful.
Question about on-prem peer capacity: the docs indicate that 4GB of memory is recommended for production. If we have a beefy server with plenty of RAM, is there a benefit to scaling everything up, accounting for other processes being run, 64-bit Java, etc.?
The benefit is in increasing the peer object cache up to the size of the peer's working set (or of the whole database); you can also run queries with larger intermediate result sets (which must always fit in memory). No benefit beyond these. The risk of a large heap is the usual one with Java's CMS or G1 GC: longer pauses. If you're using a fancy new pauseless collector, this should also be a non-issue.
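As a concrete illustration (the property and values below are assumptions to verify against the on-prem system properties docs, not something stated above): the peer object cache is sized with the datomic.objectCacheMax JVM property, alongside the heap itself.
```clojure
;; Illustrative peer JVM options on a larger box (values made up):
;;
;;   -Xms12g -Xmx12g -Ddatomic.objectCacheMax=6g
;;
;; From a REPL on the running peer you can check what was picked up:
(System/getProperty "datomic.objectCacheMax")
(.maxMemory (Runtime/getRuntime)) ;; max heap in bytes
```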
Our db is quite large and we've already started rewriting some of our heavier queries with datoms, but having a larger cache in the peer should mean fewer trips to pull indices (hopefully). Does the transactor's memory need to increase to match the peers'?
No; you should size the transactor based on its own write and query load, not the peers'.
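For anyone following along, "rewriting heavier queries with datoms" means walking raw indexes lazily instead of materializing a large query result; a minimal sketch with the on-prem peer API (the :order/total attribute is hypothetical):
```clojure
(require '[datomic.api :as d])

;; Reduce over the AEVT index for a single attribute lazily, so the working
;; set stays bounded instead of building one large intermediate result.
(defn sum-order-totals [db]
  (->> (d/datoms db :aevt :order/total) ;; all datoms for :order/total
       (map :v)                         ;; datoms support keyword access (:e :a :v :tx)
       (reduce + 0)))
```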