
With a Datomic On-Prem transactor deployed on AWS in a high-availability setup (one active + one standby transactor), what is the simplest way of identifying which instance is currently active and which is standby? I’ve found the instance IP in the transactor logs emitted at startup time, but that’s unwieldy.


Hi @UFR3C1JBU, the transactor logs will contain a lifecycle event log line with the status standby:



2017-10-19 10:13:26.532 INFO  default    datomic.lifecycle-ext - {:event :transactor/standby, :rev 1238602, :missed 1, :timestamp 1508407998353, :pid 2604, :tid 20}
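For example, a minimal shell sketch for spotting the role from a log file. The log location is an assumption (adjust for your deployment), and only the :transactor/standby event from the line above is shown; whether the active instance emits a distinct lifecycle event should be confirmed against your own logs:

```shell
# Sketch: read the most recent lifecycle event from a transactor log.
LOG=$(mktemp)

# Sample content using the lifecycle event format shown above:
cat > "$LOG" <<'EOF'
2017-10-19 10:13:26.532 INFO  default    datomic.lifecycle-ext - {:event :transactor/standby, :rev 1238602, :missed 1, :timestamp 1508407998353, :pid 2604, :tid 20}
EOF

# The most recent lifecycle event indicates this instance's role:
grep -o ':event :transactor/[a-z]*' "$LOG" | tail -n 1
# → :event :transactor/standby

rm -f "$LOG"
```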

Ivar Refsdal 12:10:16

I see that the Datomic On-Prem transactor bundles org.postgresql/postgresql "9.3-1102-jdbc41", released on Jul 18, 2014. Is that also the recommended PostgreSQL driver for peers? Why such an old release? At my company we are connecting to PostgreSQL 11.


I’m having to answer questions about when we apply patches for our infra. When are the EC2 instances that back QGs replaced? Is it only during Ions upgrades? I gather it’s not when we deploy. How often is the AMI updated? Have I missed this info in the docs?


Hi @U0VP19K6K, every time you deploy you will cycle the instances and install your Ion code on them. However, the instances’ AMI is tied to the CFT version they are on. So it depends on what you define as applying patches. Does that answer your question?


It does thanks Jaret 🙂


I just tried to upgrade an existing Datomic production system from 715-8973 to 884-9095 and got this error while upgrading the storage: > UPDATE_FAILED The following resource(s) failed to update: [EnsureAdminPolicyLogGroup]. My stack has a root stack with the name of my system and two children: compute & storage. I just found a ticket where the solution was to split the stack. So, should I also split our stack and try upgrading again?


hi @U012ADU90SW yes, you will want to perform a split-stack operation to get off the Marketplace template. From there on you will be able to update the individual stacks depending on what has been released.


If you encounter any issues please let me know directly or e-mail us at support, [email protected].


Thank you jaret! I'll give it a go and report back later

Tobias Sjögren 15:10:27

Trying to learn more about Datomic indexes.. It seems “covering indexes” means that the indexes are actually full copies of the datoms, sorted in different E-A-V orders. It seems the four indexes are stored in Amazon S3 - but are they copied to each peer also? I’d like to learn more about both indexes and caching - in general, and in Datomic specifically. Anyone know the best sources for such information?


it depends on whether you’re using Cloud or On-Prem, but either way the docs are a good place to start.

Tobias Sjögren 16:10:06

Other than the docs I should have said..


Peers pull segments (blocks of sorted datoms) down as they need them from storage or one of the caching layers in front of storage. The unindexed portion of changes is kept in memory on all peers.


a reindex incorporates the unindexed portion into a new full index, updates the pointer to the root of the segment tree, and the cycle begins again.

Tobias Sjögren 08:10:14

Ah, that great article again of course!

Tobias Sjögren 06:10:27

From what I understand all indexes are in both main storage and local storage/cache. If I’m correct about that I wonder what the reason behind this is - why wouldn’t it be enough to create the indexes only locally?


The cache is only populated by need


And it’s assumed to be ephemeral


And potentially incomplete

Tobias Sjögren 19:10:32

Only the ”original data” (the ”log”?) is local and the four indexes are transferred to peers when queries need them?


the log tail (that hasn’t been incorporated into the stored indexes) is kept in memory, and the indexes of it are kept in memory. The rest is fetched from storage as needed and merged with these to produce a complete view.

Tobias Sjögren 08:10:29

I wonder what might be the disadvantage of having the four indexes fully represented on each peer..


What do you mean by “fully represented”?

César Augusto 18:10:57

Hello! I am trying to learn more about transaction functions, but I am having a little difficulty understanding how to install a transaction function. Is it possible to have the transaction function implemented by two or more functions instead of having all the code inside the :code keyword? For example:

(defn other-func-2 [] <all-code-here>)
(defn other-func-1 [] (let [foo (other-func-2)] <all-code-here>))

#db/fn {:lang   :clojure
        :params [db offer]
        :code   (other-func-1)}

Instead of

#db/fn {:lang   :clojure
        :params [db entity]
        :code   <all-code-here>}


The problem is the environment isn’t shared among all peers. You can either install the code into the database itself, in which case it can only reference things you know all peers have in their environment (that includes the database itself--you can use d/invoke to invoke other db functions).


or you can ignore all this installation stuff and just put the functions into the transactor’s classpath.


if you use d/with, these need to be in the peer’s classpath too

César Augusto 19:10:57

Hey Favila, thank you for the answer!! 1 - I think I didn't understand how to do that... is there any example of installing code into the database itself? Because I thought I was doing that when I installed the db/fn. 2 - About the transactor's classpath: do I need to package my code as a lib in order to add it to the classpath? 3 - Do you know any example using d/with? I didn't understand how it relates to the other options


That link documents two different things: 1. putting executable code as data into the db; 2. calling a “normal” function from transaction data.


Talking about (2) first. On the transactor, you make functions available by including a jar with that code in it on startup:

export DATOMIC_EXT_CLASSPATH=mylibs/mylib.jar


Then you can use it anywhere in the transactor process by using the symbol name, e.g. attribute predicates, entity predicates, as a tx fn, or inside a query running in any of those.


in transaction data, “invoking” one of these looks like this [[my.namespace/myfunction arg1 arg2 argV…]]


d/with does everything a transactor does (takes a db and tx-data and returns a new db), but locally and doesn’t write to storage. But any of these function-symbol references will be resolved as it runs, so that DATOMIC_EXT_CLASSPATH jar has to be in the peer’s classpath too or d/with won’t work in those cases.
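To make the classpath route concrete, here is a minimal sketch. The function name `my.namespace/set-price`, the entity id, and `conn` are all hypothetical; it assumes the jar is on `DATOMIC_EXT_CLASSPATH` as described above:

```clojure
(require '[datomic.api :as d])

;; Invoke a classpath function directly in transaction data --
;; the symbol is resolved in the transactor process:
@(d/transact conn '[[my.namespace/set-price 12345 100]])

;; The same tx applied speculatively with d/with resolves the symbol
;; locally, so the jar must also be on the peer's classpath:
(d/with (d/db conn) '[[my.namespace/set-price 12345 100]])
```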


(1) is installing a function-code object (as a string that’s compiled+cached on-demand) into the db itself as a value on an entity, and you “invoke” that code in transaction data using its keyword ident, e.g. [[:my/tx-fn arg1 arg2]] . Peers can get this code through reading the database itself, but you have to use the special interfaces specific to that--the normal language runtime (e.g. require) doesn’t know about it.
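A minimal sketch of option (1), with hypothetical names throughout (`:my/set-price`, the `:item/price` attribute, and `conn` are illustrative):

```clojure
(require '[datomic.api :as d])

;; Build a db fn value; :code is stored as data in the db and
;; compiled+cached on demand.
(def set-price
  (d/function '{:lang   :clojure
                :params [db eid price]
                :code   [[:db/add eid :item/price price]]}))

;; Install it on an entity under a keyword ident:
@(d/transact conn [{:db/ident :my/set-price
                    :db/fn    set-price}])

;; Invoke it by ident in transaction data:
@(d/transact conn [[:my/set-price 12345 100]])

;; From inside another db fn's :code, the same fn can be called with
;; (d/invoke db :my/set-price db 12345 100)
```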


hopefully that answers all your questions?

César Augusto 20:10:43

Thank you again @U09R86PA4, yeah it looks like it answered all my questions. Just to make sure I got it:
1. To add code as data into the db: I can create it using #db/fn. It only accepts clojure.core symbols/functions and datomic.api symbols/functions (e.g. the d/q function); a custom function doesn't work in code-as-data. It executes like [[:my-fn arg1 arg2 ...]].
2. To call a custom function: I need to have the function in a library and add it to Datomic's classpath using DATOMIC_EXT_CLASSPATH. It executes like [[lib.namespace/lib-function arg1 arg2 ...]].
The question that arises now for me is about "Peers can get this code through reading the database itself, but you have to use the special interfaces specific to that--the normal language runtime (e.g. `require`) doesn't know about it":
1. I don't know in which case I would want peers to get the code.
Your answer helped me a lot - thank you again - I think there were some concepts I was missing, for example, that the function is sent to the transactor to be executed and the transactor doesn't have the same libs as the peers have.


If I send a list of transactions to 2 different databases (even on 2 different transactors), will the resulting t be the same on the different databases? Or is there no such guarantee?


No guarantees. T advancement is an implementation detail. In practice the only guarantee is that it won’t go backwards, and it will go up at least 1 for each successful transaction.

👍 1

(this question is about on-prem)


I'm looking to find the state of an entity immediately prior to a transaction. My first inclination is to use `(dec (d/tx->t db tx))` with an as-of query, however I know that t values do not always increase by 1. My impression is that this will still work, but is there a better approach?

Lennart Buit 20:10:24

You get a :db-before value in your transaction result. That's the database value immediately prior to the transaction just processed.

Lennart Buit 20:10:15

(Similarly, you get a :db-after value, which is the db value immediately after the transaction just processed 😉 )


Thanks for the response, but in this circumstance I'm querying for the tx-id; the transaction occurred in the past


Otherwise I would absolutely take that route!

Lennart Buit 20:10:10

ah yeah, that was the other possibility I wasn’t hoping for haha

😂 1

this will work in that the state of the database you read will be the one immediately prior to the T

🙏 2

however (dec some-t) isn’t guaranteed to be a t


i.e. you won’t necessarily be able to (d/t->tx t) and get an entity with :db/txInstant asserted on it


Thanks @U09R86PA4! I just found an example in day-of-datomic, and it looks like I don't even need to use the t value. I can just dec the tx-id.


you can dec either one. T and TX differ only by some constant high bits
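Putting that together, a sketch (assuming `db` is a current database value and `tx-id` is the entity id of a past transaction):

```clojure
(require '[datomic.api :as d])

;; Both views are equivalent, since t and tx differ only in constant
;; high partition bits:
(d/as-of db (dec tx-id))             ; dec the tx entity id directly
(d/as-of db (dec (d/tx->t tx-id)))   ; or convert to t first, then dec
```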




that’s why d/t->tx and d/tx->t exist and don’t need a db

👍 1

Whoops, that's embarrassing. I should know better than to use the wrong signature