#datomic
2019-06-16
favila01:06:38

There is no serializer in their transit for the Datom class (an instance of a datom from d/datom, tx data, etc)

favila01:06:59

You will have to coerce it to, e.g., a vector
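A minimal sketch of that coercion, assuming the Datomic peer API, where a Datom exposes its entity, attribute, value, transaction, and added? fields (the helper name `datom->vec` is hypothetical):

```clojure
;; Sketch: coerce Datoms to plain vectors before transit serialization.
;; Assumes Datoms support keyword lookup of :e :a :v :tx :added
;; (positional access via destructuring also works on seqable Datoms).
(defn datom->vec
  "Turn a Datom into a plain [e a v tx added] vector that transit
  can serialize with its default handlers."
  [d]
  [(:e d) (:a d) (:v d) (:tx d) (:added d)])

;; Example: coerce the :tx-data of a transaction report:
;; (mapv datom->vec (:tx-data tx-report))
```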

lboliveira14:06:37

Hello! Last Friday I issued some :db/excise commands on a database with 50M entities: 200 transactions with 5000 :db/excise entries each. No individual attributes were excised, only whole entities. Afterwards I called (d/request-index conn) and (d/gc-storage conn #inst "2018"). The original transactor, which had 2 CPUs and 4 GB of RAM, stopped serving new requests. I stopped it, and requests began to be served by the backup transactor. The backup transactor worked for 10 minutes and then also stopped answering new requests. This pattern kept repeating until I set up a new transactor with 8 CPUs and 64 GB of RAM. All 8 CPUs ran at 100% for 2 or 3 minutes, then went idle except for 1 CPU stuck at 100%. The new transactor was able to satisfy new requests, and no more timeouts were logged on the peer. Since then, one CPU has been fully consumed right up to today, and it does not seem that the entities are being removed. Is there something I can do? Will this process end some day? Is there a way to find out how much of this job is complete? Should I consider a plan B? I can't afford the new transactor's bill for much longer. I am using datomic-pro-0.9.5561.50. java -server -cp resources:lib/*:datomic-transactor-pro-0.9.5561.50.jar:samples/clj:bin -Xmx60000m -Xms60000m -XX:+UseG1GC -XX:MaxGCPauseMillis=50 clojure.main --main datomic.launcher /opt/datomic-pro-0.9.5561.50/config/transactor.properties

# config/transactor.properties

memory-index-threshold=20g
memory-index-max=40g
object-cache-max=2g
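For context, the batched excision described above would look roughly like this. This is a hedged sketch against the Datomic peer API; the batch sizes come from the message, while `conn` and the entity-id seq `eids-to-remove` are assumed names:

```clojure
;; Sketch of the excision described above: ~200 transactions of
;; 5000 :db/excise entries each, excising whole entities (no
;; :db.excise/attrs, so all datoms of each entity are excised).
;; `eids-to-remove` is a hypothetical seq of entity ids.
(doseq [batch (partition-all 5000 eids-to-remove)]
  @(d/transact conn
               (for [eid batch]
                 {:db/excise eid})))

;; Afterwards, request an indexing job and garbage-collect
;; storage segments older than the given instant:
;; (d/request-index conn)
;; (d/gc-storage conn #inst "2018")
```

Excision is applied by the transactor during indexing, which is why the indexing job triggered by d/request-index can pin a transactor CPU for a long time on a 50M-entity database.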