
What are people using for data migrations?


Or is it all 'spin your own'?

Steven Deobald  17:04:13

Kinda. I wrote this: ... but I don't recommend using it. It artificially enforces schema-on-write by writing a strict schema to the database. That schema is read for every write, which is quite expensive. Unless you require schema-on-write, you probably want to reconsider whether you really want data migrations at all. (And even if you do want schema-on-write, you want to take a hard look at spec or malli or one of their friends, assuming you're working in Clojure.)

The intention of an immutable database is to capture state transitions as-and-when they happened. If you go back into your history, look up an old entity, and then shoehorn it into a new (updated) data shape ... the actual history is lost. This is more a philosophical problem than a technical problem.

If your domain demands that you keep your entities up to date with a more recent schema, you can definitely go over your history and apply that new schema. Those entities will get a new tx-time (as they should) and are now duplicated in your database (as they should be). But if you can relax your domain such that it can handle legacy data shapes, you should probably do that. It's more "honest" and also less hassle.
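A minimal sketch of what "go over your history and apply that new schema" could look like, assuming the XTDB 1.x API (`xtdb.api`), a started `node`, and a purely illustrative rename of `:user/fullname` to `:user/name` (none of these attribute names come from the conversation above):

```clojure
(require '[xtdb.api :as xt])

;; Hypothetical migration fn: rename :user/fullname to :user/name.
(defn migrate-user [doc]
  (-> doc
      (assoc :user/name (:user/fullname doc))
      (dissoc :user/fullname)))

;; Re-put every matching entity in its new shape. Each put gets a new
;; tx-time, so the old shape remains visible in the entity's history.
(defn migrate-all-users! [node]
  (let [db  (xt/db node)
        ids (map first
                 (xt/q db '{:find  [?e]
                            :where [[?e :user/fullname]]}))]
    (->> ids
         (map #(xt/entity db %))
         (map migrate-user)
         (mapv (fn [doc] [::xt/put doc]))
         (xt/submit-tx node))))
```

Note this is the "duplicate into the present" approach described above: the legacy documents are never destroyed, they are simply superseded at a new transaction time.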


Bitemporality unlocks a really cool way of doing migrations, too: write the new version into the future, and have your query engine fall back to the old version before that point in valid time.

👍 1

Hello, I am using xtdb for a simple CRUD test, but I noticed that sometimes, when I run the query, I simply receive no results. I am running it against a Confluent Kafka node. It seems to be something I did not understand correctly. The query I am running is basically this one:

(xt/q (xt/db (:node database)) '{:find [(pull ?e [*])] :where [[?e :user/name]]})


Are you trying to retrieve data you just added via a transaction? Did you wait for the transaction to complete?
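For context, a common cause of intermittently empty results right after a write is querying a db snapshot taken before the transaction has been indexed. A minimal sketch of the submit-then-await pattern, assuming the XTDB 1.x API and a started `node` (the document is illustrative):

```clojure
(require '[xtdb.api :as xt])

(let [tx (xt/submit-tx node [[::xt/put {:xt/id     :user-1
                                        :user/name "Alice"}]])]
  ;; Block until this node has indexed the transaction ...
  (xt/await-tx node tx)
  ;; ... then take the db snapshot, which now includes the write.
  (xt/q (xt/db node)
        '{:find  [(pull ?e [*])]
          :where [[?e :user/name]]}))
```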

👍 2

I don’t have this explicit constraint, but the data has been there for quite some time. I believe that the transaction is complete, because it sometimes returns the expected result, but there are times that it returns an empty array


Explicit constraint of waiting for the transaction to complete*

Steven Deobald  18:04:35

@UUA5X8NRL Could you post your xtdb config?


Sure, here it is:

{:sasl-conf {"security.protocol" "SASL_SSL"
             "sasl.jaas.config"  #envf [" required username=\"%s\" password=\"%s\";"]
             "sasl.mechanism"    "PLAIN"
             "client.dns.lookup" "use_all_dns_ips"
             ""                  "45000"
             "acks"              "all"
             ""                  #ref [:consumer-group]}
 :xtdb      {:xtdb.kafka/kafka-config {:bootstrap-servers #profile {:default #env BOOTSTRAP_SERVER}
                                       :properties-map    #profile {:default #ref [:sasl-conf]}}
             :xtdb/tx-log             {:xtdb/module   xtdb.kafka/->tx-log
                                       :kafka-config  :xtdb.kafka/kafka-config
                                       :tx-topic-opts {:topic-name         "xtdb-tx-log"
                                                       :replication-factor 3}}
             :xtdb/document-store     {:xtdb/module    xtdb.kafka/->document-store
                                       :kafka-config   :xtdb.kafka/kafka-config
                                       :doc-topic-opts {:topic-name         "xtdb-doc-store"
                                                        :replication-factor 3}
                                       :group-id       #ref [:consumer-group]}}}

Steven Deobald  18:04:41

Hrm. I don't see anything obviously wrong here, but I've also never used Kafka as a doc-store before. I'm guessing your actual source isn't public? (Not that I'd necessarily be of help there, either... it's possible your best bet is to wait for @refset to re-emerge. 🙂 )


It is not, hehehe