This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2022-05-27
Channels
- # announcements (3)
- # babashka (35)
- # babashka-sci-dev (42)
- # beginners (27)
- # calva (7)
- # clj-kondo (18)
- # cljs-dev (1)
- # clojure (40)
- # clojure-europe (141)
- # clojure-nl (1)
- # clojure-norway (6)
- # clojure-uk (40)
- # clojurescript (15)
- # community-development (4)
- # cursive (54)
- # events (1)
- # fulcro (8)
- # helix (5)
- # hyperfiddle (22)
- # introduce-yourself (6)
- # jobs (3)
- # joyride (26)
- # lsp (7)
- # music (1)
- # nbb (7)
- # off-topic (28)
- # pathom (120)
- # pedestal (3)
- # podcasts-discuss (2)
- # portal (2)
- # rdf (2)
- # releases (20)
- # rewrite-clj (9)
- # shadow-cljs (26)
- # spacemacs (1)
- # sql (13)
- # vim (10)
- # xtdb (63)
Could anyone point me to an example of a code-based migratus migration? I need to move ~1M records from one database to another for the first time and would like to see how others have done this
should this be done within a single transaction? should it be split? is it a terrible idea? 😄
Migratus (and similar tools like Ragtime) are best suited for managing db schema changes, rather than moving data - my team usually breaks this sort of stuff into multiple steps: • schema change in Ragtime • actual data migration using one-off scripts • sometimes follow up schema change to clean up unused columns
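To the original question about a code-based migratus migration: migratus does support code-based migrations alongside SQL ones — an .edn file in the migrations directory points at a namespace that defines up/down functions. A minimal sketch (treat the exact keys and the `:db` lookup as assumptions from the migratus README; verify against your version):

;; resources/migrations/20220527120000-copy-records.edn
;; {:ns migrations.copy-records
;;  :up-fn migrate-up
;;  :down-fn migrate-down}

(ns migrations.copy-records
  (:require [next.jdbc :as jdbc]))

;; migratus calls these with its config map
(defn migrate-up [config]
  ;; hypothetical one-shot copy; table names are placeholders
  (jdbc/execute! (:db config)
                 ["INSERT INTO new_table SELECT * FROM old_table"]))

(defn migrate-down [config]
  (jdbc/execute! (:db config)
                 ["DELETE FROM new_table"]))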
@U0JEFEZH6 and how does that one-off script look? do you dump data to csv and then restore it, or stream data from one db to the other?
It depends on what we're migrating - I'd say... we used each approach in the last couple of years, but mostly it involved PSQL dumps to S3 and restore from there or writing custom migration jobs for our async processing framework based on RabbitMQ
it really depends on what constraints you have - can you do a dry-run? have a maintenance window? can your app read from multiple dbs at once?
then you could write a small Clojure thing to slowly read row by row and copy over the data to the new DB - 1M rows isn't that much, unless each column has 2MB of JSON in it 😢
haha yeah no, it’s just 5 cols with strings 😄 the constraint is memory on the prod machine, so what I have now is using reduce
and jdbc/plan
to write batches of 100,000 to the target db. It’s not very pretty though.
We just migrated ~45M rows using v. similar approach (but it did involve 2MB JSON fields) - and yeah, it doesn't have to be pretty :-)
(reduce
 (fn [prev row]
   (if (< (count prev) 100000)
     (conj prev (vals row))
     (let [res (jdbc/execute-batch!
                db/db-ds
                "INSERT INTO <some secret schema> VALUES (?,?,?,?,?)"
                prev
                {})]
       (info "Intermediate migration of" (count res) "rows")
       [(vals row)])))
 []
 (jdbc/plan ds-opts [query]))
In the end there’s a small leftover that must be handled separately. partition
could be used with plan
but my tired brain couldn’t figure out anything more elegant right now
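For what it’s worth, one way to avoid the leftover special case is to let partition-all emit the final short batch: jdbc/plan is reducible, so it can feed an eduction that groups rows into batches and flushes every one, including the last. A sketch, assuming next.jdbc; `source-ds`, `target-ds`, `insert-sql`, and `query` are placeholders:

```clojure
(require '[next.jdbc :as jdbc])

(defn copy-in-batches!
  "Stream rows from source-ds and insert them into target-ds in
   fixed-size batches. partition-all also emits the final, possibly
   smaller batch, so no leftover needs separate handling."
  [source-ds target-ds insert-sql query batch-size]
  (run! (fn [batch]
          (jdbc/execute-batch! target-ds insert-sql batch {}))
        ;; jdbc/plan is reducible, so rows stream through the
        ;; transducers without realizing the whole result set
        (eduction (map vals)
                  (partition-all batch-size)
                  (jdbc/plan source-ds [query]))))
```

The key piece is partition-all: unlike hand-rolled counting in a reduce, it guarantees the trailing partial batch comes through.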