This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2024-06-29
Channels
- # beginners (16)
- # cider (7)
- # clj-kondo (4)
- # clj-together (1)
- # cljsrn (1)
- # clojure (11)
- # clojure-europe (4)
- # clojure-norway (6)
- # clojure-spec (1)
- # clojurescript (2)
- # datalevin (35)
- # datomic (9)
- # honeysql (1)
- # introduce-yourself (3)
- # lsp (6)
- # off-topic (25)
- # pathom (1)
- # polylith (1)
- # releases (1)
- # shadow-cljs (38)
- # sql (5)
How do people in the Clojure world normally manage SQL table schemas? I saw this from https://clojure-doc.org/articles/ecosystem/java_jdbc/using_ddl/:
(jdbc/create-table-ddl :fruit
                       [[:name "varchar(32)" :primary :key]
                        [:appearance "varchar(32)"]
                        [:cost :int]
                        [:grade :real]]
                       {:table-spec "ENGINE=InnoDB"
                        :entities clojure.string/upper-case})
Recently I have been using Prisma and love the idea of defining tables in a central place and being able to do migrations, like this:
model User {
  id        Int      @id @default(autoincrement())
  createdAt DateTime @default(now())
  email     String   @unique
  name      String?
  role      Role     @default(USER)
  posts     Post[]
}
model Post {
  id        Int      @id @default(autoincrement())
  createdAt DateTime @default(now())
  updatedAt DateTime @updatedAt
  published Boolean  @default(false)
  title     String   @db.VarChar(255)
  author    User?    @relation(fields: [authorId], references: [id])
  authorId  Int?
}
enum Role {
  USER
  ADMIN
}
However, I want to avoid using ORMs and their CLI tooling.
So my question is: how do people in the Clojure world define db table schemas in a central place and do migrations? (I use PostgreSQL.)
(p.s. I am aware of https://github.com/abogoyavlensky/automigrate, which looks promising, but I'm not sure whether it is stable.)
We currently define all our tables and do all of our updates at work as plain SQL migrations.
We have been doing this for a very long time, so we wrote our own migration code on top of plain .sql files, but if we were starting over, we'd probably use Migratus.
I'll note that the clojure-doc article you link to is for the old (and essentially unmaintained) clojure.java.jdbc
-- so I'd look at next.jdbc
and HoneySQL which, together, provide more flexible DDL-as-data if you want to continue on that path.
See https://cljdoc.org/d/com.github.seancorfield/honeysql/2.6.1147/doc/getting-started/sql-clause-reference#create-table-with-columns for some HoneySQL examples.
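As a rough sketch (untested; `ds` stands for whatever next.jdbc datasource you create for your Postgres instance), the DDL-as-data version of the fruit table above would look something like this:
(require '[honey.sql :as sql]
         '[next.jdbc :as jdbc])

;; The table definition is plain data...
(def create-fruit
  {:create-table [:fruit :if-not-exists]
   :with-columns
   [[:name [:varchar 32] [:primary-key]]
    [:appearance [:varchar 32]]
    [:cost :int]
    [:grade :real]]})

;; ...which HoneySQL turns into a SQL string + params vector,
;; and next.jdbc executes against the datasource.
(jdbc/execute! ds (sql/format create-fruit))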
Thanks for the info. Do you mean creating a bunch of .sql files like:
CREATE TABLE people (
  id serial PRIMARY KEY NOT NULL,
  name text,
  created_at timestamp with time zone
);
How will you run these .sql files in clojure if you have a fresh postgresql db instance?
Read the files and then use next.jdbc to create the tables?
Like I say, use something like https://github.com/yogthos/migratus
(we wrote our own version that predates Migratus and isn't as good 🙂 )
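For a fresh database, a minimal Migratus setup is roughly this (a sketch; the db coordinates, directory, and migration name are placeholders for your own):
(require '[migratus.core :as migratus])

(def config
  {:store         :database
   :migration-dir "migrations"        ; e.g. resources/migrations/
   :db            {:dbtype   "postgresql"
                   :dbname   "myapp"
                   :user     "myapp"
                   :password "secret"}})

;; Generates a timestamped pair of empty files, e.g.
;; resources/migrations/20240629120000-create-people.{up,down}.sql
(migratus/create config "create-people")

;; On a fresh database this runs every *.up.sql in order and records
;; what has been applied, so re-running it later is a no-op.
(migratus/migrate config)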
You can switch to Migratus even now and maybe even squash those SQL files. :) I usually do it when I get to around 30-50 files, otherwise there's too much noise in search results.
I am using the minimalist "framework" of @U06F82LES https://clojureverse.org/t/how-do-you-do-database-migration-evolution/2005/2
automigrate looks really cool. Having a single edn file with your complete, accurate, current schema would be a big win over having to piece together the schema in your head.
This may be a misuse of HoneySQL, but I once set up Migratus and HoneySQL so that migrations were collections of EDN files describing "up" and "down" actions. I liked this approach because I was already using HoneySQL to write queries, so switching between raw SQL and HoneySQL felt annoying.
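Roughly, the idea was something like this (a reconstruction of the general shape, not the actual code; the file layout and helper name are made up):
(require '[clojure.edn :as edn]
         '[honey.sql :as sql]
         '[next.jdbc :as jdbc])

;; A migration file like migrations/001-create-people.edn might hold:
;; {:up   [{:create-table :people
;;          :with-columns [[:id :serial [:primary-key]]
;;                         [:name :text]]}]
;;  :down [{:drop-table :people}]}

(defn run-migration!
  "Read one EDN migration and execute its :up (or :down) HoneySQL maps."
  [ds file direction]
  (let [{:keys [up down]} (edn/read-string (slurp file))]
    (doseq [stmt (case direction :up up :down down)]
      (jdbc/execute! ds (sql/format stmt)))))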
automigrate sprinkles in honeysql into the :check field for example
> having to piece together the schema in your head
FWIW, I prefer to rely on DB tools for that. Same approach if you're given an existing DB that you have to study. Given how SQL dialects differ, any medium to advanced features will require looking directly at the DB anyway.
maybe. but a centralized edn file would still rock for quickly coming up to speed and answering many questions.
YMMV, but I don't see how a single EDN file is better than a single SQL file with the complete schema and no information loss or distortion.
https://github.com/donut-party/dbxray is good for pulling the schema out of a database and presenting it as data that can be manipulated.
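If I remember the README correctly, usage is roughly the following single call on a next.jdbc connection (treat the exact namespace and function name as an assumption and double-check the repo):
(require '[donut.dbxray :as dbxray]
         '[next.jdbc :as jdbc])

;; Returns a data description of tables, columns, types, and foreign
;; keys that you can inspect or transform like any other Clojure map.
(with-open [conn (jdbc/get-connection ds)]
  (dbxray/xray conn))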
@U020WAVDMMK Hi! Thank you for the interest in https://github.com/abogoyavlensky/automigrate! I will try to address some points from the original comment and the conversation above.
> How do people in clojure world normally manage sql table schema?
Normally, the common approach to database migrations in Clojure is to use tools like https://github.com/yogthos/migratus, https://github.com/weavejester/ragtime, or https://flywaydb.org/. All of them are stable and provide similar functionality, as several commenters have already described above: migrations are just SQL files; when you want to change your database schema, you add one more SQL file with the necessary changes. It works perfectly and is very reliable. For a critical production system, I would go with it. However, the downsides of this approach (at least for me) are:
• You can't grasp the database schema at a glance without a database connection.
• There is context switching when you need to change the database schema.
The goal of Automigrate is to solve both points.
> I want to avoid using orms
Automigrate is not an ORM, and you can use whatever you want for SQL queries. Automigrate helps manage only the database schema, in a nicer way. It is inspired by Django Migrations and Prisma, but it is designed to support more SQL features, since there is no goal of switching between databases on the fly for the same model definitions, nor is it limited by the ORM nature of those tools. There is a plan to support at least SQLite and MySQL (and basically any other SQL database), but each database will have its own model definition to account for differences in SQL dialects.
> Given how SQL dialects differ, any medium to advanced features will require you looking directly at the DB anyway.
Automigrate is designed to be as close to the database's SQL dialect as possible. Yes, plain SQL migrations are more flexible and powerful, but Automigrate strives to balance full control over migrations with the convenience of the development process.
> not sure whether it is stable
At the moment, Automigrate is in an alpha state and supports only PostgreSQL. I consider the tool's interface relatively stable and am not planning any breaking changes, but they are still possible. The tool already supports the main features required for quite complex web apps. Currently, I recommend using it for personal hobby projects. Any feedback is highly appreciated!
You can check the full setup for a project with Automigrate in https://github.com/abogoyavlensky/clojure-kamal-example and an example of a db schema in https://bogoyavlensky.com/blog/db-schema-for-budget-tracker-with-automigrate/. For complex database features, you can use plain SQL migrations, which Automigrate also supports. Additionally, you can opt out at any time and use one of the bulletproof alternatives.
@U02UEGG6P0V
> you just add one more SQL file with the necessary changes.
So in those migration files you cannot see the whole database schema, only the small piece that changed, right? In the case of Prisma, the migration files (xxxxx.sql) look like this:
-- AlterTable
ALTER TABLE "User" ADD COLUMN "handle" TEXT NOT NULL;
-- CreateIndex
CREATE UNIQUE INDEX "User_handle_key" ON "User"("handle");
thanks for your clear explanation! i will read your blog post later :pray:
> so in those migration files you cannot see the whole database schema, only the small piece that changed, right?
Yes, correct.
https://mybatis.org/migrations/ It's on SDKMAN as: sdk install mybatis