#aws
2022-06-24
Martynas Maciulevičius 16:06:55

Hey. I'm thinking about very basic code-deploy functionality: I push a container image and then change the deployed version using Terraform. This works, but now I'm thinking about DB migrations. Basically that means I'd want a second Docker container that contains the migrations. Currently I'm wondering whether I should include the migration functionality in the main production-like container, or whether I could somehow make it work with two containers :thinking_face: Does anyone have any other ideas? I'll be using this alone, so for now I don't need a full CI/CD and code-building setup.
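For context, the "change the version using Terraform" part might look roughly like this; a minimal sketch where the resource names, region, registry URL, and variable are all hypothetical, not taken from the thread:

```hcl
variable "app_version" {
  description = "Image tag to deploy; bumped for each release"
  type        = string
}

resource "aws_ecs_task_definition" "app" {
  family = "app"
  container_definitions = jsonencode([{
    name  = "app"
    # Deploying a new version = pushing the image to the registry,
    # then re-applying with a new app_version (placeholder account/region).
    image = "123456789012.dkr.ecr.eu-west-1.amazonaws.com/app:${var.app_version}"
  }])
}
```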

valtteri 16:06:02

I've encountered two approaches. One is to run migrations each time the app starts; the downside is that the runtime will have privileges for schema alterations. The other is to have a build step that runs the migrations, where the downside is additional complexity. Not aware of other ways.
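A minimal sketch of the first approach (migrate on startup), using Python and a plain SQL-file layout purely for illustration; the `schema_migrations` table name and `*.sql` naming convention are assumptions, not anything from this thread:

```python
from pathlib import Path

def run_pending_migrations(conn, migrations_dir):
    """Apply any .sql files in migrations_dir that haven't been applied yet.

    Applied versions are tracked in a schema_migrations table, so calling
    this on every app start is idempotent.
    """
    conn.execute(
        "CREATE TABLE IF NOT EXISTS schema_migrations (version TEXT PRIMARY KEY)"
    )
    applied = {row[0] for row in conn.execute("SELECT version FROM schema_migrations")}
    # Sort so migrations run in a stable order (001_..., 002_..., ...).
    for path in sorted(Path(migrations_dir).glob("*.sql")):
        if path.name in applied:
            continue
        conn.executescript(path.read_text())
        conn.execute(
            "INSERT INTO schema_migrations (version) VALUES (?)", (path.name,)
        )
        conn.commit()
```

The same tracking-table idea is what JVM tools like Flyway or Liquibase implement; the trade-off valtteri mentions stands either way, since the app's DB user needs DDL rights.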

Martynas Maciulevičius 16:06:59

In my case I also have a JVM artifact with some JARs it depends on, which means the resulting container would have to contain all of them :thinking_face: But that's probably the way for now.

viesti 20:06:23

In the past, I have had a CI step run the migrations by tunneling from the CI to the database. You could also use the same container image as the app and have, say, an environment variable that selects an entrypoint to run migrations instead of the normal application start. You could then call https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_RunTask.html to run a single container that does the migrations, and poll for the exit code of the task created by that API call with https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_DescribeTasks.html#API_DescribeTasks_ResponseSyntax. That migration task can then use a DB user with more rights (rights to migrate the schema) than the application user. This way you don't need to tunnel database access from the CI.
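That RunTask/DescribeTasks loop could be sketched roughly as below, assuming boto3 and a task definition whose container is named `migrate`; the cluster, task-definition, container, and `RUN_MIGRATIONS` env-var names are all made up for illustration:

```python
import time

def migration_exit_code(describe_response, container_name="migrate"):
    """Pull one container's exit code out of a DescribeTasks response."""
    task = describe_response["tasks"][0]
    for container in task["containers"]:
        if container["name"] == container_name:
            return container.get("exitCode")
    return None

def run_migration_task(cluster, task_definition, container_name="migrate"):
    """Run the app image as a one-off ECS task in migration mode and
    poll until it stops, returning the container's exit code."""
    import boto3  # AWS SDK; assumed available where this runs

    ecs = boto3.client("ecs")
    # Same image as the app; the env var flips the entrypoint to migrations.
    resp = ecs.run_task(
        cluster=cluster,
        taskDefinition=task_definition,
        overrides={"containerOverrides": [{
            "name": container_name,
            "environment": [{"name": "RUN_MIGRATIONS", "value": "true"}],
        }]},
    )
    task_arn = resp["tasks"][0]["taskArn"]
    while True:
        desc = ecs.describe_tasks(cluster=cluster, tasks=[task_arn])
        if desc["tasks"][0]["lastStatus"] == "STOPPED":
            return migration_exit_code(desc, container_name)
        time.sleep(5)
```

A CI (or local) script would then fail the deploy if `run_migration_task(...)` returns a non-zero exit code, before updating the service to the new image.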