#polylith
2021-05-30
seancorfield 03:05:16

On Friday, we started pulling apart some of our subprojects into components. Definitely going to be a long, slow path but it's going to be interesting watching the info grid grow 🙂

💯 6  :polylith: 9
tengstrand 16:05:06

That’s really cool!

jumar 07:05:27

Definitely interested in hearing more about your experiences. A blog post, at some point, would be much appreciated 🙂

cyppan 15:05:37

For us it’s especially the CI / deploys that need a bit of work up front to adapt to this monorepo structure, and also the non-homogeneous dependency versions used across our different projects.

seancorfield 17:05:25

Being able to just run the subset of tests needed since our last successful CI run is going to be valuable, but that means getting our whole repo restructured, which will be a huge amount of work. Our current monorepo has around 40 subprojects but those are very coarse-grained. Our entire CI pipeline, which starts from scratch, builds the DB up to its current state (about 800 SQL migrations at this point), and runs all tests for all projects, takes about 35 minutes; building the API docs and the dozen+ uberjars for deployment takes about another 10 minutes. So our cycle time from commit to automated deployment on our staging server is about an hour, and we’d love to bring that down to 20-30 minutes.

seancorfield 17:05:01

We have our own build shell script which can calculate subproject dependencies and run tests for a given subproject, a given subproject and everything that depends on it, or a given subproject and everything it depends on. That helps a lot during dev/test locally, but being able to just rely on poly test to run tests from the last stable point would make that better.
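
A rough sketch of the kind of dependency walk such a build script might do; the subproject names and graph below are made up purely for illustration and are not the actual script:

```
(def deps
  ;; subproject -> the subprojects it depends on directly (made-up names)
  {:api    #{:core :db}
   :worker #{:core :db :search}
   :core   #{}
   :db     #{:core}
   :search #{:core}})

(defn transitive
  "Everything reachable from subproject k in graph (excluding k itself)."
  [graph k]
  (loop [seen #{} todo [k]]
    (if-let [x (first todo)]
      (if (seen x)
        (recur seen (rest todo))
        (recur (conj seen x) (into (rest todo) (graph x))))
      (disj seen k))))

(defn dependents-graph
  "Invert deps: subproject -> subprojects that depend on it directly."
  [deps]
  (reduce-kv (fn [acc k vs]
               (reduce (fn [a v] (update a v (fnil conj #{}) k)) acc vs))
             (zipmap (keys deps) (repeat #{}))
             deps))

;; a given subproject and everything it depends on:
(conj (transitive deps :db) :db)                     ;=> #{:db :core}
;; a given subproject and everything that depends on it:
(conj (transitive (dependents-graph deps) :db) :db)  ;=> #{:db :api :worker}
```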

seancorfield 17:05:37

Also — and this is currently mostly supposition on my part — having finer-grained components should allow for fewer dependencies to be dragged into projects which should reduce our artifact sizes (currently a few of our subprojects drag in a big pile of 3rd party libs and other subprojects that depend on those are “forced” to accept all of those deps too, even though they often don’t use the code that actually depends on a large 3rd party lib).
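
For what it’s worth, a hypothetical picture of how that plays out under Polylith conventions, where a project’s deps.edn lists only the bricks it actually uses (the component, base, and org names below are invented):

```
;; Hypothetical projects/billing/deps.edn; the component/base names are
;; invented. A heavy third-party reporting lib only ends up on this
;; project's classpath if a brick that actually needs it is listed here.
{:deps {our.co/invoice-calc {:local/root "../../components/invoice-calc"}
        our.co/customer     {:local/root "../../components/customer"}
        our.co/billing-api  {:local/root "../../bases/billing-api"}}}
```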

cyppan 14:06:33

I don’t know your constraints / CI of course, but can’t you use a structural dump of your MySQL database instead of starting from zero (with the migration version stored in the dump)?

seancorfield 15:06:09

@U0CL38MU1 We could, but then we would need some process to aggregate SQL migrations into some one-off “dump” as they have been applied across all tiers. With the current approach, we have just one mechanism to “build” the DB in dev/test/CI and update the DB in staging/production.

seancorfield 15:06:54

Since we also have to populate Elasticsearch from scratch in CI, using a process that analyzes the DB contents anyway, we wouldn’t save very much by doing that aggregation, and the complexity vs speed tradeoff isn’t really worthwhile.

cyppan 15:06:52

ok, a pattern I’ve seen is regularly dumping the production database schema into a tagged MySQL docker container (for instance) and using that as the starting point in the test envs; the migration tooling would then run only the missing migration scripts by comparing against the last migration version stored in a specific database table (a single-row migration_version table or something like that)
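
A minimal sketch of that catch-up step, assuming next.jdbc, a single-row migration_version table, and inline SQL strings; all of the table, column, and connection details here are invented for illustration, not the actual setup being discussed:

```
(require '[next.jdbc :as jdbc])

;; Hypothetical connection details; in real use these would come from config.
(def ds (jdbc/get-datasource {:dbtype "mysql" :dbname "app"
                              :user "app" :password "secret"}))

;; Ordered [version sql] pairs; in practice these would be read from the
;; migration files already in the repo.
(def migrations
  [[1 "CREATE TABLE users (id INT PRIMARY KEY)"]
   [2 "ALTER TABLE users ADD COLUMN email VARCHAR(255)"]])

(defn current-version
  "Read the single-row migration_version table carried by the schema dump.
   The qualified key follows next.jdbc's default table/column naming."
  [ds]
  (or (:migration_version/version
       (jdbc/execute-one! ds ["SELECT version FROM migration_version LIMIT 1"]))
      0))

(defn migrate!
  "Apply only the migrations newer than what the dump already contains,
   updating the version row (assumed to exist in the dump) as we go."
  [ds]
  (let [from (current-version ds)]
    (doseq [[v sql] migrations
            :when (> v from)]
      (jdbc/execute! ds [sql])
      (jdbc/execute! ds ["UPDATE migration_version SET version = ?" v]))))
```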

seancorfield 15:06:45

Well, you’d still need to load all your test data, which has to be compatible with whatever state your production schema is in, and you’ve introduced a manual step that needs to be performed “periodically” and then integrated back into version control… It’s still a lot of added complexity for a (potentially small) speedup in dev/test/CI…

cyppan 15:06:42

yes, I understand, it might not be worth it (I’ve also seen fixture data put into migration scripts to solve part of what you describe).