#sql
2020-06-17
nachos09:06:57

In next.jdbc, is it possible to use :builder-fn globally? Maybe at the datasource level?

dharrigan10:06:16

Not sure, but what I have done is add, in my SQL functions, an arity that supplies the builder function

dharrigan10:06:49

so, if you say (my-sql-ns/insert foo), I have something like (defn insert ([statement] (insert statement {:builder-fn foobarbaz})) ([statement opts] (jdbc/execute! ...)))

dharrigan10:06:54

something like that

Ludwig14:06:25

yeah, I use the same strategy, just create your own db namespace and wrap the jdbc functions with the opts parameter
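A minimal sketch of that wrapping approach, assuming a datasource `ds` and made-up names like `default-opts` and `my-app.db` (none of these are from the conversation):

```clojure
(ns my-app.db
  "Wrapper namespace that bakes default options into next.jdbc calls."
  (:require [next.jdbc :as jdbc]
            [next.jdbc.result-set :as rs]))

;; default options applied by every wrapper in this namespace
(def default-opts {:builder-fn rs/as-unqualified-lower-maps})

(defn execute!
  ([ds sql-params] (execute! ds sql-params {}))
  ([ds sql-params opts]
   (jdbc/execute! ds sql-params (merge default-opts opts))))

(defn execute-one!
  ([ds sql-params] (execute-one! ds sql-params {}))
  ([ds sql-params opts]
   (jdbc/execute-one! ds sql-params (merge default-opts opts))))
```

Callers then require my-app.db instead of next.jdbc and get the default builder everywhere, while still being able to override per call.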

seancorfield18:06:15

@danisam17 @vachichng I'm working on a wrapper/middleware approach that will let you augment a DataSource with something that allows for global options to then apply. It's complicated because a) the library relies directly on JDBC types for performance b) adding this must not decrease the performance of existing usage for other people.

seancorfield18:06:33

Generally, I work hard to rely on the default behavior -- because it's the fastest option -- and lean into qualified column names because I believe such keywords are better practice. In other words, try to take another look at your problem and solve it without relying on :builder-fn if possible.
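For context, the default builder already returns table-qualified keywords, so a plain next.jdbc call looks roughly like this (the datasource `ds`, table, and data are made up):

```clojure
(require '[next.jdbc :as jdbc])

;; with the default builder, rows come back as table-qualified keywords
(jdbc/execute-one! ds ["select id, email from users where id = ?" 1])
;; => #:users{:id 1, :email "jane@example.com"}
```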

Ludwig18:06:03

thanks for your work, I really appreciate it. I'm fine relying on :builder-fn since it does make the code more idiomatic; I haven't measured the impact on performance, though

seancorfield18:06:40

Not sure how using :builder-fn produces "more idiomatic" code -- what transformation are you doing on column names?

Ludwig18:06:15

well, I enjoy having kebab-cased namespaced keywords everywhere

seancorfield18:06:19

Ah, I see what you mean. So you have _ in the DB and - in your code. Fair enough. Are you using camel-snake-kebab as a library for that?

Ludwig18:06:05

yeah, exactly. No, I just wrote a trivial transformation function, not the most performant for sure haha
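A sketch of what such a trivial transformation might look like, built on next.jdbc's as-modified-maps builder; the snake->kebab helper, the builder name, and the table/column names are all illustrative:

```clojure
(require '[next.jdbc :as jdbc]
         '[next.jdbc.result-set :as rs]
         '[clojure.string :as str])

;; naive snake_case -> kebab-case conversion (not optimized)
(defn snake->kebab [s]
  (str/replace s "_" "-"))

;; builder that kebab-cases both the table qualifier and the column label
(defn kebab-maps-builder [result-set opts]
  (rs/as-modified-maps result-set
                       (assoc opts :qualifier-fn snake->kebab :label-fn snake->kebab)))

;; usage:
;; (jdbc/execute! ds ["select user_id, first_name from user_profile"]
;;                {:builder-fn kebab-maps-builder})
;; => [#:user-profile{:user-id 1, :first-name "Jane"}]
```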

seancorfield18:06:33

I'm toying with the idea of detecting if CSK is on the classpath and automatically providing additional builders, built-in, with support for it. Also for :table/:label. Haven't decided yet how best to handle that.

😟 3
Ludwig18:06:54

that's an interesting one. I wonder if it's possible to use the navify thing to do some magic like creating models from it for basic CRUD

Ludwig18:06:15

sort of what the Toucan library does

seancorfield18:06:35

I'm not a fan of anything that smells like ORMs...

seancorfield18:06:25

...that said, what we've done at work is to write specs for database table formats, and then we have a macro that takes the spec and some additional information, and generates CRUD functions from the spec.

seancorfield18:06:07

I'm more inclined to go with that approach (but, right now, what we have at work is heavily tied into our environment so I'd probably have to build it from scratch in order to open source it).
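As a toy illustration only (this is not Sean's implementation, and every name below is made up), a spec plus a macro that emits basic CRUD functions could look roughly like this:

```clojure
(require '[clojure.spec.alpha :as s]
         '[next.jdbc.sql :as sql])

;; illustrative spec for one table's row shape
(s/def :users/email string?)
(s/def ::users (s/keys :req-un [:users/email]))

;; toy macro: generate basic CRUD fns for a table, validating inserts
;; against the spec (a real version would carry much more information)
(defmacro defcrud [table spec]
  (let [table-name (name table)]
    `(do
       (defn ~(symbol (str "find-" table-name)) [ds# id#]
         (sql/get-by-id ds# ~table id#))
       (defn ~(symbol (str "insert-" table-name "!")) [ds# row#]
         (when-not (s/valid? ~spec row#)
           (throw (ex-info "Invalid row" (s/explain-data ~spec row#))))
         (sql/insert! ds# ~table row#))
       (defn ~(symbol (str "delete-" table-name "!")) [ds# id#]
         (sql/delete! ds# ~table {:id id#})))))

;; (defcrud :users ::users) defines find-users, insert-users! and delete-users!
```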

Ludwig18:06:24

how do you evolve the database? do you also have a custom migration tool that generates the changes from the spec?

seancorfield18:06:52

No. We do write migrations as SQL files, but any corresponding changes in code have to be made manually, and we try very hard to ensure that we can run the latest one or two SQL migrations without having to change the code at the same time, in order to make deployments simpler.

seancorfield18:06:26

So new columns are always added as nullable or with a default value. New tables are easy. Sometimes we need to run a SQL migration on production, then deploy updated code, then run another SQL migration to complete whatever update we need (but that level of coordination is rarely needed).

👍 3
Ludwig18:06:29

cool! I always wondered how to apply new changes to schemas in production without downtime and data loss

Ludwig18:06:38

so actually you have to do it in two steps

Ludwig18:06:05

first create the new schema with lax constraints and copy the data, then in another update, put the actual constraints and new logic in place

Ludwig18:06:21

thanks for the insights Sean

seancorfield18:06:40

It depends on the changes needed. Most of the time we just have a single SQL migration that updates a table in place, if we can.

Ludwig18:06:28

and I guess that you take down the different replicas/nodes before all that, so a single node is serving the whole thing until the update is applied on all of the nodes?

seancorfield18:06:25

For larger tables, what we often do is a) create the new table, b) update the code to read/write to that table by preference but have reads fall back to the old table, c) kick off a background data migration to copy old data across (making sure it's compatible with the on-demand copying in step b), then once the table is fully migrated: d) update the code again to remove the read fallback and finally e) drop the old table at some point.
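A rough sketch of step b), with made-up table names (orders vs. orders_v2) and the on-demand copy elided:

```clojure
(require '[next.jdbc :as jdbc])

;; prefer the new table on reads, falling back to the old one
(defn find-order [ds id]
  (or (jdbc/execute-one! ds ["select * from orders_v2 where id = ?" id])
      (when-let [row (jdbc/execute-one! ds ["select * from orders where id = ?" id])]
        ;; here you would also copy `row` into orders_v2 (the on-demand
        ;; migration), kept compatible with the background backfill job
        row)))
```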

👍 3
seancorfield18:06:09

Primary/secondary replication of changes is generally fast enough we don't have to worry much about coordinating across the cluster.

seancorfield18:06:01

We have two huge Percona (MySQL) servers, running on SSD-backed virtual servers. All our reporting runs off the secondary. All our transactional queries (and all updates) run against the primary.

seancorfield18:06:31

SQL migrations run against the primary as well (obviously) so that's where we are careful about performance impacts of changes.

seancorfield18:06:47

But the way we have the system set up, we have a migration project that builds a JAR containing all of the SQL migrations, and we can auto-deploy that to an "admin" node in our cluster to kick off any new SQL migrations. We can do that independently (and ahead of) any code changes to the customer-facing nodes.

Ludwig18:06:51

and what about the application server? I'm guessing that you have different copies of it as well for load balancing; how do you apply the updates there without downtime and data loss?

seancorfield18:06:19

Yup, three customer-facing servers behind a load balancer, and staggered auto-deployment. Again, we just take care to make changes that can be rolled out node by node (so any change must be able to run in production on one node while other nodes are still on the old code).

seancorfield18:06:57

We use feature flags for some things where we need to have updated code deployed everywhere before we turn a feature on.

👍 3
seancorfield18:06:40

Occasionally, we need a full-downtime maintenance window but it's pretty rare these days.

seancorfield18:06:54

Any last Qs before I head out for a while @vachichng?

Ludwig18:06:46

no, that's all, thanks a lot @seancorfield!

Ludwig18:06:18

have a good day!