#sql
2022-08-08
sheluchin13:08:57

Are there any helpers for renaming result keys?

seancorfield17:08:02

Clojure has clojure.set/rename-keys and clojure.core/update-keys -- what additional "helper" would you need/want?

sheluchin18:08:09

Yes, those are what I'm using now. Currently I form my queries with HoneySQL in one place, pass the result to execute! or one of the variants, and then transform the result using rename-keys or update-keys. I was just thinking that perhaps it's a common enough operation that some of the bits would be baked into next.jdbc somewhere for convenience. But I suppose it's also simple enough that it doesn't need to be part of the library.
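A minimal sketch of that post-processing step (the datasource ds and the users table here are just illustrative):

(require '[clojure.set :as set]
         '[next.jdbc :as jdbc])

;; rows come back with table-qualified keys by default:
(def rows (jdbc/execute! ds ["SELECT id, email FROM users"]))
;; e.g. [{:users/id 1, :users/email "a@example.com"} ...]

;; rename specific keys on each row:
(map #(set/rename-keys % {:users/email :user/email-address}) rows)

;; or rewrite every key with update-keys (Clojure 1.11+):
(map #(update-keys % (comp keyword name)) rows)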

seancorfield18:08:14

I generally recommend not transforming the result of next.jdbc except right at the edge if you need to pass that data to a third party (calling an external API or returning data to a client).

seancorfield18:08:45

You can always use aliases in the queries to get the column name part closer to your needs.
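For example (a sketch, assuming a users table with an email column):

;; alias the column so the name part of the result key matches the domain:
(jdbc/execute! ds ["SELECT email AS email_address FROM users WHERE id = ?" user-id])
;; e.g. [{:users/email_address "a@example.com"}]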

seancorfield18:08:11

I like to leave stuff in snake_case (which is what our DB is) so it's clearer in the code what is raw data and what is constructed domain data. But some folks like to use CSK (which next.jdbc supports directly if it is on the classpath) to auto-convert results to kebab-case.
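A sketch of the CSK route, which only works when camel-snake-kebab is on the classpath:

(require '[next.jdbc :as jdbc]
         '[next.jdbc.result-set :as rs])

;; ask next.jdbc to build kebab-case result keys:
(jdbc/execute! ds
               ["SELECT user_id, first_name FROM users"]
               {:builder-fn rs/as-kebab-maps})
;; e.g. [{:users/user-id 1, :users/first-name "Ada"} ...]
;; rs/as-unqualified-kebab-maps drops the table qualifier as well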

sheluchin19:08:01

Hmm, that's kind of the opposite of the approach I was thinking. I was considering isolating the database layer to a small portion of the application and using fully qualified namespace keys from my application model everywhere else. I'm using this with Fulcro RAD and Pathom, where part of the value proposition is that you can think about application entities + their attributes and not need to consider the storage mechanism or underlying data model. Matching up my RAD app model keys with the storage keys seems a bit awkward. I tried using aliases, but it looks like you cannot alias to a fully-qualified key.

seancorfield19:08:45

That's also a perfectly reasonable approach -- translating in both directions just around the DB layer -- it's a different set of tradeoffs. We do a lot of stuff that is purely server-side and often get data from one part of the DB, and store parts of it back into other places, so constantly converting back and forth between DB format and domain format would be a waste of time. If you're treating the DB almost entirely as a persistence mechanism and all of your data handling happens in memory or on the client then transforming both ways around the DB is reasonable.
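A sketch of that "translate at the edge" approach (the :user/* keys, users table, and helper names are illustrative, not RAD-specific):

(require '[clojure.set :as set]
         '[next.jdbc :as jdbc]
         '[next.jdbc.sql :as sql])

(def db->app {:users/id :user/id, :users/email :user/email})

(defn fetch-user [ds id]
  (-> (jdbc/execute-one! ds ["SELECT id, email FROM users WHERE id = ?" id])
      (set/rename-keys db->app)))

(defn save-user! [ds user]
  ;; translate back to plain column names on the way in
  (sql/insert! ds :users {:email (:user/email user)}))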

sheluchin19:08:30

Very well. Thanks very much for the thoughtful feedback, @U04V70XH6.

JohnJ18:08:36

noob question: is working with SQL strings for static queries as tedious as the internet makes it out to be?

seancorfield18:08:14

It depends on your threshold for "tedious" I think...

seancorfield18:08:16

At work, we do have a lot of SQL strings, just embedded in calls to next.jdbc functions, where they are static (or very simple conditionals). For more complex conditional SQL, we use HoneySQL.

👍 1
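A minimal HoneySQL 2.x sketch of that kind of conditional query (table and columns are made up):

(require '[honey.sql :as hsql]
         '[next.jdbc :as jdbc])

(defn find-users-query [{:keys [min-age email]}]
  (hsql/format
    {:select [:id :email]
     :from   [:users]
     ;; only add the email clause when a value is supplied
     :where  (cond-> [:and [:>= :age min-age]]
               email (conj [:= :email email]))}))

;; format returns a [sql & params] vector, ready for execute!
(jdbc/execute! ds (find-users-query {:min-age 18}))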
JohnJ20:08:55

will play with both methods, thx

zane14:08:52

You might find the "Rationale" section of the Yesql README (https://github.com/krisajenkins/yesql#rationale) interesting. (Yesql is deprecated, I believe, but HugSQL might be a good alternative.)

JohnJ19:08:28

Thx. Been struggling with this for a week 😉. To answer seancorfield: my threshold for tediousness is high if it keeps things simple, so I'm going with strings inside code files. I don't find syntax highlighting that important, and indentation is also not a big deal if you use an editor that is good at processing text like vim/emacs; you can also copy/paste from SQL files (for large queries), and again, vim/emacs make this a breeze. I agree with the rationale that SQL is already a mature and widely used DSL, though, and that yesql might be a better fit for teams. It's the sad state of affairs when dealing with databases, I guess (pick your level of pain :)

1
JohnJ19:08:48

I think the most challenging part is going to be dynamic queries, haven't got to that part yet.

zane19:08:56

Strings in Clojure files seems like a totally reasonable way to get started, and the transition to something like HugSQL should be smooth if you ever decide you need to go down that path.
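If you do later reach for HugSQL, the shape is roughly this (a sketch; the file name, query name, and db argument depend on your setup and adapter):

-- resources/sql/users.sql
-- :name user-by-id :? :1
SELECT id, email
  FROM users
 WHERE id = :id

(require '[hugsql.core :as hugsql])

;; defines a user-by-id function in this namespace from the SQL file
(hugsql/def-db-fns "sql/users.sql")

(user-by-id db {:id 42})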

zane19:08:37

For dynamic queries (where the dynamism is more than just parameter substitution) people seem to love HoneySQL.

zane19:08:01

Glad you found something that's working for you!

JohnJ19:08:40

thx, and yeah, looks like the best method is just sucking it up and maybe using all three

seancorfield19:08:59

One thing I tend to do for static SQL is this:

(jdbc/execute! ds [(str "SELECT foo, bar"
                        "  FROM table"
                        " WHERE id = ?")
                   table-id])
That gives me readable, indented SQL and makes it easy to add in conditional strings.

1
seancorfield19:08:22

You have to remember leading spaces on everything after the SELECT, and for conditionals you can use (when condition " some string"): if the condition is falsey, when produces nil and str turns that into an empty string.
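For example, a small sketch of that conditional trick (table and column names invented):

(defn find-user-sql [verified-only?]
  (str "SELECT id, name"
       "  FROM users"
       " WHERE id = ?"
       ;; nil from when becomes "" under str, so the clause just disappears
       (when verified-only? " AND verified = TRUE")))

(jdbc/execute! ds [(find-user-sql true) user-id])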

seancorfield19:08:33

(that first example doesn't need str but is another possibility for SQL layout with multi-line strings)
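i.e. something like:

(jdbc/execute! ds ["SELECT foo, bar
                      FROM table
                     WHERE id = ?"
                   table-id])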

JohnJ19:08:50

good to know tricks

JohnJ19:08:27

another option that solves this problem is Datomic 🙂 but a closed-source niche database is a hard sell

seancorfield20:08:57

XTDB is similar and open source 🙂

JohnJ20:08:27

My understanding is that they use different data models: a document store (XTDB) vs. attribute-oriented data modeling (Datomic)

JohnJ20:08:05

datomic is extremely tempting though!

seancorfield20:08:13

Yeah, I prefer the document store approach of XTDB and the fact it's basically "schema-less".

JohnJ20:08:43

Interesting, I saw a Hickey video where he is really against tables/documents. So MongoDB would be an apt solution for you?

seancorfield21:08:11

We use MySQL (Percona) very heavily at work. We also used MongoDB for several years but it's... problematic... in many ways at scale, so we stopped using it. We looked at Datomic and discussed it with Cognitect but some of our use cases do not suit it. XTDB is a much better match for us but I don't know how it would work at scale for us. We have a lot of tables with over 100M rows 🙂

JohnJ21:08:59

ah wow, that's a lot of data. I read somewhere Datomic has a limit of 10 billion datoms, so definitely not a good fit. For most internal applications it might not be an issue. Is this a SaaS?

seancorfield22:08:55

Online dating platform. Supports about forty dating sites. 140K lines of Clojure on the backend (and about the same for the React.js front end).

JohnJ23:08:53

cool, why schema-less though? tables too strict for that platform?

seancorfield23:08:49

To support the flexibility we need for several things, we already essentially have EAV overlaid in a couple of tables as adjuncts to fixed tables -- all of that would be simpler to deal with, both from a query and an update p.o.v., if we were using a document store where you didn't need to predeclare fields.

👌 1
JohnJ03:08:31

they say mongo is better today, no idea

seancorfield03:08:54

Yeah, apparently, but if we were to move away from MySQL/Percona at this point, we'd probably go to XTDB. MongoDB doesn't bring enough benefits to make the change of DB worthwhile really.