Darin Douglass 21:10:53

So I've got a few clarifying questions about next.jdbc/plan. From the README, plan is useful for streaming results from the underlying DB. Generally this makes sense to me, and it's a worthwhile abstraction when you have a very large result set. However, what I don't get is the use case for next.jdbc.plan/select-one! and the vast majority of the plan examples in the docs:

1. What's the benefit of using select-one! instead of simply providing a limit on a normal query?
2. Most of the examples of plan either sum up some value from the DB or select only a subset of columns from the planned statement. Akin to question 1, why do this instead of using aggregate functions in the DB or being explicit about the columns returned from the query?

If the answer is just convention (i.e. "we like to have our DB just query, and keep the business logic (aggregation, etc.) in our Clojure code"), that's OK.

Another question: I'm attempting to use next.jdbc at work in a new project (all of our legacy stuff uses ) and just started fiddling around with options and the like. My intuition led me to wrap our HikariCP pool in a with-options, since we're going to want every use of that pool to have the same opts. However, doing so breaks everything because the resulting object isn't Closeable. This leaves us having to either pass the opts to every use of next.jdbc, or wrap every new connection in with-options (as is called out in the docs).

• Firstly, I was kind of surprised snake-kebab-opts isn't the default, given Clojure conventions.
• Secondly, what's the rationale for not having with-options be Closeable? Guaranteed I'm missing something important, but it currently feels like the library is sacrificing usability for the sake of using POJOs everywhere.

Darin Douglass 21:10:31

(Hopefully that last question doesn't come off as confrontational; I'm genuinely curious about the rationale.)


@ddouglass Lots of questions to unpack there... let's see if I can clarify...


plan does allow you to stream large result sets, yes, but more importantly you can process result sets without realizing complete row hash maps, which is a big performance benefit.


execute! (and all the friendly SQL functions) will realize every single row of the underlying ResultSet into a fully fleshed-out Clojure hash map, with all the column names expanded into Clojure keywords.
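A quick sketch of what that looks like (the datasource ds and the address table here are just placeholders):

```clojure
(require '[next.jdbc :as jdbc])

;; every row becomes a fully realized hash map with table-qualified
;; keyword keys (the default result set builder), e.g.
;; [#:address{:id 1, :name "Sean"} #:address{:id 2, :name "Darin"} ...]
(jdbc/execute! ds ["select id, name from address"])
```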


plan lets you process the data without that overhead in many cases (if you either just want values back, or you want to reduce/transduce the result set down).
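For example, summing a column while reading values straight off the live ResultSet (table and column names are illustrative):

```clojure
;; reduce directly over the plan; (:amount row) reads the column from
;; the current ResultSet row without building a full row hash map:
(reduce (fn [total row] (+ total (:amount row)))
        0
        (jdbc/plan ds ["select amount from invoice"]))

;; the same thing, transducer-style:
(transduce (map :amount) + 0 (jdbc/plan ds ["select amount from invoice"]))
```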


On to your specific questions...


1. As the docs say for execute-one! (and, I hope, for select-one!), the idea is to just get back data from the first row of the result set -- a hash map for execute-one!, and either a single value or a single hash map for select-one!. It is up to you to limit the number of rows the DB will return. The main issue here is the result type -- not a vector of things, just a single result.
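To illustrate the two shapes select-one! can return (the names here are made up):

```clojure
(require '[next.jdbc.plan :as plan])

;; a keyword (or any function of the row) yields a single value:
(plan/select-one! ds :n ["select count(*) as n from invoice"])

;; a vector of columns yields a single hash map of just those columns:
(plan/select-one! ds [:id :name]
                  ["select id, name from address where id = ?" 42])
```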


Here's what Getting Started says about this:

> Note: In general, you should use execute-one! for DDL operations since you will only get back an update count. If you have a SQL statement that you know will only return an update count, execute-one! is the right choice. If you have a SQL statement that you know will only return a single row in the result set, you probably want to use execute-one!. If you use execute-one! for a SQL statement that would return multiple rows in a result set, even though you will only get the first row back (as a hash map), the full result set will still be retrieved from the database -- it does not limit the SQL in any way.
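So if the query could match many rows, add the limit yourself (limit syntax varies by database; this form works on e.g. PostgreSQL and MySQL):

```clojure
;; execute-one! returns just one hash map, but without the limit the
;; database would still produce and transfer the full result set:
(jdbc/execute-one! ds ["select * from address where name = ? limit 1" "Sean"])
```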


2. Yes, the Getting Started page is using deliberately simple examples and of course you could (and probably should) do simple aggregates directly in the database. The point is to show the structure of reduce-based or transducer-based operations on data. If I tried to come up with examples that could not easily be done in SQL directly, they'd likely be too complicated to use as "teaching" examples and would obscure the concepts.
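For instance, a plain aggregate is better expressed in SQL, while plan earns its keep when the per-row logic doesn't translate to SQL easily (valid-invoice? and the table names are hypothetical):

```clojure
;; let the database do simple aggregation:
(jdbc/execute-one! ds ["select sum(amount) as total from invoice"])

;; use plan when the row-by-row logic lives in Clojure:
(reduce (fn [acc row]
          (if (valid-invoice? row)   ; arbitrary Clojure predicate
            (conj acc (:invoice/id row))
            acc))
        []
        (jdbc/plan ds ["select * from invoice"]))
```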


I'm not sure I understand the problem you are having with a connection pool and with-options. Can you share some code? We use c3p0 for connection pooling at work and we wrap that datasource with with-options and pass it into plan, execute!, etc. with no issues.
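For reference, here's the pattern that works for us (pooled-ds stands in for your already-built HikariCP datasource):

```clojure
(require '[next.jdbc :as jdbc]
         '[next.jdbc.result-set :as rs])

;; wrap the pool once; every call through ds picks up these options:
(def ds (jdbc/with-options pooled-ds
          {:builder-fn rs/as-unqualified-lower-maps}))

(jdbc/execute! ds ["select * from address"])
```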


Re: snake-kebab-opts -- the default favors performance, so it performs no string transformation on the table or column names. The csk support is a recent addition: some folks were happy to accept the tradeoff of slower performance for "more idiomatic" keys, so I figured I might as well provide a standardized way to do it, if you already have csk on your classpath.
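If you do want those semantics, opting in is one line (again assuming a pre-built pooled-ds, and camel-snake-kebab on your classpath):

```clojure
;; kebab-case keywords in Clojure, snake_case names in SQL:
(def ds (jdbc/with-options pooled-ds jdbc/snake-kebab-opts))
```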


And that second question is the thing I don't understand. Why would it need to be Closeable?


(I suspect you're trying to do something "wrong" if you think the result of with-options should be Closeable but I think I need to see code)