
OK, I'm finally at a place where I feel comfortable showing an early working draft of next.jdbc to the world -- although it is not API-compatible with java.jdbc, it is almost at feature parity at this point.


Feedback here, or via issues, or via DM is all fine. The code is substantially shorter and simpler than java.jdbc's, and the performance is substantially better.

:parrot: 16

I guess I find it surprising that find-by-keys, insert!, delete!, etc. are in the core API. Unless there are some per-operation optimizations planned, I don't see why those functions should be part of the library, since those problems (building SQL) are covered by other libs like HoneySQL, HugSQL, etc. Besides a minimized API surface area, there'll be fewer things to maintain (differences between SQL dialects, quoting, etc.).


Ok, I think I see the reason in next.jdbc.sql: > This is intended to provide a minimal level of parity with clojure.java.jdbc (insert!, update!, delete!, etc.). But I'd suggest putting all this "parity" higher-level API somewhere outside the core (maybe another library?) and/or making it more obvious that it's an optional part of the API, for people who don't want to bother with HoneySQL, etc.


@metametadata Yeah, I'm on the fence about whether those should remain in the primary API, but our usage of java.jdbc at work leans on them heavily so they definitely need to be provided somewhere. Having the primary API be just the connection/datasource creation, the basic execute/reducible, and the transaction might be enough.

👍 8

If I moved them to next.jdbc.sql (the logical place) then all the string construction functions could be private, but then that namespace would also need to depend on next.jdbc.result-set. It's the sort of organization I want to get nailed down before I actually cut a release of the library.
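For readers following along, the split under discussion looks roughly like this from the caller's side (a sketch; namespace and function names are from the pre-release API and may change before release):

```clojure
;; sketch of the proposed split -- pre-release API, details may differ
(require '[next.jdbc :as jdbc]      ; core: datasources, execute!, transactions
         '[next.jdbc.sql :as sql])  ; optional "parity" layer: insert!, update!, ...

(def ds (jdbc/get-datasource {:dbtype "h2:mem" :dbname "example"}))

;; core API: you bring your own SQL (HoneySQL, HugSQL, hand-written, ...)
(jdbc/execute! ds ["select * from address where id = ?" 1])

;; friendly layer: builds the SQL string for you
(sql/insert! ds :address {:name "A. Person" :email "a@example.com"})
```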


moving them to another ns sounds reasonable, but not sure what are the implications of depending on result-set ns.


I was trying to keep the dependencies in the sub-namespaces to a minimum but result-set already depends on prepare so having sql depend on result-set wouldn't be a big deal. It was just an observation (and a little sadness 🙂 ).


I almost have it working. It would have been nice to have known that resulting maps are now namespaced.


Works now, going to run a little benchmark, like the run I did before on the java.jdbc version


Cool. When I flesh out the docs, I'll make a point of mentioning the auto-namespaced column keywords. I plan to have a "Differences from java.jdbc" document soon.
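A quick illustration of those auto-namespaced keywords (a sketch; assumes an `address` table, and the exact case of the table/column names in the keys depends on the database):

```clojure
(require '[next.jdbc :as jdbc])

(def ds (jdbc/get-datasource {:dbtype "h2:mem" :dbname "example"}))

;; java.jdbc returns plain keywords:           [{:id 1, :name "A. Person"}]
;; next.jdbc returns table-qualified keywords: [#:address{:id 1, :name "A. Person"}]
(jdbc/execute! ds ["select id, name from address"])
```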


Another minor point was the need for the option map when making the connection, where I now put some empty maps.


I'm still considering fixed-arity vs multi-arity in several cases. Given so much of the API is different, requiring the options map seemed reasonable but I think it will depend on how folks want to use it and whether they will actually provide options for most calls or not.


In particular, :identifiers and :entities -- since the default has changed on the former (as-is is now the default, instead of clojure.string/lower-case, and the quoted function of old has gone, replaced by specific functions in next.jdbc.quoted).
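For reference, the next.jdbc.quoted functions mentioned here work something like this (a sketch based on the pre-release namespace; exact contents may differ):

```clojure
(require '[next.jdbc.quoted :as quoted])

;; each function wraps an identifier in that dialect's quoting style
(quoted/ansi "foo")   ;; foo wrapped in double quotes
(quoted/mysql "foo")  ;; foo wrapped in backticks
```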


It's easy enough to add an empty map, or create a function to do so in your own namespace. The benchmark is taking end-to-end measurements from the front-end while taking CPU and memory measurements from the code that's reading/writing to Kafka and PostgreSQL. Will have some results tomorrow.


Interesting! Thank you! I appreciate you taking the time to do this. Bear in mind that much of the performance improvement you can get comes from using reducible! wherever you are processing result sets, instead of realizing them into vectors of hash maps (but even the latter should be faster than in java.jdbc).


Some statistics, from 99 measurements, every 5 seconds:



interaction-deviation 20.907945050011733
interaction-min 76.0
interaction-max 627.0
interaction-mean 93.69469469469469
cpu-deviation 0.010049241707554321
cpu-min 0.006313131313131313
cpu-max 0.21544308861772354
cpu-mean 0.01007092854859558
mem-deviation 17.98917218013515
mem-min 203.0
mem-max 254.0
mem-mean 230.92992992992993


interaction-deviation 20.878875796091535
interaction-min 79.0
interaction-max 162.0
interaction-mean 107.97997997997997
cpu-deviation 0.013075369648499056
cpu-min 0.004171632896305125
cpu-max 0.3347322142286171
cpu-mean 0.01017211540347393
mem-deviation 21.282347546137593
mem-min 255.0
mem-max 336.0
mem-mean 300.4424424424424


Seems a tad slower in my case, and a bit more memory hungry. update!, insert!, and execute-one! are the only ones I used.


going to give it another go, keeping the same connection


Like I say, the main focus of performance is to use reducible! where you would iterate over a result set.
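Concretely, that reducible! pattern looks something like this (a sketch; assumes `ds` is a datasource and an `invoice` table with an `amount` column):

```clojure
(require '[next.jdbc :as jdbc])

(def ds (jdbc/get-datasource {:dbtype "h2:mem" :dbname "example"}))

;; reduce directly over the result set -- no intermediate vector of hash maps
(reduce (fn [total row] (+ total (:amount row)))
        0
        (jdbc/reducible! ds ["select amount from invoice"]))
```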


I am surprised that execute-one! is slower than java.jdbc with {:result-set-fn first} tho'...


FYI, I added some benchmark numbers, using H2 in-memory to reduce I/O overhead so it's just the code being tested:


Most operations show java.jdbc is 1.3-2x slower than the same with next.jdbc -- all operating on the same connection.


It's better now

interaction-deviation  8.368125748012309
interaction-min 50.0
interaction-max 123.0
interaction-mean 95.62762762762763
cpu-deviation  0.009839659311371917
cpu-min 0.003599280143971206
cpu-max 0.19755560008014425
cpu-mean 0.008860218672233983
mem-deviation  14.963682285421568
mem-min 246.0
mem-max 319.0
mem-mean 270.5615615615616


What are you using to get those benchmark numbers? Also, what code are you comparing? And, what platform I guess (maybe that makes a difference too)?


A lot better than the previous one; the only change was using one connection, instead of using with-open.
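The change being described, roughly (a sketch; the table and query are illustrative):

```clojure
(require '[next.jdbc :as jdbc])

(def ds (jdbc/get-datasource {:dbtype "h2:mem" :dbname "example"}))

;; before: a fresh connection per operation (with-open closes it each time)
(with-open [conn (jdbc/get-connection ds)]
  (jdbc/execute-one! conn ["select * from address where id = ?" 1]))

;; after: one connection reused across all operations in the run
(with-open [conn (jdbc/get-connection ds)]
  (dotimes [i 100]
    (jdbc/execute-one! conn ["select * from address where id = ?" i])))
```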


On my Mac, with a lot of other stuff running, so it's not the best setup. But it should be possible to compare them to some degree.


Yeah, my numbers were all on a Mac (with a lot of background stuff running too I suppose 🙂 )


As long as you're seeing next.jdbc outperforming java.jdbc, I'm happy 🙂


I have a Clojure program where I check the response in the front-end from two micro-services running in the back-end. The idea is to compare Clojure, Kotlin, and Rust implementations, and also to steadily increase load on the system. Increasing load is not yet done, so it's just one message every 5 seconds for these runs.


Every benchmark run also means around 4000 SQL operations with no errors. 👍


Very interesting! Not open source I assume? 🙂


I don't really know. I need a bunch of stuff to set it up, most of it undocumented or incomplete. I might put it together in one repo as-is, though.


> Another minor point was the need for the option map when making the connection, where I now put some empty maps.
I've changed this in the latest Git version. opts is now optional everywhere and handled via arities instead.
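So, with that change, both of these calls work (a sketch against the updated arities; the query is illustrative):

```clojure
(require '[next.jdbc :as jdbc])

(def ds (jdbc/get-datasource {:dbtype "h2:mem" :dbname "example"}))

(jdbc/execute! ds ["select * from address"])     ;; opts omitted
(jdbc/execute! ds ["select * from address"] {})  ;; opts still accepted
```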