
@vachichng I think go channels are best used with non-blocking libraries (which next.jdbc is not, and I don't know of a Clojure SQL client that is) or for CPU-bound tasks, which this is not. is a great read on the subject.


then maybe Pulsar? it's a Fiber library.


Yeah, I think I'll look into it. It looks promising.


Use async/thread instead of async/go for blocking operations. Channels work the same for both parking and blocking operations (i.e. `>!`/`<!` and `>!!`/`<!!`). core.async works great with next.jdbc.
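A minimal sketch of the pattern described above: `async/thread` runs its body on a real thread (safe for blocking I/O, unlike `go`) and returns a channel carrying the body's result. The blocking call here is a stand-in; the commented form shows where a next.jdbc call would go.

```clojure
(require '[clojure.core.async :as async])

(defn blocking-query-async
  "Run blocking `f` on a core.async thread-pool thread;
  returns a channel that will receive f's result."
  [f]
  (async/thread (f)))

;; Stand-in for #(jdbc/execute! ds ["SELECT * FROM USERS"]):
(def result
  (async/<!! (blocking-query-async
              (fn [] (Thread/sleep 50) [{:users/id 1}]))))
;; `<!!` is the blocking take; use `<!` inside a go block.
```

`async/go` would be wrong here because a blocked JDBC call would tie up one of the small, fixed pool of go-block threads.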


If you treat next.jdbc/plan as a producer for a channel of results, that's reasonable. Channel consumption will automatically create back pressure on the reduce over `plan` as it streams the result set onto the channel. It's trickier if you want a way to terminate early (and return `reduced`), since you need a way to tell the reduce to stop while it is (deliberately) blocked trying to put data on the channel.
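A hedged sketch of that pattern, with a plain range standing in for the reducible that `next.jdbc/plan` returns: `>!!` blocks when the channel's buffer is full (that's the back pressure), and returns false once the channel is closed, which lets the reduce terminate early via `reduced`.

```clojure
(require '[clojure.core.async :as async])

(defn onto-chan-reducing
  "Streams each element of `source` onto `ch`; stops early if `ch`
  is closed. Returns the number of rows successfully put."
  [source ch]
  (reduce (fn [n row]
            (if (async/>!! ch row)
              (inc n)
              (reduced n)))        ; consumer closed the channel
          0
          source))

(def ch (async/chan 4))
(def producer (future (onto-chan-reducing (range 100) ch)))
(dotimes [_ 10] (async/<!! ch))    ; consume ten rows...
(async/close! ch)                  ; ...then signal "stop"
;; Puts parked at close time stay parked until a taker releases
;; them, so drain the channel before dereferencing the producer:
(while (some? (async/<!! ch)))
(def rows-sent @producer)
;; rows-sent is a little over 10 (10 taken + buffered + in-flight),
;; well short of the 100 the source could have produced.
```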


Howdy @seancorfield. Do you ever find yourself wanting a Clojure keyword mapping from/to java.sql.Types? I.e. mapping java.sql.Types/VARCHAR to :varchar and back.


Had a scan around next.jdbc and couldn't see such a mapping


I implemented it on top of next.jdbc; now trying to extract the defined enum types out of Postgres so they are not hard-coded into the coercion code...
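One way to build such a mapping without hard-coding it (a sketch of the idea, not next.jdbc API): reflect over the public static int fields of `java.sql.Types`.

```clojure
(require '[clojure.string :as str])

;; Map each java.sql.Types constant's int value to a lower-case
;; keyword, e.g. 12 (VARCHAR) -> :varchar.
(def sql-type->keyword
  (into {}
        (map (fn [^java.lang.reflect.Field f]
               [(.get f nil)
                (keyword (str/lower-case (.getName f)))]))
        (.getFields java.sql.Types)))

(sql-type->keyword java.sql.Types/VARCHAR) ;; => :varchar
```

The Postgres-defined enum types would still need a query against `pg_type`/`pg_enum`, which this doesn't cover.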


@jonpither I do run into situations where I have a keyword and want to store it as a string (via name usually) but not often enough that I'd want an automatic conversion: I'd rather have it fail in the cases where I didn't expect to get a keyword (and that has, indeed, uncovered several bugs for me in the past).


In general, I tend to find keywords in Clojure may get mapped to ENUM in SQL (MySQL), so having an explicit conversion is safer. Overall, I prefer explicit conversions to/from SQL over global implicit ones.
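The explicit style described above can be as small as a pair of helpers applied at the call sites (the names here are illustrative, not next.jdbc API):

```clojure
(defn ->db-status
  "Keyword -> string, for storing in an ENUM/VARCHAR column."
  [kw]
  (name kw))

(defn db->status
  "String read back from the database -> keyword."
  [s]
  (keyword s))

(->db-status :pending)  ;; => "pending"
(db->status "pending")  ;; => :pending
```

Calling these explicitly at each read/write site means an unexpected non-keyword value fails loudly instead of being silently coerced.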


(I don't even leverage the auto-conversions from Java Time to SQL date/timestamp)


thanks @seancorfield


@salo.kristian Have you/are you using connection pooling yet?


Kinda obvious, but I did some testing locally:


(time
  (doseq [_ (range 100)]
    (with-open [conn (jdbc/get-connection (jdbc/get-datasource db-spec))]
      (jdbc/execute! conn ["SELECT * FROM USERS"]))))
"Elapsed time: 7557.7078 msecs"
=> nil
(time
  (doseq [_ (range 100)]
    (with-open [conn (jdbc/get-connection datasource)]
      (jdbc/execute! conn ["SELECT * FROM USERS"]))))
"Elapsed time: 282.6531 msecs"
=> nil


(time
  (doseq [_ (range 10000)]
    (with-open [conn (jdbc/get-connection datasource)]
      (jdbc/execute! conn ["SELECT * FROM USERS"]))))
"Elapsed time: 2021.8485 msecs"


you might come pretty close to your perf requirements with just that


I haven't done any tests yet, since I've mainly been familiarizing myself with the alternatives. However, I already have a HikariCP connection pool set up.


How large a connection pool did you use for your tests? They look very promising.


just the default one, 10 active connections
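For reference, a pooled `datasource` like the one timed above can be built with `next.jdbc.connection/->pool` and HikariCP (a config sketch; the db-spec values are placeholders, and it needs `com.zaxxer/HikariCP` on the classpath). HikariCP's default `maximumPoolSize` is 10, matching the "10 active connections" above.

```clojure
(require '[next.jdbc.connection :as connection])
(import 'com.zaxxer.hikari.HikariDataSource)

;; ->pool instantiates the pool class and applies the db-spec
;; entries to it; placeholder credentials here.
(def datasource
  (connection/->pool HikariDataSource
                     {:dbtype   "postgresql"
                      :dbname   "mydb"
                      :username "user"
                      :password "secret"}))

;; Close the pool on shutdown:
;; (.close ^HikariDataSource datasource)
```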


(in my case at least)


@emccue It never occurred to me to even ask that question -- good point! I just sort of assume that anyone who cares about performance is already making sure they use connection pooling and don't try to stand up a new connection for every query. 🙂