This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2020-04-17
Channels
- # announcements (1)
- # babashka (94)
- # beginners (76)
- # calva (24)
- # cider (24)
- # clj-kondo (1)
- # cljs-dev (16)
- # cljsrn (45)
- # clojure (135)
- # clojure-europe (9)
- # clojure-france (5)
- # clojure-germany (2)
- # clojure-italy (12)
- # clojure-losangeles (13)
- # clojure-nl (3)
- # clojure-portugal (54)
- # clojure-uk (20)
- # clojurescript (55)
- # conjure (67)
- # core-async (5)
- # cursive (2)
- # datomic (10)
- # docker (7)
- # duct (22)
- # emacs (16)
- # fulcro (34)
- # graalvm (15)
- # hoplon (1)
- # instaparse (1)
- # jobs-discuss (3)
- # juxt (94)
- # luminus (1)
- # meander (4)
- # off-topic (13)
- # pathom (4)
- # pedestal (1)
- # ring (3)
- # ring-swagger (2)
- # shadow-cljs (61)
- # spacemacs (17)
- # specter (2)
- # sql (23)
- # xtdb (33)
@vachichng I think go channels are best used with non-blocking libraries (which next.jdbc is not, and I don't know of a Clojure SQL client that is) or for CPU-bound tasks, which this is not. https://eli.thegreenplace.net/2017/clojure-concurrency-and-blocking-with-coreasync/ is a great read on the subject.
then maybe Pulsar? it's a Fiber library. http://docs.paralleluniverse.co/pulsar/
Yeah, I think I'll look into it. It looks promising.
Use `async/thread` instead of `async/go` for blocking operations. Channels work the same for both blocking and non-blocking operations (i.e. `>!`/`<!` in go blocks and `>!!`/`<!!` on ordinary threads). core.async works great with next.jdbc.
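A minimal sketch of that suggestion: wrap the blocking JDBC call in `async/thread` (which runs on a dedicated thread pool) rather than a `go` block (whose small, shared pool must never be blocked). The `db-spec` name is an assumption about your setup.

```clojure
(require '[clojure.core.async :as async]
         '[next.jdbc :as jdbc])

(defn query-async
  "Run a blocking query on core.async's thread pool.
  Returns a channel that will receive the result set."
  [db-spec sql-params]
  ;; async/thread returns a channel; the body runs on a real thread,
  ;; so blocking JDBC I/O here is safe.
  (async/thread
    (jdbc/execute! db-spec sql-params)))

;; Usage: take the result with <!! (blocking) or <! (inside a go block):
;; (async/<!! (query-async db-spec ["SELECT * FROM USERS"]))
```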
If you treat `next.jdbc/plan` as a producer for a channel of results, that's reasonable. Channel consumption will automatically create back pressure on the `reduce`-over-`plan` as it streams the result set onto the channel. It's trickier if you want a way to terminate early (and return `reduced`) since you need a way to tell the `reduce` to stop, but it will be (deliberately) blocked trying to put data on the channel.
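A sketch of that producer pattern, assuming a next.jdbc connectable named `db-spec`: reducing over `plan` puts each row on a channel, and `>!!` blocks when the channel's buffer is full, which is exactly the back pressure described above.

```clojure
(require '[clojure.core.async :as async]
         '[next.jdbc :as jdbc]
         '[next.jdbc.result-set :as rs])

(defn plan->chan
  "Stream a result set onto a channel of realized row maps.
  Closes the channel when the result set is exhausted.
  Note the caveat above: if the consumer stops taking, the
  producing thread stays blocked on >!!."
  [db-spec sql-params buf-size]
  (let [ch (async/chan buf-size)]
    (async/thread
      (reduce (fn [_ row]
                ;; datafiable-row realizes the abstract plan row into a map
                (async/>!! ch (rs/datafiable-row row db-spec {})))
              nil
              (jdbc/plan db-spec sql-params))
      (async/close! ch))
    ch))
```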
Howdy @seancorfield. Do you ever find yourself wanting a Clojure keyword mapping from/to `java.sql.Types`? I.e. to map `java.sql.Types/VARCHAR` to/from `:varchar`.
I implemented https://www.bevuta.com/en/blog/using-postgresql-enums-in-clojure/ in next.jdbc; now I'm trying to extract the defined enum types out of Postgres so they are not hard-coded into the coercion code...
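One way to pull those enum definitions out of the database instead of hard-coding them is to query Postgres's `pg_type`/`pg_enum` catalog tables. A sketch, assuming a next.jdbc datasource `ds`:

```clojure
(require '[next.jdbc :as jdbc]
         '[next.jdbc.result-set :as rs])

(defn enum-types
  "Return a map of enum type name -> set of labels defined in the database."
  [ds]
  (reduce (fn [m {:keys [typname enumlabel]}]
            (update m typname (fnil conj #{}) enumlabel))
          {}
          (jdbc/execute! ds
            ["SELECT t.typname, e.enumlabel
              FROM pg_type t JOIN pg_enum e ON t.oid = e.enumtypid"]
            ;; unqualified keys keep the destructuring simple across the join
            {:builder-fn rs/as-unqualified-lower-maps})))

;; e.g. (enum-types ds) might yield {"order_status" #{"pending" "shipped"}}
```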
@jonpither I do run into situations where I have a keyword and want to store it as a string (via `name`, usually) but not often enough that I'd want an automatic conversion: I'd rather have it fail in the cases where I didn't expect to get a keyword (and that has, indeed, uncovered several bugs for me in the past).
In general, I tend to find keywords in Clojure may get mapped to `ENUM` in SQL (MySQL), so having an explicit conversion is safer. Overall, I prefer explicit conversions to/from SQL over global implicit ones.
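A small sketch of keeping that conversion explicit at the call site: keywords become strings with `name` on the way in and come back with `keyword` on the way out. The table and column names here are hypothetical.

```clojure
(require '[next.jdbc.sql :as sql])

(defn save-status!
  "Store a keyword status (e.g. :active) as a plain string column."
  [ds user-id status]
  (sql/update! ds :users {:status (name status)} {:id user-id}))

(defn load-status
  "Read the status column back and convert it to a keyword explicitly."
  [ds user-id]
  (some-> (sql/get-by-id ds :users user-id)
          :users/status
          keyword))
```

Because the conversion lives at the boundary, passing anything other than a keyword to `save-status!` fails loudly, which is the bug-catching behavior described above.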
(I don't even leverage the `next.jdbc.date-time` auto-conversions from Java Time to SQL date/timestamp.)
@salo.kristian Have you/are you using connection pooling yet?
```clojure
(time
  (doseq [_ (range 100)]
    (with-open [conn (jdbc/get-connection (jdbc/get-datasource db-spec))]
      (jdbc/execute! conn ["SELECT * FROM USERS"]))))
;; "Elapsed time: 7557.7078 msecs"
;; => nil

(time
  (doseq [_ (range 100)]
    (with-open [conn (jdbc/get-connection datasource)]
      (jdbc/execute! conn ["SELECT * FROM USERS"]))))
;; "Elapsed time: 282.6531 msecs"
;; => nil

(time
  (doseq [_ (range 10000)]
    (with-open [conn (jdbc/get-connection datasource)]
      (jdbc/execute! conn ["SELECT * FROM USERS"]))))
;; "Elapsed time: 2021.8485 msecs"
```
I haven't yet done any tests, since I've mainly been familiarizing myself with the alternatives. However, I already have a HikariCP connection pool set up.
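For reference, a hedged sketch of building a pooled `datasource` like the one used in the timings above, via `next.jdbc.connection/->pool` with HikariCP (the `db-spec` keys shown are assumptions about the database in use):

```clojure
(require '[next.jdbc :as jdbc]
         '[next.jdbc.connection :as connection])
(import '(com.zaxxer.hikari HikariDataSource))

;; Hypothetical connection details; HikariCP expects :username (not :user).
(def db-spec {:dbtype   "postgresql" :dbname "app"
              :username "app"        :password "secret"})

;; Build the pool once at startup and reuse it everywhere.
(def datasource (connection/->pool HikariDataSource db-spec))

;; "Opening" a connection now just borrows one from the pool:
;; (with-open [conn (jdbc/get-connection datasource)]
;;   (jdbc/execute! conn ["SELECT * FROM USERS"]))
```

This is why the pooled runs above are so much faster: the per-iteration cost drops from a full TCP/auth handshake to a pool checkout.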
How large a connection pool did you use for your tests? They look very promising.
@emccue It never occurred to me to even ask that question -- good point! I just sort of assume that anyone who cares about performance is already making sure they use connection pooling and don't try to stand up a new connection for every query. 🙂