This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2019-12-04
Channels
- # adventofcode (154)
- # announcements (1)
- # babashka (8)
- # beginners (28)
- # bristol-clojurians (3)
- # calva (131)
- # cider (43)
- # clj-kondo (14)
- # clojure (135)
- # clojure-europe (1)
- # clojure-italy (7)
- # clojure-madison (1)
- # clojure-nl (6)
- # clojure-spec (8)
- # clojure-uk (90)
- # clojurescript (47)
- # core-async (9)
- # cryogen (4)
- # cursive (12)
- # datomic (9)
- # emacs (7)
- # fulcro (5)
- # graalvm (56)
- # joker (4)
- # juxt (1)
- # leiningen (6)
- # off-topic (62)
- # pathom (4)
- # pedestal (2)
- # reagent (2)
- # reitit (5)
- # ring (2)
- # schema (4)
- # shadow-cljs (133)
- # sql (38)
- # tools-deps (10)
- # vim (28)
Doing some tests trying to find out why a Spring Boot implementation of the same thing is performing much worse than the Clojure one. Nice to see the load on PostgreSQL is lower when using next.jdbc than when using Java with Spring Data. https://graphql.gklijs.tech/results/graphql-servers/average-db-ge-cpu
I have a weird error in production that I don’t know how to begin to solve. I can’t reproduce it locally, and it doesn’t happen every time the function runs in production. It’s a function that collects some stats and writes them to a table - read some data from the db (PostgreSQL on RDS), process it, write it back to the db. It runs in a future. Here’s a part of the stack trace from the log:
java.lang.Thread.run Thread.java: 748
java.util.concurrent.ThreadPoolExecutor$Worker.run ThreadPoolExecutor.java: 624
java.util.concurrent.ThreadPoolExecutor.runWorker ThreadPoolExecutor.java: 1149
java.util.concurrent.FutureTask.run FutureTask.java: 266
...
clojure.core/binding-conveyor-fn/fn core.clj: 2030
vilect.infrastructure.pub-sub/fn/fn pub_sub.clj: 39
vilect.infrastructure.pub-sub/fn/fn/fn pub_sub.clj: 41
vilect.infrastructure.pub-sub/run-handler pub_sub.clj: 26
...
vilect.recruiting.aggregators/update-process-dashboard! aggregators.clj: 42
vilect.recruiting.aggregators/update-process-dashboard!* aggregators.clj: 32
vilect.recruiting.aggregators/update-process-dashboard!*/save-stats! aggregators.clj: 27
vilect.recruiting.query-repo/update-process-dashboard-data! query_repo.clj: 19
vilect.recruiting.query-repo/fn/G query_repo.clj: 7
vilect.db.core/fn core.clj: 1825
vilect.db.core/execute! core.clj: 252
clojure.java.jdbc/query jdbc.clj: 1171
clojure.java.jdbc/db-query-with-resultset* jdbc.clj: 1094
clojure.java.jdbc/prepare-statement jdbc.clj: 679
com.zaxxer.hikari.pool.HikariProxyConnection.prepareStatement HikariProxyConnection.java
com.zaxxer.hikari.pool.ProxyConnection.prepareStatement ProxyConnection.java: 310
com.sun.proxy.$Proxy10.prepareStatement
com.zaxxer.hikari.pool.ProxyConnection$ClosedConnection$1.invoke ProxyConnection.java: 469
java.sql.SQLException: Connection is closed
errorCode: 0
Where and how should I start getting to the bottom of this?
you could catch SQLException for a closed connection and retry - using the pool to get a new connection
alternatively you could be using a lazy return type and failing to consume it before exiting the scope of the query
but I would assume that wouldn't be intermittent
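A minimal sketch of that retry idea, assuming a hypothetical save-stats! function that takes a db-spec backed by the HikariCP pool (all names here are illustrative):

(defn save-stats-with-retry!
  [db-spec stats]
  (try
    (save-stats! db-spec stats)
    (catch java.sql.SQLException _
      ;; asking the pool again yields a fresh connection,
      ;; so a single retry often clears "Connection is closed"
      (save-stats! db-spec stats))))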
Yeah, it could be intermittent when you're working with a connection pool and async (multi-threaded) code...
ahh, right
The connection could stay usable behind the scenes until the pooled datasource reclaims it. I've seen that happen. It's also more likely with some DB drivers than others 😕
so one thing to do there would be to ensure you are asking the pool for new connections with some granularity, and not doing multiple operations with one bound pooled conn?
@msolli Are you using :result-set-fn to process the results? Is that function completely eager?
@noisesmith With a pooled datasource, closing the connection may just return it to the pool but not actually close it until it is reused by another connection request -- so if you're trying to continue using a connection after it was returned to the pool, it may work for a while (or not).
I was thinking lazy seqs could be the problem, so I added doall around the calls that produce them. But now I see that the exception is from the write function. I’ll check if there’s anything lazy going on there.
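For reference, a sketch of what “adding doall” looks like - forcing the whole result while the pooled connection is still open (query and parameter names are illustrative):

(doall
  (jdbc/query db-spec
              ["SELECT * FROM applications WHERE process_id = ?" process-id]))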
I’m using jdbc/query from java.jdbc 0.7.10. On inspecting the class of the return value, it is indeed a clojure.lang.LazySeq. That could be it, right there.
Are you using :result-set-fn? (the default behavior is to return a fully realized sequence -- if you provide your own :result-set-fn it must be eager)
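A sketch of that distinction with clojure.java.jdbc 0.7.x (table and column names are illustrative):

;; default: the rows are fully realized before the connection is released
(jdbc/query db-spec ["SELECT * FROM stats WHERE process_id = ?" 42])

;; a custom :result-set-fn must be eager; a lazy one risks reading rows
;; after the pooled connection has been returned
(jdbc/query db-spec ["SELECT * FROM stats WHERE process_id = ?" 42]
            {:result-set-fn (partial into [])})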
Ah, ok, I might be doing something stupid. The “query” that’s passed to jdbc/query is in fact an INSERT … RETURNING *;
I guess I'm surprised that even works at all -- but PostgreSQL continually surprises me with the weird things it allows... 🙂
Sean - may I jump in with a quick Q regarding: https://cljdoc.org/d/seancorfield/next.jdbc/1.0.10/doc/getting-started/friendly-sql-functions . Am I reading the PostgreSQL bit at the bottom correctly that there is no actual support (or intention to support) unrolling comma syntax in IN queries in next.jdbc for now? I understand the "equivalence" of using ANY instead, but would like to make sure that I am again not missing something.
@boyanb Neither clojure.java.jdbc nor next.jdbc provide any support for unrolling IN lists. It's just that PostgreSQL supports a way to do it via a single ? parameter. Only PostgreSQL 🙂
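For anyone following along, a sketch of the ANY(?) alternative the docs describe, using a JDBC array built from the connection (ds, table, and column names are illustrative):

(require '[next.jdbc :as jdbc])

(with-open [con (jdbc/get-connection ds)]
  (let [arr (.createArrayOf con "bigint" (into-array Long ids))]
    (jdbc/execute! con ["SELECT * FROM candidates WHERE id = ANY(?)" arr])))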
That's why I recommend HoneySQL if you want to avoid string-bashing for more complex queries.
clojure.java.jdbc and next.jdbc treat the SQL string as completely opaque -- they don't even try to count the parameter substitutions, they leave that validation up to the driver.
I've never found love for honey. We've always been very comfortable rolling with a lot of heavy (sometimes even business) logic in SQL, and I've grown not to like data DSLs over it. Even one (allegedly) as good as honey.
Fair enough. If you're happy with string-bashing (as I am for the most part) then you can avoid DSLs.
We use HoneySQL for situations where we are composing SQL fragments across multiple functions in a pipeline and want to be able to merge into various parts of the SQL query in each function.
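A small sketch of that composition style with HoneySQL 0.9.x helpers (the functions and columns here are just illustrative):

(require '[honeysql.core :as sql]
         '[honeysql.helpers :as h])

(defn base-query []
  (-> (h/select :c.id :c.name)
      (h/from [:candidates :c])))

(defn only-active [q]
  (h/merge-where q [:= :c.status "active"]))

(defn newest-first [q]
  (h/merge-order-by q [:c.created_at :desc]))

;; each step merges into a different part of the final query
(sql/format (-> (base-query) only-active newest-first))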
Aye. Anyway, thank you. The conversation yesterday was also really helpful. It was very easy to achieve what I was after with a simple datafy call and transducers. I was really overthinking it before.
Oh cool! Glad to hear that.
Would you recommend jdbc/execute! for HoneySQL queries that are inserts? Something like this:
(jdbc/execute!
*db*
(hc/format query :quoting :ansi :params params)
{:row-fn c/normalize-keys})
I guess so, reading the API. I think what tripped me up was that in HoneySQL’s readme there’s no mention of execute!, only query.
did you solve your issue? we had something very close to this today and it had to do with future copying a dynamic var connection from a request
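A sketch of that failure mode, in case it matches: a per-request connection held in a dynamic var, with the binding conveyed into a future that outlives the request (every name here is hypothetical):

(def ^:dynamic *conn* nil)

(defn handler [req]
  (with-open [conn (jdbc/get-connection db-spec)]
    (binding [*conn* conn]
      ;; the binding is conveyed into the future, but with-open closes
      ;; the connection when the handler returns, so the future can hit
      ;; "Connection is closed"
      (future (do-db-work *conn*))
      (respond req))))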
I’m trying to fix it now by using jdbc/execute! for the insert statement. I won’t know if it works until I’ve pushed to production and seen the lack of error messages in the log. 🙂 But I’m pretty confident this will do the trick, since I can already verify it’s not returning a LazySeq.
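Roughly the shape of that change - running the insert through jdbc/execute!, which is eager and returns update counts (the statement and names are illustrative, not the actual query):

(jdbc/execute! db-spec
               ["INSERT INTO process_dashboard (process_id, stats) VALUES (?, ?)"
                process-id stats])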
Thanks so much, @seancorfield, really saved the day (or rather, night, here in Norway).
It's weird to me that running query with INSERT ... RETURNING * produces a lazy sequence tho'... I've no idea how that would happen...