This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2022-05-18
Channels
- # announcements (2)
- # asami (20)
- # aws (4)
- # babashka (35)
- # beginners (47)
- # calva (65)
- # cider (19)
- # clj-kondo (63)
- # clojure (177)
- # clojure-austin (2)
- # clojure-europe (27)
- # clojure-nl (1)
- # clojure-uk (4)
- # clojurescript (13)
- # community-development (5)
- # conjure (5)
- # css (2)
- # data-oriented-programming (9)
- # datalevin (13)
- # datascript (15)
- # datomic (4)
- # devcards (6)
- # duct (4)
- # emacs (8)
- # funcool (1)
- # gratitude (2)
- # helix (3)
- # hyperfiddle (3)
- # introduce-yourself (1)
- # jobs (4)
- # jobs-discuss (26)
- # lambdaisland (2)
- # lsp (20)
- # malli (2)
- # meander (2)
- # mid-cities-meetup (5)
- # missionary (15)
- # music (4)
- # off-topic (37)
- # reagent (3)
- # reitit (2)
- # releases (2)
- # ring (18)
- # shadow-cljs (70)
- # specter (4)
- # sql (20)
- # timbre (3)
- # tools-build (43)
- # tools-deps (11)
- # vim (29)
- # xtdb (61)
I’m working on a SQL-query-heavy report that’s running into some issues. In the original version, I get a dataset from a HugSQL query, then for each row I need to run 3 more separate queries for data to be merged into the report. This works fine.
I want to stream the result to a file, in a batch job, so I switched to `jdbc/plan` and `reduce` that into a writer. This also works fine.
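A minimal sketch of that setup, assuming hypothetical table and column names in place of the real HugSQL queries:

```clojure
(require '[next.jdbc :as jdbc]
         '[clojure.java.io :as io])

;; Sketch only -- table/column names and the output format are assumptions.
(defn stream-report! [ds out-file]
  (with-open [w (io/writer out-file)]
    (reduce
     (fn [_ row]
       ;; the 3 extra per-row queries; each execute! opens its own Connection
       (let [a (jdbc/execute! ds ["SELECT * FROM a WHERE report_id = ?" (:report/id row)])
             b (jdbc/execute! ds ["SELECT * FROM b WHERE report_id = ?" (:report/id row)])
             c (jdbc/execute! ds ["SELECT * FROM c WHERE report_id = ?" (:report/id row)])]
         (.write w (pr-str (assoc (into {} row) :a a :b b :c c)))
         (.write w "\n")))
     nil
     ;; plan streams rows without realizing the whole result set in memory
     (jdbc/plan ds ["SELECT * FROM report_rows"]))))
```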
When I run 2 reports at once, I have an issue.
When I call those 3 additional queries, using either HugSQL or `jdbc/execute!` with `db/*db*`, I get a connection timeout error.
What exactly is `db/*db*`? A hash map? A DataSource? A Connection?
Ah, yes. It’s a mount-managed datasource returned from `jdbc/get-datasource`, connected to PostgreSQL.
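Something like this, presumably (a sketch; the spec values are assumptions):

```clojure
(require '[mount.core :refer [defstate]]
         '[next.jdbc :as jdbc])

;; Assumed shape of db/*db*: a bare (unpooled) datasource managed by mount.
(defstate ^:dynamic *db*
  :start (jdbc/get-datasource {:dbtype   "postgresql"
                               :dbname   "report_db"   ; hypothetical
                               :host     "localhost"
                               :user     "app"
                               :password "secret"}))
```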
At one point we used conman, but apparently we’ve migrated to the built-in pooling now.
pgbouncer
Hmm, I would expect that to work then, since `execute!` will call `get-connection` to get a `Connection`, run the query, and then `.close` the `Connection` -- so it should be a different `Connection` to the one that is currently in use (in the `plan` reduction) that has an associated open `ResultSet`... But I don't use PostgreSQL and I don't know of folks using the built-in pooling stuff...
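That is, when `execute!` is given a datasource rather than a connection, it behaves roughly like this simplified sketch:

```clojure
;; Roughly what (jdbc/execute! ds sql-params) does with a DataSource:
(with-open [con (jdbc/get-connection ds)] ; fresh Connection each call
  (jdbc/execute! con sql-params))         ; with-open guarantees .close
```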
Is it a local PG instance? Perhaps it has a very low number of connections configured, so a request for a new connection times out while it is still using the other connections? Perhaps `.close` on a pooled PG connection doesn't return it to the pool immediately (which would seem like a very poor implementation)? I suspect you may need a PostgreSQL expert to help you with this...
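If it is local, one quick way to check the limits from the REPL (a sketch; `ds` stands in for the datasource):

```clojure
;; How many connections does PG allow, and how many are currently open?
(jdbc/execute-one! ds ["SHOW max_connections"])
(jdbc/execute-one! ds ["SELECT count(*) AS open FROM pg_stat_activity"])
```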
Hmm, ok. Well, it’s helpful to know that it isn’t something I’m doing wrong with next.jdbc, as far as I know. Let me try tinkering with my local service.
Or I should say, it’s not an in-process connection pool like conman; it’s more like proxysql.
I figured it out. Since we switched to pgbouncer in production and stopped using connection pooling on our local machines, I was running out of connections. So if I just re-enable conman connection pools for dev environments, it works.
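The dev-only fix might look something like this, following conman's documented usage (the JDBC URL is assumed):

```clojure
(require '[conman.core :as conman]
         '[mount.core :refer [defstate]])

;; Dev environments: a conman (HikariCP-backed) pool instead of the bare datasource.
(defstate ^:dynamic *db*
  :start (conman/connect! {:jdbc-url "jdbc:postgresql://localhost/report_db"})
  :stop  (conman/disconnect! *db*))
```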
I would recommend having the "same" setup locally to QA/production in terms of the software in use and just adjusting your local PG instance to have more connections if appropriate. Having things be different between dev/CI/QA/production is a way for bugs to get introduced...
(at work, we use HikariCP for connection pooling across all tiers, although in dev/CI we use a smaller max size on the connection pool)
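For reference, a HikariCP pool built via next.jdbc with a dev-sized maximum might look like this (spec values are assumptions):

```clojure
(require '[next.jdbc.connection :as connection])
(import '(com.zaxxer.hikari HikariDataSource))

;; HikariCP takes :username rather than :user; :maximumPoolSize kept small for dev/CI.
(def ds (connection/->pool HikariDataSource
                           {:dbtype          "postgresql"
                            :dbname          "report_db"  ; hypothetical
                            :username        "app"
                            :password        "secret"
                            :maximumPoolSize 5}))         ; larger in production
```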
Yeah, that’s how it was before… I’ll try to set up my local machine accordingly.
I’m not a DBA in any respect, but I thought postgres had built-in connection pooling now, if that even matters here.
Note that I use `execute!` for the 3 other queries, not `plan`. I’m still learning my way around streaming, and it seemed the best approach given that I’m already inside a `ResultSet` iteration anyway.