#sql
2022-05-18
jmckitrick16:05:13

I’m building a SQL-query-heavy report that’s run into some issues. In the original version, I get a dataset from a HugSQL query; then, for each row, I need to run 3 more separate queries whose results get merged into the report. This works fine.

jmckitrick16:05:53

I want to stream the result to a file in a batch job, so I switched to jdbc/plan and reduce the result into a writer. This also works fine.
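For context, the streaming approach described above looks roughly like this. This is a minimal sketch assuming next.jdbc; the names `ds`, `report-sql`, and `row->line` are hypothetical stand-ins for the actual datasource, query, and formatting function:

```clojure
(require '[next.jdbc :as jdbc]
         '[clojure.java.io :as io])

(defn stream-report!
  "Reduce a jdbc/plan result directly into a file writer,
  one row at a time, without realizing the whole result set
  in memory."
  [ds report-sql out-file row->line]
  (with-open [w (io/writer out-file)]
    (reduce (fn [_ row]
              ;; each `row` is a lightweight view over the live
              ;; ResultSet; access columns by keyword as needed
              (.write w (row->line row))
              (.write w "\n"))
            nil
            (jdbc/plan ds [report-sql]))))
```

The key point is that the reduction holds one Connection (and its open ResultSet) for the entire duration of the report, which matters for the connection-exhaustion issue discussed below.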

jmckitrick16:05:57

When I run 2 reports at once, I have an issue.

jmckitrick16:05:15

When I call those 3 additional queries, and I use either HugSQL or jdbc/execute! db/*db*, I get a connection timeout error.

seancorfield17:05:03

What exactly is db/*db*? A hash map? A DataSource? A Connection?

jmckitrick17:05:47

Ah, yes. It’s a mount-managed DataSource returned from jdbc/get-datasource, connected to PostgreSQL.

jmckitrick17:05:46

At one point we used conman, but apparently we’ve migrated to the built-in pooling now.

seancorfield17:05:17

Hmm, I would expect that to work then, since execute! will call get-connection to get a Connection, run the query, and then .close the Connection -- so it should be a different Connection from the one that is currently in use (in the plan reduction) with an associated open ResultSet... But I don't use PostgreSQL and I don't know of folks using the built-in pooling stuff...

seancorfield17:05:31

Is it a local PG instance? Perhaps it has a very low number of connections configured, so a request for a new connection times out while it is still using the other connections? Perhaps .close on a pooled PG connection doesn't return it to the pool immediately (which would seem like a very poor implementation)? I suspect you may need a PostgreSQL expert to help you with this...

jmckitrick17:05:37

Hmm, ok. Well, it’s helpful to know it isn’t something I’m doing wrong with next.jdbc, to my knowledge. Let me try tinkering with my local service.

hiredman18:05:59

pgbouncer is not a connection pool

hiredman18:05:46

or I should say, it is not an in-process connection pool like conman or similar; it is more like ProxySQL

jmckitrick18:05:14

I figured it out. Since we switched to pgbouncer in production and stopped using connection pooling on our local machines, I was running out of connections. So if I just re-enable conman connection pools for dev environments, it works.
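The dev-only fix described above could look something like this. A minimal sketch assuming conman and mount; the state name `*db*` matches the earlier discussion, while the JDBC URL is a hypothetical placeholder:

```clojure
(require '[conman.core :as conman]
         '[mount.core :refer [defstate]])

;; Pooled datasource for dev, so the plan reduction and the
;; per-row execute! calls share a bounded pool instead of
;; opening raw connections until PostgreSQL refuses more.
(defstate ^:dynamic *db*
  :start (conman/connect!
          {:jdbc-url "jdbc:postgresql://localhost/reports"})
  :stop  (conman/disconnect! *db*))
```

With a pool in place, each jdbc/execute! borrows and returns a pooled connection rather than creating a fresh one, which avoids exhausting the server's connection limit.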

jmckitrick18:05:34

Everything with next.jdbc works fine

seancorfield18:05:40

I would recommend having the "same" setup locally as in QA/production in terms of the software in use, and just adjusting your local PG instance to have more connections if appropriate. Having things differ between dev/CI/QA/production is a way for bugs to get introduced...

seancorfield18:05:39

(at work, we use HikariCP for connection pooling across all tiers, although in dev/CI we use a smaller max size on the connection pool)
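The HikariCP setup mentioned above can be sketched with next.jdbc's `->pool` helper. This is an illustrative sketch, not the actual work configuration: the db-spec values, pool sizes, and the `ENV` variable check are all assumptions:

```clojure
(require '[next.jdbc.connection :as connection])
(import '(com.zaxxer.hikari HikariDataSource))

(def db-spec
  {:dbtype   "postgresql"
   :dbname   "reports"
   :username "dev"
   :password "dev"})

;; Same pooling library on every tier; only the max pool size
;; differs between dev/CI and production.
(def ^HikariDataSource ds
  (connection/->pool
   HikariDataSource
   (assoc db-spec :maximumPoolSize
          (if (= "production" (System/getenv "APP_ENV")) 20 4))))

;; remember to (.close ds) on shutdown to release the pool
```

Extra keys in the spec (like `:maximumPoolSize`) are applied as bean setters on the HikariDataSource, so the pool can be tuned per environment from one code path.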

jmckitrick19:05:50

Yeah, that’s how it was before… I’ll try to set up my local machine accordingly.

jmckitrick16:05:18

I’m not a DBA in any respect, but I thought PostgreSQL had built-in connection pooling now, if that even matters here.

jmckitrick16:05:08

Note that I use execute! for the 3 other queries, not plan. I’m still learning my way around streaming, and execute! seemed the best approach given that I’m already inside a ResultSet iteration anyway.