
we’re using java.jdbc to connect to a Microsoft SQL Server instance. I’ve got a larger query that doesn’t seem to be respecting :fetch-size. Any tips?


@uwo i'm not familiar with SQL Server, however there's often a few other knobs to turn on postgres to get cursors working (wrapping in a transaction, setting :result-type :forward-only) -- do you have to do the same for SQL Server?


@rymndhng thanks for the response! I have tried with :result-type :forward-only. Do we need to declare a cursor in our sql query in order to leverage cursors, or does java.jdbc do that under the covers? Also, are there certain types of queries which would force all results to be sent to the client? I don’t know if our query could be the culprit.


it usually does it under the covers -- i wrote a blog post about testing this a while ago: you should be able to tell whether a cursor's used by running a query on a large set of data and measuring when the first row is returned to the client
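A rough sketch of that measurement (my guess at what the blog post's helper might look like -- `new-measurement`, `db-spec`, and `big_table` are all placeholders):

```clojure
(require '[clojure.java.jdbc :as jdbc])

;; Hypothetical reconstruction of the timing helper: returns a row-fn
;; that reports how long the first row took to arrive.
(defn new-measurement []
  (let [start  (System/currentTimeMillis)
        first? (atom true)]
    (fn [row]
      (when @first?
        (reset! first? false)
        (println "first row after" (- (System/currentTimeMillis) start) "ms"))
      row)))

;; With a cursor in play, the first row of a huge table arrives almost
;; immediately; without one, the whole result set is pulled first.
(jdbc/query db-spec
            ["SELECT * FROM big_table"]
            {:fetch-size  1000
             :result-type :forward-only
             :concurrency :read-only
             :row-fn      (new-measurement)})
```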


awesome, thanks! I’ll check out the post


I stole your idea to run (new-measurement) as the :row-fn, but curiously I’m never even seeing it fire. Here’s the code I’m using (approximately).


(let [[base-query & params] query
      _ (prn "AutoCommit?" (.getAutoCommit sql-conn))
      stmt (jdbc/prepare-statement sql-conn base-query {:fetch-size 10000
                                                        :row-fn (new-measurement)
                                                        :result-type :forward-only
                                                        :concurrency :read-only})]
  (jdbc/query sql-conn (concat [stmt] params)))


with new-measurement just lifted from your blog post (thanks!)


Sean clarified for me that I should be passing :row-fn to query, not prepare-statement. all good here


cool, did it work? 😁


@uwo You might also have to set auto-commit false on the connection you use for that query -- you have to do that for PostgreSQL, as I recall.


(you do that by passing :auto-commit? false to whatever you're using to open that connection)
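Roughly like this (a sketch; `db-spec` and `big_table` are placeholders -- java.jdbc 0.7.x accepts `:auto-commit?` in the options map to `get-connection`):

```clojure
(require '[clojure.java.jdbc :as jdbc])

;; Open the connection with auto-commit off so the driver is allowed to
;; stream results via a cursor, then query over that open connection.
(with-open [con (jdbc/get-connection db-spec {:auto-commit? false})]
  (jdbc/query {:connection con}
              ["SELECT * FROM big_table"]
              {:fetch-size  1000
               :result-type :forward-only
               :concurrency :read-only
               :row-fn      prn}))
```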


@seancorfield ah, yeah. thanks. I did see that and tried to setAutoCommit false on my connection object. It didn’t have any effect. (I set fetch-size to 1 and then did (bounded-count 1 result-set), and it had to pull hundreds of MBs before evaluating)


JDBC drivers are funny things. For some definition of "funny" 😞


I've learned more than I ever wanted to know about JDBC while maintaining java.jdbc 😐


If I recall, fetch-size dictates how paging will work under the hood but it won't make it "lazy". With pg you can declare a cursor in a tx and then issue FETCH commands, each with a size of data to be returned


Then you can feed your "pages" of rows to a fn/chan or whatever and exert backpressure how you like
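That explicit-cursor pattern looks roughly like this on PostgreSQL (a sketch; the cursor name, table, batch size, and `pg-spec` are all made up):

```clojure
(require '[clojure.java.jdbc :as jdbc])

;; DECLARE a cursor inside a transaction, then FETCH fixed-size batches
;; until one comes back empty.
(jdbc/with-db-transaction [tx pg-spec]
  (jdbc/execute! tx ["DECLARE big_cur CURSOR FOR SELECT * FROM big_table"])
  (loop [total 0]
    (let [batch (jdbc/query tx ["FETCH FORWARD 1000 FROM big_cur"])]
      (if (seq batch)
        ;; hand the batch to a fn/chan here, then pull the next page
        (recur (+ total (count batch)))
        total))))
```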


@seancorfield hmm. I threw some diagnostic code in row-fn, following rymndhng’s example. But I’m never seeing it called. Anything obvious I might be doing wrong?


@mpenet Interesting, thanks. That answer is why I was wondering if i had to explicitly create the cursor in the query


I walked entire (large) tables efficiently like that. The other ways, I always had pg send me data continuously, even if I just consumed the first rows


That SO Q&A is very, very out of date regarding java.jdbc's features.


Back then -- in 0.3.0 -- it didn't have support for cursors/lazy reading etc.


@uwo Just to check, you're on the latest version of java.jdbc?


bumping to 0.7.5 now. I have been using 0.7.3


0.7.3 should be new enough for this discussion (but there have been performance improvements since then)


I tried recently with auto-commit false and fetch-size on pg, still had to rely on that trick


There's definitely a "trick" to making it work. I thought you had to explicitly set :fetch-size 0 but maybe I don't recall correctly.


same thing on 0.7.5 (just thought I’d try)


curious why the row-fn isn’t firing though. I must be doing something silly


The :row-fn should be on the query, not the prepare-statement call.


(you should just be able to pass all those options to query and let it create the PreparedStatement for you)
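So the earlier snippet collapses to something like this (a sketch, same placeholder names as before):

```clojure
;; Everything goes through query; java.jdbc builds and configures the
;; PreparedStatement itself from these options.
(let [[base-query & params] query]
  (jdbc/query sql-conn
              (into [base-query] params)
              {:fetch-size  10000
               :result-type :forward-only
               :concurrency :read-only
               :row-fn      (new-measurement)}))
```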


I did a lot of work ensuring that options flowed all the way through all the calls -- if you use a high-level function and pass in the options there, you should almost never need to call prepare-statement yourself these days.


You may also find that reducible-query works better for this (assuming you plan to process all the rows returned, but you want them processed in smaller chunks).
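For example (a sketch; `db-spec` and `big_table` assumed -- note that `reducible-query` doesn't accept `:row-fn`/`:result-set-fn`, you express that in the reduction itself, and the `:auto-commit?` here relies on options flowing through to the connection when java.jdbc opens it):

```clojure
;; Rows are realized only as the reduction consumes them, so this counts
;; a large table without holding it all in memory.
(reduce (fn [n _row] (inc n))
        0
        (jdbc/reducible-query db-spec
                              ["SELECT * FROM big_table"]
                              {:fetch-size   1000
                               :auto-commit? false}))
```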


we do plan to use all rows. I think the only reason we’re not using reducible-query at the moment was that I had some difficulty loading it onto a chan with to-chan/onto-chan. could be an easy fix. I’ll look into that again


OK. If that ends up looking like a java.jdbc bug, please open a JIRA ticket with repro details. I haven't tried using it with core.async (although I recall having a hard time figuring out how to get a reducible collection into a channel when I was playing with core.async a while back).
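One way to bridge a reducible into core.async (a sketch, not part of java.jdbc -- `reducible->chan` is a name I made up): run the reduction on its own thread and use blocking puts, so a full channel pauses the reduction and hence the fetches.

```clojure
(require '[clojure.core.async :as async])

;; Blocking puts inside the reduce provide backpressure: when the buffer
;; is full, the reduction -- and the underlying cursor -- waits.
(defn reducible->chan [reducible buf-size]
  (let [ch (async/chan buf-size)]
    (async/thread
      (reduce (fn [_ row] (async/>!! ch row)) nil reducible)
      (async/close! ch))
    ch))

;; e.g. (reducible->chan (jdbc/reducible-query db-spec ["SELECT ..."]
;;                                             {:fetch-size 1000})
;;                       100)
```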


that’s the result of a diagnostic :row-fn. It still requires transferring every row in order to take 10 or bounded-count 1. I’ll let you know if I have a breakthrough 🙂


for added context, that’s with opts :fetch-size (tried 10000, 1, and 0), :cursors (tried :hold and nil), :result-type :forward-only, :concurrency :read-only


maybe I completely misunderstood the rules for lazily causing the result set to be fetched as you go along. I’ve been consuming the return value of query. I’m just now realizing that this section (“Processing each row lazily”) says you need to use :row-fn 😊


Or use reducible-query? 🙂


interestingly I just got the same result with reducible-query


like (into [] (take 1) (reducible-query etc


I blame jtds:1.3.1 lol 🤷


Have you tried the official MS JDBC driver?


seems like a good idea at this point!


darn. tried with it and (once I figured out the connection string) still same behavior (including reducible-query)


@uwo The MS driver is available as a regular Maven dependency, and you don't need a connection string if you use {:dbtype "mssql" :dbname "mydb" :user "..." :password "..." ...} for your db spec.


See the deps.edn example in the java.jdbc repo for the dependency:

{:deps {com.microsoft.sqlserver/mssql-jdbc
        ;; Note: Assumes Java 8; there's a .jre7 version as well
        {:mvn/version "6.2.2.jre8"}}}


And the mssql-db example in the java.jdbc test file:

(def mssql-db {:dbtype   "mssql"
               :dbname   mssql-dbname
               :host     mssql-host
               :port     mssql-port
               :user     mssql-user
               :password mssql-pass})


thanks! we had been using that artifact to connect to our mssql-backed Datomic instance, so I just went with what I had on hand. Also I wasn’t sure how to specify an instanceName alongside the dbname with the map spec


I’ll certainly try with the driver available on Maven. thanks


By "instanceName" are you referring to the \... portion of the string? If so, you can specify it as part of the :host:

(def mssql-host   (or (System/getenv "TEST_MSSQL_HOST") "\\SQLEXPRESS"))


ah yes! that’s the one! so \\instancename will suffice, then