
> If you can’t get the results all in memory, that’s really your only option -- aside from reading explicitly paginated results and potentially using multiple connections (from a pool, presumably)


what about this:

(def get-fruits-map-qualified-batch
  (p/compile
    "SELECT name FROM fruit"
    {:connection connection
     :size 3
     :row (p/rs->map)
     :key (p/qualified-key str/lower-case)}))

(get-fruits-map-qualified-batch connection (partial println "-->"))
;--> [#:fruit{:name Apple} #:fruit{:name Banana} #:fruit{:name Orange}]
;--> [#:fruit{:name Peach}]
; 4


A kind of pagination. It would benefit from the reducible approach; as it is now, it stops only when everything has been read (could be a terabyte of data)…
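The short-circuiting that a reducible result would buy is roughly this (illustrative sketch only, not porsas's API; `first-n-rows` is a made-up helper):

```clojure
;; If the compiled query returned something reducible (IReduceInit) over
;; the ResultSet, a reduction could short-circuit with `reduced` and stop
;; pulling rows as soon as it has enough.
(defn first-n-rows
  "Take at most n rows from a reducible source."
  [reducible n]
  (reduce (fn [acc row]
            (let [acc (conj acc row)]
              (if (>= (count acc) n)
                (reduced acc) ; stop here; the rest of the data is never read
                acc)))
          []
          reducible))

;; early termination works even over an infinite source:
(first-n-rows (range) 3)
;; => [0 1 2]
```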


In simple tests, query compilation + realizing the rows into maps at request time seems to be much faster than the reducible way. I guess the difference comes from implementation details; I should try a reducible version in porsas to see how the approaches actually compare perf-wise. Interesting.


;; 1100ns
(let [query (p/compile
              "SELECT * FROM fruit"
              {:connection connection
               :row (p/rs->map)
               :key (p/unqualified-key str/lower-case)})]
  (into [] (map :cost) (query connection)))
; => [59 29 139 89]


Something you need to consider: based on the code fragments you're showing, it's not practical or idiomatic in most cases to "compile" a query against a connection just once at startup and then run that compiled query multiple times against that same connection.


It's fine in specific, performance-critical code to do that.


It’s not the same connection


the connection is passed in at request-time


the connection can be used to pre-fetch the ResultSet metadata & compile the optimal code to read a row from the ResultSet
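Roughly this idea, sketched with plain JDBC interop (`compile-row-reader` is an illustrative name, not porsas's actual internals):

```clojure
(import '(java.sql Connection PreparedStatement ResultSet ResultSetMetaData))

(defn compile-row-reader
  "Borrow a connection once to pre-fetch the ResultSetMetaData for `sql`,
   and return a fn that reads a single ResultSet row into a map keyed by
   lower-cased column labels. Note: some JDBC drivers return nil from
   .getMetaData before the statement is executed."
  [^Connection connection sql]
  (with-open [^PreparedStatement ps (.prepareStatement connection sql)]
    (let [^ResultSetMetaData rsmeta (.getMetaData ps)
          cols (mapv (fn [i]
                       [(keyword (.toLowerCase (.getColumnLabel rsmeta (int i)))) i])
                     (range 1 (inc (.getColumnCount rsmeta))))]
      ;; the returned fn closes over the column list, so per-row work is
      ;; just indexed .getObject calls -- no metadata lookups at request time
      (fn [^ResultSet rs]
        (reduce (fn [row [k i]]
                  (assoc row k (.getObject rs (int i))))
                {}
                cols)))))
```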


OK, so you need a connection for the compile and then a connection for the "run". Still, it's not practical or idiomatic for most people to "precompile" all their application queries upfront once like that.


Our application has thousands of distinct queries, for example. It would be ridiculous to try to precompile all those somehow upfront before the app starts to run.


If you have a limited number of queries, sure, it makes sense to split that.


But it really doesn't make sense for most applications.


We have a huge number of queries that are built conditionally, on the fly. The "precompile" space for those is tiny.


I agree; as I said, I'm just poking at the perf to see how fast we can go with Clojure. We have done some perf-critical apps and have fallen back to plain JDBC for some of the queries. Maybe something like porsas could be used there instead.


But precompiling opens up the road to specs. We could extract the row specs and generate function specs for those queries.
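A rough sketch of what that could look like (hypothetical; porsas doesn't do this today, and the column->predicate map is hand-written here instead of derived from the real metadata):

```clojure
(require '[clojure.spec.alpha :as s])

;; Pretend this map was derived from ResultSetMetaData at compile time,
;; e.g. java.sql.Types/VARCHAR -> string?
(def fruit-row-cols {:fruit/name string?})

(defn row-spec
  "Build a spec that checks each known column of a row map against the
   predicate derived from its JDBC type."
  [cols]
  (s/spec (fn [row]
            (and (map? row)
                 (every? (fn [[k pred]] (pred (get row k))) cols)))))

(s/valid? (row-spec fruit-row-cols) #:fruit{:name "Apple"})
;; => true
(s/valid? (row-spec fruit-row-cols) #:fruit{:name 42})
;; => false
```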