This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2019-06-18
Channels
- # aws (12)
- # beginners (489)
- # calva (32)
- # cider (117)
- # clj-kondo (4)
- # cljdoc (9)
- # cljs-dev (3)
- # clojure (56)
- # clojure-brasil (1)
- # clojure-europe (10)
- # clojure-italy (44)
- # clojure-nl (9)
- # clojure-russia (1)
- # clojure-sweden (6)
- # clojure-uk (64)
- # clojurescript (6)
- # cursive (6)
- # datascript (4)
- # datomic (6)
- # emacs (3)
- # fulcro (15)
- # graalvm (11)
- # jackdaw (7)
- # jobs (8)
- # jobs-discuss (29)
- # jvm (2)
- # leiningen (5)
- # luminus (10)
- # off-topic (23)
- # pathom (21)
- # planck (11)
- # quil (6)
- # re-frame (11)
- # reagent (17)
- # reitit (8)
- # rewrite-clj (6)
- # shadow-cljs (78)
- # slack-help (2)
- # spacemacs (7)
- # specter (4)
- # sql (60)
- # tools-deps (7)
- # xtdb (11)
Hmm... it's impossible to reproduce exactly this problem with an H2 database (the actual database is Oracle).
Here is a code example: https://github.com/kostafey/db-experiments/blob/master/src/db_experiments/core.clj
I've found a solution by using `mapv` instead of `map` here: https://github.com/kostafey/db-experiments/blob/master/src/db_experiments/core.clj#L41
But `map` works for Oracle databases and when `:as-arrays?` is false. The `:result-set-fn` doesn't change the result for either `doall` or `vec` values.
Another question: I'd like to know the total number of records in the result set inside `:row-fn`.
Something like this: https://github.com/kostafey/java.jdbc/commit/d67fe0d8cf720f5fc37d320bb4c696f3f852f432
@kostafey The `:row-fn` cannot possibly know the number of rows -- it is called per-row, while the result set is being processed.
Interesting re: `map` vs. `mapv` on the array-based results, but I'm not terribly surprised: don't use lazy functions inside result set processing with `query`.
Right, but the result set is ready when `:row-fn` is called per record. So we can pass a count of records like here: https://github.com/kostafey/java.jdbc/commit/d67fe0d8cf720f5fc37d320bb4c696f3f852f432
You'd probably do better extending `IResultSetReadColumn` to `Clob` to do the coercion.
No, the result set is not ready. JDBC is somewhat lazy, so you must only use eager functions inside both `:row-fn` and `:result-set-fn`. And, as you've seen, different JDBC drivers behave differently.
Yes, `mapv` is eager; `map` is lazy.
Please look at the diff: https://github.com/kostafey/java.jdbc/commit/d67fe0d8cf720f5fc37d320bb4c696f3f852f432
No, the result set can be streamed lazily by the JDBC driver. `:row-fn` is called as the `ResultSet` is iterated over -- it's a lazy mutable data structure.
`count` would force the entire `rs` structure to be realized -- preventing any lazy processing.
Ok. Thank you. But perhaps we could pass a handle to the result set itself, e.g.
(map #(row-fn % (rest rs)) (rest rs))
If you want to do something with knowledge of "the whole result set", use `:result-set-fn`. If you want to do something with knowledge of a single row, use `:row-fn`.
Just bear in mind that both of those operate in the context of an open, in-process result set which may be both huge (many millions of rows in the general case) and lazy (because JDBC can lazily stream result sets from the DB in blocks of rows).
If you want to avoid the lazy/eager issue, just do a `query` and post-process the result (without using `:row-fn` or `:result-set-fn` at all).
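That post-processing approach could look like the sketch below. The db-spec, table, and column names here are placeholders, not from the discussion:

```clojure
(require '[clojure.java.jdbc :as jdbc])

;; hypothetical db-spec for illustration only
(def db-spec {:dbtype "h2" :dbname "demo"})

;; query returns a fully-realized vector of row maps, so counting and
;; transforming afterwards is safe -- no open ResultSet is involved
(let [rows  (jdbc/query db-spec ["SELECT id, name FROM users"])
      total (count rows)]
  (mapv #(assoc % :total total) rows))
```

Because `query` realizes everything before returning, laziness is no longer a concern -- at the cost of holding the whole result set in memory.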
If you want to translate a SQL type to a Clojure type that isn't handled automatically by the JDBC driver, consider `IResultSetReadColumn` as the "proper" way to do it (assuming you have a single DB in play).
This is all why `next.jdbc` does not support `:row-fn` and `:result-set-fn`, by the way -- I've seen so many people trip up on the laziness issue exposed through those functions.
In `next.jdbc`, you either `execute!` to get an entire, fully-realized result set as a Clojure data structure (vector of hash maps, or vector of column names followed by row values -- `as-arrays`), or you use `plan` to produce an `IReduceInit` that you then `reduce` or `transduce`.
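A minimal sketch of those two next.jdbc entry points (the db-spec, table, and column names are placeholders; in real results, map keys are table-qualified and column-label case depends on the driver):

```clojure
(require '[next.jdbc :as jdbc])

;; hypothetical in-memory H2 spec, for illustration
(def ds (jdbc/get-datasource {:dbtype "h2:mem" :dbname "demo"}))

;; execute!: the whole result set, fully realized as a vector of maps
(jdbc/execute! ds ["SELECT id, name FROM users"])

;; plan: an IReduceInit -- each row is consumed while the ResultSet is
;; still open, so nothing lazy can escape
(reduce (fn [total row] (+ total (:id row)))  ; access columns by label
        0
        (jdbc/plan ds ["SELECT id FROM users"]))
```

With `plan`, the reduction runs inside the lifetime of the connection and result set, which is how next.jdbc sidesteps the lazy/eager trap entirely.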
I don't want to fetch the whole result set, I only need to know the record count in `:row-fn`. Is that impossible?
I mean, ok, I will use `:result-set-fn`, but if I call `(count rs)` inside it, it will fetch all the rs records. That is not good. Is it possible to count records without fetching them?
@thenonameguy Thank you, looks like an idea.
SQL lets you do `SELECT COUNT(*) AS total FROM ...` and that won't "fetch all rs records". Use the right tool for the job.
@seancorfield Ok, thank you very much.
Depending on how invested you are already in `clojure.java.jdbc`, you might want to consider using `next.jdbc`, which is a different API -- both simpler and, mostly, faster than `clojure.java.jdbc` -- and it's where development effort will be focused from now on (if it isn't clear, I maintain both libraries).
I also meant to say that
(def db
  {:subname (str "file://"
                 (.replaceAll (System/getProperty "user.home") "\\\\" "/")
                 "/tmp/data;AUTO_SERVER=TRUE")
   :subprotocol "h2"})
is not recommended as a way to specify data sources: `:dbtype` / `:dbname` is better supported (and is the "only" way in `next.jdbc`).
It's only for this temporary/demo/test case. I use `clojure.java.jdbc` in https://github.com/kostafey/ejc-sql
Using Monolithic Contrib? That's been deprecated for eight years https://github.com/kostafey/ejc-sql/blob/master/project.clj#L7
Just FYI, your commit to pass the row count to `:row-fn` would be a breaking change for anyone currently using that option, so `clojure.java.jdbc` could not possibly do that (but I'm sure you know that).
Yes, that's the subject of this discussion. My clone of `clojure.java.jdbc` is only for tests & experiments. And I'll fix the last commit in `ejc-sql` soon.
FWIW, I just looked over the Java docs for `ResultSet` and `ResultSetMetaData` and everything is cursor-based, so there isn't even a Java-level method for getting the number of rows in a result set.
Looks like a simple solution: https://github.com/kostafey/ejc-sql/commit/3af1e0deef9f59c4781ca9df7fd11ad2ce282828
For resolving Clob values to String automatically, I'd probably still recommend extending the `IResultSetReadColumn` protocol to `Clob`. Just sayin' 🙂
(then it happens automatically and you don't need to map over the rows at all)
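That protocol extension might be sketched as follows (signature per clojure.java.jdbc 0.7; the slurp-based coercion is one reasonable choice, not necessarily what the library authors would pick):

```clojure
(require '[clojure.java.jdbc :as jdbc])

;; Every Clob column value is coerced to a String as rows are read,
;; so no per-row mapping is needed in application code.
(extend-protocol jdbc/IResultSetReadColumn
  java.sql.Clob
  (result-set-read-column [v _rsmeta _idx]
    (with-open [rdr (.getCharacterStream v)]
      (slurp rdr))))
```

Note the caveat from earlier in the thread: this extension is global, so it's only appropriate when a single DB (or a uniform Clob policy) is in play.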
Hi, I've been learning Clojure for about 2 weeks and I'm building a small project with the language. I want to write tests, but I don't know how to test functions that need the database. Some people told me to use ring-mock, but I haven't found any information about using it with a database. Could someone help me? Some library? Piece of code? (Example of my code)
(defn insert-to-db
  [to-insert destin table-destin]
  (loop [index 0]
    (when (< index (count to-insert))
      (insert-element! destin table-destin (nth to-insert index))
      (recur (inc index)))))
How do I test this? I thought about using a database on localhost, but that isn't a very good solution.
Hi @contato509 First off, a stylistic comment: since you want to iterate through a collection and perform a side-effecting operation (and don't care about the return value), it's more idiomatic (and simpler) to use `doseq`:
(defn insert-to-db
  [to-insert destin table-destin]
  (doseq [row to-insert]
    (insert-element! destin table-destin row)))
There are several approaches to testing things when you have a database involved:
- You can isolate the DB operations from the pure functions and test just the (pure) business logic, on the assumption that simple CRUD operations "just work".
- You can do that isolation and have the business logic return a data structure that describes the DB updates you want to perform -- and then test that you get the right structures back. (And, yes, you'd separately need to write tests for the DB update logic against a real DB, but you could treat that as a separate library -- and it should be simple, generic CRUD code.)
- You could try to mock the DB access functions (this can be a painful approach, but sometimes it's worth it).
- You can run your tests inside a transaction and roll it back at the end -- so, use a real DB but just ensure your tests make no updates.
- You can run a DB in Docker and set it up / tear it down for your test suite as needed.
And several other options.
What we do at work mostly assumes a real DB in Docker with test data, and our test scripts setup the DB from scratch and tear it down completely around the tests.
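The rollback-transaction approach mentioned above could be sketched like this with clojure.java.jdbc (db-spec, the table, and the row data are assumptions for illustration; insert-to-db is the function from the question):

```clojure
(require '[clojure.java.jdbc :as jdbc]
         '[clojure.test :refer [deftest is]])

(deftest insert-to-db-test
  (jdbc/with-db-transaction [tx db-spec]
    (jdbc/db-set-rollback-only! tx)   ; everything in this tx is rolled back
    (insert-to-db [{:id 1} {:id 2}] tx "items")
    (is (= 2 (count (jdbc/query tx ["SELECT * FROM items"]))))))
```

The test exercises real DB behavior (constraints, types, SQL) but leaves the database unchanged afterwards, since the transaction is marked rollback-only before any inserts run.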
Thanks for the tip about doseq, it will make my code cleaner.
I think I will take the painful approach; in my case it will be better.
Thanks for all the help!