
playing around with next.jdbc i’m a bit puzzled by why dissoc doesn’t seem to work on the resultset from plan whereas assoc does…

(reduce (fn [a e] (conj a (assoc e :foo 1))) [] (jdbc/plan db ["select * from data limit 10"]))
(reduce (fn [a e] (conj a (dissoc e :data/id))) [] (jdbc/execute! db ["select * from data limit 10"]))
(reduce (fn [a e] (conj a (dissoc e :data/id))) [] (jdbc/plan db ["select * from data limit 10"]))
Syntax error (ClassCastException) compiling at (src/next_jdbc_streaming.clj:61:3).
next.jdbc.result_set$mapify_result_set$reify__1973 cannot be cast to clojure.lang.IPersistentMap


@grahamcarlyle The abstraction that plan uses tries to avoid building rows so access functions are extremely low overhead. When you call assoc, it builds a full datafiable row and then you have a regular hash map which it assocs the new key/value into. For dissoc, it would also have to build a full datafiable row -- but I haven't implemented what's needed for dissoc yet because it wasn't a use case I was expecting folks to need (you can select-keys tho' which constructs a new, plain hash map with just the specified keys -- still without constructing the underlying row hash map).
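To illustrate the `select-keys` point above, a minimal sketch (assuming the same `db` and `data` table from the earlier snippets, with `id` and `foo` columns):

```clojure
(require '[next.jdbc :as jdbc])

;; select-keys works on plan's rows without realizing the full row --
;; it builds a new, plain hash map from just the named columns,
;; reading each one directly from the underlying ResultSet
(reduce (fn [acc row]
          (conj acc (select-keys row [:data/id :data/foo])))
        []
        (jdbc/plan db ["select * from data limit 10"]))
```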


I'd have to implement the whole of IPersistentMap to support dissoc which I've been trying to avoid (the docstring makes note of that)... but maybe it's worth doing all of that... Originally, you couldn't even assoc into a row because that would require materializing the whole row (which was the driver for not doing such operations).


the datascript entity implementation has something like this: it doesn't pull the attributes from its "store" until you need them, then it caches that result on the entity record. Not sure that's applicable here


@carkh next.jdbc's plan provides more "map-like" functionality than clojure.java.jdbc's reducible-query and tries to do as much as possible by direct access to the underlying (mutable) ResultSet object. reducible-query only supports a limited set of accessor operations (`ILookup` and part of `Associative`). next.jdbc supports a lot more, but anything that really does require a hash map is going to have to realize the whole row by building a full data structure from the ResultSet.
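The low-overhead access path described above looks like this in practice (a sketch, reusing the `db` and `data` table from earlier):

```clojure
;; keyword lookup on plan's rows goes straight to the ResultSet --
;; no row hash map is ever built
(reduce (fn [acc row] (conj acc (:data/id row)))
        []
        (jdbc/plan db ["select * from data limit 10"]))

;; or the equivalent transducing form:
(into [] (map :data/id) (jdbc/plan db ["select * from data limit 10"]))
```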


I'm looking at implementing IPersistentMap completely right now but it has some undesirable side effects which I'm wrestling with...


the use case is very valid, but the perf price... good luck with your wrestling =)


I have completed that on master -- the fiddly part was figuring out what to do about accidental attempts to print a raw row from plan (because sometimes it's the result of an incorrect use of plan and sometimes it's valid):

user=> (require '[next.jdbc :as jdbc])
user=> (def ds (jdbc/get-datasource {:dbtype "h2:mem" :dbname "repl"}))
user=> (jdbc/plan ds ["select 1"])
#object[next.jdbc.result_set$eval1402$fn$reify__1404 0x7cab1508 "`IReduceInit` from `plan` -- missing reduction?"]
user=> (into [] (jdbc/plan ds ["select 1"]))
[{row} from `plan` -- missing `map` or `reduce`?]
user=> (into [] (map str) (jdbc/plan ds ["select 1"]))
["{:1 1}"]


(this is an edge case -- if you use (map rs/datafiable-row) then you get "normal" hash maps that are printable in the usual ways)
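A sketch of that `datafiable-row` usage, following the REPL session above (assuming the same `ds` datasource; `datafiable-row` takes the row, a connectable, and an options map):

```clojure
(require '[next.jdbc.result-set :as rs])

;; realize each row from plan as a "normal" hash map, which
;; prints (and behaves) like any other Clojure map
(into []
      (map #(rs/datafiable-row % ds {}))
      (jdbc/plan ds ["select 1"]))
```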


But at least now dissoc, cons, empty, count etc all work on plan's rows.
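With that change, the `dissoc` form that threw the `ClassCastException` at the start of this thread now works as a sketch like this (same `db` and `data` table as before):

```clojure
;; dissoc now realizes the row (building the full datafiable row)
;; and returns a plain hash map without the dropped key
(reduce (fn [acc row] (conj acc (dissoc row :data/id)))
        []
        (jdbc/plan db ["select * from data limit 10"]))
```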


Re: misuse of plan -- see the tests around that. Feedback on what those cases should do, if you disagree, is welcome, as well as suggestions for new tests and how they should work...


Also see from this point onward for tests verifying what you can call without realizing rows, and what will realize a row (these operations should all be intuitive in terms of whether or not they realize rows, I hope?).