
Who uses CLOB or BLOB columns? I didn't do anything to support those in next.jdbc, so I'm curious how folks deal with them using c.j.j.? I've just added a helper function and an example column reader for CLOB (that calls .getObject and then slurps from the CLOB's character stream to produce a string) and added a brief mention in the result set builders docs: (master, at the end of the first section which describes all the public functions).
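For reference, a sketch of how that column reader can be wired into a result set builder via an adapter (assuming the `clob->string` / `clob-column-reader` names on master, and `ds` is an existing datasource; table and column names are made up):

```clojure
(require '[next.jdbc :as jdbc]
         '[next.jdbc.result-set :as rs])

;; wrap the as-maps builder so CLOB columns are converted to Strings
;; while the result set is still open:
(def clob-builder
  (rs/as-maps-adapter rs/as-maps rs/clob-column-reader))

(jdbc/execute! ds
               ["select id, body from documents"]
               {:builder-fn clob-builder})
```

The adapter approach keeps the conversion inside result set processing, which matters because on several databases large objects cannot be read after the result set is closed.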


We use CLOBs with H2, unfortunately much more than we should (we should have used just VARCHAR in pretty much every case). We have this little function for converting them back to strings when they're read:

(defn declob
  "Turn a CLOB into a String"
  [^java.sql.Clob clob]
  (when clob
    ;; don't strip any newlines! (as shown here: )
    (slurp (.getCharacterStream clob))))


That's pretty much what the new clob->string helper does on master of next.jdbc...


Is there a sensible "default" helper/example for BLOB, perhaps producing a byte[]? How do folks normally process BLOB columns? What would you expect to find in the documentation for CLOB/BLOB support/usage? (the reason it matters is that once the result set is closed, you can't read these large fields on several databases, so you have to turn them into data while you are processing the result set)
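For illustration, a minimal `blob->bytes` helper along the same lines as `declob` above (a hypothetical name, not part of either library as of this discussion):

```clojure
(defn blob->bytes
  "Consume a java.sql.Blob into a byte array.
  Must be called while the result set is still open."
  ^bytes [^java.sql.Blob blob]
  (when blob
    ;; Blob/getBytes is 1-indexed; this assumes the BLOB length
    ;; fits in an int -- for very large values you would stream
    ;; from .getBinaryStream instead of materializing a byte[]
    (.getBytes blob 1 (int (.length blob)))))
```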


@seancorfield With c.j.j I have a column which is a Postgres bytea.


In the not-too-distant future, I will migrate this usage to next.jdbc


I haven't had any issues so far (not that my use is extensive). For insert/update, I do this: (.toByteArray bs). Reading it back out, IIRC I just get a byte array without doing anything special. In my application, this column contains a highly optimized representation of something I am visualizing; it is kind of an app-specific materialized view. On the read side, performance is critical. When I migrate this to next.jdbc, I've been planning to take a look at the more primitive functions to ensure that Clojure does as little as possible to the result set (not sure this is necessary). In my app, I send this over the wire to the browser pretty much as-is, and I don't want anything to get in the way of that.
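In next.jdbc that usage might look like the sketch below (table and column names are invented; the PostgreSQL JDBC driver maps bytea to byte[] directly, so no coercion should be needed):

```clojure
(require '[next.jdbc :as jdbc])

;; writing: pass the byte array as an ordinary parameter
(jdbc/execute-one! ds
  ["insert into views (id, payload) values (?, ?)"
   1 (.toByteArray bs)]) ; bs: a ByteArrayOutputStream, as above

;; reading: the bytea column comes back as a byte[] value
(let [row (jdbc/execute-one! ds
            ["select payload from views where id = ?" 1])]
  (:views/payload row))
```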


@dcj Thanks. So I suspect that PG data type does not depend on the result set remaining open to "inflate" the value from the SQL type to the JDBC type (and hence into a "Clojure" data type).


I guess I could add some CLOB tests to c.j.j. and see whether the same result set closing problem exists there -- I'm just sort of assuming it does right now since the machinery is similar at the value-reading level in both libraries.


@seancorfield my only nagging concern is that I generally use some sort of coercion assistance to convert certain datatypes between Clojure and PG, e.g. times, maps and vectors <-> jsonb/json, etc. (you may recall numerous questions a while back when I was migrating some of that to next.jdbc...). Anyway, I am virtually certain that I did not dive into that code when I added my bytea column, and IIRC everything "just worked". It is possible that something in the coercion code is "assisting" with my bytea usage... I will go take a quick look.


That "coercion assistance" should be easy to migrate to next.jdbc -- happy to take a look and help whenever you get around to that.


@seancorfield you already helped! The first version is working... Anyway, long story short, for pre-next.jdbc work I used a (private fork) of this: This file is key: I just took a quick look and it's not obvious to me whether that lib is adding any support for my bytea usage... FYI only, I have been re-implementing that library for next.jdbc, and the datatypes I need seem to be working fine (time, clj <-> json(b), arrays, etc.). I haven't started on the PostGIS part yet; I need to understand better what I really want on both the Clojure side and the PostGIS/Postgres side before working on that part...


Yeah, there are a bunch of extensions of the ISQLValue and ISQLParameter protocols in there from c.j.j.


getting my head wrapped around that took a while, and you answered a bunch of questions WRT all that re next.jdbc


So you'll need equivalent extensions done for those protocols for that library in order to migrate completely.
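As a sketch of what those equivalent extensions can look like in next.jdbc, here's the common jsonb pattern using the SettableParameter and ReadableColumn protocols (assumes the PostgreSQL JDBC driver and clojure.data.json are on the classpath; this is an illustration of the protocol shape, not the library's code):

```clojure
(require '[next.jdbc.prepare :as prepare]
         '[next.jdbc.result-set :as rs]
         '[clojure.data.json :as json])
(import '(org.postgresql.util PGobject))

;; write side: Clojure maps become jsonb parameters
(extend-protocol prepare/SettableParameter
  clojure.lang.IPersistentMap
  (set-parameter [m ^java.sql.PreparedStatement ps ^long i]
    (.setObject ps i (doto (PGobject.)
                       (.setType "jsonb")
                       (.setValue (json/write-str m))))))

;; read side: jsonb/json columns come back as Clojure data
(defn- pgobject->clj [^PGobject v]
  (if (#{"jsonb" "json"} (.getType v))
    (json/read-str (.getValue v) :key-fn keyword)
    (.getValue v)))

(extend-protocol rs/ReadableColumn
  PGobject
  (read-column-by-label [v _] (pgobject->clj v))
  (read-column-by-index [v _ _] (pgobject->clj v)))
```

The write side replaces ISQLValue/ISQLParameter extensions from c.j.j., and the read side replaces its result-set value conversion.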


yes, I've got that all working for the data types that I really need


I will try and "get busy" and move my bytea usage to next.jdbc, but to set expectations, it may be 2-3 weeks before I can do that. I have a bunch of other higher priority things going on ATM.


No problem. next.jdbc evolution is slow and gentle -- and purely accretive. It already does everything I need for work so this is mostly about making it easier for others to use for more exotic things.