#sql
2021-05-24
practicalli-johnny 20:05:24

Say I generate a vector of a large number of hash-maps in a Clojure application, with keys and values that map to a record in a database table. If I want to insert/update the whole data structure into a relational database table, is there a preferred next.jdbc function to look at? e.g. insert-multi!, or plan (or is plan just for results)? Or would it be more appropriate to iterate over the collection using clojure.core and call insert!? High performance is not an issue at this time, although it shouldn't blow up the heap 😉

seancorfield 20:05:45

The simplest way would be to just doseq over the vector and call insert! on each hash map — but it won’t be fast.

seancorfield 20:05:46

If you want a bit more speed, you could use insert-multi! but you'd have to transform the data a little: you can use (keys (first data)) to get the column names and then (mapv #(mapv % cols) data) to get the vector-of-vectors (the rows' values) in the same order as the column names.
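A minimal sketch of that transformation (the db-spec, data, and :analysis table name here are invented for illustration, not from this thread):

```clojure
(require '[next.jdbc :as jdbc]
         '[next.jdbc.sql :as sql])

;; hypothetical data shaped like the vector of hash-maps described above
(def data [{:id 1 :score 0.9}
           {:id 2 :score 0.7}])

(with-open [connection (jdbc/get-connection db-spec)]
  (let [cols (keys (first data))          ; column names from the first map
        rows (mapv #(mapv % cols) data)]  ; each row's values in that order
    (sql/insert-multi! connection :analysis cols rows)))
```

Note that maps act as lookup functions, so (mapv % cols) pulls each row's values out in the column order; select-keys would return maps rather than the row vectors insert-multi! expects.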

practicalli-johnny 20:05:04

Just seen for-insert-multi; would that be a possible approach too?

seancorfield 20:05:35

That’s just the SQL builder that insert-multi! uses.

practicalli-johnny 20:05:18

ah yes, that makes sense. Thank you Sean.

seancorfield 20:05:28

(some folks asked for the SQL builders to be exposed as public, documented functions so they can customize the output/use it outside the friendly SQL functions)
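For reference, a sketch of calling that builder directly (the table and columns here are made up; the argument order follows next.jdbc.sql.builder/for-insert-multi):

```clojure
(require '[next.jdbc.sql.builder :as builder])

;; returns the SQL string followed by its parameters, ready for execute!
(builder/for-insert-multi :analysis
                          [:id :score]
                          [[1 0.9] [2 0.7]]
                          {})
;; => something like
;; ["INSERT INTO analysis (id, score) VALUES (?, ?), (?, ?)" 1 0.9 2 0.7]
```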

practicalli-johnny 20:05:50

I see there could be some SQL size limit with a very large number of hash-maps when using insert-multi!, but yes, I see that it would be more efficient. It's most likely a batch-like job, so doseq should be fine. Thanks again.

dpsutton 20:05:59

And a bit of an understandable error that can happen: watch out for optional keys. Because of the (keys (first data)), any keys that aren't in the first map won't be picked up from later maps.
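A quick sketch of guarding against that (the data here is invented): derive the columns from all the maps rather than just the first, so optional keys still get a column.

```clojure
;; invented example: the second map has a key the first one lacks
(def data [{:a 1 :b 2}
           {:a 3 :c 4}])

;; collect the union of keys across every map
(def cols (vec (into #{} (mapcat keys) data)))

;; maps act as lookup functions, so keys missing from a map become nil
(def rows (mapv #(mapv % cols) data))
```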

practicalli-johnny 20:05:05

The hash-maps "should" all be the same, but a very good point. Thanks.

practicalli-johnny 21:05:32

I created a simple function with insert! inside a doseq:

(defn persist-all-subscription-results
  [subscription-results]
  (with-open [connection (jdbc/get-connection db-spec)]
    (doseq [result subscription-results]
      (sql/insert! connection :analysis result))))
This does insert the first hash-map into the database. If I read this correctly, it seems Postgres does not like the result and generates an error:
org.postgresql.util.PSQLException
   ERROR: syntax error at or near "-" Position: 30

    QueryExecutorImpl.java: 2553  org.postgresql.core.v3.QueryExecutorImpl/receiveErrorResponse
    QueryExecutorImpl.java: 2285  org.postgresql.core.v3.QueryExecutorImpl/processResults
    QueryExecutorImpl.java:  323  org.postgresql.core.v3.QueryExecutorImpl/execute
          PgStatement.java:  481  org.postgresql.jdbc.PgStatement/executeInternal
          PgStatement.java:  401  org.postgresql.jdbc.PgStatement/execute
Any pointers? Did I do something silly?

dpsutton 21:05:59

Shooting from the hip: you have kebab-case column names and need to convert them to snake_case.

seancorfield 21:05:50

@U05254DQM If you’re using the latest next.jdbc, there should be a next.jdbc/snake-kebab-opts that you can pass into insert! as a final argument to do that automatically.

practicalli-johnny 21:05:02

Oh yes, I should add snake-kebab-opts as another argument to insert!. I seem to have thought next.jdbc did it automatically, wooops...
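A sketch of the corrected function (same assumed db-spec and :analysis table as the snippet above); next.jdbc/snake-kebab-opts requires the camel-snake-kebab library on the classpath:

```clojure
(require '[next.jdbc :as jdbc]
         '[next.jdbc.sql :as sql])

(defn persist-all-subscription-results
  [subscription-results]
  (with-open [connection (jdbc/get-connection db-spec)]
    (doseq [result subscription-results]
      ;; snake-kebab-opts rewrites e.g. :user-id to the user_id column
      (sql/insert! connection :analysis result jdbc/snake-kebab-opts))))
```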

practicalli-johnny 22:05:22

Success. Thank you all.

seancorfield 22:05:11

I think c.j.j did too many automatic adjustments to column names both going in and out so next.jdbc defaults to doing nothing and leaves it up to users. If everything you are doing is with kebab-case <-> snake_case then you can use with-options to create a connectable that automatically applies those options to operations (caveat: with-transaction and get-connection return unadorned Java objects that need wrapping via with-options again in those contexts).
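A minimal sketch of that with-options pattern (db-spec, table, and data are invented for illustration), including the re-wrapping caveat for with-transaction:

```clojure
(require '[next.jdbc :as jdbc]
         '[next.jdbc.sql :as sql])

;; wrap the datasource once; every friendly SQL call through `ds`
;; then applies the kebab-case <-> snake_case conversions automatically
(def ds (jdbc/with-options (jdbc/get-datasource db-spec)
                           jdbc/snake-kebab-opts))

;; e.g. (sql/insert! ds :analysis {:user-id 1}) writes to the user_id column

;; per the caveat above: with-transaction hands you a bare Connection,
;; so re-wrap it before using the friendly functions inside the body
(jdbc/with-transaction [tx ds]
  (let [tx (jdbc/with-options tx jdbc/snake-kebab-opts)]
    (sql/insert! tx :analysis {:user-id 1})))
```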