#sql
2019-07-06
gleisonsilva 12:07:53

@seancorfield Nice! I'll investigate the custom row builder option, but maybe I'll just go with that snippet you showed above... the thing is, I have no control over the queries... once they are run against the database, the results need to be wrapped in an Avro message... so I'm using the ResultSet to get the ResultSetMetaData, so that I can get the field type name, precision, etc...

gleisonsilva 12:07:22

And later, when reading those messages, be able to tell what the "real" database type was, to 'materialize' them inside another database...
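
The metadata extraction being described might look something like this (a minimal JDBC interop sketch; the map keys are illustrative):

```clojure
;; a minimal interop sketch: pull name/type/precision for every column
;; from ResultSetMetaData; the map keys here are illustrative
(defn column-meta
  [^java.sql.ResultSet rs]
  (let [rsmeta (.getMetaData rs)]
    (vec
     (for [i (range 1 (inc (.getColumnCount rsmeta)))] ; JDBC columns are 1-based
       {:name      (.getColumnLabel rsmeta (int i))
        :type-name (.getColumnTypeName rsmeta (int i)) ; the "real" database type
        :precision (.getPrecision rsmeta (int i))
        :scale     (.getScale rsmeta (int i))}))))
```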

seancorfield 16:07:07

@gleisonsilva I bet you could create a row builder that produced Avro messages directly.
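
A hedged sketch of what that could look like with next.jdbc's RowBuilder/ResultSetBuilder protocols; the Avro encoding is stood in for by a plain map of value plus type name, and all names here are illustrative:

```clojure
;; a sketch against next.jdbc's builder protocols -- the Avro step is
;; faked with a plain map; swap in your Avro library's record builder
(require '[next.jdbc :as jdbc]
         '[next.jdbc.result-set :as nrs])

(defn avro-ish-builder
  "Builder-fn capturing each column's value plus its database type name."
  [^java.sql.ResultSet rs opts]
  (let [rsmeta (.getMetaData rs)
        n      (.getColumnCount rsmeta)]
    (reify
      nrs/RowBuilder
      (->row [_] (transient {}))
      (column-count [_] n)
      (with-column [_ row i]
        (assoc! row
                (keyword (.getColumnLabel rsmeta (int i)))
                {:value     (.getObject rs (int i))
                 :type-name (.getColumnTypeName rsmeta (int i))}))
      (row! [_ row] (persistent! row))
      nrs/ResultSetBuilder
      (->rs [_] (transient []))
      (with-row [_ acc row] (conj! acc row))
      (rs! [_ acc] (persistent! acc)))))

;; usage (ds is a next.jdbc datasource):
;; (jdbc/execute! ds ["select * from some_table"] {:builder-fn avro-ish-builder})
```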

dmaiocchi 16:07:16

@seancorfield so imagine I have the driver on my Linux filesystem

dmaiocchi 16:07:12

What should I do to use that from next.jdbc? I'm completely a noob with this 😁

seancorfield 16:07:49

As a JAR file? It's a local dependency.

dmaiocchi 16:07:58

Yep, it is a JAR file

dmaiocchi 16:07:45

So I should just add it via deps.edn?

dmaiocchi 16:07:38

Searching for the lein way of doing it...

dmaiocchi 16:07:54

Thx @seancorfield, I will try it out!

seancorfield 17:07:26

In deps.edn you just add the full path of the JAR file to the :paths vector. Not sure how you do it in project.clj.
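
Concretely, that might look like this (a sketch; the driver path is hypothetical):

```clojure
;; deps.edn -- the JAR path below is made up; point it at your driver
{:paths ["src" "resources"
         "/home/me/drivers/some-jdbc-driver.jar"]}
```

A `:local/root` coordinate should also work in tools.deps, e.g. `{:deps {driver/driver {:local/root "/home/me/drivers/some-jdbc-driver.jar"}}}`.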

thiru 22:07:26

I'm loading about 13 million records (3 columns) into memory via clojure/java.jdbc. I intend to keep this data in memory and incrementally poll the database for new records. The initial load takes a few minutes. Is there anything you guys would suggest to make this faster/more efficient? I'm not doing anything special really: I'm using :row-fn and wrapping the final result like (vec rows)
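
One possible angle (a sketch, assuming clojure/java.jdbc's reducible-query; the db-spec, table, and column names are made up): skip building a lazy seq entirely and reduce straight into a vector, with a larger :fetch-size so the driver streams bigger chunks:

```clojure
;; a sketch with clojure.java.jdbc's reducible-query; table and column
;; names are illustrative
(require '[clojure.java.jdbc :as j])

(defn load-all [db-spec]
  (into []
        (map (juxt :col_a :col_b :col_c)) ; keep just the three columns
        (j/reducible-query db-spec
                           ["select col_a, col_b, col_c from big_table"]
                           ;; note: on PostgreSQL, :fetch-size only streams
                           ;; when auto-commit is off
                           {:fetch-size 10000})))
```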