#announcements
2021-08-15
seancorfield 18:08:11

HoneySQL "2.0 Gold" -- com.github.seancorfield/honeysql {:mvn/version "2.0.783"} is available -- SQL as Clojure data structures. Build queries programmatically - even at runtime - without having to bash strings together! -- https://cljdoc.org/d/com.github.seancorfield/honeysql/2.0.783/doc/readme
• Uses different coordinates and namespaces to 1.0.x so that you can use both together and migrate on a per-query basis!
• Completely rewritten to make user-level extension much easier and to fully support PostgreSQL without needing additional libraries -- but still maintains compatibility with the data DSL from 1.0.x and most of the helpers from 1.0.x -- see https://cljdoc.org/d/com.github.seancorfield/honeysql/2.0.783/doc/differences-from-1-x for more details

šŸŽ‰ 104
šŸÆ 18
sheepy 12
catjam 9
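(For context, a minimal sketch of what "SQL as Clojure data structures" looks like in 2.x, using the honey.sql namespace from the new coordinates; the table and column names are illustrative:)

(require '[honey.sql :as sql])

;; a query is plain Clojure data, formatted to parameterized SQL
(sql/format {:select [:id :email]
             :from   [:users]
             :where  [:= :status "active"]})
;; => ["SELECT id, email FROM users WHERE status = ?" "active"]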
vemv 20:08:20

Kudos! "fully support PostgreSQL without needing additional libraries" catches my attention. Does this refer to the revamped extension API, or can I actually use vendor-specific syntax without any extension whatsoever? https://www.postgresql.org/docs/current/functions-array.html#ARRAY-OPERATORS-TABLE comes to mind as a would-be tricky thing

seancorfield 20:08:18

Because of Clojure's restrictions on symbols/keywords, for the @> and <@ operators, you need to define a var as an alias:

(def at> (keyword "@>")) ;; :@> is not a readable keyword literal
but you can just register that as an operator and use at> in the DSL.

šŸ‘Œ 3
seancorfield 20:08:30

The primary goal was to implement everything from the nilenso extension library out of the box, and to add more over time as folks need them.

bananadance 3
seancorfield 20:08:28

(sql/register-op! at>)
That's all that is needed to register a binary op for @>.
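(Put together, a sketch of the whole round trip -- a hedged example: the posts/tags names are illustrative and the exact SQL output is paraphrased:)

(require '[honey.sql :as sql])

(def at> (keyword "@>")) ;; alias, since :@> can't be written literally
(sql/register-op! at>)   ;; teach the DSL that at> is a binary operator

(sql/format {:select [:*]
             :from   [:posts]
             :where  [at> :tags [:array ["clojure"]]]})
;; produces something like:
;; ["SELECT * FROM posts WHERE tags @> ARRAY[?]" "clojure"]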

vemv 20:08:16

Thanks! Looking forward to giving it a spin. Pleasantly surprised by how simple it is.

āž• 9
tony.kay 22:08:54

I've done a bit of work on the datomic-cloud-backup library https://github.com/fulcrologic/datomic-cloud-backup and am up to version 0.0.5. The new version adds support for writing backups to a local filesystem, a more general-purpose backup-segment! function, and a parallelized backup function for doing the initial backup of large databases in less time. My preliminary tests show a backup speed in the Cloud (writing to S3 in an alternate region) of at least 40k transactions per minute. I expect that to get faster as the I/O subsystems scale up under demand.

šŸ‘ 15
tony.kay 04:08:50

Hm. The parallel stuff seems to push Cloud over the DynamoDB throughput limit, and it fails. Looks like there needs to be some kind of back-off when that exception happens.

steveb8n 06:08:07

Or a read-speed rate limiter to keep it under the limit. That might be better for exporting from prod databases, so that normal operations are less at risk.
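(A generic sketch of the back-off idea -- nothing here is datomic-cloud-backup API; throttled? is a caller-supplied predicate and the whole thing is illustrative:)

(defn with-backoff
  "Calls f, retrying with exponentially growing sleeps whenever
   (throttled? exception) is true, up to max-retries attempts."
  [f {:keys [max-retries base-ms throttled?]
      :or   {max-retries 5 base-ms 200}}]
  (loop [attempt 0]
    (let [result (try
                   {:ok (f)}
                   (catch Exception e
                     (if (and (throttled? e) (< attempt max-retries))
                       {:retry e}
                       (throw e))))]
      (if (contains? result :retry)
        (do (Thread/sleep (* base-ms (bit-shift-left 1 attempt))) ;; 200, 400, 800, ...
            (recur (inc attempt)))
        (:ok result)))))

Wrapped around each segment write, this reacts to throttling after the fact; steveb8n's rate-limiter suggestion would instead smooth the read load up front, which is gentler on a live prod database.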