#off-topic
2020-07-25
Drew Verlee01:07:45

What's everyone's favorite way to define a usable grammar that can be used to transform something into valid Clojure data structures?

cjsauer13:07:40

Instaparse. I’m using it to define a constraints DSL and it’s been a joy to work with.
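A minimal sketch of what that can look like with Instaparse (the grammar and input here are made up for illustration, not from cjsauer's actual DSL): `insta/parser` builds a parser from an EBNF string, and `insta/transform` walks the resulting tree into plain Clojure data.

```clojure
(require '[instaparse.core :as insta])

;; Hypothetical grammar: parse simple key=value pairs.
;; <...> hides a terminal from the output tree.
(def kv-parser
  (insta/parser
    "pairs = pair (<','> pair)*
     pair  = key <'='> value
     key   = #'[a-z]+'
     value = #'[0-9]+'"))

(kv-parser "a=1,b=22")
;; => [:pairs [:pair [:key "a"] [:value "1"]]
;;            [:pair [:key "b"] [:value "22"]]]

;; insta/transform maps each tag to a function, turning the
;; parse tree into ordinary Clojure data structures.
(insta/transform
  {:key   keyword
   :value #(Long/parseLong %)
   :pair  vector
   :pairs (fn [& ps] (into {} ps))}
  (kv-parser "a=1,b=22"))
;; => {:a 1, :b 22}
```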

sova-soars-the-sora14:07:09

3 cheers for instaparse!

sova-soars-the-sora14:07:44

thank your lucky stars you were born in the age of instaparse!

p-himik14:07:55

I use ANTLR.

sova-soars-the-sora16:07:08

uu, looks like antlr has a built-in gui for the parsed tree. that's helpful. you can get a fast instance of instaparse to play around with on instaparse live http://instaparse-live.matt.is/#/-LDIkdZFcGNe46VJtmJA

Drew Verlee17:07:57

That's nifty!

Drew Verlee01:07:06

Instaparse is the one lib I'm familiar with

dpsutton01:07:49

Worked somewhere where spec was used quite heavily for EDI parsing :)

Drew Verlee12:07:06

I was under the impression clojure spec wasn't designed to handle string-to-Clojure conversions, though it can be used for it. Every time it was brought up, I recall it being discouraged.
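For context on why that's discouraged: spec's regex ops (`s/cat`, `s/*`, `s/alt`, …) parse *sequences* of Clojure values, not raw strings, so a string has to be tokenized before spec can conform it. A hedged sketch (the EDI-ish segment names and delimiter are made up for illustration):

```clojure
(require '[clojure.spec.alpha :as s]
         '[clojure.string :as str])

;; spec parses sequences, so the string must be split first.
;; A "segment" is a tag followed by any number of string fields.
(s/def ::segment (s/cat :tag #{"NAD" "DTM"} :fields (s/* string?)))

(s/conform ::segment ["NAD" "BY" "12345"])
;; => {:tag "NAD", :fields ["BY" "12345"]}

;; Tokenize a raw string, then conform the resulting sequence.
(s/conform ::segment (str/split "DTM+137+202007" #"\+"))
;; => {:tag "DTM", :fields ["137" "202007"]}
```

So spec works fine once you have tokens, but the tokenizing itself is outside its job description, which is where a parser library like Instaparse comes in.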

p-himik15:07:38

A PostgreSQL question. Does PREPARE ignore [custom] implicit casts? Because I keep getting "operator does not exist", but only for PREPARE. Regular statements work just fine.

sova-soars-the-sora16:07:42

according to the docs, https://www.postgresql.org/docs/9.3/sql-prepare.html, prepare ought to be able to take any data_type you specify, or it will try to infer it from context... but does that include custom types? i don't know enough about postgresql to tell you if this documentation is lacking.

p-himik19:07:16

I pass character varying as the data type, and later use the argument in WHERE to compare it with a column of my custom enum type. And it fails. Well, one way to fix it would be to avoid implicit casts altogether.
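For anyone hitting the same thing, two ways around it (a hedged sketch; the table, column, and enum names are made up for illustration): cast the parameter explicitly inside the statement, or declare the parameter as the enum type itself so no varchar-to-enum conversion is needed.

```sql
-- Hypothetical schema: users(status user_status), user_status is a custom enum.

-- Option 1: keep the varchar parameter, cast explicitly in the statement.
PREPARE find_users (varchar) AS
  SELECT * FROM users WHERE status = $1::user_status;

-- Option 2: declare the parameter as the enum type directly.
PREPARE find_users2 (user_status) AS
  SELECT * FROM users WHERE status = $1;

EXECUTE find_users('active');
```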

ec17:07:40

I'm reading data from Kafka (max 1-2 KB per message) and writing to Cassandra (on Docker), but my write/read latency & throughput feel too low. Testing on my XPS with 16 GB RAM and an M.2 SSD, disk write peaks at 20 MB/s. What might be the problem? Is this normal for my hardware?

borkdude17:07:07

Is Linux the host OS?

bartuka18:07:42

I used Cassandra in 2017, can't remember all the details, but Cassandra had some shenanigans while writing your data, e.g. appending to the commit log, keeping a memtable, flushing them, etc. And also your consistency level strategy for writes/reads has a huge impact on performance... succeeding on ANY node vs. ALL nodes is very, very different.

ec18:07:53

@borkdude yep, it's Linux (Debian 20.04)

ec18:07:55

it's actually ScyllaDB, which implements the Cassandra API and claims to not need those shenanigans. (I said Cassandra since it's more popular)