2020-07-25
Channels
- # announcements (1)
- # babashka (3)
- # beginners (48)
- # calva (1)
- # clj-kondo (1)
- # cljs-dev (6)
- # clojure (29)
- # clojure-europe (15)
- # clojure-spec (1)
- # clojure-uk (8)
- # clojurescript (17)
- # conjure (23)
- # css (7)
- # cursive (16)
- # datascript (1)
- # emacs (4)
- # fulcro (32)
- # hoplon (3)
- # keechma (16)
- # leiningen (1)
- # luminus (1)
- # meander (11)
- # off-topic (18)
- # pathom (15)
- # re-frame (12)
- # reagent (12)
- # reitit (5)
- # reveal (5)
- # spacemacs (5)
- # xtdb (18)
What's everyone's favorite way to define a usable grammar that can be used to transform something into valid Clojure data structures?
Instaparse. I’m using it to define a constraints DSL and it’s been a joy to work with.
3 cheers for instaparse!
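A minimal sketch of that approach, with a grammar invented purely for illustration: insta/parser compiles a grammar string into a parser, and insta/transform rewrites the tagged parse tree into plain Clojure data.

```clojure
(require '[instaparse.core :as insta])

;; Tiny invented grammar for comma-separated integer pairs like "1:2, 3:4".
;; Angle brackets hide the separators from the parse tree.
(def pairs
  (insta/parser
    "pairs = pair (<#'\\s*,\\s*'> pair)*
     pair  = int <':'> int
     int   = #'\\d+'"))

;; Turn the tagged tree into ordinary vectors and numbers.
(insta/transform
  {:int   #(Long/parseLong %)
   :pair  vector
   :pairs (fn [& ps] (vec ps))}
  (pairs "1:2, 3:4"))
;; => [[1 2] [3 4]]
```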
Ooh, looks like ANTLR has a built-in GUI for the parsed tree. That's helpful. You can get a fast instance of Instaparse to play around with on Instaparse Live: http://instaparse-live.matt.is/#/-LDIkdZFcGNe46VJtmJA
That's nifty!
Instaparse is the one lib I'm familiar with.
I was under the impression clojure.spec wasn't designed to handle string-to-Clojure conversions, though it can be used for it. Every time it was brought up, I recall it being discouraged.
A PostgreSQL question. Does PREPARE ignore [custom] implicit casts? Because I keep getting "operator does not exist", but only for PREPARE. Regular statements work just fine.
According to the docs (https://www.postgresql.org/docs/9.3/sql-prepare.html), PREPARE ought to be able to take any data_type you specify, or it will try to infer it from context... but does that include custom types? I don't know enough about PostgreSQL to tell you if this documentation is lacking.
I pass character varying as the data type, and later use the argument in WHERE to compare it with a column of my custom enum type. And it fails.
Well, one way to fix it would be to avoid implicit casts altogether.
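A minimal sketch of that workaround using next.jdbc, where the enum type order_status, the table, and the connection details are all invented for illustration: casting the parameter explicitly in the SQL means the prepared statement never needs an implicit cast from character varying to the enum.

```clojure
(require '[next.jdbc :as jdbc])

;; Illustrative connection details.
(def ds (jdbc/get-datasource {:dbtype "postgresql" :dbname "app"}))

;; The explicit CAST converts the varchar parameter to the enum type
;; before the comparison, so no implicit cast is required.
(jdbc/execute! ds
  ["SELECT * FROM orders WHERE status = CAST(? AS order_status)"
   "shipped"])
```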
I'm reading data from Kafka (max 1-2 KB per message) and writing to Cassandra (in Docker), but my write/read latency and throughput feel too low. Testing on my XPS with 16 GB RAM and an M.2 SSD, disk writes peak at 20 MB/s. What might be the problem? Is this normal for my hardware?
I used Cassandra in 2017, so I can't remember all the details, but Cassandra does some work while writing your data, e.g. appending to the commit log, keeping a memtable, flushing them, etc. Your consistency level strategy for writes/reads also has a huge impact on performance... requiring success on ANY node versus ALL nodes is very, very different.
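To make the consistency-level point concrete, here is one way to set it per statement from Clojure via the DataStax Java driver (4.x). The contact point, datacenter name, and table are assumptions for a default local Docker setup.

```clojure
(import '(com.datastax.oss.driver.api.core CqlSession DefaultConsistencyLevel)
        '(com.datastax.oss.driver.api.core.cql SimpleStatement)
        '(java.net InetSocketAddress))

(with-open [session (-> (CqlSession/builder)
                        ;; assumed defaults for a local dockerized node
                        (.addContactPoint (InetSocketAddress. "127.0.0.1" 9042))
                        (.withLocalDatacenter "datacenter1")
                        (.build))]
  ;; ONE acknowledges after a single replica responds; ALL waits for
  ;; every replica, trading write latency for stronger durability.
  (.execute session
            (-> (SimpleStatement/newInstance
                  "INSERT INTO demo.events (id, payload) VALUES (?, ?)"
                  (into-array Object [(java.util.UUID/randomUUID) "hello"]))
                (.setConsistencyLevel DefaultConsistencyLevel/ONE))))
```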