This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2020-11-10
Channels
- # announcements (4)
- # asami (3)
- # babashka (49)
- # beginners (56)
- # chlorine-clover (42)
- # cider (13)
- # clara (3)
- # cljfx (14)
- # clojure (65)
- # clojure-australia (2)
- # clojure-dev (12)
- # clojure-europe (57)
- # clojure-italy (10)
- # clojure-nl (3)
- # clojure-spec (25)
- # clojure-uk (25)
- # clojuredesign-podcast (11)
- # clojurescript (78)
- # code-reviews (16)
- # community-development (3)
- # cursive (14)
- # datomic (16)
- # depstar (20)
- # emacs (3)
- # figwheel-main (2)
- # fulcro (33)
- # helix (16)
- # jackdaw (15)
- # kaocha (13)
- # leiningen (3)
- # malli (33)
- # reveal (10)
- # shadow-cljs (29)
- # spacemacs (10)
- # sql (13)
morning
Morning
'Mornin all
Good morning ("bore da", in Welsh)
…and a good morning to you too! :flag-wales:
måning
anyone had a good or bad experience with AWS Athena?
both: overall it’s a super useful and easy-to-use service. Occasionally it has latency issues
as in: queries stay in the "starting" state, and AFAIK there is little you can do about it. Happened to me just once
cool, thanks
did you convert your data to Parquet before dumping to S3?
for reasons other than performance as well, e.g. handling of multiline strings
if you need it just for performance and the CSV/JSON SerDe works fine for you, there is an option to do the conversion within Athena as well
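(Editor's note: the in-Athena conversion mentioned above is typically done with a CTAS, i.e. CREATE TABLE AS SELECT, query. A minimal sketch, with placeholder table and bucket names, not the poster's actual setup:)

```sql
-- Rewrite an existing CSV/JSON-backed table as Parquet via CTAS.
-- telemetry_raw, telemetry_parquet, and the S3 path are hypothetical names.
CREATE TABLE telemetry_parquet
WITH (
  format = 'PARQUET',
  parquet_compression = 'SNAPPY',
  external_location = 's3://my-bucket/telemetry-parquet/'
) AS
SELECT *
FROM telemetry_raw;
```

Athena writes the Parquet files to the given S3 location and registers the new table, so subsequent queries scan the columnar data instead of the raw CSV/JSON.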
We had a few queries that just broke it (a NullPointerException or something), and then we had to wait for AWS support to tell us what was broken so we could stop doing that... but we kinda needed to do that.
@U0G2T8PDM was it getting expensive with plain CSV or JSON, or with Parquet?
I guess I'll try it out and see... I've got a Kafka topic with telemetry data - it looks easy enough to dump that to Parquet on S3 with Kafka Connect, and if that turns out to lead to criminally expensive queries then I'll dump it to CSV and load it into Redshift instead.
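(Editor's note: dumping a topic to Parquet on S3 with Kafka Connect is usually configured through the Confluent S3 sink connector. A minimal sketch, with placeholder topic, bucket, and region names, not the poster's actual config:)

```json
{
  "name": "telemetry-s3-sink",
  "config": {
    "connector.class": "io.confluent.connect.s3.S3SinkConnector",
    "topics": "telemetry",
    "s3.bucket.name": "my-bucket",
    "s3.region": "eu-west-1",
    "storage.class": "io.confluent.connect.s3.storage.S3Storage",
    "format.class": "io.confluent.connect.s3.format.parquet.ParquetFormat",
    "partitioner.class": "io.confluent.connect.storage.partitioner.TimeBasedPartitioner",
    "partition.duration.ms": "3600000",
    "path.format": "'dt'=YYYY-MM-dd/'hour'=HH",
    "locale": "en-GB",
    "timezone": "UTC",
    "flush.size": "1000"
  }
}
```

The time-based partitioner writes hourly dt=/hour= prefixes, which line up with Athena partition projection and keep per-query scan costs down.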