This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2022-05-03
Channels
- # announcements (21)
- # aws (6)
- # babashka (28)
- # beginners (39)
- # biff (1)
- # calva (23)
- # cider (5)
- # clj-kondo (108)
- # clojure (11)
- # clojure-europe (17)
- # clojure-nl (2)
- # clojure-nlp (10)
- # clojure-uk (8)
- # clojurescript (29)
- # community-development (4)
- # conjure (20)
- # css (3)
- # datalevin (9)
- # datomic (3)
- # events (2)
- # figwheel-main (11)
- # fulcro (36)
- # honeysql (7)
- # humbleui (4)
- # interceptors (4)
- # introduce-yourself (3)
- # jobs (1)
- # lsp (51)
- # malli (1)
- # meander (71)
- # minecraft (8)
- # other-languages (18)
- # pathom (15)
- # polylith (25)
- # portal (10)
- # re-frame (5)
- # reitit (15)
- # releases (1)
- # remote-jobs (1)
- # shadow-cljs (11)
- # tools-deps (27)
vim-iced plugin for neil: https://twitter.com/uochan/status/1521265752506245120
One question for you all: I have enabled some things in babashka to be able to run fipp and puget from source. The biggest thing to add was `rrb-vector/catvec`, which adds some 350kb of binary size. What it does is concatenate vectors in O(log n) instead of O(n). I could easily "fake" that function by replacing it with `into`. The trade-off here is:
• do we want higher performance for fipp (pretty printing), or
• do we want to save 350kb of binary size?
The reason to add compatibility for fipp and puget is mainly for debugging purposes, e.g. having hashp work in babashka. I don't know many other libraries that need rrb-vector. Respond in thread with your thoughts please. 🧵
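The `into`-based fallback mentioned above could look something like this. This is a hypothetical sketch, not the actual patch; `fake-catvec` is an illustrative name:

```clojure
;; Hypothetical stand-in for clojure.core.rrb-vector/catvec.
;; catvec concatenates persistent vectors in O(log n); this naive
;; version copies elements with `into`, so it is O(n) in the size of
;; the appended vectors, but needs no extra library (and no extra
;; 350kb in the binary).
(defn fake-catvec
  ([] [])
  ([v] v)
  ([v & vs] (reduce into v vs)))

(fake-catvec [1 2] [3 4] [5])
;; => [1 2 3 4 5]
```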
Personally, I prefer performance over binary size. My usage is 100% local dev though, so I'm not sure how much this impacts other use-cases.
To start with, support for debugging sounds important to me w.r.t. babashka. But then... sorry if these are stupid questions: 1. How large do vectors need to be for the algorithmic complexity to have a visible impact? Visible -> because this is for pretty-printing, so it's only about staying interactive with the programmer, right? Is it... vectors in the megabyte range, for example? 2. Do people pretty-print megabyte-sized data structures a lot?
@U0359E1F02H 1) Yes, this would probably only be used during dev time by programmers
I'm just trying to see if I'm missing something, since I'm leaning towards saving space definitely, if it has negligible impact (=space for other bb features to come)
Not that you should put an unnecessary amount of time into this. But I bet anyone could whip up a benchmark in almost no time 🙂 Or maybe some other wizards will take the challenge and suggest a bit of code that generates large data structures to measure?
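A minimal sketch of such a benchmark, timing plain O(n) `into` concatenation at a few sizes (an assumption-laden illustration; with rrb-vector on the classpath you could `(require '[clojure.core.rrb-vector :as fv])` and time `fv/catvec` the same way for comparison):

```clojure
;; Time O(n) vector concatenation at increasing sizes, to get a
;; feel for when the copy cost would be noticeable in interactive
;; (pretty-printing) use. Timings will vary by machine.
(doseq [n [1e3 1e5 1e6]]
  (let [v (vec (range n))]
    (print "n =" (long n) " ")
    (time (count (into v v)))))
```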
The surface area of fipp with rrb vector is here: https://github.com/brandonbloom/fipp/blob/master/src/fipp/deque.cljc
If more libraries that would be useful to run in bb relied on rrb vector, the argument to "not fake" it would become stronger
True. And switching over to rrb is a change that you can still make in the future then? Or, if someone complains about performance. My opinion only, and I'm a very small voice in this, but I'd "save" on the binary size whenever you can, sort of like saving for the future.
> I'm not exactly sure why fipp uses rrb vector
I think it's because `fipp` starts with "f" for "fast", and Brandon Bloom, the author, is a language and data-structures geek.
I've always been tempted to use fipp because of that.
For the current question, I also think it's safe to start with a simulation (using `into`).
Alrighty, pushed. Let's see in #babashka-circleci-builds if the 350-ish kbs go away
“Programs must be written for people to read and debug, and only incidentally for machines to execute.” (updated with apologies to H. Abelson) I don't care about speed or size as long as I have hashp and other debug tools.
@U01BDT7622X Alright. Let's try without rrb first and we can always improve. Here is the full sequence of what I did to make hashp work in babashka: https://github.com/brandonbloom/fipp/issues/81#issuecomment-1113957919 with babashka from master.
Just converted the build tooling for the official Docker clojure images to use babashka instead of shell scripts. It was stupid easy, even taking care of its usage of specs. And now it's faster too! Just a mind-bogglingly great tool you've got here, @borkdude! 😁 (not on GitHub yet but will be shortly)
trying to use `csv/read-csv` but it says `Could not resolve symbol`, and it couldn't find the namespace either with `(ns foo (:require csv))`, so I'm not sure how the (multiple) code examples i found are working?
figured it out myself - the examples rely on being in the `user` namespace
easy once i saw that in the readme
i searched "babashka csv" and found a github link: https://github.com/babashka/babashka/blob/master/test/babashka/scripts/csv.bb and this one https://cljdoc.org/d/borkdude/babashka/0.0.88-2 was where i got the answer that got me past it, although possibly not the best way?
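For anyone hitting the same error: a more portable alternative to relying on the preset aliases in babashka's `user` namespace is to require the bundled CSV namespace explicitly. A small sketch (the namespace name `foo` and the alias `csv` are just local choices):

```clojure
;; An explicit require works from any namespace, unlike the preset
;; aliases that only exist in babashka's `user` namespace.
(ns foo
  (:require [clojure.data.csv :as csv]))

;; read-csv accepts a Reader or a String and returns a lazy seq
;; of row vectors.
(csv/read-csv "a,b\n1,2")
;; => (["a" "b"] ["1" "2"])
```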
yeah, i noticed that - it was #4 on my google search or something though, thus my click