practicalli-johnny15:04:37
user level aliases for Clojure CLI - library version updates and a few removals
• Removed :deps from configuration to avoid overriding the version from the Clojure CLI install
• Removed :inspect/rebl (alias is commented) after deprecating it 6 months ago
• GitHub action .github/workflows/lint-with-clj-kondo.yml updated to clj-kondo version 2022.04.08
• Updated library versions using the clojure -T:search/outdated command (see for details)

practicalli 4
🎉 2

Introducing Charred - fast JSON/CSV encode and decode. This library finalizes my research into CSV and JSON parsing and is a complete drop-in replacement for and . Same API, much better (5-10x) performance. This library gets as good performance for those tasks as anything on the JVM and avoids the Jackson hairball entirely. You can find my previous post on fast CSV parsing for the reasons why the system is fast, or just read the source code; all the files are pretty short. I moved the code from dtype-next into a stand-alone library and added encoding (writing) to the mix so you don't need any other dependencies. Finally, this library passes the same conformance suite as the libraries it replaces, so you can feel at least somewhat confident it will handle your data with respect. Enjoy :-)

wow 29
🎉 42
💯 21
⏱️ 4
clojure-spin 11
gratitude 4
❤️ 6
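For context, basic Charred usage looks roughly like this; a minimal sketch assuming charred.api is on the classpath, with option names as used later in this thread:

```clojure
(require '[charred.api :as charred])

;; JSON: string input is parsed directly; :key-fn keywordizes map keys.
(charred/read-json "{\"a\": 1, \"b\": [1, 2]}" :key-fn keyword)

;; CSV: string input is parsed as CSV data, returning rows (header included).
(charred/read-csv "name,count\nfoo,1\nbar,2")
```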

I'm curious why org.clojure/tools.logging {:mvn/version "1.2.4"} is a dependency of it?


Because it uses an offline thread to do blocking reads and sometimes that thread may log depending on the situation.


I love tools.logging, btw. Far and away the best logging framework IMO. The whole zero dependencies thing is extremely helpful when it comes to logging systems.

👍 4

I and some other people are looking to extend the tools.logging approach to more libraries like JSON and http clients here: Feedback is welcome on that!

☝️ 6
❤️ 3

Do you happen to have benchmarks, especially against Jackson-based libraries? (I know you've discussed your findings with the community, but would be nice to have results in one place.)


I see. So was Charred extracted from the larger dtype-next library?


Yes - and I added writing the formats efficiently. I thought it would be more palatable to many people if the library was small and exact and had more minimal deps. dtype-next is specifically targeted towards HPC and thus has somewhat more dependencies, many of them unrelated to reading or writing CSV and JSON data.

👍 2

I guess additionally I feel like charred is a good library to learn techniques from as it is precisely targeted. dtype is a bit of a battleship.


on the name.


I am unreasonably excited about this.


It's going to be great to get rid of Jackson dependencies for my etl pipelines. Thanks so much Chris.


@U1S4MH05T - That is great!! The fastest pathway for parsing is to create a with the options you want and call that. It turns out that for parsing small JSON blobs, simply mapping the options map to a parser is a significant portion of the parse time. That function is safe to use in a multithreaded context, so you probably only ever need to create one.
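That pattern might look like the sketch below; parse-json-fn is the options-to-parser constructor in charred.api (the function name comes from Charred's API, not from this message):

```clojure
(require '[charred.api :as charred])

;; Map the options to a concrete parser once, up front...
(def parse-json (charred/parse-json-fn {:key-fn keyword}))

;; ...then reuse it; the returned function is safe across threads.
(parse-json "{\"status\": \"ok\", \"count\": 3}")
```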


That's terrific! We're also parsing a LOT of json in our normal database and message queue usage of the Google cloud api, so this will give us a great lift in performance there too.


This is an awesome library! FWIW, a way to be more flexible w.r.t. logging is to be able to pass a log function as option. I so like this approach that I could not stop sharing it 😄

metal 2

That is a great point. Then I could say it has zero dependencies 🙂.

❤️ 3
🙏 2
🌈 1

This looks fantastic — thanks for making it!


Looks a little better than twice as fast as on a couple of 30-60k-row, 3-column CSVs I am inhaling. 🏎️


Try :async? false


And a smaller buffer size


:bufsize 8192


The test CSV was 1.7GB


So the system is tuned for larger files


Also - use the supplier interface that avoids creation of persistent vectors
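That supplier path might be sketched as follows; read-csv-supplier is part of Charred's public API, but the options-map shape and the closeable-Supplier iteration here are assumptions, not taken from this thread:

```clojure
(require '[charred.api :as charred]
         '[clojure.java.io :as io])

;; read-csv-supplier (assumed signature: input + options map) returns a
;; java.util.function.Supplier of rows that avoids building a persistent
;; vector per row; nil signals end of data.
(with-open [rows (charred/read-csv-supplier (io/reader "big.csv")
                                            {:async? false :bufsize 8192})]
  (loop [row (.get rows) n 0]
    (if row
      (recur (.get rows) (inc n))
      n)))
```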


I guess at those sizes you can just load things into memory in an offline thread pool and just parse the actual strings.


Thx! Don't mind me, I am just an applications guy. I thought 2x was great! 🙂 Now I have maybe 100x faster. For one, I started with:

(with-open [reader (io/reader "subtlex-lite.csv")]
  (charred/read-csv reader))
This is the 100x improvement:
(charred/read-csv (io/reader "subtlex-lite.csv")
                  :async? false
                  :bufsize 8192)
Not having luck finding a replacement for this in the doc:
(json/parse-stream (io/reader "wordnet.json") true)
I stole this from the test suite:
(let [input (io/reader "wordnet.json")]
  (charred/read-json input :key-fn keyword))
Gives me:
Execution error at charred.JSONReader/readObject (
JSON parse error - unrecognized 'null' entry.
Anyway, 100x, not too shabby! 👏

awesome 2
🎉 1

100X is amazing actually - That error means something started with 'n' and didn't finish with 'ull' - I really should print more context there. Is that json file accessible publicly?


Wordnet: The so-called synset IDs are indeed strings like "n.123456". The 100x came on . I believe I grabbed the 75k Excel 2007 version, then deleted all but a few words in OS X Numbers, then exported as CSV.


Actually, I recall I got my json version from here:


That ^^ conveniently knits everything together in one JSON file.


Perhaps that is JSON5. The JSON spec I targeted requires all strings including map keys to be quoted. I will however check it out and get it to work - thanks for the heads up :-).


Maybe I am mis-describing the data. Here is the start via less in a terminal:

  "synset": {
    "a1000283": {
      "offset": 1000283,
      "pos": "s",
      "word": [
      "pointer": [
          "symbol": "\u0026",
          "synset": "a999867",
          "source": -1,
          "target": -1

Ben Sless03:04:49

Now the obvious question - EDN reader next?

❤️ 3

@UK0810AQ2 I thought about edn but I don't know anyone who reads/writes edn at scale. I know plenty of people who process JSON at scale and large CSV files are all over the place.

Ben Sless12:04:43

@UDRJMEFSN besides all of us reading and compiling Clojure every day. Faster compiler and tools?


@U0PUGPSFR - Wordnet found 2 issues related to reading data right at the edge of one buffer leading to the next. New version - 1.001 🙂. Great testcase - lots of escaped data and quite large.

🦾 2

@UK0810AQ2 - OK now you are talking but that is a bit more involved than faster edn/lisp readers. That involves looking at the entire architecture of the Clojure compiler itself as reading the source code is like 1 step out of 5.


I am sure Rich would love it if I forked Clojure and started making aggressive changes 🙂.

👍 1

If I were going to speed things up I would speed up the time it takes to compile dtype-next and core.async. core.async, once required, is a 3+ second hit, I think due to the compilation of macros and such. dtype-next is a 1.5 second hit at least, with the dataset library doubling it - faster require times for those would be beneficial to me and to new people comparing the system against pandas and dplyr. So I think that would involve looking at the macro execution pathway and seeing if I can find some gain there. I guess it would start with the edn and lisp readers, however.

Ben Sless15:04:45

@UDRJMEFSN did you by any chance profile this?


I have only roughly profiled it. The document on goes into what I found. Going from dtype v2 to v3 halved the require time, and I figured this stuff out by starting a blank REPL and then timing the require statement.


I have not (yet) figured out a way to profile this stuff meaningfully aside from starting a jar with a main function that does a dynamic require. That is fairly tedious but doable, as VisualVM allows you to profile an executable from startup.

Ben Sless15:04:21

Doesn't have to be a jar, you can even pass the flag to clj and eval the require expression


Oh rly? I hadn't considered that. That is very useful.

Ben Sless15:04:47

clj -J-(visual VM flags) -e (require ...)


I wonder then if visualvm is the best option. We are getting into where I would just want a data file produced and a tool to look at it. Do you have a suggestion for a tool for that type of profiling?

Ben Sless15:04:40

JFR, attach to JVM at startup, capture recording to file, analyze and share freely

Ben Sless15:04:34

-XX:+UnlockCommercialFeatures -XX:+FlightRecorder -XX:StartFlightRecording=duration=60s,filename=myrecording.jfr


got it- you are extremely helpful - thank you!

Ben Sless16:04:39

clojure -J-XX:+FlightRecorder -J-XX:StartFlightRecording=duration=20s,filename=myrecording.jfr -Sdeps '{:deps {org.clojure/core.async      {:mvn/version "1.3.618"}}}' -M -e "(require '[clojure.core.async])"
Glad I could help 🙂


"I am sure Rich would love it if I forked Clojure and started making aggressive changes 🙂." Great, @UDRJMEFSN! I'll add the OOP module! GDR&H

😆 3

@U0C8489U6 - Now zero deps for real 🙂. Thanks for the idea, log-fn is totally fine for this use case.

❤️ 1
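The thread above suggests the zero-dep hook is an option named :log-fn; assuming that name and a [level msg] arity (neither verified against the released API), usage might look like:

```clojure
(require '[charred.api :as charred]
         '[clojure.java.io :as io]
         '[clojure.tools.logging :as log])

;; Route Charred's internal messages to whatever logging you already use.
;; :log-fn and its argument shape are assumptions drawn from this thread.
(charred/read-json (io/reader "data.json")
                   :key-fn keyword
                   :log-fn (fn [level msg] (log/log level msg)))
```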

Already integrated into our data pipelines. 🙂 And just upgraded to the zero dep version. 😄


Fortune favors the bold 🙂.

Ben Sless05:04:08

@UDRJMEFSN any chance for readers for arbitrary data sources? Especially byte arrays and input streams? Currently all parsing is done in terms of chars and strings, any reason not to use bytes?

Ben Sless05:04:07

Would you like me to open an issue for that?


@UK0810AQ2 - If you pass in something that is not a string or a char[], the system attempts to make a reader out of it. This allows it to parse input streams and such. Is that what you are looking for? So for instance for the large wordnet.json Kenny used earlier you just do (read-json (io/reader "wordnet.json") :async? true) and off you go. More abstractly, you could create a CharReader from anything that can supply a sequence of character arrays.


Put another way anything that can turn into a reader is fair game.


One thing I think is an issue is a file that starts with the Unicode BOM. All the systems I have seen have specific handling of that, and I have none, so I imagine those files will fail.

Ben Sless13:04:03

Since you're using a buffer I guess the parser is still shuffling some data around. I thought that parsing bytes directly could help avoid casting and copying


If you knew you were only going to parse files of a particular encoding then perhaps, but my guess is the actual decoding of the byte information into characters is a very minor part of the overall time. The :async? flag allows you to move that conversion to an offline thread, and that helps with larger files. You could use a stream of bytes and an encoder, and this is how it's done. What makes the parsers fast, however, is implementing things like parsing strings or numbers into . My guess is that doing the decode from bytes to chars in those loops would not be an overall win, as you would increase the code size in each loop. I could definitely be wrong on this, however.


Reading from bytes has the strong advantage of fitting twice as much data into the cache and in this realm that could actually be nearly a 2x gain.


Potentially you could write the parser to work from byte data and just encode chars that are uninteresting to the parser as a special byte value in a lot of cases. I don't know, you would want to make sure there was a potential win there before committing real work to trying it out.

chrisn15:04:55
has some very interesting information in the comments, btw.

❤️ 1
👀 1

Typed Clojure 1.0.27 - Check your programs without depending on typedclojure! Also includes fixes to malli->type translation and other improvements.

😮 13
🆒 10
🎉 12
🏆 5
clojure-spin 5

Here's a fleshed out tutorial on how to type check a Clojure library without introducing a runtime dependency on Typed Clojure