#announcements
2019-03-27
metasoarous08:03:39

Greetings! I'm happy to announce an exciting new feature of Oz: live code reloading for Clojure. This idea was inspired by a talk on data science in Clojure by @aria42. In the talk, Aria mused about what it might be like to bridge the divide between REPL, editor, and notebook with a Figwheel-like hot-code reloading experience. This idea intrigued me, so I decided to take a stab at it. What came of it is a function, oz.core/live-reload!, which takes a filename and initiates a watch routine. When the file changes, Oz reruns it starting from the first changed code form in the file (ignoring whitespace). This simple strategy allows for a tight feedback loop, even with code that periodically takes a while to fetch or process data. This functionality (as well as a bunch of updates to Oz's core data visualization utilities) is available in [metasoarous/oz "1.6.0-alpha1"] (expect alpha2 out shortly as well, with the latest Vega updates). Please let me know what you think, and where you might see this being useful. https://github.com/metasoarous/oz
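Oz's actual implementation is in Clojure and works on Clojure code forms; the core idea, though, is just "find the first form that changed (ignoring whitespace) and rerun from there." Here is a minimal, language-neutral sketch of that comparison in Python, where forms are represented as strings and all names are hypothetical:

```python
def first_changed_index(old_forms, new_forms):
    """Return the index of the first form that differs between two
    versions of a file, ignoring whitespace-only changes, or None if
    nothing changed. Rerunning from this index gives the tight
    feedback loop described above."""
    # Normalize whitespace so reformatting alone doesn't trigger reruns.
    norm = lambda s: " ".join(s.split())
    for i, (old, new) in enumerate(zip(old_forms, new_forms)):
        if norm(old) != norm(new):
            return i
    # Forms were added or removed at the end: rerun from the first extra one.
    if len(old_forms) != len(new_forms):
        return min(len(old_forms), len(new_forms))
    return None
```

For example, reindenting the first form while editing the second would rerun only from index 1, so an expensive data-fetching form at the top of the file is not re-evaluated.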

metasoarous08:03:56

If you're interested in seeing a brief demonstration of this functionality, please take a look at the short screencast I put together: https://www.youtube.com/watch?v=yUTxm29fjT4

metasoarous17:03:40

Quick update: [metasoarous/oz "1.6.0-alpha2"] is now out, with the latest Vega libraries, as promised. Hope you enjoy. Thanks!

luposlip20:03:38

Simple library for using huge JSON database files (`.ndjson`) as a read-only database with next to no memory impact: https://github.com/luposlip/ndjson-db

borkdude10:03:02

I wonder if I can use this as a replacement for writing and reading EDN files, since I’m having performance issues with big ones.

borkdude10:03:26

Will this still read and deserialize the entire JSON file each time you do a query?

borkdude10:03:33

Ah, so it will index the start and end of each json object and only read that when you query. Clever.

borkdude10:03:57

So it’s optimized for reads.

luposlip10:03:31

Exactly. But it could easily be extended to support lots of other file formats. You're welcome to submit PRs if you want; otherwise I'll probably do it myself some weekend soon 🙂

luposlip10:03:23

It's really good for pipelining stuff. I personally use it together with JSON Schema validation via this small wrapper library: https://github.com/luposlip/json-schema

borkdude10:03:55

Cool stuff.

luposlip10:03:36

I'll take a look at it later!

borkdude10:03:06

Thanks. You obviously have thought about this already, so your input is valued highly.

luposlip22:03:45

Will get back to it tomorrow, it's been a crazy day ;)

borkdude23:03:42

No problem.

borkdude07:03:55

I’ve decided to solve my problem differently, so I don’t need big JSON files anymore.