Cam Saul 05:09:14

Methodical 0.13.2 is out! Methodical is a multimethod library that acts like a drop-in replacement for defmulti and defmethod, written entirely in Clojure. It supports building multimethods programmatically in a nondestructive/functional manner; methods with "partial-default" dispatch values like [:some-keyword :default]; CLOS-style :before, :after, and :around aux methods, and method combinations; easy next-method invocation; and ships with helpful debugging tools. 0.13.2 is a small bugfix release that fixes issues with nil dispatch values.
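A minimal sketch of the next-method feature mentioned above (the `describe` multimethod and its dispatch values are invented here for illustration; only `methodical.core`, `m/defmulti`, `m/defmethod`, and the `next-method` anaphor are from the library as announced):

```clojure
(require '[methodical.core :as m])

;; Illustrative only; `describe` is not part of Methodical.
(m/defmulti describe :type)

(m/defmethod describe :default
  [m]
  (str "some " (name (:type m))))

;; Easy next-method invocation, CLOS-style: the primary method for :cat
;; can call the next-most-specific method directly, with no
;; get-method gymnastics as in clojure.core multimethods.
(m/defmethod describe :cat
  [m]
  (str "a very good " (next-method m)))

(describe {:type :cat})
```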

🎉 14
lambda-prompt 2
Noah Bogart 13:09:49

this library is so friggin cool

šŸ™ 1
borkdude 12:09:16

clj-kondo, static analyzer and linter for Clojure code that sparks joy: 2022.09.08
• :config-in-call - see
• :config-in-tag - see
• Fix issue with namespace-name-mismatch on namespaces with + sign (and others)
• Fix issue with jar URI misreporting to file-analyzed-fn, bump babashka/fs to 0.1.11
• Cannot use NPM dependency namespaces beginning with "@" in consistent-alias linter
• don't crash on empty ns clauses: (require '[]) and (import '())
• Add support for sourcehut inferred git dep urls for the Clojure CLI
• Expose a tag function in clj-kondo.hooks-api
• Add support for :filename-pattern in :ns-group
• false positive type mismatch warning with hook
• lazy-seq should be coerced to a list node
• store overrides in cache and don't run them at runtime
Channel: #clj-kondo

clj-kondo 12
🎉 11

Carve: carve out the essentials of your Clojure app. Carve will search through your code for unused vars and will remove them. v0.2.0: misc. fixes and clj-kondo upgrade

🎉 11
🔪 4
clojure-spin 1

Article: A Significant clojure.test Enhancement Project announcement: Calling for early testers and feedback for a clojure.test enhancement: Dependency info:

🎉 10
💯 3
Alex Miller (Clojure team) 19:09:49

I have no memory of this ticket :) but interested in seeing a proposal!

Alex Miller (Clojure team) 19:09:41

Oh, this came from someone else, that's why I don't remember it, just forwarded it along


@U064X3EF3 Well, it was fun speculating about how you might have come up with it anyway 🙂

Alex Miller (Clojure team) 21:09:36

I do have a vaguely related clojure.test ticket that I did file though

Alex Miller (Clojure team) 21:09:50

vaguely related in that it's about getting better feedback on exceptions from tests


Introducing ham-fisted - a high performance core Clojure research project. For those of you who haven't seen it, I have released a project which provides some high performance base Clojure datastructures, such as persistent maps and persistent vectors, as well as some higher performance base primitives such as frequencies and group-by. I focus on many things and the library has a lot of ideas, some good and some that didn't pan out, and some public benchmarks. My hope is that this becomes a community driven project to improve the platform we are all standing on. Take a moment to check it out and take it for a test drive. I am always open to issues and PRs, and honestly this morning seeing two issues - which means people are actually working with the code - made my day. So don't hesitate to tell me what you really think 🙂. Enjoy!
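As a rough sketch of the intended usage (the namespace and function names below are my assumption based on the announcement's mention of frequencies and group-by replacements, not verified against the released API):

```clojure
;; Assumed names; check the ham-fisted README for the actual API.
(require '[ham-fisted.api :as hamf])

;; Higher-performance versions of the core primitives mentioned above,
;; intended as drop-in replacements returning Clojure-compatible maps:
(hamf/frequencies (repeatedly 10000 #(rand-int 16)))
(hamf/group-by odd? (range 1000))
```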

šŸ‘ 19
šŸŽ‰ 14
šŸ‘€ 4
šŸ”„ 6
Darrick Wiebe 19:09:54

How does this compare to the data structures in


They are definitely comparable - it would be interesting to profile them against each other. Aside from performance (where, if Bifurcan has any major advantages, we should adopt its techniques), I would say the ham-fisted datastructures implement all the Clojure interfaces - Indexed, IPersistentVector, IEditableCollection, etc. You can see this with a quick scan of the Java source code or the API implementation. So a mutable ham-fisted map is a bitmap trie, but it implements IEditableCollection, so you can use the transient functions: assoc! and friends, and persistent! works. ham-fisted aims to be drop-in compatible with Clojure's existing pathways.

Furthermore, there is a lot in ham-fisted aside from datastructures - as I note in the reddit post and the README, there are a lot of algorithmic primitives that either perform better, such as group-by, or that plain don't exist and/or perform much better, such as update-values and group-by-reduce. Finally, there are Clojure-friendly wrappers for all the Java primitive array types, so you can pass primitive-array-backed data to the rest of your Clojure program and nth, reduce and friends all work correctly and perform much better. There are type-specific implementations of sort and sort-indirect provided for ints, longs, and doubles that perform much better than the generic Java sort pathways.

So I would say overall two things aside from performance, which both libraries focus on: first, it is a drop-in replacement for many core Clojure datastructures and functions, and second, there are primitives new to the core Clojure functions that I think are interesting/worth researching further.


Well, and another thing: there is a lazy-noncaching namespace which provides drop-in replacements for map, filter, and concat which perform better than eduction in some pathways and don't require you to rewire your code from pure clojure.core to transducer form - and which, in map's case, also preserve the random-access property of the input if all the inputs are themselves random-access.
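If I understand correctly, the random-access preservation would look something like this (the namespace name is from the message above; the usage itself is my sketch, not verified):

```clojure
;; ham-fisted.lazy-noncaching is named above; function names assumed.
(require '[ham-fisted.lazy-noncaching :as lznc])

;; Because the input vector is random-access, the mapped result should
;; be too, so nth can index directly instead of walking a lazy seq:
(def v (lznc/map inc [1 2 3 4]))
(nth v 2)
```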


@UDRJMEFSN > there is a lazy-noncaching namespace which provides drop-in replacements of map, filter, and concat which perform better than eduction in some pathways and don't require you to rewire your code from pure clojure.core to transducer form

That's really interesting! @U4YGF4NGM and I have been doing a similar thing in #clavascript, which is a CLJS-syntax-to-JS compiler that implements some CLJS-esque things directly in JS (but most of the things work directly with mutable data structures, so no CLJS data structures). The implementation there is based on generators. Properties of lazy values returned from map, filter, range, etc. are that they are not cached and not chunked, so consuming the value twice will make calculations happen twice, but otherwise they should behave pretty much identically to CLJ(S). You can play around with that here:

ham-fisted seems like a really interesting lib: I guess the idea is that you can use many things as drop-in replacements for clojure.core and it should all mix and match without rewriting your code, but get better performance... right?
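The non-caching property described here is easy to see with a side-effecting function: each traversal recomputes, whereas clojure.core/map caches its lazy seq and would only compute once (again using the assumed lazy-noncaching names, so treat this as a sketch):

```clojure
;; Namespace from the discussion above; usage assumed, not verified.
(require '[ham-fisted.lazy-noncaching :as lznc])

(def calls (atom 0))
(def xs (lznc/map (fn [x] (swap! calls inc) (inc x)) [1 2 3]))

;; Traverse twice: with a non-caching lazy value, the mapping function
;; runs on every traversal, so `calls` should reach 6 here rather than
;; the 3 you would see with clojure.core/map's cached lazy seq.
(dorun xs)
(dorun xs)
@calls
```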


You are exactly right about lazy-noncaching. I think that is a more error prone programming model overall, but the performance benefits can be substantial as long as you respect the sharp edges. Drop-in - well, that is the theory :-) - I am working on this on far less than part time, so ymmv. You will get better performance if you move further towards the library and use some of the primitives in it - update-values, union, etc. are much faster if you are using datastructures from the library, and if you know you are dealing with doubles, sort is much faster if you pass in a double array - also true for ints and longs. In some respects it is also a far simpler bridge into dtype-next land and something I intend to integrate into dtype-next as a substrate. I think overall I want a community platform where we can try out different datastructures and algorithmic primitives in a testable way to move the language forward. I think there is room to improve both clojure.core and the persistent datastructures, but I don't think the core language is the place to do it until we are damn sure we are ahead and not causing regressions.


Speaking of JS - there is probably room for this type of experimentation in CLJS, meaning I think a very careful examination of the datastructures and algorithms that underlie the CLJS system would yield substantial benefits, but I have no proof of this -- it is just a suspicion.

šŸ‘ 1

Yes and since they are built on protocols, it should be fairly similar to what you did I guess


Yeah, I'd be interested in helping out with a cljs port of this


Rather than a port, I would love a CLJC, even if the CLJS part just contains scaffolding to begin with, subcontracting to cljs.core (where semantically equivalent, of course). This would provide an additional layer of generality as well as a baseline to test against.


My thought would be that you would need to write very directly to the js substrate and you would lose performance with cljs or cljc. This may not be true but that is my experience from clj/java - some pieces really do need to be written in Java.


So you would have a js library of datastructures and a cljs wrapper. Again, there is an assumption here, but if you are going for absolute top performance you need to make sure the cljs->js translation is perfect for that use case, and I don't know that it is.


I'm sure, but would that change the surface verbs: group-by-reduce, etc.?


Oh no it wouldn't. I see what you mean now.

šŸ‘ 1

The key would be to go CLJC at the level where there's semantic equivalence. At that level, the test suite should even be reusable for both.


Yes - speaking of the test suite, I could use help there.


Rewriting the core data structures and core functions in JS would have tree-shaking benefits with #cherry too - currently we're trying to figure out how to make the Google Closure output tree-shakable but it's very hard


Maybe it's possible to plain steal some parts from the Clojure test suite where applicable.


The idea is to have a once-optimized compiled version of CLJS core which we can then re-use to build upon, so you can continue tree-shaking with esbuild


The vectors test suite is a great start. I thought the Google Closure compiler did the tree shaking.


I'd like to see a persistent data structure lib that can read and write to wasm memory, such that one day when the JVM runs on WASM, we can synchronously bang on the same structure from both CLJ and CLJS


@UDRJMEFSN google closure does whole-program optimization, but the output isn't compatible with es6 treeshakers, so you can't re-use that beyond the context of an application


Don't know if you're aware but recently I found this approach to immutability in JS:


Their objects are 100% compatible with JS interop, so you don't have to do conversions which are often expensive.


They "freeze" their objects in development, to protect you from accidentally mutating, but they elide that in production


They use a sort of transactional model with copy-on-write within the transaction. Once copied within a transaction there is no need to copy further. That is a sort of interesting twist similar to transients.
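For comparison, Clojure's own transients follow the analogous pattern: open an edit session on a persistent value, make cheap in-place-style edits, then seal it back into a persistent value (this is plain clojure.core, no library assumptions):

```clojure
;; Open an edit session on a persistent map, edit, then seal it again.
(let [t (transient {:a 1})]
  (-> t
      (assoc! :b 2)   ;; edits within the session avoid per-step copying
      (assoc! :c 3)
      persistent!))   ;; seal back into an ordinary persistent map
;; => {:a 1, :b 2, :c 3}
```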