This page is not created by, affiliated with, or supported by Slack Technologies, Inc.

## 2018-03-04

## Channels

- # beginners (149)
- # cider (1)
- # clara (12)
- # cljs-dev (226)
- # cljsrn (2)
- # clojure (275)
- # clojure-russia (5)
- # clojure-uk (14)
- # clojurescript (57)
- # cursive (23)
- # data-science (15)
- # datomic (1)
- # fulcro (8)
- # hoplon (9)
- # onyx (5)
- # portkey (15)
- # protorepl (1)
- # re-frame (8)
- # reagent (17)
- # shadow-cljs (22)
- # uncomplicate (13)
- # vim (36)

@metasoarous not yet. but the overhead compared to neanderthal is one protocol dispatch, so neanderthal benchmarks should apply: https://neanderthal.uncomplicate.org/articles/benchmarks.html
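to illustrate what "one protocol dispatch" of overhead means, here is a minimal hypothetical sketch (the protocol and record names are made up for illustration; only `mm` is the real neanderthal call):

```clojure
(require '[uncomplicate.neanderthal.core :refer [mm]])

;; hypothetical wrapper protocol: a core.matrix-style generic call
;; resolves through one protocol dispatch, then lands directly on the
;; neanderthal/BLAS implementation underneath.
(defprotocol PMatrixMultiply
  (mmul* [a b]))

(defrecord NeanderthalBacked [m]
  PMatrixMultiply
  (mmul* [a b]
    ;; delegates to neanderthal's mm; the only extra cost over calling
    ;; mm directly is this single protocol dispatch
    (->NeanderthalBacked (mm (:m a) (:m b)))))
```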

i do a hack in a special case of the QR decomposition that i am not sure about yet. other than that, the porting was straightforward. the core.matrix APIs do not respect the low-level BLAS representations, though: for example, they always unpack the matrices for QR or LU decompositions. so neanderthal is highly recommended if you care about performance.
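a sketch of the difference, assuming neanderthal's `qrf`/`org` in `uncomplicate.neanderthal.linalg`: `qrf` keeps the factorization in LAPACK's compact form (R in the upper triangle, Householder reflectors below, plus the tau scalars) and only materializes an explicit Q on request, whereas core.matrix's `qr` hands back fully unpacked Q and R matrices.

```clojure
(require '[uncomplicate.neanderthal.native :refer [dge]]
         '[uncomplicate.neanderthal.linalg :refer [qrf org]])

;; 3x2 double matrix, filled column-major
(let [a   (dge 3 2 [1 2 3 4 5 6])
      fac (qrf a)]        ; packed QR factorization, no Q built yet
  ;; only pay for an explicit Q when you actually need one
  (org fac))
```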

ideally you can do this for just parts of your algorithm, specializing on neanderthal where it matters and using the core.matrix API for the rest.
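a rough sketch of that mixed approach (the helper name is hypothetical, and the copy via `to-vector` is just the simplest bridge; neanderthal's `dge` fills column-major, hence the transpose):

```clojure
(require '[clojure.core.matrix :as m]
         '[uncomplicate.neanderthal.core :as n]
         '[uncomplicate.neanderthal.native :refer [dge]])

;; hypothetical hot-path helper: drop from core.matrix to neanderthal
;; for the expensive multiply, keep the rest of the algorithm generic.
(defn fast-mmul [a b]
  (let [[ra ca] (m/shape a)
        [rb cb] (m/shape b)
        ;; row-major flattening of the transpose = column-major of the original
        na (dge ra ca (m/to-vector (m/transpose a)))
        nb (dge rb cb (m/to-vector (m/transpose b)))]
    (n/mm na nb)))
```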

neanderthal is also closer to the standard literature on numerical algorithms, but i am not too familiar with that literature yet.

i need pytorch-like autograd for dynamic compute graphs; tensorflow is moving in that direction, i think.

i want to implement this https://openreview.net/forum?id=BJ8c3f-0b