#data-science
2018-03-04
whilo 13:03:24

@metasoarous not yet. but the overhead compared to neanderthal is one protocol dispatch, so neanderthal benchmarks should apply: https://neanderthal.uncomplicate.org/articles/benchmarks.html

whilo 14:03:23

i do a hack in a special case of the QR decomposition that i am not sure about yet. other than that the porting was straightforward. the core.matrix APIs do not respect the low-level BLAS representations though; they e.g. always unpack the matrices for QR or LU decompositions. so neanderthal is highly recommended if you care about performance.

whilo 14:03:57

ideally you can do so in parts of your algorithm by specializing on neanderthal and using the core.matrix API for the rest
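The mixing described above could be sketched roughly like this, assuming Neanderthal's public `uncomplicate.neanderthal.core`/`native` namespaces and core.matrix are on the classpath; the function name and the 2x2 sizes are just illustrative:

```clojure
;; Hedged sketch: run the hot matrix multiply through Neanderthal's
;; BLAS-backed mm, then convert the result into a plain core.matrix
;; structure for the rest of the algorithm.
(require '[uncomplicate.neanderthal.core :as nc]
         '[uncomplicate.neanderthal.native :as nn]
         '[clojure.core.matrix :as m])

(defn fast-mm->core-matrix
  "Multiply two 2x2 native matrices with Neanderthal and return a
  core.matrix matrix for downstream core.matrix code."
  [as bs]
  (let [a (nn/dge 2 2 as)              ; column-major 2x2
        b (nn/dge 2 2 bs)
        c (nc/mm a b)                  ; BLAS gemm under the hood
        rows (for [i (range 2)]
               (for [j (range 2)]
                 (nc/entry c i j)))]   ; read entries back out
    (m/matrix rows)))
```

The conversion at the boundary copies the data, so the win only pays off when the Neanderthal part dominates the runtime.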

whilo 14:03:02

that is my take on it atm.

whilo 14:03:17

neanderthal is also closer to standard literature on numerical algorithms, but i am not too familiar with that literature yet

whilo 14:03:24

i mostly care about autograd atm.

whilo 14:03:48

@bpiel have you thought about a core.matrix backend for tensorflow?

whilo 14:03:40

i need pytorch-like autograd for dynamic compute graphs; tensorflow is moving in that direction, i think

whilo 14:03:12

(in anglican)

whilo 14:03:53

the problem is that the traces can have different lengths and are steered by clojure's control flow

bpiel 14:03:08

@whilo I only have a minute, but yes, I have and it seems like a good idea.

whilo 14:03:20

i also think so 🙂

whilo 14:03:24

great, we can chat later

whilo 14:03:28

ping me when you have time