#uncomplicate
2017-10-10
whilo19:10:00

The math operations in uncomplicate.neanderthal.math are supposed to work with fmap, I think. Wouldn't it be beneficial to do them on device, e.g. GPU?

whilo19:10:48

Basically a neural network graph will be matrix-matrix multiplications + point-wise non-linearities. If I do them with fmap, I will implicitly copy the intermediary results to the CPU on each layer, right?

blueberry19:10:54

That's what's coming in 0.17.0. There will be vectorized equivalents of all math functions on CPU & GPU, for all the different types of vectors and matrices.

blueberry19:10:39

fmap works only with Clojure functions, of course, since GPU kernels cannot meaningfully implement IFn

blueberry19:10:18

the namespace that you're interested in is vect-math
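[For context, the difference being discussed can be sketched like this — an untested sketch assuming Neanderthal's native backend and the vect-math namespace mentioned above:]

```clojure
(require '[uncomplicate.neanderthal.native :refer [dv]]
         '[uncomplicate.neanderthal.vect-math :as vm]
         '[uncomplicate.fluokitten.core :refer [fmap]])

(def v (dv 1 2 3))

;; fmap applies an arbitrary Clojure function element by element,
;; which only works on the CPU (GPU kernels cannot implement IFn):
(fmap (fn ^double [^double x] (Math/exp x)) v)

;; vect-math dispatches to a vectorized kernel for the same operation,
;; so intermediate results can stay on the device:
(vm/exp v)
```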

blueberry19:10:46

Now, if you need some mathematical function that is not in math (and thus not in vect-math), you'd have to implement the kernel yourself (relatively easy with clojurecuda/clojurecl)

blueberry19:10:58

The current snapshot has implementations for vectors and GE matrices on CPU. Other types of matrices and the GPU are on the TODO, but should be available soon.

whilo19:10:45

Ok, perfect.

whilo19:10:37

It does not support GE matrices for me, only vectors:

whilo19:10:58

As an exercise I have reactivated the core.matrix backend of @mikera and made it work with the current master branch and with the high-perf. BLAS routines for dense matrices. I think the autograd code could work with this core.matrix backend without major performance penalties compared to direct usage of neanderthal, since it only needs a small subset of neanderthal's routines, plus some things that core.matrix provides, like automatic broadcasting.

whilo19:10:46

Broadcasting and reshaping (common ops in scientific computing languages like Python, Matlab or Julia) will be expensive, right? I could copy into a new matrix, but usually these other languages provide a view on the data. I don't know how they do it; do you have any suggestions?

blueberry20:10:05

untested (yet!)

whilo20:10:32

Seems to work. core.matrix tests pass.

whilo20:10:44

(and my manual tests)

blueberry20:10:47

I meant the vect-math engine

whilo20:10:10

I have just added it with your commit for Vector and Matrix.

blueberry20:10:13

Old stuff, of course, passes the tests

blueberry20:10:34

what did you add?

whilo20:10:47

vm/pow vm/mul vm/div etc.

blueberry20:10:14

Those have been there and have the tests.

blueberry20:10:02

But the matrix implementations of those functions do not. They work generally, but there might be a few bugs to fix.

whilo20:10:03

What is the preferred way to add a scalar to a vector/matrix?

blueberry20:10:58

`(linear-frac a 3.3)`
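
[A minimal sketch of that call — assuming linear-frac is available from vect-math and that this two-argument form shifts every entry by the scalar, as the answer above suggests:]

```clojure
(require '[uncomplicate.neanderthal.native :refer [dv]]
         '[uncomplicate.neanderthal.vect-math :refer [linear-frac]])

(def a (dv 1 2 3))

;; shift every entry of a by the scalar 3.3, without first
;; materializing a scalar-filled vector to add
(linear-frac a 3.3)
```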

whilo21:10:58

The sign function and elementwise <, >, and = (+ eps) would be nice.

whilo21:10:44

Just as a note, I can try to do it, but I guess you have a list of missing features where things like this can be tracked, so they don't get lost.

blueberry21:10:16

open an issue on github

blueberry21:10:44

= + eps is already in math, f=

blueberry21:10:00

you also have f<, f> etc

whilo21:10:43

Yes, but they are for usage with fluokitten. I think it would be nice to have something on device. And max would also be cool for ReLU activations.

whilo21:10:15

I would like to select only positive entries in a vector for example.
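
[Until there is an on-device max, a CPU-only ReLU can be sketched with fmap — which incurs exactly the host round-trip discussed above; the name relu-cpu is hypothetical:]

```clojure
(require '[uncomplicate.neanderthal.native :refer [dv]]
         '[uncomplicate.fluokitten.core :refer [fmap]])

(defn relu-cpu
  "Elementwise max(0, x) via a plain Clojure function; runs on the
  CPU, since fmap cannot dispatch to a GPU kernel."
  [v]
  (fmap (fn ^double [^double x] (max 0.0 x)) v))

(relu-cpu (dv -1 2 -3))
```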

whilo21:10:43

I will open an issue.

whilo22:10:08

Good night 🙂.