This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2017-10-10
Channels
- # aleph (4)
- # beginners (32)
- # cider (12)
- # cljs-dev (56)
- # cljsrn (7)
- # clojars (3)
- # clojure (165)
- # clojure-dev (33)
- # clojure-germany (1)
- # clojure-italy (27)
- # clojure-russia (7)
- # clojure-spec (24)
- # clojure-uk (62)
- # clojurescript (37)
- # core-async (7)
- # core-matrix (1)
- # cursive (9)
- # data-science (8)
- # datomic (8)
- # duct (4)
- # events (1)
- # figwheel (7)
- # flambo (3)
- # fulcro (43)
- # hoplon (25)
- # jobs-discuss (8)
- # lein-figwheel (4)
- # luminus (2)
- # off-topic (35)
- # om (8)
- # om-next (3)
- # onyx (30)
- # pedestal (62)
- # portkey (2)
- # protorepl (2)
- # re-frame (40)
- # reagent (9)
- # shadow-cljs (123)
- # specter (30)
- # sql (22)
- # testing (1)
- # uncomplicate (40)
- # unrepl (3)
- # vim (13)
- # yada (5)
The math operations in uncomplicate.neanderthal.math are supposed to work with fmap, I think. Wouldn't it be beneficial to do them on device, e.g. GPU?
Basically, a neural network graph is matrix-matrix multiplications plus point-wise non-linearities. If I do them with fmap, I will implicitly copy the intermediate results to the CPU on each layer, right?
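A minimal sketch of the layer pattern being discussed, on the CPU: `mm` runs the matrix product as a BLAS gemm, and `fmap` (from fluokitten) applies the non-linearity element-wise with a plain Clojure function. The `sigmoid` and `layer` names are illustrative, not part of neanderthal's API.

```clojure
(require '[uncomplicate.neanderthal.core :refer [mm]]
         '[uncomplicate.neanderthal.native :refer [dge]]
         '[uncomplicate.neanderthal.math :as math]
         '[uncomplicate.fluokitten.core :refer [fmap]])

(defn sigmoid ^double [^double x]
  (/ 1.0 (+ 1.0 (math/exp (- x)))))

(defn layer [w x]
  ;; mm is a fast BLAS routine; fmap applies a Clojure function
  ;; entry by entry, which for a GPU matrix would force the
  ;; device->host copy mentioned above.
  (fmap sigmoid (mm w x)))

(layer (dge 2 3 (range 6))      ;; 2x3 weight matrix
       (dge 3 1 [1 2 3]))      ;; 3x1 input column
```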
That's what's coming in 0.17.0. There will be vectorized equivalents for all math functions on CPU & GPU, for all the different types of vectors and matrices.
fmap works only with Clojure functions, of course, since GPU kernels cannot meaningfully implement IFn
Now, if you need some mathematical function that is not in math (and thus vector-math) you'd have to implement the kernel yourself (relatively easy with clojurecuda/clojurecl)
The current snapshot has implementations for vectors and GE matrices on the CPU. Other matrix types and GPU support are on the TODO list, but should be available soon.
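A sketch of what the vectorized equivalents might look like, assuming the `vect-math` namespace and function names land as in the snapshot (they are an assumption here, since 0.17.0 is not yet released):

```clojure
(require '[uncomplicate.neanderthal.native :refer [dv]]
         '[uncomplicate.neanderthal.vect-math :as vm])

;; Bulk element-wise tanh over a whole vector in one call,
;; instead of an entry-by-entry fmap with a Clojure function.
(vm/tanh (dv 0 1 2))

;; Destructive variant mutates its argument in place.
(vm/tanh! (dv 0 1 2))
```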
As an exercise, I have reactivated the core.matrix backend of @mikera and made it work with the current master branch and with the high-performance BLAS routines for dense matrices. I think the autograd code could work with this core.matrix backend without major performance penalties compared to direct usage of neanderthal, since it only needs a small subset of neanderthal's routines, plus some things that core.matrix provides, like automatic broadcasting.
Broadcasting and reshaping (common ops in scientific-computing languages like Python, Matlab, or Julia) will be expensive, right? I could copy into a new matrix, but usually these other languages provide a view on the data. I don't know how they do it; do you have any suggestions?
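One way to get the common "add a vector to every column" broadcast without allocating a broadcasted copy is a rank-1 update (BLAS ger), which neanderthal exposes as `rk!`. A sketch, assuming neanderthal's core API:

```clojure
(require '[uncomplicate.neanderthal.core :refer [rk! entry!]]
         '[uncomplicate.neanderthal.native :refer [dge dv]])

(let [a    (dge 2 3 (range 6))     ;; 2x3 matrix
      b    (dv 1 2)                ;; bias vector, one entry per row
      ones (entry! (dv 3) 1.0)]    ;; vector of ones, one per column
  ;; a := 1.0 * b * ones' + a -- adds b to every column, in place,
  ;; with no intermediate broadcasted matrix.
  (rk! 1.0 b ones a))
```

The trade-off is that this mutates `a` rather than producing a lazy view the way NumPy-style broadcasting does.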
@blueberry Thanks!
But the matrix implementations of those functions do not. They work in general, but there might be a few bugs to fix.
Just as a note, I can try to do it, but I guess you have a list of missing features where things like this can be tracked, so they don't get lost.