#numerical-computing
2017-05-31
qqq 10:05:59

@blueberry : have you seen situations where bad calls in JCuda crash the entire JVM?

qqq 10:05:18

I'm running into this issue a lot recently, and it's suboptimal, as it forces me to wait and start a new REPL, and really reduces my ability to experiment quickly

blueberry 10:05:44

you can prevent this by using ClojureCUDA, which offers safe functions.

qqq 11:05:47

(JCublas2/cublasSetPointerMode cublas-handle cublasPointerMode/CUBLAS_POINTER_MODE_DEVICE) <-- without that one line, cublasSdot crashes
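For readers hitting the same crash: the cuBLAS v2 API defaults to host pointer mode, so cublasSdot writes its scalar result through what it assumes is a host pointer. Hand it a device pointer without switching modes and the write lands in invalid memory, which can take the whole process (here, the JVM) down. A minimal sketch of the same fix against the raw C cuBLAS API (buffer names are illustrative, error checking omitted):

```c
#include <cuda_runtime.h>
#include <cublas_v2.h>

/* d_x, d_y, d_result all point to device memory; n is the vector length. */
void dot_on_device(const float *d_x, const float *d_y, float *d_result, int n) {
    cublasHandle_t handle;
    cublasCreate(&handle);
    /* Tell cuBLAS that the result pointer lives in device memory.
       Without this call it is treated as a host pointer, and the
       write through it crashes. */
    cublasSetPointerMode(handle, CUBLAS_POINTER_MODE_DEVICE);
    cublasSdot(handle, n, d_x, 1, d_y, 1, d_result);
    cublasDestroy(handle);
}
```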

qqq 11:05:18

@blueberry : ClojureCUDA doesn't wrap cuBLAS/cuDNN yet, does it?

qqq 11:05:22

I need both of those libraries

blueberry 11:05:45

neanderthal has a cuBlas engine

blueberry 13:05:53

that's right

qqq 22:05:59

@blueberry: do "CUDA streams" play well with "cuBLAS"?

qqq 22:05:16

if so, how do you handle it in Neanderthal? do you break up big matrices into smaller pieces that you can put into different CUDA streams?
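For context on how the two interact: in the C API, a cuBLAS handle is bound to a stream with cublasSetStream, and subsequent calls on that handle are enqueued asynchronously on that stream; concurrency comes from issuing work on several streams, not from cuBLAS splitting matrices itself. A hedged sketch (setup only, error checking omitted):

```c
#include <cuda_runtime.h>
#include <cublas_v2.h>

/* Bind a cuBLAS handle to a CUDA stream: every subsequent cuBLAS call
   on this handle is enqueued on `stream` and runs asynchronously with
   respect to the host and to work queued on other streams. */
void bind_cublas_to_stream(cublasHandle_t handle, cudaStream_t stream) {
    cublasSetStream(handle, stream);
}
```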

qqq 23:05:23

is there any support for doing elementwise exp on the GPU?

qqq 23:05:31

or, with Neanderthal, do I have to write a custom CUDA kernel for that?

blueberry 23:05:26

that is a trivial kernel to write in clojurecuda.
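For reference, the kind of kernel being described really is small; a sketch in CUDA C (the one-thread-per-element launch configuration is illustrative):

```cuda
/* Element-wise exp: one thread per element of x, result written to y. */
__global__ void vexp(const float *x, float *y, const int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        y[i] = expf(x[i]);
    }
}

/* Launch with, e.g., 256 threads per block, where d_x and d_y are
   device buffers of n floats:
   vexp<<<(n + 255) / 256, 256>>>(d_x, d_y, n); */
```

ClojureCUDA can compile a kernel source string like this at run time (it wraps NVRTC) and launch it straight from the REPL.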

blueberry 23:05:05

I didn't want to clutter Neanderthal with a bunch of kernels for every possible simple mathematical function.