#uncomplicate
2017-12-25
qqq 02:12:33

@blueberry: can you point me at the right way to do 2D convolutions in jcuda? (I expect there to be a built-in but I am not finding it)

qqq 02:12:22

@blueberry: also, do you know of any efforts to bind the *.cu files from TensorFlow, PyTorch, or CuPy with JCuda? At the end of the day, there's no reason to rewrite the .cu files; I just want a way to use them from Clojure, and the .cu/JCuda layer seems like a great place to intercept.

blueberry 11:12:50

I do not understand what you are asking.

qqq 12:12:53

@blueberry: libraries like TensorFlow / PyTorch / CuPy provide, at some level, an accelerated nd-array plus various ops on CUDA

qqq 12:12:01

then there's some python wrapper over it

qqq 12:12:26

this "accelerated cuda nd-array + ops" seems like something that can be extracted, and then bound to via jcuda, and thenuseable via clojure

blueberry 12:12:54

Those cu kernels are a small percentage of the overall code, and they do not work in isolation from the host code. You'd have to take the cu files (respecting the license, of course) and write matching host code in Java and/or Clojure.
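[Editor's note: the "matching host code" blueberry describes can be sketched concretely. Below is a minimal, hedged example of the JCuda driver-API pattern for running a standalone .cu kernel from Java: a hypothetical vector-add kernel assumed to be compiled to `add.ptx` with `nvcc -ptx`, exporting a function named `add(int n, float *a, float *b, float *out)`. The file and kernel names are assumptions for illustration; a real CUDA-capable GPU and the jcuda artifact are required to run it. Every kernel extracted from a framework would need host code of roughly this shape: context setup, module load, device allocation, copies, launch, and cleanup.]

```java
import jcuda.Pointer;
import jcuda.Sizeof;
import jcuda.driver.*;

import static jcuda.driver.JCudaDriver.*;

public class KernelHost {
    public static void main(String[] args) {
        JCudaDriver.setExceptionsEnabled(true);

        // Initialize the driver API and create a context on device 0.
        cuInit(0);
        CUdevice device = new CUdevice();
        cuDeviceGet(device, 0);
        CUcontext context = new CUcontext();
        cuCtxCreate(context, 0, device);

        // Load the compiled kernel and look up the function by name.
        CUmodule module = new CUmodule();
        cuModuleLoad(module, "add.ptx");
        CUfunction add = new CUfunction();
        cuModuleGetFunction(add, module, "add");

        int n = 1024;
        float[] hostA = new float[n];
        float[] hostB = new float[n];
        for (int i = 0; i < n; i++) { hostA[i] = i; hostB[i] = 2 * i; }

        // Allocate device memory and copy the inputs over.
        CUdeviceptr dA = new CUdeviceptr();
        cuMemAlloc(dA, n * Sizeof.FLOAT);
        cuMemcpyHtoD(dA, Pointer.to(hostA), n * Sizeof.FLOAT);
        CUdeviceptr dB = new CUdeviceptr();
        cuMemAlloc(dB, n * Sizeof.FLOAT);
        cuMemcpyHtoD(dB, Pointer.to(hostB), n * Sizeof.FLOAT);
        CUdeviceptr dOut = new CUdeviceptr();
        cuMemAlloc(dOut, n * Sizeof.FLOAT);

        // Kernel parameters: a pointer to an array of pointers,
        // one per kernel argument, in declaration order.
        Pointer params = Pointer.to(
                Pointer.to(new int[]{n}),
                Pointer.to(dA), Pointer.to(dB), Pointer.to(dOut));

        int blockSize = 256;
        int gridSize = (n + blockSize - 1) / blockSize;
        cuLaunchKernel(add,
                gridSize, 1, 1,    // grid dimensions
                blockSize, 1, 1,   // block dimensions
                0, null,           // shared memory size, stream
                params, null);
        cuCtxSynchronize();

        // Copy the result back and release device memory.
        float[] hostOut = new float[n];
        cuMemcpyDtoH(Pointer.to(hostOut), dOut, n * Sizeof.FLOAT);
        cuMemFree(dA); cuMemFree(dB); cuMemFree(dOut);
    }
}
```

This is the boilerplate that Python frameworks hide behind their wrappers, and it is per-kernel work: argument marshalling, launch geometry, and memory lifetime all have to match each kernel's signature, which is why the .cu files alone are not enough.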