alan06:07:29 @blueberry update, we are now basically crushing Numpy. A few things I noted:
1. For very simple stuff like matrix multiplication, times are similar: the language has basically nothing to do, it's just a call to MKL, and this is what I would expect.
2. With SVD I removed the U and V calculation from the Neanderthal code and it got much faster than Numpy (I don't feel like it is cheating; they should give me the option to calculate them or not).
3. PCA: either I did something stupid in the Neanderthal code, or here we are seeing a huge difference in "real world" usage 😄
4. I used 0.19.0; I tried to build 0.20.0-SNAPSHOT but it was complaining about the neanderthal-native version (can you help me with that?). If SVD and PCA are getting even faster, that's it, I'm writing a statistical library in Neanderthal, at least for myself 😏
5. I tried running Numpy with linalg.eig instead of linalg.eigh and it couldn't finish running; the 4096x4096 PCA would take more than 40 seconds!!! So I reverted to linalg.eigh, but we're still beating it anyway.
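For reference, the two Numpy choices mentioned above (skipping the U/V computation in the SVD, and using the symmetric `eigh` solver rather than the general `eig` for PCA on a covariance matrix) look roughly like this. This is a minimal sketch for illustration, not the actual benchmark code:

```python
import numpy as np

rng = np.random.default_rng(42)
a = rng.standard_normal((256, 128))

# Singular values only: skipping U and V is much cheaper than a full SVD.
s = np.linalg.svd(a, compute_uv=False)

# PCA via eigendecomposition of the covariance matrix. The covariance
# is symmetric, so eigh (the symmetric solver) applies and is far
# faster than the general eig.
x = a - a.mean(axis=0)
cov = (x.T @ x) / (x.shape[0] - 1)
evals, evecs = np.linalg.eigh(cov)   # eigh returns ascending eigenvalues
order = np.argsort(evals)[::-1]      # largest variance first
components = evecs[:, order]
```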


Edit: I did something stupid in Neanderthal: I was benchmarking only the mm! call 😓


I'll rerun everything, but the other results are ok


I guess I am pretty tired lately... 🤣


you'd have to build the new version of neanderthal-native, clojurecl, and clojurecuda, of course. or you can wait a couple of days until I release new versions.

👍 4

Would it make sense to test the GPU as well? I mean, I know Numpy can't do it, but it might be interesting to show that Neanderthal can while the other can't.


It does, of course, and I guess that should be compared to Numba or something similar. However, note that the GPU engine does not (yet) support SVD or eigendecomposition, so you won't be able to compute PCA (unless you add bindings to that part of cuSolver or write the kernels yourself).


I'm not great at C++, so I guess I'll have to wait


No worries - there is zero C++ programming required, only CUDA or OpenCL C.


The numerical algorithms and their optimization, on the other hand...


Yeah that's the issue (C or C++ doesn't really make much difference)


If anyone is interested: in my math library you can find (among other namespaces) descriptive statistics, distributions, interpolation, and clustering. Almost all functions are backed by SMILE or Apache Commons Math and work with native Clojure sequences.

👏 4

@tsulej is it comparable to Numpy?


After a quick check (I don't know Numpy/Scipy well): fastmath's random numbers and distributions can be compared to numpy.random, plus the rest of the math and statistics.


scipy.interpolate <-> fastmath.interpolation
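For anyone coming from the scipy side, the interpolation counterpart in that mapping can be tried like this. A minimal scipy sketch for comparison only (fastmath's own API differs):

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Sample a quadratic at a few points and interpolate between them.
xs = np.array([0.0, 1.0, 2.0, 3.0])
ys = xs ** 2
spline = CubicSpline(xs, ys)

# A cubic spline reproduces polynomials up to degree 3 exactly,
# so evaluating at 1.5 recovers 1.5**2 = 2.25.
value = float(spline(1.5))
```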


scipy.cluster <-> fastmath.clustering (however, scipy is very limited here)


maybe one more: fastmath.transform includes DFT (+ various wavelets, DCT, DST and Hadamard)
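The scipy equivalents of those transforms live mostly in scipy.fft. A small sketch of the DFT/DCT/DST side, just to show the comparison point (wavelets and Hadamard are elsewhere in scipy):

```python
import numpy as np
from scipy.fft import fft, dct, idct, dst

sig = np.array([1.0, 2.0, 3.0, 4.0])

spectrum = fft(sig)               # DFT of the signal
c = dct(sig, norm="ortho")        # DCT-II with orthonormal scaling
s = dst(sig, norm="ortho")        # DST
restored = idct(c, norm="ortho")  # inverse DCT round-trips the signal
```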


If I use mm! inside the cov code it fails, while with mm everything works.


It probably crashed because you released the result and later tried to use it outside that scope, when it was already released. Why would you release the result? You'd want to keep the result while releasing temporary objects...
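The use-after-release mistake described here has a close analogy in Python's context managers. This is only an analogy (not Neanderthal code): `with-release` frees everything when its scope ends, just as `with open(...)` closes the file, so the resource must not be touched afterwards; `let-release` exists precisely to hand the result out of the scope unreleased.

```python
import os
import tempfile

# A resource released by its managing scope must not be used
# after that scope ends.
path = os.path.join(tempfile.gettempdir(), "release_demo.txt")
with open(path, "w") as f:
    f.write("fine: the handle is still open here")
# The with block has "released" f; touching it now is use-after-release.
released = False
try:
    f.write("too late")
except ValueError:
    released = True  # Python raises instead of crashing the process
```

Unlike the JVM-crash scenario with native memory, Python's file object fails safely with an exception, which is what the "layer of protection" mentioned below would aim for.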


Although it would be great if you could share a GitHub project with minimal crashing code that I could try and debug. Even if it's due to misuse, maybe I can add a layer of protection against the JVM crash...


Ok, this is what I meant by "I didn't understand with-release and let-release well"; now it's clearer.


If you want, you can take the code from the repo; that's the failing code, its SHA is d70b011be3c6dd4bf35fdfd348f0939d3dbf49a9


But why was it failing only with the 1024x1024 version?


@justalanm These two functions have documentation 🙂


It would be easier for me to reproduce if you made a mini Leiningen project with one test file containing examples that fail and examples that don't.