#uncomplicate
2018-04-12
matan07:04:24

@blueberry thank you for commenting! This library is kind of amazing. The docs are quite good; I already ran some code with it yesterday.

matan07:04:50

> Properly written Clojure is equivalent to Java. It compiles to the same bytecode.
I've yet to be convinced of this; to my mind idiomatic Clojure is less CPU-efficient than mutable Java code, e.g. for mutation-heavy algorithms (take Levenshtein distance calculation).

matan07:04:28

In a nutshell, can you help me hammer this down:
> Moreover, Neanderthal is faster even than Java libraries that use the same MKL binaries.
Why? Better design around MKL? Or is it best seen in the benchmarks? Feeling curious.

matan07:04:45

Thanks for this cool lib!

blueberry08:04:00

@matan of course idiomatic Clojure is slower for numerical tasks than mutable Java. That's the point: use the tool that is proper for the job. Clojure supports mutable arrays, buffers, etc. That's what I was talking about. When you write equivalent code, you get (almost) equivalent bytecode, and can get (practically) the same speed. The problem is that speed at the level of Java is still many times slower than the speed the hardware can deliver.
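
For instance, here is matan's Levenshtein example written against primitive arrays, in the style I mean (a minimal sketch for illustration, not code from Neanderthal or this discussion; the names are made up):

```
;; Two-row dynamic-programming Levenshtein distance over primitive int arrays.
;; With the type hints in place, this compiles to bytecode close to the
;; equivalent Java loop.
(defn levenshtein [^String a ^String b]
  (let [m (.length a)
        n (.length b)
        prev (int-array (inc n))
        curr (int-array (inc n))]
    (dotimes [j (inc n)]
      (aset prev j (int j)))                         ; row 0: distances 0..n
    (dotimes [i m]
      (aset curr 0 (int (inc i)))
      (dotimes [j n]
        (let [cost (if (= (.charAt a i) (.charAt b j)) 0 1)]
          (aset curr (inc j)
                (int (min (inc (aget curr j))        ; insertion
                          (inc (aget prev (inc j)))  ; deletion
                          (+ (aget prev j) cost))))))
      (System/arraycopy curr 0 prev 0 (inc n)))      ; curr becomes prev
    (aget prev n)))
```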

blueberry08:04:12

Better design around MKL.

blueberry08:04:26

You're welcome.

blueberry08:04:49

Now that I think about what I said... the design is not around MKL; it is an overall better design for this particular domain. It accommodates MKL, cuBLAS, and a few other libraries in quite an efficient way.

blueberry08:04:05

and it could even support pure Java implementations transparently... if there were any.

matan17:04:31

@blueberry Would you happen to know of any machine learning clojure libraries using Neanderthal?

jsa-aerial19:04:52

Well, there is Uncomplicate's own Bayadera: https://github.com/uncomplicate/bayadera and, for one from 'outside', there is Flare: https://github.com/aria42/flare

whilo21:04:15

I have also provided https://github.com/cailuno/denisovan so all core.matrix code can use Neanderthal now. Still missing the GPU backends, though.
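
For anyone curious, usage would look roughly like this (a hedged sketch: I'm assuming denisovan registers itself under the :neanderthal core.matrix keyword and exposes a denisovan.core namespace; check the README for the exact names):

```
(require '[clojure.core.matrix :as m]
         '[denisovan.core])              ; loading the namespace registers the backend

;; Tell core.matrix to dispatch to the Neanderthal-backed implementation.
(m/set-current-implementation :neanderthal)

(let [a (m/matrix [[1.0 2.0] [3.0 4.0]])
      b (m/matrix [[5.0 6.0] [7.0 8.0]])]
  (m/mmul a b))                          ; matrix multiply runs on Neanderthal/MKL
```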

matan13:04:39

nice hominid species naming style 🙂 hope they don't end up the same....

matan13:04:39

@U1C36HC6N is core.matrix itself a very good API? I've never used it yet

matan13:04:13

@U06C63VL4 thanks! Does Bayadera have any explicit docs?

matan13:04:14

@U06C63VL4
> A Clojure Library for Bayesian Data Analysis and Machine Learning on the GPU.
The tagline left me a bit unsure, e.g. does it implement Bayesian machine learning à la Bayesian neural networks? I should read the source.

jsa-aerial17:04:55

@matan Bayadera is one of Dragan's projects, so he would be able to give the best advice. I don't think it is 'officially' released yet. I have not actually used it, but I don't believe it uses any NN stuff.

whilo21:04:43

@matan I think core.matrix is fine if you care about a numpy-like high-level API that is polymorphic w.r.t. implementations. Unfortunately it was not as focused on performance as Neanderthal; I hope denisovan provides a reasonable tradeoff for code written against core.matrix. I still need to check https://github.com/whilo/boltzmann against it.

blueberry22:04:16

@matan Bayadera does not use NNs, nor does it make sense to use them in the general case. Bayadera is closest to an implementation of automated probabilistic decision making, where you combine knowledge and data to update the knowledge, and then evaluate it taking utility (or cost) into account. In theory, it could be used to make automatic decisions on the parameters and/or structure of neural networks, but that is not practical with real-world million-node deep networks. Nor is it necessary, IMHO. When people use the term "bayesian" together with deep learning, they usually mean "I replaced some numbers with (normal) distributions of those numbers". That is not Bayesian in the usual sense of the word: updating your priors with data to find out what your updated knowledge is.
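
Concretely, the updating I mean is plain Bayes' rule: combine the prior over the parameters with the likelihood of the observed data to get the posterior,

```
p(\theta \mid D) = \frac{p(D \mid \theta)\, p(\theta)}{p(D)}
```

and the posterior is what you then evaluate against a utility (or cost) function when making decisions.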

blueberry22:04:39

BTW, I'm in the phase of polishing Bayadera for the first release to Clojars, so expect docs and tutorials this spring/summer. I have already updated all engines to work on both AMD (OpenCL) and Nvidia (CUDA) GPUs.

💯 4
blueberry22:04:58

@matan You mean end up the way they did with Homo sapiens? BTW, there's something like a few percent of Neanderthal genes in the European population. Being used in a few percent of each European Clojure project might actually be a nice thing for Neanderthal (the library) 🙂

blueberry22:04:39

... and in a large part of North and South American projects, now that I think of it 😛

matan17:04:31

BTW, it's odd that the Java libraries wrapping MKL have been so sloppy; maybe they weren't important enough for anyone, but it's still kind of odd.

matan17:04:55

Either way I'll use Neanderthal for my current project

matan17:04:07

Managed to set up MKL despite Intel's terrible docs websites 🤪

blueberry19:04:28

@matan they are not sloppy at all; they just miss some performance opportunities here and there. It's not an easy problem: there are thousands of ways to lose performance, and those add up.