
@fabrao do you need to learn the basics first, i.e. how a convolutional neural network works? After that, you will know the answer. There are lots of courses on the net, the best ones in Python.


is anyone able to suggest a good automatic differentiation library for clojure? One that can handle expressions with matrix operations?


@michaellindon thanks for your talk about Bayesian data analysis at clojure/conj, just saw the video! I was wondering — why did you use commons.math instead of neanderthal?


@rustam.gilaztdinov thank you! I don't use commons.math for any linear algebra. I actually ported some of their distributions to Clojure and provided some idiomatic wrappers for interacting with their distributions. Their code is well maintained and heavily used, so I trust their random number generation.


Incanter, for example, uses a Java library, Parallel Colt, for its random number generation, but Parallel Colt has not been maintained in a while i think... i trust commons math more than Parallel Colt


yes, but for Bayesian inference — performance is very important. Have you had a look at bayadera? It's built on top of neanderthal and has its own distributions implementation. I think it's very cool


i know of bayadera, but i don't know enough about its implementation details


i don't know what algorithms are being used for inference


if people are interested in the distributions library i talked about, it's here:


Note also that Bayadera's approach to RNG is a bit different. The linked distributions are only plain functions for computing interesting (scalar) values in Clojure. RNG is supposed to be done in bulk (billions of values at a time) directly on the (GPU) computing engine with the sampler's sample method.


In that case, you are using (built-in or custom-written) OpenCL/CUDA distribution definitions. And, unlike other libraries that happily sample "easy" 1-dimensional distributions or special n-dimensional ones, this is mainly beneficial when sampling wild hierarchical models. Of course, for easier stuff, you easily get billions of samples in the blink of an eye.


The main thing to note is that the point of (MCMC) sampling is to get the sample of an unknown (posterior) distribution. Getting the sample of a known distribution is in many (most?) cases applying circular reasoning. If I know the distribution at hand, then I can answer the questions about it directly, I don't need a sample from it.
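For context, here is a minimal random-walk Metropolis sketch in Python (stdlib only) showing the general MCMC idea: drawing samples from a distribution known only up to a normalizing constant. This is just an illustration of one member of the MCMC family, not what Bayadera actually implements.

```python
import math
import random

def metropolis(log_density, x0, n_samples, step=1.0, seed=42):
    """Random-walk Metropolis sampler for an unnormalized 1-D log-density."""
    rng = random.Random(seed)
    x = x0
    logp = log_density(x)
    samples = []
    for _ in range(n_samples):
        # propose a symmetric random-walk move
        x_new = x + rng.gauss(0.0, step)
        logp_new = log_density(x_new)
        # accept with probability min(1, p_new / p_old)
        if math.log(rng.random()) < logp_new - logp:
            x, logp = x_new, logp_new
        samples.append(x)
    return samples

# Example target: standard normal, given only as an unnormalized log-density
samples = metropolis(lambda x: -0.5 * x * x, x0=0.0, n_samples=50000)
```

Note that the sampler only ever evaluates `log_density` pointwise — it never needs the normalizing constant, which is exactly why this works for posteriors known only up to proportionality.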


How does bayadera generate samples from the posterior distribution?


it's not clear from the slideshow i found online


MCMC on the GPU


you'll need to be more specific


MCMC is a broad class of algorithms; there are many


i can find no details on what mcmc algorithm they are using