
Yeah, I’m just not totally sure what’s meant by “make something weird and magical to make it work.” @yashaka could you rephrase that, or say it a different way?


Besides Cognitect, are there any other Clojure consulting/contracting companies, or is no one else starting one because Cognitect has such an impressive group of devs?


I know 8th Light has taken a shine to Clojure; they don't use it exclusively, I think, but they've been bringing it up pretty regularly iirc


This thing has an APL REPL environment


I am looking into machine learning, and I have a problem with a set of inputs and outputs, which is what most examples I found are built on. The thing is, to tell whether the outputs are good I would have to run some calculations on them afterwards. So I don't have data handy that says this output is good for this input; I would have to calculate that myself. Is there a way to give the learning algorithm feedback about its output?


That doesn't sound like the kind of scenario Machine Learning is built for


It's not usually a cyclic process; there's not a direct feedback loop. It's simply a model generated from a sample data set. If you want the model to change, you change the generative data set.


(This is drawing from memory of my ML class several years ago, so may be inaccurate; I think I still have our course textbook on it, though)


ML models are statistical models - if you add a feedback loop to that, you're basically creating an echo chamber, reinforcing the model unnaturally.


The only type of feedback loop that's beneficial requires data from its consumers - for example, a model that categorizes music based on a listener's preferences can receive feedback from the user on whether the song was a good fit or not. It can use that to tune the existing model, because it constitutes a new piece of categorizing data.
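A minimal sketch of that idea (purely illustrative; the class, features, and nearest-centroid model here are all made up for the example): each piece of user feedback becomes a new labeled example, and the model is re-fit on the grown data set. The loop adds data; it doesn't feed the model's own output back into itself.

```python
class PreferenceModel:
    def __init__(self):
        self.examples = []  # (features, liked) pairs collected from listeners

    def add_feedback(self, features, liked):
        # User feedback is just a new labeled data point.
        self.examples.append((features, liked))

    def predict(self, features):
        # Toy nearest-centroid model over liked vs. disliked songs.
        def centroid(label):
            pts = [f for f, l in self.examples if l == label]
            if not pts:
                return None
            return [sum(dim) / len(pts) for dim in zip(*pts)]

        liked_c, disliked_c = centroid(True), centroid(False)
        if liked_c is None or disliked_c is None:
            return True  # not enough data yet; default guess

        def dist(a, b):
            return sum((x - y) ** 2 for x, y in zip(a, b))

        return dist(features, liked_c) <= dist(features, disliked_c)

model = PreferenceModel()
model.add_feedback([0.9, 0.1], True)   # e.g. (tempo, loudness) of a liked song
model.add_feedback([0.2, 0.8], False)  # a disliked song
print(model.predict([0.8, 0.2]))  # closer to the liked centroid -> True
```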


Depending on the type of models you're using, you can verify that the model is accurate by testing it against:
* your input data set (more likely to give biased results, but if it deviates significantly from expected values then the model may not have been built correctly or may not be founded on a correct statistical model of the data set)
* a benchmark data set, independent from your input (less likely to be biased, but may also reveal areas where the input data set didn't have sufficient coverage)


Poking around the book - 'Machine Learning', by Tom M. Mitchell. The recommendation is to split your data set into training and test data sets, especially if you're comparing two models that are designed to predict the same attribute.
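A quick sketch of that recommendation (illustrative only; the function name and 80/20 ratio are my own choices, not from the book): hold out part of the labeled data so the model is scored on examples it never saw during training.

```python
import random

def train_test_split(data, test_ratio=0.2, seed=42):
    # Shuffle a copy so the split isn't biased by the data's original order.
    rng = random.Random(seed)
    shuffled = data[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * (1 - test_ratio))
    return shuffled[:cut], shuffled[cut:]

# Toy (input, label) pairs standing in for a real labeled data set.
examples = [(x, x % 2) for x in range(100)]
train, test = train_test_split(examples)
print(len(train), len(test))  # 80 20
```

Train on `train`, then measure accuracy only on `test`; comparing two models on the same held-out set gives a fairer comparison than scoring them on the data they were fit to.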


Machine Learning and Deep Neural Networks sound like magic to me. For example, is it possible to teach a machine to consume news (as text input) and produce a short summary of it, based on many input/test data sets?


I have already seen many cases where machines could redraw images in a famous artist's style, compose music, play video games, resize images better than the Lanczos algorithm, generate voices, etc, etc.


So, is there a limit to what it can do?


Keep in mind that these all require a sample corpus - input data that generates a statistical model


It can't generate something from nothing; many of these systems are bounded by the information you feed them.


It's one of the lines that separates ML from AI


ML is bounded to the data you give it. Trying to extrapolate beyond that is statistically risky.


AI, in theory, entails the ability to extrapolate and apply a model to unknown problem spaces using mechanics and processes learned in known ones.


Just to be clear, deep learning is a subset of ML, right?


I believe so, yes. Neural networks are a relatively simple model, iirc.


@fellshard Thanks for your detailed explanation


That also explains why my Google-fu seems to have failed me


Unfortunately there was no ML class when I was at university; I'm missing that a bit right now


Yeah. And that ML book I mentioned is... dense reading, to say the least.


Terse, heavy on the statistical notation, etc.


If you're wanting to learn more about practical ML, you'll probably want at least an initial grasp on basic statistics, since that'll help you interpret what ML gives you.


What I want to do is solve a specific problem, and I was curious whether I could approach my idea with some ML, which, it seems, doesn't work.


So in that category, you're probably going to be looking for a Genetic Algorithm, a specialized category of ML


Hm, isn't that what Carin Meier gave a talk about?


Can't say I've seen the talk :S


I haven't dived into this area as deeply, but the basic high-level premise is that you are trying to optimize a 'fitness function' by adjusting aspects of some system's behaviour


I think it uses the same structures as ML models, but has different goals and conclusions


You can think of it as searching a space of hypotheses by choosing mutations from existing models to create new models.


So you're basically cultivating the 'best fit' hypotheses at any given time
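A toy sketch of that loop (illustrative only; the "one-max" fitness function and all the names and parameters here are invented for the example, not from the conversation): score each candidate with a fitness function, keep the fittest, and breed new candidates via crossover and mutation.

```python
import random

def fitness(genome):
    # "One-max" toy problem: fitness is the number of 1-bits.
    return sum(genome)

def mutate(genome, rate=0.05):
    # Flip each bit with a small probability.
    return [1 - g if random.random() < rate else g for g in genome]

def crossover(a, b):
    # Single-point crossover: splice two parent genomes together.
    point = random.randrange(1, len(a))
    return a[:point] + b[point:]

def evolve(pop_size=30, length=20, generations=50):
    population = [[random.randint(0, 1) for _ in range(length)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Keep the fittest half unchanged, refill with mutated offspring.
        population.sort(key=fitness, reverse=True)
        survivors = population[:pop_size // 2]
        children = [mutate(crossover(random.choice(survivors),
                                     random.choice(survivors)))
                    for _ in range(pop_size - len(survivors))]
        population = survivors + children
    return max(population, key=fitness)

best = evolve()
print(fitness(best))
```

The "score the outcome based on a function" part is exactly the `fitness` call: swap in any scoring function (e.g. a bot's win rate) and the same cultivate-mutate-select loop applies.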


Hm, I see


And you can score the outcome based on a function


which is what I basically want


Yep. Very common strategy for these types of bot battles