#data-science
2018-12-05
henrygarner 11:12:40

@mattford I understand now. It sounds like a bad idea to me, but it’s taken me a while to figure out why. (Is it because you’re taking model outputs and treating them as inputs? Not really. Is it because your model is introducing positive feedback loops that don’t have analogues in the domain being modelled? Maybe.) So I broke out a REPL and had a look at the effect on an analogous situation (I hope it’s analogous, anyway): coin flips. Let’s say we’ve observed 3 flips landing 2 heads and 1 tail. What is the likely distribution of heads after 20 flips of the same coin? (The reason I hope this is analogous is that we could construct a confidence interval around our expectation of the total of heads after the 20th flip. And yet it’s a much simpler model than SEND.) I ran my simulations for 1M iterations.
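
A minimal sketch of the fixed-p simulation in plain Clojure (illustrative, not the exact code behind the numbers here):

```clojure
;; Count heads in n Bernoulli flips with a fixed probability p.
(defn flips-fixed-p [n p]
  (count (filter #(< % p) (repeatedly n rand))))

;; 2 heads from 3 observed flips gives the point estimate p = 2/3;
;; the head counts over many runs of 20 flips follow Binomial(20, 2/3).
(frequencies (repeatedly 1000000 #(flips-fixed-p 20 2/3)))
```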

henrygarner 11:12:25

If we start by using a point estimate of the probability and keep p fixed for 20 draws in a Bernoulli trial, we end up with a binomial distribution of heads (as we would expect: it’s the very definition of the binomial!). But after some playing around I’ve discovered something which wasn’t intuitively obvious to me. If instead we update the point estimate after each of the draws (your proposal as I understand it above), then we end up with a beta-binomial distribution of heads. There’s probably a blog post in explaining why this is the case. But the key point is that this is a probabilistic way of arriving at something we can already do analytically just by using the beta-binomial distribution in the first place. I’m not familiar with the latest SEND model, but this is certainly what it used to do.
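
The updating variant, as a sketch (re-estimating heads/(heads+tails) from the running totals before every flip turns the process into a Pólya urn, which is one way to see why the beta-binomial falls out):

```clojure
;; Recompute the point estimate from the running totals after every flip,
;; starting from the observed 2 heads and 1 tail.
(defn flips-updating-point [n heads tails]
  (loop [i 0, h heads, t tails, new-heads 0]
    (if (= i n)
      new-heads
      (let [head? (< (rand) (/ h (+ h t)))]
        (recur (inc i)
               (if head? (inc h) h)
               (if head? t (inc t))
               (if head? (inc new-heads) new-heads))))))

;; The head counts now follow BetaBinomial(20, 2, 1) rather than a binomial:
(frequencies (repeatedly 1000000 #(flips-updating-point 20 2 1)))
```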

henrygarner 11:12:08

So the million-dollar question: what happens if we update the params of the beta-Bernoulli (the beta analogue of the Bernoulli we used above) after each trial in the same way? In other words, how is the distribution of heads after 20 draws from the beta-Bernoulli different if we update the params in between each draw? Answer: it’s identical. There’s absolutely no effect on the distribution of heads, and there’s probably a blog post in explaining why this is the case too.
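
A sketch of that comparison (assuming integer beta parameters, so the beta can be sampled via order statistics of uniforms):

```clojure
;; Sample Beta(a, b) for positive integers a and b: the a-th smallest of
;; (a + b - 1) independent uniforms is Beta(a, b) distributed.
(defn sample-beta [a b]
  (nth (sort (repeatedly (+ a b -1) rand)) (dec a)))

;; Variant 1: draw p from Beta(a, b) once and keep it fixed for all n flips.
(defn flips-fixed-beta [n a b]
  (let [p (sample-beta a b)]
    (count (filter #(< % p) (repeatedly n rand)))))

;; Variant 2: conjugately update (a, b) and re-draw p before every flip.
(defn flips-updating-beta [n a b]
  (loop [i 0, a a, b b, heads 0]
    (if (= i n)
      heads
      (let [head? (< (rand) (sample-beta a b))]
        (recur (inc i)
               (if head? (inc a) a)
               (if head? b (inc b))
               (if head? (inc heads) heads))))))

;; Both histograms converge on the same BetaBinomial(20, 2, 1):
(frequencies (repeatedly 1000000 #(flips-fixed-beta 20 2 1)))
(frequencies (repeatedly 1000000 #(flips-updating-beta 20 2 1)))
```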

henrygarner 11:12:31

My takeaway is this: using the beta distribution to estimate the unobserved p already encodes the amount of uncertainty. Each time we sample from this distribution we add no new information. In the end I think that’s the reason the suggestion made me uneasy. On top of that, it seems that (if the analogy holds and 20 coin flips is a suitable proxy for the SEND model) it wouldn’t actually do anything to affect the confidence intervals after all.

henrygarner 11:12:56

The way the model was encoded, the confidence intervals wouldn’t fan out indefinitely for the same reason you’re very unlikely to flip a coin 20 times and get 20 heads: extreme events tend not to occur consecutively. This is regression towards the mean. You would almost certainly find the confidence intervals fan out wider if you were to look at the 99.9% CI (assuming enough simulations were run to catch those ‘black swan’ chains of very high or low estimates).
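
Reading those wider tails straight off the simulation output is easy enough (an illustrative helper, reusing the flips-updating-point sketch from above):

```clojure
;; Approximate central interval at the given coverage from simulated counts.
(defn central-interval [coverage xs]
  (let [sorted (vec (sort xs))
        n      (count sorted)
        tail   (/ (- 1.0 coverage) 2)]
    [(nth sorted (long (* tail n)))
     (nth sorted (min (dec n) (long (* (- 1.0 tail) n))))]))

;; Compare the 95% interval with the 99.9% interval:
(let [sims (repeatedly 1000000 #(flips-updating-point 20 2 1))]
  [(central-interval 0.95 sims)
   (central-interval 0.999 sims)])
```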

henrygarner 11:12:21

Additionally, if the model still seems too conservative even then, perhaps there are other aspects of the domain that aren’t being modelled? (This is certainly the case!) For example, external influences or periodic shocks that would introduce additional noise or systemic fluctuations. It would be impossible to enumerate all of these, but it might be valuable to include some sort of noise term to stand in for them (assuming it’s not practical to model them more precisely).
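
A crude version of such a noise term, purely hypothetical (jittering p by uniform noise before each flip and clamping to [0, 1]):

```clojure
;; A stand-in for unmodelled shocks: perturb p by uniform noise of
;; half-width eps before each flip, clamped to the unit interval.
(defn flips-noisy-p [n p eps]
  (count (filter (fn [_]
                   (let [p' (-> (+ p (* eps (dec (* 2 (rand)))))
                                (max 0.0)
                                (min 1.0))]
                     (< (rand) p')))
                 (range n))))

;; e.g. (frequencies (repeatedly 1000000 #(flips-noisy-p 20 2/3 0.1)))
```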

bherrmann 11:12:00

any recommendations for a clojure notebook which can run locally?