Binomial testing with buttered toast

Rasmus' post of last week on binomial testing made me think about p-values and testing again. In my head I was tossing coins, thinking about gender diversity and toast. Tossing a buttered toast in particular turned out to be the most helpful thought experiment, as I had no fixed opinion on the probabilities of a toast landing on either side. I have yet to carry out real experiments though.

Suppose I tossed 6 buttered toasts and observed that all but one landed on their buttered side, i.e. only one toast landed on the unbuttered side.

Now I have two questions:
  • Would it be reasonable to assume that there is a 50/50 chance for a toast to land on either side?
  • If not, which probability should I assume?
If I believe the toast is fair, then the probability for landing on the buttered (B) or unbuttered (U) side is 50%.

The probability of observing one or more U (right tail event) is:
(gt <- 1-1/2^6)
# [1] 0.984375
and the probability of observing one or fewer U (left tail event) is:
(lt <- 1/2^6*choose(6,1) + 1/2^6)
# [1] 0.109375
while the probability of either extreme event, one or fewer U (or equally one or fewer B), is:
2*min(c(gt, lt))
# [1] 0.21875
In summary, if the toast has an equal probability of falling on either side, then there is a 22% chance of observing one or fewer U (or B) in 6 tosses. That's not that unlikely, and hence I would not dismiss the hypothesis that the toast is fair, despite the sample frequency being only 1/6.

Actually, the probabilities I calculated above are exactly the p-values I get from the classical binomial tests:
## Right tail event: p-value = 0.9844
binom.test(1, 6, alternative="greater")
## Left tail event: p-value = 0.1094
binom.test(1, 6, alternative="less")
## Double tail event: p-value = 0.2188
binom.test(1, 6, alternative="two.sided")
Additionally, I can read from the test output that my assumed probability of 50% is at the higher end of the 95 percent confidence interval. Thus, wouldn't it make sense to update my belief about the toast in light of my observations? In particular, I am not convinced that a 50/50 probability is a good assumption to start with. Arguably, the toast is biased by the butter.
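The interval itself can be read off the test object; for the two-sided test the values are roughly:
## 95% confidence interval of the default two-sided test
binom.test(1, 6)$conf.int
# [1] 0.0042 0.6412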

Here the concept of a conjugate prior comes in handy again. The idea is to treat the parameter \(\theta\) of the binomial distribution as a random variable itself. Suppose I have no prior knowledge about the true probability of the toast falling on either side; then a flat prior, such as the uniform distribution, would be reasonable. Conveniently, the Beta distribution with parameters \(\alpha=1\) and \(\beta=1\) is exactly this uniform distribution, and the Beta distribution is the conjugate prior to the binomial likelihood. That means there is an analytical solution: the posterior distribution of \(\theta\) is again a Beta distribution, with hyperparameters:
\[\alpha':=\alpha + \sum_{i=1}^n x_i,\; \beta':=\beta + n - \sum_{i=1}^n x_i,\]
and the posterior predictive probability of success in a single trial is given as
\[\frac{\alpha'}{\alpha' + \beta'}\]
so in my case:
alpha <- 1; beta <- 1  # flat Beta(1, 1) prior
n <- 6; success <- 1   # 6 tosses, 1 landing on the unbuttered side
alpha1 <- alpha + success
beta1 <- beta + n - success
(theta <- alpha1 / (alpha1 + beta1))
# [1] 0.25
My updated belief about the toast landing on the unbuttered side is a probability of 25%. That's lower than my prior of 50%, but still higher than the sample frequency of 1/6. If I had more toasts, I could run further experiments and update my posterior predictive accordingly.
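For illustration, here is a sketch of such an update with a made-up follow-up experiment, where the posterior from above serves as the new prior:
## Hypothetical second experiment: 10 more tosses, 2 landing unbuttered
alpha2 <- alpha1 + 2
beta2 <- beta1 + 10 - 2
(theta2 <- alpha2 / (alpha2 + beta2))
# [1] 0.2222222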

I get the same answer from Rasmus' bayes.binom.test function:
> library(BayesianFirstAid)
> bayes.binom.test(1, 6)

 Bayesian first aid binomial test

data: 1 and 6
number of successes = 1, number of trials = 6
Estimated relative frequency of success:
  0.25 
95% credible interval:
  0.014 0.527 
The relative frequency of success is more than 0.5 by a probability of 0.061 
and less than 0.5 by a probability of 0.939 
Of course, I could change my view on the prior and come to a different conclusion. Following the Wikipedia article on buttered toast, I could believe that the chance of the toast landing on the buttered side is 62%, i.e. 38% for the unbuttered side. I further have to express my uncertainty, say a standard deviation of 10%, that is a variance of 0.01. With that information I can update my belief about the toast landing on the unbuttered side, following my observations (and converting mean and variance into the Beta parameters):
x <- 0.38  # prior mean: chance of landing on the unbuttered side
v <- 0.01  # prior variance (10% standard deviation)
## Method of moments: Beta parameters from mean and variance
alpha <- x*(x*(1-x)/v - 1)
beta <- (1-x)*(x*(1-x)/v - 1)
alpha1 <- alpha + success
beta1 <- beta + n - success
(theta <- alpha1 / (alpha1 + beta1))
# [1] 0.3351821
I would conclude that for my toasts / tossing technique the probability of landing on the unbuttered side is 34%.

In summary, although there is insufficient evidence to reject the hypothesis that the 'true' probability is 50% (at the typical 5% level), I would work with 34% until I have more data. Toast and butter please!

Session Info

R version 3.0.1 (2013-05-16)
Platform: x86_64-apple-darwin10.8.0 (64-bit)

locale:
[1] en_GB.UTF-8/en_GB.UTF-8/en_GB.UTF-8/C/en_GB.UTF-8/en_GB.UTF-8

attached base packages:
[1] stats     graphics  grDevices utils     datasets  methods  
[7] base     

other attached packages:
[1] BayesianFirstAid_0.1 rjags_3-12           coda_0.16-1         
[4] lattice_0.20-24     

loaded via a namespace (and not attached):
[1] grid_3.0.1  tools_3.0.1


Fun with the Raspberry Pi

Since Christmas I have been playing around with a Raspberry Pi. It is certainly not the fastest computer, but what a great little toy! Here are a few experiences and online resources that I found helpful.

Setup

Initially I connected the Raspberry Pi via HDMI to a TV, together with a keyboard, a mouse and an old USB WiFi adapter. Everything worked out of the box and I could install Raspbian and set up the network.

HDMI to VGA

Using an old VGA computer monitor via an adapter required changes to the file /boot/config.txt. You can find the parameters that match your monitor on Raspberry Pi StackExchange. In my case I had to set:
hdmi_group=2
hdmi_mode=35 # 1280x1024 @ 60Hz

Remote Access

But who needs a monitor when you can access the Pi remotely anyway? The command ifconfig tells me the local IP address of the Raspberry Pi.

With XQuartz running on my Mac I can connect to the Raspberry Pi via ssh with the X session forwarded:
ssh -X pi@your.ip.address.here
However, the performance is a bit sluggish and online comments suggest using VNC instead. Nothing could be easier: install a VNC server on the Pi and use Screen Sharing on the Mac to access it. Mitch Malone has a great post on this subject. Following the VNC setup on the Raspberry Pi I can type:
vnc://pi@your.ip.address.here:5901
into Safari and that will bring up the Screen Sharing app; see the screenshot below.
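For reference, the server-side setup is roughly as follows (assuming the tightvncserver package, as in Mitch Malone's post; display :1 listens on port 5901):
sudo apt-get install tightvncserver
tightvncserver :1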

AirPrint and AirPlay

Ok, let's give the Pi something to do: Rohan Kapoor explains how to set up the Raspberry Pi as a print server with AirPrint.

How about AirPlay as well? Follow Thorin Klosowski's steps on Lifehacker and you can stream music from your iOS devices to the Raspberry Pi's audio out.

Mathematica and R



Just before Christmas Stephen Wolfram announced that Mathematica would be made freely available for the Raspberry Pi. I had used Mathematica a little at university and was curious to see it on the Pi. Alex Newman posted the installation instructions on the Wolfram community site. It is as simple as:
sudo apt-get update && sudo apt-get install wolfram-engine
As a little toy example I ran Paul Nylander's Mathematica code for a Feigenbaum diagram:

Surprisingly, it took about 3.5 minutes to run. Curious to find out if my old R routine would run faster I installed R on my Pi:
sudo apt-get install r-base
and adapted the code to match the parameters of the Mathematica routine (well, or so I think).
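The original gist is not reproduced here, but the idea is to iterate the logistic map x → r·x·(1−x); a minimal sketch, with my own guesses for the parameters:
## Feigenbaum (bifurcation) diagram of the logistic map - a rough
## reconstruction, not the original gist
n <- 500
r <- seq(2.5, 4, length.out = n)       # growth rates
x <- rep(0.25, n)                      # initial population
for (i in 1:600) x <- r * x * (1 - x)  # let transients die out
pts <- NULL
for (i in 1:200) {                     # collect points on the attractor
  x <- r * x * (1 - x)
  pts <- rbind(pts, cbind(r, x))
}
plot(pts, pch = ".", xlab = "r", ylab = "x")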

The R code finished after about 10 seconds. Surely, there must be ways to speed up the Mathematica code that I have to investigate.


How many more R-bloggers posts can I expect?

I noticed that the monthly number of posts on R-bloggers stopped increasing over the last year. Indeed, the last couple of months saw a decline in posts compared to the previous year. Thus, has most been said and written about R already?


Who knows? Well, I took a stab at looking into the future. However, I can tell you already that I am not convinced by my predictions. But maybe someone else will be inspired to take this work forward.

First, I have to get the data - that's easy, I can scrape the monthly post counts from the R-bloggers homepage.
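A sketch of that step with the XML package; the XPath below assumes the standard WordPress archive dropdown on R-bloggers, with entries like "January 2014 (250)", and may need adjusting to the current page layout:
library(XML)
doc <- htmlParse("http://www.r-bloggers.com/")
archive <- xpathSApply(doc, "//select[@name='archive-dropdown']/option",
                       xmlValue)
## Drop the "Select Month" placeholder and pull out the counts
counts <- as.numeric(gsub(".*\\((\\d+)\\).*", "\\1", archive[-1]))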


Looking at the incremental and cumulative plots, and believing that eventually the number of R posts will decrease, I thought that a logistic growth function would provide a nice fit to the data and also give an asymptotic view of the total number of posts on R-bloggers.
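A sketch of such a fit with nls, assuming a data frame posts with a month index t (1, 2, 3, ...) and cumulative post counts total; the starting values are rough guesses:
## Logistic growth: total(t) = K / (1 + exp(-r * (t - t0)))
fit <- nls(total ~ K / (1 + exp(-r * (t - t0))), data = posts,
           start = list(K = 15000, r = 0.1, t0 = 48))
coef(fit)[["K"]]  # asymptotic total number of posts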

Although the fit looks reasonable at first glance, I don't believe it provides a sensible prediction of the future. The model would forecast only another 1,269 posts by the end of 2016, with not much more to expect after that. Indeed, the asymptotic total number of posts K is only 14,396. I don't believe this can be right, not even as a proxy, when the current count of monthly posts is well above 100.



I played around with the data and the logistic growth function a little further, using annual instead of monthly data, changing the time horizon and fixing K, yet without much success.

Eventually I recalled a talk by Rob Hyndman about his forecast package. After all, I have a time series here. So, applying the forecast function to the incremental data provides a somewhat more realistic prediction of 2,695 posts over the next 12 months, but with an increasing trend in monthly posts for 2014, which I find hard to believe given the observations of the last year.
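A sketch of that step, assuming the monthly counts are held in a ts object rblogs, e.g. ts(counts, start = c(2005, 9), frequency = 12):
library(forecast)
fc <- forecast(rblogs, h = 12)  # exponential smoothing model by default
plot(fc)
sum(fc$mean)  # predicted number of posts over the next 12 months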



Well, I presented two models here: One predicts a rapid decline in monthly posts on R-bloggers, while the other forecasts an increase. Neither feels right to me. Of course time will tell, but have you got any ideas or views?

Session Info

R version 3.0.2 (2013-09-25)
Platform: x86_64-apple-darwin10.8.0 (64-bit)

locale:
[1] en_GB.UTF-8/en_GB.UTF-8/en_GB.UTF-8/C/en_GB.UTF-8/en_GB.UTF-8

attached base packages:
[1] stats  graphics  grDevices  utils  datasets  methods   base     

other attached packages:
[1] forecast_4.8 xts_0.9-7    zoo_1.7-10   XML_3.95-0.2

loaded via a namespace (and not attached):
 [1] colorspace_1.2-4        fracdiff_1.4-2         
 [3] grid_3.0.2              lattice_0.20-23        
 [5] nnet_7.3-7              parallel_3.0.2         
 [7] quadprog_1.5-5          Rcpp_0.10.6            
 [9] RcppArmadillo_0.3.920.1 tools_3.0.2            
[11] tseries_0.10-32


Whale charts - Visualising customer profitability

The Christmas and New Year's break is over, yet there is still time to return unwanted presents. "Return to Santa" was the title of an article in the Economist that highlighted the impact of returns on online retailers, as return rates can be alarmingly high.

The article quotes a study by Christian Schulze of the Frankfurt School of Finance and Management, which analyses the return habits of customers who bought at least five items over a five-year period from a large European online retailer. Although only a few figures are cited, I will attempt to create a little model that replicates the customer behaviour and visualises the impact on overall profitability.

The study found that 5% of customers sent back more than 80% of the items they had bought, and that 1% of customers sent back at least 90% of their purchases. In other words, 95% of customers sent back less than 80% and 99% of customers sent back less than 90% of their items. To model this behaviour an S-shaped curve seems appropriate, such as the logistic curve, as no-one can return more than they bought or less than nothing. With location and scale parameters m and s, the logistic function can be fitted to the data; see the R code below.
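In fact, the two quantile conditions already pin down m and s, here expressed via the standard logistic quantile function:
## F(0.80) = 0.95 and F(0.90) = 0.99 determine location m and scale s
z1 <- qlogis(0.95)
z2 <- qlogis(0.99)
s <- (0.90 - 0.80) / (z2 - z1)
m <- 0.80 - z1 * s
round(c(m = m, s = s), 4)
#      m      s
# 0.6216 0.0606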


The return rates do look quite high. However, if the products were shoes rather than books then I find them believable.

Additionally, the article cites studies suggesting that handling each returned item costs online sellers between $6 and $18, not to mention losses from items that are returned in unsaleable condition. Furthermore, without the cost of returns, online retailers' profits would be almost 50% higher.

Thus, to spin my toy model further, I assume 100 customers with revenues following an exponential distribution (λ=1/250), the cost ratio of sold goods to be lognormal (μ=-0.1, σ=0.1) and the cost of returns to follow a normal distribution with mean of $12 and standard deviation of $6.

In my simulation I could have made a profit of $1,979 instead of $1,441. Clearly the customers who return many items cause a real dent to my bottom line.

This situation is best visualised in what is often called a whale chart. Here I plot the cumulative profit against customers, with the most profitable customer on the left and the least profitable on the right. The chart shows how much profit the first x customers generated. Often this graph looks like a whale coming out of the water - hence its name.


In my little toy simulation I note that the 20 most profitable customers would have generated more profit than the overall profit across all customers. Indeed, profitability could have been 37% higher if it wasn't for the loss-making customers.

So, what shall I do? Manage my customers: know whom I should reward and keep, and whose loss wouldn't hurt at all. More customers are not the answer. I need more customers who return less.

R Code
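The original code is embedded as a gist; below is a self-contained sketch of the toy model under the assumptions stated above. The random seed, the way the handling cost enters and the bounding of the return rates are my own simplifications, so the exact dollar amounts will differ from those quoted:
## Simulate 100 customers
set.seed(42)
n <- 100
revenue <- rexp(n, rate = 1/250)        # revenue per customer
cogs <- revenue * rlnorm(n, -0.1, 0.1)  # cost of goods sold
## Return rates from the fitted logistic distribution (m, s from above),
## bounded between 0 and 1
retRate <- pmin(pmax(rlogis(n, location = 0.6216, scale = 0.0606), 0), 1)
## Cost of returns: one handling cost of N($12, $6) per customer, scaled
## by the return rate - a simplification; the original gist may differ
retCost <- retRate * pmax(rnorm(n, mean = 12, sd = 6), 0)
profit <- revenue - cogs - retCost
## Whale chart: cumulative profit with customers sorted from most to
## least profitable
cumProfit <- cumsum(sort(profit, decreasing = TRUE))
plot(cumProfit, type = "l", lwd = 2,
     xlab = "Customers, most profitable first",
     ylab = "Cumulative profit ($)")
abline(h = sum(profit), lty = 2)  # overall profit across all customers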
