Generating and visualising multivariate random numbers in R
This post will present the wonderful pairs.panels function of the psych package [1], which I discovered recently, to visualise multivariate random numbers. Here is a little example with a Gaussian copula and normal and log-normal marginal distributions. I use pairs.panels to illustrate the steps along the way. I start with standardised multivariate normal random numbers:
library(psych)
library(MASS)
Sig <- matrix(c(1, -0.7, -0.5,
                -0.7, 1, 0.6,
                -0.5, 0.6, 1),
              nrow = 3)
X <- mvrnorm(1000, mu = rep(0, 3), Sigma = Sig,
             empirical = TRUE)
pairs.panels(X)
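The code above only covers this first step. A minimal sketch of how the remaining copula steps might look, assuming normal marginals for the first two columns and a log-normal marginal for the third (the parameter values are my own illustrative choices, not taken from the original post):
# Transform the correlated normals into uniform marginals (the Gaussian copula)
U <- pnorm(X)
pairs.panels(U)
# Apply the target marginal distributions via their inverse CDFs;
# the parameters here are illustrative only
Y <- cbind(qnorm(U[, 1], mean = 0, sd = 1),
           qnorm(U[, 2], mean = 5, sd = 2),
           qlnorm(U[, 3], meanlog = 0, sdlog = 0.5))
pairs.panels(Y)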
Who will win the World Cup and which prediction model?
The World Cup finally kicked off last Thursday and I have seen some fantastic games already. The Netherlands are perhaps the strongest side so far, following their 5-1 victory over Spain. To me the question is not only which country will win the World Cup, but also which prediction model will come closest to the actual results. Here I present three forecasters: FiveThirtyEight, a polling aggregation website; Groll & Schauberger, two academics from Munich; and finally Lloyd's of London, the insurance market.
The guys around Nate Silver at FiveThirtyEight have used ESPN's Soccer Power Index to predict the outcomes of games and continue to update their model. Brazil is their clear favourite.
Andreas Groll & Gunther Schauberger from LMU Munich developed a model which, like the approach from FiveThirtyEight, aims to estimate the probability of a team winning the World Cup. But unlike FiveThirtyEight, they see Germany taking the trophy home.
Lloyd's chose a different approach for predicting the World Cup final. The insurance market looked at the risk aspect and ranked the teams by their insured value. Arguably, the better a team, the higher its insured value. As a result, Lloyd's predicts Germany to win the World Cup.
Quick reminder: what's the difference between insurance and gambling? Gambling introduces risk where none exists. Insurance mitigates risk where it exists.
The joy of joining data.tables
The example I present here is a little silly, yet it illustrates how to join tables with data.table in R.
Mapping old data to new data
Categories in general are never fixed; they always change at some point. And then the trouble starts with the data. For example, not that long ago we didn't distinguish between smartphones and dumbphones, or between video on demand and video rental shops. I would like to backtrack price change data for smartphones and online movie rentals, assuming that their earlier development can be set to the price changes of the categories they were formerly part of, namely mobile phones and video rental shops, in order to create indices.
Here is my toy data:
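As a purely hypothetical stand-in (not the original numbers), toy data with the structure described above, annual price changes in per cent per product and year, might look like this:
library(data.table)
# Hypothetical toy data: the old categories run from 2008,
# the new categories only exist from 2012 onwards
prices <- data.table(
  product = c(rep("mobile phones", 6), rep("smartphones", 2),
              rep("video rental shops", 6), rep("online movie rentals", 2)),
  year = c(2008:2013, 2012:2013, 2008:2013, 2012:2013),
  price.change = c(-2, -3, -1, -2, -4, -3, -6, -5,
                   1, 2, 3, 2, 4, 5, -1, -2)
)
# Mapping of each new product to the old product it was formerly part of
mapping <- data.table(
  new.product = c("smartphones", "online movie rentals"),
  old.product = c("mobile phones", "video rental shops")
)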
I'd like to create price indices for all products and, where data for the new product categories is missing, use the price changes of the old product category.
The data.table package helps here. I start with my original data, convert it into a data.table and create mapping tables. That allows me to add the old product with its price change to the new product. The trick here is to set certain columns as key; two data tables with the same key can easily be joined. Once I have the price changes for products and years, the price index can be added. The plot illustrates the result; the R code is below.
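As a sketch of the join described, continuing from the hypothetical toy data above (this is my own minimal version, not the original code):
# Key both tables so they can be joined on the product columns
setkey(prices, product)
setkey(mapping, old.product)
# Pull each old product's price changes across to its new product
old.history <- prices[mapping]          # join product = old.product
old.history[, product := new.product]   # relabel as the new product
old.history[, new.product := NULL]
# Add the earlier years to the new products where they have no data yet
setkey(prices, product, year)
setkey(old.history, product, year)
prices <- rbind(prices, old.history[!prices])  # anti-join keeps the missing years only
# Turn the annual price changes into an index per product
setkey(prices, product, year)
prices[, index := 100 * cumprod(1 + price.change / 100), by = product]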