Robust regression
For training purposes, I was looking for a way to illustrate some of the differing properties of two robust estimation methods for linear regression models. The two methods I'm looking at are:
- least trimmed squares, implemented as the default option in `lqs()`
- a Huber M-estimator, implemented as the default option in `rlm()`

Both functions are in Venables and Ripley's `MASS` R package, which comes with the standard distribution of R.
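As a quick orientation to the two functions (not yet using the income data), here's a minimal sketch of calling each with its default settings, using the built-in `stackloss` dataset that the MASS help pages themselves use for examples:

```r
library(MASS)

# least trimmed squares: the default method of lqs()
mod_lts <- lqs(stack.loss ~ ., data = stackloss)

# Huber M-estimator: the default psi function of rlm()
mod_huber <- rlm(stack.loss ~ ., data = stackloss)

coef(mod_lts)
coef(mod_huber)
```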
These methods are alternatives to ordinary least squares that can provide estimates with superior qualities when the classical assumptions of linear regression aren't met. M-estimators are effective in dealing with various breakdowns of the classical linear model assumptions, while still giving excellent results if the assumptions happen to hold after all. Least trimmed squares is very resistant to outliers, at a cost in efficiency. Unfortunately, neither of these (nor any other) robust estimation method is best in all situations, although in my own, non-systematic observation they out-perform ordinary least squares in most real-life situations.
New Zealand Income Survey 2011
To explore this I use the simulated unit record file from the New Zealand Income Survey 2011 that I’ve used in a lot of blog posts in the past year. My first post on this dataset sets out how to import this data and tidy it up into a database. The data shows weekly individual income from all sources for New Zealanders aged over 15, as well as hours worked and a range of demographic information I won’t be using today. For looking at robust regression estimation methods, I’m going to pretend that I only ever have a sample of thirty observations from this dataset.
To illustrate the data, here's an animation showing the full dataset (in grey) and repeated samples of thirty data points, along with lines representing linear models fit to those small samples with ordinary least squares (`lm`) and the two robust regression methods I'm investigating today.
Some characteristics of this data that make it a useful illustration for robust regression include:
- It’s reasonable to postulate the underlying relationship between hours worked and income as linear for much of the population. So a linear model on the original scale is likely to be appropriate.
- However, the population seems likely to be made up of a number of diverse groups, making assumptions of homogeneity implausible. For example, a large number of people have incomes, some of them quite substantial, despite working zero hours. General economic and social knowledge suggests that divisions into wage-earners, profit-receivers, and social security receivers probably constitute three quite different populations, while still being a significant simplification of the actual heterogeneity.
- On the original scale the data are expected to show heteroscedasticity (in this case, higher variance for higher expected income), making classical inference invalid.
- There are a number of outliers on both the x and y axes, and a cluster of individuals with negative income that appear to form yet another group in the mixture.
Comparing robust methods
To reflect one of the key features of the data, the unusual distribution of income when working hours are zero, the models fit in this post all have the structure:
income ~ hours + (hours == 0)
The graphics show just the relationship between income and hours when a non-zero number of hours is worked. In effect (just for the graphics), this is like fitting the model after removing all the individuals who worked zero hours in the surveyed week.
The animation shown above was the germ of the idea for this post and already illustrates the key points. In particular, it shows how the randomness associated with a small sample of thirty points leads to different results for different samples; and once you have a sample, different estimation methods can give you materially different results. In some frames of the animation, we see a sample of thirty where the slopes from the different estimation methods point in different directions, suggesting that if that particular small sample were the only one available, one might conclude that working more hours leads to decreased income.
The animation is nice for reminding us of the randomness associated with small samples, but it isn't actually a particularly effective way of seeing how the different robust estimation methods are doing. Here's a better way: a plot of the actual regression lines fit to 10,000 different samples of thirty. Each line is semi-transparent, to give a better visual impression of where the bulk of the fitted lines end up:
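Here's a rough sketch of how that sort of simulation and plot could be put together. It assumes the full simulated dataset sits in a data frame called `nzis` with columns `income` and `hours` (hypothetical names; the real code linked at the end of the post pulls the data from a database), and uses fewer repetitions than the 10,000 behind the actual graphic:

```r
library(MASS)
library(ggplot2)

set.seed(123)
n_reps <- 1000  # the actual graphic uses 10,000

# fit the three models to one random sample of thirty and return the fitted lines
fit_one <- function() {
  # re-draw until the sample has at least one zero-hours person, so the
  # (hours == 0) term isn't degenerate (a simplification for this sketch)
  repeat {
    samp <- nzis[sample(nrow(nzis), 30), ]
    if (any(samp$hours == 0) && any(samp$hours > 0)) break
  }
  mods <- list(
    lm  = lm(income ~ hours + (hours == 0), data = samp),
    rlm = rlm(income ~ hours + (hours == 0), data = samp),
    lqs = lqs(income ~ hours + (hours == 0), data = samp)
  )
  data.frame(
    method    = names(mods),
    intercept = sapply(mods, function(m) coef(m)[["(Intercept)"]]),
    slope     = sapply(mods, function(m) coef(m)[["hours"]])
  )
}

results <- do.call(rbind, replicate(n_reps, fit_one(), simplify = FALSE))

# semi-transparent lines show where the bulk of the fitted lines end up;
# the axis limits are illustrative only
ggplot(results) +
  geom_abline(aes(intercept = intercept, slope = slope), alpha = 0.02) +
  facet_wrap(~method) +
  xlim(0, 80) +
  ylim(-500, 4000)
```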
This makes it graphically clear that the `lqs` method (least trimmed squares) gives a noticeably wider range of results than does the Huber M-estimator. This is an expected result; least trimmed squares is highly resistant to outliers, but this comes at the cost of efficiency (effectively, a significant part of the data has very little impact on the slope, so the information contained in the data is not used to the full).
Here's another way of looking at that: density plots of the slopes from the three estimation methods (the two robust methods plus ordinary least squares fitted with `lm`). I am using the slope because it's the most likely parameter of interest; it represents the marginal increase in income from an extra hour of work.
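A density plot of that sort can be drawn directly from the simulated slopes; here's a minimal sketch, reusing the `results` data frame from the simulation sketch above:

```r
library(ggplot2)

# density of the estimated `hours` slope, by estimation method
ggplot(results, aes(x = slope, colour = method)) +
  geom_density() +
  labs(x = "Estimated marginal weekly income from an extra hour of work")
```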
The density plot shows that the estimates from `lqs` tend to be lower than those from either `rlm` or `lm`. This is to be expected, for a reason related to why a trimmed mean or median income is usually less than mean income: an estimation method that knocks the most extreme points out of the data (as least trimmed squares does) will give systematically different results when samples are drawn from skewed distributions.
Here are the mean, median and standard deviation of the slopes estimated by the three methods for the 10,000 samples of thirty:
We see that `lm` and `rlm` give similar results on average, but `rlm` is much more stable, with a noticeably lower variance in results (shown by the standard deviation). `lqs`, on the other hand, is actually quite unstable for this small sample size.
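A summary table of that sort can be produced along these lines, again assuming the `results` data frame from the simulation sketch earlier in the post:

```r
library(dplyr)

# mean, median and standard deviation of the estimated `hours` slope by method
results %>%
  group_by(method) %>%
  summarise(
    mean_slope   = mean(slope),
    median_slope = median(slope),
    sd_slope     = sd(slope)
  )
```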
Preliminary thoughts on robust regression estimators
This post uses Venables and Ripley's `MASS` package and the accompanying book Modern Applied Statistics with S. Their own discussion of the issue implies (to my reading) a preference for the MM-estimator proposed by Yohai, Stahel and Zamar in 1991, which retains the outlier-resistance of the S-estimator methods related to least trimmed squares while gaining the greater efficiency of M-estimators. MM-estimators are available via `rlm()` with the `method = "MM"` argument.
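Here's a minimal sketch of that call, where `samp` stands in for one of the hypothetical samples of thirty (with `income` and `hours` columns) used above:

```r
library(MASS)

# MM-estimation: an outlier-resistant S-estimator provides the starting values,
# followed by an M-estimation step for better efficiency
mod_mm <- rlm(income ~ hours + (hours == 0), data = samp, method = "MM")
summary(mod_mm)
```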
Another book that has been influential for me, and which I thoroughly recommend, is Wilcox's Modern Statistics for the Social and Behavioral Sciences. His comments on the matter are so pertinent that I am going to quote two paragraphs at length:
Comments on Choosing a Regression Estimator (Wilcox)
“Many regression estimators have been proposed that can have substantial advantages over the ordinary least squares estimator. But no single estimator dominates and there is the practical issue that the choice of estimator can make a practical difference in the conclusions reached. One point worth stressing is that even when an estimator has a high breakdown point, only two outliers, properly placed, can have a large impact on some of the robust estimates of the slopes…. Estimators that seem to be particularly good at avoiding this problem are the Least Trimmed Squares, Least Trimmed Absolute Value and OP regression estimators. Among these three estimators, the OP estimator seems to have an advantage in terms of achieving a relatively low standard error and so has the potential of relatively high power. A negative feature of the OP estimator is that with large sample sizes, execution time might be an issue…
“In terms of achieving a relatively small standard error when dealing with heteroscedasticity, certain robust regression estimators can offer a distinct advantage compared to the least squares estimator…. Ones that perform well include Theil-Sen, the OP regression estimator and the TSTS estimator. The MM-estimator does not perform well for the situations considered (ie heteroscedasticity)… but perhaps there are situations where it offers a practical advantage.”
It should be noted that earlier in the chapter Wilcox pointed out that the MM-estimator “enjoys excellent theoretical properties … but its standard error can be relatively large when there is heteroscedasticity.”
The "OP" estimator is a method of outlier detection and removal followed by Theil-Sen estimation, which takes the median of the slopes between all pairs of sample points. A topic for a future post.
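As a preview, the Theil-Sen idea for a single explanatory variable is simple enough to sketch in a few lines. This is just the median-of-pairwise-slopes step with made-up data, not the full OP estimator with its outlier-removal stage:

```r
# Theil-Sen slope for a single explanatory variable: the median of the slopes
# of the lines through all pairs of distinct points
theil_sen_slope <- function(x, y) {
  pairs  <- combn(seq_along(x), 2)
  slopes <- (y[pairs[2, ]] - y[pairs[1, ]]) / (x[pairs[2, ]] - x[pairs[1, ]])
  median(slopes[is.finite(slopes)])  # drop pairs with identical x values
}

# illustrative use with simulated data where the true slope is 20
set.seed(42)
x <- runif(30, 0, 60)
y <- 20 * x + rnorm(30, 0, 100)
theil_sen_slope(x, y)
```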
Code
Here's the R code behind all this. It won't be immediately reproducible: prior work is needed to establish the database as described in my earlier post, and the animation sequence depends on a particular folder structure.