Data on the shadow economy?
I’m reading Kenneth Rogoff’s The Curse of Cash. It was one of Bloomberg’s Best Books of 2016 and one of the Financial Times’ Best Economics Books of 2016, and I recommend it. It’s an excellent and convincing book, making the case for getting rid of large-denomination notes for three reasons:
- to put the squeeze on the shadow economy (everything from breaches of labour standards and visa requirements to tax evasion)
- to make life harder for organised crime
- to make negative interest rates a credible weapon in the armoury of central banks.
If you’re interested, there’s also a talk at the London School of Economics from 23 November 2016 that’s worth listening to or watching; in fact, that talk was what prompted me to buy the book for holiday reading.
The book has prompted me to follow up something I’ve wondered about for a while: how do people try to measure the shadow economy? One of the broadest definitions of the shadow economy includes both legal and illegal economic activities. What I am interested in here is the measurement of otherwise legal economic activities, such as cash payments for the purpose of tax evasion, rather than intrinsically illegal activities, such as drug trafficking.
Estimating the size of this shadow economy is fraught with obvious difficulties and there is no universally accepted way of doing it. Rogoff cites research by Friedrich Schneider and colleagues. A bit of googling shows that their approach seems to be pretty systematic and mature, and takes into account a range of previous efforts. It looks like Schneider et al.’s 2010 New Estimates for the Shadow Economies all over the World is a seminal paper, with the data reproduced or cited by a range of researchers and NGOs. An updated consideration of methodological issues is in Schneider and Buehn’s 2016 Estimating the Size of the Shadow Economy: Methods, Problems and Open Questions.
Figures in the 2010 paper were improved and updated in the short January 2015 paper Size and Development of the Shadow Economy of 31 European and 5 other OECD Countries from 2003 to 2015: Different Developments. It includes tables with estimates from 2003 to 2015 of the shadow economy for 28 European Union (EU), three non-EU, and five non-European countries. There’s only a basic bar chart of average values in that paper, but here’s a graphic I made that I think summarises the data:
It’s pretty self-explanatory. The universal (for these generally wealthier countries) downwards trend over time is the obvious feature. The inter-country variations won’t surprise anyone aware of the different institutional arrangements and cultural practices across these countries. Worth noting is that the USA has no national value-added tax and its sales taxes are low by international standards, with proportionately more government revenue coming from income tax. Rogoff and others argue that lower incentives to avoid sales and value-added taxes can partly explain the relatively low level of this kind of shadow economy activity in countries like the USA. A cursory glance at VAT rates on Wikipedia suggests this factor, if it exists, is not a particularly important one in explaining differences across countries. A question I may follow up one day.
A close-up view of the same data, allowing a tailored vertical axis for each country, draws attention to micro-structural influences on this kind of shadow economic activity. Note the upward blips across countries around the time of the 2008-2009 international financial crisis.
According to Schneider, the decrease since the 2008-2009 economic crisis can be explained as follows:
“…the most important reason for this decrease is that if the official economy is recovering or booming, people have fewer incentives to undertake additional activities in the shadow economy and to earn extra “black” money.”
I’m sure Schneider is correct that economic boom and bust has a short-term impact, but the general trend downwards over the longer period (2003 onwards) makes it clear that something more substantive is going on.
Data extraction from PDF files
Extracting the data was a pain because it is in three tables in a PDF document. My strategy to make this easier was to:
- download and save the PDF
- use the extremely excellent and useful `pdftools` R package to import the article into R as a list of character vectors, with one element of the list per page of the PDF
- use regular expressions to extract and tidy up tables 1, 2 and 3 from pages 6 and 7 (most importantly, replace sequences of adjacent spaces with a single “pipe” `|`)
- take advantage of those `|` delimiters to save each table as its own pipe-delimited text file
- re-import back to R as three different data frames, combine and tidy up for analysis
- check final results against the original PDF.
As far as I’m aware there’s no general and robust way of converting the wide variety of tables found in PDF documents into machine-usable data, but this approach could be adapted to many situations. There’s a bit of manual adjustment required (for example, dealing with the spaces in “New Zealand” and “Czech Republic”, which I do in quite an ad hoc way given there are only a few to take into account). Here’s the code for the data import and table extraction:
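(In the sketch below, the URL, the file names, the page number and the row-filtering regular expression are illustrative assumptions rather than the exact values used; the same steps repeated for tables 2 and 3 give the other two data frames.)

```r
library(pdftools)

# download and save the PDF (URL is an illustrative guess at the paper's location)
url <- "http://www.econ.jku.at/members/Schneider/files/publications/2015/ShadEcEurope31.pdf"
download.file(url, destfile = "shadow.pdf", mode = "wb")

# import the article, one element of text per page
pages <- pdf_text("shadow.pdf")

# split the page holding table 1 (page 6 here) into individual lines
lines6 <- strsplit(pages[6], "\n")[[1]]

# keep only the lines that look like data rows (text followed by decimal numbers)
rows <- lines6[grepl("[0-9]+\\.[0-9]", lines6)]

# ad hoc fixes for the country names that contain spaces
rows <- gsub("Czech Republic", "Czech_Republic", rows)
rows <- gsub("New Zealand", "New_Zealand", rows)

# replace runs of adjacent spaces with a single pipe and save as a delimited file
writeLines(gsub(" +", "|", trimws(rows)), "table1.txt")

# re-import as a data frame and restore the spaces in country names
eur <- read.delim("table1.txt", sep = "|", header = FALSE,
                  stringsAsFactors = FALSE)
eur$V1 <- gsub("_", " ", eur$V1)
```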
Analysis
Once the data are available in the same shape (in the three data frames `eur`, `noneu` and `noneur`), combining and tidying up for analysis is straightforward.
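As a sketch of that step (the column names `country`, `type`, `year` and `shadow_pct` are my assumptions about the labelling, not necessarily those in the original code), something like this stacks the three data frames into one long, tidy data set and reproduces the close-up view with a tailored vertical axis per country:

```r
library(dplyr)
library(tidyr)
library(ggplot2)

# assumes each data frame has a character column `country` plus one numeric
# column per year; stack them, recording which group each country belongs to
shadow <- bind_rows(
  eur    %>% mutate(type = "EU"),
  noneu  %>% mutate(type = "Other Europe"),
  noneur %>% mutate(type = "Non-European OECD")
) %>%
  # reshape from one column per year to a long, tidy layout
  gather(key = year, value = shadow_pct, -country, -type) %>%
  mutate(year = as.integer(gsub("[^0-9]", "", year)))

# close-up view: one panel per country, each with its own vertical scale
ggplot(shadow, aes(x = year, y = shadow_pct)) +
  geom_line() +
  facet_wrap(~country, scales = "free_y") +
  labs(x = "", y = "Estimated shadow economy as % of GDP")
```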