It seems that much of the data analysis work I’ve done over the last few months has followed a “script”. First, identify data, often government-sponsored and freely available, that’s of keen interest. Next, find the websites that house the data and download the relevant files to my notebook. The downloads might ultimately be one-off or built into the data analysis programs. Finally, load the data into either R or Python and have at it with queries, visualizations, and statistical/learning models.
Several examples of recent work come to mind. The first involves data from the Bureau of Labor Statistics on jobs and educational attainment from 1992-2017. On this site, 24 spreadsheets are available detailing monthly statistics on six measures of employment by four levels of educational attainment. For a previous blog, I downloaded 12 of these files, covering the measures labor force, employed, and unemployed by the educational levels no HS, HS grad, some college, and college grad. I renamed the spreadsheets on my SSD to include indicators of dimension, e.g. “clf_laborforce_collegegrad.xls”.
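To make that naming convention concrete, here is a minimal sketch of generating the dimensioned file names. The measure and education labels mirror the ones above, but the exact strings are illustrative.

```r
# Illustrative sketch: build the dimensioned spreadsheet names used to save
# the BLS downloads locally, e.g. "clf_laborforce_collegegrad.xls".
measures  <- c("laborforce", "employed", "unemployed")
education <- c("nohs", "hsgrad", "somecollege", "collegegrad")

dims   <- expand.grid(measure = measures, education = education,
                      stringsAsFactors = FALSE)
fnames <- with(dims, sprintf("clf_%s_%s.xls", measure, education))
fnames
```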
The second example, the topic of this blog, has to do with Medicare charges summarized by hospital and diagnosis-related group (DRG). At this point there are five files (either csv or spreadsheet) available for download, representing the years 2011-2015. The year is embedded in the website file names.
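Since the year is embedded in each file name, the download itself can be scripted. The URL template below is a placeholder rather than the actual CMS location, but it shows the idea.

```r
# Sketch: pull the five yearly Medicare charge files. The URL template is a
# placeholder; substitute the actual file locations on the CMS site.
years    <- 2011:2015
base_url <- "https://example.gov/medicare/inpatient_charges_%d.csv"

for (yr in years) {
  src  <- sprintf(base_url, yr)
  dest <- sprintf("data/medicare_charges_%d.csv", yr)
  download.file(src, destfile = dest, mode = "wb")
}
```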
A third illustration, and the topic of a coming blog, revolves around census data from the American Community Survey.
The pattern behind the three cases includes:
- readily downloadable files, generally either csv or xls. These files can be retrieved either by hand or programmatically.
- multiple such files, often dimensioned by time or other variable(s).
- a common, consistent format to the files, so that “reads” will work similarly on each.
- a structured file naming convention, either given or assigned, that provides dimensional info for data loads. Date/time is the most common dimension (a sketch of recovering this info from file names follows this list).
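The last point is what makes the loads nearly mechanical: once the dimensions are encoded in the file name, they can be recovered at read time and attached to the data. A minimal sketch, using the BLS names from the first example:

```r
# Sketch: recover dimension info from a structured file name so it can be
# attached to the data as columns at load time.
fname <- "clf_laborforce_collegegrad.xls"
parts <- strsplit(tools::file_path_sans_ext(basename(fname)), "_")[[1]]
dims  <- setNames(parts[-1], c("measure", "education"))
dims
```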
In this article, I look at the Medicare charges data, which consists of five yearly csv files. I expect the next year in the sequence, 2016, to be posted in the near future. I first downloaded the five files to my laptop, programmatically in this case. I then used Jupyter Notebook and R 3.4 to analyze the data with code that takes advantage of the commonalities. The data.table package, the tidyverse ecosystem, the fast compressed read/write library fst, and a functional approach are central to the work.
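To show the shape of that load step, here is a minimal sketch, assuming the five csvs sit in a local data/ directory with the year in each file name (the actual column layout of the files isn’t shown here): fread each file, tag its rows with the year, stack with rbindlist, and persist the result with fst.

```r
library(data.table)
library(purrr)
library(fst)

# Read one yearly csv and tag rows with the year taken from the file name.
read_one <- function(f) {
  dt <- fread(f)
  dt[, year := as.integer(gsub("\\D", "", basename(f)))]
  dt
}

# Stack the five years into a single data.table.
files   <- list.files("data", pattern = "^medicare_charges_\\d{4}\\.csv$",
                      full.names = TRUE)
charges <- rbindlist(map(files, read_one), use.names = TRUE, fill = TRUE)

# Persist in compressed fst format so later sessions can reload the combined
# table (or just a subset of its columns) very quickly.
write_fst(charges, "data/medicare_charges_2011_2015.fst", compress = 100)
```

Subsequent analysis can then start from read_fst() on the saved file rather than re-parsing the raw csvs each time.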
This is not your older sibling’s R. None of the highly performant packages used here are included in core R. Rather, they are the bounty of an exuberant and capable ecosystem of R developers. Incidentally, if this notebook were written in Python, I’d be giving similar accolades to its community.
The remainder of the notebook details the R code to implement the loading of this data into R and some simple subsequent analysis. The structure of the code in this instance is very similar to that of the other examples cited.