“Machine Learning (ML)” and “Traditional Statistics (TS)” have different philosophies in their approaches. With “Data Science” at the forefront, getting lots of attention and interest, I would like to dedicate this blog to discussing the differentiation between the two. I often see discussions and arguments between statisticians and data miners/machine learning practitioners about the definition of “data science,” its coverage, and the required skill sets. All that is needed is to pay attention to the evolution of these fields.
There is no doubt that when we talk about “Analytics,” both data mining/machine learning practitioners and traditional statisticians have been players. However, there is a significant difference in the approaches, applications, and philosophies of the two camps that is often overlooked.
What is ML?
ML Application Variety
Data mining and predictive analytics
- Fraud detection, ad placement, credit scoring, recommenders, drug design, stock trading, customer relationship & experience, …
Text processing & analysis
- Web search, spam filtering, sentiment analysis, …
Graph mining
Other:
- Speech recognition, human genome, bioinformatics, optical character recognition (OCR), face recognition, self-driving cars, scene analysis, …
ML Community/Practitioners
- Typically computer science and/or engineering background
- More programming savvy
- Not confined to a single tool
- Open-source friendly
- Rapid prototyping of ideas/solutions is desired
ML vs. Traditional Statistics
Historically, ML techniques and approaches have relied heavily on computing power. TS techniques, on the other hand, were mostly developed in an era when extensive computing power was not an option. As a result, TS relies heavily on small samples and on strong assumptions about the data and its distributions.
ML in general tends to make fewer prior assumptions about the problem and is liberal in the approaches and techniques it uses to find a solution, many times using heuristics. The preferred learning method in machine learning and data mining is inductive learning. At its extreme, in inductive learning the data is plentiful, and often not much prior knowledge about the problem and data distributions exists, or is needed, for learning to succeed. The other end of the learning spectrum is analytical (deductive) learning, where data is often scarce, or it is preferred (or customary) to work with small samples of it, and good prior knowledge about the problem and data exists. In the real world, one often operates between these two extremes. Traditional statistics, by contrast, is conservative in its approaches and techniques and often makes tight assumptions about the problem, especially about data distributions.
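To make the two ends of this spectrum concrete, here is a minimal sketch on synthetic data, assuming numpy and scikit-learn are available (the models and data are purely illustrative, not from any particular study): a flexible, data-hungry learner stands in for the inductive end, and a small-sample linear fit stands in for the analytical/parametric end.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Inductive end: abundant data, no assumed functional form.
X_big = rng.uniform(-3, 3, size=(5000, 1))
y_big = np.sin(X_big[:, 0]) + rng.normal(0, 0.1, size=5000)
forest = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_big, y_big)

# Analytical/parametric end: a small sample plus a strong (linear) assumption.
X_small, y_small = X_big[:30], y_big[:30]
linear = LinearRegression().fit(X_small, y_small)
```

Both models can be queried with `forest.predict(...)` or `linear.predict(...)`; the difference is where each gets its power: data volume versus prior assumptions.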
The following table shows some of the differences in approach and philosophy between the two fields:
| Machine Learning (ML) | Traditional Statistics (TS) |
| --- | --- |
| Goal: “learning” from data of all sorts | Goal: analyzing and summarizing data |
| No rigid pre-assumptions about the problem and data distributions in general | Tight assumptions about the problem and data distributions |
| More liberal in its techniques and approaches | Conservative in its techniques and approaches |
| Generalization is pursued empirically through training, validation, and test datasets (see the sketch after this table) | Generalization is pursued using statistical tests on the training dataset |
| Not shy of using heuristics in its search for a “good” solution | Uses tight initial assumptions about the data and the problem, typically in search of an optimal solution under those assumptions |
| Redundancy in features (variables) is acceptable and often helpful; algorithms designed to handle large numbers of features are preferred | Often requires independent features; fewer input features are preferred |
| Does not promote data reduction prior to learning; promotes a culture of abundance: “the more data, the better” | Promotes data reduction as much as possible before modeling (sampling, fewer inputs, …) |
| Has been faced with solving more complex problems in learning, reasoning, perception, knowledge representation, … | Mainly focused on traditional data analysis |
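To illustrate the generalization row above, here is a minimal sketch, assuming scikit-learn and a synthetic dataset: a model is tuned on a validation set and its generalization is estimated empirically on a held-out test set, rather than through statistical tests on the training data.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 5))
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # synthetic labels for illustration

# Hold data out twice: a validation set for model selection,
# and a test set for the final, untouched estimate of generalization.
X_tmp, X_test, y_tmp, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X_tmp, y_tmp, test_size=0.25, random_state=0)

# Pick the tree depth that does best on the validation set...
best_depth = max(range(1, 10), key=lambda d: accuracy_score(
    y_val,
    DecisionTreeClassifier(max_depth=d, random_state=0).fit(X_train, y_train).predict(X_val)))

# ...then report performance once on the test set.
final = DecisionTreeClassifier(max_depth=best_depth, random_state=0).fit(X_train, y_train)
print("test accuracy:", accuracy_score(y_test, final.predict(X_test)))
```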
Typical tasks in ML and data mining include the following (a short illustrative sketch follows the list):
- Classification: Predicting to which discrete class an entity belongs (binary classification is the most common), e.g., whether a customer will be high-risk.
- Regression: Predicting continuous values of an entity’s characteristic—e.g., how much an individual will spend next month on his or her credit card, given all other available information.
- Forecasting: Estimation of macro (aggregated) variables such as total monthly sales of a particular product.
- Attribute Importance: Identifying the variables (attributes) that are the most important in predicting different classification or regression outcomes.
- Clustering: Finding natural groupings in the data.
- Association models: Analyzing “market baskets” (e.g., novel combinations of products that are often bought together in shopping carts).
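To ground a couple of these tasks, here is a minimal sketch on synthetic data, assuming scikit-learn is installed; the “high-risk” label here is fabricated purely for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))

# Classification: predict a discrete class (a synthetic "high-risk" flag).
y = (X[:, 0] - X[:, 2] > 0.5).astype(int)
clf = LogisticRegression().fit(X, y)
print("predicted class:", clf.predict(X[:1]))

# Clustering: find natural groupings in the data, with no labels at all.
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print("cluster sizes:", np.bincount(clusters))
```

The same pattern extends to the other tasks: regression and forecasting swap in a continuous target, and attribute importance can be read off many fitted models (e.g., tree-based feature importances).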