Through the news and social media, you are probably aware that machine learning has become one of the most exciting technologies of our time. While it may seem like the buzzword of our age, it is certainly not just hype. This exciting field opens the way to new possibilities and has become indispensable to our daily lives.
Being exposed to practical code examples and working through example applications of machine learning are great ways to dive into this field. If you want to become a machine learning practitioner or a better problem solver, or maybe you are even considering a career in machine learning research, then Python Machine Learning, Third Edition is for you! This book is a comprehensive guide to machine learning and deep learning with Python. It acts as both a step-by-step tutorial and a reference you'll keep coming back to as you build your machine learning systems.
Let’s hear Sebastian Raschka’s views on the benefits of TensorFlow 2.0 and the key takeaways from the new edition of his bestselling Python Machine Learning book.
- With the TensorFlow 2.0 release, what do you think are its biggest benefits in the computation world?
One of the biggest complaints about the first version of TensorFlow was that it was centered around static computation graphs. While static computation graphs are great from a software engineering and optimization standpoint, many users associate TensorFlow with a tedious experience when implementing and experimenting with neural networks.
Now, TensorFlow 2.0 uses dynamic graphs by default to address user feedback and make the deep learning framework more user-friendly. Emphasizing usability, TensorFlow is now also centered around the built-in Keras API, which makes using TensorFlow easier than ever.
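The difference is easy to illustrate in miniature. The following toy sketch (plain Python, not actual TensorFlow code; the class and function names are invented for illustration) contrasts the static define-then-run style of TensorFlow 1.x with the eager define-by-run style that TensorFlow 2.0 uses by default:

```python
# Static "define-then-run": first build a graph of placeholders and ops,
# then feed concrete values through a separate run step (TF 1.x style).
class Placeholder:
    pass

class Add:
    def __init__(self, a, b):
        self.a, self.b = a, b

def run(node, feed):
    # Recursively evaluate the graph with the supplied placeholder values.
    if isinstance(node, Placeholder):
        return feed[node]
    return run(node.a, feed) + run(node.b, feed)

x, y = Placeholder(), Placeholder()
z = Add(Add(x, y), y)            # nothing is computed yet
result_static = run(z, {x: 1, y: 2})   # computed only at "run" time

# Eager "define-by-run": operations execute immediately, like ordinary
# Python (the TF 2.0 default), so you can inspect and debug each step.
def add(a, b):
    return a + b

result_eager = add(add(1, 2), 2)       # computed right away
```

Both styles compute the same value, but in the eager version every intermediate result is an ordinary value you can print or step through in a debugger, which is exactly the usability gain the answer above describes.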
- TensorFlow developers seem to be promoting Keras as tf.keras, a recommended high-level API for TensorFlow 2.0. But Keras has its own separate package. How is the Keras package different from tf.keras?
Initially, the Keras project started as an API around Theano, one of the earlier deep learning frameworks, which was phased out in 2017. Due to its popularity in the deep learning community, Keras started supporting different backends for its API, including TensorFlow. Around the same time, the TensorFlow team and various open source communities experimented with different higher-level APIs as convenient abstraction layers for TensorFlow. As it turns out, Keras became the users’ favorite, and the TensorFlow team started incorporating it into TensorFlow’s core library to avoid reinventing the wheel.
Today, the standalone Keras library is an API that supports multiple backends, including Theano, TensorFlow, and CNTK. However, the tf.keras module inside TensorFlow has been partly rewritten and optimized for TensorFlow 2.0. Since tf.keras doesn’t have to provide support for other backends, such as Theano and CNTK, I would argue that tf.keras is a better, more optimized, and more native solution for users who would like to use TensorFlow 2.0.
- Reinforcement learning is said to be the hope of true artificial intelligence. How much do you agree with this statement?
I think we are still very, very far away from true AI, which is also known as strong artificial intelligence and artificial general intelligence. As of today, there is no clear path towards achieving artificial general intelligence or even predicting a rough time estimate for when we’ll get there.
I would argue that the closest we have come to human-level performance in complex tasks is AlphaGo and AlphaStar, which are both based on reinforcement learning. However, a model like AlphaGo that can beat top players in a complex board game cannot be compared to human-level thinking; it cannot even generalize to other, related tasks without being retrained from scratch.
Obviously, reinforcement learning allows us to solve very complex tasks, and in that sense, it is much more advanced than algorithms for predictive analytics. At the same time, reinforcement learning models are costly to train and specific to particular tasks. Whether reinforcement learning will play a role in achieving artificial general intelligence remains to be seen.
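To give a concrete feel for what a reinforcement learning algorithm does, here is a minimal tabular Q-learning sketch for a toy corridor environment. This is a hypothetical example for illustration only; the environment, hyperparameters, and variable names are assumptions, not material from the book:

```python
import random

random.seed(0)

N_STATES = 5          # corridor states 0..4; reaching state 4 yields reward 1
ACTIONS = [-1, +1]    # step left or step right
alpha, gamma, eps = 0.5, 0.9, 0.2   # learning rate, discount, exploration

# Q-table: estimated return for every (state, action) pair
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

for episode in range(200):
    s = 0
    while s != N_STATES - 1:
        # epsilon-greedy action selection: mostly exploit, sometimes explore
        if random.random() < eps:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s_next = min(max(s + a, 0), N_STATES - 1)   # walls clamp movement
        r = 1.0 if s_next == N_STATES - 1 else 0.0
        # Q-learning update: bootstrap from the best action in the next state
        best_next = max(Q[(s_next, act)] for act in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s_next

# The greedy policy after training: preferred action per non-terminal state
policy = {s: max(ACTIONS, key=lambda act: Q[(s, act)])
          for s in range(N_STATES - 1)}
```

Even this tiny example shows both properties mentioned above: the agent learns to walk right toward the reward through trial and error alone, yet the resulting Q-table is useless for any other environment without retraining from scratch.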
- Coming to your book Python Machine Learning, what's new in the third edition?
Many readers and students told us how much they love the first 12 chapters as a comprehensive introduction to machine learning and Python's scientific computing stack. To keep these chapters relevant, we updated them to support the latest versions of NumPy, SciPy, and scikit-learn. We also refined several sections to improve readability and explanations based on reader feedback.
One of the most exciting events in the deep learning world was the release of TensorFlow 2.0. Consequently, all of the TensorFlow-related deep learning chapters (chapters 13-16) received a big overhaul. Since TensorFlow 2.0 introduced many new features and fundamental changes, we rewrote these chapters from scratch. Furthermore, we added a brand-new chapter on Generative Adversarial Networks (GANs), one of the hottest topics in deep learning research.
In the first chapter of the previous editions of Python Machine Learning, we introduced the three subcategories of machine learning: unsupervised learning, supervised learning, and reinforcement learning. Readers of the first two editions will know, though, that detailed coverage of reinforcement learning was out of scope for those editions. However, based on the many requests we have received from readers, we are very excited to announce that we have written a comprehensive introduction to reinforcement learning, which is included as the longest chapter in this book.
- Why have you decided to cover reinforcement learning and GANs in this edition?
The first GAN paper came out just two years before we started working on the second edition. At that time, we weren't sure whether GANs would remain an essential and relevant topic. Without a doubt, however, GANs have evolved into one of the hottest and most widely used deep learning techniques. People use them for creating artwork and for colorizing and improving the quality of photos. Even video game modding communities picked up on GANs to recreate the textures of old video games in higher resolutions. Nowadays, various scientific research areas make use of GANs; for example, cosmologists use them to generate gravitational lensing effects for studying the effects of dark matter in the universe. I think it goes without saying that an introduction to GANs was long overdue.
Another important topic, or rather a whole subcategory of machine learning, that we skipped in previous editions is reinforcement learning. Reinforcement learning has received a massive boost in attention recently, thanks to impressive projects such as DeepMind's AlphaGo and AlphaGo Zero, which beat the world's best players in the strategy board game Go and received extensive news coverage. More recently, reinforcement learning has been used to compete with the world's top e-sports players in the real-time strategy video game StarCraft II. The chances are that many people have heard of these achievements by now, and we hope that our new chapters can provide an accessible and practical introduction to this exciting field.
- What are the key takeaways from your book?
Machine learning can be useful in almost every problem domain. We cover a lot of different subfields of machine learning in the book. My hope is that people can find inspiration for applying these fundamental techniques to drive their research or industrial applications. Also, using well-developed and maintained open source software makes machine learning very accessible to a wide audience of experienced programmers, as well as those who are new to programming.
Python Machine Learning, Third Edition also differs from a classic academic machine learning textbook in its emphasis on practical code examples. I think this approach is highly valuable for both students and young researchers who are getting started in machine learning and deep learning. We heard from readers of previous editions that the book strikes a good balance between explaining the broader concepts and supporting them with great hands-on examples, while giving a light introduction to the mathematical underpinnings.
About the Authors
Sebastian Raschka is an Assistant Professor of Statistics at the University of Wisconsin-Madison focusing on machine learning and deep learning research. Some of his recent research methods have been applied to solving problems in the field of biometrics for imparting privacy to face images. Other research focus areas include the development of methods related to model evaluation in machine learning, deep learning for ordinal targets, and applications of machine learning to computational biology.
Vahid Mirjalili obtained his Ph.D. in mechanical engineering working on novel methods for large-scale computational simulations of molecular structures. Currently, he is focusing his research efforts on applications of machine learning in various computer vision projects at the Department of Computer Science and Engineering at Michigan State University. He recently joined 3M Company as a research scientist, where he applies his expertise in state-of-the-art machine learning and deep learning techniques to solve real-world problems in various applications.