This article was written by Andy at Adventures in Deep Learning.
Figure: Fully connected neural network example architecture
In previous tutorials on deep learning, I have shown how to build networks in the TensorFlow deep learning framework. There is no doubt that TensorFlow is an immensely popular deep learning framework at present, with a large community supporting it. However, there is another contending framework which I think may actually be better – the Microsoft Cognitive Toolkit, more commonly known as CNTK. Why do I believe it to be better? Two main reasons – it has a more intuitive and easy-to-use Python API than TensorFlow, and it is faster. It can also be used as a back-end to Keras, but I would argue that there is little benefit in doing so, as CNTK is already very streamlined. How much faster is it? Some benchmarks show that it is generally faster than TensorFlow, and up to 5-10 times faster for recurrent / LSTM networks. That’s a pretty impressive achievement. This article is a comprehensive CNTK tutorial to teach you more about this exciting framework.
Should you switch from using TensorFlow to CNTK? TensorFlow definitely has much more hype than Microsoft’s CNTK, and therefore a bigger development community, more answers on Stack Overflow, and so on. Also, many people are down on Microsoft, which is often perceived as a big, greedy corporation. However, Microsoft has opened up a lot, and CNTK is now open-source, so I would recommend giving it a try. Let me know what you think in the comments. This post is a tutorial you can use to get familiar with the framework – I suspect you might be surprised at how streamlined it is.
CNTK inputs and variables:
The first thing to learn about any deep learning framework is how it deals with input data and variables, and how it executes operations/nodes in the computational graph. In this CNTK tutorial, we’ll be creating a three-layer densely connected neural network to recognize handwritten digits in the MNIST dataset, so in the explanations below I’ll be using examples from this problem. See the above-mentioned tutorials for other implementations of the MNIST classification problem.
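To give a concrete feel for this, here is a minimal sketch (not taken from the original article) of declaring CNTK input variables and stacking three dense layers with the Python API; the hidden-layer size of 200 and the layer composition are illustrative assumptions, not the article's exact architecture.

```python
# Minimal sketch of CNTK inputs, variables, and a three-layer dense MNIST model.
# Layer sizes (784 -> 200 -> 200 -> 10) are illustrative assumptions.
import numpy as np
import cntk as C

input_dim = 784    # flattened 28x28 MNIST image
num_classes = 10

# Input variables are the entry points of the computational graph;
# actual data is only bound to them when the graph is trained or evaluated.
features = C.input_variable(input_dim, dtype=np.float32)
labels = C.input_variable(num_classes, dtype=np.float32)

# Three densely connected layers composed with the layers API.
model = C.layers.Sequential([
    C.layers.Dense(200, activation=C.relu),
    C.layers.Dense(200, activation=C.relu),
    C.layers.Dense(num_classes, activation=None)
])(features / 255.0)  # scale pixel values to [0, 1]

# Loss and metric are themselves nodes in the same graph.
loss = C.cross_entropy_with_softmax(model, labels)
error = C.classification_error(model, labels)
```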
This is an excerpt – the full original article continues with CNTK variables, data readers, and CNTK operations.