Deep Learning can help computers perform human-like tasks such as speech recognition and image classification.

With Deep Learning – a form of Machine Learning (Artificial Intelligence) – computers can extract and transform data using multiple layers of neural networks.

You might think that in order to use Deep Learning techniques, you'd need to know advanced mathematics, or have access to powerful computers.

Well, as long as you've passed high school math, know the basics of coding, and have a computer connected to the internet, you can learn to do world-class Deep Learning.

We published a 15-hour Deep Learning course on the freeCodeCamp.org YouTube channel with the goal of making Deep Learning accessible to as many people as possible.

The course is from fast.ai and was developed by Jeremy Howard and Sylvain Gugger. Sylvain is a researcher who has written 10 math textbooks, and Jeremy has taught machine learning for the past 30 years. He is the former president and chief scientist of Kaggle, the world's largest machine learning community.

Additionally, the course includes a book that you can access online for free or purchase as a physical copy. The book is one of the top-selling Deep Learning books on Amazon.

After finishing this course you will know:

  • How to train models that achieve state-of-the-art results in computer vision, natural language processing (NLP), tabular data, and collaborative filtering
  • How to turn your models into web applications, and deploy them
  • How Deep Learning models work
  • How to use that knowledge to improve the accuracy, speed, and reliability of your models
  • The latest Deep Learning techniques that really matter in practice
  • How to implement stochastic gradient descent and a complete training loop from scratch
  • How to think about the ethical implications of your work, and how to minimize the likelihood that your work is misused for harm
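
To give a flavor of the "SGD and a training loop from scratch" outcome: here is a minimal sketch in plain Python (not taken from the course itself) that fits a line y = 2x + 3 one example at a time. The toy data, learning rate, and step count are all made up for illustration.

```python
import random

random.seed(42)  # make the run reproducible

# Toy data: points on the line y = 2x + 3.
xs = [float(i) for i in range(10)]
ys = [2.0 * x + 3.0 for x in xs]

# Randomly initialized parameters (slope and intercept).
w, b = random.random(), random.random()
lr = 0.01  # learning rate

for step in range(2000):
    # "Stochastic": use one randomly chosen example per step.
    i = random.randrange(len(xs))
    x, y = xs[i], ys[i]
    pred = w * x + b                  # forward pass
    # Gradients of the squared error (pred - y)**2 w.r.t. w and b.
    grad_w = 2 * (pred - y) * x
    grad_b = 2 * (pred - y)
    w -= lr * grad_w                  # update step
    b -= lr * grad_b

print(w, b)  # should be close to 2 and 3
```

The same loop structure (forward pass, loss, gradients, parameter update) scales from this two-parameter example up to deep networks; the course walks through that progression in detail.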

Here are some of the techniques covered in this course:

  • Random forests and gradient boosting
  • Affine functions and nonlinearities
  • Parameters and activations
  • Random initialization and transfer learning
  • SGD, Momentum, Adam, and other optimizers
  • Convolutions
  • Batch normalization
  • Dropout
  • Data augmentation
  • Weight decay
  • Image classification and regression
  • Entity and word embeddings
  • Recurrent neural networks (RNNs)
  • Segmentation
  • And much more
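
As a taste of the first few items on the list, an affine function followed by a nonlinearity is the basic building block of a neural network layer. A minimal sketch in plain Python (the inputs, weights, and bias below are made-up values for illustration):

```python
def affine(xs, weights, bias):
    # Affine function: weighted sum of the inputs plus a bias.
    return sum(x * w for x, w in zip(xs, weights)) + bias

def relu(a):
    # Nonlinearity: ReLU replaces negative values with zero.
    return max(0.0, a)

# One "neuron": its parameters (weights, bias) turn inputs
# into an activation.
activation = relu(affine([1.0, -2.0, 0.5], [0.4, 0.1, -0.6], bias=0.3))
print(activation)
```

Stacking many such units in layers, and training their parameters with an optimizer like SGD, is what the course builds up to.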

Watch the full course on the freeCodeCamp.org YouTube channel (15-hour watch).