
The Advanced Guide to Deep Learning and Artificial Intelligence Bundle

$42 | $480 value | 91% off
Courses: 4 | Lessons: 127 | Enrolled: 543

What's Included

Product Details

Access: Lifetime | Content: 3 hours | Lessons: 25

Deep Learning: Convolutional Neural Networks in Python

Dig Into the Deep Learning Concepts Behind Computer Vision

By Lazy Programmer | in Online Courses

In this course, designed to expand on your knowledge of neural networks and deep learning, you'll harness these concepts for computer vision using convolutional neural networks. Going in depth on the concept of convolution, you'll discover its wide range of applications, from generating image effects to modeling artificial organs (a minimal sketch of convolution follows the list below).

  • Access 25 lectures & 3 hours of content 24/7
  • Explore the StreetView House Number (SVHN) dataset using convolutional neural networks (CNNs)
  • Build convolutional filters that can be applied to audio or images
  • Extend deep neural networks w/ just a few functions
  • Test CNNs written in both Theano & TensorFlow
Note: we strongly recommend taking The Deep Learning & Artificial Intelligence Introductory Bundle before this course.
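
The course builds convolution from first principles before applying it to audio and images. As a taste of the idea, here is a minimal NumPy sketch of a naive 2D convolution with a 3x3 edge-detection kernel; it is illustrative only, not course code, and the random array stands in for a real grayscale image.

```python
import numpy as np

def convolve2d(image, kernel):
    """Naive 'valid' 2D convolution: slide the flipped kernel over the image."""
    k = np.flipud(np.fliplr(kernel))  # true convolution flips the kernel
    kh, kw = k.shape
    out_h = image.shape[0] - kh + 1
    out_w = image.shape[1] - kw + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * k)
    return out

# A classic Laplacian-style edge-detection kernel
edge_kernel = np.array([[ 0, -1,  0],
                        [-1,  4, -1],
                        [ 0, -1,  0]], dtype=float)

image = np.random.rand(8, 8)      # stand-in for a grayscale image
edges = convolve2d(image, edge_kernel)
print(edges.shape)                # (6, 6): 'valid' convolution shrinks the output
```
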
The Lazy Programmer is a data scientist, big data engineer, and full-stack software engineer. For his master's thesis he worked on brain-computer interfaces that use machine learning to help non-verbal, non-mobile persons communicate with their family and caregivers.

He has worked in online advertising and digital media as both a data scientist and a big data engineer, building high-throughput web services around that data. He has created big data pipelines with Hadoop, Pig, and MapReduce, built machine learning models to predict click-through rate and news feed recommender systems using linear regression, Bayesian bandits, and collaborative filtering, and validated the results with A/B testing.

He has taught data science, statistics, machine learning, algorithms, calculus, computer graphics, and physics to undergraduate and graduate students at universities such as Columbia University, NYU, Humber College, and The New School.

Multiple businesses have benefited from his web programming expertise. He does backend (server), frontend (HTML/JS/CSS), and operations/deployment work, using technologies such as Python, Ruby/Rails, PHP, Bootstrap, jQuery (JavaScript), Backbone, and Angular, along with MySQL, Postgres, Redis, MongoDB, and more for storage and databases.

Details & Requirements

  • Length of time users can access this course: lifetime
  • Access options: web streaming, mobile streaming
  • Certification of completion not included
  • Redemption deadline: redeem your code within 30 days of purchase
  • Experience level required: all levels, but you must have some knowledge of calculus, linear algebra, probability, Python, and Numpy, and be able to write a feedforward neural network in Theano and TensorFlow (a minimal sketch of that prerequisite follows this list)
  • All code for this course is available for download here, in the directory cnn_class
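
To gauge that prerequisite, here is a minimal NumPy sketch of a one-hidden-layer feedforward network's forward pass. It is illustrative only, not course code; the course expects you to write the equivalent in Theano or TensorFlow, and all names here are our own.

```python
import numpy as np

def softmax(a):
    e = np.exp(a - a.max(axis=1, keepdims=True))  # shift for numerical stability
    return e / e.sum(axis=1, keepdims=True)

def forward(X, W1, b1, W2, b2):
    """One hidden layer: affine -> ReLU -> affine -> softmax."""
    Z = np.maximum(0, X @ W1 + b1)  # hidden-layer activations
    return softmax(Z @ W2 + b2)     # class probabilities

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 4))                   # 5 samples, 4 features
W1, b1 = rng.standard_normal((4, 8)), np.zeros(8)
W2, b2 = rng.standard_normal((8, 3)), np.zeros(3)
print(forward(X, W1, b1, W2, b2).sum(axis=1))     # each row sums to 1
```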

Compatibility

  • Internet required

Course Outline

  • Outline and Review
    • Introduction and Outline (1:50)
    • Review of Important Concepts (3:42)
    • Where to get the data for this course (3:12)
    • How to load the SVHN data and benchmark a vanilla deep network (5:03)
  • Convolution
    • What is convolution? (5:18)
    • Convolution example with audio: Echo (6:39)
    • Convolution example with images: Gaussian Blur (5:32)
    • Convolution example with images: Edge Detection (3:21)
    • Write Convolution Yourself (9:15)
  • Convolutional Neural Network Description
    • Architecture of a CNN (5:08)
    • Relationship to Biology (2:18)
    • Convolution and Pooling Gradients (2:39)
    • LeNet - How the Shapes Go Together (12:52)
  • Convolutional Neural Network in Theano
    • Theano - Building the CNN components (4:19)
    • Theano - Full CNN and Test on SVHN (17:26)
    • Visualizing the Learned Filters (3:35)
  • Convolutional Neural Network in TensorFlow
    • TensorFlow - Building the CNN components (3:39)
    • TensorFlow - Full CNN and Test on SVHN (10:41)
  • Practical Tips
    • Practical Image Processing Tips (3:07)
  • Project: Facial Expression Recognition
    • Facial Expression Recognition Problem Description (12:21)
    • The class imbalance problem (6:01)
    • Utilities walkthrough (5:45)
    • Convolutional Net in Theano (21:04)
    • Convolutional Net in TensorFlow (19:03)
  • Appendix
    • How to install Numpy, Scipy, Matplotlib, Pandas, IPython, Theano, and TensorFlow (17:22)



Access: Lifetime | Content: 3 hours | Lessons: 30

Unsupervised Deep Learning in Python

Uncover the Power of Autoencoders & Restricted Boltzmann Machines in Unsupervised Deep Learning

By Lazy Programmer | in Online Courses

In this course, you'll dig deeper into deep learning, covering principal components analysis (PCA) and a popular nonlinear dimensionality reduction technique known as t-distributed stochastic neighbor embedding (t-SNE). From there, you'll learn about a special type of unsupervised neural network called the autoencoder, and how stacking many of them improves the performance of deep neural networks (a short dimensionality reduction sketch follows the list below).

  • Access 30 lectures & 3 hours of content 24/7
  • Discuss restricted Boltzmann machines (RBMs) & how to pretrain supervised deep neural networks
  • Learn about Gibbs sampling
  • Use PCA & t-SNE on features learned by autoencoders & RBMs
  • Understand the most modern deep learning developments
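
For a sense of what PCA and t-SNE produce, here is a short sketch. The course derives and implements these techniques itself; this sketch leans on scikit-learn (our assumption, not a course dependency) purely for illustration, and the random matrix stands in for real data such as flattened MNIST digits.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 50))  # stand-in for real data, e.g. flattened MNIST digits

# PCA: linear projection onto the top principal components
X_pca = PCA(n_components=2).fit_transform(X)

# t-SNE: nonlinear embedding that preserves local neighborhoods
X_tsne = TSNE(n_components=2, perplexity=30).fit_transform(X)

print(X_pca.shape, X_tsne.shape)    # (200, 2) (200, 2)
```
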

Details & Requirements

  • Length of time users can access this course: lifetime
  • Access options: web streaming, mobile streaming
  • Certification of completion not included
  • Redemption deadline: redeem your code within 30 days of purchase
  • Experience level required: intermediate, but you must have some knowledge of calculus, linear algebra, probability, Python, Numpy, and be able to write a feedforward neural network in Theano and TensorFlow.
  • All code for this course is available for download here, in the directory unsupervised_class2

Compatibility

  • Internet required

Course Outline

  • Introduction and Outline
    • Introduction and Outline (1:55)
  • Principal Components Analysis
    • What does PCA do? (6:14)
    • PCA derivation (4:22)
    • MNIST visualization, finding the optimal number of principal components (3:39)
    • PCA objective function (2:06)
  • t-SNE (t-distributed Stochastic Neighbor Embedding)
    • t-SNE Theory (4:28)
    • t-SNE on the Donut (5:51)
    • t-SNE on XOR (4:36)
    • t-SNE on MNIST (2:13)
  • Autoencoders
    • Autoencoders (3:20)
    • Denoising Autoencoders (1:55)
    • Stacked Autoencoders (3:32)
    • Writing the autoencoder class in code (11:55)
    • Writing the deep neural network class in code (12:42)
    • Testing greedy layer-wise autoencoder training vs. pure backpropagation (3:33)
    • Cross Entropy vs. KL Divergence (4:40)
    • Deep Autoencoder Visualization Description (1:32)
    • Deep Autoencoder Visualization in Code (11:14)
  • Restricted Boltzmann Machines
    • Restricted Boltzmann Machine Theory (9:31)
    • Contrastive Divergence for RBM Training (2:45)
    • RBM in Code + Testing a greedily pre-trained deep belief network on MNIST (14:24)
  • The Vanishing Gradient Problem
    • The Vanishing Gradient Problem Description (3:08)
    • The Vanishing Gradient Problem Demo in Code (12:17)
  • Extras + Visualizing what features a neural network has learned
    • Exercises on feature visualization and interpretation (2:07)
    • BONUS: How to derive the free energy formula (6:32)
  • BONUS: Application of PCA / SVD to NLP (Natural Language Processing)
    • BONUS: Application of PCA and SVD to NLP (Natural Language Processing) (2:30)
    • BONUS: Latent Semantic Analysis in Code (10:08)
  • Appendix
    • How to install Numpy, Scipy, Matplotlib, Pandas, IPython, Theano, and TensorFlow (17:22)



Access: Lifetime | Content: 4 hours | Lessons: 32

Deep Learning: Recurrent Neural Networks in Python

Discover the Science Behind Speech Recognition & Other Futuristic Technologies

By Lazy Programmer | in Online Courses

A recurrent neural network is a class of artificial neural network in which connections form a directed cycle, letting the network use its internal memory to process arbitrary sequences of inputs. This makes recurrent networks capable of tasks like handwriting and speech recognition. In this course, you'll explore this highly expressive branch of deep learning and get up to speed on its core techniques (a minimal sketch of the simple recurrent unit follows the list below).

  • Access 32 lectures & 4 hours of content 24/7
  • Get introduced to the Simple Recurrent Unit, also known as the Elman unit
  • Extend the XOR problem as a parity problem
  • Explore language modeling
  • Learn Word2Vec to create word vectors or word embeddings
  • Look at the long short-term memory unit (LSTM), & gated recurrent unit (GRU)
  • Apply what you learn to practical problems like learning a language model from Wikipedia data
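
As a preview of the simple recurrent (Elman) unit covered early in the course, here is a minimal NumPy sketch of its forward pass over one sequence. It is illustrative only; the weight names and shapes are our assumptions, not the course's code.

```python
import numpy as np

def elman_forward(X, Wx, Wh, bh, h0):
    """Simple (Elman) recurrent unit: h_t = tanh(x_t Wx + h_{t-1} Wh + bh)."""
    h, states = h0, []
    for x_t in X:  # one time step at a time; each state feeds into the next
        h = np.tanh(x_t @ Wx + h @ Wh + bh)
        states.append(h)
    return np.array(states)

rng = np.random.default_rng(0)
T, D, M = 6, 3, 5  # sequence length, input dimension, hidden units
X = rng.standard_normal((T, D))
Wx, Wh = rng.standard_normal((D, M)), rng.standard_normal((M, M))
states = elman_forward(X, Wx, Wh, np.zeros(M), np.zeros(M))
print(states.shape)  # (6, 5): one hidden state per time step
```
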

Details & Requirements

  • Length of time users can access this course: lifetime
  • Access options: web streaming, mobile streaming
  • Certification of completion not included
  • Redemption deadline: redeem your code within 30 days of purchase
  • Experience level required: all levels, but you must have some knowledge of calculus, linear algebra, probability, Python, Numpy, and be able to write a feedforward neural network in Theano and TensorFlow.
  • All code for this course is available for download here, in the directory rnn_class

Compatibility

  • Internet required

Course Outline

  • Introduction and Outline
    • Outline of this Course (2:55)
    • Review of Important Deep Learning Concepts (3:31)
    • Where to get the Code (1:49)
  • The Simple Recurrent Unit
    • Architecture of a Recurrent Unit (4:39)
    • Prediction and Relationship to Markov Models (5:14)
    • Unfolding a Recurrent Network (1:56)
    • Backpropagation Through Time (BPTT) (4:17)
    • The Parity Problem - XOR on Steroids (4:32)
    • The Parity Problem in Code using a Feedforward ANN (15:05)
    • Theano Scan Tutorial (12:40)
    • The Parity Problem in Code using a Recurrent Neural Network (15:14)
    • On Adding Complexity (1:16)
  • Recurrent Neural Networks for NLP
    • Word Embeddings and Recurrent Neural Networks (5:01)
    • Word Analogies with Word Embeddings (2:25)
    • Representing a sequence of words as a sequence of word embeddings (3:14)
    • Generating Poetry (4:23)
    • Generating Poetry in Code (part 1) (19:23)
    • Generating Poetry in Code (part 2) (4:34)
    • Classifying Poetry (3:39)
    • Classifying Poetry in Code (16:42)
  • Advanced RNN Units
    • Rated RNN Unit (3:25)
    • RRNN in Code - Revisiting Poetry Generation (8:49)
    • Gated Recurrent Unit (GRU) (5:17)
    • GRU in Code (6:28)
    • Long Short-Term Memory (LSTM) (4:30)
    • LSTM in Code (8:14)
    • Learning from Wikipedia Data (6:57)
    • Learning from Wikipedia Data in Code (part 1) (17:56)
    • Learning from Wikipedia Data in Code (part 2) (8:38)
    • Visualizing the Word Embeddings (11:06)
  • Appendix
    • How to install Numpy, Scipy, Matplotlib, Pandas, IPython, Theano, and TensorFlow (17:22)



Access: Lifetime | Content: 4.5 hours | Lessons: 40

Natural Language Processing with Deep Learning in Python

The Complete Guide on Deriving & Implementing Word2Vec, GLoVe, Word Embeddings & Sentiment Analysis

By Lazy Programmer | in Online Courses

In this course, you'll explore advanced natural language processing: the field of computer science and AI concerned with interactions between computers and human language. Over the course, you'll learn four NLP architectures, explore classic NLP problems like parts-of-speech tagging and named entity recognition, and use recurrent neural networks to solve them (a toy word-analogy sketch follows the list below). By course's end, you'll have a firm grasp of natural language processing and its many applications.

  • Access 40 lectures & 4.5 hours of content 24/7
  • Discover Word2Vec & how it maps words to a vector space
  • Explore GLoVe's use of matrix factorization, the same technique behind many recommendation systems
  • Learn about recursive neural networks which will help solve the problem of negation in sentiment analysis
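
As a preview of the word-analogy idea the course builds toward, here is a toy NumPy sketch. The vectors below are made up for illustration; in the course you train real embeddings with Word2Vec and GLoVe.

```python
import numpy as np

# Toy word vectors; real ones come from training Word2Vec or GLoVe
emb = {
    "king":   np.array([0.8, 0.65, 0.10]),
    "queen":  np.array([0.8, 0.05, 0.70]),
    "man":    np.array([0.1, 0.70, 0.10]),
    "woman":  np.array([0.1, 0.10, 0.70]),
    "prince": np.array([0.7, 0.60, 0.15]),
}

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

# The classic analogy: king - man + woman should land nearest to queen
target = emb["king"] - emb["man"] + emb["woman"]
best = max((w for w in emb if w not in ("king", "man", "woman")),
           key=lambda w: cosine(emb[w], target))
print(best)  # queen
```
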

Details & Requirements

  • Length of time users can access this course: lifetime
  • Access options: web streaming, mobile streaming
  • Certification of completion not included
  • Redemption deadline: redeem your code within 30 days of purchase
  • Experience level required: advanced, but you must have some knowledge of calculus, linear algebra, probability, Python, Numpy, and be able to write a feedforward neural network in Theano and TensorFlow.
  • All code for this course is available for download here, in the directory nlp_class2

Compatibility

  • Internet required

Course Outline

  • Outline, Review, and Logistical Things
    • Introduction, Outline, and Review (2:42)
    • Where to get the code / data for this course (2:00)
  • Word Embeddings and Word2Vec
    • What is a word embedding? (10:00)
    • Using pre-trained word embeddings (2:17)
    • Word analogies using word embeddings (3:51)
    • TF-IDF and t-SNE experiment (12:24)
    • Word2Vec introduction (5:07)
    • CBOW (2:19)
    • Skip-Gram (3:30)
    • Negative Sampling (7:36)
    • Why do I have 2 word embedding matrices and what do I do with them? (1:36)
    • Word2Vec in Code with Numpy (part 1) (19:49)
    • Word2Vec in Code with Numpy (part 2) (1:53)
    • Converting a sequence of word indexes to a sequence of word vectors (3:14)
    • How to update only part of a Theano shared variable (5:29)
    • Word2Vec in Code with Theano (9:57)
  • Word Embeddings using GLoVe
    • Recommender systems and matrix factorization tutorial (11:08)
    • GLoVe - Global Vectors for Word Representation (4:12)
    • GLoVe in Code - Numpy Gradient Descent (16:48)
    • GLoVe in Code - Theano Gradient Descent (3:50)
    • GLoVe in Code - Alternating Least Squares (4:42)
    • Visualizing country analogies with t-SNE (4:25)
    • Hyperparameter Challenge (2:19)
  • Using Neural Networks to Solve NLP Problems
    • Parts-of-Speech (POS) Tagging (5:00)
    • Parts-of-Speech Tagging Baseline (15:18)
    • Parts-of-Speech Tagging Recurrent Neural Network (13:05)
    • Parts-of-Speech Tagging Hidden Markov Model (HMM) (5:58)
    • Named Entity Recognition (NER) (3:01)
    • Named Entity Recognition Baseline (5:54)
    • Named Entity Recognition RNN (2:19)
    • Hyperparameter Challenge II (2:13)
  • Recursive Neural Networks (Tree Neural Networks)
    • Data Description for Recursive Neural Networks (6:52)
    • What are Recursive Neural Networks / Tree Neural Networks (TNNs)? (5:41)
    • Building a TNN with Recursion (4:47)
    • Trees to Sequences (6:39)
    • Recursive Neural Network in Theano (18:34)
    • Recursive Neural Tensor Networks (6:22)
    • Recursive Neural Network in TensorFlow with Recursion (4:12)
  • Appendix
    • How to install Numpy, Scipy, Matplotlib, Pandas, IPython, Theano, and TensorFlow (17:22)




Terms

  • Instant digital redemption

15-Day Satisfaction Guarantee

We want you to be happy with every course you purchase! If you're unsatisfied for any reason, we will issue a store credit refund within 15 days of purchase.