Backpropagation: A Journey Down the Gradient

Here’s a short story about how a little neural network gets trained. We’ll uncover a bit of the magic of deep learning with a step-by-step example of how to update a single weight. This post goes over the intuition and process of gradient descent and error backpropagation used to train neural networks. General neural network structure and the feed-forward step won’t be covered in detail here.
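As a taste of the kind of weight update the post walks through, here is a minimal sketch of one gradient-descent step for a single sigmoid neuron. The input, target, weight, and learning rate below are hypothetical illustrative values, not the post's actual worked example.

```python
import numpy as np

# Toy setup (hypothetical values): one input, one weight, squared-error loss.
x = 1.5        # input
y_true = 0.8   # target output
w = 0.4        # current weight
lr = 0.1       # learning rate

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Forward pass
z = w * x
y_pred = sigmoid(z)
loss = 0.5 * (y_pred - y_true) ** 2

# Backward pass: chain rule, dL/dw = dL/dy * dy/dz * dz/dw
dL_dy = y_pred - y_true
dy_dz = y_pred * (1 - y_pred)   # derivative of the sigmoid
dz_dw = x
grad = dL_dy * dy_dz * dz_dw

# One gradient-descent update of the single weight
w = w - lr * grad
print(f"loss={loss:.4f}, grad={grad:.4f}, updated w={w:.4f}")
```

Repeating this forward/backward/update loop over many examples is, in essence, what training the network means.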

Variational Inference in Theory

This post will explain what Variational Inference is about (TL;DR: it’s a method for approximating the elusive posterior distribution). We’ll get past those tricky, hairy math details with some nicer-to-read, color-coded hairy math. Enjoy 😏 Why Variational Inference? Before we dive in, first a quick recap of Bayesian inference. Remember, Bayesian inference is a statistical method that uses Bayes' theorem to update the probability of our hypothesis as we get more data, and it lets us compute the posterior probability of our model parameters.
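In standard notation (the post's own color-coded derivation may use different symbols), the posterior over parameters θ given data D comes from Bayes' theorem, and Variational Inference sidesteps the intractable evidence integral by searching a family Q for the closest approximating distribution:

```latex
p(\theta \mid D) = \frac{p(D \mid \theta)\, p(\theta)}{p(D)},
\qquad
p(D) = \int p(D \mid \theta)\, p(\theta)\, d\theta

q^{*}(\theta) = \arg\min_{q \in \mathcal{Q}} \;
\mathrm{KL}\big(q(\theta) \,\|\, p(\theta \mid D)\big)
```

The denominator p(D) is what usually makes the exact posterior "elusive"; minimizing the KL divergence turns inference into an optimization problem over q.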

An exploration of Gaussian Processes (Part I)

We’ll explore Gaussian Processes (GPs) through examples written mostly in Python. You can find the Jupyter notebook for the images and the Octave code here. If you are not familiar with regression or multivariate Gaussians (MVGs), you should brush up on those topics first. This post goes into the intuition and implementation of GPs and how to sample from GP priors using 1D data; Part 2 will cover 2D input data and prediction.
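To give a flavor of the prior-sampling part, here is a minimal sketch of drawing functions from a zero-mean GP prior over 1D inputs. The RBF (squared-exponential) kernel and its hyperparameters here are illustrative assumptions, not necessarily the choices made in the notebook.

```python
import numpy as np

def rbf_kernel(x1, x2, lengthscale=1.0, variance=1.0):
    """Squared-exponential (RBF) covariance between two sets of 1D points."""
    sqdist = (x1[:, None] - x2[None, :]) ** 2
    return variance * np.exp(-0.5 * sqdist / lengthscale**2)

# 1D test inputs and the prior covariance over their function values
X = np.linspace(-5, 5, 100)
K = rbf_kernel(X, X)

# Draw a few functions from the zero-mean GP prior; the jitter term keeps
# the covariance matrix numerically positive definite.
jitter = 1e-8 * np.eye(len(X))
samples = np.random.multivariate_normal(mean=np.zeros(len(X)),
                                        cov=K + jitter, size=3)
# Each row of `samples` is one function drawn from the prior, evaluated at X.
```

Plotting each row of `samples` against `X` gives the familiar smooth random curves whose wiggliness is controlled by the kernel's lengthscale.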