Machine learning: Feature detection

Feature detection #

In a prior exercise, we found that convolutions can be used to detect features in a signal. The purpose of this exercise is to adapt our findings to the setting of neural networks.

Exercise 1 #

Suppose that we have a vector \(\vec{x}\) whose entries oscillate around \(0\), and we would like to determine whether a feature given by a vector \(\vec{h}\) appears, or nearly appears, among the entries of \(\vec{x}\). Assume that \(\vec{x} \in \mathbb{R}^7\) and \(\vec{h} \in \mathbb{R}^3.\) Using the nodes in the diagram below, design a neural network whose output will give the desired answer. You will have to specify:

  • which edges to draw,
  • weights and biases for each neuron, and
  • appropriate activation functions.
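One way to think about Exercise 1: each window of three consecutive entries of \(\vec{x}\) feeds one neuron whose weights are the entries of \(\vec{h}\), and a bias sets the detection threshold. Here is a minimal NumPy sketch of that idea; the function name `detect_feature` and the threshold factor `0.9` (fire only when the window's dot product with \(\vec{h}\) is close to \(\|\vec{h}\|^2\)) are illustrative choices, not part of the exercise statement.

```python
import numpy as np

def detect_feature(x, h, bias=None):
    """Slide h across x, taking one dot product per window (a 1-D
    cross-correlation layer), then apply a ReLU with a bias chosen so
    that only near-matches to h produce a positive activation."""
    n, k = len(x), len(h)
    # Each hidden neuron sees k consecutive entries of x and uses the
    # entries of h as its weights.
    scores = np.array([x[i:i + k] @ h for i in range(n - k + 1)])
    # Assumed threshold: activate only when the score is within 10% of
    # a perfect match, i.e. score > 0.9 * ||h||^2.
    if bias is None:
        bias = -0.9 * (h @ h)
    activations = np.maximum(scores + bias, 0.0)  # ReLU
    # The feature "appears" if any window's neuron activates.
    return activations, bool(activations.max() > 0)

# A signal containing the feature h = (1, -1, 1) starting at index 2:
x = np.array([0.1, -0.2, 1.0, -1.0, 1.0, 0.2, -0.1])
h = np.array([1.0, -1.0, 1.0])
acts, found = detect_feature(x, h)
```

With \(\vec{x} \in \mathbb{R}^7\) and \(\vec{h} \in \mathbb{R}^3\), this gives five hidden neurons, one per window, matching the shape of the network the diagram asks for.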

Exercise 2 #

Now assume that \(\vec{x} \in \mathbb{R}^7\) and \(\vec{h} \in \mathbb{R}^6.\) Repeat the exercise above, but this time design a network that first detects two different features, each of which lies in \(\mathbb{R}^3\), and then combines this knowledge to detect the presence of \(\vec{h}\).
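Under one reading of Exercise 2, the two \(\mathbb{R}^3\) features are the two halves of \(\vec{h}\): a first layer detects each half at every position, and a second layer fires only when the first half appears at position \(i\) and the second half at position \(i + 3\). The sketch below assumes that reading; the step activations, the `0.9` threshold, and the output bias `-1.5` are illustrative choices.

```python
import numpy as np

def detect_composite(x, h):
    """Layer 1: detector neurons for the two halves h1, h2 of h (each
    in R^3), one per window of x, with a hard-threshold activation.
    Layer 2: an AND-like neuron that fires only when h1 is detected at
    position i and h2 at position i + 3."""
    h1, h2 = h[:3], h[3:]
    windows = np.array([x[i:i + 3] for i in range(len(x) - 2)])
    # Hidden layer: weights h1 (resp. h2), bias -0.9 * ||h_j||^2,
    # Heaviside step activation (fires on a near-match).
    a1 = (windows @ h1 - 0.9 * (h1 @ h1) > 0).astype(float)
    a2 = (windows @ h2 - 0.9 * (h2 @ h2) > 0).astype(float)
    # Output layer: weights (1, 1), bias -1.5, step activation --
    # positive only when BOTH hidden detectors fire.
    out = np.array([a1[i] + a2[i + 3] - 1.5 for i in range(len(a1) - 3)])
    return bool((out > 0).any())

# x contains h = (h1, h2) starting at position 0:
h = np.array([1.0, -1.0, 1.0, -1.0, 1.0, -1.0])
x = np.array([1.0, -1.0, 1.0, -1.0, 1.0, -1.0, 0.0])
```

The output bias \(-1.5\) implements a logical AND of the two binary hidden activations, which is the "combines this knowledge" step.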

Convolutions for 2-D arrays #

We will talk about this in class, but Stanford’s CS 231n has a beautiful technical writeup. Here’s another useful post.
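Before class, it may help to see the 2-D case in code. The sketch below is a plain "valid"-mode 2-D cross-correlation (what deep-learning libraries usually call convolution), written with explicit loops for clarity rather than speed; the function name `conv2d` is ours.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2-D cross-correlation: slide the kernel over the
    image and, at each offset, sum the elementwise products."""
    H, W = image.shape
    kh, kw = kernel.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A 2x2 all-ones kernel just sums each 2x2 patch of the image.
result = conv2d(np.ones((3, 3)), np.ones((2, 2)))
```

Exactly as in the 1-D exercises above, each output entry is the response of one neuron whose weights are the kernel entries, now tied across all spatial positions.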

Testing ideas IRL #

This Colab Notebook allows you to try some of these ideas on the MNIST digit-classification task.