
# Expressiveness and an introduction to neural networks

This lab has two goals:

  - Give you a very brief introduction to Keras, a Python package for building and training neural networks, and
  - Help you gain some intuition about how simple dense neural networks can be structured to approximate functions \(f:\mathbb{R}\rightarrow \mathbb{R}\).

## Keras

As always, we will use a Colab notebook in our work. Your first task is to define a neural network that represents the following graph:

Follow the notebook and make sure to graph the function \(\mathcal{N}\) represented by the network you have defined.
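
If Keras is new to you, defining and graphing such a network takes only a few lines. The sketch below is illustrative only: the hidden-layer width and the randomly initialized weights are placeholders, not the network that matches the lab's graph.

```python
import numpy as np
import matplotlib.pyplot as plt
from tensorflow import keras

# A dense network N: R -> R with one hidden layer. The width (3) and the
# random initial weights are placeholders, not the lab's target network.
model = keras.Sequential([
    keras.Input(shape=(1,)),
    keras.layers.Dense(3, activation="sigmoid"),
    keras.layers.Dense(1),  # output neuron with no activation
])

# Graph the function N currently represented by the network.
x = np.linspace(-10.0, 10.0, 500).reshape(-1, 1)
plt.plot(x, model.predict(x, verbose=0))
plt.xlabel("x")
plt.ylabel("N(x)")
plt.show()
```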

## Expressiveness

Below is a sequence of functions in increasing order of difficulty. For each, your task is to define a neural network representation with one hidden layer. For simplicity, use the sigmoid activation function on the hidden layer neurons and no activation function on the output neuron.
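
One convenient workflow (a sketch, not the required approach) is to build the two-layer model once and then set its weights by hand with `set_weights`. The helper below, whose name and signature are my own, is reused in the examples that follow.

```python
import numpy as np
from tensorflow import keras

def one_hidden_layer_net(hidden_weights, hidden_biases, out_weights, out_bias):
    """Build N(x) = sum_j out_weights[j] * sigmoid(hidden_weights[j] * x
    + hidden_biases[j]) + out_bias, with one hidden neuron per weight."""
    k = len(hidden_weights)
    model = keras.Sequential([
        keras.Input(shape=(1,)),
        keras.layers.Dense(k, activation="sigmoid"),
        keras.layers.Dense(1),  # no activation on the output neuron
    ])
    model.set_weights([
        np.asarray(hidden_weights, dtype="float32").reshape(1, k),  # hidden kernel, shape (1, k)
        np.asarray(hidden_biases, dtype="float32"),                 # hidden biases, shape (k,)
        np.asarray(out_weights, dtype="float32").reshape(k, 1),     # output kernel, shape (k, 1)
        np.asarray([out_bias], dtype="float32"),                    # output bias, shape (1,)
    ])
    return model
```

Each function below can then be realized purely by choosing the weights.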

  1. A translated sigmoid. It should be centered at \(x=2\) and be \(4\) units in height.
  2. A translated sigmoid, but with a steeper ramp. Again, it should be centered at \(x=2\) and be \(4\) units in height.

     What is the relationship between the weight and bias that keeps the ramp centered at \(x=2\)?

  3. Another sigmoid. It should be centered at \(x=4\) and be \(4\) units in height, but face in the opposite direction.

  4. A bump function. It should be \(4\) units in height.

  5. A two-bump function. One bump should be \(4\) units in height, the other \(10\).

  6. A two-adjacent-bumps function. One bump should be \(4\) units in height, the other \(10\).
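
As a concrete instance of the pattern (and a partial spoiler for the first item, so attempt it yourself first): a sigmoid centered at \(x=2\) with height \(4\) is \(\mathcal{N}(x) = 4\,\sigma(x-2)\). Assuming the `one_hidden_layer_net` helper sketched earlier:

```python
import numpy as np
import matplotlib.pyplot as plt

# Item 1: N(x) = 4 * sigmoid(x - 2), centered at x = 2 and 4 units tall.
model = one_hidden_layer_net(hidden_weights=[1.0], hidden_biases=[-2.0],
                             out_weights=[4.0], out_bias=0.0)

x = np.linspace(-4.0, 8.0, 400).reshape(-1, 1)
plt.plot(x, model.predict(x, verbose=0))
plt.show()
```

A bump can then be assembled as the difference of two such sigmoids with nearby centers, which is the key idea for the last three items.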

Exercise: Suppose that \(f: [0,1] \rightarrow \mathbb{R}\) is a continuous function. Based on your work above, how would you go about designing a one-hidden-layer neural network that approximates \(f\)?
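
One standard construction, sketched here under assumed names (`bump_approximation` and `steepness` are made up for illustration): partition \([0,1]\) into small subintervals, place a steep sigmoid bump over each one, and scale each bump by \(f\) at the subinterval's midpoint.

```python
import numpy as np
from tensorflow import keras

def bump_approximation(f, n_bumps=25, steepness=200.0):
    """Approximate a continuous f on [0, 1] by a sum of steep sigmoid bumps.

    Over a subinterval [a, b], sigmoid(s*(x - a)) - sigmoid(s*(x - b)) is
    approximately the indicator of [a, b] for large s; scaling it by f at
    the midpoint yields a step-function-style approximation of f.
    """
    edges = np.linspace(0.0, 1.0, n_bumps + 1)
    hidden_w, hidden_b, out_w = [], [], []
    for a, b in zip(edges[:-1], edges[1:]):
        height = f((a + b) / 2)
        hidden_w += [steepness, steepness]            # two hidden neurons per bump
        hidden_b += [-steepness * a, -steepness * b]  # ramps at the two edges
        out_w += [height, -height]
    model = keras.Sequential([
        keras.Input(shape=(1,)),
        keras.layers.Dense(2 * n_bumps, activation="sigmoid"),
        keras.layers.Dense(1),
    ])
    model.set_weights([
        np.array(hidden_w, dtype="float32").reshape(1, -1),
        np.array(hidden_b, dtype="float32"),
        np.array(out_w, dtype="float32").reshape(-1, 1),
        np.zeros(1, dtype="float32"),
    ])
    return model

# For example: net = bump_approximation(lambda x: np.sin(2 * np.pi * x))
```

Increasing `n_bumps` tightens the fit, which is the intuition behind universal approximation results for one-hidden-layer networks.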

Exercise: Suppose that \(f(x) = e^{x-1}\), as graphed below. Find a one-hidden-layer neural network approximation of \(f\). Use the \(\text{ReLU}\) activation function for the neurons in the hidden layer. Graph your results.
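
One possible construction is sketched below; the interval \([0, 2]\) is an assumption standing in for whatever domain the lab's graph shows. Each hidden ReLU neuron contributes a "hinge" at a knot, and each output weight is the change in slope of the piecewise-linear interpolant at that knot.

```python
import numpy as np
import matplotlib.pyplot as plt
from tensorflow import keras

f = lambda x: np.exp(x - 1)

# Piecewise-linear interpolation of f with ReLU hinges at evenly spaced knots.
knots = np.linspace(0.0, 2.0, 9)                 # assumed domain [0, 2]
slopes = np.diff(f(knots)) / np.diff(knots)      # slope on each segment
coefs = np.diff(slopes, prepend=0.0)             # change in slope at each knot

model = keras.Sequential([
    keras.Input(shape=(1,)),
    keras.layers.Dense(len(knots) - 1, activation="relu"),
    keras.layers.Dense(1),
])
model.set_weights([
    np.ones((1, len(knots) - 1), dtype="float32"),  # neuron i computes ReLU(x - knots[i])
    (-knots[:-1]).astype("float32"),
    coefs.reshape(-1, 1).astype("float32"),
    np.array([f(knots[0])], dtype="float32"),       # value at the left endpoint
])

x = np.linspace(0.0, 2.0, 400).reshape(-1, 1)
plt.plot(x, f(x), label="f(x) = exp(x - 1)")
plt.plot(x, model.predict(x, verbose=0), "--", label="ReLU network")
plt.legend()
plt.show()
```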