So here is my first monthly review video – I decided to make this video just to share with you what’s been going on over the last few weeks in my life as a professional maths tutor. Sometimes I get some really interesting questions from my students during lessons and I thought that it would be a good idea to share some of those questions and their answers with you.

In this video, for example, I discuss weak and strong forms of proof by induction – usually A Level Further Maths students will only really get familiar with weak induction but strong induction is a lifesaver in some cases where weak induction just wouldn’t cut it. I also talk about functions as I was asked some really good questions about functions in one of my lessons when I was doing transformations of graphs with a GCSE student of mine.

I made a couple of videos over the last month – one was about how to improve your GCSE Maths and the other was how to improve your A Level Maths. There will be another couple of videos over the next month so keep an eye out for them.

Other things that I touch on are a couple of books that I read over the last month. The first was Alan Sugar’s Autobiography – I know, I know, it’s not exactly a maths book but I read it anyway. I also read Surely You’re Joking Mr. Feynman by Richard Feynman, which is another autobiography, and I started reading, or should I say working through, The Works of Archimedes (that’s the famous Greek scientist Archimedes).

And lastly I introduce my new friend the Soroban (Japanese abacus) which I’ve started to learn to use. I’ve only been learning for a few weeks or so but things are going in the right direction. I’ve found a link that is really useful for learning – it doesn’t teach you how to use a soroban, but it throws up random addition problems that you have to work out on the soroban. It’s called Flash Anzan and it has been really useful so far in getting me used to the various combinations that you need to know to use the soroban.

This is an example of a very simple real function that is only differentiable a finite number of times.

Let $f:\mathbb{R} \to \mathbb{R}$ be a function defined by $f(x)=x|x|$. The aim is to show that this function is differentiable but that it is not twice differentiable. Notice that $f(x)$ can also be written as
$$ f(x) = \left\{
\begin{array}{l l}
x^2 & \quad \text{if $x\geq0$}\\
-x^2 & \quad \text{if $x<0$} \end{array} \right.$$ The graph of this function looks as follows

The graph of y=x|x|
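If you want to reproduce a plot like this yourself, here is a minimal sketch in Python using numpy and matplotlib (just one convenient choice of tools, not necessarily how the plots on this page were made):

```python
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(-2, 2, 400)
y = x * np.abs(x)              # f(x) = x|x|

plt.plot(x, y)
plt.axhline(0, color="grey", linewidth=0.5)
plt.axvline(0, color="grey", linewidth=0.5)
plt.title("y = x|x|")
plt.show()
```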

It is clear that $f(x)$ is differentiable for $x\!>\!0$ and $x\!<\!0$ with derivative $f^{\prime}(x)=2x$ and $f^{\prime}(x)=-2x$ respectively. We only need to check that $f(x)$ is differentiable at $x=0$.

By definition a function $g:I \to \mathbb{R}$ where $I \subset \mathbb{R}$ is differentiable at a point $x \in I$ if the limit
$$\lim_{h\to 0}\frac{g(x+h)-g(x)}{h}$$
exists. $g$ is called differentiable if it is differentiable at every point $x \in I$.

Let’s check that this limit exists for $f$ when $x=0$
$$\lim_{h\to 0}\frac{f(0+h)-f(0)}{h}=\lim_{h\to 0}\frac{(0+h)|0+h|}{h}$$
$$=\lim_{h\to 0}\frac{h|h|}{h}=\lim_{h\to 0}|h|=0$$

So $f(x)$ is indeed differentiable at $x=0$ and we can write the derivative of $f(x)$ as $f^{\prime}(x)=2|x|$. The graph of the derivative of $f$ looks as follows

The graph of 2|x|, the derivative of x|x|

We see that $f^{\prime}(x)$ is differentiable when $x \neq 0$ but when we try to find the derivative of $f^{\prime}$ at $x=0$ we have
$$\lim_{h\to 0^{+}}\frac{f^{\prime}(0+h)-f^{\prime}(0)}{h}=\lim_{h\to 0^{+}}\frac{2|h|}{h}=2$$
$$\lim_{h\to 0^{-}}\frac{f^{\prime}(0+h)-f^{\prime}(0)}{h}=\lim_{h\to 0^{-}}\frac{2|h|}{h}=-2$$
The left and right limits are not the same and therefore $f^{\prime}$ is not differentiable at $x=0$. The conclusion is that $f$ is differentiable but not twice differentiable.
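If you want a quick numerical sanity check of these limits (a sketch, not a proof), you can watch the difference quotient of $f$ at $0$ and the one-sided difference quotients of $f^{\prime}$ at $0$ as $h$ shrinks:

```python
import numpy as np

f = lambda x: x * np.abs(x)     # f(x) = x|x|
fp = lambda x: 2 * np.abs(x)    # f'(x) = 2|x|

for h in [0.1, 0.01, 0.001, 0.0001]:
    df = (f(h) - f(0)) / h                  # tends to 0, so f'(0) = 0
    dfp_right = (fp(h) - fp(0)) / h         # tends to +2
    dfp_left = (fp(-h) - fp(0)) / (-h)      # tends to -2
    print(f"h = {h:<7} f: {df:+.4f}  f' from the right: {dfp_right:+.1f}  f' from the left: {dfp_left:+.1f}")
```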

It is not much more difficult to show (induction is possibly the easiest way) that the function given by $h(x)=x^{n}|x|$ is $n$-times differentiable but not $(n+1)$-times differentiable.

Interpolation is widely used in mathematics and in the sciences – mathematicians tend to concentrate on the theoretical side of things, and the methods are then used by scientists with the confidence that what they are using works.

What is interpolation? Suppose you are measuring the temperature outside on a particular day: you record the temperature at certain times, but what about the times in between? We can’t record everything, yet we might need to know the temperature at one of these intermediate times – this is where interpolation comes in. Interpolation is a way of “joining the dots” within (not outside) a dataset so that estimates can be made about its behaviour. There are many different methods for doing this, each with its own pros and cons.
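For a concrete (and completely made-up) version of the temperature example, the simplest form of interpolation just joins consecutive readings with straight lines; numpy's np.interp function does exactly this:

```python
import numpy as np

# hypothetical readings: hours after midnight and temperatures in degrees Celsius
times = np.array([6.0, 9.0, 12.0, 15.0, 18.0])
temps = np.array([8.0, 12.5, 17.0, 16.0, 11.5])

# estimate the temperature at 10:30, a time we never actually measured
print(np.interp(10.5, times, temps))   # 14.75, halfway between the 9:00 and 12:00 readings
```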

In a previous post I talked about the limitations of Euler’s method – here I’m going to talk about the limitations of polynomial interpolation, specifically the Lagrange representation, and one way that the problem can be partially resolved. Don’t misunderstand me – the Lagrange representation is very useful, but, as with almost any numerical technique, it has its own problems.

Let’s look at the function $f(x)=\dfrac{1}{1+25x^{2}}$ on the interval $[-1,1]$. This is what it looks like

A graph of f(x) drawn using SAGE Math

Now given $n+1$ distinct points $x_{0}, x_{1}, \ldots , x_{n}$ in $[-1,1]$ and their corresponding $y$-values $y_{0}, y_{1}, \ldots , y_{n}$, the Lagrange representation is defined as $$P_{n}(x)=\sum_{j=0}^{n}y_{j}\ell_{j}(x)$$ where $$\ell_{j}(x)=\prod_{i=0,\,i\neq j}^{n}\dfrac{x-x_{i}}{x_{j}-x_{i}}$$
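Here is a direct, deliberately naive Python translation of this formula: nothing clever like barycentric weights, just the definition above.

```python
def lagrange_interpolate(x, xs, ys):
    """Evaluate the Lagrange interpolating polynomial P_n at x,
    given nodes xs = [x_0, ..., x_n] and values ys = [y_0, ..., y_n]."""
    total = 0.0
    for j in range(len(xs)):
        # ell_j(x) = product over i != j of (x - x_i) / (x_j - x_i)
        ell = 1.0
        for i in range(len(xs)):
            if i != j:
                ell *= (x - xs[i]) / (xs[j] - xs[i])
        total += ys[j] * ell
    return total

# sanity check: three points of y = x^2 should be reproduced exactly
print(lagrange_interpolate(1.5, [0.0, 1.0, 2.0], [0.0, 1.0, 4.0]))   # 2.25
```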

It can be proved that this representation is unique and that $P_{n}(x_{i})=y_{i}$ for each $i$, but I don’t want to get sidetracked with this. So going back to the function $f(x)$ – first I’m going to choose $5$ equally spaced points along the interval $[-1,1]$. The resulting interpolating polynomial looks as follows (I am not going to write out the interpolating polynomials explicitly because the expressions are long and they don’t really tell us anything anyway)

Lagrange representation of f(x) with 5 equally spaced points

The interpolating curve is shown in red and $f(x)$ is shown in blue. This is not bad; after all, we only have $5$ points to work with, so we might expect that as the number of points increases we get a more accurate picture – right? Well… no. As the number of points increases we have to increase the degree of the interpolating polynomial, and if we choose $10$ equally spaced points on the interval $[-1,1]$ we get this picture

Lagrange representation of f(x) with 10 equally spaced points

Things are starting to go a bit awry – maybe if we increase the number of points even further then things will start to settle down. Let’s look at what happens when we have $16$ equally spaced points.

Lagrange representation of f(x) with 16 equally spaced points

Maybe that wasn’t such a good idea. Things are getting out of control: as the number of interpolation points (and consequently the degree of the interpolating polynomial) increases, the interpolating polynomial oscillates wildly, taking extremely large positive and negative values near the edges of the interval $[-1,1]$. This behaviour gets worse as $n$ increases, and the interpolating polynomial grows without bound near the edges of the interval – this is known as Runge’s phenomenon and it makes the interpolating polynomial practically useless.
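You can also see Runge’s phenomenon numerically, without drawing anything, by watching how the worst-case error behaves as more equally spaced nodes are used: it eventually grows rather than shrinks. A sketch, reusing the lagrange_interpolate function from above:

```python
import numpy as np

f = lambda x: 1.0 / (1.0 + 25.0 * x**2)
grid = np.linspace(-1, 1, 2001)          # fine grid on which to measure the error

for n_points in [5, 10, 16, 20]:
    xs = np.linspace(-1, 1, n_points)    # equally spaced nodes
    ys = f(xs)
    p = np.array([lagrange_interpolate(t, xs, ys) for t in grid])
    print(n_points, "nodes: max |f - P| =", np.max(np.abs(p - f(grid))))
```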

One way around this is to choose a different set of interpolation points. Part of the problem is the equal spacing of the points – to address this, at least in part, we can use Chebyshev nodes instead. Using Chebyshev nodes we get a very different picture – the following diagram shows what things look like when $10$ points are chosen

Lagrange representation of f(x) using 10 Chebyshev nodes

Now compare that to what we had before and we see something that is much better behaved. Has Runge’s phenomenon been eliminated? Mostly, yes – in truth it can never be completely eliminated, but the Chebyshev nodes massively reduce its effects. Runge’s phenomenon does not always occur, but it is something that can go wrong from time to time, so as with all numerical methods you have to take care when applying them to solve problems; there may be another, more suitable method that should be applied.
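For completeness, here is a sketch comparing equally spaced nodes with Chebyshev nodes for the same number of points, again reusing the lagrange_interpolate function from earlier. The nodes are generated with the standard formula $x_{k}=\cos\left(\frac{(2k+1)\pi}{2(n+1)}\right)$ for $k=0,\ldots,n$:

```python
import numpy as np

f = lambda x: 1.0 / (1.0 + 25.0 * x**2)
grid = np.linspace(-1, 1, 2001)

n_points = 10
k = np.arange(n_points)
equal = np.linspace(-1, 1, n_points)                    # equally spaced nodes
cheb = np.cos((2 * k + 1) * np.pi / (2 * n_points))     # Chebyshev nodes

for name, xs in [("equally spaced", equal), ("Chebyshev", cheb)]:
    ys = f(xs)
    p = np.array([lagrange_interpolate(t, xs, ys) for t in grid])
    print(name, "max |f - P| =", np.max(np.abs(p - f(grid))))
```

Running this with larger values of n_points makes the contrast even more striking.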