## The Quickest Way to Learn Mathematics

I’m going to let you in on a big secret about how to get good at mathematics. You might not learn this in school but there IS a quick way of learning mathematics; it’s been known for hundreds of years but is seldom talked about, and it’s this – hard work.

If that’s not really what you wanted to hear and you feel disgusted that I would say such a thing, then please do not continue reading this post as I don’t want to waste any more of your time. If you would like to know my reasons for believing that hard work is THE QUICKEST way of learning mathematics then please read on…

Unfortunately many of my students get conned during their school maths classes; they’re told “quick and easy” or “cheaty” methods of doing everything from simple multiplications and percentages to trigonometry and integration. It has to be remembered that these cheaty methods were devised by people who understood the theory in the first place – a good example that springs to mind is the CAST diagram method for solving trigonometric equations (which I don’t encourage using). If these methods are then shown to people who don’t understand the theory from which the cheaty method comes then, sooner or later, you end up with big problems – yet this is done year in and year out in many schools by many teachers and tutors alike in an effort to circumvent the hard work aspect of learning mathematics. Teaching these cheaty methods from the outset eventually leads to spectacular failure and a lack of understanding, as it shows students how to do something in a very limited and narrow range of cases and doesn’t usually provide any kind of flexibility or ability to adapt to unfamiliar situations. Sadly, students are duped into believing that they don’t need to know the theory and develop an unhealthy expectation that maths can always be reduced to cheaty methods.

Here’s the thing – if you understand the theory you can adapt very easily to new situations, solve a wider range of problems and generally enjoy the learning process more because you get more out of it. If you rely on cheaty methods you have to learn a new method of solution for each and every “type” of question that you encounter. To start with this might not be too much of a problem – at GCSE for example you will only really encounter a fairly limited range of possible questions – but further down the line at A Level the doors are flung wide open and if you don’t have some understanding you’re up the proverbial creek without a paddle.

By teaching along these short-sighted lines you encourage an expectation within a student that everything can be reduced to a cheaty method. Which it can’t. From my personal experience as a maths tutor, the largest category of people who fall victim to this way of thinking are those wanting to do the QTS Numeracy test. I’ve lost count of the number of times I have been asked to provide some tuition for the QTS Numeracy test by people insisting that I just tell them all of the “cheaty short-cut methods” for the questions they’ll get on the test (by the way, I don’t know beforehand exactly which questions you will get asked on the test; and even if I did I wouldn’t tell you). I’m happy to show people how to take short-cuts provided that they have a sufficiently high level of understanding in the first place. If they don’t understand the basics then we are both wasting our time and I may as well go and talk to a brick wall for a while, because they will not understand when or how to apply such short-cuts.

I understand what I’m doing when I do maths but it isn’t because I learned all of the short-cut ways of doing everything. Quite the opposite – I learned to understand what I was doing by working hard, and then the cheaty methods became trivial facts; in fact they almost became redundant. By understanding what I’m doing I can see where these cheaty methods come from and how they work – better still, in some cases I can make them even more “cheaty” if I want to. There is NO WAY to cut out the understanding when learning mathematics, and the understanding can only come about through hard work. You have to be prepared to use your own brain to solve problems and not leave yourself at the mercy of some miscellaneous method that you don’t understand, keeping your fingers crossed that you’re using it right and that it will give you the right answer. Is that really a good way to learn?

IMPORTANT!!! By trying to avoid the hard work of learning to do mathematics properly you will end up spending (wasting) more time, energy and effort trying to get to grips with loads of short-cut, cheaty techniques that you don’t have any understanding of and will most probably forget every couple of weeks and have to keep re-learning. So hard work really is the quickest way to learn mathematics – not really what you want to hear is it? But that’s how it is.

There is a place for short-cut methods when it comes to mathematics; they can sometimes take the pain out of an otherwise lengthy and tedious calculation but they should NOT under any circumstances be a complete substitute for learning through hard work to acquire the necessary level of competence. You wouldn’t expect to become a world-champion 100m sprinter without hard work would you? And nor would any sprint coach who knew what they were talking about tell you that you could become world champion without a lot of hard work. Leading people to believe that all mathematics can be simplified to such a point where you just need to follow a nice cheaty method is cruel and if you do it and encourage it then shame on you!

## A Skeptical Look at Statistics

This video shows a few clips from a presentation that I gave almost two years ago now for Leeds Skeptics (thank you to Chris Worfolk for inviting me to give the presentation and to follow in the footsteps of some fantastic speakers that the club has had over the years). The room was quite dark and so the video may not be very clear at times but hey, you get to watch it for free anyway!

The presentation came about through my annoyance with the huge amount of crappy statistics that float around everywhere (and I mean everywhere!) we go; in newspapers, on television, on the internet, advertisements, in our mail, on food packaging and of course the inevitable stream of carefully chosen (but often misleading and in many cases very suspect) statistics that flows out of any mealy-mouthed politician or pressure-group leader that gets a few minutes of airtime on the telly.

Although statistics and probability have good intentions and are a tremendously fascinating area of study (according to recent polls at least) they are wide open to abuses of all kinds. What a surprise! I’m sure most people have come to realise this over the years and may even have become quite passive about it and just accept it. Most of the time I do in all honesty! At the risk of sounding a bit negative, there is very little that can be done to stop statistical misuse; the only way that it can really change is if people choose (all by themselves) to be clear and transparent about the basis of the statistics that they produce. However, it does pay to be more familiar with the sorts of things that go on so that you can make a more informed decision as to whether the statistic that you are given is reasonable or is a big steaming pile of … (hold it right there!!) So I decided to familiarise myself with what goes on a bit more behind the scenes. Although I am quite well acquainted with various statistical methods learned in the academic bubble of university, I was really quite surprised at how easily and how irresponsibly statistics can be manipulated.

Here’s a few ways that everyday statistics might be fudged:

• Choosing a biased sample – this may be deliberate or not in some cases.
• Omitting certain undesirable outcomes – almost as if they never happened.
• Moving the goalposts – changing the significance level of a statistical test after an experiment has been carried out to give the desired result.
• Using a sample size that is too small, which increases the chances of skewed results.
• Incompetence – someone who doesn’t know how to handle statistics isn’t going to be very good at producing reliable ones.

The list goes on. If you are interested in reading more about uses and abuses of statistics here is a list of some good books that I read on the subject – they are aimed at the general reader and so they are not highly technical with pages crammed with jargon and the like. They’re definitely worth a read.

• How to Lie with Statistics – Darrell Huff
• A Mathematician Reads the Newspaper – John Allen Paulos
• Damned Lies and Statistics – Joel Best
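Going back to the sample-size point above – the effect is easy to demonstrate with a quick simulation (the 70% threshold and the sample sizes here are my own arbitrary choices): a fair coin quite often “shows” a strong bias in a small sample, and almost never in a large one.

```python
import random

# A fair coin: how often does a sample show at least 70% heads?
random.seed(1)  # reproducibility

def extreme_rate(sample_size, trials=2000):
    extreme = 0
    for _ in range(trials):
        heads = sum(random.random() < 0.5 for _ in range(sample_size))
        if heads / sample_size >= 0.7:
            extreme += 1
    return extreme / trials

for n in (10, 100, 1000):
    print(n, extreme_rate(n))  # roughly 0.17 for n=10, near 0 afterwards
```

So a survey of ten people can easily “find” a 70/30 split where none exists; the same claim from a sample of a thousand would be far harder to produce by chance alone.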

After the presentation I got talking to some of the audience members (I know; who would have thought that people would come to see a presentation on statistics!) and I was recommended the website of a voluntary organisation called Radical Statistics, which analyses statistics and puts them through their paces to check their credibility. I’m not currently a member of the organisation but I have been on the website regularly and there is some really good information there.

By the way – my whole presentation can be seen on the Worfolk Lectures website, in much better video quality than my own recording I hasten to add.

## Defining Uniform B-Splines

In a previous post I spoke a little bit about B-Splines and in particular, Uniform B-Splines. I didn’t really go into much detail about how they are defined, how we decide what our b-spline is going to look like or even what an explicit expression for a spline would look like given our set of control points.

I decided to type up a LaTeX document which introduces the theory of b-splines. The document looks at b-splines from a practical perspective and so it doesn’t get too bogged down with the analysis side of things but concentrates on the tools required to find explicit expressions for b-splines.

The document contains expressions for piecing together a b-spline. I spent a good number of hours deriving these expressions and they got very messy to deal with but, eventually, persistence prevailed. It is likely that someone, somewhere has already derived these formulae but by doing the work myself I was able to learn so much about these splines that I wouldn’t have appreciated if I had just read someone else’s work.

This is one of the most important things about becoming better as a mathematician – being prepared to spend time with a problem and being willing to make a lot of mistakes until you get things right. Persistence is rarely a bad thing. I make no apology for the length of some of the expressions – I have not made any real attempt to simplify the expressions as I feel that it would be a glorious waste of time to try and do so. If anyone out there would like to have a go at tidying up the expressions then feel free to go right ahead.

## Finite Difference Methods in Two Dimensions

During my final year at Warwick University I did a project on Numerical Weather Forecasting and one of the methods that came up was the method of finite differences. Over the last few weeks I came across finite differences again and saw how they can be used to solve some partial differential equations (PDEs). Here is an example that I found in one of my books and decided to play around with.

Solve, using the method of finite differences, the PDE $$x\dfrac{\partial{f}}{\partial{x}}+(y+1)\dfrac{\partial{f}}{\partial{y}}=0$$ for $0\leq x,y \leq 1$ with $$f(x,0)=x-1$$ $$f(x,1)=\dfrac{x-2}{2}$$ $$f(0,y)=-1$$ $$f(1,y)=-\dfrac{y}{y+1}$$

To solve this problem I used the central difference equations given by

$$\dfrac{\partial{f}}{\partial{x}}\approx \dfrac{f(x+h,y)-f(x-h,y)}{2h}$$

$$\dfrac{\partial{f}}{\partial{y}}\approx \dfrac{f(x,y+k)-f(x,y-k)}{2k}$$

with a step length in both the $x$ and $y$ directions of $\frac{1}{3}$. The domain can be overlaid with a grid as shown in the diagram. This makes the whole problem easier to deal with because now we can see visually what is happening rather than just dealing with everything purely algebraically – there’s no need to make things more difficult than they need to be after all.

The idea is to form a system of four simultaneous equations with $A$, $B$, $C$, and $D$ as the unknown quantities. When the system is solved then the values obtained correspond to the approximate values of $f$ at each of the grid points. Here is my full solution to this problem – PDE Solution Using Finite Differences. This fills in some of the gaps in our knowledge about the function $f$ but there is still a lot of information missing – in particular the points between the grid points.
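As a sketch of how the $4\times 4$ system can be assembled and solved programmatically (the code below is my own illustration, not the worked solution linked above; for comparison it uses $f(x,y)=\dfrac{x}{y+1}-1$, which one can check satisfies both the PDE and all four boundary conditions):

```python
import numpy as np

# Central-difference discretisation of x*f_x + (y+1)*f_y = 0 on [0,1]^2.
# Step 1/3 in both directions gives a 4x4 grid with 4 interior unknowns.
h = 1.0 / 3.0
xs = ys = np.array([0.0, h, 2 * h, 1.0])

def boundary(x, y):
    # The boundary data given in the problem statement.
    if y == 0: return x - 1
    if y == 1: return (x - 2) / 2
    if x == 0: return -1.0
    return -y / (y + 1)          # x == 1

interior = [(i, j) for j in (1, 2) for i in (1, 2)]   # the points A, B, C, D
index = {p: n for n, p in enumerate(interior)}

A = np.zeros((4, 4))
b = np.zeros(4)
for n, (i, j) in enumerate(interior):
    x, y = xs[i], ys[j]
    # x*(f[i+1,j]-f[i-1,j])/(2h) + (y+1)*(f[i,j+1]-f[i,j-1])/(2h) = 0
    for (ii, jj), c in (((i + 1, j),  x / (2 * h)),
                        ((i - 1, j), -x / (2 * h)),
                        ((i, j + 1),  (y + 1) / (2 * h)),
                        ((i, j - 1), -(y + 1) / (2 * h))):
        if (ii, jj) in index:
            A[n, index[(ii, jj)]] += c                 # interior neighbour: unknown
        else:
            b[n] -= c * boundary(xs[ii], ys[jj])       # boundary neighbour: known

f = np.linalg.solve(A, b)
exact = np.array([xs[i] / (ys[j] + 1) - 1 for (i, j) in interior])
print(f)        # approximate values at the four interior grid points
print(exact)    # reference values from f(x,y) = x/(y+1) - 1
```

Even with this very coarse grid the four interior values come out close to the reference values, and refining the step-length pushes them closer still.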

To resolve this we can make the step-lengths smaller and create more interior grid points. This gives us more information but at a price – for example with a step length of $\frac{1}{5}$ in both directions we would have a system of $16$ simultaneous equations in $16$ unknowns – I don’t really fancy sorting that mess out without the help of a computer. And what’s more – as the step-length gets even smaller we get more and more complicated systems with more unknowns so eventually we would reach a point where even a computer would struggle to do anything with the information.

Nevertheless – finite difference methods are used and are useful for all kinds of problems and can provide a very visual way of numerically solving some otherwise very difficult equations.

## Uniform B-Splines

Over the last few days I have been getting interested in basis-splines (or B-Splines). B-Splines are piecewise-polynomial functions $Q(u)$ of degree $n$ – made up of sections of degree-$n$ polynomials attached together – that approximate a polygon (called a control polygon).

The sections of polynomials are fitted together smoothly – how smoothly depends on the degree of the polynomials that make up the spline. For example a spline of degree $3$ is made up of sections of cubic polynomials such that the first and second derivatives of the spline $Q(u)$ are continuous at the attachment points, but the spline will usually fail to be three times differentiable at these points. More generally, a spline $Q(u)$ of degree $n$ will be $n-1$ times differentiable but will usually fail to be $n$ times differentiable at the attachment points.

For example, in the following diagram the control polygon is shown in red and the spline, shown in blue, is piecewise cubic

The attachment points for this spline are at the integer values along the $x$ axis – I chose integer points because it makes the algebra considerably easier but in theory there is nothing restricting me to these points. Strictly speaking this is a uniform B-Spline, which means that the attachment points are at regular intervals; if the attachment points occur on intervals of different lengths then it is a non-uniform spline, and these are also very commonly used.

The following graph is a graph of the first derivative of the above spline

Clearly this curve is continuous. I have fixed the spline so that the attachment points occur at integer values on the $x$-axis and importantly this curve is continuous at each of these points. We are not necessarily interested in what happens in between these points so it is a bit of a bonus that the derivative is continuous at all intermediate points. Now let’s look at the second derivative

Again the graph is continuous but we can see just by looking at the graph that it fails to be differentiable at the attachment points – but this is what we expect to happen. The only way that this spline could be more than twice differentiable would be if it was only made up of a single section – but this wouldn’t be a very interesting spline really…
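This joining behaviour can be checked numerically. The sketch below uses the standard basis-matrix form of a uniform cubic B-spline segment, with made-up one-dimensional control values; the value and the first derivative match where two segments meet (the second derivative could be checked the same way).

```python
# Standard basis-matrix form of one segment of a uniform cubic B-spline:
# Q_i(t) = ((1-t)^3 P_i + (3t^3-6t^2+4) P_{i+1}
#           + (-3t^3+3t^2+3t+1) P_{i+2} + t^3 P_{i+3}) / 6,  t in [0,1]
def segment(P, i, t):
    b0 = (1 - t) ** 3 / 6
    b1 = (3 * t**3 - 6 * t**2 + 4) / 6
    b2 = (-3 * t**3 + 3 * t**2 + 3 * t + 1) / 6
    b3 = t**3 / 6
    return b0 * P[i] + b1 * P[i + 1] + b2 * P[i + 2] + b3 * P[i + 3]

P = [0.0, 2.0, 3.0, 1.0, 4.0]   # made-up 1-D control values

# Where segment 0 ends (t=1) and segment 1 begins (t=0) the values agree...
end0, start1 = segment(P, 0, 1.0), segment(P, 1, 0.0)
# ...and so do the first derivatives (one-sided difference quotients here).
h = 1e-5
d_end0 = (segment(P, 0, 1.0) - segment(P, 0, 1.0 - h)) / h
d_start1 = (segment(P, 1, h) - segment(P, 1, 0.0)) / h
print(end0, start1)      # equal
print(d_end0, d_start1)  # agree to within the difference-quotient error
```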

Splines and spline interpolation are very useful in computer-aided design and particularly in computer games design. There is so much that I have learned and have yet to learn about splines that I can’t possibly fit everything into a single post so I’m sure that I will be coming back to this topic again very soon.

## Derivatives of the function y=x|x|

This is an example of a very simple real function that is only differentiable a finite number of times.

Let $f:\mathbb{R} \to \mathbb{R}$ be a function defined by $f(x)=x|x|$. The aim is to show that this function is differentiable but that it is not twice differentiable. Notice that $f(x)$ can also be written as
$$f(x) = \left\{ \begin{array}{l l} x^2 & \quad \text{if } x\geq 0\\ -x^2 & \quad \text{if } x<0 \end{array} \right.$$ The graph of this function looks as follows. It is clear that $f(x)$ is differentiable for $x\!>\!0$ and $x\!<\!0$ with derivative $f^{\prime}(x)=2x$ and $f^{\prime}(x)=-2x$ respectively. We only need to check that $f(x)$ is differentiable at $x=0$.

By definition a function $g:I \to \mathbb{R}$ where $I \subset \mathbb{R}$ is differentiable at a point $x \in I$ if the limit
$$\lim_{h\to 0}\frac{g(x+h)-g(x)}{h}$$
exists. $g$ is called differentiable if it is differentiable at every point $x \in I$.

Let’s check that this limit exists for $f$ when $x=0$ (noting that $f(0)=0$)
$$\lim_{h\to 0}\frac{f(0+h)-f(0)}{h}=\lim_{h\to 0}\frac{(0+h)|0+h|}{h}$$
$$=\lim_{h\to 0}\frac{h|h|}{h}=\lim_{h\to 0}|h|=0$$

So $f(x)$ is indeed differentiable at $x=0$ and we can write the derivative of $f(x)$ as $f^{\prime}(x)=2|x|$. The graph of the derivative of $f$ looks as follows

We see that $f^{\prime}(x)$ is differentiable when $x \neq 0$ but when we try to find the derivative of $f^{\prime}$ at $x=0$ we have
$$\lim_{h\to 0^{+}}\frac{f^{\prime}(0+h)-f^{\prime}(0)}{h}=\lim_{h\to 0^{+}}\frac{2|h|}{h}=2$$
$$\lim_{h\to 0^{-}}\frac{f^{\prime}(0+h)-f^{\prime}(0)}{h}=\lim_{h\to 0^{-}}\frac{2|h|}{h}=-2$$
The left and right limits are not the same and therefore $f^{\prime}$ is not differentiable at $x=0$. The conclusion is that $f$ is differentiable but not twice differentiable.
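These one-sided limits are easy to confirm numerically – the difference quotients of $f^{\prime}(x)=2|x|$ at $0$ settle on $+2$ from the right and $-2$ from the left:

```python
# One-sided difference quotients of f'(x) = 2|x| at x = 0: the right-hand
# quotient is 2|h|/h = 2 and the left-hand quotient is 2|h|/(-h) = -2.
fprime = lambda x: 2.0 * abs(x)

for h in (1e-1, 1e-3, 1e-6):
    right = (fprime(0 + h) - fprime(0)) / h
    left = (fprime(0 - h) - fprime(0)) / (-h)
    print(h, right, left)  # right stays at 2, left stays at -2
```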

It is not much more difficult to show (induction is possibly the easiest way) that the function given by $h(x)=x^{n}|x|$ is $n$-times differentiable but not $n+1$ times differentiable.

## Topology – An Informal Introduction

A square and a circle, as I imagine you probably know, look completely different – and you would certainly be right to say that they are different. Mathematically, however, a circle and a square share many structural properties and it may come as a surprise to some that in some cases mathematicians may go so far as to not even bother to distinguish between the two – in other words, the circle and the square can be considered one and the same thing. This is where topology comes in…

Topology is a very abstract area of geometry that simplifies many problems that would be very difficult if not impossible to solve. A formal introduction to topology requires knowledge of some basic set-theory and a knowledge of some analysis and metric spaces is usually helpful. I want to give a simple and informal introduction to this fascinating area of mathematics.

Given a set, $X$, a topological space is a pair formed from the set $X$ and a collection $\tau$ of subsets of $X$ (called a topology) that satisfies

• $\emptyset \in \tau$ and $X \in \tau$
• $\tau$ is closed under finite intersections
• $\tau$ is closed under arbitrary unions

Anything (yes anything) that satisfies the above definition is a topological space. The set $X$ may be a region of the $x$-$y$ plane, a $3$-dimensional region of space, or even an abstract collection of objects – the level of abstractness of topological spaces can be demonstrated in the following example of a topological space

Let $X=\{Amy, Boris, Charlie\}$.

One possible topology, $\tau$, for $X$ could be $\{\emptyset, \{Amy\}, \{Amy, Boris\}, X\}$.
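For a finite collection like this the axioms can be checked mechanically – when $\tau$ is finite, closure under pairwise unions and intersections is enough to give closure under arbitrary unions and finite intersections. A sketch (the helper name is my own):

```python
# Check the topology axioms for a finite collection of subsets of X.
# For finite tau, pairwise closure under unions/intersections suffices.
def is_topology(X, tau):
    if frozenset() not in tau or X not in tau:
        return False
    return all(A & B in tau and A | B in tau for A in tau for B in tau)

X = frozenset({"Amy", "Boris", "Charlie"})
tau = {frozenset(), frozenset({"Amy"}), frozenset({"Amy", "Boris"}), X}
print(is_topology(X, tau))  # True
```

Dropping $\{Amy, Boris\}$ from $\tau$ while keeping $\{Amy\}$ and adding $\{Boris\}$ would fail the check, because the union $\{Amy\} \cup \{Boris\}$ would then be missing.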

What has all this got to do with the square and the circle? Well two topological spaces can be considered the same (topologically indistinguishable) if a homeomorphism exists between the two. Homeomorphism is mathematical jargon but in simple terms it is a way of “moulding one space into the shape of another space.” Homeomorphisms can be informally visualised by imagining a circular piece of plasticine. It wouldn’t be too difficult to re-shape the plasticine into the shape of a square and vice-versa – this way of deforming the plasticine obeys the rules of a homeomorphism and therefore the circle and the square are topologically equivalent.

One possible function that takes the square onto the circle is $f(x,y)=\dfrac{(x,y)}{\sqrt{x^{2}+y^{2}}}$
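As a quick sanity check – assuming the square in question has vertices $(\pm 1, \pm 1)$ – every point on the square’s boundary really does land on the unit circle under this map:

```python
import math

# f(x, y) = (x, y)/sqrt(x^2 + y^2) pushes each boundary point of the
# square radially onto the unit circle.
def f(x, y):
    r = math.hypot(x, y)
    return (x / r, y / r)

# sample points along the boundary of the square with vertices (±1, ±1)
pts = ([(1.0, t) for t in (-1.0, -0.5, 0.0, 0.5, 1.0)] +
       [(t, 1.0) for t in (-1.0, -0.5, 0.0, 0.5)] +
       [(-1.0, 0.3), (0.7, -1.0)])
for p in pts:
    u, v = f(*p)
    print(p, round(math.hypot(u, v), 12))  # the image always has norm 1
```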

The word topology doesn’t usually manage to make its way out of university maths departments, yet topology does have practical uses. If you have seen a London underground map before then you have seen a practical application of topology – if you have not seen an underground map before then you can see one here. In reality the tube tracks do not run in perfect straight lines but by representing the tube tracks as straight lines the whole thing is much easier to understand – the map of the London underground is topologically identical to the real thing. In other words, even though they are two different things, they have identical structures.

There are many other examples of topologically equivalent pairs of objects such as

• the interval $(0,1)$ and the real-line $\mathbb{R}$
• a sphere and a cube
• $\mathbb{R}^{2}$ and the punctured sphere ($S^{2}$ with a point removed)

There are also many examples of pairs of objects that have significant structural differences and are not topologically equivalent such as

• A sphere and a torus – these are not homeomorphic because the torus has a hole but the sphere does not; both are shown below for comparison.
• $\mathbb{R}^{2}$ and $S^{2}$ are not homeomorphic
• the letter $A$ is not homeomorphic to the letter $T$

## Polynomial Interpolation – Lagrange Representations

Interpolation is widely used in mathematics and in the sciences – mathematics tends to concentrate on the theoretical side of things and the methods are then used by scientists with the confidence that what they are using works.

What is interpolation? Well, if you are measuring the temperature outside on a particular day then you would take readings at certain times and record them – but what about the times in between? We can’t record everything but we might need to know what the temperature was at one of these intermediate times – this is where interpolation comes in. Interpolation is a way of “joining the dots” within (not outside) a dataset so that estimates can be made about the behaviour. This can be done in hundreds of different ways using different methods, each with its own pros and cons.

In a previous post I talked about the limitations of Euler’s method – well here I’m going to talk about the limitations of polynomial interpolation and specifically the Lagrange representation and one way that the problem can be partially resolved. Don’t misunderstand me – the Lagrange representation is very useful but it, as with almost any numerical technique, has its own problems.

Let’s look at the function $f(x)=\dfrac{1}{1+25x^{2}}$ on the interval $[-1,1]$. This is what it looks like

Now given $n+1$ distinct points $x_{0}, x_{1}, \ldots , x_{n}$ in $[-1,1]$ and their corresponding $y$-values $y_{0}, y_{1}, \ldots , y_{n}$ then the Lagrange representation is defined as $$P_{n}(x)=\sum_{j=0}^{n}y_{j}\ell_{j}(x)$$ where $$\ell_{j}(x)=\prod_{i=0,\,i\neq j}^{n}\dfrac{x-x_{i}}{x_{j}-x_{i}}$$

This representation can be proved to be unique and to satisfy $P_{n}(x_{i})=y_{i}$, but I don’t want to get sidetracked with this. So going back to the function $f(x)$ – first I’m going to choose $5$ equally-spaced points along the interval $[-1,1]$. The resulting interpolating polynomial looks as follows (I am not going to try to write an expression for the interpolating polynomials because they are very difficult to find explicitly and they don’t really tell us anything anyway)

The interpolating curve is shown in red and $f(x)$ is shown in blue. This is not bad, after all we only have $5$ points to work with so we might expect that as the number of points increases we get a more accurate picture – right? Well…no. As the number of points increases we have to increase the degree of the interpolating polynomial and if we choose $10$ equally spaced points on the interval $[-1,1]$ we get this picture

Things are starting to go a bit awry – maybe if we increase the number of points even further then things will start to settle down. Let’s look what happens when we have $16$ equally spaced points.

Maybe that wasn’t such a good idea. Things are now so out of control that, as the number of interpolation points (and consequently the degree of the interpolating polynomial) increases, the interpolating polynomial oscillates wildly between extremely large positive and extremely large negative values near the edges of the interval $[-1,1]$. This behaviour gets so bad as $n$ increases that the interpolating polynomial grows without bound near the edges of the interval – this is known as Runge’s phenomenon and makes the interpolating polynomial practically useless.

One way around this is to choose a different set of interpolation points. One of the problems is the equal spacing of the points – to resolve this in part we can choose Chebyshev nodes. Using these Chebyshev nodes we get a very different picture – the following diagram shows what things look like when $10$ points are chosen

Now compare that to what we had before and we see something that is much better behaved. Has Runge’s phenomenon been eliminated? Mostly, yes – but in truth it can never be completely eliminated; the Chebyshev nodes, however, massively reduce the effects of Runge’s phenomenon. Runge’s phenomenon does not always occur but it is something that can go wrong from time-to-time, so as with all numerical methods you have to take care when applying the method to solve problems; there may be another more suitable method that needs to be applied.
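The comparison above can be reproduced in a few lines – the sketch below interpolates $f$ at $16$ equally spaced nodes and at $16$ Chebyshev nodes and measures the worst error over a fine grid (the node count and test grid are my own choices):

```python
import numpy as np

# Interpolate f(x) = 1/(1 + 25x^2) at equally spaced vs Chebyshev nodes
# and compare the worst-case error over [-1, 1].
f = lambda x: 1.0 / (1.0 + 25.0 * x**2)

def lagrange_eval(nodes, x):
    # Direct evaluation of the Lagrange form P(x) = sum_j y_j * l_j(x).
    ys = f(nodes)
    total = 0.0
    for j, xj in enumerate(nodes):
        others = np.delete(nodes, j)
        total += ys[j] * np.prod((x - others) / (xj - others))
    return total

m = 16                                                    # interpolation points
equi = np.linspace(-1.0, 1.0, m)
cheb = np.cos((2 * np.arange(m) + 1) * np.pi / (2 * m))   # Chebyshev nodes

grid = np.linspace(-1.0, 1.0, 401)
err_equi = max(abs(lagrange_eval(equi, x) - f(x)) for x in grid)
err_cheb = max(abs(lagrange_eval(cheb, x) - f(x)) for x in grid)
print(err_equi, err_cheb)   # the equally spaced error is far larger
```

The equally spaced nodes give an error of several units near the edges of the interval, while the Chebyshev nodes keep the worst error small everywhere.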

## The Limitations of Euler’s Method

There are many examples of differential equations that cannot be solved analytically – in fact, it is very rare for a differential equation to have an explicit solution. Euler’s Method is a way of numerically solving differential equations that are difficult or that can’t be solved analytically. Euler’s method, however, still has its limitations.

For a differential equation $y^{\prime}=f(x,y(x))$ with initial condition $y(x_{0})=y_{0}$ we can choose a step-length $h$ and approximate the solution to the differential equation by defining $x_{n}=x_{0}+nh$ and then for each $x_{n}$ finding a corresponding $y_{n}$ where $y_{n}=y_{n-1}+hf(x_{n-1},y_{n-1})$. This method works quite well in many cases and gives good approximations to the actual solution to a differential equation, but there are some differential equations that are very sensitive to the choice of step-length $h$, as the following demonstrates.

Let’s look at the differential equation $y^{\prime}+100y=100$ with initial condition $y(0)=2$.

This differential equation has an exact solution given by $y=1+\mathrm{e}^{-100x}$, and it is a very good example which demonstrates that Euler’s method cannot be used blindly. Let’s look at what happens for a few different step-lengths.

For the step-length $h=0.019$ we get the following behaviour. The red curve is the actual solution and the blue curve represents the behaviour of the numerical solution given by the Euler method – it is clear that the numerical solution converges to the actual solution so we should be very happy. However, look what happens when the step-length $h=0.021$ is chosen. Again the actual solution is represented by the red line, which on this diagram looks like a flat line because the blue curve gets bigger and bigger as you move along the $x$-axis. So a change of just $0.002$ in the step-length has completely changed the behaviour of the numerical solution. For a step-length $h=0.03$ the graph is worse still – the actual solution can barely be seen and the numerical solution gets out of control very quickly. This solution is completely useless – the scales on the $y$-axis are enormous and increasing the step-length only makes this worse. What has happened?

It can be shown by induction that for $n \in \mathbb{N}$ we have $y_{n}=1+(1-100h)^{n}$. This converges only for $h<0.02$ and diverges for $h>0.02$. $h=0.02$ is a limiting case and gives an oscillating numerical solution that looks as follows. For this particular example, when $h<0.02$ the solution converges faster as the step-length gets closer to $0$, and when $h>0.02$ the solution diverges more rapidly as the step-length increases.
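Here is the experiment in code – a sketch in which I take the right-hand side as $f(x,y)=100-100y$, consistent with the iterate formula $y_{n}=1+(1-100h)^{n}$; the $50$ steps are an arbitrary choice:

```python
# Euler's method for y' = 100 - 100y, y(0) = 2.
def euler(h, steps):
    y = 2.0
    for _ in range(steps):
        y = y + h * (100 - 100 * y)
    return y

# |1 - 100h| < 1, i.e. 0 < h < 0.02, is exactly the convergence condition.
for h in (0.019, 0.021):
    print(h, euler(h, 50))  # h = 0.019 settles near 1; h = 0.021 blows up
```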

So even though we have Euler’s method at our disposal for differential equations this example shows that care must be taken when dealing with numerical solutions because they may not always behave as you want them to. This differential equation is an example of a stiff equation – in other words, one that is very sensitive to the choice of step length. In general as the step-length increases the accuracy of the solution decreases but not all differential equations will be as sensitive to the step-length as this differential equation – but they do exist.

## Phase Portraits of Ordinary Differential Equations

Some differential equations are relatively easy to solve – even though at A-Level this may seem to be the norm, in reality it is very rare for a differential equation to have an explicit solution. The types of differential equations at A-Level can be solved by direct integration or by using a range of nifty techniques:

• making a substitution to change the differential equation into one that can be solved directly;
• separation of the variables for some equations of the form $f(y)y’=g(x)$;
• the integrating factor $\mathrm{e}^{\int P(x) \mathrm{d}x}$ for equations of the form $y^{\prime}+P(x)y=Q(x)$;
• for second-order equations of the form $ay^{\prime \prime}+by’+cy=f(x)$, where $a,b$ and $c$ are constants, finding the complementary function and particular integral and combining the two.

However, these are very special cases. Many differential equations that come up in real-life situations are very non-linear in nature. Non-linear differential equations are generally very difficult to solve, but quite often, it is the behaviour of the solutions that interests us more than the actual solution. We may not be really interested in a particular solution but a family of solutions so that we can easily compare their behaviours particularly for equations that evolve over time. In some cases – even though it may be very difficult or even impossible to find the solution to a differential equation analytically, we can still analyse the behaviour through use of a phase-diagram.

A phase-diagram is a vector field that we can use to visually present the solutions to a differential equation. For example, here is a second-order differential equation (an example that I worked through, which appears in the book by D. W. Jordan and P. Smith titled Nonlinear Ordinary Differential Equations – An Introduction for Scientists and Engineers, Fourth Edition)
$$\ddot{x} = x-x^{2}$$

This second order-differential equation can be separated into a system of first-order differential equations given by $$\dot{x}=y$$ $$\dot{y}=x-x^{2}$$

This is a very common technique for solving differential equations since first-order ODEs are generally easier to solve than second-order ODEs. Note that $$\frac{\mathrm{d}y}{\mathrm{d}x}= \frac{\dot{y}}{\dot{x}}=\frac{x-x^{2}}{y}$$

The phase-portrait of this looks as follows

At each point of the phase-diagram there is attached a vector, represented here by an arrow – the vector tells you which direction to move in. So given a starting point it is simply a case of moving from point to point in the direction that the vectors tell you to move in. Different starting points will give different paths through the phase-portrait, called trajectories, some of which are shown in the following diagram

Each trajectory is described by the equation $3y^{2}=-2x^{3}+3x^{2}+C$ for different values of $C$. The trajectories above are for $C=-8$, $-4$, $-0.5$, $0$, $1$, $2$, $4$ and $6$. Here the horizontal $x$-axis represents the position of a particle and the vertical $y$-axis represents the velocity of the particle – remember that $y=\dot{x}$.
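The trajectory equation can be verified numerically – along any numerically integrated trajectory the quantity $3y^{2}+2x^{3}-3x^{2}$ should stay (very nearly) constant. A sketch using a standard RK4 step and an arbitrary starting point of my own choosing:

```python
# Integrate xdot = y, ydot = x - x^2 with classical RK4 and watch the
# conserved quantity E = 3y^2 + 2x^3 - 3x^2 (constant on each trajectory,
# which is exactly the trajectory equation 3y^2 = -2x^3 + 3x^2 + C).
def rk4_step(x, y, dt):
    f = lambda x, y: (y, x - x * x)
    k1 = f(x, y)
    k2 = f(x + dt / 2 * k1[0], y + dt / 2 * k1[1])
    k3 = f(x + dt / 2 * k2[0], y + dt / 2 * k2[1])
    k4 = f(x + dt * k3[0], y + dt * k3[1])
    return (x + dt / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
            y + dt / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))

E = lambda x, y: 3 * y**2 + 2 * x**3 - 3 * x**2

x, y = 1.0, 0.3          # arbitrary start near the equilibrium at (1, 0)
E0 = E(x, y)
for _ in range(1000):    # integrate up to t = 10 with dt = 0.01
    x, y = rk4_step(x, y, 0.01)
print(E0, E(x, y))       # E barely changes along the trajectory
```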

So now if we start in the top left hand corner of the diagram on the curve corresponding to $C=6$ (the right-most blue line) the arrows tell us to move to the right and down – so our displacement increases as we move to the right but our velocity decreases as we move down. Eventually we cross the $x$-axis and here our velocity is instantaneously zero; then we start moving backwards since our velocity becomes negative as we continue moving downwards, and our displacement decreases as we move to the left. So I have not even calculated the actual solution to the ODE but I still know how the solution behaves.