
Proving error bound of Simpson's rule

I feel like writing about a technique I found online that people use to prove the error bound of Simpson's rule for numerical integration.

Simpson's rule:
\( \int_a^b f(x) dx \) can be approximated as \(\frac{b-a}{6}(f(a) + 4 f(\frac{a+b}{2}) + f(b))\).

You can find more info about this rule on Wikipedia.
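To make the formula concrete, here is a minimal Python sketch (my addition, not from the post); the function name simpson and the test integrand are my own choices.

```python
import math

def simpson(f, a, b):
    # Single-interval Simpson's rule: (b-a)/6 * (f(a) + 4 f((a+b)/2) + f(b)).
    return (b - a) / 6 * (f(a) + 4 * f((a + b) / 2) + f(b))

# Example: integrate sin(x) over [0, pi]; the exact value is 2.
approx = simpson(math.sin, 0.0, math.pi)
print(approx)        # 2*pi/3 ~ 2.0944
print(approx - 2.0)  # ~ 0.094: this is the error we want to bound
```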

The topic of today is: how do you prove the error bound for Simpson's rule? The error of the approximation is given by:
\[ -\frac{1}{90} \left( \frac{b-a}{2} \right)^5 f^{(4)}(c) \]
for some \(c \in (a, b)\).
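As a quick sanity check (my addition, not part of the proof), we can compare the actual error with the bound \( \frac{1}{90}\left(\frac{b-a}{2}\right)^5 \max_{[a,b]} |f^{(4)}| \) for a test function whose fourth derivative is known in closed form; the choice \(f = \sin\) on \([0, \pi]\) is arbitrary.

```python
import math

def simpson(f, a, b):
    # (b-a)/6 * (f(a) + 4 f((a+b)/2) + f(b))
    return (b - a) / 6 * (f(a) + 4 * f((a + b) / 2) + f(b))

# f(x) = sin(x) on [0, pi]: f''''(x) = sin(x), so max |f''''| = 1,
# and the exact integral is 2.
a, b = 0.0, math.pi
actual_error = 2.0 - simpson(math.sin, a, b)
bound = (1 / 90) * ((b - a) / 2) ** 5 * 1.0
print(actual_error, bound)          # ~ -0.0944 vs ~ 0.1063
assert abs(actual_error) <= bound
```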

And I think this is the best proof I have found out there; it is probably the standard treatment for proving such error bounds: https://math.stackexchange.com/questions/1759326/proving-error-bound-on-simpsons-rule-numerical-integration

Since the answer there is quite succinct, here I'm going to expand it for my own understanding and future reference.

Also, something I found quite interesting: it turns out the mean value theorem (MVT) is quite useful for coming up with error bounds such as this one. I want to write it down here:

Mean value theorem (this is not the most precise version):
If \(f\) is continuous on \([a, b]\) and differentiable on \((a, b)\), then there exists \(c \in (a, b)\) such that \( \frac{f(b)-f(a)}{b-a} = f'(c) \).

Also there's this "extended" mean value theorem, also known as "Cauchy's mean value theorem", which roughly says:
If \(f, g\) are continuous on \([a, b]\) and differentiable on \((a, b)\), with \(g'\) nonzero on \((a, b)\), then there exists \( c \in (a, b) \) such that \( \frac{f(b)-f(a)}{g(b)-g(a)} = \frac{f'(c)}{g'(c)} \).

Both can be proved by Rolle's theorem, which states that:
If \(f\) is continuous on \([a,b]\), differentiable on \((a,b)\), and \(f(a) = f(b)\), then there must exist \(c \in (a,b)\) such that \(f'(c) = 0\).

The proofs of all of those are available on Wikipedia.
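A tiny numerical illustration of Cauchy's MVT (my own toy example, not from the post): for \(f(x) = x^3\) and \(g(x) = x^2\) on \([0, 2]\), the ratio of increments is \(8/4 = 2\), and the theorem promises a point \(c\) where \(f'(c)/g'(c)\) equals that ratio.

```python
# Cauchy's MVT on [0, 2] with f(x) = x^3 and g(x) = x^2 (a toy example).
f = lambda x: x**3
g = lambda x: x**2
df = lambda x: 3 * x**2
dg = lambda x: 2 * x

a, b = 0.0, 2.0
ratio_of_increments = (f(b) - f(a)) / (g(b) - g(a))   # = 2.0

# Here f'(c)/g'(c) = 3c/2, so the promised point is c = 4/3.
c = 4.0 / 3.0
print(ratio_of_increments, df(c) / dg(c))             # both print 2.0
```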

OK, let's dive into the proof of the error bound. The technique described here can, I think, also be applied to prove the error bounds for the rectangle rule and the trapezoid rule (I have tried it on the rectangle rule, but not on the trapezoid rule).

What do we want?

We want to bound the following quantity (assuming throughout that \(f\) is four times differentiable):
\[ \int_{a}^{b} f(x) dx - \frac{b-a}{6}( f(a) + 4f((a+b)/2) + f(b) ) \].

Simplify the limits of integration

Instead of dealing with \(a\) and \(b\), the standard treatment, I think, is to rewrite the integral so that it runs from \(-t\) to \(t\). Let me show you how we can do that.

Starting with
\[ \int_a^b f(x) dx \]

Substitute \( x = y + \frac{a+b}{2} \) to obtain:
\[ \int_{(a-b)/2}^{(b-a)/2} f\left(y + \frac{a+b}{2}\right) dy \]

Then, define \(g(y) = f(y + \frac{a+b}{2}) \) to obtain:
\[ \int_{(a-b)/2}^{(b-a)/2} g(y) dy \]

Finally, define \(t = \frac{b-a}{2} \); then we are basically looking at:
\[ \int_{-t}^{t} g(y) dy \].

Also, 
\[\begin{align} \frac{b-a}{6}(f(a) + 4 f((a+b)/2) + f(b)) & = \frac{b-a}{6}(g((a-b)/2) + 4g(0) + g((b-a)/2)) \\ &= \frac{t}{3}(g(-t) + 4g(0) + g(t)) \end{align}\].
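Here is a quick symbolic check of this change of variables (my own sketch using sympy with a concrete integrand; the choice \(f(x) = e^x\) is arbitrary and not part of the original argument):

```python
import sympy as sp

x, y = sp.symbols('x y')
a, b = sp.symbols('a b', real=True)

f = sp.exp(x)                 # an arbitrary concrete test integrand
m = (a + b) / 2
t = (b - a) / 2
g = f.subs(x, y + m)          # g(y) = f(y + (a+b)/2)

# The integral is unchanged by the shift x = y + (a+b)/2 ...
lhs = sp.integrate(f, (x, a, b))
rhs = sp.integrate(g, (y, -t, t))
print(sp.simplify(lhs - rhs))               # 0

# ... and the Simpson expression becomes (t/3)(g(-t) + 4 g(0) + g(t)).
simpson_f = (b - a) / 6 * (f.subs(x, a) + 4 * f.subs(x, m) + f.subs(x, b))
simpson_g = t / 3 * (g.subs(y, -t) + 4 * g.subs(y, 0) + g.subs(y, t))
print(sp.simplify(simpson_f - simpson_g))   # 0
```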

So indeed what we are interested in is the following:
\[ I(t) = \int_{-t}^t g(x) dx - \frac{t}{3}(g(-t) + 4g(0) + g(t)) \]

By the fundamental theorem of calculus, there exists \(G(x)\) such that \(G'(x) = g(x)\), which means \( \int_{-t}^{t} g(x) dx = G(t) - G(-t) \). Hence we can derive the final form of \(I(t)\):
\[ I(t) = G(t) - G(-t) - \frac{t}{3}(g(-t) + 4g(0) + g(t)) \]

Note that
\[\begin{align} I'(t) &= g(t) + g(-t) - \frac{1}{3}(g(t) + g(-t) + 4g(0)) - \frac{t}{3}(g'(t) - g'(-t)) \\  I''(t) &= \frac{1}{3}(g'(t) - g'(-t)) - \frac{t}{3}(g''(t) + g''(-t)) \\  I'''(t) &= -\frac{t}{3}(g'''(t) - g'''(-t)) \end{align}\]

And, more importantly, note that \(I(0) = I'(0) = I''(0) = I'''(0) = 0\).
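These derivatives are easy to get wrong, so here is a small sympy check (my addition) that builds \(I(t)\) for a concrete smooth \(g\) of my choosing, confirms that \(I\) and its first three derivatives vanish at \(t = 0\), and confirms the expression for \(I'''(t)\):

```python
import sympy as sp

t, x = sp.symbols('t x')
g = sp.exp(x)                    # an arbitrary concrete smooth test function

# I(t) = \int_{-t}^{t} g(x) dx - (t/3)(g(-t) + 4 g(0) + g(t))
I = sp.integrate(g, (x, -t, t)) \
    - t / 3 * (g.subs(x, -t) + 4 * g.subs(x, 0) + g.subs(x, t))

# I and its first three derivatives all vanish at t = 0.
for k in range(4):
    print(k, sp.simplify(sp.diff(I, t, k).subs(t, 0)))    # all print 0

# I'''(t) agrees with -(t/3)(g'''(t) - g'''(-t)).
third = -t / 3 * (sp.diff(g, x, 3).subs(x, t) - sp.diff(g, x, 3).subs(x, -t))
print(sp.simplify(sp.diff(I, t, 3) - third))              # 0
```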

MVT Chain

This is the meat of the technique, which is to apply Cauchy's MVT several times. Let me show you in detail the argument for the first iteration, and then I'll show you the chain.

Pick \(h(x) = x^5\), which is continuous and differentiable on the interval. Then by Cauchy's MVT there exists \(t_1 \in (0, t) \) such that
\[ \frac{I(t) - I(0)}{h(t) - h(0)} = \frac{I'(t_1)}{h'(t_1)} = \frac{I'(t_1)}{5{t_1}^4}\].

But since \(I(0) = 0\) and \(h(0) = 0\), what we really have is:
\[ \frac{I(t)}{t^5} = \frac{I'(t_1)}{5{t_1}^4} \].

Repeating this argument in a chain (at each step we apply Cauchy's MVT to the pair \(I^{(k)}\) and \(h^{(k)}\) on \((0, t_k)\), using the fact that \(I^{(k)}(0) = h^{(k)}(0) = 0\)), we have:
\[ \frac{I(t)}{t^5} =  \frac{I'(t_1)}{5{t_1}^4} =  \frac{I''(t_2)}{20{t_2}^3} =  \frac{I'''(t_3)}{60{t_3}^2} \].

where \(t > t_1 > t_2 > t_3 > 0 \).

Also notice that
\[\begin{align} \frac{I'''(t_3)}{60{t_3}^2} &= -\frac{1}{90}\frac{g'''(t_3) - g'''(-t_3)}{{t_3} - (-t_3)} \\    &= -\frac{1}{90} g^{(4)}(t_4) \end{align}\]

by the MVT applied to \(g'''\), where \( t_4 \in (-t_3, t_3)\).

Hence we have
\[ I(t) = -\frac{1}{90}g^{(4)}(t_4)t^5 \].

Substituting \(t = (b-a)/2\) back and noting that \(g^{(4)}(t_4) = f^{(4)}(c)\) with \(c = t_4 + \frac{a+b}{2} \in (a, b)\), we get the desired result. \( \square\)
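To close the loop, here is a numerical sanity check of the final identity (my own sketch, not part of the proof): for a concrete \(f\) we solve the identity for \(f^{(4)}(c)\) and confirm the required value lies strictly between the minimum and maximum of \(f^{(4)}\) on \([a, b]\); the test function \(f(x) = e^x\) on \([0, 1]\) is my choice.

```python
import math

def simpson(f, a, b):
    return (b - a) / 6 * (f(a) + 4 * f((a + b) / 2) + f(b))

# f(x) = exp(x) on [0, 1]: f''''(x) = exp(x), exact integral is e - 1.
f = math.exp
a, b = 0.0, 1.0
error = (math.e - 1.0) - simpson(f, a, b)

# The proof says error = -(1/90) * ((b-a)/2)^5 * f''''(c) for some c in (a, b).
required_f4 = error / (-(1 / 90) * ((b - a) / 2) ** 5)
print(required_f4)                            # ~1.67, between exp(0) = 1 and exp(1) = e
assert math.exp(a) < required_f4 < math.exp(b)
```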
