Latest revision as of 15:32, 8 December 2008

We will start this from the beginning with the series:

$ 1+r+r^2+r^3+\cdots+r^N=\frac{1}{1-r}-\frac{r^{N+1}}{1-r} $

From here we substitute $ r=-x^2 $ to get

$ 1-x^2+x^4-x^6+\cdots+(-1)^Nx^{2N}=\frac{1}{1+x^2}-\frac{(-1)^{N+1}x^{2(N+1)}}{1+x^2} $

Now integrate both sides from 0 to 1 with respect to x.

$ \int_0^1\left(1-x^2+x^4-x^6+\cdots+(-1)^Nx^{2N}\right)dx=\int_0^1\frac{1}{1+x^2}dx-\int_0^1\frac{(-1)^{N+1}x^{2(N+1)}}{1+x^2}dx $

$ 1-1/3+1/5-1/7+\cdots+\frac{(-1)^N}{2N+1}=\frac{\pi}{4}+\int_0^1\frac{(-1)^{N}x^{2(N+1)}}{1+x^2}dx $

Notice that $ \lim_{N\to\infty}\int_0^1\frac{(-1)^{N}x^{2(N+1)}}{1+x^2}dx=0 $, so the more terms we use on the left side of the equation, the closer we get to $ \frac{\pi}{4} $. This integral can therefore be called the error function.

For the sake of space we will give labels for several parts.

$ S(N)=1-1/3+1/5-1/7+\cdots+\frac{(-1)^N}{2N+1} $

$ E(N)=\int_0^1\frac{(-1)^{N}x^{2(N+1)}}{1+x^2}dx $

$ S(N)=\frac{\pi}{4}+E(N) $

We can call S(N) the sum and E(N) the error, since as you add more and more terms to the sum, the error becomes closer and closer to zero.
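If you want to check this numerically, here is a small Python sketch (my own addition, not part of the original derivation; the function names `S` and `E` are just labels):

```python
import math

def S(N):
    """Partial sum 1 - 1/3 + 1/5 - ... + (-1)^N / (2N + 1)."""
    return sum((-1)**n / (2*n + 1) for n in range(N + 1))

def E(N):
    """Error of the partial sum, defined by S(N) = pi/4 + E(N)."""
    return S(N) - math.pi / 4

# The error alternates in sign and shrinks in magnitude as N grows.
print(E(9), E(10), E(11))
```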

Just how quickly does this sum approach $ \frac{\pi}{4} $? To find out, we can estimate the error for any N by the following comparison:

for $ 0\le x\le 1 $

$ 0\le x^2\le 1 $

$ 1\le 1+x^2 \le 2 $

$ \frac{1}{2}\le \frac{1}{1+x^2}\le 1 $

so

$ \frac{x^{2(N+1)}}{1+x^2}\le x^{2(N+1)} $

And finally

$ \begin{align} |E(N)|&=\left|\int_0^1\frac{(-1)^{N}x^{2(N+1)}}{1+x^2}dx\right|\\ &=\int_0^1\frac{x^{2(N+1)}}{1+x^2}dx\\ &\le\int_0^1x^{2(N+1)}dx \end{align} $

Integrating the estimate gives us:

$ |E(N)|\le\frac{1}{2(N+1)+1}\le\frac{1}{2N} $

Wow, the sum of N terms can really only be guaranteed to be within $ \frac{1}{2N} $ of $ \frac{\pi}{4} $. That's a lot of terms we have to take to guarantee even two digits of accuracy. And it's even worse when we multiply everything by four to estimate pi.
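To see just how slow this is, here is a quick numerical check (again my own Python sketch, assuming the definition of S(N) above):

```python
import math

def S(N):
    """Partial sum 1 - 1/3 + 1/5 - ... + (-1)^N / (2N + 1)."""
    return sum((-1)**n / (2*n + 1) for n in range(N + 1))

# Verify the guarantee |S(N) - pi/4| <= 1/(2N) over a range of N.
for N in range(1, 301):
    assert abs(S(N) - math.pi / 4) <= 1 / (2 * N)

# Even 100 terms only pin pi/4 down to about the third decimal place.
print(abs(S(100) - math.pi / 4))
```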

We need a faster way. There's lots of faster ways. But let's say we're too lazy to find another method. We just want to work with this series. What can we do to make it converge to $ \frac{\pi}{4} $ faster?

First, let's notice that the sums alternate: every other sum is too great, then too little. $ \frac{\pi}{4} $ is always between two consecutive sums. Dr. Bell noticed this and posed the challenge of exploiting this property to build a new estimate that converges more quickly. He said we should try taking the average of two consecutive sums. Since the target lies between the two sums, the average of the sums should yield a new estimate that is closer than either of the other two. The challenge was to prove this true.

So we notice that for

$ N=\text{odd}, E(N)<0 $

$ N=\text{even}, E(N)>0 $

Now we add two consecutive sums together and divide by two to get the average:

$ \begin{align} \frac{S(N) + S(N+1)}{2} &= \frac{2\frac{\pi}{4}+E(N)+E(N+1)}{2}\\ &=\frac{\pi}{4}+\frac{E(N)+E(N+1)}{2} \end{align} $

So now we have a new error term, $ \frac{E(N)+E(N+1)}{2} $, which still goes to zero:

$ \begin{align} \lim_{N\to\infty}\frac{E(N)+E(N+1)}{2}&=\lim_{N\to\infty}\frac{E(N)}{2}+\lim_{N\to\infty}\frac{E(N+1)}{2}\\ &=0+0\\ &=0 \end{align} $

And so the average still goes to $ \frac{\pi}{4} $ as N goes to infinity because:

$ \begin{align} \lim_{N\to\infty}\frac{S(N) + S(N+1)}{2}&=\lim_{N\to\infty}\frac{\pi}{4}+\frac{E(N)+E(N+1)}{2}\\ &=\frac{\pi}{4} + 0 \\ &= \frac{\pi}{4}\end{align} $

Okay, so the new series still approaches $ \frac{\pi}{4} $, but does it approach it more quickly? Again, we can use error estimation to predict the convergence of the series.

$ |E(N)|\le\frac{1}{2(N+1)+1}\le\frac{1}{2N} $

$ |E(N+1)|\le\frac{1}{2(N+2)+1}\le\frac{1}{2N+2} $

Also remember that either E(N) or E(N+1) is negative and that $ |E(N)|>|E(N+1)| $ for all N. So if we sum them, the absolute value of the sum will be the difference of their absolute values.

$ |E(N)+E(N+1)|=|E(N)|-|E(N+1)|\le\frac{1}{2N}-\frac{1}{2N+2}=\frac{2}{(2N)(2N+2)} $

Finally, the new error function is this sum divided by two.

$ |E'(N)|=\frac{|E(N)+E(N+1)|}{2}\le\frac{\frac{2}{(2N)(2N+2)}}{2}=\frac{1}{(2N)(2N+2)} $

Wow! By taking the average of two consecutive sums we can guarantee that the new error will be less than $ \frac{1}{(2N)^2} $.
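A quick numerical confirmation of this bound (my own Python sketch, assuming the definition of S(N) above):

```python
import math

def S(N):
    """Partial sum 1 - 1/3 + 1/5 - ... + (-1)^N / (2N + 1)."""
    return sum((-1)**n / (2*n + 1) for n in range(N + 1))

# The averaged estimate and its guaranteed error bound 1/((2N)(2N+2)).
for N in range(1, 201):
    average = (S(N) + S(N + 1)) / 2
    assert abs(average - math.pi / 4) <= 1 / ((2 * N) * (2 * N + 2))

# Compare the plain sum against the averaged estimate at N = 10.
print(abs(S(10) - math.pi / 4), abs((S(10) + S(11)) / 2 - math.pi / 4))
```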

So taking the mean of two consecutive sums is in fact much more accurate than the simple series, especially for large N. Oh, but what if we were to take the mean of two consecutive means? Does the same principle apply? Let's review.

The first average worked because for any two consecutive sums, one would be too high and one too low. Furthermore, the absolute value of the second error would be less than the absolute value of the first. Because of these two important characteristics, when we take the average of two consecutive sums, the absolute value of the new error is half the difference of the absolute values of the old errors. Upon further investigation, we see that since the absolute value of the preceding error is greater than the absolute value of the second error, the new error of the average will have the same sign as the first error. Or in other words:

$ \text{For N even, }E'(N)>0 $

$ \text{For N odd, }E'(N)<0 $

And from our new error estimates we see that $ |E'(N)|>|E'(N+1)| $

Hooray! The same characteristics that allowed us to take averages of the original series and find a new estimate that was more accurate than both of the previous estimates are now present in this new method. So let's try to take the average of two consecutive averages from the beginning and see what happens.

$ \begin{align} \dfrac{\dfrac{S(N)+S(N+1)}{2}+\dfrac{S(N+1)+S(N+2)}{2}}{2}&=\frac{S(N)+2S(N+1)+S(N+2)}{4}\\ &=\frac{\pi+E(N)+2E(N+1)+E(N+2)}{4}\\ &=\frac{\pi}{4}+\frac{E(N)+2E(N+1)+E(N+2)}{4} \end{align} $

Once again we get a new error term, which also goes to zero: since each error goes to zero, their sum goes to zero as well. But again, how fast does it go to zero? Let's do some tinkering.

Remember from earlier that $ E'(N) = \frac{E(N)+E(N+1)}{2} $. So the average of two consecutive mean errors is

$ \begin{align} \frac{E'(N)+E'(N+1)}{2}&=\frac{\frac{E(N)+E(N+1)}{2}+\frac{E(N+1)+E(N+2)}{2}}{2}\\ &=\frac{E(N)+2E(N+1)+E(N+2)}{4}\end{align} $

Great! That's the same as the error of the new estimate. So the error of the mean of two consecutive means is the same as the mean of the errors of two consecutive means. Keeping track of what level the errors are at is getting confusing. To make it more clear which error we're speaking of, let's use a subscript on the error function E(N) to denote how many means have been taken. This will come in handy later. So

$ E_0(N)=\text{error of original series}. $

$ E_1(N)=\frac{E_0(N)+E_0(N+1)}{2} $

$ E_2(N)=\frac{E_1(N)+E_1(N+1)}{2} $

$ E_k(N)=\frac{E_{k-1}(N)+E_{k-1}(N+1)}{2} $

Subscripts on the series function, S(N), can be used in the same way.

So, back to calculating the new error, $ E_2(N) $:

$ |E_2(N)|=\frac{|E_1(N)+E_1(N+1)|}{2} $

Using the characteristics outlined above, we can make this next step

$ \begin{align} \frac{|E_1(N)+E_1(N+1)|}{2}&=\frac{|E_1(N)|-|E_1(N+1)|}{2}\\ &\le\frac{\frac{1}{(2N)(2N+2)}-\frac{1}{(2N+2)(2N+4)}}{2}\\ &=\frac{\frac{2N+4-2N}{(2N)(2N+2)(2N+4)}}{2}\\ &=\frac{2}{(2N)(2N+2)(2N+4)}\end{align} $

$ |E_2(N)|\le\frac{2}{(2N)(2N+2)(2N+4)}\le\frac{2}{(2N)^3} $

Wow, so this is even faster convergence! Now we're taking N to the third! And this is all with just the partial sums up through S(N+2) and some adding and dividing skills.
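The two-level average can be checked numerically as well (my own hedged sketch, not part of the original):

```python
import math

def S(N):
    """Partial sum 1 - 1/3 + 1/5 - ... + (-1)^N / (2N + 1)."""
    return sum((-1)**n / (2*n + 1) for n in range(N + 1))

# Average of two consecutive averages: (S(N) + 2 S(N+1) + S(N+2)) / 4,
# checked against the bound 2/(2N)^3 derived above.
for N in range(1, 201):
    estimate = (S(N) + 2 * S(N + 1) + S(N + 2)) / 4
    assert abs(estimate - math.pi / 4) <= 2 / (2 * N) ** 3
```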

I don't think I need to continue doing this to convince you that as we take more terms and more averages, we get a better estimate of $ \frac{\pi}{4} $. As we continue to take averages, the pattern suggests that the error can be bounded by roughly:

$ |E_k(N)|\le\frac{k}{(2N)^{k+1}} $

So we can get really fast convergence in not many terms. By reviewing the process of taking averages, we realize that k is in fact the number of partial sums after S(N) that we need in order to form this new average. So we need N+k+1 terms in all (since there is a 0th term).

But part of the reason for showing that taking averages makes the series converge faster is to find an easier way to approximate $ \pi $. Taking all these averages doesn't sound very easy. Well, it's actually very easy. We can just use a row of Pascal's triangle as the coefficients of the sums and then divide by the sum of those coefficients to get the average.

For instance, say we want to find $ S_4(N) $, the estimate after four levels of averaging.

The 4th row of Pascal's triangle is 1 4 6 4 1. Use these as the coefficients of the sums like this:

$ S_4(N)=\frac{S(N)+4S(N+1)+6S(N+2)+4S(N+3)+S(N+4)}{16} $
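As an illustration, here is a Python sketch of this formula (my own; `S4` is just a label for the four-level average):

```python
import math

def S(N):
    """Partial sum 1 - 1/3 + 1/5 - ... + (-1)^N / (2N + 1)."""
    return sum((-1)**n / (2*n + 1) for n in range(N + 1))

def S4(N):
    """Four levels of averaging: Pascal row 1 4 6 4 1, divisor 16."""
    return (S(N) + 4*S(N+1) + 6*S(N+2) + 4*S(N+3) + S(N+4)) / 16

# Even starting at N = 1, five partial sums give about three correct digits.
print(S4(1), math.pi / 4)
```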

To make matters even easier, we realize that the k levels of averaging combine $ 2^k $ sums in total (counted with multiplicity), which is exactly why we divide by $ 2^k $. So let's wrap this up by writing a generalized formula for this method.

Suppose we start at term N and want to use k more terms to make k averages. And let $ P(S_k(N)) $ mean that you use the kth row of Pascal's triangle as the coefficients for the terms S(N), S(N+1), ..., S(N+k) respectively.

$ S_k(N)=\frac{P(S_k(N))}{2^k}=\frac{\pi}{4}+E_k(N) $

Where

$ |E_k(N)|\le\frac{k}{(2N)^{k+1}} $
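Putting the whole method together, here is a generalized Python sketch (my own addition; it uses `math.comb` for the Pascal coefficients, and `S_k` is just a label):

```python
import math

def S(N):
    """Partial sum 1 - 1/3 + 1/5 - ... + (-1)^N / (2N + 1)."""
    return sum((-1)**n / (2*n + 1) for n in range(N + 1))

def S_k(k, N):
    """k levels of averaging: weight S(N)..S(N+k) by row k of Pascal's
    triangle and divide by 2**k (the sum of those coefficients)."""
    weighted = sum(math.comb(k, j) * S(N + j) for j in range(k + 1))
    return weighted / 2**k

# Each additional level of averaging sharpens the estimate of pi/4.
for k in range(5):
    print(k, abs(S_k(k, 5) - math.pi / 4))
```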

And that concludes the exploration of taking the averages of averages of alternating series. Further exploration, I believe, would show that this method could be generalized to work for any alternating series whose terms decrease in absolute value toward zero, since those are the characteristics that make the method work for this series.
