'Math basics (was Averaging)'
1997\10\03@092009 by

PICkles:

While writing some averaging code, I started realizing my
grounding in the theory of how binary math works is very fuzzy.  It
seems like I worked through binary multiplication some 20 years ago
and then promptly forgot how it worked.

Could one of you kind folks let me know if this is on the right
track?

One of the ways to implement a running averaging algorithm is

(Old average) *256 - (Old Average) + (New data point)

This gives a result that is 256 times the actual average, which
would be quite useful in my case.  Why divide? Or, if I really need
the divided result, just use the high byte of the result.

Multiplication by 256 is just shifting left 8 times, right?  In this
case a one byte number shifted left 8 times becomes a two byte
number with the low byte being zero.  No actual shifting required.

If (Old average)*256  is stored in AVGHI,AVGLO

the formula becomes

AVGHI,AVGLO  -  AVGHI + (NEW DATA POINT)

It would be a teensy bit more accurate if I could add in a single
bit based on rounding (AVGLO/256).  This could be computed by
comparing AVGLO to 128 decimal and setting 1 or 0.

No need to call a multiply routine, just double precision
addition and subtraction routines.
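[The update above can be sketched in a few lines of Python -- a rough
model of the two-byte arithmetic, with AVGHI/AVGLO held as a pair of
Python ints:]

```python
def update_average(avg_hi, avg_lo, sample):
    """One step of AVGHI,AVGLO - AVGHI + (NEW DATA POINT).

    The 16-bit pair (avg_hi, avg_lo) holds 256 * (old average),
    so the high byte IS the old average -- no shifting needed.
    """
    total = (avg_hi << 8) | avg_lo              # (old average) * 256
    total = (total - avg_hi + sample) & 0xFFFF  # - old avg + new sample
    return total >> 8, total & 0xFF             # new AVGHI, AVGLO

hi, lo = 0, 0
for _ in range(2000):                     # feed a constant input
    hi, lo = update_average(hi, lo, 100)
rounded = hi + (1 if lo >= 128 else 0)    # the AVGLO >= 128 rounding trick
print(hi, rounded)                        # converges to 100 100
```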

Does this make sense?

Best Regards,

Lawrence Lile
On Fri, 3 Oct 1997 08:18:00 +0000, you wrote:

>PICkles:
>
>While writing some averaging code, I started realizing my
>grounding in the theory of how binary math works is very fuzzy.  It
>seems like I worked through binary multiplication some 20 years ago
>and then promptly forgot how it worked.
>
>Could one of you kind folks let me know if this is on the right
>track?
>
>One of the ways to implement a running averaging algorithm is
>
>(Old average) *256 - (Old Average) + (New data point)
>

Why are you subtracting the old average? I must be missing something
here (too many ways of thinking about things). Here's how I implement
an average of the last 256 samples:

Reserve 2 bytes, add the new sample to the 16 bit number, grab the
MS byte as the average. Is this what you're trying to do?
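[A sketch of the two-byte accumulator approach as described, assuming
the accumulator starts from zero and exactly 256 samples are summed;
then the MS byte of the 16-bit sum really is sum/256:]

```python
samples = [37] * 256             # any 256 one-byte samples
acc = 0                          # 16-bit accumulator (the 2 reserved bytes)
for s in samples:
    acc = (acc + s) & 0xFFFF     # 16-bit add
avg = acc >> 8                   # MS byte = sum / 256
print(avg)                       # 37
```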

Brian
----------------------------------------------------------------------
Nexus Computing : DigiTemp temperature sensors for Linux, DOS, Win95
http://www.eskimo.com/~nexus
======================================================================

Lawrence Lile wrote:

> One of the ways to implement a running averaging algorithm is
>
> (Old average) *256 - (Old Average) + (New data point)

and Brian C. Lane <PICLISTMITVMA.MIT.EDU> replied:

> Why are you subtracting the old average? I must be missing
> something here (too many ways of thinking about things). Here's how
> I implement an average of the last 256 samples:
>
> Reserve 2 bytes, add the new sample to the 16 bit number, grab the
> MS byte as the average.

Brian:

Your method works if you start with the 16-bit accumulator set to
0000, then add ONLY 256 values.  Lawrence's method lets you
accumulate ANY NUMBER of samples while keeping a more-or-less
accurate running average of just the PREVIOUS 256 samples.

-Andy

=== Andrew Warren - fastfwdix.netcom.com
=== Fast Forward Engineering - Vista, California
=== http://www.geocities.com/SiliconValley/2499

> >One of the ways to implement a running averaging algorithm is
> >
> >(Old average) *256 - (Old Average) + (New data point)
> >
>
>   Why are you subtracting the old average? I must be missing
>   something

This is the same as saying 255*(old average).  I'm trying to cheat
and not use a "real" general purpose multiply routine, just a left
shift 8 places, from a single byte to the high byte of my (old
average) number.

That gets us into trouble, though.  Other people have posted that this
method throws away the fractional part (no floating point math), and
will result in an increasing error over time.

If you are only interested in 256 data points, and you zero the
average and start over again, this might work fine.

> here (too many ways of thinking about things). Here's how I
> implement an average of the last 256 samples:
>
>   Reserve 2 bytes, add the new sample to the 16 bit number, grab the
> MS byte as the average. Is this what you're trying to do?

Very close.  What you are describing is the same as (257)*(Old
average) - (Old average) + (New data point) .  This would give a
running average of 257 points.
Best Regards,

Lawrence Lile

Bob Beuge wrote:
>
> I think you're getting real close. Unfortunately, no matter how you
> try to round the answer you will introduce errors because you don't
> have infinite precision with any digital representation of the
> average. By comparing AVGLO to 128 and rounding you are just moving
> the probable error by one place. Eventually the errors will cause
> drift.
>

I think my approach works great if you are only interested in 256
data points.  Bombs if you are interested in the long term average.
Clear out the average after 256 points, start over from zero.  The
application I am working on will work fine with this approach.

{Quote hidden}

I've used a similar technique, assigning a number of weights to
possible data points and adding or subtracting from an average to get
to a setpoint.  With a lot of fiddling and hand tuning, this can be
VERY stable in the face of HORRENDOUS noise.  Maybe even better than
averaging from a theoretical standpoint.

{Quote hidden}

Best Regards,

Lawrence Lile

1997\10\08@123220 by
{Quote hidden}

Best Regards,

Lawrence Lile
Lawrence Lile <lileltoastmaster.com> wrote:

> > Reserve 2 bytes, add the new sample to the 16 bit number, grab the
> > MS byte as the average. Is this what you're trying to do?
>
> Very close.  What you are describing is the same as (257)*(Old
> average) - (Old average) + (New data point) .  This would give a
> running average of 257 points.

Lawrence:

I think you (or I) may have misunderstood the paragraph you quoted.
It seems to me that what he's doing is simply accumulating samples
and ALWAYS dividing by 256, no matter how many samples have actually
been taken.

This method isn't equivalent to a running average of 257 points...
In fact, it's only accurate right after the 256th sample has been
taken, and its accuracy falls off dramatically both before and after
that point.
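[The falloff is easy to check with a constant input, say 80: the high
byte reads low before sample 256 and high after it. A quick Python
illustration:]

```python
# Accumulate-and-take-the-high-byte with a constant input of 80,
# sampling the "average" at a few points along the way:
acc = 0
readings = []
for n in range(1, 513):
    acc = (acc + 80) & 0xFFFF    # 16-bit accumulator
    if n in (128, 256, 384, 512):
        readings.append((n, acc >> 8))
print(readings)  # [(128, 40), (256, 80), (384, 120), (512, 160)]
```

Only at n == 256 does the high byte read the true average (80).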

-Andy

=== Andrew Warren - fastfwdix.netcom.com
=== Fast Forward Engineering - Vista, California
=== http://www.geocities.com/SiliconValley/2499
Andrew Warren wrote:
{Quote hidden}

Maybe this explanation can muddle whatever little understanding
remains. (And I thought this thread would die before I felt an urge to
respond. Especially since Bob Blick succinctly expressed the concepts
earlier.)

The last time the "averaging" thread was around I posted some stuff
about IIR filters. Without going into detail, this is the pertinent
equation:

avg = (m*avg + n*sample) /(m+n)

If you wish to give the average more weight than the current sample
then you can do something like this:
avg = (3*avg + sample)/4

Or, if you want faster response:
avg = (avg + 3*sample)/4

In Lawrence's case, m = 255 and n = 1. He's got a whole lot of weight
on the average. To say that this is a 256 sample running average is
not correct, though.
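[Scott's update, sketched in integer math -- note that the division
truncates, which matters on a PIC just as it does here:]

```python
def iir_update(avg, sample, m=3, n=1):
    """avg = (m*avg + n*sample) / (m+n), in integer math.

    Bigger m weights the old average (smoother, slower);
    bigger n weights the new sample (faster response).
    """
    return (m * avg + n * sample) // (m + n)

avg = 0
for _ in range(50):
    avg = iir_update(avg, 100)   # avg = (3*avg + sample)/4
print(avg)  # 97 -- truncation stops it a little short of 100
```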

In Brian's case, the "averaging" function as he described it
is like this:

total = total + sample;
avg = total/256

It's not recursive as Lawrence suggests. Furthermore, it works
the way Andy described it last week: it doesn't. Maybe you left
something out of your explanation, Brian.

---------------
If you don't have it already, get "Numerical Analysis for Scientist
and Engineers" by Richard Hamming (I'm recalling this title from
memory, so please forgive me if it's wrong). I don't have the ISBN
number handy. The publisher is Dover which means the book is
inexpensive.

Hamming has a very interesting chapter on the frequency response of
recursive type functions. (He actually discusses integration instead
of averaging, but the concept is applicable). He explains the subject
in a way which I found easily understandable. I was amazed at the
difference in the frequency responses of Simpson's 1/3 rule vs the
"rectangular rule" integrating methods. When I get some time...
I'll try to explain the trade offs between weighting the average
and sample. That is, unless someone else wishes to explain it.

Scott
{Quote hidden}

Aha!, so this method would only really work if you took 256 samples,
then made a decision, then flushed the average and started over.
Best Regards,

Lawrence Lile
Yesterday I made a reference to Richard Hamming's book. Here are
the (correct) details:

"Numerical Methods for Scientists and Engineers" 2nd Edition
ISBN 0-486-65241-6

I paid $14.95 for mine about 5 years ago.

My reference was to chapter 36, "Integrals and Differential
Equations". Specifically, section 36.3 covers "The Transfer
Function Approach to Integration Formulas". The math is a little
thick, but compared to other technical authors Hamming provides
copious explanations.

Scott
On Wed, 8 Oct 1997 15:14:38 -0800, you wrote:

>This method isn't equivalent to a running average of 257 points...
>In fact, it's only accurate right after the 256th sample has been
>taken, and its accuracy falls off dramatically both before and after
>that point.

Geez, do I feel dumb. Of course you're right, I'm not at all sure
what I was thinking, you have to clear the accumulator after the 256th
sample for the method I described to work... The bad thing is that now
I cannot remember how I wrote that averaging routine at work last week
-- I'd better check it Monday.

Thanks for straightening me out guys.

Brian

----------------------------------------------------------------------
Nexus Computing : DigiTemp temperature sensors for Linux, DOS, Win95
http://www.eskimo.com/~nexus
======================================================================
