'[EE] fractional powers in integer math.'
2005\10\14@193144 by

Could I get a hand with simplifying some calculations in integer math?

I would like to calculate some values to plug into simple digital filter
code inside the actual assembler. The assembler supports integer variables
of 32 bits precision. The idea is that the student can put in the actual
values of the R and the C in a basic RC type low pass filter and the code
will load the literal values appropriate for the digital simulation of that
filter.

I am working from this text:

Let us take for example the Simple RC (resistor-capacitor) low pass filter.
This circuit passes frequencies in the input voltage that are lower than
some critical frequency (called the cutoff frequency) determined by the
resistor and capacitor values, while severely attenuating any higher
frequencies.

The output voltage Vout, then, is simply a selectively reduced version of
Vin (input voltage). The resistor drops the output voltage, as does the
voltage that goes into charging C. At first, Vout = a * Vin (where the
constant a is the amount of attenuation caused by R and C). It can be shown
that a=1/RC (where R is in ohms and C is in farads).

The voltage change with time across a capacitor is an exponential function.
If the voltage at time zero is V, then the voltage at time t is given by
Ve^(-at) (a is the same constant as above).

I need to find e^(-at) given the value of at. I can find at pretty easily:
Since t=1 (ms, sec, usec, etc...) at is 1/RC expressed as a fraction of 256.
So for example, the current code has a section like this

at EQU 256* 360000/1000000
movlw at
;for a digital equivalent to an RC filter with a
;360kOhm resistor and a 1uF capacitor

And that puts 92 (hex 5C) into w just dandy.

E^(-at) is the same as (1/e) ^ (at) right?

Now, 1/e is approximately 368/1000 and so I can calculate a power by making
a macro in the assembler that multiplies that value times itself (at) times
and then take that times 256 divided by 1000. The value (at) cannot be more
than 255/256 and probably will not be more than about 192/256.

368^192 exceeds 32 bits by a little tiny bit.

I guess the question is: How does one calculate a fractional power to any
degree of accuracy in 32 bit integer math?

---
James Newton, massmind.org Knowledge Archiver
jamesmassmind.org 1-619-652-0593 fax:1-208-279-8767
All the engineering secrets worth knowing:
http://techref.massmind.org What do YOU know?

James Newtons Massmind wrote:
> Could I get a hand with simplifying some calculations in integer math?

Sounds like floating point math would be more suitable.  My PIC preprocessor
lets you do a bunch of assembly time calculations using floating point, then
convert to integer right before using it in the actual assembly
instructions.

> I would like to calculate some values to plug into simple digital filter
> code inside the actual assembler. The assembler supports integer
> variables of 32 bits precision. The idea is that the student can put in
> the actual values of the R and the C in a basic RC type low pass filter
> and the code will load the literal values appropriate for the digital
> simulation of that filter.

The digital realm is a bit different from the analog realm.  First,
specifying an R and C value is really only specifying a single value since
there is no issue of impedance in the digital filter.  For your purposes,
only the product of R and C matters.  But you do need an additional parameter
that isn't relevant in the analog domain: the filter iteration rate.  Inside
the processor it doesn't make sense to talk about a half life, for example,
of 100 ms.  You can only specify a half life in filter iterations, which may
take a lot less or a lot more time than the analog filter which always
operates in real time.

The simple digital filter

FILT <-- FILT + FF * (NEW - FILT)

is a low pass filter with an exponential decay just like an RC analog
filter.  Note there is only one parameter to adjust.  I call it FF for
"filter fraction".  FF of 0 is an infinitely heavy filter as the output
never changes.  FF = 1 disables all filtering since the input is just passed
to the output.  Values in between simulate real RC low pass filters, within
the constraint of discrete samples of course.

> The voltage change with time across a capacitor is an exponential
> function. If the voltage at time zero is V, then the voltage at time t
> is given by Ve^(-at) (a is the same constant as above).

They failed to mention that the input voltage is assumed to be 0 in this
case.  I don't think any text this sloppy is worth spending time on.

> I need to find e^(-at) given the value of at.

No, you don't.  You are thinking of modeling analog again instead of
embracing digital.  There is no need (and it would take a lot more cycles)
to compute the full equation of the filter each iteration.  Simple
incremental operations performed each iteration as shown above will model
the filter without ever requiring exponentiation.

> E^(-at) is the same as (1/e) ^ (at) right?

Yes.


This sounds like more beating of head against wall.  You can make this easy
on yourself by using the FFTC2 or FFREQ inline functions in my preprocessor.
These both calculate FF given appropriate parameters.  You can then do
additional floating point manipulations until finally ending up with the
integer value you will use in the fixed point multiply by FF.

Another strategy is to make sure FF ends up being 2 ^ -N so that the
multiply by FF becomes just a right shift.  To get an arbitrary "frequency"
response you adjust the filter iteration frequency.

The PREPIC documentation file is at
http://www.embedinc.com/pic/prepic.txt.htm.

*****************************************************************
Embed Inc, embedded system specialists in Littleton Massachusetts
(978) 742-9014, http://www.embedinc.com
Thank you, Olin, for that informative response to a question I didn't ask.

This is NOT about designing professional quality digital filters. Nor is it
about embracing the digital realm.

This is about educating people who understand analog systems and showing
them that the same basic thing can be done in a digital system.

How about we just stick to this question:

How would you calculate e^(-at) in 32 bit integer math?

---
James.

I would use the fact that e^x = SUM(n = 0, inf, [(x^n)/n!]). ie.. e^x =
1+x+(x^2/2!)+(x^3/3!)+(x^4/4!)+---
Now just use -at for x and iterate that sum to desired precision. Then with
a negative exponent, the error would be between 0 and abs(x)^(n+1) /(n+1)!.
So, the more precise you need it, the larger you make n. Using fixed point
for the fractions here is no problem.
That can be implemented in a simple for loop and could be written to avoid
all the multiplies. Lots of times it only takes a few iterations to be
somewhat close. If you're not familiar with summations, I could pseudo code
it.
James
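A sketch of that sum in integer-only C (the 65536 scale and the function name are my choices, not from the post); each term is built from the previous one, so no separate power or factorial routine is needed, and putting in a negative x makes the signs alternate automatically:

```c
/* Approximate e^(-x) where x is supplied as x256/256 (0 <= x256 <= 255),
   returning the result scaled by 65536.  Incremental update:
   term(n) = -term(n-1) * x, then divide by n -- in integers, multiply
   by x256 and divide by 256*n. */
static long exp_neg_frac(long x256)
{
    long sum  = 65536;                    /* n = 0 term: 1.0 */
    long term = 65536;
    for (long n = 1; n <= 8; n++) {
        term = -(term * x256) / (256 * n);
        sum += term;                      /* signs alternate automatically */
    }
    return sum;
}
```

For x256 = 92 (i.e. x about 0.36) this lands within a few counts of e^(-0.359375) * 65536; the intermediate products stay well inside 32 bits because term is at most 65536 and x256 at most 255.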

On 10/14/05, James Newtons Massmind <jamesnewtonmassmind.org> wrote:
James Humes wrote:

>I would use the fact that e^x = SUM(n = 0, inf, [(x^n)/n!]). ie.. e^x =
>1+x+(x^2/2!)+(x^3/3!)+(x^4/4!)+---
> Now just use -at for x and iterate that sum to desired precision. Then with
>a negative exponent, the error would be between 0 and abs(x)^(n+1) /(n+1)!.
>So, the more precise you need it, the larger you make n. Using fixed point
>for the fractions here is no problem.
> That can be implemented in a simple for loop and could be written to avoid
>all the multiplies. Lots of times it only takes a few iterations to be
>somewhat close. If you're not familiar with summations, I could pseudo code
>it.
> James
>
>
>

What he said.
Seriously, I was just going to write about the same thing.

--
Martin Klingensmith
http://wwia.org/
http://nnytech.net/

>
> I would use the fact that e^x = SUM(n = 0, inf, [(x^n)/n!]).
> ie.. e^x =
> 1+x+(x^2/2!)+(x^3/3!)+(x^4/4!)+---
>  Now just use -at for x and iterate that sum to desired
> precision. Then with a negative exponent, the error would be
> between 0 and abs(x)^(n+1) /(n+1)!.
> So, the more precise you need it, the larger you make n.
> Using fixed point for the fractions here is no problem.
>  That can be implemented in a simple for loop and could be
> written to avoid all the multiplies. Lots of times it only
> takes a few iterations to be somewhat close. If you're not
> familiar with summations, I could pseudo code it.
>  James

First, congratulations on a great name.

Now, that "!" in there is a factorial right? So I have to have one macro to
calculate that...
...errr, hang on is that (X^2)/(2!) or (X^2/2)!?

Then the power function x^2 then x^3 and so on. No problem.

Now, x in this case is a negative number... So are we really calculating

(1/e)^(x)

? And does that affect that series? Or do we just put in a negative number
so that every other term of the series turns out negative?

Finally, the input value is expressed as a fraction of 256. E.g. the value
is 92 to represent 92/256 which is about .360 so what we really want to
calculate is
e^(-.36) which is supposed to come out to .697 or 179/256.

The series is then
=1-.36+(.36*.36/2)+(-.36*.36*.36/6)+(.36*.36*.36*.36/24) and so on?
=1-.36+0.0648-0.007776+0.00069984
=0.69772384

Well, would you look at that! It works....

factorial.

But I can't put in 92 and get 179 out.

---
James.


Thanks, sounds like a future planetary governor's name, doesn't it?
I'm not sure why you want to get the exponent out in that way exactly, but
see if this helps..
e^(a/256) = x/256 =>
256*e^(a/256) = x
put a in and get x out.
so, once you calculate e^(a/256) (i.e., a = -92, a/256 = -.36), just shift
the result left 8 times and you have the output x = 178.something, do a
ceiling operation to get the correct number 179.
James
James,

achieve.
Do you want to compute e^x with PIC code or do you want the assembler to
do it? If you're using the PIC, then I'd consider a lookup table with
first order interpolation or a CORDIC implementation. You'll have to
normalize your input (or your 'at') so that a single table can accommodate
a large range. However, for the series approximation
you'll also need to perform some kind of normalization if you wish to keep
just a few terms in the sum. I get suspicious when people claim they need
32-bits for a computation, and it sounds like this is what you're claiming
here!

Scott

Hi Scott.

1/(2^32) ~= 0.23283 * 10^(-9)

Pretty damn good almost for everything :)

WBR Dmitry.

Scott Dattalo wrote:
James Newtons Massmind wrote:
> Thank you, Olin, for that informative response to a question I didn't ask.

You didn't explain the context of the question at all.  As you well know,
most questions here about how to accomplish a specific detail without
context are really the wrong question and therefore answering them directly
is pointless.

> This is NOT about designing professional quality digital filters. Nor
> is it about embracing the digital realm.
>
> This is about educating people who understand analog systems and
> showing them that the same basic thing can be done in a digital
> system.

There are many ways to accomplish this.  My first reaction is that this
could be well served with a simulation on a PC where graphical output could
be useful.  You want to do this in separate hardware in a PIC, so there is
obviously a lot going on and many requirements you're not telling us.

> How about we just stick to this question:

Because I still have a strong suspicion you're asking the wrong question in
the first place.  Until you explain a compelling reason for it, I can't get
away from thinking the right answer is to avoid requiring exponentiation in
the first place.

> How would you calculate e^(-at) in 32 bit integer math?

Why does it need to be integer math?  From your previous post I got the
impression this calculation was going to be done at assembly time.  In that
case you have a whole PC at your disposal, so I don't understand the
restriction on integer math.  On the other hand if you really really need to
do an exponentiation at run time on a PIC, then the method will probably be
very different than for doing it at assembly time on a PC.  Even then, I
don't see why floating point isn't an option.  Too many things about the
"requirements" don't make sense, at least with the limited information
given.  It's not possible to provide a decent answer without more
information.

James Newtons Massmind wrote:
> Now, that "!" in there is a factorial right? So I have to have one
> macro to calculate that...
> ...errr, hang on is that (X^2)/(2!) or (X^2/2)!?

The former.  Note that both the X^N and factorial can be computed
incrementally each term from the previous term.

> But I can't put in 92 and get 179 out.

Right.  This algorithm doesn't lend itself to fixed point in a useful way.
You can't for example scale the input by 256 and then scale the output by
1/256 to compensate.  Compensating the output for a scale factor on the
input is not trivial since you are scaling a logarithm.  In other words, if
you added a constant to the input you could compensate by dividing the
output by a constant.  Multiplying the input by a constant would require
finding the root of the output to that constant to compensate, not to
mention overflowing the 32 bit range very quickly.
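In symbols (my restatement of Olin's point):

```latex
% Adding a constant to the input only multiplies the output by a constant:
e^{-(x + c)} = e^{-x} \, e^{-c}
% ...but scaling the input by a constant requires taking a root to undo:
e^{-cx} = \left(e^{-x}\right)^{c}
```

So a multiplicative input scale like 256 cannot be divided back out of the result afterward; the scale has to be carried through the arithmetic instead.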

Again, why do you really really need to do this with integers, or even do an
exponential in the first place?


On Sat, 2005-10-15 at 02:32 +0400, Dmitriy Kiryashov wrote:
> Hi Scott.
>
> 1/(2^32) ~= 0.23283 * 10^(-9)
>
> Pretty damn good almost for everything :)

I reckon!

If you think about a capacitor discharging through a resistor (which is
along the lines of what James is describing), you can write the voltage
as a function of time to be:

V(t) = Vo *e^(-t/RC)

Now if you wish to evaluate this at equally spaced time intervals,

t = i*DT

You'd write:

V(i*DT) = Vo * e^(-i*DT/RC)
= Vo * (e^(-DT/RC))^i
= Vo * K^i

But the samples are probably computed consecutively. Thus you can write:

V((i+1)*DT) = V(i*DT) * K

In other words, the next sample is the previous sample times a constant. I
see no reason to compute K with 32 bits of resolution, but even if you had
to it'd only be once.

Scott
Scott Dattalo wrote:
>   V(t) = Vo *e^(-t/RC)
>
> Now if you wish to evaluate this at equally spaced time intervals,
>
>   t = i*DT
>
> You'd write:
>
>   V(i*DT) = Vo * e^(-i*DT/RC)
>           = Vo * (e^(-DT/RC))^i
>           = Vo * K^i
>
> But the samples are probably computed consecutively. Thus you can
> write:
>
>   V((i+1)*DT) = V(i*DT) * K

In other words

FILT <-- FILT * K

Where K is some number between 0 and 1 for useful modeling of C discharging
thru R.  This is exactly the standard single pole low pass iterative digital
filter

FILT <-- FILT + FF * (NEW - FILT)

with the input 0 (NEW = 0) and FF = 1 - K.  It's not clear to me whether
James just wants to show decay of a capacitor discharging thru a resistor,
or model a full R/C filter where the input could change.

Not the PIC. The Assembler. The assembler only supports integer math. It
happens to be 32 bit.

The problem is that I have to work with a fractional value. For example .360,
and it is represented as a fraction of 256, e.g. .360 = 92/256, so the value
I have is 92, which means .36

I need to calculate e^(-x)

Even if I had a power function (and I don't) it goes wrong at the point that
I put in 92/256. At that point the assembler returns a value of 0. In
integer math, 92/256 is zero.

So how do you calculate e^(-x) in integer math?

---
James.

> How would you calculate e^(-at) in 32 bit integer math?
>
> Why does it need to be integer math?  From your previous post
> I got the impression this calculation was going to be done at
> assembly time.

The assembler I'm using only supports integer math. No fractions. No
floating point. Just integers. So if you put in .36 it becomes 0. 92/256 =
0. Integer math.

How do you calculate e^(-x) where x is between 1/256 and 192/256 using only
integer math.

---
James.

And that is exactly what the PIC routine is doing. What I need help with is
the computation of K. It is based on at, where t=1 and a=RC, so it is a
fractional value expressed as a part of 256. E.g. a 360KOhm resistor and a
1uF capacitor results in a value of .360, but that is expressed as 92. The
actual assembler code is

at EQU 256* 360000/1000000
movlw at
;for a digital equivalent to an RC filter with a
;360kOhm resistor and a 1uF capacitor

And that ends up with 92 in W. The people who will be playing about with
this code could put in different values for R and C and change the code
generated by the assembler. E.g.

at EQU 256* 470000/1000000
movlw at
;for a digital equivalent to an RC filter with a
;470kOhm resistor and a 1uF capacitor

And that would put a hex 78 decimal 120 into w.

The second part of the routine is what you are talking about below, and that
needs this value K which is derived from e^(-at). Again, t=1 so it is really
just e^(-a)

Again, the problem is that the assembler only supports integer math. So for
example, 92/256=0

So how do you calculate e^(-x) using only integer math?

---
James.

At 09:34 AM 10/15/2005 -0700, you wrote:
> > How would you calculate e^(-at) in 32 bit integer math?
> >
> > Why does it need to be integer math?  From your previous post
> > I got the impression this calculation was going to be done at
> > assembly time.
>
>The assembler I'm using only supports integer math. No fractions. No
>floating point. Just integers. So if you put in .36 it becomes 0. 92/256 =
>0. Integer math.

Okay so you imagine a radix point at an appropriate place. You can move
it around after each intermediate calculation with shifts, provided you
don't overflow. Unfortunately, you've got to deal with "C-like" integer
math and don't have access to the full 64-bits of a multiply, only the
least significant 32 bits.

For example, 0.36 might be represented as 0x00002E14 (0.36 * 32768, rounded
down). With that scale of 32768 the radix point sits at bit 15, right at the
boundary between the MS bytes and the LS 2 bytes.
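A minimal sketch of that idea in C (Q15 here, i.e. a scale of 32768; the helper name is mine): after multiplying two scaled values, a single division moves the radix point back where it belongs.

```c
/* Multiply two fixed-point values scaled by 32768 (Q15), then shift the
   radix point back.  With only 32-bit intermediates, |a*b| must stay
   below 2^31, so both operands should be comfortably under 32768. */
static long q15_mul(long a, long b)
{
    return (a * b) / 32768;
}
```

For example, q15_mul(11796, 11796) (0.36 times 0.36) gives 4246, i.e. about 0.1296 at the same scale.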

>How do you calculate e^(-x) where x is between 1/256 and 192/256 using only
>integer math.

A Taylor series might do it acceptably well.

>Best regards,

Spehro Pefhany --"it's the network..."            "The Journey is the reward"
speffinterlog.com             Info for manufacturers: http://www.trexon.com
Embedded software/hardware/analog  Info for designers:  http://www.speff.com
->> Inexpensive test equipment & parts http://search.ebay.com/_W0QQsassZspeff


> with the input 0 (NEW = 0) and FF = 1 - K.  It's not clear to
> me whether James just wants to show decay of a capacitor
> discharging thru a resistor, or model a full R/C filter where
> the input could change.

The latter, but that code is done and working. If I hand it a correct value
of K, it does its thing just fine.

Actually, the code is based on the diagram I've attached to this email. It
uses two constants, one is the original attenuation represented by at (a=RC)
and the second attenuation is e^(-at). There is a routine that does
fractional multiplication:

;For PIC Microcontrollers
fracmul
CLRW            ;W=0 (RESULT ACCUMULATOR)
fracmul1
BCF status,c
RRF arg1,f      ;ARG1:=ARG1/2
BCF status,c    ; don't copy the carry
RLF arg2,f      ;CY:=MSB(ARG2),
; ARG2:=ASL(ARG2,1)
BTFSC status,c  ;IF CY=0 SKIP THE ADD
ADDWF arg1,w    ;ELSE W:=W+ARG1
MOVF arg2,f     ;SET Z IF ARG2=0
BTFSS status,z  ;IF ARG2 NE 0 REITERATE
GOTO fracmul1
RETURN          ;ELSE RETURN RESULT IN W

So to attenuate the sample by at, we compute at as a fraction of 256 and put
the sample in arg1, the fraction in arg2 and call this routine. That result
is then set aside. Next we calculate the feedback from the last value which
is attenuated by e^(-at)

This is just the previous output value in arg1 and the fraction e^(-at) in
arg2 and a call to the same routine, the results of the two calls are summed
and that is the new output value.
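The two calls and the sum can be sketched in C, modeling fracmul as a multiply by an 8-bit fraction (the modeling and names are mine, assuming the fraction-of-256 convention described above):

```c
#include <assert.h>

/* Model of the PIC fracmul routine: attenuate a sample by f256/256. */
static long fracmul(long sample, long f256)
{
    return (sample * f256) / 256;
}

/* One filter iteration as described: attenuate the new sample by at,
   attenuate the previous output by e^(-at), and sum the two results. */
static long rc_step(long sample, long prev, long at256, long k256)
{
    return fracmul(sample, at256) + fracmul(prev, k256);
}
```

With at256 = 92 and k256 = 178 (the e^(-at) value being sought in this thread), rc_step gives each new output from the current sample and the previous output.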

If I calculate the two values for arg2 and put them in, the code does
exactly as expected and simulates a low pass RC filter having the values of
R and C specified. I can get the assembler to calculate the first value with
the code:

at EQU 256* 360000/1000000
movlw at
;for a digital equivalent to an RC filter with a
;360kOhm resistor and a 1uF capacitor

That puts 92 in w which is 92/256 = 0.360 = RC

So that students may alter the R and C values somewhat and see the results.
But I can't get it to calculate the second value e^(-at)

Olin was exactly right in a previous post. You can't scale it by 256 (or any
other constant) and then de-scale it by the same amount at the end.

There may very well be more efficient ways to do this, but this way is
directly derived from the diagram above and that diagram is very easily
derived from the operation of the standard low pass RC filter circuit.

Again, the point is education, not optimal operation.

---
James.

(attachment: GIF diagram of the filter)


> So how do you calculate e^(-x) using only integer math?
I took a spin through Hacker's Delight, and didn't see anything.

James Newtons Massmind wrote:
> Not the PIC. The Assembler. The assembler only supports integer math.
> It happens to be 32 bit.

But you don't have to live with that.  For example, you could write a bunch
of macros that performed floating point operations on pre-defined assembler
variables.  That would still be using the native assembler, but you'd get
floating point.  A lot easier would be to use my preprocessor PREPIC.  It's
got floating point handling built in.  For example, the following code sets
the MPASM symbol P to e^0.360 as a 24.8 fixed point number.

p        equ     [rnd [* [exp 2.71828183 .360] 256]]

which is translated into the MPASM source:

p        equ     367

PREPIC is part of the free PIC development environment described at
http://www.embedinc.com/pic.


> How would you calculate e^(-at) in 32 bit integer math?

You turn it around a little: use fixed point instead of integer once you
notice that with a and x positive, y is <= 1, and then choose a series that
converges on the solution. Fixed point is 'integer math' in a way.

Peter

2005\10\15@171717 by
> > So how do you calculate e^(-x) using only integer math?
> I took a spin through Hacker's Delight, and didn't see anything.
>
I don't have a copy at hand but Jack Crenshaw's book, "Math Toolkit for
Real-Time Programming" might have something useful.

Rob
At 01:11 PM 10/15/2005 -0400, you wrote:

James:

Here is an example in C, where xf, yf are 32-bit integers.

xf = x * 128;
yf = 32768 - xf + (xf*xf)/65536
     - (((xf*xf)/32768)*xf)/(6*32768)
     + (((xf*xf)/32768)*((xf*xf)/32768))/(32768*24)
     - (((((xf*xf)/32768)*((xf*xf)/32768))/32768)*xf)/(32768*120);

The result is ~= exp(-x/256) * 32768 over the range
x = 1 to 192.

yf/32768 agrees with exp(-x/256) within 0.001 over that range.

You can easily extend this to additional terms, although it gets a bit
tedious.

It's based on the textbook Taylor/Maclaurin series

           infinity
           ---
           \
exp(x) =    >  (x^n)/n!
           /
           ---
           n = 0

Best regards,


Hi!

On 10/15/05, James Newtons Massmind <jamesnewtonmassmind.org> wrote:
>
> Actually, the code is based on the diagram I've attached to this email. It
> uses two constants, one is the original attenuation represented by at (a=RC)
> and the second attenuation is e^(-at). There is a routine that does
> fractional multiplication:
>
[...]
{Quote hidden}

So, you want an approximation formula. I think that the best way
is using a spreadsheet (e.g., Excel). Just build a table, input values
(the values of "a") and output values in two columns, like:

A  B
1  1234
2  5678
3  ...

The values should be already scaled. Then, you plot your table
as an X/Y plot and fit a polynomial function to the data. In Excel, this
is really easy (add a trendline, select "polynomial", and "show formula").

Daniel.

>
> James:
>
> Here is an example in c, where xf, yf are 32-bit integers.
>
>    xf = x * 128;
>    yf = 32768 - xf  + (xf* xf)/65536 - (((xf * xf)/32768
> )*xf)/(6*32768) +
> (((xf*xf)/32768)* ((xf*xf)/32768))/(32768 * 24)
>     - (((((xf*xf)/32768)* ((xf*xf)/32768))/32768) *
> xf)/(32768 * 120)  ;
>
> The result is ~= exp(-x/256) * 32768 over the range x = 1 to 192.
>
> yf/32768 agrees with exp(-x/256) within 0.001 over that range.

Did you mean yf/128 at the end there? Other than that, YOU DA MAN!

Here is the final, working code.

; Calculate e^(-at)
; Many thanks to Spehro of speff.com who says this is
; "based on the textbook Taylor/Maclaurin series:"
;
;            infinity
;            ---
;            \
;exp(x) =    >  (x^n)/n!
;            /
;            ---
;           n= 0
;http://en.wikipedia.org/wiki/E_(mathematical_constant)

x = at * 128
y = 32768
y = y - x
y = y + (x*x)/(2*32768)
y = y - (((x*x)/32768)*x)/(6*32768)
y = y + (((x*x)/32768)* ((x*x)/32768))/(32768 * 24)
; y = y - (((((x*x)/32768)* ((x*x)/32768))/32768) * x)/(32768 * 120)
; this last term not needed for single byte accuracy
result = y / 128

;.0039 =   1/256 should be .996   = 255/256
;.3600 =  92/256 should be .69760 = 178/256
;.3906 = 100/256 should be .67663 = 173/256 (ADh)
;.4687 = 120/256 should be .62578 = 160/256 (A0h)
;.7500 = 192/256 should be .47236 = 121/256 (79h)

It works like a charm. And THANKS!

---
James.

> So, you want an approximation formula. I think that the best
> way is using a spreadsheet (i.e., excel). Just build a table,
> input values (the values of "a") and output values in two
> columns, like:
>
> A  B
> 1  1234
> 2  5678
> 3  ...
>
> The values should be already scaled. Then, you plot your table
> as an X/Y plot and fit a polynomial function to the data. In
> Excel, this is really easy (add a trendline, select
> "polynomial", and "show formula").

Well, yes, that would get me a formula, but would the formula evaluate
correctly in integer only math?

In this case, the real trick was figuring out how to scale the values
correctly so that they didn't become fractional. Speff somehow figured that
out for me, but I admit, I still don't really understand how he did it...

---
James.

Hi!

On 10/16/05, James Newtons Massmind wrote:
> >
> > The values should be already scaled. Then, you plot your table
> > as an X/Y plot and fit a polynomial function to the data. In
> > excel, this is really easy (add tendency line, select
> > "polynomial", and "show formula".
>
> Well, yes, that would get me a formula, but would the formula evaluate
> correctly in integer only math?

The formula would be like
a * x^2  + b * x + c

As X is an integer (already scaled), X^2 is also an integer.
Now, you can replace "a" and "b" by two fractions:
a = An / Ad
b = Bn / Bd

So, you have the formula:
(An * X * X / Ad) + (Bn * X / Bd) +  C

As you are rounding each of the terms, the largest error
would be 0.5 + 0.5 + 0.5 = 1.5, less than 2.

You need to make sure the multiplications are less than
2^31 = 2147483648 , to avoid overflow, this will restrict
the values for "An" and "Bn" above.

A better formula is possible using an approximation like:
(A1 * X * X + A2 * X + A3) / (B1 * X * X + B2 * X + B3)

But I think that excel can't give you that kind of approximation
(it's a rational function). A program like "gnuplot" can fit

Daniel.
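A sketch of evaluating such a fit in integer-only math. The coefficients below are hypothetical: they are just the Taylor-like values a = 1/512, b = -1, c = 256 for 256*e^(-x/256), standing in for real fitted ones from a tool:

```c
/* Daniel's (An*X*X/Ad) + (Bn*X/Bd) + C form with placeholder
   coefficients; a real fit from Excel/gnuplot would replace them.
   An*x*x stays far below 2^31 for x <= 192, so no overflow. */
static long quad_approx(long x)
{
    long An = 1, Ad = 512;     /* a = 1/512 */
    long Bn = -1, Bd = 1;      /* b = -1    */
    long C  = 256;             /* c = 256   */
    return (An * x * x) / Ad + (Bn * x) / Bd + C;
}
```

For x = 92 this gives 180 versus the true value of about 178.6; a quadratic is coarser than the quartic series used above, which is where a genuinely fitted set of coefficients would help.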

If you use the solver tool in Excel (requires the Analysis add-in
IIRC) then it seems able to solve (A1 * X * X + A2 * X + A3) / (B1 * X
* X + B2 * X + B3) OK. You can even constrain the coefficients to be
integers.
You would still need to watch out for overflows in the intermediate
calculations but it should be workable.
RP

On 16/10/05, Daniel Serpell <daniel.serpellgmail.com> wrote:
Hi James. By using the C language, possibly? :)

(which tolerates assembler inserts, of course :))

WBR Dmitry.

James Newtons Massmind wrote:

Dear (apparently) James Newtons Massmind,

Interesting name.

Spehro Pefhany has done a good job answering exactly the question you
asked.

Pefhany used the technique known as "fixed-point math", which is
useful whenever you want to do math on fractions but only have
integer math available.
There's been a lot written about fixed-point math,
most of it making it *sound* far more complicated than it really is.

http://en.wikipedia.org/wiki/Fixed-point

Microchip AN617  Fixed Point
http://microchip.com/stellent/idcplg?IdcService=SS_GET_PAGE&nodeId=1824&appnote=en010962

(Is there an article on Massmind.org dedicated to explaining fixed
point in general? Massmind has lots of PIC code for *doing* fixed
point math, but they all seem to assume you already know all about
fixed point math and things like the 7:8 notation.)

I am sure you are sincere and well-meaning, but whenever I hear that
someone is planning to teach students something technical, I have 2
simultaneous reactions:
* That's wonderful. Better-educated students are nearly always a good thing.
* I hope this teacher teaches how things really work, rather than one
of the popular "Science Myths"[*] that confuse people and take forever
to unlearn.

> The idea is that the student can put in the actual
> values of the R and the C in a basic RC type low pass filter and the code
> will load the literal values appropriate for the digital simulation of that
> filter.

That is certainly one way to do it.

I failed to see how that is any improvement on allowing students to
put in actual resistors and capacitors into a basic RC low pass
filter, then look at the output on an o'scope,
until you clarified:

>> This is about educating people who understand analog systems
>> and showing them that the same basic thing
>> can be done in a digital system.

OK then, now it makes more sense -- they already know how to make a low
pass filter from actual resistors and capacitors, and you're trying to
teach them digital filtering based on their previous experience
(rather than starting from scratch).

> I am working from this text:
>
> Let us take for example the Simple RC (resistor-capacitor) low pass filter.
> This circuit passes frequencies in the input voltage that are lower than
> some critical frequency (called the cutoff frequency) determined by the
> resistor and capacitor values, while severely attenuating any higher
> frequencies.

Good so far.

> The output voltage Vout, then, is simply a selectively reduced version of
> Vin (input voltage). The resistor drops the output voltage, as does the
> voltage that goes into charging C. At first, Vout = a * Vin (where the
> constant a is the amount of attenuation caused by R and C). It can be shown
> that a=1/RC (where R is in ohms and C is in farads).

To quote Bob the Angry Flower,
No! Wrong! Totally wrong! Where'd you learn this?
-- http://www.angryflower.com/bobsqu.gif

It's not even dimensionally consistent.

For the very special case where we have been applying a sine wave to
the input for a long time,
Vout(t) = a * Vin(t-p) (where the constant a is the amount of
attenuation, p is the phase lag),
but that constant a depends on R, C,  *and* the frequency f of the sine wave.
For any other input (square waves, triangle waves, white noise, etc.),
the output is *not* a scaled version of the input, but a distorted
("rounded-off") version of the input.

When we apply a slow square wave to the input (one whose period is far
longer than 10 time constants), the output is *not* an attenuated square wave.
When the input switches from Vhi to 0,
the output starts at
Vout(0) = Vhi
and decays like
Vout(t) = Vhi * exp( -t / (R*C) )
until the output is practically 0.

When the input switches from 0 to Vhi,
the output starts at
Vout(0) = 0
and rises like
Vout(t) = Vhi * ( 1 - exp( -t / (R*C) ) )
until the output is practically the same as Vhi.

(There are very small deviations from this theory caused by the
non-ideal parasitic ESR of the capacitor, but the ESR (and the
deviations which depend on it) cannot be calculated from R and C.)

> The voltage change with time across a capacitor is an exponential function.
> If the voltage at time zero is V, then the voltage at time t is given by
> Ve^(-at) (a is the Same constant as above).

> I need to find e^(-at) given the value of at.

I think someone has unnecessarily confused you.

While the output voltage
(while the input is zero)
may decay like
Vout(t) =  Vhi * exp( -t / (R*C) ),
that is *because*

* electrons are leaking out of one side of the capacitor, through the resistor,
and back to the other side of the capacitor.
* When there is a higher voltage across the resistor, the electrons flow faster.
* When there is a lower voltage across the resistor, the electrons flow slower.

No matter what the input voltage does, if we use very small time steps h,
and keep track of the number of excess electrons Q
on the capacitor, we can approximate the situation as

Vout(t) = Q(t)/C ; output voltage
i(t) = (Vin(t) - Vout(t) ) / R ; current through resistor into capacitor
Q(t+h) = Q(t) + h*i(t) ; next count is the old count plus however many
new electrons came in (or out) over that time increment.

In C++ (note the cout), one could write this as something like

const double conductance = 1.0/R;  // 1/ohms
const double reactance = 1.0/C;    // 1/farads (strictly "elastance")
double Vout = 0, charge = 0, time = 0;
for(;;){
    double Vin = get_next_input_voltage();
    double current = ( Vin - Vout )*conductance;
    time = time + h;
    charge = charge + h*current;
    Vout = charge * reactance;
    cout << "Output voltage at time " << time << " was " << Vout << "\n";
}

Did you notice that there are no exponentials in this code?

Some people would "optimize" the inner calculations of that loop into:

for(;;){
    Vin = get_next_input_voltage();
    Vout = Vout + FF*( Vin - Vout );
    time = time + h;
    cout << "Output voltage at time " << time << " was " << Vout << "\n";
}

which is astonishingly similar to the very first response you got (from Olin):
FILT <-- FILT + FF * (NEW - FILT)
Have I given you enough information to be able to calculate FF?

Forgive me for rambling so long.

[*] "Science Myths"
* www.eskimo.com/~billb/miscon/miscon4.html
* http://www.amasci.com/miscon/miscon.html

--
David Cary

> http://en.wikipedia.org/wiki/Fixed-point
>
> Microchip AN617  Fixed Point
> microchip.com/stellent/idcplg?IdcService=SS_GET_PAGE&nodeId=1824&appnote=en010962
>
> (Is there an article on Massmind.org dedicated to explaining
> fixed point in general? Massmind has lots of PIC code for
> *doing* fixed point math, but they all seem to assume you
> already know all about fixed point math and things like the
> 7:8 notation.)
>

There is not as far as I know (the site is big enough that I may not
remember)

I do think it would be a great idea, and I would personally benefit from it,
since I couldn't figure this out in the first place. The idea is clear to
me, but its execution in complex formulas is not.

(more below)

{Quote hidden}

A) In anything where we are approximating or simulating another system, no
answer can be "totally wrong". Some answers may be more wrong than others.
The method I'm using to simulate the filter may be more complex and less
accurate, but that doesn't make it totally wrong.

B) Other than complexity and accuracy, other criteria for a method may
include clarity to the intended audience. To someone who learned analog
electronics and is only now learning microcontrollers, I feel that my method
is more clear.

C) You cut off the explanation after "At first..." and before "...then at
time t...". Any new voltage introduced into an RC circuit WILL be
attenuated. The first part of the simulation is modeling that attenuation.
Then, at time t later, the prior voltage may contribute to the output due to
having been stored in the capacitor. That is the second part of the
simulation. Both parts must be included for the result to be seen as
anything like accurate.

Yes, it is possible to model the filter in other ways. But this way does it
in terms that are obvious and understandable to those who understand RC time
calculations.

If you would like to write another tutorial that does the job in a different
way, please feel free. I would very much appreciate seeing it fully
explained in another way. You've made a start on it above, but it could use
some sample code and some graphs of variables vs time with an example input
wave form.

If you would like a copy of the tutorial I'm working on, just promise not
to send it to anyone else and I'll be happy to send it to you. Perhaps when
you see the entire thing, it will make more sense.

---
James.

James Newtons Massmind wrote:
> If you would like to write another tutorial that does the job in a
> different way, please feel free.

No I'm not interested in writing a tutorial myself, but I still think you
are presenting this in the wrong way from what I understand of your tutorial
so far.  I think the assumption is that you have old crufty analog people
and you are trying to introduce them to how a lot of that stuff can be done
digitally.  I'm fine with that.  However, what's the point of showing that
some analog things can be done verbatim digitally?  Other than proving it's
possible, it won't leave anyone with the ability to design a decent digital
filter.  Instead I would point out instead of trying to hide the
digitalness.  Digital and analog are different, each with their own
like an analog filter any more than it makes sense to design an analog
filter like a digital filter.  Teaching people this will only lead to
confusion and delay true understanding of how effective digital filters
work.

*****************************************************************
Embed Inc, embedded system specialists in Littleton Massachusetts
(978) 742-9014, http://www.embedinc.com
James Newtons Massmind wrote:

> B) Other than compexity and accuracy, other criteria for a method may
> include clarity to the intended audience. To someone who learned analog
> electronics and is only now learning microcontrollers, I feel that my method
> is more clear.
...
> If you would like to write another tutorial that does the job in a different
> way, please feel free. I would very much appreciate seeing it fully
> explained in another way. You've made a start on it above, but it could use
> some sample code and some graphs of variables vs time with an example input
> wave form.

I didn't get a full grasp of what you are doing, but it seemed complex to
me. In a way it seems as if you modeled an RC low pass with analog computer
modules, and then went on to model those modules digitally.

David's "tutorial" seems quite complete and succinct to me. No unnecessary
steps and complications, and it goes directly to the target:

> > Vout(t) = Q(t)/C ; output voltage
> > i(t) = (Vin(t) - Vout(t) ) / R ; current through resistor into capacitor
> > Q(t+h) = Q(t) + h*i(t) ; next count is the old count plus however many
> > new electrons came in (or out) over that time increment.

IMO this can be understood by everybody with minimal analog knowledge, as
the basic concepts are simple: current through a resistor, and charge in a
capacitor, and how these relate to the voltages across them. For me, that's
close to perfect as an introduction. From here to a program is a very small
step (as David also showed). This also can be very easily implemented in a
spreadsheet, with the associated graphs. (FWIW, I'm sending one to you by
email.)

And since this is the "right" thing (IMO, of course :), it contains two
hooks for further exploration:
1- proper modeling with differential equations and then solving these
digitally, to increase the "depth"
of understanding of this -- because it already presents one result of this
more theoretical approach in a simplified form, and
2- the various forms of digital filter equations as they are being used, to
increase the knowledge about digital filter applications -- because it ends
up with one of the simplest of them in exactly the form it is being used in
"real" applications.

Gerhard
{Quote hidden}

Ok. I'm sure it is very nice. But you still haven't even seen the tutorial
I'm working on. You are taking a different approach and I have no problem
with that.

I didn't ask for a critique of the tutorial, just for help with the integer
math.

You haven't even seen the tutorial and are commenting only on the code that
is a result of it without reading the actual thing. I wish I could post it,
but it is being sold, so I can only send it to you if you promise not to
send it to anyone else.

Please don't continue to assume it is "wrong" until you have seen it.

Please understand that I agree your approach is probably fine, but it isn't
a complete tutorial and therefore isn't really comparable. I would love to
see you write a complete tutorial.

---
James.

James Newtons Massmind wrote:

> I didn't ask for a critique of the tutorial, just for help with the integer
> math.

I find that one of the real values of this list is when I get answers to
questions I didn't ask :)  More often than not, the questions that are not
quite the right ones are the ones where I really need help -- IMO, of
course.

> You haven't even seen the tutorial and are commenting only on the code
> that is a result of it without reading the actual thing. [...] Please
> don't continue to assume it is "wrong" until you have seen it.

Just to make that clear: I didn't comment on any code of yours, nor did I
assume the approach is wrong (or "wrong" :).  I said "I didn't get a full
grasp of what you are doing, but it seemed complex to me." I guess that
contains enough disclaimers...

> I wish I could post it, but it is being sold, so I can only send it to
> you if you promise not to send it to anyone else.

That would be interesting indeed. I promise not to send or show it to
anyone else... can you send me a copy please?

> Please understand that I agree your approach is probably fine, but it
> isn't a complete tutorial and therefore isn't really compareable. I
> would love to see you write a complete tutorial.

I guess I would love to write one if I could sell it :) -- or maybe better,
if I knew who really wanted one and what exactly they want.

Gerhard
