> >
> > > David Mosberger-Tang wrote:
> >
> > > [ Answering the following: ]
> >
> > > Naively, one would expect that computing the sine of some
> > > argument in single precision would require *fewer* fp operations
> > > than getting it correct to double precision.
> >
> > You have a good point here.
> ....
> > The algorithm per se is the same. I don't
> > recall off-hand how many bits of precision you get with each step in
> > a Taylor series, but it seems to me that you should be able to save
> > one or two steps when using single precision.
>
> Ugh! I do hope that these routines do not use Taylor expansions
> to compute elementary functions.
Looks like a Taylor series to me. It isn't a terrible algorithm; looks
pretty clean actually.
> If yes, then no wonder that DEC library is so much faster.
Well, shucks; what does the DEC library use? I'd look myself, but I've
lost access to the Digital Unix machine I used to use (which also held
a lot of my math library noodlings :<).
> Right now I have neither the hardware
> nor the sources, but in general Taylor expansions converge very slowly.
> There are better ways to compute such things.
Share! I was looking at an interpolated table lookup scheme I found in
the ACM Transactions. It might beat fdlibm, but I haven't tried yet and
likely won't in the near future, as my source went *poof* with my
digiUnix account.
-Scott
--
To unsubscribe: send e-mail to axp-list-request@redhat.com with
'unsubscribe' as the subject. Do not send it to axp-list@redhat.com.