Re: using linux instead of osf

David Mosberger
Tue, 26 Nov 1996 15:51:29 -0700 (MST)

> David Mosberger-Tang wrote:

> [ Answering the following: ]

> > I ran some code to check these functions before I made my
> > statement, and when I used the sinf and cosf functions the test
> > program I was using showed no improvement. So does this mean that
> > my libm is broken?

> > Always possible, but very unlikely. You can look at
> > s_sinf.o and k_sinf.o in libm using objdump -d. You'll
> > find that all fp ops are done in single precision (muls,
> > lds, etc) so, fundamentally, the routines are certainly
> > not broken (at least not in the way you suggest ;-).

> Naively, one would expect that computing the sine of some
> argument in single precision would require *fewer* fp operations
> than getting it correct to double precision.

You have a good point here, and it looks like sinf really might be
broken in that respect. A quick diff between the float and double
kernels for sin shows that the _only_ difference between the two
routines is that they use float instead of double, with
correspondingly less-precise constants. The algorithm per se is the
same. I don't recall off-hand how many bits of precision you get with
each step in a Taylor series, but it seems to me that you should be
able to save one or two steps when using single precision. If so, you
should indeed see a performance difference (though it won't be _that_
much, since there is still a lot of other junk going on that could be
avoided by more careful coding).


