> I ran some code to check these functions before I made my statement and
> when I used the sinf and cosf functions the test program I was using
> showed no improvement. So does this mean that my libm is broken?
Always possible, but very unlikely. You can look at s_sinf.o and
k_sinf.o in libm using objdump -d. You'll find that all fp ops are
done in single precision (muls, lds, etc.), so, fundamentally, the
routines are certainly not broken (at least not in the way you suggest).
On the Alpha, there are only two reasons why single precision might
improve execution time:
(a) Your code is memory bandwidth limited. Using single
precision halves the memory footprint for floating point data.
If your algorithm has poor locality (as many large-vector/matrix
oriented codes do), then this may mean that the memory bandwidth
requirements are cut in half, too. This is particularly important
on machines with poor memory systems and/or small caches (UDB comes
to mind... ;-).
(b) Your code does a lot of floating point divisions.
That's the only fp operation in the Alpha CPUs that is not
fully pipelined and whose latency is precision dependent. The
very first implementation of Alpha (21064) had a fixed execution
time given the size of the operands (single vs. double precision).
The newer CPUs seem to use a data-dependent algorithm but I'm not
sure what the exact relationship between input values and execution
time is. I assume execution time is related to the precision and the
difference of the exponents of the two input values, but that's just
a guess.
So, whether single-precision buys you execution time or not depends
very much on what you're doing. In a naive timing loop measuring
sinf() performance you'd not expect to see any difference since
everything will be cached and sinf() uses no fp divisions.