So, I understood this: Two copies of Quake on the Alpha, both displaying
locally, each get 20 fps. With two copies of Quake on the Alpha, one
displaying locally and the other remotely, the first produces 6 fps, the
latter 16 fps. The total frame rate is thus 22, vs. 40 before.
So sending 16 fps over the net lowers overall performance by roughly 50%,
compared with displaying it locally. This matches the ~50% spent in system
time, which seems like a lot to me.
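The frame-rate arithmetic above can be checked quickly (the 20/6/16 fps figures are the ones measured in this thread):

```python
# Aggregate frame rates: two local copies vs. one local + one remote.
local_both = 20 + 20          # both copies displaying locally: 40 fps total
split = 6 + 16                # one local (6 fps) + one remote (16 fps): 22 fps total
drop = 1 - split / local_both # fractional loss in total throughput
print(local_both, split, round(drop * 100))  # 40 22 45  -> roughly a 50% drop
```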
> > Is this a scheduler thing, where the remote Quake process gets scheduled
> > all the time due to network interrupts? But if so, why does top show all
> > copies getting equal CPU time?
> It's probably simply because the time-consuming part of quake is not so much
> the actual drawing as the _calculations_ it is doing.
Well, actually the time-consuming part seems to be sending the data over the
net. Probably sending anything at that rate (16 fps * 320 * 200 pixels is
about 1 MByte/s at one byte per pixel) costs 50% of the CPU time of anything
calculated at the same time.
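That bandwidth estimate works out as follows, assuming the 8-bit (one byte per pixel) 320x200 mode Quake typically runs in:

```python
# Raw pixel bandwidth of pushing frames over the wire.
fps = 16
width, height = 320, 200
bytes_per_pixel = 1           # assumes 8-bit colour depth
rate = fps * width * height * bytes_per_pixel
print(rate)                   # 1024000 bytes/s, i.e. about 1 MByte/s
```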
> And with the remote copy, the load is a lot lower. With a local copy of
> quake, the local load is pretty high, so the local quake binary has
> trouble getting CPU for calculations, while the remote quake obviously
> doesn't have that kind of performance problems..
Maybe the time spent in system mode isn't accounted correctly to the
process causing it?
--
Martin Ostermann                | mailto:firstname.lastname@example.org
Communication Networks          | http://www.comnets.rwth-aachen.de/~ost
Aachen University of Technology | phoneto:++49/241/807917
Germany                         | faxto:++49/241/8890378