Correctly measure system load averages > 1024

The old fixed-point arithmetic used for calculating load averages
overflowed at a load of 1024.  So on systems under extremely high load,
the observed load average would fall back to 0 and climb again,
producing a sawtooth graph.

Fix this by using 64-bit math internally, while still reporting the load
average to userspace as a 32-bit number.

Sponsored by:	Axcient
Reviewed by:	imp
Differential Revision: https://reviews.freebsd.org/D35134
Alan Somers 2022-05-05 15:35:23 -06:00 committed by Sebastian Huber
parent 0ed668df2c
commit 27dfb5f33f
1 changed file with 4 additions and 4 deletions


@@ -248,12 +248,12 @@
  * Scale factor for scaled integers used to count %cpu time and load avgs.
  *
  * The number of CPU `tick's that map to a unique `%age' can be expressed
- * by the formula (1 / (2 ^ (FSHIFT - 11))). The maximum load average that
- * can be calculated (assuming 32 bits) can be closely approximated using
- * the formula (2 ^ (2 * (16 - FSHIFT))) for (FSHIFT < 15).
+ * by the formula (1 / (2 ^ (FSHIFT - 11))). Since the intermediate
+ * calculation is done with 64-bit precision, the maximum load average that can
+ * be calculated is approximately 2^32 / FSCALE.
  *
  * For the scheduler to maintain a 1:1 mapping of CPU `tick' to `%age',
- * FSHIFT must be at least 11; this gives us a maximum load avg of ~1024.
+ * FSHIFT must be at least 11. This gives a maximum load avg of 2 million.
  */
 #define FSHIFT	11	/* bits to right of fixed binary point */
 #define FSCALE	(1<<FSHIFT)