MPS issue job003384

Title: Collector goes mad when low on address space
Status: open
Priority: optional
Assigned user: Richard Brooksby
Organization: Ravenbrook
Description: This table (taken from [1]) shows the effect of varying the initial allocation of address space to the arena when running the test case "scheme-advanced test-leaf.scm":

Space  Extns  Colls   Time
    2     32    371   52.0
    4     21    370   47.0
    8      0      *      *
   14      0      *      *
   16      0   2436  160.5
   18      0   1135   89.1
   20      0    673   60.6
   22      0    484   48.7
   24      0    400   43.1
   32      0    368   41.2

"Space" is the initial allocation of address space (in MiB) when calling mps_arena_create. "Extns" is the number of times the arena was extended (the count of vmArenaExtendStart events). "Colls" is the number of garbage collections (the count of TraceFlipBegin events). "Time" is the total time taken by the test case in seconds (user+sys). An entry "*" indicates that the test case failed to run to completion after thousands of seconds and tens of thousands of garbage collection cycles.

You'll see that performance falls off a cliff: the 8 and 14 MiB runs never complete, and at 16 MiB the test takes nearly four times as long as at 32 MiB.
Analysis: In TracePoll, the MPS considers whether it should start a trace, and if so, how much it should condemn. It does not consider extending the arena. So as the working set approaches the total address space, the MPS makes more and more collections in a desperate and useless effort to free up space.

The converse of this problem is that arenaAllocPolicy considers whether it should extend the arena in order to satisfy an allocation request. It does not consider doing some garbage collection. (See job003789.)

There needs to be a unified policy function. See RB's proposal for time control [2].
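For illustration only, a unified policy might look something like the sketch below: a single function, consulted from both the polling path and the allocation path, that weighs collecting against extending. All names and heuristics here are hypothetical, not actual MPS internals; the real design should follow the proposal in [2].

    /* Hypothetical sketch of a unified policy decision.  None of these
       names exist in the MPS; the point is only that both TracePoll and
       arenaAllocPolicy would consult one function that can choose either
       action. */
    #include <stddef.h>

    typedef enum {
      POLICY_DO_NOTHING,    /* enough headroom, no action needed */
      POLICY_EXTEND_ARENA,  /* reserve more address space */
      POLICY_START_TRACE    /* condemn and collect */
    } PolicyAction;

    typedef struct {
      size_t committed;        /* bytes currently committed */
      size_t reserved;         /* address space currently reserved */
      size_t pendingAlloc;     /* size of the allocation being attempted, or 0 */
      size_t expectedReclaim;  /* estimated bytes a collection would free */
      double collectCost;      /* estimated cost of a collection */
      double extendCost;       /* estimated cost of extending the arena */
    } PolicyInput;

    static PolicyAction unifiedPolicy(const PolicyInput *in)
    {
      size_t needed = in->committed + in->pendingAlloc;

      /* Plenty of headroom: neither collect nor extend. */
      if (needed <= in->reserved / 2)
        return POLICY_DO_NOTHING;

      /* If collecting is unlikely to free enough space, or costs more
         than extending, extend the arena.  This is the case this job
         describes: repeated collections that cannot help. */
      if (in->expectedReclaim < in->pendingAlloc
          || in->extendCost < in->collectCost)
        return POLICY_EXTEND_ARENA;

      return POLICY_START_TRACE;
    }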
How found: manual_test
Evidence: [1] <https://info.ravenbrook.com/project/mp.../user-guide/manual/html/guide/perf.html>
[2] <https://info.ravenbrook.com/mail/2013/07/05/00-46-27/0/>
Observed in: 1.110.0
Created by: Gareth Rees
Created on: 2012-11-16 12:57:11
Last modified by: Gareth Rees
Last modified on: 2016-09-13 10:37:32
History: 2012-11-16 GDR Created.