
Memory Pool System Project


                          VIRTUAL MEMORY ARENA
                          design.mps.arena.vm
                             incomplete doc
                             drj 1996-07-16

INTRODUCTION

.intro: This document describes the detailed design of the Virtual Memory Arena 
Class of the Memory Pool System.  The VM Arena Class is just one class 
available in the MPS.  The generic arena part is described in design.mps.arena.


OVERVIEW

.overview: VM arenas provide blocks of memory to all other parts of the MPS in 
the form of "tracts" using the virtual mapping interface (design.mps.vm) to the 
operating system.  The VM Arena Class is not expected to be provided on 
platforms that do not have virtual memory (like MacOS, os.s7(1)).

.overview.gc: The VM Arena Class provides some special services on these blocks 
in order to facilitate garbage collection:

.overview.gc.zone: Allocation of blocks in specific zones.  This means that 
the generic fix function (design.mps.fix) can use a fast refset test to 
eliminate references to addresses that are not in the condemned set.  This 
assumes that a pool class that uses this placement appropriately is in use 
(such as the generation placement policy used by AMC, see 
design.mps.poolamc(1)) and that the pool selects condemned sets that coincide 
with zone stripes.
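As a hedged illustration, the fast refset test amounts to one shift, one 
modulus, and one bit test.  The zone size, word width, and all names here are 
assumptions for the sketch, not the MPS's actual interface:

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative parameters, not the MPS's actual values. */
#define ZONE_SHIFT 20u                 /* 1Mb zone stripes (assumption) */
typedef uintptr_t RefSet;              /* one bit per zone, in a machine word */

/* Zone number of an address: shift, then wrap to the word width. */
static unsigned zone_of(uintptr_t addr)
{
  return (unsigned)((addr >> ZONE_SHIFT) % (sizeof(RefSet) * 8));
}

/* Fast test: can ref possibly point into the condemned zone set?
   A zero result proves the reference is not condemned; a non-zero
   result only means it might be. */
static int ref_may_be_condemned(uintptr_t ref, RefSet condemned)
{
  return (condemned & ((RefSet)1 << zone_of(ref))) != 0;
}
```

Because the zone mapping wraps around the word, the test is conservative: 
distinct addresses can share a zone bit, so a set bit never proves membership, 
only possibility.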

.overview.gc.tract: A fast translation from addresses to tracts.  (See 
design.mps.arena.req.fun.trans.)


NOTES

.note.refset: Some of this document simply assumes that RefSets (see the 
horribly incomplete design.mps.refset) have been chosen as the solution for 
design.mps.arena.req.fun.set.  It's a lot simpler that way, both to write and 
to understand.


REQUIREMENTS


Most of the requirements are in fact on the generic arena (see 
design.mps.arena.req.*).  However, many of those requirements can only be met 
by a suitable arena class design.

Requirements particular to this arena class:

Placement

.req.fun.place: It must be possible for pools to obtain tracts at particular 
addresses.  The pool declares such addresses by specifying which refset zones 
the tracts should lie in and which refset zones they should not lie in.  It is 
acceptable for the arena not to honour the placement request if it has run out 
of suitable addresses.
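A minimal sketch of such a placement request, assuming a zone set is one bit 
per refset zone in a machine word.  The names PlacementPref and 
zone_acceptable are hypothetical, not the real MPS API:

```c
#include <assert.h>
#include <stdint.h>

typedef uintptr_t ZoneSet;             /* one bit per refset zone */

/* Hypothetical placement request (illustrative, not the MPS API). */
typedef struct PlacementPref {
  ZoneSet prefer;   /* zones the tracts should lie in */
  ZoneSet avoid;    /* zones the tracts should not lie in */
} PlacementPref;

/* Would a tract in the given zone honour the request?  Per the
   requirement, the arena may still fall back to other zones when
   suitable addresses run out. */
static int zone_acceptable(unsigned zone, const PlacementPref *pref)
{
  ZoneSet bit = (ZoneSet)1 << zone;
  if (pref->avoid & bit)
    return 0;
  return pref->prefer == 0 || (pref->prefer & bit) != 0;
}
```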

Arena Partition

.req.fun.set: See design.mps.arena.req.fun.set.  The approximation to sets of 
addresses must cooperate with the placement mechanism in the way required by 
.req.fun.place (above).


ARCHITECTURE

.arch.memory: The underlying memory is obtained from whatever virtual memory 
interface the platform provides (see design.mps.vm).  Explain why this is used 
###


SOLUTION IDEAS

.idea.grain: Set the arena granularity to the grain provided by the virtual 
mapping module.

.idea.mem: Get a single large contiguous address area from the virtual mapping 
interface and divide that up.

.idea.table: Maintain a table with one entry per grain in order to provide fast 
mapping (shift and add) between addresses and table entries.

.idea.table.figure:
  

.idea.map: Store the pointers (.req.fun.trans) in the table directly for every 
grain.

.idea.zones: Partition the managed address space into zones (see 
.overview.gc.zone) and provide the set approximation as a reference signature.

.idea.first-fit: Use a simple first-fit allocation policy for tracts within 
each zone (.idea.zones).  Store the freelist in the table (.idea.table).
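A minimal first-fit sketch.  For simplicity it scans a plain per-page 
allocation array rather than threading a freelist through the page table as 
.idea.first-fit proposes, so take it as an illustration of the policy, not the 
representation:

```c
#include <assert.h>
#include <stddef.h>

/* Returns the base index of the first free run of want pages in a
   table of npages entries (alloc[i] != 0 means page i is allocated),
   or -1 if no such run exists. */
static long first_fit(const unsigned char *alloc, size_t npages, size_t want)
{
  size_t run = 0;
  size_t i;
  for (i = 0; i < npages; i++) {
    run = alloc[i] ? 0 : run + 1;   /* extend or restart the free run */
    if (run == want)
      return (long)(i - want + 1);  /* base index of the run */
  }
  return -1;
}
```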

.idea.base: Store information about each contiguous area (allocated or free) 
in the table entry (.idea.table) corresponding to the base address of the 
area.

.idea.shadow: Use the table (.idea.table) as a "shadow" of the operating 
system's page table.  Keep information such as last access, protection, etc. in 
this table, since we can't get at this information otherwise.

.idea.barrier: Use the table (.idea.table) to implement the software barrier.  
Each segment can have a read and/or write barrier placed on it by each 
process.  (.idea.barrier.bits: Store a bit-pattern which remembers which 
process protected what.)  This will give a fast translation from a 
barrier-protected address to the barrier handler via the process table.
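One possible encoding of .idea.barrier.bits, assuming at most eight processes 
so that one byte per barrier kind suffices.  The types and names are 
hypothetical:

```c
#include <assert.h>

/* Per-segment barrier bits: bit p records that process p placed a
   barrier of that kind on the segment. */
typedef struct BarrierBits {
  unsigned char read;    /* read-barrier bits, one per process */
  unsigned char write;   /* write-barrier bits, one per process */
} BarrierBits;

static int read_barred(const BarrierBits *b, unsigned process)
{
  return (b->read >> process) & 1;
}

static int write_barred(const BarrierBits *b, unsigned process)
{
  return (b->write >> process) & 1;
}
```

On a fault, the handler would test these bits to find which processes have 
barriers on the faulting segment, then dispatch via the process table.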

.idea.demand-table: For a 1Gb managed address space with a 4Kb page size, the 
table will have 256K entries.  At (say) four words per entry, this is 4Mb of 
table.  Although this is only a 0.4% overhead on the whole space, the table 
shouldn't be preallocated: initially it is an infinite overhead, and with only 
1Mb active it is a 400% overhead!  The address space for the table should be 
reserved, but the pages for it mapped and unmapped on demand.  By storing the 
table in a tract, the status of the table's pages can be determined by looking 
at its own entries in itself; the translation lookup (.req.fun.trans) is thus 
slowed to two lookups rather than one.
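The table-size arithmetic can be checked directly, assuming 4-byte words (a 
typical word size for the platforms of the time):

```c
#include <assert.h>

static const unsigned long managed_bytes = 1024ul * 1024ul * 1024ul; /* 1Gb */
static const unsigned long page_bytes    = 4ul * 1024ul;             /* 4Kb */
static const unsigned long entry_bytes   = 4ul * 4ul;   /* four 4-byte words */

/* One table entry per page of the managed space. */
static unsigned long table_entries(void)
{
  return managed_bytes / page_bytes;
}

/* Total size of the page table. */
static unsigned long table_bytes(void)
{
  return table_entries() * entry_bytes;
}
```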

.idea.pool: Make the Arena Manager a pool class.  Arena initialization becomes 
pool creation.  Tract allocation becomes PoolAlloc.  Other operations become 
class-specific operations on the "arena pool".


DATA STRUCTURES

.tables: There are two table data structures: a page table, and an alloc table.

.table.page.map: Each page in the VM has a corresponding page table entry.

.table.page.linear: The table is a linear array of PageStruct entries; there 
is a simple mapping between the index in the table and the base address in 
the VM: base-address = arena-base + (index * page-size) one way, and index = 
(base-address - arena-base) / page-size the other.
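The two directions of the mapping can be written down directly.  PAGE_SIZE and 
the function names here are illustrative; the real page size comes from the VM 
interface:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define PAGE_SIZE 4096u  /* illustrative; supplied by the VM in practice */

/* base-address = arena-base + (index * page-size) */
static uintptr_t index_to_base(uintptr_t arena_base, size_t index)
{
  return arena_base + (uintptr_t)index * PAGE_SIZE;
}

/* index = (base-address - arena-base) / page-size */
static size_t base_to_index(uintptr_t arena_base, uintptr_t addr)
{
  return (size_t)((addr - arena_base) / PAGE_SIZE);
}
```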

.table.page.partial: The table is partially mapped on an "as-needed" basis.  
The function unusedTablePages identifies entirely unused pages occupied by 
the page table itself (i.e. those pages of the page table whose PageStructs 
all describe free pages).  Tract allocation and freeing use this function to 
map and unmap the page table with no hysteresis.  (There is a restriction on 
the parameters you may pass to unusedTablePages.)

.table.page.tract: Each page table entry contains a tract, which is only valid 
if it is allocated to a pool. If it is not allocated to a pool, the fields of 
the tract are used for other purposes. (See design.mps.arena.tract.field.pool)

.table.alloc: The alloc table is a simple bit table (implemented using the BT 
module, design.mps.bt).

.table.alloc.map: Each page in the VM has a corresponding alloc table entry.

.table.alloc.semantics: The bit in the alloc table is set iff the corresponding 
page is allocated (to a pool).  
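A sketch of the alloc table's set/reset/get operations in the spirit of the BT 
module.  The names are illustrative, not the real design.mps.bt interface:

```c
#include <assert.h>
#include <limits.h>
#include <stddef.h>

typedef unsigned long Word;
#define WORD_BITS (sizeof(Word) * CHAR_BIT)

/* Set bit i: the corresponding page becomes allocated. */
static void bt_set(Word *bt, size_t i)
{
  bt[i / WORD_BITS] |= (Word)1 << (i % WORD_BITS);
}

/* Reset bit i: the corresponding page becomes free. */
static void bt_reset(Word *bt, size_t i)
{
  bt[i / WORD_BITS] &= ~((Word)1 << (i % WORD_BITS));
}

/* Test bit i: is the corresponding page allocated? */
static int bt_get(const Word *bt, size_t i)
{
  return (int)((bt[i / WORD_BITS] >> (i % WORD_BITS)) & 1);
}

/* Allocate page 5, check it, free it, check again. */
static int bt_demo(void)
{
  Word bt[2] = {0, 0};
  bt_set(bt, 5);
  if (!bt_get(bt, 5)) return 0;
  bt_reset(bt, 5);
  return !bt_get(bt, 5);
}
```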



FIGURES


.fig.page: How the pages in the arena area are represented in the tables.

.fig.count: How a count table can be used to partially map the page table, as 
proposed in request.dylan.170049.sol.map.


ATTACHMENT
   "arenavm diagrams"

A. References

B. Document History

2002-06-07 RB Converted from MMInfo database design document.