Virtual Memory Arena

author David Jones
date 1996-07-16
index terms pair: virtual memory arena; design; pair: VM arena; design
revision //info.ravenbrook.com/project/mps/master/design/arenavm.txt#16
status incomplete document
tag design.mps.arena.vm

Introduction

.intro: This is the design of the Virtual Memory Arena Class of the Memory Pool System. The VM Arena Class is just one class available in the MPS. The generic arena part is described in design.mps.arena.

Overview

.overview: VM arenas provide blocks of memory to all other parts of the MPS in the form of "tracts" using the virtual mapping interface (design.mps.vm) to the operating system. The VM Arena Class is not expected to be provided on platforms that do not have virtual memory (for example, Macintosh System 7).

.overview.gc: The VM Arena Class provides some special services on these blocks in order to facilitate garbage collection:

.overview.gc.zone: Allocation of blocks in specific zones. This means that the generic fix function (design.mps.fix) can use a fast refset test to eliminate references to addresses that are not in the condemned set. This assumes that the pool class in use places blocks appropriately (for example, the generation placement policy used by AMC: see design.mps.poolamc) and that the pool selects condemned sets that coincide with zone stripes.
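
The refset test itself can be sketched in a few lines of C. This is a minimal illustration, not the MPS source; the names WORD_WIDTH, zone_shift, and condemned_zones are assumptions, and the zone of an address is taken to be a field of bits above the zone stripe size:

    #include <stdbool.h>
    #include <stdint.h>

    #define WORD_WIDTH 64           /* one zone per bit in a word */

    typedef uintptr_t ZoneSet;      /* bitset with one bit per zone stripe */

    /* Zone of an address: each zone is a stripe of 1 << zone_shift bytes,
       recurring every WORD_WIDTH stripes across the address space. */
    static unsigned zone_of_addr(uintptr_t addr, unsigned zone_shift)
    {
        return (unsigned)((addr >> zone_shift) & (WORD_WIDTH - 1));
    }

    /* Fast rejection: if the reference's zone bit is not in the condemned
       zone set, it cannot refer to a condemned address, and fix can return
       immediately. */
    static bool zones_may_intersect(uintptr_t ref, unsigned zone_shift,
                                    ZoneSet condemned_zones)
    {
        return (condemned_zones
                & ((ZoneSet)1 << zone_of_addr(ref, zone_shift))) != 0;
    }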

.overview.gc.tract: A fast translation from addresses to tracts. (See design.mps.arena.req.fun.trans.)

Notes

.note.refset: Some of this document simply assumes that RefSets (see design.mps.collections.refsets) have been chosen as the solution for design.mps.arena.req.fun.set. It's a lot simpler that way, both to write and to understand.

Requirements

Most of the requirements are in fact on the generic arena (see design.mps.arena.req). However, many of those requirements can only be met by a suitable arena class design.

Requirements particular to this arena class:

Placement

.req.fun.place: It must be possible for pools to obtain tracts at particular addresses. Such addresses shall be declared by the pool specifying which refset zones the tracts should lie in and which refset zones they should not lie in. It is acceptable for the arena not to honour the placement request if it has run out of suitable addresses.
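
A pool's placement request might look something like the sketch below. The structure and the function name ArenaAllocPreferred are hypothetical, introduced only to illustrate the requirement; they are not the MPS interface:

    #include <stddef.h>
    #include <stdint.h>

    typedef uintptr_t ZoneSet;       /* one bit per refset zone */

    /* Hypothetical placement preference: which zones the tracts should
       lie in, and which they should keep clear of. */
    typedef struct PlacementPrefStruct {
        ZoneSet zones;               /* preferred zone stripes */
        ZoneSet avoid;               /* zone stripes to avoid */
    } PlacementPrefStruct;

    /* Hypothetical allocation entry point: the arena tries to honour pref,
       but may place the tracts elsewhere if it has run out of suitable
       addresses. */
    extern int ArenaAllocPreferred(void **baseReturn,
                                   PlacementPrefStruct *pref,
                                   size_t size, void *pool);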

Arena partition

.req.fun.set: See design.mps.arena.req.fun.set. The approximation to sets of addresses must cooperate with the placement mechanism in the way required by .req.fun.place (above).

Architecture

.arch.memory: The underlying memory is obtained from whatever virtual memory interface the platform provides (see design.mps.vm). @@@@ Explain why this is used.

Solution ideas

.idea.grain: Set the arena granularity to the grain provided by the virtual mapping module.

.idea.mem: Get a single large contiguous address area from the virtual mapping interface and divide that up.

.idea.table: Maintain a table with one entry per grain in order to provide fast mapping (shift and add) between addresses and table entries.

.idea.table.figure: [missing figure]

.idea.map: Store the pointers (design.mps.arena.req.fun.trans) in the table directly for every grain.

.idea.zones: Partition the managed address space into zones (see idea.zones) and provide the set approximation as a reference signature.

.idea.first-fit: Use a simple first-fit allocation policy for tracts within each zone (.idea.zones). Store the freelist in the table (.idea.table).
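
As a sketch of the policy, a first-fit search for a run of free pages within a zone might look like the following. This simplification scans a flat boolean alloc table rather than the freelist-in-table representation proposed above; names are illustrative:

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    /* First-fit over a per-zone alloc table (true = page allocated).
       Returns the base index of the first run of `pages` free pages, or
       SIZE_MAX if the zone has no such run. */
    static size_t first_fit(const bool *allocTable, size_t tableSize,
                            size_t pages)
    {
        size_t run = 0;
        for (size_t i = 0; i < tableSize; ++i) {
            if (allocTable[i]) {
                run = 0;                 /* run broken by an allocated page */
            } else if (++run == pages) {
                return i + 1 - pages;    /* base index of the free run */
            }
        }
        return SIZE_MAX;
    }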

.idea.base: Store information about each contiguous area (allocated or free) in the table entry (.idea.table) corresponding to the base address of the area.

.idea.shadow: Use the table (.idea.table) as a "shadow" of the operating system's page table. Keep information such as last access, protection, etc. in this table, since we can't get at this information otherwise.

.idea.barrier: Use the table (.idea.table) to implement the software barrier. Each segment can have a read and/or write barrier placed on it by each process. (.idea.barrier.bits: Store a bit-pattern which remembers which process protected what.) This will give a fast translation from a barrier-protected address to the barrier handler via the process table.
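
A shadow table entry combining .idea.shadow and .idea.barrier.bits might look like the following sketch; the field names and widths are assumptions for illustration only:

    #include <stdint.h>

    /* Per-page shadow information kept in the table (.idea.table), since
       the operating system's own page table is not accessible to us. */
    typedef struct PageShadowStruct {
        uint32_t readBarrier;   /* bit i set: process i placed a read barrier */
        uint32_t writeBarrier;  /* bit i set: process i placed a write barrier */
        uint32_t protection;    /* protection currently requested from the OS */
        uint32_t lastAccess;    /* e.g. an epoch recording the last access */
    } PageShadowStruct;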

.idea.demand-table: For a 1 GiB managed address space with a 4 KiB page size, the table will have 256K entries. At, say, four words per entry, this is 4 MiB of table. Although this is only a 0.4% overhead on the whole address space, the table shouldn't be preallocated: initially it would be an infinite overhead, and with only 1 MiB active it would be a 400% overhead! The address space for the table should be reserved, but the pages for it mapped and unmapped on demand. By storing the table in a tract, the status of the table's pages can be determined by looking at its own entries in itself, and thus the translation lookup (design.mps.arena.req.fun.trans) is slowed to two lookups rather than one.
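
The two-lookup translation can be sketched as follows. The sketch assumes the page table lives inside the arena it describes and that the entries covering the table's own pages are always mapped; all names are illustrative:

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    typedef struct PageEntryStruct {
        bool mapped;             /* is this arena page currently mapped? */
        /* ... other per-page information ... */
    } PageEntryStruct;

    /* First lookup: is the table page holding entry `index` itself mapped?
       Only if it is can the second lookup (reading pageTable[index]) be
       performed safely. */
    static bool entry_accessible(const PageEntryStruct *pageTable,
                                 uintptr_t arenaBase, size_t pageSize,
                                 size_t index)
    {
        uintptr_t entryAddr = (uintptr_t)&pageTable[index];
        size_t tablePage = (size_t)((entryAddr - arenaBase) / pageSize);
        return pageTable[tablePage].mapped;
    }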

.idea.pool: Make the Arena Manager a pool class. Arena initialization becomes pool creation. Tract allocation becomes PoolAlloc(). Other operations become class-specific operations on the "arena pool".

Data structures

.tables: There are two table data structures: a page table, and an alloc table.

.table.page.map: Each page in the VM has a corresponding page table entry.

.table.page.linear: The table is a linear array of PageStruct entries; there is a simple mapping between the index in the table and the base address in the VM (a sketch in C follows the list). Namely:

  • index to base address: base-address = arena-base + (index * page-size)
  • base address to index: index = (base-address - arena-base) / page-size
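
A minimal sketch of this mapping in C, assuming arenaBase and pageSize are held in the arena structure (the names are illustrative):

    #include <stddef.h>
    #include <stdint.h>

    static uintptr_t base_of_index(uintptr_t arenaBase, size_t pageSize,
                                   size_t index)
    {
        return arenaBase + index * pageSize;
    }

    static size_t index_of_base(uintptr_t arenaBase, size_t pageSize,
                                uintptr_t base)
    {
        return (size_t)((base - arenaBase) / pageSize);
    }

Since the page size is a power of two in practice, the division reduces to a shift, which is the "shift and add" of .idea.table.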

.table.page.partial: The table is partially mapped on an "as-needed" basis using the SparseArray abstract type.

.table.page.tract: Each page table entry contains a tract; the tract is only valid if the page is allocated to a pool. If the page is not allocated to a pool, the fields of the tract are used for other purposes. (See design.mps.arena.tract.field.pool.)
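
For illustration, a page table entry along these lines might be declared as below. The field and type names are assumptions, not the MPS definitions; the point is only that the tract's storage is reinterpreted while the page is free:

    #include <stddef.h>

    typedef struct TractStruct {
        void *pool;            /* owning pool; only meaningful when allocated */
        void *p;               /* pool-specific field */
    } TractStruct;

    typedef union PageUnion {
        TractStruct tract;     /* valid only while the page belongs to a pool */
        struct {
            size_t spareCount; /* example reuse of the space while free */
        } free;
    } PageUnion;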

.table.alloc: The alloc table is a simple bit table (implemented using the BT module, design.mps.bt).

.table.alloc.map: Each page in the VM has a corresponding alloc table entry.

.table.alloc.semantics: The bit in the alloc table is set iff the corresponding page is allocated (to a pool).
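
The semantics can be mirrored by a small, self-contained bit table like the one below; the real arena uses the BT module (design.mps.bt) rather than this sketch:

    #include <limits.h>
    #include <stddef.h>

    typedef unsigned long Word;
    #define WORD_BITS (sizeof(Word) * CHAR_BIT)

    /* Set the page's bit when it is allocated to a pool. */
    static void alloc_set(Word *allocTable, size_t page)
    {
        allocTable[page / WORD_BITS] |= (Word)1 << (page % WORD_BITS);
    }

    /* Reset the bit when the page is freed. */
    static void alloc_reset(Word *allocTable, size_t page)
    {
        allocTable[page / WORD_BITS] &= ~((Word)1 << (page % WORD_BITS));
    }

    /* The bit is set iff the corresponding page is allocated. */
    static int alloc_get(const Word *allocTable, size_t page)
    {
        return (int)((allocTable[page / WORD_BITS] >> (page % WORD_BITS)) & 1);
    }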

Notes

.fig.page: How the pages in the arena area are represented in the tables.

[missing figure]

.fig.count: How a count table can be used to partially map the page table, as proposed in request.dylan.170049.sol.map.

[missing figure]

Document History

  • 1996-07-16 David Jones. Incomplete document.
  • 2002-06-07 RB Converted from MMInfo database design document.
  • 2013-05-24 GDR Converted to reStructuredText.
  • 2014-02-17 RB Updated to note use of SparseArray rather than direct management of page table mapping.