Memory Management Glossary: L


large object area

An allocation mechanism designed to optimize the management of large objects by separating them from small ones.

Large objects, typically objects one or more orders of magnitude larger than the virtual memory page size of the platform, can be costly to allocate, initialize, and recycle. By segregating those objects into a separate area, they can be managed using specific mechanisms that would be inefficient for smaller objects but which can reduce the cost of manipulating large ones.

Some example mechanisms:

  1. In a copying collector, large objects can be managed separately using a mark-and-sweep collector to avoid copying costs. See Ungar (1988).

  2. By aligning large objects on page boundaries, they can be compacted or copied by adjusting their mapping in virtual memory. See Withington (1991).

  3. Large objects may be split into a header and a body, where the header is fixed size and the bulk of the object is in the body. See Ungar (1988).

  4. By using a page-based read barrier, large objects can be initialized incrementally. For example, each page of the large object is initialized to zero when it is first read, rather than all at once at creation time.

  5. In a copying collector, large objects can be copied incrementally using a similar technique (the new copy is initialized from the old copy). See Baker (1978).

  6. Large objects are often leaf objects, so they do not need to be scanned, or they are known to have a fixed format with only a few references, so they can be scanned more efficiently by a specialized scanner.

  7. Large objects often have longer than average lifetimes, so they are not allocated in the nursery space of a generational garbage collector.
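
As a minimal sketch of the basic segregation (not taken from any particular allocator), the following C fragment routes requests above a threshold to a separate, page-aligned area obtained from the operating system, assuming a POSIX-like mmap with MAP_ANONYMOUS and a hypothetical small_alloc for everything else:

    #include <stddef.h>
    #include <sys/mman.h>

    /* Hypothetical small-object allocator provided elsewhere. */
    extern void *small_alloc(size_t size);

    /* Assumed threshold: 16 pages of 4 KiB. */
    #define LARGE_THRESHOLD (64 * 1024)

    /* Route large requests to a separate, page-aligned area so that
       mechanisms like those listed above (remapping, incremental
       copying, specialized scanning) can be applied to them. */
    void *alloc(size_t size)
    {
        if (size >= LARGE_THRESHOLD) {
            void *p = mmap(NULL, size, PROT_READ | PROT_WRITE,
                           MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
            return p == MAP_FAILED ? NULL : p;
        }
        return small_alloc(size);
    }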

large page

See

huge page.

leaf object

Also known as

atomic object.

A leaf object is an object that does not reference any other objects.

In a typed language, the compiler can often determine at compile time that certain types can be represented as leaf objects. Usually these types are either a scalar data type or a vector data type of scalars with bounded magnitude.

Relevance to memory management

If leaf objects can be identified, a garbage collector can make certain optimizations: leaf objects do not have to be scanned for references, nor are barriers (1) needed to detect and maintain references in the object.
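
As an illustration (the object header and function names here are hypothetical, not from any particular collector), a tracing collector's scanner can test a per-object leaf flag and skip leaf objects entirely:

    #include <stdbool.h>
    #include <stddef.h>

    /* Hypothetical object header for a simple tracing collector. */
    typedef struct object {
        bool   is_leaf;     /* set at allocation time from type information */
        size_t ref_count;   /* number of reference fields that follow */
        struct object *refs[];
    } object;

    extern void mark(object *obj);   /* hypothetical marking routine */

    /* Leaf objects contain no references, so there is nothing to
       trace in them (and no barrier needs to be maintained for them). */
    void scan(object *obj)
    {
        size_t i;
        if (obj->is_leaf)
            return;
        for (i = 0; i < obj->ref_count; i++)
            mark(obj->refs[i]);
    }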

leak

See

memory leak.

life

See

lifetime.

lifetime

Also known as

extent, life.

The lifetime or extent of an object is the time for which the object is live.

LIFO-ordered first fit

The allocation policy that always uses the most-recently freed (1) suitable free block. Commonly implemented by pushing freed blocks on the front of a free block chain, and then using first fit allocation on this chain. free (1) can be very quick, depending on the coalescing policy.

This policy may suffer from severe fragmentation in the presence of short-lived large objects of a single size. As smaller objects are allocated, the free block chain fills up with fragments a little smaller than the large object size.
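
A sketch of the policy (names are illustrative; coalescing and splitting are omitted): freed blocks are pushed on the front of the chain, and allocation takes the first block that fits.

    #include <stddef.h>

    /* A free block, threaded through the free memory itself. */
    typedef struct free_block {
        size_t size;
        struct free_block *next;
    } free_block;

    static free_block *free_list = NULL;   /* head of the free block chain */

    /* Free: push on the front of the chain, so the most-recently
       freed block is considered first (LIFO order). */
    void lifo_free(free_block *block)
    {
        block->next = free_list;
        free_list = block;
    }

    /* Allocate: first fit over the LIFO-ordered chain; the first
       block that is large enough is unlinked and returned. */
    free_block *lifo_alloc(size_t size)
    {
        free_block **link = &free_list;
        while (*link != NULL) {
            if ((*link)->size >= size) {
                free_block *block = *link;
                *link = block->next;
                return block;
            }
            link = &(*link)->next;
        }
        return NULL;   /* no suitable block: grow the heap, etc. */
    }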

Related publication

Wilson et al. (1995).

limited-field reference count

Also known as

sticky reference count.

A reference counting technique whereby the field used to store the number of references to an object has a limited size. In particular, the field is not large enough to represent the maximum possible number of references to an object.

Using the observation that most objects are not referenced a great number of times, some systems that use reference counts only store the count accurately up to a certain maximum value. If an object has more references than the maximum then the count “sticks” at the maximum and is never decremented. Such objects are expected to be rare, but their memory (1) can never be reclaimed using reference counting. A separate (infrequently run) tracing garbage collector is often employed to reclaim this storage.

A degenerate form of limited-field reference counting is one-bit reference counting where an object is considered to be referenced either exactly once or many times.
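
A sketch of a sticky count, assuming a hypothetical object header with a 2-bit count field and a reclaim routine supplied elsewhere:

    /* 2-bit reference count: 0..2 are exact, 3 means "stuck". */
    #define RC_MAX 3u

    typedef struct object {
        unsigned rc : 2;    /* limited-field reference count */
        /* ... the rest of the object ... */
    } object;

    extern void reclaim(object *obj);   /* hypothetical reclamation routine */

    /* Increment: once the count reaches RC_MAX it sticks there. */
    void rc_increment(object *obj)
    {
        if (obj->rc < RC_MAX)
            obj->rc++;
    }

    /* Decrement: a stuck count is never decremented; such objects can
       only be reclaimed by the backup tracing collector. Must not be
       called when the count is already zero. */
    void rc_decrement(object *obj)
    {
        if (obj->rc == RC_MAX)
            return;
        if (--obj->rc == 0)
            reclaim(obj);
    }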

linear addressing

In linear addressing, addresses form a single, continuous address space. This term is used mostly in opposition to segmented addressing.

Opposite term

segmented addressing.

live

Also known as

alive, active.

Memory (2) or an object is live if the program will read from it in future. The term is often used more broadly to mean reachable.

It is not possible, in general, for garbage collectors to determine exactly which objects are still live. Instead, they use some approximation to detect objects that are provably dead, such as those that are not reachable.
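
For example (using C syntax purely for illustration, with a hypothetical use function), a block can be reachable but dead:

    #include <stdlib.h>

    extern void use(char c);   /* hypothetical consumer of the data */

    static char *buffer;       /* global: keeps the block reachable */

    void example(void)
    {
        buffer = malloc(1024);
        if (buffer == NULL)
            return;
        buffer[0] = 'x';
        use(buffer[0]);        /* last read: the block is live up to here */
        /* From here on the block is dead (it will never be read again),
           but it remains reachable through the global buffer, so a
           collector that approximates liveness by reachability would
           retain it. */
    }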

Similar term

reachable.

Opposite term

dead.

See also

undead.

load

To transfer data from memory (2) to a processor’s registers.

Load can also be used in the more general sense of moving data from a part of the memory hierarchy that is slow to access to one that is fast to access (for example, “it takes about 3 ms for the virtual memory system to load a page from disk on this system”). When used in this sense, the qualified term cache (2) load is common.

LOAD (or an abbreviation) is also commonly used in many processor architectures as the mnemonic name for the machine code instructions that are used primarily to make data accessible to the CPU (usually by loading it into registers). In RISC architectures it is common for the load instructions to be the only means of making data accessible to the CPU; in CISC architectures it is common for a wide variety of instructions to implicitly or explicitly load data from memory.

Opposite term

store (1).

locality of reference

Locality of reference is the extent to which successive accesses of nearby memory (1) locations are nearby in time; for example, a program that reads all the elements of a contiguous array in turn or that repeatedly uses the same memory variable has good locality of reference.

Good locality of reference interacts well with virtual memory and memory caches, as it reduces the working set and improves the hit rate.
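
For example, in C, traversing a two-dimensional array in its row-major storage order touches consecutive memory locations, while traversing it column-first touches locations a whole row apart on each access:

    #define ROWS 1024
    #define COLS 1024

    static double a[ROWS][COLS];

    /* Good locality: C stores arrays row-major, so the inner loop
       walks consecutive locations and makes full use of each cache line. */
    double sum_row_major(void)
    {
        double sum = 0.0;
        for (int i = 0; i < ROWS; i++)
            for (int j = 0; j < COLS; j++)
                sum += a[i][j];
        return sum;
    }

    /* Poor locality: successive accesses are COLS * sizeof(double)
       bytes apart, enlarging the working set and lowering the hit rate. */
    double sum_column_major(void)
    {
        double sum = 0.0;
        for (int j = 0; j < COLS; j++)
            for (int i = 0; i < ROWS; i++)
                sum += a[i][j];
        return sum;
    }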

There are a number of specialized senses of locality of reference in certain fields such as distributed systems; these are not covered in depth here.

Relevance to memory management

A mutator may exhibit predictable properties such as accessing in turn objects which were allocated in turn, or accessing in turn objects which have references to each other. An intelligent allocator or copying garbage collector can use this observation to improve locality of reference.

location

See

memory location.

location dependency

In the MPS

A location dependency records the fact that the client program depends on the bit patterns of some references (and not merely on the identity of the block to which the reference refers), and provides a function (mps_ld_isstale()) to find out whether any of these references have been changed because a block has been moved. See Location dependency.
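
A condensed sketch of the usual pattern, an address-based hash table: the table structure and its helper functions (hash_address, buckets_find, table_rehash) are hypothetical, while mps_ld_add(), mps_ld_isstale(), and mps_ld_reset() belong to the MPS interface. Each key's address is added to the dependency before it is hashed; a failed lookup is retried after rehashing if the dependency may be stale.

    #include "mps.h"

    /* Hypothetical address-based hash table; only the parts relevant
       to location dependency are shown. */
    typedef struct table_s {
        mps_ld_s ld;         /* dependency on the addresses of all keys */
        mps_arena_t arena;
        /* ... buckets ... */
    } table_s;

    extern unsigned long hash_address(void *addr);                     /* hypothetical */
    extern void *buckets_find(table_s *t, void *key, unsigned long h); /* hypothetical */
    extern void table_rehash(table_s *t);          /* hypothetical: re-adds every key */

    /* Record the dependency on the key's address before hashing it. */
    static unsigned long table_hash(table_s *t, void *key)
    {
        mps_ld_add(&t->ld, t->arena, key);
        return hash_address(key);
    }

    void *table_lookup(table_s *t, void *key)
    {
        void *value = buckets_find(t, key, table_hash(t, key));
        if (value == NULL && mps_ld_isstale(&t->ld, t->arena, key)) {
            /* Some key may have been moved by the collector, so the
               stored hashes may be out of date: reset the dependency,
               rehash every key, and retry the lookup. */
            mps_ld_reset(&t->ld, t->arena);
            table_rehash(t);
            value = buckets_find(t, key, table_hash(t, key));
        }
        return value;
    }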

lock free

A multi-threaded program is lock free if all schedules for the threads make progress: in particular, no schedule leads to deadlock. This is most easily implemented by avoiding taking locks.
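
For instance, a shared stack whose push uses an atomic compare-and-swap loop (C11 atomics) instead of a lock: no thread ever blocks holding a lock, so no schedule can deadlock, and the stack as a whole keeps making progress even when individual pushes have to retry. (A complete lock-free pop also has to handle the ABA problem, which this sketch ignores.)

    #include <stdatomic.h>
    #include <stddef.h>

    typedef struct node {
        struct node *next;
        int value;
    } node;

    static _Atomic(node *) top = NULL;

    /* Lock-free push: retry the compare-and-swap until it succeeds.
       No lock is ever held, so no schedule can lead to deadlock. */
    void push(node *n)
    {
        node *old = atomic_load(&top);
        do {
            n->next = old;
        } while (!atomic_compare_exchange_weak(&top, &old, n));
    }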

logical address

See

virtual address.

longword

See

doubleword.