THREAD SAFETY IN THE MPS
design.mps.thread-safety
incomplete design
dsm 1995-10-03
INTRODUCTION:
This document describes how thread safety is achieved in the MPS.
OVERVIEW:
The MPS is expected to run in an environment with multiple threads all calling
into the MPS. The initial approach is very simple. Some of the code is known
to operate with exclusive access to the data it manipulates, so this code is
safe. For the rest of the code, shared data structures are locked by the use
of a single binary lock (design.mps.lock(0)) per arena. This lock is claimed
on entry to the MPS and released on exit from it. So there is at most a single
thread (per arena) running "inside" the MPS at a time.
REQUIREMENTS:
.req.mt: Code must work correctly in the presence of multiple threads all
calling into the MPS.
.req.perf: Performance should not be unreasonably hindered.
ARCHITECTURE:
.arch.arena: Arena Lock: no shared data between arenas
.arch.global.binary: Global binary lock: protects mutable data shared between
arenas - i.e. the arena ring, see design.mps.arena.static.ring.lock.
.arch.global.recursive: Global recursive lock: protects static data which must
be initialized once - e.g. pool classes, see design.mps.protocol.impl.init-lock.
.arch.other: Other: data not shared
.arch.static: Static data: sigs (signatures): shared-non-mutable, always
initialised to the same value.
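The following sketch shows, in C, one way the lock layout described above could
be arranged. It is illustrative only: the LockStruct type and the global lock
names are hypothetical stand-ins for whatever design.mps.lock and
design.mps.arena actually define.

    /* Illustrative only: the lock layout implied by .arch.arena and
     * .arch.global.*.  LockStruct and the global lock names are
     * hypothetical stand-ins for the definitions in design.mps.lock. */

    typedef struct LockStruct {
      int placeholder;    /* platform-specific representation, per design.mps.lock */
    } LockStruct, *Lock;

    static LockStruct globalBinaryLock;     /* .arch.global.binary: guards the arena ring */
    static LockStruct globalRecursiveLock;  /* .arch.global.recursive: guards one-time
                                             * initialisation of static data */

    typedef struct ArenaStruct {
      Lock lock;    /* .arch.arena: guards mutable data shared within this arena */
      /* ... other arena fields ... */
    } ArenaStruct, *Arena;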
.arena-entry: Each arena has a single lock. Externally visible calls fall into
two categories:
- Simple: the arena lock is not already held. The lock is claimed on entry and
released on exit.
- Recall: these are callable only after a call-back from the MPS, so the arena
lock is already held.
.interface: The definition of the interface should guarantee safe use of calls
(from a locking point of view). For example, a buffer must be exclusive to a
thread.
.buffers: The buffer code is designed not to need a lock in the fast case. A
lock is claimed only in the exceptional reserve, trip, and commit cases (fill
and trip?). A buffer contains references to shared data (via its pool field);
accessing this shared data must involve a lock.
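To illustrate the intended fast path, here is a much-simplified reserve sketch.
The field names and bufferReserveSketch are illustrative and are not the real
buffer protocol of design.mps.buffer (which also covers commit and trip); the
point is that the fast case is a pointer bump needing no lock, because a buffer
is exclusive to one thread (.interface).

    #include <stddef.h>

    /* Much-simplified sketch of the lock-free fast case; not the real
     * buffer protocol.  No lock is needed because a buffer is exclusive
     * to one thread; only the slow case goes through the locked
     * BufferFill path. */

    typedef struct BufferStruct {
      char *alloc;   /* next free address in the buffer */
      char *limit;   /* end of the buffer */
      /* ... pool field etc.; touching those requires the arena lock ... */
    } BufferStruct, *Buffer;

    static void *bufferReserveSketch(Buffer buf, size_t size)
    {
      if ((size_t)(buf->limit - buf->alloc) >= size) {
        void *p = buf->alloc;      /* fast case: pointer bump, no lock */
        buf->alloc += size;
        return p;
      }
      return NULL;                 /* slow case: caller must take the arena
                                    * lock and call BufferFill */
    }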
.deadlock: A strict ordering is required between the global and arena locks to
prevent deadlock. The binary global lock may not be claimed while either the
arena or recursive global lock is held; the arena lock may not be claimed while
the recursive global lock is held. Each arena lock is independent of all other
arena locks; i.e. a thread may not attempt to claim more than one arena lock at
a time.
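One way to make this ordering checkable at run time is sketched below. This is
purely illustrative and is not how the MPS enforces the ordering; the ranks and
the rankClaim/rankRelease functions are hypothetical.

    #include <assert.h>

    /* Illustrative deadlock-avoidance check: give each lock a rank
     * reflecting .deadlock and assert that a thread claims locks in
     * strictly increasing rank order.  Only the recursive global lock
     * may be re-claimed by its holder. */

    enum LockRank {
      RankNone = 0,
      RankGlobalBinary,     /* may be claimed only when no other lock is held */
      RankArena,            /* at most one arena lock per thread */
      RankGlobalRecursive   /* innermost lock */
    };

    static _Thread_local enum LockRank currentRank = RankNone;  /* C11; hypothetical */

    static enum LockRank rankClaim(enum LockRank rank)
    {
      enum LockRank previous = currentRank;
      assert(rank > currentRank
             || (rank == RankGlobalRecursive
                 && currentRank == RankGlobalRecursive));
      currentRank = rank;
      return previous;             /* pass back to rankRelease */
    }

    static void rankRelease(enum LockRank previous)
    {
      currentRank = previous;
    }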
ANALYSIS:
.anal.simple: For the code to function correctly it should be easy to change
correctly, so a simple approach is desirable. We also have to ensure that
performance is not unreasonably degraded.
Performance cost of Locking
.lock-cost: The performance costs of locking are:
.lock-cost.overhead: the overhead of claiming and releasing locks.
.lock-cost.pause: the pauses caused by one thread being blocked on another
thread.
.lock-cost.wait: the time wasted by one thread being blocked on another thread.
.anal.perf.signif: .lock-cost.pause is significant if there are MPS functions
that take a long time. Using more locks, e.g. having a lock per pool as well
as a lock per arena, is a way of decreasing the locking conflict between
threads (.lock-cost.pause and .lock-cost.wait). However this could increase
.lock-cost.overhead significantly.
.anal.perf.work: However, all MPS functions involve only a small amount of work
unless a collection is taking place. In the case of a collection, in practice
and certainly in the near future, all threads will most likely be suspended
while the collection work is going on. (The pages being scanned will need to be
unprotected, which implies the mutator will have to be stopped.) We also have
to remember that unless we are running on a genuine multiprocessor,
.lock-cost.wait is irrelevant.
.anal.perf.alloc: During typical use we expect that it is allocation that is
the most frequent activity. Allocation buffers (design.mps.buffer) are
designed to allow allocation in concurrent threads without needing a lock. So
the most significant time a thread spends in the MPS will be on a buffer-fill
or during a collection. The next most significant use is likely to be buffer
creation and deletion, as a separate buffer will be required for each thread.
.anal.perf.lock: So overall the performance cost of locking is, I estimate,
most significantly the overhead of calling the locking functions. Hence it
would be undesirable from a performance point of view to have more than one
lock.
Recursive vs Binary locks
.anal.reentrance: The simplest way to lock the code safely is to define which
code runs inside or outside the lock. Calling from the outside to the inside
implies a lock has to be claimed. Returning means the lock has to be
released. Control flow from outside to outside and from inside to inside needs
no locking action. To implement this a function defined on the external
interface needs to claim the lock on entry and release it on exit. Our code
currently uses some external functions with the lock already held. There are
two ways to implement this:
.recursive: Each external function claims a recursive lock.
+ simple
- have to worry about locking depth
- extra locking overhead on internal calls of external functions
.binary: Each external function claims a binary lock. Replace each internal
call of an external function with a call to a newly defined internal one.
- more code
+ slightly easier to reason about
.anal.strategy: It seems that the .recursive strategy is the easiest to
implement first, but could be evolved into a .binary strategy. (That evolution
has now happened. tony 1999-08-31).
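In outline, the .binary strategy looks like the sketch below. All of the names
(mps_foo, FooInternal, BarInternal, Thing) are illustrative; only ArenaEnter and
ArenaLeave correspond to functions described under IMPLEMENTATION.

    /* Sketch of the .binary strategy: the external entry point claims
     * the non-recursive arena lock once, and internal code always calls
     * the unlocked internal function, so the lock is never claimed
     * twice by the same thread.  All names are illustrative. */

    typedef int Res;                      /* placeholder result type */
    enum { ResOK = 0 };
    typedef struct ArenaStruct *Arena;
    typedef void *Thing;                  /* placeholder argument type */

    extern void ArenaEnter(Arena arena);  /* claim the arena's binary lock */
    extern void ArenaLeave(Arena arena);  /* release it */

    static Res FooInternal(Arena arena, Thing thing)
    {
      /* manipulate shared arena data; the lock is already held */
      (void)arena; (void)thing;
      return ResOK;
    }

    Res mps_foo(Arena arena, Thing thing)  /* external interface function */
    {
      Res res;
      ArenaEnter(arena);                   /* claim on entry */
      res = FooInternal(arena, thing);
      ArenaLeave(arena);                   /* release on exit */
      return res;
    }

    static Res BarInternal(Arena arena, Thing thing)
    {
      /* internal callers use FooInternal, never mps_foo */
      return FooInternal(arena, thing);
    }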
IDEAS:
.sol.arena-lock: Lock per arena which locks all MPS structures associated with
the arena, except allocation buffers.
.sol.init: Shared static data may not be changed. It is initialised before
being read, and if re-initialised the values written must be identical to those
already there. Essentially only read-only shared static data is allowed.
.sol.fine-grain: Use finer-grained locks, e.g. a lock per pool instance. The
arena lock locks only operations on the arena; pool locks are claimed per pool.
An ordering on pool instances would avoid deadlock.
.sol.global: Use global locks for genuinely global data which must be updated
dynamically. An ordering between global and arena locks would avoid deadlock.
IMPLEMENTATION:
Use MPS locks (design.mps.lock) to do locking.
Locking Functions
ArenaEnter/ArenaLeave are used to claim and release the arena lock. To
implement this:
- there is a lock for every arena. The arena class init function allocates the
lock as well as the arena itself.
- ArenaInit calls LockInit on the lock and initializes the arena's pointer to it.
- ArenaDestroy calls LockFinish on it.
- ArenaEnter: function for claiming the lock.
- ArenaLeave: function for releasing the lock.
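A sketch of how the enter and leave functions might look. LockClaim and
LockRelease are assumed names for the binary claim and release operations of
design.mps.lock (this document names only LockInit and LockFinish), and
AVER/AVERT stand for the usual MPS checking macros; the declarations are
assumed to come from the internal headers.

    /* Sketch only; assumes the internal declarations of Arena, Lock,
     * AVER and AVERT.  LockClaim/LockRelease are assumed names for the
     * binary lock operations of design.mps.lock. */

    void ArenaEnter(Arena arena)
    {
      AVER(arena != NULL);    /* cannot fully validate before holding the lock */
      LockClaim(arena->lock);
      AVERT(Arena, arena);    /* full check once we have exclusive access */
    }

    void ArenaLeave(Arena arena)
    {
      AVERT(Arena, arena);
      LockRelease(arena->lock);
    }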
Shared and Non-Shared Data
Non-shared data is data to which no other thread has a handle.
Shared-non-mutable data is data which is never changed after initialisation.
It may be re-initialised, provided re-initialisation does not change its value.
Atomically-updatable data is data which is not locked, but may be shared
because it is in a consistent state before and after any update.
A function is "safe" if it may safely execute without exclusive access to the
data it manipulates.
A "safe" function may:
- call other safe functions
- manipulate non-shared data
- read shared-non-mutable data
- claim the arena lock around code which may manipulate shared data in the
arena.
For each function in the external MPS interface, one of the following holds:
- the function calls ArenaEnter on entry and ArenaLeave on exit
- the function uses PoolArena to identify the arena, before claiming the lock
- the function uses BufferPool & PoolArena to identify the arena, before
claiming the lock
- the function is not defined as external but is listed for explicitness.
- the function claims the lock only in otherwise unsafe situations (buffer code)????
- the function may be called externally but only in a situation where the arena
lock is already held
- the function is the unique accessor of its data.
So PoolArena and BufferPool must be "safe":
- pool->arena is shared-non-mutable
- buffer->pool is shared-non-mutable
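For example, a pool-level entry point following the second pattern above might
look like this sketch (mps_pool_alloc_sketch and PoolAllocInternal are
hypothetical names; the types and checking macros are assumed from the internal
headers).

    /* Sketch of the second pattern: use PoolArena, which is safe
     * because it reads only the shared-non-mutable pool->arena field,
     * to find the arena, then claim its lock around the real work. */

    Res mps_pool_alloc_sketch(Addr *pReturn, Pool pool, Size size)
    {
      Res res;
      Arena arena;

      AVER(pReturn != NULL);        /* arguments only; no shared data yet */
      arena = PoolArena(pool);      /* safe: shared-non-mutable data only */
      ArenaEnter(arena);            /* exclusive access from here on */
      AVERT(Pool, pool);            /* full validation under the lock */
      res = PoolAllocInternal(pReturn, pool, size);  /* hypothetical internal fn */
      ArenaLeave(arena);
      return res;
    }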
Validation
We have to be careful about validation. Any function that is called from an
arena-safe function without the arena lock held must itself be safe, or must
manipulate only non-shared data.
For example, calling PoolIsValid before claiming the lock would be wrong if
PoolIsValid is unsafe. Defining it to be safe would involve locking it, which,
if done in all similar situations, would be very expensive. Possibly remove
validation from accessor methods, replacing it with a sig check, and put the
isvalid calls in the callers of the accessor functions.
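One possible shape for that change is sketched below; this is illustrative
only, not a decision. CHECKSIG is a hypothetical macro, while the sig field and
PoolSig follow the usual MPS signature convention.

    /* Accessors perform only a cheap signature check, which touches
     * nothing mutable; full validation is left to callers that hold
     * the arena lock.  CHECKSIG is hypothetical. */

    #define CHECKSIG(obj, wanted)  AVER((obj) != NULL && (obj)->sig == (wanted))

    Arena PoolArena(Pool pool)
    {
      CHECKSIG(pool, PoolSig);   /* safe: sig is shared-non-mutable */
      return pool->arena;        /* safe: also shared-non-mutable */
    }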
Annotations?:
safe
non-shared
shared-non-mutable
Safe functions:
Arena
ArenaCreate no shared data; no lock; calls LockInit
ArenaDestroy no shared data; no lock (should only finish arena after
use); calls LockFinish
ArenaDescribe lock
Root
For the purposes of locking this module can be thought of as external.
RootCreate calls create
RootCreateTable calls create
create lock
RootDestroy lock
RootDescribe lock
(Roots will be attached to the arena, so we can lock now.)
Pool
PoolCreate / PoolCreateV lock (Create calls CreateV which locks).
PoolDestroy lock
PoolAlloc lock
PoolFree lock
PoolArena accesses shared-non-mutable data only
PoolDescribe lock
Format
FormatCreate lock
FormatDestroy lock
Buffer
BufferCreate lock
BufferDestroy lock
BufferFill lock
BufferTrip lock
BufferPool accesses shared-non-mutable data only
BufferDescribe lock
BufferCommit "unsafe": a buffer may be used by a single thread only (but it is
safe with respect to the arena)
BufferReserve "unsafe": likewise
PoolClass -- only shared data is static and non-mutable
PoolClass
PoolClassAMC
PoolClassMV
PoolClassMFS
Sig -- as with PoolClasses, relies on static data being re-initialised to a
constant value
Collect
Collect lock
Thread
ThreadRegister lock
ThreadDeregister lock
A. References
B. Document History
| 2002-06-21 | RB | Converted from MMInfo database design document. |
C. Copyright and License
This document is copyright © #YEAR# Ravenbrook Limited. All rights reserved. This is an open source license. Contact Ravenbrook for commercial licensing options.
Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:
- Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
- Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
- Redistributions in any form must be accompanied by information on how to obtain complete source code for this software and any accompanying software that uses this software. The source code must either be included in the distribution or be available for no more than the cost of distribution plus a nominal fee, and must be freely redistributable under reasonable conditions. For an executable file, complete source code means the source code for all modules it contains. It does not include source code for modules or files that typically accompany the major components of the operating system on which the executable file runs.
This software is provided by the copyright holders and contributors "as is" and any express or implied warranties, including, but not limited to, the implied warranties of merchantability, fitness for a particular purpose, or non-infringement, are disclaimed. In no event shall the copyright holders and contributors be liable for any direct, indirect, incidental, special, exemplary, or consequential damages (including, but not limited to, procurement of substitute goods or services; loss of use, data, or profits; or business interruption) however caused and on any theory of liability, whether in contract, strict liability, or tort (including negligence or otherwise) arising in any way out of the use of this software, even if advised of the possibility of such damage.