Set alternate default number of partial collections
between full collections (matters only if incremental collection is on).
NO_CANCEL_SAFE (Posix platforms with threads only) Don't bother trying
to make the collector safe for thread cancellation; cancellation is not
used. (Note that if cancellation is used anyway, threads may end up
getting canceled in unexpected places.) Even without this option,
PTHREAD_CANCEL_ASYNCHRONOUS is never safe with the collector. (We could
argue about its safety without the collector.)
UNICODE (Win32 only) Use the Unicode variant ('W') of the Win32 API instead
of ANSI/ASCII one ('A'). Useful for WinCE.
PLATFORM_ANDROID (or __ANDROID__) Compile for Android NDK platform.
SN_TARGET_PS3 Compile for Sony PS/3.
USE_GET_STACKBASE_FOR_MAIN (Linux only) Use pthread_attr_getstack() instead
of __libc_stack_end (or instead of any hard-coded value) for getting the
primordial thread stack base (useful if the client modifies the program's
address space).
Gauche-0.9.6/gc/doc/README.hp 0000664 0000764 0000764 00000001515 13227007433 014415 0 ustar shiro shiro Dynamic loading support requires that executables be linked with -ldld.
The alternative is to build the collector without defining DYNAMIC_LOADING
in gcconfig.h and ensuring that all garbage collectible objects are
accessible without considering statically allocated variables in dynamic
libraries.
The collector should compile with either plain cc or cc -Ae. Cc -Aa
fails to define _HPUX_SOURCE and thus will not configure the collector
correctly.
Incremental collection support was added recently, and should now work.
In spite of past claims, pthread support under HP/UX 11 should now work.
Define GC_HPUX_THREADS for the build. Incremental collection still does not
work in combination with it.
The stack finding code can be confused by putenv calls before collector
initialization. Call GC_malloc or GC_init before any putenv calls.
Gauche-0.9.6/gc/doc/README.rs6000 0000664 0000764 0000764 00000000775 13074053206 014746 0 ustar shiro shiro We have so far failed to find a good way to determine the stack base.
It is highly recommended that GC_stackbottom be set explicitly on program
startup. The supplied value sometimes causes failure under AIX 4.1, though
it appears to work under 3.X. HEURISTIC2 seems to work under 4.1, but
involves a substantial performance penalty, and will fail if there is
no limit on stack size.
There is no thread support. (I assume recent versions of AIX provide
pthreads? I no longer have access to a machine ...)
Gauche-0.9.6/gc/doc/gcdescr.html 0000664 0000764 0000764 00000075256 13227007433 015444 0 ustar shiro shiro
Conservative GC Algorithmic Overview
Hans-J. Boehm, HP Labs (Some of this was written at SGI)
This is under construction, and may always be.
Conservative GC Algorithmic Overview
This is a description of the algorithms and data structures used in our
conservative garbage collector. I expect the level of detail to increase
with time. For a survey of GC algorithms, see for example
Paul Wilson's
excellent paper. For an overview of the collector interface,
see here.
This description is targeted primarily at someone trying to understand the
source code. It specifically refers to variable and function names.
It may also be useful for understanding the algorithms at a higher level.
The description here assumes that the collector is used in default mode.
In particular, we assume that it used as a garbage collector, and not just
a leak detector. We initially assume that it is used in stop-the-world,
non-incremental mode, though the presence of the incremental collector
will be apparent in the design.
We assume the default finalization model, but the code affected by that
is very localized.
Introduction
The garbage collector uses a modified mark-sweep algorithm. Conceptually
it operates roughly in four phases, which are performed occasionally
as part of a memory allocation:
-
Preparation Each object has an associated mark bit.
Clear all mark bits, indicating that all objects
are potentially unreachable.
-
Mark phase Marks all objects that can be reachable via chains of
pointers from variables. Often the collector has no real information
about the location of pointer variables in the heap, so it
views all static data areas, stacks and registers as potentially containing
pointers. Any bit patterns that represent addresses inside
heap objects managed by the collector are viewed as pointers.
Unless the client program has made heap object layout information
available to the collector, any heap objects found to be reachable from
variables are again scanned similarly.
-
Sweep phase Scans the heap for inaccessible, and hence unmarked,
objects, and returns them to an appropriate free list for reuse. This is
not really a separate phase; even in non incremental mode this is operation
is usually performed on demand during an allocation that discovers an empty
free list. Thus the sweep phase is very unlikely to touch a page that
would not have been touched shortly thereafter anyway.
-
Finalization phase Unreachable objects which had been registered
for finalization are enqueued for finalization outside the collector.
The remaining sections describe the memory allocation data structures,
and then the last 3 collection phases in more detail. We conclude by
outlining some of the additional features implemented in the collector.
Allocation
The collector includes its own memory allocator. The allocator obtains
memory from the system in a platform-dependent way. Under UNIX, it
uses either malloc, sbrk, or mmap.
Most static data used by the allocator, as well as that needed by the
rest of the garbage collector is stored inside the
_GC_arrays structure.
This allows the garbage collector to easily ignore the collectors own
data structures when it searches for root pointers. Other allocator
and collector internal data structures are allocated dynamically
with GC_scratch_alloc. GC_scratch_alloc does not
allow for deallocation, and is therefore used only for permanent data
structures.
The allocator allocates objects of different kinds.
Different kinds are handled somewhat differently by certain parts
of the garbage collector. Certain kinds are scanned for pointers,
others are not. Some may have per-object type descriptors that
determine pointer locations. Or a specific kind may correspond
to one specific object layout. Two built-in kinds are uncollectible.
One (STUBBORN) is immutable without special precautions.
In spite of that, it is very likely that most C clients of the
collector currently
use at most two kinds: NORMAL and PTRFREE objects.
The GCJ runtime
also makes heavy use of a kind (allocated with GC_gcj_malloc) that stores
type information at a known offset in method tables.
The collector uses a two level allocator. A large block is defined to
be one larger than half of HBLKSIZE, which is a power of 2,
typically on the order of the page size.
Large block sizes are rounded up to
the next multiple of HBLKSIZE and then allocated by
GC_allochblk. Recent versions of the collector
use an approximate best fit algorithm by keeping free lists for
several large block sizes.
The actual
implementation of GC_allochblk
is significantly complicated by black-listing issues
(see below).
Small blocks are allocated in chunks of size HBLKSIZE.
Each chunk is
dedicated to only one object size and kind.
The allocator maintains
separate free lists for each size and kind of object.
Associated with each kind is an array of free list pointers,
with entry freelist[i] pointing to
a free list of size i objects.
In recent versions of the
collector, index i is expressed in granules, which are the
minimum allocatable unit, typically 8 or 16 bytes.
The free lists themselves are
linked through the first word in each object (see obj_link()
macro).
Once a large block is split for use in smaller objects, it can only
be used for objects of that size, unless the collector discovers a completely
empty chunk. Completely empty chunks are restored to the appropriate
large block free list.
In order to avoid allocating blocks for too many distinct object sizes,
the collector normally does not directly allocate objects of every possible
request size. Instead, the request is rounded up to one of a smaller number
of allocated sizes, for which free lists are maintained. The exact
allocated sizes are computed on demand, but subject to the constraint
that they increase roughly in geometric progression. Thus objects
requested early in the execution are likely to be allocated with exactly
the requested size, subject to alignment constraints.
See GC_init_size_map for details.
The actual size rounding operation during small object allocation is
implemented as a table lookup in GC_size_map which maps
a requested allocation size in bytes to a number of granules.
Both collector initialization and computation of allocated sizes are
handled carefully so that they do not slow down the small object fast
allocation path. An attempt to allocate before the collector is initialized,
or before the appropriate GC_size_map entry is computed,
will take the same path as an allocation attempt with an empty free list.
This results in a call to the slow path code (GC_generic_malloc_inner)
which performs the appropriate initialization checks.
In non-incremental mode, we make a decision about whether to garbage collect
whenever an allocation would otherwise have failed with the current heap size.
If the total amount of allocation since the last collection is less than
the heap size divided by GC_free_space_divisor, we try to
expand the heap. Otherwise, we initiate a garbage collection. This ensures
that the amount of garbage collection work per allocated byte remains
constant.
The above is in fact an oversimplification of the real heap expansion
and GC triggering heuristic, which adjusts slightly for root size
and certain kinds of
fragmentation. In particular:
- Programs with a large root set size and
little live heap memory will expand the heap to amortize the cost of
scanning the roots.
- Versions 5.x of the collector actually collect more frequently in
nonincremental mode. The large block allocator usually refuses to split
large heap blocks once the garbage collection threshold is
reached. This often has the effect of collecting well before the
heap fills up, thus reducing fragmentation and working set size at the
expense of GC time. Versions 6.x choose an intermediate strategy depending
on how much large object allocation has taken place in the past.
(If the collector is configured to unmap unused pages, versions 6.x
use the 5.x strategy.)
- In calculating the amount of allocation since the last collection we
give partial credit for objects we expect to be explicitly deallocated.
Even if all objects are explicitly managed, it is often desirable to collect
on rare occasion, since that is our only mechanism for coalescing completely
empty chunks.
It has been suggested that this should be adjusted so that we favor
expansion if the resulting heap still fits into physical memory.
In many cases, that would no doubt help. But it is tricky to do this
in a way that remains robust if multiple application are contending
for a single pool of physical memory.
Mark phase
At each collection, the collector marks all objects that are
possibly reachable from pointer variables. Since it cannot generally
tell where pointer variables are located, it scans the following
root segments for pointers:
- The registers. Depending on the architecture, this may be done using
assembly code, or by calling a setjmp-like function which saves
register contents on the stack.
- The stack(s). In the case of a single-threaded application,
on most platforms this
is done by scanning the memory between (an approximation of) the current
stack pointer and GC_stackbottom. (For Itanium, the register stack
scanned separately.) The GC_stackbottom variable is set in
a highly platform-specific way depending on the appropriate configuration
information in gcconfig.h. Note that the currently active
stack needs to be scanned carefully, since callee-save registers of
client code may appear inside collector stack frames, which may
change during the mark process. This is addressed by scanning
some sections of the stack "eagerly", effectively capturing a snapshot
at one point in time.
- Static data region(s). In the simplest case, this is the region
between DATASTART and DATAEND, as defined in
gcconfig.h. However, in most cases, this will also involve
static data regions associated with dynamic libraries. These are
identified by the mostly platform-specific code in dyn_load.c.
The marker maintains an explicit stack of memory regions that are known
to be accessible, but that have not yet been searched for contained pointers.
Each stack entry contains the starting address of the block to be scanned,
as well as a descriptor of the block. If no layout information is
available for the block, then the descriptor is simply a length.
(For other possibilities, see gc_mark.h.)
At the beginning of the mark phase, all root segments
(as described above) are pushed on the
stack by GC_push_roots. (Registers and eagerly processed
stack sections are processed by pushing the referenced objects instead
of the stack section itself.) If ALL_INTERIOR_POINTERS is not
defined, then stack roots require special treatment. In this case, the
normal marking code ignores interior pointers, but GC_push_all_stack
explicitly checks for interior pointers and pushes descriptors for target
objects.
The marker is structured to allow incremental marking.
Each call to GC_mark_some performs a small amount of
work towards marking the heap.
It maintains
explicit state in the form of GC_mark_state, which
identifies a particular sub-phase. Some other pieces of state, most
notably the mark stack, identify how much work remains to be done
in each sub-phase. The normal progression of mark states for
a stop-the-world collection is:
- MS_INVALID indicating that there may be accessible unmarked
objects. In this case GC_objects_are_marked will simultaneously
be false, so the mark state is advanced to
- MS_PUSH_UNCOLLECTABLE indicating that it suffices to push
uncollectible objects, roots, and then mark everything reachable from them.
Scan_ptr is advanced through the heap until all uncollectible
objects are pushed, and objects reachable from them are marked.
At that point, the next call to GC_mark_some calls
GC_push_roots to push the roots. It the advances the
mark state to
- MS_ROOTS_PUSHED asserting that once the mark stack is
empty, all reachable objects are marked. Once in this state, we work
only on emptying the mark stack. Once this is completed, the state
changes to
- MS_NONE indicating that reachable objects are marked.
The core mark routine GC_mark_from, is called
repeatedly by several of the sub-phases when the mark stack starts to fill
up. It is also called repeatedly in MS_ROOTS_PUSHED state
to empty the mark stack.
The routine is designed to only perform a limited amount of marking at
each call, so that it can also be used by the incremental collector.
It is fairly carefully tuned, since it usually consumes a large majority
of the garbage collection time.
The fact that it performs only a small amount of work per call also
allows it to be used as the core routine of the parallel marker. In that
case it is normally invoked on thread-private mark stacks instead of the
global mark stack. More details can be found in
scale.html
The marker correctly handles mark stack overflows. Whenever the mark stack
overflows, the mark state is reset to MS_INVALID.
Since there are already marked objects in the heap,
this eventually forces a complete
scan of the heap, searching for pointers, during which any unmarked objects
referenced by marked objects are again pushed on the mark stack. This
process is repeated until the mark phase completes without a stack overflow.
Each time the stack overflows, an attempt is made to grow the mark stack.
All pieces of the collector that push regions onto the mark stack have to be
careful to ensure forward progress, even in case of repeated mark stack
overflows. Every mark attempt results in additional marked objects.
Each mark stack entry is processed by examining all candidate pointers
in the range described by the entry. If the region has no associated
type information, then this typically requires that each 4-byte aligned
quantity (8-byte aligned with 64-bit pointers) be considered a candidate
pointer.
We determine whether a candidate pointer is actually the address of
a heap block. This is done in the following steps:
The candidate pointer is checked against rough heap bounds.
These heap bounds are maintained such that all actual heap objects
fall between them. In order to facilitate black-listing (see below)
we also include address regions that the heap is likely to expand into.
Most non-pointers fail this initial test.
The candidate pointer is divided into two pieces; the most significant
bits identify a HBLKSIZE-sized page in the address space, and
the least significant bits specify an offset within that page.
(A hardware page may actually consist of multiple such pages.
HBLKSIZE is usually the page size divided by a small power of two.)
The page address part of the candidate pointer is looked up in a
table.
Each table entry contains either 0, indicating that the page is not part
of the garbage collected heap, a small integer n, indicating
that the page is part of large object, starting at least n pages
back, or a pointer to a descriptor for the page. In the first case,
the candidate pointer i not a true pointer and can be safely ignored.
In the last two cases, we can obtain a descriptor for the page containing
the beginning of the object.
The starting address of the referenced object is computed.
The page descriptor contains the size of the object(s)
in that page, the object kind, and the necessary mark bits for those
objects. The size information can be used to map the candidate pointer
to the object starting address. To accelerate this process, the page header
also contains a pointer to a precomputed map of page offsets to displacements
from the beginning of an object. The use of this map avoids a
potentially slow integer remainder operation in computing the object
start address.
The mark bit for the target object is checked and set. If the object
was previously unmarked, the object is pushed on the mark stack.
The descriptor is read from the page descriptor. (This is computed
from information GC_obj_kinds when the page is first allocated.)
At the end of the mark phase, mark bits for left-over free lists are cleared,
in case a free list was accidentally marked due to a stray pointer.
Sweep phase
At the end of the mark phase, all blocks in the heap are examined.
Unmarked large objects are immediately returned to the large object free list.
Each small object page is checked to see if all mark bits are clear.
If so, the entire page is returned to the large object free list.
Small object pages containing some reachable object are queued for later
sweeping, unless we determine that the page contains very little free
space, in which case it is not examined further.
This initial sweep pass touches only block headers, not
the blocks themselves. Thus it does not require significant paging, even
if large sections of the heap are not in physical memory.
Nonempty small object pages are swept when an allocation attempt
encounters an empty free list for that object size and kind.
Pages for the correct size and kind are repeatedly swept until at
least one empty block is found. Sweeping such a page involves
scanning the mark bit array in the page header, and building a free
list linked through the first words in the objects themselves.
This does involve touching the appropriate data page, but in most cases
it will be touched only just before it is used for allocation.
Hence any paging is essentially unavoidable.
Except in the case of pointer-free objects, we maintain the invariant
that any object in a small object free list is cleared (except possibly
for the link field). Thus it becomes the burden of the small object
sweep routine to clear objects. This has the advantage that we can
easily recover from accidentally marking a free list, though that could
also be handled by other means. The collector currently spends a fair
amount of time clearing objects, and this approach should probably be
revisited.
In most configurations, we use specialized sweep routines to handle common
small object sizes. Since we allocate one mark bit per word, it becomes
easier to examine the relevant mark bits if the object size divides
the word length evenly. We also suitably unroll the inner sweep loop
in each case. (It is conceivable that profile-based procedure cloning
in the compiler could make this unnecessary and counterproductive. I
know of no existing compiler to which this applies.)
The sweeping of small object pages could be avoided completely at the expense
of examining mark bits directly in the allocator. This would probably
be more expensive, since each allocation call would have to reload
a large amount of state (e.g. next object address to be swept, position
in mark bit table) before it could do its work. The current scheme
keeps the allocator simple and allows useful optimizations in the sweeper.
Finalization
Both GC_register_disappearing_link and
GC_register_finalizer add the request to a corresponding hash
table. The hash table is allocated out of collected memory, but
the reference to the finalizable object is hidden from the collector.
Currently finalization requests are processed non-incrementally at the
end of a mark cycle.
The collector makes an initial pass over the table of finalizable objects,
pushing the contents of unmarked objects onto the mark stack.
After pushing each object, the marker is invoked to mark all objects
reachable from it. The object itself is not explicitly marked.
This assures that objects on which a finalizer depends are neither
collected nor finalized.
If in the process of marking from an object the
object itself becomes marked, we have uncovered
a cycle involving the object. This usually results in a warning from the
collector. Such objects are not finalized, since it may be
unsafe to do so. See the more detailed
discussion of finalization semantics.
Any objects remaining unmarked at the end of this process are added to
a queue of objects whose finalizers can be run. Depending on collector
configuration, finalizers are dequeued and run either implicitly during
allocation calls, or explicitly in response to a user request.
(Note that the former is unfortunately both the default and not generally safe.
If finalizers perform synchronization, it may result in deadlocks.
Nontrivial finalizers generally need to perform synchronization, and
thus require a different collector configuration.)
The collector provides a mechanism for replacing the procedure that is
used to mark through objects. This is used both to provide support for
Java-style unordered finalization, and to ignore certain kinds of cycles,
e.g. those arising from C++ implementations of virtual inheritance.
Generational Collection and Dirty Bits
We basically use the concurrent and generational GC algorithm described in
"Mostly Parallel Garbage Collection",
by Boehm, Demers, and Shenker.
The most significant modification is that
the collector always starts running in the allocating thread.
There is no separate garbage collector thread. (If parallel GC is
enabled, helper threads may also be woken up.)
If an allocation attempt either requests a large object, or encounters
an empty small object free list, and notices that there is a collection
in progress, it immediately performs a small amount of marking work
as described above.
This change was made both because we wanted to easily accommodate
single-threaded environments, and because a separate GC thread requires
very careful control over the scheduler to prevent the mutator from
out-running the collector, and hence provoking unneeded heap growth.
In incremental mode, the heap is always expanded when we encounter
insufficient space for an allocation. Garbage collection is triggered
whenever we notice that more than
GC_heap_size/2 * GC_free_space_divisor
bytes of allocation have taken place.
After GC_full_freq minor collections a major collection
is started.
All collections initially run uninterrupted until a predetermined
amount of time (50 msecs by default) has expired. If this allows
the collection to complete entirely, we can avoid correcting
for data structure modifications during the collection. If it does
not complete, we return control to the mutator, and perform small
amounts of additional GC work during those later allocations that
cannot be satisfied from small object free lists. When marking completes,
the set of modified pages is retrieved, and we mark once again from
marked objects on those pages, this time with the mutator stopped.
We keep track of modified pages using one of several distinct mechanisms:
-
Through explicit mutator cooperation. Currently this requires
the use of GC_malloc_stubborn, and is rarely used.
-
(MPROTECT_VDB) By write-protecting physical pages and
catching write faults. This is
implemented for many Unix-like systems and for win32. It is not possible
in a few environments.
-
(GWW_VDB) By using the Win32 GetWriteWatch function to read dirty
bits.
-
(PROC_VDB) By retrieving dirty bit information from /proc.
(Currently only Sun's
Solaris supports this. Though this is considerably cleaner, performance
may actually be better with mprotect and signals.)
-
(PCR_VDB) By relying on an external dirty bit implementation, in this
case the one in Xerox PCR.
-
(DEFAULT_VDB) By treating all pages as dirty. This is the default if
none of the other techniques is known to be usable, and
GC_malloc_stubborn is not used. Practical only for testing, or if
the vast majority of objects use GC_malloc_stubborn.
Black-listing
The collector implements black-listing of pages, as described
in
Boehm, ``Space Efficient Conservative Collection'', PLDI '93, also available
here.
During the mark phase, the collector tracks ``near misses'', i.e. attempts
to follow a ``pointer'' to just outside the garbage-collected heap, or
to a currently unallocated page inside the heap. Pages that have been
the targets of such near misses are likely to be the targets of
misidentified ``pointers'' in the future. To minimize the future
damage caused by such misidentification, they will be allocated only to
small pointer-free objects.
The collector understands two different kinds of black-listing. A
page may be black listed for interior pointer references
(GC_add_to_black_list_stack), if it was the target of a near
miss from a location that requires interior pointer recognition,
e.g. the stack, or the heap if GC_all_interior_pointers
is set. In this case, we also avoid allocating large blocks that include
this page.
If the near miss came from a source that did not require interior
pointer recognition, it is black-listed with
GC_add_to_black_list_normal.
A page black-listed in this way may appear inside a large object,
so long as it is not the first page of a large object.
The GC_allochblk routine respects black-listing when assigning
a block to a particular object kind and size. It occasionally
drops (i.e. allocates and forgets) blocks that are completely black-listed
in order to avoid excessively long large block free lists containing
only unusable blocks. This would otherwise become an issue
if there is low demand for small pointer-free objects.
Thread support
We support several different threading models. Unfortunately Pthreads,
the only reasonably well standardized thread model, supports too narrow
an interface for conservative garbage collection. There appears to be
no completely portable way to allow the collector
to coexist with various Pthreads
implementations. Hence we currently support only the more
common Pthreads implementations.
In particular, it is very difficult for the collector to stop all other
threads in the system and examine the register contents. This is currently
accomplished with very different mechanisms for some Pthreads
implementations. For Linux/HPUX/OSF1, Solaris and Irix it sends signals to
individual Pthreads and has them wait in the signal handler.
The Linux and Irix implementations use
only documented Pthreads calls, but rely on extensions to their semantics.
The Linux implementation pthread_stop_world.c relies on only very
mild extensions to the pthreads semantics, and already supports a large number
of other Unix-like pthreads implementations. Our goal is to make this the
only pthread support in the collector.
All implementations must
intercept thread creation and a few other thread-specific calls to allow
enumeration of threads and location of thread stacks. This is current
accomplished with # define's in gc.h
(really gc_pthread_redirects.h), or optionally
by using ld's function call wrapping mechanism under Linux.
Recent versions of the collector support several facilities to enhance
the processor-scalability and thread performance of the collector.
These are discussed in more detail here.
We briefly outline the data approach to thread-local allocation in the
next section.
Thread-local allocation
If thread-local allocation is enabled, the collector keeps separate
arrays of free lists for each thread. Thread-local allocation
is currently only supported on a few platforms.
The free list arrays associated
with each thread are only used to satisfy requests for objects that
are both very small, and belong to one of a small number of well-known
kinds. These currently include "normal" and pointer-free objects.
Depending on the configuration, "gcj" objects may also be included.
Thread-local free list entries contain either a pointer to the first
element of a free list, or they contain a counter of the number of
allocation granules, corresponding to objects of this size,
allocated so far. Initially they contain the
value one, i.e. a small counter value.
Thread-local allocation allocates directly through the global
allocator, if the object is of a size or kind not covered by the
local free lists.
If there is an appropriate local free list, the allocator checks whether it
contains a sufficiently small counter value. If so, the counter is simply
incremented by the counter value, and the global allocator is used.
In this way, the initial few allocations of a given size bypass the local
allocator. A thread that only allocates a handful of objects of a given
size will not build up its own free list for that size. This avoids
wasting space for unpopular objects sizes or kinds.
Once the counter passes a threshold, GC_malloc_many is called
to allocate roughly HBLKSIZE space and put it on the corresponding
local free list. Further allocations of that size and kind then use
this free list, and no longer need to acquire the allocation lock.
The allocation procedure is otherwise similar to the global free lists.
The local free lists are also linked using the first word in the object.
In most cases this means they require considerably less time.
Local free lists are treated buy most of the rest of the collector
as though they were in-use reachable data. This requires some care,
since pointer-free objects are not normally traced, and hence a special
tracing procedure is required to mark all objects on pointer-free and
gcj local free lists.
On thread exit, any remaining thread-local free list entries are
transferred back to the global free list.
Note that if the collector is configured for thread-local allocation,
GC versions before 7 do not invoke the thread-local allocator by default.
GC_malloc only uses thread-local allocation in version 7 and later.
For some more details see here, and the
technical report entitled
"Fast Multiprocessor Memory Allocation and Garbage Collection"
Gauche-0.9.6/gc/doc/README.arm.cross 0000664 0000764 0000764 00000003534 13074101475 015721 0 ustar shiro shiro From: Margaret Fleck
Here's the key details of what worked for me, in case anyone else needs them.
There may well be better ways to do some of this, but ....
-- Margaret
The badge4 has a StrongArm-1110 processor and a StrongArm-1111 coprocessor.
Assume that the garbage collector distribution is unpacked into /home/arm/gc6.0,
which is visible to both the ARM machine and a linux desktop (e.g. via NFS mounting).
Assume that you have a file /home/arm/config.site with contents something like the
example attached below. Notice that our local ARM toolchain lives in
/skiff/local.
Go to /home/arm/gc6.0 directory. Do
CONFIG_SITE=/home/arm/config.site ./configure --target=arm-linux
--prefix=/home/arm/gc6.0
On your desktop, do:
make
make install
The main garbage collector library should now be in ../gc6.0/lib/libgc.so.
To test the garbage collector, first do the following on your desktop
make gctest
./gctest
Then do the following on the ARM machine
cd .libs
./lt-gctest
Do not try to do "make test" (the usual way of running the test
program). This does not work and seems to erase some of the important
files.
The gctest program claims to have succeeded. Haven't run any further tests
with it, though I'll be doing so in the near future.
-------------------------------
# config.site for configure
HOSTCC=gcc
# Names of the cross-compilers
CC=/skiff/local/bin/arm-linux-gcc
CXX=/skiff/local/bin/arm-linux-gcc
# The cross compiler specific options
CFLAGS="-O2 -fno-exceptions"
CXXFLAGS="-O2 -fno-exceptions"
CPPFLAGS="-O2 -fno-exceptions"
LDFLAGS=""
# Some other programs
AR=/skiff/local/bin/arm-linux-ar
RANLIB=/skiff/local/bin/arm-linux-ranlib
NM=/skiff/local/bin/arm-linux-nm
ac_cv_path_NM=/skiff/local/bin/arm-linux-nm
ac_cv_func_setpgrp_void=yes
x_includes=/skiff/local/arm-linux/include/X11
x_libraries=/skiff/local/arm-linux/lib/X11
Gauche-0.9.6/gc/doc/README.environment 0000664 0000764 0000764 00000024626 13074101475 016363 0 ustar shiro shiro The garbage collector looks at a number of environment variables which are,
then, used to affect its operation.
GC_INITIAL_HEAP_SIZE= - Initial heap size in bytes. May speed up
process start-up. Optionally, may be
specified with a multiplier ('k', 'M' or 'G')
suffix.
GC_MAXIMUM_HEAP_SIZE= - Maximum collected heap size. Allows
a multiplier suffix.
GC_LOOP_ON_ABORT - Causes the collector abort routine to enter a tight loop.
This may make it easier to debug, such a process, especially
for multi-threaded platforms that don't produce usable core
files, or if a core file would be too large. On some
platforms, this also causes SIGSEGV to be caught and
result in an infinite loop in a handler, allowing
similar debugging techniques.
GC_PRINT_STATS - Turn on GC logging. Not functional with SMALL_CONFIG.
GC_LOG_FILE - The name of the log file. Stderr by default. Not functional
with SMALL_CONFIG.
GC_ONLY_LOG_TO_FILE - Turns off redirection of GC stdout and stderr to the log
file specified by GC_LOG_FILE. Has no effect unless
GC_LOG_FILE is set. Not functional with SMALL_CONFIG.
GC_PRINT_VERBOSE_STATS - Turn on even more logging. Not functional with
SMALL_CONFIG.
GC_DUMP_REGULARLY - Generate a GC debugging dump GC_dump() on startup
and during every collection. Very verbose. Useful
if you have a bug to report, but please include only the
last complete dump.
GC_COLLECT_AT_MALLOC= - Override the default value specified by
GC_COLLECT_AT_MALLOC macro. Has no effect unless
GC is built with GC_COLLECT_AT_MALLOC defined.
GC_BACKTRACES= - Generate n random back-traces (for heap profiling) after
each GC. Collector must have been built with
KEEP_BACK_PTRS. This won't generate useful output unless
most objects in the heap were allocated through debug
allocators. This is intended to be only a statistical
sample; individual traces may be erroneous due to
concurrent heap mutation.
GC_PRINT_ADDRESS_MAP - Linux only. Dump /proc/self/maps, i.e. various address
maps for the process, to stderr on every GC. Useful for
mapping root addresses to source for deciphering leak
reports.
GC_NPROCS= - Linux w/threads only. Explicitly sets the number of processors
that the GC should expect to use. Note that setting this to 1
when multiple processors are available will preserve
correctness, but may lead to really horrible performance,
since the lock implementation will immediately yield without
first spinning.
GC_MARKERS= - Only if compiled with PARALLEL_MARK. Set the number
of marker threads. This is normally set to the number of
processors. It is safer to adjust GC_MARKERS than GC_NPROCS,
since GC_MARKERS has no impact on the lock implementation.
GC_NO_BLACKLIST_WARNING - Prevents the collector from issuing
warnings about allocations of very large blocks.
Deprecated. Use GC_LARGE_ALLOC_WARN_INTERVAL instead.
GC_LARGE_ALLOC_WARN_INTERVAL= - Print every nth warning about very large
block allocations, starting with the nth one. Small values
of n are generally benign, in that a bounded number of
such warnings generally indicate at most a bounded leak.
For best results it should be set at 1 during testing.
Default is 5. Very large numbers effectively disable the
warning.
GC_IGNORE_GCJ_INFO - Ignore the type descriptors implicitly supplied by
GC_gcj_malloc and friends. This is useful for debugging
descriptor generation problems, and possibly for
temporarily working around such problems. It forces a
fully conservative scan of all heap objects except
those known to be pointer-free, and may thus have other
adverse effects.
GC_PRINT_BACK_HEIGHT - Print max length of chain through unreachable objects
ending in a reachable one. If this number remains
bounded, then the program is "GC robust". This ensures
that a fixed number of misidentified pointers can only
result in a bounded space leak. This currently only
works if debugging allocation is used throughout.
It increases GC space and time requirements appreciably.
This feature is still somewhat experimental, and requires
that the collector have been built with MAKE_BACK_GRAPH
defined. For details, see Boehm, "Bounding Space Usage
of Conservative Garbage Collectors", POPL 2001
(http://www.hpl.hp.com/techreports/2001/HPL-2001-251.html).
GC_RETRY_SIGNALS, GC_NO_RETRY_SIGNALS - Try to compensate for lost
thread suspend signals (Pthreads only). On by
default for GC_OSF1_THREADS, off otherwise. Note
that this does not work around a possible loss of
thread restart signals. This seems to be necessary for
some versions of Tru64. Since we've previously seen
similar issues on some other operating systems, it
was turned into a runtime flag to enable last-minute
work-arounds.
GC_USE_GETWRITEWATCH= - Only if MPROTECT_VDB and GWW_VDB are both defined
(Win32 only). Explicitly specify which strategy of
keeping track of dirtied pages should be used.
If n=0 then GetWriteWatch() is not used (falling back to
protecting pages and catching memory faults strategy)
else the collector tries to use GetWriteWatch-based
strategy (GWW_VDB) first if available.
GC_DISABLE_INCREMENTAL - Ignore runtime requests to enable incremental GC.
Useful for debugging.
The following turn on runtime flags that are also program settable. Checked
only during initialization. We expect that they will usually be set through
other means, but this may help with debugging and testing:
GC_ENABLE_INCREMENTAL - Turn on incremental collection at startup. Note that,
depending on platform and collector configuration, this
may involve write protecting pieces of the heap to
track modifications. These pieces may include
pointer-free objects or not. Although this is intended
to be transparent, it may cause unintended system call
failures. Use with caution.
GC_PAUSE_TIME_TARGET - Set the desired garbage collector pause time in msecs.
This only has an effect if incremental collection is
enabled. If a collection requires appreciably more time
than this, the client will be restarted, and the collector
will need to do additional work to compensate. The
special value "999999" indicates that pause time is
unlimited, and the incremental collector will behave
completely like a simple generational collector. If
the collector is configured for parallel marking, and
run on a multiprocessor, incremental collection should
only be used with unlimited pause time.
GC_FULL_FREQUENCY - Set the desired number of partial collections between full
collections. Matters only if GC_incremental is set.
Not functional with SMALL_CONFIG.
GC_FREE_SPACE_DIVISOR - Set GC_free_space_divisor to the indicated value.
Setting it to larger values decreases space consumption
and increases GC frequency.
GC_UNMAP_THRESHOLD - Set the desired memory blocks unmapping threshold (the
number of sequential garbage collections for which
a candidate block for unmapping should remain free). The
special value "0" completely disables unmapping.
GC_FORCE_UNMAP_ON_GCOLLECT - Turn "unmap as much as possible on explicit GC"
mode on (overrides the default value). Has no effect on
implicitly-initiated garbage collections. Has no effect if
memory unmapping is disabled (or not compiled in) or if the
unmapping threshold is 1.
GC_FIND_LEAK - Turns on GC_find_leak and thus leak detection. Forces a
collection at program termination to detect leaks that would
otherwise occur after the last GC.
GC_FINDLEAK_DELAY_FREE - Turns on deferred freeing of objects in the
leak-finding mode (see the corresponding macro
description for more information).
GC_ABORT_ON_LEAK - Causes the application to be terminated once leaked or
smashed objects are found.
GC_ALL_INTERIOR_POINTERS - Turns on GC_all_interior_pointers and thus interior
pointer recognition.
GC_DONT_GC - Turns off garbage collection. Use cautiously.
GC_USE_ENTIRE_HEAP - Set desired GC_use_entire_heap value at start-up. See
the similar macro description in README.macros.
GC_TRACE=addr - Intended for collector debugging. Requires that the collector
have been built with ENABLE_TRACE defined. Causes the debugger
to log information about the tracing of address ranges
containing addr. Typically addr is the address that contains
a pointer to an object that mysteriously failed to get marked.
Addr must be specified as a hexadecimal integer.
Gauche-0.9.6/gc/doc/overview.html 0000664 0000764 0000764 00000050512 13227007433 015664 0 ustar shiro shiro
A garbage collector for C and C++
A garbage collector for C and C++
[ This is an updated version of the page formerly at
www.hpl.hp.com/personal/Hans_Boehm/gc/,
before that at
http://reality.sgi.com/boehm/gc.html
and before that at
ftp://ftp.parc.xerox.com/pub/gc/gc.html. ]
The Boehm-Demers-Weiser
conservative Garbage Collector (BDWGC) can
be used as a garbage collecting
replacement for C malloc or C++ new.
It allows you to allocate memory basically as you normally would,
without explicitly deallocating memory that is no longer useful.
The collector automatically recycles memory when it determines
that it can no longer be otherwise accessed.
A simple example of such a use is given
here.
The collector is also used by a number of programming language
implementations that either use C as intermediate code, want
to facilitate easier interoperation with C libraries, or
just prefer the simple collector interface.
For a more detailed description of the interface, see
here.
Alternatively, the garbage collector may be used as
a leak detector
for C or C++ programs, though that is not its primary goal.
Typically several versions are offered for
downloading:
preview, stable, legacy.
Usually you should use the one marked as the latest stable release.
Preview versions may contain additional features, platform support,
but are likely to be less well tested.
The list of changes for each version is specified on the
releases page.
The arguments for and against conservative garbage collection
in C and C++ are briefly
discussed in
issues.html.
The beginnings of a frequently-asked-questions list are
here.
The garbage collector code is copyrighted by
Hans-J. Boehm,
Alan J. Demers,
Xerox Corporation,
Silicon Graphics,
and
Hewlett-Packard Company.
It may be used and copied without payment of a fee under minimal restrictions.
See the README file in the distribution or the
license for more details.
IT IS PROVIDED AS IS,
WITH ABSOLUTELY NO WARRANTY EXPRESSED OR IMPLIED. ANY USE IS AT YOUR OWN RISK.
Empirically, this collector works with most unmodified C programs,
simply by replacing
malloc with GC_malloc calls,
replacing realloc with GC_realloc calls, and removing
free calls. Exceptions are discussed
in issues.html.
The collector is not completely portable, but the distribution
includes ports to most standard PC and UNIX/Linux platforms.
The collector should work on Linux, *BSD, recent Windows versions,
MacOS X, HP/UX, Solaris,
Tru64, Irix and a few other operating systems.
Some ports are more polished than others.
Irix pthreads, Linux threads, Win32 threads, Solaris threads
(pthreads only),
HP/UX 11 pthreads, Tru64 pthreads, and MacOS X threads are supported
in recent versions.
Separately distributed ports
For MacOS 9/Classic use, Patrick Beard's latest port is available from
http://homepage.mac.com/pcbeard/gc/.
(Unfortunately, that's now quite dated.
I'm not in a position to test under MacOS. Although I try to
incorporate changes, it is impossible for
me to update the project file.)
Precompiled versions of the collector for NetBSD are available
here.
Debian Linux includes prepackaged
versions of the collector.
Kenjiro Taura, Toshio Endo, and Akinori Yonezawa have made available
a parallel collector
based on this one. Their collector takes advantage of multiple processors
during a collection. Starting with collector version 6.0alpha1
we also do this, though with more modest processor scalability goals.
Our approach is discussed briefly in
scale.html.
The collector uses a mark-sweep algorithm.
It provides incremental and generational
collection under operating systems which provide the right kind of
virtual memory support. (Currently this includes SunOS[45], IRIX,
OSF/1, Linux, and Windows, with varying restrictions.)
It allows finalization code
to be invoked when an object is collected.
It can take advantage of type information to locate pointers if such
information is provided, but it is usually used without such information.
See the README and
gc.h files in the distribution for more details.
For an overview of the implementation, see here.
The garbage collector distribution includes a C string
(cord) package that provides
for fast concatenation and substring operations on long strings.
A simple curses- and win32-based editor that represents the entire file
as a cord is included as a
sample application.
Performance of the nonincremental collector is typically competitive
with malloc/free implementations. Both space and time overhead are
likely to be only slightly higher
for programs written for malloc/free
(see Detlefs, Dosser and Zorn's
Memory Allocation Costs in Large C and C++ Programs.)
For programs allocating primarily very small objects, the collector
may be faster; for programs allocating primarily large objects it will
be slower. If the collector is used in a multi-threaded environment
and configured for thread-local allocation, it may in some cases
significantly outperform malloc/free allocation in time.
We also expect that in many cases any additional overhead
will be more than compensated for by decreased copying etc.
if programs are written
and tuned for garbage collection.
The beginnings of a frequently asked questions list for this
collector are here.
The following provide information on garbage collection in general:
Paul Wilson's garbage collection ftp archive and GC survey.
The Ravenbrook
Memory Management Reference.
David Chase's
GC FAQ.
Richard Jones'
Garbage Collection Page and
his book.
The following papers describe the collector algorithms we use
and the underlying design decisions at
a higher level.
(Some of the lower level details can be found
here.)
The first one is not available
electronically due to copyright considerations. Most of the others are
subject to ACM copyright.
Boehm, H., "Dynamic Memory Allocation and Garbage Collection", Computers in Physics
9, 3, May/June 1995, pp. 297-303. This is directed at an otherwise sophisticated
audience unfamiliar with memory allocation issues. The algorithmic details differ
from those in the implementation. There is a related letter to the editor and a minor
correction in the next issue.
Boehm, H., and M. Weiser,
"Garbage Collection in an Uncooperative Environment",
Software Practice & Experience, September 1988, pp. 807-820.
Boehm, H., A. Demers, and S. Shenker, "Mostly Parallel Garbage Collection",
Proceedings of the ACM SIGPLAN '91 Conference on Programming Language Design and Implementation,
SIGPLAN Notices 26, 6 (June 1991), pp. 157-164.
Boehm, H., "Space Efficient Conservative Garbage Collection",
Proceedings of the ACM SIGPLAN '93 Conference on Programming Language Design
and Implementation, SIGPLAN Notices 28, 6 (June 1993), pp. 197-206.
Boehm, H., "Reducing Garbage Collector Cache Misses",
Proceedings of the 2000 International Symposium on Memory Management .
Official version.
Technical report version. Describes the prefetch strategy
incorporated into the collector for some platforms. Explains why
the sweep phase of a "mark-sweep" collector should not really be
a distinct phase.
M. Serrano, H. Boehm,
"Understanding Memory Allocation of Scheme Programs",
Proceedings of the Fifth ACM SIGPLAN International Conference on
Functional Programming, 2000, Montreal, Canada, pp. 245-256.
Official version.
Earlier Technical Report version. Includes some discussion of the
collector debugging facilities for identifying causes of memory retention.
Boehm, H.,
"Fast Multiprocessor Memory Allocation and Garbage Collection",
HP Labs Technical Report HPL 2000-165. Discusses the parallel
collection algorithms, and presents some performance results.
Boehm, H., "Bounding Space Usage of Conservative Garbage Collectors",
Proceedings of the 2002 ACM SIGPLAN-SIGACT Symposium on Principles of
Programming Languages, Jan. 2002, pp. 93-100.
Official version.
Technical report version.
Includes a discussion of a collector facility to much more reliably test for
the potential of unbounded heap growth.
The following papers discuss language and compiler restrictions necessary to guaranteed
safety of conservative garbage collection.
We thank John Levine and JCLT for allowing
us to make the second paper available electronically, and providing PostScript for the final
version.
Boehm, H., "Simple Garbage-Collector-Safety",
Proceedings of the ACM SIGPLAN '96 Conference on Programming Language Design
and Implementation.
Boehm, H., and D. Chase, "A Proposal for Garbage-Collector-Safe C Compilation",
Journal of C Language Translation 4, 2 (December 1992), pp. 126-141.
Other related information:
The Detlefs, Dosser and Zorn's Memory Allocation Costs in Large C and C++ Programs.
This is a performance comparison of the Boehm-Demers-Weiser collector to malloc/free,
using programs written for malloc/free.
Joel Bartlett's mostly copying conservative garbage collector for C++.
John Ellis and David Detlef's
Safe Efficient Garbage Collection for C++
proposal.
Henry Baker's paper collection.
Slides for Hans Boehm's Allocation and GC Myths talk.
Known current users of some variant of this collector include:
The runtime system for
GCJ,
the static GNU java compiler.
W3m, a text-based web browser.
Some versions of the Xerox DocuPrint printer software.
The Mozilla project, as leak
detector.
The Mono project,
an open source implementation of the .NET development framework.
The DotGNU Portable.NET
project, another open source .NET implementation.
The Irssi IRC client.
The Berkeley Titanium project.
The NAGWare f90 Fortran 90 compiler.
Elwood Corporation's Eclipse Common Lisp system, C library, and translator.
The Bigloo Scheme
and Camloo ML compilers
written by Manuel Serrano and others.
Brent Benson's libscheme.
The MzScheme scheme implementation.
The University of Washington Cecil Implementation.
The Berkeley Sather implementation.
The Berkeley Harmonia Project.
The Toba Java Virtual
Machine to C translator.
The Gwydion Dylan compiler.
The
GNU Objective C runtime.
Macaulay 2, a system to support
research in algebraic geometry and commutative algebra.
The Vesta configuration management
system.
Visual Prolog 6.
Asymptote LaTeX-compatible
vector graphics language.
A simple illustration of how to build and
use the collector.
Description of alternate interfaces to the
garbage collector.
Slides from an ISMM 2004 tutorial about the GC.
A FAQ (frequently asked questions) list.
How to use the garbage collector as a leak detector.
Some hints on debugging garbage collected
applications.
An overview of the implementation of the
garbage collector.
The data structure used for fast pointer lookups.
Scalability of the collector to multiprocessors.
Directory containing
the distribution files of all garbage collector releases.
It duplicates
Download page on
GitHub.
An attempt to establish a bound on space usage of
conservative garbage collectors.
Mark-sweep versus copying garbage collectors
and their complexity.
Pros and cons of conservative garbage collectors,
in comparison to other collectors.
Issues related to garbage collection vs.
manual memory management in C/C++.
An example of a case in which garbage collection
results in a much faster implementation as a result of reduced synchronization.
Slide set discussing performance of nonmoving
garbage collectors.
Slide set discussing Destructors, Finalizers, and Synchronization
(POPL 2003).
Paper corresponding to above slide set
(
Technical Report version).
A Java/Scheme/C/C++ garbage collection benchmark.
Slides for talk on memory allocation myths.
Slides for OOPSLA 98 garbage collection talk.
Related papers.
GitHub and Stack Overflow are the major two places for communication.
Technical questions (how to, how does it work, etc.) should be posted to
Stack Overflow
with "boehm-gc" tag.
To contribute, please rebase your code to the latest
master and submit
a pull request to GitHub.
To report a bug, or propose (request) a new feature, create
a GitHub issue.
Please make sure it has not been reported yet by someone else.
To receive notifications on every release, please subscribe to
Releases RSS feed.
Notifications on all issues and pull requests are available by
watching the project.
Mailing lists (bdwgc-announce@lists.opendylan.org, bdwgc@lists.opendylan.org,
and the former gc-announce@linux.hpl.hp.com and gc@linux.hpl.hp.com) are not
used at this moment. Their content is available in
bdwgc-announce
and
bdwgc
archive files, respectively.
The gc list archive may also be read at
Narkive.
Some prior discussion of the collector has taken place on the gcc
java mailing list, whose archives appear
here, and also on
gclist@iecc.com.
Gauche-0.9.6/gc/doc/leak.html 0000664 0000764 0000764 00000021516 13227007433 014734 0 ustar shiro shiro
Using the Garbage Collector as Leak Detector
Using the Garbage Collector as Leak Detector
The garbage collector may be used as a leak detector.
In this case, the primary function of the collector is to report
objects that were allocated (typically with GC_MALLOC),
not deallocated (normally with GC_FREE), but are
no longer accessible. Since the object is no longer accessible,
there in normally no way to deallocate the object at a later time;
thus it can safely be assumed that the object has been "leaked".
This is substantially different from counting leak detectors,
which simply verify that all allocated objects are eventually
deallocated. A garbage-collector based leak detector can provide
somewhat more precise information when an object was leaked.
More importantly, it does not report objects that are never
deallocated because they are part of "permanent" data structures.
Thus it does not require all objects to be deallocated at process
exit time, a potentially useless activity that often triggers
large amounts of paging.
All non-ancient versions of the garbage collector provide
leak detection support. Version 5.3 adds the following
features:
- Leak detection mode can be initiated at run-time by
setting GC_find_leak instead of building the
collector with FIND_LEAK
defined. This variable should be set to a nonzero value
at program startup.
- Leaked objects should be reported and then correctly garbage collected.
Prior versions either reported leaks or functioned as a garbage collector.
For the rest of this description we will give instructions that work
with any reasonable version of the collector.
To use the collector as a leak detector, follow the following steps:
- Build the collector with -DFIND_LEAK. Otherwise use default
build options.
- Change the program so that all allocation and deallocation goes
through the garbage collector.
- Arrange to call GC_gcollect at appropriate points to check
for leaks.
(For sufficiently long running programs, this will happen implicitly,
but probably not with sufficient frequency.)
The second step can usually be accomplished with the
-DREDIRECT_MALLOC=GC_malloc option when the collector is built,
or by defining malloc, calloc,
realloc and free
to call the corresponding garbage collector functions.
But this, by itself, will not yield very informative diagnostics,
since the collector does not keep track of information about
how objects were allocated. The error reports will include
only object addresses.
For more precise error reports, as much of the program as possible
should use the all uppercase variants of these functions, after
defining GC_DEBUG, and then including gc.h.
In this environment GC_MALLOC is a macro which causes
at least the file name and line number at the allocation point to
be saved as part of the object. Leak reports will then also include
this information.
Many collector features (e.g. stubborn objects, finalization,
and disappearing links) are less useful in this context, and are not
fully supported. Their use will usually generate additional bogus
leak reports, since the collector itself drops some associated objects.
The same is generally true of thread support. However, as of 6.0alpha4,
correct leak reports should be generated with linuxthreads.
On a few platforms (currently Solaris/SPARC, Irix, and, with -DSAVE_CALL_CHAIN,
Linux/X86), GC_MALLOC
also causes some more information about its call stack to be saved
in the object. Such information is reproduced in the error
reports in very non-symbolic form, but it can be very useful with the
aid of a debugger.
An Example
The following header file leak_detector.h is included in the
"include" subdirectory of the distribution:
#define GC_DEBUG
#include "gc.h"
#define malloc(n) GC_MALLOC(n)
#define calloc(m,n) GC_MALLOC((m)*(n))
#define free(p) GC_FREE(p)
#define realloc(p,n) GC_REALLOC((p),(n))
#define CHECK_LEAKS() GC_gcollect()
Assume the collector has been built with -DFIND_LEAK. (For
newer versions of the collector, we could instead add the statement
GC_find_leak = 1 as the first statement in main().
The program to be tested for leaks can then look like:
#include "leak_detector.h"
main() {
int *p[10];
int i;
/* GC_find_leak = 1; for new collector versions not */
/* compiled with -DFIND_LEAK. */
for (i = 0; i < 10; ++i) {
p[i] = malloc(sizeof(int)+i);
}
for (i = 1; i < 10; ++i) {
free(p[i]);
}
for (i = 0; i < 9; ++i) {
p[i] = malloc(sizeof(int)+i);
}
CHECK_LEAKS();
}
On an Intel X86 Linux system this produces on the stderr stream:
Leaked composite object at 0x806dff0 (leak_test.c:8, sz=4)
(On most unmentioned operating systems, the output is similar to this.
If the collector had been built on Linux/X86 with -DSAVE_CALL_CHAIN,
the output would be closer to the Solaris example. For this to work,
the program should not be compiled with -fomit_frame_pointer.)
On Irix it reports
Leaked composite object at 0x10040fe0 (leak_test.c:8, sz=4)
Caller at allocation:
##PC##= 0x10004910
and on Solaris the error report is
Leaked composite object at 0xef621fc8 (leak_test.c:8, sz=4)
Call chain at allocation:
args: 4 (0x4), 200656 (0x30FD0)
##PC##= 0x14ADC
args: 1 (0x1), -268436012 (0xEFFFFDD4)
##PC##= 0x14A64
In the latter two cases some additional information is given about
how malloc was called when the leaked object was allocated. For
Solaris, the first line specifies the arguments to GC_debug_malloc
(the actual allocation routine), The second the program counter inside
main, the third the arguments to main, and finally the program
counter inside the caller to main (i.e. in the C startup code).
In the Irix case, only the address inside the caller to main is given.
In many cases, a debugger is needed to interpret the additional information.
On systems supporting the "adb" debugger, the tools/callprocs.sh
script can be used to replace program counter values with symbolic names.
As of version 6.1, the collector tries to generate symbolic names for
call stacks if it knows how to do so on the platform. This is true on
Linux/X86, but not on most other platforms.
Simplified leak detection under Linux
Since version 6.1, it should be possible to run the collector in leak
detection mode on a program a.out under Linux/X86 as follows:
- Ensure that a.out is a single-threaded executable, or you are using
a very recent (7.0alpha7+) collector version on Linux.
On most platforms this does not work at all for the multi-threaded programs.
- If possible, ensure that the addr2line program is installed in
/usr/bin. (It comes with most Linux distributions.)
- If possible, compile your program, which we'll call a.out,
with full debug information.
This will improve the quality of the leak reports. With this approach, it is
no longer necessary to call GC_ routines explicitly,
though that can also
improve the quality of the leak reports.
- Build the collector and install it in directory foo as follows:
- configure --prefix=foo --enable-gc-debug --enable-redirect-malloc
--disable-threads
- make
- make install
With a very recent collector on Linux, it may sometimes be safe to omit
the --disable-threads. But the combination of thread support
and malloc replacement is not yet rock solid.
- Set environment variables as follows:
- LD_PRELOAD=foo/lib/libgc.so
- GC_FIND_LEAK
- You may also want to set GC_PRINT_STATS
(to confirm that the collector is running) and/or
GC_LOOP_ON_ABORT (to facilitate debugging from another
window if something goes wrong).
- Simply run a.out as you normally would. Note that if you run anything
else (e.g. your editor) with those environment variables set,
it will also be leak tested. This may or may not be useful and/or
embarrassing. It can generate
mountains of leak reports if the application wasn't designed to avoid leaks,
e.g. because it's always short-lived.
This has not yet been thoroughly tested on large applications, but it's known
to do the right thing on at least some small ones.
Gauche-0.9.6/gc/doc/doc.am 0000664 0000764 0000764 00000002742 13227007433 014216 0 ustar shiro shiro #
# THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY EXPRESSED
# OR IMPLIED. ANY USE IS AT YOUR OWN RISK.
#
# Permission is hereby granted to use or copy this program
# for any purpose, provided the above notices are retained on all copies.
# Permission to modify the code and to distribute modified code is granted,
# provided the above notices are retained, and a notice that the code was
# modified is included with the above copyright notice.
## Process this file with automake to produce Makefile.in.
# installed documentation
if ENABLE_DOCS
dist_doc_DATA = \
AUTHORS \
README.md \
doc/README.DGUX386 \
doc/README.Mac \
doc/README.OS2 \
doc/README.amiga \
doc/README.arm.cross \
doc/README.autoconf \
doc/README.cmake \
doc/README.cords \
doc/README.darwin \
doc/README.environment \
doc/README.ews4800 \
doc/README.hp \
doc/README.linux \
doc/README.macros \
doc/README.rs6000 \
doc/README.sgi \
doc/README.solaris2 \
doc/README.symbian \
doc/README.uts \
doc/README.win32 \
doc/README.win64 \
doc/debugging.html \
doc/finalization.html \
doc/gc.man \
doc/gcdescr.html \
doc/gcinterface.html \
doc/leak.html \
doc/overview.html \
doc/porting.html \
doc/scale.html \
doc/simple_example.html \
doc/tree.html
endif
Gauche-0.9.6/gc/doc/tree.html 0000664 0000764 0000764 00000024270 13227007433 014757 0 ustar shiro shiro
Two-Level Tree Structure for Fast Pointer Lookup
Hans-J. Boehm, Silicon Graphics (now at HP)
Two-Level Tree Structure for Fast Pointer Lookup
The BDWGC conservative garbage collector uses a 2-level tree
data structure to aid in fast pointer identification.
This data structure is described in a bit more detail here, since
- Variations of the data structure are more generally useful.
- It appears to be hard to understand by reading the code.
- Some other collectors appear to use inferior data structures to
solve the same problem.
- It is central to fast collector operation.
A candidate pointer is divided into three sections, the high,
middle, and low bits. The exact division between these
three groups of bits is dependent on the detailed collector configuration.
The high and middle bits are used to look up an entry in the table described
here. The resulting table entry consists of either a block descriptor
(struct hblkhdr * or hdr *)
identifying the layout of objects in the block, or an indication that this
address range corresponds to the middle of a large block, together with a
hint for locating the actual block descriptor. Such a hint consist
of a displacement that can be subtracted from the middle bits of the candidate
pointer without leaving the object.
In either case, the block descriptor (struct hblkhdr)
refers to a table of object starting addresses (the hb_map field).
The starting address table is indexed by the low bits if the candidate pointer.
The resulting entry contains a displacement to the beginning of the object,
or an indication that this cannot be a valid object pointer.
(If all interior pointer are recognized, pointers into large objects
are handled specially, as appropriate.)
The Tree
The rest of this discussion focuses on the two level data structure
used to map the high and middle bits to the block descriptor.
The high bits are used as an index into the GC_top_index (really
GC_arrays._top_index) array. Each entry points to a
bottom_index data structure. This structure in turn consists
mostly of an array index indexed by the middle bits of
the candidate pointer. The index array contains the actual
hdr pointers.
Thus a pointer lookup consists primarily of a handful of memory references,
and can be quite fast:
- The appropriate bottom_index pointer is looked up in
GC_top_index, based on the high bits of the candidate pointer.
- The appropriate hdr pointer is looked up in the
bottom_index structure, based on the middle bits.
- The block layout map pointer is retrieved from the hdr
structure. (This memory reference is necessary since we try to share
block layout maps.)
- The displacement to the beginning of the object is retrieved from the
above map.
In order to conserve space, not all GC_top_index entries in fact
point to distinct bottom_index structures. If no address with
the corresponding high bits is part of the heap, then the entry points
to GC_all_nils, a single bottom_index structure consisting
only of NULL hdr pointers.
Bottom_index structures contain slightly more information than
just hdr pointers. The asc_link field is used to link
all bottom_index structures in ascending order for fast traversal.
This list is pointed to be GC_all_bottom_indices.
It is maintained with the aid of key field that contains the
high bits corresponding to the bottom_index.
64 bit addresses
In the case of 64 bit addresses, this picture is complicated slightly
by the fact that one of the index structures would have to be huge to
cover the entire address space with a two level tree. We deal with this
by turning GC_top_index into a chained hash table, instead of
a simple array. This adds a hash_link field to the
bottom_index structure.
The "hash function" consists of dropping the high bits. This is cheap to
compute, and guarantees that there will be no collisions if the heap
is contiguous and not excessively large.
A picture
The following is an ASCII diagram of the data structure.
This was contributed by Dave Barrett several years ago.
Data Structure used by GC_base in gc3.7:
21-Apr-94
63 LOG_TOP_SZ[11] LOG_BOTTOM_SZ[10] LOG_HBLKSIZE[13]
+------------------+----------------+------------------+------------------+
p:| | TL_HASH(hi) | | HBLKDISPL(p) |
+------------------+----------------+------------------+------------------+
\-----------------------HBLKPTR(p)-------------------/
\------------hi-------------------/
\______ ________/ \________ _______/ \________ _______/
V V V
| | |
GC_top_index[] | | |
--- +--------------+ | | |
^ | | | | |
| | | | | |
TOP +--------------+<--+ | |
_SZ +-<| [] | * | |
(items)| +--------------+ if 0 < bi< HBLKSIZE | |
| | | | then large object | |
| | | | starts at the bi'th | |
v | | | HBLK before p. | i |
--- | +--------------+ | (word- |
v | aligned) |
bi= |GET_BI(p){->hash_link}->key==hi | |
v | |
| (bottom_index) \ scratch_alloc'd | |
| ( struct bi ) / by get_index() | |
--- +->+--------------+ | |
^ | | | |
^ | | | |
BOTTOM | | ha=GET_HDR_ADDR(p) | |
_SZ(items)+--------------+<----------------------+ +-------+
| +--<| index[] | |
| | +--------------+ GC_obj_map: v
| | | | from / +-+-+-----+-+-+-+-+ ---
v | | | GC_add < 0| | | | | | | | ^
--- | +--------------+ _map_entry \ +-+-+-----+-+-+-+-+ |
| | asc_link | +-+-+-----+-+-+-+-+ MAXOBJSZ
| +--------------+ +-->| | | j | | | | | +1
| | key | | +-+-+-----+-+-+-+-+ |
| +--------------+ | +-+-+-----+-+-+-+-+ |
| | hash_link | | | | | | | | | | v
| +--------------+ | +-+-+-----+-+-+-+-+ ---
| | |<--MAX_OFFSET--->|
| | (bytes)
HDR(p)| GC_find_header(p) | |<--MAP_ENTRIES-->|
| \ from | =HBLKSIZE/WORDSZ
| (hdr) (struct hblkhdr) / alloc_hdr() | (1024 on Alpha)
+-->+----------------------+ | (8/16 bits each)
GET_HDR(p)| word hb_sz (words) | |
+----------------------+ |
| struct hblk *hb_next | |
+----------------------+ |
|mark_proc hb_mark_proc| |
+----------------------+ |
| char * hb_map |>-------------+
+----------------------+
| ushort hb_obj_kind |
+----------------------+
| hb_last_reclaimed |
--- +----------------------+
^ | |
MARK_BITS| hb_marks[] | *if hdr is free, hb_sz
_SZ(words)| | is the size of a heap chunk (struct hblk)
v | | of at least MININCR*HBLKSIZE bytes (below),
--- +----------------------+ otherwise, size of each object in chunk.
Dynamic data structures above are interleaved throughout the heap in blocks of
size MININCR * HBLKSIZE bytes as done by gc_scratch_alloc which cannot be
freed; free lists are used (e.g. alloc_hdr). HBLK's below are collected.
(struct hblk) HDR_BYTES
--- +----------------------+ < HBLKSIZE --- (bytes)
^ +-----hb_body----------+ (and WORDSZ) ^ --- ---
| | | aligned | ^ ^
| | | | hb_sz |
| | | | (words) |
| | Object 0 | | | |
| | | i |(word- v |
| + - - - - - - - - - - -+ --- (bytes)|aligned) --- |
| | | ^ | ^ |
| | | j (words) | | |
n * | Object 1 | v v hb_sz BODY_SZ
HBLKSIZE | |--------------- | (words)
(bytes) | | v MAX_OFFSET
| + - - - - - - - - - - -+ --- (bytes)
| | | !ALL_INTERIOR_POINTERS ^ |
| | | sets j only for hb_sz |
| | Object N | valid object offsets. | |
v | | All objects WORDSZ v v
--- +----------------------+ aligned. --- ---
Gauche-0.9.6/gc/doc/gcinterface.html 0000664 0000764 0000764 00000032162 13074101475 016272 0 ustar shiro shiro
Garbage Collector Interface
C Interface
On many platforms, a single-threaded garbage collector library can be built
to act as a plug-in malloc replacement.
(Build with -DREDIRECT_MALLOC=GC_malloc -DIGNORE_FREE.)
This is often the best way to deal with third-party libraries
which leak or prematurely free objects.
-DREDIRECT_MALLOC=GC_malloc is intended
primarily as an easy way to adapt old code, not for new development.
New code should use the interface discussed below.
Code must be linked against the GC library. On most UNIX platforms,
depending on how the collector is built, this will be gc.a
or libgc.{a,so}.
The following describes the standard C interface to the garbage collector.
It is not a complete definition of the interface. It describes only the
most commonly used functionality, approximately in decreasing order of
frequency of use.
The full interface is described in
gc.h
or gc.h in the distribution.
Clients should include gc.h.
In the case of multi-threaded code,
gc.h should be included after the threads header file, and
after defining the appropriate GC_XXXX_THREADS macro.
(For 6.2alpha4 and later, simply defining GC_THREADS should suffice.)
The header file gc.h must be included
in files that use either GC or threads primitives, since threads primitives
will be redefined to cooperate with the GC on many platforms.
Thread users should also be aware that on many platforms objects reachable
only from thread-local variables may be prematurely reclaimed.
Thus objects pointed to by thread-local variables should also be pointed to
by a globally visible data structure. (This is viewed as a bug, but as
one that is exceedingly hard to fix without some libc hooks.)
- void * GC_MALLOC(size_t nbytes)
-
Allocates and clears nbytes of storage.
Requires (amortized) time proportional to nbytes.
The resulting object will be automatically deallocated when unreferenced.
References from objects allocated with the system malloc are usually not
considered by the collector. (See GC_MALLOC_UNCOLLECTABLE, however.
Building the collector with -DREDIRECT_MALLOC=GC_malloc_uncollectable
is often a way around this.)
GC_MALLOC is a macro which invokes GC_malloc by default or,
if GC_DEBUG
is defined before gc.h is included, a debugging version that checks
occasionally for overwrite errors, and the like.
- void * GC_MALLOC_ATOMIC(size_t nbytes)
-
Allocates nbytes of storage.
Requires (amortized) time proportional to nbytes.
The resulting object will be automatically deallocated when unreferenced.
The client promises that the resulting object will never contain any pointers.
The memory is not cleared.
This is the preferred way to allocate strings, floating point arrays,
bitmaps, etc.
More precise information about pointer locations can be communicated to the
collector using the interface in
gc_typed.h in the distribution.
- void * GC_MALLOC_UNCOLLECTABLE(size_t nbytes)
-
Identical to GC_MALLOC,
except that the resulting object is not automatically
deallocated. Unlike the system-provided malloc, the collector does
scan the object for pointers to garbage-collectible memory, even if the
block itself does not appear to be reachable. (Objects allocated in this way
are effectively treated as roots by the collector.)
- void * GC_REALLOC(void *old, size_t new_size)
-
Allocate a new object of the indicated size and copy (a prefix of) the
old object into the new object. The old object is reused in place if
convenient. If the original object was allocated with
GC_MALLOC_ATOMIC,
the new object is subject to the same constraints. If it was allocated
as an uncollectible object, then the new object is uncollectible, and
the old object (if different) is deallocated.
- void GC_FREE(void *dead)
-
Explicitly deallocate an object. Typically not useful for small
collectible objects.
- void * GC_MALLOC_IGNORE_OFF_PAGE(size_t nbytes)
-
- void * GC_MALLOC_ATOMIC_IGNORE_OFF_PAGE(size_t nbytes)
-
Analogous to GC_MALLOC and GC_MALLOC_ATOMIC,
except that the client
guarantees that as long
as the resulting object is of use, a pointer is maintained to someplace
inside the first 512 bytes of the object. This pointer should be declared
volatile to avoid interference from compiler optimizations.
(Other nonvolatile pointers to the object may exist as well.)
This is the
preferred way to allocate objects that are likely to be > 100KBytes in size.
It greatly reduces the risk that such objects will be accidentally retained
when they are no longer needed. Thus space usage may be significantly reduced.
- void GC_INIT(void)
-
On some platforms, it is necessary to invoke this
from the main executable, not from a dynamic library, before
the initial invocation of a GC routine. It is recommended that this be done
in portable code, though we try to ensure that it expands to a no-op
on as many platforms as possible. In GC 7.0, it was required if
thread-local allocation is enabled in the collector build, and malloc
is not redirected to GC_malloc.
- void GC_gcollect(void)
-
Explicitly force a garbage collection.
- void GC_enable_incremental(void)
-
Cause the garbage collector to perform a small amount of work
every few invocations of GC_MALLOC or the like, instead of performing
an entire collection at once. This is likely to increase total
running time. It will improve response on a platform that either has
suitable support in the garbage collector (Linux and most Unix
versions, win32 if the collector was suitably built) or if "stubborn"
allocation is used (see
gc.h).
On many platforms this interacts poorly with system calls
that write to the garbage collected heap.
- GC_warn_proc GC_set_warn_proc(GC_warn_proc p)
-
Replace the default procedure used by the collector to print warnings.
The collector
may otherwise write to stderr, most commonly because GC_malloc was used
in a situation in which GC_malloc_ignore_off_page would have been more
appropriate. See gc.h for details.
- void GC_REGISTER_FINALIZER(...)
-
Register a function to be called when an object becomes inaccessible.
This is often useful as a backup method for releasing system resources
(e.g. closing files) when the object referencing them becomes
inaccessible.
It is not an acceptable method to perform actions that must be performed
in a timely fashion.
See gc.h for details of the interface.
See here for a more detailed discussion
of the design.
Note that an object may become inaccessible before client code is done
operating on objects referenced by its fields.
Suitable synchronization is usually required.
See here
or here
for details.
If you are concerned with multiprocessor performance and scalability,
you should consider enabling and using thread local allocation.
If your platform
supports it, you should build the collector with parallel marking support
(-DPARALLEL_MARK, or --enable-parallel-mark).
If the collector is used in an environment in which pointer location
information for heap objects is easily available, this can be passed on
to the collector using the interfaces in either gc_typed.h
or gc_gcj.h.
The collector distribution also includes a string package that takes
advantage of the collector. For details see
cord.h
C++ Interface
The C++ interface is implemented as a thin layer on the C interface.
Unfortunately, this thin layer appears to be very sensitive to variations
in C++ implementations, particularly since it tries to replace the global
::new operator, something that appears to not be well-standardized.
Your platform may need minor adjustments in this layer (gc_cpp.cc, gc_cpp.h,
and possibly gc_allocator.h). Such changes do not require understanding
of collector internals, though they may require a good understanding of
your platform. (Patches enhancing portability are welcome.
But it's easy to break one platform by fixing another.)
Usage of the collector from C++ is also complicated by the fact that there
are many "standard" ways to allocate memory in C++. The default ::new
operator, default malloc, and default STL allocators allocate memory
that is not garbage collected, and is not normally "traced" by the
collector. This means that any pointers in memory allocated by these
default allocators will not be seen by the collector. Garbage-collectible
memory referenced only by pointers stored in such default-allocated
objects is likely to be reclaimed prematurely by the collector.
It is the programmers responsibility to ensure that garbage-collectible
memory is referenced by pointers stored in one of
- Program variables
- Garbage-collected objects
- Uncollected but "traceable" objects
"Traceable" objects are not necessarily reclaimed by the collector,
but are scanned for pointers to collectible objects.
They are usually allocated by GC_MALLOC_UNCOLLECTABLE, as described
above, and through some interfaces described below.
(On most platforms, the collector may not trace correctly from in-flight
exception objects. Thus objects thrown as exceptions should only
point to otherwise reachable memory. This is another bug whose
proper repair requires platform hooks.)
The easiest way to ensure that collectible objects are properly referenced
is to allocate only collectible objects. This requires that every
allocation go through one of the following interfaces, each one of
which replaces a standard C++ allocation mechanism. Note that
this requires that all STL containers be explicitly instantiated with
gc_allocator.
- STL allocators
-
Recent versions of the collector include a hopefully standard-conforming
allocator implementation in gc_allocator.h. It defines
- traceable_allocator
- gc_allocator
which may be used either directly to allocate memory or to instantiate
container templates.
The former allocates uncollectible but traced memory.
The latter allocates garbage-collected memory.
These should work with any fully standard-conforming C++ compiler.
Users of the SGI extended STL
or its derivatives (including most g++ versions)
may instead be able to include new_gc_alloc.h before including
STL header files. This is increasingly discouraged.
This defines SGI-style allocators
- alloc
- single_client_alloc
- gc_alloc
- single_client_gc_alloc
The first two allocate uncollectible but traced
memory, while the second two allocate collectible memory.
The single_client versions are not safe for concurrent access by
multiple threads, but are faster.
For an example, click here.
- Class inheritance based interface for new-based allocation
-
Users may include gc_cpp.h and then cause members of classes to
be allocated in garbage collectible memory by having those classes
inherit from class gc.
For details see gc_cpp.h.
Linking against libgccpp in addition to the gc library overrides
::new (and friends) to allocate traceable memory but uncollectible
memory, making it safe to refer to collectible objects from the resulting
memory.
- C interface
-
It is also possible to use the C interface from
gc.h directly.
On platforms which use malloc to implement ::new, it should usually be possible
to use a version of the collector that has been compiled as a malloc
replacement. It is also possible to replace ::new and other allocation
functions suitably, as is done by libgccpp.
Note that user-implemented small-block allocation often works poorly with
an underlying garbage-collected large block allocator, since the collector
has to view all objects accessible from the user's free list as reachable.
This is likely to cause problems if GC_MALLOC
is used with something like
the original HP version of STL.
This approach works well with the SGI versions of the STL only if the
malloc_alloc allocator is used.
Gauche-0.9.6/gc/doc/gc.man 0000664 0000764 0000764 00000012625 13302340445 014216 0 ustar shiro shiro .TH GC_MALLOC 3 "2 October 2003"
.SH NAME
GC_malloc, GC_malloc_atomic, GC_free, GC_realloc, GC_enable_incremental, GC_register_finalizer, GC_malloc_ignore_off_page, GC_malloc_atomic_ignore_off_page, GC_set_warn_proc \- Garbage collecting malloc replacement
.SH SYNOPSIS
#include "gc.h"
.br
void * GC_malloc(size_t size);
.br
void GC_free(void *ptr);
.br
void * GC_realloc(void *ptr, size_t size);
.br
.sp
cc ... -lgc
.LP
.SH DESCRIPTION
.I GC_malloc
and
.I GC_free
are plug-in replacements for standard malloc and free. However,
.I
GC_malloc
will attempt to reclaim inaccessible space automatically by invoking a conservative garbage collector at appropriate points. The collector traverses all data structures accessible by following pointers from the machines registers, stack(s), data, and bss segments. Inaccessible structures will be reclaimed. A machine word is considered to be a valid pointer if it is an address inside an object allocated by
.I
GC_malloc
or friends.
.LP
In most cases it is preferable to call the macros GC_MALLOC, GC_FREE, etc.
instead of calling GC_malloc and friends directly. This allows debugging
versions of the routines to be substituted by defining GC_DEBUG before
including gc.h.
.LP
See the documentation in the include files gc_cpp.h and gc_allocator.h,
as well as the gcinterface.html file in the distribution,
for an alternate, C++ specific interface to the garbage collector.
Note that C++ programs generally
need to be careful to ensure that all allocated memory (whether via new,
malloc, or STL allocators) that may point to garbage collected memory
is either itself garbage collected, or at least traced by the collector.
.LP
Unlike the standard implementations of malloc,
.I
GC_malloc
clears the newly allocated storage.
.I
GC_malloc_atomic
does not. Furthermore, it informs the collector that the resulting object will never contain any pointers, and should therefore not be scanned by the collector.
.LP
.I
GC_free
can be used to deallocate objects, but its use is optional, and generally discouraged.
.I
GC_realloc
has the standard realloc semantics. It preserves pointer-free-ness.
.I
GC_register_finalizer
allows for registration of functions that are invoked when an object becomes inaccessible.
.LP
The garbage collector tries to avoid allocating memory at locations that already appear to be referenced before allocation. (Such apparent ``pointers'' are usually large integers and the like that just happen to look like an address.) This may make it hard to allocate very large objects. An attempt to do so may generate a warning.
.LP
.I
GC_malloc_ignore_off_page
and
.I
GC_malloc_atomic_ignore_off_page
inform the collector that the client code will always maintain a pointer to near the beginning of the object (within the first 512 bytes), and that pointers beyond that can be ignored by the collector. This makes it much easier for the collector to place large objects. These are recommended for large object allocation. (Objects expected to be larger than about 100KBytes should be allocated this way.)
.LP
It is also possible to use the collector to find storage leaks in programs destined to be run with standard malloc/free. The collector can be compiled for thread-safe operation. Unlike standard malloc, it is safe to call malloc after a previous malloc call was interrupted by a signal, provided the original malloc call is not resumed.
.LP
The collector may, on rare occasion produce warning messages. On UNIX machines these appear on stderr. Warning messages can be filtered, redirected, or ignored with
.I
GC_set_warn_proc
This is recommended for production code. See gc.h for details.
.LP
Fully portable code should call
.I
GC_INIT
from the main program before making any other GC calls.
On most platforms this does nothing and the collector is initialized on first use.
On a few platforms explicit initialization is necessary. And it can never hurt.
.LP
Debugging versions of many of the above routines are provided as macros. Their names are identical to the above, but consist of all capital letters. If GC_DEBUG is defined before gc.h is included, these routines do additional checking, and allow the leak detecting version of the collector to produce slightly more useful output. Without GC_DEBUG defined, they behave exactly like the lower-case versions.
.LP
On some machines, collection will be performed incrementally after a call to
.I
GC_enable_incremental.
This may temporarily write protect pages in the heap. See the README file for more information on how this interacts with system calls that write to the heap.
.LP
Other facilities not discussed here include limited facilities to support incremental collection on machines without appropriate VM support, provisions for providing more explicit object layout information to the garbage collector, more direct support for ``weak'' pointers, support for ``abortable'' garbage collections during idle time, etc.
.LP
.SH "SEE ALSO"
The README and gc.h files in the distribution. More detailed definitions of the functions exported by the collector are given there. (The above list is not complete.)
.LP
The web site at http://www.hboehm.info/gc/ (or https://github.com/ivmai/bdwgc/).
.LP
Boehm, H., and M. Weiser, "Garbage Collection in an Uncooperative Environment",
"Software Practice & Experience", September 1988, pp. 807-820.
.LP
The malloc(3) man page.
.LP
.SH AUTHOR
Hans-J. Boehm (boehm@acm.org).
Some of the code was written by others, most notably Alan Demers.
Gauche-0.9.6/gc/doc/README.ews4800 0000664 0000764 0000764 00000004070 13227007433 015117 0 ustar shiro shiro GC on EWS4800
-------------
1. About EWS4800
EWS4800 is a 32/64-bit workstation.
Vendor: NEC Corporation
OS: UX/4800 R9.* - R13.* (SystemV R4.2)
CPU: R4000, R4400, R10000 (MIPS)
2. Compiler
32-bit:
Use ANSI C compiler.
CC = /usr/abiccs/bin/cc
64-bit:
Use the 64-bit ANSI C compiler.
CC = /usr/ccs64/bin/cc
AR = /usr/ccs64/bin/ar
3. ELF file format
*** Caution: The following information is empirical. ***
32-bit:
ELF file has an unique format. (See a.out(4) and end(3C).)
&_start
: text segment
&etext
DATASTART
: data segment (initialized)
&edata
DATASTART2
: data segment (uninitialized)
&end
Here, DATASTART and DATASTART2 are macros of GC, and are defined as
the following equations. (See include/private/gcconfig.h.)
The algorithm for DATASTART is similar with the function
GC_SysVGetDataStart() in os_dep.c.
DATASTART = ((&etext + 0x3ffff) & ~0x3ffff) + (&etext & 0xffff)
Dynamically linked:
DATASTART2 = (&_gp + 0x8000 + 0x3ffff) & ~0x3ffff
Statically linked:
DATASTART2 = &edata
GC has to check addresses both between DATASTART and &edata, and
between DATASTART2 and &end. If a program accesses between &etext
and DATASTART, or between &edata and DATASTART2, the segmentation
error occurs and the program stops.
If a program is statically linked, there is not a gap between
&edata and DATASTART2. The global symbol &_DYNAMIC_LINKING is used
for the detection.
64-bit:
ELF file has a simple format. (See end(3C).)
_ftext
: text segment
_etext
_fdata = DATASTART
: data segment (initialized)
_edata
_fbss
: data segment (uninitialized)
_end = DATAEND
--
Hironori SAKAMOTO
When using the new "configure; make" build process, please
run configure with the --disable-shared option. "Make check" does not
yet pass with dynamic libraries. The reasons for that are not yet
understood. (HB, paraphrasing message from Hironori SAKAMOTO.)
Gauche-0.9.6/gc/doc/debugging.html 0000664 0000764 0000764 00000037016 13227007433 015755 0 ustar shiro shiro
Debugging Garbage Collector Related Problems
Debugging Garbage Collector Related Problems
This page contains some hints on
debugging issues specific to
the Boehm-Demers-Weiser conservative garbage collector.
It applies both to debugging issues in client code that manifest themselves
as collector misbehavior, and to debugging the collector itself.
If you suspect a bug in the collector itself, it is strongly recommended
that you try the latest collector release before proceeding.
Bus Errors and Segmentation Violations
If the fault occurred in GC_find_limit, or with incremental collection enabled,
this is probably normal. The collector installs handlers to take care of
these. You will not see these unless you are using a debugger.
Your debugger should allow you to continue.
It's often preferable to tell the debugger to ignore SIGBUS and SIGSEGV
("handle SIGSEGV SIGBUS nostop noprint" in gdb,
"ignore SIGSEGV SIGBUS" in most versions of dbx)
and set a breakpoint in abort.
The collector will call abort if the signal had another cause,
and there was not other handler previously installed.
We recommend debugging without incremental collection if possible.
(This applies directly to UNIX systems.
Debugging with incremental collection under win32 is worse. See README.win32.)
If the application generates an unhandled SIGSEGV or equivalent, it may
often be easiest to set the environment variable GC_LOOP_ON_ABORT. On many
platforms, this will cause the collector to loop in a handler when the
SIGSEGV is encountered (or when the collector aborts for some other reason),
and a debugger can then be attached to the looping
process. This sidesteps common operating system problems related
to incomplete core files for multi-threaded applications, etc.
Other Signals
On most platforms, the multi-threaded version of the collector needs one or
two other signals for internal use by the collector in stopping threads.
It is normally wise to tell the debugger to ignore these. On Linux,
the collector currently uses SIGPWR and SIGXCPU by default.
Warning Messages About Needing to Allocate Blacklisted Blocks
The garbage collector generates warning messages of the form
Needed to allocate blacklisted block at 0x...
or
Repeated allocation of very large block ...
when it needs to allocate a block at a location that it knows to be
referenced by a false pointer. These false pointers can be either permanent
(e.g. a static integer variable that never changes) or temporary.
In the latter case, the warning is largely spurious, and the block will
eventually be reclaimed normally.
In the former case, the program will still run correctly, but the block
will never be reclaimed. Unless the block is intended to be
permanent, the warning indicates a memory leak.
- Ignore these warnings while you are using GC_DEBUG. Some of the routines
mentioned below don't have debugging equivalents. (Alternatively, write
the missing routines and send them to me.)
- Replace allocator calls that request large blocks with calls to
GC_malloc_ignore_off_page or
GC_malloc_atomic_ignore_off_page. You may want to set a
breakpoint in GC_default_warn_proc to help you identify such calls.
Make sure that a pointer to somewhere near the beginning of the resulting block
is maintained in a (preferably volatile) variable as long as
the block is needed.
-
If the large blocks are allocated with realloc, we suggest instead allocating
them with something like the following. Note that the realloc size increment
should be fairly large (e.g. a factor of 3/2) for this to exhibit reasonable
performance. But we all know we should do that anyway.
void * big_realloc(void *p, size_t new_size)
{
size_t old_size = GC_size(p);
void * result;
if (new_size <= 10000) return(GC_realloc(p, new_size));
if (new_size <= old_size) return(p);
result = GC_malloc_ignore_off_page(new_size);
if (result == 0) return(0);
memcpy(result,p,old_size);
GC_free(p);
return(result);
}
- In the unlikely case that even relatively small object
(<20KB) allocations are triggering these warnings, then your address
space contains lots of "bogus pointers", i.e. values that appear to
be pointers but aren't. Usually this can be solved by using GC_malloc_atomic
or the routines in gc_typed.h to allocate large pointer-free regions of bitmaps, etc. Sometimes the problem can be solved with trivial changes of encoding
in certain values. It is possible, to identify the source of the bogus
pointers by building the collector with -DPRINT_BLACK_LIST,
which will cause it to print the "bogus pointers", along with their location.
- If you get only a fixed number of these warnings, you are probably only
introducing a bounded leak by ignoring them. If the data structures being
allocated are intended to be permanent, then it is also safe to ignore them.
The warnings can be turned off by calling GC_set_warn_proc with a procedure
that ignores these warnings (e.g. by doing absolutely nothing).
The Collector References a Bad Address in GC_malloc
This typically happens while the collector is trying to remove an entry from
its free list, and the free list pointer is bad because the free list link
in the last allocated object was bad.
With > 99% probability, you wrote past the end of an allocated object.
Try setting GC_DEBUG before including gc.h and
allocating with GC_MALLOC. This will try to detect such
overwrite errors.
Unexpectedly Large Heap
Unexpected heap growth can be due to one of the following:
- Data structures that are being unintentionally retained. This
is commonly caused by data structures that are no longer being used,
but were not cleared, or by caches growing without bounds.
- Pointer misidentification. The garbage collector is interpreting
integers or other data as pointers and retaining the "referenced"
objects. A common symptom is that GC_dump() shows much of the heap
as black-listed.
- Heap fragmentation. This should never result in unbounded growth,
but it may account for larger heaps. This is most commonly caused
by allocation of large objects. On some platforms it can be reduced
by building with -DUSE_MUNMAP, which will cause the collector to unmap
memory corresponding to pages that have not been recently used.
- Per object overhead. This is usually a relatively minor effect, but
it may be worth considering. If the collector recognizes interior
pointers, object sizes are increased, so that one-past-the-end pointers
are correctly recognized. The collector can be configured not to do this
(-DDONT_ADD_BYTE_AT_END).
The collector rounds up object sizes so the result fits well into the
chunk size (HBLKSIZE, normally 4K on 32 bit machines, 8K
on 64 bit machines) used by the collector. Thus it may be worth avoiding
objects of size 2K + 1 (or 2K if a byte is being added at the end.)
The last two cases can often be identified by looking at the output
of a call to GC_dump(). Among other things, it will print the
list of free heap blocks, and a very brief description of all chunks in
the heap, the object sizes they correspond to, and how many live objects
were found in the chunk at the last collection.
Growing data structures can usually be identified by
- Building the collector with -DKEEP_BACK_PTRS,
- Preferably using debugging allocation (defining GC_DEBUG
before including gc.h and allocating with GC_MALLOC),
so that objects will be identified by their allocation site,
- Running the application long enough so
that most of the heap is composed of "leaked" memory, and
- Then calling GC_generate_random_backtrace() from gc_backptr.h
a few times to determine why some randomly sampled objects in the heap are
being retained.
The same technique can often be used to identify problems with false
pointers, by noting whether the reference chains printed by
GC_generate_random_backtrace() involve any misidentified pointers.
An alternate technique is to build the collector with
-DPRINT_BLACK_LIST which will cause it to report values that
are almost, but not quite, look like heap pointers. It is very likely that
actual false pointers will come from similar sources.
In the unlikely case that false pointers are an issue, it can usually
be resolved using one or more of the following techniques:
- Use GC_malloc_atomic for objects containing no pointers.
This is especially important for large arrays containing compressed data,
pseudo-random numbers, and the like. It is also likely to improve GC
performance, perhaps drastically so if the application is paging.
- If you allocate large objects containing only
one or two pointers at the beginning, either try the typed allocation
primitives is gc_typed.h, or separate out the pointer-free component.
- Consider using GC_malloc_ignore_off_page()
to allocate large objects. (See gc.h and above for details.
Large means > 100K in most environments.)
- If your heap size is larger than 100MB or so, build the collector with
-DLARGE_CONFIG.
This allows the collector to keep more precise black-list
information.
- If you are using heaps close to, or larger than, a gigabyte on a 32-bit
machine, you may want to consider moving to a platform with 64-bit pointers.
This is very likely to resolve any false pointer issues.
Prematurely Reclaimed Objects
The usual symptom of this is a segmentation fault, or an obviously overwritten
value in a heap object. This should, of course, be impossible. In practice,
it may happen for reasons like the following:
- The collector did not intercept the creation of threads correctly in
a multi-threaded application, e.g. because the client called
pthread_create without including gc.h, which redefines it.
- The last pointer to an object in the garbage collected heap was stored
somewhere were the collector couldn't see it, e.g. in an
object allocated with system malloc, in certain types of
mmaped files,
or in some data structure visible only to the OS. (On some platforms,
thread-local storage is one of these.)
- The last pointer to an object was somehow disguised, e.g. by
XORing it with another pointer.
- Incorrect use of GC_malloc_atomic or typed allocation.
- An incorrect GC_free call.
- The client program overwrote an internal garbage collector data structure.
- A garbage collector bug.
- (Empirically less likely than any of the above.) A compiler optimization
that disguised the last pointer.
The following relatively simple techniques should be tried first to narrow
down the problem:
- If you are using the incremental collector try turning it off for
debugging.
- If you are using shared libraries, try linking statically. If that works,
ensure that DYNAMIC_LOADING is defined on your platform.
- Try to reproduce the problem with fully debuggable unoptimized code.
This will eliminate the last possibility, as well as making debugging easier.
- Try replacing any suspect typed allocation and GC_malloc_atomic
calls with calls to GC_malloc.
- Try removing any GC_free calls (e.g. with a suitable
#define).
- Rebuild the collector with -DGC_ASSERTIONS.
- If the following works on your platform (i.e. if gctest still works
if you do this), try building the collector with
-DREDIRECT_MALLOC=GC_malloc_uncollectable. This will cause
the collector to scan memory allocated with malloc.
If all else fails, you will have to attack this with a debugger.
Suggested steps:
- Call GC_dump() from the debugger around the time of the failure. Verify
that the collectors idea of the root set (i.e. static data regions which
it should scan for pointers) looks plausible. If not, i.e. if it doesn't
include some static variables, report this as
a collector bug. Be sure to describe your platform precisely, since this sort
of problem is nearly always very platform dependent.
- Especially if the failure is not deterministic, try to isolate it to
a relatively small test case.
- Set a break point in GC_finish_collection. This is a good
point to examine what has been marked, i.e. found reachable, by the
collector.
- If the failure is deterministic, run the process
up to the last collection before the failure.
Note that the variable GC_gc_no counts collections and can be used
to set a conditional breakpoint in the right one. It is incremented just
before the call to GC_finish_collection.
If object p was prematurely recycled, it may be helpful to
look at *GC_find_header(p) at the failure point.
The hb_last_reclaimed field will identify the collection number
during which its block was last swept.
- Verify that the offending object still has its correct contents at
this point.
Then call GC_is_marked(p) from the debugger to verify that the
object has not been marked, and is about to be reclaimed. Note that
GC_is_marked(p) expects the real address of an object (the
address of the debug header if there is one), and thus it may
be more appropriate to call GC_is_marked(GC_base(p))
instead.
- Determine a path from a root, i.e. static variable, stack, or
register variable,
to the reclaimed object. Call GC_is_marked(q) for each object
q along the path, trying to locate the first unmarked object, say
r.
- If r is pointed to by a static root,
verify that the location
pointing to it is part of the root set printed by GC_dump(). If it
is on the stack in the main (or only) thread, verify that
GC_stackbottom is set correctly to the base of the stack. If it is
in another thread stack, check the collector's thread data structure
(GC_thread[] on several platforms) to make sure that stack bounds
are set correctly.
- If r is pointed to by heap object s, check that the
collector's layout description for s is such that the pointer field
will be scanned. Call *GC_find_header(s) to look at the descriptor
for the heap chunk. The hb_descr field specifies the layout
of objects in that chunk. See gc_mark.h for the meaning of the descriptor.
(If its low order 2 bits are zero, then it is just the length of the
object prefix to be scanned. This form is always used for objects allocated
with GC_malloc or GC_malloc_atomic.)
- If the failure is not deterministic, you may still be able to apply some
of the above technique at the point of failure. But remember that objects
allocated since the last collection will not have been marked, even if the
collector is functioning properly. On some platforms, the collector
can be configured to save call chains in objects for debugging.
Enabling this feature will also cause it to save the call stack at the
point of the last GC in GC_arrays._last_stack.
- When looking at GC internal data structures remember that a number
of GC_xxx variables are really macro defined to
GC_arrays._xxx, so that
the collector can avoid scanning them.
Gauche-0.9.6/gc/doc/README.amiga 0000664 0000764 0000764 00000030152 13227007433 015063 0 ustar shiro shiro Kjetil S. Matheussen's notes (28-11-2000)
Compiles under SAS/C again. Should also still compile under other
Amiga compilers without big changes. I haven't checked if it still
works under gcc, because I don't have gcc for Amiga. But I have
updated 'Makefile', and hope it compiles fine.
WHATS NEW:
1.
Made a pretty big effort in preventing GCs allocating-functions from returning
chip-mem.
The lower part of the new file AmigaOS.c does this in various ways, mainly by
wrapping GC_malloc, GC_malloc_atomic, GC_malloc_uncollectable,
GC_malloc_atomic_uncollectable, GC_malloc_stubborn, GC_malloc_ignore_off_page
and GC_malloc_atomic_ignore_off_page. GC_realloc is also wrapped, but
doesn't do the same effort in preventing to return chip-mem.
Other allocating-functions (f.ex. GC_*_typed_) can probably be
used without any problems, but beware that the warn hook will not be called.
In case of problems, don't define GC_AMIGA_FASTALLOC.
Programs using more time actually using the memory allocated
(instead of just allocate and free rapidly) have
the most to earn on this, but even gctest now normally runs twice
as fast and uses less memory, on my poor 8MB machine.
The changes have only effect when there is no more
fast-mem left. But with the way GC works, it
could happen quite often. Beware that an atexit handler had to be added,
so using the abort() function will make a big memory-loss.
If you absolutely must call abort() instead of exit(), try calling
the GC_amiga_free_all_mem function before abort().
New Amiga-specific compilation flags:
GC_AMIGA_FASTALLOC - By NOT defining this option, GC will work like before,
it will not try to force fast-mem out of the OS, and
it will use normal calloc for allocation, and the rest
of the following flags will have no effect.
GC_AMIGA_ONLYFAST - Makes GC never to return chip-mem. GC_AMIGA_RETRY have
no effect if this flag is set.
GC_AMIGA_GC - If gc returns NULL, do a GC_gcollect, and try again. This
usually is a success with the standard GC configuration.
It is also the most important flag to set to prevent
GC from returning chip-mem. Beware that it slows down a lot
when a program is rapidly allocating/deallocating when
there's either very little fast-memory left or very little
chip-memory left. Its not a very common situation, but gctest
sometimes (very rare) use many minutes because of this.
GC_AMIGA_RETRY - If gc succeed allocating memory, but it is chip-mem,
try again and see if it is fast-mem. Most of the time,
it will actually return fast-mem for the second try.
I have set max number of retries to 9 or size/5000. You
can change this if you like. (see GC_amiga_rec_alloc())
GC_AMIGA_PRINTSTATS - Gather some statistics during the execution of a
program, and prints out the info when the atexit-handler
is called.
My recommendation is to set all this flags, except GC_AMIGA_PRINTSTATS and
GC_AMIGA_ONLYFAST.
If your program demands high response-time, you should
not define GC_AMIGA_GC, and possible also define GC_AMIGA_ONLYFAST.
GC_AMIGA_RETRY does not seem to slow down much.
Also, when compiling up programs, and GC_AMIGA_FASTALLOC was not defined when
compiling gc, you can define GC_AMIGA_MAKINGLIB to avoid having these allocation-
functions wrapped. (see gc.h)
Note that GC_realloc must not be called before any of
the other above mentioned allocating-functions have been called. (shouldn't be
any programs doing so either, I hope).
Another note. The allocation-function is wrapped when defining
GC_AMIGA_FASTALLOC by letting the function go thru the new
GC_amiga_allocwrapper_do function-pointer (see gc.h). Means that
sending function-pointers, such as GC_malloc, GC_malloc_atomic, etc.,
for later to be called like f.ex this, (*GC_malloc_function_pointer)(size),
will not wrap the function. This is normally not a big problem, unless
all allocation function is called like this, which will cause the
atexit un-allocating function never to be called. Then you either
have to manually add the atexit handler, or call the allocation-
functions function-pointer functions like this;
(*GC_amiga_allocwrapper_do)(size,GC_malloc_function_pointer).
There are probably better ways this problem could be handled, unfortunately,
I didn't find any without rewriting or replacing a lot of the GC-code, which
I really didn't want to. (Making new GC_malloc_* functions, and just
define f.ex GC_malloc as GC_amiga_malloc should work too).
New Amiga-specific function:
void GC_amiga_set_toany(void (*func)(void));
'func' is a function that will be called right before gc has to change
allocation-method from MEMF_FAST to MEMF_ANY. Ie. when it is likely
it will return chip-mem.
2. A few small compiler-specific additions to make it compile with SAS/C again.
3. Updated and rewritten the smakefile, so that it works again and that
the "unnecessary" 'SCOPTIONS' files could be removed. Also included
the cord-smakefile stuff in the main smakefile, so that the cord smakefile
could be removed too. By writing smake -f Smakefile.smk, both gc.lib and
cord.lib will be made.
STILL MISSING:
Programs can not be started from workbench, at least not for SAS/C. (Martin
Tauchmanns note about that it now works with workbench is definitely wrong
when concerning SAS/C). An iconx-script solves this problem.
BEWARE!
-To run gctest, set the stack to around 200000 bytes first.
-SAS/C-specific: cord will crash if you compile gc.lib with
either parm=reg or parm=both. (missing legal prototypes for
function-pointers someplace is the reason I guess.).
tested with software: Radium, http://www.stud.ifi.uio.no/~ksvalast/radium/
tested with hardware: MC68060
Martin Tauchmann's notes (1-Apr-99)
Works now, also with the GNU-C compiler V2.7.2.1.
Modify the `Makefile`
CC=cc $(ABI_FLAG)
to
CC=gcc $(ABI_FLAG)
TECHNICAL NOTES
- `GC_get_stack_base()`, `GC_register_data_segments()` works now with every
C compiler; also Workbench.
- Removed AMIGA_SKIP_SEG, but the Code-Segment must not be scanned by GC.
PROBLEMS
- When the Linker, does`t merge all Code-Segments to an single one. LD of GCC
do it always.
- With ixemul.library V47.3, when an GC program launched from another program
(example: `Make` or `if_mach M68K AMIGA gctest`), `GC_register_data_segments()`
found the Segment-List of the caller program.
Can be fixed, if the run-time initialization code (for C programs, usually *crt0*)
support `__data` and `__bss`.
- PowerPC Amiga currently not supported.
- Dynamic libraries (dyn_load.c) not supported.
TESTED WITH SOFTWARE
`Optimized Oberon 2 C` (oo2c)
TESTED WITH HARDWARE
MC68030
Michel Schinz's notes
WHO DID WHAT
The original Amiga port was made by Jesper Peterson. I (Michel Schinz)
modified it slightly to reflect the changes made in the new official
distributions, and to take advantage of the new SAS/C 6.x features. I also
created a makefile to compile the "cord" package (see the cord
subdirectory).
TECHNICAL NOTES
In addition to Jesper's notes, I have the following to say:
- Starting with version 4.3, gctest checks to see if the code segment is
added to the root set or not, and complains if it is. Previous versions
of this Amiga port added the code segment to the root set, so I tried to
fix that. The only problem is that, as far as I know, it is impossible to
know which segments are code segments and which are data segments (there
are indeed solutions to this problem, like scanning the program on disk
or patch the LoadSeg functions, but they are rather complicated). The
solution I have chosen (see os_dep.c) is to test whether the program
counter is in the segment we are about to add to the root set, and if it
is, to skip the segment. The problems are that this solution is rather
awkward and that it works only for one code segment. This means that if
your program has more than one code segment, all of them but one will be
added to the root set. This isn't a big problem in fact, since the
collector will continue to work correctly, but it may be slower.
Anyway, the code which decides whether to skip a segment or not can be
removed simply by not defining AMIGA_SKIP_SEG. But notice that if you do
so, gctest will complain (it will say that "GC_is_visible produced wrong
failure indication"). However, it may be useful if you happen to have
pointers stored in a code segment (you really shouldn't).
If anyone has a good solution to the problem of finding, when a program
is loaded in memory, whether a segment is a code or a data segment,
please let me know.
Jesper Peterson's notes
ADDITIONAL NOTES FOR AMIGA PORT
These notes assume some familiarity with Amiga internals.
WHY I PORTED TO THE AMIGA
The sole reason why I made this port was as a first step in getting
the Sather(*) language on the Amiga. A port of this language will
be done as soon as the Sather 1.0 sources are made available to me.
Given this motivation, the garbage collection (GC) port is rather
minimal.
(*) For information on Sather read the comp.lang.sather newsgroup.
LIMITATIONS
This port assumes that the startup code linked with target programs
is that supplied with SAS/C versions 6.0 or later. This allows
assumptions to be made about where to find the stack base pointer
and data segments when programs are run from WorkBench, as opposed
to running from the CLI. The compiler dependent code is all in the
GC_get_stack_base() and GC_register_data_segments() functions, but
may spread as I add Amiga specific features.
Given that SAS/C was assumed, the port is set up to be built with
"smake" using the "SMakefile". Compiler options in "SCoptions" can
be set with "scopts" program. Both "smake" and "scopts" are part of
the SAS/C commercial development system.
In keeping with the porting philosophy outlined above, this port
will not behave well with Amiga specific code. Especially not inter-
process comms via messages, and setting up public structures like
Intuition objects or anything else in the system lists. For the
time being the use of this library is limited to single threaded
ANSI/POSIX compliant or near-compliant code. (ie. Stick to stdio
for now). Given this limitation there is currently no mechanism for
allocating "CHIP" or "PUBLIC" memory under the garbage collector.
I'll add this after giving it considerable thought. The major
problem is the entire physical address space may have to me scanned,
since there is no telling who we may have passed memory to.
If you allocate your own stack in client code, you will have to
assign the pointer plus stack size to GC_stackbottom.
The initial stack size of the target program can be compiled in by
setting the __stack symbol (see SAS documentation). It can be over-
ridden from the CLI by running the AmigaDOS "stack" program, or from
the WorkBench by setting the stack size in the tool types window.
SAS/C COMPILER OPTIONS (SCoptions)
You may wish to check the "CPU" code option is appropriate for your
intended target system.
Under no circumstances set the "StackExtend" code option in either
compiling the library or *ANY* client code.
All benign compiler warnings have been suppressed. These mainly
involve lack of prototypes in the code, and dead assignments
detected by the optimizer.
THE GOOD NEWS
The library as it stands is compatible with the GigaMem commercial
virtual memory software, and probably similar PD software.
The performance of "gctest" on an Amiga 2630 (68030 @ 25Mhz)
compares favorably with an HP9000 with similar architecture (a 325
with a 68030 I think).
-----------------------------------------------------------------------
Gauche-0.9.6/gc/doc/README.cmake 0000664 0000764 0000764 00000002277 13074101475 015075 0 ustar shiro shiro
CMAKE
-----
Win32 binaries (both 32- and 64-bit) can be built using CMake. CMake is an
open-source tool like automake - it generates makefiles.
Some preliminary work has been done to make this work on other platforms, but
the support is not yet complete.
CMake will generate:
Borland Makefiles
MSYS Makefiles
MinGW Makefiles
NMake Makefiles
Unix Makefiles
. Visual Studio project files
Visual Studio 6
Visual Studio 7
Visual Studio 7 .NET 2003
Visual Studio 8 2005
Visual Studio 8 2005 Win64
Visual Studio 9 2008
Visual Studio 9 2008 Win64
Watcom WMake
BUILD PROCESS
-------------
. install cmake (cmake.org)
. add directory containing cmake.exe to %PATH%
. run cmake from the gc root directory, passing the target with -G:
e.g.,
> cmake -G "Visual Studio 8 2005"
use the gc.sln file generated by cmake to build gc
. you can also run cmake from a build directory to build outside of
the source tree. Just specify the path to the source tree:
e.g.,
> mkdir build
> cd build
> cmake .. -G "Visual Studio 8 2005"
INPUT
-----
The main input to cmake are the CMakeLists.txt files in each directory. For
help, goto cmake.org.
Gauche-0.9.6/gc/doc/README.cords 0000664 0000764 0000764 00000004570 13227007433 015124 0 ustar shiro shiro Copyright (c) 1993-1994 by Xerox Corporation. All rights reserved.
THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY EXPRESSED
OR IMPLIED. ANY USE IS AT YOUR OWN RISK.
Permission is hereby granted to use or copy this program
for any purpose, provided the above notices are retained on all copies.
Permission to modify the code and to distribute modified code is granted,
provided the above notices are retained, and a notice that the code was
modified is included with the above copyright notice.
Please send bug reports to Hans-J. Boehm.
This is a string packages that uses a tree-based representation.
See cord.h for a description of the functions provided. Ec.h describes
"extensible cords", which are essentially output streams that write
to a cord. These allow for efficient construction of cords without
requiring a bound on the size of a cord.
More details on the data structure can be found in
Boehm, Atkinson, and Plass, "Ropes: An Alternative to Strings",
Software Practice and Experience 25, 12, December 1995, pp. 1315-1330.
A fundamentally similar "rope" data structure is also part of SGI's standard
template library implementation, and its descendants, which include the
GNU C++ library. That uses reference counting by default.
There is a short description of that data structure at
http://www.sgi.com/tech/stl/ropeimpl.html .
All of these are descendants of the "ropes" in Xerox Cedar.
cord/tests/de.c is a very dumb text editor that illustrates the use of cords.
It maintains a list of file versions. Each version is simply a
cord representing the file contents. Nonetheless, standard
editing operations are efficient, even on very large files.
(Its 3 line "user manual" can be obtained by invoking it without
arguments. Note that ^R^N and ^R^P move the cursor by
almost a screen. It does not understand tabs, which will show
up as highlighted "I"s. Use the UNIX "expand" program first.)
To build the editor, type "make cord/de" in the gc directory.
This package assumes an ANSI C compiler such as gcc. It will
not compile with an old-style K&R compiler.
Note that CORD_printf and friends use C functions with variable numbers
of arguments in non-standard-conforming ways. This code is known to
break on some platforms, notably PowerPC. It should be possible to
build the remainder of the library (everything but cordprnt.c) on
any platform that supports the collector.
Gauche-0.9.6/gc/doc/README.linux 0000664 0000764 0000764 00000010443 13245551623 015152 0 ustar shiro shiro See README.alpha for Linux on DEC AXP info.
This file applies mostly to Linux/Intel IA32. Ports to Linux on an M68K,
IA64, SPARC, MIPS, Alpha and PowerPC are integrated too. They should behave
similarly, except that the PowerPC port lacks incremental GC support, and
it is unknown to what extent the Linux threads code is functional.
See below for M68K specific notes.
Incremental GC is generally supported.
Dynamic libraries are supported on an ELF system.
The collector appears to work reliably with Linux threads, but beware
of older versions of glibc and gdb.
The garbage collector uses SIGPWR and SIGXCPU if it is used with
Linux threads. These should not be touched by the client program.
To use threads, you need to abide by the following requirements:
1) You need to use LinuxThreads or NPTL (which are included in libc6).
The collector relies on some implementation details of the LinuxThreads
package. This code may not work on other
pthread implementations (in particular it will *not* work with
MIT pthreads).
2) You must compile the collector with -DGC_LINUX_THREADS (or
just -DGC_THREADS) and -D_REENTRANT specified in the Makefile.
3a) Every file that makes thread calls should define GC_LINUX_THREADS and
_REENTRANT and then include gc.h. Gc.h redefines some of the
pthread primitives as macros which also provide the collector with
information it requires.
3b) A new alternative to (3a) is to build the collector and compile GC clients
with -DGC_USE_LD_WRAP, and to link the final program with
(for ld) --wrap dlopen --wrap pthread_create \
--wrap pthread_join --wrap pthread_detach \
--wrap pthread_sigmask --wrap pthread_exit --wrap pthread_cancel
(for gcc) -Wl,--wrap -Wl,dlopen -Wl,--wrap -Wl,pthread_create \
-Wl,--wrap -Wl,pthread_join -Wl,--wrap -Wl,pthread_detach \
-Wl,--wrap -Wl,pthread_sigmask -Wl,--wrap -Wl,pthread_exit \
-Wl,--wrap -Wl,pthread_cancel
In any case, _REENTRANT should be defined during compilation.
4) Dlopen() disables collection during its execution. (It can't run
concurrently with the collector, since the collector looks at its
data structures. It can't acquire the allocator lock, since arbitrary
user startup code may run as part of dlopen().) Under unusual
conditions, this may cause unexpected heap growth.
5) The combination of GC_LINUX_THREADS, REDIRECT_MALLOC, and incremental
collection is probably not fully reliable, though it now seems to work
in simple cases.
6) Thread local storage may not be viewed as part of the root set by the
collector. This probably depends on the linuxthreads version. For the
time being, any collectible memory referenced by thread local storage
should also be referenced from elsewhere, or be allocated as uncollectible.
(This is really a bug that should be fixed somehow. The current GC
version probably gets things right if there are not too many tls locations
and if dlopen is not used.)
M68K LINUX:
(From Richard Zidlicky)
The bad news is that it can crash every linux-m68k kernel on a 68040,
so an additional test is needed somewhere on startup. I have meanwhile
patches to correct the problem in 68040 buserror handler but it is not
yet in any standard kernel.
Here is a simple test program to detect whether the kernel has the
problem. It could be run as a separate check in configure or tested
upon startup. If it fails (return !0) than mprotect can't be used
on that system.
/*
* test for bug that may crash 68040 based Linux
*/
#include
#include
#include
#include
#include
char *membase;
int pagesize=4096;
int pageshift=12;
int x_taken=0;
int sighandler(int sig)
{
mprotect(membase,pagesize,PROT_READ|PROT_WRITE);
x_taken=1;
}
main()
{
long l;
signal(SIGSEGV,sighandler);
l=(long)mmap(NULL,pagesize,PROT_READ,MAP_PRIVATE | MAP_ANON,-1,0);
if (l==-1)
{
perror("mmap/malloc");
abort();
}
membase=(char*)l;
*(long*)(membase+sizeof(long))=123456789;
if (*(long*)(membase+sizeof(long)) != 123456789 )
{
fprintf(stderr,"writeback failed !\n");
exit(1);
}
if (!x_taken)
{
fprintf(stderr,"exception not taken !\n");
exit(1);
}
fprintf(stderr,"vmtest Ok\n");
exit(0);
}
Gauche-0.9.6/gc/doc/README.symbian 0000664 0000764 0000764 00000001114 13074101475 015444 0 ustar shiro shiro Instructions for Symbian:
1. base version: libgc 7.1
2. Build: use libgc.mmp
3. Limitations
3.1.No multi-threaded support
3.2. Be careful with limitation that emulator introduces: Static roots are not
dynamically accessible (there are Symbian APIs for this purpose but are just
stubs, returning irrelevant values).
Consequently, on emulator, you can only use dlls or exe, and retrieve static
roots by calling global_init_static_root per dll (or exe).
On target, only libs are supported, because static roots are retrieved by
linker flags, by calling global_init_static_root in main exe.
Gauche-0.9.6/gc/doc/README.win32 0000664 0000764 0000764 00000023512 13074101475 014752 0 ustar shiro shiro The collector has at various times been compiled under Windows 95 & later, NT,
and XP, with the original Microsoft SDK, with Visual C++ 2.0, 4.0, and 6, with
the GNU win32 tools, with Borland 4.5, with Watcom C, and recently
with the Digital Mars compiler. It is likely that some of these have been
broken in the meantime. Patches are appreciated.
For historical reasons,
the collector test program "gctest" is linked as a GUI application,
but does not open any windows. Its output normally appears in the file
"gctest.gc.log". It may be started from the file manager. The hour glass
cursor may appear as long as it's running. If it is started from the
command line, it will usually run in the background. Wait a few
minutes (a few seconds on a modern machine) before you check the output.
You should see either a failure indication or a "Collector appears to
work" message.
The cord test program has not been ported (but should port
easily). A toy editor (cord/de.exe) based on cords (heavyweight
strings represented as trees) has been ported and is included.
It runs fine under either win32 or win32S. It serves as an example
of a true Windows application, except that it was written by a
nonexpert Windows programmer. (There are some peculiarities
in the way files are displayed. The is displayed explicitly
for standard DOS text files. As in the UNIX version, control
characters are displayed explicitly, but in this case as red text.
This may be suboptimal for some tastes and/or sets of default
window colors.)
In general -DREDIRECT_MALLOC is unlikely to work unless the
application is completely statically linked.
The collector normally allocates memory from the OS with VirtualAlloc.
This appears to cause problems under Windows NT and Windows 2000 (but
not Windows 95/98) if the memory is later passed to CreateDIBitmap.
To work around this problem, build the collector with -DUSE_GLOBAL_ALLOC.
This is currently incompatible with -DUSE_MUNMAP. (Thanks to Jonathan
Clark for tracking this down. There's some chance this may be fixed
in 6.1alpha4, since we now separate heap sections with an unused page.)
[Threads and incremental collection are discussed near the end, below.]
Microsoft Tools
---------------
For Microsoft development tools, rename NT_MAKEFILE as
MAKEFILE. (Make sure that the CPU environment variable is defined
to be i386.) In order to use the gc_cpp.h C++ interface, all
client code should include gc_cpp.h.
For historical reasons,
the collector test program "gctest" is linked as a GUI application,
but does not open any windows. Its output appears in the file
"gctest.gc.log". It may be started from the file manager. The hour glass
cursor may appear as long as it's running. If it is started from the
command line, it will usually run in the background. Wait a few
minutes (a few seconds on a modern machine) before you check the output.
You should see either a failure indication or a "Collector appears to
work" message.
If you would prefer a VC++ .NET project file, ask Hans Boehm. One has
been contributed, but it seems to contain some absolute paths etc., so
it can presumably only be a starting point, and is not in the standard
distribution. It is unclear (to me, Hans Boehm) whether it is feasible to
change that.
Clients may need to define GC_NOT_DLL before including gc.h, if the
collector was built as a static library (as it normally is in the
absence of thread support).
GNU Tools
---------
The collector should be buildable under Cygwin with the
"./configure; make check" machinery.
MinGW builds (including for x86_64) are available via cross-compilation, e.g.
"./configure --host=i686-pc-mingw32; make check"
To build the collector as a DLL, pass "--enable-shared --disable-static" to
configure (this will instruct make compile with -D GC_DLL).
Parallel marker could be enabled via "--enable-parallel-mark".
Memory unmapping could be enabled via "--enable-munmap".
Borland Tools
-------------
[Rarely tested.]
For Borland tools, use BCC_MAKEFILE. Note that
Borland's compiler defaults to 1 byte alignment in structures (-a1),
whereas Visual C++ appears to default to 8 byte alignment (/Zp8).
The garbage collector in its default configuration EXPECTS AT
LEAST 4 BYTE ALIGNMENT. Thus the BORLAND DEFAULT MUST
BE OVERRIDDEN. (In my opinion, it should usually be anyway.
I expect that -a1 introduces major performance penalties on a
486 or Pentium.) Note that this changes structure layouts. (As a last
resort, gcconfig.h can be changed to allow 1 byte alignment. But
this has significant negative performance implications.)
The Makefile is set up to assume Borland 4.5. If you have another
version, change the line near the top. By default, it does not
require the assembler. If you do have the assembler, I recommend
removing the -DUSE_GENERIC.
Digital Mars compiler
---------------------
Same as MS Visual C++ but might require
-DAO_OLD_STYLE_INTERLOCKED_COMPARE_EXCHANGE option to compile with the
parallel marker enabled.
Watcom compiler
---------------
Ivan V. Demakov's README for the Watcom port:
The collector has been compiled with Watcom C 10.6 and 11.0.
It runs under win32, win32s, and even under msdos with dos4gw
dos-extender. It should also run under OS/2, though this isn't
tested. Under win32 the collector can be built either as dll
or as static library.
Note that all compilations were done under Windows 95 or NT.
For unknown reason compiling under Windows 3.11 for NT (one
attempt has been made) leads to broken executables.
Incremental collection is not supported.
cord is not ported.
Before compiling you may need to edit WCC_MAKEFILE to set target
platform, library type (dynamic or static), calling conventions, and
optimization options.
To compile the collector and testing programs use the command:
wmake -f WCC_MAKEFILE
All programs using gc should be compiled with 4-byte alignment.
For further explanations on this see comments about Borland.
If the gc is compiled as dll, the macro "GC_DLL" should be defined before
including "gc.h" (for example, with -DGC_DLL compiler option). It's
important, otherwise resulting programs will not run.
Special note for OpenWatcom users: the C (unlike the C++) compiler (of the
latest stable release, not sure for older ones) doesn't force pointer global
variables (i.e. not struct fields, not sure for locals) to be aligned unless
optimizing for speed (e.g., "-ot" option is set); the "-zp" option (or align
pragma) only controls alignment for structs; I don't know whether it's a bug or
a feature (see an old report of same kind -
http://bugzilla.openwatcom.org/show_bug.cgi?id=664), so You are warned.
Incremental Collection
----------------------
There is some support for incremental collection. By default, the
collector chooses between explicit page protection, and GetWriteWatch-based
write tracking automatically, depending on the platform.
The former is slow and interacts poorly with a debugger.
Pages are protected. Protection faults are caught by a handler
installed at the bottom of the handler
stack. Whenever possible, I recommend adding a call to
GC_enable_incremental at the last possible moment, after most
debugging is complete. No system
calls are wrapped by the collector itself. It may be necessary
to wrap ReadFile calls that use a buffer in the heap, so that the
call does not encounter a protection fault while it's running.
(As usual, none of this is an issue unless GC_enable_incremental
is called.)
Note that incremental collection is disabled with -DSMALL_CONFIG.
Threads
-------
This version of the collector by default handles threads similarly
to other platforms. James Clark's code which tracks threads attached
to the collector DLL still exists, but requires that both
- the collector is built in a DLL with GC_DLL defined, and
- GC_use_threads_discovery() is called before GC initialization, which
in turn must happen before creating additional threads.
We generally recommend avoiding this if possible, since it seems to
be less than 100% reliable.
Use gc.mak (a.k.a NT_THREADS_MAKEFILE) instead of NT_MAKEFILE
to build a version that supports both kinds of thread tracking.
To build the garbage collector
test with VC++ from the command line, use
nmake /F ".\gc.mak" CFG="gctest - Win32 Release"
This requires that the subdirectory gctest\Release exist.
The test program and DLL will reside in the Release directory.
This version currently supports incremental collection only if it is
enabled before any additional threads are created.
Since 6.3alpha2, threads are also better supported in static library builds
with Microsoft tools (use NT_STATIC_THREADS_MAKEFILE) and with the GNU
tools. The collector must be built with GC_THREADS defined.
(NT_STATIC_THREADS_MAKEFILE does this implicitly. Under Cygwin,
./configure --enable-threads=posix should be used.)
For the normal, non-dll-based thread tracking to work properly,
threads should be created with GC_CreateThread or GC_beginthreadex,
and exit normally or call GC_endthreadex or GC_ExitThread. (For
Cygwin, use standard pthread calls instead.) As in the pthread
case, including gc.h will redefine CreateThread, _beginthreadex,
_endthreadex, and ExitThread to call the GC_ versions instead.
Note that, as usual, GC_CreateThread tends to introduce resource leaks
that are avoided by GC_beginthreadex. There is currently no equivalent of
_beginthread, and it should not be used.
GC_INIT should be called from the main executable before other GC calls.
We strongly advise against using the TerminateThread() win32 API call,
especially with the garbage collector. Any use is likely to provoke a
crash in the GC, since it makes it impossible for the collector to
correctly track threads.
To build the collector for MinGW pthreads-win32 (or other non-Cygwin pthreads
implementation for Windows), use Makefile.direct and explicitly set
GC_WIN32_PTHREADS (or pass --enable-threads=pthreads to configure).
Use -DPTW32_STATIC_LIB for the static threads library.
Gauche-0.9.6/gc/doc/README.win64 0000664 0000764 0000764 00000002226 13074053206 014754 0 ustar shiro shiro 64-bit Windows on AMD64/Intel EM64T is somewhat supported in the 7.0
and later release. A collector can be built with Microsoft Visual C++ 2005
or with mingw-w64 gcc.
More testing would clearly be helpful.
NT_X64_STATIC_THREADS_MAKEFILE has been used in
this environment. Copy this file to MAKEFILE, and then type "nmake"
in a Visual C++ command line window to build the static library
and the usual test programs. To verify that the collector is
at least somewhat functional, run gctest.exe. This should create
gctest.gc.log after a few seconds.
This process is completely analogous to NT_STATIC_THREADS_MAKEFILE
for the 32-bit version.
A similar procedure using NT_X64_THREADS_MAKEFILE should be usable to
build the dynamic library. Test_cpp.exe did not seem to run correctly this
way. It seems that we're getting the wrong instances of operator new/delete
in some cases. The C tests seemed OK.
Note that currently a few warnings are still generated by default,
and a number of others have been explicitly turned off in the makefile.
VC++ note: to suppress warnings use -D_CRT_SECURE_NO_DEPRECATE.
gcc note: -fno-strict-aliasing should be used if optimizing.
Gauche-0.9.6/gc/doc/README.Mac 0000664 0000764 0000764 00000030427 13227007433 014512 0 ustar shiro shiro The contents of this file are old and pertain to pre-MacOSX versions.
You probably really wanted README.darwin.
---------------------------------------------
Patrick Beard's Notes for building GC v4.12 with CodeWarrior Pro 2:
----------------------------------------------------------------------------
The current build environment for the collector is CodeWarrior Pro 2.
Projects for CodeWarrior Pro 2 (and for quite a few older versions)
are distributed in the file Mac_projects.sit.hqx. The project file
:Mac_projects:gc.prj builds static library versions of the collector.
:Mac_projects:gctest.prj builds the GC test suite.
Configuring the collector is still done by editing the file
:extra:Mac_files:MacOS_config.h.
Lars Farm's suggestions on building the collector:
----------------------------------------------------------------------------
Garbage Collection on MacOS - a manual 'MakeFile'
-------------------------------------------------
Project files and IDE's are great on the Macintosh, but they do have
problems when used as distribution media. This note tries to provide
porting instructions in pure TEXT form to avoid those problems. A manual
'makefile' if you like.
GC version: 4.12a2
Codewarrior: CWPro1
date: 18 July 1997
The notes may or may not apply to earlier or later versions of the
GC/CWPro. Actually, they do apply to earlier versions of both except that
until recently a project could only build one target so each target was a
separate project. The notes will most likely apply to future versions too.
Possibly with minor tweaks.
This is just to record my experiences. These notes do not mean I now
provide a supported port of the GC to MacOS. It works for me. If it works
for you, great. If it doesn't, sorry, try again...;-) Still, if you find
errors, please let me know.
Porting to MacOS is a bit more complex than it first seems. Which MacOS?
68K/PowerPC? Which compiler? Each supports both 68K and PowerPC and offer a
large number of (unique to each environment) compiler settings. Each
combination of compiler/68K/PPC/settings require a unique combination of
standard libraries. And the IDE's does not select them for you. They don't
even check that the library is built with compatible setting and this is
the major source of problems when porting the GC (and otherwise too).
You will have to make choices when you configure the GC. I've made some
choices here, but there are other combinations of settings and #defines
that work too.
As for target settings the major obstacles may be:
- 68K Processor: check "4-byte Ints".
- PPC Processor: uncheck "Store Static Data in TOC".
What you need to do:
1) Build the GC as a library
2) Test that the library works with 'test.c'.
3) Test that the C++ interface 'gc_cpp.cc/h' works with 'test_cpp.cc'.
== 1. The Libraries ==
I made one project with four targets (68K/PPC tempmem or appheap). One target
will suffice if you're able to decide which one you want. I wasn't...
Codewarrior allows a large number of compiler/linker settings. I used these:
Settings shared by all targets:
------------------------------
o Access Paths:
- User Paths: the GC folder
- System Paths: {Compiler}:Metrowerks Standard Library:
{Compiler}:MacOS Support:Headers:
{Compiler}:MacOS Support:MacHeaders:
o C/C++ language:
- inlining: normal
- direct to SOM: off
- enable/check: exceptions, RTTI, bool (and if you like pool strings)
PowerPC target settings
-----------------------
o Target Settings:
- name of target
- MacOS PPC Linker
o PPC Target
- name of library
o C/C++ language
- prefix file as described below
o PPC Processor
- Struct Alignment: PowerPC
- uncheck "Store Static Data in TOC" -- important!
I don't think the others matter, I use full optimization and it is OK
o PPC Linker
- Factory Settings (SYM file with full paths, faster linking, dead-strip
static init, Main: __start)
68K target settings
-------------------
o Target Settings:
- name of target
- MacOS 68K Linker
o 68K Target
- name of library
- A5 relative data
o C/C++ language
- prefix file as described below
o 68K Processor
- Code model: smart
- Struct alignment: 68K
- FP: SANE
- enable 4-Byte Ints -- important!
I don't think the others matter. I selected...
- enable: 68020
- enable: global register allocation
o IR Optimizer
- enable: Optimize Space, Optimize Speed
I suppose the others would work too, but haven't tried...
o 68K Linker
- Factory Settings (New Style MacsBug, SYM file with full paths,
A6 Frames, fast link, Merge compiler glue into segment 1,
dead-strip static init)
Prefix Files to configure the GC sources
----------------------------------------
The Codewarrior equivalent of command-line compilers -DNAME=X is to use
prefix-files. A TEXT file that is automatically #included before the first byte
of every source file. I used these:
---- ( cut here ) ---- gc_prefix_tempmem.h -- 68K and PPC -----
#include "gc_prefix_common.h"
#undef USE_TEMPORARY_MEMORY
#define USE_TEMPORARY_MEMORY
---- ( cut here ) ---- gc_prefix_appmem.h -- 68K and PPC -----
#include "gc_prefix_common.h"
#undef USE_TEMPORARY_MEMORY
// #define USE_TEMPORARY_MEMORY
---- ( cut here ) ---- gc_prefix_common.h --------------------
// gc_prefix_common.h
// ------------------
// Codewarrior prefix file to configure the GC libraries
//
// prefix files are the Codewarrior equivalent of the
// command line option -Dname=x frequently seen in makefiles
#if !__MWERKS__
#error only tried this with Codewarrior
#endif
#if macintosh
#define MSL_USE_PRECOMPILED_HEADERS 0
#include
// See list of #defines to configure the library in: 'MakeFile'
// see also README
#define ALL_INTERIOR_POINTERS // follows interior pointers.
//#define DONT_ADD_BYTE_AT_END // disables the padding if defined.
//#define SMALL_CONFIG // whether to use a smaller heap.
#define GC_ATOMIC_UNCOLLECTABLE // GC_malloc_atomic_uncollectable()
// define either or none as per personal preference
// used in malloc.c
#define REDIRECT_MALLOC GC_malloc
//#define REDIRECT_MALLOC GC_malloc_uncollectable
// if REDIRECT_MALLOC is #defined make sure that the GC library
// is listed before the ANSI/ISO libs in the Codewarrior
// 'Link order' panel
//#define IGNORE_FREE
// mac specific configs
//#define USE_TEMPORARY_MEMORY // use Macintosh temporary memory.
//#define SHARED_LIBRARY_BUILD // build for use in a shared library.
#else
// could build Win32 here too, or in the future
// Rhapsody PPC-mach, Rhapsody PPC-MacOS,
// Rhapsody Intel-mach, Rhapsody Intel-Win32,...
// ... ugh this will get messy ...
#endif
// make sure ints are at least 32-bit
// ( could be set to 16-bit by compiler settings (68K) )
struct gc_private_assert_intsize_{ char x[ sizeof(int)>=4 ? 1 : 0 ]; };
#if __powerc
#if __option(toc_data)
#error turn off "store static data in TOC" when using GC
// ... or find a way to add TOC to the root set...(?)
#endif
#endif
---- ( cut here ) ---- end of gc_prefix_common.h -----------------
Files to build the GC libraries:
--------------------------------
allchblk.c
alloc.c
blacklst.c
checksums.c
dbg_mlc.c
finalize.c
headers.c
mach_dep.c
MacOS.c -- contains MacOS code
malloc.c
mallocx.c
mark.c
mark_rts.c
misc.c
new_hblk.c
obj_map.c
os_dep.c -- contains MacOS code
ptr_chck.c
reclaim.c
stubborn.c
typd_mlc.c
gc++.cc -- this is 'gc_cpp.cc' with less 'inline' and
-- throw std::bad_alloc when out of memory
-- gc_cpp.cc works just fine too
== 2. Test that the library works with 'test.c' ==
The test app is just an ordinary ANSI-C console app. Make sure settings
match the library you're testing.
Files
-----
test.c
the GC library to test -- link order before ANSI libs
suitable Mac+ANSI libraries
prefix:
------
---- ( cut here ) ---- gc_prefix_testlib.h -- all libs -----
#define MSL_USE_PRECOMPILED_HEADERS 0
#include
#undef NDEBUG
#define ALL_INTERIOR_POINTERS /* for GC_priv.h */
---- ( cut here ) ----
== 3. Test that the C++ interface 'gc_cpp.cc/h' works with 'test_cpp.cc' ==
The test app is just an ordinary ANSI-C console app. Make sure settings match
the library you're testing.
Files
-----
test_cpp.cc
the GC library to test -- link order before ANSI libs
suitable Mac+ANSI libraries
prefix:
------
same as for test.c
For convenience I used one test-project with several targets so that all
test apps are build at once. Two for each library to test: test.c and
gc_app.cc. When I was satisfied that the libraries were OK. I put the
libraries + gc.h + the c++ interface-file in a folder that I then put into
the MSL hierarchy so that I don't have to alter access-paths in projects
that use the GC.
After that, just add the proper GC library to your project and the GC is in
action! malloc will call GC_malloc and free GC_free, new/delete too. You
don't have to call free or delete. You may have to be a bit cautious about
delete if you're freeing other resources than RAM. See gc_cpp.h. You can
also keep coding as always with delete/free. That works too. If you want,
include "gc.h" and tweak its use a bit.
== Symantec SPM ==
It has been a while since I tried the GC in SPM, but I think that the above
instructions should be sufficient to guide you through in SPM too. SPM
needs to know where the global data is. Use the files 'datastart.c' and
'dataend.c'. Put 'datastart.c' at the top of your project and 'dataend.c'
at the bottom of your project so that all data is surrounded. This is not
needed in Codewarrior because it provides intrinsic variables
__datastart__, __data_end__ that wraps all globals.
== Source Changes (GC 4.12a2) ==
Very few. Just one tiny in the GC, not strictly needed.
- test_cpp.cc
made the first lines of main() look like this:
------------
int main( int argc, char* argv[] ) {
#endif
#if macintosh // MacOS
char* argv_[] = {"test_cpp","10"}; // doesn't
argv=argv_; // have a
argc = sizeof(argv_)/sizeof(argv_[0]); // commandline
#endif //
int i, iters, n;
# ifndef __GNUC__
alloc dummy_to_fool_the_compiler_into_doing_things_it_currently_cant_handle;
------------
- config.h [now gcconfig.h]
__MWERKS__ does not have to mean MACOS. You can use Codewarrior to
build a Win32 or BeOS library and soon a Rhapsody library. You may
have to change that #if...
It worked for me, hope it works for you.
Lars Farm
----------------------------------------------------------------------------
Patrick Beard's instructions (may be dated):
v4.3 of the collector now runs under Symantec C++/THINK C v7.0.4, and
Metrowerks C/C++ v4.5 both 68K and PowerPC. Project files are provided
to build and test the collector under both development systems.
Configuration
-------------
To configure the collector, under both development systems, a prefix file
is used to set preprocessor directives. This file is called "MacOS_config.h".
Testing
-------
To test the collector (always a good idea), build one of the gctest projects,
gctest. (Symantec C++/THINK C), mw/gctest.68K, or mw/gctest.PPC. The
test will ask you how many times to run; 1 should be sufficient.
Building
--------
For your convenience project files for the major Macintosh development
systems are provided.
For Symantec C++/THINK C, you must build the two projects gclib-1 and
gclib-2. It has to be split up because the collector has more than 32k
of static data and no library can have more than this in the Symantec
environment. (Future versions will probably fix this.)
For Metrowerks C/C++ 4.5 you build gc.68K/PPC and the result will
be a library called gc.68K.lib/gc.PPC.lib.
Using
-----
Under Symantec C++/THINK C, you can just add the gclib-1 and gclib-2
projects to your own project. Under Metrowerks, you add gc.68K.lib or
gc.PPC.lib and two additional files. You add the files called datastart.c
and dataend.c to your project, bracketing all files that use the collector.
See mw/gctest for an example.
Include the projects/libraries you built above into your own project,
#include "gc.h", and call GC_malloc. You don't have to call GC_free.
Patrick C. Beard
Gauche-0.9.6/gc/doc/README.OS2 0000664 0000764 0000764 00000000555 13074053206 014413 0 ustar shiro shiro The code assumes static linking, and a single thread. The editor de has
not been ported. The cord test program has. The supplied OS2_MAKEFILE
assumes the IBM C Set/2 environment, but the code shouldn't.
Since we haven't figured out hoe to do perform partial links or to build static
libraries, clients currently need to link against a long list of executables.
Gauche-0.9.6/gc/doc/README.sgi 0000664 0000764 0000764 00000003457 13227007433 014577 0 ustar shiro shiro Performance of the incremental collector can be greatly enhanced with
-DNO_EXECUTE_PERMISSION.
The collector should run with all of the -32, -n32 and -64 ABIs. Remember to
define the AS macro in the Makefile to be "as -64", or "as -n32".
If you use -DREDIRECT_MALLOC=GC_malloc with C++ code, your code should make
at least one explicit call to malloc instead of new to ensure that the proper
version of malloc is linked in.
Sproc threads are not supported in this version, though there may exist other
ports.
Pthreads support is provided. This requires that:
1) You compile the collector with -DGC_IRIX_THREADS specified in the Makefile.
2) You have the latest pthreads patches installed.
(Though the collector makes only documented pthread calls,
it relies on signal/threads interactions working just right in ways
that are not required by the standard. It is unlikely that this code
will run on other pthreads platforms. But please tell me if it does.)
3) Every file that makes thread calls should define IRIX_THREADS and then
include gc.h. Gc.h redefines some of the pthread primitives as macros which
also provide the collector with information it requires.
4) pthread_cond_wait and pthread_cond_timedwait should be prepared for
premature wakeups. (I believe the pthreads and related standards require this
anyway. Irix pthreads often terminate a wait if a signal arrives.
The garbage collector uses signals to stop threads.)
5) It is expensive to stop a thread waiting in IO at the time the request is
initiated. Applications with many such threads may not exhibit acceptable
performance with the collector. (Increasing the heap size may help.)
6) The collector should not be compiled with -DREDIRECT_MALLOC. This
confuses some library calls made by the pthreads implementation, which
expect the standard malloc.
Gauche-0.9.6/gc/doc/README.uts 0000664 0000764 0000764 00000000126 13074053206 014615 0 ustar shiro shiro Alistair Crooks supplied the port. He used Lexa C version 2.1.3 with
-Xa to compile.
Gauche-0.9.6/gc/doc/scale.html 0000664 0000764 0000764 00000023425 13227007433 015110 0 ustar shiro shiro
Garbage collector scalability
Garbage collector scalability
In its default configuration, the Boehm-Demers-Weiser garbage collector
is not thread-safe. It can be made thread-safe for a number of environments
by building the collector with the appropriate
-DXXX-THREADS compilation
flag. This has primarily two effects:
- It causes the garbage collector to stop all other threads when
it needs to see a consistent memory state.
- It causes the collector to acquire a lock around essentially all
allocation and garbage collection activity.
Since a single lock is used for all allocation-related activity, only one
thread can be allocating or collecting at one point. This inherently
limits performance of multi-threaded applications on multiprocessors.
On most platforms, the allocator/collector lock is implemented as a
spin lock with exponential back-off. Longer wait times are implemented
by yielding and/or sleeping. If a collection is in progress, the pure
spinning stage is skipped. This has the advantage that uncontested and
thus most uniprocessor lock acquisitions are very cheap. It has the
disadvantage that the application may sleep for small periods of time
even when there is work to be done. And threads may be unnecessarily
woken up for short periods. Nonetheless, this scheme empirically
outperforms native queue-based mutual exclusion implementations in most
cases, sometimes drastically so.
Options for enhanced scalability
Version 6.0 of the collector adds two facilities to enhance collector
scalability on multiprocessors. As of 6.0alpha1, these are supported
only under Linux on X86 and IA64 processors, though ports to other
otherwise supported Pthreads platforms should be straightforward.
They are intended to be used together.
-
Building the collector with -DPARALLEL_MARK allows the collector to
run the mark phase in parallel in multiple threads, and thus on multiple
processors. The mark phase typically consumes the large majority of the
collection time. Thus this largely parallelizes the garbage collector
itself, though not the allocation process. Currently the marking is
performed by the thread that triggered the collection, together with
N-1 dedicated
threads, where N is the number of processors detected by the collector.
The dedicated threads are created once at initialization time.
A second effect of this flag is to switch to a more concurrent
implementation of GC_malloc_many, so that free lists can be
built, and memory can be cleared, by more than one thread concurrently.
-
Building the collector with -DTHREAD_LOCAL_ALLOC adds support for thread
local allocation. This causes GC_malloc, GC_malloc_atomic, and
GC_gcj_malloc to be redefined to perform thread-local allocation.
Memory returned from thread-local allocators is completely interchangeable
with that returned by the standard allocators. It may be used by other
threads. The only difference is that, if the thread allocates enough
memory of a certain kind, it will build a thread-local free list for
objects of that kind, and allocate from that. This greatly reduces
locking. The thread-local free lists are refilled using
GC_malloc_many.
An important side effect of this flag is to replace the default
spin-then-sleep lock to be replaced by a spin-then-queue based implementation.
This reduces performance for the standard allocation functions,
though it usually improves performance when thread-local allocation is
used heavily, and thus the number of short-duration lock acquisitions
is greatly reduced.
The Parallel Marking Algorithm
We use an algorithm similar to
that developed by
Endo, Taura, and Yonezawa at the University of Tokyo.
However, the data structures and implementation are different,
and represent a smaller change to the original collector source,
probably at the expense of extreme scalability. Some of
the refinements they suggest, e.g. splitting large
objects, were also incorporated into out approach.
The global mark stack is transformed into a global work queue.
Unlike the usual case, it never shrinks during a mark phase.
The mark threads remove objects from the queue by copying them to a
local mark stack and changing the global descriptor to zero, indicating
that there is no more work to be done for this entry.
This removal
is done with no synchronization. Thus it is possible for more than
one worker to remove the same entry, resulting in some work duplication.
The global work queue grows only if a marker thread decides to
return some of its local mark stack to the global one. This
is done if the global queue appears to be running low, or if
the local stack is in danger of overflowing. It does require
synchronization, but should be relatively rare.
The sequential marking code is reused to process local mark stacks.
Hence the amount of additional code required for parallel marking
is minimal.
It should be possible to use generational collection in the presence of the
parallel collector, by calling GC_enable_incremental().
This does not result in fully incremental collection, since parallel mark
phases cannot currently be interrupted, and doing so may be too
expensive.
Gcj-style mark descriptors do not currently mix with the combination
of local allocation and incremental collection. They should work correctly
with one or the other, but not both.
The number of marker threads is set on startup to the number of
available processors (or to the value of the GC_NPROCS
environment variable). If only a single processor is detected,
parallel marking is disabled.
Note that setting GC_NPROCS to 1 also causes some lock acquisitions inside
the collector to immediately yield the processor instead of busy waiting
first. In the case of a multiprocessor and a client with multiple
simultaneously runnable threads, this may have disastrous performance
consequences (e.g. a factor of 10 slowdown).
Performance
We conducted some simple experiments with a version of
our GC benchmark
that was slightly modified to
run multiple concurrent client threads in the same address space.
Each client thread does the same work as the original benchmark, but they share
a heap.
This benchmark involves very little work outside of memory allocation.
This was run with GC 6.0alpha3 on a dual processor Pentium III/500 machine
under Linux 2.2.12.
Running with a thread-unsafe collector, the benchmark ran in 9
seconds. With the simple thread-safe collector,
built with -DLINUX_THREADS, the execution time
increased to 10.3 seconds, or 23.5 elapsed seconds with two clients.
(The times for the malloc/free version
with glibc malloc
are 10.51 (standard library, pthreads not linked),
20.90 (one thread, pthreads linked),
and 24.55 seconds respectively. The benchmark favors a
garbage collector, since most objects are small.)
The following table gives execution times for the collector built
with parallel marking and thread-local allocation support
(-DGC_LINUX_THREADS -DPARALLEL_MARK -DTHREAD_LOCAL_ALLOC). We tested
the client using either one or two marker threads, and running
one or two client threads. Note that the client uses thread local
allocation exclusively. With -DTHREAD_LOCAL_ALLOC the collector
switches to a locking strategy that is better tuned to less frequent
lock acquisition. The standard allocation primitives thus perform
slightly worse than without -DTHREAD_LOCAL_ALLOC, and should be
avoided in time-critical code.
(The results using pthread_mutex_lock
directly for allocation locking would have been worse still, at
least for older versions of linuxthreads.
With THREAD_LOCAL_ALLOC, we first repeatedly try to acquire the
lock with pthread_mutex_try_lock(), busy-waiting between attempts.
After a fixed number of attempts, we use pthread_mutex_lock().)
These measurements do not use incremental collection, nor was prefetching
enabled in the marker. We used the C version of the benchmark.
All measurements are in elapsed seconds on an unloaded machine.
Number of threads | 1 marker thread (secs.) |
2 marker threads (secs.) |
1 client | 10.45 | 7.85 |
2 clients | 19.95 | 12.3 |
The execution time for the single threaded case is slightly worse than with
simple locking. However, even the single-threaded benchmark runs faster than
even the thread-unsafe version if a second processor is available.
The execution time for two clients with thread local allocation time is
only 1.4 times the sequential execution time for a single thread in a
thread-unsafe environment, even though it involves twice the client work.
That represents close to a
factor of 2 improvement over the 2 client case with the old collector.
The old collector clearly
still suffered from some contention overhead, in spite of the fact that the
locking scheme had been fairly well tuned.
Full linear speedup (i.e. the same execution time for 1 client on one
processor as 2 clients on 2 processors)
is probably not achievable on this kind of
hardware even with such a small number of processors,
since the memory system is
a major constraint for the garbage collector,
the processors usually share a single memory bus, and thus
the aggregate memory bandwidth does not increase in
proportion to the number of processors.
These results are likely to be very sensitive to both hardware and OS
issues. Preliminary experiments with an older Pentium Pro machine running
an older kernel were far less encouraging.
Gauche-0.9.6/gc/doc/README.solaris2 0000664 0000764 0000764 00000007602 13227007433 015547 0 ustar shiro shiro The collector supports both incremental collection and threads under
Solaris 2. The incremental collector normally retrieves page dirty information
through the appropriate /proc calls. But it can also be configured
(by defining MPROTECT_VDB instead of PROC_VDB in gcconfig.h) to use mprotect
and signals. This may result in shorter pause times, but it is no longer
safe to issue arbitrary system calls that write to the heap.
Under other UNIX versions,
the collector normally obtains memory through sbrk. There is some reason
to expect that this is not safe if the client program also calls the system
malloc, or especially realloc. The sbrk man page strongly suggests this is
not safe: "Many library routines use malloc() internally, so use brk()
and sbrk() only when you know that malloc() definitely will not be used by
any library routine." This doesn't make a lot of sense to me, since there
seems to be no documentation as to which routines can transitively call malloc.
Nonetheless, under Solaris2, the collector now allocates
memory using mmap by default. (It defines USE_MMAP in gcconfig.h.)
You may want to reverse this decisions if you use -DREDIRECT_MALLOC=...
Note:
Before you run "make check", you need to set your LD_LIBRARY_PATH correctly
(e.g., to "/usr/local/lib") so that tests can find the shared library
libgcc_s.so.1. Alternatively, you can configure with --disable-shared.
SOLARIS THREADS:
Threads support is enabled by configure "--enable-threads=posix" option.
(In case of GCC compiler, multi-threading support is on by default.)
This causes the collector to be compiled with -D GC_THREADS (or
-D GC_SOLARIS_THREADS) ensuring thread safety.
This assumes use of the pthread_ interface. Old style Solaris threads
are no longer supported.
Thread-local allocation is now on by default. Parallel marking is on by
default starting from GC v7.3 but it could be enabled or disabled manually
by the corresponding "--enable/disable-parallel-mark" options.
It is also essential that gc.h be included in files that call pthread_create,
pthread_join, pthread_detach, or dlopen. gc.h macro defines these to also do
GC bookkeeping, etc. gc.h must be included with one or both of these macros
defined, otherwise these replacements are not visible. A collector built in
this way way only be used by programs that are linked with the threads library.
Since 5.0 alpha5, dlopen disables collection temporarily,
unless USE_PROC_FOR_LIBRARIES is defined. In some unlikely cases, this
can result in unpleasant heap growth. But it seems better than the
race/deadlock issues we had before.
If threads are used on an X86 processor with malloc redirected to
GC_malloc, it is necessary to call GC_INIT explicitly before forking the
first thread. (This avoids a deadlock arising from calling GC_thr_init
with the allocation lock held.)
It appears that there is a problem in using gc_cpp.h in conjunction with
Solaris threads and Sun's C++ runtime. Apparently the overloaded new operator
is invoked by some iostream initialization code before threads are correctly
initialized. As a result, call to thr_self() in garbage collector
initialization SEGV faults. Currently the only known workaround is to not
invoke the garbage collector from a user defined global operator new, or to
have it invoke the garbage-collector's allocators only after main has started.
(Note that the latter requires a moderately expensive test in operator
delete.)
I encountered "symbol : offset .... is non-aligned" errors. These
appear to be traceable to the use of the GNU assembler with the Sun linker.
The former appears to generate a relocation not understood by the latter.
The fix appears to be to use a consistent tool chain. (As a non-Solaris-expert
my solution involved hacking the libtool script, but I'm sure you can
do something less ugly.)
Hans-J. Boehm
(The above contains my personal opinions, which are probably not shared
by anyone else.)
Gauche-0.9.6/gc/doc/simple_example.html 0000664 0000764 0000764 00000017101 13227007433 017017 0 ustar shiro shiro
Using the Garbage Collector: A simple example
Using the Garbage Collector: A simple example
The following consists of step-by-step instructions for building and
using the collector. We'll assume a Linux/gcc platform and
a single-threaded application. The green
text contains information about other platforms or scenarios.
It can be skipped, especially on first reading.
Building the collector
If you have not so yet, unpack the collector and enter
the newly created directory with
tar xvfz gc<version>.tar.gz
cd gc<version>
You can configure, build, and install the collector in a private
directory, say /home/xyz/gc, with the following commands:
./configure --prefix=/home/xyz/gc --disable-threads
make
make check
make install
Here the "make check" command is optional, but highly recommended.
It runs a basic correctness test which usually takes well under a minute.
Other platforms
On non-Unix, non-Linux platforms, the collector is usually built by copying
the appropriate makefile (see the platform-specific README in doc/README.xxx
in the distribution) to the file "Makefile", and then typing "make"
(or "nmake" or ...). This builds the library in the source tree. You may
want to move it and the files in the include directory to a more convenient
place.
If you use a makefile that does not require running a configure script,
you should first look at the makefile, and adjust any options that are
documented there.
If your platform provides a "make" utility, that is generally preferred
to platform- and compiler- dependent "project" files. (At least that is the
strong preference of the would-be maintainer of those project files.)
Threads
If you need thread support, configure the collector with
--enable-threads=posix --enable-parallel-mark
instead of
--disable-threads
If your target is a real old-fashioned uniprocessor (no "hyperthreading",
etc.) you will want to omit --enable-parallel-mark.
C++
You will need to include the C++ support, which unfortunately tends to
be among the least portable parts of the collector, since it seems
to rely on some corner cases of the language. On Linux, it
suffices to add --enable-cplusplus to the configure options.
Writing the program
You will need a
#include "gc.h"
at the beginning of every file that allocates memory through the
garbage collector. Call GC_MALLOC wherever you would
have call malloc. This initializes memory to zero like
calloc; there is no need to explicitly clear the
result.
If you know that an object will not contain pointers to the
garbage-collected heap, and you don't need it to be initialized,
call GC_MALLOC_ATOMIC instead.
A function GC_FREE is provided but need not be called.
For very small objects, your program will probably perform better if
you do not call it, and let the collector do its job.
A GC_REALLOC function behaves like the C library realloc.
It allocates uninitialized pointer-free memory if the original
object was allocated that way.
The following program loop.c is a trivial example:
#include "gc.h"
#include <assert.h>
#include <stdio.h>
int main()
{
int i;
GC_INIT();
for (i = 0; i < 10000000; ++i)
{
int **p = (int **) GC_MALLOC(sizeof(int *));
int *q = (int *) GC_MALLOC_ATOMIC(sizeof(int));
assert(*p == 0);
*p = (int *) GC_REALLOC(q, 2 * sizeof(int));
if (i % 100000 == 0)
printf("Heap size = %d\n", GC_get_heap_size());
}
return 0;
}
Interaction with the system malloc
It is usually best not to mix garbage-collected allocation with the system
malloc-free. If you do, you need to be careful not to store
pointers to the garbage-collected heap in memory allocated with the system
malloc.
Other Platforms
On some other platforms it is necessary to call GC_INIT() from the main program,
which is presumed to be part of the main executable, not a dynamic library.
This can never hurt, and is thus generally good practice.
Threads
For a multi-threaded program, some more rules apply:
-
Files that either allocate through the GC or make thread-related calls
should first define the macro GC_THREADS, and then
include "gc.h". On some platforms this will redefine some
threads primitives, e.g. to let the collector keep track of thread creation.
C++
In the case of C++, you need to be especially careful not to store pointers
to the garbage-collected heap in areas that are not traced by the collector.
The collector includes some alternate interfaces
to make that easier.
Debugging
Additional debug checks can be performed by defining GC_DEBUG before
including gc.h. Additional options are available if the collector
is also built with --enable-gc-debug (--enable-full-debug in
some older versions) and all allocations are
performed with GC_DEBUG defined.
What if I can't rewrite/recompile my program?
You may be able to build the collector with --enable-redirect-malloc
and set the LD_PRELOAD environment variable to point to the resulting
library, thus replacing the standard malloc with its garbage-collected
counterpart. This is rather platform dependent. See the
leak detection documentation for some more details.
Compiling and linking
The above application loop.c test program can be compiled and linked
with
cc -I/home/xyz/gc/include loop.c /home/xyz/gc/lib/libgc.a -o loop
The -I option directs the compiler to the right include
directory. In this case, we list the static library
directly on the compile line; the dynamic library could have been
used instead, provided we arranged for the dynamic loader to find
it, e.g. by setting LD_LIBRARY_PATH.
Threads
On pthread platforms, you will of course also have to link with
-lpthread,
and compile with any thread-safety options required by your compiler.
On some platforms, you may also need to link with -ldl
or -lrt.
Looking at tools/threadlibs.c should give you the appropriate
list if a plain -lpthread doesn't work.
Running the executable
The executable can of course be run normally, e.g. by typing
./loop
The operation of the collector is affected by a number of environment variables.
For example, setting GC_PRINT_STATS produces some
GC statistics on stdout.
See README.environment in the distribution for details.
Gauche-0.9.6/gc/doc/finalization.html 0000664 0000764 0000764 00000022526 13074101475 016512 0 ustar shiro shiro
Finalization in the Boehm-Demers-Weiser collector
Finalization
Many garbage collectors provide a facility for executing user code
just before an object is collected. This can be used to reclaim any
system resources or non-garbage-collected memory associated with the
object.
Experience has shown that this can be a useful facility.
It is indispensable in cases in which system resources are embedded
in complex data structures (e.g. file descriptors
in the cord package).
Our collector provides the necessary functionality through
GC_register_finalizer in
gc.h, or by
inheriting from gc_cleanup
in gc_cpp.h.
However, finalization should not be used in the same way as C++
destructors. In well-written programs there will typically be
very few uses of finalization. (Garbage collected programs that
interact with explicitly memory-managed libraries may be an exception.)
In general the following guidelines should be followed:
Topologically Ordered Finalization
Our conservative garbage collector supports
a form of finalization
(with GC_register_finalizer)
in which objects are finalized in topological
order. If A points to B, and both are registered for
finalization, it is guaranteed the A will be finalized first.
This usually guarantees that finalization procedures see only
unfinalized objects.
This decision is often questioned, particularly since it has an obvious
disadvantage. The current implementation finalizes long chains of
finalizable objects one per collection. This is hard to avoid, since
the first finalizer invoked may store a pointer to the rest of the chain
in a global variable, making it accessible again. Or it may mutate the
rest of the chain.
Cycles involving one or more finalizable objects are never finalized.
Why topological ordering?
It is important to keep in mind that the choice of finalization ordering
matters only in relatively rare cases. In spite of the fact that it has
received a lot of discussion, it is not one of the more important
decisions in designing a system. Many, especially smaller, applications
will never notice the difference. Nonetheless, we believe that topologically
ordered finalization is the right choice.
To understand the justification, observe that if As
finalization procedure does not refer to B, we could fairly easily have
avoided the dependency. We could have split A into A'
and A'' such that any references to A become references to
A', A' points to A'' but not vice-versa, only fields
needed for finalization are stored in A'', and A'' is enabled
for finalization. (GC_register_disappearing_link provides an
alternative mechanism that does not require breaking up objects.)
Thus assume that A actually does need access to B during
finalization. To make things concrete, assume that B is
finalizable because it holds a pointer to a C object, which must be
explicitly deallocated. (This is likely to be one of the most common
uses of finalization.) If B happens to be finalized first,
A will see a dangling pointer during its finalization. But a
principal goal of garbage collection was to avoid dangling pointers.
Note that the client program could enforce topological ordering
even if the system didn't. A pointer to B could be stored in
some globally visible place, where it is cleared only by As
finalizer. But this puts the burden to ensure safety back on the
programmer.
With topologically ordered finalization, the programmer
can fail to split an object, thus leaving an accidental cycle. This
results in a leak, which is arguably less dangerous than a dangling
pointer. More importantly, it is much easier to diagnose,
since the garbage collector would have to go out of its way not to
notice finalization cycles. It can trivially report them.
Furthermore unordered finalization does not really solve the problem
of cycles. Consider the above case in which As
finalization procedure depends on B, and thus a pointer to B
is stored in a global data structure, to be cleared by As finalizer.
If there is an accidental pointer from B back to A, and
thus a cycle, neither B nor A will become unreachable.
The leak is there, just as in the topologically ordered case, but it is
hidden from easy diagnosis.
A number of alternative finalization orderings have been proposed, e.g.
based on statically assigned priorities. In our opinion, these are much
more likely to require complex programming discipline to use in a large
modular system. (Some of them, e.g. Guardians proposed by Dybvig,
Bruggeman, and Eby, do avoid some problems which arise in combination
with certain other collection algorithms.)
Fundamentally, a garbage collector assumes that objects reachable
via pointer chains may be accessed, and thus should be preserved.
Topologically ordered finalization simply extends this to object finalization;
an finalizable object reachable from another finalizer via a pointer chain
is presumed to be accessible by the finalizer, and thus should not be
finalized.
Programming with topological finalization
Experience with Cedar has shown that cycles or long chains of finalizable
objects are typically not a problem.
Finalizable objects are typically rare.
There are several ways to reduce spurious dependencies between finalizable
objects. Splitting objects as discussed above is one technique.
The collector also provides GC_register_disappearing_link, which
explicitly nils a pointer before determining finalization ordering.
Some so-called "operating systems" fail to clean up some resources associated
with a process. These resources must be deallocated at all cost before
process exit whether or not they are still referenced. Probably the best
way to deal with those is by not relying exclusively on finalization.
They should be registered in a table of weak pointers (implemented as
disguised pointers cleared by the finalization procedure that deallocates
the resource). If any references are still left at process exit, they
can be explicitly deallocated then.
Getting around topological finalization ordering
There are certain situations in which cycles between finalizable objects are
genuinely unavoidable. Most notably, C++ compilers introduce self-cycles
to represent inheritance. GC_register_finalizer_ignore_self tells the
finalization part of the collector to ignore self cycles.
This is used by the C++ interface.
Finalize.c actually contains an intentionally undocumented mechanism
for registering a finalizable object with user-defined dependencies.
The problem is that this dependency information is also used for memory
reclamation, not just finalization ordering. Thus misuse can result in
dangling pointers even if finalization doesn't create any.
The risk of dangling pointers can be eliminated by building the collector
with -DJAVA_FINALIZATION. This forces objects reachable from finalizers
to be marked, even though this dependency is not considered for finalization
ordering.
Gauche-0.9.6/gc/doc/README.autoconf 0000664 0000764 0000764 00000004534 13074101475 015631 0 ustar shiro shiro Starting from GC v6.0, we support GNU-style builds based on automake,
autoconf and libtool. This is based almost entirely on Tom Tromey's work
with gcj.
To build and install libraries use
configure; make; make install
The advantages of this process are:
1) It should eventually do a better job of automatically determining the
right compiler to use, etc. It probably already does in some cases.
2) It tries to automatically set a good set of default GC parameters for
the platform (e.g. thread support). It provides an easier way to configure
some of the others.
3) It integrates better with other projects using a GNU-style build process.
4) It builds both dynamic and static libraries.
The known disadvantages are:
1) The build scripts are much more complex and harder to debug (though largely
standard). I don't understand them all, and there's probably lots of redundant
stuff.
2) It probably doesn't work on all Un*x-like platforms yet. It probably will
never work on the rest.
3) The scripts are not yet complete. Some of the standard GNU targets don't
yet work. (Corrections/additions are very welcome.)
The distribution should contain all files needed to run "configure" and "make",
as well as the sources needed to regenerate the derived files. (If I missed
some, please let me know.)
Note that the distribution comes without "Makefile" which is generated by
"configure". The distribution also contains "Makefile.direct" which is not
always equivalent to the generated one.
Important options to configure:
--prefix=PREFIX install architecture-independent files in PREFIX
[/usr/local]
--exec-prefix=EPREFIX install architecture-dependent files in EPREFIX
[same as prefix]
--enable-threads=TYPE choose threading package
--enable-parallel-mark parallelize marking and free list construction
--enable-gc-debug (--enable-full-debug before about 7.0)
include full support for pointer back-tracing etc.
Unless --prefix is set (or --exec-prefix or one of the more obscure options),
make install will install libgc.a and libgc.so in /usr/local/bin, which
would typically require the "make install" to be run as root.
Most commonly --enable-threads=posix or will be needed. --enable-parallel-mark
is recommended for multiprocessors if it is supported on the platform.
Gauche-0.9.6/gc/src/ 0000775 0000764 0000764 00000000000 13074065614 013154 5 ustar shiro shiro Gauche-0.9.6/gc/src/.deps/ 0000775 0000764 0000764 00000000000 12730645630 014165 5 ustar shiro shiro Gauche-0.9.6/gc/src/.deps/sparc_mach_dep.Plo 0000664 0000764 0000764 00000000010 12730645630 017560 0 ustar shiro shiro # dummy
Gauche-0.9.6/gc/backgraph.c 0000664 0000764 0000764 00000043013 13302340445 014444 0 ustar shiro shiro /*
* Copyright (c) 2001 by Hewlett-Packard Company. All rights reserved.
*
* THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY EXPRESSED
* OR IMPLIED. ANY USE IS AT YOUR OWN RISK.
*
* Permission is hereby granted to use or copy this program
* for any purpose, provided the above notices are retained on all copies.
* Permission to modify the code and to distribute modified code is granted,
* provided the above notices are retained, and a notice that the code was
* modified is included with the above copyright notice.
*
*/
#include "private/dbg_mlc.h"
/*
* This implements a full, though not well-tuned, representation of the
* backwards points-to graph. This is used to test for non-GC-robust
* data structures; the code is not used during normal garbage collection.
*
* One restriction is that we drop all back-edges from nodes with very
* high in-degree, and simply add them add them to a list of such
* nodes. They are then treated as permanent roots. Id this by itself
* doesn't introduce a space leak, then such nodes can't contribute to
* a growing space leak.
*/
#ifdef MAKE_BACK_GRAPH
#define MAX_IN 10 /* Maximum in-degree we handle directly */
/* #include */
#if !defined(DBG_HDRS_ALL) || (ALIGNMENT != CPP_WORDSZ/8) /* || !defined(UNIX_LIKE) */
# error The configuration does not support MAKE_BACK_GRAPH
#endif
/* We store single back pointers directly in the object's oh_bg_ptr field. */
/* If there is more than one ptr to an object, we store q | FLAG_MANY, */
/* where q is a pointer to a back_edges object. */
/* Every once in a while we use a back_edges object even for a single */
/* pointer, since we need the other fields in the back_edges structure to */
/* be present in some fraction of the objects. Otherwise we get serious */
/* performance issues. */
#define FLAG_MANY 2
typedef struct back_edges_struct {
word n_edges; /* Number of edges, including those in continuation */
/* structures. */
unsigned short flags;
# define RETAIN 1 /* Directly points to a reachable object; */
/* retain for next GC. */
unsigned short height_gc_no;
/* If height > 0, then the GC_gc_no value when it */
/* was computed. If it was computed this cycle, then */
/* it is current. If it was computed during the */
/* last cycle, then it represents the old height, */
/* which is only saved for live objects referenced by */
/* dead ones. This may grow due to refs from newly */
/* dead objects. */
signed_word height;
/* Longest path through unreachable nodes to this node */
/* that we found using depth first search. */
# define HEIGHT_UNKNOWN ((signed_word)(-2))
# define HEIGHT_IN_PROGRESS ((signed_word)(-1))
ptr_t edges[MAX_IN];
struct back_edges_struct *cont;
/* Pointer to continuation structure; we use only the */
/* edges field in the continuation. */
/* also used as free list link. */
} back_edges;
/* Allocate a new back edge structure. Should be more sophisticated */
/* if this were production code. */
#define MAX_BACK_EDGE_STRUCTS 100000
static back_edges *back_edge_space = 0;
STATIC int GC_n_back_edge_structs = 0;
/* Serves as pointer to never used */
/* back_edges space. */
static back_edges *avail_back_edges = 0;
/* Pointer to free list of deallocated */
/* back_edges structures. */
static back_edges * new_back_edges(void)
{
if (0 == back_edge_space) {
size_t bytes_to_get = ROUNDUP_PAGESIZE_IF_MMAP(MAX_BACK_EDGE_STRUCTS
* sizeof(back_edges));
back_edge_space = (back_edges *)GET_MEM(bytes_to_get);
if (NULL == back_edge_space)
ABORT("Insufficient memory for back edges");
GC_add_to_our_memory((ptr_t)back_edge_space, bytes_to_get);
}
if (0 != avail_back_edges) {
back_edges * result = avail_back_edges;
avail_back_edges = result -> cont;
result -> cont = 0;
return result;
}
if (GC_n_back_edge_structs >= MAX_BACK_EDGE_STRUCTS - 1) {
ABORT("Needed too much space for back edges: adjust "
"MAX_BACK_EDGE_STRUCTS");
}
return back_edge_space + (GC_n_back_edge_structs++);
}
/* Deallocate p and its associated continuation structures. */
static void deallocate_back_edges(back_edges *p)
{
back_edges *last = p;
while (0 != last -> cont) last = last -> cont;
last -> cont = avail_back_edges;
avail_back_edges = p;
}
/* Table of objects that are currently on the depth-first search */
/* stack. Only objects with in-degree one are in this table. */
/* Other objects are identified using HEIGHT_IN_PROGRESS. */
/* FIXME: This data structure NEEDS IMPROVEMENT. */
#define INITIAL_IN_PROGRESS 10000
static ptr_t * in_progress_space = 0;
static size_t in_progress_size = 0;
static size_t n_in_progress = 0;
static void push_in_progress(ptr_t p)
{
if (n_in_progress >= in_progress_size) {
ptr_t * new_in_progress_space;
if (NULL == in_progress_space) {
in_progress_size = ROUNDUP_PAGESIZE_IF_MMAP(INITIAL_IN_PROGRESS
* sizeof(ptr_t))
/ sizeof(ptr_t);
new_in_progress_space =
(ptr_t *)GET_MEM(in_progress_size * sizeof(ptr_t));
} else {
in_progress_size *= 2;
new_in_progress_space = (ptr_t *)
GET_MEM(in_progress_size * sizeof(ptr_t));
if (new_in_progress_space != NULL)
BCOPY(in_progress_space, new_in_progress_space,
n_in_progress * sizeof(ptr_t));
}
GC_add_to_our_memory((ptr_t)new_in_progress_space,
in_progress_size * sizeof(ptr_t));
# ifndef GWW_VDB
GC_scratch_recycle_no_gww(in_progress_space,
n_in_progress * sizeof(ptr_t));
# elif defined(LINT2)
/* TODO: implement GWW-aware recycling as in alloc_mark_stack */
GC_noop1((word)in_progress_space);
# endif
in_progress_space = new_in_progress_space;
}
if (in_progress_space == 0)
ABORT("MAKE_BACK_GRAPH: Out of in-progress space: "
"Huge linear data structure?");
in_progress_space[n_in_progress++] = p;
}
static GC_bool is_in_progress(ptr_t p)
{
size_t i;
for (i = 0; i < n_in_progress; ++i) {
if (in_progress_space[i] == p) return TRUE;
}
return FALSE;
}
GC_INLINE void pop_in_progress(ptr_t p GC_ATTR_UNUSED)
{
--n_in_progress;
GC_ASSERT(in_progress_space[n_in_progress] == p);
}
#define GET_OH_BG_PTR(p) \
(ptr_t)GC_REVEAL_POINTER(((oh *)(p)) -> oh_bg_ptr)
#define SET_OH_BG_PTR(p,q) (((oh *)(p)) -> oh_bg_ptr = GC_HIDE_POINTER(q))
/* Execute s once for each predecessor q of p in the points-to graph. */
/* s should be a bracketed statement. We declare q. */
#define FOR_EACH_PRED(q, p, s) \
do { \
ptr_t q = GET_OH_BG_PTR(p); \
if (!((word)q & FLAG_MANY)) { \
if (q && !((word)q & 1)) s \
/* !((word)q & 1) checks for a misinterpreted freelist link */ \
} else { \
back_edges *orig_be_ = (back_edges *)((word)q & ~FLAG_MANY); \
back_edges *be_ = orig_be_; \
int local_; \
word total_; \
word n_edges_ = be_ -> n_edges; \
for (total_ = 0, local_ = 0; total_ < n_edges_; ++local_, ++total_) { \
if (local_ == MAX_IN) { \
be_ = be_ -> cont; \
local_ = 0; \
} \
q = be_ -> edges[local_]; s \
} \
} \
} while (0)
/* Ensure that p has a back_edges structure associated with it. */
static void ensure_struct(ptr_t p)
{
ptr_t old_back_ptr = GET_OH_BG_PTR(p);
if (!((word)old_back_ptr & FLAG_MANY)) {
back_edges *be = new_back_edges();
be -> flags = 0;
if (0 == old_back_ptr) {
be -> n_edges = 0;
} else {
be -> n_edges = 1;
be -> edges[0] = old_back_ptr;
}
be -> height = HEIGHT_UNKNOWN;
be -> height_gc_no = (unsigned short)(GC_gc_no - 1);
GC_ASSERT((word)be >= (word)back_edge_space);
SET_OH_BG_PTR(p, (word)be | FLAG_MANY);
}
}
/* Add the (forward) edge from p to q to the backward graph. Both p */
/* q are pointers to the object base, i.e. pointers to an oh. */
static void add_edge(ptr_t p, ptr_t q)
{
ptr_t old_back_ptr = GET_OH_BG_PTR(q);
back_edges * be, *be_cont;
word i;
GC_ASSERT(p == GC_base(p) && q == GC_base(q));
if (!GC_HAS_DEBUG_INFO(q) || !GC_HAS_DEBUG_INFO(p)) {
/* This is really a misinterpreted free list link, since we saw */
/* a pointer to a free list. Don't overwrite it! */
return;
}
if (0 == old_back_ptr) {
static unsigned random_number = 13;
# define GOT_LUCKY_NUMBER (((++random_number) & 0x7f) == 0)
/* A not very random number we use to occasionally allocate a */
/* back_edges structure even for a single backward edge. This */
/* prevents us from repeatedly tracing back through very long */
/* chains, since we will have some place to store height and */
/* in_progress flags along the way. */
SET_OH_BG_PTR(q, p);
if (GOT_LUCKY_NUMBER) ensure_struct(q);
return;
}
/* Check whether it was already in the list of predecessors. */
FOR_EACH_PRED(pred, q, { if (p == pred) return; });
ensure_struct(q);
old_back_ptr = GET_OH_BG_PTR(q);
be = (back_edges *)((word)old_back_ptr & ~FLAG_MANY);
for (i = be -> n_edges, be_cont = be; i > MAX_IN; i -= MAX_IN)
be_cont = be_cont -> cont;
if (i == MAX_IN) {
be_cont -> cont = new_back_edges();
be_cont = be_cont -> cont;
i = 0;
}
be_cont -> edges[i] = p;
be -> n_edges++;
# ifdef DEBUG_PRINT_BIG_N_EDGES
if (GC_print_stats == VERBOSE && be -> n_edges == 100) {
GC_err_printf("The following object has big in-degree:\n");
GC_print_heap_obj(q);
}
# endif
}
typedef void (*per_object_func)(ptr_t p, size_t n_bytes, word gc_descr);
static void per_object_helper(struct hblk *h, word fn)
{
hdr * hhdr = HDR(h);
size_t sz = hhdr -> hb_sz;
word descr = hhdr -> hb_descr;
per_object_func f = (per_object_func)fn;
int i = 0;
do {
f((ptr_t)(h -> hb_body + i), sz, descr);
i += (int)sz;
} while ((word)i + sz <= BYTES_TO_WORDS(HBLKSIZE));
}
GC_INLINE void GC_apply_to_each_object(per_object_func f)
{
GC_apply_to_all_blocks(per_object_helper, (word)f);
}
static void reset_back_edge(ptr_t p, size_t n_bytes GC_ATTR_UNUSED,
word gc_descr GC_ATTR_UNUSED)
{
/* Skip any free list links, or dropped blocks */
if (GC_HAS_DEBUG_INFO(p)) {
ptr_t old_back_ptr = GET_OH_BG_PTR(p);
if ((word)old_back_ptr & FLAG_MANY) {
back_edges *be = (back_edges *)((word)old_back_ptr & ~FLAG_MANY);
if (!(be -> flags & RETAIN)) {
deallocate_back_edges(be);
SET_OH_BG_PTR(p, 0);
} else {
GC_ASSERT(GC_is_marked(p));
/* Back edges may point to objects that will not be retained. */
/* Delete them for now, but remember the height. */
/* Some will be added back at next GC. */
be -> n_edges = 0;
if (0 != be -> cont) {
deallocate_back_edges(be -> cont);
be -> cont = 0;
}
GC_ASSERT(GC_is_marked(p));
/* We only retain things for one GC cycle at a time. */
be -> flags &= ~RETAIN;
}
} else /* Simple back pointer */ {
/* Clear to avoid dangling pointer. */
SET_OH_BG_PTR(p, 0);
}
}
}
static void add_back_edges(ptr_t p, size_t n_bytes, word gc_descr)
{
word *currentp = (word *)(p + sizeof(oh));
/* For now, fix up non-length descriptors conservatively. */
if((gc_descr & GC_DS_TAGS) != GC_DS_LENGTH) {
gc_descr = n_bytes;
}
while ((word)currentp < (word)(p + gc_descr)) {
word current = *currentp++;
FIXUP_POINTER(current);
if (current >= (word)GC_least_plausible_heap_addr &&
current <= (word)GC_greatest_plausible_heap_addr) {
ptr_t target = GC_base((void *)current);
if (0 != target) {
add_edge(p, target);
}
}
}
}
/* Rebuild the representation of the backward reachability graph. */
/* Does not examine mark bits. Can be called before GC. */
GC_INNER void GC_build_back_graph(void)
{
GC_apply_to_each_object(add_back_edges);
}
/* Return an approximation to the length of the longest simple path */
/* through unreachable objects to p. We refer to this as the height */
/* of p. */
static word backwards_height(ptr_t p)
{
word result;
ptr_t back_ptr = GET_OH_BG_PTR(p);
back_edges *be;
if (0 == back_ptr) return 1;
if (!((word)back_ptr & FLAG_MANY)) {
if (is_in_progress(p)) return 0; /* DFS back edge, i.e. we followed */
/* an edge to an object already */
/* on our stack: ignore */
push_in_progress(p);
result = backwards_height(back_ptr)+1;
pop_in_progress(p);
return result;
}
be = (back_edges *)((word)back_ptr & ~FLAG_MANY);
if (be -> height >= 0 && be -> height_gc_no == (unsigned short)GC_gc_no)
return be -> height;
/* Ignore back edges in DFS */
if (be -> height == HEIGHT_IN_PROGRESS) return 0;
result = (be -> height > 0? be -> height : 1);
be -> height = HEIGHT_IN_PROGRESS;
FOR_EACH_PRED(q, p, {
word this_height;
if (GC_is_marked(q) && !(FLAG_MANY & (word)GET_OH_BG_PTR(p))) {
GC_COND_LOG_PRINTF("Found bogus pointer from %p to %p\n",
(void *)q, (void *)p);
/* Reachable object "points to" unreachable one. */
/* Could be caused by our lax treatment of GC descriptors. */
this_height = 1;
} else {
this_height = backwards_height(q);
}
if (this_height >= result) result = this_height + 1;
});
be -> height = result;
be -> height_gc_no = (unsigned short)GC_gc_no;
return result;
}
STATIC word GC_max_height = 0;
STATIC ptr_t GC_deepest_obj = NULL;
/* Compute the maximum height of every unreachable predecessor p of a */
/* reachable object. Arrange to save the heights of all such objects p */
/* so that they can be used in calculating the height of objects in the */
/* next GC. */
/* Set GC_max_height to be the maximum height we encounter, and */
/* GC_deepest_obj to be the corresponding object. */
static void update_max_height(ptr_t p, size_t n_bytes GC_ATTR_UNUSED,
word gc_descr GC_ATTR_UNUSED)
{
if (GC_is_marked(p) && GC_HAS_DEBUG_INFO(p)) {
word p_height = 0;
ptr_t p_deepest_obj = 0;
ptr_t back_ptr;
back_edges *be = 0;
/* If we remembered a height last time, use it as a minimum. */
/* It may have increased due to newly unreachable chains pointing */
/* to p, but it can't have decreased. */
back_ptr = GET_OH_BG_PTR(p);
if (0 != back_ptr && ((word)back_ptr & FLAG_MANY)) {
be = (back_edges *)((word)back_ptr & ~FLAG_MANY);
if (be -> height != HEIGHT_UNKNOWN) p_height = be -> height;
}
FOR_EACH_PRED(q, p, {
if (!GC_is_marked(q) && GC_HAS_DEBUG_INFO(q)) {
word q_height;
q_height = backwards_height(q);
if (q_height > p_height) {
p_height = q_height;
p_deepest_obj = q;
}
}
});
if (p_height > 0) {
/* Remember the height for next time. */
if (be == 0) {
ensure_struct(p);
back_ptr = GET_OH_BG_PTR(p);
be = (back_edges *)((word)back_ptr & ~FLAG_MANY);
}
be -> flags |= RETAIN;
be -> height = p_height;
be -> height_gc_no = (unsigned short)GC_gc_no;
}
if (p_height > GC_max_height) {
GC_max_height = p_height;
GC_deepest_obj = p_deepest_obj;
}
}
}
STATIC word GC_max_max_height = 0;
GC_INNER void GC_traverse_back_graph(void)
{
GC_max_height = 0;
GC_apply_to_each_object(update_max_height);
if (0 != GC_deepest_obj)
GC_set_mark_bit(GC_deepest_obj); /* Keep it until we can print it. */
}
void GC_print_back_graph_stats(void)
{
GC_ASSERT(I_HOLD_LOCK());
GC_printf("Maximum backwards height of reachable objects at GC %lu is %lu\n",
(unsigned long) GC_gc_no, (unsigned long)GC_max_height);
if (GC_max_height > GC_max_max_height) {
ptr_t obj = GC_deepest_obj;
GC_max_max_height = GC_max_height;
UNLOCK();
GC_err_printf(
"The following unreachable object is last in a longest chain "
"of unreachable objects:\n");
GC_print_heap_obj(obj);
LOCK();
}
GC_COND_LOG_PRINTF("Needed max total of %d back-edge structs\n",
GC_n_back_edge_structs);
GC_apply_to_each_object(reset_back_edge);
GC_deepest_obj = 0;
}
#endif /* MAKE_BACK_GRAPH */
Gauche-0.9.6/gc/cord/ 0000775 0000764 0000764 00000000000 13316646663 013324 5 ustar shiro shiro Gauche-0.9.6/gc/cord/cord.am 0000664 0000764 0000764 00000001737 13227007433 014565 0 ustar shiro shiro ## This file is processed with automake.
# Info (current:revision:age) for the Libtool versioning system.
# These numbers should be updated at most once just before the release,
# and, optionally, at most once during the development (after the release).
LIBCORD_VER_INFO = 4:0:3
lib_LTLIBRARIES += libcord.la
libcord_la_LIBADD = $(top_builddir)/libgc.la
libcord_la_LDFLAGS = -version-info $(LIBCORD_VER_INFO) -no-undefined
libcord_la_CPPFLAGS = $(AM_CPPFLAGS)
libcord_la_SOURCES = \
cord/cordbscs.c \
cord/cordprnt.c \
cord/cordxtra.c
TESTS += cordtest$(EXEEXT)
check_PROGRAMS += cordtest
cordtest_SOURCES = cord/tests/cordtest.c
cordtest_LDADD = $(top_builddir)/libgc.la $(top_builddir)/libcord.la
EXTRA_DIST += \
cord/tests/de.c \
cord/tests/de_cmds.h \
cord/tests/de_win.c \
cord/tests/de_win.h \
cord/tests/de_win.rc
pkginclude_HEADERS += \
include/cord.h \
include/cord_pos.h \
include/ec.h
Gauche-0.9.6/gc/cord/tests/ 0000775 0000764 0000764 00000000000 13316646663 014466 5 ustar shiro shiro Gauche-0.9.6/gc/cord/tests/de.c 0000664 0000764 0000764 00000043355 13227007433 015217 0 ustar shiro shiro /*
* Copyright (c) 1993-1994 by Xerox Corporation. All rights reserved.
*
* THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY EXPRESSED
* OR IMPLIED. ANY USE IS AT YOUR OWN RISK.
*
* Permission is hereby granted to use or copy this program
* for any purpose, provided the above notices are retained on all copies.
* Permission to modify the code and to distribute modified code is granted,
* provided the above notices are retained, and a notice that the code was
* modified is included with the above copyright notice.
*/
/*
* A really simple-minded text editor based on cords.
* Things it does right:
* No size bounds.
* Inbounded undo.
* Shouldn't crash no matter what file you invoke it on (e.g. /vmunix)
* (Make sure /vmunix is not writable before you try this.)
* Scrolls horizontally.
* Things it does wrong:
* It doesn't handle tabs reasonably (use "expand" first).
* The command set is MUCH too small.
* The redisplay algorithm doesn't let curses do the scrolling.
* The rule for moving the window over the file is suboptimal.
*/
#include
#include /* for exit() */
#include "gc.h"
#include "cord.h"
#ifdef THINK_C
#define MACINTOSH
#endif
#include
#if (defined(__BORLANDC__) || defined(__CYGWIN__)) && !defined(WIN32)
/* If this is DOS or win16, we'll fail anyway. */
/* Might as well assume win32. */
# define WIN32
#endif
#if defined(WIN32)
# include
# include "de_win.h"
#elif defined(MACINTOSH)
# include
/* curses emulation. */
# define initscr()
# define endwin()
# define nonl()
# define noecho() csetmode(C_NOECHO, stdout)
# define cbreak() csetmode(C_CBREAK, stdout)
# define refresh()
# define addch(c) putchar(c)
# define standout() cinverse(1, stdout)
# define standend() cinverse(0, stdout)
# define move(line,col) cgotoxy(col + 1, line + 1, stdout)
# define clrtoeol() ccleol(stdout)
# define de_error(s) { fprintf(stderr, s); getchar(); }
# define LINES 25
# define COLS 80
#else
# include
# include /* for sleep() */
# define de_error(s) { fprintf(stderr, s); sleep(2); }
#endif
#include "de_cmds.h"
#define OUT_OF_MEMORY do { \
fprintf(stderr, "Out of memory\n"); \
exit(3); \
} while (0)
/* List of line number to position mappings, in descending order. */
/* There may be holes. */
typedef struct LineMapRep {
int line;
size_t pos;
struct LineMapRep * previous;
} * line_map;
/* List of file versions, one per edit operation */
typedef struct HistoryRep {
CORD file_contents;
struct HistoryRep * previous;
line_map map; /* Invalid for first record "now" */
} * history;
history now = 0;
CORD current; /* == now -> file_contents. */
size_t current_len; /* Current file length. */
line_map current_map = 0; /* Current line no. to pos. map */
size_t current_map_size = 0; /* Number of current_map entries. */
/* Not always accurate, but reset */
/* by prune_map. */
# define MAX_MAP_SIZE 3000
/* Current display position */
int dis_line = 0;
int dis_col = 0;
# define ALL -1
# define NONE - 2
int need_redisplay = 0; /* Line that needs to be redisplayed. */
/* Current cursor position. Always within file. */
int line = 0;
int col = 0;
size_t file_pos = 0; /* Character position corresponding to cursor. */
/* Invalidate line map for lines > i */
void invalidate_map(int i)
{
while(current_map -> line > i) {
current_map = current_map -> previous;
current_map_size--;
}
}
/* Reduce the number of map entries to save space for huge files. */
/* This also affects maps in histories. */
void prune_map(void)
{
line_map map = current_map;
int start_line = map -> line;
current_map_size = 0;
do {
current_map_size++;
if (map -> line < start_line - LINES && map -> previous != 0) {
map -> previous = map -> previous -> previous;
}
map = map -> previous;
} while (map != 0);
}
/* Add mapping entry */
void add_map(int line, size_t pos)
{
line_map new_map = GC_NEW(struct LineMapRep);
if (NULL == new_map) OUT_OF_MEMORY;
if (current_map_size >= MAX_MAP_SIZE) prune_map();
new_map -> line = line;
new_map -> pos = pos;
new_map -> previous = current_map;
current_map = new_map;
current_map_size++;
}
/* Return position of column *c of ith line in */
/* current file. Adjust *c to be within the line.*/
/* A 0 pointer is taken as 0 column. */
/* Returns CORD_NOT_FOUND if i is too big. */
/* Assumes i > dis_line. */
size_t line_pos(int i, int *c)
{
int j;
size_t cur;
line_map map = current_map;
while (map -> line > i) map = map -> previous;
if (map -> line < i - 2) /* rebuild */ invalidate_map(i);
for (j = map -> line, cur = map -> pos; j < i;) {
cur = CORD_chr(current, cur, '\n');
if (cur == current_len-1) return(CORD_NOT_FOUND);
cur++;
if (++j > current_map -> line) add_map(j, cur);
}
if (c != 0) {
size_t next = CORD_chr(current, cur, '\n');
if (next == CORD_NOT_FOUND) next = current_len - 1;
if (next < cur + *c) {
*c = (int)(next - cur);
}
cur += *c;
}
return(cur);
}
void add_hist(CORD s)
{
history new_file = GC_NEW(struct HistoryRep);
if (NULL == new_file) OUT_OF_MEMORY;
new_file -> file_contents = current = s;
current_len = CORD_len(s);
new_file -> previous = now;
if (now != 0) now -> map = current_map;
now = new_file;
}
void del_hist(void)
{
now = now -> previous;
current = now -> file_contents;
current_map = now -> map;
current_len = CORD_len(current);
}
/* Current screen_contents; a dynamically allocated array of CORDs */
CORD * screen = 0;
int screen_size = 0;
# ifndef WIN32
/* Replace a line in the curses stdscr. All control characters are */
/* displayed as upper case characters in standout mode. This isn't */
/* terribly appropriate for tabs. */
void replace_line(int i, CORD s)
{
CORD_pos p;
# if !defined(MACINTOSH)
size_t len = CORD_len(s);
# endif
if (screen == 0 || LINES > screen_size) {
screen_size = LINES;
screen = (CORD *)GC_MALLOC(screen_size * sizeof(CORD));
if (NULL == screen) OUT_OF_MEMORY;
}
# if !defined(MACINTOSH)
/* A gross workaround for an apparent curses bug: */
if (i == LINES-1 && len == (unsigned)COLS) {
s = CORD_substr(s, 0, len - 1);
}
# endif
if (CORD_cmp(screen[i], s) != 0) {
move(i, 0); clrtoeol(); move(i,0);
CORD_FOR (p, s) {
int c = CORD_pos_fetch(p) & 0x7f;
if (iscntrl(c)) {
standout(); addch(c + 0x40); standend();
} else {
addch(c);
}
}
screen[i] = s;
}
}
#else
# define replace_line(i,s) invalidate_line(i)
#endif
/* Return up to COLS characters of the line of s starting at pos, */
/* returning only characters after the given column. */
CORD retrieve_line(CORD s, size_t pos, unsigned column)
{
CORD candidate = CORD_substr(s, pos, column + COLS);
/* avoids scanning very long lines */
size_t eol = CORD_chr(candidate, 0, '\n');
int len;
if (eol == CORD_NOT_FOUND) eol = CORD_len(candidate);
len = (int)eol - (int)column;
if (len < 0) len = 0;
return(CORD_substr(s, pos + column, len));
}
# ifdef WIN32
# define refresh();
CORD retrieve_screen_line(int i)
{
register size_t pos;
invalidate_map(dis_line + LINES); /* Prune search */
pos = line_pos(dis_line + i, 0);
if (pos == CORD_NOT_FOUND) return(CORD_EMPTY);
return(retrieve_line(current, pos, dis_col));
}
# endif
/* Display the visible section of the current file */
void redisplay(void)
{
register int i;
invalidate_map(dis_line + LINES); /* Prune search */
for (i = 0; i < LINES; i++) {
if (need_redisplay == ALL || need_redisplay == i) {
register size_t pos = line_pos(dis_line + i, 0);
if (pos == CORD_NOT_FOUND) break;
replace_line(i, retrieve_line(current, pos, dis_col));
if (need_redisplay == i) goto done;
}
}
for (; i < LINES; i++) replace_line(i, CORD_EMPTY);
done:
refresh();
need_redisplay = NONE;
}
int dis_granularity;
/* Update dis_line, dis_col, and dis_pos to make cursor visible. */
/* Assumes line, col, dis_line, dis_pos are in bounds. */
void normalize_display(void)
{
int old_line = dis_line;
int old_col = dis_col;
dis_granularity = 1;
if (LINES > 15 && COLS > 15) dis_granularity = 2;
while (dis_line > line) dis_line -= dis_granularity;
while (dis_col > col) dis_col -= dis_granularity;
while (line >= dis_line + LINES) dis_line += dis_granularity;
while (col >= dis_col + COLS) dis_col += dis_granularity;
if (old_line != dis_line || old_col != dis_col) {
need_redisplay = ALL;
}
}
# if defined(WIN32)
# elif defined(MACINTOSH)
# define move_cursor(x,y) cgotoxy(x + 1, y + 1, stdout)
# else
# define move_cursor(x,y) move(y,x)
# endif
/* Adjust display so that cursor is visible; move cursor into position */
/* Update screen if necessary. */
void fix_cursor(void)
{
normalize_display();
if (need_redisplay != NONE) redisplay();
move_cursor(col - dis_col, line - dis_line);
refresh();
# ifndef WIN32
fflush(stdout);
# endif
}
/* Make sure line, col, and dis_pos are somewhere inside file. */
/* Recompute file_pos. Assumes dis_pos is accurate or past eof */
void fix_pos(void)
{
int my_col = col;
if ((size_t)line > current_len)
line = (int)current_len;
file_pos = line_pos(line, &my_col);
if (file_pos == CORD_NOT_FOUND) {
for (line = current_map -> line, file_pos = current_map -> pos;
file_pos < current_len;
line++, file_pos = CORD_chr(current, file_pos, '\n') + 1);
line--;
file_pos = line_pos(line, &col);
} else {
col = my_col;
}
}
#if defined(WIN32)
# define beep() Beep(1000 /* Hz */, 300 /* msecs */)
#elif defined(MACINTOSH)
# define beep() SysBeep(1)
#else
/*
* beep() is part of some curses packages and not others.
* We try to match the type of the builtin one, if any.
*/
int beep(void)
{
putc('\007', stderr);
return(0);
}
#endif /* !WIN32 && !MACINTOSH */
# define NO_PREFIX -1
# define BARE_PREFIX -2
int repeat_count = NO_PREFIX; /* Current command prefix. */
int locate_mode = 0; /* Currently between 2 ^Ls */
CORD locate_string = CORD_EMPTY; /* Current search string. */
char * arg_file_name;
#ifdef WIN32
/* Change the current position to whatever is currently displayed at */
/* the given SCREEN coordinates. */
void set_position(int c, int l)
{
line = l + dis_line;
col = c + dis_col;
fix_pos();
move_cursor(col - dis_col, line - dis_line);
}
#endif /* WIN32 */
/* Perform the command associated with character c. C may be an */
/* integer > 256 denoting a windows command, one of the above control */
/* characters, or another ASCII character to be used as either a */
/* character to be inserted, a repeat count, or a search string, */
/* depending on the current state. */
void do_command(int c)
{
int i;
int need_fix_pos;
FILE * out;
if ( c == '\r') c = '\n';
if (locate_mode) {
size_t new_pos;
if (c == LOCATE) {
locate_mode = 0;
locate_string = CORD_EMPTY;
return;
}
locate_string = CORD_cat_char(locate_string, (char)c);
new_pos = CORD_str(current, file_pos - CORD_len(locate_string) + 1,
locate_string);
if (new_pos != CORD_NOT_FOUND) {
need_redisplay = ALL;
new_pos += CORD_len(locate_string);
for (;;) {
file_pos = line_pos(line + 1, 0);
if (file_pos > new_pos) break;
line++;
}
col = (int)(new_pos - line_pos(line, 0));
file_pos = new_pos;
fix_cursor();
} else {
locate_string = CORD_substr(locate_string, 0,
CORD_len(locate_string) - 1);
beep();
}
return;
}
if (c == REPEAT) {
repeat_count = BARE_PREFIX; return;
} else if (c < 0x100 && isdigit(c)){
if (repeat_count == BARE_PREFIX) {
repeat_count = c - '0'; return;
} else if (repeat_count != NO_PREFIX) {
repeat_count = 10 * repeat_count + c - '0'; return;
}
}
if (repeat_count == NO_PREFIX) repeat_count = 1;
if (repeat_count == BARE_PREFIX && (c == UP || c == DOWN)) {
repeat_count = LINES - dis_granularity;
}
if (repeat_count == BARE_PREFIX) repeat_count = 8;
need_fix_pos = 0;
for (i = 0; i < repeat_count; i++) {
switch(c) {
case LOCATE:
locate_mode = 1;
break;
case TOP:
line = col = 0;
file_pos = 0;
break;
case UP:
if (line != 0) {
line--;
need_fix_pos = 1;
}
break;
case DOWN:
line++;
need_fix_pos = 1;
break;
case LEFT:
if (col != 0) {
col--; file_pos--;
}
break;
case RIGHT:
if (CORD_fetch(current, file_pos) == '\n') break;
col++; file_pos++;
break;
case UNDO:
del_hist();
need_redisplay = ALL; need_fix_pos = 1;
break;
case BS:
if (col == 0) {
beep();
break;
}
col--; file_pos--;
/* FALLTHRU */
case DEL:
if (file_pos == current_len-1) break;
/* Can't delete trailing newline */
if (CORD_fetch(current, file_pos) == '\n') {
need_redisplay = ALL; need_fix_pos = 1;
} else {
need_redisplay = line - dis_line;
}
add_hist(CORD_cat(
CORD_substr(current, 0, file_pos),
CORD_substr(current, file_pos+1, current_len)));
invalidate_map(line);
break;
case WRITE:
{
CORD name = CORD_cat(CORD_from_char_star(arg_file_name),
".new");
if ((out = fopen(CORD_to_const_char_star(name), "wb")) == NULL
|| CORD_put(current, out) == EOF) {
de_error("Write failed\n");
need_redisplay = ALL;
} else {
fclose(out);
}
}
break;
default:
{
CORD left_part = CORD_substr(current, 0, file_pos);
CORD right_part = CORD_substr(current, file_pos, current_len);
add_hist(CORD_cat(CORD_cat_char(left_part, (char)c),
right_part));
invalidate_map(line);
if (c == '\n') {
col = 0; line++; file_pos++;
need_redisplay = ALL;
} else {
col++; file_pos++;
need_redisplay = line - dis_line;
}
break;
}
}
}
if (need_fix_pos) fix_pos();
fix_cursor();
repeat_count = NO_PREFIX;
}
/* OS independent initialization */
void generic_init(void)
{
FILE * f;
CORD initial;
if ((f = fopen(arg_file_name, "rb")) == NULL) {
initial = "\n";
} else {
size_t len;
initial = CORD_from_file(f);
len = CORD_len(initial);
if (0 == len || CORD_fetch(initial, len - 1) != '\n') {
initial = CORD_cat(initial, "\n");
}
}
add_map(0,0);
add_hist(initial);
now -> map = current_map;
now -> previous = now; /* Can't back up further: beginning of the world */
need_redisplay = ALL;
fix_cursor();
}
#ifndef WIN32
int main(int argc, char **argv)
{
int c;
void *buf;
# if defined(MACINTOSH)
console_options.title = "\pDumb Editor";
cshow(stdout);
argc = ccommand(&argv);
# endif
GC_INIT();
if (argc != 2) {
fprintf(stderr, "Usage: %s file\n", argv[0]);
fprintf(stderr, "Cursor keys: ^B(left) ^F(right) ^P(up) ^N(down)\n");
fprintf(stderr, "Undo: ^U Write to .new: ^W");
fprintf(stderr, "Quit:^D Repeat count: ^R[n]\n");
fprintf(stderr, "Top: ^T Locate (search, find): ^L text ^L\n");
exit(1);
}
arg_file_name = argv[1];
buf = GC_MALLOC_ATOMIC(8192);
if (NULL == buf) OUT_OF_MEMORY;
setvbuf(stdout, buf, _IOFBF, 8192);
initscr();
noecho(); nonl(); cbreak();
generic_init();
while ((c = getchar()) != QUIT) {
if (c == EOF) break;
do_command(c);
}
move(LINES-1, 0);
clrtoeol();
refresh();
nl();
echo();
endwin();
return 0;
}
#endif /* !WIN32 */
Gauche-0.9.6/gc/cord/tests/de_win.c 0000664 0000764 0000764 00000025417 13227007433 016073 0 ustar shiro shiro /*
* Copyright (c) 1994 by Xerox Corporation. All rights reserved.
*
* THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY EXPRESSED
* OR IMPLIED. ANY USE IS AT YOUR OWN RISK.
*
* Permission is hereby granted to use or copy this program
* for any purpose, provided the above notices are retained on all copies.
* Permission to modify the code and to distribute modified code is granted,
* provided the above notices are retained, and a notice that the code was
* modified is included with the above copyright notice.
*/
/*
* The MS Windows specific part of de.
* This started as the generic Windows application template
* but significant parts didn't survive to the final version.
*
* This was written by a nonexpert windows programmer.
*/
#include "windows.h"
#include "gc.h"
#include "cord.h"
#include "de_cmds.h"
#include "de_win.h"
int LINES = 0;
int COLS = 0;
#define szAppName TEXT("DE")
HWND hwnd;
void de_error(char *s)
{
(void)MessageBoxA(hwnd, s, "Demonstration Editor",
MB_ICONINFORMATION | MB_OK);
InvalidateRect(hwnd, NULL, TRUE);
}
int APIENTRY WinMain (HINSTANCE hInstance, HINSTANCE hPrevInstance,
LPSTR command_line, int nCmdShow)
{
MSG msg;
WNDCLASS wndclass;
HANDLE hAccel;
GC_INIT();
# if defined(CPPCHECK)
GC_noop1((GC_word)&WinMain);
# endif
if (!hPrevInstance)
{
wndclass.style = CS_HREDRAW | CS_VREDRAW;
wndclass.lpfnWndProc = WndProc;
wndclass.cbClsExtra = 0;
wndclass.cbWndExtra = DLGWINDOWEXTRA;
wndclass.hInstance = hInstance;
wndclass.hIcon = LoadIcon (hInstance, szAppName);
wndclass.hCursor = LoadCursor (NULL, IDC_ARROW);
wndclass.hbrBackground = GetStockObject(WHITE_BRUSH);
wndclass.lpszMenuName = TEXT("DE");
wndclass.lpszClassName = szAppName;
if (RegisterClass (&wndclass) == 0) {
de_error("RegisterClass error");
return(0);
}
}
/* Empirically, the command line does not include the command name ...
if (command_line != 0) {
while (isspace(*command_line)) command_line++;
while (*command_line != 0 && !isspace(*command_line)) command_line++;
while (isspace(*command_line)) command_line++;
} */
if (command_line == 0 || *command_line == 0) {
de_error("File name argument required");
return( 0 );
} else {
char *p = command_line;
while (*p != 0 && !isspace(*(unsigned char *)p))
p++;
arg_file_name = CORD_to_char_star(
CORD_substr(command_line, 0, p - command_line));
}
hwnd = CreateWindow (szAppName,
TEXT("Demonstration Editor"),
WS_OVERLAPPEDWINDOW | WS_CAPTION, /* Window style */
CW_USEDEFAULT, 0, /* default pos. */
CW_USEDEFAULT, 0, /* default width, height */
NULL, /* No parent */
NULL, /* Window class menu */
hInstance, NULL);
if (hwnd == NULL) {
de_error("CreateWindow error");
return(0);
}
ShowWindow (hwnd, nCmdShow);
hAccel = LoadAccelerators( hInstance, szAppName );
while (GetMessage (&msg, NULL, 0, 0))
{
if( !TranslateAccelerator( hwnd, hAccel, &msg ) )
{
TranslateMessage (&msg);
DispatchMessage (&msg);
}
}
return (int)msg.wParam;
}
/* Return the argument with all control characters replaced by blanks. */
char * plain_chars(char * text, size_t len)
{
char * result = GC_MALLOC_ATOMIC(len + 1);
register size_t i;
if (NULL == result) return NULL;
for (i = 0; i < len; i++) {
if (iscntrl(((unsigned char *)text)[i])) {
result[i] = ' ';
} else {
result[i] = text[i];
}
}
result[len] = '\0';
return(result);
}
/* Return the argument with all non-control-characters replaced by */
/* blank, and all control characters c replaced by c + 32. */
char * control_chars(char * text, size_t len)
{
char * result = GC_MALLOC_ATOMIC(len + 1);
register size_t i;
if (NULL == result) return NULL;
for (i = 0; i < len; i++) {
if (iscntrl(((unsigned char *)text)[i])) {
result[i] = text[i] + 0x40;
} else {
result[i] = ' ';
}
}
result[len] = '\0';
return(result);
}
int char_width;
int char_height;
void get_line_rect(int line, int win_width, RECT * rectp)
{
rectp -> top = line * (LONG)char_height;
rectp -> bottom = rectp->top + char_height;
rectp -> left = 0;
rectp -> right = win_width;
}
int caret_visible = 0; /* Caret is currently visible. */
int screen_was_painted = 0;/* Screen has been painted at least once. */
void update_cursor(void);
INT_PTR CALLBACK AboutBoxCallback( HWND hDlg, UINT message,
WPARAM wParam, LPARAM lParam )
{
(void)lParam;
switch( message )
{
case WM_INITDIALOG:
SetFocus( GetDlgItem( hDlg, IDOK ) );
break;
case WM_COMMAND:
switch( wParam )
{
case IDOK:
EndDialog( hDlg, TRUE );
break;
}
break;
case WM_CLOSE:
EndDialog( hDlg, TRUE );
return TRUE;
}
return FALSE;
}
LRESULT CALLBACK WndProc (HWND hwnd, UINT message,
WPARAM wParam, LPARAM lParam)
{
static HANDLE hInstance;
HDC dc;
PAINTSTRUCT ps;
RECT client_area;
RECT this_line;
RECT dummy;
TEXTMETRIC tm;
register int i;
int id;
switch (message)
{
case WM_CREATE:
hInstance = ( (LPCREATESTRUCT) lParam)->hInstance;
dc = GetDC(hwnd);
SelectObject(dc, GetStockObject(SYSTEM_FIXED_FONT));
GetTextMetrics(dc, &tm);
ReleaseDC(hwnd, dc);
char_width = tm.tmAveCharWidth;
char_height = tm.tmHeight + tm.tmExternalLeading;
GetClientRect(hwnd, &client_area);
COLS = (client_area.right - client_area.left)/char_width;
LINES = (client_area.bottom - client_area.top)/char_height;
generic_init();
return(0);
case WM_CHAR:
if (wParam == QUIT) {
SendMessage( hwnd, WM_CLOSE, 0, 0L );
} else {
do_command((int)wParam);
}
return(0);
case WM_SETFOCUS:
CreateCaret(hwnd, NULL, char_width, char_height);
ShowCaret(hwnd);
caret_visible = 1;
update_cursor();
return(0);
case WM_KILLFOCUS:
HideCaret(hwnd);
DestroyCaret();
caret_visible = 0;
return(0);
case WM_LBUTTONUP:
{
unsigned xpos = LOWORD(lParam); /* From left */
unsigned ypos = HIWORD(lParam); /* from top */
set_position(xpos / (unsigned)char_width,
ypos / (unsigned)char_height);
return(0);
}
case WM_COMMAND:
id = LOWORD(wParam);
if (id & EDIT_CMD_FLAG) {
if (id & REPEAT_FLAG) do_command(REPEAT);
do_command(CHAR_CMD(id));
return( 0 );
} else {
switch(id) {
case IDM_FILEEXIT:
SendMessage( hwnd, WM_CLOSE, 0, 0L );
return( 0 );
case IDM_HELPABOUT:
if( DialogBox( hInstance, TEXT("ABOUTBOX"),
hwnd, AboutBoxCallback ) )
InvalidateRect( hwnd, NULL, TRUE );
return( 0 );
case IDM_HELPCONTENTS:
de_error(
"Cursor keys: ^B(left) ^F(right) ^P(up) ^N(down)\n"
"Undo: ^U Write: ^W Quit:^D Repeat count: ^R[n]\n"
"Top: ^T Locate (search, find): ^L text ^L\n");
return( 0 );
}
}
break;
case WM_CLOSE:
DestroyWindow( hwnd );
return 0;
case WM_DESTROY:
PostQuitMessage (0);
GC_win32_free_heap();
return 0;
case WM_PAINT:
dc = BeginPaint(hwnd, &ps);
GetClientRect(hwnd, &client_area);
COLS = (client_area.right - client_area.left)/char_width;
LINES = (client_area.bottom - client_area.top)/char_height;
SelectObject(dc, GetStockObject(SYSTEM_FIXED_FONT));
for (i = 0; i < LINES; i++) {
get_line_rect(i, client_area.right, &this_line);
if (IntersectRect(&dummy, &this_line, &ps.rcPaint)) {
CORD raw_line = retrieve_screen_line(i);
size_t len = CORD_len(raw_line);
char * text = CORD_to_char_star(raw_line);
/* May contain embedded NULLs */
char * plain = plain_chars(text, len);
char * blanks = CORD_to_char_star(CORD_chars(' ',
COLS - len));
char * control = control_chars(text, len);
if (NULL == plain || NULL == control)
de_error("Out of memory!");
# define RED RGB(255,0,0)
SetBkMode(dc, OPAQUE);
SetTextColor(dc, GetSysColor(COLOR_WINDOWTEXT));
if (plain != NULL)
TextOutA(dc, this_line.left, this_line.top,
plain, (int)len);
TextOutA(dc, this_line.left + (int)len * char_width,
this_line.top,
blanks, (int)(COLS - len));
SetBkMode(dc, TRANSPARENT);
SetTextColor(dc, RED);
if (control != NULL)
TextOutA(dc, this_line.left, this_line.top,
control, (int)strlen(control));
}
}
EndPaint(hwnd, &ps);
screen_was_painted = 1;
return 0;
}
return DefWindowProc (hwnd, message, wParam, lParam);
}
int last_col;
int last_line;
void move_cursor(int c, int l)
{
last_col = c;
last_line = l;
if (caret_visible) update_cursor();
}
void update_cursor(void)
{
SetCaretPos(last_col * char_width, last_line * char_height);
ShowCaret(hwnd);
}
void invalidate_line(int i)
{
RECT line;
if (!screen_was_painted) return;
/* Invalidating a rectangle before painting seems result in a */
/* major performance problem. */
get_line_rect(i, COLS*char_width, &line);
InvalidateRect(hwnd, &line, FALSE);
}
Gauche-0.9.6/gc/cord/tests/de_win.h 0000664 0000764 0000764 00000006345 13074101475 016100 0 ustar shiro shiro /*
* Copyright (c) 1994 by Xerox Corporation. All rights reserved.
*
* THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY EXPRESSED
* OR IMPLIED. ANY USE IS AT YOUR OWN RISK.
*
* Permission is hereby granted to use or copy this program
* for any purpose, provided the above notices are retained on all copies.
* Permission to modify the code and to distribute modified code is granted,
* provided the above notices are retained, and a notice that the code was
* modified is included with the above copyright notice.
*/
/* cord.h, de_cmds.h, and windows.h should be included before this. */
# define OTHER_FLAG 0x100
# define EDIT_CMD_FLAG 0x200
# define REPEAT_FLAG 0x400
# define CHAR_CMD(i) ((i) & 0xff)
/* MENU: DE */
#define IDM_FILESAVE (EDIT_CMD_FLAG + WRITE)
#define IDM_FILEEXIT (OTHER_FLAG + 1)
#define IDM_HELPABOUT (OTHER_FLAG + 2)
#define IDM_HELPCONTENTS (OTHER_FLAG + 3)
#define IDM_EDITPDOWN (REPEAT_FLAG + EDIT_CMD_FLAG + DOWN)
#define IDM_EDITPUP (REPEAT_FLAG + EDIT_CMD_FLAG + UP)
#define IDM_EDITUNDO (EDIT_CMD_FLAG + UNDO)
#define IDM_EDITLOCATE (EDIT_CMD_FLAG + LOCATE)
#define IDM_EDITDOWN (EDIT_CMD_FLAG + DOWN)
#define IDM_EDITUP (EDIT_CMD_FLAG + UP)
#define IDM_EDITLEFT (EDIT_CMD_FLAG + LEFT)
#define IDM_EDITRIGHT (EDIT_CMD_FLAG + RIGHT)
#define IDM_EDITBS (EDIT_CMD_FLAG + BS)
#define IDM_EDITDEL (EDIT_CMD_FLAG + DEL)
#define IDM_EDITREPEAT (EDIT_CMD_FLAG + REPEAT)
#define IDM_EDITTOP (EDIT_CMD_FLAG + TOP)
/* Windows UI stuff */
LRESULT CALLBACK WndProc (HWND hwnd, UINT message,
WPARAM wParam, LPARAM lParam);
/* Screen dimensions. Maintained by de_win.c. */
extern int LINES;
extern int COLS;
/* File being edited. */
extern char * arg_file_name;
/* Current display position in file. Maintained by de.c */
extern int dis_line;
extern int dis_col;
/* Current cursor position in file. */
extern int line;
extern int col;
/*
* Calls from de_win.c to de.c
*/
CORD retrieve_screen_line(int i);
/* Get the contents of i'th screen line. */
/* Relies on COLS. */
void set_position(int x, int y);
/* Set column, row. Upper left of window = (0,0). */
void do_command(int);
/* Execute an editor command. */
/* Agument is a command character or one */
/* of the IDM_ commands. */
void generic_init(void);
/* OS independent initialization */
/*
* Calls from de.c to de_win.c
*/
void move_cursor(int column, int line);
/* Physically move the cursor on the display, */
/* so that it appears at */
/* (column, line). */
void invalidate_line(int line);
/* Invalidate line i on the screen. */
void de_error(char *s);
/* Display error message. */
Gauche-0.9.6/gc/cord/tests/de_cmds.h 0000664 0000764 0000764 00000001733 13074101475 016225 0 ustar shiro shiro /*
* Copyright (c) 1994 by Xerox Corporation. All rights reserved.
*
* THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY EXPRESSED
* OR IMPLIED. ANY USE IS AT YOUR OWN RISK.
*
* Permission is hereby granted to use or copy this program
* for any purpose, provided the above notices are retained on all copies.
* Permission to modify the code and to distribute modified code is granted,
* provided the above notices are retained, and a notice that the code was
* modified is included with the above copyright notice.
*/
#ifndef DE_CMDS_H
# define DE_CMDS_H
# define UP 16 /* ^P */
# define DOWN 14 /* ^N */
# define LEFT 2 /* ^B */
# define RIGHT 6 /* ^F */
# define DEL 127 /* ^? */
# define BS 8 /* ^H */
# define UNDO 21 /* ^U */
# define WRITE 23 /* ^W */
# define QUIT 4 /* ^D */
# define REPEAT 18 /* ^R */
# define LOCATE 12 /* ^L */
# define TOP 20 /* ^T */
#endif
Gauche-0.9.6/gc/cord/tests/cordtest.c 0000664 0000764 0000764 00000024065 13227007433 016453 0 ustar shiro shiro /*
* Copyright (c) 1993-1994 by Xerox Corporation. All rights reserved.
*
* THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY EXPRESSED
* OR IMPLIED. ANY USE IS AT YOUR OWN RISK.
*
* Permission is hereby granted to use or copy this program
* for any purpose, provided the above notices are retained on all copies.
* Permission to modify the code and to distribute modified code is granted,
* provided the above notices are retained, and a notice that the code was
* modified is included with the above copyright notice.
*/
# include "gc.h" /* For GC_INIT() only */
# include "cord.h"
# include
# include
# include
# include
/* This is a very incomplete test of the cord package. It knows about */
/* a few internals of the package (e.g. when C strings are returned) */
/* that real clients shouldn't rely on. */
# define ABORT(string) \
{ fprintf(stderr, "FAILED: %s\n", string); abort(); }
#if defined(CPPCHECK)
# undef CORD_iter
# undef CORD_next
# undef CORD_pos_fetch
# undef CORD_pos_to_cord
# undef CORD_pos_to_index
# undef CORD_pos_valid
# undef CORD_prev
#endif
int count;
int test_fn(char c, void * client_data)
{
if (client_data != (void *)13) ABORT("bad client data");
if (count < 64*1024+1) {
if ((count & 1) == 0) {
if (c != 'b') ABORT("bad char");
} else {
if (c != 'a') ABORT("bad char");
}
count++;
return(0);
} else {
if (c != 'c') ABORT("bad char");
count++;
return(1);
}
}
char id_cord_fn(size_t i, void * client_data)
{
if (client_data != 0) ABORT("id_cord_fn: bad client data");
return((char)i);
}
void test_basics(void)
{
CORD x = CORD_from_char_star("ab");
register int i;
CORD y;
CORD_pos p;
x = CORD_cat(x,x);
if (x == CORD_EMPTY) ABORT("CORD_cat(x,x) returned empty cord");
if (!CORD_IS_STRING(x)) ABORT("short cord should usually be a string");
if (strcmp(x, "abab") != 0) ABORT("bad CORD_cat result");
for (i = 1; i < 16; i++) {
x = CORD_cat(x,x);
}
x = CORD_cat(x,"c");
if (CORD_len(x) != 128*1024+1) ABORT("bad length");
count = 0;
if (CORD_iter5(x, 64*1024-1, test_fn, CORD_NO_FN, (void *)13) == 0) {
ABORT("CORD_iter5 failed");
}
if (count != 64*1024 + 2) ABORT("CORD_iter5 failed");
count = 0;
CORD_set_pos(p, x, 64*1024-1);
while(CORD_pos_valid(p)) {
(void) test_fn(CORD_pos_fetch(p), (void *)13);
CORD_next(p);
}
if (count != 64*1024 + 2) ABORT("Position based iteration failed");
y = CORD_substr(x, 1023, 5);
if (!y) ABORT("CORD_substr returned NULL");
if (!CORD_IS_STRING(y)) ABORT("short cord should usually be a string");
if (strcmp(y, "babab") != 0) ABORT("bad CORD_substr result");
y = CORD_substr(x, 1024, 8);
if (!y) ABORT("CORD_substr returned NULL");
if (!CORD_IS_STRING(y)) ABORT("short cord should usually be a string");
if (strcmp(y, "abababab") != 0) ABORT("bad CORD_substr result");
y = CORD_substr(x, 128*1024-1, 8);
if (!y) ABORT("CORD_substr returned NULL");
if (!CORD_IS_STRING(y)) ABORT("short cord should usually be a string");
if (strcmp(y, "bc") != 0) ABORT("bad CORD_substr result");
x = CORD_balance(x);
if (CORD_len(x) != 128*1024+1) ABORT("bad length");
count = 0;
if (CORD_iter5(x, 64*1024-1, test_fn, CORD_NO_FN, (void *)13) == 0) {
ABORT("CORD_iter5 failed");
}
if (count != 64*1024 + 2) ABORT("CORD_iter5 failed");
y = CORD_substr(x, 1023, 5);
if (!y) ABORT("CORD_substr returned NULL");
if (!CORD_IS_STRING(y)) ABORT("short cord should usually be a string");
if (strcmp(y, "babab") != 0) ABORT("bad CORD_substr result");
y = CORD_from_fn(id_cord_fn, 0, 13);
i = 0;
CORD_set_pos(p, y, i);
while(CORD_pos_valid(p)) {
char c = CORD_pos_fetch(p);
if(c != i) ABORT("Traversal of function node failed");
CORD_next(p);
i++;
}
if (i != 13) ABORT("Bad apparent length for function node");
# if defined(CPPCHECK)
/* TODO: Actually test these functions. */
CORD_prev(p);
(void)CORD_pos_to_cord(p);
(void)CORD_pos_to_index(p);
(void)CORD_iter(CORD_EMPTY, test_fn, NULL);
(void)CORD_riter(CORD_EMPTY, test_fn, NULL);
CORD_dump(y);
# endif
}
void test_extras(void)
{
# define FNAME1 "cordtst1.tmp" /* short name (8+3) for portability */
# define FNAME2 "cordtst2.tmp"
register int i;
CORD y = "abcdefghijklmnopqrstuvwxyz0123456789";
CORD x = "{}";
CORD u, w, z;
FILE *f;
FILE *f1a, *f1b, *f2;
w = CORD_cat(CORD_cat(y,y),y);
z = CORD_catn(3,y,y,y);
if (CORD_cmp(w,z) != 0) ABORT("CORD_catn comparison wrong");
for (i = 1; i < 100; i++) {
x = CORD_cat(x, y);
}
z = CORD_balance(x);
if (CORD_cmp(x,z) != 0) ABORT("balanced string comparison wrong");
if (CORD_cmp(x,CORD_cat(z, CORD_nul(13))) >= 0) ABORT("comparison 2");
if (CORD_cmp(CORD_cat(x, CORD_nul(13)), z) <= 0) ABORT("comparison 3");
if (CORD_cmp(x,CORD_cat(z, "13")) >= 0) ABORT("comparison 4");
if ((f = fopen(FNAME1, "w")) == 0) ABORT("open failed");
if (CORD_put(z,f) == EOF) ABORT("CORD_put failed");
if (fclose(f) == EOF) ABORT("fclose failed");
f1a = fopen(FNAME1, "rb");
if (!f1a) ABORT("Unable to open " FNAME1);
w = CORD_from_file(f1a);
if (CORD_len(w) != CORD_len(z)) ABORT("file length wrong");
if (CORD_cmp(w,z) != 0) ABORT("file comparison wrong");
if (CORD_cmp(CORD_substr(w, 50*36+2, 36), y) != 0)
ABORT("file substr wrong");
f1b = fopen(FNAME1, "rb");
if (!f1b) ABORT("2nd open failed: " FNAME1);
z = CORD_from_file_lazy(f1b);
if (CORD_cmp(w,z) != 0) ABORT("File conversions differ");
if (CORD_chr(w, 0, '9') != 37) ABORT("CORD_chr failed 1");
if (CORD_chr(w, 3, 'a') != 38) ABORT("CORD_chr failed 2");
if (CORD_rchr(w, CORD_len(w) - 1, '}') != 1) ABORT("CORD_rchr failed");
x = y;
for (i = 1; i < 14; i++) {
x = CORD_cat(x,x);
}
if ((f = fopen(FNAME2, "w")) == 0) ABORT("2nd open failed");
# ifdef __DJGPP__
/* FIXME: DJGPP workaround. Why does this help? */
if (fflush(f) != 0) ABORT("fflush failed");
# endif
if (CORD_put(x,f) == EOF) ABORT("CORD_put failed");
if (fclose(f) == EOF) ABORT("fclose failed");
f2 = fopen(FNAME2, "rb");
if (!f2) ABORT("Unable to open " FNAME2);
w = CORD_from_file(f2);
if (CORD_len(w) != CORD_len(x)) ABORT("file length wrong");
if (CORD_cmp(w,x) != 0) ABORT("file comparison wrong");
if (CORD_cmp(CORD_substr(w, 1000*36, 36), y) != 0)
ABORT("file substr wrong");
if (strcmp(CORD_to_char_star(CORD_substr(w, 1000*36, 36)), y) != 0)
ABORT("char * file substr wrong");
u = CORD_substr(w, 1000*36, 2);
if (!u) ABORT("CORD_substr returned NULL");
if (strcmp(u, "ab") != 0)
ABORT("short file substr wrong");
if (CORD_str(x,1,"9a") != 35) ABORT("CORD_str failed 1");
if (CORD_str(x,0,"9abcdefghijk") != 35) ABORT("CORD_str failed 2");
if (CORD_str(x,0,"9abcdefghijx") != CORD_NOT_FOUND)
ABORT("CORD_str failed 3");
if (CORD_str(x,0,"9>") != CORD_NOT_FOUND) ABORT("CORD_str failed 4");
/* Note: f1a, f1b, f2 handles are closed lazily by CORD library. */
/* TODO: Propose and use CORD_fclose. */
*(CORD volatile *)&w = CORD_EMPTY;
*(CORD volatile *)&z = CORD_EMPTY;
GC_gcollect();
GC_invoke_finalizers();
/* Of course, this does not guarantee the files are closed. */
if (remove(FNAME1) != 0) {
/* On some systems, e.g. OS2, this may fail if f1 is still open. */
/* But we cannot call fclose as it might lead to double close. */
fprintf(stderr, "WARNING: remove failed: " FNAME1 "\n");
}
if (remove(FNAME2) != 0) {
fprintf(stderr, "WARNING: remove failed: " FNAME2 "\n");
}
}
int wrap_vprintf(CORD format, ...)
{
va_list args;
int result;
va_start(args, format);
result = CORD_vprintf(format, args);
va_end(args);
return result;
}
int wrap_vfprintf(FILE * f, CORD format, ...)
{
va_list args;
int result;
va_start(args, format);
result = CORD_vfprintf(f, format, args);
va_end(args);
return result;
}
#if defined(__DJGPP__) || defined(__STRICT_ANSI__)
/* snprintf is missing in DJGPP (v2.0.3) */
#else
# if defined(_MSC_VER)
# if defined(_WIN32_WCE)
/* _snprintf is deprecated in WinCE */
# define GC_SNPRINTF StringCchPrintfA
# else
# define GC_SNPRINTF _snprintf
# endif
# else
# define GC_SNPRINTF snprintf
# endif
#endif
void test_printf(void)
{
CORD result;
char result2[200];
long l = -1;
short s = (short)-1;
CORD x;
if (CORD_sprintf(&result, "%7.2f%ln", 3.14159F, &l) != 7)
ABORT("CORD_sprintf failed 1");
if (CORD_cmp(result, " 3.14") != 0)ABORT("CORD_sprintf goofed 1");
if (l != 7) ABORT("CORD_sprintf goofed 2");
if (CORD_sprintf(&result, "%-7.2s%hn%c%s", "abcd", &s, 'x', "yz") != 10)
ABORT("CORD_sprintf failed 2");
if (CORD_cmp(result, "ab xyz") != 0)ABORT("CORD_sprintf goofed 3");
if (s != 7) ABORT("CORD_sprintf goofed 4");
x = "abcdefghij";
x = CORD_cat(x,x);
x = CORD_cat(x,x);
x = CORD_cat(x,x);
if (CORD_sprintf(&result, "->%-120.78r!\n", x) != 124)
ABORT("CORD_sprintf failed 3");
# ifdef GC_SNPRINTF
(void)GC_SNPRINTF(result2, sizeof(result2), "->%-120.78s!\n",
CORD_to_char_star(x));
# else
(void)sprintf(result2, "->%-120.78s!\n", CORD_to_char_star(x));
# endif
result2[sizeof(result2) - 1] = '\0';
if (CORD_cmp(result, result2) != 0)ABORT("CORD_sprintf goofed 5");
/* TODO: Better test CORD_[v][f]printf. */
(void)CORD_printf(CORD_EMPTY);
(void)wrap_vfprintf(stdout, CORD_EMPTY);
(void)wrap_vprintf(CORD_EMPTY);
}
int main(void)
{
# ifdef THINK_C
printf("cordtest:\n");
# endif
GC_INIT();
test_basics();
test_extras();
test_printf();
CORD_fprintf(stdout, "SUCCEEDED\n");
return(0);
}
Gauche-0.9.6/gc/cord/tests/de_win.rc 0000664 0000764 0000764 00000003770 13227007433 016253 0 ustar shiro shiro /*
* Copyright (c) 1991-1994 by Xerox Corporation. All rights reserved.
*
* THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY EXPRESSED
* OR IMPLIED. ANY USE IS AT YOUR OWN RISK.
*
* Permission is hereby granted to copy this garbage collector for any purpose,
* provided the above notices are retained on all copies.
*/
#include "windows.h"
#include "de_cmds.h"
#include "de_win.h"
ABOUTBOX DIALOG 19, 21, 163, 47
STYLE DS_MODALFRAME | WS_POPUP | WS_CAPTION | WS_SYSMENU
CAPTION "About Demonstration Text Editor"
BEGIN
/* ICON "DE", -1, 8, 8, 13, 13, WS_CHILD | WS_VISIBLE */
LTEXT "Demonstration Text Editor", -1, 44, 8, 118, 8, WS_CHILD | WS_VISIBLE | WS_GROUP
LTEXT "Version 4.1", -1, 44, 16, 60, 8, WS_CHILD | WS_VISIBLE | WS_GROUP
PUSHBUTTON "OK", IDOK, 118, 27, 24, 14, WS_CHILD | WS_VISIBLE | WS_TABSTOP
END
DE MENU
BEGIN
POPUP "&File"
BEGIN
MENUITEM "&Save\t^W", IDM_FILESAVE
MENUITEM "E&xit\t^D", IDM_FILEEXIT
END
POPUP "&Edit"
BEGIN
MENUITEM "Page &Down\t^R^N", IDM_EDITPDOWN
MENUITEM "Page &Up\t^R^P", IDM_EDITPUP
MENUITEM "U&ndo\t^U", IDM_EDITUNDO
MENUITEM "&Locate\t^L ... ^L", IDM_EDITLOCATE
MENUITEM "D&own\t^N", IDM_EDITDOWN
MENUITEM "U&p\t^P", IDM_EDITUP
MENUITEM "Le&ft\t^B", IDM_EDITLEFT
MENUITEM "&Right\t^F", IDM_EDITRIGHT
MENUITEM "Delete &Backward\tBS", IDM_EDITBS
MENUITEM "Delete F&orward\tDEL", IDM_EDITDEL
MENUITEM "&Top\t^T", IDM_EDITTOP
END
POPUP "&Help"
BEGIN
MENUITEM "&Contents", IDM_HELPCONTENTS
MENUITEM "&About...", IDM_HELPABOUT
END
MENUITEM "Page_&Down", IDM_EDITPDOWN
MENUITEM "Page_&Up", IDM_EDITPUP
END
DE ACCELERATORS
BEGIN
"^R", IDM_EDITREPEAT
"^N", IDM_EDITDOWN
"^P", IDM_EDITUP
"^L", IDM_EDITLOCATE
"^B", IDM_EDITLEFT
"^F", IDM_EDITRIGHT
"^T", IDM_EDITTOP
VK_DELETE, IDM_EDITDEL, VIRTKEY
VK_BACK, IDM_EDITBS, VIRTKEY
END
/* DE ICON cord\de_win.ICO */
Gauche-0.9.6/gc/cord/cordbscs.c 0000664 0000764 0000764 00000070554 13227007433 015270 0 ustar shiro shiro /*
* Copyright (c) 1993-1994 by Xerox Corporation. All rights reserved.
*
* THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY EXPRESSED
* OR IMPLIED. ANY USE IS AT YOUR OWN RISK.
*
* Permission is hereby granted to use or copy this program
* for any purpose, provided the above notices are retained on all copies.
* Permission to modify the code and to distribute modified code is granted,
* provided the above notices are retained, and a notice that the code was
* modified is included with the above copyright notice.
*/
#ifdef HAVE_CONFIG_H
# include "config.h"
#endif
#ifndef CORD_BUILD
# define CORD_BUILD
#endif
# include "gc.h"
# include "cord.h"
# include
# include
# include
/* An implementation of the cord primitives. These are the only */
/* Functions that understand the representation. We perform only */
/* minimal checks on arguments to these functions. Out of bounds */
/* arguments to the iteration functions may result in client functions */
/* invoked on garbage data. In most cases, client functions should be */
/* programmed defensively enough that this does not result in memory */
/* smashes. */
typedef void (* oom_fn)(void);
oom_fn CORD_oom_fn = (oom_fn) 0;
# define OUT_OF_MEMORY { if (CORD_oom_fn != (oom_fn) 0) (*CORD_oom_fn)(); \
ABORT("Out of memory"); }
# define ABORT(msg) { fprintf(stderr, "%s\n", msg); abort(); }
typedef unsigned long word;
typedef union {
struct Concatenation {
char null;
char header;
char depth; /* concatenation nesting depth. */
unsigned char left_len;
/* Length of left child if it is sufficiently */
/* short; 0 otherwise. */
# define MAX_LEFT_LEN 255
word len;
CORD left; /* length(left) > 0 */
CORD right; /* length(right) > 0 */
} concatenation;
struct Function {
char null;
char header;
char depth; /* always 0 */
char left_len; /* always 0 */
word len;
CORD_fn fn;
void * client_data;
} function;
struct Generic {
char null;
char header;
char depth;
char left_len;
word len;
} generic;
char string[1];
} CordRep;
# define CONCAT_HDR 1
# define FN_HDR 4
# define SUBSTR_HDR 6
/* Substring nodes are a special case of function nodes. */
/* The client_data field is known to point to a substr_args */
/* structure, and the function is either CORD_apply_access_fn */
/* or CORD_index_access_fn. */
/* The following may be applied only to function and concatenation nodes: */
#define IS_CONCATENATION(s) (((CordRep *)s)->generic.header == CONCAT_HDR)
#define IS_FUNCTION(s) ((((CordRep *)s)->generic.header & FN_HDR) != 0)
#define IS_SUBSTR(s) (((CordRep *)s)->generic.header == SUBSTR_HDR)
#define LEN(s) (((CordRep *)s) -> generic.len)
#define DEPTH(s) (((CordRep *)s) -> generic.depth)
#define GEN_LEN(s) (CORD_IS_STRING(s) ? strlen(s) : LEN(s))
#define LEFT_LEN(c) ((c) -> left_len != 0? \
(c) -> left_len \
: (CORD_IS_STRING((c) -> left) ? \
(c) -> len - GEN_LEN((c) -> right) \
: LEN((c) -> left)))
#define SHORT_LIMIT (sizeof(CordRep) - 1)
/* Cords shorter than this are C strings */
/* Dump the internal representation of x to stdout, with initial */
/* indentation level n. */
void CORD_dump_inner(CORD x, unsigned n)
{
register size_t i;
for (i = 0; i < (size_t)n; i++) {
fputs(" ", stdout);
}
if (x == 0) {
fputs("NIL\n", stdout);
} else if (CORD_IS_STRING(x)) {
for (i = 0; i <= SHORT_LIMIT; i++) {
if (x[i] == '\0') break;
putchar(x[i]);
}
if (x[i] != '\0') fputs("...", stdout);
putchar('\n');
} else if (IS_CONCATENATION(x)) {
register struct Concatenation * conc =
&(((CordRep *)x) -> concatenation);
printf("Concatenation: %p (len: %d, depth: %d)\n",
(void *)x, (int)(conc -> len), (int)(conc -> depth));
CORD_dump_inner(conc -> left, n+1);
CORD_dump_inner(conc -> right, n+1);
} else /* function */{
register struct Function * func =
&(((CordRep *)x) -> function);
if (IS_SUBSTR(x)) printf("(Substring) ");
printf("Function: %p (len: %d): ", (void *)x, (int)(func -> len));
for (i = 0; i < 20 && i < func -> len; i++) {
putchar((*(func -> fn))(i, func -> client_data));
}
if (i < func -> len) fputs("...", stdout);
putchar('\n');
}
}
/* Dump the internal representation of x to stdout */
void CORD_dump(CORD x)
{
CORD_dump_inner(x, 0);
fflush(stdout);
}
CORD CORD_cat_char_star(CORD x, const char * y, size_t leny)
{
register size_t result_len;
register size_t lenx;
register int depth;
if (x == CORD_EMPTY) return(y);
if (leny == 0) return(x);
if (CORD_IS_STRING(x)) {
lenx = strlen(x);
result_len = lenx + leny;
if (result_len <= SHORT_LIMIT) {
register char * result = GC_MALLOC_ATOMIC(result_len+1);
if (result == 0) OUT_OF_MEMORY;
memcpy(result, x, lenx);
memcpy(result + lenx, y, leny);
result[result_len] = '\0';
return((CORD) result);
} else {
depth = 1;
}
} else {
register CORD right;
register CORD left;
register char * new_right;
lenx = LEN(x);
if (leny <= SHORT_LIMIT/2
&& IS_CONCATENATION(x)
&& CORD_IS_STRING(right = ((CordRep *)x) -> concatenation.right)) {
size_t right_len;
/* Merge y into right part of x. */
if (!CORD_IS_STRING(left = ((CordRep *)x) -> concatenation.left)) {
right_len = lenx - LEN(left);
} else if (((CordRep *)x) -> concatenation.left_len != 0) {
right_len = lenx - ((CordRep *)x) -> concatenation.left_len;
} else {
right_len = strlen(right);
}
result_len = right_len + leny; /* length of new_right */
if (result_len <= SHORT_LIMIT) {
new_right = GC_MALLOC_ATOMIC(result_len + 1);
if (new_right == 0) OUT_OF_MEMORY;
memcpy(new_right, right, right_len);
memcpy(new_right + right_len, y, leny);
new_right[result_len] = '\0';
y = new_right;
leny = result_len;
x = left;
lenx -= right_len;
/* Now fall through to concatenate the two pieces: */
}
if (CORD_IS_STRING(x)) {
depth = 1;
} else {
depth = DEPTH(x) + 1;
}
} else {
depth = DEPTH(x) + 1;
}
result_len = lenx + leny;
}
{
/* The general case; lenx, result_len is known: */
register struct Concatenation * result;
result = GC_NEW(struct Concatenation);
if (result == 0) OUT_OF_MEMORY;
result->header = CONCAT_HDR;
result->depth = (char)depth;
if (lenx <= MAX_LEFT_LEN)
result->left_len = (unsigned char)lenx;
result->len = (word)result_len;
result->left = x;
result->right = y;
if (depth >= MAX_DEPTH) {
return(CORD_balance((CORD)result));
} else {
return((CORD) result);
}
}
}
CORD CORD_cat(CORD x, CORD y)
{
register size_t result_len;
register int depth;
register size_t lenx;
if (x == CORD_EMPTY) return(y);
if (y == CORD_EMPTY) return(x);
if (CORD_IS_STRING(y)) {
return(CORD_cat_char_star(x, y, strlen(y)));
} else if (CORD_IS_STRING(x)) {
lenx = strlen(x);
depth = DEPTH(y) + 1;
} else {
register int depthy = DEPTH(y);
lenx = LEN(x);
depth = DEPTH(x) + 1;
if (depthy >= depth) depth = depthy + 1;
}
result_len = lenx + LEN(y);
{
register struct Concatenation * result;
result = GC_NEW(struct Concatenation);
if (result == 0) OUT_OF_MEMORY;
result->header = CONCAT_HDR;
result->depth = (char)depth;
if (lenx <= MAX_LEFT_LEN)
result->left_len = (unsigned char)lenx;
result->len = (word)result_len;
result->left = x;
result->right = y;
if (depth >= MAX_DEPTH) {
return(CORD_balance((CORD)result));
} else {
return((CORD) result);
}
}
}
static CordRep *CORD_from_fn_inner(CORD_fn fn, void * client_data, size_t len)
{
if (len == 0) return(0);
if (len <= SHORT_LIMIT) {
register char * result;
register size_t i;
char buf[SHORT_LIMIT+1];
for (i = 0; i < len; i++) {
char c = (*fn)(i, client_data);
if (c == '\0') goto gen_case;
buf[i] = c;
}
result = GC_MALLOC_ATOMIC(len+1);
if (result == 0) OUT_OF_MEMORY;
memcpy(result, buf, len);
result[len] = '\0';
return (CordRep *)result;
}
gen_case:
{
register struct Function * result;
result = GC_NEW(struct Function);
if (result == 0) OUT_OF_MEMORY;
result->header = FN_HDR;
/* depth is already 0 */
result->len = (word)len;
result->fn = fn;
result->client_data = client_data;
return (CordRep *)result;
}
}
CORD CORD_from_fn(CORD_fn fn, void * client_data, size_t len)
{
return (/* const */ CORD) CORD_from_fn_inner(fn, client_data, len);
}
size_t CORD_len(CORD x)
{
if (x == 0) {
return(0);
} else {
return(GEN_LEN(x));
}
}
struct substr_args {
CordRep * sa_cord;
size_t sa_index;
};
char CORD_index_access_fn(size_t i, void * client_data)
{
register struct substr_args *descr = (struct substr_args *)client_data;
return(((char *)(descr->sa_cord))[i + descr->sa_index]);
}
char CORD_apply_access_fn(size_t i, void * client_data)
{
register struct substr_args *descr = (struct substr_args *)client_data;
register struct Function * fn_cord = &(descr->sa_cord->function);
return((*(fn_cord->fn))(i + descr->sa_index, fn_cord->client_data));
}
/* A version of CORD_substr that simply returns a function node, thus */
/* postponing its work. The fourth argument is a function that may */
/* be used for efficient access to the ith character. */
/* Assumes i >= 0 and i + n < length(x). */
CORD CORD_substr_closure(CORD x, size_t i, size_t n, CORD_fn f)
{
register struct substr_args * sa = GC_NEW(struct substr_args);
CordRep * result;
if (sa == 0) OUT_OF_MEMORY;
sa->sa_cord = (CordRep *)x;
sa->sa_index = i;
result = CORD_from_fn_inner(f, (void *)sa, n);
if ((CORD)result != CORD_EMPTY && 0 == result -> function.null)
result -> function.header = SUBSTR_HDR;
return (CORD)result;
}
# define SUBSTR_LIMIT (10 * SHORT_LIMIT)
/* Substrings of function nodes and flat strings shorter than */
/* this are flat strings. Othewise we use a functional */
/* representation, which is significantly slower to access. */
/* A version of CORD_substr that assumes i >= 0, n > 0, and i + n < length(x).*/
CORD CORD_substr_checked(CORD x, size_t i, size_t n)
{
if (CORD_IS_STRING(x)) {
if (n > SUBSTR_LIMIT) {
return(CORD_substr_closure(x, i, n, CORD_index_access_fn));
} else {
register char * result = GC_MALLOC_ATOMIC(n+1);
if (result == 0) OUT_OF_MEMORY;
strncpy(result, x+i, n);
result[n] = '\0';
return(result);
}
} else if (IS_CONCATENATION(x)) {
register struct Concatenation * conc
= &(((CordRep *)x) -> concatenation);
register size_t left_len;
register size_t right_len;
left_len = LEFT_LEN(conc);
right_len = conc -> len - left_len;
if (i >= left_len) {
if (n == right_len) return(conc -> right);
return(CORD_substr_checked(conc -> right, i - left_len, n));
} else if (i+n <= left_len) {
if (n == left_len) return(conc -> left);
return(CORD_substr_checked(conc -> left, i, n));
} else {
/* Need at least one character from each side. */
register CORD left_part;
register CORD right_part;
register size_t left_part_len = left_len - i;
if (i == 0) {
left_part = conc -> left;
} else {
left_part = CORD_substr_checked(conc -> left, i, left_part_len);
}
if (i + n == right_len + left_len) {
right_part = conc -> right;
} else {
right_part = CORD_substr_checked(conc -> right, 0,
n - left_part_len);
}
return(CORD_cat(left_part, right_part));
}
} else /* function */ {
if (n > SUBSTR_LIMIT) {
if (IS_SUBSTR(x)) {
/* Avoid nesting substring nodes. */
register struct Function * f = &(((CordRep *)x) -> function);
register struct substr_args *descr =
(struct substr_args *)(f -> client_data);
return(CORD_substr_closure((CORD)descr->sa_cord,
i + descr->sa_index,
n, f -> fn));
} else {
return(CORD_substr_closure(x, i, n, CORD_apply_access_fn));
}
} else {
char * result;
register struct Function * f = &(((CordRep *)x) -> function);
char buf[SUBSTR_LIMIT+1];
register char * p = buf;
register size_t j;
register size_t lim = i + n;
for (j = i; j < lim; j++) {
char c = (*(f -> fn))(j, f -> client_data);
if (c == '\0') {
return(CORD_substr_closure(x, i, n, CORD_apply_access_fn));
}
*p++ = c;
}
result = GC_MALLOC_ATOMIC(n+1);
if (result == 0) OUT_OF_MEMORY;
memcpy(result, buf, n);
result[n] = '\0';
return(result);
}
}
}
CORD CORD_substr(CORD x, size_t i, size_t n)
{
register size_t len = CORD_len(x);
if (i >= len || n == 0) return(0);
if (i + n > len) n = len - i;
return(CORD_substr_checked(x, i, n));
}
/* See cord.h for definition. We assume i is in range. */
int CORD_iter5(CORD x, size_t i, CORD_iter_fn f1,
CORD_batched_iter_fn f2, void * client_data)
{
if (x == 0) return(0);
if (CORD_IS_STRING(x)) {
register const char *p = x+i;
if (*p == '\0') ABORT("2nd arg to CORD_iter5 too big");
if (f2 != CORD_NO_FN) {
return((*f2)(p, client_data));
} else {
while (*p) {
if ((*f1)(*p, client_data)) return(1);
p++;
}
return(0);
}
} else if (IS_CONCATENATION(x)) {
register struct Concatenation * conc
= &(((CordRep *)x) -> concatenation);
if (i > 0) {
register size_t left_len = LEFT_LEN(conc);
if (i >= left_len) {
return(CORD_iter5(conc -> right, i - left_len, f1, f2,
client_data));
}
}
if (CORD_iter5(conc -> left, i, f1, f2, client_data)) {
return(1);
}
return(CORD_iter5(conc -> right, 0, f1, f2, client_data));
} else /* function */ {
register struct Function * f = &(((CordRep *)x) -> function);
register size_t j;
register size_t lim = f -> len;
for (j = i; j < lim; j++) {
if ((*f1)((*(f -> fn))(j, f -> client_data), client_data)) {
return(1);
}
}
return(0);
}
}
#undef CORD_iter
int CORD_iter(CORD x, CORD_iter_fn f1, void * client_data)
{
return(CORD_iter5(x, 0, f1, CORD_NO_FN, client_data));
}
int CORD_riter4(CORD x, size_t i, CORD_iter_fn f1, void * client_data)
{
if (x == 0) return(0);
if (CORD_IS_STRING(x)) {
register const char *p = x + i;
for(;;) {
char c = *p;
if (c == '\0') ABORT("2nd arg to CORD_riter4 too big");
if ((*f1)(c, client_data)) return(1);
if (p == x) break;
p--;
}
return(0);
} else if (IS_CONCATENATION(x)) {
register struct Concatenation * conc
= &(((CordRep *)x) -> concatenation);
register CORD left_part = conc -> left;
register size_t left_len;
left_len = LEFT_LEN(conc);
if (i >= left_len) {
if (CORD_riter4(conc -> right, i - left_len, f1, client_data)) {
return(1);
}
return(CORD_riter4(left_part, left_len - 1, f1, client_data));
} else {
return(CORD_riter4(left_part, i, f1, client_data));
}
} else /* function */ {
register struct Function * f = &(((CordRep *)x) -> function);
register size_t j;
for (j = i; ; j--) {
if ((*f1)((*(f -> fn))(j, f -> client_data), client_data)) {
return(1);
}
if (j == 0) return(0);
}
}
}
int CORD_riter(CORD x, CORD_iter_fn f1, void * client_data)
{
size_t len = CORD_len(x);
if (len == 0) return(0);
return(CORD_riter4(x, len - 1, f1, client_data));
}
/*
* The following functions are concerned with balancing cords.
* Strategy:
* Scan the cord from left to right, keeping the cord scanned so far
* as a forest of balanced trees of exponentially decreasing length.
* When a new subtree needs to be added to the forest, we concatenate all
* shorter ones to the new tree in the appropriate order, and then insert
* the result into the forest.
* Crucial invariants:
* 1. The concatenation of the forest (in decreasing order) with the
* unscanned part of the rope is equal to the rope being balanced.
* 2. All trees in the forest are balanced.
* 3. forest[i] has depth at most i.
*/
typedef struct {
CORD c;
size_t len; /* Actual length of c */
} ForestElement;
static size_t min_len [ MAX_DEPTH ];
static int min_len_init = 0;
int CORD_max_len;
typedef ForestElement Forest [ MAX_DEPTH ];
/* forest[i].len >= fib(i+1) */
/* The string is the concatenation */
/* of the forest in order of DECREASING */
/* indices. */
void CORD_init_min_len(void)
{
register int i;
size_t last, previous;
min_len[0] = previous = 1;
min_len[1] = last = 2;
for (i = 2; i < MAX_DEPTH; i++) {
size_t current = last + previous;
if (current < last) /* overflow */ current = last;
min_len[i] = current;
previous = last;
last = current;
}
CORD_max_len = (int)last - 1;
min_len_init = 1;
}
void CORD_init_forest(ForestElement * forest, size_t max_len)
{
register int i;
for (i = 0; i < MAX_DEPTH; i++) {
forest[i].c = 0;
if (min_len[i] > max_len) return;
}
ABORT("Cord too long");
}
/* Add a leaf to the appropriate level in the forest, cleaning */
/* out lower levels as necessary. */
/* Also works if x is a balanced tree of concatenations; however */
/* in this case an extra concatenation node may be inserted above x; */
/* This node should not be counted in the statement of the invariants. */
void CORD_add_forest(ForestElement * forest, CORD x, size_t len)
{
register int i = 0;
register CORD sum = CORD_EMPTY;
register size_t sum_len = 0;
while (len > min_len[i + 1]) {
if (forest[i].c != 0) {
sum = CORD_cat(forest[i].c, sum);
sum_len += forest[i].len;
forest[i].c = 0;
}
i++;
}
/* Sum has depth at most 1 greter than what would be required */
/* for balance. */
sum = CORD_cat(sum, x);
sum_len += len;
/* If x was a leaf, then sum is now balanced. To see this */
/* consider the two cases in which forest[i-1] either is or is */
/* not empty. */
while (sum_len >= min_len[i]) {
if (forest[i].c != 0) {
sum = CORD_cat(forest[i].c, sum);
sum_len += forest[i].len;
/* This is again balanced, since sum was balanced, and has */
/* allowable depth that differs from i by at most 1. */
forest[i].c = 0;
}
i++;
}
i--;
forest[i].c = sum;
forest[i].len = sum_len;
}
CORD CORD_concat_forest(ForestElement * forest, size_t expected_len)
{
register int i = 0;
CORD sum = 0;
size_t sum_len = 0;
while (sum_len != expected_len) {
if (forest[i].c != 0) {
sum = CORD_cat(forest[i].c, sum);
sum_len += forest[i].len;
}
i++;
}
return(sum);
}
/* Insert the frontier of x into forest. Balanced subtrees are */
/* treated as leaves. This potentially adds one to the depth */
/* of the final tree. */
void CORD_balance_insert(CORD x, size_t len, ForestElement * forest)
{
register int depth;
if (CORD_IS_STRING(x)) {
CORD_add_forest(forest, x, len);
} else if (IS_CONCATENATION(x)
&& ((depth = DEPTH(x)) >= MAX_DEPTH
|| len < min_len[depth])) {
register struct Concatenation * conc
= &(((CordRep *)x) -> concatenation);
size_t left_len = LEFT_LEN(conc);
CORD_balance_insert(conc -> left, left_len, forest);
CORD_balance_insert(conc -> right, len - left_len, forest);
} else /* function or balanced */ {
CORD_add_forest(forest, x, len);
}
}
CORD CORD_balance(CORD x)
{
Forest forest;
register size_t len;
if (x == 0) return(0);
if (CORD_IS_STRING(x)) return(x);
if (!min_len_init) CORD_init_min_len();
len = LEN(x);
CORD_init_forest(forest, len);
CORD_balance_insert(x, len, forest);
return(CORD_concat_forest(forest, len));
}
/* Position primitives */
/* Private routines to deal with the hard cases only: */
/* P contains a prefix of the path to cur_pos. Extend it to a full */
/* path and set up leaf info. */
/* Return 0 if past the end of cord, 1 o.w. */
void CORD__extend_path(register CORD_pos p)
{
register struct CORD_pe * current_pe = &(p[0].path[p[0].path_len]);
register CORD top = current_pe -> pe_cord;
register size_t pos = p[0].cur_pos;
register size_t top_pos = current_pe -> pe_start_pos;
register size_t top_len = GEN_LEN(top);
/* Fill in the rest of the path. */
while(!CORD_IS_STRING(top) && IS_CONCATENATION(top)) {
register struct Concatenation * conc =
&(((CordRep *)top) -> concatenation);
register size_t left_len;
left_len = LEFT_LEN(conc);
current_pe++;
if (pos >= top_pos + left_len) {
current_pe -> pe_cord = top = conc -> right;
current_pe -> pe_start_pos = top_pos = top_pos + left_len;
top_len -= left_len;
} else {
current_pe -> pe_cord = top = conc -> left;
current_pe -> pe_start_pos = top_pos;
top_len = left_len;
}
p[0].path_len++;
}
/* Fill in leaf description for fast access. */
if (CORD_IS_STRING(top)) {
p[0].cur_leaf = top;
p[0].cur_start = top_pos;
p[0].cur_end = top_pos + top_len;
} else {
p[0].cur_end = 0;
}
if (pos >= top_pos + top_len) p[0].path_len = CORD_POS_INVALID;
}
char CORD__pos_fetch(register CORD_pos p)
{
/* Leaf is a function node */
struct CORD_pe * pe = &((p)[0].path[(p)[0].path_len]);
CORD leaf = pe -> pe_cord;
register struct Function * f = &(((CordRep *)leaf) -> function);
if (!IS_FUNCTION(leaf)) ABORT("CORD_pos_fetch: bad leaf");
return ((*(f -> fn))(p[0].cur_pos - pe -> pe_start_pos, f -> client_data));
}
void CORD__next(register CORD_pos p)
{
register size_t cur_pos = p[0].cur_pos + 1;
register struct CORD_pe * current_pe = &((p)[0].path[(p)[0].path_len]);
register CORD leaf = current_pe -> pe_cord;
/* Leaf is not a string or we're at end of leaf */
p[0].cur_pos = cur_pos;
if (!CORD_IS_STRING(leaf)) {
/* Function leaf */
register struct Function * f = &(((CordRep *)leaf) -> function);
register size_t start_pos = current_pe -> pe_start_pos;
register size_t end_pos = start_pos + f -> len;
if (cur_pos < end_pos) {
/* Fill cache and return. */
register size_t i;
register size_t limit = cur_pos + FUNCTION_BUF_SZ;
register CORD_fn fn = f -> fn;
register void * client_data = f -> client_data;
if (limit > end_pos) {
limit = end_pos;
}
for (i = cur_pos; i < limit; i++) {
p[0].function_buf[i - cur_pos] =
(*fn)(i - start_pos, client_data);
}
p[0].cur_start = cur_pos;
p[0].cur_leaf = p[0].function_buf;
p[0].cur_end = limit;
return;
}
}
/* End of leaf */
/* Pop the stack until we find two concatenation nodes with the */
/* same start position: this implies we were in left part. */
{
while (p[0].path_len > 0
&& current_pe[0].pe_start_pos != current_pe[-1].pe_start_pos) {
p[0].path_len--;
current_pe--;
}
if (p[0].path_len == 0) {
p[0].path_len = CORD_POS_INVALID;
return;
}
}
p[0].path_len--;
CORD__extend_path(p);
}
void CORD__prev(register CORD_pos p)
{
register struct CORD_pe * pe = &(p[0].path[p[0].path_len]);
if (p[0].cur_pos == 0) {
p[0].path_len = CORD_POS_INVALID;
return;
}
p[0].cur_pos--;
if (p[0].cur_pos >= pe -> pe_start_pos) return;
/* Beginning of leaf */
/* Pop the stack until we find two concatenation nodes with the */
/* different start position: this implies we were in right part. */
{
register struct CORD_pe * current_pe = &((p)[0].path[(p)[0].path_len]);
while (p[0].path_len > 0
&& current_pe[0].pe_start_pos == current_pe[-1].pe_start_pos) {
p[0].path_len--;
current_pe--;
}
}
p[0].path_len--;
CORD__extend_path(p);
}
#undef CORD_pos_fetch
#undef CORD_next
#undef CORD_prev
#undef CORD_pos_to_index
#undef CORD_pos_to_cord
#undef CORD_pos_valid
char CORD_pos_fetch(register CORD_pos p)
{
if (p[0].cur_end != 0) {
return(p[0].cur_leaf[p[0].cur_pos - p[0].cur_start]);
} else {
return(CORD__pos_fetch(p));
}
}
void CORD_next(CORD_pos p)
{
if (p[0].cur_pos + 1 < p[0].cur_end) {
p[0].cur_pos++;
} else {
CORD__next(p);
}
}
void CORD_prev(CORD_pos p)
{
if (p[0].cur_end != 0 && p[0].cur_pos > p[0].cur_start) {
p[0].cur_pos--;
} else {
CORD__prev(p);
}
}
size_t CORD_pos_to_index(CORD_pos p)
{
return(p[0].cur_pos);
}
CORD CORD_pos_to_cord(CORD_pos p)
{
return(p[0].path[0].pe_cord);
}
int CORD_pos_valid(CORD_pos p)
{
return(p[0].path_len != CORD_POS_INVALID);
}
void CORD_set_pos(CORD_pos p, CORD x, size_t i)
{
if (x == CORD_EMPTY) {
p[0].path_len = CORD_POS_INVALID;
return;
}
p[0].path[0].pe_cord = x;
p[0].path[0].pe_start_pos = 0;
p[0].path_len = 0;
p[0].cur_pos = i;
CORD__extend_path(p);
}
Gauche-0.9.6/gc/cord/cordprnt.c 0000664 0000764 0000764 00000034777 13227007433 015330 0 ustar shiro shiro /*
* Copyright (c) 1993-1994 by Xerox Corporation. All rights reserved.
*
* THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY EXPRESSED
* OR IMPLIED. ANY USE IS AT YOUR OWN RISK.
*
* Permission is hereby granted to use or copy this program
* for any purpose, provided the above notices are retained on all copies.
* Permission to modify the code and to distribute modified code is granted,
* provided the above notices are retained, and a notice that the code was
* modified is included with the above copyright notice.
*/
/* An sprintf implementation that understands cords. This is probably */
/* not terribly portable. It assumes an ANSI stdarg.h. It further */
/* assumes that I can make copies of va_list variables, and read */
/* arguments repeatedly by applying va_arg to the copies. This */
/* could be avoided at some performance cost. */
/* We also assume that unsigned and signed integers of various kinds */
/* have the same sizes, and can be cast back and forth. */
/* We assume that void * and char * have the same size. */
/* All this cruft is needed because we want to rely on the underlying */
/* sprintf implementation whenever possible. */
#ifdef HAVE_CONFIG_H
# include "config.h"
#endif
#ifndef CORD_BUILD
# define CORD_BUILD
#endif
#include "cord.h"
#include "ec.h"
#include
#include
#include
#include
#include "gc.h"
#define CONV_SPEC_LEN 50 /* Maximum length of a single */
/* conversion specification. */
#define CONV_RESULT_LEN 50 /* Maximum length of any */
/* conversion with default */
/* width and prec. */
#define OUT_OF_MEMORY do { \
if (CORD_oom_fn != 0) (*CORD_oom_fn)(); \
fprintf(stderr, "Out of memory\n"); \
abort(); \
} while (0)
static int ec_len(CORD_ec x)
{
return(CORD_len(x[0].ec_cord) + (x[0].ec_bufptr - x[0].ec_buf));
}
/* Possible nonumeric precision values. */
# define NONE -1
# define VARIABLE -2
/* Copy the conversion specification from CORD_pos into the buffer buf */
/* Return negative on error. */
/* Source initially points one past the leading %. */
/* It is left pointing at the conversion type. */
/* Assign field width and precision to *width and *prec. */
/* If width or prec is *, VARIABLE is assigned. */
/* Set *left to 1 if left adjustment flag is present. */
/* Set *long_arg to 1 if long flag ('l' or 'L') is present, or to */
/* -1 if 'h' is present. */
static int extract_conv_spec(CORD_pos source, char *buf,
int * width, int *prec, int *left, int * long_arg)
{
register int result = 0;
register int current_number = 0;
register int saw_period = 0;
register int saw_number = 0;
register int chars_so_far = 0;
register char current;
*width = NONE;
buf[chars_so_far++] = '%';
while(CORD_pos_valid(source)) {
if (chars_so_far >= CONV_SPEC_LEN) return(-1);
current = CORD_pos_fetch(source);
buf[chars_so_far++] = current;
switch(current) {
case '*':
saw_number = 1;
current_number = VARIABLE;
break;
case '0':
if (!saw_number) {
/* Zero fill flag; ignore */
break;
}
current_number *= 10;
break;
case '1':
case '2':
case '3':
case '4':
case '5':
case '6':
case '7':
case '8':
case '9':
saw_number = 1;
current_number *= 10;
current_number += current - '0';
break;
case '.':
saw_period = 1;
if(saw_number) {
*width = current_number;
saw_number = 0;
}
current_number = 0;
break;
case 'l':
case 'L':
*long_arg = 1;
current_number = 0;
break;
case 'h':
*long_arg = -1;
current_number = 0;
break;
case ' ':
case '+':
case '#':
current_number = 0;
break;
case '-':
*left = 1;
current_number = 0;
break;
case 'd':
case 'i':
case 'o':
case 'u':
case 'x':
case 'X':
case 'f':
case 'e':
case 'E':
case 'g':
case 'G':
case 'c':
case 'C':
case 's':
case 'S':
case 'p':
case 'n':
case 'r':
goto done;
default:
return(-1);
}
CORD_next(source);
}
return(-1);
done:
if (saw_number) {
if (saw_period) {
*prec = current_number;
} else {
*prec = NONE;
*width = current_number;
}
} else {
*prec = NONE;
}
buf[chars_so_far] = '\0';
return(result);
}
#if defined(DJGPP) || defined(__STRICT_ANSI__)
/* vsnprintf is missing in DJGPP (v2.0.3) */
# define GC_VSNPRINTF(buf, bufsz, format, args) vsprintf(buf, format, args)
#elif defined(_MSC_VER)
# ifdef MSWINCE
/* _vsnprintf is deprecated in WinCE */
# define GC_VSNPRINTF StringCchVPrintfA
# else
# define GC_VSNPRINTF _vsnprintf
# endif
#else
# define GC_VSNPRINTF vsnprintf
#endif
int CORD_vsprintf(CORD * out, CORD format, va_list args)
{
CORD_ec result;
register int count;
register char current;
CORD_pos pos;
char conv_spec[CONV_SPEC_LEN + 1];
CORD_ec_init(result);
for (CORD_set_pos(pos, format, 0); CORD_pos_valid(pos); CORD_next(pos)) {
current = CORD_pos_fetch(pos);
if (current == '%') {
CORD_next(pos);
if (!CORD_pos_valid(pos)) return(-1);
current = CORD_pos_fetch(pos);
if (current == '%') {
CORD_ec_append(result, current);
} else {
int width, prec;
int left_adj = 0;
int long_arg = 0;
CORD arg;
size_t len;
if (extract_conv_spec(pos, conv_spec,
&width, &prec,
&left_adj, &long_arg) < 0) {
return(-1);
}
current = CORD_pos_fetch(pos);
switch(current) {
case 'n':
/* Assign length to next arg */
if (long_arg == 0) {
int * pos_ptr;
pos_ptr = va_arg(args, int *);
*pos_ptr = ec_len(result);
} else if (long_arg > 0) {
long * pos_ptr;
pos_ptr = va_arg(args, long *);
*pos_ptr = ec_len(result);
} else {
short * pos_ptr;
pos_ptr = va_arg(args, short *);
*pos_ptr = ec_len(result);
}
goto done;
case 'r':
/* Append cord and any padding */
if (width == VARIABLE) width = va_arg(args, int);
if (prec == VARIABLE) prec = va_arg(args, int);
arg = va_arg(args, CORD);
len = CORD_len(arg);
if (prec != NONE && len > (size_t)prec) {
if (prec < 0) return(-1);
arg = CORD_substr(arg, 0, prec);
len = (unsigned)prec;
}
if (width != NONE && len < (size_t)width) {
char * blanks = GC_MALLOC_ATOMIC(width-len+1);
if (NULL == blanks) OUT_OF_MEMORY;
memset(blanks, ' ', width-len);
blanks[width-len] = '\0';
if (left_adj) {
arg = CORD_cat(arg, blanks);
} else {
arg = CORD_cat(blanks, arg);
}
}
CORD_ec_append_cord(result, arg);
goto done;
case 'c':
if (width == NONE && prec == NONE) {
register char c;
c = (char)va_arg(args, int);
CORD_ec_append(result, c);
goto done;
}
break;
case 's':
if (width == NONE && prec == NONE) {
char * str = va_arg(args, char *);
register char c;
while ((c = *str++)) {
CORD_ec_append(result, c);
}
goto done;
}
break;
default:
break;
}
/* Use standard sprintf to perform conversion */
{
register char * buf;
va_list vsprintf_args;
int max_size = 0;
int res = 0;
# if defined(CPPCHECK)
va_copy(vsprintf_args, args);
# elif defined(__va_copy)
__va_copy(vsprintf_args, args);
# elif defined(__GNUC__) && !defined(__DJGPP__) \
&& !defined(__EMX__) /* and probably in other cases */
va_copy(vsprintf_args, args);
# else
vsprintf_args = args;
# endif
if (width == VARIABLE) width = va_arg(args, int);
if (prec == VARIABLE) prec = va_arg(args, int);
if (width != NONE) max_size = width;
if (prec != NONE && prec > max_size) max_size = prec;
max_size += CONV_RESULT_LEN;
if (max_size >= CORD_BUFSZ) {
buf = GC_MALLOC_ATOMIC(max_size + 1);
if (NULL == buf) OUT_OF_MEMORY;
} else {
if (CORD_BUFSZ - (result[0].ec_bufptr-result[0].ec_buf)
< max_size) {
CORD_ec_flush_buf(result);
}
buf = result[0].ec_bufptr;
}
switch(current) {
case 'd':
case 'i':
case 'o':
case 'u':
case 'x':
case 'X':
case 'c':
if (long_arg <= 0) {
(void) va_arg(args, int);
} else /* long_arg > 0 */ {
(void) va_arg(args, long);
}
break;
case 's':
case 'p':
(void) va_arg(args, char *);
break;
case 'f':
case 'e':
case 'E':
case 'g':
case 'G':
(void) va_arg(args, double);
break;
default:
res = -1;
}
if (0 == res)
res = GC_VSNPRINTF(buf, max_size + 1, conv_spec,
vsprintf_args);
# if defined(CPPCHECK) || defined(__va_copy) \
|| (defined(__GNUC__) && !defined(__DJGPP__) \
&& !defined(__EMX__))
va_end(vsprintf_args);
# endif
len = (size_t)res;
if ((char *)(GC_word)res == buf) {
/* old style vsprintf */
len = strlen(buf);
} else if (res < 0) {
return(-1);
}
if (buf != result[0].ec_bufptr) {
register char c;
while ((c = *buf++)) {
CORD_ec_append(result, c);
}
} else {
result[0].ec_bufptr = buf + len;
}
}
done:;
}
} else {
CORD_ec_append(result, current);
}
}
count = ec_len(result);
*out = CORD_balance(CORD_ec_to_cord(result));
return(count);
}
int CORD_sprintf(CORD * out, CORD format, ...)
{
va_list args;
int result;
va_start(args, format);
result = CORD_vsprintf(out, format, args);
va_end(args);
return(result);
}
int CORD_fprintf(FILE * f, CORD format, ...)
{
va_list args;
int result;
CORD out = CORD_EMPTY; /* initialized to prevent compiler warning */
va_start(args, format);
result = CORD_vsprintf(&out, format, args);
va_end(args);
if (result > 0) CORD_put(out, f);
return(result);
}
int CORD_vfprintf(FILE * f, CORD format, va_list args)
{
int result;
CORD out = CORD_EMPTY;
result = CORD_vsprintf(&out, format, args);
if (result > 0) CORD_put(out, f);
return(result);
}
int CORD_printf(CORD format, ...)
{
va_list args;
int result;
CORD out = CORD_EMPTY;
va_start(args, format);
result = CORD_vsprintf(&out, format, args);
va_end(args);
if (result > 0) CORD_put(out, stdout);
return(result);
}
int CORD_vprintf(CORD format, va_list args)
{
int result;
CORD out = CORD_EMPTY;
result = CORD_vsprintf(&out, format, args);
if (result > 0) CORD_put(out, stdout);
return(result);
}
Gauche-0.9.6/gc/cord/cordxtra.c 0000664 0000764 0000764 00000043546 13227007433 015315 0 ustar shiro shiro /*
* Copyright (c) 1993-1994 by Xerox Corporation. All rights reserved.
*
* THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY EXPRESSED
* OR IMPLIED. ANY USE IS AT YOUR OWN RISK.
*
* Permission is hereby granted to use or copy this program
* for any purpose, provided the above notices are retained on all copies.
* Permission to modify the code and to distribute modified code is granted,
* provided the above notices are retained, and a notice that the code was
* modified is included with the above copyright notice.
*/
/*
* These are functions on cords that do not need to understand their
* implementation. They serve also serve as example client code for
* cord_basics.
*/
#ifdef HAVE_CONFIG_H
# include "config.h"
#endif
#ifndef CORD_BUILD
# define CORD_BUILD
#endif
# include
# include
# include
# include
# include "cord.h"
# include "ec.h"
# define I_HIDE_POINTERS /* So we get access to allocation lock. */
/* We use this for lazy file reading, */
/* so that we remain independent */
/* of the threads primitives. */
# include "gc.h"
/* For now we assume that pointer reads and writes are atomic, */
/* i.e. another thread always sees the state before or after */
/* a write. This might be false on a Motorola M68K with */
/* pointers that are not 32-bit aligned. But there probably */
/* aren't too many threads packages running on those. */
# define ATOMIC_WRITE(x,y) (x) = (y)
# define ATOMIC_READ(x) (*(x))
/* The standard says these are in stdio.h, but they aren't always: */
# ifndef SEEK_SET
# define SEEK_SET 0
# endif
# ifndef SEEK_END
# define SEEK_END 2
# endif
# define BUFSZ 2048 /* Size of stack allocated buffers when */
/* we want large buffers. */
typedef void (* oom_fn)(void);
# define OUT_OF_MEMORY { if (CORD_oom_fn != (oom_fn) 0) (*CORD_oom_fn)(); \
ABORT("Out of memory"); }
# define ABORT(msg) { fprintf(stderr, "%s\n", msg); abort(); }
#if __GNUC__ > 3 || (__GNUC__ == 3 && __GNUC_MINOR__ >= 4)
# define CORD_ATTR_UNUSED __attribute__((__unused__))
#else
# define CORD_ATTR_UNUSED /* empty */
#endif
CORD CORD_cat_char(CORD x, char c)
{
register char * string;
if (c == '\0') return(CORD_cat(x, CORD_nul(1)));
string = GC_MALLOC_ATOMIC(2);
if (string == 0) OUT_OF_MEMORY;
string[0] = c;
string[1] = '\0';
return(CORD_cat_char_star(x, string, 1));
}
CORD CORD_catn(int nargs, ...)
{
register CORD result = CORD_EMPTY;
va_list args;
register int i;
va_start(args, nargs);
for (i = 0; i < nargs; i++) {
register CORD next = va_arg(args, CORD);
result = CORD_cat(result, next);
}
va_end(args);
return(result);
}
typedef struct {
size_t len;
size_t count;
char * buf;
} CORD_fill_data;
int CORD_fill_proc(char c, void * client_data)
{
register CORD_fill_data * d = (CORD_fill_data *)client_data;
register size_t count = d -> count;
(d -> buf)[count] = c;
d -> count = ++count;
if (count >= d -> len) {
return(1);
} else {
return(0);
}
}
int CORD_batched_fill_proc(const char * s, void * client_data)
{
register CORD_fill_data * d = (CORD_fill_data *)client_data;
register size_t count = d -> count;
register size_t max = d -> len;
register char * buf = d -> buf;
register const char * t = s;
while((buf[count] = *t++) != '\0') {
count++;
if (count >= max) {
d -> count = count;
return(1);
}
}
d -> count = count;
return(0);
}
/* Fill buf with len characters starting at i. */
/* Assumes len characters are available in buf. */
/* Return 1 if buf is filled fully (and len is */
/* non-zero), 0 otherwise. */
int CORD_fill_buf(CORD x, size_t i, size_t len, char * buf)
{
CORD_fill_data fd;
fd.len = len;
fd.buf = buf;
fd.count = 0;
return CORD_iter5(x, i, CORD_fill_proc, CORD_batched_fill_proc, &fd);
}
int CORD_cmp(CORD x, CORD y)
{
CORD_pos xpos;
CORD_pos ypos;
if (y == CORD_EMPTY) return(x != CORD_EMPTY);
if (x == CORD_EMPTY) return(-1);
if (CORD_IS_STRING(y) && CORD_IS_STRING(x)) return(strcmp(x,y));
CORD_set_pos(xpos, x, 0);
CORD_set_pos(ypos, y, 0);
for(;;) {
size_t avail, yavail;
if (!CORD_pos_valid(xpos)) {
if (CORD_pos_valid(ypos)) {
return(-1);
} else {
return(0);
}
}
if (!CORD_pos_valid(ypos)) {
return(1);
}
avail = CORD_pos_chars_left(xpos);
if (avail == 0
|| (yavail = CORD_pos_chars_left(ypos)) == 0) {
register char xcurrent = CORD_pos_fetch(xpos);
register char ycurrent = CORD_pos_fetch(ypos);
if (xcurrent != ycurrent) return(xcurrent - ycurrent);
CORD_next(xpos);
CORD_next(ypos);
} else {
/* process as many characters as we can */
register int result;
if (avail > yavail) avail = yavail;
result = strncmp(CORD_pos_cur_char_addr(xpos),
CORD_pos_cur_char_addr(ypos), avail);
if (result != 0) return(result);
CORD_pos_advance(xpos, avail);
CORD_pos_advance(ypos, avail);
}
}
}
int CORD_ncmp(CORD x, size_t x_start, CORD y, size_t y_start, size_t len)
{
CORD_pos xpos;
CORD_pos ypos;
register size_t count;
CORD_set_pos(xpos, x, x_start);
CORD_set_pos(ypos, y, y_start);
for(count = 0; count < len;) {
long avail, yavail;
if (!CORD_pos_valid(xpos)) {
if (CORD_pos_valid(ypos)) {
return(-1);
} else {
return(0);
}
}
if (!CORD_pos_valid(ypos)) {
return(1);
}
if ((avail = CORD_pos_chars_left(xpos)) <= 0
|| (yavail = CORD_pos_chars_left(ypos)) <= 0) {
register char xcurrent = CORD_pos_fetch(xpos);
register char ycurrent = CORD_pos_fetch(ypos);
if (xcurrent != ycurrent) return(xcurrent - ycurrent);
CORD_next(xpos);
CORD_next(ypos);
count++;
} else {
/* process as many characters as we can */
register int result;
if (avail > yavail) avail = yavail;
count += avail;
if (count > len)
avail -= (long)(count - len);
result = strncmp(CORD_pos_cur_char_addr(xpos),
CORD_pos_cur_char_addr(ypos), (size_t)avail);
if (result != 0) return(result);
CORD_pos_advance(xpos, (size_t)avail);
CORD_pos_advance(ypos, (size_t)avail);
}
}
return(0);
}
char * CORD_to_char_star(CORD x)
{
register size_t len = CORD_len(x);
char * result = GC_MALLOC_ATOMIC(len + 1);
if (result == 0) OUT_OF_MEMORY;
if (len > 0 && CORD_fill_buf(x, 0, len, result) != 1)
ABORT("CORD_fill_buf malfunction");
result[len] = '\0';
return(result);
}
CORD CORD_from_char_star(const char *s)
{
char * result;
size_t len = strlen(s);
if (0 == len) return(CORD_EMPTY);
result = GC_MALLOC_ATOMIC(len + 1);
if (result == 0) OUT_OF_MEMORY;
memcpy(result, s, len+1);
return(result);
}
const char * CORD_to_const_char_star(CORD x)
{
if (x == 0) return("");
if (CORD_IS_STRING(x)) return((const char *)x);
return(CORD_to_char_star(x));
}
char CORD_fetch(CORD x, size_t i)
{
CORD_pos xpos;
CORD_set_pos(xpos, x, i);
if (!CORD_pos_valid(xpos)) ABORT("bad index?");
return(CORD_pos_fetch(xpos));
}
int CORD_put_proc(char c, void * client_data)
{
register FILE * f = (FILE *)client_data;
return(putc(c, f) == EOF);
}
int CORD_batched_put_proc(const char * s, void * client_data)
{
register FILE * f = (FILE *)client_data;
return(fputs(s, f) == EOF);
}
int CORD_put(CORD x, FILE * f)
{
if (CORD_iter5(x, 0, CORD_put_proc, CORD_batched_put_proc, f)) {
return(EOF);
} else {
return(1);
}
}
typedef struct {
size_t pos; /* Current position in the cord */
char target; /* Character we're looking for */
} chr_data;
int CORD_chr_proc(char c, void * client_data)
{
register chr_data * d = (chr_data *)client_data;
if (c == d -> target) return(1);
(d -> pos) ++;
return(0);
}
int CORD_rchr_proc(char c, void * client_data)
{
register chr_data * d = (chr_data *)client_data;
if (c == d -> target) return(1);
(d -> pos) --;
return(0);
}
int CORD_batched_chr_proc(const char *s, void * client_data)
{
register chr_data * d = (chr_data *)client_data;
register char * occ = strchr(s, d -> target);
if (occ == 0) {
d -> pos += strlen(s);
return(0);
} else {
d -> pos += occ - s;
return(1);
}
}
size_t CORD_chr(CORD x, size_t i, int c)
{
chr_data d;
d.pos = i;
d.target = (char)c;
if (CORD_iter5(x, i, CORD_chr_proc, CORD_batched_chr_proc, &d)) {
return(d.pos);
} else {
return(CORD_NOT_FOUND);
}
}
size_t CORD_rchr(CORD x, size_t i, int c)
{
chr_data d;
d.pos = i;
d.target = (char)c;
if (CORD_riter4(x, i, CORD_rchr_proc, &d)) {
return(d.pos);
} else {
return(CORD_NOT_FOUND);
}
}
/* Find the first occurrence of s in x at position start or later. */
/* This uses an asymptotically poor algorithm, which should typically */
/* perform acceptably. We compare the first few characters directly, */
/* and call CORD_ncmp whenever there is a partial match. */
/* This has the advantage that we allocate very little, or not at all. */
/* It's very fast if there are few close misses. */
size_t CORD_str(CORD x, size_t start, CORD s)
{
CORD_pos xpos;
size_t xlen = CORD_len(x);
size_t slen;
register size_t start_len;
const char * s_start;
unsigned long s_buf = 0; /* The first few characters of s */
unsigned long x_buf = 0; /* Start of candidate substring. */
/* Initialized only to make compilers */
/* happy. */
unsigned long mask = 0;
register size_t i;
register size_t match_pos;
if (s == CORD_EMPTY) return(start);
if (CORD_IS_STRING(s)) {
s_start = s;
slen = strlen(s);
} else {
s_start = CORD_to_char_star(CORD_substr(s, 0, sizeof(unsigned long)));
slen = CORD_len(s);
}
if (xlen < start || xlen - start < slen) return(CORD_NOT_FOUND);
start_len = slen;
if (start_len > sizeof(unsigned long)) start_len = sizeof(unsigned long);
CORD_set_pos(xpos, x, start);
for (i = 0; i < start_len; i++) {
mask <<= 8;
mask |= 0xff;
s_buf <<= 8;
s_buf |= (unsigned char)s_start[i];
x_buf <<= 8;
x_buf |= (unsigned char)CORD_pos_fetch(xpos);
CORD_next(xpos);
}
for (match_pos = start; ; match_pos++) {
if ((x_buf & mask) == s_buf) {
if (slen == start_len ||
CORD_ncmp(x, match_pos + start_len,
s, start_len, slen - start_len) == 0) {
return(match_pos);
}
}
if ( match_pos == xlen - slen ) {
return(CORD_NOT_FOUND);
}
x_buf <<= 8;
x_buf |= (unsigned char)CORD_pos_fetch(xpos);
CORD_next(xpos);
}
}
void CORD_ec_flush_buf(CORD_ec x)
{
register size_t len = x[0].ec_bufptr - x[0].ec_buf;
char * s;
if (len == 0) return;
s = GC_MALLOC_ATOMIC(len+1);
if (NULL == s) OUT_OF_MEMORY;
memcpy(s, x[0].ec_buf, len);
s[len] = '\0';
x[0].ec_cord = CORD_cat_char_star(x[0].ec_cord, s, len);
x[0].ec_bufptr = x[0].ec_buf;
}
void CORD_ec_append_cord(CORD_ec x, CORD s)
{
CORD_ec_flush_buf(x);
x[0].ec_cord = CORD_cat(x[0].ec_cord, s);
}
char CORD_nul_func(size_t i CORD_ATTR_UNUSED, void * client_data)
{
return (char)(GC_word)client_data;
}
CORD CORD_chars(char c, size_t i)
{
return CORD_from_fn(CORD_nul_func, (void *)(GC_word)(unsigned char)c, i);
}
CORD CORD_from_file_eager(FILE * f)
{
CORD_ec ecord;
CORD_ec_init(ecord);
for(;;) {
int c = getc(f);
if (c == 0) {
/* Append the right number of NULs */
/* Note that any string of NULs is represented in 4 words, */
/* independent of its length. */
register size_t count = 1;
CORD_ec_flush_buf(ecord);
while ((c = getc(f)) == 0) count++;
ecord[0].ec_cord = CORD_cat(ecord[0].ec_cord, CORD_nul(count));
}
if (c == EOF) break;
CORD_ec_append(ecord, (char)c);
}
(void) fclose(f);
return(CORD_balance(CORD_ec_to_cord(ecord)));
}
/* The state maintained for a lazily read file consists primarily */
/* of a large direct-mapped cache of previously read values. */
/* We could rely more on stdio buffering. That would have 2 */
/* disadvantages: */
/* 1) Empirically, not all fseek implementations preserve the */
/* buffer whenever they could. */
/* 2) It would fail if 2 different sections of a long cord */
/* were being read alternately. */
/* We do use the stdio buffer for read ahead. */
/* To guarantee thread safety in the presence of atomic pointer */
/* writes, cache lines are always replaced, and never modified in */
/* place. */
# define LOG_CACHE_SZ 14
# define CACHE_SZ (1 << LOG_CACHE_SZ)
# define LOG_LINE_SZ 9
# define LINE_SZ (1 << LOG_LINE_SZ)
typedef struct {
size_t tag;
char data[LINE_SZ];
/* data[i%LINE_SZ] = ith char in file if tag = i/LINE_SZ */
} cache_line;
typedef struct {
FILE * lf_file;
size_t lf_current; /* Current file pointer value */
cache_line * volatile lf_cache[CACHE_SZ/LINE_SZ];
} lf_state;
# define MOD_CACHE_SZ(n) ((n) & (CACHE_SZ - 1))
# define DIV_CACHE_SZ(n) ((n) >> LOG_CACHE_SZ)
# define MOD_LINE_SZ(n) ((n) & (LINE_SZ - 1))
# define DIV_LINE_SZ(n) ((n) >> LOG_LINE_SZ)
# define LINE_START(n) ((n) & ~(LINE_SZ - 1))
typedef struct {
lf_state * state;
size_t file_pos; /* Position of needed character. */
cache_line * new_cache;
} refill_data;
/* Executed with allocation lock. */
static char refill_cache(refill_data * client_data)
{
register lf_state * state = client_data -> state;
register size_t file_pos = client_data -> file_pos;
FILE *f = state -> lf_file;
size_t line_start = LINE_START(file_pos);
size_t line_no = DIV_LINE_SZ(MOD_CACHE_SZ(file_pos));
cache_line * new_cache = client_data -> new_cache;
if (line_start != state -> lf_current
&& fseek(f, (long)line_start, SEEK_SET) != 0) {
ABORT("fseek failed");
}
if (fread(new_cache -> data, sizeof(char), LINE_SZ, f)
<= file_pos - line_start) {
ABORT("fread failed");
}
new_cache -> tag = DIV_LINE_SZ(file_pos);
/* Store barrier goes here. */
ATOMIC_WRITE(state -> lf_cache[line_no], new_cache);
state -> lf_current = line_start + LINE_SZ;
return(new_cache->data[MOD_LINE_SZ(file_pos)]);
}
char CORD_lf_func(size_t i, void * client_data)
{
register lf_state * state = (lf_state *)client_data;
register cache_line * volatile * cl_addr =
&(state -> lf_cache[DIV_LINE_SZ(MOD_CACHE_SZ(i))]);
register cache_line * cl = (cache_line *)ATOMIC_READ(cl_addr);
if (cl == 0 || cl -> tag != DIV_LINE_SZ(i)) {
/* Cache miss */
refill_data rd;
rd.state = state;
rd.file_pos = i;
rd.new_cache = GC_NEW_ATOMIC(cache_line);
if (rd.new_cache == 0) OUT_OF_MEMORY;
return((char)(GC_word)
GC_call_with_alloc_lock((GC_fn_type) refill_cache, &rd));
}
return(cl -> data[MOD_LINE_SZ(i)]);
}
void CORD_lf_close_proc(void * obj, void * client_data CORD_ATTR_UNUSED)
{
if (fclose(((lf_state *)obj) -> lf_file) != 0) {
ABORT("CORD_lf_close_proc: fclose failed");
}
}
CORD CORD_from_file_lazy_inner(FILE * f, size_t len)
{
register lf_state * state = GC_NEW(lf_state);
register int i;
if (state == 0) OUT_OF_MEMORY;
if (len != 0) {
/* Dummy read to force buffer allocation. */
/* This greatly increases the probability */
/* of avoiding deadlock if buffer allocation */
/* is redirected to GC_malloc and the */
/* world is multi-threaded. */
char buf[1];
if (fread(buf, 1, 1, f) > 1
|| fseek(f, 0l, SEEK_SET) != 0) {
ABORT("Bad f argument or I/O failure");
}
}
state -> lf_file = f;
for (i = 0; i < CACHE_SZ/LINE_SZ; i++) {
state -> lf_cache[i] = 0;
}
state -> lf_current = 0;
GC_REGISTER_FINALIZER(state, CORD_lf_close_proc, 0, 0, 0);
return(CORD_from_fn(CORD_lf_func, state, len));
}
CORD CORD_from_file_lazy(FILE * f)
{
register long len;
if (fseek(f, 0l, SEEK_END) != 0
|| (len = ftell(f)) < 0
|| fseek(f, 0l, SEEK_SET) != 0) {
ABORT("Bad f argument or I/O failure");
}
return(CORD_from_file_lazy_inner(f, (size_t)len));
}
# define LAZY_THRESHOLD (128*1024 + 1)
CORD CORD_from_file(FILE * f)
{
register long len;
if (fseek(f, 0l, SEEK_END) != 0
|| (len = ftell(f)) < 0
|| fseek(f, 0l, SEEK_SET) != 0) {
ABORT("Bad f argument or I/O failure");
}
if (len < LAZY_THRESHOLD) {
return(CORD_from_file_eager(f));
} else {
return(CORD_from_file_lazy_inner(f, (size_t)len));
}
}
Gauche-0.9.6/gc/os_dep.c 0000664 0000764 0000764 00000517270 13302340445 014006 0 ustar shiro shiro /*
* Copyright 1988, 1989 Hans-J. Boehm, Alan J. Demers
* Copyright (c) 1991-1995 by Xerox Corporation. All rights reserved.
* Copyright (c) 1996-1999 by Silicon Graphics. All rights reserved.
* Copyright (c) 1999 by Hewlett-Packard Company. All rights reserved.
*
* THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY EXPRESSED
* OR IMPLIED. ANY USE IS AT YOUR OWN RISK.
*
* Permission is hereby granted to use or copy this program
* for any purpose, provided the above notices are retained on all copies.
* Permission to modify the code and to distribute modified code is granted,
* provided the above notices are retained, and a notice that the code was
* modified is included with the above copyright notice.
*/
#include "private/gc_priv.h"
#if !defined(OS2) && !defined(PCR) && !defined(AMIGA) && !defined(MACOS) \
&& !defined(MSWINCE) && !defined(__CC_ARM)
# include
# if !defined(MSWIN32)
# include
# endif
#endif
#include
#if defined(MSWINCE) || defined(SN_TARGET_PS3)
# define SIGSEGV 0 /* value is irrelevant */
#else
# include
#endif
#if defined(UNIX_LIKE) || defined(CYGWIN32) || defined(NACL) \
|| defined(SYMBIAN)
# include
#endif
#if defined(LINUX) || defined(LINUX_STACKBOTTOM)
# include
#endif
/* Blatantly OS dependent routines, except for those that are related */
/* to dynamic loading. */
#ifdef AMIGA
# define GC_AMIGA_DEF
# include "extra/AmigaOS.c"
# undef GC_AMIGA_DEF
#endif
#if defined(MSWIN32) || defined(MSWINCE) || defined(CYGWIN32)
# ifndef WIN32_LEAN_AND_MEAN
# define WIN32_LEAN_AND_MEAN 1
# endif
# define NOSERVICE
# include
/* It's not clear this is completely kosher under Cygwin. But it */
/* allows us to get a working GC_get_stack_base. */
#endif
#ifdef MACOS
# include
#endif
#ifdef IRIX5
# include
# include /* for locking */
#endif
#if defined(MMAP_SUPPORTED) || defined(ADD_HEAP_GUARD_PAGES)
# if defined(USE_MUNMAP) && !defined(USE_MMAP) && !defined(CPPCHECK)
# error "invalid config - USE_MUNMAP requires USE_MMAP"
# endif
# include
# include
# include
# include
#endif
#ifdef DARWIN
/* for get_etext and friends */
# include
#endif
#ifdef DJGPP
/* Apparently necessary for djgpp 2.01. May cause problems with */
/* other versions. */
typedef long unsigned int caddr_t;
#endif
#ifdef PCR
# include "il/PCR_IL.h"
# include "th/PCR_ThCtl.h"
# include "mm/PCR_MM.h"
#endif
#if !defined(NO_EXECUTE_PERMISSION)
STATIC GC_bool GC_pages_executable = TRUE;
#else
STATIC GC_bool GC_pages_executable = FALSE;
#endif
#define IGNORE_PAGES_EXECUTABLE 1
/* Undefined on GC_pages_executable real use. */
#ifdef NEED_PROC_MAPS
/* We need to parse /proc/self/maps, either to find dynamic libraries, */
/* and/or to find the register backing store base (IA64). Do it once */
/* here. */
#define READ read
/* Repeatedly perform a read call until the buffer is filled or */
/* we encounter EOF. */
STATIC ssize_t GC_repeat_read(int fd, char *buf, size_t count)
{
size_t num_read = 0;
ASSERT_CANCEL_DISABLED();
while (num_read < count) {
ssize_t result = READ(fd, buf + num_read, count - num_read);
if (result < 0) return result;
if (result == 0) break;
num_read += result;
}
return num_read;
}
#ifdef THREADS
/* Determine the length of a file by incrementally reading it into a */
/* buffer. This would be silly to use it on a file supporting lseek, */
/* but Linux /proc files usually do not. */
STATIC size_t GC_get_file_len(int f)
{
size_t total = 0;
ssize_t result;
# define GET_FILE_LEN_BUF_SZ 500
char buf[GET_FILE_LEN_BUF_SZ];
do {
result = read(f, buf, GET_FILE_LEN_BUF_SZ);
if (result == -1) return 0;
total += result;
} while (result > 0);
return total;
}
STATIC size_t GC_get_maps_len(void)
{
int f = open("/proc/self/maps", O_RDONLY);
size_t result;
if (f < 0) return 0; /* treat missing file as empty */
result = GC_get_file_len(f);
close(f);
return result;
}
#endif /* THREADS */
/* Copy the contents of /proc/self/maps to a buffer in our address */
/* space. Return the address of the buffer, or zero on failure. */
/* This code could be simplified if we could determine its size ahead */
/* of time. */
GC_INNER char * GC_get_maps(void)
{
ssize_t result;
static char *maps_buf = NULL;
static size_t maps_buf_sz = 1;
size_t maps_size, old_maps_size = 0;
/* The buffer is essentially static, so there must be a single client. */
GC_ASSERT(I_HOLD_LOCK());
/* Note that in the presence of threads, the maps file can */
/* essentially shrink asynchronously and unexpectedly as */
/* threads that we already think of as dead release their */
/* stacks. And there is no easy way to read the entire */
/* file atomically. This is arguably a misfeature of the */
/* /proc/.../maps interface. */
/* Since we expect the file can grow asynchronously in rare */
/* cases, it should suffice to first determine */
/* the size (using lseek or read), and then to reread the */
/* file. If the size is inconsistent we have to retry. */
/* This only matters with threads enabled, and if we use */
/* this to locate roots (not the default). */
# ifdef THREADS
/* Determine the initial size of /proc/self/maps. */
/* Note that lseek doesn't work, at least as of 2.6.15. */
maps_size = GC_get_maps_len();
if (0 == maps_size) return 0;
# else
maps_size = 4000; /* Guess */
# endif
/* Read /proc/self/maps, growing maps_buf as necessary. */
/* Note that we may not allocate conventionally, and */
/* thus can't use stdio. */
do {
int f;
while (maps_size >= maps_buf_sz) {
GC_scratch_recycle_no_gww(maps_buf, maps_buf_sz);
/* Grow only by powers of 2, since we leak "too small" buffers.*/
while (maps_size >= maps_buf_sz) maps_buf_sz *= 2;
maps_buf = GC_scratch_alloc(maps_buf_sz);
# ifdef THREADS
/* Recompute initial length, since we allocated. */
/* This can only happen a few times per program */
/* execution. */
maps_size = GC_get_maps_len();
if (0 == maps_size) return 0;
# endif
if (maps_buf == 0) return 0;
}
GC_ASSERT(maps_buf_sz >= maps_size + 1);
f = open("/proc/self/maps", O_RDONLY);
if (-1 == f) return 0;
# ifdef THREADS
old_maps_size = maps_size;
# endif
maps_size = 0;
do {
result = GC_repeat_read(f, maps_buf, maps_buf_sz-1);
if (result <= 0)
break;
maps_size += result;
} while ((size_t)result == maps_buf_sz-1);
close(f);
if (result <= 0)
return 0;
# ifdef THREADS
if (maps_size > old_maps_size) {
/* This might be caused by e.g. thread creation. */
WARN("Unexpected asynchronous /proc/self/maps growth"
" (to %" WARN_PRIdPTR " bytes)\n", maps_size);
}
# endif
} while (maps_size >= maps_buf_sz || maps_size < old_maps_size);
/* In the single-threaded case, the second clause is false. */
maps_buf[maps_size] = '\0';
/* Apply fn to result. */
return maps_buf;
}
/*
* GC_parse_map_entry parses an entry from /proc/self/maps so we can
* locate all writable data segments that belong to shared libraries.
* The format of one of these entries and the fields we care about
* is as follows:
* XXXXXXXX-XXXXXXXX r-xp 00000000 30:05 260537 name of mapping...\n
* ^^^^^^^^ ^^^^^^^^ ^^^^ ^^
* start end prot maj_dev
*
* Note that since about august 2003 kernels, the columns no longer have
* fixed offsets on 64-bit kernels. Hence we no longer rely on fixed offsets
* anywhere, which is safer anyway.
*/
/* Assign various fields of the first line in buf_ptr to (*start), */
/* (*end), (*prot), (*maj_dev) and (*mapping_name). mapping_name may */
/* be NULL. (*prot) and (*mapping_name) are assigned pointers into the */
/* original buffer. */
#if (defined(DYNAMIC_LOADING) && defined(USE_PROC_FOR_LIBRARIES)) \
|| defined(IA64) || defined(INCLUDE_LINUX_THREAD_DESCR) \
|| defined(REDIRECT_MALLOC)
GC_INNER char *GC_parse_map_entry(char *buf_ptr, ptr_t *start, ptr_t *end,
char **prot, unsigned int *maj_dev,
char **mapping_name)
{
unsigned char *start_start, *end_start, *maj_dev_start;
unsigned char *p; /* unsigned for isspace, isxdigit */
if (buf_ptr == NULL || *buf_ptr == '\0') {
return NULL;
}
p = (unsigned char *)buf_ptr;
while (isspace(*p)) ++p;
start_start = p;
GC_ASSERT(isxdigit(*start_start));
*start = (ptr_t)strtoul((char *)start_start, (char **)&p, 16);
GC_ASSERT(*p=='-');
++p;
end_start = p;
GC_ASSERT(isxdigit(*end_start));
*end = (ptr_t)strtoul((char *)end_start, (char **)&p, 16);
GC_ASSERT(isspace(*p));
while (isspace(*p)) ++p;
GC_ASSERT(*p == 'r' || *p == '-');
*prot = (char *)p;
/* Skip past protection field to offset field */
while (!isspace(*p)) ++p; while (isspace(*p)) ++p;
GC_ASSERT(isxdigit(*p));
/* Skip past offset field, which we ignore */
while (!isspace(*p)) ++p; while (isspace(*p)) ++p;
maj_dev_start = p;
GC_ASSERT(isxdigit(*maj_dev_start));
*maj_dev = strtoul((char *)maj_dev_start, NULL, 16);
if (mapping_name == 0) {
while (*p && *p++ != '\n');
} else {
while (*p && *p != '\n' && *p != '/' && *p != '[') p++;
*mapping_name = (char *)p;
while (*p && *p++ != '\n');
}
return (char *)p;
}
#endif /* REDIRECT_MALLOC || DYNAMIC_LOADING || IA64 || ... */
#if defined(IA64) || defined(INCLUDE_LINUX_THREAD_DESCR)
/* Try to read the backing store base from /proc/self/maps. */
/* Return the bounds of the writable mapping with a 0 major device, */
/* which includes the address passed as data. */
/* Return FALSE if there is no such mapping. */
GC_INNER GC_bool GC_enclosing_mapping(ptr_t addr, ptr_t *startp,
ptr_t *endp)
{
char *prot;
ptr_t my_start, my_end;
unsigned int maj_dev;
char *maps = GC_get_maps();
char *buf_ptr = maps;
if (0 == maps) return(FALSE);
for (;;) {
buf_ptr = GC_parse_map_entry(buf_ptr, &my_start, &my_end,
&prot, &maj_dev, 0);
if (buf_ptr == NULL) return FALSE;
if (prot[1] == 'w' && maj_dev == 0) {
if ((word)my_end > (word)addr && (word)my_start <= (word)addr) {
*startp = my_start;
*endp = my_end;
return TRUE;
}
}
}
return FALSE;
}
#endif /* IA64 || INCLUDE_LINUX_THREAD_DESCR */
#if defined(REDIRECT_MALLOC)
/* Find the text(code) mapping for the library whose name, after */
/* stripping the directory part, starts with nm. */
GC_INNER GC_bool GC_text_mapping(char *nm, ptr_t *startp, ptr_t *endp)
{
size_t nm_len = strlen(nm);
char *prot;
char *map_path;
ptr_t my_start, my_end;
unsigned int maj_dev;
char *maps = GC_get_maps();
char *buf_ptr = maps;
if (0 == maps) return(FALSE);
for (;;) {
buf_ptr = GC_parse_map_entry(buf_ptr, &my_start, &my_end,
&prot, &maj_dev, &map_path);
if (buf_ptr == NULL) return FALSE;
if (prot[0] == 'r' && prot[1] == '-' && prot[2] == 'x') {
char *p = map_path;
/* Set p to point just past last slash, if any. */
while (*p != '\0' && *p != '\n' && *p != ' ' && *p != '\t') ++p;
while (*p != '/' && (word)p >= (word)map_path) --p;
++p;
if (strncmp(nm, p, nm_len) == 0) {
*startp = my_start;
*endp = my_end;
return TRUE;
}
}
}
return FALSE;
}
#endif /* REDIRECT_MALLOC */
#ifdef IA64
static ptr_t backing_store_base_from_proc(void)
{
ptr_t my_start, my_end;
if (!GC_enclosing_mapping(GC_save_regs_in_stack(), &my_start, &my_end)) {
GC_COND_LOG_PRINTF("Failed to find backing store base from /proc\n");
return 0;
}
return my_start;
}
#endif
#endif /* NEED_PROC_MAPS */
#if defined(SEARCH_FOR_DATA_START)
/* The I386 case can be handled without a search. The Alpha case */
/* used to be handled differently as well, but the rules changed */
/* for recent Linux versions. This seems to be the easiest way to */
/* cover all versions. */
# if defined(LINUX) || defined(HURD)
/* Some Linux distributions arrange to define __data_start. Some */
/* define data_start as a weak symbol. The latter is technically */
/* broken, since the user program may define data_start, in which */
/* case we lose. Nonetheless, we try both, preferring __data_start.*/
/* We assume gcc-compatible pragmas. */
# pragma weak __data_start
# pragma weak data_start
extern int __data_start[], data_start[];
# ifdef PLATFORM_ANDROID
# pragma weak _etext
# pragma weak __dso_handle
extern int _etext[], __dso_handle[];
# endif
# endif /* LINUX */
ptr_t GC_data_start = NULL;
ptr_t GC_find_limit(ptr_t, GC_bool);
GC_INNER void GC_init_linux_data_start(void)
{
ptr_t data_end = DATAEND;
# if (defined(LINUX) || defined(HURD)) && !defined(IGNORE_PROG_DATA_START)
/* Try the easy approaches first: */
# ifdef PLATFORM_ANDROID
/* Workaround for "gold" (default) linker (as of Android NDK r10e). */
if ((word)__data_start < (word)_etext
&& (word)_etext < (word)__dso_handle) {
GC_data_start = (ptr_t)(__dso_handle);
# ifdef DEBUG_ADD_DEL_ROOTS
GC_log_printf(
"__data_start is wrong; using __dso_handle as data start\n");
# endif
} else
# endif
/* else */ if (COVERT_DATAFLOW(__data_start) != 0) {
GC_data_start = (ptr_t)(__data_start);
} else {
GC_data_start = (ptr_t)(data_start);
}
if (COVERT_DATAFLOW(GC_data_start) != 0) {
if ((word)GC_data_start > (word)data_end)
ABORT_ARG2("Wrong __data_start/_end pair",
": %p .. %p", (void *)GC_data_start, (void *)data_end);
return;
}
# ifdef DEBUG_ADD_DEL_ROOTS
GC_log_printf("__data_start not provided\n");
# endif
# endif /* LINUX */
if (GC_no_dls) {
/* Not needed, avoids the SIGSEGV caused by */
/* GC_find_limit which complicates debugging. */
GC_data_start = data_end; /* set data root size to 0 */
return;
}
GC_data_start = GC_find_limit(data_end, FALSE);
}
#endif /* SEARCH_FOR_DATA_START */
#ifdef ECOS
# ifndef ECOS_GC_MEMORY_SIZE
# define ECOS_GC_MEMORY_SIZE (448 * 1024)
# endif /* ECOS_GC_MEMORY_SIZE */
/* FIXME: This is a simple way of allocating memory which is */
/* compatible with ECOS early releases. Later releases use a more */
/* sophisticated means of allocating memory than this simple static */
/* allocator, but this method is at least bound to work. */
static char ecos_gc_memory[ECOS_GC_MEMORY_SIZE];
static char *ecos_gc_brk = ecos_gc_memory;
static void *tiny_sbrk(ptrdiff_t increment)
{
void *p = ecos_gc_brk;
ecos_gc_brk += increment;
if ((word)ecos_gc_brk > (word)(ecos_gc_memory + sizeof(ecos_gc_memory))) {
ecos_gc_brk -= increment;
return NULL;
}
return p;
}
# define sbrk tiny_sbrk
#endif /* ECOS */
#if defined(NETBSD) && defined(__ELF__)
ptr_t GC_data_start = NULL;
ptr_t GC_find_limit(ptr_t, GC_bool);
extern char **environ;
GC_INNER void GC_init_netbsd_elf(void)
{
/* This may need to be environ, without the underscore, for */
/* some versions. */
GC_data_start = GC_find_limit((ptr_t)&environ, FALSE);
}
#endif /* NETBSD */
#if defined(ADDRESS_SANITIZER) && (defined(UNIX_LIKE) \
|| defined(NEED_FIND_LIMIT) || defined(MPROTECT_VDB)) \
&& !defined(CUSTOM_ASAN_DEF_OPTIONS)
/* To tell ASan to allow GC to use its own SIGBUS/SEGV handlers. */
/* The function is exported just to be visible to ASan library. */
GC_API const char *__asan_default_options(void)
{
return "allow_user_segv_handler=1";
}
#endif
#ifdef OPENBSD
static struct sigaction old_segv_act;
STATIC sigjmp_buf GC_jmp_buf_openbsd;
# ifdef THREADS
# include
extern sigset_t __syscall(quad_t, ...);
# endif
/* Don't use GC_find_limit() because siglongjmp() outside of the */
/* signal handler by-passes our userland pthreads lib, leaving */
/* SIGSEGV and SIGPROF masked. Instead, use this custom one that */
/* works-around the issues. */
STATIC void GC_fault_handler_openbsd(int sig GC_ATTR_UNUSED)
{
siglongjmp(GC_jmp_buf_openbsd, 1);
}
/* Return the first non-addressable location > p or bound. */
/* Requires the allocation lock. */
STATIC ptr_t GC_find_limit_openbsd(ptr_t p, ptr_t bound)
{
static volatile ptr_t result;
/* Safer if static, since otherwise it may not be */
/* preserved across the longjmp. Can safely be */
/* static since it's only called with the */
/* allocation lock held. */
struct sigaction act;
word pgsz = (word)sysconf(_SC_PAGESIZE);
GC_ASSERT((word)bound >= pgsz);
GC_ASSERT(I_HOLD_LOCK());
act.sa_handler = GC_fault_handler_openbsd;
sigemptyset(&act.sa_mask);
act.sa_flags = SA_NODEFER | SA_RESTART;
/* act.sa_restorer is deprecated and should not be initialized. */
sigaction(SIGSEGV, &act, &old_segv_act);
if (sigsetjmp(GC_jmp_buf_openbsd, 1) == 0) {
result = (ptr_t)((word)p & ~(pgsz-1));
for (;;) {
if ((word)result >= (word)bound - pgsz) {
result = bound;
break;
}
result += pgsz; /* no overflow expected */
GC_noop1((word)(*result));
}
}
# ifdef THREADS
/* Due to the siglongjump we need to manually unmask SIGPROF. */
__syscall(SYS_sigprocmask, SIG_UNBLOCK, sigmask(SIGPROF));
# endif
sigaction(SIGSEGV, &old_segv_act, 0);
return(result);
}
/* Return first addressable location > p or bound. */
/* Requires the allocation lock. */
STATIC ptr_t GC_skip_hole_openbsd(ptr_t p, ptr_t bound)
{
static volatile ptr_t result;
static volatile int firstpass;
struct sigaction act;
word pgsz = (word)sysconf(_SC_PAGESIZE);
GC_ASSERT((word)bound >= pgsz);
GC_ASSERT(I_HOLD_LOCK());
act.sa_handler = GC_fault_handler_openbsd;
sigemptyset(&act.sa_mask);
act.sa_flags = SA_NODEFER | SA_RESTART;
/* act.sa_restorer is deprecated and should not be initialized. */
sigaction(SIGSEGV, &act, &old_segv_act);
firstpass = 1;
result = (ptr_t)((word)p & ~(pgsz-1));
if (sigsetjmp(GC_jmp_buf_openbsd, 1) != 0 || firstpass) {
firstpass = 0;
if ((word)result >= (word)bound - pgsz) {
result = bound;
} else {
result += pgsz; /* no overflow expected */
GC_noop1((word)(*result));
}
}
sigaction(SIGSEGV, &old_segv_act, 0);
return(result);
}
#endif /* OPENBSD */
# ifdef OS2
# include
# if !defined(__IBMC__) && !defined(__WATCOMC__) /* e.g. EMX */
struct exe_hdr {
unsigned short magic_number;
unsigned short padding[29];
long new_exe_offset;
};
#define E_MAGIC(x) (x).magic_number
#define EMAGIC 0x5A4D
#define E_LFANEW(x) (x).new_exe_offset
struct e32_exe {
unsigned char magic_number[2];
unsigned char byte_order;
unsigned char word_order;
unsigned long exe_format_level;
unsigned short cpu;
unsigned short os;
unsigned long padding1[13];
unsigned long object_table_offset;
unsigned long object_count;
unsigned long padding2[31];
};
#define E32_MAGIC1(x) (x).magic_number[0]
#define E32MAGIC1 'L'
#define E32_MAGIC2(x) (x).magic_number[1]
#define E32MAGIC2 'X'
#define E32_BORDER(x) (x).byte_order
#define E32LEBO 0
#define E32_WORDER(x) (x).word_order
#define E32LEWO 0
#define E32_CPU(x) (x).cpu
#define E32CPU286 1
#define E32_OBJTAB(x) (x).object_table_offset
#define E32_OBJCNT(x) (x).object_count
struct o32_obj {
unsigned long size;
unsigned long base;
unsigned long flags;
unsigned long pagemap;
unsigned long mapsize;
unsigned long reserved;
};
#define O32_FLAGS(x) (x).flags
#define OBJREAD 0x0001L
#define OBJWRITE 0x0002L
#define OBJINVALID 0x0080L
#define O32_SIZE(x) (x).size
#define O32_BASE(x) (x).base
# else /* IBM's compiler */
/* A kludge to get around what appears to be a header file bug */
# ifndef WORD
# define WORD unsigned short
# endif
# ifndef DWORD
# define DWORD unsigned long
# endif
# define EXE386 1
# include
# include
# endif /* __IBMC__ */
# define INCL_DOSEXCEPTIONS
# define INCL_DOSPROCESS
# define INCL_DOSERRORS
# define INCL_DOSMODULEMGR
# define INCL_DOSMEMMGR
# include
# endif /* OS/2 */
/* Find the page size */
GC_INNER size_t GC_page_size = 0;
#if defined(MSWIN32) || defined(MSWINCE) || defined(CYGWIN32)
# ifndef VER_PLATFORM_WIN32_CE
# define VER_PLATFORM_WIN32_CE 3
# endif
# if defined(MSWINCE) && defined(THREADS)
GC_INNER GC_bool GC_dont_query_stack_min = FALSE;
# endif
GC_INNER SYSTEM_INFO GC_sysinfo;
GC_INNER void GC_setpagesize(void)
{
GetSystemInfo(&GC_sysinfo);
# if defined(CYGWIN32) && defined(USE_MUNMAP)
/* Allocations made with mmap() are aligned to the allocation */
/* granularity, which (at least on 64-bit Windows OS) is not the */
/* same as the page size. Probably a separate variable could */
/* be added to distinguish the allocation granularity from the */
/* actual page size, but in practice there is no good reason to */
/* make allocations smaller than dwAllocationGranularity, so we */
/* just use it instead of the actual page size here (as Cygwin */
/* itself does in many cases). */
GC_page_size = (size_t)GC_sysinfo.dwAllocationGranularity;
GC_ASSERT(GC_page_size >= (size_t)GC_sysinfo.dwPageSize);
# else
GC_page_size = (size_t)GC_sysinfo.dwPageSize;
# endif
# if defined(MSWINCE) && !defined(_WIN32_WCE_EMULATION)
{
OSVERSIONINFO verInfo;
/* Check the current WinCE version. */
verInfo.dwOSVersionInfoSize = sizeof(OSVERSIONINFO);
if (!GetVersionEx(&verInfo))
ABORT("GetVersionEx failed");
if (verInfo.dwPlatformId == VER_PLATFORM_WIN32_CE &&
verInfo.dwMajorVersion < 6) {
/* Only the first 32 MB of address space belongs to the */
/* current process (unless WinCE 6.0+ or emulation). */
GC_sysinfo.lpMaximumApplicationAddress = (LPVOID)((word)32 << 20);
# ifdef THREADS
/* On some old WinCE versions, it's observed that */
/* VirtualQuery calls don't work properly when used to */
/* get thread current stack committed minimum. */
if (verInfo.dwMajorVersion < 5)
GC_dont_query_stack_min = TRUE;
# endif
}
}
# endif
}
# ifndef CYGWIN32
# define is_writable(prot) ((prot) == PAGE_READWRITE \
|| (prot) == PAGE_WRITECOPY \
|| (prot) == PAGE_EXECUTE_READWRITE \
|| (prot) == PAGE_EXECUTE_WRITECOPY)
/* Return the number of bytes that are writable starting at p. */
/* The pointer p is assumed to be page aligned. */
/* If base is not 0, *base becomes the beginning of the */
/* allocation region containing p. */
STATIC word GC_get_writable_length(ptr_t p, ptr_t *base)
{
MEMORY_BASIC_INFORMATION buf;
word result;
word protect;
result = VirtualQuery(p, &buf, sizeof(buf));
if (result != sizeof(buf)) ABORT("Weird VirtualQuery result");
if (base != 0) *base = (ptr_t)(buf.AllocationBase);
protect = (buf.Protect & ~(PAGE_GUARD | PAGE_NOCACHE));
if (!is_writable(protect)) {
return(0);
}
if (buf.State != MEM_COMMIT) return(0);
return(buf.RegionSize);
}
GC_API int GC_CALL GC_get_stack_base(struct GC_stack_base *sb)
{
ptr_t trunc_sp;
word size;
/* Set page size if it is not ready (so client can use this */
/* function even before GC is initialized). */
if (!GC_page_size) GC_setpagesize();
trunc_sp = (ptr_t)((word)GC_approx_sp() & ~(GC_page_size - 1));
/* FIXME: This won't work if called from a deeply recursive */
/* client code (and the committed stack space has grown). */
size = GC_get_writable_length(trunc_sp, 0);
GC_ASSERT(size != 0);
sb -> mem_base = trunc_sp + size;
return GC_SUCCESS;
}
# else /* CYGWIN32 */
/* An alternate version for Cygwin (adapted from Dave Korn's */
/* gcc version of boehm-gc). */
GC_API int GC_CALL GC_get_stack_base(struct GC_stack_base *sb)
{
# ifdef X86_64
sb -> mem_base = ((NT_TIB*)NtCurrentTeb())->StackBase;
# else
void * _tlsbase;
__asm__ ("movl %%fs:4, %0"
: "=r" (_tlsbase));
sb -> mem_base = _tlsbase;
# endif
return GC_SUCCESS;
}
# endif /* CYGWIN32 */
# define HAVE_GET_STACK_BASE
#else /* !MSWIN32 */
GC_INNER void GC_setpagesize(void)
{
# if defined(MPROTECT_VDB) || defined(PROC_VDB) || defined(USE_MMAP)
GC_page_size = (size_t)GETPAGESIZE();
# if !defined(CPPCHECK)
if (0 == GC_page_size)
ABORT("getpagesize failed");
# endif
# else
/* It's acceptable to fake it. */
GC_page_size = HBLKSIZE;
# endif
}
#endif /* !MSWIN32 */
#ifdef HAIKU
# include
GC_API int GC_CALL GC_get_stack_base(struct GC_stack_base *sb)
{
thread_info th;
get_thread_info(find_thread(NULL),&th);
sb->mem_base = th.stack_end;
return GC_SUCCESS;
}
# define HAVE_GET_STACK_BASE
#endif /* HAIKU */
#ifdef OS2
GC_API int GC_CALL GC_get_stack_base(struct GC_stack_base *sb)
{
PTIB ptib; /* thread information block */
PPIB ppib;
if (DosGetInfoBlocks(&ptib, &ppib) != NO_ERROR) {
WARN("DosGetInfoBlocks failed\n", 0);
return GC_UNIMPLEMENTED;
}
sb->mem_base = ptib->tib_pstacklimit;
return GC_SUCCESS;
}
# define HAVE_GET_STACK_BASE
#endif /* OS2 */
# ifdef AMIGA
# define GC_AMIGA_SB
# include "extra/AmigaOS.c"
# undef GC_AMIGA_SB
# define GET_MAIN_STACKBASE_SPECIAL
# endif /* AMIGA */
# if defined(NEED_FIND_LIMIT) || defined(UNIX_LIKE)
typedef void (*GC_fault_handler_t)(int);
# if defined(SUNOS5SIGS) || defined(IRIX5) || defined(OSF1) \
|| defined(HAIKU) || defined(HURD) || defined(FREEBSD) \
|| defined(NETBSD)
static struct sigaction old_segv_act;
# if defined(_sigargs) /* !Irix6.x */ \
|| defined(HURD) || defined(NETBSD) || defined(FREEBSD)
static struct sigaction old_bus_act;
# endif
# else
static GC_fault_handler_t old_segv_handler;
# ifdef HAVE_SIGBUS
static GC_fault_handler_t old_bus_handler;
# endif
# endif
GC_INNER void GC_set_and_save_fault_handler(GC_fault_handler_t h)
{
# if defined(SUNOS5SIGS) || defined(IRIX5) || defined(OSF1) \
|| defined(HAIKU) || defined(HURD) || defined(FREEBSD) \
|| defined(NETBSD)
struct sigaction act;
act.sa_handler = h;
# ifdef SIGACTION_FLAGS_NODEFER_HACK
/* Was necessary for Solaris 2.3 and very temporary */
/* NetBSD bugs. */
act.sa_flags = SA_RESTART | SA_NODEFER;
# else
act.sa_flags = SA_RESTART;
# endif
(void) sigemptyset(&act.sa_mask);
/* act.sa_restorer is deprecated and should not be initialized. */
# ifdef GC_IRIX_THREADS
/* Older versions have a bug related to retrieving and */
/* and setting a handler at the same time. */
(void) sigaction(SIGSEGV, 0, &old_segv_act);
(void) sigaction(SIGSEGV, &act, 0);
# else
(void) sigaction(SIGSEGV, &act, &old_segv_act);
# if defined(IRIX5) && defined(_sigargs) /* Irix 5.x, not 6.x */ \
|| defined(HURD) || defined(NETBSD) || defined(FREEBSD)
/* Under Irix 5.x or HP/UX, we may get SIGBUS. */
/* Pthreads doesn't exist under Irix 5.x, so we */
/* don't have to worry in the threads case. */
(void) sigaction(SIGBUS, &act, &old_bus_act);
# endif
# endif /* !GC_IRIX_THREADS */
# else
old_segv_handler = signal(SIGSEGV, h);
# ifdef HAVE_SIGBUS
old_bus_handler = signal(SIGBUS, h);
# endif
# endif
# if defined(CPPCHECK) && defined(ADDRESS_SANITIZER)
GC_noop1((word)&__asan_default_options);
# endif
}
# endif /* NEED_FIND_LIMIT || UNIX_LIKE */
# if defined(NEED_FIND_LIMIT) \
|| (defined(USE_PROC_FOR_LIBRARIES) && defined(THREADS))
/* Some tools to implement HEURISTIC2 */
# define MIN_PAGE_SIZE 256 /* Smallest conceivable page size, bytes */
GC_INNER JMP_BUF GC_jmp_buf;
STATIC void GC_fault_handler(int sig GC_ATTR_UNUSED)
{
LONGJMP(GC_jmp_buf, 1);
}
GC_INNER void GC_setup_temporary_fault_handler(void)
{
/* Handler is process-wide, so this should only happen in */
/* one thread at a time. */
GC_ASSERT(I_HOLD_LOCK());
GC_set_and_save_fault_handler(GC_fault_handler);
}
GC_INNER void GC_reset_fault_handler(void)
{
# if defined(SUNOS5SIGS) || defined(IRIX5) || defined(OSF1) \
|| defined(HAIKU) || defined(HURD) || defined(FREEBSD) \
|| defined(NETBSD)
(void) sigaction(SIGSEGV, &old_segv_act, 0);
# if defined(IRIX5) && defined(_sigargs) /* Irix 5.x, not 6.x */ \
|| defined(HURD) || defined(NETBSD)
(void) sigaction(SIGBUS, &old_bus_act, 0);
# endif
# else
(void) signal(SIGSEGV, old_segv_handler);
# ifdef HAVE_SIGBUS
(void) signal(SIGBUS, old_bus_handler);
# endif
# endif
}
/* Return the first non-addressable location > p (up) or */
/* the smallest location q s.t. [q,p) is addressable (!up). */
/* We assume that p (up) or p-1 (!up) is addressable. */
/* Requires allocation lock. */
STATIC ptr_t GC_find_limit_with_bound(ptr_t p, GC_bool up, ptr_t bound)
{
static volatile ptr_t result;
/* Safer if static, since otherwise it may not be */
/* preserved across the longjmp. Can safely be */
/* static since it's only called with the */
/* allocation lock held. */
GC_ASSERT(up ? (word)bound >= MIN_PAGE_SIZE
: (word)bound <= ~(word)MIN_PAGE_SIZE);
GC_ASSERT(I_HOLD_LOCK());
GC_setup_temporary_fault_handler();
if (SETJMP(GC_jmp_buf) == 0) {
result = (ptr_t)(((word)(p))
& ~(MIN_PAGE_SIZE-1));
for (;;) {
if (up) {
if ((word)result >= (word)bound - MIN_PAGE_SIZE) {
result = bound;
break;
}
result += MIN_PAGE_SIZE; /* no overflow expected */
} else {
if ((word)result <= (word)bound + MIN_PAGE_SIZE) {
result = bound - MIN_PAGE_SIZE;
/* This is to compensate */
/* further result increment (we */
/* do not modify "up" variable */
/* since it might be clobbered */
/* by setjmp otherwise). */
break;
}
result -= MIN_PAGE_SIZE; /* no underflow expected */
}
GC_noop1((word)(*result));
}
}
GC_reset_fault_handler();
if (!up) {
result += MIN_PAGE_SIZE;
}
return(result);
}
ptr_t GC_find_limit(ptr_t p, GC_bool up)
{
return GC_find_limit_with_bound(p, up, up ? (ptr_t)(word)(-1) : 0);
}
# endif /* NEED_FIND_LIMIT || USE_PROC_FOR_LIBRARIES */
#ifdef HPUX_STACKBOTTOM
#include
#include
GC_INNER ptr_t GC_get_register_stack_base(void)
{
struct pst_vm_status vm_status;
int i = 0;
while (pstat_getprocvm(&vm_status, sizeof(vm_status), 0, i++) == 1) {
if (vm_status.pst_type == PS_RSESTACK) {
return (ptr_t) vm_status.pst_vaddr;
}
}
/* old way to get the register stackbottom */
return (ptr_t)(((word)GC_stackbottom - BACKING_STORE_DISPLACEMENT - 1)
& ~(BACKING_STORE_ALIGNMENT - 1));
}
#endif /* HPUX_STACK_BOTTOM */
#ifdef LINUX_STACKBOTTOM
# include
# include
# define STAT_SKIP 27 /* Number of fields preceding startstack */
/* field in /proc/self/stat */
# ifdef USE_LIBC_PRIVATES
# pragma weak __libc_stack_end
extern ptr_t __libc_stack_end;
# endif
# ifdef IA64
# ifdef USE_LIBC_PRIVATES
# pragma weak __libc_ia64_register_backing_store_base
extern ptr_t __libc_ia64_register_backing_store_base;
# endif
GC_INNER ptr_t GC_get_register_stack_base(void)
{
ptr_t result;
# ifdef USE_LIBC_PRIVATES
if (0 != &__libc_ia64_register_backing_store_base
&& 0 != __libc_ia64_register_backing_store_base) {
/* Glibc 2.2.4 has a bug such that for dynamically linked */
/* executables __libc_ia64_register_backing_store_base is */
/* defined but uninitialized during constructor calls. */
/* Hence we check for both nonzero address and value. */
return __libc_ia64_register_backing_store_base;
}
# endif
result = backing_store_base_from_proc();
if (0 == result) {
result = GC_find_limit(GC_save_regs_in_stack(), FALSE);
/* Now seems to work better than constant displacement */
/* heuristic used in 6.X versions. The latter seems to */
/* fail for 2.6 kernels. */
}
return result;
}
# endif /* IA64 */
STATIC ptr_t GC_linux_main_stack_base(void)
{
/* We read the stack base value from /proc/self/stat. We do this */
/* using direct I/O system calls in order to avoid calling malloc */
/* in case REDIRECT_MALLOC is defined. */
# ifndef STAT_READ
/* Also defined in pthread_support.c. */
# define STAT_BUF_SIZE 4096
# define STAT_READ read
# endif
/* Should probably call the real read, if read is wrapped. */
char stat_buf[STAT_BUF_SIZE];
int f;
word result;
int i, buf_offset = 0, len;
/* First try the easy way. This should work for glibc 2.2 */
/* This fails in a prelinked ("prelink" command) executable */
/* since the correct value of __libc_stack_end never */
/* becomes visible to us. The second test works around */
/* this. */
# ifdef USE_LIBC_PRIVATES
if (0 != &__libc_stack_end && 0 != __libc_stack_end ) {
# if defined(IA64)
/* Some versions of glibc set the address 16 bytes too */
/* low while the initialization code is running. */
if (((word)__libc_stack_end & 0xfff) + 0x10 < 0x1000) {
return __libc_stack_end + 0x10;
} /* Otherwise it's not safe to add 16 bytes and we fall */
/* back to using /proc. */
# elif defined(SPARC)
/* Older versions of glibc for 64-bit SPARC do not set this */
/* variable correctly, it gets set to either zero or one. */
if (__libc_stack_end != (ptr_t) (unsigned long)0x1)
return __libc_stack_end;
# else
return __libc_stack_end;
# endif
}
# endif
f = open("/proc/self/stat", O_RDONLY);
if (f < 0)
ABORT("Couldn't read /proc/self/stat");
len = STAT_READ(f, stat_buf, STAT_BUF_SIZE);
close(f);
/* Skip the required number of fields. This number is hopefully */
/* constant across all Linux implementations. */
for (i = 0; i < STAT_SKIP; ++i) {
while (buf_offset < len && isspace(stat_buf[buf_offset++])) {
/* empty */
}
while (buf_offset < len && !isspace(stat_buf[buf_offset++])) {
/* empty */
}
}
/* Skip spaces. */
while (buf_offset < len && isspace(stat_buf[buf_offset])) {
buf_offset++;
}
/* Find the end of the number and cut the buffer there. */
for (i = 0; buf_offset + i < len; i++) {
if (!isdigit(stat_buf[buf_offset + i])) break;
}
if (buf_offset + i >= len) ABORT("Could not parse /proc/self/stat");
stat_buf[buf_offset + i] = '\0';
result = (word)STRTOULL(&stat_buf[buf_offset], NULL, 10);
if (result < 0x100000 || (result & (sizeof(word) - 1)) != 0)
ABORT("Absurd stack bottom value");
return (ptr_t)result;
}
#endif /* LINUX_STACKBOTTOM */
#ifdef FREEBSD_STACKBOTTOM
/* This uses an undocumented sysctl call, but at least one expert */
/* believes it will stay. */
# include
# include
# include
STATIC ptr_t GC_freebsd_main_stack_base(void)
{
int nm[2] = {CTL_KERN, KERN_USRSTACK};
ptr_t base;
size_t len = sizeof(ptr_t);
int r = sysctl(nm, 2, &base, &len, NULL, 0);
if (r) ABORT("Error getting main stack base");
return base;
}
#endif /* FREEBSD_STACKBOTTOM */
#if defined(ECOS) || defined(NOSYS)
ptr_t GC_get_main_stack_base(void)
{
return STACKBOTTOM;
}
# define GET_MAIN_STACKBASE_SPECIAL
#elif defined(SYMBIAN)
extern int GC_get_main_symbian_stack_base(void);
ptr_t GC_get_main_stack_base(void)
{
return (ptr_t)GC_get_main_symbian_stack_base();
}
# define GET_MAIN_STACKBASE_SPECIAL
#elif !defined(AMIGA) && !defined(HAIKU) && !defined(OS2) \
&& !defined(MSWIN32) && !defined(MSWINCE) && !defined(CYGWIN32) \
&& !defined(GC_OPENBSD_THREADS) \
&& (!defined(GC_SOLARIS_THREADS) || defined(_STRICT_STDC))
# if (defined(HAVE_PTHREAD_ATTR_GET_NP) || defined(HAVE_PTHREAD_GETATTR_NP)) \
&& (defined(THREADS) || defined(USE_GET_STACKBASE_FOR_MAIN))
# include
# ifdef HAVE_PTHREAD_NP_H
# include /* for pthread_attr_get_np() */
# endif
# elif defined(DARWIN) && !defined(NO_PTHREAD_GET_STACKADDR_NP)
/* We could use pthread_get_stackaddr_np even in case of a */
/* single-threaded gclib (there is no -lpthread on Darwin). */
# include
# undef STACKBOTTOM
# define STACKBOTTOM (ptr_t)pthread_get_stackaddr_np(pthread_self())
# endif
ptr_t GC_get_main_stack_base(void)
{
ptr_t result;
# if (defined(HAVE_PTHREAD_ATTR_GET_NP) \
|| defined(HAVE_PTHREAD_GETATTR_NP)) \
&& (defined(USE_GET_STACKBASE_FOR_MAIN) \
|| (defined(THREADS) && !defined(REDIRECT_MALLOC)))
pthread_attr_t attr;
void *stackaddr;
size_t size;
# ifdef HAVE_PTHREAD_ATTR_GET_NP
if (pthread_attr_init(&attr) == 0
&& (pthread_attr_get_np(pthread_self(), &attr) == 0
? TRUE : (pthread_attr_destroy(&attr), FALSE)))
# else /* HAVE_PTHREAD_GETATTR_NP */
if (pthread_getattr_np(pthread_self(), &attr) == 0)
# endif
{
if (pthread_attr_getstack(&attr, &stackaddr, &size) == 0
&& stackaddr != NULL) {
(void)pthread_attr_destroy(&attr);
# ifdef STACK_GROWS_DOWN
stackaddr = (char *)stackaddr + size;
# endif
return (ptr_t)stackaddr;
}
(void)pthread_attr_destroy(&attr);
}
WARN("pthread_getattr_np or pthread_attr_getstack failed"
" for main thread\n", 0);
# endif
# ifdef STACKBOTTOM
result = STACKBOTTOM;
# else
# define STACKBOTTOM_ALIGNMENT_M1 ((word)STACK_GRAN - 1)
# ifdef HEURISTIC1
# ifdef STACK_GROWS_DOWN
result = (ptr_t)(((word)GC_approx_sp() + STACKBOTTOM_ALIGNMENT_M1)
& ~STACKBOTTOM_ALIGNMENT_M1);
# else
result = (ptr_t)((word)GC_approx_sp() & ~STACKBOTTOM_ALIGNMENT_M1);
# endif
# elif defined(LINUX_STACKBOTTOM)
result = GC_linux_main_stack_base();
# elif defined(FREEBSD_STACKBOTTOM)
result = GC_freebsd_main_stack_base();
# elif defined(HEURISTIC2)
{
ptr_t sp = GC_approx_sp();
# ifdef STACK_GROWS_DOWN
result = GC_find_limit(sp, TRUE);
# if defined(HEURISTIC2_LIMIT) && !defined(CPPCHECK)
if ((word)result > (word)HEURISTIC2_LIMIT
&& (word)sp < (word)HEURISTIC2_LIMIT) {
result = HEURISTIC2_LIMIT;
}
# endif
# else
result = GC_find_limit(sp, FALSE);
# if defined(HEURISTIC2_LIMIT) && !defined(CPPCHECK)
if ((word)result < (word)HEURISTIC2_LIMIT
&& (word)sp > (word)HEURISTIC2_LIMIT) {
result = HEURISTIC2_LIMIT;
}
# endif
# endif
}
# elif defined(STACK_NOT_SCANNED) || defined(CPPCHECK)
result = NULL;
# else
# error None of HEURISTIC* and *STACKBOTTOM defined!
# endif
# if defined(STACK_GROWS_DOWN) && !defined(CPPCHECK)
if (result == 0)
result = (ptr_t)(signed_word)(-sizeof(ptr_t));
# endif
# endif
GC_ASSERT((word)GC_approx_sp() HOTTER_THAN (word)result);
return(result);
}
# define GET_MAIN_STACKBASE_SPECIAL
#endif /* !AMIGA, !HAIKU, !OPENBSD, !OS2, !Windows */
#if (defined(HAVE_PTHREAD_ATTR_GET_NP) || defined(HAVE_PTHREAD_GETATTR_NP)) \
&& defined(THREADS) && !defined(HAVE_GET_STACK_BASE)
# include
# ifdef HAVE_PTHREAD_NP_H
# include
# endif
GC_API int GC_CALL GC_get_stack_base(struct GC_stack_base *b)
{
pthread_attr_t attr;
size_t size;
# ifdef IA64
DCL_LOCK_STATE;
# endif
# ifdef HAVE_PTHREAD_ATTR_GET_NP
if (pthread_attr_init(&attr) != 0)
ABORT("pthread_attr_init failed");
if (pthread_attr_get_np(pthread_self(), &attr) != 0) {
WARN("pthread_attr_get_np failed\n", 0);
(void)pthread_attr_destroy(&attr);
return GC_UNIMPLEMENTED;
}
# else /* HAVE_PTHREAD_GETATTR_NP */
if (pthread_getattr_np(pthread_self(), &attr) != 0) {
WARN("pthread_getattr_np failed\n", 0);
return GC_UNIMPLEMENTED;
}
# endif
if (pthread_attr_getstack(&attr, &(b -> mem_base), &size) != 0) {
ABORT("pthread_attr_getstack failed");
}
(void)pthread_attr_destroy(&attr);
# ifdef STACK_GROWS_DOWN
b -> mem_base = (char *)(b -> mem_base) + size;
# endif
# ifdef IA64
/* We could try backing_store_base_from_proc, but that's safe */
/* only if no mappings are being asynchronously created. */
/* Subtracting the size from the stack base doesn't work for at */
/* least the main thread. */
LOCK();
{
IF_CANCEL(int cancel_state;)
ptr_t bsp;
ptr_t next_stack;
DISABLE_CANCEL(cancel_state);
bsp = GC_save_regs_in_stack();
next_stack = GC_greatest_stack_base_below(bsp);
if (0 == next_stack) {
b -> reg_base = GC_find_limit(bsp, FALSE);
} else {
/* Avoid walking backwards into preceding memory stack and */
/* growing it. */
b -> reg_base = GC_find_limit_with_bound(bsp, FALSE, next_stack);
}
RESTORE_CANCEL(cancel_state);
}
UNLOCK();
# endif
return GC_SUCCESS;
}
# define HAVE_GET_STACK_BASE
#endif /* THREADS && (HAVE_PTHREAD_ATTR_GET_NP || HAVE_PTHREAD_GETATTR_NP) */
#if defined(GC_DARWIN_THREADS) && !defined(NO_PTHREAD_GET_STACKADDR_NP)
# include
GC_API int GC_CALL GC_get_stack_base(struct GC_stack_base *b)
{
/* pthread_get_stackaddr_np() should return stack bottom (highest */
/* stack address plus 1). */
b->mem_base = pthread_get_stackaddr_np(pthread_self());
GC_ASSERT((word)GC_approx_sp() HOTTER_THAN (word)b->mem_base);
return GC_SUCCESS;
}
# define HAVE_GET_STACK_BASE
#endif /* GC_DARWIN_THREADS */
#ifdef GC_OPENBSD_THREADS
# include
# include
# include
/* Find the stack using pthread_stackseg_np(). */
GC_API int GC_CALL GC_get_stack_base(struct GC_stack_base *sb)
{
stack_t stack;
if (pthread_stackseg_np(pthread_self(), &stack))
ABORT("pthread_stackseg_np(self) failed");
sb->mem_base = stack.ss_sp;
return GC_SUCCESS;
}
# define HAVE_GET_STACK_BASE
#endif /* GC_OPENBSD_THREADS */
#if defined(GC_SOLARIS_THREADS) && !defined(_STRICT_STDC)
# include
# include
# include
/* These variables are used to cache ss_sp value for the primordial */
/* thread (it's better not to call thr_stksegment() twice for this */
/* thread - see JDK bug #4352906). */
static pthread_t stackbase_main_self = 0;
/* 0 means stackbase_main_ss_sp value is unset. */
static void *stackbase_main_ss_sp = NULL;
GC_API int GC_CALL GC_get_stack_base(struct GC_stack_base *b)
{
stack_t s;
pthread_t self = pthread_self();
if (self == stackbase_main_self)
{
/* If the client calls GC_get_stack_base() from the main thread */
/* then just return the cached value. */
b -> mem_base = stackbase_main_ss_sp;
GC_ASSERT(b -> mem_base != NULL);
return GC_SUCCESS;
}
if (thr_stksegment(&s)) {
/* According to the manual, the only failure error code returned */
/* is EAGAIN meaning "the information is not available due to the */
/* thread is not yet completely initialized or it is an internal */
/* thread" - this shouldn't happen here. */
ABORT("thr_stksegment failed");
}
/* s.ss_sp holds the pointer to the stack bottom. */
GC_ASSERT((word)GC_approx_sp() HOTTER_THAN (word)s.ss_sp);
if (!stackbase_main_self && thr_main() != 0)
{
/* Cache the stack base value for the primordial thread (this */
/* is done during GC_init, so there is no race). */
stackbase_main_ss_sp = s.ss_sp;
stackbase_main_self = self;
}
b -> mem_base = s.ss_sp;
return GC_SUCCESS;
}
# define HAVE_GET_STACK_BASE
#endif /* GC_SOLARIS_THREADS */
#ifdef GC_RTEMS_PTHREADS
GC_API int GC_CALL GC_get_stack_base(struct GC_stack_base *sb)
{
sb->mem_base = rtems_get_stack_bottom();
return GC_SUCCESS;
}
# define HAVE_GET_STACK_BASE
#endif /* GC_RTEMS_PTHREADS */
#ifndef HAVE_GET_STACK_BASE
# ifdef NEED_FIND_LIMIT
/* Retrieve stack base. */
/* Using the GC_find_limit version is risky. */
/* On IA64, for example, there is no guard page between the */
/* stack of one thread and the register backing store of the */
/* next. Thus this is likely to identify way too large a */
/* "stack" and thus at least result in disastrous performance. */
/* FIXME - Implement better strategies here. */
GC_API int GC_CALL GC_get_stack_base(struct GC_stack_base *b)
{
IF_CANCEL(int cancel_state;)
DCL_LOCK_STATE;
LOCK();
DISABLE_CANCEL(cancel_state); /* May be unnecessary? */
# ifdef STACK_GROWS_DOWN
b -> mem_base = GC_find_limit(GC_approx_sp(), TRUE);
# ifdef IA64
b -> reg_base = GC_find_limit(GC_save_regs_in_stack(), FALSE);
# endif
# else
b -> mem_base = GC_find_limit(GC_approx_sp(), FALSE);
# endif
RESTORE_CANCEL(cancel_state);
UNLOCK();
return GC_SUCCESS;
}
# else
GC_API int GC_CALL GC_get_stack_base(
struct GC_stack_base *b GC_ATTR_UNUSED)
{
# if defined(GET_MAIN_STACKBASE_SPECIAL) && !defined(THREADS) \
&& !defined(IA64)
b->mem_base = GC_get_main_stack_base();
return GC_SUCCESS;
# else
return GC_UNIMPLEMENTED;
# endif
}
# endif /* !NEED_FIND_LIMIT */
#endif /* !HAVE_GET_STACK_BASE */
#ifndef GET_MAIN_STACKBASE_SPECIAL
/* This is always called from the main thread. Default implementation. */
ptr_t GC_get_main_stack_base(void)
{
struct GC_stack_base sb;
if (GC_get_stack_base(&sb) != GC_SUCCESS)
ABORT("GC_get_stack_base failed");
GC_ASSERT((word)GC_approx_sp() HOTTER_THAN (word)sb.mem_base);
return (ptr_t)sb.mem_base;
}
#endif /* !GET_MAIN_STACKBASE_SPECIAL */
/* Register static data segment(s) as roots. If more data segments are */
/* added later then they need to be registered at that point (as we do */
/* with SunOS dynamic loading), or GC_mark_roots needs to check for */
/* them (as we do with PCR). Called with allocator lock held. */
# ifdef OS2
void GC_register_data_segments(void)
{
PTIB ptib;
PPIB ppib;
HMODULE module_handle;
# define PBUFSIZ 512
UCHAR path[PBUFSIZ];
FILE * myexefile;
struct exe_hdr hdrdos; /* MSDOS header. */
struct e32_exe hdr386; /* Real header for my executable */
struct o32_obj seg; /* Current segment */
int nsegs;
# if defined(CPPCHECK)
hdrdos.padding[0] = 0; /* to prevent "field unused" warnings */
hdr386.exe_format_level = 0;
hdr386.os = 0;
hdr386.padding1[0] = 0;
hdr386.padding2[0] = 0;
seg.pagemap = 0;
seg.mapsize = 0;
seg.reserved = 0;
# endif
if (DosGetInfoBlocks(&ptib, &ppib) != NO_ERROR) {
ABORT("DosGetInfoBlocks failed");
}
module_handle = ppib -> pib_hmte;
if (DosQueryModuleName(module_handle, PBUFSIZ, path) != NO_ERROR) {
ABORT("DosQueryModuleName failed");
}
myexefile = fopen(path, "rb");
if (myexefile == 0) {
ABORT_ARG1("Failed to open executable", ": %s", path);
}
if (fread((char *)(&hdrdos), 1, sizeof(hdrdos), myexefile)
< sizeof(hdrdos)) {
ABORT_ARG1("Could not read MSDOS header", " from: %s", path);
}
if (E_MAGIC(hdrdos) != EMAGIC) {
ABORT_ARG1("Bad DOS magic number", " in file: %s", path);
}
if (fseek(myexefile, E_LFANEW(hdrdos), SEEK_SET) != 0) {
ABORT_ARG1("Bad DOS magic number", " in file: %s", path);
}
if (fread((char *)(&hdr386), 1, sizeof(hdr386), myexefile)
< sizeof(hdr386)) {
ABORT_ARG1("Could not read OS/2 header", " from: %s", path);
}
if (E32_MAGIC1(hdr386) != E32MAGIC1 || E32_MAGIC2(hdr386) != E32MAGIC2) {
ABORT_ARG1("Bad OS/2 magic number", " in file: %s", path);
}
if (E32_BORDER(hdr386) != E32LEBO || E32_WORDER(hdr386) != E32LEWO) {
ABORT_ARG1("Bad byte order in executable", " file: %s", path);
}
if (E32_CPU(hdr386) == E32CPU286) {
ABORT_ARG1("GC cannot handle 80286 executables", ": %s", path);
}
if (fseek(myexefile, E_LFANEW(hdrdos) + E32_OBJTAB(hdr386),
SEEK_SET) != 0) {
ABORT_ARG1("Seek to object table failed", " in file: %s", path);
}
for (nsegs = E32_OBJCNT(hdr386); nsegs > 0; nsegs--) {
int flags;
if (fread((char *)(&seg), 1, sizeof(seg), myexefile) < sizeof(seg)) {
ABORT_ARG1("Could not read obj table entry", " from file: %s", path);
}
flags = O32_FLAGS(seg);
if (!(flags & OBJWRITE)) continue;
if (!(flags & OBJREAD)) continue;
if (flags & OBJINVALID) {
GC_err_printf("Object with invalid pages?\n");
continue;
}
GC_add_roots_inner((ptr_t)O32_BASE(seg),
(ptr_t)(O32_BASE(seg)+O32_SIZE(seg)), FALSE);
}
(void)fclose(myexefile);
}
# else /* !OS2 */
# if defined(GWW_VDB)
# ifndef MEM_WRITE_WATCH
# define MEM_WRITE_WATCH 0x200000
# endif
# ifndef WRITE_WATCH_FLAG_RESET
# define WRITE_WATCH_FLAG_RESET 1
# endif
/* Since we can't easily check whether ULONG_PTR and SIZE_T are */
/* defined in Win32 basetsd.h, we define own ULONG_PTR. */
# define GC_ULONG_PTR word
typedef UINT (WINAPI * GetWriteWatch_type)(
DWORD, PVOID, GC_ULONG_PTR /* SIZE_T */,
PVOID *, GC_ULONG_PTR *, PULONG);
static GetWriteWatch_type GetWriteWatch_func;
static DWORD GetWriteWatch_alloc_flag;
# define GC_GWW_AVAILABLE() (GetWriteWatch_func != NULL)
static void detect_GetWriteWatch(void)
{
static GC_bool done;
HMODULE hK32;
if (done)
return;
# if defined(MPROTECT_VDB)
{
char * str = GETENV("GC_USE_GETWRITEWATCH");
# if defined(GC_PREFER_MPROTECT_VDB)
if (str == NULL || (*str == '0' && *(str + 1) == '\0')) {
/* GC_USE_GETWRITEWATCH is unset or set to "0". */
done = TRUE; /* falling back to MPROTECT_VDB strategy. */
/* This should work as if GWW_VDB is undefined. */
return;
}
# else
if (str != NULL && *str == '0' && *(str + 1) == '\0') {
/* GC_USE_GETWRITEWATCH is set "0". */
done = TRUE; /* falling back to MPROTECT_VDB strategy. */
return;
}
# endif
}
# endif
hK32 = GetModuleHandle(TEXT("kernel32.dll"));
if (hK32 != (HMODULE)0 &&
(GetWriteWatch_func = (GetWriteWatch_type)GetProcAddress(hK32,
"GetWriteWatch")) != NULL) {
/* Also check whether VirtualAlloc accepts MEM_WRITE_WATCH, */
/* as some versions of kernel32.dll have one but not the */
/* other, making the feature completely broken. */
void * page = VirtualAlloc(NULL, GC_page_size,
MEM_WRITE_WATCH | MEM_RESERVE,
PAGE_READWRITE);
if (page != NULL) {
PVOID pages[16];
GC_ULONG_PTR count = 16;
DWORD page_size;
/* Check that it actually works. In spite of some */
/* documentation it actually seems to exist on W2K. */
/* This test may be unnecessary, but ... */
if (GetWriteWatch_func(WRITE_WATCH_FLAG_RESET,
page, GC_page_size,
pages,
&count,
&page_size) != 0) {
/* GetWriteWatch always fails. */
GetWriteWatch_func = NULL;
} else {
GetWriteWatch_alloc_flag = MEM_WRITE_WATCH;
}
VirtualFree(page, 0 /* dwSize */, MEM_RELEASE);
} else {
/* GetWriteWatch will be useless. */
GetWriteWatch_func = NULL;
}
}
# ifndef SMALL_CONFIG
if (GetWriteWatch_func == NULL) {
GC_COND_LOG_PRINTF("Did not find a usable GetWriteWatch()\n");
} else {
GC_COND_LOG_PRINTF("Using GetWriteWatch()\n");
}
# endif
done = TRUE;
}
# else
# define GetWriteWatch_alloc_flag 0
# endif /* !GWW_VDB */
# if defined(MSWIN32) || defined(MSWINCE) || defined(CYGWIN32)
# ifdef MSWIN32
/* Unfortunately, we have to handle win32s very differently from NT, */
/* Since VirtualQuery has very different semantics. In particular, */
/* under win32s a VirtualQuery call on an unmapped page returns an */
/* invalid result. Under NT, GC_register_data_segments is a no-op */
/* and all real work is done by GC_register_dynamic_libraries. Under */
/* win32s, we cannot find the data segments associated with dll's. */
/* We register the main data segment here. */
GC_INNER GC_bool GC_no_win32_dlls = FALSE;
/* This used to be set for gcc, to avoid dealing with */
/* the structured exception handling issues. But we now have */
/* assembly code to do that right. */
GC_INNER GC_bool GC_wnt = FALSE;
/* This is a Windows NT derivative, i.e. NT, W2K, XP or later. */
GC_INNER void GC_init_win32(void)
{
# if defined(_WIN64) || (defined(_MSC_VER) && _MSC_VER >= 1800)
/* MS Visual Studio 2013 deprecates GetVersion, but on the other */
/* hand it cannot be used to target pre-Win2K. */
GC_wnt = TRUE;
# else
/* Set GC_wnt. If we're running under win32s, assume that no */
/* DLLs will be loaded. I doubt anyone still runs win32s, but... */
DWORD v = GetVersion();
GC_wnt = !(v & 0x80000000);
GC_no_win32_dlls |= ((!GC_wnt) && (v & 0xff) <= 3);
# endif
# ifdef USE_MUNMAP
if (GC_no_win32_dlls) {
/* Turn off unmapping for safety (since may not work well with */
/* GlobalAlloc). */
GC_unmap_threshold = 0;
}
# endif
}
/* Return the smallest address a such that VirtualQuery */
/* returns correct results for all addresses between a and start. */
/* Assumes VirtualQuery returns correct information for start. */
STATIC ptr_t GC_least_described_address(ptr_t start)
{
MEMORY_BASIC_INFORMATION buf;
LPVOID limit;
ptr_t p;
limit = GC_sysinfo.lpMinimumApplicationAddress;
p = (ptr_t)((word)start & ~(GC_page_size - 1));
for (;;) {
size_t result;
LPVOID q = (LPVOID)(p - GC_page_size);
if ((word)q > (word)p /* underflow */ || (word)q < (word)limit) break;
result = VirtualQuery(q, &buf, sizeof(buf));
if (result != sizeof(buf) || buf.AllocationBase == 0) break;
p = (ptr_t)(buf.AllocationBase);
}
return p;
}
# endif /* MSWIN32 */
# ifndef REDIRECT_MALLOC
/* We maintain a linked list of AllocationBase values that we know */
/* correspond to malloc heap sections. Currently this is only called */
/* during a GC. But there is some hope that for long running */
/* programs we will eventually see most heap sections. */
/* In the long run, it would be more reliable to occasionally walk */
/* the malloc heap with HeapWalk on the default heap. But that */
/* apparently works only for NT-based Windows. */
STATIC size_t GC_max_root_size = 100000; /* Appr. largest root size. */
# ifdef USE_WINALLOC
/* In the long run, a better data structure would also be nice ... */
STATIC struct GC_malloc_heap_list {
void * allocation_base;
struct GC_malloc_heap_list *next;
} *GC_malloc_heap_l = 0;
/* Is p the base of one of the malloc heap sections we already know */
/* about? */
STATIC GC_bool GC_is_malloc_heap_base(ptr_t p)
{
struct GC_malloc_heap_list *q = GC_malloc_heap_l;
while (0 != q) {
if (q -> allocation_base == p) return TRUE;
q = q -> next;
}
return FALSE;
}
STATIC void *GC_get_allocation_base(void *p)
{
MEMORY_BASIC_INFORMATION buf;
size_t result = VirtualQuery(p, &buf, sizeof(buf));
if (result != sizeof(buf)) {
ABORT("Weird VirtualQuery result");
}
return buf.AllocationBase;
}
GC_INNER void GC_add_current_malloc_heap(void)
{
struct GC_malloc_heap_list *new_l =
malloc(sizeof(struct GC_malloc_heap_list));
void * candidate = GC_get_allocation_base(new_l);
if (new_l == 0) return;
if (GC_is_malloc_heap_base(candidate)) {
/* Try a little harder to find malloc heap. */
size_t req_size = 10000;
do {
void *p = malloc(req_size);
if (0 == p) {
free(new_l);
return;
}
candidate = GC_get_allocation_base(p);
free(p);
req_size *= 2;
} while (GC_is_malloc_heap_base(candidate)
&& req_size < GC_max_root_size/10 && req_size < 500000);
if (GC_is_malloc_heap_base(candidate)) {
free(new_l);
return;
}
}
GC_COND_LOG_PRINTF("Found new system malloc AllocationBase at %p\n",
candidate);
new_l -> allocation_base = candidate;
new_l -> next = GC_malloc_heap_l;
GC_malloc_heap_l = new_l;
}
# endif /* USE_WINALLOC */
# endif /* !REDIRECT_MALLOC */
STATIC word GC_n_heap_bases = 0; /* See GC_heap_bases. */
/* Is p the start of either the malloc heap, or of one of our */
/* heap sections? */
GC_INNER GC_bool GC_is_heap_base(ptr_t p)
{
unsigned i;
# ifndef REDIRECT_MALLOC
if (GC_root_size > GC_max_root_size) GC_max_root_size = GC_root_size;
# ifdef USE_WINALLOC
if (GC_is_malloc_heap_base(p)) return TRUE;
# endif
# endif
for (i = 0; i < GC_n_heap_bases; i++) {
if (GC_heap_bases[i] == p) return TRUE;
}
return FALSE;
}
#ifdef MSWIN32
STATIC void GC_register_root_section(ptr_t static_root)
{
MEMORY_BASIC_INFORMATION buf;
LPVOID p;
char * base;
char * limit;
if (!GC_no_win32_dlls) return;
p = base = limit = GC_least_described_address(static_root);
while ((word)p < (word)GC_sysinfo.lpMaximumApplicationAddress) {
size_t result = VirtualQuery(p, &buf, sizeof(buf));
char * new_limit;
DWORD protect;
if (result != sizeof(buf) || buf.AllocationBase == 0
|| GC_is_heap_base(buf.AllocationBase)) break;
new_limit = (char *)p + buf.RegionSize;
protect = buf.Protect;
if (buf.State == MEM_COMMIT
&& is_writable(protect)) {
if ((char *)p == limit) {
limit = new_limit;
} else {
if (base != limit) GC_add_roots_inner(base, limit, FALSE);
base = p;
limit = new_limit;
}
}
if ((word)p > (word)new_limit /* overflow */) break;
p = (LPVOID)new_limit;
}
if (base != limit) GC_add_roots_inner(base, limit, FALSE);
}
#endif /* MSWIN32 */
void GC_register_data_segments(void)
{
# ifdef MSWIN32
GC_register_root_section((ptr_t)&GC_pages_executable);
/* any other GC global variable would fit too. */
# endif
}
# else /* !OS2 && !Windows */
# if (defined(SVR4) || defined(AIX) || defined(DGUX) \
|| (defined(LINUX) && defined(SPARC))) && !defined(PCR)
ptr_t GC_SysVGetDataStart(size_t max_page_size, ptr_t etext_addr)
{
word text_end = ((word)(etext_addr) + sizeof(word) - 1)
& ~(word)(sizeof(word) - 1);
/* etext rounded to word boundary */
word next_page = ((text_end + (word)max_page_size - 1)
& ~((word)max_page_size - 1));
word page_offset = (text_end & ((word)max_page_size - 1));
char * volatile result = (char *)(next_page + page_offset);
/* Note that this isn't equivalent to just adding */
/* max_page_size to &etext if &etext is at a page boundary */
GC_setup_temporary_fault_handler();
if (SETJMP(GC_jmp_buf) == 0) {
/* Try writing to the address. */
# ifdef AO_HAVE_fetch_and_add
volatile AO_t zero = 0;
(void)AO_fetch_and_add((volatile AO_t *)result, zero);
# else
/* Fallback to non-atomic fetch-and-store. */
char v = *result;
# if defined(CPPCHECK)
GC_noop1((word)&v);
# endif
*result = v;
# endif
GC_reset_fault_handler();
} else {
GC_reset_fault_handler();
/* We got here via a longjmp. The address is not readable. */
/* This is known to happen under Solaris 2.4 + gcc, which place */
/* string constants in the text segment, but after etext. */
/* Use plan B. Note that we now know there is a gap between */
/* text and data segments, so plan A brought us something. */
result = (char *)GC_find_limit(DATAEND, FALSE);
}
return((ptr_t)result);
}
# endif
#ifdef DATASTART_USES_BSDGETDATASTART
/* Its unclear whether this should be identical to the above, or */
/* whether it should apply to non-X86 architectures. */
/* For now we don't assume that there is always an empty page after */
/* etext. But in some cases there actually seems to be slightly more. */
/* This also deals with holes between read-only data and writable data. */
GC_INNER ptr_t GC_FreeBSDGetDataStart(size_t max_page_size,
ptr_t etext_addr)
{
word text_end = ((word)(etext_addr) + sizeof(word) - 1)
& ~(word)(sizeof(word) - 1);
/* etext rounded to word boundary */
volatile word next_page = (text_end + (word)max_page_size - 1)
& ~((word)max_page_size - 1);
volatile ptr_t result = (ptr_t)text_end;
GC_setup_temporary_fault_handler();
if (SETJMP(GC_jmp_buf) == 0) {
/* Try reading at the address. */
/* This should happen before there is another thread. */
for (; next_page < (word)DATAEND; next_page += (word)max_page_size)
*(volatile char *)next_page;
GC_reset_fault_handler();
} else {
GC_reset_fault_handler();
/* As above, we go to plan B */
result = GC_find_limit(DATAEND, FALSE);
}
return(result);
}
#endif /* DATASTART_USES_BSDGETDATASTART */
#ifdef AMIGA
# define GC_AMIGA_DS
# include "extra/AmigaOS.c"
# undef GC_AMIGA_DS
#elif defined(OPENBSD)
/* Depending on arch alignment, there can be multiple holes */
/* between DATASTART and DATAEND. Scan in DATASTART .. DATAEND */
/* and register each region. */
void GC_register_data_segments(void)
{
ptr_t region_start = DATASTART;
if ((word)region_start - 1U >= (word)DATAEND)
ABORT_ARG2("Wrong DATASTART/END pair",
": %p .. %p", (void *)region_start, (void *)DATAEND);
for (;;) {
ptr_t region_end = GC_find_limit_openbsd(region_start, DATAEND);
GC_add_roots_inner(region_start, region_end, FALSE);
if ((word)region_end >= (word)DATAEND)
break;
region_start = GC_skip_hole_openbsd(region_end, DATAEND);
}
}
# else /* !OS2 && !Windows && !AMIGA && !OPENBSD */
void GC_register_data_segments(void)
{
# if !defined(PCR) && !defined(MACOS)
# if defined(REDIRECT_MALLOC) && defined(GC_SOLARIS_THREADS)
/* As of Solaris 2.3, the Solaris threads implementation */
/* allocates the data structure for the initial thread with */
/* sbrk at process startup. It needs to be scanned, so that */
/* we don't lose some malloc allocated data structures */
/* hanging from it. We're on thin ice here ... */
extern caddr_t sbrk(int);
GC_ASSERT(DATASTART);
{
ptr_t p = (ptr_t)sbrk(0);
if ((word)DATASTART < (word)p)
GC_add_roots_inner(DATASTART, p, FALSE);
}
# else
if ((word)DATASTART - 1U >= (word)DATAEND) {
/* Subtract one to check also for NULL */
/* without a compiler warning. */
ABORT_ARG2("Wrong DATASTART/END pair",
": %p .. %p", (void *)DATASTART, (void *)DATAEND);
}
GC_add_roots_inner(DATASTART, DATAEND, FALSE);
# ifdef GC_HAVE_DATAREGION2
if ((word)DATASTART2 - 1U >= (word)DATAEND2)
ABORT_ARG2("Wrong DATASTART/END2 pair",
": %p .. %p", (void *)DATASTART2, (void *)DATAEND2);
GC_add_roots_inner(DATASTART2, DATAEND2, FALSE);
# endif
# endif
# endif
# if defined(MACOS)
{
# if defined(THINK_C)
extern void* GC_MacGetDataStart(void);
/* globals begin above stack and end at a5. */
GC_add_roots_inner((ptr_t)GC_MacGetDataStart(),
(ptr_t)LMGetCurrentA5(), FALSE);
# else
# if defined(__MWERKS__)
# if !__POWERPC__
extern void* GC_MacGetDataStart(void);
/* MATTHEW: Function to handle Far Globals (CW Pro 3) */
# if __option(far_data)
extern void* GC_MacGetDataEnd(void);
# endif
/* globals begin above stack and end at a5. */
GC_add_roots_inner((ptr_t)GC_MacGetDataStart(),
(ptr_t)LMGetCurrentA5(), FALSE);
/* MATTHEW: Handle Far Globals */
# if __option(far_data)
/* Far globals follow he QD globals: */
GC_add_roots_inner((ptr_t)LMGetCurrentA5(),
(ptr_t)GC_MacGetDataEnd(), FALSE);
# endif
# else
extern char __data_start__[], __data_end__[];
GC_add_roots_inner((ptr_t)&__data_start__,
(ptr_t)&__data_end__, FALSE);
# endif /* __POWERPC__ */
# endif /* __MWERKS__ */
# endif /* !THINK_C */
}
# endif /* MACOS */
/* Dynamic libraries are added at every collection, since they may */
/* change. */
}
# endif /* !AMIGA */
# endif /* !MSWIN32 && !MSWINCE */
# endif /* !OS2 */
/*
* Auxiliary routines for obtaining memory from OS.
*/
# if !defined(OS2) && !defined(PCR) && !defined(AMIGA) \
&& !defined(USE_WINALLOC) && !defined(MACOS) && !defined(DOS4GW) \
&& !defined(NONSTOP) && !defined(SN_TARGET_PS3) && !defined(RTEMS) \
&& !defined(__CC_ARM)
# define SBRK_ARG_T ptrdiff_t
#if defined(MMAP_SUPPORTED)
#ifdef USE_MMAP_FIXED
# define GC_MMAP_FLAGS MAP_FIXED | MAP_PRIVATE
/* Seems to yield better performance on Solaris 2, but can */
/* be unreliable if something is already mapped at the address. */
#else
# define GC_MMAP_FLAGS MAP_PRIVATE
#endif
#ifdef USE_MMAP_ANON
# define zero_fd -1
# if defined(MAP_ANONYMOUS) && !defined(CPPCHECK)
# define OPT_MAP_ANON MAP_ANONYMOUS
# else
# define OPT_MAP_ANON MAP_ANON
# endif
#else
static int zero_fd = -1;
# define OPT_MAP_ANON 0
#endif
#ifdef SYMBIAN
extern char* GC_get_private_path_and_zero_file(void);
#endif
STATIC ptr_t GC_unix_mmap_get_mem(size_t bytes)
{
void *result;
static ptr_t last_addr = HEAP_START;
# ifndef USE_MMAP_ANON
static GC_bool initialized = FALSE;
if (!EXPECT(initialized, TRUE)) {
# ifdef SYMBIAN
char *path = GC_get_private_path_and_zero_file();
if (path != NULL) {
zero_fd = open(path, O_RDWR | O_CREAT, 0666);
free(path);
}
# else
zero_fd = open("/dev/zero", O_RDONLY);
# endif
if (zero_fd == -1)
ABORT("Could not open /dev/zero");
if (fcntl(zero_fd, F_SETFD, FD_CLOEXEC) == -1)
WARN("Could not set FD_CLOEXEC for /dev/zero\n", 0);
initialized = TRUE;
}
# endif
if (bytes & (GC_page_size - 1)) ABORT("Bad GET_MEM arg");
result = mmap(last_addr, bytes, (PROT_READ | PROT_WRITE)
| (GC_pages_executable ? PROT_EXEC : 0),
GC_MMAP_FLAGS | OPT_MAP_ANON, zero_fd, 0/* offset */);
# undef IGNORE_PAGES_EXECUTABLE
if (result == MAP_FAILED) return(0);
last_addr = (ptr_t)(((word)result + bytes + GC_page_size - 1)
& ~(GC_page_size - 1));
# if !defined(LINUX)
if (last_addr == 0) {
/* Oops. We got the end of the address space. This isn't */
/* usable by arbitrary C code, since one-past-end pointers */
/* don't work, so we discard it and try again. */
munmap(result, ~GC_page_size - (size_t)result + 1);
/* Leave last page mapped, so we can't repeat. */
return GC_unix_mmap_get_mem(bytes);
}
# else
GC_ASSERT(last_addr != 0);
# endif
if (((word)result % HBLKSIZE) != 0)
ABORT(
"GC_unix_get_mem: Memory returned by mmap is not aligned to HBLKSIZE.");
return((ptr_t)result);
}
# endif /* MMAP_SUPPORTED */
#if defined(USE_MMAP)
ptr_t GC_unix_get_mem(size_t bytes)
{
return GC_unix_mmap_get_mem(bytes);
}
#else /* !USE_MMAP */
STATIC ptr_t GC_unix_sbrk_get_mem(size_t bytes)
{
ptr_t result;
# ifdef IRIX5
/* Bare sbrk isn't thread safe. Play by malloc rules. */
/* The equivalent may be needed on other systems as well. */
__LOCK_MALLOC();
# endif
{
ptr_t cur_brk = (ptr_t)sbrk(0);
SBRK_ARG_T lsbs = (word)cur_brk & (GC_page_size-1);
if ((SBRK_ARG_T)bytes < 0) {
result = 0; /* too big */
goto out;
}
if (lsbs != 0) {
if((ptr_t)sbrk((SBRK_ARG_T)GC_page_size - lsbs) == (ptr_t)(-1)) {
result = 0;
goto out;
}
}
# ifdef ADD_HEAP_GUARD_PAGES
/* This is useful for catching severe memory overwrite problems that */
/* span heap sections. It shouldn't otherwise be turned on. */
{
ptr_t guard = (ptr_t)sbrk((SBRK_ARG_T)GC_page_size);
if (mprotect(guard, GC_page_size, PROT_NONE) != 0)
ABORT("ADD_HEAP_GUARD_PAGES: mprotect failed");
}
# endif /* ADD_HEAP_GUARD_PAGES */
result = (ptr_t)sbrk((SBRK_ARG_T)bytes);
if (result == (ptr_t)(-1)) result = 0;
}
out:
# ifdef IRIX5
__UNLOCK_MALLOC();
# endif
return(result);
}
ptr_t GC_unix_get_mem(size_t bytes)
{
# if defined(MMAP_SUPPORTED)
/* By default, we try both sbrk and mmap, in that order. */
static GC_bool sbrk_failed = FALSE;
ptr_t result = 0;
if (!sbrk_failed) result = GC_unix_sbrk_get_mem(bytes);
if (0 == result) {
sbrk_failed = TRUE;
result = GC_unix_mmap_get_mem(bytes);
}
if (0 == result) {
/* Try sbrk again, in case sbrk memory became available. */
result = GC_unix_sbrk_get_mem(bytes);
}
return result;
# else /* !MMAP_SUPPORTED */
return GC_unix_sbrk_get_mem(bytes);
# endif
}
#endif /* !USE_MMAP */
# endif /* UN*X */
# ifdef OS2
void * os2_alloc(size_t bytes)
{
void * result;
if (DosAllocMem(&result, bytes, (PAG_READ | PAG_WRITE | PAG_COMMIT)
| (GC_pages_executable ? PAG_EXECUTE : 0))
!= NO_ERROR) {
return(0);
}
/* FIXME: What's the purpose of this recursion? (Probably, if */
/* DosAllocMem returns memory at 0 address then just retry once.) */
if (result == 0) return(os2_alloc(bytes));
return(result);
}
# endif /* OS2 */
#ifdef MSWINCE
ptr_t GC_wince_get_mem(size_t bytes)
{
ptr_t result = 0; /* initialized to prevent warning. */
word i;
bytes = ROUNDUP_PAGESIZE(bytes);
/* Try to find reserved, uncommitted pages */
for (i = 0; i < GC_n_heap_bases; i++) {
if (((word)(-(signed_word)GC_heap_lengths[i])
& (GC_sysinfo.dwAllocationGranularity-1))
>= bytes) {
result = GC_heap_bases[i] + GC_heap_lengths[i];
break;
}
}
if (i == GC_n_heap_bases) {
/* Reserve more pages */
size_t res_bytes =
SIZET_SAT_ADD(bytes, (size_t)GC_sysinfo.dwAllocationGranularity-1)
& ~((size_t)GC_sysinfo.dwAllocationGranularity-1);
/* If we ever support MPROTECT_VDB here, we will probably need to */
/* ensure that res_bytes is strictly > bytes, so that VirtualProtect */
/* never spans regions. It seems to be OK for a VirtualFree */
/* argument to span regions, so we should be OK for now. */
result = (ptr_t) VirtualAlloc(NULL, res_bytes,
MEM_RESERVE | MEM_TOP_DOWN,
GC_pages_executable ? PAGE_EXECUTE_READWRITE :
PAGE_READWRITE);
if (HBLKDISPL(result) != 0) ABORT("Bad VirtualAlloc result");
/* If I read the documentation correctly, this can */
/* only happen if HBLKSIZE > 64k or not a power of 2. */
if (GC_n_heap_bases >= MAX_HEAP_SECTS) ABORT("Too many heap sections");
if (result == NULL) return NULL;
GC_heap_bases[GC_n_heap_bases] = result;
GC_heap_lengths[GC_n_heap_bases] = 0;
GC_n_heap_bases++;
}
/* Commit pages */
result = (ptr_t) VirtualAlloc(result, bytes, MEM_COMMIT,
GC_pages_executable ? PAGE_EXECUTE_READWRITE :
PAGE_READWRITE);
# undef IGNORE_PAGES_EXECUTABLE
if (result != NULL) {
if (HBLKDISPL(result) != 0) ABORT("Bad VirtualAlloc result");
GC_heap_lengths[i] += bytes;
}
return(result);
}
#elif defined(USE_WINALLOC) || defined(CYGWIN32)
# ifdef USE_GLOBAL_ALLOC
# define GLOBAL_ALLOC_TEST 1
# else
# define GLOBAL_ALLOC_TEST GC_no_win32_dlls
# endif
# if (defined(GC_USE_MEM_TOP_DOWN) && defined(USE_WINALLOC)) \
|| defined(CPPCHECK)
DWORD GC_mem_top_down = MEM_TOP_DOWN;
/* Use GC_USE_MEM_TOP_DOWN for better 64-bit */
/* testing. Otherwise all addresses tend to */
/* end up in first 4GB, hiding bugs. */
# else
# define GC_mem_top_down 0
# endif /* !GC_USE_MEM_TOP_DOWN */
ptr_t GC_win32_get_mem(size_t bytes)
{
ptr_t result;
# ifndef USE_WINALLOC
result = GC_unix_get_mem(bytes);
# else
# ifdef MSWIN32
if (GLOBAL_ALLOC_TEST) {
/* VirtualAlloc doesn't like PAGE_EXECUTE_READWRITE. */
/* There are also unconfirmed rumors of other */
/* problems, so we dodge the issue. */
result = (ptr_t)GlobalAlloc(0, SIZET_SAT_ADD(bytes, HBLKSIZE));
/* Align it at HBLKSIZE boundary. */
result = (ptr_t)(((word)result + HBLKSIZE - 1)
& ~(word)(HBLKSIZE - 1));
} else
# endif
/* else */ {
/* VirtualProtect only works on regions returned by a */
/* single VirtualAlloc call. Thus we allocate one */
/* extra page, which will prevent merging of blocks */
/* in separate regions, and eliminate any temptation */
/* to call VirtualProtect on a range spanning regions. */
/* This wastes a small amount of memory, and risks */
/* increased fragmentation. But better alternatives */
/* would require effort. */
# ifdef MPROTECT_VDB
/* We can't check for GC_incremental here (because */
/* GC_enable_incremental() might be called some time */
/* later after the GC initialization). */
# ifdef GWW_VDB
# define VIRTUAL_ALLOC_PAD (GC_GWW_AVAILABLE() ? 0 : 1)
# else
# define VIRTUAL_ALLOC_PAD 1
# endif
# else
# define VIRTUAL_ALLOC_PAD 0
# endif
/* Pass the MEM_WRITE_WATCH only if GetWriteWatch-based */
/* VDBs are enabled and the GetWriteWatch function is */
/* available. Otherwise we waste resources or possibly */
/* cause VirtualAlloc to fail (observed in Windows 2000 */
/* SP2). */
result = (ptr_t) VirtualAlloc(NULL,
SIZET_SAT_ADD(bytes, VIRTUAL_ALLOC_PAD),
GetWriteWatch_alloc_flag
| (MEM_COMMIT | MEM_RESERVE)
| GC_mem_top_down,
GC_pages_executable ? PAGE_EXECUTE_READWRITE :
PAGE_READWRITE);
# undef IGNORE_PAGES_EXECUTABLE
}
# endif /* USE_WINALLOC */
if (HBLKDISPL(result) != 0) ABORT("Bad VirtualAlloc result");
/* If I read the documentation correctly, this can */
/* only happen if HBLKSIZE > 64k or not a power of 2. */
if (GC_n_heap_bases >= MAX_HEAP_SECTS) ABORT("Too many heap sections");
if (0 != result) GC_heap_bases[GC_n_heap_bases++] = result;
return(result);
}
GC_API void GC_CALL GC_win32_free_heap(void)
{
# ifndef CYGWIN32
if (GLOBAL_ALLOC_TEST)
# endif
{
while (GC_n_heap_bases-- > 0) {
# ifdef CYGWIN32
/* FIXME: Is it OK to use non-GC free() here? */
# else
GlobalFree(GC_heap_bases[GC_n_heap_bases]);
# endif
GC_heap_bases[GC_n_heap_bases] = 0;
}
} /* else */
# ifndef CYGWIN32
else {
/* Avoiding VirtualAlloc leak. */
while (GC_n_heap_bases > 0) {
VirtualFree(GC_heap_bases[--GC_n_heap_bases], 0, MEM_RELEASE);
GC_heap_bases[GC_n_heap_bases] = 0;
}
}
# endif
}
#endif /* USE_WINALLOC || CYGWIN32 */
#ifdef AMIGA
# define GC_AMIGA_AM
# include "extra/AmigaOS.c"
# undef GC_AMIGA_AM
#endif
#if defined(HAIKU)
# include
ptr_t GC_haiku_get_mem(size_t bytes)
{
void* mem;
GC_ASSERT(GC_page_size != 0);
if (posix_memalign(&mem, GC_page_size, bytes) == 0)
return mem;
return NULL;
}
#endif /* HAIKU */
#ifdef USE_MUNMAP
/* For now, this only works on Win32/WinCE and some Unix-like */
/* systems. If you have something else, don't define */
/* USE_MUNMAP. */
#if !defined(MSWIN32) && !defined(MSWINCE)
# include
# include
# include
# include
#endif
/* Compute a page aligned starting address for the unmap */
/* operation on a block of size bytes starting at start. */
/* Return 0 if the block is too small to make this feasible. */
STATIC ptr_t GC_unmap_start(ptr_t start, size_t bytes)
{
ptr_t result = (ptr_t)(((word)start + GC_page_size - 1)
& ~(GC_page_size - 1));
if ((word)(result + GC_page_size) > (word)(start + bytes)) return 0;
return result;
}
/* Compute end address for an unmap operation on the indicated */
/* block. */
STATIC ptr_t GC_unmap_end(ptr_t start, size_t bytes)
{
return (ptr_t)((word)(start + bytes) & ~(GC_page_size - 1));
}
/* Under Win32/WinCE we commit (map) and decommit (unmap) */
/* memory using VirtualAlloc and VirtualFree. These functions */
/* work on individual allocations of virtual memory, made */
/* previously using VirtualAlloc with the MEM_RESERVE flag. */
/* The ranges we need to (de)commit may span several of these */
/* allocations; therefore we use VirtualQuery to check */
/* allocation lengths, and split up the range as necessary. */
/* We assume that GC_remap is called on exactly the same range */
/* as a previous call to GC_unmap. It is safe to consistently */
/* round the endpoints in both places. */
GC_INNER void GC_unmap(ptr_t start, size_t bytes)
{
ptr_t start_addr = GC_unmap_start(start, bytes);
ptr_t end_addr = GC_unmap_end(start, bytes);
word len = end_addr - start_addr;
if (0 == start_addr) return;
# ifdef USE_WINALLOC
while (len != 0) {
MEMORY_BASIC_INFORMATION mem_info;
word free_len;
if (VirtualQuery(start_addr, &mem_info, sizeof(mem_info))
!= sizeof(mem_info))
ABORT("Weird VirtualQuery result");
free_len = (len < mem_info.RegionSize) ? len : mem_info.RegionSize;
if (!VirtualFree(start_addr, free_len, MEM_DECOMMIT))
ABORT("VirtualFree failed");
GC_unmapped_bytes += free_len;
start_addr += free_len;
len -= free_len;
}
# else
/* We immediately remap it to prevent an intervening mmap from */
/* accidentally grabbing the same address space. */
{
# ifdef CYGWIN32
/* Calling mmap() with the new protection flags on an */
/* existing memory map with MAP_FIXED is broken on Cygwin. */
/* However, calling mprotect() on the given address range */
/* with PROT_NONE seems to work fine. */
if (mprotect(start_addr, len, PROT_NONE))
ABORT("mprotect(PROT_NONE) failed");
# else
void * result = mmap(start_addr, len, PROT_NONE,
MAP_PRIVATE | MAP_FIXED | OPT_MAP_ANON,
zero_fd, 0/* offset */);
if (result != (void *)start_addr)
ABORT("mmap(PROT_NONE) failed");
# if defined(CPPCHECK) || defined(LINT2)
/* Explicitly store the resource handle to a global variable. */
GC_noop1((word)result);
# endif
# endif /* !CYGWIN32 */
}
GC_unmapped_bytes += len;
# endif
}
GC_INNER void GC_remap(ptr_t start, size_t bytes)
{
ptr_t start_addr = GC_unmap_start(start, bytes);
ptr_t end_addr = GC_unmap_end(start, bytes);
word len = end_addr - start_addr;
if (0 == start_addr) return;
/* FIXME: Handle out-of-memory correctly (at least for Win32) */
# ifdef USE_WINALLOC
while (len != 0) {
MEMORY_BASIC_INFORMATION mem_info;
word alloc_len;
ptr_t result;
if (VirtualQuery(start_addr, &mem_info, sizeof(mem_info))
!= sizeof(mem_info))
ABORT("Weird VirtualQuery result");
alloc_len = (len < mem_info.RegionSize) ? len : mem_info.RegionSize;
result = VirtualAlloc(start_addr, alloc_len, MEM_COMMIT,
GC_pages_executable ? PAGE_EXECUTE_READWRITE :
PAGE_READWRITE);
if (result != start_addr) {
if (GetLastError() == ERROR_NOT_ENOUGH_MEMORY ||
GetLastError() == ERROR_OUTOFMEMORY) {
ABORT("Not enough memory to process remapping");
} else {
ABORT("VirtualAlloc remapping failed");
}
}
# ifdef LINT2
GC_noop1((word)result);
# endif
GC_unmapped_bytes -= alloc_len;
start_addr += alloc_len;
len -= alloc_len;
}
# else
/* It was already remapped with PROT_NONE. */
{
# ifdef NACL
/* NaCl does not expose mprotect, but mmap should work fine. */
void *result = mmap(start_addr, len, (PROT_READ | PROT_WRITE)
| (GC_pages_executable ? PROT_EXEC : 0),
MAP_PRIVATE | MAP_FIXED | OPT_MAP_ANON,
zero_fd, 0 /* offset */);
if (result != (void *)start_addr)
ABORT("mmap as mprotect failed");
# if defined(CPPCHECK) || defined(LINT2)
GC_noop1((word)result);
# endif
# else
if (mprotect(start_addr, len, (PROT_READ | PROT_WRITE)
| (GC_pages_executable ? PROT_EXEC : 0)) != 0) {
ABORT_ARG3("mprotect remapping failed",
" at %p (length %lu), errcode= %d",
(void *)start_addr, (unsigned long)len, errno);
}
# endif /* !NACL */
}
# undef IGNORE_PAGES_EXECUTABLE
GC_unmapped_bytes -= len;
# endif
}
/* Two adjacent blocks have already been unmapped and are about to */
/* be merged. Unmap the whole block. This typically requires */
/* that we unmap a small section in the middle that was not previously */
/* unmapped due to alignment constraints. */
GC_INNER void GC_unmap_gap(ptr_t start1, size_t bytes1, ptr_t start2,
size_t bytes2)
{
ptr_t start1_addr = GC_unmap_start(start1, bytes1);
ptr_t end1_addr = GC_unmap_end(start1, bytes1);
ptr_t start2_addr = GC_unmap_start(start2, bytes2);
ptr_t start_addr = end1_addr;
ptr_t end_addr = start2_addr;
size_t len;
GC_ASSERT(start1 + bytes1 == start2);
if (0 == start1_addr) start_addr = GC_unmap_start(start1, bytes1 + bytes2);
if (0 == start2_addr) end_addr = GC_unmap_end(start1, bytes1 + bytes2);
if (0 == start_addr) return;
len = end_addr - start_addr;
# ifdef USE_WINALLOC
while (len != 0) {
MEMORY_BASIC_INFORMATION mem_info;
word free_len;
if (VirtualQuery(start_addr, &mem_info, sizeof(mem_info))
!= sizeof(mem_info))
ABORT("Weird VirtualQuery result");
free_len = (len < mem_info.RegionSize) ? len : mem_info.RegionSize;
if (!VirtualFree(start_addr, free_len, MEM_DECOMMIT))
ABORT("VirtualFree failed");
GC_unmapped_bytes += free_len;
start_addr += free_len;
len -= free_len;
}
# else
if (len != 0) {
/* Immediately remap as above. */
# ifdef CYGWIN32
if (mprotect(start_addr, len, PROT_NONE))
ABORT("mprotect(PROT_NONE) failed");
# else
void * result = mmap(start_addr, len, PROT_NONE,
MAP_PRIVATE | MAP_FIXED | OPT_MAP_ANON,
zero_fd, 0/* offset */);
if (result != (void *)start_addr)
ABORT("mmap(PROT_NONE) failed");
# if defined(CPPCHECK) || defined(LINT2)
GC_noop1((word)result);
# endif
# endif /* !CYGWIN32 */
GC_unmapped_bytes += len;
}
# endif
}
#endif /* USE_MUNMAP */
/* Routine for pushing any additional roots. In THREADS */
/* environment, this is also responsible for marking from */
/* thread stacks. */
#ifndef THREADS
GC_push_other_roots_proc GC_push_other_roots = 0;
#else /* THREADS */
# ifdef PCR
PCR_ERes GC_push_thread_stack(PCR_Th_T *t, PCR_Any dummy)
{
struct PCR_ThCtl_TInfoRep info;
PCR_ERes result;
info.ti_stkLow = info.ti_stkHi = 0;
result = PCR_ThCtl_GetInfo(t, &info);
GC_push_all_stack((ptr_t)(info.ti_stkLow), (ptr_t)(info.ti_stkHi));
return(result);
}
/* Push the contents of an old object. We treat this as stack */
/* data only because that makes it robust against mark stack */
/* overflow. */
PCR_ERes GC_push_old_obj(void *p, size_t size, PCR_Any data)
{
GC_push_all_stack((ptr_t)p, (ptr_t)p + size);
return(PCR_ERes_okay);
}
extern struct PCR_MM_ProcsRep * GC_old_allocator;
/* defined in pcr_interface.c. */
STATIC void GC_CALLBACK GC_default_push_other_roots(void)
{
/* Traverse data allocated by previous memory managers. */
if ((*(GC_old_allocator->mmp_enumerate))(PCR_Bool_false,
GC_push_old_obj, 0)
!= PCR_ERes_okay) {
ABORT("Old object enumeration failed");
}
/* Traverse all thread stacks. */
if (PCR_ERes_IsErr(
PCR_ThCtl_ApplyToAllOtherThreads(GC_push_thread_stack,0))
|| PCR_ERes_IsErr(GC_push_thread_stack(PCR_Th_CurrThread(), 0))) {
ABORT("Thread stack marking failed");
}
}
# endif /* PCR */
# if defined(GC_PTHREADS) || defined(GC_WIN32_THREADS)
STATIC void GC_CALLBACK GC_default_push_other_roots(void)
{
GC_push_all_stacks();
}
# endif /* GC_WIN32_THREADS || GC_PTHREADS */
# ifdef SN_TARGET_PS3
STATIC void GC_CALLBACK GC_default_push_other_roots(void)
{
ABORT("GC_default_push_other_roots is not implemented");
}
void GC_push_thread_structures(void)
{
ABORT("GC_push_thread_structures is not implemented");
}
# endif /* SN_TARGET_PS3 */
GC_push_other_roots_proc GC_push_other_roots = GC_default_push_other_roots;
#endif /* THREADS */
GC_API void GC_CALL GC_set_push_other_roots(GC_push_other_roots_proc fn)
{
GC_push_other_roots = fn;
}
GC_API GC_push_other_roots_proc GC_CALL GC_get_push_other_roots(void)
{
return GC_push_other_roots;
}
/*
* Routines for accessing dirty bits on virtual pages.
* There are six ways to maintain this information:
* DEFAULT_VDB: A simple dummy implementation that treats every page
* as possibly dirty. This makes incremental collection
* useless, but the implementation is still correct.
* MANUAL_VDB: Stacks and static data are always considered dirty.
* Heap pages are considered dirty if GC_dirty(p) has been
* called on some pointer p pointing to somewhere inside
* an object on that page. A GC_dirty() call on a large
* object directly dirties only a single page, but for
* MANUAL_VDB we are careful to treat an object with a dirty
* page as completely dirty.
* In order to avoid races, an object must be marked dirty
* after it is written, and a reference to the object
* must be kept on a stack or in a register in the interim.
* With threads enabled, an object directly reachable from the
* stack at the time of a collection is treated as dirty.
* In single-threaded mode, it suffices to ensure that no
* collection can take place between the pointer assignment
* and the GC_dirty() call.
* PCR_VDB: Use PPCRs virtual dirty bit facility.
* PROC_VDB: Use the /proc facility for reading dirty bits. Only
* works under some SVR4 variants. Even then, it may be
* too slow to be entirely satisfactory. Requires reading
* dirty bits for entire address space. Implementations tend
* to assume that the client is a (slow) debugger.
* MPROTECT_VDB:Protect pages and then catch the faults to keep track of
* dirtied pages. The implementation (and implementability)
* is highly system dependent. This usually fails when system
* calls write to a protected page. We prevent the read system
* call from doing so. It is the clients responsibility to
* make sure that other system calls are similarly protected
* or write only to the stack.
* GWW_VDB: Use the Win32 GetWriteWatch functions, if available, to
* read dirty bits. In case it is not available (because we
* are running on Windows 95, Windows 2000 or earlier),
* MPROTECT_VDB may be defined as a fallback strategy.
*/
#if defined(GWW_VDB) || defined(MPROTECT_VDB) || defined(PROC_VDB) \
|| defined(MANUAL_VDB)
/* Is the HBLKSIZE sized page at h marked dirty in the local buffer? */
/* If the actual page size is different, this returns TRUE if any */
/* of the pages overlapping h are dirty. This routine may err on the */
/* side of labeling pages as dirty (and this implementation does). */
GC_INNER GC_bool GC_page_was_dirty(struct hblk * h)
{
register word index;
if (HDR(h) == 0)
return TRUE;
index = PHT_HASH(h);
return get_pht_entry_from_index(GC_grungy_pages, index);
}
#endif
#if (defined(CHECKSUMS) && defined(GWW_VDB)) || defined(PROC_VDB)
/* Add all pages in pht2 to pht1. */
STATIC void GC_or_pages(page_hash_table pht1, page_hash_table pht2)
{
register unsigned i;
for (i = 0; i < PHT_SIZE; i++) pht1[i] |= pht2[i];
}
/* Used only if GWW_VDB. */
# ifdef MPROTECT_VDB
STATIC GC_bool GC_gww_page_was_ever_dirty(struct hblk * h)
# else
GC_INNER GC_bool GC_page_was_ever_dirty(struct hblk * h)
# endif
{
register word index;
if (HDR(h) == 0)
return TRUE;
index = PHT_HASH(h);
return get_pht_entry_from_index(GC_written_pages, index);
}
#endif /* CHECKSUMS && GWW_VDB || PROC_VDB */
#if ((defined(GWW_VDB) || defined(PROC_VDB)) && !defined(MPROTECT_VDB)) \
|| defined(MANUAL_VDB) || defined(DEFAULT_VDB)
/* Ignore write hints. They don't help us here. */
GC_INNER void GC_remove_protection(struct hblk * h GC_ATTR_UNUSED,
word nblocks GC_ATTR_UNUSED,
GC_bool is_ptrfree GC_ATTR_UNUSED) {}
#endif
#ifdef GWW_VDB
# define GC_GWW_BUF_LEN (MAXHINCR * HBLKSIZE / 4096 /* X86 page size */)
/* Still susceptible to overflow, if there are very large allocations, */
/* and everything is dirty. */
static PVOID gww_buf[GC_GWW_BUF_LEN];
# ifndef MPROTECT_VDB
# define GC_gww_dirty_init GC_dirty_init
# endif
GC_INNER GC_bool GC_gww_dirty_init(void)
{
detect_GetWriteWatch();
return GC_GWW_AVAILABLE();
}
# ifdef MPROTECT_VDB
STATIC void GC_gww_read_dirty(void)
# else
GC_INNER void GC_read_dirty(void)
# endif
{
word i;
BZERO(GC_grungy_pages, sizeof(GC_grungy_pages));
for (i = 0; i != GC_n_heap_sects; ++i) {
GC_ULONG_PTR count;
do {
PVOID * pages = gww_buf;
DWORD page_size;
count = GC_GWW_BUF_LEN;
/* GetWriteWatch is documented as returning non-zero when it */
/* fails, but the documentation doesn't explicitly say why it */
/* would fail or what its behaviour will be if it fails. */
/* It does appear to fail, at least on recent W2K instances, if */
/* the underlying memory was not allocated with the appropriate */
/* flag. This is common if GC_enable_incremental is called */
/* shortly after GC initialization. To avoid modifying the */
/* interface, we silently work around such a failure, it only */
/* affects the initial (small) heap allocation. If there are */
/* more dirty pages than will fit in the buffer, this is not */
/* treated as a failure; we must check the page count in the */
/* loop condition. Since each partial call will reset the */
/* status of some pages, this should eventually terminate even */
/* in the overflow case. */
if (GetWriteWatch_func(WRITE_WATCH_FLAG_RESET,
GC_heap_sects[i].hs_start,
GC_heap_sects[i].hs_bytes,
pages,
&count,
&page_size) != 0) {
static int warn_count = 0;
unsigned j;
struct hblk * start = (struct hblk *)GC_heap_sects[i].hs_start;
static struct hblk *last_warned = 0;
size_t nblocks = divHBLKSZ(GC_heap_sects[i].hs_bytes);
if ( i != 0 && last_warned != start && warn_count++ < 5) {
last_warned = start;
WARN(
"GC_gww_read_dirty unexpectedly failed at %p: "
"Falling back to marking all pages dirty\n", start);
}
for (j = 0; j < nblocks; ++j) {
word hash = PHT_HASH(start + j);
set_pht_entry_from_index(GC_grungy_pages, hash);
}
count = 1; /* Done with this section. */
} else /* succeeded */ {
PVOID * pages_end = pages + count;
while (pages != pages_end) {
struct hblk * h = (struct hblk *) *pages++;
struct hblk * h_end = (struct hblk *) ((char *) h + page_size);
do {
set_pht_entry_from_index(GC_grungy_pages, PHT_HASH(h));
} while ((word)(++h) < (word)h_end);
}
}
} while (count == GC_GWW_BUF_LEN);
/* FIXME: It's unclear from Microsoft's documentation if this loop */
/* is useful. We suspect the call just fails if the buffer fills */
/* up. But that should still be handled correctly. */
}
# ifdef CHECKSUMS
GC_or_pages(GC_written_pages, GC_grungy_pages);
# endif
}
#endif /* GWW_VDB */
#ifdef DEFAULT_VDB
/* All of the following assume the allocation lock is held. */
/* The client asserts that unallocated pages in the heap are never */
/* written. */
/* Initialize virtual dirty bit implementation. */
GC_INNER GC_bool GC_dirty_init(void)
{
GC_VERBOSE_LOG_PRINTF("Initializing DEFAULT_VDB...\n");
return TRUE;
}
/* Retrieve system dirty bits for heap to a local buffer. */
/* Restore the systems notion of which pages are dirty. */
GC_INNER void GC_read_dirty(void) {}
/* Is the HBLKSIZE sized page at h marked dirty in the local buffer? */
/* If the actual page size is different, this returns TRUE if any */
/* of the pages overlapping h are dirty. This routine may err on the */
/* side of labeling pages as dirty (and this implementation does). */
GC_INNER GC_bool GC_page_was_dirty(struct hblk * h GC_ATTR_UNUSED)
{
return(TRUE);
}
/* The following two routines are typically less crucial. */
/* They matter most with large dynamic libraries, or if we can't */
/* accurately identify stacks, e.g. under Solaris 2.X. Otherwise the */
/* following default versions are adequate. */
# ifdef CHECKSUMS
/* Could any valid GC heap pointer ever have been written to this page? */
GC_INNER GC_bool GC_page_was_ever_dirty(struct hblk * h GC_ATTR_UNUSED)
{
return(TRUE);
}
# endif /* CHECKSUMS */
#endif /* DEFAULT_VDB */
#ifdef MANUAL_VDB
/* Initialize virtual dirty bit implementation. */
GC_INNER GC_bool GC_dirty_init(void)
{
GC_VERBOSE_LOG_PRINTF("Initializing MANUAL_VDB...\n");
/* GC_dirty_pages and GC_grungy_pages are already cleared. */
return TRUE;
}
/* Retrieve system dirty bits for heap to a local buffer. */
/* Restore the systems notion of which pages are dirty. */
GC_INNER void GC_read_dirty(void)
{
BCOPY((word *)GC_dirty_pages, GC_grungy_pages,
(sizeof GC_dirty_pages));
BZERO((word *)GC_dirty_pages, (sizeof GC_dirty_pages));
}
# define async_set_pht_entry_from_index(db, index) \
set_pht_entry_from_index(db, index) /* for now */
/* Mark the page containing p as dirty. Logically, this dirties the */
/* entire object. */
void GC_dirty(ptr_t p)
{
word index = PHT_HASH(p);
async_set_pht_entry_from_index(GC_dirty_pages, index);
}
# ifdef CHECKSUMS
/* Could any valid GC heap pointer ever have been written to this page? */
GC_INNER GC_bool GC_page_was_ever_dirty(struct hblk * h GC_ATTR_UNUSED)
{
/* FIXME - implement me. */
return(TRUE);
}
# endif /* CHECKSUMS */
#endif /* MANUAL_VDB */
#ifdef MPROTECT_VDB
/* See DEFAULT_VDB for interface descriptions. */
/*
* This implementation maintains dirty bits itself by catching write
* faults and keeping track of them. We assume nobody else catches
* SIGBUS or SIGSEGV. We assume no write faults occur in system calls.
* This means that clients must ensure that system calls don't write
* to the write-protected heap. Probably the best way to do this is to
* ensure that system calls write at most to pointer-free objects in the
* heap, and do even that only if we are on a platform on which those
* are not protected. Another alternative is to wrap system calls
* (see example for read below), but the current implementation holds
* applications.
* We assume the page size is a multiple of HBLKSIZE.
* We prefer them to be the same. We avoid protecting pointer-free
* objects only if they are the same.
*/
# ifdef DARWIN
/* Using vm_protect (mach syscall) over mprotect (BSD syscall) seems to
decrease the likelihood of some of the problems described below. */
# include
STATIC mach_port_t GC_task_self = 0;
# define PROTECT(addr,len) \
if (vm_protect(GC_task_self, (vm_address_t)(addr), (vm_size_t)(len), \
FALSE, VM_PROT_READ \
| (GC_pages_executable ? VM_PROT_EXECUTE : 0)) \
== KERN_SUCCESS) {} else ABORT("vm_protect(PROTECT) failed")
# define UNPROTECT(addr,len) \
if (vm_protect(GC_task_self, (vm_address_t)(addr), (vm_size_t)(len), \
FALSE, (VM_PROT_READ | VM_PROT_WRITE) \
| (GC_pages_executable ? VM_PROT_EXECUTE : 0)) \
== KERN_SUCCESS) {} else ABORT("vm_protect(UNPROTECT) failed")
# elif !defined(USE_WINALLOC)
# include
# include
# if !defined(HAIKU)
# include
# endif
# define PROTECT(addr, len) \
if (mprotect((caddr_t)(addr), (size_t)(len), \
PROT_READ \
| (GC_pages_executable ? PROT_EXEC : 0)) >= 0) { \
} else ABORT("mprotect failed")
# define UNPROTECT(addr, len) \
if (mprotect((caddr_t)(addr), (size_t)(len), \
(PROT_READ | PROT_WRITE) \
| (GC_pages_executable ? PROT_EXEC : 0)) >= 0) { \
} else ABORT(GC_pages_executable ? \
"un-mprotect executable page failed" \
" (probably disabled by OS)" : \
"un-mprotect failed")
# undef IGNORE_PAGES_EXECUTABLE
# else /* USE_WINALLOC */
# ifndef MSWINCE
# include
# endif
static DWORD protect_junk;
# define PROTECT(addr, len) \
if (VirtualProtect((addr), (len), \
GC_pages_executable ? PAGE_EXECUTE_READ : \
PAGE_READONLY, \
&protect_junk)) { \
} else ABORT_ARG1("VirtualProtect failed", \
": errcode= 0x%X", (unsigned)GetLastError())
# define UNPROTECT(addr, len) \
if (VirtualProtect((addr), (len), \
GC_pages_executable ? PAGE_EXECUTE_READWRITE : \
PAGE_READWRITE, \
&protect_junk)) { \
} else ABORT("un-VirtualProtect failed")
# endif /* USE_WINALLOC */
# if defined(MSWIN32)
typedef LPTOP_LEVEL_EXCEPTION_FILTER SIG_HNDLR_PTR;
# undef SIG_DFL
# define SIG_DFL (LPTOP_LEVEL_EXCEPTION_FILTER)((signed_word)-1)
# elif defined(MSWINCE)
typedef LONG (WINAPI *SIG_HNDLR_PTR)(struct _EXCEPTION_POINTERS *);
# undef SIG_DFL
# define SIG_DFL (SIG_HNDLR_PTR) (-1)
# elif defined(DARWIN)
typedef void (* SIG_HNDLR_PTR)();
# else
typedef void (* SIG_HNDLR_PTR)(int, siginfo_t *, void *);
typedef void (* PLAIN_HNDLR_PTR)(int);
# endif
# if defined(__GLIBC__)
# if __GLIBC__ < 2 || __GLIBC__ == 2 && __GLIBC_MINOR__ < 2
# error glibc too old?
# endif
# endif
#ifndef DARWIN
STATIC SIG_HNDLR_PTR GC_old_segv_handler = 0;
/* Also old MSWIN32 ACCESS_VIOLATION filter */
# if !defined(MSWIN32) && !defined(MSWINCE)
STATIC SIG_HNDLR_PTR GC_old_bus_handler = 0;
# if defined(FREEBSD) || defined(HURD) || defined(HPUX)
STATIC GC_bool GC_old_bus_handler_used_si = FALSE;
# endif
STATIC GC_bool GC_old_segv_handler_used_si = FALSE;
# endif /* !MSWIN32 */
#endif /* !DARWIN */
#if defined(THREADS)
/* We need to lock around the bitmap update in the write fault handler */
/* in order to avoid the risk of losing a bit. We do this with a */
/* test-and-set spin lock if we know how to do that. Otherwise we */
/* check whether we are already in the handler and use the dumb but */
/* safe fallback algorithm of setting all bits in the word. */
/* Contention should be very rare, so we do the minimum to handle it */
/* correctly. */
#ifdef AO_HAVE_test_and_set_acquire
GC_INNER volatile AO_TS_t GC_fault_handler_lock = AO_TS_INITIALIZER;
static void async_set_pht_entry_from_index(volatile page_hash_table db,
size_t index)
{
while (AO_test_and_set_acquire(&GC_fault_handler_lock) == AO_TS_SET) {
/* empty */
}
/* Could also revert to set_pht_entry_from_index_safe if initial */
/* GC_test_and_set fails. */
set_pht_entry_from_index(db, index);
AO_CLEAR(&GC_fault_handler_lock);
}
#else /* !AO_HAVE_test_and_set_acquire */
# error No test_and_set operation: Introduces a race.
/* THIS WOULD BE INCORRECT! */
/* The dirty bit vector may be temporarily wrong, */
/* just before we notice the conflict and correct it. We may end up */
/* looking at it while it's wrong. But this requires contention */
/* exactly when a GC is triggered, which seems far less likely to */
/* fail than the old code, which had no reported failures. Thus we */
/* leave it this way while we think of something better, or support */
/* GC_test_and_set on the remaining platforms. */
static int * volatile currently_updating = 0;
static void async_set_pht_entry_from_index(volatile page_hash_table db,
size_t index)
{
int update_dummy;
currently_updating = &update_dummy;
set_pht_entry_from_index(db, index);
/* If we get contention in the 10 or so instruction window here, */
/* and we get stopped by a GC between the two updates, we lose! */
if (currently_updating != &update_dummy) {
set_pht_entry_from_index_safe(db, index);
/* We claim that if two threads concurrently try to update the */
/* dirty bit vector, the first one to execute UPDATE_START */
/* will see it changed when UPDATE_END is executed. (Note that */
/* &update_dummy must differ in two distinct threads.) It */
/* will then execute set_pht_entry_from_index_safe, thus */
/* returning us to a safe state, though not soon enough. */
}
}
#endif /* !AO_HAVE_test_and_set_acquire */
#else /* !THREADS */
# define async_set_pht_entry_from_index(db, index) \
set_pht_entry_from_index(db, index)
#endif /* !THREADS */
#ifndef DARWIN
# ifdef CHECKSUMS
void GC_record_fault(struct hblk * h); /* from checksums.c */
# endif
# if !defined(MSWIN32) && !defined(MSWINCE)
# include
# if defined(FREEBSD) || defined(HURD) || defined(HPUX)
# define SIG_OK (sig == SIGBUS || sig == SIGSEGV)
# else
# define SIG_OK (sig == SIGSEGV)
/* Catch SIGSEGV but ignore SIGBUS. */
# endif
# if defined(FREEBSD)
# ifndef SEGV_ACCERR
# define SEGV_ACCERR 2
# endif
# if defined(AARCH64) || defined(ARM32) || defined(MIPS)
# define CODE_OK (si -> si_code == SEGV_ACCERR)
# elif defined(POWERPC)
# define AIM /* Pretend that we're AIM. */
# include
# define CODE_OK (si -> si_code == EXC_DSI \
|| si -> si_code == SEGV_ACCERR)
# else
# define CODE_OK (si -> si_code == BUS_PAGE_FAULT \
|| si -> si_code == SEGV_ACCERR)
# endif
# elif defined(OSF1)
# define CODE_OK (si -> si_code == 2 /* experimentally determined */)
# elif defined(IRIX5)
# define CODE_OK (si -> si_code == EACCES)
# elif defined(HAIKU) || defined(HURD)
# define CODE_OK TRUE
# elif defined(LINUX)
# define CODE_OK TRUE
/* Empirically c.trapno == 14, on IA32, but is that useful? */
/* Should probably consider alignment issues on other */
/* architectures. */
# elif defined(HPUX)
# define CODE_OK (si -> si_code == SEGV_ACCERR \
|| si -> si_code == BUS_ADRERR \
|| si -> si_code == BUS_UNKNOWN \
|| si -> si_code == SEGV_UNKNOWN \
|| si -> si_code == BUS_OBJERR)
# elif defined(SUNOS5SIGS)
# define CODE_OK (si -> si_code == SEGV_ACCERR)
# endif
# ifndef NO_GETCONTEXT
# include
# endif
STATIC void GC_write_fault_handler(int sig, siginfo_t *si, void *raw_sc)
# else
# define SIG_OK (exc_info -> ExceptionRecord -> ExceptionCode \
== STATUS_ACCESS_VIOLATION)
# define CODE_OK (exc_info -> ExceptionRecord -> ExceptionInformation[0] \
== 1) /* Write fault */
STATIC LONG WINAPI GC_write_fault_handler(
struct _EXCEPTION_POINTERS *exc_info)
# endif /* MSWIN32 || MSWINCE */
{
# if !defined(MSWIN32) && !defined(MSWINCE)
char *addr = si -> si_addr;
# else
char * addr = (char *) (exc_info -> ExceptionRecord
-> ExceptionInformation[1]);
# endif
if (SIG_OK && CODE_OK) {
register struct hblk * h =
(struct hblk *)((word)addr & ~(GC_page_size-1));
GC_bool in_allocd_block;
size_t i;
# ifdef CHECKSUMS
GC_record_fault(h);
# endif
# ifdef SUNOS5SIGS
/* Address is only within the correct physical page. */
in_allocd_block = FALSE;
for (i = 0; i < divHBLKSZ(GC_page_size); i++) {
if (HDR(h+i) != 0) {
in_allocd_block = TRUE;
break;
}
}
# else
in_allocd_block = (HDR(addr) != 0);
# endif
if (!in_allocd_block) {
/* FIXME - We should make sure that we invoke the */
/* old handler with the appropriate calling */
/* sequence, which often depends on SA_SIGINFO. */
/* Heap blocks now begin and end on page boundaries */
SIG_HNDLR_PTR old_handler;
# if defined(MSWIN32) || defined(MSWINCE)
old_handler = GC_old_segv_handler;
# else
GC_bool used_si;
# if defined(FREEBSD) || defined(HURD) || defined(HPUX)
if (sig == SIGBUS) {
old_handler = GC_old_bus_handler;
used_si = GC_old_bus_handler_used_si;
} else
# endif
/* else */ {
old_handler = GC_old_segv_handler;
used_si = GC_old_segv_handler_used_si;
}
# endif
if (old_handler == (SIG_HNDLR_PTR)SIG_DFL) {
# if !defined(MSWIN32) && !defined(MSWINCE)
ABORT_ARG1("Unexpected bus error or segmentation fault",
" at %p", (void *)addr);
# else
return(EXCEPTION_CONTINUE_SEARCH);
# endif
} else {
/*
* FIXME: This code should probably check if the
* old signal handler used the traditional style and
* if so call it using that style.
*/
# if defined(MSWIN32) || defined(MSWINCE)
return((*old_handler)(exc_info));
# else
if (used_si)
((SIG_HNDLR_PTR)old_handler) (sig, si, raw_sc);
else
/* FIXME: should pass nonstandard args as well. */
((PLAIN_HNDLR_PTR)old_handler) (sig);
return;
# endif
}
}
UNPROTECT(h, GC_page_size);
/* We need to make sure that no collection occurs between */
/* the UNPROTECT and the setting of the dirty bit. Otherwise */
/* a write by a third thread might go unnoticed. Reversing */
/* the order is just as bad, since we would end up unprotecting */
/* a page in a GC cycle during which it's not marked. */
/* Currently we do this by disabling the thread stopping */
/* signals while this handler is running. An alternative might */
/* be to record the fact that we're about to unprotect, or */
/* have just unprotected a page in the GC's thread structure, */
/* and then to have the thread stopping code set the dirty */
/* flag, if necessary. */
for (i = 0; i < divHBLKSZ(GC_page_size); i++) {
word index = PHT_HASH(h+i);
async_set_pht_entry_from_index(GC_dirty_pages, index);
}
/* The write may not take place before dirty bits are read. */
/* But then we'll fault again ... */
# if defined(MSWIN32) || defined(MSWINCE)
return(EXCEPTION_CONTINUE_EXECUTION);
# else
return;
# endif
}
# if defined(MSWIN32) || defined(MSWINCE)
return EXCEPTION_CONTINUE_SEARCH;
# else
ABORT_ARG1("Unexpected bus error or segmentation fault",
" at %p", (void *)addr);
# endif
}
# ifdef GC_WIN32_THREADS
GC_INNER void GC_set_write_fault_handler(void)
{
SetUnhandledExceptionFilter(GC_write_fault_handler);
}
# endif
#endif /* !DARWIN */
/* We hold the allocation lock. We expect block h to be written */
/* shortly. Ensure that all pages containing any part of the n hblks */
/* starting at h are no longer protected. If is_ptrfree is false, also */
/* ensure that they will subsequently appear to be dirty. Not allowed */
/* to call GC_printf (and the friends) here, see Win32 GC_stop_world() */
/* for the information. */
GC_INNER void GC_remove_protection(struct hblk *h, word nblocks,
GC_bool is_ptrfree)
{
struct hblk * h_trunc; /* Truncated to page boundary */
struct hblk * h_end; /* Page boundary following block end */
struct hblk * current;
# if defined(GWW_VDB)
if (GC_GWW_AVAILABLE()) return;
# endif
if (!GC_incremental) return;
h_trunc = (struct hblk *)((word)h & ~(GC_page_size-1));
h_end = (struct hblk *)(((word)(h + nblocks) + GC_page_size - 1)
& ~(GC_page_size - 1));
if (h_end == h_trunc + 1 &&
get_pht_entry_from_index(GC_dirty_pages, PHT_HASH(h_trunc))) {
/* already marked dirty, and hence unprotected. */
return;
}
for (current = h_trunc; (word)current < (word)h_end; ++current) {
word index = PHT_HASH(current);
if (!is_ptrfree || (word)current < (word)h
|| (word)current >= (word)(h + nblocks)) {
async_set_pht_entry_from_index(GC_dirty_pages, index);
}
}
UNPROTECT(h_trunc, (ptr_t)h_end - (ptr_t)h_trunc);
}
#if !defined(DARWIN)
GC_INNER GC_bool GC_dirty_init(void)
{
# if !defined(MSWIN32) && !defined(MSWINCE)
struct sigaction act, oldact;
act.sa_flags = SA_RESTART | SA_SIGINFO;
act.sa_sigaction = GC_write_fault_handler;
(void)sigemptyset(&act.sa_mask);
# if defined(THREADS) && !defined(GC_OPENBSD_UTHREADS) \
&& !defined(GC_WIN32_THREADS) && !defined(NACL)
/* Arrange to postpone the signal while we are in a write fault */
/* handler. This effectively makes the handler atomic w.r.t. */
/* stopping the world for GC. */
(void)sigaddset(&act.sa_mask, GC_get_suspend_signal());
# endif
# endif /* !MSWIN32 */
GC_VERBOSE_LOG_PRINTF(
"Initializing mprotect virtual dirty bit implementation\n");
if (GC_page_size % HBLKSIZE != 0) {
ABORT("Page size not multiple of HBLKSIZE");
}
# if !defined(MSWIN32) && !defined(MSWINCE)
/* act.sa_restorer is deprecated and should not be initialized. */
# if defined(GC_IRIX_THREADS)
sigaction(SIGSEGV, 0, &oldact);
sigaction(SIGSEGV, &act, 0);
# else
{
int res = sigaction(SIGSEGV, &act, &oldact);
if (res != 0) ABORT("Sigaction failed");
}
# endif
if (oldact.sa_flags & SA_SIGINFO) {
GC_old_segv_handler = oldact.sa_sigaction;
GC_old_segv_handler_used_si = TRUE;
} else {
GC_old_segv_handler = (SIG_HNDLR_PTR)oldact.sa_handler;
GC_old_segv_handler_used_si = FALSE;
}
if (GC_old_segv_handler == (SIG_HNDLR_PTR)SIG_IGN) {
WARN("Previously ignored segmentation violation!?\n", 0);
GC_old_segv_handler = (SIG_HNDLR_PTR)SIG_DFL;
}
if (GC_old_segv_handler != (SIG_HNDLR_PTR)SIG_DFL) {
GC_VERBOSE_LOG_PRINTF("Replaced other SIGSEGV handler\n");
}
# if defined(HPUX) || defined(LINUX) || defined(HURD) \
|| (defined(FREEBSD) && (defined(__GLIBC__) || defined(SUNOS5SIGS)))
sigaction(SIGBUS, &act, &oldact);
if ((oldact.sa_flags & SA_SIGINFO) != 0) {
GC_old_bus_handler = oldact.sa_sigaction;
# if !defined(LINUX)
GC_old_bus_handler_used_si = TRUE;
# endif
} else {
GC_old_bus_handler = (SIG_HNDLR_PTR)oldact.sa_handler;
# if !defined(LINUX)
GC_old_bus_handler_used_si = FALSE;
# endif
}
if (GC_old_bus_handler == (SIG_HNDLR_PTR)SIG_IGN) {
WARN("Previously ignored bus error!?\n", 0);
# if !defined(LINUX)
GC_old_bus_handler = (SIG_HNDLR_PTR)SIG_DFL;
# else
/* GC_old_bus_handler is not used by GC_write_fault_handler. */
# endif
} else if (GC_old_bus_handler != (SIG_HNDLR_PTR)SIG_DFL) {
GC_VERBOSE_LOG_PRINTF("Replaced other SIGBUS handler\n");
}
# endif /* HPUX || LINUX || HURD || (FREEBSD && SUNOS5SIGS) */
# endif /* ! MS windows */
# if defined(GWW_VDB)
if (GC_gww_dirty_init())
return TRUE;
# endif
# if defined(MSWIN32)
GC_old_segv_handler = SetUnhandledExceptionFilter(GC_write_fault_handler);
if (GC_old_segv_handler != NULL) {
GC_COND_LOG_PRINTF("Replaced other UnhandledExceptionFilter\n");
} else {
GC_old_segv_handler = SIG_DFL;
}
# elif defined(MSWINCE)
/* MPROTECT_VDB is unsupported for WinCE at present. */
/* FIXME: implement it (if possible). */
# endif
# if defined(CPPCHECK) && defined(ADDRESS_SANITIZER)
GC_noop1((word)&__asan_default_options);
# endif
return TRUE;
}
#endif /* !DARWIN */
GC_API int GC_CALL GC_incremental_protection_needs(void)
{
GC_ASSERT(GC_is_initialized);
if (GC_page_size == HBLKSIZE) {
return GC_PROTECTS_POINTER_HEAP;
} else {
return GC_PROTECTS_POINTER_HEAP | GC_PROTECTS_PTRFREE_HEAP;
}
}
#define HAVE_INCREMENTAL_PROTECTION_NEEDS
#define IS_PTRFREE(hhdr) ((hhdr)->hb_descr == 0)
#define PAGE_ALIGNED(x) !((word)(x) & (GC_page_size - 1))
STATIC void GC_protect_heap(void)
{
unsigned i;
GC_bool protect_all =
(0 != (GC_incremental_protection_needs() & GC_PROTECTS_PTRFREE_HEAP));
for (i = 0; i < GC_n_heap_sects; i++) {
ptr_t start = GC_heap_sects[i].hs_start;
size_t len = GC_heap_sects[i].hs_bytes;
if (protect_all) {
PROTECT(start, len);
} else {
struct hblk * current;
struct hblk * current_start; /* Start of block to be protected. */
struct hblk * limit;
GC_ASSERT(PAGE_ALIGNED(len));
GC_ASSERT(PAGE_ALIGNED(start));
current_start = current = (struct hblk *)start;
limit = (struct hblk *)(start + len);
while ((word)current < (word)limit) {
hdr * hhdr;
word nhblks;
GC_bool is_ptrfree;
GC_ASSERT(PAGE_ALIGNED(current));
GET_HDR(current, hhdr);
if (IS_FORWARDING_ADDR_OR_NIL(hhdr)) {
/* This can happen only if we're at the beginning of a */
/* heap segment, and a block spans heap segments. */
/* We will handle that block as part of the preceding */
/* segment. */
GC_ASSERT(current_start == current);
current_start = ++current;
continue;
}
if (HBLK_IS_FREE(hhdr)) {
GC_ASSERT(PAGE_ALIGNED(hhdr -> hb_sz));
nhblks = divHBLKSZ(hhdr -> hb_sz);
is_ptrfree = TRUE; /* dirty on alloc */
} else {
nhblks = OBJ_SZ_TO_BLOCKS(hhdr -> hb_sz);
is_ptrfree = IS_PTRFREE(hhdr);
}
if (is_ptrfree) {
if ((word)current_start < (word)current) {
PROTECT(current_start, (ptr_t)current - (ptr_t)current_start);
}
current_start = (current += nhblks);
} else {
current += nhblks;
}
}
if ((word)current_start < (word)current) {
PROTECT(current_start, (ptr_t)current - (ptr_t)current_start);
}
}
}
}
/* We assume that either the world is stopped or its OK to lose dirty */
/* bits while this is happening (as in GC_enable_incremental). */
GC_INNER void GC_read_dirty(void)
{
# if defined(GWW_VDB)
if (GC_GWW_AVAILABLE()) {
GC_gww_read_dirty();
return;
}
# endif
BCOPY((word *)GC_dirty_pages, GC_grungy_pages,
(sizeof GC_dirty_pages));
BZERO((word *)GC_dirty_pages, (sizeof GC_dirty_pages));
GC_protect_heap();
}
/*
* Acquiring the allocation lock here is dangerous, since this
* can be called from within GC_call_with_alloc_lock, and the cord
* package does so. On systems that allow nested lock acquisition, this
* happens to work.
*/
/* We no longer wrap read by default, since that was causing too many */
/* problems. It is preferred that the client instead avoids writing */
/* to the write-protected heap with a system call. */
# ifdef CHECKSUMS
GC_INNER GC_bool GC_page_was_ever_dirty(struct hblk * h GC_ATTR_UNUSED)
{
# if defined(GWW_VDB)
if (GC_GWW_AVAILABLE())
return GC_gww_page_was_ever_dirty(h);
# endif
return(TRUE);
}
# endif /* CHECKSUMS */
#endif /* MPROTECT_VDB */
#ifdef PROC_VDB
/* See DEFAULT_VDB for interface descriptions. */
/* This implementation assumes a Solaris 2.X like /proc */
/* pseudo-file-system from which we can read page modified bits. This */
/* facility is far from optimal (e.g. we would like to get the info for */
/* only some of the address space), but it avoids intercepting system */
/* calls. */
# include
# include
# include
# include
# include
# include
# include
# define INITIAL_BUF_SZ 16384
STATIC size_t GC_proc_buf_size = INITIAL_BUF_SZ;
STATIC char *GC_proc_buf = NULL;
STATIC int GC_proc_fd = 0;
GC_INNER GC_bool GC_dirty_init(void)
{
char buf[40];
if (GC_bytes_allocd != 0 || GC_bytes_allocd_before_gc != 0) {
memset(GC_written_pages, 0xff, sizeof(page_hash_table));
GC_VERBOSE_LOG_PRINTF(
"Allocated %lu bytes: all pages may have been written\n",
(unsigned long)(GC_bytes_allocd + GC_bytes_allocd_before_gc));
}
(void)snprintf(buf, sizeof(buf), "/proc/%ld/pagedata", (long)getpid());
buf[sizeof(buf) - 1] = '\0';
GC_proc_fd = open(buf, O_RDONLY);
if (GC_proc_fd < 0) {
WARN("/proc open failed; cannot enable GC incremental mode\n", 0);
return FALSE;
}
if (syscall(SYS_fcntl, GC_proc_fd, F_SETFD, FD_CLOEXEC) == -1)
WARN("Could not set FD_CLOEXEC for /proc\n", 0);
GC_proc_buf = GC_scratch_alloc(GC_proc_buf_size);
if (GC_proc_buf == NULL)
ABORT("Insufficient space for /proc read");
return TRUE;
}
# define READ read
GC_INNER void GC_read_dirty(void)
{
int nmaps;
char * bufp = GC_proc_buf;
int i;
BZERO(GC_grungy_pages, sizeof(GC_grungy_pages));
if (READ(GC_proc_fd, bufp, GC_proc_buf_size) <= 0) {
/* Retry with larger buffer. */
size_t new_size = 2 * GC_proc_buf_size;
char *new_buf;
WARN("/proc read failed: GC_proc_buf_size = %" WARN_PRIdPTR "\n",
(signed_word)GC_proc_buf_size);
new_buf = GC_scratch_alloc(new_size);
if (new_buf != 0) {
GC_scratch_recycle_no_gww(bufp, GC_proc_buf_size);
GC_proc_buf = bufp = new_buf;
GC_proc_buf_size = new_size;
}
if (READ(GC_proc_fd, bufp, GC_proc_buf_size) <= 0) {
WARN("Insufficient space for /proc read\n", 0);
/* Punt: */
memset(GC_grungy_pages, 0xff, sizeof (page_hash_table));
memset(GC_written_pages, 0xff, sizeof(page_hash_table));
return;
}
}
/* Copy dirty bits into GC_grungy_pages */
nmaps = ((struct prpageheader *)bufp) -> pr_nmap;
# ifdef DEBUG_DIRTY_BITS
GC_log_printf("Proc VDB read: pr_nmap= %u, pr_npage= %lu\n",
nmaps, ((struct prpageheader *)bufp)->pr_npage);
# endif
bufp += sizeof(struct prpageheader);
for (i = 0; i < nmaps; i++) {
struct prasmap * map = (struct prasmap *)bufp;
ptr_t vaddr = (ptr_t)(map -> pr_vaddr);
unsigned long npages = map -> pr_npage;
unsigned pagesize = map -> pr_pagesize;
ptr_t limit;
# ifdef DEBUG_DIRTY_BITS
GC_log_printf(
"pr_vaddr= %p, npage= %lu, mflags= 0x%x, pagesize= 0x%x\n",
(void *)vaddr, npages, map->pr_mflags, pagesize);
# endif
bufp += sizeof(struct prasmap);
limit = vaddr + pagesize * npages;
for (; (word)vaddr < (word)limit; vaddr += pagesize) {
if ((*bufp++) & PG_MODIFIED) {
register struct hblk * h;
ptr_t next_vaddr = vaddr + pagesize;
# ifdef DEBUG_DIRTY_BITS
GC_log_printf("dirty page at: %p\n", (void *)vaddr);
# endif
for (h = (struct hblk *)vaddr;
(word)h < (word)next_vaddr; h++) {
register word index = PHT_HASH(h);
set_pht_entry_from_index(GC_grungy_pages, index);
}
}
}
bufp = (char *)(((word)bufp + (sizeof(long)-1))
& ~(word)(sizeof(long)-1));
}
# ifdef DEBUG_DIRTY_BITS
GC_log_printf("Proc VDB read done\n");
# endif
/* Update GC_written_pages. */
GC_or_pages(GC_written_pages, GC_grungy_pages);
}
# undef READ
#endif /* PROC_VDB */
#ifdef PCR_VDB
# include "vd/PCR_VD.h"
# define NPAGES (32*1024) /* 128 MB */
PCR_VD_DB GC_grungy_bits[NPAGES];
STATIC ptr_t GC_vd_base = NULL;
/* Address corresponding to GC_grungy_bits[0] */
/* HBLKSIZE aligned. */
GC_INNER GC_bool GC_dirty_init(void)
{
/* For the time being, we assume the heap generally grows up */
GC_vd_base = GC_heap_sects[0].hs_start;
if (GC_vd_base == 0) {
ABORT("Bad initial heap segment");
}
if (PCR_VD_Start(HBLKSIZE, GC_vd_base, NPAGES*HBLKSIZE)
!= PCR_ERes_okay) {
ABORT("Dirty bit initialization failed");
}
return TRUE;
}
GC_INNER void GC_read_dirty(void)
{
/* lazily enable dirty bits on newly added heap sects */
{
static int onhs = 0;
int nhs = GC_n_heap_sects;
for(; onhs < nhs; onhs++) {
PCR_VD_WriteProtectEnable(
GC_heap_sects[onhs].hs_start,
GC_heap_sects[onhs].hs_bytes );
}
}
if (PCR_VD_Clear(GC_vd_base, NPAGES*HBLKSIZE, GC_grungy_bits)
!= PCR_ERes_okay) {
ABORT("Dirty bit read failed");
}
}
GC_INNER GC_bool GC_page_was_dirty(struct hblk *h)
{
if ((word)h < (word)GC_vd_base
|| (word)h >= (word)(GC_vd_base + NPAGES*HBLKSIZE)) {
return(TRUE);
}
return(GC_grungy_bits[h - (struct hblk *)GC_vd_base] & PCR_VD_DB_dirtyBit);
}
GC_INNER void GC_remove_protection(struct hblk *h, word nblocks,
GC_bool is_ptrfree GC_ATTR_UNUSED)
{
PCR_VD_WriteProtectDisable(h, nblocks*HBLKSIZE);
PCR_VD_WriteProtectEnable(h, nblocks*HBLKSIZE);
}
#endif /* PCR_VDB */
#if defined(MPROTECT_VDB) && defined(DARWIN)
/* The following sources were used as a "reference" for this exception
handling code:
1. Apple's mach/xnu documentation
2. Timothy J. Wood's "Mach Exception Handlers 101" post to the
omnigroup's macosx-dev list.
www.omnigroup.com/mailman/archive/macosx-dev/2000-June/014178.html
3. macosx-nat.c from Apple's GDB source code.
*/
/* The bug that caused all this trouble should now be fixed. This should
eventually be removed if all goes well. */
/* #define BROKEN_EXCEPTION_HANDLING */
#include
#include
#include
#include
#include
/* These are not defined in any header, although they are documented */
extern boolean_t
exc_server(mach_msg_header_t *, mach_msg_header_t *);
extern kern_return_t
exception_raise(mach_port_t, mach_port_t, mach_port_t, exception_type_t,
exception_data_t, mach_msg_type_number_t);
extern kern_return_t
exception_raise_state(mach_port_t, mach_port_t, mach_port_t, exception_type_t,
exception_data_t, mach_msg_type_number_t,
thread_state_flavor_t*, thread_state_t,
mach_msg_type_number_t, thread_state_t,
mach_msg_type_number_t*);
extern kern_return_t
exception_raise_state_identity(mach_port_t, mach_port_t, mach_port_t,
exception_type_t, exception_data_t,
mach_msg_type_number_t, thread_state_flavor_t*,
thread_state_t, mach_msg_type_number_t,
thread_state_t, mach_msg_type_number_t*);
GC_API_OSCALL kern_return_t
catch_exception_raise(mach_port_t exception_port, mach_port_t thread,
mach_port_t task, exception_type_t exception,
exception_data_t code, mach_msg_type_number_t code_count);
/* These should never be called, but just in case... */
GC_API_OSCALL kern_return_t
catch_exception_raise_state(mach_port_name_t exception_port GC_ATTR_UNUSED,
int exception GC_ATTR_UNUSED, exception_data_t code GC_ATTR_UNUSED,
mach_msg_type_number_t codeCnt GC_ATTR_UNUSED, int flavor GC_ATTR_UNUSED,
thread_state_t old_state GC_ATTR_UNUSED, int old_stateCnt GC_ATTR_UNUSED,
thread_state_t new_state GC_ATTR_UNUSED, int new_stateCnt GC_ATTR_UNUSED)
{
ABORT_RET("Unexpected catch_exception_raise_state invocation");
return(KERN_INVALID_ARGUMENT);
}
GC_API_OSCALL kern_return_t
catch_exception_raise_state_identity(
mach_port_name_t exception_port GC_ATTR_UNUSED,
mach_port_t thread GC_ATTR_UNUSED, mach_port_t task GC_ATTR_UNUSED,
int exception GC_ATTR_UNUSED, exception_data_t code GC_ATTR_UNUSED,
mach_msg_type_number_t codeCnt GC_ATTR_UNUSED, int flavor GC_ATTR_UNUSED,
thread_state_t old_state GC_ATTR_UNUSED, int old_stateCnt GC_ATTR_UNUSED,
thread_state_t new_state GC_ATTR_UNUSED, int new_stateCnt GC_ATTR_UNUSED)
{
ABORT_RET("Unexpected catch_exception_raise_state_identity invocation");
return(KERN_INVALID_ARGUMENT);
}
#define MAX_EXCEPTION_PORTS 16
static struct {
mach_msg_type_number_t count;
exception_mask_t masks[MAX_EXCEPTION_PORTS];
exception_handler_t ports[MAX_EXCEPTION_PORTS];
exception_behavior_t behaviors[MAX_EXCEPTION_PORTS];
thread_state_flavor_t flavors[MAX_EXCEPTION_PORTS];
} GC_old_exc_ports;
STATIC struct {
void (*volatile os_callback[3])(void);
mach_port_t exception;
# if defined(THREADS)
mach_port_t reply;
# endif
} GC_ports = {
{
/* This is to prevent stripping these routines as dead. */
(void (*)(void))catch_exception_raise,
(void (*)(void))catch_exception_raise_state,
(void (*)(void))catch_exception_raise_state_identity
},
# ifdef THREADS
0, /* for 'exception' */
# endif
0
};
typedef struct {
mach_msg_header_t head;
} GC_msg_t;
typedef enum {
GC_MP_NORMAL,
GC_MP_DISCARDING,
GC_MP_STOPPED
} GC_mprotect_state_t;
#ifdef THREADS
/* FIXME: 1 and 2 seem to be safe to use in the msgh_id field, but it */
/* is not documented. Use the source and see if they should be OK. */
# define ID_STOP 1
# define ID_RESUME 2
/* This value is only used on the reply port. */
# define ID_ACK 3
STATIC GC_mprotect_state_t GC_mprotect_state = 0;
/* The following should ONLY be called when the world is stopped. */
STATIC void GC_mprotect_thread_notify(mach_msg_id_t id)
{
struct {
GC_msg_t msg;
mach_msg_trailer_t trailer;
} buf;
mach_msg_return_t r;
/* remote, local */
buf.msg.head.msgh_bits = MACH_MSGH_BITS(MACH_MSG_TYPE_MAKE_SEND, 0);
buf.msg.head.msgh_size = sizeof(buf.msg);
buf.msg.head.msgh_remote_port = GC_ports.exception;
buf.msg.head.msgh_local_port = MACH_PORT_NULL;
buf.msg.head.msgh_id = id;
r = mach_msg(&buf.msg.head, MACH_SEND_MSG | MACH_RCV_MSG | MACH_RCV_LARGE,
sizeof(buf.msg), sizeof(buf), GC_ports.reply,
MACH_MSG_TIMEOUT_NONE, MACH_PORT_NULL);
if (r != MACH_MSG_SUCCESS)
ABORT("mach_msg failed in GC_mprotect_thread_notify");
if (buf.msg.head.msgh_id != ID_ACK)
ABORT("Invalid ack in GC_mprotect_thread_notify");
}
/* Should only be called by the mprotect thread */
STATIC void GC_mprotect_thread_reply(void)
{
GC_msg_t msg;
mach_msg_return_t r;
/* remote, local */
msg.head.msgh_bits = MACH_MSGH_BITS(MACH_MSG_TYPE_MAKE_SEND, 0);
msg.head.msgh_size = sizeof(msg);
msg.head.msgh_remote_port = GC_ports.reply;
msg.head.msgh_local_port = MACH_PORT_NULL;
msg.head.msgh_id = ID_ACK;
r = mach_msg(&msg.head, MACH_SEND_MSG, sizeof(msg), 0, MACH_PORT_NULL,
MACH_MSG_TIMEOUT_NONE, MACH_PORT_NULL);
if (r != MACH_MSG_SUCCESS)
ABORT("mach_msg failed in GC_mprotect_thread_reply");
}
GC_INNER void GC_mprotect_stop(void)
{
GC_mprotect_thread_notify(ID_STOP);
}
GC_INNER void GC_mprotect_resume(void)
{
GC_mprotect_thread_notify(ID_RESUME);
}
# ifndef GC_NO_THREADS_DISCOVERY
GC_INNER void GC_darwin_register_mach_handler_thread(mach_port_t thread);
# endif
#else
/* The compiler should optimize away any GC_mprotect_state computations */
# define GC_mprotect_state GC_MP_NORMAL
#endif /* !THREADS */
STATIC void *GC_mprotect_thread(void *arg)
{
mach_msg_return_t r;
/* These two structures contain some private kernel data. We don't */
/* need to access any of it so we don't bother defining a proper */
/* struct. The correct definitions are in the xnu source code. */
struct {
mach_msg_header_t head;
char data[256];
} reply;
struct {
mach_msg_header_t head;
mach_msg_body_t msgh_body;
char data[1024];
} msg;
mach_msg_id_t id;
if ((word)arg == (word)-1) return 0; /* to make compiler happy */
# if defined(CPPCHECK)
reply.data[0] = 0; /* to prevent "field unused" warnings */
msg.data[0] = 0;
# endif
# if defined(THREADS) && !defined(GC_NO_THREADS_DISCOVERY)
GC_darwin_register_mach_handler_thread(mach_thread_self());
# endif
for(;;) {
r = mach_msg(&msg.head, MACH_RCV_MSG | MACH_RCV_LARGE |
(GC_mprotect_state == GC_MP_DISCARDING ? MACH_RCV_TIMEOUT : 0),
0, sizeof(msg), GC_ports.exception,
GC_mprotect_state == GC_MP_DISCARDING ? 0
: MACH_MSG_TIMEOUT_NONE, MACH_PORT_NULL);
id = r == MACH_MSG_SUCCESS ? msg.head.msgh_id : -1;
# if defined(THREADS)
if(GC_mprotect_state == GC_MP_DISCARDING) {
if(r == MACH_RCV_TIMED_OUT) {
GC_mprotect_state = GC_MP_STOPPED;
GC_mprotect_thread_reply();
continue;
}
if(r == MACH_MSG_SUCCESS && (id == ID_STOP || id == ID_RESUME))
ABORT("Out of order mprotect thread request");
}
# endif /* THREADS */
if (r != MACH_MSG_SUCCESS) {
ABORT_ARG2("mach_msg failed",
": errcode= %d (%s)", (int)r, mach_error_string(r));
}
switch(id) {
# if defined(THREADS)
case ID_STOP:
if(GC_mprotect_state != GC_MP_NORMAL)
ABORT("Called mprotect_stop when state wasn't normal");
GC_mprotect_state = GC_MP_DISCARDING;
break;
case ID_RESUME:
if(GC_mprotect_state != GC_MP_STOPPED)
ABORT("Called mprotect_resume when state wasn't stopped");
GC_mprotect_state = GC_MP_NORMAL;
GC_mprotect_thread_reply();
break;
# endif /* THREADS */
default:
/* Handle the message (calls catch_exception_raise) */
if(!exc_server(&msg.head, &reply.head))
ABORT("exc_server failed");
/* Send the reply */
r = mach_msg(&reply.head, MACH_SEND_MSG, reply.head.msgh_size, 0,
MACH_PORT_NULL, MACH_MSG_TIMEOUT_NONE,
MACH_PORT_NULL);
if(r != MACH_MSG_SUCCESS) {
/* This will fail if the thread dies, but the thread */
/* shouldn't die... */
# ifdef BROKEN_EXCEPTION_HANDLING
GC_err_printf("mach_msg failed with %d %s while sending "
"exc reply\n", (int)r, mach_error_string(r));
# else
ABORT("mach_msg failed while sending exception reply");
# endif
}
} /* switch */
} /* for(;;) */
}
/* All this SIGBUS code shouldn't be necessary. All protection faults should
be going through the mach exception handler. However, it seems a SIGBUS is
occasionally sent for some unknown reason. Even more odd, it seems to be
meaningless and safe to ignore. */
#ifdef BROKEN_EXCEPTION_HANDLING
/* Updates to this aren't atomic, but the SIGBUS'es seem pretty rare. */
/* Even if this doesn't get updated property, it isn't really a problem. */
STATIC int GC_sigbus_count = 0;
STATIC void GC_darwin_sigbus(int num, siginfo_t *sip, void *context)
{
if (num != SIGBUS)
ABORT("Got a non-sigbus signal in the sigbus handler");
/* Ugh... some seem safe to ignore, but too many in a row probably means
trouble. GC_sigbus_count is reset for each mach exception that is
handled */
if (GC_sigbus_count >= 8) {
ABORT("Got more than 8 SIGBUSs in a row!");
} else {
GC_sigbus_count++;
WARN("Ignoring SIGBUS\n", 0);
}
}
#endif /* BROKEN_EXCEPTION_HANDLING */
GC_INNER GC_bool GC_dirty_init(void)
{
kern_return_t r;
mach_port_t me;
pthread_t thread;
pthread_attr_t attr;
exception_mask_t mask;
# ifdef CAN_HANDLE_FORK
if (GC_handle_fork) {
/* To both support GC incremental mode and GC functions usage in */
/* the forked child, pthread_atfork should be used to install */
/* handlers that switch off GC_incremental in the child */
/* gracefully (unprotecting all pages and clearing */
/* GC_mach_handler_thread). For now, we just disable incremental */
/* mode if fork() handling is requested by the client. */
WARN("Can't turn on GC incremental mode as fork()"
" handling requested\n", 0);
return FALSE;
}
# endif
GC_VERBOSE_LOG_PRINTF("Initializing mach/darwin mprotect"
" virtual dirty bit implementation\n");
# ifdef BROKEN_EXCEPTION_HANDLING
WARN("Enabling workarounds for various darwin "
"exception handling bugs\n", 0);
# endif
if (GC_page_size % HBLKSIZE != 0) {
ABORT("Page size not multiple of HBLKSIZE");
}
GC_task_self = me = mach_task_self();
r = mach_port_allocate(me, MACH_PORT_RIGHT_RECEIVE, &GC_ports.exception);
/* TODO: WARN and return FALSE in case of a failure. */
if (r != KERN_SUCCESS)
ABORT("mach_port_allocate failed (exception port)");
r = mach_port_insert_right(me, GC_ports.exception, GC_ports.exception,
MACH_MSG_TYPE_MAKE_SEND);
if (r != KERN_SUCCESS)
ABORT("mach_port_insert_right failed (exception port)");
# if defined(THREADS)
r = mach_port_allocate(me, MACH_PORT_RIGHT_RECEIVE, &GC_ports.reply);
if(r != KERN_SUCCESS)
ABORT("mach_port_allocate failed (reply port)");
# endif
/* The exceptions we want to catch */
mask = EXC_MASK_BAD_ACCESS;
r = task_get_exception_ports(me, mask, GC_old_exc_ports.masks,
&GC_old_exc_ports.count, GC_old_exc_ports.ports,
GC_old_exc_ports.behaviors,
GC_old_exc_ports.flavors);
if (r != KERN_SUCCESS)
ABORT("task_get_exception_ports failed");
r = task_set_exception_ports(me, mask, GC_ports.exception, EXCEPTION_DEFAULT,
GC_MACH_THREAD_STATE);
if (r != KERN_SUCCESS)
ABORT("task_set_exception_ports failed");
if (pthread_attr_init(&attr) != 0)
ABORT("pthread_attr_init failed");
if (pthread_attr_setdetachstate(&attr, PTHREAD_CREATE_DETACHED) != 0)
ABORT("pthread_attr_setdetachedstate failed");
# undef pthread_create
/* This will call the real pthread function, not our wrapper */
if (pthread_create(&thread, &attr, GC_mprotect_thread, NULL) != 0)
ABORT("pthread_create failed");
(void)pthread_attr_destroy(&attr);
/* Setup the sigbus handler for ignoring the meaningless SIGBUSs */
# ifdef BROKEN_EXCEPTION_HANDLING
{
struct sigaction sa, oldsa;
sa.sa_handler = (SIG_HNDLR_PTR)GC_darwin_sigbus;
sigemptyset(&sa.sa_mask);
sa.sa_flags = SA_RESTART|SA_SIGINFO;
/* sa.sa_restorer is deprecated and should not be initialized. */
if (sigaction(SIGBUS, &sa, &oldsa) < 0)
ABORT("sigaction failed");
if ((SIG_HNDLR_PTR)oldsa.sa_handler != SIG_DFL) {
GC_VERBOSE_LOG_PRINTF("Replaced other SIGBUS handler\n");
}
}
# endif /* BROKEN_EXCEPTION_HANDLING */
return TRUE;
}
/* The source code for Apple's GDB was used as a reference for the */
/* exception forwarding code. This code is similar to be GDB code only */
/* because there is only one way to do it. */
STATIC kern_return_t GC_forward_exception(mach_port_t thread, mach_port_t task,
exception_type_t exception,
exception_data_t data,
mach_msg_type_number_t data_count)
{
unsigned int i;
kern_return_t r;
mach_port_t port;
exception_behavior_t behavior;
thread_state_flavor_t flavor;
thread_state_data_t thread_state;
mach_msg_type_number_t thread_state_count = THREAD_STATE_MAX;
for (i=0; i < GC_old_exc_ports.count; i++)
if (GC_old_exc_ports.masks[i] & (1 << exception))
break;
if (i == GC_old_exc_ports.count)
ABORT("No handler for exception!");
port = GC_old_exc_ports.ports[i];
behavior = GC_old_exc_ports.behaviors[i];
flavor = GC_old_exc_ports.flavors[i];
if (behavior == EXCEPTION_STATE || behavior == EXCEPTION_STATE_IDENTITY) {
r = thread_get_state(thread, flavor, thread_state, &thread_state_count);
if(r != KERN_SUCCESS)
ABORT("thread_get_state failed in forward_exception");
}
switch(behavior) {
case EXCEPTION_STATE:
r = exception_raise_state(port, thread, task, exception, data, data_count,
&flavor, thread_state, thread_state_count,
thread_state, &thread_state_count);
break;
case EXCEPTION_STATE_IDENTITY:
r = exception_raise_state_identity(port, thread, task, exception, data,
data_count, &flavor, thread_state,
thread_state_count, thread_state,
&thread_state_count);
break;
/* case EXCEPTION_DEFAULT: */ /* default signal handlers */
default: /* user-supplied signal handlers */
r = exception_raise(port, thread, task, exception, data, data_count);
}
if (behavior == EXCEPTION_STATE || behavior == EXCEPTION_STATE_IDENTITY) {
r = thread_set_state(thread, flavor, thread_state, thread_state_count);
if (r != KERN_SUCCESS)
ABORT("thread_set_state failed in forward_exception");
}
return r;
}
#define FWD() GC_forward_exception(thread, task, exception, code, code_count)
#ifdef ARM32
# define DARWIN_EXC_STATE ARM_EXCEPTION_STATE
# define DARWIN_EXC_STATE_COUNT ARM_EXCEPTION_STATE_COUNT
# define DARWIN_EXC_STATE_T arm_exception_state_t
# define DARWIN_EXC_STATE_DAR THREAD_FLD_NAME(far)
#elif defined(AARCH64)
# define DARWIN_EXC_STATE ARM_EXCEPTION_STATE64
# define DARWIN_EXC_STATE_COUNT ARM_EXCEPTION_STATE64_COUNT
# define DARWIN_EXC_STATE_T arm_exception_state64_t
# define DARWIN_EXC_STATE_DAR THREAD_FLD_NAME(far)
#elif defined(POWERPC)
# if CPP_WORDSZ == 32
# define DARWIN_EXC_STATE PPC_EXCEPTION_STATE
# define DARWIN_EXC_STATE_COUNT PPC_EXCEPTION_STATE_COUNT
# define DARWIN_EXC_STATE_T ppc_exception_state_t
# else
# define DARWIN_EXC_STATE PPC_EXCEPTION_STATE64
# define DARWIN_EXC_STATE_COUNT PPC_EXCEPTION_STATE64_COUNT
# define DARWIN_EXC_STATE_T ppc_exception_state64_t
# endif
# define DARWIN_EXC_STATE_DAR THREAD_FLD_NAME(dar)
#elif defined(I386) || defined(X86_64)
# if CPP_WORDSZ == 32
# if defined(i386_EXCEPTION_STATE_COUNT) \
&& !defined(x86_EXCEPTION_STATE32_COUNT)
/* Use old naming convention for 32-bit x86. */
# define DARWIN_EXC_STATE i386_EXCEPTION_STATE
# define DARWIN_EXC_STATE_COUNT i386_EXCEPTION_STATE_COUNT
# define DARWIN_EXC_STATE_T i386_exception_state_t
# else
# define DARWIN_EXC_STATE x86_EXCEPTION_STATE32
# define DARWIN_EXC_STATE_COUNT x86_EXCEPTION_STATE32_COUNT
# define DARWIN_EXC_STATE_T x86_exception_state32_t
# endif
# else
# define DARWIN_EXC_STATE x86_EXCEPTION_STATE64
# define DARWIN_EXC_STATE_COUNT x86_EXCEPTION_STATE64_COUNT
# define DARWIN_EXC_STATE_T x86_exception_state64_t
# endif
# define DARWIN_EXC_STATE_DAR THREAD_FLD_NAME(faultvaddr)
#elif !defined(CPPCHECK)
# error FIXME for non-arm/ppc/x86 darwin
#endif
/* This violates the namespace rules but there isn't anything that can */
/* be done about it. The exception handling stuff is hard coded to */
/* call this. catch_exception_raise, catch_exception_raise_state and */
/* and catch_exception_raise_state_identity are called from OS. */
GC_API_OSCALL kern_return_t
catch_exception_raise(mach_port_t exception_port GC_ATTR_UNUSED,
mach_port_t thread, mach_port_t task GC_ATTR_UNUSED,
exception_type_t exception, exception_data_t code,
mach_msg_type_number_t code_count GC_ATTR_UNUSED)
{
kern_return_t r;
char *addr;
thread_state_flavor_t flavor = DARWIN_EXC_STATE;
mach_msg_type_number_t exc_state_count = DARWIN_EXC_STATE_COUNT;
DARWIN_EXC_STATE_T exc_state;
if (exception != EXC_BAD_ACCESS || code[0] != KERN_PROTECTION_FAILURE) {
# ifdef DEBUG_EXCEPTION_HANDLING
/* We aren't interested, pass it on to the old handler */
GC_log_printf("Exception: 0x%x Code: 0x%x 0x%x in catch...\n",
exception, code_count > 0 ? code[0] : -1,
code_count > 1 ? code[1] : -1);
# endif
return FWD();
}
r = thread_get_state(thread, flavor, (natural_t*)&exc_state,
&exc_state_count);
if(r != KERN_SUCCESS) {
/* The thread is supposed to be suspended while the exception */
/* handler is called. This shouldn't fail. */
# ifdef BROKEN_EXCEPTION_HANDLING
GC_err_printf("thread_get_state failed in catch_exception_raise\n");
return KERN_SUCCESS;
# else
ABORT("thread_get_state failed in catch_exception_raise");
# endif
}
/* This is the address that caused the fault */
addr = (char*) exc_state.DARWIN_EXC_STATE_DAR;
if (HDR(addr) == 0) {
/* Ugh... just like the SIGBUS problem above, it seems we get */
/* a bogus KERN_PROTECTION_FAILURE every once and a while. We wait */
/* till we get a bunch in a row before doing anything about it. */
/* If a "real" fault ever occurs it'll just keep faulting over and */
/* over and we'll hit the limit pretty quickly. */
# ifdef BROKEN_EXCEPTION_HANDLING
static char *last_fault;
static int last_fault_count;
if(addr != last_fault) {
last_fault = addr;
last_fault_count = 0;
}
if(++last_fault_count < 32) {
if(last_fault_count == 1)
WARN("Ignoring KERN_PROTECTION_FAILURE at %p\n", addr);
return KERN_SUCCESS;
}
GC_err_printf("Unexpected KERN_PROTECTION_FAILURE at %p; aborting...\n",
(void *)addr);
/* Can't pass it along to the signal handler because that is */
/* ignoring SIGBUS signals. We also shouldn't call ABORT here as */
/* signals don't always work too well from the exception handler. */
EXIT();
# else /* BROKEN_EXCEPTION_HANDLING */
/* Pass it along to the next exception handler
(which should call SIGBUS/SIGSEGV) */
return FWD();
# endif /* !BROKEN_EXCEPTION_HANDLING */
}
# ifdef BROKEN_EXCEPTION_HANDLING
/* Reset the number of consecutive SIGBUSs */
GC_sigbus_count = 0;
# endif
if (GC_mprotect_state == GC_MP_NORMAL) { /* common case */
struct hblk * h = (struct hblk*)((word)addr & ~(GC_page_size-1));
size_t i;
UNPROTECT(h, GC_page_size);
for (i = 0; i < divHBLKSZ(GC_page_size); i++) {
word index = PHT_HASH(h+i);
async_set_pht_entry_from_index(GC_dirty_pages, index);
}
} else if (GC_mprotect_state == GC_MP_DISCARDING) {
/* Lie to the thread for now. No sense UNPROTECT()ing the memory
when we're just going to PROTECT() it again later. The thread
will just fault again once it resumes */
} else {
/* Shouldn't happen, i don't think */
GC_err_printf("KERN_PROTECTION_FAILURE while world is stopped\n");
return FWD();
}
return KERN_SUCCESS;
}
#undef FWD
#ifndef NO_DESC_CATCH_EXCEPTION_RAISE
/* These symbols should have REFERENCED_DYNAMICALLY (0x10) bit set to */
/* let strip know they are not to be stripped. */
__asm__(".desc _catch_exception_raise, 0x10");
__asm__(".desc _catch_exception_raise_state, 0x10");
__asm__(".desc _catch_exception_raise_state_identity, 0x10");
#endif
#endif /* DARWIN && MPROTECT_VDB */
#ifndef HAVE_INCREMENTAL_PROTECTION_NEEDS
GC_API int GC_CALL GC_incremental_protection_needs(void)
{
return GC_PROTECTS_NONE;
}
#endif /* !HAVE_INCREMENTAL_PROTECTION_NEEDS */
#ifdef ECOS
/* Undo sbrk() redirection. */
# undef sbrk
#endif
/* If value is non-zero then allocate executable memory. */
GC_API void GC_CALL GC_set_pages_executable(int value)
{
GC_ASSERT(!GC_is_initialized);
/* Even if IGNORE_PAGES_EXECUTABLE is defined, GC_pages_executable is */
/* touched here to prevent a compiler warning. */
GC_pages_executable = (GC_bool)(value != 0);
}
/* Returns non-zero if the GC-allocated memory is executable. */
/* GC_get_pages_executable is defined after all the places */
/* where GC_get_pages_executable is undefined. */
GC_API int GC_CALL GC_get_pages_executable(void)
{
# ifdef IGNORE_PAGES_EXECUTABLE
return 1; /* Always allocate executable memory. */
# else
return (int)GC_pages_executable;
# endif
}
/* Call stack save code for debugging. Should probably be in */
/* mach_dep.c, but that requires reorganization. */
/* I suspect the following works for most X86 *nix variants, so */
/* long as the frame pointer is explicitly stored. In the case of gcc, */
/* compiler flags (e.g. -fomit-frame-pointer) determine whether it is. */
#if defined(I386) && defined(LINUX) && defined(SAVE_CALL_CHAIN)
# include
struct frame {
struct frame *fr_savfp;
long fr_savpc;
# if NARGS > 0
long fr_arg[NARGS]; /* All the arguments go here. */
# endif
};
#endif
#if defined(SPARC)
# if defined(LINUX)
# include
struct frame {
long fr_local[8];
long fr_arg[6];
struct frame *fr_savfp;
long fr_savpc;
# ifndef __arch64__
char *fr_stret;
# endif
long fr_argd[6];
long fr_argx[0];
};
# elif defined (DRSNX)
# include
# elif defined(OPENBSD)
# include
# elif defined(FREEBSD) || defined(NETBSD)
# include
# else
# include
# endif
# if NARGS > 6
# error We only know how to get the first 6 arguments
# endif
#endif /* SPARC */
#ifdef NEED_CALLINFO
/* Fill in the pc and argument information for up to NFRAMES of my */
/* callers. Ignore my frame and my callers frame. */
#ifdef LINUX
# include
#endif
#endif /* NEED_CALLINFO */
#if defined(GC_HAVE_BUILTIN_BACKTRACE)
# ifdef _MSC_VER
# include "private/msvc_dbg.h"
# else
# include
# endif
#endif
#ifdef SAVE_CALL_CHAIN
#if NARGS == 0 && NFRAMES % 2 == 0 /* No padding */ \
&& defined(GC_HAVE_BUILTIN_BACKTRACE)
#ifdef REDIRECT_MALLOC
/* Deal with possible malloc calls in backtrace by omitting */
/* the infinitely recursing backtrace. */
# ifdef THREADS
__thread /* If your compiler doesn't understand this */
/* you could use something like pthread_getspecific. */
# endif
GC_bool GC_in_save_callers = FALSE;
#endif
GC_INNER void GC_save_callers(struct callinfo info[NFRAMES])
{
void * tmp_info[NFRAMES + 1];
int npcs, i;
# define IGNORE_FRAMES 1
/* We retrieve NFRAMES+1 pc values, but discard the first, since it */
/* points to our own frame. */
# ifdef REDIRECT_MALLOC
if (GC_in_save_callers) {
info[0].ci_pc = (word)(&GC_save_callers);
for (i = 1; i < NFRAMES; ++i) info[i].ci_pc = 0;
return;
}
GC_in_save_callers = TRUE;
# endif
GC_ASSERT(I_HOLD_LOCK());
/* backtrace may call dl_iterate_phdr which is also */
/* used by GC_register_dynamic_libraries, and */
/* dl_iterate_phdr is not guaranteed to be reentrant. */
GC_STATIC_ASSERT(sizeof(struct callinfo) == sizeof(void *));
npcs = backtrace((void **)tmp_info, NFRAMES + IGNORE_FRAMES);
BCOPY(tmp_info+IGNORE_FRAMES, info, (npcs - IGNORE_FRAMES) * sizeof(void *));
for (i = npcs - IGNORE_FRAMES; i < NFRAMES; ++i) info[i].ci_pc = 0;
# ifdef REDIRECT_MALLOC
GC_in_save_callers = FALSE;
# endif
}
#else /* No builtin backtrace; do it ourselves */
#if (defined(OPENBSD) || defined(NETBSD) || defined(FREEBSD)) && defined(SPARC)
# define FR_SAVFP fr_fp
# define FR_SAVPC fr_pc
#else
# define FR_SAVFP fr_savfp
# define FR_SAVPC fr_savpc
#endif
#if defined(SPARC) && (defined(__arch64__) || defined(__sparcv9))
# define BIAS 2047
#else
# define BIAS 0
#endif
GC_INNER void GC_save_callers(struct callinfo info[NFRAMES])
{
struct frame *frame;
struct frame *fp;
int nframes = 0;
# ifdef I386
/* We assume this is turned on only with gcc as the compiler. */
asm("movl %%ebp,%0" : "=r"(frame));
fp = frame;
# else
frame = (struct frame *)GC_save_regs_in_stack();
fp = (struct frame *)((long) frame -> FR_SAVFP + BIAS);
#endif
for (; !((word)fp HOTTER_THAN (word)frame)
&& !((word)GC_stackbottom HOTTER_THAN (word)fp)
&& nframes < NFRAMES;
fp = (struct frame *)((long) fp -> FR_SAVFP + BIAS), nframes++) {
# if NARGS > 0
register int i;
# endif
info[nframes].ci_pc = fp->FR_SAVPC;
# if NARGS > 0
for (i = 0; i < NARGS; i++) {
info[nframes].ci_arg[i] = ~(fp->fr_arg[i]);
}
# endif /* NARGS > 0 */
}
if (nframes < NFRAMES) info[nframes].ci_pc = 0;
}
#endif /* No builtin backtrace */
#endif /* SAVE_CALL_CHAIN */
#ifdef NEED_CALLINFO
/* Print info to stderr. We do NOT hold the allocation lock */
GC_INNER void GC_print_callers(struct callinfo info[NFRAMES])
{
int i;
static int reentry_count = 0;
GC_bool stop = FALSE;
DCL_LOCK_STATE;
/* FIXME: This should probably use a different lock, so that we */
/* become callable with or without the allocation lock. */
LOCK();
++reentry_count;
UNLOCK();
# if NFRAMES == 1
GC_err_printf("\tCaller at allocation:\n");
# else
GC_err_printf("\tCall chain at allocation:\n");
# endif
for (i = 0; i < NFRAMES && !stop; i++) {
if (info[i].ci_pc == 0) break;
# if NARGS > 0
{
int j;
GC_err_printf("\t\targs: ");
for (j = 0; j < NARGS; j++) {
if (j != 0) GC_err_printf(", ");
GC_err_printf("%d (0x%X)", ~(info[i].ci_arg[j]),
~(info[i].ci_arg[j]));
}
GC_err_printf("\n");
}
# endif
if (reentry_count > 1) {
/* We were called during an allocation during */
/* a previous GC_print_callers call; punt. */
GC_err_printf("\t\t##PC##= 0x%lx\n",
(unsigned long)info[i].ci_pc);
continue;
}
{
char buf[40];
char *name;
# if defined(GC_HAVE_BUILTIN_BACKTRACE) \
&& !defined(GC_BACKTRACE_SYMBOLS_BROKEN)
char **sym_name =
backtrace_symbols((void **)(&(info[i].ci_pc)), 1);
if (sym_name != NULL) {
name = sym_name[0];
} else
# endif
/* else */ {
(void)snprintf(buf, sizeof(buf), "##PC##= 0x%lx",
(unsigned long)info[i].ci_pc);
buf[sizeof(buf) - 1] = '\0';
name = buf;
}
# if defined(LINUX) && !defined(SMALL_CONFIG)
/* Try for a line number. */
{
FILE *pipe;
# define EXE_SZ 100
static char exe_name[EXE_SZ];
# define CMD_SZ 200
char cmd_buf[CMD_SZ];
# define RESULT_SZ 200
static char result_buf[RESULT_SZ];
size_t result_len;
char *old_preload;
# define PRELOAD_SZ 200
char preload_buf[PRELOAD_SZ];
static GC_bool found_exe_name = FALSE;
static GC_bool will_fail = FALSE;
int ret_code;
/* Try to get it via a hairy and expensive scheme. */
/* First we get the name of the executable: */
if (will_fail) goto out;
if (!found_exe_name) {
ret_code = readlink("/proc/self/exe", exe_name, EXE_SZ);
if (ret_code < 0 || ret_code >= EXE_SZ
|| exe_name[0] != '/') {
will_fail = TRUE; /* Don't try again. */
goto out;
}
exe_name[ret_code] = '\0';
found_exe_name = TRUE;
}
/* Then we use popen to start addr2line -e */
/* There are faster ways to do this, but hopefully this */
/* isn't time critical. */
(void)snprintf(cmd_buf, sizeof(cmd_buf),
"/usr/bin/addr2line -f -e %s 0x%lx",
exe_name, (unsigned long)info[i].ci_pc);
cmd_buf[sizeof(cmd_buf) - 1] = '\0';
old_preload = GETENV("LD_PRELOAD");
if (0 != old_preload) {
size_t old_len = strlen(old_preload);
if (old_len >= PRELOAD_SZ) {
will_fail = TRUE;
goto out;
}
BCOPY(old_preload, preload_buf, old_len + 1);
unsetenv ("LD_PRELOAD");
}
pipe = popen(cmd_buf, "r");
if (0 != old_preload
&& 0 != setenv ("LD_PRELOAD", preload_buf, 0)) {
WARN("Failed to reset LD_PRELOAD\n", 0);
}
if (pipe == NULL
|| (result_len = fread(result_buf, 1,
RESULT_SZ - 1, pipe)) == 0) {
if (pipe != NULL) pclose(pipe);
will_fail = TRUE;
goto out;
}
if (result_buf[result_len - 1] == '\n') --result_len;
result_buf[result_len] = 0;
if (result_buf[0] == '?'
|| (result_buf[result_len-2] == ':'
&& result_buf[result_len-1] == '0')) {
pclose(pipe);
goto out;
}
/* Get rid of embedded newline, if any. Test for "main" */
{
char * nl = strchr(result_buf, '\n');
if (nl != NULL
&& (word)nl < (word)(result_buf + result_len)) {
*nl = ':';
}
if (strncmp(result_buf, "main", nl - result_buf) == 0) {
stop = TRUE;
}
}
if (result_len < RESULT_SZ - 25) {
/* Add in hex address */
(void)snprintf(&result_buf[result_len],
sizeof(result_buf) - result_len,
" [0x%lx]", (unsigned long)info[i].ci_pc);
result_buf[sizeof(result_buf) - 1] = '\0';
}
name = result_buf;
pclose(pipe);
out:;
}
# endif /* LINUX */
GC_err_printf("\t\t%s\n", name);
# if defined(GC_HAVE_BUILTIN_BACKTRACE) \
&& !defined(GC_BACKTRACE_SYMBOLS_BROKEN)
if (sym_name != NULL)
free(sym_name); /* May call GC_[debug_]free; that's OK */
# endif
}
}
LOCK();
--reentry_count;
UNLOCK();
}
#endif /* NEED_CALLINFO */
#if defined(LINUX) && defined(__ELF__) && !defined(SMALL_CONFIG)
/* Dump /proc/self/maps to GC_stderr, to enable looking up names for */
/* addresses in FIND_LEAK output. */
void GC_print_address_map(void)
{
char *maps;
GC_err_printf("---------- Begin address map ----------\n");
maps = GC_get_maps();
GC_err_puts(maps != NULL ? maps : "Failed to get map!\n");
GC_err_printf("---------- End address map ----------\n");
}
#endif /* LINUX && ELF */
Gauche-0.9.6/gc/stubborn.c 0000664 0000764 0000764 00000003172 13074101475 014366 0 ustar shiro shiro /*
* Copyright 1988, 1989 Hans-J. Boehm, Alan J. Demers
* Copyright (c) 1991-1994 by Xerox Corporation. All rights reserved.
*
* THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY EXPRESSED
* OR IMPLIED. ANY USE IS AT YOUR OWN RISK.
*
* Permission is hereby granted to use or copy this program
* for any purpose, provided the above notices are retained on all copies.
* Permission to modify the code and to distribute modified code is granted,
* provided the above notices are retained, and a notice that the code was
* modified is included with the above copyright notice.
*/
#include "private/gc_priv.h"
#if defined(MANUAL_VDB)
/* Stubborn object (hard to change, nearly immutable) allocation. */
/* This interface is deprecated. We mostly emulate it using */
/* MANUAL_VDB. But that imposes the additional constraint that */
/* written, but not yet GC_dirty()ed objects must be referenced */
/* by a stack. */
void GC_dirty(ptr_t p);
GC_API GC_ATTR_MALLOC void * GC_CALL GC_malloc_stubborn(size_t lb)
{
return(GC_malloc(lb));
}
GC_API void GC_CALL GC_end_stubborn_change(const void *p)
{
GC_dirty((ptr_t)p);
}
GC_API void GC_CALL GC_change_stubborn(const void *p GC_ATTR_UNUSED)
{
}
#else /* !MANUAL_VDB */
GC_API GC_ATTR_MALLOC void * GC_CALL GC_malloc_stubborn(size_t lb)
{
return(GC_malloc(lb));
}
GC_API void GC_CALL GC_end_stubborn_change(const void *p GC_ATTR_UNUSED)
{
}
GC_API void GC_CALL GC_change_stubborn(const void *p GC_ATTR_UNUSED)
{
}
#endif /* !MANUAL_VDB */
Gauche-0.9.6/gc/compile 0000755 0000764 0000764 00000016245 13074102153 013737 0 ustar shiro shiro #! /bin/sh
# Wrapper for compilers which do not understand '-c -o'.
scriptversion=2012-10-14.11; # UTC
# Copyright (C) 1999-2013 Free Software Foundation, Inc.
# Written by Tom Tromey .
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2, or (at your option)
# any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see .
# As a special exception to the GNU General Public License, if you
# distribute this file as part of a program that contains a
# configuration script generated by Autoconf, you may include it under
# the same distribution terms that you use for the rest of that program.
# This file is maintained in Automake, please report
# bugs to or send patches to
# .
nl='
'
# We need space, tab and new line, in precisely that order. Quoting is
# there to prevent tools from complaining about whitespace usage.
IFS=" "" $nl"
file_conv=
# func_file_conv build_file lazy
# Convert a $build file to $host form and store it in $file
# Currently only supports Windows hosts. If the determined conversion
# type is listed in (the comma separated) LAZY, no conversion will
# take place.
func_file_conv ()
{
file=$1
case $file in
/ | /[!/]*) # absolute file, and not a UNC file
if test -z "$file_conv"; then
# lazily determine how to convert abs files
case `uname -s` in
MINGW*)
file_conv=mingw
;;
CYGWIN*)
file_conv=cygwin
;;
*)
file_conv=wine
;;
esac
fi
case $file_conv/,$2, in
*,$file_conv,*)
;;
mingw/*)
file=`cmd //C echo "$file " | sed -e 's/"\(.*\) " *$/\1/'`
;;
cygwin/*)
file=`cygpath -m "$file" || echo "$file"`
;;
wine/*)
file=`winepath -w "$file" || echo "$file"`
;;
esac
;;
esac
}
# func_cl_dashL linkdir
# Make cl look for libraries in LINKDIR
func_cl_dashL ()
{
func_file_conv "$1"
if test -z "$lib_path"; then
lib_path=$file
else
lib_path="$lib_path;$file"
fi
linker_opts="$linker_opts -LIBPATH:$file"
}
# func_cl_dashl library
# Do a library search-path lookup for cl
func_cl_dashl ()
{
lib=$1
found=no
save_IFS=$IFS
IFS=';'
for dir in $lib_path $LIB
do
IFS=$save_IFS
if $shared && test -f "$dir/$lib.dll.lib"; then
found=yes
lib=$dir/$lib.dll.lib
break
fi
if test -f "$dir/$lib.lib"; then
found=yes
lib=$dir/$lib.lib
break
fi
if test -f "$dir/lib$lib.a"; then
found=yes
lib=$dir/lib$lib.a
break
fi
done
IFS=$save_IFS
if test "$found" != yes; then
lib=$lib.lib
fi
}
# func_cl_wrapper cl arg...
# Adjust compile command to suit cl
func_cl_wrapper ()
{
# Assume a capable shell
lib_path=
shared=:
linker_opts=
for arg
do
if test -n "$eat"; then
eat=
else
case $1 in
-o)
# configure might choose to run compile as 'compile cc -o foo foo.c'.
eat=1
case $2 in
*.o | *.[oO][bB][jJ])
func_file_conv "$2"
set x "$@" -Fo"$file"
shift
;;
*)
func_file_conv "$2"
set x "$@" -Fe"$file"
shift
;;
esac
;;
-I)
eat=1
func_file_conv "$2" mingw
set x "$@" -I"$file"
shift
;;
-I*)
func_file_conv "${1#-I}" mingw
set x "$@" -I"$file"
shift
;;
-l)
eat=1
func_cl_dashl "$2"
set x "$@" "$lib"
shift
;;
-l*)
func_cl_dashl "${1#-l}"
set x "$@" "$lib"
shift
;;
-L)
eat=1
func_cl_dashL "$2"
;;
-L*)
func_cl_dashL "${1#-L}"
;;
-static)
shared=false
;;
-Wl,*)
arg=${1#-Wl,}
save_ifs="$IFS"; IFS=','
for flag in $arg; do
IFS="$save_ifs"
linker_opts="$linker_opts $flag"
done
IFS="$save_ifs"
;;
-Xlinker)
eat=1
linker_opts="$linker_opts $2"
;;
-*)
set x "$@" "$1"
shift
;;
*.cc | *.CC | *.cxx | *.CXX | *.[cC]++)
func_file_conv "$1"
set x "$@" -Tp"$file"
shift
;;
*.c | *.cpp | *.CPP | *.lib | *.LIB | *.Lib | *.OBJ | *.obj | *.[oO])
func_file_conv "$1" mingw
set x "$@" "$file"
shift
;;
*)
set x "$@" "$1"
shift
;;
esac
fi
shift
done
if test -n "$linker_opts"; then
linker_opts="-link$linker_opts"
fi
exec "$@" $linker_opts
exit 1
}
eat=
case $1 in
'')
echo "$0: No command. Try '$0 --help' for more information." 1>&2
exit 1;
;;
-h | --h*)
cat <<\EOF
Usage: compile [--help] [--version] PROGRAM [ARGS]
Wrapper for compilers which do not understand '-c -o'.
Remove '-o dest.o' from ARGS, run PROGRAM with the remaining
arguments, and rename the output as expected.
If you are trying to build a whole package this is not the
right script to run: please start by reading the file 'INSTALL'.
Report bugs to .
EOF
exit $?
;;
-v | --v*)
echo "compile $scriptversion"
exit $?
;;
cl | *[/\\]cl | cl.exe | *[/\\]cl.exe )
func_cl_wrapper "$@" # Doesn't return...
;;
esac
ofile=
cfile=
for arg
do
if test -n "$eat"; then
eat=
else
case $1 in
-o)
# configure might choose to run compile as 'compile cc -o foo foo.c'.
# So we strip '-o arg' only if arg is an object.
eat=1
case $2 in
*.o | *.obj)
ofile=$2
;;
*)
set x "$@" -o "$2"
shift
;;
esac
;;
*.c)
cfile=$1
set x "$@" "$1"
shift
;;
*)
set x "$@" "$1"
shift
;;
esac
fi
shift
done
if test -z "$ofile" || test -z "$cfile"; then
# If no '-o' option was seen then we might have been invoked from a
# pattern rule where we don't need one. That is ok -- this is a
# normal compilation that the losing compiler can handle. If no
# '.c' file was seen then we are probably linking. That is also
# ok.
exec "$@"
fi
# Name of file we expect compiler to create.
cofile=`echo "$cfile" | sed 's|^.*[\\/]||; s|^[a-zA-Z]:||; s/\.c$/.o/'`
# Create the lock directory.
# Note: use '[/\\:.-]' here to ensure that we don't use the same name
# that we are using for the .o file. Also, base the name on the expected
# object file name, since that is what matters with a parallel build.
lockdir=`echo "$cofile" | sed -e 's|[/\\:.-]|_|g'`.d
while true; do
if mkdir "$lockdir" >/dev/null 2>&1; then
break
fi
sleep 1
done
# FIXME: race condition here if user kills between mkdir and trap.
trap "rmdir '$lockdir'; exit 1" 1 2 15
# Run the compile.
"$@"
ret=$?
if test -f "$cofile"; then
test "$cofile" = "$ofile" || mv "$cofile" "$ofile"
elif test -f "${cofile}bj"; then
test "${cofile}bj" = "$ofile" || mv "${cofile}bj" "$ofile"
fi
rmdir "$lockdir"
exit $ret
# Local Variables:
# mode: shell-script
# sh-indentation: 2
# eval: (add-hook 'write-file-hooks 'time-stamp)
# time-stamp-start: "scriptversion="
# time-stamp-format: "%:y-%02m-%02d.%02H"
# time-stamp-time-zone: "UTC"
# time-stamp-end: "; # UTC"
# End:
Gauche-0.9.6/gc/mark_rts.c 0000664 0000764 0000764 00000073676 13302340445 014366 0 ustar shiro shiro /*
* Copyright 1988, 1989 Hans-J. Boehm, Alan J. Demers
* Copyright (c) 1991-1994 by Xerox Corporation. All rights reserved.
*
* THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY EXPRESSED
* OR IMPLIED. ANY USE IS AT YOUR OWN RISK.
*
* Permission is hereby granted to use or copy this program
* for any purpose, provided the above notices are retained on all copies.
* Permission to modify the code and to distribute modified code is granted,
* provided the above notices are retained, and a notice that the code was
* modified is included with the above copyright notice.
*/
#include "private/gc_priv.h"
#include
/* Data structure for list of root sets. */
/* We keep a hash table, so that we can filter out duplicate additions. */
/* Under Win32, we need to do a better job of filtering overlaps, so */
/* we resort to sequential search, and pay the price. */
/* This is really declared in gc_priv.h:
struct roots {
ptr_t r_start;
ptr_t r_end;
# if !defined(MSWIN32) && !defined(MSWINCE) && !defined(CYGWIN32)
struct roots * r_next;
# endif
GC_bool r_tmp;
-- Delete before registering new dynamic libraries
};
struct roots GC_static_roots[MAX_ROOT_SETS];
*/
int GC_no_dls = 0; /* Register dynamic library data segments. */
static int n_root_sets = 0;
/* GC_static_roots[0..n_root_sets) contains the valid root sets. */
#if !defined(NO_DEBUGGING) || defined(GC_ASSERTIONS)
/* Should return the same value as GC_root_size. */
GC_INNER word GC_compute_root_size(void)
{
int i;
word size = 0;
for (i = 0; i < n_root_sets; i++) {
size += GC_static_roots[i].r_end - GC_static_roots[i].r_start;
}
return size;
}
#endif /* !NO_DEBUGGING || GC_ASSERTIONS */
#if !defined(NO_DEBUGGING)
/* For debugging: */
void GC_print_static_roots(void)
{
int i;
word size;
for (i = 0; i < n_root_sets; i++) {
GC_printf("From %p to %p%s\n",
(void *)GC_static_roots[i].r_start,
(void *)GC_static_roots[i].r_end,
GC_static_roots[i].r_tmp ? " (temporary)" : "");
}
GC_printf("GC_root_size: %lu\n", (unsigned long)GC_root_size);
if ((size = GC_compute_root_size()) != GC_root_size)
GC_err_printf("GC_root_size incorrect!! Should be: %lu\n",
(unsigned long)size);
}
#endif /* !NO_DEBUGGING */
#ifndef THREADS
/* Primarily for debugging support: */
/* Is the address p in one of the registered static root sections? */
GC_INNER GC_bool GC_is_static_root(ptr_t p)
{
static int last_root_set = MAX_ROOT_SETS;
int i;
if (last_root_set < n_root_sets
&& (word)p >= (word)GC_static_roots[last_root_set].r_start
&& (word)p < (word)GC_static_roots[last_root_set].r_end)
return(TRUE);
for (i = 0; i < n_root_sets; i++) {
if ((word)p >= (word)GC_static_roots[i].r_start
&& (word)p < (word)GC_static_roots[i].r_end) {
last_root_set = i;
return(TRUE);
}
}
return(FALSE);
}
#endif /* !THREADS */
#if !defined(MSWIN32) && !defined(MSWINCE) && !defined(CYGWIN32)
/*
# define LOG_RT_SIZE 6
# define RT_SIZE (1 << LOG_RT_SIZE) -- Power of 2, may be != MAX_ROOT_SETS
struct roots * GC_root_index[RT_SIZE];
-- Hash table header. Used only to check whether a range is
-- already present.
-- really defined in gc_priv.h
*/
GC_INLINE int rt_hash(ptr_t addr)
{
word result = (word) addr;
# if CPP_WORDSZ > 8*LOG_RT_SIZE
result ^= result >> 8*LOG_RT_SIZE;
# endif
# if CPP_WORDSZ > 4*LOG_RT_SIZE
result ^= result >> 4*LOG_RT_SIZE;
# endif
result ^= result >> 2*LOG_RT_SIZE;
result ^= result >> LOG_RT_SIZE;
result &= (RT_SIZE-1);
return(result);
}
/* Is a range starting at b already in the table? If so return a */
/* pointer to it, else NULL. */
GC_INNER void * GC_roots_present(ptr_t b)
{
int h = rt_hash(b);
struct roots *p = GC_root_index[h];
while (p != 0) {
if (p -> r_start == (ptr_t)b) return(p);
p = p -> r_next;
}
return NULL;
}
/* Add the given root structure to the index. */
GC_INLINE void add_roots_to_index(struct roots *p)
{
int h = rt_hash(p -> r_start);
p -> r_next = GC_root_index[h];
GC_root_index[h] = p;
}
#endif /* !MSWIN32 && !MSWINCE && !CYGWIN32 */
GC_INNER word GC_root_size = 0;
GC_API void GC_CALL GC_add_roots(void *b, void *e)
{
DCL_LOCK_STATE;
if (!EXPECT(GC_is_initialized, TRUE)) GC_init();
LOCK();
GC_add_roots_inner((ptr_t)b, (ptr_t)e, FALSE);
UNLOCK();
}
/* Add [b,e) to the root set. Adding the same interval a second time */
/* is a moderately fast no-op, and hence benign. We do not handle */
/* different but overlapping intervals efficiently. (We do handle */
/* them correctly.) */
/* Tmp specifies that the interval may be deleted before */
/* re-registering dynamic libraries. */
void GC_add_roots_inner(ptr_t b, ptr_t e, GC_bool tmp)
{
GC_ASSERT((word)b <= (word)e);
b = (ptr_t)(((word)b + (sizeof(word) - 1)) & ~(word)(sizeof(word) - 1));
/* round b up to word boundary */
e = (ptr_t)((word)e & ~(word)(sizeof(word) - 1));
/* round e down to word boundary */
if ((word)b >= (word)e) return; /* nothing to do */
# if defined(MSWIN32) || defined(MSWINCE) || defined(CYGWIN32)
/* Spend the time to ensure that there are no overlapping */
/* or adjacent intervals. */
/* This could be done faster with e.g. a */
/* balanced tree. But the execution time here is */
/* virtually guaranteed to be dominated by the time it */
/* takes to scan the roots. */
{
register int i;
struct roots * old = NULL; /* initialized to prevent warning. */
for (i = 0; i < n_root_sets; i++) {
old = GC_static_roots + i;
if ((word)b <= (word)old->r_end
&& (word)e >= (word)old->r_start) {
if ((word)b < (word)old->r_start) {
GC_root_size += old->r_start - b;
old -> r_start = b;
}
if ((word)e > (word)old->r_end) {
GC_root_size += e - old->r_end;
old -> r_end = e;
}
old -> r_tmp &= tmp;
break;
}
}
if (i < n_root_sets) {
/* merge other overlapping intervals */
struct roots *other;
for (i++; i < n_root_sets; i++) {
other = GC_static_roots + i;
b = other -> r_start;
e = other -> r_end;
if ((word)b <= (word)old->r_end
&& (word)e >= (word)old->r_start) {
if ((word)b < (word)old->r_start) {
GC_root_size += old->r_start - b;
old -> r_start = b;
}
if ((word)e > (word)old->r_end) {
GC_root_size += e - old->r_end;
old -> r_end = e;
}
old -> r_tmp &= other -> r_tmp;
/* Delete this entry. */
GC_root_size -= (other -> r_end - other -> r_start);
other -> r_start = GC_static_roots[n_root_sets-1].r_start;
other -> r_end = GC_static_roots[n_root_sets-1].r_end;
n_root_sets--;
}
}
return;
}
}
# else
{
struct roots * old = (struct roots *)GC_roots_present(b);
if (old != 0) {
if ((word)e <= (word)old->r_end)
return; /* already there */
/* else extend */
GC_root_size += e - old -> r_end;
old -> r_end = e;
return;
}
}
# endif
if (n_root_sets == MAX_ROOT_SETS) {
ABORT("Too many root sets");
}
# ifdef DEBUG_ADD_DEL_ROOTS
GC_log_printf("Adding data root section %d: %p .. %p%s\n",
n_root_sets, (void *)b, (void *)e,
tmp ? " (temporary)" : "");
# endif
GC_static_roots[n_root_sets].r_start = (ptr_t)b;
GC_static_roots[n_root_sets].r_end = (ptr_t)e;
GC_static_roots[n_root_sets].r_tmp = tmp;
# if !defined(MSWIN32) && !defined(MSWINCE) && !defined(CYGWIN32)
GC_static_roots[n_root_sets].r_next = 0;
add_roots_to_index(GC_static_roots + n_root_sets);
# endif
GC_root_size += e - b;
n_root_sets++;
}
static GC_bool roots_were_cleared = FALSE;
GC_API void GC_CALL GC_clear_roots(void)
{
DCL_LOCK_STATE;
if (!EXPECT(GC_is_initialized, TRUE)) GC_init();
LOCK();
roots_were_cleared = TRUE;
n_root_sets = 0;
GC_root_size = 0;
# if !defined(MSWIN32) && !defined(MSWINCE) && !defined(CYGWIN32)
BZERO(GC_root_index, RT_SIZE * sizeof(void *));
# endif
# ifdef DEBUG_ADD_DEL_ROOTS
GC_log_printf("Clear all data root sections\n");
# endif
UNLOCK();
}
/* Internal use only; lock held. */
STATIC void GC_remove_root_at_pos(int i)
{
# ifdef DEBUG_ADD_DEL_ROOTS
GC_log_printf("Remove data root section at %d: %p .. %p%s\n",
i, (void *)GC_static_roots[i].r_start,
(void *)GC_static_roots[i].r_end,
GC_static_roots[i].r_tmp ? " (temporary)" : "");
# endif
GC_root_size -= (GC_static_roots[i].r_end - GC_static_roots[i].r_start);
GC_static_roots[i].r_start = GC_static_roots[n_root_sets-1].r_start;
GC_static_roots[i].r_end = GC_static_roots[n_root_sets-1].r_end;
GC_static_roots[i].r_tmp = GC_static_roots[n_root_sets-1].r_tmp;
n_root_sets--;
}
#if !defined(MSWIN32) && !defined(MSWINCE) && !defined(CYGWIN32)
STATIC void GC_rebuild_root_index(void)
{
int i;
BZERO(GC_root_index, RT_SIZE * sizeof(void *));
for (i = 0; i < n_root_sets; i++)
add_roots_to_index(GC_static_roots + i);
}
#endif
#if defined(DYNAMIC_LOADING) || defined(MSWIN32) || defined(MSWINCE) \
|| defined(PCR) || defined(CYGWIN32)
/* Internal use only; lock held. */
STATIC void GC_remove_tmp_roots(void)
{
int i;
for (i = 0; i < n_root_sets; ) {
if (GC_static_roots[i].r_tmp) {
GC_remove_root_at_pos(i);
} else {
i++;
}
}
# if !defined(MSWIN32) && !defined(MSWINCE) && !defined(CYGWIN32)
GC_rebuild_root_index();
# endif
}
#endif
#if !defined(MSWIN32) && !defined(MSWINCE) && !defined(CYGWIN32)
STATIC void GC_remove_roots_inner(ptr_t b, ptr_t e);
GC_API void GC_CALL GC_remove_roots(void *b, void *e)
{
DCL_LOCK_STATE;
/* Quick check whether has nothing to do */
if ((((word)b + (sizeof(word) - 1)) & ~(word)(sizeof(word) - 1)) >=
((word)e & ~(word)(sizeof(word) - 1)))
return;
LOCK();
GC_remove_roots_inner((ptr_t)b, (ptr_t)e);
UNLOCK();
}
/* Should only be called when the lock is held */
STATIC void GC_remove_roots_inner(ptr_t b, ptr_t e)
{
int i;
for (i = 0; i < n_root_sets; ) {
if ((word)GC_static_roots[i].r_start >= (word)b
&& (word)GC_static_roots[i].r_end <= (word)e) {
GC_remove_root_at_pos(i);
} else {
i++;
}
}
GC_rebuild_root_index();
}
#endif /* !defined(MSWIN32) && !defined(MSWINCE) && !defined(CYGWIN32) */
#if !defined(NO_DEBUGGING)
/* For the debugging purpose only. */
/* Workaround for the OS mapping and unmapping behind our back: */
/* Is the address p in one of the temporary static root sections? */
GC_API int GC_CALL GC_is_tmp_root(void *p)
{
static int last_root_set = MAX_ROOT_SETS;
register int i;
if (last_root_set < n_root_sets
&& (word)p >= (word)GC_static_roots[last_root_set].r_start
&& (word)p < (word)GC_static_roots[last_root_set].r_end)
return GC_static_roots[last_root_set].r_tmp;
for (i = 0; i < n_root_sets; i++) {
if ((word)p >= (word)GC_static_roots[i].r_start
&& (word)p < (word)GC_static_roots[i].r_end) {
last_root_set = i;
return GC_static_roots[i].r_tmp;
}
}
return(FALSE);
}
#endif /* !NO_DEBUGGING */
GC_INNER ptr_t GC_approx_sp(void)
{
volatile word sp;
# if defined(CPPCHECK) || (__GNUC__ >= 4)
sp = (word)__builtin_frame_address(0);
# else
sp = (word)&sp;
# endif
/* Also force stack to grow if necessary. Otherwise the */
/* later accesses might cause the kernel to think we're */
/* doing something wrong. */
return((ptr_t)sp);
}
/*
* Data structure for excluded static roots.
* Real declaration is in gc_priv.h.
struct exclusion {
ptr_t e_start;
ptr_t e_end;
};
struct exclusion GC_excl_table[MAX_EXCLUSIONS];
-- Array of exclusions, ascending
-- address order.
*/
STATIC size_t GC_excl_table_entries = 0;/* Number of entries in use. */
/* Return the first exclusion range that includes an address >= start_addr */
/* Assumes the exclusion table contains at least one entry (namely the */
/* GC data structures). */
STATIC struct exclusion * GC_next_exclusion(ptr_t start_addr)
{
size_t low = 0;
size_t high = GC_excl_table_entries - 1;
while (high > low) {
size_t mid = (low + high) >> 1;
/* low <= mid < high */
if ((word) GC_excl_table[mid].e_end <= (word) start_addr) {
low = mid + 1;
} else {
high = mid;
}
}
if ((word) GC_excl_table[low].e_end <= (word) start_addr) return 0;
return GC_excl_table + low;
}
/* Should only be called when the lock is held. The range boundaries */
/* should be properly aligned and valid. */
GC_INNER void GC_exclude_static_roots_inner(void *start, void *finish)
{
struct exclusion * next;
size_t next_index;
GC_ASSERT((word)start % sizeof(word) == 0);
GC_ASSERT((word)start < (word)finish);
if (0 == GC_excl_table_entries) {
next = 0;
} else {
next = GC_next_exclusion(start);
}
if (0 != next) {
size_t i;
if ((word)(next -> e_start) < (word) finish) {
/* incomplete error check. */
ABORT("Exclusion ranges overlap");
}
if ((word)(next -> e_start) == (word) finish) {
/* extend old range backwards */
next -> e_start = (ptr_t)start;
return;
}
next_index = next - GC_excl_table;
for (i = GC_excl_table_entries; i > next_index; --i) {
GC_excl_table[i] = GC_excl_table[i-1];
}
} else {
next_index = GC_excl_table_entries;
}
if (GC_excl_table_entries == MAX_EXCLUSIONS) ABORT("Too many exclusions");
GC_excl_table[next_index].e_start = (ptr_t)start;
GC_excl_table[next_index].e_end = (ptr_t)finish;
++GC_excl_table_entries;
}
GC_API void GC_CALL GC_exclude_static_roots(void *b, void *e)
{
DCL_LOCK_STATE;
if (b == e) return; /* nothing to exclude? */
/* Round boundaries (in direction reverse to that of GC_add_roots). */
b = (void *)((word)b & ~(word)(sizeof(word) - 1));
e = (void *)(((word)e + (sizeof(word) - 1)) & ~(word)(sizeof(word) - 1));
if (NULL == e)
e = (void *)(~(word)(sizeof(word) - 1)); /* handle overflow */
LOCK();
GC_exclude_static_roots_inner(b, e);
UNLOCK();
}
#if defined(WRAP_MARK_SOME) && defined(PARALLEL_MARK)
/* GC_mark_local does not handle memory protection faults yet. So, */
/* the static data regions are scanned immediately by GC_push_roots. */
GC_INNER void GC_push_conditional_eager(ptr_t bottom, ptr_t top,
GC_bool all);
# define GC_PUSH_CONDITIONAL(b, t, all) \
(GC_parallel \
? GC_push_conditional_eager(b, t, all) \
: GC_push_conditional((ptr_t)(b), (ptr_t)(t), all))
#elif defined(GC_DISABLE_INCREMENTAL)
# define GC_PUSH_CONDITIONAL(b, t, all) GC_push_all((ptr_t)(b), (ptr_t)(t))
#else
# define GC_PUSH_CONDITIONAL(b, t, all) \
GC_push_conditional((ptr_t)(b), (ptr_t)(t), all)
/* Do either of GC_push_all or GC_push_selected */
/* depending on the third arg. */
#endif
/* Invoke push_conditional on ranges that are not excluded. */
STATIC void GC_push_conditional_with_exclusions(ptr_t bottom, ptr_t top,
GC_bool all GC_ATTR_UNUSED)
{
while ((word)bottom < (word)top) {
struct exclusion *next = GC_next_exclusion(bottom);
ptr_t excl_start;
if (0 == next
|| (word)(excl_start = next -> e_start) >= (word)top) {
GC_PUSH_CONDITIONAL(bottom, top, all);
break;
}
if ((word)excl_start > (word)bottom)
GC_PUSH_CONDITIONAL(bottom, excl_start, all);
bottom = next -> e_end;
}
}
#ifdef IA64
/* Similar to GC_push_all_stack_sections() but for IA-64 registers store. */
GC_INNER void GC_push_all_register_sections(ptr_t bs_lo, ptr_t bs_hi,
int eager, struct GC_traced_stack_sect_s *traced_stack_sect)
{
while (traced_stack_sect != NULL) {
ptr_t frame_bs_lo = traced_stack_sect -> backing_store_end;
GC_ASSERT((word)frame_bs_lo <= (word)bs_hi);
if (eager) {
GC_push_all_eager(frame_bs_lo, bs_hi);
} else {
GC_push_all_stack(frame_bs_lo, bs_hi);
}
bs_hi = traced_stack_sect -> saved_backing_store_ptr;
traced_stack_sect = traced_stack_sect -> prev;
}
GC_ASSERT((word)bs_lo <= (word)bs_hi);
if (eager) {
GC_push_all_eager(bs_lo, bs_hi);
} else {
GC_push_all_stack(bs_lo, bs_hi);
}
}
#endif /* IA64 */
#ifdef THREADS
GC_INNER void GC_push_all_stack_sections(ptr_t lo, ptr_t hi,
struct GC_traced_stack_sect_s *traced_stack_sect)
{
while (traced_stack_sect != NULL) {
GC_ASSERT((word)lo HOTTER_THAN (word)traced_stack_sect);
# ifdef STACK_GROWS_UP
GC_push_all_stack((ptr_t)traced_stack_sect, lo);
# else /* STACK_GROWS_DOWN */
GC_push_all_stack(lo, (ptr_t)traced_stack_sect);
# endif
lo = traced_stack_sect -> saved_stack_ptr;
GC_ASSERT(lo != NULL);
traced_stack_sect = traced_stack_sect -> prev;
}
GC_ASSERT(!((word)hi HOTTER_THAN (word)lo));
# ifdef STACK_GROWS_UP
/* We got them backwards! */
GC_push_all_stack(hi, lo);
# else /* STACK_GROWS_DOWN */
GC_push_all_stack(lo, hi);
# endif
}
#else /* !THREADS */
# ifdef TRACE_BUF
/* Defined in mark.c. */
void GC_add_trace_entry(char *kind, word arg1, word arg2);
# endif
/* Similar to GC_push_all_eager, but only the */
/* part hotter than cold_gc_frame is scanned */
/* immediately. Needed to ensure that callee- */
/* save registers are not missed. */
/*
* A version of GC_push_all that treats all interior pointers as valid
* and scans part of the area immediately, to make sure that saved
* register values are not lost.
* Cold_gc_frame delimits the stack section that must be scanned
* eagerly. A zero value indicates that no eager scanning is needed.
* We don't need to worry about the MANUAL_VDB case here, since this
* is only called in the single-threaded case. We assume that we
* cannot collect between an assignment and the corresponding
* GC_dirty() call.
*/
STATIC void GC_push_all_stack_partially_eager(ptr_t bottom, ptr_t top,
ptr_t cold_gc_frame)
{
#ifndef NEED_FIXUP_POINTER
if (GC_all_interior_pointers) {
/* Push the hot end of the stack eagerly, so that register values */
/* saved inside GC frames are marked before they disappear. */
/* The rest of the marking can be deferred until later. */
if (0 == cold_gc_frame) {
GC_push_all_stack(bottom, top);
return;
}
GC_ASSERT((word)bottom <= (word)cold_gc_frame
&& (word)cold_gc_frame <= (word)top);
# ifdef STACK_GROWS_DOWN
GC_push_all(cold_gc_frame - sizeof(ptr_t), top);
GC_push_all_eager(bottom, cold_gc_frame);
# else /* STACK_GROWS_UP */
GC_push_all(bottom, cold_gc_frame + sizeof(ptr_t));
GC_push_all_eager(cold_gc_frame, top);
# endif /* STACK_GROWS_UP */
} else
#endif
/* else */ {
GC_push_all_eager(bottom, top);
}
# ifdef TRACE_BUF
GC_add_trace_entry("GC_push_all_stack", (word)bottom, (word)top);
# endif
}
/* Similar to GC_push_all_stack_sections() but also uses cold_gc_frame. */
STATIC void GC_push_all_stack_part_eager_sections(ptr_t lo, ptr_t hi,
ptr_t cold_gc_frame, struct GC_traced_stack_sect_s *traced_stack_sect)
{
GC_ASSERT(traced_stack_sect == NULL || cold_gc_frame == NULL ||
(word)cold_gc_frame HOTTER_THAN (word)traced_stack_sect);
while (traced_stack_sect != NULL) {
GC_ASSERT((word)lo HOTTER_THAN (word)traced_stack_sect);
# ifdef STACK_GROWS_UP
GC_push_all_stack_partially_eager((ptr_t)traced_stack_sect, lo,
cold_gc_frame);
# else /* STACK_GROWS_DOWN */
GC_push_all_stack_partially_eager(lo, (ptr_t)traced_stack_sect,
cold_gc_frame);
# endif
lo = traced_stack_sect -> saved_stack_ptr;
GC_ASSERT(lo != NULL);
traced_stack_sect = traced_stack_sect -> prev;
cold_gc_frame = NULL; /* Use at most once. */
}
GC_ASSERT(!((word)hi HOTTER_THAN (word)lo));
# ifdef STACK_GROWS_UP
/* We got them backwards! */
GC_push_all_stack_partially_eager(hi, lo, cold_gc_frame);
# else /* STACK_GROWS_DOWN */
GC_push_all_stack_partially_eager(lo, hi, cold_gc_frame);
# endif
}
#endif /* !THREADS */
/* Push enough of the current stack eagerly to */
/* ensure that callee-save registers saved in */
/* GC frames are scanned. */
/* In the non-threads case, schedule entire */
/* stack for scanning. */
/* The second argument is a pointer to the */
/* (possibly null) thread context, for */
/* (currently hypothetical) more precise */
/* stack scanning. */
/*
* In the absence of threads, push the stack contents.
* In the presence of threads, push enough of the current stack
* to ensure that callee-save registers saved in collector frames have been
* seen.
* FIXME: Merge with per-thread stuff.
*/
STATIC void GC_push_current_stack(ptr_t cold_gc_frame,
void * context GC_ATTR_UNUSED)
{
# if defined(THREADS)
if (0 == cold_gc_frame) return;
# ifdef STACK_GROWS_DOWN
GC_push_all_eager(GC_approx_sp(), cold_gc_frame);
/* For IA64, the register stack backing store is handled */
/* in the thread-specific code. */
# else
GC_push_all_eager(cold_gc_frame, GC_approx_sp());
# endif
# else
GC_push_all_stack_part_eager_sections(GC_approx_sp(), GC_stackbottom,
cold_gc_frame, GC_traced_stack_sect);
# ifdef IA64
/* We also need to push the register stack backing store. */
/* This should really be done in the same way as the */
/* regular stack. For now we fudge it a bit. */
/* Note that the backing store grows up, so we can't use */
/* GC_push_all_stack_partially_eager. */
{
ptr_t bsp = GC_save_regs_ret_val;
ptr_t cold_gc_bs_pointer = bsp - 2048;
if (GC_all_interior_pointers
&& (word)cold_gc_bs_pointer > (word)BACKING_STORE_BASE) {
/* Adjust cold_gc_bs_pointer if below our innermost */
/* "traced stack section" in backing store. */
if (GC_traced_stack_sect != NULL
&& (word)cold_gc_bs_pointer
< (word)GC_traced_stack_sect->backing_store_end)
cold_gc_bs_pointer =
GC_traced_stack_sect->backing_store_end;
GC_push_all_register_sections(BACKING_STORE_BASE,
cold_gc_bs_pointer, FALSE, GC_traced_stack_sect);
GC_push_all_eager(cold_gc_bs_pointer, bsp);
} else {
GC_push_all_register_sections(BACKING_STORE_BASE, bsp,
TRUE /* eager */, GC_traced_stack_sect);
}
/* All values should be sufficiently aligned that we */
/* don't have to worry about the boundary. */
}
# endif
# endif /* !THREADS */
}
GC_INNER void (*GC_push_typed_structures)(void) = 0;
/* Push GC internal roots. These are normally */
/* included in the static data segment, and */
/* Thus implicitly pushed. But we must do this */
/* explicitly if normal root processing is */
/* disabled. */
/*
* Push GC internal roots. Only called if there is some reason to believe
* these would not otherwise get registered.
*/
STATIC void GC_push_gc_structures(void)
{
# ifndef GC_NO_FINALIZATION
GC_push_finalizer_structures();
# endif
# if defined(THREADS)
GC_push_thread_structures();
# endif
if( GC_push_typed_structures )
GC_push_typed_structures();
}
GC_INNER void GC_cond_register_dynamic_libraries(void)
{
# if defined(DYNAMIC_LOADING) || defined(MSWIN32) || defined(MSWINCE) \
|| defined(CYGWIN32) || defined(PCR)
GC_remove_tmp_roots();
if (!GC_no_dls) GC_register_dynamic_libraries();
# else
GC_no_dls = TRUE;
# endif
}
STATIC void GC_push_regs_and_stack(ptr_t cold_gc_frame)
{
GC_with_callee_saves_pushed(GC_push_current_stack, cold_gc_frame);
}
/*
* Call the mark routines (GC_push_one for a single pointer,
* GC_push_conditional on groups of pointers) on every top level
* accessible pointer.
* If all is FALSE, arrange to push only possibly altered values.
* Cold_gc_frame is an address inside a GC frame that
* remains valid until all marking is complete.
* A zero value indicates that it's OK to miss some
* register values.
*/
GC_INNER void GC_push_roots(GC_bool all, ptr_t cold_gc_frame GC_ATTR_UNUSED)
{
int i;
unsigned kind;
/*
* Next push static data. This must happen early on, since it's
* not robust against mark stack overflow.
*/
/* Re-register dynamic libraries, in case one got added. */
/* There is some argument for doing this as late as possible, */
/* especially on win32, where it can change asynchronously. */
/* In those cases, we do it here. But on other platforms, it's */
/* not safe with the world stopped, so we do it earlier. */
# if !defined(REGISTER_LIBRARIES_EARLY)
GC_cond_register_dynamic_libraries();
# endif
/* Mark everything in static data areas */
for (i = 0; i < n_root_sets; i++) {
GC_push_conditional_with_exclusions(
GC_static_roots[i].r_start,
GC_static_roots[i].r_end, all);
}
/* Mark all free list header blocks, if those were allocated from */
/* the garbage collected heap. This makes sure they don't */
/* disappear if we are not marking from static data. It also */
/* saves us the trouble of scanning them, and possibly that of */
/* marking the freelists. */
for (kind = 0; kind < GC_n_kinds; kind++) {
void *base = GC_base(GC_obj_kinds[kind].ok_freelist);
if (0 != base) {
GC_set_mark_bit(base);
}
}
/* Mark from GC internal roots if those might otherwise have */
/* been excluded. */
if (GC_no_dls || roots_were_cleared) {
GC_push_gc_structures();
}
/* Mark thread local free lists, even if their mark */
/* descriptor excludes the link field. */
/* If the world is not stopped, this is unsafe. It is */
/* also unnecessary, since we will do this again with the */
/* world stopped. */
# if defined(THREAD_LOCAL_ALLOC)
if (GC_world_stopped) GC_mark_thread_local_free_lists();
# endif
/*
* Now traverse stacks, and mark from register contents.
* These must be done last, since they can legitimately overflow
* the mark stack.
* This is usually done by saving the current context on the
* stack, and then just tracing from the stack.
*/
# ifndef STACK_NOT_SCANNED
GC_push_regs_and_stack(cold_gc_frame);
# endif
if (GC_push_other_roots != 0) (*GC_push_other_roots)();
/* In the threads case, this also pushes thread stacks. */
/* Note that without interior pointer recognition lots */
/* of stuff may have been pushed already, and this */
/* should be careful about mark stack overflows. */
}
Gauche-0.9.6/gc/pthread_stop_world.c 0000664 0000764 0000764 00000115363 13302340445 016435 0 ustar shiro shiro /*
* Copyright (c) 1994 by Xerox Corporation. All rights reserved.
* Copyright (c) 1996 by Silicon Graphics. All rights reserved.
* Copyright (c) 1998 by Fergus Henderson. All rights reserved.
* Copyright (c) 2000-2009 by Hewlett-Packard Development Company.
* All rights reserved.
*
* THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY EXPRESSED
* OR IMPLIED. ANY USE IS AT YOUR OWN RISK.
*
* Permission is hereby granted to use or copy this program
* for any purpose, provided the above notices are retained on all copies.
* Permission to modify the code and to distribute modified code is granted,
* provided the above notices are retained, and a notice that the code was
* modified is included with the above copyright notice.
*/
#include "private/pthread_support.h"
#if defined(GC_PTHREADS) && !defined(GC_WIN32_THREADS) && \
!defined(GC_DARWIN_THREADS)
#ifdef NACL
# include
# include
STATIC int GC_nacl_num_gc_threads = 0;
STATIC __thread int GC_nacl_thread_idx = -1;
STATIC volatile int GC_nacl_park_threads_now = 0;
STATIC volatile pthread_t GC_nacl_thread_parker = -1;
GC_INNER __thread GC_thread GC_nacl_gc_thread_self = NULL;
volatile int GC_nacl_thread_parked[MAX_NACL_GC_THREADS];
int GC_nacl_thread_used[MAX_NACL_GC_THREADS];
#elif defined(GC_OPENBSD_UTHREADS)
# include
#else /* !GC_OPENBSD_UTHREADS && !NACL */
#include
#include
#include
#include /* for nanosleep() */
#include
#include "atomic_ops.h"
/* It's safe to call original pthread_sigmask() here. */
#undef pthread_sigmask
#ifdef GC_ENABLE_SUSPEND_THREAD
static void *GC_CALLBACK suspend_self_inner(void *client_data);
#endif
#ifdef DEBUG_THREADS
# ifndef NSIG
# if defined(MAXSIG)
# define NSIG (MAXSIG+1)
# elif defined(_NSIG)
# define NSIG _NSIG
# elif defined(__SIGRTMAX)
# define NSIG (__SIGRTMAX+1)
# else
# error define NSIG
# endif
# endif /* NSIG */
void GC_print_sig_mask(void)
{
sigset_t blocked;
int i;
if (pthread_sigmask(SIG_BLOCK, NULL, &blocked) != 0)
ABORT("pthread_sigmask failed");
for (i = 1; i < NSIG; i++) {
if (sigismember(&blocked, i))
GC_printf("Signal blocked: %d\n", i);
}
}
#endif /* DEBUG_THREADS */
/* Remove the signals that we want to allow in thread stopping */
/* handler from a set. */
STATIC void GC_remove_allowed_signals(sigset_t *set)
{
if (sigdelset(set, SIGINT) != 0
|| sigdelset(set, SIGQUIT) != 0
|| sigdelset(set, SIGABRT) != 0
|| sigdelset(set, SIGTERM) != 0) {
ABORT("sigdelset failed");
}
# ifdef MPROTECT_VDB
/* Handlers write to the thread structure, which is in the heap, */
/* and hence can trigger a protection fault. */
if (sigdelset(set, SIGSEGV) != 0
# ifdef HAVE_SIGBUS
|| sigdelset(set, SIGBUS) != 0
# endif
) {
ABORT("sigdelset failed");
}
# endif
}
static sigset_t suspend_handler_mask;
STATIC volatile AO_t GC_stop_count = 0;
/* Incremented at the beginning of GC_stop_world. */
STATIC volatile AO_t GC_world_is_stopped = FALSE;
/* FALSE ==> it is safe for threads to restart, */
/* i.e. they will see another suspend signal */
/* before they are expected to stop (unless */
/* they have stopped voluntarily). */
#ifdef GC_OSF1_THREADS
STATIC GC_bool GC_retry_signals = TRUE;
#else
STATIC GC_bool GC_retry_signals = FALSE;
#endif
/*
* We use signals to stop threads during GC.
*
* Suspended threads wait in signal handler for SIG_THR_RESTART.
* That's more portable than semaphores or condition variables.
* (We do use sem_post from a signal handler, but that should be portable.)
*
* The thread suspension signal SIG_SUSPEND is now defined in gc_priv.h.
* Note that we can't just stop a thread; we need it to save its stack
* pointer(s) and acknowledge.
*/
#ifndef SIG_THR_RESTART
# if defined(GC_HPUX_THREADS) || defined(GC_OSF1_THREADS) \
|| defined(GC_NETBSD_THREADS) || defined(GC_USESIGRT_SIGNALS)
# if defined(_SIGRTMIN) && !defined(CPPCHECK)
# define SIG_THR_RESTART _SIGRTMIN + 5
# else
# define SIG_THR_RESTART SIGRTMIN + 5
# endif
# else
# define SIG_THR_RESTART SIGXCPU
# endif
#endif
#define SIGNAL_UNSET (-1)
/* Since SIG_SUSPEND and/or SIG_THR_RESTART could represent */
/* a non-constant expression (e.g., in case of SIGRTMIN), */
/* actual signal numbers are determined by GC_stop_init() */
/* unless manually set (before GC initialization). */
STATIC int GC_sig_suspend = SIGNAL_UNSET;
STATIC int GC_sig_thr_restart = SIGNAL_UNSET;
GC_API void GC_CALL GC_set_suspend_signal(int sig)
{
if (GC_is_initialized) return;
GC_sig_suspend = sig;
}
GC_API void GC_CALL GC_set_thr_restart_signal(int sig)
{
if (GC_is_initialized) return;
GC_sig_thr_restart = sig;
}
GC_API int GC_CALL GC_get_suspend_signal(void)
{
return GC_sig_suspend != SIGNAL_UNSET ? GC_sig_suspend : SIG_SUSPEND;
}
GC_API int GC_CALL GC_get_thr_restart_signal(void)
{
return GC_sig_thr_restart != SIGNAL_UNSET
? GC_sig_thr_restart : SIG_THR_RESTART;
}
#if defined(GC_EXPLICIT_SIGNALS_UNBLOCK) \
|| !defined(NO_SIGNALS_UNBLOCK_IN_MAIN)
/* Some targets (e.g., Solaris) might require this to be called when */
/* doing thread registering from the thread destructor. */
GC_INNER void GC_unblock_gc_signals(void)
{
sigset_t set;
sigemptyset(&set);
GC_ASSERT(GC_sig_suspend != SIGNAL_UNSET);
GC_ASSERT(GC_sig_thr_restart != SIGNAL_UNSET);
sigaddset(&set, GC_sig_suspend);
sigaddset(&set, GC_sig_thr_restart);
if (pthread_sigmask(SIG_UNBLOCK, &set, NULL) != 0)
ABORT("pthread_sigmask failed");
}
#endif /* GC_EXPLICIT_SIGNALS_UNBLOCK */
STATIC sem_t GC_suspend_ack_sem;
#ifdef GC_NETBSD_THREADS
# define GC_NETBSD_THREADS_WORKAROUND
/* It seems to be necessary to wait until threads have restarted. */
/* But it is unclear why that is the case. */
STATIC sem_t GC_restart_ack_sem;
#endif
STATIC void GC_suspend_handler_inner(ptr_t dummy, void *context);
#ifndef NO_SA_SIGACTION
STATIC void GC_suspend_handler(int sig, siginfo_t * info GC_ATTR_UNUSED,
void * context GC_ATTR_UNUSED)
#else
STATIC void GC_suspend_handler(int sig)
#endif
{
int old_errno = errno;
if (sig != GC_sig_suspend) {
# if defined(GC_FREEBSD_THREADS)
/* Workaround "deferred signal handling" bug in FreeBSD 9.2. */
if (0 == sig) return;
# endif
ABORT("Bad signal in suspend_handler");
}
# if defined(IA64) || defined(HP_PA) || defined(M68K)
GC_with_callee_saves_pushed(GC_suspend_handler_inner, NULL);
# else
/* We believe that in all other cases the full context is already */
/* in the signal handler frame. */
{
# ifdef NO_SA_SIGACTION
void *context = 0;
# endif
GC_suspend_handler_inner(NULL, context);
}
# endif
errno = old_errno;
}
STATIC void GC_suspend_handler_inner(ptr_t dummy GC_ATTR_UNUSED,
void * context GC_ATTR_UNUSED)
{
pthread_t self = pthread_self();
GC_thread me;
IF_CANCEL(int cancel_state;)
AO_t my_stop_count = AO_load_acquire(&GC_stop_count);
/* After the barrier, this thread should see */
/* the actual content of GC_threads. */
DISABLE_CANCEL(cancel_state);
/* pthread_setcancelstate is not defined to be async-signal-safe. */
/* But the glibc version appears to be in the absence of */
/* asynchronous cancellation. And since this signal handler */
/* to block on sigsuspend, which is both async-signal-safe */
/* and a cancellation point, there seems to be no obvious way */
/* out of it. In fact, it looks to me like an async-signal-safe */
/* cancellation point is inherently a problem, unless there is */
/* some way to disable cancellation in the handler. */
# ifdef DEBUG_THREADS
GC_log_printf("Suspending %p\n", (void *)self);
# endif
me = GC_lookup_thread(self);
/* The lookup here is safe, since I'm doing this on behalf */
/* of a thread which holds the allocation lock in order */
/* to stop the world. Thus concurrent modification of the */
/* data structure is impossible. */
# ifdef GC_ENABLE_SUSPEND_THREAD
if (AO_load(&me->suspended_ext)) {
# ifdef SPARC
me -> stop_info.stack_ptr = GC_save_regs_in_stack();
# else
me -> stop_info.stack_ptr = GC_approx_sp();
# ifdef IA64
me -> backing_store_ptr = GC_save_regs_in_stack();
# endif
# endif
sem_post(&GC_suspend_ack_sem);
suspend_self_inner(me);
# ifdef DEBUG_THREADS
GC_log_printf("Continuing %p on GC_resume_thread\n", (void *)self);
# endif
RESTORE_CANCEL(cancel_state);
return;
}
# endif
if (me -> stop_info.last_stop_count == my_stop_count) {
/* Duplicate signal. OK if we are retrying. */
if (!GC_retry_signals) {
WARN("Duplicate suspend signal in thread %p\n", self);
}
RESTORE_CANCEL(cancel_state);
return;
}
# ifdef SPARC
me -> stop_info.stack_ptr = GC_save_regs_in_stack();
# else
me -> stop_info.stack_ptr = GC_approx_sp();
# endif
# ifdef IA64
me -> backing_store_ptr = GC_save_regs_in_stack();
# endif
/* Tell the thread that wants to stop the world that this */
/* thread has been stopped. Note that sem_post() is */
/* the only async-signal-safe primitive in LinuxThreads. */
sem_post(&GC_suspend_ack_sem);
AO_store_release(&me->stop_info.last_stop_count, my_stop_count);
/* Wait until that thread tells us to restart by sending */
/* this thread a GC_sig_thr_restart signal (should be masked */
/* at this point thus there is no race). */
/* We do not continue until we receive that signal, */
/* but we do not take that as authoritative. (We may be */
/* accidentally restarted by one of the user signals we */
/* don't block.) After we receive the signal, we use a */
/* primitive and expensive mechanism to wait until it's */
/* really safe to proceed. Under normal circumstances, */
/* this code should not be executed. */
do {
sigsuspend (&suspend_handler_mask);
} while (AO_load_acquire(&GC_world_is_stopped)
&& AO_load(&GC_stop_count) == my_stop_count);
/* If the RESTART signal gets lost, we can still lose. That should */
/* be less likely than losing the SUSPEND signal, since we don't do */
/* much between the sem_post and sigsuspend. */
/* We'd need more handshaking to work around that. */
/* Simply dropping the sigsuspend call should be safe, but is */
/* unlikely to be efficient. */
# ifdef DEBUG_THREADS
GC_log_printf("Continuing %p\n", (void *)self);
# endif
RESTORE_CANCEL(cancel_state);
}
STATIC void GC_restart_handler(int sig)
{
# if defined(DEBUG_THREADS) || defined(GC_NETBSD_THREADS_WORKAROUND)
int old_errno = errno; /* Preserve errno value. */
# endif
if (sig != GC_sig_thr_restart)
ABORT("Bad signal in restart handler");
# ifdef GC_NETBSD_THREADS_WORKAROUND
sem_post(&GC_restart_ack_sem);
# endif
/*
** Note: even if we don't do anything useful here,
** it would still be necessary to have a signal handler,
** rather than ignoring the signals, otherwise
** the signals will not be delivered at all, and
** will thus not interrupt the sigsuspend() above.
*/
# ifdef DEBUG_THREADS
GC_log_printf("In GC_restart_handler for %p\n", (void *)pthread_self());
# endif
# if defined(DEBUG_THREADS) || defined(GC_NETBSD_THREADS_WORKAROUND)
errno = old_errno;
# endif
}
# ifdef USE_TKILL_ON_ANDROID
extern int tkill(pid_t tid, int sig); /* from sys/linux-unistd.h */
static int android_thread_kill(pid_t tid, int sig)
{
int ret;
int old_errno = errno;
ret = tkill(tid, sig);
if (ret < 0) {
ret = errno;
errno = old_errno;
}
return ret;
}
# define THREAD_SYSTEM_ID(t) (t)->kernel_id
# define RAISE_SIGNAL(t, sig) android_thread_kill(THREAD_SYSTEM_ID(t), sig)
# else
# define THREAD_SYSTEM_ID(t) (t)->id
# define RAISE_SIGNAL(t, sig) pthread_kill(THREAD_SYSTEM_ID(t), sig)
# endif /* !USE_TKILL_ON_ANDROID */
# ifdef GC_ENABLE_SUSPEND_THREAD
# include
STATIC void GC_brief_async_signal_safe_sleep(void)
{
struct timeval tv;
tv.tv_sec = 0;
# if defined(GC_TIME_LIMIT) && !defined(CPPCHECK)
tv.tv_usec = 1000 * GC_TIME_LIMIT / 2;
# else
tv.tv_usec = 1000 * 50 / 2;
# endif
(void)select(0, 0, 0, 0, &tv);
}
static void *GC_CALLBACK suspend_self_inner(void *client_data) {
GC_thread me = (GC_thread)client_data;
while (AO_load_acquire(&me->suspended_ext)) {
/* TODO: Use sigsuspend() instead. */
GC_brief_async_signal_safe_sleep();
}
return NULL;
}
GC_API void GC_CALL GC_suspend_thread(GC_SUSPEND_THREAD_ID thread) {
GC_thread t;
IF_CANCEL(int cancel_state;)
DCL_LOCK_STATE;
LOCK();
t = GC_lookup_thread((pthread_t)thread);
if (t == NULL || t -> suspended_ext) {
UNLOCK();
return;
}
/* Set the flag making the change visible to the signal handler. */
AO_store_release(&t->suspended_ext, TRUE);
if ((pthread_t)thread == pthread_self()) {
UNLOCK();
/* It is safe as "t" cannot become invalid here (no race with */
/* GC_unregister_my_thread). */
(void)GC_do_blocking(suspend_self_inner, t);
return;
}
if ((t -> flags & FINISHED) != 0) {
/* Terminated but not joined yet. */
UNLOCK();
return;
}
DISABLE_CANCEL(cancel_state);
/* GC_suspend_thread is not a cancellation point. */
# ifdef PARALLEL_MARK
/* Ensure we do not suspend a thread while it is rebuilding */
/* a free list, otherwise such a dead-lock is possible: */
/* thread 1 is blocked in GC_wait_for_reclaim holding */
/* the allocation lock, thread 2 is suspended in */
/* GC_reclaim_generic invoked from GC_generic_malloc_many */
/* (with GC_fl_builder_count > 0), and thread 3 is blocked */
/* acquiring the allocation lock in GC_resume_thread. */
if (GC_parallel)
GC_wait_for_reclaim();
# endif
/* TODO: Support GC_retry_signals */
switch (RAISE_SIGNAL(t, GC_sig_suspend)) {
/* ESRCH cannot happen as terminated threads are handled above. */
case 0:
break;
default:
ABORT("pthread_kill failed");
}
/* Wait for the thread to complete threads table lookup and */
/* stack_ptr assignment. */
GC_ASSERT(GC_thr_initialized);
while (sem_wait(&GC_suspend_ack_sem) != 0) {
if (errno != EINTR)
ABORT("sem_wait for handler failed (suspend_self)");
}
RESTORE_CANCEL(cancel_state);
UNLOCK();
}
GC_API void GC_CALL GC_resume_thread(GC_SUSPEND_THREAD_ID thread) {
GC_thread t;
DCL_LOCK_STATE;
LOCK();
t = GC_lookup_thread((pthread_t)thread);
if (t != NULL)
AO_store(&t->suspended_ext, FALSE);
UNLOCK();
}
GC_API int GC_CALL GC_is_thread_suspended(GC_SUSPEND_THREAD_ID thread) {
GC_thread t;
int is_suspended = 0;
DCL_LOCK_STATE;
LOCK();
t = GC_lookup_thread((pthread_t)thread);
if (t != NULL && t -> suspended_ext)
is_suspended = (int)TRUE;
UNLOCK();
return is_suspended;
}
# endif /* GC_ENABLE_SUSPEND_THREAD */
#endif /* !GC_OPENBSD_UTHREADS && !NACL */
#ifdef IA64
# define IF_IA64(x) x
#else
# define IF_IA64(x)
#endif
/* We hold allocation lock. Should do exactly the right thing if the */
/* world is stopped. Should not fail if it isn't. */
GC_INNER void GC_push_all_stacks(void)
{
GC_bool found_me = FALSE;
size_t nthreads = 0;
int i;
GC_thread p;
ptr_t lo, hi;
/* On IA64, we also need to scan the register backing store. */
IF_IA64(ptr_t bs_lo; ptr_t bs_hi;)
struct GC_traced_stack_sect_s *traced_stack_sect;
pthread_t self = pthread_self();
word total_size = 0;
if (!EXPECT(GC_thr_initialized, TRUE))
GC_thr_init();
# ifdef DEBUG_THREADS
GC_log_printf("Pushing stacks from thread %p\n", (void *)self);
# endif
for (i = 0; i < THREAD_TABLE_SZ; i++) {
for (p = GC_threads[i]; p != 0; p = p -> next) {
if (p -> flags & FINISHED) continue;
++nthreads;
traced_stack_sect = p -> traced_stack_sect;
if (THREAD_EQUAL(p -> id, self)) {
GC_ASSERT(!p->thread_blocked);
# ifdef SPARC
lo = (ptr_t)GC_save_regs_in_stack();
# else
lo = GC_approx_sp();
# endif
found_me = TRUE;
IF_IA64(bs_hi = (ptr_t)GC_save_regs_in_stack();)
} else {
lo = p -> stop_info.stack_ptr;
IF_IA64(bs_hi = p -> backing_store_ptr;)
if (traced_stack_sect != NULL
&& traced_stack_sect->saved_stack_ptr == lo) {
/* If the thread has never been stopped since the recent */
/* GC_call_with_gc_active invocation then skip the top */
/* "stack section" as stack_ptr already points to. */
traced_stack_sect = traced_stack_sect->prev;
}
}
if ((p -> flags & MAIN_THREAD) == 0) {
hi = p -> stack_end;
IF_IA64(bs_lo = p -> backing_store_end);
} else {
/* The original stack. */
hi = GC_stackbottom;
IF_IA64(bs_lo = BACKING_STORE_BASE;)
}
# ifdef DEBUG_THREADS
GC_log_printf("Stack for thread %p = [%p,%p)\n",
(void *)p->id, (void *)lo, (void *)hi);
# endif
if (0 == lo) ABORT("GC_push_all_stacks: sp not set!");
if (p->altstack != NULL && (word)p->altstack <= (word)lo
&& (word)lo <= (word)p->altstack + p->altstack_size) {
hi = p->altstack + p->altstack_size;
/* FIXME: Need to scan the normal stack too, but how ? */
/* FIXME: Assume stack grows down */
}
GC_push_all_stack_sections(lo, hi, traced_stack_sect);
# ifdef STACK_GROWS_UP
total_size += lo - hi;
# else
total_size += hi - lo; /* lo <= hi */
# endif
# ifdef NACL
/* Push reg_storage as roots, this will cover the reg context. */
GC_push_all_stack((ptr_t)p -> stop_info.reg_storage,
(ptr_t)(p -> stop_info.reg_storage + NACL_GC_REG_STORAGE_SIZE));
total_size += NACL_GC_REG_STORAGE_SIZE * sizeof(ptr_t);
# endif
# ifdef IA64
# ifdef DEBUG_THREADS
GC_log_printf("Reg stack for thread %p = [%p,%p)\n",
(void *)p->id, (void *)bs_lo, (void *)bs_hi);
# endif
/* FIXME: This (if p->id==self) may add an unbounded number of */
/* entries, and hence overflow the mark stack, which is bad. */
GC_push_all_register_sections(bs_lo, bs_hi,
THREAD_EQUAL(p -> id, self),
traced_stack_sect);
total_size += bs_hi - bs_lo; /* bs_lo <= bs_hi */
# endif
}
}
GC_VERBOSE_LOG_PRINTF("Pushed %d thread stacks\n", (int)nthreads);
if (!found_me && !GC_in_thread_creation)
ABORT("Collecting from unknown thread");
GC_total_stacksize = total_size;
}
#ifdef DEBUG_THREADS
/* There seems to be a very rare thread stopping problem. To help us */
/* debug that, we save the ids of the stopping thread. */
pthread_t GC_stopping_thread;
int GC_stopping_pid = 0;
#endif
/* We hold the allocation lock. Suspend all threads that might */
/* still be running. Return the number of suspend signals that */
/* were sent. */
STATIC int GC_suspend_all(void)
{
int n_live_threads = 0;
int i;
# ifndef NACL
GC_thread p;
# ifndef GC_OPENBSD_UTHREADS
int result;
# endif
pthread_t self = pthread_self();
# ifdef DEBUG_THREADS
GC_stopping_thread = self;
GC_stopping_pid = getpid();
# endif
for (i = 0; i < THREAD_TABLE_SZ; i++) {
for (p = GC_threads[i]; p != 0; p = p -> next) {
if (!THREAD_EQUAL(p -> id, self)) {
if ((p -> flags & FINISHED) != 0) continue;
if (p -> thread_blocked) /* Will wait */ continue;
# ifndef GC_OPENBSD_UTHREADS
# ifdef GC_ENABLE_SUSPEND_THREAD
if (p -> suspended_ext) continue;
# endif
if (AO_load(&p->stop_info.last_stop_count) == GC_stop_count)
continue;
n_live_threads++;
# endif
# ifdef DEBUG_THREADS
GC_log_printf("Sending suspend signal to %p\n", (void *)p->id);
# endif
# ifdef GC_OPENBSD_UTHREADS
{
stack_t stack;
if (pthread_suspend_np(p -> id) != 0)
ABORT("pthread_suspend_np failed");
if (pthread_stackseg_np(p->id, &stack))
ABORT("pthread_stackseg_np failed");
p -> stop_info.stack_ptr = (ptr_t)stack.ss_sp - stack.ss_size;
if (GC_on_thread_event)
GC_on_thread_event(GC_EVENT_THREAD_SUSPENDED,
(void *)p->id);
}
# else
result = RAISE_SIGNAL(p, GC_sig_suspend);
switch(result) {
case ESRCH:
/* Not really there anymore. Possible? */
n_live_threads--;
break;
case 0:
if (GC_on_thread_event)
GC_on_thread_event(GC_EVENT_THREAD_SUSPENDED,
(void *)(word)THREAD_SYSTEM_ID(p));
/* Note: thread id might be truncated. */
break;
default:
ABORT_ARG1("pthread_kill failed at suspend",
": errcode= %d", result);
}
# endif
}
}
}
# else /* NACL */
# ifndef NACL_PARK_WAIT_NANOSECONDS
# define NACL_PARK_WAIT_NANOSECONDS (100 * 1000)
# endif
# define NANOS_PER_SECOND (1000UL * 1000 * 1000)
unsigned long num_sleeps = 0;
# ifdef DEBUG_THREADS
GC_log_printf("pthread_stop_world: num_threads %d\n",
GC_nacl_num_gc_threads - 1);
# endif
GC_nacl_thread_parker = pthread_self();
GC_nacl_park_threads_now = 1;
# ifdef DEBUG_THREADS
GC_stopping_thread = GC_nacl_thread_parker;
GC_stopping_pid = getpid();
# endif
while (1) {
int num_threads_parked = 0;
struct timespec ts;
int num_used = 0;
/* Check the 'parked' flag for each thread the GC knows about. */
for (i = 0; i < MAX_NACL_GC_THREADS
&& num_used < GC_nacl_num_gc_threads; i++) {
if (GC_nacl_thread_used[i] == 1) {
num_used++;
if (GC_nacl_thread_parked[i] == 1) {
num_threads_parked++;
if (GC_on_thread_event)
GC_on_thread_event(GC_EVENT_THREAD_SUSPENDED, (void *)(word)i);
}
}
}
/* -1 for the current thread. */
if (num_threads_parked >= GC_nacl_num_gc_threads - 1)
break;
ts.tv_sec = 0;
ts.tv_nsec = NACL_PARK_WAIT_NANOSECONDS;
# ifdef DEBUG_THREADS
GC_log_printf("Sleep waiting for %d threads to park...\n",
GC_nacl_num_gc_threads - num_threads_parked - 1);
# endif
/* This requires _POSIX_TIMERS feature. */
nanosleep(&ts, 0);
if (++num_sleeps > NANOS_PER_SECOND / NACL_PARK_WAIT_NANOSECONDS) {
WARN("GC appears stalled waiting for %" WARN_PRIdPTR
" threads to park...\n",
GC_nacl_num_gc_threads - num_threads_parked - 1);
num_sleeps = 0;
}
}
# endif /* NACL */
return n_live_threads;
}
GC_INNER void GC_stop_world(void)
{
# if !defined(GC_OPENBSD_UTHREADS) && !defined(NACL)
int i;
int n_live_threads;
int code;
# endif
GC_ASSERT(I_HOLD_LOCK());
# ifdef DEBUG_THREADS
GC_log_printf("Stopping the world from %p\n", (void *)pthread_self());
# endif
/* Make sure all free list construction has stopped before we start. */
/* No new construction can start, since free list construction is */
/* required to acquire and release the GC lock before it starts, */
/* and we have the lock. */
# ifdef PARALLEL_MARK
if (GC_parallel) {
GC_acquire_mark_lock();
GC_ASSERT(GC_fl_builder_count == 0);
/* We should have previously waited for it to become zero. */
}
# endif /* PARALLEL_MARK */
# if defined(GC_OPENBSD_UTHREADS) || defined(NACL)
(void)GC_suspend_all();
# else
AO_store(&GC_stop_count, GC_stop_count+1);
/* Only concurrent reads are possible. */
AO_store_release(&GC_world_is_stopped, TRUE);
n_live_threads = GC_suspend_all();
if (GC_retry_signals) {
unsigned long wait_usecs = 0; /* Total wait since retry. */
# define WAIT_UNIT 3000
# define RETRY_INTERVAL 100000
for (;;) {
int ack_count;
sem_getvalue(&GC_suspend_ack_sem, &ack_count);
if (ack_count == n_live_threads) break;
if (wait_usecs > RETRY_INTERVAL) {
int newly_sent = GC_suspend_all();
GC_COND_LOG_PRINTF("Resent %d signals after timeout\n", newly_sent);
sem_getvalue(&GC_suspend_ack_sem, &ack_count);
if (newly_sent < n_live_threads - ack_count) {
WARN("Lost some threads during GC_stop_world?!\n",0);
n_live_threads = ack_count + newly_sent;
}
wait_usecs = 0;
}
# ifdef LINT2
/* Workaround "waiting while holding a lock" warning. */
# undef WAIT_UNIT
# define WAIT_UNIT 1
sched_yield();
# elif defined(CPPCHECK) /* || _POSIX_C_SOURCE >= 199309L */
{
struct timespec ts;
ts.tv_sec = 0;
ts.tv_nsec = WAIT_UNIT * 1000;
(void)nanosleep(&ts, NULL);
}
# else
usleep(WAIT_UNIT);
# endif
wait_usecs += WAIT_UNIT;
}
}
for (i = 0; i < n_live_threads; i++) {
retry:
code = sem_wait(&GC_suspend_ack_sem);
if (0 != code) {
/* On Linux, sem_wait is documented to always return zero. */
/* But the documentation appears to be incorrect. */
if (errno == EINTR) {
/* Seems to happen with some versions of gdb. */
goto retry;
}
ABORT("sem_wait for handler failed");
}
}
# endif
# ifdef PARALLEL_MARK
if (GC_parallel)
GC_release_mark_lock();
# endif
# ifdef DEBUG_THREADS
GC_log_printf("World stopped from %p\n", (void *)pthread_self());
GC_stopping_thread = 0;
# endif
}
#ifdef NACL
# if defined(__x86_64__)
# define NACL_STORE_REGS() \
do { \
__asm__ __volatile__ ("push %rbx"); \
__asm__ __volatile__ ("push %rbp"); \
__asm__ __volatile__ ("push %r12"); \
__asm__ __volatile__ ("push %r13"); \
__asm__ __volatile__ ("push %r14"); \
__asm__ __volatile__ ("push %r15"); \
__asm__ __volatile__ ("mov %%esp, %0" \
: "=m" (GC_nacl_gc_thread_self->stop_info.stack_ptr)); \
BCOPY(GC_nacl_gc_thread_self->stop_info.stack_ptr, \
GC_nacl_gc_thread_self->stop_info.reg_storage, \
NACL_GC_REG_STORAGE_SIZE * sizeof(ptr_t)); \
__asm__ __volatile__ ("naclasp $48, %r15"); \
} while (0)
# elif defined(__i386__)
# define NACL_STORE_REGS() \
do { \
__asm__ __volatile__ ("push %ebx"); \
__asm__ __volatile__ ("push %ebp"); \
__asm__ __volatile__ ("push %esi"); \
__asm__ __volatile__ ("push %edi"); \
__asm__ __volatile__ ("mov %%esp, %0" \
: "=m" (GC_nacl_gc_thread_self->stop_info.stack_ptr)); \
BCOPY(GC_nacl_gc_thread_self->stop_info.stack_ptr, \
GC_nacl_gc_thread_self->stop_info.reg_storage, \
NACL_GC_REG_STORAGE_SIZE * sizeof(ptr_t));\
__asm__ __volatile__ ("add $16, %esp"); \
} while (0)
# elif defined(__arm__)
# define NACL_STORE_REGS() \
do { \
__asm__ __volatile__ ("push {r4-r8,r10-r12,lr}"); \
__asm__ __volatile__ ("mov r0, %0" \
: : "r" (&GC_nacl_gc_thread_self->stop_info.stack_ptr)); \
__asm__ __volatile__ ("bic r0, r0, #0xc0000000"); \
__asm__ __volatile__ ("str sp, [r0]"); \
BCOPY(GC_nacl_gc_thread_self->stop_info.stack_ptr, \
GC_nacl_gc_thread_self->stop_info.reg_storage, \
NACL_GC_REG_STORAGE_SIZE * sizeof(ptr_t)); \
__asm__ __volatile__ ("add sp, sp, #40"); \
__asm__ __volatile__ ("bic sp, sp, #0xc0000000"); \
} while (0)
# else
# error TODO Please port NACL_STORE_REGS
# endif
GC_API_OSCALL void nacl_pre_syscall_hook(void)
{
if (GC_nacl_thread_idx != -1) {
NACL_STORE_REGS();
GC_nacl_gc_thread_self->stop_info.stack_ptr = GC_approx_sp();
GC_nacl_thread_parked[GC_nacl_thread_idx] = 1;
}
}
GC_API_OSCALL void __nacl_suspend_thread_if_needed(void)
{
if (GC_nacl_park_threads_now) {
pthread_t self = pthread_self();
/* Don't try to park the thread parker. */
if (GC_nacl_thread_parker == self)
return;
/* This can happen when a thread is created outside of the GC */
/* system (wthread mostly). */
if (GC_nacl_thread_idx < 0)
return;
/* If it was already 'parked', we're returning from a syscall, */
/* so don't bother storing registers again, the GC has a set. */
if (!GC_nacl_thread_parked[GC_nacl_thread_idx]) {
NACL_STORE_REGS();
GC_nacl_gc_thread_self->stop_info.stack_ptr = GC_approx_sp();
}
GC_nacl_thread_parked[GC_nacl_thread_idx] = 1;
while (GC_nacl_park_threads_now) {
/* Just spin. */
}
GC_nacl_thread_parked[GC_nacl_thread_idx] = 0;
/* Clear out the reg storage for next suspend. */
BZERO(GC_nacl_gc_thread_self->stop_info.reg_storage,
NACL_GC_REG_STORAGE_SIZE * sizeof(ptr_t));
}
}
GC_API_OSCALL void nacl_post_syscall_hook(void)
{
/* Calling __nacl_suspend_thread_if_needed right away should */
/* guarantee we don't mutate the GC set. */
__nacl_suspend_thread_if_needed();
if (GC_nacl_thread_idx != -1) {
GC_nacl_thread_parked[GC_nacl_thread_idx] = 0;
}
}
STATIC GC_bool GC_nacl_thread_parking_inited = FALSE;
STATIC pthread_mutex_t GC_nacl_thread_alloc_lock = PTHREAD_MUTEX_INITIALIZER;
struct nacl_irt_blockhook {
int (*register_block_hooks)(void (*pre)(void), void (*post)(void));
};
extern size_t nacl_interface_query(const char *interface_ident,
void *table, size_t tablesize);
GC_INNER void GC_nacl_initialize_gc_thread(void)
{
int i;
static struct nacl_irt_blockhook gc_hook;
pthread_mutex_lock(&GC_nacl_thread_alloc_lock);
if (!EXPECT(GC_nacl_thread_parking_inited, TRUE)) {
BZERO(GC_nacl_thread_parked, sizeof(GC_nacl_thread_parked));
BZERO(GC_nacl_thread_used, sizeof(GC_nacl_thread_used));
/* TODO: replace with public 'register hook' function when */
/* available from glibc. */
nacl_interface_query("nacl-irt-blockhook-0.1",
&gc_hook, sizeof(gc_hook));
gc_hook.register_block_hooks(nacl_pre_syscall_hook,
nacl_post_syscall_hook);
GC_nacl_thread_parking_inited = TRUE;
}
GC_ASSERT(GC_nacl_num_gc_threads <= MAX_NACL_GC_THREADS);
for (i = 0; i < MAX_NACL_GC_THREADS; i++) {
if (GC_nacl_thread_used[i] == 0) {
GC_nacl_thread_used[i] = 1;
GC_nacl_thread_idx = i;
GC_nacl_num_gc_threads++;
break;
}
}
pthread_mutex_unlock(&GC_nacl_thread_alloc_lock);
}
GC_INNER void GC_nacl_shutdown_gc_thread(void)
{
pthread_mutex_lock(&GC_nacl_thread_alloc_lock);
GC_ASSERT(GC_nacl_thread_idx >= 0);
GC_ASSERT(GC_nacl_thread_idx < MAX_NACL_GC_THREADS);
GC_ASSERT(GC_nacl_thread_used[GC_nacl_thread_idx] != 0);
GC_nacl_thread_used[GC_nacl_thread_idx] = 0;
GC_nacl_thread_idx = -1;
GC_nacl_num_gc_threads--;
pthread_mutex_unlock(&GC_nacl_thread_alloc_lock);
}
#endif /* NACL */
/* Caller holds allocation lock, and has held it continuously since */
/* the world stopped. */
GC_INNER void GC_start_world(void)
{
# ifndef NACL
pthread_t self = pthread_self();
register int i;
register GC_thread p;
# ifndef GC_OPENBSD_UTHREADS
register int n_live_threads = 0;
register int result;
# endif
# ifdef DEBUG_THREADS
GC_log_printf("World starting\n");
# endif
# ifndef GC_OPENBSD_UTHREADS
AO_store_release(&GC_world_is_stopped, FALSE);
/* The updated value should now be visible to the */
/* signal handler (note that pthread_kill is not on */
/* the list of functions which synchronize memory). */
# endif
for (i = 0; i < THREAD_TABLE_SZ; i++) {
for (p = GC_threads[i]; p != 0; p = p -> next) {
if (!THREAD_EQUAL(p -> id, self)) {
if ((p -> flags & FINISHED) != 0) continue;
if (p -> thread_blocked) continue;
# ifndef GC_OPENBSD_UTHREADS
# ifdef GC_ENABLE_SUSPEND_THREAD
if (p -> suspended_ext) continue;
# endif
n_live_threads++;
# endif
# ifdef DEBUG_THREADS
GC_log_printf("Sending restart signal to %p\n", (void *)p->id);
# endif
# ifdef GC_OPENBSD_UTHREADS
if (pthread_resume_np(p -> id) != 0)
ABORT("pthread_resume_np failed");
if (GC_on_thread_event)
GC_on_thread_event(GC_EVENT_THREAD_UNSUSPENDED, (void *)p->id);
# else
result = RAISE_SIGNAL(p, GC_sig_thr_restart);
switch(result) {
case ESRCH:
/* Not really there anymore. Possible? */
n_live_threads--;
break;
case 0:
if (GC_on_thread_event)
GC_on_thread_event(GC_EVENT_THREAD_UNSUSPENDED,
(void *)(word)THREAD_SYSTEM_ID(p));
break;
default:
ABORT_ARG1("pthread_kill failed at resume",
": errcode= %d", result);
}
# endif
}
}
}
# ifdef GC_NETBSD_THREADS_WORKAROUND
for (i = 0; i < n_live_threads; i++) {
while (0 != sem_wait(&GC_restart_ack_sem)) {
if (errno != EINTR) {
ABORT_ARG1("sem_wait() for restart handler failed",
": errcode= %d", errno);
}
}
}
# endif
# ifdef DEBUG_THREADS
GC_log_printf("World started\n");
# endif
# else /* NACL */
# ifdef DEBUG_THREADS
GC_log_printf("World starting...\n");
# endif
GC_nacl_park_threads_now = 0;
if (GC_on_thread_event)
GC_on_thread_event(GC_EVENT_THREAD_UNSUSPENDED, NULL);
/* TODO: Send event for every unsuspended thread. */
# endif
}
GC_INNER void GC_stop_init(void)
{
# if !defined(GC_OPENBSD_UTHREADS) && !defined(NACL)
struct sigaction act;
if (SIGNAL_UNSET == GC_sig_suspend)
GC_sig_suspend = SIG_SUSPEND;
if (SIGNAL_UNSET == GC_sig_thr_restart)
GC_sig_thr_restart = SIG_THR_RESTART;
if (GC_sig_suspend == GC_sig_thr_restart)
ABORT("Cannot use same signal for thread suspend and resume");
if (sem_init(&GC_suspend_ack_sem, GC_SEM_INIT_PSHARED, 0) != 0)
ABORT("sem_init failed");
# ifdef GC_NETBSD_THREADS_WORKAROUND
if (sem_init(&GC_restart_ack_sem, GC_SEM_INIT_PSHARED, 0) != 0)
ABORT("sem_init failed");
# endif
# ifdef SA_RESTART
act.sa_flags = SA_RESTART
# else
act.sa_flags = 0
# endif
# ifndef NO_SA_SIGACTION
| SA_SIGINFO
# endif
;
if (sigfillset(&act.sa_mask) != 0) {
ABORT("sigfillset failed");
}
# ifdef GC_RTEMS_PTHREADS
if(sigprocmask(SIG_UNBLOCK, &act.sa_mask, NULL) != 0) {
ABORT("sigprocmask failed");
}
# endif
GC_remove_allowed_signals(&act.sa_mask);
/* GC_sig_thr_restart is set in the resulting mask. */
/* It is unmasked by the handler when necessary. */
# ifndef NO_SA_SIGACTION
act.sa_sigaction = GC_suspend_handler;
# else
act.sa_handler = GC_suspend_handler;
# endif
/* act.sa_restorer is deprecated and should not be initialized. */
if (sigaction(GC_sig_suspend, &act, NULL) != 0) {
ABORT("Cannot set SIG_SUSPEND handler");
}
# ifndef NO_SA_SIGACTION
act.sa_flags &= ~SA_SIGINFO;
# endif
act.sa_handler = GC_restart_handler;
if (sigaction(GC_sig_thr_restart, &act, NULL) != 0) {
ABORT("Cannot set SIG_THR_RESTART handler");
}
/* Initialize suspend_handler_mask (excluding GC_sig_thr_restart). */
if (sigfillset(&suspend_handler_mask) != 0) ABORT("sigfillset failed");
GC_remove_allowed_signals(&suspend_handler_mask);
if (sigdelset(&suspend_handler_mask, GC_sig_thr_restart) != 0)
ABORT("sigdelset failed");
/* Check for GC_RETRY_SIGNALS. */
if (0 != GETENV("GC_RETRY_SIGNALS")) {
GC_retry_signals = TRUE;
}
if (0 != GETENV("GC_NO_RETRY_SIGNALS")) {
GC_retry_signals = FALSE;
}
if (GC_retry_signals) {
GC_COND_LOG_PRINTF("Will retry suspend signal if necessary\n");
}
# ifndef NO_SIGNALS_UNBLOCK_IN_MAIN
/* Explicitly unblock the signals once before new threads creation. */
GC_unblock_gc_signals();
# endif
# endif /* !GC_OPENBSD_UTHREADS && !NACL */
}
#endif /* GC_PTHREADS && !GC_DARWIN_THREADS && !GC_WIN32_THREADS */
Gauche-0.9.6/gc/depcomp 0000755 0000764 0000764 00000056016 13074102157 013742 0 ustar shiro shiro #! /bin/sh
# depcomp - compile a program generating dependencies as side-effects
scriptversion=2013-05-30.07; # UTC
# Copyright (C) 1999-2013 Free Software Foundation, Inc.
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2, or (at your option)
# any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
# You should have received a copy of the GNU General Public License
# along with this program. If not, see .
# As a special exception to the GNU General Public License, if you
# distribute this file as part of a program that contains a
# configuration script generated by Autoconf, you may include it under
# the same distribution terms that you use for the rest of that program.
# Originally written by Alexandre Oliva .
case $1 in
'')
echo "$0: No command. Try '$0 --help' for more information." 1>&2
exit 1;
;;
-h | --h*)
cat <<\EOF
Usage: depcomp [--help] [--version] PROGRAM [ARGS]
Run PROGRAMS ARGS to compile a file, generating dependencies
as side-effects.
Environment variables:
depmode Dependency tracking mode.
source Source file read by 'PROGRAMS ARGS'.
object Object file output by 'PROGRAMS ARGS'.
DEPDIR directory where to store dependencies.
depfile Dependency file to output.
tmpdepfile Temporary file to use when outputting dependencies.
libtool Whether libtool is used (yes/no).
Report bugs to .
EOF
exit $?
;;
-v | --v*)
echo "depcomp $scriptversion"
exit $?
;;
esac
# Get the directory component of the given path, and save it in the
# global variables '$dir'. Note that this directory component will
# be either empty or ending with a '/' character. This is deliberate.
set_dir_from ()
{
case $1 in
*/*) dir=`echo "$1" | sed -e 's|/[^/]*$|/|'`;;
*) dir=;;
esac
}
# Get the suffix-stripped basename of the given path, and save it the
# global variable '$base'.
set_base_from ()
{
base=`echo "$1" | sed -e 's|^.*/||' -e 's/\.[^.]*$//'`
}
# If no dependency file was actually created by the compiler invocation,
# we still have to create a dummy depfile, to avoid errors with the
# Makefile "include basename.Plo" scheme.
make_dummy_depfile ()
{
echo "#dummy" > "$depfile"
}
# Factor out some common post-processing of the generated depfile.
# Requires the auxiliary global variable '$tmpdepfile' to be set.
aix_post_process_depfile ()
{
# If the compiler actually managed to produce a dependency file,
# post-process it.
if test -f "$tmpdepfile"; then
# Each line is of the form 'foo.o: dependency.h'.
# Do two passes, one to just change these to
# $object: dependency.h
# and one to simply output
# dependency.h:
# which is needed to avoid the deleted-header problem.
{ sed -e "s,^.*\.[$lower]*:,$object:," < "$tmpdepfile"
sed -e "s,^.*\.[$lower]*:[$tab ]*,," -e 's,$,:,' < "$tmpdepfile"
} > "$depfile"
rm -f "$tmpdepfile"
else
make_dummy_depfile
fi
}
# A tabulation character.
tab=' '
# A newline character.
nl='
'
# Character ranges might be problematic outside the C locale.
# These definitions help.
upper=ABCDEFGHIJKLMNOPQRSTUVWXYZ
lower=abcdefghijklmnopqrstuvwxyz
digits=0123456789
alpha=${upper}${lower}
if test -z "$depmode" || test -z "$source" || test -z "$object"; then
echo "depcomp: Variables source, object and depmode must be set" 1>&2
exit 1
fi
# Dependencies for sub/bar.o or sub/bar.obj go into sub/.deps/bar.Po.
depfile=${depfile-`echo "$object" |
sed 's|[^\\/]*$|'${DEPDIR-.deps}'/&|;s|\.\([^.]*\)$|.P\1|;s|Pobj$|Po|'`}
tmpdepfile=${tmpdepfile-`echo "$depfile" | sed 's/\.\([^.]*\)$/.T\1/'`}
rm -f "$tmpdepfile"
# Avoid interferences from the environment.
gccflag= dashmflag=
# Some modes work just like other modes, but use different flags. We
# parameterize here, but still list the modes in the big case below,
# to make depend.m4 easier to write. Note that we *cannot* use a case
# here, because this file can only contain one case statement.
if test "$depmode" = hp; then
# HP compiler uses -M and no extra arg.
gccflag=-M
depmode=gcc
fi
if test "$depmode" = dashXmstdout; then
# This is just like dashmstdout with a different argument.
dashmflag=-xM
depmode=dashmstdout
fi
cygpath_u="cygpath -u -f -"
if test "$depmode" = msvcmsys; then
# This is just like msvisualcpp but w/o cygpath translation.
# Just convert the backslash-escaped backslashes to single forward
# slashes to satisfy depend.m4
cygpath_u='sed s,\\\\,/,g'
depmode=msvisualcpp
fi
if test "$depmode" = msvc7msys; then
# This is just like msvc7 but w/o cygpath translation.
# Just convert the backslash-escaped backslashes to single forward
# slashes to satisfy depend.m4
cygpath_u='sed s,\\\\,/,g'
depmode=msvc7
fi
if test "$depmode" = xlc; then
# IBM C/C++ Compilers xlc/xlC can output gcc-like dependency information.
gccflag=-qmakedep=gcc,-MF
depmode=gcc
fi
case "$depmode" in
gcc3)
## gcc 3 implements dependency tracking that does exactly what
## we want. Yay! Note: for some reason libtool 1.4 doesn't like
## it if -MD -MP comes after the -MF stuff. Hmm.
## Unfortunately, FreeBSD c89 acceptance of flags depends upon
## the command line argument order; so add the flags where they
## appear in depend2.am. Note that the slowdown incurred here
## affects only configure: in makefiles, %FASTDEP% shortcuts this.
for arg
do
case $arg in
-c) set fnord "$@" -MT "$object" -MD -MP -MF "$tmpdepfile" "$arg" ;;
*) set fnord "$@" "$arg" ;;
esac
shift # fnord
shift # $arg
done
"$@"
stat=$?
if test $stat -ne 0; then
rm -f "$tmpdepfile"
exit $stat
fi
mv "$tmpdepfile" "$depfile"
;;
gcc)
## Note that this doesn't just cater to obsosete pre-3.x GCC compilers.
## but also to in-use compilers like IMB xlc/xlC and the HP C compiler.
## (see the conditional assignment to $gccflag above).
## There are various ways to get dependency output from gcc. Here's
## why we pick this rather obscure method:
## - Don't want to use -MD because we'd like the dependencies to end
## up in a subdir. Having to rename by hand is ugly.
## (We might end up doing this anyway to support other compilers.)
## - The DEPENDENCIES_OUTPUT environment variable makes gcc act like
## -MM, not -M (despite what the docs say). Also, it might not be
## supported by the other compilers which use the 'gcc' depmode.
## - Using -M directly means running the compiler twice (even worse
## than renaming).
if test -z "$gccflag"; then
gccflag=-MD,
fi
"$@" -Wp,"$gccflag$tmpdepfile"
stat=$?
if test $stat -ne 0; then
rm -f "$tmpdepfile"
exit $stat
fi
rm -f "$depfile"
echo "$object : \\" > "$depfile"
# The second -e expression handles DOS-style file names with drive
# letters.
sed -e 's/^[^:]*: / /' \
-e 's/^['$alpha']:\/[^:]*: / /' < "$tmpdepfile" >> "$depfile"
## This next piece of magic avoids the "deleted header file" problem.
## The problem is that when a header file which appears in a .P file
## is deleted, the dependency causes make to die (because there is
## typically no way to rebuild the header). We avoid this by adding
## dummy dependencies for each header file. Too bad gcc doesn't do
## this for us directly.
## Some versions of gcc put a space before the ':'. On the theory
## that the space means something, we add a space to the output as
## well. hp depmode also adds that space, but also prefixes the VPATH
## to the object. Take care to not repeat it in the output.
## Some versions of the HPUX 10.20 sed can't process this invocation
## correctly. Breaking it into two sed invocations is a workaround.
tr ' ' "$nl" < "$tmpdepfile" \
| sed -e 's/^\\$//' -e '/^$/d' -e "s|.*$object$||" -e '/:$/d' \
| sed -e 's/$/ :/' >> "$depfile"
rm -f "$tmpdepfile"
;;
hp)
# This case exists only to let depend.m4 do its work. It works by
# looking at the text of this script. This case will never be run,
# since it is checked for above.
exit 1
;;
sgi)
if test "$libtool" = yes; then
"$@" "-Wp,-MDupdate,$tmpdepfile"
else
"$@" -MDupdate "$tmpdepfile"
fi
stat=$?
if test $stat -ne 0; then
rm -f "$tmpdepfile"
exit $stat
fi
rm -f "$depfile"
if test -f "$tmpdepfile"; then # yes, the sourcefile depend on other files
echo "$object : \\" > "$depfile"
# Clip off the initial element (the dependent). Don't try to be
# clever and replace this with sed code, as IRIX sed won't handle
# lines with more than a fixed number of characters (4096 in
# IRIX 6.2 sed, 8192 in IRIX 6.5). We also remove comment lines;
# the IRIX cc adds comments like '#:fec' to the end of the
# dependency line.
tr ' ' "$nl" < "$tmpdepfile" \
| sed -e 's/^.*\.o://' -e 's/#.*$//' -e '/^$/ d' \
| tr "$nl" ' ' >> "$depfile"
echo >> "$depfile"
# The second pass generates a dummy entry for each header file.
tr ' ' "$nl" < "$tmpdepfile" \
| sed -e 's/^.*\.o://' -e 's/#.*$//' -e '/^$/ d' -e 's/$/:/' \
>> "$depfile"
else
make_dummy_depfile
fi
rm -f "$tmpdepfile"
;;
xlc)
# This case exists only to let depend.m4 do its work. It works by
# looking at the text of this script. This case will never be run,
# since it is checked for above.
exit 1
;;
aix)
# The C for AIX Compiler uses -M and outputs the dependencies
# in a .u file. In older versions, this file always lives in the
# current directory. Also, the AIX compiler puts '$object:' at the
# start of each line; $object doesn't have directory information.
# Version 6 uses the directory in both cases.
set_dir_from "$object"
set_base_from "$object"
if test "$libtool" = yes; then
tmpdepfile1=$dir$base.u
tmpdepfile2=$base.u
tmpdepfile3=$dir.libs/$base.u
"$@" -Wc,-M
else
tmpdepfile1=$dir$base.u
tmpdepfile2=$dir$base.u
tmpdepfile3=$dir$base.u
"$@" -M
fi
stat=$?
if test $stat -ne 0; then
rm -f "$tmpdepfile1" "$tmpdepfile2" "$tmpdepfile3"
exit $stat
fi
for tmpdepfile in "$tmpdepfile1" "$tmpdepfile2" "$tmpdepfile3"
do
test -f "$tmpdepfile" && break
done
aix_post_process_depfile
;;
tcc)
# tcc (Tiny C Compiler) understand '-MD -MF file' since version 0.9.26
# FIXME: That version still under development at the moment of writing.
# Make that this statement remains true also for stable, released
# versions.
# It will wrap lines (doesn't matter whether long or short) with a
# trailing '\', as in:
#
# foo.o : \
# foo.c \
# foo.h \
#
# It will put a trailing '\' even on the last line, and will use leading
# spaces rather than leading tabs (at least since its commit 0394caf7
# "Emit spaces for -MD").
"$@" -MD -MF "$tmpdepfile"
stat=$?
if test $stat -ne 0; then
rm -f "$tmpdepfile"
exit $stat
fi
rm -f "$depfile"
# Each non-empty line is of the form 'foo.o : \' or ' dep.h \'.
# We have to change lines of the first kind to '$object: \'.
sed -e "s|.*:|$object :|" < "$tmpdepfile" > "$depfile"
# And for each line of the second kind, we have to emit a 'dep.h:'
# dummy dependency, to avoid the deleted-header problem.
sed -n -e 's|^ *\(.*\) *\\$|\1:|p' < "$tmpdepfile" >> "$depfile"
rm -f "$tmpdepfile"
;;
## The order of this option in the case statement is important, since the
## shell code in configure will try each of these formats in the order
## listed in this file. A plain '-MD' option would be understood by many
## compilers, so we must ensure this comes after the gcc and icc options.
pgcc)
# Portland's C compiler understands '-MD'.
# Will always output deps to 'file.d' where file is the root name of the
# source file under compilation, even if file resides in a subdirectory.
# The object file name does not affect the name of the '.d' file.
# pgcc 10.2 will output
# foo.o: sub/foo.c sub/foo.h
# and will wrap long lines using '\' :
# foo.o: sub/foo.c ... \
# sub/foo.h ... \
# ...
set_dir_from "$object"
# Use the source, not the object, to determine the base name, since
# that's sadly what pgcc will do too.
set_base_from "$source"
tmpdepfile=$base.d
# For projects that build the same source file twice into different object
# files, the pgcc approach of using the *source* file root name can cause
# problems in parallel builds. Use a locking strategy to avoid stomping on
# the same $tmpdepfile.
lockdir=$base.d-lock
trap "
echo '$0: caught signal, cleaning up...' >&2
rmdir '$lockdir'
exit 1
" 1 2 13 15
numtries=100
i=$numtries
while test $i -gt 0; do
# mkdir is a portable test-and-set.
if mkdir "$lockdir" 2>/dev/null; then
# This process acquired the lock.
"$@" -MD
stat=$?
# Release the lock.
rmdir "$lockdir"
break
else
# If the lock is being held by a different process, wait
# until the winning process is done or we timeout.
while test -d "$lockdir" && test $i -gt 0; do
sleep 1
i=`expr $i - 1`
done
fi
i=`expr $i - 1`
done
trap - 1 2 13 15
if test $i -le 0; then
echo "$0: failed to acquire lock after $numtries attempts" >&2
echo "$0: check lockdir '$lockdir'" >&2
exit 1
fi
if test $stat -ne 0; then
rm -f "$tmpdepfile"
exit $stat
fi
rm -f "$depfile"
# Each line is of the form `foo.o: dependent.h',
# or `foo.o: dep1.h dep2.h \', or ` dep3.h dep4.h \'.
# Do two passes, one to just change these to
# `$object: dependent.h' and one to simply `dependent.h:'.
sed "s,^[^:]*:,$object :," < "$tmpdepfile" > "$depfile"
# Some versions of the HPUX 10.20 sed can't process this invocation
# correctly. Breaking it into two sed invocations is a workaround.
sed 's,^[^:]*: \(.*\)$,\1,;s/^\\$//;/^$/d;/:$/d' < "$tmpdepfile" \
| sed -e 's/$/ :/' >> "$depfile"
rm -f "$tmpdepfile"
;;
hp2)
# The "hp" stanza above does not work with aCC (C++) and HP's ia64
# compilers, which have integrated preprocessors. The correct option
# to use with these is +Maked; it writes dependencies to a file named
# 'foo.d', which lands next to the object file, wherever that
# happens to be.
# Much of this is similar to the tru64 case; see comments there.
set_dir_from "$object"
set_base_from "$object"
if test "$libtool" = yes; then
tmpdepfile1=$dir$base.d
tmpdepfile2=$dir.libs/$base.d
"$@" -Wc,+Maked
else
tmpdepfile1=$dir$base.d
tmpdepfile2=$dir$base.d
"$@" +Maked
fi
stat=$?
if test $stat -ne 0; then
rm -f "$tmpdepfile1" "$tmpdepfile2"
exit $stat
fi
for tmpdepfile in "$tmpdepfile1" "$tmpdepfile2"
do
test -f "$tmpdepfile" && break
done
if test -f "$tmpdepfile"; then
sed -e "s,^.*\.[$lower]*:,$object:," "$tmpdepfile" > "$depfile"
# Add 'dependent.h:' lines.
sed -ne '2,${
s/^ *//
s/ \\*$//
s/$/:/
p
}' "$tmpdepfile" >> "$depfile"
else
make_dummy_depfile
fi
rm -f "$tmpdepfile" "$tmpdepfile2"
;;
tru64)
# The Tru64 compiler uses -MD to generate dependencies as a side
# effect. 'cc -MD -o foo.o ...' puts the dependencies into 'foo.o.d'.
# At least on Alpha/Redhat 6.1, Compaq CCC V6.2-504 seems to put
# dependencies in 'foo.d' instead, so we check for that too.
# Subdirectories are respected.
set_dir_from "$object"
set_base_from "$object"
if test "$libtool" = yes; then
# Libtool generates 2 separate objects for the 2 libraries. These
# two compilations output dependencies in $dir.libs/$base.o.d and
# in $dir$base.o.d. We have to check for both files, because
# one of the two compilations can be disabled. We should prefer
# $dir$base.o.d over $dir.libs/$base.o.d because the latter is
# automatically cleaned when .libs/ is deleted, while ignoring
# the former would cause a distcleancheck panic.
tmpdepfile1=$dir$base.o.d # libtool 1.5
tmpdepfile2=$dir.libs/$base.o.d # Likewise.
tmpdepfile3=$dir.libs/$base.d # Compaq CCC V6.2-504
"$@" -Wc,-MD
else
tmpdepfile1=$dir$base.d
tmpdepfile2=$dir$base.d
tmpdepfile3=$dir$base.d
"$@" -MD
fi
stat=$?
if test $stat -ne 0; then
rm -f "$tmpdepfile1" "$tmpdepfile2" "$tmpdepfile3"
exit $stat
fi
for tmpdepfile in "$tmpdepfile1" "$tmpdepfile2" "$tmpdepfile3"
do
test -f "$tmpdepfile" && break
done
# Same post-processing that is required for AIX mode.
aix_post_process_depfile
;;
msvc7)
if test "$libtool" = yes; then
showIncludes=-Wc,-showIncludes
else
showIncludes=-showIncludes
fi
"$@" $showIncludes > "$tmpdepfile"
stat=$?
grep -v '^Note: including file: ' "$tmpdepfile"
if test $stat -ne 0; then
rm -f "$tmpdepfile"
exit $stat
fi
rm -f "$depfile"
echo "$object : \\" > "$depfile"
# The first sed program below extracts the file names and escapes
# backslashes for cygpath. The second sed program outputs the file
# name when reading, but also accumulates all include files in the
# hold buffer in order to output them again at the end. This only
# works with sed implementations that can handle large buffers.
sed < "$tmpdepfile" -n '
/^Note: including file: *\(.*\)/ {
s//\1/
s/\\/\\\\/g
p
}' | $cygpath_u | sort -u | sed -n '
s/ /\\ /g
s/\(.*\)/'"$tab"'\1 \\/p
s/.\(.*\) \\/\1:/
H
$ {
s/.*/'"$tab"'/
G
p
}' >> "$depfile"
echo >> "$depfile" # make sure the fragment doesn't end with a backslash
rm -f "$tmpdepfile"
;;
msvc7msys)
# This case exists only to let depend.m4 do its work. It works by
# looking at the text of this script. This case will never be run,
# since it is checked for above.
exit 1
;;
#nosideeffect)
# This comment above is used by automake to tell side-effect
# dependency tracking mechanisms from slower ones.
dashmstdout)
# Important note: in order to support this mode, a compiler *must*
# always write the preprocessed file to stdout, regardless of -o.
"$@" || exit $?
# Remove the call to Libtool.
if test "$libtool" = yes; then
while test "X$1" != 'X--mode=compile'; do
shift
done
shift
fi
# Remove '-o $object'.
IFS=" "
for arg
do
case $arg in
-o)
shift
;;
$object)
shift
;;
*)
set fnord "$@" "$arg"
shift # fnord
shift # $arg
;;
esac
done
test -z "$dashmflag" && dashmflag=-M
# Require at least two characters before searching for ':'
# in the target name. This is to cope with DOS-style filenames:
# a dependency such as 'c:/foo/bar' could be seen as target 'c' otherwise.
"$@" $dashmflag |
sed "s|^[$tab ]*[^:$tab ][^:][^:]*:[$tab ]*|$object: |" > "$tmpdepfile"
rm -f "$depfile"
cat < "$tmpdepfile" > "$depfile"
# Some versions of the HPUX 10.20 sed can't process this sed invocation
# correctly. Breaking it into two sed invocations is a workaround.
tr ' ' "$nl" < "$tmpdepfile" \
| sed -e 's/^\\$//' -e '/^$/d' -e '/:$/d' \
| sed -e 's/$/ :/' >> "$depfile"
rm -f "$tmpdepfile"
;;
dashXmstdout)
# This case only exists to satisfy depend.m4. It is never actually
# run, as this mode is specially recognized in the preamble.
exit 1
;;
makedepend)
"$@" || exit $?
# Remove any Libtool call
if test "$libtool" = yes; then
while test "X$1" != 'X--mode=compile'; do
shift
done
shift
fi
# X makedepend
shift
cleared=no eat=no
for arg
do
case $cleared in
no)
set ""; shift
cleared=yes ;;
esac
if test $eat = yes; then
eat=no
continue
fi
case "$arg" in
-D*|-I*)
set fnord "$@" "$arg"; shift ;;
# Strip any option that makedepend may not understand. Remove
# the object too, otherwise makedepend will parse it as a source file.
-arch)
eat=yes ;;
-*|$object)
;;
*)
set fnord "$@" "$arg"; shift ;;
esac
done
obj_suffix=`echo "$object" | sed 's/^.*\././'`
touch "$tmpdepfile"
${MAKEDEPEND-makedepend} -o"$obj_suffix" -f"$tmpdepfile" "$@"
rm -f "$depfile"
# makedepend may prepend the VPATH from the source file name to the object.
# No need to regex-escape $object, excess matching of '.' is harmless.
sed "s|^.*\($object *:\)|\1|" "$tmpdepfile" > "$depfile"
# Some versions of the HPUX 10.20 sed can't process the last invocation
# correctly. Breaking it into two sed invocations is a workaround.
sed '1,2d' "$tmpdepfile" \
| tr ' ' "$nl" \
| sed -e 's/^\\$//' -e '/^$/d' -e '/:$/d' \
| sed -e 's/$/ :/' >> "$depfile"
rm -f "$tmpdepfile" "$tmpdepfile".bak
;;
cpp)
# Important note: in order to support this mode, a compiler *must*
# always write the preprocessed file to stdout.
"$@" || exit $?
# Remove the call to Libtool.
if test "$libtool" = yes; then
while test "X$1" != 'X--mode=compile'; do
shift
done
shift
fi
# Remove '-o $object'.
IFS=" "
for arg
do
case $arg in
-o)
shift
;;
$object)
shift
;;
*)
set fnord "$@" "$arg"
shift # fnord
shift # $arg
;;
esac
done
"$@" -E \
| sed -n -e '/^# [0-9][0-9]* "\([^"]*\)".*/ s:: \1 \\:p' \
-e '/^#line [0-9][0-9]* "\([^"]*\)".*/ s:: \1 \\:p' \
| sed '$ s: \\$::' > "$tmpdepfile"
rm -f "$depfile"
echo "$object : \\" > "$depfile"
cat < "$tmpdepfile" >> "$depfile"
sed < "$tmpdepfile" '/^$/d;s/^ //;s/ \\$//;s/$/ :/' >> "$depfile"
rm -f "$tmpdepfile"
;;
msvisualcpp)
# Important note: in order to support this mode, a compiler *must*
# always write the preprocessed file to stdout.
"$@" || exit $?
# Remove the call to Libtool.
if test "$libtool" = yes; then
while test "X$1" != 'X--mode=compile'; do
shift
done
shift
fi
IFS=" "
for arg
do
case "$arg" in
-o)
shift
;;
$object)
shift
;;
"-Gm"|"/Gm"|"-Gi"|"/Gi"|"-ZI"|"/ZI")
set fnord "$@"
shift
shift
;;
*)
set fnord "$@" "$arg"
shift
shift
;;
esac
done
"$@" -E 2>/dev/null |
sed -n '/^#line [0-9][0-9]* "\([^"]*\)"/ s::\1:p' | $cygpath_u | sort -u > "$tmpdepfile"
rm -f "$depfile"
echo "$object : \\" > "$depfile"
sed < "$tmpdepfile" -n -e 's% %\\ %g' -e '/^\(.*\)$/ s::'"$tab"'\1 \\:p' >> "$depfile"
echo "$tab" >> "$depfile"
sed < "$tmpdepfile" -n -e 's% %\\ %g' -e '/^\(.*\)$/ s::\1\::p' >> "$depfile"
rm -f "$tmpdepfile"
;;
msvcmsys)
# This case exists only to let depend.m4 do its work. It works by
# looking at the text of this script. This case will never be run,
# since it is checked for above.
exit 1
;;
none)
exec "$@"
;;
*)
echo "Unknown depmode $depmode" 1>&2
exit 1
;;
esac
exit 0
# Local Variables:
# mode: shell-script
# sh-indentation: 2
# eval: (add-hook 'write-file-hooks 'time-stamp)
# time-stamp-start: "scriptversion="
# time-stamp-format: "%:y-%02m-%02d.%02H"
# time-stamp-time-zone: "UTC"
# time-stamp-end: "; # UTC"
# End:
Gauche-0.9.6/gc/ptr_chck.c 0000664 0000764 0000764 00000022566 13074101475 014335 0 ustar shiro shiro /*
* Copyright (c) 1991-1994 by Xerox Corporation. All rights reserved.
*
* THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY EXPRESSED
* OR IMPLIED. ANY USE IS AT YOUR OWN RISK.
*
* Permission is hereby granted to use or copy this program
* for any purpose, provided the above notices are retained on all copies.
* Permission to modify the code and to distribute modified code is granted,
* provided the above notices are retained, and a notice that the code was
* modified is included with the above copyright notice.
*/
#include "private/gc_pmark.h"
/*
* These are checking routines calls to which could be inserted by a
* preprocessor to validate C pointer arithmetic.
*/
STATIC void GC_CALLBACK GC_default_same_obj_print_proc(void * p, void * q)
{
ABORT_ARG2("GC_same_obj test failed",
": %p and %p are not in the same object", p, q);
}
void (GC_CALLBACK *GC_same_obj_print_proc) (void *, void *)
= GC_default_same_obj_print_proc;
/* Check that p and q point to the same object. Call */
/* *GC_same_obj_print_proc if they don't. */
/* Returns the first argument. (Return value may be hard */
/* to use due to typing issues. But if we had a suitable */
/* preprocessor...) */
/* Succeeds if neither p nor q points to the heap. */
/* We assume this is performance critical. (It shouldn't */
/* be called by production code, but this can easily make */
/* debugging intolerably slow.) */
GC_API void * GC_CALL GC_same_obj(void *p, void *q)
{
struct hblk *h;
hdr *hhdr;
ptr_t base, limit;
word sz;
if (!EXPECT(GC_is_initialized, TRUE)) GC_init();
hhdr = HDR((word)p);
if (hhdr == 0) {
if (divHBLKSZ((word)p) != divHBLKSZ((word)q)
&& HDR((word)q) != 0) {
goto fail;
}
return(p);
}
/* If it's a pointer to the middle of a large object, move it */
/* to the beginning. */
if (IS_FORWARDING_ADDR_OR_NIL(hhdr)) {
h = HBLKPTR(p) - (word)hhdr;
hhdr = HDR(h);
while (IS_FORWARDING_ADDR_OR_NIL(hhdr)) {
h = FORWARDED_ADDR(h, hhdr);
hhdr = HDR(h);
}
limit = (ptr_t)h + hhdr -> hb_sz;
if ((word)p >= (word)limit || (word)q >= (word)limit
|| (word)q < (word)h) {
goto fail;
}
return(p);
}
sz = hhdr -> hb_sz;
if (sz > MAXOBJBYTES) {
base = (ptr_t)HBLKPTR(p);
limit = base + sz;
if ((word)p >= (word)limit) {
goto fail;
}
} else {
size_t offset;
size_t pdispl = HBLKDISPL(p);
offset = pdispl % sz;
if (HBLKPTR(p) != HBLKPTR(q)) goto fail;
/* W/o this check, we might miss an error if */
/* q points to the first object on a page, and */
/* points just before the page. */
base = (ptr_t)p - offset;
limit = base + sz;
}
/* [base, limit) delimits the object containing p, if any. */
/* If p is not inside a valid object, then either q is */
/* also outside any valid object, or it is outside */
/* [base, limit). */
if ((word)q >= (word)limit || (word)q < (word)base) {
goto fail;
}
return(p);
fail:
(*GC_same_obj_print_proc)((ptr_t)p, (ptr_t)q);
return(p);
}
STATIC void GC_CALLBACK GC_default_is_valid_displacement_print_proc (void *p)
{
ABORT_ARG1("GC_is_valid_displacement test failed", ": %p not valid", p);
}
void (GC_CALLBACK *GC_is_valid_displacement_print_proc)(void *) =
GC_default_is_valid_displacement_print_proc;
/* Check that if p is a pointer to a heap page, then it points to */
/* a valid displacement within a heap object. */
/* Uninteresting with GC_all_interior_pointers. */
/* Always returns its argument. */
/* Note that we don't lock, since nothing relevant about the header */
/* should change while we have a valid object pointer to the block. */
GC_API void * GC_CALL GC_is_valid_displacement(void *p)
{
hdr *hhdr;
word pdispl;
word offset;
struct hblk *h;
word sz;
if (!EXPECT(GC_is_initialized, TRUE)) GC_init();
hhdr = HDR((word)p);
if (hhdr == 0) return(p);
h = HBLKPTR(p);
if (GC_all_interior_pointers) {
while (IS_FORWARDING_ADDR_OR_NIL(hhdr)) {
h = FORWARDED_ADDR(h, hhdr);
hhdr = HDR(h);
}
}
if (IS_FORWARDING_ADDR_OR_NIL(hhdr)) {
goto fail;
}
sz = hhdr -> hb_sz;
pdispl = HBLKDISPL(p);
offset = pdispl % sz;
if ((sz > MAXOBJBYTES && (word)p >= (word)h + sz)
|| !GC_valid_offsets[offset]
|| (word)p - offset + sz > (word)(h + 1)) {
goto fail;
}
return(p);
fail:
(*GC_is_valid_displacement_print_proc)((ptr_t)p);
return(p);
}
STATIC void GC_CALLBACK GC_default_is_visible_print_proc(void * p)
{
ABORT_ARG1("GC_is_visible test failed", ": %p not GC-visible", p);
}
void (GC_CALLBACK *GC_is_visible_print_proc)(void * p) =
GC_default_is_visible_print_proc;
#ifndef THREADS
/* Could p be a stack address? */
STATIC GC_bool GC_on_stack(ptr_t p)
{
# ifdef STACK_GROWS_DOWN
if ((word)p >= (word)GC_approx_sp()
&& (word)p < (word)GC_stackbottom) {
return(TRUE);
}
# else
if ((word)p <= (word)GC_approx_sp()
&& (word)p > (word)GC_stackbottom) {
return(TRUE);
}
# endif
return(FALSE);
}
#endif
/* Check that p is visible */
/* to the collector as a possibly pointer containing location. */
/* If it isn't, invoke *GC_is_visible_print_proc. */
/* Returns the argument in all cases. May erroneously succeed */
/* in hard cases. (This is intended for debugging use with */
/* untyped allocations. The idea is that it should be possible, though */
/* slow, to add such a call to all indirect pointer stores.) */
/* Currently useless for the multi-threaded worlds. */
GC_API void * GC_CALL GC_is_visible(void *p)
{
hdr *hhdr;
if ((word)p & (ALIGNMENT - 1)) goto fail;
if (!EXPECT(GC_is_initialized, TRUE)) GC_init();
# ifdef THREADS
hhdr = HDR((word)p);
if (hhdr != 0 && GC_base(p) == 0) {
goto fail;
} else {
/* May be inside thread stack. We can't do much. */
return(p);
}
# else
/* Check stack first: */
if (GC_on_stack(p)) return(p);
hhdr = HDR((word)p);
if (hhdr == 0) {
if (GC_is_static_root(p)) return(p);
/* Else do it again correctly: */
# if defined(DYNAMIC_LOADING) || defined(MSWIN32) \
|| defined(MSWINCE) || defined(CYGWIN32) || defined(PCR)
GC_register_dynamic_libraries();
if (GC_is_static_root(p))
return(p);
# endif
goto fail;
} else {
/* p points to the heap. */
word descr;
ptr_t base = GC_base(p); /* Should be manually inlined? */
if (base == 0) goto fail;
if (HBLKPTR(base) != HBLKPTR(p)) hhdr = HDR((word)p);
descr = hhdr -> hb_descr;
retry:
switch(descr & GC_DS_TAGS) {
case GC_DS_LENGTH:
if ((word)p - (word)base > descr) goto fail;
break;
case GC_DS_BITMAP:
if ((word)p - (word)base >= WORDS_TO_BYTES(BITMAP_BITS)
|| ((word)p & (sizeof(word) - 1))) goto fail;
if (!(((word)1 << (WORDSZ - ((ptr_t)p - (ptr_t)base) - 1))
& descr)) goto fail;
break;
case GC_DS_PROC:
/* We could try to decipher this partially. */
/* For now we just punt. */
break;
case GC_DS_PER_OBJECT:
if ((signed_word)descr >= 0) {
descr = *(word *)((ptr_t)base + (descr & ~GC_DS_TAGS));
} else {
ptr_t type_descr = *(ptr_t *)base;
descr = *(word *)(type_descr
- (descr - (word)(GC_DS_PER_OBJECT
- GC_INDIR_PER_OBJ_BIAS)));
}
goto retry;
}
return(p);
}
# endif
fail:
(*GC_is_visible_print_proc)((ptr_t)p);
return(p);
}
GC_API void * GC_CALL GC_pre_incr (void **p, ptrdiff_t how_much)
{
void * initial = *p;
void * result = GC_same_obj((void *)((ptr_t)initial + how_much), initial);
if (!GC_all_interior_pointers) {
(void) GC_is_valid_displacement(result);
}
return (*p = result);
}
GC_API void * GC_CALL GC_post_incr (void **p, ptrdiff_t how_much)
{
void * initial = *p;
void * result = GC_same_obj((void *)((ptr_t)initial + how_much), initial);
if (!GC_all_interior_pointers) {
(void) GC_is_valid_displacement(result);
}
*p = result;
return(initial);
}
Gauche-0.9.6/gc/finalize.c 0000664 0000764 0000764 00000134364 13302340445 014335 0 ustar shiro shiro /*
* Copyright 1988, 1989 Hans-J. Boehm, Alan J. Demers
* Copyright (c) 1991-1996 by Xerox Corporation. All rights reserved.
* Copyright (c) 1996-1999 by Silicon Graphics. All rights reserved.
* Copyright (C) 2007 Free Software Foundation, Inc
* THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY EXPRESSED
* OR IMPLIED. ANY USE IS AT YOUR OWN RISK.
*
* Permission is hereby granted to use or copy this program
* for any purpose, provided the above notices are retained on all copies.
* Permission to modify the code and to distribute modified code is granted,
* provided the above notices are retained, and a notice that the code was
* modified is included with the above copyright notice.
*/
#include "private/gc_pmark.h"
#ifndef GC_NO_FINALIZATION
/* Type of mark procedure used for marking from finalizable object. */
/* This procedure normally does not mark the object, only its */
/* descendants. */
typedef void (* finalization_mark_proc)(ptr_t /* finalizable_obj_ptr */);
#define HASH3(addr,size,log_size) \
((((word)(addr) >> 3) ^ ((word)(addr) >> (3 + (log_size)))) \
& ((size) - 1))
#define HASH2(addr,log_size) HASH3(addr, (word)1 << (log_size), log_size)
struct hash_chain_entry {
word hidden_key;
struct hash_chain_entry * next;
};
struct disappearing_link {
struct hash_chain_entry prolog;
# define dl_hidden_link prolog.hidden_key
/* Field to be cleared. */
# define dl_next(x) (struct disappearing_link *)((x) -> prolog.next)
# define dl_set_next(x, y) \
(void)((x)->prolog.next = (struct hash_chain_entry *)(y))
word dl_hidden_obj; /* Pointer to object base */
};
struct dl_hashtbl_s {
struct disappearing_link **head;
signed_word log_size;
word entries;
};
STATIC struct dl_hashtbl_s GC_dl_hashtbl = {
/* head */ NULL, /* log_size */ -1, /* entries */ 0 };
#ifndef GC_LONG_REFS_NOT_NEEDED
STATIC struct dl_hashtbl_s GC_ll_hashtbl = { NULL, -1, 0 };
#endif
struct finalizable_object {
struct hash_chain_entry prolog;
# define fo_hidden_base prolog.hidden_key
/* Pointer to object base. */
/* No longer hidden once object */
/* is on finalize_now queue. */
# define fo_next(x) (struct finalizable_object *)((x) -> prolog.next)
# define fo_set_next(x,y) ((x)->prolog.next = (struct hash_chain_entry *)(y))
GC_finalization_proc fo_fn; /* Finalizer. */
ptr_t fo_client_data;
word fo_object_size; /* In bytes. */
finalization_mark_proc fo_mark_proc; /* Mark-through procedure */
};
static signed_word log_fo_table_size = -1;
STATIC struct {
struct finalizable_object **fo_head;
/* List of objects that should be finalized now: */
struct finalizable_object *finalize_now;
} GC_fnlz_roots = { NULL, NULL };
GC_API void GC_CALL GC_push_finalizer_structures(void)
{
GC_ASSERT((word)(&GC_dl_hashtbl.head) % sizeof(word) == 0);
GC_ASSERT((word)(&GC_fnlz_roots) % sizeof(word) == 0);
# ifndef GC_LONG_REFS_NOT_NEEDED
GC_ASSERT((word)(&GC_ll_hashtbl.head) % sizeof(word) == 0);
GC_PUSH_ALL_SYM(GC_ll_hashtbl.head);
# endif
GC_PUSH_ALL_SYM(GC_dl_hashtbl.head);
GC_PUSH_ALL_SYM(GC_fnlz_roots);
}
/* Threshold of log_size to initiate full collection before growing */
/* a hash table. */
#ifndef GC_ON_GROW_LOG_SIZE_MIN
# define GC_ON_GROW_LOG_SIZE_MIN CPP_LOG_HBLKSIZE
#endif
/* Double the size of a hash table. *log_size_ptr is the log of its */
/* current size. May be a no-op. */
/* *table is a pointer to an array of hash headers. If we succeed, we */
/* update both *table and *log_size_ptr. Lock is held. */
STATIC void GC_grow_table(struct hash_chain_entry ***table,
signed_word *log_size_ptr, word *entries_ptr)
{
register word i;
register struct hash_chain_entry *p;
signed_word log_old_size = *log_size_ptr;
signed_word log_new_size = log_old_size + 1;
word old_size = log_old_size == -1 ? 0 : (word)1 << log_old_size;
word new_size = (word)1 << log_new_size;
/* FIXME: Power of 2 size often gets rounded up to one more page. */
struct hash_chain_entry **new_table;
GC_ASSERT(I_HOLD_LOCK());
/* Avoid growing the table in case of at least 25% of entries can */
/* be deleted by enforcing a collection. Ignored for small tables. */
if (log_old_size >= GC_ON_GROW_LOG_SIZE_MIN) {
IF_CANCEL(int cancel_state;)
DISABLE_CANCEL(cancel_state);
(void)GC_try_to_collect_inner(GC_never_stop_func);
RESTORE_CANCEL(cancel_state);
/* GC_finalize might decrease entries value. */
if (*entries_ptr < ((word)1 << log_old_size) - (*entries_ptr >> 2))
return;
}
new_table = (struct hash_chain_entry **)
GC_INTERNAL_MALLOC_IGNORE_OFF_PAGE(
(size_t)new_size * sizeof(struct hash_chain_entry *),
NORMAL);
if (new_table == 0) {
if (*table == 0) {
ABORT("Insufficient space for initial table allocation");
} else {
return;
}
}
for (i = 0; i < old_size; i++) {
p = (*table)[i];
while (p != 0) {
ptr_t real_key = GC_REVEAL_POINTER(p -> hidden_key);
struct hash_chain_entry *next = p -> next;
size_t new_hash = HASH3(real_key, new_size, log_new_size);
p -> next = new_table[new_hash];
new_table[new_hash] = p;
p = next;
}
}
*log_size_ptr = log_new_size;
*table = new_table;
}
GC_API int GC_CALL GC_register_disappearing_link(void * * link)
{
ptr_t base;
base = (ptr_t)GC_base(link);
if (base == 0)
ABORT("Bad arg to GC_register_disappearing_link");
return(GC_general_register_disappearing_link(link, base));
}
STATIC int GC_register_disappearing_link_inner(
struct dl_hashtbl_s *dl_hashtbl, void **link,
const void *obj, const char *tbl_log_name)
{
struct disappearing_link *curr_dl;
size_t index;
struct disappearing_link * new_dl;
DCL_LOCK_STATE;
if (EXPECT(GC_find_leak, FALSE)) return GC_UNIMPLEMENTED;
LOCK();
GC_ASSERT(obj != NULL && GC_base_C(obj) == obj);
if (dl_hashtbl -> log_size == -1
|| dl_hashtbl -> entries > ((word)1 << dl_hashtbl -> log_size)) {
GC_grow_table((struct hash_chain_entry ***)&dl_hashtbl -> head,
&dl_hashtbl -> log_size, &dl_hashtbl -> entries);
# ifdef LINT2
if (dl_hashtbl->log_size < 0) ABORT("log_size is negative");
# endif
GC_COND_LOG_PRINTF("Grew %s table to %u entries\n", tbl_log_name,
1 << (unsigned)dl_hashtbl -> log_size);
}
index = HASH2(link, dl_hashtbl -> log_size);
for (curr_dl = dl_hashtbl -> head[index]; curr_dl != 0;
curr_dl = dl_next(curr_dl)) {
if (curr_dl -> dl_hidden_link == GC_HIDE_POINTER(link)) {
curr_dl -> dl_hidden_obj = GC_HIDE_POINTER(obj);
UNLOCK();
return GC_DUPLICATE;
}
}
new_dl = (struct disappearing_link *)
GC_INTERNAL_MALLOC(sizeof(struct disappearing_link),NORMAL);
if (0 == new_dl) {
GC_oom_func oom_fn = GC_oom_fn;
UNLOCK();
new_dl = (struct disappearing_link *)
(*oom_fn)(sizeof(struct disappearing_link));
if (0 == new_dl) {
return GC_NO_MEMORY;
}
/* It's not likely we'll make it here, but ... */
LOCK();
/* Recalculate index since the table may grow. */
index = HASH2(link, dl_hashtbl -> log_size);
/* Check again that our disappearing link not in the table. */
for (curr_dl = dl_hashtbl -> head[index]; curr_dl != 0;
curr_dl = dl_next(curr_dl)) {
if (curr_dl -> dl_hidden_link == GC_HIDE_POINTER(link)) {
curr_dl -> dl_hidden_obj = GC_HIDE_POINTER(obj);
UNLOCK();
# ifndef DBG_HDRS_ALL
/* Free unused new_dl returned by GC_oom_fn() */
GC_free((void *)new_dl);
# endif
return GC_DUPLICATE;
}
}
}
new_dl -> dl_hidden_obj = GC_HIDE_POINTER(obj);
new_dl -> dl_hidden_link = GC_HIDE_POINTER(link);
dl_set_next(new_dl, dl_hashtbl -> head[index]);
dl_hashtbl -> head[index] = new_dl;
dl_hashtbl -> entries++;
UNLOCK();
return GC_SUCCESS;
}
GC_API int GC_CALL GC_general_register_disappearing_link(void * * link,
const void * obj)
{
if (((word)link & (ALIGNMENT-1)) != 0 || !NONNULL_ARG_NOT_NULL(link))
ABORT("Bad arg to GC_general_register_disappearing_link");
return GC_register_disappearing_link_inner(&GC_dl_hashtbl, link, obj,
"dl");
}
#ifdef DBG_HDRS_ALL
# define FREE_DL_ENTRY(curr_dl) dl_set_next(curr_dl, NULL)
#else
# define FREE_DL_ENTRY(curr_dl) GC_free(curr_dl)
#endif
/* Unregisters given link and returns the link entry to free. */
/* Assume the lock is held. */
GC_INLINE struct disappearing_link *GC_unregister_disappearing_link_inner(
struct dl_hashtbl_s *dl_hashtbl, void **link)
{
struct disappearing_link *curr_dl;
struct disappearing_link *prev_dl = NULL;
size_t index;
if (dl_hashtbl->log_size == -1)
return NULL; /* prevent integer shift by a negative amount */
index = HASH2(link, dl_hashtbl->log_size);
for (curr_dl = dl_hashtbl -> head[index]; curr_dl;
curr_dl = dl_next(curr_dl)) {
if (curr_dl -> dl_hidden_link == GC_HIDE_POINTER(link)) {
/* Remove found entry from the table. */
if (NULL == prev_dl) {
dl_hashtbl -> head[index] = dl_next(curr_dl);
} else {
dl_set_next(prev_dl, dl_next(curr_dl));
}
dl_hashtbl -> entries--;
break;
}
prev_dl = curr_dl;
}
return curr_dl;
}
GC_API int GC_CALL GC_unregister_disappearing_link(void * * link)
{
struct disappearing_link *curr_dl;
DCL_LOCK_STATE;
if (((word)link & (ALIGNMENT-1)) != 0) return(0); /* Nothing to do. */
LOCK();
curr_dl = GC_unregister_disappearing_link_inner(&GC_dl_hashtbl, link);
UNLOCK();
if (NULL == curr_dl) return 0;
FREE_DL_ENTRY(curr_dl);
return 1;
}
/* Toggle-ref support. */
#ifndef GC_TOGGLE_REFS_NOT_NEEDED
typedef union {
/* Lowest bit is used to distinguish between choices. */
void *strong_ref;
GC_hidden_pointer weak_ref;
} GCToggleRef;
STATIC GC_toggleref_func GC_toggleref_callback = 0;
STATIC GCToggleRef *GC_toggleref_arr = NULL;
STATIC int GC_toggleref_array_size = 0;
STATIC int GC_toggleref_array_capacity = 0;
GC_INNER void GC_process_togglerefs(void)
{
int i;
int new_size = 0;
GC_ASSERT(I_HOLD_LOCK());
for (i = 0; i < GC_toggleref_array_size; ++i) {
GCToggleRef r = GC_toggleref_arr[i];
void *obj = r.strong_ref;
if (((word)obj & 1) != 0) {
obj = GC_REVEAL_POINTER(r.weak_ref);
}
if (NULL == obj) {
continue;
}
switch (GC_toggleref_callback(obj)) {
case GC_TOGGLE_REF_DROP:
break;
case GC_TOGGLE_REF_STRONG:
GC_toggleref_arr[new_size++].strong_ref = obj;
break;
case GC_TOGGLE_REF_WEAK:
GC_toggleref_arr[new_size++].weak_ref = GC_HIDE_POINTER(obj);
break;
default:
ABORT("Bad toggle-ref status returned by callback");
}
}
if (new_size < GC_toggleref_array_size) {
BZERO(&GC_toggleref_arr[new_size],
(GC_toggleref_array_size - new_size) * sizeof(GCToggleRef));
GC_toggleref_array_size = new_size;
}
}
STATIC void GC_normal_finalize_mark_proc(ptr_t);
static void push_and_mark_object(void *p)
{
GC_normal_finalize_mark_proc(p);
while (!GC_mark_stack_empty()) {
MARK_FROM_MARK_STACK();
}
GC_set_mark_bit(p);
if (GC_mark_state != MS_NONE) {
while (!GC_mark_some(0)) {
/* Empty. */
}
}
}
STATIC void GC_mark_togglerefs(void)
{
int i;
if (NULL == GC_toggleref_arr)
return;
/* TODO: Hide GC_toggleref_arr to avoid its marking from roots. */
GC_set_mark_bit(GC_toggleref_arr);
for (i = 0; i < GC_toggleref_array_size; ++i) {
void *obj = GC_toggleref_arr[i].strong_ref;
if (obj != NULL && ((word)obj & 1) == 0) {
push_and_mark_object(obj);
}
}
}
STATIC void GC_clear_togglerefs(void)
{
int i;
for (i = 0; i < GC_toggleref_array_size; ++i) {
if ((GC_toggleref_arr[i].weak_ref & 1) != 0) {
if (!GC_is_marked(GC_REVEAL_POINTER(GC_toggleref_arr[i].weak_ref))) {
GC_toggleref_arr[i].weak_ref = 0;
} else {
/* No need to copy, BDWGC is a non-moving collector. */
}
}
}
}
GC_API void GC_CALL GC_set_toggleref_func(GC_toggleref_func fn)
{
DCL_LOCK_STATE;
LOCK();
GC_toggleref_callback = fn;
UNLOCK();
}
GC_API GC_toggleref_func GC_CALL GC_get_toggleref_func(void)
{
GC_toggleref_func fn;
DCL_LOCK_STATE;
LOCK();
fn = GC_toggleref_callback;
UNLOCK();
return fn;
}
static GC_bool ensure_toggleref_capacity(int capacity_inc)
{
GC_ASSERT(capacity_inc >= 0);
if (NULL == GC_toggleref_arr) {
GC_toggleref_array_capacity = 32; /* initial capacity */
GC_toggleref_arr = GC_INTERNAL_MALLOC_IGNORE_OFF_PAGE(
GC_toggleref_array_capacity * sizeof(GCToggleRef),
NORMAL);
if (NULL == GC_toggleref_arr)
return FALSE;
}
if ((unsigned)GC_toggleref_array_size + (unsigned)capacity_inc
>= (unsigned)GC_toggleref_array_capacity) {
GCToggleRef *new_array;
while ((unsigned)GC_toggleref_array_capacity
< (unsigned)GC_toggleref_array_size + (unsigned)capacity_inc) {
GC_toggleref_array_capacity *= 2;
if (GC_toggleref_array_capacity < 0) /* overflow */
return FALSE;
}
new_array = GC_INTERNAL_MALLOC_IGNORE_OFF_PAGE(
GC_toggleref_array_capacity * sizeof(GCToggleRef),
NORMAL);
if (NULL == new_array)
return FALSE;
BCOPY(GC_toggleref_arr, new_array,
GC_toggleref_array_size * sizeof(GCToggleRef));
GC_INTERNAL_FREE(GC_toggleref_arr);
GC_toggleref_arr = new_array;
}
return TRUE;
}
GC_API int GC_CALL GC_toggleref_add(void *obj, int is_strong_ref)
{
int res = GC_SUCCESS;
DCL_LOCK_STATE;
GC_ASSERT(NONNULL_ARG_NOT_NULL(obj));
LOCK();
if (GC_toggleref_callback != 0) {
if (!ensure_toggleref_capacity(1)) {
res = GC_NO_MEMORY;
} else {
GC_toggleref_arr[GC_toggleref_array_size++].strong_ref =
is_strong_ref ? obj : (void *)GC_HIDE_POINTER(obj);
}
}
UNLOCK();
return res;
}
#endif /* !GC_TOGGLE_REFS_NOT_NEEDED */
/* Finalizer callback support. */
STATIC GC_await_finalize_proc GC_object_finalized_proc = 0;
GC_API void GC_CALL GC_set_await_finalize_proc(GC_await_finalize_proc fn)
{
DCL_LOCK_STATE;
LOCK();
GC_object_finalized_proc = fn;
UNLOCK();
}
GC_API GC_await_finalize_proc GC_CALL GC_get_await_finalize_proc(void)
{
GC_await_finalize_proc fn;
DCL_LOCK_STATE;
LOCK();
fn = GC_object_finalized_proc;
UNLOCK();
return fn;
}
#ifndef GC_LONG_REFS_NOT_NEEDED
GC_API int GC_CALL GC_register_long_link(void * * link, const void * obj)
{
if (((word)link & (ALIGNMENT-1)) != 0 || !NONNULL_ARG_NOT_NULL(link))
ABORT("Bad arg to GC_register_long_link");
return GC_register_disappearing_link_inner(&GC_ll_hashtbl, link, obj,
"long dl");
}
GC_API int GC_CALL GC_unregister_long_link(void * * link)
{
struct disappearing_link *curr_dl;
DCL_LOCK_STATE;
if (((word)link & (ALIGNMENT-1)) != 0) return(0); /* Nothing to do. */
LOCK();
curr_dl = GC_unregister_disappearing_link_inner(&GC_ll_hashtbl, link);
UNLOCK();
if (NULL == curr_dl) return 0;
FREE_DL_ENTRY(curr_dl);
return 1;
}
#endif /* !GC_LONG_REFS_NOT_NEEDED */
#ifndef GC_MOVE_DISAPPEARING_LINK_NOT_NEEDED
/* Moves a link. Assume the lock is held. */
STATIC int GC_move_disappearing_link_inner(
struct dl_hashtbl_s *dl_hashtbl,
void **link, void **new_link)
{
struct disappearing_link *curr_dl, *prev_dl, *new_dl;
size_t curr_index, new_index;
word curr_hidden_link;
word new_hidden_link;
if (dl_hashtbl->log_size == -1)
return GC_NOT_FOUND; /* prevent integer shift by a negative amount */
/* Find current link. */
curr_index = HASH2(link, dl_hashtbl -> log_size);
curr_hidden_link = GC_HIDE_POINTER(link);
prev_dl = NULL;
for (curr_dl = dl_hashtbl -> head[curr_index]; curr_dl;
curr_dl = dl_next(curr_dl)) {
if (curr_dl -> dl_hidden_link == curr_hidden_link)
break;
prev_dl = curr_dl;
}
if (NULL == curr_dl) {
return GC_NOT_FOUND;
}
if (link == new_link) {
return GC_SUCCESS; /* Nothing to do. */
}
/* link found; now check new_link not present. */
new_index = HASH2(new_link, dl_hashtbl -> log_size);
new_hidden_link = GC_HIDE_POINTER(new_link);
for (new_dl = dl_hashtbl -> head[new_index]; new_dl;
new_dl = dl_next(new_dl)) {
if (new_dl -> dl_hidden_link == new_hidden_link) {
/* Target already registered; bail. */
return GC_DUPLICATE;
}
}
/* Remove from old, add to new, update link. */
if (NULL == prev_dl) {
dl_hashtbl -> head[curr_index] = dl_next(curr_dl);
} else {
dl_set_next(prev_dl, dl_next(curr_dl));
}
curr_dl -> dl_hidden_link = new_hidden_link;
dl_set_next(curr_dl, dl_hashtbl -> head[new_index]);
dl_hashtbl -> head[new_index] = curr_dl;
return GC_SUCCESS;
}
GC_API int GC_CALL GC_move_disappearing_link(void **link, void **new_link)
{
int result;
DCL_LOCK_STATE;
if (((word)new_link & (ALIGNMENT-1)) != 0
|| !NONNULL_ARG_NOT_NULL(new_link))
ABORT("Bad new_link arg to GC_move_disappearing_link");
if (((word)link & (ALIGNMENT-1)) != 0)
return GC_NOT_FOUND; /* Nothing to do. */
LOCK();
result = GC_move_disappearing_link_inner(&GC_dl_hashtbl, link, new_link);
UNLOCK();
return result;
}
# ifndef GC_LONG_REFS_NOT_NEEDED
GC_API int GC_CALL GC_move_long_link(void **link, void **new_link)
{
int result;
DCL_LOCK_STATE;
if (((word)new_link & (ALIGNMENT-1)) != 0
|| !NONNULL_ARG_NOT_NULL(new_link))
ABORT("Bad new_link arg to GC_move_long_link");
if (((word)link & (ALIGNMENT-1)) != 0)
return GC_NOT_FOUND; /* Nothing to do. */
LOCK();
result = GC_move_disappearing_link_inner(&GC_ll_hashtbl, link, new_link);
UNLOCK();
return result;
}
# endif /* !GC_LONG_REFS_NOT_NEEDED */
#endif /* !GC_MOVE_DISAPPEARING_LINK_NOT_NEEDED */
/* Possible finalization_marker procedures. Note that mark stack */
/* overflow is handled by the caller, and is not a disaster. */
STATIC void GC_normal_finalize_mark_proc(ptr_t p)
{
hdr * hhdr = HDR(p);
PUSH_OBJ(p, hhdr, GC_mark_stack_top,
&(GC_mark_stack[GC_mark_stack_size]));
}
/* This only pays very partial attention to the mark descriptor. */
/* It does the right thing for normal and atomic objects, and treats */
/* most others as normal. */
STATIC void GC_ignore_self_finalize_mark_proc(ptr_t p)
{
hdr * hhdr = HDR(p);
word descr = hhdr -> hb_descr;
ptr_t q;
ptr_t scan_limit;
ptr_t target_limit = p + hhdr -> hb_sz - 1;
if ((descr & GC_DS_TAGS) == GC_DS_LENGTH) {
scan_limit = p + descr - sizeof(word);
} else {
scan_limit = target_limit + 1 - sizeof(word);
}
for (q = p; (word)q <= (word)scan_limit; q += ALIGNMENT) {
word r = *(word *)q;
if (r < (word)p || r > (word)target_limit) {
GC_PUSH_ONE_HEAP(r, q, GC_mark_stack_top);
}
}
}
STATIC void GC_null_finalize_mark_proc(ptr_t p GC_ATTR_UNUSED) {}
/* Possible finalization_marker procedures. Note that mark stack */
/* overflow is handled by the caller, and is not a disaster. */
/* GC_unreachable_finalize_mark_proc is an alias for normal marking, */
/* but it is explicitly tested for, and triggers different */
/* behavior. Objects registered in this way are not finalized */
/* if they are reachable by other finalizable objects, even if those */
/* other objects specify no ordering. */
STATIC void GC_unreachable_finalize_mark_proc(ptr_t p)
{
GC_normal_finalize_mark_proc(p);
}
/* Register a finalization function. See gc.h for details. */
/* The last parameter is a procedure that determines */
/* marking for finalization ordering. Any objects marked */
/* by that procedure will be guaranteed to not have been */
/* finalized when this finalizer is invoked. */
STATIC void GC_register_finalizer_inner(void * obj,
GC_finalization_proc fn, void *cd,
GC_finalization_proc *ofn, void **ocd,
finalization_mark_proc mp)
{
struct finalizable_object * curr_fo;
size_t index;
struct finalizable_object *new_fo = 0;
hdr *hhdr = NULL; /* initialized to prevent warning. */
DCL_LOCK_STATE;
if (EXPECT(GC_find_leak, FALSE)) return;
LOCK();
if (log_fo_table_size == -1
|| GC_fo_entries > ((word)1 << log_fo_table_size)) {
GC_grow_table((struct hash_chain_entry ***)&GC_fnlz_roots.fo_head,
&log_fo_table_size, &GC_fo_entries);
# ifdef LINT2
if (log_fo_table_size < 0) ABORT("log_size is negative");
# endif
GC_COND_LOG_PRINTF("Grew fo table to %u entries\n",
1 << (unsigned)log_fo_table_size);
}
/* in the THREADS case we hold allocation lock. */
for (;;) {
struct finalizable_object *prev_fo = NULL;
GC_oom_func oom_fn;
index = HASH2(obj, log_fo_table_size);
curr_fo = GC_fnlz_roots.fo_head[index];
while (curr_fo != 0) {
GC_ASSERT(GC_size(curr_fo) >= sizeof(struct finalizable_object));
if (curr_fo -> fo_hidden_base == GC_HIDE_POINTER(obj)) {
/* Interruption by a signal in the middle of this */
/* should be safe. The client may see only *ocd */
/* updated, but we'll declare that to be his problem. */
if (ocd) *ocd = (void *) (curr_fo -> fo_client_data);
if (ofn) *ofn = curr_fo -> fo_fn;
/* Delete the structure for obj. */
if (prev_fo == 0) {
GC_fnlz_roots.fo_head[index] = fo_next(curr_fo);
} else {
fo_set_next(prev_fo, fo_next(curr_fo));
}
if (fn == 0) {
GC_fo_entries--;
/* May not happen if we get a signal. But a high */
/* estimate will only make the table larger than */
/* necessary. */
# if !defined(THREADS) && !defined(DBG_HDRS_ALL)
GC_free((void *)curr_fo);
# endif
} else {
curr_fo -> fo_fn = fn;
curr_fo -> fo_client_data = (ptr_t)cd;
curr_fo -> fo_mark_proc = mp;
/* Reinsert it. We deleted it first to maintain */
/* consistency in the event of a signal. */
if (prev_fo == 0) {
GC_fnlz_roots.fo_head[index] = curr_fo;
} else {
fo_set_next(prev_fo, curr_fo);
}
}
UNLOCK();
# ifndef DBG_HDRS_ALL
if (EXPECT(new_fo != 0, FALSE)) {
/* Free unused new_fo returned by GC_oom_fn() */
GC_free((void *)new_fo);
}
# endif
return;
}
prev_fo = curr_fo;
curr_fo = fo_next(curr_fo);
}
if (EXPECT(new_fo != 0, FALSE)) {
/* new_fo is returned by GC_oom_fn(). */
GC_ASSERT(fn != 0);
# ifdef LINT2
if (NULL == hhdr) ABORT("Bad hhdr in GC_register_finalizer_inner");
# endif
break;
}
if (fn == 0) {
if (ocd) *ocd = 0;
if (ofn) *ofn = 0;
UNLOCK();
return;
}
GET_HDR(obj, hhdr);
if (EXPECT(0 == hhdr, FALSE)) {
/* We won't collect it, hence finalizer wouldn't be run. */
if (ocd) *ocd = 0;
if (ofn) *ofn = 0;
UNLOCK();
return;
}
new_fo = (struct finalizable_object *)
GC_INTERNAL_MALLOC(sizeof(struct finalizable_object),NORMAL);
if (EXPECT(new_fo != 0, TRUE))
break;
oom_fn = GC_oom_fn;
UNLOCK();
new_fo = (struct finalizable_object *)
(*oom_fn)(sizeof(struct finalizable_object));
if (0 == new_fo) {
/* No enough memory. *ocd and *ofn remains unchanged. */
return;
}
/* It's not likely we'll make it here, but ... */
LOCK();
/* Recalculate index since the table may grow and */
/* check again that our finalizer is not in the table. */
}
GC_ASSERT(GC_size(new_fo) >= sizeof(struct finalizable_object));
if (ocd) *ocd = 0;
if (ofn) *ofn = 0;
new_fo -> fo_hidden_base = GC_HIDE_POINTER(obj);
new_fo -> fo_fn = fn;
new_fo -> fo_client_data = (ptr_t)cd;
new_fo -> fo_object_size = hhdr -> hb_sz;
new_fo -> fo_mark_proc = mp;
fo_set_next(new_fo, GC_fnlz_roots.fo_head[index]);
GC_fo_entries++;
GC_fnlz_roots.fo_head[index] = new_fo;
UNLOCK();
}
GC_API void GC_CALL GC_register_finalizer(void * obj,
GC_finalization_proc fn, void * cd,
GC_finalization_proc *ofn, void ** ocd)
{
GC_register_finalizer_inner(obj, fn, cd, ofn,
ocd, GC_normal_finalize_mark_proc);
}
GC_API void GC_CALL GC_register_finalizer_ignore_self(void * obj,
GC_finalization_proc fn, void * cd,
GC_finalization_proc *ofn, void ** ocd)
{
GC_register_finalizer_inner(obj, fn, cd, ofn,
ocd, GC_ignore_self_finalize_mark_proc);
}
GC_API void GC_CALL GC_register_finalizer_no_order(void * obj,
GC_finalization_proc fn, void * cd,
GC_finalization_proc *ofn, void ** ocd)
{
GC_register_finalizer_inner(obj, fn, cd, ofn,
ocd, GC_null_finalize_mark_proc);
}
static GC_bool need_unreachable_finalization = FALSE;
/* Avoid the work if this isn't used. */
GC_API void GC_CALL GC_register_finalizer_unreachable(void * obj,
GC_finalization_proc fn, void * cd,
GC_finalization_proc *ofn, void ** ocd)
{
need_unreachable_finalization = TRUE;
GC_ASSERT(GC_java_finalization);
GC_register_finalizer_inner(obj, fn, cd, ofn,
ocd, GC_unreachable_finalize_mark_proc);
}
#ifndef NO_DEBUGGING
STATIC void GC_dump_finalization_links(
const struct dl_hashtbl_s *dl_hashtbl)
{
size_t dl_size = dl_hashtbl->log_size == -1 ? 0 :
(size_t)1 << dl_hashtbl->log_size;
size_t i;
for (i = 0; i < dl_size; i++) {
struct disappearing_link *curr_dl;
for (curr_dl = dl_hashtbl -> head[i]; curr_dl != 0;
curr_dl = dl_next(curr_dl)) {
ptr_t real_ptr = GC_REVEAL_POINTER(curr_dl -> dl_hidden_obj);
ptr_t real_link = GC_REVEAL_POINTER(curr_dl -> dl_hidden_link);
GC_printf("Object: %p, link: %p\n",
(void *)real_ptr, (void *)real_link);
}
}
}
GC_API void GC_CALL GC_dump_finalization(void)
{
struct finalizable_object * curr_fo;
size_t fo_size = log_fo_table_size == -1 ? 0 :
(size_t)1 << log_fo_table_size;
size_t i;
GC_printf("Disappearing (short) links:\n");
GC_dump_finalization_links(&GC_dl_hashtbl);
# ifndef GC_LONG_REFS_NOT_NEEDED
GC_printf("Disappearing long links:\n");
GC_dump_finalization_links(&GC_ll_hashtbl);
# endif
GC_printf("Finalizers:\n");
for (i = 0; i < fo_size; i++) {
for (curr_fo = GC_fnlz_roots.fo_head[i];
curr_fo != NULL; curr_fo = fo_next(curr_fo)) {
ptr_t real_ptr = GC_REVEAL_POINTER(curr_fo -> fo_hidden_base);
GC_printf("Finalizable object: %p\n", (void *)real_ptr);
}
}
}
#endif /* !NO_DEBUGGING */
#ifndef SMALL_CONFIG
STATIC word GC_old_dl_entries = 0; /* for stats printing */
# ifndef GC_LONG_REFS_NOT_NEEDED
STATIC word GC_old_ll_entries = 0;
# endif
#endif /* !SMALL_CONFIG */
#ifndef THREADS
/* Global variables to minimize the level of recursion when a client */
/* finalizer allocates memory. */
STATIC int GC_finalizer_nested = 0;
/* Only the lowest byte is used, the rest is */
/* padding for proper global data alignment */
/* required for some compilers (like Watcom). */
STATIC unsigned GC_finalizer_skipped = 0;
/* Checks and updates the level of finalizers recursion. */
/* Returns NULL if GC_invoke_finalizers() should not be called by the */
/* collector (to minimize the risk of a deep finalizers recursion), */
/* otherwise returns a pointer to GC_finalizer_nested. */
STATIC unsigned char *GC_check_finalizer_nested(void)
{
unsigned nesting_level = *(unsigned char *)&GC_finalizer_nested;
if (nesting_level) {
/* We are inside another GC_invoke_finalizers(). */
/* Skip some implicitly-called GC_invoke_finalizers() */
/* depending on the nesting (recursion) level. */
if (++GC_finalizer_skipped < (1U << nesting_level)) return NULL;
GC_finalizer_skipped = 0;
}
*(char *)&GC_finalizer_nested = (char)(nesting_level + 1);
return (unsigned char *)&GC_finalizer_nested;
}
#endif /* THREADS */
#define ITERATE_DL_HASHTBL_BEGIN(dl_hashtbl, curr_dl, prev_dl) \
{ \
size_t i; \
size_t dl_size = dl_hashtbl->log_size == -1 ? 0 : \
(size_t)1 << dl_hashtbl->log_size; \
for (i = 0; i < dl_size; i++) { \
struct disappearing_link *prev_dl = NULL; \
curr_dl = dl_hashtbl -> head[i]; \
while (curr_dl) {
#define ITERATE_DL_HASHTBL_END(curr_dl, prev_dl) \
prev_dl = curr_dl; \
curr_dl = dl_next(curr_dl); \
} \
} \
}
#define DELETE_DL_HASHTBL_ENTRY(dl_hashtbl, curr_dl, prev_dl, next_dl) \
{ \
next_dl = dl_next(curr_dl); \
if (NULL == prev_dl) { \
dl_hashtbl -> head[i] = next_dl; \
} else { \
dl_set_next(prev_dl, next_dl); \
} \
GC_clear_mark_bit(curr_dl); \
dl_hashtbl -> entries--; \
curr_dl = next_dl; \
continue; \
}
GC_INLINE void GC_make_disappearing_links_disappear(
struct dl_hashtbl_s* dl_hashtbl)
{
struct disappearing_link *curr, *next;
ITERATE_DL_HASHTBL_BEGIN(dl_hashtbl, curr, prev)
ptr_t real_ptr = GC_REVEAL_POINTER(curr -> dl_hidden_obj);
ptr_t real_link = GC_REVEAL_POINTER(curr -> dl_hidden_link);
if (!GC_is_marked(real_ptr)) {
*(word *)real_link = 0;
GC_clear_mark_bit(curr);
DELETE_DL_HASHTBL_ENTRY(dl_hashtbl, curr, prev, next);
}
ITERATE_DL_HASHTBL_END(curr, prev)
}
GC_INLINE void GC_remove_dangling_disappearing_links(
struct dl_hashtbl_s* dl_hashtbl)
{
struct disappearing_link *curr, *next;
ITERATE_DL_HASHTBL_BEGIN(dl_hashtbl, curr, prev)
ptr_t real_link = GC_base(GC_REVEAL_POINTER(curr -> dl_hidden_link));
if (NULL != real_link && !GC_is_marked(real_link)) {
GC_clear_mark_bit(curr);
DELETE_DL_HASHTBL_ENTRY(dl_hashtbl, curr, prev, next);
}
ITERATE_DL_HASHTBL_END(curr, prev)
}
/* Called with held lock (but the world is running). */
/* Cause disappearing links to disappear and unreachable objects to be */
/* enqueued for finalization. */
GC_INNER void GC_finalize(void)
{
struct finalizable_object * curr_fo, * prev_fo, * next_fo;
ptr_t real_ptr;
size_t i;
size_t fo_size = log_fo_table_size == -1 ? 0 :
(size_t)1 << log_fo_table_size;
# ifndef SMALL_CONFIG
/* Save current GC_[dl/ll]_entries value for stats printing */
GC_old_dl_entries = GC_dl_hashtbl.entries;
# ifndef GC_LONG_REFS_NOT_NEEDED
GC_old_ll_entries = GC_ll_hashtbl.entries;
# endif
# endif
# ifndef GC_TOGGLE_REFS_NOT_NEEDED
GC_mark_togglerefs();
# endif
GC_make_disappearing_links_disappear(&GC_dl_hashtbl);
/* Mark all objects reachable via chains of 1 or more pointers */
/* from finalizable objects. */
GC_ASSERT(GC_mark_state == MS_NONE);
for (i = 0; i < fo_size; i++) {
for (curr_fo = GC_fnlz_roots.fo_head[i];
curr_fo != NULL; curr_fo = fo_next(curr_fo)) {
GC_ASSERT(GC_size(curr_fo) >= sizeof(struct finalizable_object));
real_ptr = GC_REVEAL_POINTER(curr_fo -> fo_hidden_base);
if (!GC_is_marked(real_ptr)) {
GC_MARKED_FOR_FINALIZATION(real_ptr);
GC_MARK_FO(real_ptr, curr_fo -> fo_mark_proc);
if (GC_is_marked(real_ptr)) {
WARN("Finalization cycle involving %p\n", real_ptr);
}
}
}
}
/* Enqueue for finalization all objects that are still */
/* unreachable. */
GC_bytes_finalized = 0;
for (i = 0; i < fo_size; i++) {
curr_fo = GC_fnlz_roots.fo_head[i];
prev_fo = 0;
while (curr_fo != 0) {
real_ptr = GC_REVEAL_POINTER(curr_fo -> fo_hidden_base);
if (!GC_is_marked(real_ptr)) {
if (!GC_java_finalization) {
GC_set_mark_bit(real_ptr);
}
/* Delete from hash table */
next_fo = fo_next(curr_fo);
if (NULL == prev_fo) {
GC_fnlz_roots.fo_head[i] = next_fo;
} else {
fo_set_next(prev_fo, next_fo);
}
GC_fo_entries--;
if (GC_object_finalized_proc)
GC_object_finalized_proc(real_ptr);
/* Add to list of objects awaiting finalization. */
fo_set_next(curr_fo, GC_fnlz_roots.finalize_now);
GC_fnlz_roots.finalize_now = curr_fo;
/* unhide object pointer so any future collections will */
/* see it. */
curr_fo -> fo_hidden_base =
(word)GC_REVEAL_POINTER(curr_fo -> fo_hidden_base);
GC_bytes_finalized +=
curr_fo -> fo_object_size
+ sizeof(struct finalizable_object);
GC_ASSERT(GC_is_marked(GC_base(curr_fo)));
curr_fo = next_fo;
} else {
prev_fo = curr_fo;
curr_fo = fo_next(curr_fo);
}
}
}
if (GC_java_finalization) {
/* make sure we mark everything reachable from objects finalized
using the no_order mark_proc */
for (curr_fo = GC_fnlz_roots.finalize_now;
curr_fo != NULL; curr_fo = fo_next(curr_fo)) {
real_ptr = (ptr_t)curr_fo -> fo_hidden_base;
if (!GC_is_marked(real_ptr)) {
if (curr_fo -> fo_mark_proc == GC_null_finalize_mark_proc) {
GC_MARK_FO(real_ptr, GC_normal_finalize_mark_proc);
}
if (curr_fo -> fo_mark_proc != GC_unreachable_finalize_mark_proc) {
GC_set_mark_bit(real_ptr);
}
}
}
/* now revive finalize-when-unreachable objects reachable from
other finalizable objects */
if (need_unreachable_finalization) {
curr_fo = GC_fnlz_roots.finalize_now;
# if defined(GC_ASSERTIONS) || defined(LINT2)
if (curr_fo != NULL && log_fo_table_size < 0)
ABORT("log_size is negative");
# endif
prev_fo = NULL;
while (curr_fo != NULL) {
next_fo = fo_next(curr_fo);
if (curr_fo -> fo_mark_proc == GC_unreachable_finalize_mark_proc) {
real_ptr = (ptr_t)curr_fo -> fo_hidden_base;
if (!GC_is_marked(real_ptr)) {
GC_set_mark_bit(real_ptr);
} else {
if (NULL == prev_fo) {
GC_fnlz_roots.finalize_now = next_fo;
} else {
fo_set_next(prev_fo, next_fo);
}
curr_fo -> fo_hidden_base =
GC_HIDE_POINTER(curr_fo -> fo_hidden_base);
GC_bytes_finalized -=
curr_fo->fo_object_size + sizeof(struct finalizable_object);
i = HASH2(real_ptr, log_fo_table_size);
fo_set_next(curr_fo, GC_fnlz_roots.fo_head[i]);
GC_fo_entries++;
GC_fnlz_roots.fo_head[i] = curr_fo;
curr_fo = prev_fo;
}
}
prev_fo = curr_fo;
curr_fo = next_fo;
}
}
}
GC_remove_dangling_disappearing_links(&GC_dl_hashtbl);
# ifndef GC_TOGGLE_REFS_NOT_NEEDED
GC_clear_togglerefs();
# endif
# ifndef GC_LONG_REFS_NOT_NEEDED
GC_make_disappearing_links_disappear(&GC_ll_hashtbl);
GC_remove_dangling_disappearing_links(&GC_ll_hashtbl);
# endif
if (GC_fail_count) {
/* Don't prevent running finalizers if there has been an allocation */
/* failure recently. */
# ifdef THREADS
GC_reset_finalizer_nested();
# else
GC_finalizer_nested = 0;
# endif
}
}
#ifndef JAVA_FINALIZATION_NOT_NEEDED
/* Enqueue all remaining finalizers to be run - Assumes lock is held. */
STATIC void GC_enqueue_all_finalizers(void)
{
struct finalizable_object * next_fo;
int i;
int fo_size;
fo_size = log_fo_table_size == -1 ? 0 : 1 << log_fo_table_size;
GC_bytes_finalized = 0;
for (i = 0; i < fo_size; i++) {
struct finalizable_object * curr_fo = GC_fnlz_roots.fo_head[i];
GC_fnlz_roots.fo_head[i] = NULL;
while (curr_fo != NULL) {
ptr_t real_ptr = (ptr_t)GC_REVEAL_POINTER(curr_fo->fo_hidden_base);
GC_MARK_FO(real_ptr, GC_normal_finalize_mark_proc);
GC_set_mark_bit(real_ptr);
next_fo = fo_next(curr_fo);
/* Add to list of objects awaiting finalization. */
fo_set_next(curr_fo, GC_fnlz_roots.finalize_now);
GC_fnlz_roots.finalize_now = curr_fo;
/* unhide object pointer so any future collections will */
/* see it. */
curr_fo -> fo_hidden_base =
(word)GC_REVEAL_POINTER(curr_fo -> fo_hidden_base);
GC_bytes_finalized +=
curr_fo -> fo_object_size + sizeof(struct finalizable_object);
curr_fo = next_fo;
}
}
GC_fo_entries = 0; /* all entries deleted from the hash table */
}
/* Invoke all remaining finalizers that haven't yet been run.
* This is needed for strict compliance with the Java standard,
* which can make the runtime guarantee that all finalizers are run.
* Unfortunately, the Java standard implies we have to keep running
* finalizers until there are no more left, a potential infinite loop.
* YUCK.
* Note that this is even more dangerous than the usual Java
* finalizers, in that objects reachable from static variables
* may have been finalized when these finalizers are run.
* Finalizers run at this point must be prepared to deal with a
* mostly broken world.
* This routine is externally callable, so is called without
* the allocation lock.
*/
GC_API void GC_CALL GC_finalize_all(void)
{
DCL_LOCK_STATE;
LOCK();
while (GC_fo_entries > 0) {
GC_enqueue_all_finalizers();
UNLOCK();
GC_invoke_finalizers();
/* Running the finalizers in this thread is arguably not a good */
/* idea when we should be notifying another thread to run them. */
/* But otherwise we don't have a great way to wait for them to */
/* run. */
LOCK();
}
UNLOCK();
}
#endif /* !JAVA_FINALIZATION_NOT_NEEDED */
/* Returns true if it is worth calling GC_invoke_finalizers. (Useful if */
/* finalizers can only be called from some kind of "safe state" and */
/* getting into that safe state is expensive.) */
GC_API int GC_CALL GC_should_invoke_finalizers(void)
{
return GC_fnlz_roots.finalize_now != NULL;
}
/* Invoke finalizers for all objects that are ready to be finalized. */
/* Should be called without allocation lock. */
GC_API int GC_CALL GC_invoke_finalizers(void)
{
int count = 0;
word bytes_freed_before = 0; /* initialized to prevent warning. */
DCL_LOCK_STATE;
while (GC_fnlz_roots.finalize_now != NULL) {
struct finalizable_object * curr_fo;
# ifdef THREADS
LOCK();
# endif
if (count == 0) {
bytes_freed_before = GC_bytes_freed;
/* Don't do this outside, since we need the lock. */
}
curr_fo = GC_fnlz_roots.finalize_now;
# ifdef THREADS
if (curr_fo != 0) GC_fnlz_roots.finalize_now = fo_next(curr_fo);
UNLOCK();
if (curr_fo == 0) break;
# else
GC_fnlz_roots.finalize_now = fo_next(curr_fo);
# endif
fo_set_next(curr_fo, 0);
(*(curr_fo -> fo_fn))((ptr_t)(curr_fo -> fo_hidden_base),
curr_fo -> fo_client_data);
curr_fo -> fo_client_data = 0;
++count;
/* Explicit freeing of curr_fo is probably a bad idea. */
/* It throws off accounting if nearly all objects are */
/* finalizable. Otherwise it should not matter. */
}
/* bytes_freed_before is initialized whenever count != 0 */
if (count != 0 && bytes_freed_before != GC_bytes_freed) {
LOCK();
GC_finalizer_bytes_freed += (GC_bytes_freed - bytes_freed_before);
UNLOCK();
}
return count;
}
static word last_finalizer_notification = 0;
GC_INNER void GC_notify_or_invoke_finalizers(void)
{
GC_finalizer_notifier_proc notifier_fn = 0;
# if defined(KEEP_BACK_PTRS) || defined(MAKE_BACK_GRAPH)
static word last_back_trace_gc_no = 1; /* Skip first one. */
# endif
DCL_LOCK_STATE;
# if defined(THREADS) && !defined(KEEP_BACK_PTRS) \
&& !defined(MAKE_BACK_GRAPH)
/* Quick check (while unlocked) for an empty finalization queue. */
if (NULL == GC_fnlz_roots.finalize_now) return;
# endif
LOCK();
/* This is a convenient place to generate backtraces if appropriate, */
/* since that code is not callable with the allocation lock. */
# if defined(KEEP_BACK_PTRS) || defined(MAKE_BACK_GRAPH)
if (GC_gc_no > last_back_trace_gc_no) {
# ifdef KEEP_BACK_PTRS
long i;
/* Stops when GC_gc_no wraps; that's OK. */
last_back_trace_gc_no = (word)(-1); /* disable others. */
for (i = 0; i < GC_backtraces; ++i) {
/* FIXME: This tolerates concurrent heap mutation, */
/* which may cause occasional mysterious results. */
/* We need to release the GC lock, since GC_print_callers */
/* acquires it. It probably shouldn't. */
UNLOCK();
GC_generate_random_backtrace_no_gc();
LOCK();
}
last_back_trace_gc_no = GC_gc_no;
# endif
# ifdef MAKE_BACK_GRAPH
if (GC_print_back_height) {
GC_print_back_graph_stats();
}
# endif
}
# endif
if (NULL == GC_fnlz_roots.finalize_now) {
UNLOCK();
return;
}
if (!GC_finalize_on_demand) {
unsigned char *pnested = GC_check_finalizer_nested();
UNLOCK();
/* Skip GC_invoke_finalizers() if nested */
if (pnested != NULL) {
(void) GC_invoke_finalizers();
*pnested = 0; /* Reset since no more finalizers. */
# ifndef THREADS
GC_ASSERT(NULL == GC_fnlz_roots.finalize_now);
# endif /* Otherwise GC can run concurrently and add more */
}
return;
}
/* These variables require synchronization to avoid data races. */
if (last_finalizer_notification != GC_gc_no) {
last_finalizer_notification = GC_gc_no;
notifier_fn = GC_finalizer_notifier;
}
UNLOCK();
if (notifier_fn != 0)
(*notifier_fn)(); /* Invoke the notifier */
}
#ifndef SMALL_CONFIG
# ifndef GC_LONG_REFS_NOT_NEEDED
# define IF_LONG_REFS_PRESENT_ELSE(x,y) (x)
# else
# define IF_LONG_REFS_PRESENT_ELSE(x,y) (y)
# endif
GC_INNER void GC_print_finalization_stats(void)
{
struct finalizable_object *fo;
unsigned long ready = 0;
GC_log_printf("%lu finalization entries;"
" %lu/%lu short/long disappearing links alive\n",
(unsigned long)GC_fo_entries,
(unsigned long)GC_dl_hashtbl.entries,
(unsigned long)IF_LONG_REFS_PRESENT_ELSE(
GC_ll_hashtbl.entries, 0));
for (fo = GC_fnlz_roots.finalize_now; fo != NULL; fo = fo_next(fo))
++ready;
GC_log_printf("%lu finalization-ready objects;"
" %ld/%ld short/long links cleared\n",
ready,
(long)GC_old_dl_entries - (long)GC_dl_hashtbl.entries,
(long)IF_LONG_REFS_PRESENT_ELSE(
GC_old_ll_entries - GC_ll_hashtbl.entries, 0));
}
#endif /* !SMALL_CONFIG */
#endif /* !GC_NO_FINALIZATION */
Gauche-0.9.6/gc/pcr_interface.c 0000664 0000764 0000764 00000011614 13302340445 015330 0 ustar shiro shiro /*
* Copyright (c) 1991-1994 by Xerox Corporation. All rights reserved.
*
* THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY EXPRESSED
* OR IMPLIED. ANY USE IS AT YOUR OWN RISK.
*
* Permission is hereby granted to use or copy this program
* for any purpose, provided the above notices are retained on all copies.
* Permission to modify the code and to distribute modified code is granted,
* provided the above notices are retained, and a notice that the code was
* modified is included with the above copyright notice.
*/
# include "private/gc_priv.h"
# ifdef PCR
/*
* Note that POSIX PCR requires an ANSI C compiler. Hence we are allowed
* to make the same assumption here.
* We wrap all of the allocator functions to avoid questions of
* compatibility between the prototyped and nonprototyped versions of the f
*/
# include "config/PCR_StdTypes.h"
# include "mm/PCR_MM.h"
# include
# define MY_MAGIC 17L
# define MY_DEBUGMAGIC 42L
void * GC_AllocProc(size_t size, PCR_Bool ptrFree, PCR_Bool clear )
{
if (ptrFree) {
void * result = (void *)GC_malloc_atomic(size);
if (clear && result != 0) BZERO(result, size);
return(result);
} else {
return((void *)GC_malloc(size));
}
}
void * GC_DebugAllocProc(size_t size, PCR_Bool ptrFree, PCR_Bool clear )
{
if (ptrFree) {
void * result = (void *)GC_debug_malloc_atomic(size, __FILE__,
__LINE__);
if (clear && result != 0) BZERO(result, size);
return(result);
} else {
return((void *)GC_debug_malloc(size, __FILE__, __LINE__));
}
}
# define GC_ReallocProc GC_realloc
void * GC_DebugReallocProc(void * old_object, size_t new_size_in_bytes)
{
return(GC_debug_realloc(old_object, new_size_in_bytes, __FILE__, __LINE__));
}
# define GC_FreeProc GC_free
# define GC_DebugFreeProc GC_debug_free
typedef struct {
PCR_ERes (*ed_proc)(void *p, size_t size, PCR_Any data);
GC_bool ed_pointerfree;
PCR_ERes ed_fail_code;
PCR_Any ed_client_data;
} enumerate_data;
void GC_enumerate_block(struct hblk *h, enumerate_data * ed)
{
register hdr * hhdr;
register int sz;
ptr_t p;
ptr_t lim;
word descr;
# if !defined(CPPCHECK)
# error This code was updated without testing.
# error and its precursor was clearly broken.
# endif
hhdr = HDR(h);
descr = hhdr -> hb_descr;
sz = hhdr -> hb_sz;
if (descr != 0 && ed -> ed_pointerfree
|| descr == 0 && !(ed -> ed_pointerfree)) return;
lim = (ptr_t)(h+1) - sz;
p = (ptr_t)h;
do {
if (PCR_ERes_IsErr(ed -> ed_fail_code)) return;
ed -> ed_fail_code =
(*(ed -> ed_proc))(p, sz, ed -> ed_client_data);
p+= sz;
} while ((word)p <= (word)lim);
}
struct PCR_MM_ProcsRep * GC_old_allocator = 0;
PCR_ERes GC_EnumerateProc(
PCR_Bool ptrFree,
PCR_ERes (*proc)(void *p, size_t size, PCR_Any data),
PCR_Any data
)
{
enumerate_data ed;
ed.ed_proc = proc;
ed.ed_pointerfree = ptrFree;
ed.ed_fail_code = PCR_ERes_okay;
ed.ed_client_data = data;
GC_apply_to_all_blocks(GC_enumerate_block, &ed);
if (ed.ed_fail_code != PCR_ERes_okay) {
return(ed.ed_fail_code);
} else {
/* Also enumerate objects allocated by my predecessors */
return((*(GC_old_allocator->mmp_enumerate))(ptrFree, proc, data));
}
}
void GC_DummyFreeProc(void *p) {}
void GC_DummyShutdownProc(void) {}
struct PCR_MM_ProcsRep GC_Rep = {
MY_MAGIC,
GC_AllocProc,
GC_ReallocProc,
GC_DummyFreeProc, /* mmp_free */
GC_FreeProc, /* mmp_unsafeFree */
GC_EnumerateProc,
GC_DummyShutdownProc /* mmp_shutdown */
};
struct PCR_MM_ProcsRep GC_DebugRep = {
MY_DEBUGMAGIC,
GC_DebugAllocProc,
GC_DebugReallocProc,
GC_DummyFreeProc, /* mmp_free */
GC_DebugFreeProc, /* mmp_unsafeFree */
GC_EnumerateProc,
GC_DummyShutdownProc /* mmp_shutdown */
};
GC_bool GC_use_debug = 0;
void GC_pcr_install()
{
PCR_MM_Install((GC_use_debug? &GC_DebugRep : &GC_Rep), &GC_old_allocator);
}
PCR_ERes
PCR_GC_Setup(void)
{
return PCR_ERes_okay;
}
PCR_ERes
PCR_GC_Run(void)
{
if( !PCR_Base_TestPCRArg("-nogc") ) {
GC_quiet = ( PCR_Base_TestPCRArg("-gctrace") ? 0 : 1 );
GC_use_debug = (GC_bool)PCR_Base_TestPCRArg("-debug_alloc");
GC_init();
if( !PCR_Base_TestPCRArg("-nogc_incremental") ) {
/*
* awful hack to test whether VD is implemented ...
*/
if( PCR_VD_Start( 0, NIL, 0) != PCR_ERes_FromErr(ENOSYS) ) {
GC_enable_incremental();
}
}
}
return PCR_ERes_okay;
}
void GC_push_thread_structures(void)
{
/* PCR doesn't work unless static roots are pushed. Can't get here. */
ABORT("In GC_push_thread_structures()");
}
# endif
Gauche-0.9.6/gc/gc_cpp.cc 0000664 0000764 0000764 00000004466 13245551623 014141 0 ustar shiro shiro /*
* Copyright (c) 1994 by Xerox Corporation. All rights reserved.
*
* THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY EXPRESSED
* OR IMPLIED. ANY USE IS AT YOUR OWN RISK.
*
* Permission is hereby granted to copy this code for any purpose,
* provided the above notices are retained on all copies.
*/
/*************************************************************************
This implementation module for gc_c++.h provides an implementation of
the global operators "new" and "delete" that calls the Boehm
allocator. All objects allocated by this implementation will be
uncollectible but part of the root set of the collector.
You should ensure (using implementation-dependent techniques) that the
linker finds this module before the library that defines the default
built-in "new" and "delete".
**************************************************************************/
#ifdef HAVE_CONFIG_H
# include "config.h"
#endif
#ifndef GC_BUILD
# define GC_BUILD
#endif
#include "gc_cpp.h"
#if !defined(GC_NEW_DELETE_NEED_THROW) && defined(__GNUC__) \
&& (__GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 2))
# define GC_NEW_DELETE_NEED_THROW
#endif
#ifdef GC_NEW_DELETE_NEED_THROW
# include /* for std::bad_alloc */
# define GC_DECL_NEW_THROW throw(std::bad_alloc)
# define GC_DECL_DELETE_THROW throw()
#else
# define GC_DECL_NEW_THROW /* empty */
# define GC_DECL_DELETE_THROW /* empty */
#endif // !GC_NEW_DELETE_NEED_THROW
#ifndef _MSC_VER
void* operator new(size_t size) GC_DECL_NEW_THROW {
return GC_MALLOC_UNCOLLECTABLE(size);
}
void operator delete(void* obj) GC_DECL_DELETE_THROW {
GC_FREE(obj);
}
# if defined(GC_OPERATOR_NEW_ARRAY) && !defined(CPPCHECK)
void* operator new[](size_t size) GC_DECL_NEW_THROW {
return GC_MALLOC_UNCOLLECTABLE(size);
}
void operator delete[](void* obj) GC_DECL_DELETE_THROW {
GC_FREE(obj);
}
# endif // GC_OPERATOR_NEW_ARRAY
# if __cplusplus > 201103L // C++14
void operator delete(void* obj, size_t size) GC_DECL_DELETE_THROW {
(void)size; // size is ignored
GC_FREE(obj);
}
# if defined(GC_OPERATOR_NEW_ARRAY) && !defined(CPPCHECK)
void operator delete[](void* obj, size_t size) GC_DECL_DELETE_THROW {
(void)size;
GC_FREE(obj);
}
# endif
# endif // C++14
#endif // !_MSC_VER
Gauche-0.9.6/gc/build/ 0000775 0000764 0000764 00000000000 13074101475 013460 5 ustar shiro shiro Gauche-0.9.6/gc/build/s60v3/ 0000775 0000764 0000764 00000000000 13074101475 014341 5 ustar shiro shiro Gauche-0.9.6/gc/build/s60v3/libgc.mmp 0000664 0000764 0000764 00000003222 13074101475 016133 0 ustar shiro shiro TARGET libgc.dll
TARGETTYPE dll
UID 0x1000008d 0x200107C2 // check uid
EXPORTUNFROZEN
EPOCALLOWDLLDATA
//ALWAYS_BUILD_AS_ARM
//nocompresstarget
//srcdbg
//baseaddress 00500000
//LINKEROPTION CW -map libgc.map
//LINKEROPTION CW -filealign 0x10000
CAPABILITY PowerMgmt ReadDeviceData ReadUserData WriteDeviceData WriteUserData SwEvent LocalServices NetworkServices UserEnvironment
MACRO ALL_INTERIOR_POINTERS
MACRO NO_EXECUTE_PERMISSION
MACRO USE_MMAP
MACRO GC_DONT_REGISTER_MAIN_STATIC_DATA
MACRO GC_DLL
MACRO SYMBIAN
//MACRO ENABLE_DISCLAIM
//MACRO GC_GCJ_SUPPORT
USERINCLUDE ..\..\include
USERINCLUDE ..\..\include\private
SYSTEMINCLUDE \epoc32\include
SYSTEMINCLUDE \epoc32\include\stdapis
SOURCEPATH ..\..\
SOURCE allchblk.c
SOURCE alloc.c
SOURCE blacklst.c
SOURCE dbg_mlc.c
SOURCE dyn_load.c
SOURCE finalize.c
SOURCE fnlz_mlc.c
//SOURCE gc_cpp.cpp
SOURCE gcj_mlc.c
SOURCE headers.c
SOURCE mach_dep.c
SOURCE malloc.c
SOURCE mallocx.c
SOURCE mark.c
SOURCE mark_rts.c
SOURCE misc.c
SOURCE new_hblk.c
SOURCE obj_map.c
SOURCE os_dep.c
SOURCE extra/symbian.cpp
SOURCE ptr_chck.c
SOURCE reclaim.c
SOURCE stubborn.c
SOURCE typd_mlc.c
/*
#ifdef ENABLE_ABIV2_MODE
DEBUGGABLE_UDEBONLY
#endif
*/
// Using main() as entry point
STATICLIBRARY libcrt0.lib
// libc and euser are always needed when using main() entry point
LIBRARY libc.lib
LIBRARY euser.lib
LIBRARY efsrv.lib
LIBRARY avkon.lib
LIBRARY eikcore.lib
Gauche-0.9.6/gc/build/s60v3/bld.inf 0000664 0000764 0000764 00000000252 13074101475 015577 0 ustar shiro shiro /*
Name : bld.inf
Description : This file provides the information required for building the
whole of a libgc.
*/
PRJ_PLATFORMS
default armv5
PRJ_MMPFILES
libgc.mmp
Gauche-0.9.6/gc/aclocal.m4 0000664 0000764 0000764 00000153327 13302340461 014226 0 ustar shiro shiro # generated automatically by aclocal 1.15 -*- Autoconf -*-
# Copyright (C) 1996-2014 Free Software Foundation, Inc.
# This file is free software; the Free Software Foundation
# gives unlimited permission to copy and/or distribute it,
# with or without modifications, as long as this notice is preserved.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY, to the extent permitted by law; without
# even the implied warranty of MERCHANTABILITY or FITNESS FOR A
# PARTICULAR PURPOSE.
m4_ifndef([AC_CONFIG_MACRO_DIRS], [m4_defun([_AM_CONFIG_MACRO_DIRS], [])m4_defun([AC_CONFIG_MACRO_DIRS], [_AM_CONFIG_MACRO_DIRS($@)])])
m4_ifndef([AC_AUTOCONF_VERSION],
[m4_copy([m4_PACKAGE_VERSION], [AC_AUTOCONF_VERSION])])dnl
m4_if(m4_defn([AC_AUTOCONF_VERSION]), [2.69],,
[m4_warning([this file was generated for autoconf 2.69.
You have another version of autoconf. It may work, but is not guaranteed to.
If you have problems, you may need to regenerate the build system entirely.
To do so, use the procedure documented by the package, typically 'autoreconf'.])])
dnl pkg.m4 - Macros to locate and utilise pkg-config. -*- Autoconf -*-
dnl serial 11 (pkg-config-0.29.1)
dnl
dnl Copyright © 2004 Scott James Remnant .
dnl Copyright © 2012-2015 Dan Nicholson
dnl
dnl This program is free software; you can redistribute it and/or modify
dnl it under the terms of the GNU General Public License as published by
dnl the Free Software Foundation; either version 2 of the License, or
dnl (at your option) any later version.
dnl
dnl This program is distributed in the hope that it will be useful, but
dnl WITHOUT ANY WARRANTY; without even the implied warranty of
dnl MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
dnl General Public License for more details.
dnl
dnl You should have received a copy of the GNU General Public License
dnl along with this program; if not, write to the Free Software
dnl Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA
dnl 02111-1307, USA.
dnl
dnl As a special exception to the GNU General Public License, if you
dnl distribute this file as part of a program that contains a
dnl configuration script generated by Autoconf, you may include it under
dnl the same distribution terms that you use for the rest of that
dnl program.
dnl PKG_PREREQ(MIN-VERSION)
dnl -----------------------
dnl Since: 0.29
dnl
dnl Verify that the version of the pkg-config macros are at least
dnl MIN-VERSION. Unlike PKG_PROG_PKG_CONFIG, which checks the user's
dnl installed version of pkg-config, this checks the developer's version
dnl of pkg.m4 when generating configure.
dnl
dnl To ensure that this macro is defined, also add:
dnl m4_ifndef([PKG_PREREQ],
dnl [m4_fatal([must install pkg-config 0.29 or later before running autoconf/autogen])])
dnl
dnl See the "Since" comment for each macro you use to see what version
dnl of the macros you require.
m4_defun([PKG_PREREQ],
[m4_define([PKG_MACROS_VERSION], [0.29.1])
m4_if(m4_version_compare(PKG_MACROS_VERSION, [$1]), -1,
[m4_fatal([pkg.m4 version $1 or higher is required but ]PKG_MACROS_VERSION[ found])])
])dnl PKG_PREREQ
dnl PKG_PROG_PKG_CONFIG([MIN-VERSION])
dnl ----------------------------------
dnl Since: 0.16
dnl
dnl Search for the pkg-config tool and set the PKG_CONFIG variable to
dnl first found in the path. Checks that the version of pkg-config found
dnl is at least MIN-VERSION. If MIN-VERSION is not specified, 0.9.0 is
dnl used since that's the first version where most current features of
dnl pkg-config existed.
AC_DEFUN([PKG_PROG_PKG_CONFIG],
[m4_pattern_forbid([^_?PKG_[A-Z_]+$])
m4_pattern_allow([^PKG_CONFIG(_(PATH|LIBDIR|SYSROOT_DIR|ALLOW_SYSTEM_(CFLAGS|LIBS)))?$])
m4_pattern_allow([^PKG_CONFIG_(DISABLE_UNINSTALLED|TOP_BUILD_DIR|DEBUG_SPEW)$])
AC_ARG_VAR([PKG_CONFIG], [path to pkg-config utility])
AC_ARG_VAR([PKG_CONFIG_PATH], [directories to add to pkg-config's search path])
AC_ARG_VAR([PKG_CONFIG_LIBDIR], [path overriding pkg-config's built-in search path])
if test "x$ac_cv_env_PKG_CONFIG_set" != "xset"; then
AC_PATH_TOOL([PKG_CONFIG], [pkg-config])
fi
if test -n "$PKG_CONFIG"; then
_pkg_min_version=m4_default([$1], [0.9.0])
AC_MSG_CHECKING([pkg-config is at least version $_pkg_min_version])
if $PKG_CONFIG --atleast-pkgconfig-version $_pkg_min_version; then
AC_MSG_RESULT([yes])
else
AC_MSG_RESULT([no])
PKG_CONFIG=""
fi
fi[]dnl
])dnl PKG_PROG_PKG_CONFIG
dnl PKG_CHECK_EXISTS(MODULES, [ACTION-IF-FOUND], [ACTION-IF-NOT-FOUND])
dnl -------------------------------------------------------------------
dnl Since: 0.18
dnl
dnl Check to see whether a particular set of modules exists. Similar to
dnl PKG_CHECK_MODULES(), but does not set variables or print errors.
dnl
dnl Please remember that m4 expands AC_REQUIRE([PKG_PROG_PKG_CONFIG])
dnl only at the first occurence in configure.ac, so if the first place
dnl it's called might be skipped (such as if it is within an "if", you
dnl have to call PKG_CHECK_EXISTS manually
AC_DEFUN([PKG_CHECK_EXISTS],
[AC_REQUIRE([PKG_PROG_PKG_CONFIG])dnl
if test -n "$PKG_CONFIG" && \
AC_RUN_LOG([$PKG_CONFIG --exists --print-errors "$1"]); then
m4_default([$2], [:])
m4_ifvaln([$3], [else
$3])dnl
fi])
dnl _PKG_CONFIG([VARIABLE], [COMMAND], [MODULES])
dnl ---------------------------------------------
dnl Internal wrapper calling pkg-config via PKG_CONFIG and setting
dnl pkg_failed based on the result.
m4_define([_PKG_CONFIG],
[if test -n "$$1"; then
pkg_cv_[]$1="$$1"
elif test -n "$PKG_CONFIG"; then
PKG_CHECK_EXISTS([$3],
[pkg_cv_[]$1=`$PKG_CONFIG --[]$2 "$3" 2>/dev/null`
test "x$?" != "x0" && pkg_failed=yes ],
[pkg_failed=yes])
else
pkg_failed=untried
fi[]dnl
])dnl _PKG_CONFIG
dnl _PKG_SHORT_ERRORS_SUPPORTED
dnl ---------------------------
dnl Internal check to see if pkg-config supports short errors.
AC_DEFUN([_PKG_SHORT_ERRORS_SUPPORTED],
[AC_REQUIRE([PKG_PROG_PKG_CONFIG])
if $PKG_CONFIG --atleast-pkgconfig-version 0.20; then
_pkg_short_errors_supported=yes
else
_pkg_short_errors_supported=no
fi[]dnl
])dnl _PKG_SHORT_ERRORS_SUPPORTED
dnl PKG_CHECK_MODULES(VARIABLE-PREFIX, MODULES, [ACTION-IF-FOUND],
dnl [ACTION-IF-NOT-FOUND])
dnl --------------------------------------------------------------
dnl Since: 0.4.0
dnl
dnl Note that if there is a possibility the first call to
dnl PKG_CHECK_MODULES might not happen, you should be sure to include an
dnl explicit call to PKG_PROG_PKG_CONFIG in your configure.ac
AC_DEFUN([PKG_CHECK_MODULES],
[AC_REQUIRE([PKG_PROG_PKG_CONFIG])dnl
AC_ARG_VAR([$1][_CFLAGS], [C compiler flags for $1, overriding pkg-config])dnl
AC_ARG_VAR([$1][_LIBS], [linker flags for $1, overriding pkg-config])dnl
pkg_failed=no
AC_MSG_CHECKING([for $1])
_PKG_CONFIG([$1][_CFLAGS], [cflags], [$2])
_PKG_CONFIG([$1][_LIBS], [libs], [$2])
m4_define([_PKG_TEXT], [Alternatively, you may set the environment variables $1[]_CFLAGS
and $1[]_LIBS to avoid the need to call pkg-config.
See the pkg-config man page for more details.])
if test $pkg_failed = yes; then
AC_MSG_RESULT([no])
_PKG_SHORT_ERRORS_SUPPORTED
if test $_pkg_short_errors_supported = yes; then
$1[]_PKG_ERRORS=`$PKG_CONFIG --short-errors --print-errors --cflags --libs "$2" 2>&1`
else
$1[]_PKG_ERRORS=`$PKG_CONFIG --print-errors --cflags --libs "$2" 2>&1`
fi
# Put the nasty error message in config.log where it belongs
echo "$$1[]_PKG_ERRORS" >&AS_MESSAGE_LOG_FD
m4_default([$4], [AC_MSG_ERROR(
[Package requirements ($2) were not met:
$$1_PKG_ERRORS
Consider adjusting the PKG_CONFIG_PATH environment variable if you
installed software in a non-standard prefix.
_PKG_TEXT])[]dnl
])
elif test $pkg_failed = untried; then
AC_MSG_RESULT([no])
m4_default([$4], [AC_MSG_FAILURE(
[The pkg-config script could not be found or is too old. Make sure it
is in your PATH or set the PKG_CONFIG environment variable to the full
path to pkg-config.
_PKG_TEXT
To get pkg-config, see .])[]dnl
])
else
$1[]_CFLAGS=$pkg_cv_[]$1[]_CFLAGS
$1[]_LIBS=$pkg_cv_[]$1[]_LIBS
AC_MSG_RESULT([yes])
$3
fi[]dnl
])dnl PKG_CHECK_MODULES
dnl PKG_CHECK_MODULES_STATIC(VARIABLE-PREFIX, MODULES, [ACTION-IF-FOUND],
dnl [ACTION-IF-NOT-FOUND])
dnl ---------------------------------------------------------------------
dnl Since: 0.29
dnl
dnl Checks for existence of MODULES and gathers its build flags with
dnl static libraries enabled. Sets VARIABLE-PREFIX_CFLAGS from --cflags
dnl and VARIABLE-PREFIX_LIBS from --libs.
dnl
dnl Note that if there is a possibility the first call to
dnl PKG_CHECK_MODULES_STATIC might not happen, you should be sure to
dnl include an explicit call to PKG_PROG_PKG_CONFIG in your
dnl configure.ac.
AC_DEFUN([PKG_CHECK_MODULES_STATIC],
[AC_REQUIRE([PKG_PROG_PKG_CONFIG])dnl
_save_PKG_CONFIG=$PKG_CONFIG
PKG_CONFIG="$PKG_CONFIG --static"
PKG_CHECK_MODULES($@)
PKG_CONFIG=$_save_PKG_CONFIG[]dnl
])dnl PKG_CHECK_MODULES_STATIC
dnl PKG_INSTALLDIR([DIRECTORY])
dnl -------------------------
dnl Since: 0.27
dnl
dnl Substitutes the variable pkgconfigdir as the location where a module
dnl should install pkg-config .pc files. By default the directory is
dnl $libdir/pkgconfig, but the default can be changed by passing
dnl DIRECTORY. The user can override through the --with-pkgconfigdir
dnl parameter.
AC_DEFUN([PKG_INSTALLDIR],
[m4_pushdef([pkg_default], [m4_default([$1], ['${libdir}/pkgconfig'])])
m4_pushdef([pkg_description],
[pkg-config installation directory @<:@]pkg_default[@:>@])
AC_ARG_WITH([pkgconfigdir],
[AS_HELP_STRING([--with-pkgconfigdir], pkg_description)],,
[with_pkgconfigdir=]pkg_default)
AC_SUBST([pkgconfigdir], [$with_pkgconfigdir])
m4_popdef([pkg_default])
m4_popdef([pkg_description])
])dnl PKG_INSTALLDIR
dnl PKG_NOARCH_INSTALLDIR([DIRECTORY])
dnl --------------------------------
dnl Since: 0.27
dnl
dnl Substitutes the variable noarch_pkgconfigdir as the location where a
dnl module should install arch-independent pkg-config .pc files. By
dnl default the directory is $datadir/pkgconfig, but the default can be
dnl changed by passing DIRECTORY. The user can override through the
dnl --with-noarch-pkgconfigdir parameter.
AC_DEFUN([PKG_NOARCH_INSTALLDIR],
[m4_pushdef([pkg_default], [m4_default([$1], ['${datadir}/pkgconfig'])])
m4_pushdef([pkg_description],
[pkg-config arch-independent installation directory @<:@]pkg_default[@:>@])
AC_ARG_WITH([noarch-pkgconfigdir],
[AS_HELP_STRING([--with-noarch-pkgconfigdir], pkg_description)],,
[with_noarch_pkgconfigdir=]pkg_default)
AC_SUBST([noarch_pkgconfigdir], [$with_noarch_pkgconfigdir])
m4_popdef([pkg_default])
m4_popdef([pkg_description])
])dnl PKG_NOARCH_INSTALLDIR
dnl PKG_CHECK_VAR(VARIABLE, MODULE, CONFIG-VARIABLE,
dnl [ACTION-IF-FOUND], [ACTION-IF-NOT-FOUND])
dnl -------------------------------------------
dnl Since: 0.28
dnl
dnl Retrieves the value of the pkg-config variable for the given module.
AC_DEFUN([PKG_CHECK_VAR],
[AC_REQUIRE([PKG_PROG_PKG_CONFIG])dnl
AC_ARG_VAR([$1], [value of $3 for $2, overriding pkg-config])dnl
_PKG_CONFIG([$1], [variable="][$3]["], [$2])
AS_VAR_COPY([$1], [pkg_cv_][$1])
AS_VAR_IF([$1], [""], [$5], [$4])dnl
])dnl PKG_CHECK_VAR
# Copyright (C) 2002-2014 Free Software Foundation, Inc.
#
# This file is free software; the Free Software Foundation
# gives unlimited permission to copy and/or distribute it,
# with or without modifications, as long as this notice is preserved.
# AM_AUTOMAKE_VERSION(VERSION)
# ----------------------------
# Automake X.Y traces this macro to ensure aclocal.m4 has been
# generated from the m4 files accompanying Automake X.Y.
# (This private macro should not be called outside this file.)
AC_DEFUN([AM_AUTOMAKE_VERSION],
[am__api_version='1.15'
dnl Some users find AM_AUTOMAKE_VERSION and mistake it for a way to
dnl require some minimum version. Point them to the right macro.
m4_if([$1], [1.15], [],
[AC_FATAL([Do not call $0, use AM_INIT_AUTOMAKE([$1]).])])dnl
])
# _AM_AUTOCONF_VERSION(VERSION)
# -----------------------------
# aclocal traces this macro to find the Autoconf version.
# This is a private macro too. Using m4_define simplifies
# the logic in aclocal, which can simply ignore this definition.
m4_define([_AM_AUTOCONF_VERSION], [])
# AM_SET_CURRENT_AUTOMAKE_VERSION
# -------------------------------
# Call AM_AUTOMAKE_VERSION and AM_AUTOMAKE_VERSION so they can be traced.
# This function is AC_REQUIREd by AM_INIT_AUTOMAKE.
AC_DEFUN([AM_SET_CURRENT_AUTOMAKE_VERSION],
[AM_AUTOMAKE_VERSION([1.15])dnl
m4_ifndef([AC_AUTOCONF_VERSION],
[m4_copy([m4_PACKAGE_VERSION], [AC_AUTOCONF_VERSION])])dnl
_AM_AUTOCONF_VERSION(m4_defn([AC_AUTOCONF_VERSION]))])
# Figure out how to run the assembler. -*- Autoconf -*-
# Copyright (C) 2001-2014 Free Software Foundation, Inc.
#
# This file is free software; the Free Software Foundation
# gives unlimited permission to copy and/or distribute it,
# with or without modifications, as long as this notice is preserved.
# AM_PROG_AS
# ----------
AC_DEFUN([AM_PROG_AS],
[# By default we simply use the C compiler to build assembly code.
AC_REQUIRE([AC_PROG_CC])
test "${CCAS+set}" = set || CCAS=$CC
test "${CCASFLAGS+set}" = set || CCASFLAGS=$CFLAGS
AC_ARG_VAR([CCAS], [assembler compiler command (defaults to CC)])
AC_ARG_VAR([CCASFLAGS], [assembler compiler flags (defaults to CFLAGS)])
_AM_IF_OPTION([no-dependencies],, [_AM_DEPENDENCIES([CCAS])])dnl
])
# AM_AUX_DIR_EXPAND -*- Autoconf -*-
# Copyright (C) 2001-2014 Free Software Foundation, Inc.
#
# This file is free software; the Free Software Foundation
# gives unlimited permission to copy and/or distribute it,
# with or without modifications, as long as this notice is preserved.
# For projects using AC_CONFIG_AUX_DIR([foo]), Autoconf sets
# $ac_aux_dir to '$srcdir/foo'. In other projects, it is set to
# '$srcdir', '$srcdir/..', or '$srcdir/../..'.
#
# Of course, Automake must honor this variable whenever it calls a
# tool from the auxiliary directory. The problem is that $srcdir (and
# therefore $ac_aux_dir as well) can be either absolute or relative,
# depending on how configure is run. This is pretty annoying, since
# it makes $ac_aux_dir quite unusable in subdirectories: in the top
# source directory, any form will work fine, but in subdirectories a
# relative path needs to be adjusted first.
#
# $ac_aux_dir/missing
# fails when called from a subdirectory if $ac_aux_dir is relative
# $top_srcdir/$ac_aux_dir/missing
# fails if $ac_aux_dir is absolute,
# fails when called from a subdirectory in a VPATH build with
# a relative $ac_aux_dir
#
# The reason of the latter failure is that $top_srcdir and $ac_aux_dir
# are both prefixed by $srcdir. In an in-source build this is usually
# harmless because $srcdir is '.', but things will broke when you
# start a VPATH build or use an absolute $srcdir.
#
# So we could use something similar to $top_srcdir/$ac_aux_dir/missing,
# iff we strip the leading $srcdir from $ac_aux_dir. That would be:
# am_aux_dir='\$(top_srcdir)/'`expr "$ac_aux_dir" : "$srcdir//*\(.*\)"`
# and then we would define $MISSING as
# MISSING="\${SHELL} $am_aux_dir/missing"
# This will work as long as MISSING is not called from configure, because
# unfortunately $(top_srcdir) has no meaning in configure.
# However there are other variables, like CC, which are often used in
# configure, and could therefore not use this "fixed" $ac_aux_dir.
#
# Another solution, used here, is to always expand $ac_aux_dir to an
# absolute PATH. The drawback is that using absolute paths prevent a
# configured tree to be moved without reconfiguration.
AC_DEFUN([AM_AUX_DIR_EXPAND],
[AC_REQUIRE([AC_CONFIG_AUX_DIR_DEFAULT])dnl
# Expand $ac_aux_dir to an absolute path.
am_aux_dir=`cd "$ac_aux_dir" && pwd`
])
# AM_CONDITIONAL -*- Autoconf -*-
# Copyright (C) 1997-2014 Free Software Foundation, Inc.
#
# This file is free software; the Free Software Foundation
# gives unlimited permission to copy and/or distribute it,
# with or without modifications, as long as this notice is preserved.
# AM_CONDITIONAL(NAME, SHELL-CONDITION)
# -------------------------------------
# Define a conditional.
AC_DEFUN([AM_CONDITIONAL],
[AC_PREREQ([2.52])dnl
m4_if([$1], [TRUE], [AC_FATAL([$0: invalid condition: $1])],
[$1], [FALSE], [AC_FATAL([$0: invalid condition: $1])])dnl
AC_SUBST([$1_TRUE])dnl
AC_SUBST([$1_FALSE])dnl
_AM_SUBST_NOTMAKE([$1_TRUE])dnl
_AM_SUBST_NOTMAKE([$1_FALSE])dnl
m4_define([_AM_COND_VALUE_$1], [$2])dnl
if $2; then
$1_TRUE=
$1_FALSE='#'
else
$1_TRUE='#'
$1_FALSE=
fi
AC_CONFIG_COMMANDS_PRE(
[if test -z "${$1_TRUE}" && test -z "${$1_FALSE}"; then
AC_MSG_ERROR([[conditional "$1" was never defined.
Usually this means the macro was only invoked conditionally.]])
fi])])
# Copyright (C) 1999-2014 Free Software Foundation, Inc.
#
# This file is free software; the Free Software Foundation
# gives unlimited permission to copy and/or distribute it,
# with or without modifications, as long as this notice is preserved.
# There are a few dirty hacks below to avoid letting 'AC_PROG_CC' be
# written in clear, in which case automake, when reading aclocal.m4,
# will think it sees a *use*, and therefore will trigger all it's
# C support machinery. Also note that it means that autoscan, seeing
# CC etc. in the Makefile, will ask for an AC_PROG_CC use...
# _AM_DEPENDENCIES(NAME)
# ----------------------
# See how the compiler implements dependency checking.
# NAME is "CC", "CXX", "OBJC", "OBJCXX", "UPC", or "GJC".
# We try a few techniques and use that to set a single cache variable.
#
# We don't AC_REQUIRE the corresponding AC_PROG_CC since the latter was
# modified to invoke _AM_DEPENDENCIES(CC); we would have a circular
# dependency, and given that the user is not expected to run this macro,
# just rely on AC_PROG_CC.
AC_DEFUN([_AM_DEPENDENCIES],
[AC_REQUIRE([AM_SET_DEPDIR])dnl
AC_REQUIRE([AM_OUTPUT_DEPENDENCY_COMMANDS])dnl
AC_REQUIRE([AM_MAKE_INCLUDE])dnl
AC_REQUIRE([AM_DEP_TRACK])dnl
m4_if([$1], [CC], [depcc="$CC" am_compiler_list=],
[$1], [CXX], [depcc="$CXX" am_compiler_list=],
[$1], [OBJC], [depcc="$OBJC" am_compiler_list='gcc3 gcc'],
[$1], [OBJCXX], [depcc="$OBJCXX" am_compiler_list='gcc3 gcc'],
[$1], [UPC], [depcc="$UPC" am_compiler_list=],
[$1], [GCJ], [depcc="$GCJ" am_compiler_list='gcc3 gcc'],
[depcc="$$1" am_compiler_list=])
AC_CACHE_CHECK([dependency style of $depcc],
[am_cv_$1_dependencies_compiler_type],
[if test -z "$AMDEP_TRUE" && test -f "$am_depcomp"; then
# We make a subdir and do the tests there. Otherwise we can end up
# making bogus files that we don't know about and never remove. For
# instance it was reported that on HP-UX the gcc test will end up
# making a dummy file named 'D' -- because '-MD' means "put the output
# in D".
rm -rf conftest.dir
mkdir conftest.dir
# Copy depcomp to subdir because otherwise we won't find it if we're
# using a relative directory.
cp "$am_depcomp" conftest.dir
cd conftest.dir
# We will build objects and dependencies in a subdirectory because
# it helps to detect inapplicable dependency modes. For instance
# both Tru64's cc and ICC support -MD to output dependencies as a
# side effect of compilation, but ICC will put the dependencies in
# the current directory while Tru64 will put them in the object
# directory.
mkdir sub
am_cv_$1_dependencies_compiler_type=none
if test "$am_compiler_list" = ""; then
am_compiler_list=`sed -n ['s/^#*\([a-zA-Z0-9]*\))$/\1/p'] < ./depcomp`
fi
am__universal=false
m4_case([$1], [CC],
[case " $depcc " in #(
*\ -arch\ *\ -arch\ *) am__universal=true ;;
esac],
[CXX],
[case " $depcc " in #(
*\ -arch\ *\ -arch\ *) am__universal=true ;;
esac])
for depmode in $am_compiler_list; do
# Setup a source with many dependencies, because some compilers
# like to wrap large dependency lists on column 80 (with \), and
# we should not choose a depcomp mode which is confused by this.
#
# We need to recreate these files for each test, as the compiler may
# overwrite some of them when testing with obscure command lines.
# This happens at least with the AIX C compiler.
: > sub/conftest.c
for i in 1 2 3 4 5 6; do
echo '#include "conftst'$i'.h"' >> sub/conftest.c
# Using ": > sub/conftst$i.h" creates only sub/conftst1.h with
# Solaris 10 /bin/sh.
echo '/* dummy */' > sub/conftst$i.h
done
echo "${am__include} ${am__quote}sub/conftest.Po${am__quote}" > confmf
# We check with '-c' and '-o' for the sake of the "dashmstdout"
# mode. It turns out that the SunPro C++ compiler does not properly
# handle '-M -o', and we need to detect this. Also, some Intel
# versions had trouble with output in subdirs.
am__obj=sub/conftest.${OBJEXT-o}
am__minus_obj="-o $am__obj"
case $depmode in
gcc)
# This depmode causes a compiler race in universal mode.
test "$am__universal" = false || continue
;;
nosideeffect)
# After this tag, mechanisms are not by side-effect, so they'll
# only be used when explicitly requested.
if test "x$enable_dependency_tracking" = xyes; then
continue
else
break
fi
;;
msvc7 | msvc7msys | msvisualcpp | msvcmsys)
# This compiler won't grok '-c -o', but also, the minuso test has
# not run yet. These depmodes are late enough in the game, and
# so weak that their functioning should not be impacted.
am__obj=conftest.${OBJEXT-o}
am__minus_obj=
;;
none) break ;;
esac
if depmode=$depmode \
source=sub/conftest.c object=$am__obj \
depfile=sub/conftest.Po tmpdepfile=sub/conftest.TPo \
$SHELL ./depcomp $depcc -c $am__minus_obj sub/conftest.c \
>/dev/null 2>conftest.err &&
grep sub/conftst1.h sub/conftest.Po > /dev/null 2>&1 &&
grep sub/conftst6.h sub/conftest.Po > /dev/null 2>&1 &&
grep $am__obj sub/conftest.Po > /dev/null 2>&1 &&
${MAKE-make} -s -f confmf > /dev/null 2>&1; then
# icc doesn't choke on unknown options, it will just issue warnings
# or remarks (even with -Werror). So we grep stderr for any message
# that says an option was ignored or not supported.
# When given -MP, icc 7.0 and 7.1 complain thusly:
# icc: Command line warning: ignoring option '-M'; no argument required
# The diagnosis changed in icc 8.0:
# icc: Command line remark: option '-MP' not supported
if (grep 'ignoring option' conftest.err ||
grep 'not supported' conftest.err) >/dev/null 2>&1; then :; else
am_cv_$1_dependencies_compiler_type=$depmode
break
fi
fi
done
cd ..
rm -rf conftest.dir
else
am_cv_$1_dependencies_compiler_type=none
fi
])
AC_SUBST([$1DEPMODE], [depmode=$am_cv_$1_dependencies_compiler_type])
AM_CONDITIONAL([am__fastdep$1], [
test "x$enable_dependency_tracking" != xno \
&& test "$am_cv_$1_dependencies_compiler_type" = gcc3])
])
# AM_SET_DEPDIR
# -------------
# Choose a directory name for dependency files.
# This macro is AC_REQUIREd in _AM_DEPENDENCIES.
AC_DEFUN([AM_SET_DEPDIR],
[AC_REQUIRE([AM_SET_LEADING_DOT])dnl
AC_SUBST([DEPDIR], ["${am__leading_dot}deps"])dnl
])
# AM_DEP_TRACK
# ------------
AC_DEFUN([AM_DEP_TRACK],
[AC_ARG_ENABLE([dependency-tracking], [dnl
AS_HELP_STRING(
[--enable-dependency-tracking],
[do not reject slow dependency extractors])
AS_HELP_STRING(
[--disable-dependency-tracking],
[speeds up one-time build])])
if test "x$enable_dependency_tracking" != xno; then
am_depcomp="$ac_aux_dir/depcomp"
AMDEPBACKSLASH='\'
am__nodep='_no'
fi
AM_CONDITIONAL([AMDEP], [test "x$enable_dependency_tracking" != xno])
AC_SUBST([AMDEPBACKSLASH])dnl
_AM_SUBST_NOTMAKE([AMDEPBACKSLASH])dnl
AC_SUBST([am__nodep])dnl
_AM_SUBST_NOTMAKE([am__nodep])dnl
])
# Generate code to set up dependency tracking. -*- Autoconf -*-
# Copyright (C) 1999-2014 Free Software Foundation, Inc.
#
# This file is free software; the Free Software Foundation
# gives unlimited permission to copy and/or distribute it,
# with or without modifications, as long as this notice is preserved.
# _AM_OUTPUT_DEPENDENCY_COMMANDS
# ------------------------------
AC_DEFUN([_AM_OUTPUT_DEPENDENCY_COMMANDS],
[{
# Older Autoconf quotes --file arguments for eval, but not when files
# are listed without --file. Let's play safe and only enable the eval
# if we detect the quoting.
case $CONFIG_FILES in
*\'*) eval set x "$CONFIG_FILES" ;;
*) set x $CONFIG_FILES ;;
esac
shift
for mf
do
# Strip MF so we end up with the name of the file.
mf=`echo "$mf" | sed -e 's/:.*$//'`
# Check whether this is an Automake generated Makefile or not.
# We used to match only the files named 'Makefile.in', but
# some people rename them; so instead we look at the file content.
# Grep'ing the first line is not enough: some people post-process
# each Makefile.in and add a new line on top of each file to say so.
# Grep'ing the whole file is not good either: AIX grep has a line
# limit of 2048, but all sed's we know have understand at least 4000.
if sed -n 's,^#.*generated by automake.*,X,p' "$mf" | grep X >/dev/null 2>&1; then
dirpart=`AS_DIRNAME("$mf")`
else
continue
fi
# Extract the definition of DEPDIR, am__include, and am__quote
# from the Makefile without running 'make'.
DEPDIR=`sed -n 's/^DEPDIR = //p' < "$mf"`
test -z "$DEPDIR" && continue
am__include=`sed -n 's/^am__include = //p' < "$mf"`
test -z "$am__include" && continue
am__quote=`sed -n 's/^am__quote = //p' < "$mf"`
# Find all dependency output files, they are included files with
# $(DEPDIR) in their names. We invoke sed twice because it is the
# simplest approach to changing $(DEPDIR) to its actual value in the
# expansion.
for file in `sed -n "
s/^$am__include $am__quote\(.*(DEPDIR).*\)$am__quote"'$/\1/p' <"$mf" | \
sed -e 's/\$(DEPDIR)/'"$DEPDIR"'/g'`; do
# Make sure the directory exists.
test -f "$dirpart/$file" && continue
fdir=`AS_DIRNAME(["$file"])`
AS_MKDIR_P([$dirpart/$fdir])
# echo "creating $dirpart/$file"
echo '# dummy' > "$dirpart/$file"
done
done
}
])# _AM_OUTPUT_DEPENDENCY_COMMANDS
# AM_OUTPUT_DEPENDENCY_COMMANDS
# -----------------------------
# This macro should only be invoked once -- use via AC_REQUIRE.
#
# This code is only required when automatic dependency tracking
# is enabled. FIXME. This creates each '.P' file that we will
# need in order to bootstrap the dependency handling code.
AC_DEFUN([AM_OUTPUT_DEPENDENCY_COMMANDS],
[AC_CONFIG_COMMANDS([depfiles],
[test x"$AMDEP_TRUE" != x"" || _AM_OUTPUT_DEPENDENCY_COMMANDS],
[AMDEP_TRUE="$AMDEP_TRUE" ac_aux_dir="$ac_aux_dir"])
])
# Do all the work for Automake. -*- Autoconf -*-
# Copyright (C) 1996-2014 Free Software Foundation, Inc.
#
# This file is free software; the Free Software Foundation
# gives unlimited permission to copy and/or distribute it,
# with or without modifications, as long as this notice is preserved.
# This macro actually does too much. Some checks are only needed if
# your package does certain things. But this isn't really a big deal.
dnl Redefine AC_PROG_CC to automatically invoke _AM_PROG_CC_C_O.
m4_define([AC_PROG_CC],
m4_defn([AC_PROG_CC])
[_AM_PROG_CC_C_O
])
# AM_INIT_AUTOMAKE(PACKAGE, VERSION, [NO-DEFINE])
# AM_INIT_AUTOMAKE([OPTIONS])
# -----------------------------------------------
# The call with PACKAGE and VERSION arguments is the old style
# call (pre autoconf-2.50), which is being phased out. PACKAGE
# and VERSION should now be passed to AC_INIT and removed from
# the call to AM_INIT_AUTOMAKE.
# We support both call styles for the transition. After
# the next Automake release, Autoconf can make the AC_INIT
# arguments mandatory, and then we can depend on a new Autoconf
# release and drop the old call support.
AC_DEFUN([AM_INIT_AUTOMAKE],
[AC_PREREQ([2.65])dnl
dnl Autoconf wants to disallow AM_ names. We explicitly allow
dnl the ones we care about.
m4_pattern_allow([^AM_[A-Z]+FLAGS$])dnl
AC_REQUIRE([AM_SET_CURRENT_AUTOMAKE_VERSION])dnl
AC_REQUIRE([AC_PROG_INSTALL])dnl
if test "`cd $srcdir && pwd`" != "`pwd`"; then
# Use -I$(srcdir) only when $(srcdir) != ., so that make's output
# is not polluted with repeated "-I."
AC_SUBST([am__isrc], [' -I$(srcdir)'])_AM_SUBST_NOTMAKE([am__isrc])dnl
# test to see if srcdir already configured
if test -f $srcdir/config.status; then
AC_MSG_ERROR([source directory already configured; run "make distclean" there first])
fi
fi
# test whether we have cygpath
if test -z "$CYGPATH_W"; then
if (cygpath --version) >/dev/null 2>/dev/null; then
CYGPATH_W='cygpath -w'
else
CYGPATH_W=echo
fi
fi
AC_SUBST([CYGPATH_W])
# Define the identity of the package.
dnl Distinguish between old-style and new-style calls.
m4_ifval([$2],
[AC_DIAGNOSE([obsolete],
[$0: two- and three-arguments forms are deprecated.])
m4_ifval([$3], [_AM_SET_OPTION([no-define])])dnl
AC_SUBST([PACKAGE], [$1])dnl
AC_SUBST([VERSION], [$2])],
[_AM_SET_OPTIONS([$1])dnl
dnl Diagnose old-style AC_INIT with new-style AM_AUTOMAKE_INIT.
m4_if(
m4_ifdef([AC_PACKAGE_NAME], [ok]):m4_ifdef([AC_PACKAGE_VERSION], [ok]),
[ok:ok],,
[m4_fatal([AC_INIT should be called with package and version arguments])])dnl
AC_SUBST([PACKAGE], ['AC_PACKAGE_TARNAME'])dnl
AC_SUBST([VERSION], ['AC_PACKAGE_VERSION'])])dnl
_AM_IF_OPTION([no-define],,
[AC_DEFINE_UNQUOTED([PACKAGE], ["$PACKAGE"], [Name of package])
AC_DEFINE_UNQUOTED([VERSION], ["$VERSION"], [Version number of package])])dnl
# Some tools Automake needs.
AC_REQUIRE([AM_SANITY_CHECK])dnl
AC_REQUIRE([AC_ARG_PROGRAM])dnl
AM_MISSING_PROG([ACLOCAL], [aclocal-${am__api_version}])
AM_MISSING_PROG([AUTOCONF], [autoconf])
AM_MISSING_PROG([AUTOMAKE], [automake-${am__api_version}])
AM_MISSING_PROG([AUTOHEADER], [autoheader])
AM_MISSING_PROG([MAKEINFO], [makeinfo])
AC_REQUIRE([AM_PROG_INSTALL_SH])dnl
AC_REQUIRE([AM_PROG_INSTALL_STRIP])dnl
AC_REQUIRE([AC_PROG_MKDIR_P])dnl
# For better backward compatibility. To be removed once Automake 1.9.x
# dies out for good. For more background, see:
#
#
AC_SUBST([mkdir_p], ['$(MKDIR_P)'])
# We need awk for the "check" target (and possibly the TAP driver). The
# system "awk" is bad on some platforms.
AC_REQUIRE([AC_PROG_AWK])dnl
AC_REQUIRE([AC_PROG_MAKE_SET])dnl
AC_REQUIRE([AM_SET_LEADING_DOT])dnl
_AM_IF_OPTION([tar-ustar], [_AM_PROG_TAR([ustar])],
[_AM_IF_OPTION([tar-pax], [_AM_PROG_TAR([pax])],
[_AM_PROG_TAR([v7])])])
_AM_IF_OPTION([no-dependencies],,
[AC_PROVIDE_IFELSE([AC_PROG_CC],
[_AM_DEPENDENCIES([CC])],
[m4_define([AC_PROG_CC],
m4_defn([AC_PROG_CC])[_AM_DEPENDENCIES([CC])])])dnl
AC_PROVIDE_IFELSE([AC_PROG_CXX],
[_AM_DEPENDENCIES([CXX])],
[m4_define([AC_PROG_CXX],
m4_defn([AC_PROG_CXX])[_AM_DEPENDENCIES([CXX])])])dnl
AC_PROVIDE_IFELSE([AC_PROG_OBJC],
[_AM_DEPENDENCIES([OBJC])],
[m4_define([AC_PROG_OBJC],
m4_defn([AC_PROG_OBJC])[_AM_DEPENDENCIES([OBJC])])])dnl
AC_PROVIDE_IFELSE([AC_PROG_OBJCXX],
[_AM_DEPENDENCIES([OBJCXX])],
[m4_define([AC_PROG_OBJCXX],
m4_defn([AC_PROG_OBJCXX])[_AM_DEPENDENCIES([OBJCXX])])])dnl
])
AC_REQUIRE([AM_SILENT_RULES])dnl
dnl The testsuite driver may need to know about EXEEXT, so add the
dnl 'am__EXEEXT' conditional if _AM_COMPILER_EXEEXT was seen. This
dnl macro is hooked onto _AC_COMPILER_EXEEXT early, see below.
AC_CONFIG_COMMANDS_PRE(dnl
[m4_provide_if([_AM_COMPILER_EXEEXT],
[AM_CONDITIONAL([am__EXEEXT], [test -n "$EXEEXT"])])])dnl
# POSIX will say in a future version that running "rm -f" with no argument
# is OK; and we want to be able to make that assumption in our Makefile
# recipes. So use an aggressive probe to check that the usage we want is
# actually supported "in the wild" to an acceptable degree.
# See automake bug#10828.
# To make any issue more visible, cause the running configure to be aborted
# by default if the 'rm' program in use doesn't match our expectations; the
# user can still override this though.
if rm -f && rm -fr && rm -rf; then : OK; else
cat >&2 <<'END'
Oops!
Your 'rm' program seems unable to run without file operands specified
on the command line, even when the '-f' option is present. This is contrary
to the behaviour of most rm programs out there, and not conforming with
the upcoming POSIX standard:
Please tell bug-automake@gnu.org about your system, including the value
of your $PATH and any error possibly output before this message. This
can help us improve future automake versions.
END
if test x"$ACCEPT_INFERIOR_RM_PROGRAM" = x"yes"; then
echo 'Configuration will proceed anyway, since you have set the' >&2
echo 'ACCEPT_INFERIOR_RM_PROGRAM variable to "yes"' >&2
echo >&2
else
cat >&2 <<'END'
Aborting the configuration process, to ensure you take notice of the issue.
You can download and install GNU coreutils to get an 'rm' implementation
that behaves properly: .
If you want to complete the configuration process using your problematic
'rm' anyway, export the environment variable ACCEPT_INFERIOR_RM_PROGRAM
to "yes", and re-run configure.
END
AC_MSG_ERROR([Your 'rm' program is bad, sorry.])
fi
fi
dnl The trailing newline in this macro's definition is deliberate, for
dnl backward compatibility and to allow trailing 'dnl'-style comments
dnl after the AM_INIT_AUTOMAKE invocation. See automake bug#16841.
])
dnl Hook into '_AC_COMPILER_EXEEXT' early to learn its expansion. Do not
dnl add the conditional right here, as _AC_COMPILER_EXEEXT may be further
dnl mangled by Autoconf and run in a shell conditional statement.
m4_define([_AC_COMPILER_EXEEXT],
m4_defn([_AC_COMPILER_EXEEXT])[m4_provide([_AM_COMPILER_EXEEXT])])
# When config.status generates a header, we must update the stamp-h file.
# This file resides in the same directory as the config header
# that is generated. The stamp files are numbered to have different names.
# Autoconf calls _AC_AM_CONFIG_HEADER_HOOK (when defined) in the
# loop where config.status creates the headers, so we can generate
# our stamp files there.
AC_DEFUN([_AC_AM_CONFIG_HEADER_HOOK],
[# Compute $1's index in $config_headers.
_am_arg=$1
_am_stamp_count=1
for _am_header in $config_headers :; do
case $_am_header in
$_am_arg | $_am_arg:* )
break ;;
* )
_am_stamp_count=`expr $_am_stamp_count + 1` ;;
esac
done
echo "timestamp for $_am_arg" >`AS_DIRNAME(["$_am_arg"])`/stamp-h[]$_am_stamp_count])
# Copyright (C) 2001-2014 Free Software Foundation, Inc.
#
# This file is free software; the Free Software Foundation
# gives unlimited permission to copy and/or distribute it,
# with or without modifications, as long as this notice is preserved.
# AM_PROG_INSTALL_SH
# ------------------
# Define $install_sh.
AC_DEFUN([AM_PROG_INSTALL_SH],
[AC_REQUIRE([AM_AUX_DIR_EXPAND])dnl
if test x"${install_sh+set}" != xset; then
case $am_aux_dir in
*\ * | *\ *)
install_sh="\${SHELL} '$am_aux_dir/install-sh'" ;;
*)
install_sh="\${SHELL} $am_aux_dir/install-sh"
esac
fi
AC_SUBST([install_sh])])
# Copyright (C) 2003-2014 Free Software Foundation, Inc.
#
# This file is free software; the Free Software Foundation
# gives unlimited permission to copy and/or distribute it,
# with or without modifications, as long as this notice is preserved.
# Check whether the underlying file-system supports filenames
# with a leading dot. For instance MS-DOS doesn't.
AC_DEFUN([AM_SET_LEADING_DOT],
[rm -rf .tst 2>/dev/null
mkdir .tst 2>/dev/null
if test -d .tst; then
am__leading_dot=.
else
am__leading_dot=_
fi
rmdir .tst 2>/dev/null
AC_SUBST([am__leading_dot])])
# Add --enable-maintainer-mode option to configure. -*- Autoconf -*-
# From Jim Meyering
# Copyright (C) 1996-2014 Free Software Foundation, Inc.
#
# This file is free software; the Free Software Foundation
# gives unlimited permission to copy and/or distribute it,
# with or without modifications, as long as this notice is preserved.
# AM_MAINTAINER_MODE([DEFAULT-MODE])
# ----------------------------------
# Control maintainer-specific portions of Makefiles.
# Default is to disable them, unless 'enable' is passed literally.
# For symmetry, 'disable' may be passed as well. Anyway, the user
# can override the default with the --enable/--disable switch.
AC_DEFUN([AM_MAINTAINER_MODE],
[m4_case(m4_default([$1], [disable]),
[enable], [m4_define([am_maintainer_other], [disable])],
[disable], [m4_define([am_maintainer_other], [enable])],
[m4_define([am_maintainer_other], [enable])
m4_warn([syntax], [unexpected argument to AM@&t@_MAINTAINER_MODE: $1])])
AC_MSG_CHECKING([whether to enable maintainer-specific portions of Makefiles])
dnl maintainer-mode's default is 'disable' unless 'enable' is passed
AC_ARG_ENABLE([maintainer-mode],
[AS_HELP_STRING([--]am_maintainer_other[-maintainer-mode],
am_maintainer_other[ make rules and dependencies not useful
(and sometimes confusing) to the casual installer])],
[USE_MAINTAINER_MODE=$enableval],
[USE_MAINTAINER_MODE=]m4_if(am_maintainer_other, [enable], [no], [yes]))
AC_MSG_RESULT([$USE_MAINTAINER_MODE])
AM_CONDITIONAL([MAINTAINER_MODE], [test $USE_MAINTAINER_MODE = yes])
MAINT=$MAINTAINER_MODE_TRUE
AC_SUBST([MAINT])dnl
]
)
# Check to see how 'make' treats includes. -*- Autoconf -*-
# Copyright (C) 2001-2014 Free Software Foundation, Inc.
#
# This file is free software; the Free Software Foundation
# gives unlimited permission to copy and/or distribute it,
# with or without modifications, as long as this notice is preserved.
# AM_MAKE_INCLUDE()
# -----------------
# Check to see how make treats includes.
AC_DEFUN([AM_MAKE_INCLUDE],
[am_make=${MAKE-make}
cat > confinc << 'END'
am__doit:
@echo this is the am__doit target
.PHONY: am__doit
END
# If we don't find an include directive, just comment out the code.
AC_MSG_CHECKING([for style of include used by $am_make])
am__include="#"
am__quote=
_am_result=none
# First try GNU make style include.
echo "include confinc" > confmf
# Ignore all kinds of additional output from 'make'.
case `$am_make -s -f confmf 2> /dev/null` in #(
*the\ am__doit\ target*)
am__include=include
am__quote=
_am_result=GNU
;;
esac
# Now try BSD make style include.
if test "$am__include" = "#"; then
echo '.include "confinc"' > confmf
case `$am_make -s -f confmf 2> /dev/null` in #(
*the\ am__doit\ target*)
am__include=.include
am__quote="\""
_am_result=BSD
;;
esac
fi
AC_SUBST([am__include])
AC_SUBST([am__quote])
AC_MSG_RESULT([$_am_result])
rm -f confinc confmf
])
# Fake the existence of programs that GNU maintainers use. -*- Autoconf -*-
# Copyright (C) 1997-2014 Free Software Foundation, Inc.
#
# This file is free software; the Free Software Foundation
# gives unlimited permission to copy and/or distribute it,
# with or without modifications, as long as this notice is preserved.
# AM_MISSING_PROG(NAME, PROGRAM)
# ------------------------------
AC_DEFUN([AM_MISSING_PROG],
[AC_REQUIRE([AM_MISSING_HAS_RUN])
$1=${$1-"${am_missing_run}$2"}
AC_SUBST($1)])
# AM_MISSING_HAS_RUN
# ------------------
# Define MISSING if not defined so far and test if it is modern enough.
# If it is, set am_missing_run to use it, otherwise, to nothing.
AC_DEFUN([AM_MISSING_HAS_RUN],
[AC_REQUIRE([AM_AUX_DIR_EXPAND])dnl
AC_REQUIRE_AUX_FILE([missing])dnl
if test x"${MISSING+set}" != xset; then
case $am_aux_dir in
*\ * | *\ *)
MISSING="\${SHELL} \"$am_aux_dir/missing\"" ;;
*)
MISSING="\${SHELL} $am_aux_dir/missing" ;;
esac
fi
# Use eval to expand $SHELL
if eval "$MISSING --is-lightweight"; then
am_missing_run="$MISSING "
else
am_missing_run=
AC_MSG_WARN(['missing' script is too old or missing])
fi
])
# Helper functions for option handling. -*- Autoconf -*-
# Copyright (C) 2001-2014 Free Software Foundation, Inc.
#
# This file is free software; the Free Software Foundation
# gives unlimited permission to copy and/or distribute it,
# with or without modifications, as long as this notice is preserved.
# _AM_MANGLE_OPTION(NAME)
# -----------------------
AC_DEFUN([_AM_MANGLE_OPTION],
[[_AM_OPTION_]m4_bpatsubst($1, [[^a-zA-Z0-9_]], [_])])
# _AM_SET_OPTION(NAME)
# --------------------
# Set option NAME. Presently that only means defining a flag for this option.
AC_DEFUN([_AM_SET_OPTION],
[m4_define(_AM_MANGLE_OPTION([$1]), [1])])
# _AM_SET_OPTIONS(OPTIONS)
# ------------------------
# OPTIONS is a space-separated list of Automake options.
AC_DEFUN([_AM_SET_OPTIONS],
[m4_foreach_w([_AM_Option], [$1], [_AM_SET_OPTION(_AM_Option)])])
# _AM_IF_OPTION(OPTION, IF-SET, [IF-NOT-SET])
# -------------------------------------------
# Execute IF-SET if OPTION is set, IF-NOT-SET otherwise.
AC_DEFUN([_AM_IF_OPTION],
[m4_ifset(_AM_MANGLE_OPTION([$1]), [$2], [$3])])
# Copyright (C) 1999-2014 Free Software Foundation, Inc.
#
# This file is free software; the Free Software Foundation
# gives unlimited permission to copy and/or distribute it,
# with or without modifications, as long as this notice is preserved.
# _AM_PROG_CC_C_O
# ---------------
# Like AC_PROG_CC_C_O, but changed for automake. We rewrite AC_PROG_CC
# to automatically call this.
AC_DEFUN([_AM_PROG_CC_C_O],
[AC_REQUIRE([AM_AUX_DIR_EXPAND])dnl
AC_REQUIRE_AUX_FILE([compile])dnl
AC_LANG_PUSH([C])dnl
AC_CACHE_CHECK(
[whether $CC understands -c and -o together],
[am_cv_prog_cc_c_o],
[AC_LANG_CONFTEST([AC_LANG_PROGRAM([])])
# Make sure it works both with $CC and with simple cc.
# Following AC_PROG_CC_C_O, we do the test twice because some
# compilers refuse to overwrite an existing .o file with -o,
# though they will create one.
am_cv_prog_cc_c_o=yes
for am_i in 1 2; do
if AM_RUN_LOG([$CC -c conftest.$ac_ext -o conftest2.$ac_objext]) \
&& test -f conftest2.$ac_objext; then
: OK
else
am_cv_prog_cc_c_o=no
break
fi
done
rm -f core conftest*
unset am_i])
if test "$am_cv_prog_cc_c_o" != yes; then
# Losing compiler, so override with the script.
# FIXME: It is wrong to rewrite CC.
# But if we don't then we get into trouble of one sort or another.
# A longer-term fix would be to have automake use am__CC in this case,
# and then we could set am__CC="\$(top_srcdir)/compile \$(CC)"
CC="$am_aux_dir/compile $CC"
fi
AC_LANG_POP([C])])
# For backward compatibility.
AC_DEFUN_ONCE([AM_PROG_CC_C_O], [AC_REQUIRE([AC_PROG_CC])])
# Copyright (C) 2001-2014 Free Software Foundation, Inc.
#
# This file is free software; the Free Software Foundation
# gives unlimited permission to copy and/or distribute it,
# with or without modifications, as long as this notice is preserved.
# AM_RUN_LOG(COMMAND)
# -------------------
# Run COMMAND, save the exit status in ac_status, and log it.
# (This has been adapted from Autoconf's _AC_RUN_LOG macro.)
AC_DEFUN([AM_RUN_LOG],
[{ echo "$as_me:$LINENO: $1" >&AS_MESSAGE_LOG_FD
($1) >&AS_MESSAGE_LOG_FD 2>&AS_MESSAGE_LOG_FD
ac_status=$?
echo "$as_me:$LINENO: \$? = $ac_status" >&AS_MESSAGE_LOG_FD
(exit $ac_status); }])
# Check to make sure that the build environment is sane. -*- Autoconf -*-
# Copyright (C) 1996-2014 Free Software Foundation, Inc.
#
# This file is free software; the Free Software Foundation
# gives unlimited permission to copy and/or distribute it,
# with or without modifications, as long as this notice is preserved.
# AM_SANITY_CHECK
# ---------------
AC_DEFUN([AM_SANITY_CHECK],
[AC_MSG_CHECKING([whether build environment is sane])
# Reject unsafe characters in $srcdir or the absolute working directory
# name. Accept space and tab only in the latter.
am_lf='
'
case `pwd` in
*[[\\\"\#\$\&\'\`$am_lf]]*)
AC_MSG_ERROR([unsafe absolute working directory name]);;
esac
case $srcdir in
*[[\\\"\#\$\&\'\`$am_lf\ \ ]]*)
AC_MSG_ERROR([unsafe srcdir value: '$srcdir']);;
esac
# Do 'set' in a subshell so we don't clobber the current shell's
# arguments. Must try -L first in case configure is actually a
# symlink; some systems play weird games with the mod time of symlinks
# (eg FreeBSD returns the mod time of the symlink's containing
# directory).
if (
am_has_slept=no
for am_try in 1 2; do
echo "timestamp, slept: $am_has_slept" > conftest.file
set X `ls -Lt "$srcdir/configure" conftest.file 2> /dev/null`
if test "$[*]" = "X"; then
# -L didn't work.
set X `ls -t "$srcdir/configure" conftest.file`
fi
if test "$[*]" != "X $srcdir/configure conftest.file" \
&& test "$[*]" != "X conftest.file $srcdir/configure"; then
# If neither matched, then we have a broken ls. This can happen
# if, for instance, CONFIG_SHELL is bash and it inherits a
# broken ls alias from the environment. This has actually
# happened. Such a system could not be considered "sane".
AC_MSG_ERROR([ls -t appears to fail. Make sure there is not a broken
alias in your environment])
fi
if test "$[2]" = conftest.file || test $am_try -eq 2; then
break
fi
# Just in case.
sleep 1
am_has_slept=yes
done
test "$[2]" = conftest.file
)
then
# Ok.
:
else
AC_MSG_ERROR([newly created file is older than distributed files!
Check your system clock])
fi
AC_MSG_RESULT([yes])
# If we didn't sleep, we still need to ensure time stamps of config.status and
# generated files are strictly newer.
am_sleep_pid=
if grep 'slept: no' conftest.file >/dev/null 2>&1; then
( sleep 1 ) &
am_sleep_pid=$!
fi
AC_CONFIG_COMMANDS_PRE(
[AC_MSG_CHECKING([that generated files are newer than configure])
if test -n "$am_sleep_pid"; then
# Hide warnings about reused PIDs.
wait $am_sleep_pid 2>/dev/null
fi
AC_MSG_RESULT([done])])
rm -f conftest.file
])
# Copyright (C) 2009-2014 Free Software Foundation, Inc.
#
# This file is free software; the Free Software Foundation
# gives unlimited permission to copy and/or distribute it,
# with or without modifications, as long as this notice is preserved.
# AM_SILENT_RULES([DEFAULT])
# --------------------------
# Enable less verbose build rules; with the default set to DEFAULT
# ("yes" being less verbose, "no" or empty being verbose).
AC_DEFUN([AM_SILENT_RULES],
[AC_ARG_ENABLE([silent-rules], [dnl
AS_HELP_STRING(
[--enable-silent-rules],
[less verbose build output (undo: "make V=1")])
AS_HELP_STRING(
[--disable-silent-rules],
[verbose build output (undo: "make V=0")])dnl
])
case $enable_silent_rules in @%:@ (((
yes) AM_DEFAULT_VERBOSITY=0;;
no) AM_DEFAULT_VERBOSITY=1;;
*) AM_DEFAULT_VERBOSITY=m4_if([$1], [yes], [0], [1]);;
esac
dnl
dnl A few 'make' implementations (e.g., NonStop OS and NextStep)
dnl do not support nested variable expansions.
dnl See automake bug#9928 and bug#10237.
am_make=${MAKE-make}
AC_CACHE_CHECK([whether $am_make supports nested variables],
[am_cv_make_support_nested_variables],
[if AS_ECHO([['TRUE=$(BAR$(V))
BAR0=false
BAR1=true
V=1
am__doit:
@$(TRUE)
.PHONY: am__doit']]) | $am_make -f - >/dev/null 2>&1; then
am_cv_make_support_nested_variables=yes
else
am_cv_make_support_nested_variables=no
fi])
if test $am_cv_make_support_nested_variables = yes; then
dnl Using '$V' instead of '$(V)' breaks IRIX make.
AM_V='$(V)'
AM_DEFAULT_V='$(AM_DEFAULT_VERBOSITY)'
else
AM_V=$AM_DEFAULT_VERBOSITY
AM_DEFAULT_V=$AM_DEFAULT_VERBOSITY
fi
AC_SUBST([AM_V])dnl
AM_SUBST_NOTMAKE([AM_V])dnl
AC_SUBST([AM_DEFAULT_V])dnl
AM_SUBST_NOTMAKE([AM_DEFAULT_V])dnl
AC_SUBST([AM_DEFAULT_VERBOSITY])dnl
AM_BACKSLASH='\'
AC_SUBST([AM_BACKSLASH])dnl
_AM_SUBST_NOTMAKE([AM_BACKSLASH])dnl
])
# Copyright (C) 2001-2014 Free Software Foundation, Inc.
#
# This file is free software; the Free Software Foundation
# gives unlimited permission to copy and/or distribute it,
# with or without modifications, as long as this notice is preserved.
# AM_PROG_INSTALL_STRIP
# ---------------------
# One issue with vendor 'install' (even GNU) is that you can't
# specify the program used to strip binaries. This is especially
# annoying in cross-compiling environments, where the build's strip
# is unlikely to handle the host's binaries.
# Fortunately install-sh will honor a STRIPPROG variable, so we
# always use install-sh in "make install-strip", and initialize
# STRIPPROG with the value of the STRIP variable (set by the user).
AC_DEFUN([AM_PROG_INSTALL_STRIP],
[AC_REQUIRE([AM_PROG_INSTALL_SH])dnl
# Installed binaries are usually stripped using 'strip' when the user
# run "make install-strip". However 'strip' might not be the right
# tool to use in cross-compilation environments, therefore Automake
# will honor the 'STRIP' environment variable to overrule this program.
dnl Don't test for $cross_compiling = yes, because it might be 'maybe'.
if test "$cross_compiling" != no; then
AC_CHECK_TOOL([STRIP], [strip], :)
fi
INSTALL_STRIP_PROGRAM="\$(install_sh) -c -s"
AC_SUBST([INSTALL_STRIP_PROGRAM])])
# Copyright (C) 2006-2014 Free Software Foundation, Inc.
#
# This file is free software; the Free Software Foundation
# gives unlimited permission to copy and/or distribute it,
# with or without modifications, as long as this notice is preserved.
# _AM_SUBST_NOTMAKE(VARIABLE)
# ---------------------------
# Prevent Automake from outputting VARIABLE = @VARIABLE@ in Makefile.in.
# This macro is traced by Automake.
AC_DEFUN([_AM_SUBST_NOTMAKE])
# AM_SUBST_NOTMAKE(VARIABLE)
# --------------------------
# Public sister of _AM_SUBST_NOTMAKE.
AC_DEFUN([AM_SUBST_NOTMAKE], [_AM_SUBST_NOTMAKE($@)])
# Check how to create a tarball. -*- Autoconf -*-
# Copyright (C) 2004-2014 Free Software Foundation, Inc.
#
# This file is free software; the Free Software Foundation
# gives unlimited permission to copy and/or distribute it,
# with or without modifications, as long as this notice is preserved.
# _AM_PROG_TAR(FORMAT)
# --------------------
# Check how to create a tarball in format FORMAT.
# FORMAT should be one of 'v7', 'ustar', or 'pax'.
#
# Substitute a variable $(am__tar) that is a command
# writing to stdout a FORMAT-tarball containing the directory
# $tardir.
# tardir=directory && $(am__tar) > result.tar
#
# Substitute a variable $(am__untar) that extract such
# a tarball read from stdin.
# $(am__untar) < result.tar
#
AC_DEFUN([_AM_PROG_TAR],
[# Always define AMTAR for backward compatibility. Yes, it's still used
# in the wild :-( We should find a proper way to deprecate it ...
AC_SUBST([AMTAR], ['$${TAR-tar}'])
# We'll loop over all known methods to create a tar archive until one works.
_am_tools='gnutar m4_if([$1], [ustar], [plaintar]) pax cpio none'
m4_if([$1], [v7],
[am__tar='$${TAR-tar} chof - "$$tardir"' am__untar='$${TAR-tar} xf -'],
[m4_case([$1],
[ustar],
[# The POSIX 1988 'ustar' format is defined with fixed-size fields.
# There is notably a 21 bits limit for the UID and the GID. In fact,
# the 'pax' utility can hang on bigger UID/GID (see automake bug#8343
# and bug#13588).
am_max_uid=2097151 # 2^21 - 1
am_max_gid=$am_max_uid
# The $UID and $GID variables are not portable, so we need to resort
# to the POSIX-mandated id(1) utility. Errors in the 'id' calls
# below are definitely unexpected, so allow the users to see them
# (that is, avoid stderr redirection).
am_uid=`id -u || echo unknown`
am_gid=`id -g || echo unknown`
AC_MSG_CHECKING([whether UID '$am_uid' is supported by ustar format])
if test $am_uid -le $am_max_uid; then
AC_MSG_RESULT([yes])
else
AC_MSG_RESULT([no])
_am_tools=none
fi
AC_MSG_CHECKING([whether GID '$am_gid' is supported by ustar format])
if test $am_gid -le $am_max_gid; then
AC_MSG_RESULT([yes])
else
AC_MSG_RESULT([no])
_am_tools=none
fi],
[pax],
[],
[m4_fatal([Unknown tar format])])
AC_MSG_CHECKING([how to create a $1 tar archive])
# Go ahead even if we have the value already cached. We do so because we
# need to set the values for the 'am__tar' and 'am__untar' variables.
_am_tools=${am_cv_prog_tar_$1-$_am_tools}
for _am_tool in $_am_tools; do
case $_am_tool in
gnutar)
for _am_tar in tar gnutar gtar; do
AM_RUN_LOG([$_am_tar --version]) && break
done
am__tar="$_am_tar --format=m4_if([$1], [pax], [posix], [$1]) -chf - "'"$$tardir"'
am__tar_="$_am_tar --format=m4_if([$1], [pax], [posix], [$1]) -chf - "'"$tardir"'
am__untar="$_am_tar -xf -"
;;
plaintar)
# Must skip GNU tar: if it does not support --format= it doesn't create
# ustar tarball either.
(tar --version) >/dev/null 2>&1 && continue
am__tar='tar chf - "$$tardir"'
am__tar_='tar chf - "$tardir"'
am__untar='tar xf -'
;;
pax)
am__tar='pax -L -x $1 -w "$$tardir"'
am__tar_='pax -L -x $1 -w "$tardir"'
am__untar='pax -r'
;;
cpio)
am__tar='find "$$tardir" -print | cpio -o -H $1 -L'
am__tar_='find "$tardir" -print | cpio -o -H $1 -L'
am__untar='cpio -i -H $1 -d'
;;
none)
am__tar=false
am__tar_=false
am__untar=false
;;
esac
# If the value was cached, stop now. We just wanted to have am__tar
# and am__untar set.
test -n "${am_cv_prog_tar_$1}" && break
# tar/untar a dummy directory, and stop if the command works.
rm -rf conftest.dir
mkdir conftest.dir
echo GrepMe > conftest.dir/file
AM_RUN_LOG([tardir=conftest.dir && eval $am__tar_ >conftest.tar])
rm -rf conftest.dir
if test -s conftest.tar; then
AM_RUN_LOG([$am__untar /dev/null 2>&1 && break
fi
done
rm -rf conftest.dir
AC_CACHE_VAL([am_cv_prog_tar_$1], [am_cv_prog_tar_$1=$_am_tool])
AC_MSG_RESULT([$am_cv_prog_tar_$1])])
AC_SUBST([am__tar])
AC_SUBST([am__untar])
]) # _AM_PROG_TAR
m4_include([m4/gc_set_version.m4])
m4_include([m4/libtool.m4])
m4_include([m4/ltoptions.m4])
m4_include([m4/ltsugar.m4])
m4_include([m4/ltversion.m4])
m4_include([m4/lt~obsolete.m4])
Gauche-0.9.6/gc/extra/ 0000775 0000764 0000764 00000000000 13316646663 013520 5 ustar shiro shiro Gauche-0.9.6/gc/extra/gc.c 0000664 0000764 0000764 00000005645 13074101475 014253 0 ustar shiro shiro /*
* Copyright (c) 1994 by Xerox Corporation. All rights reserved.
* Copyright (c) 1996 by Silicon Graphics. All rights reserved.
* Copyright (c) 1998 by Fergus Henderson. All rights reserved.
* Copyright (c) 2000-2009 by Hewlett-Packard Development Company.
* All rights reserved.
*
* THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY EXPRESSED
* OR IMPLIED. ANY USE IS AT YOUR OWN RISK.
*
* Permission is hereby granted to use or copy this program
* for any purpose, provided the above notices are retained on all copies.
* Permission to modify the code and to distribute modified code is granted,
* provided the above notices are retained, and a notice that the code was
* modified is included with the above copyright notice.
*/
/* This file could be used for the following purposes: */
/* - get the complete GC as a single link object file (module); */
/* - enable more compiler optimizations. */
/* Tip: to get the highest level of compiler optimizations, the typical */
/* compiler options (GCC) to use are: */
/* -O3 -fno-strict-aliasing -march=native -Wall -fprofile-generate/use */
/* Warning: GCC for Linux (for C++ clients only): Use -fexceptions both */
/* for GC and the client otherwise GC_thread_exit_proc() is not */
/* guaranteed to be invoked (see the comments in pthread_start.c). */
#define GC_INNER STATIC
#define GC_EXTERN GC_INNER
/* STATIC is defined in gcconfig.h. */
/* Small files go first... */
#include "../backgraph.c"
#include "../blacklst.c"
#include "../checksums.c"
#include "../gcj_mlc.c"
#include "../headers.c"
#include "../malloc.c"
#include "../new_hblk.c"
#include "../obj_map.c"
#include "../ptr_chck.c"
#include "../stubborn.c"
#include "gc_inline.h"
#include "../allchblk.c"
#include "../alloc.c"
#include "../dbg_mlc.c"
#include "../finalize.c"
#include "../fnlz_mlc.c"
#include "../mallocx.c"
#include "../mark.c"
#include "../mark_rts.c"
#include "../reclaim.c"
#include "../typd_mlc.c"
#include "../misc.c"
#include "../os_dep.c"
#include "../thread_local_alloc.c"
/* Most platform-specific files go here... */
#include "../darwin_stop_world.c"
#include "../dyn_load.c"
#include "../gc_dlopen.c"
#include "../mach_dep.c"
#include "../pcr_interface.c"
#include "../pthread_stop_world.c"
#include "../pthread_support.c"
#include "../specific.c"
#include "../win32_threads.c"
#ifndef GC_PTHREAD_START_STANDALONE
# include "../pthread_start.c"
#endif
/* Restore pthread calls redirection (if altered in */
/* pthread_stop_world.c, pthread_support.c or win32_threads.c). */
/* This is only useful if directly included from application */
/* (instead of linking gc). */
#ifndef GC_NO_THREAD_REDIRECTS
# define GC_PTHREAD_REDIRECTS_ONLY
# include "gc_pthread_redirects.h"
#endif
/* real_malloc.c, extra/MacOS.c, extra/msvc_dbg.c are not included. */
Gauche-0.9.6/gc/extra/Mac_files/ 0000775 0000764 0000764 00000000000 13227007433 015365 5 ustar shiro shiro Gauche-0.9.6/gc/extra/Mac_files/datastart.c 0000664 0000764 0000764 00000000204 13074101475 017515 0 ustar shiro shiro /*
datastart.c
A hack to get the extent of global data for the Macintosh.
by Patrick C. Beard.
*/
long __datastart;
Gauche-0.9.6/gc/extra/Mac_files/dataend.c 0000664 0000764 0000764 00000000200 13074101475 017122 0 ustar shiro shiro /*
dataend.c
A hack to get the extent of global data for the Macintosh.
by Patrick C. Beard.
*/
long __dataend;
Gauche-0.9.6/gc/extra/Mac_files/MacOS_config.h 0000664 0000764 0000764 00000001434 13227007433 020027 0 ustar shiro shiro /*
MacOS_config.h
Configuration flags for Macintosh development systems.
11/16/95 pcb Updated compilation flags to reflect latest 4.6 Makefile.
by Patrick C. Beard.
*/
/* Boehm, November 17, 1995 12:10 pm PST */
#ifdef __MWERKS__
/* for CodeWarrior Pro with Metrowerks Standard Library (MSL). */
/* #define MSL_USE_PRECOMPILED_HEADERS 0 */
#include
#endif /* __MWERKS__ */
/* these are defined again in gc_priv.h. */
#undef TRUE
#undef FALSE
#define ALL_INTERIOR_POINTERS /* follows interior pointers. */
/* #define DONT_ADD_BYTE_AT_END */ /* no padding. */
/* #define SMALL_CONFIG */ /* whether to use a smaller heap. */
#define USE_TEMPORARY_MEMORY /* use Macintosh temporary memory. */
Gauche-0.9.6/gc/extra/symbian/ 0000775 0000764 0000764 00000000000 13074101475 015146 5 ustar shiro shiro Gauche-0.9.6/gc/extra/symbian/global_end.cpp 0000664 0000764 0000764 00000000275 13074101475 017744 0 ustar shiro shiro // Symbian-specific file.
// INCLUDE FILES
#include "private/gcconfig.h"
#ifdef __cplusplus
extern "C" {
#endif
int winscw_data_end;
#ifdef __cplusplus
}
#endif
// End Of File
Gauche-0.9.6/gc/extra/symbian/init_global_static_roots.cpp 0000664 0000764 0000764 00000001331 13074101475 022730 0 ustar shiro shiro // Symbian-specific file.
// INCLUDE FILES
#include
#include "private/gcconfig.h"
#include "gc.h"
#ifdef __cplusplus
extern "C" {
#endif
void GC_init_global_static_roots()
{
ptr_t dataStart = NULL;
ptr_t dataEnd = NULL;
# if defined (__WINS__)
extern int winscw_data_start, winscw_data_end;
dataStart = ((ptr_t)&winscw_data_start);
dataEnd = ((ptr_t)&winscw_data_end);
# else
extern int Image$$RW$$Limit[], Image$$RW$$Base[];
dataStart = ((ptr_t)Image$$RW$$Base);
dataEnd = ((ptr_t)Image$$RW$$Limit);
# endif
GC_add_roots(dataStart, dataEnd);
}
#ifdef __cplusplus
}
#endif
Gauche-0.9.6/gc/extra/symbian/global_start.cpp 0000664 0000764 0000764 00000000277 13074101475 020335 0 ustar shiro shiro // Symbian-specific file.
// INCLUDE FILES
#include "private/gcconfig.h"
#ifdef __cplusplus
extern "C" {
#endif
int winscw_data_start;
#ifdef __cplusplus
}
#endif
// End Of File
Gauche-0.9.6/gc/extra/msvc_dbg.c 0000664 0000764 0000764 00000027415 13302340445 015441 0 ustar shiro shiro /*
Copyright (c) 2004 Andrei Polushin
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.
*/
#if !defined(_M_AMD64) && defined(_MSC_VER)
/* X86_64 is currently missing some meachine-dependent code below. */
#define GC_BUILD
#include "private/msvc_dbg.h"
#include "gc.h"
#define WIN32_LEAN_AND_MEAN
#include
#pragma pack(push, 8)
#include
#pragma pack(pop)
#pragma comment(lib, "dbghelp.lib")
#pragma optimize("gy", off)
typedef GC_word word;
#define GC_ULONG_PTR word
#ifdef _WIN64
typedef GC_ULONG_PTR ULONG_ADDR;
#else
typedef ULONG ULONG_ADDR;
#endif
static HANDLE GetSymHandle(void)
{
static HANDLE symHandle = NULL;
if (!symHandle) {
BOOL bRet = SymInitialize(symHandle = GetCurrentProcess(), NULL, FALSE);
if (bRet) {
DWORD dwOptions = SymGetOptions();
dwOptions &= ~SYMOPT_UNDNAME;
dwOptions |= SYMOPT_LOAD_LINES;
SymSetOptions(dwOptions);
}
}
return symHandle;
}
static void* CALLBACK FunctionTableAccess(HANDLE hProcess,
ULONG_ADDR dwAddrBase)
{
return SymFunctionTableAccess(hProcess, dwAddrBase);
}
static ULONG_ADDR CALLBACK GetModuleBase(HANDLE hProcess, ULONG_ADDR dwAddress)
{
MEMORY_BASIC_INFORMATION memoryInfo;
ULONG_ADDR dwAddrBase = SymGetModuleBase(hProcess, dwAddress);
if (dwAddrBase) {
return dwAddrBase;
}
if (VirtualQueryEx(hProcess, (void*)(GC_ULONG_PTR)dwAddress, &memoryInfo,
sizeof(memoryInfo))) {
char filePath[_MAX_PATH];
char curDir[_MAX_PATH];
char exePath[_MAX_PATH];
DWORD size = GetModuleFileNameA((HINSTANCE)memoryInfo.AllocationBase,
filePath, sizeof(filePath));
/* Save and restore current directory around SymLoadModule, see KB */
/* article Q189780. */
GetCurrentDirectoryA(sizeof(curDir), curDir);
GetModuleFileNameA(NULL, exePath, sizeof(exePath));
#if defined(_MSC_VER) && _MSC_VER == 1200
/* use strcat for VC6 */
strcat(exePath, "\\..");
#else
strcat_s(exePath, sizeof(exePath), "\\..");
#endif /* _MSC_VER >= 1200 */
SetCurrentDirectoryA(exePath);
#ifdef _DEBUG
GetCurrentDirectoryA(sizeof(exePath), exePath);
#endif
SymLoadModule(hProcess, NULL, size ? filePath : NULL, NULL,
(ULONG_ADDR)(GC_ULONG_PTR)memoryInfo.AllocationBase, 0);
SetCurrentDirectoryA(curDir);
}
return (ULONG_ADDR)(GC_ULONG_PTR)memoryInfo.AllocationBase;
}
static ULONG_ADDR CheckAddress(void* address)
{
ULONG_ADDR dwAddress = (ULONG_ADDR)(GC_ULONG_PTR)address;
GetModuleBase(GetSymHandle(), dwAddress);
return dwAddress;
}
size_t GetStackFrames(size_t skip, void* frames[], size_t maxFrames)
{
HANDLE hProcess = GetSymHandle();
HANDLE hThread = GetCurrentThread();
CONTEXT context;
context.ContextFlags = CONTEXT_FULL;
if (!GetThreadContext(hThread, &context)) {
return 0;
}
/* GetThreadContext might return invalid context for the current thread. */
#if defined(_M_IX86)
__asm mov context.Ebp, ebp
#endif
return GetStackFramesFromContext(hProcess, hThread, &context, skip + 1,
frames, maxFrames);
}
size_t GetStackFramesFromContext(HANDLE hProcess, HANDLE hThread,
CONTEXT* context, size_t skip,
void* frames[], size_t maxFrames)
{
size_t frameIndex;
DWORD machineType;
STACKFRAME stackFrame = { 0 };
stackFrame.AddrPC.Mode = AddrModeFlat;
#if defined(_M_IX86)
machineType = IMAGE_FILE_MACHINE_I386;
stackFrame.AddrPC.Offset = context->Eip;
stackFrame.AddrStack.Mode = AddrModeFlat;
stackFrame.AddrStack.Offset = context->Esp;
stackFrame.AddrFrame.Mode = AddrModeFlat;
stackFrame.AddrFrame.Offset = context->Ebp;
#elif defined(_M_MRX000)
machineType = IMAGE_FILE_MACHINE_R4000;
stackFrame.AddrPC.Offset = context->Fir;
#elif defined(_M_ALPHA)
machineType = IMAGE_FILE_MACHINE_ALPHA;
stackFrame.AddrPC.Offset = (unsigned long)context->Fir;
#elif defined(_M_PPC)
machineType = IMAGE_FILE_MACHINE_POWERPC;
stackFrame.AddrPC.Offset = context->Iar;
#elif defined(_M_IA64)
machineType = IMAGE_FILE_MACHINE_IA64;
stackFrame.AddrPC.Offset = context->StIIP;
#elif defined(_M_ALPHA64)
machineType = IMAGE_FILE_MACHINE_ALPHA64;
stackFrame.AddrPC.Offset = context->Fir;
#elif !defined(CPPCHECK)
# error Unknown CPU
#endif
for (frameIndex = 0; frameIndex < maxFrames; ) {
BOOL bRet = StackWalk(machineType, hProcess, hThread, &stackFrame,
&context, NULL, FunctionTableAccess, GetModuleBase, NULL);
if (!bRet) {
break;
}
if (skip) {
skip--;
} else {
frames[frameIndex++] = (void*)(GC_ULONG_PTR)stackFrame.AddrPC.Offset;
}
}
return frameIndex;
}
size_t GetModuleNameFromAddress(void* address, char* moduleName, size_t size)
{
if (size) *moduleName = 0;
{
const char* sourceName;
IMAGEHLP_MODULE moduleInfo = { sizeof (moduleInfo) };
if (!SymGetModuleInfo(GetSymHandle(), CheckAddress(address),
&moduleInfo)) {
return 0;
}
sourceName = strrchr(moduleInfo.ImageName, '\\');
if (sourceName) {
sourceName++;
} else {
sourceName = moduleInfo.ImageName;
}
if (size) {
strncpy(moduleName, sourceName, size)[size - 1] = 0;
}
return strlen(sourceName);
}
}
size_t GetModuleNameFromStack(size_t skip, char* moduleName, size_t size)
{
void* address = NULL;
GetStackFrames(skip + 1, &address, 1);
if (address) {
return GetModuleNameFromAddress(address, moduleName, size);
}
return 0;
}
size_t GetSymbolNameFromAddress(void* address, char* symbolName, size_t size,
size_t* offsetBytes)
{
if (size) *symbolName = 0;
if (offsetBytes) *offsetBytes = 0;
__try {
ULONG_ADDR dwOffset = 0;
union {
IMAGEHLP_SYMBOL sym;
char symNameBuffer[sizeof(IMAGEHLP_SYMBOL) + MAX_SYM_NAME];
} u;
u.sym.SizeOfStruct = sizeof(u.sym);
u.sym.MaxNameLength = sizeof(u.symNameBuffer) - sizeof(u.sym);
if (!SymGetSymFromAddr(GetSymHandle(), CheckAddress(address), &dwOffset,
&u.sym)) {
return 0;
} else {
const char* sourceName = u.sym.Name;
char undName[1024];
if (UnDecorateSymbolName(u.sym.Name, undName, sizeof(undName),
UNDNAME_NO_MS_KEYWORDS | UNDNAME_NO_ACCESS_SPECIFIERS)) {
sourceName = undName;
} else if (SymUnDName(&u.sym, undName, sizeof(undName))) {
sourceName = undName;
}
if (offsetBytes) {
*offsetBytes = dwOffset;
}
if (size) {
strncpy(symbolName, sourceName, size)[size - 1] = 0;
}
return strlen(sourceName);
}
} __except (EXCEPTION_EXECUTE_HANDLER) {
SetLastError(GetExceptionCode());
}
return 0;
}
size_t GetSymbolNameFromStack(size_t skip, char* symbolName, size_t size,
size_t* offsetBytes)
{
void* address = NULL;
GetStackFrames(skip + 1, &address, 1);
if (address) {
return GetSymbolNameFromAddress(address, symbolName, size, offsetBytes);
}
return 0;
}
size_t GetFileLineFromAddress(void* address, char* fileName, size_t size,
size_t* lineNumber, size_t* offsetBytes)
{
if (size) *fileName = 0;
if (lineNumber) *lineNumber = 0;
if (offsetBytes) *offsetBytes = 0;
{
char* sourceName;
IMAGEHLP_LINE line = { sizeof (line) };
GC_ULONG_PTR dwOffset = 0;
if (!SymGetLineFromAddr(GetSymHandle(), CheckAddress(address), &dwOffset,
&line)) {
return 0;
}
if (lineNumber) {
*lineNumber = line.LineNumber;
}
if (offsetBytes) {
*offsetBytes = dwOffset;
}
sourceName = line.FileName;
/* TODO: resolve relative filenames, found in 'source directories' */
/* registered with MSVC IDE. */
if (size) {
strncpy(fileName, sourceName, size)[size - 1] = 0;
}
return strlen(sourceName);
}
}
size_t GetFileLineFromStack(size_t skip, char* fileName, size_t size,
size_t* lineNumber, size_t* offsetBytes)
{
void* address = NULL;
GetStackFrames(skip + 1, &address, 1);
if (address) {
return GetFileLineFromAddress(address, fileName, size, lineNumber,
offsetBytes);
}
return 0;
}
size_t GetDescriptionFromAddress(void* address, const char* format,
char* buffer, size_t size)
{
char*const begin = buffer;
char*const end = buffer + size;
size_t line_number = 0;
if (size) {
*buffer = 0;
}
buffer += GetFileLineFromAddress(address, buffer, size, &line_number, NULL);
size = (GC_ULONG_PTR)end < (GC_ULONG_PTR)buffer ? 0 : end - buffer;
if (line_number) {
char str[128];
wsprintf(str, "(%d) : ", (int)line_number);
if (size) {
strncpy(buffer, str, size)[size - 1] = 0;
}
buffer += strlen(str);
size = (GC_ULONG_PTR)end < (GC_ULONG_PTR)buffer ? 0 : end - buffer;
}
if (size) {
strncpy(buffer, "at ", size)[size - 1] = 0;
}
buffer += strlen("at ");
size = (GC_ULONG_PTR)end < (GC_ULONG_PTR)buffer ? 0 : end - buffer;
buffer += GetSymbolNameFromAddress(address, buffer, size, NULL);
size = (GC_ULONG_PTR)end < (GC_ULONG_PTR)buffer ? 0 : end - buffer;
if (size) {
strncpy(buffer, " in ", size)[size - 1] = 0;
}
buffer += strlen(" in ");
size = (GC_ULONG_PTR)end < (GC_ULONG_PTR)buffer ? 0 : end - buffer;
buffer += GetModuleNameFromAddress(address, buffer, size);
return buffer - begin;
}
size_t GetDescriptionFromStack(void* const frames[], size_t count,
const char* format, char* description[],
size_t size)
{
char*const begin = (char*)description;
char*const end = begin + size;
char* buffer = begin + (count + 1) * sizeof(char*);
size_t i;
(void)format;
for (i = 0; i < count; ++i) {
if (size)
description[i] = buffer;
size = (GC_ULONG_PTR)end < (GC_ULONG_PTR)buffer ? 0 : end - buffer;
buffer += 1 + GetDescriptionFromAddress(frames[i], NULL, buffer, size);
}
if (size)
description[count] = NULL;
return buffer - begin;
}
/* Compatibility with */
int backtrace(void* addresses[], int count)
{
return GetStackFrames(1, addresses, count);
}
char** backtrace_symbols(void*const* addresses, int count)
{
size_t size = GetDescriptionFromStack(addresses, count, NULL, NULL, 0);
char** symbols = (char**)malloc(size);
if (symbols != NULL)
GetDescriptionFromStack(addresses, count, NULL, symbols, size);
return symbols;
}
#else
extern int GC_quiet;
/* ANSI C does not allow translation units to be empty. */
#endif /* _M_AMD64 */
Gauche-0.9.6/gc/extra/MacOS.c 0000664 0000764 0000764 00000011303 13227007433 014607 0 ustar shiro shiro /*
MacOS.c
Some routines for the Macintosh OS port of the Hans-J. Boehm, Alan J. Demers
garbage collector.
11/22/94 pcb StripAddress the temporary memory handle for 24-bit mode.
11/30/94 pcb Tracking all memory usage so we can deallocate it all at once.
02/10/96 pcb Added routine to perform a final collection when
unloading shared library.
by Patrick C. Beard.
*/
/* Boehm, February 15, 1996 2:55 pm PST */
#include
#include
#include
#include
#include
#include
#define GC_BUILD
#include "gc.h"
#include "private/gc_priv.h"
/* use 'CODE' resource 0 to get exact location of the beginning of global space. */
typedef struct {
unsigned long aboveA5;
unsigned long belowA5;
unsigned long JTSize;
unsigned long JTOffset;
} *CodeZeroPtr, **CodeZeroHandle;
void* GC_MacGetDataStart(void)
{
CodeZeroHandle code0 = (CodeZeroHandle)GetResource('CODE', 0);
if (code0) {
long belowA5Size = (**code0).belowA5;
ReleaseResource((Handle)code0);
return (LMGetCurrentA5() - belowA5Size);
}
fprintf(stderr, "Couldn't load the jump table.");
exit(-1);
return 0;
}
#ifdef USE_TEMPORARY_MEMORY
/* track the use of temporary memory so it can be freed all at once. */
typedef struct TemporaryMemoryBlock TemporaryMemoryBlock, **TemporaryMemoryHandle;
struct TemporaryMemoryBlock {
TemporaryMemoryHandle nextBlock;
char data[];
};
static TemporaryMemoryHandle theTemporaryMemory = NULL;
void GC_MacFreeTemporaryMemory(void);
Ptr GC_MacTemporaryNewPtr(size_t size, Boolean clearMemory)
{
# if !defined(SHARED_LIBRARY_BUILD)
static Boolean firstTime = true;
# endif
OSErr result;
TemporaryMemoryHandle tempMemBlock;
Ptr tempPtr = nil;
tempMemBlock = (TemporaryMemoryHandle)TempNewHandle(size + sizeof(TemporaryMemoryBlock), &result);
if (tempMemBlock && result == noErr) {
HLockHi((Handle)tempMemBlock);
tempPtr = (**tempMemBlock).data;
if (clearMemory) memset(tempPtr, 0, size);
tempPtr = StripAddress(tempPtr);
/* keep track of the allocated blocks. */
(**tempMemBlock).nextBlock = theTemporaryMemory;
theTemporaryMemory = tempMemBlock;
}
# if !defined(SHARED_LIBRARY_BUILD)
/* install an exit routine to clean up the memory used at the end. */
if (firstTime) {
atexit(&GC_MacFreeTemporaryMemory);
firstTime = false;
}
# endif
return tempPtr;
}
extern word GC_fo_entries;
static void perform_final_collection(void)
{
unsigned i;
word last_fo_entries = 0;
/* adjust the stack bottom, because CFM calls us from another stack
location. */
GC_stackbottom = (ptr_t)&i;
/* try to collect and finalize everything in sight */
for (i = 0; i < 2 || GC_fo_entries < last_fo_entries; i++) {
last_fo_entries = GC_fo_entries;
GC_gcollect();
}
}
void GC_MacFreeTemporaryMemory(void)
{
# if defined(SHARED_LIBRARY_BUILD)
/* if possible, collect all memory, and invoke all finalizers. */
perform_final_collection();
# endif
if (theTemporaryMemory != NULL) {
# if !defined(SHARED_LIBRARY_BUILD)
long totalMemoryUsed = 0;
# endif
TemporaryMemoryHandle tempMemBlock = theTemporaryMemory;
while (tempMemBlock != NULL) {
TemporaryMemoryHandle nextBlock = (**tempMemBlock).nextBlock;
# if !defined(SHARED_LIBRARY_BUILD)
totalMemoryUsed += GetHandleSize((Handle)tempMemBlock);
# endif
DisposeHandle((Handle)tempMemBlock);
tempMemBlock = nextBlock;
}
theTemporaryMemory = NULL;
# if !defined(SHARED_LIBRARY_BUILD)
if (GC_print_stats) {
fprintf(stdout, "[total memory used: %ld bytes.]\n",
totalMemoryUsed);
fprintf(stdout, "[total collections: %lu]\n",
(unsigned long)GC_gc_no);
}
# endif
}
}
#endif /* USE_TEMPORARY_MEMORY */
#if __option(far_data)
void* GC_MacGetDataEnd(void)
{
CodeZeroHandle code0 = (CodeZeroHandle)GetResource('CODE', 0);
if (code0) {
long aboveA5Size = (**code0).aboveA5;
ReleaseResource((Handle)code0);
return (LMGetCurrentA5() + aboveA5Size);
}
fprintf(stderr, "Couldn't load the jump table.");
exit(-1);
return 0;
}
#endif /* __option(far_data) */
Gauche-0.9.6/gc/extra/AmigaOS.c 0000664 0000764 0000764 00000037065 13227007433 015142 0 ustar shiro shiro
/******************************************************************
AmigaOS-specific routines for GC.
This file is normally included from os_dep.c
******************************************************************/
#if !defined(GC_AMIGA_DEF) && !defined(GC_AMIGA_SB) && !defined(GC_AMIGA_DS) && !defined(GC_AMIGA_AM)
# include "private/gc_priv.h"
# include
# include
# define GC_AMIGA_DEF
# define GC_AMIGA_SB
# define GC_AMIGA_DS
# define GC_AMIGA_AM
#endif
#ifdef GC_AMIGA_DEF
# ifndef __GNUC__
# include
# endif
# include
# include
# include
# include
#endif
#ifdef GC_AMIGA_SB
/******************************************************************
Find the base of the stack.
******************************************************************/
ptr_t GC_get_main_stack_base(void)
{
struct Process *proc = (struct Process*)SysBase->ThisTask;
/* Reference: Amiga Guru Book Pages: 42,567,574 */
if (proc->pr_Task.tc_Node.ln_Type==NT_PROCESS
&& proc->pr_CLI != NULL) {
/* first ULONG is StackSize */
/*longPtr = proc->pr_ReturnAddr;
size = longPtr[0];*/
return (char *)proc->pr_ReturnAddr + sizeof(ULONG);
} else {
return (char *)proc->pr_Task.tc_SPUpper;
}
}
#endif
#ifdef GC_AMIGA_DS
/******************************************************************
Register data segments.
******************************************************************/
void GC_register_data_segments(void)
{
struct Process *proc;
struct CommandLineInterface *cli;
BPTR myseglist;
ULONG *data;
# ifdef __GNUC__
ULONG dataSegSize;
GC_bool found_segment = FALSE;
extern char __data_size[];
dataSegSize=__data_size+8;
/* Can`t find the Location of __data_size, because
it`s possible that is it, inside the segment. */
# endif
proc= (struct Process*)SysBase->ThisTask;
/* Reference: Amiga Guru Book Pages: 538ff,565,573
and XOper.asm */
myseglist = proc->pr_SegList;
if (proc->pr_Task.tc_Node.ln_Type==NT_PROCESS) {
if (proc->pr_CLI != NULL) {
/* ProcLoaded 'Loaded as a command: '*/
cli = BADDR(proc->pr_CLI);
myseglist = cli->cli_Module;
}
} else {
ABORT("Not a Process.");
}
if (myseglist == NULL) {
ABORT("Arrrgh.. can't find segments, aborting");
}
/* xoper hunks Shell Process */
for (data = (ULONG *)BADDR(myseglist); data != NULL;
data = (ULONG *)BADDR(data[0])) {
if ((ULONG)GC_register_data_segments < (ULONG)(&data[1])
|| (ULONG)GC_register_data_segments > (ULONG)(&data[1])
+ data[-1]) {
# ifdef __GNUC__
if (dataSegSize == data[-1]) {
found_segment = TRUE;
}
# endif
GC_add_roots_inner((char *)&data[1],
((char *)&data[1]) + data[-1], FALSE);
}
} /* for */
# ifdef __GNUC__
if (!found_segment) {
ABORT("Can`t find correct Segments.\nSolution: Use an newer version of ixemul.library");
}
# endif
}
#endif
#ifdef GC_AMIGA_AM
#ifndef GC_AMIGA_FASTALLOC
void *GC_amiga_allocwrapper(size_t size,void *(*AllocFunction)(size_t size2)){
return (*AllocFunction)(size);
}
void *(*GC_amiga_allocwrapper_do)(size_t size,void *(*AllocFunction)(size_t size2))
=GC_amiga_allocwrapper;
#else
void *GC_amiga_allocwrapper_firsttime(size_t size,void *(*AllocFunction)(size_t size2));
void *(*GC_amiga_allocwrapper_do)(size_t size,void *(*AllocFunction)(size_t size2))
=GC_amiga_allocwrapper_firsttime;
/******************************************************************
Amiga-specific routines to obtain memory, and force GC to give
back fast-mem whenever possible.
These hacks makes gc-programs go many times faster when
the Amiga is low on memory, and are therefore strictly necessary.
-Kjetil S. Matheussen, 2000.
******************************************************************/
/* List-header for all allocated memory. */
struct GC_Amiga_AllocedMemoryHeader{
ULONG size;
struct GC_Amiga_AllocedMemoryHeader *next;
};
struct GC_Amiga_AllocedMemoryHeader *GC_AMIGAMEM=(struct GC_Amiga_AllocedMemoryHeader *)(int)~(NULL);
/* Type of memory. Once in the execution of a program, this might change to MEMF_ANY|MEMF_CLEAR */
ULONG GC_AMIGA_MEMF = MEMF_FAST | MEMF_CLEAR;
/* Prevents GC_amiga_get_mem from allocating memory if this one is TRUE. */
#ifndef GC_AMIGA_ONLYFAST
BOOL GC_amiga_dontalloc=FALSE;
#endif
#ifdef GC_AMIGA_PRINTSTATS
int succ=0,succ2=0;
int nsucc=0,nsucc2=0;
int nullretries=0;
int numcollects=0;
int chipa=0;
int allochip=0;
int allocfast=0;
int cur0=0;
int cur1=0;
int cur10=0;
int cur50=0;
int cur150=0;
int cur151=0;
int ncur0=0;
int ncur1=0;
int ncur10=0;
int ncur50=0;
int ncur150=0;
int ncur151=0;
#endif
/* Free everything at program-end. */
void GC_amiga_free_all_mem(void){
struct GC_Amiga_AllocedMemoryHeader *gc_am=(struct GC_Amiga_AllocedMemoryHeader *)(~(int)(GC_AMIGAMEM));
#ifdef GC_AMIGA_PRINTSTATS
printf("\n\n"
"%d bytes of chip-mem, and %d bytes of fast-mem where allocated from the OS.\n",
allochip,allocfast
);
printf(
"%d bytes of chip-mem were returned from the GC_AMIGA_FASTALLOC supported allocating functions.\n",
chipa
);
printf("\n");
printf("GC_gcollect was called %d times to avoid returning NULL or start allocating with the MEMF_ANY flag.\n",numcollects);
printf("%d of them was a success. (the others had to use allocation from the OS.)\n",nullretries);
printf("\n");
printf("Succeeded forcing %d gc-allocations (%d bytes) of chip-mem to be fast-mem.\n",succ,succ2);
printf("Failed forcing %d gc-allocations (%d bytes) of chip-mem to be fast-mem.\n",nsucc,nsucc2);
printf("\n");
printf(
"Number of retries before succeeding a chip->fast force:\n"
"0: %d, 1: %d, 2-9: %d, 10-49: %d, 50-149: %d, >150: %d\n",
cur0,cur1,cur10,cur50,cur150,cur151
);
printf(
"Number of retries before giving up a chip->fast force:\n"
"0: %d, 1: %d, 2-9: %d, 10-49: %d, 50-149: %d, >150: %d\n",
ncur0,ncur1,ncur10,ncur50,ncur150,ncur151
);
#endif
while(gc_am!=NULL){
struct GC_Amiga_AllocedMemoryHeader *temp = gc_am->next;
FreeMem(gc_am,gc_am->size);
gc_am=(struct GC_Amiga_AllocedMemoryHeader *)(~(int)(temp));
}
}
#ifndef GC_AMIGA_ONLYFAST
/* All memory with address lower than this one is chip-mem. */
char *chipmax;
/*
* Always set to the last size of memory tried to be allocated.
* Needed to ensure allocation when the size is bigger than 100000.
*
*/
size_t latestsize;
#endif
#ifdef GC_AMIGA_FASTALLOC
/*
* The actual function that is called with the GET_MEM macro.
*
*/
void *GC_amiga_get_mem(size_t size){
struct GC_Amiga_AllocedMemoryHeader *gc_am;
#ifndef GC_AMIGA_ONLYFAST
if(GC_amiga_dontalloc==TRUE){
return NULL;
}
/* We really don't want to use chip-mem, but if we must, then as little as possible. */
if(GC_AMIGA_MEMF==(MEMF_ANY|MEMF_CLEAR) && size>100000 && latestsize<50000) return NULL;
#endif
gc_am=AllocMem((ULONG)(size + sizeof(struct GC_Amiga_AllocedMemoryHeader)),GC_AMIGA_MEMF);
if(gc_am==NULL) return NULL;
gc_am->next=GC_AMIGAMEM;
gc_am->size=size + sizeof(struct GC_Amiga_AllocedMemoryHeader);
GC_AMIGAMEM=(struct GC_Amiga_AllocedMemoryHeader *)(~(int)(gc_am));
#ifdef GC_AMIGA_PRINTSTATS
if((char *)gc_amchipmax || ret==NULL){
if(ret==NULL){
nsucc++;
nsucc2+=size;
if(rec==0) ncur0++;
if(rec==1) ncur1++;
if(rec>1 && rec<10) ncur10++;
if(rec>=10 && rec<50) ncur50++;
if(rec>=50 && rec<150) ncur150++;
if(rec>=150) ncur151++;
}else{
succ++;
succ2+=size;
if(rec==0) cur0++;
if(rec==1) cur1++;
if(rec>1 && rec<10) cur10++;
if(rec>=10 && rec<50) cur50++;
if(rec>=50 && rec<150) cur150++;
if(rec>=150) cur151++;
}
}
#endif
if (((char *)ret)<=chipmax && ret!=NULL && (rec<(size>500000?9:size/5000))){
ret=GC_amiga_rec_alloc(size,AllocFunction,rec+1);
}
return ret;
}
#endif
/* The allocating-functions defined inside the Amiga-blocks in gc.h is called
* via these functions.
*/
void *GC_amiga_allocwrapper_any(size_t size,void *(*AllocFunction)(size_t size2)){
void *ret;
GC_amiga_dontalloc=TRUE; /* Pretty tough thing to do, but its indeed necessary. */
latestsize=size;
ret=(*AllocFunction)(size);
if(((char *)ret) <= chipmax){
if(ret==NULL){
/* Give GC access to allocate memory. */
#ifdef GC_AMIGA_GC
if(!GC_dont_gc){
GC_gcollect();
#ifdef GC_AMIGA_PRINTSTATS
numcollects++;
#endif
ret=(*AllocFunction)(size);
}
if(ret==NULL)
#endif
{
GC_amiga_dontalloc=FALSE;
ret=(*AllocFunction)(size);
if(ret==NULL){
WARN("Out of Memory! Returning NIL!\n", 0);
}
}
#ifdef GC_AMIGA_PRINTSTATS
else{
nullretries++;
}
if(ret!=NULL && (char *)ret<=chipmax) chipa+=size;
#endif
}
#ifdef GC_AMIGA_RETRY
else{
void *ret2;
/* We got chip-mem. Better try again and again and again etc., we might get fast-mem sooner or later... */
/* Using gctest to check the effectiveness of doing this, does seldom give a very good result. */
/* However, real programs doesn't normally rapidly allocate and deallocate. */
if(
AllocFunction!=GC_malloc_uncollectable
#ifdef GC_ATOMIC_UNCOLLECTABLE
&& AllocFunction!=GC_malloc_atomic_uncollectable
#endif
){
ret2=GC_amiga_rec_alloc(size,AllocFunction,0);
}else{
ret2=(*AllocFunction)(size);
#ifdef GC_AMIGA_PRINTSTATS
if((char *)ret2chipmax){
GC_free(ret);
ret=ret2;
}else{
GC_free(ret2);
}
}
#endif
}
GC_amiga_dontalloc=FALSE;
return ret;
}
void (*GC_amiga_toany)(void)=NULL;
void GC_amiga_set_toany(void (*func)(void)){
GC_amiga_toany=func;
}
#endif /* !GC_AMIGA_ONLYFAST */
void *GC_amiga_allocwrapper_fast(size_t size,void *(*AllocFunction)(size_t size2)){
void *ret;
ret=(*AllocFunction)(size);
if(ret==NULL){
/* Enable chip-mem allocation. */
#ifdef GC_AMIGA_GC
if(!GC_dont_gc){
GC_gcollect();
#ifdef GC_AMIGA_PRINTSTATS
numcollects++;
#endif
ret=(*AllocFunction)(size);
}
if(ret==NULL)
#endif
{
#ifndef GC_AMIGA_ONLYFAST
GC_AMIGA_MEMF=MEMF_ANY | MEMF_CLEAR;
if(GC_amiga_toany!=NULL) (*GC_amiga_toany)();
GC_amiga_allocwrapper_do=GC_amiga_allocwrapper_any;
return GC_amiga_allocwrapper_any(size,AllocFunction);
#endif
}
#ifdef GC_AMIGA_PRINTSTATS
else{
nullretries++;
}
#endif
}
return ret;
}
void *GC_amiga_allocwrapper_firsttime(size_t size,void *(*AllocFunction)(size_t size2)){
atexit(&GC_amiga_free_all_mem);
chipmax=(char *)SysBase->MaxLocMem; /* For people still having SysBase in chip-mem, this might speed up a bit. */
GC_amiga_allocwrapper_do=GC_amiga_allocwrapper_fast;
return GC_amiga_allocwrapper_fast(size,AllocFunction);
}
#endif /* GC_AMIGA_FASTALLOC */
/*
* The wrapped realloc function.
*
*/
void *GC_amiga_realloc(void *old_object,size_t new_size_in_bytes){
#ifndef GC_AMIGA_FASTALLOC
return GC_realloc(old_object,new_size_in_bytes);
#else
void *ret;
latestsize=new_size_in_bytes;
ret=GC_realloc(old_object,new_size_in_bytes);
if(ret==NULL && new_size_in_bytes != 0
&& GC_AMIGA_MEMF==(MEMF_FAST | MEMF_CLEAR)){
/* Out of fast-mem. */
#ifdef GC_AMIGA_GC
if(!GC_dont_gc){
GC_gcollect();
#ifdef GC_AMIGA_PRINTSTATS
numcollects++;
#endif
ret=GC_realloc(old_object,new_size_in_bytes);
}
if(ret==NULL)
#endif
{
#ifndef GC_AMIGA_ONLYFAST
GC_AMIGA_MEMF=MEMF_ANY | MEMF_CLEAR;
if(GC_amiga_toany!=NULL) (*GC_amiga_toany)();
GC_amiga_allocwrapper_do=GC_amiga_allocwrapper_any;
ret=GC_realloc(old_object,new_size_in_bytes);
#endif
}
#ifdef GC_AMIGA_PRINTSTATS
else{
nullretries++;
}
#endif
}
if(ret==NULL && new_size_in_bytes != 0){
WARN("Out of Memory! Returning NIL!\n", 0);
}
#ifdef GC_AMIGA_PRINTSTATS
if(((char *)ret)