
ScaLAPACK Frequently Asked Questions (FAQ)

scalapack@cs.utk.edu

Many thanks to netlib_maintainers@netlib.org, whose FAQ list served as the pattern for this ScaLAPACK list.

Table of Contents

ScaLAPACK
1.1) What is ScaLAPACK?
1.2) How do I reference ScaLAPACK in a scientific publication?
1.3) Are there vendor-specific versions of ScaLAPACK?
1.4) What is the difference between the vendor and Netlib version of ScaLAPACK and which should I use?
1.5) Are there legal restrictions on the use of ScaLAPACK software?
1.6) What is two-dimensional block cyclic data distribution?
1.7) Where can I find more information about ScaLAPACK?
1.8) What and where are the PBLAS?
1.9) Are example programs available?
1.10) How do I run an example program?
1.11) How do I install ScaLAPACK?
1.12) How do I install ScaLAPACK using MPIch-G and Globus?
1.13) How do I achieve high performance using ScaLAPACK?
1.14) Are prebuilt ScaLAPACK libraries available?
1.15) How do I find a particular routine?
1.16) I can't get a program to work. What should I do?
1.17) How can I unpack scalapack.tgz?
1.18) What technical support for ScaLAPACK is available?
1.19) How do I submit a bug report?
1.20) How do I gather a distributed vector back to one processor?

BLACS
2.1) What and where are the BLACS?
2.2) Is there a Quick Reference Guide to the BLACS available?
2.3) How do I install the BLACS?
2.4) Are prebuilt BLACS libraries available?
2.5) Are example BLACS programs available?

BLAS
3.1) What are the BLAS?
3.2) Publications/references for the BLAS?
3.3) Is there a Quick Reference Guide to the BLAS available?
3.4) Are optimized BLAS libraries available?
3.5) What is ATLAS?
3.6) Where can I find vendor supplied BLAS?
3.7) Where can I find the Intel BLAS for Linux?
3.8) Where can I find Java BLAS?
3.9) Are prebuilt Fortran77 reference implementation BLAS libraries available from Netlib?

1) ScaLAPACK

1.1) What is ScaLAPACK?

The ScaLAPACK (or Scalable LAPACK) library includes a subset of LAPACK routines redesigned for distributed memory MIMD parallel computers. It is currently written in a Single-Program-Multiple-Data style using explicit message passing for interprocessor communication. It assumes matrices are laid out in a two-dimensional block cyclic decomposition.

Like LAPACK, the ScaLAPACK routines are based on block-partitioned algorithms in order to minimize the frequency of data movement between different levels of the memory hierarchy. (For such machines, the memory hierarchy includes the off-processor memory of other processors, in addition to the hierarchy of registers, cache, and local memory on each processor.) The fundamental building blocks of the ScaLAPACK library are distributed memory versions (PBLAS) of the Level 1, 2 and 3 BLAS, and a set of Basic Linear Algebra Communication Subprograms (BLACS) for communication tasks that arise frequently in parallel linear algebra computations. In the ScaLAPACK routines, all interprocessor communication occurs within the PBLAS and the BLACS. One of the design goals of ScaLAPACK was to have the ScaLAPACK routines resemble their LAPACK equivalents as much as possible.

For detailed information on ScaLAPACK, please refer to the ScaLAPACK Users' Guide.


1.2) How do I reference ScaLAPACK in a scientific publication?

We ask that you cite the ScaLAPACK Users' Guide.

@BOOK{slug,
      AUTHOR = {Blackford, L. S. and Choi, J. and Cleary, A. and
                D'Azevedo, E. and Demmel, J. and Dhillon, I. and
                Dongarra, J. and Hammarling, S. and Henry, G. and
                Petitet, A. and Stanley, K. and Walker, D. and
                Whaley, R. C.},
      TITLE = {{ScaLAPACK} Users' Guide},
      PUBLISHER = {Society for Industrial and Applied Mathematics},
      YEAR = {1997},
      ADDRESS = {Philadelphia, PA},
      ISBN = {0-89871-397-8 (paperback)} }

1.3) Are there vendor-specific versions of ScaLAPACK?

Yes.

ScaLAPACK has been incorporated into several commercial packages, including the Sun Scalable Scientific Subroutine Library (Sun S3L), NAG Parallel Library, IBM Parallel ESSL, and Cray LIBSCI, and is being integrated into the VNI IMSL Numerical Library, as well as software libraries for Fujitsu, Hewlett-Packard/Convex, Hitachi, NEC, and SGI.


1.4) What is the difference between the vendor and Netlib version of ScaLAPACK and which should I use?

The publicly available version of ScaLAPACK (on netlib) is designed to be portable and efficient across a wide range of computers. It is not hand-tuned for a specific computer architecture.

The vendor-specific versions of ScaLAPACK have been optimized for a specific architecture. Therefore, for best performance, we recommend using a vendor-optimized version of ScaLAPACK if it is available.

However, as new ScaLAPACK routines are introduced with each release, the vendor-specific versions of ScaLAPACK may only contain a subset of the existing routines.

If you suspect an error in a vendor-specific ScaLAPACK routine, we recommend downloading the ScaLAPACK Test Suite from netlib.


1.5) Are there legal restrictions on the use of ScaLAPACK software?

ScaLAPACK (like LINPACK, EISPACK, LAPACK, etc.) is a freely available software package. It is available from netlib via anonymous ftp and the World Wide Web. It can be, and is being, included in commercial packages (e.g., Sun's S3L, IBM's Parallel ESSL, the NAG Numerical PVM and MPI Library). We only ask that proper credit be given to the authors.

Like all software, it is copyrighted. It is not trademarked, but we do ask the following:

If you modify the source for these routines we ask that you change the name of the routine and comment the changes made to the original.

We will gladly answer any questions regarding the software. If you modify a routine, however, it is your responsibility to provide support for the modified version.


1.6) What is two-dimensional block cyclic data distribution?
In the two-dimensional block cyclic distribution, the processes of a parallel machine are viewed as a Prow-by-Pcol rectangular grid, and each global matrix is partitioned into MB-by-NB blocks that are assigned to the grid in a round-robin fashion along both the row and column dimensions. Each process thus owns a collection of blocks scattered throughout the matrix, which balances the load even for algorithms, such as the dense factorizations, whose active submatrix shrinks as the computation proceeds. All ScaLAPACK routines assume that their matrix operands are distributed in this fashion; the ScaLAPACK Users' Guide describes the distribution in detail.
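For illustration, the process that owns a given global matrix entry under this distribution can be computed directly from the block sizes and the process-grid shape. The helper below is a hypothetical Python sketch, not a ScaLAPACK routine (the analogous tool routine in ScaLAPACK is INDXG2P), and it assumes the distribution starts at process (0, 0):

```python
def owning_process(i, j, mb, nb, nprow, npcol):
    """Return (process row, process column) owning global entry (i, j).

    Indices are 0-based; mb x nb is the block size and the process grid
    is nprow x npcol, with the first block assigned to process (0, 0).
    """
    return ((i // mb) % nprow, (j // nb) % npcol)

# With 2x2 blocks on a 2x3 grid, block-rows wrap around every 2 rows of
# blocks and block-columns every 3 columns of blocks:
print(owning_process(0, 0, 2, 2, 2, 3))  # (0, 0)
print(owning_process(2, 5, 2, 2, 2, 3))  # (1, 2)
```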


1.7) Where can I find more information about ScaLAPACK?

A variety of working notes related to the ScaLAPACK library were published as LAPACK Working Notes and are available in PostScript and PDF format at:

http://www.netlib.org/lapack/lawns/ and
http://www.netlib.org/lapack/lawnspdf/

1.8) What and where are the PBLAS?

The Parallel Basic Linear Algebra Subprograms (PBLAS) are distributed memory versions of the Level 1, 2 and 3 BLAS. A Quick Reference Guide to the PBLAS is available. The software is available as part of the ScaLAPACK distribution tar file (scalapack.tgz).

There is also a new prototype version of the PBLAS (version 2.0), which is alignment-restriction free and uses logical algorithmic blocking techniques. For details, please refer to the file scalapack/prototype/readme.pblas.


1.9) Are example ScaLAPACK programs available?

Yes, example ScaLAPACK programs are available. Refer to

http://www.netlib.org/scalapack/examples/
for a list of available example programs.

A detailed description of how to run a ScaLAPACK example program is discussed in Chapter 2 of the ScaLAPACK Users' Guide.


1.10) How do I run an example program?

A detailed description of how to run a ScaLAPACK example program is discussed in Chapter 2 of the ScaLAPACK Users' Guide.


1.11) How do I install ScaLAPACK?

A comprehensive Installation Guide for ScaLAPACK is provided. In short, you only need to modify one file, SLmake.inc, to specify your compiler, compiler flags, and the locations of your MPI, BLACS, and BLAS libraries. Then type make lib to build the ScaLAPACK library, and make exe to build the testing/timing executables. Example SLmake.inc files for various architectures are supplied in the SCALAPACK/INSTALL subdirectory of the distribution.

The installation assumes that you already have a low-level message-passing layer (such as MPI, PVM, or a native message-passing library), a BLACS library (MPIBLACS, PVMBLACS, etc.), and a BLAS library. If any of these required components is not available, you must build it before proceeding with the ScaLAPACK installation.

If a vendor-optimized BLAS library is not available, ATLAS can be used to automatically generate an optimized BLAS library for your architecture. Only as a last resort should you use the reference implementation Fortran77 BLAS contained on the BLAS webpage.


1.12) How do I install ScaLAPACK using MPIch-G and Globus?

A detailed explanation of how to install and run a ScaLAPACK program using MPIch-G and Globus can be found at: http://www.cs.utk.edu/~petitet/grads/.

See Question 1.11 for general installation instructions.


1.13) How do I achieve high performance using ScaLAPACK?

ScaLAPACK performance relies on an efficient low-level message-passing layer and a high-speed interconnection network for communication, and on an optimized BLAS library for local computation.

For a detailed description of performance-related issues, please refer to Chapter 5 of the ScaLAPACK Users' Guide.


1.14) Are prebuilt ScaLAPACK libraries available?

Yes, prebuilt ScaLAPACK libraries are available for a variety of architectures. Refer to

http://www.netlib.org/scalapack/archives/
for a complete list of available prebuilt libraries.


1.15) How do I find a particular routine?

Indexes of individual ScaLAPACK driver and computational routines are available. These indexes contain brief descriptions of each routine.

ScaLAPACK routines are available in four types: single precision real, double precision real, single precision complex, and double precision complex. At the present time, the nonsymmetric eigenproblem is only available in single and double precision real.


1.16) I can't get a program to work. What should I do?

Technical questions should be directed to the authors at

scalapack@cs.utk.edu.

Please tell us the type of machine on which the tests were run, the compiler and compiler options that were used, details of the BLACS library that was used, as well as the BLAS library, and a copy of the input file if appropriate.

Be prepared to answer the following questions:

  1. Have you run the BLAS, BLACS, PBLAS and ScaLAPACK test suites?
  2. Have you checked the appropriate errata lists on netlib?
  3. Have you attempted to replicate this error using the appropriate ScaLAPACK test code and/or one of the ScaLAPACK example routines?
  4. If you are using an optimized BLAS or BLACS library, have you tried using the reference implementations from netlib?


1.17) How can I unpack scalapack.tgz?

   gunzip scalapack.tgz
   tar xvf scalapack.tar

The compression program gzip (and gunzip) is GNU software. If it is not already available on your machine, you can download it via anonymous ftp:

   ncftp prep.ai.mit.edu
   cd pub/gnu/
   get gzip-1.2.4.tar

See Question 1.11 for installation instructions.


1.18) What technical support for ScaLAPACK is available?

Technical questions and comments should be directed to the authors at

scalapack@cs.utk.edu.

See Question 1.16


1.19) How do I submit a bug report?

Technical questions should be directed to the authors at

scalapack@cs.utk.edu.

Be prepared to answer the questions outlined in Question 1.16. Those are the first questions that we will ask!


1.20) How do I gather a distributed vector back to one processor?

There are several ways to accomplish this task.

  1. You can create a local array of the global size, have each process write its pieces of the matrix into the appropriate locations, and then call the BLACS routine DGSUM2D to sum the arrays elementwise, leaving the answer on one process or on all processes.
  2. You can modify SCALAPACK/TOOLS/pdlaprnt.f to write to an array instead of writing to a file.
  3. You can modify the routine pdlawrite.f from the example program http://www.netlib.org/scalapack/examples/scaex.tgz.
  4. You can create a second "context" containing only one process, and then call the redistribution routines in SCALAPACK/REDIST/SRC/ to redistribute the matrix to that process grid.
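Approach 1 above can be sketched in serial Python, with a list of per-process pieces standing in for the process grid and a plain elementwise sum standing in for the DGSUM2D reduction (the function name and data layout here are hypothetical, for illustration only):

```python
def gather_by_summation(global_len, local_pieces):
    """local_pieces: one dict per process mapping global index -> value."""
    # Each process builds a full-length array holding only its own
    # entries, with zeros everywhere else...
    full_arrays = []
    for piece in local_pieces:
        full = [0.0] * global_len
        for idx, val in piece.items():
            full[idx] = val
        full_arrays.append(full)
    # ...and an elementwise sum across processes (what DGSUM2D performs
    # over the grid) yields the assembled global vector.
    return [sum(vals) for vals in zip(*full_arrays)]

# Two "processes", each owning alternating entries of a length-4 vector:
pieces = [{0: 1.0, 2: 3.0}, {1: 2.0, 3: 4.0}]
print(gather_by_summation(4, pieces))  # [1.0, 2.0, 3.0, 4.0]
```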

2) BLACS

2.1) What and where are the BLACS?

The BLACS (Basic Linear Algebra Communication Subprograms) project is an ongoing investigation whose purpose is to create a linear algebra oriented message passing interface that may be implemented efficiently and uniformly across a large range of distributed memory platforms.

The length of time required to implement efficient distributed memory algorithms makes it impractical to rewrite programs for every new parallel machine. The BLACS exist in order to make linear algebra applications both easier to program and more portable. It is for this reason that the BLACS are used as the communication layer of ScaLAPACK.

For further information on the BLACS, please refer to the blacs directory on netlib, as well as the BLACS Homepage.


2.2) Is there a Quick Reference Guide to the BLACS available?

Yes, a PostScript version of the Quick Reference Guide to the BLACS is available.


2.3) How do I install the BLACS?

First, you must choose the underlying message-passing layer that the BLACS will use (MPI, PVM, NX, MPL, etc.). Once this decision has been made, download the corresponding gzipped tar file.

An Installation Guide for the BLACS is provided, as well as a comprehensive BLACS Test Suite. In short, you only need to modify one file, Bmake.inc, to specify your compiler, compiler flags, and the location of your MPI library, and then type, for example, make mpi to build the MPI BLACS library. Example Bmake.inc files for various architectures are supplied in the BLACS/BMAKES subdirectory of the distribution. There are also scripts in BLACS/INSTALL which can be run to help determine some of the settings in the Bmake.inc file.

It is highly recommended that you run the BLACS Tester to ensure that your installation is correct and that no bugs lurk in the low-level message-passing layer. If you suspect an error, please consult the appropriate errata list on netlib.


2.4) Are prebuilt BLACS libraries available?

Yes, prebuilt BLACS libraries are available for a variety of architectures and message-passing interfaces. Refer to

http://www.netlib.org/blacs/archives/
for a complete list of available prebuilt libraries.

2.5) Are example BLACS programs available?

Yes, example BLACS programs are available. Refer to

http://www.netlib.org/scalapack/examples/
for a list of available example programs.

3) BLAS


3.1) What and where are the BLAS?

The BLAS (Basic Linear Algebra Subprograms) are high quality "building block" routines for performing basic vector and matrix operations. Level 1 BLAS do vector-vector operations, Level 2 BLAS do matrix-vector operations, and Level 3 BLAS do matrix-matrix operations. Because the BLAS are efficient, portable, and widely available, they're commonly used in the development of high quality linear algebra software, LINPACK and LAPACK for example.
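As an illustration of the three levels, here are naive plain-Python reference versions of one representative operation from each level. These are illustrative sketches only; real BLAS implementations are carefully tuned for each architecture:

```python
def axpy(alpha, x, y):
    """Level 1: y <- alpha*x + y (vector-vector)."""
    return [alpha * xi + yi for xi, yi in zip(x, y)]

def gemv(alpha, a, x):
    """Level 2: alpha*A*x (matrix-vector); A is given as a list of rows."""
    return [alpha * sum(aij * xj for aij, xj in zip(row, x)) for row in a]

def gemm(a, b):
    """Level 3: A*B (matrix-matrix)."""
    return [[sum(aik * bkj for aik, bkj in zip(row, col))
             for col in zip(*b)] for row in a]

print(axpy(2.0, [1.0, 2.0], [3.0, 4.0]))                # [5.0, 8.0]
print(gemv(1.0, [[1.0, 0.0], [0.0, 2.0]], [3.0, 4.0]))  # [3.0, 8.0]
print(gemm([[1.0, 2.0]], [[3.0], [4.0]]))               # [[11.0]]
```

Level 3 operations do O(n^3) work on O(n^2) data, which is why blocked algorithms built on them make the best use of memory hierarchies.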

A Fortran77 reference implementation of the BLAS is located in the blas directory of Netlib.


3.2) Publications/references for the BLAS?

  1. C. L. Lawson, R. J. Hanson, D. Kincaid, and F. T. Krogh, Basic Linear Algebra Subprograms for FORTRAN usage, ACM Trans. Math. Soft., 5 (1979), pp. 308--323.

  2. J. J. Dongarra, J. Du Croz, S. Hammarling, and R. J. Hanson, An extended set of FORTRAN Basic Linear Algebra Subprograms, ACM Trans. Math. Soft., 14 (1988), pp. 1--17.

  3. J. J. Dongarra, J. Du Croz, S. Hammarling, and R. J. Hanson, Algorithm 656: An extended set of FORTRAN Basic Linear Algebra Subprograms, ACM Trans. Math. Soft., 14 (1988), pp. 18--32.

  4. J. J. Dongarra, J. Du Croz, I. S. Duff, and S. Hammarling, A set of Level 3 Basic Linear Algebra Subprograms, ACM Trans. Math. Soft., 16 (1990), pp. 1--17.

  5. J. J. Dongarra, J. Du Croz, I. S. Duff, and S. Hammarling, Algorithm 679: A set of Level 3 Basic Linear Algebra Subprograms, ACM Trans. Math. Soft., 16 (1990), pp. 18--28.


3.3) Is there a Quick Reference Guide to the BLAS available?

Yes, there is a postscript version of the Quick Reference Guide to the BLAS available.


3.4) Are optimized BLAS libraries available?

YES! Machine-specific optimized BLAS libraries are available for a variety of computer architectures. These optimized BLAS libraries are provided by the computer vendor or by an independent software vendor (ISV). For further details, please contact your local vendor representative.

Alternatively, the user can download ATLAS to automatically generate an optimized BLAS library for his architecture.

If all else fails, the user can download a Fortran77 reference implementation of the BLAS from netlib. However, keep in mind that this is a reference implementation and is not optimized.


3.5) What is ATLAS?

ATLAS is an approach for the automatic generation and optimization of numerical software for processors with deep memory hierarchies and pipelined functional units. The production of such software for machines ranging from desktop workstations to embedded processors can be a tedious and time consuming task. ATLAS has been designed to automate much of this process. We concentrate our efforts on the widely used linear algebra kernels called the Basic Linear Algebra Subroutines (BLAS).

For further information, refer to the ATLAS webpage.


3.6) Where can I find vendor supplied BLAS?

BLAS Vendor List
Last updated: November 13, 1998

Vendor   URL
Cray     http://www.sgi.com/Products/appsdirectory.dir/Applications/Math_Physics_Other_Sciences/
DEC      http://www.digital.com/info/hpc/software/dxml.html
HP       http://www.hp.com/esy/systems_networking/tech_servers/products/library.html
IBM      http://www.rs6000.ibm.com/software/Apps/essl.html
         http://www.rs6000.ibm.com/software/sp_products/esslpara.html
Intel    http://developer.intel.com/vtune/perflibst/mkl/mklperf.htm (NT)
SGI      http://www.sgi.com/software/scsl.html
SUN      http://www.sun.com/workshop/fortran/


3.7) Where can I find the Intel BLAS for Linux?

Yes, the Intel BLAS for Linux are now available. Refer to the following URL: Intel BLAS for Linux.


3.8) Where can I find Java BLAS?

Yes, Java BLAS are available. Refer to the following URLs: Java LAPACK and JavaNumerics. The JavaNumerics webpage provides a focal point for information on numerical computing in Java.


3.9) Are prebuilt Fortran77 reference implementation BLAS libraries available?

Yes. However, it is assumed that you have a machine-specific optimized BLAS library already available on the architecture to which you are installing LAPACK. If this is not the case, you can download a prebuilt Fortran77 reference implementation BLAS library or compile the Fortran77 reference implementation source code of the BLAS from netlib.

Although a model implementation of the BLAS is available from netlib in the blas directory, it is not expected to perform as well as a specially tuned implementation on most high-performance computers -- on some machines it may give much worse performance -- but it allows users to run LAPACK software on machines that do not offer any other implementation of the BLAS.

scalapack@cs.utk.edu
Footnotes
...output
This example program computes the relative machine precision, which on some systems causes the IEEE floating-point exception flags to be raised. This may result in the printing of a warning message. This is normal.
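The relative machine precision (machine epsilon) mentioned here can be illustrated with the classic halving loop. This is a hypothetical Python sketch, not the computation the example program itself performs:

```python
def machine_epsilon():
    # Halve eps until adding it to 1.0 no longer changes the result;
    # for IEEE double precision this yields 2**-52.
    eps = 1.0
    while 1.0 + eps / 2.0 > 1.0:
        eps /= 2.0
    return eps

print(machine_epsilon())  # 2.220446049250313e-16
```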
...output
This example program computes the relative machine precision, which on some systems causes the IEEE floating-point exception flags to be raised. This may result in the printing of a warning message. This is normal.
...output),
(Local input or local output) means that the argument may be either a local input argument or a local output argument, depending on the values of other arguments; for example, in the PxyySVX driver routines, some arguments are used either as local output arguments to return details of a factorization, or as local input arguments to supply details of a previously computed factorization.
...input),
(local or global input) is used to describe the length of the workspace arguments, e.g., LWORK, where the value can be local input specifying the size of the local WORK array, or global input LWORK=-1 specifying a global query for the amount of workspace required.
...input)
(local or global input) is used to describe the length of the workspace arguments, e.g., LWORK, where the value can be local input specifying the size of the local WORK array, or global input LWORK=-1 specifying a global query for the amount of workspace required.
...size
The block size must be large enough that the local matrix multiply is efficient. A block size of 64 suffices for most computers that have only one processor per node. Computers that have multiple shared-memory processors on each node may require a larger block size.
...(DSSL)
Dakota Scientific Software, Inc., 501 East Saint Joseph Street, Rapid City, SD 57701-3995 USA, (605) 394-2471
...(DSSL)
Dakota Scientific Software, Inc., 501 East Saint Joseph Street, Rapid City, SD 57701-3995 USA, (605) 394-2471
...PSLAHQR/PDLAHQR.
Strictly speaking, PSLAHQR/PDLAHQR is an auxiliary routine for computing the eigenvalues and optionally the corresponding eigenvectors of the more general case of nonsymmetric matrices.
...computer.
The ScaLAPACK sample timer page (contained within the ScaLAPACK examples directory on netlib and on the CD-ROM) has a BLACS port of the message-passing program and instructions for building the BLAS timing program.
...PxSYEVX.
See section 3.1.3 for explanation of the naming convention used for ScaLAPACK routines.
...'E'))
ICTXT refers to the BLACS CONTEXT parameter. Refer to section 4.1.2 for further details.
...digit,
This is the case on the Cray C90 and its predecessors and emulators.
...PCs,
Important machines that do not implement the IEEE standard include the CRAY X-MP, CRAY Y-MP, CRAY 2, CRAY C90, IBM 370, DEC Vax, and their emulators.
...[LAWN 112].
Running either machine in non-default mode to avoid this problem, either so that the IBM RS/6000 flushes denormalized numbers to zero or so that the DEC Alpha handles denormalized numbers correctly by doing gradual underflow, slows down the machine significantly [42].
....
Sometimes our algorithms satisfy only alg(x) = f(x + δ) + η, where both δ and η are small. This does not significantly change the following analysis.
...small.
More generally, we need only Lipschitz continuity of f and may use the Lipschitz constant in place of f' in deriving error bounds.
...exist).
This is a different use of the term ill-posed from that used in other contexts. For example, to be well-posed (not ill-posed) in the sense of Hadamard, it is sufficient for f to be continuous, whereas we require Lipschitz continuity.
...described
There are some caveats to this statement. When computing the inverse of a matrix, the backward error E is small, taking the columns of the computed inverse one at a time, with a different E for each column [62]. The same is true when computing the eigenvectors of a nonsymmetric matrix. When computing the eigenvalues and eigenvectors of Az = λBz, ABz = λz, or BAz = λz, with A symmetric and B symmetric and positive definite (using PxSYGVX or PxHEGVX), the method may not be backward normwise stable if B has a large condition number, although it has useful error bounds in this case too (see section 6.9). Solving the Sylvester equation AX+XB=C for the matrix X may not be backward stable, although there are again useful error bounds for X [83].
...error.
For other algorithms, the answers (and computed error bounds) are as accurate as though the algorithms were componentwise relatively backward stable, even though they are not. These algorithms are called componentwise relatively forward stable.
...bound
As discussed in section 6.3, this approximate error bound may underestimate the true error by a factor p(n), which is a modestly growing function of the problem dimension n. Often 569#569.
...vectors
These bounds are special cases of those in section 6.7 since the singular values and vectors of A are simply related to the eigenvalues and eigenvectors of the Hermitian matrix [[0, A], [A^H, 0]] [71, p. 427].
...magnitude:
This bound is guaranteed only if the Level 3 BLAS are implemented in a conventional way, not in a fast way.
...large.
Another interpretation of chordal distance is as half the usual Euclidean distance between the projections of the two numbers onto the Riemann sphere, i.e., half the length of the chord connecting the projections.
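For reference, the chordal distance between two scalars x and y is commonly defined as |x - y| / (sqrt(1 + x^2) * sqrt(1 + y^2)); below is a small illustrative Python sketch of that formula (not a ScaLAPACK routine):

```python
import math

def chordal_distance(x, y):
    # chord(x, y) = |x - y| / (sqrt(1 + x^2) * sqrt(1 + y^2));
    # it is bounded by 1 and stays small when both x and y are large,
    # which makes it a useful measure for widely varying eigenvalues.
    return abs(x - y) / (math.sqrt(1.0 + x * x) * math.sqrt(1.0 + y * y))

print(chordal_distance(0.0, 1.0))         # 0.7071... = 1/sqrt(2)
print(chordal_distance(1e8, 2e8) < 1e-8)  # True: both large, distance small
```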

Susan Blackford
Tue May 13 09:21:01 EDT 1997
scalapack-doc-1.5/html/slug/img100.gif0100644000056400000620000000037506336075601017152 0ustar pfrauenfstaffGIF89a)!,)Ԍڋ"&F\Ƕ rDz~نĢHzAӧݢRJ˟RYBڲ_6c\OCT7N/8FvWhx(DŧT%Shy"#7I(2):jYٔJ Ɉ:;Ys kAwԧbR7twwV(5GPy9XY"9Eʓ*s*:7 :A@i l!;] |2ϡ5]pᗳaʱG0b$YD$6<G.3[i ?VTQHEFE4Hta~VBM:4@0_1{'/U'oy ORJKpiU\wbǡ;Kwv%h߈o&aD1 "@KU+{=Fԏ|Q9Ut@OYf=R~VeX/QAjFQզ qIg>1 g~'! '' "L;scalapack-doc-1.5/html/slug/img103.gif0100644000056400000620000000135006336076141017147 0ustar pfrauenfstaffGIF89a!,cj\]Ǖʶ R-:nevL*C@ ,,&tsRܮVod9"F Gwzθ=5#>:WbhgD%7Wy(XȂs` 'i:Yyv9(*H방Yu:,K -SrÅb0OZ=rݷ8@ӅP:։`;scalapack-doc-1.5/html/slug/img104.gif0100644000056400000620000000020306336107713017144 0ustar pfrauenfstaffGIF89a0!,0Za=so}`DfyjΚd 6+㢷4ҋ^ 5M@Z/c9ݭS+"X. β(ϰ09~sg"QUP;scalapack-doc-1.5/html/slug/img105.gif0100644000056400000620000000007506336071552017155 0ustar pfrauenfstaffGIF89a !,  akNE˲|5W;scalapack-doc-1.5/html/slug/img106.gif0100644000056400000620000000140606336076231017154 0ustar pfrauenfstaffGIF89a!,ƋMݹVa%ʶ hq4y}ۙL*<]qiETdS8:`ܮj)*Lc3 s` =vrdGs@W(4)qXؗ%ɨ@bY Tj*)ېX ;;j0+%ɋ& qrJ@}ܶ+= J{+9],ٌϮB=g[o| E_ttLG˒uh(O<jS!J0KG=E\}3NhR$Q 3v9O2; Fa*]3$imʉ4ˬ /U ưfATZ!vY+ؼI%kM]F-XR/(U j,bJ;]iWo*7mGsqffSޅ7{ja}@Oz'Sow~o .aa{Mwm}gNj"x [ߊ3p(f=WץRy쇢'ng!WfaScz 6%5-CrAez=Nl)K@)@Zۜ RXR%e< Xe~#iZ礖Rܥnʩv *JjxjejZ髓Y;scalapack-doc-1.5/html/slug/img108.gif0100644000056400000620000000053406336110513017147 0ustar pfrauenfstaffGIF89a!, Ћ<`dJ9JhFZ^L.lC A_ v7(R>)iV}ͬ7S>d]0~9bΐ‡(eʄšDJ@U},Z3ΡDIyN^"t %8hOiҌiQBgIMU k̖S\56I(*[.U[6tgPkݥt+7o׬פ(W6 %H|BZsCJK:1=fyЬ;WSǷIʴ @"F$ܵٴ+֬N-ȒkR>s*G2~98q1,8oՃ|mۋ?Ww~V7fL',UR!%v@QGt]]uU=HŌVw#AhUwigb`Kه#>hSlE>\ẕwtĠ w$2dV}dn*!!EV)]@ۼhD@(51J8" q\R' jaX"iߡnHxpY'S}^{ZNrq&(tW꥓hVY6S)Z)wc>v銌n㰽i~Q f[L,8UGf9c+WVSm (e]-f)j`90iQ~lZ-`ů23*yR.4$ŕiaz1!L !&1pL22&r51,#!LsGs,MtŴkbsIMGWm,/k ]`8>J4j7c* rMwvߍwzr} C :]׉11مƜQ4*!z&|6MkB ^=5b#+h虋%Yo,To K$|2$ T>ۓh4cs>wߺIrNXsYCD*'I(P*Ys{ԓ"ȯ H* `l"cQkm|~}Lɔ)oGU`r}b7k"9 2L3Dp8(Bb82~QIY דΩ%DQYd+jgjUqHJA9gA0o TvuB) 85ð?:7)G^\"@Bă,&E2Jv{A:]}^,H&N~K9$9\, gG%(H ~T,cj*KeqicVdXTnD#( 
]W9ISϖo"XO`0?4*yf'ZK2xs@Vya5 hmgZ 8NYSxE1BVVEt|#L =`T)ƲFKmI{-wR+ED#Уc '*qêQ7:3~իME*#UVMhFUmekW*1Ǯzba7vaX~m]ejvg? Њv[";VBm-X 6nuCmH[Ǫֆed]+ע"okYkeaat"ֈEls8m}.u+.^E/ݒ@z[:㶌hG]%ݮI*|I`6ǒ 4^ ̟-ԅ5lT)M5XU|JөS0oC[m0Z! e.2CCko DC i]fJ !!3ۭn<]e*!D_0%X'rEp!m ID^꧁ZV_dv_{RCsS}ywn,8C-(O\V#58(Y\\wM:5BX:)eLx\\Wf\DiO\=\RX\yPxfZH\dh_j[l4txvgig,XN%Jx;󂽃 X(i1G@Q!^!灁H^I^%:w(@<^Ոhs}q &<@NԊp{1fOD `}|M&Gw(7BOM2aPN=dӗJTy~Jg7:>k,1fydL0ΨwG7K(tV{7z6\צcz(-,rGy4=f!F{m}gWkGcdpxGvX`hkwr0hF4 cU%UC-n?2'hqpU=<kyQA>BTْ(Yvݘ︕\v'haVvuL4-DL&]*xvtomC%,Pg2f'r9hzj3{JL.;3+N Y<Qẉk`f6{! jc$&,vkk2%<J'<\MDWɾLnjz bHܟ {%8K#M/+TP !7N\y9o^y8-\c漎zɹzL+t|;qW$B8)m@<Сz+yjϗ*M}mQUZ!;wʴ\PJ|6 e)3=ʋLWu.=Q6k穱&}zѣИY;;H=Vi[}B$DRHٓ5yԲȉty@7Kb@ݫHha͸h%*d_=dżCcˊX)#s\2ؾؤT3->V`[S(؛]µ ڢ]*X;ڑX$ڠ۟hP{ڸ;۬\hulܽ]Cxcܣ|]9܇i]ˍO\0D]ep p] {g^򽔏 ]$5ߍ᫥L;Cw`mwcnUg2>A}p(]d;'ЭQeu0n`u[\x2='uEF *ɉ9D~ܰ:i6.ec,h^^`.4&QIxhӬiYvMX/=Zm~/!vfA}ҍZ97l>zt_0ѓNT+US}rEFh]r( ͔6~my q"6ЉIĺޫ~O/m.$hμ^eϲϊڙn߈nj'ws$_RlȦp=㝟 y`X\|.ĮdGc4n]ιBJëo6xQQo,}ܑO?ЦR11^̣;+ˢB*q ǻHAPB  ,Yn/kSy#>n<܂H N6CMM<={-Qb̤L(+ 4 CfA֌GƴO,dMx4JQs9Ժ? /O KnH=dSխX} N%P JQ9eJG&4Ymt9RVq< "^cE͕PD*d|uAow; Au֡FeR1]]]NTL`IRĠ#X-+UgsIBd;2 1cث_EO3|?Cl4q]h,xB S!iKW Zv>^ 馵?dԾ>eeW3"(y'ӴuZoGhH+ vgOZ1+b2&Ѩ.0&Veav+URV"FuAUM5 VJv#(αDdHf쒖I" cF=BptXJE WTd!A P&yF …PGYM\v+T{Laц˜`篥,xdiVK7_ K,p )9n_fŸ&-T{'8~둣<# rlO&]8 %"rj&*Qʹ>֦825󀚻P1UDBӟ-"^Inr%DkJAsG2DA)S5ekOXq_)BRo~(`/E1Ulb%+Er,bJ'kIL]a5EZ gh ֮̊CMnj5$͍M?:|X ?wiނ2;,}bh2bWtΐ|lO Mlgƈ6vxtk ֚4}G,5  uRa hGSp/B7 jJoUpHǙLa_FH18Dlu;Tj;&- r[/88&3Ue ;Nk3 6ۜi+ xFENʠ= 8K:SߥݗoӝAjQԥ6QjUխvakY:Q3iLC~c>gi[ \-%XucƵ=5?!phYu0ڧ2lXs7v!mAnw ǾAԮR/߯0w.5)&굮}rDO"ɋʈih 2 E|9 }RV۔(ɏf .ZJɕS\. 
vykaCZ!K*+w\'v<\-?ʵ^Lp{+fRa#{$TӨ3|8E`G^`eXLMe/O >JQdo2TjdDRWjFƞ6T$=ɩ VZ,0殯,f.΁Yf Fc*QEoڀ/]wNl0/2ˍh"+V֤l-*HPRnN%Tqxh-Fk6 !PKd˾6)-L8LHroAEH`h03:pYZP0PE?$uLCHNC̷kK0IWTgkfL<ܦpjHhHBLW*΍><*,I XPRLm2sO>n 05 ,ȺoS@Ǔt2:hF3HP!I3 B3&s"t&4jې -h8l1/#mr 5 FqIiA<9aN,#0FȐ4AJC0c-;pL1i^LKK<:0S:TR4fiNc oOABZ4p  cDP:uj15OPO3pҏ0E$MuA/>週 Msm*p{$S=I$?oWF[K2tU#hԅ?`V2ZY+crHKӼ=TuDGY.7'QapXuM= qUr]]qJ_0SU44MԎ,C0  ei\}F{O"Yu&Rˁ.9i)Q5]=qISŗҴ+QdBcPTyPщeEpMTppFb9bmhcHM& ACе=CeVCDUeVָRACjGEgRM|C NǩrQ0ikEbo#U/*UBTB06ru i϶dq`m)?tIxh&OOfJ7k w@-H)IτS_:P BHU" 5uqH5hza"x1PF#et[ .!E=r|x6/8idO2~@huTOUg~wԗ8},mR_1LR7yi-.Ƅ5e32p/ǂ+m o.Ix/OXr]xwi-uxy}8xid%؅M/QXʖ}YsWҒx,q/rgDچB1!2YTߐW5%2ݘOsuS-*3JSw1RXGMͲC8ZxDؐn7y:.kSy5.9/(a$F?k8Pr3e[n+ϵh놅HrNCXeD>/gv=|Y8r1:DjW678{h9cf~;i?S ]ծ[9af3jT9PﴗXFPYEijxIMVL'eSG=ژ3Zs 1;QvP=UVMy'XMu\qBUǐ{XUٟV 5Z֝LQsy.5|ZovXq1#hөh}Y/4:k(Y;uwU[rw`җԘjv%WѢIӕlW EM{U;t-P#ecIwkκ-| ?/Vt `ʹSXzYյQ;;7=$^  4wvkSjq\0)йSGKMZ`[&;QּSڞTSW6KږsaRoMF[_I\d:U:ߐ6))V:tgu zuGuFرF #G#}xQj(gyl39u OʼZ|wy1{J$KfҌLrs㻆8kЉ{=χ/Kx(8f],*5}'݆3YSGA+.2&W]ұҝQ,g}oSv=Zҭу=9ףqU10!س&ľ -%Rq=6P =ʯz ,3O;3ϥ} Jj#TNyVN;WoAy169v6=Mw]z n2rPߥ./\8l0,WY?3T:ltllj (kƘ`W탡s#?p{iC:-˺HeSfe;dݚW huJXdBX  ֥X)G>vq~##?q td pTdJ\[Wtl|=*53΄`)ܡxƸ%@.Ეw0bJͅ<%muG:b qbA ?aeqwCF%<ӊG!JLȝ1čUDh0o,C.єe+ɝZ |q G ^P \i $#Qůb#fg^V,5H*Wz&8)ά3WDi9e 玓LL36( tB/b$yOʐt+!D^qG Dm(;9N2)@J&Nt*]i/YPp S"iMMS ()PO54m:rLZ2Ӫ*|Ԭ>S+! r+%;6gUOcY3 ?jVrPM׾*mG2,e{ٮj*Aaiګv^AЎֱQ ڸps1>a/ʵm270d;\ץjH`blZ꒷K0r$xZWm܌nڜ\;_lk[n,a&- G=qkWjs֛+ayi;.ؠUtfřu%ڄ`3tq}O)yuŌz Q#_PNݕ{d);ke2LKTrnSV l7@pLTAS? 
e)][ceq)<=5vU"9%_J,]ϒ墉oTGl`ݮM]1U*ftQⲑ- )jC@\V&HfZaaUpir5 \qb^ƨr'5bt1gATR~)bڰ\q\a\AR{eWghnk Nwnϭ,0g)f5( Fky2PZDn1E<>By$f̝dg ~^I*ez~gb`2vmr$=m 5~,.#pF㈟f^#vSPJ_V#,a6Ҽ'|&)r&n SFZoݏhݏ>EޚvI(S- !#GVg8g 8j~1LZ-Ij" ō(bf!"e^g+h6:퍖Na}Z j j#lr&'SFeFU!]!xjn!R*( : +.9W:" ]ςqm*Ra9K^5LQk (FD&i'b~uܣ gj\_R#k v #>Bw‚-+5©2] bFmP(t-d3ʟ18B;+c]m[8&㠖cfĶ-ҥ?"zveZ- I?C榛TI@9fe1.'s4sVe$z1>4dj#aYޚRt4EgZ_ V9rSY xz:kb0zNFQ$k(;/0,zf`첈4a䰯Dtx~kO[8uA[Q􄵡uTlӰ,g1r#1 )Z XopRt~"4 6lvGтg>mР`[;mu 4Z'T+.r+WTJ.v-){EɭHqM/m'j5_sutGV[^vӶzqc0Ic"c6ٶV*_ &[m/9:(IS\5uL{ަ`ww5Ƥ|#~5.oj.\p=.@#e"R.x߷Qn<'xsm2$o//lwO$og"go)yy"29SV$XxIrW^y9U$YIj%9ooZӯS5M;4M$d$|`BcZpfJ7ɓY _20K:O2w=npTgpկ&ndJ ~޺Ě!  + 0;.ОY ~]~zG&wZv vXglf'Ŏf{p= o򤛤!i˅ sK{̪]_{93 g]u uk\O8;\I»ur 1kv4&(:kL򶤓Y27b 2-̳%*V-:C:+"}3=wb*= ZqzqҜ Kr<;=w~^Ð5 q/`ѿ!x`^.fcߏi(Sߧ%Ӷ63;dXv7^tÝv/4=~~7O`:OjF+>C>0u2Qh43rL|4l=7 *'u3wֱ[J͚D9cα"G!AT_cO33ì!q5ҝ7gZU0jC^jh OЦ1%Mm_uiaid>w\[S"RbXbs zJlTY[4 <ܺAbCt{ 4*:ěԜ|+]J|,lżU=]c~8bRB.Bohvn=f4N5X/Z*h;h\ZGBDӥ V[) cr9Rc5QZwv6B&0Q6tשx@)Ums+@ uSn1熞R= M *4:rR"FT$!Yl`>Ydv+_ƜYfh -hȞxfӜSẵدk洚n޽\8Ío-~\˝3|tnҭn~]ۅwn|7z9޼wn>s|so}~ SA B4\ ;| ED MACqE\Ԭ eLF81s1}l,ȳN %KKIBB)Ϛ( *+0ȋ.¹1 .TNs/:;p4LB+;_TҨ'!rq",(-Mh1UDŧ# T7u65Ӷ(r$AS/tl"% ɗVaLcUFeش 5蟏 M6TWihΆ~ՔZMdV퐴mzRN)HlWEz7Wҩ ߈].lGoR6 %l})ʹc.UR:㑩*UU=/$-W+˙eUpyUImQ՘7+c!,{p":)G&.F#Yn5i?Fm;NNEt o: :#JzU{NJ-dѥ6ʊ]=˿nH΅C_}]7W-[Β VbiRKy5l+|}yO Q:eIZCT?SShvx7.w 5=f:#4+Ӕ5 #2- N ҙȪm-CD psz1XI3mA/ D~ SC#2j[GTކ5.^OCSmeu:#NΉt.yM,:ҪzWC dY^QNd]Lw*!kK ~ѭLs$%v<ơ$I!dhEP~Nj=Em==ZSDA#hBj̉CgFmPJQJS)t-ũ@cJS,MAҝ#dhZ7T!J TQHʚ4,c<\NUǁTd ZʧR}*WeYTӓR_}-\T̴&W8BQ!ZѪa#59N;Jd 3:u2aWϾ6r=BӮ4z!" 
۵D}֚GU"WVD[*f>C+ڟ3;":Ue}-p\":m܊1lmx,>).̶I~R')%U-eR$ Uʼ[|;_2s]^AM|k 4wKDfUر/ރb] /T|H](eMqγ PD?Y^dfad.2VՈX$ǻ[ 6[VcmlO[kՖ~-lĵbhZVXhrVD*]tYWdc_mdlsK]NM._q 'nzsCk}vdpRj+;vnwǭ"$bOq6.%Ƒ&CWPdL9{I̯2ϯ$ZQQsbUW{jV0Sa9߁m*ԣ>[E}sY7s1f^ |؟fv#C]vFFNuO: ėG/Տg=} vgic@?u\k}z~{-j+?9r>fe,:uzg{,;תٵծO5C$KnaXC2,b-U D s2z<+!#SS=Ժ0މA3qA`30A؊[p۹g 4Ldі -tAl4㡢f80A+.#7,Ԡ8mQ ð9C&A~7!y8%d$K K1xCs;4DqAT0DG|Lkx#"8,Q|Q>QM$x:OĦ;51WR4vDX6QEDE)k_D:T^8/ ;*V1XER3גE"m0"tNǭĔJ-CP\E<+Fa!{$4[L_Yܥ7r5qd<3Ÿ|܄A{H&1Y\F.pºHzF3y7F4+&,eKH,7,ǜ|!ɘ4·, Op2h}tv$s(e BPɚɞJɝL 3Jy mEIJxFpDg0CAlDqAr6T4NlMY;iOdTO{:+FP UF=D1PϻOKSO Y>S>;ZP̓ KDUeuREP4Q޻',R#M5[{&@cAcYP)?bϫ2_ ;??TN99[S:9S9TSґ̌>uKi*S',7r,j @R}SIe@Jm@Tl"Wb1D<ۢTTBALjU䠯$.aUw1]}βB|VeJR6mԮ\T8BV0TTsCuGb#g=I֒?O:̾<$3v9ì8WEDO|- ˓ÜʹD,|T jW#H, t@~@TU\Tyױ4#E̸14rLN}GMS3J%+2 XKCM+ZqU1!~ŐԠY1>7}p^]A_BS}:̌Aۧegf6NS<5Ӵ3;zv}5[4qvCR+Q a tPO@LA՛}J+AQTZ}ևM uYz%ޡu~ʓEԏ1dޔKՌ;?~ݛ^Κ.EXKaYǖI[͎X؍E׎j,S~jUEJ_$[ٲd 𳬪~#7e9jh:_c8ZZI_ߛD[L۵L[k-.|9/8ts5[\ 14N׹i\6>o\X >Orܫj;~n3Sc^[VH E#pA떛ݘKbBn1%gq,n7_ U`coS:H+ފ 1:EqtEjtq l3K]x|\1+Mr obK:exo9Js_ÜٽsV+T}j-?-E jt ꘹ |_/zsY-v{hGKr,orBᣦ#>kD/YQVu{sATq]f'>(^򸒻 ޤG:B)FM 6?&guok3fYNG{S8,)4c乧K:X$6C&xFSed On)UKea^RcW!ie]5J^6j~fdPvfe[)/gdLcVU-eyheyRy/eeycGl\zg_ P6>zc Q'Pxӫq_V{/>clRBN7ᓷ4>{xyH VD u:r W{I^FU]4ֺjr|?}_{Su~r.3pz/}*c}\n%.7w|NiBi|g{O'ǣ] U:^?_ezH"8gz EZ\Ehlp,sm߷|꾠?Xb"Й3JZilNM7L()*Hyo7<+svz\{i`xCGZ}h?.RW5/)bHX=B|D;ѣgqٲө0,I$-H3NjpNE2.8A%RbLI rCMܶD5}̙?+yC;6g?}*R=I\r ZhfkSOaKVjPk.u?w=X$qqMZEUvqjnvo.iZkvfQG瀣C!x 漌׳l͹w·$罙sȗ{~ysSϸ}5∋|ݠX?|yP[=|سϿ(h& 6F(Vh!.OG]{nX Ƨu@i*Ȉuٗ]S}dJCi<-dIbQEy)$$Bq~We唛eVzY OvxȣcC<8MRVELTs egAFQrQR(i褔) 馄JTJpDQuM!zʨN+6u#jƮ?j$5Q;F5ZvٵR]*+|鳝zK,]넵 $v+;,0 {/Fd{kU Ml&Qz#b$~nUklϴkF$j^3<7\"cn]tΥŐT촥9;9tT_K vgGnmkN $xo8fB޷s+3ĔK~~>fO%j Z>E˴۵[|䷀vSuo&N=狟5OKJQN]d[n\3 Ӿ lr\"e]p[藹OS#򐶷ͫ)$ZP6b*aĀV >nX΁Rya!o&, X\XE *ci (f85ukS#)G$GH=`IOQk)YJVynXKLl,6W|ylk7L2IMRdHiejIu:"H"QNjѮʀVѝԜϬLҤ&l 3U220VmҔȊqӚXJگ鉓wq VW6ݕ4]c-HExU!^Z!1: uDe])Bj#-GeEv[*DDeOјҺ=VwP_P݂W^ 3 aQ;Ǎ.hWg#ݷ@ի",0~ۊpk1V|ux)f_&W"V!YaݳU okՖ2g0V z{y[֦W}:ˊ `>z־B[ƶx6DhdW+a \C[sqQd8ҜP«iE 
v˥Q2+3ee\稀3fęOBcĎcіS4Wh%7[afm<3{x3ְԩD(7dMػ]-+qm4BdNĕ>U]緵vAf c|gs\Iiw2Ǽq+ k/ ,Rn6ح 9!.z_Y| OYoiF*stO譴 LG׏Ez 6VT4r{3Kp 6惬yy]cMICџ~PJKzG:NG-'뱗祅4y>:~?p_!?<Øc=i%E>?\\kX#:MqoiFy~|)!Vڱy~ %j Ts'{OiJ+(oT?IZ~vn-S'TgSRw~EeGeA~ HT5;WU(*_aTVs("|ZVYnSeʁ)+ȀuOf}DgW}4n h{2S$xh^C(FHBiKxpfIOGG_h=. ،qR oywQ,HY&䇒! (A )i?EU9!h%q.=h#Jwh>SIJ>jpvz^"azcbzbcfxHIJ¶dtw[jl*6nEwr&zeg}zs\.ok `x9eL鲜͢Ƥ,ʭ8+Hv~K] Nj7+ۭ @9,^F7qDYt4vֳ7 Zֈ uvwMцeyZe}2nHw]-+*֐zOA=Z1]]d\@ _ryjVF^鳞7Rkۏ3=.!_Џ 5oӼeK9xnZ)fZsJ3CujOmu(K6{cGTBn.C77 ;ϓۓ'E%kP _=w27)Nu|DTη_HN$dO?D̬*- 5pP ALKDUhKL`qiEˑGzjHԄ,($AQl2'p dI)2J. K\mL23;RTM-$N$3Dԓ>@MP,C{TOD;s44!5F'%ۮ;ϻ!^Rn8TRCT ϐ8Jk>%uVY KT-K.&ne᩵[]"$2l,eVۮU,ٽ gu,`U3I!I'[Jwyި*2`vyq"`%T[5ޗ~{ٲ"VMw,XXe\~y(k6A }c$9Q`'Th}u!f4ѽ.cwZиJ:IݦlcKޤ;+&~v"x))Q~j4Ǭr0hV7&B|B$-N X,;:ǭs5ដ<<[0t wʳ]L\>ţ[guÑ٭fåtn+K|qORA|k×ztr`^@q~wI0_@Rg'}76/q.; Em YX%/NchNx@%[LVO`GAp.@E%$NHr-h 2?tE+N%CDy%-b`8ӠBj,XT;0F7F;q[UwL%CzF:WKlGq=V!'a5#s&1^$тaKF%^8q'U藫.oDLsE f&!El4GfZz&nzȚ8Mru$ٴaLO@ ZP UB;scalapack-doc-1.5/html/slug/img112.gif0100644000056400000620000000030106336110453017135 0ustar pfrauenfstaffGIF89aE!,E˜,saa_i֛֚R]su"R·T DlI15j WVPCj,M˙nu<νwt8#RbDxrf2(G yI9*:JZjz +;K[+Z;scalapack-doc-1.5/html/slug/img113.gif0100644000056400000620000000030306336110474017143 0ustar pfrauenfstaffGIF89aG!,GVvݼOx4N {Ylں(qOuPls>oG]ZJ_Pc,B stsvcI75&xh#ayaHbIy8 V9JZjz +;K[k{[Z;scalapack-doc-1.5/html/slug/img114.gif0100644000056400000620000000121406336066762017160 0ustar pfrauenfstaffGIF89a!,π Tg~yH扦ʶ ~HbAP:M|;~]#X!##S|J3L8B85tȸ;SbItMZvUZ6x7w'gxe(7)YD0əZw%tAVĚZ4td4$TW+&z5 Kyxhk]=KYZIkNiŪLEt /-QKVN>t_dG #HJ<#>]ƌL KQe’>Bd-cgEtTǘ&q6k&#8b<HH:gKtN(e>{Vj vlAG!6}]c*kjy{#fɬ#j2^Ƭ\lwynSə-Z)ϫC"2֢S7ͭdz^ ~k|_ϗ'>_K,e-ȂgeEP6Y%!~V6xD 6] !iW)!\y8o 6!!aFgQBHՈ'Y%h*ɉ@_5 NhdJ.`;scalapack-doc-1.5/html/slug/img115.gif0100644000056400000620000000107706336070350017154 0ustar pfrauenfstaffGIF89a`0!,`0ڋ mHfW L܊\LL*;C JSEjsjcKܲ#? 
CoFPO\Iᕇ0gxhVr8&9E֩9JjY Zz#j*$Zk{[z[A[I{e"6 |L flш3M(uIF.]^v?v?17{r6Gy /=Ϟ) 8 px0Zͪ8EXf %ēIҼI2Et'rfPj6DE& 5J4!PSwd33Qʴ Œ?Cf*t;Ζ~jxKvvva[PڵU)/ ٣08ֳ<i1.Շ7Νi˻ xoā}]^w͢[גzܣB*II}ӻ,d_ȧ,_񇜀Z.69;scalapack-doc-1.5/html/slug/img116.gif0100644000056400000620000000111606336070370017151 0ustar pfrauenfstaffGIF89a[0!,[0ڋp}H扦L `&L* JIqjF t6v o WwgxTxȘ7hy8Y#eib7HyC fJz#:*$ZZ68 kBЫJژ)cʙLyłz+(MI-=h̫=|A.|Y~cN;|֩_ BN|\Ylt$HC ɑ%FȀ6#_Ȕr8RɈΠ7Bc&8F<(h6*ꤸ -g>-ڥMzlu5+imQ$ܲ_sV0f0pt͝^ₑ:N1R4Ѯh:aԌsҾ ++83dbW}+^~_Q qjWoƞ0$'a߯j#EĆv][JÐѯ"Dq(jX;+4%ɘ,)%6U2F+s~l L!N.ur(Ish TTFIf꺨石mX"S:*1h]ډ ~%.?w6ϖ?* Ӵ5f [#cSHۼ+ ?P;scalapack-doc-1.5/html/slug/img121.gif0100644000056400000620000000013606336112416017144 0ustar pfrauenfstaffGIF89a!,5k+N aw?I):ZTfl=*I<ĢL*Lf;scalapack-doc-1.5/html/slug/img122.gif0100644000056400000620000000014206336112435017143 0ustar pfrauenfstaffGIF89a!,9k+N aw?I)znزtXd8قL*̦ M;scalapack-doc-1.5/html/slug/img123.gif0100644000056400000620000000016006336073002017140 0ustar pfrauenfstaffGIF89a!,G˜oVB:j8b"wfOzUri2z 'v#@HvhNd֨@Bς(/BvunHK{-;p?lL&DnKJ:\]doiGd*iF^st+5n[|L|a.evL[ǂk:Dp,okQoÛO?wjV%`èGOP;scalapack-doc-1.5/html/slug/img127.gif0100644000056400000620000000010706336073264017157 0ustar pfrauenfstaffGIF89a !, L`kk3JV$m%ɶ s;scalapack-doc-1.5/html/slug/img128.gif0100644000056400000620000000031406336070506017154 0ustar pfrauenfstaffGIF89a!,ڋ؜Hm 2Ο byj nLU'j *+ NjmDL MˍH5dO|aFWh86Wv)9򗖔Ex7 9*b *;sIcP;scalapack-doc-1.5/html/slug/img129.gif0100644000056400000620000000101106336070525017151 0ustar pfrauenfstaffGIF89aR0!,R0ڋp}H扦L6ڪvuɦ ,umN!- +cgn{f  '8gwHhdXiY)yYh1 jz0IډyA kk[:[Wz[ٕқ+ۨ+ \7M|}Le=\-am.C\,EY H.~#@j7#>LЀiڞ fQC$Hc,9ed:'QPɰ&%94cB.I/&̍LyhQ͡Q6%jT.!* ^+DhT'µΦ}kePp>9 1hX|ml`RbմK8iRV+9sg A ZBΗCvynӬmִVڼnp|݌[<羥[Mv~oMK;scalapack-doc-1.5/html/slug/img12.gif0100644000056400000620000000026106336073166017072 0ustar pfrauenfstaffGIF89a=!,= MhʐkvIfVT;K4p'wyiҩr6UF8:H757hN/iE/2GwZ)beHxh( Fiy *:JZP;scalapack-doc-1.5/html/slug/img130.gif0100644000056400000620000000021306336073635017151 0ustar 
pfrauenfstaffGIF89a!,b m(hmaQZRy\,vU}ک, *F)7sƨk*25c]~.1+tħu/W;scalapack-doc-1.5/html/slug/img131.gif0100644000056400000620000000064306336070544017155 0ustar pfrauenfstaffGIF89a]!,]tӹVy⇅Vɶ +t2}Pä>Jg2ܰrjmC( 㠚n6%zɱ&Î<M%N4LE^Ը\:)"MGQ@h*z}"< `=Y*ܽFq_\w,;scalapack-doc-1.5/html/slug/img132.gif0100644000056400000620000000164006336110156017146 0ustar pfrauenfstaffGIF89a0!,0ڋ`H扦ʲL Ģ̦ ~JjUKZw ㅦV oy|گwGXh'xx)i4viyiY#Cj9g z:YKB[++gK I!ܻ<(Hyͬ]]I}\ʽñ>~~>?/ndJ}S[C~01Ӈx$*0ɦozGǡv:8z㬂lLf|OQ-OWWĄܢ:(Ak TMٵ\4i.bdɘ"*i3RM;,9+JDP?sqjc"= ITXJI1@2GiĄZV zݾk:,µr}ش vᛋ9v3oV-SεSdxd5zqj.p5:rN$UZؿCFanaҞ[r#^|}}6r:{m;k~'vিv-7h6oWKO=x)?wJF܇WKA]Am7TMބvaTH^˵b܇U-uh64z;H#HDq^2duLM>IeT/>eBsßL~STJTκVn q:s1TMevv.8o(8tG4ShVXiA'0iy9JƲZںy*r*{zv j[,;\KlK9+||,- {k.΍??1H@~> |VWp$8=S!Ǎ௢Em 4x"$9eL郥W0e4y:4D v(qjJ&HSDSԭB%UdVRe1^m뺤_^v^;o_{`;L'.Xnǔ1Vf1s :UC~6;f Qױgu6Dku)5o?P;scalapack-doc-1.5/html/slug/img135.gif0100644000056400000620000000111206336072635017154 0ustar pfrauenfstaffGIF89ad0!,d0kvzH"@W L|LL*# JepjjcKܲ#sX` ˉoSݝ| ~QRWh'v88A$yViy):ip)F*:(;AQ izJ[;ٙk닪Iy]#ɮ$AFM-ߖeּ0DAo"1Bm4*ư7kЉR)q*DRY`h󗳜ؠ§ć:w26P*" li=;ZR- f^gOꚑ|&吂g<:dϕKfFػsni{Pz.ԁN^ OZ#_ `_ H n >8C~gAHOt,W;scalapack-doc-1.5/html/slug/img136.gif0100644000056400000620000000062406336072666017170 0ustar pfrauenfstaffGIF89aV!,V"`kbFiZi-·xE)?fػ %EW faDmX( [_H[VOU睛mf;#8uWHh7GطIC鈱(: Ȇ7 juieڧ8 qKkSI|,;k֪CL+K̇]v <yj]UL,{DR{TG vwSo#0[̪s1% \lCqHa$QͦKMv| 'TBhbI{ʩ3m9t?mT$A;T<&RUK_TgT&Z9n];34] B;scalapack-doc-1.5/html/slug/img137.gif0100644000056400000620000000100706336072714017157 0ustar pfrauenfstaffGIF89a!, `j8{uH扦ZavKjߎ|"@@LJS\aR d3z0o1uu?;-G'R7x8gHsUvYsyӠI4w$d*6hVJ BJ$*h(IEw{|r[&k{|Z:ƜgڋrX?Nj<'DK7J(ݩg)ޮhf"Ct8Cѡʎݐ}-[¬Vnݹ]H8Rf@-B/ʕIdLQYX IMM:cKAOɚyM&fק653Yme*f\M(_MHŦR*n84/ǥcv9tdmqTiN,:ӰRGufsan^/LyJc望t fi w,B; 'qcFճ6 3Mr"G A~B:0i\9H: s$OQa}HO{u>WU_agjKyaR-{C)dXݖϨqP*+}ҟMol Z&f̖5&o^ZOg ߀=QO ]WRxz'`e.tة1jĈ!6t!$]!7.QN+W[tZxct H8ҋ- h]aeI=%j 1F$dcT: m1mcuėLMd@#\$Q"gz%%i>(idzhn^MƥXEYW 
|VzEwDk.$穓tYl')5Q+p[#&Īױ&뜳R1VmJhk-+.kn;scalapack-doc-1.5/html/slug/img142.gif0100644000056400000620000000077406336072760017166 0ustar pfrauenfstaffGIF89aB0!,B0ڋ3[Hl Lkzw˙VL*chdJԟ9V yr@z"rBy|>Q7xg1ؗHpfHY9Yi):4 jAz*8f z [\gk i;% M[9ܜM!}nEIy$d=nER>3k/|\'4c.'ԇC f񢼉խƉ-7$%T%AgAKY$ R?3:45Ǝ@ B&#E&C#D[ˎ:lk'c۹lTE{^.ឨ70ӫ)ڻ};Xǔy&FZ6r3΢{.<:5hKR [rdͱcrA\^{ NPK$% }~h5EM.-2*6†y7x~3}gD׆8vՕF(X׸Hav'IrIWUi*BiEPYV;y(t3Ytjb ,-E8m{m1+~CUnS/?O_oP;scalapack-doc-1.5/html/slug/img147.gif0100644000056400000620000000015506336100263017152 0ustar pfrauenfstaffGIF89a!,DzTF[ř0up|rn*q2垣`H띌>Ԫjܮ;scalapack-doc-1.5/html/slug/img148.gif0100644000056400000620000000077406336110775017173 0ustar pfrauenfstaffGIF89an!,nԀw}"扦PǾ[,tzr-BXL1lV* eX }*Lflc[7߇9|gb7BǨs#9"Xiؑ(IQIZc ղjXG)1z [j7ȅɻ #WplJ| 4jlNY+cL@~l nץ5ܺp};8b1,B/b ߲\Rz[ Lv$ʡ6Uv%2Ϙ?OBC(!}N/l4UR$T#VqcUx*P;scalapack-doc-1.5/html/slug/img14.gif0100644000056400000620000000020606336107123017062 0ustar pfrauenfstaffGIF89a+!,+]kxUsJjlVa8 ;scalapack-doc-1.5/html/slug/img150.gif0100644000056400000620000000101506336111077017145 0ustar pfrauenfstaffGIF89aH0!,H0ڋWH:l L+zwDJagL*L 2e<6j 5aܮ7Uc1~P> APlǠ~ughX⧸H('gXiyBI@Px9J)sZ **+GgQۀ5K\:K*yl $|5TmSGni;l}^S-\ {˞@rr n@:dp ?)FCsv2w~f('T<&RdE@ݛ ZtGF8sz <[=ZtjLG9}M"͖d4*?<_j0.vTI[؄ӭIb7ɾ*8TRV [x![>\2˜d6qn<:0L_Y~iаk/vMhOgO,q?*fr7>\uٵSݽw ;scalapack-doc-1.5/html/slug/img151.gif0100644000056400000620000000013106336100741017140 0ustar pfrauenfstaffGIF89a!,0˜>pͭ!Ԥ%%i Zjg91dow4q}gW;scalapack-doc-1.5/html/slug/img152.gif0100644000056400000620000000013606336100777017157 0ustar pfrauenfstaffGIF89a!,5 YKMՏl5>Bn#(r(EŗSx:r:K;scalapack-doc-1.5/html/slug/img153.gif0100644000056400000620000000106206336111527017152 0ustar pfrauenfstaffGIF89a_0!,_0ڋ kHr7 LjdHL* J_pj*surСa ;iaGXX6储g8$9 y%I(ؙ :(ZSIQ ;Zj*VQ{;&CK,q, wL=-}{^l%NF-V_oca9{w@{N(G6l0SA5PI{U7?@#/f3aG"JNihf39ѢBˡ?@,T9V;SOܼR(ǏFdK:^l-촬﬷[i+Df.sB6J~EC(&5m,^*fB6Ue4&ܽ\F|  p{88\L(K\ұ,˘;|яD#逩';scalapack-doc-1.5/html/slug/img155.gif0100644000056400000620000004250206336106556017166 0ustar 
pfrauenfstaffGIF89au!,u ڋ޼H扦:.L Ģ8 "̦ *Ԫi 6z2ulNY4 Ji=hHXhEuї `8h8wB$i#z *H zH;Ɉk*{jY%) :ŬFr=a╓dI (q,<7iȽa̡=KA=^3^LA]P5I^n\h^Җ b뼝Ů_N~RqiNQ!TM[q՛-hhcDGþuO^#*0n$]8֞ZUiQ37"vOy\ (]+I笀+2ur,d=Gj:g? Њv-iOԲ EkCY]Ve]g;7-6_8Q|-cSJQocaٗ28*0tRA]E Vj2.uRXt^ŦovRw"1P(AV Eֽ>I?r 9'/K{l=}ICP?\ StlqpZٓxd@JsrTr⧋]6QIE0m9GjصdnK^rsgƫLolfZ2nX'dFM^pgIt&8،/~&vp 90ˑ$4*Ek(˜DzCjVҢwMJj0/мhs.Ո|~HYDf 2 K2z 캈V"kl @FFiP6@؉)$o=@ns֝ߺRmwڢMs!f("LZhs$sLWml_S@7]m9Y޷;Moa'\[7ɔO'y֞;<{WaRK^i:,-:/ C0t:9S7yM{ǎy``fYvRǗ(h{b{p:MULj1xpkg篇C݄ߴ`\_"@'>Hr/}UdRJef&aåeƼa 8m9#r-_=w|o;_4t zПFRq%'7_]~wD`V!?uZ$ohj|al2}Eg|'h1N@:gz,JSFS`wExRT3 8c!&dQ'Kwj?gBwrO/ !޵V1-Bs,_6i:2v6%xt:$I4]fSf_ᅅ%x慤pJFCAXBKxKx"[V+4.Pccpe;69 @R$wExSw622.>dc n`딉pvs1=xOncc1ʼnYUx>7vff{p4{%\[ȌuPՋ^Yu\1v[x،[pHD[荋U5茓GWh5hhxG3DX{r%u^>GvO^ES4S_( ]J^_ wii`vWBv"}jz$oзiia*f+a2ޗln0"K“-)h&9钎i%iS.l8/T7gܧ}t]T2z]yVd`)~#&6Xt{3u k$SI<{d{&Ui~Fͷu~Y i[]o֗ aGlh(hE oؙheE0) ^DSxrv!Gsu'7a&̶rV)epYl+Q9e0VWy!n,GHKɄwtKn g[4P9߉ޙz%wg!I^ǞC9|Gt~דėaun009FUdy%sUhhlr^ʵŕMWt\{sMHҵSVKe%'Mx͂?՛T}wz≙N)Va#*sFvvQ*ȣ( Ov2x}tu+Ƣx?(zsigAJqNx:mG|:7>7mYh?I}~ j}bz I{eBfrɩ@z}D{Jx:QvQ Arw$DrI*-W7Ȧ%G!'9ʜʚBΚ*a*8a)FYzI SH>h4JyIIY(jKlrZ)O iJADXsiQ ܹ&R(ι^^nXb~8h.zy8L<-^/tr =Zw<[,ݻym쪱F>=H' >Ƀ|𮙋(/^J7`b-t(ì믋stܡ\qjvuqbrzN1~Uyg*xG|v}w+}_R|"OX6lcN,,4_Ɲd|FQKMF|DD |ϕ<\sZV/ѹ{ϫлMohDZԺwQT ^"um~k"[.vMg7yknڐ,#27 9C>EKfLUP̒Vcefq0@ ;oEsk<~[US9]w>my73i ߟ_Xf`H J8 BM1bE1fԸcGA9dI'QTeK/abCmܢ3"Ϟ>ŵ:0yEIzp>=*u(ҦVbJ5@i\ip,IϢ-Pǩq)>F.^e_pw kdbXmX 9ޥݸW予_XFyfG-IՋMv@~F^߿uf=ǰ-x Q|Bp'罙o19a儩{s_qSݕ0i nܷJOn8..@<-Q#=W3\+/&/8%@  o5/}jq< /3-? %:qsErA; t\(o +C죄Bv(. [J[0NҶAs=4 M>q54EGLM*yB0BoIE#N9=/HCQG7SI",?rՙ5Ldu;{0Pq%OҍԎbLL jSfu@{͐Κ=Tֵu?? 
M:1YɕF@](c]WYrK~8\8e^Sx_:(7>VTf_ `4v7cجѝR{Vxd|:ED8b#I$T BYcUY=W BJ.5Xd9}q,8l"U>[u퐵f.)5Ud&PNdgQmg}M "Bzk=VqaX4HcuQ*R!LHKIu  939>)AWk3GXHl4:g8Qm`76, UⱣ v[;W]-TkMw03qI9/Lp;Ij,Yay] H :.ӗc .[{#v~V' J8o_i,f-ofv;׃uu1xYVy c=#&w2ƙ#kr%58M.ݪ {Lt'|Ǵ7cj5̜$Fkl~n52oivZ_ۚb-pq&9بwFfFiY}ehRmVyI ura@*NsM-ʉ7`ja2[zqxU_b ww"]L/b*Zb5蔜m?ZUεI"XK3X ߈J6ͣW kVi0v0 bysσSo`&Fl, YC ,'.arC-TB cqn|bvJi]-GVȧb^lTNWhW g* ~,uu,;ԽNr뱙eJͺ[*xfNo콣{sҗNQ>r;݉xY4~&#/yS>‡[n=JN?r>=Cz _=?*z|k7>|֞'>#*FϵX{ے?icԞF/o~͘f>n.Meg&n疌'+߯LȱlB/TLJKB jr M L*<ǪF<ʴK‘oެN-̮Ͷ&N,P˵hpj+(o( ֜P[ty gPp~}⦺ l} mpp kiIχl4Aʿ V,pv6*j_ , auB0r #^ا Q% S@DHMhI#n3pQa?p1aK |(os>'adLm 'MB Lfu7n´X0 wB3>ʀ2Hy1"Aq#mNEo'^ejqr,!γк'_ $=,(v>' uKr'v%1ݦ'ON2|P)ub k&c,_r$2iqb+&`, q w%2f|aڊ,!P,Grm <-+ sƧ{ Ӷ6--O2F|:* M2qxȎbz*.l ҟrτ` 7 ,I3/KP@I_V7 HR$Q@n] FĎX.2(LF 2ВZ*ĬHLR>.γ;n0!(BrIrzoG;OI"}IoXMI / yTKT TBrGL4^E|iLmO&tNNN4OtOOO;MoL4TPtMMNOQQK/ Q-Ln2J+D^2Ey?ɳ?N">WuHJ p4xT X,@O|U.fFrBqLG5EJrwi]f(r6IPZ/k m[*;[{3Al,uv<5i^)J^PrTC4p(6 _C-aze- vrrB7Rvϛn"1uK0` [/4-IPeqئ JV6':![3mT,os\@N6/4hh-ޜ fȆaw*L6j^ Ki (ml6nkf߆hl>NWG_.l RNfZH3eSq5SgeY9I"+ rVS:&86YvsW*o2fBk( =;\u 0Dq/7-6Wx=eK k2ǬhԌ@P e^w?whZƮ5NJ~6D5eWzGRc{͍EI4~ep'b7k9s1a|wsqˁ;kYC2k6U -4`h~ySeV(V^7T-uMmW.p_)3s]tF 4q'cgWD5 -c w>3eux֭ٮfRpO8=87ا]%37mx׾?gK*v}s0iKSҿr ۘG};\MɕK;?U(E~g&n0irmmM]nRCO $~V+n@ȏ>#SW"vRz-WwShjon*^ԦZU;V.]OԆSuwnW5ь7po6| 'i⮝ݹE)t3c_x \A8+ϙ=֠㪡G%bB M!aA0ZiڜK::G[=B5 HP?PS ϧcQ#U'Rh/ST˞,z.:zUeɨ Z17:SZO*5EUloTVs֮V[UWMU<pWAC\C꒱p%˵R5_;!V\Zl3i9IUѫmqq Sbgı8aT߸9>[bW3y=[}P w[RoK)j1 u[ 2;IY }K3sؤm(llê?so`"A;ֳ`$]Ve-B[[mujd1rĖ WXT9o؉?VreuUY6Y ĘԏcdYWcS%gdObd"i&Z(_Rg$(+e0Jk2Sss2klqeZj*sn,/i pN.}۪" yoBl'N`C-&O۸QH5&$c|2+i׺~μt 2wC``B'&ST2! 
3!I6֐JR;'r6#F 侳hu +Z=-)vĦoW:dHxj~b/»W DKrH`sf]3hY*V+CwfƹNY syʬQjoU2j⤞ZqޭlTx紂j {De$+gH;=|<|ZXul!"V|xᷛ~VYf|1AF2~-Bi}H^ )$sl*يYAeՒGҶ}6!]Ĕ#fn4M[{aޘq&)Pqb3v؜謶ea]u ctdAEĬbi)ܼfљVC¶};;EնV [{-i QLV þ^G.o>GĢMG5Qz15uxk]\ClpxLPo<3'{q@Ѭ!@_\8+FKqJܴ/S]X\oݝ(_c-A vfÝi׽xǬ7 >8~8+8;8K>9[~9kI^dnz~XǮ촿귃ѵ+ϻ%7lb?-̇O[lkwJJ}ڽϾT|!>MwEnP a}g(&@cLjOJHj#`%̚a bBQҹ()=M& [_k-%bکk mKhٕ9QL:e  Cة]MFvή 3&=5qܾ6iэmo~޻Mϻnٛcn||o-O8 >3|hW8~qܶ䡘LNU@^rљ$IS[+{JyrҡغW ޸h 7p?W:R-T(4؃n)ꂬ)xuR]F\v2Z $hk^ ۱N"e`w8V;8,x<;j<%u{d pؚuj3uFWڏz&>Ǽi,K5XIrtfǶ-<ؙSd|}ۛϝ.i܏%(G7Զ"Х1 N@SdP|a&5`b5ZIp4TpYPbq CxwSaLI_ Oi U /Ԡ`^_IMIì1Y`! ,Y0a]E*av >=Bm a-a)UךlW!ja䱐^߽>}#>Xsġ!@!⟹"Fi!Q,+T$ra&Gb96oYל ϻLq֌3zh)2#7j@ʙ`׋ 7k\3 z*J MKќ\*5F%"ɗ!0rɭFHI;m㜍%cqa^̀ (H X4qF>QNNl N:b?& )+_DKUdl%?>e"v$d\V%QUdSdBTReUҢ8[3^r X"_[LRa6c`bbrIa}b^d=]f"bVfgNą&g_V&eDih[:cklk:&l]`MMuQԱέ"'Q\]cI`L EBrJs Nvju dP~)nZ  Q2L  uUzEֵ`]M^eNص=9c6gٝgj.&Ӊl(y^7g=1 nb0#셣ݞ]⃊_(opeVn fX%'5Qqވ,CA"]bޔce=&e)"hm~\tAYȄCf$#|/j% 6h) +lj)F.rẝJ-Y%ͥG~:Y'A`.XWL(H@6ThfWt"j]2P@9jgNYa!X% baGBu*[T v~fh&gYGE]k^j' @et*W-.$'!ʨj6øVv鍨Jq] *^~⩲ߏv W-=Qk",lz#,Ξ̒㣰^Q:RhlIfbWzw+R9ֿSk>PVr1" BچXJ(56ݏUU QDt"Idrv6YP &Z+tH ڙ ͅy^Wb>忕dQ$;{2%OFdeEpmbi" eoz ^: ̒ߧu+#VZp$ YFUa0ZnN2iަ e gaGf9p011'/q 0 0nNq [(]S[1BqIŰoq[AjtV%Ռp? 0Ҟq]qg[q1\ȱᨌjo[OfR^]J|WY-#.杌%h!'nzZdVQ)ۯ"ܝ5J^MhQ re3w$\,35[]2Y2/ J*& 6 62Үͳs]=Q=_3~n5Ӥ1Oם_qWjb.% fzEnpU͵)|FNy1~R7tXC0n[-Xb2;vFXz`F2umjVt)[JxB5x\&->YjI$ٶNEak}O+W﫤Rr?RXsƤLY^:u━\vrjyky,Ls1돪0)J>V.6&_luPoՎhmi͂3fi&]vvR,[*BHrutRLJ$3eCKmy6f]tWZV$}91".2}OhK њh7mTwmUm)jwg-<P[d ,s%s)a6o"'|,/ɵq-(ت$W'>sC668T{O}tLcnٿv*GylRwrwߗxz;~uzZ]a.c7!4{3 yxF9$Aa!&TCz:ٚ}v /tg?dIQ:X?C0v݇z ';4[z+N ::_Z  17krqc>;:zeҦq_ {&phq3]r{&91~ﻟxo;w{7e;C3;8%|@'{|W|#I @? 
~q sot1 qrb#|g}g<kv zvm0o}Zޣ2'N.lm@ku|s owyS31;+ ]i2"Ӟ7h>k=m#'+>|܇sr%J#x@Qʠ~<>g¨ٟ=7>⇡U>ѷ._m$C}9߀>9ORJ)u~A޾CMGV#SW]{Sm$K%9^N$#:FV` 8C^=9LIׅK:ɚ*V=7#>ޏt@D&KfސU1Vz,_Q2ѻ5 ӣm-ӵ89?0 +#Rt3 By*ӄ ċ -+5=E3[eRs%;[Y #M5\eziMQ5-f@VNު n}֖s!\xN4@r9К"kq)2*p¼tM'+C<0*A*kp FlTEqG1HTzGDJTA|H&uqI4H*\RA$^&K0S2Y2TI6\ Νޔ3:N7>KBӻ.QEeQG4RI'RK/4SM7SO?5TQG%E ;scalapack-doc-1.5/html/slug/img156.gif0100644000056400000620000000057006336074367017172 0ustar pfrauenfstaffGIF89aa!,al<jؙ J)bLy.9 qf9NJqrbce:3{n6 8fE(vVHq2@)YI8gg(*)qy[ j y7+̸ RL 'GR [=Nn>QMf.\NbD~/,[T,@/~-: D!hP-\A\2(@YiKecYh%/XJ 4 BQ4iQJ1=4ħRsQS58Z;q;scalapack-doc-1.5/html/slug/img157.gif0100644000056400000620000000127606336074432017170 0ustar pfrauenfstaffGIF89az?!,z?ڋs\!c^мCP <2Ha|^H0(ˤ/ -l\+)y Hw.!:b%ّdia Xʶ rʑ[ ĢQչVIO4Mѣje5-j%M6$ ι[/gxS((9I#8Y *:JZjz +;K[k{iKj{\|< =]m;mM ^NU6.yow}>A֧_=7(dV ig *[R Ed,Pb'O ;%*'2@>r N;jvqIzhr9T9 zbU Ju_h .iϊdY[{ /:YƙKAgrz:vgL[ύōD;coz4"0{ёߢib<yvE۰t`l NHW4R~\a¡H "bhb(R;scalapack-doc-1.5/html/slug/img159.gif0100644000056400000620000000110006336074474017162 0ustar pfrauenfstaffGIF89a|+!,|+ u扦ʶm gjg n]&gSnj~˚NmUS>mHf}akzټ/ _"mBѝq9u](p\;bǍqųB3Ōy l \kKލsbkDihkS.7\t]p3M,kΊ"~|}s2 Pd`cƑ r@ 1Sp_(~LΖ'y)ɰơ@YAeQblNffM1:DY (k8u'd"UVLJƞRz zė`̠E8@m]b9M9W1h͡%]^nQv2U6;uZ)[eƌͩ;d׆A+)of=,XsRfP|z>jN /N 7z%ffE瘁GE^hDaa"|8$}mHՄtE[b NZ~9xa5~=q@x y?!x 9 SRa儶%PZfPyI;scalapack-doc-1.5/html/slug/img161.gif0100644000056400000620000000015106336077653017162 0ustar pfrauenfstaffGIF89a!,@ |Z9. V^i>H%OٵҋrPij JԪZ ;scalapack-doc-1.5/html/slug/img162.gif0100644000056400000620000000015106336077747017167 0ustar pfrauenfstaffGIF89a!,@ |Z9. 
V^ig䄤vz@+C)h JԪ͂ ;scalapack-doc-1.5/html/slug/img163.gif0100644000056400000620000000016506336100330017144 0ustar pfrauenfstaffGIF89a-!,-L Zy;H-1Uy!!i tP'-jt!3d9PzBl,S;scalapack-doc-1.5/html/slug/img164.gif0100644000056400000620000000054406336075156017167 0ustar pfrauenfstaffGIF89aD!,Dq&扦|,1ɺ^x|sȌJ%Uf f/F sA -Ů65cƶ˸3-hl״u4eGvBhPhU9*RfZ2h֚QyDjk*{|fkG( Yg]s L, 5@>Ff,z6 ߮c, Ϳz)vۇ%{w( RLQB<J`Eo3DlC*FA+kLsiNDӨқCu(;scalapack-doc-1.5/html/slug/img165.gif0100644000056400000620000000013406336060541017153 0ustar pfrauenfstaffGIF89a!,3 5$qxU!|֕c̘q{ʵѲLb+Sa<݌;scalapack-doc-1.5/html/slug/img166.gif0100644000056400000620000000021706336101167017156 0ustar pfrauenfstaffGIF89a$!,$f˜23{TyV萦ikg*ƞ6qqkH`)T76NVZӉP ;JԷGڵ(8W;scalapack-doc-1.5/html/slug/img167.gif0100644000056400000620000000056706336111617017170 0ustar pfrauenfstaffGIF89ab!,b,ļibX L6F~fI" Qe0QJOYj 6dӼl\xVm)^;xVTsh('5'IٸsU54bp6Ki{۠ *[!gUZ++|+2x ;\I =m~{m4IQIneTK~_7 .U!ueݣ#pD~ =(O $^fa61!TVZe#ZFZI5$S 42Bv!Ԩҥ2mͩTN.t S 3s&e4T#! y8#>1V]}qR*)S2bH+?di &xBY-J4&G-Tѫ[ӁtOo€ـ, N6}Zh[~)Jh/@=8VҬo%wPj^se䈘VT*f8 lN{slY]2Ilߖ"QCMHOpqF՚/Y>o >x=g.osϒEQ0/W GƨcX`HCKNX/R8ᅓh ,;scalapack-doc-1.5/html/slug/img169.gif0100644000056400000620000000131706336111656017167 0ustar pfrauenfstaffGIF89az?!,z?ڋse4c׳cӆ=b2es'ߺybtzkqyDuF $XR- +Ε3;iX6ŢAm|V0y/:IrYN5ILw ݶS} ~R]|#iPQXb(6Ma7SdTQ8"{>cJHT񛈭-I% a"UU;\9e7PcEHhfl6fI`\Y晧{;scalapack-doc-1.5/html/slug/img16.gif0100644000056400000620000000006506336060756017101 0ustar pfrauenfstaffGIF89a!, i[(P;scalapack-doc-1.5/html/slug/img170.gif0100644000056400000620000000015706336103447017157 0ustar pfrauenfstaffGIF89a!,FzTF[ř0up| ڂjX8J ;rʛ/I͊ͪjܮ-;scalapack-doc-1.5/html/slug/img171.gif0100644000056400000620000000123706336112141017147 0ustar pfrauenfstaffGIF89aq?!,q?ڋ>JL~U"jn.Nyi-`9A$$A@ 1у ,"S,n|C"}y-vڅeԸdƅv>o7`^ҽѭ`n.06bXb>u1J;0G,hnW2;6G# HM•9$ Mʺ$TO*̦ JTK;scalapack-doc-1.5/html/slug/img174.gif0100644000056400000620000000016406336060332017154 0ustar pfrauenfstaffGIF89a,!,,K 
cTڋ^z)8Ji'~`q.l2[m("NPLB*E&;scalapack-doc-1.5/html/slug/img175.gif0100644000056400000620000000052406336112246017157 0ustar pfrauenfstaffGIF89aD!,Dڋȼ8z扦˔ =~lTaȌJe Ԯ7ECƇ ?6-pC}.SV(aWӷHq'w2XF)a(9wA ȇ*ZG;˨kK|džBG3{%8\L=U7Yfz]=ޘ왍DzdHo.;ߣ^ϱ6R* 9ÇŬ|7Oێn ˡF,$qcз>31`V|i_0k\j@E%Mj42~{(8HP;scalapack-doc-1.5/html/slug/img177.gif0100644000056400000620000000020206336062475017162 0ustar pfrauenfstaffGIF89a? !,? YoiыkfH`Ѝ!nKZɭlW.5i!< 8~ s3A|Bf.Y*gEEyGk^o.;scalapack-doc-1.5/html/slug/img178.gif0100644000056400000620000000034006336063714017163 0ustar pfrauenfstaffGIF89aV!,Vڋxi.PI\]Ȼi (~!2 爐:g)WGU:Ufۮ푵Q:e;5vFXVg!xH6IyF7vb#¤I:zG y ,*:ŭ!p!.Q`32Qu(q3Z*I͗\!,> 5Ki NwiHN:sf$_/VUU3ȼ).HnaVY UɊ;{6'f5wfG%Է%TWAhɠ 깸@Z +;K[k{{P;scalapack-doc-1.5/html/slug/img184.gif0100644000056400000620000000023406336102470017153 0ustar pfrauenfstaffGIF89a*!,*s Ԛ.^GYyW͵i9BL>. r<8zuGNI3Z1AToSS)Na`3f6}c-.oi{-^(8HXhxW;scalapack-doc-1.5/html/slug/img185.gif0100644000056400000620000002300206336107654017163 0ustar pfrauenfstaffGIF89a2!,2ڋ޼H扦e Ln ⯈L*̦v|JԪ3׮wǹ0NSf1Wx(8%pu0آIV9󶰥wy(W*`XƠ*3XrE6:ˊ;xKj) ) $wJ(=ݨ-Ww$;nkyc.nj +O}RF3i@7)X"P;b?p E#B䰂 jٺNEus3E 8=dԪ931IΗ@y,Yӣj.GJNtHleT+?j秫&* Zq}EL4w`N^/ Ԛ[妽`} C_дV*SkSmsjؠt;%7sFo7 -5oNCy.eis V_*uCWn]}c;{ydQ Ws6`KnGeٵB#RU }]`yfM4iGC~gV\vW \05<AhYЄ]d;ƹP~_0m~5)Vƍy_R>8kRΨ idN.B ՝x'5.CoD&W鷧@H!%yh)4X!W ۢY^lh'q&瓃"[ eX3h!iZieۣ)i|Vd>\'[UX-N&-j֔Q3j׊83נSQUO&r#$|f {[ћjI4Fp#KWY'G[+O;.n":Ω0r@6++1_ss:C(3P-H/}?(t*Y5Yoeu>WvF:Uv|i]`lwqvK2'~CwB-˅B/x?y zŕ~9Hky{>ͥՅǚzӭ%BC; +:7 u c1{F*5Hg+z#݃i/,W;cz>iߗ?N*-Mً_qrI}=b1k^݊r1Pz F"h &bX@)P(+z#aU5MzAOeؼj57K 9ֵVK\ȰCfd7> ȗ<`6 75qpk4[ L^R m1ւHGKّHxn6P!5&G 0IP=O"'YGYrY8HT.fQl(d5*5<#5AȖ4d J &sd(#E[Q5-M:P,,7M3ʕ7eP!|dH\Nvg:0.ps&Jdo]z>;D dfC呰`;,5|93C*|(F(l ]H+tz[HI֝d(KyL%Ou0[Qhu*%jNU3uRM䮊լju\W ְ{NoZUi&YҴmT]kS)Wl͜[͚WV}kN)̱&3 ;% ݕ-)O<煮R2fPj6FlJ[2|('cb {Q`Ak>]IJSUd$?}lha6Iե 9x]9%B(UͬQ%RiuS.]2+osQZb V #߶Vo,KN׸Oa{\;9sJ>L<`^csMy`s•aqLh0s6'RCg3Qm+1#_9BݤE,x rY˲,]gcAAb>c\mFxdySX1M{>4,BEP>l@ּ!+iϳ5S.9 w9m"]|%m:=2_MY[ٍ]pE3#=$|I֡n/0(m[QmtS+= 
LpE.YtTѳuCۑl=M_߫,ӊ]_jSwM&7פV+x{N]a7aq'_/Gw׭٘U9<=Ͷ$0]F* W@̘};u*O_e>j˗ukhf0o~6Pfu^6 Q1jS!\=e [bJ偿D45h!2{H,89~W/xX4n;Cj{a-g4LK4RXmesNXWxN2VJȅ_XX;6%M8d؃8uO>F5lȇ}HM#yC#X냁~؅VpՁ#dAh^oxfX3)D8CZՆLxs(oUd'JAE~Hv#%CB?Y؈ROpCDdxmc)$KCD!vDhz:hW(v!F$.'p2'n72l`,h7iX}v䀻&N0EW-րxD$;膎H,hd>8zdu"v1b]oHf'IUJ8:`n'#g,EȂ9ydxp AH%W%cjג_"c4_lh#Yɓ8dZ$[EO\ Jim"#fr~~xZRYmQ:N~= xTw Six{Yfw*v:x)eh t>i~I R&R 9ViɚTV|IPJ9i7YSe9zIațtĨ96Gm DOdΕkEa2vZ)~G$UB~]dEv|yQT@ԘX|ɕICY%!$G T2yrkmW`g'jqDc84g%'S{wPŁ|xch_Px.ӱy&|icIW+9wW 5Rc+_1J3Q ɣ vBj z{ n8D@LP#zrivo(קJǢaYj?^qOsx 0Ze"71h[ohudgsVKdb)zJI Eswgb-qz|!1z0|*(VĦSv!mdfC(GtW/ʮԆy& &2F=bfr|jSv&dgCGߵӨ =zh#VGm=u3vxʥ4|3uSrJ5ʲ?"#==Es˱q_CFirrZrz~B+ƤMz>*©H*b;rd˵f{IŶ綈!gkI{{VkR%X)&E-[x/n7[xRZ{B)`6w`'{w^,iU[v[h䨬 )L+rzx͇c[hc^J [ ^q+*m'˺riv*KzTBW4 FovHAI籇׵ګ*5 |oٕ+w?ҷ(0}h1WYխw(d[6e1ZT+Y_۽؟PƯjtziejEkS\⫋XHAh10h:R\=cƞ{HRE.՜Fř J&I{Iy_lǀ ^4Uu~lieȖȎu)ʢܘ Q8ι /cH x˿*<%Ӻy ͚,; -. ׌a<ڼU ZܛwYڋ\~Nj|Aθ̘L[^6Gtfh+vș7Pi[DJʊhgK κηoYڒU.9Ag=7(3*s_41ӀlE G QS*`{Z W=Ό_ԥƫwh<-z.9LOMxlM,-2ѨHۄ>yB-ޡ+l綷tK~M.(adc]6+un ~Hш'ۦ>ވ@Ǥ}n=pE -T̽듞涾Éxdf~.Y~nٍmS^>c8$S|go9Z[ 7^ޙӨ 3BWz'wH!~`BTs*6erJ<7f&r75>Μ4YwSJs}\,OŪ]d1_x\8k`O@-j'ֶ#_dE٬W_CE?mh͖l!lD ̀컘5g<ooD9"Ÿ/WoVZEgf-xxiGfEQ ]Z鍳ǖ[ov_Gn.罢t y;˟fiz)gg/-]`T'_$YO OaLV{qֻYlPsU9ھKv EEM׌̔a*@$QW1h:{1Ϫl aC'Y< v +t@=;.|R`~V'hNoqA<|d]_aki2sq@~>i YyǑB˗wDܴۦ1QH'D>"]?94dUQQ-KfjFysl$ .O>[L%Řy-yS(QA 5"Kb1J"U!@{zZq&ͯcljXr&»7]M*_/*aƁ[t)ǕLŒf˞t-z4MM%4:,;bDiƩ&奄GW1?1V'x֣;Ŏۮ@FSv>L JAB/v\! 
SaiZ[µ190} Өt*ܡZKI 0V'xr44 eXX0%aQ؀Q圇8];+]8+ XB^Bx̣:wD&qj5\`.q%ٿPPud GTf@^_)8JVr|x)T>ҖߴK0qy/P-3ymkf!IQj4HrOq8l8INpS5XRi^6#xqb"e9"nѕlV?4Ān D%ZfY6jyEFG}_ eюGi)*iN#3[>nM rtƩӘh{+(=棯Vvx&&$Nm/>R}hI+*(Oy=-/w/PK96ceL(~`NDi Dn*Q8yJbwlRR]l\嵢Fez%n\AYW2hBɦշQ!PT0FkR7]:JUWXަ4_{F,uy o@፞[}*^8܊iPQB͒uR5w 7^Qo5\8ChW!#h^ uaWJ2ܨ0V8Q 2utu[Lb׽;QTZw:slhDCf6]7 갂͊nr\UGkVe{L(SX sYɬCm7Y71q; l;wDw ]h{v Yoɽnm&ݜΕ}Lv/rKv94)hޔ&ϴPoewlq|2s n\o8[no8HQ1q~RVD-1NyXY0|:̃;w|aOъ4]+ZNдpcFf6z^=ٝ,%}{ »w؅ ^ӛV>Vq%w2tou#S/c/ n5xw678K늮_z[Rp?O s>>}Iw;'m"Z#l A3TgxW{z)f)Z[U~ͲҮ$~/n0cfNؒBh,0Yj%Х<3PG䣜O>640(Pb0CXmo`pP.NlrP { EP㴏 螪OHLdkSNF-PD0 - F*/&Ҋ( v2.Kɢ):9cel:*ptsxr[hp^*s#kO!9,9l;;>oH2D1.K>` <# ګNo$񣒍^lΖ9M4UBK^@GT5;Ҩ󧲳:n*3Qv 3_bp-m^y- 3MD+I%ů4r4貂D*bIU4J l {KBG?teF yrC jF %WAx4PPPsvͦ-'Q!۬"U't25PT uM0UpTyQ U9U7NوVAV#W9q~0TcuUpXQu3QENPURiuGHв:3XXuWu9ZnSwP UŕiPO,]GA eKk 2). SM}WL^_;^Lf|H,vP]u 5/,lE~ȉVUVN-ucDz҈KaUoFb;\ Oke:.BvQ2QtfUX->NC/cyB"^1e{6ePv^B &2tQ%jU\ϞH3Te,"+3 7iUsiiqWmi}ePqKg/T%'ch`APV5l7\'wgEg9Wל0rt3newvivmvq7wuwwyw}w7xwxxx7yv ;scalapack-doc-1.5/html/slug/img186.gif0100644000056400000620000000020306336104674017161 0ustar pfrauenfstaffGIF89a? !,? 
ZoiыkfH`Ѝ!nKJyJeBjaҡPpQ\FQI<슕N@7u"7N/k0^;scalapack-doc-1.5/html/slug/img187.gif0100644000056400000620000000026606336104747017174 0ustar pfrauenfstaffGIF89a[!,[ Z\ Sgi:BZ-[` ?LS6yj*^ĝ0Q|FRnKd:OWta!H7U&7Vu(䧲tT&w֧E T9ZHujQ;scalapack-doc-1.5/html/slug/img188.gif0100644000056400000620000000026106336071522017162 0ustar pfrauenfstaffGIF89aW!,WaRKυ☀ŘbgkI"*Xn9&9π""uGd'I RbcgŲ+5 U/[4:iv-13^WawWEx3xF9uYR*)Q;scalapack-doc-1.5/html/slug/img189.gif0100644000056400000620000000026206336071741017167 0ustar pfrauenfstaffGIF89aU!,UڋsxB_"0e)**ޓw7`$|ȂLӰiu T:YN%:;7Z{)Evxok ew/Aw5xuWTDeiAsRIP;scalapack-doc-1.5/html/slug/img18.gif0100644000056400000620000000017706336107401017074 0ustar pfrauenfstaffGIF89a: !,: Viܜ4JDq=`syZSف횾q&͊%ls↛KM$Ҧ#4(:[vW!,mO ;scalapack-doc-1.5/html/slug/img190.gif0100644000056400000620000000034506336073407017162 0ustar pfrauenfstaffGIF89aY!,Y ƋFSzґLrl&tު+-~\pWʈd+郎jzڔ;ϑ+\Gǂzѥ-^i5M˻5t5X7EFexeD$8RD) !کj9:wBJbK{kz{+e71tv8h!uUHRWHW!*3 c%uC 2+ә;r[< -=M]m}=R;scalapack-doc-1.5/html/slug/img192.gif0100644000056400000620000000026306336074137017164 0ustar pfrauenfstaffGIF89aU!,UڋshRJ2*vy_~vEEC֖Üs6 ΔDky[45\qx]2f'fU(7AEySyy8X%əWAs7YjQ;scalapack-doc-1.5/html/slug/img193.gif0100644000056400000620000000026506336074561017170 0ustar pfrauenfstaffGIF89aW!,WaRKυ☀YJ2;f _~YvTxTɹLFGL ]fӮu[(77Ir;70^mVgHx(%hTP1g$IvvF:)&Q;scalapack-doc-1.5/html/slug/img194.gif0100644000056400000620000001326406336100702017157 0ustar pfrauenfstaffGIF89a!,ڋ޼H扦*l L Ģ;*̦LBԪ{z.U :0N_=p7'82gpxਸE(Ӱ9SP9zJZ'U#r[ %*6EK۶ C ʆ,(X= s{D-ܕ+s-.Ǝ>Of8k$Nr'>#=iتd :$ (JS@wrArPQߦg$42ɕ ('7(;tj*_idI'Ѡ3BڑѦjӨ큹T쳲 Tյl-+h^Jynߺi©._o:g ^E"R: .^j*.ՖZ'[鰖qFmL\5si{oeRS3͵R`Ԥ­6JlMc`=fyq.m5 #5w9~zc Y`Uw8xP{e 7EYU\6Z7nwq1Kumbc(>g$gwaÚo`odD`~hc|#a4:#C څ(刧xGa YP"{XjhߙS"@&ch'ڛ; _X3 #?7Fߎ OpJn*9cY⡉!6jVa b|藫{US:)6e[Yu]U8̶тxum}mԪ^ETqF. 
C oW!C]ʧ?\Ÿċ쮛qHKpM r   l38vH5EXƾ|W>:n>uX}s0/V'~Wcd,:e-Cv-?4@Yv8 Xj!zg8;Bl_K!UFha9A FhBi4 5D$0$b|懊(R ѕmXD^4j0hBQEP4GU-OcEG/q‹': oz\Ntb^*xHѫx##(ŴQ #3Fu lR_5 rj|Jvn׊$IEO~]<)FCHT \>0'Gҙt(WXbS2F2 Hn J9-qpe <ιYκ纺?_őh/\'$9e*4 u<Ơ mm(4zl?+ICr75MlJ鵵`i:RtC5YLYWT>l7=H{ԣ>jTUլju\W,$ݤ֜.lMZ קTm̓YWJuc+Oߚ֧0$>?>4l.u"@~Ps!fdDž5 .oeYz˴{UwVk ʂ֞Ĭ%JQ+oPBm1Q#%_c?%#o^BwZYÁ.uyC㭞EV=ڮ%!.Ee}Cԝ7eG+.[lG41Rm _3%7,Ib ^#p~+lᥥI 4]#S}bm/1ޘs~TIWѝh9d I%1 ^n. 3; r͎TMH?yxݛxx,i|̵k^+O7kƧRv/2-gu\k2={o_5^mIVFi0D7VvJZRGT1UUZW['([c Sħe EN]kBaNGfYQw3b4{pX._447hv\.sWcXVB4zWuo3'c[bRERkeS~E&P(F5<5`deWh@lkxparȅL儸0אG]؇nh28YDW8&Y@U.pr#OȉgoPXE[h(ZC%^~hx+lx}u3g\z}liU~ϕvwGr|f'zr}`c V'/8$UUJxuKHI z ; ɟ'D(\ tԢ#I9;_Ӡ)OZx$qj{jDk+Zh@փ(sviJ'IdGvV\ #v?zU'xJۗ*1~JىTj*TA,w$:1q<*=Ʉ4Hxׇ9ǡǍj }`g(K礰w{7P$ dEwwlڦ a*}sxIbʗ*F٪IϪu2mu<bIiay-u%rZ.YZ~J*O3'1E;Հ^bI%=T, ;{eİJi=b\XU؋7j a8k)vhc23zQ8/ˋ;,ô38XS[ [WkuӴ|صleMhxjiQmk3[;uuH{p[PKz+~KWh^;ȶ胈;|[lʶr;[ueoi9*XKWv .;v yc+˹r0_|ۻk4Q{{{HxTYH2dj3\&:[uz:$6iIhhI-siK1䂖۾u۔*٧G2Ȓ|<*q$vMGBл}bq˓aF{ATi" {^7L'DƼ]*lP(>7Jҧڶ ,AtJG^lAĊIL'+ix%\ YS|DYr,2{Y{0ګ÷#lũ,59{kYRN Z첾?鄷TһK4ʂo$3ɵB n|˯݂\4L)tL38`;FĬyW|Ȇ͆, 3r%`El,)ʆhT<ʝ\VNe“d9Ss6bvկ8SI|=Dr CR5ҨYyFoK`ԭŀi|x=="U"LMx엵^KM3Nl^p\3N{n.쮂~>.M.N̤\?QϳV(OdD(/ !Ϻ n7K;N.5?ϩv$O9l L Z-!=aܓPeA8@s\R~hw~<==?Ϝ^L93o/O1uaLZe֛vP7$Lٖ\X.چ[/wAPX4D2d>P9@U+T~q`pW$g0q^y !#- ;scalapack-doc-1.5/html/slug/img195.gif0100644000056400000620000000026406336110014017151 0ustar pfrauenfstaffGIF89aV!,Vڋxi.Pu`7+[t6_Cgaht0yqS(ի}"ґE˦B6td⚍24d⧡5(%Fx(sTi$EjXQ;scalapack-doc-1.5/html/slug/img196.gif0100644000056400000620000000044106336111124017152 0ustar pfrauenfstaffGIF89a!,Z:<_iy2c"miv*~5 +fFq2RW61U3hrbJڭSAbE=$DWgU8Ǹvq8x xI4Z7cY6Z#'hg&xx9 #K+ \ZR:)k{ VK "mmCKDD:5QO0 <0… :|1D;scalapack-doc-1.5/html/slug/img197.gif0100644000056400000620000000036106336107732017166 0ustar pfrauenfstaffGIF89aj!,jȌ TҋW/x؉9lƗ+CƆ[8COoƠE2-GNjML>P ?65xvE'cG8fhG&uXdž豄مTHF)UV)3h [ 9[GClrk,=M]m} .>N^R;scalapack-doc-1.5/html/slug/img198.gif0100644000056400000620000000036006336111146017160 0ustar pfrauenfstaffGIF89a!,nj X@ͼm!BcJt 
奋pat%(\=㳚VEUv-I oS1tSA݀޴1ÇUg7$Dȸ38eytd(y蹉Zי Z8 yUKkW+kK -=M]m} .>NN\;scalapack-doc-1.5/html/slug/img199.gif0100644000056400000620000000026606336110034017161 0ustar pfrauenfstaffGIF89aX!,X@҈hυhc%+Z[}٤]ဿEq'zAeiC IIEurXSbnǝs$h7`2s+;h]cS!qGSxYD5)(j镧R;scalapack-doc-1.5/html/slug/img19.gif0100644000056400000620000000021406336067641017100 0ustar pfrauenfstaffGIF89a,!,,ci|Bd[mQ"`x엾=W&N}* p#*=WpkM]Q:klW77 XG`gLΛ<`YƟ-}&d4\(ˍL:Sަ~s(Q}V3EJGZj*Ҍ}ȷ_M{5[Ѽ7[ÖZR^2Y]q~l9U8re:YYƐ/2h~b"I ͻi"%\g^s>{Ю۽.éZFT˩Z'O84nهw؀u&UoO sݑ`q%'P ZWvF!B֦w16rvpZG"3X}8RY?Xi!i8RI"W)"n4Z2HMBNّsF)^mxKgfW&&Y#K{Vt RT(nV)7%}{ ҙ':iy٫~eYu<5Qǚtz9NEZ OiЃƍ|k^2XP.j(z# 7+m1Tac=p4V2h5SLUG#rP޳js3a(a*<4 s.9]pFchV팘ح<6:.Zɪ֗-khæ:?M\UfCXX؝-vG}j)jr\tqNU@)ykۤfANȵ6}?̪=v[V^,+C9jYV+b9/wJRJ+Ô&jNsTT)pDZsvհFw .n/EY<3is;CWW, *麡uef_hU_M$ @j˩{%So`+һg$m1Ӓ]l=.c:`6ē遉\#[Cb2lʜtd d bk9j[>f[WJѩIx2O1![͊$'wn~$ʛ9z6dA׽7ͱYkMZUt5ʍ-~ʭd5͙l|}C`:!m\Q yv6.Uu>G;od>mK}'{)zxz&&Tߝbic!vy&xn7kw(|u|҄>a{.NFAvq$ߦ0 lr$v6J_,_m|!U.˛㼎k}\gbֻ_ώv =lH.@7;:&(¥ﳾ ->pxhl47wsz 3P[56՚<'?Og'W׾@zD|CFoWDnTsVPX"\wqW80'VW0ІH!XEIVxT|4u$҂3p Vhy6Ai$B4)"m4i^Fj%3;%R>!bb9g~2ZYBB4l ܤ1E3-PuPeBidU`PF}0b'sp†ʆ& \Ok]X/s*'vnwh3!sUc2bmcP4/\+S,I dX[Yo8c5Wby<b1vgW%1̧`K&F>5CZ8zVzXdf8m|"TWNUVh胁(X[H״U6i86 [huƅ!9eL&#8Fh$\㋮(UzW>zh4e>A2SSW"=CncqĒ7vnf/(?_4Gnc^sWr*.` z Bv8r,'rӌR9D'vHrC`ɕGmgV8eD^EArBw{yɗ i痁IuHɘ阏 )Ii;scalapack-doc-1.5/html/slug/img200.gif0100644000056400000620000000035506336111406017143 0ustar pfrauenfstaffGIF89a^!,^ČƋnlHJ2_yVi帰7V ҼYi 0fd%jYdF$(tJdZZb84-mP%\zdR15C4׶5Ʒq؈Df7٘xDgigxڔUEj';h[k+z+׻ {K,2\| -=M]m} ~P;scalapack-doc-1.5/html/slug/img201.gif0100644000056400000620000000035306336111464017146 0ustar pfrauenfstaffGIF89a`!,`Œ uH:1gi\e3K)kh:l YZ>!R"na3g#AY!x Qa ڇؗ:F$ C6%Yxgi83X6&iL(|}a=l(^I)=/?O_o0 <0;scalapack-doc-1.5/html/slug/img212.gif0100644000056400000620000000021106336074655017153 0ustar pfrauenfstaffGIF89a4!,4`Bw wIqpd$lt&m%責R ;scalapack-doc-1.5/html/slug/img213.gif0100644000056400000620000000011106336076204017143 0ustar pfrauenfstaffGIF89a !,  
p΃Yצxm`}TsRW^rP;scalapack-doc-1.5/html/slug/img214.gif0100644000056400000620000000011106336076510017144 0ustar pfrauenfstaffGIF89a !,  pk|T4:\o3E5 '7"hU;scalapack-doc-1.5/html/slug/img215.gif0100644000056400000620000000041406336067560017160 0ustar pfrauenfstaffGIF89a*!,*㌏ڋ3iH扦%5l(L0 ĢwL*̦ J,PpdKk^W /6B>fgx!Sc8(9Eȅi8)JفIj:*;P3: 뉅+{LyzkO_ f@;scalapack-doc-1.5/html/slug/img216.gif0100644000056400000620000000047306336077125017164 0ustar pfrauenfstaffGIF89a!, aQ<I꧚% Wn=^^ 76( $GlSv^6ZqL_«Ng?/}÷'%hW!HT5HyT Y(I8aT:uzCKh cS4DEz YsG*;UbӋl-<(M|g%~>N)gߎ0  ӡϠB9@P[1ĉ+Z1;z2ȑ$K<2;scalapack-doc-1.5/html/slug/img217.gif0100644000056400000620000000020606336077153017160 0ustar pfrauenfstaffGIF89a!,]˜͒@ƫ pmTu8KU>9L|m'Z1Lex, k-1l. ;scalapack-doc-1.5/html/slug/img218.gif0100644000056400000620000001063706336104032017155 0ustar pfrauenfstaffGIF89ao!,oڋ޼H扦g L: "gL*̦s|JԪ3n 90NSf'8$psТI%XƠ99y@Stʓ9h؂Z):j+j,ٺ[s9\}̩ ,}\WL) }ku*?>}]8Kk֏A{DaBUmrӇQUkh:DzԭP(OeTb:rqDVn;\}Чcl询FR݈K3>E곖jYc)X== ӰiaVvkZr|st-Ӈ4i.LU̸oY2ӂ63^gn2ǖrVVQ{7tThkخ 'bΥG.)Bͼ"e~Ћ7|.]̱IvmiuUx{) žɶ!yZF\u[r\ 6Uu}A\`xI[Y'ddՈ\9xPAQyZ_}a~6^֝hIE7YY 8cF߁?W;(܋#9+{bo-_qu $dwKw/f[E?zQfяX4' Dl@CFƑ 7o$RIJ6]D<AIn d0I1˘S'K']SǬe[vM}#$?T+XSI;5OC-L>]y.ʩ% ^ "f33@痶4BM6RH}dCYѝmh$FJy )PJH,-`G5Jbf0)fzSu2O}'R*5mSkS ըJuTU 5GUXڂz620OUudmZI0XP]Vt@~r:= VP `#6 p=~[ĕZcvkTdi-te5+N4o-m-=f2pfI٥D-l ;U3~ʍՍ6[2QmۿISd(Mʦp\`yD{۹H l/n)wDq-i 8|/DAJC핮! ג 0![.+֘U>/e/0׻ű2Ϻ gU׋=rPaDcMi}, !.ش8\[l*m.)殇*qB9򞝛Cj\n5J( ֣&O㔺5!X=N'S50ʆ,0fsMf~v[mKkb3ngⶹMpy=7i71lK6|7㩩sllЙW^`]} `c&- onyh8Go"&pB[SiJo}hb._Ȼ4jVN18pouؾ8vG%]s]T4Njs@[}L$YI_8&09Gѵ\,pepnQ-vo#?kp ˱&xX4{Mvk< r_-rx l:ݱ{UcuG+t3Ogv9u=s1x;# :zȕA v[L#)Op6dr?hj\̶ ? 
ˉ񰳝g=Mܔթ;ZXQVm`Ίִ`H:Ѳfwޣk[yҁ%ӭDA*גw+iM9U5CH1h Uf5`gY:e)YE5 ^'%Q;Y֮Vee[ĵn1[WmqWIָU[*츤mm+YX=z[]wC#k7M,LԻ/^wZz;8B+\%HwZӽT/t <;7׽|$laWmxaRV6֮x"يL5[zYƚ|MG.KZ(NYVᔍ]rG!V9ZRy;/f[d+'>{0ǦY]fάzI* | kQ0sXճ3֢Ek2 g \M(aühDue_hGґsz>zԘvgPR3uhM38Ք+jCt7m[_[JTcԴ.,<\O޳MCCǤ-e7X.ww-R3پv2Rt[wT=m8 ݀6g'8<{7НS~' )WlWk O(a+]׶ˌ JkQÔEr;(Oμɟqsbiّ Pη-zb=c~ҿuVU.]w|lq}lt c3D{ŷfKbeZ!fyvJP>rbW|ˁnvuw~n۝|9q>ڌw={ESo~؋U_lǾ?evs.7C+y_j{ }?7S]WwmX榀 >Wm&pSyƷmoa~aTG&hmy'Wu$Gb1VZ47 Oa:;u3XvlHF-å[<+,]%xمxgz}gb:]Lք>067|=|Dd0t0&9a&&؁ 'aD9D_crtwv}WS9QC]b(3d5}O|Ĉqdc@J=#V~U؈m\`(.usf1ķ׆VWH|qHh{qȋ5{%H Xr87}'OS׌(Dk[h8SM6n刎h}Dž(W>(}};hNhNnŀ ɐ+珐ȁ |Xhu6}uw'AyyG4yKt_cX7 0M))_GGDsCEZdr1A(#({Ȏf iRVfJIG^q=02G뽊㉈ fPjmaII`p;n nv1fGձ6x7xR&gp)Br٦2Ԧ8B:J**+z:k{ ,ZgliVNXJqؤ;څ0B()#ً!PXZgZίQhFjޜ|ֻF&LJWw%VhfǓ!T WuEâ9$XxzZj);G{ ,'FMW/O. `Mw%EvXx 82h *:JZjz*Z;scalapack-doc-1.5/html/slug/img271.gif0100644000056400000620000000040606336076573017167 0ustar pfrauenfstaffGIF89ab!,b݌Тҋ5/hїLJfɁ * /{8 ߐmDXZeFDKH1i"=fB/זZ8}VOc5efs4xt8tWGؓ9BCiZH;{qyK3k[$}hbܸIOZ-`V;OABc܍Eќ+/#7nQd‘÷ C'֓hݶeI< -S>^K&ؾr蚓u )Þ<jkUx00J}&)VNѥ:N (V^~j!.EٷG8+t6C# :tRA-HiѬ[Ylѳ]۾!sk;scalapack-doc-1.5/html/slug/img275.gif0100644000056400000620000000177206336075053017172 0ustar pfrauenfstaffGIF89a!,΢>ͼagcy`ZLlpο;˥nqyLk8"u4ש, vP٫MWʖ#xa&7IrȖ%I1x2H%7t9J1V)g8 9J*h+*KV8Ɋwic \tpl}],A\M=>n&ԭZejH+4u§lVmKi3yiU&66 [g+ٳO9aٜk,*"zt7lԈ[[V#ANGh^];"_p"6UQWaSI]uߵzM8Ň9ȅj&߃%b+'Ac"n$N^Qq"V[)ua9$z7{ PTpV8 #a~f&#-@n`=~i3Exhd􍙤la‘d4C1e}Uz⣼ VrNff*'vn7U.݋E6jqqY͈Y:{Sf -Uh;{NM,;:u&Krz~]w˕FVh(يO& Y˕{(G0El+ *NpR sz=d1Xq g-ePi3;e01Hym[.N[a(,z"u[ݵLe&ivnB;scalapack-doc-1.5/html/slug/img276.gif0100644000056400000620000000030506336061721017157 0ustar pfrauenfstaffGIF89aR!,R }X3蕗hRpi(+&p@:ѣLۭI[ORjiV9DvDWd862jj8dtwxe4Wh7س4h#Dv%IfٷiS):JZjz +;K[k{Z;scalapack-doc-1.5/html/slug/img277.gif0100644000056400000620000000777506336104011017170 0ustar pfrauenfstaffGIF89aho!,hoڋ޼H@ʶ L6u|_!hL!KcsLڮUnW;9h&j; S: u$HHwvH4اUHcYe(铠I*PtvgEu:`Z+[iK {;|ܹj, M ;u)e<z,6 NnjN4&޽RHclψk?a+:!ԅ }ٴmcy` R+G$=j2:%e]KY uṄ3INŋNc0-]޼QR5_Clj|[e 
3XsGTUlb@@-[^¸EWp1]Ϯ5׹nrlwb]X3Ve*װ?'nvl7FcUŖsCTg&C%E]:i]o\ <+wuϭ&[׏,|{seG}7_x z))g!ge8e gzi|qRz}bf.hصXz7ǛH #{4>#9_y"'awKr#&SbY$>}H#6I\>hն%IP:Fɔ!fy &H3v&{txUr9igJ.r&s^Y #!()5y*if) o2v>x2˰,*tc E6*Vz2 q:Jli$R:QԸP{.NB״`ᮨr#n|,$xg/p |_|pKag0.vt}æݖ4d;Y?6 ~wZmZ>_?C==7m%j7eRJ= j]c>ͯSGǝJ =5"!D+Sm\Jq3a6 mςs: p, ^$%1AK$S#RKvao8 37? 2B|C|3&nK]c(1"ENUtGXeT9B ,nD!Qu_?+\%1zEt")G< O!/+IZƪ$ wFBK3w˻Pb/Cl0'"L]O 0z-lbִvARJGQ!hq JHIP* Z]E-z, ^Tͧ.HьD4(CN,''K[ɗ s4}jjӚE%=I4DPTSӹ]Qr4L*M]8U*WuU;jY:ԴFEkUӯuYU>լk_ݪQNt`׻furS\WBͫ`+Udž-+`вu,[êV g;kZm>vլ_O+RlkQ[WQ֪md3{[z6moU^nKZfWnk\"/I2⃖zGW"&o^TܱNfos&>f0'ܾQp=_,~XP6IP)'Ix$jlPq; GG??mGY2~7PΘ2ldB2e/YdzSf7)TFfZWnnOwg<Q6azvI6c ICѫ3!Fyuglf:CBЪh6ԬV)}iLڻt&m屄9ԢvKmSHVW˺ς-i\yJF6;GZVv]ic ݮsjNNͽ `zT\fa{/ߨn&oz_ G ~pvb=޵;kd78#W)LM^rᭉ:Tωpcna3TyFt>|$<Ѓ~󡯦?/N]~\ƓRdV+ݺ=1I\Z4bڽn۱vֻJ/p;ߢ];ۥjwە ?|g| u/| Z^?ص|wMx]p;]o^σ=SkC}R_V+}y\7>oؓ7mݪk/G|?xs_'4Zg}_zϧ~^WU~IQ6wt1A"RW^?w h|6swHQ'f`caET=o9+ hLwvw$a 6uj~R-H_fe l#rHG)w v$vrp7d9Xr؂lb-W(IgAEVfk7+g0{$+GOgZb`8_q+>gutgpb73|{RwJQ'M'9^{urww_PDǂH~dL'o_'U7;H𳉰8KrIp6dcv[?PUFTHksbb#HR!KXQxc3&i׍."6VX T(cvb{ @ pU$vC(Fv&qN8axp8N䷄`ֆ-s=N/d(? l؉iqb[8l}HNzHs>_Q@Qs9?gFNj؄4F10vRb׎R ]. 
VLM-^dM(ԗ|ZևwyX}f~drj|hIvIx~sٗFǖ'ϧ pȗgYwֵ~əٙ) 晰uy隍4KEXg~5ɛ雿 Кv~؜Zɏ҇Gn7$맜kw)9IV9i9{ٞIOc,:nU: x|8ˆ3`䆏ʁ4׌H;;{+&(_cqژIG PȄZ_ѓGLH+#9_y"'awKr#&SbY$>}H#6I\>hն%IP:Fɔ!fy &H3v&{txUr9igJ.r&s^Y #!()5y*if) o2v>x2˰,*tc E6*Vz2 q:Jli$R:QԸP{.NB״`ᮨr#n|,$xg/p |_|pKag0.vt!F-!+M$E+pn SGvGhiTձzrñ`t\T%eG N#)HЌdYBαA c0W/% i:[ⱏ!(BR;4DY2+,y0,%M:iV6eR :yf",f23[R &'Mzr8THW++z A *+^zh@7:*.'/FATJo"8t6\$}jҟF.4K'wT5l NIԨ ]* J*I=uRzVլ5*U P+Zgu{kVZ4}sp]7iתխ`׾Npo+a-[X/wmdeY ume1;̢6,gC+Z*--NGK[ձ-oBvqK\MUm66Unr [mo܆guMvTgzc^s M=:iTk +,_N%gR`b*-'h_osߌw'x2pL`@xr:i%JjLvpPB^jx4Ye$=VUJl\qt`kw;w;"W $scHRMIJx>ة֢|R 1kf8ZrkF[Q8ҫi gGzh6 (?#2*!1VLm*Sc-_3)$${_*fz*"D2T'%޾KxA+LbVOv{CzeN;]!uaq,זּ[#ozg}Yo}k{|sݿ~3L-?㧎?t|xVVUYde7|o9fP dF`8*sfuvDWtЧŢxkQNkmݦdiuRufx1HI G+wy48}f?hO+(@juWVGx_L:x2oQ҃U'hDhC0_haK'6vwetdK؆N^ZXhV|Ԅvt2hX_TBv>rxqKa~^cx$lS*tŅu8R'3hKe8fHm7ĉ`](vX'f{w؉hv~FwHH%xu˜2ȋĸO.wƃn(X]GȄHy#(:"j>R=qZQ<;؏#9_y"'awKr#&SbY$>}H#6I\>hն%IP:Fɔ!fy &H3v&{txUr9igJ.r&s^Y #!()5y*if) o2v>x2˰,*tc E6*Vz2 q:Jli$R:QԸP{.NB״`ᮨr#n|,$xg/p |_|pKag0.vtdڵWH;s'۟$>玲P_wL)󒷪~* b aO (J =5"aTe>piI2C\?JP3-)*/ÿ#da`!$+;TB#.I`8&K?)(&xix'42P93Z+xEMcqh=hpwϸ9.m CE6*RjYтX9Dqt +Yd#INvu0#"?dJ|ecMf}#�QޑP/ A5Ӎ 3mBMi~/$5Xjί8Si[}М# *(ˢГD4Wl(ucL'mbt{$%AI|:1y˕rs);5mRaۙ16M/'@iz4w۔68l43=jQTjI]*S V9UpTø*vu\Mk JՖptE\ǽzRr{j[ D\Szع!zj_z+`'j+c;څV,f!+ӊUvMmkUς}_k+5U)kYn%-lҚVE.s[B7jdkvq(;]庖%oxk^@[:؉1{{R|L.ikX&@hʟR >;jKS26Gu&P+'Lţt&LS,󹙄!EJYD +@ur1~dx'x{z2 cGlo4HVe%ZFj%9i*RͱL[sT6aah:45y )]Uc3GJ"QZ87zΚViu쥫Ցv27'щ|?_i|gfxTƧk8;&"X<&8aq;6r#8u悠Db/twywSqCN'Ӄ&1;VN!og:7&1cBV@A>LZtXbEXlHOJm'}FMj~P+Phfv(U #zQs;|VqHS_a3]E,GlcX&nmA6xk@=HD(mBwq2UCE~xrm[kfleWȋSva&gPĘtƘd\ax8~wFOGqp9YOԂ.8OXTpeNl W;hsߖ_ qGhBȌ ~挭Xt57 Gw=i% fCb ȏXs@g7s$9$DŽ(|; h\'wzh]j‡u͘XkG 09[2I7ygGxMɔN){98zI{@zVIX~>^_ف~Y9}W d}X}i)}זj9|&؇niWpӕK l7RY5y|U95Iiə`ؗ{yuIWvzr1YɗɖwY n45~C7yI}Vy/9gE99ɘʉًY1Er'ʓaTr_QF!G8s+(i>T/^LaRAFrsـXƠQ*A9{r!8^>E4o~::,XiؒQfAXXdn,H'' X@jƀLe /za֐pk-2 *Q(xa#8Mx7_FłG/ڡ7oz㨑u-9!i.ףjvV< p*L߈sLj/OKMjEI{QEUSu$ #wȧڧ%tpȢqi 4ꐯ*UIjR8B&e56,AMSRH.h( ! 
=#Q j*p0Fȯ Ǣ8ecʐꆂF` fZFȥ V9l^(coi(*hBy YBs %!H fS(K:fQ/yyU9IkNjg5O2⇝YR[lKYvyob;~hs[xKI=ɕ}8:k;ɜY͙M&vf{Pĉɺ뺯 + 5P;scalapack-doc-1.5/html/slug/img27.gif0100644000056400000620000000011106336071052017062 0ustar pfrauenfstaffGIF89a !,   g]tʮd q;scalapack-doc-1.5/html/slug/img280.gif0100644000056400000620000000254506336100310017146 0ustar pfrauenfstaffGIF89a C!, Coڋ޼H,ʶ L׶ ?^qP50eYRBԪ혈6.Á 5 jnB;vW&54vh4Gזu g؉Ga*ZA֘e9q kR9CjY :,*{H;|kڋ a< V$X\;`scb8FšB9+ϟ^ l'>=uae 2 ϢKxZR#D8/{J#cҌ)c]|' Ν1%ӝThF˖u -RL;4wU4$;Q|0׬`?hOlyPD5p]Y;d65mп E Ia).@m:3sXgط}Ԫмme"Hq3t"Rű hXYzS+d *Hg!x_[ 26+s$Vɶ%_d\)! f(~xÆ HbgEhb*,c<(c6^ YW걆cߡOP4t-p? B^'i$X$^ZGkrc (59 =I&dޙtExi}6JYgQȢt.:Ye:i&_UJ*^j1iz[rjg xInJsJl:-uR`V.(Tՙ]i nZWeZWt۩"aibܾA)z2E H;jp[-K(՛PE5* O+A2:pƳ)֦M\tG&@JB=q˶fqI)O+}6ou"/e.LO>U `x16zi( Yԣ|ܪ20n2行k} rMYD Zͷwf xp NxUmx !.9{(fvLwy"/!Q:oȳ;>HB/n ]GκNp| gޮZ7Yx=H1^g1wYft.zFkiWmڅio)cEgj҇{lO.UV]1kl[s*^tmY@݊ ^Ƭ3`.Ti`s#%ad.U"r»aZ w6)GϪlrA|QA@Em5U]}E{刋]\qErl<#ø*Wq;iܣo? R;scalapack-doc-1.5/html/slug/img281.gif0100644000056400000620000000015106336065603017155 0ustar pfrauenfstaffGIF89a8 !,8 @#S5+vOXeZ!q)UFt3*cMPxܥJZ5 ;scalapack-doc-1.5/html/slug/img282.gif0100644000056400000620000000022206336065667017167 0ustar pfrauenfstaffGIF89aO !,O i@ڋu!3zىZ W} ɶ_Jk|˰:~,Dt$YJWP*zBTwhk;jl.VӸSl5"#Q;scalapack-doc-1.5/html/slug/img283.gif0100644000056400000620000000022106336065743017162 0ustar pfrauenfstaffGIF89aI !,I h` k[vv<%g:+va-º%f"A$P1Ƭj i φZQ,΂IgڛK{c'HHQ;scalapack-doc-1.5/html/slug/img284.gif0100644000056400000620000000020606336065777017175 0ustar pfrauenfstaffGIF89aC !,C ]`B{yY*iJSkh͋&"j$7h\%1iaVzRE !ޯj ݵ(Mϳ1;scalapack-doc-1.5/html/slug/img285.gif0100644000056400000620000000771406336105775017202 0ustar pfrauenfstaffGIF89aho!,hoڋ޼H@ʶ L6u|_!hL!KcsLڮUnW;9h&j; S: u$HHwvH4اUHcYe(铠I*PtvgEu:`Z+[iK {;|ܹj, M ;u)e<z,6 NnjN4&޽RHclψk?a+:!ԅ }ٴmcy` R+G$=j2:%e]KY uṄ3INŋNc0-]޼QR5_Clj|[e 3XsGTUlb@@-[^¸EWp1]Ϯ5׹nrlwb]X3Ve*װ?'nvl7FcUŖsCTg&C%E]:i]o\ <+wuϭ&[׏,|{seG}7_x z))g!ge8e gzi|qRz}bf.hصXz7ǛH 
#{4>#9_y"'awKr#&SbY$>}H#6I\>hն%IP:Fɔ!fy &H3v&{txUr9igJ.r&s^Y #!()5y*if) o2v>x2˰,*tc E6*Vz2 q:Jli$R:QԸP{.NB״`ᮨr#n|,$xg/p |_|pKag0.vt]}&-}kR;ik>?0p%ePCBuuو`^~ .$uCK HDPX: Vq V1zo"D/HbcuxC8laX/v{_GHXn~dl!Ar wTdhHm!rwi,-\WY' Ec%F*gFV$H$ntm lBJ2}fc=fEtb1IcƱL4/\Z!WMv4#2b1qABU}!/IG8ҏٙ < Ub &V̲@G\2#b&E9MSa~ؓ?=ғ]<%M|f48'RaTEX^lbYUoim{[涶E-,S \6mjg҆v6s\ַ,q]jnwq6Mc`4 aGk+ynuyo,9Mos3j!p9Hl}ޞ`&E֕9{L su#ࠆؤ#&H3\ubq:8V30n3V!޲6ތcSAN[ *G+A̪wqp2jei~M&3W&j^T`sX&k^ ˜4CͶi%Xf3S2 Fpq MO 4}h%Lؿ3eLiM/K)zԤތC@3sf[ͺb^67{g&4y]Ejʋ3jK7+mQFb.fm\SLN$ꚺήoOEqkzW;zޏ=򲶵; y=lMr# ձ'/}'oK􋧭qwG|U+z+gyo>p){ƣm{}}O{u/|#P"?|黼 ww闢}ُUeWB:)z,c4,rk)ao9 vb&fgM8W6}wo`Jp&A=Dc!(RETd]6&r}hq&|Gl!qԃ sxfOc(fhs&~R$2L^>7NVxgXb,hf/L(b+qSFB9+;?ꈻ&tBZ`~5Ԣ֞KlfyV ʌ,{ Gvfd )Aњfbڭ"<ֺiJڦ:jу:tkH*ˈʺ'a>Jd/7KpHGzY8٨'8)nͅjzIzSœ_Yfy)):wc뛺ٶƩv[yzɷ)˴aKu+;n xYkg崋˶۸`i k{W뺯 +KQ;scalapack-doc-1.5/html/slug/img286.gif0100644000056400000620000000031606336063463017167 0ustar pfrauenfstaffGIF89aS!,S`{ >rvT_0Z&5bf$Z{I3}컬P'EJ'KsA QYZ8]sr=a-lS3{ u@zW3}}{%'QWRWvG%%94fuicz +;K[k{ ,[;scalapack-doc-1.5/html/slug/img287.gif0100644000056400000620000000465506336113302017166 0ustar pfrauenfstaffGIF89aC!,Coڋ޼H扦, LGm ĢLʦ J$@}ayNPeN~dxN!8gf8HXhV8pXH) 7ٙə yZyJJ3V1I{wJ (K*H,,WFzZ;%g ̽܋i Iv=~yno?|ȽtYiWb<V oJZ`reA:V%m*۴saoo8!RTѓ WZ!L14Nu6,Kd2?_A J*WUǂ)# {k}e"hեFìuv.Y8ϊ JO{cR yBf?l$Rs 8fz7\Ǿ󚦋zCgs\>;y>DTɍ, \K0!wVEGkOf"6(ImWzby-T]esS{e^D`6I*1NK( 7Uaq#|i)g~wP_*>=M)I\vT\[~M##&|z[׏h.P)fɱحЌQut BCbɧvf3&i)iV꣇ hB2}F*^F|([ ~^ꨟh( 3ѬG0{y枋nOn Koޛ1-@`2/- FD gHK紖e"iXi߅1nN}8}vm1GmwG+"ar[bDdϪ4r&z$Me#gwS 4;;5kO}-7A3A_AGIuH  R>U΍&K\Qy/ʕ;9_TC SB' 9J}D{QS4lbfqtckע-d Hgf֒"J/_ضUQy^^cBd! 
AqlV܁',zzU^( 1to[pHfd#.;pfB|# x%k" 5L5px㞎jJ2tk3eY h""2MQ#3h?"rr˛ 50v"L\!)Fb%1痓2}'pZzIi%Njԩd3)Nw*'(Uw*Z,U~k!YJhTR#C Sk>Q2zZڬtqD^zV9X+Ec*>rTlGzXܡ٪H(xI[̙4%{*& Zkoېv-k*wB.s ҭuKbaԍT\t-N!ٶqA83Ê6gF4Sߡ-ުs{Tn$C rԿxH^װKtdX?nNmUL,TQTlٕ6zmi6W$poBoҫJb(LwcKegyuig Y^-ɇ}GwQ%ȂdzCi&(4>t-on0+3P6Ȝn,$YwknːE{=$b<ٝ QC($FJddɠ.,`46'O\͡\57kZS4NT { nu k]gU \WdV)7f5l=ǚX=3K﮶Jzontq-5ύZrzӸ9xaQx/ɋ!iVwܵ[k³̐AA8S*7uds3|i%of`sfrZC37g'w.3.ܠE3vVm:ˬS&n-tʊ$ׯ6snb+['emS{{nj;K-/c~n;scalapack-doc-1.5/html/slug/img288.gif0100644000056400000620000000015706336102547017171 0ustar pfrauenfstaffGIF89a& !,& F`{lSSjk8Q$} Wf'+ۺڱsqv/-`CxB! 8eEO (;scalapack-doc-1.5/html/slug/img289.gif0100644000056400000620000000014706336102650017164 0ustar pfrauenfstaffGIF89a# !,# >`d*9%T`lA8~YTezhmLzT9#x!=P]N"6A)0Q;scalapack-doc-1.5/html/slug/img28.gif0100644000056400000620000000012106336057746017102 0ustar pfrauenfstaffGIF89a !, ({^@4>$*:y5 fmf;“rvp;scalapack-doc-1.5/html/slug/img290.gif0100644000056400000620000000020406336102721017145 0ustar pfrauenfstaffGIF89a4 !,4 [`{ 4zgjtE9؀'Yʱghc\Ib?&K^:DU56C0Gy<#Y= );scalapack-doc-1.5/html/slug/img291.gif0100644000056400000620000000017506336102744017162 0ustar pfrauenfstaffGIF89a1 !,1 T` ږ9JxMqa)ٌr-FRy?e< i1j`+;T#OztBt(;scalapack-doc-1.5/html/slug/img292.gif0100644000056400000620000000010006336061737017155 0ustar pfrauenfstaffGIF89a !, y~`3oF[HfR;scalapack-doc-1.5/html/slug/img293.gif0100644000056400000620000000033506336103702017155 0ustar pfrauenfstaffGIF89a !, `{&nxH`Lj_F{^p~3="3 N:&lYc%U[b]-+Kw1"Vk8mh'Rs(d(֨'iV惹'7FJc:2&EU fj%9:'*̪yF Q;scalapack-doc-1.5/html/slug/img294.gif0100644000056400000620000000037606336104062017163 0ustar pfrauenfstaffGIF89a !, Ռ`{ 4Ȋs xiHRXh%[\Svk +np9ݪ Q Lڐы蕛v-c׆5 O의]O$lC$xx8Gw7xGV t)2iUiWI jwYjJYzCxQ$$ꊺxkcimd{1<#w+ X;scalapack-doc-1.5/html/slug/img295.gif0100644000056400000620000000017406336104152017160 0ustar pfrauenfstaffGIF89a%!,%Ssj1멸?I]ˆ9]DM[Թx-#i2ێC-mfd|F N\;scalapack-doc-1.5/html/slug/img296.gif0100644000056400000620000000016606336064114017165 0ustar 
pfrauenfstaffGIF89a%!,%M 퍒[Vz.M]5"eȐ lKͨkM>#"  7jܮ (;scalapack-doc-1.5/html/slug/img297.gif0100644000056400000620000001117106336066645017177 0ustar pfrauenfstaffGIF89a!,Z&޼He Lױjx vBL*as|JC:Y׫5.+Vv`v2xdkv'G8H5G9yfy)rizQJy)JEVBi[{k[ Tw %;f}~{Ҽn>^ c Έ~WuZb/E@xp{4꫆]"akdOL e:0W:ql͞-[pYL=BMG)]R!W#6bltӬaQBuT"#QIy싌z&5}7É f(ű\6Pl2Y\u,ELkjMGO{f`hh~bW#f 8Лi; _&\[N/u[ٸ*Wܙ7iuʮk[罹|= q|H2^|~_(-Xx=`5 NmKH"" bOD7ec2Hc6c>.H؊'dlE6q$2CZd_M*dQJ٣-4YjY ֳea٥i4fiY&VATfa]UrKVHEr ՆYʣDyJ~(6ꨠ%)RSaZz餜v5&+ehPq.yh}E'{"_.Rrq2ְ%Yu:JO*&qҶxJDjj*G9*ҭj5j*2Y:-~vzlDB|1j+ָWZl'繲dǦ\`zG(u^ZPi]fe/ٲՀ. ,AЪ˻["KӮD,9{F}e,9A}R߯mefJw^9=a-!{S$0`J_vD % L AN Y.z’:$nCsoDА[hG;J/|ܳbĸyLᆧ7:;۲AтD!g<.&%K9QdJ;=7{ ᠄V A$?Gv<MFyP(<0J#x#v%B\jH)aKYrzeJucd^3x!EӬIO6KHeΗ^&@TjD3LgP M-b#?fټj6ϡў=9q81;_2 WIV`]n-XK91J&(㩋[96uTk!~FY.W-bڛ"q~4O1fǭyO}XJFߡ}1Ii<x#J0~.*h^EAl\Yh=_/k',b_nyƲNCo xr;[$vystYow< *ZO@ ?ryÚz>qA L빇6_袋ϑeU=p~޷l/o97K֍4K~fƅ3aăyYeD}?Kr' tK&e=dX_bbxS|S/.VVmzu6?]('I'p$~Ow*,&}~TD!w1c=E^RvpH8EG*"KHhB}F=tƅ%Wfa(]#kȆmM8_dfR(aeh(Zy8AythFi S2xhCSPjfh7YPJ76(s9s/Kl8?4ƈqHVen({s8(bnbݶshmo@XpsFt,GXBd7Uu`*&~:G]ss~u].83u2wAP0XԓKywqw붛ǐ(5eg4)?Mp9F&DX8qyҁ"zѕ{wz<+T{9Q&{@əI(JY@gfG)pguwZLJ炷xM;})IZ|)|:\ ~=f Ewsqv;wݙ&:8Hp\KC~ 矩+5Ġ">&eKIiwW_l9w>Y2Fs 9tCx^VOHgS _oH|Ǎi{\J[/U xH+ME)Gi (I0UUH Bd[臁ZHRzE:X0|U%膟 چH gSeZzr Z\򨒘ڨeZyz: {8X)MW*zVjv yzZ|Zx)JtӉfh w(oIcZsJ#89wFlG*ȟkׂ[+kXlԦ ۉH֬k~.w7k:zz2.vD=4;o 6;TI* B;W'IѹFۖ경T[EJ)9iQ3^)`6u6bGf)wSyں|6n9ӶJ#YU':4A E+ nV&,k}]fűn]%[A;X7:~vV[t;VIyhGQ0g"y 5ڳ3DWNˑ@Kk]ڽJj^Bʣ`u#ƓKԾu:o#*vJE$#&_rK,/߃iͥ}=tEpw뻤r3[տYzrM'HJsҟ \drʅ?j~kE>;Llzꛃ+dȳPc <B%|AqD^$\<`W8h zjưٝ7w:hzo{ /;X,*ɓLɏe eMɟ ʣLʡ̮ʩʫʭʯ ˱,˳L˵l[\RL̃^y˿, tńo|2LilZ9w[pATlA.s8\L\PdƮ,ynهm45tz G_JK = u!MSaм7T\Iи;μk)Fw=[,$} Ҋ7F:+fΣرd˚=6ښ90WtĪݽ|e6WE྄ ,Gf;+# [:YJ^\\ۀ zD-:씍j&7ݎ?ٮ7- *p2ن9ny, M9v=a9֏:ǃYo_> Cʼ{$I)wj-;7@IGI{*! 
BH[aX3hDauYEؑ,6]'a{$`k!(Kh nmʎ|L㣒(,Nz"UHK(X}xfjfn-c Quf J9vf*z#de S+8 Q6fS͋cE1J҃Th$1U#%vJ+h<k˅*C^{,Snl~K,k䞋8ˮnH+oWJ6ԯ;scalapack-doc-1.5/html/slug/img29.gif0100644000056400000620000000017306336071110017067 0ustar pfrauenfstaffGIF89a#!,#R#\}%詔UoϤK|؅O {ey`@M#cOTNv;scalapack-doc-1.5/html/slug/img2.gif0100644000056400000620000000016706336071617017015 0ustar pfrauenfstaffGIF89a!,NŠoAw͌uUGIX"J']/|Y&QuZ~X)D1T]Z NS ;scalapack-doc-1.5/html/slug/img300.gif0100644000056400000620000000114606336113363017147 0ustar pfrauenfstaffGIF89a!,`pm⍦: ,.vp#%{z ]~r5~6T\1LHG-UymSbVq"9,ߒ/z^'FHx&hD(7ԅɘGr!ģpXHw Yڸu) DdX ;bilz;!Y+,+*U=bN L,%mtJZ)3Ɂ띬a;gЛUP:?($[dO1Z%<{ȋ eȓ\{X2"q^e鑓 4wraIԶ)ӓUFBUsY* 4lP6=iML=7U[*pRRjPfQb+IQtKIQ_ qJUfJ]WVrXt5&ӰLBګ1HzWݷM0ǬL9\>fuiVǮXq6)&6j`[Oߙס>c~B??S_3x`HT3`1ZɃaaַa&a ;scalapack-doc-1.5/html/slug/img301.gif0100644000056400000620000000153106336113404017142 0ustar pfrauenfstaffGIF89a9!,9&Zbp[6r$L Lv|y 'L*̦ ؛EG} IJްEwn1Gu@Hc)9I9xɣFuf˜(2ZizQڈкuW՚;Vyyt!x+F -ʹ l;;.> kNH/?oNխzoN/@h&Xƙt#!BZVFrU1k#iH Kaɕ,[!r54kZX?<{nYMNDPLLTt5֭\z*6 ]x.QD2ρ%%._]eZne^6a:8Ȓq1dt]VAu$9`)w2Yl"f~3Oj]qr t h$8dRz0XG"y*1*g*(}+)ZUN]P{3Zt"lMf6 -`FKOvL[m!F DzߎN⒋. ˮ?6[;scalapack-doc-1.5/html/slug/img302.gif0100644000056400000620000000111506336113422017141 0ustar pfrauenfstaffGIF89a!,`pmIY Li.*U8}[1թ*,^Su(lw(G•btH)W'WwYh47hUȢ*)(R[dfT7+kkYv̺5,v*kJQ,) \zge+8zM^B~&@5^msO>r+Z*ȗj6g$ۧY/9%˓i1gVɪMf^w(1tCOBuZst*(X XOâF+3RbCH[̤*W^[&N=GI*Ӝ2!ێY*N}yYx71_2MqQyOj? 
tט =^,9Qkzvh FUhʻ]"pUs2\"rܹVFADsB8V):o0>̟&7Eyv_ERYq`N]WYH;scalapack-doc-1.5/html/slug/img303.gif0100644000056400000620000003155106336067025017160 0ustar pfrauenfstaffGIF89aq!,qڋ޼H@ʶ Lj/ uL*̦ bZbC-nM4hn2λV%ӠP`@ȦGhtEe9# #Yjz'ZDggI:7w'' *kȊY+{|۪5u֋Hl=^jlΨ-$ߕwk5|ʯv|t3;.lƩ.Ն=a^Dr{'tSA(J@Ђ}e^0q`~MpplH["ڝy~ESH1Sqdb$0mc&SΔGu)!9֎Q܇bi@RYaIfv6c3ە 'Ihg+[&NzFzeFcT\~ *9UX}jpj*JkankCki):kVtf³l>khצoB… ,^mnk- ;E-['nMd,bo2S寧 ?]K (AqǜM +B E01²M*VGv\=eRsѺ3S(c-FO}hnT'<)u`MYm N{Mƍwj ,y-j7 ONy_yoy_~d7xLPဨ^W8lz|Zq?3Ôe>fZ|iy=ʤ]>}+}a + W^|Er]kvxa; 2 8OOah}:P{䧌yu'E ЂTTn1M;5x"d7>}rz\(+omR!̇Ixs~fxR_r)GyyNoNHŸQΦ_|iF5z/='j0AMSWۇU;yTVjtLGq#W4ܵX&JsHRL{ut$YG:dXȊR-*(G{% HϔN4N$A> s;PwV,f+ն.R`%T65$!RmrSk=EH) !yQű.wmrѱ׽/{\|e_waʕoy*]𭮃BN8knu1XNf|iwZ!q7K|#dOm #߶V` &D'#d$[;0CQ)e#~WXuJF<]+lN\E/_ MQu1e^5sE&x~crfί?6he%("w1ՀEeOKdq-FWIֶ4uM`^q:_ΓF锭 wوv]ljڰv]mb(7}Rw F 22ls51?y%K4G/66_9֌13* 71ģGog^]NՄG:: ʍC <~(˯=H>wI%392],Hԛ܀}n"6tpP`zu;No9ZR}漨,Pq?6ڳ)e]Yoa/9 j{9͏jZģZ{enFڌg9x?w!ڶ =c$$*d2=W7FGD9!2>,"S٧4 ݠ.Cvs7rMP7{qtl`G{}W/QFpXVHYXpxg8ZLgGkR2Czi$Y/ G&(tkіaICiGeg}=_k-(h92S*gk5EaWa\H[`aD ltCuh4+[ ionkvl7vf 7"^$i&yYn, 6(}֑y'B"f/yJoF(Z +pF M*0H;b'?餅.E\n޵}x:gWu}& xD}4.vJUk懇=XNx!:~}1+tɄ&vnHFEp(M+fb~dYZIc)v GpeȘ.a~C3iZō||vY7TZV(j98qH&?%(DVpt4^Q0BlH2hĹ@dSJ8%% )GcUIK fd|8~uvK깍icb_It}OEeI6UcLʕi4" PȢ Q U kseyU=qnjKC}*ڊ(Hġd8E)f T fH}%~ia) )z8rxB#qʩgQ')% |bVYjfUSGK82vZW)w:!iƎb 2fntuv8([񹒨ꓪ:2z}35] 㐫7 4 zi㖬Ss ֓T:Zֺ_׬f:ʬ(MTWwG 4xz[rVzsngsvopgXsBTgdJ*Dwuwsp5tĺZqz mM9a(I|䚒FC$r{dXz0W? cz3^ Z$ {"ztp r_wgXh}ٛP 9ƊMp;@gʊ;#rƔwHp墷GEqʊ`uR{};NnxXeG'VozXv9,Oȃ(h:nחbM{Vئ5w'j&|zx뵅V#(G$ ˟)+Y 2;pxtgqVqKVbAYkj6Kq+P1A Z&᪒xd]r=Gh ŧ+|Jc,xMAfψZ]?A.\BUkCLNoN)Mpr8M@X-}漛d>uZ^C7?pfn'^xާ?r'B6jVcİ]3Dho~݀Lx颲H NꜢާuߚa[NNaN)6y)>..NĮn(U]C߳vnL)_^c? 
xS莞ג[!C(Ws O Lb]ָ\~_ݷ>P1]["_y0t!LL٘fgta͛y㲋NFRTמ=︴;]:ϵ*<8Rμ=qќu{0*:/̇/+P}⹿4'm_]/ʥr[Ѩg-kDv_?_F'2kK.gNmKvtr}欢`n5wc$;Op[A]ǡ.|Q=N>xҟ--+l,\m paZ{Xiƒ G__{Wy9Z(JPB,š+K;"꘮SvʈUMi_ hz|#)lj+2jir奟|& !nMNB{妃,@O +TIko$)B!8'_6P.n@іJ8(jZQ_0shS EW|;(dP,Cz,#pCL)0f:0K;)1.+7/KFld]xM3d3lƐF8G'Pfl\1WOOE8,rIxM>Ĥ-4R6eRM< dSSR7dRfQNt/Q ҋ7z̓RZb.DqMA%͓*+qټC%|^}z×H">h?Du{&]k\誃QgyZA ʘjjW0tEZËӨH6g Ũ?^A "úiY܃ͩGQ6`iv6"Rㅺ!NpF4 0|"T#+R 2g r9ụV&_\ѭMBeg"@,lDG'6O%niLk[f(Oڏ";cBY dR" Q=,PxNkV(xҽԄ4ɑvN) 'c(Z"?|B_D6Z]l j4S=fUBͨ^01+sRCԓeQm2S@X^OҬ&NRvQPLOW5 c0%*SEǚfkaџoPl0 R u[a*nn-E{I]lW~Ck%EX -.o$]˚$Nƍ^8%,"/8:gz꙯x_ &vv0I H=B"2` h˞>a/U ):zf᥹k2p#<2sxQU}K\H|)ylݔ5ȩW{~{75dxDUrK| FҭU v8[HEv+b`]nulgFӵY;#F*ײ[[wu*rɞذ$mȖW]6x;];;XmiV\ä|3~`IMV}%Ȟ;mp~!yK[9UuCdZ3ONf&O9m_r\y,9nEcNg\97:w.My,?UJ^s[yJ=+PwtAP.m`]uL{$=E͚ kƩAFȑZ҂x=eRQcٌE=Ϭ$i2wkCKc'wxL< P yd{X;Ʊ4,V:sO<7uQ\?^CP$WCWT5*qrlg74{"o7\Ҫ Zf誯̠ɱΪ_P(P ϡ$bҩuO(Z*8*"KS L&Kk6L fM<2ϲ.W fƬ:iVplZdBLzӀ4TC׺i &J  ݬ=*Eh`RYʏ `ҴbmYQ%*pofIp lܨƠj&P#NP^7βfg-Pn/K9 UhKo̅  1 pV sQVm ),m똲: -h7N 7]&0?˔,.'3u1 Kl1>_G6i27@;9.Q>}*s s0)vF7E3JJTAKNKJKLHuLHTLSM'Mբ10O&-=~X`ɾ`ª4Z B X#:oop%5t\ύd.2&rg 0cS1Ǻ1lR-RG-SoVRJIWOv5yYS'IPU) 3 Y5TiƏ:snP o̫3+ ]@Qf:tPE3؞[+!Z°8i( PX1";h԰:0c?CҰ T`6e3!I[HQ"PI@EI @iMBݘ1b ԣl` V6u[c f0 k0sL=;pHysDgn["B\ r[?U hZf) gAoi v8{چ k/0J}jDvQBjQpS[9J qQ|vbos03?hٓӉd+X?)e*`wq=r RKl]8Pk{zW9T(`:iBf|M|MWMN83ofݑDQ+pP/FW{$R= v1pojDt[{E%4OMQ&G$AtT:~//I0ԇRWbd8NCWDmx.w@EtNXi}Y XJ8X=6Bi,K81NEݖ q˱$uqr]% 婇II'希ڊ#Rz:ǚϧrZӚ% PŚ̯gGW1®5ˇF9AcU8PճqU"leP]7ͰXnOlFpZUZ6ﲈj+鉎91odCPb1XcV L08Ō>GUC:9R=ٶC\V'#۽{U4' ѐ/YPI^c;uF sعqI2s!e P5_[[!;3qWJ{\ 6˜[s%d}bkSVSx\c6eY<ʗ?:tsOU0W˪Ae2WMz;ڎsWiwi-beʟyUvնUPx?:6DS)Qm֖m PG6:5IAuEfOpqp}c-!o=ѱyOՓ[N|Uٹ>K]5{Wp*ը:-/u<㓮Bmjō弰7ۑܣyPVyleڵ(n:=w9g{[Z]XUH\s_SYqD ƀ~ы~Q#>~!Y阬^Œ>~ľ!?s7^W-;D9{#>~8MBzNSۘm>1/X[+d-{ط{9߱g5D楕+X#/P 1 w[Nx2g4Yv>8ECc~!59@il*ǹSd7mK+SqqmtkcӮudՄu{m7p].[h3o~xy< (=gt}( x} \!b#fG2I0 UWy,n6Dtx>'̄QAz` JĤgEsNZV~䡅 B"àX#q$E;F8Bޕc0x{W!YSK.@S%u9ޖ(]WR1RAR1ɴvإG9{D=$D)lx" >,fMZ eAlxYNy=EhҠv㝒zx)韫AHX$TH fhwV 3I. 
'}J뫟SgfQjZ6elRBCϘy j' %"\V;-,I;7[0K C\ Zf8jq**بo3-G 7t0 މ cm #/Isb͕|.}.?hŹH\.H-nx 9f ,Y+YmhO0{A~ ,St.Ta<-7dR/Uߚj>|3Bvg:'m4祰鱫e#=/n㍲֨C:=6߽>[\S=}mB]n׾fτrHc]Ҩ0"(B∰y-P7$d-SRB_5|ػ/D IpX薀YG+ۂ1!ƐiF+~H=EiܵY F61c5PdYCM2KI4 .9l sA b v@)e(DQ,/G]&R 6 P9;m8 a+y%2%mTGÑMO %p؍EN)snX6C7 >rc'I.+,8Cs30pZobE%/WrSgia]yhӵCF):g:8Dn3lEDHjfBQf5P\q\M#<*ցƼL$ad$zճriUjŗO- Q!)QԪe%1 .Y 8#MӷsXGסGa1(8IYTx4d9JZ) 4ŲzgJ[k(({z ,|3kT;\Hmͭ .^>>ޮ\aVO to[;L9@aĀ'Js>w-0H9zHd/8h߼P 3f<{#bP(5E-@T@Y5֭\-x俍BB$K$Yh=mk*U"$ݯqe ԓFC MxNe5\tw&KrԤ/|'Fѯy6cizL210F̺3#'^c7'y߰W|,S4r{{ܱ+m6`kups9AW)>:q_zS!QMoI` z kop 59tbj8i>RIl-fz̼2:Hcj~ߍ(#bi|alsdk%O2c0PwheNdeT>bV?xdvRti&+hem $^JX!*dZG[5iN_92TqKfY&)BW.jf$}jl)zXzꯪz{u,M7ZNXP i"3쪅d|mi_֎=_}J.*27yn(* naLTߙښ^;w">I&\ov9\-xKMU-q/9^I ա.(J?߮ZL| Zv$Rz0_Ҩ^x=`Vw7Ӳ,$ޜS}:Xȳ}]v]a/*5)GoX7] t/vا`nd[ֹu(Y{c,#NCy)3E?, F:oD?{^9LwMM=* wݣ20\,~)H"a*L5 l*Bff "QÔM>"+;,6&u,yδՑ ) LvD/`Xf&!J<`)2f[#ŭ+6bbZֱ0UBՅ]#VוD*rldȺ+iSͫ(Ug$'A}Pq#7'|WPI>WvY +pH K/M Ҳ9ۓ+ 8[5-%)Ky;`f ^agBS4 )2МXKiSGt'̊nQ=+r1"e9lJ4_; ۸@"KG/HؙuIOҔ,\ڶDŔs[`AGhO#jQTܳZĢwi9uM:V4GGQ6ZuX*\"gʀTp? y,dM HCʧRpQ]!*-)J]hn 1{𖳻)y8,,y4/ Q5z2( ~c[ǵ&jږ; lLyeV Tqc$Urq3ܧfu:֊,4Ch 9'.gK2"ԈJ…Ԕ5ݑ0.G,e.ϖEhIk_3ߤ0AţG*:VˏUrN>WtV2R-SzbLF0 ]c8u 2Bϳƅ"(rPY:4;o3 n,߻rqݼZ\FO*} EO7OKM׭;\z)ٻS,F˂^gKPgUKx7m~XGdwwfhon^GJFGdzD 5I&h$B~rjS_7T#5a%k'r7@3|3ւsuBhՆQ(She&T#Egr'$FT[(mEoGHmuV1Gze@wZΖGp&1H㗄GVsm:U`5⇍wSfCnyĶnuntsex=D^^MTsN愤vli>rlXtjxp"rKrXX*'X y'KK4*f[$0X[ hK65))[r9 -~iM{ OO3:răIqeeU9WjYcLOA|ɄiV A5YDHϹz$؊r8fțzR7o,9扒p֝˙8n]Wda)a6xٞdډ;H G(_8 $م V ! J'p z蹌)وphJ丆+ hE9|-@IviBcH3A:k4jXۘtub9fX˳>F gKZv uwŐo'-wvkq5KX:&zEJFn}ʌzN"wf!yNw4I&cXsByqJGNimVvªaAZI?{yBƕgz6j,U0TPQ N4t oOezTTLgW*dJڄiIt9fzj.Wz~j(œLziʗy5y|Y2 |K::)r'¹[9ybpaijٗȉob]fh:©tzɤ ڭ/ pҲũ 2۲9nk)Ɗj3qW *j8[.m.1(ڇ`9Sd;kج.雃 oL{&\k銁{Iv=:=}[4&V;:ZKPBT|˸\,x|t0 +w8j:j=>:]Zj9zYZ_`1 )Pgu:wji..Vx]iU7Wy˦'ʹ k.dwo]^61A Rګ"V]ૣ=铹{&({`Wk]/JZ3\ze>·]\3!?tx+) .$wXwR:\uxk"L7Ϫv:s;.<D_ڱJy佴է?,^̨řJM=ڽǯD|Me6`]iZÓ'C{Qks rȶ0m F[gwZ}݇^l' EwLڮxq|fzwM-(Ke6mtJSz4_ 3|+U$H(}?c'8_X\>)K.&i^).E@c旹u >{FҜ䢮܎MZi~\ ^}~K,؎,yrm}2 ך+sxKj[vj=2nD. 
[Binary GIF image data omitted: archive members scalapack-doc-1.5/html/slug/img30.gif through img383.gif — figure images for the ScaLAPACK Users' Guide HTML pages. Content is not recoverable as text.]
I |xVY{&YI;>DMRDJoFI|nb &Wp$i-sa/Q7c81m`0JI8N*MqSa1eqi{;scalapack-doc-1.5/html/slug/img384.gif0100644000056400000620000002132206336111177017162 0ustar pfrauenfstaffGIF89a4d!,4dڋ޼HFʶ L .eL*L|6Ԫ)ja Nla-cy:XvW7`Sw`(ĐsɃG Z7R :hwJ'{:vickZl6Lɘb*{EL\9M:;M|j}ν=z=vNM\o~|S0ad 9P2VSso Hp߫o@).r|q EQ4J JL)/1+E>@Sԯҏ4Tu19s[Z PzBәt}&MQt]rEEEFtsX&u>[4$ֺAUwN6Sܡ^ӦH͞}[un^ WN㚪QŇq0Vs.fݫDm.z&6#oM:nnOo=~EUѭ7]'xuDS7Uځ/^A7|vA߅ XzP!&t"f"U1!b,3S:b=j㎘(dIHNtAF UT>9iF\?9fSd\fIf#p2'o 薟aڝg` p7zzy|rXD\]8)Z[}jR סtZ9*Rtj`ėC ^(s:FX,nIzTq_s.xt[hLɢx^딧VߵQ=7YVJ8j`s1lzӖ{ 10h1ד^"sŴ[(ۍ{n4>-w3͵tJ{v23cXM*W,/xjыFX_KB<0[teQalr;p إ^a}}:mm~ vTʠ>4:1P`"ygrr (.>\Llj˺mB\hl.U[Nwbx޸C:6z>T ~OCy$ qOX@@%pIN3gde{W b@`6XA #f<%oєѪ(C);JŠmn5&`dz 4B 1fepl sL1'=E&3mjh22N- Ur)Q͈0方ӧ6D4z$fXOI@r%C\q(3 $ɐPrGBRu H]2ٴKT/ jR vF8F]*b•vsq갔-ҥq\UR#H}Va1+zTb{{^W2͓Zk'诇UK#$ Nivg? Њ~HlbM4N?dmjZ!M1tmQ=ܤAҢ)SL~̏/ -VXJdz{➮PJM\keE2I)'{wE#H;80m Z#p{.W纓ps,{KY(Ew\1uxhI!^'z ISobeNg`ؽ$u+Bz8lJA&ɘlKL'M͕[;Qիejd,FԗeY/{-oNȀ0磌d2|z2i$ js(ij< :4yL/҅RQ^Q 10aU.3Cwl予u=.Z`u|JJf&Zh2[ok=3E_۾ΥL;Im,;<"2v&}Y|b}>f/+|5#/ڻXh_w ն썑'kPE(X56Zü2rj9 :[k]n{c˚ߨ\ %K{yt=vL-Ƽn"abBIR6C+r㱻e#ZlMLI O9Ex`2꫻d6C<r/q3\[àY᩺[Ã#f|9^UdN}9֏3:6{[ ;rG#tdoJ:LAWh1 O_f`jyYؐMeS[*Z98Y)阏 )[Zi0y)ə<Ș eIjuHafCvwGGUnWrG]f-0xR^zȕ!A`_)B䙳mr2P;ww;;oK {X8] i:8ԝmeb >g|ƞkKQ ,ַ3edHxiw{zۢte~@/~,Fɩ ʴCa~꛼YӕKR뺠&cZ6ew 3+bMzxgbrb9mfw{]W8-RڏJ2˵-*O lH |H,q ɼB\+JGj]7T}C\kӶh|+yK[{SbHX.ɖ)d6;/7ˉ9q% +Ǣ˱̊)LD+ʺD*gT7͠ijW}"I]VLcdɀ2=͛8"{a:d\\ ח62IlЬ({ z>}fT 1 $ՃɁly]Ic]ږ,M%"|}0btQS}A,ɭ w ݾM'F) @HϏ !GXRy ccE4BgXE]lyYH1~ Ú{M.MZm}5ZrS4t{g4/];ۿ ܧ8 ?-Lv Q/S;VZߙ-KGdBR>B2G&]F{fˤuo)ɽGOm N_a[2 ^?ws//N AR_%CZkP̾IʷnߒNꪅ4ƥ~Q~4eߐbH߁`!o坡g.l \{7pG81=NIh' bSX{솈.}ߢ4:2Ϋ߫6nOH R:zq֛wߘ hz.Ep '|dѪS-kv"l]qXBЮ߄.š%V& -A4;0TVadǔ:+oGp7{lS8rCM=EuDz;S{Jo~mQcÅKO|+ݿј 3ꆿ6%{̋7R{u`l]CA:Dw#"ƨP;1ZB_=~2qR3M5l2h6"릃ldtCɦcSsj"`A:d0X!MzLUmTǶoe*nZջ^)xqǑ'm5lkGO'Yc5^Zgz΋o V2ӫNW*YtTmzIj䣪ɈFb+! .2i\H̺ OyPE0BERA&09@QKy4PHhD @8Fh.9({)K H()+>e$(IJ 5?dqK#KDs$O"I2НDAIa$( GzeҬM0E53?-SzR\!lz D#4q-]!%V-kҳo)45kHh6YkJ6$6{eZ ؃)HCn͵ q%J{磯&cO66t3cP'X叓hv=MO/\U`qby%H٬B-eBcvIMkˠS>TiT%:j[b q! 
M,cmk/! = 5#Cl>r&ؐMrЙNut;zfhgxk&J@ON0{5Vi1BP|` yQr4k+A]n &X/$ D5(] ӑE :׺1i.H"ݭ[=e+^yOPX:Y-*w䯛3G]Gd1-vKl. u~"qs}U^?߯??"tZ察PcΏ'w Jޭ"Zk=Ψҏ@HO>x~>lR/ximfi2ʞeP(stJ%FrT+ nhj, Cu~ MP~Orp3FP _JPd^rh A p uKm(o>Gm˞a0K+n زС"J櫯hdmj d0koh/J&np,phHkîߎQ`P渆! RFl*eT KI 1`pѴ5gN 3P` Fm v3 fJ W l`*Xc Y;d -NQ qvbanڰTΑe{1aI~,OLS `lˢ #dŰYR>R?oTq>''wΌjVk% 4%< QĢR֨hm> +Ur]|R,!0޶-*,rj.񡜍w(n%/001.!Jw/(S2˒33W.9p4Q35Us5Y38 ;scalapack-doc-1.5/html/slug/img385.gif0100644000056400000620000002373206336111251017163 0ustar pfrauenfstaffGIF89aY!,Y ڋ޼H* L> D!L*̦<ϪmRܮm de.[ /y|(qg%`ؖ) hSH$I9iFŨIS h*S Ctz; ˤj+l W3{c{k[-iŽڜ`Mr9J^M*j>^ޜ~Vʟ@6jW*}17%H]:h!őc$*We5/0'#.P#ˀ 3uDSKA[Ҕ2QkTtq]O25ajMWݹQI;U۽Ȃ0Ю'jە]X|+DJ"vd]sR ۠s#"0Zy}`s+]6Ḱ=K. z|O.:k]+zvÇ+ʺ豯-Ge/%T|9ȠXEYHQf q *HolH߃hEMހ#z =;2$P-a/f&ނ7IctQ7O=rډd+9sQ$=;XږJv)͗H\:&6USM&_YHޛp(O>jrinodlҕep@yvA&hW"IgjF5W`姒Yj0ASW5\yOV9/)?{ S tS˕]R"ȁQ61>jmDR(re/yA^-E( >AU; \A97 Eů7/Fs:*I('>/Ü+=YM BA|ZT :mkn`uGu-%d㭼 c!1]1&a(-T$ ,X&=68㠽ʘD6#&KVސxR<CZs/=.bSD1I5IT}r)I61{d%AinHo sBwc>ftj2رӉg\y&|42z~`lH&:b%$:1 FяjAi"%IHK:ԣ ]iQ {2JkZ!4P) uD-QԤ*>(IzB\ɜjTujCnOjՖvx۔'_!˘H̩e S8@nh55q>4zdw}S/5{2;LV q5!NR}^"Yc,"e|,/B늒R#nHVbmpm41Re˛pܪrc g귚G}wڇ*7Vl."MJ<ꪮV|k4b֑s_iLѧX˷beF}mL X 0ڄere;n0m/bhخ} bB:ⷽ|l)Aw_!jX1qh{^gCjTk3;eP I$EYd}AM<=?5;N?Y<.;K1>%ZY,YDs-M&,lƳAK^*M&3^]t.UМ?S箈g[ Sh ^o\z׼F;k4l):c{m^r.\*ESCvd ۸ [qg!wuHTcF H哠?ӑNkw u{nƷ[ֳP8J ;Gw" D'hrKeRjMJ,;ga0ov [op̚'ZU::e_{kˬrί<vmi̓ܳm$c?rѳ}_ پO._~[A|Oԯk_vw~S7?#7#q~|}UW?7"´6M'\F%iD3U+ y-g{6a8~<4>i@}GIKɔ''jH9 vw֕Vft3xPOyCݰCl wن-#HX=S>֕($IX9h@vYxMLY7]miSRiRuɎoZ|XT(Hua,aygdy6ݨh&q'_IÚ9cWBtbpQ'fm2Y[f8[+8bt)k$yUI`DŽŁ詅,)1K[oyQOi^ؖ5IJV'gD)A1Jrؙ w"@E`٘܉e 9w< cbdGq+{o⨖/Cu%F$!c4K&jZj@0 ^(q\[n6D]bA@Vg&8jfp\yut\Wk 䫡eo:wĘ7f*Ʌ$^ZG_wn cxVUh; vqX~bY XMy:dktWs8q +YTe4?uYiI3;T ;c8O[Z;:82zxbkꇵLjw^[/.,(xJDYDb>4;.(c y1txWJY҈$mӏfX1癩ʪЪaUyiq(ʼn"H_1{l f\Y۫fQlSV WưHųEs kmXW#y_\ucB.5Z۱ӆK… Mܩɮ K ;k%;.G|V<2 ɓщ^[kbe;3Wl!s%l9&P iʢh"ʉ̡1^žv눌[U_ˡ*¼Ͳ3ͫZAi 1ܬx̮̻󧃁y6ќ|)f:\t |i4kܰ<ə̹mܓ7%mɽjƴܷ}Y5Eцǭ6Y{|r<;=? 
BqӪV6-VIKmo,SMUm8*\] Vam<~\){:;o8V=L-t Ѹ>؜(꜎tjg%mrW 6Tۑ9Jz9fg&׍Ǚ{I)f%I}x}~0VzG=)ZA6J[ͳ}⦂}hK5] 9HJC@?-|)Wki̺&l>-KzzjEkMcHtڍz,kO9+dJ*J'Nwx Cwqm誴)dxn5A ˴7`;[&rwdW8qAqPdznJky轾&~(^X`+U»QN021Y2[)@6n~EWdYe  &IKҕzb_ L ޷զg]hol!_i(W 1/3%#:Kx:ᾼ\'';V̅n3n1BkHDhnnT0.ͱ\o] Vͭ,Hhت"{#yZ8"m|>ۃ:3}o87F4 rR$jBht7ަ54ӹNNmnK.PfJ12]Bo'^Sv?Zy(H̓*s&wtpA(LXFKAV1ۂ:FZf, 븳,V2P8\>GoK1M "wTOrB!ʡMdcMӚJWe.|>|IvJJ&lRȈVk +ҢVZKA6$]{}+yPyD Yv}eeO4Ms#Qkg.AXZc_νo ;ua)GAP%pV&mCVG)N$Tpn"@iAzq 2rK U'3*DYR@/a2t y Gx:z'\zYͬ.0)ULZS8L\Ɂ;̰ݳv!ٯ6R̸@ҽwWo_f@ _#Gh!</p$sfcʝ=,WgM7n_U[־hWWwr}xߥ'Wzy͡GUzuy?Ǿ]yʥ?|yѧW}{Ǘ?~}ȝ?d]P͡5CЯl% " C( (L Qt~"6hD`"i<]'rFH;J{&.K24 #hR樤E[e]J^MeH ~XYYYގ)9]M6٥VkLgQJg`-iOntߌ촳pם&XI Z?JaԐW5~{A Im/0莮;iǟ103Q3~rYF],mQG}u upv2{^wA^yCJ_@r^b~@^ŷ˟1g?n{]yÿi%~g3P+ (RySR F7_Y34)BU(;Mw A0!9'ůfbg5 PӟQ+ZK-kY_;Hu@Fa%c\̘6E4Qr\<)%m30x,"QEZSgհT4!GcXÔ5 9|[0I%DF!q^ ӛ!FjMFGr ǗFZˏ%@K2BSc0H2wDf2Lh-IM]fv-Թ%oxd'ܰGc`, Or>lQt473Z)ᤙ gACL+3[/P};*(Gm5ICHBc~*9Qs#12E7] ~b}lU{WVGU ن 7ZJIV2o}9<3Wa6W-˯{8.i/L^;r&Nb:~w8I%-~qݝ7ʫC[7Д+uqqZ][~59L=o\g׫ n T^4 1,5g:;K&ɌJSyO#uFӹh3S5?pWJc(}naNYACsqMPx +:7VG%o}0y^lm` )Xnp^+3Ǽ~`շ}ogN`OOp`?4lb5 z6z{z!yq/ίJEyg+.oh(' ܔN!ج'}򬛄cJjJ8w-LD:0DyJ6k M.ܨP R`)sOX\&% LԘ+_PNtPm_:]@A,5#DkxDN QڦX 㲔 @鎌Ȱi\0􅰢E8F-T, nucJ.\DQiŋAM 1ifE)՘ 榞Klbƨ>^0F3 Pk$  &mˡtphf圮+ n]Ȃ/ő.$l7hH*l쾄M)".n:16Pllj.*QLn|0kkz0&YjŔ4nО1)mnJPSj)t2X2+ΪS犩"=n)EG@.Y--O//_l,)[//+,I0Ӆ001//201 hR-:Ss0S22{l S.Gp6Vso4EE"#5/5]p([0 V r%,3Clf,ѥ\jjQ6FH23:*nMө78-J4&RHHqS6sdsم/sT@Q%OٶF)Q7)@oB<"B+#>%t@Ķvd#\( wU>6KFXs14ˋHDwmT4>HI2 jKK4TtLaL JMSD)MM3BRNQNFL70sOO#,3eMMs*KLMP6ՠf0bزJO]*5 B42%'WM  K'CJ`m m?űNG1KUot}UR)m X'PюXR+Wp,ո ê3`y5U@ODQ R8qj(0$?N S{\ "Ve^yYY#*8c" o- ÍL;R^`qAu$uM*Lk *ӔP3^G8G ofGrGBQ?eOc9jT6xqP16h]6*ҚHõBvKB钖6LVO`Khkl6mvmٶm#ioi66nvnn[+t%InIK^W]qȲh r&v3"ġqjps F[jkg)uryA7s{2VrDdv*_#]1b'l827fĶS7䌸%} YxCۤuEO{8؄:|AssuUN?XW-IzdBT 8~e|m.]^]Q-/?oHs!SVL W| xAgg {8pU]t9_ UtҠP# lwcw?nh/ӈ8z70I6z級nV(?oaKnݶ:zӣ;scalapack-doc-1.5/html/slug/img386.gif0100644000056400000620000001674706336071474017210 0ustar pfrauenfstaffGIF89a!,ڋ޼HJ L 
ș|JT\ЊVܮՂI1lڎݺ?l?'4(UؕhwFu)I$I ig:yPcj 9K[Xiy{4JJ4 C,\|L(k}+;H= 볔zْ~tiyyq}DaR I`qa|EN$Hq& YX6(#Ζ۽T&tGi:އ!ȅ>a%eMQOZcH>AVʕwCb]TcdUie-FJQ3fdj '(lMf}A%Z h9J*h0*f蠘Qij&PZ)G:)kkkyߥ|`@n&zl\ ^MnX) `[HyX%v30ag2K-rr pHUPvRXVi`b*lߞ<,ކVbĊp~Lmu1lY*+s~4' F*Dc?8J^ŝ2}l~f4G$m$ R:8WÕaN }GonͩEh hs7N~jhO.vsҲRFnuҋd_9U 8MoYNnlҔ o4M6wV _2|<>9f?eAVgme'~԰|p9A1א6zU~m|)NX s죚 $f*'-HPֵ_zDx"5'_̩!;F~F=όɜfGOsfS>!HJӣ)MJeʅoh&k9?M,e츙ǩՃmps$NUIؕ;S_xR򪲋&nso` aIPu[[%ff.Xwib w;g Ew Z4slLa8 dK(}زD̸p/YX(*.͎GЦ' 牓Uȅ堰rka{ mό(z=2kOڮsnX&-^vJ$n$71QmJ(J,kInޮ}_&<}VҢɄ^54*ax*L{XU4!f*I?%\Qج,|&VxEi2EPis}?t/ar| mck,x1yd3! r$%O`<~#[+')8w 1(fyu^/b9ֱb܊?aV74swfS^7'̯V!;&:3o:CyoLj"̻pf;hINջNX_66r-/Y L`PmImݒQVEY=-IXq[ZJ^p]:i\bzNsjQ҂{Rx/&n/Ai9[KgHUKsd3Cл\"g]{&<][#HW Юm j7uƹW߹cmB\fr֗`:6^C=0hSQu[WiާQV4%WAEk'dG-k 2Ex 7!Zg$f3g餁aVhڶXs{q)&o 0Ux asb"dF6{%dA !x"HU_OmI{Rw|vyr<#fZ+epW+Gy߇hGϷY1Dkv%omq}nWɥw^(F(x3aht%`(mQhNLx u(d~R],hDshc ca |kRq ֊sxh@ezgB8ksi8Fc&V-Fx[a劸m[$hVjk>m (\s~e\<؀&}d6aIxlo}gBf}w n{-Zr!Q@|?~|.iJ[S87ڪ:-re,hmpzj`F m4::ffdž'VH0`;Jgjc_^|:-[׃J\*{AēqE(Ժϕ2EfO.\H4mvyvI fڬ0X=*lFbvG ;vK殉u#װYq[z%g1 pKme8)A^Eg?f g۩S+;EDz՚[ g$&H}鵺8]scˀˉDJB3 d u`jE;UG]Z;1 Ѻ`" G*iljT}Jyy*牢!(%$NJ^ lkr› PpI_t@wnx[틾J>WyiKu)Tx2y~y ,(y  0# P lIAC- <24lS>@e5ZFG/ly1LN IZƶ\nv J*`=7.}\\fz/ArIv;>E>*Xl[8C͈7BM*缉@^`m)]Nݑ2;f!d@کg >,΄(ͽnĴT(U)F{Yp-]W܋I '$*ߧ+h [b9D!?݋*$|WkϨMv9Z5?]jb~$KC>+ 询XjL# e*O 3T X%~ScdrCPY17yP6>a@5[[z﷥nKI~7*LvU;FNIx0W]̘^cك֔J,L%LE V~oiwvyǒn}-]_/ϑݙ!OQ#LvՉoץ_lөү׊ |ŧ=O ^۬ËABOq,/Iśw OP-\AkAN\<;w#ׯzKÔEqlNచZ4N^DJH5/`a4FC[R:>LVjfmr`$)2v: $YASQLViT4+bsy#{'۲dT~XNӠo}Ѯ.\Fu>yݖae7bO,_w"ȥ)oaAuE5|=sRZDp,-`Xlȓ*p->NHУV.0B:&PBg)"XE2x#(lj8MLe׋֎.&Q'(!lM?Z]K)*!Df _zi0-hn}ڑyn77ÓMͷ/@٩:d<2Us9ĺH/7mkp̐f6}%"w.Hz֥_ +?}+bLB/ &S,>rC,֛O f񎺞)¢+ ,O4{l*ѤpmС(Еq0>lLYG*%K)/6(KRATѡE`L)V\/ȫ&l#6ytݢR"n*g4PID;5q>iB;sQ5-k& %K4UV$uV ku\!Wb/6h3c'nݐ%YV=YnaYiqYy矁Z衉.Y㥙.}Q6jkXaZkgwjiMwk~E[$zl,m:B @+T93TQOOl k|qqgTUk+ [AL_;>0.ӶR1(SNa*5bY2k]Ӑ@sAGT|lVL<F_28T_.!\2 LFE-5t~^IäWmtʴyRtII /Ra`5 aIdJ29 GɂL78dB#&ָE2lPs 7pva`92T<<:wϔ2{H j8P%52bf::kh×wS4)kRfsXq]R[[J5 %"mUbH fԸW>s8 
i29*k&lx$N*wғFΊvr\͡CZQ@&UfZ jM=fE[cꄺ˫[K:䓋U/rUn S L*<_uPqHT?vo%Ey*)k6; 0IL!ƗR-x#<^]~$4k.­y3vXB#H3BZ"qH;4U(;h .vQAM!N 632z/1e9icrtڵC 2{Aǝf0X!!G!Fޤ"@%-JXs|[rS\tK/-N\1kLcʤ2MN55oj>S^6ib9j; xs=iOJS* 7"hπ.)fJK>Άә :UHU4#io,Zp>H0ڄAN.Z\!8+2tzQBADeNXC }yH *hC WD/䶲GUix-?^]:KDfZ IZ6V+E.,bd(qVomb")Ui/gH)SñIt,4?Ա3`lhtZ{79ѰjE)\:КuM BR)`8-g?74. ̣Y%CGH!8OY_Bߠ_o?=2Ԫi[<}i,\Rq [i'on]JbU:5듿 9#_.e.}6rғfj;p`7)Yk%e:.0t֜mOn *õ+9bM :>Cz#~)O&^ݵWo F|+e48uR[/Oi 1h3sΜ3i]U#Ov~c<|2:;kq/ՃCegk*RygwGd~`viw9v7T"f*x~?`~v}VkЗ?G"rU`fDWm861w9g+F>:<}dIx #`H<9i~5t[{ {<ǂ{ZF'&R#k,6 xhwg=兌~hcWg[[(He>it,&gwdc"/'w'mGhÆH·B'H|v5Vw(#8 E2X.eQVSEw}tDx7\5$4cE\B`j!^BWkXcc Weȏ!X`75\wG0'Շ@h5hӄtwpjv0e'l I#耾c2ADsqwq)r* sNX ;崒39r.Gr.:Eg=ɒ9w0@<;Gn]CiU@T07)w32YG:ZY KM\ >Iipb)Vr]H=Gqygiw"ǖm3QrfHu5h9Sg.vSuW衆e+FGƏIVw2s|)?4mMGeY'V+ӨGVFktXGو rכٌ\8M1}}iHcRuiH8zX9xuɘOwyX%2,ө湈HFږxoDʆ}՞Al@ۈ窄|}Mljfl nL=[pyR)Ҋd`V'K40:”6A |h҉ɲXÝ! {UDAL lj0=μvf:ZJ+nAÑ ͆;Ǻ+Þ|&9ctۖk$h|$}K|ϲ+M;Ϝ;m,<< ];kЀ+ ] ` \=L ǻ&-#;Tt\9#n4h數kq@-EŒlɩ z쇆|V,,={fS4W}D_R}Ϫ̯;΋9y+}Vηʄ~͗]$8VLyܾv3>ӿ8}E 7{{M0=mO 9=$o'F2aI.t) fL8:_5"B-)H@LOMR1[V vHu]Z)vYu>;wA~YךXƄʤSŽֶ:0E.E9NRKI%Zghoqsu_X=G^mza!"~-W36b8y)Ѫ7EvѣduCƟg51琵W6q18b6MQc,|A_={xO?iY㕤B%ii֨u>!#e1z9!I#=:uIiROeM TR*4LHGI`pxk\dR TpaÍ8Uȯ+4 +X[p؟ƾ:xaҧQoNb@#{mBNV37o~<d_]5r熑>x9^|K՞k<l2JN $OS@NPC!""(4Q HћT@)42OI&R{5e]VlⳫeUT_eV7wѕUoO'ȝ 4 Y\u1rڲrC}4Vd]BQ1V[]u?m-u7mC8 6_j?᝕ZU]:ӕT5,a c[W4#}Y]00_8QKE3bY%꣖uWtQD8R$գh/V.,)]BmU м:i6#fQqƌh76)p&'gnC/f=l)xoV._ר`bnW,np!=Y;:Ǣ4}foVh-SQ] ,i:|fL1m/۪i=7>mƔrլ% t +Jܷ?PnT^O Zdc.q?9O$dBk#e&'=t~EHALy9*˓dIӠaKFFL´[sXJx")҈isT:y=5t Hb̥Uթ%)]:(~o1M7e#F;6X d(UIfe@ E|a Š_ͬlD,LkfI'h&ўAWRIk}neۉ&ζBz"Vuy[};69{Js-WN?ln-gmJJS&w]u1@)qEqڛanw(#xW 듁\\ekte{kc랋Ǎ¸MֆDŤsb Ur9K;2Edd٘Fg/ab>Z/f˪yAɌd7Ǘ~+9>fs+@V3%Cj79nt =fD+z 3n/0Gϓqo==Jc`lvL=0bk4sӢ4ǔ\S.R3}QS=V\RX1u ] 4ط)Xmv .]|]w&4vYo z~m=kR|]x34*RDSvl}EL~URϮƏ;>~Yjז( OFtsaWM~Z脥wN^b5f/9#V VD1+} N`V](uU)=)\m (-C@`o}rAcϽTW=DAS4uQ<}e8v5zW,A2T|y]jM,F KH~r/*ukozOxPeθ^ngdda[s]AK.d7T{pg/KO n˲O * /Df,]NL^dr.EdȎj.in&]JxCN b-B >Dv0O0'`0iʚFjJhP | neVZ m L&r wRP S7n LI 15 Qzf Q֭68.pm*o-j D0&j 
fτcjȑ Lp˫ǜ@3Nߤ؉?-8z{|!Κ}M$ Zk/bMF  ph$ӫ#x&r p)"`{| lj2'i'_̥}MlN P,-Gu.%op%J*QPH0udz*" OiVR6+!xczo֧j3rTr& ܎O~$?O6niyl1%0k-- G4a! #' MS+ =P0~b3YBfJ:&k؄/ ȱ,wJÓ51: ̉b2=S@G2)ǫM8m88o!(*651>/Q2Ih(6ma{.[<ߌų vx:ӧ> gܘ,&3+sSSl*CA|sBFQOjD38N"St^T fЇH+#G'xBSJdGشN-xNH,5q@$,q..H6($C(/-=O9sߴ+'O)51N.S 0$Ii85͐"qQUVwjV-UsuXuSemܘWaEY5ZWZuX \5[U[uUUWu5U5Y?4]}Y5VSU^V_U`+MI\TX&-Ǖ,<o14g b+L;V-m)$'_ks% 2(fGkf90RBF$UTǒG N")6s(VX69o!^DP8t6g+)73|eig+P")^`!jrh¼V2vjT,*>1<oS[mIxҢfm"XLozAg2n;Tdt>yȠ=8>Ke*@?5R+7ӠFxhgM3% 4ֵ~W5jh9?I]r'f}KzwFsdGQ5zmLźxߎ WnwVvy %v3{7ʒsh@N?RrH?~$5[uu`k`053WX\Y˕ [eG`kXY|[uoY +9yo_UY5Y9՜9\Um_yٙÙX^YzZLbb-26R9V-9nt $3V}n]gPØp;:|N!wNdUwpB489,zj8W7i[7aڍOdAI[zŬȸ-vutkHp2pG*z!4?Ǔ-ڗ>(IrrAő!I )/ǩ.xЗMrN)LyJLU3#6!P|XyA'넅>=fj;gns=κww5/Rw4qxc+;[U$G3og/x{%I,[#LЖ{m9{l ;e:EX 编5u&G3n}p7w0uVck)cqrۢx]٤H IKc4G;!3"RXƯ uTA3㛅4+ڃcqUYǙ+^YuΟZ Z\N\ͻW9ٝam\E9=A=E}I+%=-S׵[o҉9g})_= G4s-Ʈ {ZYًV=#|vJ:ÕJ9} [/gTLدuZ<Ӻu}[tda19}cqJt7|mr<+E[śL=ق8]zn1/4x*\XNErX#/<[Y+XS2NrƁ=J(%ڹTQQT!_Y}\lۢQi˪{Wi~tܓ~DyXQJJ[Xcf{i5*}Ӓ! w=qTձ}s{071~_} 5?& 1,vRIܟ~Ձ-b:kyls!ToF q?30>{GK\e=SO1vv6؝F>P-a3;4ȌgI闀ؤxIS޻T>~ۭ >.c.xn6#+QžeW|wcٺ/˳+4iai  1)ZnYE)+l!TƢ< m/dKꊹbM aUabO \L"`$Vc ff& 'h#)[ꫨ+km-dnboppّqh33tj35n6u7x\7o5s{`,nS}1w/AJ#R$?eԲܿ,dGz [Hɬd|,_.gʼnR聃Q%N1bV.(BP^e!gJpҵY*a9IƷR bwxM8r{f4%bd2έ)Om+/;L!ܛ$c7Z~.k_ҦkJ՞$fș cPV]c1潘*8s =vn띋We 44ٝCN<{g5'oT1xz@B[Wpݙ5l\#s:[}`_}p-gxțyϭ"M\j8zdz,/EY6ci9򣉁}ŗQNL;^hgl7%PjXRݗ'YWM4`AٓbFIfA g] )Idf~iRNI~jEBbc CDh&w iFDi O^%nHgve"G`*/ƚfG#x \Qzn|> (M]ޕcoiiUFΆ˄̄r:K+XKf,L,'ERE1̨7_Ë:X= ׸1_yLSW9)FK!;r8u4zPB] [Gҕom=LCv*}0}%]O8v߇;^k0=$wX>}zS孯 3;Y _91={ݼ.gn}ZI6)MTF3̊f޲u>G-gן}sq۸xҪiNaPx )F!mD9r,x k/} dD3R%DK fc^W\EP?9Ӭe"X3$Ұ].<{nQ`knH$|BA}vB^E&[a.v,H^2J@-ᐋ sJD2mSRRC?1Xԛ% *ױJEb0wc!$$F*qCTU/?Ra%,woQQ,$Md(LmEMNT*K V,&fDa +4u%=bSv[HӳQ g1S9BIPFdEIM0RZ, 8k=Vp Tt.v˺0$+{쟎F2]B?Hrfen=ML>5hCw3h4 PԣYz(1>,7<ΐŇFNgHN 5TJg8ˡ㟟"AUiJ LQy-^ y/jZ' 6wzׂ zc:6x]l1閼=l_1 LVϳ=)GU+:KzMmk][Zv#Ai?,me[frnqo ~6ehGmE*g(kؾ1 ´}{J)=Bl+Ϯhy8dK(gxAK;A7A#p1A]v ?@cg`bt4~u*߰7 >1G%rN]e/wy.uzs~f\ꋎ 2mj^@0[~"qfw>ݺ"@vC9N,܍;֚'?x=ƻiP]<]#xC޶J5t|u>3lO4/W 
v΁^ޒo֑crQ6y^(3?%h2R_ /6w?O&2ZĒ_!_mUőԗ![!xX$Z]L |4yٶE] PМLT И̅IQCQ Q $^ae! Va_JYJQ z\}EBj8%a,=aU.¡zZ"Z!L= F'ZaQm Uq`Rb",jW j+!<b af9.:"%cA+)(!UKqm~22IZ4&$*ņm]ȣ 8I$Ω c8"q"^O+T #rʡQ} "!HJ̕d}),d{d@\a]>jB=ڢd`:#n^M!]Q WQ]5cv&dFdN&eVe^&fffn&gf^~&hh&ii&jj&kk&lƦl&m֦m&nn&oo&pp'qt&r&r.'s6b'tFtN'uVu^'vfvn'wvw~'xx'yy'zt=QS)Y@Fl{&ZL~ {ʜOBv8,R~'PB'X[LIN"(֓ Z0^ EUjڄP[ib@S(AŽ T\ ?Z(!Jyյ %}(LDb(I˙alIh@ʧ jȉD"`EUGD> L~jH©i%H.R!E,hn|).Rb[$YN$h 2 X,ZZn ɍ9nߩ@t8bH{ FI*Vr#Dȷ5Y?!*.cS"뫶 j+u['߻>YHԵkN&̳_2+S gkƌ'juE#q,d?>,顅* ~F?UJbly)v\di",/+2j:$zcRHl짔+>1fZjCkfd6 [:kj>)9Zƒi6;rX]#!aܾ JWj>mNuՇy PVk>܋qmrPm*՞ƹJ.5bծ쇑inN->h$*WЅ^{*~xlUMsNjR/lZIJd*{RU5Q*aj)n/ܥ>jӦ/GTQVFtq%;2v*08I0'\@p`0nZSC/0 0 0 [MՃΐ:oY p #N;"^{(]+/O1Z@\ &(CQ/v*6!Ҩ*b:[jojp: { bnFXrb![1孹)4,S1>) rΠ֞vڪbqvEF O|! .7ݚFMNQ0*["ں2.议OR­H؎/C2+s oq*ֲJoP)9rois*0W<}%1>,L3AA4B'4 ׳B740CG>K4E&W^׭E@2`GhQH@? 9[|/n/zlpߓ:K)c4݃vC(M*@-n>QV5MrY6F?4*=hu$$:\8 S(b8Z&)q4LOMJِq>v7oYAujՁ-5Q&.maii+n',1M0*y$Ć/F8jq{$6l>ovz*<3FiiwﰉwD .2^3 ._5f 62lXy׵Nƾyk~aft22E沮Â|/}7jGzc1"xߚ׃αn*F,'+9+3+bG#}l0`wlj3-n1~lF>FH2OP!`J6擐x_lq39爍[wk&_˭{-O~43sؒ3n2Zuk&]xA)Ljj̺T8F3dokEVpww}wKx\.zv3.oR}W۹wN&nRSViQ_ vxIs#yo01oncjm{djpM#&8vIq6';6o{&k)ɣq;d/#_?j-1C/xWt|H/ KǼ<׼<Frx,T.`/Csþگ+1t[k&s,'8478&7{#LLjo/;m^O^74@4QNZYoCq$?-U[ucyks}{CbxD&KfFSjzf[ncr|X5C})yN D3+tP`l9 q,`쳐<ܜSKccLTl%} .9.^[P.*u~>vtc]㋥N==.>$7GG۪6nno("xM"s5fCOV'^h3h ^e>1R]=,KnγMd$D 4DYFh;Pu-UWoejuXN]E֑ZQ6nZ:kWY€k1Ŧڲ zhk GLHnSJF[l$3rZE1iNU=QKD?sww=ICgȲ^ت/:QN tJ >A7<c8, Sk02PNIqÍAy3K"~y ĸϔY{*4z:~Ht"jNjCC0v;n离n;oo xxh_1$4+m?5<>X3kW.:PCI _pgyKVQ#%fZTVApR5" Bvຢs)8C+X`[ Q&a8ư-B,VJ,eRŞ*]Vl XVK؄%IPQ"# MfD 1@!BXx[3|7Ӻa99j +8 ;YDb8'NQ8E!iߓMJL&9)c'2eb%fx^/F׳ag"+y?@S4MAfޜ 9NrD49˹Nvӝg<9OzNg>yO}ӟ?:U&ud n:N3<4]DuX\#I.P/Q8UP:ӚDgJ=.4;\$A'htūJ")ə{V8ou|@bR j6: tGd)Xݡ\zxk-֮jiMDcAӴL^jl[zG7f&,Sa/$u/ge 9@2: -r].zp2._- &ˑo"0?)S{j K 1Y\ikaKY"DNI<oEs^AI](4K!]- kVk 5)QTAV"FiX͟XUv}$屬Bp+\%mS{=?zЅ>tGGzҕt73ȹb~fBehy)1{zkN3!q^b>5&Ԝe{C l1g l޾㖴%ػWVBo0|dٕ~ntfJ^iFA4iХ/O.X'iP=:Bg0|G*Z8s:S'۽/qٌͧ,vS=TØ? 
K!F.75+=qu$S^Dڎ6>'>:r Ԡ 1rѯ*K@+Lܦ 8Yz%2yK8.*<# cJS.k&[03"3@i?+L:,l 0˛13>.0D89:;<=>93@b@(B`8F+6B*ID,@ErAE|"үªKpF:⨀ Hr2Eq1hA:rcӇPCREb4Y,lzĵZ0=E:!t ʴ7N(.)'.]c[X5Y+*>;&?k,@S D-GJB/2BkL^٠~kAFqKI5<5" Ȍ#|R5k$1*@p Ma3?9k{K+ʔ*bE* /#5pM5]*CRSD"<"ܓS0ks\0°_cG(b#=̠β. ,O]6,Ĵ7i4I<+BcD-\z"|"7B$5iy.8PɅ#IQ"R$1*7RQt$9hҐz"<9EBA6SS7S@ȡ@A%B5CEDUERFwԞ+Ɗ BLF?}+Tjp;t|RTفGGUSU9uǚʻ9XÅ t$R]"T^£4nΫTD#Z354bD!XR@0nC7X+S\!4ƴ%K‡ )O쭣Y[Ev:5ڥ ,5DzM1j̬(917秕d4v3fxRxU*ڑX* +;K[k{P;scalapack-doc-1.5/html/slug/img38.gif0100644000056400000620000000017406336063601017076 0ustar pfrauenfstaffGIF89a%!,%S 퍒9ً*Uz.M]5fCeল~SDBYP &1Xg65D*K}B N];scalapack-doc-1.5/html/slug/img390.gif0100644000056400000620000005032706336074321017165 0ustar pfrauenfstaffGIF89a]!,]ڋ޼H扦ʶ RC/ Ģ1;*ɥJԪE:d [rk 񟷎ϻzu'8HX5gxs(`aH5c ֣֙ɸ٧)[ Vi 4̋z|Le K $M䫦 8W\Z>-]={.u^n_e fjOJix khIC Œ6JLqq=I,˙>ު(˝)w"y@Uɏ"EtdJɹc2z}NbF=Y)u+$Mk_*EZ.9fYW]^]qH( \fJINs_YǑE5fvZn.9h*<{2g)Zz~ߵQ x W ^\I!C,Xm)| O"L1H[eS!l9fdŒ,"h[n/==2sh,eWoE bW` t>" _D̮pg6ڲ*wIpbC6ju?Oa[x]r* 7FGx\K8M\5sc]p:_ǙSvCgwcR]+G0 ~3쇳Z|kc;I66zh.K{|gZwYPc"=/E%N{~[}\u?V}rS80C3t1,a 0J'&9Q9ښ PVT+\%;Tf";'ʐ-Q9V&6[jsw8!bdlQGuO_sȇ`$>%E mUԂ!'xc!HuJ0/ƩLJS³B=eK%7ajq"$JFђ0) }9L^JK0%-#a}e.$ì0SK\Ǵ~Klf4zͲ$6iQ9WLhs|s? Ѐ 4dg9NvT Uh=NkN EB9Nr>4UƄfhI`gLԉ@*W2gZ͂"O K4i,\Nj:yDyNG}`Z'PGҔVTZ ~#gH?Hv,HZZ|xW^5b kD&eVJ,n:73֚#oe&?izkVS5Ώ.tYVҬ _&dk(1 Z8oIz}#YFILϯ4|Emt*TLlA>*k7VWV2k Hhbb[<"lrdar٪NzR,;#cZqbG)פ-su˺[J ]c1t[[\Im`$,$}[4eȺq`7M qza|1u1+vpwUU=|], Ǭ2+9%V8αtz ' 'n bXD+SU$Wz7^dy !ʲ8ϕ39Y nF+-drZIY(`}ߢA#orÅtCy5FXb5BϒL%SQFiVz&hӬMך>Dɝvr77.Pw^M;9gJ<rMo/q\GML\OxSm^Uo5-LζY+}xN :5QƈZ% |PN?, Rx7Tl*f:X-]Ҋ5>z[ gc8`[,Hi ͱR'rB67~]~qiSb+x\H^\z?EfORG!e7Ut\77'HlH? Sw}!+G H`2;txVh6shVelŇ@8F?ӧC8d"WurA5z~Z;/h@QTl]u#RUEz}h7M?7X'wx_fuvng3U׋aFi9}#v&"wj[w1v7c'd|+Uf귑ff8J5e(G~ĩ!^ܸ"x9CF|֡tuU+hhK@`}'FDg+8lk*uVei H*C 22ʕI/4eXǑ4T )1Fi%v.`biN#x\Y An,HhH8:_77yi{8z). 
7K S`fz<[jթa7x"ȩUEק>ĪI3 bKɡDT JJ]\Zh\vɗ,Z9㪎)h;~4h|zFh9ʒĆ+%h+ן:dƅd*R{Sk"SځŇq#wxVE#Ɂk S)_[F׍+%1 FFt>H/3 W^ԗp9V;;?j\۵wYd;j`zAn;b[XqpprW}X;]VOnƞMZ~c?nBe+s,øƣ7T&ۏ5›VHgv̼_.P}y ,Nd9ɾvJYάδ.٦mr.}5Ҏ9NՆ K~&Ϻ>(}쮞_= >q|Jn.lI~qޞxyp^Ms[-n.e<'luf܋(u[JBy8K'fc^K/ɞ9qjƌDvc Zj=^e}/=S (zݢRLhS'y{j-13_|XQX^[:ɋq/楴Ԟ 9"Lc׷xtIsDW#vߘԽhצ;3f:/<J9ZZeZ$mJQ_һ[jjVX27ɎKf[eɊ&UZҦ*uЪĮjt\V_Um%)6Y:fςŚjo;rvQśWo_{u;.fWČ&Xr岖}l_͝76tiӧQVukׯaǖ=vm۷qֽwo߿kcZx{ʼny9[$>]%W'^qRGIvO] o"89&z ?~Yw;żc8Ɵ*DU& Yd@r`(kcO^p"t/L0mtD*s<2GJ)E%GD E}QNAG`(U K1Í DcȨ4 Yw'( Ib0D/e0"0iHޓ)31bs?~T4L]Q8uS4(\ lOQF,) PV}VhM5`lt aϾ%)*`#l_l}UB/gVYg5I4w\EvAl7YۥTStt'8OH]Al=BoنÎ&^f=Mq!x6SCsߕ\IUj)ŗκkC)Q[be&RXJq%wUp^ԹڤtXAv`x|"eДQ J ՘xFʴo:ITpn{PX5uڃm_`5\6ΠNuOr7R>uvA/5>}]0O҈8BV?;Nui0v 4]ttF9OE祅tbƔ>ze2ji zzcsj|!-Ly?}o~tYǰ%3[S=M| X-pcp@ ^&} l_>Ax$`L 2-!RPys! [HAh! WĄE"HFabLKi؞Zb ]94(ib ȘF,<YfRx*|." X[U&75j|kM̨pSj:ulQ։Jw=FG qX\$Y ̀)kCbxX-)Bpq.ibX4G[cX#f=a$/q{lQ$F0<#8^񏁌eYx$2'.q`Udns(zͲ+tN6J Vˍ0C6UhZ#*Dמּ\S\b@NE k Zq3/c3ZU4|H݉bH*2Ö9L#Av0[YmfK;bFy}ug̜ n^tGP&Ð/o t,sH+aFF}^7&̭GPknEpNxxt_:q8Q{ O36> NmY'k-39 %q=r8WuFxCZhe5I%"쵍޾K] jjq\}ݭV%% pyfۮ%~}zKEU'oGqV~W;Ҽ 3cSw~,jO[t]w(ύ؟qpw?o:7uǣ1QP 9C]sk4YӛG^˷/>:$ff 4or&oGhʂOt Z*ɯiVb 5Mrh͂@M5 ,̒iΐ Elͮ Ϻ>P P2,n+ A 8 0p0pi01,άD~0 в$y  0;ȤpɨPpKH2nwаۦA@@HsTνL  MN t)F͏jpNmbQsrP-#wY0WdHfcY(⪭ , 'I)JQRPp  M{^&i 1qISxz s$PHfs婀 >'Ӯ~ZrLwn q   yjZr.H1qn&&rn.*3Q 2&$6 vle%)'e;O2՞Dc.P,/oCe&#h p)X\!cII373Rj2Ĥz' ڏ2kv .R.ti\G3=sl23Q008p,5U'5%-+d066!n%O!Ef~ns/m1'O˜'y^=/x$_(xg4S p~m8k?2> EAc~DYrlF7eB9 Ͳ T0 Bk * H*"KJ0ΰQSK# *G}HtTKNtMTN*QM0 SPBpNwH9GT4TUTLR C fPS";5QQ PtQSS`G9$/$'5Tan4Fo(G4{j.KKrz- jȊFV4UL& 8n^5&K%L-Y3 aNSp4jz5[=[OR@.StzԘ:͕,SH;҉$vr3{esd[Q7MTuV}*;riM2z3 i_ɓ1r rOvnvҀ/djb]3ôdBW $"J^JɖhW{zrUW}7*Ar΢QRTa/v7%u1G7JJnG%ێhnW:xvo&vsvT[*ZK>Nt3Hnx8/[RKjZOgڪWUs j$YWoG%0dg-Wpnd3f82YϔMT/r`Mfᗃ{MpRՄOaLsQS3ՅY~WL7LeR+,pkXTUTUPQuxTwIU͔58Nш~U9zh8{NYXxx-̇8ؓQ3Ík W7c!҆UzvuuP31u[jՑIU}R4ْoTz$acuJ϶8v\ksk]tVDcs`Xo{YI4f֬Jkjv5!si0X{Y=kyY? 
yvsyYurJ}2ym}1um8R ڎa5dn㶤S%k"ɸ829~6" 6ygA2U(\exzƝzZ%ZՒZ1'R1AWBmˑ6|eZOl%r_ۜ ,rYw=AA]Q``=IV!ۘ-0GY7HmyW;v9 :V9f4\/y`#uj|;m8]_=tJeqhf֌wq;AV[ TS ۻk;۽;כۛ؊XqW<| |u;xS;83\Tx;K< :hhGkOb=iU6۷1[^I{!I%gv\; ^1|yG:6jT<]OW{L`\yghy{ķ>rlxޅΓ:%H#\Ze|[}&Wi99y{q({6q2 <&? וw3Z՛]'t2Zʕ<9ZxJ0zdz=KSvk˝?c:-㬴RǑ7& <]Yڹ5= 3[D>exIEѷe Tҭ}OS3TZzOC7) J+M)QÙWxAO']-9-Av :s_jZ8;egX \˞?č_C4Wo1社_Qnğ[Yȯ\ >?RP'Z߻"dZZɮV7mu*02MFi":/jnJגeEw%fޕFjUߓQTՕYb$嘘e&VÙ#BdǓÚ&k&'_hᤨdn\kq_-C0c-tjjsԴl (7TWZyj"%a/Pq1~$jMakr#j\˵c'j8` 9V1?ziLʜƂDyD4p\1F%YQ7TY*˗ Nj&4>GggB};弈uFsJ_Ԫ0mցlN XES}Ԓ迋tì2\/bpUJVo&С*?^qVzV͑/3N@hK|e+=6t\L[R%2sW?*pC0| 䄹z7y$Ig xyz̎jY}'mwxLGTbWe"H`p!r"j8Ȉ䦜&G4| ε!.:S[5rhN:ƈN29$F6 IG Qvd+TRd^7bY(%ShdNfB(v&n&ijl?[}'޹$lA=f装6y(Zz r 馚nho^:*詝*x(ֹ|عY TqFlrҚ$)AiKb=(P(F,z!ɖf/^0a D;諅Z>2?X0 6D$oy{(j[D9 l} ^-ZN8fS^$~!{TW: ,!gzUГ DQH"5ё!˨O*yOt*ԗħ6ԹM#K1@ tMCEgD4ȐJRw+'XW8vDJ\tK ա Ƣ"YӢ(`9]*8v"|ШI 1TRI3~$l' {M VLyh=̺ ՈD_==^k(=ggնVo )ٴt&#,3;Lb+E50ml:F]Ea߸kjG-n4 ][qXElJVVvo4%ΖG e^1f<=cqUMNJZy)[S沶2'C蒛J&o\^rVҬXiޅNel\+/KwN3W0[? 5((u-:Hww` '7(L}lZ.!O'Ke3ėUq[cZrnXfT&%,giBw^&]JGҐL,Cr!4'/!azN;OrjIRbZ4 k=2g֨4/iiFzש&(la3;UvϳWZնvmH;Rs4JaIϻr3o٬j8[ b~k3=zz_=Aoyg W ה~]|9 `;ܘQEeߎy]\'5m`م U܄_pj ` ʏ؉ꘝJ),Y3 `ݝZũg\!5q12Ae~uaa0̋i]ơƠM L卑(e`#2b#_&".-")⯱#j\'R@Z1(bYaQZ):)ƒ$~"(&+n⭍](b͢*:-0^!$2.#363>#4F4N#5V5^#6f6n#7v7~#88#9^-#::#;;#<ƣ<#=֣=#>>#??#@@$AA$B&B.$C6C>eDN$EVE^2BFn$GvG~$HH$II$JJ$KK$LƤLd- \^ UIP2ܤ` /%,^ FlK 4uPRN_GJYOT~WzkpRUDX!TR W@&>%!!&\iiQ ]H^%-]םeQQTaI)fgTA4f\=eS S\&%(E^G)K ȉmG`1qӕ4rBg5NVg~˔qZ{n 'uZr{ynfLG~'jHP]MGgX h{!1Fat5O5a|E(mBXGhE΃_h޹EM]h~ eI~Jr~Ra&Na&WwŹυ Vo<_z<%cD` ÈY!2])mh0qEatbJbd*E)mVEOV֨!*nV)~_ }6zFV6M]Mu*^aq}@ N)_BW`eKZf֡ޗOlGfO*rbe))~~TzŅ=yj\fATX%΂^^w&#`Il5g"uJ,B9#y2cOjfv2 *} qx֧S',,Άz͎,t uZ!mΪ,t-~S"mZJ-d/ !팠mm-ۚ--mz<- j-..&mm R-2S*.jMۦdVLyjޞEGɜL"=.nŒ2qRoj&"^nuӒfgnj0.~od+6jjh+(JoIz /kFnԖg艹V+m!j&p]m6p-B*oPme0rگ&,bnj0jkn ( [+ǵfjkoo ϮJ.FqKL0:o_'R12-P0'oF6q#ݪ1'^±J11  !sd!";/#kaU.Frv1QR"U&K1l!Y#u-(+pVlb1*NmVG &o$Z %%ř0frg>kj0L,7%.1~Z0%.31 8.Mke1C3^zs3O3^Pyf&/1es4?38E0:sAKWژ%+;'3n23цA1] ўq$t,{Isi1Go:%m[*F{%tyJ1mδ IlL0s&5{윕,My(1+8{t 
-nHtzⰾ1PU5lF,po]*-rElEo {F.,]#d5x Ա56zo'2 wXQ&&7wt]z:CUL]wk5RjDNƁ20)[@WZ6ujt70Ah{JAuJ"ök3lԽvsM.C~sﶿo.cóRW\ncwug/5cj*B\Rwh_#w[0~y@` XJV7ɳΰnxŵŠyG6=GjfSk_js0;[8pƆ0b?9u}ewxK8v`OxwNsDh=0OC.oM{Ls1yyòW0R9s mkJ9Sv[uy>k/GzR.G[{zS4kBsZ3Ia6tI#ҭQ)_a{Z'kMq;'/;7?;Gy/eA.U$۹/{Y:jn2v Π#;P8: 7ڻI~J5ps;:W#yy+crnX5w9i vJIwLe-/ Yʑ a:[j 61YmE/KS^2Z匚yjM5얰`h Л`iŶeTѕ9ilgڳHVaJsJYQnJToE:3bc55A5Yo7(ٞъeLkGi&\n$>{vj6:PF~:v!ӻ\|pj'nU>oN ltjѦL}>+,7 K6,s2ro|40>@p wB(}JoM7>,+*$~yjE1"t8/}Gt"{z0HyڟA5NXRݬ476QqDnnSեi-xŒڲ$ՐYC (jg:Vs8EEQWI騸E.vы_c8F2ьsոF6+mc-,юwv2+i3 fQ)H@L#(7*R6\d W09d/2U,os%9UF2fD*9^& %2BT4aeBU$G]~IRR'$Nr|*Y]rf\4e) {c`&6tfFjZ3 Xmӗt'>ws1T&oNk4Plj*l嬶ir,QpM )*J1^Y[Jw4I}dNЇZ(Qcaخ# x>nT}]xG ~ҏ@ĩ8c *DӐsHK*)"NXeri==E1*JNP izb){RU@bMEAφ٦UK/ tq_YQ(۞OF~`{ϪT]k=%N+MeɜŵkiXdIt*]A~{9pнmX t]^P/ԃhv(Pass;*ˈ>vr\ il{D/^i"jU. G$NpZam\:3! ^in5/]߸(t\j`P8uo1JfeơuӞO S$˖R)×4OqfJ_:pkDi gC=4h>r4ZOm}Kf7Q}/UԫbaMQг"!qk^׿v=lbFvMR7fſ Hqƶfgjh@_0.gآ?c4f"&=oi{>ٷe oq)1&мhP5khؤI$/i*=tPMir4ylcd.=]=WqR8&Vq=+<#VЋΨ3mֺ~߯o>?¡X?/]@ٷ+4 =˭ d 97c?x'74.ESisH6Ó`tS5b@$٪'"Ac'1pS;SkZ'YA!'y/ƃAjAoj6 Bjc,!\MЛʡ,; t#,h 44:3KZEbGc4S\8ٛ36Q*Er=,AK˥){ B*C;;;Ή<#[82yJy%;)+3C20 բ17,q42:Kʢ6󰏋.h,t| LE;Z1q=K px Z t;j"S™D8.!|/ +II.^M-C=;k d'% 3DʖD<Ɍ;#K2Isʙ۱.D˜i)g.s,&ܮ>74 全+dJv/*;C :ƻ"ú"1&'Ôʼt*F\: " A ;30˨42S3*H"8{D Mݓ,E|,\dC[)QGLDNC86ㅬSƽ8Fbk74OosCjWPTEeu P!&4' ,šP]А#CvEÑ,F$ O DQ*#A -}K4\GEG8ҢζaNٴ!E$)IE39LCEkܹ*&üB-RU;$3 /UDJ &ySѭKa8S S4Li4zg$ɎIPK F% ßى˸J#=NqU BR[RlW33F q!dP,^P/BVkUj]Q$SlVqzpr;scalapack-doc-1.5/html/slug/img391.gif0100644000056400000620000000014706336065726017172 0ustar pfrauenfstaffGIF89a!,>xNzt:K%;Ct"h>h2E4ӝkR"Y̦ JԪP;scalapack-doc-1.5/html/slug/img392.gif0100644000056400000620000002733206336110370017162 0ustar pfrauenfstaffGIF89aI!,Iڋ޼H扦K L繫 Ģ;*ʦ 1Ԫ3n * ;웺 ˲Y=}HH(U'ŀsP趨(HY9'r4HiY) J49 jjW&{y{;۲K<:U| ,L lL{+-}- ޙ'^9 &_ov*k_<8/̛{~EI`AQKYc2rQAbrd&uH? 
tGy^+f&n JlM԰H$}~]RDancu}GlӛsUrvK+25\Y)t`͔pCxX*DHM7DzDbF>S>}^#tXaۨ8M*q,4+dR&Ti\2bYW-kno"O5*IM:O< WM 85xiGiףK(HJ6ldns{QՎԟҥ.}Is]XKJ#t@.#M[oB,+|z6%&f언#Fv'⯁O^STa;޶PstDg^t']v0$պ)i%,DvWB`:$ťr 5F=Ư v(clޗ,o(|TlplLR[6+]/+yk/7MΈc#;T2pw݃F@QA%HS"-TUHOu7vEtW" ~,K4$ERhS3V=Fc.5I(kY^=5JcrېaM=u^ CғM6P+FחNxm[36YB\l##3d˶Ҟ-u\;b͙^לisT!S@_.LvbG 89qj OϝW-1Ï;+HXϰɀmo+4XOJ{kM%InCmpv{=I|ѯ='uzEųŘJۯi/#9-wz ]YRe^ pb#9BXy.R'}FAcb6ik}w`Pufc&X~#(w؂c+/s^ł3og-gSlѶo`Z(dq/uA`.0XoH t}C~Ƅ+(P7ca`5?GC@`8zM]b6dL0E;8W3c#"1]: UqH'2Ţe]23fav5gm~MgIF9~O|#e=.pa!xUZEfuKUI`fVlC. 8G/j˨iq2wd&|b08׉ snjʼn^l7#(#h~^w1h(gJws(Q;nsrv14AnHx|upEahin>(fXXVGdo9E^jHMs˵{؇QabnfD*,Eqx*bH_BX"Gwg<d&FFhlqqXgӘo"j,+>]TTN}i68H Hy]s{8SNDFtbs4+LEjL~LD}RAP2uu$xH~?g=7K7hV8]jvw^sNUyA)+d8|wIf\Xx)S#\wgÉVtҊg}yЎiCwwǟg&zڍ ꞅŸzTڏ :ԉ$("wd(jr0jv s,Z-%5zvj&kZ'~ZUY@$Kȋ TDUyH+b&9%J5O5v#VÖ]mEVw(9\g)jʣ3.Yr@o)ŧ'Tg(EBb8Vd sGt=!<(Y2e%rxkF㙂aȐ)%6#X5watjHM:HZh TwpD*䈉H|!i c8~C+s8X-v(^zXk>T:.mIfʋ(zʑhzة%(׮v&pglF$pxZ+S׃oU~k֟2Qʰ;Hi{(ISug5֗Wv( d0h6rimY>sv~PibH;BbVzyUʎex>Imqp ƕ7Lꒄy?ǭpǖr1q=9)ڃZ~왵8 dB9X; ZvkphrmYs鲞È8[r*v뭅B;Me4טZK۪[dLT?&tH^TX Z_+8nK ;'VV.zy״\6P6 ' ڂg)Jy[x? y3W!z_w vcWAr3Ҡ9xd8l_57OuL?|iЗ~yw+"<m6 u:ԭ ¯ЖYW5cZLl >"x⹹kBkEI!>Fv aȇiL3V 3͵R^MZwɆs (H|Y C$ *>O>b7SՙN_ ҫ$:l)D 2n PJ灞$W6̬׸Zߚ=RsztCn\KɢvJG^zyK~Xo*i DLxMKϫe: i,]YGhmۯV)<KK~坭l/o[k/nf5+Ucom1c] '8`a=hm_%rHc턨ܒܭ^l_ODoyxxG6n.ݒ>KQCR+nL @qKn,DXڎۍu:L':2$g >NhE"QeM͆_K-9z?g"&{zd<v 7'$].NhKX. 
TX<.bs!gsvKJkAu{ym@1i#e9 Ii5ŭӫS;˽#cy_mC' ~۰*.'$h*⧊\jJO9"5KوC-A$aƗ7a~S,?s‘ f&M;4h)P}x:]JԂCVڰLnZ5+|e?EQ>(йBڎ7j]#=;15AiU0lxu.LxisӪ%h.邉bCeD7Nz=sCCeL ¢l[QS3 F,DLQX D tTRfȌ/J({61Ac4f)SH9<$T;W+r5.41b 7A{H)P>D'P1]oW4V&4is[\r7#;}U`YWh58YhSCK`=,+*)SXqk589Iil EtZ9!x٥Shob{Ud#uK +@qR#8uݘ&]ux]NN]ב.{hVa=RJvvGqtaDx+E9DL( GZ շܐҤw\zH5aڰr(PX4A&8măZޕ%q>]{r0żQ`Y69zo‡Gn`Z^ן;WP%_ŏ]7?ecY\΋wZmTIE 5gSNgOTsDž`$(V8ql ${p;\` iu{ꗟ N5or +R&E=K]h2}Q PrVY~iEts-PhDļ)Hc Lh'Ŋs6(wStK?%r(cDU+c6$!;lb21 ɓ?&R0k$#!EF}d IHKjF_ܤ~>YIG1%M9ToL*X&iGa_Y޸O"/UަqSU G%G%YL (3UZ0֧I ;f63kj:܅YtK\JmM*äHQ=Jb?kNBeջ a?'U七g4]Cdxv/rp5Rn}RJFhɳz(%\ W(="RSH})<X8f PMQ@ &*_&GXWēqѦRGe"S)SشM9l&$xLTOR޺g-lŠg4^U@qd?+ ) S6]۱~nRwMKMN Z{Ʒh}VRЦQ(ؙe{#[:3+xv=,x[IӉ"Yƻ]\ )!gh>::dNZG[Sk_5W^@;8t$޷VHSZ qaW&۶ה|#u3ty_溺#_dVPiuL:c`,O8cf.K6b%LTYV2mO!ڵq,2qD|'PCE=J^50Az:j,YbsٓE6(M:ƶl>rymnX2G.e۔w|m;S-۳-%Ku.8tH{G0'#+8f̸Lx'lPguҏ[,tuq)#Z2e;FLA|^n/sA}XHwN+*LщuSإ:g]uK[n=m=/M+ j[1-xOLxbrӫcR|VvKF\hZGͿO;hQh {޶31^}N#I vA֝_/KȳzmeO"]^gNх|9jwIy=jGߕ iޱ%:WrL R暄XBl^ {Yk^ϕzAdLblgЦ">vWf:ǹMb/n$ʨ̱&Uoe,, | |$/ip% ;Me pܴxIP৲ooEB5Tmr( 1p޴OU" QKkQ';B/p Oq4U1Y]q2"W1i N{QpQ|p^s[1P(m,K;ʖkȍF1n$1͢ +BkXNtoƜ&Q0@NZOxN/DJ-r# wq2v1!%"'F *D %銅!cUpDm,aOUئg]r$YRʿL:-p|Aeeppԩ=P pEl.!+np*erd $lܯq Qe ز" o/O AŜ 0ct̶p_p~'b2]Bfg)i+IV)OlR0.IY+`6P*njD-8+MYK0dV+N"*3{jJliD9#n%ŭ(6оJ:g8 &rY,S ׍%IDu0}x A)#@e_YJ-o."Z}5Z,a-,GG02%`Sn]?a4 b_z 6h,woh<gAe۔gI "*DZ?*unؑZEPɅp~&֚*n&9(Oh%RR/`<\/g.]Zd=)-FKYωBv;gE,d²6k鵿Jf ͚< " odGsbX"] Me1:3maUϐd Ko:lsl7vŞm#'1/e٤Yfĺ|X]w"o0Ž,Ib"4]d%=(U׻zCy.Sj&A0t3F9Ii:vlg0o@G.Yگ 567oVRz{:Wiid#mU㚳:,yL]7$뀯D͘Um[:7\IQM[S3:zTI{MN[ۻ;Q[[[{N}ۼ<o[^T{+ܿߛWڹWAI|WU;[?(0x!qxz"eց bOxUBp[YTJؕ0BznRi|%DeGc̅ךl ~7\<-v6g{%w9l{c#6ЛgWrVhtdf :2ը\$֐Ut[MV>hѼ|[ =9_=wWr6QKGp;WuEAO VbϙZX6DYΏo6g=эmW R+2W-i?HWw kE%n_=NwQw{~2́C]:]mGMߏ95}/eMceO_I2|']CV%+v{~euӎ?83; [+>z)=^#(T4Kk X58\}ɿ5yeV)9=ԆNSi~p̥~ggyY`%+)T-և|R8u p^x%G~j|rSl#5Б_~|FlK(-@ _5zBW#*.]5]c:?a3/sI_v>}?@gG0;lAtň7X fe$.9˘V>"C5FyK&5@u-fB=$bӧLݖZk;N4jTB[MYvm÷:M J;:Ş͓@[UڕVU¦V%,yaܱ8%MF<`ӪCX{+\KK%kҫ zwߢ6NmYR7Ve:tmS//|r쮿xO=?y;jۤ;}`PftzSnޝNAW$R!!\(O|̀ 
jxوъb΋PX؍?&!$-v$I*$M:$QJ9%UZy%Yj%]z%a9&UP;scalapack-doc-1.5/html/slug/img393.gif0100644000056400000620000000025506336104767017173 0ustar pfrauenfstaffGIF89a0!,0Т:Kfߺ<"V#rzއlKw,w3伏ADZr *,+ąlT d0l H=eb9WW*PF-ű4W w)9IYiy Y;scalapack-doc-1.5/html/slug/img394.gif0100644000056400000620000000011206336105320017146 0ustar pfrauenfstaffGIF89a!,!aXBlyLaH q# L?;scalapack-doc-1.5/html/slug/img395.gif0100644000056400000620000000011306336105361017155 0ustar pfrauenfstaffGIF89a!,"aXBlM,8"$}j:Lq;scalapack-doc-1.5/html/slug/img396.gif0100644000056400000620000005734106336111312017166 0ustar pfrauenfstaffGIF89a!,ڋ޼H扦ʶ.Lp Ģq;*ɥ3JԪk6 dݷl){̎K;~no ^ GIB}w+' A$f)b)qqb-%[$7&ic t2fx%$ӦƛVp^5/DsD&ŝBEKQ?F (f`ҭIדXUa @EBfJBrR契sEzY6ga Jhj/Y4w8z|y)bR F PZ'jz%]Hb;@%c*aq ]v Z+wM-bĬ $Zk1PUT|U x f$_ۉj߮[Yf,r_ҹQ`tʋؽkGh6W+q\b iřII^6PѻuqLs,WDyG}ɛ2gxTK׼HbY m`o 6y[$ҰXd{0tn7 _,}Aoe}KT^YxUFKl>!뚰Xb({qá-y}ۮuN0U>mMy#;xsta|l+A!êf̻/}jo|IG⧞<䷾l>8V/͛>lb$? u t|@9/+_(v? Ov㟟 a-]Ԓ=N\AwsXz2.atX?u*AXˀ;Os8;,cnh@"3B݁KB.x^Y"ʛxK)aY O̊ŬOvMF˝UX wm;6!WM,w5CN .Y5α/` ݊Wi-{}xC`t )VșҔxͥL_}ce,z8nN4־wJ#s8f^skkwe⤩~[.z`g3E{Ů4#KJw0[lmjv| Ȗ%ɽ!᮪K~Tnu-d1+fMzyۤd>}3\ဢen[Q^?oz3< +r')ǸK<+WJr)NՏP׉vU|k$vG9=LpR+U.gR츪GIb/[Lz%+i8@+^/2u~Vw{qw/L-\Fhq.a`^۴pCvH*4j5_|kz;-~7t_kMW=g\YiPG:w٩,D-IN W_u/{!e[V]'x|fe CFk}5w]UbJ?wHGHo3{|?g/83huM.v~DW Y2VbG^ɷY *{v-{7yv}f&a褂6K-6t<Uwm'VDW3!exhgtHVjCUg(|H |\xyf2==W[GRn7ҧ>6_ƆG?q}|Zzoa?v8Ck{cd7-D|$ YkAB889[Yf`R@x'g=e h~Te."yw8aVd\kxzʰ|WACb/',S8u8Vߗ-uI|+Hh[(6\hv>Ht;hOF>u*VO0Ŋx00@)VJW~&k[XiXmHG( Wngl,q.'`))#6q0Yq8yq74i[$;ysf*+OVs47p3*)sP-b#8Q-הa`Rٕ? 
94!},2koRjy]cU{cb15<XVcɉ*=Pk-Tuoj-\R@pN\O)ww Ewuo}E_^MT-[~HV XsXJE%NbNh$8]X䌫m8eQGn9q5YlSfhCvYG:AW' Ez)]v>cB)6EQ-Q4Yx"PMҨ!L_r'ZDR<]PѠ;L|g1|qXIpYʶthݽ9w#Nh 'DD}`Ipes]$S\ ]24i.5pz<;yWMJ{UD;δQuxKucב5Dԯw_g6NQ2>Ե:+)' {b3qҚ<u L2@mЎ%iy/ P{V@w,uM%D N8яx5L]#'=UpA-*4CL#Xʍ)5Jmjg:^&Y|Hi 3˴?4UajCr] !27fd5ĩ̗ڜTH:t3bmm:oTeڙOe3g6^zΚ!^%(' :zSʦP'E/ -j5D=Zя.4%i]y: e0ĩPϜ [7}f$<җЕSJ;96-wUq"rMRAJ ifޖZPO Q쩊sFBq=([hX )j${i&:kBƯD_%Ib2dK7jM652{XJBoAcj@ VB1)NM_LCcV,>dZn P4ޖ= tyQrв]U{Fִ~X{ 3q;Lroqڊ1%(*VpM,nz&F4paA{_.vwTߕiTcWx6PuWVA]= $sf».WqsrǷ!$%1Z>YMt]Vhs LF6w*usS:(b KeFLtӢ&F[iPs4KO-RCxns@*O2l\a@}k`puu}l{"Y#c6MlX nq6ѝnuvoyϛo}>wp7p/ w!qO1qoArs%7Qrp-wasϜ5qs=ρt^RYqS?D%e*H=;dW:xew)K5nku5 ǧJoYLJuI3n嵝Y9YC{evf fj҉[KB6̳AҠW7_Io*lnTw%rHJOtUP;bfȅj+ZB I0$PPZKzcwOZ+@C/cVlt+popqpp&<@HW魌ȳ^ӏSܐAWn 1@ސΤ)† ;0'qPϪ,l{Q0uqLqQ kж~IGp ,0./Z y ?EG06|ӎ q 1%χ $2 GH!!R#Q & K| :Rn)nBI& I%)ڱ*o,#2$'OļO0Q+r ) Caž% Ӱs|13 m2{3ώ07#ۅ342mF44Q35Us5Y5]5a36A3Ȕl 2es79 X7hzs89f~f9@7sm32 po3[F@.ƜQ1?/x?,4L0CpC9粱mB cl?Pt2FGE+ a4FsnƎ*(ۏ.?%*2,fΓАK :3O2tL5L14MH݉4M4NtNU̴NtP4OO jhV'4%dK@1S `|4Ӕ"Ř(֮.Wd2&!꘭1U Ml2VoVuUQV FPi,dAhoxϺhL3*vV:,$٥V5Z4fuYKFoHL7pl"RjT)Su K,B]T-/USJ$_ҳ=M J^/Ur 0^(,+a_bdE-ԋU 3#{TO#@i2G$wVB$G3Q #pu4+o2#jװGc1qeͤRpFhFs\"3Xh[,S1B'&"vVRyos)t qjSY K?++ov\Eo,PB/Wt V/LpcuZmQE}"%Ws*?G&hϏnIrs1Dj$ mGb%q1z0yssgW%N^k۰Iu 7u6&p>CHg "lKpMu!RRzuK/wi|y$u-^5&z7R&CP~גO+׀<}M;r.0 ?6(s)I*JtER,r-#_Qs/_4a%t}xJUdw5(ҼP8cMjTdx^J=ӯw*qS[[\ 83ӌEƍqYMa8A]U05`x P 9y!6mNJ:\#OYPuV79e:ssUQYWӠ%/93IPٳ,'jH<6t+U=)}.>A%V OGIv%tG,ynqFxvwByVCz?40O#7 ";i_t9fł)xg?DHOEڡu~oJ_X<.MQeZX}I-%[:2qڦ};scalapack-doc-1.5/html/slug/img413.gif0100644000056400000620000000040706336060523017152 0ustar pfrauenfstaffGIF89a{!,{ތc+޼3e [ lI2|bHK؍9NC9)¦MbҨAgงƵ^GՌ9N^n~/?O_o/_;scalapack-doc-1.5/html/slug/img414.gif0100644000056400000620000000021006336060574017151 0ustar pfrauenfstaffGIF89a-!,-_U :YxyX拮*):1 ;?:8zˮ B5' n;scalapack-doc-1.5/html/slug/img415.gif0100644000056400000620000000021206336061263017150 0ustar pfrauenfstaffGIF89a&!,&ay lZ`αrOm~mB.z ۝8Gl"o$S;M[-Vcˬ98 g ꭯ 
^;scalapack-doc-1.5/html/slug/img416.gif0100644000056400000620000000016406336061371017157 0ustar pfrauenfstaffGIF89a !, K ÊJ4emh#9bjIZ4v͒\̜jܮ 2;scalapack-doc-1.5/html/slug/img417.gif0100644000056400000620000000013406336061504017153 0ustar pfrauenfstaffGIF89a!,3y{tN.I!(5㹄m2[{\T:BĄu~;scalapack-doc-1.5/html/slug/img418.gif0100644000056400000620000002057506336070331017165 0ustar pfrauenfstaffGIF89a!,ڋ޼H扦ʶ oL+q~¢L*̦ JԪ#>.֪Nk͆Bjay gP0g0Ѡx0HS#q9ƙZHy*x`  [[8Is˛ *Kyc)܋3J\+|ؘ[M ijl]Ml ^ݰ{//D?N ? r@s'/aYRq‰--c$"8Sococŕl5'QHBD%˝h0Fw:.o! UOQjSE&řUέF痁A)JƢH$-عmyϣDmnI7h@u)QeSHd2;U}gQ{]-<9 ʷLI.m 4YѴlVt;۰,g۩~,;,2yx#[~t{ݚJ-ņ>_cYi[$R$V{%NX sӂ"N5fWv"".Rd1/ވcXtBIdX>^t$X@TdN>Ō'ْAeZWxߖbIf`rf*iCn grIgvމBu|3.r G(>*R)]vLɢRV8i=d],B)RfםnJ#CCuUF#Ӱ&Z޺ ZTrۭf~nUO ,~܀2<7%-^'p;iEo=L12KkŨf\2&n|pkZq'6*{)0 k3lLlz7{{)M4ɊF9qm_a|L ق+cu [-KMo slmLs]_}s:O)( ʹquuݷVy]݈ӝ* _dɞ䱒Ő]pH d$˱ĕc,/KZp\e0cRli^4 C5M Bap)%@ۍ{J|>5cMm2 :i)[7OYTGD{gmӟ{'7Cp'W2h1jN¥?Ќ6CbRfZӌr MN]B4(P*QδI5JSM#9=MQEtc>8P*/Bܧ L:@ I`RFXƊIH-kk:VijlA%AEV*`՝53P%Ӌ~G[(VєX^Csl]_ZY.OTu R iJzuO[ FjVJ}PmJV;Fe+ܲzm rSٿp `)[FkUUۋ\IMmV.d5z2}]xn5r'R6&"KMQz«Tx]߁;[zUyK_+ͅ.؜Qvk)}Ea5nڎMcڏ b+%'C;i9/HvM]1yq5]ޗlU nۛ D%+D*ϋ=c]u|XidNznt 6iD ӝ:Pԥ~֟3JYf]ȋ${-J^wOm'kR"{n')M~}JjWۦkl[܄vͭmh{.lk˛w%gn添ߝn[ݶI+| oT3-w-oO\Q#F]PO~z<y LUbU00Wj@G3ա1Cak59Ȁzru槒Ơ_\V:n;cv=4G9l|*S[+5_aUZ,>~?@\x̱p*Aպ^%5m:][K/}]rp'HXzL:f}ߦz3ީO~޳=ylE{2bx\"Aj%Q=涯:΃8`s+~WdAv6T~ǗxD9w?[v77WWr82E)䂀յzB~ |6}Ȁ sX ca~W}vwy1HmWbWMfRgj7w.'bEt/Q;&(lCd9M7"7r28n4zV' +~H8XQvixS'4V|V[6v|gyo[V h|8ɒ:yՃ.|)W'5$9/}yJv _zԓ ag)z&|x; q: F=^)U V5{9aI7.YaŜa!fKy𕗾Gc-LsE~Y)(eǗ{44I #߉=E՞38tdsi7֟ØdRxih4:D-}%zxVR(h$zXxZP9dHP+al(7Չ:qtIU i.Ib-Jڙr_Kt|Lwj #L|tǶ1Z;.L9J kiڜ2Z hYLɐя "S3*5̃ T!{z@}.$_bYJ-ujW( 6.D^X:s`͡)*oG,OyX?})Oo7 F[{OnOzo9}툿,-?A FS܁/fϭLߡ+߸_OM+_O?/_/Oo/ZɏݿׯG3DOv?PR*7W^"Nq PςP2Ul,0LF1KM=Eq!cnX;|TXt ɤ\5Q锈bc m.K\.ig蜵bs6<ˏֶ*,+/33t4)::AC&Ӭ6RJ>a dWUidg}%fLuz}W1e vE^7E-9k CFQO#?nr{ww޾u4^x*>F}xwr9-yrhNvX9qή}ծKS3='ܪU̦{?{S*#^3Գsr(mK/>#PNO*(#m6t&N@~XP--ӑ;1< x1:CFJ'NT#,RYi`rB0 ?!h 
yb3|SN+E42LD6QP64pd(Z:WbO&h*s_Niw6:eWd?zf-&NꛝN+[XyET,C:tS;W]upBHoO/p$;s']DfʣYv|;ȸ FR%?=[^}W=}&]MHkp">њKU\iâO*䓃YXEUC;"Q Mm;y+njPeDťqdZcf U߫tɈّSrOKMk-fu) =ީճ ږ5d7͚Y@X*#>k-g T \5-gp ۗ[-1 og(G49a7xc/ݏd!E6򑑜d%/Mve)OU򕱜e-9]f1e6ќf5mvg9ϙug=}hAЅ6hE/эv!iIOҕ1iMoӝAjDoԥ6Qj.խvakYϚֵqk]׽la69]ڂK[oUhϥ#"eŷN$bTx]>7͘-'LnihIj"vVD=owyfVPpFy~k1&ˆo󇾤AƢ>pؒT_8?λnx`q'->V2Cf،V٢lkab( 7g;ވ jg\>'.&sozck Nz ɌXxU͉sDz~TM%~ρB4+u)D|`5(݃{:X)pbfaUTMMó;}S ~heKΩOsN7u|_! /C|ZtzT'.)/;ʕ, :Ŏ)I:O2<_6/*赱yX~*_u"(=jz(koBOoɀLntC Lfp~ ZD;i 惞o`.?p`L!<G//k{Oj.o \pFO D0EHv% KuI ͤ.roL mRjp@%vhnumpw֎ Qx-n P԰p.n -@.X> [1TqK^b]f!MM9d 0|M*;scalapack-doc-1.5/html/slug/img419.gif0100644000056400000620000000033606336063012017155 0ustar pfrauenfstaffGIF89aW!,W}4 Lu "騔,EU Oӽ#^[6{z&ѢKԍZbܫ+lRcX/&Jlu,;'xRHXG(wthgŠQ))xz*gۂZj'{';#+l5\|l -[;scalapack-doc-1.5/html/slug/img41.gif0100644000056400000620000000356406336072343017101 0ustar pfrauenfstaffGIF89aF!,Fy{}&扦jFm7nY2š L*cEU ʬ(Ki=ڲI>Qk->41p} G'8AAȶG(9`8GH)J':ZVjuvƺ Rc+"z˛d( X|5sk rM&}ؕ^5m6~o^B n_/`za0pJA 'XrIߑF%M47T\l5&7[hmH qmd[ ɚ!:qXQ2%Qi)#9%,F7@W2eޥPdD^oY*F坃)l`4ihY蟇v՛W:T>74՚}Jy*ꗂItJ"=*֪묡j>VcAǩp؎xl's&dhZ ߔ䖻a3خhfnKY[靸j6_y90p^KuG#6 nG+R!;%+䭼6ܠã̲<ӼFGz1c;n*ipaTWͲC֯RcʯjՄ޺?t7rfk?~kS~>}J_ y74$~?)g6P~7ĵn| Ë"7͌sV89 tCke?%, 䮦-/ lᲤ 4d ,w3~ؔ04NAF NPC 1BJ y=+&{TT1ϊh$5oX7ǽ?w_G8я-FrF?z" 22c!5H>Je % 1B'Dl4)FTBdVJɝQdfiIz#.juTz//"ZR]*藚d{pIỉ|!yI.թ (SV <'0,9-?C~3IorϠSbB׉G~~;]DJF4,F {~E'&RVRDQ4jm򂝓&~&J^}KS(H&sut1+% fNTUͩKԫ*FX]?}%k[`S [wiW`ц|k`Ҧ֒1Ȏr JV+8Sy6\kԚ²=wv ~6RɪuFK˝q>|SD/յ{||B^XVY@RHHx "!{^U yZ~܅ ~2"Ab+"h b}'x5:V؎9hBH?yđH.iPF2PNG\%\eZne^~ fbfэRRÙhÊ5fq)Mf)TR^xN8h`c^ٱ–iӷ.+XضZb*RFf [e,k h/lds(5jsĄ#P :< :Uhq\Qz `q +%BDJ.Po8kfndHgR1zD-.MyШ 3=Ŋ7۬ϲ]']5RRomkU/=Xzeީk7n6ȶi"o"\Λs~ *tg|6C-+ul"ʒ=ut\|{-t3\ؐ{ӻH1|ߝN]p[mܭx^w;pz1G׾4eD#З76H1{X W2eӯ&a.d3u2/UZRS!GT[:1>+(Mw2瓳wC!2OOK]Քb7M3ۨE#f{jd@@Zra#ADCr؀7C?2 $v'KKLP$STI=O¤ITV(UJW$*|9DY ~}68!mXbha MǺ !R iղaK$U;!NYX ;s2U3;\.e.z`MV}/R)sKq(/dRR:R*2SsW"eIv#O5ьMIfL?:oRN:yr@!D-@Q\R VAV޺Gm3k~ kA:o&laJS)$bA4Y$e*,9{@ڬL+ZEVCQjEYP6elQR^,1Gk]1WAEzar bo(M~7DƨQŖCޤϛIJBs=ׁ5 
=7+UoT7{#-IK zo^zq¹]h fOrTA$tI2Be/D)8hg,K$;gB.8 :U&ٗO7BJ(\,? (]FYfoőydxWkЃۇޠˋEee\jyp4nkw4h^^ 5!j3S6Blln+љcvfޥ(G~c*\OG+i77G}~6&65Oe/0ܞʩu_Wאdu[C£wm/}9~߷m 3i"կ'6o1ٞR|=\QaħOan~SqUA#FkEm.77 I8-gzŁwU^ODŽh| [fb7gpGsIdptyj{V$~7fps7sZbrWHxa\eA2yc%Ec\xX}WuWd&3M2l4j hg{wuq&NxvT;nTX]x~Yhye)q>mAj'| DŽ s#m345|hl8؉^Fh!X]8Rxi狿X[8‡큌xx؈˘͸Ȍ%¨}OJ?"V8J{0^e |uU6e|^2gFV~ozSUs Vd t%Q%zZohz{~zvm1˄3djbm.m%F6}gSGwAS@b?OX^>6# Y,=VIiӓDV*,dephuu5T.]$dǕo FP69O X*eD~bOhc֡9DBlwFo n.Ym<6y^M&Em\~s7 xw΄gGv9"oGT &8qb(h hDOafe8c"b֔țr2g) A,Jaё(mU*A5=܉שY)#9caE.8h]i~gb*q ]b:CB&@(UR6ws)W{9fOPH9 *fPd{rhOD34VNDq️g [nIuxVV.4dyiԂQDbDw#DHW zx깤mڝ#gJx"jTS)\ `uʍw:GIɧ= uV$__SsYZ荌%|9!8߸hxษZvꪪ ڪzηi(Ԓ׬wו1lga3\9ģ"Y<;F) aʐVŖx(eĮ1S耥 rĉI)l0yp+he< g"S⺤ɖQל *$RܺL9iyt|ۄ:Q?i3):xږ)їW6i.3h xvp7놳6c!b˨UKwH`i XW3JHJiWKoFK6= ٩">;vX+@(VW,;.J49I?DʙQ9ppx h9gOl!GVI-siT3&h);Εf[ԥ$Je{sfɩY U븶y}:r=4i ؋X+%G<~asTwÃ7ҺY,[܈ZƷʪׇZ!Bmƹ*&ƦaxLJ8lǤ׾orpF9WKz ŻOn_\KO )K\Q 7F$;4xKXk,?(ogʻ\W„%.̕zAa̔*ˌ ;$:E5ٳLżlJٱ۔06"<Z,ڲz̚k>.9˭;܅Д!tYPu_ ,C|ƛ]-'`^}yHRmlʱk E [C>&˭uL::۶ӷ&uBcۇs't^[o)1ѹjSųLT|拹&:HAM5Cm,ZQ*_Ԣݵ{"\V8|}x#DG-ͻy?Y!L:0͖,WLɂ@0ڻXM\}BӋIOװ4yۃtNrLp}-q 5^ 7 :U-w/Ȱ]{IK|@{ΰg, 張F&o "ឲڔAIj Vykm0zwcLU4x>m.3MvB誅j2 >ܱ:M(gZ+Ryӻj y~Z}vr[$mFY-볎znjc~+sŤQm~Q%cK[ dxB|Gy ,)cK4 H  (nū3\/7:nG[=Lk,*ϧ4?&!/#O%o'%;scalapack-doc-1.5/html/slug/img426.gif0100644000056400000620000000031706336102602017151 0ustar pfrauenfstaffGIF89a^!,^7 ynx%ʆ]Z:r,\m9&ʍ\JgjԌήSfd8cږ\u|b[9ȼ&Cז6$(y'Hf'6 H +;K[k{ ,<\[;scalapack-doc-1.5/html/slug/img427.gif0100644000056400000620000000030706336102674017162 0ustar pfrauenfstaffGIF89a^!,^Z6ExВ ;%,^oN.dEpCbA3sj:kPfI.1b=VLM;2#ńGFXAvh4x9Y)$Fz +;K[k{calapack-doc-1.5/html/slug/img428.gif0100644000056400000620000000014706336060022017153 0ustar pfrauenfstaffGIF89a!,> }预.j3Vo! 
!Pj}#L`R 4f̦ JԪuY;scalapack-doc-1.5/html/slug/img429.gif0100644000056400000620000000006606336057765017200 0ustar pfrauenfstaffGIF89a!, plXzVB;scalapack-doc-1.5/html/slug/img42.gif0100644000056400000620000000010006336061737017066 0ustar pfrauenfstaffGIF89a !, y~`3oF[HfR;scalapack-doc-1.5/html/slug/img430.gif0100644000056400000620000000036706336061165017161 0ustar pfrauenfstaffGIF89a^!,^Ό83ڋspH-کM(#CTLp\ F)yX YIEycR2l,.,s ݹjt=fEGFVȣHFWf!&8ˆIǙ&W$Ywz%ŵ7d[u1ዢ̫`̛{"4|\m} .>N^n~\;scalapack-doc-1.5/html/slug/img431.gif0100644000056400000620000000101006336057672017154 0ustar pfrauenfstaffGIF89ax!,xPݕqZ2fPS wK$3dUQ8&LRODMIP{2-U#49s?ʿ 56v#D23Hwxa 7' u$2цƩ3Wi4 I; ,KY9jȺVk-zȈ|껌: .*\ilغ~kKe(ɋEOvPĦ9ȏ௃}#pjDAx CV"zۦ> |cMy*4IҸc+ MsvKLi,efL[0=cSYOAy/f4PmeTY<ҁuęk5o:YFԦt0cv lOz3QpEUXs` x^eL>Ȳjy6S2{%GsՖǮm= &h pe|N'd_ɼwY+;scalapack-doc-1.5/html/slug/img432.gif0100644000056400000620000000011606336061466017157 0ustar pfrauenfstaffGIF89a !, %Lpjk|ZbNF9NJmLs;scalapack-doc-1.5/html/slug/img433.gif0100644000056400000620000000010006336062011017134 0ustar pfrauenfstaffGIF89a !,  랜kֱvH&Y;scalapack-doc-1.5/html/slug/img434.gif0100644000056400000620000000030606336062046017155 0ustar pfrauenfstaffGIF89aA!,AcslĢwMm&Uz' 5dj*{Ub(DKfm.0*NFmt㭛=_zkЫI'uCFev5ǣQ8y)a)Zz +;K[k{Y;scalapack-doc-1.5/html/slug/img435.gif0100644000056400000620000000017606336062346017166 0ustar pfrauenfstaffGIF89a$!,$Uk zRG'd f}yv*ھ}dt^~r8nG*†&~1#O6ܡN ;scalapack-doc-1.5/html/slug/img436.gif0100644000056400000620000000072306336057711017165 0ustar pfrauenfstaffGIF89a`!,`Z qH.Vʶj,m>8~"B3 ȤkiqJIYVk#ݣyįïNc$z}hfw(WaH(9EhX7shGIYJ)% U:8Ձ [ɪEhU;4phT{BK/4X1=y u\ _DB +G񡡫:]:~3֦a&I2Ae2dN|F ͑<dRr8)*WLY'NNn5{L!}zS)UUAF\שc=Jbٲ%Fӷ|6å ]ÏEi}.%VO;6ʢGسjɫmbN;scalapack-doc-1.5/html/slug/img437.gif0100644000056400000620000000103106336057730017160 0ustar pfrauenfstaffGIF89a!,`.PVy;AwiidmҽLעe9,ώPՄ^82m?r5!̪zaz)-WUAu錍)%gi`$'Cw4Wx1DhٴՄYxxضFS;(991:R2 e HkڧZ b9 = {́]Y*Z.9INn;HzT?;n*Ǻy \V:{Ȥ}E(_{ά],'O۟^I$LرFq!AA o`k_BQ',mR˚1z0Hiiө)sViQ* 
Щ_HJ2ZA2n8@Ƣ+kk=1#~Vڙ+_A]GN]O1Ĥ2ݼ ^(6ҴbGf8NS*ݥv:S*ݨ^1 k.1?,rgr/">cO}>=g|P;scalapack-doc-1.5/html/slug/img438.gif0100644000056400000620000001203306336107560017162 0ustar pfrauenfstaffGIF89a!, ڋ޼H扦h L oL*" JjܮwgNVy ﹝]'cxׇF0ȣ(`Ht8'isXxY ZsZxZ3˗ۂiZs{<,(C,,KJն lkXL$N۪#ݸ o |n<N_3=ep@X9o!PPr}53KIV\O  <{b&M6mqªbMW MXpgRhoT0 hW2Y*R}P%Wi.Wz;۰RnW^ΧŎKZR "+mnϻC͕re*oj`>m(_`a}Mj[Xf\TvyY:fdN1p3zh"i4e¯sg<4Y Ž㵨rI٩ZPQ'aW{*qɬd2RC ;WZc bZ{VNlY>!ؚ)繠Vtuu.RK)Xy2Up_IrT(o _5viz`ಥk1;hu'))0r3T^2G[pC//]^kT]˛][rcؼ%e{vCgn3!md'Bqx 8|"wS8M,$߉?9fDjDT$Gə R|sکζ谿.T;O|3G ?}OO}_}o(/)?ǟH$Ã3Hmse@o=S\]Pq%YY4Y*X`oxjUj/CP-me%6D9MuK&UUՀƣܜp ? I";?r*$=16ԓFUƋX_lH_Pa@/_6Q&4 >8M`xpn"B9"PcKM=y!iDqdƒ\mgz\y2免1;DL(!KF9 &i04aqEs07RʜM@FHKc@L("N?|Y9Hhr{Lga>:4OD* @JRxNO< ʧ- CRE-Ck7,\?YAƱ/iK,r&q,e\T9!OMRHKw-kF,mus岴Lّu!"1Ix%c\Y%ҐM$Wʟl+̲j/>0خ1qr+5+FQÏ;&2:g]IYFJ;P +άeȦ|i6!D4_j ZTGeBnSB"_}h[sN4e>Y~:1Mv0,v:0] Eiέغ[]4em)Kyi9Y'`kW\A{\Lvr]r _t`0<("xF+<c8݅7r=sXx]Eb-alcy%1qc@q˻^ KkP&Vd% aK1.<69oc\iܤkVJq&2ݙo2Ys;OÚn]\=>oDPpmbgFm~2_(%k0%yIc2=>*)SBȊAqaDP =wiHZ@6*}:07oN+FSXű !Dت6pk<9Mͷ4TPm5^*jg)hyLK8[pvo54l._]I&|4,Djnig@jp`.4'Nj⿑}f1l֯FGNb3~>Z3Nj:_׏ƕµ&q6-tb:䨂V݅11bͬ8ꢞsn6]F\[ZϤ;>fw۶b>p]GVbᓆeWR_'^9gyAԆnrWg#XqgD|usXĒgXp&X{wjEy-6X˘RXTdkYHu$)^sLNYv.Jg=N66]4"A5fsuZWeUb1QE71f_hBizߧ|WFv#Q[級:;)d9Gx$:)O9):TxhIəv)l9Q p97W#Gu$9eYxلyLb]䗊8%J=vOTvŔ} s o9lCIkfPFUR&ّ ;♟ Lf( 4DG6w j8y (F9iXmq(Ơ Zgpf)w}Wxy&ڧJ:8 <'Kkɰ tK)A1?yk}&Tb"Fd{%,[/T.'bNdNʍoұ<۲'#+c|3Gc.ч֙e9BMִ`heHVEddxŴh*TJhI}x*^Z_,8Nqಸ%I4KWqgIx@5[>[UَJitRO˺kI+x r%cVnKؖQ~`j) Zj8뺆Np֙8}%S`ꥪB@2_^?]ۘ +wQɼ TҔ^Vjm  'JG+TXلP=˻Q̸'p3oɽUֺ¢X5@[¾+kghjDZQwRn4Z$qɊdA~v-տ.QHtnuc;FŘ%~`=\elƚur\ ػsIz}u!H.i@{n1[%*WY*y~ʫ5{,2|{ƅ'jW\LJyMWIUOxU|:ZkۄSΌ1U  > 9NqD[Ou,,0i<;L;T`;[ Y ^I#q\iS{y~Xam4=C[ wn`l7VN-&&bmҿ:]='_4kWD!KmyA;N]aՉ\R=l\ [}cݽe gmfJfHjuB_Y~CsJ ֫&Pj+dgJ&5g̦xyة.*hBuAP xjӦ*9lM@-VzVY JS$Zz kw(k= v]́>q h?JWWɌއ:C?ܫF LWN:,$LIϵ~(WIlcAMqjr>Jh5-;j;scalapack-doc-1.5/html/slug/img439.gif0100644000056400000620000000036306336064407017170 0ustar pfrauenfstaffGIF89a_!,_ʌ˽k ޼☁RfjA+; 't]iǍdAˤ!\K_X\љʨ 
4STC#h/ug$WgHHhcVx&I)g5ö98'wj7JzxS7)qW+CN^n~];scalapack-doc-1.5/html/slug/img43.gif0100644000056400000620000000020506336073330017065 0ustar pfrauenfstaffGIF89a2!,2\Ո[ޮɉ*Z!qKiJ'q*eE|^)ﶄEʬzkN o;scalapack-doc-1.5/html/slug/img440.gif0100644000056400000620000000045106336064550017155 0ustar pfrauenfstaffGIF89a!,pl4@@s>j~+ϭ+%`!O8,d"x)9@NꊑͅțT xƆ"ql2{\\A"a15ԇ4֐fTGxhtEI) y("Iv;W*ȋ's gK+[CF;Q,s -KZ؝}$}å1c>_ꊴ ;scalapack-doc-1.5/html/slug/img443.gif0100644000056400000620000000032206336064673017163 0ustar pfrauenfstaffGIF89aO!,ODg>heO}ΤLd6bRr9T8ӵL lԀ[,QeB^w †,B_n:el~VHGH6qwAf0ig3 2Y: [k{ ,heO}Mc6rzrJJ\ݶVZB*/i@Hb+rZP-ȋL#R.q/iJaH|nޯswoه1W=egSf$fT&%dyQ(15 ʩT6*k{ ,N^n~/R;scalapack-doc-1.5/html/slug/img447.gif0100644000056400000620000000021706336065457017173 0ustar pfrauenfstaffGIF89a&!,&fDv˫"{Jmaq3H%<5eP9 ҦVpd:9Z9ӂ%٭(8hQ;scalapack-doc-1.5/html/slug/img448.gif0100644000056400000620000000703706336110075017166 0ustar pfrauenfstaffGIF89a%q!,%qڋ޼Hbʶ Lh YL*lJ"ڮ =ᲙkN} g;0ϻwg(ȨR%ؘG8'FbY&I6Z HJS['x:[+[5,yƛlw],WT` ̔Z98^/,mmX߮}}/t(œrA +xfskLA[MEe zt_@4qUxO:#~&Қ:NԊzy%4겛,t`@U1 eO?2ihČ[fGwж}kLQ2u[6ٗꝼJ0<8Gj=qbMchXU;Ft»gaJ+DBDtuwb.%%k»h#] sSm. ҫ"n\vRԷgg}exn^Hٻ<os4]BH`` .wgHh~ vezXƈ^5ܵJݠ6&b8&QdI'=ب^\"6.8U"HTX IY*ZBBn>jfƝY$EUNuf$r#86eKR cai&yȢY "0)EYChja egmC uDr9DlcvqKhGMDcɆesE`꒕1y!eF`I5]NK^PY7~v$+Գ5+K+&-A̺Y.--뭱#"XA0[,UM&AWlEf=9("F7(. _zM8_s= (o|JvpW)00ϋ7WˮŝVBS? H=7oVX@G>]Rx&,fK-/z5nLn R;?g+,C x#uj8XGՎ74I#s5ZZS"+zm%ţ79#_:Ě}i+؁|J`-aG*b05;o&>o./ / }ACd/"V70uhX;O}$?NnJ[J 6$>Ѓbaqw?J2|>Bfja 7CfEκOCސCHȼ ^&Q Z'#8VZ%` 9hW(\LĨF&Mِ\aD0vpoE)=`Pe>hH*x_T"]eUPb sE!+ZZ Ȼ d*9yi:y-dR 6JnuҌET1k)H>CNw^)flDO\m_ݾE {3Guj~V#kS^&=H.JHzKi/뢬H6|\RYɧot*N)¦V"˷X*8z-C>J.E*a9EVNQwbQ~ZfQ$4eOݪiG FF#dUt9|zWM4jGͪ՛EsSpEln KXYB\?U~K% . ]se`/ 1D-$d̼/s@\fK]4h' mk:r w|jN 5D Ӭ+="VwݣGO g!72H,a/Z&=baߺn59]y isuGǀa1M,RW[UQGSe&Q*BMlQ.q yZ9͇u8_)2VUYB|1(]|l. 
,% fOԶȩZu36tJcW|] CDM;( €$3E}yd$c;'mJFK%dyJVtBX8n x*#-fL9[u!/i,GɈTsjڑ05154bPOx`ifp'n'bmuCc%8jmsy[)D~ F!ih u#(!x)ICwipU!X؈Nh։:RhSw9uz Wͧ@Ldxvv[hn?tlք;c F8Gc6wS7JyfPy"VHXl5sA'D#T>8.PrU.r"-(5ȍfsXz#s+N")I4}xtB+Ig#8"uivHrBlRagˆ>VVQp'MlWxߤV(eYTt4gw`eFw89^ӥlZ{}[Yy%dG9}׍Ȕ7DywlOw XEb3CՔa&^~WC6`W&ypIW4DXenvǧXEwJIOU%ec{{geVZ4FdTxYtfxqvzn[b)I]dfÕj#U\5 (To/ȅyB;_dSY:'hbƘsCl֖Xznᘥ,J~Z h>~AgI=ɦ,F&؊yA煣aYB8 *JjZP;scalapack-doc-1.5/html/slug/img449.gif0100644000056400000620000000007706336064075017174 0ustar pfrauenfstaffGIF89a !, pȞDQjuL?u הV;scalapack-doc-1.5/html/slug/img44.gif0100644000056400000620000000020506336073370017072 0ustar pfrauenfstaffGIF89a1!,1\HZi-wXjrh*o9azі:*ab@ΗUDoPzi"w< ;scalapack-doc-1.5/html/slug/img450.gif0100644000056400000620000000026506336067063017163 0ustar pfrauenfstaffGIF89a?!,?DvJp*F֓-Iȕr5&n, ӱ1ONʈ/81D< )-&2hy6`K}kFUio;.Xėf17hRRX$&YQfx *:JZjzR;scalapack-doc-1.5/html/slug/img451.gif0100644000056400000620000000043706336067353017167 0ustar pfrauenfstaffGIF89ak!,kDvJ3UEXnyXe$ Krr^-"j|In*Uсp7zge:"L.Z,65 al,3ܕlP{Aymwa^#Z}>uŗU5FRhx'v8Gy$#YCIYz:D +;K[k{{P;scalapack-doc-1.5/html/slug/img453.gif0100644000056400000620000000043606336067540017166 0ustar pfrauenfstaffGIF89ak!,kDvJ3UEXnyXe$Ř&I,}qjO j;`]=_)XexQ e=D-(Fd-~r5'7G7'(87iHwZxX I秗vgZxWfK ( 1AjG ]웭|S:wH噮COh?0 <0… :|^;scalapack-doc-1.5/html/slug/img454.gif0100644000056400000620000000156606336077241017173 0ustar pfrauenfstaffGIF89alC!,lC&~^䉦ʙlrHmλ= |6L:;ElzZWXv KY<9 wڐMKv?'vPh8ȠvcA9d9hyٙx #YY:FȺĩ) kZ۪gG1{;\f|L)k u-,=-vHZ=K &SXn7n5_3/ o ܐ/QpqxB_SHQhqOǍF"Rɕs8*KY$80Eu5Up„u0Mh<%(@\@uT0ё\ci{P م6SCT +oKl ͟jxqnci;#Um}g87`KhI'3 /ysT=b;mM籫͇Q.b1~MLLxkݬCN\ O5=ZDwnQ`ᭉs{[lޝU"v~p`NSerAe\fυǙyQX- FlJyݬb,w5/آ@3Bv"<(ύxɈ#ٝd'L·$FɔC$YZM]zYK(_RI?[*Zʉc[eҘ!y{& jRiZi4$ј#njԟJ(SBv)v'rꀙ:P&lz Ml4[,Zњ9ڰJŬ`+嚛ĺٮ) +k1$.㲻GF;scalapack-doc-1.5/html/slug/img455.gif0100644000056400000620000000500006336103140017143 0ustar pfrauenfstaffGIF89a!,[qHMؙٖ L 6n/ՀLpTJOsWjjŢoXI"z5('1vgY$fQX(IBayiyzscYEjB Jt;   zr+K\,='+x,KË=Nf։r=.^nS)<:MG=iu`,~ ,? 
Zt0c:B\b!-2H=(v l;*rQ PC`j9$J>j S0,TUU4V#;)D;fDk5Vaׯ[lS Sru2t/d<3u06'{۸sSv>ofg4iQc3d5mgm&4Z&DhkOz;i+U=:nwn,~?>ȿ_P?|̲_u }$P$3-K!E'dbY( d]ݗZB zGE,^j "Yz|d&EVMM"cnb`Џ%s8b18¶bRLAG5fR%^e[.jieFoVfau': 'MrGzz9~ց+j~sI^h&dz:JjJR :>ɉ)9Zᨺ C lN }~v aw~I.&X(.x. [Nڨ½Q贛p{1p.IYza;hj zh;&r*r. s2Ls6ߌs:s>$27 '#E\~iuCt },EuQm@ 4L̕'ھf*}vv .rZVOiK|WtU@犻wɟnt8aCk sVN֭54NdNk y2ު;K^M;v;oT"'#~T=Bl$ii4a+S">F"s}%̡B%-(nk YBa7?([H:-wriC jrTR!=x~pfWѤ6l?p#gF/ zp6(DtM*/ ԑVZU42_6!x+*sh!׼Q VX=zccIzGD[l`")ANk|HhrV!omr G:ъ`%MרD`dtH,򑍜-1;a1!:AT68UI2)ћQ_Tb V`;ȫ]n7]cr/<&~̆z ҥ^My\G<; S~KɅc@IB{5n_n\y|1麜5My//r(Ae;ЅK9ÿ VW.P&)˸hDyaFjh2[y用z#3\-qN=36vd,2s5!5d~V4.%Ohu=Y2S/^t]&gLY֐/]ńvqd :!yHfWÇM4L:')CI n96n^G._g_4)ǻ7 cİjuWשrNw-mf &ufuu,Cu]9P5hs>ؠzJb%bLX]$&Ԙlj"qB#CoH[2zvӅZpϔR/N]S|uLm!qJCvuK;Lwa늕 ~G\0h.vk eUnUykqSvFyRKf."DbG7'WDR(GU"3Ԩ6yyfd&g"fG7UYR[wh(ew85&~QJ f3i+hJ.#ՎG;By媊CjvL2n1:pRU^޲X1(S:%(l3iԪOֵwў-G| 8ń;~ cɔ+[9͜;{ :ѤK>s;scalapack-doc-1.5/html/slug/img458.gif0100644000056400000620000000067706336103626017176 0ustar pfrauenfstaffGIF89a!,Dvˍڋ3)\ AȶK%Lbk{ǻ Dn!S!aci,[#UUd)~ pnzU ?vׄ'whAx(gTv7bxuv6HIՉ䊉Řj7d:yFskKc(gUڹGE LhKy㉪]-ʕĄ݁7 ]a-nفYh >7[^t6nԜ֫ Pz!8٣n4ю1$0 ޳Ndy0Ȗ+7.oQt鄞N>Z-|PT52VҕL!k2ꥦ.NH'Ͳĺ1(Tr,ު(i7 >8Ō;~ 9ɔ+[9͜;{vP;scalapack-doc-1.5/html/slug/img459.gif0100644000056400000620000000032006336103720017153 0ustar pfrauenfstaffGIF89aF"!,F"k͋dqH^לXRh>ö;{TvSVCZ4pdu:,Ju~mZN|6: q)p{2O6B87兵x'dY”ș5h8Jjz;K[k{ ,y{͔jJu^K%O#jRNpJnX$6(!(hH7exF9Gƴ F'A yJYUiI6(+Ǥh\g' ܥ5l|M y ۝<=Q{^*ͽ] ;iiXOsw%nu shpH?`X\uqlu<"F$>~tFJPrِN%/d3'Tyb {.}fbjQHU?lKk:R,tw^Nana~b"HR8 "0TgaXE>Q;ZSRQYuDW 3[dsf%^^6V3[%%jnj %mn\u\{(GX^yl=\Xn(i}f䥳Pi EZ%VjyZUq_ViZY%+mjZJ2엮 f *{5RclQ3uiݗae{`*[9 7 7m'iIC[?XkF$MSL|vkhy bZ9̝Ȧs3&*)(g*c۬{a39/C&sM\di|Hpv䥊_R .D6tm*`7MRoMtOrtO:ݒݕt*}%L#^ɂ,UZM/$/}c,מ~:o*z+{{/Bs5Gӟ9IN 9G\SS7 WUK{~:J5?Z*5?v7,/rnq3 4Z{$i ˠ< ~t]߀Al[Y8yU2LU5"PE+ v@Jx` &m$O~{Ƴ1O.G**Ox\C<6эo˘6z5B)ȉ=$8DrZU)aSؚUQL'H JA!!3vw 1pd,g_[!6ʄ% +hb2 Cm+E{ĐxJ^dUi6Y3);LoMݲO*=ild]S4s?"4%πoDыFgx8Q碣3Pѣ"ߩt,mKKDʔvH81U#5H߬ zRՙ@i5"I14$A;|u{%6JC` %*Fe 3-HK ǘBڦ= ,\qdYқBA] 
R(8,W9eQ6(eZVӛܚYcʰ< kuBn|;YUL) >U;PP"[BkIN*JHDJӚlvI肖9 ܆Qm۞"+hPOc?myiLpM+@85uF3nlNX}#9-ANeSye̿5^ShPj'%+O2QW+NDzw8johZwfS#?3%~O0z`qE%yP,KqUjEMSA3u8WcB|]7D+zь^iOjq.i^TᥳHbZ!H,2S/yWs-&M-kSrV~bKr4eS22ӳR5M۱v& s/H=Һ<7*c\2ߣ]s/܎*=vaq oa6v}6^wi_xX?²YLI<4MN [z4wo|< }D/я+}L0&ia[6#sًp_S;ɲ) I1$Ysl9'֩T%߻ZrI}O| k,`ȣ,O,]<1OKh{~6QWdxOy'KKh'k;8.4>W`Z^G=T1}/6? (;scalapack-doc-1.5/html/slug/img460.gif0100644000056400000620000000011206336103755017152 0ustar pfrauenfstaffGIF89a !, !ȩ LVe/\LXv(jZ*;scalapack-doc-1.5/html/slug/img461.gif0100644000056400000620000000050006336104100017135 0ustar pfrauenfstaffGIF89at"!,t"ڳݼ09$ ZI ,f0sd!bɪy!ӨU#:(OZm;sԆ5}TmD&PgEwS4'׈yɦ ɇJzij):83XX{K7UC|W"AmZ[#c N[Fbv5\5nQG.?^:x_>7EPK-G.+Z1;z2ȑ$K<2ʕ,[r,;scalapack-doc-1.5/html/slug/img462.gif0100644000056400000620000000262406336077274017174 0ustar pfrauenfstaffGIF89aC!,CL̋3pH$jYrv¡("%|jb-7_11@o*jp6;=n{wWp'G6H!R'hXb6)Ia)" xӨZZ:gصʇ(; A U{1lFjۋLkMHx&Ml Lw[y nT6 ?~L؈}sjootS8w^P#(QŔczH"d B{N˒MxEҧ#τ5svt ɡNb[x^OD)J jF欴+ H5fUӞ_CD:SMyu14SJG(CagX'V89 [2g--0iS>Ciòdl)]I<3,8l1vt*72u98oպAXU]|)ُrރMobyGV=FaN ؟GĐԠ]!ş9 D~I]$RT>maGA[qfa>4 byN^k}ʟv˥]HJbgIX+n }iuq.}ah[1d$sX.'+*3Ψ_- 6 sF߼`MϜDK>M3YxScPw^g6|X}ئ .}h6Jr=6O}sǍn`JB:~Ŋč;`|Ϧ%H >{N,nA^`5~4VkgXC4&fmx&>%HoP.Sk7v빸b#pBm3X`䪾T&QEcrtn"TD6fAL`/eombzż=B$D:~L"bH7xy"&LoglN'F']amX7-l^m _^&. đhFhbdEe&KR; fwL{#:]yr;N &gG79 -"=W; G/{MAvӣ;z(L3$2oyEI_XGB`;zdⱢMgzy,D%|1T$&1bf^&gF !\#(P)G,a{.搃 {[aNW^:aط RkcE !eCr#CfJ_6WA&H<6PƮ CԖ;9Q epetCBy7UQw)%IP[l/_ǡBg,vLUƐЄ;*m_!F, [["́ \v8>+oIbVV* O7.и Ɇ]UpjwKm)n$[d;eoԫ!WZ[_FrW#>4¹vsSx0".Ox,n_ x41? 
lCwRٴ޼F1;PTk i5n,uwF/L#d0- ZֺVt*1ůš>]\L"@LZi|SьM>h7An&iXimTΒ9ץ &x&vW{yݷ⽛uE/ʯ n%ӣ6N:1cK9QnN{rTMwIRnЮ*9)K %s +sTxVB4-I?GϥcOO]*W:C}1ho&T ?-Ԧ?H:K_5 }]N٤z}~/mykq/ eK`Z3R0{oz3=$MoêNr정'WrOQ=@|{W);Y9A%;N< DYgMp7GR/ u|xYZ0KU$o yJTg<aW]gKpfteowz'q HO 17GZ~k?v; }VO'^%&k&g(r.x@BgwfC!681&:x\(`z)yvA, GH>l}]8~Px$x[kEhXl{L^8}TxVƗl.wwoxd!vhx5zq~(HhȈ舏y#"1CG$SDȁphe>7o^p؊$8I-_'#fHmgHȥ25W艰xhtt3XhXpB18z]FuDyӄGwu.FhsD(w(%|ǣjX5u]nA zd>%1~_v{Yӏ/y3ja7R(V6_цEvJDe%eVwFGFԒ0V1(#.J5{%FYDɓXnAp|WlFf?w YĔ%|qTW!L4xivKIMґFhA^4)Ƙ8rsU$SrUzKW|(n %PJV[獎'RewvA{)MEeLtLsǎ(hAiOdxY#[uXIImv65l%x9aF$SNgirvלRC E O\=Drw̕{=)K@SUyyuJ.l|ϷVW{׷fth- Pxg6xwpn=I[󴟝z%LȚ~"Jqa|xY]y/*%FFg A2#0V7$ؤ[9kR؍ghJ胛;33az8Ze:1ۨ'E +HsJItkl~ȧ kXZijUv=J}(D`vڞ6DǕ8YEj"֪$Q;scalapack-doc-1.5/html/slug/img464.gif0100644000056400000620000000063506336104640017162 0ustar pfrauenfstaffGIF89a !, ޼$#r售[^<[3CS/ctgGYmquJQXm˲J1E<wäfVַȶ%FT5DfghYj !gsx)(XZx*thH+&9L{kUh<XJ⇶9N cqT2x] &5ϩoݗ~֧dY uDte튛o5BS, ˘4H0)yHO͚Prȟ| T&Je;!TղTLSh5+ Z_p 6رd˚=6ڵlۺ} 7ܹtڽ7޽| o;scalapack-doc-1.5/html/slug/img465.gif0100644000056400000620000000010406336104731017153 0ustar pfrauenfstaffGIF89a!, i}Ne2;hSM);scalapack-doc-1.5/html/slug/img466.gif0100644000056400000620000000223306336077332017167 0ustar pfrauenfstaffGIF89aC!,C@ڋ޼+H扦VLKl DL6ʦ 50Tj]S?q>GWMw okwu"'&Q7"H`x(Bif*J1u)ij;J[+T*k5{9Kk1llr |Ȼ!̀RM򜹝  ^|I.N1G߻f4.&A`xZH=c4x^' b<ȤZM T2gz  Q՘@et|8u:}Z CN #ӭrX,^(k6 jmm OظN:wZ޻Hթ_8~[axqYF'`>K3ޚ4>oɛ/l@4Ntt& -kmf8ӽ"GM3$dJ w?ݰu N\fTݣRCHwv}%Vu{1<ݶ Z6K]\x~0H_.RE8k7k\U4ȱRT]5P3Yf{]Me=Y^oks!M#ϔi~%LUxS  }"цNZV\=R h5Lކ`5x A8ĆmQPhzљYf}8=ǩۑUgW.d#m. 
H]YCSfk2 c&:糧[=>[-*k$WƉ~ kna +(?RlLJ Vobh1uoPzyLʸŕ9 O,*qԙf䰴}ߗ%d'l_̶͞㦟8a[(֜<9*p^4w#}O@}psbNmov;scalapack-doc-1.5/html/slug/img467.gif0100644000056400000620000000030206336105467017164 0ustar pfrauenfstaffGIF89aE!,ED|@`՞u[8)BiVeH.gODү1T,ey9'r9] 8&R1WT8%V`,o-^S6FGx!Ŕ24f6b6i9jz +;K[kcalapack-doc-1.5/html/slug/img468.gif0100644000056400000620000000035006336105577017172 0ustar pfrauenfstaffGIF89aT!,TDg]MyC")ި R0b{y85CJ8҇+.D&3eЎQRjs6Oh/S.MWDW/ۥ7gDdhtTcFY8twXsY  c*B*gk -=M]m}MQ;scalapack-doc-1.5/html/slug/img469.gif0100644000056400000620000000020506336106047017163 0ustar pfrauenfstaffGIF89aA !,A \oGҋLm:Vx䉌ܢ> kz_m6(MKb߯ēR#lTb׈=#jTiY&t>%Uҭ,OC/ ܦa9h å԰[x0Ba#RHQb} z[fa=AtM;[ŲŒq.J,U2Q?GO)Qjd0޴* ͓XM]3knm8 cJtUL^-~ WL ۨҲfdgL 4e̢t(vUt9zqW_m}llYl6mR#K \6qG^>ò})yWuztx9z 98=[̷^?{u UNEWMiI;5[p%ހZxywaztaw:o?݇SxEy3H!faSeȣddqi.j0(DmCbc{=iGrX&8yTIfffl0k@%g:j V`~fkw6q6d_2ĜyǢ=!^E6Q3 5:2Y=i:*BGdWQ9]ݧZ+&ZjC-^5ÊY}k]fKn>`h7?+vIofDH->L=+j0pQ%ti YI(~lq: r\Y-Ɂ<(|/NY8_l"0sElolR 2wX6z0ֻ4j%)Y`qQWTzXb׬mg ww 6?TZMH%1p {q]Ϧk駣~7{!}ວ8gm:G #߿-tKC0jL)kL]],\MX~f/;3G/BF(ѹ,g^ Ys 9xJٚ; OXl2UэL]Ʒ/EX@1ԨN5ko&| UNXx dmV"8Kv:X.@lcAꖹEn01-ܹ)^9D8BT hqo5Bl <{  fX(FБq2YM2JcNT} %5IJ, e,GG1|d)g]~% KDц4[#Hg&펾e.5zpf68K{HPSwp _M1: :*0^3^Q U0}sa8yx-w3}*0E+'tHm-T@ m\tN)b 2#U0F q0gɐEVe^2L>КdwU`˩He R^c`oZ3ui-lվ_UZLo,K}Tկ\Dn韬#X;uj@[B!w(~#n2Tzp(G] ٴR" oq "j>6.;CCv<!,>DvJo*F֓-Iȕׁ(nmc,۪c`ʈ/8Jh"۪sSdo۫͗CH}5Oc!#xև88IYiy *Y;scalapack-doc-1.5/html/slug/img475.gif0100644000056400000620000000010306336061665017163 0ustar pfrauenfstaffGIF89a !,  yMnZنY#QdEU;scalapack-doc-1.5/html/slug/img476.gif0100644000056400000620000000045606336064527017177 0ustar pfrauenfstaffGIF89a!,`Zit  m{32^R=h< /£)Fgi}^U.!shzך̳96CE7iVRwŹ'ɶhfg5FbWGgڙ+Jz*FTfy+%yb6yjz} j隄;~inDN/׿<0… :|1ĉ+Z1;~(;scalapack-doc-1.5/html/slug/img477.gif0100644000056400000620000000011006336061237017156 0ustar pfrauenfstaffGIF89a !, lV8dr ]f]bNsz Z;scalapack-doc-1.5/html/slug/img478.gif0100644000056400000620000000016606336065117017173 0ustar pfrauenfstaffGIF89a!,M`ˇuYy5QJZ⺵ؙ, _~$Iv ]&Pʧ{aכ2ܮ 
\*;scalapack-doc-1.5/html/slug/img479.gif0100644000056400000620000000016006336065262017167 0ustar pfrauenfstaffGIF89a!,G`ˇI oY1m DUəھN<ҵRv;-Z3U ~ldRhav jܮ O ;scalapack-doc-1.5/html/slug/img47.gif0100644000056400000620000000271706336107352017105 0ustar pfrauenfstaffGIF89a!, b|V`6rIٝh ;^`Vio!b<*<|Bjlf[ŢJ áx (8H58v(CX)d7)YYI3S I9梚ڊZ2k{ez;V L#b Jv}5={x3R ᭝|BvLam?nQ*Edi@}H)4P$Q"f(andX#cIl'rCR H☙N`b͞C>y)ЙPjEA +OWrSJԩh۪+XyjD5SZy86`\5xa_7R31D;Sۏx4eg\ĔF=5o-Zd\9[҉Me [nfB٤KwlʳZus~3,kzUqOw9;el r/ lԟbv`d Nhs]fڄyٷZg†4XQ݇2]RV\21MXj:H=^!E.peF|>餒 2e0mF=e0xb(Y7do=v^UR"%"[}—Afy:(_iNJi(܎n*ڨnhjy)tZ Sz ᕀ1LJ\Jl"&Ixg鬼v^:_j,u[~ՙ޷"6KM6v h١k#F/fꮊo⮚2vƹgj/rnY;!."+I #{\-oצ-/s)snΎ{t3}֙J/um\)\)CgWK{16ZG-S[4> v]0 9CKy2\1jb6%[kK]<ʀSɅCł#d72C^-iؠܚ$3A>ى޾+:r n >|K>ۂܠ3}>c8Sׯ>A?{H9h- Wnj/AJed?p$,TR!f(m/["B!'\bQ C_CO Q{4,N-o)MT"VFp!ŚWke.n1&F#??Џ ø\a#_SەIA<$!F$Bt4{' )3@q('qKahtFeƒ!*=Pi\֞{ULtqCqH4M)s";sBJ ̖ndaNhz6y'ŗrL7-)_5k@svLhI3XȟXQU]V`PD+Rkd,}50W>8bo{MtԱww{<ʜ;{61$]i51=QJeVCJ Kyh^`c3TX[CmʾDp>yW/W| '4| Z6lv˧Ӈzw} h_\LD_>.؈ aU wPD1kbDV,'V_| S#^x:%L4H? 
W$JrT[D{8$`K^ɤID7y٤X[ |n4h&AS>z@z\FVSPUiឆ~U"v!aF㡒 6cnb``j iJT 8E)jz~ 笴vP;scalapack-doc-1.5/html/slug/img481.gif0100644000056400000620000000024006336065565017165 0ustar pfrauenfstaffGIF89a/!,/wXZ U+V9ԫLF^f* IgԱr8jxJ5>uJ Hb5JbQbWd96X->̢̙k7zegx)9IX;scalapack-doc-1.5/html/slug/img482.gif0100644000056400000620000000162306336077410017164 0ustar pfrauenfstaffGIF89avC!,vCZNx~]䉦l2Af~ֻ j1sˤdEZTw 4Y,Be 5/q}2'(HR8xgȘx8cxYy6HRgZJ9IJ+:X ˚{ {lQ͌{k, =T nn Nm z}W~Nco9|"(Π8-$C'=TsqpmlFGIpD1 qd4TeȘ*Xn)K殀7izIѧN98SЂC|4ҙt[5Fx7uHSe0MV>sZBIR\x/6;7k6`A71N S30.-:;Y/ckֶa~ݙ8x7kj4@vV -vl joKy۵4T~|կ3筚^2|㓏yne9J}=pnڀ*R[?w}Ɂw\ vh]sRY{tgKuݨMÕVcLԡ7{}'XL"dO?wԓE)eIS b VbyNn$V|GVi&W&f^X$4gnɄ39N(tN!h[-%ZWTb)J^ۛ9ƥ%@)[*GgbQH^Ikqj5k*mի鱊X6㰲>i&uWnJQ):OpEeVyAM kolm,V &';scalapack-doc-1.5/html/slug/img483.gif0100644000056400000620000000047606336066240017171 0ustar pfrauenfstaffGIF89a!,cbڋZ_m 4f8mj m>ӽ|)r'"|^) =]%EP- YrZⵈUypG&XB6UXW'WwX8yi%GVixr:XEwJEGɊHzD{ k[7$e*ȩjB:g" lg&Z̔\zw:mdvI=~υ%lҺT<遐LRV<&U5 orXm# v@Sq\o(9fwUUE HH *:JZjzS;scalapack-doc-1.5/html/slug/img487.gif0100644000056400000620000000117306336077431017174 0ustar pfrauenfstaffGIF89ai+!,i+ڋvH扦t Lv{2 L*;!D%)j/Ѯn@^GdE$8 'FYYIh׀8yzfiG zJ[Bi:Gh,礙ɋ|<<M܋)Xu| K tK[;#]Z^|$;.$?|HإwO@Z;< )؛tDfc_qq?pmDe(c5"Gc4›R]$մ)*sCbePҩTZJTjȣ6㷞,t^iUOVT*pڮPA|^_a=*&kWl!(!:"L&L-ԼYj +E]!-YWT m8Q׼&NŇ;WU,ϫ[F;hZ$)-z=Ky7˶?8٥T¦}GIaM8`xR6|z`R߁R\\E64&X(3(a9Ai婹y!‰IZh8Y{ JѨ+hȪ *KZT[b\\}ܔ ,h׌ \M>r[ᅐy"lY~4l;#-ߢyk(ÕA{+]C &JH(^ƑZǒ(&FsI2*3sM+;cs8+'\JRg1eeS0mjsjf `Vkd~s3)Zj[ajܧsëQoъ{u0rMs!?eAt˵rW&v!a>t[^ۃJ8`xׅ꬟<-Dw(:$&N߈rJ-:g-ޚnZ<@+ɹfHjzѻދ׾C 9ժo>HBd* 5 ,9 /逦h rÎǗ1t?w9y/'YeDYK{~< Nxw)]?%~~ضm|kΆA=mO4Ħ4Bq[`ChB0u`_6)yFBVrr!VX8Ɇ5 Goe)ac~à%$K-}DdW*̂kt(('n -ی&:vibn|XTf:;scalapack-doc-1.5/html/slug/img490.gif0100644000056400000620000000042206336067724017166 0ustar pfrauenfstaffGIF89ak!,k錏cJOqzcJ82騜A j !`őtJachB űg\\Q=w7p='Ww~5s37t%Cex2Sha3YGǹbrW%f:#Z+rH+ KWJTyrM] ]s>\/?O_o0 "(;scalapack-doc-1.5/html/slug/img491.gif0100644000056400000620000000252006336077453017170 0ustar pfrauenfstaffGIF89aC!,C 
ҋ޼HfgRrl[LxфL*sD[=ԪNk[i͊yuiFV.;}k1GwfxxieYшV4WYjzFʨEJʉ:K[![81k,, ̣+< } RlP {}X}mͼ.͝{컬>O^^e)^0{$MFaPi _1"/` E E;1OF#=< dɄ+-D Cw̌Sd/sϠ^3ER G@S&*-jsiBkZUi=Ri+WL٦4+qƆz6\yl:/5G>M*AdY{pK!r՗4GO$!3Wig-P6YkǎM6l[8ptӟ϶6[>==O7lԴݽ]ۅ÷щjjjpyw wF`| a{Y;`S }M*T|~ق U y\v]\r%ަKm1$&>d96&6&Dr@:]xˍ1vEzޖ^#wA顕m"i$fVVW`9@XUʹE!Xd虿rܗ7 i k:"~\ J(h  $)?ꁩ^z&( iȢUƎ+q}"gRCi(::(]z^;, U[{E)|,* %Z`H%` rc6b-jb>A(ō ib-* /i w!g‰h'_2K2>?zq<[29ʞU(ˊ04xM.WJW1pC GRpŸ,lf8t^[=A+a0Lݠa&{wu 9XޛOsa~TPXꚐ.VzwnYɎL?흗;_;&/ί0Cs8p _>1Gp_|v1ˢO?*o|Oie@ pcKnJ \Y߱Ufv"p 0 _?}  aڙL4l& utnخ!*4(%J\,nwbD/Kz_,#X\QCH 0r`23D?V)Z"F2q2+*2mj_}6s,e&3VghfGC784ESC٘%6Dy&3"j1iJxkX) ܊Lh],w6[{(CNC~A :Zl-j&#ϸ$d,e +[]Pk2w0Gue 65tҖES2T)ɳXIZV)g@'$+Xi+$r̼hX`lM,Xr-!m:./?O_o0;scalapack-doc-1.5/html/slug/img496.gif0100644000056400000620000000133306336077500017167 0ustar pfrauenfstaffGIF89a*!,*+-z}"8azHl  9 Xox)UsX,O5o,Nkčε^M{22UqiW6)YXy)y*6q:: 멪Gw){{kTdK'{\(JLK} =A+}*]{nl \'^_X7̞m1 Sǟ 3M-qȁ4vf$Jw[JW=;14Q3˝*O-p6ԆPi\a[3Pf8oߢ6L:ͼ%Vp]<KVkLd,7my~8ecxvjo~Sͫd}Y9囃5{ v6>zd\XHvz]FT֝[2onAsɗۏ 0:EY!| @rf^]/nWxe]Y?dCzlu79JW߀EHp`j W:D>V4,bEU6'Ve o-Vь ָʍ ގC !?L'IǐTd \mB'=e*˄% ^m,9[pBI'*v|Ϟ?駒Nx蠀P;scalapack-doc-1.5/html/slug/img497.gif0100644000056400000620000000060506336104602017163 0ustar pfrauenfstaffGIF89a!,Dvқ#-J}UzNҮNL͞"p}N4{͐91j.7Lªaj6]pkkfv)n&A'4!v8fSWG8(֓صQI:HCi9J:dG󪪋ez;t[ (új%ًla(]7y(,Mɕzjjj_,Z%fθ%|*0ڧ2",҂=㱹t]dɺQo+6%Y0$mEiq5;O:1bf+VnCW7&HxQ67iU&GWf[e%j{WH6lF:|x\<Mk󦋋׭B5 BMg;+-o4ɉZ)K%0 0bCimd:ϝl1c' Dˑv26r:rjϋ<-^Ҥ4"i(ԨjR5֭\z 6رd˚=6ڵlۺ} 7ܹt-;scalapack-doc-1.5/html/slug/img499.gif0100644000056400000620000000024306336105257017172 0ustar pfrauenfstaffGIF89a,!,,zVj8pfӉSʥtn/RaӐg%;eqO%8E4Xtmf9UeH nq>."JLGMGGX)9IYiyiP;scalapack-doc-1.5/html/slug/img49.gif0100644000056400000620000000241506336072441017102 0ustar pfrauenfstaffGIF89avW!,vW-ޜ|hʶLh}iY| GLv,f9a*ʪEmJ-֫F1>2#(WH"pGHsȱ(dظbxhXɗYi7)yIGw9 ZZjKFk3jy!R k$u{s"7l(̴gH'-S~LM2zGp0(};(JWC=J"7}vTd5'2KUN|*̖<}eBb M9gўVP4#2i!V#*FzL-Z#Dkrt{eu. 
o9uOV<=P~kǗ;Dw9uIrft(IYr#Xd+טDG7i~ƙrv)csh77T/W~uHqj֛(ٹ4~XvN%98PcJhuhVֆ\Y$<lYr TgYg?j|p5NzM)|ׇO{S }9w}8Eat8@z6 6zIJ(SDNShr][zD4k8%w_JcQy|nicN>؊iVzvӹWwyL ¦fZ^u\m F(~i#!]Ł&S&#ay(KW|葂4l YhzjWX(J_$xI=zd~h؋%C ˚g][J=_T7I{Ǝ3v%i+"#أE}XFNSj]zeL? ې# 4ۍʯ/)L\xȐ9ty 4)&:shzHZ"J> 0I%c۸:8sc5/eɵd<Ȳ,DZ4٨*Lk`K-: ƒaK[kDYƔ)r$XI{eTf`)V`dY2y5;bhVQ7,ۺTHmY򈴗ltya(Kmz)<+Ws}+sƛr[8o I+ҋٿoG l|Q잭Ok@,"6v,n($ȉ&u1 9`d79;=? P;scalapack-doc-1.5/html/slug/img67.gif0100644000056400000620000000033406336113552017077 0ustar pfrauenfstaffGIF89aU!,UDvGgUMӏiKwㆡ5:1f)+y;V" 2J;Qxl-HYT^O[5\Mt|w^y&bFt87VE8gS(љeW&h D:q[Rj  ,k撗G,UPUYyA(JR{.N weLvx'!PmV-]H=+nyK/`n{1 )␅t 9ɔ+[9͜;{ :ѤK>:լ[~ ;ٴk۾t;scalapack-doc-1.5/html/slug/img683.gif0100644000056400000620000000025706336073513017171 0ustar pfrauenfstaffGIF89a7"!,7" pLi< {GZEiZ:vzV۰|g88 "G2Y<72[O)}*\5/fob}~rse)9IYiy *:Y;scalapack-doc-1.5/html/slug/img684.gif0100644000056400000620000000025406336073532017170 0ustar pfrauenfstaffGIF89a7"!,7"pKSyHZEUVNŠz-sMe#_9DB0 ƟQ]MP7(툇eSe;J\ӵzMQ+Utwƙϔ2Xc)9IYiy R;scalapack-doc-1.5/html/slug/img685.gif0100644000056400000620000000052006336073551017166 0ustar pfrauenfstaffGIF89a{ !,{ Zݼ49b**ipl&2"U0IR#dH_PZ T]out9f0Sm][Enkr\GwC5t7ExVXTg9c3XvGӳJ g) 99C59X8qI+ɕ;u;J7L:bTVDK}ZL:μ+{ Ji*O! 
>Jo6^TG@6FU@(^2ȑ$K<2ʕ,[| 3̙4kڼ3Ν- ;scalapack-doc-1.5/html/slug/img686.gif0100644000056400000620000000051406336073614017172 0ustar pfrauenfstaffGIF89a| !,| ݼ5扈ҩ$p:if|:V!MAܽldSpIu*8Y(5T.dpD{#LRټl976Ʀw6FuW֩*H6f(YyI%1zx'kDhgJ [jƫH'Xڬ#xKYYUۙ3<(^mcպq7(Rϥښ/g}0$hO[#hm8x$$;z2ȑ$K<2ʕ,[| 3̙4kڼd;scalapack-doc-1.5/html/slug/img687.gif0100644000056400000620000000012106336073747017174 0ustar pfrauenfstaffGIF89a !, (ҁ6"vZÁIh~ vEuZa5A;scalapack-doc-1.5/html/slug/img688.gif0100644000056400000620000000076306336106233017174 0ustar pfrauenfstaffGIF89a!,\`*@o}H扦݊pGm}+a "7곹.7m7sJF6˙Fy:UzS#CQbH"ÖVyEG9zZZJ)eZ7K#:Eyx&Z%FF9⫺Ȍ: a,\i;]&mk͋ j,[G݇WJQݩTcS$;GGٽ}&8EEZ M[kvKҢƆ&1!8/TeKDCl89E&Ġy_;{nY.gMv4"4n4 єŤj$bSVf2o*#bD=VT`YlkTZQX\8I᮴-9ω eXj!y9ln}4=q0 ;W>^} w`޹}`|F;scalapack-doc-1.5/html/slug/img689.gif0100644000056400000620000000011406336074011017161 0ustar pfrauenfstaffGIF89a!,#i{@2uao\yqjL=;scalapack-doc-1.5/html/slug/img68.gif0100644000056400000620000000476306336113173017111 0ustar pfrauenfstaffGIF89aC!,Cڋ޼HnA L DVL* JcN:jܮ0FV +s M^`Whx(#փpDH(#DY9ىfi&*GcI)ۙKdk U+\H K]}=SmMU=HL>G#=/EnzZOmt;΍Jl40:@(_B ;sHK1SADg$h!K2 L8*eȝ3k>\yeFp2DYĎJNtJ Ʃ^;,f֊ر]JDM[Vָ_d*lfYC]lc[jxk:!Ɣ gdMR:˶0C5UNH>:Q4`.8ȘL@xc -]n-Z6nxvM7:+ x-fԫ~'cw/iue=~ہx 1W@YWjA_zWsT9h׀g'"Qegr ҵHybۈIahH8BϏacl )^$E &T[Xh1Bҝv YnZdsyvSreמSZ(CVIŀevV鋂ݒ~tHe:Uj፫jJk=v+8&ZRtS lJ{H2zjYzG+\ 枋n. cݾ1/Iֻg!. 
QD,,4eMت0ģWJX)aNPlI).o#u2*L*dݩmZsޜ^` ?3LάttrwltץnzWthmYS ƛ5K;~yw; TW[(ю^eq)gN~.u-:x2>pB4zG<oo )ve"W/gAJB:afԾ!|!L賓àvr,)OTrl BXl#')KYzX-1HR|%(L<YZ&zƊ5O 4Z3cxh7l&Rf&4DnB j?N>Y OC-j%s1t^2( .2l!Gp'e>X7=.l[t@,n6yR63Qd7 굉[H^iI&Ċ ,PjSȍHZdw+y6c8]jUs==OI-`K[Ӫe~u|&Z0AWE7]_)R첧ڇ[.{Ռr?%69S'd;ѭ%UulH뮤]z/~&JI\`rH9cH?n~qPUAí<9Qvg5G1sg/%l9]PQgƜe-1id ͎dqʊw}-x8XuIkU;ػus]jb| hM*^t-cy˫}(6*9ܰW>ch$֬~n,1]nq56r{J/mO~BjEؚ%FYt \dOp2~YMLvQ3reolT{9F-Dˆ޴r2[ \l'y/8mם}3oSlK}낒1XZ/YJ fm-LI%i!z,vZ;&ojOGx J+VE{C5ӘT2*CpWgF1} Mm1؟N&$K=!qCdw+oS;scalapack-doc-1.5/html/slug/img690.gif0100644000056400000620000000012706336074063017164 0ustar pfrauenfstaffGIF89a!,.p̔Bw^DNDi$iv*~[:k5 Z;scalapack-doc-1.5/html/slug/img691.gif0100644000056400000620000000141106336106260017155 0ustar pfrauenfstaffGIF89a)!,)KsGUǕ扦ʶGm ^H<Cw$¦ J;備(Ү 72;3=jӈ|IllcWhx&7xs#)9xH uIGI#JijJF9 Қ ڇ%W+ Y\QLjuղXb31Jx#f /Oz460zȰ!uQ@,{0!9БMY*e'OA^ݚQ.סe":;bS7q.6ؔyU%ڽlp41b& aA2֖ˎС?Ue 4T+RmzfZ*^Z6z0ǖljL2;̛;q£vAt]*vJ: $ӓV}pfY)Þ&`Y_~ՠipt`UaݧP$ (Yry#1${a8}9k~<ULEIGM{qe4x"l^I& DhgeQU'drΉ_6Ggs)ڝ h鹳Mq憅ifȕ4<9裖hhy yi(ȍ|Z_,}B4qf;scalapack-doc-1.5/html/slug/img692.gif0100644000056400000620000000054206336074210017161 0ustar pfrauenfstaffGIF89a!,\`*<]ZH:cnߙ2lLu*伤Y ! 
Gf%HibèMxy68]jo\._UB'׷3GAxXwW)ډ)YJHsg(:Q{{h89+[+$Y  6GחV6W5W蘧6(9h$XiiW‰Wff׺Y*Jy[([Z eJ;닊Ik <(],,\Ll)C)bB}LM;_ }ں13^G *s\A ~PHt0P'l\y]AOLn\(O˝8Ō;~ 9A;scalapack-doc-1.5/html/slug/img695.gif0100644000056400000620000000012206336061116017157 0ustar pfrauenfstaffGIF89a !, )қt aɚr+7>U;scalapack-doc-1.5/html/slug/img696.gif0100644000056400000620000000032406336061353017167 0ustar pfrauenfstaffGIF89aS!,S`,91zvݡ\hsK/ts=:m1NAn$Ͳ\~QXp1IΰL{QkT7(CeDRHUF7!%vuT(G Sɚ:G{ ,2w8F6xhW&&gғY6t":Zv*K[k{ ,'*ҁ޶8LvIͮ1YĊu6Rr-1P/+DJjp},w8lZ2Ѳf6882W7Sh88S8gySIiC )C*8ZT[k{ ,~@[2F֤{ dFe(6#'(#yyc)ȩW8H'yFeyVʨge;kS;scalapack-doc-1.5/html/slug/img705.gif0100644000056400000620000000032606336102451017153 0ustar pfrauenfstaffGIF89a[!,[o΢T 'VPD]r6L)uxtV\k<0&ѥAj3br}*Aח Zv]@$ۿ w Wsxcs7f8G) H'泤ɔ&GZZTj{ ,l2蒇`äʝ[y*D;T$qPtܪR[O:5h^ᡔ6.&kyA+b]{(;scalapack-doc-1.5/html/slug/img708.gif0100644000056400000620000000340006336100501017144 0ustar pfrauenfstaffGIF89aC!,Cˍڋ޼H䉦vl L .G SdrlJ(gj8 Fyñ1h νp|7WwbwvõȁȘa5i’ɧ"* RiQɚꈡ(Z' K[k'ILZw|9;LmM Kn3~[lnό(߆sGd̋1\fyBLj/ ω8$AyA`j̎&cv œ? 9Q$A D;a:8eQdn'X* O]3ו?F$ 5Tp/[cg|;m,U ytwUⰋն鴢\OM<8Fh4&#!l9[f~-.߆C;.lSx>,w2X93[oS=lo$ oWE+1JDž*ڼ_d o9&\>}{|kw/|"'}l\=h]g Xq 5`v`0*~j-Jg/HF5Hc3ؘc.PFyIqK>y"IyT^) `yeh٥Mn1fK%šMF g#q)ƜpW`J&vGIIh9 XzWYYVV0XK\%v(-=iG*MWF$GL(?j fj;#딿S<̲i7hrJlE)K%'^I?oj̫FEsVk,Of+~iY,~wHhI{glA%Џ< bvd%pr1)9Ssi9KR+E\6^RM}؍~n=Е#<Ѭ`mg`z] u j]+h.?T 8j_o!G:6G,܊;8₢Ĺ޺mSZ1M7ycႛ2! 8/ /%>Fя*å#?HO.  
jV߽iWO&ܭP5qZ7?`JIE[V=m/(0Lf ,114 6~)$<ד@XBu/,Ta1x2ݙ!l($&!v(& z"|#HdBs%")S=8iC.i"WRS.nS5IwRcYJS:wnUI- N*\Dvӓ]-i%eZl_]" ~yFZxAYj$%=3`6Ypo]BΨLU/D}1&4Ѩ7?9ErS+y*2d.m_m.K~.Qip-E)Ε,e N_bˏ}0Ӊb)Op[3& DWxx"34I<9),SwN;EAQA 9г2h*랪GLyz_}T}1GqЛd]X4nϢ#GkgR5YEd/e>!Ԋh "*P(bX2d˞G,\#=v %Jja8B;scalapack-doc-1.5/html/slug/img709.gif0100644000056400000620000000036506336103740017164 0ustar pfrauenfstaffGIF89aw !,w ̌@Tj5Nue6) 1[hlukމZ!MKlS"e8S%{Ahl gѵ Y&~''X3uWWw'%T&wT6Ѩ y4:F Y(g(;jcf6(YzV\%YLӺ |bh[Mݘ{|hIrM^nN P;scalapack-doc-1.5/html/slug/img70.gif0100644000056400000620000000017706336061754017104 0ustar pfrauenfstaffGIF89a'!,'V\Pd9vUt"_)p/, :(`fv6!SY, nUN ;scalapack-doc-1.5/html/slug/img710.gif0100644000056400000620000000032006336103773017151 0ustar pfrauenfstaffGIF89a^ !,^ @Tjq5;`IE|ff۞hں+kP£g3&g[2Ȩ fSMƸ4}cN^"n:&Qw1է%զDevBVH jv9cEFx5hhaEFCKb{+<,T;scalapack-doc-1.5/html/slug/img711.gif0100644000056400000620000000036006336104116017146 0ustar pfrauenfstaffGIF89au !,u njju^xXb/LDZ{jCX+)wIۭvT6Qe4+,vЕ,2ΠiCKm͸Kx2MƦS5gacgGGUrȗ&!ffD#PtSZUJzx::I*dK' US2(5],m Ϊ˹R;scalapack-doc-1.5/html/slug/img712.gif0100644000056400000620000000614406336076740017170 0ustar pfrauenfstaffGIF89aZ!,Z ڋ޼HJʶLm q ̞ F*;ڮ%"M.,YJd{ n='Xg8xCHiȸSg)'yVIYʙ:údJSh*iP(VƱ [e|k Tms rL[>F9Vv >F1N>pN(`9  ٽ^VN^2CsjH\;d8gԂE"h$2ʜ;ynS:x* 9ETief({b^}jkn@^zl]r_jzu{k*j!}^QR)0_d䘓nFlPnx>[︟Wwm~pӷyHn,ʼXו=AJ:.cbG>WݛpOh%?ýZ*>)CҞpF?[>I3`HoW}zW1mnu<ߤI,a1v ~lhYy1̠t90ExW mSZDK'O1'Eqd (+n[=Bi3r2%Fvwltm7W~P[lKGd%9[<M苮qr.Qʰ (51*$FKgbtP#EU,Q24W"_ٟVC$\$27!:Ў |G"c^kW*)4IR̜&ШVN&$`1'UΰDj8"FǴJFiNz:D_CQ3'B=μ3qy$J'JsVOwg&33Y^OK 5Ѻƨ#T5EԻzXɰMVdUIEA\r[ݪ4t^\F'Kk,yL3syk? 
IĎ]ZZX>M[ػ@=[Q.q\d;͂Vn-J{7U<]Wa6V qMRqgOg\vIK@ֳUX񄛙a+_xFKmLXoUo]{bVyGuo_UJA#~UZ<|Co&~SGUa24m⥤j9xu˄ N>v< gZ1S.=Ѐ}KN_(ASNSI'cOi继ӈ6MpsL ☕;)̾!oQ*&VКڐ%aql#wj1)P .u${fEtUg9r:p_}}ϙ1a2DwڼnMk p0>mm`xf~SRgqjrpYi'>W:qbg:`pb>VaCu.b/bWe1jxtJ,eRc/Tfq4zVl.FvYoԶjwXITbU6zЦj^l.t~l[KU fo脷vi4}@kwvu=J^]LC_Geo{XA^|l{gUI/Ė`1n5|lMHwoMLDEneڦ|+oLZsQfj.È6?yP&nj&pw#x XheVF`UnvX*I DȉZ'b>ptm&X׀wf3Vvׁ&Tlҍx&WR$rw.؈Hq츊X5T 7=;G9(irSGTtEsB5;s0tLtCG]03HnHYv63z8S{'z9www7Gדv@[Bi§ju>iyQ2gWE1v{[Yg9Y^鐕c*gkɖmo q)P;scalapack-doc-1.5/html/slug/img713.gif0100644000056400000620000000612406336100026017150 0ustar pfrauenfstaffGIF89a;Z!,;Z ڋ޼Hfc* lL^) ?;*o eFԥrծ7vܱ-a&l^ Wָ]I{W߳x6hgTxŘIy&h3)YJ3:v )ĹZ*$ۉ k:J+3yh+jB(0WX(Je]ha@-pR:r$~/ Icuɗ,Wv!!9:*򋛾RJDBʸ Ǚ2Oڼ룻/Hz\z0i^)z&Μ DrJ[ FT9NH`GL{ ☏ kq\r6L\tSv=.; jWRc9ဨ \;UXb&x 1fnu]$Qd~03^!bt|!vahW4)dbiJ8-F(iW1v^LcCYG|\x Y\|Q Gr")t-itt(M#&Y7"StRUd_'cAv8V)d}yd:`mU t){M{Y^fkH j $nhuy!bF)"x PJ398ĚA{dfQIkd)͋Cd2%qc> Ĕ6" PC qVeK alMډbUQN W/W P!ZFTCF?@t)goRYRc+\G[=v؞wBjvn wrMw8mJ3Arb,Oiz{1;AÊK׏ַ@3>J9?N9I뗸9/ \Ƿy麬M3 3dݭmYfm. 
^o2mYo$.-5W z~أU/O~VT\e#ޛ·e$J, @BU!N  LB 50yR"zK`\x0ArgٳŌXMq%+ߑw5_:a@B0F4!+U],\BnD.\pfCiFKņa,b #B1{J:vR y>lךıuBToh/w;]F8&qԘ+i#$MP"2I֮DRH FlG lNP3@%`%eC)'a:!?tA ԔƘ=dp$NqT# OҞ*)jJt:'І/YLHtF4;ɂZS\B)t=NsJDC|a+IW\\LQRBTޙ&SoBKJ&l4*Q;eDtRG2yj&yzԃfѤgQ22X3h.;c"su]Sne,ĭ~Ul\;*evkf k٩}vhAK-0YbMRNzl^H[ZVmt42h%nX;K0U.`{]6w]4DZHKԴ-v;ŷߴcH$*0#TȉR!z;rvWj9ŊI?z?Mf JenArh-|"} X}eb!\804cR7~ M\–rENYhWܪG&A5f8Ԏ Ԝ}ni; " 7\0rmpB9O椎:j8swb7ZQΥYۤ* K8mvĦ1Y)gwvNZDuRiS+dE]X$X6XN>>c&y>L9ͻ=G>ö"uPNuEsPlWb!w SRcw8CKBtk)8{YS#gepc'n.u@mUpx 'NGikF3~z`wp`.qpcwhLWVpz%i xW"5!(RV%lF -o/xtsmVhd}w6cRj3Dׅi;XKb(}jKlh؁fknHrmSgW}ֈpΧnD#uGyj?y|恅z$o"X-98M1E&n=Efݒ&\0`aK+)yhn_̴xRXwPMd+Y\gqzYV}3TB_ƋҘ懧+7ŊMFx1rȍƈkXj_գ)as\sgWt󷈪U4!'t/UHk"k.(xWGLgLF6Nel#_zw${7sG{xw]79=G@Ǔ7{'0I9yG9B6yK1Yd}"AI v_ a)cIeir;scalapack-doc-1.5/html/slug/img714.gif0100644000056400000620000000012706336065475017170 0ustar pfrauenfstaffGIF89a !!, !.ҁ.[v|x_&:pڤrpى DC;scalapack-doc-1.5/html/slug/img715.gif0100644000056400000620000000155306336100541017155 0ustar pfrauenfstaffGIF89a!, ڋfob扦S M+׶vο ex!BԪ2 ֆjhZ7 I&l,*&>B='!X󷈨VH7HfIibB A(qJi({ak ;ӤsZ ۗil;  \+I)\ W.t|ܼl,??<ݕ!8e8t(W-ur(b9BMpZBaR,bBrD)c {yKE6 x4MLz&|YZ4L΃\*MP|61bmLd&T5Za,սZ(\3deSpGtfHYƏOHex4LvmN\ι-U*l7eDdE:W}c<彐l΍=eDn)&OO}:uSozwьoSEd,z)ygPG7`-%iTY3/V !^qQء9Ut}X?Oqx"g+b-"yJDd*R vwIDgMeESiNȥbcWm'wYF}5UkZGmϡLJ*MQUr G~MfGe§FvjY)ʔ`*!y%x`y$|rZqHBd* Ef Vt `2LĐ#_؆ s7[]lKXH΋or/;scalapack-doc-1.5/html/slug/img716.gif0100644000056400000620000000222406336100561017154 0ustar pfrauenfstaffGIF89a,!,,ڋ2扦d ʛ7D2$ HK-@.nZ;ZVrG !y皿 qwbH6ERu8T6w6Hק4e8:tYJ9%izy i wjk& Z!:8\k[L+L m6n!mm]lio\XK$KjB[ -ć1#*n&7~@TG Q3VZMΝzSբTRR$Jt8'LEϭ\AD6jӮ Ӱ t4=F7L/,5oױl/4f˰ Z.U(z+仔N%'γ ^㢡^-زlL˱D=;qd~5o_F&80GDmԭNo\oҏgUeb~W[ PKF-WO(UtYb!W:JM ˄݅yA\vime|)nx׉tTLطffm@"SC LєTtd[@CIdXc9Ҹ#đ Xy eze9]xKu`xugpւrd,-tbFX2^c 7U|wֺhFw~jNbiۚC}x49~TKoj_y8 zYixzk zor~{gtɜ{?&7n??l }Go= }ZS⏟ 9Q;scalapack-doc-1.5/html/slug/img717.gif0100644000056400000620000000153606336100602017156 0ustar pfrauenfstaffGIF89a!,Тڋ޼FOH >#ʶ B災 =]M66/;2daUъY#9- dsBo#QL 
K_FFPD'Zt3?]IGf*@H[j4)Q:t(8T jBԙ-´(>+jJk [V !Xú<ݥZIYh ˨Wݧ%!nI%EȮ*&jc\ 醺Azq=r:-3OյLv7${ޯ~)wҔu1Dr9ƖolulA {wxM]2HISyh 5&+-eы'H 4 &h܉^9Y(m8-!8PJQ$Hb6^ 9U7g7>.qFS\XGA"X@m.AQ;scalapack-doc-1.5/html/slug/img718.gif0100644000056400000620000000231006336100622017150 0ustar pfrauenfstaffGIF89a0!,0T޼FWH扦 dž+tD._c<*L %3HbFKVF8" Oeَ׹u ($gqSP7'79GwɣHG G8HYtx'g+"Jj[kʛE*(g|:lRy{ kMQJ >^nZ}-Mm(=o[}_F*cJleIC>(lFe !T Q&$ZJoLLC;N8{bS:|br4:3:odDϕP4%KiNį4X>Օd1g٦S =;@U^r n*N>!Aj|Ƞ4(iКc%H0ƚNWbͶ^VlFt.BWxIzەƌ^w+OS{,ʫ R̎E !Rɱ! hf"zGM6r3AXsZKލ£y[%Hgǽy[7=婯z%e}}:N}l$:/k;p?',4'<:"R[}.a3JyqMǵЎ SYN>uvx۾(8HXhxW;scalapack-doc-1.5/html/slug/img71.gif0100644000056400000620000000073706336075347017112 0ustar pfrauenfstaffGIF89a!,ڋw H扦ZqL:*} ĢL̦ ]Ԫu2eew[S;L|y@n}{Zzy fWVw3HtX2H(T8#3"i)җz)s`zXG,F\J6lː+{C=wMXj۫5-< w~Q+z:@ρd ܐaRЩYc08滔-#׉E)Pz)*c+Cfr V3%4χ›0;E.c?iQ)\]$ˊeIa 0!#AJ]kU mI%5HX a.yu[*d1`,s/H 4:׵X~ ;hlwk^h;scalapack-doc-1.5/html/slug/img720.gif0100644000056400000620000000071006336067763017165 0ustar pfrauenfstaffGIF89a'!,' `ڋ_ͼl1.WԱ/aj2HLVYdbdPrX u kbd! 
˴P(-V<9u/rwfxVefx7hU鐩9Iw RYxyzꆧb{(K JJ$(ʶltl,Z'XEJ:;eBY={ im}9 ش"al.ooDi{rT[db)tm :}ȡɧZB/OkH28Ō;~ 9ɔ+[9͜;{ :ѤK\;scalapack-doc-1.5/html/slug/img721.gif0100644000056400000620000000023406336103644017154 0ustar pfrauenfstaffGIF89aM!,Ms c[wn}"t&jݦ,?uyϳ > b;I[f)mBI8Wq'y<tu5N"6>s6UdW;scalapack-doc-1.5/html/slug/img722.gif0100644000056400000620000000135206336101056017152 0ustar pfrauenfstaffGIF89ao8!,o8ڋH扦Zש Lאg邇!L*SȀ.Ԫuu ^7*N0N= _xxE8d(89ɲ9Riz)PQYzSYzt*Zղ lIa#1"J-G4q*0ccT𱣂5L C&D2̙4kڼ3N'ZD[P {%=98T;I%~Xc~: CSqRViT(.\Z `#ifi49-ql؊|Wҿ(Q̘dOBC}J4.Ҭ0kHբnMһ_w&}vY[O,%LJ]ٵ/>z8)q7/@<|ۨF{șâ} w_F7yF XQlQ P}9LDYV^xJhϋ_hXx&Έz(d'qdaU; f@2BPF sY%oBQD55fӌ&9Є68Ivրga|7hVß.Z!2ܣ>H iqYQ;scalapack-doc-1.5/html/slug/img723.gif0100644000056400000620000000026406336104264017160 0ustar pfrauenfstaffGIF89aJ!,JL<(43:c|q(&F[VM7iߣ0^7x\)\RiEKLgFm-®]ʠDŽT5{&m&Nk3:%%T$TXh(Yiy *:JZjzQ;scalapack-doc-1.5/html/slug/img724.gif0100644000056400000620000000031006336104353017150 0ustar pfrauenfstaffGIF89aJ!!,J!D Q6mȁ䉞WʦO,Ժ ^y3}1C/ț!;qyMPYuz~c- m4{e@%9 ǿo2Vfç%EⳗXw4v(9 *Zjz +;K[k{KR;scalapack-doc-1.5/html/slug/img725.gif0100644000056400000620000000063206336104406017157 0ustar pfrauenfstaffGIF89a"!,"Vfby9&ʶX}4l~?_#BL-ezt.V2/9墂WOu#TM%_*@7&3%(8"HvSwXYIuzyj XY:(7Yٴ+j{K4k7+f\U -] sᅔi(g=dmnM\NE΍EZn, sߞ~7QT <$JC#( Fk r dV짩+ibt^M!hG\Ti_N=:k*RJi0ETtS.5֭\z 6رd˚=6ڵlۺ} 7ܹtڽ7޽|f-;scalapack-doc-1.5/html/slug/img726.gif0100644000056400000620000000030406336104712017154 0ustar pfrauenfstaffGIF89a= !,= t͋iY}bN)JQ tR}gmK1`imUuHʶ[;scalapack-doc-1.5/html/slug/img728.gif0100644000056400000620000000047506336105507017172 0ustar pfrauenfstaffGIF89a!,[ivtlBS(Fd|-L;=; KUo*MdS:œӤeŻi0xUfbV61(GI&hy!IG( tJZښ&&QjzScC|y{k \ZezV : b) }lo~ _~o./ }*b?5|1ĉ+Z1;z2ȑ$K<2J;scalapack-doc-1.5/html/slug/img729.gif0100644000056400000620000000027406336105622017166 0ustar pfrauenfstaffGIF89aD!,DDv͚BUͼu&K&bk/3gZ-ʥ )*0UOR9ژH Q0dֺj]TFtammwsFŅ6r0Wg8VDӹ *:JZjz V;scalapack-doc-1.5/html/slug/img72.gif0100644000056400000620000000046106336075521017077 0ustar 
pfrauenfstaffGIF89aH!,Hڋ\`ؕ扦ʶwL, AL*،J)PV5w5AȪ52g3KMrtiy'GR%swh3X( 𔩈6׳ IV7yU6:phJ9vxy{,ԅ8 Z)L= ku= P]}^VGbAؘ~gzyo!;Q4x;b;L{eĉ81 ;zW;scalapack-doc-1.5/html/slug/img730.gif0100644000056400000620000000017606336105732017161 0ustar pfrauenfstaffGIF89a!,UDv}z%lfD!Fia踚nͿ:rkGUbž5Ij?Lm~N W;scalapack-doc-1.5/html/slug/img731.gif0100644000056400000620000000017306336106025017153 0ustar pfrauenfstaffGIF89a!,RDvˍTrk2i-fD QexzbzMx͝޺Ą@hiPZǃ&?& Y)߶ N6;scalapack-doc-1.5/html/slug/img732.gif0100644000056400000620000000017306336106070017154 0ustar pfrauenfstaffGIF89a!,RDv}z%lfD!Fia踚Lt:;j1yHgadM#7)N;scalapack-doc-1.5/html/slug/img733.gif0100644000056400000620000000017606336106140017156 0ustar pfrauenfstaffGIF89a!,UDv˽:&o"qe >ƈky޸ZX BbՂLui^1Oݶ^{N 3 ;scalapack-doc-1.5/html/slug/img734.gif0100644000056400000620000000164706336101103017155 0ustar pfrauenfstaffGIF89a4!,4ڋpy~[gʶ LfhĢL*#_b757a@j.wz֬FCQ ji1Z7'8HX槐чwg(9IY Øi9JZ6cZFWj:K['xi;|ILIf<|,4}ͺ+l $[m.S6on[^ze(/I؉0a+2SL= '5 Q n-%E)ɍ2bI~&k*?3|X=mƐ} ?*1p,3֭Vu1K7[=ۃÎ`% m) }tڽ7޽|EGoj pg3B+[fZ@JyѤK>FN Gɜ)z1 .s>.8NpWoȫtĸj=O,ݩPv[*5 ۿAR_S_k7%!D ~0 GQU 5I1W\߁!m"܈MÉG"Ae\hDKN7Z7HHCL]&[&y#t4.`\7MIU ov|ɋibjdc>U&; *RG\IBwV^mы̂࠰L#C9뮩q˛?P;scalapack-doc-1.5/html/slug/img736.gif0100644000056400000620000000015006336067177017171 0ustar pfrauenfstaffGIF89a !, ?mk0Ė7znM%bci:SgxQ{P#但̦ JԪj(;scalapack-doc-1.5/html/slug/img737.gif0100644000056400000620000000012106336067256017166 0ustar pfrauenfstaffGIF89a!,( Zza^ɅUvߨ)łV, fz;scalapack-doc-1.5/html/slug/img738.gif0100644000056400000620000000030506336071152017161 0ustar pfrauenfstaffGIF89aL!,La ܜrыEj{H{r[7IrQtFx, m.q:doT3J E)s–<3uU wڣybWH&"DG#gy JzW +;K[k{[[;scalapack-doc-1.5/html/slug/img739.gif0100644000056400000620000007316206336101032017164 0ustar pfrauenfstaffGIF89ag!,gڋ޼H扦* LĢL*̦ J0jg­ yl=. 
:m2(s8Ș'ɈhI (X9JZ)sZ {ֹ{9i{Ӌ[,l;[)?4u3.KLHQ͡)&Υ-&ѩ>CBßG:)if=亳iԯ`iJkO:}^f8(һ݋%05gLll9[5ֱK{AJot؅$]:Z5N][5Pq0^iy7ѧn}Qq{^K-=<ۻvw_?oz1`݁W$`.Êl |xGna3T`Gha*f>bUb6Bh>x'2.EB-:UpdDITd_D^-"G] OV I6eeb8IjYQbDxUttSg'9[e$(C9>Jꛜ8uhdJCin9)ZG`cRW%(Jk|3]6qc5ZkvYQ&&OF%m^4Gێ۞֙}n6pȋon^oK^:p E-? qOLq_qp r"Oq&r*r.<21|،spӚp>&maE =E7]IQ*W;G)Zd\9-}M̜];ؠX\1zxT{Ci*u7_bh(+n<N,mdq2Y埚6]wFZ3v [8tLa ;K^G7[.6|1nJ&^GE;C?Q.^۞F)#vߨ$l\oZN ^@-޻VKD7/d2hZI9BN7=~znh^۝w'h@u\p J,y r9#\D|3UcKByOdFhE_jwD9. V/Gc/ԮWQYBTH R d8ewVHkIPdV;v'͸>UUBb,AE,d mQuZظJDs"%'=H|zٰ ~5ťuYS %_G`MZ;)ڍʬ)gCo;܃LwCx'稽Ezpj~rKM)*v Ž3-|kC7!fxޒ:$k49sLym}ACX}p<mG;Ez%?߼B&+8yLY* MkJ{jWG*ەz_TL~7>[|FXTօ[w`dS|%x=,n'Z v*XG2 u\wzRTJǃ6zuzbVi̵T()ciK(]k9;FGU PfjwU[ %+G98EUEYr54DRL_zFnB[Sn#)%O Ny(o7zeKR?{hHW{Y$xPhuUGZsK2Wj.M3j{7{n'.vT %W'8)lta=uUĆUGj([wVbgSnhRNIXQe>u}8FZhk4kJ\sUhfqeW'u_xdWvCWSt iIXFu"aO旓QUsa >Cwy ?FMhr!uT?䖫v؊9$AهmZddЬISuă?ǭs$jsCjQFq_ͦ| ru}7 $~~DXB)O%tYI@; c;7~]' R8JTה{Jė8Nqi| Rvy^HZKzوMU$ R(?}穨Ժ9SWʢګOٽz:fXu5K\-4/s\u* ew\Z ڽ WԙܨB]Kl( n=,:!s {8z>S~{F頱U[P*yE~,A%u梅 Q;j!X-XTpMrI xCvDZۨ;%J_媗 Yiq|:h 9yR`]CՖ4Ml%w}cHsw鐽#ˆ;l* \?:3gɝ|D<q͘'֦șο9iP:=%[Tq침]H5qe\Q{UУSj8V *ʫGk4%|~*5 5auJm/ۮ ʜR}hs֮,mXE Lz(P۷+݉1cKӈf6S-iGԡ_lHSKC)FGk3ټ-,ZŒV `ZMѸ1ݛEK0ˉݸ3Z#,؁ʩJFaam<~m髂7iU;l^j Xk{Q ʇ˗m^k6Fgk`t~||^`7}m/="30/ 0# e^* "eμ C3Lv6)Z붞vSQpt} _հ_)]a\^&| 5ھGM~w.+ǵZ޼܎u(%sJ(DxČSL=Zjw;x:zMd;&(TUXV8^\!&Xbg;arpWq>pJz6eH5|.\Jܒ /]{@:3$ j߹0l:gix@6!Alˬj:w]Ȇɓ}ׅ:p}- ||09ޭYk0f=? :V;Ϟ?d/BqtV˿B\i w3ֈm!јO=9BJd:qX@8"7"o$F{J1u adguwDiS,4[7vGUD#_ws> _Έe2 HYEaXsmfɘ;vvr~t,Z h^ 134M4-?==?MO*D"UUi74NJ[*qdElsr@twyY/c!G몓Uk͵ץK)ggw=o:J_뷍Z´ Ȥt dJʢs ( Lr%7VHԹّJzS ̊Kc55*пg1UUC6}Z47[N{bFqⴏ^,be6fCEkRZQ qHPn8pP'n \y;ETOdޜuRHf&HN}vnilpm-v 8r y5hH^fSlSV(W{B~2a;/ Z@+6? 
45 ,]L.\ⰺD4$ ȬVe ڢvȏE]a 7 l-,|L2SDƂnIʖ*CRTTS+j|F`#P 2'iζ^=B⒰{0I/f J0jG:463(4,$bb[T<´<UJD"qW1Vv{x,ڗ>y}(|θcڿ忙cدttĤ~ huUde5P18<fdDȳHxXRX+tda`*V)euj Ѕ/!CS(- fʄPRc)&I &Uzb%d[>w YјǿQs]>o|$dIh?%d#1bL yINqh&AiL94)HTltes^9XΒeoKW/toH wt8fN-2\'L_Tf̼ܶtr8ۜT*NZETLA Gi#HzBAhJ93Q]B Ȅ 8hY/9w^(W&1sd")?]3験ީIʖY\&#u̜7[CR@T6]'K=hVYF&TPhJKg:mR-J5'լjř1),IDYEAFjeϵuԔij:2QczxבcB!%8 aluXǽĨ~BîT5߱nm,;5ǞuJN?8+GUh7Ig06yO#X%i4gM[9e¼*4FJZ:5yuF[&kS`{V8x /:` ԧ\0H|ܴYObRՠ {7ߛ51hPJoAyϏ?^ h SNN$ Kd2.nIFRjBʩLοKپ'PfcR˨k˼h+a$.tl- Ftc+.G Eka췜Ra0Jlˆ*,&MJǧ1z 54WPx鑣m71Ġp4!] R72l7 -n#)R_%Q:3+KJ2*}-3 R|vm&5-̂&sf(ݭW&v,3둞A =./¾T[kQ1;=4G n0ea9FD Sx$9OS1TӢȳ^NnN&Θn6p2jsb-.av+s%Ouvnrf-5L0DZ6Sqq[Gp 3O?Y4 5Dj5H^\O_70=mu^pavu6/# Vx.B/2VbI?ևBVhP &CR`K d$&uetm70^h1mϛfL鎟L#U#-74kKq{YSJ˯$SsO=݉f$XORlvnwk!L!5 ÷ 3XU, Nkx4̪40PS~lXqԔl&(kM,9#7R$p.M,6'ISB4pSgV/M3x_ 961Kt:tMxMprؖ"6tъfUh׹ZGox.Xl;b ѐ̾>xFxϠAqj3^o>V.%w[ОK(y5L߭JN1,Jt1dhͰԍ=upH?)QrOX%m2M;u{;};-% WTySl:{3z抲;{lj_m;q;;(;;{;;Og \|ۚܵ#|v'\+.go9WT]ƥ!3rWؙ/'WSR)o9x}d]ZS|ȟ&}Ý ۷˙:u=a]t;5IEcm]ŗlj<9})zI ޳7Š zT~Uggz{yU+WI/.#->[U S+7}YDFPw&cw>0c;]㿖+㛞7[IEs71;HW3#`Sw0w|q+I^U^A1͓E}ٜ7J[ݡIe\u[시M*Oc<3C~V9i8=z{ӱio[{}+4msFAA=A'ؑ8=t?ܨ{>I%Ym7Ÿ_ek SKt ZRp:ͿPI$G*e6ʮWC[+X(T#3"Ns$T!fz: iv-7zKvu]'^~P_"N#`c b_d^gZV'iP%!-ikk+moM0lZ-1/ZpO\7rX9 7st\Lљ:2 x֬`:И9vO& cM"9\AŐ"G,i$ʒli˘2g^id8wi-B-H)MPRX3\լZZTZ{-{1ٴBbU-ʮpˠ]řm .l0Ċ-nȒ'S'!k3˜CS4Ԫr^زgӾU6츾>ǽ|ʻʃ.Mʎc<ɥ-'|#LV}vٗ ~b뇮7>6_K'6 9 =lG=v<(:NB0QUmO!b@Ĭ$*X7XS͏ N5p" DN!'8!-8%:"vd=%* Ht$KI0{yT$PVJ*Rrr(i$3^JN)-9X2vY\5$J) (+)LUkZei(9B}-dH* ;:jMZY"-W:䫙&b0mv-JBwv5h) jkDzB$K힆lMW2$ 2:qT,pbS GG`Y;FC 3 B%c{vX n0,Vh8.-ae n6d & CrT;4^Fs8 z#6@.yPFtX&iCr, GB$ XhD[+H[E e }؎̖EQXD8P+$ a'/FSZ,f$|ShՌpa0EބҌk";ed̂cC0ے%Mrha"Kd{D~agk WypEZQԣ+V")uDSv>+7=ELPP 3nD/~E!D#RLLBgĉ5 ROY͗RlQE\ihdUX:`ǚd=8%5-ˉz,N=åd_l]]X;Gҭe Z3(t9G֚6͸V~vݘ-jgCи7nU`)=D$eëw}B.:2fF6$f p Jw.^!GwP&,rs.$Im͒/!S$eOlѲTA^C/x8n;:)7gJe7(eK2:E׮zz}5Srɧx:HGR- ,b.cA8+"f95oFUv/=UXI:5][{SZW͸^4 _V^݅"ecu~C^zG=O" I~un|>ѱi+>lڌx=@ g6e~ΎoG-NҮRzVڡߕr7r"%}H١CęEmQԁ.-QQhɟ)]jQPi~a۶ B c 2`N !Fb 1YMYu^ ׌\"O]_%eC n!2օ[aV¡Qa2؎q!n`r]_w QbF>a 
ȋ{1MbO]EaD=y"!T !bQۊLb}Z7n!U(VCq->$%aiHł b~U=e Pi\1QT 0>W a,aٕtȒ(#,H;ZR)!2Z(cR}>ϭ'R֠>= MJ4DDZ`᥅)'T} z:i];9ѩ_))3 6!dUOv]4)(R m!C~ F0GbuPR,vb"D@1Kl),%0"lS:a~m%rAa2$nMm8bD? i]%H9aQbEףރfo}d%h-LN5)fͽ%H_urOKC9 >#D(M\0~h\4%3~1\\˱(ZyhA` ciOҖ_*ulXYfNEݏPKj)1яh|ad+jO-`f Z uZ2㪺߬z⇛<୺m[R `.kh+F+ JZZ)A^UyzkkGݛ8UN'* S`!#YlkCAVZRX&RfF߾hESmZ臒([-$4 N!i_fn0\rކ6{%jVQ(n셢ņ|4[VXFOXiW{)*IfmM:z䖮c^$-%Gz".jM=MrS7bM$/_X0pRp-PGﯞp1fgWVU`eW^L/.Ӕ>j ^0ZNdZʺ!jUaꚈ؄.fߩSiR0mUL*Ÿqߡ Âj ##C-rb^^2pQ0~ܱѥr*$C/",*M-rN22./.`/ rq"T\϶Y:c3+#j(g8]pnU&ׇh.r}1~=X\ϭ馀S:w_Re&asSV9F FN5 jFIJuؠpV>?$Ho^V:1VVrB(*O-zzR+c7OkPEssaCbhKd¦[4 U6( ':gq,f6|Da.sμo8Țͫ>uYg|o>'p GO]7fc!oyXws_? WۧdѮ_b -+&7OU5Mm;H痢YiOUqd}V~mYcb#(_y~Oz(ouc7lT/VaijOcdvK3X=ތTHZ'W{''vi*$^w˔$&hGQ<įab6۲:# qx?-:g*P7 CV ̽M\7$4NȄE汢su9&AJa_L\1yW[}_rdF͵/r~x~$,4D;\Sd<{wWSo:sp|ʌZȥr֞2#.ec羉ng8S,o^_wcki@c1F0KA 8&D KKd6N?DL4,DkIp>`8V 9 nď6]W\@&у~)ӛ.;͊r:J-H*Q,nrӞI8Ea$d;CcB>d=(c/!ԉKO֊SEJI-%K?GNPUyY [eYh3+u1J&UovYb`ժth2! ra5(έ?pH&t!>~i$q' jk3:Q<oڄF,5Q.$)j}D%l çɏ8*w͋[U1xQU vn4h PAj-[cڧ41&zᙒn. ȊII'K#XFRBbtaW::e'? Zr 5&M-Yޒw4qa^c2qec'Vd""*F n|,`\E5|沤m;V*n~4Y@&e]Q(:ÒuHtzA)K;OF\jD BKьJ$gYn#5_:/ӑ</ /lS^A2+U7VnF MYˮզ̢Ľu~N !t,$WAŲiOUm5_ΆT5L|dP?hyBTH]2ԃhcB s meg(|}o{@pYB䭾5ܼ ~M[\ ({ ;Ydxw*XXSWXAo˭!GE~AZr) 'x 7U~<@8]pcj0aċ;qrcug_.,Bs)=j,1/OD0c8ZM!L֐ 'nɃ3fnr|?[Zi!]`4K>H,d4;S%RݫRn`FfDW|Vެ>ctN\gK9}늞lwQ^Jm.'k\E2>N?&(<3uYGiN-hQ:.3T' [JbZDm zŜꕂtA[0^ 4@Mr]BEZ6^%@);uצ\)8u'GIImR-6]oRu28RAhiMSMC[:lpXeYg.dײvlc!V67 9Qҝ )lG\;%%W&Q 'w zfyg3G7 w֎ ׃yq_:`, DYK} w?4 x9,ȟqzuk qTQDlY|笄z~hwu+Yk -^Ro}254NZyV6q߽WpJmTk(Y5.SkϷ8Nr;=nrƛ-R5aˮ-w1 HA@r;FRa/@m@ TkA<;qܖ42tAiAcf !@"D"AdY$tžA&$0)|Ba.*A? C ?/|?.t*A"lBؿrW%%"{{:I8DRC86,ʽ$`#3ZHsB3l!D-3ĭ$Sc$%*P|=b}ꡁ@\A1`^[JEHBҊ#Wұ5E|#EX BJ l۠['7,Sÿ*(\cKZ{CFC'%$@Q${D3囷n$u\4cCk88MC=c8Cs4R2;D|;J*sƂdw=CrHD[,X[3GJ®9.;8K,y-4 G*Y$ lD\ʜ[]2l)63j:qTxY}'CX?S1L:oX~=uU,GcIE|h\G[Q ֌CĐwV6ՠLY=W]>սMJ8m;e?tͤI{JǪ5kʖЄ9ÏI"lZ=SIʩVk:2Mʕ:ZxR'=(q:ۻ =QBQaͭ)ŞIX9,V[ϦPCmJ-T|n}esrWk2]v-Y#\=´\uѼ. 
\.m25V!%ܒ_U5ӻҵ-`ӫR/.4fb|5S,T?P ѕvd2ʄ8QˉڄၕM*|G%a!&t´D.b Q Q bTb' DlbdhO&y4T6+t8^/8::Acɔ2F)b"Y@n1fT!9b>ed+\=H̄9dୖdeL!MO)FcAeG0,,IBGNC_NID}SKaeV 9 6K5KePzi3U`l^oFX+nֽ%#ys5UwfZl]kS)'Z^ɥmպX珌y[}C{EaOyY4CZLHmL]gYW͕-WgD|0ЫWa+KVzLU&0éݿ>Z_g۩]O%Gɛ)"}0ƍ9hk!QISweކN+J\#k^? >S>=K< {?0%U7. >x?B?n- ՎO8?A mMvtjJ!㞓Vncn8q>~nn&Fco:BCTNBi}_#,\23Q2Dv(Nd6mݵw$ pEnIHoZJ&3a6sc)PsaaPVυW}M4CMAR O=dUlDI)!G"p"g,> qOga,%+G(ǺBVBbp4SF=Ѻ 2Y8GEY~G?Y4MsSSV *t^<`F(OY;NjTE-7LL=p3^fZ`gUh>T^?F-ܫZu vm P_.nfu߀>`b>ΜֹSt+ooҺiTr@vP,5x6}H )^~gL0̍o fL/]KX`&8m)HϚVIKyWi+bW ~Hʍ^bo .c]x"h_ǐ^.R^I7zE؟*Zd;&w=ևf!)P؝XPr7]ceyZg]d8h֙nfI:b͙b} sP)__`Xl؝Xd *+3i+!Jl. ,,z -R5 s&h>LAnKJIJmRsn #o$,8'T s+vCq$w٧ V^*xjD<SM45.mn)#8-K9;ٶԊΟ(0ᝁevKFVv6"]Xf.,r 촧nF0d49˽/j ô9 ѦgIVAe3L>H2mq ߣFv{b2gpvx3'VAjy<r`̣1yo;.&< B-ZҊJd.EȂ㕰4 zi TW2I<8/Y)gWHć f:zO"uhx2tah4 ٻ G%Lq~/k Jk!Vx 9r? nXjލפPښ),(f4hk8Oƅ0=O v"Q\߉ -J M)"fv0P$L^j ҝvA| 8-VLm=X()Э/_Eb~CҐׂ Lfildz G5 [Dʱ{ݘpz+8K3c<< s훕,*_v&q䚝u̕rGOϵi夑M3:- tһB_[hI}FI/QYf}u;8Ww8vI`NFc(dߐGsTnjJvKkٸ-6G1հfoM\UoDBl!5$%j[uǪhp93ǫ`M'B,ajC_v|[\\ʽJ༖crf13jMڼΧ/Qܝ!4֎:G>6^UεW%?'ϳ /NǸBUVv:}& 8$yln\l=v> W-ͳci26O*w"=oSY'+.tS3լ3Ze+cyiҜ['!d8O͗*=wEGGW/ckO iAc+ wŎJzVOgQ{5zSa zsĤ vtWdt|tuBux|7y'WDu=q}rYNlEkNwENNzKp!uQMdroy0xgIAg\wn'RW|WxyrG;kMw<؀gVDB(Ut7Pw}g3YAq3eVrTP~S9u(Sԅ|l/bv)ec*^t8\'s@W@AYY`Yӆ*}[b:tY[94Q$!nFz6Z*D^*wHq*S[%uG[)Nt&FVL!b $(0텋A؋/( ^?ƌ FaSи/؍.qZy^/hVV{˔bJe4|]%sɵs0!Ґ'َ>]Ag8435i EPfȑi;fwQE$ /m++zW6e4.8yVGIxMcmbɲd!kh19d%l2nR7TBmnllr>nY@(iKD 6"׃WB%So|[6Q1ܖB|KrcG}rqjIw_@GEKOQ]VS^'ZFPIWvvvv}q4tBkd(Bȋ)h*: VV#ovybc!Ct'gEuWV~rщ}G|'I9My@Ir}(:) HtEzMtۄp`wNhYlȅ5őIJ8VYt}6>iY!K%yƐA+ŔBɷ~W jhǩ#ʟzxV(7&(&vak9xkntd@ƒDž3@JșⷜWEI:v*)٤$S[rImoXYD6Z5|X11b\:ʊ)xJ F2ڌ#ƪ)>fct3o7ŠŚFȬ_J. 
Ǻ.Z\̚X-+9i1E՘Jf0ȭ"V:kخchЭ-ƈ S:ʚ.3ioc 뺯+L6vٕvIm>"lDCnanQmZQf28(cA(وxWcBߖhv(d%Z4Gꗮµ[ I'p؃BjXEzΩIpAŁsY_;L(XU_BGidלfPk8zYFAE6ʘGqjĉU2Iף+~ u:{&FX=(4[IpJMk2PXٷ~HW,Ԃ,{g߷ut:~EYeyKM gpm&$֗UtZWG)ԵMGrOfTgg'Y4E9ĕ Ri}j l;D7VVs$W$WQ\Y*^"skoj6hb VZsLTo%[{helSV$8ȴ(KgG0'yqlt:Ȟ!m.gBeAUJ4̦h[2Y>Lv4FIzD*yXU7| y!Lx%|aY}V"Ѳ[Ls=ULEBq dՅ X8Q| m]a dHS U]F* Xҳ}BMYIJ?|v={"w:Uy iZ_hk'|58Ţ,ǟn}Xٝy3B\ҺHָS֪[ΨynULݽlp>@bt>x~by+~[3>2w8?b{}߬t#8cG-.zitBv8꣈>;UHݨl:mxJ'͖>GUZ 5#wɏ0&<͡\ǘ]I$rDI/N ߛ&&Q]6!I,aca:-kg(X~NCn_כU4z>֙*a7x= [ϒy{;yR}'KTH /~^8%e$\ɭ8 -Ӱs8Oktڰ=ljژ7VJ:٭ŻϣE:+$ʎߛosP ou'cط m}W])F߬7yk< N,(Sfy ҬSM]f9FkTiHLY3$hn(k *+Pi/xL;gSD`;~p}A-l+EFrR2Ec%dҨ D3)0β4iQK St#w 6jΕOѶwZtڧPt006P[Xs4T$>7t u !"e;SCDra7p߯~d_BDQ$zf"XɆ+d#s >UfX| p@bAG ƪSR6UkWX^-{vkٶ]mֵܷ{Q5ָ͚z\qxB7qƄ% |9`1`%}z.*s>9aҮYҶ]ܹy87~yrÕ7oztө\{[ٹwu)m'}zͯW}|铭?-~OP|/yisʰ-%(6R3 }O=D,* B? STћʟQ"3/!e) x-/HaWHCB$mKlEFZtǯˣSG{,βDqPĶ h% hl*ͨf431IE :(RRU?D"Z.猑 ;FT\ȼ&r)."&%XB" |jKgA $Le$3 & ̖a\.vlYuț}!}פdU\$ CR;UJ`4KMQ8kŒZvLdU[=RDZlr"epv!ZMFYY$gMx$EYgy!ydmg BNYl͒halXh(~]ظM\GXk|nrKLZn6Ef;D:oӓ$>rp'xJrC;7ކі צWK1}׺uGvi)mc]4 Gd^ΰԩmo=ԣٜRܯ6/ۧo!>x6qr_P8`|q;ʱ?J6>z0ܟAoETҿצa`hJHOK 탠B+1H Y8vmt@=RR*NS-T|.q:ѐvRf$uj@51y`&G\@UGtEs(,E>ЫPdPX5avFP8TwԍFσQ>U;!DD':Zt%%*‡4605H (})MMS4<2in:TMjzT>+w:U^9*kU~]kiZVUhXmiYek\ɞy[j(}`cHX-u5b XƖ(_7cgejVXDZDv<_ۛЊymloSI{< i7@O1l;OٱEI"pp5tS%] ͞r\u ʍjE߄` P[־2t(f ёTK/yfNY=e/˧ ְ]7|$!!&EX@ѡ)6,S<($peq ߸3qc8<  I&hd'?Q\e+_Ye/a\f3;;scalapack-doc-1.5/html/slug/img73.gif0100644000056400000620000000041006336075672017101 0ustar pfrauenfstaffGIF89a'!,'ߌڋ qH扦gLv y~ xDL\ :$SXA NzENX(gv/4jyڍǧws$8Ȉ(9bصupI*JA䘩9:yBV TFYx['8䷗GXXTIx98I IZ`iz8*X[ڹsL+5\,Ђڥ0mG d)9)]m}͍ؾ.l]}B PVyۧ*4UQ!?$u%6Nb<;JbjD3ɍtȄ J7v?ۿBXa .0 =E 2a` Cu5a&TԍG`"^lj2θFnb8<d";RM ZD >yt9W(dV`]/ ŔdqCS^f|3![dv.vyP$De$j'l6FV0cKyf>jVbbT5rW)*@ jXgDAsN>z:eE[*(bt 79kzmv:=u)Skl&&v  t8n&L`n¹˝ʋ oƵK0R;p d~? qOLq_qo r"%&r*r.2ɰLkq:9 i2$49lm.j/sH8,%= 5Wx\Mx3%7vY&~!H>ly7t/5V*~[lkp~"' !Ϩ^Fjk ggZt⨡!69Ѩj떯i ڄ9f(eT;|H;6ޭaV?J{Ͳ6+JfƹPmJ/Y>%zlT3oQ/9WQx򟊦iQdTӰ%R`PujY!h-]n4_@2i5m &8Ѥ+8bn5i p~ ! 
-S4&6 h̄Ġnvmy+8.r^fqx*Ԉ#RqLeП6F)T 3<4`DjΊʡD&*jb4EW˕# 'vM"&&D&pA'9IJS2,6PTre !>oDǾ0jKHG*-$HIB&o eդ+7>_J0H$O1A W=en PyƜ)hlgO)NnnfZ7q&L_mV^o)-LFY)xWeLsJmޮ3z ,a6P1yuA*WFL MN\E-ngܜ';Fb\gHTO$"F=L57-!7I;,zJ*u! gDbiX B c*WantI ٬~Ԫxq]놫ZmM)ՊoUo*jIk ֦sIm=ȨҖRUMkvI4G)[ ;:+iHDG?m,6MWC+AkRxsEAvc{WzhWy-5*3wÁQq%I m#vχ)|:<7iS qml:rv.CpX=btcc 9 KdRE/|eey0{˴A>l4C` O't9{hW+A$zVh*LܦXƵrNW<)BZn3<$:BrI0a ]"1tn 23m-! TJ`wIXzN^Mb}*$dq$ҥzO8noūfe˞Ҥ8{ÇOLTmUJfEOMW-;b[ZKwͪIC\I1,~8ͩb>~lMm?ﶍ}#tZ?gM<л h20̗y;(Wu7:η|>ւܢs;(Hf_K+,Ttlx)G [kn(Ty?uyL5=W{WN72dWGM'Z⁽CQvm7Fv& oF^TD2( W;{xW|X}A i)dx(D˔Vv}cXXj!hM%"hEx{˂)jѦ7GT|HsG7k %z<8OCkK@m#FJyIDuJY=R#twYH?b{GyuFyT[yUk@bxsENGn]\E_uv_ۅp}{V[hFrp'qٔI0``r`tv|M^Xvg^4،5q`Ha#pcwF;X~fhv}5b.f="BcfkmFk$֐Lzq.'<~(Dse`Б,8%&&)/a-YcZe,|[9R7y(M? F5bh~rq ĐQXjZ8fg>IhW"g&vx8jfrxJʌ;&$SS,XPIȥz:x:lt䢄=z(f:'سswCzUVEJib-i[<ը*sʙGbeBHgh`x\%MhzNy|)gp <藑g)Y5s4O8H0?7^_|uy]D`6OWhB9/i@&ѯa9Ud5/ [%{/VBےK.$+ɱ4R#'+#*\v06H/;+<$8jVKj):>۱is޶wyl-Z1{vDQijWyV^9 g,p Ʈspx^rxֶwWr@I"ô&+{^EF7EJ8DIl$Xyo >7@S3n{{~7K JMj8v (}e[Ų%2+DINkq֘XE較W~̫w?Ȼ(;Uz*EW;Jһ|֩Qv%'rsիUEXũJZ~8ʦi^T2ZJQډaV2Y\XPXׅ \+ {xȫyXTɻS&:%60x娋p}NN:̶`."VQn+UlTlY̊,wr H`3ٍJ\@h4⚭YtF|ÞysI(Μ\qhQڌ6͵'q͡A8[L͜ ̎\ҎƗN^/bΘ6,ꎐh,r<0[3 P2_,^Cvgmk-lnp'wʞ ?&G S5^HӺemaq.d> *Ӎ+lv茮Ρ2 ھV}ANYcj:~'1$MGz}/?<߹ 7^H GvGy{e4ݧnުe{re+oJr-$X.x.\}^98gü`;hl=vV8K^Ds[Yɽv Uz\'Zzaֻ/ύQǼ}']_Lyy!Q"LE|ɰ|&QB-4h4n4_Jc\U,k0nTհ}y"NN<pM=u(O9֫s51Jmshoɢjn3ĆV.Ӹhgx=t  _=<ӁE2tY=RPC;._`ǒt6 miY6}1o]8!/Wc*$Ox Ao{`7/!#5tESYzW{?\%ËMy6c0zAYpҳ՚%roy) 21Nh]e1${CJ?"y(cI_z⺗=͌ |^g5 LJX-sp7J-n(I]wi$҈uDX FƑv+V#sIq-HPVġL2TL10kF5i֧PSS#k@Tq.lD;V[p51i͒Ia!i4<~k?c A$q*:u F(BЉ̐*hDQ3\xd?QHuˤ+R)HUb_Gy^*D9iATqDQԃTiS$FUNTUw4Uh]KU넕2dVŚ U1+G ik]3}^ d5` + $c!i?L 2Xb1+"fEuN#BTQ5GYծ,xʢb k+ͺBimo WҖoQ,7msʹfВJD =w̓8DnkV Y:k({Y _%͎ȭ }'boZ V;>h*fĩª]Ox)<ja OE7nQLx5klvnJ*)RFh:1L^ɒ"$hózj0= glll;KM{F 9lQpLUn%9 &^%uCG۟f3{e+{EhS@3SeElG1Qxo*l,&$\V1XM 1-.\.+2pfYv,ӷdF-Үno0KW!{rәب t=gZ3g}N KbW8DtIKzĵ@׋z"ܚWz 6%֝⎤cu k,Ƕ0E\65P=Ӿ n/\/|_ qRmsi"+ʤ]_3/'Ա}]5.RO`ߌE~ظ(>4/\ϓKyjσˌ, MH EHh;Gkah֏mnM/}2k"&Іg'-Bi 
[Binary archive data — not human-readable. This span contains the compressed body of scalapack-doc-1.5/html/faq.html and the raw image data of the tar members scalapack-doc-1.5/html/slug/img742.gif, scalapack-doc-1.5/html/slug/img743.gif, and scalapack-doc-1.5/html/slug/img744.gif.]
- /rk]&:]ܭP$A6WUw2}.az^XU5 `dzhmtZM6)up~'gm2gIp.d_pm*P,rp$p+W\IlxA3Y1H)`2\Kx[mGLquR08=sq^Np.0[8xguFˊ2r%*(pW2qF غ*טG v >y?)Wa/2+ r2xA 8)NbZ }fi rŔgO_$MI;0v ٥$i:=;z4:[*y;t\/c/{lw4=bE;rl)%"~IAi d+;Is8̒dR{SY;Gd4Þ2zCo7je*.O?yF|EW|N# W7 Jh+ L^5#5ݸUPERu5 r}2\ DZC.t)7}v1V4e0Εo;kNz:ocKd jL62Sܢ7y^3]码7{lzܫ`ǡ!Ҷ>>R2޾ږp9;_Sʯ6ZE+\ʰ}mPzgTa7GBU?71(|[ m>aw[?m]Fzq.+,:3RhQ]M}ӸLp3PtdnW tE#W$ɘS%EkkY g/72~k֐ZIJKK34ګj*k1ۃ:KHxr4MS$%aB ܄YE$%]ěMk$c^l匞}]6 *}6PϳFU,G,HKjbGƳ 6vΈ1DvFpCYlαg1@P\ |-[ ,TH}eiPhKw:EHn+3S9}էJ;l5Ѧ5!5 HM.Fҙ>/ вΑ٨b_ꚬW3c(sFP܉^Vl0濪Xqbg%C>w,|(+jՈ.Unȴ˝OcKovvݽ^x4wzC>= ~/@LήAd0B8KB /P: !["=0D _ DWE_1FgnCoFwG4PG OH"D2I%CDqI'3I)횴2K1K/L1'L3tK4r-*l3CSs51N<Գ+;:TӖ@DA2PEC!E+ѫ0?L?ݯSPgj)̵d^a1d}#G౐ 9nNfv味uv~wqyxT xf;jٶ3_?ˮl $,227U̷nI~9]<5# wvpJ=ITkedDWw=A i=sUݝyG3x۝^ɏ/>yza4z׾{'|7go~}@<#pP3i F8 `䞧9> KN]0 b} 6D-됡ޖ?F^'+`EBl^6σa{Du<#D'L(eDY+`!``v$dbmlʛuê)+?r¬HQb{uQd$IX"Å E1<ޑNƏEOS.S+4OX MU(UL+Yx+\^ Sr3*-8KݪW&?i9Mqg.ḥ: y,2l!1$d%l@7GɓDf9׹j _)+֠Zgqρ[rJ<)P L!K-ӢмEFe.{QA 9PYJomkC(ܩQ͕^L릵m1rr0T'GJ >ы<Ɯ87Pa(iȁRp#ɴAפzbvXS=jZK0Πo ]fROlBȵ@1&fT+IJlTi4DT%# *elSjETղ)kҐS E.RKvV)ڬ`h]ܫDx *@:H]u2AˆQF>t5B5edԭ; d*29L0x9썆 选r j c_h4%isf6d4osWJW;I{υ^ef 踺c orVpc' JN32_U`&VuY4o0gg[꘨U2e߉\y\e0H?N khB#RG$)Gs\:FjB[]W<ۮ̠ F(Gϲ~`l-##:Fb~3Cm6f=p{&werݲFnzCwo n7ɕ6y88&6cf^p>7XmHVefDߞش /$]vˉkvύt\8sPJ^^ +V>>weM||͵ݩlkjG2-x s˼l'{ 3Z8bE̬tbD 8aڗ;b)V1W E].K [V-o=W˛ YOx>ظE86X- h _Rb&$I/iB2Nz K879WcI~=/98~Uvs=|-D eŝn_B$^?cDD=SK*ఢ@q,#{w3:2)=S䓟r<<ĢK=k0Úۺ-#39&['I)m·Ը4(UѴEc;6f83[8õ,t3V"::x[?A'A'359=ܣC'I)@GR4E AEYTtE_{YVDZŢ\E^<F-_Z12N*H {841LFF [#F,2T!oܑ ,@&K"*۽P&9000u{" ;D)?#!1"E$F!{D)ƊHkv1g0@P+(:|@L5QII¢S#@SCm*‘@H T&f;.γ8Q_ZChAVɵ3\•~3O#%JpHs†`4 L ˿lKCɪ[63㐦&;<Ra%ҧhb 3+=튯{,ʃL[I"=;M+bԄ⭋ͼg>ͳU*28Zoͼ#!|)1? 
w2rM4P CiT3Ҩ u>]9./5OJ+OD04 ?TN7}\NqkcO;5iѦF8{OJϾ:rDzZQ-zӡAR-)$/ ψ@Q8S:5 [şɉw`M9)MUL)v$>V 2sO S۠,Q1>$|{VjkO$P, MƷ p?,/G3DqÚ;JòR NШ7_!C6iW?#L*l%=p4m‚W4=kX*mɴj=wMddMa[dTUګ['#YksVde\|9LYfAjhfgfi<1cj泥5mmg±޼K}KC=go5p8dR؄R[_P.z8`Tb\\)vc ^:H,v>e%YdRe\V Z#g#^HΰdI1j+~KE$s݊!1idLHRni{N^hM3եHyK:b>i^h^_؀YԍeeCJcFch+%c*LKʼ^iWnF_Zdt}\ 钵a-V]d0lj<1Д {(ȩN,*NW$mql|?.0cܤ<:+fIQ ߜ3~/\I]3r4P/}p&mF_:L `:X`. p<ŽdEehLB*b4𮊾vA.ґVV:iR9DVeNh$9|L#NDmVԶp vq2gBEMngsariӨ4>1ն&Ay}諭bV`ՂsۡbH ?$Q^12MB 2N5ZRTRՆ򱜨hM-cP-jfPl4 ?eCNh.X|W-׶tW\1{Bu3mЇ͜V*X>\rP k-el^^4NIf-/}X8a_r`.\UPUyaO?6>?&ݶ,ФpyE7P=Gg.撧Ȫ}dx|ǟ”?{G7Yܓx5mw _[GyoZ<~٧LqUzUy wmLzzyp'~J־|q pTMCgl־|ľJ,4Sw>˗k6\ZBK^M!{V׌Ϳk=JNY"$#3 DMj, ̺zC;?.~S <ʃAnp?}ҏ=ϕ]ƫn[Su6w`0?޴H)VmuֶN%%`i̵uZe3C %`(dihlHEfWW[s %%k¥PEpXՂ#v2`9k{n[o=ݝ}q>Rg$Y0,Mkz@xtl~7kYhSa QEuFpxEe^zy}V{_āR|&]QÇ͏ڻ@[H7ym{rJWKW+wՠQC_sIu8 [bcBIr7a/8#gvaq>Wh$g'z QvDΥ9\)Չ2{V Sj,Iٳ(NJZhEBleFq-?(ҩ%]3;Kn3V^=(JDj4 ^T n(Ґ ɰМ-+}K;MB\ٗ1d &J8nBpV[rZvh">zx=#{Qfvz=ء;iX"ᄡq&曗b_٤5'zx0Y@_10Ft ]HK`QZ1ƪMHO{` 7@TpƒK6إx{ 2A2_LSؾ!@EYs"@H~4/ !XVTNK|إ9RrVTtAͫb莜[#>:`q\&!2+* 'džGN(;dd:y)YAs|DLyUs 04f,r1;׌V\+wMƾPrnبAͱ<3U ʯ2}٬yZ]nM1S4pu']Z!.30O+a: P1c_#e5WPBrJ5W\1br|-Z=(pOlU |RyskYuP$Qn.̧=`*}U( ?4_l1}ֶ7X|4#E4iKe˃NOmJXQ,FbZaBmS%Ǥ՘{A3I17*t&wښ9O'#q~iN~ғl!Fss쪻 яy/P0qW\{c%OvHUk1ɴWMJj>g|EA9Bv䲹Hqp}~C"hMVmg[{HT*̊+y۲5 îed-`_{ȝ%2K|-n1VW'07OPvrsCqlZC&55u]wE~Á1hVf%f,FQd*S-f/QnF(H87hc9؃ BXCEx;3H؄;@1LHj jߦQir, (jFʄ06kQRf9c؆9xXcuy|4g ^b_]cyK#Fo{"M.X?LWz5y0(I1‰SDxr~cd1r X8HXBaghS{-%Sa<Ǔ@wNuMieŅäeYlDwIقQUP.ls1)Q5 7ɐ4UrQU`w\Iv]W|vguaX'OY1^bI.7VrX8uByTsHUه^WCjz>s7S"u8HPy8HQ/zեk)X Ip.c#HO%z Ո$s/i|ʥ\XiSg)1B~htYg7wFd@{~*tGi/EEk6tUs3M ȠA\eY55X~_Á7ҟbC-:2f:K8Ax2` #`'h쥤$i`A`?ã #Rل^ dJYhciڦ7=rpڥszRxz\}$#|:/AF[ZZxGkOv/pO*3FQ&%GG"#luQt x .v锗s:r-Z_湫vR:u-ly{XGoGR/#ZDŤ֚4ڪ3z~ڮ҉g:ت,z2ɲb+(6R+b2 ;E[)#AVV6Y4 hf(e,ZU#W*h+$LxEe;CȜMhY9vb946!Ja LHFI>y w7U{3A$ZZDFȵq*Rfz b׉-)0F&~ֶJd۳^d(@[K{vgީCDZ$W_ۋy<T>Ը8ӱ(Ԩ6yYsk]{~[jUXi`?7kkJjh0eiimKftIwTR۸gdžy֐>ep8Lה#ʸt .ٚn@,ru;ھ S+ʏI~/Nߙ@hhYLQ= xTdo {Ћ|YjIk6h؛G5dEozJUzH_;3I5ܺ[tZ4OG(UKnUYu҄;4/ejQjQiVU:䴛gaD 9 pb陲zC@Im% XrF U e'DZ| 
ĥE7D5M*g{CO4Law+RpL'S{LXLOlHU7{sEy',}iEGsig|N[Isf^aHX3h= }\'zBv8T){] Zt%=8-\ҵ{g \(뗀$ͻ_$Iݐc$ *l# k*MWҲ ?hך*dm$aK6a!O=*@tλZ +^ ke-*q=QZxmuMd{}:}vB';b־뷙}M<@ؐCKγT͕rh'G`축JTKلN[9K|fBۛӗaJ68MGd͒{0DmFN]MJɽpAM"!2{ۚb[ͺc룤J <2;U7-) o& ǝ<=CdYݪ|]8jqO(lCB$juhk X!@qih4niبTL>5Z̛H޴gF{Hԃ8D 㹓:YL[K˹'8e=cpa\-gN(ͰDΌhgx2F":BzLv\ G oTy=HUf~y(|gɯ9H'kw\> uyߜ\ĭDZ Vq4 iСثћ |8^ T)yߎwxЬ.u`BD}'ifݎ }BtLxǴnwyO}ʐr*ɩќN܀Ev}UNםٍ˭ Yջw[x\{=9 @tM遥E~o#9o蹜=27^.j|Q|my,bA@z-GWX}mLCw|p rД撕]%*o$d6&Zx6 뺧됧х6AI…]L sbjdoo 5~]F]k1iՁ(Q 9}}d~}YimZ ׁSEֻ̙yđdMՕmw YLEI.MNc=f]cySFm `uyp)J#PqqgqEG2 3Rt4RU 56HvG7 !QXy؄"ٹ0;:z`[{XX1}]vBy>;|^__~=p1G6hs>U"+x{6Θ 1`C  Rɒ<?yٓƂj7pZԔnI%˔J9hRhTvU"UlJ_Nj5ۑp媺V\ox7=;c[h2rVmIgz.^}f1(׵-ޮUJÕTҷqmSm:"vݜuvl|Kƚ,l3‡! ޘN[f?K1|Yn3빸FlZww;OÛ-oa)&>.l]-`yV>]-5-L{__/(v˖/ *skև))mr :0m^BV<Px$ɟn>Q7=ts_'Sw_"֣^Feqcs 6kUfOc9TnVb_25)pT3އutO*0A4;Dcl3p'Zб.a 9uS!`MU#dH?'sD|b5*tr0I“@Ů)M:T[u|a$n$P<[⹾³4C?Q5Oz P@OE"qR;M0x lBk@&> 'A-S2 rO"hIaeP.19g* zhNSc@̈́P:ӝHUjTD}a)TTU7iS=*T_xձȭf'Zi[|oCYAwEP^<+$ttf(c\\5w =] \ T΢sR;[^gMjso _sE@| TMpRY<㚸?_ɖIԋӈK\wJ\mZ\11tlrj9Huo\_0xYx_ߎsiҨ |Mp.]L׳'`N"Q[Z%ZgObZGoM$\]_"\4C>Qo' 'V7'aZ9Bi9K"Sd޷խbNb6w s3e˕k_/j>8/YQ&O Fl֢ZG_T&΢yziMz?4INTԥV5hRVzQc]C3 ]"|d3FS6T50>9#;lf β5$mz!ἈpW8w?3pzXj0EzG1^aĵ'=!EB2w-tM _hluG}Ft;o ׏"3 Sx \XSKQWmyVj#o'H}곋!*fR^{dA)x NV{7TA#m.b70\e+Jg.y7=3rl16 5L{Ρ 5~f+6eMyrm>ԉBv$"\c=S)\S]$WlvnՅi/gv||ۍf݉j` *^o̮拳aʋ{:*,*Plo`jazjocܪDܭ)z>(J(p@PJ0 ` L1K.r8/llȶ؇yO,^bPNۀpЀ>0B~DNG T08l.5I` #Nyn>},F() _nyfmQr+p7̂n5ZrZL˧nqɜ -Gvt\Px1&f%.A~\ilK˔{h_1p̠ID ٬1NIn"ذa.!/  O  s\n׆MY<##UǪpJ2$-~-2%7%a&stx' 'U,(dR(I'r)?)Р*RҦ"-Pٺ.nۂ2+ EEJZl ,,sɕ"Hu+/,uD1FFL~q0z.'r'E>N+#I r&]n1(뾏{Ҥ^ 又 'mN43-`6Uj5O0#͜) _jH7X /fmZn:oR$M(*948kR s"7!i~2 U)AmqtIF Ψ̠ `˱rnAqEÎm] F>iXdzԤGɍ-lBNX35 KUBݓ&3J%RH;ԅu;K,/p|H }\MZoH){FT5bŔc8a5pSP cdS_(OA,m($(?ռ^,ڴ#V]og-W[AUudn]խV8nUTe/ea1_~|VkA} h*Q1quT;p5=NPYiC"̣Vٞ7!WUt 0UKK (QurmeV/)U"hs'NGD:RDjxQyxٵy%yJy-mכEW{z{'E}W}ח}}~W~~~W3;scalapack-doc-1.5/html/slug/img745.gif0100644000056400000620000010153706336105442017170 0ustar pfrauenfstaffGIF89a!,ڋ޼H扦j LĢL*̦ J0j­ yl=lY =˃>'8䷗GH8 
uɨhV9J"ZyZk&+X;z[ ,,v;;l Chkr `%] Ë;nzi>y+ۿoo|H`XƁ. Ȗxx4a#i% ԇ~i!& .Ab0(HcH7ڕ15̬N:ĸ !U:@&SH>dVx#o9WpOK֒vCxe5JkMy/qgBXjgixXsJe>gugxNni)>zB jURUjJ6f)j)Xfj%&ImhBv'6AZ)j9RC֫>Ilޙl` .yQ&wn|zzO:8o֫/ p% //= qOLq_qo1 r"&r*r. 3:$S2s;!6$:s$3jlon%;̦BbIKM/:\'VҚGI$Gj˭rnaFFYriPM؅As"?!ޚ&޿UW!i8Utz|+w;8(*:uH.䋕:8~FjJσ}%仺~঳1;jxў {>,vN]ncDhB@%0{_Ug!O}ы_ x (iأv7A3Ez5-56b Ppk d85-)JFCe%TNx95;\Ӗ ^ *p3ժ֥(yPyC5Hh A!ө:cG:ptI 2:2qNΚ#Y$Z>-z[]f(m3^ uk2F$זA .]cAB{2cA1Ox?R<W)&ʤcP%(wCa26هJmn~~(=G7qЋ2rݢjH ЈCdoP>U 5/\68HA(+ NI"’2)㭓8e]]Mv U1%T) 5 )R(JEIp*ҥsċvhhT9#N-5ui5c7c}_ݩZr{[=G>wzojMۗU1"uLBSER*,!5,Ao iRF}4*ԁmm$h7,B +Hڅ $=Ji-BG X2s):E20]@ȲV,\O:=.EZ9T촬j^3r h { mRcCeó|c5D$Xfv]sHTx7'I M0V3>,|(-qEbR/`c2 2qD0Z&/r;}R'cJnu',_*$2Bfuy@Z<.7yRIgY\rγg'y^"G=8&".zt HSR&MA+S.1iQg3!YgxIbIf5v笽č;mk [Vܻj.jpZ̬W 6YvZJ3OBDFV`A5mQ))hbgn&Gp{]Oq_O0w I%oNmc$ )ήweZ2,UH*ND (/A4|6sJtӉNw߽G=niOxԿ\O*ާ{=`nglϾ(W{5|;!Ђ呂I3wG0BjDϹjwY@EI/V-0Vמ3>9нCRyRU~G9_t@붲,7 Ff7 Yrtt_;^^-Xzfk;"O>I[?:p+w  CI$k%4SnmD e(EfBb4N*5 CO3c'tDOstb8CQs8qv!D4IQԁ`b%C4Vow&qƁ,vPFUw }}78El&mÖyGJLT*SX;%Xv/GPU͕IKm$mRktdSCel&KqXcZq(~NPvZDuޕ>]JjUTݔU~OH[f%rx[NՈxpH>ΔXmeWUeEkU`epٗ8=n Ssx:0qlw`NzQ4Xu|3t1(sNz$_lcu<hՆ/Wz(wKjȉQ]DŊ{7|TZdCw)R;O_=r9D Vz5m=VuIpXk?8qFۖ]VuIak%&y`fh| ne.|WK$Yv@yzyUz"I`AyX>'YxM(7^vKXS[4qB-VV(@X !za֋]]ٓ^yEllq 7o4dy2RLYh+vM]BlRGwteŏ*(Ly\HcHX_CSW_fs.H5 a-_T>c` Iqa9b4_8!v6jVfJrY's8IPPws8i5F{y;Vdv#uj%JZwdd ٠ du7[sF *y=wxwd RUwav=c(A:d9/BjJwwKz\7jLZiN{FHʸp3Y7t`ɩ 3 hIj#RYo~=n Ԑq =i &ױƃAZyYD(C~Q2 Jp!oZ# GxSIX&nÐx+7bUhDh5w G]ek ]GV~hhڛbPs JJ#Ɇ Ǚ&p޸>Jק1JřjnjXo)Q*P)aMҔQ[ta[#.Ƈ|xCXKdBk|%­i]vѤtZR՚E;)6y}ijO'9dz>7Yhx)+7R8GP]ZEzbu h]ħ*9TLuS_2ȏe\Xi^k4 '<۱)ֶؓe>eX) ^$YNR%Goi[Uw:#7T&s-8)j9s٢*܈,y9Kc`^ z!(r<'Zd%S'{ e~V&rWh~Jt־2GDҸSѿ6ܩ- 9!}J<#;!%B1[eiOfz5!:*+Byi}r~G5|FELKii! 
87,pB9Mzj G,AF_ \^HjJ{ea~)Gx#bƈw/[# ܃||lhm957kwtY{Q^zY#].yN0q9Q>kFo.;p^,s-tUN v.g:"F7BN"\a jwB_ G㷕=|\=ݭ]:i <8>j^چz׃ɚ4:_[(-ׯثRRȟ FIOʪC)Nߒ_˼nkZF-B#~)l|mNȾߚpòtEΝʲlO0{n*-1֕H]lݜኻAծ8T7޲E1:X̉lCx\ґD7x%ƟE0qQZ8Y-7]]a;~i.- \p h$ .O`:e#317S0MG\_acegiBkgoqwy{}5{thZ m){şׁ[Ӊqٛ7@c^B48bEp1N3c<A9䮍%QFeK/19J7qLN:h1EEiS=T*+NTW暚V&r;eSm[ U0n]X &Rx~2FoahL061"Jqe _0@tiuM+|Epפ=\} 41!aoӫCoơGgV\:܎%g]tKG^M}Ҝ^S_𢡊q~}߿ /BPcp!  g;C[6Ю>4&,D\EDf*a+ECVNi$n.܄G:F 'c1(9o:* ,IM\"+#{9S3L7cůhM5 $2\t)\˕Wj42aO1i쭦ꀓ5M?S+Bw$$aOUC3+T}5&|4N B ƼJ7+0L?we[}U67-ڡ&ݶEp$$ZyZyeu-(Jɉy^Be`U6`wW&D:%UvP9'4n /oՒQ1"ʪlZ[Ef=mUUv62k.JIdo3_xsɐyxX$޵4«ͥܭ Rc8;Y.V~,_P-buSj7kݖ{ݽex[n8PO?O?F\hUz~7sYDXZD'NYQvn.eo|]b:odψXD-&}[IQƇÁ ~}zG&6m>]ި?}c.qg?a[!u9Veq3^ĆtO"R吠-oxrαwu\8?ha~G,iV ۗj?ջmj^vH.mЀZ&= `y $%AQ,ox=toTs 3)r΄dbF}1A3@# H сW.6/d<%Pw+]ʊiȍGcTaΓ[-:[c&=~$R/qy#KsM.5ԃ =L \XqMrR|Rqd5aEt: LFS2(줧ɮegsOF2 =C'"dЕoz@NI M7:ȆUM2Eс"mSlPh"E\S*{ե1)o 2=O+*@HԢ4LUSbT:UʼnT}JUn%XVVxb5+[zS'gI[c֒J9SXgKiЍ5nap9qσ)2W?YTe /ZFVug2_4-6Yϭ,iMVvmmY~.СR;rqz("L6_&nWqh8#uCƎk3\<*ϗf`t^*rV΅޻2(ظv;9c&nӚW(1=+]lp7 )޿R |'Rt}Ki笑Y{>|\/%1_8Eivm,y3,@%rބ%+0K68+:,%)tM&!cjþ5]Jhc0WiϝM`̌_& Z_D>@s^Ӯ8g{"&k"e&ZJ`1{ Ns #)mf˲}/]'4Yl\oh*YdI2${-KDC6L-mPʶm }.-͝BVy+𶷣Ѥk~+~WX@ey-)E M &5|$^+yQsꀆS-G,[^x7ʑSkhvR$~KK\pMȷ7jssyr$ ezD5ϧa/3?Q.UZ2g2!ړb=+u[xgKnyvLe\SR,h}sBuDO;T0%!8YMy>X*Uj>b<]NzdcMqMQ@X":}>:a=' vɇȷH8lImR'd.XBo ߴtF[$쇄oB_FNnzΕSTɈžIvfpd}xl,`|H &Zί)?MJl  FmLk+VXTN06K4yDO*NԒȄ(& tΖm `Gp>P&!(~pi&t@w(52 4dpS ( udI5 $qgg.Z =$-8pGk&,ip,N`0bPs;*&wCF&Ј ҨMPnxQ$hbN MUH =ВiΆȉ1$- 41 Q##1l!sP:N ߢk߂~R?1oGJR(/j%ʣ90qK39@~.;s; 掂;I<[b 6s>R2Rs3@ȬR@~ t??s)ko0O ? 
>âPh/gA5-B@@`W%1@9H)R%+4EC&t@!e4h4rJ4B4󘥢Z.P/N$TPQD0.DkO}0SP/昆zK&_ L}m31U_3#P 8aߊNZNM ou^U{PU/18V \j"qV} *)\.1ZuN]UbZUE/RŻn LNuI`10 h/7mpr~laO+x>cԮK.ӲbM8&,)z@>lHΰp( bBno'BQIDʹk, Cg'K m}(l l϶ F IR!/b9QC6T6pp m7!]&,LuU "˶a6604șo%UD RF(m7W wRӂJQ b;1*#cu`DrmT"f.sPX[Ʉ+UMVrA6{Fl0p5j[;+^#Qх6P7i-mw fkm-1u7O6 ~篁KqlWþ797n5Wя0kiIP32KWvV!uw90xi0xf1;wy)y%Igs_lTR ʬ|vSauqڶbK{Lu㴻'\@md;:j;ec}[;;60-M)D$MM2NRYN‰Ĩ S{˳}KYɛLMN"M' ,ng,x?i_-TEu5JCsxuW֫B_G:4QQShUģ$e,\%.r͋ IS^uU^M`}||ֺc)$X!eW6tq&Kdk )A*MB1{<톕wVmӼI¼9W˳|c; i?fe2K)uOِ0ao$/6USpr}:sد[9=AI/Og l 3KRupL|s0¸u$/A׈)k=GϺrk+X%1WIuTX(xm٭5Tg7yw#KϲV\~3-'pItϷׅ}+IЉz'[>lw-P{j}O(5kiA""Ә?U=e}D+[Tװ8Yڗ'\ڿih.7y?)8"W0 ӕyqA}1 #Ax! ]+4[ЋmRv)};R9:>M7\G%r|c9x,я{ړAMbI=X6+cʹRVFO✀gϘs$2wYT>C)O(y?yNyܯl+2X_^rcuZEU1s98}Tt%ܦ埭_.?rj/Z?#Y'+ۺ3];?0؋ c\2'4:#J֑r/ {:6қuX\;)I^ aÞb c%&e&''")i*ƣk_-J&hM.酫 +mrs ]+5B4]G65ydy6zht9N"zn|?g)0h(z ҧ!;}[AO]5sah&{YH&ɩXߡ/93Ij'c aڴS?= s';+-Re*ɫ9j+ՒX"-)ʬpV-۷rr(a}*Һ5NZfGfeX'RxDϝܤOQ*aR33LԤR+ب7h2; 4ȁ;v!ł9TM\o]F\$~7~wUr{p˓)^߻lmaEl_4v`aZsSJ\]=]Rf:QSo >Zg`x*^/SIβ5; @x FLm?n dw.ig|3 V'a65G|_R/k9J[~{ieЋ߽j+ht!l.oX[Bt;<b(&qB&Ɏ5 wD}|!1'/4.21al/9ηz*Ԡ#-2ᡲ8юPcDFA:KG}}%4 ]I;3n FAˁ]c*<½҈[PyH /, En!4!b?dZ}^<$]`h 8'gsDtmwWKH3;I6[#TgTKăqnD>SLʑTU*'j:z#jvq0BC")F5T NZ҄]cT-eMk: bUzF+T_S*U@u*V0V\U VxufEX/PVҕ"(? 
Y: cEyM -l]ѴBA_Y\9Q(,'YiEh^Ab\lUbءxb$W:3F]P s ?k>ԖUl۩M18$wY YʔهXR,.R?̼,^+lkwvךr˫XmCQϩ̉lvx"Vh򱴤h'"b^׆ߝ6k$2`l6օr:"Wp M聅NnXrmKb:U{9 2Qĥ|3&4~>튂WҊҦ1faxM/G^K,dꥄ^X e>9k)3ż V]:p͵c|7cXch4#3`67 tBkoD vh8w>6]5T[;i|kGM4yP \5[9X.z Ws'3L\r#F'[X&[MyԲ1/bksy;; $2 ςR[Y 1=qBDm@iIuۃx}1b@%3B9gkQ> s8m?>Ҏ}Рp?o^?GU NW`& Ebu>a_8^R`YMATqYme 5Ֆ I J _X -ˠyL ߔMhH)ǯJjIJA u ]ԭ^Q ]b= Z  <1Y ~`)\+p9PVazm ~eu܋{yΐ &b-OZ`ؼMa"aن "SJ5bC%^^׌'~"i9'z ߉Se޽bIٌ#v˙) !]3$.3u^Y%Ll2I8rڮecEْZ4 qIT^Ủ $?nAIWu g:1Vy IJ食EΡqO1eݭ!-m.U[6-u#Xɩd=Z@Rh]=AZE1P>\_e]M5*JSf[FJC Z59z,e"-鐰ZX$πfBb[5tLa5r E̾hU eћ(Z">N\u\dZFE\ 1k1dfeIiA^QyJX6Lcbzq]L&5EPH6eԗ!;&6vWivPqu+Ptrf^V=KwQR&eh4ΠIyS]~RO gqL@W͍'3QcwV4[jǁb]jJKʥ~fЉe4bS`j5T Ed;#]4fv%ff&:@1Ii;Yfaq^C*2呞A)NA] 9ԓ YNi}n)7)v.3Rbژ2"Da 6FĔA b)H8)4JN*yƧ^nʨc"W*Kj*_jમJ _Um+q +Aj>j>+^X@(~+Uk z6ʝ7+Y9+VK\L6+Ekr&x+*Nz'+,+\,,>SlG2l-gAN S wn,)Jlʦ,6`ˢެBll,],B+r-[+Ҷ.i E)0 y*iU'棒ĦYYoO#e߳Ppd9 Ӥi~8ښPEZ<am2Y ݃RT~[Ypr"rK-T^^~gޚ`N0$ %DyY]Z%uVƚh=nP^hIOrrR}>&m\. ĝ* odg 癟:gV'P i?[qa\b#Lš&)Q-fq5s9AQZ4)(0zBj!tȁsknDw R,J?LsfK: f9d1xTΑB2$V%j",jT+U"K" Uߦ <5u/MWcuŌ"p;bH ;V%2ٱrMn3*W8W:B`_*cϮʺlcoecvڊfi[gvf6k6lv6m۔ndvʮB}>S^ٚtjwۚv4B)ݎݎvvjS i$0ւo3Mw6@v/šL4AgJ-o^[X7"&r.z'x` }[-\v.{9n*PhL#G.{jJxwrp ~ˉ~-&>_x?z7,#䑃w[\㶪0f Vǵn5QR9,b4h9"s-RCfzdVȵW6p2> &c^0| %Ws'\#`մYChpôn 'gVNv <:pe:e'FΙ"+13q:29pR1" Z>wLҧE4wOɱ.S;DK2Og$Hg_gX{hzrR}3"VxlE2ٝ/ C7Q[ܕ$CޕJf{§ZC2-;%Rf-\+{#yc{+Qd*ǿhj|fJE s{k$| K?+n99 XrJ4Qo/ifH" [qrtܻ(s<đfDG"pdd!wNwM&ط6{gUrO3U'fgN{< ?{C!7Xۑ~o;-5(Na<b~*2lt*ϨRk؜;HYW1_w㥓*|,Rw\WF/mk^[)\oiVRH 37bLZ7^ MZWoޏB ?,Sue[cyo|?u;xD&KCjrߔzf7Em=OZ^kۆsZy~? 
#D0fiSȦ6&i.7-2fwFO{7Y#o˳f gn@m0 hYvʔ t C#C{N"B=tLj_H$VP'D+jˡ  QˠB=*qulTDg=VeVͪSx~]{(ٙ"ƮÌ a!ڗb.c\9GE3M,QVn[hK)Wj sӴ_{v9C ն#d`K9vrg9q|Я_N>u~x<D, $ˏ4ˮ@C>6 B?c@I[/B o|pBb0 GB$}Q3a"/!]rAl9R/m1= NCť в3)̍$R|3G3D =QEq|;˼N>#SW291%37ksCH462UDuAL49@MUS5V9;U-Ut?p,54e:C5\+Je _Uq}8Tw;v [=wZ;YF;Wyl]Kä2K -.*,ό b,)찐%̉=O=Ii49df~Աg+ y.8~`FՔijk܌J͈@1QꭽVb*HԍCݏ`Pv{Ʊ1jd6ſy 㗮qI:<_br/Ǽ3/r?COtOes)Zu~]czl(|}U7*xq%վ:,L4rfELTfyywޡנ~ݰFo o}||'?Я~L`41 0+K \0eZŌv%P tN8HH!@Fg Cip'2A.JYh 4OT3<̚ND"Ye[Oќ)f9sbt 8̹Ox%R;^!Ĭigrg<{I:ԤT jQQItoLWt,9c kd2L Q*>G2cC+ICl}pZYGŠ4>pR-m[j `'o'ފ)0qv\2wxFrܽRW=Qt۽vwn+;^ₗ;ozi^{KO!&[./yIoQk] zAو oV^o >y]k]X>al%&n/ ekq3Bg-aA e,4JcsBe$ci2\e1@˰D{.j9IOU]Le /Z(Knӛ}ыd#-_fs}riEԣu Vs*"4H}IF2ӏ?e7Y~VUɻJB51jLSZuTi$(eme~9o͸֣8elG/{|O$N25[9j.[[,蟃NESi]f܂ )m -(Y]E=mn|F5$e;d7{|6Ne/x%"i]a qdJ֎h\_(uts D=2u%Ԭ[W0s<δjrV6+,auNfZ>ˈ}O,ljQЮdi7j/% y:F ( tma/?'^K<䧛yswa(yǮIz:=\/]Ͻrzxoyo:Bo?|'׬ Iȑ&^esʀ=M|Nn?fsf#o%,fE,`ت"\翪i=k?F+Rp6b:ST9g 4rR? 43۶">x+A %n#eĶ#8(5D!LIAL>s15zD?!"T5(:C3lla$5ڴW$:;jĎ[; x*,S:K:"n0yF̒Y9āFL:?d4+ F@  kQ7Mń G?ѪS-Ȧ!x,3|A7)ț|䛠{DQE ԖIC tHqKGN|<*5{Tl{p8}_w%Tra+Hd0:!ʵ 1?YitȞJy0cLG Ei7\Z{C6uJ#O,\ L.|˹t~4H <1wF#Aq4:!LI{V:M骒dڬܒ ,7_ª˄"mZ&Ao';A7IC!$aвgB$7M? 
)=1Tܵ5._ ,OE<=utT'T |B3 <>8k+%P%Top=S=AM-ՒK|IA{U+t%]D{|X/KWSDQ SZC[V*mL;U UJcRoG_TCxMy 8lеyջI ._i]r  ]T` <&a`9f>/vaUΚR~a #P5}ZNk¬9>]q}i 1&/2`Dɋ 3t&2>pz)2&22 ZqFtĈ"!fFh) QXw컓=rj}5@7CͳA&SIs(8"4z5fh;jc!֩Y վgU:>Z␌Ż™KQnMk K15,Lv@twZdHK'dE@HIJe`Je J@if3e³bP˭G"%kpiSQX*_ pMO\ 59Rs_2ȯnJw6z|i^SZ]LHI.=dKUI9ûָʾI+Z3`aZo,Z;UK*97[vu<kg&J6nUd3?FsҎjmjiy JD؏^ V%9 yWLfx3p{Nc֦A_ˣ͸3P4K}^yഫ/4&N_;6TblZ%tKJ8~:2YH2yʗpmym2fW[q,{!9:, 1+J&-gZC}CҴ#Oˬ3Glqf C$C@ϳ*, ;#DG;b4=?c!;6p.:F vRe2.uy]SN%uۉ[]Qq h*D-7Vew*NvzighgRjv<`k5vN[=\vwzs_ohDie bJuboFF(KPe<&/U$53o 5c8w|?k)Rxehnʳq&]l55x{tf I&|Y>_)=z,[ּd׿pLy^w?}[՟rw;)&J~6ȟM&Zʰ}T ~~,~ѕPWY Yԯ*i':N J&vȫܔXetQ=WWlp,smx|.#2SHcәz,SEkQ+la -L nmـrcuk|cE@>p5yExx(DMwaF_|HvC2̺ѿȯѢ$Z8Q=:G `& VH?ų͊ 3tñɀ"O\S/X:Jˆ eܙsϟ3hJԒТH詴ʣNJJjGXOBui6(Mu,ٳ7Ѫ9ޠpynK7ܺkjxojR ٶi6&̮^AZ Fw%͘F'gb%{갭o0]~ yuqN zi,axʭ3ڒ}yn{_ww>Rk ЛAtQ@_[$Gq)H}WU0|Ѧ{($[Wpag Rw!_."煅;yvIԃ18{@"c%q`Yx8#AL>'Bz2s} dFWff9kP23fg:٣VY0b lY&B桳hnk!gU9(@*!iܖxFyўC28.R&M}W8SjC g-C=QV:kbCdHjV~*jQ+z&r\MX__e=2ˤ+ 6QnN(s{ 'fƩ"򨠀n2'4%Q4zLhF?jGC0^㊔4D?*~mPc2%`MAΈQB,xN҉A.TIZ67Lg[~5x[w_.5A)V dauJ cPe/ƊêVCph3Xr:Y{= ®q >%^-V1Xj9n﬋oPEWkMdvF$Rx|JcUEv똤] 錅 xZQf?[+c&sdϢ]D &B%K(˧PsGt_.Jԧ>MkV{u cM^,51b{։X ـLZҪJ˾v];i(|>M=}̙;4г}T.qwN!UFF4d7Dz\IüʚлQ8zz{ǣ]U j2ŞIqWv&}ێYP=oT H*խ~3 K>5v$ ̹l dn55*svY1_n!O/NZ+x8. }G[s+[wT"~A '{c;n,V9}eW*,lwVr9)to{Ψ&&_BahV8=Dn|ys3RYsz H}^Z%?3s=h&Yc1ns~1=U~0W1v4^b9c'S`7Y#ZHc;v }'aw}q* 'b3TET-`HW7]S9x%XepTSu@]Ye=3%FtAkW|y3t41X:HE[khW,c_Qg4zx<7vVh0a{y-D(w;O1nccX\Vnb*mv{Lyz0yFg9y"hhD;Ww:w!+MPwnAwDnmt#v R$*6J ˜cd)*鎉iny )ٙvə9^G7bylniiymi b51Y xC]U lnevəɗ9iӦiloCf0{g'b446fee%bWB#es8(ʈ|7_x%L)~?.{Xp[I b$rLiKxiV{]Sz{b xxUȣWg޷AZkIF _QQj{z^ȅZ 6UbW֯ ;'>*{uVR\j{? 
>:w<ӷq+iUʝĺUWP>yFn:DѶ:h:.hKDtnvg ⬕z7*Hhijɩ}g<|ٚ~BK:H̬zI|LkOĺP,V?|ZL%[ܪTk^5m[C2YT"ph;%Ǒ75DmJѺ*rw)RKY&Z|Ct WxIwg,r3 C7u9>ܭKa򪇆٦+8Ywlt;RѦ `'v*3@t-.t )p̠|u˔ҎMz%|7dǵ7@Nǹk";wfzg{Ӫ6n{})kEƄq5M gU9ɒ='{?|@9e 7˖+P+}H# # q~G&8-B;[>?[^[}_{(W%ȟƠgwY4΋Y [-Hʒe&֩Ȥaw}]6dJ@L} +s88Ӊ\XݽΥ:K8emZyT:YS-SB0z AGTx3X> 7 wE ]Ƀʵ 8qBMig[|c]|m '0eYKɜ;ͺSׯ}+*Ѹ杦2L~ep;:׾ga{LSF"uXaʓ̠HKX5Wt /I e3ya:MI}jkBTO rlHh:8؞_ۦKI59d;F9xK5AX%aѝmoX*`v;^1GxCWR2E9' '3>9(<Чi\# N~Jݒ$ٟ½ӺhTǎs$K.ŒjF!4$/\ʲ氕ŏٺa]y 2e給Cj>qζ},xמ~)QLV &d.i~d쮨^>KrE̎)aP(X!I#z.Ur<+$m|Oa`B)m!a; VP#ȹc`_fz]K~D O;sƗ7{ uT_O3#\k¥ -1 9E|m;4CU\qE Yl>E^Ejmg"K }LԺK()DRJ,m%6.kȾ:s1/0|s-\o̼$lMAj=P 4:ϡHHʈv%HxTRfZtE됳PEiTa6i0+X]E ""j\`b4]_T|(%$69SL}DA\ W4 t!W]:lv坓y2ץ/zE`i-*^n%m]=dC.@C=yܚK=լfĢQ,9ӫц*ؚIYQ_ި-3'U\g chkK} _RFۑ4.qH|nΑuPYKx,$rAe%5wSڹ@ OaZbk/~O_!M1LvV2[}kȾ@LhK\@ MVaX?M/uS2Bӄ:ij3e IN:@1{'z LHY'=$@C Qv=Yg0MT&PɭS; }萄K,5j9/i8AUQ!(wG^}$EaJ$e+E(XaWUg扽MD;_AV?)$b Lj..d Q 6w%њf]IJ4u\YÈ%JkR)-<=.0"k x`c0閱:'<Ӱ 鶠(^tvyڿw-5;Le[ֺ&i R!YXAPSi< VJ%a ٣4axxx]`~ɔotH2 \P({qcd>Yd IEd%?CL,")rL4s}|<杦-FnkGygz`G^tV(K1cnV,GhΥrF)1wnk'MlkpU٘]7-S)*'o{% C}/o dZ%|^s,wm&Ắuox`ՠ1I7F%ѱ6sJZ/M;$$ÔģiG-bNL=f)qg=/u[nE/67M[])maM11| AhLo3(Whnits S z(׬Z2eHݪ )wL7'x \AoVPleG~"2gFM+3m:uH4sl =wisuzkyګ4y->2szG^\m]:ZւE!oЋctcN@.<0Gqw*JwU^x>c]Îx J%ւU6%>9cg=lkʇ\{>E Uїc B8 aTFޓlȹ'[l 4HYd `ɣbXc⧵zN<"M'˫%n,_"؀ "t@*B\$*+$ ̘L -K N܇/6Ž: nKr6n¯C+Ȣ -#>0e퇪慮nccYC5:̬VNG/bqX;Xf-jJqng3 $%[fsGndt[{ʂ1b1klqVQѢؠ y X tajQav}8JfPQ0V(91 [,D+"rdFWRڂ ")2\>`+#jgڲ $gQ$ nŬ.4P"=m%s%%`*&0'N+ +>m(Q6O'"H(RV^ !+Xr^&n,K-,r%߲]R.)..2]//033$ ?Ri2g 0ϚJ2g3h*1$Q4M(0H4 hI >-XmzSF+"99f='q~ qzL@+7I9yFRܴ3nSZ-=s9ol6R({G4ۏyvs mXۑ>@ۓ,H9.0s-|ɑSBgC⬏B?/{R< ’ڰ G3?ԈfPg ެz4Qr'H+8Tl)MʁMZ -Hn/ fӟA剓(Ϙ8j(40#O(^#u viZh'ʈOt.󜼴K1Dgcޖ 8TrQwg.I<M-5FmUs)Of&\՝.6ն4L'CV PS U6 BnYZuCȒFP0ic"'iN9 U<MP~({2-/]L?[κJ@Of4%D)+=c5ۢG$/jբֵޔz*@6_$N$4$SX6Y{H6EuDL`VT0-#CU Zu+6rPuhJ DcW52Cd[ժO&Qu';6l:@g1IQyVMW|S&5j0#2N6U:9? 
gt8 VimH k'O#B f$D mߨ6RM4 JkosbT@ spQK 7p:0^K% cXHz5.hͪ)x샂W, 4 |3MU>:ΆižSO[/:޵8:DŽ&9qIT06=SCwM20 P K8XBiθLzGQq}90K9Z#iDUumMO|9mixH+0Ț+Ysmky0lc*M5؊5tk3FfYtx؄ y8ghh5ٗuGW@UŜD:䯫0w,sڌYU$ZyPeiaYIUMOZ gʞG+Jгf4ΦQDwk*"@Un[?QѨ 5/cCڨ4'jy9PON tE1KSϏ8Y۰cUW [F)[v6dZ%oJ.Yj!uUn;Ma/Y;34Qt0j0pzE0&>WrC <%KtKru;F!:. 駘ˊtw)÷if;{[yiwrUyUtmx:"A `qXLې}DuDFRUT\+a\äSRzVd'qfO3c|HweDa_!IG}a*N]"1\}#+,ވz8Iwň7Hd2X[mdBIAYNAIDFne .^ٕV>mK? KQjHHn$bWc*5geo %`%ZO9'n| Ji<jY'ݙ.L j2=􇥦ۨ"ըdh=ƌP+vPZ'u.k_MnZPEeXJ~xB2agx n. ofދ/Kpb;p Sup / qMIGq_qoq rLr&[!r. s2L)sqs5 : A=ρ:LN4{H %ISYjSWX ܵi*" i:)a!;t`jv˪]'Y~3F8ܡ ܼد1M8&w/t9;5`])vDaaq~Uq]8Y9(t/ϣ|yλrOO e߻hMzZٍ{د6fOrfRvkkD5)- o؝@N(RX d$ QyVƯ(ek|wjYJ`t&lin Q _~`gZKMs7>(}J<5pvŋUT f2 a5L(ȑNSC!͎m FoCW2]i#ĽfųD"k +."Y"GyXa|cT:iFscؕ*XgFʱ_쥪Vɽ%bnu4ɵYF "̡!Bq#pi.ktU,퉦SΊ-r(CZd&j:rXs,F H`/=\곕?Btyl(O23;͉l#CF]B*F퍔)%g+[UL4~;hTddΟĖzC!w1d`WG7Jk՝Tp*.[ςVtKa9GJk "T:"mPEAo @Jץm>uGvb_%'O$a'HT4V֤J!^jgM}ڨ!E gH)&'V}xB&UЪyˆԿY8JԚԼ0A+ %H-d5YKkpRu{ }ir7>s̢1noǑ)0(%i{=mSGXI'c\"ȅ)[uhrFxSӔ ?}8BC {W+ k(%W(dرx֖8qi2s#^烗y YILﱘ 蘑%.iPYzHeLrAc{5\y?^ 8`Cc6Dacg9ƙUaPe\emPv=;ʉ3фY7#F(C2}~0Vla҈Y +C(j$zEVuFf7|!(U䲝8"7oh& JVbB(!XeinY=!םNC9Y&4*]FV DW9~rx93kΉ3x**jI4:^HocHyyȇ`:$P$i&q"(x`CWBpxCvi 5}A] iEjXq0u'7ĥ1zEBJ]RXdd}X&|Jluyb]fuԈYud¦eD~-JtdoWN:LNaIyJ~j|:p@ 4$iGSY([Ȥ j`J'~G\%pxmѹb֢i,x(V?9%Q)Hڂ TFo/̗ĺ.K:&wn5ٶ]שDckgEY+S.uWwr5J9ɀgGRVi%dгZtuZVFG{uU[5[8x%DKg˸QD\ ' Q+ea7SY8ʓI+eG7yO&]s*E,{7p5 ˆg\`tĆۆ`eh)UF?ǿ@7u"Xa$C6t)&aIֺ6zI/-|eҋd:~:/)ٟ(bCYxĆr$aŚYr^\Iggk\te6dCf|rFfh #䩝fsAwq Bq鲐\!+,48~F&. ev6mל#sv"hJ2} V|)o_l!7DnjP(jN[kb8̲ oia>&K筯\?p!͔<;=t`qusQsqp yiUefYλFHD|s̡GTuNAV\}ldh ]jX+3wtzD dDܾł{/\Kjӳ,$ &]Ӷw}u\ုjOl[L]Ȼ[=Y,qv*^WȑgMiGZ M+fǽXFrQyUT'YFujbD'ij͉]ٻ4䓹+B[jׁ( }+TY25wL*m"LϹۯfU*;ٶ R-~cݪ\jx ؎P]#Ε:"hmMn~1ԇTyN}3i_KE|څ?ۻ*ָ b ]pwAGV5;MMSݡhz sM-c[kx& 8N)vMJUuMBkT;6qqoh; ;oݙ*ɫQ<.Aܗ+gb.dİaOlC܊Xt%}މzt\.Θ.~y)鎾x q^a٫͛3.>) ' INl#ⱼp;`xON넎hnW~n\W}غzrЮd-] P.8)u} RNiɑޓwZ˓V8kx{ia=_> :/. 
쓼?<)-.2?0o064:/<ϒ>o{$QnY.{-"w.rJdz[ߞmQFWY3<)Z8Fב% 6In^q6`x+.1_&AD4$V}`\fZWIxvr DҀvpSgtK5oMYrn_ɠSrM [oT7*W R Wߢl#gRj])$JZ!MWBoUd w]YoZ>#_`Ҩm;3T(;axgoTy+@\V<Ǎx.]4J; iv-yyJKʐ˘vZini|9ꏧ~Ze y-a6i &>eEQ6Ld^n'MP";q|'?.JY$5o4)SF/+Yah{Y=YI|{ZJtTWCJǡ|%Qd~=m8]Js 9 عemNŐj2١T҉A^M#~nzcTQLJk}&(JVbꮲ̔,v) o+PъgEv1+`G%pRS Z.Vz8c"MզYj%j鷷b ]hgۢa|-V'-q;3Q-X3&7dǥKmβ{]yW奚yA^nu{e _W{_wWߌ Vc~ |5& NM&/V%C01 gX >\&Z s-֯]'l cgO. ;6rMu|dӏXU/c&Q)je1 4cMcJU%E"2e8NG2vLF7ϙK+s)YgE|^v$ Q'U\+gL TFQ654mj ]>3jLzzj4q [Ù։sl{Ѧ v-leGw`kGl3ƟLmkov4-`l5Ϭ u ӗ  w3 kC5SF.8ρnw޿]nX85V '7l tf;)Q¸>79%J:N^jykSp6F: E_٣w*)tqLnW0ZlFi[p,(rhXJ\0=㚬5n[/^b0ʹ&Rτ ؕm2wC JVA^6Y87p ford&@SyGѦz3*nSnn9RG4q'˷.ى}&AJIHȅz\a~!/׌Ǟ*M&d&kHͩqbRQn䦄LsʫkA p2⏰dHH| jΛXJ-a|$406dįF,L,r.i6i*0xЏI+ 00l&(V+L*jo/v0Ѳ0xJcCJ0)*}Γp> )az",/4sM银 Vd`VOG GFGXC>0|v˜O6RQ֣VcЃf$ '̪paCPYnwN2G PCd*Q!jƨ=j|x P0 {)1NJzQ=!gG#5r§P jFPdLf \eoNЁ!{wjT2a<,k'~v+*+o &PʯR푺nldKFMo żKj0aզn`s3"8x n(2k(2l2 l1Hs4M h5ebm5y7w3uK67u38[7 88#8L999q :s:::78cMפ3o)u uʭ<$'ZOm’(a +;=>!Ũ}MuVGn{:b 8DdJB=0P11, GO44W,Aqb+7O MJ U/ R[Q*=>qrg^P$ mOG5TΪa]P;䯲,VF* #nЫ-/)n,fp++5&@)o42Z2JP_̪uL]1#o2^&4#^}.ggd;_$i`2Z W\01;&wʯ7t['0M/HS6*VTIKxBBemba~ V}g;[z/K;40;1acF^VxX#ZurVv.4VTTk`ݦ6q, ԸRR(cct/*YTO-'Ks7,hScGnS9Y`lhɞ/g1qv993e`S;),4yD81fhz {x? {wkHkqͼf?Ϸ},}I ~w~ {~3~wS L~78 r ²7xp?æix)l 4ַ"8@=i52:8AxPOlcHGM(2TBtqe&-r,ȏp3,NUX5.i%?1D$l i.O/QZOLQxWѱ*upgwuXjyJ'1W5#t9X"ʌ&2wރ$-x&9ˊ -5 *s?'5NEYB4mx%=4]jYoחWx^YĮ,v_3w LPk79yƛ˜YGSluK+- X?@>:[NK xzKҺ2ɜ;xUr| /QPv,-B Wsqk7fuJuI9G`&g.ꄖ5FѧH"uo.BwL1مASQE1e4yj"z/fElйW[ڥZ_Zf3"G:(}N=ʮk,/cq<;'Mˆ- =17lM. 
'+-/4{lPH<[ixO.oṕWQ6Tu@5S6lty_uI#SgV2WKr.v!l()gAlآXoS6Req1\uH^;>SQXmT ZYе[#SUV;:V+6H@ ^V>5X%yr+VXSfGDrfӈH8 wz;*?JvY;Vozq:ٵvmpHF)n(3m_]W_w8u}+{_fc|^=^?7r$|uO)PjK.@zG*,~VÉ'F-Ek`d>^XZka-jT W3ZkÐMrɐO/8Ϫ1V@u^PAn|Gr\E>JVȊFG8>])RbʹG.åvMkdBXS)ۤlJSۭl9/gz}KZNѝ-i3eJEY^Bl7.h_ >)ywhUNqjͻM6<+InV2򴯚Kndyb nIMiNYwLL\]Xmuz}YeQƍ(Bם#rER]Pw#RW,s96I$rǑIJb"3yȒIZdb$UGN(7\ u_eb2uW~_y鏇GI'w"ZlaSt*iA*(ߜjYqJ9?C}zj"u'Rꇫe*@롱Z+델Zk+6l~2,⬴ [-f-zk,ߊڸ{.ֆn.z{{/KҺo 0F|׸ T4 8={w Bg<LGΥ?%@igԠ5Nf=n.\_p?`@PP+u?!A0 ELpl?8#-3Ps! K6ǎGdB8*0Nҡ8>5r`1":L]Z*(򉊸bÆFϑ46@#@aQgU#y卷clT?]kK3\]ǼF4?MS`u=>p#ѡKbIAw$H />R ⫂S"Z[#1 itL `{|h\+.ؽ@n H(?|ٸ/YFt+DF8:&9Lm,K؇NSR@4\H*;rpGˑnMTr4| kRP*5'2'FG2 4P2g-mTZYAٸ7T*HA(t!,9&$jyh|GkȰyX\ɉ3NuCKgQg];+\˝Ɣm1!QCNQ=F!~,-=XKg79f^:z1 KKRI&1b׶>"!u[RId%sVzqVL@nF YhWV^h𦯟ޭU B} zYJLOFa_âEdUsjZ#t"G9e1ZJan%ٝg3RFZB-d3y[L&__h+.- '+{if7ɷLa} b~+uAlf{DP>EDvҚHZeՒE"+]\׫v\[{H/>Y[ :ʰgސ(_!É&`M[kˆq5뺑kg9q<6MX<^9aσιpB?.B#gepT`yuo'n?;=v^&~wIU&%+u9*5)vY(L?IT#i5?}gzS:نa+ii{{!s ;ҹ{=JM}U?SPp +ծ3ݟq>Nd9PJU 0_2čgà U-jemSpӜv4;QyԲV{`޴eq_߃e{t_^d}y`5Us 0 VNLMImHTGS2&zǖ]QFlu#h.liD8}B݉5 | ^ ^DyL ZQt U1`dE ΗyDT&ꔘ ?ȡyΟ5؎ ڟY)Ud~&|I<ƜMW Dt@`U'b!vZ`a%NO#Vq. XU+*"eY)2ʒUA(~+ULU*vHh)3cV`LHa5X9U]5ƣ0q!^+T[ *\۳ݚ?_K۱鍏ٞQTPGC^I<[@TboEM#1R$P(`TK L]ߡ1`MLNfPP\$pe=*%SM eSF %IU&U:PV^%W]SneWeXSbНQ$߶a] $4@ I@ḙ!H}Ta`|~`5[,_NeT f VaFѤf~R/uVJdZV%TV6W_!Wmq2ݶWiUaڑu}[0"m X-c ɕ9ZabF6#9YQֲ:Wl. J暫ipab?hVwIJYMnD/ZEL#oqٸv,8}oJ,ZaO*Ka"ֲ$b2.۔sim^a!]fׅ'].a:>tQ&f-:gj~.f[D*b o?,erVhXQ,xl 1Y~_v$ʫq㿘O~0sop"άv_vb}"U"I~>שZ*a#pƎ b {}_( qpҤ_"! "Q-;bM232<#7PqopvR 19d^'72B nBrpp$$V !TunjdN[ 1m@mgY1AF=Js_9Û;387gTʝ$n=*𙪩ҫ-.$"0iGg.\Bt)D@+k4dGVQ&RJIܤ(IL״G3eKtNN7KO54QԤ3=J# 5.LjT3rA5JcQgjAN޽d4.)BaP ^µ]$2]QWdW5n4A{+^sh^uCO`+%mA@i[ٲnA9>^"H[~>Gso@dk<fQm6hfﶝ}=.~oi^n>&z_G؆q$Z )nqd;႗?vx.2dGsN`,8w#b]j j25 *YU7(W՟5h6g+G=/("z#Rwcʉ")p+tYursx(uv~a*AVIC!}+ oc s.B|! 
Op~tl3h ڡq[x|( =YbgWrysk⢝q)Y))"3Y&6bQyϢ=9 X74Qx򄸦7#省*8/+z[v .s:7>_;C[F.3Ɏ/=i'CjK^MPi8MյԻi5K"Ïm?ă5=27_ eVss4A`N m3og3YU>0U?N611qc})DͷN_nƦ9hR=ūiΓ<ٶ"=V}gr=böx.؇LEnp%'0 K̢rKu8D/=렓sb3Iu(A "Z[uL;90 r ݺ^]ֽݗ[7|aFzцuB/t>gz"0򸊪wj(]cE7~[n~B|C|"Royze4gm:?ĀntE9f8DSue[cycs}{"( Gx:%(LZvɍBW$)sz~{~ߏ[ d*ӨQk|9tdB %-5=EmMemA|,r[եu-\%,>FNNUnt&Ʀ$fw&?GOW4_wx?oM_@Oȍ2`4" m8Cq+ %Rh(FdX`,4 (H謼h\h0|jHB8E`F(D ̓@+Mj Ht*MMYЗ=uUjj;u`)GhC9s"Ͽ+EֱܷPɍ)G-GwL~:,_Y͓&<;@S*@0Zi >ϽOH&0*rbH?ꛐ;{B6:0Ȱ<X<пXA4;p0\8l@(D$wk+=Q+EҬ6R\Үtb!M%|pJ0p;iM7˧s0( *==,I:\+#T5Ao䔲(\JJjTHKL5RV1-2ø VD>ӻ3*M +Yc ,K=RPBmUQՎ.]glVS_g}Vڎp]wZwGUvZgKW;W0V`pk]]c6ڭ]V,XFݱoZ WnaX$MEC)5VC 1L~xIesxV3E39qAѱ#~VRͩ5dO>9fmWr߄ujxw.byc^ nYXƑ]5V:pt<5G6K}in9җ9mAURl{>oj/MrU.V\7"w=O8ٴxAݹ-wPSaKRaŴ!l3aĵl&\Q[+zғ<~ t B]I (RdKՀTTqmXt%0fG$##{p#(9]\hGuđ$ ADd"GEʢd$(IPҒc1Mvғ';#JF0g9eR閜~}#4FLqL,BˌR\%}FdoV`5{lL@ES0/M`b z@FDӜpRrDٙ|)NFi90ٲm"3=xr ρ6 I;-F`xL(JF?HACUH52C]Ȥ,(9'`5L4չ@%]˺w:N;4q塆&0)[ֆ=/n=+_edTZTbm f`uOŧYΑ*f SXFc֐b)!Qx.__/Cڷv |ϼWb~V0jbG^Q)۩e&a_M{[1V:ɪ3|g>#2ns ɖ4ǮfE N@"z2g cE[ɝ6)@@uiv](Xz$?6SiK=h òf;ZRX V3٬&k'[kKhciNNu7YVٟ&Vg4Yj"?=?y>&=}G]%w 51\i R_f<C1 A/ A[((g06 ?u/]A4nO#֎郸vvN}J]@YjT|`C`ԩ3k gm r @Cہx1G|BʊSsvU2vg1vgcեSARܮ43"Fޫu&oWlb ׈\q :3 ݑK tB?6~V .wPN4>!%֦{*>ZOhew0{T^k7;+7S:搥׋yK@z/2bݛZeZٻW)'1>2 YAB T鑤C,aAIP5d5 ܹ4 AAY:ej0 3kӼ)YBⲜZ8{L0Ac*Bۿ19I:73#0BKQ;t>K á "!ID?K 9CK\*LG9}pr-(SE2t˶\(A"Y$2YŻ .E[,]2]'2{+F`$dlcƃPCe#j(kƂzF.nhFoGG*smT.pDvuL2>Gw̲J34RDs4—=bI8C~[Q42,4%eb[JHi&\9b* 6)Qs5+r.r:+ә`dCO3&$+dHf#R; ЛI㚪eA^A⹛ve<"q,;=8·'n˭)J1#,'9@r jCUJ*-s\{\AZKWߊl AŒ7#(j=C)HJ!|TI7}Hʹ(c)Hi;LͧG:>1NSD[GLR)u˓M, iJVLKkC+1<<)Ӽa|::XNT<+O:K8<:+ O  H1M)kY?^KK/;ɫ|R-|Q쁫j iOD;z@+X@+<\<b?lJsKkHM #vy"C\9.Gg们9r63R$RC§M=;,aP)٩R"E2IʚyAt,Ĵu?~IH࡬LRz?в-+M)AkX0+T?3҉5*B?;A1l'4VCJSiӲT3dCr,>=q-1A<ċNBJe+z+[Vf;McՒV;wCȝ=IK4Lê̬DP-UKOd`auqEԒdJ|WT dѤbUNQ4}S~=D\͓3┱{(=ʃLCĠLfuG1D<51 Dդ͡ۼ6Ş-kW:4E5}ؒzD=z8"a(L%{3$b'-"bJJ^ḊGI^!3c Clۗmw[t=-b\dܽ\=}[%܁:UX\]?m})إ]+JD8U'E3NmsF3JЈx"^mᅝŶB pLH}PG] #K _e[쵇l3%FZ-o3_9Eݾd^_M}Y$kL *iL4|`V(͕{ 9sM5dLQ]H 9S S%N]c9 :9Z,hᑓ\]~2/Tb/2 R[:υIRE^ܶ? 
[DbjMwSK(d K(/]UTzK%0|=[zHܮESgLsd`79*LX=XaèYaн\PS} W~dTFNIQm*KУ䚌-m^B} qJDaEAVnT怃>>#[t7N5c=3īӫ+֝D-J⟄PbSEWHE:VLߩg~cN&53ԝ<;`D:g(-c/(bFڈf1Ca}ᖵBDS<&5Z2`^p\B#22Qzܣ'*¥ݠߤO]>%ݯj&k~ݱ.>NV붮FvUDk&jߋTu{<%ȬUκ$ Aܸ!YDե>c $g FϤί[5'C$:TUemvPnl 2PʺB ѓ/ns@nD"Qڮdky.ev0VڎqK]flo#O^<׍ЮSfj^CSYܠoBK:& vʒCr3jw[o:YZ=p-dڜB\,edD?tO.M{D=+p@%U P3gcnt!* )սE5†vdnPr'5p<&+,TWn_dE<$~)P&IRhR+{^X1r@V&_?R~g%;?tm*Gqv#u2NtXS'c X~XpiUc8^:7ePNf_|I gro[YXLc%u6%( A<DuJAT^ouhbhbXtqwqmܶd_Ott.\WV+!srMr&X\{l@D- 8Q) 8o2oDieԼ1eRGL?VSMT'nƵk xo^VW7stuą$r_z;zF]/kF$뷿mtt{OG{$HO{Fj*%|c.|EJ^|SnDz|zȝ|,|?%v͏MMѧyoJPga7oMrAwK u.|ݿVM4T'qW;уϊ0GM_'d'j†Vս&wlKJ8gҾ~:m$䘺0I8ͻ`(N1hVlx|淋pHldb@l:ШtzHRRn֬xL.z]e|Ih~Wvx/>1u_T^1.^K ]$ǨɥU# դBlݿ!IA5[H& DL$#ۧ\/")1Š%A :9pIZNe095O)䤳`J,zRBQ44Tl'J =Zfͩ*иCWMZA{>A!&8"۱trǝ D$_Y Z;e\Ӱfizp2VҎeLgҠ+{l5h _kF7;bdM2͛{N3a76vjxiV-pN>}O6փ9؈լ0҆/O_t}4_Wzgab憝 /jj[e(ebݱϳ`B8p+Y. 50Znvk7$z?u.?_|PnkTsrw{k, ]rB| 0Gx2X0 zp"˔yQ+6RO{3(oSŠJ'2_̶Xx{~3Lsf~UI(?qz8^'h3[gkA7^J\u7܎ܝ 6齩KõOzȾs쾴^qg*n^iYy+oe*׋$ <޼nm[QHlnM'RpE^4 2,/k(ZNbo@3H0D3Lwe #Jw>ƒ&G WAp .| gBP 1w p 9A"*ˆz!%." 
vDRl_XhȢ](ˉO̟^&c^<đ.Pt@A8kp HoLyA@nR,ݷ&(#UH2SV9I >W3#Hg;5XQt^Vä]s`i$iew{Ĕ5k ع1)IjYWFbBZf3s+֩ΟdȇpDq!uir19()4Gtƥ)5 d'9ůn%mto]Yg(*P=s\jgRwMghN0$&/" 'T hC8T#RVzunՈ2j:~b>3.9ЭlscUӌ*mm,K̚: [;CjJe6Sziyk~~jAs氺F}.ִ >|i3*hĭf%|㸹JhF6Ϗ8i(VҘdN{st78OGЦ2=0 6lP%ӺddZ겔X[сU9l(WBT6)]dAl[Z82L,cQ/&27qSemjuv?$jA23"^gt&NSxZ7/2,I@7'fKpv|M@kLK#ʴ|aM1|BKE]S==q@$vrxEv7Sh~SmWU~,,.d3}3Z5[ %5!X0To$Nm3Oqԃ)V]uQ^ugQ/4}Er׊1[1o|275pq#jEe7rFGHP&Vh8pvX&82%'HS\BWxXqE%`\~d864a'KES8Vp.Ks;UcNH,AU/XFU#UUHZD[4bnV}{H;2Vb.va}$w;F|OƄ2ltcJc˰syx-?G}kf {Wga7Dsy%#D'*y*ns`&3ǣn@8zh:Fڏ>ZGK|LhCzCۇ?ScAߨQ֥(%c e5KJ }6bn߄p*GfRvX7s`hꧨW&HR&Va%ʥl Xgu[i\ZvX6_!NJaF]I焂I9H2::^v,~woʪlD ybxOzڦ+X;:E֊JXjԭz J\Qs:dj횯 Dگzc&|{ZdȚd|ki$ D )9V헖蚮љs-UDhS\67NLlFmw%l9=]ct}湲ozy43^f b U8+KyJY2NG39JHLG$3;9]V,Js^whbZD{3=kH܊aU"dDhB:s8#/F$n(PuHYbk7LZ}n%kL]Ԭln M N+a:5;754 i).do:r;؉]*[3'99Ku 3)jqkSHwSYd^rX%&~0H,yr%ImSvJxUW ,Ht`)qFA#98~+EOB XCќZrE0pՌ/:<|{ ]ooF+ғ["!™*£Huq{Gd_cSxy:+FOE:,AqT[cja62R]|wC{M*9¹+٩ioSSŮE{1z^Gڎ ^K-8MES%L ªH<|ſl)SJC8ri BL~LjvFCgǨyo˚m~Ϭ0 ά]j{vz*ƅ)n Kycˬܠ̞KcL9FkWbabiӞѨ[=Jx,)47qg=,Q(F+5:iFU =z5-y ;NjF-zJIENm<ORCU=Ո7\iM}q4]@}zg_e^M[Cfcr' .9DGFBD%{H7LqPzQInUWY[>o]ךbMqx.\/t_|KnJ$dxҋFxx=O}XQ]GRڂ{4.>@x,YuŠl־,Z)䍦G=l`j纑~gޤM <^!Yrۊ|ĕ7~(o}JNM19NxҡǟIH ' Ц iVNMdؿ};#O=۬jؼt٪a2&lݬ)!.FD )2SLS1qξw{PU8YˇF # Ҭnמg6g|\lWII2_:ǚ.ucKlR8܀{앜ΜjBoĞ9ΫG.1oəˏ?ď_c/ W/nU_T'9OzeORerd q 7背2AO-ӏ0]_4 qϜ>⎂5^|(߾S^ "=lŸkƲtIdx9O %YTx×mX1ǨpFdu"]Fܤz]9[s 쯉aШ0%1rPM̭* )ftld,j-otʒm,57xm9OnoF9v#Z\qQ]3']y1rqr}^?Z;Y3WIDuL"*] r1LCă{%C%1ܐE[\%Ǟ' < a$&n<(P 'ҰL2$[b!~f4nLfŕ[!)WIM"&KP0-A/܅bQg=LJS a36>AKE(4R%177WՔU<%IgzheSQNQpe GNa*UITSZ!7c5X3(\OUH_JY5bɕ Yqܔ6 /9C`aS2Ć]s`ճvI6Z uq݇]oxU4\~ՉE8QN7Zh^TtIF_LIyH\LLlBhJٞ0XܧJ,mꖥ~. 
[Binary GIF image data omitted — tar members: scalapack-doc-1.5/html/slug/img747.gif, img748.gif, img74.gif, img750.gif, img751.gif, img752.gif, img753.gif, img754.gif]
4K'myIg)}b\ZyTt^x\NYw=w|U}t]e+x8D7}mwqC**|9)dBRM0Z z9:Ր~IU_16hLɶgra7|+\_k:6MD3t7mPd:Ϯnp^7dw*M77t!Q[ TƒJ&#120TN-( ; ujS]6٤9R KZ~)GSBuVU5th?E* F>]Qv>{Sخ:O}`ʳ2},d#+R,f3r,hC+ђ=-jSղ}-lc+Ҷ-ns򶷾-p+=.r2}.t+R[u3݂83B)-j8d4!\gq/HpFLfevȚn,%,v7 .$ϕu.jEf,e-6WFdS|5mf$!UKG7w*vot̾T qe{6TcSH# ޓQc pgr6 fMH(FlI,3$I.UKLi tT"fOEʋ+YY6П=`]]TE-]vu~  Ơ ֠   !!&.!6>!FN!R֋U9 VdU U1v b VO.Uv 0B~?iY>֡r@`_^#?ң9UHMUm449uEV-eBhANS,TC*T~_zXQ_!%a[* d*`᧪&BcOWL~1n-. ,|-T j*2ra$$җy -$--..&..6>.FN.V^.fn.v~.膮.閮cKPPJbTl.^JzM%`{]`fv"fW>~eoW$YW.[ZÊf,o%dg^m].Ygj[MTfV&&a (Xm/eFb2oymg:uFhl6IR)nYڸjviĉ{noJcI!+M+N^kl*\Yt> D&۝5kJZjUp'QhE-6p{ӵݶ-h#}îhLj +#hao fy.q|>qqw[Cpu&'|F61ƨNgB$i((+7>ΆKEUnr*rxqh =h% hS2rR(;&:$:g阮icXz,(f32$KɣZ16@i-=y""&+@ã!^ ڧ:^)Žt'*>:qr ­~#4`%tH_E#cĥA4S^'f+!krT3PcUZuuC0,֞PUs`93Z,lM5lbl%ޥJ`w,ĊlJEoCl`, B1ǭۡmڢhdі jvig6r6m/KiZh+-h'% s^g5и62.s7s?7tGtO7uWu_7vgvo7www7xx7yy7zz7{{7|Ƿ|7}׷}r7.ŒTZ^Y/}mkXnF*/[-frUNX=/Xêtd_X15Ai3e2^\^[ 8* &/3SpFptpk:Lc+m-'!4'5a/KLROt:ӈ-_sJo21cZ_1/(O e]FIoqB#8,F)WU\ Og_QN[)w6C[~ 2k:h= 57:0K[z12S/nv>'oѦ;9s ŲNsW *.(֨j 2;x溮 ?Tyj_ +Y,ѠR10<ϧ˹..>g*aQ PQFe4gZBBfy$3:k:GoGtS(u8uFɼEm̟#膕{+=X/7y5:^qduͣɵbeC65g׭{˃üd:ς[;CXߗ4i6nvj6o~'<>c?j{>Ӿ~mr-m p8MoMp~/>ۗ~UY?s?_y?????ǿ?׿???QNZYoCq$KDSue[cyks}{CbxD&KfFSjzfB s 9+ҷ]e1]c[~L|@Rs4tklTy{dDx4heڌ|5}Ľ<}^Dt-N]V5A&͔ FCa _NwXo3_86Tg> ǀߙo|}ao @ V 0Čc2Ȃ<*|Lr'/ls$zъa~, U~)pɚ*>Z ӫZ4(@e( UZl;ۢB󎷾;?VFg#Uqmdh'-,_Ü̲1K;lZj:}՗ѡ&u)Xfx uM1ե(læ6~}yqf'tN .m53Ta8kW|O6о{b;?b>x{AÈ,p,:z;JP4>E=Fzphq-8î8 Y =S/HN!+ƯB-L $0ZTqExsK)|x=rG p9M̞4N,wAh̻Q9Mv#>&5AjMԱfҨ*# sPz5*Z*\ *^K^c-/@=v}v:5`Ӎ8`PI%N,\Wvxh!x;Kq]}5}-wHIwwEE>e8~eX=TOmc1 aWA68SkH8 ;scalapack-doc-1.5/html/slug/img756.gif0100644000056400000620000003661206336113241017167 0ustar pfrauenfstaffGIF89a&!,&ڋ޼HY1j31Z0:~ Ģ񈜵bAi<Ȯ AT7idww(tf3 Ye %) 1ZRUfw#əfxWCC)",;\E]LzX } ݽ;+]^5n |}~ \߷ Ӧ(~1T!?u(1#A—Æ?Dy  Y RR4QG&1(<B.4 a*f]ڑ+бd ] ljXj "Ipt90*Cѕ=Q̺9oo_s3,QQ{9D29de:KNlYe7)ȿڻi9q}U遈U_Gg=OXA؟J+:ۿ?`P H`` .`>axNHa^ana~b"Hb&b*b.c|R6jwWy}z:zDٔdCwIbQ#I^5@w=a)Y$VEeA$%X&5f'OjUeYo|'AVIZe|fm9RhK9iU=gilO)Ir&n;jTc_vz=f|opeڝ d!ZK:Ⱥ*FhI]*@ 
*7kI)\׶\)SLjf;Yz֮/6'"*¾43 ^D;۝lMW; @ 8+>c[ns z[뺊1G+b"aZɫo;8 8uw/=a/]Q`Jq^|cݔ+cGF毌~.Y$naTwd'qZ[e%Kپ 0*v)7|iLa9IBxl3*S*`jJ)e+*Jmhh0Y[2Ue4!䌝:9yL 2VF3X>t?gR:bVn Oƛ.5oݮKE dz|$*k@&ED U[/J@uC7p{%Ogt.C$n[w|{ <նPܪTM(7pIHi{sQt+>O,SZxJP[fF!VEWMVf% Kaؘ7זHwu+KW@Ti^=?zo;bOeWvTGS ?`5ek=]׬ZS9\4{ &|W}'8|[R\_%( h$x W_Er\=ƛ|2,dިPVյp2|i ͻx^n!m_['O,S_JU}ۇ}v,2Geq,:n3EF#E 燀L'xT;Ta|C_Gwhxf%wEYw!a>fRwwy^D`(`4^cs+PgtF6_h:ȃE1Gg\&x"DbQ\&d3w/*Hw46<\GA4Dv핁f7:i6hVOAYy&k\X7'__h]ah8FFd%VbjCq=0=?Z2ui=>hzuuDȅ+﵀2Hlm&{Q{r8=73Z#೹ _2 .|/qHo Cx0VEVo+ʙ)Q+ㆰ6/F3b{Cmȁ'#h0닩4냢#:n{[{/ h&hhJ6;Nn1dУx ׎A*l.Lz;&<+-/ 1,3LÍ5|BJ :3暄̊$|-LGǶKʜEllfs:\zz)!ȠeŰ}:&iܴň@{ujC"46 9t}xGw,:X/`^}8|ȬQӗ8F(_k?eɻf#^e52Md9S,[S<Ǩ4Kc"FJk3Fji2@|4s;KdgTۛliԜۘl͸L )6㌛lΕ|[|Nk _I?KS<,KzLϦ (N'?iHs,Yʰ+XdMܙXmJǩ ;j,GuVGhA$3 au,ZEs f.ٞ:7=Ȑ}jA]ŌL9भ" J]FDv΋d JC'DAz ׵m}0scI7˶*zl:,FnG~}6Ҕ=ه͟͸ LmڧکګڭڿU1zK)ݥS+XxFWtKwgRQN̺4rK*>} Q9iWέ}PImkI[d+gl|9>4͠ 'L P^HU2vq5 FQP*z)ܰOԃ,[&=k 2{cj dC(̵2u( ,ⷙ2yHY*>ஆ^_ ☺ e+oyx,L;^-PE~-(j!f9Hdehxf:ײˋMmn'=Ro:<⺕C/6e귅{n|q˰]ϷbvǽGmf Rϭ?wS :.=A <  a<0cQEnBll FyH Ej+(+KڮdO$eR sAO"||,k qFm|´qIpOL5ONV[q >+jU\?]d>4ӹ/o9yRl7ʘX⛒O\)NڀρM܉O=lIE.2ȟRܷ$ǴӵJ罨L&t:ը:7c9 p%L2 hRoxfR=LbRƯl`;XB1{g3ϑVB ,(jEHU) ٪ઈPlM2dk% SX9}B^F<2bnDdκhVj#)I*X+A%TY1&+Nj1 )U4hRYS\JD[<+KK]/La41Le.t3MiNմ5Mmn7Nq49љNut;OyΓ=O}?P5AP. uC!QNE1QnGAR%5IQR-uKaSΔ5MqS=OTE5QT.MuSUNUUiRQg̞Vwiyz JQkV O{GaGqō[j-YL|3AA\cQBA <끬.ɓuV55 %5q~kTYT6! 
I|FXϦK !E.,l++$r pY]7*#ˈ5ykOHEz_Η}_ TҦ$-ZM0Ŋezw_£ӌUƑ$F#oO$[1F f֏9ZkzvF2KqS!h&TFD NƧ':oL dlavIyQpNpd5%xHAM@Mv-`DLȮvyYr` N!!rTpVq27D,l3LH25nW3*#+MIx%*sg/{+jRđN=]NJPb-'kiqԵtmφd i4>nQ}+Ri:0qʛ(6HI}L:r"OXM3׺Rfey{>yۏbr]iЛ얝۷PkcX߯l['44w/kѬ}/^Q5hYŤImxw6E>fT'`HG5DnTt;Ybt8ƆMaJ]Tկ.fry 6}Rq7gmxdZod'U9^@>1 ,%5,]|_կwa{Ϟ:Uj 6Js:M[}X8/YPǯ0oP\cٰc_< +#{m8bI|g:?qat 1=z]uxX lmۂgȲnnpl~j=뱲&̲flZvzBKϺ,)rƬ, q1.oLLhM`PDdh{$ kltWxP 'I֦GP / qlȶMRBL6,!xʎSfЀ@Nm?(ը- 0ڒ;m 2b(hsQNwV0Y h}1x Ƣ`Pr0Nk e䉸(֢~x30b7rPN1hYQ  ۤKNq*OPz ";HW1q"n4vhf4qar-r17s5ws9s=WdsH-T$oi㍤)>Lw|67Qbqq5Xk6vsuq{+6{ u zwmnDIo]Y}G˓G@Vz> 5o@4DP|?NtzILP~M+y yuwQi1O$ ,[wחĴgF9TBh Im(4SrX;v _WU:6+߰vfnٵ+e?m.UW3ج.4 \9`Ԋ[2X'GX~4eP?{ċ%#V?/z f-m+''@w 7˖.D="m~esFYk,/:ly6X;oHfA _9ߘ}|,/ΙuImn6+J-ooESۉ2 7p1sŹtwr'749Iq?9y鹞yBF>֝E!K=3{vZ*/.yiwSwzJԍv;G0nw%Z8vuIߏBR]u1"Q|˵I7%8z}API;V}wy6[wÑY8}Ha8Ja$Ut^x{ԴCh o7Il '%X݀;'xL\c yYUWXk`,Θo՚P08M;#;K~44Z[x!Ec<ICCvRzO邉q8kXXV4\agVAe2W[\-EmQT;/ي]ykc|ջKEֻ:B={Z!e qY$"wz`O'_:f[ 侸#VKnj+¿zS&K&Sb{y퀲eUY@ܺY]Z:/F،g%P+a 1vDvm# ilm*rX7ݶ2 i̚oAܛnE A /S3+ r缢\Yr߼A=E}IOVn[i#eCR8e1k0XbX7R)xZֿ?Zgژ- }z트&`lqE!Ii'g}l{=[^; J'ԲM%rNn j=(݃TqI=x=[fq ZIxi ћ˛f3Zu:[yGȵY{^߂7S̑9Ezk܇u/|OY턼Xt 5k Zy\B{%^ËX^eWO{Yԝ\0 s%3?B=ٛ#=3{p3; >M=EIMQ?U^h-җuhދ3;oWp}erigeؗw\M-kwq_^>3ƋEɮ2)_N6'u:aɧ d8ՙuƥZK.6*z?#YeFԲI f4!firKNO6#S=Ν^!:^ĜgƤP71Ǭ(^U2q# =PVit:W.w&FY15KO,N)C- agOZ{@{96 }F`b$}w݉nv T+9{9衋>:饛~:ꩫ:뭻:>;튇^{\e#;ۘj-y"׍U%oYo$gGtvŝiYv?&ƢY__eɼjvGEKzZli~<!e&7 38Fm,r6JK"P$*YD@+Vr()Ψ3ؕCsMn*hl/s! pD yW#k]ΊZt̤Ş}d.1mR>aab'qƟ-L4H yc)A7r.Qe![f^D Ԝ(IǎBFLhrCe2#*^r}GNt -rZC )9`{׼HSkst z,;+KxXNSB'R@q)<(8ǚ'B#*щR(F3эr(HC*ґ&=)JSҕ.}ę>F#hMs8Q>~*%*RI2NuP2>jiPrVTW*@ +ZtpZ/n}+\*׹ҵv+^׽5m_JEZkic `}2x5^iԾ }tjd=uyҕf >0 7̯+&Q@V'ml+c cK-L*S.k3Cz Qޢ .t#=2{H28UUxG|EJ*VV5Mƴ~ @v uAR Z*ai Vy34'}C"hp+ N**Kxu6Ć41g(IfH])L3]q ^3%sP G%9]rھcDymp=3ehn)kdlY΂+pg&3|:/It&3#-ISҖ43MsӞ4C-QԦ>5SUծ~5c-YӺֶ5s]׾5-amOZp1X&b}l7Fi! 
2{*Hv Ž:6?mCŖCMwXN7*H(Jw}Ac a1^>T&[xV>e^2=6ƣI/蔳&c.69sWKނVn@4-h>hy8:N#2JB-Zn(od8e:K"B+R\{N+_g5liRl>mЎ )!6䳮B;Ģ>)^vyzq/yl 9{\2xEifofMb֮@V3SwlcmE䳱A;JH–UVD =m4uk{2M媾d;ݚ^!Pu-WN Ř}Uh_SUWWtX?БA9ΰKd`U$9 $Iߋmט߽ҍ-U]1VY Å \jRT݄ Ld@wO=Qh|VT`vdtɵ_ Ok`!%V O^byVǍXZ"b"J%j"}]E;5^DwXٝ*שݴ=bb+ /]$)|`b=#-{|*R]4 #943~c19#::#;cX7T? bzFrE,`P9o]UI"#[Q$+ ^hYu$ɸM!" D4U %f"ӡaT_6RY 5aYdY bLͥvD],fRU/~ C "PUva֐q5TɟJ`IzMBfX+[^`.aa&d`B&Qa}&72ջEl:!|f@DoWcFy$1&N&f>ax Fa:FQNXf6&˙9YH~#vSYao_a& Y!e}e#u(%& Z5X"gya'L*e]T6jJjQ>8U٘ \bcj#U1f=zSg}6ګ ;.*6>*FN*Z V,dt݅~'z[e6k%)r^edY ]%>_zزv&XWSّeN*\6K:0!je63 dW!BVN]viןVge-g$g`ѶE rdDVF-+{٢С"xZ'w!k.&jrה6(o,>aC>(<-nJF{.͌: O(p)=ɥEi.hWl-m*n)hΚꤛ%%6jlZ}<9nFN.V^.fS&ji!⢟ʣAb2#B fZU.fiM晪:k'*iic*eեOƫ*o6]dB2Lv+Ti/k,>]_Jĭl.F{ /.%nGay)R-$ F/gdnWoP. HUrG*J)]oAp߾a)B'~3lҠdԯ90W0NJ՘x.R_~ žܘ0 ႖"l^rXVɤuFfP!*?_Z ( /~cNm]q"旁S(b"ٞe𺺮$ǞVYYM1"jHƪE.$ۤ͛nH g`lYr[.,)=JS;scalapack-doc-1.5/html/slug/img757.gif0100644000056400000620000001742506336060410017170 0ustar pfrauenfstaffGIF89a!, ZlH扦ʶ {{WynԆĢL}¡OiVUjQי춛˔m|J˒kSw87pWH3CYhaQYə)j+ wsʹ+Zkj0W쫪f8 -%z|2\}#^9nc4ki7)/@^պe… _AbR0ׁ8ʝ#c"v K&/6\4C Ϲ[$K>r TV(-f/:j왍cT;6I 5I5N8R|UYWYtڅ,ӻ|Km +KFŌ'ɔ+[9͜+s :ѤK>:5ά[~ ;ٴk۾;ݼ{ <ċ?<̛;=ԫ[=ܻ{>˛?>ۻ?ۿ?qZdm5!B&`^"UKLER<$*cZ0ↈ H‚%VDS6,Q?\UŠ,߇AX#DkTSH<2TXuVHWNyHLyN8ZX2*砖⇉eߨ@#fXǔY\{Z=lhT"Y[B(Vq&TH>鞜>EUO N[uPQ`l@, d1$`3np؈w?.n Y#LB ŴF.X#)'QGe = YڨT|\8UJҌdrC> 99\D|(4 .rb%(nƅ}K۞uM$;N+$Ґ S3SV%=')bnI,"P(d+PO̧s@BOO"FۨΊ -IOҔt,mK_ S<ʅ"_4#ƴDgMaB:`\Ng 13P9<x<oDYRZѷJ"=w)SUXN)>" #՗%5vl`ֵ`a"g6J=ɑy}lܐusl'Z; Ū0sjVήc*i f!q|4&.s-nO&1ï-ǼnUm=5N:Vnf9@m[fh羓]'ћT7p"~:Geo6w7Y'=O_V7L]@-ap]jC*RьC)Q5dKoOP61bc < yD. 
@JGP%cYSBX+zr T)/x̖e9!fŌ8 .2V.AOKubiUUStSSb ><,{f-xʹ a*UmsZLCS u.!JjG1V<gm=M٣sts2HV"w8 eroԆڳ&HF-\Fs\^)J)íf,2pE3vKmm8:=[։Kn ET-GN8!ɺ;MI]^-W##uO 4Dt ,e|]m$l3]3ɛc*xwk%,By!Qֽ,.`ŵ*ݡ.#Yi-jItUkƁ=K+dm7x3vw|{ Uwe{ ArpdpEFq2AfYvIQEGXz'xs.u5@HIFRcҕe`[`TuGm%[6gfoF$uGlwntad'ayxvx{C|OeXpaPt`6a(F9!d_)B'Fr'}yA8{Gc+dȋ(HhLjɨtfyd }q~WHU=~uOdf6*UghXjmbaI.shCgc~JWdfkE&NO&b);)؈>炾2b 8+Xg&m75nHd;H^嘇{|F\|:0X'7yELnLQ(Fa/$ 'y*6,yr@u#م4uHqrn%iX6c\!DžHZ1]ug̤9VXpwf6d`&G?`gtMyV}MɉMYنD)v*逅Y|}'3UEȑiCHZtiɛ雿 4bET*Y? 6va heio'$~ja YYt9Wep2~igURU_ƞhg}(68u23 9X_U7kNj Y3?WWx:jp+DhpkxbnWqG5&nq 1CmRbXa5M%\)1al6gPIGxjzmoCْtOL8qd;o>S3DPI3K:L$27EXCF9O9/% qZ2Yi{oQI䤞[:cqm:mON#S\x2neВZ9ژ2s*slV2:v"بJ_z v&0w] FgX ~*jȕO&GI$VhDMko]kyJT)ʧzAz*O:[UCV{yLj3uQfZ}Ϻ *˜6QfB9!+#K%k')Q=$ղ7h_ѣ:ek_kjw[zqS ŞYB*#gvUZGƭ{4")w#j~[J7 hZlĻ%tb \Ekv* k`ɻ62!gtDX6N6v&4=q;m,Z+~˾h꫏˗2סךx[\j;5tCgEwUbzjJjA0':̹DO-Rzٰ(ėSL}9UPm@ mo q,sLuT'hyv,:9|ȥƈ4~̂.U&mJLLRfl1XV4 ƈ؈ ;oM[ǩI$" ¥411 1ں \H38)O{;b>S|jX8ͷ7Z+ fټKܸ ̾‘xESzki )LzpxF |@EF,K#B;y^QXe)+-/ 1-SUYO&}kWfNrg"W[Xڌ&z#cL-YU88p|LmK]lՐtɖ9̸}ՓCKOKH=$6^M,ǵ JMwmxIi[cُi;ƏW<([KmBֈVi lA(yF9ZȀ:i!d֕y }*_Jkg{FŻ Q5C2#[9$]jG<ٺ ~w+nŋ||xa"3Ucu [<^ҍ͛"% }Hיz]'ߤŽX7\@G:׌ Gt7:߰MjY"/1nrG1wnEhwol]Ff9( 7^H-C}`3²))^nKzV/9UMOWZ%l_އֲpj]9mU[o) [1XI 릪%leꚛxx'  ڊ\Jyʟ}mM癛0ȼ^=9,S]'Z R ̗hC΃Nc>̢){kv*ֽ]$9is(< ρ93tso8Ec6=|ܝC. ΍C#񖪿 ):턷9o]xc˶ CO^=|)3oJ8S\{:h;őӠ!'LqTk椫с/T;L]ъn;OÒNY{.qZ]ÛO&l9/OǶ{zBmCrRiJ̫ͥ^ͥbm#Zùꛎ.y O+ֶd.D̀,OqLOB&brLLK"HEVھ\gzI)$qt^de2_)|6[hBvjf5ڢا]F k W(+-/13/8>A<;85e+%U_ace)k#oYqs wyqYg\_m!{-8`A&TaC!F8b0 #tiIiZzϚ p=bȁ\x[բ9AYI.QNYYnaYiyf}~._ӴXx=&!2vy @@QF}ֶkc 9dޥ/QmOXP']p>N ~rFp{wyXi_l Ro.H\uׯjb]`gE*1^N<N9OzKv=wI$7pUC"xa1Qn\/>T1Uat$BRF!]bI<$R4h#)6 %DjdVA1\|AA´q0}͌^0f&iMiLeS+\ysV-9i3yΓ=O}?P5AP. 
uC!QNE1f ZGAR%5IQR-F]t|T5Mqӕꖢ!{JLR?#.jGE=Oԧ”u**qxU^*T:\̤uej8 Ό5~ru]W}_X5a{ Q176`ӱnӮ ګ c?VfElr2\&1Z!o"hLi>'n˙ _#9e|yjnRD+q){t] w,K趩p) 33b% ܴHM75Iqbvܕ Q:ZQ{׼kƩ"[o /ۄعRv_]Govo807bwb˅Uv<8Bi ߾& ׵Y:]b *7Y&0K"o`U)YO 85ַWqGER'|jVEVs9:x!{69~t497)dndUQ6ӓ:*̰h}=>@4unN;C֮Ƙ8j8).beRkh%ȭmc{߹ѥ:< Ն}f1w/e#r=Yp4ots#L&`|q=V8iԽ섬:+ ~p&cB;scalapack-doc-1.5/html/slug/img758.gif0100644000056400000620000003042406336060433017170 0ustar pfrauenfstaffGIF89ab!,b̪䅌H扦ʶ _ccՋQL*̦SeDPѮϮ ˢǥ;t]#J7YZwFah؇)9IGwf ԉ82juZ +;ktQ ;bI+Kڊf,E =8 8\-˝I|m~.i{jy( oj0$v3m࠿m1FW OZ= ecTn| 3"G{ Wj$Òl⪉8\=e=NMS_HVRt." 6,A,=6,P̪} 7.PJʽ7V{ 8 >81!;~ 9ɔ*9͜;{ :ѤK>:լ[~ ;ٴk۾;ݼ{ <ċ?<̛;=ԫ[=ܻ{>˛?_2)x$N}'U N_~5P IM$x` ztYtHB5a4Otd E{9bE0UU{'1alÞH z'5W&eUF2*7e!ӑXJX~A =I4^VN'+NSQZIt򅧟y&I TO%2\9haxXOO&a*5;5j mJ% jP13=i bJ6(.[TVz*bNu ,"eH %[ '>βSYkϴun;B9/aۮ `Vm" +;"=j=~ѽ"oRM.)̅|.G_h;9-]vltoΐr&j'G sGȁ~DtR| ț6UfB*t-RnCnqTJ/JDmV>Y}8ފ)(5.7$8>6j(oe>4M]m,S20‚h~(ӒӾvNP~;Е_g/}O?>)|V~YGߏ p[Ҿe/[TI=Rc ֈ# x6}q7Ԙһ(j@Q- DsO md$:dzf<m3S:jkrË́,iURќCOMGUh΄>nO$a,DOOCcTZ3lc|߅Țj tH$шn9tdTTNEm&l{c |@K;&M3'+̰*z@§ +r+c~$7͒pA8FhtQ:dNqHPy9ӕjn0h*e6%Edj.Qde}1%l1M/xkN H1j:MյdTDl#QW5s.d\'%NhF%0y2yJuO֓OU/+Lai?`lhJâvmk_ vmo .yS.t #:RȒU,;7PIR-eC1"t H)KlwC ɰnXI6w]9J"݉ƴ\]y)F# NB*mɌ6dfsMTBtSX'8V}S' m4Nl~f T] $)C:"*])Yu2ß((\ot/TOxm+_exm/6 F8kҢl^?F#E& kRt|eV%JüI?zYV&܋w7#.ɖ^ש[6Pփ%?]S:Q&jZlkTrFUر Չ"2t-LYT Lz]&Rkp[uy nKu0n…p Yz_hk|pr?Tv-gnhb?Y <}f'%79WG^X\(yZvD/я+}LoӟK}Tկk8ߺy}d/ώk ˽\Zл}oMVϙ^M1owIwbq| nyJ?oogo $q׍Zgnҏ~FÊ2E47Kx oN?#7]}X+6m4#U_7K,y}rN4"teaTQ_$FzeeY^c|%FS~P٥ben3>8y!:[]k& )Aր\cT6d(h?UMf@BWTQcgHd-5FRKW^'N6R*4fY',ܦywy`5x4:ojdjz&KgFXYxjNVh[jѶ.qg~(gxIDo%#PփCn8k(hT1{VwxYfPlec$Cl&Ij@lg6j%`yyWo=EE,FGS2U&jXhUm3z(f,HjWLyHOx 'g7WX(|Nf84zvsw(LXv$8NggsŐ 96 Y")?dWcZI@{-/ 1)3I5{ǏH|%9l\+nX=#1Nt"eP}&bHI(X)}'Z$ {x3Ô&^U@yA_K+ddna4r\8bx97`@ăZzvt| l瀅K@Ym0`Y$3Ih|,8|CpF7=8gk`zhnI-P9{xQmx~O+3KtXC9h)ӅL1ь(y`pibhgP=tDr+Ejc&/}H`㉖;6c W,E4i(Ffojs85PԆ+if۶ǡ"Bnk5( ϗR9Ƌ@d9϶ԙP`h`hp}kg]VG:Sn$*7eR%(T:K]Vir89:{֟k n(DiԗYniCw]&X};+iaɒb)|&nǛbxuɑ*8vJ5!>'!6ɩ꩟ *JjJ|y VUz'HZ0Xʤ؍GȞﶠM#6oڷ7\jQKVkgk ]p׏V 
(WuZKx&!QȐyJJrDڻZZy˼ +Kۮw&sԓc)cۀzgIgXzZwRf;ǻ%4q1YOhAEUx;{x ئ{D4[(l|(Cˆ.qw6*<2812̯t5<Áç#MuDwG\BU[RIJQyXbL :pkz뻺Ƥulwy{};qWS|[P)Bv۲KypȀLxm#SAtV"6sj,5 am;j8`a)(ʱ{s+:%_Et9^#3~c6Ú)/5[?R\ŀgh^=(B6Na8#՜I7u d:>`D HR;n"!ʋу圣j6(SQ9+C 4%KdԱ;\VU`yʺη«a31uX{v9+]eɮ<Ю;Za ls7R?F/<қ!r2̃蹁ȫ y 7IAkȒ;>.(_h،l7ٝN6Zl|Kn /&c}HM01 n}-+*r͔_Ӹϛi=ƹF/Z%h6dK":Vˈv@l=Wˍ (ܩ@ɷŇjʈ: @w=r=^ Nړ}dqbL;bJL}-ȒۺirlKݺONr4;5xqz^;Nn闎陮Xh}e|>>Սl==>j4, ainLe6TOA3 8S։;B&D~>BHJپ۬9-_K`j^>,7>bEذ4;f4 x͞mkWI\eQ_^25ю 6@`JfUw@W°]L7#>VH hNi28]n5;A~SJik S>S./R f6]*s,h;gO=?>zb~I5*ޯ'0Vam^%<ߩWLAyt˭وq^hdטJrtIdET\"? ob25}_ o!o.W>nӞN珌l>൫ztm[c-C_ſ/Oo׫-ϸt@/QPΥd=E1uV{q[H5̳TK}IھeݤG~xBq(Ie,LhDj91Ҩ;NZݾy=*{|> װ  %1-w8.KRU,F^&Pq#Q]#s@LW̨'`f4yR%ZqT._B+H\Ky\_5IV$ǬTŪ%QczpYtp`!V< 1m6{R,iҒV^,3jqgR*ѮSּbg  }ltS#M/|}5T6}du~]ΫŠt۩ \pbc11} ~,0 LA%bI;ͻp %ÁĠIlA +:ĐMfBQ1tIQD} l'T2(+R-/ S1,3LS5l7S9;S=O6 dOXyCBa#z`[Q]|O&D)% O/pPAB"u_Lj3'JUO=IO]s,T-4䯶pV6nʖz(ib-t8u-s'ciBlW襡{fϜ#z9gFiS׳F6ܝ!6:8Γ[Ow_{Yu+rFM` EXxrl9< 㟏{yGNC;}S]&EBr`c=p2H|ݩ*A?IӉ=f?Ka4,qPR4C>pd=D!E4D%.MtE)NUE-n]F1e4јF5mtG9ΑuG=}HA4!HE.t#!IIN%1IMn'AJQ4)QJUt+aKYΒ-qK]/LafI}zh0Od: IMI6.; qLm1ok;c|gxDt'?_0Hw kS†ֵN!F AP8*rAI@Yb'_p"pÂS(KB,mEwRMqSҔ=OTE5QT.MuSUNUUUn]WVe5YњVmU U)k+U0)xt ̘ ]}'4hX ȸ7L,R@ z,K6Բ_dAf M(o@_60SW<¥^lhd\qJ{СoU4׊k+nq5TwbolW7c mI9ȸr5gnGܑ :6Tr][ҫq^_[^4opT幎|Z>`Wô0% Pnoz#kHi ny/eMHd BIP Ou D\). 
؉p/l&OGa$/ߐGի6pE}ഐPXpF\,QxwNqTn93AԔRtM5 oNNN4OtOO.JHQ11zk^k[`/$QaSkvv@F" 2=r!3H͊U0*NUHG )Ր2: r@&␲uLS>u~&b44z04Ӹ݃I_,$Za\ekSW; ^j%YD[Mmjt-ZUK7ɭ]ﶲ)w> |H4_?€:7 a|oy4/1D #2-, sYUšM_층Ϙ>/*X=yckVXG&žDY~X02B֍U{i27n&dc!3vpv'`O*_- kJPdkE05_(cٖj''njIgQPrnTvXhiRGnAVAr/poUfHh\V.c$7N"WH.+M[h^M$ws-stMtQ7uUwuY65>^WnxR#ÅGEho1V 5:}I@'Wcm^OQT$`Q|2WR,O<dQY)Kj"V)5eUK2iGb7 ~p /8p&WBw` Q؃Sz2݄Fs JhAT5z O< 6t acv*&80ψWpf~q>~eE>:~+yHU/jآ`ˆ~WvPUo/g։j߶#t̍S~SӾ4xnӌ5xp62\7c$2+'sU7^|I *lr-CŲ拹lTvT%UQx_ sIbWhgXq Sx' l%[׃|yL*ُ4s=\VtiRDu9y9Ϥv\yK/5{k2Quw[ U[WG'1zʃ_y7W5M|9CPrNHe6^ R-9ڈA/->8Hri֏oUxIY1l<^z939r+o LleS뤵 tVxݐS V-$Jp˚q89#5:%Hź}6z9OYvdهsgƍR`h*U8p-tk@ٓޚsSiֵ,^sfwO`nȲkS[k;/ p؊Uմ/{063#[RN'l;f}]ږ8ud;Zԑ */ЁA8Jru.hzw+N 9 tk61˂:ٞwoߛ3fvIUi^i9΢/߲c}Hp i4|1*=NWR'. TYҭSYyzļK| \IOpY4?}Dzׂ;Y259gοs?*ec!:k@L9r0LݧܯWsOj6]C77Sfttx[.5WUa ߪߥg ^Y[)TN<c&]mõZ`0g".n~.0^W4a6}|tg8ƤkIg^O[bcEy^ysV-4>ȋ=VBH;&[ԧ7SjƎ۞سY/{^R~F-R=E9KuQ\*VyQoǻͯNv|3*N>1f dzލn#*/rҷ;zotyARpN&()Hª\kf32}h2v [VbKJّ_EܠסU!ۄÓ$_ߥVcTNb ]]fM++ia**c"nO(ʼncj .^lV%k0t)(j0'80% ,|<ys;qV;NZpV+~9ӺrܪɅvov4+h:Pr)BC~q6w~lMmvg`ؗ-X6Vx5`t$96XuS %! 
&* L!@"V1fՌxc{7# 9#s@.CE)ˬu'K2dqNB鑔G&9%OVYaM%=f_N%i&m&q9'uy'y'}' :(z(*(:(J:)Zz)ji1b8 %IFRW02iq`k¢XEe~ޫv'Tb9S'l`Z-euxzY _))".[ʖ|vTȑ[A61~,]&h|`OU(о:uCSQ\px L*Ǹ@.>D9AUܿ|srJpESV9+kyaI)=E8:k-.\8Sv@x]}w2\IR[iMn7.GX+BZo)kk/'9ߊ)C\1fw¼魟F3o<u?}UGp?9䥷^!:XMǹj3_ߋ'O;pu5ggvi#K x!9(w#K|tWexJV ~qiJ/?lћ*fmo^9N6z 0… i1D&e&Zq;z2ȑ$K2ʕ,[di2̙4kڼ3Ν<{ 4СD=4ҥL:} 5ԩTZ5֭\z 6رd˚=6ڵlۺ} 7ܹtڽK{`B;}ԥL8FDE4fq'0L[ I%}&c/_8 Ͳ)>RZũ\n[gȾٸC+;oQDg֧re3F6r(rcbvb9r 9\> tnߝE e Jan bx,4y(1Q:)GjّRڢ]q*qfOg2!X*Vbv(,Fkv%)!it蟊9rM*lN9'B&+*J8%dY|xh;n[jb,쵷|aE&Ё0O gV 2/Ї%#"-ۏq\^y tBMtFtJ/t5i7_ F {3Ms[&ր_sʝehpӶ &qSvvai*-6 wg,i&M 9grҶi;:B`Ry"ɪ^:u~z̪.?Ҭū_9=ٟg'ڥ@mt krl>)f?"@K_BV½Fmv|hv#'9.@RU=1UWhj~_4A 0LL߷:v"]u0Aj`7(̡ēo\ ,b1y9d"Tmym~m%;Vqn(1$EiBVdF+D10[Yn/3$%{NmVcb+jSZ, ]$j_&%c }!zRdcoT'%0wnqQEKX n\AznhV)^Lj-4s %R,v۹>/lYw:\+4q\;+)aqW[>Xp6+*`Kq+qY!qW[ z4艒CrMT=~KbKkQj/\^"~[C_;i0Vҵ몹͢|އ.cXa\ ԗ]wh害̄mn[؃8\20, - zff̟Rҩ0 d\̄yхNWL+;A"%Lljvml9-+ hK{ԮlCr(bM[jĨE ;Wì*!mY34>ڶBmm@3 3k|2oQzaeLQW\ Npv=[f4rz7o}dg:4":rhn"a8nȿMxnt -ɳtwFXЄ*1EY 7*,a\P4Y50~.C\f$:R0l-F8vpǹ~n%x/r 'Qd߭PqB[<+3xٵt'Zy{`Jy2<^L#ךI?R;HEH?^RB=ɗS~Ɍ[2NFig(v>Yu{$~d}|gH7Ysd4$N P" Uv?V`g:$zb~'gfτtt7cf)(DGGzwB]Ch qp7%[Nķw|tM6XA^}T=v^QXgM![h8J"k DnzPcjj%yrT 0Or QOZqPu>5Tqi!T(扡(Hh( &Ӊ702wydW@UV!qFrV-8VVAXO匵HXpYqnv'7tb>XuZ2qJWHpw8%(ZV}'xw `eqR_g9WZcn(g D\ercEG5bUxsҁOiFd]^VVc?n"ڲb5v"VuH7&RBB}gyb8,p=qۗCFL>yWT!ygS$/eƇ+b(bzy3caӲd6ٓ7=xdr&]kKRd)26Ń\Xd֕#|7/ Jw4v>ݶpQ 7FYb`fOHB!+]v~-Y|t4g(MMT/pg8XVEfM{hfi=~䈿Q&3Ɨ֐'S1ȎYm!mj6E52ndkȊ6jV! 
:Sɠ *Jj~RTR"s  UZ?6 Kc_)kDUpElxb +F 26H(:y80*c:xWqtMڏI,FYՖ'Z(xu9HSXv<6XceAryX%jvha!`Av[˙ VsT>oew'`~(Iⵛԃmnve8sFg_൤0xui)hGJɛEٝa7sڄ.`ȐT`nIO,ZPGHtytК\yh1|bCګysg|ǷۇduəEx٭f~ `d_pc-c8xӇzj#HN}I‰\A:js5NjFGÑu)Ԩ+EذXVևɆeEq%wFPQjsؐbyxyՇmndGyQ+Cap_E+ڶ&%:8peQ<"E  #Z:э !;}1і2<o ̇of,H:\8s8FqcqTÚKL'Ć7{9}Z,|2ٜlgEԎu FibIL geΛ}Ru֑-cde7(zˡ۳z<ش^N"(D'}1ɇGwkPnvhw) ciuQtDŽ&h}L48|){}$d* ǪG#wV Ny0+i  fY"v4?4$^8>vڶ:.^0nxCqL׵Β%{4Jع 6k8u-[(pkPmeQTA~¯Zn'RW^IdkՖ#iưH^ݽZXfiʽ<3)x.@l =7\{mEl4eWZK\x>I7:.֨i(KW]m>I"&gVm؄].N.UO&[]_a/2|h_}W`MG:D5Y{\ {pgcq||Qr\9>xșf6qnηx%𘆓ӳMmÛT]-Y͖9ƽ<:My >^/ƄptPg_gţW?ZQ;lv7M m `u_ޟy%-X/Ì?2Zy+N1LLme"y-1 U0Q&gE$ڭ8 "\HE,TK!BmR\7,zOvѧ$mCnt=ʜ#f [r*#;CA-/ݢ]$1OyfwLso!Q)s+Gka.zK{X򘜟鄩?OWSi7&ewi? |+d$qHpQJ+B"h .+*FcO:i sb!(5EcCiQɸ-3HƊoB 7?R-GӮ9b H~B41TEmZEZ[F-FY fA|WȬ& P=rg 'o_|x `3C[j{ U7B%++Y&=65tiӧQVukׯaǖ=vm$mֽwo߿.pǑ'WyׂG>zuǾ{w?|yѧW}{Ǘ?~}?lY9*6PTAA ,G7 М<.D 4qTǃ7e5 ;E Yl,0<k1̲#p̱*$7Kg# "+ŪL/o.r%uA݌a/"Cߵ4Fdp5p'\#m_Yfx$Q C(jXO\h].0%gx_GMY#V$٩HVm*:W?.)gIiYYUv9,x͝+1}r.񴐶<Ȓ͗_J("K2LNٱ]2#1NG1G _.޶6_/O_o]6U}^N~?F&f%4H"pE)2 EDҡY0_ˠ⛉A(N$-& e8~P ĦxN inK79yO xDĕ/CYu@M_4b*affתȫz6)QmEA?^z$_oJ̗a(D2&50 =k!< kdEV"+#"oz XD pG($v:!J.tqJ_c=d!E6򑑜d%/Mv򓡼x7Tyc3Xɚbd*rթ@٠y]r3m&c/а.5wG3Y,3ާkUl& )зq{]h֊{.1̥(SCMvQ?r#I-V?W֙)ź_dj0qlS|~IƔ`ZQ{R CBec+&4DYґ > D|#)6\ !ۢ_o 7"K4ZNwXaٹImsFܺb\B(V^"\a/Ͼ7c-@w6[ӥ.n"í/7=)7HVAVME5/V+H /O!^WJ7F&YO3Y5F8S2[5=+\u\ɵ\[5]u]ٵ]]5^u^^^5_u___6`v` ` `6ava?upDԁ0PH<,bc`OVҮ#&[549 $TP3@tYK$[1 EgS8fm0sīzՊ/'˰.Vufj6iz(8Mrd*Ɛ3 ic4JQ% F *Y%JvԜ^1ks9%pMPfQo/f[FL'*$Ҩ l_.se";RS RSlt+ "3r-n&Rf 1E-P)YOf/!gXnc- mspjg7jW#QXuwn2S^hyXomz@s|)#|'IrLw,wf,T} OVyw>7'&_вW\U89Pu;xBoZ)c&}PDO瑆3q( v)/,h5wE.de`xp\wM5u/tR dsd5b 4eC+Au }e9e44vر }_9yoԏCeem3$UVll0OljKDk8)Lk夋)C. 
0܍h +kEMJg/qq)?[9_=x%lIљn3vnhk'\{z1w݊  LWv\ Un< E|;ɠAxgzNWw4|ojN^7H I1g/2xn][wWoxV9K9 TѤgZQF0-pZ\|(wZkP 8 h5#ڨ!J{;-FB*<]&@rQuX,q̦鄛m2l87J&g7; NVUl8=csu׼ 1 |jIMȲ3I] "yٻ;{C8 ,ۉC>:T6eSzX7} ˞i;eVhUhf'|s/ %gDá5'~T@gȗ$0£6-/yȟ j Y"h1D ]Xp pݖL.'C[ l{-~陾^<0}굵~y~09>\MVΠVpNCѭiq=o[&vQdS#׽ 'Wv8O!GktىS'Dpl)1;scalapack-doc-1.5/html/slug/img75.gif0100644000056400000620000000117006336110554017074 0ustar pfrauenfstaffGIF89a!,{޼{G扦ʶKߓD" '̦yRJYjn0a<7p䆏Xv41\4e%Ί QgLf@]~۩Z2(k| &#A(G6*`d.Vֵi,^Nxyۨn.|S2R& ֚"KY*fCnv7V^|,3,FB7GPnտծϦܺIm|M٩=/ȡɢ |QcHQFSTUUЁ1-OLWVƘal*bh*. h"IшuU;scalapack-doc-1.5/html/slug/img760.gif0100644000056400000620000006035506336061631017170 0ustar pfrauenfstaffGIF89afB!,fB|ւڋ̼H扦ʶ LlZ0L*̦ JԪ0;YQ^Nb $VwAaxB()9I !wɈWDiY:g:5i[+j ;L\l|.?O_o7odAV?<0B$*_.[Hp;zQ ;i1婁*| 3LyZ^d.Qk)rСD t]Nw3YdNP51t*o'ONmֵlۺM_tvHZZa&| 8 >8Ō;~ 9ɔ+[9͜;{ :ѤK>:լ[~ ;ٴk۾;ݼ{ <ċ?<̛;=ԫ[=ܻ{>˛?>ۻ?ۿ?`H`` .`>aNHa^ana~b"Hb&b*b.c2Hc6ވc:c>dBIdFdJ.dN> eRNIeV^e\.Eb&Yr$9*feDBI'rui9h _E h,~jQU=J^ 'RX6/~Z(XJhPcj5 juC޺H*hZ)루"lKS>f+lfF[mJk!k–kn ). J/Eʤ*W\e..P_Qoq Lr&r*r. 3w3s5ǜ)^0қZC@#14jTz ]G҆2DFN\ 5 Z#Gq[׿6 )vTm}M4i.WJ7nhE Oi'~-4+_s.t^K[^sO%?J^v9T_K,2z6-AC 򮚔Kީ{,cUT=ߞʫ=ԡ<;mjz{!Y(j r#A?Q_$דvLeRL] /ktͶ.8T'@PvP.u>/N֢ 8{꫊x'^f ͦVm%Nɵ'9{lk:"vGno{.ύ q>L/KڤjԶx[X"آ<='oqA;Tew*:l 'BӚM/YO~ٔ'bԸsӆoKC,eB?,x@6? 
u;:`4mGLiNC൶ӝe@l績vUiG7]G\ՉXmsR8'ܿ44has<&qc _unNؿqs<`u&[WnGw%_̈́ ^3>.q ,wdGqRUjjWϻ|L8o/}fF ́1 [><3'qsPw}wgݵ}emgwUY9G75~~ZȗwF _tTe6wt[MA$i'@[we/ hZirYhIBؕ!āv)}!(\ȕc)rv(]5x6b*<L~rk4:]7OEGȎ"9 9dzT9MG4NG6iHuOT)Hycy{U:j\Jj1uҡNrX:ȋ$:M9YHh 5m zfhrojT3!5ZUw +*o٧BUEz35V(< 7-5 (YULИVwP:TjT9āuӍZZB(u)UpOGYjَI4ͩP$4U1qৃZ9jxoXuZ}~Eяyc:^zh=x/Y焧EwewpIIX)J8J,u(xWD5ՙ ZF5o_԰*4-IW3h隉9؞!ړxHk +I)Wcɭ{O~ɬ~|b;7y魲%Z7DVu+Bl-EL~{i)uRf_GCyTH]v.ъ=YNvWiۯ)PF"EuFIoշKe>׺<8n洱/ ٟǹʜ)4GNÎs9< hkikUC{NЉ˵ٛ+i HQJ'OZAʯ&:*hxsk컘 nXjZQTÔj}HQ #2(j/fPsǻSLUlWYS"] *!R։khWiQjQ,*v.9E79:pҪO3<٬tMLqbmy`< _YPǨ~Vzݪms$$۫uoT,fKo%o›J:BܽȕV=gZ,e{,8ʺzhQO0.n vv?6l ^KٜoYeΜvάjȸܹ|֫#M[š }3xX 2Һ m]م-6z3ݶ=IVKO @ ʡ woiAk)R||]~}0˒_O ݖߐX:N и{]V{q֛wPw!G~,YeӧQ"Hk<;2[ [mz};al\W$i|GysEOWV@)9v# '$aZy{]M=..v+/,I& g/Vd{O "@l(ހCB 'RN.$6fq PzP_ѵX/zAq:6<G\SH2bJG 1S[O/rJ"(dJ<0AxD;IN |hGs4. 32\1?Bt<14&7-A S?=2soY+⮻ÆSJ`L]y]"] c)7̺Sb36i=I;|Z5Mjkq-sMWumwWy{W}&K` X58Z",\IäZc78>Wx 3G?yXx$/EAG pYh󲾼95[>ٙf-ΑNc~V;kf;NCiϛӱg5j.Z\t<>+uCEѶ)#>2+,ECWt|@ɭjH7FQ2D}S d+ϝ9Jti)./\Sp~?y%?>6PN}ߓ gC?Hŏnz`WtisOLB HG%G=cpȺ@  d)ۑKچ>!iЂL̜*z<Nh+GU)N|LG=qVZ956K|D Yi THuOhalM xOcY+"Fn1|ըxbE4) Rt+aKYΒ-qK]e\~L?1C(LeΣaPkӢJmc@Lmcs-RۏFH#:b϶ΘHOp@5;`Lc3$:#fkQm| md;=46m4%窾C üx6z2ea2!C@bUB+Ϩ@O}I:ЃR_:I+ Nqʔ4CA o!g?έHLQGb{M5+]R"t!8K>xT+dO=Jb]t!Н[C"aԴ?yiW:8;-H ZaO5%UY"D=iQ$~gg1tk-)Z1UpˠEbs}#AP:ڨ֑nت!7#fFV[G^n#97fSj|l΄T]{I$=a O1a oAb%6Qb-vacϘ5qc=d!E6򑑜d%/Mve)OU$r|N+oYY dV斘wb63NTkO̼K$u ?KaN;Ȩ%53Y#jhI<3dIoiS"ӥ&fZvGVU*E{ sql@WfHlY~瀝miO{#{-+joZWFBF\Q9Wnthx=J{4]r5Pclܦ7ŜTވ< |фY e@Deֺb!dN-o՜937&gYbRD|-$x3wyԦ1%Qbc+ƑEotҊ%VF7jl+[䣓U Iz;)Z2^xT-vDN|  P?:EzXB,5sW<T_xƣueՈ=ZzyK M>|)_ykq7p_k==tٷ&`a_}Z׶tkw0|P  Op /PQ nϝNIKnxy짅oj`S MLp3!p00 a0695p 0 P ' g-4OT-h,Κp0G>XALX/)Ey(| 1p P 0 !  #O q p@aH,/5g ٘.K.s˸:/oҦ^Qn-nl0T  qQ Q!q11+1O 105n蠺+qndq "I&F)줏 q-r  !q %(r."5k jk:20r#=r$K#O$UR$Ar%(#[#Ir#g%M%k2$m nF"po"b(E)22)R*#ɨ**+2(A)*2,R Z'a& .]aR.2/r///30qN0#lJ0)0ln[i>1/,bq22/s2O3g*P3sni16Yh.Mk(. GlXI6+ q nF8I8 9J. +92*!'q!jz.< R  1

+"Ot 4vJlL(ۧV8 t6U*(7'p?bdo3Pnu"p7K0>4sk7?V9U0Dma]Vo4v @ J9SvԵB]sG7H2 `l  $Pq(w-H |rƣ^k4uGm=QVVvuSrK6>{xSxpE4Wz,艹7e.7xsWH!C[B2F71IQIǷ{Evkӷ !]s[PzjN7@#䌤 {O(W >@v_Aj^U{}3#8F<)8Y's늁u*ǰz;#>gyOy=ZDgJ4;8t`փ~ط*t؇nQ¶X1X!ى5嘵6>xinzշA(e@;.yqotd=더lq~ԡ_q FFy5RpUٔ AI<ia-7@ʱ!#]Uu.IؙYֲTK"$&'+r@Xk&Kt' Z& 'y:Wվ2i[ aӒճH}Nx8sq7_ ;]TWcs d/Ӷ`3CObMηw;qZ{tq`UqtnXVF]q5wziau$e>hhG\s`qRA^ i#|$GVퟌu"i\?pxeeM&8omn Xް>9^u]d|=n^ƥEtpKyDߴEIﰻXWwtO}ǢJY+\]ho7qwoXxoh7 k͝{ѣ"]> wU(%2ĩo}U؛@vsY~sv }O=j̭\ǍwD]/^Lz\幑HyIۄta;u98[gׅE b LueD;zxgܷNX]܋fY ^E]I]8E=zX\Ѹe~*+}(]ѝ\\^ ߡX71viĵJ?~NjXȍ}YvsOGflNAxF{Ⱦ; lqEyH >y^m6-GyFsT򊃱Vi#5Y)z E9`-8 [N_Z螣/;CZJ[?J:k5$:9U('35S?t,UkeD-2 ޲?_=l%׺??* o5%ǃ[C >.?rj5nE2+ۺ/3]7;{ ǂZ K1'4*R+6l[_ Exvi?` G]!X"#De"f^(i)j*'+kD&%%m/(*q1r2V-fb0.nV-r8y9X:{g;={|}?m%/a1JĈ'Rr<j/#|7/< K WiŠ:wY> M'hP!Kl*0WFdfJD9!ɫdpi,ڴO|-ٷJ[tݺҤV5ms֐94928*Aj'Sl[mvToޯGG{h ԗWn͓}aKMvڹm4@vm8doK7/u{z΃ 7V<9Ǔw1׳o=ӯo>? 8 x * : J8!Zx!j!z!!8"%x")"-"18#5x#9#=#A 9$Ey$I*$M:$QJ9%UZy%Yj%]z%a9U]1h(lR`h  uT=(B:f:1(_ @(fX5LjbAtYT),BdzX@ŝZHIAdaJR s9 5/렭k&@VDU3쪛NBP+(84~KP"̉ZK縗~1nѯr`<˕40C,[p[hjS.]6".) ,cTsCSʮp4K[I|bu*8 yn%jз14b4$iX ΢y48yI-Gyt oscCQYtzNscpk{ɽeՒk}0n5|3 6JUUUW~nYǤw~nκ3-xz3 5:K޺Zo!̚{Ը~*unz5gn4_\L#3z?<;b-}? 01@ NjW. 
J4ϻs a UE0l TZrAkO]XP~|`Ai~ŒhCܝMRk@j*~' @Ė4QR c)gX$0ZQkahEYns;o _ˡx810g 8B*gI$82Alv [;/Nnv#S $Hm0#%."$Qs;I]( !T74>e%$ƦY־S*K+;1{ܥ.69jי$Xsrm'# ĕn7ҹb#״q=v3Qvy,bI߄o`ڭVj\b *~7,$+3=MV\W3JM k|4Z|m*^ i gCE kF-n~["Wm(gK(8zd/~/,>03~0#, S03 s0C,&>1S.~1c,Ӹ61s>1,!ۧjxde9s3"K3}>Y|Z.y'#/ʑK†WIsAse:.>5j/Ƈ*ҥ#c鳭dO->Y@5`?.6|d3~6e]iS6rms>͞_ wsyl0;Qsr8aGfgY_s۩QkL1k #܍ Vy.-usK|Zvn:;!JD}<rcd(M:/o85A_5nZQuh 1*u=majm֫PdNS3 Mt}]gurt?.Jk fsJߴa>93:FO"0u< OPvբ%<̒?G)C[h9ݠnJpؘZ/|5{Qdw=@a/rݫpvvU =ﹽ}eޑ&4h/f;3[fV?}g*`hޞ0JKYSAa_й^qEQLѭT=)=_9 6  gՠ ʠ`Ơ &I ZiP9=R,2} ~=H^ Y42*a& `.a"E "!:!!T(UIM2BU!QdXc܀v $U)Mah"6aQ%"-֡!!5b-a.0֟ ᵥbeEe-ΧW KP[PQ@m-"1cb1"b:-c!,,#=;2`4Aiʻe9Ge@oA*V@$Bd(V9d[M@dC~BFRBrFJE>FVdG$Dwٗ7>bz<# MG^L$6~NMWEdO$P eQQ*NQ>NIRR?ږ{j8JK*UXY%ZZ%[[%mڡ%1\]^m}%^_^vef aaQNhMP>qIY`bXA'(_f qf]^fML(?E`hP]Tai戡u”Ϲm,#.9B`.ec)M a&{g=#9c;g)Aa]FAKJIgP'<=u#awx2T" yfzg9'!R2fG`ma=#3Ic9eW~^Hj< qZAteMzHIdI(Hn(uy>&SBeS"%X%U lˎ())&.)6>)FN)V^)fn)v~)))))ƩVHbWEheֹqeyᴗ5A(+e!~)j͕ћ:9& 7"E*t qhP 9MKÙjJfҺweRAiafiܪ֑ÈvW8LtD\TbJ]R!%e]]hc$I4VR5፛!*Yξ:RgFVb]E(ը+F])&%&2R֝ޝuZq"Q rS 6ޔ;%|f,1m8frU櫙a>^i-kw)6aӽ1&h6' 4|~S.PQmʪI-^}lq׮E^m .gh) k3UZZaR6ߒ٧m#jsyO&l6ʹ@1)nb.&!7j鍮"6ba. bRZh=b 偧k^Uy®]Dozd_)vg nQebŰYRӮ.c{gOw/[o~Jr.NU#)K1-Neu8N͌[#'G*-Pa}6gPi\efvfc/V u:ϝL&ҿΫ` 3b#:$2n (*#iz,YEon҄]h5Ƣɯ_IՠJl챩r!"[0G"g#oo3Vb5bŅffձhVAZ(̯n$Y.$#!ga>b2l扲 )+pPfdT2sR:HF3>sB^~h/G1w%+28RWNP6sNh:(sNF`Vgy%97;*ltsys0)C7C?4DGDO-4E߈nBsE^$_jtGWE4Z4,$e컒0KV7l^#h"tngl,R.lQHS/S^ nFOu8ruJ{Vbb! 
g5ozd0X5>q$Yw!mĺ0>5Н0\\.ߵ-5^k iຫ']`]raײ1)U:6_he'Zff-/rj'6d66#$5TKmmrs=;s;۳qfppXW#stWu_7vgvo7www7xx7yy7zz7{{7|Ƿ|7}׷}7~~77/t 9<7ki'448-jj 2N8`^j􄇸ox2z!c* =.8Smח88c{hH ܑ 7yʸg&oOmW_*+,s]9wqK+$N!Vܴ~sXJ x= rr *v?l*pp3Qz{eTz[zdwoްJLкz܉߫Bʤߢ::z7#8_N#\t:#wqNO=^;ir̚{6W{CSҨwmW({;.yB?.f{bfn'.9 :+;Ɣmş;-ϯȇ>;٧NjnKPIޖ҇^׃ /ϩ!-E|2`3C- =-3> ?0Poncyf-,E)U<ɷ]˜|~PH#׬JBCE_c΢Fq&J8<;[cC3;r{QlkK:LS9 9I5EmYMqmUe+tC^ V,EN콜t 'g)?o^DoOW0y/՗PKQE>$É-^4:8x hSe*]eʘˊʹISg˚dvX1hȉC8TK:&#ԅUVLY^ eI;`sVYrQPū;%ޤMz35X⣅9fE1ȕ-_ƜYfΝ=ZhҥMFZj֭][lڵmƝ[n޽}\pōGnnr͝?2ѸGǞ]@Nu͟G{yEӿ_ǟ_6-LHX  0B Eр62"!$CkP OD1EDMH AE԰FWGC`|y0bG]p J|J- H! cfL#%l+(߄)̒K YKI),LPCPBr2NGmmN>:)I54,P0/(!3;TUWqOL/HVN@#C,s CjPu(Oq䡈Gh`yM XD# Tb'w$JD%$ L,=@zlaj"0Jz `nCM&q3!4Le#m8/yXe&5'#xHH(m7b0m=S"\Xx,˄3E916wʅBZ%;Yt r ?!MKu'8>j+ITh:)*"q&fYWH_)F i %ТR'R+}\JPjT`&8qLթI|B1v\,V C)2oIy)*>Wˆ2tUJ")(*N YWYuu[;H)9xņ,)T1 D 5 4V[H3)\/V›rCPզp 5`m%с|v|n;=޲ƨR).c \X=TF%yv3leeC2={äݬ"B_QL%$fZ5-ȫ[ˁ,+ّ֙/V1xH]a #Qb+bh71|f샌U%ǖW--Q3Q")7mבHO2wr"^L^]Ⱥ1Qy/:jId e@:;81{ri.)#I2it^%y5Q1j# ?ZZÛ=Wu4.{uLl=N6<uw=nrFwսݠζ㸭wA6amo~?iOΣ?%G{xHb!Z4#7 8kYDC f<3*W֕GyprCc88~9*;7?g[9smsEKWt7etk-:mZK\ɂ=/lѪJإ--*|gZ_jOjj p{5imr3ơ҉\Ɨ;q+֨w{⁍wчgo ƕػ^sqR>n^! -=cθQzrx|W~}w?~Gտ~?$4DTdt ̙@ !4 dRzr; 5?2B05`A43(ϲ8 ׂ8<_EӼ@%.,0%L&#° ";뼍ӪBC#Z+D;.T-@A@C4c+ҭ*2 u |'>is##D,;݂K-jj"EL÷ >\JL$±SØD1:|D6#EU%|UWL4á C.h[-9˝JN@*iiF l<5mF!npq$ĨsDtTudvdrxyGpEʍj"q6&> A6#!nˡ\4EԵ13j(_5*Hѱ4T ~\*䰐5p #5Pb:IHZ`}jI\-\cI/D55ѷE2HFKb LF) Q{ 1AԫC[kBvhŒʐ 864K0l]:5"l˓4. 
"OLC³C.c',ạKF )A)&es-Sʌ")EJxT%Jy tJ2 P2  TV}UĩJEQ-*Mڻ!tC[,Bu3224hr2:0+Ր,9N ZAhΊZiZ32SM3+%3b؉-U@K@$|T[=Eb:',Y;bUa9RU1:+aWeЃ*i P-#bO.*Y7\==]F<#[{F eS2F%Vc )_6F=7Εkc;A.SmS3-b.'edz*صOEd#PM)D~O$0\ٽA̐%1LdKd eCVfQdc./ü81 #ڹu?$[35ִ-SZ_ SrF?s4nz gn \v={ܛ{&٥P(g|>C>2/KA\Ԁށ\h&6FVfv闆阖陦隶&6FV~^=X Re{gu6_MH 6y BU u{5_.ɺhoTA= P kCd&EU%5l%PYDOkf[]s56z֍`RLV\g<0:&lP6qZnE]UJVlb&jmܱ~d&2M;KoV'MXOb90|Xf.M7ᳺ͓`ho4N]#vgI\f N$NΠL .qbkeT\enB0-c>l\jEC(a;{Ҧl9BKǤQT Ϩ\q 'c9PcU)gORdC-s&bc+r/~)uvĒ.Ov@ͺ~=^*'X\Me-X*Iwe#EsHR0Q_Vź[*%|5/>SAt)]ϲtDwRZﬦW(XcMqwnu5,k$z#C jKXu"SR.xmT(=wNgH4pJuwtTZn|,1[.xӄIL<ܓ,׏eS?vJjV7tGW^uFNzbw}MnxwO5wWGyY5]OL<r%*y^Xu1 i[˦BH[ِgz'Wds}Nk,Lx[e6e:)+eb*{.NZa1sZtTNZfv[en_;|4[ov=R[ff}O̮35aj2˯ܼU3}q^0A5^Ķ^ -7~NLɍDu߶R3~Hin톴M'vݭŦ~b#wGl܂WygK' >U6 H9*p,tmx|pXc-^ OGDL͈IA 54~G0qL.znR*:J_ZosvqHJxB-(]\}z/r!wI[M)ʗ}OKX H0yjfNBNN\D&6d Sɓ(Sd"c9ϜoCfN `@)ӟʣH* rjs*TN Oԯ }&4H>[!=~+nC7kf_w;.˷\Yy%wpY"DUqoj١YuEo幔eb$L,u,췵׎ 3vQ Ϯ;pĐQ[3ӊ_Y4kg.K~UKx7U=p9K'4(_>[ps r_R 2'v{=!wimmۇ#FHru4)b0(#} g~) أ}?&""EK936$x:8TT X dhUZGPblOlt%Fmix|矀*蠄j衈&袌6裐F*餔Vj饘f馜v駠*ꨤjꩨꪬ꫰*무j뭸뮼+k&6F+Vkfv+k覫Jr5 ~exHZQcp{o1veLt<+ue1 &kZpq"e[D&°sȡ0d|44s?1 V4]VlV+YArULx5],d S 6^ :%OG\HW]6S3(iKi1 #0NRHH2:G^V/8dyz'?adm1Zyٿ3^EZÂltƓ\5RBf~u&18k^W#bU"fK!z̯]\\֖Axwqdh$ Y'-h+z2<~ J]X8ee[ qAEfB3WKWk )+Ϧ8 .R^*ݵ-~ Z FP/rs 1#Z}9<1xjGAJt0?0!h 7C'J. 
Next: Error Handling and the Up: Workspace Issues Previous: LWORK Query

LWORK >= WORK(1)

 

In some ScaLAPACK eigenvalue routines, such as the symmetric eigenproblems (PxSYEV and PxSYEVX/PxHEEVX) and the generalized symmetric eigenproblem (PxSYGVX/PxHEGVX), a larger value of LWORK can guarantee the orthogonality of the returned eigenvectors at the risk of potentially degraded performance of the algorithm. The minimum amount of workspace required is returned in the first element of the work array, but a larger amount of workspace can allow for additional orthogonalization if desired by the user. Refer to section 5.3.6 and the leading comments of the source code for complete details.
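The query-then-allocate pattern behind this advice can be sketched as follows. This is an illustrative stand-in, not a ScaLAPACK interface: `workspace_query` plays the role of calling a routine such as PxSYEVX with LWORK = -1, and the size formula is invented for the example.

```python
def workspace_query(n):
    # Stand-in for a workspace query (LWORK = -1): no computation is done,
    # only the minimum workspace is reported in WORK(1).  The formula 3*n
    # is purely illustrative, not a real ScaLAPACK requirement.
    return 3 * n

def choose_lwork(n, surplus=0):
    # Query first, then allocate at least the minimum; any surplus beyond
    # the minimum can be spent on extra orthogonalization of eigenvectors.
    lwork_min = workspace_query(n)
    return lwork_min + surplus

print(choose_lwork(1000))        # bare minimum workspace
print(choose_lwork(1000, 5000))  # oversized for better orthogonality
```

The point of the pattern is that the second call site trades memory for numerical quality without changing the calling sequence.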



Susan Blackford
Tue May 13 09:21:01 EDT 1997
Next: Alignment Restrictions Up: Design and Documentation of Previous: LWORK >= WORK(1)

Error Handling and the Diagnostic Argument INFO

 

All driver and computational routines have a diagnostic argument INFO that indicates the success or failure of the computation. It is recommended that the user always check the value of INFO on exit from calling a ScaLAPACK routine. The value of INFO is defined as follows:

  • INFO = 0: successful exit
  • INFO < 0: illegal value of one or more arguments -- no computation performed
  • INFO > 0: failure in the course of computation 
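The always-check discipline can be wrapped in a small helper. The function below is for illustration only (it is not part of ScaLAPACK); it simply maps the three INFO cases above to messages.

```python
def classify_info(info):
    # Interpret the INFO diagnostic returned by a ScaLAPACK driver or
    # computational routine (illustrative helper, not a library routine).
    if info == 0:
        return "successful exit"
    if info < 0:
        return "illegal argument (code %d) - no computation performed" % -info
    return "failure in the course of computation"

print(classify_info(0))   # successful exit
print(classify_info(-2))  # illegal argument (code 2) - no computation performed
print(classify_info(3))   # failure in the course of computation
```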

The value of INFO, when negative, is calculated as follows: if the error is detected in the jth entry of a descriptor array, which is the ith argument in the parameter list, the number passed to the error-handling routine PXERBLA() has been arbitrarily chosen to be i*100+j. This allows the user to distinguish an error on a descriptor entry from an error on a scalar argument.
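Concretely, a descriptor-entry error on entry j of the descriptor passed as argument i is reported to the caller as INFO = -(i*100+j), the negative of the code handed to PXERBLA(). A small helper pair makes the encoding and decoding explicit (illustrative code, not part of ScaLAPACK):

```python
def encode(i, j):
    # Error in entry j of a descriptor array that is argument i:
    # the routine returns INFO = -(i*100 + j) to the caller.
    return -(i * 100 + j)

def decode(info):
    # Recover (argument position, descriptor entry) from a negative INFO.
    # Codes below 100 mean the error was on the argument itself.
    code = -info
    if code < 100:
        return code, None
    return code // 100, code % 100

print(encode(9, 4))   # -904: entry 4 of the descriptor in argument 9
print(decode(-904))   # (9, 4)
print(decode(-2))     # (2, None): scalar argument 2 had an illegal value
```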

The standard version of PXERBLA() only issues an error message and does not halt execution of the program. The main reason for this behavior is that some "errors" are deemed recoverable, and we wanted to allow the user the flexibility to continue program execution if certain values were corrected. If users wish to change this behavior and additionally halt execution of the program, they may add a call to BLACS_ABORT() to their version of PXERBLA().

If an input error (INFO<0) is detected at a high-level routine (ScaLAPACK driver or computational routine), it is possible for the user to recover from such an error and proceed with the computation. An error message is printed by PXERBLA(), a RETURN is issued, and the program execution continues. However, if an error is detected in a low-level ScaLAPACK routine, this error is considered unrecoverable, a message is printed by PXERBLA(), and program execution is terminated by a call to BLACS_ABORT() .

Likewise, if an error is detected at a low-level routine, such as a PBLAS or BLACS routine, this error is deemed fatal. An error message is printed, and the program execution is terminated by the specific error-handling routine.

All ScaLAPACK driver and computational routines perform global and local input error-checking. In general, no input error-checking is performed on the auxiliary routines. The exception to this rule is for the auxiliary routines which are Level 2 versions of computational routines (e.g., PxGETF2, PxGEQR2, PxORMR2, PxORM2R, etc.). For efficiency purposes, these specialized low-level routines perform only a local validity check of their argument list. If an error is detected in at least one process of the current context, the program execution is stopped.


Next: Extensions Up: Design and Documentation of Previous: Error Handling and the

Alignment Restrictions

    

Most routines in the present ScaLAPACK library have alignment restrictions. Alignment restrictions are constraints in the type of distributions and the indexing into the matrix that the user may utilize when calling a particular routine. For example, some routines will not accept submatrices whose starting index is not a multiple of the physical blocking factor.

More commonly, routines require that their various operand matrices have certain alignment commonalities. For instance, the solver routines generally require that row i of the matrix A be distributed across the same process row as row i of the right hand side matrix B.

Because of their idiosyncratic nature, it is almost impossible to give a full description of the alignment restrictions inherent in the present library without doing so on a routine-specific basis. All ScaLAPACK routines provide a description of the assumed alignment restrictions in the leading comments to the routine, and at this time the user must consult the actual code to find out what restrictions exist, if any.

We are working to remove the alignment restrictions (with the exceptions noted below), so that the user will not have to worry about alignment, save as a performance issue.

Certain fundamental restrictions about data distributions are not currently being removed from the library. Examples include the fact that the operands should be block-cyclically distributed for the dense codes and one-dimensional block distributed for the banded codes. Also included here is the restriction that all operands be distributed across the same context (process grid).

Note that the ScaLAPACK library includes a redistribution/copy routine which allows the user to explicitly move matrices across contexts. Similar routines could be provided for distributions that do not match the ones presently employed in ScaLAPACK.

Finally, we note that the current descriptor structure does not accommodate the definition of replicated vectors. A replicated vector is a vector that is distributed across a row or column within the process grid and duplicated across subsequent process rows or columns, respectively. Such vectors occur, for example, as the IPIV  vector in the LU factorization and the TAU  vector in QR factorizations.
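For reference, the process-row coordinate that owns a given global row under the two-dimensional block cyclic distribution follows a simple formula, the same mapping that ScaLAPACK's INDXG2P tool routine computes. A Python transcription (illustrative, using 0-based indices throughout):

```python
def owning_process_row(i, mb, rsrc, nprow):
    # Process-row coordinate owning global row i (0-based) of a matrix
    # distributed block cyclically with row block size mb, first block
    # held by process row rsrc, over nprow process rows.
    return ((i // mb) + rsrc) % nprow

# 2 process rows, block size 4, first block on process row 0:
print([owning_process_row(i, 4, 0, 2) for i in range(12)])
# rows 0-3 land on row 0, rows 4-7 on row 1, rows 8-11 wrap back to row 0
```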


Next: Performance of ScaLAPACK Up: Data Distributions and Software Previous: Alignment Restrictions

Extensions

Extensions to the library are under way and will remove the majority of the alignment restrictions in the PBLAS. The ScaLAPACK library and the PBLAS are also being modified to allow the possibility of a partial first block, as well as the incorporation of aggregate (algorithmic) blocking. The partial first block extension makes ScaLAPACK fully compatible with HPF and necessitates the establishment of a new matrix descriptor.

The incorporation of aggregate (algorithmic) blocking at the top-level ScaLAPACK routines, as well as in the PBLAS, removes the restriction of a user's performance being tied to his physical matrix distribution. Instead, the algorithms will perform at an optimal block size predetermined inside the PBLAS.



Next: Achieving High Performance with Up: Guide Previous: Extensions

Performance of ScaLAPACK

  

This chapter presents performance numbers for ScaLAPACK routines. The numbers are provided for illustration only and should not be regarded as a definitive up-to-date statement of performance. They have been selected from performance numbers obtained in 1996-1997 during the development of version 1.4 of ScaLAPACK. To obtain up-to-date performance figures, users should use the timing programs provided with ScaLAPACK.





Next: Achieving High Performance on Up: Performance of ScaLAPACK Previous: Performance of ScaLAPACK

Achieving High Performance with ScaLAPACK

   

ScaLAPACK achieves high performance on distributed memory computers, such as the SP2, T3D, T3E, and Paragon. ScaLAPACK can also achieve high performance on some networks of workstations.

Distributed memory computers   are intended to be used primarily to run parallel programs. They typically include an efficient message-passing system, a one-to-one mapping of processes to processors, a gang scheduler  and a well-connected communications network. Networks of workstations may be designed primarily for use as individual workstations and may not have all of these important features.





Next: Achieving High Performance on Up: Achieving High Performance with Previous: Achieving High Performance with

Achieving High Performance on a Distributed Memory Computer

      

Assuming that the ScaLAPACK installation was done correctly, the users need only make sure that they are using an appropriate number of processors and that their matrices are efficiently distributed. Here is a checklist to get started.

  • Use the right number of processors.
    • Rule of thumb: P = M×N/1,000,000 for an M×N matrix. This provides a local matrix of size approximately 1000 by 1000.
    • Do not try to solve a small problem on too many processors.
    • Do not exceed physical memory.
  • Use an efficient data distribution.
    • Block size (i.e., MB, NB) = 64.
    • Square processor grid, Prow = Pcolumn.
  • Use efficient machine-specific BLAS (not the Fortran 77 reference implementation BLAS) and BLACS (nondebug, BLACSDBGLVL=0 in Bmake.inc)
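The sizing rules in the checklist can be turned into a quick calculation. This is a sketch of the rules of thumb above, not a tuning tool; the grid chooser simply factors the process count as near-square as possible.

```python
import math

def suggested_process_count(m, n):
    # Rule of thumb: one process per roughly 1000-by-1000 local block.
    return max(1, (m * n) // 1_000_000)

def near_square_grid(p):
    # Pick Prow x Pcol = p with Prow as close to sqrt(p) as possible.
    prow = int(math.sqrt(p))
    while p % prow != 0:
        prow -= 1
    return prow, p // prow

p = suggested_process_count(4000, 4000)
print(p, near_square_grid(p))   # 16 processes arranged as a 4 x 4 grid
```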

If the performance is still below that expected, see section 5.3. For guidelines on tuning for higher performance, see section 5.4.



Next: Performance, Portability and Scalability Up: Achieving High Performance with Previous: Achieving High Performance on

Achieving High Performance on a Network of Workstations

 

If a network meets the following guidelines, ScaLAPACK will perform well on it (see section 5.1.1). If a network of workstations does not meet one or more of these guidelines, read the rest of this chapter for more information.

  • The bandwidth per node, if measured in Megabytes per second per node, should be no less than one tenth of the peak floating-point rate as measured in megaflops/second/node.
  • The underlying network must allow simultaneous messages, that is, not standard Ethernet and not FDDI.
  • Message latency should be no more than 500 microseconds.
  • All processors should be similar in architecture and performance. ScaLAPACK will be limited by the slowest processor. Data format conversion significantly reduces communication performance.
  • No other jobs should be allowed to execute on the processors that are being used. If the processors are gang scheduled and there is enough physical memory for all jobs on all processors, this requirement may be relaxed, but we do not recommend doing so without careful study.
  • No more than one process should be executed per processor.
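The two quantitative guidelines (bandwidth ratio and latency) lend themselves to a quick sanity check. The thresholds below come straight from the list above; the function itself is illustrative.

```python
def network_meets_guidelines(mb_per_s_per_node, mflops_per_node, latency_us):
    # Bandwidth (MB/s per node) must be at least one tenth of the peak
    # flop rate (Mflop/s per node), and latency must not exceed 500 us.
    return (mb_per_s_per_node >= mflops_per_node / 10.0
            and latency_us <= 500)

print(network_meets_guidelines(100, 500, 80))  # capable interconnect: True
print(network_meets_guidelines(1, 200, 900))   # Ethernet-class LAN: False
```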

Vendor specifications and actual performance often differ considerably, especially in communication latency and bandwidth. Users should make sure that they are using the most efficient BLAS and BLACS available on their system.



Next: The BLAS as the Up: Performance of ScaLAPACK Previous: Achieving High Performance on

Performance, Portability and Scalability

 

How can we provide portable  software for dense linear algebra computations that is efficient on a wide range of modern distributed-memory computers? Answering this question -- and providing the appropriate software -- has been an objective of the ScaLAPACK project.

The ScaLAPACK software has been designed specifically to achieve high efficiency for a wide range of modern distributed-memory computers. Examples of such computers include the Cray T3D and T3E computers, the IBM Scalable POWERparallel SP series, the Intel iPSC and Paragon computers, the nCube-2/3 computer, networks and clusters of workstations (NoWs and CoWs), and "piles" of PCs (PoPCs).





Next: Two-Dimensional Block Cyclic Data Up: Performance, Portability and Scalability Previous: Performance, Portability and Scalability

The BLAS as the Key to (Trans)portable Efficiency

The total number of floating-point operations performed by most of the ScaLAPACK driver routines for dense matrices can be approximated by the quantity C_f N^3, where C_f is a constant and N is the order of the largest matrix operand. For solving linear equations or linear least squares, C_f is a constant depending solely on the selected algorithm. The algorithms used to find eigenvalues and singular values are iterative; hence, for these operations the constant C_f truly depends on the input data as well. It is, however, customary or "standard" to consider the values of the constants C_f for a fixed number of iterations. The "standard" constants C_f range from 1/3 to approximately 18, as shown in Table 5.8.

The performance of the ScaLAPACK drivers is thus bounded above by the performance of a computation that could be partitioned into P independent chunks of C_f N^3/P floating-point operations each. This upper bound, referred to hereafter as the peak performance, can be computed as the product of C_f N^3/P and the highest reachable local node flop rate. Hence, for a given problem size N and assuming a uniform distribution of the computational tasks, the most important factors determining the overall performance are the number P of nodes involved in the computation and the local node flop rate.
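As a worked example of the C_f N^3 operation-count model, the sketch below estimates the best-case run time of an LU factorization, using the familiar constant C_f = 2/3 for LU (within the 1/3 to 18 range quoted here). The node flop rate is a hypothetical figure chosen for the example.

```python
def dense_flop_count(c_f, n):
    # Approximate operation count C_f * N^3 for a dense driver routine.
    return c_f * n**3

def peak_time_estimate(c_f, n, p, node_mflops):
    # Lower bound on run time: P independent chunks of C_f*N^3/P flops,
    # each executed at the node's highest reachable flop rate.
    flops_per_node = dense_flop_count(c_f, n) / p
    return flops_per_node / (node_mflops * 1e6)   # seconds

# LU factorization (C_f = 2/3): N = 6000 on 16 nodes of 100 Mflop/s each
print(round(peak_time_estimate(2.0 / 3.0, 6000, 16, 100), 2))   # 90.0 seconds
```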

In a serial computational environment, transportable efficiency is the essential motivation for developing blocking strategies and block-partitioned algorithms [2, 3, 35, 90]. The linear algebra package (LAPACK) [3] is the archetype of such a strategy. The LAPACK software is constructed as much as possible out of calls to the BLAS. These kernels confine the impact of the computer architecture differences to a small number of routines. The efficiency and portability of the LAPACK software are then achieved by combining native and efficient BLAS implementations with portable high-level components.

The BLAS  are subdivided into three levels, each of which offers increased scope for exploiting parallelism. This subdivision corresponds to three different types of basic linear algebra operations:

  • Level 1 BLAS [93]: for vector operations, such as y ← αx + y,
  • Level 2 BLAS [59, 58]: for matrix-vector operations, such as y ← αAx + βy,
  • Level 3 BLAS [57, 56]: for matrix-matrix operations, such as C ← αAB + βC.
Here, A, B, and C are matrices, x and y are vectors, and α and β are scalars.

The performance potential of the three levels of BLAS is strongly related to the ratio of floating-point operations to memory references, as well as to the reuse of data when it is stored in the higher levels of the memory hierarchy. Consequently, the Level 1 BLAS cannot achieve high efficiency on most modern supercomputers. The Level 2 BLAS can achieve near-peak performance on many vector processors; on RISC microprocessors, however, their performance is limited by the memory access bandwidth bottleneck. The greatest scope for exploiting the highest levels of the memory hierarchy as well as other forms of parallelism is offered by the Level 3 BLAS [3].
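The ratio argument can be made concrete by counting flops and mandatory memory references for the canonical operation at each level. This is a back-of-the-envelope model: it counts only the unavoidable reads and writes of the operands and ignores cache effects.

```python
def blas_ratios(n):
    # Flops per memory reference for the canonical operation at each BLAS
    # level, on n-vectors and n-by-n matrices.
    level1 = (2 * n) / (3 * n)             # axpy: 2n flops, 3n refs (read x,y; write y)
    level2 = (2 * n**2) / (n**2 + 3 * n)   # gemv: 2n^2 flops, n^2 + 3n refs
    level3 = (2 * n**3) / (4 * n**2)       # gemm: 2n^3 flops, 4n^2 refs
    return level1, level2, level3

r1, r2, r3 = blas_ratios(1000)
print(round(r1, 2), round(r2, 2), round(r3, 2))   # 0.67 1.99 500.0
# Only the Level 3 ratio grows with n (~n/2): each matrix entry is reused
# many times, which is what makes cache blocking pay off.
```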

The previous reasoning applies to distributed-memory computational environments in two ways. First, in order to achieve overall high performance, it is necessary to express the bulk of the computation local to each node in terms of Level 3 BLAS operations. Second, designing and developing a set of parallel BLAS (PBLAS) for distributed-memory computers should lead to an efficient and straightforward port of the LAPACK software. This is the path followed by the ScaLAPACK initiative [25, 53] as well as others [1, 21, 30, 63]. As part of the ScaLAPACK project, a set of PBLAS was designed and developed early on [29, 26].



Susan Blackford
Tue May 13 09:21:01 EDT 1997

Structure and Functionality

  

ScaLAPACK can solve systems of linear equations, linear least squares problems, eigenvalue problems, and singular value problems. ScaLAPACK can also handle many associated computations such as matrix factorizations  or estimating condition numbers.

Like LAPACK, the ScaLAPACK routines are based on block-partitioned algorithms   in order to minimize the frequency of data movement between different levels of the memory hierarchy  . The fundamental building blocks of the ScaLAPACK library are distributed-memory  versions of the Level 1, Level 2, and Level 3 BLAS, called the Parallel BLAS or PBLAS [26, 104], and a set of Basic Linear Algebra Communication Subprograms (BLACS) [54] for communication tasks that arise frequently in parallel linear algebra computations. In the ScaLAPACK routines, the majority of interprocessor communication occurs within the PBLAS, so the source code of the top software layer of ScaLAPACK looks similar to that of LAPACK.

ScaLAPACK contains driver routines  for solving standard types of problems, computational routines  to perform a distinct computational task, and auxiliary routines  to perform a certain subtask or common low-level computation. Each driver routine typically calls a sequence of computational routines. Taken as a whole, the computational routines can perform a wider range of tasks than are covered by the driver routines. Many of the auxiliary routines may be of use to numerical analysts or software developers, so we have documented the Fortran source for these routines with the same level of detail used for the ScaLAPACK computational routines and driver routines.

ScaLAPACK provides routines for dense and band matrices, but not for general sparse matrices. Similar functionality is provided for real and complex matrices. See Chapter 3 for a complete summary of the contents.

Not all the facilities of LAPACK are covered by Release 1.5 of ScaLAPACK.




Two-Dimensional Block Cyclic Data Distribution as a Key to Load Balancing and Software Reuse

 

The way the data is distributed over the memory hierarchy of a computer is of fundamental importance to load balancing and software reuse. The block cyclic data distribution allows a reduction of the overhead due to load imbalance and data movement. Block-partitioned algorithms   are used to maximize the local node performance.

Since the data decomposition largely determines the performance and scalability of a concurrent algorithm, a great deal of research [27, 65, 69, 78] has focused on different data decompositions [10, 20, 85]. In particular, the two-dimensional block cyclic distribution [92]  has been suggested as a possible general-purpose basic decomposition for parallel dense linear algebra libraries [31, 76, 97, 17] such as ScaLAPACK.

Block cyclic distribution is beneficial because of its scalability [51] , load balance,  and communication [76] properties. The block-partitioned computation then proceeds in consecutive order just as a conventional serial algorithm does. This essential property of the block cyclic data distribution explains why the ScaLAPACK design has been able to reuse the numerical and software expertise of the sequential LAPACK library.
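The index arithmetic behind the two-dimensional block cyclic distribution can be sketched in a few lines (an illustrative model, not ScaLAPACK code; the library itself provides TOOLS routines such as INDXG2P and INDXG2L for this mapping):

```python
# Sketch of the two-dimensional block cyclic distribution: a matrix is
# partitioned into mb x nb blocks, which are dealt out cyclically over a
# p x q process grid.  Function and variable names here are illustrative,
# not ScaLAPACK routines (0-based indexing throughout).

def owner(i, j, mb, nb, p, q):
    """(process_row, process_col) owning global matrix entry (i, j)."""
    return ((i // mb) % p, (j // nb) % q)

def local_row(i, mb, p):
    """Local row index of global row i on its owning process row."""
    return ((i // mb) // p) * mb + (i % mb)

# 8x8 matrix, 2x2 blocks, 2x2 grid: process (0,0) owns rows {0,1,4,5}
# and columns {0,1,4,5}; global row 6 is local row 2 on process row 1.
grid = [[owner(i, j, 2, 2, 2, 2) for j in range(8)] for i in range(8)]
print(grid[0][0], grid[2][0], local_row(6, 2, 2))   # (0, 0) (1, 0) 2
```

Because consecutive blocks land on consecutive processes, every process owns work in every stage of a factorization, which is the load-balance property referred to above.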




BLACS as an Efficient, Portable and Adequate Message-Passing Interface

 

The total volume of data communicated by most of the ScaLAPACK driver routines for dense matrices can be approximated by the quantity C_v N^2/√P, where N is the order of the largest matrix operand. The number of messages, however, is proportional to N and can be approximated by the quantity C_m N/NB, where NB is the logical blocking factor used in the computation. Similar to the situation described above, the ``standard'' constants C_v for the communication volume depend upon the performed computation and are of the same order as the floating-point operation constants C_f shown in Table 5.8; the values of C_v for a few selected ScaLAPACK drivers are also presented in Table 5.8. As a result, a significant percentage of the ScaLAPACK software aims at exchanging messages between processes.

Developing an adequate message-passing interface specialized for linear algebra operations was one of the first achievements of the ScaLAPACK project. The Basic Linear Algebra Communication Subprograms (BLACS) [50, 54] were thus specifically designed to facilitate the expression of the relevant communication operations. The simplicity of the BLACS interface, as well as the rigor of their specification, allows for an easy port of the entire ScaLAPACK software. Currently, the BLACS have been efficiently ported to machine-specific message-passing libraries such as the IBM (MPL) and Intel (NX) message-passing libraries, as well as to more generic interfaces such as PVM and MPI. The BLACS overhead has been shown to be negligible [54].

The BLACS interface provides the user and library designer with an appropriate level of notation. Indeed, the BLACS operate on typed two-dimensional arrays. The computational model consists of a one- or two-dimensional grid of processes, where each process stores matrices and vectors. The BLACS include synchronous send/receive routines to send a matrix or submatrix from one process to another, to broadcast submatrices, or to perform global reductions (sums, maxima and minima). Other routines establish, change, or query the process grid. The BLACS provide an adequate interface level for linear algebra communication operations.

For ease of use and flexibility, the BLACS send operation is locally blocking;  that is, the return from the send operation indicates that the resources may be reused. However, since this depends only on local information, it is unknown whether the receive operation has been called. Buffering is necessary on the sending or the receiving process. The BLACS receive operation is globally blocking . The return from the receive operation indicates that the message has been (sent and) received. On a system natively supporting globally blocking sends such as the IBM SP2 computer, nonblocking sends coupled with buffering are used to simulate locally blocking sends. This extra buffering operation may cause a slight performance degradation on those systems.

The BLACS broadcast and combine operations feature the ability to select different virtual network topologies. This easy-to-use built-in facility allows for the expression of various message-scheduling approaches, such as a communication pipeline. This distinctive BLACS characteristic is necessary for achieving the highest performance levels on distributed-memory platforms.






Parallel Efficiency

   

An important performance metric is parallel efficiency. Parallel efficiency, E(N,P), for a problem of size N on P nodes is defined in the usual way [65, 92] by

  E(N,P) = T_seq(N) / (P T(N,P)),

where T(N,P) is the runtime of the parallel algorithm, and T_seq(N) is the runtime of the best sequential algorithm. For dense matrix computations, an implementation is said to be scalable if the parallel efficiency is an increasing function of N^2/P, the problem size per node. The algorithms implemented in the ScaLAPACK library are scalable in this sense.

Figure 5.1 shows the scalability of the ScaLAPACK implementation of the LU factorization on the Intel XP/S Paragon computer. The nodes of the Intel XP/S Paragon computer are general-purpose (GP) or multiprocessor (MP) nodes, based on the Intel i860 XP RISC processors. Each Intel i860 processor is capable of a peak performance of 50 Mflop/s. On such a processor, however, the vendor-supplied BLAS matrix-matrix multiply routine DGEMM can achieve only approximately 45 Mflop/s. The computer used for obtaining the performance results presented in this chapter consisted of MP nodes configured as follows: each MP node had three Intel i860 XP processors -- two to execute application code and a third used exclusively as a message coprocessor  . On such a node, the vendor-supplied BLAS matrix-matrix multiply routine DGEMM can achieve approximately 90 Mflop/s.

Figure 5.1: LU Performance per Intel XP/S MP Paragon node

Figure 5.1 shows the speed in Mflop/s per node of the ScaLAPACK LU factorization routine PDGETRF for different computer configurations. This figure illustrates that when the number of nodes is scaled by a constant factor, the same efficiency or speed per node is achieved for equidistant problem sizes on a logarithmic scale. In other words, maintaining a constant memory use per node allows efficiency to be maintained. (This scalability behavior is also referred to as isoefficiency,  or isogranularity .) In practice, however, a slight degradation is acceptable. The ScaLAPACK driver routines, in general, feature the same scalability behavior up to a constant factor that depends on the exact number of floating-point operations and the total volume of data exchanged during the computation.

In large dense linear algebra computations, the computation cost dominates the communication cost. In the following, the time to execute one floating-point operation by one node is denoted by t_f. The time to communicate a message between two nodes is approximated by a linear function of the number of items communicated: the sum of the time to prepare the message for transmission (t_m) and the time taken by the message to traverse the network to its destination, that is, the product of its length and the time to transfer one data item (t_v). The constant t_m is also called the latency, since it is the time to communicate a message of zero length. On most modern interconnection networks, the order of magnitude of the latency varies between a microsecond and a millisecond.

The bandwidth of the network is also referred to as its throughput. It is proportional to the reciprocal of t_v. On modern networks, the order of magnitude of the bandwidth is the megabyte per second. For a scalable algorithm with N^2/P held constant, one expects the performance to be proportional to P. The algorithms implemented in ScaLAPACK are scalable in this sense. Table 5.1 summarizes the relevant constants used in our scalability analysis.

Table 5.1: Variable definitions

Using the notation presented in Table 5.1, the execution time of the ScaLAPACK drivers can be approximated by

  T(N,P) ≈ (C_f N^3 / P) t_f + (C_v N^2 / √P) t_v + (C_m N / NB) t_m.    (5.1)

The corresponding parallel efficiency can then be approximated by

  E(N,P) ≈ 1 / ( 1 + (C_v/C_f)(√P/N)(t_v/t_f) + (C_m/(C_f NB))(P/N^2)(t_m/t_f) ).    (5.2)

Equation 5.2 illustrates, in particular, that the communication versus computation performance ratio of a distributed-memory computer significantly affects parallel efficiency. The ratio of the latency to the time per flop (t_m/t_f) greatly affects the parallel efficiency of small problems. The ratio of the network throughput to the flop rate (t_f/t_v) significantly affects the parallel efficiency of medium-sized problems. For large problems, the node flop rate (1/t_f) is the dominant factor contributing to the parallel efficiency of the parallel algorithms implemented in ScaLAPACK.
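The execution-time and efficiency model above can be turned into a small calculator (a sketch using the notation of Table 5.1; the constants C_f, C_v, C_m and the machine parameters t_f, t_v, t_m below are illustrative placeholders, not measured values):

```python
# Parallel-efficiency model for the ScaLAPACK drivers, following
#   T(N,P) ~= Cf*N^3/P * tf + Cv*N^2/sqrt(P) * tv + Cm*N/NB * tm.
# All constants below are illustrative placeholders, not measured values.
from math import sqrt

def model_time(N, P, Cf, Cv, Cm, tf, tv, tm, NB):
    return ((Cf * N**3 / P) * tf            # local floating-point work
            + (Cv * N**2 / sqrt(P)) * tv    # communication volume
            + (Cm * N / NB) * tm)           # message latencies

def efficiency(N, P, **k):
    t_seq = k["Cf"] * N**3 * k["tf"]        # best sequential time ~ Cf*N^3*tf
    return t_seq / (P * model_time(N, P, **k))

params = dict(Cf=2/3, Cv=3, Cm=4, tf=1e-8, tv=1e-7, tm=1e-4, NB=64)
# Scalability: holding N^2/P fixed, efficiency stays roughly constant.
print(efficiency(2000, 4, **params), efficiency(4000, 16, **params))
```

Note that doubling N while quadrupling P leaves N^2/P (and, in this model, the efficiency) unchanged, which is exactly the isogranularity behavior seen in Figure 5.1.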




ScaLAPACK Performance

 

In this section, we present performance data for Version 1.4 of ScaLAPACK on four distributed-memory computers and two networks of workstations. The four distributed-memory computers are the Cray T3E computer, the IBM Scalable POWERparallel 2 computer, the Intel XP/S MP Paragon computer, and the Intel ASCI Option Red Supercomputer. One of the networks of workstations consists of Sun Ultra Enterprise 2 (Model 2170s) workstations connected via switched ATM. The other network of workstations, the Berkeley NOW [34], consists of 100+ Sun UltraSPARC-1 workstations and 40+ Myricom crossbar switches and LANai 4.1 network interface cards. ScaLAPACK on the NOW uses the MPI BLACS, where the MPI is a port of the freely available MPICH reference code. MPI uses Active Messages as its underlying communications layer. Active Messages [98] provide ultra-lightweight remote procedure calls for processes on the NOW. The system currently uses AM-II, a generalized active message layer that supports more than SPMD parallel programs, e.g., client-server programs and distributed filesystems. It retains the simple request/response paradigm common to all previous active message implementations, as well as their high performance. Each of these six computers is a collection of processing nodes interconnected via a network; each node has local memory and one or more processors. Tables 5.2, 5.3, and 5.4 describe the characteristics of these six computers.

Table 5.2: Characteristics of the Cray T3E and IBM SP2 computers timed

Table 5.3: Characteristics of the Intel computers timed

Table 5.4: Characteristics of the networks of workstations timed

As noted in Tables 5.2, 5.3, and 5.4, a machine-specific optimized BLAS implementation was used for all the performance numbers reported in this chapter. For the IBM Scalable POWERparallel 2 (SP2) computer, the IBM Engineering and Scientific Subroutine Library (ESSL) was used [88]. On the Intel XP/S MP Paragon computer, the Intel Basic Math Library Software (Release 5.0) [89] was used. The Intel ASCI Option Red Supercomputer was tested using a pre-alpha version of the Cougar operating system and an unoptimized functional version of the dual-processor Basic Math Library from Kuck and Associates, Inc.; the communication performance and library performance were still being enhanced. On the Sun Ultra Enterprise 2 workstation, the Dakota Scientific Software Library (DSSL) was used. The DSSL BLAS implementation used only one processor per node. On the Berkeley NOW, the Sun Performance Library, version 1.2, was used. It should also be noted that for the IBM Scalable POWERparallel 2 (SP2) the communication layer used was the IBM Parallel Operating Environment (POE), which is a combination of the MPI and MPL libraries.

Several data distributions were tried for N=2000; the fastest one was then used for all problem sizes, although it may not be optimal for every problem size. Whenever applicable, only the options UPLO=`U' and TRANS=`N' were timed. The test matrices were generated with randomly distributed entries. All runtimes are reported in seconds. Block size is denoted by NB.

This section first reports performance data for a relevant selection of BLAS and BLACS routines. Then, timing results obtained for some PBLAS routines are presented. Finally, performance numbers for selected ScaLAPACK driver routines are shown.




Performance of Selected BLACS and Level 3 BLAS Routines

 

The efficiency of the ScaLAPACK software depends on efficient implementations of the BLAS and the BLACS being provided by computer vendors (or others) for their computers. The BLAS and the BLACS form a low-level interface between ScaLAPACK software and different computer architectures. Table 5.5 presents performance numbers indicating how well the BLACS and Level 3 BLAS perform on different distributed-memory computers. For each computer, this table shows the flop rate achieved by the matrix-matrix multiply Level 3 BLAS routine SGEMM/DGEMM on a node versus the theoretical peak performance of that node, the underlying message-passing library called by the BLACS, and the approximate values of the latency (t_m) and the bandwidth (1/t_v) achieved by the BLACS versus the underlying message-passing software for the machine.

Table 5.5: BLACS and Level 3 BLAS performance indicators

The values for latency in Table 5.5 were obtained by timing the cost of a 0-byte message. The bandwidth numbers in Table 5.5 were obtained by increasing the message length until the message bandwidth was saturated. We used the same timing mechanism for both the BLACS and the underlying message-passing library.
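The linear timing model behind these measurements can be sketched as a small least-squares fit: given (length, time) pairs from a ping-pong benchmark, the intercept estimates the latency t_m and the slope estimates the per-item transfer time t_v (and hence the bandwidth). The data below are synthetic stand-ins for real measurements, and all names are ours:

```python
# Recovering the latency (tm) and per-item transfer time (tv) from
# message timings with the linear model  time(L) = tm + L * tv.
# The timings below are synthetic stand-ins for real ping-pong
# measurements; function and variable names are illustrative.

def fit_latency_bandwidth(lengths, times):
    """Ordinary least-squares fit of time = tm + L * tv."""
    n = len(lengths)
    sx, sy = sum(lengths), sum(times)
    sxx = sum(l * l for l in lengths)
    sxy = sum(l * t for l, t in zip(lengths, times))
    tv = (n * sxy - sx * sy) / (n * sxx - sx * sx)   # slope
    tm = (sy - tv * sx) / n                          # intercept
    return tm, tv

# Hypothetical machine: 50 us latency, 0.1 us per byte (10 MB/s).
lengths = [0, 1024, 4096, 16384, 65536]
times = [50e-6 + L * 1e-7 for L in lengths]
tm, tv = fit_latency_bandwidth(lengths, times)
print(tm, 1.0 / tv)   # latency in seconds, bandwidth in bytes/s
```

Timing a 0-byte message, as done for Table 5.5, reads off the intercept directly; saturating the message length isolates the slope.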

These numbers are actual timing numbers, not values based on hardware peaks, for instance. Therefore, they should be considered as approximate values or indicators of the observed performance between two nodes, as opposed to precise evaluations of the interconnection network capabilities. On the CRAY, the numbers reported are for MPI and the MPIBLACS, instead of the more optimal shmem library with CRAY's native BLACS.

For all four computers, a machine-specific optimized BLAS implementation was used for all the performance numbers reported in this chapter. For the IBM Scalable POWERparallel 2 (SP2) computer, the IBM Engineering and Scientific Subroutine Library (ESSL) was used [88]. On the Intel XP/S MP Paragon computer, the Intel Basic Math Library Software (Release 5.0) [89] was used. On the Sun Ultra Enterprise 2 workstation, the Dakota Scientific Software Library (DSSL) was used. The DSSL BLAS implementation used only one processor per node. The speed of the BLAS matrix-matrix multiply routine shown in Table 5.5 has been obtained for the operation C ← αAB + βC, where A, B, and C are square matrices of order 500.






Performance of Selected PBLAS routines

 

The performance of the Level 2 PBLAS routines depends on the performance of the Level 2 BLAS routines, which is in turn limited by the bulk transfer rate from main memory.

Table 5.6: Speed in Mflop/s for the PBLAS matrix-vector multiply routine PSGEMV/PDGEMV

Table 5.6 shows execution rates for the 64-bit matrix-vector multiply PBLAS routine PSGEMV/PDGEMV. The rates listed are for a matrix-vector product y ← αAx + βy, where A is a square matrix of order N and x and y are vectors that are both distributed over a process column.

The Level 3 PBLAS are not necessarily limited by memory bandwidth because they perform many flops for each word involved; the flop rate is correspondingly higher. Table 5.7 shows the performance results obtained by the general matrix-matrix multiply PBLAS routine PSGEMM/PDGEMM. These results have been obtained for the matrix-matrix multiply operation C ← αAB + βC, where A, B, and C are square matrices of order N.

Table 5.7: Speed in Mflop/s for the PBLAS matrix-matrix multiply routine PSGEMM/PDGEMM




Solution of Common Numerical Linear Algebra Problems

 

This section contains performance numbers for selected driver routines. These routines provide complete solutions for common linear algebra problems.

  • Solve a general N-by-N system of linear equations with one right-hand side using the routine PSGESV/PDGESV.
  • Solve a symmetric positive definite N-by-N system of linear equations with one right-hand side, using PSPOSV/PDPOSV.
  • Solve an N-by-N linear least squares problem with one right-hand side using the routine PSGELS/PDGELS.
  • Find the eigenvalues and optionally the corresponding eigenvectors of an N-by-N symmetric matrix, using the routine PSSYEVX/PDSYEVX.
  • Find the eigenvalues and optionally the corresponding eigenvectors of an N-by-N symmetric matrix, using the routine PSSYEV/PDSYEV.
  • Find the singular values and optionally the corresponding right and left singular vectors of an N-by-N matrix, using PSGESVD/PDGESVD.
  • Find the eigenvalues and optionally the corresponding right eigenvectors of an N-by-N Hessenberg matrix, using the routine PSLAHQR/PDLAHQR.
Table 5.8 presents ``standard'' floating-point operation costs (C_f) for selected ScaLAPACK drivers for matrices of order N. Approximate values of the constants C_m and C_v defined in section 5.2.3 are also provided.

Table 5.8: ``Standard'' floating-point operation (C_f) and communication costs (C_v, C_m) for selected ScaLAPACK drivers

The operation counts given for the eigenvalue and SVD drivers are incomplete. They do not include any of the O(N^2) computation costs (i.e., the entire tridiagonal eigendecomposition is ignored in PxxxEVX). Furthermore, the reductions involved require matrix-vector multiplies, which are less efficient than the matrix-matrix multiplies required by the other drivers listed here. Hence this table greatly underestimates the execution time of the eigenvalue and SVD drivers, especially the expert symmetric eigensolver drivers. For PxLAHQR, when only eigenvalues are computed, C_m and C_v have the same order of magnitude as in the full Schur form case; the actual number of messages and communication volume is smaller by a constant factor, depending on the circumstances.




Solving Linear Systems of Equations

 

Table 5.9  illustrates the speed of the ScaLAPACK driver routine PSGESV /PDGESV  for solving a square linear system of order N by LU factorization with partial row pivoting of a real matrix. For all timings, 64-bit floating-point arithmetic was used. Thus, single-precision timings are reported for the Cray T3E, and double precision timings are reported on all other computers. The distribution block size  is also used as the partitioning unit  for the computation and communication phases.
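Speed figures of this kind are conventionally derived from a standard operation count rather than from the operations actually performed; for LU factorization the standard count is 2N^3/3 flops, with the triangular solves for each right-hand side adding a lower-order term. A sketch of the conversion (the runtime value below is hypothetical):

```python
# How Mflop/s figures such as those in Table 5.9 are conventionally
# derived: LU factorization with partial pivoting of an N-by-N matrix
# is credited with the standard count of 2*N^3/3 flops; the triangular
# solves for nrhs right-hand sides add a lower-order 2*N^2*nrhs term.
# The 10-second runtime below is a hypothetical value, not a measurement.

def lu_solve_mflops(n, seconds, nrhs=1):
    flops = 2.0 * n**3 / 3.0 + 2.0 * n**2 * nrhs
    return flops / seconds / 1e6

print(round(lu_solve_mflops(2000, 10.0)))   # -> 534
```

Using a fixed standard count keeps rates comparable across implementations even when one of them performs extra arithmetic internally.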

Table 5.10  illustrates the speed of the ScaLAPACK routine PSPOSV /PDPOSV  for solving a symmetric positive definite linear system of order N via the Cholesky factorization.

Right-looking variants of the LU and Cholesky factorizations were chosen for ScaLAPACK because they minimize the total communication volume, that is, the aggregated amount of data transferred between processes during the operation.

Table 5.9: Speed in Mflop/s of PSGESV/PDGESV for square matrices of order N

Table 5.10: Speed in Mflop/s of PSPOSV/PDPOSV for matrices of order N with UPLO=`U'






Solving Linear Least Squares Problems

 

Table 5.11 summarizes performance results obtained for the ScaLAPACK routine PSGELS/PDGELS, which solves full-rank linear least squares problems. Solving such problems, of the form min_x ||Ax - b||_2 where x and b are vectors and A is a rectangular matrix of full rank, is traditionally achieved via the computation of the QR factorization of the matrix A. In ScaLAPACK, the QR factorization is based on the use of elementary Householder matrices of the general form

  H = I - τvv^T,

where v is a column vector and τ is a scalar. This leads to an algorithm with excellent vector performance, especially if coded to use Level 2 PBLAS.

The key to developing a distributed block form of this algorithm is to represent a product of K elementary Householder matrices of order N as a block form of a Householder matrix. This can be done in various ways. ScaLAPACK uses the form [108]

  H_1 H_2 ⋯ H_K = I - VTV^T,

where V is an N-by-K matrix whose columns are the individual vectors v_i associated with the Householder matrices H_i, and T is an upper triangular matrix of order K. Extra work is required to compute the elements of T, but this is compensated for by the greater speed of applying the block form.
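The equivalence between the product of elementary Householder matrices and the block form can be checked numerically with a small sketch (pure Python on a tiny example; the T recurrence follows the standard forward construction, and all names here are illustrative, not ScaLAPACK routines):

```python
# Illustrative check of the compact WY representation used for blocked
# Householder transforms: the product H_1*H_2*...*H_K of elementary
# matrices H_i = I - tau_i * v_i * v_i^T equals I - V*T*V^T, where
# V = [v_1 ... v_K] and T is upper triangular of order K.

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def householder(v, tau):
    n = len(v)
    return [[(1.0 if i == j else 0.0) - tau * v[i] * v[j]
             for j in range(n)] for i in range(n)]

def wy_form(vs, taus):
    """Return I - V*T*V^T, with T built by the standard forward recurrence."""
    n, k = len(vs[0]), len(vs)
    V = [[vs[j][i] for j in range(k)] for i in range(n)]        # n x k
    T = [[0.0] * k for _ in range(k)]
    for i in range(k):
        z = [sum(vs[r][m] * vs[i][m] for m in range(n)) for r in range(i)]
        for r in range(i):                                      # T[0:i, i]
            T[r][i] = -taus[i] * sum(T[r][s] * z[s] for s in range(i))
        T[i][i] = taus[i]
    Vt = [list(row) for row in zip(*V)]                         # k x n
    VTVt = matmul(matmul(V, T), Vt)
    return [[(1.0 if i == j else 0.0) - VTVt[i][j]
             for j in range(n)] for i in range(n)]

# Two Householder matrices applied in sequence vs. the block form.
v1, v2, t1, t2 = [1.0, 1.0, 0.0], [0.0, 1.0, 1.0], 1.0, 1.0
explicit = matmul(householder(v1, t1), householder(v2, t2))
blocked = wy_form([v1, v2], [t1, t2])
err = max(abs(explicit[i][j] - blocked[i][j])
          for i in range(3) for j in range(3))
print(err < 1e-12)   # prints True: the two forms agree
```

The block form replaces K rank-one updates by two matrix-matrix products, which is exactly the trade that pays off in the Level 3 BLAS.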

Table 5.11: Speed in Mflop/s of PSGELS/PDGELS for square matrices of order N




Eigenvalue Problems

 

ScaLAPACK includes block algorithms for solving symmetric  and nonsymmetric eigenvalue problems as well as for computing the singular value decomposition.

The first step in solving many types of eigenvalue problems is to reduce the original matrix to a ``condensed form'' by orthogonal transformations.     In the reduction to condensed forms, the unblocked algorithms all use elementary Householder matrices and have good vector performance. Block forms of these algorithms have been developed [28], but all require additional operations, and a significant proportion of the work must still be performed by the Level 2 PBLAS. Thus, there is less possibility of compensating for the extra operations.

The algorithms concerned are listed below:

  • Reduction of a symmetric matrix to tridiagonal form to solve a symmetric eigenvalue problem: ScaLAPACK routine PSSYTRD/PDSYTRD applies a symmetric block update of the form
    A ← A - VW^T - WV^T
    using the Level 3 PBLAS routine PSSYR2K/PDSYR2K; Level 3 PBLAS account for at most half the work.
  • Reduction of a rectangular matrix to bidiagonal form to compute a singular value decomposition: ScaLAPACK routine PSGEBRD/PDGEBRD applies a block update of the form
    A ← A - VY^T - XU^T
    using two calls to the Level 3 PBLAS routine PSGEMM/PDGEMM; Level 3 PBLAS account for at most half the work.
  • Reduction of a nonsymmetric matrix to Hessenberg form to solve a nonsymmetric eigenvalue problem: ScaLAPACK routine PSGEHRD/PDGEHRD applies a block update of the form
    A ← (I - VTV^T)^T (A - XV^T);
    Level 3 PBLAS account for at most three-quarters of the work.

Extra work must be performed to compute the N-by-K matrices X and Y that are required for the block updates (K is the block size), and extra workspace is needed to store them.

Following the reduction of a dense symmetric matrix to tridiagonal form T, one must compute the eigenvalues and (optionally) eigenvectors of T. The current version of ScaLAPACK includes two different routines, PSSYEVX/PDSYEVX and PSSYEV/PDSYEV, for solving symmetric eigenproblems. PSSYEVX/PDSYEVX uses bisection and inverse iteration. PSSYEV/PDSYEV uses the QR algorithm. Tables 5.12 and 5.13 show the execution time in seconds of the routines PSSYEVX/PDSYEVX and PSSYEV/PDSYEV, respectively, for computing the eigenvalues and eigenvectors of symmetric matrices of order N. The performance of PSSYEVX/PDSYEVX deteriorates in the face of large clusters of eigenvalues. ScaLAPACK uses a nonscalable definition of clusters (because we chose to remain consistent with LAPACK). Hence, matrices larger than N=1000 tend to have at least one very large cluster (see section 5.3.6). This needs further study. More detailed information concerning the performance of these routines may be found in [40]. Table 5.14 shows the execution time in seconds of the routine PSGESVD/PDGESVD for computing the singular values and the corresponding right and left singular vectors of a general matrix of order N.

Table 5.12: Execution time in seconds of PSSYEVX/PDSYEVX for square matrices of order N

For computing the eigenvalues and eigenvectors of a Hessenberg matrix (or rather, for computing its Schur factorization), two flavors of block algorithms have been developed. The first algorithm, implemented in the routine PSLAHQR/PDLAHQR, results from the parallelization of the QR algorithm. The key idea is to generate many shifts at once rather than two at a time, thereby allowing all bulges to carry up-to-date shifts. The second algorithm, currently implemented as a prototype code, is based on the computation of the matrix sign function [14, 13, 12]. In this section, however, only performance results of the first approach are reported.

Table 5.13: Execution time in seconds of PSSYEV/PDSYEV for square matrices of order N

Table 5.14: Execution time in seconds of PSGESVD/PDGESVD for square matrices of order N

Table 5.15 summarizes performance results obtained for the ScaLAPACK routine PDLAHQR computing a full Schur decomposition of an upper Hessenberg matrix of order N. Timings are given for the Intel XP/S MP Paragon supercomputer and for the technology behind the Intel ASCI Option Red supercomputer; for both machines, we assume only one CPU per node is used for computation in this code. The Schur decomposition is based on iteratively applying orthogonal similarity transformations to a Hessenberg matrix H, such as
    T = Q^T H Q
until T becomes pseudo-upper triangular (i.e., in the real case, having 1-by-1 or 2-by-2 blocks on the diagonal). The serial performance (assuming roughly O(N^3) flops) of the LAPACK routine DLAHQR for computing a Schur decomposition is around 8.5 Mflops on the Intel MP Paragon supercomputer. The enhanced performance shown in Table 5.15 is slightly faster, a bit above 9 Mflops, and peaks around 10 Mflops because of the block application of Householder transforms in the ScaLAPACK serial auxiliary routine DLAREF. The technology behind the Intel ASCI Option Red supercomputer peaks at several times the speed of the Paragon but shows a slightly faster drop-off in efficiency. For further details and timings, please see [79].

Table 5.15: Execution time in seconds of PDLAHQR for square matrices of order N

A more detailed performance analysis of the eigensolvers included in the ScaLAPACK software library can be found in [48, 79]. Finally, we note that research into parallel algorithms for symmetric and nonsymmetric eigenproblems continues [11, 86, 45], and future versions of ScaLAPACK will be updated to contain the best algorithms available.


next up previous contents index
Next: Performance Evaluation Up: Solving Linear Systems of Previous: Solving Linear Least Squares

Susan Blackford
Tue May 13 09:21:01 EDT 1997

Software Components

 

Figure 1.1 describes the ScaLAPACK software hierarchy. The components below the line, labeled Local, are called on a single processor, with arguments stored on single processors only. The components above the line, labeled Global, are synchronous parallel routines, whose arguments include matrices and vectors distributed across multiple processors. We describe each component in turn.

Figure 1.1: ScaLAPACK software hierarchy






Performance Evaluation

   






Obtaining High Performance with ScaLAPACK Codes

We suggest the following approach to obtain high performance with ScaLAPACK codes:

  • Use the best BLAS and BLACS libraries available.
  • Start with a standard data distribution.
    • A square processor grid (Pr = Pc = sqrt(P)) if P >= 9
    • A one-dimensional processor grid (Pr = 1, Pc = P) if P < 9
    • Block size = 64
  • Determine whether reasonable performance is being achieved.
  • Identify the performance bottleneck(s), if any.
  • Tune the distribution or routine parameters to improve performance further.

The standard data distribution will typically achieve 25-50% of the peak performance possible (depending in part on how many processors are ignored, i.e., the difference between P and Pr x Pc). We do not recommend experimenting with different data distributions until performance that is acceptable (or nearly so) has been achieved. If each individual node requires a block size larger than 64 to achieve near-peak performance on a local matrix-matrix multiply, the block size may have to be increased. This step is unlikely to be necessary, however, unless each node of the computer is a shared-memory multiprocessor with more than four processors.
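The rule of thumb above is easy to encode. The following Python sketch is a hypothetical helper (not part of ScaLAPACK, which is written in Fortran 77) that returns a starting grid shape and block size from the process count:

```python
import math

def standard_grid(p):
    """Suggest a starting (nprow, npcol, block_size) for p processes.

    Follows the rule of thumb above: a one-dimensional grid for p < 9,
    otherwise the squarest two-dimensional grid that uses all p
    processes, with a distribution block size of 64.
    """
    if p < 9:
        return 1, p, 64                 # one-dimensional grid: Pr=1, Pc=P
    # squarest factorization Pr x Pc = p with Pr <= Pc
    nprow = int(math.sqrt(p))
    while p % nprow != 0:
        nprow -= 1
    return nprow, p // nprow, 64

print(standard_grid(4))   # (1, 4, 64)
print(standard_grid(16))  # (4, 4, 64)
```

One would then tune from this starting point only after confirming that the BLAS and BLACS libraries in use are efficient.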




Checking the BLAS and BLACS Libraries

 

The best way to determine whether one is using efficient BLAS and BLACS libraries is to time them. ScaLAPACK provides a rudimentary BLAS and BLACS timer in the examples/timers directory on netlib and on the CD-ROM. This directory also contains pointers and instructions for more complete timers, such as the LAPACK BLAS timer and the message-passing benchmark program [46]. We encourage users to use these timers to measure the performance of the BLAS.

This ScaLAPACK examples/timers directory also contains pointers to some benchmark results and some pointers on interpreting the results of the timers.

To determine which BLAS and BLACS libraries are being linked in, users should check the output of the linker. If the Makefile includes SLmake.inc, the BLAS library is given by the macro BLASLIB in SLmake.inc while the BLACS library is given by the macro BLACSLIB. If the BLAS library name is of the form blas_LINUX.a, this is probably the (slow) reference implementation BLAS. If the BLAS library name is -lblas, -lessl, -ldxml or the like, this may be an optimized BLAS library.
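This check can be mechanized. The sketch below is a hypothetical Python helper (the SLmake.inc excerpt and the classification heuristic are illustrative assumptions, not part of ScaLAPACK) that extracts the BLASLIB macro and applies the naming heuristic above:

```python
import re

SLMAKE_SNIPPET = """
# toy excerpt of an SLmake.inc (values are made up for illustration)
BLASLIB  = -lessl
BLACSLIB = blacs_MPI-LINUX-0.a
"""

def classify_blas(slmake_text):
    """Return the BLASLIB value and a rough guess at whether it is optimized.

    Heuristic from the text: a name like blas_LINUX.a suggests the slow
    reference BLAS; -lblas, -lessl, -ldxml, and the like may be optimized.
    """
    m = re.search(r"^\s*BLASLIB\s*=\s*(.+)$", slmake_text, re.MULTILINE)
    if m is None:
        return None, "BLASLIB macro not found"
    lib = m.group(1).strip()
    if "blas_" in lib and lib.endswith(".a"):
        return lib, "probably the reference BLAS (slow)"
    return lib, "possibly an optimized BLAS -- time it to be sure"

print(classify_blas(SLMAKE_SNIPPET))
```

As the text notes, timing the BLAS directly remains the only reliable test; a library name is merely a hint.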




Estimate Execution Time

 

This section describes how one can estimate the execution time of a ScaLAPACK routine on a given platform, using Equation 5.1 and the values provided in Table 5.5 and Table 5.8. By comparing this estimate with experimental data, the user can determine whether reasonable performance has been achieved and can (possibly) identify the performance bottlenecks, if any.

For linear system solvers, the estimate is typically accurate to within 50% for moderate-sized problems (i.e., 160,000 or more matrix elements per node). For eigensolvers, the estimate may be low by a factor of 2 for moderate-sized problems and by more than that for smaller problems. The eigensolvers take longer because they involve matrix-vector flops as well as matrix-matrix flops, and involve substantial numbers of o(N^3) flops (lower-order terms) that are not included in the approximation. The accuracy of performance estimates increases with the problem size. Unfortunately, because ScaLAPACK eigensolvers require more memory than the other ScaLAPACK drivers, large problems cannot be solved; hence, execution times for small and medium-sized problems (rather than medium-sized and large problems) are reported.
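As a concrete illustration of this kind of estimate, the sketch below evaluates the general form of Equation 5.1 in Python. The coefficient counts and machine parameters are hypothetical, chosen only to resemble an LU-like O(N^3) solver; they are not the entries of Table 5.5 or Table 5.8:

```python
from math import log2, sqrt

def estimated_time(n, p, t_f, t_m, t_v, c_f, c_m, c_v):
    """Generic form of the model: T(N,P) = C_f*t_f + C_m*t_m + C_v*t_v,
    where t_f, t_m, t_v are the times per flop, per message, and per
    data item communicated, and the C's are problem-dependent counts."""
    return c_f(n, p) * t_f + c_m(n, p) * t_m + c_v(n, p) * t_v

# Illustrative coefficient counts for an LU-like O(N^3) solver
# (hypothetical leading-order terms, not the guide's exact formulas):
cf = lambda n, p: 2 * n**3 / (3 * p)          # flops per process
cv = lambda n, p: n**2 * log2(p) / sqrt(p)    # data items communicated
cm = lambda n, p: 4 * n * log2(p)             # messages

# Toy machine: 100 Mflop/s nodes, 50 us latency, 8e-7 s per item
t = estimated_time(5000, 16, t_f=1e-8, t_m=5e-5, t_v=8e-7,
                   c_f=cf, c_m=cm, c_v=cv)
print("estimated time: %.1f s" % t)
```

Comparing such an estimate with a measured run is exactly the "reasonable performance" check described in the next sections; here the flop term dominates, as one expects for a large problem.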

Table 5.16: Estimated (Est) versus obtained (Obt) Mflop/s rates of PDGESV and PDPOSV on P nodes of the IBM SP2 computer for matrices of order N and a block size (NB) equal to 50

Table 5.16 shows the estimated versus obtained Mflop/s rates for two ScaLAPACK driver routines solving linear systems of equations on the IBM Scalable POWERparallel 2 computer. The results show that for these drivers the estimated execution times are within approximately 35% of the experimental data on the SP2. (The estimated times for the symmetric eigensolvers and SVD codes would not be as accurate.)




Determine Whether Reasonable Performance Is Achieved

This chapter contains performance results for some ScaLAPACK drivers on a range of different platforms. We recommend that users compare their performance results against the ones presented in the previous tables. For those users whose computer is not listed, the BLAS timing program [3] and the message-passing benchmark program [46] can be used to estimate the values of the machine parameters t_f, t_m, and t_v (the times per flop, per message, and per data item communicated, respectively) for a specific computer. The execution time can then be estimated by using the material presented in the previous section.

If this chapter does not contain performance data for the ScaLAPACK routine a user is using, we recommend verifying the performance with one of the drivers whose performance is presented in this chapter. The publicly available ScaLAPACK distribution contains timing programs for each driver. This performance sanity check should convince the user not only that the library has been correctly installed, but also that it is being used properly. Users may also send questions, suggestions, or comments regarding ScaLAPACK performance issues to scalapack@cs.utk.edu.




Identify Performance Bottlenecks

    

The formulas mentioned in section 5.3.3, in addition to providing an estimate of performance, can help one identify whether the performance is limited by computation, by the number of messages, or by the volume of communication. Even if the estimate is far from correct, the user may get some information about the performance bottleneck by studying the computation and communication estimates provided by those formulas.

Comparing the execution times of a problem of size N and one of size N/2 may also provide insight into the performance of the ScaLAPACK routine being used. Let T(N) and T(N/2) be the times required for a problem of size N and of size N/2, respectively, on P processors.

  • If T(N) is much larger than 8 T(N/2), the physical memory of each node may be exceeded.
  • If T(N) is approximately 8 T(N/2), the performance may be limited by the rate at which flops are performed. If the flop rate is significantly less than expected, the user should check the data distribution (try the standard data distribution suggested in section 5.1.1) and the underlying BLAS.
  • If T(N) is approximately 4 T(N/2), the major performance factor may be bandwidth (the per-item transfer time t_v). This is what one should obtain for medium values of N.
  • If T(N) is approximately 2 T(N/2), the major performance factor may be latency (the per-message time t_m). This is what one should obtain for small values of N.
This performance analysis suggests which computer characteristic is most likely limiting the performance. It cannot say whether one is getting good performance.
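The cases above can be collected into a small diagnostic. In the Python sketch below, the ratio thresholds are assumptions consistent with O(N^3) flop counts, O(N^2) communication volume, and O(N) message counts; they are illustrative cutoffs, not exact ScaLAPACK constants:

```python
def diagnose(t_n, t_half):
    """Classify the likely bottleneck from T(N) / T(N/2) on a fixed grid.

    Assumed ratios (a sketch only): ~8 -> flop-limited, ~4 -> bandwidth,
    ~2 -> latency, much more than 8 -> node memory exceeded (paging).
    """
    r = t_n / t_half
    if r > 12:
        return "memory: physical memory of each node may be exceeded"
    if r > 6:
        return "flops: limited by the local computation rate"
    if r > 3:
        return "bandwidth: dominated by communication volume"
    return "latency: dominated by the number of messages"

print(diagnose(80.0, 10.0))   # ratio 8 -> flop-limited
print(diagnose(8.0, 4.0))     # ratio 2 -> latency-bound
```

As the text cautions, this tells you which machine characteristic is limiting performance, not whether the performance achieved is good.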




Performance Bottlenecks in the Expert Symmetric Eigenproblem Drivers

 

Large clusters of eigenvalues in the input matrix may cause poor performance in the expert symmetric eigenproblem drivers, PSSYEVX/PDSYEVX. If the execution time observed for the ScaLAPACK drivers PxSYGVX and PxSYEVX is more than double the estimate in section 5.3.3 and more than the minimum LWORK is provided, we recommend that this part of the code be retimed after relaxing the orthogonalization requirements. This can be achieved either by setting the value of the formal parameter LWORK to the minimum allowed by the driver as specified in the leading comments of the source code, or by calling the driver with the value of the parameter ORTOL set to the machine epsilon multiplied by the norm of the matrix. These last two values may be obtained by calling the ScaLAPACK routines PxLAMCH and PxLANSY, respectively. If the execution time obtained for the driver after relaxing the orthogonalization requirements is substantially reduced, it is likely that the spectrum of the matrix or matrix pencil has a large cluster of eigenvalues that the driver attempts to reorthogonalize. Otherwise, it is likely that the performance bottleneck is caused by other factors, as mentioned in section 5.3.5. If the matrix or matrix pencil has a large cluster of eigenvalues, we recommend using the corresponding simple driver, PxSYEV, instead. If the application can tolerate loss of orthogonality, the drivers PxSYGVX and PxSYEVX may achieve good performance by relaxing the orthogonalization requirements as suggested above. Please check the value of the INFO parameter returned by these and all ScaLAPACK drivers.




Performance Improvement

   

Before experimenting with different data layouts, users should make sure that they are using the fastest BLACS and BLAS libraries.

Three major factors influence the performance of a ScaLAPACK routine: the flop rate achieved by the BLAS on each node, the computational load balance, and the communication costs.






Choosing a Faster BLACS Library

Users should choose the vendor-supplied BLACS optimized for their computer; these BLACS will usually be the fastest implementation. If no vendor-supplied BLACS exists, users will have to choose among the publicly available BLACS libraries.

Many distributed-memory computers offer several communication libraries. The SP2, for example, offers MPI, PVM and MPL communication libraries. Since implementations of the BLACS exist on each of several communication libraries, one may have a choice of several different BLACS implementations. On the SP2, for example, the user can run the BLACS MPI, BLACS MPL, or BLACS PVM version.

Unfortunately, no hard rule exists as to which BLACS implementation will be fastest. However, since the BLACS cannot be faster than the communication library upon which it is built, and since the BLACS typically add little overhead, it is usually best to choose the BLACS implementation that is based on the fastest communication library.

Identifying the fastest communication library may not be trivial. The speed of communication libraries may be reported in different ways. Moreover, although it is usually the speed of blocking sends that is reported (blocking sends being faster than nonblocking sends), the BLACS must use nonblocking sends or provide their own buffering. Those who are using one of the computers listed in this chapter should refer to Tables 5.2 and 5.3 to see which library we used for timing. Our experience is that the fastest communication library was the one native to that particular computer.




Choosing a Faster BLAS Library

Highly efficient machine-specific implementations of the BLAS are available for many modern high-performance computers. Users who cannot obtain an efficient BLAS for their architecture may be able to create one by using a set of BLAS that requires only an efficient implementation of the matrix-matrix multiply routine xGEMM [35, 90], combined with an automatically generated, machine-specific, efficient implementation of xGEMM [16].

Users who are using one of the computers listed in this chapter should refer to Tables 5.2 and 5.3 to see which library we used for timing. Otherwise, the computer vendor may be able to provide information about optimized BLAS for a specific computer.

A reference Fortran 77 implementation of the BLAS is available from the blas directory on netlib.

http://www.netlib.org/blas/blas.shar



LAPACK

  As mentioned before, LAPACK, or Linear Algebra PACKage [3], is a collection of routines for solving linear systems, least squares problems, eigenproblems, and singular value problems. High performance is attained by using algorithms that do most of their work in calls to the BLAS, with an emphasis on matrix-matrix multiplication. Each routine has one or more performance tuning parameters, such as the sizes of the blocks operated on by the BLAS. These parameters are machine dependent and are obtained from a table defined when the package is installed and referenced at runtime.

The LAPACK routines are written as a single thread of execution. LAPACK can accommodate shared-memory machines, provided parallel BLAS are available (in other words, the only parallelism is implicit in calls to BLAS). Extensive performance results for LAPACK can be found in the LAPACK Users' Guide [3].




Tuning the Distribution Parameters for Better Performance

    

By adjusting the data distribution of the matrices, users may be able to achieve 10-50% greater performance than by using the standard data distribution suggested in section 5.1.1.

The performance attained using the standard data distribution is usually fairly close to optimal; hence, if one is getting poor performance, it is unlikely that modifying the data distribution will solve the performance problem.

An optimal data distribution depends upon several factors including the performance characteristics of the hardware, the ScaLAPACK routine invoked, and (to a certain extent) the problem size. The algorithms currently implemented in ScaLAPACK fall into two main classes.

The first class of algorithms is distinguished by the fact that at each step a block of rows or columns is replicated in all process rows or columns. Furthermore, the process row or column that is the source of this broadcast operation is the one immediately following -- or preceding, depending on the algorithm -- the source of the broadcast performed at the previous step. The QR factorization and the right-looking variant of the LU factorization are typical examples of such algorithms, for which it is thus possible to establish and maintain a communication pipeline in order to overlap computation and communication. The direction of the pipeline determines the best possible shapes of the process grid. For instance, the LU, QR, and QL factorizations perform better for ``flat'' process grids (Pr < Pc). These factorizations perform a reduction operation for each matrix column for pivoting in the LU factorization and for computing the Householder transformation in the QR and QL decompositions. Moreover, after this reduction has been performed, it is important to update the next block of columns as fast as possible. This update is done by broadcasting the current block of columns using a ring topology, that is, feeding the ongoing communication pipe. Similarly, the performance of the LQ and RQ factorizations takes advantage of ``tall'' grids (Pr > Pc) for the same, but transposed, reasons.

The second group of algorithms is characterized by the physical transposition of a block of rows and/or columns at each step. Square or near-square grids are more adequate from a performance point of view for these transposition operations. Examples of such algorithms implemented in ScaLAPACK include the right-looking variant of the Cholesky factorization, the matrix inversion algorithm, and the reductions to bidiagonal form (PxGEBRD), to Hessenberg form (PxGEHRD), and to tridiagonal form (PxSYTRD). It is interesting to note that although square grids are more efficient for these matrix reduction operations, the corresponding eigensolvers usually prefer flatter grids.

Table 5.17 summarizes this discussion and provides suggestions for selecting the most appropriate shape of the logical Pr x Pc process grid from a performance point of view. The results presented in this table may need to be refined depending on the characteristics of the physical interconnection network.

Table 5.17: Process grid suggestions for some ScaLAPACK drivers

Assume that at most P nodes are available. A natural question is: which Pr x Pc process grid should be used? Depending on the value of P, it is not always possible to factor P = Pr x Pc to create an appropriate grid shape. For example, if the number of nodes available is a prime number and a square grid is suitable with respect to performance, it may be beneficial to let some nodes remain idle so that the remaining nodes can be arranged in a ``squarer'' grid.
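A sketch of such a grid-selection heuristic is given below in Python; the scoring rule and the ``near square'' tolerance are assumptions made for illustration, not ScaLAPACK logic:

```python
import math

def squarest_grid(p_max):
    """Pick (nprow, npcol) with nprow*npcol <= p_max, preferring square
    shapes; may leave nodes idle (e.g., when p_max is prime).

    Hypothetical scoring: use as many nodes as possible, subject to the
    grid being "near square" (assumed here to mean npcol/nprow <= 2).
    """
    for used in range(p_max, 0, -1):
        nprow = int(math.sqrt(used))
        while used % nprow:
            nprow -= 1
        npcol = used // nprow
        if npcol / nprow <= 2:
            return nprow, npcol
    return 1, 1

print(squarest_grid(16))  # (4, 4): perfect square, all nodes used
print(squarest_grid(13))  # prime: better to idle one node and use 3 x 4
```

For 13 nodes the helper idles one node and returns a 3 x 4 grid, which mirrors the advice above for prime node counts.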

If the BLACS implementation or the interconnection network features high latency, a one-dimensional data distribution will improve the performance for small and medium problem sizes. The number of messages significantly impacts the performance achieved for small problem sizes, whereas the total message volume becomes a dominant factor for medium-sized problems. The performance cost due to floating-point operations dominates for large problem sizes. One-dimensional data distributions reduce the total number of messages exchanged on the interconnection network but increase the total volume of message traffic. Therefore, one-dimensional data distributions are better for small problem sizes but are worse for large problem sizes, especially when one is using eight or more processors.

Determining optimal, or near-optimal, distribution block sizes with respect to performance for a given platform is a difficult task. It is empirically true, however, that once a good block size, or even a set of good distribution parameters, has been found, performance is not highly sensitive to small changes in these values.



Performance of Banded and Out-of-Core Drivers

ScaLAPACK provides LU and Cholesky factorizations for band matrices. For small bandwidths, divide-and-conquer algorithms have been chosen even though they require more floating-point operations. A more detailed performance analysis can be found in [18].

ScaLAPACK also provides prototype out-of-core linear system solvers. Information on these particular routines as well as the algorithms that have been selected can be found in [47, 55, 18]. In particular, it is shown in [55] that these out-of-core solvers incur approximately a 20% overhead over the corresponding in-core ScaLAPACK solvers.




Accuracy and Stability

 

In this chapter we explain our overall approach to obtaining error bounds and provide enough information to use the software. The comments at the beginning of the individual routines should be consulted for more details. It is beyond the scope of this chapter to justify all the bounds we present. Instead, we give references to the literature. For example, standard material on error analysis can be found in [71, 114, 84, 38].

To make this chapter easy to read, we have labeled parts not essential for a first reading as Further Details. The sections not labeled as Further Details should provide all the information needed to understand and use the main error bounds computed by ScaLAPACK. The Further Details sections provide mathematical background, references, and tighter but more expensive error bounds, and may be read later.

Since ScaLAPACK uses the same overall algorithmic approach as LAPACK, its error  bounds  are essentially the same as those for LAPACK. Therefore, this chapter is largely analogous to Chapter 4 of the LAPACK Users' Guide [3]. Significant differences between LAPACK and ScaLAPACK include the following:

  • Section 6.1 discusses how machine constants in a heterogeneous network of machines with differing floating-point arithmetics must be redefined. ScaLAPACK can also exploit arithmetic with ±∞, which is available in IEEE standard floating-point arithmetic.
  • Section 6.2 discusses reliability problems that can arise on heterogeneous networks of machines and how to guarantee reliability on a homogeneous network.
  • Section 6.5 discusses some routines that do Gaussian elimination on band matrices with the pivot order chosen for parallelism rather than numerical stability. These routines are numerically stable only when applied to matrices that do not require partial pivoting for stability (such as diagonally dominant and symmetric positive definite matrices).
  • Section 6.7 discusses PxSYEVX. In contrast to its LAPACK analogue, xSYEVX, PxSYEVX allows the user to trade off orthogonality of the computed eigenvectors against runtime.

In section 6.1 we discuss the sources of numerical error, in particular roundoff error. We also briefly discuss IEEE arithmetic. Section 6.2 discusses the new sources of numerical error specific to parallel libraries, and the restrictions they impose on the reliable use of ScaLAPACK. Section 6.3 discusses how to measure errors, as well as some standard notation. Section 6.4 discusses further details of how error bounds are derived. Sections 6.5 through 6.9 present error bounds for linear equations, linear least squares problems, the symmetric eigenproblem, the singular value decomposition, and the generalized symmetric definite eigenproblem, respectively.





Sources of Error in Numerical Calculations

 

The effects of two sources of error can be measured by the bounds in this chapter: roundoff error and input error. Roundoff error arises from rounding results of floating-point operations during the algorithm. Input error is error in the input to the algorithm from prior calculations or measurements. We describe roundoff error first, and then input error.

Almost all the error bounds ScaLAPACK provides are multiples of the relative machine precision, which we abbreviate by ε. Relative machine precision (epsilon) bounds the roundoff in individual floating-point operations. It may be loosely defined as the largest relative error in any floating-point operation that neither overflows nor underflows. (Overflow means the result is too large to represent accurately, and underflow means the result is too small to represent accurately.) Relative machine precision is available either by the function call PSLAMCH(ICTXT, 'Epsilon') (or simply PSLAMCH(ICTXT, 'E')) in single precision, or by the function call PDLAMCH(ICTXT, 'Epsilon') (or PDLAMCH(ICTXT, 'E')) in double precision. See section 6.1 and Table 6.1 for a discussion of common values of machine epsilon.

PDLAMCH(ICTXT, 'E') returns a single value for the selected machine parameter 'E' on all processes within the context ICTXT. If these processes are running on a network of heterogeneous processors with different floating-point arithmetics, then a ``safe'' common value is returned: the maximum value of machine epsilon over all the processors.

In case of overflow, there are two common system responses: stopping with an error message, or returning ±∞ and continuing to compute. The latter is the default response of IEEE standard floating-point arithmetic [7, 8], the most commonly used arithmetic. It is possible to change this default to abort with an error message, which is often useful for debugging.

In contrast to LAPACK, ScaLAPACK can take advantage of arithmetic with ±∞ to accelerate the routines that compute eigenvalues of symmetric matrices using PxLAIECT (the drivers PxSYEVX and PxSYGVX, and their complex counterparts). PxLAIECT comes in two different versions, one in which arithmetic with ±∞ is available (the default) and one in which it is not. When ±∞ is available, the inner loop of PxLAIECT is accelerated by removing a branch that tests for and avoids division by zero. This speed advantage is realized only when arithmetic with ±∞ is as fast as arithmetic with normalized floating-point numbers; this is usually but not always the case [42]. The compile-time flag NO_IEEE can be used during installation to run without using ±∞; see the ScaLAPACK Installation Guide for details [24].

Since underflow is almost always less significant than roundoff, we will not consider it further in this section (but see section 6.1).

Bounds on input errors may be easily incorporated into most ScaLAPACK error bounds. Suppose the input data is accurate to, say, five decimal digits (we discuss exactly what this means in section 6.3). Then one simply replaces ε by ε + 10^-5 in the error bounds.

Further Details: Floating-Point Arithmetic 

Roundoff error is bounded in terms of the relative machine precision ε, which is the smallest value satisfying
    |fl(a ⊙ b) - (a ⊙ b)| ≤ ε · |a ⊙ b|,
where a and b are floating-point numbers, ⊙ is any one of the four operations +, -, × and /, and fl(a ⊙ b) is the floating-point result of a ⊙ b. Relative machine precision, ε, is the smallest value for which this inequality is true for all ⊙, and for all a and b such that a ⊙ b is neither too large (magnitude exceeds the overflow threshold) nor too small (is nonzero with magnitude less than the underflow threshold) to be represented accurately in the machine. We also assume ε bounds the relative error in unary operations such as square root:
    |fl(√a) - √a| ≤ ε · √a.

A precise characterization of ε depends on the details of the machine arithmetic and sometimes even of the compiler. For example, if addition and subtraction are implemented without a guard digit, we must redefine ε to be the smallest number such that
    |fl(a ± b) - (a ± b)| ≤ ε · (|a| + |b|).

In order to assure portability, machine parameters such as the relative machine precision (epsilon), the overflow threshold, and the underflow threshold are computed at runtime by the auxiliary routine PxLAMCH. The alternative, keeping a fixed table of machine parameter values, would degrade portability because the table would have to be changed when moving from one machine, combination of machines, or even one compiler, to another.

Most machines (but not yet all) do have the same machine parameters because they implement IEEE Standard Floating Point Arithmetic [7, 8], which exactly specifies floating-point number representations and operations. For these machines, including all modern workstations and PCs, the values of these parameters are given in Table 6.1.

Unfortunately, machines claiming to implement IEEE arithmetic may still compute different results from the same program and input. Here are some examples. Intel processors have 80-bit floating-point registers, and the fastest way to use them is to evaluate all results to 80-bit accuracy until they are stored back to memory in 32-bit or 64-bit format. The IBM RS/6000 has a fused multiply-add instruction that evaluates a+b*c with one rounding error instead of two. The DEC Alpha's default (fast) mode is to flush underflowed values to zero instead of returning subnormal numbers, which is the default demanded by the IEEE standard;     in this mode the DEC Alpha aborts if it encounters a subnormal number. In all these cases machines may be made to operate absolutely identically, for example, by rounding all intermediate results back to single or double on an Intel machine, or by doing subnormal arithmetic carefully and slowly on a DEC Alpha. These heterogeneities lead to errors encountered only in parallel computing; see section 6.2 for further discussion.

Table 6.1: Values of machine parameters in IEEE floating-point arithmetic

      Precision   Machine epsilon ε       Underflow threshold      Overflow threshold
      Single      2^-24 ≈ 5.96e-08        2^-126 ≈ 1.18e-38        2^128 (1-ε) ≈ 3.40e+38
      Double      2^-53 ≈ 1.11e-16        2^-1022 ≈ 2.23e-308      2^1024 (1-ε) ≈ 1.79e+308

As stated above, we will ignore underflow in discussing error bounds. Reference [37] discusses extending error bounds to include underflow and shows that for many common computations, when underflow occurs, it is less significant than roundoff.

Overflow historically resulted in an error message and stopped execution, in which case our error bounds would not apply. But with IEEE floating-point arithmetic, the default is that overflow returns ±∞ and execution continues. Indeed, with IEEE arithmetic machines can continue to compute past overflows, even division by zero, square roots of negative numbers, etc., by producing ±∞ and NaN (``Not a Number'') symbols according to special rules of arithmetic. The default on many systems is to continue computing with these symbols. Routine PxLAIECT exploits this arithmetic to accelerate the computations of eigenvalues, as discussed above. It is also possible to stop with an error message when overflow occurs, a feature that is often useful for debugging. The user should consult the system manual to see how to turn error messages on or off.

Most of our error bounds will simply be proportional to relative machine precision (epsilon). This means, for example, that if the same problem is solved in double precision and single precision, the error bound in double precision will be smaller than the error bound in single precision by a factor of ε_single/ε_double. In IEEE arithmetic, this ratio is 2^-24 / 2^-53 = 2^29 ≈ 10^9, meaning that one expects the double-precision answer to have approximately nine more decimal digits correct than the single-precision answer.
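The digit count quoted above follows directly from the ratio of the two machine epsilons; a one-line check in plain Python:

```python
import math

eps_single = 2.0 ** -24      # IEEE single precision rounding error
eps_double = 2.0 ** -53      # IEEE double precision rounding error

ratio = eps_single / eps_double
print(ratio)                 # 536870912.0 == 2**29
print(math.log10(ratio))     # ~8.7, i.e. roughly nine more decimal digits
```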

Like their counterparts in LAPACK, ScaLAPACK routines are generally insensitive to the details of rounding, provided all processes perform arithmetic identically. The one exception is PxLAIECT, as mentioned above. The next section discusses what can happen when processes do not perform arithmetic identically, that is, are heterogeneous.


next up previous contents index
Next: New Sources of Error Up: Accuracy and Stability Previous: Accuracy and Stability

Susan Blackford
Tue May 13 09:21:01 EDT 1997
New Sources of Error in Parallel Numerical Computations

 

An important difference between ScaLAPACK and LAPACK is that a parallel computing environment, possibly consisting of a heterogeneous collection of processors, introduces new sources of possible errors not found in the serial environment in which LAPACK runs. These errors could indeed afflict any parallel algorithm that uses floating-point arithmetic. For example, consider the following pseudocode, executed in parallel by several processors:

 
      s = global_sum(x)   ... each processor receives the sum s of global array x
      if s < thresh then
         return my part of answer 1
      else
         do more computations
         return my part of answer 2
      end if

It is possible for the value of s to differ from processor to processor; we call this incoherence. This can happen if the floating-point arithmetic varies from processor to processor (we call this heterogeneity), since processors may not even share the same set of floating-point numbers. The value of s can also vary if global_sum accumulates the sum in different orders on different processors, since floating-point addition is not associative. In either case, the test s < thresh may be true on one processor but not another, so that the program may inconsistently return answer 1 on some processors and answer 2 on others. If the ``more computations'' include communication with synchronization, even deadlock could result.
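The incoherence scenario hinges on floating-point addition not being associative. A minimal stand-alone demonstration with IEEE doubles in plain Python (the threshold value is invented for illustration):

```python
# The same three numbers, summed in two different orders:
x = [1.0e16, 1.0, 1.0]

s_left  = (x[0] + x[1]) + x[2]   # each 1.0 is absorbed: 1.0e16
s_right = x[0] + (x[1] + x[2])   # the 1.0's combine first: 1.0e16 + 2.0

print(s_left == s_right)         # False: addition is not associative
print(s_left, s_right)

# A threshold branch now disagrees between the two orders,
# exactly the incoherence scenario in the pseudocode above:
print(s_left <= 1.0e16, s_right <= 1.0e16)   # True False
```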

Deadlock can also result if the floating-point numbers communicated from one processor to another cause fatal floating-point errors on the receiving processor. For example, if an IBM RS/6000, running in its default mode, sends a message containing a denormalized number [7, 8] to a DEC Alpha running in its default mode, then the DEC Alpha aborts [19].

It is also possible for global_sum to compute the same s on all processors but compute a different s from run to run of the program, for example, if global_sum computes the sum in a nondeterministic order on one processor and broadcasts the result to all processors. We call this nonrepeatability.  If this happens, debugging the overall code can be more difficult.

Coherence   and repeatability   are independent properties of an algorithm. It is possible in principle for an algorithm running on a particular platform to be incoherent and repeatable, coherent and nonrepeatable, or any other combination. On a different platform, the same algorithm may have different properties.

Reference [19] contains a more extensive discussion of these possible errors.

One run of a ScaLAPACK routine is designed to be as reliable  as LAPACK, so that errors due to incoherence cannot occur as long as ScaLAPACK is executed on a homogeneous network  of processors. The following conditions apply:

  • The processors are completely identical. This also means that relevant flags, like those controlling the way overflow and underflow are handled in IEEE floating-point arithmetic,   must be identical.
  • The communication library used by the BLACS may only ``copy bits'' and not modify any floating-point numbers (by translation to a different internal floating-point format, as XDR [111] may do).  
  • The identical ScaLAPACK object code must be executed by each processor.

The above conditions guarantee that a single ScaLAPACK call is as reliable  as its LAPACK counterpart. If, in addition, identical answers from one run to another are desired (i.e., repeatability),  this can be guaranteed at runtime by calling BLACS_SET  to enforce repeatability  of the BLACS, and the ScaLAPACK routines that use them, by using an appropriate topology (see the BLACS users guide [54] for details).

Maintaining coherence  on a heterogeneous network  is harder, and not always possible. If floating-point formats differ (say, on a Cray C90 and IBM RS/6000, which uses IEEE arithmetic), there is no cost-effective way to guarantee coherence. If floating-point formats are the same, however, operations such as global sums can accumulate the result on one processor and broadcast it to guarantee coherence (except for the problem of DEC Alphas and denormalized numbers mentioned above). The BLACS do this, except when using the ``bidirectional exchange'' topology. One can avoid using ``bidirectional exchange'' and so guarantee coherence whenever possible, by calling BLACS_SET  to enforce coherence  (see the BLACS users guide [54] for details).
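The accumulate-and-broadcast strategy can be sketched without any BLACS machinery; here a hypothetical helper (names invented, not the BLACS API) fixes one summation order on a ``root'' process and hands every process the identical bits:

```python
def coherent_global_sum(per_process_data, root=0):
    """Hypothetical sketch: the root process sums every contribution in one
    fixed order, then 'broadcasts' that single result to all processes."""
    ranks = sorted(per_process_data)
    total = 0.0
    for rank in ranks:                       # one deterministic order
        total += per_process_data[rank]
    return {rank: total for rank in ranks}   # identical bits everywhere

# Three simulated processes holding one value each:
data = {0: 1.0e16, 1: 1.0, 2: 1.0}
sums = coherent_global_sum(data)
print(len(set(sums.values())) == 1)   # True: every process sees the same s
```

Because every process receives the very same result, a subsequent branch on it (such as `s < thresh` above) agrees everywhere.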

Still other ScaLAPACK routines are guaranteed to work only on homogeneous networks (PxGESVD and PxSYEV). These routines do large numbers of redundant calculations on all processors and depend on the results of these calculations being the same. There are too many of these calculations to cost-effectively compute them all on one processor and broadcast the results.

The user may wonder why ScaLAPACK and the BLACS are not designed to guarantee coherence and repeatability in the most general possible situations, so that calling BLACS_SET would not be necessary. The reason is that the possible bugs described above are quite rare, and so ScaLAPACK and the BLACS were designed to maximize performance instead. Provided the mere sending of floating-point numbers does not cause a fatal error, these bugs cannot occur at all in most ScaLAPACK routines, because branches depending on a supposedly identical floating-point value like s do not occur. For most other ScaLAPACK routines where such branches do occur, we have not seen these bugs despite extensive testing, including attempts to cause them to occur. Complete understanding and cost-effective elimination of such possible bugs are future work.

In the meantime, to get repeatability when running on a homogeneous network, we recommend calling BLACS_SET  as described above when using the following ScaLAPACK drivers: PxGESVX, PxPOSVX, PxSYEV, PxSYEVX, PxGESVD, and PxSYGVX.                          


How to Measure Errors

 

  ScaLAPACK routines return four types of floating-point output arguments:

     
  • Scalar, such as an eigenvalue of a matrix,  
  • Vector, such as the solution x of a linear system Ax=b,  
  • Matrix, such as a matrix inverse A^-1, and  
  • Subspace, such as the space spanned by one or more eigenvectors of a matrix.
This section provides measures for errors in these quantities, which we need in order to express error bounds.

First, consider scalars. Let the scalar α̂ be an approximation of the true answer α. We can measure the difference between α and α̂ either by the absolute error |α̂ - α| or, if α is nonzero, by the relative error |α̂ - α|/|α|. Alternatively, it is sometimes more convenient to use |α̂ - α|/|α̂| instead of the standard expression for relative error. If the relative error of α̂ is, say, 10^-5, we say that α̂ is accurate to 5 decimal digits.
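These scalar measures are trivial to compute; the sketch below (plain Python, with made-up numbers) also converts a relative error into ``decimal digits of accuracy'' via -log10:

```python
import math

def absolute_error(approx, true):
    return abs(approx - true)

def relative_error(approx, true):
    return abs(approx - true) / abs(true)

alpha, alpha_hat = 3.14159265, 3.14158   # true and approximate answers
rel = relative_error(alpha_hat, alpha)
print(rel)                               # about 4.0e-06
print(-math.log10(rel))                  # about 5.4 -> ~5 correct digits
```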

To measure the error in vectors, we need to measure the size or norm of a vector x. A popular norm is the magnitude of the largest component, max_i |x_i|, which we denote by ||x||_∞. This is read the infinity norm of x. See Table 6.2 for a summary of norms.

Table 6.2: Vector and matrix norms

      one-norm         ||x||_1 = Σ_i |x_i|               ||A||_1 = max_j Σ_i |a_{i,j}|
      two-norm         ||x||_2 = (Σ_i |x_i|^2)^(1/2)     ||A||_2 = max_{x≠0} ||Ax||_2 / ||x||_2
      Frobenius norm   ||x||_F = ||x||_2                 ||A||_F = (Σ_{i,j} |a_{i,j}|^2)^(1/2)
      infinity norm    ||x||_∞ = max_i |x_i|             ||A||_∞ = max_i Σ_j |a_{i,j}|

If x̂ is an approximation to the exact vector x, we will refer to ||x̂ - x||_p as the absolute error in x̂ (where p is one of the values in Table 6.2) and refer to ||x̂ - x||_p / ||x||_p as the relative error in x̂ (assuming ||x||_p ≠ 0). As with scalars, we will sometimes use ||x̂ - x||_p / ||x̂||_p for the relative error. As above, if the relative error of x̂ is, say, 10^-5, we say that x̂ is accurate to 5 decimal digits. The following example illustrates these ideas.
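The same measures for vectors, using the infinity norm (a minimal sketch in plain Python; the vectors are invented for illustration):

```python
def norm_inf(x):
    """Infinity norm: magnitude of the largest component."""
    return max(abs(v) for v in x)

def rel_err_inf(x_hat, x):
    """Relative error ||x_hat - x||_inf / ||x||_inf."""
    diff = [a - b for a, b in zip(x_hat, x)]
    return norm_inf(diff) / norm_inf(x)

x     = [1.0,  100.0, 9.0]
x_hat = [1.01,  99.0, 9.2]
print(rel_err_inf(x_hat, x))   # 0.01 -> x_hat approximates x to about 2 digits
```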


displaymath17410

eqnarray4677
Thus, we would say that x̂ approximates x to 2 decimal digits.

Errors in matrices may also be measured with norms. The most obvious generalization of ||x||_∞ to matrices would appear to be max_{i,j} |a_{i,j}|, but this does not have certain important mathematical properties that make deriving error bounds convenient. Instead, we will use ||A||_∞ = max_i Σ_j |a_{i,j}|, where A is an m-by-n matrix, or ||A||_1 = max_j Σ_i |a_{i,j}|; see Table 6.2 for other matrix norms. As before, ||Â - A||_p is the absolute error in Â, ||Â - A||_p / ||A||_p is the relative error in Â, and a relative error in Â of 10^-5 means Â is accurate to 5 decimal digits. The following example illustrates these ideas.
displaymath17411

eqnarray4739
so Â is accurate to 1 decimal digit.

We now introduce some related notation we will use in our error bounds. The condition number of a matrix A is defined as κ_p(A) ≡ ||A||_p · ||A^-1||_p, where A is square and invertible, and p is ∞ or one of the other possibilities in Table 6.2. The condition number measures how sensitive A^-1 is to changes in A; the larger the condition number, the more sensitive is A^-1. For example, for the same A as in the last example,
displaymath17412
ScaLAPACK error estimation routines typically compute a variable called RCOND, which is the reciprocal of the condition number (or an approximation of the reciprocal). The reciprocal of the condition number is used instead of the condition number itself in order to avoid the possibility of overflow when the condition number is very large. Also, some of our error bounds will use the vector of absolute values of x, |x| (with |x|_i = |x_i|), or similarly |A| (with |A|_{i,j} = |a_{i,j}|).
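For a diagonal matrix the condition number collapses to a ratio of diagonal entries, which makes the RCOND convention easy to illustrate (plain Python, invented diagonal):

```python
def cond_inf_diagonal(d):
    """kappa_inf(A) = ||A||_inf * ||A^-1||_inf for A = diag(d),
    which reduces to max|d_i| / min|d_i|."""
    return max(abs(v) for v in d) / min(abs(v) for v in d)

d = [0.5, 1.0, 8.0]
kappa = cond_inf_diagonal(d)
rcond = 1.0 / kappa     # the reciprocal, as a ScaLAPACK-style RCOND reports
print(kappa, rcond)     # 16.0 0.0625 -- a tiny RCOND would flag trouble
```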

Now we consider errors in subspaces. Subspaces are the outputs of routines that compute eigenvectors and invariant subspaces of matrices. We need a careful definition of error in these cases for the following reason. The nonzero vector x is called a (right) eigenvector of the matrix A with eigenvalue λ if Ax = λx. From this definition, we see that -x, 2x, or any other nonzero multiple βx of x is also an eigenvector. In other words, eigenvectors are not unique. This means we cannot measure the difference between two supposed eigenvectors x̂ and x by computing ||x̂ - x||, because this may be large while ||x̂ - βx|| is small or even zero for some β. This is true even if we normalize x so that ||x||_2 = 1, since both x and -x can be normalized simultaneously. Hence, to define error in a useful way, we need instead to consider the set S of all scalar multiples βx of x. The set S is called the subspace spanned by x and is uniquely determined by any nonzero member of S. We will measure the difference between two such sets by the acute angle between them. Suppose Ŝ is spanned by x̂ and S is spanned by x. Then the acute angle between Ŝ and S is defined as
    θ(Ŝ, S) ≡ arccos ( |x̂ᴴ x| / (||x̂||_2 · ||x||_2) ).
One can show that θ(Ŝ, S) does not change when either x̂ or x is multiplied by any nonzero scalar. For example, if
displaymath17414
as above, then tex2html_wrap_inline17632 for any nonzero scalars tex2html_wrap_inline14473 and tex2html_wrap_inline17636.

Let us consider another way to interpret the angle tex2html_wrap_inline17638 between tex2html_wrap_inline17614 and tex2html_wrap_inline17602.     Suppose tex2html_wrap_inline17482 is a unit vector (tex2html_wrap_inline17646). Then there is a scalar tex2html_wrap_inline14473 such that
displaymath17415
The approximation tex2html_wrap_inline17650 holds when tex2html_wrap_inline17638 is much less than 1 (less than .1 will do nicely). If tex2html_wrap_inline17482 is an approximate eigenvector with error bound tex2html_wrap_inline17656, where x is a true eigenvector, there is another true eigenvector tex2html_wrap_inline17580 satisfying tex2html_wrap_inline17662. For example, if
displaymath17416
then tex2html_wrap_inline17664 for tex2html_wrap_inline17666.
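The acute-angle measure for a single vector is easy to compute directly; this Python sketch (not a ScaLAPACK routine) also shows that it is invariant under scaling of either vector:

```python
import math

def acute_angle(x_hat, x):
    """Acute angle between span(x_hat) and span(x); scaling-invariant."""
    dot  = sum(a * b for a, b in zip(x_hat, x))
    n_xh = math.sqrt(sum(a * a for a in x_hat))
    n_x  = math.sqrt(sum(b * b for b in x))
    c = min(1.0, abs(dot) / (n_xh * n_x))   # clamp against roundoff
    return math.acos(c)

x = [1.0, 2.0, 2.0]
print(acute_angle(x, [-3.0 * v for v in x]))   # 0.0: same subspace
print(acute_angle([1.0, 0.0], [0.0, 1.0]))     # pi/2: orthogonal subspaces
```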

Finally, many of our error bounds will contain a factor p(n) (or p(m,n)), which grows as a function of matrix dimension n (or dimensions m and n). It represents a potentially different function for each problem. In practice, the true errors usually grow at most linearly; using p(n)=1 in the error bound formulas will often give a reasonable estimate; p(n)=n is more conservative. Therefore, we will refer to p(n) as a ``modestly growing'' function of n; however, it can occasionally be much larger. For simplicity, the error bounds computed by the code fragments in the following sections will use p(n)=1. This means these computed error bounds may occasionally slightly underestimate the true error. For this reason we refer to these computed error bounds as ``approximate error bounds.''

Further Details: How to Measure Errors 

The relative error |α̂ - α|/|α| in the approximation α̂ of the true solution α has a drawback: it often cannot be computed directly, because it depends on the unknown quantity |α|. However, we can often instead estimate |α̂ - α|/|α̂|, since α̂ is known (it is the output of our algorithm). Fortunately, these two quantities are necessarily close together, provided either one is small, which is the only time they provide a useful bound anyway. For example, |α̂ - α|/|α̂| ≤ .1 implies
displaymath17417
so they can be used interchangeably.

Table 6.2 contains a variety of norms we will use to measure errors. These norms have the properties that ||Ax||_p ≤ ||A||_p · ||x||_p and ||AB||_p ≤ ||A||_p · ||B||_p, where p is one of 1, 2, ∞, and F. These properties are useful for deriving error bounds.

An error bound that uses a given norm may be changed into an error bound that uses another norm. This is accomplished by multiplying the first error bound by an appropriate function of the problem dimension. Table 6.3 gives the factors c such that ||x||_p ≤ c · ||x||_q, where n is the dimension of x.

Table 6.3: Bounding one vector norm in terms of another: values c such that ||x||_p ≤ c ||x||_q, where x has n components

      p \ q    1      2      ∞
      1        1      √n     n
      2        1      1      √n
      ∞        1      1      1
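The norm-equivalence idea behind Tables 6.3 and 6.4 can be checked numerically; a small sketch in plain Python (the test vector is chosen arbitrarily):

```python
import math

def norm_1(x):   return sum(abs(v) for v in x)
def norm_2(x):   return math.sqrt(sum(v * v for v in x))
def norm_inf(x): return max(abs(v) for v in x)

x = [3.0, -4.0, 12.0]
n = len(x)

# ||x||_inf <= ||x||_2 <= sqrt(n)*||x||_inf  and
# ||x||_2   <= ||x||_1 <= sqrt(n)*||x||_2
assert norm_inf(x) <= norm_2(x) <= math.sqrt(n) * norm_inf(x)
assert norm_2(x) <= norm_1(x) <= math.sqrt(n) * norm_2(x)
print(norm_1(x), norm_2(x), norm_inf(x))   # 19.0 13.0 12.0
```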

Table 6.4 gives the factors c such that ||A||_p ≤ c · ||A||_q, where A is m-by-n.

  table4892
Table 6.4: Bounding one matrix norm in terms of another

The two-norm of A, ||A||_2, is also called the spectral norm of A and is equal to the largest singular value σ_max(A) of A. We shall also need to refer to the smallest singular value σ_min(A) of A; its value can be defined in a similar way to the definition of the two-norm in Table 6.2, namely, as min_{x≠0} ||Ax||_2 / ||x||_2 when A has at least as many rows as columns, and as min_{x≠0} ||Aᴴx||_2 / ||x||_2 when A has more columns than rows. The two-norm, Frobenius norm, and singular values of a matrix do not change if the matrix is multiplied by a real orthogonal (or complex unitary) matrix.

Now we define subspaces spanned by more than one vector, and angles between subspaces. Given a set of k n-dimensional vectors x_1, ..., x_k, they determine a subspace S consisting of all their possible linear combinations Σ_i β_i x_i, where the β_i are scalars. We also say that x_1, ..., x_k spans S. The difficulty in measuring the difference between subspaces is that the sets of vectors spanning them are not unique. For example, x, 2x, and -x all determine the same subspace. Therefore, we cannot simply compare the subspaces spanned by x̂_1, ..., x̂_k and x_1, ..., x_k by comparing each x̂_i to x_i. Instead, we will measure the angle between the subspaces, which is independent of the spanning set of vectors. Suppose subspace Ŝ is spanned by x̂_1, ..., x̂_k and that subspace S is spanned by x_1, ..., x_k. If k=1, we instead write more simply x̂ and x. When k=1, we define the angle θ(Ŝ, S) between Ŝ and S as the acute angle between x̂ and x. When k>1, we define the acute angle between Ŝ and S as the largest acute angle between any vector x̂ in Ŝ and the closest vector x in S to x̂:


eqnarray4963
ScaLAPACK routines that compute subspaces return vectors x̂_1, ..., x̂_k spanning a subspace Ŝ that are orthonormal. This means the n-by-k matrix X̂ = [x̂_1, ..., x̂_k] satisfies X̂ᴴX̂ = I. Suppose also that the vectors x_1, ..., x_k spanning S are orthonormal, so X = [x_1, ..., x_k] also satisfies XᴴX = I. Then there is a simple expression for the angle between Ŝ and S:
    θ(Ŝ, S) = arccos σ_min(X̂ᴴX).
For example, if
displaymath17419
then tex2html_wrap_inline17920.

As stated above, all our bounds will contain a factor p(n) (or p(m,n)), which measures how roundoff errors can grow as a function of matrix dimension n (or m and n). In practice, the true error usually grows just linearly with n, or even slower, but we can generally prove only much weaker bounds of the form p(n) = O(n^3). This is because we cannot rule out the extremely unlikely possibility of rounding errors all adding together instead of canceling on average. Using p(n) = O(n^3) would give pessimistic and unrealistic bounds, especially for large n, so we content ourselves with describing p(n) as a ``modestly growing'' polynomial function of n. Using p(n)=1 in the error bound formulas will often give a reasonable error estimate. For detailed derivations of various p(n), see [71, 114, 84, 38].

There is also one situation where p(n) can grow as large as 2^(n-1): Gaussian elimination. This typically occurs only on specially constructed matrices presented in numerical analysis courses [114, p. 212]. However, the expert driver for solving linear systems, PxGESVX, provides error bounds incorporating p(n), and so this rare possibility can be detected.


Further Details: How Error Bounds Are Derived

 





Standard Error Analysis

 

We illustrate standard error analysis with the simple example of evaluating the scalar function y = f(z). Let the output of the subroutine that implements f(z) be denoted alg(z); this includes the effects of roundoff. If alg(z) = f(z + δ), where δ is small, we say that alg is a backward stable algorithm for f, or that the backward error δ is small. In other words, alg(z) is the exact value of f at a slightly perturbed input z + δ.

Suppose now that f is a smooth function, so that we may approximate it near z by a straight line: f(z + δ) ≈ f(z) + δ · f′(z). Then we have the simple error estimate
    alg(z) - f(z) = f(z + δ) - f(z) ≈ δ · f′(z).
Thus, if δ is small and the derivative f′(z) is moderate, the error alg(z) - f(z) will be small. This is often written in the similar form
    |alg(z) - f(z)| / |f(z)| ≤ |f′(z) · z / f(z)| · |δ / z| = κ(f, z) · |δ / z|.
This approximately bounds the relative error |alg(z) - f(z)| / |f(z)| by the product of the condition number of f at z, κ(f, z), and the relative backward error |δ/z|. Thus we get an error bound by multiplying a condition number and a backward error (or bounds for these quantities). We call a problem ill-conditioned if its condition number is large, and ill-posed if its condition number is infinite (or does not exist).

If f and z are vector quantities, then f′(z) is a matrix (the Jacobian). Hence, instead of using absolute values as before, we now measure δ by a vector norm ||δ|| and f′(z) by a matrix norm ||f′(z)||. The conventional (and coarsest) error analysis uses a norm such as the infinity norm. We therefore call this normwise backward stability. For example, a normwise stable method for solving a system of linear equations Ax=b will produce a solution x̂ satisfying (A + E)x̂ = b + f, where ||E||_∞ / ||A||_∞ and ||f||_∞ / ||b||_∞ are both small (close to machine epsilon). In this case the condition number is κ_∞(A) = ||A||_∞ · ||A^-1||_∞ (see section 6.5).

Almost all of the algorithms in ScaLAPACK (as well as LAPACK) are stable in the sense just described: when applied to a matrix A they produce the exact result for a slightly different matrix A + E, where ||E||_∞ / ||A||_∞ is of order ε.

Condition numbers may be expensive to compute exactly. For example, it costs about 2n³/3 operations to solve Ax=b for a general matrix A, and computing κ_∞(A) exactly costs an additional 4n³/3 operations, or twice as much. But κ_∞(A) can be estimated in only O(n²) operations beyond those 2n³/3 necessary for solution, a tiny extra cost. Therefore, most of ScaLAPACK's condition numbers and error bounds are based on estimated condition numbers, using the method of [72, 80, 81].

The price one pays for using an estimated rather than an exact condition number is occasional (but very rare) underestimates of the true error; years of experience attest to the reliability of our estimators, although examples where they badly underestimate the error can be constructed [82]. Note that once a condition estimate is large enough (usually ≥ 1/ε), it confirms that the computed answer may be completely inaccurate, and so the exact magnitude of the condition estimate conveys little information.


Improved Error Bounds

 

The standard error analysis just outlined has a drawback: by using the infinity norm ||δ||_∞ to measure the backward error, entries of equal magnitude in δ contribute equally to the final error bound. This means that if z is sparse or has some tiny entries, a normwise backward stable algorithm may make large changes in these entries compared with their original values. If these tiny values are known accurately by the user, these errors may be unacceptable, or the error bounds may be unacceptably large.

For example, consider solving a diagonal system of linear equations Ax=b. Each component of the solution is computed accurately by Gaussian elimination: x̂_i = fl(b_i / a_{ii}). The usual error bound is approximately ε · κ_∞(A) = ε · max_i |a_{ii}| / min_i |a_{ii}|, which can arbitrarily overestimate the true error, ε, if at least one a_{ii} is tiny and another one is large.
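A concrete (invented) instance of the diagonal example, using exact powers of two so the divisions commit no rounding at all: each solution component is exact, yet the normwise bound ε · max|a_ii| / min|a_ii| is enormous.

```python
# Exact powers of two keep every division in this demo rounding-free.
a_diag = [2.0 ** 30, 2.0 ** -30]              # wildly scaled diagonal of A
b      = [3.0 * 2.0 ** 30, 5.0 * 2.0 ** -30]

x = [bi / ai for bi, ai in zip(b, a_diag)]
print(x)                                      # [3.0, 5.0]: each component exact

eps = 2.0 ** -53                              # IEEE double rounding error
normwise_bound = eps * max(a_diag) / min(a_diag)
print(normwise_bound)                         # 128.0 -- vastly overestimates the
                                              # componentwise error, which is ~eps
```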

LAPACK addresses this inadequacy by providing some algorithms whose backward error δ is a tiny relative change in each component of z: |δ_i| = O(ε)|z_i|. This backward error retains both the sparsity structure of z as well as the information in tiny entries. These algorithms are therefore called componentwise relatively backward stable. Furthermore, computed error bounds reflect this stronger form of backward error.

If the input data has independent uncertainty in each component, each component must have at least a small relative uncertainty, since each is a floating-point number. In this case, the extra uncertainty contributed by the algorithm is not much worse than the uncertainty in the input data, so one could say the answer provided by a componentwise relatively backward stable algorithm is as accurate as the data warrants [4].

When solving Ax=b using the expert driver PxyySVX or the computational routine PxyyRFS, for example, we almost always compute x̂ satisfying (A+E)x̂ = b+f, where e_ij is a small relative change in a_ij and f_i is a small relative change in b_i. In particular, if A is diagonal, the corresponding error bound is always tiny, as one would expect (see the next section).

ScaLAPACK can achieve this accuracy for linear equation solving, the bidiagonal singular value decomposition, and the symmetric tridiagonal eigenproblem, and it provides facilities for achieving this accuracy for least squares problems.



Susan Blackford
Tue May 13 09:21:01 EDT 1997
Next: Error Bounds for Linear Up: Accuracy and Stability Previous: Improved Error Bounds

Error Bounds for Linear Equation Solving

 

Let Ax=b be the system to be solved, and x̂ the computed solution. Let n be the dimension of A. An approximate error bound for x̂ may be obtained in one of the following two ways, depending on whether the solution is computed by a simple driver or an expert driver:

  1. Suppose that Ax=b is solved using the simple driver PSGESV (section 3.2.1). Then the approximate error bound

    ||x̂ − x||_∞ / ||x̂||_∞ ≤ ERRBD
    can be computed by the following code fragment.

          EPSMCH = PSLAMCH( ICTXT, 'E' )
    *     Get infinity-norm of A
          ANORM = PSLANGE( 'I', N, N, A, IA, JA, DESCA, WORK )
    *     Solve system; The solution X overwrites B
          CALL PSGESV( N, 1, A, IA, JA, DESCA, IPIV, B, IB, JB, DESCB, INFO )
          IF( INFO.GT.0 ) THEN
             PRINT *,'Singular Matrix'
          ELSE IF( N.GT.0 ) THEN
    *        Get reciprocal condition number RCOND of A
             CALL PSGECON( 'I', N, A, IA, JA, DESCA, ANORM, RCOND, WORK,
         $                 LWORK, IWORK, LIWORK, INFO )
             RCOND = MAX( RCOND, EPSMCH )
             ERRBD = EPSMCH / RCOND
          END IF
     

    For example, suppose ε ≈ 5.961 × 10⁻⁸ (IEEE single precision) and A and b are a particular 3-by-3 example. The original text then displays (to 4 decimal places) the computed ANORM and ERRBD together with the true reciprocal condition number and the true error; the displayed values were rendered as images and are not recoverable from this conversion.

  2. Suppose that Ax=b is solved using the expert driver PSGESVX (section 3.2.1). This routine provides an explicit error bound FERR, measured with the infinity-norm:

    ||x̂ − x||_∞ / ||x̂||_∞ ≤ FERR
    For example, the following code fragment solves Ax=b and computes an approximate error bound FERR:

          CALL PSGESVX( 'E', 'N', N, 1, A, IA, JA, DESCA, AF, IAF, JAF,
         $              DESCAF, IPIV, EQUED, R, C, B, IB, JB, DESCB, X, IX,
         $              JX, DESCX, RCOND, FERR, BERR, WORK, LWORK, IWORK,
         $              LIWORK, INFO )
          IF( INFO.GT.0 ) PRINT *,'(Nearly) Singular Matrix'

    For the same A and b as above, the original text displays the resulting FERR and BERR together with the actual error (values not preserved in this conversion).

This example illustrates that the expert driver provides an error bound with less programming effort than the simple driver, and also that it may produce a significantly more accurate answer.
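For readers without a parallel machine at hand, the logic of the first code fragment can be mirrored in serial NumPy (a sketch on an arbitrary test matrix; PSLANGE/PSGESV/PSGECON are replaced by dense NumPy analogues, and |A⁻¹| is formed explicitly instead of estimated):

```python
import numpy as np

eps = np.finfo(np.float64).eps                       # EPSMCH
A = np.array([[1.0, 1/2, 1/3],                       # 3x3 Hilbert matrix:
              [1/2, 1/3, 1/4],                       # mildly ill-conditioned
              [1/3, 1/4, 1/5]])                      # test data
x_true = np.ones(3)
b = A @ x_true

anorm = np.abs(A).sum(axis=1).max()                  # ||A||_inf (PSLANGE 'I')
x = np.linalg.solve(A, b)                            # plays the role of PSGESV
ainvnorm = np.abs(np.linalg.inv(A)).sum(axis=1).max()
rcond = max(1.0 / (anorm * ainvnorm), eps)           # PSGECON estimate, clamped
errbd = eps / rcond                                  # simple-driver error bound

true_err = np.abs(x - x_true).max() / np.abs(x).max()
```

For this matrix `errbd` is about 750·ε; the true error is typically a couple of orders of magnitude smaller, illustrating that the simple-driver bound is safe but pessimistic.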

Similar code fragments, with obvious adaptations, may be used with all the driver routines PxyySV and PxyySVX in Table 3.2. For example, if a symmetric positive definite or Hermitian positive definite system is solved by using the simple driver PxPOSV, then PxLANSY or PxLANHE, respectively, must be used to compute ANORM, and PxPOCON must be used to compute RCOND.

The drivers PxGBSV (for solving general band matrices with partial pivoting), PxPBSV (for solving positive definite band matrices), and PxPTSV (for solving positive definite tridiagonal matrices) do not yet have the corresponding routines needed to compute error bounds, namely, norm routines to compute ANORM and PxyyCON routines to compute RCOND.

The drivers PxDBSV      (for solving general band matrices) and PxDTSV      (for solving general tridiagonal matrices) do not pivot for numerical stability, and so may be faster but less accurate than their pivoting counterparts above. These routines may be used safely when any diagonal pivot sequence leads to a stable factorization; diagonally dominant matrices and symmetric positive definite matrices [71] have this property, for example.
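To see why pivoting can be skipped safely for diagonally dominant matrices, here is a NumPy sketch (illustrative only, not the ScaLAPACK band algorithm) that performs LU factorization with no row exchanges on a randomly generated row diagonally dominant matrix and checks that the factorization is still backward stable:

```python
import numpy as np

def lu_nopivot(M):
    """Doolittle LU with no row exchanges; safe when M is diagonally dominant."""
    F = M.copy()
    n = F.shape[0]
    for k in range(n - 1):
        F[k+1:, k] /= F[k, k]                            # multipliers into L
        F[k+1:, k+1:] -= np.outer(F[k+1:, k], F[k, k+1:])
    return F                                             # L (unit lower) + U packed

rng = np.random.default_rng(3)
n = 40
A = rng.standard_normal((n, n))
A += np.diag(np.abs(A).sum(axis=1))      # force row diagonal dominance

F = lu_nopivot(A)
L = np.tril(F, -1) + np.eye(n)
U = np.triu(F)
resid = np.abs(L @ U - A).max() / np.abs(A).max()   # backward error of the
eps = np.finfo(np.float64).eps                      # factorization, O(n*eps)
```

Without the diagonal-dominance step, element growth in the no-pivot factorization can make `resid` arbitrarily large, which is exactly why the pivoting drivers exist.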

Further Details: Error Bounds for Linear Equation Solving 

The conventional error analysis of linear equation solving goes as follows. Let Ax=b be the system to be solved. Let x̂ be the solution computed by ScaLAPACK (or LAPACK) using any of their linear equation solvers. Let r be the residual r = b − Ax̂. In the absence of rounding error, r would be zero and x̂ would equal x; with rounding error, one can only say the following:

The normwise backward error of the computed solution x̂, with respect to the infinity norm, is the pair E,f which minimizes

max( ||E||_∞ / ||A||_∞ , ||f||_∞ / ||b||_∞ )

subject to the constraint (A+E)x̂ = b+f. The minimal value of this quantity is given by

ω = ||r||_∞ / ( ||A||_∞ · ||x̂||_∞ + ||b||_∞ ).

One can show that the computed solution x̂ satisfies ω ≤ p(n)·ε, where p(n) is a modestly growing function of n. The corresponding condition number is κ_∞(A) = ||A||_∞ · ||A⁻¹||_∞. The error x̂ − x is bounded by

||x̂ − x||_∞ / ||x̂||_∞ ≲ 2 · ω · κ_∞(A).
In the first code fragment in the preceding section, the bound 2·ω·κ_∞(A) is approximated by ε/RCOND = EPSMCH/RCOND. Approximations of κ_∞(A) (or, strictly speaking, its reciprocal RCOND) are returned by computational routines PxyyCON (section 3.3.1) or driver routines PxyySVX (section 3.2.1). The code fragment makes sure RCOND is at least ε = EPSMCH to avoid overflow in computing ERRBD. This limits ERRBD to a maximum of 1, which is no loss of generality because a relative error of 1 or more indicates the same thing: a complete loss of accuracy. Note that the value of RCOND returned by PxyySVX may apply to a linear system obtained from Ax=b by equilibration, namely, scaling the rows and columns of A in order to make the condition number smaller. This is the case in the second code fragment in the preceding section, where the program chose to scale the rows by the factors returned in the array R, resulting in an RCOND that refers to the equilibrated system.
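The formula for ω is easy to evaluate directly. The following NumPy sketch (a serial illustration on a random dense system, not ScaLAPACK code) computes the normwise backward error of a solution from `np.linalg.solve` and confirms it is of order ε:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
A = rng.standard_normal((n, n))
b = rng.standard_normal(n)
x = np.linalg.solve(A, b)                       # computed solution x^

r = b - A @ x                                   # residual
eps = np.finfo(np.float64).eps
anorm = np.abs(A).sum(axis=1).max()             # ||A||_inf
# normwise backward error: omega = ||r||_inf / (||A||_inf ||x^||_inf + ||b||_inf)
omega = np.abs(r).max() / (anorm * np.abs(x).max() + np.abs(b).max())
```

For Gaussian elimination with partial pivoting, ω typically comes out near ε itself, far below the p(n)·ε guarantee.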

As stated in section 6.4.2, this approach does not respect the presence of zero or tiny entries in A. In contrast, the ScaLAPACK computational routines PxyyRFS (section 3.3.1) or driver routines PxyySVX (section 3.2.1) will (except in rare cases) compute a solution x̂ with the following properties:

The componentwise backward error of the computed solution x̂ is the pair E,f which minimizes

max_{i,j,k} max( |e_ij| / |a_ij| , |f_k| / |b_k| )

(where we interpret 0/0 as 0) subject to the constraint (A+E)x̂ = b+f. The minimal value of this quantity is given by

ω = max_i |r_i| / ( |A|·|x̂| + |b| )_i .

One can show that for most problems the x̂ computed by PxyySVX satisfies ω ≤ p(n)·ε, where p(n) is a modestly growing function of n. In other words, x̂ is the exact solution of the perturbed problem (A+E)x̂ = b+f, where E and f are small relative perturbations in each entry of A and b, respectively. The corresponding condition number is κ_c(A,b,x̂) = || |A⁻¹|·( |A|·|x̂| + |b| ) ||_∞ / ||x̂||_∞. The error x̂ − x is bounded by

||x̂ − x||_∞ / ||x̂||_∞ ≲ ω · κ_c(A,b,x̂).

The routines PxyyRFS and PxyySVX return ω, which is called BERR (for Backward ERRor), and a bound on the actual error ||x̂ − x||_∞ / ||x̂||_∞, called FERR (for Forward ERRor), as in the second code fragment in the last section. FERR is actually calculated by the following formula, which can be smaller than the bound ω·κ_c(A,b,x̂) given above:

FERR = || |A⁻¹|·( |r̂| + n·ε·( |A|·|x̂| + |b| ) ) ||_∞ / ||x̂||_∞ .

Here, r̂ is the computed value of the residual b − Ax̂, and the norm in the numerator is estimated by using the same estimation subroutine used for RCOND.
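Both quantities can be computed directly when |A⁻¹| is affordable. The sketch below (serial NumPy on random test data; the norm in the FERR formula is evaluated exactly rather than estimated, so this is an illustration of the formulas, not of the library routines) computes BERR and FERR for a dense system and checks that FERR really bounds the error:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 30
A = rng.standard_normal((n, n))
x_true = rng.standard_normal(n)
b = A @ x_true
x = np.linalg.solve(A, b)

eps = np.finfo(np.float64).eps
r = b - A @ x                                   # computed residual r^
denom = np.abs(A) @ np.abs(x) + np.abs(b)       # (|A| |x^| + |b|)_i
berr = np.max(np.abs(r) / denom)                # componentwise backward error

absAinv = np.abs(np.linalg.inv(A))              # |A^{-1}|, formed explicitly
ferr = np.max(absAinv @ (np.abs(r) + n * eps * denom)) / np.abs(x).max()

true_err = np.abs(x - x_true).max() / np.abs(x).max()
```

In practice FERR overestimates the true error by one to two orders of magnitude, which is the price of a rigorous-looking bound computed from rounded quantities.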

The value of BERR for the example in the preceding section is close to its minimum, ε.

Even in the rare cases where PxyyRFS fails to make BERR close to its minimum ε, the error bound FERR may remain small. See [9] for details.



Next: PBLAS Up: Software Components Previous: LAPACK

BLAS

  The BLAS (Basic Linear Algebra Subprograms) [57, 59, 93] include subroutines for common linear algebra computations such as dot products, matrix-vector multiplication, and matrix-matrix multiplication. As is well known, using matrix-matrix operations (in particular, matrix multiplication) tuned for a particular architecture can mask the effects of the memory hierarchy (cache misses, TLB misses, etc.) and permit floating-point operations to be performed at near-peak speed of the machine. An important aim of the BLAS is to provide a portability layer for computation.
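The three levels differ mainly in how much data reuse they expose to the memory hierarchy. The sketch below (NumPy; in standard NumPy builds the `@` operator dispatches to an optimized BLAS xGEMM, though that backend is a build-time detail) checks a naive triple-loop multiply against the BLAS result:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((40, 30))
B = rng.standard_normal((30, 20))

# Unblocked triple loop: the textbook definition of C = A * B, with no
# cache blocking and O(1) data reuse per inner iteration.
C_naive = np.zeros((40, 20))
for i in range(40):
    for j in range(20):
        for k in range(30):
            C_naive[i, j] += A[i, k] * B[k, j]

# The same product through the optimized BLAS matrix-matrix kernel.
C_blas = A @ B
```

Both produce the same product to rounding error; on large matrices the BLAS version is typically orders of magnitude faster because the kernel blocks for cache.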



Next: Error Bounds for the Up: Accuracy and Stability Previous: Error Bounds for Linear

Error Bounds for Linear Least Squares Problems

 

The linear least squares problem is to find x that minimizes ||Ax − b||₂. We discuss error bounds for the most common case, where A is m-by-n with m > n and A has full rank; this is called an overdetermined least squares problem (the following code fragments deal with m=n as well).

Let x̂ be the solution computed by the driver routine PxGELS (see section 3.2.2). An approximate error bound

||x̂ − x||₂ / ||x̂||₂ ≤ ERRBD
may be computed in the following way:

      EPSMCH = PSLAMCH( ICTXT, 'E' )
*     Get the 2-norm of the right hand side B
      BNORM = PSLANGE( 'F', M, 1, B, IB, JB, DESCB, WORK )
*     Solve the least squares problem; the solution X overwrites B
      CALL PSGELS( 'N', M, N, 1, A, IA, JA, DESCA, B, IB, JB, DESCB,
     $             WORK, LWORK, INFO )
      IF( MIN( M, N ).GT.0 ) THEN
*        Get the 2-norm of the residual A*X-B
         RNORM = PSLANGE( 'F', M-N, 1, B, IB+N, JB, DESCB, WORK ) 
*        Get the reciprocal condition number RCOND of A
         CALL PSTRCON( 'I', 'U', 'N', N, A, IA, JA, DESCA, RCOND, WORK,
     $                 LWORK, IWORK, LIWORK, INFO )
         RCOND = MAX( RCOND, EPSMCH )
         IF( BNORM.GT.0.0 ) THEN
            SINT = RNORM / BNORM
         ELSE
            SINT = 0.0
         END IF
         COST = MAX( SQRT( ( 1.0E0 - SINT )*( 1.0E0 + SINT ) ), EPSMCH )
         TANT = SINT / COST
         ERRBD = EPSMCH*( 2.0E0 / ( RCOND*COST ) + TANT / RCOND**2 )
      END IF
 

For example, for ε ≈ 5.961 × 10⁻⁸ (IEEE single precision) and a particular overdetermined A and b, the original text displays (to four decimal places) the computed solution together with RNORM, RCOND, ERRBD, and the true error; the displayed values were rendered as images and are not recoverable from this conversion.

Note that in the preceding code fragment, the routine PSLANGE was used to compute the two-norm of the right-hand side matrix B and of the residual Ax̂ − b. This routine was chosen because the result of the computation (BNORM or RNORM, respectively) is automatically known on all process columns within the process grid. The routine PSNRM2 could also have been used to perform this calculation; however, the use of PSNRM2 in this example would have required an additional communication broadcast, because the resulting value of BNORM or RNORM, respectively, is known only within the process column owning B(:,JB).
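The same bound can be evaluated in serial NumPy (a sketch on a random consistent system; the singular values returned by `np.linalg.lstsq` replace the PSTRCON estimate of 1/RCOND, and the data are arbitrary):

```python
import numpy as np

eps = np.finfo(np.float64).eps                     # EPSMCH
rng = np.random.default_rng(4)
m, n = 60, 10
A = rng.standard_normal((m, n))
x_true = rng.standard_normal(n)
b = A @ x_true                                     # consistent: residual ~ 0

bnorm = np.linalg.norm(b)                          # 2-norm of b
x, _, _, sv = np.linalg.lstsq(A, b, rcond=None)    # plays the role of PSGELS
rnorm = np.linalg.norm(A @ x - b)                  # 2-norm of the residual

rcond = max(sv[-1] / sv[0], eps)                   # 1 / kappa_2(A)
sint = rnorm / bnorm if bnorm > 0.0 else 0.0       # sin(theta)
cost = max(np.sqrt((1.0 - sint) * (1.0 + sint)), eps)
tant = sint / cost
errbd = eps * (2.0 / (rcond * cost) + tant / rcond**2)

true_err = np.linalg.norm(x - x_true) / np.linalg.norm(x)
```

Because the system here is consistent, sin(θ) is essentially zero and the bound reduces to roughly 2·ε·κ₂(A).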

Further Details: Error Bounds for Linear Least Squares Problems 

The conventional error analysis of linear least squares problems goes as follows. As above, let x̂ be the solution to minimizing ||Ax − b||₂ computed by ScaLAPACK using the least squares driver PxGELS (see section 3.2.2). We discuss the most common case, where A is overdetermined (i.e., has more rows than columns) and has full rank [71]:

The computed solution x̂ has a small normwise backward error. In other words, x̂ minimizes ||(A+E)x̂ − (b+f)||₂, where E and f satisfy

max( ||E||₂ / ||A||₂ , ||f||₂ / ||b||₂ ) ≤ p(n) · ε

and p(n) is a modestly growing function of n. We take p(n)=1 in the code fragments above. Let κ₂(A) = σ_max(A)/σ_min(A) (approximated by 1/RCOND in the above code fragments), ρ = ||Ax̂ − b||₂ (= RNORM above), and sin(θ) = ρ / ||b||₂ (SINT = RNORM / BNORM above). Here, θ is the acute angle between the vectors Ax̂ and b. Then when sin(θ) is small, the error x̂ − x is bounded by

||x̂ − x||₂ / ||x||₂ ≲ ε · ( 2·κ₂(A) / cos(θ) + tan(θ) · κ₂²(A) ),

where cos(θ) = COST and tan(θ) = TANT in the code fragments above.

We avoid overflow by making sure RCOND and COST are both at least ε = EPSMCH, and by handling the case of a zero B matrix separately (BNORM = 0).

κ₂(A) may be computed directly from the singular values of A returned by PxGESVD. It may also be approximated by using PxTRCON following calls to PxGELS. PxTRCON estimates κ_∞(A) or κ₁(A) instead of κ₂(A), but these can differ from κ₂(A) by at most a factor of n.



Next: Error Bounds for the Up: Accuracy and Stability Previous: Error Bounds for Linear

Error Bounds for the Symmetric Eigenproblem

 

The eigendecomposition of an n-by-n real symmetric matrix A is the factorization A = ZΛZᵀ (A = ZΛZᴴ in the complex Hermitian case), where Z is orthogonal (unitary) and Λ = diag(λ₁, …, λₙ) is real and diagonal, with λ₁ ≤ λ₂ ≤ ⋯ ≤ λₙ. The λᵢ are the eigenvalues of A, and the columns zᵢ of Z are the eigenvectors. This is also often written Azᵢ = λᵢzᵢ. The eigendecomposition of a symmetric matrix is computed by the driver routines PxSYEV and PxSYEVX. The complex counterparts of these routines, which compute the eigendecomposition of complex Hermitian matrices, are the driver routines PxHEEV and PxHEEVX (see section 3.2.3).

The approximate error bounds for the computed eigenvalues λ̂ᵢ are

|λ̂ᵢ − λᵢ| ≤ EERRBD.

The approximate error bounds for the computed eigenvectors ẑᵢ, which bound the acute angles between the computed eigenvectors and true eigenvectors zᵢ, are

θ(ẑᵢ, zᵢ) ≤ ZERRBD(i).
These bounds can be computed by the following code fragment:

      EPSMCH = PSLAMCH( ICTXT, 'E' )
*     Compute eigenvalues and eigenvectors of A
*     The eigenvalues are returned in W
*     The eigenvector matrix Z overwrites A
      CALL PSSYEV( 'V', UPLO, N, A, IA, JA, DESCA, W, Z, IZ, JZ,
     $             DESCZ, WORK, LWORK, INFO )
      IF( INFO.GT.0 ) THEN
         PRINT *,'PSSYEV did not converge'
      ELSE IF( N.GT.0 ) THEN
*        Compute the norm of A
         ANORM = MAX( ABS( W( 1 ) ), ABS( W( N ) ) )
         EERRBD = EPSMCH * ANORM
*        Compute reciprocal condition numbers for eigenvectors
         CALL SDISNA( 'Eigenvectors', N, N, W, RCONDZ, INFO )
         DO 10 I = 1, N
            ZERRBD( I ) = EPSMCH * ( ANORM / RCONDZ( I ) )
10       CONTINUE
      END IF

For example, for ε ≈ 5.961 × 10⁻⁸ (IEEE single precision) and a particular symmetric matrix A, the original text tabulates the eigenvalues, approximate error bounds, and true errors; the matrix and the table were rendered as images and are not reproduced here.

Further Details: Error Bounds for the Symmetric Eigenproblem 

The usual error analysis of the symmetric eigenproblem using ScaLAPACK driver PSSYEV (see subsection 3.2.3) is as follows [101]:

The computed eigendecomposition ẐΛ̂Ẑᵀ is nearly the exact eigendecomposition of A+E; namely, A+E = Z̃Λ̂Z̃ᵀ is a true eigendecomposition, so that Z̃ is orthogonal, where ||E||₂/||A||₂ ≤ p(n)·ε and ||Z̃ − Ẑ||₂ ≤ p(n)·ε. Here p(n) is a modestly growing function of n. We take p(n)=1 in the above code fragment. Each computed eigenvalue λ̂ᵢ differs from a true λᵢ by at most

|λ̂ᵢ − λᵢ| ≤ p(n) · ε · ||A||₂.

Thus, large eigenvalues (those near max_i |λᵢ| = ||A||₂) are computed to high relative accuracy, while small ones may not be.

The angular difference between the computed unit eigenvector ẑᵢ and a true unit eigenvector zᵢ satisfies the approximate bound

θ(ẑᵢ, zᵢ) ≲ p(n) · ε · ||A||₂ / gapᵢ

if this quantity is small enough. Here, gapᵢ = min_{j≠i} |λᵢ − λⱼ| is the absolute gap between λᵢ and the nearest other eigenvalue. Thus, if λᵢ is close to other eigenvalues, its corresponding eigenvector zᵢ may be inaccurate. The gaps may be easily computed from the array of computed eigenvalues by using subroutine SDISNA. The gaps computed by SDISNA are ensured not to be so small as to cause overflow when used as divisors.
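The quantities above are simple to form once the spectrum is known. Here is a serial NumPy sketch (illustrative; `np.linalg.eigh` stands in for PSSYEV, the gap computation mimics what SDISNA returns, and the matrix is random test data):

```python
import numpy as np

eps = np.finfo(np.float64).eps
rng = np.random.default_rng(5)
n = 8
A = rng.standard_normal((n, n))
A = (A + A.T) / 2.0                    # real symmetric test matrix

w, Z = np.linalg.eigh(A)               # eigenvalues ascending, like PSSYEV
anorm = max(abs(w[0]), abs(w[-1]))     # ||A||_2 from the spectrum
eerrbd = eps * anorm                   # eigenvalue error bound

# absolute gap to the nearest other eigenvalue (the quantity SDISNA forms)
gaps = np.array([min(abs(w[i] - w[j]) for j in range(n) if j != i)
                 for i in range(n)])
zerrbd = eps * anorm / gaps            # eigenvector angle bounds

resid = np.abs(A @ Z - Z * w).max()    # ||A Z - Z Lambda||, should be O(eps ||A||)
```

Eigenvalues that happen to cluster produce small gaps and hence large `zerrbd` entries, even though `eerrbd` stays tiny.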

Let Ŝ be the invariant subspace spanned by a collection of computed eigenvectors {ẑᵢ, i ∈ I}, where I is a subset of the integers from 1 to n. Let S be the corresponding true subspace. Then

θ(Ŝ, S) ≲ p(n) · ε · ||A||₂ / gap_I,

where

gap_I = min { |λᵢ − λⱼ| : i ∈ I, j ∉ I }

is the absolute gap between the eigenvalues in I and the nearest other eigenvalue. Thus, a cluster of close eigenvalues that is far away from any other eigenvalue may have a well-determined invariant subspace Ŝ even if its individual eigenvectors are ill-conditioned.

A small possibility exists that PSSYEV will fail to achieve the above error bounds on a heterogeneous network of processors   for reasons discussed in section 6.2. On a homogeneous network, PSSYEV is as robust as the corresponding LAPACK routine SSYEV. A future release will attempt to detect heterogeneity and warn the user to use an alternative algorithm.

In contrast to LAPACK, where the same error analysis applies to the simple and expert drivers, the expert driver PSSYEVX satisfies slightly weaker error bounds than PSSYEV. The bounds on the eigenvalues and on the individual eigenvector angles continue to hold, but the computed eigenvectors ẑᵢ are no longer guaranteed to be orthogonal to one another. The corresponding LAPACK routine SSYEVX tries to guarantee orthogonality by reorthogonalizing computed eigenvectors against one another, provided their corresponding computed eigenvalues are closer than a threshold ORTOL. If m eigenvalues lie in a cluster satisfying this closeness criterion, SSYEVX requires O(m²n) serial time to execute. When m is a large fraction of n, this serial bottleneck is expensive and does not always improve orthogonality.

ScaLAPACK addresses this problem in two ways. First, it lets the user use more or less time and space to perform reorthogonalization, rather than have a fixed criterion. In particular, the user can set the threshold ORTOL used above, decreasing it to make reorthogonalization less frequent or increasing it to reorthogonalize more. Furthermore, since each processor computes a subset of the eigenvectors, ScaLAPACK permits reorthogonalization only with the local eigenvectors; that is, no communication is allowed. Hence, if a cluster of eigenvalues is small enough for the corresponding eigenvectors to fit on one processor, the same reorthogonalization will be done as in LAPACK. The user can supply more or less workspace to limit the size of a cluster on one processor. Hence, at one extreme, with a large cluster and lots of workspace, the algorithm will be essentially equivalent to SSYEVX. At the other extreme, with all small clusters or little workspace supplied, the algorithm will be perfectly load balanced and perform minimal communication to compute the eigenvectors.

The second way ScaLAPACK will deal with reorthogonalization is to introduce a new algorithm [103, 102, 44] that requires nearly no reorthogonalization to compute orthogonal eigenvectors in a fully parallel way. This algorithm will be introduced in future ScaLAPACK and LAPACK releases.

In the special case of a real symmetric tridiagonal matrix T, the eigenvalues can sometimes be computed much more accurately. PxSYEV (and the other symmetric eigenproblem drivers) computes the eigenvalues and eigenvectors of a dense symmetric matrix by first reducing it to tridiagonal form  T and then finding the eigenvalues and eigenvectors of T. Reduction of a dense matrix to tridiagonal form  T can introduce additional errors, so the following bounds for the tridiagonal case do not apply to the dense case.

The eigenvalues of the symmetric tridiagonal matrix T may be computed with small componentwise relative backward error (|δTᵢⱼ| = O(ε)·|Tᵢⱼ|) by using subroutine PxSTEBZ (section 3.3.4). To compute tighter error bounds for the computed eigenvalues λ̂ᵢ we must make some assumptions about T. The bounds discussed here are from [15]. Suppose T is positive definite, and write T = DHD, where D = diag(t₁₁^{1/2}, …, tₙₙ^{1/2}) and hᵢᵢ = 1. Then the computed eigenvalues λ̂ᵢ can differ from the true eigenvalues λᵢ by

|λ̂ᵢ − λᵢ| ≤ p(n) · ε · κ₂(H) · λᵢ,

where p(n) is a modestly growing function of n. Thus, if κ₂(H) is moderate, each eigenvalue will be computed to high relative accuracy, no matter how tiny it is.



Next: Further Details: Error Bounds Up: Accuracy and Stability Previous: Error Bounds for the

Error Bounds for the Singular Value Decomposition

 

The singular value decomposition (SVD) of a real m-by-n matrix A is defined as follows. Let r = min(m,n). The SVD of A is A = UΣVᵀ (A = UΣVᴴ in the complex case), where U and V are orthogonal (unitary) matrices and Σ = diag(σ₁, …, σᵣ) is diagonal, with σ₁ ≥ σ₂ ≥ ⋯ ≥ σᵣ ≥ 0. The σᵢ are the singular values of A, and the leading r columns uᵢ of U and vᵢ of V are the left and right singular vectors, respectively. The SVD of a general matrix is computed by PxGESVD (see subsection 3.2.3).

The approximate error bounds for the computed singular values σ̂ᵢ are

|σ̂ᵢ − σᵢ| ≤ SERRBD.

The approximate error bounds for the computed singular vectors v̂ᵢ and ûᵢ, which bound the acute angles between the computed singular vectors and true singular vectors vᵢ and uᵢ, are

θ(v̂ᵢ, vᵢ) ≤ VERRBD(i),
θ(ûᵢ, uᵢ) ≤ UERRBD(i).

These bounds can be computed by the following code fragment:

      EPSMCH = PSLAMCH( ICTXT, 'E' )
*     Compute singular value decomposition of A
*     The singular values are returned in S
*     The left singular vectors are returned in U
*     The transposed right singular vectors are returned in VT
      CALL PSGESVD( 'V', 'V', M, N, A, IA, JA, DESCA, S, U, IU, JU,
     $              DESCU, VT, IVT, JVT, DESCVT, WORK, LWORK, INFO )
      IF( INFO.GT.0 ) THEN
         PRINT *,'PSGESVD did not converge'
      ELSE IF( MIN( M, N ).GT.0 ) THEN
         SERRBD  = EPSMCH * S( 1 )
*        Compute reciprocal condition numbers for singular vectors
         CALL SDISNA( 'Left', M, N, S, RCONDU, INFO )
         CALL SDISNA( 'Right', M, N, S, RCONDV, INFO )
         DO 10 I = 1, MIN( M, N )
            VERRBD( I ) = EPSMCH*( S( 1 ) / RCONDV( I ) )
            UERRBD( I ) = EPSMCH*( S( 1 ) / RCONDU( I ) )
10       CONTINUE
      END IF

For example, for ε ≈ 5.961 × 10⁻⁸ (IEEE single precision) and a particular matrix A, the original text tabulates the singular values, approximate error bounds, and true errors; the matrix and the table were rendered as images and are not reproduced here.




Next: Error Bounds for the Up: Error Bounds for the Previous: Error Bounds for the

Further Details: Error Bounds for the Singular Value Decomposition

 

The usual error analysis of the SVD algorithm PxGESVD in ScaLAPACK (see subsection 3.2.3) is as follows [71]:

The SVD algorithm is backward stable. This means that the computed SVD, ÛΣ̂V̂ᵀ, is nearly the exact SVD of A+E, where ||E||₂/||A||₂ ≤ p(m,n)·ε and p(m,n) is a modestly growing function of m and n. That is, A+E = (Û+δÛ)Σ̂(V̂+δV̂)ᵀ is the true SVD, so that Û+δÛ and V̂+δV̂ are both orthogonal, where ||δÛ||₂ ≤ p(m,n)·ε and ||δV̂||₂ ≤ p(m,n)·ε. Each computed singular value σ̂ᵢ differs from the true σᵢ by at most

|σ̂ᵢ − σᵢ| ≤ p(m,n) · ε · σ₁

(we take p(m,n)=1 in the above code fragment). Thus, large singular values (those near σ₁) are computed to high relative accuracy and small ones may not be.

The angular difference between the computed left singular vector ûᵢ and a true uᵢ satisfies the approximate bound

θ(ûᵢ, uᵢ) ≲ p(m,n) · ε · σ₁ / gapᵢ,

where gapᵢ = min_{j≠i} |σᵢ − σⱼ| is the absolute gap between σᵢ and the nearest other singular value (we take p(m,n)=1 in the above code fragment). Thus, if σᵢ is close to other singular values, its corresponding singular vector uᵢ may be inaccurate. When n < m, then gapₙ must be redefined as min( min_{j≠n} |σₙ − σⱼ|, σₙ ). The gaps may be easily computed from the array of computed singular values using function SDISNA. The gaps computed by SDISNA are ensured not to be so small as to cause overflow when used as divisors. The same bound applies to the computed right singular vector v̂ᵢ and a true vector vᵢ.
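These gaps, including the special case for the smallest singular value when m > n, can be formed directly from the computed spectrum. A NumPy sketch (illustrative; `np.linalg.svd` stands in for PxGESVD, the gap logic mirrors what SDISNA does, and the matrix is random test data):

```python
import numpy as np

eps = np.finfo(np.float64).eps
rng = np.random.default_rng(6)
m, n = 9, 6
A = rng.standard_normal((m, n))
U, s, Vt = np.linalg.svd(A, full_matrices=False)   # s is descending

serrbd = eps * s[0]                                # singular value error bound
gaps = []
for i in range(n):
    g = min(abs(s[i] - s[j]) for j in range(n) if j != i)
    if i == n - 1 and m > n:                       # smallest sigma: also count
        g = min(g, s[i])                           # the distance to zero
    gaps.append(g)
gaps = np.array(gaps)
uerrbd = serrbd / gaps                             # left vector angle bounds
verrbd = serrbd / gaps                             # right vector angle bounds

recon_err = np.abs(U @ np.diag(s) @ Vt - A).max()  # SVD residual, O(eps sigma_1)
```

The extra `min(g, s[i])` term matters only in the overdetermined case, where a singular value near zero has a poorly determined vector even if no other singular value is nearby.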

Let Ŝ be the space spanned by a collection of computed left singular vectors {ûᵢ, i ∈ I}, where I is a subset of the integers from 1 to n. Let S be the corresponding true space. Then

θ(Ŝ, S) ≲ p(m,n) · ε · σ₁ / gap_I,

where

gap_I = min { |σᵢ − σⱼ| : i ∈ I, j ∉ I }

is the absolute gap between the singular values in I and the nearest other singular value. Thus, a cluster of close singular values that is far away from any other singular value may have a well-determined space Ŝ even if its individual singular vectors are ill-conditioned. The same bound applies to a set of right singular vectors {v̂ᵢ, i ∈ I}.

There is a small possibility that PxGESVD will fail to achieve the above error bounds on a heterogeneous network of processors   for reasons discussed in section 6.2. On a homogeneous network, PxGESVD is as robust as the corresponding LAPACK routine xGESVD. A future release will attempt to detect heterogeneity and warn the user to use an alternative algorithm.

In the special case of bidiagonal matrices, the singular values and singular vectors may be computed much more accurately. A bidiagonal matrix B has nonzero entries only on the main diagonal and the diagonal immediately above it (or immediately below it). PxGESVD computes the SVD of a general      matrix by first reducing it to bidiagonal form B, and then calling xBDSQR (subsection 3.3.6) to compute the SVD of B. Reduction of a dense matrix to bidiagonal form B can introduce additional errors, so the following bounds for the bidiagonal case do not apply to the dense case. For the error analysis of xBDSQR, see the LAPACK manual.



Next: Troubleshooting Up: Accuracy and Stability Previous: Further Details: Error Bounds

Error Bounds for the Generalized Symmetric Definite Eigenproblem

 

Three types of problems must be considered. In all cases A and B are real symmetric (or complex Hermitian) and B is positive definite. These decompositions are computed for real symmetric matrices by the driver routine PxSYGVX (see section 3.2.4) and for complex Hermitian matrices by the driver routine PxHEGVX (see subsection 3.2.4). In each of the following three decompositions, Λ is real and diagonal with diagonal entries λ₁ ≤ λ₂ ≤ ⋯ ≤ λₙ, and the columns zᵢ of Z are linearly independent vectors. The λᵢ are called eigenvalues and the zᵢ are eigenvectors.

  1. Az = λBz. The eigendecomposition may be written ZᵀAZ = Λ and ZᵀBZ = I (or ZᴴAZ = Λ and ZᴴBZ = I if A and B are complex). This may also be written Azᵢ = λᵢBzᵢ.
  2. ABz = λz. The eigendecomposition may be written Z⁻¹AZ⁻ᵀ = Λ and ZᵀBZ = I (Z⁻¹AZ⁻ᴴ = Λ and ZᴴBZ = I if A and B are complex). This may also be written ABzᵢ = λᵢzᵢ.
  3. BAz = λz. The eigendecomposition may be written ZᵀAZ = Λ and Z⁻¹BZ⁻ᵀ = I (ZᴴAZ = Λ and Z⁻¹BZ⁻ᴴ = I if A and B are complex). This may also be written BAzᵢ = λᵢzᵢ.

The approximate error bounds for the computed eigenvalues λ̂_i are

    |λ̂_i - λ_i| ≲ EERRBD(i).

The approximate error bounds for the computed eigenvectors ẑ_i, which bound the acute angles between the computed eigenvectors and true eigenvectors z_i, are

    θ(ẑ_i, z_i) ≲ ZERRBD(i).
These bounds are computed differently, depending on which of the above three problems is to be solved. The following code fragments show how.

  1. First we consider error bounds for problem 1.

          EPSMCH = PSLAMCH( ICTXT, 'E' )
          UNFL = PSLAMCH( ICTXT, 'U' )
    *     Solve the eigenproblem A - lambda B (ITYPE = 1)
          ITYPE = 1
    *     Compute the norms of A and B
          ANORM = PSLANSY( '1', UPLO, N, A, IA, JA, DESCA, WORK )
          BNORM = PSLANSY( '1', UPLO, N, B, IB, JB, DESCB, WORK )
    *     The eigenvalues are returned in W
    *     The eigenvectors are returned in A
          CALL PSSYGVX( ITYPE, 'V', 'A', UPLO, N, A, IA, JA,
         $                    DESCA, B, IB, JB, DESCB, VL, VU, IL, IU,
         $                    UNFL, M, NZ, W, -1.0, Z, IZ, JZ, DESCZ,
         $                    WORK, LWORK, IWORK, LIWORK, IFAIL, ICLUSTR,
         $                    GAP, INFO )
          IF( INFO.GT.0 ) THEN
             PRINT *,'PSSYGVX did not converge, or B not positive definite'
          ELSE IF( N.GT.0 ) THEN
    *        Get reciprocal condition number RCONDB of Cholesky factor of B
             CALL PSTRCON( '1', UPLO, 'N', N, B, IB, JB, DESCB, RCONDB,
         $                 WORK, LWORK, IWORK, LIWORK, INFO )
             RCONDB = MAX( RCONDB, EPSMCH )
             CALL SDISNA( 'Eigenvectors', N, N, W, RCONDZ, INFO )
             DO 10 I = 1, N
                EERRBD( I ) = ( EPSMCH / RCONDB**2 ) * ( ANORM / BNORM +
         $                      ABS( W( I ) ) )
                ZERRBD( I ) = ( EPSMCH / RCONDB**3 ) * ( ( ANORM / BNORM )
         $                     / RCONDZ( I ) + ( ABS( W( I ) ) /
         $                     RCONDZ( I ) ) * RCONDB )
    10       CONTINUE
          END IF
     

    For example, for the sample matrices A and B used in this section, the approximate eigenvalues, approximate error bounds, and true errors are shown below.

    [Table: approximate eigenvalues, approximate error bounds, and true errors]

  2. Problem types 2 and 3 have the same error bounds. We illustrate only type 2.    

          EPSMCH = PSLAMCH( ICTXT, 'E' )
          UNFL = PSLAMCH( ICTXT, 'U' )
    *     Solve the eigenproblem A*B - lambda I (ITYPE = 2)
          ITYPE = 2
    *     Compute the norms of A and B
          ANORM = PSLANSY( '1', UPLO, N, A, IA, JA, DESCA, WORK )
          BNORM = PSLANSY( '1', UPLO, N, B, IB, JB, DESCB, WORK )
    *     The eigenvalues are returned in W
    *     The eigenvectors are returned in A
          CALL PSSYGVX( ITYPE, 'V', 'A', UPLO, N, A, IA, JA,
         $                    DESCA, B, IB, JB, DESCB, VL, VU, IL, IU,
         $                    UNFL, M, NZ, W, -1.0, Z, IZ, JZ, DESCZ,
         $                    WORK, LWORK, IWORK, LIWORK, IFAIL, ICLUSTR,
         $                    GAP, INFO )
          IF( INFO.GT.0 .AND. INFO.LE.N ) THEN
             PRINT *,'PSSYGVX did not converge'
          ELSE IF( INFO.GT.N ) THEN
             PRINT *,'B not positive definite'
          ELSE IF( N.GT.0 ) THEN
    *        Get reciprocal condition number RCONDB of Cholesky factor of B
             CALL PSTRCON( '1', UPLO, 'N', N, B, IB, JB, DESCB, RCONDB,
         $                 WORK, LWORK, IWORK, LIWORK, INFO )
             RCONDB = MAX( RCONDB, EPSMCH )
             CALL SDISNA( 'Eigenvectors', N, N, W, RCONDZ, INFO )
             DO 10 I = 1, N
                EERRBD(I) = ( ANORM * BNORM ) * EPSMCH + 
         $                  ( EPSMCH / RCONDB**2 ) * ABS( W( I ) )
                ZERRBD(I) = ( EPSMCH / RCONDB ) * ( ( ANORM * BNORM ) / 
         $                    RCONDZ( I ) + 1.0 / RCONDB )
    10       CONTINUE
          END IF
     

    For the same A and B as above, the approximate eigenvalues, approximate error bounds, and true errors are shown below.

    [Table: approximate eigenvalues, approximate error bounds, and true errors]

Further Details: Error Bounds for the Generalized Symmetric Definite Eigenproblem 

The error analysis of the driver routine PxSYGVX, or PxHEGVX in the complex case (see section 3.2.4), goes as follows. In all cases gap_i is the absolute gap between λ_i and the nearest other eigenvalue.

  1. Az = λBz. The computed eigenvalues λ̂_i can differ from the true eigenvalues λ_i by at most about

    |λ̂_i - λ_i| ≲ p(n) ε κ_2(B) ( ||A||_2 / ||B||_2 + |λ̂_i| ).

    The angular difference between the computed eigenvector ẑ_i and a true eigenvector z_i is

    θ(ẑ_i, z_i) ≲ p(n) ε ( κ_2(B)^{3/2} ||A||_2 / ||B||_2 + κ_2(B) |λ̂_i| ) / gap_i.

  2. ABz = λz or BAz = λz. The computed eigenvalues λ̂_i can differ from the true eigenvalues λ_i by at most about

    |λ̂_i - λ_i| ≲ p(n) ε ( ||A||_2 ||B||_2 + κ_2(B) |λ̂_i| ).

    The angular difference between the computed eigenvector ẑ_i and a true eigenvector z_i is

    θ(ẑ_i, z_i) ≲ p(n) ε ( κ_2(B)^{1/2} ||A||_2 ||B||_2 / gap_i + κ_2(B) ).

The code fragments above replace p(n) by 1 and make sure neither RCONDB nor RCONDZ is so small as to cause overflow when used as divisors in the expressions for error bounds.
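The arithmetic carried out by the two code fragments can be mirrored in a few lines of Python. The helpers below are a sketch for illustration only (the function names and sample inputs are invented); in the real fragments EPSMCH, ANORM, BNORM, RCONDB, and RCONDZ come from PSLAMCH, PSLANSY, PSTRCON, and SDISNA as shown above:

```python
def bounds_type1(eps, anorm, bnorm, rcondb, rcondz, w):
    """Error bounds for A z = lambda B z (ITYPE = 1), with p(n) taken as 1."""
    rcondb = max(rcondb, eps)  # guard against overflow, as in the fragment
    eerrbd = [(eps / rcondb**2) * (anorm / bnorm + abs(wi)) for wi in w]
    zerrbd = [(eps / rcondb**3) * ((anorm / bnorm) / rz
              + (abs(wi) / rz) * rcondb) for wi, rz in zip(w, rcondz)]
    return eerrbd, zerrbd

def bounds_type23(eps, anorm, bnorm, rcondb, rcondz, w):
    """Error bounds for ITYPE = 2 or 3, with p(n) taken as 1."""
    rcondb = max(rcondb, eps)
    eerrbd = [anorm * bnorm * eps + (eps / rcondb**2) * abs(wi) for wi in w]
    zerrbd = [(eps / rcondb) * (anorm * bnorm / rz + 1.0 / rcondb)
              for rz in rcondz]
    return eerrbd, zerrbd
```

Note that the eigenvector bound for types 2 and 3 does not involve the eigenvalue itself, matching the ZERRBD expression in the second fragment.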

These error bounds are large when B is ill-conditioned with respect to inversion (κ_2(B) is large). Often, the eigenvalues and eigenvectors are much better conditioned than indicated here. We mention two ways to get tighter bounds. The first way is effective when the diagonal entries of B differ widely in magnitude:

  1. Az = λBz. Let D = diag(b_11^{-1/2}, ..., b_nn^{-1/2}) be a diagonal matrix. Then replace B by DBD and A by DAD in the above bounds.
  2. ABz = λz or BAz = λz. Let D = diag(b_11^{-1/2}, ..., b_nn^{-1/2}) be a diagonal matrix. Then replace B by DBD and A by D^{-1}AD^{-1} in the above bounds.
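The effect of the diagonal scaling D = diag(b_11^{-1/2}, ..., b_nn^{-1/2}) discussed above can be seen in a small pure-Python sketch (the 2-by-2 matrix is a made-up example): DBD has unit diagonal, and κ_2 drops dramatically when the diagonal entries of B differ widely in magnitude.

```python
import math

def cond2_sym2x2(m):
    """kappa_2 of a symmetric positive definite 2x2 matrix,
    via the closed-form eigenvalues of a symmetric 2x2 matrix."""
    tr = m[0][0] + m[1][1]
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    root = math.sqrt(tr * tr - 4.0 * det)
    lmax, lmin = (tr + root) / 2.0, (tr - root) / 2.0
    return lmax / lmin

B = [[1.0e6, 2.0e2], [2.0e2, 1.0]]          # widely varying diagonal
d = [1.0 / math.sqrt(B[i][i]) for i in range(2)]
DBD = [[d[i] * B[i][j] * d[j] for j in range(2)] for i in range(2)]

# DBD is [[1, 0.2], [0.2, 1]]: kappa_2 falls from about 1e6 to 1.5.
print(cond2_sym2x2(B), cond2_sym2x2(DBD))
```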

The second way to get tighter bounds does not actually supply guaranteed bounds, but its estimates are often better in practice. It is not guaranteed because it assumes the algorithm is backward stable, which is not necessarily true when B is ill-conditioned. It estimates the chordal distance between a true eigenvalue λ_i and a computed eigenvalue λ̂_i:

    χ(λ̂_i, λ_i) = |λ̂_i - λ_i| / ( (1 + λ̂_i^2)^{1/2} (1 + λ_i^2)^{1/2} ).

To interpret this measure, we write λ_i = tan(θ) and λ̂_i = tan(θ̂). Then χ(λ̂_i, λ_i) = sin|θ̂ - θ|. In other words, if λ̂_i represents the one-dimensional subspace S_λ̂ consisting of the line through the origin with slope λ̂_i, and λ_i represents the analogous subspace S_λ, then χ(λ̂_i, λ_i) is the sine of the acute angle between these subspaces. Thus χ is bounded by one and is small when both arguments are large. It applies only to the first problem, Az = λBz:

Suppose a computed eigenvalue λ̂_i of Az = λBz is the exact eigenvalue of a perturbed problem (A + E)z = λ(B + F)z. Let z_i be the unit eigenvector (||z_i||_2 = 1) for the exact eigenvalue λ_i. Then if ||E|| is small compared with ||A||, and if ||F|| is small compared with ||B||, we have

    χ(λ_i, λ̂_i) ≲ ( ||E||_2 + ||F||_2 ) / ( (z_i^T A z_i)^2 + (z_i^T B z_i)^2 )^{1/2}.

Thus 1 / ( (z_i^T A z_i)^2 + (z_i^T B z_i)^2 )^{1/2} is a condition number for eigenvalue λ_i.


Susan Blackford
Tue May 13 09:21:01 EDT 1997

Troubleshooting

  

Successful installation, testing, and use of ScaLAPACK rely heavily on the proper installation of its building blocks (PVM or MPI, the BLACS, the BLAS, and the PBLAS). Frequently Asked Questions (FAQ) lists are maintained in the corresponding directories on netlib to answer the most common user questions. For the user's convenience, prebuilt ScaLAPACK and BLACS libraries are provided for a variety of computer architectures at the following URLs:

http://www.netlib.org/scalapack/archives/
http://www.netlib.org/blacs/archives/

Test suites are provided for PVM, the BLACS, the BLAS, and the PBLAS. It is highly recommended that each of these respective test suites be run prior to the execution of the ScaLAPACK test suite. Installation Guides  are also provided for the BLACS and ScaLAPACK. Refer to the appropriate directory on netlib for further information.

We begin this chapter by discussing a set of first-step debugging hints to pinpoint where the problem is occurring. Following these debugging hints, we discuss the types of error messages that can be encountered during the execution of a ScaLAPACK routine: ScaLAPACK error messages, PBLAS error messages, BLACS error messages, and system-dependent error messages or failures.

If these suggestions do not help resolve specific difficulties, we suggest that the user review the following ``bug report checklist'' and then feel free to contact the authors at scalapack@cs.utk.edu or blacs@cs.utk.edu. The user should tell us the type of machine on which the tests were run, the compiler and compiler options that were used, and details of the BLACS, message-passing, and BLAS libraries that were used; the user should also send a copy of the input file, if appropriate.






Bug Report Checklist

  

When the user sends e-mail to our mailing alias, some of the first questions we will ask are the following:

  1. Have you run the BLAS, BLACS, PBLAS and ScaLAPACK test suites?
  2. Have you checked the errata lists (errata.scalapack and errata.blacs) on netlib?
    http://www.netlib.org/scalapack/errata.scalapack
    http://www.netlib.org/blacs/errata.blacs
  3. If you are using an optimized BLAS or BLACS library, have you tried using the reference implementations from netlib?
    http://www.netlib.org/blas/
    http://www.netlib.org/blacs/
  4. If you are using an optimized MPI or PVM library, have you tried using the reference implementations from netlib?
    http://www.netlib.org/pvm3/
    http://www.netlib.org/mpi/
  5. Have you attempted to replicate this error using the appropriate ScaLAPACK test code and/or one of the ScaLAPACK example routines?



Installation Debugging Hints

 

If the user encounters difficulty in the installation process, we suggest the following:

  • Obtain prebuilt ScaLAPACK and BLACS libraries, available from netlib for a variety of architectures.
    http://www.netlib.org/scalapack/archives/
    http://www.netlib.org/blacs/archives/
  • Obtain sample SLmake.inc files for a variety of architectures in the SCALAPACK/INSTALL directory in the scalapack distribution tar file. Sample Bmake.inc files are included in the BLACS/BMAKES directory in the blacs distribution file.
  • Consult the ScaLAPACK FAQ list on netlib.
  • Consult the errata.scalapack file in the scalapack directory on netlib, and/or the errata.blacs file in the blacs directory on netlib. These files contain a list of known difficulties that have been diagnosed and corrected (or will be corrected in the next release), or reported to the vendor as in the case of message-passing libraries or optimized BLAS  .
  • Always run the BLACS, BLAS, and PBLAS test suites to ensure that these libraries have been properly installed. (If PVM is the underlying message-passing layer, please also run the PVM test suite.) If a problem is detected in the BLAS or BLACS libraries, try linking to the reference implementations to be found in the respective blas or blacs directory on netlib.
  • If ScaLAPACK is being tested on a heterogeneous cluster of computers, please ensure that all executables are linked with the same debug level of the BLACS. Otherwise, unpredictable results will occur because the debug level 1 BLACS (specified by BLACSDBGLVL=1 in Bmake.inc) perform error-checking and thus send more messages than the performance debug level 0 BLACS (specified by BLACSDBGLVL=0 in Bmake.inc).



Application Debugging Hints

 

We highly recommend the following as a list of debugging hints (and tools) for writing parallel application programs that call ScaLAPACK:

  • Look at the ScaLAPACK example programs as a good starting point.
    http://www.netlib.org/scalapack/examples/
  • Always check the value of INFO on exit from a ScaLAPACK routine.
  • All routines in ScaLAPACK that require workspace also require the length of that workspace to be specified in the calling sequence. If in doubt about the amount of workspace to supply to a ScaLAPACK routine, supply LWORK=-1, and use the returned value in WORK(1) as the correct value for LWORK. Refer to section 4.6.5 for further details on determining workspace requirements  .
  • If you are calling a ScaLAPACK routine that has an LAPACK equivalent, write a serial code calling LAPACK first. Code can be converted in pieces from LAPACK to ScaLAPACK by debugging on a one-process grid. When all of the LAPACK codes have been removed and the code has been fully parallelized, execute it on a multiple process grid.
  • When writing a parallel program, first debug the code to work on one process, and then expand to more processes.
  • When writing a parallel program, debug with small matrices.
  • Use the TOOLS routine PxLAPRNT  to print out each process's portion of a distributed matrix. A variety of utility routines   are provided in the TOOLS directory and are commonly used as debugging aids in the development of the ScaLAPACK library.
  • Sprinkle synchronization points via BLACS_BARRIER   near the suspected error.
  • Link to the debug level 1 BLACS (specified by BLACSDBGLVL=1 in Bmake.inc)  until the program is completely debugged.
  • Specify the ``Repeatability'' flag in BLACS_SET.
  • If running a heterogeneous application, please ensure that all executables are linked with the same debug level of the BLACS. Otherwise, unpredictable results will occur because the debug level 1 BLACS perform error-checking and thus send more messages than the performance debug level 0 BLACS.
  • Always run the BLACS, BLAS, and PBLAS test suites to ensure that these libraries have been properly installed. (If PVM is the underlying message-passing layer, please also run the PVM test suite.) If a problem is detected in the BLAS or BLACS libraries, try linking to the reference implementations in the respective blas or blacs directory on netlib.
  • Consult the errata.scalapack file in the scalapack directory on netlib, and/or the errata.blacs file in the blacs directory on netlib. These files contain a list of known difficulties that have been diagnosed and corrected (or will be corrected in the next release), or reported to the vendor as in the case of message-passing libraries or optimized BLAS  .
  • Refer to section 4.6.7 and the leading comments of the source code for the alignment restrictions currently needed in some of the ScaLAPACK routines.


Common Errors in Calling ScaLAPACK Routines

 

The user must read the leading comments of a ScaLAPACK routine before invoking the routine. The wording of the leading comments is explained in Chapter 4. Basic terminology is explained in the Glossary and List of Notation.

For the benefit of less experienced programmers, we provide a list of common programming errors in calling a ScaLAPACK routine. These errors may cause the ScaLAPACK routine to report a failure, as described in section 7.4; they may cause an error to be reported by the system; or they may lead to wrong results -- see also section 7.5.

  • Wrong number of arguments
  • Arguments in the wrong order
  • Argument of the wrong type (especially real and complex arguments of the wrong precision)
  • Wrong dimensions for an array argument
  • Insufficient space in a workspace argument
  • Failure to assign a value to an input argument
  • Routine designed for homogeneous computers was executed on a heterogeneous system (see section 6.2)

Some modern compilation systems, as well as software tools such as the Fortran 77 syntax checker ftnchek (freely available on netlib) and the portability checker in Toolpack [105], can check that arguments agree in number and type; and many compilation systems offer run-time detection of errors such as an array element out-of-bounds or use of an unassigned variable.




PBLAS

 

To simplify the design of ScaLAPACK, and because the BLAS have proven to be useful tools outside LAPACK, we chose to build a parallel set of BLAS, called PBLAS [26, 104], which perform message-passing and whose interface is as similar to the BLAS as possible. This decision has permitted the ScaLAPACK code to be quite similar, and sometimes nearly identical, to the analogous LAPACK code.

We hope that the PBLAS will provide a distributed memory standard  , just as the BLAS have provided a shared memory standard. This would simplify and encourage the development of high performance and portable parallel numerical software, as well as providing manufacturers with a small set of routines to be optimized. Further details of the PBLAS can be found in [26], [104], and Appendix D.2.




Failures Detected by ScaLAPACK Routines

   

A ScaLAPACK routine has two ways to report a failure to complete a computation successfully.






Invalid Arguments and PXERBLA

    If an illegal value is supplied for one of the input arguments to a ScaLAPACK routine, it will call the error handler PXERBLA to write a message to the standard output unit of the form:

 ** On entry to PSGESV  parameter number  4 had an illegal value
This particular message could be caused by passing to PSGESV  a value of NRHS that was less than zero, for example. The arguments are checked in order, beginning with the first. As mentioned in Chapter 4, if an error is detected in the i-th entry of a descriptor array, which is the j-th argument in the parameter list, the number passed to PXERBLA has been arbitrarily chosen to be 100*i+j. This allows the user to distinguish an error in a descriptor entry from an error in a scalar argument. Invalid arguments are often caused by the kinds of errors listed in section 7.3.
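The 100*i+j encoding is easy to decode when reading a PXERBLA message; the helper below is a hypothetical illustration, not part of ScaLAPACK:

```python
def decode_pxerbla(param):
    """Split the parameter number printed by PXERBLA.

    A value below 100 refers to a scalar argument directly; a value
    of the form 100*i + j refers to entry i of the descriptor array
    that is argument j of the routine.
    """
    if param < 100:
        return None, param            # plain argument number
    return param // 100, param % 100  # (descriptor entry i, argument j)

# Entry 2 of the descriptor passed as argument 9 is reported as 209:
assert decode_pxerbla(209) == (2, 9)
assert decode_pxerbla(4) == (None, 4)
```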

In the model implementation of PXERBLA  that is supplied with ScaLAPACK, the only action that is performed is the printing of an error message to standard output. Program execution is not terminated. For the ScaLAPACK driver and computational routines, a RETURN statement is issued following the call to PXERBLA. Control returns to the higher-level calling routine, and it is left to the user to determine how the program should proceed. However, in the specialized low-level ScaLAPACK routines (auxiliary routines that are Level 2 equivalents of computational routines), the call to PXERBLA() is immediately followed by a call to BLACS_ABORT()  to terminate program execution since recovery from an error at this level in the computation is not possible.

It is always good practice to check for a nonzero value of INFO on return from a ScaLAPACK routine.  




Computational Failures and INFO > 0

   A positive value of INFO on return from a ScaLAPACK routine indicates a failure in the course of the algorithm. Common causes are

  • a matrix is singular (to working precision),
  • a symmetric matrix is not positive definite, or
  • an iterative algorithm for computing eigenvalues or eigenvectors fails to converge in the permitted number of iterations.
For example, if PSGESVX  is called to solve a system of equations with a coefficient matrix  that is approximately singular, it may detect exact singularity at the i-th stage of the LU factorization, in which case it returns INFO = i; or (more probably) it may compute an estimate of the reciprocal condition number that is less than relative machine precision, in which case it returns INFO = n+1. Again, the documentation in Part 2 should be consulted for a description of the error.

When a failure with INFO > 0 occurs, control is always returned to the calling program; PXERBLA() is not called, and no error message is written. Thus, it is always good practice to check for a nonzero value of INFO on return from a ScaLAPACK routine.

A failure with INFO > 0 may indicate any of the following:

  • An inappropriate routine was used. For example, if a routine fails because a symmetric matrix turns out not to be positive definite, consider using a routine for symmetric indefinite matrices.
  • A single-precision routine was used when double precision was needed. For example, if PSGESVX  reports approximate singularity (as illustrated above), the corresponding double precision routine PDGESVX may be able to solve the problem (but nevertheless the problem is ill-conditioned).
  • A programming error occurred in generating the data supplied to a routine. For example, even though theoretically a matrix should be well-conditioned and positive-definite, a programming error in generating the matrix could easily destroy either of those properties.
  • A programming error occurred in calling the routine, of the kind listed in section 7.3.



Wrong Results

  

Wrong results from ScaLAPACK routines are most often caused by incorrect usage. It is also possible that wrong results are caused by a bug outside of ScaLAPACK, in the compiler or in one of the library routines, such as the BLAS, the BLACS, or the underlying message-passing layer, that are linked with ScaLAPACK. Test suites are available for ScaLAPACK, the PBLAS, the BLACS, and the BLAS. The ScaLAPACK installation guide [24] or the BLACS installation guide should be consulted for descriptions of the tests and for advice on resolving problems.

A list of known problems, compiler errors, and bugs in ScaLAPACK routines is maintained on netlib; see Chapter 1.

Users who suspect they have found a new bug in a ScaLAPACK routine are encouraged to report it promptly to the developers as directed in Chapter 1. The bug report should include a test case, a description of the problem and expected results, and the actions, if any, that the user has already taken to fix the bug.




Error Handling in the PBLAS

 

If a PBLAS routine is called with an invalid value for any of its arguments, it must report the fact and terminate the execution of the program. In the model implementation, each routine, on detecting an error, calls a common error-handling PBLAS routine, passing to it the current BLACS context, the name of the routine, and the number of the first argument that is in error. If an error is detected in the i-th entry of a descriptor array, which is the j-th argument in the parameter list, the number passed to the PBLAS error-handling routine has been arbitrarily chosen to be 100*i+j. This allows the user to distinguish an error in a descriptor entry from an error in a scalar argument. For efficiency, the PBLAS routines perform only a local validity check of their argument list. If an error is detected in at least one process of the current context, program execution is stopped.

A global validity check of the input arguments passed to a PBLAS routine must be performed in the higher-level calling procedure. To understand the need for and cost of global checking, as well as the reason this type of checking is not performed in the PBLAS, consider the following example. Suppose the value of a global input argument is legal but differs from one process to another. The results are unpredictable. To detect this kind of error situation, a synchronization point would be necessary, which may result in a significant performance degradation. Since every process must call the same routine to perform the desired operation successfully, it is natural and safe to restrict somewhat the amount of checking performed in the PBLAS routines.
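The failure mode described above — a global argument that is legal on every process individually but differs between processes — can be illustrated with a toy sketch (plain Python lists standing in for the processes of a context; no BLACS calls are involved):

```python
def local_check(n):
    """Each process alone can only verify that its own copy of N is legal."""
    return n >= 0

def global_check(values):
    """A global check must compare the copies across processes, which
    in a real program requires communication (a synchronization point)."""
    return all(local_check(n) for n in values) and len(set(values)) == 1

# Four processes each pass their local check, yet disagree on N:
n_per_process = [100, 100, 101, 100]
assert all(local_check(n) for n in n_per_process)  # PBLAS-style local check passes
assert not global_check(n_per_process)             # only a global check catches it
```

This is exactly the trade-off the PBLAS make: the cheap local check catches illegal values, while detecting cross-process disagreement is left to the caller.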

Specialized implementations may call system-specific exception-handling facilities, either via an auxiliary routine for error handling or directly from the routine. In addition, the testing programs can take advantage of this exception handling mechanism by simulating specific erroneous input argument lists and then verifying that particular errors are correctly detected.

For complete details on the specification of all routines, please refer to [26]. Appendix D.2 contains the Quick Reference Guide to the PBLAS. An html version of this Quick Reference Guide, along with the leading comments from each of the routines, is available on the ScaLAPACK homepage.



Error Handling in the BLACS

 

This section describes the BLACS error-handling features. The BLACS error-handling behavior may be changed at compile time by using the C preprocessor macro BlacsDebugLvl. A call to BLACS_GET (see  [54] for details) will help determine what debug level the BLACS are using.

If the BLACS are compiled with a BLACS debug level of 0, little error checking is performed. A few critical items will be checked (for instance, BLACS_GRIDINIT will still not allow the user to allocate a process grid with more processes than there are available), but for performance reasons, the BLACS will not check most of the parameters.

It is therefore highly recommended that users link their code to a BLACS library compiled with debug level 1 while debugging their code. BLACS debug level 1 mainly does parameter checking. A few other services are also provided. For instance, users will be warned if a process sends a message to itself. Having a process send to itself is legal, but it displays poor performance and requires enough buffer space that it can occasionally cause hangs for large messages. The BLACS therefore issue a warning when this behavior is detected.

Many times, the debug level 0 code will simply hang, leaving the developer without any clue as to what has gone wrong. This may be caused, for instance, by trying to receive from a process that is not in the current context. The debug level 1 BLACS can detect this type of user error, and issue a (we hope helpful) message.

The BLACS issue three types of messages:

  1. BLACS warning: BLACS detect risky behavior, but attempt to correct or ignore. Warning message is printed, and execution proceeds.
  2. BLACS error: BLACS detects an error, prints an error message, and kills the machine via a call to BLACS_ABORT.
  3. System error: The BLACS receive an error message from the underlying system, which is then passed on to the user, and the BLACS kills the machine.




BLACS Warning and Error Messages

All BLACS warning messages are printed by the internal routine BlacsWarn, and all BLACS error messages are printed by the internal routine BlacsErr. The only real difference between BlacsWarn and BlacsErr is that BlacsErr calls BLACS_ABORT after the message is printed.

With these central routines handling BLACS error messages, it should be relatively easy for the programmer to modify error handling if the default routines are not adequate for his needs. One particularly annoying problem is that on many systems a print to the screen takes a long time to finish. BlacsErr may then kill the machine before the print reaches the screen, and the error message is lost. In this case, the user may wish to make BlacsErr wait before killing, or not kill at all, for instance.

BLACS warning messages have the following form:

BLACS WARNING '<explanation string>'
from {<p>,<q>}, pnum=<pnum>, Contxt=<ictxt>, on line <#> of file '<fname>'.

BLACS error messages have the following form:

BLACS ERROR '<explanation string>'
from {<p>,<q>}, pnum=<pnum>, Contxt=<ictxt>, on line <#> of file '<fname>'.

The meanings of these parameters are as follows:

  • explanation string: The message that should help the user determine what is wrong. For example, on an incorrect call to BLACS_GRIDINIT, the user might get:
    Process 0 had 2 x 4 grid; correct is 1 x 4.
  • {p, q}: The row and column process grid coordinates of the process issuing the warning/error.
  • pnum: The process number returned in the first argument of BLACS_PINFO.
  • ictxt: The integer context handle. Please note that this value is not the same across all processes. For instance, process {0, 0} may have ictxt = 0 and process {0, 1} have ictxt = 1 for the same context. However, the pnum and ictxt together provide an unambiguous process/context identifier.
  • #: The line number within file fname at which the warning/error was issued.
  • fname: The file name where the routine that issued the warning/error is located.

Not all of this information may be available at the time an error or warning is issued. For instance, if the error occurs before the creation of the grid, the process grid coordinates will be unavailable. For any value that the BLACS cannot figure out, a -1 is printed to indicate that the value is unknown.

Examples of these BLACS error messages can be found on the BLACS homepage
(http://www.netlib.org/blacs/index.html) or in ``A User's Guide to the BLACS'' [54].


scalapack-doc-1.5/html/slug/node157.html0100644000056400000620000000476406336113773017547 0ustar pfrauenfstaff System Error Messages next up previous contents index
Next: Poor Performance Up: Troubleshooting Previous: BLACS Warning and Error

System Error Messages

Occasionally, ScaLAPACK will receive an error message from the underlying system. When this happens, the BLACS print the system error message and exit. Since these messages come from the underlying system, their form will necessarily vary depending on which BLACS version is being used, and the user may need to consult vendor documentation or on-line manpages describing system error messages in order to understand them. For example, if the PVM BLACS are being used, a PVM error number will be returned; the PVM quick reference guide could then be consulted to translate the error number into an understandable error message.

Examples of system error messages can be found on the BLACS homepage
(http://www.netlib.org/blacs/index.html) or in ``A User's Guide to the BLACS'' [54].



scalapack-doc-1.5/html/slug/node158.html0100644000056400000620000001401206336113774017534 0ustar pfrauenfstaff Poor Performance next up previous contents index
Next: Index of ScaLAPACK Routines Up: Troubleshooting Previous: System Error Messages

Poor Performance

ScaLAPACK ultimately relies on an efficient implementation of the BLAS and on the data distribution for load balance. Refer to Chapter 5.

To avoid poor performance from ScaLAPACK routines, note the following recommendations:

BLAS:
One should use machine-specific optimized BLAS if they are available. Many manufacturers and research institutions have developed, or are developing, efficient versions of the BLAS for particular machines. The BLAS enable LAPACK and ScaLAPACK routines to achieve high performance with transportable software. Users are urged to determine whether such an implementation of the BLAS exists for their platform. When such an optimized implementation of the BLAS is available, it should be used to ensure optimal performance. If such a machine-specific implementation of the BLAS does not exist for a particular platform, one should consider installing a publicly available set of BLAS that requires only an efficient implementation of the matrix-matrix multiply BLAS routine xGEMM. Examples of such implementations are [35, 90]. A machine-specific and efficient implementation of the routine GEMM can be automatically generated by publicly available software such as [16]. Although a reference implementation of the Fortran 77 BLAS is available from the blas directory on netlib, these routines are not expected to perform as well as a specially tuned implementation on most high-performance computers -- on some machines they may give much worse performance -- but they allow users to run LAPACK and ScaLAPACK software on machines that do not offer any other implementation of the BLAS.

BLACS:
With the few exceptions mentioned in section 5.2.3, the performance achieved by the BLACS should be close to that of the underlying message-passing library they call. Since publicly available implementations of the BLACS exist for a range of native message-passing libraries, such as NX for the Intel supercomputers and MPL for the IBM SP series, as well as for more generic interfaces such as PVM and MPI, users should select the BLACS implementation that is based on the most efficient message-passing library available. Some vendors, such as Cray and IBM, supply an optimized implementation of the BLACS for their systems. Users are urged to rely on these BLACS libraries whenever possible.

LWORK ≥ WORK(1):
In some ScaLAPACK eigenvalue routines, such as the symmetric eigenproblems (PxSYEV and PxSYEVX/PxHEEVX) and the generalized symmetric eigenproblem (PxSYGVX/PxHEGVX), a larger value of LWORK can guarantee the orthogonality of the returned eigenvectors at the risk of potentially degraded performance of the algorithm. The minimum amount of workspace required is returned in the first element of the work array, but a larger amount of workspace can allow for additional orthogonalization if desired by the user. Refer to section 5.3.6 and the leading comments of the source code for complete details.
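The workspace convention above can be sketched as follows (illustrative only, and not taken from the Guide; PDSYEV is just one of the routines concerned, and the distributed arrays and descriptors are assumed to be set up already):

```fortran
*     Illustrative only: DESCA, DESCZ, A, W, Z and WORK are assumed to
*     be set up already.  On exit, WORK( 1 ) holds the minimum
*     workspace; supplying an LWORK larger than this allows additional
*     orthogonalization of the returned eigenvectors, possibly at some
*     cost in speed.
      CALL PDSYEV( 'V', 'U', N, A, 1, 1, DESCA, W, Z, 1, 1, DESCZ,
     $             WORK, LWORK, INFO )
      MINWRK = INT( WORK( 1 ) )
```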


scalapack-doc-1.5/html/slug/node159.html0100644000056400000620000000510006336113774017533 0ustar pfrauenfstaff Index of ScaLAPACK Routines next up previous contents index
Next: Index of Driver and Up: Guide Previous: Poor Performance

Index of ScaLAPACK Routines

 

A separate index is provided for each of the following classifications of routines: driver and computational routines, auxiliary routines, and matrix redistribution/copy routines.





scalapack-doc-1.5/html/slug/node15.html0100644000056400000620000000577706336113643017461 0ustar pfrauenfstaff BLACS next up previous contents index
Next: Efficiency and Portability Up: Software Components Previous: PBLAS

BLACS

 

The BLACS (Basic Linear Algebra Communication Subprograms) [50, 54] are a message-passing library designed for linear algebra. The computational model consists of a one- or two-dimensional process grid, where each process stores pieces of the matrices and vectors. The BLACS include synchronous send/receive routines to communicate a matrix or submatrix from one process to another, to broadcast submatrices to many processes, or to compute global reductions (sums, maxima and minima). There are also routines to construct, change, or query the process grid. Since several ScaLAPACK algorithms require broadcasts or reductions among different subsets of processes, the BLACS permit a process to be a member of several overlapping or disjoint process grids, each one labeled by a context. Some message-passing systems, such as MPI [64, 110], also include this context  concept; MPI calls this a communicator . The BLACS provide facilities for safe inter-operation of system contexts and BLACS contexts. Further details of the BLACS can be found in [54]. An important aim of the BLACS is to provide a portable, linear algebra specific layer for communication.
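As an illustrative sketch (not part of this page), a typical BLACS start-up creates a process grid within a context and obtains each process's coordinates; NPROW and NPCOL are chosen by the caller:

```fortran
*     Illustrative BLACS start-up for an NPROW-by-NPCOL process grid.
      CALL BLACS_PINFO( IAM, NPROCS )
*     Obtain the default system context, then create the grid in it.
      CALL BLACS_GET( -1, 0, ICTXT )
      CALL BLACS_GRIDINIT( ICTXT, 'Row-major', NPROW, NPCOL )
      CALL BLACS_GRIDINFO( ICTXT, NPROW, NPCOL, MYROW, MYCOL )
*     ... computation and communication within context ICTXT ...
      CALL BLACS_GRIDEXIT( ICTXT )
      CALL BLACS_EXIT( 0 )
```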



scalapack-doc-1.5/html/slug/node160.html0100644000056400000620000000335506336113775017536 0ustar pfrauenfstaff Index of Driver and Computational Routines next up previous contents index
Next: Notes Up: Index of ScaLAPACK Routines Previous: Index of ScaLAPACK Routines

Index of Driver and Computational Routines

 

scalapack-doc-1.5/html/slug/node161.html0100644000056400000620000000663106336113775017537 0ustar pfrauenfstaff Notes next up previous contents index
Next: Index of Auxiliary Routines Up: Index of ScaLAPACK Routines Previous: Index of Driver and

Notes

  1. This index   lists related pairs of real and complex routines together, for example, PSGETRF and PCGETRF.
  2. Driver routines are listed in bold type, for example, PSGESV and PCGESV.
  3. Routines are listed in alphanumeric order of the real (single precision) routine name (which always begins with PS-). (See section 3.1.3 for details of the ScaLAPACK naming scheme.)
  4. Double precision routines are not listed here; they have names beginning with PD- instead of PS-, or PZ- instead of PC-.
  5. This index gives only a brief description of the purpose of each routine. For a precise description, consult the Specifications in Part ii, where the routines appear in the same order as here.
  6. The text of the descriptions applies to both real and complex routines, except where alternative words or phrases are indicated, for example, ``symmetric/Hermitian'', ``orthogonal/unitary'', or ``quasi-triangular/triangular''. For the real routines A^H is equivalent to A^T. (The same convention is used in Part ii.)
  7. A few routines for real matrices have no complex equivalent (for example, PSSTEBZ).





scalapack-doc-1.5/html/slug/node162.html0100644000056400000620000000325706336113776017542 0ustar pfrauenfstaff Index of Auxiliary Routines next up previous contents index
Next: Notes Up: Index of ScaLAPACK Routines Previous: Notes

Index of Auxiliary Routines

 



scalapack-doc-1.5/html/slug/node163.html0100644000056400000620000000606606336113776017544 0ustar pfrauenfstaff Notes next up previous contents index
Next: Matrix Redistribution/Copy Routines Up: Index of ScaLAPACK Routines Previous: Index of Auxiliary Routines

Notes

  1. This index  lists related pairs of real and complex routines together.
  2. Routines are listed in alphanumeric order of the real (single precision) routine name (which always begins with PS-). (See section 3.1.3 for details of the ScaLAPACK naming scheme.)
  3. A few complex routines have no real equivalents, and they are listed first.
  4. Double-precision routines are not listed here; they have names beginning with PD- instead of PS-, or PZ- instead of PC-. The only exceptions to this simple rule are that the double-precision versions of PCMAX1, PSCSUM1, and PCSRSCL are named PZMAX1, PDZSUM1, and PZDRSCL.
  5. A few routines in the list have names that are independent of data type: PXERBLA.
  6. This index gives only a brief description of the purpose of each routine. For a precise description, consult the leading comments in the code, which have been written in the same style as for the driver and computational routines.





scalapack-doc-1.5/html/slug/node164.html0100644000056400000620000001337206336113777017544 0ustar pfrauenfstaff Matrix Redistribution/Copy Routines next up previous contents index
Next: Fortran Interface Up: Index of ScaLAPACK Routines Previous: Notes

Matrix Redistribution/Copy Routines

 

ScaLAPACK provides two matrix redistribution/copy routines for each data type [107, 49, 106]. These routines provide a truly general copy from any block-cyclically distributed (sub)matrix to any other block-cyclically distributed (sub)matrix. They are the only routines in the entire ScaLAPACK library that provide inter-context operations. By this we mean that they can take a (sub)matrix in context A (distributed over process grid A) and copy it to a (sub)matrix in context B.

There need be no relation between the two operand (sub)matrices other than their global size and the fact that they are both legal block-cyclically distributed (sub)matrices. This means that they may be distributed across different process grids, have different block sizes and different matrix starting points, be contained in distributed matrices of different sizes, and so on.

Because of the generality of these routines, they may be used for many operations not usually associated with copy routines. For instance, they may be used to take a matrix on one process and distribute it across a process grid, or the reverse. If a supercomputer is grouped into a virtual parallel machine with a workstation, for instance, these routines can be used to move a matrix from the workstation to the supercomputer and back. In ScaLAPACK, these routines are called to copy matrices from a two-dimensional process grid to a one-dimensional process grid. They can also be used to redistribute matrices so that distributions providing maximal performance can be used by various component libraries. This list of uses is hardly exhaustive, but it gives an idea of the power of a general copy in parallel computing.

The two routine classifications are as follows:

  • P_GEMR2D copies between general, rectangular matrices.
  • P_TRMR2D copies between trapezoidal matrices.
                 

All routines are available in integer, single precision real, double precision real, single precision complex, and double precision complex. In the following sections, we describe only the single precision routines for each data type. Double precision routines are the same as their single precision counterparts, but they have names beginning with PD- instead of PS-, or PZ- instead of PC-.

Note that these routines require array descriptors of type DTYPE_ = 1.




scalapack-doc-1.5/html/slug/node165.html0100644000056400000620000000323306336114000017514 0ustar pfrauenfstaff Fortran Interface next up previous contents index
Next: PxGEMR2D Up: Matrix Redistribution/Copy Routines Previous: Matrix Redistribution/Copy Routines

Fortran Interface



scalapack-doc-1.5/html/slug/node166.html0100644000056400000620000000453506336114000017523 0ustar pfrauenfstaff PxGEMR2D next up previous contents index
Next: Purpose Up: Matrix Redistribution/Copy Routines Previous: Fortran Interface

PxGEMR2D

      SUBROUTINE PSGEMR2D( M, N, A, IA, JA, DESCA, B, IB, JB, DESCB,
     $                     ICTXT )
      INTEGER            IA, IB, ICTXT, JA, JB, M, N
      INTEGER            DESCA( * ), DESCB( * )
      REAL               A( * ), B( * )

      SUBROUTINE PCGEMR2D( M, N, A, IA, JA, DESCA, B, IB, JB, DESCB,
     $                     ICTXT )
      INTEGER            IA, IB, ICTXT, JA, JB, M, N
      INTEGER            DESCA( * ), DESCB( * )
      COMPLEX            A( * ), B( * )

      SUBROUTINE PIGEMR2D( M, N, A, IA, JA, DESCA, B, IB, JB, DESCB,
     $                     ICTXT )
      INTEGER            IA, IB, ICTXT, JA, JB, M, N
      INTEGER            DESCA( * ), DESCB( * )
      INTEGER            A( * ), B( * )



scalapack-doc-1.5/html/slug/node167.html0100644000056400000620000000503306336114001017517 0ustar pfrauenfstaff Purpose next up previous contents index
Next: Arguments Up: PxGEMR2D Previous: PxGEMR2D

Purpose

PxGEMR2D copies the indicated (sub)matrix of A to the indicated (sub)matrix of B. A and B can have arbitrary block-cyclic distributions: they can be distributed across different process grids, have different blocking factors, etc.

Particular care must be taken when the process grid over which matrix A is distributed (call this context A) is disjoint from the process grid over which matrix B is distributed (call this context B). The general rules for which parameters need to be set are:

  • All calling processes must have the correct M and N.
  • Processes in context A must correctly define all parameters describing A.
  • Processes in context B must correctly define all parameters describing B.
  • Processes that are not members of context A must pass DESCA(CTXT_) = -1 and need not set other parameters describing A.
  • Processes that are not members of context B must pass DESCB(CTXT_) = -1 and need not set other parameters describing B.
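The rules above can be sketched as follows (illustrative only; INCTXTA and INCTXTB are hypothetical logicals recording whether the calling process belongs to each grid, and CTXT_ is the second entry of a DTYPE_ = 1 descriptor):

```fortran
*     Every process in the union context ICTXT calls PDGEMR2D.
*     Processes outside a matrix's context flag this by setting that
*     descriptor's context entry (CTXT_, the 2nd entry) to -1.
      IF( .NOT.INCTXTA ) DESCA( 2 ) = -1
      IF( .NOT.INCTXTB ) DESCB( 2 ) = -1
      CALL PDGEMR2D( M, N, A, IA, JA, DESCA, B, IB, JB, DESCB, ICTXT )
```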


scalapack-doc-1.5/html/slug/node168.html0100644000056400000620000001263406336114001017525 0ustar pfrauenfstaff Arguments next up previous contents index
Next: PxTRMR2D Up: PxGEMR2D Previous: Purpose

Arguments

M
(global input) INTEGER
On entry, M specifies the number of rows of the (sub)matrix A to be copied. M ≥ 0.

N
(global input) INTEGER
On entry, N specifies the number of columns of the (sub)matrix A to be copied. N ≥ 0.

A
(local input) REAL/COMPLEX/INTEGER array, dimension (LLD_A,LOCc(JA+N-1))
On entry, the source matrix.

IA
(global input) INTEGER
On entry, the global row index of the beginning of the (sub)matrix of A to copy.
1 ≤ IA ≤ M_A - M + 1.

JA
(global input) INTEGER
On entry, the global column index of the beginning of the (sub)matrix of A to copy.
1 ≤ JA ≤ N_A - N + 1.

DESCA
(global and local input) INTEGER array, dimension (DLEN_)
The array descriptor for the distributed matrix A.
Only DESCA(DTYPE_)=1 is supported, and thus DLEN_ = 9.
If the calling process is not part of the context of A, then DESCA(CTXT_) must be equal to -1.

B
(local output) REAL/COMPLEX/INTEGER array, dimension (LLD_B,LOCc(JB+N-1))
On exit, the defined (sub)matrix is overwritten by the indicated (sub)matrix from A.

IB
(global input) INTEGER
On entry, the global row index of the beginning of the (sub)matrix of B that will be overwritten.
1 ≤ IB ≤ M_B - M + 1.

JB
(global input) INTEGER
On entry, the global column index of the beginning of the submatrix of B that will be overwritten.
1 ≤ JB ≤ N_B - N + 1.

DESCB
(global and local input) INTEGER array, dimension (DLEN_)
The array descriptor for the distributed matrix B.
Only DESCB(DTYPE_)=1 is supported, and thus DLEN_ = 9.
If the calling process is not part of the context of B, then DESCB(CTXT_) must be equal to -1.

ICTXT
(global input) INTEGER
The context encompassing at least the union of all processes in context A and context B. All processes in the context ICTXT must call this routine, even if they do not own a piece of either matrix.


scalapack-doc-1.5/html/slug/node169.html0100644000056400000620000000474506336114002017533 0ustar pfrauenfstaff PxTRMR2D next up previous contents index
Next: Purpose Up: Matrix Redistribution/Copy Routines Previous: Arguments

PxTRMR2D

      SUBROUTINE PSTRMR2D( UPLO, DIAG, M, N, A, IA, JA, DESCA, B, IB,
     $                     JB, DESCB, ICTXT )
      CHARACTER          DIAG, UPLO
      INTEGER            IA, IB, ICTXT, JA, JB, M, N
      INTEGER            DESCA( * ), DESCB( * )
      REAL               A( * ), B( * )

      SUBROUTINE PCTRMR2D( UPLO, DIAG, M, N, A, IA, JA, DESCA, B, IB,
     $                     JB, DESCB, ICTXT )
      CHARACTER          DIAG, UPLO
      INTEGER            IA, IB, ICTXT, JA, JB, M, N
      INTEGER            DESCA( * ), DESCB( * )
      COMPLEX            A( * ), B( * )

      SUBROUTINE PITRMR2D( UPLO, DIAG, M, N, A, IA, JA, DESCA, B, IB,
     $                     JB, DESCB, ICTXT )
      CHARACTER          DIAG, UPLO
      INTEGER            IA, IB, ICTXT, JA, JB, M, N
      INTEGER            DESCA( * ), DESCB( * )
      INTEGER            A( * ), B( * )



scalapack-doc-1.5/html/slug/node16.html0100644000056400000620000001466606336113644017460 0ustar pfrauenfstaff Efficiency and Portability next up previous contents index
Next: Availability Up: Essentials Previous: BLACS

Efficiency and Portability

  

ScaLAPACK is designed to give high efficiency  on MIMD    distributed memory concurrent supercomputers, such as the Intel Paragon, IBM SP series, and the Cray T3 series. In addition, the software is designed so that it can be used with clusters of workstations through a networked environment and with a heterogeneous  computing environment via PVM or MPI. Indeed, ScaLAPACK can run on any machine that supports either PVM  or MPI . See Chapter 5 for some examples of the performance achieved by ScaLAPACK routines.

The ScaLAPACK strategy for combining efficiency with portability  is to construct the software so that as much as possible of the computation is performed by calls to the Parallel Basic Linear Algebra Subprograms (PBLAS). The PBLAS [26, 104] perform global computation by relying on the Basic Linear Algebra Subprograms (BLAS) [93, 59, 57]  for local computation and the Basic Linear Algebra Communication Subprograms (BLACS) [54, 113] for communication.

The efficiency of ScaLAPACK software depends on the use of block-partitioned algorithms   and on efficient implementations of the BLAS  and the BLACS  being provided by computer vendors (and others) for their machines. Thus, the BLAS and the BLACS form a low-level interface between ScaLAPACK software and different machine architectures. Above this level, all of the ScaLAPACK software is portable.

The BLAS, PBLAS, and the BLACS are not, strictly speaking, part of ScaLAPACK. C code for the PBLAS is included in the ScaLAPACK distribution. Since the performance of the package depends upon the BLAS and the BLACS being implemented efficiently, we have not included this software with the ScaLAPACK distribution. A machine-specific implementation of the BLAS and the BLACS should be used. If a machine-optimized version of the BLAS is not available, a Fortran 77 reference implementation of the BLAS is available from netlib (see section 1.5). This code constitutes the ``model implementation'' [58, 56]. The model implementation of the BLAS is not expected to perform as well as a specially tuned implementation on most high-performance computers -- on some machines it may give much worse performance -- but it allows users to run ScaLAPACK codes on machines that do not offer any other implementation of the BLAS.

If a vendor-optimized version of the BLACS is not available for a specific architecture, efficiently ported versions of the BLACS are available on netlib. Currently, the BLACS have been efficiently ported to machine-specific message-passing libraries such as the IBM (MPL) and Intel (NX) message-passing libraries, as well as to more generic interfaces such as PVM and MPI. The BLACS overhead has been shown to be negligible in [54]. Refer to the URL for the blacs directory on netlib for more details:

http://www.netlib.org/blacs/index.html

scalapack-doc-1.5/html/slug/node170.html0100644000056400000620000000527006336114002017515 0ustar pfrauenfstaff Purpose next up previous contents index
Next: Arguments Up: PxTRMR2D Previous: PxTRMR2D

Purpose

PxTRMR2D copies the indicated (sub)matrix of A to the indicated (sub)matrix of B. A and B can have arbitrary block-cyclic distributions: they can be distributed across different process grids, have different blocking factors, etc.

The (sub)matrix to be copied is assumed to be trapezoidal. So only the upper or the lower part will be copied. The other part is unchanged.

Particular care must be taken when the process grid over which matrix A is distributed (call this context A) is disjoint from the process grid over which matrix B is distributed (call this context B). The general rules for which parameters need to be set are as follows:

  • All calling processes must have the correct M and N.
  • Processes in context A must correctly define all parameters describing A.
  • Processes in context B must correctly define all parameters describing B.
  • Processes that are not members of context A must pass DESCA(CTXT_) = -1 and need not set other parameters describing A.
  • Processes that are not members of context B must pass DESCB(CTXT_) = -1 and need not set other parameters describing B.


scalapack-doc-1.5/html/slug/node171.html0100644000056400000620000001414506336114003017520 0ustar pfrauenfstaff Arguments next up previous contents index
Next: C Interface Up: PxTRMR2D Previous: Purpose

Arguments

UPLO
(global input) CHARACTER*1
On entry, UPLO specifies whether we should copy the upper part or the lower part of the indicated (sub)matrix:
= 'U':
Copy the upper triangular part.
= 'L':
Copy the lower triangular part.

DIAG
(global input) CHARACTER*1
On entry, DIAG specifies whether we should copy the diagonal.
= 'U':
Do NOT copy the diagonal of the (sub)matrix.
= 'N':
Do copy the diagonal of the (sub)matrix.

M
(global input) INTEGER
On entry, M specifies the number of rows of the (sub)matrix to be copied. M ≥ 0.

N
(global input) INTEGER
On entry, N specifies the number of columns of the (sub)matrix to be copied. N ≥ 0.

A
(local input) REAL/COMPLEX/INTEGER array, dimension (LLD_A,LOCc(JA+N-1))
On entry, the source matrix.

IA
(global input) INTEGER
On entry, the global row index of the beginning of the (sub)matrix of A to copy.
1 ≤ IA ≤ M_A - M + 1.

JA
(global input) INTEGER
On entry, the global column index of the beginning of the (sub)matrix of A to copy.
1 ≤ JA ≤ N_A - N + 1.

DESCA
(global and local input) INTEGER array, dimension (DLEN_)
The array descriptor for the distributed matrix A.
Only DESCA(DTYPE_)=1 is supported, and thus DLEN_ = 9.
If the calling process is not part of the context of A, then DESCA(CTXT_) must be equal to -1.

B
(local output) REAL/COMPLEX/INTEGER array, dimension (LLD_B,LOCc(JB+N-1))
On exit, the defined (sub)matrix is overwritten by the indicated (sub)matrix from A.

IB
(global input) INTEGER
On entry, the global row index of the beginning of the (sub)matrix of B that will be overwritten.
1 ≤ IB ≤ M_B - M + 1.

JB
(global input) INTEGER
On entry, the global column index of the beginning of the submatrix of B that will be overwritten.
1 ≤ JB ≤ N_B - N + 1.

DESCB
(global and local input) INTEGER array, dimension (DLEN_)
The array descriptor for the distributed matrix B.
Only DESCB(DTYPE_)=1 is supported, and thus DLEN_ = 9.
If the calling process is not part of the context of B, then DESCB(CTXT_) must be equal to -1.

ICTXT
(global input) INTEGER
The context encompassing at least the union of all processes in context A and context B. All processes in the context ICTXT must call this routine, even if they do not own a piece of either matrix.


scalapack-doc-1.5/html/slug/node172.html0100644000056400000620000000407606336114004017524 0ustar pfrauenfstaff C Interface next up previous contents index
Next: Code Fragment Calling C Up: Matrix Redistribution/Copy Routines Previous: Arguments

C Interface

void Cp_gemr2d(int m, int n, TYPE *A, int IA, int JA, int *descA, 
               TYPE *B, int IB, int JB, int *descB, int gcontext);

void Cp_trmr2d(char *uplo, char *diag, int m, int n, TYPE *A, int IA,
               int JA, int *descA, TYPE *B, int IB, int JB, int *descB,
               int gcontext);
where _ and TYPE are as defined below:


      _        TYPE
      i        integer
      s        single precision real
      d        double precision real
      c        single precision complex
      z        double precision complex


scalapack-doc-1.5/html/slug/node173.html0100644000056400000620000000451506336114004017523 0ustar pfrauenfstaff Code Fragment Calling C Interface Cpdgemr2d next up previous contents index
Next: Call Conversion: LAPACK to Up: Matrix Redistribution/Copy Routines Previous: C Interface

Code Fragment Calling C Interface Cpdgemr2d

      /* scatter of the matrix A from 1 processor to a P*Q grid */       
         Cpdgemr2d(m, n,
                   Aseq, ia, ja, &descA_1x1,
                   Apar, ib, jb, &descA_PxQ, gcontext);     
       
      /* computation of the system solution */
         Cpdgesv( m, n, 
                  Apar , 1, 1, &descA_PxQ, ipiv , 
                  Cpar, 1, 1, &descC_PxQ, &info);
     
      /* gather of the solution matrix C on 1 processor */
         Cpdgemr2d(m, n,
                   Cpar, ia, ja, &descC_PxQ,
                   Cseq, ib, jb, &descC_1x1, gcontext);


scalapack-doc-1.5/html/slug/node174.html0100644000056400000620000000501306336114005017517 0ustar pfrauenfstaff Call Conversion: LAPACK to ScaLAPACK and BLAS to PBLAS next up previous contents index
Next: Translating BLAS-based programs to Up: Guide Previous: Code Fragment Calling C

Call Conversion: LAPACK to ScaLAPACK and BLAS to PBLAS

 

This section   is designed to assist people in converting serial programs based on calls to the BLAS and LAPACK to parallel programs using the PBLAS and ScaLAPACK.





scalapack-doc-1.5/html/slug/node175.html0100644000056400000620000000634606336114005017532 0ustar pfrauenfstaff Translating BLAS-based programs to the PBLAS next up previous contents index
Next: Translating LAPACK-based programs to Up: Call Conversion: LAPACK to Previous: Call Conversion: LAPACK to

Translating BLAS-based programs to the PBLAS

   

With a concrete understanding of array descriptors (Chapter 4), it is relatively simple to translate the serial version of a BLAS call into its parallel equivalent. Translating BLAS calls to PBLAS calls primarily consists of the following steps:

  • a `P' has to be inserted in front of the routine name,
  • the leading dimensions should be replaced by the global array descriptors, and
  • the global indices into the distributed matrices should be inserted as separate parameters in the calling sequence.
An example of translating a DGEMM call to a PDGEMM call is given below.
      CALL DGEMM( 'No transpose', 'No transpose', M-J-JB+1, N-J-JB+1,
     $            JB, -ONE, A( J+JB, J ), LDA, A( J, J+JB ), LDA, ONE,
     $            A( J+JB, J+JB ), LDA )
      ⇓
      CALL PDGEMM( 'No transpose', 'No transpose', M-J-JB+JA, N-J-JB+JA,
     $             JB, -ONE, A, J+JB, J, DESCA, A, J, J+JB, DESCA, ONE,
     $             A, J+JB, J+JB, DESCA )

This simple translation process considerably simplifies the implementation phase of linear algebra codes built on top of the BLAS.

The steps necessary to write a program to call a PBLAS routine are analogous to the steps presented in section 2.4.



scalapack-doc-1.5/html/slug/node176.html0100644000056400000620000000753706336114006017537 0ustar pfrauenfstaff Translating LAPACK-based programs to ScaLAPACK next up previous contents index
Next: Sequential LU Factorization Up: Call Conversion: LAPACK to Previous: Translating BLAS-based programs to

Translating LAPACK-based programs to ScaLAPACK

 

This section  demonstrates how sequential LAPACK-based programs are parallelized and converted to ScaLAPACK.

As with the BLAS conversion, it is relatively simple to translate the serial version of an LAPACK call into its parallel equivalent. Translating LAPACK calls to ScaLAPACK calls primarily consists of the following steps:

  • a `P' has to be inserted in front of the routine name,
  • the leading dimensions should be replaced by the global array descriptors, and
  • the global indices into the distributed matrices should be inserted as separate parameters in the calling sequence.

As an example of this translation process, let us consider the parallelization of the LAPACK driver routine DGESV, which solves a general system of linear equations. The calling sequence comparison for DGESV versus its ScaLAPACK equivalent, PDGESV, is presented below.

      CALL DGESV( N, NRHS, A( I, J ), LDA, IPIV, B( I, 1 ), LDB, INFO )
⇓
      CALL PDGESV( N, NRHS, A, I, J, DESCA, IPIV, B, I, 1, DESCB, INFO )

For a more complete example, let us consider parallelizing a serial LU factorization code, as demonstrated in sections B.2.1 and B.2.2. Note that the parallel routine assumes the existence of the auxiliary routines PDGETF2 (unblocked LU) and PDLASWP (parallel swap routine) in addition to the PBLAS. With this in mind, the serial and parallel versions are very similar since most of the details of the parallel implementation such as communication and synchronization are hidden at lower levels of the software.
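To see why the serial and parallel versions line up so closely, it helps to keep the blocked right-looking algorithm itself in mind: factor a panel, apply the row interchanges, solve a triangular block row, update the trailing submatrix. The following serial numpy sketch shows that structure (for illustration only; DGETRF in section B.2.1 is the actual reference, and the name `lu_blocked` and its return convention are choices of this sketch):

```python
import numpy as np

def lu_blocked(A, nb):
    """Blocked right-looking LU with partial pivoting.
    Returns the packed LU factors and the row permutation as an index list,
    so that (tril(LU,-1)+I) @ triu(LU) equals A[piv]."""
    LU = np.asarray(A, dtype=float).copy()
    m, n = LU.shape
    piv = list(range(m))
    for j in range(0, min(m, n), nb):
        jb = min(nb, min(m, n) - j)
        # Factor the panel LU[j:, j:j+jb] (the role of DGETF2), swapping
        # whole rows as pivots are found (the role of DLASWP).
        for k in range(j, j + jb):
            p = k + int(np.argmax(np.abs(LU[k:, k])))
            if p != k:
                LU[[k, p], :] = LU[[p, k], :]
                piv[k], piv[p] = piv[p], piv[k]
            LU[k+1:, k] /= LU[k, k]
            LU[k+1:, k+1:j+jb] -= np.outer(LU[k+1:, k], LU[k, k+1:j+jb])
        if j + jb < n:
            # Block row of U: unit lower triangular solve (the role of DTRSM).
            L11 = np.tril(LU[j:j+jb, j:j+jb], -1) + np.eye(jb)
            LU[j:j+jb, j+jb:] = np.linalg.solve(L11, LU[j:j+jb, j+jb:])
            if j + jb < m:
                # Rank-jb update of the trailing submatrix (the role of DGEMM).
                LU[j+jb:, j+jb:] -= LU[j+jb:, j:j+jb] @ LU[j:j+jb, j+jb:]
    return LU, piv
```

The parallel version replaces each kernel (DGETF2, DLASWP, DTRSM, DGEMM) with its P-prefixed equivalent on the distributed matrix; the loop structure is unchanged.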





Susan Blackford
Tue May 13 09:21:01 EDT 1997
scalapack-doc-1.5/html/slug/node177.html0100644000056400000620000000641206336114006017527 0ustar pfrauenfstaff Sequential LU Factorization next up previous contents index
Next: Parallel LU Factorization Up: Translating LAPACK-based programs to Previous: Translating LAPACK-based programs to

Sequential LU Factorization

 

      SUBROUTINE DGETRF( M, N, A, LDA, IPIV, INFO )
*
*  LU factorization of an M-by-N matrix A using partial pivoting with
*  row interchanges.
*
      INTEGER            INFO, LDA, M, N, IPIV( * )
      DOUBLE PRECISION   A( LDA, * )
*
      INTEGER            I, IINFO, J, JB, NB
      PARAMETER          ( NB = 64 )
      EXTERNAL           DGEMM, DGETF2, DLASWP, DTRSM
      INTRINSIC          MIN
*
      DO 20 J = 1, MIN(M,N), NB
         JB = MIN( MIN(M,N)-J+1, NB )
*
*        Factor diagonal block and test for exact singularity.
*
         CALL DGETF2( M-J+1, JB, A(J,J), LDA, IPIV(J), IINFO )
*
*        Adjust INFO and the pivot indices.
*
         IF( INFO.EQ.0 .AND. IINFO.GT.0 ) INFO = IINFO + J - 1
         DO 10 I = J, MIN(M,J+JB-1)
            IPIV(I) = J - 1 + IPIV(I)
   10    CONTINUE
*
*        Apply interchanges to columns 1:J-1 and J+JB:N.
*
         CALL DLASWP( J-1, A, LDA, J, J+JB-1, IPIV, 1 )
         IF( J+JB.LE.N ) THEN
            CALL DLASWP( N-J-JB+1, A(1,J+JB), LDA, J, J+JB-1, IPIV, 1 )
*
*           Compute block row of U and update trailing submatrix.
*
            CALL DTRSM( 'Left', 'Lower', 'No transpose', 'Unit', JB,
     $                  N-J-JB+1, 1.0D+0, A(J,J), LDA, A(J,J+JB), LDA )
            IF( J+JB.LE.M )
     $         CALL DGEMM( 'No transpose', 'No transpose', M-J-JB+1,
     $                     N-J-JB+1, JB, -1.0D+0, A(J+JB,J), LDA,
     $                     A(J,J+JB), LDA, 1.0D+0, A(J+JB,J+JB), LDA )
         END IF
   20 CONTINUE
      RETURN
*
      END



Susan Blackford
Tue May 13 09:21:01 EDT 1997
scalapack-doc-1.5/html/slug/node178.html0100644000056400000620000000734706336114007017541 0ustar pfrauenfstaff Parallel LU Factorization next up previous contents index
Next: Example Programs Up: Translating LAPACK-based programs to Previous: Sequential LU Factorization

Parallel LU Factorization

 

      SUBROUTINE PDGETRF( M, N, A, IA, JA, DESCA, IPIV, INFO )
*
      INTEGER            IA, INFO, JA, M, N, DESCA( * ), IPIV( * )
      DOUBLE PRECISION   A( * )
*
*  LU factorization of an M-by-N distributed matrix A(IA:IA+M-1,JA:JA+N-1)
*  using partial pivoting with row interchanges.
*
      INTEGER            I, IINFO, J, JB
      EXTERNAL           IGAMN2D, PDGEMM, PDGETF2, PDLASWP, PDTRSM
      INTRINSIC          MIN
*
      DO 10 J = JA, JA+MIN(M,N)-1, DESCA( NB_ )
         JB = MIN( MIN(M,N)-J+JA, DESCA( NB_ ) )
         I = IA + J - JA
*
*        Factor diagonal block and test for exact singularity.
*
         CALL PDGETF2( M-J+JA, JB, A, I, J, DESCA, IPIV, IINFO )
         IF( INFO.EQ.0 .AND. IINFO.GT.0 ) INFO = IINFO + J - JA
*
*        Apply interchanges to columns JA:J-JA and J+JB:JA+N-1.
*
         CALL PDLASWP( 'Forward', 'Rows', J-JA, A, IA, JA, DESCA,
     $                 I, I+JB-1, IPIV )
         IF( J-JA+JB+1.LE.N ) THEN
            CALL PDLASWP( 'Forward', 'Rows', N-J-JB+JA, A, IA, J+JB,
     $                    DESCA, I, I+JB-1, IPIV )
*
*           Compute block row of U and update trailing submatrix.
*
            CALL PDTRSM( 'Left', 'Lower', 'No transpose', 'Unit', JB,
     $                   N-J-JB+JA, 1.0D+0, A, I, J, DESCA, A, I, J+JB,
     $                   DESCA )
            IF( J-JA+JB+1.LE.M )
     $         CALL PDGEMM( 'No transpose', 'No transpose', M-J-JB+JA,
     $                      N-J-JB+JA, JB, -1.0D+0, A, I+JB, J, DESCA, A,
     $                      I, J+JB, DESCA, 1.0D+0, A, I+JB, J+JB, DESCA )
         END IF
   10 CONTINUE
      IF( INFO.EQ.0 ) INFO = MIN(M,N) + 1
      CALL IGAMN2D( ICTXT, 'Row', ' ', 1, 1, INFO, 1, I, J, -1, -1, MYCOL )
      IF( INFO.EQ.MIN(M,N)+1 ) INFO = 0
*
      RETURN
*
      END

The required steps to call a ScaLAPACK routine from a parallel program are demonstrated in Example Program #1 in Chapter 2 and explained in section 2.3.



Susan Blackford
Tue May 13 09:21:01 EDT 1997
scalapack-doc-1.5/html/slug/node179.html0100644000056400000620000000467606336114007017544 0ustar pfrauenfstaff Example Programs next up previous contents index
Next: Example Program #2 Up: Guide Previous: Parallel LU Factorization

Example Programs

 

This Appendix provides additional example programs. Section C.1 presents a more memory-efficient program calling the ScaLAPACK driver routine PDGESV. Section C.2 presents an HPF program calling the HPF interface to ScaLAPACK. These example programs are available from the respective URLs:

http://www.netlib.org/scalapack/examples/scaex.shar
http://www.netlib.org/scalapack/examples/sample_hpf_gesv.f




Susan Blackford
Tue May 13 09:21:01 EDT 1997
scalapack-doc-1.5/html/slug/node17.html0100644000056400000620000000710706336113645017452 0ustar pfrauenfstaff Availability next up previous contents index
Next: Commercial Use Up: Essentials Previous: Efficiency and Portability

Availability

  

The complete ScaLAPACK package is freely available on netlib [60, 22, 23]  and can be obtained via the World Wide Web or anonymous ftp.

The ScaLAPACK homepage  can be accessed on the World Wide Web via the URL address:

http://www.netlib.org/scalapack/index.html

Prebuilt ScaLAPACK and BLACS libraries     are available on netlib for a variety of architectures. Refer to the following URLs:

http://www.netlib.org/scalapack/archives/index.html
http://www.netlib.org/blacs/archives/index.html

At the time of this writing, the e-mail addresses for netlib [22, 23] are

netlib@www.netlib.org
netlib@research.bell-labs.com
Both repositories provide electronic mail and anonymous ftp service (the netlib@www.netlib.org site is available via anonymous ftp to ftp.netlib.org). The URL for netlib is http://www.netlib.org/  .

The following sites are mirror repositories :

[table of netlib mirror repositories]

General information about ScaLAPACK (and the PBLAS) can be obtained by contacting any of the URLs listed above. If additional information is desired, feel free to contact the authors at scalapack@cs.utk.edu.

The complete ScaLAPACK package, including test code and timing programs in four different data types, constitutes some 500,000 lines of Fortran and C source and comments.



Susan Blackford
Tue May 13 09:21:01 EDT 1997
scalapack-doc-1.5/html/slug/node180.html0100644000056400000620000007051006336114010017514 0ustar pfrauenfstaff Example Program #2 next up previous contents index
Next: HPF Interface to ScaLAPACK Up: Example Programs Previous: Example Programs

Example Program #2

   

In Chapter 2, we presented an example program using ScaLAPACK. Here we present a second example--a more flexible and memory-efficient program to solve a system of linear equations using the ScaLAPACK driver routine PDGESV. This program reads the input matrices from a file and distributes them to the processes in the grid. After calling the ScaLAPACK routine, it writes the solution matrix to a file. The input data files for the program are SCAEX.dat, SCAEXMAT.dat, and SCAEXRHS.dat.

This program is also available in the scalapack directory on netlib
(http://www.netlib.org/scalapack/examples/scaex.shar).
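The local array extents NP, NQ, and NQRHS in the program below come from the tool routine NUMROC, which counts how many rows or columns of a block-cyclically distributed dimension land on a given process. A Python sketch of that computation (following the standard NUMROC algorithm; the Python form is for illustration only):

```python
def numroc(n, nb, iproc, isrcproc, nprocs):
    """Number of rows or columns of an n-element dimension, distributed
    block-cyclically with block size nb over nprocs processes, that are
    owned by process iproc (first block on process isrcproc).
    A serial sketch of the ScaLAPACK tool routine NUMROC."""
    mydist = (nprocs + iproc - isrcproc) % nprocs  # distance from source process
    nblocks = n // nb                              # number of full blocks
    num = (nblocks // nprocs) * nb                 # blocks owned in full rounds
    extrablks = nblocks % nprocs                   # leftover full blocks
    if mydist < extrablks:
        num += nb                                  # one extra full block
    elif mydist == extrablks:
        num += n % nb                              # the final partial block
    return num
```

For the example's N = 6 with NB = 2 on a 2 x 2 grid, each process row owns either 4 or 2 of the 6 matrix rows, and the extents over all processes always sum to N.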

SCAEX.dat is:

'ScaLAPACK Example Program 2'
'May 1997'
'SCAEX.out'             output file name (if any)
6                       device out
6                       value of N
1                       value of NRHS
2                       values of NB
2                       values of NPROW
2                       values of NPCOL

SCAEXMAT.dat is:

6 6
 6.0000D+0
 3.0000D+0
 0.0000D+0
 0.0000D+0
 3.0000D+0
 0.0000D+0
 0.0000D+0
-3.0000D+0
-1.0000D+0
 1.0000D+0
 1.0000D+0
 0.0000D+0
-1.0000D+0
 0.0000D+0
11.0000D+0
 0.0000D+0
 0.0000D+0
10.0000D+0
 0.0000D+0
 0.0000D+0
 0.0000D+0
-11.0000D+0
 0.0000D+0
 0.0000D+0
 0.0000D+0
 0.0000D+0
 0.0000D+0
 2.0000D+0
-4.0000D+0
 0.0000D+0
 0.0000D+0
 0.0000D+0
 0.0000D+0
 8.0000D+0
 0.0000D+0
-10.0000D+0

SCAEXRHS.dat is:

   6  1
     72.000000000000000000D+00
      0.000000000000000000D+00
    160.000000000000000000D+00
      0.000000000000000000D+00
      0.000000000000000000D+00
      0.000000000000000000D+00

      PROGRAM PDSCAEX
*
*  -- ScaLAPACK example code --
*     University of Tennessee, Knoxville, Oak Ridge National Laboratory,
*     and University of California, Berkeley.
*
*     Written by Antoine Petitet, (petitet@cs.utk.edu)
*
*     This program solves a linear system by calling the ScaLAPACK 
*     routine PDGESV. The input matrix and right-hand sides are
*     read from a file. The solution is written to a file.
*
*     .. Parameters ..
      INTEGER            DBLESZ, INTGSZ, MEMSIZ, TOTMEM
      PARAMETER          ( DBLESZ = 8, INTGSZ = 4, TOTMEM = 2000000,
     $                     MEMSIZ = TOTMEM / DBLESZ )
      INTEGER            BLOCK_CYCLIC_2D, CSRC_, CTXT_, DLEN_, DTYPE_,
     $                   LLD_, MB_, M_, NB_, N_, RSRC_
      PARAMETER          ( BLOCK_CYCLIC_2D = 1, DLEN_ = 9, DTYPE_ = 1,
     $                     CTXT_ = 2, M_ = 3, N_ = 4, MB_ = 5, NB_ = 6,
     $                     RSRC_ = 7, CSRC_ = 8, LLD_ = 9 )
      DOUBLE PRECISION   ONE
      PARAMETER          ( ONE = 1.0D+0 )
*     ..
*     .. Local Scalars ..
      CHARACTER*80       OUTFILE
      INTEGER            IAM, ICTXT, INFO, IPA, IPACPY, IPB, IPPIV, IPX,
     $                   IPW, LIPIV, MYCOL, MYROW, N, NB, NOUT, NPCOL,
     $                   NPROCS, NPROW, NP, NQ, NQRHS, NRHS, WORKSIZ
      DOUBLE PRECISION   ANORM, BNORM, EPS, XNORM, RESID
*     ..
*     .. Local Arrays ..
      INTEGER            DESCA( DLEN_ ), DESCB( DLEN_ ), DESCX( DLEN_ )
      DOUBLE PRECISION   MEM( MEMSIZ )
*     ..
*     .. External Subroutines ..
      EXTERNAL           BLACS_EXIT, BLACS_GET, BLACS_GRIDEXIT,
     $                   BLACS_GRIDINFO, BLACS_GRIDINIT, BLACS_PINFO,
     $                   DESCINIT, IGSUM2D, PDSCAEXINFO, PDGESV,
     $                   PDGEMM, PDLACPY, PDLAPRNT, PDLAREAD, PDLAWRITE
*     ..
*     .. External Functions ..
      INTEGER            ICEIL, NUMROC
      DOUBLE PRECISION   PDLAMCH, PDLANGE
      EXTERNAL           ICEIL, NUMROC, PDLAMCH, PDLANGE
*     ..
*     .. Intrinsic Functions ..
      INTRINSIC          DBLE, MAX
*     ..
*     .. Executable Statements ..
*
*     Get starting information
*
      CALL BLACS_PINFO( IAM, NPROCS )
      CALL PDSCAEXINFO( OUTFILE, NOUT, N, NRHS, NB, NPROW, NPCOL, MEM,
     $                  IAM, NPROCS )
*
*     Define process grid
*
      CALL BLACS_GET( -1, 0, ICTXT )
      CALL BLACS_GRIDINIT( ICTXT, 'Row-major', NPROW, NPCOL )
      CALL BLACS_GRIDINFO( ICTXT, NPROW, NPCOL, MYROW, MYCOL )
*
*     Go to bottom of process grid loop if this case doesn't use my
*     process
*
      IF( MYROW.GE.NPROW .OR. MYCOL.GE.NPCOL )
     $   GO TO 20
*
      NP    = NUMROC( N, NB, MYROW, 0, NPROW )
      NQ    = NUMROC( N, NB, MYCOL, 0, NPCOL )
      NQRHS = NUMROC( NRHS, NB, MYCOL, 0, NPCOL )
*
*     Initialize the array descriptor for the matrix A and B
*
      CALL DESCINIT( DESCA, N, N, NB, NB, 0, 0, ICTXT, MAX( 1, NP ),
     $               INFO )
      CALL DESCINIT( DESCB, N, NRHS, NB, NB, 0, 0, ICTXT, MAX( 1, NP ),
     $               INFO )
      CALL DESCINIT( DESCX, N, NRHS, NB, NB, 0, 0, ICTXT, MAX( 1, NP ),
     $               INFO )
*
*     Assign pointers into MEM for SCALAPACK arrays, A is
*     allocated starting at position MEM( 1 )
*
      IPA = 1
      IPACPY = IPA + DESCA( LLD_ )*NQ
      IPB = IPACPY + DESCA( LLD_ )*NQ
      IPX = IPB + DESCB( LLD_ )*NQRHS
      IPPIV = IPX + DESCB( LLD_ )*NQRHS
      LIPIV = ICEIL( INTGSZ*( NP+NB ), DBLESZ )
      IPW = IPPIV + MAX( NP, LIPIV )
*
      WORKSIZ = MAX( NB, NP )
*
*     Check for adequate memory for problem size
*
      INFO = 0
      IF( IPW+WORKSIZ.GT.MEMSIZ ) THEN
         IF( IAM.EQ.0 )
     $      WRITE( NOUT, FMT = 9998 ) 'test', ( IPW+WORKSIZ )*DBLESZ
         INFO = 1
      END IF
*
*     Check all processes for an error
*
      CALL IGSUM2D( ICTXT, 'All', ' ', 1, 1, INFO, 1, -1, 0 )
      IF( INFO.GT.0 ) THEN
         IF( IAM.EQ.0 )
     $      WRITE( NOUT, FMT = 9999 ) 'MEMORY'
         GO TO 10
      END IF
*
*     Read from file and distribute matrices A and B
*
      CALL PDLAREAD( 'SCAEXMAT.dat', MEM( IPA ), DESCA, 0, 0,
     $               MEM( IPW ) )
      CALL PDLAREAD( 'SCAEXRHS.dat', MEM( IPB ), DESCB, 0, 0,
     $               MEM( IPW ) )
*
*     Make a copy of A and the rhs for checking purposes
*
      CALL PDLACPY( 'All', N, N, MEM( IPA ), 1, 1, DESCA,
     $              MEM( IPACPY ), 1, 1, DESCA )
      CALL PDLACPY( 'All', N, NRHS, MEM( IPB ), 1, 1, DESCB,
     $              MEM( IPX ), 1, 1, DESCX )
*
**********************************************************************
*     Call ScaLAPACK PDGESV routine
**********************************************************************
*
      IF( IAM.EQ.0 ) THEN
         WRITE( NOUT, FMT = * )
         WRITE( NOUT, FMT = * )
     $         '***********************************************'
         WRITE( NOUT, FMT = * )
     $         'Example of ScaLAPACK routine call: (PDGESV)'
         WRITE( NOUT, FMT = * )
     $         '***********************************************'
         WRITE( NOUT, FMT = * )
         WRITE( NOUT, FMT = * ) 'A * X = B, Matrix A:'
         WRITE( NOUT, FMT = * )
      END IF
      CALL PDLAPRNT( N, N, MEM( IPA ), 1, 1, DESCA, 0, 0,
     $               'A', NOUT, MEM( IPW ) )
      IF( IAM.EQ.0 ) THEN
         WRITE( NOUT, FMT = * )
         WRITE( NOUT, FMT = * ) 'Matrix B:'
         WRITE( NOUT, FMT = * )
      END IF
      CALL PDLAPRNT( N, NRHS, MEM( IPB ), 1, 1, DESCB, 0, 0,
     $               'B', NOUT, MEM( IPW ) )
*
      CALL PDGESV( N, NRHS, MEM( IPA ), 1, 1, DESCA, MEM( IPPIV ),
     $             MEM( IPB ), 1, 1, DESCB, INFO )
*
      IF( MYROW.EQ.0 .AND. MYCOL.EQ.0 ) THEN
         WRITE( NOUT, FMT = * )
         WRITE( NOUT, FMT = * ) 'INFO code returned by PDGESV = ', INFO
         WRITE( NOUT, FMT = * )
         WRITE( NOUT, FMT = * ) 'Matrix X = A^{-1} * B'
         WRITE( NOUT, FMT = * )
      END IF
      CALL PDLAPRNT( N, NRHS, MEM( IPB ), 1, 1, DESCB, 0, 0, 'X', NOUT,
     $               MEM( IPW ) )
      CALL PDLAWRITE( 'SCAEXSOL.dat', N, NRHS, MEM( IPB ), 1, 1, DESCB,
     $                0, 0, MEM( IPW ) )
*
*     Compute residual ||A * X  - B|| / ( ||X|| * ||A|| * eps * N )
*
      EPS = PDLAMCH( ICTXT, 'Epsilon' )
      ANORM = PDLANGE( 'I', N, N, MEM( IPA ), 1, 1, DESCA, MEM( IPW ) )
      BNORM = PDLANGE( 'I', N, NRHS, MEM( IPB ), 1, 1, DESCB,
     $                 MEM( IPW ) )
      CALL PDGEMM( 'No transpose', 'No transpose', N, NRHS, N, ONE,
     $             MEM( IPACPY ), 1, 1, DESCA, MEM( IPB ), 1, 1, DESCB,
     $             -ONE, MEM( IPX ), 1, 1, DESCX )
      XNORM = PDLANGE( 'I', N, NRHS, MEM( IPX ), 1, 1, DESCX,
     $                 MEM( IPW ) )
      RESID = XNORM / ( ANORM * BNORM * EPS * DBLE( N ) )
*
      IF( MYROW.EQ.0 .AND. MYCOL.EQ.0 ) THEN
         WRITE( NOUT, FMT = * )
         WRITE( NOUT, FMT = * )
     $     '||A * X  - B|| / ( ||X|| * ||A|| * eps * N ) = ', RESID
         WRITE( NOUT, FMT = * )
         IF( RESID.LT.10.0D+0 ) THEN
            WRITE( NOUT, FMT = * ) 'The answer is correct.'
         ELSE
            WRITE( NOUT, FMT = * ) 'The answer is suspicious.'
         END IF
      END IF
*
   10 CONTINUE
*
      CALL BLACS_GRIDEXIT( ICTXT )
*
   20 CONTINUE
*
*     Print ending messages and close output file
*
      IF( IAM.EQ.0 ) THEN
         WRITE( NOUT, FMT = * )
         WRITE( NOUT, FMT = * )
         WRITE( NOUT, FMT = 9997 )
         WRITE( NOUT, FMT = * )
         IF( NOUT.NE.6 .AND. NOUT.NE.0 )
     $      CLOSE ( NOUT )
      END IF
*
      CALL BLACS_EXIT( 0 )
*
 9999 FORMAT( 'Bad ', A6, ' parameters: going on to next test case.' )
 9998 FORMAT( 'Unable to perform ', A, ': need TOTMEM of at least',
     $        I11 )
 9997 FORMAT( 'END OF TESTS.' )
*
      STOP
*
*     End of PDSCAEX
*
      END
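The acceptance test at the end of the driver scales the residual by ||A||, ||X||, machine epsilon, and N, and accepts the solution when the ratio is below 10. A serial numpy sketch of the same check (illustrative only; the driver computes it on the distributed data with PDLANGE, PDGEMM, and PDLAMCH, and the function name here is a choice of this sketch):

```python
import numpy as np

def scaled_residual(A, X, B):
    """||A*X - B||_inf / (||A||_inf * ||X||_inf * eps * N), the acceptance
    ratio computed by the example driver, in serial numpy form."""
    eps = np.finfo(float).eps
    n = A.shape[0]
    rnorm = np.linalg.norm(A @ X - B, np.inf)
    return rnorm / (np.linalg.norm(A, np.inf)
                    * np.linalg.norm(X, np.inf) * eps * n)
```

Because Gaussian elimination with partial pivoting is backward stable, this ratio is O(1) for a correctly computed solution even when A is ill conditioned, which is why a fixed threshold of 10 is a meaningful test.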

      SUBROUTINE PDSCAEXINFO( SUMMRY, NOUT, N, NRHS, NB, NPROW, NPCOL,
     $                        WORK, IAM, NPROCS )
*
*  -- ScaLAPACK example code --
*     University of Tennessee, Knoxville, Oak Ridge National Laboratory,
*     and University of California, Berkeley.
*
*     Written by Antoine Petitet, (petitet@cs.utk.edu)
*
*     This subroutine reads the input parameters for the example program
*     from the file SCAEX.dat on process 0 and broadcasts them to all
*     other processes in the grid.
*
*     .. Scalar Arguments ..
      CHARACTER*( * )    SUMMRY
      INTEGER            IAM, N, NRHS, NB, NOUT, NPCOL, NPROCS, NPROW
*     ..
*     .. Array Arguments ..
      INTEGER            WORK( * )
*     ..
*
* ======================================================================
*
*     .. Parameters ..
      INTEGER            NIN
      PARAMETER          ( NIN = 11 )
*     ..
*     .. Local Scalars ..
      CHARACTER*79       USRINFO
      INTEGER            ICTXT
*     ..
*     .. External Subroutines ..
      EXTERNAL           BLACS_ABORT, BLACS_GET, BLACS_GRIDEXIT,
     $                   BLACS_GRIDINIT, BLACS_SETUP, IGEBR2D, IGEBS2D
*     ..
*     .. Intrinsic Functions ..
      INTRINSIC          MAX, MIN
*     ..
*     .. Executable Statements ..
*
*     Process 0 reads the input data, broadcasts to other processes and
*     writes needed information to NOUT
*
      IF( IAM.EQ.0 ) THEN
*
*        Open file and skip data file header
*
         OPEN( NIN, FILE='SCAEX.dat', STATUS='OLD' )
         READ( NIN, FMT = * ) SUMMRY
         SUMMRY = ' '
*
*        Read in user-supplied info about machine type, compiler, etc.
*
         READ( NIN, FMT = 9999 ) USRINFO
*
*        Read name and unit number for summary output file
*
         READ( NIN, FMT = * ) SUMMRY
         READ( NIN, FMT = * ) NOUT
         IF( NOUT.NE.0 .AND. NOUT.NE.6 )
     $      OPEN( NOUT, FILE = SUMMRY, STATUS = 'UNKNOWN' )
*
*        Read and check the parameter values for the tests.
*
*        Get matrix dimensions
*
         READ( NIN, FMT = * ) N
         READ( NIN, FMT = * ) NRHS
*
*        Get value of NB
*
         READ( NIN, FMT = * ) NB
*
*        Get grid shape
*
         READ( NIN, FMT = * ) NPROW
         READ( NIN, FMT = * ) NPCOL
*
*        Close input file
*
         CLOSE( NIN )
*
*        If underlying system needs additional set up, do it now
*
         IF( NPROCS.LT.1 ) THEN
            NPROCS = NPROW * NPCOL
            CALL BLACS_SETUP( IAM, NPROCS )
         END IF
*
*        Temporarily define blacs grid to include all processes so
*        information can be broadcast to all processes
*
         CALL BLACS_GET( -1, 0, ICTXT )
         CALL BLACS_GRIDINIT( ICTXT, 'Row-major', 1, NPROCS )
*
*        Pack information arrays and broadcast
*
         WORK( 1 ) = N
         WORK( 2 ) = NRHS
         WORK( 3 ) = NB
         WORK( 4 ) = NPROW
         WORK( 5 ) = NPCOL
         CALL IGEBS2D( ICTXT, 'All', ' ', 5, 1, WORK, 5 )
*
*        regurgitate input
*
         WRITE( NOUT, FMT = 9999 )
     $               'SCALAPACK example driver.'
         WRITE( NOUT, FMT = 9999 ) USRINFO
         WRITE( NOUT, FMT = * )
         WRITE( NOUT, FMT = 9999 )
     $               'The matrices A and B are read from '//
     $               'a file.'
         WRITE( NOUT, FMT = * )
         WRITE( NOUT, FMT = 9999 )
     $               'An explanation of the input/output '//
     $               'parameters follows:'
*
         WRITE( NOUT, FMT = 9999 )
     $               'N       : The order of the matrix A.'
         WRITE( NOUT, FMT = 9999 )
     $               'NRHS    : The number of right-hand sides.'
         WRITE( NOUT, FMT = 9999 )
     $               'NB      : The size of the square blocks the'//
     $               ' matrices A and B are split into.'
         WRITE( NOUT, FMT = 9999 )
     $               'P       : The number of process rows.'
         WRITE( NOUT, FMT = 9999 )
     $               'Q       : The number of process columns.'
         WRITE( NOUT, FMT = * )
         WRITE( NOUT, FMT = 9999 )
     $               'The following parameter values will be used:'
         WRITE( NOUT, FMT = 9998 ) 'N    ', N
         WRITE( NOUT, FMT = 9998 ) 'NRHS ', NRHS
         WRITE( NOUT, FMT = 9998 ) 'NB   ', NB
         WRITE( NOUT, FMT = 9998 ) 'P    ', NPROW
         WRITE( NOUT, FMT = 9998 ) 'Q    ', NPCOL
         WRITE( NOUT, FMT = * )
*
      ELSE
*
*        If underlying system needs additional set up, do it now
*
         IF( NPROCS.LT.1 )
     $      CALL BLACS_SETUP( IAM, NPROCS )
*
*        Temporarily define blacs grid to include all processes so
*        information can be broadcast to all processes
*
         CALL BLACS_GET( -1, 0, ICTXT )
         CALL BLACS_GRIDINIT( ICTXT, 'Row-major', 1, NPROCS )
*
         CALL IGEBR2D( ICTXT, 'All', ' ', 5, 1, WORK, 5, 0, 0 )
         N     = WORK( 1 )
         NRHS  = WORK( 2 )
         NB    = WORK( 3 )
         NPROW = WORK( 4 )
         NPCOL = WORK( 5 )
*
      END IF
*
      CALL BLACS_GRIDEXIT( ICTXT )
*
      RETURN
*
   20 WRITE( NOUT, FMT = 9997 )
      CLOSE( NIN )
      IF( NOUT.NE.6 .AND. NOUT.NE.0 )
     $   CLOSE( NOUT )
      CALL BLACS_ABORT( ICTXT, 1 )
*
      STOP
*
 9999 FORMAT( A )
 9998 FORMAT( 2X, A5, '   :        ', I6 )
 9997 FORMAT( ' Illegal input in file ',40A,'.  Aborting run.' )
*
*     End of PDSCAEXINFO
*
      END

      SUBROUTINE PDLAREAD( FILNAM, A, DESCA, IRREAD, ICREAD, WORK )
*
*  -- ScaLAPACK example --
*     University of Tennessee, Knoxville, Oak Ridge National Laboratory,
*     and University of California, Berkeley.
* 
*     written by Antoine Petitet, (petitet@cs.utk.edu)
*
*     .. Scalar Arguments ..
      INTEGER            ICREAD, IRREAD
*     ..
*     .. Array Arguments ..
      CHARACTER*(*)      FILNAM
      INTEGER            DESCA( * )
      DOUBLE PRECISION   A( * ), WORK( * )
*     ..
*
*  Purpose
*  =======
*
*  PDLAREAD reads a matrix from a file named FILNAM and distributes
*  it to the process grid.
*
*  Only the process with coordinates (IRREAD, ICREAD) reads the file.
*
*  WORK must be of size >= MB_ = DESCA( MB_ ).
*
*  =====================================================================
*
*     .. Parameters ..
      INTEGER            NIN
      PARAMETER          ( NIN = 11 )
      INTEGER            BLOCK_CYCLIC_2D, CSRC_, CTXT_, DLEN_, DTYPE_,
     $                   LLD_, MB_, M_, NB_, N_, RSRC_
      PARAMETER          ( BLOCK_CYCLIC_2D = 1, DLEN_ = 9, DTYPE_ = 1,
     $                     CTXT_ = 2, M_ = 3, N_ = 4, MB_ = 5, NB_ = 6,
     $                     RSRC_ = 7, CSRC_ = 8, LLD_ = 9 )
*     ..
*     .. Local Scalars ..
      INTEGER            H, I, IB, ICTXT, ICURCOL, ICURROW, II, J, JB,
     $                   JJ, K, LDA, M, MYCOL, MYROW, N, NPCOL, NPROW
*     ..
*     .. Local Arrays ..
      INTEGER            IWORK( 2 )
*     ..
*     .. External Subroutines ..
      EXTERNAL           BLACS_GRIDINFO, INFOG2L, DGERV2D, DGESD2D,
     $                   IGEBS2D, IGEBR2D
*     ..
*     .. External Functions ..
      INTEGER            ICEIL
      EXTERNAL           ICEIL
*     ..
*     .. Intrinsic Functions ..
      INTRINSIC          MIN
*     ..
*     .. Executable Statements ..
*
*     Get grid parameters
*
      ICTXT = DESCA( CTXT_ )
      CALL BLACS_GRIDINFO( ICTXT, NPROW, NPCOL, MYROW, MYCOL )
*
      IF( MYROW.EQ.IRREAD .AND. MYCOL.EQ.ICREAD ) THEN
         OPEN( NIN, FILE=FILNAM, STATUS='OLD' )
         READ( NIN, FMT = * ) ( IWORK( I ), I = 1, 2 )
         CALL IGEBS2D( ICTXT, 'All', ' ', 2, 1, IWORK, 2 )
      ELSE
         CALL IGEBR2D( ICTXT, 'All', ' ', 2, 1, IWORK, 2, IRREAD,
     $                 ICREAD )
      END IF
      M = IWORK( 1 )
      N = IWORK( 2 )
*
      IF( M.LE.0 .OR. N.LE.0 )
     $   RETURN
*
      IF( M.GT.DESCA( M_ ).OR. N.GT.DESCA( N_ ) ) THEN
         IF( MYROW.EQ.0 .AND. MYCOL.EQ.0 ) THEN
            WRITE( *, FMT = * ) 'PDLAREAD: Matrix too big to fit in'
            WRITE( *, FMT = * ) 'Abort ...'
         END IF
         CALL BLACS_ABORT( ICTXT, 0 )
      END IF
*
      II = 1
      JJ = 1
      ICURROW = DESCA( RSRC_ )
      ICURCOL = DESCA( CSRC_ )
      LDA = DESCA( LLD_ )
*
*     Loop over column blocks
*
      DO 50 J = 1, N, DESCA( NB_ )
         JB = MIN(  DESCA( NB_ ), N-J+1 )
         DO 40 H = 0, JB-1
*
*           Loop over block of rows
*
            DO 30 I = 1, M, DESCA( MB_ )
               IB = MIN( DESCA( MB_ ), M-I+1 )
               IF( ICURROW.EQ.IRREAD .AND. ICURCOL.EQ.ICREAD ) THEN
                  IF( MYROW.EQ.IRREAD .AND. MYCOL.EQ.ICREAD ) THEN
                     DO 10 K = 0, IB-1
                        READ( NIN, FMT = * ) A( II+K+(JJ+H-1)*LDA )
   10                CONTINUE
                  END IF
               ELSE
                  IF( MYROW.EQ.ICURROW .AND. MYCOL.EQ.ICURCOL ) THEN
                     CALL DGERV2D( ICTXT, IB, 1, A( II+(JJ+H-1)*LDA ),
     $                             LDA, IRREAD, ICREAD )
                   ELSE IF( MYROW.EQ.IRREAD .AND. MYCOL.EQ.ICREAD ) THEN
                     DO 20 K = 1, IB
                        READ( NIN, FMT = * ) WORK( K )
   20                CONTINUE
                     CALL DGESD2D( ICTXT, IB, 1, WORK, DESCA( MB_ ),
     $                             ICURROW, ICURCOL )
                  END IF
               END IF
               IF( MYROW.EQ.ICURROW )
     $            II = II + IB
               ICURROW = MOD( ICURROW+1, NPROW )
   30       CONTINUE
*
            II = 1
            ICURROW = DESCA( RSRC_ )
   40    CONTINUE
*
         IF( MYCOL.EQ.ICURCOL )
     $      JJ = JJ + JB
         ICURCOL = MOD( ICURCOL+1, NPCOL )
*
   50 CONTINUE
*
      IF( MYROW.EQ.IRREAD .AND. MYCOL.EQ.ICREAD ) THEN
         CLOSE( NIN )
      END IF
*
      RETURN
*
*     End of PDLAREAD
*
      END

      SUBROUTINE PDLAWRITE( FILNAM, M, N, A, IA, JA, DESCA, IRWRIT,
     $                      ICWRIT, WORK )
*
*  -- ScaLAPACK example --
*     University of Tennessee, Knoxville, Oak Ridge National Laboratory,
*     and University of California, Berkeley.
*
*     written by Antoine Petitet, (petitet@cs.utk.edu)
*
*     .. Scalar Arguments ..
      INTEGER            IA, ICWRIT, IRWRIT, JA, M, N
*     ..
*     .. Array Arguments ..
      CHARACTER*(*)      FILNAM
      INTEGER            DESCA( * )
      DOUBLE PRECISION   A( * ), WORK( * )
*     ..
*
*  Purpose
*  =======
*
*  PDLAWRITE writes to a file named FILNAM a distributed matrix sub( A )
*  denoting A(IA:IA+M-1,JA:JA+N-1). The local pieces are sent to and
*  written by the process with coordinates (IRWRIT, ICWRIT).
*
*  WORK must be of size >= MB_ = DESCA( MB_ ).
*
*  =====================================================================
*
*     .. Parameters ..
      INTEGER            NOUT
      PARAMETER          ( NOUT = 13 )
      INTEGER            BLOCK_CYCLIC_2D, CSRC_, CTXT_, DLEN_, DTYPE_,
     $                   LLD_, MB_, M_, NB_, N_, RSRC_
      PARAMETER          ( BLOCK_CYCLIC_2D = 1, DLEN_ = 9, DTYPE_ = 1,
     $                     CTXT_ = 2, M_ = 3, N_ = 4, MB_ = 5, NB_ = 6,
     $                     RSRC_ = 7, CSRC_ = 8, LLD_ = 9 )
*     ..
*     .. Local Scalars ..
      INTEGER            H, I, IACOL, IAROW, IB, ICTXT, ICURCOL,
     $                   ICURROW, II, IIA, IN, J, JB, JJ, JJA, JN, K,
     $                   LDA, MYCOL, MYROW, NPCOL, NPROW
*     ..
*     .. External Subroutines ..
      EXTERNAL           BLACS_BARRIER, BLACS_GRIDINFO, INFOG2L,
     $                   DGERV2D, DGESD2D
*     ..
*     .. External Functions ..
      INTEGER            ICEIL
      EXTERNAL           ICEIL
*     ..
*     .. Intrinsic Functions ..
      INTRINSIC          MIN
*     ..
*     .. Executable Statements ..
*
*     Get grid parameters
*
      ICTXT = DESCA( CTXT_ )
      CALL BLACS_GRIDINFO( ICTXT, NPROW, NPCOL, MYROW, MYCOL )
*
      IF( MYROW.EQ.IRWRIT .AND. MYCOL.EQ.ICWRIT ) THEN
         OPEN( NOUT, FILE=FILNAM, STATUS='UNKNOWN' )
         WRITE( NOUT, FMT = * ) M, N
      END IF
*
      CALL INFOG2L( IA, JA, DESCA, NPROW, NPCOL, MYROW, MYCOL,
     $              IIA, JJA, IAROW, IACOL )
      ICURROW = IAROW
      ICURCOL = IACOL
      II = IIA
      JJ = JJA
      LDA = DESCA( LLD_ )
*
*     Handle the first block of column separately
*
      JN = MIN( ICEIL( JA, DESCA( NB_ ) ) * DESCA( NB_ ), JA+N-1 )
      JB = JN-JA+1
      DO 60 H = 0, JB-1
         IN = MIN( ICEIL( IA, DESCA( MB_ ) ) * DESCA( MB_ ), IA+M-1 )
         IB = IN-IA+1
         IF( ICURROW.EQ.IRWRIT .AND. ICURCOL.EQ.ICWRIT ) THEN
            IF( MYROW.EQ.IRWRIT .AND. MYCOL.EQ.ICWRIT ) THEN
               DO 10 K = 0, IB-1
                  WRITE( NOUT, FMT = 9999 ) A( II+K+(JJ+H-1)*LDA )
   10          CONTINUE
            END IF
         ELSE
            IF( MYROW.EQ.ICURROW .AND. MYCOL.EQ.ICURCOL ) THEN
               CALL DGESD2D( ICTXT, IB, 1, A( II+(JJ+H-1)*LDA ), LDA,
     $                       IRWRIT, ICWRIT )
            ELSE IF( MYROW.EQ.IRWRIT .AND. MYCOL.EQ.ICWRIT ) THEN
               CALL DGERV2D( ICTXT, IB, 1, WORK, DESCA( MB_ ),
     $                       ICURROW, ICURCOL )
               DO 20 K = 1, IB
                  WRITE( NOUT, FMT = 9999 ) WORK( K )
   20          CONTINUE
            END IF
         END IF
         IF( MYROW.EQ.ICURROW )
     $      II = II + IB
         ICURROW = MOD( ICURROW+1, NPROW )
         CALL BLACS_BARRIER( ICTXT, 'All' )
*
*        Loop over remaining block of rows
*
         DO 50 I = IN+1, IA+M-1, DESCA( MB_ )
            IB = MIN( DESCA( MB_ ), IA+M-I )
            IF( ICURROW.EQ.IRWRIT .AND. ICURCOL.EQ.ICWRIT ) THEN
               IF( MYROW.EQ.IRWRIT .AND. MYCOL.EQ.ICWRIT ) THEN
                  DO 30 K = 0, IB-1
                     WRITE( NOUT, FMT = 9999 ) A( II+K+(JJ+H-1)*LDA )
   30             CONTINUE
               END IF
            ELSE
               IF( MYROW.EQ.ICURROW .AND. MYCOL.EQ.ICURCOL ) THEN
                  CALL DGESD2D( ICTXT, IB, 1, A( II+(JJ+H-1)*LDA ),
     $                          LDA, IRWRIT, ICWRIT )
               ELSE IF( MYROW.EQ.IRWRIT .AND. MYCOL.EQ.ICWRIT ) THEN
                  CALL DGERV2D( ICTXT, IB, 1, WORK, DESCA( MB_ ),
     $                          ICURROW, ICURCOL )
                  DO 40 K = 1, IB
                     WRITE( NOUT, FMT = 9999 ) WORK( K )
   40             CONTINUE
               END IF
            END IF
            IF( MYROW.EQ.ICURROW )
     $         II = II + IB
            ICURROW = MOD( ICURROW+1, NPROW )
            CALL BLACS_BARRIER( ICTXT, 'All' )
   50    CONTINUE
*
        II = IIA
        ICURROW = IAROW
   60 CONTINUE
*
      IF( MYCOL.EQ.ICURCOL )
     $   JJ = JJ + JB
      ICURCOL = MOD( ICURCOL+1, NPCOL )
      CALL BLACS_BARRIER( ICTXT, 'All' )
*
*     Loop over remaining column blocks
*
      DO 130 J = JN+1, JA+N-1, DESCA( NB_ )
         JB = MIN(  DESCA( NB_ ), JA+N-J )
         DO 120 H = 0, JB-1
            IN = MIN( ICEIL( IA, DESCA( MB_ ) ) * DESCA( MB_ ), IA+M-1 )
            IB = IN-IA+1
            IF( ICURROW.EQ.IRWRIT .AND. ICURCOL.EQ.ICWRIT ) THEN
               IF( MYROW.EQ.IRWRIT .AND. MYCOL.EQ.ICWRIT ) THEN
                  DO 70 K = 0, IB-1
                     WRITE( NOUT, FMT = 9999 ) A( II+K+(JJ+H-1)*LDA )
   70             CONTINUE
               END IF
            ELSE
               IF( MYROW.EQ.ICURROW .AND. MYCOL.EQ.ICURCOL ) THEN
                  CALL DGESD2D( ICTXT, IB, 1, A( II+(JJ+H-1)*LDA ),
     $                          LDA, IRWRIT, ICWRIT )
               ELSE IF( MYROW.EQ.IRWRIT .AND. MYCOL.EQ.ICWRIT ) THEN
                  CALL DGERV2D( ICTXT, IB, 1, WORK, DESCA( MB_ ),
     $                          ICURROW, ICURCOL )
                  DO 80 K = 1, IB
                     WRITE( NOUT, FMT = 9999 ) WORK( K )
   80             CONTINUE
               END IF
            END IF
            IF( MYROW.EQ.ICURROW )
     $         II = II + IB
            ICURROW = MOD( ICURROW+1, NPROW )
            CALL BLACS_BARRIER( ICTXT, 'All' )
*
*           Loop over remaining blocks of rows
*
            DO 110 I = IN+1, IA+M-1, DESCA( MB_ )
               IB = MIN( DESCA( MB_ ), IA+M-I )
               IF( ICURROW.EQ.IRWRIT .AND. ICURCOL.EQ.ICWRIT ) THEN
                  IF( MYROW.EQ.IRWRIT .AND. MYCOL.EQ.ICWRIT ) THEN
                     DO 90 K = 0, IB-1
                        WRITE( NOUT, FMT = 9999 ) A( II+K+(JJ+H-1)*LDA )
   90                CONTINUE
                  END IF
               ELSE
                  IF( MYROW.EQ.ICURROW .AND. MYCOL.EQ.ICURCOL ) THEN
                     CALL DGESD2D( ICTXT, IB, 1, A( II+(JJ+H-1)*LDA ),
     $                             LDA, IRWRIT, ICWRIT )
                   ELSE IF( MYROW.EQ.IRWRIT .AND. MYCOL.EQ.ICWRIT ) THEN
                     CALL DGERV2D( ICTXT, IB, 1, WORK, DESCA( MB_ ),
     $                             ICURROW, ICURCOL )
                     DO 100 K = 1, IB
                        WRITE( NOUT, FMT = 9999 ) WORK( K )
  100                CONTINUE
                  END IF
               END IF
               IF( MYROW.EQ.ICURROW )
     $            II = II + IB
               ICURROW = MOD( ICURROW+1, NPROW )
               CALL BLACS_BARRIER( ICTXT, 'All' )
  110       CONTINUE
*
            II = IIA
            ICURROW = IAROW
  120    CONTINUE
*
         IF( MYCOL.EQ.ICURCOL )
     $      JJ = JJ + JB
         ICURCOL = MOD( ICURCOL+1, NPCOL )
         CALL BLACS_BARRIER( ICTXT, 'All' )
*
  130 CONTINUE
*
      IF( MYROW.EQ.IRWRIT .AND. MYCOL.EQ.ICWRIT ) THEN
         CLOSE( NOUT )
      END IF
*
 9999 FORMAT( D30.18 )
*
      RETURN
*
*     End of PDLAWRITE
*
      END
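To make the traversal above concrete, the following Python sketch simulates the two-dimensional block-cyclic distribution and then reassembles the global matrix in the same column-by-column, row-block-by-row-block order the routine prints. All helper names are illustrative (this is a simulation, not a ScaLAPACK call), NumPy is assumed to be available, and the 9 x 7 matrix with 2 x 2 blocks on a 2 x 2 process grid is chosen only for illustration.

```python
import numpy as np

def owner_and_local(k, nb, nprocs, src=0):
    # Map a 0-based global index along one dimension of a block-cyclic
    # distribution to (owning process coordinate, local index).
    blk = k // nb                        # which global block index k falls in
    proc = (src + blk) % nprocs          # blocks are dealt out cyclically
    loc = (blk // nprocs) * nb + k % nb  # position inside the local array
    return proc, loc

def scatter(A, mb, nb, nprow, npcol):
    # Split A into the per-process local arrays of a 2D block-cyclic layout.
    m, n = A.shape
    rows = [[i for i in range(m) if owner_and_local(i, mb, nprow)[0] == p]
            for p in range(nprow)]
    cols = [[j for j in range(n) if owner_and_local(j, nb, npcol)[0] == q]
            for q in range(npcol)]
    return {(p, q): A[np.ix_(rows[p], cols[q])]
            for p in range(nprow) for q in range(npcol)}

def gather(local, m, n, mb, nb, nprow, npcol):
    # Reassemble the global matrix entry by entry, column by column --
    # the same global ordering the printing routine above walks.
    A = np.empty((m, n))
    for j in range(n):
        q, lj = owner_and_local(j, nb, npcol)
        for i in range(m):
            p, li = owner_and_local(i, mb, nprow)
            A[i, j] = local[p, q][li, lj]
    return A

A = np.arange(63, dtype=float).reshape(9, 7)
local = scatter(A, 2, 2, 2, 2)
assert local[0, 0].shape == (5, 4)   # process (0,0) owns the partial blocks
assert np.array_equal(gather(local, 9, 7, 2, 2, 2, 2), A)
```

Round-tripping through scatter and gather recovers the original matrix exactly, which is the invariant the printing routine relies on when it walks the distributed data in global order.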



Susan Blackford
Tue May 13 09:21:01 EDT 1997

HPF Interface to ScaLAPACK

        

We are investigating issues related to interfacing ScaLAPACK with High Performance Fortran (HPF) [91]. As a part of this effort, we have provided prototype interfaces to some of the ScaLAPACK routines. We are collecting user feedback on these codes, as well as allowing additional time for compiler maturation, before providing a more complete interface.

Initially, interfaces are provided for the following ScaLAPACK routines: the general and symmetric positive definite linear equation solvers (PxGESV and PxPOSV), the linear least squares solver (PxGELS), and the PBLAS matrix multiply routine (PxGEMM).            

      LA_GESV(A, B, IPIV, INFO)
         TYPE, intent(inout), dimension(:,:) :: A, B
         integer, optional, intent(out) :: IPIV(:), INFO
 
      LA_POSV(A, B, UPLO, INFO)
         TYPE, intent(inout), dimension(:,:) :: A, B
         character(LEN=1), optional, intent(in) :: UPLO
         integer, optional, intent(out) :: INFO
 
      LA_GELS(A, B, TRANS, INFO)
         TYPE, intent(inout), dimension(:,:) :: A, B
         character(LEN=1), optional, intent(in) :: TRANS
         integer, optional, intent(out) :: INFO
 
      LA_GEMM(A, B, C, transA, transB, alpha, beta)
         TYPE, intent(in), dimension(:,:) :: A, B
         TYPE, intent(inout), dimension(:,:) :: C
         character(LEN=1), optional, intent(in) :: transA, transB
         TYPE, optional, intent(in) :: alpha, beta

With this interface, all matrices are inherited, and query functions are used to determine their distribution. The matrices are redistributed only when ScaLAPACK cannot handle the user's distribution; in that case the redistribution is transparent to the user, and only the performance reveals that it has occurred.

The prototype interfaces can be downloaded from netlib at the following URL:

http://www.netlib.org/scalapack/prototypes/slhpf.tar.gz

Questions or comments on these routines may be mailed to scalapack@cs.utk.edu.

The following example is a complete HPF program that calls and tests the ScaLAPACK LU factorization/solve.

This program is also available in the scalapack directory on netlib
(http://www.netlib.org/scalapack/examples/sample_hpf_gesv.f).

      program simplegesv
      use HPF_LAPACK
      integer, parameter :: N=500, NRHS=20, NB=64, NBRHS=64, P=1, Q=3
      integer, parameter :: DP=kind(0.0D0)
      integer :: IPIV(N)
      real(DP) :: A(N, N), X(N, NRHS), B(N, NRHS)
!HPF$ PROCESSORS PROC(P,Q)
!HPF$ DISTRIBUTE A(cyclic(NB), cyclic(NB)) ONTO PROC
!HPF$ DISTRIBUTE (cyclic(NB), cyclic(NBRHS)) ONTO PROC :: B, X

!
!     Randomly generate the coefficient matrix A and the solution
!     matrix X.  Set the right hand side matrix B such that B = A * X.
!
      call random_number(A)
      call random_number(X)
      B = matmul(A, X)
!
!     Solve the linear system; the computed solution overwrites B
!
      call la_gesv(A, B, IPIV)
!
!     As a simple test, print the largest difference (in absolute value)
!     between the computed solution (B) and the generated solution (X).
!
      print*,'MAX( ABS(X~ - X) ) = ',maxval( abs(B - X) )
!
!     Shut down the ScaLAPACK system; we are done
!
      call SLhpf_exit()

      stop
      end




Quick Reference Guides

   

Quick Reference Guides are provided for ScaLAPACK, the PBLAS, and the BLACS. Quick reference guides are also available for LAPACK and the BLAS.

http://www.netlib.org/scalapack/scalapackqref.ps
http://www.netlib.org/scalapack/pblasqref.ps
http://www.netlib.org/blacs/cblacsqref.ps
http://www.netlib.org/blacs/f77blacsqref.ps
http://www.netlib.org/blas/blasqr.ps





 

ScaLAPACK Quick Reference Guide

 

A PostScript version of this Quick Reference Guide is available on the ScaLAPACK homepage.

http://www.netlib.org/scalapack/scalapackqref.ps


[table omitted]

Expert Drivers

[tables omitted]

Meaning of prefixes
Routines beginning with ``PS'' are available in:

PS - REAL
PD - DOUBLE PRECISION

Routines beginning with ``PC'' are available in:

PC - COMPLEX
PZ - COMPLEX*16

Note: COMPLEX*16 may not be supported by all machines.




 

Quick Reference Guide to the PBLAS

 

An HTML version of this Quick Reference Guide, along with the leading comments from each of the routines, is available via the ScaLAPACK homepage.

http://www.netlib.org/scalapack/index.html

At the lowest level, the efficiency of the PBLAS is determined by the local performance of the BLAS and the BLACS. In addition, depending on the shape of their input and output distributed matrices, the PBLAS select the best algorithm in terms of data transfer across the process grid. Transparent to the user, this relatively simple selection process ensures high efficiency independent of the actual computation performed.

Level 1 PBLAS

         dim scalar     vector                  vector
P_SWAP ( N,             X, IX, JX, DESCX, INCX, Y, IY, JY, DESCY, INCY )
P_SCAL ( N, ALPHA,      X, IX, JX, DESCX, INCX )
P_COPY ( N,             X, IX, JX, DESCX, INCX, Y, IY, JY, DESCY, INCY )
P_AXPY ( N, ALPHA,      X, IX, JX, DESCX, INCX, Y, IY, JY, DESCY, INCY )
P_DOT  ( N, DOT,        X, IX, JX, DESCX, INCX, Y, IY, JY, DESCY, INCY )
P_DOTU ( N, DOTU,       X, IX, JX, DESCX, INCX, Y, IY, JY, DESCY, INCY )
P_DOTC ( N, DOTC,       X, IX, JX, DESCX, INCX, Y, IY, JY, DESCY, INCY )
P_NRM2 ( N, NORM2,      X, IX, JX, DESCX, INCX )
P_ASUM ( N, ASUM,       X, IX, JX, DESCX, INCX )
P_AMAX ( N, AMAX, INDX, X, IX, JX, DESCX, INCX )

Level 2 PBLAS

         options            dim   scalar matrix            vector                 scalar vector
P_GEMV (       TRANS,       M, N, ALPHA, A, IA, JA, DESCA, X, IX, JX, DESCX, INCX, BETA, Y, IY, JY, DESCY, INCY )
P_HEMV ( UPLO,                 N, ALPHA, A, IA, JA, DESCA, X, IX, JX, DESCX, INCX, BETA, Y, IY, JY, DESCY, INCY )
P_SYMV ( UPLO,                 N, ALPHA, A, IA, JA, DESCA, X, IX, JX, DESCX, INCX, BETA, Y, IY, JY, DESCY, INCY )
P_TRMV ( UPLO, TRANS, DIAG,    N,        A, IA, JA, DESCA, X, IX, JX, DESCX, INCX )
P_TRSV ( UPLO, TRANS, DIAG,    N,        A, IA, JA, DESCA, X, IX, JX, DESCX, INCX )

         options            dim   scalar vector                  vector                  matrix
P_GER  (                    M, N, ALPHA, X, IX, JX, DESCX, INCX, Y, IY, JY, DESCY, INCY, A, IA, JA, DESCA )
P_GERU (                    M, N, ALPHA, X, IX, JX, DESCX, INCX, Y, IY, JY, DESCY, INCY, A, IA, JA, DESCA )
P_GERC (                    M, N, ALPHA, X, IX, JX, DESCX, INCX, Y, IY, JY, DESCY, INCY, A, IA, JA, DESCA )
P_HER  ( UPLO,                 N, ALPHA, X, IX, JX, DESCX, INCX,                         A, IA, JA, DESCA )
P_HER2 ( UPLO,                 N, ALPHA, X, IX, JX, DESCX, INCX, Y, IY, JY, DESCY, INCY, A, IA, JA, DESCA )
P_SYR  ( UPLO,                 N, ALPHA, X, IX, JX, DESCX, INCX,                         A, IA, JA, DESCA )
P_SYR2 ( UPLO,                 N, ALPHA, X, IX, JX, DESCX, INCX, Y, IY, JY, DESCY, INCY, A, IA, JA, DESCA )

Level 3 PBLAS

         options                           dim      scalar matrix            matrix            scalar matrix
P_GEMM (             TRANSA, TRANSB,       M, N, K, ALPHA, A, IA, JA, DESCA, B, IB, JB, DESCB, BETA, C, IC, JC, DESCC )
P_SYMM ( SIDE, UPLO,                       M, N,    ALPHA, A, IA, JA, DESCA, B, IB, JB, DESCB, BETA, C, IC, JC, DESCC )
P_HEMM ( SIDE, UPLO,                       M, N,    ALPHA, A, IA, JA, DESCA, B, IB, JB, DESCB, BETA, C, IC, JC, DESCC )
P_SYRK (       UPLO, TRANS,                   N, K, ALPHA, A, IA, JA, DESCA,                   BETA, C, IC, JC, DESCC )
P_HERK (       UPLO, TRANS,                   N, K, ALPHA, A, IA, JA, DESCA,                   BETA, C, IC, JC, DESCC )
P_SYR2K(       UPLO, TRANS,                   N, K, ALPHA, A, IA, JA, DESCA, B, IB, JB, DESCB, BETA, C, IC, JC, DESCC )
P_HER2K(       UPLO, TRANS,                   N, K, ALPHA, A, IA, JA, DESCA, B, IB, JB, DESCB, BETA, C, IC, JC, DESCC )
P_TRAN (                                   M, N,    ALPHA, A, IA, JA, DESCA,                   BETA, C, IC, JC, DESCC )
P_TRANU(                                   M, N,    ALPHA, A, IA, JA, DESCA,                   BETA, C, IC, JC, DESCC )
P_TRANC(                                   M, N,    ALPHA, A, IA, JA, DESCA,                   BETA, C, IC, JC, DESCC )
P_TRMM ( SIDE, UPLO, TRANSA,         DIAG, M, N,    ALPHA, A, IA, JA, DESCA, B, IB, JB, DESCB )
P_TRSM ( SIDE, UPLO, TRANSA,         DIAG, M, N,    ALPHA, A, IA, JA, DESCA, B, IB, JB, DESCB )


[tables omitted]




 

Quick Reference Guide to the BLACS

     

An HTML version of this Quick Reference Guide, along with the leading comments from each of the routines, is available on the BLACS homepage.

http://www.netlib.org/blacs/index.html


[tables omitted]




Glossary

 

The following is a glossary of terms and notation used throughout this users' guide and the leading comments of the source code. The first time notation from this glossary appears in the text, it will be italicized.

  • Array descriptor: Contains the information required to establish the mapping between a global matrix entry and its corresponding process and memory location  .

    The notations x_ used in the entries of the array descriptor denote the attributes of a global matrix. For example, M_ denotes the number of rows, and M_A specifically denotes the number of rows in global matrix A. See sections 4.2,  4.3.3,  4.4.5,  4.4.6, and  4.5.1 for complete details.

  • BLACS : Basic Linear Algebra Communication Subprograms, a message-passing library designed for linear algebra. They provide a portability layer for communication between ScaLAPACK and message-passing systems such as MPI and PVM, as well as native message-passing libraries such as NX and MPL. See section 1.3.4.
  • BLAS : Basic Linear Algebra Subprograms [57, 59, 93], a standard for subroutines for common linear algebra computations such as dot-products, matrix-vector multiplication, and matrix-matrix multiplication. They provide a portability layer for computation. See section 1.3.2.
  • Block size: The number of contiguous rows or columns of a global matrix to be distributed consecutively to each of the processes in the process grid. The block size is quantified by the notation MB × NB, where MB is the row block size and NB is the column block size.

    The distribution block size can be square, MB = NB, or rectangular, MB ≠ NB. Block size is also referred to as the partitioning unit or blocking factor.

  • Distributed memory computer: A term used in two senses:
    • A computer marketed as a distributed memory computer (such as the Cray T3 computers, the IBM SP computers, or the Intel Paragon), including one or more message-passing libraries.
    • A distributed shared-memory computer (e.g., the Origin 2000) or network of workstations (e.g., the Berkeley NOW) with message passing.
    ScaLAPACK delivers high performance on these computers provided that they include certain key features such as an efficient message-passing system, a one-to-one mapping of processes to processors, a gang scheduler and a well-connected communication network.
  • Distribution : Method by which the entries of a global matrix are allocated among the processes, also commonly referred to as decomposition   or data layout . Examples of distributions used by ScaLAPACK include block and block-cyclic distributions and these will be illustrated and explained in detail later.

    Data distribution in ScaLAPACK is controlled primarily by the process grid and the block size.

  • Global: A term used in two ways:
    • To define the mathematical matrix, e.g., the global matrix A.
    • To identify arguments that must have the same value on all processes.
  • LOCc(K_): Number of columns that a process receives if K columns of a matrix are distributed over c columns of its process row.

    To be consistent in notation, we have used a ``modifying character'' subscript on LOC to denote the dimension of the process grid to which we are referring. The subscript ``r'' indicates ``row'' whenever it is appended to LOC; likewise, the subscript ``c'' indicates ``column'' when it is appended to LOC.

    The value of LOCc() may differ from process to process within the process grid. For example, in figure 4.6 (section 4.3.4), we can see that for process (0,0) LOCc(N_) = 4; however, for process (0,1) LOCc(N_) = 3.

  • LOCr(K_): Number of rows that a process would receive if K rows of a matrix are distributed over r rows of its process column.

    To be consistent in notation, we have used a ``modifying character'' subscript on LOC to denote the dimension of the process grid to which we are referring. The subscript ``r'' indicates ``row'' whenever it is appended to LOC; likewise, the subscript ``c'' indicates ``column'' when it is appended to LOC.

    The value of LOCr() may differ from process to process within the process grid. For example, in figure 4.6 (section 4.3.4), we can see that for process (0,0) LOCr(M_) = 5; however, for process (1,0) LOCr(M_) = 4.

  • Local: A term used in two ways:
    • To express the array elements or blocks stored on each process, e.g., the local part of the global matrix A, also referred to as the local array . The size of the local array may differ from process to process. See section 2.3 for further details.
    • To identify arguments that may have different values on different processes. 

  • Local leading dimension of a local array: Specification of the entry size for a local array. When a global array is distributed among the processes in the process grid, the entries are stored locally in a two-dimensional array, the size of which may vary from process to process. Thus, a leading dimension needs to be specified for each local array. For example, in Figure 2.2 in section 2.3, we can see that for process (0,0) the local leading dimension of the local array A (denoted LLD_A) is 5, whereas for process (1,0) the local leading dimension of local array A is 4.
  • MYCOL : The calling process's column coordinate in the process grid. Each process within the process grid is uniquely identified by its process coordinates (MYROW, MYCOL).
  • MYROW : The calling process's row coordinate in the process grid. Each process within the process grid is uniquely identified by its process coordinates (MYROW, MYCOL).
  • P : The total number of processes in the process grid, i.e., P = P_r × P_c.

    In terms of notation for process grids, we have used a ``modifying character'' subscript on P to denote the dimension of the process grid to which we are referring. The subscript ``r'' indicates ``row'' whenever it is appended to P, and thus P_r is the number of process rows in the process grid. Likewise, the subscript ``c'' indicates ``column'' when it is appended to P, and thus P_c is the number of process columns in the process grid.

  • P_c : The number of process columns in the process grid (i.e., the second dimension of the two-dimensional process grid).
  • P_r : The number of process rows in the process grid (i.e., the first dimension of the two-dimensional process grid).
  • PBLAS : A distributed-memory version of the BLAS (Basic Linear Algebra Subprograms), also referred to as the Parallel BLAS or Parallel Basic Linear Algebra Subprograms. Refer to section 1.3.3 for further details.
  • Process: Basic unit or thread of execution  that minimally includes a stack, registers, and memory. Multiple processes may share a physical processor. The term processor  refers to the actual hardware.

    In ScaLAPACK, each process is treated as if it were a processor: the process must exist for the lifetime of the ScaLAPACK run, and its execution should affect other processes' execution only through the use of message-passing calls. With this in mind, we use the term process in all sections of this users' guide except those dealing with timings. When discussing timings, we specify processors as our unit of execution, since speedup will be determined largely by actual hardware resources.

    In ScaLAPACK, algorithms are presented in terms of processes, rather than physical processors. In general there may be several processes on a processor, in which case we assume that the runtime system handles the scheduling of processes. In the absence of such a runtime system, ScaLAPACK assumes one process per processor.

  • Process column : A specific column of processes within the two-dimensional process grid. For further details, consult the definition of process grid.
  • Process grid  : The way we logically view a parallel machine as a one- or two-dimensional rectangular grid of processes.

    For two-dimensional process grids, the variable P_r is used to indicate the number of rows in the process grid (i.e., the first dimension of the two-dimensional process grid). The variable P_c is used to indicate the number of columns in the process grid (i.e., the second dimension of the two-dimensional process grid). The collection of processes need not physically be connected in the two-dimensional process grid.

    For example, the following figure shows six processes mapped to a 2 × 3 grid, where P_r = 2 and P_c = 3.

    [Figure: six processes arranged in a 2 × 3 process grid, numbered 0-5 in row-major order.]

    A user may perform an operation within a process row or process column of the process grid. A process row refers to a specific row of processes within the process grid, and a process column refers to a specific column of processes within the process grid. In the example, process row 0 contains the processes with natural ordering 0, 1, and 2, and process column 0 contains the processes with natural ordering 0 and 3.

    For further details, please refer to section 4.1.1.

  • Process row : A specific row of processes within the two-dimensional process grid. For further details, consult the definition of process grid.
  • Scope : A term used in two ways:
    • The portion of the process grid within which an operation is defined. For example, in the Level 1 PBLAS, the resultant output array or scalar will be global or local within a process column or row of the process grid, and undefined elsewhere.

      Equivalently, in Appendix D.3, scope indicates the processes that participate in the broadcast or global combine operations. Scope can equal ``all'', ``row'', or ``column''.

    • The portion of the parallel program within which the definition of an argument remains unchanged. When the scope of an argument is defined as global, the argument must have the same value on all processes. When the scope of an argument is defined as local, the argument may have different values on different processes.
    Refer to section 4.1.3 for further details.
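The grid vocabulary above can be sketched in a few lines of Python. The helper names below are illustrative (they are not BLACS routines); they assume the row-major natural ordering used in the six-process example, where process row 0 holds processes 0, 1, 2 and process column 0 holds processes 0 and 3.

```python
def grid_coords(p, npcol):
    # Row-major natural ordering: processes 0..P-1 fill the grid row by row,
    # so a process's coordinates are (MYROW, MYCOL) = (p // P_c, p % P_c).
    return p // npcol, p % npcol

def process_row(r, npcol):
    # Natural ordering numbers of the processes in process row r.
    return [r * npcol + c for c in range(npcol)]

def process_column(c, nprow, npcol):
    # Natural ordering numbers of the processes in process column c.
    return [r * npcol + c for r in range(nprow)]

# The example grid: P_r = 2, P_c = 3.
assert grid_coords(3, 3) == (1, 0)         # process 3 sits at (MYROW, MYCOL) = (1, 0)
assert process_row(0, 3) == [0, 1, 2]      # process row 0 holds 0, 1, and 2
assert process_column(0, 2, 3) == [0, 3]   # process column 0 holds 0 and 3
```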



Specifications of Routines

 




Notes

  1. The specifications that follow give the calling sequence, purpose, and descriptions of the arguments of each ScaLAPACK driver and computational routine (but not of auxiliary routines).
  2. Specifications of pairs of real and complex routines have been merged (for example PSGETRF/PCGETRF).
  3. Specifications are given only for single-precision routines. To adapt them for the double precision version of the software, simply interpret REAL as DOUBLE PRECISION, COMPLEX as COMPLEX*16 (or DOUBLE COMPLEX), and the initial letters PS- and PC- of ScaLAPACK routine names as PD- and PZ-.
  4. Specifications are arranged in alphabetical order of the real routine name.
  5. The text of the specifications has been derived from the leading comments in the source-text of the routines. It makes only limited use of mathematical typesetting facilities. To eliminate redundancy, A^H has been used throughout the specifications. Thus, the reader should note that A^H is equivalent to A^T in the real case.
  6. If there is a discrepancy between the specifications listed in this section and the actual source code, the source code should be regarded as the most up to date.

Included in the leading comments of each subroutine (immediately preceding the Argument section) is a brief note describing the array descriptor and some commonly used expressions in calculating workspace. For brevity, we have listed this information below and not included it in the specifications of the routines.

*  Notes
*  =====
*
*  Each global data object is described by an associated description
*  vector.  This vector stores the information required to establish
*  the mapping between an object element and its corresponding process
*  and memory location.
*
*  Let A be a generic term for any 2D block cyclicly distributed array.
*  Such a global array has an associated description vector DESCA.
*  In the following comments, the character _ should be read as
*  "of the global array".
*
*  NOTATION        STORED IN      EXPLANATION
*  --------------- -------------- --------------------------------------
*  DTYPE_A(global) DESCA( DTYPE_ )The descriptor type.  In this case,
*                                 DTYPE_A = 1.
*  CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating
*                                 the BLACS process grid A is distribu-
*                                 ted over. The context itself is glo-
*                                 bal, but the handle (the integer
*                                 value) may vary.
*  M_A    (global) DESCA( M_ )    The number of rows in the global
*                                 array A.
*  N_A    (global) DESCA( N_ )    The number of columns in the global
*                                 array A.
*  MB_A   (global) DESCA( MB_ )   The blocking factor used to distribute
*                                 the rows of the array.
*  NB_A   (global) DESCA( NB_ )   The blocking factor used to distribute
*                                 the columns of the array.
*  RSRC_A (global) DESCA( RSRC_ ) The process row over which the first
*                                 row of the array A is distributed.
*  CSRC_A (global) DESCA( CSRC_ ) The process column over which the
*                                 first column of the array A is
*                                 distributed.
*  LLD_A  (local)  DESCA( LLD_ )  The leading dimension of the local
*                                 array.  LLD_A >= MAX(1,LOCr(M_A)).
*
*  Let K be the number of rows or columns of a distributed matrix,
*  and assume that its process grid has dimension p x q.
*  LOCr( K ) denotes the number of elements of K that a process
*  would receive if K were distributed over the p processes of its
*  process column.
*  Similarly, LOCc( K ) denotes the number of elements of K that a
*  process would receive if K were distributed over the q processes of
*  its process row.
*  The values of LOCr() and LOCc() may be determined via a call to the
*  ScaLAPACK tool function, NUMROC:
*          LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ),
*          LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ).
*  An upper bound for these quantities may be computed by:
*          LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A
*          LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A
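As a sketch of the rule these comments state, the following Python function mirrors the arithmetic of the ScaLAPACK tool function NUMROC (an illustrative transcription, not the Fortran source). The sample values assume M = 9, N = 7 with MB = NB = 2 on a 2 x 2 process grid, which reproduces the LOCr/LOCc values quoted in the Glossary.

```python
import math

def numroc(n, nb, iproc, isrcproc, nprocs):
    # Number of rows/columns of an n-long dimension, distributed in blocks
    # of size nb over nprocs processes, that land on process iproc
    # (process isrcproc owns the first block).
    mydist = (nprocs + iproc - isrcproc) % nprocs  # distance from the source process
    nblocks = n // nb                              # number of complete blocks
    num = (nblocks // nprocs) * nb                 # complete blocks every process gets
    extrablks = nblocks % nprocs                   # leftover complete blocks
    if mydist < extrablks:
        num += nb                                  # one extra complete block
    elif mydist == extrablks:
        num += n % nb                              # the trailing partial block
    return num

# Values consistent with the Glossary's figure-4.6 example
# (assuming M = 9, N = 7, MB = NB = 2 on a 2 x 2 process grid):
assert [numroc(9, 2, p, 0, 2) for p in (0, 1)] == [5, 4]   # LOCr(M_)
assert [numroc(7, 2, q, 0, 2) for q in (0, 1)] == [4, 3]   # LOCc(N_)
# The upper bound quoted above: LOCr(M) <= ceil(ceil(M/MB_A)/NPROW)*MB_A
assert all(numroc(9, 2, p, 0, 2) <= math.ceil(math.ceil(9 / 2) / 2) * 2
           for p in (0, 1))
```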











References

1
M. ABOELAZE, N. CHRISOCHOIDES, AND E. HOUSTIS, The Parallelization of Level 2 and 3 BLAS Operations on Distributed Memory Machines, Tech. Rep. CSD-TR-91-007, Purdue University, West Lafayette, IN, 1991.

2
R. AGARWAL, F. GUSTAVSON, AND M. ZUBAIR, Improving Performance of Linear Algebra Algorithms for Dense Matrices Using Algorithmic Prefetching, IBM J. Res. Dev., 38 (1994), pp. 265-275.

3
E. ANDERSON, Z. BAI, C. BISCHOF, J. DEMMEL, J. DONGARRA, J. DU CROZ, A. GREENBAUM, S. HAMMARLING, A. MCKENNEY, S. OSTROUCHOV, AND D. SORENSEN, LAPACK Users' Guide, Society for Industrial and Applied Mathematics, Philadelphia, PA, second ed., 1995.

4
E. ANDERSON, Z. BAI, C. BISCHOF, J. DEMMEL, J. DONGARRA, J. DU CROZ, A. GREENBAUM, S. HAMMARLING, A. MCKENNEY, AND D. SORENSEN, LAPACK: A portable linear algebra library for high-performance computers, Computer Science Dept. Technical Report CS-90-105, University of Tennessee, Knoxville, TN, May 1990. (Also LAPACK Working Note #20).

5
E. ANDERSON, Z. BAI, AND J. DONGARRA, Generalized QR factorization and its applications, Linear Algebra and Its Applications, 162-164 (1992), pp. 243-273. (Also LAPACK Working Note #31).

6
I. ANGUS, G. FOX, J. KIM, AND D. WALKER, Solving Problems on Concurrent Processors: Software for Concurrent Processors, vol. 2, Prentice Hall, Englewood Cliffs, N.J, 1990.

7
ANSI/IEEE, IEEE Standard for Binary Floating Point Arithmetic, New York, Std 754-1985 ed., 1985.

8
ANSI/IEEE, IEEE Standard for Radix Independent Floating Point Arithmetic, New York, Std 854-1987 ed., 1987.

9
M. ARIOLI, J. W. DEMMEL, AND I. S. DUFF, Solving sparse linear systems with sparse backward error, SIAM J. Matrix Anal. Appl., 10 (1989), pp. 165-190.

10
C. ASHCRAFT, The Distributed Solution of Linear Systems Using the Torus-wrap Data mapping, Tech. Rep. ECA-TR-147, Boeing Computer Services, Seattle, WA, 1990.

11
Z. BAI AND J. DEMMEL, Design of a parallel nonsymmetric eigenroutine toolbox, Part I, in Proceedings of the Sixth SIAM Conference on Parallel Processing for Scientific Computing, SIAM, 1993, pp. 391-398.

12
Z. BAI AND J. DEMMEL, Using the matrix sign function to compute invariant subspaces, SIAM J. Matrix Anal. Appl., x (1997), p. xxx. To appear.

13
Z. BAI, J. DEMMEL, J. DONGARRA, A. PETITET, H. ROBINSON, AND K. STANLEY, The spectral decomposition of nonsymmetric matrices on distributed memory computers, Computer Science Dept. Technical Report CS-95-273, University of Tennessee, Knoxville, TN, 1995. (Also LAPACK Working Note No. 91), To appear in SIAM J. Sci. Stat. Comput.

14
Z. BAI AND J. W. DEMMEL, Design of a parallel nonsymmetric eigenroutine toolbox, Part I, in Proceedings of the Sixth SIAM Conference on Parallel Processing for Scientific Computing, R. F. Sincovec et al., eds., Philadelphia, PA, 1993, Society for Industrial and Applied Mathematics, pp. 391-398. Long version available as Computer Science Report CSD-92-718, University of California, Berkeley, 1992.

15
J. BARLOW AND J. DEMMEL, Computing accurate eigensystems of scaled diagonally dominant matrices, SIAM J. Num. Anal., 27 (1990), pp. 762-791. (Also LAPACK Working Note #7).

16
J. BILMES, K. ASANOVIC, J. DEMMEL, D. LAM, AND C. CHIN, Optimizing matrix multiply using PHiPAC: A portable, high-performance, ANSI C coding methodology, Computer Science Dept. Technical Report CS-96-326, University of Tennessee, Knoxville, TN, 1996. (Also LAPACK Working Note #111).

17
R. H. BISSELING AND J. G. G. VAN DE VORST, Parallel LU decomposition on a transputer network, in Lecture Notes in Computer Science, Number 384, G. A. van Zee and J. G. G. van de Vorst, eds., Springer-Verlag, 1989, pp. 61-77.

18
L. S. BLACKFORD, J. CHOI, A. CLEARY, J. DEMMEL, I. DHILLON, J. J. DONGARRA, S. HAMMARLING, G. HENRY, A. PETITET, K. STANLEY, D. W. WALKER, AND R. C. WHALEY, ScaLAPACK: A portable linear algebra library for distributed memory computers - design issues and performance, in Proceedings of Supercomputing '96, Sponsored by ACM SIGARCH and IEEE Computer Society, 1996. (ACM Order Number: 415962, IEEE Computer Society Press Order Number: RS00126. http://www.supercomp.org/sc96/proceedings/).

19
L. S. BLACKFORD, A. CLEARY, J. DEMMEL, I. DHILLON, J. DONGARRA, S. HAMMARLING, A. PETITET, H. REN, K. STANLEY, AND R. C. WHALEY, Practical experience in the dangers of heterogeneous computing, Computer Science Dept. Technical Report CS-96-330, University of Tennessee, Knoxville, TN, July 1996. (Also LAPACK Working Note #112), to appear ACM Trans. Math. Softw., 1997.

20
R. BRENT, The LINPACK Benchmark on the AP 1000, in Frontiers, 1992, McLean, VA, 1992, pp. 128-135.

21
R. BRENT AND P. STRAZDINS, Implementation of BLAS Level 3 and LINPACK Benchmark on the AP1000, Fujitsu Scientific and Technical Journal, 5 (1993), pp. 61-70.

22
S. BROWNE, J. DONGARRA, S. GREEN, E. GROSSE, K. MOORE, T. ROWAN, AND R. WADE, Netlib services and resources (rev. 1), Computer Science Dept. Technical Report CS-94-222, University of Tennessee, Knoxville, TN, 1994.

23
S. BROWNE, J. DONGARRA, E. GROSSE, AND T. ROWAN, The netlib mathematical software repository, D-Lib Magazine (www.dlib.org), (1995).

24
J. CHOI, J. DEMMEL, I. DHILLON, J. DONGARRA, S. OSTROUCHOV, A. PETITET, K. STANLEY, D. WALKER, AND R. C. WHALEY, Installation guide for ScaLAPACK, Computer Science Dept. Technical Report CS-95-280, University of Tennessee, Knoxville, TN, March 1995. (Also LAPACK Working Note #93).

25
J. CHOI, J. DEMMEL, I. DHILLON, J. DONGARRA, S. OSTROUCHOV, A. PETITET, K. STANLEY, D. WALKER, AND R. C. WHALEY, ScaLAPACK: A portable linear algebra library for distributed memory computers - design issues and performance, Computer Science Dept. Technical Report CS-95-283, University of Tennessee, Knoxville, TN, March 1995. (Also LAPACK Working Note #95).

26
J. CHOI, J. DONGARRA, S. OSTROUCHOV, A. PETITET, D. WALKER, AND R. C. WHALEY, A proposal for a set of parallel basic linear algebra subprograms, Computer Science Dept. Technical Report CS-95-292, University of Tennessee, Knoxville, TN, May 1995. (Also LAPACK Working Note #100).

27
J. CHOI, J. DONGARRA, R. POZO, AND D. WALKER, ScaLAPACK: A scalable linear algebra library for distributed memory concurrent computers, in Proceedings of the Fourth Symposium on the Frontiers of Massively Parallel Computation, McLean, Virginia, 1992, IEEE Computer Society Press, pp. 120-127. (Also LAPACK Working Note #55).

28
J. CHOI, J. DONGARRA, AND D. WALKER, The design of a parallel dense linear algebra software library: Reduction to Hessenberg, tridiagonal and bidiagonal form, Numerical Algorithms, 10 (1995), pp. 379-399. (Also LAPACK Working Note #92).

29
J. CHOI, J. DONGARRA, AND D. WALKER, PB-BLAS: A Set of Parallel Block Basic Linear Algebra Subroutines, Concurrency: Practice and Experience, 8 (1996), pp. 517-535.

30
A. CHTCHELKANOVA, J. GUNNELS, G. MORROW, J. OVERFELT, AND R. VAN DE GEIJN, Parallel Implementation of BLAS: General Techniques for Level 3 BLAS, Tech. Rep. TR95-49, Department of Computer Sciences, UT-Austin, 1995. Submitted to Concurrency: Practice and Experience.

31
E. CHU AND A. GEORGE, QR Factorization of a Dense Matrix on a Hypercube Multiprocessor, SIAM Journal on Scientific and Statistical Computing, 11 (1990), pp. 990-1028.

32
A. CLEARY AND J. DONGARRA, Implementation in scalapack of divide-and-conquer algorithms for banded and tridiagonal linear systems, Computer Science Dept. Technical Report CS-97-358, University of Tennessee, Knoxville, TN, April 1997. (Also LAPACK Working Note #125).

33
M. COSNARD, Y. ROBERT, P. QUINTON, AND M. TCHUENTE, eds., Parallel Algorithms and Architectures, North-Holland, 1986.

34
D. E. CULLER, A. ARPACI-DUSSEAU, R. ARPACI-DUSSEAU, B. CHUN, S. LUMETTA, A. MAINWARING, R. MARTIN, C. YOSHIKAWA, AND F. WONG, Parallel computing on the Berkeley NOW. To appear in JSPP'97 (9th Joint Symposium on Parallel Processing), Kobe, Japan, 1997.

35
M. DAYDE, I. DUFF, AND A. PETITET, A Parallel Block Implementation of Level 3 BLAS for MIMD Vector Processors, ACM Trans. Math. Softw., 20 (1994), pp. 178-193.

36
B. DE MOOR AND P. VAN DOOREN, Generalization of the singular value and QR decompositions, SIAM J. Matrix Anal. Appl., 13 (1992), pp. 993-1014.

37
J. DEMMEL, Underflow and the reliability of numerical software, SIAM J. Sci. Stat. Comput., 5 (1984), pp. 887-919.

38
J. DEMMEL, Applied Numerical Linear Algebra, SIAM, 1996. To appear.

39
J. DEMMEL, S. EISENSTAT, J. GILBERT, X. LI, AND J. W. H. LIU, A supernodal approach to sparse partial pivoting, Technical Report UCB//CSD-95-883, UC Berkeley Computer Science Division, September 1995. to appear in SIAM J. Mat. Anal. Appl.

40
J. DEMMEL AND K. STANLEY, The performance of finding eigenvalues and eigenvectors of dense symmetric matrices on distributed memory computers, Computer Science Dept. Technical Report CS-94-254, University of Tennessee, Knoxville, TN, September 1994. (Also LAPACK Working Note #86).

41
J. W. DEMMEL, J. R. GILBERT, AND X. S. LI, An asynchronous parallel supernodal algorithm for sparse Gaussian elimination, February 1997. Submitted to SIAM J. Matrix Anal. Appl., special issue on Sparse and Structured Matrix Computations and Their Applications (Also LAPACK Working Note 124).

42
J. W. DEMMEL AND X. LI, Faster numerical algorithms via exception handling, IEEE Trans. Comp., 43 (1994), pp. 983-992. (Also LAPACK Working Note #59).

43
I. S. DHILLON, Current inverse iteration software can fail, (1997). Submitted for publication.

44
I. S. DHILLON, A Stable O(n²) Algorithm for the Symmetric Tridiagonal Eigenproblem, PhD thesis, University of California, Berkeley, CA, May 1997.

45
I. S. DHILLON AND B. PARLETT, Orthogonal eigenvectors without Gram-Schmidt, (1997). draft.

46
J. DONGARRA AND T. DUNIGAN, Message-passing performance of various computers, Tech. Rep. ORNL/TM-13006, Oak Ridge National Laboratory, Oak Ridge, TN, 1996. Submitted and accepted to Concurrency: Practice and Experience.

47
J. DONGARRA, S. HAMMARLING, AND D. WALKER, Key Concepts for Parallel Out-Of-Core LU Factorization, Society for Industrial and Applied Mathematics, Philadelphia, PA, 1996. (Also LAPACK Working Note #110).

48
J. DONGARRA, G. HENRY, AND D. WATKINS, A distributed memory implementation of the nonsymmetric QR algorithm, in Proceedings of the Eighth SIAM Conference on Parallel Processing for Scientific Computing, Philadelphia, PA, 1997, Society for Industrial and Applied Mathematics.

49
J. DONGARRA, C. RANDRIAMARO, L. PRYLLI, AND B. TOURANCHEAU, Array redistribution in ScaLAPACK using PVM, in EuroPVM users' group, Hermes, 1995.

50
J. DONGARRA AND R. VAN DE GEIJN, Two dimensional basic linear algebra communication subprograms, Computer Science Dept. Technical Report CS-91-138, University of Tennessee, Knoxville, TN, 1991. (Also LAPACK Working Note #37).

51
J. DONGARRA, R. VAN DE GEIJN, AND D. WALKER, Scalability issues in the design of a library for dense linear algebra, Journal of Parallel and Distributed Computing, 22 (1994), pp. 523-537. (Also LAPACK Working Note #43).

52
J. DONGARRA, R. VAN DE GEIJN, AND R. C. WHALEY, Two dimensional basic linear algebra communication subprograms, in Environments and Tools for Parallel Scientific Computing, Advances in Parallel Computing, J. Dongarra and B. Tourancheau, eds., vol. 6, Elsevier Science Publishers B.V., 1993, pp. 31-40.

53
J. DONGARRA AND D. WALKER, Software libraries for linear algebra computations on high performance computers, SIAM Review, 37 (1995), pp. 151-180.

54
J. DONGARRA AND R. C. WHALEY, A user's guide to the BLACS v1.1, Computer Science Dept. Technical Report CS-95-281, University of Tennessee, Knoxville, TN, 1995. (Also LAPACK Working Note #94).

55
J. J. DONGARRA AND E. F. D'AZEVEDO, The design and implementation of the parallel out-of-core ScaLAPACK LU, QR, and Cholesky factorization routines, Department of Computer Science Technical Report CS-97-347, University of Tennessee, Knoxville, TN, 1997. (Also LAPACK Working Note 118).

56
J. J. DONGARRA, J. DU CROZ, I. S. DUFF, AND S. HAMMARLING, Algorithm 679: A set of Level 3 Basic Linear Algebra Subprograms, ACM Trans. Math. Soft., 16 (1990), pp. 18-28.

57
J. J. DONGARRA, J. DU CROZ, I. S. DUFF, AND S. HAMMARLING, A set of Level 3 Basic Linear Algebra Subprograms, ACM Trans. Math. Soft., 16 (1990), pp. 1-17.

58
J. J. DONGARRA, J. DU CROZ, S. HAMMARLING, AND R. J. HANSON, Algorithm 656: An extended set of FORTRAN Basic Linear Algebra Subroutines, ACM Trans. Math. Soft., 14 (1988), pp. 18-32.

59
J. J. DONGARRA, J. DU CROZ, S. HAMMARLING, AND R. J. HANSON, An extended set of FORTRAN basic linear algebra subroutines, ACM Trans. Math. Soft., 14 (1988), pp. 1-17.

60
J. J. DONGARRA AND E. GROSSE, Distribution of mathematical software via electronic mail, Communications of the ACM, 30 (1987), pp. 403-407.

61
J. J. DONGARRA, R. VAN DE GEIJN, AND D. W. WALKER, A look at scalable dense linear algebra libraries, in Proceedings of the Scalable High-Performance Computing Conference, IEEE, ed., IEEE Publishers, 1992, pp. 372-379.

62
J. DU CROZ AND N. J. HIGHAM, Stability of methods for matrix inversion, IMA J. Numer. Anal., 12 (1992), pp. 1-19. (Also LAPACK Working Note #27).

63
R. FALGOUT, A. SKJELLUM, S. SMITH, AND C. STILL, The Multicomputer Toolbox Approach to Concurrent BLAS and LACS, in Proceedings of the Scalable High Performance Computing Conference SHPCC-92, IEEE Computer Society Press, 1992.

64
M. P. I. FORUM, MPI: A message passing interface standard, International Journal of Supercomputer Applications and High Performance Computing, 8 (1994), pp. 3-4. Special issue on MPI. Also available electronically, the URL is ftp://www.netlib.org/mpi/mpi-report.ps .

65
G. FOX, M. JOHNSON, G. LYZENGA, S. OTTO, J. SALMON, AND D. WALKER, Solving Problems on Concurrent Processors, Volume 1, Prentice-Hall, Englewood Cliffs, NJ, 1988.

66
G. FOX, R. WILLIAMS, AND P. MESSINA, Parallel Computing Works!, Morgan Kaufmann Publishers, Inc., San Francisco, CA, 1994.

67
T. L. FREEMAN AND C. PHILLIPS, Parallel Numerical Algorithms, Prentice-Hall, Hemel Hempstead, Hertfordshire, UK, 1992.

68
A. GEIST, A. BEGUELIN, J. DONGARRA, W. JIANG, R. MANCHEK, AND V. SUNDERAM, PVM: Parallel Virtual Machine. A Users' Guide and Tutorial for Networked Parallel Computing, MIT Press, Cambridge, MA, 1994.

69
G. GEIST AND C. ROMINE, LU factorization algorithms on distributed memory multiprocessor architectures, SIAM J. Sci. Stat. Comput., 9 (1988), pp. 639-649.

70
G. GOLUB AND C. VAN LOAN, Matrix Computations, Johns-Hopkins, Baltimore, second ed., 1989.

71
G. GOLUB AND C. F. VAN LOAN, Matrix Computations, Johns Hopkins University Press, Baltimore, MD, third ed., 1996.

72
W. W. HAGER, Condition estimators, SIAM J. Sci. Stat. Comput., 5 (1984), pp. 311-316.

73
S. HAMMARLING, The numerical solution of the general Gauss-Markov linear model, in Mathematics in Signal Processing, T. S. Durrani et al., eds., Clarendon Press, Oxford, UK, 1986.

74
R. HANSON, F. KROGH, AND C. LAWSON, A proposal for standard linear algebra subprograms, ACM SIGNUM Newsl., 8 (1973).

75
P. HATCHER AND M. QUINN, Data-Parallel Programming On MIMD Computers, The MIT Press, Cambridge, Massachusetts, 1991.

76
B. HENDRICKSON AND D. WOMBLE, The torus-wrap mapping for dense matrix calculations on massively parallel computers, SIAM J. Sci. Stat. Comput., 15 (1994), pp. 1201-1226.

77
G. HENRY, Improving Data Re-Use in Eigenvalue-Related Computations, PhD thesis, Cornell University, Ithaca, NY, January 1994.

78
G. HENRY AND R. VAN DE GEIJN, Parallelizing the QR algorithm for the unsymmetric algebraic eigenvalue problem: Myths and reality, SIAM J. Sci. Comput., 17 (1996), pp. 870-883. (Also LAPACK Working Note 79).

79
G. HENRY, D. WATKINS, AND J. DONGARRA, A parallel implementation of the nonsymmetric QR algorithm for distributed memory architectures, Computer Science Dept. Technical Report CS-97-352, University of Tennessee, Knoxville, TN, March 1997. (Also LAPACK Working Note # 121).

80
N. J. HIGHAM, A survey of condition number estimation for triangular matrices, SIAM Review, 29 (1987), pp. 575-596.

81
N. J. HIGHAM, FORTRAN codes for estimating the one-norm of a real or complex matrix, with applications to condition estimation, ACM Trans. Math. Softw., 14 (1988), pp. 381-396.

82
N. J. HIGHAM, Experience with a matrix norm estimator, SIAM J. Sci. Stat. Comput., 11 (1990), pp. 804-809.

83
N. J. HIGHAM, Perturbation theory and backward error for AX-XB=C, BIT, 33 (1993), pp. 124-136.

84
N. J. HIGHAM, Accuracy and Stability of Numerical Algorithms, Society for Industrial and Applied Mathematics, Philadelphia, PA, 1996.

85
S. HUSS-LEDERMAN, E. JACOBSON, A. TSAO, AND G. ZHANG, Matrix Multiplication on the Intel Touchstone DELTA, Concurrency: Practice and Experience, 6 (1994), pp. 571-594.

86
S. HUSS-LEDERMAN, A. TSAO, AND G. ZHANG, A parallel implementation of the invariant subspace decomposition algorithm for dense symmetric matrices, in Proceedings of the Sixth SIAM Conference on Parallel Processing for Scientific Computing, SIAM, 1993, pp. 367-374.

87
K. HWANG, Advanced Computer Architecture: Parallelism, Scalability, Programmability, McGraw-Hill, 1993.

88
IBM CORPORATION, IBM RS6000, 1996. (URL = http://www.rs6000.ibm.com/).

89
INTEL CORPORATION, Intel Supercomputer Technical Publications Home Page, 1995. (URL = http://www.ssd.intel.com/pubs.html).

90
B. KÅGSTRÖM, P. LING, AND C. VAN LOAN, GEMM-based level 3 BLAS: High-performance model implementations and performance evaluation benchmark, Tech. Rep. UMINF 95-18, Department of Computing Science, Umeå University, 1995. Submitted to ACM Trans. Math. Softw.

91
C. KOEBEL, D. LOVEMAN, R. SCHREIBER, G. STEELE, AND M. ZOSEL, The High Performance Fortran Handbook, MIT Press, Cambridge, Massachusetts, 1994.

92
V. KUMAR, A. GRAMA, A. GUPTA, AND G. KARYPIS, Introduction to Parallel Computing - Design and Analysis of Algorithms, The Benjamin/Cummings Publishing Company, Inc., Redwood City, CA, 1994.

93
C. L. LAWSON, R. J. HANSON, D. KINCAID, AND F. T. KROGH, Basic linear algebra subprograms for Fortran usage, ACM Trans. Math. Soft., 5 (1979), pp. 308-323.

94
R. LEHOUCQ, The computation of elementary unitary matrices, Computer Science Dept. Technical Report CS-94-233, University of Tennessee, Knoxville, TN, 1994. (Also LAPACK Working Note 72).

95
T. LEWIS AND H. EL-REWINI, Introduction to Parallel Computing, Prentice-Hall, Inc., Englewood Cliffs, NJ, 1992.

96
X. LI, Sparse Gaussian Elimination on High Performance Computers, PhD thesis, Computer Science Division, Department of Electrical Engineering and Computer Science, University of California, Berkeley, CA, September 1996.

97
W. LICHTENSTEIN AND S. L. JOHNSSON, Block-cyclic dense linear algebra, SIAM J. Sci. Stat. Comput., 14 (1993), pp. 1259-1288.

98
A. MAINWARING AND D. E. CULLER, Active message applications programming interface and communication subsystem organization, Tech. Rep. UCB CSD-96-918, University of California at Berkeley, Berkeley, CA, October 1996.

99
P. PACHECO, Parallel Programming with MPI, Morgan Kaufmann Publishers, Inc., San Francisco, CA, 1997.

100
C. PAIGE, Some aspects of generalized QR factorization, in Reliable Numerical Computations, M. Cox and S. Hammarling, eds., Clarendon Press, 1990.

101
B. PARLETT, The Symmetric Eigenvalue Problem, Prentice-Hall, Englewood Cliffs, NJ, 1980.

102
B. PARLETT, The construction of orthogonal eigenvectors for tight clusters by use of submatrices, Center for Pure and Applied Mathematics PAM-664, University of California, Berkeley, CA, January 1996. Submitted to SIMAX.

103
B. PARLETT AND I. DHILLON, On Fernando's method to find the most redundant equation in a tridiagonal system, Linear Algebra and Its Applications, (1996). to appear.

104
A. PETITET, Algorithmic Redistribution Methods for Block Cyclic Decompositions, PhD thesis, University of Tennessee, Knoxville, TN, 1996.

105
A. A. POLLICINI, ed., Using Toolpack Software Tools, 1989.

106
L. PRYLLI AND B. TOURANCHEAU, Efficient block cyclic data redistribution, in EUROPAR'96, vol. 1 of Lecture Notes in Computer Science, Springer-Verlag, 1996, pp. 155-165.

107
L. PRYLLI AND B. TOURANCHEAU, Efficient block cyclic array redistribution, Journal of Parallel and Distributed Computing, (1997). To appear.

108
R. SCHREIBER AND C. F. VAN LOAN, A storage efficient WY representation for products of Householder transformations, SIAM J. Sci. Stat. Comput., 10 (1989), pp. 53-57.

109
B. SMITH, W. GROPP, AND L. CURFMAN MCINNES, PETSc 2.0 users manual, Technical Report ANL-95/11, Argonne National Laboratory, Argonne, IL, 1995. (Available by anonymous ftp from ftp.mcs.anl.gov).

110
M. SNIR, S. W. OTTO, S. HUSS-LEDERMAN, D. W. WALKER, AND J. J. DONGARRA, MPI: The Complete Reference, MIT Press, Cambridge, MA, 1996.

111
SUNSOFT, The XDR Protocol Specification. Appendix A of "Network Interfaces Programmer's Guide", SunSoft, 1993.

112
E. VAN DE VELDE, Concurrent Scientific Computing, no. 16 in Texts in Applied Mathematics, Springer-Verlag, 1994.

113
R. C. WHALEY, Basic linear algebra communication subprograms: Analysis and implementation across multiple parallel architectures, Computer Science Dept. Technical Report CS-94-234, University of Tennessee, Knoxville, TN, May 1994. (Also LAPACK Working Note 73).

114
J. H. WILKINSON, The Algebraic Eigenvalue Problem, Oxford University Press, Oxford, UK, 1965.



Susan Blackford
Tue May 13 09:21:01 EDT 1997

Commercial Use

  

LAPACK and ScaLAPACK are freely available software packages provided on the World Wide Web via netlib, anonymous ftp, and http access. Thus they can be included in commercial packages (and have been). We ask only that proper credit be given to the authors.

Like all software, these packages are copyrighted. They are not trademarked; however, if modifications are made that affect the interface, functionality, or accuracy of the resulting software, the name of the routine should be changed. Any modification to our software should be noted in the modifier's documentation.

We will gladly answer questions regarding our software. If modifications are made to the software, however, it is the responsibility of the individuals/company who modified the routine to provide support.




Index

absolute error
absolute gap
accuracy
high
Active Messages
algorithmic
coherence
reliability
repeatability
algorithms, block-partitioned
alignment restrictions
AM-II, see Active Messages
angle between vectors and subspaces
arguments
BERR
description conventions
descriptor
DIAG
dimensions
FERR
global
global column index
global input
global output
global row index
IA and JA
INFO
local
local input
local output
LWORK
options
order of
RCOND
SIDE
TRANS
UPLO
work space
ARPACK
array argument descriptions
block cyclic arrays
block-column and block-row arrays
array descriptor
block
block cyclic
CTXT_
definition of
DLEN_
DTYPE_
example
LLD_
M_ and N_
MB_ and NB_
out-of-core
RSRC_ and CSRC_
auxiliary routines
auxiliary routines, index of: see Appendix A
avoiding poor performance
backward error
backward stability
componentwise
normwise
band matrix, distribution of
bandwidth
Berkeley NOW
bidiagonal form
bidiagonal matrix, distribution of
BLACS
BLACS_ABORT
BLACS_BARRIER
BLACS_EXIT
BLACS_GRIDEXIT
BLACS_GRIDINFO
BLACS_GRIDINIT
BLACS_SET
C interface
context
context handle
efficiency
Fortran 77 interface
prebuilt libraries, availability of
quick reference guide: see Appendix D
BLAS
Level 1
Level 2
Level 3
block-column distribution
block-partitioned algorithms
block-row distribution
block cyclic distribution
blocking factor, see block size
block size
size of
bug reports
checklist
mailing alias
Ctex2html_wrap_inline12068
Ctex2html_wrap_inline12072
Ctex2html_wrap_inline12076
CAPSS
checklist, high performance
chordal distance
cluster
eigenvalues
singular values
clusters of workstations
coefficient band matrix
coefficient matrix
distribution of
coefficient tridiagonal matrix
coherence
column-major order
column pivoting (QR)
commercial use
communication bandwidth
communication deadlock
communication latency
communicator, MPI
complete orthogonal factorization
computational routines
Computational Routines, index of: see Appendix A
computers, see distributed memory computers
condensed form, reduction to
condition number
estimate
context, BLACS
context handle
coprocessor
CoWs
CSRC_, see array descriptor
CTXT_, see array descriptor
data distribution
block
block cyclic
block-column
block-row
general band matrix
general tridiagonal matrix
symmetric band matrix
symmetric positive definite band matrix
symmetric positive definite tridiagonal matrix
symmetric tridiagonal matrix
DDISNA
deadlock
debugging
alignment restrictions
errata.blacs
errata.scalapack
INFO < 0
INFO > 0
synchronization points
TOOLS
workspace requirements
debugging hints
application
installation
debug level BLACS
decomposition, see data distribution
DESC_, see array descriptor
descriptor
descriptor, see array descriptor
determining the size of local arrays
diagonally dominant-like
distributed across process columns
distributed across process rows
distributed matrix
distributed matrix, partitioning of
distributed memory
distributed memory computers
distributed memory standards
distributed vector
distribution, see data distribution
DLEN_, see array descriptor
documentation
installation guide
documentation, structure
downloading instructions
driver routine
generalized symmetric definite eigenvalue problem
linear equations
linear least squares
SVD
driver routine, symmetric eigenvalue problem
driver routines
expert
simple
driver routines, index of: see Appendix A
DSTEQR2
DTYPE_, see array descriptor
E()
effective rank
efficiency
BLACS
BLAS
block-partitioned algorithms
efficiency, transportable
eigendecomposition
blocked form
symmetric
eigenvalue
error bound
GSEP
SEP
eigenvalue problem
generalized symmetric
generalized symmetric/Hermitian
eigenvalue problem, symmetric/Hermitian
eigenvalue problems
large scale
large sparse
nonsymmetric
eigenvalue problems, symmetric/Hermitian
eigenvector
error bound
GSEP
left
right
SEP
elementary Householder matrix, see Householder matrix
elementary reflector, see Householder matrix
equilibration
errata
BLACS
ScaLAPACK and PBLAS
error
absolute
analysis
backward
measurement of
matrix
scalar
subspace
vector
relative
error bounds
generalized symmetric definite eigenproblem
linear equations
linear least squares
singular value decomposition
symmetric eigenproblem
error handler, BLACS
error handler, PBLAS
error handler, PXERBLA
example program
how to execute
HPF
output, using MPICH
output, using PVM
ScaLAPACK (PDGESV)
expert driver
expert driver, see driver routines, expert
External Data Representation (XDR)
Ftex2html_wrap_inline12090
factorizations, see matrix factorizations
failures
common causes
error handling
INFO
FDDI
Fiber Distributed Data Interface (FDDI)
floating-point arithmetic
tex2html_wrap_inline17191
gradual underflow
infinity
NaN
Not-a-Number
overflow
relative machine precision
roundoff error
subnormal numbers
underflow
floating-point arithmetic, IEEE standard
flop counts, for ScaLAPACK drivers
forward error
forward stability
functionality
gang scheduler
gap
generalized eigenproblem, symmetric definite
generalized orthogonal factorization
global
matrix
software components
variables
global column index (JA)
global input
globally blocking receive
global output
global row index (IA)
glossary
GQR
granularity, of computation
grid, see process grid
GRQ
GSEP
guidelines, for high performance
Hermitian eigenvalue problem, see eigenvalue problems
Hessenberg form
reduction to
upper
heterogeneity
heterogeneous environment
heterogeneous network
heterogeneous software issues
algorithmic integrity
machine parameters (PxLAMCH)
hierarchical memory
homepage
BLACS
MPI
netlib
PBLAS
PVM
ScaLAPACK
homogeneous network
Householder matrix
complex
Householder transformation - blocked form
Householder vector
HPF
DISTRIBUTE
example program
interface LA_GELS
interface LA_GEMM
interface LA_GESV
interface LA_POSV
interface to ScaLAPACK
LA_GELS
LA_GEMM
LA_GESV
LA_POSV
PROCESSORS
ICTXT
ICTXT, see BLACS
ill-conditioned
ill-posed
incoherence
infinity
INFO
input error
installation
PxLAMCH
insufficient workspace
invariant subspaces
error bound
inverse iteration
inverting matrix, using factorization
isoefficiency
isogranularity
iterative refinement
LAPACK
latency
layout, see data distribution
lcm(P_r, P_c)
LDL^T factorization
leading dimension, see local array
linear equations
linear least squares problem
full rank
overdetermined
rank-deficient
underdetermined
linear systems, solution of
LLD_, see array descriptor
LLS
load balance
LOCc()
LOCr()
local
software components
variables
local array
determining the size of
local leading dimension
local input
locally blocking send
local output
LQ factorization
LU factorization
matrix types
LWORK
LWORK ≥ WORK(1)
LWORK query
M_, see array descriptor
machine precision
matrix factorizations
Cholesky
complete orthogonal
generalized orthogonal
generalized QR
generalized RQ
LDL^T
LQ
LU
orthogonal
QL
QR
RQ
matrix inversion
matrix of right-hand-side vectors
matrix of solution vectors
matrix partitioning
matrix sign function
MB_, see array descriptor
memory hierarchy
message-passing libraries
MPI
MPL
NX
PVM
message coprocessor
message passing, see distributed memory
MIMD
minimum norm least squares solution
minimum norm solution
MPI
communicator
homepage
reference implementation (MPICH)
MPICH
MPL
Multiple Instruction Multiple Data (MIMD)
MYCOL
MYROW
N_, see array descriptor
naming scheme
auxiliary
driver and computational
NaN
NB_, see array descriptor
NBRHS
netlib
homepage
mirror repositories
networks of workstations
nonrepeatability
nonsymmetric eigenproblem
norm
Frobenius norm
matrix
two norm
vector
Not-a-Number
notation, table of
NoWs
NPCOL, see P_c
NPROCS, see P
NPROW, see P_r
NRHS
numerical error, sources of
input error
roundoff error
NX
optimal block size, determination of
orthogonal (unitary) factorizations
orthogonal (unitary) transformation
orthogonal factorization, generalized
orthogonal matrix
out-of-core linear solvers
overdetermined system
overflow
P
P_c
P_r
P_ARPACK
parallel efficiency
parallelizing BLAS-based programs: see Appendix B
parallelizing LAPACK-based programs: see Appendix B
ParPre
partial pivoting
partitioning unit, see block size
PBLAS
converting from the BLAS to: see Appendix B
error handling
PDGEMM
PDGEMV
PSGEMM
PSGEMV
quick reference guide: see Appendix D
PCDBSV
PCDBTRF
PCDBTRS
PCDTSV
PCDTTRF
PCDTTRS
PCGBSV
PCGBTRF
PCGBTRS
PCGEBRD
PCGECON
PCGEEQU
PCGEHRD
PCGELQF
PCGELS
PCGEMR2D
PCGEQLF
PCGEQPF
PCGEQRF
PCGERFS
PCGERQF
PCGESVD
PCGESVX
PCGETRF
PCGETRI
PCGETRS
PCGGQRF
PCGGRQF
PCHEEVX
PCHEGST
PCHEGVX
PCHETRD
PCHETRF
PCHETRS
PCPBSV
PCPBTRF
PCPBTRS
PCPOCON
PCPOEQU
PCPORFS
PCPOSV
PCPOSVX
PCPOTRF
PCPOTRI
PCPOTRS
PCPTSV
PCPTTRF
PCPTTRS
PCSTEIN
PCTRCON
PCTRMR2D
PCTRRFS
PCTRTRI
PCTRTRS
PCTZRZF
PCUNGLQ
PCUNGQR
PCUNMBR
PCUNMHR
PCUNMLQ
PCUNMQR
PCUNMRZ
PCUNMTR
PDDBSV
PDDBTRF
PDDBTRS
PDDTSV
PDDTTRF
PDDTTRS
PDGBSV
PDGBTRF
PDGBTRS
PDGEBRD
PDGECON
PDGEEQU
PDGEHRD
PDGELQF
PDGELS
PDGEMR2D
PDGEQLF
PDGEQPF
PDGEQRF
PDGERFS
PDGERQF
PDGESV
PDGESVD
PDGESVX
PDGETRF
PDGETRI
PDGETRS
PDGGQRF
PDGGRQF
PDLAHQR
PDLAMCH
PDORGLQ
PDORGQR
PDORMBR
PDORMHR
PDORMLQ
PDORMQR
PDORMRZ
PDORMTR
PDPBSV
PDPBTRF
PDPBTRS
PDPOCON
PDPOEQU
PDPORFS
PDPOSV
PDPOSVX
PDPOTRF
PDPOTRI
PDPOTRS
PDPTSV
PDPTTRF
PDPTTRS
PDSTEBZ
PDSTEIN
PDSYEV
PDSYEVX
PDSYGST
PDSYGVX
PDSYR2K
PDSYTRD
PDTRCON
PDTRMR2D
PDTRRFS
PDTRTRI
PDTRTRS
PDTZRZF
peak performance
performance
diagnosing poor
evaluation of
improvement guidelines
PDLAHQR
PSGELS/PDGELS
PSGEMM/PDGEMM
PSGEMV/PDGEMV
PSGESV/PDGESV
PSGESVD/PDGESVD
PSPOSV/PDPOSV
PSSYEV/PDSYEV
PSSYEVX/PDSYEVX
recommendations
tuning
performance guidelines
PETSc
PIGEMR2D
PITRMR2D
pivoting
column (QR)
partial
poor performance
PoPCs
portability
prebuilt libraries
blacs
scalapack
process
column coordinate (MYCOL)
row coordinate (MYROW)
process column
process grid
initialization
mapping (column-major)
mapping (row-major)
mapping (user-defined)
obtaining BLACS context
P_r
P_c
shape
process mesh, see process grid
processor
process row
prototype codes
HPF interface to ScaLAPACK
matrix sign function
out-of-core linear solvers
SuperLU
PSDBSV
PSDBTRF
PSDBTRS
PSDTSV
PSDTTRF
PSDTTRS
PSGBSV
PSGBTRF
PSGBTRS
PSGEBRD
PSGECON
PSGEEQU
PSGEHRD
PSGELQF
PSGELS
PSGEMR2D
PSGEQLF
PSGEQPF
PSGEQRF
PSGERFS
PSGERQF
PSGESV
PSGESVD
PSGESVX
PSGETRF
PSGETRI
PSGETRS
PSGGQRF
PSGGRQF
PSLAHQR
PSLAMCH
PSORGLQ
PSORGQR
PSORMBR
PSORMHR
PSORMLQ
PSORMQR
PSORMRZ
PSORMTR
PSPBSV
PSPBTRF
PSPBTRS
PSPOCON
PSPOEQU
PSPORFS
PSPOSV
PSPOSVX
PSPOTRF
PSPOTRI
PSPOTRS
PSPTSV
PSPTTRF
PSPTTRS
PSSTEBZ
PSSTEIN
PSSYEV
PSSYEVX
PSSYGST
PSSYGVX
PSSYR2K
PSSYTRD
PSTRCON
PSTRMR2D
PSTRRFS
PSTRTRI
PSTRTRS
PSTZRZF
PVM
homepage
PXERBLA
PZDBSV
PZDBTRF
PZDBTRS
PZDTSV
PZDTTRF
PZDTTRS
PZGBSV
PZGBTRF
PZGBTRS
PZGEBRD
PZGECON
PZGEEQU
PZGEHRD
PZGELQF
PZGELS
PZGEMR2D
PZGEQLF
PZGEQPF
PZGEQRF
PZGERFS
PZGERQF
PZGESVD
PZGESVX
PZGETRF
PZGETRI
PZGETRS
PZGGQRF
PZGGRQF
PZHEEVX
PZHEGST
PZHEGVX
PZHETRD
PZHETRF
PZHETRS
PZPBSV
PZPBTRF
PZPBTRS
PZPOCON
PZPOEQU
PZPORFS
PZPOSV
PZPOSVX
PZPOTRF
PZPOTRI
PZPOTRS
PZPTSV
PZPTTRF
PZPTTRS
PZSTEIN
PZTRCON
PZTRMR2D
PZTRRFS
PZTRTRI
PZTRTRS
PZTZRZF
PZUNGLQ
PZUNGQR
PZUNMBR
PZUNMHR
PZUNMLQ
PZUNMQR
PZUNMRZ
PZUNMTR
QL algorithm
implicit
QL factorization
QR algorithm
implicit
QR factorization
blocked form
generalized (GQR)
with pivoting
query
quick reference guides: see Appendix D
rank
rank-deficient, see linear least squares problem
redistribution routines
PxGEMR2D
PxTRMR2D
reduction
bidiagonal
standard form
tridiagonal
upper Hessenberg
relative error
relative machine precision
reliability, see test suites
repeatability
enforcement of (BLACS_SET)
replicated across process columns
replicated across process rows
replicated and distributed
replicated vector
reporting bugs, see bug reports
right-hand-side matrix
roundoff error
RQ factorization
generalized (GRQ)
RSRC_, see array descriptor
rule of thumb
scalability
ScaLAPACK
commercial use of
converting from LAPACK to: see Appendix B
prebuilt libraries, availability of
quick reference guide: see Appendix D
scaling
Schur factorization
Schur form
Schur vectors
scope
groupings of processes
in communication
of a variable
within a process grid
scoped operation
SDISNA
SEP
simple driver, see driver routines, simple
Single Program Multiple Data (SPMD)
singular value
error bound
singular value decomposition (SVD)
singular vector
error bound
singular vectors
left
right
software hierarchy
software standards
distributed memory
solution matrix
spectral factorization
SPMD
SSTEQR2
stability
backward
forward
standard form
storage scheme
general tridiagonal
Hermitian
orthogonal or unitary matrices
symmetric
symmetric tridiagonal
triangular
submatrix access
subspaces
angle between
suggestions for reading
SuperLU, see prototype codes
support
SVD
Sylvester equation
symmetric eigenproblem
T()
t_f
t_m, see latency
t_v
terminology
test suites
thread of execution, see process
throughput, see bandwidth
TOOLS
DESCINIT
NUMROC
PxLAPRNT
SL_INIT
transportable efficiency
tridiagonal form
tridiagonal matrix, distribution of
troubleshooting
tuning
underdetermined system
underflow
unitary matrix
upper Hessenberg form
utility routines
workspace query
World Wide Web
wrong results
WWW
XDR



Susan Blackford
Tue May 13 09:21:01 EDT 1997

About this document ...

ScaLAPACK Users' Guide

This document was generated using the LaTeX2HTML translator Version 96.1-h (September 30, 1996) Copyright © 1993, 1994, 1995, 1996, Nikos Drakos, Computer Based Learning Unit, University of Leeds.

The command line arguments were:
latex2html slug.tex.

The translation was initiated by Susan Blackford on Tue May 13 09:21:01 EDT 1997



Installation

   

To ease the installation process, prebuilt ScaLAPACK libraries are available on netlib for a variety of architectures.

http://www.netlib.org/scalapack/archives/
Included with each prebuilt library archive is the make include file SLmake.inc, which records the compiler, compiler options, and related settings used to build the library. If a prebuilt library is not available for the specific architecture, the user will need to download the source code from netlib
http://www.netlib.org/scalapack/scalapack.tar.gz
and build the library as instructed in the ScaLAPACK Installation Guide [24]. Sample SLmake.inc files for various architectures are included in the distribution tar file and will require only limited modifications to customize for a specific architecture.
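Purely as an illustration of what such a make include file records (every value below is an assumption for a hypothetical Solaris build; consult the SLmake.inc shipped in the archive for your machine), an SLmake.inc typically contains settings along these lines:

```makefile
# Hypothetical SLmake.inc fragment -- illustrative values only,
# not taken from any shipped archive.
F77        = f77
F77FLAGS   = -O
ARCH       = ar
ARCHFLAGS  = cr
RANLIB     = ranlib
BLASLIB    = $(HOME)/SCALAPACK/blas_SUN4SOL2.a
BLACSLIB   = $(HOME)/SCALAPACK/blacs_MPI-SUN4SOL2-0.a
```

Customizing for a new architecture usually amounts to pointing these variables at the local compiler and at the BLAS and BLACS libraries installed on the machine.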

A comprehensive ScaLAPACK Installation Guide (LAPACK Working Note 93) [24]  is distributed with the complete package and contains descriptions of the testing programs, as well as detailed installation instructions.

A BLAS library and BLACS library must have been installed or be available on the architecture on which the user is planning to run ScaLAPACK. Users who plan to run ScaLAPACK on top of PVM [68] or MPI [64, 110] must also have PVM and/or MPI available.

If a vendor-optimized version of the BLAS is not available, one can obtain a Fortran77 reference implementation from the blas directory on netlib. If a BLACS library is not available, prebuilt BLACS libraries are available in the blacs/archives directory on netlib for a variety of architecture and message-passing library combinations. Otherwise, BLACS implementations for the Intel series, IBM SP series, PVM, and MPI are available from the blacs directory on netlib. PVM is available from the pvm3 directory on netlib, and a reference implementation of MPI is also available. Refer to the following URLs:

http://www.netlib.org/blas/index.html
http://www.netlib.org/blacs/index.html
http://www.netlib.org/blacs/archives/
http://www.netlib.org/pvm3/index.html
http://www.netlib.org/mpi/index.html

Comprehensive test suites for the BLAS, BLACS, and PVM are provided on netlib, and it is highly recommended that these test suites be run to ensure proper installation of the packages.

If the user will be using PVM, it is important to note that only PVM version 3.3 or later is supported with the BLACS [113, 52]. Because of major changes in PVM and the resulting changes required in the BLACS, earlier versions of PVM are not supported. Users who have a previous release of PVM must obtain version 3.3 or later to run the PVM BLACS and thus ScaLAPACK.




Contents




Documentation

  

This users' guide provides an informal introduction to the design of the package, a detailed description of its contents, and a reference manual for the leading comments of the source code. A brief discussion of the contents of each chapter, as well as guidance for novice and expert users, can be found in the Suggestions for Reading at the beginning of this book. A List of Notation and Glossary are also provided.

On-line manpages (troff files) for ScaLAPACK routines, as well as for LAPACK and the BLAS, are available on netlib. These files are automatically generated at the time of each release. For more information, see the manpages.tar.gz entry on the scalapack index on netlib. A comprehensive Installation Guide for ScaLAPACK [24] is also available; refer to section 1.7 for further details.

Using a World Wide Web browser such as Netscape, one can access the ScaLAPACK homepage via the URL:

http://www.netlib.org/scalapack/index.html
This homepage contains hyperlinks for additional documentation as well as the ability to view individual ScaLAPACK driver and computational routines.




Support

 

ScaLAPACK has been thoroughly tested before release, on many different types of computers and configurations. The ScaLAPACK project supports the package in the sense that reports of errors or poor performance will gain immediate attention from the developers. Refer to section 7 for a list of questions asked when the user submits a bug report. Such reports -- and also descriptions of interesting applications and other comments -- should be sent to

ScaLAPACK Project
c/o J. J. Dongarra
Computer Science Department
University of Tennessee
Knoxville, TN 37996-1301
USA
E-mail: scalapack@cs.utk.edu



Errata

A list of known problems, bugs, and compiler errors for ScaLAPACK and the PBLAS, as well as an errata list for this guide, is maintained on netlib. For a copy of this report, refer to the URL

http://www.netlib.org/scalapack/errata.scalapack

Similarly, an errata file for the BLACS can be obtained at the URL:

http://www.netlib.org/blacs/errata.blacs

A ScaLAPACK FAQ (Frequently Asked Questions) file is also maintained via the URL

http://www.netlib.org/scalapack/faq.html



Related Projects

As mentioned in the Preface, the ScaLAPACK library discussed in this Users' Guide is only one facet of the ScaLAPACK project. A variety of other software is also available in the scalapack directory on netlib.

http://www.netlib.org/scalapack/index.html

P_ARPACK (Parallel ARPACK) is an extension of the ARPACK software package used for solving large-scale eigenvalue problems on distributed-memory parallel architectures. The message-passing layers currently supported are BLACS and MPI. Serial ARPACK must be retrieved and installed prior to installing P_ARPACK. All core ARPACK routines are available in single-precision real, double-precision real, single-precision complex, and double-precision complex. An extensive set of driver routines is available for ARPACK, and a subset of these is available for parallel computation with P_ARPACK. These may be used as templates that are easily modified to construct a problem-specific parallel interface to P_ARPACK.

CAPSS is a fully parallel package to solve a sparse linear system of the form Ax=b on a message-passing multiprocessor; the matrix A is assumed to be symmetric positive definite and associated with a mesh in two or three dimensions. This version has been tested on the Intel Paragon and makes possible efficient parallel solution for several right-hand-side vectors.

ParPre is a package of parallel preconditioners for general sparse matrices. It includes classical point/block relaxation methods, generalized block SSOR preconditioners (this includes ILU), and domain decomposition methods (additive and multiplicative Schwarz, Schur complement). The communication protocol is MPI, and low-level routines from the PETSc [109] library are used, but installing the complete PETSc library is not necessary.

Prototype codes are provided for out-of-core solvers [55] for LU, Cholesky, and QR, the matrix sign function for eigenproblems [14, 13, 12], an HPF interface to a subset of ScaLAPACK routines, and SuperLU [96, 39, 41].

http://www.netlib.org/scalapack/prototype/
These software contributions are classified as prototype codes because they are still under development and their calling sequences may change. They have been tested only on a limited number of architectures and have not been rigorously tested on all of the architectures to which the ScaLAPACK library is portable.

Refer to Appendix C.2 for a brief description of the HPF interface to ScaLAPACK, as well as an example program.




Contents of the CD-ROM

Each Users' Guide includes a CD-ROM containing

  • the HTML version of the ScaLAPACK Users' Guide,
  • the source code for the ScaLAPACK, PBLAS, BLACS, and LAPACK packages, including testing and timing programs,
  • prebuilt ScaLAPACK, BLACS, and LAPACK libraries for a variety of architectures,
  • example programs, and
  • the full set of LAPACK Working Notes in postscript and pdf format.

Instructions for reading and traversing the directory structure on the CD-ROM are provided in the booklet packaged with the CD-ROM. A readme file is provided in each directory on the CD-ROM. The directory structure on the CD-ROM mimics the scalapack, blacs, and lapack directory contents on netlib.




Getting Started with ScaLAPACK

 

This chapter provides the background information to enable users to call ScaLAPACK software, together with a simple example program. The chapter begins by presenting a set of instructions to execute a ScaLAPACK example program, followed by the source code of Example Program #1. A detailed explanation of the example program is given, and finally the necessary steps to call a ScaLAPACK routine are described by referencing the example program.

For an explanation of the terminology used within this chapter, please refer to the List of Notation and/or Glossary at the beginning of this book.






How to Run an Example Program Using MPI

 

This section presents the instructions for installing ScaLAPACK and running a simple example program in parallel. The example assumes that the underlying system is a Sun Solaris system; the problem is run on one physical processor, using six processes that do message passing. The example uses MPI as the message-passing layer. The version of MPI used in this example is MPICH (version 1.0.13), and we assume the user has this version installed. MPICH is a freely available, portable implementation of MPI. If MPICH is not installed, refer to http://www.netlib.org/mpi/. If the example is run on a different architecture, the user will have to make a number of changes. In particular, the prebuilt libraries will have to be changed. If prebuilt libraries do not exist for the specific architecture, the user will need to download the source
(http://www.netlib.org/scalapack/scalapack.tar.gz) and build them.

To use ScaLAPACK for the first time (on a network of workstations using MPI), one should

  1. Make a directory for this testing.
          mkdir SCALAPACK
          cd SCALAPACK
  2. Download the ScaLAPACK example program (about 7 KB) into directory SCALAPACK.
    http://www.netlib.org/scalapack/examples/example1.f
  3. Download the prebuilt ScaLAPACK library (about 3MB) for the specific architecture into directory SCALAPACK (e.g., SUN4SOL2) and uncompress.
    http://www.netlib.org/scalapack/archives/scalapack_SUN4SOL2.tar.gz
          gunzip scalapack_SUN4SOL2.tar.gz
          tar xvf scalapack_SUN4SOL2.tar
          rm scalapack_SUN4SOL2.tar
    (Note that this tar file contains the library archive and the SLmake.inc used to build the library. Details of compiler flags, etc. can be found in this make include file.)
  4. Download the prebuilt BLACS library (about 60 KB) for the architecture (e.g., SUN4SOL2) and message-passing layer (e.g., MPICH), and uncompress into directory SCALAPACK.
    http://www.netlib.org/blacs/archives/blacs_MPI-SUN4SOL2-0.tar.gz
          gunzip blacs_MPI-SUN4SOL2-0.tar.gz
          tar xvf blacs_MPI-SUN4SOL2-0.tar
          rm blacs_MPI-SUN4SOL2-0.tar
    (Note that this tar file contains the library archive(s) and the Bmake.inc used to build the library. Details of compiler flags, etc. can be found in this make include file.)
  5. Find the optimized BLAS library on the specific architecture.

    If not available, download reference implementation (about 1 MB) into directory SCALAPACK/BLAS, compile, and build the library.

            mkdir BLAS
            cd BLAS
    Download http://www.netlib.org/blas/blas.shar.
            sh blas.shar
            f77 -O -f -c *.f
            ar cr ../blas_SUN4SOL2.a *.o
            cd ..
    (Note that this reference implementation of the BLAS will not deliver high performance.)
  6. Compile and link to prebuilt libraries.

        sun4sol2> f77 -f -o example1 example1.f scalapack_SUN4SOL2.a \
                  blacsF77init_MPI-SUN4SOL2-0.a blacs_MPI-SUN4SOL2-0.a \
                  blacsF77init_MPI-SUN4SOL2-0.a blas_SUN4SOL2.a \
                  $MPI_ROOT/lib/solaris/ch_p4/libmpi.a -lnsl -lsocket
        example1.f:
         MAIN example1:
               matinit:
    Note that the -lnsl -lsocket libraries are specific to Solaris. Refer to the SLmake.inc for details. MPICH can be found in the directory $MPI_ROOT. On our system we did
        sun4sol2> setenv MPI_ROOT /src/icl/MPI/mpich
  7. Run the ScaLAPACK example program.

    To run an MPI program with MPICH, one will need to add $MPI_ROOT/bin to the path. On our system we did

        sun4sol2> set path = ($path $MPI_ROOT/bin)

    To run the example:

        sun4sol2> mpirun -np 6 example1

    The example runs on six processes and prints out a statement that ``the solution is correct'' or ``the solution is incorrect''.




Source Code for Example Program #1

This program is also available in the scalapack directory on netlib
(http://www.netlib.org/scalapack/examples/example1.f).

      PROGRAM EXAMPLE1
*
*     Example Program solving Ax=b via ScaLAPACK routine PDGESV
*
*     .. Parameters ..
      INTEGER            DLEN_, IA, JA, IB, JB, M, N, MB, NB, RSRC,
     $                   CSRC, MXLLDA, MXLLDB, NRHS, NBRHS, NOUT,
     $                   MXLOCR, MXLOCC, MXRHSC
      PARAMETER          ( DLEN_ = 9, IA = 1, JA = 1, IB = 1, JB = 1,
     $                   M = 9, N = 9, MB = 2, NB = 2, RSRC = 0,
     $                   CSRC = 0, MXLLDA = 5, MXLLDB = 5, NRHS = 1,
     $                   NBRHS = 1, NOUT = 6, MXLOCR = 5, MXLOCC = 4,
     $                   MXRHSC = 1 )
      DOUBLE PRECISION   ONE
      PARAMETER          ( ONE = 1.0D+0 )
*     ..
*     .. Local Scalars ..
      INTEGER            ICTXT, INFO, MYCOL, MYROW, NPCOL, NPROW
      DOUBLE PRECISION   ANORM, BNORM, EPS, RESID, XNORM
*     ..
*     .. Local Arrays ..
      INTEGER            DESCA( DLEN_ ), DESCB( DLEN_ ),
     $                   IPIV( MXLOCR+NB )
      DOUBLE PRECISION   A( MXLLDA, MXLOCC ), A0( MXLLDA, MXLOCC ),
     $                   B( MXLLDB, MXRHSC ), B0( MXLLDB, MXRHSC ),
     $                   WORK( MXLOCR )
*     ..
*     .. External Functions ..
      DOUBLE PRECISION   PDLAMCH, PDLANGE
      EXTERNAL           PDLAMCH, PDLANGE
*     ..
*     .. External Subroutines ..
      EXTERNAL           BLACS_EXIT, BLACS_GRIDEXIT, BLACS_GRIDINFO,
     $                   DESCINIT, MATINIT, PDGEMM, PDGESV, PDLACPY,
     $                   SL_INIT
*     ..
*     .. Intrinsic Functions ..
      INTRINSIC          DBLE
*     ..
*     .. Data statements ..
      DATA               NPROW / 2 / , NPCOL / 3 /
*     ..
*     .. Executable Statements ..
*
*     INITIALIZE THE PROCESS GRID
*
      CALL SL_INIT( ICTXT, NPROW, NPCOL )
      CALL BLACS_GRIDINFO( ICTXT, NPROW, NPCOL, MYROW, MYCOL )
*
*     If I'm not in the process grid, go to the end of the program
*
      IF( MYROW.EQ.-1 )
     $   GO TO 10
*
*     DISTRIBUTE THE MATRIX ON THE PROCESS GRID
*     Initialize the array descriptors for the matrices A and B
*
      CALL DESCINIT( DESCA, M, N, MB, NB, RSRC, CSRC, ICTXT, MXLLDA,
     $               INFO )
      CALL DESCINIT( DESCB, N, NRHS, NB, NBRHS, RSRC, CSRC, ICTXT,
     $               MXLLDB, INFO )
*
*     Generate matrices A and B and distribute to the process grid
*
      CALL MATINIT( A, DESCA, B, DESCB )
*
*     Make a copy of A and B for checking purposes
*
      CALL PDLACPY( 'All', N, N, A, 1, 1, DESCA, A0, 1, 1, DESCA )
      CALL PDLACPY( 'All', N, NRHS, B, 1, 1, DESCB, B0, 1, 1, DESCB )
*
*     CALL THE SCALAPACK ROUTINE
*     Solve the linear system A * X = B
*
      CALL PDGESV( N, NRHS, A, IA, JA, DESCA, IPIV, B, IB, JB, DESCB,
     $             INFO )
*
      IF( MYROW.EQ.0 .AND. MYCOL.EQ.0 ) THEN
         WRITE( NOUT, FMT = 9999 )
         WRITE( NOUT, FMT = 9998 )M, N, NB
         WRITE( NOUT, FMT = 9997 )NPROW*NPCOL, NPROW, NPCOL
         WRITE( NOUT, FMT = 9996 )INFO
      END IF
*
*     Compute residual ||A * X  - B|| / ( ||X|| * ||A|| * eps * N )
*
      EPS = PDLAMCH( ICTXT, 'Epsilon' )
      ANORM = PDLANGE( 'I', N, N, A, 1, 1, DESCA, WORK )
      BNORM = PDLANGE( 'I', N, NRHS, B, 1, 1, DESCB, WORK )
      CALL PDGEMM( 'N', 'N', N, NRHS, N, ONE, A0, 1, 1, DESCA, B, 1, 1,
     $             DESCB, -ONE, B0, 1, 1, DESCB )
      XNORM = PDLANGE( 'I', N, NRHS, B0, 1, 1, DESCB, WORK )
      RESID = XNORM / ( ANORM*BNORM*EPS*DBLE( N ) )
*
      IF( MYROW.EQ.0 .AND. MYCOL.EQ.0 ) THEN
         IF( RESID.LT.10.0D+0 ) THEN
            WRITE( NOUT, FMT = 9995 )
            WRITE( NOUT, FMT = 9993 )RESID
         ELSE
            WRITE( NOUT, FMT = 9994 )
            WRITE( NOUT, FMT = 9993 )RESID
         END IF
      END IF
*
*     RELEASE THE PROCESS GRID
*     Free the BLACS context
*
      CALL BLACS_GRIDEXIT( ICTXT )
   10 CONTINUE
*
*     Exit the BLACS
*
      CALL BLACS_EXIT( 0 )
*
 9999 FORMAT( / 'ScaLAPACK Example Program #1 -- May 1, 1997' )
 9998 FORMAT( / 'Solving Ax=b where A is a ', I3, ' by ', I3,
     $      ' matrix with a block size of ', I3 )
 9997 FORMAT( 'Running on ', I3, ' processes, where the process grid',
     $      ' is ', I3, ' by ', I3 )
 9996 FORMAT( / 'INFO code returned by PDGESV = ', I3 )
 9995 FORMAT( /
     $   'According to the normalized residual the solution is correct.'
     $       )
 9994 FORMAT( /
     $ 'According to the normalized residual the solution is incorrect.'
     $       )
 9993 FORMAT( / '||A*x - b|| / ( ||x||*||A||*eps*N ) = ', 1P, E16.8 )
      STOP
      END
      SUBROUTINE MATINIT( AA, DESCA, B, DESCB )
*
*     MATINIT generates and distributes matrices A and B (depicted in
*     figures 2.5 and 2.6) to a 2 x 3 process grid
*
*     .. Array Arguments ..
      INTEGER            DESCA( * ), DESCB( * )
      DOUBLE PRECISION   AA( * ), B( * )
*     ..
*     .. Parameters ..
      INTEGER            CTXT_, LLD_
      PARAMETER          ( CTXT_ = 2, LLD_ = 9 )
*     ..
*     .. Local Scalars ..
      INTEGER            ICTXT, MXLLDA, MYCOL, MYROW, NPCOL, NPROW
      DOUBLE PRECISION   A, C, K, L, P, S
*     ..
*     .. External Subroutines ..
      EXTERNAL           BLACS_GRIDINFO
*     ..
*     .. Executable Statements ..
*
      ICTXT = DESCA( CTXT_ )
      CALL BLACS_GRIDINFO( ICTXT, NPROW, NPCOL, MYROW, MYCOL )
*
      S = 19.0D0
      C = 3.0D0
      A = 1.0D0
      L = 12.0D0
      P = 16.0D0
      K = 11.0D0
*
      MXLLDA = DESCA( LLD_ )
*
      IF( MYROW.EQ.0 .AND. MYCOL.EQ.0 ) THEN
         AA( 1 ) = S
         AA( 2 ) = -S
         AA( 3 ) = -S
         AA( 4 ) = -S
         AA( 5 ) = -S
         AA( 1+MXLLDA ) = C
         AA( 2+MXLLDA ) = C
         AA( 3+MXLLDA ) = -C
         AA( 4+MXLLDA ) = -C
         AA( 5+MXLLDA ) = -C
         AA( 1+2*MXLLDA ) = A
         AA( 2+2*MXLLDA ) = A
         AA( 3+2*MXLLDA ) = A
         AA( 4+2*MXLLDA ) = A
         AA( 5+2*MXLLDA ) = -A
         AA( 1+3*MXLLDA ) = C
         AA( 2+3*MXLLDA ) = C
         AA( 3+3*MXLLDA ) = C
         AA( 4+3*MXLLDA ) = C
         AA( 5+3*MXLLDA ) = -C
         B( 1 ) = 0.0D0
         B( 2 ) = 0.0D0
         B( 3 ) = 0.0D0
         B( 4 ) = 0.0D0
         B( 5 ) = 0.0D0
      ELSE IF( MYROW.EQ.0 .AND. MYCOL.EQ.1 ) THEN
         AA( 1 ) = A
         AA( 2 ) = A
         AA( 3 ) = -A
         AA( 4 ) = -A
         AA( 5 ) = -A
         AA( 1+MXLLDA ) = L
         AA( 2+MXLLDA ) = L
         AA( 3+MXLLDA ) = -L
         AA( 4+MXLLDA ) = -L
         AA( 5+MXLLDA ) = -L
         AA( 1+2*MXLLDA ) = K
         AA( 2+2*MXLLDA ) = K
         AA( 3+2*MXLLDA ) = K
         AA( 4+2*MXLLDA ) = K
         AA( 5+2*MXLLDA ) = K
      ELSE IF( MYROW.EQ.0 .AND. MYCOL.EQ.2 ) THEN
         AA( 1 ) = A
         AA( 2 ) = A
         AA( 3 ) = A
         AA( 4 ) = -A
         AA( 5 ) = -A
         AA( 1+MXLLDA ) = P
         AA( 2+MXLLDA ) = P
         AA( 3+MXLLDA ) = P
         AA( 4+MXLLDA ) = P
         AA( 5+MXLLDA ) = -P
      ELSE IF( MYROW.EQ.1 .AND. MYCOL.EQ.0 ) THEN
         AA( 1 ) = -S
         AA( 2 ) = -S
         AA( 3 ) = -S
         AA( 4 ) = -S
         AA( 1+MXLLDA ) = -C
         AA( 2+MXLLDA ) = -C
         AA( 3+MXLLDA ) = -C
         AA( 4+MXLLDA ) = C
         AA( 1+2*MXLLDA ) = A
         AA( 2+2*MXLLDA ) = A
         AA( 3+2*MXLLDA ) = A
         AA( 4+2*MXLLDA ) = -A
         AA( 1+3*MXLLDA ) = C
         AA( 2+3*MXLLDA ) = C
         AA( 3+3*MXLLDA ) = C
         AA( 4+3*MXLLDA ) = C
         B( 1 ) = 1.0D0
         B( 2 ) = 0.0D0
         B( 3 ) = 0.0D0
         B( 4 ) = 0.0D0
      ELSE IF( MYROW.EQ.1 .AND. MYCOL.EQ.1 ) THEN
         AA( 1 ) = A
         AA( 2 ) = -A
         AA( 3 ) = -A
         AA( 4 ) = -A
         AA( 1+MXLLDA ) = L
         AA( 2+MXLLDA ) = L
         AA( 3+MXLLDA ) = -L
         AA( 4+MXLLDA ) = -L
         AA( 1+2*MXLLDA ) = K
         AA( 2+2*MXLLDA ) = K
         AA( 3+2*MXLLDA ) = K
         AA( 4+2*MXLLDA ) = K
      ELSE IF( MYROW.EQ.1 .AND. MYCOL.EQ.2 ) THEN
         AA( 1 ) = A
         AA( 2 ) = A
         AA( 3 ) = -A
         AA( 4 ) = -A
         AA( 1+MXLLDA ) = P
         AA( 2+MXLLDA ) = P
         AA( 3+MXLLDA ) = -P
         AA( 4+MXLLDA ) = -P
      END IF
      RETURN
      END
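For readers who want to check the arithmetic without a parallel machine, the normalized-residual test at the end of Example Program #1 can be reproduced serially. The following Python sketch is an illustration only (it is not part of the ScaLAPACK distribution, and the helper names are invented for this example): it solves a small dense system by Gaussian elimination with partial pivoting and applies the same acceptance criterion, a normalized residual below 10.

```python
# Serial sketch of the acceptance test used by Example Program #1:
# ||A*x - b|| / (||A|| * ||x|| * eps * n) should be O(1) for a stable solve.
import sys

def solve(A, b):
    """Gaussian elimination with partial pivoting (A is n x n, lists of lists)."""
    n = len(A)
    A = [row[:] for row in A]          # work on copies, as PDGESV overwrites A
    b = b[:]
    for k in range(n):
        p = max(range(k, n), key=lambda r: abs(A[r][k]))   # pivot row
        A[k], A[p] = A[p], A[k]
        b[k], b[p] = b[p], b[k]
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
            b[i] -= m * b[k]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):     # back substitution
        s = b[i] - sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = s / A[i][i]
    return x

def inf_norm_mat(A):
    return max(sum(abs(v) for v in row) for row in A)

def inf_norm_vec(v):
    return max(abs(e) for e in v)

def normalized_residual(A, x, b):
    n = len(A)
    r = [sum(A[i][j] * x[j] for j in range(n)) - b[i] for i in range(n)]
    eps = sys.float_info.epsilon
    return inf_norm_vec(r) / (inf_norm_mat(A) * inf_norm_vec(x) * eps * n)

A = [[4.0, 1.0, 0.0], [1.0, 4.0, 1.0], [0.0, 1.0, 4.0]]
b = [1.0, 0.0, 0.0]
x = solve(A, b)
print(normalized_residual(A, x, b) < 10.0)   # the test the example applies
```

The Fortran example applies exactly this criterion (RESID < 10) after recomputing the residual with PDGEMM against saved copies of A and B.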




Details of Example Program #1

   

This example program demonstrates the basic requirements to call a ScaLAPACK routine -- initializing the process grid, assigning the matrix to the processes, calling the ScaLAPACK routine, and releasing the process grid. For further details on each of these steps, please refer to section 2.4.

This example program solves the 9 × 9 system of linear equations

    A x = b

(the coefficient matrix A is the 9 × 9 matrix shown in figure 2.1) using the ScaLAPACK driver routine PDGESV. The ScaLAPACK routine PDGESV solves a system of linear equations AX = B, where the coefficient matrix (denoted by A) and the right-hand-side matrix (denoted by B) are real, general distributed matrices. The coefficient matrix A is distributed as depicted below, and for simplicity, we shall solve the system for one right-hand side (NRHS=1); that is, the matrix B is a vector. The third element of the matrix B is equal to 1, and all other elements are equal to 0. Solving this system of equations yields the solution vector X.

Let us assume that the matrix A is partitioned and distributed as denoted in figure 4.6; that is, we have chosen the row and column block sizes as MB=NB=2, and the matrix is distributed on a 2 × 3 process grid (P_r = 2, P_c = 3). The partitioning and distribution of our example matrix A is represented in figures 2.1 and 2.2, where, to aid visualization, we use the notation s=19, c=3, a=1, l=12, p=16, and k=11.

Figure 2.1: Partitioning of global matrix A (s=19; c=3; a=1; l=12; p=16; k=11)

Figure 2.2: Mapping of matrix A onto process grid (P_r = 2, P_c = 3). For example, note that process (0,0) contains a local array of size A(5,4).

The partitioning and distribution of our example matrix B are demonstrated in figure 2.3.

Figure 2.3: Mapping of matrix B onto process grid (P_r = 2, P_c = 3)

Note that matrix B is distributed only in column 0 of the process grid. All other columns in the process grid possess an empty local portion of the matrix B.
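The two-dimensional block-cyclic distribution just described can be sketched in a few lines of Python. This is an illustration only, not ScaLAPACK code; the `owner` helper is hypothetical, and the formula assumes the standard block-cyclic mapping with the first matrix entry on process (RSRC, CSRC) = (0, 0), as in this example.

```python
# Illustrative sketch (not part of ScaLAPACK): which process in a PR x PC grid
# owns global entry (i, j) of a matrix distributed block-cyclically with row
# and column block sizes MB and NB, first entry on process (RSRC, CSRC)?
def owner(i, j, mb=2, nb=2, prow=2, pcol=3, rsrc=0, csrc=0):
    """Return the (process row, process column) owning global entry (i, j), 1-based."""
    prw = (rsrc + (i - 1) // mb) % prow   # block index of row i, wrapped over process rows
    pcl = (csrc + (j - 1) // nb) % pcol   # block index of column j, wrapped over process columns
    return prw, pcl

# Entry (1, 1) of A lives on process (0, 0); every entry of the 9 x 1
# right-hand side B (column index j = 1) lies in process column 0.
print(owner(1, 1))                             # (0, 0)
print({owner(i, 1)[1] for i in range(1, 10)})  # {0}
```

This reproduces the statement above: B, having a single column, occupies only column 0 of the process grid.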

On exit from PDGESV, process (0,0) contains (in the global view) the global vector X and (in the local view) its local portion of the solution in the array B, and process (1,0) contains (in the global view) the global vector X and (in the local view) the remaining local portion of X in its local array B.

The normalized residual check

    ||A*x - b|| / ( ||x|| * ||A|| * eps * N )

is performed on the solution to verify the accuracy of the results.

For more information on the BLACS routines called in this program, please refer to section 2.4, Appendix D.3, [54], and the BLACS homepage (http://www.netlib.org/blacs/index.html). Further details of the matrix distribution and storage scheme can be found in section 2.3.2, figure 4.6, and table 4.8. Complete details on matrix distribution can be found in Chapter 4 and details of the array descriptors can be found in section 4.3.2. For a more flexible and memory efficient example program, please refer to Appendix C.1.






Simplifying Assumptions Used in Example Program

Several simplifying assumptions and/or restrictions have been made in this example program in order to present the most basic example for the user:

  • We have chosen a small block size, MB=NB=2; however, this should not be regarded as a typical choice of block size in a user's application. For best performance, a choice of MB=NB=32 or MB=NB=64 is more suitable. Refer to Chapter 5 for further details.
  • A simplistic subroutine MATINIT is used to assign matrices A and B to the process grid. Note that this subroutine hardcodes the local arrays on each process and does not perform communication. It is not a ScaLAPACK routine and is provided only for the purposes of this example program.
  • We assume RSRC=CSRC=0, and thus both matrices A and B are distributed across the process grid starting with process (0,0). In general, however, any process in the current process grid can be assigned to receive the first element of the distributed matrix.
  • We have set the local leading dimension of local array A and the local leading dimension of local array B to be the same over all process rows in the process grid. The variable MXLLDA is equal to the maximum local leading dimension for array A (denoted LLD_A) over all process rows. Likewise, variable MXLLDB is the maximum local leading dimension for array B (denoted LLD_B) over all process rows. In general, however, the local leading dimension of the local array can differ from process to process in the process grid.
  • The system is solved by using the entire matrix A, as opposed to a submatrix of A, so the global indices, denoted by IA, JA, IB, and JB, into the matrix are equal to 1. Refer to figure 4.7 in section 4.3.5 for more information on the representation of global addressing into a distributed submatrix.
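The quantities MXLLDA and MXLOCR/MXLOCC discussed above are what the ScaLAPACK TOOLS routine NUMROC computes. The following Python function is an illustrative transcription of NUMROC's logic (an assumption for demonstration, not the ScaLAPACK source): it counts how many rows or columns of an n-element dimension, distributed in blocks of size nb over nprocs processes, land on process coordinate iproc.

```python
# Sketch of the block-cyclic counting logic behind ScaLAPACK's NUMROC.
def numroc(n, nb, iproc, isrcproc, nprocs):
    """Rows/columns of an n-long dimension owned by process coordinate iproc."""
    mydist = (nprocs + iproc - isrcproc) % nprocs  # distance from the source process
    nblocks = n // nb                              # number of full blocks
    num = (nblocks // nprocs) * nb                 # whole rounds of full blocks
    extrablks = nblocks % nprocs                   # leftover full blocks
    if mydist < extrablks:
        num += nb                                  # this process gets one extra full block
    elif mydist == extrablks:
        num += n % nb                              # this process gets the trailing partial block
    return num

# For the 9 x 9 example with MB = NB = 2 on a 2 x 3 grid:
# process row 0 owns 5 rows, process column 0 owns 4 columns,
# matching the local array A(5,4) on process (0,0) in figure 2.2.
print(numroc(9, 2, 0, 0, 2), numroc(9, 2, 0, 0, 3))  # 5 4
```

Taking the maximum of `numroc` over all process rows gives MXLLDA (here max(5, 4) = 5), which is why MXLLDA = 5 in the example program.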



Notation Used in Example Program

 

The following is a list of notational variables and definitions specific to Example Program #1. A complete List of Notation can be found at the beginning of this book.

 
Variable  Definition

CSRC      (global) Process column over which the first column of the matrix is distributed.
DESCA     (global and local) Array descriptor for matrix A.
DESCB     (global and local) Array descriptor for matrix B.
ICTXT     (global) BLACS context associated with a process grid.
M         (global) Number of rows in the global matrix A.
MB        (global) Row block size for the matrix A.
MXLLDA    (global) Maximum local leading dimension of the array A.
MXLLDB    (global) Maximum local leading dimension of the array B.
MXLOCC    (global) Maximum number of columns of the matrix A owned by any process column.
MXLOCR    (global) Maximum number of rows of the matrix A owned by any process row.
MXRHSC    (global) Maximum number of columns of the matrix B owned by any process column.
MYCOL     (local) Calling process's column coordinate in the process grid.
MYROW     (local) Calling process's row coordinate in the process grid.
N         (global) Number of columns in the global matrix A, and the number of rows of the global solution matrix B.
NB        (global) Column block size for the matrix A, and the row block size for the matrix B.
NBRHS     (global) Column block size for the global solution matrix B.
NPCOL     (global) Number of columns in the process grid.
NPROW     (global) Number of rows in the process grid.
NRHS      (global) Number of columns in the global solution matrix B.
RSRC      (global) Process row over which the first row of the matrix is distributed.




Output of Example Program #1 Using MPI

When this example program is executed on a Sun Solaris architecture using MPICH (version 1.0.13) and the MPI BLACS, the following output is received:

sun4sol2> f77 -o example1 example1.f scalapack_SUN4SOL2.a \
          blacsF77init_MPI-SUN4SOL2-0.a blacs_MPI-SUN4SOL2-0.a \
          blacsF77init_MPI-SUN4SOL2-0.a blas_SUN4SOL2.a \
          $MPI_ROOT/lib/solaris/ch_p4/libmpi.a -lnsl -lsocket
example1.f:
 MAIN example1:
        matinit:
sun4sol2> mpirun -np 6 example1
 
ScaLAPACK Example Program #1 -- May 1, 1997
 
Solving Ax=b where A is a   9 by   9 matrix with a block size of   2
Running on   6 processes, where the process grid is   2 by   3
 
INFO code returned by PDGESV =   0
 
According to the normalized residual the solution is correct.
 
||A*x - b|| / ( ||x||*||A||*eps*N ) =   0.00000000E+00
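The normalized residual printed in the last line can be reproduced conceptually with NumPy. This is an illustration of the metric only (the example program computes it with ScaLAPACK routines on the distributed matrices); a value of order one or less indicates a correct solution.

```python
import numpy as np

# Conceptual sketch of the reported check  ||A*x - b|| / (||x|| * ||A|| * eps * N)
# on a small random system solved with NumPy rather than with PDGESV.
rng = np.random.default_rng(0)
n = 9
a = rng.standard_normal((n, n))
b = rng.standard_normal(n)
x = np.linalg.solve(a, b)                  # stands in for the distributed solve

eps = np.finfo(float).eps                  # machine epsilon for double precision
resid = np.linalg.norm(a @ x - b) / (np.linalg.norm(x) * np.linalg.norm(a) * eps * n)
print(resid < 1.0)                         # True: the solution is correct
```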




Output of Example Program #1 Using PVM

When this example program is executed on a SUN4 architecture using PVM (version 3.3.11) and the PVM BLACS, the following output is received:

sun4> f77 -o example1 example1.f scalapack_SUN4.a \
      blacs_PVM-SUN4-0.a blas_SUN4.a $PVM_ROOT/lib/SUN4/libpvm3.a
example1.f:
 MAIN example1:
        matinit:
sun4> cp example1 $HOME/pvm3/bin/SUN4/
sun4> cd $HOME/pvm3/bin/SUN4/
sun4> pvm
pvm> quit
 
pvmd still running.
sun4> example1
File 'blacs_setup.dat' not found.  Spawning processes to current
configuration.
Enter the name of the executable to run: example1
Spawning 5 more copies of example1
Spawning process 'example1' to host sun4
[t40003] BEGIN
Spawning process 'example1' to host sun4
[t40004] BEGIN
Spawning process 'example1' to host sun4
[t40005] BEGIN
Spawning process 'example1' to host sun4
[t40006] BEGIN
Spawning process 'example1' to host sun4
[t40007] BEGIN
 
ScaLAPACK Example Program #1 -- May 1, 1997
 
Solving Ax=b where A is a   9 by   9 matrix with a block size of   2
Running on   6 processes, where the process grid is   2 by   3
 
INFO code returned by PDGESV =   0
 
According to the normalized residual the solution is correct.
 
||A*x - b|| / ( ||x||*||A||*eps*N ) =   0.00000000E+00
sun4> pvm
pvmd already running.
pvm> halt
sun4>




Four Basic Steps Required to Call a ScaLAPACK Routine

 

Four basic steps are required to call a ScaLAPACK routine.

  1. Initialize the process grid
  2. Distribute the matrix on the process grid
  3. Call ScaLAPACK routine
  4. Release the process grid

Each of these steps is detailed below. The example program in section 2.3 illustrates these basic requirements. Refer to section 2.3.2 for an explanation of notational variables.

For more information on the BLACS routines called in this program, and more specifically their calling sequences, please refer to Appendix D.3, [54], and the BLACS homepage 
(http://www.netlib.org/blacs/index.html). Further details of the matrix distribution and storage scheme can be found in Chapter 4 and section 4.3.2.






Initialize the Process Grid

 

A call to the ScaLAPACK TOOLS routine SL_INIT initializes the process grid. This routine initializes a P_r × P_c (denoted NPROW × NPCOL in the source code) process grid by using a row-major ordering of the processes, and obtains a default system context. For more information on contexts, refer to section 4.1.2 or [54].

The user can then query the process grid to identify each process's coordinates (MYROW, MYCOL) via a call to BLACS_GRIDINFO.

A typical code fragment (as obtained from the example program in section 2.3) to accomplish this task would be the following:

      CALL SL_INIT( ICTXT, NPROW, NPCOL )
      CALL BLACS_GRIDINFO( ICTXT, NPROW, NPCOL, MYROW, MYCOL )

where details of the calling sequence for SL_INIT are provided below. Detailed descriptions of calling sequences for each BLACS routine can be found in [54] and Appendix D.3. Note that underlined arguments in the calling sequence denote output arguments.

  • SL_INIT( ICTXT, NPROW, NPCOL )
     
    ICTXT
    (global output) INTEGER
    ICTXT specifies the BLACS context identifying the created process grid.
    NPROW
    (global input) INTEGER
    NPROW specifies the number of process rows in the process grid to be created.
    NPCOL
    (global input) INTEGER
    NPCOL specifies the number of process columns in the process grid to be created.

For a description of these variable names, please refer to the example program notation in section 2.3 and [54].
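The row-major ordering mentioned above can be made concrete with a short sketch. Assuming the usual row-major convention (an assumption for illustration; the actual assignment is done inside SL_INIT), the process of linear rank k in a NPROW × NPCOL grid receives coordinates (k // NPCOL, k mod NPCOL):

```python
# Row-major assignment of 6 processes to a 2 x 3 grid, as SL_INIT is described
# to do; coords[k] is the (MYROW, MYCOL) pair that process k would report.
NPROW, NPCOL = 2, 3
coords = [(rank // NPCOL, rank % NPCOL) for rank in range(NPROW * NPCOL)]
print(coords)  # [(0, 0), (0, 1), (0, 2), (1, 0), (1, 1), (1, 2)]
```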




Distribute the Matrix on the Process Grid

All global matrices must be distributed on the process grid prior to the invocation of a ScaLAPACK routine. It is the user's responsibility to perform this data distribution. For further information on the appropriate data distribution, please refer to Chapter 4.

Each global matrix that is to be distributed across the process grid must be assigned an array descriptor. Details of the entries in the array descriptor can be found in section 4.3.3. This array descriptor is most easily initialized with a call to a ScaLAPACK TOOLS routine called DESCINIT and must be set prior to the invocation of a ScaLAPACK routine.

As an example, the array descriptors  for the matrices in figures 2.2 and 2.3 are assigned with the following code excerpt from the example program in section 2.3.

      CALL DESCINIT( DESCA, M, N, MB, NB, RSRC, CSRC, ICTXT, MXLLDA,
     $               INFO )
      CALL DESCINIT( DESCB, N, NRHS, NB, NBRHS, RSRC, CSRC, ICTXT,
     $               MXLLDB, INFO )
These two calls to DESCINIT are equivalent to the assignment statements:
      DESCA( 1 ) = 1
      DESCA( 2 ) = ICTXT
      DESCA( 3 ) = M
      DESCA( 4 ) = N
      DESCA( 5 ) = MB
      DESCA( 6 ) = NB
      DESCA( 7 ) = RSRC
      DESCA( 8 ) = CSRC
      DESCA( 9 ) = MXLLDA
*
      DESCB( 1 ) = 1
      DESCB( 2 ) = ICTXT
      DESCB( 3 ) = N
      DESCB( 4 ) = NRHS
      DESCB( 5 ) = NB
      DESCB( 6 ) = NBRHS
      DESCB( 7 ) = RSRC
      DESCB( 8 ) = CSRC
      DESCB( 9 ) = MXLLDB

Details of the entries in the array descriptor can be found in section 4.3.3.
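The nine assignments above can be summarized in a small Python dictionary, purely for illustration (the field names are informal labels for the descriptor entries; ICTXT is a placeholder value here, since a real context comes from the BLACS):

```python
# Informal picture of a ScaLAPACK array descriptor, mirroring the nine
# DESCA(1)..DESCA(9) assignments shown above.
def make_desc(m, n, mb, nb, rsrc, csrc, ictxt, lld):
    return {
        "DTYPE": 1,                   # descriptor type: 1, as in DESCA(1) = 1
        "CTXT": ictxt,                # BLACS context handle
        "M": m, "N": n,               # global matrix dimensions
        "MB": mb, "NB": nb,           # row and column block sizes
        "RSRC": rsrc, "CSRC": csrc,   # process row/column holding the first entry
        "LLD": lld,                   # local leading dimension of the local array
    }

# Descriptor for the 9 x 9 example matrix A (ICTXT = 0 is a placeholder).
desca = make_desc(m=9, n=9, mb=2, nb=2, rsrc=0, csrc=0, ictxt=0, lld=5)
print(desca["M"], desca["MB"], desca["LLD"])  # 9 2 5
```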

A simplistic mapping of the global matrix in figure 2.2 to a process grid is accomplished in the example program in section 2.3 via a call to the subroutine MATINIT. Please note that the routine MATINIT is not a ScaLAPACK routine and is used in this example program for demonstrative purposes only.

Appendix C.1 provides a more detailed example program, which reads a matrix from a file, distributes it onto a process grid, and then writes the solution to a file.



Susan Blackford
Tue May 13 09:21:01 EDT 1997
scalapack-doc-1.5/html/slug/node36.html0100644000056400000620000000447306336113657017461 0ustar pfrauenfstaff Call the ScaLAPACK Routine next up previous contents index
Next: Release the Process Grid Up: Four Basic Steps Required Previous: Distribute the Matrix on

Call the ScaLAPACK Routine

All ScaLAPACK routines assume that the data has been distributed on the process grid prior to the invocation of the routine. Detailed descriptions of the appropriate calling sequences for each of the ScaLAPACK routines can be found in the leading comments of the source code or in Part II of this users' guide. The required data distribution for the ScaLAPACK routine, as well as the amount of input error checking to be performed, is described in Chapter 4. For debugging hints, the user should refer to Chapter 7.




Release the Process Grid

After the desired computation on a process grid has been completed, it is advisable to release the process grid via a call to BLACS_GRIDEXIT. When all computations have been completed, the program is exited with a call to BLACS_EXIT.

A typical code fragment to accomplish these steps would be

      CALL BLACS_GRIDEXIT( ICTXT )
      CALL BLACS_EXIT( 0 )

A detailed explanation of the BLACS calling sequences can be found in Appendix D and [54].




Contents of ScaLAPACK

 






Structure of ScaLAPACK

 






Levels of Routines

 

The routines in ScaLAPACK are classified into three broad categories:

  • Driver routines , each of which solves a complete problem, for example, solving a system of linear equations or computing the eigenvalues of a real symmetric matrix. Users are recommended to use a driver routine if one meets their requirements. Driver routines are described in section 3.2, and a complete list of routine names can be found in Appendix A.1. Global and local input error-checking are performed where possible for these routines.
  • Computational routines , each of which performs a distinct computational task, for example an LU factorization or the reduction of a real symmetric matrix to tridiagonal form. Each driver routine calls a sequence of computational routines. Users (especially software developers) may need to call computational routines directly to perform tasks, or sequences of tasks, that cannot conveniently be performed by the driver routines. Computational routines are described in section 3.3 and a complete list of routine names can be found in Appendix A.1. Global and local input error-checking are performed for these routines.
  • Auxiliary routines , which in turn can be classified as follows:

    • routines that perform subtasks of block-partitioned algorithms  -- in particular, routines that implement unblocked versions of the algorithms; and
    • routines that perform some commonly required low-level computations, for example, scaling a matrix, computing a matrix-norm, or generating an elementary Householder matrix; some of these may be of interest to numerical analysts or software developers and could be considered for future additions to the PBLAS.

In general, no input error-checking is performed in the auxiliary routines. The exception to this rule is for the auxiliary routines that are Level 2 equivalents of computational routines (e.g., PxGETF2, PxGEQR2, PxORMR2, PxORM2R). For these routines, local input error-checking is performed.

Both driver routines and computational routines are fully described in this users' guide, but not the auxiliary routines. A list of the auxiliary routines, with brief descriptions of their functions, is given in Appendix A.2. LAPACK auxiliary routines are also used whenever possible for local computation. Refer to the LAPACK Users' Guide [3] for details.

The PBLAS, BLAS, BLACS, and LAPACK are, strictly speaking, not part of the ScaLAPACK routines. However, the ScaLAPACK routines make frequent calls to these packages.

ScaLAPACK also provides two matrix redistribution/copy routines for each data type [107, 49, 106]. These routines provide a truly general copy from any block-cyclically distributed (sub)matrix to any other block-cyclically distributed (sub)matrix. These routines are the only ones in the entire ScaLAPACK library which provide inter-context operations. Because of the generality of these routines, they may be used for many operations not usually associated with copy routines. For instance, they may be used to take a matrix on one process and distribute it across a process grid, or the reverse. If a supercomputer is grouped into a virtual parallel machine with a workstation, for instance, this routine can be used to move the matrix from the workstation to the supercomputer and back. In ScaLAPACK, these routines are called to copy matrices from a two-dimensional process grid to a one-dimensional process grid. They can also be used to redistribute matrices so that distributions providing maximal performance can be used by various component libraries. For further details on these routines, refer to Appendix A.3.




Data Types and Precision

ScaLAPACK provides the same range of functionality for real and complex data, with a few exceptions. The complex Hermitian eigensolver (PCHEEV) and complex singular value decomposition (PCGESVD) are still under development. They may be available in a future release of ScaLAPACK.

Matching routines for real and complex data have been coded to maintain a close correspondence between the two, wherever possible. However, there are cases where the corresponding complex version calling sequence has more arguments than the real version.

All routines in ScaLAPACK are provided in both single- and double-precision versions.

Double-precision routines for complex matrices require the nonstandard Fortran 77 data type COMPLEX*16, which is available on most machines where double precision computation is usual.




Naming Scheme

  

Each subroutine name in ScaLAPACK that has an LAPACK equivalent is simply the LAPACK name prepended by a P. Thus, we have relaxed (violated) the Fortran 77 standard by allowing subroutine names to be longer than six characters and by allowing an underscore _ in the names of certain TOOLS routines.

All driver and computational routines  have names of the form PXYYZZZ, where for some driver routines the seventh character is blank.

The second letter, X, indicates the data type as follows:


    S    REAL                (single precision real)
    D    DOUBLE PRECISION    (double precision real)
    C    COMPLEX             (single precision complex)
    Z    COMPLEX*16          (double precision complex)

When we wish to refer to a ScaLAPACK routine generically, regardless of data type, we replace the second letter by ``x''. Thus PxGESV refers to any or all of the routines PSGESV, PCGESV, PDGESV, and PZGESV.

The next two letters, YY, indicate the type of matrix (or of the most significant matrix). Most of these two-letter codes apply to both real and complex matrices; a few apply specifically to one or the other, as indicated in table 3.1.

Table 3.1: Matrix types in the ScaLAPACK naming scheme

A diagonally dominant-like  matrix is one for which it is known a priori that pivoting for stability is NOT required in the LU factorization of the matrix. Diagonally dominant matrices themselves are examples of diagonally dominant-like matrices.

When we wish to refer to a class of routines that performs the same function on different types of matrices, we replace the second, third, and fourth letters by ``xyy''. Thus, PxyySVX refers to all the expert driver routines for systems of linear equations that are listed in table 3.2.

The last three letters ZZZ indicate the computation performed. Their meanings will be explained in section 3.3. For example, PSGEBRD is a single-precision routine that performs a bidiagonal reduction (BRD) of a real general matrix.

The names of auxiliary routines  follow a similar scheme except that the third and fourth characters YY are usually LA (for example, PSLASCL or PCLARFG). There are two kinds of exception. Auxiliary routines that implement an unblocked version of a block-partitioned algorithm  have similar names to the routines that perform the block-partitioned algorithm, with the seventh character being ``2'' (for example, PSGETF2 is the unblocked version of PSGETRF). A few routines that may be regarded as extensions to the BLAS are named similar to the BLAS naming schemes (for example, PCMAX1, PSCSUM1).
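The PXYYZZZ scheme for driver and computational routines can be sketched as a tiny decoder. The routine names below are real; the helper itself is hypothetical and handles only the regular seven-character case, not the auxiliary-routine exceptions just described:

```python
# Illustrative decoder for the PXYYZZZ naming scheme.
PRECISION = {"S": "single real", "D": "double real",
             "C": "single complex", "Z": "double complex"}

def parse_name(name):
    """Split a ScaLAPACK driver/computational routine name into its parts."""
    assert name[0] == "P"                 # all ScaLAPACK names are prepended by P
    x, yy, zzz = name[1], name[2:4], name[4:]
    return PRECISION[x], yy, zzz          # data type, matrix type, computation

print(parse_name("PSGEBRD"))  # ('single real', 'GE', 'BRD')
print(parse_name("PDGESV"))   # ('double real', 'GE', 'SV')
```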




Driver Routines

 

This section describes the driver routines  in ScaLAPACK. Further details on the terminology and the numerical operations they perform are given in section 3.3, which describes the computational routines. If the parallel algorithm or implementation differs significantly from the serial LAPACK equivalent, this fact will be noted and the user directed to consult the appropriate LAPACK working note.






Linear Equations

 

Two types of driver routines are provided for solving systems of linear equations :

  • a simple driver (name ending -SV), which solves the system AX = B by factorizing A and overwriting B with the solution X;
  • an expert driver (name ending -SVX), which can also perform some or all of the following functions (some of them optionally):

    • solve A^T X = B or A^H X = B (unless A is symmetric or Hermitian);
    • estimate the condition number of A, check for near-singularity, and check for pivot growth;
    • refine the solution and compute forward and backward error bounds;
    • equilibrate  the system if A is poorly scaled.

    The expert driver requires roughly twice as much storage as the simple driver in order to perform these extra functions.

Both types of driver routines can handle multiple right-hand sides (the columns of B).

Different driver routines are provided to take advantage of special properties or storage schemes of the matrix A, as shown in table 3.2.

These driver routines cover all the functionality of the computational routines for linear systems, except matrix inversion. It is seldom necessary to compute the inverse of a matrix explicitly, and such computation is certainly not recommended as a means of solving linear systems.
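A small NumPy illustration of that advice (this is not ScaLAPACK code): solving by factorization and solving by forming the explicit inverse yield the same answer here, but the explicit inverse costs more work and is less numerically reliable.

```python
import numpy as np

# Solve A X = B for two right-hand sides (the columns of B), two ways.
rng = np.random.default_rng(1)
a = rng.standard_normal((5, 5))
b = rng.standard_normal((5, 2))

x_solve = np.linalg.solve(a, b)     # preferred: factorize A, then solve
x_inv = np.linalg.inv(a) @ b        # discouraged: form the inverse explicitly
print(np.allclose(x_solve, x_inv))  # True -- same X, but solve is the right tool
```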

At present, only simple drivers (name ending -SV) are provided for systems involving band and tridiagonal matrices. It is important to note that in the banded and tridiagonal factorizations (PxDBTRF, PxDTTRF, PxGBTRF, PxPBTRF, and PxPTTRF) used within these drivers, the resulting factorization is not the same factorization as returned from LAPACK. Additional permutations are performed on the matrix for the sake of parallelism. Further details of the algorithmic implementations can be found in [32].

Table 3.2: Driver routines for linear equations




Linear Least Squares Problems

   

The linear least squares (LLS) problem is:

    minimize over x of || b - A x ||_2                    (3.1)

where A is an m-by-n matrix, b is a given m-element vector, and x is the n-element solution vector.

In the most usual case, m >= n and rank(A) = n. In this case the solution to problem (3.1) is unique. The problem is also referred to as finding a least squares solution to an overdetermined system of linear equations.

When m < n and rank(A) = m, there are an infinite number of solutions x that exactly satisfy b - Ax = 0. In this case it is often useful to find the unique solution x that minimizes ||x||_2, and the problem is referred to as finding a minimum norm solution to an underdetermined system of linear equations.

The driver routine PxGELS solves problem (3.1) on the assumption that rank(A) = min(m, n) -- in other words, A has full rank -- finding a least squares solution of an overdetermined system when m > n, and a minimum norm solution of an underdetermined system when m < n. PxGELS uses a QR or LQ factorization of A and also allows A to be replaced by A^T in the statement of the problem (or by A^H if A is complex).

In the general case when we may have rank(A) < min(m, n) -- in other words, A may be rank-deficient -- we seek the minimum norm least squares solution x that minimizes both ||x||_2 and ||b - Ax||_2.

The LLS  driver routines are listed in table 3.3.

All routines allow several right-hand-side vectors b and corresponding solutions x to be handled in a single call, storing these vectors as columns of matrices B and X, respectively. Note, however, that equation 3.1 is solved for each right-hand-side vector independently; this is not the same as finding a matrix X that minimizes || B - A X ||_F.

Table 3.3: Driver routines for linear least squares problems
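The two regimes described above can be seen in miniature with NumPy (an illustration of the mathematics only; ScaLAPACK's PxGELS plays the analogous role for distributed matrices):

```python
import numpy as np

# Overdetermined (m > n): least squares solution of an inconsistent system.
a_over = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
b_over = np.array([1.0, 1.0, 0.0])
x_ls, *_ = np.linalg.lstsq(a_over, b_over, rcond=None)

# Underdetermined (m < n): x1 + x2 = 2 has infinitely many exact solutions;
# lstsq returns the unique one of minimum 2-norm.
a_under = np.array([[1.0, 1.0]])
b_under = np.array([2.0])
x_mn, *_ = np.linalg.lstsq(a_under, b_under, rcond=None)
print(x_mn)  # [1. 1.] -- the minimum norm solution
```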



Susan Blackford
Tue May 13 09:21:01 EDT 1997

Standard Eigenvalue and Singular Value Problems

 





Symmetric Eigenproblems

   

The symmetric eigenvalue problem (SEP) is to find the eigenvalues , λ, and corresponding eigenvectors , z ≠ 0, such that
    A z = λ z,    A = A^T,    where A is real.
For the Hermitian eigenvalue problem  we have
    A z = λ z,    A = A^H.
For both problems the eigenvalues λ are real.

When all eigenvalues and eigenvectors have been computed, we write
    A = Z Λ Z^T,
where Λ is a diagonal matrix whose diagonal elements are the eigenvalues , and Z is an orthogonal (or unitary) matrix whose columns are the eigenvectors. This is the classical spectral factorization   of A.
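The spectral factorization can be checked numerically with the serial analogue of the symmetric drivers (numpy.linalg.eigh plays the role of PxSYEV on a single node; the test matrix is invented):

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])          # real symmetric

w, Z = np.linalg.eigh(A)                 # eigenvalues (ascending); columns of Z are eigenvectors

# A z = lambda z for every eigenpair, and the eigenvalues are real.
for lam, z in zip(w, Z.T):
    assert np.allclose(A @ z, lam * z)

# Classical spectral factorization A = Z diag(w) Z^T with Z orthogonal.
assert np.allclose(Z @ np.diag(w) @ Z.T, A)
assert np.allclose(Z.T @ Z, np.eye(3))
```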

Two types of driver routines  are provided for symmetric or Hermitian eigenproblems:

  • a simple driver (name ending -EV)  , which computes all the eigenvalues and (optionally) the eigenvectors of a symmetric or Hermitian matrix A;
  • an expert driver (name ending -EVX)  , which can compute either all or a selected subset of the eigenvalues, and (optionally) the corresponding eigenvectors.

The driver routines are shown in table 3.4. Currently the only simple drivers provided are PSSYEV and PDSYEV.




Singular Value Decomposition

The singular value decomposition (SVD) of an m-by-n matrix A is given by
    A = U Σ V^T    (A = U Σ V^H in the complex case),
where U and V are orthogonal (unitary) and Σ is an m-by-n diagonal matrix with real diagonal elements, σ_i, such that
    σ_1 >= σ_2 >= ... >= σ_min(m,n) >= 0.
The σ_i are the singular values of A and the first min(m,n) columns of U and V are the left and right singular vectors of A.

The singular values and singular vectors satisfy
    A v_i = σ_i u_i    and    A^T u_i = σ_i v_i,
where u_i and v_i are the ith columns of U and V, respectively.

A single driver  routine, PxGESVD  , computes the ``economy size'' or ``thin'' singular value decomposition of a general nonsymmetric matrix (see table 3.4). Thus, if A is m-by-n with m>n, then only the first n columns of U are computed and Σ is an n-by-n matrix. For a detailed discussion of the ``thin'' singular value decomposition, refer to [71, p. 72].

Currently, only PSGESVD and PDGESVD are provided.
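The ``thin'' shapes can be seen in the serial analogue (numpy.linalg.svd with full_matrices=False, not the ScaLAPACK routine; random data for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3))          # m-by-n with m > n

# full_matrices=False requests the "economy size"/"thin" SVD:
# only the first n columns of U are formed and Sigma is n-by-n.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
assert U.shape == (5, 3) and s.shape == (3,) and Vt.shape == (3, 3)
assert np.all(s[:-1] >= s[1:]) and np.all(s >= 0)   # sigma_1 >= ... >= 0
assert np.allclose(U @ np.diag(s) @ Vt, A)          # A = U Sigma V^T
```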

Table 3.4: Driver routines for standard eigenvalue and singular value problems




Generalized Symmetric Definite Eigenproblems (GSEP)

  

   An expert driver  is provided to compute all the eigenvalues and (optionally) the eigenvectors of the following types of problems:

  1. A z = λ B z
  2. A B z = λ z
  3. B A z = λ z

where A and B are symmetric or Hermitian and B is positive definite. For all these problems the eigenvalues  λ are real. When A and B are symmetric, the matrices Z of computed eigenvectors  satisfy Z^T A Z = Λ (problem types 1 and 3) or Z^{-1} A Z^{-T} = Λ (problem type 2), where Λ is a diagonal matrix with the eigenvalues on the diagonal. Z also satisfies Z^T B Z = I (problem types 1 and 2) or Z^T B^{-1} Z = I (problem type 3). When A and B are Hermitian, the matrices Z of computed eigenvectors  satisfy Z^H A Z = Λ (problem types 1 and 3) or Z^{-1} A Z^{-H} = Λ (problem type 2), where Λ is a diagonal matrix with the eigenvalues on the diagonal. Z also satisfies Z^H B Z = I (problem types 1 and 2) or Z^H B^{-1} Z = I (problem type 3).
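For problem type 1, the standard reduction via a Cholesky factor of B can be sketched serially in plain NumPy (this illustrates the mathematics only, not the ScaLAPACK driver; the matrices are made up):

```python
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 3.0]])        # symmetric
B = np.array([[4.0, 1.0], [1.0, 2.0]])        # symmetric positive definite

# Reduce A z = lambda B z to the standard problem C y = lambda y
# with C = L^{-1} A L^{-T}, where B = L L^T.
L = np.linalg.cholesky(B)
C = np.linalg.solve(L, np.linalg.solve(L, A).T).T
w, Y = np.linalg.eigh(C)
Z = np.linalg.solve(L.T, Y)                   # back-transform: z = L^{-T} y

for lam, z in zip(w, Z.T):
    assert np.allclose(A @ z, lam * B @ z)    # A z = lambda B z
assert np.allclose(Z.T @ B @ Z, np.eye(2))    # Z^T B Z = I
```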

The routine is listed in table 3.5.

Table 3.5: Driver routine for the generalized symmetric definite eigenvalue problems




Preface

Following the initial release of LAPACK and the emerging importance of distributed-memory computing, work began on adapting LAPACK to distributed-memory architectures. Since porting software efficiently from one distributed-memory architecture to another is a challenging task, this work is an effort to establish standards for library development in the varied world of distributed-memory computing.

ScaLAPACK is an acronym for Scalable Linear Algebra PACKage, or Scalable LAPACK. As in LAPACK, the ScaLAPACK routines are based on block-partitioned algorithms in order to minimize the frequency of data movement between different levels of the memory hierarchy. (For distributed-memory machines, the memory hierarchy includes the off-processor memory of other processors, in addition to the hierarchy of registers, cache, and local memory on each processor.) The fundamental building block of the ScaLAPACK library is a distributed-memory version of the Level 1, 2, and 3 BLAS, called the PBLAS (Parallel BLAS). The PBLAS are in turn built on the BLAS for computation on single nodes and on a set of Basic Linear Algebra Communication Subprograms (BLACS) for communication tasks that arise frequently in parallel linear algebra computations. For optimal performance, it is necessary, first, that the BLAS be implemented efficiently on the target machine, and second, that an efficient version of the BLACS be available.

Versions of the BLACS exist for both MPI and PVM, as well as versions for the Intel series (NX), IBM SP series (MPL), and Thinking Machines CM-5 (CMMD). A vendor-optimized version of the BLACS is available for the Cray T3 series. Thus, ScaLAPACK is portable on any computer or network of computers that supports MPI or PVM (as well as the aforementioned native message-passing protocols).

Most of the ScaLAPACK code is written in standard Fortran 77; the PBLAS and the BLACS are written in C, but with Fortran 77 interfaces.

The first ScaLAPACK software was written in 1989-1990, and the code has undergone many changes since then in our pursuit of consistency with, and code reuse from, LAPACK.

The first public release (version 1.0) of ScaLAPACK occurred on February 28, 1995, and subsequent releases occurred in 1996.

The ScaLAPACK library is only one facet of the ``ScaLAPACK Project,'' which is a collaborative effort involving several institutions:

  • Oak Ridge National Laboratory
  • Rice University
  • University of California, Berkeley
  • University of California, Los Angeles
  • University of Illinois, Champaign-Urbana
  • University of Tennessee, Knoxville
and comprises four components:
  • dense and band matrix software (ScaLAPACK)
  • large sparse eigenvalue software (P_ARPACK)
  • sparse direct systems software (CAPSS)
  • preconditioners for large sparse iterative solvers (ParPre)

For further information on any of the related ScaLAPACK projects, please refer to the scalapack index on netlib:

http://www.netlib.org/scalapack/index.html

This users guide describes version 1.5 of the dense and band matrix software package (ScaLAPACK).

The University of Tennessee, Knoxville, provided the routines for the solution of dense, band, and tridiagonal linear systems of equations, condition estimation and iterative refinement, for LU and Cholesky factorization, matrix inversion, full-rank linear least squares problems, orthogonal and generalized orthogonal factorizations, orthogonal transformation routines, reductions to upper Hessenberg, bidiagonal and tridiagonal form, and reduction of a symmetric-definite generalized eigenproblem to standard form. And finally, the BLACS, the PBLAS, and the HPF wrappers were also written at the University of Tennessee, Knoxville.

The University of California, Berkeley, provided the routines for the symmetric and generalized symmetric eigenproblem and the singular value decomposition.

Greg Henry at Intel Corporation provided the routines for the nonsymmetric eigenproblem.

Oak Ridge National Laboratory provided the out-of-core linear solvers for LU, Cholesky, and QR factorizations.

ScaLAPACK has been incorporated into several commercial packages, including the NAG Parallel Library, IBM Parallel ESSL, and Cray LIBSCI, and is being integrated into the VNI IMSL Numerical Library, as well as software libraries for Fujitsu, Hewlett-Packard/Convex, Hitachi, and NEC. Additional information can be found on the respective Web pages:

http://www.nag.co.uk:80/numeric/FM.html
http://www.rs6000.ibm.com/software/sp_products/esslpara.html
http://www.cray.com/PUBLIC/product-info/sw/PE/LibSci.html
http://www.sgi.com/Products/hardware/Power/ch_complib.html
http://www.vni.com/products/imsl/index.html

A number of technical reports have been written during the development of ScaLAPACK and published as LAPACK Working Notes by the University of Tennessee. Refer to the following URL for a complete set of working notes:

http://www.netlib.org/lapack/lawns/index.html
Many of these reports subsequently appeared as journal articles. The Bibliography gives the most recent published reference.

As the distributed-memory  version of LAPACK, ScaLAPACK has drawn heavily on the software and documentation standards set by LAPACK. The test and timing software for the Level 2 and 3 BLAS was used as a model for the PBLAS test and timing software, and the ScaLAPACK test suite was patterned after the LAPACK test suite. Because of the large amount of software, all BLACS, PBLAS, and ScaLAPACK routines are maintained in basefiles whereby the codes can be re-extracted as needed. Final formatting of the software was done using Toolpack/1 [105].

We have tried to be consistent with our documentation and coding style throughout ScaLAPACK in the hope that it will serve as a model for other distributed-memory software development efforts. ScaLAPACK has been designed as a source of building blocks for larger parallel applications.

The development of ScaLAPACK was supported in part by National Science Foundation Grant ASC-9005933; by the Defense Advanced Research Projects Agency under contract DAAH04-95-1-0077, administered by the Army Research Office; by the Division of Mathematical, Information, and Computational Sciences, of the U.S. Department of Energy, under Contract DE-AC05-96OR22464; and by the National Science Foundation Science and Technology Center Cooperative Agreement CCR-8809615.

The performance results presented in this book were obtained using computer resources at various sites:

  • Cray T3E, located at Lawrence Berkeley National Laboratory, National Energy Research Scientific Computing Center (NERSC), supported by the Director, Office of Computational and Technology Research, Division of Mathematical, Information, and Computational Sciences of the U.S. Department of Energy under contract number 76SF00098.
  • IBM SP-2, located at the Cornell Theory Center, which receives major funding from the National Science Foundation (NSF) and New York State, with additional support from the Defense Advanced Research Projects Agency (DARPA), the National Center for Research Resources at the National Institutes of Health (NIH), IBM Corporation, and other members of the center's Corporate Partnership Program.
  • Intel MP Paragon XPS/35, located at Intel Corporation, Portland, Oregon.
  • Intel ASCI Option Red Supercomputer Technology located in Beaverton, Oregon.
  • Network of Sun Ultra Enterprise 2 (Model 2170s) workstations, located in the Department of Computer Science at the University of Tennessee, funded by National Science Foundation Grant CDA-9529459, the Center of Excellence - Science Alliance, UT Networking Services, and the UT Computer Science Department.
  • Network of Sun UltraSPARC-1 workstations, located in the Department of Computer Science at the University of California, Berkeley supported by DARPA Grant F30602-95-C-0014, NSF Grants CCR-9257974 and PFF-CCR-9253705, as well as California MICRO Grants. Corporate sponsors are: the AT&T Foundation, Digital Equipment Corporation, Exabyte Corporation, Hewlett-Packard Company, Informix Software Inc, Intel Corporation, International Business Machines, Internet Archive, Microsoft Corporation, Mitsubishi Electric Research Laboratories, Myricom Inc, Siemens Corporation, Sun Microsystems, Synoptics Corporation, Tandem Corporation, and TIBCO Inc.

The cover of this book was designed by Andy Cleary at Lawrence Livermore National Laboratory.

We acknowledge with gratitude the support that we have received from the following organizations, and the help of individual members of their staff: Cornell Theory Center, Cray Research, a Silicon Graphics Company, IBM (Parallel ESSL Development and Research), Lawrence Berkeley National Laboratory, National Energy Research Scientific Computing Center (NERSC), Maui High Performance Computer Center, Minnesota Supercomputing Center, NAG Ltd., and Oak Ridge National Laboratory Center for Computational Sciences (CCS).

We also thank the many, many people who have contributed code, criticism, ideas and encouragement. We especially acknowledge the contributions of Mark Adams, Peter Arbenz, Scott Betts, Shirley Browne, Henri Casanova, Soumen Chakrabarti, Mishi Derakhshan, Frederic Desprez, Brett Ellis, Ray Fellers, Markus Hegland, Nick Higham, Adolfy Hoisie, Velvel Kahan, Xiaoye Li, Bill Magro, Osni Marques, Paul McMahan, Caroline Papadopoulos, Beresford Parlett, Loic Prylli, Yves Robert, Howard Robinson, Tom Rowan, Shilpa Singhal, Françoise Tisseur, Bernard Tourancheau, Anne Trefethen, Robert van de Geijn, and Andrey Zege.

We express appreciation to all those who helped in the preparation of this work, in particular to Gail Pieper for her tireless efforts in proofreading the draft and improving the quality of the presentation.

Finally, we thank all the test sites that received several test releases of the ScaLAPACK software and that ran an extensive series of test programs for us.





Computational Routines

  

As previously stated, if the parallel algorithm or implementation differs significantly from the serial LAPACK equivalent, this fact will be noted and the user directed to consult the appropriate LAPACK Working Note.






Linear Equations

 

We use the standard notation for a system of simultaneous linear equations:
    A x = b,    (3.2)
where A is the coefficient matrix , b is the right-hand side , and x is the solution . In (3.2) A is assumed to be a square matrix of order n, but some of the individual routines allow A to be rectangular. If there are several right-hand sides, we write
    A X = B,    (3.3)
where the columns of B are the individual right-hand sides, and the columns of X are the corresponding solutions. The basic task is to compute X, given A and B.

If A is upper or lower triangular, (3.2) can be solved by a straightforward process of backward or forward substitution. Otherwise, the solution is obtained after first factorizing A as a product of triangular matrices (and possibly also a diagonal matrix or permutation matrix).

The form of the factorization depends on the properties of the matrix A. ScaLAPACK provides routines for the following types of matrices, based on the stated  factorizations:

  • general matrices   (LU factorization with partial pivoting)  :
    A = P L U,
    where P is a permutation matrix, L is lower triangular with unit diagonal elements (lower trapezoidal if m > n), and U is upper triangular (upper trapezoidal if m < n).
  • symmetric and Hermitian positive definite matrices (Cholesky factorization) :
    A = U^T U  (A = U^H U for Hermitian A)

    A = L L^T  (A = L L^H for Hermitian A)
    where U is an upper triangular matrix and L is lower triangular.
  • general band matrices (LU factorization with partial pivoting):

    If A is m-by-n with bwl subdiagonals and bwu superdiagonals, the factorization is
    A = P L U Q,
    where P and Q are permutation matrices and L and U are banded lower and upper triangular matrices, respectively.

  • general diagonally dominant-like band matrices  including general tridiagonal matrices (LU factorization without pivoting):

    A diagonally dominant-like  matrix is one for which it is known a priori that pivoting for stability is NOT required in the LU factorization of the matrix. Diagonally dominant matrices themselves are examples of diagonally dominant-like matrices.

    If A is m-by-n with bwl subdiagonals and bwu superdiagonals, the factorization is
    A = P L U P^T,
    where P is a permutation matrix and L and U are banded lower and upper triangular matrices respectively.

  • symmetric and Hermitian positive definite band matrices (Cholesky factorization) :
    A = P U^T U P^T  (A = P U^H U P^T for Hermitian A)

    A = P L L^T P^T  (A = P L L^H P^T for Hermitian A)
    where P is a permutation matrix and U and L are banded upper and lower triangular matrices, respectively.
  • symmetric and Hermitian positive definite tridiagonal matrices (U^T U or L L^T factorization):  
    A = P U^T U P^T

    A = P L L^T P^T
    where P is a permutation matrix and U and L are bidiagonal upper and lower triangular matrices respectively.

Note: In the banded and tridiagonal factorizations (PxDBTRF, PxDTTRF, PxGBTRF, PxPBTRF, and PxPTTRF), the resulting factorization is not the same factorization as returned from LAPACK. Additional permutations are performed on the matrix for the sake of parallelism. Further details of the algorithmic implementations can be found in [32].

The factorization for a general diagonally dominant-like  tridiagonal matrix is like that for a general diagonally dominant-like band matrix with bwl = 1 and bwu = 1. Band matrices use the band storage scheme described in section 4.4.3.

While the primary use of a matrix factorization is to solve a system of equations, other related tasks are provided as well. Wherever possible, ScaLAPACK provides routines to perform each of these tasks for each type of matrix and storage scheme (see table 3.6). The following list relates the tasks to the last three characters of the name of the corresponding computational routine:

PxyyTRF:
factorize (obviously not needed for triangular matrices);                                 

PxyyTRS:
use the factorization (or the matrix A itself if it is triangular) to solve (3.3) by forward or backward substitution;                                      

PxyyCON:
estimate the reciprocal of the condition number κ(A) = ||A|| · ||A^{-1}||; Higham's modification [81] of Hager's method [72] is used to estimate ||A^{-1}|| (not provided for band or tridiagonal matrices);               

PxyyRFS:
compute bounds on the error in the computed solution (returned by the PxyyTRS routine), and refine the solution to reduce the backward error (see below) (not provided for band or tridiagonal matrices);               

PxyyTRI:
use the factorization (or the matrix A itself if it is triangular) to compute A^{-1} (not provided for band matrices, because the inverse does not in general preserve bandedness);                   

PxyyEQU:
compute scaling factors to equilibrate  A (not provided for band, tridiagonal, or triangular matrices). These routines do not actually scale the matrices: auxiliary routines PxLAQyy may be used for that purpose -- see the code of the driver routines PxyySVX for sample usage .          

Note that some of the above routines depend on the output of others:

PxyyTRF:
may work on an equilibrated matrix produced by PxyyEQU and PxLAQyy, if yy is one of {GE, PO};

PxyyTRS:
requires the factorization returned by PxyyTRF;

PxyyCON:
requires the norm of the original matrix A and the factorization returned by PxyyTRF;

PxyyRFS:
requires the original matrices A and B, the factorization returned by PxyyTRF, and the solution X returned by PxyyTRS;

PxyyTRI:
requires the factorization returned by PxyyTRF.

The RFS (``refine solution'') routines perform iterative refinement  and compute backward and forward error  bounds for the solution. Iterative refinement is done in the same precision as the input data. In particular, the residual is not computed with extra precision, as has been traditionally done. The benefit of this procedure is discussed in section 6.5.
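One sweep of this kind of refinement can be sketched serially (plain NumPy; np.linalg.solve stands in for the PxyyTRF/PxyyTRS pair — ScaLAPACK would reuse the factors rather than refactorize — and the test system is invented):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50
A = rng.standard_normal((n, n)) + n * np.eye(n)   # well-conditioned test matrix
b = rng.standard_normal(n)

x = np.linalg.solve(A, b)            # initial solve ("TRS" step)
r = b - A @ x                        # residual, in working precision (no extra precision)
dx = np.linalg.solve(A, r)           # correction from the same system
x = x + dx                           # refined solution

# The refined solution has a small backward error.
assert np.linalg.norm(b - A @ x) <= 1e-10 * np.linalg.norm(b)
```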

Table 3.6: Computational routines for linear equations




Orthogonal Factorizations and Linear Least Squares Problems

 

ScaLAPACK provides a number of routines for factorizing a general rectangular m-by-n matrix A, as the product of an orthogonal matrix (unitary if complex) and a triangular (or possibly trapezoidal) matrix.  

A real matrix Q is orthogonal if Q^T Q = I; a complex matrix Q is unitary if Q^H Q = I. Orthogonal or unitary matrices   have the important property that they leave the two-norm of a vector invariant:
    ||Q x||_2 = ||x||_2.
As a result, they help to maintain numerical stability because they do not amplify rounding errors.
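A two-dimensional sketch of this invariance, using a plane rotation as the orthogonal matrix (values are arbitrary):

```python
import numpy as np

theta = 0.3
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # a rotation: Q^T Q = I
x = np.array([3.0, 4.0])

assert np.allclose(Q.T @ Q, np.eye(2))                        # orthogonality
assert np.isclose(np.linalg.norm(Q @ x), np.linalg.norm(x))   # ||Qx||_2 = ||x||_2
```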

Orthogonal factorizations    are used in the solution of linear least squares problems . They may also be used to perform preliminary steps in the solution of eigenvalue or singular value problems.

Table 3.7 lists all routines provided by ScaLAPACK to perform orthogonal factorizations and the generation or pre- or post-multiplication of the matrix Q for each matrix type and storage scheme.






QR Factorization

 

The most common, and best known, of the factorizations is the QR factorization  given by
    A = Q ( R ),    m >= n,
          ( 0 )
where R is an n-by-n upper triangular matrix and Q is an m-by-m orthogonal (or unitary) matrix. If A is of full rank n, then R is nonsingular. It is sometimes convenient to write the factorization as
    A = ( Q1 Q2 ) ( R )
                  ( 0 )
which reduces to
    A = Q1 R,
where Q1 consists of the first n columns of Q, and Q2 the remaining m-n columns.

If m < n, R is trapezoidal, and the factorization can be written
    A = Q ( R1 R2 ),
where R1 is upper triangular and R2 is rectangular.

The routine PxGEQRF computes the QR factorization. The matrix Q is not formed explicitly, but is represented as a product of elementary reflectors, as described in section 3.4. Users need not be aware of the details of this representation, because associated routines are provided to work with Q: PxORGQR (or PxUNGQR in the complex case) can generate all or part of Q, while PxORMQR (or PxUNMQR) can pre- or post-multiply a given matrix by Q or Q^T (Q^H if complex).

The QR factorization can be used to solve the linear least squares problem (3.1)   when m >= n and A is of full rank, since
    ||b - A x||_2 = || Q^T b - ( R ) x ||_2.
                               ( 0 )
The vector c = Q^T b can be computed by PxORMQR   (or PxUNMQR  ), and its first n elements form c1. Then x is the solution of the upper triangular system
    R x = c1,
which can be computed by PxTRTRS. The residual vector r is given by
    r = b - A x = Q ( 0  )
                    ( c2 ),
where c2 consists of the remaining m-n elements of c, and may be computed using PxORMQR   (or PxUNMQR  ). The residual sum of squares ||r||_2^2 may be computed without forming r explicitly, since
    ||r||_2 = ||c2||_2.
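The same steps can be traced serially with NumPy (numpy.linalg.qr in place of PxGEQRF and a triangular solve in place of PxTRTRS; the data are made up):

```python
import numpy as np

A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])   # m-by-n, m > n, full rank
b = np.array([1.0, 2.0, 2.0])
m, n = A.shape

Q, R = np.linalg.qr(A, mode="complete")  # Q is m-by-m, R is m-by-n
c = Q.T @ b
c1, c2 = c[:n], c[n:]                    # split c as in the text

x = np.linalg.solve(R[:n], c1)           # solve R x = c1 (R[:n] is the n-by-n triangle)
r = b - A @ x                            # residual vector

assert np.allclose(np.linalg.norm(r), np.linalg.norm(c2))        # ||r||_2 = ||c2||_2
assert np.allclose(x, np.linalg.lstsq(A, b, rcond=None)[0])      # least squares solution
```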




LQ Factorization

The LQ factorization   is given by
    A = ( L 0 ) Q = L Q1,
where L is m-by-m lower triangular, Q is n-by-n orthogonal (or unitary), Q1 consists of the first m rows of Q, and Q2 consists of the remaining n-m rows.

This factorization is computed by the routine PxGELQF, and again Q is represented as a product of elementary reflectors; PxORGLQ (or PxUNGLQ in the complex case) can generate all or part of Q, and PxORMLQ (or PxUNMLQ) can pre- or post-multiply a given matrix by Q or Q^T (Q^H if Q is complex).

The LQ factorization of A is essentially the same as the QR factorization of A^T (A^H if A is complex), since
    A = ( L 0 ) Q    implies    A^T = Q^T ( L^T ).
                                          (  0  )

The LQ factorization may be used to find a minimum norm solution  of an underdetermined   system of linear equations A x = b, where A is m-by-n with m < n and has rank m. The solution is given by
    x = Q1^T L^{-1} b
and may be computed by calls to PxTRTRS and PxORMLQ.
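NumPy has no LQ routine, but since the LQ factorization of A is the QR factorization of A^T, the minimum norm formula can be checked serially (made-up data):

```python
import numpy as np

A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])             # m-by-n with m < n, rank m
b = np.array([2.0, 2.0])
m, n = A.shape

# LQ of A from QR of A^T:  A^T = Qt R  =>  A = ( L 0 ) Q  with L = R[:m].T, Q = Qt^T.
Qt, R = np.linalg.qr(A.T, mode="complete")
L = R[:m].T                                 # m-by-m lower triangular
Q1 = Qt[:, :m].T                            # first m rows of Q

x = Q1.T @ np.linalg.solve(L, b)            # x = Q1^T L^{-1} b

assert np.allclose(A @ x, b)                                     # exact solution
assert np.allclose(x, np.linalg.lstsq(A, b, rcond=None)[0])      # minimum norm solution
```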




QR Factorization with Column Pivoting

To solve a linear least squares problem (3.1)   when A is not of full rank, or the rank of A is in doubt, we can perform either a QR factorization with column pivoting     or a singular value decomposition (see subsection 3.3.6).

The QR factorization with column pivoting is given by
    A = Q ( R ) P^T,    m >= n,
          ( 0 )
where Q and R are as before and P is a permutation matrix, chosen (in general) so that
    |r_11| >= |r_22| >= ... >= |r_nn|
and moreover, for each k,
    |r_kk| >= ||R(k:j, j)||_2    for j = k+1, ..., n.
In exact arithmetic, if rank(A) = k, then the whole of the submatrix R22 in rows and columns k+1 to n would be zero. In numerical computation, the aim must be to determine an index k such that the leading submatrix R11 in the first k rows and columns is well conditioned and R22 is negligible:
    R = ( R11 R12 ) ≈ ( R11 R12 ).
        (  0  R22 )   (  0   0  )
Then k is the effective rank of A. See Golub and Van Loan [71] for a further discussion of numerical rank determination.   

The so-called basic solution to the linear least squares problem (3.1)  can be obtained from this factorization as
    x = P ( R11^{-1} c1 )
          (      0      )
where c1 consists of just the first k elements of c = Q^T b.

The routine PxGEQPF computes the QR factorization with column pivoting but does not attempt to determine the rank of A. The matrix Q is represented in exactly the same way as after a call of PxGEQRF, and so the routines PxORGQR and PxORMQR can be used to work with Q (PxUNGQR and PxUNMQR if Q is complex).




Complete Orthogonal Factorization

The QR factorization with column pivoting does not enable us to compute a minimum norm solution to a rank-deficient linear least squares problem   unless R12 = 0. However, by applying further orthogonal (or unitary) transformations  from the right to the upper trapezoidal matrix ( R11 R12 ), using the routine PxTZRZF, R12 can be eliminated:
    ( R11 R12 ) = ( T11 0 ) Z.
This gives the complete orthogonal factorization
    A P = Q ( T11 0 ) Z
            (  0  0 )
from which the minimum norm solution  can be obtained as
    x = P Z^T ( T11^{-1} c1 ),
              (      0      )
where c1 consists of the first k elements of c = Q^T b.

The matrix Z is not formed explicitly but is represented as a product of elementary reflectors, as described in section 3.4. Users need not be aware of the details of this representation, because associated routines are provided to work with Z: PxORMRZ (or PxUNMRZ) can pre- or post-multiply a given matrix by Z or Z^T (Z^H if complex).




Other Factorizations

The QL and RQ factorizations      are given by
    A = Q ( 0 ),    m >= n,
          ( L )
and
    A = ( 0 R ) Q,    m <= n.
These factorizations are computed by PxGEQLF and PxGERQF, respectively; they are less commonly used than either the QR or LQ factorizations described above, but have applications in, for example, the computation of generalized QR factorizations [5].

All the factorization routines discussed here (except PxTZRZF) allow arbitrary m and n, so that in some cases the matrices R or L are trapezoidal rather than triangular. A routine that performs pivoting is provided only for the QR factorization.

Table 3.7: Computational routines for orthogonal factorizations




Generalized Orthogonal Factorizations

      






Generalized QR Factorization

 

The generalized QR (GQR) factorization of an n-by-m matrix A and an n-by-p matrix B is given by the pair of factorizations

\[ A = Q R \quad \mbox{and} \quad B = Q T Z \]

where Q and Z are respectively n-by-n and p-by-p orthogonal matrices (or unitary matrices if A and B are complex). R has the form

\[ R = \begin{pmatrix} R_{11} \\ 0 \end{pmatrix}, \quad \mbox{if } n \ge m, \]

or

\[ R = \begin{pmatrix} R_{11} & R_{12} \end{pmatrix}, \quad \mbox{if } n < m, \]

where R_{11} is upper triangular. T has the form

\[ T = \begin{pmatrix} 0 & T_{12} \end{pmatrix}, \quad \mbox{if } n \le p, \]

or

\[ T = \begin{pmatrix} T_{11} \\ T_{21} \end{pmatrix}, \quad \mbox{if } n > p, \]

where T_{12} or T_{21} is upper triangular.

Note that if B is square and nonsingular, the GQR factorization of A and B implicitly gives the QR factorization of the matrix B^{-1} A:

\[ B^{-1} A = Z^T ( T^{-1} R ) \]

without explicitly computing the matrix inverse B^{-1} or the product B^{-1} A.

The routine PxGGQRF computes the GQR factorization by computing first the QR factorization of A and then the RQ factorization of Q^T B (Q^H B if A and B are complex). The orthogonal (or unitary) matrices Q and Z can be formed explicitly or can be used just to multiply another given matrix in the same way as the orthogonal (or unitary) matrix in the QR factorization (see section 3.3.2).
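The two-step construction can be sketched serially with SciPy's dense factorizations (illustrative sizes; qr and rq stand in for the distributed routines):

```python
import numpy as np
from scipy.linalg import qr, rq

rng = np.random.default_rng(2)
n, m, p = 6, 4, 5
A = rng.standard_normal((n, m))
B = rng.standard_normal((n, p))

Q, R = qr(A)                 # first, the QR factorization A = Q R
T, Z = rq(Q.T @ B)           # then the RQ factorization Q^T B = T Z

# together these give the GQR pair: A = Q R and B = Q T Z
assert np.allclose(Q @ R, A)
assert np.allclose(Q @ T @ Z, B)
```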

The GQR factorization was introduced in [73, 100]. The implementation of the GQR factorization here follows that in [5]. Further generalizations of the GQR  factorization can be found in [36].




Suggestions for Reading

 

This users guide is divided into two parts. Part I: Guide contains chapters and appendices providing a thorough explanation of the design and functionality of the ScaLAPACK library. These chapters should be read in the order in which they are presented. Part II: Specifications of Routines is a reference manual of the leading comments of each routine in alphabetical order by routine name. A Bibliography is also provided, as well as two indexes: Index by Keyword and Index by Routine Name.

This book assumes a basic knowledge of distributed-memory parallel programming, and is written for an audience of both novice and expert users. Users intimately familiar with specific concepts discussed in this book may choose to not read certain chapters or sections within this book. Some of the chapters can be regarded as stand-alone and read independently. Novice users are directed to focus their attention on special introductory chapters, sections, and example programs, as detailed below.

All users are encouraged to frequently refer to the List of Notation and the Glossary. The first time notation from the glossary appears in the text, it will be italicized. If the user is unfamiliar with any of the concepts defined, a number of books provide background information in parallel programming [6, 33, 65, 66, 67, 70, 75, 87, 92, 95, 99, 112].

  • Chapter 1: Essentials provides a brief overview of the components of the library, downloading instructions, and details of support for the package. Users who are familiar with the design of the BLAS and LAPACK and acquainted with the existing Web pages may wish to skip this chapter.
  • Chapter 2: Getting Started with ScaLAPACK presents the basic requirements to enable users to call ScaLAPACK software, together with a very simple example program. Users who are well versed in using ScaLAPACK software may choose to skip this chapter.
  • Chapter 3: Contents of ScaLAPACK outlines the functionality provided by the package. This is a stand-alone chapter and important for both expert and novice users.
  • Chapter 4: Data Distributions and Software Conventions discusses process grid layout, contexts, block and block-cyclic data distributions, and documentation and software conventions. This chapter is essential reading for any user who is not familiar with data distributions, array descriptors, and the calling sequences of ScaLAPACK routines.
  • Chapter 5: Performance of ScaLAPACK provides guidelines to achieve high performance by using ScaLAPACK and presents performance results for a subset of the ScaLAPACK routines on a variety of distributed-memory MIMD computers and networks of workstations. This is a stand-alone chapter.
  • Chapter 6: Accuracy and Stability discusses the accuracy and stability of the algorithms used in ScaLAPACK, as well as issues of heterogeneous computing. This chapter provides varying degrees of detail, catering to novice as well as expert users.
  • Chapter 7: Troubleshooting provides a set of installation and application debugging hints for first-time ScaLAPACK users.
  • Appendix A provides a list of routine names for all driver, computational, and auxiliary routines in ScaLAPACK, as well as the matrix redistribution/copy routines.
  • Appendix B provides a brief tutorial on how to convert programs using the BLAS to the PBLAS and LAPACK to ScaLAPACK.
  • Appendix C provides two additional example programs. Section C.1 contains a more memory-efficient and practical example program, which reads a matrix from a file, distributes this matrix to the process grid, calls the desired ScaLAPACK routine, and writes the solution matrix to a file. Section C.2 provides a brief description of the HPF interface to ScaLAPACK, as well as an example program.
  • Appendix D contains Quick Reference Guides for ScaLAPACK, the PBLAS, and the BLACS.
  • Part II: Specifications of Routines is a reference manual of the leading comments from the source code of all driver and computational routines. This manual can be read selectively as needed.



Generalized RQ factorization

The generalized RQ (GRQ) factorization of an m-by-n matrix A and a p-by-n matrix B is given by the pair of factorizations

\[ A = R Q \quad \mbox{and} \quad B = Z T Q \]

where Q and Z are respectively n-by-n and p-by-p orthogonal matrices (or unitary matrices if A and B are complex). R has the form

\[ R = \begin{pmatrix} 0 & R_{12} \end{pmatrix}, \quad \mbox{if } m \le n, \]

or

\[ R = \begin{pmatrix} R_{11} \\ R_{21} \end{pmatrix}, \quad \mbox{if } m > n, \]

where R_{12} or R_{21} is upper triangular. T has the form

\[ T = \begin{pmatrix} T_{11} \\ 0 \end{pmatrix}, \quad \mbox{if } p \ge n, \]

or

\[ T = \begin{pmatrix} T_{11} & T_{12} \end{pmatrix}, \quad \mbox{if } p < n, \]

where T_{11} is upper triangular.

Note that if B is square and nonsingular, the GRQ factorization of A and B implicitly gives the RQ factorization of the matrix A B^{-1}:

\[ A B^{-1} = ( R T^{-1} ) Z^T \]

without explicitly computing the matrix inverse B^{-1} or the product A B^{-1}.

The routine PxGGRQF computes the GRQ factorization by computing first the RQ factorization of A and then the QR factorization of B Q^T (B Q^H if A and B are complex). The orthogonal (or unitary) matrices Q and Z can be formed explicitly or can be used just to multiply another given matrix in the same way as the orthogonal (or unitary) matrix in the RQ factorization (see section 3.3.2).
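As with the GQR case, the construction can be sketched serially with SciPy's dense factorizations (illustrative sizes; a sketch of the mathematics, not of PxGGRQF's interface):

```python
import numpy as np
from scipy.linalg import qr, rq

rng = np.random.default_rng(3)
m, n, p = 4, 6, 5
A = rng.standard_normal((m, n))
B = rng.standard_normal((p, n))

R, Q = rq(A)                 # first, the RQ factorization A = R Q
Z, T = qr(B @ Q.T)           # then the QR factorization B Q^T = Z T

# together these give the GRQ pair: A = R Q and B = Z T Q
assert np.allclose(R @ Q, A)
assert np.allclose(Z @ T @ Q, B)
```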




Symmetric Eigenproblems

   

Let A be a real symmetric or complex Hermitian n-by-n matrix. A scalar λ is called an eigenvalue and a nonzero column vector z the corresponding eigenvector if A z = λ z. λ is always real when A is real symmetric or complex Hermitian.

The basic task of the symmetric eigenproblem routines is to compute values of λ and, optionally, corresponding vectors z for a given matrix A.

This computation proceeds in the following stages:

  1. The real symmetric or complex Hermitian matrix A is reduced to real tridiagonal form T. If A is real symmetric, the decomposition is A = Q T Q^T with Q orthogonal and T symmetric tridiagonal. If A is complex Hermitian, the decomposition is A = Q T Q^H with Q unitary and T, as before, real symmetric tridiagonal.
  2. Eigenvalues and eigenvectors of the real symmetric tridiagonal matrix T are computed. If all eigenvalues and eigenvectors are computed, this process is equivalent to factorizing T as T = S Λ S^T, where S is orthogonal and Λ is diagonal. The diagonal entries of Λ are the eigenvalues of T, which are also the eigenvalues of A, and the columns of S are the eigenvectors of T; the eigenvectors of A are the columns of Z = Q S, so that A = Z Λ Z^T (A = Z Λ Z^H when A is complex Hermitian).

In the real case, the decomposition A = Q T Q^T is computed by the routine PxSYTRD (see table 3.8). The complex analogue of this routine is called PxHETRD. The routine PxSYTRD (or PxHETRD) represents the matrix Q as a product of elementary reflectors, as described in section 3.4. The routine PxORMTR (or in the complex case PxUNMTR) is provided to multiply another matrix by Q without forming Q explicitly; this can be used to transform eigenvectors of T, computed by PxSTEIN, back to eigenvectors of A.
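The two stages can be sketched serially with SciPy's dense analogues (a sketch only: PxSYTRD and the tridiagonal eigensolvers operate on distributed matrices, while hessenberg and eigh_tridiagonal are their serial stand-ins here; the Hessenberg form of a symmetric matrix is symmetric tridiagonal):

```python
import numpy as np
from scipy.linalg import hessenberg, eigh_tridiagonal

rng = np.random.default_rng(4)
M = rng.standard_normal((6, 6))
A = (M + M.T) / 2                      # real symmetric test matrix

# Stage 1: reduce to tridiagonal form, A = Q T Q^T
T, Q = hessenberg(A, calc_q=True)
d, e = np.diag(T), np.diag(T, -1)      # diagonal and off-diagonal of T

# Stage 2: eigendecomposition of the tridiagonal matrix, T = S diag(lam) S^T
lam, S = eigh_tridiagonal(d, e)

Z = Q @ S                              # eigenvectors of A are the columns of Z = Q S
assert np.allclose(A @ Z, Z * lam)
```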

The following routines compute eigenvalues  and eigenvectors  of T.

xSTEQR2
   This routine [77] is a modified version of LAPACK routine xSTEQR. It has fewer iterations than the LAPACK routine due to a modification in the look-ahead technique described in [77]. Some additional modifications allow each process to perform partial updates to matrix Q. This routine computes all eigenvalues and, optionally, eigenvectors of a symmetric tridiagonal matrix using the implicit QL or QR method.   
PxSTEBZ
   This routine uses bisection to compute some or all of the eigenvalues. Options provide for computing all the eigenvalues in a real interval or all the eigenvalues from the ith to the jth largest. It can be highly accurate but may be adjusted to run faster if lower accuracy is acceptable.
PxSTEIN
     Given accurate eigenvalues, this routine uses inverse iteration  to compute some or all of the eigenvectors.

Without any reorthogonalization, inverse iteration may produce vectors that have large dot products. To cure this, most implementations of inverse iteration, such as LAPACK's xSTEIN, reorthogonalize when eigenvalues differ by less than 10^{-3} ‖T‖. As a result, the eigenvectors computed by xSTEIN are almost always orthogonal, but the increase in cost can result in O(n^3) work. On some rare examples, xSTEIN may still fail to deliver accurate answers; see [43, 44]. The orthogonalization done by PxSTEIN is limited by the amount of workspace provided; whenever it performs less reorthogonalization than xSTEIN, there is a danger that the dot products may not be satisfactory.

Table 3.8: Computational routines for the symmetric eigenproblem



Nonsymmetric Eigenproblems

   






Eigenvalues, Eigenvectors, and Schur Factorization

Let A be a square n-by-n matrix. A scalar λ is called an eigenvalue and a nonzero column vector v the corresponding right eigenvector if A v = λ v. A nonzero column vector u satisfying u^H A = λ u^H is called the left eigenvector. The first basic task of the routines described in this section is to compute, for a given matrix A, all n values of λ and, if desired, their associated right eigenvectors v and/or left eigenvectors u.

A second basic task is to compute the Schur factorization of a matrix A. If A is complex, then its Schur factorization is A = Z T Z^H, where Z is unitary and T is upper triangular. If A is real, its Schur factorization is A = Z T Z^T, where Z is orthogonal, and T is upper quasi-triangular (with 1-by-1 and 2-by-2 blocks on its diagonal). The columns of Z are called the Schur vectors of A. The eigenvalues of A appear on the diagonal of T; complex conjugate eigenvalues of a real A correspond to 2-by-2 blocks on the diagonal of T.
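For example, SciPy's serial schur (a stand-in for the distributed computation described below) exhibits both forms:

```python
import numpy as np
from scipy.linalg import schur

rng = np.random.default_rng(5)
A = rng.standard_normal((5, 5))

T, Z = schur(A)                          # real Schur form: A = Z T Z^T,
assert np.allclose(Z @ T @ Z.T, A)       # T quasi-triangular (1x1/2x2 blocks)

Tc, Zc = schur(A, output='complex')      # complex Schur form: A = Zc Tc Zc^H
assert np.allclose(Zc @ Tc @ Zc.conj().T, A)
assert np.allclose(np.tril(Tc, -1), 0)   # Tc is genuinely upper triangular
```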

These two basic tasks can be performed in the following stages:

  1. A general matrix A is reduced to upper Hessenberg form H, which is zero below the first subdiagonal. The reduction may be written A = Q H Q^T with Q orthogonal if A is real, or A = Q H Q^H with Q unitary if A is complex. The reduction is performed by subroutine PxGEHRD, which represents Q in a factored form, as described in section 3.4. The routine PxORMHR (or in the complex case PxUNMHR) is provided to multiply another matrix by Q without forming Q explicitly.
  2. The upper Hessenberg matrix H is reduced to Schur form T, giving the Schur factorization H = S T S^T (for H real) or H = S T S^H (for H complex). The matrix S (the Schur vectors of H) may optionally be computed as well. Alternatively, S may be postmultiplied into the matrix Q determined in stage 1, to give the matrix Z = Q S, the Schur vectors of A. The eigenvalues are obtained from the diagonal of T. All this is done by subroutine PxLAHQR.

The algorithm used in PxLAHQR is similar to the LAPACK routine xLAHQR. Unlike xLAHQR, however, instead of sending one double shift through the largest unreduced submatrix, this algorithm sends multiple double shifts and spaces them apart so that there can be parallelism across several processor row/columns. Another critical difference is that this algorithm applies multiple double shifts in a block fashion, as opposed to xLAHQR which applies one double shift at a time, and xHSEQR from LAPACK which attempts to achieve a blocked code by combining the double shifts into one single large multi-shift. For complete details, please refer to [79].

See table 3.9 for a complete list of the routines.

Table 3.9: Computational routines for the nonsymmetric eigenproblem



Singular Value Decomposition

 

Let A be a general real m-by-n matrix. The singular value decomposition (SVD) of A is the factorization A = U Σ V^T, where U and V are orthogonal, and Σ = diag(σ_1, ..., σ_r) with r = min(m, n) and σ_1 ≥ σ_2 ≥ ... ≥ σ_r ≥ 0. If A is complex, its SVD is A = U Σ V^H, where U and V are unitary and Σ is as before with real diagonal elements. The σ_i are called the singular values, the first r columns of V the right singular vectors, and the first r columns of U the left singular vectors.

The routines described in this section, and listed in table 3.10, are used to compute this decomposition. The computation proceeds in the following stages:

  1. The matrix A is reduced to bidiagonal form: A = U_1 B V_1^T if A is real (A = U_1 B V_1^H if A is complex), where U_1 and V_1 are orthogonal (unitary if A is complex), and B is real and upper bidiagonal when m ≥ n and lower bidiagonal when m < n, so that B is nonzero only on the main diagonal and either on the first superdiagonal (if m ≥ n) or the first subdiagonal (if m < n).
  2. The SVD of the bidiagonal matrix B is computed: B = U_2 Σ V_2^T, where U_2 and V_2 are orthogonal and Σ is diagonal as described above. The singular vectors of A are then U = U_1 U_2 and V = V_1 V_2.

The reduction to bidiagonal form is performed by the subroutine PxGEBRD and the SVD of B is computed using the LAPACK routine xBDSQR.       

The routine PxGEBRD represents U_1 and V_1 in factored form as products of elementary reflectors, as described in section 3.4. If A is real, the matrices U_1 and V_1 may be multiplied by other matrices without forming U_1 and V_1, using routine PxORMBR. If A is complex, one instead uses PxUNMBR.

If m ≫ n, it may be more efficient to first perform a QR factorization of A, using the routine PxGEQRF, and then to compute the SVD of the n-by-n matrix R, since if A = Q R and R = Ũ Σ V^T, then the SVD of A is given by A = (Q Ũ) Σ V^T. Similarly, if m ≪ n, it may be more efficient to first perform an LQ factorization of A, using PxGELQF. These preliminary QR and LQ factorizations are performed by the driver PxGESVD.

The SVD may be used to find a minimum norm solution to a (possibly) rank-deficient linear least squares problem (3.1). The effective rank, k, of A can be determined as the number of singular values which exceed a suitable threshold. Let Σ_1 be the leading k-by-k submatrix of Σ, and V_1 be the matrix consisting of the first k columns of V. Then the solution is given by

\[ x = V_1 \Sigma_1^{-1} \hat{c}_1 \]

where \hat{c}_1 consists of the first k elements of c = U^T b; c can be computed by using PxORMBR.

Table 3.10: Computational routines for the singular value decomposition



Generalized Symmetric Definite Eigenproblems

  

This section is concerned with the solution of the generalized eigenvalue problems A z = λ B z, A B z = λ z, and B A z = λ z, where A and B are real symmetric or complex Hermitian and B is positive definite. Each of these problems can be reduced to a standard symmetric eigenvalue problem, using a Cholesky factorization of B as either B = L L^T or B = U^T U (B = L L^H or B = U^H U in the Hermitian case).

With B = L L^T, we have

\[ A z = \lambda B z \quad \Rightarrow \quad ( L^{-1} A L^{-T} ) ( L^T z ) = \lambda ( L^T z ) . \]

Hence the eigenvalues of A z = λ B z are those of C y = λ y, where C is the symmetric matrix C = L^{-1} A L^{-T} and y = L^T z. In the complex case C is Hermitian with C = L^{-1} A L^{-H} and y = L^H z.

Table 3.11 summarizes how each of the three types of problem may be reduced to standard form C y = λ y, and how the eigenvectors z of the original problem may be recovered from the eigenvectors y of the reduced problem. The table applies to real problems; for complex problems, transposed matrices must be replaced by conjugate transposes.

Table 3.11: Reduction of generalized symmetric definite eigenproblems to standard problems

Given A and a Cholesky factorization of B, the routines PxyyGST overwrite A with the matrix C of the corresponding standard problem C y = λ y (see table 3.12). This may then be solved by using the routines described in subsection 3.3.4. No special routines are needed to recover the eigenvectors z of the generalized problem from the eigenvectors y of the standard problem, because these computations are simple applications of Level 2 or Level 3 BLAS.
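The first reduction in table 3.11 can be sketched in serial NumPy (an illustration of the algebra, not of PxyyGST's distributed interface; the test matrices are illustrative):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 5
M = rng.standard_normal((n, n))
A = (M + M.T) / 2                        # real symmetric
N = rng.standard_normal((n, n))
B = N @ N.T + n * np.eye(n)              # symmetric positive definite

L = np.linalg.cholesky(B)                # B = L L^T
X = np.linalg.solve(L, A)                # L^{-1} A
C = np.linalg.solve(L, X.T)              # C = L^{-1} A L^{-T} (A symmetric)

lam, Y = np.linalg.eigh(C)               # standard problem C y = lambda y
Z = np.linalg.solve(L.T, Y)              # recover z = L^{-T} y
assert np.allclose(A @ Z, (B @ Z) * lam) # A z = lambda B z, columnwise
```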

Table 3.12: Computational routines for the generalized symmetric definite eigenproblem




Orthogonal or Unitary Matrices

 

A real orthogonal or complex unitary matrix (usually denoted Q) is often represented in ScaLAPACK as a product of elementary reflectors -- also referred to as elementary Householder matrices (usually denoted H_i). For example,

\[ Q = H_1 H_2 \cdots H_k . \]
Most users need not be aware of the details, because ScaLAPACK routines are provided to work with this representation:

  • routines whose names begin PSORG- (real) or PCUNG- (complex) can generate all or part of Q explicitly;
  • routines whose names begin PSORM- (real) or PCUNM- (complex) can multiply a given matrix by Q or Q^H (Q^T in the real case) without forming Q explicitly.

The following details may occasionally be useful.

An elementary reflector (or elementary Householder matrix) H of order n is a unitary matrix of the form

\[ H = I - \tau v v^H \qquad (3.4) \]

where τ is a scalar and v is an n-vector, with |τ|^2 ‖v‖_2^2 = 2 Re(τ); v is often referred to as the Householder vector. Often v has several leading or trailing zero elements, but for the purpose of this discussion assume that H has no such special structure.

Some redundancy in the representation (3.4) exists, which can be removed in various ways. Like LAPACK, the representation used in ScaLAPACK (which differs from that used in LINPACK or EISPACK) sets v_1 = 1; hence v_1 need not be stored. In real arithmetic, 1 ≤ τ ≤ 2, except that τ = 0 implies H = I.
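A minimal real-arithmetic sketch of this convention (the helper name householder is hypothetical; it mirrors the role of the LAPACK auxiliary xLARFG, on which ScaLAPACK builds):

```python
import numpy as np

def householder(x):
    """Return tau, v, beta with v[0] == 1 and (I - tau*v*v^T) x = beta*e_1
    (real case of the LAPACK/ScaLAPACK storage convention; a sketch)."""
    x = np.asarray(x, dtype=float)
    beta = -np.copysign(np.linalg.norm(x), x[0])
    v = x.copy()
    v[0] -= beta                   # u = x - beta * e_1
    v /= v[0]                      # scale so v[0] = 1; it need not be stored
    tau = (beta - x[0]) / beta
    return tau, v, beta

tau, v, beta = householder(np.array([3.0, 4.0]))
H = np.eye(2) - tau * np.outer(v, v)
assert np.allclose(H @ np.array([3.0, 4.0]), [beta, 0.0])
assert 1.0 <= tau <= 2.0           # the range stated above for real tau
```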

In complex arithmetic, τ may be complex and satisfies 1 ≤ Re(τ) ≤ 2 and |τ - 1| ≤ 1. Thus a complex H is not Hermitian (as it is in other representations), but it is unitary, which is the important property. The advantage of allowing τ to be complex is that, given an arbitrary complex vector x, H can be computed so that

\[ H x = \beta (1, 0, \ldots, 0)^T \]

with real β. This is useful, for example, when reducing a complex Hermitian matrix to real symmetric tridiagonal form or a complex rectangular matrix to real bidiagonal form.

For further details, see Lehoucq [94].



Algorithmic Differences between LAPACK and ScaLAPACK

 

The following ScaLAPACK routines use different algorithms from their LAPACK counterparts. Refer to the relevant LAPACK working notes or the leading comments of the source code for details.

  • PxDBSV, PxDBTRF, PxDBTRS: No LAPACK equivalent; refer to [32].
  • PxGBSV, PxGBTRF, PxGBTRS: Refer to [32].
  • PxLAHQR: Refer to [79].
  • PxPBSV, PxPBTRF, PxPBTRS: Refer to [32].
  • PxPTSV, PxPTTRF, PxPTTRS: Refer to [32].
  • PxSYEV
  • PxSYEVX/PxHEEVX: Refer to [40].
  • PxSYGVX/PxHEGVX.



Data Distributions and Software Conventions

 

The ScaLAPACK software library provides routines that operate on three types of matrices: in-core dense matrices, in-core narrow band matrices, and out-of-core dense matrices. On entry, these routines assume that the data has been distributed on the processors according to a specific data decomposition scheme. Conventional arrays are used to store the data locally when it resides in the processors' memory. The data layout information as well as the local storage scheme for these different matrix operands is conveyed to the routines via a simple array of integers called an array descriptor. The first entry of this array identifies the type of the descriptor, i.e., the data distribution scheme it describes. This chapter first presents the fundamental concepts of process grids, communication contexts, and array descriptors. Then, for each of the three matrix operands mentioned above, the data distribution scheme and the corresponding descriptor array used by ScaLAPACK are discussed in detail. Finally, the software conventions common to all ScaLAPACK routines are presented.






Basics

 

ScaLAPACK requires that all global data (vectors or matrices) be distributed across the processes prior to invoking the ScaLAPACK routines. The storage schemes of global data structures in ScaLAPACK are conceptually the same as for LAPACK.

Global data is mapped to the local memories of processes assuming specific data distributions. The local data on each process is referred to as the local array.

The layout of an application's data within the hierarchical memory   of a concurrent computer is critical in determining the performance and scalability  of the parallel code. On shared-memory concurrent computers (or multiprocessors) LAPACK seeks to make efficient use of the hierarchical memory by maximizing data reuse (e.g., on a cache-based computer LAPACK avoids having to reload the cache too frequently). Specifically, LAPACK casts linear algebra computations in terms of block-oriented, matrix-matrix operations through the use of the Level 3 BLAS whenever possible. This approach generally results in maximizing the ratio of floating-point operations to memory references and enables data reuse as much as possible while it is stored in the highest levels of the memory hierarchy (e.g., vector registers or high-speed cache).

An analogous approach has been followed in the design of ScaLAPACK for distributed-memory  machines. By using block-partitioned algorithms  we seek to reduce the frequency with which data must be transferred between processes, thereby reducing the fixed startup cost (or latency)  incurred each time a message is communicated.

The ScaLAPACK routines for solving dense linear systems and eigenvalue problems assume that all global data has been distributed to the processes with a one-dimensional or two-dimensional block-cyclic data distribution. This distribution is a natural expression of the block-partitioned algorithms present in ScaLAPACK. The ScaLAPACK routines for solving band linear systems and tridiagonal systems assume that all global data has been distributed to the processes with a one-dimensional block data distribution. Each of these distributions is supported in the High Performance Fortran standard [91]. Explanations for each distribution will be presented and accompanied by the appropriate HPF directives.
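The bookkeeping behind these distributions is captured by the TOOLS routine NUMROC (listed in the notation section); the following is a Python transcription of its logic (a sketch, applied once per dimension of a 2-D block-cyclic layout):

```python
def numroc(n, nb, iproc, isrcproc, nprocs):
    """Number of rows or columns of an n-element global dimension, split
    into blocks of size nb and dealt cyclically to a 1-D grid of nprocs
    processes, that land on process iproc when the first block lives on
    process isrcproc."""
    mydist = (nprocs + iproc - isrcproc) % nprocs  # distance from the source
    nblocks = n // nb                              # number of full blocks
    num = (nblocks // nprocs) * nb                 # whole rounds of blocks
    extrablocks = nblocks % nprocs                 # leftover full blocks
    if mydist < extrablocks:
        num += nb                                  # one extra full block
    elif mydist == extrablocks:
        num += n % nb                              # the trailing partial block
    return num

# 10 columns in blocks of 2 dealt over 3 process columns: 4 + 4 + 2
counts = [numroc(10, 2, p, 0, 3) for p in range(3)]
```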

Our implementation of ScaLAPACK emphasizes the mathematical view of a matrix over its storage. In fact, it is even possible to reuse our interface for a different block data distribution that would not fit in the block-cyclic scheme.





List of Notation

  

 
1D 		 One-dimensional (as in 1D data distribution)

β Bandwidth (or throughput) for the network

2D Two-dimensional (as in 2D data distribution)

C_f Total number of floating-point operations

C_m Total number of messages

CSRC_ Entry in DESC_ indicating the process column over which

the first column of the array is distributed 

CTXT_ Entry in DESC_ indicating the BLACS context associated with the

global array  

C_v Total number of data items communicated

DESC_ Array descriptor for a global array 

DESCA Array descriptor for global array A

DESCB Array descriptor for global array B

DLEN_ Length of the descriptor DESC_ 

DTYPE_ First entry of the descriptor DESC_, identifying

the descriptor type  

E() Estimated parallel efficiency 

γ Time per floating point operation in matrix-matrix multiply

Gflop/s Gigaflops (10^9 floating point operations) per second

IA Global row index in the global array A indicating the

first row of sub(A)

ICTXT BLACS context associated with a process grid 

INFO Output integer argument of driver and computational routines

indicating the success or failure of the routine

JA Global column index in the global array A indicating the

first column of sub(A)

lcm(P_r, P_c) Least common multiple of (P_r, P_c)

LLD_ Entry in DESC_ indicating the local leading dimension of the

local array 

LLD_A Local leading dimension of the local array A

LLD_B Local leading dimension of the local array B

LOCc(K_) Number of columns that a process receives if K_ columns

of a matrix are distributed over c columns of its process row. 

LOCr(K_) Number of rows that a process would receive if K_ rows

of a matrix are distributed over r rows of its process column. 

M Global number of rows of the distributed submatrix tex2html_wrap_inline12098

M_ Entry of DESC_ indicating the number of rows in the global

array 

MB Global row block size for partitioning the global matrix

MB_ Entry in DESC_ indicating the block size used to distribute

the rows of the global array 

MB/s Megabyte per second

Mflop/s Megaflops (tex2html_wrap_inline12146 floating point operations) per second

MYCOL The calling process's column coordinate in the process grid. 

MYROW The calling process's row coordinate in the process grid 

N Global number of columns of the distributed submatrix tex2html_wrap_inline12098

N_ Entry of DESC_ indicating the number of columns in the

global array 

NB Global column block size for partitioning the global matrix

NB_ Entry in DESC_ indicating the block size used to distribute

the columns of the global array 

NBRHS Global column block size for the solution matrix 

NPCOL Number of process columns in the process grid (equivalent to tex2html_wrap_inline12162) 

NPROCS Total number of processes in the process grid (equivalent to P) 

NPROW Number of process rows in the process grid (equivalent to tex2html_wrap_inline12172) 

NRHS Global number of columns in the global solution matrix B 

NUMROC TOOLS routine used to calculate the number of rows or columns

in a local array 

P Total number of processes in the process grid, i.e., tex2html_wrap_inline12182 

tex2html_wrap_inline12162 Number of process columns in the process grid 

tex2html_wrap_inline12172 Number of process rows in the process grid 

RSRC_ Entry in DESC_ indicating the process row over which the

first row of the array is distributed 

tex2html_wrap_inline12098 Distributed submatrix A(IA:IA+M-1,JA:JA+N-1)

T() Estimated parallel execution time 

tex2html_wrap_inline12198() Estimated serial execution time 

tex2html_wrap_inline12202 Time per floating-point operation (typically tex2html_wrap_inline12204) 

tex2html_wrap_inline12208 Time per message (latency) 

tex2html_wrap_inline12212 Time required to solve a problem of size N on P processors 

tex2html_wrap_inline12220 Time required to solve a problem of size N/2 on P processors 

tex2html_wrap_inline12228 Time per data item communicated 





Susan Blackford
Tue May 13 09:21:01 EDT 1997

Process Grid

   

The P processes of an abstract parallel computer are often represented as a one-dimensional linear array of processes labeled 0, 1, ..., P-1. For reasons described below, it is often more convenient to map this one-dimensional array of processes into a two-dimensional rectangular grid, or process grid. (A process grid is also referred to as a process mesh.)  This grid will have P_r process rows and P_c process columns, where P_r * P_c = P. A process can now be referenced by its row and column coordinates, (p_r, p_c), within the grid, where 0 <= p_r < P_r and 0 <= p_c < P_c. An example of such a mapping is shown in figure 4.1, where P = 8, P_r = 2, and P_c = 4.

[Figure 4.1: Eight processes mapped to a 2 x 4 process grid]

In figure 4.1, the processes are mapped to the process grid by using row-major order ; in other words, the numbering of the processes increases sequentially across each row. Similarly, the processes can be mapped in column-major order , whereby the numbering of the processes proceeds down each column of the process grid. The BLACS routine BLACS_GRIDINIT  performs this task of mapping the processes to the process grid. By default, BLACS_GRIDINIT assumes a row-major ordering, although a column-major ordering can also be specified. The companion routine BLACS_GRIDMAP is a more general form of BLACS_GRIDINIT and allows the user to define the mapping of the processes. Refer to Appendix D.3 for further details.
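As an aside, the two orderings can be sketched in a few lines of Python (an illustrative sketch only, not part of the BLACS; the helper name grid_coords is made up here):

```python
def grid_coords(rank, nprow, npcol, order="R"):
    """Map a linear process rank 0..P-1 to (myrow, mycol) on an
    nprow x npcol grid, using row-major ("R") or column-major ("C")
    ordering -- the two orderings BLACS_GRIDINIT supports."""
    if order == "R":
        return rank // npcol, rank % npcol   # ranks increase across each row
    return rank % nprow, rank // nprow       # ranks increase down each column

if __name__ == "__main__":
    # Eight processes on a 2 x 4 grid, row-major, as in figure 4.1:
    for p in range(8):
        print(p, "->", grid_coords(p, 2, 4, "R"))
```

With row-major ordering, process 5 lands at grid position (1, 1); with column-major ordering it would land at (1, 2).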

All ScaLAPACK routines, with the exception of the band linear system routines, allow the processes to be viewed as a one-dimensional or two-dimensional process grid. The band routines support only one-dimensional process grids.




Contexts

 

In ScaLAPACK, and thus the BLACS, each process grid is enclosed in a context  . Similarly, a context is associated with every global matrix in ScaLAPACK. The use of a context provides the ability to have separate ``universes'' of message passing. This means that a process grid can safely communicate even if other (possibly overlapping) process grids are also communicating. Thus, a context is a powerful mechanism for avoiding unintentional nondeterminism in message passing and provides support for the design of safe, modular software libraries. In MPI  , this concept is referred to as a communicator.

A context partitions the communication space. A message sent from one context cannot be received in another context. The use of separate communication contexts by distinct libraries (or distinct library routine invocations) insulates communication internal to a specific library routine from external communication that may be going on within the user's program.

In most respects, we can use the terms process grid and context interchangeably. For example, we may say we perform an operation ``in context X'' or ``in process grid X''. The slight difference here is that the user may define two identical process grids (say, two 1 x 3 process grids, both of which use processes 0, 1, and 2), but each will be enclosed in its own context, so that they are distinct in operation, even though they are indistinguishable from a process grid standpoint.

Another example of the use of context might be to define a normal two-dimensional process grid within which most computation takes place. However, in certain portions of the code it may be more convenient to access the processes as a one-dimensional process grid, whereas at other times we may wish, for instance, to share information among nearest neighbors. In such cases, we will want each process to have access to three contexts: the two-dimensional process grid, the one-dimensional process grid, and a small process grid that contains the process and its nearest neighbors.

Therefore, we see that context allows us to

  • create arbitrary groups of processes,
  • create an indeterminate number of overlapping and/or disjoint process grids, and
  • isolate the process grids so that they do not interfere with each other.

The BLACS has two process grid creation  routines, BLACS_GRIDINIT and BLACS_GRIDMAP,    that create a process grid and its enclosing context. These routines return context handles  , which are simple integers, assigned by the BLACS to identify the context. Subsequent BLACS routines will be passed these handles, which allow the BLACS to determine from which context/process grid a routine is being called. The user should never alter or change these handles; they are opaque data objects that are only meaningful for the BLACS routines.

A defined context consumes resources. It is therefore advisable to release contexts when they are no longer needed. This release is done via the routine BLACS_GRIDEXIT . When the entire BLACS system is shut down (via a call to BLACS_EXIT ), all outstanding contexts are automatically freed. Further details about these routines can be found in Appendix D.3.



Scoped Operations

   

An operation that involves more than just a sender and a receiver is called a scoped operation . All processes that participate in a scoped operation are said to be within the operation's scope.

On a system using a linear array of processes, the only natural scope is all processes. Using a two-dimensional rectangular process grid, we have three natural scopes, as shown in table 4.1. Refer to [24] for further details.

[Table 4.1: Scopes provided by a two-dimensional process grid]
    Scope    Meaning
    Row      All processes in a process row participate.
    Column   All processes in a process column participate.
    All      All processes in the process grid participate.

These groupings of processes are of particular interest to the linear algebra programmer, since distributed data decompositions of a two-dimensional array (a linear algebra matrix) tend to follow this process mapping. For instance, all of a distributed matrix row can be found on a process row.




Array Descriptors

   

An array descriptor    is associated with each global array. This array stores the information required to establish the mapping between each global array entry and its corresponding process and memory location. The notations x_ used in the entries of the array descriptor denote the attributes of a global array. For example, M_ denotes the number of rows and M_A specifically denotes the number of rows in global matrix A. These descriptors assume different storage for the global data. The length of the array descriptor  is specified by DLEN_ and varies according to the descriptor type DTYPE_.

Array descriptors are provided for

  • dense matrices,
  • band and tridiagonal matrices, and
  • out-of-core matrices
and are differentiated by the DTYPE_ entry in the descriptor. At the present time, the following values of DESC_(DTYPE_) are valid.

[Table 4.2: Valid values of DESC_(DTYPE_)]




In-core Dense Matrices

 

The choice of an appropriate data distribution heavily depends on the characteristics or flow of the computation in the algorithm. For dense matrix computations, ScaLAPACK assumes the data to be distributed according to the two-dimensional block-cyclic data layout scheme. This section presents this distribution and demonstrates how the ScaLAPACK software encodes this essential information as well as the related software conventions.

Dense matrix computations feature a large amount of parallelism, so that a wide variety of distribution schemes have the potential for achieving high performance. The block-cyclic data layout has been selected for the dense algorithms implemented in ScaLAPACK principally because of its scalability [51], load balance, and communication [76] properties. The block-partitioned computation proceeds in consecutive order just like a conventional serial algorithm. This essential property of the block cyclic data layout explains why the ScaLAPACK design has been able to reuse the numerical and software expertise of the sequential LAPACK library.






The Two-dimensional Block-Cyclic Distribution

    

In this section, we consider the data layout of dense matrices on distributed-memory machines, with the goal of making dense matrix computations as efficient as possible. We shall discuss a sequence of data layouts, starting with the most simple, obvious, and inefficient one and working up to the more complicated but efficient layout that ScaLAPACK ultimately uses. Even though our justification is based on Gaussian elimination, analysis of many other algorithms has led to the same set of layouts. As a result, these layouts have been standardized as part of the High Performance Fortran standard [91], with corresponding data declarations as part of that language.

The two main issues in choosing a data layout for dense matrix computations are

  • load balance, or splitting the work reasonably evenly among the processors throughout the algorithm, and
  • use of the Level 3 BLAS during computations on a single processor, to account for the memory hierarchy on each processor.

It will help to remember the pictorial representation of Gaussian elimination below. As the algorithm proceeds, it works on successively smaller square southeast corners of the matrix. In addition, there is extra Level 2 BLAS work to factorize the current column panel.

[Figure 4.2: Gaussian elimination using Level 3 BLAS]

For convenience we will number the processes from 0 to P-1, and matrix columns (or rows) from 1 to N. The following two figures show the sequence of data layouts we will consider. In all cases, each submatrix is labeled with the number of the process (from 0 to 3) that contains it. Process 0 owns the shaded submatrices.

Consider the layout illustrated on the left of figure 4.3, the one-dimensional block column distribution. This distribution

[Figure 4.3: The one-dimensional block and cyclic column distributions]

assigns a block of contiguous columns of a matrix to successive processes. Each process receives only one block of columns of the matrix. Column k is stored on process floor((k-1)/tc), where tc = ceil(N/P) is the maximum number of columns stored per process. In the figure, N=16 and P=4. This layout does not permit good load balancing for the above Gaussian elimination algorithm because as soon as the first tc columns are complete, process 0 is idle for the rest of the computation. The transpose of this layout, the one-dimensional block row distribution, has a similar shortfall for dense computations.

The second layout, illustrated on the right of figure 4.3, the one-dimensional cyclic column distribution, addresses this problem by assigning column k to process (k-1) mod P. In the figure, N=16 and P=4. With this layout, each process owns approximately 1/P of the square southeast corner of the matrix, so the load balance is good. However, since single columns (rather than blocks) are stored, we cannot use the Level 2 BLAS to factorize the column panel and may not be able to use the Level 3 BLAS to update the trailing submatrix. The transpose of this layout, the one-dimensional cyclic row distribution, has a similar shortfall.

The third layout, shown on the left of figure 4.4, the one-dimensional block-cyclic column distribution, is a compromise between the distribution schemes shown in figure 4.3. We choose a block size NB, divide the columns into groups of size NB, and distribute these groups in a cyclic manner. This means column k is stored on process floor((k-1)/NB) mod P. In fact, this layout includes the first two as special cases, NB = ceil(N/P) and NB = 1, respectively. In the figure, N=16, P=4, and NB=2. For NB larger than 1, this has a slightly worse load balance than the one-dimensional cyclic column distribution, but can use the Level 2 BLAS and Level 3 BLAS for the local computations. For NB less than tc, it has a better load balance than the one-dimensional block column distribution, but can call the BLAS only on smaller subproblems. Hence, it takes less advantage of the local memory hierarchy. Moreover, this layout has the disadvantage that the factorization of the current column panel will take place on one process (in the natural situation where column blocks in the layout correspond to column blocks in Gaussian elimination), thereby representing a serial bottleneck.
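The three column layouts discussed so far differ only in which process owns global column k. A short Python sketch (illustrative only; the helper names are made up here) makes the comparison concrete for the N=16, P=4 cases shown in the figures:

```python
import math

def owner_block(k, n, p):
    """1D block column layout: process owning 1-based column k."""
    tc = math.ceil(n / p)              # tc = maximum columns per process
    return (k - 1) // tc

def owner_cyclic(k, p):
    """1D cyclic column layout."""
    return (k - 1) % p

def owner_block_cyclic(k, p, nb):
    """1D block-cyclic column layout with block size nb."""
    return ((k - 1) // nb) % p

if __name__ == "__main__":
    N, P, NB = 16, 4, 2                # the parameters of figures 4.3 and 4.4
    print([owner_block(k, N, P) for k in range(1, N + 1)])
    print([owner_cyclic(k, P) for k in range(1, N + 1)])
    print([owner_block_cyclic(k, P, NB) for k in range(1, N + 1)])
```

Note that owner_block_cyclic reduces to owner_block when nb = ceil(n/p) and to owner_cyclic when nb = 1, which is the special-case relationship stated above.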

[Figure 4.4: The one-dimensional block-cyclic column and the two-dimensional block-cyclic distributions]

This serial bottleneck is eased by the fourth layout, shown on the right of figure 4.4, the two-dimensional block-cyclic distribution. Here, we think of our P processes arranged in a P_r x P_c rectangular array of processes, indexed in a two-dimensional fashion by (p_r, p_c), with 0 <= p_r < P_r and 0 <= p_c < P_c. All the processes (p_r, p_c) with a fixed p_c are referred to as process column p_c. All the processes (p_r, p_c) with a fixed p_r are referred to as process row p_r. Thus, this layout includes all the previous layouts, and their transposes, as special cases. In the figure, N=16, P=4, P_r = P_c = 2, and MB=NB=2. This layout permits P_c-fold parallelism in any column, and calls to the Level 2 BLAS and Level 3 BLAS on local subarrays. Finally, this layout also features good scalability properties, as shown in [61].

The two-dimensional block cyclic distribution scheme is the data layout that is used in the ScaLAPACK library for dense matrix computations.



Local Storage Scheme and Block-Cyclic Mapping

 

The block-cyclic distribution scheme is a mapping of a set of blocks onto the processes. The previous section informally described this mapping as well as some of its properties. To be complete, we must now explain how the blocks that are mapped to the same process are arranged and stored in the local process memory. In other words, we shall describe the precise mapping that associates with a matrix entry, identified by its global indexes, the coordinates of the process that owns it and its local position within that process's memory.

Suppose we have an array of length N to be stored on P processes. By convention, the array entries are numbered 1 through N and the processes are numbered 0 through P-1. First, the array is divided into contiguous blocks of size NB. When NB does not divide N evenly, the last block of array elements will contain only N mod NB entries instead of NB. By convention, these blocks are numbered starting from zero and dealt out to the processes like a deck of cards. In other words, if we assume that process 0 receives the first block, block b is assigned to the process of coordinate b mod P. The blocks assigned to the same process are stored contiguously in memory. The mapping of an array entry globally indexed by I is defined by the following analytical equation:
    I = (l*P + p)*NB + x,

where I is a global index in the array, l is the local block coordinate in which this entry resides, p is the coordinate of the process owning that block, and finally x is the (1-based) coordinate within that block where the global array entry of index I is to be found. It is then fairly easy to establish the analytical relationship between these variables. One obtains:

    l = floor((I-1) / (P*NB)),   p = floor((I-1) / NB) mod P,   x = ((I-1) mod NB) + 1.        (4.1)

These equations allow one to determine the local information, i.e., the local index (l, x) as well as the process coordinate p, corresponding to a global entry identified by its global index I, and conversely. Table 4.3 illustrates this mapping for the block layout when P=2 and N=16, i.e., NB=8. At most one block is assigned to each process.

[Table 4.3: One-dimensional block mapping example for P=2 and N=16]
    process 0: entries 1-8 (block 0);  process 1: entries 9-16 (block 1)

This example of the one-dimensional block distribution mapping can be expressed in HPF  by using the following statements:

      REAL :: X( N )
!HPF$ PROCESSORS PROC( P )
!HPF$ DISTRIBUTE X( BLOCK( NB ) ) ONTO PROC
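Equation 4.1 is easy to transcribe directly. The following Python sketch (illustrative only, not a ScaLAPACK routine; the function names are made up here) maps a global index I to its (l, p, x) triple and back:

```python
def block_cyclic_1d(I, P, NB):
    """Equation 4.1: 1-based global index I -> (local block l,
    owning process p, 1-based offset x within the block)."""
    l = (I - 1) // (P * NB)
    p = ((I - 1) // NB) % P
    x = (I - 1) % NB + 1
    return l, p, x

def global_index(l, p, x, P, NB):
    """The inverse mapping: recover I from (l, p, x)."""
    return (l * P + p) * NB + x

if __name__ == "__main__":
    # Block layout of Table 4.3: P = 2, N = 16, NB = 8.
    print([block_cyclic_1d(I, 2, 8) for I in (1, 8, 9, 16)])
```

For example, entry I = 9 yields (l, p, x) = (0, 1, 1): the first entry of the single block owned by process 1, in agreement with the block mapping of Table 4.3.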

Table 4.4 illustrates Equation 4.1 for the cyclic layout, i.e., NB=1 when P=2 and N=16.

[Table 4.4: One-dimensional cyclic mapping example for P=2 and N=16]
    process 0: entries 1, 3, 5, ..., 15;  process 1: entries 2, 4, 6, ..., 16

This example of the one-dimensional cyclic distribution mapping can be expressed in HPF  by using the following statements:

      REAL :: X( N )
!HPF$ PROCESSORS PROC( P )
!HPF$ DISTRIBUTE X( CYCLIC ) ONTO PROC

Table 4.5 illustrates Equation 4.1 for the block-cyclic layout when P=2, NB=3 and N=16.

[Table 4.5: One-dimensional block-cyclic mapping example for P=2, NB=3 and N=16]
    process 0: entries 1-3, 7-9, 13-15;  process 1: entries 4-6, 10-12, 16

This example of the one-dimensional block-cyclic distribution mapping can be expressed in HPF  by using the following statements:

      REAL :: X( N )
!HPF$ PROCESSORS PROC( P )
!HPF$ DISTRIBUTE X( CYCLIC( NB ) ) ONTO PROC

There is in fact no real reason to always deal out the blocks starting with process 0. Indeed, it is sometimes useful to start the data distribution with the process of arbitrary coordinate SRC, in which case Equation 4.1 becomes:

    l = floor((I-1) / (P*NB)),   p = (SRC + floor((I-1) / NB)) mod P,   x = ((I-1) mod NB) + 1.        (4.2)

Table 4.6 illustrates Equation 4.2 for the block-cyclic layout when P=2, SRC=1, NB=3, and N=16.

[Table 4.6: One-dimensional block-cyclic mapping example for P=2, SRC=1, NB=3 and N=16]
    process 0: entries 4-6, 10-12, 16;  process 1: entries 1-3, 7-9, 13-15

This example of the one-dimensional block-cyclic distribution mapping can be expressed in HPF  by using the following statements:

      REAL :: X( N )
!HPF$ PROCESSORS PROC( P )
!HPF$ TEMPLATE T( N + P*NB )
!HPF$ DISTRIBUTE T( CYCLIC( NB ) ) ONTO PROC
!HPF$ ALIGN X( I ) WITH T( SRC*NB + I )
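Equation 4.2 differs from Equation 4.1 only in the process coordinate, which is offset by SRC. A short Python check (illustrative only; the function name is made up here) reproduces the dealing pattern of Table 4.6:

```python
def block_cyclic_1d_src(I, P, NB, SRC):
    """Equation 4.2: like Equation 4.1, but the first block goes to
    the process of coordinate SRC instead of process 0."""
    l = (I - 1) // (P * NB)
    p = (SRC + (I - 1) // NB) % P
    x = (I - 1) % NB + 1
    return l, p, x

if __name__ == "__main__":
    # Parameters of Table 4.6: P = 2, SRC = 1, NB = 3, N = 16.
    owners = [block_cyclic_1d_src(I, 2, 3, 1)[1] for I in range(1, 17)]
    print(owners)   # [1, 1, 1, 0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 1, 0]
```

Setting SRC = 0 recovers Equation 4.1 exactly.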

In the two-dimensional case, assuming that the matrix is partitioned into MB x NB blocks and that the first block is given to the process of coordinates (RSRC, CSRC), the analytical formulas given above for the one-dimensional case are simply reused independently in each dimension of the P_r x P_c process grid. For example, the matrix entry (I,J) is thus to be found in the process of coordinates (p_r, p_c), within the local (l,m) block, at the position (x,y) given by:

    p_r = (RSRC + floor((I-1)/MB)) mod P_r,   l = floor((I-1) / (P_r*MB)),   x = ((I-1) mod MB) + 1,
    p_c = (CSRC + floor((J-1)/NB)) mod P_c,   m = floor((J-1) / (P_c*NB)),   y = ((J-1) mod NB) + 1.

These formulas specify how an M_ by N_  matrix A is mapped and stored on the process grid. It is first decomposed into MB_ by NB_  blocks starting at its upper left corner. These blocks are then uniformly distributed across the process grid in a cyclic manner.

Every process owns a collection of blocks, which are contiguously stored by column in a two-dimensional ``column major'' array.

This local storage convention allows the ScaLAPACK software to use the local memory hierarchy efficiently, by calling the BLAS on subarrays that may be larger than a single MB_ by NB_ block. We present in figure 4.5  the mapping of a 5 x 5 matrix partitioned into 2 x 2 blocks mapped onto a 2 x 2 process grid (i.e., M_ = N_ = 5, MB_ = NB_ = 2, and P_r = P_c = 2). The local entries of every matrix column are contiguously stored in the processes' memories.

[Figure 4.5: A 5 x 5 matrix decomposed into 2 x 2 blocks mapped onto a 2 x 2 process grid]

In figure 4.5, the process of coordinates (0,0) owns four blocks. The matrix entries of the global columns 1, 2, and 5 are contiguously stored in that process's memory. Finally, these columns are themselves contiguously stored, forming a conventional two-dimensional local array. In that local array A, the entry A(2,3) contains the value of the global matrix entry A(2,5). This example would be expressed in HPF  as:

      REAL :: A( 5, 5 )
!HPF$ PROCESSORS PROC( 2, 2 )
!HPF$ DISTRIBUTE A( CYCLIC( 2 ), CYCLIC( 2 ) ) ONTO PROC
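Applying the one-dimensional formulas independently in each dimension gives the full two-dimensional mapping. The sketch below (illustrative Python, not part of ScaLAPACK; the function name is made up here) checks the example above: global entry (2,5) of the 5 x 5 matrix lands on process (0,0) at local position (2,3):

```python
def block_cyclic_2d(I, J, MB, NB, Pr, Pc, RSRC=0, CSRC=0):
    """Owning process (pr, pc), local block (l, m), and 1-based offsets
    (x, y) for global entry (I, J) of a matrix in MB x NB blocks over a
    Pr x Pc grid, with the first block on process (RSRC, CSRC)."""
    pr = (RSRC + (I - 1) // MB) % Pr
    pc = (CSRC + (J - 1) // NB) % Pc
    l, m = (I - 1) // (Pr * MB), (J - 1) // (Pc * NB)
    x, y = (I - 1) % MB + 1, (J - 1) % NB + 1
    return (pr, pc), (l, m), (x, y)

if __name__ == "__main__":
    (pr, pc), (l, m), (x, y) = block_cyclic_2d(2, 5, 2, 2, 2, 2)
    # Position within the process's column-major local array:
    print((pr, pc), (l * 2 + x, m * 2 + y))   # (0, 0) (2, 3)
```

The local row index l*MB + x and local column index m*NB + y recover the position in the conventional two-dimensional local array described above.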

Determining the number of     rows or columns of a global dense matrix that a specific process receives is an essential task for the user. ScaLAPACK provides a tool routine, NUMROC, to perform this function. The notation LOC_r() and LOC_c() is used to reflect these local quantities throughout the leading comments of the source code and is reflected in the sample argument description in section 4.3.5. The values of LOC_r()  and LOC_c()  computed by NUMROC are precise calculations.

However, if users want a general idea of the size of a local array, they can perform the following ``back of the envelope'' calculation to obtain an upper bound on the quantity.

An upper bound on the value of LOC_r(M_) can be calculated as:

    LOC_r(M_) <= ceil(M_ / (MB_ * P_r)) * MB_

or equivalently as

    LOC_r(M_) <= ceil(ceil(M_ / MB_) / P_r) * MB_

Similarly, an upper bound on the value of LOC_c(N_) can be calculated as

    LOC_c(N_) <= ceil(N_ / (NB_ * P_c)) * NB_

or equivalently as

    LOC_c(N_) <= ceil(ceil(N_ / NB_) / P_c) * NB_

Note that this calculation can yield a gross overestimate of the amount of space actually required.
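For comparison with the upper bounds above, here is a Python transcription of the NUMROC computation (a sketch written from the routine's description; the authoritative version is the Fortran TOOLS routine shipped with ScaLAPACK):

```python
def numroc(n, nb, iproc, isrc, nprocs):
    """Exact number of rows (or columns) of an n-long dimension, dealt
    out in blocks of size nb starting at process isrc, that land on the
    process of coordinate iproc among nprocs processes."""
    mydist = (nprocs + iproc - isrc) % nprocs   # distance from the source process
    nblocks = n // nb                           # number of complete blocks
    num = (nblocks // nprocs) * nb              # whole rounds of blocks
    extrablks = nblocks % nprocs                # leftover complete blocks
    if mydist < extrablks:
        num += nb                               # this process gets one extra full block
    elif mydist == extrablks:
        num += n % nb                           # this process gets the final partial block
    return num

if __name__ == "__main__":
    # 5 x 5 matrix, 2 x 2 blocks, 2 x 2 grid (figure 4.5): process row 0
    # owns 3 rows and process row 1 owns 2, while the upper bound
    # ceil(5/(2*2)) * 2 = 4 holds for both.
    print([numroc(5, 2, p, 0, 2) for p in (0, 1)])   # [3, 2]
```

The exact values 3 and 2 illustrate how loose the back-of-the-envelope bound of 4 can be.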



Array Descriptor for In-core Dense Matrices

 

The array descriptor DESC_, whose type is defined as DESC_(DTYPE_)=1, is an integer array of length 9. It is used for the ScaLAPACK routines solving dense linear systems and eigenvalue problems. All global vector and matrix operands are assumed to be distributed on the process grid according to the one- or two-dimensional block-cyclic data distribution scheme. Refer to section 4.3.1 for further details on block-cyclic data distribution.

A general M_ by N_ distributed matrix is defined by its dimensions, the size of the elementary MB_ by NB_ block used for its decomposition, the coordinates of the process having in its local memory the first matrix entry (RSRC_,CSRC_), and the BLACS context (CTXT_) in which this matrix is defined. Finally, a local leading dimension LLD_ is associated with the local memory address pointing to the data structure used for the local storage of this distributed matrix.

Let us assume, for example, that we have an array descriptor DESCA for a dense global matrix A. As previously mentioned, the notations x_ used in the entries of the array descriptor denote the attributes of a global array. For readability of the code, we have associated symbolic names for the descriptor entries. For example, M_ denotes the number of rows and M_A specifically denotes the number of rows in global matrix A.

[Table 4.7: Content of the array descriptor for in-core dense matrices]
    DESC_()  Symbolic name  Scope     Definition
    1        DTYPE_A        (global)  Descriptor type (= 1 for dense matrices)
    2        CTXT_A         (global)  BLACS context handle for the process grid
    3        M_A            (global)  Number of rows in the global array A
    4        N_A            (global)  Number of columns in the global array A
    5        MB_A           (global)  Row block size used to distribute the rows of A
    6        NB_A           (global)  Column block size used to distribute the columns of A
    7        RSRC_A         (global)  Process row over which the first row of A is distributed
    8        CSRC_A         (global)  Process column over which the first column of A is distributed
    9        LLD_A          (local)   Local leading dimension of the local array

For a detailed description of the LOC_r() notation, please refer to section 4.3.2.




Example

 

As mentioned in section 4.3.1, ScaLAPACK assumes a one-dimensional or two-dimensional block-cyclic distribution for the dense matrix computational routines. The block-cyclic distribution is a generalization of the block and cyclic distributions. In one dimension, blocks of rows of size MB or blocks of columns of size NB are cyclically distributed over the processes. In two dimensions, blocks of size MB x NB are distributed cyclically over the processes.   Example programs can be found in section 2.3 and Appendix C.1.

According to the two-dimensional block-cyclic data distribution scheme, an M_ by N_  dense matrix is first decomposed into MB_ by NB_  blocks starting at its upper left corner. These blocks are then uniformly distributed in each dimension of the process grid. Thus, every process owns a collection of blocks, which are locally and contiguously stored in a two-dimensional ``column major'' array. The partitioning of a 9 x 9 matrix into 2 x 2 blocks and the mapping of these blocks onto a 2 x 3 process grid are shown in figure 4.6. The local entries of every matrix column are contiguously stored in the processes' memories.

[Figure 4.6: A 9 x 9 matrix decomposed into 2 x 2 blocks mapped onto a 2 x 3 process grid]

The number of rows and the number of columns of a matrix that a specific process owns, denoted LOC_r and LOC_c respectively, may differ from process to process in the process grid. Likewise, there is a local leading dimension LLD_  for each process in the process grid, and this value may be different on each process. For example, we can see on the right of figure 4.6 that the local array stored in process row 0 must have a local leading dimension LLD_ greater than or equal to 5, and greater than or equal to 4 in process row 1.

[Table 4.8: Sizes of the local arrays]
                     process column 0   process column 1   process column 2
    process row 0:        5 x 4              5 x 3              5 x 2
    process row 1:        4 x 4              4 x 3              4 x 2

Table 4.8 gives the values of the local array sizes associated with figure 4.6.
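The entries of Table 4.8 can be checked by brute force (an illustrative Python sketch; the helper name is made up here): enumerate which process row owns each global row and which process column owns each global column, then count.

```python
def owned_indices(n, nb, nprocs):
    """Per process coordinate, the 1-based global indices it owns under
    a block-cyclic distribution with block size nb."""
    owned = [[] for _ in range(nprocs)]
    for k in range(1, n + 1):
        owned[((k - 1) // nb) % nprocs].append(k)
    return owned

if __name__ == "__main__":
    rows = owned_indices(9, 2, 2)    # 9 rows, MB_ = 2, over P_r = 2 process rows
    cols = owned_indices(9, 2, 3)    # 9 columns, NB_ = 2, over P_c = 3 process columns
    # LOC_r x LOC_c for every grid position, as in Table 4.8:
    print([[(len(r), len(c)) for c in cols] for r in rows])
```

Process row 0 owns global rows 1, 2, 5, 6, and 9 (five rows), which is exactly why its local leading dimension LLD_ must be at least 5.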



Submatrix Argument Descriptions

   

As previously mentioned, the ScaLAPACK routines that solve dense linear systems and eigenvalue problems assume that all global arrays are distributed in a one- or two-dimensional block-cyclic fashion. After a global vector or matrix has been block-cyclically distributed over a process grid, the user may choose to perform an operation on a portion of the global matrix. This subset of the global matrix is referred to as a ``submatrix'' and is referenced through the use of six arguments in the calling sequence: the number of rows of the submatrix M, the number of columns of the submatrix N, the local array A containing the global array, the row index IA, the column index JA, and the array descriptor of the global array DESCA. This argument convention allows for a global view of the matrix operands and the global addressing of distributed matrices, as illustrated in figure 4.7. This scheme allows the complete specification of the submatrix A(IA:IA+M-1,JA:JA+N-1) on which to operate.

[Figure 4.7: Global view of the matrix operands]

The description of a global dense subarray consists of (M, N, A, IA, JA, DESCA):

  • the number of rows and columns M and N of the global subarray,
  • a pointer to the local array containing the entire global array (A, for example),
  • the row and column indices, (IA, JA), in the global array, and
  • the array descriptor, DESCA, for the global array.

The names of the row and column indices for the global array have the form I<array_name>  and J<array_name> , respectively. The array descriptor has a name of the form DESC<array_name> .    The length of the array descriptor is specified by DLEN_ and varies according to the descriptor type DTYPE_.

Included in the leading comments of each subroutine (immediately preceding the Argument section) is a brief note describing the array descriptor and some commonly used expressions in calculating workspace.

The style of the argument descriptions for dense matrices is illustrated by the following example. As previously mentioned, the notations x_ used in the entries of the array descriptor denote the attributes of a global array. For readability of the code, we have associated symbolic names with the descriptor entries. For example, M_ denotes the number of rows, and M_A specifically denotes the number of rows in the global matrix A. Complete details can be found in section 4.3.3.

    M
    (global input) INTEGER
    The number of rows of the matrix A(IA:IA+M-1,JA:JA+N-1) to be operated on. M >= 0 and IA+M-1 <= M_A.
    N
    (global input) INTEGER
    The number of columns of the matrix A(IA:IA+M-1,JA:JA+N-1) to be operated on. N >= 0 and JA+N-1 <= N_A.
    NRHS
    (global input) INTEGER
    The number of right hand side vectors, i.e., the number of columns of the matrix B(IB:IB+N-1,JB:JB+NRHS-1). NRHS >= 0.
    A
    (local input/local output) REAL pointer into the local memory to an array of local dimension (LLD_A, LOCc(JA+N-1))
    IA
    (global input) INTEGER
    The row index in the global array A indicating the first row of A(IA:IA+M-1,JA:JA+N-1).
    JA
    (global input) INTEGER
    The column index in the global array A indicating the first column of A(IA:IA+M-1,JA:JA+N-1).
    DESCA
    (global and local input) INTEGER array of dimension DLEN_
    The array descriptor for the global matrix A.
    B
    (local input/local output) REAL pointer into the local memory to an array of local dimension (LLD_B, LOCc(JB+NRHS-1)).
    IB
    (global input) INTEGER
    The row index in the global array B indicating the first row of B(IB:IB+N-1,JB:JB+NRHS-1).
    JB
    (global input) INTEGER
    The column index in the global array B indicating the first column of B(IB:IB+N-1,JB:JB+NRHS-1).
    DESCB
    (global and local input) INTEGER array of dimension DLEN_
    The array descriptor for the global matrix B.

The description of each argument gives

  • A classification of the argument as (local input), (global and local input), (local input/local output), (global input), (local output), (global output), (global input/global output), (local input or local output), (local or global input), (local workspace), or (local workspace/local output).
  • The type of the argument;
  • For an array, its dimension(s).

    These dimensions are often expressed in terms of LOCr() and LOCc() calculations. For further details, please refer to section 4.3.2.

  • A specification of the value(s) that must be supplied for the argument (if it is an input argument), or of the value(s) returned by the routine (if it is an output argument), or both (if it is an input/output argument). In the last case, the two parts of the description are introduced by the phrases ``On entry'' and ``On exit''.
  • For a scalar input argument, any constraints that the supplied values must satisfy (such as N >= 0 in the example above).
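The scalar constraints listed above can be checked mechanically. The following sketch (a hypothetical helper, not a ScaLAPACK routine) validates a dense submatrix description in the Fortran 1-based convention:

```python
def check_submatrix_args(m, n, ia, ja, m_a, n_a):
    """Validate the global submatrix A(IA:IA+M-1, JA:JA+N-1) against the
    global dimensions M_A x N_A recorded in the descriptor (1-based
    indices, as in Fortran)."""
    assert m >= 0 and n >= 0, "M and N must be nonnegative"
    assert ia >= 1 and ja >= 1, "IA and JA are 1-based global indices"
    assert ia + m - 1 <= m_a, "submatrix rows exceed M_A"
    assert ja + n - 1 <= n_a, "submatrix columns exceed N_A"

check_submatrix_args(m=3, n=2, ia=4, ja=8, m_a=9, n_a=9)      # valid
try:
    check_submatrix_args(m=7, n=2, ia=4, ja=8, m_a=9, n_a=9)  # IA+M-1 = 10 > M_A = 9
except AssertionError as e:
    print("rejected:", e)
```

A ScaLAPACK routine performs equivalent checks internally and reports a violation through its INFO argument rather than raising an error.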



Matrix and Vector Storage Conventions

Whether a dense coefficient matrix  operand is nonsymmetric, symmetric or Hermitian, the entire two-dimensional global array is distributed onto the process grid.

For symmetric and Hermitian matrix operands, only the upper (UPLO='U') triangle or the lower (UPLO='L') triangle of the global array is accessed. For triangular matrix operands, the argument UPLO defines whether the matrix is upper (UPLO='U') or lower (UPLO='L') triangular. Only the elements of the relevant triangle of the global array are accessed. Some ScaLAPACK routines have an option to handle unit triangular matrix operands (that is, triangular matrices with diagonal elements = 1). This option is specified by an argument DIAG. If DIAG = 'U' (Unit triangular), the local array elements corresponding to the diagonal elements of the matrix are not referenced by the ScaLAPACK routines.

If an input matrix operand is Hermitian , the imaginary parts of the diagonal elements are zero, and thus the imaginary parts of the corresponding local arrays need not be set, but are assumed to be zero. If an output matrix operand is Hermitian, the imaginary parts of the diagonal elements are set to zero (e.g., PCPOTRF and PCHETRD).

Similarly, if the matrix is upper Hessenberg, the local array elements corresponding to global array elements below the first subdiagonal are not referenced.

Vectors can be distributed across process rows or across process columns. A vector of length N distributed across process rows is distributed the same way that an N-by-1 matrix is. A vector of length N distributed across process columns is distributed the same way that a 1-by-N matrix is.

Within some ScaLAPACK routines, some vectors are replicated in one dimension and distributed in the other dimension. These vectors are always aligned with one dimension of another distributed matrix. For example, in PDSYTRD, the vectors D, E, and TAU are replicated across process rows, distributed across process columns, and aligned with the distributed matrix operand A. The data distribution of these replicated vectors is inferred from the distribution of the matrix they are associated with. There are no specific array descriptors for these particular vectors at the present time.



In-Core Narrow Band and Tridiagonal Matrices

The ScaLAPACK routines solving narrow-band and tridiagonal linear systems assume their operands to be distributed according to the block-column and block-row data distribution schemes. Specifically, the narrow band or tridiagonal coefficient matrix is distributed in a block-column fashion, and the dense matrix of right hand side vectors is distributed in a block-row fashion. This section presents these distributions and demonstrates how the ScaLAPACK software encodes this essential information as well as the related software conventions.

The block data layout has been selected for narrow band matrices. Divide-and-conquer algorithms have been implemented in ScaLAPACK because these algorithms offer a much greater scope for exploiting parallelism than the corresponding adapted dense algorithms. The narrow band or tridiagonal coefficient matrix is partitioned into blocks. The inherent parallelism of these divide-and-conquer methods is limited by the number of such blocks, because each block is processed independently; hence, the number of blocks must be chosen to be at least equal to the desired degree of parallelism. However, because the size of the reduced system is proportional to the number of blocks, and solving this reduced system is the major parallelism bottleneck, a block layout in which each process has exactly one block allows maximum exploitation of parallelism while minimizing the size of the reduced system.






The Block Column and Row Distributions

           

ScaLAPACK assumes a one-dimensional block distribution for the band and tridiagonal routines. The block distribution is used when the computational load is distributed homogeneously over the global data. This distribution leads to a highly efficient implementation of the divide-and-conquer algorithms used in ScaLAPACK.

For convenience we will number the processes from 0 to P-1, and the matrix rows from 1 to M and the matrix columns from 1 to N. Figure 4.8 shows the two data layouts used in ScaLAPACK for solving narrow band linear systems. In all cases, each submatrix is labeled with the number of the process that contains it. Process 0 owns the shaded submatrices.

Consider the layout illustrated on the left of figure 4.8, the one-dimensional block column distribution. This distribution   

Figure 4.8: The one-dimensional block-column and block-row distributions

assigns a block of NB contiguous columns of a matrix to successive processes arranged in a 1 x P one-dimensional process grid. Each process receives at most one block of columns of the matrix, i.e., NB >= ceiling(N/P). Column k is stored on process floor((k-1)/NB). At most NB columns are stored per process. In the figure M=N=16 and P=4. This distribution assigns blocks of columns of size NB to successive processes. If the value of P evenly divides the value of N and NB = N / P, then each process owns a block of equal size. However, if this is not the case, then either the last process to receive a portion of the matrix will receive a smaller block than the other processes, or some processes may receive an empty portion of the matrix. The transpose of this layout, the one-dimensional block-row distribution, is shown on the right of figure 4.8.

The block-column distribution scheme is the data layout that is used in the ScaLAPACK library for the coefficient matrix of the narrow band and tridiagonal solvers.

The block-row distribution scheme is the data layout that is used in the ScaLAPACK library for the right-hand-side matrix of the narrow band and tridiagonal solvers.



The Block Mapping

 

The one-dimensional distribution scheme is a mapping of a set of blocks onto the processes. The previous section informally described this mapping as well as some of its properties. To be complete, we shall describe the precise mapping that associates with a matrix entry, identified by its global indices, the coordinates of the process that owns it and its local position within that process's memory.

Suppose we have a two-dimensional array A of size M x N to be distributed on a 1 x P process grid in a block-column fashion. By convention, the array columns are numbered 1 through N and the processes are numbered 0 through P-1. First, the array is divided into contiguous blocks of NB columns with NB >= ceiling(N/P). When NB does not divide N evenly, the last block of columns contains only mod(N, NB) columns instead of NB. By convention, these blocks are numbered starting from zero and dealt out to the processes. In other words, if we assume that process 0 receives the first block, the pth block is assigned to the process of coordinates (0,p). The mapping of a column of the array globally indexed by J is defined by the following analytical equation:
J - 1 = p x NB + (x - 1),    1 <= x <= NB
where J is a global column index in the array, p is the column coordinate of the process owning that column, and finally x is the column coordinate within that block of columns where the global array column of index J is to be found. It is then fairly easy to establish the analytical relationship between these variables. One obtains:
p = floor( (J-1) / NB ),    x = mod( J-1, NB ) + 1.        (4.3)
These equations allow one to determine the local information, i.e., the local index x and the process column coordinate p corresponding to a global column identified by its global index J, and conversely. Table 4.9 illustrates this mapping when P=2, N=16, and NB=8. At most one block is assigned to each process.

Table 4.9: One-dimensional block-column mapping example for P=2 and N=16
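The mapping of Equation 4.3 translates directly into code. This illustrative sketch (not library code) uses the Table 4.9 parameters and also inverts the mapping to recover the global column index:

```python
NB, P = 8, 2  # block size and number of processes, as in Table 4.9

def owner(j):
    """Process column p and local index x for 1-based global column j,
    per Equation 4.3."""
    p = (j - 1) // NB
    x = (j - 1) % NB + 1
    assert p < P, "column falls outside the distributed range"
    return p, x

def global_index(p, x):
    """Inverse mapping: recover the global column index from (p, x)."""
    return p * NB + x

# Columns 1..8 live on process 0, columns 9..16 on process 1.
for j in (1, 8, 9, 16):
    p, x = owner(j)
    assert global_index(p, x) == j
print(owner(1), owner(8), owner(9), owner(16))
```

Running the sketch confirms that column 9 is the first local column (x = 1) of process 1, exactly as the table shows.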

This example of the one-dimensional block-column distribution mapping    can be expressed in HPF  by using the following statements:

      REAL :: A( M, N )
!HPF$ PROCESSORS PROC( 1, P )
!HPF$ DISTRIBUTE A( *, BLOCK( NB ) ) ONTO PROC

A similar example of block-row distribution can easily be constructed. For an N-by-NRHS array B, such an example can be expressed in HPF by using the following statements:

      REAL :: B( N, NRHS )
!HPF$ PROCESSORS PROC( P, 1 )
!HPF$ DISTRIBUTE B( BLOCK( NB ), * ) ONTO PROC

There is in fact no real reason always to deal out the blocks starting with process 0. It is sometimes useful to start the data distribution with the process of arbitrary coordinate SRC, in which case Equation 4.3 becomes:
p = mod( SRC + floor( (J-1) / NB ), P ),    x = mod( J-1, NB ) + 1.        (4.4)

Table 4.10 illustrates Equation 4.4 for the block layout when P=2, SRC=1, N=16, and NB=8.

Table 4.10: One-dimensional block-column mapping for P=2, SRC=1, N=16 and NB=8
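Equation 4.4 only shifts the dealing of blocks by SRC. A sketch (illustrative, with the Table 4.10 parameters) makes the shift explicit; setting SRC = 0 recovers Equation 4.3:

```python
NB, P, SRC = 8, 2, 1  # parameters of Table 4.10

def owner_src(j, src=SRC):
    """Process column p and local index x for 1-based global column j
    when the first block is dealt to process src (Equation 4.4)."""
    p = (src + (j - 1) // NB) % P
    x = (j - 1) % NB + 1
    return p, x

# With SRC = 1, columns 1..8 now live on process 1 and columns 9..16
# wrap around to process 0; with SRC = 0 we are back to Equation 4.3.
print(owner_src(1), owner_src(9), owner_src(1, src=0))
```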

This example of the one-dimensional block-column    distribution mapping can be expressed in HPF  by using the following statements:

      REAL :: A( M, N )
!HPF$ PROCESSORS PROC( 1, P )
!HPF$ TEMPLATE T( M, N + P*NB )
!HPF$ DISTRIBUTE T( *, BLOCK( NB ) ) ONTO PROC
!HPF$ ALIGN A( I, J ) WITH T( I, SRC*NB + J )

A similar example of block-row distribution can easily be constructed. For an N-by-NRHS array B, such an example can be expressed in HPF by using the following statements:

      REAL :: B( N, NRHS )
!HPF$ PROCESSORS PROC( P, 1 )
!HPF$ TEMPLATE T( N + P*NB, NRHS )
!HPF$ DISTRIBUTE T( BLOCK( NB ), * ) ONTO PROC
!HPF$ ALIGN B( I, J ) WITH T( SRC*NB + I, J )

In ScaLAPACK, the local storage convention of the one-dimensional block distributed matrix in every process's memory is assumed to be Fortran-like, that is, ``column major''.

Determining the number of rows or columns of a global band matrix that a specific process receives is an essential task for the user. The notation LOCr() is used for block-row distributions, and LOCc() is used for block-column distributions. These local quantities occur throughout the leading comments of the source code and are reflected in the sample argument description in section 4.4.7.

With a block distribution, a matrix can be distributed unevenly: one process in the process grid can receive a smaller local array than the other processes, and some processes may receive no data at all. For further information on one-dimensional block-column or block-row data distribution, please refer to section 4.4.1.

Block-Column Distribution: LOCc(N_A) denotes the number of columns that a process would receive if N_A columns of a matrix were distributed over the P columns of its process row.

For example, let us assume that the coefficient matrix A is a symmetric band matrix of order N and has been block-column distributed on a 1 x P process grid.

In the ideal case, where the matrix is evenly distributed over all processes in the process grid, N_A = P x NB_A and each process receives a block of NB_A columns of the matrix A. Therefore,

LOCc(N_A) = NB_A.

However, if N_A < P x NB_A, at least one of the processes in the process grid will receive a block of fewer than NB_A columns. Let K = floor( N_A / NB_A ). Then:

 if ( N_A < P x NB_A and N_A > K x NB_A ) then

processes (0,0), ... , (0,K-1) receive

LOCc(N_A) = NB_A

and process (0,K) receives

LOCc(N_A) = N_A - K x NB_A.

if K+1 <= P-1 then processes (0,K+1), ... , (0,P-1) do not receive any data.

end if
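The case analysis above can be written out as a small function (an illustrative sketch; in the library, the general-purpose TOOLS routine NUMROC covers this and the block cyclic case):

```python
def loc_c(n_a, nb_a, pcol, nprocs):
    """Columns owned by process column pcol under a pure block-column
    distribution, where each process holds at most one block."""
    k = n_a // nb_a            # number of processes holding a full block
    if pcol < k:
        return nb_a            # a full block of NB_A columns
    if pcol == k:
        return n_a - k * nb_a  # the smaller trailing block (possibly 0)
    return 0                   # processes beyond the data receive nothing

# N_A = 13 columns in blocks of NB_A = 8 over P = 4 processes:
# process 0 gets a full block, process 1 the 5-column remainder,
# and processes 2 and 3 receive no data.
print([loc_c(13, 8, p, 4) for p in range(4)])
```

The same logic, with rows in place of columns, gives LOCr for the block-row distributed right-hand-side matrix described next.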

Block-Row Distribution: LOCr(M_B) denotes the number of rows that a process would receive if M_B rows of a matrix were distributed over the P rows of its process column.

Let us assume that the N-by-NRHS right-hand-side matrix B has been block-row distributed on a P x 1 process grid.

In the ideal case, where the matrix is evenly distributed over all processes in the process grid, M_B = P x MB_B and each process receives a block of MB_B rows of the matrix B. Therefore,

LOCr(M_B) = MB_B.

However, if M_B < P x MB_B, then at least one of the processes in the process grid will receive a block of fewer than MB_B rows. Let K = floor( M_B / MB_B ). Then:

 if ( M_B < P x MB_B and M_B > K x MB_B ) then

processes (0,0), ... , (K-1,0) receive

LOCr(M_B) = MB_B

and process (K,0) receives

LOCr(M_B) = M_B - K x MB_B.

if K+1 <= P-1 then processes (K+1,0), ... , (P-1,0) do not receive any data.

end if



Local Storage Scheme for Narrow Band Matrices

 

Let us first discuss how to distribute a narrow band matrix A over a one-dimensional process grid using a block-column distribution. We assume that the coefficient band matrix A is of order N, with bandwidth BW=2 if the matrix A is symmetric positive definite, and with BWL=2 and BWU=2 if the matrix A is nonsymmetric. The matrix A is represented by the following.


[display: the N-by-N band matrix A, with BWL = 2 subdiagonals and BWU = 2 superdiagonals]

If we assume that the matrix A is nonsymmetric band, the user may choose to perform partial pivoting or no pivoting during the factorization (PxGBTRF or PxDBTRF, respectively). Both strategies assume a block-column distribution of the coefficient matrix, but additional storage is required for fill-in if partial pivoting is selected. First, let us assume that we have selected no pivoting, and we distribute this matrix onto a 1 x P process grid with a block size of NB. The processes would contain the local arrays found in figure 4.9. Figure 4.9 also illustrates that the leading dimension of the local arrays containing the coefficient matrix must be at least BWL+1+BWU for the non-pivoting narrow band linear solver.

Figure 4.9: Mapping of local arrays for nonsymmetric band matrix A (no pivoting)

If, however, we select partial pivoting and distribute this same matrix onto a 1 x P process grid with a block size of NB, the processes would contain the local arrays found in figure 4.10. The amount of additional storage required for fill-in is represented by F in the figure and is equal to the sum of the lower bandwidth (number of subdiagonals), BWL, and the upper bandwidth (number of superdiagonals), BWU. In this example, BWL=2 and BWU=2. Refer to the leading comments of the routine PxGBTRF for further details. Figure 4.10 also illustrates that the leading dimension of the local arrays containing the coefficient matrix must be at least 2*(BWL+BWU)+1 for the partial pivoting narrow band linear solver.

Figure 4.10: Mapping of local arrays for nonsymmetric band matrix A (partial pivoting)

Let us now assume that the matrix A is symmetric positive definite band with BW=2, and we distribute this matrix assuming lower triangular storage (UPLO='L') onto a 1 x P process grid with a block size NB. The processes would contain the local arrays found in figure 4.11. We would then call the routine PxPBTRF with BW=2 to perform the factorization, for example.

Figure 4.11: Mapping of local arrays for symmetric positive definite band matrix A (UPLO='L')

If we then distribute this same matrix assuming upper triangular storage (UPLO='U') onto a 1 x P process grid with a block size NB, the processes would contain the local arrays found in figure 4.12.

Figure 4.12: Mapping of local arrays for symmetric positive definite band matrix A (UPLO='U')

Figures 4.11 and  4.12 also illustrate that the leading dimension of the local arrays containing the coefficient matrix must be at least BW+1 for the symmetric positive definite narrow band linear solver.

The ``x'' notation in figures 4.9, 4.10, 4.11, and 4.12 and the F notation in figure 4.10 signify entries for which one need not store a value in that position of the local array. These storage positions, however, are required and are overwritten during the computation.

The N-by-NRHS matrix of right-hand-side vectors B (for example, used in PxGBTRS, PxDBTRS, and PxPBTRS) is assumed to be a dense matrix distributed in a block-row manner across the process grid. Thus, consecutive blocks of rows of the matrix B are assigned to successive processes in the process grid, as described in section 4.4.1.



Local Storage Schemes for Tridiagonal Matrices

 

A global tridiagonal matrix A, represented as three vectors (DL, D, DU), should be distributed over a one-dimensional process grid assuming a block-column data distribution. We assume that the coefficient tridiagonal matrix A is of order N and is represented by the following.


[display: the N-by-N tridiagonal matrix A with subdiagonal DL, diagonal D, and superdiagonal DU]

If we first assume that the matrix A is nonsymmetric (diagonally dominant-like), and it is known a priori that no pivoting is required for numerical stability, the user may choose to perform no pivoting during the factorization (PxDTTRF). If we distribute this matrix (assuming no pivoting) onto a 1 x P process grid with a block size of NB, the processes would contain the local arrays found in figure 4.13.

Figure 4.13: Mapping of local arrays for nonsymmetric tridiagonal matrix A

Finally, a global symmetric positive definite tridiagonal matrix A , represented as two vectors (D and E), should be distributed over a one-dimensional    process grid assuming a block-column data distribution.

Let us now assume that this matrix A is symmetric positive definite and that we distribute it assuming lower triangular storage (UPLO='L') onto a 1 x P process grid with a block size NB. The processes would contain the local arrays found in figure 4.14. We would then call the routine PxPTTRF to perform the factorization, for example.

Figure 4.14: Mapping of local arrays for symmetric positive definite tridiagonal matrix A (UPLO='L')

If we then distribute this same matrix assuming upper triangular storage (UPLO='U') onto a 1 x P process grid with a block size NB, the processes would contain the local arrays found in figure 4.15.

Figure 4.15: Mapping of local arrays for symmetric positive definite tridiagonal matrix A (UPLO='U')

Note that in the tridiagonal cases, it is not necessary to maintain the empty storage positions, designated by ``x'' in the narrow band routines.

The matrix of right-hand-side vectors B (for example, used in PxDTTRS and PxPTTRS) is assumed to be a dense matrix distributed in a block-row manner across the process grid. Thus, consecutive blocks of rows of the matrix B are assigned to successive processes in the process grid, as described in section 4.4.1.



Array Descriptor for Narrow Band and Tridiagonal Matrices

 

The array descriptor DESC_, whose type is defined as DESC_(DTYPE_)=501, is an integer array of length 7. This descriptor type is used in the ScaLAPACK narrow band and tridiagonal routines to specify a block-column distribution of a global array over a one-dimensional process grid. In the general and symmetric positive definite banded and tridiagonal routines, a one-dimensional block-column distribution is specified for the coefficient matrix. The matrix of right-hand-side vectors must be distributed over a one-dimensional process grid using a block-row data distribution. Refer to section 4.4.1 for further details on block data distribution.

Let us assume, for example, that we have an array descriptor DESCA for a block-column distributed array A. As previously mentioned, the notations x_ used in the entries of the array descriptor denote the attributes of a global array. For readability of the code, we have associated symbolic names with the descriptor entries. For example, N_ denotes the number of columns, and N_A specifically denotes the number of columns in the global array A.

Table 4.11: Content of the array descriptor for in-core narrow band and tridiagonal coefficient matrices

When A is non-symmetric and factorized without pivoting, LLD_A must be at least BWL+1+BWU. When A is non-symmetric and factorized with partial pivoting, LLD_A must be at least 2*(BWL+BWU)+1. When A is symmetric positive definite, LLD_A must be at least BW+1. Finally, when A is tridiagonal, LLD_A is not referenced.
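These minimum leading-dimension rules can be summarized in a small helper (a hypothetical sketch; the solver names in the comments refer to the routines discussed above):

```python
def min_lld_band(kind, bwl=0, bwu=0, bw=0):
    """Minimum local leading dimension LLD_A for the local arrays of the
    narrow band solvers, per the descriptor rules above."""
    if kind == "nonsymmetric_no_pivot":       # PxDBTRF
        return bwl + 1 + bwu
    if kind == "nonsymmetric_partial_pivot":  # PxGBTRF, with room for fill-in
        return 2 * (bwl + bwu) + 1
    if kind == "spd":                         # PxPBTRF
        return bw + 1
    raise ValueError("unknown factorization kind: " + kind)

# With BWL = BWU = 2 and BW = 2, the example bandwidths used earlier:
print(min_lld_band("nonsymmetric_no_pivot", bwl=2, bwu=2),
      min_lld_band("nonsymmetric_partial_pivot", bwl=2, bwu=2),
      min_lld_band("spd", bw=2))
```

Note how partial pivoting nearly doubles the storage requirement: the extra BWL+BWU rows hold the fill-in generated during the factorization.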




Array Descriptor for the Matrix of Right-Hand-Side Vectors

 

The array descriptor DESC_, whose type is defined as DESC_(DTYPE_)=502, is an integer array of length 7. This descriptor type is used in the ScaLAPACK narrow band and tridiagonal routines to specify the block-row distribution of a global array containing the right-hand-side vectors over a one-dimensional process grid. In the narrow band and tridiagonal routines, a one-dimensional block-column distribution is specified for the coefficient matrix; the matrix of right-hand-side vectors, however, must be distributed over a one-dimensional process grid according to a block-row data distribution scheme. Refer to section 4.4.1 for further details on block data distribution.

Let us now assume that we have an array descriptor DESCB for a block-row distributed matrix B. For readability of the code, we have associated symbolic names with the descriptor entries.

Table 4.12: Content of the array descriptor for right-hand-side dense matrices for narrow band and tridiagonal solvers

For a detailed description of the LOCr() notation, please refer to section 4.4.2.




Argument Descriptions for Band and Tridiagonal Routines

           

All ScaLAPACK narrow band and tridiagonal routines assume that the global matrices are distributed in a one-dimensional block data distribution. Thus, each process has at most one block of data. Depending on the choices of the block size NB_ and the order N_ of the global matrix, some processes in the process grid may not receive any data, or the last process to receive data may receive a smaller block than the other processes.

For further information on one-dimensional block-column or block-row data distribution, please refer to section 4.4.1.

The description of a block-column distributed band matrix consists of (N, A, JA, DESCA)

  • the order N of the band matrix operand,
  • a pointer to the local array containing the entire global array (A, for example),
  • the column index, JA, of the global array, and
  • the array descriptor, DESCA, of the global array.

The description of a block-row distributed right-hand-side matrix consists of (NRHS, B, IB, DESCB)

  • the number of right-hand-side vectors NRHS in the matrix,
  • a pointer to the local array containing the entire global array (B, for example),
  • the row index, IB, of the global array, and
  • the array descriptor, DESCB, for the global array.

The description of a block-distributed diagonally dominant-like tridiagonal matrix consists of (N, DL, D, DU, JA, DESCA)

  • the order N of the tridiagonal matrix operand,
  • pointers to the local arrays (DL, D, DU, for example),
  • the column index, JA, of the global array, and
  • the array descriptor, DESCA, for the global array.

The description of a block-distributed symmetric positive definite tridiagonal matrix consists of (N, D, E, JA, DESCA)

  • the order N of the tridiagonal matrix operand,
  • pointer to the local arrays, (D, E, for example),
  • the column index, JA, of the global array,
  • the array descriptor, DESCA, for the global array.

The name of the row or column index for the global array has the form I<array_name> or J<array_name>, respectively. The array descriptor has a name of the form DESC<array_name>. The length of the array descriptor is specified by DLEN_ and varies according to the descriptor type DTYPE_.

Included in the leading comments of each subroutine (immediately preceding the Argument section) is a brief note describing the array descriptor and some commonly used expressions in calculating workspace.

The style of the argument  descriptions for symmetric positive definite narrow band routines (PxPByyy) and diagonally dominant-like narrow band routines (PxDByyy) is illustrated by the following example:

    N
    (global input) INTEGER
    The number of rows and columns of the matrix A(JA:JA+N-1,JA:JA+N-1) to be operated on. N >= 0.
    NRHS
    (global input) INTEGER
    The number of right hand sides, i.e., the number of columns of the matrix B(IB:IB+N-1,*). NRHS >= 0.
    A
    (local input/local output) REAL pointer into the local memory to an array of local dimension (LLD_A, LOCc(JA+N-1))
    On entry, the local part of the N-by-N global symmetric band matrix A(JA:JA+N-1,JA:JA+N-1).
    JA
    (global input) INTEGER
    The column index of the global matrix A.
    DESCA
    (global and local input) INTEGER array of dimension DLEN_
    The array descriptor for the global matrix A.
    B
    (local input/local output) REAL array, dimension (LLD_B, NRHS)
    On entry, the local part of the N-by-NRHS right-hand-side matrix.
    IB
    (global input) INTEGER
    The row index of the global matrix B.
    DESCB
    (global and local input) INTEGER array of dimension DLEN_
    The array descriptor for the global matrix B.

The style of the argument  descriptions for diagonally dominant-like tridiagonal routines (PxDTyyy) is illustrated by the following example:

    N
    (global input) INTEGER
    The number of rows and columns of the matrix A(JA:JA+N-1,JA:JA+N-1) to be operated on. N >= 0.
    NRHS
    (global input) INTEGER
    The number of right hand sides, i.e., the number of columns of the matrix B(IB:IB+N-1,*). NRHS >= 0.
    DL
    (local input/local output) REAL pointer into the local memory to an array of local dimension (LOCc(JA+N-1))
    On entry, the local part of the subdiagonal entries of the global tridiagonal matrix A(JA:JA+N-1,JA:JA+N-1).
    D
    (local input/local output) REAL pointer into the local memory to an array of local dimension (LOCc(JA+N-1))
    On entry, the local part of the diagonal entries of the global tridiagonal matrix A(JA:JA+N-1,JA:JA+N-1).
    DU
    (local input/local output) REAL pointer into the local memory to an array of local dimension (LOCc(JA+N-1))
    On entry, the local part of the superdiagonal entries of the global tridiagonal matrix A(JA:JA+N-1,JA:JA+N-1).
    JA
    (global input) INTEGER
    The column index of the global matrix A.
    DESCA
    (global and local input) INTEGER array of dimension DLEN_
    The array descriptor for the global matrix A.
    B
    (local input/local output) REAL array, dimension (LLD_B, NRHS)
    On entry, the local part of the N-by-NRHS right-hand-side matrix.
    IB
    (global input) INTEGER
    The row index of the global matrix B.
    DESCB
    (global and local input) INTEGER array of dimension DLEN_
    The array descriptor for the global matrix B.

The style of the argument  descriptions for symmetric positive definite tridiagonal routines (PxPTyyy) is illustrated by the following example:

    N
    (global input) INTEGER
    The number of rows and columns of the matrix A(JA:JA+N-1,JA:JA+N-1) to be operated on. N >= 0.
    NRHS
    (global input) INTEGER
    The number of right hand sides, i.e., the number of columns of the matrix B(IB:IB+N-1,*). NRHS >= 0.
    D
    (local input/local output) REAL pointer into the local memory to an array of local dimension (LOCc(JA+N-1))
    On entry, the local part of the diagonal entries of the global tridiagonal matrix A(JA:JA+N-1,JA:JA+N-1).
    E
    (local input/local output) REAL pointer into the local memory to an array of local dimension (LOCc(JA+N-1))
    On entry, the local part of the off-diagonal entries of the global tridiagonal matrix A(JA:JA+N-1,JA:JA+N-1).
    JA
    (global input) INTEGER
    The column index of the global matrix A.
    DESCA
    (global and local input) INTEGER array of dimension DLEN_
    The array descriptor for the global matrix A.
    B
    (local input/local output) REAL array, dimension (LLD_B, NRHS)
    On entry, the local part of the N-by-NRHS right-hand-side matrix.
    IB
    (global input) INTEGER
    The row index of the global matrix B.
    DESCB
    (global and local input) INTEGER array of dimension DLEN_
    The array descriptor for the global matrix B.

The description of each argument contains the following information:

  • A classification of the argument as (local input), (global and local input), (local input/local output), (global input), (local output), (global output), (global input/global output), (local or global input), (local workspace), or (local workspace/local output).
  • The type of the argument;
  • For an array, its dimension(s).

    These dimensions are often expressed in terms of LOCr() and LOCc() calculations. For further details, please refer to section 4.4.2.

  • A specification of the value(s) that must be supplied for the argument (if it is an input argument), or of the value(s) returned by the routine (if it is an output argument), or both (if it is an input/output argument). In the last case, the two parts of the description are introduced by the phrases ``On entry'' and ``On exit''.
  • For a scalar input argument, any constraints that the supplied values must satisfy (such as ``N >= 0'' in the example above).
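The LOCr() and LOCc() quantities used in the dimension expressions above are computed by the ScaLAPACK tool routine NUMROC. The following Python transcription of its algorithm (ours, for illustration only; the library routine itself is Fortran) returns the number of rows or columns of a block-cyclically distributed dimension owned by a given process:

```python
def numroc(n, nb, iproc, isrcproc, nprocs):
    """Python transcription of ScaLAPACK's NUMROC: the number of rows
    or columns of an N-element dimension, distributed in NB-sized
    blocks over NPROCS processes, owned by process IPROC when the
    distribution starts on process ISRCPROC."""
    mydist = (nprocs + iproc - isrcproc) % nprocs  # distance from source
    nblocks = n // nb                  # number of complete blocks overall
    num = (nblocks // nprocs) * nb     # complete "rounds" of blocks
    extrablocks = nblocks % nprocs
    if mydist < extrablocks:           # one extra complete block
        num += nb
    elif mydist == extrablocks:        # the trailing partial block
        num += n % nb
    return num

# 10 columns in blocks of 4 over 2 processes: process 0 owns
# columns 1-4 and 9-10 (6 columns), process 1 owns columns 5-8 (4).
print([numroc(10, 4, p, 0, 2) for p in range(2)])
```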




Matrix Storage Conventions for Band and Tridiagonal Matrices

A general  tridiagonal matrix of order n is stored globally in three one-dimensional arrays dl, d, du of length n containing the subdiagonal, diagonal, and superdiagonal elements, respectively. Note the mild change from LAPACK in which dl and du were actually of global length n-1. To make the distribution of the vectors consistent, we have chosen to make them all of length n. Note that dl(1)=du(n)=0.

Similarly, a symmetric  tridiagonal matrix is stored globally in two one-dimensional arrays d, e of length n containing the diagonal and off-diagonal elements, respectively. Again, there is a slight departure from LAPACK in which e was of global length n-1. Here, e(n)=0.

The vectors (DL, D, DU) or (D, E) representing these matrices must be block distributed over a one-dimensional process grid. Because vectors are one-dimensional data structures, they can equivalently be distributed block-row or block-column. Note that when inputting vectors to these special-purpose tridiagonal routines, LLD_ can be ignored, since the local portions of the vectors are assumed to have unit stride.
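As a concrete illustration of this convention (Python used only for exposition), a general tridiagonal matrix of order n = 4 is held in three arrays of global length 4, with the padding elements dl(1) and du(n) set to zero:

```python
# A 4x4 general tridiagonal matrix:
#     [ 2 1 0 0 ]
#     [ 3 2 1 0 ]
#     [ 0 3 2 1 ]
#     [ 0 0 3 2 ]
n = 4
# ScaLAPACK convention: all three vectors have global length n,
# with dl(1) = du(n) = 0 (LAPACK used length n-1 for dl and du).
dl = [0.0, 3.0, 3.0, 3.0]   # subdiagonal, padded at the front
d  = [2.0, 2.0, 2.0, 2.0]   # diagonal
du = [1.0, 1.0, 1.0, 0.0]   # superdiagonal, padded at the end

def entry(i, j):
    """Rebuild A(i,j) (1-based indices) from the three vectors."""
    if i == j:
        return d[i - 1]
    if i == j + 1:
        return dl[i - 1]    # subdiagonal elements live in dl(2..n)
    if j == i + 1:
        return du[i - 1]    # superdiagonal elements live in du(1..n-1)
    return 0.0

A = [[entry(i, j) for j in range(1, n + 1)] for i in range(1, n + 1)]
for row in A:
    print(row)
```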




Essentials

 






Out-of-Core Matrices

The ScaLAPACK software library provides routines for solving out-of-core linear systems, in which case the matrices are stored on disk. A particular array descriptor is required to specify such a data storage.






Array Descriptor for Out-core Dense Matrices

 

The array descriptor for out-of-core dense matrices, whose type is defined as DESC_(DTYPE_)=601, is an integer array of length 11. It is used for the ScaLAPACK routines involved in the out-of-core solution of dense linear systems using LU, QR, or Cholesky factorizations [55]. The matrix stored on disk is composed of records, each of which corresponds to an MMB × NNB DESC_(DTYPE_)=1 ScaLAPACK matrix, and these records are organized in a column-major (Fortran array) manner. The array descriptor for out-of-core matrices has extra fields to store file parameters associated with the matrix, such as the I/O device unit number, MMB and NNB, and the amount of temporary buffer storage available.

Similar to DESC_(DTYPE_)=1 symmetric ScaLAPACK matrices, only the upper (UPLO='U') triangle or the lower (UPLO='L') triangle is accessed. The entire coefficient matrix is stored on disk, regardless of whether the matrix is nonsymmetric or symmetric.

Table 4.13: Content of the array descriptor for out-of-core dense matrices
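Under such a scheme the on-disk location of a given tile can be computed directly. The sketch below is an assumption-laden illustration (the function name and record-index formula are ours, not the library's): records are numbered in column-major order over the grid of MMB-row block rows.

```python
def record_index(iblk, jblk, m, mmb):
    """Hypothetical helper: 0-based record number of block (iblk, jblk)
    when records cover an M-row matrix in MMB-row tiles stored in
    column-major (Fortran array) order on disk."""
    mblocks = -(-m // mmb)        # ceil(m / mmb): block rows per block column
    return jblk * mblocks + iblk  # column-major numbering of records

# A 6-row matrix tiled with MMB = 2 has 3 block rows, so block (1, 2)
# sits in record 2*3 + 1 = 7.
print(record_index(1, 2, 6, 2))
```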




Design and Documentation of Argument Lists

 

As in LAPACK, the argument lists of all ScaLAPACK routines conform to a single set of conventions for their design and documentation.

Specifications of all ScaLAPACK driver and computational routines are given in Part ii of this users guide. These are derived from the specifications given in the leading comments in the code, but in Part ii the specifications for real and complex versions of each routine have been merged in order to save space.






Structure of the Documentation

 

The documentation  of each ScaLAPACK routine includes the following:

  • The SUBROUTINE or FUNCTION statement, followed by statements declaring the type and dimensions of the arguments.
  • A summary of the Purpose of the routine.
  • Descriptions of each of the Arguments in the order of the argument list.
  • (optionally) Further Details (only in the code, not in Part ii of this users guide);
  • (optionally) Internal Parameters (only in the code, not in Part ii of this users guide).




Order of Arguments

 

Arguments  of a ScaLAPACK routine appear in the following order:

  • arguments specifying options,
  • problem dimensions,
  • array or scalar arguments defining the input data; some of them may be overwritten by results,
  • other array or scalar arguments returning results,
  • work arrays (and associated array dimensions), and
  • diagnostic argument INFO.

Note that not every category is present in each of the routines.

When defining each of these categories of arguments, ScaLAPACK distinguishes between local and global data. On entry to a ScaLAPACK routine, local input arguments   may have different values on each process in the process grid. Similarly, local output arguments   may be assigned different values on different processes in the process grid on exit from the ScaLAPACK routine.

Global input arguments   must have the same value on each process in the process grid on entry to a ScaLAPACK routine. If this is not the case, most routines will call PXERBLA and return. Global output arguments   are assigned the same value on all processes in the process grid on exit from a ScaLAPACK routine.




Option Arguments

 

Arguments specifying options are usually of type CHARACTER*1; they are character arguments with the names SIDE, TRANS, UPLO, and DIAG. On entry to a ScaLAPACK routine, these arguments are global input and must have the same value on each process in the process grid.

SIDE is used by the routines as follows:

    = 'L': the distributed matrix appears on the left in the operation;
    = 'R': the distributed matrix appears on the right in the operation.

TRANS is used by the routines as follows:

    = 'N': operate with the distributed matrix itself (no transposition);
    = 'T': operate with the transpose of the distributed matrix;
    = 'C': operate with the conjugate transpose of the distributed matrix.
In the real case the values `T' and `C' have the same meaning, and in the complex case the value `T' is not allowed.

UPLO is used by the Hermitian, symmetric, and triangular distributed matrix routines to specify whether the upper or lower triangle is being referenced, as follows:

    = 'U': the upper triangle is referenced;
    = 'L': the lower triangle is referenced.

DIAG is used by the triangular distributed matrix routines to specify whether the distributed matrix is unit triangular, as follows:

    = 'N': the distributed matrix is non-unit triangular;
    = 'U': the distributed matrix is unit triangular.
When DIAG is supplied as `U', the diagonal elements are not referenced. For example:

    UPLO
    (global input) CHARACTER*1
    = 'U': Upper triangle of the matrix A(IA:IA+M-1,JA:JA+N-1);
    = 'L': Lower triangle of the matrix A(IA:IA+M-1,JA:JA+N-1).

The corresponding lower-case characters may be supplied (with the same meaning), but any other value is illegal (see section 4.6.6).

A longer character string can be passed as the actual argument, making the calling program more readable, but only the first character is significant; this is a standard feature of Fortran 77. For example:

       CALL PSPOTRS('upper', . . . )
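Inside the routines this ``only the first character is significant'' test is performed by the BLAS/LAPACK helper LSAME, which compares single characters case-insensitively. A Python rendering of the check (for illustration; the real helper is a Fortran LOGICAL function):

```python
def lsame(ca, cb):
    """Case-insensitive single-character comparison, in the spirit of
    the Fortran helper LSAME: only the first character of each actual
    argument is examined."""
    return ca[:1].upper() == cb[:1].upper()

# All of these select the upper triangle, exactly as in
#    CALL PSPOTRS('upper', . . . )
for uplo in ('U', 'u', 'upper', 'Upper'):
    print(uplo, lsame(uplo, 'U'))
```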




Problem Dimensions

 

The problem  dimensions may be passed as zero, in which case the computation (or part of it) is skipped. Negative dimensions are regarded as erroneous.




Workspace Issues

 






WORK Arrays

Many ScaLAPACK routines require one or more work arrays to be passed as arguments. The name of a work array is usually WORK (sometimes IWORK or RWORK, to distinguish work arrays of type integer or real). Immediately following the work array in the argument list is the specified length of the work array: LWORK, LIWORK, or LRWORK, respectively. LWORK is defined as the minimum amount of workspace necessary to perform the operation specified.

The first element of the work array is always used to return the correct value of LWORK for the computation. Whether or not an error is detected, the minimum value of LWORK is placed in WORK(1) on exit from the routine.

If the user passes a value for LWORK that is too small, an input error is detected and INFO is set accordingly (see section 4.6.6), the correct value for LWORK is placed in WORK(1), and the routine PXERBLA is called. The user is thus strongly advised to always check the value of INFO on exit from the called routine.




LWORK Query

 

If in doubt about the amount of workspace to supply to a ScaLAPACK routine, the user may choose to supply LWORK = -1 and use the value returned in WORK(1) as the correct value for LWORK. Setting LWORK = -1 does not invoke an error message from PXERBLA and is defined as a global query.
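The query protocol can be illustrated with a mock routine (hypothetical, in Python for exposition; a real ScaLAPACK routine derives the workspace size from the distribution parameters): a first call with LWORK = -1 writes the required size into WORK(1) without computing anything, after which the caller allocates that much workspace and calls again.

```python
def mock_pdroutine(n, work, lwork):
    """Hypothetical sketch of the ScaLAPACK workspace convention.
    Returns INFO; the minimum LWORK is always placed in work[0]."""
    lwork_min = 2 * n            # stand-in workspace formula, not a real one
    work[0] = lwork_min          # WORK(1) gets the minimum LWORK either way
    if lwork == -1:              # workspace query: no error is raised
        return 0
    if lwork < lwork_min:        # too little workspace: INFO set negative
        return -3                # (here a real routine would call PXERBLA)
    # ... perform the computation using work[:lwork] ...
    return 0

work = [0.0]
assert mock_pdroutine(100, work, -1) == 0   # query step
lwork = int(work[0])                        # required size returned in WORK(1)
work = [0.0] * lwork
assert mock_pdroutine(100, work, lwork) == 0
print("LWORK =", lwork)
```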




ScaLAPACK

ScaLAPACK is a library of high-performance linear algebra routines for distributed-memory message-passing MIMD    computers and networks of workstations supporting   PVM [68] and/or MPI [64, 110]. It is a continuation of the LAPACK [3] project, which designed and produced analogous software for workstations, vector supercomputers, and shared-memory parallel computers. Both libraries contain routines for solving systems of linear equations, least squares problems, and eigenvalue problems. The goals of both projects are efficiency (to run as fast as possible), scalability  (as the problem size and number of processors grow), reliability  (including error bounds), portability  (across all important parallel machines), flexibility (so users can construct new routines from well-designed parts), and ease of use (by making the interface to LAPACK and ScaLAPACK look as similar as possible). Many of these goals, particularly portability, are aided by developing and promoting standards , especially for low-level communication and computation routines. We have been successful in attaining these goals, limiting most machine dependencies to two standard libraries called the BLAS, or Basic Linear Algebra Subprograms [57, 59, 74, 93], and BLACS, or Basic Linear Algebra Communication Subprograms [50, 54]. LAPACK will run on any machine where the BLAS are available, and ScaLAPACK will run on any machine where both the BLAS and the BLACS are available.

The library is currently written in Fortran 77 (with the exception of a few symmetric eigenproblem auxiliary routines written in C to exploit IEEE arithmetic) in a Single Program Multiple Data (SPMD)   style using explicit message passing  for interprocessor communication. The name ScaLAPACK is an acronym for Scalable Linear Algebra PACKage, or Scalable LAPACK.






ScaLAPACK Users' Guide

  • L. S. Blackford,
  • J. Choi,
  • A. Cleary,
  • E. D'Azevedo,
  • J. Demmel,
  • I. Dhillon,
  • J. Dongarra,
  • S. Hammarling,
  • G. Henry,
  • A. Petitet,
  • K. Stanley,
  • D. Walker,
  • R. C. Whaley

    1 May 1997

    Dedication

    This work is dedicated to the pioneers of high-performance computing who blazed a trail, set standards, and made our job easier.

    Acknowledgment

    We give credit to and thank all of the LAPACK authors for allowing us to reuse large portions of the LAPACK Users' Guide in creating this users guide for ScaLAPACK.


    Copyright © 1997 by the Society for Industrial and Applied Mathematics. Certain derivative work portions have been copyrighted by the Numerical Algorithms Group Ltd.

    The printed version of the ScaLAPACK Users' Guide is available from SIAM.

    • ISBN 0-89871-397-8;
    • The list price for SIAM members is $39.60;
    • the cost for nonmembers is $49.50.
    • Order code SE04.
    Contact SIAM for additional information.

  • e-mail: service@siam.org
  • fax: 215-386-7999
  • phone: (USA) 800-447-SIAM
  • (outside USA) 215-386-7999
  • mail: SIAM, 3600 University City Science Center, Philadelphia, PA 19104-2688.

    The royalties from the sales of this book are being placed in a fund to help students attend SIAM meetings and other SIAM related activities. This fund is administered by SIAM and qualified individuals are encouraged to write directly to SIAM for guidelines.







scalapack-doc-1.5/man/manl/cdbtf2.l

.SH NAME
CDBTF2 - compute an LU factorization of a complex m-by-n band matrix A without using partial pivoting or row interchanges
.SH SYNOPSIS
.TP 19
SUBROUTINE CDBTF2( M, N, KL, KU, AB, LDAB, INFO )
.TP 19
.ti +4
INTEGER INFO, KL, KU, LDAB, M, N
.TP 19
.ti +4
COMPLEX AB( LDAB, * )
.SH PURPOSE
CDBTF2 computes an LU factorization of a complex m-by-n band matrix A
without using partial pivoting or row interchanges.
This is the unblocked version of the algorithm, calling Level 2 BLAS.
.SH ARGUMENTS
.TP 8
M (input) INTEGER
The number of rows of the matrix A. M >= 0.
.TP 8
N (input) INTEGER
The number of columns of the matrix A. N >= 0.
.TP 8
KL (input) INTEGER
The number of subdiagonals within the band of A. KL >= 0.
.TP 8
KU (input) INTEGER
The number of superdiagonals within the band of A. KU >= 0.
.TP 8
AB (input/output) COMPLEX array, dimension (LDAB,N)
On entry, the matrix A in band storage, in rows KL+1 to 2*KL+KU+1;
rows 1 to KL of the array need not be set.
The j-th column of A is stored in the j-th column of the array AB as follows:
AB(kl+ku+1+i-j,j) = A(i,j) for max(1,j-ku)<=i<=min(m,j+kl)
On exit, details of the factorization: U is stored as an upper triangular
band matrix with KL+KU superdiagonals in rows 1 to KL+KU+1, and the
multipliers used during the factorization are stored in rows KL+KU+2 to
2*KL+KU+1. See below for further details.
.TP 8
LDAB (input) INTEGER
The leading dimension of the array AB.
LDAB >= 2*KL+KU+1.
.TP 8
INFO (output) INTEGER
= 0: successful exit
.br
< 0: if INFO = -i, the i-th argument had an illegal value
.br
> 0: if INFO = +i, U(i,i) is exactly zero. The factorization has been
completed, but the factor U is exactly singular, and division by zero
will occur if it is used to solve a system of equations.
.SH FURTHER DETAILS
The band storage scheme is illustrated by the following example, when
M = N = 6, KL = 2, KU = 1:
.br

   On entry:                       On exit:

    *  a12 a23 a34 a45 a56          *  u12 u23 u34 u45 u56
   a11 a22 a33 a44 a55 a66         u11 u22 u33 u44 u55 u66
   a21 a32 a43 a54 a65  *          m21 m32 m43 m54 m65  *
   a31 a42 a53 a64  *   *          m31 m42 m53 m64  *   *

Array elements marked * are not used by the routine; elements marked +
need not be set on entry, but are required by the routine to store
elements of U, because of fill-in resulting from the row interchanges.
.br

scalapack-doc-1.5/man/manl/cdbtrf.l

.SH NAME
CDBTRF - compute an LU factorization of a complex m-by-n band matrix A without using partial pivoting or row interchanges
.SH SYNOPSIS
.TP 19
SUBROUTINE CDBTRF( M, N, KL, KU, AB, LDAB, INFO )
.TP 19
.ti +4
INTEGER INFO, KL, KU, LDAB, M, N
.TP 19
.ti +4
COMPLEX AB( LDAB, * )
.SH PURPOSE
CDBTRF computes an LU factorization of a complex m-by-n band matrix A
without using partial pivoting or row interchanges.
This is the blocked version of the algorithm, calling Level 3 BLAS.
.SH ARGUMENTS
.TP 8
M (input) INTEGER
The number of rows of the matrix A. M >= 0.
.TP 8
N (input) INTEGER
The number of columns of the matrix A. N >= 0.
.TP 8
KL (input) INTEGER
The number of subdiagonals within the band of A. KL >= 0.
.TP 8
KU (input) INTEGER
The number of superdiagonals within the band of A. KU >= 0.
.TP 8
AB (input/output) COMPLEX array, dimension (LDAB,N)
On entry, the matrix A in band storage, in rows KL+1 to 2*KL+KU+1;
rows 1 to KL of the array need not be set.
The j-th column of A is stored in the j-th column of the array AB as follows:
AB(kl+ku+1+i-j,j) = A(i,j) for max(1,j-ku)<=i<=min(m,j+kl)
On exit, details of the factorization: U is stored as an upper triangular
band matrix with KL+KU superdiagonals in rows 1 to KL+KU+1, and the
multipliers used during the factorization are stored in rows KL+KU+2 to
2*KL+KU+1. See below for further details.
.TP 8
LDAB (input) INTEGER
The leading dimension of the array AB. LDAB >= 2*KL+KU+1.
.TP 8
INFO (output) INTEGER
= 0: successful exit
.br
< 0: if INFO = -i, the i-th argument had an illegal value
.br
> 0: if INFO = +i, U(i,i) is exactly zero. The factorization has been
completed, but the factor U is exactly singular, and division by zero
will occur if it is used to solve a system of equations.
.SH FURTHER DETAILS
The band storage scheme is illustrated by the following example, when
M = N = 6, KL = 2, KU = 1:
.br

   On entry:                       On exit:

    *  a12 a23 a34 a45 a56          *  u12 u23 u34 u45 u56
   a11 a22 a33 a44 a55 a66         u11 u22 u33 u44 u55 u66
   a21 a32 a43 a54 a65  *          m21 m32 m43 m54 m65  *
   a31 a42 a53 a64  *   *          m31 m42 m53 m64  *   *

Array elements marked * are not used by the routine.
.br

scalapack-doc-1.5/man/manl/cdttrf.l

.TH CDTTRF l "12 May 1997" "modified LAPACK routine" "LAPACK routine (version 2.0)"
.SH NAME
CDTTRF - compute an LU factorization of a complex tridiagonal matrix A using elimination without partial pivoting
.SH SYNOPSIS
.TP 19
SUBROUTINE CDTTRF( N, DL, D, DU, INFO )
.TP 19
.ti +4
INTEGER INFO, N
.TP 19
.ti +4
COMPLEX D( * ), DL( * ), DU( * )
.SH PURPOSE
CDTTRF computes an LU factorization of a complex tridiagonal matrix A
using elimination without partial pivoting. The factorization has the form
.br
   A = L * U
.br
where L is a product of unit lower bidiagonal matrices and U is upper
triangular with nonzeros in only the main diagonal and first superdiagonal.
.br .SH ARGUMENTS .TP 8 N (input) INTEGER The order of the matrix A. N >= 0. .TP 8 DL (input/output) COMPLEX array, dimension (N-1) On entry, DL must contain the (n-1) subdiagonal elements of A. On exit, DL is overwritten by the (n-1) multipliers that define the matrix L from the LU factorization of A. .TP 8 D (input/output) COMPLEX array, dimension (N) On entry, D must contain the diagonal elements of A. On exit, D is overwritten by the n diagonal elements of the upper triangular matrix U from the LU factorization of A. .TP 8 DU (input/output) COMPLEX array, dimension (N-1) On entry, DU must contain the (n-1) superdiagonal elements of A. On exit, DU is overwritten by the (n-1) elements of the first superdiagonal of U. .TP 8 INFO (output) INTEGER = 0: successful exit .br < 0: if INFO = -i, the i-th argument had an illegal value .br > 0: if INFO = i, U(i,i) is exactly zero. The factorization has been completed, but the factor U is exactly singular, and division by zero will occur if it is used to solve a system of equations. .TH CDTTRSV l "12 May 1997" "modified LAPACK routine" "LAPACK routine (version 2.0)" .SH NAME CDTTRSV - solve one of the systems of equations L * X = B, L**T * X = B, or L**H * X = B, .SH SYNOPSIS .TP 20 SUBROUTINE CDTTRSV( UPLO, TRANS, N, NRHS, DL, D, DU, B, LDB, INFO ) .TP 20 .ti +4 CHARACTER UPLO, TRANS .TP 20 .ti +4 INTEGER INFO, LDB, N, NRHS .TP 20 .ti +4 COMPLEX B( LDB, * ), D( * ), DL( * ), DU( * ) .SH PURPOSE CDTTRSV solves one of the systems of equations L * X = B, L**T * X = B, or L**H * X = B, U * X = B, U**T * X = B, or U**H * X = B, .br with factors of the tridiagonal matrix A from the LU factorization computed by CDTTRF. .br .SH ARGUMENTS .TP 8 UPLO (input) CHARACTER*1 Specifies whether to solve with L or U.
.TP 8 TRANS (input) CHARACTER Specifies the form of the system of equations: .br = 'N': A * X = B (No transpose) .br = 'T': A**T * X = B (Transpose) .br = 'C': A**H * X = B (Conjugate transpose) .TP 8 N (input) INTEGER The order of the matrix A. N >= 0. .TP 8 NRHS (input) INTEGER The number of right hand sides, i.e., the number of columns of the matrix B. NRHS >= 0. .TP 8 DL (input) COMPLEX array, dimension (N-1) The (n-1) multipliers that define the matrix L from the LU factorization of A. .TP 8 D (input) COMPLEX array, dimension (N) The n diagonal elements of the upper triangular matrix U from the LU factorization of A. .TP 8 DU (input) COMPLEX array, dimension (N-1) The (n-1) elements of the first superdiagonal of U. .TP 8 B (input/output) COMPLEX array, dimension (LDB,NRHS) On entry, the right hand side matrix B. On exit, B is overwritten by the solution matrix X. .TP 8 LDB (input) INTEGER The leading dimension of the array B. LDB >= max(1,N). .TP 8 INFO (output) INTEGER = 0: successful exit .br < 0: if INFO = -i, the i-th argument had an illegal value .TH CPTTRSV l "12 May 1997" "modified LAPACK routine" "LAPACK routine (version 2.0)" .SH NAME CPTTRSV - solve one of the triangular systems L * X = B, or L**H * X = B, .SH SYNOPSIS .TP 20 SUBROUTINE CPTTRSV( UPLO, TRANS, N, NRHS, D, E, B, LDB, INFO ) .TP 20 .ti +4 CHARACTER UPLO, TRANS .TP 20 .ti +4 INTEGER INFO, LDB, N, NRHS .TP 20 .ti +4 REAL D( * ) .TP 20 .ti +4 COMPLEX B( LDB, * ), E( * ) .SH PURPOSE CPTTRSV solves one of the triangular systems L * X = B, or L**H * X = B, U * X = B, or U**H * X = B, .br where L or U is the Cholesky factor of a Hermitian positive definite tridiagonal matrix A such that .br A = U**H*D*U or A = L*D*L**H (computed by CPTTRF).
.br .SH ARGUMENTS .TP 8 UPLO (input) CHARACTER*1 Specifies whether the superdiagonal or the subdiagonal of the tridiagonal matrix A is stored and the form of the factorization: .br = 'U': E is the superdiagonal of U, and A = U'*D*U; .br = 'L': E is the subdiagonal of L, and A = L*D*L'. (The two forms are equivalent if A is real.) .TP 8 TRANS (input) CHARACTER Specifies the form of the system of equations: .br = 'N': L * X = B (No transpose) .br = 'N': U * X = B (No transpose) .br = 'C': L**H * X = B (Conjugate transpose) .br = 'C': U**H * X = B (Conjugate transpose) .TP 8 N (input) INTEGER The order of the tridiagonal matrix A. N >= 0. .TP 8 NRHS (input) INTEGER The number of right hand sides, i.e., the number of columns of the matrix B. NRHS >= 0. .TP 8 D (input) REAL array, dimension (N) The n diagonal elements of the diagonal matrix D from the factorization computed by CPTTRF. .TP 8 E (input) COMPLEX array, dimension (N-1) The (n-1) off-diagonal elements of the unit bidiagonal factor U or L from the factorization computed by CPTTRF (see UPLO). .TP 8 B (input/output) COMPLEX array, dimension (LDB,NRHS) On entry, the right hand side matrix B. On exit, the solution matrix X. .TP 8 LDB (input) INTEGER The leading dimension of the array B. LDB >= max(1,N). .TP 8 INFO (output) INTEGER = 0: successful exit .br < 0: if INFO = -i, the i-th argument had an illegal value .SH NAME DDBTF2 - compute an LU factorization of a real m-by-n band matrix A without using partial pivoting or row interchanges .SH SYNOPSIS .TP 19 SUBROUTINE DDBTF2( M, N, KL, KU, AB, LDAB, INFO ) .TP 19 .ti +4 INTEGER INFO, KL, KU, LDAB, M, N .TP 19 .ti +4 DOUBLE PRECISION AB( LDAB, * ) .SH PURPOSE DDBTF2 computes an LU factorization of a real m-by-n band matrix A without using partial pivoting or row interchanges. This is the unblocked version of the algorithm, calling Level 2 BLAS.
.SH ARGUMENTS .TP 8 M (input) INTEGER The number of rows of the matrix A. M >= 0. .TP 8 N (input) INTEGER The number of columns of the matrix A. N >= 0. .TP 8 KL (input) INTEGER The number of subdiagonals within the band of A. KL >= 0. .TP 8 KU (input) INTEGER The number of superdiagonals within the band of A. KU >= 0. .TP 8 AB (input/output) DOUBLE PRECISION array, dimension (LDAB,N) On entry, the matrix A in band storage, in rows KL+1 to 2*KL+KU+1; rows 1 to KL of the array need not be set. The j-th column of A is stored in the j-th column of the array AB as follows: AB(kl+ku+1+i-j,j) = A(i,j) for max(1,j-ku)<=i<=min(m,j+kl) On exit, details of the factorization: U is stored as an upper triangular band matrix with KL+KU superdiagonals in rows 1 to KL+KU+1, and the multipliers used during the factorization are stored in rows KL+KU+2 to 2*KL+KU+1. See below for further details. .TP 8 LDAB (input) INTEGER The leading dimension of the array AB. LDAB >= 2*KL+KU+1. .TP 8 INFO (output) INTEGER = 0: successful exit .br < 0: if INFO = -i, the i-th argument had an illegal value .br > 0: if INFO = +i, U(i,i) is exactly zero. The factorization has been completed, but the factor U is exactly singular, and division by zero will occur if it is used to solve a system of equations. .SH FURTHER DETAILS The band storage scheme is illustrated by the following example, when M = N = 6, KL = 2, KU = 1: .br On entry: On exit: .br * a12 a23 a34 a45 a56 * u12 u23 u34 u45 u56 a11 a22 a33 a44 a55 a66 u11 u22 u33 u44 u55 u66 a21 a32 a43 a54 a65 * m21 m32 m43 m54 m65 * a31 a42 a53 a64 * * m31 m42 m53 m64 * * Array elements marked * are not used by the routine.
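The band storage mapping AB(kl+ku+1+i-j, j) = A(i, j) described above can be sketched in Python. This is an illustrative sketch only, not library code: plain lists are used, the 1-based Fortran indexing of the man page is shifted to 0-based list indices, and `pack_band` is a hypothetical helper name.

```python
# Sketch: pack a dense m-by-n band matrix (KL subdiagonals, KU superdiagonals)
# into LAPACK-style band storage as used by DDBTF2/DDBTRF.  Column j of A
# becomes column j of AB, with A(i,j) stored in AB(kl+ku+1+i-j, j) (1-based);
# rows 1..KL of AB are left unset as factorization workspace.

def pack_band(a, kl, ku):
    m, n = len(a), len(a[0])
    ldab = 2 * kl + ku + 1
    ab = [[0.0] * n for _ in range(ldab)]
    for j in range(1, n + 1):                        # 1-based column index
        for i in range(max(1, j - ku), min(m, j + kl) + 1):
            # 1-based row kl+ku+1+i-j  ->  0-based index kl+ku+i-j
            ab[kl + ku + i - j][j - 1] = a[i - 1][j - 1]
    return ab

# 6x6 example with KL = 2, KU = 1, as in the illustration above; entry (i,j)
# is coded as 10*i + j so its origin stays visible after packing.
a = [[float(10 * (i + 1) + (j + 1)) for j in range(6)] for i in range(6)]
ab = pack_band(a, 2, 1)
```

With these values the diagonal entry a33 = 33 lands in AB row 4, the superdiagonal a34 = 34 in row 3, and the second subdiagonal a31 = 31 in row 6, matching the picture above; rows 1 and 2 of AB stay unset.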
.br .SH NAME DDBTRF - compute an LU factorization of a real m-by-n band matrix A without using partial pivoting or row interchanges .SH SYNOPSIS .TP 19 SUBROUTINE DDBTRF( M, N, KL, KU, AB, LDAB, INFO ) .TP 19 .ti +4 INTEGER INFO, KL, KU, LDAB, M, N .TP 19 .ti +4 DOUBLE PRECISION AB( LDAB, * ) .SH PURPOSE DDBTRF computes an LU factorization of a real m-by-n band matrix A without using partial pivoting or row interchanges. This is the blocked version of the algorithm, calling Level 3 BLAS. .SH ARGUMENTS .TP 8 M (input) INTEGER The number of rows of the matrix A. M >= 0. .TP 8 N (input) INTEGER The number of columns of the matrix A. N >= 0. .TP 8 KL (input) INTEGER The number of subdiagonals within the band of A. KL >= 0. .TP 8 KU (input) INTEGER The number of superdiagonals within the band of A. KU >= 0. .TP 8 AB (input/output) DOUBLE PRECISION array, dimension (LDAB,N) On entry, the matrix A in band storage, in rows KL+1 to 2*KL+KU+1; rows 1 to KL of the array need not be set. The j-th column of A is stored in the j-th column of the array AB as follows: AB(kl+ku+1+i-j,j) = A(i,j) for max(1,j-ku)<=i<=min(m,j+kl) On exit, details of the factorization: U is stored as an upper triangular band matrix with KL+KU superdiagonals in rows 1 to KL+KU+1, and the multipliers used during the factorization are stored in rows KL+KU+2 to 2*KL+KU+1. See below for further details. .TP 8 LDAB (input) INTEGER The leading dimension of the array AB. LDAB >= 2*KL+KU+1. .TP 8 INFO (output) INTEGER = 0: successful exit .br < 0: if INFO = -i, the i-th argument had an illegal value .br > 0: if INFO = +i, U(i,i) is exactly zero. The factorization has been completed, but the factor U is exactly singular, and division by zero will occur if it is used to solve a system of equations.
.SH FURTHER DETAILS The band storage scheme is illustrated by the following example, when M = N = 6, KL = 2, KU = 1: .br On entry: On exit: .br * a12 a23 a34 a45 a56 * u12 u23 u34 u45 u56 a11 a22 a33 a44 a55 a66 u11 u22 u33 u44 u55 u66 a21 a32 a43 a54 a65 * m21 m32 m43 m54 m65 * a31 a42 a53 a64 * * m31 m42 m53 m64 * * Array elements marked * are not used by the routine. .br .TH DDTTRF l "12 May 1997" "modified LAPACK routine" "LAPACK routine (version 2.0)" .SH NAME DDTTRF - compute an LU factorization of a real tridiagonal matrix A using elimination without partial pivoting .SH SYNOPSIS .TP 19 SUBROUTINE DDTTRF( N, DL, D, DU, INFO ) .TP 19 .ti +4 INTEGER INFO, N .TP 19 .ti +4 DOUBLE PRECISION D( * ), DL( * ), DU( * ) .SH PURPOSE DDTTRF computes an LU factorization of a real tridiagonal matrix A using elimination without partial pivoting. The factorization has the form .br A = L * U .br where L is a product of unit lower bidiagonal .br matrices and U is upper triangular with nonzeros in only the main diagonal and first superdiagonal. .br .SH ARGUMENTS .TP 8 N (input) INTEGER The order of the matrix A. N >= 0. .TP 8 DL (input/output) DOUBLE PRECISION array, dimension (N-1) On entry, DL must contain the (n-1) subdiagonal elements of A. On exit, DL is overwritten by the (n-1) multipliers that define the matrix L from the LU factorization of A. .TP 8 D (input/output) DOUBLE PRECISION array, dimension (N) On entry, D must contain the diagonal elements of A. On exit, D is overwritten by the n diagonal elements of the upper triangular matrix U from the LU factorization of A. .TP 8 DU (input/output) DOUBLE PRECISION array, dimension (N-1) On entry, DU must contain the (n-1) superdiagonal elements of A. On exit, DU is overwritten by the (n-1) elements of the first superdiagonal of U.
.TP 8 INFO (output) INTEGER = 0: successful exit .br < 0: if INFO = -i, the i-th argument had an illegal value .br > 0: if INFO = i, U(i,i) is exactly zero. The factorization has been completed, but the factor U is exactly singular, and division by zero will occur if it is used to solve a system of equations. .TH DDTTRSV l "12 May 1997" "modified LAPACK routine" "LAPACK routine (version 2.0)" .SH NAME DDTTRSV - solve one of the systems of equations L * X = B, L**T * X = B, or L**H * X = B, .SH SYNOPSIS .TP 20 SUBROUTINE DDTTRSV( UPLO, TRANS, N, NRHS, DL, D, DU, B, LDB, INFO ) .TP 20 .ti +4 CHARACTER UPLO, TRANS .TP 20 .ti +4 INTEGER INFO, LDB, N, NRHS .TP 20 .ti +4 DOUBLE PRECISION B( LDB, * ), D( * ), DL( * ), DU( * ) .SH PURPOSE DDTTRSV solves one of the systems of equations L * X = B, L**T * X = B, or L**H * X = B, U * X = B, U**T * X = B, or U**H * X = B, .br with factors of the tridiagonal matrix A from the LU factorization computed by DDTTRF. .br .SH ARGUMENTS .TP 8 UPLO (input) CHARACTER*1 Specifies whether to solve with L or U. .TP 8 TRANS (input) CHARACTER Specifies the form of the system of equations: .br = 'N': A * X = B (No transpose) .br = 'T': A**T * X = B (Transpose) .br = 'C': A**H * X = B (Conjugate transpose) .TP 8 N (input) INTEGER The order of the matrix A. N >= 0. .TP 8 NRHS (input) INTEGER The number of right hand sides, i.e., the number of columns of the matrix B. NRHS >= 0. .TP 8 DL (input) DOUBLE PRECISION array, dimension (N-1) The (n-1) multipliers that define the matrix L from the LU factorization of A. .TP 8 D (input) DOUBLE PRECISION array, dimension (N) The n diagonal elements of the upper triangular matrix U from the LU factorization of A. .TP 8 DU (input) DOUBLE PRECISION array, dimension (N-1) The (n-1) elements of the first superdiagonal of U. .TP 8 B (input/output) DOUBLE PRECISION array, dimension (LDB,NRHS) On entry, the right hand side matrix B.
On exit, B is overwritten by the solution matrix X. .TP 8 LDB (input) INTEGER The leading dimension of the array B. LDB >= max(1,N). .TP 8 INFO (output) INTEGER = 0: successful exit .br < 0: if INFO = -i, the i-th argument had an illegal value .TH DLAMSH l "12 May 1997" "LAPACK version 1.5" "LAPACK auxiliary routine (version 1.5)" .SH NAME DLAMSH - send multiple shifts through a small (single node) matrix to see how consecutive small subdiagonal elements are modified by subsequent shifts in an effort to maximize the number of bulges that can be sent through .SH SYNOPSIS .TP 18 SUBROUTINE DLAMSH ( S, LDS, NBULGE, JBLK, H, LDH, N, ULP ) .TP 18 .ti +4 INTEGER LDS, NBULGE, JBLK, LDH, N .TP 18 .ti +4 DOUBLE PRECISION ULP .TP 18 .ti +4 DOUBLE PRECISION S(LDS,*), H(LDH,*) .SH PURPOSE DLAMSH sends multiple shifts through a small (single node) matrix to see how consecutive small subdiagonal elements are modified by subsequent shifts in an effort to maximize the number of bulges that can be sent through. DLAMSH should only be called when there are multiple shifts/bulges (NBULGE > 1) and the first shift is starting in the middle of an unreduced Hessenberg matrix because of two or more consecutive small subdiagonal elements. .br .SH ARGUMENTS .TP 8 S (local input/output) DOUBLE PRECISION array, (LDS,*) On entry, the matrix of shifts. Only the 2x2 diagonal of S is referenced. It is assumed that S has JBLK double shifts (size 2). On exit, the data is rearranged in the best order for applying. .TP 8 LDS (local input) INTEGER On entry, the leading dimension of S. Unchanged on exit. 1 < NBULGE <= JBLK <= LDS/2 .TP 8 NBULGE (local input/output) INTEGER On entry, the number of bulges to send through H ( >1 ). NBULGE should be less than the maximum determined (JBLK). 1 < NBULGE <= JBLK <= LDS/2 On exit, the maximum number of bulges that can be sent through.
.TP 8 JBLK (local input) INTEGER On entry, the number of shifts determined for S. Unchanged on exit. .TP 8 H (local input/output) DOUBLE PRECISION array (LDH,N) On entry, the local matrix to apply the shifts on. H should be aligned so that the starting row is 2. On exit, the data is destroyed. .TP 8 LDH (local input) INTEGER On entry, the leading dimension of H. Unchanged on exit. .TP 8 N (local input) INTEGER On entry, the size of H. If all the bulges are expected to go through, N should be at least 4*NBULGE+2. Otherwise, NBULGE may be reduced by this routine. .TP 8 ULP (local input) DOUBLE PRECISION On entry, machine precision. Unchanged on exit. Implemented by: G. Henry, May 1, 1997 .TH DLAREF l "12 May 1997" "LAPACK version 1.5" "LAPACK auxiliary routine (version 1.5)" .SH NAME DLAREF - apply one or several Householder reflectors of size 3 to one or two matrices (if column is specified) on either their rows or columns .SH SYNOPSIS .TP 19 SUBROUTINE DLAREF( TYPE, A, LDA, WANTZ, Z, LDZ, BLOCK, IROW1, ICOL1, ISTART, ISTOP, ITMP1, ITMP2, LILOZ, LIHIZ, VECS, V2, V3, T1, T2, T3 ) .TP 19 .ti +4 LOGICAL BLOCK, WANTZ .TP 19 .ti +4 CHARACTER TYPE .TP 19 .ti +4 INTEGER ICOL1, IROW1, ISTART, ISTOP, ITMP1, ITMP2, LDA, LDZ, LIHIZ, LILOZ .TP 19 .ti +4 DOUBLE PRECISION T1, T2, T3, V2, V3 .TP 19 .ti +4 DOUBLE PRECISION A( LDA, * ), VECS( * ), Z( LDZ, * ) .SH PURPOSE DLAREF applies one or several Householder reflectors of size 3 to one or two matrices (if column is specified) on either their rows or columns. .SH ARGUMENTS .TP 8 TYPE (global input) CHARACTER*1 If 'R': Apply reflectors to the rows of the matrix (apply from left) Otherwise: Apply reflectors to the columns of the matrix Unchanged on exit. .TP 8 A (global input/output) DOUBLE PRECISION array, (LDA,*) On entry, the matrix to receive the reflections. The updated matrix on exit.
.TP 8 LDA (local input) INTEGER On entry, the leading dimension of A. Unchanged on exit. .TP 8 WANTZ (global input) LOGICAL If .TRUE., then apply any column reflections to Z as well. If .FALSE., then do no additional work on Z. .TP 8 Z (global input/output) DOUBLE PRECISION array, (LDZ,*) On entry, the second matrix to receive column reflections. This is changed only if WANTZ is set. .TP 8 LDZ (local input) INTEGER On entry, the leading dimension of Z. Unchanged on exit. .TP 8 BLOCK (global input) LOGICAL If .TRUE., then apply several reflectors at once and read their data from the VECS array. If .FALSE., apply the single reflector given by V2, V3, T1, T2, and T3. .TP 8 IROW1 (local input/output) INTEGER On entry, the local row element of A. Undefined on output. .TP 8 ICOL1 (local input/output) INTEGER On entry, the local column element of A. Undefined on output. .TP 8 ISTART (global input) INTEGER Specifies the "number" of the first reflector. This is used as an index into VECS if BLOCK is set. ISTART is ignored if BLOCK is .FALSE.. .TP 8 ISTOP (global input) INTEGER Specifies the "number" of the last reflector. This is used as an index into VECS if BLOCK is set. ISTOP is ignored if BLOCK is .FALSE.. .TP 8 ITMP1 (local input) INTEGER Starting range into A. For rows, this is the local first column. For columns, this is the local first row. .TP 8 ITMP2 (local input) INTEGER Ending range into A. For rows, this is the local last column. For columns, this is the local last row. LILOZ LIHIZ (local input) INTEGER These serve the same purpose as ITMP1,ITMP2 but for Z when WANTZ is set. .TP 8 VECS (global input) DOUBLE PRECISION array of size 3*N (matrix size) This holds the size 3 reflectors one after another and this is only accessed when BLOCK is .TRUE. V2 V3 T1 T2 T3 (global input/output) DOUBLE PRECISION This holds information on a single size 3 Householder reflector and is read when BLOCK is .FALSE., and overwritten when BLOCK is .TRUE. Implemented by: G. 
Henry, May 1, 1997 .TH DLASORTE l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME DLASORTE - sort eigenpairs so that real eigenpairs are together and complex are together .SH SYNOPSIS .TP 20 SUBROUTINE DLASORTE ( S, LDS, J, OUT, INFO ) .TP 20 .ti +4 INTEGER INFO, J, LDS .TP 20 .ti +4 DOUBLE PRECISION OUT( J, * ), S( LDS, * ) .SH PURPOSE DLASORTE sorts eigenpairs so that real eigenpairs are together and complex are together. This way one can employ 2x2 shifts easily since every 2nd subdiagonal is guaranteed to be zero. .br This routine does no parallel work and makes no calls. .br .SH ARGUMENTS .TP 8 S (local input/output) DOUBLE PRECISION array, dimension LDS On entry, a matrix already in Schur form. On exit, the diagonal blocks of S have been rewritten to pair the eigenvalues. The resulting matrix is no longer similar to the input. .TP 8 LDS (local input) INTEGER On entry, the leading dimension of the local array S. Unchanged on exit. .TP 8 J (local input) INTEGER On entry, the order of the matrix S. Unchanged on exit. .TP 8 OUT (local input/output) DOUBLE PRECISION array, dimension Jx2 This is the work buffer required by this routine. .TP 8 INFO (local input) INTEGER This is set if the input matrix had an odd number of real eigenvalues and things couldn't be paired or if the input matrix S was not originally in Schur form. 0 indicates successful completion. Implemented by: G.
Henry, May 1, 1997 .TH DLASRT2 l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME DLASRT2 - sort the numbers in D in increasing order (if ID = 'I') or in decreasing order (if ID = 'D' ) .SH SYNOPSIS .TP 20 SUBROUTINE DLASRT2( ID, N, D, KEY, INFO ) .TP 20 .ti +4 CHARACTER ID .TP 20 .ti +4 INTEGER INFO, N .TP 20 .ti +4 INTEGER KEY( * ) .TP 20 .ti +4 DOUBLE PRECISION D( * ) .SH PURPOSE Sort the numbers in D in increasing order (if ID = 'I') or in decreasing order (if ID = 'D' ). Use Quick Sort, reverting to Insertion sort on arrays of .br size <= 20. Dimension of STACK limits N to about 2**32. .br .SH ARGUMENTS .TP 8 ID (input) CHARACTER*1 = 'I': sort D in increasing order; .br = 'D': sort D in decreasing order. .TP 8 N (input) INTEGER The length of the array D. .TP 8 D (input/output) DOUBLE PRECISION array, dimension (N) On entry, the array to be sorted. On exit, D has been sorted into increasing order (D(1) <= ... <= D(N) ) or into decreasing order (D(1) >= ... >= D(N) ), depending on ID.
.TP 8 KEY (input/output) INTEGER array, dimension (N) On entry, KEY contains a key to each of the entries in D() Typically, KEY(I) = I for all I On exit, KEY is permuted in exactly the same manner as D() was permuted from input to output Therefore, if KEY(I) = I for all I upon input, then D_out(I) = D_in(KEY(I)) .TP 8 INFO (output) INTEGER = 0: successful exit .br < 0: if INFO = -i, the i-th argument had an illegal value .TH DPTTRSV l "12 May 1997" "modified LAPACK routine" "LAPACK routine (version 2.0)" .SH NAME DPTTRSV - solve one of the triangular systems L**T * X = B, or L * X = B, .SH SYNOPSIS .TP 20 SUBROUTINE DPTTRSV( TRANS, N, NRHS, D, E, B, LDB, INFO ) .TP 20 .ti +4 CHARACTER TRANS .TP 20 .ti +4 INTEGER INFO, LDB, N, NRHS .TP 20 .ti +4 DOUBLE PRECISION D( * ) .TP 20 .ti +4 DOUBLE PRECISION B( LDB, * ), E( * ) .SH PURPOSE DPTTRSV solves one of the triangular systems L**T * X = B, or L * X = B, where L is the Cholesky factor of a symmetric positive .br definite tridiagonal matrix A such that .br A = L*D*L**T (computed by DPTTRF). .br .SH ARGUMENTS .TP 8 TRANS (input) CHARACTER Specifies the form of the system of equations: .br = 'N': L * X = B (No transpose) .br = 'T': L**T * X = B (Transpose) .TP 8 N (input) INTEGER The order of the tridiagonal matrix A. N >= 0. .TP 8 NRHS (input) INTEGER The number of right hand sides, i.e., the number of columns of the matrix B. NRHS >= 0. .TP 8 D (input) DOUBLE PRECISION array, dimension (N) The n diagonal elements of the diagonal matrix D from the factorization computed by DPTTRF. .TP 8 E (input) DOUBLE PRECISION array, dimension (N-1) The (n-1) off-diagonal elements of the unit bidiagonal factor L from the factorization computed by DPTTRF. .TP 8 B (input/output) DOUBLE PRECISION array, dimension (LDB,NRHS) On entry, the right hand side matrix B. On exit, the solution matrix X.
.TP 8 LDB (input) INTEGER The leading dimension of the array B. LDB >= max(1,N). .TP 8 INFO (output) INTEGER = 0: successful exit .br < 0: if INFO = -i, the i-th argument had an illegal value .TH DSTEIN2 l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME DSTEIN2 - compute the eigenvectors of a real symmetric tridiagonal matrix T corresponding to specified eigenvalues, using inverse iteration .SH SYNOPSIS .TP 20 SUBROUTINE DSTEIN2( N, D, E, M, W, IBLOCK, ISPLIT, ORFAC, Z, LDZ, WORK, IWORK, IFAIL, INFO ) .TP 20 .ti +4 INTEGER INFO, LDZ, M, N .TP 20 .ti +4 DOUBLE PRECISION ORFAC .TP 20 .ti +4 INTEGER IBLOCK( * ), IFAIL( * ), ISPLIT( * ), IWORK( * ) .TP 20 .ti +4 DOUBLE PRECISION D( * ), E( * ), W( * ), WORK( * ), Z( LDZ, * ) .SH PURPOSE DSTEIN2 computes the eigenvectors of a real symmetric tridiagonal matrix T corresponding to specified eigenvalues, using inverse iteration. The maximum number of iterations allowed for each eigenvector is specified by an internal parameter MAXITS (currently set to 5). .SH ARGUMENTS .TP 8 N (input) INTEGER The order of the matrix. N >= 0. .TP 8 D (input) DOUBLE PRECISION array, dimension (N) The n diagonal elements of the tridiagonal matrix T. .TP 8 E (input) DOUBLE PRECISION array, dimension (N) The (n-1) subdiagonal elements of the tridiagonal matrix T, in elements 1 to N-1. E(N) need not be set. .TP 8 M (input) INTEGER The number of eigenvectors to be found. 0 <= M <= N. .TP 8 W (input) DOUBLE PRECISION array, dimension (N) The first M elements of W contain the eigenvalues for which eigenvectors are to be computed. The eigenvalues should be grouped by split-off block and ordered from smallest to largest within the block. ( The output array W from DSTEBZ with ORDER = 'B' is expected here.
) .TP 8 IBLOCK (input) INTEGER array, dimension (N) The submatrix indices associated with the corresponding eigenvalues in W; IBLOCK(i)=1 if eigenvalue W(i) belongs to the first submatrix from the top, =2 if W(i) belongs to the second submatrix, etc. ( The output array IBLOCK from DSTEBZ is expected here. ) .TP 8 ISPLIT (input) INTEGER array, dimension (N) The splitting points, at which T breaks up into submatrices. The first submatrix consists of rows/columns 1 to ISPLIT( 1 ), the second of rows/columns ISPLIT( 1 )+1 through ISPLIT( 2 ), etc. ( The output array ISPLIT from DSTEBZ is expected here. ) .TP 8 ORFAC (input) DOUBLE PRECISION ORFAC specifies which eigenvectors should be orthogonalized. Eigenvectors that correspond to eigenvalues which are within ORFAC*||T|| of each other are to be orthogonalized. .TP 8 Z (output) DOUBLE PRECISION array, dimension (LDZ, M) The computed eigenvectors. The eigenvector associated with the eigenvalue W(i) is stored in the i-th column of Z. Any vector which fails to converge is set to its current iterate after MAXITS iterations. .TP 8 LDZ (input) INTEGER The leading dimension of the array Z. LDZ >= max(1,N). .TP 8 WORK (workspace) DOUBLE PRECISION array, dimension (5*N) .TP 8 IWORK (workspace) INTEGER array, dimension (N) .TP 8 IFAIL (output) INTEGER array, dimension (M) On normal exit, all elements of IFAIL are zero. If one or more eigenvectors fail to converge after MAXITS iterations, then their indices are stored in array IFAIL. .TP 8 INFO (output) INTEGER = 0: successful exit. .br < 0: if INFO = -i, the i-th argument had an illegal value .br > 0: if INFO = i, then i eigenvectors failed to converge in MAXITS iterations. Their indices are stored in array IFAIL. .SH PARAMETERS .TP 8 MAXITS INTEGER, default = 5 The maximum number of iterations performed. .TP 8 EXTRA INTEGER, default = 2 The number of iterations performed after norm growth criterion is satisfied, should be at least 1. 
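The inverse iteration that DSTEIN2 performs (repeatedly solving (T - w*I) x = x and normalizing, capped by a MAXITS-style iteration limit) can be sketched serially in Python. This is a simplified illustration under stated assumptions: a fixed start vector instead of a random one, no reorthogonalization against other eigenvectors, and a no-pivoting tridiagonal solve; `solve_shifted` and `inverse_iteration` are hypothetical names, not LAPACK routines.

```python
import math

def solve_shifted(d, e, w, b):
    """Solve (T - w*I) x = b for symmetric tridiagonal T = (d, e), no pivoting."""
    n = len(d)
    cp = [d[i] - w for i in range(n)]     # shifted pivots
    bp = list(b)
    for i in range(1, n):                 # forward elimination
        m = e[i - 1] / cp[i - 1]
        cp[i] -= m * e[i - 1]
        bp[i] -= m * bp[i - 1]
    x = [0.0] * n
    x[-1] = bp[-1] / cp[-1]
    for i in range(n - 2, -1, -1):        # back substitution
        x[i] = (bp[i] - e[i] * x[i + 1]) / cp[i]
    return x

def inverse_iteration(d, e, w, maxits=5):
    x = [1.0] * len(d)
    for _ in range(maxits):               # iteration cap, like MAXITS in DSTEIN2
        x = solve_shifted(d, e, w, x)
        nrm = math.sqrt(sum(xi * xi for xi in x))
        x = [xi / nrm for xi in x]
    return x

# T = tridiag(1, 2, 1) of order 3 has the eigenvalue w = 2 - sqrt(2).
d, e = [2.0, 2.0, 2.0], [1.0, 1.0]
w = 2.0 - math.sqrt(2.0)
v = inverse_iteration(d, e, w)
# residual ||T v - w v||_inf as a convergence check
tv = [d[0] * v[0] + e[0] * v[1],
      e[0] * v[0] + d[1] * v[1] + e[1] * v[2],
      e[1] * v[1] + d[2] * v[2]]
res = max(abs(tv[i] - w * v[i]) for i in range(3))
```

Because the shift is (nearly) an eigenvalue, one solve already amplifies the wanted eigenvector direction enormously; the normalization step keeps the iterate finite, which is why inverse iteration tolerates the nearly singular shifted system.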
.TH DSTEQR2 l "12 May 1997" "modified LAPACK routine" "LAPACK routine (version 2.0)" .SH NAME DSTEQR2 - is a modified version of LAPACK routine DSTEQR .SH SYNOPSIS .TP 20 SUBROUTINE DSTEQR2( COMPZ, N, D, E, Z, LDZ, NR, WORK, INFO ) .TP 20 .ti +4 CHARACTER COMPZ .TP 20 .ti +4 INTEGER INFO, LDZ, N, NR .TP 20 .ti +4 DOUBLE PRECISION D( * ), E( * ), WORK( * ), Z( LDZ, * ) .SH PURPOSE DSTEQR2 is a modified version of LAPACK routine DSTEQR. DSTEQR2 computes all eigenvalues and, optionally, eigenvectors of a symmetric tridiagonal matrix using the implicit QL or QR method. DSTEQR2 is modified from DSTEQR to allow each ScaLAPACK process running DSTEQR2 to perform updates on a distributed matrix Q. Proper usage of DSTEQR2 can be gleaned from examination of ScaLAPACK's PDSYEV. .br .SH ARGUMENTS .TP 8 COMPZ (input) CHARACTER*1 = 'N': Compute eigenvalues only. .br = 'I': Compute eigenvalues and eigenvectors of the tridiagonal matrix. Z must be initialized to the identity matrix by PDLASET or DLASET prior to entering this subroutine. .TP 8 N (input) INTEGER The order of the matrix. N >= 0. .TP 8 D (input/output) DOUBLE PRECISION array, dimension (N) On entry, the diagonal elements of the tridiagonal matrix. On exit, if INFO = 0, the eigenvalues in ascending order. .TP 8 E (input/output) DOUBLE PRECISION array, dimension (N-1) On entry, the (n-1) subdiagonal elements of the tridiagonal matrix. On exit, E has been destroyed. .TP 8 Z (local input/local output) DOUBLE PRECISION array, global dimension (N, N), local dimension (LDZ, NR). On entry, if COMPZ = 'V', then Z contains the orthogonal matrix used in the reduction to tridiagonal form. On exit, if INFO = 0, then if COMPZ = 'V', Z contains the orthonormal eigenvectors of the original symmetric matrix, and if COMPZ = 'I', Z contains the orthonormal eigenvectors of the symmetric tridiagonal matrix.
If COMPZ = 'N', then Z is not referenced. .TP 8 LDZ (input) INTEGER The leading dimension of the array Z. LDZ >= 1, and if eigenvectors are desired, then LDZ >= max(1,N). .TP 8 NR (input) INTEGER NR = MAX(1, NUMROC( N, NB, MYPROW, 0, NPROCS ) ). If COMPZ = 'N', then NR is not referenced. .TP 8 WORK (workspace) DOUBLE PRECISION array, dimension (max(1,2*N-2)) If COMPZ = 'N', then WORK is not referenced. .TP 8 INFO (output) INTEGER = 0: successful exit .br < 0: if INFO = -i, the i-th argument had an illegal value .br > 0: the algorithm has failed to find all the eigenvalues in a total of 30*N iterations; if INFO = i, then i elements of E have not converged to zero; on exit, D and E contain the elements of a symmetric tridiagonal matrix which is orthogonally similar to the original matrix. .TH PCDBSV l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PCDBSV - solve a system of linear equations A(1:N, JA:JA+N-1) * X = B(IB:IB+N-1, 1:NRHS) .SH SYNOPSIS .TP 19 SUBROUTINE PCDBSV( N, BWL, BWU, NRHS, A, JA, DESCA, B, IB, DESCB, WORK, LWORK, INFO ) .TP 19 .ti +4 INTEGER BWL, BWU, IB, INFO, JA, LWORK, N, NRHS .TP 19 .ti +4 INTEGER DESCA( * ), DESCB( * ) .TP 19 .ti +4 COMPLEX A( * ), B( * ), WORK( * ) .SH PURPOSE PCDBSV solves a system of linear equations .br A(1:N, JA:JA+N-1) * X = B(IB:IB+N-1, 1:NRHS) .br where A(1:N, JA:JA+N-1) is an N-by-N complex .br banded diagonally dominant-like distributed .br matrix with bandwidth BWL, BWU. .br Gaussian elimination without pivoting .br is used to factor a reordering .br of the matrix into L U. .br See PCDBTRF and PCDBTRS for details.
.br .TH PCDBTRF l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PCDBTRF - compute an LU factorization of an N-by-N complex banded diagonally dominant-like distributed matrix with bandwidth BWL, BWU .SH SYNOPSIS .TP 20 SUBROUTINE PCDBTRF( N, BWL, BWU, A, JA, DESCA, AF, LAF, WORK, LWORK, INFO ) .TP 20 .ti +4 INTEGER BWL, BWU, INFO, JA, LAF, LWORK, N .TP 20 .ti +4 INTEGER DESCA( * ) .TP 20 .ti +4 COMPLEX A( * ), AF( * ), WORK( * ) .SH PURPOSE PCDBTRF computes an LU factorization of an N-by-N complex banded diagonally dominant-like distributed matrix with bandwidth BWL, BWU: A(1:N, JA:JA+N-1). Reordering is used to increase parallelism in the factorization. This reordering results in factors that are DIFFERENT from those produced by equivalent sequential codes. These factors cannot be used directly by users; however, they can be used in .br subsequent calls to PCDBTRS to solve linear systems. .br The factorization has the form .br P A(1:N, JA:JA+N-1) P^T = L U .br where U is a banded upper triangular matrix and L is banded lower triangular, and P is a permutation matrix.
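Serially, the kind of factorization these routines build on (LU by elimination without pivoting, with the multipliers overwriting the subdiagonal as documented for DDTTRF/CDTTRF above) can be sketched in Python for the tridiagonal case. This is a didactic sketch only; the actual ScaLAPACK routines factor a reordered matrix for parallelism and produce different factors, and `dttrf` here is a hypothetical name, not the library routine.

```python
def dttrf(dl, d, du):
    """LU of tridiag(dl, d, du) without pivoting, DDTTRF-style overwrite:
    on exit dl holds the multipliers of L, d the diagonal of U, and du
    (unchanged) the first superdiagonal of U.  Returns 0, or i if U(i,i) = 0."""
    n = len(d)
    for i in range(n - 1):
        if d[i] == 0.0:
            return i + 1                  # INFO > 0: exact zero pivot
        dl[i] = dl[i] / d[i]              # multiplier stored over A's subdiagonal
        d[i + 1] -= dl[i] * du[i]         # eliminate, updating the next pivot
    return 0

# Diagonally dominant example, where skipping pivoting is safe.
dl, d, du = [1.0, 1.0], [4.0, 4.0, 4.0], [1.0, 1.0]
info = dttrf(dl, d, du)
```

Multiplying the factors back, L(2,1)*U(1,1) = 0.25*4 = 1 and L(2,1)*U(1,2) + U(2,2) = 0.25 + 3.75 = 4 recover the original entries a21 and a22, which is why diagonal dominance (no small pivots) is the condition these no-pivoting routines require.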
.br scalapack-doc-1.5/man/manl/pcdbtrs.l .TH PCDBTRS l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PCDBTRS - solve a system of linear equations A(1:N, JA:JA+N-1) * X = B(IB:IB+N-1, 1:NRHS) .SH SYNOPSIS .TP 20 SUBROUTINE PCDBTRS( TRANS, N, BWL, BWU, NRHS, A, JA, DESCA, B, IB, DESCB, AF, LAF, WORK, LWORK, INFO ) .TP 20 .ti +4 CHARACTER TRANS .TP 20 .ti +4 INTEGER BWL, BWU, IB, INFO, JA, LAF, LWORK, N, NRHS .TP 20 .ti +4 INTEGER DESCA( * ), DESCB( * ) .TP 20 .ti +4 COMPLEX A( * ), AF( * ), B( * ), WORK( * ) .SH PURPOSE PCDBTRS solves a system of linear equations .br A(1:N, JA:JA+N-1) * X = B(IB:IB+N-1, 1:NRHS) .br or .br A(1:N, JA:JA+N-1)' * X = B(IB:IB+N-1, 1:NRHS) .br where A(1:N, JA:JA+N-1) is the matrix used to produce the factors stored in A(1:N,JA:JA+N-1) and AF by PCDBTRF. .br A(1:N, JA:JA+N-1) is an N-by-N complex .br banded diagonally dominant-like distributed .br matrix with bandwidth BWL, BWU. .br Routine PCDBTRF MUST be called first. .br scalapack-doc-1.5/man/manl/pcdbtrsv.l .TH PCDBTRSV l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PCDBTRSV - solve a banded triangular system of linear equations A(1:N, JA:JA+N-1) * X = B(IB:IB+N-1, 1:NRHS) .SH SYNOPSIS .TP 21 SUBROUTINE PCDBTRSV( UPLO, TRANS, N, BWL, BWU, NRHS, A, JA, DESCA, B, IB, DESCB, AF, LAF, WORK, LWORK, INFO ) .TP 21 .ti +4 CHARACTER TRANS, UPLO .TP 21 .ti +4 INTEGER BWL, BWU, IB, INFO, JA, LAF, LWORK, N, NRHS .TP 21 .ti +4 INTEGER DESCA( * ), DESCB( * ) .TP 21 .ti +4 COMPLEX A( * ), AF( * ), B( * ), WORK( * ) .SH PURPOSE PCDBTRSV solves a banded triangular system of linear equations .br A(1:N, JA:JA+N-1) * X = B(IB:IB+N-1, 1:NRHS) .br or .br A(1:N, JA:JA+N-1)^H * X = B(IB:IB+N-1, 1:NRHS) .br where A(1:N, JA:JA+N-1) is a banded .br triangular matrix factor produced by the .br Gaussian elimination code PCDBTRF .br and is stored in A(1:N,JA:JA+N-1) and AF. 
.br The matrix stored in A(1:N, JA:JA+N-1) is either .br upper or lower triangular according to UPLO, .br and the choice of solving A(1:N, JA:JA+N-1) or A(1:N, JA:JA+N-1)^H is dictated by the user via the parameter TRANS. .br Routine PCDBTRF MUST be called first. .br scalapack-doc-1.5/man/manl/pcdtsv.l .TH PCDTSV l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PCDTSV - solve a system of linear equations A(1:N, JA:JA+N-1) * X = B(IB:IB+N-1, 1:NRHS) .SH SYNOPSIS .TP 19 SUBROUTINE PCDTSV( N, NRHS, DL, D, DU, JA, DESCA, B, IB, DESCB, WORK, LWORK, INFO ) .TP 19 .ti +4 INTEGER IB, INFO, JA, LWORK, N, NRHS .TP 19 .ti +4 INTEGER DESCA( * ), DESCB( * ) .TP 19 .ti +4 COMPLEX B( * ), D( * ), DL( * ), DU( * ), WORK( * ) .SH PURPOSE PCDTSV solves a system of linear equations where A(1:N, JA:JA+N-1) is an N-by-N complex .br tridiagonal diagonally dominant-like distributed .br matrix. .br Gaussian elimination without pivoting .br is used to factor a reordering .br of the matrix into L U. .br See PCDTTRF and PCDTTRS for details. .br scalapack-doc-1.5/man/manl/pcdttrf.l .TH PCDTTRF l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PCDTTRF - compute an LU factorization of an N-by-N complex tridiagonal diagonally dominant-like distributed matrix A(1:N, JA:JA+N-1) .SH SYNOPSIS .TP 20 SUBROUTINE PCDTTRF( N, DL, D, DU, JA, DESCA, AF, LAF, WORK, LWORK, INFO ) .TP 20 .ti +4 INTEGER INFO, JA, LAF, LWORK, N .TP 20 .ti +4 INTEGER DESCA( * ) .TP 20 .ti +4 COMPLEX AF( * ), D( * ), DL( * ), DU( * ), WORK( * ) .SH PURPOSE PCDTTRF computes an LU factorization of an N-by-N complex tridiagonal diagonally dominant-like distributed matrix A(1:N, JA:JA+N-1). Reordering is used to increase parallelism in the factorization. 
This reordering results in factors that are DIFFERENT from those produced by equivalent sequential codes. These factors cannot be used directly by users; however, they can be used in .br subsequent calls to PCDTTRS to solve linear systems. .br The factorization has the form .br P A(1:N, JA:JA+N-1) P^T = L U .br where U is a tridiagonal upper triangular matrix and L is tridiagonal lower triangular, and P is a permutation matrix. .br scalapack-doc-1.5/man/manl/pcdttrs.l .TH PCDTTRS l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PCDTTRS - solve a system of linear equations A(1:N, JA:JA+N-1) * X = B(IB:IB+N-1, 1:NRHS) .SH SYNOPSIS .TP 20 SUBROUTINE PCDTTRS( TRANS, N, NRHS, DL, D, DU, JA, DESCA, B, IB, DESCB, AF, LAF, WORK, LWORK, INFO ) .TP 20 .ti +4 CHARACTER TRANS .TP 20 .ti +4 INTEGER IB, INFO, JA, LAF, LWORK, N, NRHS .TP 20 .ti +4 INTEGER DESCA( * ), DESCB( * ) .TP 20 .ti +4 COMPLEX AF( * ), B( * ), D( * ), DL( * ), DU( * ), WORK( * ) .SH PURPOSE PCDTTRS solves a system of linear equations .br A(1:N, JA:JA+N-1) * X = B(IB:IB+N-1, 1:NRHS) .br or .br A(1:N, JA:JA+N-1)' * X = B(IB:IB+N-1, 1:NRHS) .br where A(1:N, JA:JA+N-1) is the matrix used to produce the factors stored in A(1:N,JA:JA+N-1) and AF by PCDTTRF. .br A(1:N, JA:JA+N-1) is an N-by-N complex .br tridiagonal diagonally dominant-like distributed .br matrix. .br Routine PCDTTRF MUST be called first. 
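The factor/solve split of PCDTTRF/PCDTTRS can be illustrated sequentially. The sketch below (function names `tridiag_lu` and `tridiag_solve` are hypothetical, not ScaLAPACK names) factors a diagonally dominant tridiagonal matrix into L U without pivoting and then solves with the stored factors; note that the actual ScaLAPACK routines factor a *reordered* matrix for parallelism, so their factors differ from these:

```python
def tridiag_lu(dl, d, du):
    """LU of a diagonally dominant tridiagonal matrix, no pivoting.
    dl, d, du are the sub-, main and superdiagonals.  Returns the
    multipliers l (the strictly lower part of L) and the modified
    diagonal u (U's diagonal); du is unchanged and is U's superdiagonal."""
    n = len(d)
    l = [0.0] * (n - 1)
    u = list(d)
    for i in range(n - 1):
        l[i] = dl[i] / u[i]          # multiplier L(i+1,i)
        u[i + 1] -= l[i] * du[i]     # update U's diagonal entry
    return l, u

def tridiag_solve(l, u, du, b):
    """Solve L*U*x = b with the factors produced by tridiag_lu."""
    n = len(u)
    y = list(b)
    for i in range(1, n):            # forward substitution with L
        y[i] -= l[i - 1] * y[i - 1]
    x = [0.0] * n
    x[-1] = y[-1] / u[-1]
    for i in range(n - 2, -1, -1):   # back substitution with U
        x[i] = (y[i] - du[i] * x[i + 1]) / u[i]
    return x
```

As in the ScaLAPACK pair, the factorization is done once and the factors can then be reused for many right-hand sides.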
.br scalapack-doc-1.5/man/manl/pcdttrsv.l .TH PCDTTRSV l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PCDTTRSV - solve a tridiagonal triangular system of linear equations A(1:N, JA:JA+N-1) * X = B(IB:IB+N-1, 1:NRHS) .SH SYNOPSIS .TP 21 SUBROUTINE PCDTTRSV( UPLO, TRANS, N, NRHS, DL, D, DU, JA, DESCA, B, IB, DESCB, AF, LAF, WORK, LWORK, INFO ) .TP 21 .ti +4 CHARACTER TRANS, UPLO .TP 21 .ti +4 INTEGER IB, INFO, JA, LAF, LWORK, N, NRHS .TP 21 .ti +4 INTEGER DESCA( * ), DESCB( * ) .TP 21 .ti +4 COMPLEX AF( * ), B( * ), D( * ), DL( * ), DU( * ), WORK( * ) .SH PURPOSE PCDTTRSV solves a tridiagonal triangular system of linear equations .br A(1:N, JA:JA+N-1) * X = B(IB:IB+N-1, 1:NRHS) .br or .br A(1:N, JA:JA+N-1)^H * X = B(IB:IB+N-1, 1:NRHS) .br where A(1:N, JA:JA+N-1) is a tridiagonal .br triangular matrix factor produced by the .br Gaussian elimination code PCDTTRF .br and is stored in A(1:N,JA:JA+N-1) and AF. .br The matrix stored in A(1:N, JA:JA+N-1) is either .br upper or lower triangular according to UPLO, .br and the choice of solving A(1:N, JA:JA+N-1) or A(1:N, JA:JA+N-1)^H is dictated by the user via the parameter TRANS. .br Routine PCDTTRF MUST be called first. .br scalapack-doc-1.5/man/manl/pcgbsv.l .TH PCGBSV l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PCGBSV - solve a system of linear equations A(1:N, JA:JA+N-1) * X = B(IB:IB+N-1, 1:NRHS) .SH SYNOPSIS .TP 19 SUBROUTINE PCGBSV( N, BWL, BWU, NRHS, A, JA, DESCA, IPIV, B, IB, DESCB, WORK, LWORK, INFO ) .TP 19 .ti +4 INTEGER BWL, BWU, IB, INFO, JA, LWORK, N, NRHS .TP 19 .ti +4 INTEGER DESCA( * ), DESCB( * ), IPIV( * ) .TP 19 .ti +4 COMPLEX A( * ), B( * ), WORK( * ) .SH PURPOSE PCGBSV solves a system of linear equations where A(1:N, JA:JA+N-1) is an N-by-N complex .br banded distributed .br matrix with bandwidth BWL, BWU. 
.br Gaussian elimination with pivoting .br is used to factor a reordering .br of the matrix into P L U. .br See PCGBTRF and PCGBTRS for details. .br scalapack-doc-1.5/man/manl/pcgbtrf.l .TH PCGBTRF l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PCGBTRF - compute an LU factorization of an N-by-N complex banded distributed matrix with bandwidth BWL, BWU .SH SYNOPSIS .TP 20 SUBROUTINE PCGBTRF( N, BWL, BWU, A, JA, DESCA, IPIV, AF, LAF, WORK, LWORK, INFO ) .TP 20 .ti +4 INTEGER BWL, BWU, INFO, JA, LAF, LWORK, N .TP 20 .ti +4 INTEGER DESCA( * ), IPIV( * ) .TP 20 .ti +4 COMPLEX A( * ), AF( * ), WORK( * ) .SH PURPOSE PCGBTRF computes an LU factorization of an N-by-N complex banded distributed matrix with bandwidth BWL, BWU: A(1:N, JA:JA+N-1). Reordering is used to increase parallelism in the factorization. This reordering results in factors that are DIFFERENT from those produced by equivalent sequential codes. These factors cannot be used directly by users; however, they can be used in .br subsequent calls to PCGBTRS to solve linear systems. .br The factorization has the form .br P A(1:N, JA:JA+N-1) Q = L U .br where U is a banded upper triangular matrix and L is banded lower triangular, and P and Q are permutation matrices. .br The matrix Q represents reordering of columns .br for parallelism's sake, while P represents .br reordering of rows for numerical stability using .br classic partial pivoting. 
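The "reordering of rows for numerical stability using classic partial pivoting" that PCGBTRF applies can be shown on a small dense example. The sketch below (a sequential illustration only; the function name `lu_partial_pivot` is hypothetical, and the real routine works on a banded distributed matrix and adds the column reordering Q) computes P A = L U by swapping, at each step, the row with the largest pivot candidate to the top:

```python
def lu_partial_pivot(a):
    """In-place LU with partial (row) pivoting on a dense matrix a,
    given as a list of row lists.  Returns (piv, a) such that P*A = L*U:
    piv records the row permutation, the strictly lower part of a holds
    the unit-lower-triangular L, and the upper part holds U."""
    n = len(a)
    piv = list(range(n))
    for k in range(n):
        # classic partial pivoting: pick the largest |a[i][k]|, i >= k
        p = max(range(k, n), key=lambda i: abs(a[i][k]))
        if p != k:
            a[k], a[p] = a[p], a[k]          # swap rows for stability
            piv[k], piv[p] = piv[p], piv[k]
        for i in range(k + 1, n):
            a[i][k] /= a[k][k]               # multiplier, entry of L
            for j in range(k + 1, n):
                a[i][j] -= a[i][k] * a[k][j] # update trailing submatrix
    return piv, a
```

Without the row swaps, a zero (or tiny) pivot such as A(1,1) = 0 would make the elimination break down, which is exactly why P is needed.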
.br scalapack-doc-1.5/man/manl/pcgbtrs.l0100644000056400000620000000165106335610614017067 0ustar pfrauenfstaff.TH PCGBTRS l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PCGBTRS - solve a system of linear equations A(1:N, JA:JA+N-1) * X = B(IB:IB+N-1, 1:NRHS) .SH SYNOPSIS .TP 20 SUBROUTINE PCGBTRS( TRANS, N, BWL, BWU, NRHS, A, JA, DESCA, IPIV, B, IB, DESCB, AF, LAF, WORK, LWORK, INFO ) .TP 20 .ti +4 CHARACTER TRANS .TP 20 .ti +4 INTEGER BWU, BWL, IB, INFO, JA, LAF, LWORK, N, NRHS .TP 20 .ti +4 INTEGER DESCA( * ), DESCB( * ), IPIV(*) .TP 20 .ti +4 COMPLEX A( * ), AF( * ), B( * ), WORK( * ) .SH PURPOSE PCGBTRS solves a system of linear equations or .br A(1:N, JA:JA+N-1)' * X = B(IB:IB+N-1, 1:NRHS) .br where A(1:N, JA:JA+N-1) is the matrix used to produce the factors stored in A(1:N,JA:JA+N-1) and AF by PCGBTRF. .br A(1:N, JA:JA+N-1) is an N-by-N complex .br banded distributed .br matrix with bandwidth BWL, BWU. .br Routine PCGBTRF MUST be called first. .br scalapack-doc-1.5/man/manl/pcgebd2.l0100644000056400000620000002224206335610614016730 0ustar pfrauenfstaff.TH PCGEBD2 l "12 May 1997" "LAPACK version 1.5" "LAPACK auxiliary routine (version 1.5)" .SH NAME PCGEBD2 - reduce a complex general M-by-N distributed matrix sub( A ) = A(IA:IA+M-1,JA:JA+N-1) to upper or lower bidiagonal form B by an unitary transformation .SH SYNOPSIS .TP 20 SUBROUTINE PCGEBD2( M, N, A, IA, JA, DESCA, D, E, TAUQ, TAUP, WORK, LWORK, INFO ) .TP 20 .ti +4 INTEGER IA, INFO, JA, LWORK, M, N .TP 20 .ti +4 INTEGER DESCA( * ) .TP 20 .ti +4 REAL D( * ), E( * ) .TP 20 .ti +4 COMPLEX A( * ), TAUP( * ), TAUQ( * ), WORK( * ) .SH PURPOSE PCGEBD2 reduces a complex general M-by-N distributed matrix sub( A ) = A(IA:IA+M-1,JA:JA+N-1) to upper or lower bidiagonal form B by an unitary transformation: Q' * sub( A ) * P = B. If M >= N, B is upper bidiagonal; if M < N, B is lower bidiagonal. Notes .br ===== .br Each global data object is described by an associated description vector. 
This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. 
.br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 M (global input) INTEGER The number of rows to be operated on, i.e. the number of rows of the distributed submatrix sub( A ). M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on, i.e. the number of columns of the distributed submatrix sub( A ). N >= 0. .TP 8 A (local input/local output) COMPLEX pointer into the local memory to an array of dimension (LLD_A,LOCc(JA+N-1)). On entry, this array contains the local pieces of the general distributed matrix sub( A ). On exit, if M >= N, the diagonal and the first superdiagonal of sub( A ) are overwritten with the upper bidiagonal matrix B; the elements below the diagonal, with the array TAUQ, represent the unitary matrix Q as a product of elementary reflectors, and the elements above the first superdiagonal, with the array TAUP, represent the orthogonal matrix P as a product of elementary reflectors. If M < N, the diagonal and the first subdiagonal are overwritten with the lower bidiagonal matrix B; the elements below the first subdiagonal, with the array TAUQ, represent the unitary matrix Q as a product of elementary reflectors, and the elements above the diagonal, with the array TAUP, represent the orthogonal matrix P as a product of elementary reflectors. See Further Details. IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. 
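The LOCr/LOCc formulas quoted above can be made concrete with a Python transcription of the NUMROC tool function (a sketch written from those formulas, not a substitute for the Fortran routine):

```python
def numroc(n, nb, iproc, isrcproc, nprocs):
    """Number of rows or columns of an n-element dimension, distributed
    in blocks of size nb over nprocs processes, that land on process
    iproc when process isrcproc holds the first block."""
    mydist = (nprocs + iproc - isrcproc) % nprocs  # distance from source
    nblocks = n // nb                              # complete blocks overall
    num = (nblocks // nprocs) * nb                 # full rounds of blocks
    extra = nblocks % nprocs
    if mydist < extra:
        num += nb            # one extra complete block
    elif mydist == extra:
        num += n % nb        # the trailing partial block, if any
    return num
```

Summing `numroc` over all processes in a grid dimension recovers the global extent, and each value respects the upper bound ceil(ceil(n/nb)/nprocs)*nb given above.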
.TP 8 D (local output) REAL array, dimension LOCc(JA+MIN(M,N)-1) if M >= N; LOCr(IA+MIN(M,N)-1) otherwise. The distributed diagonal elements of the bidiagonal matrix B: D(i) = A(i,i). D is tied to the distributed matrix A. .TP 8 E (local output) REAL array, dimension LOCr(IA+MIN(M,N)-1) if M >= N; LOCc(JA+MIN(M,N)-2) otherwise. The distributed off-diagonal elements of the bidiagonal distributed matrix B: if m >= n, E(i) = A(i,i+1) for i = 1,2,...,n-1; if m < n, E(i) = A(i+1,i) for i = 1,2,...,m-1. E is tied to the distributed matrix A. .TP 8 TAUQ (local output) COMPLEX array dimension LOCc(JA+MIN(M,N)-1). The scalar factors of the elementary reflectors which represent the unitary matrix Q. TAUQ is tied to the distributed matrix A. See Further Details. TAUP (local output) COMPLEX array, dimension LOCr(IA+MIN(M,N)-1). The scalar factors of the elementary reflectors which represent the unitary matrix P. TAUP is tied to the distributed matrix A. See Further Details. WORK (local workspace/local output) COMPLEX array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. LWORK is local input and must be at least LWORK >= MAX( MpA0, NqA0 ) where NB = MB_A = NB_A, IROFFA = MOD( IA-1, NB ) IAROW = INDXG2P( IA, NB, MYROW, RSRC_A, NPROW ), IACOL = INDXG2P( JA, NB, MYCOL, CSRC_A, NPCOL ), MpA0 = NUMROC( M+IROFFA, NB, MYROW, IAROW, NPROW ), NqA0 = NUMROC( N+IROFFA, NB, MYCOL, IACOL, NPCOL ). INDXG2P and NUMROC are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. 
.TP 8 INFO (local output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. .SH FURTHER DETAILS The matrices Q and P are represented as products of elementary reflectors: .br If m >= n, .br Q = H(1) H(2) . . . H(n) and P = G(1) G(2) . . . G(n-1) Each H(i) and G(i) has the form: .br H(i) = I - tauq * v * v' and G(i) = I - taup * u * u' where tauq and taup are complex scalars, and v and u are complex vectors; .br v(1:i-1) = 0, v(i) = 1, and v(i+1:m) is stored on exit in A(ia+i:ia+m-1,ja+i-1); .br u(1:i) = 0, u(i+1) = 1, and u(i+2:n) is stored on exit in A(ia+i-1,ja+i+1:ja+n-1); .br tauq is stored in TAUQ(ja+i-1) and taup in TAUP(ia+i-1). .br If m < n, .br Q = H(1) H(2) . . . H(m-1) and P = G(1) G(2) . . . G(m) Each H(i) and G(i) has the form: .br H(i) = I - tauq * v * v' and G(i) = I - taup * u * u' where tauq and taup are complex scalars, and v and u are complex vectors; .br v(1:i) = 0, v(i+1) = 1, and v(i+2:m) is stored on exit in A(ia+i+1:ia+m-1,ja+i-1); .br u(1:i-1) = 0, u(i) = 1, and u(i+1:n) is stored on exit in A(ia+i-1,ja+i:ja+n-1); .br tauq is stored in TAUQ(ja+i-1) and taup in TAUP(ia+i-1). .br The contents of sub( A ) on exit are illustrated by the following examples: .br m = 6 and n = 5 (m > n): m = 5 and n = 6 (m < n): ( d e u1 u1 u1 ) ( d u1 u1 u1 u1 u1 ) ( v1 d e u2 u2 ) ( e d u2 u2 u2 u2 ) ( v1 v2 d e u3 ) ( v1 e d u3 u3 u3 ) ( v1 v2 v3 d e ) ( v1 v2 e d u4 u4 ) ( v1 v2 v3 v4 d ) ( v1 v2 v3 e d u5 ) ( v1 v2 v3 v4 v5 ) .br where d and e denote diagonal and off-diagonal elements of B, vi denotes an element of the vector defining H(i), and ui an element of the vector defining G(i). .br Alignment requirements .br ====================== .br The distributed submatrix sub( A ) must verify some alignment proper- ties, namely the following expressions should be true: .br ( MB_A.EQ.NB_A .AND. 
IROFFA.EQ.ICOFFA ) .br scalapack-doc-1.5/man/manl/pcgebrd.l0100644000056400000620000002226206335610614017032 0ustar pfrauenfstaff.TH PCGEBRD l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PCGEBRD - reduce a complex general M-by-N distributed matrix sub( A ) = A(IA:IA+M-1,JA:JA+N-1) to upper or lower bidiagonal form B by an unitary transformation .SH SYNOPSIS .TP 20 SUBROUTINE PCGEBRD( M, N, A, IA, JA, DESCA, D, E, TAUQ, TAUP, WORK, LWORK, INFO ) .TP 20 .ti +4 INTEGER IA, INFO, JA, LWORK, M, N .TP 20 .ti +4 INTEGER DESCA( * ) .TP 20 .ti +4 REAL D( * ), E( * ) .TP 20 .ti +4 COMPLEX A( * ), TAUP( * ), TAUQ( * ), WORK( * ) .SH PURPOSE PCGEBRD reduces a complex general M-by-N distributed matrix sub( A ) = A(IA:IA+M-1,JA:JA+N-1) to upper or lower bidiagonal form B by an unitary transformation: Q' * sub( A ) * P = B. If M >= N, B is upper bidiagonal; if M < N, B is lower bidiagonal. Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. 
.br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 M (global input) INTEGER The number of rows to be operated on, i.e. the number of rows of the distributed submatrix sub( A ). M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on, i.e. the number of columns of the distributed submatrix sub( A ). N >= 0. .TP 8 A (local input/local output) COMPLEX pointer into the local memory to an array of dimension (LLD_A,LOCc(JA+N-1)). On entry, this array contains the local pieces of the general distributed matrix sub( A ). 
On exit, if M >= N, the diagonal and the first superdiagonal of sub( A ) are overwritten with the upper bidiagonal matrix B; the elements below the diagonal, with the array TAUQ, represent the unitary matrix Q as a product of elementary reflectors, and the elements above the first superdiagonal, with the array TAUP, represent the orthogonal matrix P as a product of elementary reflectors. If M < N, the diagonal and the first subdiagonal are overwritten with the lower bidiagonal matrix B; the elements below the first subdiagonal, with the array TAUQ, represent the unitary matrix Q as a product of elementary reflectors, and the elements above the diagonal, with the array TAUP, represent the orthogonal matrix P as a product of elementary reflectors. See Further Details. IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 D (local output) REAL array, dimension LOCc(JA+MIN(M,N)-1) if M >= N; LOCr(IA+MIN(M,N)-1) otherwise. The distributed diagonal elements of the bidiagonal matrix B: D(i) = A(i,i). D is tied to the distributed matrix A. .TP 8 E (local output) REAL array, dimension LOCr(IA+MIN(M,N)-1) if M >= N; LOCc(JA+MIN(M,N)-2) otherwise. The distributed off-diagonal elements of the bidiagonal distributed matrix B: if m >= n, E(i) = A(i,i+1) for i = 1,2,...,n-1; if m < n, E(i) = A(i+1,i) for i = 1,2,...,m-1. E is tied to the distributed matrix A. .TP 8 TAUQ (local output) COMPLEX array dimension LOCc(JA+MIN(M,N)-1). The scalar factors of the elementary reflectors which represent the unitary matrix Q. TAUQ is tied to the distributed matrix A. See Further Details. TAUP (local output) COMPLEX array, dimension LOCr(IA+MIN(M,N)-1). 
The scalar factors of the elementary reflectors which represent the unitary matrix P. TAUP is tied to the distributed matrix A. See Further Details. WORK (local workspace/local output) COMPLEX array, dimension (LWORK) On exit, WORK( 1 ) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. LWORK is local input and must be at least LWORK >= NB*( MpA0 + NqA0 + 1 ) + NqA0 where NB = MB_A = NB_A, IROFFA = MOD( IA-1, NB ), ICOFFA = MOD( JA-1, NB ), IAROW = INDXG2P( IA, NB, MYROW, RSRC_A, NPROW ), IACOL = INDXG2P( JA, NB, MYCOL, CSRC_A, NPCOL ), MpA0 = NUMROC( M+IROFFA, NB, MYROW, IAROW, NPROW ), NqA0 = NUMROC( N+ICOFFA, NB, MYCOL, IACOL, NPCOL ). INDXG2P and NUMROC are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 INFO (global output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. .SH FURTHER DETAILS The matrices Q and P are represented as products of elementary reflectors: .br If m >= n, .br Q = H(1) H(2) . . . H(n) and P = G(1) G(2) . . . G(n-1) Each H(i) and G(i) has the form: .br H(i) = I - tauq * v * v' and G(i) = I - taup * u * u' where tauq and taup are complex scalars, and v and u are complex vectors; .br v(1:i-1) = 0, v(i) = 1, and v(i+1:m) is stored on exit in A(ia+i:ia+m-1,ja+i-1); .br u(1:i) = 0, u(i+1) = 1, and u(i+2:n) is stored on exit in A(ia+i-1,ja+i+1:ja+n-1); .br tauq is stored in TAUQ(ja+i-1) and taup in TAUP(ia+i-1). .br If m < n, .br Q = H(1) H(2) . . . 
H(m-1) and P = G(1) G(2) . . . G(m) Each H(i) and G(i) has the form: .br H(i) = I - tauq * v * v' and G(i) = I - taup * u * u' where tauq and taup are complex scalars, and v and u are complex vectors; .br v(1:i) = 0, v(i+1) = 1, and v(i+2:m) is stored on exit in A(ia+i+1:ia+m-1,ja+i-1); .br u(1:i-1) = 0, u(i) = 1, and u(i+1:n) is stored on exit in A(ia+i-1,ja+i:ja+n-1); .br tauq is stored in TAUQ(ja+i-1) and taup in TAUP(ia+i-1). .br The contents of sub( A ) on exit are illustrated by the following examples: .br m = 6 and n = 5 (m > n): m = 5 and n = 6 (m < n): ( d e u1 u1 u1 ) ( d u1 u1 u1 u1 u1 ) ( v1 d e u2 u2 ) ( e d u2 u2 u2 u2 ) ( v1 v2 d e u3 ) ( v1 e d u3 u3 u3 ) ( v1 v2 v3 d e ) ( v1 v2 e d u4 u4 ) ( v1 v2 v3 v4 d ) ( v1 v2 v3 e d u5 ) ( v1 v2 v3 v4 v5 ) .br where d and e denote diagonal and off-diagonal elements of B, vi denotes an element of the vector defining H(i), and ui an element of the vector defining G(i). .br Alignment requirements .br ====================== .br The distributed submatrix sub( A ) must verify some alignment proper- ties, namely the following expressions should be true: .br ( MB_A.EQ.NB_A .AND. 
IROFFA.EQ.ICOFFA ) .br scalapack-doc-1.5/man/manl/pcgecon.l0100644000056400000620000001541706335610614017046 0ustar pfrauenfstaff.TH PCGECON l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PCGECON - estimate the reciprocal of the condition number of a general distributed complex matrix A(IA:IA+N-1,JA:JA+N-1), in either the 1-norm or the infinity-norm, using the LU factorization computed by PCGETRF .SH SYNOPSIS .TP 20 SUBROUTINE PCGECON( NORM, N, A, IA, JA, DESCA, ANORM, RCOND, WORK, LWORK, RWORK, LRWORK, INFO ) .TP 20 .ti +4 CHARACTER NORM .TP 20 .ti +4 INTEGER IA, INFO, JA, LRWORK, LWORK, N .TP 20 .ti +4 REAL ANORM, RCOND .TP 20 .ti +4 INTEGER DESCA( * ) .TP 20 .ti +4 REAL RWORK( * ) .TP 20 .ti +4 COMPLEX A( * ), WORK( * ) .SH PURPOSE PCGECON estimates the reciprocal of the condition number of a general distributed complex matrix A(IA:IA+N-1,JA:JA+N-1), in either the 1-norm or the infinity-norm, using the LU factorization computed by PCGETRF. An estimate is obtained for norm(inv(A(IA:IA+N-1,JA:JA+N-1))), and the reciprocal of the condition number is computed as .br RCOND = 1 / ( norm( A(IA:IA+N-1,JA:JA+N-1) ) * norm( inv(A(IA:IA+N-1,JA:JA+N-1)) ) ). Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. 
The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 NORM (global input) CHARACTER Specifies whether the 1-norm condition number or the infinity-norm condition number is required: .br = '1' or 'O': 1-norm .br = 'I': Infinity-norm .TP 8 N (global input) INTEGER .br The order of the distributed matrix A(IA:IA+N-1,JA:JA+N-1). N >= 0. .TP 8 A (local input) COMPLEX pointer into the local memory to an array of dimension ( LLD_A, LOCc(JA+N-1) ). 
On entry, this array contains the local pieces of the factors L and U from the factorization A(IA:IA+N-1,JA:JA+N-1) = P*L*U; the unit diagonal elements of L are not stored. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 ANORM (global input) REAL If NORM = '1' or 'O', the 1-norm of the original distributed matrix A(IA:IA+N-1,JA:JA+N-1). If NORM = 'I', the infinity-norm of the original distributed matrix A(IA:IA+N-1,JA:JA+N-1). .TP 8 RCOND (global output) REAL The reciprocal of the condition number of the distributed matrix A(IA:IA+N-1,JA:JA+N-1), computed as .br RCOND = 1 / ( norm( A(IA:IA+N-1,JA:JA+N-1) ) * .br norm( inv(A(IA:IA+N-1,JA:JA+N-1)) ) ). .TP 8 WORK (local workspace/local output) COMPLEX array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. LWORK is local input and must be at least LWORK >= 2*LOCr(N+MOD(IA-1,MB_A)) + MAX( 2, MAX(NB_A*CEIL(NPROW-1,NPCOL),LOCc(N+MOD(JA-1,NB_A)) + NB_A*CEIL(NPCOL-1,NPROW)) ). LOCr and LOCc values can be computed using the ScaLAPACK tool function NUMROC; NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 RWORK (local workspace/local output) REAL array, dimension (LRWORK) On exit, RWORK(1) returns the minimal and optimal LRWORK. .TP 8 LRWORK (local or global input) INTEGER The dimension of the array RWORK. 
LRWORK is local input and must be at least LRWORK >= 2*LOCc(N+MOD(JA-1,NB_A)). If LRWORK = -1, then LRWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 INFO (global output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. .\" scalapack-doc-1.5/man/manl/pcgeequ.l .TH PCGEEQU l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PCGEEQU - compute row and column scalings intended to equilibrate an M-by-N distributed matrix sub( A ) = A(IA:IA+M-1,JA:JA+N-1) and reduce its condition number .SH SYNOPSIS .TP 20 SUBROUTINE PCGEEQU( M, N, A, IA, JA, DESCA, R, C, ROWCND, COLCND, AMAX, INFO ) .TP 20 .ti +4 INTEGER IA, INFO, JA, M, N .TP 20 .ti +4 REAL AMAX, COLCND, ROWCND .TP 20 .ti +4 INTEGER DESCA( * ) .TP 20 .ti +4 REAL C( * ), R( * ) .TP 20 .ti +4 COMPLEX A( * ) .SH PURPOSE PCGEEQU computes row and column scalings intended to equilibrate an M-by-N distributed matrix sub( A ) = A(IA:IA+M-1,JA:JA+N-1) and reduce its condition number. R returns the row scale factors and C the column scale factors, chosen to try to make the largest entry in each row and column of the distributed matrix B with elements B(i,j) = R(i) * A(i,j) * C(j) have absolute value 1. .br R(i) and C(j) are restricted to be between SMLNUM = smallest safe number and BIGNUM = largest safe number. Use of these scaling factors is not guaranteed to reduce the condition number of sub( A ) but works well in practice. .br Notes .br ===== .br Each global data object is described by an associated description vector.
This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclically distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ ) The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distributed over. The context itself is global, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row.
.br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 M (global input) INTEGER The number of rows to be operated on, i.e. the number of rows of the distributed submatrix sub( A ). M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on, i.e. the number of columns of the distributed submatrix sub( A ). N >= 0. .TP 8 A (local input) COMPLEX pointer into the local memory to an array of dimension ( LLD_A, LOCc(JA+N-1) ), the local pieces of the M-by-N distributed matrix whose equilibration factors are to be computed. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 R (local output) REAL array, dimension LOCr(M_A) If INFO = 0 or INFO > IA+M-1, R(IA:IA+M-1) contains the row scale factors for sub( A ). R is aligned with the distributed matrix A, and replicated across every process column. R is tied to the distributed matrix A. .TP 8 C (local output) REAL array, dimension LOCc(N_A) If INFO = 0, C(JA:JA+N-1) contains the column scale factors for sub( A ). C is aligned with the distributed matrix A, and replicated down every process row. C is tied to the distributed matrix A. .TP 8 ROWCND (global output) REAL If INFO = 0 or INFO > IA+M-1, ROWCND contains the ratio of the smallest R(i) to the largest R(i) (IA <= i <= IA+M-1). If ROWCND >= 0.1 and AMAX is neither too large nor too small, it is not worth scaling by R(IA:IA+M-1).
.TP 8 COLCND (global output) REAL If INFO = 0, COLCND contains the ratio of the smallest C(j) to the largest C(j) (JA <= j <= JA+N-1). If COLCND >= 0.1, it is not worth scaling by C(JA:JA+N-1). .TP 8 AMAX (global output) REAL Absolute value of largest distributed matrix element. If AMAX is very close to overflow or very close to underflow, the matrix should be scaled. .TP 8 INFO (global output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. > 0: If INFO = i, and i is .br <= M: the i-th row of the distributed matrix sub( A ) is exactly zero, > M: the (i-M)-th column of the distributed matrix sub( A ) is exactly zero. .\" scalapack-doc-1.5/man/manl/pcgehd2.l .TH PCGEHD2 l "12 May 1997" "LAPACK version 1.5" "LAPACK auxiliary routine (version 1.5)" .SH NAME PCGEHD2 - reduce a complex general distributed matrix sub( A ) to upper Hessenberg form H by a unitary similarity transformation .SH SYNOPSIS .TP 20 SUBROUTINE PCGEHD2( N, ILO, IHI, A, IA, JA, DESCA, TAU, WORK, LWORK, INFO ) .TP 20 .ti +4 INTEGER IA, IHI, ILO, INFO, JA, LWORK, N .TP 20 .ti +4 INTEGER DESCA( * ) .TP 20 .ti +4 COMPLEX A( * ), TAU( * ), WORK( * ) .SH PURPOSE PCGEHD2 reduces a complex general distributed matrix sub( A ) to upper Hessenberg form H by a unitary similarity transformation: Q' * sub( A ) * Q = H, where .br sub( A ) = A(IA:IA+N-1,JA:JA+N-1). .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclically distributed array. Such a global array has an associated description vector DESCA.
In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ ) The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distributed over. The context itself is global, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ).
An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 N (global input) INTEGER The number of rows and columns to be operated on, i.e. the order of the distributed submatrix sub( A ). N >= 0. .TP 8 ILO (global input) INTEGER IHI (global input) INTEGER It is assumed that sub( A ) is already upper triangular in rows IA:IA+ILO-2 and IA+IHI:IA+N-1 and columns JA:JA+ILO-2 and JA+IHI:JA+N-1. See Further Details. If N > 0, 1 <= ILO <= IHI <= N. .TP 8 A (local input/local output) COMPLEX pointer into the local memory to an array of dimension (LLD_A,LOCc(JA+N-1)). On entry, this array contains the local pieces of the N-by-N general distributed matrix sub( A ) to be reduced. On exit, the upper triangle and the first subdiagonal of sub( A ) are overwritten with the upper Hessenberg matrix H, and the elements below the first subdiagonal, with the array TAU, represent the unitary matrix Q as a product of elementary reflectors. See Further Details. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 TAU (local output) COMPLEX array, dimension LOCc(JA+N-2) The scalar factors of the elementary reflectors (see Further Details). Elements JA:JA+ILO-2 and JA+IHI:JA+N-2 of TAU are set to zero. TAU is tied to the distributed matrix A. .TP 8 WORK (local workspace/local output) COMPLEX array, dimension (LWORK) On exit, WORK( 1 ) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK.
LWORK is local input and must be at least LWORK >= NB + MAX( NpA0, NB ) where NB = MB_A = NB_A, IROFFA = MOD( IA-1, NB ), IAROW = INDXG2P( IA, NB, MYROW, RSRC_A, NPROW ), NpA0 = NUMROC( IHI+IROFFA, NB, MYROW, IAROW, NPROW ), INDXG2P and NUMROC are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 INFO (local output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. .SH FURTHER DETAILS The matrix Q is represented as a product of (ihi-ilo) elementary reflectors .br Q = H(ilo) H(ilo+1) . . . H(ihi-1). .br Each H(i) has the form .br H(i) = I - tau * v * v' .br where tau is a complex scalar, and v is a complex vector with v(1:i) = 0, v(i+1) = 1 and v(ihi+1:n) = 0; v(i+2:ihi) is stored on exit in A(ia+ilo+i:ia+ihi-1,ja+ilo+i-2), and tau in TAU(ja+ilo+i-2). The contents of A(IA:IA+N-1,JA:JA+N-1) are illustrated by the following example, with n = 7, ilo = 2 and ihi = 6: .br on entry on exit .br ( a a a a a a a ) ( a a h h h h a ) ( a a a a a a ) ( a h h h h a ) ( a a a a a a ) ( h h h h h h ) ( a a a a a a ) ( v2 h h h h h ) ( a a a a a a ) ( v2 v3 h h h h ) ( a a a a a a ) ( v2 v3 v4 h h h ) ( a ) ( a ) where a denotes an element of the original matrix sub( A ), h denotes a modified element of the upper Hessenberg matrix H, and vi denotes an element of the vector defining H(ja+ilo+i-2).
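The reflector form H(i) = I - tau * v * v' described above can be checked numerically. The sketch below is illustrative Python in real arithmetic with the standard Householder choice tau = 2/(v'v); it is not ScaLAPACK code, and the helper names are made up for the example.

```python
# Illustrative check (not ScaLAPACK code): an elementary reflector
# H = I - tau * v * v' with tau = 2/(v'v) is its own inverse, which is
# why the product Q = H(ilo) H(ilo+1) ... H(ihi-1) above is unitary.

def reflector(v):
    """Return H = I - tau * v * v' as a list of rows, tau = 2/(v'v)."""
    n = len(v)
    tau = 2.0 / sum(x * x for x in v)
    return [[(1.0 if i == j else 0.0) - tau * v[i] * v[j]
             for j in range(n)] for i in range(n)]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

# v(i+1) = 1 with the remaining entries free, as in the convention above
v = [1.0, 0.5, -0.25]
h = reflector(v)
hh = matmul(h, h)  # should be the identity
```

In complex arithmetic the same identity holds with v' replaced by the conjugate transpose and tau chosen as LAPACK's CLARFG does.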
.br Alignment requirements .br ====================== .br The distributed submatrix sub( A ) must verify some alignment properties, namely the following expression should be true: .br ( MB_A.EQ.NB_A .AND. IROFFA.EQ.ICOFFA ) .br .\" scalapack-doc-1.5/man/manl/pcgehrd.l .TH PCGEHRD l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PCGEHRD - reduce a complex general distributed matrix sub( A ) to upper Hessenberg form H by a unitary similarity transformation .SH SYNOPSIS .TP 20 SUBROUTINE PCGEHRD( N, ILO, IHI, A, IA, JA, DESCA, TAU, WORK, LWORK, INFO ) .TP 20 .ti +4 INTEGER IA, IHI, ILO, INFO, JA, LWORK, N .TP 20 .ti +4 INTEGER DESCA( * ) .TP 20 .ti +4 COMPLEX A( * ), TAU( * ), WORK( * ) .SH PURPOSE PCGEHRD reduces a complex general distributed matrix sub( A ) to upper Hessenberg form H by a unitary similarity transformation: Q' * sub( A ) * Q = H, where .br sub( A ) = A(IA:IA+N-1,JA:JA+N-1). .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclically distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ ) The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distributed over. The context itself is global, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A.
.br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 N (global input) INTEGER The number of rows and columns to be operated on, i.e. the order of the distributed submatrix sub( A ). N >= 0. .TP 8 ILO (global input) INTEGER IHI (global input) INTEGER It is assumed that sub( A ) is already upper triangular in rows IA:IA+ILO-2 and IA+IHI:IA+N-1 and columns JA:JA+ILO-2 and JA+IHI:JA+N-1. See Further Details. If N > 0, 1 <= ILO <= IHI <= N. .TP 8 A (local input/local output) COMPLEX pointer into the local memory to an array of dimension (LLD_A,LOCc(JA+N-1)). On entry, this array contains the local pieces of the N-by-N general distributed matrix sub( A ) to be reduced.
On exit, the upper triangle and the first subdiagonal of sub( A ) are overwritten with the upper Hessenberg matrix H, and the elements below the first subdiagonal, with the array TAU, represent the unitary matrix Q as a product of elementary reflectors. See Further Details. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 TAU (local output) COMPLEX array, dimension LOCc(JA+N-2) The scalar factors of the elementary reflectors (see Further Details). Elements JA:JA+ILO-2 and JA+IHI:JA+N-2 of TAU are set to zero. TAU is tied to the distributed matrix A. .TP 8 WORK (local workspace/local output) COMPLEX array, dimension (LWORK) On exit, WORK( 1 ) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. LWORK is local input and must be at least LWORK >= NB*NB + NB*MAX( IHIP+1, IHLP+INLQ ) where NB = MB_A = NB_A, IROFFA = MOD( IA-1, NB ), ICOFFA = MOD( JA-1, NB ), IOFF = MOD( IA+ILO-2, NB ), IAROW = INDXG2P( IA, NB, MYROW, RSRC_A, NPROW ), IHIP = NUMROC( IHI+IROFFA, NB, MYROW, IAROW, NPROW ), ILROW = INDXG2P( IA+ILO-1, NB, MYROW, RSRC_A, NPROW ), IHLP = NUMROC( IHI-ILO+IOFF+1, NB, MYROW, ILROW, NPROW ), ILCOL = INDXG2P( JA+ILO-1, NB, MYCOL, CSRC_A, NPCOL ), INLQ = NUMROC( N-ILO+IOFF+1, NB, MYCOL, ILCOL, NPCOL ), INDXG2P and NUMROC are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA.
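The LWORK lower bound above is plain integer arithmetic on the grid parameters. The following Python sketch re-derives it from the standard definitions of the NUMROC and INDXG2P tool functions; it is illustrative, not the library itself (in particular, INDXG2P's unused process-coordinate argument is dropped, and `pcgehrd_lwork` is a hypothetical helper name).

```python
# Illustrative re-derivation of the PCGEHRD LWORK lower bound.

def numroc(n, nb, iproc, isrcproc, nprocs):
    """Rows/columns of an n-long dimension owned by process iproc."""
    mydist = (nprocs + iproc - isrcproc) % nprocs
    nblocks = n // nb                  # number of full blocks
    num = (nblocks // nprocs) * nb     # blocks every process gets
    extra = nblocks % nprocs
    if mydist < extra:
        num += nb                      # one extra full block
    elif mydist == extra:
        num += n % nb                  # the trailing partial block
    return num

def indxg2p(ig, nb, isrcproc, nprocs):
    """Process coordinate owning global index ig (1-based)."""
    return (isrcproc + (ig - 1) // nb) % nprocs

def pcgehrd_lwork(n, ilo, ihi, ia, ja, nb, rsrc_a, csrc_a,
                  myrow, mycol, nprow, npcol):
    iroffa = (ia - 1) % nb
    ioff = (ia + ilo - 2) % nb
    iarow = indxg2p(ia, nb, rsrc_a, nprow)
    ihip = numroc(ihi + iroffa, nb, myrow, iarow, nprow)
    ilrow = indxg2p(ia + ilo - 1, nb, rsrc_a, nprow)
    ihlp = numroc(ihi - ilo + ioff + 1, nb, myrow, ilrow, nprow)
    ilcol = indxg2p(ja + ilo - 1, nb, csrc_a, npcol)
    inlq = numroc(n - ilo + ioff + 1, nb, mycol, ilcol, npcol)
    return nb * nb + nb * max(ihip + 1, ihlp + inlq)
```

In practice one would instead call the routine with LWORK = -1 and read the optimal size back from WORK(1), as the page describes.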
.TP 8 INFO (global output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. .SH FURTHER DETAILS The matrix Q is represented as a product of (ihi-ilo) elementary reflectors .br Q = H(ilo) H(ilo+1) . . . H(ihi-1). .br Each H(i) has the form .br H(i) = I - tau * v * v' .br where tau is a complex scalar, and v is a complex vector with v(1:I) = 0, v(I+1) = 1 and v(IHI+1:N) = 0; v(I+2:IHI) is stored on exit in A(IA+ILO+I:IA+IHI-1,JA+ILO+I-2), and tau in TAU(JA+ILO+I-2). The contents of A(IA:IA+N-1,JA:JA+N-1) are illustrated by the following example, with N = 7, ILO = 2 and IHI = 6: .br on entry on exit .br ( a a a a a a a ) ( a a h h h h a ) ( a a a a a a ) ( a h h h h a ) ( a a a a a a ) ( h h h h h h ) ( a a a a a a ) ( v2 h h h h h ) ( a a a a a a ) ( v2 v3 h h h h ) ( a a a a a a ) ( v2 v3 v4 h h h ) ( a ) ( a ) where a denotes an element of the original matrix sub( A ), h denotes a modified element of the upper Hessenberg matrix H, and vi denotes an element of the vector defining H(JA+ILO+I-2). .br Alignment requirements .br ====================== .br The distributed submatrix sub( A ) must verify some alignment properties, namely the following expression should be true: .br ( MB_A.EQ.NB_A .AND.
IROFFA.EQ.ICOFFA ) .br .\" scalapack-doc-1.5/man/manl/pcgelq2.l .TH PCGELQ2 l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PCGELQ2 - compute an LQ factorization of a complex distributed M-by-N matrix sub( A ) = A(IA:IA+M-1,JA:JA+N-1) = L * Q .SH SYNOPSIS .TP 20 SUBROUTINE PCGELQ2( M, N, A, IA, JA, DESCA, TAU, WORK, LWORK, INFO ) .TP 20 .ti +4 INTEGER IA, INFO, JA, LWORK, M, N .TP 20 .ti +4 INTEGER DESCA( * ) .TP 20 .ti +4 COMPLEX A( * ), TAU( * ), WORK( * ) .SH PURPOSE PCGELQ2 computes an LQ factorization of a complex distributed M-by-N matrix sub( A ) = A(IA:IA+M-1,JA:JA+N-1) = L * Q. Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclically distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ ) The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distributed over. The context itself is global, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed.
CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 M (global input) INTEGER The number of rows to be operated on, i.e. the number of rows of the distributed submatrix sub( A ). M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on, i.e. the number of columns of the distributed submatrix sub( A ). N >= 0. .TP 8 A (local input/local output) COMPLEX pointer into the local memory to an array of dimension (LLD_A, LOCc(JA+N-1)). On entry, the local pieces of the M-by-N distributed matrix sub( A ) which is to be factored. On exit, the elements on and below the diagonal of sub( A ) contain the M by min(M,N) lower trapezoidal matrix L (L is lower triangular if M <= N); the elements above the diagonal, with the array TAU, represent the unitary matrix Q as a product of elementary reflectors (see Further Details). .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ).
.TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 TAU (local output) COMPLEX, array, dimension LOCr(IA+MIN(M,N)-1). This array contains the scalar factors of the elementary reflectors. TAU is tied to the distributed matrix A. .TP 8 WORK (local workspace/local output) COMPLEX array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. LWORK is local input and must be at least LWORK >= Nq0 + MAX( 1, Mp0 ), where IROFF = MOD( IA-1, MB_A ), ICOFF = MOD( JA-1, NB_A ), IAROW = INDXG2P( IA, MB_A, MYROW, RSRC_A, NPROW ), IACOL = INDXG2P( JA, NB_A, MYCOL, CSRC_A, NPCOL ), Mp0 = NUMROC( M+IROFF, MB_A, MYROW, IAROW, NPROW ), Nq0 = NUMROC( N+ICOFF, NB_A, MYCOL, IACOL, NPCOL ), and NUMROC, INDXG2P are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 INFO (local output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. .SH FURTHER DETAILS The matrix Q is represented as a product of elementary reflectors Q = H(ia+k-1)' H(ia+k-2)' . . . H(ia)', where k = min(m,n). Each H(i) has the form .br H(i) = I - tau * v * v' .br where tau is a complex scalar, and v is a complex vector with v(1:i-1) = 0 and v(i) = 1; conjg(v(i+1:n)) is stored on exit in A(ia+i-1,ja+i:ja+n-1), and tau in TAU(ia+i-1). 
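Every page in this set bounds LOCr and LOCc by ceil( ceil(K/NB)/NPROCS )*NB. That bound can be checked by brute force against a direct enumeration of the block-cyclic assignment; the sketch below is illustrative Python, not ScaLAPACK code, and `owned_count`/`upper_bound` are made-up helper names.

```python
import math

def owned_count(n, nb, iproc, isrcproc, nprocs):
    # Count global indices 1..n whose block lands on process iproc
    # under a 1D block-cyclic distribution with block size nb.
    return sum(1 for ig in range(1, n + 1)
               if (isrcproc + (ig - 1) // nb) % nprocs == iproc)

def upper_bound(n, nb, nprocs):
    # The bound quoted in the Notes: ceil( ceil(n/nb)/nprocs )*nb
    return math.ceil(math.ceil(n / nb) / nprocs) * nb

# Exhaustive check over a range of sizes, block sizes and grid sizes.
checks = [owned_count(n, nb, p, 0, nprocs) <= upper_bound(n, nb, nprocs)
          for n in range(1, 40)
          for nb in (1, 2, 3, 5)
          for nprocs in (1, 2, 3)
          for p in range(nprocs)]
```

The bound is attained by the process holding the most blocks, which is why LLD_A >= MAX(1,LOCr(M_A)) is often allocated using it when the exact NUMROC value is inconvenient to compute.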
.br .\" scalapack-doc-1.5/man/manl/pcgelqf.l .TH PCGELQF l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PCGELQF - compute an LQ factorization of a complex distributed M-by-N matrix sub( A ) = A(IA:IA+M-1,JA:JA+N-1) = L * Q .SH SYNOPSIS .TP 20 SUBROUTINE PCGELQF( M, N, A, IA, JA, DESCA, TAU, WORK, LWORK, INFO ) .TP 20 .ti +4 INTEGER IA, INFO, JA, LWORK, M, N .TP 20 .ti +4 INTEGER DESCA( * ) .TP 20 .ti +4 COMPLEX A( * ), TAU( * ), WORK( * ) .SH PURPOSE PCGELQF computes an LQ factorization of a complex distributed M-by-N matrix sub( A ) = A(IA:IA+M-1,JA:JA+N-1) = L * Q. Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclically distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ ) The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distributed over. The context itself is global, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed.
CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 M (global input) INTEGER The number of rows to be operated on, i.e. the number of rows of the distributed submatrix sub( A ). M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on, i.e. the number of columns of the distributed submatrix sub( A ). N >= 0. .TP 8 A (local input/local output) COMPLEX pointer into the local memory to an array of dimension (LLD_A, LOCc(JA+N-1)). On entry, the local pieces of the M-by-N distributed matrix sub( A ) which is to be factored. On exit, the elements on and below the diagonal of sub( A ) contain the M by min(M,N) lower trapezoidal matrix L (L is lower triangular if M <= N); the elements above the diagonal, with the array TAU, represent the unitary matrix Q as a product of elementary reflectors (see Further Details). .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ).
.TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 TAU (local output) COMPLEX, array, dimension LOCr(IA+MIN(M,N)-1). This array contains the scalar factors of the elementary reflectors. TAU is tied to the distributed matrix A. .TP 8 WORK (local workspace/local output) COMPLEX array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. LWORK is local input and must be at least LWORK >= MB_A * ( Mp0 + Nq0 + MB_A ), where IROFF = MOD( IA-1, MB_A ), ICOFF = MOD( JA-1, NB_A ), IAROW = INDXG2P( IA, MB_A, MYROW, RSRC_A, NPROW ), IACOL = INDXG2P( JA, NB_A, MYCOL, CSRC_A, NPCOL ), Mp0 = NUMROC( M+IROFF, MB_A, MYROW, IAROW, NPROW ), Nq0 = NUMROC( N+ICOFF, NB_A, MYCOL, IACOL, NPCOL ), and NUMROC, INDXG2P are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 INFO (global output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. .SH FURTHER DETAILS The matrix Q is represented as a product of elementary reflectors Q = H(ia+k-1)' H(ia+k-2)' . . . H(ia)', where k = min(m,n). 
Each H(i) has the form .br H(i) = I - tau * v * v' .br where tau is a complex scalar, and v is a complex vector with v(1:i-1) = 0 and v(i) = 1; conjg(v(i+1:n)) is stored on exit in A(ia+i-1,ja+i:ja+n-1), and tau in TAU(ia+i-1). .br .\" scalapack-doc-1.5/man/manl/pcgels.l .TH PCGELS l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PCGELS - solve overdetermined or underdetermined complex linear systems involving an M-by-N matrix sub( A ) = A(IA:IA+M-1,JA:JA+N-1), .SH SYNOPSIS .TP 19 SUBROUTINE PCGELS( TRANS, M, N, NRHS, A, IA, JA, DESCA, B, IB, JB, DESCB, WORK, LWORK, INFO ) .TP 19 .ti +4 CHARACTER TRANS .TP 19 .ti +4 INTEGER IA, IB, INFO, JA, JB, LWORK, M, N, NRHS .TP 19 .ti +4 INTEGER DESCA( * ), DESCB( * ) .TP 19 .ti +4 COMPLEX A( * ), B( * ), WORK( * ) .SH PURPOSE PCGELS solves overdetermined or underdetermined complex linear systems involving an M-by-N matrix sub( A ) = A(IA:IA+M-1,JA:JA+N-1), or its conjugate-transpose, using a QR or LQ factorization of sub( A ). It is assumed that sub( A ) has full rank. .br The following options are provided: .br 1. If TRANS = 'N' and m >= n: find the least squares solution of an overdetermined system, i.e., solve the least squares problem minimize || sub( B ) - sub( A )*X ||. .br 2. If TRANS = 'N' and m < n: find the minimum norm solution of an underdetermined system sub( A ) * X = sub( B ). .br 3. If TRANS = 'C' and m >= n: find the minimum norm solution of an underdetermined system sub( A )**H * X = sub( B ). .br 4. If TRANS = 'C' and m < n: find the least squares solution of an overdetermined system, i.e., solve the least squares problem minimize || sub( B ) - sub( A )**H * X ||. where sub( B ) denotes B( IB:IB+M-1, JB:JB+NRHS-1 ) when TRANS = 'N' and B( IB:IB+N-1, JB:JB+NRHS-1 ) otherwise.
Several right hand side vectors b and solution vectors x can be handled in a single call; When TRANS = 'N', the solution vectors are stored as the columns of the N-by-NRHS right hand side matrix sub( B ) and the M-by-NRHS right hand side matrix sub( B ) otherwise. .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. 
.br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 TRANS (global input) CHARACTER = 'N': the linear system involves sub( A ); .br = 'C': the linear system involves sub( A )**H. .TP 8 M (global input) INTEGER The number of rows to be operated on, i.e. the number of rows of the distributed submatrix sub( A ). M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on, i.e. the number of columns of the distributed submatrix sub( A ). N >= 0. .TP 8 NRHS (global input) INTEGER The number of right hand sides, i.e. the number of columns of the distributed submatrices sub( B ) and X. NRHS >= 0. .TP 8 A (local input/local output) COMPLEX pointer into the local memory to an array of local dimension ( LLD_A, LOCc(JA+N-1) ). On entry, the M-by-N matrix A. if M >= N, sub( A ) is overwritten by details of its QR factorization as returned by PCGEQRF; if M < N, sub( A ) is overwritten by details of its LQ factorization as returned by PCGELQF. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. 
.TP 8 B (local input/local output) COMPLEX pointer into the local memory to an array of local dimension (LLD_B, LOCc(JB+NRHS-1)). On entry, this array contains the local pieces of the distributed matrix B of right hand side vectors, stored columnwise; sub( B ) is M-by-NRHS if TRANS='N', and N-by-NRHS otherwise. On exit, sub( B ) is overwritten by the solution vectors, stored columnwise: if TRANS = 'N' and M >= N, rows 1 to N of sub( B ) contain the least squares solution vectors; the residual sum of squares for the solution in each column is given by the sum of squares of elements N+1 to M in that column; if TRANS = 'N' and M < N, rows 1 to N of sub( B ) contain the minimum norm solution vectors; if TRANS = 'C' and M >= N, rows 1 to M of sub( B ) contain the minimum norm solution vectors; if TRANS = 'C' and M < N, rows 1 to M of sub( B ) contain the least squares solution vectors; the residual sum of squares for the solution in each column is given by the sum of squares of elements M+1 to N in that column. .TP 8 IB (global input) INTEGER The row index in the global array B indicating the first row of sub( B ). .TP 8 JB (global input) INTEGER The column index in the global array B indicating the first column of sub( B ). .TP 8 DESCB (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix B. .TP 8 WORK (local workspace/local output) COMPLEX array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. 
LWORK is local input and must be at least LWORK >= LTAU + MAX( LWF, LWS ) where If M >= N, then LTAU = NUMROC( JA+MIN(M,N)-1, NB_A, MYCOL, CSRC_A, NPCOL ), LWF = NB_A * ( MpA0 + NqA0 + NB_A ) LWS = MAX( (NB_A*(NB_A-1))/2, (NRHSqB0 + MpB0)*NB_A ) + NB_A * NB_A Else LTAU = NUMROC( IA+MIN(M,N)-1, MB_A, MYROW, RSRC_A, NPROW ), LWF = MB_A * ( MpA0 + NqA0 + MB_A ) LWS = MAX( (MB_A*(MB_A-1))/2, ( NpB0 + MAX( NqA0 + NUMROC( NUMROC( N+IROFFB, MB_A, 0, 0, NPROW ), MB_A, 0, 0, LCMP ), NRHSqB0 ) )*MB_A ) + MB_A * MB_A End if where LCMP = LCM / NPROW with LCM = ILCM( NPROW, NPCOL ), IROFFA = MOD( IA-1, MB_A ), ICOFFA = MOD( JA-1, NB_A ), IAROW = INDXG2P( IA, MB_A, MYROW, RSRC_A, NPROW ), IACOL = INDXG2P( JA, NB_A, MYCOL, CSRC_A, NPCOL ), MpA0 = NUMROC( M+IROFFA, MB_A, MYROW, IAROW, NPROW ), NqA0 = NUMROC( N+ICOFFA, NB_A, MYCOL, IACOL, NPCOL ), IROFFB = MOD( IB-1, MB_B ), ICOFFB = MOD( JB-1, NB_B ), IBROW = INDXG2P( IB, MB_B, MYROW, RSRC_B, NPROW ), IBCOL = INDXG2P( JB, NB_B, MYCOL, CSRC_B, NPCOL ), MpB0 = NUMROC( M+IROFFB, MB_B, MYROW, IBROW, NPROW ), NpB0 = NUMROC( N+IROFFB, MB_B, MYROW, IBROW, NPROW ), NRHSqB0 = NUMROC( NRHS+ICOFFB, NB_B, MYCOL, IBCOL, NPCOL ), ILCM, INDXG2P and NUMROC are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 INFO (global output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. 
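The LOCr/LOCc block-cyclic counting rule and the ceil-based upper bound quoted in the Notes above can be checked directly. The Python sketch below is not part of ScaLAPACK; `numroc` is a transcription of the standard counting rule of the ScaLAPACK tool function NUMROC, and `locr_upper_bound` is a hypothetical helper name chosen here to evaluate the bound LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A.

```python
import math

def numroc(n, nb, iproc, isrcproc, nprocs):
    """Number of rows/columns of an n-element dimension, distributed in
    blocks of size nb over nprocs processes, owned by process iproc
    (first block held by process isrcproc) -- block-cyclic counting."""
    mydist = (nprocs + iproc - isrcproc) % nprocs  # distance from source process
    nblocks = n // nb                              # number of full blocks
    num = (nblocks // nprocs) * nb                 # whole rounds of blocks
    extrablks = nblocks % nprocs                   # leftover full blocks
    if mydist < extrablks:
        num += nb                                  # one extra full block
    elif mydist == extrablks:
        num += n % nb                              # the trailing partial block
    return num

def locr_upper_bound(m, mb, nprow):
    # LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A, as stated in the Notes
    return math.ceil(math.ceil(m / mb) / nprow) * mb

# Example: M = 1000 rows, MB_A = 64, RSRC_A = 0, NPROW = 2
rows = [numroc(1000, 64, p, 0, 2) for p in range(2)]
print(rows)                      # per-process local row counts
assert sum(rows) == 1000         # every row is owned by exactly one process
assert all(r <= locr_upper_bound(1000, 64, 2) for r in rows)
```

The same helper with NB_A, MYCOL, CSRC_A and NPCOL gives LOCc( N ) for the column dimension.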
scalapack-doc-1.5/man/manl/pcgeql2.l0100644000056400000620000001436506335610614016766 0ustar pfrauenfstaff.TH PCGEQL2 l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PCGEQL2 - compute a QL factorization of a complex distributed M-by-N matrix sub( A ) = A(IA:IA+M-1,JA:JA+N-1) = Q * L .SH SYNOPSIS .TP 20 SUBROUTINE PCGEQL2( M, N, A, IA, JA, DESCA, TAU, WORK, LWORK, INFO ) .TP 20 .ti +4 INTEGER IA, INFO, JA, LWORK, M, N .TP 20 .ti +4 INTEGER DESCA( * ) .TP 20 .ti +4 COMPLEX A( * ), TAU( * ), WORK( * ) .SH PURPOSE PCGEQL2 computes a QL factorization of a complex distributed M-by-N matrix sub( A ) = A(IA:IA+M-1,JA:JA+N-1) = Q * L. Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. 
CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 M (global input) INTEGER The number of rows to be operated on, i.e. the number of rows of the distributed submatrix sub( A ). M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on, i.e. the number of columns of the distributed submatrix sub( A ). N >= 0. .TP 8 A (local input/local output) COMPLEX pointer into the local memory to an array of dimension (LLD_A, LOCc(JA+N-1)). On entry, the local pieces of the M-by-N distributed matrix sub( A ) which is to be factored. On exit, if M >= N, the lower triangle of the distributed submatrix A( IA+M-N:IA+M-1, JA:JA+N-1 ) contains the N-by-N lower triangular matrix L; if M <= N, the elements on and below the (N-M)-th superdiagonal contain the M by N lower trapezoidal matrix L; the remaining elements, with the array TAU, represent the unitary matrix Q as a product of elementary reflectors (see Further Details). 
.TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 TAU (local output) COMPLEX, array, dimension LOCc(JA+N-1) This array contains the scalar factors of the elementary reflectors. TAU is tied to the distributed matrix A. .TP 8 WORK (local workspace/local output) COMPLEX array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. LWORK is local input and must be at least LWORK >= Mp0 + MAX( 1, Nq0 ), where IROFF = MOD( IA-1, MB_A ), ICOFF = MOD( JA-1, NB_A ), IAROW = INDXG2P( IA, MB_A, MYROW, RSRC_A, NPROW ), IACOL = INDXG2P( JA, NB_A, MYCOL, CSRC_A, NPCOL ), Mp0 = NUMROC( M+IROFF, MB_A, MYROW, IAROW, NPROW ), Nq0 = NUMROC( N+ICOFF, NB_A, MYCOL, IACOL, NPCOL ), and NUMROC, INDXG2P are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 INFO (local output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. .SH FURTHER DETAILS The matrix Q is represented as a product of elementary reflectors Q = H(ja+k-1) . . . H(ja+1) H(ja), where k = min(m,n). 
Each H(i) has the form .br H(i) = I - tau * v * v' .br where tau is a complex scalar, and v is a complex vector with v(m-k+i+1:m) = 0 and v(m-k+i) = 1; v(1:m-k+i-1) is stored on exit in A(ia:ia+m-k+i-2,ja+n-k+i-1), and tau in TAU(ja+n-k+i-1). .br scalapack-doc-1.5/man/manl/pcgeqlf.l0100644000056400000620000001437606335610614017054 0ustar pfrauenfstaff.TH PCGEQLF l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PCGEQLF - compute a QL factorization of a complex distributed M-by-N matrix sub( A ) = A(IA:IA+M-1,JA:JA+N-1) = Q * L .SH SYNOPSIS .TP 20 SUBROUTINE PCGEQLF( M, N, A, IA, JA, DESCA, TAU, WORK, LWORK, INFO ) .TP 20 .ti +4 INTEGER IA, INFO, JA, LWORK, M, N .TP 20 .ti +4 INTEGER DESCA( * ) .TP 20 .ti +4 COMPLEX A( * ), TAU( * ), WORK( * ) .SH PURPOSE PCGEQLF computes a QL factorization of a complex distributed M-by-N matrix sub( A ) = A(IA:IA+M-1,JA:JA+N-1) = Q * L. Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. 
.br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 M (global input) INTEGER The number of rows to be operated on, i.e. the number of rows of the distributed submatrix sub( A ). M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on, i.e. the number of columns of the distributed submatrix sub( A ). N >= 0. .TP 8 A (local input/local output) COMPLEX pointer into the local memory to an array of dimension (LLD_A, LOCc(JA+N-1)). On entry, the local pieces of the M-by-N distributed matrix sub( A ) which is to be factored. 
On exit, if M >= N, the lower triangle of the distributed submatrix A( IA+M-N:IA+M-1, JA:JA+N-1 ) contains the N-by-N lower triangular matrix L; if M <= N, the elements on and below the (N-M)-th superdiagonal contain the M-by-N lower trapezoidal matrix L; the remaining elements, with the array TAU, represent the unitary matrix Q as a product of elementary reflectors (see Further Details). .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 TAU (local output) COMPLEX, array, dimension LOCc(JA+N-1) This array contains the scalar factors of the elementary reflectors. TAU is tied to the distributed matrix A. .TP 8 WORK (local workspace/local output) COMPLEX array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. LWORK is local input and must be at least LWORK >= NB_A * ( Mp0 + Nq0 + NB_A ), where IROFF = MOD( IA-1, MB_A ), ICOFF = MOD( JA-1, NB_A ), IAROW = INDXG2P( IA, MB_A, MYROW, RSRC_A, NPROW ), IACOL = INDXG2P( JA, NB_A, MYCOL, CSRC_A, NPCOL ), Mp0 = NUMROC( M+IROFF, MB_A, MYROW, IAROW, NPROW ), Nq0 = NUMROC( N+ICOFF, NB_A, MYCOL, IACOL, NPCOL ), and NUMROC, INDXG2P are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. 
.TP 8 INFO (global output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. .SH FURTHER DETAILS The matrix Q is represented as a product of elementary reflectors Q = H(ja+k-1) . . . H(ja+1) H(ja), where k = min(m,n). Each H(i) has the form .br H(i) = I - tau * v * v' .br where tau is a complex scalar, and v is a complex vector with v(m-k+i+1:m) = 0 and v(m-k+i) = 1; v(1:m-k+i-1) is stored on exit in A(ia:ia+m-k+i-2,ja+n-k+i-1), and tau in TAU(ja+n-k+i-1). .br scalapack-doc-1.5/man/manl/pcgeqpf.l0100644000056400000620000001624406335610615017055 0ustar pfrauenfstaff.TH PCGEQPF l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PCGEQPF - compute a QR factorization with column pivoting of a M-by-N distributed matrix sub( A ) = A(IA:IA+M-1,JA:JA+N-1) .SH SYNOPSIS .TP 20 SUBROUTINE PCGEQPF( M, N, A, IA, JA, DESCA, IPIV, TAU, WORK, LWORK, RWORK, LRWORK, INFO ) .TP 20 .ti +4 INTEGER IA, JA, INFO, LRWORK, LWORK, M, N .TP 20 .ti +4 INTEGER DESCA( * ), IPIV( * ) .TP 20 .ti +4 REAL RWORK( * ) .TP 20 .ti +4 COMPLEX A( * ), TAU( * ), WORK( * ) .SH PURPOSE PCGEQPF computes a QR factorization with column pivoting of a M-by-N distributed matrix sub( A ) = A(IA:IA+M-1,JA:JA+N-1): sub( A ) * P = Q * R. .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". 
.br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 M (global input) INTEGER The number of rows to be operated on, i.e. the number of rows of the distributed submatrix sub( A ). M >= 0. 
.TP 8 N (global input) INTEGER The number of columns to be operated on, i.e. the number of columns of the distributed submatrix sub( A ). N >= 0. .TP 8 A (local input/local output) COMPLEX pointer into the local memory to an array of dimension (LLD_A, LOCc(JA+N-1)). On entry, the local pieces of the M-by-N distributed matrix sub( A ) which is to be factored. On exit, the elements on and above the diagonal of sub( A ) contain the min(M,N) by N upper trapezoidal matrix R (R is upper triangular if M >= N); the elements below the diagonal, with the array TAU, represent the unitary matrix Q as a product of elementary reflectors (see Further Details). .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 IPIV (local output) INTEGER array, dimension LOCc(JA+N-1). On exit, if IPIV(I) = K, the local i-th column of sub( A )*P was the global K-th column of sub( A ). IPIV is tied to the distributed matrix A. .TP 8 TAU (local output) COMPLEX, array, dimension LOCc(JA+MIN(M,N)-1). This array contains the scalar factors TAU of the elementary reflectors. TAU is tied to the distributed matrix A. .TP 8 WORK (local workspace/local output) COMPLEX array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. LWORK is local input and must be at least LWORK >= MAX(3,Mp0 + Nq0). If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. 
.TP 8 RWORK (local workspace/local output) REAL array, dimension (LRWORK) On exit, RWORK(1) returns the minimal and optimal LRWORK. .TP 8 LRWORK (local or global input) INTEGER The dimension of the array RWORK. LRWORK is local input and must be at least LRWORK >= LOCc(JA+N-1)+Nq0. IROFF = MOD( IA-1, MB_A ), ICOFF = MOD( JA-1, NB_A ), IAROW = INDXG2P( IA, MB_A, MYROW, RSRC_A, NPROW ), IACOL = INDXG2P( JA, NB_A, MYCOL, CSRC_A, NPCOL ), Mp0 = NUMROC( M+IROFF, MB_A, MYROW, IAROW, NPROW ), Nq0 = NUMROC( N+ICOFF, NB_A, MYCOL, IACOL, NPCOL ), LOCc(JA+N-1) = NUMROC( JA+N-1, NB_A, MYCOL, CSRC_A, NPCOL ) and NUMROC, INDXG2P are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. If LRWORK = -1, then LRWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 INFO (global output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. .SH FURTHER DETAILS The matrix Q is represented as a product of elementary reflectors Q = H(1) H(2) . . . H(n) .br Each H(i) has the form .br H(i) = I - tau * v * v' .br where tau is a complex scalar, and v is a complex vector with v(1:i-1) = 0 and v(i) = 1; v(i+1:m) is stored on exit in .br A(ia+i-1:ia+m-1,ja+i-1). .br The matrix P is represented in IPIV as follows: If .br IPIV(j) = i .br then the j-th column of P is the i-th canonical unit vector. 
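The INFO error encoding shared by all of these routines (INFO = -(i*100+j) when entry j of array argument i is illegal, INFO = -i when scalar argument i is illegal) is easy to decode on the calling side. The helper below is an illustrative Python sketch, not a ScaLAPACK routine; the function name `decode_info` is chosen here for the example.

```python
def decode_info(info):
    """Decode the INFO value returned by ScaLAPACK routines:
    0 = success, -i = scalar argument i illegal,
    -(i*100+j) = entry j of array argument i illegal."""
    if info == 0:
        return "successful exit"
    if info > 0:
        return f"INFO = {info}: routine-specific failure"
    k = -info
    if k >= 100:
        return f"argument {k // 100}: entry {k % 100} had an illegal value"
    return f"argument {k} had an illegal value"

print(decode_info(-6))    # scalar argument 6 was illegal
print(decode_info(-802))  # entry 2 of array argument 8 was illegal
```

For example, passing an invalid DESCA with DESCA( MB_ ) <= 0 as the, say, 6th argument would typically produce INFO = -604, which decodes to argument 6, entry 4.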
scalapack-doc-1.5/man/manl/pcgeqr2.l0100644000056400000620000001421506335610615016767 0ustar pfrauenfstaff.TH PCGEQR2 l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PCGEQR2 - compute a QR factorization of a complex distributed M-by-N matrix sub( A ) = A(IA:IA+M-1,JA:JA+N-1) = Q * R .SH SYNOPSIS .TP 20 SUBROUTINE PCGEQR2( M, N, A, IA, JA, DESCA, TAU, WORK, LWORK, INFO ) .TP 20 .ti +4 INTEGER IA, INFO, JA, LWORK, M, N .TP 20 .ti +4 INTEGER DESCA( * ) .TP 20 .ti +4 COMPLEX A( * ), TAU( * ), WORK( * ) .SH PURPOSE PCGEQR2 computes a QR factorization of a complex distributed M-by-N matrix sub( A ) = A(IA:IA+M-1,JA:JA+N-1) = Q * R. Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. 
CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 M (global input) INTEGER The number of rows to be operated on, i.e. the number of rows of the distributed submatrix sub( A ). M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on, i.e. the number of columns of the distributed submatrix sub( A ). N >= 0. .TP 8 A (local input/local output) COMPLEX pointer into the local memory to an array of dimension (LLD_A, LOCc(JA+N-1)). On entry, the local pieces of the M-by-N distributed matrix sub( A ) which is to be factored. On exit, the elements on and above the diagonal of sub( A ) contain the min(M,N) by N upper trapezoidal matrix R (R is upper triangular if M >= N); the elements below the diagonal, with the array TAU, represent the unitary matrix Q as a product of elementary reflectors (see Further Details). .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). 
.TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 TAU (local output) COMPLEX, array, dimension LOCc(JA+MIN(M,N)-1). This array contains the scalar factors TAU of the elementary reflectors. TAU is tied to the distributed matrix A. .TP 8 WORK (local workspace/local output) COMPLEX array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. LWORK is local input and must be at least LWORK >= Mp0 + MAX( 1, Nq0 ), where IROFF = MOD( IA-1, MB_A ), ICOFF = MOD( JA-1, NB_A ), IAROW = INDXG2P( IA, MB_A, MYROW, RSRC_A, NPROW ), IACOL = INDXG2P( JA, NB_A, MYCOL, CSRC_A, NPCOL ), Mp0 = NUMROC( M+IROFF, MB_A, MYROW, IAROW, NPROW ), Nq0 = NUMROC( N+ICOFF, NB_A, MYCOL, IACOL, NPCOL ), and NUMROC, INDXG2P are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 INFO (local output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. .SH FURTHER DETAILS The matrix Q is represented as a product of elementary reflectors Q = H(ja) H(ja+1) . . . H(ja+k-1), where k = min(m,n). Each H(i) has the form .br H(i) = I - tau * v * v' .br where tau is a complex scalar, and v is a complex vector with v(1:i-1) = 0 and v(i) = 1; v(i+1:m) is stored on exit in .br A(ia+i:ia+m-1,ja+i-1), and tau in TAU(ja+i-1). 
.br scalapack-doc-1.5/man/manl/pcgeqrf.l0100644000056400000620000001422606335610615017055 0ustar pfrauenfstaff.TH PCGEQRF l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PCGEQRF - compute a QR factorization of a complex distributed M-by-N matrix sub( A ) = A(IA:IA+M-1,JA:JA+N-1) = Q * R .SH SYNOPSIS .TP 20 SUBROUTINE PCGEQRF( M, N, A, IA, JA, DESCA, TAU, WORK, LWORK, INFO ) .TP 20 .ti +4 INTEGER IA, INFO, JA, LWORK, M, N .TP 20 .ti +4 INTEGER DESCA( * ) .TP 20 .ti +4 COMPLEX A( * ), TAU( * ), WORK( * ) .SH PURPOSE PCGEQRF computes a QR factorization of a complex distributed M-by-N matrix sub( A ) = A(IA:IA+M-1,JA:JA+N-1) = Q * R. Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. 
CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 M (global input) INTEGER The number of rows to be operated on, i.e. the number of rows of the distributed submatrix sub( A ). M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on, i.e. the number of columns of the distributed submatrix sub( A ). N >= 0. .TP 8 A (local input/local output) COMPLEX pointer into the local memory to an array of dimension (LLD_A, LOCc(JA+N-1)). On entry, the local pieces of the M-by-N distributed matrix sub( A ) which is to be factored. On exit, the elements on and above the diagonal of sub( A ) contain the min(M,N) by N upper trapezoidal matrix R (R is upper triangular if M >= N); the elements below the diagonal, with the array TAU, represent the unitary matrix Q as a product of elementary reflectors (see Further Details). IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). 
.TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 TAU (local output) COMPLEX, array, dimension LOCc(JA+MIN(M,N)-1). This array contains the scalar factors TAU of the elementary reflectors. TAU is tied to the distributed matrix A. .TP 8 WORK (local workspace/local output) COMPLEX array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. LWORK is local input and must be at least LWORK >= NB_A * ( Mp0 + Nq0 + NB_A ), where IROFF = MOD( IA-1, MB_A ), ICOFF = MOD( JA-1, NB_A ), IAROW = INDXG2P( IA, MB_A, MYROW, RSRC_A, NPROW ), IACOL = INDXG2P( JA, NB_A, MYCOL, CSRC_A, NPCOL ), Mp0 = NUMROC( M+IROFF, MB_A, MYROW, IAROW, NPROW ), Nq0 = NUMROC( N+ICOFF, NB_A, MYCOL, IACOL, NPCOL ), and NUMROC, INDXG2P are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 INFO (global output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-th entry had an illegal value, then INFO = -(i*100+j); if the i-th argument is a scalar and had an illegal value, then INFO = -i. .SH FURTHER DETAILS The matrix Q is represented as a product of elementary reflectors Q = H(ja) H(ja+1) . . . H(ja+k-1), where k = min(m,n). Each H(i) has the form .br H(i) = I - tau * v * v' .br where tau is a complex scalar, and v is a complex vector with v(1:i-1) = 0 and v(i) = 1; v(i+1:m) is stored on exit in .br A(ia+i:ia+m-1,ja+i-1), and tau in TAU(ja+i-1). 
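PCGEQRF's LWORK bound combines INDXG2P (which process coordinate owns a given global index) with NUMROC. The following Python sketch re-implements both tool functions from their documented behavior and evaluates LWORK >= NB_A * ( Mp0 + Nq0 + NB_A ) for a hypothetical 2 x 2 process grid; all matrix and grid sizes below are made-up example values, and the real Fortran INDXG2P also takes an (unused) IPROC argument that is omitted here:

```python
def indxg2p(indxglob, nb, isrcproc, nprocs):
    # Process coordinate owning global index indxglob (1-based), per
    # ScaLAPACK's INDXG2P (the Fortran version's IPROC argument is unused).
    return (isrcproc + (indxglob - 1) // nb) % nprocs

def numroc(n, nb, iproc, isrcproc, nprocs):
    # Local row/column count for process iproc (see NUMROC sketch above).
    mydist = (nprocs + iproc - isrcproc) % nprocs
    nblocks = n // nb
    num = (nblocks // nprocs) * nb
    if mydist < nblocks % nprocs:
        num += nb
    elif mydist == nblocks % nprocs:
        num += n % nb
    return num

def pcgeqrf_lwork_min(m, n, ia, ja, mb, nb, myrow, mycol,
                      rsrc, csrc, nprow, npcol):
    # Literal transcription of the man page's LWORK formula.
    iroff, icoff = (ia - 1) % mb, (ja - 1) % nb
    iarow = indxg2p(ia, mb, rsrc, nprow)
    iacol = indxg2p(ja, nb, csrc, npcol)
    mp0 = numroc(m + iroff, mb, myrow, iarow, nprow)
    nq0 = numroc(n + icoff, nb, mycol, iacol, npcol)
    return nb * (mp0 + nq0 + nb)

# Hypothetical example: 100x60 matrix, 8x8 blocks, process (0,0) of a 2x2 grid.
print(pcgeqrf_lwork_min(100, 60, 1, 1, 8, 8, 0, 0, 0, 0, 2, 2))   # -> 736
```

In practice the two-call LWORK = -1 query described above is the safer way to size WORK, since the library computes this bound itself.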
.br scalapack-doc-1.5/man/manl/pcgerfs.l0100644000056400000620000002330306335610615017053 0ustar pfrauenfstaff.TH PCGERFS l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PCGERFS - improve the computed solution to a system of linear equations and provides error bounds and backward error estimates for the solutions .SH SYNOPSIS .TP 20 SUBROUTINE PCGERFS( TRANS, N, NRHS, A, IA, JA, DESCA, AF, IAF, JAF, DESCAF, IPIV, B, IB, JB, DESCB, X, IX, JX, DESCX, FERR, BERR, WORK, LWORK, RWORK, LRWORK, INFO ) .TP 20 .ti +4 CHARACTER TRANS .TP 20 .ti +4 INTEGER IA, IAF, IB, IX, INFO, JA, JAF, JB, JX, LRWORK, LWORK, N, NRHS .TP 20 .ti +4 INTEGER DESCA( * ), DESCAF( * ), DESCB( * ), DESCX( * ), IPIV( * ) .TP 20 .ti +4 REAL BERR( * ), FERR( * ), RWORK( * ) .TP 20 .ti +4 COMPLEX A( * ), AF( * ), B( * ), WORK( * ), X( * ) .SH PURPOSE PCGERFS improves the computed solution to a system of linear equations and provides error bounds and backward error estimates for the solutions. Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. 
.br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br In the following comments, sub( A ), sub( X ) and sub( B ) denote respectively A(IA:IA+N-1,JA:JA+N-1), X(IX:IX+N-1,JX:JX+NRHS-1) and B(IB:IB+N-1,JB:JB+NRHS-1). .br .SH ARGUMENTS .TP 8 TRANS (global input) CHARACTER*1 Specifies the form of the system of equations. = 'N': sub( A ) * sub( X ) = sub( B ) (No transpose) .br = 'T': sub( A )**T * sub( X ) = sub( B ) (Transpose) .br = 'C': sub( A )**H * sub( X ) = sub( B ) (Conjugate transpose) .TP 8 N (global input) INTEGER The order of the matrix sub( A ). N >= 0. .TP 8 NRHS (global input) INTEGER The number of right hand sides, i.e., the number of columns of the matrices sub( B ) and sub( X ). NRHS >= 0. 
.TP 8 A (local input) COMPLEX pointer into the local memory to an array of local dimension (LLD_A,LOCc(JA+N-1)). This array contains the local pieces of the distributed matrix sub( A ). .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 AF (local input) COMPLEX pointer into the local memory to an array of local dimension (LLD_AF,LOCc(JA+N-1)). This array contains the local pieces of the distributed factors of the matrix sub( A ) = P * L * U as computed by PCGETRF. .TP 8 IAF (global input) INTEGER The row index in the global array AF indicating the first row of sub( AF ). .TP 8 JAF (global input) INTEGER The column index in the global array AF indicating the first column of sub( AF ). .TP 8 DESCAF (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix AF. .TP 8 IPIV (local input) INTEGER array of dimension LOCr(M_AF)+MB_AF. This array contains the pivoting information as computed by PCGETRF. IPIV(i) -> The global row local row i was swapped with. This array is tied to the distributed matrix A. .TP 8 B (local input) COMPLEX pointer into the local memory to an array of local dimension (LLD_B,LOCc(JB+NRHS-1)). This array contains the local pieces of the distributed matrix of right hand sides sub( B ). .TP 8 IB (global input) INTEGER The row index in the global array B indicating the first row of sub( B ). .TP 8 JB (global input) INTEGER The column index in the global array B indicating the first column of sub( B ). .TP 8 DESCB (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix B. 
.TP 8 X (local input and output) COMPLEX pointer into the local memory to an array of local dimension (LLD_X,LOCc(JX+NRHS-1)). On entry, this array contains the local pieces of the distributed matrix solution sub( X ). On exit, the improved solution vectors. .TP 8 IX (global input) INTEGER The row index in the global array X indicating the first row of sub( X ). .TP 8 JX (global input) INTEGER The column index in the global array X indicating the first column of sub( X ). .TP 8 DESCX (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix X. .TP 8 FERR (local output) REAL array of local dimension LOCc(JB+NRHS-1). The estimated forward error bound for each solution vector of sub( X ). If XTRUE is the true solution corresponding to sub( X ), FERR is an estimated upper bound for the magnitude of the largest element in (sub( X ) - XTRUE) divided by the magnitude of the largest element in sub( X ). The estimate is as reliable as the estimate for RCOND, and is almost always a slight overestimate of the true error. This array is tied to the distributed matrix X. .TP 8 BERR (local output) REAL array of local dimension LOCc(JB+NRHS-1). The componentwise relative backward error of each solution vector (i.e., the smallest re- lative change in any entry of sub( A ) or sub( B ) that makes sub( X ) an exact solution). This array is tied to the distributed matrix X. .TP 8 WORK (local workspace/local output) COMPLEX array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. LWORK is local input and must be at least LWORK >= 2*LOCr( N + MOD(IA-1,MB_A) ) If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. 
.TP 8 RWORK (local workspace/local output) REAL array, dimension (LRWORK) On exit, RWORK(1) returns the minimal and optimal LRWORK. .TP 8 LRWORK (local or global input) INTEGER The dimension of the array RWORK. LRWORK is local input and must be at least LRWORK >= LOCr( N + MOD(IB-1,MB_B) ). If LRWORK = -1, then LRWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 INFO (global output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. .SH PARAMETERS ITMAX is the maximum number of steps of iterative refinement. Notes ===== This routine temporarily returns when N <= 1. The distributed submatrices op( A ) and op( AF ) (respectively sub( X ) and sub( B ) ) should be distributed the same way on the same processes. These conditions ensure that sub( A ) and sub( AF ) (resp. sub( X ) and sub( B ) ) are "perfectly" aligned. Moreover, this routine requires the distributed submatrices sub( A ), sub( AF ), sub( X ), and sub( B ) to be aligned on a block boundary, i.e., if f(x,y) = MOD( x-1, y ): f( IA, DESCA( MB_ ) ) = f( JA, DESCA( NB_ ) ) = 0, f( IAF, DESCAF( MB_ ) ) = f( JAF, DESCAF( NB_ ) ) = 0, f( IB, DESCB( MB_ ) ) = f( JB, DESCB( NB_ ) ) = 0, and f( IX, DESCX( MB_ ) ) = f( JX, DESCX( NB_ ) ) = 0. 
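PCGERFS's alignment requirement above, stated with f(x,y) = MOD(x-1,y), simply says that each submatrix must start on a block boundary of its distribution. A minimal Python sketch of that check (the index values below are hypothetical, and plain integers stand in for the MB_ and NB_ entries of a real descriptor):

```python
def on_block_boundary(i, j, mb, nb):
    # f(x, y) = MOD(x-1, y); sub( ) is block-aligned iff both values are 0.
    return (i - 1) % mb == 0 and (j - 1) % nb == 0

# With 8x8 blocking: IA=9, JA=17 start exactly on block boundaries...
print(on_block_boundary(9, 17, 8, 8))    # -> True
# ...but IB=10 does not, so PCGERFS would reject this sub( B ).
print(on_block_boundary(10, 17, 8, 8))   # -> False
```

The same test must pass for (IA,JA), (IAF,JAF), (IB,JB), and (IX,JX) against their respective descriptors, in addition to sub( A )/sub( AF ) and sub( X )/sub( B ) being identically distributed.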
scalapack-doc-1.5/man/manl/pcgerq2.l0100644000056400000620000001433706335610615016774 0ustar pfrauenfstaff.TH PCGERQ2 l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PCGERQ2 - compute a RQ factorization of a complex distributed M-by-N matrix sub( A ) = A(IA:IA+M-1,JA:JA+N-1) = R * Q .SH SYNOPSIS .TP 20 SUBROUTINE PCGERQ2( M, N, A, IA, JA, DESCA, TAU, WORK, LWORK, INFO ) .TP 20 .ti +4 INTEGER IA, INFO, JA, LWORK, M, N .TP 20 .ti +4 INTEGER DESCA( * ) .TP 20 .ti +4 COMPLEX A( * ), TAU( * ), WORK( * ) .SH PURPOSE PCGERQ2 computes a RQ factorization of a complex distributed M-by-N matrix sub( A ) = A(IA:IA+M-1,JA:JA+N-1) = R * Q. Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. 
CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 M (global input) INTEGER The number of rows to be operated on, i.e. the number of rows of the distributed submatrix sub( A ). M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on, i.e. the number of columns of the distributed submatrix sub( A ). N >= 0. .TP 8 A (local input/local output) COMPLEX pointer into the local memory to an array of dimension (LLD_A, LOCc(JA+N-1)). On entry, the local pieces of the M-by-N distributed matrix sub( A ) which is to be factored. On exit, if M <= N, the upper triangle of A( IA:IA+M-1, JA+N-M:JA+N-1 ) contains the M by M upper triangular matrix R; if M >= N, the elements on and above the (M-N)-th subdiagonal contain the M by N upper trapezoidal matrix R; the remaining elements, with the array TAU, represent the unitary matrix Q as a product of elementary reflectors (see Further Details). IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). 
.TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 TAU (local output) COMPLEX, array, dimension LOCr(IA+M-1) This array contains the scalar factors of the elementary reflectors. TAU is tied to the distributed matrix A. .TP 8 WORK (local workspace/local output) COMPLEX array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. LWORK is local input and must be at least LWORK >= Nq0 + MAX( 1, Mp0 ), where IROFF = MOD( IA-1, MB_A ), ICOFF = MOD( JA-1, NB_A ), IAROW = INDXG2P( IA, MB_A, MYROW, RSRC_A, NPROW ), IACOL = INDXG2P( JA, NB_A, MYCOL, CSRC_A, NPCOL ), Mp0 = NUMROC( M+IROFF, MB_A, MYROW, IAROW, NPROW ), Nq0 = NUMROC( N+ICOFF, NB_A, MYCOL, IACOL, NPCOL ), and NUMROC, INDXG2P are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 INFO (local output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. .SH FURTHER DETAILS The matrix Q is represented as a product of elementary reflectors Q = H(ia)' H(ia+1)' . . . H(ia+k-1)', where k = min(m,n). 
Each H(i) has the form .br H(i) = I - tau * v * v' .br where tau is a complex scalar, and v is a complex vector with v(n-k+i+1:n) = 0 and v(n-k+i) = 1; conjg(v(1:n-k+i-1)) is stored on exit in A(ia+m-k+i-1,ja:ja+n-k+i-2), and tau in TAU(ia+m-k+i-1). scalapack-doc-1.5/man/manl/pcgerqf.l0100644000056400000620000001435006335610615017053 0ustar pfrauenfstaff.TH PCGERQF l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PCGERQF - compute a RQ factorization of a complex distributed M-by-N matrix sub( A ) = A(IA:IA+M-1,JA:JA+N-1) = R * Q .SH SYNOPSIS .TP 20 SUBROUTINE PCGERQF( M, N, A, IA, JA, DESCA, TAU, WORK, LWORK, INFO ) .TP 20 .ti +4 INTEGER IA, INFO, JA, LWORK, M, N .TP 20 .ti +4 INTEGER DESCA( * ) .TP 20 .ti +4 COMPLEX A( * ), TAU( * ), WORK( * ) .SH PURPOSE PCGERQF computes a RQ factorization of a complex distributed M-by-N matrix sub( A ) = A(IA:IA+M-1,JA:JA+N-1) = R * Q. Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. 
.br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 M (global input) INTEGER The number of rows to be operated on, i.e. the number of rows of the distributed submatrix sub( A ). M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on, i.e. the number of columns of the distributed submatrix sub( A ). N >= 0. .TP 8 A (local input/local output) COMPLEX pointer into the local memory to an array of dimension (LLD_A, LOCc(JA+N-1)). On entry, the local pieces of the M-by-N distributed matrix sub( A ) which is to be factored. 
On exit, if M <= N, the upper triangle of A( IA:IA+M-1, JA+N-M:JA+N-1 ) contains the M by M upper triangular matrix R; if M >= N, the elements on and above the (M-N)-th subdiagonal contain the M by N upper trapezoidal matrix R; the remaining elements, with the array TAU, represent the unitary matrix Q as a product of elementary reflectors (see Further Details). IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 TAU (local output) COMPLEX, array, dimension LOCr(IA+M-1) This array contains the scalar factors of the elementary reflectors. TAU is tied to the distributed matrix A. .TP 8 WORK (local workspace/local output) COMPLEX array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. LWORK is local input and must be at least LWORK >= MB_A * ( Mp0 + Nq0 + MB_A ), where IROFF = MOD( IA-1, MB_A ), ICOFF = MOD( JA-1, NB_A ), IAROW = INDXG2P( IA, MB_A, MYROW, RSRC_A, NPROW ), IACOL = INDXG2P( JA, NB_A, MYCOL, CSRC_A, NPCOL ), Mp0 = NUMROC( M+IROFF, MB_A, MYROW, IAROW, NPROW ), Nq0 = NUMROC( N+ICOFF, NB_A, MYCOL, IACOL, NPCOL ), and NUMROC, INDXG2P are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. 
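The LWORK = -1 convention just described is a two-call protocol: query first, then allocate WORK and call again. Its shape can be mimicked in a short Python sketch; the routine, its workspace formula, and the error path below are stand-ins for illustration, not the real PCGERQF or PXERBLA:

```python
class WorkspaceError(ValueError):
    """Stand-in for PXERBLA flagging an illegal LWORK."""

def factor(m, n, work, lwork):
    """Stand-in for a ScaLAPACK routine with a WORK/LWORK pair.

    The minimal workspace here is the made-up formula m + n; a real
    routine derives it from the descriptor as in the formula above."""
    need = m + n
    if lwork == -1:            # workspace query: do no computation,
        work[0] = need         # just return the minimal/optimal size
        return 0               # in WORK(1), with INFO = 0.
    if lwork < need:           # illegal LWORK: a real routine would
        raise WorkspaceError(f"LWORK={lwork} < minimum {need}")  # call PXERBLA
    work[0] = need             # on exit WORK(1) holds the optimal LWORK
    return 0                   # INFO = 0: successful exit

# Two-call pattern: query, allocate, compute.
work = [0.0]
factor(100, 60, work, -1)               # first call: query only
lwork = int(work[0])
work = [0.0] * lwork
info = factor(100, 60, work, lwork)     # second call: the real factorization
print(info, lwork)                      # -> 0 160
```

This pattern frees the caller from re-deriving the NUMROC/INDXG2P bound by hand and stays correct if a later library version needs more workspace.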
.TP 8 INFO (global output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. .SH FURTHER DETAILS The matrix Q is represented as a product of elementary reflectors Q = H(ia)' H(ia+1)' . . . H(ia+k-1)', where k = min(m,n). Each H(i) has the form .br H(i) = I - tau * v * v' .br where tau is a complex scalar, and v is a complex vector with v(n-k+i+1:n) = 0 and v(n-k+i) = 1; conjg(v(1:n-k+i-1)) is stored on exit in A(ia+m-k+i-1,ja:ja+n-k+i-2), and tau in TAU(ia+m-k+i-1). scalapack-doc-1.5/man/manl/pcgesv.l0100644000056400000620000001375706335610615016725 0ustar pfrauenfstaff.TH PCGESV l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PCGESV - compute the solution to a complex system of linear equations sub( A ) * X = sub( B ), .SH SYNOPSIS .TP 19 SUBROUTINE PCGESV( N, NRHS, A, IA, JA, DESCA, IPIV, B, IB, JB, DESCB, INFO ) .TP 19 .ti +4 INTEGER IA, IB, INFO, JA, JB, N, NRHS .TP 19 .ti +4 INTEGER DESCA( * ), DESCB( * ), IPIV( * ) .TP 19 .ti +4 COMPLEX A( * ), B( * ) .SH PURPOSE PCGESV computes the solution to a complex system of linear equations where sub( A ) = A(IA:IA+N-1,JA:JA+N-1) is an N-by-N distributed matrix and X and sub( B ) = B(IB:IB+N-1,JB:JB+NRHS-1) are N-by-NRHS distributed matrices. .br The LU decomposition with partial pivoting and row interchanges is used to factor sub( A ) as sub( A ) = P * L * U, where P is a permu- tation matrix, L is unit lower triangular, and U is upper triangular. L and U are stored in sub( A ). The factored form of sub( A ) is then used to solve the system of equations sub( A ) * X = sub( B ). Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. 
.br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). 
An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br This routine requires square block decomposition ( MB_A = NB_A ). .SH ARGUMENTS .TP 8 N (global input) INTEGER The number of rows and columns to be operated on, i.e. the order of the distributed submatrix sub( A ). N >= 0. .TP 8 NRHS (global input) INTEGER The number of right hand sides, i.e., the number of columns of the distributed submatrix sub( B ). NRHS >= 0. .TP 8 A (local input/local output) COMPLEX pointer into the local memory to an array of dimension (LLD_A,LOCc(JA+N-1)). On entry, the local pieces of the N-by-N distributed matrix sub( A ) to be factored. On exit, this array contains the local pieces of the factors L and U from the factorization sub( A ) = P*L*U; the unit diagonal elements of L are not stored. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 IPIV (local output) INTEGER array, dimension ( LOCr(M_A)+MB_A ) This array contains the pivoting information. IPIV(i) -> The global row local row i was swapped with. This array is tied to the distributed matrix A. .TP 8 B (local input/local output) COMPLEX pointer into the local memory to an array of dimension (LLD_B,LOCc(JB+NRHS-1)). On entry, the right hand side distributed matrix sub( B ). On exit, if INFO = 0, sub( B ) is overwritten by the solution distributed matrix X. .TP 8 IB (global input) INTEGER The row index in the global array B indicating the first row of sub( B ). .TP 8 JB (global input) INTEGER The column index in the global array B indicating the first column of sub( B ). 
.TP 8 DESCB (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix B. .TP 8 INFO (global output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. > 0: If INFO = K, U(IA+K-1,JA+K-1) is exactly zero. The factorization has been completed, but the factor U is exactly singular, so the solution could not be computed. scalapack-doc-1.5/man/manl/pcgesvx.l0100644000056400000620000004031706335610615017105 0ustar pfrauenfstaff.TH PCGESVX l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PCGESVX - use the LU factorization to compute the solution to a complex system of linear equations A(IA:IA+N-1,JA:JA+N-1) * X = B(IB:IB+N-1,JB:JB+NRHS-1), .SH SYNOPSIS .TP 20 SUBROUTINE PCGESVX( FACT, TRANS, N, NRHS, A, IA, JA, DESCA, AF, IAF, JAF, DESCAF, IPIV, EQUED, R, C, B, IB, JB, DESCB, X, IX, JX, DESCX, RCOND, FERR, BERR, WORK, LWORK, RWORK, LRWORK, INFO ) .TP 20 .ti +4 CHARACTER EQUED, FACT, TRANS .TP 20 .ti +4 INTEGER IA, IAF, IB, INFO, IX, JA, JAF, JB, JX, LRWORK, LWORK, N, NRHS .TP 20 .ti +4 REAL RCOND .TP 20 .ti +4 INTEGER DESCA( * ), DESCAF( * ), DESCB( * ), DESCX( * ), IPIV( * ) .TP 20 .ti +4 REAL BERR( * ), C( * ), FERR( * ), R( * ), RWORK( * ) .TP 20 .ti +4 COMPLEX A( * ), AF( * ), B( * ), WORK( * ), X( * ) .SH PURPOSE PCGESVX uses the LU factorization to compute the solution to a complex system of linear equations where A(IA:IA+N-1,JA:JA+N-1) is an N-by-N matrix and X and B(IB:IB+N-1,JB:JB+NRHS-1) are N-by-NRHS matrices. .br Error bounds on the solution and a condition estimate are also provided. .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. 
.br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). 
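The NUMROC computation just described can be made concrete with a short serial sketch. The following Python function is an illustrative transcription of the logic of ScaLAPACK's NUMROC tool routine (it is not the Fortran routine itself, and uses 0-based process coordinates as NUMROC does):

```python
def numroc(n, nb, iproc, isrcproc, nprocs):
    """Number of rows (or columns) of an n-element dimension, distributed in
    blocks of size nb over nprocs processes, that land on process iproc when
    the first block resides on process isrcproc. Mirrors ScaLAPACK's NUMROC."""
    mydist = (nprocs + iproc - isrcproc) % nprocs  # distance from the source process
    nblocks = n // nb                              # number of complete blocks
    num = (nblocks // nprocs) * nb                 # full cycles every process receives
    extrablocks = nblocks % nprocs                 # leftover complete blocks
    if mydist < extrablocks:
        num += nb                                  # this process gets one extra full block
    elif mydist == extrablocks:
        num += n % nb                              # this process gets the trailing partial block
    return num

# Example: N = 10 columns, NB = 4, NPCOL = 3 process columns, CSRC = 0.
counts = [numroc(10, 4, p, 0, 3) for p in range(3)]
assert counts == [4, 4, 2]          # the counts always sum to N
```

Each count is bounded by ceil( ceil(N/NB)/NPCOL )*NB, which is the upper bound quoted in the Notes above (here ceil(ceil(10/4)/3)*4 = 4).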
An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH DESCRIPTION In the following description, A denotes A(IA:IA+N-1,JA:JA+N-1), B denotes B(IB:IB+N-1,JB:JB+NRHS-1) and X denotes .br X(IX:IX+N-1,JX:JX+NRHS-1). .br The following steps are performed: .br 1. If FACT = 'E', real scaling factors are computed to equilibrate the system: .br TRANS = 'N': diag(R)*A*diag(C) *inv(diag(C))*X = diag(R)*B TRANS = 'T': (diag(R)*A*diag(C))**T *inv(diag(R))*X = diag(C)*B TRANS = 'C': (diag(R)*A*diag(C))**H *inv(diag(R))*X = diag(C)*B Whether or not the system will be equilibrated depends on the scaling of the matrix A, but if equilibration is used, A is overwritten by diag(R)*A*diag(C) and B by diag(R)*B (if TRANS='N') or diag(C)*B (if TRANS = 'T' or 'C'). .br 2. If FACT = 'N' or 'E', the LU decomposition is used to factor the matrix A (after equilibration if FACT = 'E') as .br A = P * L * U, .br where P is a permutation matrix, L is a unit lower triangular matrix, and U is upper triangular. .br 3. The factored form of A is used to estimate the condition number of the matrix A. If the reciprocal of the condition number is less than machine precision, steps 4-6 are skipped. .br 4. The system of equations is solved for X using the factored form of A. .br 5. Iterative refinement is applied to improve the computed solution matrix and calculate error bounds and backward error estimates for it. .br 6. If FACT = 'E' and equilibration was used, the matrix X is premultiplied by diag(C) (if TRANS = 'N') or diag(R) (if TRANS = 'T' or 'C') so that it solves the original system before equilibration. .br .SH ARGUMENTS .TP 8 FACT (global input) CHARACTER Specifies whether or not the factored form of the matrix A(IA:IA+N-1,JA:JA+N-1) is supplied on entry, and if not, .br whether the matrix A(IA:IA+N-1,JA:JA+N-1) should be equilibrated before it is factored. 
= 'F': On entry, AF(IAF:IAF+N-1,JAF:JAF+N-1) and IPIV con- .br tain the factored form of A(IA:IA+N-1,JA:JA+N-1). If EQUED is not 'N', the matrix A(IA:IA+N-1,JA:JA+N-1) has been equilibrated with scaling factors given by R and C. A(IA:IA+N-1,JA:JA+N-1), AF(IAF:IAF+N-1,JAF:JAF+N-1), and IPIV are not modified. = 'N': The matrix A(IA:IA+N-1,JA:JA+N-1) will be copied to .br AF(IAF:IAF+N-1,JAF:JAF+N-1) and factored. .br = 'E': The matrix A(IA:IA+N-1,JA:JA+N-1) will be equili- brated if necessary, then copied to AF(IAF:IAF+N-1,JAF:JAF+N-1) and factored. .TP 8 TRANS (global input) CHARACTER .br Specifies the form of the system of equations: .br = 'N': A(IA:IA+N-1,JA:JA+N-1) * X(IX:IX+N-1,JX:JX+NRHS-1) .br = B(IB:IB+N-1,JB:JB+NRHS-1) (No transpose) .br = 'T': A(IA:IA+N-1,JA:JA+N-1)**T * X(IX:IX+N-1,JX:JX+NRHS-1) .br = B(IB:IB+N-1,JB:JB+NRHS-1) (Transpose) .br = 'C': A(IA:IA+N-1,JA:JA+N-1)**H * X(IX:IX+N-1,JX:JX+NRHS-1) .br = B(IB:IB+N-1,JB:JB+NRHS-1) (Conjugate transpose) .TP 8 N (global input) INTEGER The number of rows and columns to be operated on, i.e. the order of the distributed submatrix A(IA:IA+N-1,JA:JA+N-1). N >= 0. .TP 8 NRHS (global input) INTEGER The number of right-hand sides, i.e., the number of columns of the distributed submatrices B(IB:IB+N-1,JB:JB+NRHS-1) and .br X(IX:IX+N-1,JX:JX+NRHS-1). NRHS >= 0. .TP 8 A (local input/local output) COMPLEX pointer into the local memory to an array of local dimension (LLD_A,LOCc(JA+N-1)). On entry, the N-by-N matrix A(IA:IA+N-1,JA:JA+N-1). If FACT = 'F' and EQUED is not 'N', .br then A(IA:IA+N-1,JA:JA+N-1) must have been equilibrated by .br the scaling factors in R and/or C. A(IA:IA+N-1,JA:JA+N-1) is not modified if FACT = 'F' or 'N', or if FACT = 'E' and EQUED = 'N' on exit. On exit, if EQUED .ne. 
'N', A(IA:IA+N-1,JA:JA+N-1) is scaled as follows: .br EQUED = 'R': A(IA:IA+N-1,JA:JA+N-1) := .br diag(R) * A(IA:IA+N-1,JA:JA+N-1) .br EQUED = 'C': A(IA:IA+N-1,JA:JA+N-1) := .br A(IA:IA+N-1,JA:JA+N-1) * diag(C) .br EQUED = 'B': A(IA:IA+N-1,JA:JA+N-1) := .br diag(R) * A(IA:IA+N-1,JA:JA+N-1) * diag(C). .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 AF (local input or local output) COMPLEX pointer into the local memory to an array of local dimension (LLD_AF,LOCc(JA+N-1)). If FACT = 'F', then AF(IAF:IAF+N-1,JAF:JAF+N-1) is an input argument and on entry contains the factors L and U from the factorization A(IA:IA+N-1,JA:JA+N-1) = P*L*U as computed by PCGETRF. If EQUED .ne. 'N', then AF is the factored form of the equilibrated matrix A(IA:IA+N-1,JA:JA+N-1). If FACT = 'N', then AF(IAF:IAF+N-1,JAF:JAF+N-1) is an output argument and on exit returns the factors L and U from the factorization A(IA:IA+N-1,JA:JA+N-1) = P*L*U of the original .br matrix A(IA:IA+N-1,JA:JA+N-1). If FACT = 'E', then AF(IAF:IAF+N-1,JAF:JAF+N-1) is an output argument and on exit returns the factors L and U from the factorization A(IA:IA+N-1,JA:JA+N-1) = P*L*U of the equili- .br brated matrix A(IA:IA+N-1,JA:JA+N-1) (see the description of .br A(IA:IA+N-1,JA:JA+N-1) for the form of the equilibrated matrix). .TP 8 IAF (global input) INTEGER The row index in the global array AF indicating the first row of sub( AF ). .TP 8 JAF (global input) INTEGER The column index in the global array AF indicating the first column of sub( AF ). .TP 8 DESCAF (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix AF. 
.TP 8 IPIV (local input or local output) INTEGER array, dimension LOCr(M_A)+MB_A. If FACT = 'F', then IPIV is an input argu- ment and on entry contains the pivot indices from the fac- torization A(IA:IA+N-1,JA:JA+N-1) = P*L*U as computed by PCGETRF; IPIV(i) -> The global row local row i was swapped with. This array must be aligned with A( IA:IA+N-1, * ). If FACT = 'N', then IPIV is an output argument and on exit contains the pivot indices from the factorization A(IA:IA+N-1,JA:JA+N-1) = P*L*U of the original matrix .br A(IA:IA+N-1,JA:JA+N-1). If FACT = 'E', then IPIV is an output argument and on exit contains the pivot indices from the factorization A(IA:IA+N-1,JA:JA+N-1) = P*L*U of the equilibrated matrix .br A(IA:IA+N-1,JA:JA+N-1). .TP 8 EQUED (global input or global output) CHARACTER Specifies the form of equilibration that was done. = 'N': No equilibration (always true if FACT = 'N'). .br = 'R': Row equilibration, i.e., A(IA:IA+N-1,JA:JA+N-1) has been premultiplied by diag(R). = 'C': Column equilibration, i.e., A(IA:IA+N-1,JA:JA+N-1) has been postmultiplied by diag(C). = 'B': Both row and column equilibration, i.e., .br A(IA:IA+N-1,JA:JA+N-1) has been replaced by .br diag(R) * A(IA:IA+N-1,JA:JA+N-1) * diag(C). EQUED is an input variable if FACT = 'F'; otherwise, it is an output variable. .TP 8 R (local input or local output) REAL array, dimension LOCr(M_A). The row scale factors for A(IA:IA+N-1,JA:JA+N-1). .br If EQUED = 'R' or 'B', A(IA:IA+N-1,JA:JA+N-1) is multiplied on the left by diag(R); if EQUED='N' or 'C', R is not acces- sed. R is an input variable if FACT = 'F'; otherwise, R is an output variable. If FACT = 'F' and EQUED = 'R' or 'B', each element of R must be positive. R is replicated in every process column, and is aligned with the distributed matrix A. .TP 8 C (local input or local output) REAL array, dimension LOCc(N_A). The column scale factors for A(IA:IA+N-1,JA:JA+N-1). 
.br If EQUED = 'C' or 'B', A(IA:IA+N-1,JA:JA+N-1) is multiplied on the right by diag(C); if EQUED = 'N' or 'R', C is not accessed. C is an input variable if FACT = 'F'; otherwise, C is an output variable. If FACT = 'F' and EQUED = 'C' or 'B', each element of C must be positive. C is replicated in every process row, and is aligned with the distributed matrix A. .TP 8 B (local input/local output) COMPLEX pointer into the local memory to an array of local dimension (LLD_B,LOCc(JB+NRHS-1) ). On entry, the N-by-NRHS right-hand side matrix B(IB:IB+N-1,JB:JB+NRHS-1). On exit, if .br EQUED = 'N', B(IB:IB+N-1,JB:JB+NRHS-1) is not modified; if TRANS = 'N' and EQUED = 'R' or 'B', B is overwritten by diag(R)*B(IB:IB+N-1,JB:JB+NRHS-1); if TRANS = 'T' or 'C' .br and EQUED = 'C' or 'B', B(IB:IB+N-1,JB:JB+NRHS-1) is over- .br written by diag(C)*B(IB:IB+N-1,JB:JB+NRHS-1). .TP 8 IB (global input) INTEGER The row index in the global array B indicating the first row of sub( B ). .TP 8 JB (global input) INTEGER The column index in the global array B indicating the first column of sub( B ). .TP 8 DESCB (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix B. .TP 8 X (local input/local output) COMPLEX pointer into the local memory to an array of local dimension (LLD_X, LOCc(JX+NRHS-1)). If INFO = 0, the N-by-NRHS solution matrix X(IX:IX+N-1,JX:JX+NRHS-1) to the original .br system of equations. Note that A(IA:IA+N-1,JA:JA+N-1) and .br B(IB:IB+N-1,JB:JB+NRHS-1) are modified on exit if EQUED .ne. 'N', and the solution to the equilibrated system is inv(diag(C))*X(IX:IX+N-1,JX:JX+NRHS-1) if TRANS = 'N' and EQUED = 'C' or 'B', or inv(diag(R))*X(IX:IX+N-1,JX:JX+NRHS-1) if TRANS = 'T' or 'C' and EQUED = 'R' or 'B'. .TP 8 IX (global input) INTEGER The row index in the global array X indicating the first row of sub( X ). .TP 8 JX (global input) INTEGER The column index in the global array X indicating the first column of sub( X ). 
.TP 8 DESCX (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix X. .TP 8 RCOND (global output) REAL The estimate of the reciprocal condition number of the matrix A(IA:IA+N-1,JA:JA+N-1) after equilibration (if done). If RCOND is less than the machine precision (in particular, if RCOND = 0), the matrix is singular to working precision. This condition is indicated by a return code of INFO > 0. .TP 8 FERR (local output) REAL array, dimension LOCc(N_B) The estimated forward error bounds for each solution vector X(j) (the j-th column of the solution matrix X(IX:IX+N-1,JX:JX+NRHS-1). If XTRUE is the true solution, FERR(j) bounds the magnitude of the largest entry in (X(j) - XTRUE) divided by the magnitude of the largest entry in X(j). The estimate is as reliable as the estimate for RCOND, and is almost always a slight overestimate of the true error. FERR is replicated in every process row, and is aligned with the matrices B and X. .TP 8 BERR (local output) REAL array, dimension LOCc(N_B). The componentwise relative backward error of each solution vector X(j) (i.e., the smallest relative change in any entry of A(IA:IA+N-1,JA:JA+N-1) or .br B(IB:IB+N-1,JB:JB+NRHS-1) that makes X(j) an exact solution). BERR is replicated in every process row, and is aligned with the matrices B and X. .TP 8 WORK (local workspace/local output) COMPLEX array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. LWORK is local input and must be at least LWORK = MAX( PCGECON( LWORK ), PCGERFS( LWORK ) ) + LOCr( N_A ). If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. 
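The LWORK = -1 workspace-query convention described for WORK/LWORK above is a two-phase calling pattern. Below is a purely illustrative Python sketch of that pattern; the function name and its workspace formula are hypothetical stand-ins, not PCGESVX's actual requirement:

```python
def toy_solver(n, work, lwork):
    """Toy routine mimicking the ScaLAPACK workspace-query convention:
    if lwork == -1, the required size is written into work[0] and the
    routine returns without computing; otherwise lwork is validated."""
    needed = 2 * n                 # hypothetical workspace requirement
    if lwork == -1:                # phase 1: workspace query
        work[0] = float(needed)    # minimal/optimal size returned in the first entry
        return 0                   # INFO = 0: query succeeded
    if lwork < needed:
        return -3                  # INFO = -3: argument 3 (LWORK) had an illegal value
    # ... phase 2: perform the real computation using work[:needed] ...
    return 0

# Two-phase use, as a caller of a ScaLAPACK driver would do it:
work = [0.0]
assert toy_solver(100, work, -1) == 0   # query only
lwork = int(work[0])                    # allocate a workspace of this size
work = [0.0] * lwork
assert toy_solver(100, work, lwork) == 0
```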
.TP 8 RWORK (local workspace/local output) REAL array, dimension (LRWORK) On exit, RWORK(1) returns the minimal and optimal LRWORK. .TP 8 LRWORK (local or global input) INTEGER The dimension of the array RWORK. LRWORK is local input and must be at least LRWORK = 2*LOCc(N_A). If LRWORK = -1, then LRWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 INFO (global output) INTEGER = 0: successful exit .br < 0: if INFO = -i, the i-th argument had an illegal value .br > 0: if INFO = i, and i is .br <= N: U(IA+I-1,IA+I-1) is exactly zero. The factorization has been completed, but the factor U is exactly singular, so the solution and error bounds could not be computed. = N+1: RCOND is less than machine precision. The factorization has been completed, but the matrix is singular to working precision, and the solution and error bounds have not been computed. 
.TH PCGETF2 l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PCGETF2 - compute an LU factorization of a general M-by-N distributed matrix sub( A ) = A(IA:IA+M-1,JA:JA+N-1) using partial pivoting with row interchanges .SH SYNOPSIS .TP 20 SUBROUTINE PCGETF2( M, N, A, IA, JA, DESCA, IPIV, INFO ) .TP 20 .ti +4 INTEGER IA, INFO, JA, M, N .TP 20 .ti +4 INTEGER DESCA( * ), IPIV( * ) .TP 20 .ti +4 COMPLEX A( * ) .SH PURPOSE PCGETF2 computes an LU factorization of a general M-by-N distributed matrix sub( A ) = A(IA:IA+M-1,JA:JA+N-1) using partial pivoting with row interchanges. 
The factorization has the form sub( A ) = P * L * U, where P is a permutation matrix, L is lower triangular with unit diagonal elements (lower trapezoidal if m > n), and U is upper triangular (upper trapezoidal if m < n). .br This is the right-looking Parallel Level 2 BLAS version of the algorithm. .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. 
.br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br This routine requires N <= NB_A-MOD(JA-1, NB_A) and square block decomposition ( MB_A = NB_A ). .br .SH ARGUMENTS .TP 8 M (global input) INTEGER The number of rows to be operated on, i.e. the number of rows of the distributed submatrix sub( A ). M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on, i.e. the number of columns of the distributed submatrix sub( A ). NB_A-MOD(JA-1, NB_A) >= N >= 0. .TP 8 A (local input/local output) COMPLEX pointer into the local memory to an array of dimension (LLD_A, LOCc(JA+N-1)). On entry, this array contains the local pieces of the M-by-N distributed matrix sub( A ). On exit, this array contains the local pieces of the factors L and U from the factoriza- tion sub( A ) = P*L*U; the unit diagonal elements of L are not stored. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 IPIV (local output) INTEGER array, dimension ( LOCr(M_A)+MB_A ) This array contains the pivoting information. IPIV(i) -> The global row local row i was swapped with. 
This array is tied to the distributed matrix A. .TP 8 INFO (local output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. > 0: If INFO = K, U(IA+K-1,JA+K-1) is exactly zero. The factorization has been completed, but the factor U is exactly singular, and division by zero will occur if it is used to solve a system of equations. 
.TH PCGETRF l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PCGETRF - compute an LU factorization of a general M-by-N distributed matrix sub( A ) = A(IA:IA+M-1,JA:JA+N-1) using partial pivoting with row interchanges .SH SYNOPSIS .TP 20 SUBROUTINE PCGETRF( M, N, A, IA, JA, DESCA, IPIV, INFO ) .TP 20 .ti +4 INTEGER IA, INFO, JA, M, N .TP 20 .ti +4 INTEGER DESCA( * ), IPIV( * ) .TP 20 .ti +4 COMPLEX A( * ) .SH PURPOSE PCGETRF computes an LU factorization of a general M-by-N distributed matrix sub( A ) = A(IA:IA+M-1,JA:JA+N-1) using partial pivoting with row interchanges. The factorization has the form sub( A ) = P * L * U, where P is a permutation matrix, L is lower triangular with unit diagonal elements (lower trapezoidal if m > n), and U is upper triangular (upper trapezoidal if m < n). L and U are stored in sub( A ). This is the right-looking Parallel Level 3 BLAS version of the algorithm. .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". 
.br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br This routine requires square block decomposition ( MB_A = NB_A ). .SH ARGUMENTS .TP 8 M (global input) INTEGER The number of rows to be operated on, i.e. 
the number of rows of the distributed submatrix sub( A ). M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on, i.e. the number of columns of the distributed submatrix sub( A ). N >= 0. .TP 8 A (local input/local output) COMPLEX pointer into the local memory to an array of dimension (LLD_A, LOCc(JA+N-1)). On entry, this array contains the local pieces of the M-by-N distributed matrix sub( A ) to be factored. On exit, this array contains the local pieces of the factors L and U from the factorization sub( A ) = P*L*U; the unit diagonal ele- ments of L are not stored. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 IPIV (local output) INTEGER array, dimension ( LOCr(M_A)+MB_A ) This array contains the pivoting information. IPIV(i) -> The global row local row i was swapped with. This array is tied to the distributed matrix A. .TP 8 INFO (global output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. > 0: If INFO = K, U(IA+K-1,JA+K-1) is exactly zero. The factorization has been completed, but the factor U is exactly singular, and division by zero will occur if it is used to solve a system of equations. 
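The INFO = -(i*100+j) error encoding documented for these routines can be unpacked mechanically. Here is an illustrative Python helper (the function name and message wording are ours, not ScaLAPACK's; ScaLAPACK itself only reports the raw INFO value via PXERBLA):

```python
def decode_info(info):
    """Decode a ScaLAPACK INFO value per the convention above:
    0 -> success; -i (i < 100) -> scalar argument i illegal;
    -(i*100+j) -> entry j of array argument i illegal; K > 0 ->
    U(IA+K-1,JA+K-1) is exactly zero."""
    if info == 0:
        return "successful exit"
    if info > 0:
        return (f"factorization completed, but U(IA+{info}-1,JA+{info}-1) "
                f"is exactly zero: the factor U is singular")
    code = -info
    if code < 100:                         # scalar argument case
        return f"argument {code} (scalar) had an illegal value"
    i, j = divmod(code, 100)               # array argument i, entry j
    return f"argument {i} (array), entry {j} had an illegal value"

assert "entry 2" in decode_info(-802)      # array argument 8, entry 2
```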
.TH PCGETRI l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PCGETRI - compute the inverse of a distributed matrix using the LU factorization computed by PCGETRF .SH SYNOPSIS .TP 20 SUBROUTINE PCGETRI( N, A, IA, JA, DESCA, IPIV, WORK, LWORK, IWORK, LIWORK, INFO ) .TP 20 .ti +4 INTEGER IA, INFO, JA, LIWORK, LWORK, N .TP 20 .ti +4 INTEGER DESCA( * ), IPIV( * ), IWORK( * ) .TP 20 .ti +4 COMPLEX A( * ), WORK( * ) .SH PURPOSE PCGETRI computes the inverse of a distributed matrix using the LU factorization computed by PCGETRF. This method inverts U and then computes the inverse of sub( A ) = A(IA:IA+N-1,JA:JA+N-1) denoted InvA by solving the system InvA*L = inv(U) for InvA. .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. 
.br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 N (global input) INTEGER The number of rows and columns to be operated on, i.e. the order of the distributed submatrix sub( A ). N >= 0. .TP 8 A (local input/local output) COMPLEX pointer into the local memory to an array of dimension (LLD_A,LOCc(JA+N-1)). On entry, the local pieces of the L and U obtained by the factorization sub( A ) = P*L*U computed by PCGETRF. On exit, if INFO = 0, sub( A ) contains the inverse of the original distributed matrix sub( A ). .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. 
.TP 8 IPIV (local input) INTEGER array, dimension LOCr(M_A)+MB_A keeps track of the pivoting information. IPIV(i) is the global row index the local row i was swapped with. This array is tied to the distributed matrix A. .TP 8 WORK (local workspace/local output) COMPLEX array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. LWORK is local input and must be at least LWORK = LOCr(N+MOD(IA-1,MB_A))*NB_A. WORK is used to keep a copy of at most an entire column block of sub( A ). If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 IWORK (local workspace/local output) INTEGER array, dimension (LIWORK) On exit, IWORK(1) returns the minimal and optimal LIWORK. .TP 8 LIWORK (local or global input) INTEGER The dimension of the array IWORK used as workspace for physically transposing the pivots. LIWORK is local input and must be at least if NPROW == NPCOL then LIWORK = LOCc( N_A + MOD(JA-1, NB_A) ) + NB_A, else LIWORK = LOCc( N_A + MOD(JA-1, NB_A) ) + MAX( CEIL(CEIL(LOCr(M_A)/MB_A)/(LCM/NPROW)), NB_A ) where LCM is the least common multiple of process rows and columns (NPROW and NPCOL). end if If LIWORK = -1, then LIWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 INFO (global output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. 
> 0: If INFO = K, U(IA+K-1,IA+K-1) is exactly zero; the matrix is singular and its inverse could not be computed. 
.TH PCGETRS l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PCGETRS - solve a system of distributed linear equations op( sub( A ) ) * X = sub( B ) with a general N-by-N distributed matrix sub( A ) using the LU factorization computed by PCGETRF .SH SYNOPSIS .TP 20 SUBROUTINE PCGETRS( TRANS, N, NRHS, A, IA, JA, DESCA, IPIV, B, IB, JB, DESCB, INFO ) .TP 20 .ti +4 CHARACTER TRANS .TP 20 .ti +4 INTEGER IA, IB, INFO, JA, JB, N, NRHS .TP 20 .ti +4 INTEGER DESCA( * ), DESCB( * ), IPIV( * ) .TP 20 .ti +4 COMPLEX A( * ), B( * ) .SH PURPOSE PCGETRS solves a system of distributed linear equations op( sub( A ) ) * X = sub( B ) with a general N-by-N distributed matrix sub( A ) using the LU factorization computed by PCGETRF, where sub( A ) denotes A(IA:IA+N-1,JA:JA+N-1), op( A ) = A, A**T or A**H, and sub( B ) denotes B(IB:IB+N-1,JB:JB+NRHS-1). .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. 
.br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br This routine requires square block data decomposition ( MB_A=NB_A ). .SH ARGUMENTS .TP 8 TRANS (global input) CHARACTER Specifies the form of the system of equations: .br = 'N': sub( A ) * X = sub( B ) (No transpose) .br = 'T': sub( A )**T * X = sub( B ) (Transpose) .br = 'C': sub( A )**H * X = sub( B ) (Conjugate transpose) .TP 8 N (global input) INTEGER The number of rows and columns to be operated on, i.e. the order of the distributed submatrix sub( A ). N >= 0. .TP 8 NRHS (global input) INTEGER The number of right hand sides, i.e., the number of columns of the distributed submatrix sub( B ). NRHS >= 0. 
.TP 8 A (local input) COMPLEX pointer into the local memory to an array of dimension (LLD_A, LOCc(JA+N-1)). On entry, this array contains the local pieces of the factors L and U from the factorization sub( A ) = P*L*U; the unit diagonal elements of L are not stored. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 IPIV (local input) INTEGER array, dimension ( LOCr(M_A)+MB_A ) This array contains the pivoting information. IPIV(i) -> The global row local row i was swapped with. This array is tied to the distributed matrix A. .TP 8 B (local input/local output) COMPLEX pointer into the local memory to an array of dimension (LLD_B,LOCc(JB+NRHS-1)). On entry, the right hand sides sub( B ). On exit, sub( B ) is overwritten by the solution distributed matrix X. .TP 8 IB (global input) INTEGER The row index in the global array B indicating the first row of sub( B ). .TP 8 JB (global input) INTEGER The column index in the global array B indicating the first column of sub( B ). .TP 8 DESCB (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix B. .TP 8 INFO (global output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. 
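The Notes sections above define the local row and column counts LOCr and LOCc through the ScaLAPACK tool function NUMROC, together with the upper bound LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A. As a sanity check of that arithmetic, here is a small self-contained Python sketch; the helper name `numroc` merely mirrors the Fortran routine, and the block is an illustration rather than ScaLAPACK's own code.

```python
import math

def numroc(n, nb, iproc, isrcproc, nprocs):
    """Local count of an n-long dimension distributed block-cyclically
    in blocks of size nb over nprocs processes, as seen by process
    iproc, with the first block held by process isrcproc (cf. NUMROC)."""
    mydist = (nprocs + iproc - isrcproc) % nprocs
    nblocks = n // nb                      # number of whole blocks
    count = (nblocks // nprocs) * nb       # blocks dealt to every process
    extrablks = nblocks % nprocs           # leftover whole blocks
    if mydist < extrablks:
        count += nb                        # one extra whole block
    elif mydist == extrablks:
        count += n % nb                    # the trailing partial block
    return count

# Example: M = 10 rows, MB_A = 2, NPROW = 2, RSRC_A = 0.
m, mb, nprow = 10, 2, 2
locr = [numroc(m, mb, p, 0, nprow) for p in range(nprow)]
bound = math.ceil(math.ceil(m / mb) / nprow) * mb
assert sum(locr) == m                # the local pieces tile the dimension
assert all(l <= bound for l in locr) # LOCr(M) <= ceil(ceil(M/MB_A)/NPROW)*MB_A
print(locr, bound)                   # prints [6, 4] 6
```

The same helper, called with the column block size NB_A and the process-column coordinates, yields LOCc.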
scalapack-doc-1.5/man/manl/pcggqrf.l0100644000056400000620000002351406335610616017060 0ustar pfrauenfstaff.TH PCGGQRF l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PCGGQRF - compute a generalized QR factorization of an N-by-M matrix sub( A ) = A(IA:IA+N-1,JA:JA+M-1) and an N-by-P matrix sub( B ) = B(IB:IB+N-1,JB:JB+P-1) .SH SYNOPSIS .TP 20 SUBROUTINE PCGGQRF( N, M, P, A, IA, JA, DESCA, TAUA, B, IB, JB, DESCB, TAUB, WORK, LWORK, INFO ) .TP 20 .ti +4 INTEGER IA, IB, INFO, JA, JB, LWORK, M, N, P .TP 20 .ti +4 INTEGER DESCA( * ), DESCB( * ) .TP 20 .ti +4 COMPLEX A( * ), B( * ), TAUA( * ), TAUB( * ), WORK( * ) .SH PURPOSE PCGGQRF computes a generalized QR factorization of an N-by-M matrix sub( A ) = A(IA:IA+N-1,JA:JA+M-1) and an N-by-P matrix sub( B ) = B(IB:IB+N-1,JB:JB+P-1): sub( A ) = Q*R, sub( B ) = Q*T*Z, .br where Q is an N-by-N unitary matrix, Z is a P-by-P unitary matrix, and R and T assume one of the forms: .br if N >= M, R = ( R11 ) M , or if N < M, R = ( R11 R12 ) N, ( 0 ) N-M N M-N M .br where R11 is upper triangular, and .br if N <= P, T = ( 0 T12 ) N, or if N > P, T = ( T11 ) N-P, P-N N ( T21 ) P P .br where T12 or T21 is upper triangular. .br In particular, if sub( B ) is square and nonsingular, the GQR factorization of sub( A ) and sub( B ) implicitly gives the QR factorization of inv( sub( B ) )* sub( A ): .br inv( sub( B ) )*sub( A )= Z'*(inv(T)*R) .br where inv( sub( B ) ) denotes the inverse of the matrix sub( B ), and Z' denotes the conjugate transpose of matrix Z. .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. 
In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). 
An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 N (global input) INTEGER The number of rows to be operated on i.e the number of rows of the distributed submatrices sub( A ) and sub( B ). N >= 0. .TP 8 M (global input) INTEGER The number of columns to be operated on i.e the number of columns of the distributed submatrix sub( A ). M >= 0. .TP 8 P (global input) INTEGER The number of columns to be operated on i.e the number of columns of the distributed submatrix sub( B ). P >= 0. .TP 8 A (local input/local output) COMPLEX pointer into the local memory to an array of dimension (LLD_A, LOCc(JA+M-1)). On entry, the local pieces of the N-by-M distributed matrix sub( A ) which is to be factored. On exit, the elements on and above the diagonal of sub( A ) contain the min(N,M) by M upper trapezoidal matrix R (R is upper triangular if N >= M); the elements below the diagonal, with the array TAUA, represent the unitary matrix Q as a product of min(N,M) elementary reflectors (see Further Details). IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 TAUA (local output) COMPLEX, array, dimension LOCc(JA+MIN(N,M)-1). This array contains the scalar factors TAUA of the elementary reflectors which represent the unitary matrix Q. TAUA is tied to the distributed matrix A. (see Further Details). B (local input/local output) COMPLEX pointer into the local memory to an array of dimension (LLD_B, LOCc(JB+P-1)). On entry, the local pieces of the N-by-P distributed matrix sub( B ) which is to be factored. 
On exit, if N <= P, the upper triangle of B(IB:IB+N-1,JB+P-N:JB+P-1) contains the N by N upper triangular matrix T; if N > P, the elements on and above the (N-P)-th subdiagonal contain the N by P upper trapezoidal matrix T; the remaining elements, with the array TAUB, represent the unitary matrix Z as a product of elementary reflectors (see Further Details). IB (global input) INTEGER The row index in the global array B indicating the first row of sub( B ). .TP 8 JB (global input) INTEGER The column index in the global array B indicating the first column of sub( B ). .TP 8 DESCB (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix B. .TP 8 TAUB (local output) COMPLEX, array, dimension LOCr(IB+N-1) This array contains the scalar factors of the elementary reflectors which represent the unitary matrix Z. TAUB is tied to the distributed matrix B (see Further Details). WORK (local workspace/local output) COMPLEX array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. LWORK is local input and must be at least LWORK >= MAX( NB_A * ( NpA0 + MqA0 + NB_A ), MAX( (NB_A*(NB_A-1))/2, (PqB0 + NpB0)*NB_A ) + NB_A * NB_A, MB_B * ( NpB0 + PqB0 + MB_B ) ), where IROFFA = MOD( IA-1, MB_A ), ICOFFA = MOD( JA-1, NB_A ), IAROW = INDXG2P( IA, MB_A, MYROW, RSRC_A, NPROW ), IACOL = INDXG2P( JA, NB_A, MYCOL, CSRC_A, NPCOL ), NpA0 = NUMROC( N+IROFFA, MB_A, MYROW, IAROW, NPROW ), MqA0 = NUMROC( M+ICOFFA, NB_A, MYCOL, IACOL, NPCOL ), IROFFB = MOD( IB-1, MB_B ), ICOFFB = MOD( JB-1, NB_B ), IBROW = INDXG2P( IB, MB_B, MYROW, RSRC_B, NPROW ), IBCOL = INDXG2P( JB, NB_B, MYCOL, CSRC_B, NPCOL ), NpB0 = NUMROC( N+IROFFB, MB_B, MYROW, IBROW, NPROW ), PqB0 = NUMROC( P+ICOFFB, NB_B, MYCOL, IBCOL, NPCOL ), and NUMROC, INDXG2P are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. 
If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 INFO (global output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. .SH FURTHER DETAILS The matrix Q is represented as a product of elementary reflectors Q = H(ja) H(ja+1) . . . H(ja+k-1), where k = min(n,m). Each H(i) has the form .br H(i) = I - taua * v * v' .br where taua is a complex scalar, and v is a complex vector with v(1:i-1) = 0 and v(i) = 1; v(i+1:n) is stored on exit in .br A(ia+i:ia+n-1,ja+i-1), and taua in TAUA(ja+i-1). .br To form Q explicitly, use ScaLAPACK subroutine PCUNGQR. .br To use Q to update another matrix, use ScaLAPACK subroutine PCUNMQR. The matrix Z is represented as a product of elementary reflectors Z = H(ib)' H(ib+1)' . . . H(ib+k-1)', where k = min(n,p). Each H(i) has the form .br H(i) = I - taub * v * v' .br where taub is a complex scalar, and v is a complex vector with v(p-k+i+1:p) = 0 and v(p-k+i) = 1; conjg(v(1:p-k+i-1)) is stored on exit in B(ib+n-k+i-1,jb:jb+p-k+i-2), and taub in TAUB(ib+n-k+i-1). To form Z explicitly, use ScaLAPACK subroutine PCUNGRQ. .br To use Z to update another matrix, use ScaLAPACK subroutine PCUNMRQ. Alignment requirements .br ====================== .br The distributed submatrices sub( A ) and sub( B ) must verify some alignment properties, namely the following expression should be true: ( MB_A.EQ.MB_B .AND. IROFFA.EQ.IROFFB .AND. 
IAROW.EQ.IBROW ) scalapack-doc-1.5/man/manl/pcggrqf.l0100644000056400000620000002341406335610616017057 0ustar pfrauenfstaff.TH PCGGRQF l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PCGGRQF - compute a generalized RQ factorization of an M-by-N matrix sub( A ) = A(IA:IA+M-1,JA:JA+N-1) .SH SYNOPSIS .TP 20 SUBROUTINE PCGGRQF( M, P, N, A, IA, JA, DESCA, TAUA, B, IB, JB, DESCB, TAUB, WORK, LWORK, INFO ) .TP 20 .ti +4 INTEGER IA, IB, INFO, JA, JB, LWORK, M, N, P .TP 20 .ti +4 INTEGER DESCA( * ), DESCB( * ) .TP 20 .ti +4 COMPLEX A( * ), B( * ), TAUA( * ), TAUB( * ), WORK( * ) .SH PURPOSE PCGGRQF computes a generalized RQ factorization of an M-by-N matrix sub( A ) = A(IA:IA+M-1,JA:JA+N-1) and a P-by-N matrix sub( B ) = B(IB:IB+P-1,JB:JB+N-1): .br sub( A ) = R*Q, sub( B ) = Z*T*Q, .br where Q is an N-by-N unitary matrix, Z is a P-by-P unitary matrix, and R and T assume one of the forms: .br if M <= N, R = ( 0 R12 ) M, or if M > N, R = ( R11 ) M-N, N-M M ( R21 ) N N .br where R12 or R21 is upper triangular, and .br if P >= N, T = ( T11 ) N , or if P < N, T = ( T11 T12 ) P, ( 0 ) P-N P N-P N .br where T11 is upper triangular. .br In particular, if sub( B ) is square and nonsingular, the GRQ factorization of sub( A ) and sub( B ) implicitly gives the RQ factorization of sub( A )*inv( sub( B ) ): .br sub( A )*inv( sub( B ) ) = (R*inv(T))*Z' .br where inv( sub( B ) ) denotes the inverse of the matrix sub( B ), and Z' denotes the conjugate transpose of matrix Z. .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". 
.br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 M (global input) INTEGER The number of rows to be operated on i.e the number of rows of the distributed submatrix sub( A ). M >= 0. 
.TP 8 P (global input) INTEGER The number of rows to be operated on i.e the number of rows of the distributed submatrix sub( B ). P >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on i.e the number of columns of the distributed submatrices sub( A ) and sub( B ). N >= 0. .TP 8 A (local input/local output) COMPLEX pointer into the local memory to an array of dimension (LLD_A, LOCc(JA+N-1)). On entry, the local pieces of the M-by-N distributed matrix sub( A ) which is to be factored. On exit, if M <= N, the upper triangle of A( IA:IA+M-1, JA+N-M:JA+N-1 ) contains the M by M upper triangular matrix R; if M >= N, the elements on and above the (M-N)-th subdiagonal contain the M by N upper trapezoidal matrix R; the remaining elements, with the array TAUA, represent the unitary matrix Q as a product of elementary reflectors (see Further Details). IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 TAUA (local output) COMPLEX, array, dimension LOCr(IA+M-1) This array contains the scalar factors of the elementary reflectors which represent the unitary matrix Q. TAUA is tied to the distributed matrix A (see Further Details). B (local input/local output) COMPLEX pointer into the local memory to an array of dimension (LLD_B, LOCc(JB+N-1)). On entry, the local pieces of the P-by-N distributed matrix sub( B ) which is to be factored. On exit, the elements on and above the diagonal of sub( B ) contain the min(P,N) by N upper trapezoidal matrix T (T is upper triangular if P >= N); the elements below the diagonal, with the array TAUB, represent the unitary matrix Z as a product of elementary reflectors (see Further Details). 
IB (global input) INTEGER The row index in the global array B indicating the first row of sub( B ). .TP 8 JB (global input) INTEGER The column index in the global array B indicating the first column of sub( B ). .TP 8 DESCB (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix B. .TP 8 TAUB (local output) COMPLEX, array, dimension LOCc(JB+MIN(P,N)-1). This array contains the scalar factors TAUB of the elementary reflectors which represent the unitary matrix Z. TAUB is tied to the distributed matrix B (see Further Details). WORK (local workspace/local output) COMPLEX array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. LWORK is local input and must be at least LWORK >= MAX( MB_A * ( MpA0 + NqA0 + MB_A ), MAX( (MB_A*(MB_A-1))/2, (PpB0 + NqB0)*MB_A ) + MB_A * MB_A, NB_B * ( PpB0 + NqB0 + NB_B ) ), where IROFFA = MOD( IA-1, MB_A ), ICOFFA = MOD( JA-1, NB_A ), IAROW = INDXG2P( IA, MB_A, MYROW, RSRC_A, NPROW ), IACOL = INDXG2P( JA, NB_A, MYCOL, CSRC_A, NPCOL ), MpA0 = NUMROC( M+IROFFA, MB_A, MYROW, IAROW, NPROW ), NqA0 = NUMROC( N+ICOFFA, NB_A, MYCOL, IACOL, NPCOL ), IROFFB = MOD( IB-1, MB_B ), ICOFFB = MOD( JB-1, NB_B ), IBROW = INDXG2P( IB, MB_B, MYROW, RSRC_B, NPROW ), IBCOL = INDXG2P( JB, NB_B, MYCOL, CSRC_B, NPCOL ), PpB0 = NUMROC( P+IROFFB, MB_B, MYROW, IBROW, NPROW ), NqB0 = NUMROC( N+ICOFFB, NB_B, MYCOL, IBCOL, NPCOL ), and NUMROC, INDXG2P are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. 
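The LWORK lower bound above is plain arithmetic over grid quantities. The following Python sketch evaluates it for one process of a 2 x 2 grid; the helpers `numroc` and `indxg2p` are hypothetical simplified ports of the ScaLAPACK tool functions of the same names (the Fortran INDXG2P also takes a dummy IPROC argument, dropped here), and `pcggrqf_lwork` is an illustrative name, not a ScaLAPACK routine.

```python
def numroc(n, nb, iproc, isrcproc, nprocs):
    # Local count of a block-cyclically distributed dimension (cf. NUMROC).
    mydist = (nprocs + iproc - isrcproc) % nprocs
    nblocks = n // nb
    count = (nblocks // nprocs) * nb
    if mydist < nblocks % nprocs:
        count += nb
    elif mydist == nblocks % nprocs:
        count += n % nb
    return count

def indxg2p(indxglob, nb, isrcproc, nprocs):
    # Process coordinate owning global index indxglob (cf. INDXG2P).
    return (isrcproc + (indxglob - 1) // nb) % nprocs

def pcggrqf_lwork(m, p, n, ia, ja, ib, jb, mb_a, nb_a, mb_b, nb_b,
                  rsrc_a, csrc_a, rsrc_b, csrc_b,
                  myrow, mycol, nprow, npcol):
    # Evaluate the documented PCGGRQF workspace lower bound.
    iroffa, icoffa = (ia - 1) % mb_a, (ja - 1) % nb_a
    iarow = indxg2p(ia, mb_a, rsrc_a, nprow)
    iacol = indxg2p(ja, nb_a, csrc_a, npcol)
    mpa0 = numroc(m + iroffa, mb_a, myrow, iarow, nprow)
    nqa0 = numroc(n + icoffa, nb_a, mycol, iacol, npcol)
    iroffb, icoffb = (ib - 1) % mb_b, (jb - 1) % nb_b
    ibrow = indxg2p(ib, mb_b, rsrc_b, nprow)
    ibcol = indxg2p(jb, nb_b, csrc_b, npcol)
    ppb0 = numroc(p + iroffb, mb_b, myrow, ibrow, nprow)
    nqb0 = numroc(n + icoffb, nb_b, mycol, ibcol, npcol)
    return max(mb_a * (mpa0 + nqa0 + mb_a),
               max((mb_a * (mb_a - 1)) // 2,
                   (ppb0 + nqb0) * mb_a) + mb_a * mb_a,
               nb_b * (ppb0 + nqb0 + nb_b))

# 4-by-4 sub( A ) and sub( B ), 2-by-2 blocks, 2-by-2 grid, process (0,0):
lwork = pcggrqf_lwork(4, 4, 4, 1, 1, 1, 1, 2, 2, 2, 2,
                      0, 0, 0, 0, 0, 0, 2, 2)
print(lwork)  # prints 12
```

In a real program these quantities come from BLACS_GRIDINFO and the array descriptors; the sketch only demonstrates that the bound is computable locally before allocating WORK.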
.TP 8
INFO    (global output) INTEGER
= 0:  successful exit
.br
< 0:  If the i-th argument is an array and the j-th entry had an
illegal value, then INFO = -(i*100+j); if the i-th argument is a
scalar and had an illegal value, then INFO = -i.
.SH FURTHER DETAILS
The matrix Q is represented as a product of elementary reflectors
Q = H(ia)' H(ia+1)' . . . H(ia+k-1)', where k = min(m,n).
Each H(i) has the form
.br
H(i) = I - taua * v * v'
.br
where taua is a complex scalar, and v is a complex vector with
v(n-k+i+1:n) = 0 and v(n-k+i) = 1; conjg(v(1:n-k+i-1)) is stored on
exit in A(ia+m-k+i-1,ja:ja+n-k+i-2), and taua in TAUA(ia+m-k+i-1).
To form Q explicitly, use ScaLAPACK subroutine PCUNGRQ.
.br
To use Q to update another matrix, use ScaLAPACK subroutine PCUNMRQ.
The matrix Z is represented as a product of elementary reflectors
Z = H(jb) H(jb+1) . . . H(jb+k-1), where k = min(p,n).
Each H(i) has the form
.br
H(i) = I - taub * v * v'
.br
where taub is a complex scalar, and v is a complex vector with
v(1:i-1) = 0 and v(i) = 1; v(i+1:p) is stored on exit in
.br
B(ib+i:ib+p-1,jb+i-1), and taub in TAUB(jb+i-1).
.br
To form Z explicitly, use ScaLAPACK subroutine PCUNGQR.
.br
To use Z to update another matrix, use ScaLAPACK subroutine PCUNMQR.
Alignment requirements
.br
======================
.br
The distributed submatrices sub( A ) and sub( B ) must satisfy
certain alignment properties; namely, the following expression
must be true:
( NB_A.EQ.NB_B .AND. ICOFFA.EQ.ICOFFB .AND.
IACOL.EQ.IBCOL ) scalapack-doc-1.5/man/manl/pcheevx.l0100644000056400000620000003531306335610616017071 0ustar pfrauenfstaff.TH PCHEEVX l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME .SH SYNOPSIS .TP 20 SUBROUTINE PCHEEVX( JOBZ, RANGE, UPLO, N, A, IA, JA, DESCA, VL, VU, IL, IU, ABSTOL, M, NZ, W, ORFAC, Z, IZ, JZ, DESCZ, WORK, LWORK, RWORK, LRWORK, IWORK, LIWORK, IFAIL, ICLUSTR, GAP, INFO ) .TP 20 .ti +4 CHARACTER JOBZ, RANGE, UPLO .TP 20 .ti +4 INTEGER IA, IL, INFO, IU, IZ, JA, JZ, LIWORK, LRWORK, LWORK, M, N, NZ .TP 20 .ti +4 REAL ABSTOL, ORFAC, VL, VU .TP 20 .ti +4 INTEGER DESCA( * ), DESCZ( * ), ICLUSTR( * ), IFAIL( * ), IWORK( * ) .TP 20 .ti +4 REAL GAP( * ), RWORK( * ), W( * ) .TP 20 .ti +4 COMPLEX A( * ), WORK( * ), Z( * ) .TP 20 .ti +4 INTEGER BLOCK_CYCLIC_2D, DLEN_, DTYPE_, CTXT_, M_, N_, MB_, NB_, RSRC_, CSRC_, LLD_ .TP 20 .ti +4 PARAMETER ( BLOCK_CYCLIC_2D = 1, DLEN_ = 9, DTYPE_ = 1, CTXT_ = 2, M_ = 3, N_ = 4, MB_ = 5, NB_ = 6, RSRC_ = 7, CSRC_ = 8, LLD_ = 9 ) .TP 20 .ti +4 REAL ZERO, ONE, TEN, FIVE .TP 20 .ti +4 PARAMETER ( ZERO = 0.0E+0, ONE = 1.0E+0, TEN = 10.0E+0, FIVE = 5.0E+0 ) .TP 20 .ti +4 INTEGER IERREIN, IERRCLS, IERRSPC, IERREBZ .TP 20 .ti +4 PARAMETER ( IERREIN = 1, IERRCLS = 2, IERRSPC = 4, IERREBZ = 8 ) .TP 20 .ti +4 LOGICAL ALLEIG, INDEIG, LOWER, LQUERY, QUICKRETURN, VALEIG, WANTZ .TP 20 .ti +4 CHARACTER ORDER .TP 20 .ti +4 INTEGER CSRC_A, I, IACOL, IAROW, ICOFFA, IINFO, INDD, INDD2, INDE, INDE2, INDIBL, INDISP, INDRWORK, INDTAU, INDWORK, IROFFA, IROFFZ, ISCALE, ISIZESTEBZ, ISIZESTEIN, IZROW, LALLWORK, LIWMIN, LLRWORK, LLWORK, LRWMIN, LWMIN, MAXEIGS, MB_A, MB_Z, MQ0, MYCOL, MYROW, NB, NB_A, NB_Z, NEIG, NN, NNP, NP0, NPCOL, NPROCS, NPROW, NQ0, NSPLIT, NZZ, OFFSET, RSRC_A, RSRC_Z, SIZEHEEVX, SIZEORMTR, SIZESTEIN .TP 20 .ti +4 REAL ABSTLL, ANRM, BIGNUM, EPS, RMAX, RMIN, SAFMIN, SIGMA, SMLNUM, VLL, VUU .TP 20 .ti +4 INTEGER IDUM1( 4 ), IDUM2( 4 ) .TP 20 .ti +4 LOGICAL LSAME .TP 20 .ti +4 INTEGER ICEIL, INDXG2P, 
NUMROC .TP 20 .ti +4 REAL PCLANHE, PSLAMCH .TP 20 .ti +4 EXTERNAL LSAME, ICEIL, INDXG2P, NUMROC, PCLANHE, PSLAMCH .TP 20 .ti +4 EXTERNAL BLACS_GRIDINFO, CHK1MAT, IGAMN2D, PCELGET, PCHETRD, PCHK2MAT, PCLASCL, PCSTEIN, PCUNMTR, PSLARED1D, PSSTEBZ, PXERBLA, SGEBR2D, SGEBS2D, SLASRT, SSCAL .TP 20 .ti +4 INTRINSIC ABS, CMPLX, ICHAR, MAX, MIN, MOD, REAL, SQRT .TP 20 .ti +4 IF( BLOCK_CYCLIC_2D*CSRC_*CTXT_*DLEN_*DTYPE_*LLD_*MB_*M_*NB_*N_* RSRC_.LT.0 )RETURN .TP 20 .ti +4 QUICKRETURN = ( N.EQ.0 ) .TP 20 .ti +4 CALL BLACS_GRIDINFO( DESCA( CTXT_ ), NPROW, NPCOL, MYROW, MYCOL ) .TP 20 .ti +4 INFO = 0 .TP 20 .ti +4 IF( NPROW.EQ.-1 ) THEN .TP 20 .ti +4 INFO = -( 800+CTXT_ ) .TP 20 .ti +4 ELSE IF( DESCA( CTXT_ ).NE.DESCZ( CTXT_ ) ) THEN .TP 20 .ti +4 INFO = -( 2100+CTXT_ ) .TP 20 .ti +4 ELSE .TP 20 .ti +4 CALL CHK1MAT( N, 4, N, 4, IA, JA, DESCA, 8, INFO ) .TP 20 .ti +4 CALL CHK1MAT( N, 4, N, 4, IZ, JZ, DESCZ, 21, INFO ) .TP 20 .ti +4 IF( INFO.EQ.0 ) THEN .TP 20 .ti +4 SAFMIN = PSLAMCH( DESCA( CTXT_ ), 'Safe minimum' ) .TP 20 .ti +4 EPS = PSLAMCH( DESCA( CTXT_ ), 'Precision' ) .TP 20 .ti +4 SMLNUM = SAFMIN / EPS .TP 20 .ti +4 BIGNUM = ONE / SMLNUM .TP 20 .ti +4 RMIN = SQRT( SMLNUM ) .TP 20 .ti +4 RMAX = MIN( SQRT( BIGNUM ), ONE / SQRT( SQRT( SAFMIN ) ) ) .TP 20 .ti +4 NPROCS = NPROW*NPCOL .TP 20 .ti +4 LOWER = LSAME( UPLO, 'L' ) .TP 20 .ti +4 WANTZ = LSAME( JOBZ, 'V' ) .TP 20 .ti +4 ALLEIG = LSAME( RANGE, 'A' ) .TP 20 .ti +4 VALEIG = LSAME( RANGE, 'V' ) .TP 20 .ti +4 INDEIG = LSAME( RANGE, 'I' ) .TP 20 .ti +4 INDTAU = 1 .TP 20 .ti +4 INDWORK = INDTAU + N .TP 20 .ti +4 LLWORK = LWORK - INDWORK + 1 .TP 20 .ti +4 INDE = 1 .TP 20 .ti +4 INDD = INDE + N .TP 20 .ti +4 INDD2 = INDD + N .TP 20 .ti +4 INDE2 = INDD2 + N .TP 20 .ti +4 INDRWORK = INDE2 + N .TP 20 .ti +4 LLRWORK = LRWORK - INDRWORK + 1 .TP 20 .ti +4 ISIZESTEIN = 3*N + NPROCS + 1 .TP 20 .ti +4 ISIZESTEBZ = MAX( 4*N, 14, NPROCS ) .TP 20 .ti +4 INDIBL = ( MAX( ISIZESTEIN, ISIZESTEBZ ) ) + 1 .TP 20 .ti +4 INDISP = INDIBL + N .TP 
20 .ti +4 LQUERY = .FALSE. .TP 20 .ti +4 IF( LWORK.EQ.-1 .OR. LIWORK.EQ.-1 .OR. LRWORK.EQ.-1 ) LQUERY = .TRUE. .TP 20 .ti +4 NNP = MAX( N, NPROCS+1, 4 ) .TP 20 .ti +4 LIWMIN = 6*NNP .TP 20 .ti +4 NPROCS = NPROW*NPCOL .TP 20 .ti +4 NB_A = DESCA( NB_ ) .TP 20 .ti +4 MB_A = DESCA( MB_ ) .TP 20 .ti +4 NB_Z = DESCZ( NB_ ) .TP 20 .ti +4 MB_Z = DESCZ( MB_ ) .TP 20 .ti +4 NB = NB_A .TP 20 .ti +4 NN = MAX( N, NB, 2 ) .TP 20 .ti +4 RSRC_A = DESCA( RSRC_ ) .TP 20 .ti +4 CSRC_A = DESCA( CSRC_ ) .TP 20 .ti +4 RSRC_Z = DESCZ( RSRC_ ) .TP 20 .ti +4 IROFFA = MOD( IA-1, MB_A ) .TP 20 .ti +4 ICOFFA = MOD( JA-1, NB_A ) .TP 20 .ti +4 IROFFZ = MOD( IZ-1, MB_A ) .TP 20 .ti +4 IAROW = INDXG2P( 1, NB_A, MYROW, RSRC_A, NPROW ) .TP 20 .ti +4 IACOL = INDXG2P( 1, MB_A, MYCOL, CSRC_A, NPCOL ) .TP 20 .ti +4 IZROW = INDXG2P( 1, NB_A, MYROW, RSRC_Z, NPROW ) .TP 20 .ti +4 NP0 = NUMROC( N+IROFFA, NB_Z, MYROW, IAROW, NPROW ) .TP 20 .ti +4 MQ0 = NUMROC( N+ICOFFA, NB_Z, MYCOL, IACOL, NPCOL ) .TP 20 .ti +4 IF( ( .NOT.WANTZ ) .OR. ( VALEIG .AND. ( .NOT.LQUERY ) ) ) THEN .TP 20 .ti +4 LWMIN = N + MAX( NB*( NP0+1 ), 3 ) .TP 20 .ti +4 LRWMIN = 5*NN + 4*N .TP 20 .ti +4 NEIG = 0 .TP 20 .ti +4 ELSE .TP 20 .ti +4 IF( ALLEIG .OR. VALEIG ) THEN .TP 20 .ti +4 NEIG = N .TP 20 .ti +4 ELSE IF( INDEIG ) THEN .TP 20 .ti +4 NEIG = IU - IL + 1 .TP 20 .ti +4 END IF .TP 20 .ti +4 MQ0 = NUMROC( MAX( NEIG, NB, 2 ), NB, MYCOL, IACOL, NPCOL ) .TP 20 .ti +4 NQ0 = NUMROC( NN, NB, 0, 0, NPCOL ) .TP 20 .ti +4 LWMIN = N + ( NP0+NQ0+NB )*NB .TP 20 .ti +4 LRWMIN = 4*N + MAX( 5*NN, NP0*MQ0 ) + ICEIL( NEIG, NPROW*NPCOL )*NN .TP 20 .ti +4 END IF .TP 20 .ti +4 END IF .TP 20 .ti +4 IF( INFO.EQ.0 ) THEN .TP 20 .ti +4 IF( MYROW.EQ.0 .AND. 
MYCOL.EQ.0 ) THEN .TP 20 .ti +4 RWORK( 1 ) = ABSTOL .TP 20 .ti +4 IF( VALEIG ) THEN .TP 20 .ti +4 RWORK( 2 ) = VL .TP 20 .ti +4 RWORK( 3 ) = VU .TP 20 .ti +4 ELSE .TP 20 .ti +4 RWORK( 2 ) = ZERO .TP 20 .ti +4 RWORK( 3 ) = ZERO .TP 20 .ti +4 END IF .TP 20 .ti +4 CALL SGEBS2D( DESCA( CTXT_ ), 'ALL', ' ', 3, 1, RWORK, 3 ) .TP 20 .ti +4 ELSE .TP 20 .ti +4 CALL SGEBR2D( DESCA( CTXT_ ), 'ALL', ' ', 3, 1, RWORK, 3, 0, 0 ) .TP 20 .ti +4 END IF .TP 20 .ti +4 IF( .NOT.( WANTZ .OR. LSAME( JOBZ, 'N' ) ) ) THEN .TP 20 .ti +4 INFO = -1 .TP 20 .ti +4 ELSE IF( .NOT.( ALLEIG .OR. VALEIG .OR. INDEIG ) ) THEN .TP 20 .ti +4 INFO = -2 .TP 20 .ti +4 ELSE IF( .NOT.( LOWER .OR. LSAME( UPLO, 'U' ) ) ) THEN .TP 20 .ti +4 INFO = -3 .TP 20 .ti +4 ELSE IF( VALEIG .AND. N.GT.0 .AND. VU.LE.VL ) THEN .TP 20 .ti +4 INFO = -10 .TP 20 .ti +4 ELSE IF( INDEIG .AND. ( IL.LT.1 .OR. IL.GT.MAX( 1, N ) ) ) THEN .TP 20 .ti +4 INFO = -11 .TP 20 .ti +4 ELSE IF( INDEIG .AND. ( IU.LT.MIN( N, IL ) .OR. IU.GT.N ) ) THEN .TP 20 .ti +4 INFO = -12 .TP 20 .ti +4 ELSE IF( LWORK.LT.LWMIN .AND. LWORK.NE.-1 ) THEN .TP 20 .ti +4 INFO = -23 .TP 20 .ti +4 ELSE IF( LRWORK.LT.LRWMIN .AND. LRWORK.NE.-1 ) THEN .TP 20 .ti +4 INFO = -25 .TP 20 .ti +4 ELSE IF( LIWORK.LT.LIWMIN .AND. LIWORK.NE.-1 ) THEN .TP 20 .ti +4 INFO = -27 .TP 20 .ti +4 ELSE IF( VALEIG .AND. ( ABS( RWORK( 2 )-VL ).GT.FIVE*EPS* ABS( VL ) ) ) THEN .TP 20 .ti +4 INFO = -9 .TP 20 .ti +4 ELSE IF( VALEIG .AND. 
( ABS( RWORK( 3 )-VU ).GT.FIVE*EPS* ABS( VU ) ) ) THEN .TP 20 .ti +4 INFO = -10 .TP 20 .ti +4 ELSE IF( ABS( RWORK( 1 )-ABSTOL ).GT.FIVE*EPS* ABS( ABSTOL ) ) THEN .TP 20 .ti +4 INFO = -13 .TP 20 .ti +4 ELSE IF( IROFFA.NE.IROFFZ ) THEN .TP 20 .ti +4 INFO = -19 .TP 20 .ti +4 ELSE IF( IROFFA.NE.0 ) THEN .TP 20 .ti +4 INFO = -6 .TP 20 .ti +4 ELSE IF( IAROW.NE.IZROW ) THEN .TP 20 .ti +4 INFO = -19 .TP 20 .ti +4 ELSE IF( DESCA( MB_ ).NE.DESCA( NB_ ) ) THEN .TP 20 .ti +4 INFO = -( 800+NB_ ) .TP 20 .ti +4 ELSE IF( DESCA( M_ ).NE.DESCZ( M_ ) ) THEN .TP 20 .ti +4 INFO = -( 2100+M_ ) .TP 20 .ti +4 ELSE IF( DESCA( N_ ).NE.DESCZ( N_ ) ) THEN .TP 20 .ti +4 INFO = -( 2100+N_ ) .TP 20 .ti +4 ELSE IF( DESCA( MB_ ).NE.DESCZ( MB_ ) ) THEN .TP 20 .ti +4 INFO = -( 2100+MB_ ) .TP 20 .ti +4 ELSE IF( DESCA( NB_ ).NE.DESCZ( NB_ ) ) THEN .TP 20 .ti +4 INFO = -( 2100+NB_ ) .TP 20 .ti +4 ELSE IF( DESCA( RSRC_ ).NE.DESCZ( RSRC_ ) ) THEN .TP 20 .ti +4 INFO = -( 2100+RSRC_ ) .TP 20 .ti +4 ELSE IF( DESCA( CSRC_ ).NE.DESCZ( CSRC_ ) ) THEN .TP 20 .ti +4 INFO = -( 2100+CSRC_ ) .TP 20 .ti +4 ELSE IF( DESCA( CTXT_ ).NE.DESCZ( CTXT_ ) ) THEN .TP 20 .ti +4 INFO = -( 2100+CTXT_ ) .TP 20 .ti +4 END IF .TP 20 .ti +4 END IF .TP 20 .ti +4 IF( WANTZ ) THEN .TP 20 .ti +4 IDUM1( 1 ) = ICHAR( 'V' ) .TP 20 .ti +4 ELSE .TP 20 .ti +4 IDUM1( 1 ) = ICHAR( 'N' ) .TP 20 .ti +4 END IF .TP 20 .ti +4 IDUM2( 1 ) = 1 .TP 20 .ti +4 IF( LOWER ) THEN .TP 20 .ti +4 IDUM1( 2 ) = ICHAR( 'L' ) .TP 20 .ti +4 ELSE .TP 20 .ti +4 IDUM1( 2 ) = ICHAR( 'U' ) .TP 20 .ti +4 END IF .TP 20 .ti +4 IDUM2( 2 ) = 2 .TP 20 .ti +4 IF( ALLEIG ) THEN .TP 20 .ti +4 IDUM1( 3 ) = ICHAR( 'A' ) .TP 20 .ti +4 ELSE IF( INDEIG ) THEN .TP 20 .ti +4 IDUM1( 3 ) = ICHAR( 'I' ) .TP 20 .ti +4 ELSE .TP 20 .ti +4 IDUM1( 3 ) = ICHAR( 'V' ) .TP 20 .ti +4 END IF .TP 20 .ti +4 IDUM2( 3 ) = 3 .TP 20 .ti +4 IF( LQUERY ) THEN .TP 20 .ti +4 IDUM1( 4 ) = -1 .TP 20 .ti +4 ELSE .TP 20 .ti +4 IDUM1( 4 ) = 1 .TP 20 .ti +4 END IF .TP 20 .ti +4 IDUM2( 4 ) = 4 .TP 20 .ti +4 CALL 
PCHK2MAT( N, 4, N, 4, IA, JA, DESCA, 8, N, 4, N, 4, IZ, JZ, DESCZ, 21, 4, IDUM1, IDUM2, INFO ) .TP 20 .ti +4 WORK( 1 ) = CMPLX( LWMIN ) .TP 20 .ti +4 RWORK( 1 ) = REAL( LRWMIN ) .TP 20 .ti +4 IWORK( 1 ) = LIWMIN .TP 20 .ti +4 END IF .TP 20 .ti +4 IF( INFO.NE.0 ) THEN .TP 20 .ti +4 CALL PXERBLA( DESCA( CTXT_ ), 'PCHEEVX', -INFO ) .TP 20 .ti +4 RETURN .TP 20 .ti +4 ELSE IF( LQUERY ) THEN .TP 20 .ti +4 RETURN .TP 20 .ti +4 END IF .TP 20 .ti +4 IF( QUICKRETURN ) THEN .TP 20 .ti +4 IF( WANTZ ) THEN .TP 20 .ti +4 NZ = 0 .TP 20 .ti +4 ICLUSTR( 1 ) = 0 .TP 20 .ti +4 END IF .TP 20 .ti +4 M = 0 .TP 20 .ti +4 WORK( 1 ) = CMPLX( LWMIN ) .TP 20 .ti +4 RWORK( 1 ) = REAL( LRWMIN ) .TP 20 .ti +4 IWORK( 1 ) = LIWMIN .TP 20 .ti +4 RETURN .TP 20 .ti +4 END IF .TP 20 .ti +4 ABSTLL = ABSTOL .TP 20 .ti +4 ISCALE = 0 .TP 20 .ti +4 IF( VALEIG ) THEN .TP 20 .ti +4 VLL = VL .TP 20 .ti +4 VUU = VU .TP 20 .ti +4 ELSE .TP 20 .ti +4 VLL = ZERO .TP 20 .ti +4 VUU = ZERO .TP 20 .ti +4 END IF .TP 20 .ti +4 ANRM = PCLANHE( '1', UPLO, N, A, IA, JA, DESCA, RWORK( INDRWORK ) ) .TP 20 .ti +4 IF( ANRM.GT.ZERO .AND. ANRM.LT.RMIN ) THEN .TP 20 .ti +4 ISCALE = 1 .TP 20 .ti +4 SIGMA = RMIN / ANRM .TP 20 .ti +4 ANRM = ANRM*SIGMA .TP 20 .ti +4 ELSE IF( ANRM.GT.RMAX ) THEN .TP 20 .ti +4 ISCALE = 1 .TP 20 .ti +4 SIGMA = RMAX / ANRM .TP 20 .ti +4 ANRM = ANRM*SIGMA .TP 20 .ti +4 END IF .TP 20 .ti +4 IF( ISCALE.EQ.1 ) THEN .TP 20 .ti +4 CALL PCLASCL( UPLO, ONE, SIGMA, N, N, A, IA, JA, DESCA, IINFO ) .TP 20 .ti +4 IF( ABSTOL.GT.0 ) ABSTLL = ABSTOL*SIGMA .TP 20 .ti +4 IF( VALEIG ) THEN .TP 20 .ti +4 VLL = VL*SIGMA .TP 20 .ti +4 VUU = VU*SIGMA .TP 20 .ti +4 IF( VUU.EQ.VLL ) THEN .TP 20 .ti +4 VUU = VUU + 2*MAX( ABS( VUU )*EPS, SAFMIN ) .TP 20 .ti +4 END IF .TP 20 .ti +4 END IF .TP 20 .ti +4 END IF .TP 20 .ti +4 LALLWORK = LLRWORK .TP 20 .ti +4 CALL PCHETRD( UPLO, N, A, IA, JA, DESCA, RWORK( INDD ), RWORK( INDE ), WORK( INDTAU ), WORK( INDWORK ), LLWORK, IINFO ) .TP 20 .ti +4 OFFSET = 0 .TP 20 .ti +4 IF( IA.EQ.1 .AND. 
JA.EQ.1 .AND. RSRC_A.EQ.0 .AND. CSRC_A.EQ.0 ) THEN .TP 20 .ti +4 CALL PSLARED1D( N, IA, JA, DESCA, RWORK( INDD ), RWORK( INDD2 ), RWORK( INDRWORK ), LLRWORK ) .TP 20 .ti +4 CALL PSLARED1D( N, IA, JA, DESCA, RWORK( INDE ), RWORK( INDE2 ), RWORK( INDRWORK ), LLRWORK ) .TP 20 .ti +4 IF( .NOT.LOWER ) OFFSET = 1 .TP 20 .ti +4 ELSE .TP 20 .ti +4 DO 10 I = 1, N .TP 20 .ti +4 CALL PCELGET( 'A', ' ', WORK( INDD2+I-1 ), A, I+IA-1, I+JA-1, DESCA ) .TP 20 .ti +4 RWORK( INDD2+I-1 ) = REAL( WORK( INDD2+I-1 ) ) .TP 20 .ti +4 10 CONTINUE .TP 20 .ti +4 IF( LSAME( UPLO, 'U' ) ) THEN .TP 20 .ti +4 DO 20 I = 1, N - 1 .TP 20 .ti +4 CALL PCELGET( 'A', ' ', WORK( INDE2+I-1 ), A, I+IA-1, I+JA, DESCA ) .TP 20 .ti +4 RWORK( INDE2+I-1 ) = REAL( WORK( INDE2+I-1 ) ) .TP 20 .ti +4 20 CONTINUE .TP 20 .ti +4 ELSE .TP 20 .ti +4 DO 30 I = 1, N - 1 .TP 20 .ti +4 CALL PCELGET( 'A', ' ', WORK( INDE2+I-1 ), A, I+IA, I+JA-1, DESCA ) .TP 20 .ti +4 RWORK( INDE2+I-1 ) = REAL( WORK( INDE2+I-1 ) ) .TP 20 .ti +4 30 CONTINUE .TP 20 .ti +4 END IF .TP 20 .ti +4 END IF .TP 20 .ti +4 IF( WANTZ ) THEN .TP 20 .ti +4 ORDER = 'b' .TP 20 .ti +4 ELSE .TP 20 .ti +4 ORDER = 'e' .TP 20 .ti +4 END IF .TP 20 .ti +4 CALL PSSTEBZ( DESCA( CTXT_ ), RANGE, ORDER, N, VLL, VUU, IL, IU, ABSTLL, RWORK( INDD2 ), RWORK( INDE2+OFFSET ), M, NSPLIT, W, IWORK( INDIBL ), IWORK( INDISP ), RWORK( INDRWORK ), LLRWORK, IWORK( 1 ), ISIZESTEBZ, IINFO ) .TP 20 .ti +4 IF( IINFO.NE.0 ) THEN .TP 20 .ti +4 INFO = INFO + IERREBZ .TP 20 .ti +4 DO 40 I = 1, M .TP 20 .ti +4 IWORK( INDIBL+I-1 ) = ABS( IWORK( INDIBL+I-1 ) ) .TP 20 .ti +4 40 CONTINUE .TP 20 .ti +4 END IF .TP 20 .ti +4 IF( WANTZ ) THEN .TP 20 .ti +4 IF( VALEIG ) THEN .TP 20 .ti +4 CALL IGAMN2D( DESCA( CTXT_ ), 'A', ' ', 1, 1, LALLWORK, 1, 1, 1, -1, -1, -1 ) .TP 20 .ti +4 MAXEIGS = DESCZ( N_ ) .TP 20 .ti +4 DO 50 NZ = MIN( MAXEIGS, M ), 0, -1 .TP 20 .ti +4 MQ0 = NUMROC( NZ, NB, 0, 0, NPCOL ) .TP 20 .ti +4 SIZESTEIN = ICEIL( NZ, NPROCS )*N + MAX( 5*N, NP0*MQ0 ) .TP 20 .ti +4 SIZEORMTR = MAX( ( 
NB*( NB-1 ) ) / 2, ( MQ0+NP0 )*NB ) + NB*NB .TP 20 .ti +4 SIZEHEEVX = MAX( SIZESTEIN, SIZEORMTR ) .TP 20 .ti +4 IF( SIZEHEEVX.LE.LALLWORK ) GO TO 60 .TP 20 .ti +4 50 CONTINUE .TP 20 .ti +4 60 CONTINUE .TP 20 .ti +4 ELSE .TP 20 .ti +4 NZ = M .TP 20 .ti +4 END IF .TP 20 .ti +4 NZ = MAX( NZ, 0 ) .TP 20 .ti +4 IF( NZ.NE.M ) THEN .TP 20 .ti +4 INFO = INFO + IERRSPC .TP 20 .ti +4 DO 70 I = 1, M .TP 20 .ti +4 IFAIL( I ) = 0 .TP 20 .ti +4 70 CONTINUE .TP 20 .ti +4 IF( NSPLIT.GT.1 ) THEN .TP 20 .ti +4 CALL SLASRT( 'I', M, W, IINFO ) .TP 20 .ti +4 IF( NZ.GT.0 ) THEN .TP 20 .ti +4 VUU = W( NZ ) - TEN*( EPS*ANRM+SAFMIN ) .TP 20 .ti +4 IF( VLL.GE.VUU ) THEN .TP 20 .ti +4 NZZ = 0 .TP 20 .ti +4 ELSE .TP 20 .ti +4 CALL PSSTEBZ( DESCA( CTXT_ ), RANGE, ORDER, N, VLL, VUU, IL, IU, ABSTLL, RWORK( INDD2 ), RWORK( INDE2+ OFFSET ), NZZ, NSPLIT, W, IWORK( INDIBL ), IWORK( INDISP ), RWORK( INDRWORK ), LLRWORK, IWORK( 1 ), ISIZESTEBZ, IINFO ) .TP 20 .ti +4 END IF .TP 20 .ti +4 IF( MOD( INFO / IERREBZ, 1 ).EQ.0 ) THEN .TP 20 .ti +4 IF( NZZ.GT.NZ .OR. 
IINFO.NE.0 ) THEN .TP 20 .ti +4 INFO = INFO + IERREBZ .TP 20 .ti +4 END IF .TP 20 .ti +4 END IF .TP 20 .ti +4 END IF .TP 20 .ti +4 NZ = MIN( NZ, NZZ ) .TP 20 .ti +4 END IF .TP 20 .ti +4 END IF .TP 20 .ti +4 CALL PCSTEIN( N, RWORK( INDD2 ), RWORK( INDE2+OFFSET ), NZ, W, IWORK( INDIBL ), IWORK( INDISP ), ORFAC, Z, IZ, JZ, DESCZ, RWORK( INDRWORK ), LALLWORK, IWORK( 1 ), ISIZESTEIN, IFAIL, ICLUSTR, GAP, IINFO ) .TP 20 .ti +4 IF( IINFO.GE.NZ+1 ) INFO = INFO + IERRCLS .TP 20 .ti +4 IF( MOD( IINFO, NZ+1 ).NE.0 ) INFO = INFO + IERREIN .TP 20 .ti +4 IF( NZ.GT.0 ) THEN .TP 20 .ti +4 CALL PCUNMTR( 'L', UPLO, 'N', N, NZ, A, IA, JA, DESCA, WORK( INDTAU ), Z, IZ, JZ, DESCZ, WORK( INDWORK ), LLWORK, IINFO ) .TP 20 .ti +4 END IF .TP 20 .ti +4 END IF .TP 20 .ti +4 IF( ISCALE.EQ.1 ) THEN .TP 20 .ti +4 CALL SSCAL( M, ONE / SIGMA, W, 1 ) .TP 20 .ti +4 END IF .TP 20 .ti +4 WORK( 1 ) = CMPLX( LWMIN ) .TP 20 .ti +4 RWORK( 1 ) = REAL( LRWMIN ) .TP 20 .ti +4 IWORK( 1 ) = LIWMIN .TP 20 .ti +4 RETURN .TP 20 .ti +4 END .SH PURPOSE scalapack-doc-1.5/man/manl/pchegs2.l0100644000056400000620000001415706335610616016765 0ustar pfrauenfstaff.TH PCHEGS2 l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PCHEGS2 - reduce a complex Hermitian-definite generalized eigenproblem to standard form .SH SYNOPSIS .TP 20 SUBROUTINE PCHEGS2( IBTYPE, UPLO, N, A, IA, JA, DESCA, B, IB, JB, DESCB, INFO ) .TP 20 .ti +4 CHARACTER UPLO .TP 20 .ti +4 INTEGER IA, IB, IBTYPE, INFO, JA, JB, N .TP 20 .ti +4 INTEGER DESCA( * ), DESCB( * ) .TP 20 .ti +4 COMPLEX A( * ), B( * ) .SH PURPOSE PCHEGS2 reduces a complex Hermitian-definite generalized eigenproblem to standard form. In the following sub( A ) denotes A( IA:IA+N-1, JA:JA+N-1 ) and sub( B ) denotes B( IB:IB+N-1, JB:JB+N-1 ). 
.br If IBTYPE = 1, the problem is sub( A )*x = lambda*sub( B )*x, and sub( A ) is overwritten by inv(U**H)*sub( A )*inv(U) or inv(L)*sub( A )*inv(L**H) .br If IBTYPE = 2 or 3, the problem is sub( A )*sub( B )*x = lambda*x or sub( B )*sub( A )*x = lambda*x, and sub( A ) is overwritten by U*sub( A )*U**H or L**H*sub( A )*L. .br sub( B ) must have been previously factorized as U**H*U or L*L**H by PCPOTRF. .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). 
Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 9 IBTYPE (global input) INTEGER = 1: compute inv(U**H)*sub( A )*inv(U) or inv(L)*sub( A )*inv(L**H); = 2 or 3: compute U*sub( A )*U**H or L**H*sub( A )*L. .TP 8 UPLO (global input) CHARACTER .br = 'U': Upper triangle of sub( A ) is stored and sub( B ) is factored as U**H*U; = 'L': Lower triangle of sub( A ) is stored and sub( B ) is factored as L*L**H. .TP 8 N (global input) INTEGER The order of the matrices sub( A ) and sub( B ). N >= 0. .TP 8 A (local input/local output) COMPLEX pointer into the local memory to an array of dimension (LLD_A, LOCc(JA+N-1)). On entry, this array contains the local pieces of the N-by-N Hermitian distributed matrix sub( A ). If UPLO = 'U', the leading N-by-N upper triangular part of sub( A ) contains the upper triangular part of the matrix, and its strictly lower triangular part is not referenced. If UPLO = 'L', the leading N-by-N lower triangular part of sub( A ) contains the lower triangular part of the matrix, and its strictly upper triangular part is not referenced. On exit, if INFO = 0, the transformed matrix, stored in the same format as sub( A ). 
.TP 8 IA (global input) INTEGER A's global row index, which points to the beginning of the submatrix which is to be operated on. .TP 8 JA (global input) INTEGER A's global column index, which points to the beginning of the submatrix which is to be operated on. .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 B (local input) COMPLEX pointer into the local memory to an array of dimension (LLD_B, LOCc(JB+N-1)). On entry, this array contains the local pieces of the triangular factor from the Cholesky factorization of sub( B ), as returned by PCPOTRF. .TP 8 IB (global input) INTEGER B's global row index, which points to the beginning of the submatrix which is to be operated on. .TP 8 JB (global input) INTEGER B's global column index, which points to the beginning of the submatrix which is to be operated on. .TP 8 DESCB (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix B. .TP 8 INFO (global output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. scalapack-doc-1.5/man/manl/pchegst.l0100644000056400000620000001457606335610616017074 0ustar pfrauenfstaff.TH PCHEGST l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PCHEGST - reduce a complex Hermitian-definite generalized eigenproblem to standard form .SH SYNOPSIS .TP 20 SUBROUTINE PCHEGST( IBTYPE, UPLO, N, A, IA, JA, DESCA, B, IB, JB, DESCB, SCALE, INFO ) .TP 20 .ti +4 CHARACTER UPLO .TP 20 .ti +4 INTEGER IA, IB, IBTYPE, INFO, JA, JB, N .TP 20 .ti +4 REAL SCALE .TP 20 .ti +4 INTEGER DESCA( * ), DESCB( * ) .TP 20 .ti +4 COMPLEX A( * ), B( * ) .SH PURPOSE PCHEGST reduces a complex Hermitian-definite generalized eigenproblem to standard form. 
In the following sub( A ) denotes A( IA:IA+N-1, JA:JA+N-1 ) and sub( B ) denotes B( IB:IB+N-1, JB:JB+N-1 ). .br If IBTYPE = 1, the problem is sub( A )*x = lambda*sub( B )*x, and sub( A ) is overwritten by inv(U**H)*sub( A )*inv(U) or inv(L)*sub( A )*inv(L**H) .br If IBTYPE = 2 or 3, the problem is sub( A )*sub( B )*x = lambda*x or sub( B )*sub( A )*x = lambda*x, and sub( A ) is overwritten by U*sub( A )*U**H or L**H*sub( A )*L. .br sub( B ) must have been previously factorized as U**H*U or L*L**H by PCPOTRF. .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. 
LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 9 IBTYPE (global input) INTEGER = 1: compute inv(U**H)*sub( A )*inv(U) or inv(L)*sub( A )*inv(L**H); = 2 or 3: compute U*sub( A )*U**H or L**H*sub( A )*L. .TP 8 UPLO (global input) CHARACTER .br = 'U': Upper triangle of sub( A ) is stored and sub( B ) is factored as U**H*U; = 'L': Lower triangle of sub( A ) is stored and sub( B ) is factored as L*L**H. .TP 8 N (global input) INTEGER The order of the matrices sub( A ) and sub( B ). N >= 0. .TP 8 A (local input/local output) COMPLEX pointer into the local memory to an array of dimension (LLD_A, LOCc(JA+N-1)). On entry, this array contains the local pieces of the N-by-N Hermitian distributed matrix sub( A ). If UPLO = 'U', the leading N-by-N upper triangular part of sub( A ) contains the upper triangular part of the matrix, and its strictly lower triangular part is not referenced. If UPLO = 'L', the leading N-by-N lower triangular part of sub( A ) contains the lower triangular part of the matrix, and its strictly upper triangular part is not referenced. On exit, if INFO = 0, the transformed matrix, stored in the same format as sub( A ). 
.TP 8 IA (global input) INTEGER A's global row index, which points to the beginning of the submatrix which is to be operated on. .TP 8 JA (global input) INTEGER A's global column index, which points to the beginning of the submatrix which is to be operated on. .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 B (local input) COMPLEX pointer into the local memory to an array of dimension (LLD_B, LOCc(JB+N-1)). On entry, this array contains the local pieces of the triangular factor from the Cholesky factorization of sub( B ), as returned by PCPOTRF. .TP 8 IB (global input) INTEGER B's global row index, which points to the beginning of the submatrix which is to be operated on. .TP 8 JB (global input) INTEGER B's global column index, which points to the beginning of the submatrix which is to be operated on. .TP 8 DESCB (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix B. .TP 8 SCALE (global output) REAL Amount by which the eigenvalues should be scaled to compensate for the scaling performed in this routine. At present, SCALE is always returned as 1.0, it is returned here to allow for future enhancement. .TP 8 INFO (global output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. 
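The INFO argument above encodes which argument failed validation: INFO = -(i*100+j) flags entry j of array argument i, while INFO = -i flags scalar argument i. A small Python helper (illustrative only, not part of ScaLAPACK; the name `decode_info` is invented here) makes the convention concrete:

```python
def decode_info(info):
    """Decode the negative-INFO convention used by these routines.

    INFO = -(i*100 + j) flags entry j of array argument i;
    INFO = -i flags scalar argument i; INFO >= 0 is not an
    argument error.  Returns (argument index, entry index or None),
    or None on success / positive codes.
    """
    if info >= 0:
        return None  # success (0) or a positive, routine-specific code
    code = -info
    if code >= 100:
        return (code // 100, code % 100)  # (array argument, entry)
    return (code, None)                   # scalar argument

# Example: PCHEGVX sets INFO = -(900+CTXT_) with CTXT_ = 2 when the
# process grid is invalid, i.e. INFO = -902 points at entry 2 of
# argument 9 (DESCA).
```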
scalapack-doc-1.5/man/manl/pchegvx.l0100644000056400000620000002446406335610616017100 0ustar pfrauenfstaff.TH PCHEGVX l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME .SH SYNOPSIS .TP 20 SUBROUTINE PCHEGVX( IBTYPE, JOBZ, RANGE, UPLO, N, A, IA, JA, DESCA, B, IB, JB, DESCB, VL, VU, IL, IU, ABSTOL, M, NZ, W, ORFAC, Z, IZ, JZ, DESCZ, WORK, LWORK, RWORK, LRWORK, IWORK, LIWORK, IFAIL, ICLUSTR, GAP, INFO ) .TP 20 .ti +4 CHARACTER JOBZ, RANGE, UPLO .TP 20 .ti +4 INTEGER IA, IB, IBTYPE, IL, INFO, IU, IZ, JA, JB, JZ, LIWORK, LRWORK, LWORK, M, N, NZ .TP 20 .ti +4 REAL ABSTOL, ORFAC, VL, VU .TP 20 .ti +4 INTEGER DESCA( * ), DESCB( * ), DESCZ( * ), ICLUSTR( * ), IFAIL( * ), IWORK( * ) .TP 20 .ti +4 REAL GAP( * ), RWORK( * ), W( * ) .TP 20 .ti +4 COMPLEX A( * ), B( * ), WORK( * ), Z( * ) .TP 20 .ti +4 INTEGER BLOCK_CYCLIC_2D, DLEN_, DTYPE_, CTXT_, M_, N_, MB_, NB_, RSRC_, CSRC_, LLD_ .TP 20 .ti +4 PARAMETER ( BLOCK_CYCLIC_2D = 1, DLEN_ = 9, DTYPE_ = 1, CTXT_ = 2, M_ = 3, N_ = 4, MB_ = 5, NB_ = 6, RSRC_ = 7, CSRC_ = 8, LLD_ = 9 ) .TP 20 .ti +4 COMPLEX ONE .TP 20 .ti +4 PARAMETER ( ONE = 1.0E+0 ) .TP 20 .ti +4 REAL FIVE, ZERO .TP 20 .ti +4 PARAMETER ( FIVE = 5.0E+0, ZERO = 0.0E+0 ) .TP 20 .ti +4 INTEGER IERRNPD .TP 20 .ti +4 PARAMETER ( IERRNPD = 16 ) .TP 20 .ti +4 LOGICAL ALLEIG, INDEIG, LQUERY, UPPER, VALEIG, WANTZ .TP 20 .ti +4 CHARACTER TRANS .TP 20 .ti +4 INTEGER IACOL, IAROW, IBCOL, IBROW, ICOFFA, ICOFFB, ICTXT, IROFFA, IROFFB, LIWMIN, LRWMIN, LWMIN, MQ0, MYCOL, MYROW, NB, NEIG, NN, NP0, NPCOL, NPROW .TP 20 .ti +4 REAL EPS, SCALE .TP 20 .ti +4 INTEGER IDUM1( 5 ), IDUM2( 5 ) .TP 20 .ti +4 LOGICAL LSAME .TP 20 .ti +4 INTEGER ICEIL, INDXG2P, NUMROC .TP 20 .ti +4 REAL PSLAMCH .TP 20 .ti +4 EXTERNAL LSAME, ICEIL, INDXG2P, NUMROC, PSLAMCH .TP 20 .ti +4 EXTERNAL BLACS_GRIDINFO, CHK1MAT, PCHEEVX, PCHEGST, PCHK1MAT, PCHK2MAT, PCPOTRF, PCTRMM, PCTRSM, PXERBLA, SGEBR2D, SGEBS2D, SSCAL .TP 20 .ti +4 INTRINSIC ABS, CMPLX, ICHAR, MAX, MIN, MOD, REAL .TP 20 .ti 
+4 IF( BLOCK_CYCLIC_2D*CSRC_*CTXT_*DLEN_*DTYPE_*LLD_*MB_*M_*NB_*N_* RSRC_.LT.0 )RETURN .TP 20 .ti +4 ICTXT = DESCA( CTXT_ ) .TP 20 .ti +4 CALL BLACS_GRIDINFO( ICTXT, NPROW, NPCOL, MYROW, MYCOL ) .TP 20 .ti +4 INFO = 0 .TP 20 .ti +4 IF( NPROW.EQ.-1 ) THEN .TP 20 .ti +4 INFO = -( 900+CTXT_ ) .TP 20 .ti +4 ELSE IF( DESCA( CTXT_ ).NE.DESCB( CTXT_ ) ) THEN .TP 20 .ti +4 INFO = -( 1300+CTXT_ ) .TP 20 .ti +4 ELSE IF( DESCA( CTXT_ ).NE.DESCZ( CTXT_ ) ) THEN .TP 20 .ti +4 INFO = -( 2600+CTXT_ ) .TP 20 .ti +4 ELSE .TP 20 .ti +4 EPS = PSLAMCH( DESCA( CTXT_ ), 'Precision' ) .TP 20 .ti +4 WANTZ = LSAME( JOBZ, 'V' ) .TP 20 .ti +4 UPPER = LSAME( UPLO, 'U' ) .TP 20 .ti +4 ALLEIG = LSAME( RANGE, 'A' ) .TP 20 .ti +4 VALEIG = LSAME( RANGE, 'V' ) .TP 20 .ti +4 INDEIG = LSAME( RANGE, 'I' ) .TP 20 .ti +4 CALL CHK1MAT( N, 4, N, 4, IA, JA, DESCA, 9, INFO ) .TP 20 .ti +4 CALL CHK1MAT( N, 4, N, 4, IB, JB, DESCB, 13, INFO ) .TP 20 .ti +4 CALL CHK1MAT( N, 4, N, 4, IZ, JZ, DESCZ, 26, INFO ) .TP 20 .ti +4 IF( INFO.EQ.0 ) THEN .TP 20 .ti +4 IF( MYROW.EQ.0 .AND. 
MYCOL.EQ.0 ) THEN .TP 20 .ti +4 RWORK( 1 ) = ABSTOL .TP 20 .ti +4 IF( VALEIG ) THEN .TP 20 .ti +4 RWORK( 2 ) = VL .TP 20 .ti +4 RWORK( 3 ) = VU .TP 20 .ti +4 ELSE .TP 20 .ti +4 RWORK( 2 ) = ZERO .TP 20 .ti +4 RWORK( 3 ) = ZERO .TP 20 .ti +4 END IF .TP 20 .ti +4 CALL SGEBS2D( DESCA( CTXT_ ), 'ALL', ' ', 3, 1, RWORK, 3 ) .TP 20 .ti +4 ELSE .TP 20 .ti +4 CALL SGEBR2D( DESCA( CTXT_ ), 'ALL', ' ', 3, 1, RWORK, 3, 0, 0 ) .TP 20 .ti +4 END IF .TP 20 .ti +4 IAROW = INDXG2P( IA, DESCA( MB_ ), MYROW, DESCA( RSRC_ ), NPROW ) .TP 20 .ti +4 IBROW = INDXG2P( IB, DESCB( MB_ ), MYROW, DESCB( RSRC_ ), NPROW ) .TP 20 .ti +4 IACOL = INDXG2P( JA, DESCA( NB_ ), MYCOL, DESCA( CSRC_ ), NPCOL ) .TP 20 .ti +4 IBCOL = INDXG2P( JB, DESCB( NB_ ), MYCOL, DESCB( CSRC_ ), NPCOL ) .TP 20 .ti +4 IROFFA = MOD( IA-1, DESCA( MB_ ) ) .TP 20 .ti +4 ICOFFA = MOD( JA-1, DESCA( NB_ ) ) .TP 20 .ti +4 IROFFB = MOD( IB-1, DESCB( MB_ ) ) .TP 20 .ti +4 ICOFFB = MOD( JB-1, DESCB( NB_ ) ) .TP 20 .ti +4 LQUERY = .FALSE. .TP 20 .ti +4 IF( LWORK.EQ.-1 .OR. LIWORK.EQ.-1 .OR. LRWORK.EQ.-1 ) LQUERY = .TRUE. .TP 20 .ti +4 LIWMIN = 6*MAX( N, ( NPROW*NPCOL )+1, 4 ) .TP 20 .ti +4 NB = DESCA( MB_ ) .TP 20 .ti +4 NN = MAX( N, NB, 2 ) .TP 20 .ti +4 NP0 = NUMROC( NN, NB, 0, 0, NPROW ) .TP 20 .ti +4 IF( ( .NOT.WANTZ ) .OR. ( VALEIG .AND. ( .NOT.LQUERY ) ) ) THEN .TP 20 .ti +4 LWMIN = N + MAX( NB*( NP0+1 ), 3 ) .TP 20 .ti +4 LRWMIN = 5*NN + 4*N .TP 20 .ti +4 NEIG = 0 .TP 20 .ti +4 ELSE .TP 20 .ti +4 IF( ALLEIG .OR. VALEIG ) THEN .TP 20 .ti +4 NEIG = N .TP 20 .ti +4 ELSE IF( INDEIG ) THEN .TP 20 .ti +4 NEIG = IU - IL + 1 .TP 20 .ti +4 END IF .TP 20 .ti +4 MQ0 = NUMROC( MAX( NEIG, NB, 2 ), NB, 0, 0, NPCOL ) .TP 20 .ti +4 LWMIN = N + ( NP0+MQ0+NB )*NB .TP 20 .ti +4 LRWMIN = 4*N + MAX( 5*NN, NP0*MQ0 ) + ICEIL( NEIG, NPROW*NPCOL )*NN .TP 20 .ti +4 END IF .TP 20 .ti +4 IF( IBTYPE.LT.1 .OR. IBTYPE.GT.3 ) THEN .TP 20 .ti +4 INFO = -1 .TP 20 .ti +4 ELSE IF( .NOT.( WANTZ .OR. 
LSAME( JOBZ, 'N' ) ) ) THEN .TP 20 .ti +4 INFO = -2 .TP 20 .ti +4 ELSE IF( .NOT.( ALLEIG .OR. VALEIG .OR. INDEIG ) ) THEN .TP 20 .ti +4 INFO = -3 .TP 20 .ti +4 ELSE IF( .NOT.UPPER .AND. .NOT.LSAME( UPLO, 'L' ) ) THEN .TP 20 .ti +4 INFO = -4 .TP 20 .ti +4 ELSE IF( N.LT.0 ) THEN .TP 20 .ti +4 INFO = -5 .TP 20 .ti +4 ELSE IF( IROFFA.NE.0 ) THEN .TP 20 .ti +4 INFO = -7 .TP 20 .ti +4 ELSE IF( ICOFFA.NE.0 ) THEN .TP 20 .ti +4 INFO = -8 .TP 20 .ti +4 ELSE IF( DESCA( MB_ ).NE.DESCA( NB_ ) ) THEN .TP 20 .ti +4 INFO = -( 900+NB_ ) .TP 20 .ti +4 ELSE IF( DESCA( M_ ).NE.DESCB( M_ ) ) THEN .TP 20 .ti +4 INFO = -( 1300+M_ ) .TP 20 .ti +4 ELSE IF( DESCA( N_ ).NE.DESCB( N_ ) ) THEN .TP 20 .ti +4 INFO = -( 1300+N_ ) .TP 20 .ti +4 ELSE IF( DESCA( MB_ ).NE.DESCB( MB_ ) ) THEN .TP 20 .ti +4 INFO = -( 1300+MB_ ) .TP 20 .ti +4 ELSE IF( DESCA( NB_ ).NE.DESCB( NB_ ) ) THEN .TP 20 .ti +4 INFO = -( 1300+NB_ ) .TP 20 .ti +4 ELSE IF( DESCA( RSRC_ ).NE.DESCB( RSRC_ ) ) THEN .TP 20 .ti +4 INFO = -( 1300+RSRC_ ) .TP 20 .ti +4 ELSE IF( DESCA( CSRC_ ).NE.DESCB( CSRC_ ) ) THEN .TP 20 .ti +4 INFO = -( 1300+CSRC_ ) .TP 20 .ti +4 ELSE IF( DESCA( CTXT_ ).NE.DESCB( CTXT_ ) ) THEN .TP 20 .ti +4 INFO = -( 1300+CTXT_ ) .TP 20 .ti +4 ELSE IF( DESCA( M_ ).NE.DESCZ( M_ ) ) THEN .TP 20 .ti +4 INFO = -( 2200+M_ ) .TP 20 .ti +4 ELSE IF( DESCA( N_ ).NE.DESCZ( N_ ) ) THEN .TP 20 .ti +4 INFO = -( 2200+N_ ) .TP 20 .ti +4 ELSE IF( DESCA( MB_ ).NE.DESCZ( MB_ ) ) THEN .TP 20 .ti +4 INFO = -( 2200+MB_ ) .TP 20 .ti +4 ELSE IF( DESCA( NB_ ).NE.DESCZ( NB_ ) ) THEN .TP 20 .ti +4 INFO = -( 2200+NB_ ) .TP 20 .ti +4 ELSE IF( DESCA( RSRC_ ).NE.DESCZ( RSRC_ ) ) THEN .TP 20 .ti +4 INFO = -( 2200+RSRC_ ) .TP 20 .ti +4 ELSE IF( DESCA( CSRC_ ).NE.DESCZ( CSRC_ ) ) THEN .TP 20 .ti +4 INFO = -( 2200+CSRC_ ) .TP 20 .ti +4 ELSE IF( DESCA( CTXT_ ).NE.DESCZ( CTXT_ ) ) THEN .TP 20 .ti +4 INFO = -( 2200+CTXT_ ) .TP 20 .ti +4 ELSE IF( IROFFB.NE.0 .OR. IBROW.NE.IAROW ) THEN .TP 20 .ti +4 INFO = -11 .TP 20 .ti +4 ELSE IF( ICOFFB.NE.0 .OR. 
IBCOL.NE.IACOL ) THEN .TP 20 .ti +4 INFO = -12 .TP 20 .ti +4 ELSE IF( VALEIG .AND. N.GT.0 .AND. VU.LE.VL ) THEN .TP 20 .ti +4 INFO = -15 .TP 20 .ti +4 ELSE IF( INDEIG .AND. ( IL.LT.1 .OR. IL.GT.MAX( 1, N ) ) ) THEN .TP 20 .ti +4 INFO = -16 .TP 20 .ti +4 ELSE IF( INDEIG .AND. ( IU.LT.MIN( N, IL ) .OR. IU.GT.N ) ) THEN .TP 20 .ti +4 INFO = -17 .TP 20 .ti +4 ELSE IF( VALEIG .AND. ( ABS( RWORK( 2 )-VL ).GT.FIVE*EPS* ABS( VL ) ) ) THEN .TP 20 .ti +4 INFO = -14 .TP 20 .ti +4 ELSE IF( VALEIG .AND. ( ABS( RWORK( 3 )-VU ).GT.FIVE*EPS* ABS( VU ) ) ) THEN .TP 20 .ti +4 INFO = -15 .TP 20 .ti +4 ELSE IF( ABS( RWORK( 1 )-ABSTOL ).GT.FIVE*EPS* ABS( ABSTOL ) ) THEN .TP 20 .ti +4 INFO = -18 .TP 20 .ti +4 ELSE IF( LWORK.LT.LWMIN .AND. LWORK.NE.-1 ) THEN .TP 20 .ti +4 INFO = -28 .TP 20 .ti +4 ELSE IF( LRWORK.LT.LRWMIN .AND. LRWORK.NE.-1 ) THEN .TP 20 .ti +4 INFO = -30 .TP 20 .ti +4 ELSE IF( LIWORK.LT.LIWMIN .AND. LIWORK.NE.-1 ) THEN .TP 20 .ti +4 INFO = -32 .TP 20 .ti +4 END IF .TP 20 .ti +4 END IF .TP 20 .ti +4 IDUM1( 1 ) = IBTYPE .TP 20 .ti +4 IDUM2( 1 ) = 1 .TP 20 .ti +4 IF( WANTZ ) THEN .TP 20 .ti +4 IDUM1( 2 ) = ICHAR( 'V' ) .TP 20 .ti +4 ELSE .TP 20 .ti +4 IDUM1( 2 ) = ICHAR( 'N' ) .TP 20 .ti +4 END IF .TP 20 .ti +4 IDUM2( 2 ) = 2 .TP 20 .ti +4 IF( UPPER ) THEN .TP 20 .ti +4 IDUM1( 3 ) = ICHAR( 'U' ) .TP 20 .ti +4 ELSE .TP 20 .ti +4 IDUM1( 3 ) = ICHAR( 'L' ) .TP 20 .ti +4 END IF .TP 20 .ti +4 IDUM2( 3 ) = 3 .TP 20 .ti +4 IF( ALLEIG ) THEN .TP 20 .ti +4 IDUM1( 4 ) = ICHAR( 'A' ) .TP 20 .ti +4 ELSE IF( INDEIG ) THEN .TP 20 .ti +4 IDUM1( 4 ) = ICHAR( 'I' ) .TP 20 .ti +4 ELSE .TP 20 .ti +4 IDUM1( 4 ) = ICHAR( 'V' ) .TP 20 .ti +4 END IF .TP 20 .ti +4 IDUM2( 4 ) = 4 .TP 20 .ti +4 IF( LQUERY ) THEN .TP 20 .ti +4 IDUM1( 5 ) = -1 .TP 20 .ti +4 ELSE .TP 20 .ti +4 IDUM1( 5 ) = 1 .TP 20 .ti +4 END IF .TP 20 .ti +4 IDUM2( 5 ) = 5 .TP 20 .ti +4 CALL PCHK2MAT( N, 4, N, 4, IA, JA, DESCA, 9, N, 4, N, 4, IB, JB, DESCB, 13, 5, IDUM1, IDUM2, INFO ) .TP 20 .ti +4 CALL PCHK1MAT( N, 4, N, 4, IZ, JZ, 
DESCZ, 26, 0, IDUM1, IDUM2, INFO ) .TP 20 .ti +4 END IF .TP 20 .ti +4 WORK( 1 ) = CMPLX( REAL( LWMIN ) ) .TP 20 .ti +4 RWORK( 1 ) = REAL( LRWMIN ) .TP 20 .ti +4 IWORK( 1 ) = LIWMIN .TP 20 .ti +4 IF( INFO.NE.0 ) THEN .TP 20 .ti +4 CALL PXERBLA( ICTXT, 'PCHEGVX ', -INFO ) .TP 20 .ti +4 RETURN .TP 20 .ti +4 ELSE IF( LQUERY ) THEN .TP 20 .ti +4 RETURN .TP 20 .ti +4 END IF .TP 20 .ti +4 CALL PCPOTRF( UPLO, N, B, IB, JB, DESCB, INFO ) .TP 20 .ti +4 IF( INFO.NE.0 ) THEN .TP 20 .ti +4 IFAIL( 1 ) = INFO .TP 20 .ti +4 INFO = IERRNPD .TP 20 .ti +4 RETURN .TP 20 .ti +4 END IF .TP 20 .ti +4 CALL PCHEGST( IBTYPE, UPLO, N, A, IA, JA, DESCA, B, IB, JB, DESCB, SCALE, INFO ) .TP 20 .ti +4 CALL PCHEEVX( JOBZ, RANGE, UPLO, N, A, IA, JA, DESCA, VL, VU, IL, IU, ABSTOL, M, NZ, W, ORFAC, Z, IZ, JZ, DESCZ, WORK, LWORK, RWORK, LRWORK, IWORK, LIWORK, IFAIL, ICLUSTR, GAP, INFO ) .TP 20 .ti +4 IF( WANTZ ) THEN .TP 20 .ti +4 NEIG = M .TP 20 .ti +4 IF( IBTYPE.EQ.1 .OR. IBTYPE.EQ.2 ) THEN .TP 20 .ti +4 IF( UPPER ) THEN .TP 20 .ti +4 TRANS = 'N' .TP 20 .ti +4 ELSE .TP 20 .ti +4 TRANS = 'C' .TP 20 .ti +4 END IF .TP 20 .ti +4 CALL PCTRSM( 'Left', UPLO, TRANS, 'Non-unit', N, NEIG, ONE, B, IB, JB, DESCB, Z, IZ, JZ, DESCZ ) .TP 20 .ti +4 ELSE IF( IBTYPE.EQ.3 ) THEN .TP 20 .ti +4 IF( UPPER ) THEN .TP 20 .ti +4 TRANS = 'C' .TP 20 .ti +4 ELSE .TP 20 .ti +4 TRANS = 'N' .TP 20 .ti +4 END IF .TP 20 .ti +4 CALL PCTRMM( 'Left', UPLO, TRANS, 'Non-unit', N, NEIG, ONE, B, IB, JB, DESCB, Z, IZ, JZ, DESCZ ) .TP 20 .ti +4 END IF .TP 20 .ti +4 END IF .TP 20 .ti +4 IF( SCALE.NE.ONE ) THEN .TP 20 .ti +4 CALL SSCAL( N, SCALE, W, 1 ) .TP 20 .ti +4 END IF .TP 20 .ti +4 RETURN .TP 20 .ti +4 END .SH PURPOSE scalapack-doc-1.5/man/manl/pchetd2.l0100644000056400000620000002023706335610616016757 0ustar pfrauenfstaff.TH PCHETD2 l "12 May 1997" "LAPACK version 1.5" "LAPACK auxiliary routine (version 1.5)" .SH NAME PCHETD2 - reduce a complex Hermitian matrix sub( A ) to Hermitian tridiagonal form T by an unitary similarity 
transformation .SH SYNOPSIS .TP 20 SUBROUTINE PCHETD2( UPLO, N, A, IA, JA, DESCA, D, E, TAU, WORK, LWORK, INFO ) .TP 20 .ti +4 CHARACTER UPLO .TP 20 .ti +4 INTEGER IA, INFO, JA, LWORK, N .TP 20 .ti +4 INTEGER DESCA( * ) .TP 20 .ti +4 REAL D( * ), E( * ) .TP 20 .ti +4 COMPLEX A( * ), TAU( * ), WORK( * ) .SH PURPOSE PCHETD2 reduces a complex Hermitian matrix sub( A ) to Hermitian tridiagonal form T by an unitary similarity transformation: Q' * sub( A ) * Q = T, where sub( A ) = A(IA:IA+N-1,JA:JA+N-1). Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. 
LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 UPLO (global input) CHARACTER Specifies whether the upper or lower triangular part of the Hermitian matrix sub( A ) is stored: .br = 'U': Upper triangular .br = 'L': Lower triangular .TP 8 N (global input) INTEGER The number of rows and columns to be operated on, i.e. the order of the distributed submatrix sub( A ). N >= 0. .TP 8 A (local input/local output) COMPLEX pointer into the local memory to an array of dimension (LLD_A,LOCc(JA+N-1)). On entry, this array contains the local pieces of the Hermitian distributed matrix sub( A ). If UPLO = 'U', the leading N-by-N upper triangular part of sub( A ) contains the upper triangular part of the matrix, and its strictly lower triangular part is not referenced. If UPLO = 'L', the leading N-by-N lower triangular part of sub( A ) contains the lower triangular part of the matrix, and its strictly upper triangular part is not referenced. 
On exit, if UPLO = 'U', the diagonal and first superdiagonal of sub( A ) are over- written by the corresponding elements of the tridiagonal matrix T, and the elements above the first superdiagonal, with the array TAU, represent the unitary matrix Q as a product of elementary reflectors; if UPLO = 'L', the diagonal and first subdiagonal of sub( A ) are overwritten by the corresponding elements of the tridiagonal matrix T, and the elements below the first subdiagonal, with the array TAU, represent the unitary matrix Q as a product of elementary reflectors. See Further Details. IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 D (local output) REAL array, dimension LOCc(JA+N-1) The diagonal elements of the tridiagonal matrix T: D(i) = A(i,i). D is tied to the distributed matrix A. .TP 8 E (local output) REAL array, dimension LOCc(JA+N-1) if UPLO = 'U', LOCc(JA+N-2) otherwise. The off-diagonal elements of the tridiagonal matrix T: E(i) = A(i,i+1) if UPLO = 'U', E(i) = A(i+1,i) if UPLO = 'L'. E is tied to the distributed matrix A. .TP 8 TAU (local output) COMPLEX, array, dimension LOCc(JA+N-1). This array contains the scalar factors TAU of the elementary reflectors. TAU is tied to the distributed matrix A. .TP 8 WORK (local workspace/local output) COMPLEX array, dimension (LWORK) On exit, WORK( 1 ) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. LWORK is local input and must be at least LWORK >= 3*N. If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. 
Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 INFO (local output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-th entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. .SH FURTHER DETAILS If UPLO = 'U', the matrix Q is represented as a product of elementary reflectors .br Q = H(n-1) . . . H(2) H(1). .br Each H(i) has the form .br H(i) = I - tau * v * v' .br where tau is a complex scalar, and v is a complex vector with v(i+1:n) = 0 and v(i) = 1; v(1:i-1) is stored on exit in .br A(ia:ia+i-2,ja+i), and tau in TAU(ja+i-1). .br If UPLO = 'L', the matrix Q is represented as a product of elementary reflectors .br Q = H(1) H(2) . . . H(n-1). .br Each H(i) has the form .br H(i) = I - tau * v * v' .br where tau is a complex scalar, and v is a complex vector with v(1:i) = 0 and v(i+1) = 1; v(i+2:n) is stored on exit in .br A(ia+i+1:ia+n-1,ja+i-1), and tau in TAU(ja+i-1). .br The contents of sub( A ) on exit are illustrated by the following examples with n = 5: .br if UPLO = 'U': if UPLO = 'L': .br ( d e v2 v3 v4 ) ( d ) ( d e v3 v4 ) ( e d ) ( d e v4 ) ( v1 e d ) ( d e ) ( v1 v2 e d ) ( d ) ( v1 v2 v3 e d ) where d and e denote diagonal and off-diagonal elements of T, and vi denotes an element of the vector defining H(i). .br Alignment requirements .br ====================== .br The distributed submatrix sub( A ) must satisfy some alignment properties, namely the following expression should be true: .br ( MB_A.EQ.NB_A .AND. IROFFA.EQ.ICOFFA ) with .br IROFFA = MOD( IA-1, MB_A ) and ICOFFA = MOD( JA-1, NB_A ). 
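The elementary reflectors H(i) = I - tau * v * v' described in FURTHER DETAILS can be illustrated with a small serial sketch (plain Python, real arithmetic for brevity; `make_reflector` and `apply_reflector` are hypothetical helpers following the LAPACK convention v(1) = 1, not library routines):

```python
import math

def make_reflector(x):
    """Build (tau, v, beta) with v[0] = 1 so that (I - tau*v*v') x = (beta, 0, ..., 0).

    Real-arithmetic sketch of the convention used above: tau is what TAU
    stores, and v(2:n) is what the overwritten part of A stores.
    """
    alpha = x[0]
    norm_x = math.sqrt(sum(xi * xi for xi in x))
    beta = -math.copysign(norm_x, alpha)    # beta takes the opposite sign of alpha
    v = [xi / (alpha - beta) for xi in x]   # scale the tail so that v[0] == 1
    v[0] = 1.0
    tau = (beta - alpha) / beta
    return tau, v, beta

def apply_reflector(tau, v, x):
    """Compute (I - tau*v*v') x without ever forming the n-by-n matrix."""
    s = tau * sum(vi * xi for vi, xi in zip(v, x))
    return [xi - s * vi for vi, xi in zip(v, x)]
```

Applying the reflector built from x to x itself annihilates every entry below the first, which is exactly how the tridiagonal (and bidiagonal) reductions introduce zeros column by column.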
scalapack-doc-1.5/man/manl/pchetrd.l
.TH PCHETRD l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PCHETRD - reduce a complex Hermitian matrix sub( A ) to Hermitian tridiagonal form T by a unitary similarity transformation .SH SYNOPSIS .TP 20 SUBROUTINE PCHETRD( UPLO, N, A, IA, JA, DESCA, D, E, TAU, WORK, LWORK, INFO ) .TP 20 .ti +4 CHARACTER UPLO .TP 20 .ti +4 INTEGER IA, INFO, JA, LWORK, N .TP 20 .ti +4 INTEGER DESCA( * ) .TP 20 .ti +4 REAL D( * ), E( * ) .TP 20 .ti +4 COMPLEX A( * ), TAU( * ), WORK( * ) .SH PURPOSE PCHETRD reduces a complex Hermitian matrix sub( A ) to Hermitian tridiagonal form T by a unitary similarity transformation: Q' * sub( A ) * Q = T, where sub( A ) = A(IA:IA+N-1,JA:JA+N-1). Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclically distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ ) The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distributed over. The context itself is global, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. 
.br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 UPLO (global input) CHARACTER Specifies whether the upper or lower triangular part of the Hermitian matrix sub( A ) is stored: .br = 'U': Upper triangular .br = 'L': Lower triangular .TP 8 N (global input) INTEGER The number of rows and columns to be operated on, i.e. the order of the distributed submatrix sub( A ). N >= 0. .TP 8 A (local input/local output) COMPLEX pointer into the local memory to an array of dimension (LLD_A,LOCc(JA+N-1)). On entry, this array contains the local pieces of the Hermitian distributed matrix sub( A ). If UPLO = 'U', the leading N-by-N upper triangular part of sub( A ) contains the upper triangular part of the matrix, and its strictly lower triangular part is not referenced. 
If UPLO = 'L', the leading N-by-N lower triangular part of sub( A ) contains the lower triangular part of the matrix, and its strictly upper triangular part is not referenced. On exit, if UPLO = 'U', the diagonal and first superdiagonal of sub( A ) are overwritten by the corresponding elements of the tridiagonal matrix T, and the elements above the first superdiagonal, with the array TAU, represent the unitary matrix Q as a product of elementary reflectors; if UPLO = 'L', the diagonal and first subdiagonal of sub( A ) are overwritten by the corresponding elements of the tridiagonal matrix T, and the elements below the first subdiagonal, with the array TAU, represent the unitary matrix Q as a product of elementary reflectors. See Further Details. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 D (local output) REAL array, dimension LOCc(JA+N-1) The diagonal elements of the tridiagonal matrix T: D(i) = A(i,i). D is tied to the distributed matrix A. .TP 8 E (local output) REAL array, dimension LOCc(JA+N-1) if UPLO = 'U', LOCc(JA+N-2) otherwise. The off-diagonal elements of the tridiagonal matrix T: E(i) = A(i,i+1) if UPLO = 'U', E(i) = A(i+1,i) if UPLO = 'L'. E is tied to the distributed matrix A. .TP 8 TAU (local output) COMPLEX, array, dimension LOCc(JA+N-1). This array contains the scalar factors TAU of the elementary reflectors. TAU is tied to the distributed matrix A. .TP 8 WORK (local workspace/local output) COMPLEX array, dimension (LWORK) On exit, WORK( 1 ) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. 
LWORK is local input and must be at least LWORK >= MAX( NB * ( NP + 1 ), 3 * NB ) where NB = MB_A = NB_A, NP = NUMROC( N, NB, MYROW, IAROW, NPROW ), IAROW = INDXG2P( IA, NB, MYROW, RSRC_A, NPROW ). INDXG2P and NUMROC are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 INFO (global output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-th entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. .SH FURTHER DETAILS If UPLO = 'U', the matrix Q is represented as a product of elementary reflectors .br Q = H(n-1) . . . H(2) H(1). .br Each H(i) has the form .br H(i) = I - tau * v * v' .br where tau is a complex scalar, and v is a complex vector with v(i+1:n) = 0 and v(i) = 1; v(1:i-1) is stored on exit in .br A(ia:ia+i-2,ja+i), and tau in TAU(ja+i-1). .br If UPLO = 'L', the matrix Q is represented as a product of elementary reflectors .br Q = H(1) H(2) . . . H(n-1). .br Each H(i) has the form .br H(i) = I - tau * v * v' .br where tau is a complex scalar, and v is a complex vector with v(1:i) = 0 and v(i+1) = 1; v(i+2:n) is stored on exit in .br A(ia+i+1:ia+n-1,ja+i-1), and tau in TAU(ja+i-1). .br The contents of sub( A ) on exit are illustrated by the following examples with n = 5: .br if UPLO = 'U': if UPLO = 'L': .br ( d e v2 v3 v4 ) ( d ) ( d e v3 v4 ) ( e d ) ( d e v4 ) ( v1 e d ) ( d e ) ( v1 v2 e d ) ( d ) ( v1 v2 v3 e d ) where d and e denote diagonal and off-diagonal elements of T, and vi denotes an element of the vector defining H(i). 
.br Alignment requirements .br ====================== .br The distributed submatrix sub( A ) must satisfy some alignment properties, namely the following expression should be true: .br ( MB_A.EQ.NB_A .AND. IROFFA.EQ.ICOFFA .AND. IROFFA.EQ.0 ) with IROFFA = MOD( IA-1, MB_A ) and ICOFFA = MOD( JA-1, NB_A ).
scalapack-doc-1.5/man/manl/pclabrd.l
.TH PCLABRD l "12 May 1997" "LAPACK version 1.5" "LAPACK auxiliary routine (version 1.5)" .SH NAME PCLABRD - reduce the first NB rows and columns of a complex general M-by-N distributed matrix sub( A ) = A(IA:IA+M-1,JA:JA+N-1) to upper or lower bidiagonal form by a unitary transformation Q' * A * P, and return the matrices X and Y which are needed to apply the transformation to the unreduced part of sub( A ) .SH SYNOPSIS .TP 20 SUBROUTINE PCLABRD( M, N, NB, A, IA, JA, DESCA, D, E, TAUQ, TAUP, X, IX, JX, DESCX, Y, IY, JY, DESCY, WORK ) .TP 20 .ti +4 INTEGER IA, IX, IY, JA, JX, JY, M, N, NB .TP 20 .ti +4 INTEGER DESCA( * ), DESCX( * ), DESCY( * ) .TP 20 .ti +4 REAL D( * ), E( * ) .TP 20 .ti +4 COMPLEX A( * ), TAUP( * ), TAUQ( * ), X( * ), Y( * ), WORK( * ) .SH PURPOSE PCLABRD reduces the first NB rows and columns of a complex general M-by-N distributed matrix sub( A ) = A(IA:IA+M-1,JA:JA+N-1) to upper or lower bidiagonal form by a unitary transformation Q' * A * P, and returns the matrices X and Y which are needed to apply the transformation to the unreduced part of sub( A ). If M >= N, sub( A ) is reduced to upper bidiagonal form; if M < N, to lower bidiagonal form. .br This is an auxiliary routine called by PCGEBRD. .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclically distributed array. 
Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ ) The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distributed over. The context itself is global, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). 
An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 M (global input) INTEGER The number of rows to be operated on, i.e. the number of rows of the distributed submatrix sub( A ). M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on, i.e. the number of columns of the distributed submatrix sub( A ). N >= 0. .TP 8 NB (global input) INTEGER The number of leading rows and columns of sub( A ) to be reduced. .TP 8 A (local input/local output) COMPLEX pointer into the local memory to an array of dimension (LLD_A,LOCc(JA+N-1)). On entry, this array contains the local pieces of the general distributed matrix sub( A ) to be reduced. On exit, the first NB rows and columns of the matrix are overwritten; the rest of the distributed matrix sub( A ) is unchanged. If m >= n, elements on and below the diagonal in the first NB columns, with the array TAUQ, represent the unitary matrix Q as a product of elementary reflectors; and elements above the diagonal in the first NB rows, with the array TAUP, represent the unitary matrix P as a product of elementary reflectors. If m < n, elements below the diagonal in the first NB columns, with the array TAUQ, represent the unitary matrix Q as a product of elementary reflectors, and elements on and above the diagonal in the first NB rows, with the array TAUP, represent the unitary matrix P as a product of elementary reflectors. See Further Details. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 D (local output) REAL array, dimension LOCr(IA+MIN(M,N)-1) if M >= N; LOCc(JA+MIN(M,N)-1) otherwise. 
The distributed diagonal elements of the bidiagonal matrix B: D(i) = A(ia+i-1,ja+i-1). D is tied to the distributed matrix A. .TP 8 E (local output) REAL array, dimension LOCr(IA+MIN(M,N)-1) if M >= N; LOCc(JA+MIN(M,N)-2) otherwise. The distributed off-diagonal elements of the bidiagonal distributed matrix B: if m >= n, E(i) = A(ia+i-1,ja+i) for i = 1,2,...,n-1; if m < n, E(i) = A(ia+i,ja+i-1) for i = 1,2,...,m-1. E is tied to the distributed matrix A. .TP 8 TAUQ (local output) COMPLEX array, dimension LOCc(JA+MIN(M,N)-1). The scalar factors of the elementary reflectors which represent the unitary matrix Q. TAUQ is tied to the distributed matrix A. See Further Details. .TP 8 TAUP (local output) COMPLEX array, dimension LOCr(IA+MIN(M,N)-1). The scalar factors of the elementary reflectors which represent the unitary matrix P. TAUP is tied to the distributed matrix A. See Further Details. .TP 8 X (local output) COMPLEX pointer into the local memory to an array of dimension (LLD_X,NB). On exit, the local pieces of the distributed M-by-NB matrix X(IX:IX+M-1,JX:JX+NB-1) required to update the unreduced part of sub( A ). .TP 8 IX (global input) INTEGER The row index in the global array X indicating the first row of sub( X ). .TP 8 JX (global input) INTEGER The column index in the global array X indicating the first column of sub( X ). .TP 8 DESCX (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix X. .TP 8 Y (local output) COMPLEX pointer into the local memory to an array of dimension (LLD_Y,NB). On exit, the local pieces of the distributed N-by-NB matrix Y(IY:IY+N-1,JY:JY+NB-1) required to update the unreduced part of sub( A ). .TP 8 IY (global input) INTEGER The row index in the global array Y indicating the first row of sub( Y ). .TP 8 JY (global input) INTEGER The column index in the global array Y indicating the first column of sub( Y ). .TP 8 DESCY (global and local input) INTEGER array of dimension DLEN_. 
The array descriptor for the distributed matrix Y. .TP 8 WORK (local workspace) COMPLEX array, dimension (LWORK) LWORK >= NB_A + NQ, with NQ = NUMROC( N+MOD( IA-1, NB_Y ), NB_Y, MYCOL, IACOL, NPCOL ) IACOL = INDXG2P( JA, NB_A, MYCOL, CSRC_A, NPCOL ) INDXG2P and NUMROC are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. .SH FURTHER DETAILS The matrices Q and P are represented as products of elementary reflectors: .br Q = H(1) H(2) . . . H(nb) and P = G(1) G(2) . . . G(nb) Each H(i) and G(i) has the form: .br H(i) = I - tauq * v * v' and G(i) = I - taup * u * u' where tauq and taup are complex scalars, and v and u are complex vectors. .br If m >= n, v(1:i-1) = 0, v(i) = 1, and v(i:m) is stored on exit in A(ia+i-1:ia+m-1,ja+i-1); u(1:i) = 0, u(i+1) = 1, and u(i+1:n) is stored on exit in A(ia+i-1,ja+i:ja+n-1); tauq is stored in TAUQ(ja+i-1) and taup in TAUP(ia+i-1). .br If m < n, v(1:i) = 0, v(i+1) = 1, and v(i+1:m) is stored on exit in A(ia+i+1:ia+m-1,ja+i-1); u(1:i-1) = 0, u(i) = 1, and u(i:n) is stored on exit in A(ia+i-1,ja+i:ja+n-1); tauq is stored in TAUQ(ja+i-1) and taup in TAUP(ia+i-1). .br The elements of the vectors v and u together form the m-by-nb matrix V and the nb-by-n matrix U' which are needed, with X and Y, to apply the transformation to the unreduced part of the matrix, using a block update of the form: sub( A ) := sub( A ) - V*Y' - X*U'. .br The contents of sub( A ) on exit are illustrated by the following examples with nb = 2: .br m = 6 and n = 5 (m > n): m = 5 and n = 6 (m < n): ( 1 1 u1 u1 u1 ) ( 1 u1 u1 u1 u1 u1 ) ( v1 1 1 u2 u2 ) ( 1 1 u2 u2 u2 u2 ) ( v1 v2 a a a ) ( v1 1 a a a a ) ( v1 v2 a a a ) ( v1 v2 a a a a ) ( v1 v2 a a a ) ( v1 v2 a a a a ) ( v1 v2 a a a ) .br where a denotes an element of the original matrix which is unchanged, vi denotes an element of the vector defining H(i), and ui an element of the vector defining G(i). 
.br
scalapack-doc-1.5/man/manl/pclacgv.l
.TH PCLACGV l "12 May 1997" "LAPACK version 1.5" "LAPACK auxiliary routine (version 1.5)" .SH NAME PCLACGV - conjugate a complex vector of length N, sub( X ), where sub( X ) denotes X(IX,JX:JX+N-1) if INCX = DESCX( M_ ) and X(IX:IX+N-1,JX) if INCX = 1 .SH SYNOPSIS .TP 20 SUBROUTINE PCLACGV( N, X, IX, JX, DESCX, INCX ) .TP 20 .ti +4 INTEGER INCX, IX, JX, N .TP 20 .ti +4 INTEGER DESCX( * ) .TP 20 .ti +4 COMPLEX X( * ) .SH PURPOSE PCLACGV conjugates a complex vector of length N, sub( X ), where sub( X ) denotes X(IX,JX:JX+N-1) if INCX = DESCX( M_ ) and X(IX:IX+N-1,JX) if INCX = 1. Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclically distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ ) The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distributed over. The context itself is global, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. 
CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br Because vectors may be viewed as a subclass of matrices, a distributed vector is considered to be a distributed matrix. .SH ARGUMENTS .TP 8 N (global input) INTEGER The length of the distributed vector sub( X ). .TP 8 X (local input/local output) COMPLEX pointer into the local memory to an array of dimension (LLD_X,*). On entry the vector to be conjugated x( i ) = X(IX+(JX-1)*M_X +(i-1)*INCX ), 1 <= i <= N. On exit the conjugated vector. .TP 8 IX (global input) INTEGER The row index in the global array X indicating the first row of sub( X ). .TP 8 JX (global input) INTEGER The column index in the global array X indicating the first column of sub( X ). .TP 8 DESCX (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix X. .TP 8 INCX (global input) INTEGER The global increment for the elements of X. Only two values of INCX are supported in this version, namely 1 and M_X. INCX must not be zero. 
scalapack-doc-1.5/man/manl/pclacon.l
.TH PCLACON l "12 May 1997" "LAPACK version 1.5" "LAPACK auxiliary routine (version 1.5)" .SH NAME PCLACON - estimate the 1-norm of a square, complex distributed matrix A .SH SYNOPSIS .TP 20 SUBROUTINE PCLACON( N, V, IV, JV, DESCV, X, IX, JX, DESCX, EST, KASE ) .TP 20 .ti +4 INTEGER IV, IX, JV, JX, KASE, N .TP 20 .ti +4 REAL EST .TP 20 .ti +4 INTEGER DESCV( * ), DESCX( * ) .TP 20 .ti +4 COMPLEX V( * ), X( * ) .SH PURPOSE PCLACON estimates the 1-norm of a square, complex distributed matrix A. Reverse communication is used for evaluating matrix-vector products. X and V are aligned with the distributed matrix A; this information is implicitly contained within IV, IX, DESCV, and DESCX. Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclically distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ ) The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distributed over. The context itself is global, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. 
.br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 N (global input) INTEGER The length of the distributed vectors V and X. N >= 0. .TP 8 V (local workspace) COMPLEX pointer into the local memory to an array of dimension LOCr(N+MOD(IV-1,MB_V)). On the final return, V = A*W, where EST = norm(V)/norm(W) (W is not returned). .TP 8 IV (global input) INTEGER The row index in the global array V indicating the first row of sub( V ). .TP 8 JV (global input) INTEGER The column index in the global array V indicating the first column of sub( V ). .TP 8 DESCV (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix V. .TP 8 X (local input/local output) COMPLEX pointer into the local memory to an array of dimension LOCr(N+MOD(IX-1,MB_X)). 
On an intermediate return, X should be overwritten by A * X, if KASE=1, or A' * X, if KASE=2, where A' is the conjugate transpose of A, and PCLACON must be re-called with all the other parameters unchanged. .TP 8 IX (global input) INTEGER The row index in the global array X indicating the first row of sub( X ). .TP 8 JX (global input) INTEGER The column index in the global array X indicating the first column of sub( X ). .TP 8 DESCX (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix X. .TP 8 EST (global output) REAL An estimate (a lower bound) for norm(A). .TP 8 KASE (local input/local output) INTEGER On the initial call to PCLACON, KASE should be 0. On an intermediate return, KASE will be 1 or 2, indicating whether X should be overwritten by A * X or A' * X. On the final return from PCLACON, KASE will again be 0. .SH FURTHER DETAILS The serial version CLACON has been contributed by Nick Higham, University of Manchester. It was originally named SONEST, dated March 16, 1988. .br Reference: N.J. Higham, "FORTRAN codes for estimating the one-norm of a real or complex matrix, with applications to condition estimation", ACM Trans. Math. Soft., vol. 14, no. 4, pp. 381-396, December 1988.
scalapack-doc-1.5/man/manl/pclacp2.l
.TH PCLACP2 l "12 May 1997" "LAPACK version 1.5" "LAPACK auxiliary routine (version 1.5)" .SH NAME PCLACP2 - copy all or part of a distributed matrix A to another distributed matrix B .SH SYNOPSIS .TP 20 SUBROUTINE PCLACP2( UPLO, M, N, A, IA, JA, DESCA, B, IB, JB, DESCB ) .TP 20 .ti +4 CHARACTER UPLO .TP 20 .ti +4 INTEGER IA, IB, JA, JB, M, N .TP 20 .ti +4 INTEGER DESCA( * ), DESCB( * ) .TP 20 .ti +4 COMPLEX A( * ), B( * ) .SH PURPOSE PCLACP2 copies all or part of a distributed matrix A to another distributed matrix B. 
No communication is performed; PCLACP2 performs a local copy sub( B ) := sub( A ), where sub( A ) denotes A(IA:IA+M-1,JA:JA+N-1) and sub( B ) denotes B(IB:IB+M-1,JB:JB+N-1). PCLACP2 requires that only one dimension of the matrix operands is distributed. .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclically distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ ) The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distributed over. The context itself is global, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. 
.br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 UPLO (global input) CHARACTER Specifies the part of the distributed matrix sub( A ) to be copied: .br = 'U': Upper triangular part is copied; the strictly lower triangular part of sub( A ) is not referenced; = 'L': Lower triangular part is copied; the strictly upper triangular part of sub( A ) is not referenced; Otherwise: All of the matrix sub( A ) is copied. .TP 8 M (global input) INTEGER The number of rows to be operated on, i.e. the number of rows of the distributed submatrix sub( A ). M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on, i.e. the number of columns of the distributed submatrix sub( A ). N >= 0. .TP 8 A (local input) COMPLEX pointer into the local memory to an array of dimension (LLD_A, LOCc(JA+N-1) ). This array contains the local pieces of the distributed matrix sub( A ) to be copied from. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 B (local output) COMPLEX pointer into the local memory to an array of dimension (LLD_B, LOCc(JB+N-1) ). 
This array contains on exit the local pieces of the distributed matrix sub( B ) set as follows: if UPLO = 'U', B(IB+i-1,JB+j-1) = A(IA+i-1,JA+j-1), 1<=i<=j, 1<=j<=N; if UPLO = 'L', B(IB+i-1,JB+j-1) = A(IA+i-1,JA+j-1), j<=i<=M, 1<=j<=N; otherwise, B(IB+i-1,JB+j-1) = A(IA+i-1,JA+j-1), 1<=i<=M, 1<=j<=N. .TP 8 IB (global input) INTEGER The row index in the global array B indicating the first row of sub( B ). .TP 8 JB (global input) INTEGER The column index in the global array B indicating the first column of sub( B ). .TP 8 DESCB (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix B. scalapack-doc-1.5/man/manl/pclacpy.l0100644000056400000620000001255406335610617017065 0ustar pfrauenfstaff.TH PCLACPY l "12 May 1997" "LAPACK version 1.5" "LAPACK auxiliary routine (version 1.5)" .SH NAME PCLACPY - copy all or part of a distributed matrix A to another distributed matrix B .SH SYNOPSIS .TP 20 SUBROUTINE PCLACPY( UPLO, M, N, A, IA, JA, DESCA, B, IB, JB, DESCB ) .TP 20 .ti +4 CHARACTER UPLO .TP 20 .ti +4 INTEGER IA, IB, JA, JB, M, N .TP 20 .ti +4 INTEGER DESCA( * ), DESCB( * ) .TP 20 .ti +4 COMPLEX A( * ), B( * ) .SH PURPOSE PCLACPY copies all or part of a distributed matrix A to another distributed matrix B. No communication is performed: PCLACPY performs a local copy sub( B ) := sub( A ), where sub( A ) denotes A(IA:IA+M-1,JA:JA+N-1) and sub( B ) denotes B(IB:IB+M-1,JB:JB+N-1). Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclically distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". 
.br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). 
An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 UPLO (global input) CHARACTER Specifies the part of the distributed matrix sub( A ) to be copied: .br = 'U': Upper triangular part is copied; the strictly lower triangular part of sub( A ) is not referenced; = 'L': Lower triangular part is copied; the strictly upper triangular part of sub( A ) is not referenced; Otherwise: All of the matrix sub( A ) is copied. .TP 8 M (global input) INTEGER The number of rows to be operated on i.e the number of rows of the distributed submatrix sub( A ). M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on i.e the number of columns of the distributed submatrix sub( A ). N >= 0. .TP 8 A (local input) COMPLEX pointer into the local memory to an array of dimension (LLD_A, LOCc(JA+N-1) ). This array contains the local pieces of the distributed matrix sub( A ) to be copied from. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 B (local output) COMPLEX pointer into the local memory to an array of dimension (LLD_B, LOCc(JB+N-1) ). This array contains on exit the local pieces of the distributed matrix sub( B ) set as follows: if UPLO = 'U', B(IB+i-1,JB+j-1) = A(IA+i-1,JA+j-1), 1<=i<=j, 1<=j<=N; if UPLO = 'L', B(IB+i-1,JB+j-1) = A(IA+i-1,JA+j-1), j<=i<=M, 1<=j<=N; otherwise, B(IB+i-1,JB+j-1) = A(IA+i-1,JA+j-1), 1<=i<=M, 1<=j<=N. .TP 8 IB (global input) INTEGER The row index in the global array B indicating the first row of sub( B ). .TP 8 JB (global input) INTEGER The column index in the global array B indicating the first column of sub( B ). 
.TP 8 DESCB (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix B. scalapack-doc-1.5/man/manl/pclaevswp.l0100644000056400000620000001217206335610617017432 0ustar pfrauenfstaff.TH PCLAEVSWP l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PCLAEVSWP - move the eigenvectors (potentially unsorted) from where they are computed, to a ScaLAPACK standard block cyclic array, sorted so that the corresponding eigenvalues are sorted .SH SYNOPSIS .TP 22 SUBROUTINE PCLAEVSWP( N, ZIN, LDZI, Z, IZ, JZ, DESCZ, NVS, KEY, RWORK, LRWORK ) .TP 22 .ti +4 INTEGER IZ, JZ, LDZI, LRWORK, N .TP 22 .ti +4 INTEGER DESCZ( * ), KEY( * ), NVS( * ) .TP 22 .ti +4 REAL RWORK( * ), ZIN( LDZI, * ) .TP 22 .ti +4 COMPLEX Z( * ) .SH PURPOSE PCLAEVSWP moves the eigenvectors (potentially unsorted) from where they are computed, to a ScaLAPACK standard block cyclic array, sorted so that the corresponding eigenvalues are sorted. Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. 
.br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS NP = the number of rows local to a given process. NQ = the number of columns local to a given process. .TP 8 N (global input) INTEGER The order of the matrix A. N >= 0. .TP 8 ZIN (local input) REAL array, dimension ( LDZI, NVS(iam) ) The eigenvectors on input. Each eigenvector resides entirely in one process. Each process holds a contiguous set of NVS(iam) eigenvectors. The first eigenvector which the process holds is: sum for i=[0,iam-1) of NVS(i) .TP 8 LDZI (locl input) INTEGER leading dimension of the ZIN array .TP 8 Z (local output) COMPLEX array global dimension (N, N), local dimension (DESCZ(DLEN_), NQ) The eigenvectors on output. 
The eigenvectors are distributed in a block cyclic manner in both dimensions, with a block size of NB. .TP 8 IZ (global input) INTEGER Z's global row index, which points to the beginning of the submatrix which is to be operated on. .TP 8 JZ (global input) INTEGER Z's global column index, which points to the beginning of the submatrix which is to be operated on. .TP 8 DESCZ (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix Z. .TP 8 NVS (global input) INTEGER array, dimension( nprocs+1 ) nvs(i) = number of eigenvectors held by processes [0,i-1) nvs(1) = number of eigenvectors held by [0,1-1) == 0 nvs(nprocs+1) = number of eigenvectors held by [0,nprocs) == total number of eigenvectors .TP 8 KEY (global input) INTEGER array, dimension( N ) Indicates the actual index (after sorting) for each of the eigenvectors. .TP 9 RWORK (local workspace) REAL array, dimension (LRWORK) .TP 9 LRWORK (local input) INTEGER dimension of RWORK scalapack-doc-1.5/man/manl/pclahrd.l0100644000056400000620000001052506335610617017043 0ustar pfrauenfstaff.TH PCLAHRD l "12 May 1997" "LAPACK version 1.5" "LAPACK auxiliary routine (version 1.5)" .SH NAME PCLAHRD - reduce the first NB columns of a complex general N-by-(N-K+1) distributed matrix A(IA:IA+N-1,JA:JA+N-K) so that elements below the k-th subdiagonal are zero .SH SYNOPSIS .TP 20 SUBROUTINE PCLAHRD( N, K, NB, A, IA, JA, DESCA, TAU, T, Y, IY, JY, DESCY, WORK ) .TP 20 .ti +4 INTEGER IA, IY, JA, JY, K, N, NB .TP 20 .ti +4 INTEGER DESCA( * ), DESCY( * ) .TP 20 .ti +4 COMPLEX A( * ), T( * ), TAU( * ), WORK( * ), Y( * ) .SH PURPOSE PCLAHRD reduces the first NB columns of a complex general N-by-(N-K+1) distributed matrix A(IA:IA+N-1,JA:JA+N-K) so that elements below the k-th subdiagonal are zero. The reduction is performed by a unitary similarity transformation Q' * A * Q. 
The routine returns the matrices V and T which determine Q as a block reflector I - V*T*V', and also the matrix Y = A * V * T. .br This is an auxiliary routine called by PCGEHRD. In the following comments sub( A ) denotes A(IA:IA+N-1,JA:JA+N-1). .br .SH ARGUMENTS .TP 8 N (global input) INTEGER The number of rows and columns to be operated on, i.e. the order of the distributed submatrix sub( A ). N >= 0. .TP 8 K (global input) INTEGER The offset for the reduction. Elements below the k-th subdiagonal in the first NB columns are reduced to zero. .TP 8 NB (global input) INTEGER The number of columns to be reduced. .TP 8 A (local input/local output) COMPLEX pointer into the local memory to an array of dimension (LLD_A, LOCc(JA+N-K)). On entry, this array contains the local pieces of the N-by-(N-K+1) general distributed matrix A(IA:IA+N-1,JA:JA+N-K). On exit, the elements on and above the k-th subdiagonal in the first NB columns are overwritten with the corresponding elements of the reduced distributed matrix; the elements below the k-th subdiagonal, with the array TAU, represent the matrix Q as a product of elementary reflectors. The other columns of A(IA:IA+N-1,JA:JA+N-K) are unchanged. See Further Details. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 TAU (local output) COMPLEX array, dimension LOCc(JA+N-2) The scalar factors of the elementary reflectors (see Further Details). TAU is tied to the distributed matrix A. .TP 8 T (local output) COMPLEX array, dimension (NB_A,NB_A) The upper triangular matrix T. .TP 8 Y (local output) COMPLEX pointer into the local memory to an array of dimension (LLD_Y,NB_A). 
On exit, this array contains the local pieces of the N-by-NB distributed matrix Y. LLD_Y >= LOCr(IA+N-1). .TP 8 IY (global input) INTEGER The row index in the global array Y indicating the first row of sub( Y ). .TP 8 JY (global input) INTEGER The column index in the global array Y indicating the first column of sub( Y ). .TP 8 DESCY (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix Y. .TP 8 WORK (local workspace) COMPLEX array, dimension (NB) .SH FURTHER DETAILS The matrix Q is represented as a product of nb elementary reflectors Q = H(1) H(2) . . . H(nb). .br Each H(i) has the form .br H(i) = I - tau * v * v' .br where tau is a complex scalar, and v is a complex vector with v(1:i+k-1) = 0, v(i+k) = 1; v(i+k+1:n) is stored on exit in A(ia+i+k:ia+n-1,ja+i-1), and tau in TAU(ja+i-1). .br The elements of the vectors v together form the (n-k+1)-by-nb matrix V which is needed, with T and Y, to apply the transformation to the unreduced part of the matrix, using an update of the form: A(ia:ia+n-1,ja:ja+n-k) := (I-V*T*V')*(A(ia:ia+n-1,ja:ja+n-k)-Y*V'). The contents of A(ia:ia+n-1,ja:ja+n-k) on exit are illustrated by the following example with n = 7, k = 3 and nb = 2: .br ( a h a a a ) .br ( a h a a a ) .br ( a h a a a ) .br ( h h a a a ) .br ( v1 h a a a ) .br ( v1 v2 a a a ) .br ( v1 v2 a a a ) .br where a denotes an element of the original matrix .br A(ia:ia+n-1,ja:ja+n-k), h denotes a modified element of the upper Hessenberg matrix H, and vi denotes an element of the vector defining H(i). 
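The defining property of the elementary reflectors above, H(i) = I - tau * v * v' with H(i) unitary, can be checked numerically. Below is a minimal pure-Python sketch, illustrative only: for a real vector v it takes tau = 2/(v'v), which makes H unitary (and here Hermitian), whereas PCLAHRD obtains tau from the library's reflector generator; the helper names are invented for this example.

```python
def householder(v):
    """Return H = I - tau * v * v' as a list of rows, with tau = 2 / (v' v)."""
    n = len(v)
    tau = 2.0 / sum(x * x.conjugate() for x in v).real
    return [[(1.0 if i == j else 0.0) - tau * v[i] * v[j].conjugate()
             for j in range(n)] for i in range(n)]

def matmul(a, b):
    """Naive square matrix product for the check below."""
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# v follows the pattern in the illustration: zeros, then a 1, then free entries
v = [0.0, 1.0, 0.5, -2.0]
h = householder(v)
# H' * H should be the identity; for this real symmetric H that is H * H
hh = matmul(h, h)
for i in range(4):
    for j in range(4):
        assert abs(hh[i][j] - (1.0 if i == j else 0.0)) < 1e-12
```

Applying nb such reflectors in sequence, accumulated through V and T, gives the block form I - V*T*V' used in the update above.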
.br scalapack-doc-1.5/man/manl/pclange.l0100644000056400000620000001277706335610617017052 0ustar pfrauenfstaff.TH PCLANGE l "12 May 1997" "LAPACK version 1.5" "LAPACK auxiliary routine (version 1.5)" .SH NAME PCLANGE - return the value of the one norm, or the Frobenius norm, .SH SYNOPSIS .TP 14 REAL FUNCTION PCLANGE( NORM, M, N, A, IA, JA, DESCA, WORK ) .TP 14 .ti +4 CHARACTER NORM .TP 14 .ti +4 INTEGER IA, JA, M, N .TP 14 .ti +4 INTEGER DESCA( * ) .TP 14 .ti +4 REAL WORK( * ) .TP 14 .ti +4 COMPLEX A( * ) .SH PURPOSE PCLANGE returns the value of the one norm, or the Frobenius norm, or the infinity norm, or the element of largest absolute value of a distributed matrix sub( A ) = A(IA:IA+M-1, JA:JA+N-1). .br PCLANGE returns the value .br ( max(abs(A(i,j))), NORM = 'M' or 'm' with IA <= i <= IA+M-1, ( and JA <= j <= JA+N-1, ( .br ( norm1( sub( A ) ), NORM = '1', 'O' or 'o' .br ( .br ( normI( sub( A ) ), NORM = 'I' or 'i' .br ( .br ( normF( sub( A ) ), NORM = 'F', 'f', 'E' or 'e' .br where norm1 denotes the one norm of a matrix (maximum column sum), normI denotes the infinity norm of a matrix (maximum row sum) and normF denotes the Frobenius norm of a matrix (square root of sum of squares). Note that max(abs(A(i,j))) is not a matrix norm. Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. 
.br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 NORM (global input) CHARACTER Specifies the value to be returned in PCLANGE as described above. .TP 8 M (global input) INTEGER The number of rows to be operated on i.e the number of rows of the distributed submatrix sub( A ). When M = 0, PCLANGE is set to zero. M >= 0. 
.TP 8 N (global input) INTEGER The number of columns to be operated on i.e the number of columns of the distributed submatrix sub( A ). When N = 0, PCLANGE is set to zero. N >= 0. .TP 8 A (local input) COMPLEX pointer into the local memory to an array of dimension (LLD_A, LOCc(JA+N-1)) containing the local pieces of the distributed matrix sub( A ). .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 WORK (local workspace) REAL array dimension (LWORK) LWORK >= 0 if NORM = 'M' or 'm' (not referenced), Nq0 if NORM = '1', 'O' or 'o', Mp0 if NORM = 'I' or 'i', 0 if NORM = 'F', 'f', 'E' or 'e' (not referenced), where IROFFA = MOD( IA-1, MB_A ), ICOFFA = MOD( JA-1, NB_A ), IAROW = INDXG2P( IA, MB_A, MYROW, RSRC_A, NPROW ), IACOL = INDXG2P( JA, NB_A, MYCOL, CSRC_A, NPCOL ), Mp0 = NUMROC( M+IROFFA, MB_A, MYROW, IAROW, NPROW ), Nq0 = NUMROC( N+ICOFFA, NB_A, MYCOL, IACOL, NPCOL ), INDXG2P and NUMROC are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. 
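For a matrix held entirely on one process, the NORM options of PCLANGE reduce to ordinary dense-matrix norms. The following pure-Python sketch mirrors only the serial selection logic described above, not the distributed reduction or the WORK layout; the function name lange is made up for this example.

```python
def lange(norm, a):
    """Serial analogue of PCLANGE's NORM selection; a is a list of rows."""
    if not a or not a[0]:
        return 0.0
    if norm in 'Mm':    # largest absolute value (note: not a matrix norm)
        return max(abs(x) for row in a for x in row)
    if norm in '1Oo':   # one norm: maximum column sum of moduli
        return max(sum(abs(row[j]) for row in a) for j in range(len(a[0])))
    if norm in 'Ii':    # infinity norm: maximum row sum of moduli
        return max(sum(abs(x) for x in row) for row in a)
    if norm in 'FfEe':  # Frobenius norm: square root of sum of squares
        return sum(abs(x) ** 2 for row in a for x in row) ** 0.5
    raise ValueError(norm)

a = [[1 + 0j, -2], [3, 4j]]
assert lange('1', a) == 6.0   # column sums of moduli are 4 and 6
assert lange('I', a) == 7.0   # row sums of moduli are 3 and 7
```

The parallel routine computes the same quantity by combining per-process partial sums or maxima over the process grid.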
scalapack-doc-1.5/man/manl/pclanhe.l0100644000056400000620000001442506335610617017043 0ustar pfrauenfstaff.TH PCLANHE l "12 May 1997" "LAPACK version 1.5" "LAPACK auxiliary routine (version 1.5)" .SH NAME PCLANHE - return the value of the one norm, or the Frobenius norm, .SH SYNOPSIS .TP 14 REAL FUNCTION PCLANHE( NORM, UPLO, N, A, IA, JA, DESCA, WORK ) .TP 14 .ti +4 CHARACTER NORM, UPLO .TP 14 .ti +4 INTEGER IA, JA, N .TP 14 .ti +4 INTEGER DESCA( * ) .TP 14 .ti +4 REAL WORK( * ) .TP 14 .ti +4 COMPLEX A( * ) .SH PURPOSE PCLANHE returns the value of the one norm, or the Frobenius norm, or the infinity norm, or the element of largest absolute value of a complex hermitian distributed matrix sub(A) = A(IA:IA+N-1,JA:JA+N-1). PCLANHE returns the value .br ( max(abs(A(i,j))), NORM = 'M' or 'm' with IA <= i <= IA+N-1, ( and JA <= j <= JA+N-1, ( .br ( norm1( sub( A ) ), NORM = '1', 'O' or 'o' .br ( .br ( normI( sub( A ) ), NORM = 'I' or 'i' .br ( .br ( normF( sub( A ) ), NORM = 'F', 'f', 'E' or 'e' .br where norm1 denotes the one norm of a matrix (maximum column sum), normI denotes the infinity norm of a matrix (maximum row sum) and normF denotes the Frobenius norm of a matrix (square root of sum of squares). Note that max(abs(A(i,j))) is not a matrix norm. Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. 
.br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 NORM (global input) CHARACTER Specifies the value to be returned in PCLANHE as described above. .TP 8 UPLO (global input) CHARACTER Specifies whether the upper or lower triangular part of the hermitian matrix sub( A ) is to be referenced. 
= 'U': Upper triangular part of sub( A ) is referenced, .br = 'L': Lower triangular part of sub( A ) is referenced. .TP 8 N (global input) INTEGER The number of rows and columns to be operated on i.e the number of rows and columns of the distributed submatrix sub( A ). When N = 0, PCLANHE is set to zero. N >= 0. .TP 8 A (local input) COMPLEX pointer into the local memory to an array of dimension (LLD_A, LOCc(JA+N-1)) containing the local pieces of the hermitian distributed matrix sub( A ). If UPLO = 'U', the leading N-by-N upper triangular part of sub( A ) contains the upper triangular matrix which norm is to be computed, and the strictly lower triangular part of this matrix is not referenced. If UPLO = 'L', the leading N-by-N lower triangular part of sub( A ) contains the lower triangular matrix which norm is to be computed, and the strictly upper triangular part of sub( A ) is not referenced. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 WORK (local workspace) REAL array dimension (LWORK) LWORK >= 0 if NORM = 'M' or 'm' (not referenced), 2*Nq0+Np0+LDW if NORM = '1', 'O', 'o', 'I' or 'i', where LDW is given by: IF( NPROW.NE.NPCOL ) THEN LDW = MB_A*CEIL(CEIL(Np0/MB_A)/(LCM/NPROW)) ELSE LDW = 0 END IF 0 if NORM = 'F', 'f', 'E' or 'e' (not referenced), where LCM is the least common multiple of NPROW and NPCOL LCM = ILCM( NPROW, NPCOL ) and CEIL denotes the ceiling operation (ICEIL). 
IROFFA = MOD( IA-1, MB_A ), ICOFFA = MOD( JA-1, NB_A ), IAROW = INDXG2P( IA, MB_A, MYROW, RSRC_A, NPROW ), IACOL = INDXG2P( JA, NB_A, MYCOL, CSRC_A, NPCOL ), Np0 = NUMROC( N+IROFFA, MB_A, MYROW, IAROW, NPROW ), Nq0 = NUMROC( N+ICOFFA, NB_A, MYCOL, IACOL, NPCOL ), ICEIL, ILCM, INDXG2P and NUMROC are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. scalapack-doc-1.5/man/manl/pclanhs.l0100644000056400000620000001250706335610617017060 0ustar pfrauenfstaff.TH PCLANHS l "12 May 1997" "LAPACK version 1.5" "LAPACK auxiliary routine (version 1.5)" .SH NAME PCLANHS - return the value of the one norm, or the Frobenius norm, .SH SYNOPSIS .TP 14 REAL FUNCTION PCLANHS( NORM, N, A, IA, JA, DESCA, WORK ) .TP 14 .ti +4 CHARACTER NORM .TP 14 .ti +4 INTEGER IA, JA, N .TP 14 .ti +4 INTEGER DESCA( * ) .TP 14 .ti +4 REAL WORK( * ) .TP 14 .ti +4 COMPLEX A( * ) .SH PURPOSE PCLANHS returns the value of the one norm, or the Frobenius norm, or the infinity norm, or the element of largest absolute value of a Hessenberg distributed matrix sub( A ) = A(IA:IA+N-1,JA:JA+N-1). PCLANHS returns the value .br ( max(abs(A(i,j))), NORM = 'M' or 'm' with IA <= i <= IA+N-1, ( and JA <= j <= JA+N-1, ( .br ( norm1( sub( A ) ), NORM = '1', 'O' or 'o' .br ( .br ( normI( sub( A ) ), NORM = 'I' or 'i' .br ( .br ( normF( sub( A ) ), NORM = 'F', 'f', 'E' or 'e' .br where norm1 denotes the one norm of a matrix (maximum column sum), normI denotes the infinity norm of a matrix (maximum row sum) and normF denotes the Frobenius norm of a matrix (square root of sum of squares). Note that max(abs(A(i,j))) is not a matrix norm. Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. 
Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). 
An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 NORM (global input) CHARACTER Specifies the value to be returned in PCLANHS as described above. .TP 8 N (global input) INTEGER The number of rows and columns to be operated on i.e the number of rows and columns of the distributed submatrix sub( A ). When N = 0, PCLANHS is set to zero. N >= 0. .TP 8 A (local input) COMPLEX pointer into the local memory to an array of dimension (LLD_A, LOCc(JA+N-1) ) containing the local pieces of sub( A ). .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 WORK (local workspace) REAL array dimension (LWORK) LWORK >= 0 if NORM = 'M' or 'm' (not referenced), Nq0 if NORM = '1', 'O' or 'o', Mp0 if NORM = 'I' or 'i', 0 if NORM = 'F', 'f', 'E' or 'e' (not referenced), where IROFFA = MOD( IA-1, MB_A ), ICOFFA = MOD( JA-1, NB_A ), IAROW = INDXG2P( IA, MB_A, MYROW, RSRC_A, NPROW ), IACOL = INDXG2P( JA, NB_A, MYCOL, CSRC_A, NPCOL ), Np0 = NUMROC( N+IROFFA, MB_A, MYROW, IAROW, NPROW ), Nq0 = NUMROC( N+ICOFFA, NB_A, MYCOL, IACOL, NPCOL ), INDXG2P and NUMROC are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. 
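The LOCr()/LOCc() values and the upper bounds quoted in the Notes can be reproduced with a small re-implementation of the NUMROC tool function. A Python sketch, under the assumption that it faithfully mirrors NUMROC's argument order (N, NB, IPROC, ISRCPROC, NPROCS):

```python
import math

def numroc(n, nb, iproc, isrcproc, nprocs):
    """Number of rows/columns of an n-element dimension, distributed in
    nb-sized blocks over nprocs processes, that land on process iproc when
    the first block resides on process isrcproc."""
    # distance of this process from the process owning the first block
    mydist = (nprocs + iproc - isrcproc) % nprocs
    nblocks = n // nb                  # total number of whole blocks
    nloc = (nblocks // nprocs) * nb    # whole block cycles owned by every process
    extrablocks = nblocks % nprocs     # leftover whole blocks
    if mydist < extrablocks:
        nloc += nb                     # one extra whole block for this process
    elif mydist == extrablocks:
        nloc += n % nb                 # the trailing partial block, if any
    return nloc

# LOCr(M) summed over a process column recovers M, and each value
# respects the bound ceil(ceil(M/MB_A)/NPROW)*MB_A stated above.
M, MB, NPROW, RSRC = 1000, 64, 4, 0
locr = [numroc(M, MB, p, RSRC, NPROW) for p in range(NPROW)]
assert sum(locr) == M
bound = math.ceil(math.ceil(M / MB) / NPROW) * MB
assert all(l <= bound for l in locr)
```

The same call with (N, NB_A, MYCOL, CSRC_A, NPCOL) yields LOCc(N), as used throughout these workspace formulas.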
scalapack-doc-1.5/man/manl/pclansy.l0100644000056400000620000001442506335610617017102 0ustar pfrauenfstaff.TH PCLANSY l "12 May 1997" "LAPACK version 1.5" "LAPACK auxiliary routine (version 1.5)" .SH NAME PCLANSY - return the value of the one norm, or the Frobenius norm, .SH SYNOPSIS .TP 14 REAL FUNCTION PCLANSY( NORM, UPLO, N, A, IA, JA, DESCA, WORK ) .TP 14 .ti +4 CHARACTER NORM, UPLO .TP 14 .ti +4 INTEGER IA, JA, N .TP 14 .ti +4 INTEGER DESCA( * ) .TP 14 .ti +4 REAL WORK( * ) .TP 14 .ti +4 COMPLEX A( * ) .SH PURPOSE PCLANSY returns the value of the one norm, or the Frobenius norm, or the infinity norm, or the element of largest absolute value of a complex symmetric distributed matrix sub( A ) = A(IA:IA+N-1,JA:JA+N-1). PCLANSY returns the value .br ( max(abs(A(i,j))), NORM = 'M' or 'm' with IA <= i <= IA+N-1, ( and JA <= j <= JA+N-1, ( .br ( norm1( sub( A ) ), NORM = '1', 'O' or 'o' .br ( .br ( normI( sub( A ) ), NORM = 'I' or 'i' .br ( .br ( normF( sub( A ) ), NORM = 'F', 'f', 'E' or 'e' .br where norm1 denotes the one norm of a matrix (maximum column sum), normI denotes the infinity norm of a matrix (maximum row sum) and normF denotes the Frobenius norm of a matrix (square root of sum of squares). Note that max(abs(A(i,j))) is not a matrix norm. Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclically distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ ) The descriptor type. In this case, DTYPE_A = 1. 
.br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distributed over. The context itself is global, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 NORM (global input) CHARACTER Specifies the value to be returned in PCLANSY as described above. .TP 8 UPLO (global input) CHARACTER Specifies whether the upper or lower triangular part of the symmetric matrix sub( A ) is to be referenced. 
= 'U': Upper triangular part of sub( A ) is referenced, .br = 'L': Lower triangular part of sub( A ) is referenced. .TP 8 N (global input) INTEGER The number of rows and columns to be operated on, i.e. the number of rows and columns of the distributed submatrix sub( A ). When N = 0, PCLANSY is set to zero. N >= 0. .TP 8 A (local input) COMPLEX pointer into the local memory to an array of dimension (LLD_A, LOCc(JA+N-1)) containing the local pieces of the symmetric distributed matrix sub( A ). If UPLO = 'U', the leading N-by-N upper triangular part of sub( A ) contains the upper triangular matrix whose norm is to be computed, and the strictly lower triangular part of this matrix is not referenced. If UPLO = 'L', the leading N-by-N lower triangular part of sub( A ) contains the lower triangular matrix whose norm is to be computed, and the strictly upper triangular part of sub( A ) is not referenced. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 WORK (local workspace) REAL array dimension (LWORK) LWORK >= 0 if NORM = 'M' or 'm' (not referenced), 2*Nq0+Np0+LDW if NORM = '1', 'O', 'o', 'I' or 'i', where LDW is given by: IF( NPROW.NE.NPCOL ) THEN LDW = MB_A*CEIL(CEIL(Np0/MB_A)/(LCM/NPROW)) ELSE LDW = 0 END IF 0 if NORM = 'F', 'f', 'E' or 'e' (not referenced), where LCM is the least common multiple of NPROW and NPCOL LCM = ILCM( NPROW, NPCOL ) and CEIL denotes the ceiling operation (ICEIL). 
IROFFA = MOD( IA-1, MB_A ), ICOFFA = MOD( JA-1, NB_A ), IAROW = INDXG2P( IA, MB_A, MYROW, RSRC_A, NPROW ), IACOL = INDXG2P( JA, NB_A, MYCOL, CSRC_A, NPCOL ), Np0 = NUMROC( N+IROFFA, MB_A, MYROW, IAROW, NPROW ), Nq0 = NUMROC( N+ICOFFA, NB_A, MYCOL, IACOL, NPCOL ), ICEIL, ILCM, INDXG2P and NUMROC are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO.
scalapack-doc-1.5/man/manl/pclantr.l
.TH PCLANTR l "12 May 1997" "LAPACK version 1.5" "LAPACK auxiliary routine (version 1.5)" .SH NAME PCLANTR - return the value of the one norm, or the Frobenius norm, .SH SYNOPSIS .TP 14 REAL FUNCTION PCLANTR( NORM, UPLO, DIAG, M, N, A, IA, JA, DESCA, WORK ) .TP 14 .ti +4 CHARACTER DIAG, NORM, UPLO .TP 14 .ti +4 INTEGER IA, JA, M, N .TP 14 .ti +4 INTEGER DESCA( * ) .TP 14 .ti +4 REAL WORK( * ) .TP 14 .ti +4 COMPLEX A( * ) .SH PURPOSE PCLANTR returns the value of the one norm, or the Frobenius norm, or the infinity norm, or the element of largest absolute value of a trapezoidal or triangular distributed matrix sub( A ) denoting A(IA:IA+M-1, JA:JA+N-1). .br PCLANTR returns the value .br ( max(abs(A(i,j))), NORM = 'M' or 'm' with ia <= i <= ia+m-1, ( and ja <= j <= ja+n-1, ( .br ( norm1( sub( A ) ), NORM = '1', 'O' or 'o' .br ( .br ( normI( sub( A ) ), NORM = 'I' or 'i' .br ( .br ( normF( sub( A ) ), NORM = 'F', 'f', 'E' or 'e' .br where norm1 denotes the one norm of a matrix (maximum column sum), normI denotes the infinity norm of a matrix (maximum row sum) and normF denotes the Frobenius norm of a matrix (square root of sum of squares). Note that max(abs(A(i,j))) is not a matrix norm. Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. 
.br Let A be a generic term for any 2D block cyclically distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ ) The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distributed over. The context itself is global, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). 
An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 NORM (global input) CHARACTER Specifies the value to be returned in PCLANTR as described above. .TP 8 UPLO (global input) CHARACTER Specifies whether the matrix sub( A ) is upper or lower trapezoidal. = 'U': Upper trapezoidal .br = 'L': Lower trapezoidal Note that sub( A ) is triangular instead of trapezoidal if M = N. .TP 8 DIAG (global input) CHARACTER Specifies whether or not the distributed matrix sub( A ) has unit diagonal. = 'N': Non-unit diagonal .br = 'U': Unit diagonal .TP 8 M (global input) INTEGER The number of rows to be operated on, i.e. the number of rows of the distributed submatrix sub( A ). When M = 0, PCLANTR is set to zero. M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on, i.e. the number of columns of the distributed submatrix sub( A ). When N = 0, PCLANTR is set to zero. N >= 0. .TP 8 A (local input) COMPLEX pointer into the local memory to an array of dimension (LLD_A, LOCc(JA+N-1) ) containing the local pieces of sub( A ). .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. 
.TP 8 WORK (local workspace) REAL array dimension (LWORK) LWORK >= 0 if NORM = 'M' or 'm' (not referenced), Nq0 if NORM = '1', 'O' or 'o', Mp0 if NORM = 'I' or 'i', 0 if NORM = 'F', 'f', 'E' or 'e' (not referenced), where IROFFA = MOD( IA-1, MB_A ), ICOFFA = MOD( JA-1, NB_A ), IAROW = INDXG2P( IA, MB_A, MYROW, RSRC_A, NPROW ), IACOL = INDXG2P( JA, NB_A, MYCOL, CSRC_A, NPCOL ), Mp0 = NUMROC( M+IROFFA, MB_A, MYROW, IAROW, NPROW ), Nq0 = NUMROC( N+ICOFFA, NB_A, MYCOL, IACOL, NPCOL ), INDXG2P and NUMROC are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO.
scalapack-doc-1.5/man/manl/pclapiv.l
.TH PCLAPIV l "12 May 1997" "LAPACK version 1.5" "LAPACK auxiliary routine (version 1.5)" .SH NAME PCLAPIV - applies either P (permutation matrix indicated by IPIV) or inv( P ) to a general M-by-N distributed matrix sub( A ) = A(IA:IA+M-1,JA:JA+N-1), resulting in row or column pivoting .SH SYNOPSIS .TP 20 SUBROUTINE PCLAPIV( DIREC, ROWCOL, PIVROC, M, N, A, IA, JA, DESCA, IPIV, IP, JP, DESCIP, IWORK ) .TP 20 .ti +4 CHARACTER*1 DIREC, PIVROC, ROWCOL .TP 20 .ti +4 INTEGER IA, IP, JA, JP, M, N .TP 20 .ti +4 INTEGER DESCA( * ), DESCIP( * ), IPIV( * ), IWORK( * ) .TP 20 .ti +4 COMPLEX A( * ) .SH PURPOSE PCLAPIV applies either P (permutation matrix indicated by IPIV) or inv( P ) to a general M-by-N distributed matrix sub( A ) = A(IA:IA+M-1,JA:JA+N-1), resulting in row or column pivoting. The pivot vector may be distributed across a process row or a column. The pivot vector should be aligned with the distributed matrix A. This routine will transpose the pivot vector if necessary. For example if the row pivots should be applied to the columns of sub( A ), pass ROWCOL='C' and PIVROC='C'. .br Notes .br ===== .br Each global data object is described by an associated description vector. 
This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclically distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ ) The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distributed over. The context itself is global, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. 
.br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br Restrictions .br ============ .br IPIV must always be a distributed vector (not a matrix). Thus: IF( ROWPIV .EQ. 'C' ) THEN .br JP must be 1 .br ELSE .br IP must be 1 .br END IF .br The following restrictions apply when IPIV must be transposed: IF( ROWPIV.EQ.'C' .AND. PIVROC.EQ.'C') THEN .br DESCIP(MB_) must equal DESCA(NB_) .br ELSE IF( ROWPIV.EQ.'R' .AND. PIVROC.EQ.'R') THEN .br DESCIP(NB_) must equal DESCA(MB_) .br END IF .br .SH ARGUMENTS .TP 8 DIREC (global input) CHARACTER*1 Specifies in which order the permutation is applied: = 'F' (Forward) Applies pivots Forward from top of matrix. Computes P*sub( A ). = 'B' (Backward) Applies pivots Backward from bottom of matrix. Computes inv( P )*sub( A ). .TP 8 ROWCOL (global input) CHARACTER*1 Specifies if the rows or columns are to be permuted: = 'R' Rows will be permuted, = 'C' Columns will be permuted. .TP 8 PIVROC (global input) CHARACTER*1 Specifies whether IPIV is distributed over a process row or column: = 'R' IPIV distributed over a process row = 'C' IPIV distributed over a process column .TP 8 M (global input) INTEGER The number of rows to be operated on, i.e. the number of rows of the distributed submatrix sub( A ). M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on, i.e. the number of columns of the distributed submatrix sub( A ). N >= 0. .TP 8 A (local input/local output) COMPLEX pointer into the local memory to an array of dimension (LLD_A, LOCc(JA+N-1)). On entry, this array contains the local pieces of the distributed submatrix sub( A ) to which the row or column interchanges will be applied. 
On exit, the local pieces of the permuted distributed submatrix. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 IPIV (local input) INTEGER array, dimension >= LOCr(M_A)+MB_A if ROWCOL='R', otherwise LOCc(N_A)+NB_A. It contains the pivoting information. IPIV(i) is the global row (column), local row (column) i was swapped with. The last piece of the array of size MB_A (resp. NB_A) is used as workspace. This array is tied to the distributed matrix A. .TP 8 IWORK (local workspace) INTEGER array, dimension (LDW) where LDW is equal to the workspace necessary for transposition, and the storage of the transposed IPIV: Let LCM be the least common multiple of NPROW and NPCOL. IF( ROWCOL.EQ.'R' .AND. PIVROC.EQ.'R' ) THEN IF( NPROW.EQ.NPCOL ) THEN LDW = LOCr( N_P + MOD(JP-1, NB_P) ) + NB_P ELSE LDW = LOCr( N_P + MOD(JP-1, NB_P) ) + NB_P * CEIL( CEIL(LOCc(N_P)/NB_P) / (LCM/NPCOL) ) END IF ELSE IF( ROWCOL.EQ.'C' .AND. PIVROC.EQ.'C' ) THEN IF( NPROW.EQ.NPCOL ) THEN LDW = LOCc( M_P + MOD(IP-1, MB_P) ) + MB_P ELSE LDW = LOCc( M_P + MOD(IP-1, MB_P) ) + MB_P * CEIL( CEIL(LOCr(M_P)/MB_P) / (LCM/NPROW) ) END IF ELSE IWORK is not referenced. 
END IF
scalapack-doc-1.5/man/manl/pclapv2.l
.TH PCLAPV2 l "12 May 1997" "LAPACK version 1.5" "LAPACK auxiliary routine (version 1.5)" .SH NAME PCLAPV2 - applies either P (permutation matrix indicated by IPIV) or inv( P ) to a M-by-N distributed matrix sub( A ) denoting A(IA:IA+M-1,JA:JA+N-1), resulting in row or column pivoting .SH SYNOPSIS .TP 20 SUBROUTINE PCLAPV2( DIREC, ROWCOL, M, N, A, IA, JA, DESCA, IPIV, IP, JP, DESCIP ) .TP 20 .ti +4 CHARACTER DIREC, ROWCOL .TP 20 .ti +4 INTEGER IA, IP, JA, JP, M, N .TP 20 .ti +4 INTEGER DESCA( * ), DESCIP( * ), IPIV( * ) .TP 20 .ti +4 COMPLEX A( * ) .SH PURPOSE PCLAPV2 applies either P (permutation matrix indicated by IPIV) or inv( P ) to a M-by-N distributed matrix sub( A ) denoting A(IA:IA+M-1,JA:JA+N-1), resulting in row or column pivoting. The pivot vector should be aligned with the distributed matrix A. For pivoting the rows of sub( A ), IPIV should be distributed along a process column and replicated over all process rows. Similarly, IPIV should be distributed along a process row and replicated over all process columns for column pivoting. .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclically distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ ) The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distributed over. 
The context itself is global, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 DIREC (global input) CHARACTER Specifies in which order the permutation is applied: = 'F' (Forward) Applies pivots Forward from top of matrix. Computes P * sub( A ); = 'B' (Backward) Applies pivots Backward from bottom of matrix. Computes inv( P ) * sub( A ). .TP 8 ROWCOL (global input) CHARACTER Specifies if the rows or columns are to be permuted: = 'R' Rows will be permuted, = 'C' Columns will be permuted. 
.TP 8 M (global input) INTEGER The number of rows to be operated on, i.e. the number of rows of the distributed submatrix sub( A ). M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on, i.e. the number of columns of the distributed submatrix sub( A ). N >= 0. .TP 8 A (local input/local output) COMPLEX pointer into the local memory to an array of dimension (LLD_A, LOCc(JA+N-1)). On entry, this local array contains the local pieces of the distributed matrix sub( A ) to which the row or column interchanges will be applied. On exit, this array contains the local pieces of the permuted distributed matrix. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 IPIV (input) INTEGER array, dimension >= LOCr(M_A)+MB_A if ROWCOL = 'R', LOCc(N_A)+NB_A otherwise. It contains the pivoting information. IPIV(i) is the global row (column), local row (column) i was swapped with. The last piece of the array of size MB_A (resp. NB_A) is used as workspace. IPIV is tied to the distributed matrix A. .TP 8 IP (global input) INTEGER IPIV's global row index, which points to the beginning of the submatrix which is to be operated on. .TP 8 JP (global input) INTEGER IPIV's global column index, which points to the beginning of the submatrix which is to be operated on. .TP 8 DESCIP (global and local input) INTEGER array of dimension 8 The array descriptor for the distributed matrix IPIV. 
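On a single process, the pivoting performed by PCLAPV2 reduces to applying the recorded interchanges to the rows (or columns) of the matrix, forward for P*sub( A ) and backward for inv( P )*sub( A ). A minimal serial sketch, assuming a 0-based LAPACK-style pivot vector; `apply_row_pivots` is a hypothetical helper for illustration, not part of ScaLAPACK:

```python
def apply_row_pivots(a, ipiv, forward=True):
    """Serial analogue of PCLAPV2 with ROWCOL='R': apply the row
    interchanges recorded in ipiv (ipiv[i] = row that row i was swapped
    with, 0-based) to the rows of a, either forward (computing P*A) or
    backward (computing inv(P)*A). a is a list of rows, modified in place."""
    order = range(len(ipiv)) if forward else range(len(ipiv) - 1, -1, -1)
    for i in order:
        j = ipiv[i]
        if j != i:
            a[i], a[j] = a[j], a[i]  # swap rows i and j
    return a
```

Applying the pivots forward and then backward with the same pivot vector restores the original matrix, since inv( P ) undoes P.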
scalapack-doc-1.5/man/manl/pclaqge.l
.TH PCLAQGE l "12 May 1997" "LAPACK version 1.5" "LAPACK auxiliary routine (version 1.5)" .SH NAME PCLAQGE - equilibrate a general M-by-N distributed matrix sub( A ) = A(IA:IA+M-1,JA:JA+N-1) using the row and column scaling factors in the vectors R and C .SH SYNOPSIS .TP 20 SUBROUTINE PCLAQGE( M, N, A, IA, JA, DESCA, R, C, ROWCND, COLCND, AMAX, EQUED ) .TP 20 .ti +4 CHARACTER EQUED .TP 20 .ti +4 INTEGER IA, JA, M, N .TP 20 .ti +4 REAL AMAX, COLCND, ROWCND .TP 20 .ti +4 INTEGER DESCA( * ) .TP 20 .ti +4 REAL C( * ), R( * ) .TP 20 .ti +4 COMPLEX A( * ) .SH PURPOSE PCLAQGE equilibrates a general M-by-N distributed matrix sub( A ) = A(IA:IA+M-1,JA:JA+N-1) using the row and column scaling factors in the vectors R and C. Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclically distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ ) The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distributed over. The context itself is global, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. 
.br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 M (global input) INTEGER The number of rows to be operated on, i.e. the number of rows of the distributed submatrix sub( A ). M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on, i.e. the number of columns of the distributed submatrix sub( A ). N >= 0. .TP 8 A (local input/local output) COMPLEX pointer into the local memory to an array of dimension (LLD_A,LOCc(JA+N-1)) containing on entry the M-by-N matrix sub( A ). On exit, the equilibrated distributed matrix. See EQUED for the form of the equilibrated distributed submatrix. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. 
The array descriptor for the distributed matrix A. .TP 8 R (local input) REAL array, dimension LOCr(M_A) The row scale factors for sub( A ). R is aligned with the distributed matrix A, and replicated across every process column. R is tied to the distributed matrix A. .TP 8 C (local input) REAL array, dimension LOCc(N_A) The column scale factors of sub( A ). C is aligned with the distributed matrix A, and replicated down every process row. C is tied to the distributed matrix A. .TP 8 ROWCND (global input) REAL The global ratio of the smallest R(i) to the largest R(i), IA <= i <= IA+M-1. .TP 8 COLCND (global input) REAL The global ratio of the smallest C(j) to the largest C(j), JA <= j <= JA+N-1. .TP 8 AMAX (global input) REAL Absolute value of largest distributed submatrix entry. .TP 8 EQUED (global output) CHARACTER Specifies the form of equilibration that was done. = 'N': No equilibration .br = 'R': Row equilibration, i.e., sub( A ) has been premultiplied by diag(R(IA:IA+M-1)), .br = 'C': Column equilibration, i.e., sub( A ) has been postmultiplied by diag(C(JA:JA+N-1)), .br = 'B': Both row and column equilibration, i.e., sub( A ) has been replaced by diag(R(IA:IA+M-1)) * sub( A ) * diag(C(JA:JA+N-1)). .SH PARAMETERS THRESH is a threshold value used to decide if row or column scaling should be done based on the ratio of the row or column scaling factors. If ROWCND < THRESH, row scaling is done, and if COLCND < THRESH, column scaling is done. LARGE and SMALL are threshold values used to decide if row scaling should be done based on the absolute size of the largest matrix element. If AMAX > LARGE or AMAX < SMALL, row scaling is done. 
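The scaling decision described under PARAMETERS can be sketched serially. This is an illustration of the LAPACK-style xLAQGE logic only: `laqge_sketch`, its argument list, and the literal THRESH = 0.1 are assumptions for the example, not the ScaLAPACK interface:

```python
THRESH = 0.1  # assumed threshold, in the spirit of the PARAMETERS section

def laqge_sketch(a, r, c, rowcnd, colcnd, amax, small, large):
    """Decide between no equilibration ('N'), row ('R'), column ('C') or
    both ('B'), then apply diag(r) * a * diag(c) as appropriate.
    a is a list of lists, modified in place; returns the EQUED flag."""
    # Row scaling if the row-scale ratio is poor or AMAX is out of range.
    row = not (rowcnd >= THRESH and small <= amax <= large)
    # Column scaling if the column-scale ratio is poor.
    col = not (colcnd >= THRESH)
    for i, row_i in enumerate(a):
        for j in range(len(row_i)):
            if row:
                row_i[j] *= r[i]   # premultiply by diag(r)
            if col:
                row_i[j] *= c[j]   # postmultiply by diag(c)
    return {(False, False): 'N', (True, False): 'R',
            (False, True): 'C', (True, True): 'B'}[(row, col)]
```

In the real routine, SMALL and LARGE are derived from machine parameters; here they are passed in explicitly to keep the sketch self-contained.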
scalapack-doc-1.5/man/manl/pclaqsy.l
.TH PCLAQSY l "12 May 1997" "LAPACK version 1.5" "LAPACK auxiliary routine (version 1.5)" .SH NAME PCLAQSY - equilibrate a symmetric distributed matrix sub( A ) = A(IA:IA+N-1,JA:JA+N-1) using the scaling factors in the vectors SR and SC .SH SYNOPSIS .TP 20 SUBROUTINE PCLAQSY( UPLO, N, A, IA, JA, DESCA, SR, SC, SCOND, AMAX, EQUED ) .TP 20 .ti +4 CHARACTER EQUED, UPLO .TP 20 .ti +4 INTEGER IA, JA, N .TP 20 .ti +4 REAL AMAX, SCOND .TP 20 .ti +4 INTEGER DESCA( * ) .TP 20 .ti +4 REAL SC( * ), SR( * ) .TP 20 .ti +4 COMPLEX A( * ) .SH PURPOSE PCLAQSY equilibrates a symmetric distributed matrix sub( A ) = A(IA:IA+N-1,JA:JA+N-1) using the scaling factors in the vectors SR and SC. Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclically distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ ) The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distributed over. The context itself is global, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. 
.br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 UPLO (global input) CHARACTER Specifies whether the upper or lower triangular part of the symmetric distributed matrix sub( A ) is to be referenced: .br = 'U': Upper triangular .br = 'L': Lower triangular .TP 8 N (global input) INTEGER The number of rows and columns to be operated on, i.e. the order of the distributed submatrix sub( A ). N >= 0. .TP 8 A (input/output) COMPLEX pointer into the local memory to an array of local dimension (LLD_A,LOCc(JA+N-1)). On entry, the local pieces of the distributed symmetric matrix sub( A ). If UPLO = 'U', the leading N-by-N upper triangular part of sub( A ) contains the upper triangular part of the matrix, and the strictly lower triangular part of sub( A ) is not referenced. 
If UPLO = 'L', the leading N-by-N lower triangular part of sub( A ) contains the lower triangular part of the matrix, and the strictly upper triangular part of sub( A ) is not referenced. On exit, if EQUED = 'Y', the equilibrated matrix: .br diag(SR(IA:IA+N-1)) * sub( A ) * diag(SC(JA:JA+N-1)). .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 SR (local input) REAL array, dimension LOCr(M_A) The scale factors for A(IA:IA+M-1,JA:JA+N-1). SR is aligned with the distributed matrix A, and replicated across every process column. SR is tied to the distributed matrix A. .TP 8 SC (local input) REAL array, dimension LOCc(N_A) The scale factors for sub( A ). SC is aligned with the distributed matrix A, and replicated down every process row. SC is tied to the distributed matrix A. .TP 8 SCOND (global input) REAL Ratio of the smallest SR(i) (respectively SC(j)) to the largest SR(i) (respectively SC(j)), with IA <= i <= IA+N-1 and JA <= j <= JA+N-1. .TP 8 AMAX (global input) REAL Absolute value of the largest distributed submatrix entry. .TP 8 EQUED (output) CHARACTER*1 Specifies whether or not equilibration was done. = 'N': No equilibration. .br = 'Y': Equilibration was done, i.e., sub( A ) has been replaced by: .br diag(SR(IA:IA+N-1)) * sub( A ) * diag(SC(JA:JA+N-1)). .SH PARAMETERS THRESH is a threshold value used to decide if scaling should be done based on the ratio of the scaling factors. If SCOND < THRESH, scaling is done. LARGE and SMALL are threshold values used to decide if scaling should be done based on the absolute size of the largest matrix element. If AMAX > LARGE or AMAX < SMALL, scaling is done. 
.TH PCLARF l "12 May 1997" "LAPACK version 1.5" "LAPACK auxiliary routine (version 1.5)" .SH NAME PCLARF - applies a complex elementary reflector Q to a complex M-by-N distributed matrix sub( C ) = C(IC:IC+M-1,JC:JC+N-1), from either the left or the right .SH SYNOPSIS .TP 19 SUBROUTINE PCLARF( SIDE, M, N, V, IV, JV, DESCV, INCV, TAU, C, IC, JC, DESCC, WORK ) .TP 19 .ti +4 CHARACTER SIDE .TP 19 .ti +4 INTEGER IC, INCV, IV, JC, JV, M, N .TP 19 .ti +4 INTEGER DESCC( * ), DESCV( * ) .TP 19 .ti +4 COMPLEX C( * ), TAU( * ), V( * ), WORK( * ) .SH PURPOSE PCLARF applies a complex elementary reflector Q to a complex M-by-N distributed matrix sub( C ) = C(IC:IC+M-1,JC:JC+N-1), from either the left or the right. Q is represented in the form Q = I - tau * v * v' .br where tau is a complex scalar and v is a complex vector. .br If tau = 0, then Q is taken to be the unit matrix. .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclically distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ ) The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distributed over. The context itself is global, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A.
.br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br Because vectors may be viewed as a subclass of matrices, a distributed vector is considered to be a distributed matrix. Restrictions .br ============ .br If SIDE = 'Left' and INCV = 1, then the row process having the first entry V(IV,JV) must also have the first row of sub( C ). Moreover, MOD(IV-1,MB_V) must be equal to MOD(IC-1,MB_C), if INCV=M_V, only the last equality must be satisfied. .br If SIDE = 'Right' and INCV = M_V then the column process having the first entry V(IV,JV) must also have the first column of sub( C ) and MOD(JV-1,NB_V) must be equal to MOD(JC-1,NB_C), if INCV = 1 only the last equality must be satisfied. 
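The LOCr/LOCc bookkeeping above can be checked with a short sketch. The `numroc` function below reimplements the arithmetic of the ScaLAPACK tool function NUMROC for a 1-D block-cyclic distribution, and `indxg2p` the global-index-to-process mapping of INDXG2P (simplified here to drop the unused IPROC argument); both are illustrations, not the library routines:

```python
import math

def numroc(n, nb, iproc, isrcproc, nprocs):
    """Number of entries of an n-long dimension, distributed in blocks
    of nb over nprocs processes (first block on isrcproc), that land on
    process iproc. Mirrors the arithmetic of ScaLAPACK's NUMROC."""
    mydist = (nprocs + iproc - isrcproc) % nprocs
    nblocks = n // nb                      # number of full blocks
    num = (nblocks // nprocs) * nb         # complete rounds of blocks
    extrablocks = nblocks % nprocs
    if mydist < extrablocks:
        num += nb                          # one extra full block
    elif mydist == extrablocks:
        num += n % nb                      # the trailing partial block
    return num

def indxg2p(indx, nb, isrcproc, nprocs):
    """Process coordinate owning 1-based global index indx (a sketch of
    INDXG2P with the unused process argument omitted)."""
    return (isrcproc + (indx - 1) // nb) % nprocs

# The local pieces over a process column must add up to the global
# extent, and each obeys the bound LOCr(M) <= ceil(ceil(M/MB)/NPROW)*MB.
M, MB, NPROW, RSRC = 13, 4, 3, 0
locs = [numroc(M, MB, p, RSRC, NPROW) for p in range(NPROW)]
bound = math.ceil(math.ceil(M / MB) / NPROW) * MB
```

For M = 13 rows in blocks of 4 over 3 process rows, process 0 holds blocks 1 and 4 (5 rows) while processes 1 and 2 hold 4 rows each, all within the documented upper bound of 8.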
.br .SH ARGUMENTS .TP 8 SIDE (global input) CHARACTER = 'L': form Q * sub( C ), .br = 'R': form sub( C ) * Q. .TP 8 M (global input) INTEGER The number of rows to be operated on i.e the number of rows of the distributed submatrix sub( C ). M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on i.e the number of columns of the distributed submatrix sub( C ). N >= 0. .TP 8 V (local input) COMPLEX pointer into the local memory to an array of dimension (LLD_V,*) containing the local pieces of the distributed vectors V representing the Householder transformation Q, V(IV:IV+M-1,JV) if SIDE = 'L' and INCV = 1, .br V(IV,JV:JV+M-1) if SIDE = 'L' and INCV = M_V, .br V(IV:IV+N-1,JV) if SIDE = 'R' and INCV = 1, .br V(IV,JV:JV+N-1) if SIDE = 'R' and INCV = M_V, The vector v in the representation of Q. V is not used if TAU = 0. .TP 8 IV (global input) INTEGER The row index in the global array V indicating the first row of sub( V ). .TP 8 JV (global input) INTEGER The column index in the global array V indicating the first column of sub( V ). .TP 8 DESCV (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix V. .TP 8 INCV (global input) INTEGER The global increment for the elements of V. Only two values of INCV are supported in this version, namely 1 and M_V. INCV must not be zero. .TP 8 TAU (local input) COMPLEX, array, dimension LOCc(JV) if INCV = 1, and LOCr(IV) otherwise. This array contains the Householder scalars related to the Householder vectors. TAU is tied to the distributed matrix V. .TP 8 C (local input/local output) COMPLEX pointer into the local memory to an array of dimension (LLD_C, LOCc(JC+N-1) ), containing the local pieces of sub( C ). On exit, sub( C ) is overwritten by the Q * sub( C ) if SIDE = 'L', or sub( C ) * Q if SIDE = 'R'. .TP 8 IC (global input) INTEGER The row index in the global array C indicating the first row of sub( C ). 
.TP 8 JC (global input) INTEGER The column index in the global array C indicating the first column of sub( C ). .TP 8 DESCC (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix C. .TP 8 WORK (local workspace) COMPLEX array, dimension (LWORK) If INCV = 1, if SIDE = 'L', if IVCOL = ICCOL, LWORK >= NqC0 else LWORK >= MpC0 + MAX( 1, NqC0 ) end if else if SIDE = 'R', LWORK >= NqC0 + MAX( MAX( 1, MpC0 ), NUMROC( NUMROC( N+ICOFFC,NB_V,0,0,NPCOL ),NB_V,0,0,LCMQ ) ) end if else if INCV = M_V, if SIDE = 'L', LWORK >= MpC0 + MAX( MAX( 1, NqC0 ), NUMROC( NUMROC( M+IROFFC,MB_V,0,0,NPROW ),MB_V,0,0,LCMP ) ) else if SIDE = 'R', if IVROW = ICROW, LWORK >= MpC0 else LWORK >= NqC0 + MAX( 1, MpC0 ) end if end if end if where LCM is the least common multiple of NPROW and NPCOL and LCM = ILCM( NPROW, NPCOL ), LCMP = LCM / NPROW, LCMQ = LCM / NPCOL, IROFFC = MOD( IC-1, MB_C ), ICOFFC = MOD( JC-1, NB_C ), ICROW = INDXG2P( IC, MB_C, MYROW, RSRC_C, NPROW ), ICCOL = INDXG2P( JC, NB_C, MYCOL, CSRC_C, NPCOL ), MpC0 = NUMROC( M+IROFFC, MB_C, MYROW, ICROW, NPROW ), NqC0 = NUMROC( N+ICOFFC, NB_C, MYCOL, ICCOL, NPCOL ), ILCM, INDXG2P and NUMROC are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. Alignment requirements ====================== The distributed submatrices V(IV:*, JV:*) and C(IC:IC+M-1,JC:JC+N-1) must verify some alignment properties, namely the following expressions should be true: MB_V = NB_V, If INCV = 1, If SIDE = 'Left', ( MB_V.EQ.MB_C .AND. IROFFV.EQ.IROFFC .AND. IVROW.EQ.ICROW ) If SIDE = 'Right', ( MB_V.EQ.NB_A .AND. MB_V.EQ.NB_C .AND. IROFFV.EQ.ICOFFC ) else if INCV = M_V, If SIDE = 'Left', ( MB_V.EQ.NB_V .AND. MB_V.EQ.MB_C .AND. ICOFFV.EQ.IROFFC ) If SIDE = 'Right', ( NB_V.EQ.NB_C .AND. ICOFFV.EQ.ICOFFC .AND. 
IVCOL.EQ.ICCOL ) end if .TH PCLARFB l "12 May 1997" "LAPACK version 1.5" "LAPACK auxiliary routine (version 1.5)" .SH NAME PCLARFB - applies a complex block reflector Q or its conjugate transpose Q**H to a complex M-by-N distributed matrix sub( C ) denoting C(IC:IC+M-1,JC:JC+N-1), from the left or the right .SH SYNOPSIS .TP 20 SUBROUTINE PCLARFB( SIDE, TRANS, DIRECT, STOREV, M, N, K, V, IV, JV, DESCV, T, C, IC, JC, DESCC, WORK ) .TP 20 .ti +4 CHARACTER SIDE, TRANS, DIRECT, STOREV .TP 20 .ti +4 INTEGER IC, IV, JC, JV, K, M, N .TP 20 .ti +4 INTEGER DESCC( * ), DESCV( * ) .TP 20 .ti +4 COMPLEX C( * ), T( * ), V( * ), WORK( * ) .SH PURPOSE PCLARFB applies a complex block reflector Q or its conjugate transpose Q**H to a complex M-by-N distributed matrix sub( C ) denoting C(IC:IC+M-1,JC:JC+N-1), from the left or the right. Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclically distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ ) The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distributed over. The context itself is global, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A.
.br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 SIDE (global input) CHARACTER = 'L': apply Q or Q**H from the Left; .br = 'R': apply Q or Q**H from the Right. .TP 8 TRANS (global input) CHARACTER .br = 'N': No transpose, apply Q; .br = 'C': Conjugate transpose, apply Q**H. .TP 8 DIRECT (global input) CHARACTER Indicates how Q is formed from a product of elementary reflectors = 'F': Q = H(1) H(2) . . . H(k) (Forward) .br = 'B': Q = H(k) . . . 
H(2) H(1) (Backward) .TP 8 STOREV (global input) CHARACTER Indicates how the vectors which define the elementary reflectors are stored: .br = 'C': Columnwise .br = 'R': Rowwise .TP 8 M (global input) INTEGER The number of rows to be operated on i.e the number of rows of the distributed submatrix sub( C ). M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on i.e the number of columns of the distributed submatrix sub( C ). N >= 0. .TP 8 K (global input) INTEGER The order of the matrix T (= the number of elementary reflectors whose product defines the block reflector). .TP 8 V (local input) COMPLEX pointer into the local memory to an array of dimension ( LLD_V, LOCc(JV+K-1) ) if STOREV = 'C', ( LLD_V, LOCc(JV+M-1)) if STOREV = 'R' and SIDE = 'L', ( LLD_V, LOCc(JV+N-1) ) if STOREV = 'R' and SIDE = 'R'. It contains the local pieces of the distributed vectors V representing the Householder transformation. See further details. If STOREV = 'C' and SIDE = 'L', LLD_V >= MAX(1,LOCr(IV+M-1)); if STOREV = 'C' and SIDE = 'R', LLD_V >= MAX(1,LOCr(IV+N-1)); if STOREV = 'R', LLD_V >= LOCr(IV+K-1). .TP 8 IV (global input) INTEGER The row index in the global array V indicating the first row of sub( V ). .TP 8 JV (global input) INTEGER The column index in the global array V indicating the first column of sub( V ). .TP 8 DESCV (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix V. .TP 8 T (local input) COMPLEX array, dimension MB_V by MB_V if STOREV = 'R' and NB_V by NB_V if STOREV = 'C'. The trian- gular matrix T in the representation of the block reflector. .TP 8 C (local input/local output) COMPLEX pointer into the local memory to an array of dimension (LLD_C,LOCc(JC+N-1)). On entry, the M-by-N distributed matrix sub( C ). On exit, sub( C ) is overwritten by Q*sub( C ) or Q'*sub( C ) or sub( C )*Q or sub( C )*Q'. 
.TP 8 IC (global input) INTEGER The row index in the global array C indicating the first row of sub( C ). .TP 8 JC (global input) INTEGER The column index in the global array C indicating the first column of sub( C ). .TP 8 DESCC (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix C. .TP 8 WORK (local workspace) COMPLEX array, dimension (LWORK) If STOREV = 'C', if SIDE = 'L', LWORK >= ( NqC0 + MpC0 ) * K else if SIDE = 'R', LWORK >= ( NqC0 + MAX( NpV0 + NUMROC( NUMROC( N+ICOFFC, NB_V, 0, 0, NPCOL ), NB_V, 0, 0, LCMQ ), MpC0 ) ) * K end if else if STOREV = 'R', if SIDE = 'L', LWORK >= ( MpC0 + MAX( MqV0 + NUMROC( NUMROC( M+IROFFC, MB_V, 0, 0, NPROW ), MB_V, 0, 0, LCMP ), NqC0 ) ) * K else if SIDE = 'R', LWORK >= ( MpC0 + NqC0 ) * K end if end if where LCMQ = LCM / NPCOL with LCM = ILCM( NPROW, NPCOL ), IROFFV = MOD( IV-1, MB_V ), ICOFFV = MOD( JV-1, NB_V ), IVROW = INDXG2P( IV, MB_V, MYROW, RSRC_V, NPROW ), IVCOL = INDXG2P( JV, NB_V, MYCOL, CSRC_V, NPCOL ), MqV0 = NUMROC( M+ICOFFV, NB_V, MYCOL, IVCOL, NPCOL ), NpV0 = NUMROC( N+IROFFV, MB_V, MYROW, IVROW, NPROW ), IROFFC = MOD( IC-1, MB_C ), ICOFFC = MOD( JC-1, NB_C ), ICROW = INDXG2P( IC, MB_C, MYROW, RSRC_C, NPROW ), ICCOL = INDXG2P( JC, NB_C, MYCOL, CSRC_C, NPCOL ), MpC0 = NUMROC( M+IROFFC, MB_C, MYROW, ICROW, NPROW ), NpC0 = NUMROC( N+ICOFFC, MB_C, MYROW, ICROW, NPROW ), NqC0 = NUMROC( N+ICOFFC, NB_C, MYCOL, ICCOL, NPCOL ), ILCM, INDXG2P and NUMROC are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. Alignment requirements ====================== The distributed submatrices V(IV:*, JV:*) and C(IC:IC+M-1,JC:JC+N-1) must verify some alignment properties, namely the following expressions should be true: If STOREV = 'Columnwise' If SIDE = 'Left', ( MB_V.EQ.MB_C .AND. IROFFV.EQ.IROFFC .AND. IVROW.EQ.ICROW ) If SIDE = 'Right', ( MB_V.EQ.NB_C .AND.
IROFFV.EQ.ICOFFC ) else if STOREV = 'Rowwise' If SIDE = 'Left', ( NB_V.EQ.MB_C .AND. ICOFFV.EQ.IROFFC ) If SIDE = 'Right', ( NB_V.EQ.NB_C .AND. ICOFFV.EQ.ICOFFC .AND. IVCOL.EQ.ICCOL ) end if .TH PCLARFC l "12 May 1997" "LAPACK version 1.5" "LAPACK auxiliary routine (version 1.5)" .SH NAME PCLARFC - applies a complex elementary reflector Q**H to a complex M-by-N distributed matrix sub( C ) = C(IC:IC+M-1,JC:JC+N-1), from either the left or the right .SH SYNOPSIS .TP 20 SUBROUTINE PCLARFC( SIDE, M, N, V, IV, JV, DESCV, INCV, TAU, C, IC, JC, DESCC, WORK ) .TP 20 .ti +4 CHARACTER SIDE .TP 20 .ti +4 INTEGER IC, INCV, IV, JC, JV, M, N .TP 20 .ti +4 INTEGER DESCC( * ), DESCV( * ) .TP 20 .ti +4 COMPLEX C( * ), TAU( * ), V( * ), WORK( * ) .SH PURPOSE PCLARFC applies a complex elementary reflector Q**H to a complex M-by-N distributed matrix sub( C ) = C(IC:IC+M-1,JC:JC+N-1), from either the left or the right. Q is represented in the form Q = I - tau * v * v' .br where tau is a complex scalar and v is a complex vector. .br If tau = 0, then Q is taken to be the unit matrix. .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclically distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ ) The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distributed over. The context itself is global, but the handle (the integer value) may vary.
.br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br Because vectors may be viewed as a subclass of matrices, a distributed vector is considered to be a distributed matrix. Restrictions .br ============ .br If SIDE = 'Left' and INCV = 1, then the row process having the first entry V(IV,JV) must also have the first row of sub( C ). Moreover, MOD(IV-1,MB_V) must be equal to MOD(IC-1,MB_C), if INCV=M_V, only the last equality must be satisfied. 
.br If SIDE = 'Right' and INCV = M_V then the column process having the first entry V(IV,JV) must also have the first column of sub( C ) and MOD(JV-1,NB_V) must be equal to MOD(JC-1,NB_C), if INCV = 1 only the last equality must be satisfied. .br .SH ARGUMENTS .TP 8 SIDE (global input) CHARACTER = 'L': form Q**H * sub( C ), .br = 'R': form sub( C ) * Q**H. .TP 8 M (global input) INTEGER The number of rows to be operated on i.e the number of rows of the distributed submatrix sub( C ). M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on i.e the number of columns of the distributed submatrix sub( C ). N >= 0. .TP 8 V (local input) COMPLEX pointer into the local memory to an array of dimension (LLD_V,*) containing the local pieces of the distributed vectors V representing the Householder transformation Q, V(IV:IV+M-1,JV) if SIDE = 'L' and INCV = 1, .br V(IV,JV:JV+M-1) if SIDE = 'L' and INCV = M_V, .br V(IV:IV+N-1,JV) if SIDE = 'R' and INCV = 1, .br V(IV,JV:JV+N-1) if SIDE = 'R' and INCV = M_V, The vector v in the representation of Q. V is not used if TAU = 0. .TP 8 IV (global input) INTEGER The row index in the global array V indicating the first row of sub( V ). .TP 8 JV (global input) INTEGER The column index in the global array V indicating the first column of sub( V ). .TP 8 DESCV (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix V. .TP 8 INCV (global input) INTEGER The global increment for the elements of V. Only two values of INCV are supported in this version, namely 1 and M_V. INCV must not be zero. .TP 8 TAU (local input) COMPLEX, array, dimension LOCc(JV) if INCV = 1, and LOCr(IV) otherwise. This array contains the Householder scalars related to the Householder vectors. TAU is tied to the distributed matrix V. .TP 8 C (local input/local output) COMPLEX pointer into the local memory to an array of dimension (LLD_C, LOCc(JC+N-1) ), containing the local pieces of sub( C ). 
On exit, sub( C ) is overwritten by the Q**H * sub( C ) if SIDE = 'L', or sub( C ) * Q**H if SIDE = 'R'. .TP 8 IC (global input) INTEGER The row index in the global array C indicating the first row of sub( C ). .TP 8 JC (global input) INTEGER The column index in the global array C indicating the first column of sub( C ). .TP 8 DESCC (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix C. .TP 8 WORK (local workspace) COMPLEX array, dimension (LWORK) If INCV = 1, if SIDE = 'L', if IVCOL = ICCOL, LWORK >= NqC0 else LWORK >= MpC0 + MAX( 1, NqC0 ) end if else if SIDE = 'R', LWORK >= NqC0 + MAX( MAX( 1, MpC0 ), NUMROC( NUMROC( N+ICOFFC,NB_V,0,0,NPCOL ),NB_V,0,0,LCMQ ) ) end if else if INCV = M_V, if SIDE = 'L', LWORK >= MpC0 + MAX( MAX( 1, NqC0 ), NUMROC( NUMROC( M+IROFFC,MB_V,0,0,NPROW ),MB_V,0,0,LCMP ) ) else if SIDE = 'R', if IVROW = ICROW, LWORK >= MpC0 else LWORK >= NqC0 + MAX( 1, MpC0 ) end if end if end if where LCM is the least common multiple of NPROW and NPCOL and LCM = ILCM( NPROW, NPCOL ), LCMP = LCM / NPROW, LCMQ = LCM / NPCOL, IROFFC = MOD( IC-1, MB_C ), ICOFFC = MOD( JC-1, NB_C ), ICROW = INDXG2P( IC, MB_C, MYROW, RSRC_C, NPROW ), ICCOL = INDXG2P( JC, NB_C, MYCOL, CSRC_C, NPCOL ), MpC0 = NUMROC( M+IROFFC, MB_C, MYROW, ICROW, NPROW ), NqC0 = NUMROC( N+ICOFFC, NB_C, MYCOL, ICCOL, NPCOL ), ILCM, INDXG2P and NUMROC are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. Alignment requirements ====================== The distributed submatrices V(IV:*, JV:*) and C(IC:IC+M-1,JC:JC+N-1) must verify some alignment properties, namely the following expressions should be true: MB_V = NB_V, If INCV = 1, If SIDE = 'Left', ( MB_V.EQ.MB_C .AND. IROFFV.EQ.IROFFC .AND. IVROW.EQ.ICROW ) If SIDE = 'Right', ( MB_V.EQ.NB_A .AND. MB_V.EQ.NB_C .AND. IROFFV.EQ.ICOFFC ) else if INCV = M_V, If SIDE = 'Left', ( MB_V.EQ.NB_V .AND. MB_V.EQ.MB_C .AND. 
ICOFFV.EQ.IROFFC ) If SIDE = 'Right', ( NB_V.EQ.NB_C .AND. ICOFFV.EQ.ICOFFC .AND. IVCOL.EQ.ICCOL ) end if .TH PCLARFG l "12 May 1997" "LAPACK version 1.5" "LAPACK auxiliary routine (version 1.5)" .SH NAME PCLARFG - generates a complex elementary reflector H of order n, such that H * sub( X ) = H * ( x(iax,jax) ; x ) = ( alpha ; 0 ), H' * H = I .SH SYNOPSIS .TP 20 SUBROUTINE PCLARFG( N, ALPHA, IAX, JAX, X, IX, JX, DESCX, INCX, TAU ) .TP 20 .ti +4 INTEGER IAX, INCX, IX, JAX, JX, N .TP 20 .ti +4 COMPLEX ALPHA .TP 20 .ti +4 INTEGER DESCX( * ) .TP 20 .ti +4 COMPLEX TAU( * ), X( * ) .SH PURPOSE PCLARFG generates a complex elementary reflector H of order n, such that H * sub( X ) = H * ( x(iax,jax) ; x ) = ( alpha ; 0 ), H' * H = I, .br where alpha is a real scalar, and sub( X ) is an (N-1)-element complex distributed vector X(IX:IX+N-2,JX) if INCX = 1 and X(IX,JX:JX+N-2) if INCX = DESCX(M_). H is represented in the form H = I - tau * ( 1 ; v ) * ( 1 v' ), .br where tau is a complex scalar and v is a complex (N-1)-element vector. Note that H is not Hermitian. .br If the elements of sub( X ) are all zero and X(IAX,JAX) is real, then tau = 0 and H is taken to be the unit matrix. .br Otherwise 1 <= real(tau) <= 2 and abs(tau-1) <= 1. .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclically distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ ) The descriptor type. In this case, DTYPE_A = 1.
.br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br Because vectors may be viewed as a subclass of matrices, a distributed vector is considered to be a distributed matrix. .SH ARGUMENTS .TP 8 N (global input) INTEGER The global order of the elementary reflector. N >= 0. .TP 8 ALPHA (local output) COMPLEX On exit, alpha is computed in the process scope having the vector sub( X ). 
.TP 8 IAX (global input) INTEGER The global row index in X of X(IAX,JAX). .TP 8 JAX (global input) INTEGER The global column index in X of X(IAX,JAX). .TP 8 X (local input/local output) COMPLEX, pointer into the local memory to an array of dimension (LLD_X,*). This array contains the local pieces of the distributed vector sub( X ). Before entry, the incremented array sub( X ) must contain the vector x. On exit, it is overwritten with the vector v. .TP 8 IX (global input) INTEGER The row index in the global array X indicating the first row of sub( X ). .TP 8 JX (global input) INTEGER The column index in the global array X indicating the first column of sub( X ). .TP 8 DESCX (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix X. .TP 8 INCX (global input) INTEGER The global increment for the elements of X. Only two values of INCX are supported in this version, namely 1 and M_X. INCX must not be zero. .TP 8 TAU (local output) COMPLEX, array, dimension LOCc(JX) if INCX = 1, and LOCr(IX) otherwise. This array contains the Householder scalars related to the Householder vectors. TAU is tied to the distributed matrix X. .TH PCLARFT l "12 May 1997" "LAPACK version 1.5" "LAPACK auxiliary routine (version 1.5)" .SH NAME PCLARFT - forms the triangular factor T of a complex block reflector H of order n, which is defined as a product of k elementary reflectors .SH SYNOPSIS .TP 20 SUBROUTINE PCLARFT( DIRECT, STOREV, N, K, V, IV, JV, DESCV, TAU, T, WORK ) .TP 20 .ti +4 CHARACTER DIRECT, STOREV .TP 20 .ti +4 INTEGER IV, JV, K, N .TP 20 .ti +4 INTEGER DESCV( * ) .TP 20 .ti +4 COMPLEX TAU( * ), T( * ), V( * ), WORK( * ) .SH PURPOSE PCLARFT forms the triangular factor T of a complex block reflector H of order n, which is defined as a product of k elementary reflectors. If DIRECT = 'F', H = H(1) H(2) . . .
H(k) and T is upper triangular; If DIRECT = 'B', H = H(k) . . . H(2) H(1) and T is lower triangular. If STOREV = 'C', the vector which defines the elementary reflector H(i) is stored in the i-th column of the distributed matrix V, and H = I - V * T * V' .br If STOREV = 'R', the vector which defines the elementary reflector H(i) is stored in the i-th row of the distributed matrix V, and H = I - V' * T * V .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). 
Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 DIRECT (global input) CHARACTER*1 Specifies the order in which the elementary reflectors are multiplied to form the block reflector: .br = 'F': H = H(1) H(2) . . . H(k) (Forward) .br = 'B': H = H(k) . . . H(2) H(1) (Backward) .TP 8 STOREV (global input) CHARACTER*1 Specifies how the vectors which define the elementary reflectors are stored (see also Further Details): .br = 'R': rowwise .TP 8 N (global input) INTEGER The order of the block reflector H. N >= 0. .TP 8 K (global input) INTEGER The order of the triangular factor T (= the number of elementary reflectors). 1 <= K <= MB_V (= NB_V). .TP 8 V (input/output) COMPLEX pointer into the local memory to an array of local dimension (LOCr(IV+N-1),LOCc(JV+K-1)) if STOREV = 'C', and (LOCr(IV+K-1),LOCc(JV+N-1)) if STOREV = 'R'. The distributed matrix V contains the Householder vectors. See further details. .TP 8 IV (global input) INTEGER The row index in the global array V indicating the first row of sub( V ). .TP 8 JV (global input) INTEGER The column index in the global array V indicating the first column of sub( V ). .TP 8 DESCV (global and local input) INTEGER array of dimension DLEN_. 
The array descriptor for the distributed matrix V. .TP 8 TAU (local input) COMPLEX, array, dimension LOCr(IV+K-1) if INCV = M_V, and LOCc(JV+K-1) otherwise. This array contains the Householder scalars related to the Householder vectors. TAU is tied to the distributed matrix V. .TP 8 T (local output) COMPLEX array, dimension (NB_V,NB_V) if STOREV = 'Col', and (MB_V,MB_V) otherwise. It contains the k-by-k triangular factor of the block reflector associated with V. If DIRECT = 'F', T is upper triangular; if DIRECT = 'B', T is lower triangular. .TP 8 WORK (local workspace) COMPLEX array, dimension (K*(K-1)/2) .SH FURTHER DETAILS The shape of the matrix V and the storage of the vectors which define the H(i) is best illustrated by the following example with n = 5 and k = 3. The elements equal to 1 are not stored; the corresponding array elements are modified but restored on exit. The rest of the array is not used. .br
DIRECT = 'F' and STOREV = 'C':            DIRECT = 'F' and STOREV = 'R':
.br
V( IV:IV+N-1,    (  1       )             V( IV:IV+K-1,    (  1 v1 v1 v1 v1 )
   JV:JV+K-1 ) = ( v1  1    )                JV:JV+N-1 ) = (     1 v2 v2 v2 )
                 ( v1 v2  1 )                              (        1 v3 v3 )
                 ( v1 v2 v3 )
                 ( v1 v2 v3 )
.br
DIRECT = 'B' and STOREV = 'C':            DIRECT = 'B' and STOREV = 'R':
.br
V( IV:IV+N-1,    ( v1 v2 v3 )             V( IV:IV+K-1,    ( v1 v1  1       )
   JV:JV+K-1 ) = ( v1 v2 v3 )                JV:JV+N-1 ) = ( v2 v2 v2  1    )
                 (  1 v2 v3 )                              ( v3 v3 v3 v3  1 )
                 (     1 v3 )
                 (        1 )
.br
scalapack-doc-1.5/man/manl/pclarz.l
.TH PCLARZ l "12 May 1997" "LAPACK version 1.5" "LAPACK auxiliary routine (version 1.5)" .SH NAME PCLARZ - applies a complex elementary reflector Q to a complex M-by-N distributed matrix sub( C ) = C(IC:IC+M-1,JC:JC+N-1), from either the left or the right .SH SYNOPSIS .TP 19 SUBROUTINE PCLARZ( SIDE, M, N, L, V, IV, JV, DESCV, INCV, TAU, C, IC, JC, DESCC, WORK ) .TP 19 .ti +4 CHARACTER SIDE .TP 19 .ti +4 INTEGER IC, INCV, IV, JC, JV, L, M, N .TP 19 .ti +4 INTEGER DESCC( * ), DESCV( * ) .TP 19 .ti +4 COMPLEX C( * ),
TAU( * ), V( * ), WORK( * ) .SH PURPOSE PCLARZ applies a complex elementary reflector Q to a complex M-by-N distributed matrix sub( C ) = C(IC:IC+M-1,JC:JC+N-1), from either the left or the right. Q is represented in the form Q = I - tau * v * v' .br where tau is a complex scalar and v is a complex vector. .br If tau = 0, then Q is taken to be the unit matrix. .br Q is a product of k elementary reflectors as returned by PCTZRZF. Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). 
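The LOCr()/LOCc() quantities and the LLD_A bound used throughout these pages are computed by the NUMROC tool function. The following is a hedged Python sketch of NUMROC's block-cyclic counting (a transcription of the documented semantics, not the library source; 0-based process coordinates are assumed, matching BLACS conventions):

```python
def numroc(n, nb, iproc, isrcproc, nprocs):
    """Number of elements of an n-element dimension, distributed in blocks
    of size nb over nprocs processes, owned by process iproc when the
    first block resides on process isrcproc (all indices 0-based)."""
    mydist = (nprocs + iproc - isrcproc) % nprocs  # my distance from the source process
    nblocks = n // nb                   # number of full blocks in the dimension
    num = (nblocks // nprocs) * nb      # every process owns at least this many elements
    extrablks = nblocks % nprocs        # leftover full blocks
    if mydist < extrablks:              # I own one of the leftover full blocks
        num += nb
    elif mydist == extrablks:           # I own the trailing partial block
        num += n % nb
    return num

# LOCr(M) for M = 13 rows, MB = 4, on a grid with NPROW = 2, RSRC = 0:
print([numroc(13, 4, p, 0, 2) for p in range(2)])   # -> [8, 5]; the counts sum to 13
```

Note that the result respects the documented upper bound ceil(ceil(M/MB_A)/NPROW)*MB_A, which is 8 in this example.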
Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br Because vectors may be viewed as a subclass of matrices, a distributed vector is considered to be a distributed matrix. Restrictions .br ============ .br If SIDE = 'Left' and INCV = 1, then the row process having the first entry V(IV,JV) must also own C(IC+M-L,JC:JC+N-1). Moreover, MOD(IV-1,MB_V) must be equal to MOD(IC+M-L-1,MB_C); if INCV = M_V, only the last equality must be satisfied. .br If SIDE = 'Right' and INCV = M_V, then the column process having the first entry V(IV,JV) must also own C(IC:IC+M-1,JC+N-L) and MOD(JV-1,NB_V) must be equal to MOD(JC+N-L-1,NB_C); if INCV = 1, only the last equality must be satisfied. .br .SH ARGUMENTS .TP 8 SIDE (global input) CHARACTER = 'L': form Q * sub( C ), .br = 'R': form sub( C ) * Q. .TP 8 M (global input) INTEGER The number of rows to be operated on i.e the number of rows of the distributed submatrix sub( C ). M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on i.e the number of columns of the distributed submatrix sub( C ). N >= 0. .TP 8 L (global input) INTEGER The number of columns of the distributed submatrix sub( A ) containing the meaningful part of the Householder reflectors.
If SIDE = 'L', M >= L >= 0, if SIDE = 'R', N >= L >= 0. .TP 8 V (local input) COMPLEX pointer into the local memory to an array of dimension (LLD_V,*) containing the local pieces of the distributed vectors V representing the Householder transformation Q, V(IV:IV+L-1,JV) if SIDE = 'L' and INCV = 1, .br V(IV,JV:JV+L-1) if SIDE = 'L' and INCV = M_V, .br V(IV:IV+L-1,JV) if SIDE = 'R' and INCV = 1, .br V(IV,JV:JV+L-1) if SIDE = 'R' and INCV = M_V, The vector v in the representation of Q. V is not used if TAU = 0. .TP 8 IV (global input) INTEGER The row index in the global array V indicating the first row of sub( V ). .TP 8 JV (global input) INTEGER The column index in the global array V indicating the first column of sub( V ). .TP 8 DESCV (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix V. .TP 8 INCV (global input) INTEGER The global increment for the elements of V. Only two values of INCV are supported in this version, namely 1 and M_V. INCV must not be zero. .TP 8 TAU (local input) COMPLEX, array, dimension LOCc(JV) if INCV = 1, and LOCr(IV) otherwise. This array contains the Householder scalars related to the Householder vectors. TAU is tied to the distributed matrix V. .TP 8 C (local input/local output) COMPLEX pointer into the local memory to an array of dimension (LLD_C, LOCc(JC+N-1) ), containing the local pieces of sub( C ). On exit, sub( C ) is overwritten by the Q * sub( C ) if SIDE = 'L', or sub( C ) * Q if SIDE = 'R'. .TP 8 IC (global input) INTEGER The row index in the global array C indicating the first row of sub( C ). .TP 8 JC (global input) INTEGER The column index in the global array C indicating the first column of sub( C ). .TP 8 DESCC (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix C. 
.TP 8 WORK (local workspace) COMPLEX array, dimension (LWORK) If INCV = 1, if SIDE = 'L', if IVCOL = ICCOL, LWORK >= NqC0 else LWORK >= MpC0 + MAX( 1, NqC0 ) end if else if SIDE = 'R', LWORK >= NqC0 + MAX( MAX( 1, MpC0 ), NUMROC( NUMROC( N+ICOFFC,NB_V,0,0,NPCOL ),NB_V,0,0,LCMQ ) ) end if else if INCV = M_V, if SIDE = 'L', LWORK >= MpC0 + MAX( MAX( 1, NqC0 ), NUMROC( NUMROC( M+IROFFC,MB_V,0,0,NPROW ),MB_V,0,0,LCMP ) ) else if SIDE = 'R', if IVROW = ICROW, LWORK >= MpC0 else LWORK >= NqC0 + MAX( 1, MpC0 ) end if end if end if where LCM is the least common multiple of NPROW and NPCOL and LCM = ILCM( NPROW, NPCOL ), LCMP = LCM / NPROW, LCMQ = LCM / NPCOL, IROFFC = MOD( IC-1, MB_C ), ICOFFC = MOD( JC-1, NB_C ), ICROW = INDXG2P( IC, MB_C, MYROW, RSRC_C, NPROW ), ICCOL = INDXG2P( JC, NB_C, MYCOL, CSRC_C, NPCOL ), MpC0 = NUMROC( M+IROFFC, MB_C, MYROW, ICROW, NPROW ), NqC0 = NUMROC( N+ICOFFC, NB_C, MYCOL, ICCOL, NPCOL ), ILCM, INDXG2P and NUMROC are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. Alignment requirements ====================== The distributed submatrices V(IV:*, JV:*) and C(IC:IC+M-1,JC:JC+N-1) must verify some alignment properties, namely the following expressions should be true: MB_V = NB_V, If INCV = 1, If SIDE = 'Left', ( MB_V.EQ.MB_C .AND. IROFFV.EQ.IROFFC .AND. IVROW.EQ.ICROW ) If SIDE = 'Right', ( MB_V.EQ.NB_A .AND. MB_V.EQ.NB_C .AND. IROFFV.EQ.ICOFFC ) else if INCV = M_V, If SIDE = 'Left', ( MB_V.EQ.NB_V .AND. MB_V.EQ.MB_C .AND. ICOFFV.EQ.IROFFC ) If SIDE = 'Right', ( NB_V.EQ.NB_C .AND. ICOFFV.EQ.ICOFFC .AND. 
IVCOL.EQ.ICCOL ) end if
scalapack-doc-1.5/man/manl/pclarzb.l
.TH PCLARZB l "12 May 1997" "LAPACK version 1.5" "LAPACK auxiliary routine (version 1.5)" .SH NAME PCLARZB - applies a complex block reflector Q or its conjugate transpose Q**H to a complex M-by-N distributed matrix sub( C ) denoting C(IC:IC+M-1,JC:JC+N-1), from the left or the right .SH SYNOPSIS .TP 20 SUBROUTINE PCLARZB( SIDE, TRANS, DIRECT, STOREV, M, N, K, L, V, IV, JV, DESCV, T, C, IC, JC, DESCC, WORK ) .TP 20 .ti +4 CHARACTER DIRECT, SIDE, STOREV, TRANS .TP 20 .ti +4 INTEGER IC, IV, JC, JV, K, L, M, N .TP 20 .ti +4 INTEGER DESCC( * ), DESCV( * ) .TP 20 .ti +4 COMPLEX C( * ), T( * ), V( * ), WORK( * ) .SH PURPOSE PCLARZB applies a complex block reflector Q or its conjugate transpose Q**H to a complex M-by-N distributed matrix sub( C ) denoting C(IC:IC+M-1,JC:JC+N-1), from the left or the right. Q is a product of k elementary reflectors as returned by PCTZRZF. Currently, only STOREV = 'R' and DIRECT = 'B' are supported. Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A.
.br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 SIDE (global input) CHARACTER = 'L': apply Q or Q**H from the Left; .br = 'R': apply Q or Q**H from the Right. .TP 8 TRANS (global input) CHARACTER .br = 'N': No transpose, apply Q; .br = 'C': Conjugate transpose, apply Q**H. .TP 8 DIRECT (global input) CHARACTER Indicates how H is formed from a product of elementary reflectors = 'F': H = H(1) H(2) . . . H(k) (Forward, not supported yet) .br = 'B': H = H(k) . . . 
H(2) H(1) (Backward) .TP 8 STOREV (global input) CHARACTER Indicates how the vectors which define the elementary reflectors are stored: .br = 'C': Columnwise (not supported yet) .br = 'R': Rowwise .TP 8 M (global input) INTEGER The number of rows to be operated on i.e the number of rows of the distributed submatrix sub( C ). M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on i.e the number of columns of the distributed submatrix sub( C ). N >= 0. .TP 8 K (global input) INTEGER The order of the matrix T (= the number of elementary reflectors whose product defines the block reflector). .TP 8 L (global input) INTEGER The columns of the distributed submatrix sub( A ) containing the meaningful part of the Householder reflectors. If SIDE = 'L', M >= L >= 0, if SIDE = 'R', N >= L >= 0. .TP 8 V (local input) COMPLEX pointer into the local memory to an array of dimension (LLD_V, LOCc(JV+M-1)) if SIDE = 'L', (LLD_V, LOCc(JV+N-1)) if SIDE = 'R'. It contains the local pieces of the distributed vectors V representing the Householder transformation as returned by PCTZRZF. LLD_V >= LOCr(IV+K-1). .TP 8 IV (global input) INTEGER The row index in the global array V indicating the first row of sub( V ). .TP 8 JV (global input) INTEGER The column index in the global array V indicating the first column of sub( V ). .TP 8 DESCV (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix V. .TP 8 T (local input) COMPLEX array, dimension MB_V by MB_V The lower triangular matrix T in the representation of the block reflector. .TP 8 C (local input/local output) COMPLEX pointer into the local memory to an array of dimension (LLD_C,LOCc(JC+N-1)). On entry, the M-by-N distributed matrix sub( C ). On exit, sub( C ) is overwritten by Q*sub( C ) or Q'*sub( C ) or sub( C )*Q or sub( C )*Q'. .TP 8 IC (global input) INTEGER The row index in the global array C indicating the first row of sub( C ). 
.TP 8 JC (global input) INTEGER The column index in the global array C indicating the first column of sub( C ). .TP 8 DESCC (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix C. .TP 8 WORK (local workspace) COMPLEX array, dimension (LWORK) If STOREV = 'C', if SIDE = 'L', LWORK >= ( NqC0 + MpC0 ) * K else if SIDE = 'R', LWORK >= ( NqC0 + MAX( NpV0 + NUMROC( NUMROC( N+ICOFFC, NB_V, 0, 0, NPCOL ), NB_V, 0, 0, LCMQ ), MpC0 ) ) * K end if else if STOREV = 'R', if SIDE = 'L', LWORK >= ( MpC0 + MAX( MqV0 + NUMROC( NUMROC( M+IROFFC, MB_V, 0, 0, NPROW ), MB_V, 0, 0, LCMP ), NqC0 ) ) * K else if SIDE = 'R', LWORK >= ( MpC0 + NqC0 ) * K end if end if where LCMP = LCM / NPROW and LCMQ = LCM / NPCOL with LCM = ILCM( NPROW, NPCOL ), IROFFV = MOD( IV-1, MB_V ), ICOFFV = MOD( JV-1, NB_V ), IVROW = INDXG2P( IV, MB_V, MYROW, RSRC_V, NPROW ), IVCOL = INDXG2P( JV, NB_V, MYCOL, CSRC_V, NPCOL ), MqV0 = NUMROC( M+ICOFFV, NB_V, MYCOL, IVCOL, NPCOL ), NpV0 = NUMROC( N+IROFFV, MB_V, MYROW, IVROW, NPROW ), IROFFC = MOD( IC-1, MB_C ), ICOFFC = MOD( JC-1, NB_C ), ICROW = INDXG2P( IC, MB_C, MYROW, RSRC_C, NPROW ), ICCOL = INDXG2P( JC, NB_C, MYCOL, CSRC_C, NPCOL ), MpC0 = NUMROC( M+IROFFC, MB_C, MYROW, ICROW, NPROW ), NpC0 = NUMROC( N+ICOFFC, MB_C, MYROW, ICROW, NPROW ), NqC0 = NUMROC( N+ICOFFC, NB_C, MYCOL, ICCOL, NPCOL ), ILCM, INDXG2P and NUMROC are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. Alignment requirements ====================== The distributed submatrices V(IV:*, JV:*) and C(IC:IC+M-1,JC:JC+N-1) must verify some alignment properties, namely the following expressions should be true: If STOREV = 'Columnwise' If SIDE = 'Left', ( MB_V.EQ.MB_C .AND. IROFFV.EQ.IROFFC .AND. IVROW.EQ.ICROW ) If SIDE = 'Right', ( MB_V.EQ.NB_C .AND. IROFFV.EQ.ICOFFC ) else if STOREV = 'Rowwise' If SIDE = 'Left', ( NB_V.EQ.MB_C .AND. ICOFFV.EQ.IROFFC ) If SIDE = 'Right', ( NB_V.EQ.NB_C .AND.
ICOFFV.EQ.ICOFFC .AND. IVCOL.EQ.ICCOL ) end if
scalapack-doc-1.5/man/manl/pclarzc.l
.TH PCLARZC l "12 May 1997" "LAPACK version 1.5" "LAPACK auxiliary routine (version 1.5)" .SH NAME PCLARZC - applies a complex elementary reflector Q**H to a complex M-by-N distributed matrix sub( C ) = C(IC:IC+M-1,JC:JC+N-1), from either the left or the right .SH SYNOPSIS .TP 20 SUBROUTINE PCLARZC( SIDE, M, N, L, V, IV, JV, DESCV, INCV, TAU, C, IC, JC, DESCC, WORK ) .TP 20 .ti +4 CHARACTER SIDE .TP 20 .ti +4 INTEGER IC, INCV, IV, JC, JV, L, M, N .TP 20 .ti +4 INTEGER DESCC( * ), DESCV( * ) .TP 20 .ti +4 COMPLEX C( * ), TAU( * ), V( * ), WORK( * ) .SH PURPOSE PCLARZC applies a complex elementary reflector Q**H to a complex M-by-N distributed matrix sub( C ) = C(IC:IC+M-1,JC:JC+N-1), from either the left or the right. Q is represented in the form Q = I - tau * v * v' .br where tau is a complex scalar and v is a complex vector. .br If tau = 0, then Q is taken to be the unit matrix. .br Q is a product of k elementary reflectors as returned by PCTZRZF. Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A.
.br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br Because vectors may be viewed as a subclass of matrices, a distributed vector is considered to be a distributed matrix. Restrictions .br ============ .br If SIDE = 'Left' and INCV = 1, then the row process having the first entry V(IV,JV) must also own C(IC+M-L,JC:JC+N-1). Moreover, MOD(IV-1,MB_V) must be equal to MOD(IC+M-L-1,MB_C); if INCV = M_V, only the last equality must be satisfied. .br If SIDE = 'Right' and INCV = M_V, then the column process having the first entry V(IV,JV) must also own C(IC:IC+M-1,JC+N-L) and MOD(JV-1,NB_V) must be equal to MOD(JC+N-L-1,NB_C); if INCV = 1, only the last equality must be satisfied.
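For the left-side case, the update performed by this family of routines reduces to a rank-1 correction: sub( C ) <- sub( C ) - conj(tau) * v * (v' * sub( C )). The following serial Python sketch shows that arithmetic; it is illustrative only (the parallel routine additionally communicates v and the row vector v' * sub( C ) across the process grid, which this sketch omits):

```python
def apply_qh_left(tau, v, C):
    """Compute Q**H * C = (I - conj(tau) * v * v**H) * C for an elementary
    reflector; v is a length-m complex vector, C an m-by-n complex matrix."""
    m, n = len(C), len(C[0])
    if tau == 0:                         # Q is the unit matrix in this case
        return [row[:] for row in C]
    # w = v**H * C, a length-n row vector
    w = [sum(v[i].conjugate() * C[i][j] for i in range(m)) for j in range(n)]
    ct = complex(tau).conjugate()
    # rank-1 update: C - conj(tau) * v * w
    return [[C[i][j] - ct * v[i] * w[j] for j in range(n)] for i in range(m)]

# v = e1 with tau = 2 gives Q = diag(-1, 1, 1): the first row of C flips sign.
C = [[1+0j, 2+0j], [3+0j, 4+0j], [5+0j, 6+0j]]
print(apply_qh_left(2, [1+0j, 0j, 0j], C))   # first row becomes [-1, -2]
```

Forming v * (v**H * C) rather than the explicit matrix Q is what keeps the cost at O(m*n) instead of O(m*m*n); the distributed routine exploits the same factored form.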
.br .SH ARGUMENTS .TP 8 SIDE (global input) CHARACTER = 'L': form Q**H * sub( C ), .br = 'R': form sub( C ) * Q**H. .TP 8 M (global input) INTEGER The number of rows to be operated on i.e the number of rows of the distributed submatrix sub( C ). M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on i.e the number of columns of the distributed submatrix sub( C ). N >= 0. .TP 8 L (global input) INTEGER The columns of the distributed submatrix sub( A ) containing the meaningful part of the Householder reflectors. If SIDE = 'L', M >= L >= 0, if SIDE = 'R', N >= L >= 0. .TP 8 V (local input) COMPLEX pointer into the local memory to an array of dimension (LLD_V,*) containing the local pieces of the distributed vectors V representing the Householder transformation Q, V(IV:IV+L-1,JV) if SIDE = 'L' and INCV = 1, .br V(IV,JV:JV+L-1) if SIDE = 'L' and INCV = M_V, .br V(IV:IV+L-1,JV) if SIDE = 'R' and INCV = 1, .br V(IV,JV:JV+L-1) if SIDE = 'R' and INCV = M_V, The vector v in the representation of Q. V is not used if TAU = 0. .TP 8 IV (global input) INTEGER The row index in the global array V indicating the first row of sub( V ). .TP 8 JV (global input) INTEGER The column index in the global array V indicating the first column of sub( V ). .TP 8 DESCV (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix V. .TP 8 INCV (global input) INTEGER The global increment for the elements of V. Only two values of INCV are supported in this version, namely 1 and M_V. INCV must not be zero. .TP 8 TAU (local input) COMPLEX, array, dimension LOCc(JV) if INCV = 1, and LOCr(IV) otherwise. This array contains the Householder scalars related to the Householder vectors. TAU is tied to the distributed matrix V. .TP 8 C (local input/local output) COMPLEX pointer into the local memory to an array of dimension (LLD_C, LOCc(JC+N-1) ), containing the local pieces of sub( C ). 
On exit, sub( C ) is overwritten by the Q**H * sub( C ) if SIDE = 'L', or sub( C ) * Q**H if SIDE = 'R'. .TP 8 IC (global input) INTEGER The row index in the global array C indicating the first row of sub( C ). .TP 8 JC (global input) INTEGER The column index in the global array C indicating the first column of sub( C ). .TP 8 DESCC (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix C. .TP 8 WORK (local workspace) COMPLEX array, dimension (LWORK) If INCV = 1, if SIDE = 'L', if IVCOL = ICCOL, LWORK >= NqC0 else LWORK >= MpC0 + MAX( 1, NqC0 ) end if else if SIDE = 'R', LWORK >= NqC0 + MAX( MAX( 1, MpC0 ), NUMROC( NUMROC( N+ICOFFC,NB_V,0,0,NPCOL ),NB_V,0,0,LCMQ ) ) end if else if INCV = M_V, if SIDE = 'L', LWORK >= MpC0 + MAX( MAX( 1, NqC0 ), NUMROC( NUMROC( M+IROFFC,MB_V,0,0,NPROW ),MB_V,0,0,LCMP ) ) else if SIDE = 'R', if IVROW = ICROW, LWORK >= MpC0 else LWORK >= NqC0 + MAX( 1, MpC0 ) end if end if end if where LCM is the least common multiple of NPROW and NPCOL and LCM = ILCM( NPROW, NPCOL ), LCMP = LCM / NPROW, LCMQ = LCM / NPCOL, IROFFC = MOD( IC-1, MB_C ), ICOFFC = MOD( JC-1, NB_C ), ICROW = INDXG2P( IC, MB_C, MYROW, RSRC_C, NPROW ), ICCOL = INDXG2P( JC, NB_C, MYCOL, CSRC_C, NPCOL ), MpC0 = NUMROC( M+IROFFC, MB_C, MYROW, ICROW, NPROW ), NqC0 = NUMROC( N+ICOFFC, NB_C, MYCOL, ICCOL, NPCOL ), ILCM, INDXG2P and NUMROC are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. Alignment requirements ====================== The distributed submatrices V(IV:*, JV:*) and C(IC:IC+M-1,JC:JC+N-1) must verify some alignment properties, namely the following expressions should be true: MB_V = NB_V, If INCV = 1, If SIDE = 'Left', ( MB_V.EQ.MB_C .AND. IROFFV.EQ.IROFFC .AND. IVROW.EQ.ICROW ) If SIDE = 'Right', ( MB_V.EQ.NB_A .AND. MB_V.EQ.NB_C .AND. IROFFV.EQ.ICOFFC ) else if INCV = M_V, If SIDE = 'Left', ( MB_V.EQ.NB_V .AND. MB_V.EQ.MB_C .AND. 
ICOFFV.EQ.IROFFC ) If SIDE = 'Right', ( NB_V.EQ.NB_C .AND. ICOFFV.EQ.ICOFFC .AND. IVCOL.EQ.ICCOL ) end if
scalapack-doc-1.5/man/manl/pclarzt.l
.TH PCLARZT l "12 May 1997" "LAPACK version 1.5" "LAPACK auxiliary routine (version 1.5)" .SH NAME PCLARZT - forms the triangular factor T of a complex block reflector H of order > n, which is defined as a product of k elementary reflectors as returned by PCTZRZF .SH SYNOPSIS .TP 20 SUBROUTINE PCLARZT( DIRECT, STOREV, N, K, V, IV, JV, DESCV, TAU, T, WORK ) .TP 20 .ti +4 CHARACTER DIRECT, STOREV .TP 20 .ti +4 INTEGER IV, JV, K, N .TP 20 .ti +4 INTEGER DESCV( * ) .TP 20 .ti +4 COMPLEX TAU( * ), T( * ), V( * ), WORK( * ) .SH PURPOSE PCLARZT forms the triangular factor T of a complex block reflector H of order > n, which is defined as a product of k elementary reflectors as returned by PCTZRZF. If DIRECT = 'F', H = H(1) H(2) . . . H(k) and T is upper triangular; If DIRECT = 'B', H = H(k) . . . H(2) H(1) and T is lower triangular. If STOREV = 'C', the vector which defines the elementary reflector H(i) is stored in the i-th column of the array V, and .br H = I - V * T * V' .br If STOREV = 'R', the vector which defines the elementary reflector H(i) is stored in the i-th row of the array V, and .br H = I - V' * T * V .br Currently, only STOREV = 'R' and DIRECT = 'B' are supported. Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array".
.br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). 
An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 DIRECT (global input) CHARACTER Specifies the order in which the elementary reflectors are multiplied to form the block reflector: .br = 'F': H = H(1) H(2) . . . H(k) (Forward, not supported yet) .br = 'B': H = H(k) . . . H(2) H(1) (Backward) .TP 8 STOREV (global input) CHARACTER Specifies how the vectors which define the elementary reflectors are stored (see also Further Details): .br = 'R': rowwise .TP 8 N (global input) INTEGER The number of meaningful entries of the block reflector H. N >= 0. .TP 8 K (global input) INTEGER The order of the triangular factor T (= the number of elementary reflectors). 1 <= K <= MB_V (= NB_V). .TP 8 V (input/output) COMPLEX pointer into the local memory to an array of local dimension (LOCr(IV+K-1),LOCc(JV+N-1)). The distributed matrix V contains the Householder vectors. See further details. .TP 8 IV (global input) INTEGER The row index in the global array V indicating the first row of sub( V ). .TP 8 JV (global input) INTEGER The column index in the global array V indicating the first column of sub( V ). .TP 8 DESCV (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix V. .TP 8 TAU (local input) COMPLEX, array, dimension LOCr(IV+K-1) if INCV = M_V, and LOCc(JV+K-1) otherwise. This array contains the Householder scalars related to the Householder vectors. TAU is tied to the distributed matrix V. .TP 8 T (local output) COMPLEX array, dimension (MB_V,MB_V) It contains the k-by-k triangular factor of the block reflector associated with V. T is lower triangular. .TP 8 WORK (local workspace) COMPLEX array, dimension (K*(K-1)/2) .SH FURTHER DETAILS The shape of the matrix V and the storage of the vectors which define the H(i) is best illustrated by the following example with n = 5 and k = 3. 
The elements equal to 1 are not stored; the corresponding array elements are modified but restored on exit. The rest of the array is not used. .br
DIRECT = 'F' and STOREV = 'C':          DIRECT = 'F' and STOREV = 'R':
.br
                                             ______V_____
       ( v1 v2 v3 )                         /            \
       ( v1 v2 v3 )                     ( v1 v1 v1 v1 v1 . . . . 1 )
   V = ( v1 v2 v3 )                     ( v2 v2 v2 v2 v2 . . . 1   )
       ( v1 v2 v3 )                     ( v3 v3 v3 v3 v3 . . 1     )
       ( v1 v2 v3 )
          .  .  .
          .  .  .
          1  .  .
             1  .
                1
.br
DIRECT = 'B' and STOREV = 'C':          DIRECT = 'B' and STOREV = 'R':
.br
                                                   ______V_____
          1                                       /            \
          .  1                          ( 1 . . . . v1 v1 v1 v1 v1 )
          .  .  1                       ( . 1 . . . v2 v2 v2 v2 v2 )
          .  .  .                       ( . . 1 . . v3 v3 v3 v3 v3 )
          .  .  .
       ( v1 v2 v3 )
       ( v1 v2 v3 )
   V = ( v1 v2 v3 )
       ( v1 v2 v3 )
       ( v1 v2 v3 )
.br
scalapack-doc-1.5/man/manl/pclascl.l
.TH PCLASCL l "12 May 1997" "LAPACK version 1.5" "LAPACK auxiliary routine (version 1.5)" .SH NAME PCLASCL - multiplies the M-by-N complex distributed matrix sub( A ) denoting A(IA:IA+M-1,JA:JA+N-1) by the real scalar CTO/CFROM .SH SYNOPSIS .TP 20 SUBROUTINE PCLASCL( TYPE, CFROM, CTO, M, N, A, IA, JA, DESCA, INFO ) .TP 20 .ti +4 CHARACTER TYPE .TP 20 .ti +4 INTEGER IA, INFO, JA, M, N .TP 20 .ti +4 REAL CFROM, CTO .TP 20 .ti +4 INTEGER DESCA( * ) .TP 20 .ti +4 COMPLEX A( * ) .SH PURPOSE PCLASCL multiplies the M-by-N complex distributed matrix sub( A ) denoting A(IA:IA+M-1,JA:JA+N-1) by the real scalar CTO/CFROM. This is done without over/underflow as long as the final result CTO * A(I,J) / CFROM does not over/underflow. TYPE specifies that sub( A ) may be full, upper triangular, lower triangular or upper Hessenberg. .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array.
Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). 
An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 TYPE (global input) CHARACTER TYPE indicates the storage type of the input distributed matrix. = 'G': sub( A ) is a full matrix, .br = 'L': sub( A ) is a lower triangular matrix, .br = 'U': sub( A ) is an upper triangular matrix, .br = 'H': sub( A ) is an upper Hessenberg matrix. .TP 8 CFROM (global input) REAL CTO (global input) REAL The distributed matrix sub( A ) is multiplied by CTO/CFROM. A(I,J) is computed without over/underflow if the final result CTO * A(I,J) / CFROM can be represented without over/underflow. CFROM must be nonzero. .TP 8 M (global input) INTEGER The number of rows to be operated on, i.e. the number of rows of the distributed submatrix sub( A ). M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on, i.e. the number of columns of the distributed submatrix sub( A ). N >= 0. .TP 8 A (local input/local output) COMPLEX pointer into the local memory to an array of dimension (LLD_A,LOCc(JA+N-1)). This array contains the local pieces of the distributed matrix sub( A ). On exit, this array contains the local pieces of the distributed matrix multiplied by CTO/CFROM. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 INFO (local output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-th entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i.
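The LOCr/LOCc quantities and the ceiling-based upper bound quoted in the notes above can be checked numerically. The following is an illustrative pure-Python transcription of the block-cyclic counting rule used by the NUMROC tool function; the helper name `numroc` and the example sizes are ours, not part of the library's API.

```python
import math

def numroc(n, nb, iproc, isrc, nprocs):
    """Number of rows/columns of an n-long dimension, split into nb-sized
    blocks dealt cyclically over nprocs processes, that land on process
    iproc when the first block lives on process isrc."""
    mydist = (nprocs + iproc - isrc) % nprocs   # distance from the source process
    nblocks = n // nb                           # number of complete blocks
    num = (nblocks // nprocs) * nb              # complete "rounds" of blocks
    extra = nblocks % nprocs                    # leftover complete blocks
    if mydist < extra:
        num += nb                               # this process gets one more full block
    elif mydist == extra:
        num += n % nb                           # this process gets the trailing partial block
    return num

# Every element is owned by exactly one process, and no process exceeds the bound.
M, MB, RSRC, NPROW = 100, 8, 1, 4
locs = [numroc(M, MB, p, RSRC, NPROW) for p in range(NPROW)]
bound = math.ceil(math.ceil(M / MB) / NPROW) * MB
print(locs, sum(locs), bound)
```

Summing the per-process counts recovers M, and each count stays below the ceil(ceil(M/MB_A)/NPROW)*MB_A bound stated above.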
scalapack-doc-1.5/man/manl/pclase2.l0100644000056400000620000001215706335610621016755 0ustar pfrauenfstaff.TH PCLASE2 l "12 May 1997" "LAPACK version 1.5" "LAPACK auxiliary routine (version 1.5)" .SH NAME PCLASE2 - initialize an M-by-N distributed matrix sub( A ) denoting A(IA:IA+M-1,JA:JA+N-1) to BETA on the diagonal and ALPHA on the offdiagonals .SH SYNOPSIS .TP 20 SUBROUTINE PCLASE2( UPLO, M, N, ALPHA, BETA, A, IA, JA, DESCA ) .TP 20 .ti +4 CHARACTER UPLO .TP 20 .ti +4 INTEGER IA, JA, M, N .TP 20 .ti +4 COMPLEX ALPHA, BETA .TP 20 .ti +4 INTEGER DESCA( * ) .TP 20 .ti +4 COMPLEX A( * ) .SH PURPOSE PCLASE2 initializes an M-by-N distributed matrix sub( A ) denoting A(IA:IA+M-1,JA:JA+N-1) to BETA on the diagonal and ALPHA on the offdiagonals. PCLASE2 requires that only one dimension of the matrix operand is distributed. .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ ) The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distributed over. The context itself is global, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array.
.br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 UPLO (global input) CHARACTER Specifies the part of the distributed matrix sub( A ) to be set: .br = 'U': Upper triangular part is set; the strictly lower triangular part of sub( A ) is not changed; = 'L': Lower triangular part is set; the strictly upper triangular part of sub( A ) is not changed; Otherwise: All of the matrix sub( A ) is set. .TP 8 M (global input) INTEGER The number of rows to be operated on i.e the number of rows of the distributed submatrix sub( A ). M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on i.e the number of columns of the distributed submatrix sub( A ). N >= 0. .TP 8 ALPHA (global input) COMPLEX The constant to which the offdiagonal elements are to be set. 
.TP 8 BETA (global input) COMPLEX The constant to which the diagonal elements are to be set. .TP 8 A (local output) COMPLEX pointer into the local memory to an array of dimension (LLD_A,LOCc(JA+N-1)). This array contains the local pieces of the distributed matrix sub( A ) to be set. On exit, the leading M-by-N submatrix sub( A ) is set as follows: if UPLO = 'U', A(IA+i-1,JA+j-1) = ALPHA, 1<=i<=j-1, 1<=j<=N, if UPLO = 'L', A(IA+i-1,JA+j-1) = ALPHA, j+1<=i<=M, 1<=j<=N, otherwise, A(IA+i-1,JA+j-1) = ALPHA, 1<=i<=M, 1<=j<=N, IA+i.NE.JA+j, and, for all UPLO, A(IA+i-1,JA+i-1) = BETA, 1<=i<=min(M,N). .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. scalapack-doc-1.5/man/manl/pclaset.l0100644000056400000620000001203706335610621017054 0ustar pfrauenfstaff.TH PCLASET l "12 May 1997" "LAPACK version 1.5" "LAPACK auxiliary routine (version 1.5)" .SH NAME PCLASET - initialize an M-by-N distributed matrix sub( A ) denoting A(IA:IA+M-1,JA:JA+N-1) to BETA on the diagonal and ALPHA on the offdiagonals .SH SYNOPSIS .TP 20 SUBROUTINE PCLASET( UPLO, M, N, ALPHA, BETA, A, IA, JA, DESCA ) .TP 20 .ti +4 CHARACTER UPLO .TP 20 .ti +4 INTEGER IA, JA, M, N .TP 20 .ti +4 COMPLEX ALPHA, BETA .TP 20 .ti +4 INTEGER DESCA( * ) .TP 20 .ti +4 COMPLEX A( * ) .SH PURPOSE PCLASET initializes an M-by-N distributed matrix sub( A ) denoting A(IA:IA+M-1,JA:JA+N-1) to BETA on the diagonal and ALPHA on the offdiagonals. Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. 
.br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). 
An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 UPLO (global input) CHARACTER Specifies the part of the distributed matrix sub( A ) to be set: .br = 'U': Upper triangular part is set; the strictly lower triangular part of sub( A ) is not changed; = 'L': Lower triangular part is set; the strictly upper triangular part of sub( A ) is not changed; Otherwise: All of the matrix sub( A ) is set. .TP 8 M (global input) INTEGER The number of rows to be operated on i.e the number of rows of the distributed submatrix sub( A ). M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on i.e the number of columns of the distributed submatrix sub( A ). N >= 0. .TP 8 ALPHA (global input) COMPLEX The constant to which the offdiagonal elements are to be set. .TP 8 BETA (global input) COMPLEX The constant to which the diagonal elements are to be set. .TP 8 A (local output) COMPLEX pointer into the local memory to an array of dimension (LLD_A,LOCc(JA+N-1)). This array contains the local pieces of the distributed matrix sub( A ) to be set. On exit, the leading M-by-N submatrix sub( A ) is set as follows: if UPLO = 'U', A(IA+i-1,JA+j-1) = ALPHA, 1<=i<=j-1, 1<=j<=N, if UPLO = 'L', A(IA+i-1,JA+j-1) = ALPHA, j+1<=i<=M, 1<=j<=N, otherwise, A(IA+i-1,JA+j-1) = ALPHA, 1<=i<=M, 1<=j<=N, IA+i.NE.JA+j, and, for all UPLO, A(IA+i-1,JA+i-1) = BETA, 1<=i<=min(M,N). .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. 
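The exit state PCLASET/PCLASE2 describe for A above (ALPHA on the selected offdiagonals, BETA on the diagonal for every UPLO) can be expressed as a serial reference that ignores the block-cyclic distribution. A minimal Python sketch, assuming 0-based indexing on a plain nested list; the helper name `laset` is illustrative only:

```python
def laset(uplo, m, n, alpha, beta, a):
    """Serial reference for the documented exit state of an m-by-n block:
    the diagonal gets beta for every uplo; uplo limits which offdiagonal
    triangle gets alpha ('U' strictly upper, 'L' strictly lower, else all)."""
    for j in range(n):
        for i in range(m):
            if i == j:
                a[i][j] = beta                  # diagonal, set for all UPLO
            elif uplo == 'U' and i < j:
                a[i][j] = alpha                 # strictly upper triangle
            elif uplo == 'L' and i > j:
                a[i][j] = alpha                 # strictly lower triangle
            elif uplo not in ('U', 'L'):
                a[i][j] = alpha                 # full matrix
    return a

a = laset('U', 3, 4, 9, 1, [[0] * 4 for _ in range(3)])
print(a)
```

With UPLO = 'U' the strictly lower triangle is left untouched, matching the "not changed" clause in the argument description.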
scalapack-doc-1.5/man/manl/pclassq.l0100644000056400000620000001244206335610621017067 0ustar pfrauenfstaff.TH PCLASSQ l "12 May 1997" "LAPACK version 1.5" "LAPACK auxiliary routine (version 1.5)" .SH NAME PCLASSQ - return the values scl and smsq such that ( scl**2 )*smsq = x( 1 )**2 +...+ x( n )**2 + ( scale**2 )*sumsq, .SH SYNOPSIS .TP 20 SUBROUTINE PCLASSQ( N, X, IX, JX, DESCX, INCX, SCALE, SUMSQ ) .TP 20 .ti +4 INTEGER IX, INCX, JX, N .TP 20 .ti +4 REAL SCALE, SUMSQ .TP 20 .ti +4 INTEGER DESCX( * ) .TP 20 .ti +4 COMPLEX X( * ) .SH PURPOSE PCLASSQ returns the values scl and smsq such that .br ( scl**2 )*smsq = x( 1 )**2 +...+ x( n )**2 + ( scale**2 )*sumsq, .br where x( i ) = sub( X ) = abs( X( IX+(JX-1)*DESCX(M_)+(i-1)*INCX ) ). The value of sumsq is assumed to be at least unity and the value of ssq will then satisfy .br 1.0 .le. ssq .le. ( sumsq + 2*n ). .br scale is assumed to be non-negative and scl returns the value .br scl = max( scale, abs( real( x( i ) ) ), abs( aimag( x( i ) ) ) ), taken over all i. .br scale and sumsq must be supplied in SCALE and SUMSQ respectively. SCALE and SUMSQ are overwritten by scl and ssq respectively. The routine makes only one pass through the vector sub( X ). Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ ) The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distributed over. The context itself is global, but the handle (the integer value) may vary.
.br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br Because vectors may be viewed as a subclass of matrices, a distributed vector is considered to be a distributed matrix. The results are only available in the scope of sub( X ), i.e. if sub( X ) is distributed along a process row, the correct results are only available in this process row of the grid. Similarly, if sub( X ) is distributed along a process column, the correct results are only available in this process column of the grid. .br .SH ARGUMENTS .TP 8 N (global input) INTEGER The length of the distributed vector sub( X ).
.TP 8 X (input) COMPLEX The vector for which a scaled sum of squares is computed. x( i ) = X(IX+(JX-1)*M_X +(i-1)*INCX ), 1 <= i <= n. .TP 8 IX (global input) INTEGER The row index in the global array X indicating the first row of sub( X ). .TP 8 JX (global input) INTEGER The column index in the global array X indicating the first column of sub( X ). .TP 8 DESCX (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix X. .TP 8 INCX (global input) INTEGER The global increment for the elements of X. Only two values of INCX are supported in this version, namely 1 and M_X. INCX must not be zero. .TP 8 SCALE (local input/local output) REAL On entry, the value scale in the equation above. On exit, SCALE is overwritten with scl, the scaling factor for the sum of squares. .TP 8 SUMSQ (local input/local output) REAL On entry, the value sumsq in the equation above. On exit, SUMSQ is overwritten with smsq, the basic sum of squares from which scl has been factored out. scalapack-doc-1.5/man/manl/pclaswp.l0100644000056400000620000001224706335610621017073 0ustar pfrauenfstaff.TH PCLASWP l "12 May 1997" "LAPACK version 1.5" "LAPACK auxiliary routine (version 1.5)" .SH NAME PCLASWP - perform a series of row or column interchanges on the distributed matrix sub( A ) = A(IA:IA+M-1,JA:JA+N-1) .SH SYNOPSIS .TP 20 SUBROUTINE PCLASWP( DIREC, ROWCOL, N, A, IA, JA, DESCA, K1, K2, IPIV ) .TP 20 .ti +4 CHARACTER DIREC, ROWCOL .TP 20 .ti +4 INTEGER IA, JA, K1, K2, N .TP 20 .ti +4 INTEGER DESCA( * ), IPIV( * ) .TP 20 .ti +4 COMPLEX A( * ) .SH PURPOSE PCLASWP performs a series of row or column interchanges on the distributed matrix sub( A ) = A(IA:IA+M-1,JA:JA+N-1). One interchange is initiated for each of rows or columns K1 through K2 of sub( A ). This routine assumes that the pivoting information has already been broadcast along the process row or column.
.br Also note that this routine will only work for K1-K2 being in the same MB (or NB) block. If you want to pivot a full matrix, use PCLAPIV. .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. 
.br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 DIREC (global input) CHARACTER Specifies in which order the permutation is applied: = 'F' (Forward) = 'B' (Backward) .TP 8 ROWCOL (global input) CHARACTER Specifies if the rows or columns are permuted: = 'R' (Rows) = 'C' (Columns) .TP 8 N (global input) INTEGER If ROWCOL = 'R', the length of the rows of the distributed matrix A(*,JA:JA+N-1) to be permuted; If ROWCOL = 'C', the length of the columns of the distributed matrix A(IA:IA+N-1,*) to be permuted. .TP 8 A (local input/local output) COMPLEX pointer into the local memory to an array of dimension (LLD_A, * ). On entry, this array contains the local pieces of the distributed matrix to which the row/column interchanges will be applied. On exit, the permuted distributed matrix. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 K1 (global input) INTEGER The first element of IPIV for which a row or column interchange will be done. .TP 8 K2 (global input) INTEGER The last element of IPIV for which a row or column interchange will be done. .TP 8 IPIV (local input) INTEGER array, dimension LOCr(M_A)+MB_A for row pivoting and LOCc(N_A)+NB_A for column pivoting.
This array is tied to the matrix A, IPIV(K) = L implies rows (or columns) K and L are to be interchanged. scalapack-doc-1.5/man/manl/pclatra.l0100644000056400000620000000766506335610621017062 0ustar pfrauenfstaff.TH PCLATRA l "12 May 1997" "LAPACK version 1.5" "LAPACK auxiliary routine (version 1.5)" .SH NAME PCLATRA - compute the trace of an N-by-N distributed matrix sub( A ) denoting A( IA:IA+N-1, JA:JA+N-1 ) .SH SYNOPSIS .TP 17 COMPLEX FUNCTION PCLATRA( N, A, IA, JA, DESCA ) .TP 17 .ti +4 INTEGER IA, JA, N .TP 17 .ti +4 INTEGER DESCA( * ) .TP 17 .ti +4 COMPLEX A( * ) .SH PURPOSE PCLATRA computes the trace of an N-by-N distributed matrix sub( A ) denoting A( IA:IA+N-1, JA:JA+N-1 ). The result is left on every process of the grid. .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. 
.br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 N (global input) INTEGER The number of rows and columns to be operated on, i.e. the order of the distributed submatrix sub( A ). N >= 0. .TP 8 A (local input) COMPLEX pointer into the local memory to an array of dimension ( LLD_A, LOCc(JA+N-1) ). This array contains the local pieces of the distributed matrix whose trace is to be computed. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A.
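PCLATRA's result is simply the sum of the diagonal entries each process owns, combined across the grid so every process ends up with the trace. The following Python simulation sketches that ownership rule (an INDXG2P-style global-index-to-process mapping); the function names and grid sizes are illustrative, not part of the library.

```python
def owner(i, nb, src, nprocs):
    """Process coordinate owning global index i (0-based) under a
    block-cyclic distribution with block size nb, first block on src."""
    return (src + i // nb) % nprocs

def platra_sim(a, mb, nb, rsrc, csrc, nprow, npcol):
    """Simulate the distributed trace: each (pr, pc) grid process sums the
    diagonal entries it owns, then the partial traces are combined (the
    reduction that leaves the result on every process)."""
    n = len(a)
    partial = {}
    for i in range(n):
        pr = owner(i, mb, rsrc, nprow)          # process row owning row i
        pc = owner(i, nb, csrc, npcol)          # process column owning column i
        partial[pr, pc] = partial.get((pr, pc), 0) + a[i][i]
    return sum(partial.values())

a = [[i * 10 + j for j in range(6)] for i in range(6)]
t = platra_sim(a, 2, 2, 0, 0, 2, 3)
print(t)
```

The combined partial sums match the serial trace regardless of the blocking factors or grid shape chosen.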
scalapack-doc-1.5/man/manl/pclatrd.l0100644000056400000620000002131606335610621017052 0ustar pfrauenfstaff.TH PCLATRD l "12 May 1997" "LAPACK version 1.5" "LAPACK auxiliary routine (version 1.5)" .SH NAME PCLATRD - reduce NB rows and columns of a complex Hermitian distributed matrix sub( A ) = A(IA:IA+N-1,JA:JA+N-1) to complex tridiagonal form by a unitary similarity transformation Q' * sub( A ) * Q, and return the matrices V and W which are needed to apply the transformation to the unreduced part of sub( A ) .SH SYNOPSIS .TP 20 SUBROUTINE PCLATRD( UPLO, N, NB, A, IA, JA, DESCA, D, E, TAU, W, IW, JW, DESCW, WORK ) .TP 20 .ti +4 CHARACTER UPLO .TP 20 .ti +4 INTEGER IA, IW, JA, JW, N, NB .TP 20 .ti +4 INTEGER DESCA( * ), DESCW( * ) .TP 20 .ti +4 REAL D( * ), E( * ) .TP 20 .ti +4 COMPLEX A( * ), TAU( * ), W( * ), WORK( * ) .SH PURPOSE PCLATRD reduces NB rows and columns of a complex Hermitian distributed matrix sub( A ) = A(IA:IA+N-1,JA:JA+N-1) to complex tridiagonal form by a unitary similarity transformation Q' * sub( A ) * Q, and returns the matrices V and W which are needed to apply the transformation to the unreduced part of sub( A ). If UPLO = 'U', PCLATRD reduces the last NB rows and columns of a matrix, of which the upper triangle is supplied; .br if UPLO = 'L', PCLATRD reduces the first NB rows and columns of a matrix, of which the lower triangle is supplied. .br This is an auxiliary routine called by PCHETRD. .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array".
.br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). 
An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 UPLO (global input) CHARACTER Specifies whether the upper or lower triangular part of the Hermitian matrix sub( A ) is stored: .br = 'U': Upper triangular .br = 'L': Lower triangular .TP 8 N (global input) INTEGER The number of rows and columns to be operated on, i.e. the order of the distributed submatrix sub( A ). N >= 0. .TP 8 NB (global input) INTEGER The number of rows and columns to be reduced. .TP 8 A (local input/local output) COMPLEX pointer into the local memory to an array of dimension (LLD_A,LOCc(JA+N-1)). On entry, this array contains the local pieces of the Hermitian distributed matrix sub( A ). If UPLO = 'U', the leading N-by-N upper triangular part of sub( A ) contains the upper triangular part of the matrix, and its strictly lower triangular part is not referenced. If UPLO = 'L', the leading N-by-N lower triangular part of sub( A ) contains the lower triangular part of the matrix, and its strictly upper triangular part is not referenced. On exit, if UPLO = 'U', the last NB columns have been reduced to tridiagonal form, with the diagonal elements overwriting the diagonal elements of sub( A ); the elements above the diagonal, with the array TAU, represent the unitary matrix Q as a product of elementary reflectors. If UPLO = 'L', the first NB columns have been reduced to tridiagonal form, with the diagonal elements overwriting the diagonal elements of sub( A ); the elements below the diagonal, with the array TAU, represent the unitary matrix Q as a product of elementary reflectors; See Further Details. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ).
.TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 D (local output) REAL array, dimension LOCc(JA+N-1) The diagonal elements of the tridiagonal matrix T: D(i) = A(i,i). D is tied to the distributed matrix A. .TP 8 E (local output) REAL array, dimension LOCc(JA+N-1) if UPLO = 'U', LOCc(JA+N-2) otherwise. The off-diagonal elements of the tridiagonal matrix T: E(i) = A(i,i+1) if UPLO = 'U', E(i) = A(i+1,i) if UPLO = 'L'. E is tied to the distributed matrix A. .TP 8 TAU (local output) COMPLEX, array, dimension LOCc(JA+N-1). This array contains the scalar factors TAU of the elementary reflectors. TAU is tied to the distributed matrix A. .TP 8 W (local output) COMPLEX pointer into the local memory to an array of dimension (LLD_W,NB_W), This array contains the local pieces of the N-by-NB_W matrix W required to update the unreduced part of sub( A ). .TP 8 IW (global input) INTEGER The row index in the global array W indicating the first row of sub( W ). .TP 8 JW (global input) INTEGER The column index in the global array W indicating the first column of sub( W ). .TP 8 DESCW (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix W. .TP 8 WORK (local workspace) COMPLEX array, dimension (NB_A) .SH FURTHER DETAILS If UPLO = 'U', the matrix Q is represented as a product of elementary reflectors .br Q = H(n) H(n-1) . . . H(n-nb+1). .br Each H(i) has the form .br H(i) = I - tau * v * v' .br where tau is a complex scalar, and v is a complex vector with v(i:n) = 0 and v(i-1) = 1; v(1:i-1) is stored on exit in .br A(ia:ia+i-2,ja+i), and tau in TAU(ja+i-1). .br If UPLO = 'L', the matrix Q is represented as a product of elementary reflectors .br Q = H(1) H(2) . . . H(nb). 
.br Each H(i) has the form .br H(i) = I - tau * v * v' .br where tau is a complex scalar, and v is a complex vector with v(1:i) = 0 and v(i+1) = 1; v(i+2:n) is stored on exit in .br A(ia+i+1:ia+n-1,ja+i-1), and tau in TAU(ja+i-1). .br The elements of the vectors v together form the N-by-NB matrix V which is needed, with W, to apply the transformation to the unreduced part of the matrix, using a Hermitian rank-2k update of the form: sub( A ) := sub( A ) - V*W' - W*V'. .br The contents of A on exit are illustrated by the following examples with n = 5 and nb = 2:

  if UPLO = 'U':                    if UPLO = 'L':

    ( a  a  a  v4 v5 )                ( d                )
    (    a  a  v4 v5 )                ( 1  d             )
    (       a  1  v5 )                ( v1 1  a          )
    (          d  1  )                ( v1 v2 a  a       )
    (             d  )                ( v1 v2 a  a  a    )

where d denotes a diagonal element of the reduced matrix, a denotes an element of the original matrix that is unchanged, and vi denotes an element of the vector defining H(i). .br scalapack-doc-1.5/man/manl/pclatrs.l0100644000056400000620000000117006335610621017065 0ustar pfrauenfstaff.TH PCLATRS l "12 May 1997" "LAPACK version 1.5" "LAPACK auxiliary routine (version 1.5)" .SH NAME PCLATRS - solve a triangular system .SH SYNOPSIS .TP 20 SUBROUTINE PCLATRS( UPLO, TRANS, DIAG, NORMIN, N, A, IA, JA, DESCA, X, IX, JX, DESCX, SCALE, CNORM, WORK ) .TP 20 .ti +4 CHARACTER DIAG, NORMIN, TRANS, UPLO .TP 20 .ti +4 INTEGER IA, IX, JA, JX, N .TP 20 .ti +4 REAL SCALE .TP 20 .ti +4 INTEGER DESCA( * ), DESCX( * ) .TP 20 .ti +4 REAL CNORM( * ) .TP 20 .ti +4 COMPLEX A( * ), X( * ), WORK( * ) .SH PURPOSE PCLATRS solves a triangular system. This routine is unfinished at this time, but will be part of the next release. 
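Several routines in this section (PCLATRD above, PCLATRZ below) are built from elementary Householder reflectors H = I - tau * v * v'. The serial sketch below (plain Python, not ScaLAPACK code; the helper name `larfg` echoes the LAPACK reflector-generator convention) constructs tau and v so that H annihilates all but the first entry of a real vector:

```python
import math

def larfg(x):
    """Given a real vector x, return (beta, v, tau) such that
    (I - tau * v * v') @ x = beta * e1, with v[0] = 1.
    Real-valued sketch of the LAPACK elementary-reflector convention."""
    alpha, tail = x[0], x[1:]
    xnorm = math.sqrt(sum(t * t for t in tail))
    if xnorm == 0.0:
        return alpha, [1.0] + [0.0] * len(tail), 0.0  # H = I, nothing to zero
    beta = -math.copysign(math.sqrt(alpha * alpha + xnorm * xnorm), alpha)
    tau = (beta - alpha) / beta
    v = [1.0] + [t / (alpha - beta) for t in tail]
    return beta, v, tau

# Applying H = I - tau * v * v' to x leaves beta in x[0] and zeros x[1:].
```

For x = [3, 4] this produces beta = -5, and applying the reflector reproduces [-5, 0], which is the annihilation step that the blocked reductions repeat column by column.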
.br scalapack-doc-1.5/man/manl/pclatrz.l0100644000056400000620000001453606335610621017106 0ustar pfrauenfstaff.TH PCLATRZ l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PCLATRZ - reduce the M-by-N ( M<=N ) complex upper trapezoidal matrix sub( A ) = [A(IA:IA+M-1,JA:JA+M-1) A(IA:IA+M-1,JA+N-L:JA+N-1)] .SH SYNOPSIS .TP 20 SUBROUTINE PCLATRZ( M, N, L, A, IA, JA, DESCA, TAU, WORK ) .TP 20 .ti +4 INTEGER IA, JA, L, M, N .TP 20 .ti +4 INTEGER DESCA( * ) .TP 20 .ti +4 COMPLEX A( * ), TAU( * ), WORK( * ) .SH PURPOSE PCLATRZ reduces the M-by-N ( M<=N ) complex upper trapezoidal matrix sub( A ) = [A(IA:IA+M-1,JA:JA+M-1) A(IA:IA+M-1,JA+N-L:JA+N-1)] to upper triangular form by means of unitary transformations. The upper trapezoidal matrix sub( A ) is factored as .br sub( A ) = ( R 0 ) * Z, .br where Z is an N-by-N unitary matrix and R is an M-by-M upper triangular matrix. .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. 
.br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 M (global input) INTEGER The number of rows to be operated on, i.e. the number of rows of the distributed submatrix sub( A ). M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on, i.e. the number of columns of the distributed submatrix sub( A ). N >= 0. .TP 8 L (global input) INTEGER The columns of the distributed submatrix sub( A ) containing the meaningful part of the Householder reflectors. L > 0. .TP 8 A (local input/local output) COMPLEX pointer into the local memory to an array of dimension (LLD_A, LOCc(JA+N-1)). On entry, the local pieces of the M-by-N distributed matrix sub( A ) which is to be factored. 
On exit, the leading M-by-M upper triangular part of sub( A ) contains the upper trian- gular matrix R, and elements N-L+1 to N of the first M rows of sub( A ), with the array TAU, represent the unitary matrix Z as a product of M elementary reflectors. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 TAU (local output) COMPLEX, array, dimension LOCr(IA+M-1) This array contains the scalar factors of the elementary reflectors. TAU is tied to the distributed matrix A. .TP 8 WORK (local workspace) COMPLEX array, dimension (LWORK) LWORK >= Nq0 + MAX( 1, Mp0 ), where IROFF = MOD( IA-1, MB_A ), ICOFF = MOD( JA-1, NB_A ), IAROW = INDXG2P( IA, MB_A, MYROW, RSRC_A, NPROW ), IACOL = INDXG2P( JA, NB_A, MYCOL, CSRC_A, NPCOL ), Mp0 = NUMROC( M+IROFF, MB_A, MYROW, IAROW, NPROW ), Nq0 = NUMROC( N+ICOFF, NB_A, MYCOL, IACOL, NPCOL ), and NUMROC, INDXG2P are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. .SH FURTHER DETAILS The factorization is obtained by Householder's method. The kth transformation matrix, Z( k ), whose conjugate transpose is used to introduce zeros into the (m - k + 1)th row of sub( A ), is given in the form .br Z( k ) = ( I 0 ), .br ( 0 T( k ) ) .br where .br T( k ) = I - tau*u( k )*u( k )', u( k ) = ( 1 ), ( 0 ) ( z( k ) ) tau is a scalar and z( k ) is an ( n - m ) element vector. tau and z( k ) are chosen to annihilate the elements of the kth row of sub( A ). .br The scalar tau is returned in the kth element of TAU and the vector u( k ) in the kth row of sub( A ), such that the elements of z( k ) are in a( k, m + 1 ), ..., a( k, n ). 
The elements of R are returned in the upper triangular part of sub( A ). .br Z is given by .br Z = Z( 1 ) * Z( 2 ) * ... * Z( m ). .br scalapack-doc-1.5/man/manl/pclauu2.l0100644000056400000620000001160206335610621016771 0ustar pfrauenfstaff.TH PCLAUU2 l "12 May 1997" "LAPACK version 1.5" "LAPACK auxiliary routine (version 1.5)" .SH NAME PCLAUU2 - compute the product U * U' or L' * L, where the triangular factor U or L is stored in the upper or lower triangular part of the matrix sub( A ) = A(IA:IA+N-1,JA:JA+N-1) .SH SYNOPSIS .TP 20 SUBROUTINE PCLAUU2( UPLO, N, A, IA, JA, DESCA ) .TP 20 .ti +4 CHARACTER UPLO .TP 20 .ti +4 INTEGER IA, JA, N .TP 20 .ti +4 INTEGER DESCA( * ) .TP 20 .ti +4 COMPLEX A( * ) .SH PURPOSE PCLAUU2 computes the product U * U' or L' * L, where the triangular factor U or L is stored in the upper or lower triangular part of the matrix sub( A ) = A(IA:IA+N-1,JA:JA+N-1). If UPLO = 'U' or 'u' then the upper triangle of the result is stored, overwriting the factor U in sub( A ). .br If UPLO = 'L' or 'l' then the lower triangle of the result is stored, overwriting the factor L in sub( A ). .br This is the unblocked form of the algorithm, calling Level 2 BLAS. No communication is performed by this routine, the matrix to operate on should be strictly local to one process. .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. 
.br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distributed over. The context itself is global, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 UPLO (global input) CHARACTER*1 Specifies whether the triangular factor stored in the matrix sub( A ) is upper or lower triangular: .br = 'U': Upper triangular, .br = 'L': Lower triangular. .TP 8 N (global input) INTEGER The number of rows and columns to be operated on, i.e. the order of the triangular factor U or L. N >= 0. 
.TP 8 A (local input/local output) COMPLEX pointer into the local memory to an array of dimension (LLD_A, LOCc(JA+N-1)). On entry, the local pieces of the triangular factor L or U. On exit, if UPLO = 'U', the upper triangle of the distributed matrix sub( A ) is overwritten with the upper triangle of the product U * U'; if UPLO = 'L', the lower triangle of sub( A ) is overwritten with the lower triangle of the product L' * L. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. scalapack-doc-1.5/man/manl/pclauum.l0100644000056400000620000001144006335610621017064 0ustar pfrauenfstaff.TH PCLAUUM l "12 May 1997" "LAPACK version 1.5" "LAPACK auxiliary routine (version 1.5)" .SH NAME PCLAUUM - compute the product U * U' or L' * L, where the triangular factor U or L is stored in the upper or lower triangular part of the distributed matrix sub( A ) = A(IA:IA+N-1,JA:JA+N-1) .SH SYNOPSIS .TP 20 SUBROUTINE PCLAUUM( UPLO, N, A, IA, JA, DESCA ) .TP 20 .ti +4 CHARACTER UPLO .TP 20 .ti +4 INTEGER IA, JA, N .TP 20 .ti +4 INTEGER DESCA( * ) .TP 20 .ti +4 COMPLEX A( * ) .SH PURPOSE PCLAUUM computes the product U * U' or L' * L, where the triangular factor U or L is stored in the upper or lower triangular part of the distributed matrix sub( A ) = A(IA:IA+N-1,JA:JA+N-1). If UPLO = 'U' or 'u' then the upper triangle of the result is stored, overwriting the factor U in sub( A ). .br If UPLO = 'L' or 'l' then the lower triangle of the result is stored, overwriting the factor L in sub( A ). .br This is the blocked form of the algorithm, calling Level 3 PBLAS. Notes .br ===== .br Each global data object is described by an associated description vector. 
This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. 
.br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 UPLO (global input) CHARACTER*1 Specifies whether the triangular factor stored in the distributed matrix sub( A ) is upper or lower triangular: .br = 'U': Upper triangular .br = 'L': Lower triangular .TP 8 N (global input) INTEGER The number of rows and columns to be operated on, i.e. the order of the triangular factor U or L. N >= 0. .TP 8 A (local input/local output) COMPLEX pointer into the local memory to an array of dimension (LLD_A, LOCc(JA+N-1)). On entry, the local pieces of the triangular factor L or U. On exit, if UPLO = 'U', the upper triangle of the distributed matrix sub( A ) is overwritten with the upper triangle of the product U * U'; if UPLO = 'L', the lower triangle of sub( A ) is overwritten with the lower triangle of the product L' * L. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. 
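The product computed by PCLAUU2/PCLAUUM can be modelled serially. The sketch below (plain Python, real-valued; the helper name `lauum_upper` is illustrative, not a ScaLAPACK symbol) overwrites the upper triangle of a matrix with the upper triangle of U * U', leaving the strictly lower triangle untouched, exactly as described for UPLO = 'U':

```python
def lauum_upper(a):
    """Overwrite the upper triangle of the square matrix a (list of lists)
    with the upper triangle of U * U', where U is the upper triangle of a.
    Serial, real-valued sketch of what PCLAUUM does to distributed sub( A )."""
    n = len(a)
    for i in range(n):
        for j in range(i, n):
            # (U * U')[i][j] = sum over k >= j of U[i][k] * U[j][k];
            # ascending j lets the update run safely in place.
            a[i][j] = sum(a[i][k] * a[j][k] for k in range(j, n))
    return a
```

Note the in-place property shared with the real routine: each entry (i, j) only reads entries at columns k >= j that have not yet been overwritten.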
scalapack-doc-1.5/man/manl/pcmax1.l0100644000056400000620000001336406335610621016616 0ustar pfrauenfstaff.TH PCMAX1 l "12 May 1997" "LAPACK version 1.5" "LAPACK auxiliary routine (version 1.5)" .SH NAME PCMAX1 - compute the global index of the maximum element in absolute value of a distributed vector sub( X ) .SH SYNOPSIS .TP 19 SUBROUTINE PCMAX1( N, AMAX, INDX, X, IX, JX, DESCX, INCX ) .TP 19 .ti +4 INTEGER INDX, INCX, IX, JX, N .TP 19 .ti +4 COMPLEX AMAX .TP 19 .ti +4 INTEGER DESCX( * ) .TP 19 .ti +4 COMPLEX X( * ) .SH PURPOSE PCMAX1 computes the global index of the maximum element in absolute value of a distributed vector sub( X ). The global index is returned in INDX and the value is returned in AMAX, .br where sub( X ) denotes X(IX:IX+N-1,JX) if INCX = 1, .br X(IX,JX:JX+N-1) if INCX = M_X. .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. 
.br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br Because vectors may be viewed as a subclass of matrices, a distributed vector is considered to be a distributed matrix. When the result of a vector-oriented PBLAS call is a scalar, it will be made available only within the scope which owns the vector(s) being operated on. Let X be a generic term for the input vector(s). Then, the processes which receive the answer will be (note that if an operation involves more than one vector, the processes which re- ceive the result will be the union of the following calculation for each vector): .br If N = 1, M_X = 1 and INCX = 1, then one can't determine if a process row or process column owns the vector operand, therefore only the process of coordinate {RSRC_X, CSRC_X} receives the result; If INCX = M_X, then sub( X ) is a vector distributed over a process row. 
Each process part of this row receives the result; .br If INCX = 1, then sub( X ) is a vector distributed over a process column. Each process part of this column receives the result; Based on PCAMAX from Level 1 PBLAS. The change is to use the 'genuine' absolute value. .br The serial version was contributed to LAPACK by Nick Higham for use with CLACON. .br .SH ARGUMENTS .TP 8 N (global input) pointer to INTEGER The number of components of the distributed vector sub( X ). N >= 0. .TP 8 AMAX (global output) pointer to REAL The absolute value of the largest entry of the distributed vector sub( X ) only in the scope of sub( X ). .TP 8 INDX (global output) pointer to INTEGER The global index of the element of the distributed vector sub( X ) whose real part has maximum absolute value. .TP 8 X (local input) COMPLEX array containing the local pieces of a distributed matrix of dimension of at least ( (JX-1)*M_X + IX + ( N - 1 )*abs( INCX ) ) This array contains the entries of the distributed vector sub( X ). .TP 8 IX (global input) INTEGER The row index in the global array X indicating the first row of sub( X ). .TP 8 JX (global input) INTEGER The column index in the global array X indicating the first column of sub( X ). .TP 8 DESCX (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix X. .TP 8 INCX (global input) INTEGER The global increment for the elements of X. Only two values of INCX are supported in this version, namely 1 and M_X. INCX must not be zero. 
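The difference between the 'genuine' absolute value used here and the |Re| + |Im| convention of the Level 1 BLAS amax routines is easy to see serially. In this plain-Python sketch (illustrative names `cmax1` and `camax_index`, 1-based indices as in Fortran; the real PCMAX1 operates on a distributed sub( X )):

```python
def cmax1(x):
    """Return (indx, amax): the 1-based index and the modulus of the entry
    of x with the largest 'genuine' absolute value (complex modulus)."""
    indx = max(range(len(x)), key=lambda i: abs(x[i])) + 1
    return indx, abs(x[indx - 1])

def camax_index(x):
    """1-based index under the Level 1 BLAS CAMAX convention |Re| + |Im|,
    which can select a different entry than the genuine modulus."""
    return max(range(len(x)), key=lambda i: abs(x[i].real) + abs(x[i].imag)) + 1

# For x = [1+i, 3, 2-2i] the moduli are ~1.41, 3, ~2.83, so cmax1 picks
# entry 2, while |Re| + |Im| gives 2, 3, 4 and CAMAX-style picks entry 3.
```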
scalapack-doc-1.5/man/manl/pcpbsv.l0100644000056400000620000000141306335610621016712 0ustar pfrauenfstaff.TH PCPBSV l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PCPBSV - solve a system of linear equations A(1:N, JA:JA+N-1) * X = B(IB:IB+N-1, 1:NRHS) .SH SYNOPSIS .TP 19 SUBROUTINE PCPBSV( UPLO, N, BW, NRHS, A, JA, DESCA, B, IB, DESCB, WORK, LWORK, INFO ) .TP 19 .ti +4 CHARACTER UPLO .TP 19 .ti +4 INTEGER BW, IB, INFO, JA, LWORK, N, NRHS .TP 19 .ti +4 INTEGER DESCA( * ), DESCB( * ) .TP 19 .ti +4 COMPLEX A( * ), B( * ), WORK( * ) .SH PURPOSE PCPBSV solves a system of linear equations where A(1:N, JA:JA+N-1) is an N-by-N complex .br banded symmetric positive definite distributed .br matrix with bandwidth BW. .br Cholesky factorization is used to factor a reordering of .br the matrix into L L'. .br See PCPBTRF and PCPBTRS for details. .br scalapack-doc-1.5/man/manl/pcpbtrf.l0100644000056400000620000000231506335610621017057 0ustar pfrauenfstaff.TH PCPBTRF l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PCPBTRF - compute a Cholesky factorization of an N-by-N complex banded symmetric positive definite distributed matrix with bandwidth BW .SH SYNOPSIS .TP 20 SUBROUTINE PCPBTRF( UPLO, N, BW, A, JA, DESCA, AF, LAF, WORK, LWORK, INFO ) .TP 20 .ti +4 CHARACTER UPLO .TP 20 .ti +4 INTEGER BW, INFO, JA, LAF, LWORK, N .TP 20 .ti +4 INTEGER DESCA( * ) .TP 20 .ti +4 COMPLEX A( * ), AF( * ), WORK( * ) .SH PURPOSE PCPBTRF computes a Cholesky factorization of an N-by-N complex banded symmetric positive definite distributed matrix with bandwidth BW: A(1:N, JA:JA+N-1). Reordering is used to increase parallelism in the factorization. This reordering results in factors that are DIFFERENT from those produced by equivalent sequential codes. These factors cannot be used directly by users; however, they can be used in .br subsequent calls to PCPBTRS to solve linear systems. 
.br The factorization has the form .br P A(1:N, JA:JA+N-1) P^T = U' U, if UPLO = 'U', or P A(1:N, JA:JA+N-1) P^T = L L', if UPLO = 'L' .br where U is a banded upper triangular matrix and L is banded lower triangular, and P is a permutation matrix. .br scalapack-doc-1.5/man/manl/pcpbtrs.l0100644000056400000620000000167506335610621017104 0ustar pfrauenfstaff.TH PCPBTRS l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PCPBTRS - solve a system of linear equations A(1:N, JA:JA+N-1) * X = B(IB:IB+N-1, 1:NRHS) .SH SYNOPSIS .TP 20 SUBROUTINE PCPBTRS( UPLO, N, BW, NRHS, A, JA, DESCA, B, IB, DESCB, AF, LAF, WORK, LWORK, INFO ) .TP 20 .ti +4 CHARACTER UPLO .TP 20 .ti +4 INTEGER BW, IB, INFO, JA, LAF, LWORK, N, NRHS .TP 20 .ti +4 INTEGER DESCA( * ), DESCB( * ) .TP 20 .ti +4 COMPLEX A( * ), AF( * ), B( * ), WORK( * ) .SH PURPOSE PCPBTRS solves a system of linear equations where A(1:N, JA:JA+N-1) is the matrix used to produce the factors stored in A(1:N,JA:JA+N-1) and AF by PCPBTRF. .br A(1:N, JA:JA+N-1) is an N-by-N complex .br banded symmetric positive definite distributed .br matrix with bandwidth BW. .br Depending on the value of UPLO, A stores either U or L in the equation A(1:N, JA:JA+N-1) = U'*U or L*L' as computed by PCPBTRF. .br Routine PCPBTRF MUST be called first. 
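The factor-once, solve-many split described for PCPBTRF/PCPBTRS can be modelled serially for a tiny dense SPD matrix. This plain-Python sketch (illustrative names `cholesky` and `solve_spd`; it ignores the band storage, the distribution, and the reordering P entirely) factors A = L L' and then solves via forward and backward substitution:

```python
def cholesky(a):
    """Return lower-triangular l with l * l' = a, for a real symmetric
    positive definite matrix a given as a list of lists. Dense, unblocked."""
    n = len(a)
    l = [[0.0] * n for _ in range(n)]
    for j in range(n):
        s = a[j][j] - sum(l[j][k] ** 2 for k in range(j))
        l[j][j] = s ** 0.5
        for i in range(j + 1, n):
            l[i][j] = (a[i][j] - sum(l[i][k] * l[j][k] for k in range(j))) / l[j][j]
    return l

def solve_spd(l, b):
    """Solve (l * l') x = b: forward substitution for l y = b,
    then backward substitution for l' x = y."""
    n = len(b)
    y = [0.0] * n
    for i in range(n):
        y[i] = (b[i] - sum(l[i][k] * y[k] for k in range(i))) / l[i][i]
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (y[i] - sum(l[k][i] * x[k] for k in range(i + 1, n))) / l[i][i]
    return x
```

As with the distributed pair, the factor returned by `cholesky` can be reused across any number of right-hand sides.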
.br scalapack-doc-1.5/man/manl/pcpbtrsv.l0100644000056400000620000000216406335610622017265 0ustar pfrauenfstaff.TH PCPBTRSV l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PCPBTRSV - solve a banded triangular system of linear equations A(1:N, JA:JA+N-1) * X = B(IB:IB+N-1, 1:NRHS) .SH SYNOPSIS .TP 21 SUBROUTINE PCPBTRSV( UPLO, TRANS, N, BW, NRHS, A, JA, DESCA, B, IB, DESCB, AF, LAF, WORK, LWORK, INFO ) .TP 21 .ti +4 CHARACTER TRANS, UPLO .TP 21 .ti +4 INTEGER BW, IB, INFO, JA, LAF, LWORK, N, NRHS .TP 21 .ti +4 INTEGER DESCA( * ), DESCB( * ) .TP 21 .ti +4 COMPLEX A( * ), AF( * ), B( * ), WORK( * ) .SH PURPOSE PCPBTRSV solves a banded triangular system of linear equations .br A(1:N, JA:JA+N-1) * X = B(IB:IB+N-1, 1:NRHS) .br or .br A(1:N, JA:JA+N-1)^H * X = B(IB:IB+N-1, 1:NRHS) .br where A(1:N, JA:JA+N-1) is a banded .br triangular matrix factor produced by the .br Cholesky factorization code PCPBTRF .br and is stored in A(1:N,JA:JA+N-1) and AF. .br The matrix stored in A(1:N, JA:JA+N-1) is either .br upper or lower triangular according to UPLO, .br and the choice of solving A(1:N, JA:JA+N-1) or A(1:N, JA:JA+N-1)^H is dictated by the user through the parameter TRANS. .br Routine PCPBTRF MUST be called first. 
.br scalapack-doc-1.5/man/manl/pcpocon.l0100644000056400000620000001500406335610622017060 0ustar pfrauenfstaff.TH PCPOCON l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PCPOCON - estimate the reciprocal of the condition number (in the 1-norm) of a complex Hermitian positive definite distributed matrix using the Cholesky factorization A = U**H*U or A = L*L**H computed by PCPOTRF .SH SYNOPSIS .TP 20 SUBROUTINE PCPOCON( UPLO, N, A, IA, JA, DESCA, ANORM, RCOND, WORK, LWORK, RWORK, LRWORK, INFO ) .TP 20 .ti +4 CHARACTER UPLO .TP 20 .ti +4 INTEGER IA, INFO, JA, LRWORK, LWORK, N .TP 20 .ti +4 REAL ANORM, RCOND .TP 20 .ti +4 INTEGER DESCA( * ) .TP 20 .ti +4 REAL RWORK( * ) .TP 20 .ti +4 COMPLEX A( * ), WORK( * ) .SH PURPOSE PCPOCON estimates the reciprocal of the condition number (in the 1-norm) of a complex Hermitian positive definite distributed matrix using the Cholesky factorization A = U**H*U or A = L*L**H computed by PCPOTRF. An estimate is obtained for norm(inv(A(IA:IA+N-1,JA:JA+N-1))), and the reciprocal of the condition number is computed as .br RCOND = 1 / ( norm( A(IA:IA+N-1,JA:JA+N-1) ) * norm( inv(A(IA:IA+N-1,JA:JA+N-1)) ) ). Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. 
The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 UPLO (global input) CHARACTER Specifies whether the factor stored in A(IA:IA+N-1,JA:JA+N-1) is upper or lower triangular. .br = 'U': Upper triangular .br = 'L': Lower triangular .TP 8 N (global input) INTEGER .br The order of the distributed matrix A(IA:IA+N-1,JA:JA+N-1). N >= 0. .TP 8 A (local input) COMPLEX pointer into the local memory to an array of dimension ( LLD_A, LOCc(JA+N-1) ). 
On entry, this array contains the local pieces of the factors L or U from the Cholesky factorization A(IA:IA+N-1,JA:JA+N-1) = U**H*U or L*L**H, as computed by PCPOTRF.
.TP 8
IA (global input) INTEGER
The row index in the global array A indicating the first row of sub( A ).
.TP 8
JA (global input) INTEGER
The column index in the global array A indicating the first column of sub( A ).
.TP 8
DESCA (global and local input) INTEGER array of dimension DLEN_.
The array descriptor for the distributed matrix A.
.TP 8
ANORM (global input) REAL
The 1-norm (or infinity-norm) of the Hermitian distributed matrix A(IA:IA+N-1,JA:JA+N-1).
.TP 8
RCOND (global output) REAL
The reciprocal of the condition number of the distributed matrix A(IA:IA+N-1,JA:JA+N-1), computed as
.br
RCOND = 1 / ( norm( A(IA:IA+N-1,JA:JA+N-1) ) *
.br
norm( inv(A(IA:IA+N-1,JA:JA+N-1)) ) ).
.TP 8
WORK (local workspace/local output) COMPLEX array, dimension (LWORK)
On exit, WORK(1) returns the minimal and optimal LWORK.
.TP 8
LWORK (local or global input) INTEGER
The dimension of the array WORK. LWORK is local input and must be at least LWORK >= 2*LOCr(N+MOD(IA-1,MB_A)) + MAX( 2, MAX(NB_A*MAX(1,CEIL(P-1,Q)),LOCc(N+MOD(JA-1,NB_A)) + NB_A*MAX(1,CEIL(Q-1,P))) ).
If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA.
.TP 8
RWORK (local workspace/local output) REAL array, dimension (LRWORK)
On exit, RWORK(1) returns the minimal and optimal LRWORK.
.TP 8
LRWORK (local or global input) INTEGER
The dimension of the array RWORK. LRWORK is local input and must be at least LRWORK >= 2*LOCc(N+MOD(JA-1,NB_A)).
If LRWORK = -1, then LRWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays.
Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA.
.TP 8
INFO (global output) INTEGER
= 0: successful exit
.br
< 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i.
.TH PCPOEQU l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)"
.SH NAME
PCPOEQU - compute row and column scalings intended to equilibrate a distributed Hermitian positive definite matrix sub( A ) = A(IA:IA+N-1,JA:JA+N-1) and reduce its condition number (with respect to the two-norm)
.SH SYNOPSIS
.TP 20
SUBROUTINE PCPOEQU( N, A, IA, JA, DESCA, SR, SC, SCOND, AMAX, INFO )
.TP 20
.ti +4
INTEGER IA, INFO, JA, N
.TP 20
.ti +4
REAL AMAX, SCOND
.TP 20
.ti +4
INTEGER DESCA( * )
.TP 20
.ti +4
REAL SC( * ), SR( * )
.TP 20
.ti +4
COMPLEX A( * )
.SH PURPOSE
PCPOEQU computes row and column scalings intended to equilibrate a distributed Hermitian positive definite matrix sub( A ) = A(IA:IA+N-1,JA:JA+N-1) and reduce its condition number (with respect to the two-norm). SR and SC contain the scale factors, S(i) = 1/sqrt(A(i,i)), chosen so that the scaled distributed matrix B with elements B(i,j) = S(i)*A(i,j)*S(j) has ones on the diagonal. This choice of SR and SC puts the condition number of B within a factor N of the smallest possible condition number over all possible diagonal scalings.
.br
The scaling factors are stored along process rows in SR and along process columns in SC. The duplication of information greatly simplifies the application of the factors.
.br
Notes
.br
=====
.br
Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location.
.br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). 
An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 N (global input) INTEGER The number of rows and columns to be operated on i.e the order of the distributed submatrix sub( A ). N >= 0. .TP 8 A (local input) COMPLEX pointer into the local memory to an array of local dimension ( LLD_A, LOCc(JA+N-1) ), the N-by-N Hermitian positive definite distributed matrix sub( A ) whose scaling factors are to be computed. Only the diagonal elements of sub( A ) are referenced. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 SR (local output) REAL array, dimension LOCr(M_A) If INFO = 0, SR(IA:IA+N-1) contains the row scale factors for sub( A ). SR is aligned with the distributed matrix A, and replicated across every process column. SR is tied to the distributed matrix A. .TP 8 SC (local output) REAL array, dimension LOCc(N_A) If INFO = 0, SC(JA:JA+N-1) contains the column scale factors .br for A(IA:IA+M-1,JA:JA+N-1). SC is aligned with the distribu- ted matrix A, and replicated down every process row. SC is tied to the distributed matrix A. .TP 8 SCOND (global output) REAL If INFO = 0, SCOND contains the ratio of the smallest SR(i) (or SC(j)) to the largest SR(i) (or SC(j)), with IA <= i <= IA+N-1 and JA <= j <= JA+N-1. If SCOND >= 0.1 and AMAX is neither too large nor too small, it is not worth scaling by SR (or SC). .TP 8 AMAX (global output) REAL Absolute value of largest matrix element. If AMAX is very close to overflow or very close to underflow, the matrix should be scaled. 
.TP 8
INFO (global output) INTEGER
= 0: successful exit
.br
< 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i.
> 0: If INFO = K, the K-th diagonal entry of sub( A ) is nonpositive.
.TH PCPORFS l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)"
.SH NAME
PCPORFS - improve the computed solution to a system of linear equations when the coefficient matrix is Hermitian positive definite, and provide error bounds and backward error estimates for the solutions
.SH SYNOPSIS
.TP 20
SUBROUTINE PCPORFS( UPLO, N, NRHS, A, IA, JA, DESCA, AF, IAF, JAF, DESCAF, B, IB, JB, DESCB, X, IX, JX, DESCX, FERR, BERR, WORK, LWORK, RWORK, LRWORK, INFO )
.TP 20
.ti +4
CHARACTER UPLO
.TP 20
.ti +4
INTEGER IA, IAF, IB, INFO, IX, JA, JAF, JB, JX, LRWORK, LWORK, N, NRHS
.TP 20
.ti +4
INTEGER DESCA( * ), DESCAF( * ), DESCB( * ), DESCX( * )
.TP 20
.ti +4
COMPLEX A( * ), AF( * ), B( * ), WORK( * ), X( * )
.TP 20
.ti +4
REAL BERR( * ), FERR( * ), RWORK( * )
.SH PURPOSE
PCPORFS improves the computed solution to a system of linear equations when the coefficient matrix is Hermitian positive definite and provides error bounds and backward error estimates for the solutions.
Notes
.br
=====
.br
Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location.
.br
Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array".
.br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br In the following comments, sub( A ), sub( X ) and sub( B ) denote respectively A(IA:IA+N-1,JA:JA+N-1), X(IX:IX+N-1,JX:JX+NRHS-1) and B(IB:IB+N-1,JB:JB+NRHS-1). 
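The error-return convention used throughout these routines (INFO = -i for an illegal i-th scalar argument, INFO = -(i*100+j) for an illegal j-th entry of the i-th array argument) can be decoded mechanically. A minimal Python sketch, for illustration only (decode_info is a hypothetical helper, not part of ScaLAPACK):

```python
def decode_info(info):
    """Decode the ScaLAPACK INFO convention: INFO = -i means the i-th
    scalar argument was illegal; INFO = -(i*100+j) means the j-th entry
    of the i-th (array) argument was illegal."""
    if info >= 0:
        return None  # 0 = success; > 0 is routine-specific
    code = -info
    if code >= 100:
        return (code // 100, code % 100)  # (argument index, entry index)
    return (code, None)                   # scalar argument index only
```

For example, INFO = -602 points at the 2nd entry of the 6th argument (here, the DESCA descriptor), while INFO = -2 points at the scalar argument N.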
.br .SH ARGUMENTS .TP 8 UPLO (global input) CHARACTER*1 Specifies whether the upper or lower triangular part of the Hermitian matrix sub( A ) is stored. = 'U': Upper triangular .br = 'L': Lower triangular .TP 8 N (global input) INTEGER The order of the matrix sub( A ). N >= 0. .TP 8 NRHS (global input) INTEGER The number of right hand sides, i.e., the number of columns of the matrices sub( B ) and sub( X ). NRHS >= 0. .TP 8 A (local input) COMPLEX pointer into the local memory to an array of local dimension (LLD_A,LOCc(JA+N-1) ). This array contains the local pieces of the N-by-N Hermitian distributed matrix sub( A ) to be factored. If UPLO = 'U', the leading N-by-N upper triangular part of sub( A ) contains the upper triangular part of the matrix, and its strictly lower triangular part is not referenced. If UPLO = 'L', the leading N-by-N lower triangular part of sub( A ) contains the lower triangular part of the distribu- ted matrix, and its strictly upper triangular part is not referenced. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 AF (local input) COMPLEX pointer into the local memory to an array of local dimension (LLD_AF,LOCc(JA+N-1)). On entry, this array contains the factors L or U from the Cholesky factorization sub( A ) = L*L**H or U**H*U, as computed by PCPOTRF. .TP 8 IAF (global input) INTEGER The row index in the global array AF indicating the first row of sub( AF ). .TP 8 JAF (global input) INTEGER The column index in the global array AF indicating the first column of sub( AF ). .TP 8 DESCAF (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix AF. 
.TP 8
B (local input) COMPLEX pointer into the local memory to an array of local dimension (LLD_B, LOCc(JB+NRHS-1) ).
On entry, this array contains the local pieces of the right hand sides sub( B ).
.TP 8
IB (global input) INTEGER
The row index in the global array B indicating the first row of sub( B ).
.TP 8
JB (global input) INTEGER
The column index in the global array B indicating the first column of sub( B ).
.TP 8
DESCB (global and local input) INTEGER array of dimension DLEN_.
The array descriptor for the distributed matrix B.
.TP 8
X (local input/local output) COMPLEX pointer into the local memory to an array of local dimension (LLD_X, LOCc(JX+NRHS-1) ).
On entry, this array contains the local pieces of the solution vectors sub( X ). On exit, it contains the improved solution vectors.
.TP 8
IX (global input) INTEGER
The row index in the global array X indicating the first row of sub( X ).
.TP 8
JX (global input) INTEGER
The column index in the global array X indicating the first column of sub( X ).
.TP 8
DESCX (global and local input) INTEGER array of dimension DLEN_.
The array descriptor for the distributed matrix X.
.TP 8
FERR (local output) REAL array of local dimension LOCc(JB+NRHS-1).
The estimated forward error bound for each solution vector of sub( X ). If XTRUE is the true solution corresponding to sub( X ), FERR is an estimated upper bound for the magnitude of the largest element in (sub( X ) - XTRUE) divided by the magnitude of the largest element in sub( X ). The estimate is as reliable as the estimate for RCOND, and is almost always a slight overestimate of the true error. This array is tied to the distributed matrix X.
.TP 8
BERR (local output) REAL array of local dimension LOCc(JB+NRHS-1).
The componentwise relative backward error of each solution vector (i.e., the smallest relative change in any entry of sub( A ) or sub( B ) that makes sub( X ) an exact solution). This array is tied to the distributed matrix X.
.TP 8 WORK (local workspace/local output) COMPLEX array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. LWORK is local input and must be at least LWORK >= 2*LOCr( N + MOD( IA-1, MB_A ) ) If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 RWORK (local workspace/local output) REAL array, dimension (LRWORK) On exit, RWORK(1) returns the minimal and optimal LRWORK. .TP 8 LRWORK (local or global input) INTEGER The dimension of the array RWORK. LRWORK is local input and must be at least LRWORK >= LOCr( N + MOD( IB-1, MB_B ) ). If LRWORK = -1, then LRWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 INFO (global output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. .SH PARAMETERS ITMAX is the maximum number of steps of iterative refinement. Notes ===== This routine temporarily returns when N <= 1. The distributed submatrices op( A ) and op( AF ) (respectively sub( X ) and sub( B ) ) should be distributed the same way on the same processes. These conditions ensure that sub( A ) and sub( AF ) (resp. sub( X ) and sub( B ) ) are "perfectly" aligned. 
Moreover, this routine requires the distributed submatrices sub( A ), sub( AF ), sub( X ), and sub( B ) to be aligned on a block boundary, i.e., if f(x,y) = MOD( x-1, y ):
f( IA, DESCA( MB_ ) ) = f( JA, DESCA( NB_ ) ) = 0,
f( IAF, DESCAF( MB_ ) ) = f( JAF, DESCAF( NB_ ) ) = 0,
f( IB, DESCB( MB_ ) ) = f( JB, DESCB( NB_ ) ) = 0, and
f( IX, DESCX( MB_ ) ) = f( JX, DESCX( NB_ ) ) = 0.
.TH PCPOSV l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)"
.SH NAME
PCPOSV - compute the solution to a complex system of linear equations sub( A ) * X = sub( B ),
.SH SYNOPSIS
.TP 19
SUBROUTINE PCPOSV( UPLO, N, NRHS, A, IA, JA, DESCA, B, IB, JB, DESCB, INFO )
.TP 19
.ti +4
CHARACTER UPLO
.TP 19
.ti +4
INTEGER IA, IB, INFO, JA, JB, N, NRHS
.TP 19
.ti +4
INTEGER DESCA( * ), DESCB( * )
.TP 19
.ti +4
COMPLEX A( * ), B( * )
.SH PURPOSE
PCPOSV computes the solution to a complex system of linear equations where sub( A ) denotes A(IA:IA+N-1,JA:JA+N-1) and is an N-by-N Hermitian positive definite distributed matrix, and X and sub( B ), which denotes B(IB:IB+N-1,JB:JB+NRHS-1), are N-by-NRHS distributed matrices.
.br
The Cholesky decomposition is used to factor sub( A ) as
.br
sub( A ) = U**H * U, if UPLO = 'U', or
sub( A ) = L * L**H, if UPLO = 'L',
.br
where U is an upper triangular matrix and L is a lower triangular matrix. The factored form of sub( A ) is then used to solve the system of equations.
.br
Notes
.br
=====
.br
Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location.
.br
Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array".
.br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br This routine requires square block decomposition ( MB_A = NB_A ). 
.SH ARGUMENTS .TP 8 UPLO (global input) CHARACTER = 'U': Upper triangle of sub( A ) is stored; .br = 'L': Lower triangle of sub( A ) is stored. .TP 8 N (global input) INTEGER The number of rows and columns to be operated on, i.e. the order of the distributed submatrix sub( A ). N >= 0. .TP 8 NRHS (global input) INTEGER The number of right hand sides, i.e., the number of columns of the distributed submatrix sub( B ). NRHS >= 0. .TP 8 A (local input/local output) COMPLEX pointer into the local memory to an array of dimension (LLD_A, LOCc(JA+N-1)). On entry, this array contains the local pieces of the N-by-N symmetric distributed matrix sub( A ) to be factored. If UPLO = 'U', the leading N-by-N upper triangular part of sub( A ) contains the upper triangular part of the matrix, and its strictly lower triangular part is not referenced. If UPLO = 'L', the leading N-by-N lower triangular part of sub( A ) contains the lower triangular part of the distribu- ted matrix, and its strictly upper triangular part is not referenced. On exit, if INFO = 0, this array contains the local pieces of the factor U or L from the Cholesky factori- zation sub( A ) = U**H*U or L*L**H. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 B (local input/local output) COMPLEX pointer into the local memory to an array of dimension (LLD_B,LOC(JB+NRHS-1)). On entry, the local pieces of the right hand sides distribu- ted matrix sub( B ). On exit, if INFO = 0, sub( B ) is over- written with the solution distributed matrix X. .TP 8 IB (global input) INTEGER The row index in the global array B indicating the first row of sub( B ). 
.TP 8
JB (global input) INTEGER
The column index in the global array B indicating the first column of sub( B ).
.TP 8
DESCB (global and local input) INTEGER array of dimension DLEN_.
The array descriptor for the distributed matrix B.
.TP 8
INFO (global output) INTEGER
= 0: successful exit
.br
< 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i.
> 0: If INFO = K, the leading minor of order K,
.br
A(IA:IA+K-1,JA:JA+K-1), is not positive definite, so the factorization could not be completed, and the solution has not been computed.
.TH PCPOSVX l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)"
.SH NAME
PCPOSVX - use the Cholesky factorization A = U**H*U or A = L*L**H to compute the solution to a complex system of linear equations A(IA:IA+N-1,JA:JA+N-1) * X = B(IB:IB+N-1,JB:JB+NRHS-1),
.SH SYNOPSIS
.TP 20
SUBROUTINE PCPOSVX( FACT, UPLO, N, NRHS, A, IA, JA, DESCA, AF, IAF, JAF, DESCAF, EQUED, SR, SC, B, IB, JB, DESCB, X, IX, JX, DESCX, RCOND, FERR, BERR, WORK, LWORK, RWORK, LRWORK, INFO )
.TP 20
.ti +4
CHARACTER EQUED, FACT, UPLO
.TP 20
.ti +4
INTEGER IA, IAF, IB, INFO, IX, JA, JAF, JB, JX, LRWORK, LWORK, N, NRHS
.TP 20
.ti +4
REAL RCOND
.TP 20
.ti +4
INTEGER DESCA( * ), DESCAF( * ), DESCB( * ), DESCX( * )
.TP 20
.ti +4
REAL BERR( * ), FERR( * ), SC( * ), SR( * ), RWORK( * )
.TP 20
.ti +4
COMPLEX A( * ), AF( * ), B( * ), WORK( * ), X( * )
.SH PURPOSE
PCPOSVX uses the Cholesky factorization A = U**H*U or A = L*L**H to compute the solution to a complex system of linear equations where A(IA:IA+N-1,JA:JA+N-1) is an N-by-N matrix and X and B(IB:IB+N-1,JB:JB+NRHS-1) are N-by-NRHS matrices.
.br
Error bounds on the solution and a condition estimate are also provided.
In the following comments Y denotes Y(IY:IY+M-1,JY:JY+K-1) a M-by-K matrix where Y can be A, AF, B and X. .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. 
.br
Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row.
.br
The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC:
.br
LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by:
.br
LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A
.br
LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A
.br
.SH DESCRIPTION
The following steps are performed:
.br
1. If FACT = 'E', real scaling factors are computed to equilibrate the system:
.br
diag(SR) * A * diag(SC) * inv(diag(SC)) * X = diag(SR) * B
Whether or not the system will be equilibrated depends on the scaling of the matrix A, but if equilibration is used, A is overwritten by diag(SR)*A*diag(SC) and B by diag(SR)*B.
2. If FACT = 'N' or 'E', the Cholesky decomposition is used to factor the matrix A (after equilibration if FACT = 'E') as
A = U**H * U, if UPLO = 'U', or
.br
A = L * L**H, if UPLO = 'L',
.br
where U is an upper triangular matrix and L is a lower triangular matrix.
.br
3. The factored form of A is used to estimate the condition number of the matrix A. If the reciprocal of the condition number is less than machine precision, steps 4-6 are skipped.
.br
4. The system of equations is solved for X using the factored form of A.
.br
5. Iterative refinement is applied to improve the computed solution matrix and calculate error bounds and backward error estimates for it.
.br
6. If equilibration was used, the matrix X is premultiplied by diag(SR) so that it solves the original system before equilibration.
.br
.SH ARGUMENTS
.TP 8
FACT (global input) CHARACTER
Specifies whether or not the factored form of the matrix A is supplied on entry, and if not, whether the matrix A should be equilibrated before it is factored.
= 'F': On entry, AF contains the factored form of A.
If EQUED = 'Y', the matrix A has been equilibrated with scaling factors given by S. A and AF will not be modified. = 'N': The matrix A will be copied to AF and factored. .br = 'E': The matrix A will be equilibrated if necessary, then copied to AF and factored. .TP 8 UPLO (global input) CHARACTER = 'U': Upper triangle of A is stored; .br = 'L': Lower triangle of A is stored. .TP 8 N (global input) INTEGER The number of rows and columns to be operated on, i.e. the order of the distributed submatrix A(IA:IA+N-1,JA:JA+N-1). N >= 0. .TP 8 NRHS (global input) INTEGER The number of right hand sides, i.e., the number of columns of the distributed submatrices B and X. NRHS >= 0. .TP 8 A (local input/local output) COMPLEX pointer into the local memory to an array of local dimension ( LLD_A, LOCc(JA+N-1) ). On entry, the Hermitian matrix A, except if FACT = 'F' and EQUED = 'Y', then A must contain the equilibrated matrix diag(SR)*A*diag(SC). If UPLO = 'U', the leading N-by-N upper triangular part of A contains the upper triangular part of the matrix A, and the strictly lower triangular part of A is not referenced. If UPLO = 'L', the leading N-by-N lower triangular part of A contains the lower triangular part of the matrix A, and the strictly upper triangular part of A is not referenced. A is not modified if FACT = 'F' or 'N', or if FACT = 'E' and EQUED = 'N' on exit. On exit, if FACT = 'E' and EQUED = 'Y', A is overwritten by diag(SR)*A*diag(SC). .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 AF (local input or local output) COMPLEX pointer into the local memory to an array of local dimension ( LLD_AF, LOCc(JA+N-1)). 
If FACT = 'F', then AF is an input argument and on entry contains the triangular factor U or L from the Cholesky factorization A = U**H*U or A = L*L**H, in the same storage format as A. If EQUED .ne. 'N', then AF is the factored form of the equilibrated matrix diag(SR)*A*diag(SC).
If FACT = 'N', then AF is an output argument and on exit returns the triangular factor U or L from the Cholesky factorization A = U**H*U or A = L*L**H of the original matrix A.
If FACT = 'E', then AF is an output argument and on exit returns the triangular factor U or L from the Cholesky factorization A = U**H*U or A = L*L**H of the equilibrated matrix A (see the description of A for the form of the equilibrated matrix).
.TP 8
IAF (global input) INTEGER
The row index in the global array AF indicating the first row of sub( AF ).
.TP 8
JAF (global input) INTEGER
The column index in the global array AF indicating the first column of sub( AF ).
.TP 8
DESCAF (global and local input) INTEGER array of dimension DLEN_.
The array descriptor for the distributed matrix AF.
.TP 8
EQUED (global input/global output) CHARACTER
Specifies the form of equilibration that was done.
= 'N': No equilibration (always true if FACT = 'N').
.br
= 'Y': Equilibration was done, i.e., A has been replaced by diag(SR) * A * diag(SC). EQUED is an input variable if FACT = 'F'; otherwise, it is an output variable.
.TP 8
SR (local input/local output) REAL array, dimension (LLD_A)
The scale factors for A distributed across process rows; not accessed if EQUED = 'N'. SR is an input variable if FACT = 'F'; otherwise, SR is an output variable. If FACT = 'F' and EQUED = 'Y', each element of SR must be positive.
.TP 8
SC (local input/local output) REAL array, dimension (LOC(N_A))
The scale factors for A distributed across process columns; not accessed if EQUED = 'N'. SC is an input variable if FACT = 'F'; otherwise, SC is an output variable. If FACT = 'F' and EQUED = 'Y', each element of SC must be positive.
.TP 8 B (local input/local output) COMPLEX pointer into the local memory to an array of local dimension ( LLD_B, LOCc(JB+NRHS-1) ). On entry, the N-by-NRHS right-hand side matrix B. On exit, if EQUED = 'N', B is not modified; if EQUED = 'Y', B is overwritten by diag(SR)*B. .TP 8 IB (global input) INTEGER The row index in the global array B indicating the first row of sub( B ). .TP 8 JB (global input) INTEGER The column index in the global array B indicating the first column of sub( B ). .TP 8 DESCB (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix B. .TP 8 X (local input/local output) COMPLEX pointer into the local memory to an array of local dimension ( LLD_X, LOCc(JX+NRHS-1) ). If INFO = 0, the N-by-NRHS solution matrix X to the original system of equations. Note that A and B are modified on exit if EQUED = 'Y', and the solution to the equilibrated system is then inv(diag(SC))*X. .TP 8 IX (global input) INTEGER The row index in the global array X indicating the first row of sub( X ). .TP 8 JX (global input) INTEGER The column index in the global array X indicating the first column of sub( X ). .TP 8 DESCX (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix X. .TP 8 RCOND (global output) REAL The estimate of the reciprocal condition number of the matrix A after equilibration (if done). If RCOND is less than the machine precision (in particular, if RCOND = 0), the matrix is singular to working precision. This condition is indicated by a return code of INFO > 0, and the solution and error bounds are not computed.
.TP 8 FERR (local output) REAL array, dimension (LOC(N_B)) The estimated forward error bounds for each solution vector X(j) (the j-th column of the solution matrix X). If XTRUE is the true solution, FERR(j) bounds the magnitude of the largest entry in (X(j) - XTRUE) divided by the magnitude of the largest entry in X(j). The quality of the error bound depends on the quality of the estimate of norm(inv(A)) computed in the code; if the estimate of norm(inv(A)) is accurate, the error bound is guaranteed. .TP 8 BERR (local output) REAL array, dimension (LOC(N_B)) The componentwise relative backward error of each solution vector X(j) (i.e., the smallest relative change in any entry of A or B that makes X(j) an exact solution). .TP 8 WORK (local workspace/local output) COMPLEX array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. LWORK is local input and must be at least LWORK = MAX( PCPOCON( LWORK ), PCPORFS( LWORK ) ) + LOCr( N_A ). LWORK = 3*DESCA( LLD_ ) If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 RWORK (local workspace/local output) REAL array, dimension (LRWORK) On exit, RWORK(1) returns the minimal and optimal LRWORK. .TP 8 LRWORK (local or global input) INTEGER The dimension of the array RWORK. LRWORK is local input and must be at least LRWORK = 2*LOCc(N_A). If LRWORK = -1, then LRWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. 
.TP 8 INFO (global output) INTEGER = 0: successful exit .br < 0: if INFO = -i, the i-th argument had an illegal value .br > 0: if INFO = i, then .br <= N: the leading minor of order i of A is not positive definite, so the factorization could not be completed, and the solution and error bounds could not be computed. .br = N+1: RCOND is less than machine precision. The factorization has been completed, but the matrix is singular to working precision, and the solution and error bounds have not been computed. .TH PCPOTF2 l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PCPOTF2 - compute the Cholesky factorization of a complex Hermitian positive definite distributed matrix sub( A )=A(IA:IA+N-1,JA:JA+N-1) .SH SYNOPSIS .TP 20 SUBROUTINE PCPOTF2( UPLO, N, A, IA, JA, DESCA, INFO ) .TP 20 .ti +4 CHARACTER UPLO .TP 20 .ti +4 INTEGER IA, INFO, JA, N .TP 20 .ti +4 INTEGER DESCA( * ) .TP 20 .ti +4 COMPLEX A( * ) .SH PURPOSE PCPOTF2 computes the Cholesky factorization of a complex Hermitian positive definite distributed matrix sub( A )=A(IA:IA+N-1,JA:JA+N-1). The factorization has the form .br sub( A ) = U' * U , if UPLO = 'U', or .br sub( A ) = L * L', if UPLO = 'L', .br where U is an upper triangular matrix and L is lower triangular. Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclically distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array".
.br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ ) The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distributed over. The context itself is global, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br This routine requires N <= NB_A-MOD(JA-1, NB_A) and square block decomposition ( MB_A = NB_A ).
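The LOCr()/LOCc() values used throughout these pages come from the NUMROC tool function. As a purely illustrative sketch (Python, not part of ScaLAPACK; the function name and argument order mirror the Fortran tool routine), the standard NUMROC computation can be transcribed as:

```python
def numroc(n, nb, iproc, isrcproc, nprocs):
    """Number of rows/columns of an n-element dimension, distributed
    in blocks of size nb, that land on process iproc in a line of
    nprocs processes whose first block lives on process isrcproc."""
    # Distance of this process from the one holding the first block.
    mydist = (nprocs + iproc - isrcproc) % nprocs
    nblocks = n // nb                  # number of full blocks
    num = (nblocks // nprocs) * nb     # whole "rounds" of full blocks
    extrablks = nblocks % nprocs       # leftover full blocks
    if mydist < extrablks:
        num += nb                      # this process gets one extra full block
    elif mydist == extrablks:
        num += n % nb                  # this process gets the final partial block
    return num
```

For example, a 10-element dimension with NB = 2 on a line of two processes gives numroc(10, 2, 0, 0, 2) = 6 and numroc(10, 2, 1, 0, 2) = 4, consistent with the upper bound LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A quoted above.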
.br .SH ARGUMENTS .TP 8 UPLO (global input) CHARACTER = 'U': Upper triangle of sub( A ) is stored; .br = 'L': Lower triangle of sub( A ) is stored. .TP 8 N (global input) INTEGER The number of rows and columns to be operated on, i.e. the order of the distributed submatrix sub( A ). N >= 0. .TP 8 A (local input/local output) COMPLEX pointer into the local memory to an array of dimension (LLD_A, LOCc(JA+N-1)). On entry, this array contains the local pieces of the N-by-N Hermitian distributed matrix sub( A ) to be factored. If UPLO = 'U', the leading N-by-N upper triangular part of sub( A ) contains the upper triangular part of the matrix, and its strictly lower triangular part is not referenced. If UPLO = 'L', the leading N-by-N lower triangular part of sub( A ) contains the lower triangular part of the distributed matrix, and its strictly upper triangular part is not referenced. On exit, if UPLO = 'U', the upper triangular part of the distributed matrix contains the Cholesky factor U, if UPLO = 'L', the lower triangular part of the distributed matrix contains the Cholesky factor L. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 INFO (local output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-th entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. > 0: If INFO = K, the leading minor of order K, .br A(IA:IA+K-1,JA:JA+K-1) is not positive definite, and the factorization could not be completed.
.TH PCPOTRF l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PCPOTRF - compute the Cholesky factorization of an N-by-N complex Hermitian positive definite distributed matrix sub( A ) denoting A(IA:IA+N-1, JA:JA+N-1) .SH SYNOPSIS .TP 20 SUBROUTINE PCPOTRF( UPLO, N, A, IA, JA, DESCA, INFO ) .TP 20 .ti +4 CHARACTER UPLO .TP 20 .ti +4 INTEGER IA, INFO, JA, N .TP 20 .ti +4 INTEGER DESCA( * ) .TP 20 .ti +4 COMPLEX A( * ) .SH PURPOSE PCPOTRF computes the Cholesky factorization of an N-by-N complex Hermitian positive definite distributed matrix sub( A ) denoting A(IA:IA+N-1, JA:JA+N-1). The factorization has the form .br sub( A ) = U' * U , if UPLO = 'U', or .br sub( A ) = L * L', if UPLO = 'L', .br where U is an upper triangular matrix and L is lower triangular. Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclically distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ ) The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distributed over. The context itself is global, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array.
.br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br This routine requires square block decomposition ( MB_A = NB_A ). .SH ARGUMENTS .TP 8 UPLO (global input) CHARACTER = 'U': Upper triangle of sub( A ) is stored; .br = 'L': Lower triangle of sub( A ) is stored. .TP 8 N (global input) INTEGER The number of rows and columns to be operated on, i.e. the order of the distributed submatrix sub( A ). N >= 0. .TP 8 A (local input/local output) COMPLEX pointer into the local memory to an array of dimension (LLD_A, LOCc(JA+N-1)). On entry, this array contains the local pieces of the N-by-N Hermitian distributed matrix sub( A ) to be factored. If UPLO = 'U', the leading N-by-N upper triangular part of sub( A ) contains the upper triangular part of the matrix, and its strictly lower triangular part is not referenced. 
If UPLO = 'L', the leading N-by-N lower triangular part of sub( A ) contains the lower triangular part of the distributed matrix, and its strictly upper triangular part is not referenced. On exit, if UPLO = 'U', the upper triangular part of the distributed matrix contains the Cholesky factor U, if UPLO = 'L', the lower triangular part of the distributed matrix contains the Cholesky factor L. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 INFO (global output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-th entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. > 0: If INFO = K, the leading minor of order K, .br A(IA:IA+K-1,JA:JA+K-1) is not positive definite, and the factorization could not be completed. .TH PCPOTRI l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PCPOTRI - compute the inverse of a complex Hermitian positive definite distributed matrix sub( A ) = A(IA:IA+N-1,JA:JA+N-1) using the Cholesky factorization sub( A ) = U**H*U or L*L**H computed by PCPOTRF .SH SYNOPSIS .TP 20 SUBROUTINE PCPOTRI( UPLO, N, A, IA, JA, DESCA, INFO ) .TP 20 .ti +4 CHARACTER UPLO .TP 20 .ti +4 INTEGER IA, INFO, JA, N .TP 20 .ti +4 INTEGER DESCA( * ) .TP 20 .ti +4 COMPLEX A( * ) .SH PURPOSE PCPOTRI computes the inverse of a complex Hermitian positive definite distributed matrix sub( A ) = A(IA:IA+N-1,JA:JA+N-1) using the Cholesky factorization sub( A ) = U**H*U or L*L**H computed by PCPOTRF.
Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclically distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ ) The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distributed over. The context itself is global, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row.
.br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 UPLO (global input) CHARACTER*1 = 'U': Upper triangle of sub( A ) is stored; .br = 'L': Lower triangle of sub( A ) is stored. .TP 8 N (global input) INTEGER The number of rows and columns to be operated on, i.e. the order of the distributed submatrix sub( A ). N >= 0. .TP 8 A (local input/local output) COMPLEX pointer into the local memory to an array of dimension (LLD_A, LOCc(JA+N-1)). On entry, the local pieces of the triangular factor U or L from the Cholesky factorization of the distributed matrix sub( A ) = U**H*U or L*L**H, as computed by PCPOTRF. On exit, the local pieces of the upper or lower triangle of the (Hermitian) inverse of sub( A ), overwriting the input factor U or L. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 INFO (global output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-th entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. > 0: If INFO = i, the (i,i) element of the factor U or L is zero, and the inverse could not be computed.
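All of these routines assume the 2D block cyclic distribution described in the Notes above. As an illustrative sketch (Python; the helper name `owner` is hypothetical, not a ScaLAPACK routine), the process grid coordinates holding a given global entry follow directly from the descriptor entries MB_A, NB_A, RSRC_A and CSRC_A:

```python
def owner(i, j, mb, nb, rsrc, csrc, nprow, npcol):
    """Process grid coordinates (prow, pcol) owning global entry
    A(i, j) of a 2D block cyclically distributed matrix, with
    1-based global indices i and j."""
    prow = ((i - 1) // mb + rsrc) % nprow  # row block index, cycled over process rows
    pcol = ((j - 1) // nb + csrc) % npcol  # column block index, cycled over process columns
    return prow, pcol
```

With MB_A = NB_A = 2 on a 2 x 2 grid and RSRC_A = CSRC_A = 0, rows 1-2 map to process row 0, rows 3-4 to process row 1, rows 5-6 back to process row 0, and so on cyclically.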
.TH PCPOTRS l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PCPOTRS - solve a system of linear equations sub( A ) * X = sub( B ), where sub( A ) = A(IA:IA+N-1,JA:JA+N-1) and sub( B ) = B(IB:IB+N-1,JB:JB+NRHS-1) .SH SYNOPSIS .TP 20 SUBROUTINE PCPOTRS( UPLO, N, NRHS, A, IA, JA, DESCA, B, IB, JB, DESCB, INFO ) .TP 20 .ti +4 CHARACTER UPLO .TP 20 .ti +4 INTEGER IA, IB, INFO, JA, JB, N, NRHS .TP 20 .ti +4 INTEGER DESCA( * ), DESCB( * ) .TP 20 .ti +4 COMPLEX A( * ), B( * ) .SH PURPOSE PCPOTRS solves a system of linear equations where sub( A ) denotes A(IA:IA+N-1,JA:JA+N-1) and is an N-by-N Hermitian positive definite distributed matrix, using the Cholesky factorization sub( A ) = U**H*U or L*L**H computed by PCPOTRF. sub( B ) denotes the distributed matrix B(IB:IB+N-1,JB:JB+NRHS-1). Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclically distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ ) The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distributed over. The context itself is global, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array.
.br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br This routine requires square block decomposition ( MB_A = NB_A ). .SH ARGUMENTS .TP 8 UPLO (global input) CHARACTER = 'U': Upper triangle of sub( A ) is stored; .br = 'L': Lower triangle of sub( A ) is stored. .TP 8 N (global input) INTEGER The number of rows and columns to be operated on, i.e. the order of the distributed submatrix sub( A ). N >= 0. .TP 8 NRHS (global input) INTEGER The number of right hand sides, i.e., the number of columns of the distributed submatrix sub( B ). NRHS >= 0. .TP 8 A (local input) COMPLEX pointer into local memory to an array of dimension (LLD_A, LOCc(JA+N-1)). On entry, this array contains the factors L or U from the Cholesky factorization sub( A ) = L*L**H or U**H*U, as computed by PCPOTRF.
.TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 B (local input/local output) COMPLEX pointer into the local memory to an array of local dimension (LLD_B,LOCc(JB+NRHS-1)). On entry, this array contains the local pieces of the right hand sides sub( B ). On exit, this array contains the local pieces of the solution distributed matrix X. .TP 8 IB (global input) INTEGER The row index in the global array B indicating the first row of sub( B ). .TP 8 JB (global input) INTEGER The column index in the global array B indicating the first column of sub( B ). .TP 8 DESCB (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix B. .TP 8 INFO (global output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-th entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. .TH PCPTSV l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PCPTSV - solve a system of linear equations A(1:N, JA:JA+N-1) * X = B(IB:IB+N-1, 1:NRHS) .SH SYNOPSIS .TP 19 SUBROUTINE PCPTSV( UPLO, N, NRHS, D, E, JA, DESCA, B, IB, DESCB, WORK, LWORK, INFO ) .TP 19 .ti +4 CHARACTER UPLO .TP 19 .ti +4 INTEGER IB, INFO, JA, LWORK, N, NRHS .TP 19 .ti +4 INTEGER DESCA( * ), DESCB( * ) .TP 19 .ti +4 COMPLEX B( * ), E( * ), WORK( * ) .TP 19 .ti +4 REAL D( * ) .SH PURPOSE PCPTSV solves a system of linear equations where A(1:N, JA:JA+N-1) is an N-by-N complex tridiagonal symmetric positive definite distributed matrix.
.br Cholesky factorization is used to factor a reordering of the matrix into L L'. .br See PCPTTRF and PCPTTRS for details. .br .TH PCPTTRF l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PCPTTRF - compute a Cholesky factorization of an N-by-N complex tridiagonal symmetric positive definite distributed matrix A(1:N, JA:JA+N-1) .SH SYNOPSIS .TP 20 SUBROUTINE PCPTTRF( N, D, E, JA, DESCA, AF, LAF, WORK, LWORK, INFO ) .TP 20 .ti +4 INTEGER INFO, JA, LAF, LWORK, N .TP 20 .ti +4 INTEGER DESCA( * ) .TP 20 .ti +4 COMPLEX AF( * ), E( * ), WORK( * ) .TP 20 .ti +4 REAL D( * ) .SH PURPOSE PCPTTRF computes a Cholesky factorization of an N-by-N complex tridiagonal symmetric positive definite distributed matrix A(1:N, JA:JA+N-1). Reordering is used to increase parallelism in the factorization. This reordering results in factors that are DIFFERENT from those produced by equivalent sequential codes. These factors cannot be used directly by users; however, they can be used in subsequent calls to PCPTTRS to solve linear systems. .br The factorization has the form .br P A(1:N, JA:JA+N-1) P^T = U' D U, or .br P A(1:N, JA:JA+N-1) P^T = L D L', .br where U is a tridiagonal upper triangular matrix and L is tridiagonal lower triangular, and P is a permutation matrix.
.br .TH PCPTTRS l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PCPTTRS - solve a system of linear equations A(1:N, JA:JA+N-1) * X = B(IB:IB+N-1, 1:NRHS) .SH SYNOPSIS .TP 20 SUBROUTINE PCPTTRS( UPLO, N, NRHS, D, E, JA, DESCA, B, IB, DESCB, AF, LAF, WORK, LWORK, INFO ) .TP 20 .ti +4 CHARACTER UPLO .TP 20 .ti +4 INTEGER IB, INFO, JA, LAF, LWORK, N, NRHS .TP 20 .ti +4 INTEGER DESCA( * ), DESCB( * ) .TP 20 .ti +4 COMPLEX AF( * ), B( * ), E( * ), WORK( * ) .TP 20 .ti +4 REAL D( * ) .SH PURPOSE PCPTTRS solves a system of linear equations where A(1:N, JA:JA+N-1) is the matrix used to produce the factors stored in A(1:N,JA:JA+N-1) and AF by PCPTTRF. .br A(1:N, JA:JA+N-1) is an N-by-N complex tridiagonal symmetric positive definite distributed matrix. .br Depending on the value of UPLO, A stores either U or L in the equation A(1:N, JA:JA+N-1) = U' * D * U or L * D * L', as computed by PCPTTRF. Routine PCPTTRF MUST be called first.
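The factor-once, solve-many pattern above (PCPTTRF followed by PCPTTRS, mirroring PCPOTRF followed by PCPOTRS) can be illustrated in miniature. The following sketch is not ScaLAPACK code and ignores the distributed layout entirely; it forms the upper Cholesky factor U of a 2-by-2 Hermitian positive definite matrix, so that A = U^H * U, and then solves A x = b with the same two triangular solves a *POTRS-style routine performs on the stored factor:

```python
import math

def chol_upper_2x2(a11, a12, a22):
    """Upper Cholesky factor U of the 2x2 Hermitian matrix
    [[a11, a12], [conj(a12), a22]], so that A = U^H * U."""
    u11 = math.sqrt(a11.real)
    u12 = a12 / u11
    u22 = math.sqrt(a22.real - (u12 * u12.conjugate()).real)
    return u11, u12, u22

def solve_posv_2x2(a11, a12, a22, b1, b2):
    """Solve A x = b: forward substitution with U^H, then back
    substitution with U (the two solves PCPOTRS applies to the
    factor produced by PCPOTRF)."""
    u11, u12, u22 = chol_upper_2x2(a11, a12, a22)
    y1 = b1 / u11                          # U^H y = b
    y2 = (b2 - u12.conjugate() * y1) / u22
    x2 = y2 / u22                          # U x = y
    x1 = (y1 - u12 * x2) / u11
    return x1, x2
```

Once the factor is formed, additional right hand sides reuse it at triangular-solve cost only, which is why the factorization and solve are exposed as separate routines.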
.br .TH PCPTTRSV l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PCPTTRSV - solve a tridiagonal triangular system of linear equations A(1:N, JA:JA+N-1) * X = B(IB:IB+N-1, 1:NRHS) .SH SYNOPSIS .TP 21 SUBROUTINE PCPTTRSV( UPLO, TRANS, N, NRHS, D, E, JA, DESCA, B, IB, DESCB, AF, LAF, WORK, LWORK, INFO ) .TP 21 .ti +4 CHARACTER TRANS, UPLO .TP 21 .ti +4 INTEGER IB, INFO, JA, LAF, LWORK, N, NRHS .TP 21 .ti +4 INTEGER DESCA( * ), DESCB( * ) .TP 21 .ti +4 COMPLEX AF( * ), B( * ), E( * ), WORK( * ) .TP 21 .ti +4 REAL D( * ) .SH PURPOSE PCPTTRSV solves a tridiagonal triangular system of linear equations .br A(1:N, JA:JA+N-1) * X = B(IB:IB+N-1, 1:NRHS) .br or .br A(1:N, JA:JA+N-1)^H * X = B(IB:IB+N-1, 1:NRHS), .br where A(1:N, JA:JA+N-1) is a tridiagonal triangular matrix factor produced by the Cholesky factorization code PCPTTRF and is stored in A(1:N,JA:JA+N-1) and AF. .br The matrix stored in A(1:N, JA:JA+N-1) is either upper or lower triangular according to UPLO, and the choice of solving A(1:N, JA:JA+N-1) or A(1:N, JA:JA+N-1)^H is dictated by the user by the parameter TRANS. .br Routine PCPTTRF MUST be called first. .br .TH PCSRSCL l "12 May 1997" "LAPACK version 1.5" "LAPACK auxiliary routine (version 1.5)" .SH NAME PCSRSCL - multiply an N-element complex distributed vector sub( X ) by the real scalar 1/a .SH SYNOPSIS .TP 20 SUBROUTINE PCSRSCL( N, SA, SX, IX, JX, DESCX, INCX ) .TP 20 .ti +4 INTEGER IX, INCX, JX, N .TP 20 .ti +4 REAL SA .TP 20 .ti +4 INTEGER DESCX( * ) .TP 20 .ti +4 COMPLEX SX( * ) .SH PURPOSE PCSRSCL multiplies an N-element complex distributed vector sub( X ) by the real scalar 1/a. This is done without overflow or underflow as long as the final sub( X )/a does not overflow or underflow.
.br where sub( X ) denotes X(IX:IX+N-1,JX:JX), if INCX = 1, .br X(IX:IX,JX:JX+N-1), if INCX = M_X. Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclically distributed array. Such a global array has an associated description vector descA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DT_A (global) descA[ DT_ ] The descriptor type. In this case, DT_A = 1. .br CTXT_A (global) descA[ CTXT_ ] The BLACS context handle, indicating the BLACS process grid A is distributed over. The context itself is global, but the handle (the integer value) may vary. .br M_A (global) descA[ M_ ] The number of rows in the global array A. .br N_A (global) descA[ N_ ] The number of columns in the global array A. .br MB_A (global) descA[ MB_ ] The blocking factor used to distribute the rows of the array. .br NB_A (global) descA[ NB_ ] The blocking factor used to distribute the columns of the array. RSRC_A (global) descA[ RSRC_ ] The process row over which the first row of the array A is distributed. CSRC_A (global) descA[ CSRC_ ] The process column over which the first column of the array A is distributed. .br LLD_A (local) descA[ LLD_ ] The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column.
.br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br Because vectors may be seen as particular matrices, a distributed vector is considered to be a distributed matrix. .br .SH ARGUMENTS .TP 8 N (global input) pointer to INTEGER The number of components of the distributed vector sub( X ). N >= 0. .TP 8 SA (global input) REAL The scalar a which is used to divide each component of sub( X ). SA must be nonzero, otherwise the subroutine will divide by zero. .TP 8 SX (local input/local output) COMPLEX array containing the local pieces of a distributed matrix of dimension of at least ( (JX-1)*M_X + IX + ( N - 1 )*abs( INCX ) ) This array contains the entries of the distributed vector sub( X ). .TP 8 IX (global input) pointer to INTEGER The global row index of the submatrix of the distributed matrix X to operate on. .TP 8 JX (global input) pointer to INTEGER The global column index of the submatrix of the distributed matrix X to operate on. .TP 8 DESCX (global and local input) INTEGER array of dimension 8. The array descriptor of the distributed matrix X. .TP 8 INCX (global input) pointer to INTEGER The global increment for the elements of X. Only two values of INCX are supported in this version, namely 1 and M_X.
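PCSRSCL's guarantee, scaling by 1/a without intermediate overflow or underflow, is achieved in the LAPACK-family *RSCL routines by splitting the reciprocal into safe multipliers applied in steps. A rough single-process Python sketch of that idea (illustrative only; the constants come from `sys.float_info`, not from ScaLAPACK):

```python
import sys

def rscl(x, a):
    """Return the entries of x multiplied by 1/a, applying the
    reciprocal in overflow-safe steps when a is very small or very
    large (the idea behind the LAPACK-family *RSCL routines)."""
    smlnum = sys.float_info.min        # smallest normal float
    bignum = 1.0 / smlnum
    cden, cnum = a, 1.0                # target multiplier is cnum/cden
    while True:
        cden1 = cden * smlnum
        cnum1 = cnum / bignum
        if abs(cden1) > abs(cnum) and cnum != 0.0:
            mul, done = smlnum, False  # pre-multiply by smlnum, not done yet
            cden = cden1
        elif abs(cnum1) > abs(cden):
            mul, done = bignum, False  # pre-multiply by bignum, not done yet
            cnum = cnum1
        else:
            mul, done = cnum / cden, True
        x = [v * mul for v in x]
        if done:
            return x
```

For moderate a the loop runs once with mul = 1/a; for a near the underflow or overflow threshold, the scaling is spread over several safe multiplications instead of forming an unrepresentable reciprocal.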
scalapack-doc-1.5/man/manl/pcstein.l0100644000056400000620000002563606335610623017101 0ustar pfrauenfstaff.TH PCSTEIN l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PCSTEIN - compute the eigenvectors of a symmetric tridiagonal matrix in parallel, using inverse iteration .SH SYNOPSIS .TP 20 SUBROUTINE PCSTEIN( N, D, E, M, W, IBLOCK, ISPLIT, ORFAC, Z, IZ, JZ, DESCZ, WORK, LWORK, IWORK, LIWORK, IFAIL, ICLUSTR, GAP, INFO ) .TP 20 .ti +4 INTEGER INFO, IZ, JZ, LIWORK, LWORK, M, N .TP 20 .ti +4 REAL ORFAC .TP 20 .ti +4 INTEGER DESCZ( * ), IBLOCK( * ), ICLUSTR( * ), IFAIL( * ), ISPLIT( * ), IWORK( * ) .TP 20 .ti +4 REAL D( * ), E( * ), GAP( * ), W( * ), WORK( * ) .TP 20 .ti +4 COMPLEX Z( * ) .SH PURPOSE PCSTEIN computes the eigenvectors of a symmetric tridiagonal matrix in parallel, using inverse iteration. The eigenvectors found correspond to user specified eigenvalues. PCSTEIN does not orthogonalize vectors that are on different processes. The extent of orthogonalization is controlled by the input parameter LWORK. Eigenvectors that are to be orthogonalized are computed by the same process. PCSTEIN decides on the allocation of work among the processes and then calls SSTEIN2 (modified LAPACK routine) on each individual process. If insufficient workspace is allocated, the expected orthogonalization may not be done. .br Note : If the eigenvectors obtained are not orthogonal, increase LWORK and run the code again. .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". 
.br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS P = NPROW * NPCOL is the total number of processes .TP 8 N (global input) INTEGER The order of the tridiagonal matrix T. N >= 0. 
.TP 8 D (global input) REAL array, dimension (N) The n diagonal elements of the tridiagonal matrix T. .TP 8 E (global input) REAL array, dimension (N-1) The (n-1) off-diagonal elements of the tridiagonal matrix T. .TP 8 M (global input) INTEGER The total number of eigenvectors to be found. 0 <= M <= N. .TP 8 W (global input/global output) REAL array, dim (M) On input, the first M elements of W contain all the eigenvalues for which eigenvectors are to be computed. The eigenvalues should be grouped by split-off block and ordered from smallest to largest within the block (The output array W from PSSTEBZ with ORDER='b' is expected here). This array should be replicated on all processes. On output, the first M elements contain the input eigenvalues in ascending order. Note : To obtain orthogonal vectors, it is best if eigenvalues are computed to highest accuracy ( this can be done by setting ABSTOL to the underflow threshold = SLAMCH('U') --- ABSTOL is an input parameter to PSSTEBZ ) .TP 8 IBLOCK (global input) INTEGER array, dimension (N) The submatrix indices associated with the corresponding eigenvalues in W -- 1 for eigenvalues belonging to the first submatrix from the top, 2 for those belonging to the second submatrix, etc. (The output array IBLOCK from PSSTEBZ is expected here). .TP 8 ISPLIT (global input) INTEGER array, dimension (N) The splitting points, at which T breaks up into submatrices. The first submatrix consists of rows/columns 1 to ISPLIT(1), the second of rows/columns ISPLIT(1)+1 through ISPLIT(2), etc., and the NSPLIT-th consists of rows/columns ISPLIT(NSPLIT-1)+1 through ISPLIT(NSPLIT)=N (The output array ISPLIT from PSSTEBZ is expected here.) .TP 8 ORFAC (global input) REAL ORFAC specifies which eigenvectors should be orthogonalized. Eigenvectors that correspond to eigenvalues which are within ORFAC*||T|| of each other are to be orthogonalized. 
However, if the workspace is insufficient (see LWORK), this tolerance may be decreased until all eigenvectors to be orthogonalized can be stored in one process. No orthogonalization will be done if ORFAC equals zero. A default value of 10^-3 is used if ORFAC is negative. ORFAC should be identical on all processes. .TP 8 Z (local output) COMPLEX array, dimension (DESCZ(DLEN_), N/npcol + NB) Z contains the computed eigenvectors associated with the specified eigenvalues. Any vector which fails to converge is set to its current iterate after MAXITS iterations ( See SSTEIN2 ). On output, Z is distributed across the P processes in block cyclic format. .TP 8 IZ (global input) INTEGER Z's global row index, which points to the beginning of the submatrix which is to be operated on. .TP 8 JZ (global input) INTEGER Z's global column index, which points to the beginning of the submatrix which is to be operated on. .TP 8 DESCZ (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix Z. .TP 8 WORK (local workspace/global output) REAL array, dimension ( LWORK ) On output, WORK(1) gives a lower bound on the workspace ( LWORK ) that guarantees the user desired orthogonalization (see ORFAC). Note that this may overestimate the minimum workspace needed. .TP 8 LWORK (local input) integer LWORK controls the extent of orthogonalization which can be done. The number of eigenvectors for which storage is allocated on each process is NVEC = floor(( LWORK- max(5*N,NP00*MQ00) )/N). Eigenvectors corresponding to eigenvalue clusters of size NVEC - ceil(M/P) + 1 are guaranteed to be orthogonal ( the orthogonality is similar to that obtained from CSTEIN2). Note : LWORK must be no smaller than: max(5*N,NP00*MQ00) + ceil(M/P)*N, and should have the same input value on all processes. It is the minimum value of LWORK input on different processes that is significant. 
If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 IWORK (local workspace/global output) INTEGER array, dimension ( 3*N+P+1 ) On return, IWORK(1) contains the amount of integer workspace required. On return, IWORK(2) through IWORK(P+2) indicate the eigenvectors computed by each process. Process I computes eigenvectors indexed IWORK(I+2)+1 through IWORK(I+3). .TP 8 LIWORK (local input) INTEGER Size of array IWORK. Must be >= 3*N + P + 1. If LIWORK = -1, then LIWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 IFAIL (global output) INTEGER array, dimension (M) On normal exit, all elements of IFAIL are zero. If one or more eigenvectors fail to converge after MAXITS iterations (as in CSTEIN), then INFO > 0 is returned. If mod(INFO,M+1)>0, then for I=1 to mod(INFO,M+1), the eigenvector corresponding to the eigenvalue W(IFAIL(I)) failed to converge ( W refers to the array of eigenvalues on output ). .TP 8 ICLUSTR (global output) INTEGER array, dimension (2*P) This output array contains indices of eigenvectors corresponding to a cluster of eigenvalues that could not be orthogonalized due to insufficient workspace (see LWORK, ORFAC and INFO). Eigenvectors corresponding to clusters of eigenvalues indexed ICLUSTR(2*I-1) to ICLUSTR(2*I), I = 1 to INFO/(M+1), could not be orthogonalized due to lack of workspace. Hence the eigenvectors corresponding to these clusters may not be orthogonal. ICLUSTR is a zero terminated array --- ( ICLUSTR(2*K).NE.0 .AND. ICLUSTR(2*K+1).EQ.0 ) if and only if K is the number of clusters. 
.TP 8 GAP (global output) REAL array, dimension (P) This output array contains the gap between eigenvalues whose eigenvectors could not be orthogonalized. The INFO/(M+1) output values in this array correspond to the INFO/(M+1) clusters indicated by the array ICLUSTR. As a result, the dot product between eigenvectors corresponding to the I-th cluster may be as high as ( O(n)*macheps ) / GAP(I). .TP 8 INFO (global output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. .br > 0 : if mod(INFO,M+1) = I, then I eigenvectors failed to converge in MAXITS iterations. Their indices are stored in the array IFAIL. If INFO/(M+1) = I, then eigenvectors corresponding to I clusters of eigenvalues could not be orthogonalized due to insufficient workspace. The indices of the clusters are stored in the array ICLUSTR. scalapack-doc-1.5/man/manl/pctrcon.l0100644000056400000620000001603206335610623017072 0ustar pfrauenfstaff.TH PCTRCON l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PCTRCON - estimate the reciprocal of the condition number of a triangular distributed matrix A(IA:IA+N-1,JA:JA+N-1), in either the 1-norm or the infinity-norm .SH SYNOPSIS .TP 20 SUBROUTINE PCTRCON( NORM, UPLO, DIAG, N, A, IA, JA, DESCA, RCOND, WORK, LWORK, RWORK, LRWORK, INFO ) .TP 20 .ti +4 CHARACTER DIAG, NORM, UPLO .TP 20 .ti +4 INTEGER IA, JA, INFO, LRWORK, LWORK, N .TP 20 .ti +4 REAL RCOND .TP 20 .ti +4 INTEGER DESCA( * ) .TP 20 .ti +4 REAL RWORK( * ) .TP 20 .ti +4 COMPLEX A( * ), WORK( * ) .SH PURPOSE PCTRCON estimates the reciprocal of the condition number of a triangular distributed matrix A(IA:IA+N-1,JA:JA+N-1), in either the 1-norm or the infinity-norm. 
The norm of A(IA:IA+N-1,JA:JA+N-1) is computed and an estimate is obtained for norm(inv(A(IA:IA+N-1,JA:JA+N-1))), then the reciprocal of the condition number is computed as .br RCOND = 1 / ( norm( A(IA:IA+N-1,JA:JA+N-1) ) * norm( inv(A(IA:IA+N-1,JA:JA+N-1)) ) ). Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. 
.br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 NORM (global input) CHARACTER Specifies whether the 1-norm condition number or the infinity-norm condition number is required: .br = '1' or 'O': 1-norm; .br = 'I': Infinity-norm. .TP 8 UPLO (global input) CHARACTER .br = 'U': A(IA:IA+N-1,JA:JA+N-1) is upper triangular; .br = 'L': A(IA:IA+N-1,JA:JA+N-1) is lower triangular. .TP 8 DIAG (global input) CHARACTER .br = 'N': A(IA:IA+N-1,JA:JA+N-1) is non-unit triangular; .br = 'U': A(IA:IA+N-1,JA:JA+N-1) is unit triangular. .TP 8 N (global input) INTEGER .br The order of the distributed matrix A(IA:IA+N-1,JA:JA+N-1). N >= 0. .TP 8 A (local input) COMPLEX pointer into the local memory to an array of dimension ( LLD_A, LOCc(JA+N-1) ). This array contains the local pieces of the triangular distributed matrix A(IA:IA+N-1,JA:JA+N-1). If UPLO = 'U', the leading N-by-N upper triangular part of this distributed matrix contains the upper triangular matrix, and its strictly lower triangular part is not referenced. If UPLO = 'L', the leading N-by-N lower triangular part of this distributed matrix contains the lower triangular matrix, and the strictly upper triangular part is not referenced. If DIAG = 'U', the diagonal elements of A(IA:IA+N-1,JA:JA+N-1) are also not referenced and are assumed to be 1. 
.TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 RCOND (global output) REAL The reciprocal of the condition number of the distributed matrix A(IA:IA+N-1,JA:JA+N-1), computed as .br RCOND = 1 / ( norm( A(IA:IA+N-1,JA:JA+N-1) ) * .br norm( inv(A(IA:IA+N-1,JA:JA+N-1)) ) ). .TP 8 WORK (local workspace/local output) COMPLEX array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. LWORK is local input and must be at least LWORK >= 2*LOCr(N+MOD(IA-1,MB_A)) + MAX( 2, MAX(NB_A*CEIL(P-1,Q),LOCc(N+MOD(JA-1,NB_A)) + NB_A*CEIL(Q-1,P)) ). If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 RWORK (local workspace/local output) REAL array, dimension (LRWORK) On exit, RWORK(1) returns the minimal and optimal LRWORK. .TP 8 LRWORK (local or global input) INTEGER The dimension of the array RWORK. LRWORK is local input and must be at least LRWORK >= LOCc(N+MOD(JA-1,NB_A)). If LRWORK = -1, then LRWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. 
.TP 8 INFO (global output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. scalapack-doc-1.5/man/manl/pctrrfs.l0100644000056400000620000002264506335610623017114 0ustar pfrauenfstaff.TH PCTRRFS l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PCTRRFS - provide error bounds and backward error estimates for the solution to a system of linear equations with a triangular coefficient matrix .SH SYNOPSIS .TP 20 SUBROUTINE PCTRRFS( UPLO, TRANS, DIAG, N, NRHS, A, IA, JA, DESCA, B, IB, JB, DESCB, X, IX, JX, DESCX, FERR, BERR, WORK, LWORK, RWORK, LRWORK, INFO ) .TP 20 .ti +4 CHARACTER DIAG, TRANS, UPLO .TP 20 .ti +4 INTEGER INFO, IA, IB, IX, JA, JB, JX, LRWORK, LWORK, N, NRHS .TP 20 .ti +4 INTEGER DESCA( * ), DESCB( * ), DESCX( * ) .TP 20 .ti +4 REAL BERR( * ), FERR( * ), RWORK( * ) .TP 20 .ti +4 COMPLEX A( * ), B( * ), WORK( * ), X( * ) .SH PURPOSE PCTRRFS provides error bounds and backward error estimates for the solution to a system of linear equations with a triangular coefficient matrix. The solution matrix X must be computed by PCTRTRS or some other means before entering this routine. PCTRRFS does not do iterative refinement because doing so cannot improve the backward error. Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. 
In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br In the following comments, sub( A ), sub( X ) and sub( B ) denote respectively A(IA:IA+N-1,JA:JA+N-1), X(IX:IX+N-1,JX:JX+NRHS-1) and B(IB:IB+N-1,JB:JB+NRHS-1). .br .SH ARGUMENTS .TP 8 UPLO (global input) CHARACTER*1 = 'U': sub( A ) is upper triangular; .br = 'L': sub( A ) is lower triangular. 
.TP 8 TRANS (global input) CHARACTER*1 Specifies the form of the system of equations. = 'N': sub( A ) * sub( X ) = sub( B ) (No transpose) .br = 'T': sub( A )**T * sub( X ) = sub( B ) (Transpose) .br = 'C': sub( A )**H * sub( X ) = sub( B ) (Conjugate transpose) .TP 8 DIAG (global input) CHARACTER*1 = 'N': sub( A ) is non-unit triangular; .br = 'U': sub( A ) is unit triangular. .TP 8 N (global input) INTEGER The order of the matrix sub( A ). N >= 0. .TP 8 NRHS (global input) INTEGER The number of right hand sides, i.e., the number of columns of the matrices sub( B ) and sub( X ). NRHS >= 0. .TP 8 A (local input) COMPLEX pointer into the local memory to an array of local dimension (LLD_A,LOCc(JA+N-1) ). This array contains the local pieces of the original triangular distributed matrix sub( A ). If UPLO = 'U', the leading N-by-N upper triangular part of sub( A ) contains the upper triangular part of the matrix, and its strictly lower triangular part is not referenced. If UPLO = 'L', the leading N-by-N lower triangular part of sub( A ) contains the lower triangular part of the distributed matrix, and its strictly upper triangular part is not referenced. If DIAG = 'U', the diagonal elements of sub( A ) are also not referenced and are assumed to be 1. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 B (local input) COMPLEX pointer into the local memory to an array of local dimension (LLD_B, LOCc(JB+NRHS-1) ). On entry, this array contains the local pieces of the right hand sides sub( B ). .TP 8 IB (global input) INTEGER The row index in the global array B indicating the first row of sub( B ). 
.TP 8 JB (global input) INTEGER The column index in the global array B indicating the first column of sub( B ). .TP 8 DESCB (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix B. .TP 8 X (local input) COMPLEX pointer into the local memory to an array of local dimension (LLD_X, LOCc(JX+NRHS-1) ). On entry, this array contains the local pieces of the solution vectors sub( X ). .TP 8 IX (global input) INTEGER The row index in the global array X indicating the first row of sub( X ). .TP 8 JX (global input) INTEGER The column index in the global array X indicating the first column of sub( X ). .TP 8 DESCX (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix X. .TP 8 FERR (local output) REAL array of local dimension LOCc(JB+NRHS-1). The estimated forward error bounds for each solution vector of sub( X ). If XTRUE is the true solution, FERR bounds the magnitude of the largest entry in (sub( X ) - XTRUE) divided by the magnitude of the largest entry in sub( X ). The estimate is as reliable as the estimate for RCOND, and is almost always a slight overestimate of the true error. This array is tied to the distributed matrix X. .TP 8 BERR (local output) REAL array of local dimension LOCc(JB+NRHS-1). The componentwise relative backward error of each solution vector (i.e., the smallest relative change in any entry of sub( A ) or sub( B ) that makes sub( X ) an exact solution). This array is tied to the distributed matrix X. .TP 8 WORK (local workspace/local output) COMPLEX array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. LWORK is local input and must be at least LWORK >= 2*LOCr( N + MOD( IA-1, MB_A ) ). If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. 
Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 RWORK (local workspace/local output) REAL array, dimension (LRWORK) On exit, RWORK(1) returns the minimal and optimal LRWORK. .TP 8 LRWORK (local or global input) INTEGER The dimension of the array RWORK. LRWORK is local input and must be at least LRWORK >= LOCr( N + MOD( IB-1, MB_B ) ). If LRWORK = -1, then LRWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 INFO (global output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. Notes ===== This routine returns immediately when N <= 1. The distributed submatrices sub( X ) and sub( B ) should be distributed the same way on the same processes. These conditions ensure that sub( X ) and sub( B ) are "perfectly" aligned. Moreover, this routine requires the distributed submatrices sub( A ), sub( X ), and sub( B ) to be aligned on a block boundary, i.e., if f(x,y) = MOD( x-1, y ): f( IA, DESCA( MB_ ) ) = f( JA, DESCA( NB_ ) ) = 0, f( IB, DESCB( MB_ ) ) = f( JB, DESCB( NB_ ) ) = 0, and f( IX, DESCX( MB_ ) ) = f( JX, DESCX( NB_ ) ) = 0. 
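The block-boundary alignment rule in the notes above is ordinary modular arithmetic on 1-based global indices. A short Python sketch (the function names are illustrative, not part of ScaLAPACK):

```python
def f(x, y):
    # f(x,y) = MOD( x-1, y ), as defined in the alignment notes (1-based x).
    return (x - 1) % y

def aligned(i, j, mb, nb):
    """True when a submatrix whose first element has global indices (i, j)
    starts exactly on a block boundary of an mb-by-nb blocking, i.e.
    f(i, mb) = f(j, nb) = 0."""
    return f(i, mb) == 0 and f(j, nb) == 0
```

For example, with MB_A = NB_A = 32, starting indices IA = 1 or IA = 33 are aligned, while IA = 2 is not.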
scalapack-doc-1.5/man/manl/pctrti2.l0100644000056400000620000001205306335610623017010 0ustar pfrauenfstaff.TH PCTRTI2 l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PCTRTI2 - compute the inverse of a complex upper or lower triangular block matrix sub( A ) = A(IA:IA+N-1,JA:JA+N-1) .SH SYNOPSIS .TP 20 SUBROUTINE PCTRTI2( UPLO, DIAG, N, A, IA, JA, DESCA, INFO ) .TP 20 .ti +4 CHARACTER DIAG, UPLO .TP 20 .ti +4 INTEGER IA, INFO, JA, N .TP 20 .ti +4 INTEGER DESCA( * ) .TP 20 .ti +4 COMPLEX A( * ) .SH PURPOSE PCTRTI2 computes the inverse of a complex upper or lower triangular block matrix sub( A ) = A(IA:IA+N-1,JA:JA+N-1). This matrix should be contained in one and only one process memory space (local operation). Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. 
.br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 UPLO (global input) CHARACTER*1 = 'U': sub( A ) is upper triangular; .br = 'L': sub( A ) is lower triangular. .TP 8 DIAG (global input) CHARACTER*1 .br = 'N': sub( A ) is non-unit triangular .br = 'U': sub( A ) is unit triangular .TP 8 N (global input) INTEGER The number of rows and columns to be operated on, i.e. the order of the distributed submatrix sub( A ). N >= 0. .TP 8 A (local input/local output) COMPLEX pointer into the local memory to an array of dimension (LLD_A,LOCc(JA+N-1)), this array contains the local pieces of the triangular matrix sub( A ). If UPLO = 'U', the leading N-by-N upper triangular part of the matrix sub( A ) contains the upper triangular matrix, and the strictly lower triangular part of sub( A ) is not referenced. 
If UPLO = 'L', the leading N-by-N lower triangular part of the matrix sub( A ) contains the lower triangular matrix, and the strictly upper triangular part of sub( A ) is not referenced. If DIAG = 'U', the diagonal elements of sub( A ) are also not referenced and are assumed to be 1. On exit, the (triangular) inverse of the original matrix, in the same storage format. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 INFO (local output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. scalapack-doc-1.5/man/manl/pctrtri.l0100644000056400000620000001213306335610623017107 0ustar pfrauenfstaff.TH PCTRTRI l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PCTRTRI - compute the inverse of a upper or lower triangular distributed matrix sub( A ) = A(IA:IA+N-1,JA:JA+N-1) .SH SYNOPSIS .TP 20 SUBROUTINE PCTRTRI( UPLO, DIAG, N, A, IA, JA, DESCA, INFO ) .TP 20 .ti +4 CHARACTER DIAG, UPLO .TP 20 .ti +4 INTEGER IA, INFO, JA, N .TP 20 .ti +4 INTEGER DESCA( * ) .TP 20 .ti +4 COMPLEX A( * ) .SH PURPOSE PCTRTRI computes the inverse of a upper or lower triangular distributed matrix sub( A ) = A(IA:IA+N-1,JA:JA+N-1). Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. 
In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). 
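The LOCr/LOCc computation performed by NUMROC can be sketched in Python. This is an illustrative re-derivation of the ScaLAPACK tool function's block-cyclic counting logic, not a substitute for calling NUMROC itself:

```python
def numroc(n, nb, iproc, isrcproc, nprocs):
    """Number of rows (or columns) of a block-cyclically distributed
    dimension n, with block size nb, owned by process coordinate iproc,
    when the first block lives on process isrcproc and the grid has
    nprocs processes in this direction.  Mirrors ScaLAPACK's NUMROC."""
    mydist = (nprocs + iproc - isrcproc) % nprocs  # distance from source process
    nblocks = n // nb                              # number of full blocks
    num = (nblocks // nprocs) * nb                 # full blocks every process owns
    extrablocks = nblocks % nprocs                 # leftover full blocks
    if mydist < extrablocks:
        num += nb                                  # one extra full block
    elif mydist == extrablocks:
        num += n % nb                              # the trailing partial block
    return num

# Example: M = 10 rows, MB_A = 3, NPROW = 2, RSRC_A = 0.  The blocks of
# size 3,3,3,1 are dealt out cyclically: process row 0 gets blocks 0 and 2
# (6 rows), process row 1 gets blocks 1 and 3 (4 rows).
print(numroc(10, 3, 0, 0, 2), numroc(10, 3, 1, 0, 2))  # 6 4
```

Summing numroc over all process coordinates recovers the global dimension, which makes such a helper convenient for sanity-checking descriptor setups off-line.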
An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 UPLO (global input) CHARACTER Specifies whether the distributed matrix sub( A ) is upper or lower triangular: .br = 'U': Upper triangular, .br = 'L': Lower triangular. .TP 8 DIAG (global input) CHARACTER Specifies whether or not the distributed matrix sub( A ) is unit triangular: .br = 'N': Non-unit triangular, .br = 'U': Unit triangular. .TP 8 N (global input) INTEGER The number of rows and columns to be operated on, i.e. the order of the distributed submatrix sub( A ). N >= 0. .TP 8 A (local input/local output) COMPLEX pointer into the local memory to an array of dimension (LLD_A,LOCc(JA+N-1)). On entry, this array contains the local pieces of the triangular matrix sub( A ). If UPLO = 'U', the leading N-by-N upper triangular part of the matrix sub( A ) contains the upper triangular matrix to be inverted, and the strictly lower triangular part of sub( A ) is not referenced. If UPLO = 'L', the leading N-by-N lower triangular part of the matrix sub( A ) contains the lower triangular matrix, and the strictly upper triangular part of sub( A ) is not referenced. On exit, the (triangular) inverse of the original matrix. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 INFO (global output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. > 0: If INFO = K, A(IA+K-1,JA+K-1) is exactly zero. 
The triangular matrix sub( A ) is singular and its inverse cannot be computed. scalapack-doc-1.5/man/manl/pctrtrs.l0100644000056400000620000001446406335610623017132 0ustar pfrauenfstaff.TH PCTRTRS l "12 May 1997" "LAPACK version 1.5" "LAPACK auxiliary routine (version 1.5)" .SH NAME PCTRTRS - solve a triangular system of the form sub( A ) * X = sub( B ) or sub( A )**T * X = sub( B ) or sub( A )**H * X = sub( B ) .SH SYNOPSIS .TP 20 SUBROUTINE PCTRTRS( UPLO, TRANS, DIAG, N, NRHS, A, IA, JA, DESCA, B, IB, JB, DESCB, INFO ) .TP 20 .ti +4 CHARACTER DIAG, TRANS, UPLO .TP 20 .ti +4 INTEGER IA, IB, INFO, JA, JB, N, NRHS .TP 20 .ti +4 INTEGER DESCA( * ), DESCB( * ) .TP 20 .ti +4 COMPLEX A( * ), B( * ) .SH PURPOSE PCTRTRS solves a triangular system of the form sub( A ) * X = sub( B ), sub( A )**T * X = sub( B ) or sub( A )**H * X = sub( B ), where sub( A ) denotes A(IA:IA+N-1,JA:JA+N-1) and is a triangular distributed matrix of order N, and B(IB:IB+N-1,JB:JB+NRHS-1) is an N-by-NRHS distributed matrix denoted by sub( B ). A check is made to verify that sub( A ) is nonsingular. .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclically distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. 
.br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 UPLO (global input) CHARACTER = 'U': sub( A ) is upper triangular; .br = 'L': sub( A ) is lower triangular. .TP 8 TRANS (global input) CHARACTER .br Specifies the form of the system of equations: .br = 'N': Solve sub( A ) * X = sub( B ) (No transpose) .br = 'T': Solve sub( A )**T * X = sub( B ) (Transpose) .br = 'C': Solve sub( A )**H * X = sub( B ) (Conjugate transpose) .TP 8 DIAG (global input) CHARACTER .br = 'N': sub( A ) is non-unit triangular; .br = 'U': sub( A ) is unit triangular. .TP 8 N (global input) INTEGER The number of rows and columns to be operated on i.e the order of the distributed submatrix sub( A ). N >= 0. 
.TP 8 NRHS (global input) INTEGER The number of right hand sides, i.e., the number of columns of the distributed matrix sub( B ). NRHS >= 0. .TP 8 A (local input) COMPLEX pointer into the local memory to an array of dimension (LLD_A,LOCc(JA+N-1) ). This array contains the local pieces of the distributed triangular matrix sub( A ). If UPLO = 'U', the leading N-by-N upper triangular part of sub( A ) contains the upper triangular matrix, and the strictly lower triangular part of sub( A ) is not referenced. If UPLO = 'L', the leading N-by-N lower triangular part of sub( A ) contains the lower triangular matrix, and the strictly upper triangular part of sub( A ) is not referenced. If DIAG = 'U', the diagonal elements of sub( A ) are also not referenced and are assumed to be 1. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 B (local input/local output) COMPLEX pointer into the local memory to an array of dimension (LLD_B,LOCc(JB+NRHS-1)). On entry, this array contains the local pieces of the right hand side distributed matrix sub( B ). On exit, if INFO = 0, sub( B ) is overwritten by the solution matrix X. .TP 8 IB (global input) INTEGER The row index in the global array B indicating the first row of sub( B ). .TP 8 JB (global input) INTEGER The column index in the global array B indicating the first column of sub( B ). .TP 8 DESCB (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix B. .TP 8 INFO (output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. 
> 0: If INFO = i, the i-th diagonal element of sub( A ) is zero, indicating that the submatrix is singular and the solutions X have not been computed. scalapack-doc-1.5/man/manl/pctzrzf.l0100644000056400000620000001565006335610623017131 0ustar pfrauenfstaff.TH PCTZRZF l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PCTZRZF - reduce the M-by-N ( M<=N ) complex upper trapezoidal matrix sub( A ) = A(IA:IA+M-1,JA:JA+N-1) to upper triangular form by means of unitary transformations .SH SYNOPSIS .TP 20 SUBROUTINE PCTZRZF( M, N, A, IA, JA, DESCA, TAU, WORK, LWORK, INFO ) .TP 20 .ti +4 INTEGER IA, INFO, JA, LWORK, M, N .TP 20 .ti +4 INTEGER DESCA( * ) .TP 20 .ti +4 COMPLEX A( * ), TAU( * ), WORK( * ) .SH PURPOSE PCTZRZF reduces the M-by-N ( M<=N ) complex upper trapezoidal matrix sub( A ) = A(IA:IA+M-1,JA:JA+N-1) to upper triangular form by means of unitary transformations. The upper trapezoidal matrix sub( A ) is factored as .br sub( A ) = ( R 0 ) * Z, .br where Z is an N-by-N unitary matrix and R is an M-by-M upper triangular matrix. .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. 
.br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 M (global input) INTEGER The number of rows to be operated on, i.e. the number of rows of the distributed submatrix sub( A ). M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on, i.e. the number of columns of the distributed submatrix sub( A ). N >= 0. .TP 8 A (local input/local output) COMPLEX pointer into the local memory to an array of dimension (LLD_A, LOCc(JA+N-1)). On entry, the local pieces of the M-by-N distributed matrix sub( A ) which is to be factored. 
On exit, the leading M-by-M upper triangular part of sub( A ) contains the upper trian- gular matrix R, and elements M+1 to N of the first M rows of sub( A ), with the array TAU, represent the unitary matrix Z as a product of M elementary reflectors. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 TAU (local output) COMPLEX, array, dimension LOCr(IA+M-1) This array contains the scalar factors of the elementary reflectors. TAU is tied to the distributed matrix A. .TP 8 WORK (local workspace/local output) COMPLEX array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. LWORK is local input and must be at least LWORK >= MB_A * ( Mp0 + Nq0 + MB_A ), where IROFF = MOD( IA-1, MB_A ), ICOFF = MOD( JA-1, NB_A ), IAROW = INDXG2P( IA, MB_A, MYROW, RSRC_A, NPROW ), IACOL = INDXG2P( JA, NB_A, MYCOL, CSRC_A, NPCOL ), Mp0 = NUMROC( M+IROFF, MB_A, MYROW, IAROW, NPROW ), Nq0 = NUMROC( N+ICOFF, NB_A, MYCOL, IACOL, NPCOL ), and NUMROC, INDXG2P are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 INFO (global output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. 
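The LWORK lower bound above can be evaluated off-line before allocating the workspace. The helper below is a hedged sketch: numroc and indxg2p re-derive the ScaLAPACK tool functions in plain Python (the real INDXG2P also takes a dummy process argument), and the wrapper name pctzrzf_min_lwork is invented for this example; in practice the LWORK = -1 workspace query remains the authoritative way to size WORK.

```python
def indxg2p(ig, nb, isrcproc, nprocs):
    # Process coordinate owning global index ig (1-based), per INDXG2P.
    return (isrcproc + (ig - 1) // nb) % nprocs

def numroc(n, nb, iproc, isrcproc, nprocs):
    # Local extent owned by process iproc, per NUMROC.
    mydist = (nprocs + iproc - isrcproc) % nprocs
    nblocks = n // nb
    num = (nblocks // nprocs) * nb
    if mydist < nblocks % nprocs:
        num += nb
    elif mydist == nblocks % nprocs:
        num += n % nb
    return num

def pctzrzf_min_lwork(m, n, ia, ja, mb, nb, myrow, mycol,
                      nprow, npcol, rsrc=0, csrc=0):
    # Hypothetical helper evaluating LWORK >= MB_A*( Mp0 + Nq0 + MB_A )
    # exactly as spelled out in the LWORK description above.
    iroff = (ia - 1) % mb
    icoff = (ja - 1) % nb
    iarow = indxg2p(ia, mb, rsrc, nprow)
    iacol = indxg2p(ja, nb, csrc, npcol)
    mp0 = numroc(m + iroff, mb, myrow, iarow, nprow)
    nq0 = numroc(n + icoff, nb, mycol, iacol, npcol)
    return mb * (mp0 + nq0 + mb)

# 4-by-6 upper trapezoidal matrix, 2x2 blocks, 2x2 grid, process (0,0):
# Mp0 = 2, Nq0 = 4, so the bound is 2*(2 + 4 + 2) = 16.
print(pctzrzf_min_lwork(4, 6, 1, 1, 2, 2, 0, 0, 2, 2))  # 16
```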
.SH FURTHER DETAILS The factorization is obtained by Householder's method. The kth transformation matrix, Z( k ), whose conjugate transpose is used to introduce zeros into the (m - k + 1)th row of sub( A ), is given in the form .br Z( k ) = ( I    0    ), .br          ( 0  T( k ) ) .br where .br T( k ) = I - tau*u( k )*u( k )', .br u( k ) = (   1    ), .br          (   0    ) .br          ( z( k ) ) .br tau is a scalar and z( k ) is an ( n - m ) element vector. tau and z( k ) are chosen to annihilate the elements of the kth row of sub( A ). .br The scalar tau is returned in the kth element of TAU and the vector u( k ) in the kth row of sub( A ), such that the elements of z( k ) are in a( k, m + 1 ), ..., a( k, n ). The elements of R are returned in the upper triangular part of sub( A ). .br Z is given by .br Z = Z( 1 ) * Z( 2 ) * ... * Z( m ). .br scalapack-doc-1.5/man/manl/pcung2l.l0100644000056400000620000001403706335610623016777 0ustar pfrauenfstaff.TH PCUNG2L l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PCUNG2L - generate an M-by-N complex distributed matrix Q denoting A(IA:IA+M-1,JA:JA+N-1) with orthonormal columns, which is defined as the last N columns of a product of K elementary reflectors of order M, Q = H(k) . . . H(2) H(1) .SH SYNOPSIS .TP 20 SUBROUTINE PCUNG2L( M, N, K, A, IA, JA, DESCA, TAU, WORK, LWORK, INFO ) .TP 20 .ti +4 INTEGER IA, INFO, JA, K, LWORK, M, N .TP 20 .ti +4 INTEGER DESCA( * ) .TP 20 .ti +4 COMPLEX A( * ), TAU( * ), WORK( * ) .SH PURPOSE PCUNG2L generates an M-by-N complex distributed matrix Q denoting A(IA:IA+M-1,JA:JA+N-1) with orthonormal columns, which is defined as the last N columns of a product of K elementary reflectors of order M, Q = H(k) . . . H(2) H(1), as returned by PCGEQLF. .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclically distributed array. 
Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). 
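The global-to-process and global-to-local index maps implied by this block-cyclic layout can be sketched as follows. These are illustrative Python re-derivations of the ScaLAPACK tool functions INDXG2P and INDXG2L (the real INDXG2P additionally accepts a dummy process argument):

```python
def indxg2p(ig, nb, isrcproc, nprocs):
    # Grid coordinate of the process owning global index ig (1-based),
    # for block size nb, source process isrcproc, nprocs processes.
    return (isrcproc + (ig - 1) // nb) % nprocs

def indxg2l(ig, nb, nprocs):
    # 1-based local index of global index ig within its owning process:
    # whole "rounds" of nb*nprocs entries contribute nb local entries each,
    # plus the offset inside the current block.
    return nb * ((ig - 1) // (nb * nprocs)) + (ig - 1) % nb + 1

# With NB = 2 over NPROCS = 2 starting at process 0, global indices 1..8
# map to processes 0,0,1,1,0,0,1,1 with local indices 1,2,1,2,3,4,3,4.
owners = [indxg2p(i, 2, 0, 2) for i in range(1, 9)]
local_ = [indxg2l(i, 2, 2) for i in range(1, 9)]
print(owners)  # [0, 0, 1, 1, 0, 0, 1, 1]
print(local_)  # [1, 2, 1, 2, 3, 4, 3, 4]
```

Together with NUMROC, these two maps fully determine where each global matrix entry lives, which is useful when packing or inspecting local arrays by hand.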
An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 M (global input) INTEGER The number of rows to be operated on i.e the number of rows of the distributed submatrix Q. M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on i.e the number of columns of the distributed submatrix Q. M >= N >= 0. .TP 8 K (global input) INTEGER The number of elementary reflectors whose product defines the matrix Q. N >= K >= 0. .TP 8 A (local input/local output) COMPLEX pointer into the local memory to an array of dimension (LLD_A,LOCc(JA+N-1)). On entry, the j-th column must contain the vector which defines the elementary reflector H(j), JA+N-K <= j <= JA+N-1, as returned by PCGEQLF in the K columns of its distributed matrix argument A(IA:*,JA+N-K:JA+N-1). On exit, this array contains the local pieces of the M-by-N distributed matrix Q. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 TAU (local input) COMPLEX, array, dimension LOCc(JA+N-1) This array contains the scalar factors TAU(j) of the elementary reflectors H(j) as returned by PCGEQLF. TAU is tied to the distributed matrix A. .TP 8 WORK (local workspace/local output) COMPLEX array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. 
LWORK is local input and must be at least LWORK >= MpA0 + MAX( 1, NqA0 ), where IROFFA = MOD( IA-1, MB_A ), ICOFFA = MOD( JA-1, NB_A ), IAROW = INDXG2P( IA, MB_A, MYROW, RSRC_A, NPROW ), IACOL = INDXG2P( JA, NB_A, MYCOL, CSRC_A, NPCOL ), MpA0 = NUMROC( M+IROFFA, MB_A, MYROW, IAROW, NPROW ), NqA0 = NUMROC( N+ICOFFA, NB_A, MYCOL, IACOL, NPCOL ), INDXG2P and NUMROC are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 INFO (local output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. scalapack-doc-1.5/man/manl/pcung2r.l0100644000056400000620000001402206335610623016777 0ustar pfrauenfstaff.TH PCUNG2R l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PCUNG2R - generate an M-by-N complex distributed matrix Q denoting A(IA:IA+M-1,JA:JA+N-1) with orthonormal columns, which is defined as the first N columns of a product of K elementary reflectors of order M Q = H(1) H(2) .SH SYNOPSIS .TP 20 SUBROUTINE PCUNG2R( M, N, K, A, IA, JA, DESCA, TAU, WORK, LWORK, INFO ) .TP 20 .ti +4 INTEGER IA, INFO, JA, K, LWORK, M, N .TP 20 .ti +4 INTEGER DESCA( * ) .TP 20 .ti +4 COMPLEX A( * ), TAU( * ), WORK( * ) .SH PURPOSE PCUNG2R generates an M-by-N complex distributed matrix Q denoting A(IA:IA+M-1,JA:JA+N-1) with orthonormal columns, which is defined as the first N columns of a product of K elementary reflectors of order M as returned by PCGEQRF. .br Notes .br ===== .br Each global data object is described by an associated description vector. 
This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. 
.br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 M (global input) INTEGER The number of rows to be operated on i.e the number of rows of the distributed submatrix Q. M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on i.e the number of columns of the distributed submatrix Q. M >= N >= 0. .TP 8 K (global input) INTEGER The number of elementary reflectors whose product defines the matrix Q. N >= K >= 0. .TP 8 A (local input/local output) COMPLEX pointer into the local memory to an array of dimension (LLD_A,LOCc(JA+N-1)). On entry, the j-th column must contain the vector which defines the elementary reflector H(j), JA <= j <= JA+K-1, as returned by PCGEQRF in the K columns of its array argument A(IA:*,JA:JA+K-1). On exit, this array contains the local pieces of the M-by-N distributed matrix Q. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 TAU (local input) COMPLEX, array, dimension LOCc(JA+K-1). This array contains the scalar factors TAU(j) of the elementary reflectors H(j) as returned by PCGEQRF. TAU is tied to the distributed matrix A. .TP 8 WORK (local workspace/local output) COMPLEX array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. 
LWORK is local input and must be at least LWORK >= MpA0 + MAX( 1, NqA0 ), where IROFFA = MOD( IA-1, MB_A ), ICOFFA = MOD( JA-1, NB_A ), IAROW = INDXG2P( IA, MB_A, MYROW, RSRC_A, NPROW ), IACOL = INDXG2P( JA, NB_A, MYCOL, CSRC_A, NPCOL ), MpA0 = NUMROC( M+IROFFA, MB_A, MYROW, IAROW, NPROW ), NqA0 = NUMROC( N+ICOFFA, NB_A, MYCOL, IACOL, NPCOL ), INDXG2P and NUMROC are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 INFO (local output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. scalapack-doc-1.5/man/manl/pcungl2.l0100644000056400000620000001401106335610623016767 0ustar pfrauenfstaff.TH PCUNGL2 l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PCUNGL2 - generate an M-by-N complex distributed matrix Q denoting A(IA:IA+M-1,JA:JA+N-1) with orthonormal rows, which is defined as the first M rows of a product of K elementary reflectors of order N Q = H(k)' .SH SYNOPSIS .TP 20 SUBROUTINE PCUNGL2( M, N, K, A, IA, JA, DESCA, TAU, WORK, LWORK, INFO ) .TP 20 .ti +4 INTEGER IA, INFO, JA, K, LWORK, M, N .TP 20 .ti +4 INTEGER DESCA( * ) .TP 20 .ti +4 COMPLEX A( * ), TAU( * ), WORK( * ) .SH PURPOSE PCUNGL2 generates an M-by-N complex distributed matrix Q denoting A(IA:IA+M-1,JA:JA+N-1) with orthonormal rows, which is defined as the first M rows of a product of K elementary reflectors of order N as returned by PCGELQF. .br Notes .br ===== .br Each global data object is described by an associated description vector. 
This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. 
.br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 M (global input) INTEGER The number of rows to be operated on i.e the number of rows of the distributed submatrix Q. M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on i.e the number of columns of the distributed submatrix Q. N >= M >= 0. .TP 8 K (global input) INTEGER The number of elementary reflectors whose product defines the matrix Q. M >= K >= 0. .TP 8 A (local input/local output) COMPLEX pointer into the local memory to an array of dimension (LLD_A,LOCc(JA+N-1)). On entry, the i-th row must contain the vector which defines the elementary reflector H(i), IA <= i <= IA+K-1, as returned by PCGELQF in the K rows of its distributed matrix argument A(IA:IA+K-1,JA:*). On exit, this array contains the local pieces of the M-by-N distributed matrix Q. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 TAU (local input) COMPLEX, array, dimension LOCr(IA+K-1). This array contains the scalar factors TAU(i) of the elementary reflectors H(i) as returned by PCGELQF. TAU is tied to the distributed matrix A. .TP 8 WORK (local workspace/local output) COMPLEX array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. 
LWORK is local input and must be at least LWORK >= NqA0 + MAX( 1, MpA0 ), where
.br
IROFFA = MOD( IA-1, MB_A ), ICOFFA = MOD( JA-1, NB_A ),
.br
IAROW = INDXG2P( IA, MB_A, MYROW, RSRC_A, NPROW ),
.br
IACOL = INDXG2P( JA, NB_A, MYCOL, CSRC_A, NPCOL ),
.br
MpA0 = NUMROC( M+IROFFA, MB_A, MYROW, IAROW, NPROW ),
.br
NqA0 = NUMROC( N+ICOFFA, NB_A, MYCOL, IACOL, NPCOL ),
.br
INDXG2P and NUMROC are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA.
.TP 8
INFO (local output) INTEGER
= 0: successful exit
.br
< 0: If the i-th argument is an array and the j-th entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i.
.TH PCUNGLQ l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)"
.SH NAME
PCUNGLQ - generate an M-by-N complex distributed matrix Q denoting A(IA:IA+M-1,JA:JA+N-1) with orthonormal rows, which is defined as the first M rows of a product of K elementary reflectors of order N Q = H(k)' . . . H(2)' H(1)'
.SH SYNOPSIS
.TP 20
SUBROUTINE PCUNGLQ( M, N, K, A, IA, JA, DESCA, TAU, WORK, LWORK, INFO )
.TP 20
.ti +4
INTEGER IA, INFO, JA, K, LWORK, M, N
.TP 20
.ti +4
INTEGER DESCA( * )
.TP 20
.ti +4
COMPLEX A( * ), TAU( * ), WORK( * )
.SH PURPOSE
PCUNGLQ generates an M-by-N complex distributed matrix Q denoting A(IA:IA+M-1,JA:JA+N-1) with orthonormal rows, which is defined as the first M rows of a product of K elementary reflectors of order N as returned by PCGELQF.
.br
Notes
.br
=====
.br
Each global data object is described by an associated description vector.
This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. 
.br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 M (global input) INTEGER The number of rows to be operated on i.e the number of rows of the distributed submatrix Q. M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on i.e the number of columns of the distributed submatrix Q. N >= M >= 0. .TP 8 K (global input) INTEGER The number of elementary reflectors whose product defines the matrix Q. M >= K >= 0. .TP 8 A (local input/local output) COMPLEX pointer into the local memory to an array of dimension (LLD_A,LOCc(JA+N-1)). On entry, the i-th row must contain the vector which defines the elementary reflector H(i), IA <= i <= IA+K-1, as returned by PCGELQF in the K rows of its distributed matrix argument A(IA:IA+K-1,JA:*). On exit, this array contains the local pieces of the M-by-N distributed matrix Q. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 TAU (local input) COMPLEX, array, dimension LOCr(IA+K-1). This array contains the scalar factors TAU(i) of the elementary reflectors H(i) as returned by PCGELQF. TAU is tied to the distributed matrix A. .TP 8 WORK (local workspace/local output) COMPLEX array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. 
LWORK is local input and must be at least LWORK >= MB_A * ( MpA0 + NqA0 + MB_A ), where
.br
IROFFA = MOD( IA-1, MB_A ), ICOFFA = MOD( JA-1, NB_A ),
.br
IAROW = INDXG2P( IA, MB_A, MYROW, RSRC_A, NPROW ),
.br
IACOL = INDXG2P( JA, NB_A, MYCOL, CSRC_A, NPCOL ),
.br
MpA0 = NUMROC( M+IROFFA, MB_A, MYROW, IAROW, NPROW ),
.br
NqA0 = NUMROC( N+ICOFFA, NB_A, MYCOL, IACOL, NPCOL ),
.br
INDXG2P and NUMROC are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA.
.TP 8
INFO (global output) INTEGER
= 0: successful exit
.br
< 0: If the i-th argument is an array and the j-th entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i.
.TH PCUNGQL l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)"
.SH NAME
PCUNGQL - generate an M-by-N complex distributed matrix Q denoting A(IA:IA+M-1,JA:JA+N-1) with orthonormal columns, which is defined as the last N columns of a product of K elementary reflectors of order M Q = H(k) . . . H(2) H(1)
.SH SYNOPSIS
.TP 20
SUBROUTINE PCUNGQL( M, N, K, A, IA, JA, DESCA, TAU, WORK, LWORK, INFO )
.TP 20
.ti +4
INTEGER IA, INFO, JA, K, LWORK, M, N
.TP 20
.ti +4
INTEGER DESCA( * )
.TP 20
.ti +4
COMPLEX A( * ), TAU( * ), WORK( * )
.SH PURPOSE
PCUNGQL generates an M-by-N complex distributed matrix Q denoting A(IA:IA+M-1,JA:JA+N-1) with orthonormal columns, which is defined as the last N columns of a product of K elementary reflectors of order M as returned by PCGEQLF.
.br
Notes
.br
=====
.br
Each global data object is described by an associated description vector.
This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. 
.br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 M (global input) INTEGER The number of rows to be operated on i.e the number of rows of the distributed submatrix Q. M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on i.e the number of columns of the distributed submatrix Q. M >= N >= 0. .TP 8 K (global input) INTEGER The number of elementary reflectors whose product defines the matrix Q. N >= K >= 0. .TP 8 A (local input/local output) COMPLEX pointer into the local memory to an array of dimension (LLD_A,LOCc(JA+N-1)). On entry, the j-th column must contain the vector which defines the elementary reflector H(j), JA+N-K <= j <= JA+N-1, as returned by PCGEQLF in the K columns of its distributed matrix argument A(IA:*,JA+N-K:JA+N-1). On exit, this array contains the local pieces of the M-by-N distributed matrix Q. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 TAU (local input) COMPLEX, array, dimension LOCc(JA+N-1) This array contains the scalar factors TAU(j) of the elementary reflectors H(j) as returned by PCGEQLF. TAU is tied to the distributed matrix A. .TP 8 WORK (local workspace/local output) COMPLEX array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. 
LWORK is local input and must be at least LWORK >= NB_A * ( NqA0 + MpA0 + NB_A ), where
.br
IROFFA = MOD( IA-1, MB_A ), ICOFFA = MOD( JA-1, NB_A ),
.br
IAROW = INDXG2P( IA, MB_A, MYROW, RSRC_A, NPROW ),
.br
IACOL = INDXG2P( JA, NB_A, MYCOL, CSRC_A, NPCOL ),
.br
MpA0 = NUMROC( M+IROFFA, MB_A, MYROW, IAROW, NPROW ),
.br
NqA0 = NUMROC( N+ICOFFA, NB_A, MYCOL, IACOL, NPCOL ),
.br
INDXG2P and NUMROC are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA.
.TP 8
INFO (global output) INTEGER
= 0: successful exit
.br
< 0: If the i-th argument is an array and the j-th entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i.
.TH PCUNGQR l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)"
.SH NAME
PCUNGQR - generate an M-by-N complex distributed matrix Q denoting A(IA:IA+M-1,JA:JA+N-1) with orthonormal columns, which is defined as the first N columns of a product of K elementary reflectors of order M Q = H(1) H(2) . . . H(k)
.SH SYNOPSIS
.TP 20
SUBROUTINE PCUNGQR( M, N, K, A, IA, JA, DESCA, TAU, WORK, LWORK, INFO )
.TP 20
.ti +4
INTEGER IA, INFO, JA, K, LWORK, M, N
.TP 20
.ti +4
INTEGER DESCA( * )
.TP 20
.ti +4
COMPLEX A( * ), TAU( * ), WORK( * )
.SH PURPOSE
PCUNGQR generates an M-by-N complex distributed matrix Q denoting A(IA:IA+M-1,JA:JA+N-1) with orthonormal columns, which is defined as the first N columns of a product of K elementary reflectors of order M as returned by PCGEQRF.
.br
Notes
.br
=====
.br
Each global data object is described by an associated description vector.
This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. 
.br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 M (global input) INTEGER The number of rows to be operated on i.e the number of rows of the distributed submatrix Q. M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on i.e the number of columns of the distributed submatrix Q. M >= N >= 0. .TP 8 K (global input) INTEGER The number of elementary reflectors whose product defines the matrix Q. N >= K >= 0. .TP 8 A (local input/local output) COMPLEX pointer into the local memory to an array of dimension (LLD_A,LOCc(JA+N-1)). On entry, the j-th column must contain the vector which defines the elementary reflector H(j), JA <= j <= JA+K-1, as returned by PCGEQRF in the K columns of its distributed matrix argument A(IA:*,JA:JA+K-1). On exit, this array contains the local pieces of the M-by-N distributed matrix Q. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 TAU (local input) COMPLEX, array, dimension LOCc(JA+K-1) This array contains the scalar factors TAU(j) of the elementary reflectors H(j) as returned by PCGEQRF. TAU is tied to the distributed matrix A. .TP 8 WORK (local workspace/local output) COMPLEX array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. 
LWORK is local input and must be at least LWORK >= NB_A * ( NqA0 + MpA0 + NB_A ), where
.br
IROFFA = MOD( IA-1, MB_A ), ICOFFA = MOD( JA-1, NB_A ),
.br
IAROW = INDXG2P( IA, MB_A, MYROW, RSRC_A, NPROW ),
.br
IACOL = INDXG2P( JA, NB_A, MYCOL, CSRC_A, NPCOL ),
.br
MpA0 = NUMROC( M+IROFFA, MB_A, MYROW, IAROW, NPROW ),
.br
NqA0 = NUMROC( N+ICOFFA, NB_A, MYCOL, IACOL, NPCOL ),
.br
INDXG2P and NUMROC are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA.
.TP 8
INFO (global output) INTEGER
= 0: successful exit
.br
< 0: If the i-th argument is an array and the j-th entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i.
.TH PCUNGR2 l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)"
.SH NAME
PCUNGR2 - generate an M-by-N complex distributed matrix Q denoting A(IA:IA+M-1,JA:JA+N-1) with orthonormal rows, which is defined as the last M rows of a product of K elementary reflectors of order N Q = H(1)' H(2)' . . . H(k)'
.SH SYNOPSIS
.TP 20
SUBROUTINE PCUNGR2( M, N, K, A, IA, JA, DESCA, TAU, WORK, LWORK, INFO )
.TP 20
.ti +4
INTEGER IA, INFO, JA, K, LWORK, M, N
.TP 20
.ti +4
INTEGER DESCA( * )
.TP 20
.ti +4
COMPLEX A( * ), TAU( * ), WORK( * )
.SH PURPOSE
PCUNGR2 generates an M-by-N complex distributed matrix Q denoting A(IA:IA+M-1,JA:JA+N-1) with orthonormal rows, which is defined as the last M rows of a product of K elementary reflectors of order N as returned by PCGERQF.
.br
Notes
.br
=====
.br
Each global data object is described by an associated description vector.
This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. 
.br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 M (global input) INTEGER The number of rows to be operated on i.e the number of rows of the distributed submatrix Q. M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on i.e the number of columns of the distributed submatrix Q. N >= M >= 0. .TP 8 K (global input) INTEGER The number of elementary reflectors whose product defines the matrix Q. M >= K >= 0. .TP 8 A (local input/local output) COMPLEX pointer into the local memory to an array of dimension (LLD_A,LOCc(JA+N-1)). On entry, the i-th row must contain the vector which defines the elementary reflector H(i), IA+M-K <= i <= IA+M-1, as returned by PCGERQF in the K rows of its distributed matrix argument A(IA+M-K:IA+M-1,JA:*). On exit, this array contains the local pieces of the M-by-N distributed matrix Q. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 TAU (local input) COMPLEX, array, dimension LOCr(IA+M-1) This array contains the scalar factors TAU(i) of the elementary reflectors H(i) as returned by PCGERQF. TAU is tied to the distributed matrix A. .TP 8 WORK (local workspace/local output) COMPLEX array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. 
LWORK is local input and must be at least LWORK >= NqA0 + MAX( 1, MpA0 ), where
.br
IROFFA = MOD( IA-1, MB_A ), ICOFFA = MOD( JA-1, NB_A ),
.br
IAROW = INDXG2P( IA, MB_A, MYROW, RSRC_A, NPROW ),
.br
IACOL = INDXG2P( JA, NB_A, MYCOL, CSRC_A, NPCOL ),
.br
MpA0 = NUMROC( M+IROFFA, MB_A, MYROW, IAROW, NPROW ),
.br
NqA0 = NUMROC( N+ICOFFA, NB_A, MYCOL, IACOL, NPCOL ),
.br
INDXG2P and NUMROC are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA.
.TP 8
INFO (local output) INTEGER
= 0: successful exit
.br
< 0: If the i-th argument is an array and the j-th entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i.
.TH PCUNGRQ l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)"
.SH NAME
PCUNGRQ - generate an M-by-N complex distributed matrix Q denoting A(IA:IA+M-1,JA:JA+N-1) with orthonormal rows, which is defined as the last M rows of a product of K elementary reflectors of order N Q = H(1)' H(2)' . . . H(k)'
.SH SYNOPSIS
.TP 20
SUBROUTINE PCUNGRQ( M, N, K, A, IA, JA, DESCA, TAU, WORK, LWORK, INFO )
.TP 20
.ti +4
INTEGER IA, INFO, JA, K, LWORK, M, N
.TP 20
.ti +4
INTEGER DESCA( * )
.TP 20
.ti +4
COMPLEX A( * ), TAU( * ), WORK( * )
.SH PURPOSE
PCUNGRQ generates an M-by-N complex distributed matrix Q denoting A(IA:IA+M-1,JA:JA+N-1) with orthonormal rows, which is defined as the last M rows of a product of K elementary reflectors of order N as returned by PCGERQF.
.br
Notes
.br
=====
.br
Each global data object is described by an associated description vector.
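The LWORK formulas used by these routines (built from IROFFA/ICOFFA, INDXG2P, and NUMROC) can be evaluated ahead of time to size WORK without a workspace query. Below is a hedged Python sketch for the PCUNGR2 bound LWORK >= NqA0 + MAX(1, MpA0); both tool functions are reimplemented here for illustration from their documented formulas (the Fortran INDXG2P additionally takes a dummy process argument, omitted in this simplified version):

```python
def numroc(n, nb, iproc, isrcproc, nprocs):
    # ScaLAPACK block-cyclic length count (see the Notes sections above).
    mydist = (nprocs + iproc - isrcproc) % nprocs
    nblocks = n // nb
    num = (nblocks // nprocs) * nb
    extrablks = nblocks % nprocs
    if mydist < extrablks:
        num += nb
    elif mydist == extrablks:
        num += n % nb
    return num

def indxg2p(ig, nb, isrcproc, nprocs):
    # Process coordinate owning global index ig (1-based), standard formula.
    return (isrcproc + (ig - 1) // nb) % nprocs

def lwork_pcungr2(m, n, ia, ja, mb_a, nb_a, myrow, mycol,
                  rsrc_a, csrc_a, nprow, npcol):
    """Minimum LWORK for PCUNGR2: NqA0 + MAX(1, MpA0), per the formula above."""
    iroffa = (ia - 1) % mb_a
    icoffa = (ja - 1) % nb_a
    iarow = indxg2p(ia, mb_a, rsrc_a, nprow)
    iacol = indxg2p(ja, nb_a, csrc_a, npcol)
    mpa0 = numroc(m + iroffa, mb_a, myrow, iarow, nprow)
    nqa0 = numroc(n + icoffa, nb_a, mycol, iacol, npcol)
    return nqa0 + max(1, mpa0)

# Example: a 4-by-8 submatrix at (IA,JA) = (1,1), 2x2 blocks, 2x2 process
# grid, on process (0,0): MpA0 = 2, NqA0 = 4, so LWORK >= 4 + max(1,2) = 6.
```

The same skeleton adapts to the blocked variants by swapping in their respective bounds (e.g. MB_A * ( MpA0 + NqA0 + MB_A ) for PCUNGRQ).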
This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. 
.br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 M (global input) INTEGER The number of rows to be operated on i.e the number of rows of the distributed submatrix Q. M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on i.e the number of columns of the distributed submatrix Q. N >= M >= 0. .TP 8 K (global input) INTEGER The number of elementary reflectors whose product defines the matrix Q. M >= K >= 0. .TP 8 A (local input/local output) COMPLEX pointer into the local memory to an array of dimension (LLD_A,LOCc(JA+N-1)). On entry, the i-th row must contain the vector which defines the elementary reflector H(i), IA+M-K <= i <= IA+M-1, as returned by PCGERQF in the K rows of its distributed matrix argument A(IA+M-K:IA+M-1,JA:*). On exit, this array contains the local pieces of the M-by-N distributed matrix Q. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 TAU (local input) COMPLEX, array, dimension LOCr(IA+M-1) This array contains the scalar factors TAU(i) of the elementary reflectors H(i) as returned by PCGERQF. TAU is tied to the distributed matrix A. .TP 8 WORK (local workspace/local output) COMPLEX array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. 
LWORK is local input and must be at least LWORK >= MB_A * ( MpA0 + NqA0 + MB_A ), where
.br
IROFFA = MOD( IA-1, MB_A ), ICOFFA = MOD( JA-1, NB_A ),
.br
IAROW = INDXG2P( IA, MB_A, MYROW, RSRC_A, NPROW ),
.br
IACOL = INDXG2P( JA, NB_A, MYCOL, CSRC_A, NPCOL ),
.br
MpA0 = NUMROC( M+IROFFA, MB_A, MYROW, IAROW, NPROW ),
.br
NqA0 = NUMROC( N+ICOFFA, NB_A, MYCOL, IACOL, NPCOL ),
.br
INDXG2P and NUMROC are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA.
.TP 8
INFO (global output) INTEGER
= 0: successful exit
.br
< 0: If the i-th argument is an array and the j-th entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i.
.TH PCUNM2L l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)"
.SH NAME
PCUNM2L - overwrite the general complex M-by-N distributed matrix sub( C ) = C(IC:IC+M-1,JC:JC+N-1) with Q * sub( C ), Q**H * sub( C ), sub( C ) * Q or sub( C ) * Q**H, depending on SIDE and TRANS
.SH SYNOPSIS
.TP 20
SUBROUTINE PCUNM2L( SIDE, TRANS, M, N, K, A, IA, JA, DESCA, TAU, C, IC, JC, DESCC, WORK, LWORK, INFO )
.TP 20
.ti +4
CHARACTER SIDE, TRANS
.TP 20
.ti +4
INTEGER IA, IC, INFO, JA, JC, K, LWORK, M, N
.TP 20
.ti +4
INTEGER DESCA( * ), DESCC( * )
.TP 20
.ti +4
COMPLEX A( * ), C( * ), TAU( * ), WORK( * )
.SH PURPOSE
PCUNM2L overwrites the general complex M-by-N distributed matrix sub( C ) = C(IC:IC+M-1,JC:JC+N-1) with
.br
                  SIDE = 'L'            SIDE = 'R'
.br
TRANS = 'N':      Q * sub( C )          sub( C ) * Q
.br
TRANS = 'C':      Q**H * sub( C )       sub( C ) * Q**H
.br
where Q is a complex unitary distributed matrix defined as the product of K elementary reflectors
.br
Q = H(k) . . . H(2) H(1)
.br
as returned by PCGEQLF.
Q is of order M if SIDE = 'L' and of order N if SIDE = 'R'. .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. 
.br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 SIDE (global input) CHARACTER = 'L': apply Q or Q**H from the Left; .br = 'R': apply Q or Q**H from the Right. .TP 8 TRANS (global input) CHARACTER .br = 'N': No transpose, apply Q; .br = 'C': Conjugate transpose, apply Q**H. .TP 8 M (global input) INTEGER The number of rows to be operated on i.e the number of rows of the distributed submatrix sub( C ). M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on i.e the number of columns of the distributed submatrix sub( C ). N >= 0. .TP 8 K (global input) INTEGER The number of elementary reflectors whose product defines the matrix Q. If SIDE = 'L', M >= K >= 0, if SIDE = 'R', N >= K >= 0. .TP 8 A (local input) COMPLEX pointer into the local memory to an array of dimension (LLD_A,LOCc(JA+K-1)). On entry, the j-th column must contain the vector which defines the elemen- tary reflector H(j), JA <= j <= JA+K-1, as returned by PCGEQLF in the K columns of its distributed matrix argument A(IA:*,JA:JA+K-1). A(IA:*,JA:JA+K-1) is modified by the routine but restored on exit. If SIDE = 'L', LLD_A >= MAX( 1, LOCr(IA+M-1) ), if SIDE = 'R', LLD_A >= MAX( 1, LOCr(IA+N-1) ). .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. 
.TP 8
TAU (local input) COMPLEX, array, dimension LOCc(JA+N-1) This array contains the scalar factors TAU(j) of the elementary reflectors H(j) as returned by PCGEQLF. TAU is tied to the distributed matrix A.
.TP 8
C (local input/local output) COMPLEX pointer into the local memory to an array of dimension (LLD_C,LOCc(JC+N-1)). On entry, the local pieces of the distributed matrix sub( C ). On exit, sub( C ) is overwritten by Q*sub( C ) or Q'*sub( C ) or sub( C )*Q' or sub( C )*Q.
.TP 8
IC (global input) INTEGER The row index in the global array C indicating the first row of sub( C ).
.TP 8
JC (global input) INTEGER The column index in the global array C indicating the first column of sub( C ).
.TP 8
DESCC (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix C.
.TP 8
WORK (local workspace/local output) COMPLEX array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK.
.TP 8
LWORK (local or global input) INTEGER The dimension of the array WORK. LWORK is local input and must be at least: if SIDE = 'L', LWORK >= MpC0 + MAX( 1, NqC0 ); if SIDE = 'R', LWORK >= NqC0 + MAX( MAX( 1, MpC0 ), NUMROC( NUMROC( N+ICOFFC, NB_A, 0, 0, NPCOL ), NB_A, 0, 0, LCMQ ) ); where LCMQ = LCM / NPCOL with LCM = ILCM( NPROW, NPCOL ), IROFFC = MOD( IC-1, MB_C ), ICOFFC = MOD( JC-1, NB_C ), ICROW = INDXG2P( IC, MB_C, MYROW, RSRC_C, NPROW ), ICCOL = INDXG2P( JC, NB_C, MYCOL, CSRC_C, NPCOL ), MpC0 = NUMROC( M+IROFFC, MB_C, MYROW, ICROW, NPROW ), NqC0 = NUMROC( N+ICOFFC, NB_C, MYCOL, ICCOL, NPCOL ). ILCM, INDXG2P and NUMROC are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA.
.TP 8
INFO (local output) INTEGER
= 0: successful exit
.br
< 0: If the i-th argument is an array and the j-th entry had an illegal value, then INFO = -(i*100+j); if the i-th argument is a scalar and had an illegal value, then INFO = -i.
Alignment requirements
======================
The distributed submatrices A(IA:*, JA:*) and C(IC:IC+M-1,JC:JC+N-1) must satisfy certain alignment properties, namely the following expressions should be true:
If SIDE = 'L', ( MB_A.EQ.MB_C .AND. IROFFA.EQ.IROFFC .AND. IAROW.EQ.ICROW )
If SIDE = 'R', ( MB_A.EQ.NB_C .AND. IROFFA.EQ.ICOFFC )
scalapack-doc-1.5/man/manl/pcunm2r.l
.TH PCUNM2R l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)"
.SH NAME
PCUNM2R - overwrite the general complex M-by-N distributed matrix sub( C ) = C(IC:IC+M-1,JC:JC+N-1) with Q * sub( C ), Q**H * sub( C ), sub( C ) * Q or sub( C ) * Q**H
.SH SYNOPSIS
.TP 20
SUBROUTINE PCUNM2R( SIDE, TRANS, M, N, K, A, IA, JA, DESCA, TAU, C, IC, JC, DESCC, WORK, LWORK, INFO )
.TP 20
.ti +4
CHARACTER SIDE, TRANS
.TP 20
.ti +4
INTEGER IA, IC, INFO, JA, JC, K, LWORK, M, N
.TP 20
.ti +4
INTEGER DESCA( * ), DESCC( * )
.TP 20
.ti +4
COMPLEX A( * ), C( * ), TAU( * ), WORK( * )
.SH PURPOSE
PCUNM2R overwrites the general complex M-by-N distributed matrix sub( C ) = C(IC:IC+M-1,JC:JC+N-1) with
.br
SIDE = 'L' SIDE = 'R'
.br
TRANS = 'N': Q * sub( C ) sub( C ) * Q
.br
TRANS = 'C': Q**H * sub( C ) sub( C ) * Q**H
.br
where Q is a complex unitary distributed matrix defined as the product of k elementary reflectors
.br
Q = H(1) H(2) . . . H(k)
.br
as returned by PCGEQRF. Q is of order M if SIDE = 'L' and of order N if SIDE = 'R'.
.br
Notes
.br
=====
.br
Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location.
.br
Let A be a generic term for any 2D block cyclically distributed array. Such a global array has an associated description vector DESCA.
In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). 
An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 SIDE (global input) CHARACTER = 'L': apply Q or Q**H from the Left; .br = 'R': apply Q or Q**H from the Right. .TP 8 TRANS (global input) CHARACTER .br = 'N': No transpose, apply Q; .br = 'C': Conjugate transpose, apply Q**H. .TP 8 M (global input) INTEGER The number of rows to be operated on i.e the number of rows of the distributed submatrix sub( C ). M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on i.e the number of columns of the distributed submatrix sub( C ). N >= 0. .TP 8 K (global input) INTEGER The number of elementary reflectors whose product defines the matrix Q. If SIDE = 'L', M >= K >= 0, if SIDE = 'R', N >= K >= 0. .TP 8 A (local input) COMPLEX pointer into the local memory to an array of dimension (LLD_A,LOCc(JA+K-1)). On entry, the j-th column must contain the vector which defines the elemen- tary reflector H(j), JA <= j <= JA+K-1, as returned by PCGEQRF in the K columns of its distributed matrix argument A(IA:*,JA:JA+K-1). A(IA:*,JA:JA+K-1) is modified by the routine but restored on exit. If SIDE = 'L', LLD_A >= MAX( 1, LOCr(IA+M-1) ); if SIDE = 'R', LLD_A >= MAX( 1, LOCr(IA+N-1) ). .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 TAU (local input) COMPLEX, array, dimension LOCc(JA+K-1). This array contains the scalar factors TAU(j) of the elementary reflectors H(j) as returned by PCGEQRF. TAU is tied to the distributed matrix A. .TP 8 C (local input/local output) COMPLEX pointer into the local memory to an array of dimension (LLD_C,LOCc(JC+N-1)). 
On entry, the local pieces of the distributed matrix sub( C ). On exit, sub( C ) is overwritten by Q*sub( C ) or Q'*sub( C ) or sub( C )*Q' or sub( C )*Q.
.TP 8
IC (global input) INTEGER The row index in the global array C indicating the first row of sub( C ).
.TP 8
JC (global input) INTEGER The column index in the global array C indicating the first column of sub( C ).
.TP 8
DESCC (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix C.
.TP 8
WORK (local workspace/local output) COMPLEX array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK.
.TP 8
LWORK (local or global input) INTEGER The dimension of the array WORK. LWORK is local input and must be at least: if SIDE = 'L', LWORK >= MpC0 + MAX( 1, NqC0 ); if SIDE = 'R', LWORK >= NqC0 + MAX( MAX( 1, MpC0 ), NUMROC( NUMROC( N+ICOFFC, NB_A, 0, 0, NPCOL ), NB_A, 0, 0, LCMQ ) ); where LCMQ = LCM / NPCOL with LCM = ILCM( NPROW, NPCOL ), IROFFC = MOD( IC-1, MB_C ), ICOFFC = MOD( JC-1, NB_C ), ICROW = INDXG2P( IC, MB_C, MYROW, RSRC_C, NPROW ), ICCOL = INDXG2P( JC, NB_C, MYCOL, CSRC_C, NPCOL ), MpC0 = NUMROC( M+IROFFC, MB_C, MYROW, ICROW, NPROW ), NqC0 = NUMROC( N+ICOFFC, NB_C, MYCOL, ICCOL, NPCOL ). ILCM, INDXG2P and NUMROC are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA.
.TP 8
INFO (local output) INTEGER
= 0: successful exit
.br
< 0: If the i-th argument is an array and the j-th entry had an illegal value, then INFO = -(i*100+j); if the i-th argument is a scalar and had an illegal value, then INFO = -i.
Alignment requirements
======================
The distributed submatrices A(IA:*, JA:*) and C(IC:IC+M-1,JC:JC+N-1) must satisfy certain alignment properties, namely the following expressions should be true:
If SIDE = 'L', ( MB_A.EQ.MB_C .AND. IROFFA.EQ.IROFFC .AND. IAROW.EQ.ICROW )
If SIDE = 'R', ( MB_A.EQ.NB_C .AND. IROFFA.EQ.ICOFFC )
scalapack-doc-1.5/man/manl/pcunmbr.l
.TH PCUNMBR l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)"
.SH NAME
PCUNMBR - overwrite the general complex distributed M-by-N matrix sub( C ) = C(IC:IC+M-1,JC:JC+N-1) with Q, Q**H, P or P**H
.SH SYNOPSIS
.TP 20
SUBROUTINE PCUNMBR( VECT, SIDE, TRANS, M, N, K, A, IA, JA, DESCA, TAU, C, IC, JC, DESCC, WORK, LWORK, INFO )
.TP 20
.ti +4
CHARACTER SIDE, TRANS, VECT
.TP 20
.ti +4
INTEGER IA, IC, INFO, JA, JC, K, LWORK, M, N
.TP 20
.ti +4
INTEGER DESCA( * ), DESCC( * )
.TP 20
.ti +4
COMPLEX A( * ), C( * ), TAU( * ), WORK( * )
.SH PURPOSE
If VECT = 'Q', PCUNMBR overwrites the general complex distributed M-by-N matrix sub( C ) = C(IC:IC+M-1,JC:JC+N-1) with
.br
SIDE = 'L' SIDE = 'R'
.br
TRANS = 'N': Q * sub( C ) sub( C ) * Q
.br
TRANS = 'C': Q**H * sub( C ) sub( C ) * Q**H
.br
If VECT = 'P', PCUNMBR overwrites sub( C ) with
.br
SIDE = 'L' SIDE = 'R'
.br
TRANS = 'N': P * sub( C ) sub( C ) * P
.br
TRANS = 'C': P**H * sub( C ) sub( C ) * P**H
.br
Here Q and P**H are the unitary distributed matrices determined by PCGEBRD when reducing a complex distributed matrix A(IA:*,JA:*) to bidiagonal form: A(IA:*,JA:*) = Q * B * P**H. Q and P**H are defined as products of elementary reflectors H(i) and G(i) respectively. Let nq = m if SIDE = 'L' and nq = n if SIDE = 'R'. Thus nq is the order of the unitary matrix Q or P**H that is applied.
.br
If VECT = 'Q', A(IA:*,JA:*) is assumed to have been an NQ-by-K matrix:
.br
if nq >= k, Q = H(1) H(2) . . . H(k);
.br
if nq < k, Q = H(1) H(2) . . . H(nq-1).
.br If VECT = 'P', A(IA:*,JA:*) is assumed to have been a K-by-NQ matrix: .br if k < nq, P = G(1) G(2) . . . G(k); .br if k >= nq, P = G(1) G(2) . . . G(nq-1). .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. 
.br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 VECT (global input) CHARACTER = 'Q': apply Q or Q**H; .br = 'P': apply P or P**H. .TP 8 SIDE (global input) CHARACTER .br = 'L': apply Q, Q**H, P or P**H from the Left; .br = 'R': apply Q, Q**H, P or P**H from the Right. .TP 8 TRANS (global input) CHARACTER .br = 'N': No transpose, apply Q or P; .br = 'C': Conjugate transpose, apply Q**H or P**H. .TP 8 M (global input) INTEGER The number of rows to be operated on i.e the number of rows of the distributed submatrix sub( C ). M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on i.e the number of columns of the distributed submatrix sub( C ). N >= 0. .TP 8 K (global input) INTEGER If VECT = 'Q', the number of columns in the original distributed matrix reduced by PCGEBRD. If VECT = 'P', the number of rows in the original distributed matrix reduced by PCGEBRD. K >= 0. .TP 8 A (local input) COMPLEX pointer into the local memory to an array of dimension (LLD_A,LOCc(JA+MIN(NQ,K)-1)) if VECT='Q', and (LLD_A,LOCc(JA+NQ-1)) if VECT = 'P'. NQ = M if SIDE = 'L', and NQ = N otherwise. The vectors which define the elementary reflectors H(i) and G(i), whose products determine the matrices Q and P, as returned by PCGEBRD. If VECT = 'Q', LLD_A >= max(1,LOCr(IA+NQ-1)); if VECT = 'P', LLD_A >= max(1,LOCr(IA+MIN(NQ,K)-1)). .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). 
.TP 8
JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ).
.TP 8
DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A.
.TP 8
TAU (local input) COMPLEX array, dimension LOCc(JA+MIN(NQ,K)-1) if VECT = 'Q', LOCr(IA+MIN(NQ,K)-1) if VECT = 'P'. TAU(i) must contain the scalar factor of the elementary reflector H(i) or G(i), which determines Q or P, as returned by PCGEBRD in its array argument TAUQ or TAUP. TAU is tied to the distributed matrix A.
.TP 8
C (local input/local output) COMPLEX pointer into the local memory to an array of dimension (LLD_C,LOCc(JC+N-1)). On entry, the local pieces of the distributed matrix sub( C ). On exit, if VECT = 'Q', sub( C ) is overwritten by Q*sub( C ) or Q'*sub( C ) or sub( C )*Q' or sub( C )*Q; if VECT = 'P', sub( C ) is overwritten by P*sub( C ) or P'*sub( C ) or sub( C )*P or sub( C )*P'.
.TP 8
IC (global input) INTEGER The row index in the global array C indicating the first row of sub( C ).
.TP 8
JC (global input) INTEGER The column index in the global array C indicating the first column of sub( C ).
.TP 8
DESCC (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix C.
.TP 8
WORK (local workspace/local output) COMPLEX array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK.
.TP 8
LWORK (local or global input) INTEGER The dimension of the array WORK.
LWORK is local input and must be at least
If SIDE = 'L', NQ = M;
if( (VECT = 'Q' and NQ >= K) or (VECT <> 'Q' and NQ > K) ),
IAA=IA; JAA=JA; MI=M; NI=N; ICC=IC; JCC=JC;
else
IAA=IA+1; JAA=JA; MI=M-1; NI=N; ICC=IC+1; JCC=JC;
end if
else if SIDE = 'R', NQ = N;
if( (VECT = 'Q' and NQ >= K) or (VECT <> 'Q' and NQ > K) ),
IAA=IA; JAA=JA; MI=M; NI=N; ICC=IC; JCC=JC;
else
IAA=IA; JAA=JA+1; MI=M; NI=N-1; ICC=IC; JCC=JC+1;
end if
end if
If VECT = 'Q',
If SIDE = 'L', LWORK >= MAX( (NB_A*(NB_A-1))/2, (NqC0 + MpC0)*NB_A ) + NB_A * NB_A
else if SIDE = 'R', LWORK >= MAX( (NB_A*(NB_A-1))/2, ( NqC0 + MAX( NpA0 + NUMROC( NUMROC( NI+ICOFFC, NB_A, 0, 0, NPCOL ), NB_A, 0, 0, LCMQ ), MpC0 ) )*NB_A ) + NB_A * NB_A
end if
else if VECT <> 'Q',
if SIDE = 'L', LWORK >= MAX( (MB_A*(MB_A-1))/2, ( MpC0 + MAX( MqA0 + NUMROC( NUMROC( MI+IROFFC, MB_A, 0, 0, NPROW ), MB_A, 0, 0, LCMP ), NqC0 ) )*MB_A ) + MB_A * MB_A
else if SIDE = 'R', LWORK >= MAX( (MB_A*(MB_A-1))/2, (MpC0 + NqC0)*MB_A ) + MB_A * MB_A
end if
end if
where LCMP = LCM / NPROW, LCMQ = LCM / NPCOL, with LCM = ILCM( NPROW, NPCOL ), IROFFA = MOD( IAA-1, MB_A ), ICOFFA = MOD( JAA-1, NB_A ), IAROW = INDXG2P( IAA, MB_A, MYROW, RSRC_A, NPROW ), IACOL = INDXG2P( JAA, NB_A, MYCOL, CSRC_A, NPCOL ), MqA0 = NUMROC( MI+ICOFFA, NB_A, MYCOL, IACOL, NPCOL ), NpA0 = NUMROC( NI+IROFFA, MB_A, MYROW, IAROW, NPROW ), IROFFC = MOD( ICC-1, MB_C ), ICOFFC = MOD( JCC-1, NB_C ), ICROW = INDXG2P( ICC, MB_C, MYROW, RSRC_C, NPROW ), ICCOL = INDXG2P( JCC, NB_C, MYCOL, CSRC_C, NPCOL ), MpC0 = NUMROC( MI+IROFFC, MB_C, MYROW, ICROW, NPROW ), NqC0 = NUMROC( NI+ICOFFC, NB_C, MYCOL, ICCOL, NPCOL ). ILCM, INDXG2P and NUMROC are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays.
Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA.
.TP 8
INFO (global output) INTEGER
= 0: successful exit
.br
< 0: If the i-th argument is an array and the j-th entry had an illegal value, then INFO = -(i*100+j); if the i-th argument is a scalar and had an illegal value, then INFO = -i.
Alignment requirements
======================
The distributed submatrices A(IA:*, JA:*) and C(IC:IC+M-1,JC:JC+N-1) must satisfy certain alignment properties, namely the following expressions should be true:
If VECT = 'Q',
If SIDE = 'L', ( MB_A.EQ.MB_C .AND. IROFFA.EQ.IROFFC .AND. IAROW.EQ.ICROW )
If SIDE = 'R', ( MB_A.EQ.NB_C .AND. IROFFA.EQ.ICOFFC )
else
If SIDE = 'L', ( MB_A.EQ.MB_C .AND. ICOFFA.EQ.IROFFC )
If SIDE = 'R', ( NB_A.EQ.NB_C .AND. ICOFFA.EQ.ICOFFC .AND. IACOL.EQ.ICCOL )
end if
scalapack-doc-1.5/man/manl/pcunmhr.l
.TH PCUNMHR l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)"
.SH NAME
PCUNMHR - overwrite the general complex M-by-N distributed matrix sub( C ) = C(IC:IC+M-1,JC:JC+N-1) with Q * sub( C ), Q**H * sub( C ), sub( C ) * Q or sub( C ) * Q**H
.SH SYNOPSIS
.TP 20
SUBROUTINE PCUNMHR( SIDE, TRANS, M, N, ILO, IHI, A, IA, JA, DESCA, TAU, C, IC, JC, DESCC, WORK, LWORK, INFO )
.TP 20
.ti +4
CHARACTER SIDE, TRANS
.TP 20
.ti +4
INTEGER IA, IC, IHI, ILO, INFO, JA, JC, LWORK, M, N
.TP 20
.ti +4
INTEGER DESCA( * ), DESCC( * )
.TP 20
.ti +4
COMPLEX A( * ), C( * ), TAU( * ), WORK( * )
.SH PURPOSE
PCUNMHR overwrites the general complex M-by-N distributed matrix sub( C ) = C(IC:IC+M-1,JC:JC+N-1) with
.br
SIDE = 'L' SIDE = 'R'
.br
TRANS = 'N': Q * sub( C ) sub( C ) * Q
.br
TRANS = 'C': Q**H * sub( C ) sub( C ) * Q**H
.br
where Q is a complex unitary distributed matrix of order nq, with nq = m if SIDE = 'L' and nq = n if SIDE = 'R'. Q is defined as the product of IHI-ILO elementary reflectors, as returned by PCGEHRD: Q = H(ilo) H(ilo+1) . . . H(ihi-1).
.br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. 
.br
The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC:
.br
LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by:
.br
LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A
.br
LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A
.br
.SH ARGUMENTS
.TP 8
SIDE (global input) CHARACTER
= 'L': apply Q or Q**H from the Left;
.br
= 'R': apply Q or Q**H from the Right.
.TP 8
TRANS (global input) CHARACTER
.br
= 'N': No transpose, apply Q;
.br
= 'C': Conjugate transpose, apply Q**H.
.TP 8
M (global input) INTEGER The number of rows to be operated on, i.e., the number of rows of the distributed submatrix sub( C ). M >= 0.
.TP 8
N (global input) INTEGER The number of columns to be operated on, i.e., the number of columns of the distributed submatrix sub( C ). N >= 0.
.TP 8
ILO (global input) INTEGER
IHI (global input) INTEGER
ILO and IHI must have the same values as in the previous call of PCGEHRD. Q is equal to the unit matrix except in the distributed submatrix Q(ia+ilo:ia+ihi-1,ja+ilo:ja+ihi-1). If SIDE = 'L', 1 <= ILO <= IHI <= max(1,M); if SIDE = 'R', 1 <= ILO <= IHI <= max(1,N); ILO and IHI are relative indices.
.TP 8
A (local input) COMPLEX pointer into the local memory to an array of dimension (LLD_A,LOCc(JA+M-1)) if SIDE = 'L', and (LLD_A,LOCc(JA+N-1)) if SIDE = 'R'. The vectors which define the elementary reflectors, as returned by PCGEHRD.
.TP 8
IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ).
.TP 8
JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ).
.TP 8
DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A.
.TP 8
TAU (local input) COMPLEX, array, dimension LOCc(JA+M-2) if SIDE = 'L', and LOCc(JA+N-2) if SIDE = 'R'.
This array contains the scalar factors TAU(j) of the elementary reflectors H(j) as returned by PCGEHRD. TAU is tied to the distributed matrix A.
.TP 8
C (local input/local output) COMPLEX pointer into the local memory to an array of dimension (LLD_C,LOCc(JC+N-1)). On entry, the local pieces of the distributed matrix sub( C ). On exit, sub( C ) is overwritten by Q*sub( C ) or Q'*sub( C ) or sub( C )*Q' or sub( C )*Q.
.TP 8
IC (global input) INTEGER The row index in the global array C indicating the first row of sub( C ).
.TP 8
JC (global input) INTEGER The column index in the global array C indicating the first column of sub( C ).
.TP 8
DESCC (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix C.
.TP 8
WORK (local workspace/local output) COMPLEX array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK.
.TP 8
LWORK (local or global input) INTEGER The dimension of the array WORK. LWORK is local input and must be at least
IAA = IA + ILO; JAA = JA+ILO-1;
If SIDE = 'L', MI = IHI-ILO; NI = N; ICC = IC + ILO; JCC = JC;
LWORK >= MAX( (NB_A*(NB_A-1))/2, (NqC0 + MpC0)*NB_A ) + NB_A * NB_A
else if SIDE = 'R', MI = M; NI = IHI-ILO; ICC = IC; JCC = JC + ILO;
LWORK >= MAX( (NB_A*(NB_A-1))/2, ( NqC0 + MAX( NpA0 + NUMROC( NUMROC( NI+ICOFFC, NB_A, 0, 0, NPCOL ), NB_A, 0, 0, LCMQ ), MpC0 ) )*NB_A ) + NB_A * NB_A
end if
where LCMQ = LCM / NPCOL with LCM = ILCM( NPROW, NPCOL ), IROFFA = MOD( IAA-1, MB_A ), ICOFFA = MOD( JAA-1, NB_A ), IAROW = INDXG2P( IAA, MB_A, MYROW, RSRC_A, NPROW ), NpA0 = NUMROC( NI+IROFFA, MB_A, MYROW, IAROW, NPROW ), IROFFC = MOD( ICC-1, MB_C ), ICOFFC = MOD( JCC-1, NB_C ), ICROW = INDXG2P( ICC, MB_C, MYROW, RSRC_C, NPROW ), ICCOL = INDXG2P( JCC, NB_C, MYCOL, CSRC_C, NPCOL ), MpC0 = NUMROC( MI+IROFFC, MB_C, MYROW, ICROW, NPROW ), NqC0 = NUMROC( NI+ICOFFC, NB_C, MYCOL, ICCOL, NPCOL ). ILCM, INDXG2P and NUMROC are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by
calling the subroutine BLACS_GRIDINFO. If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA.
.TP 8
INFO (global output) INTEGER
= 0: successful exit
.br
< 0: If the i-th argument is an array and the j-th entry had an illegal value, then INFO = -(i*100+j); if the i-th argument is a scalar and had an illegal value, then INFO = -i.
Alignment requirements
======================
The distributed submatrices A(IA:*, JA:*) and C(IC:IC+M-1,JC:JC+N-1) must satisfy certain alignment properties, namely the following expressions should be true:
If SIDE = 'L', ( MB_A.EQ.MB_C .AND. IROFFA.EQ.IROFFC .AND. IAROW.EQ.ICROW )
If SIDE = 'R', ( MB_A.EQ.NB_C .AND. IROFFA.EQ.ICOFFC )
scalapack-doc-1.5/man/manl/pcunml2.l
.TH PCUNML2 l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)"
.SH NAME
PCUNML2 - overwrite the general complex M-by-N distributed matrix sub( C ) = C(IC:IC+M-1,JC:JC+N-1) with Q * sub( C ), Q**H * sub( C ), sub( C ) * Q or sub( C ) * Q**H
.SH SYNOPSIS
.TP 20
SUBROUTINE PCUNML2( SIDE, TRANS, M, N, K, A, IA, JA, DESCA, TAU, C, IC, JC, DESCC, WORK, LWORK, INFO )
.TP 20
.ti +4
CHARACTER SIDE, TRANS
.TP 20
.ti +4
INTEGER IA, IC, INFO, JA, JC, K, LWORK, M, N
.TP 20
.ti +4
INTEGER DESCA( * ), DESCC( * )
.TP 20
.ti +4
COMPLEX A( * ), C( * ), TAU( * ), WORK( * )
.SH PURPOSE
PCUNML2 overwrites the general complex M-by-N distributed matrix sub( C ) = C(IC:IC+M-1,JC:JC+N-1) with
.br
SIDE = 'L' SIDE = 'R'
.br
TRANS = 'N': Q * sub( C ) sub( C ) * Q
.br
TRANS = 'C': Q**H * sub( C ) sub( C ) * Q**H
.br
where Q is a complex unitary distributed matrix defined as the product of K elementary reflectors
.br
Q = H(k)' . . . H(2)' H(1)'
.br
as returned by PCGELQF. Q is of order M if SIDE = 'L' and of order N if SIDE = 'R'.
.br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. 
.br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 SIDE (global input) CHARACTER = 'L': apply Q or Q**H from the Left; .br = 'R': apply Q or Q**H from the Right. .TP 8 TRANS (global input) CHARACTER .br = 'N': No transpose, apply Q; .br = 'C': Conjugate transpose, apply Q**H. .TP 8 M (global input) INTEGER The number of rows to be operated on i.e the number of rows of the distributed submatrix sub( C ). M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on i.e the number of columns of the distributed submatrix sub( C ). N >= 0. .TP 8 K (global input) INTEGER The number of elementary reflectors whose product defines the matrix Q. If SIDE = 'L', M >= K >= 0, if SIDE = 'R', N >= K >= 0. .TP 8 A (local input) COMPLEX pointer into the local memory to an array of dimension (LLD_A,LOCc(JA+M-1)) if SIDE='L', and (LLD_A,LOCc(JA+N-1)) if SIDE='R', where LLD_A >= max(1,LOCr(IA+K-1)); On entry, the i-th row must contain the vector which defines the elementary reflector H(i), IA <= i <= IA+K-1, as returned by PCGELQF in the K rows of its distributed matrix argument A(IA:IA+K-1,JA:*). .br A(IA:IA+K-1,JA:*) is modified by the routine but restored on exit. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 TAU (local input) COMPLEX, array, dimension LOCc(IA+K-1). 
This array contains the scalar factors TAU(i) of the elementary reflectors H(i) as returned by PCGELQF. TAU is tied to the distributed matrix A. .TP 8 C (local input/local output) COMPLEX pointer into the local memory to an array of dimension (LLD_C,LOCc(JC+N-1)). On entry, the local pieces of the distributed matrix sub(C). On exit, sub( C ) is overwritten by Q*sub( C ) or Q'*sub( C ) or sub( C )*Q' or sub( C )*Q. .TP 8 IC (global input) INTEGER The row index in the global array C indicating the first row of sub( C ). .TP 8 JC (global input) INTEGER The column index in the global array C indicating the first column of sub( C ). .TP 8 DESCC (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix C. .TP 8 WORK (local workspace/local output) COMPLEX array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. LWORK is local input and must be at least If SIDE = 'L', LWORK >= MpC0 + MAX( MAX( 1, NqC0 ), NUMROC( NUMROC( M+IROFFC,MB_A,0,0,NPROW ),MB_A,0,0,LCMP ) ); if SIDE = 'R', LWORK >= NqC0 + MAX( 1, MpC0 ); where LCMP = LCM / NPROW with LCM = ICLM( NPROW, NPCOL ), IROFFC = MOD( IC-1, MB_C ), ICOFFC = MOD( JC-1, NB_C ), ICROW = INDXG2P( IC, MB_C, MYROW, RSRC_C, NPROW ), ICCOL = INDXG2P( JC, NB_C, MYCOL, CSRC_C, NPCOL ), MpC0 = NUMROC( M+IROFFC, MB_C, MYROW, ICROW, NPROW ), NqC0 = NUMROC( N+ICOFFC, NB_C, MYCOL, ICCOL, NPCOL ), ILCM, INDXG2P and NUMROC are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. 
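The LWORK = -1 query described above is normally used in a two-call pattern: one call to obtain the required workspace size, then the real call. A minimal Fortran 77 sketch for PCUNML2, assuming the BLACS grid, the descriptors DESCA and DESCC, and the distributed arrays A, C and TAU have already been set up, and that WORK has at least one element for the query call (the choice SIDE = 'L', TRANS = 'C' is arbitrary):

```fortran
*     Workspace query: with LWORK = -1 the routine only computes the
*     minimal/optimal workspace size and returns it in WORK( 1 ).
      LWORK = -1
      CALL PCUNML2( 'L', 'C', M, N, K, A, IA, JA, DESCA, TAU,
     $              C, IC, JC, DESCC, WORK, LWORK, INFO )
      LWORK = INT( WORK( 1 ) )
*     ... (re)allocate WORK to length at least LWORK ...
*     Second call performs the actual update of sub( C ).
      CALL PCUNML2( 'L', 'C', M, N, K, A, IA, JA, DESCA, TAU,
     $              C, IC, JC, DESCC, WORK, LWORK, INFO )
      IF( INFO.NE.0 ) WRITE( *, * ) 'PCUNML2 failed, INFO = ', INFO
```

The query call does no computation and issues no PXERBLA message, so it is cheap to perform on every run rather than hand-evaluating the LWORK formula above.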
.TP 8 INFO (local output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. Alignment requirements ====================== The distributed submatrices A(IA:*, JA:*) and C(IC:IC+M-1,JC:JC+N-1) must verify some alignment properties, namely the following expressions should be true: If SIDE = 'L', ( NB_A.EQ.MB_C .AND. ICOFFA.EQ.IROFFC ) If SIDE = 'R', ( NB_A.EQ.NB_C .AND. ICOFFA.EQ.ICOFFC .AND. IACOL.EQ.ICCOL ) scalapack-doc-1.5/man/manl/pcunmlq.l0100644000056400000620000001766106335610624017103 0ustar pfrauenfstaff.TH PCUNMLQ l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PCUNMLQ - overwrite the general complex M-by-N distributed matrix sub( C ) = C(IC:IC+M-1,JC:JC+N-1) with SIDE = 'L' SIDE = 'R' TRANS = 'N' .SH SYNOPSIS .TP 20 SUBROUTINE PCUNMLQ( SIDE, TRANS, M, N, K, A, IA, JA, DESCA, TAU, C, IC, JC, DESCC, WORK, LWORK, INFO ) .TP 20 .ti +4 CHARACTER SIDE, TRANS .TP 20 .ti +4 INTEGER IA, IC, INFO, JA, JC, K, LWORK, M, N .TP 20 .ti +4 INTEGER DESCA( * ), DESCC( * ) .TP 20 .ti +4 COMPLEX A( * ), C( * ), TAU( * ), WORK( * ) .SH PURPOSE PCUNMLQ overwrites the general complex M-by-N distributed matrix sub( C ) = C(IC:IC+M-1,JC:JC+N-1) with TRANS = 'C': Q**H * sub( C ) sub( C ) * Q**H .br where Q is a complex unitary distributed matrix defined as the product of K elementary reflectors .br Q = H(k)' . . . H(2)' H(1)' .br as returned by PCGELQF. Q is of order M if SIDE = 'L' and of order N if SIDE = 'R'. .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. 
In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). 
An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 SIDE (global input) CHARACTER = 'L': apply Q or Q**H from the Left; .br = 'R': apply Q or Q**H from the Right. .TP 8 TRANS (global input) CHARACTER .br = 'N': No transpose, apply Q; .br = 'C': Conjugate transpose, apply Q**H. .TP 8 M (global input) INTEGER The number of rows to be operated on i.e the number of rows of the distributed submatrix sub( C ). M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on i.e the number of columns of the distributed submatrix sub( C ). N >= 0. .TP 8 K (global input) INTEGER The number of elementary reflectors whose product defines the matrix Q. If SIDE = 'L', M >= K >= 0, if SIDE = 'R', N >= K >= 0. .TP 8 A (local input) COMPLEX pointer into the local memory to an array of dimension (LLD_A,LOCc(JA+M-1)) if SIDE='L', and (LLD_A,LOCc(JA+N-1)) if SIDE='R', where LLD_A >= max(1,LOCr(IA+K-1)); On entry, the i-th row must contain the vector which defines the elementary reflector H(i), IA <= i <= IA+K-1, as returned by PCGELQF in the K rows of its distributed matrix argument A(IA:IA+K-1,JA:*). .br A(IA:IA+K-1,JA:*) is modified by the routine but restored on exit. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 TAU (local input) COMPLEX, array, dimension LOCc(IA+K-1). This array contains the scalar factors TAU(i) of the elementary reflectors H(i) as returned by PCGELQF. TAU is tied to the distributed matrix A. .TP 8 C (local input/local output) COMPLEX pointer into the local memory to an array of dimension (LLD_C,LOCc(JC+N-1)). 
On entry, the local pieces of the distributed matrix sub(C). On exit, sub( C ) is overwritten by Q*sub( C ) or Q'*sub( C ) or sub( C )*Q' or sub( C )*Q. .TP 8 IC (global input) INTEGER The row index in the global array C indicating the first row of sub( C ). .TP 8 JC (global input) INTEGER The column index in the global array C indicating the first column of sub( C ). .TP 8 DESCC (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix C. .TP 8 WORK (local workspace/local output) COMPLEX array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. LWORK is local input and must be at least if SIDE = 'L', LWORK >= MAX( (MB_A*(MB_A-1))/2, ( MpC0 + MAX( MqA0 + NUMROC( NUMROC( M+IROFFC, MB_A, 0, 0, NPROW ), MB_A, 0, 0, LCMP ), NqC0 ) )*MB_A ) + MB_A * MB_A else if SIDE = 'R', LWORK >= MAX( (MB_A*(MB_A-1))/2, (MpC0 + NqC0)*MB_A ) + MB_A * MB_A end if where LCMP = LCM / NPROW with LCM = ICLM( NPROW, NPCOL ), IROFFA = MOD( IA-1, MB_A ), ICOFFA = MOD( JA-1, NB_A ), IACOL = INDXG2P( JA, NB_A, MYCOL, CSRC_A, NPCOL ), MqA0 = NUMROC( M+ICOFFA, NB_A, MYCOL, IACOL, NPCOL ), IROFFC = MOD( IC-1, MB_C ), ICOFFC = MOD( JC-1, NB_C ), ICROW = INDXG2P( IC, MB_C, MYROW, RSRC_C, NPROW ), ICCOL = INDXG2P( JC, NB_C, MYCOL, CSRC_C, NPCOL ), MpC0 = NUMROC( M+IROFFC, MB_C, MYROW, ICROW, NPROW ), NqC0 = NUMROC( N+ICOFFC, NB_C, MYCOL, ICCOL, NPCOL ), ILCM, INDXG2P and NUMROC are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. 
.TP 8 INFO (global output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. Alignment requirements ====================== The distributed submatrices A(IA:*, JA:*) and C(IC:IC+M-1,JC:JC+N-1) must verify some alignment properties, namely the following expressions should be true: If SIDE = 'L', ( NB_A.EQ.MB_C .AND. ICOFFA.EQ.IROFFC ) If SIDE = 'R', ( NB_A.EQ.NB_C .AND. ICOFFA.EQ.ICOFFC .AND. IACOL.EQ.ICCOL ) scalapack-doc-1.5/man/manl/pcunmql.l0100644000056400000620000001767306335610624017116 0ustar pfrauenfstaff.TH PCUNMQL l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PCUNMQL - overwrite the general complex M-by-N distributed matrix sub( C ) = C(IC:IC+M-1,JC:JC+N-1) with SIDE = 'L' SIDE = 'R' TRANS = 'N' .SH SYNOPSIS .TP 20 SUBROUTINE PCUNMQL( SIDE, TRANS, M, N, K, A, IA, JA, DESCA, TAU, C, IC, JC, DESCC, WORK, LWORK, INFO ) .TP 20 .ti +4 CHARACTER SIDE, TRANS .TP 20 .ti +4 INTEGER IA, IC, INFO, JA, JC, K, LWORK, M, N .TP 20 .ti +4 INTEGER DESCA( * ), DESCC( * ) .TP 20 .ti +4 COMPLEX A( * ), C( * ), TAU( * ), WORK( * ) .SH PURPOSE PCUNMQL overwrites the general complex M-by-N distributed matrix sub( C ) = C(IC:IC+M-1,JC:JC+N-1) with TRANS = 'C': Q**H * sub( C ) sub( C ) * Q**H .br where Q is a complex unitary distributed matrix defined as the product of K elementary reflectors .br Q = H(k) . . . H(2) H(1) .br as returned by PCGEQLF. Q is of order M if SIDE = 'L' and of order N if SIDE = 'R'. .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. 
In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). 
An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 SIDE (global input) CHARACTER = 'L': apply Q or Q**H from the Left; .br = 'R': apply Q or Q**H from the Right. .TP 8 TRANS (global input) CHARACTER .br = 'N': No transpose, apply Q; .br = 'C': Conjugate transpose, apply Q**H. .TP 8 M (global input) INTEGER The number of rows to be operated on i.e the number of rows of the distributed submatrix sub( C ). M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on i.e the number of columns of the distributed submatrix sub( C ). N >= 0. .TP 8 K (global input) INTEGER The number of elementary reflectors whose product defines the matrix Q. If SIDE = 'L', M >= K >= 0, if SIDE = 'R', N >= K >= 0. .TP 8 A (local input) COMPLEX pointer into the local memory to an array of dimension (LLD_A,LOCc(JA+K-1)). On entry, the j-th column must contain the vector which defines the elemen- tary reflector H(j), JA <= j <= JA+K-1, as returned by PCGEQLF in the K columns of its distributed matrix argument A(IA:*,JA:JA+K-1). A(IA:*,JA:JA+K-1) is modified by the routine but restored on exit. If SIDE = 'L', LLD_A >= MAX( 1, LOCr(IA+M-1) ), if SIDE = 'R', LLD_A >= MAX( 1, LOCr(IA+N-1) ). .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 TAU (local input) COMPLEX, array, dimension LOCc(JA+N-1) This array contains the scalar factors TAU(j) of the elementary reflectors H(j) as returned by PCGEQLF. TAU is tied to the distributed matrix A. .TP 8 C (local input/local output) COMPLEX pointer into the local memory to an array of dimension (LLD_C,LOCc(JC+N-1)). 
On entry, the local pieces of the distributed matrix sub(C). On exit, sub( C ) is overwritten by Q*sub( C ) or Q'*sub( C ) or sub( C )*Q' or sub( C )*Q. .TP 8 IC (global input) INTEGER The row index in the global array C indicating the first row of sub( C ). .TP 8 JC (global input) INTEGER The column index in the global array C indicating the first column of sub( C ). .TP 8 DESCC (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix C. .TP 8 WORK (local workspace/local output) COMPLEX array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. LWORK is local input and must be at least If SIDE = 'L', LWORK >= MAX( (NB_A*(NB_A-1))/2, (NqC0 + MpC0)*NB_A ) + NB_A * NB_A else if SIDE = 'R', LWORK >= MAX( (NB_A*(NB_A-1))/2, ( NqC0 + MAX( NpA0 + NUMROC( NUMROC( N+ICOFFC, NB_A, 0, 0, NPCOL ), NB_A, 0, 0, LCMQ ), MpC0 ) )*NB_A ) + NB_A * NB_A end if where LCMQ = LCM / NPCOL with LCM = ICLM( NPROW, NPCOL ), IROFFA = MOD( IA-1, MB_A ), ICOFFA = MOD( JA-1, NB_A ), IAROW = INDXG2P( IA, MB_A, MYROW, RSRC_A, NPROW ), NpA0 = NUMROC( N+IROFFA, MB_A, MYROW, IAROW, NPROW ), IROFFC = MOD( IC-1, MB_C ), ICOFFC = MOD( JC-1, NB_C ), ICROW = INDXG2P( IC, MB_C, MYROW, RSRC_C, NPROW ), ICCOL = INDXG2P( JC, NB_C, MYCOL, CSRC_C, NPCOL ), MpC0 = NUMROC( M+IROFFC, MB_C, MYROW, ICROW, NPROW ), NqC0 = NUMROC( N+ICOFFC, NB_C, MYCOL, ICCOL, NPCOL ), ILCM, INDXG2P and NUMROC are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. 
.TP 8 INFO (global output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. Alignment requirements ====================== The distributed submatrices A(IA:*, JA:*) and C(IC:IC+M-1,JC:JC+N-1) must verify some alignment properties, namely the following expressions should be true: If SIDE = 'L', ( MB_A.EQ.MB_C .AND. IROFFA.EQ.IROFFC .AND. IAROW.EQ.ICROW ) If SIDE = 'R', ( MB_A.EQ.NB_C .AND. IROFFA.EQ.ICOFFC ) scalapack-doc-1.5/man/manl/pcunmqr.l0100644000056400000620000001767506335610624017126 0ustar pfrauenfstaff.TH PCUNMQR l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PCUNMQR - overwrite the general complex M-by-N distributed matrix sub( C ) = C(IC:IC+M-1,JC:JC+N-1) with SIDE = 'L' SIDE = 'R' TRANS = 'N' .SH SYNOPSIS .TP 20 SUBROUTINE PCUNMQR( SIDE, TRANS, M, N, K, A, IA, JA, DESCA, TAU, C, IC, JC, DESCC, WORK, LWORK, INFO ) .TP 20 .ti +4 CHARACTER SIDE, TRANS .TP 20 .ti +4 INTEGER IA, IC, INFO, JA, JC, K, LWORK, M, N .TP 20 .ti +4 INTEGER DESCA( * ), DESCC( * ) .TP 20 .ti +4 COMPLEX A( * ), C( * ), TAU( * ), WORK( * ) .SH PURPOSE PCUNMQR overwrites the general complex M-by-N distributed matrix sub( C ) = C(IC:IC+M-1,JC:JC+N-1) with TRANS = 'C': Q**H * sub( C ) sub( C ) * Q**H .br where Q is a complex unitary distributed matrix defined as the product of k elementary reflectors .br Q = H(1) H(2) . . . H(k) .br as returned by PCGEQRF. Q is of order M if SIDE = 'L' and of order N if SIDE = 'R'. .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. 
In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). 
An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 SIDE (global input) CHARACTER = 'L': apply Q or Q**H from the Left; .br = 'R': apply Q or Q**H from the Right. .TP 8 TRANS (global input) CHARACTER .br = 'N': No transpose, apply Q; .br = 'C': Conjugate transpose, apply Q**H. .TP 8 M (global input) INTEGER The number of rows to be operated on i.e the number of rows of the distributed submatrix sub( C ). M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on i.e the number of columns of the distributed submatrix sub( C ). N >= 0. .TP 8 K (global input) INTEGER The number of elementary reflectors whose product defines the matrix Q. If SIDE = 'L', M >= K >= 0, if SIDE = 'R', N >= K >= 0. .TP 8 A (local input) COMPLEX pointer into the local memory to an array of dimension (LLD_A,LOCc(JA+K-1)). On entry, the j-th column must contain the vector which defines the elemen- tary reflector H(j), JA <= j <= JA+K-1, as returned by PCGEQRF in the K columns of its distributed matrix argument A(IA:*,JA:JA+K-1). A(IA:*,JA:JA+K-1) is modified by the routine but restored on exit. If SIDE = 'L', LLD_A >= MAX( 1, LOCr(IA+M-1) ); if SIDE = 'R', LLD_A >= MAX( 1, LOCr(IA+N-1) ). .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 TAU (local input) COMPLEX, array, dimension LOCc(JA+K-1). This array contains the scalar factors TAU(j) of the elementary reflectors H(j) as returned by PCGEQRF. TAU is tied to the distributed matrix A. .TP 8 C (local input/local output) COMPLEX pointer into the local memory to an array of dimension (LLD_C,LOCc(JC+N-1)). 
On entry, the local pieces of the distributed matrix sub(C). On exit, sub( C ) is overwritten by Q*sub( C ) or Q'*sub( C ) or sub( C )*Q' or sub( C )*Q. .TP 8 IC (global input) INTEGER The row index in the global array C indicating the first row of sub( C ). .TP 8 JC (global input) INTEGER The column index in the global array C indicating the first column of sub( C ). .TP 8 DESCC (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix C. .TP 8 WORK (local workspace/local output) COMPLEX array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. LWORK is local input and must be at least If SIDE = 'L', LWORK >= MAX( (NB_A*(NB_A-1))/2, (NqC0 + MpC0)*NB_A ) + NB_A * NB_A else if SIDE = 'R', LWORK >= MAX( (NB_A*(NB_A-1))/2, ( NqC0 + MAX( NpA0 + NUMROC( NUMROC( N+ICOFFC, NB_A, 0, 0, NPCOL ), NB_A, 0, 0, LCMQ ), MpC0 ) )*NB_A ) + NB_A * NB_A end if where LCMQ = LCM / NPCOL with LCM = ICLM( NPROW, NPCOL ), IROFFA = MOD( IA-1, MB_A ), ICOFFA = MOD( JA-1, NB_A ), IAROW = INDXG2P( IA, MB_A, MYROW, RSRC_A, NPROW ), NpA0 = NUMROC( N+IROFFA, MB_A, MYROW, IAROW, NPROW ), IROFFC = MOD( IC-1, MB_C ), ICOFFC = MOD( JC-1, NB_C ), ICROW = INDXG2P( IC, MB_C, MYROW, RSRC_C, NPROW ), ICCOL = INDXG2P( JC, NB_C, MYCOL, CSRC_C, NPCOL ), MpC0 = NUMROC( M+IROFFC, MB_C, MYROW, ICROW, NPROW ), NqC0 = NUMROC( N+ICOFFC, NB_C, MYCOL, ICCOL, NPCOL ), ILCM, INDXG2P and NUMROC are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. 
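Several quantities in the LWORK formula above (IROFFC, ICROW, ICCOL, ...) come from small index computations. As an illustration, a Python sketch of the mapping INDXG2P performs together with the MOD offsets; the helper names mirror the Fortran ones but this is a clarifying reimplementation, not ScaLAPACK's own code:

```python
def indxg2p(indxglob, nb, iproc, isrcproc, nprocs):
    """Process coordinate owning global index indxglob (1-based) of a
    dimension distributed in blocks of nb, with the first block on
    process isrcproc. (iproc is unused in the computation; it is kept
    only to mirror the Fortran argument list.)"""
    return (isrcproc + (indxglob - 1) // nb) % nprocs

def iroff(i, mb):
    """Offset of global index i (1-based) inside its distribution block,
    i.e. the quantity MOD( I-1, MB ) such as IROFFC = MOD( IC-1, MB_C )."""
    return (i - 1) % mb
```

For example, with MB_C = 2 on a 2-row process grid starting at row 0, global rows 1-2 live on process row 0, rows 3-4 on process row 1, rows 5-6 back on process row 0, and so on.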
.TP 8 INFO (global output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. Alignment requirements ====================== The distributed submatrices A(IA:*, JA:*) and C(IC:IC+M-1,JC:JC+N-1) must verify some alignment properties, namely the following expressions should be true: If SIDE = 'L', ( MB_A.EQ.MB_C .AND. IROFFA.EQ.IROFFC .AND. IAROW.EQ.ICROW ) If SIDE = 'R', ( MB_A.EQ.NB_C .AND. IROFFA.EQ.ICOFFC ) scalapack-doc-1.5/man/manl/pcunmr2.l0100644000056400000620000001725106335610624017015 0ustar pfrauenfstaff.TH PCUNMR2 l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PCUNMR2 - overwrite the general complex M-by-N distributed matrix sub( C ) = C(IC:IC+M-1,JC:JC+N-1) with SIDE = 'L' SIDE = 'R' TRANS = 'N' .SH SYNOPSIS .TP 20 SUBROUTINE PCUNMR2( SIDE, TRANS, M, N, K, A, IA, JA, DESCA, TAU, C, IC, JC, DESCC, WORK, LWORK, INFO ) .TP 20 .ti +4 CHARACTER SIDE, TRANS .TP 20 .ti +4 INTEGER IA, IC, INFO, JA, JC, K, LWORK, M, N .TP 20 .ti +4 INTEGER DESCA( * ), DESCC( * ) .TP 20 .ti +4 COMPLEX A( * ), C( * ), TAU( * ), WORK( * ) .SH PURPOSE PCUNMR2 overwrites the general complex M-by-N distributed matrix sub( C ) = C(IC:IC+M-1,JC:JC+N-1) with TRANS = 'C': Q**H * sub( C ) sub( C ) * Q**H .br where Q is a complex unitary distributed matrix defined as the product of K elementary reflectors .br Q = H(1)' H(2)' . . . H(k)' .br as returned by PCGERQF. Q is of order M if SIDE = 'L' and of order N if SIDE = 'R'. .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. 
In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). 
An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 SIDE (global input) CHARACTER = 'L': apply Q or Q**H from the Left; .br = 'R': apply Q or Q**H from the Right. .TP 8 TRANS (global input) CHARACTER .br = 'N': No transpose, apply Q; .br = 'C': Conjugate transpose, apply Q**H. .TP 8 M (global input) INTEGER The number of rows to be operated on i.e the number of rows of the distributed submatrix sub( C ). M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on i.e the number of columns of the distributed submatrix sub( C ). N >= 0. .TP 8 K (global input) INTEGER The number of elementary reflectors whose product defines the matrix Q. If SIDE = 'L', M >= K >= 0, if SIDE = 'R', N >= K >= 0. .TP 8 A (local input) COMPLEX pointer into the local memory to an array of dimension (LLD_A,LOCc(JA+M-1)) if SIDE='L', and (LLD_A,LOCc(JA+N-1)) if SIDE='R', where LLD_A >= MAX(1,LOCr(IA+K-1)); On entry, the i-th row must contain the vector which defines the elementary reflector H(i), IA <= i <= IA+K-1, as returned by PCGERQF in the K rows of its distributed matrix argument A(IA:IA+K-1,JA:*). .br A(IA:IA+K-1,JA:*) is modified by the routine but restored on exit. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 TAU (local input) COMPLEX, array, dimension LOCc(IA+K-1). This array contains the scalar factors TAU(i) of the elementary reflectors H(i) as returned by PCGERQF. TAU is tied to the distributed matrix A. .TP 8 C (local input/local output) COMPLEX pointer into the local memory to an array of dimension (LLD_C,LOCc(JC+N-1)). 
On entry, the local pieces of the distributed matrix sub(C). On exit, sub( C ) is overwritten by Q*sub( C ) or Q'*sub( C ) or sub( C )*Q' or sub( C )*Q. .TP 8 IC (global input) INTEGER The row index in the global array C indicating the first row of sub( C ). .TP 8 JC (global input) INTEGER The column index in the global array C indicating the first column of sub( C ). .TP 8 DESCC (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix C. .TP 8 WORK (local workspace/local output) COMPLEX array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. LWORK is local input and must be at least If SIDE = 'L', LWORK >= MpC0 + MAX( MAX( 1, NqC0 ), NUMROC( NUMROC( M+IROFFC,MB_A,0,0,NPROW ),MB_A,0,0,LCMP ) ); if SIDE = 'R', LWORK >= NqC0 + MAX( 1, MpC0 ); where LCMP = LCM / NPROW with LCM = ILCM( NPROW, NPCOL ), IROFFC = MOD( IC-1, MB_C ), ICOFFC = MOD( JC-1, NB_C ), ICROW = INDXG2P( IC, MB_C, MYROW, RSRC_C, NPROW ), ICCOL = INDXG2P( JC, NB_C, MYCOL, CSRC_C, NPCOL ), MpC0 = NUMROC( M+IROFFC, MB_C, MYROW, ICROW, NPROW ), NqC0 = NUMROC( N+ICOFFC, NB_C, MYCOL, ICCOL, NPCOL ), ILCM, INDXG2P and NUMROC are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 INFO (local output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-th entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. 
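The INFO convention above (INFO = -(i*100+j) when entry j of array argument i is illegal, INFO = -i when scalar argument i is illegal) can be decoded mechanically. A small C helper of our own, for illustration only:

```c
/* Decode a negative INFO value from a ScaLAPACK routine.  On return *arg
 * holds the index of the offending argument; *entry holds the index of
 * the bad entry within an array argument (INFO = -(arg*100+entry)), or 0
 * when the offending argument was a scalar (INFO = -arg). */
void decode_info(int info, int *arg, int *entry)
{
    int code = -info;          /* INFO < 0 signals an illegal argument */
    if (code >= 100) {         /* array argument: packed as arg*100 + entry */
        *arg   = code / 100;
        *entry = code % 100;
    } else {                   /* scalar argument */
        *arg   = code;
        *entry = 0;
    }
}
```

For instance, INFO = -907 points at entry 7 of the 9th argument (the descriptor DESCA in this calling sequence), while INFO = -3 points at the scalar third argument M.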
Alignment requirements ====================== The distributed submatrices A(IA:*, JA:*) and C(IC:IC+M-1,JC:JC+N-1) must verify some alignment properties, namely the following expressions should be true: If SIDE = 'L', ( NB_A.EQ.MB_C .AND. ICOFFA.EQ.IROFFC ) If SIDE = 'R', ( NB_A.EQ.NB_C .AND. ICOFFA.EQ.ICOFFC .AND. IACOL.EQ.ICCOL ) scalapack-doc-1.5/man/manl/pcunmr3.l0100644000056400000620000001757406335610624017026 0ustar pfrauenfstaff.TH PCUNMR3 l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PCUNMR3 - overwrite the general complex M-by-N distributed matrix sub( C ) = C(IC:IC+M-1,JC:JC+N-1) with SIDE = 'L' SIDE = 'R' TRANS = 'N' .SH SYNOPSIS .TP 20 SUBROUTINE PCUNMR3( SIDE, TRANS, M, N, K, L, A, IA, JA, DESCA, TAU, C, IC, JC, DESCC, WORK, LWORK, INFO ) .TP 20 .ti +4 CHARACTER SIDE, TRANS .TP 20 .ti +4 INTEGER IA, IC, INFO, JA, JC, K, L, LWORK, M, N .TP 20 .ti +4 INTEGER DESCA( * ), DESCC( * ) .TP 20 .ti +4 COMPLEX A( * ), C( * ), TAU( * ), WORK( * ) .SH PURPOSE PCUNMR3 overwrites the general complex M-by-N distributed matrix sub( C ) = C(IC:IC+M-1,JC:JC+N-1) with TRANS = 'C': Q**H * sub( C ) sub( C ) * Q**H .br where Q is a complex unitary distributed matrix defined as the product of K elementary reflectors .br Q = H(1)' H(2)' . . . H(k)' .br as returned by PCTZRZF. Q is of order M if SIDE = 'L' and of order N if SIDE = 'R'. .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". 
.br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 SIDE (global input) CHARACTER = 'L': apply Q or Q**H from the Left; .br = 'R': apply Q or Q**H from the Right. 
.TP 8 TRANS (global input) CHARACTER .br = 'N': No transpose, apply Q; .br = 'C': Conjugate transpose, apply Q**H. .TP 8 M (global input) INTEGER The number of rows to be operated on i.e the number of rows of the distributed submatrix sub( C ). M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on i.e the number of columns of the distributed submatrix sub( C ). N >= 0. .TP 8 K (global input) INTEGER The number of elementary reflectors whose product defines the matrix Q. If SIDE = 'L', M >= K >= 0, if SIDE = 'R', N >= K >= 0. .TP 8 L (global input) INTEGER The columns of the distributed submatrix sub( A ) containing the meaningful part of the Householder reflectors. If SIDE = 'L', M >= L >= 0, if SIDE = 'R', N >= L >= 0. .TP 8 A (local input) COMPLEX pointer into the local memory to an array of dimension (LLD_A,LOCc(JA+M-1)) if SIDE='L', and (LLD_A,LOCc(JA+N-1)) if SIDE='R', where LLD_A >= MAX(1,LOCr(IA+K-1)); On entry, the i-th row must contain the vector which defines the elementary reflector H(i), IA <= i <= IA+K-1, as returned by PCTZRZF in the K rows of its distributed matrix argument A(IA:IA+K-1,JA:*). .br A(IA:IA+K-1,JA:*) is modified by the routine but restored on exit. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 TAU (local input) COMPLEX, array, dimension LOCc(IA+K-1). This array contains the scalar factors TAU(i) of the elementary reflectors H(i) as returned by PCTZRZF. TAU is tied to the distributed matrix A. .TP 8 C (local input/local output) COMPLEX pointer into the local memory to an array of dimension (LLD_C,LOCc(JC+N-1)). On entry, the local pieces of the distributed matrix sub(C). 
On exit, sub( C ) is overwritten by Q*sub( C ) or Q'*sub( C ) or sub( C )*Q' or sub( C )*Q. .TP 8 IC (global input) INTEGER The row index in the global array C indicating the first row of sub( C ). .TP 8 JC (global input) INTEGER The column index in the global array C indicating the first column of sub( C ). .TP 8 DESCC (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix C. .TP 8 WORK (local workspace/local output) COMPLEX array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. LWORK is local input and must be at least If SIDE = 'L', LWORK >= MpC0 + MAX( MAX( 1, NqC0 ), NUMROC( NUMROC( M+IROFFC,MB_A,0,0,NPROW ),MB_A,0,0,LCMP ) ); if SIDE = 'R', LWORK >= NqC0 + MAX( 1, MpC0 ); where LCMP = LCM / NPROW with LCM = ILCM( NPROW, NPCOL ), IROFFC = MOD( IC-1, MB_C ), ICOFFC = MOD( JC-1, NB_C ), ICROW = INDXG2P( IC, MB_C, MYROW, RSRC_C, NPROW ), ICCOL = INDXG2P( JC, NB_C, MYCOL, CSRC_C, NPCOL ), MpC0 = NUMROC( M+IROFFC, MB_C, MYROW, ICROW, NPROW ), NqC0 = NUMROC( N+ICOFFC, NB_C, MYCOL, ICCOL, NPCOL ), ILCM, INDXG2P and NUMROC are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 INFO (local output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-th entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. 
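The LCMP quantity in the LWORK bound above is LCM( NPROW, NPCOL ) / NPROW, where the least common multiple comes from the tool function ILCM. A standalone C sketch of the same arithmetic, with our own helper names:

```c
/* Greatest common divisor by Euclid's algorithm. */
static int gcd(int a, int b)
{
    while (b != 0) { int t = a % b; a = b; b = t; }
    return a;
}

/* Least common multiple of the process grid dimensions, matching the
 * documented result of the ScaLAPACK tool function ILCM. */
int ilcm_local(int nprow, int npcol)
{
    return nprow / gcd(nprow, npcol) * npcol; /* divide first to limit overflow */
}
```

On a 2 x 3 process grid, for example, LCM = 6 and LCMP = LCM / NPROW = 3.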
Alignment requirements ====================== The distributed submatrices A(IA:*, JA:*) and C(IC:IC+M-1,JC:JC+N-1) must verify some alignment properties, namely the following expressions should be true: If SIDE = 'L', ( NB_A.EQ.MB_C .AND. ICOFFA.EQ.IROFFC ) If SIDE = 'R', ( NB_A.EQ.NB_C .AND. ICOFFA.EQ.ICOFFC .AND. IACOL.EQ.ICCOL ) scalapack-doc-1.5/man/manl/pcunmrq.l0100644000056400000620000001766206335610625017123 0ustar pfrauenfstaff.TH PCUNMRQ l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PCUNMRQ - overwrite the general complex M-by-N distributed matrix sub( C ) = C(IC:IC+M-1,JC:JC+N-1) with SIDE = 'L' SIDE = 'R' TRANS = 'N' .SH SYNOPSIS .TP 20 SUBROUTINE PCUNMRQ( SIDE, TRANS, M, N, K, A, IA, JA, DESCA, TAU, C, IC, JC, DESCC, WORK, LWORK, INFO ) .TP 20 .ti +4 CHARACTER SIDE, TRANS .TP 20 .ti +4 INTEGER IA, IC, INFO, JA, JC, K, LWORK, M, N .TP 20 .ti +4 INTEGER DESCA( * ), DESCC( * ) .TP 20 .ti +4 COMPLEX A( * ), C( * ), TAU( * ), WORK( * ) .SH PURPOSE PCUNMRQ overwrites the general complex M-by-N distributed matrix sub( C ) = C(IC:IC+M-1,JC:JC+N-1) with TRANS = 'C': Q**H * sub( C ) sub( C ) * Q**H .br where Q is a complex unitary distributed matrix defined as the product of K elementary reflectors .br Q = H(1)' H(2)' . . . H(k)' .br as returned by PCGERQF. Q is of order M if SIDE = 'L' and of order N if SIDE = 'R'. .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. 
In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 SIDE (global input) CHARACTER = 'L': apply Q or Q**H from the Left; .br = 'R': apply Q or Q**H from the Right. .TP 8 TRANS (global input) CHARACTER .br = 'N': No transpose, apply Q; .br = 'C': Conjugate transpose, apply Q**H. 
.TP 8 M (global input) INTEGER The number of rows to be operated on i.e the number of rows of the distributed submatrix sub( C ). M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on i.e the number of columns of the distributed submatrix sub( C ). N >= 0. .TP 8 K (global input) INTEGER The number of elementary reflectors whose product defines the matrix Q. If SIDE = 'L', M >= K >= 0, if SIDE = 'R', N >= K >= 0. .TP 8 A (local input) COMPLEX pointer into the local memory to an array of dimension (LLD_A,LOCc(JA+M-1)) if SIDE='L', and (LLD_A,LOCc(JA+N-1)) if SIDE='R', where LLD_A >= MAX(1,LOCr(IA+K-1)); On entry, the i-th row must contain the vector which defines the elementary reflector H(i), IA <= i <= IA+K-1, as returned by PCGERQF in the K rows of its distributed matrix argument A(IA:IA+K-1,JA:*). .br A(IA:IA+K-1,JA:*) is modified by the routine but restored on exit. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 TAU (local input) COMPLEX, array, dimension LOCc(IA+K-1). This array contains the scalar factors TAU(i) of the elementary reflectors H(i) as returned by PCGERQF. TAU is tied to the distributed matrix A. .TP 8 C (local input/local output) COMPLEX pointer into the local memory to an array of dimension (LLD_C,LOCc(JC+N-1)). On entry, the local pieces of the distributed matrix sub(C). On exit, sub( C ) is overwritten by Q*sub( C ) or Q'*sub( C ) or sub( C )*Q' or sub( C )*Q. .TP 8 IC (global input) INTEGER The row index in the global array C indicating the first row of sub( C ). .TP 8 JC (global input) INTEGER The column index in the global array C indicating the first column of sub( C ). 
.TP 8 DESCC (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix C. .TP 8 WORK (local workspace/local output) COMPLEX array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. LWORK is local input and must be at least if SIDE = 'L', LWORK >= MAX( (MB_A*(MB_A-1))/2, ( MpC0 + MAX( MqA0 + NUMROC( NUMROC( M+IROFFC, MB_A, 0, 0, NPROW ), MB_A, 0, 0, LCMP ), NqC0 ) )*MB_A ) + MB_A * MB_A else if SIDE = 'R', LWORK >= MAX( (MB_A*(MB_A-1))/2, (MpC0 + NqC0)*MB_A ) + MB_A * MB_A end if where LCMP = LCM / NPROW with LCM = ICLM( NPROW, NPCOL ), IROFFA = MOD( IA-1, MB_A ), ICOFFA = MOD( JA-1, NB_A ), IACOL = INDXG2P( JA, NB_A, MYCOL, CSRC_A, NPCOL ), MqA0 = NUMROC( M+ICOFFA, NB_A, MYCOL, IACOL, NPCOL ), IROFFC = MOD( IC-1, MB_C ), ICOFFC = MOD( JC-1, NB_C ), ICROW = INDXG2P( IC, MB_C, MYROW, RSRC_C, NPROW ), ICCOL = INDXG2P( JC, NB_C, MYCOL, CSRC_C, NPCOL ), MpC0 = NUMROC( M+IROFFC, MB_C, MYROW, ICROW, NPROW ), NqC0 = NUMROC( N+ICOFFC, NB_C, MYCOL, ICCOL, NPCOL ), ILCM, INDXG2P and NUMROC are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 INFO (global output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. 
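INDXG2P, used above to locate the process row or column owning a global index, reduces to a single modular expression. A C sketch of that formula (our own function name; the library routine additionally takes a dummy IPROC argument that the formula does not use):

```c
/* Process coordinate owning global index indxglob (1-based) for block
 * size nb, when the first block is held by process isrcproc on a grid
 * dimension of nprocs processes - the formula behind INDXG2P. */
int indxg2p_local(int indxglob, int nb, int isrcproc, int nprocs)
{
    return (isrcproc + (indxglob - 1) / nb) % nprocs;
}
```

For example, with NB = 4, ISRCPROC = 0 and 2 processes, global indices 1-4 map to process 0, indices 5-8 to process 1, and index 9 wraps back to process 0.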
Alignment requirements ====================== The distributed submatrices A(IA:*, JA:*) and C(IC:IC+M-1,JC:JC+N-1) must verify some alignment properties, namely the following expressions should be true: If SIDE = 'L', ( NB_A.EQ.MB_C .AND. ICOFFA.EQ.IROFFC ) If SIDE = 'R', ( NB_A.EQ.NB_C .AND. ICOFFA.EQ.ICOFFC .AND. IACOL.EQ.ICCOL ) scalapack-doc-1.5/man/manl/pcunmrz.l0100644000056400000620000002020506335610625017117 0ustar pfrauenfstaff.TH PCUNMRZ l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PCUNMRZ - overwrite the general complex M-by-N distributed matrix sub( C ) = C(IC:IC+M-1,JC:JC+N-1) with SIDE = 'L' SIDE = 'R' TRANS = 'N' .SH SYNOPSIS .TP 20 SUBROUTINE PCUNMRZ( SIDE, TRANS, M, N, K, L, A, IA, JA, DESCA, TAU, C, IC, JC, DESCC, WORK, LWORK, INFO ) .TP 20 .ti +4 CHARACTER SIDE, TRANS .TP 20 .ti +4 INTEGER IA, IC, INFO, JA, JC, K, L, LWORK, M, N .TP 20 .ti +4 INTEGER DESCA( * ), DESCC( * ) .TP 20 .ti +4 COMPLEX A( * ), C( * ), TAU( * ), WORK( * ) .SH PURPOSE PCUNMRZ overwrites the general complex M-by-N distributed matrix sub( C ) = C(IC:IC+M-1,JC:JC+N-1) with TRANS = 'C': Q**H * sub( C ) sub( C ) * Q**H .br where Q is a complex unitary distributed matrix defined as the product of K elementary reflectors .br Q = H(1)' H(2)' . . . H(k)' .br as returned by PCTZRZF. Q is of order M if SIDE = 'L' and of order N if SIDE = 'R'. .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". 
.br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 SIDE (global input) CHARACTER = 'L': apply Q or Q**H from the Left; .br = 'R': apply Q or Q**H from the Right. 
.TP 8 TRANS (global input) CHARACTER .br = 'N': No transpose, apply Q; .br = 'C': Conjugate transpose, apply Q**H. .TP 8 M (global input) INTEGER The number of rows to be operated on i.e the number of rows of the distributed submatrix sub( C ). M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on i.e the number of columns of the distributed submatrix sub( C ). N >= 0. .TP 8 K (global input) INTEGER The number of elementary reflectors whose product defines the matrix Q. If SIDE = 'L', M >= K >= 0, if SIDE = 'R', N >= K >= 0. .TP 8 L (global input) INTEGER The columns of the distributed submatrix sub( A ) containing the meaningful part of the Householder reflectors. If SIDE = 'L', M >= L >= 0, if SIDE = 'R', N >= L >= 0. .TP 8 A (local input) COMPLEX pointer into the local memory to an array of dimension (LLD_A,LOCc(JA+M-1)) if SIDE='L', and (LLD_A,LOCc(JA+N-1)) if SIDE='R', where LLD_A >= MAX(1,LOCr(IA+K-1)); On entry, the i-th row must contain the vector which defines the elementary reflector H(i), IA <= i <= IA+K-1, as returned by PCTZRZF in the K rows of its distributed matrix argument A(IA:IA+K-1,JA:*). .br A(IA:IA+K-1,JA:*) is modified by the routine but restored on exit. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 TAU (local input) COMPLEX, array, dimension LOCc(IA+K-1). This array contains the scalar factors TAU(i) of the elementary reflectors H(i) as returned by PCTZRZF. TAU is tied to the distributed matrix A. .TP 8 C (local input/local output) COMPLEX pointer into the local memory to an array of dimension (LLD_C,LOCc(JC+N-1)). On entry, the local pieces of the distributed matrix sub(C). 
On exit, sub( C ) is overwritten by Q*sub( C ) or Q'*sub( C ) or sub( C )*Q' or sub( C )*Q. .TP 8 IC (global input) INTEGER The row index in the global array C indicating the first row of sub( C ). .TP 8 JC (global input) INTEGER The column index in the global array C indicating the first column of sub( C ). .TP 8 DESCC (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix C. .TP 8 WORK (local workspace/local output) COMPLEX array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. LWORK is local input and must be at least if SIDE = 'L', LWORK >= MAX( (MB_A*(MB_A-1))/2, ( MpC0 + MAX( MqA0 + NUMROC( NUMROC( M+IROFFC, MB_A, 0, 0, NPROW ), MB_A, 0, 0, LCMP ), NqC0 ) )*MB_A ) + MB_A * MB_A else if SIDE = 'R', LWORK >= MAX( (MB_A*(MB_A-1))/2, (MpC0 + NqC0)*MB_A ) + MB_A * MB_A end if where LCMP = LCM / NPROW with LCM = ICLM( NPROW, NPCOL ), IROFFA = MOD( IA-1, MB_A ), ICOFFA = MOD( JA-1, NB_A ), IACOL = INDXG2P( JA, NB_A, MYCOL, CSRC_A, NPCOL ), MqA0 = NUMROC( M+ICOFFA, NB_A, MYCOL, IACOL, NPCOL ), IROFFC = MOD( IC-1, MB_C ), ICOFFC = MOD( JC-1, NB_C ), ICROW = INDXG2P( IC, MB_C, MYROW, RSRC_C, NPROW ), ICCOL = INDXG2P( JC, NB_C, MYCOL, CSRC_C, NPCOL ), MpC0 = NUMROC( M+IROFFC, MB_C, MYROW, ICROW, NPROW ), NqC0 = NUMROC( N+ICOFFC, NB_C, MYCOL, ICCOL, NPCOL ), ILCM, INDXG2P and NUMROC are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. 
.TP 8 INFO (global output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. Alignment requirements ====================== The distributed submatrices A(IA:*, JA:*) and C(IC:IC+M-1,JC:JC+N-1) must verify some alignment properties, namely the following expressions should be true: If SIDE = 'L', ( NB_A.EQ.MB_C .AND. ICOFFA.EQ.IROFFC ) If SIDE = 'R', ( NB_A.EQ.NB_C .AND. ICOFFA.EQ.ICOFFC .AND. IACOL.EQ.ICCOL ) scalapack-doc-1.5/man/manl/pcunmtr.l0100644000056400000620000002045406335610625017117 0ustar pfrauenfstaff.TH PCUNMTR l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PCUNMTR - overwrite the general complex M-by-N distributed matrix sub( C ) = C(IC:IC+M-1,JC:JC+N-1) with SIDE = 'L' SIDE = 'R' TRANS = 'N' .SH SYNOPSIS .TP 20 SUBROUTINE PCUNMTR( SIDE, UPLO, TRANS, M, N, A, IA, JA, DESCA, TAU, C, IC, JC, DESCC, WORK, LWORK, INFO ) .TP 20 .ti +4 CHARACTER SIDE, TRANS, UPLO .TP 20 .ti +4 INTEGER IA, IC, INFO, JA, JC, LWORK, M, N .TP 20 .ti +4 INTEGER DESCA( * ), DESCC( * ) .TP 20 .ti +4 COMPLEX A( * ), C( * ), TAU( * ), WORK( * ) .SH PURPOSE PCUNMTR overwrites the general complex M-by-N distributed matrix sub( C ) = C(IC:IC+M-1,JC:JC+N-1) with TRANS = 'C': Q**H * sub( C ) sub( C ) * Q**H .br where Q is a complex unitary distributed matrix of order nq, with nq = m if SIDE = 'L' and nq = n if SIDE = 'R'. Q is defined as the product of nq-1 elementary reflectors, as returned by PCHETRD: if UPLO = 'U', Q = H(nq-1) . . . H(2) H(1); .br if UPLO = 'L', Q = H(1) H(2) . . . H(nq-1). .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. 
Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). 
An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 SIDE (global input) CHARACTER = 'L': apply Q or Q**H from the Left; .br = 'R': apply Q or Q**H from the Right. .TP 8 UPLO (global input) CHARACTER .br = 'U': Upper triangle of A(IA:*,JA:*) contains elementary reflectors from PCHETRD; = 'L': Lower triangle of A(IA:*,JA:*) contains elementary reflectors from PCHETRD. .TP 8 TRANS (global input) CHARACTER = 'N': No transpose, apply Q; .br = 'C': Conjugate transpose, apply Q**H. .TP 8 M (global input) INTEGER The number of rows to be operated on i.e the number of rows of the distributed submatrix sub( C ). M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on i.e the number of columns of the distributed submatrix sub( C ). N >= 0. .TP 8 A (local input) COMPLEX pointer into the local memory to an array of dimension (LLD_A,LOCc(JA+M-1)) if SIDE='L', or (LLD_A,LOCc(JA+N-1)) if SIDE = 'R'. The vectors which define the elementary reflectors, as returned by PCHETRD. If SIDE = 'L', LLD_A >= max(1,LOCr(IA+M-1)); if SIDE = 'R', LLD_A >= max(1,LOCr(IA+N-1)). .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 TAU (local input) COMPLEX array, dimension LTAU, where if SIDE = 'L' and UPLO = 'U', LTAU = LOCc(M_A), if SIDE = 'L' and UPLO = 'L', LTAU = LOCc(JA+M-2), if SIDE = 'R' and UPLO = 'U', LTAU = LOCc(N_A), if SIDE = 'R' and UPLO = 'L', LTAU = LOCc(JA+N-2). TAU(i) must contain the scalar factor of the elementary reflector H(i), as returned by PCHETRD. TAU is tied to the distributed matrix A. 
.TP 8 C (local input/local output) COMPLEX pointer into the local memory to an array of dimension (LLD_C,LOCc(JC+N-1)). On entry, the local pieces of the distributed matrix sub(C). On exit, sub( C ) is overwritten by Q*sub( C ) or Q'*sub( C ) or sub( C )*Q' or sub( C )*Q. .TP 8 IC (global input) INTEGER The row index in the global array C indicating the first row of sub( C ). .TP 8 JC (global input) INTEGER The column index in the global array C indicating the first column of sub( C ). .TP 8 DESCC (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix C. .TP 8 WORK (local workspace/local output) COMPLEX array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. LWORK is local input and must be at least If UPLO = 'U', IAA = IA, JAA = JA+1, ICC = IC, JCC = JC; else UPLO = 'L', IAA = IA+1, JAA = JA; if SIDE = 'L', ICC = IC+1; JCC = JC; else ICC = IC; JCC = JC+1; end if end if If SIDE = 'L', MI = M-1; NI = N; LWORK >= MAX( (NB_A*(NB_A-1))/2, (NqC0 + MpC0)*NB_A ) + NB_A * NB_A else if SIDE = 'R', MI = M; NI = N-1; LWORK >= MAX( (NB_A*(NB_A-1))/2, ( NqC0 + MAX( NpA0 + NUMROC( NUMROC( NI+ICOFFC, NB_A, 0, 0, NPCOL ), NB_A, 0, 0, LCMQ ), MpC0 ) )*NB_A ) + NB_A * NB_A end if where LCMQ = LCM / NPCOL with LCM = ILCM( NPROW, NPCOL ), IROFFA = MOD( IAA-1, MB_A ), ICOFFA = MOD( JAA-1, NB_A ), IAROW = INDXG2P( IAA, MB_A, MYROW, RSRC_A, NPROW ), NpA0 = NUMROC( NI+IROFFA, MB_A, MYROW, IAROW, NPROW ), IROFFC = MOD( ICC-1, MB_C ), ICOFFC = MOD( JCC-1, NB_C ), ICROW = INDXG2P( ICC, MB_C, MYROW, RSRC_C, NPROW ), ICCOL = INDXG2P( JCC, NB_C, MYCOL, CSRC_C, NPCOL ), MpC0 = NUMROC( MI+IROFFC, MB_C, MYROW, ICROW, NPROW ), NqC0 = NUMROC( NI+ICOFFC, NB_C, MYCOL, ICCOL, NPCOL ), ILCM, INDXG2P and NUMROC are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. 
If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 INFO (global output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-th entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. Alignment requirements ====================== The distributed submatrices A(IA:*, JA:*) and C(IC:IC+M-1,JC:JC+N-1) must satisfy the following alignment properties: If SIDE = 'L', ( MB_A.EQ.MB_C .AND. IROFFA.EQ.IROFFC .AND. IAROW.EQ.ICROW ) If SIDE = 'R', ( MB_A.EQ.NB_C .AND. IROFFA.EQ.ICOFFC ) scalapack-doc-1.5/man/manl/pddbsv.l0100644000056400000620000000141706335610625016707 0ustar pfrauenfstaff.TH PDDBSV l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PDDBSV - solve a system of linear equations A(1:N, JA:JA+N-1) * X = B(IB:IB+N-1, 1:NRHS) .SH SYNOPSIS .TP 19 SUBROUTINE PDDBSV( N, BWL, BWU, NRHS, A, JA, DESCA, B, IB, DESCB, WORK, LWORK, INFO ) .TP 19 .ti +4 INTEGER BWL, BWU, IB, INFO, JA, LWORK, N, NRHS .TP 19 .ti +4 INTEGER DESCA( * ), DESCB( * ) .TP 19 .ti +4 DOUBLE PRECISION A( * ), B( * ), WORK( * ) .SH PURPOSE PDDBSV solves a system of linear equations where A(1:N, JA:JA+N-1) is an N-by-N real .br banded diagonally dominant-like distributed .br matrix with bandwidth BWL, BWU. .br Gaussian elimination without pivoting .br is used to factor a reordering .br of the matrix into L U. .br See PDDBTRF and PDDBTRS for details. 
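The INFO convention above packs the failing argument position into a single integer: INFO = -i when scalar argument i is bad, and INFO = -(i*100+j) when entry j of array argument i is bad. A small illustrative Python decoder of that convention:

```python
def decode_info(info):
    """Decode a negative INFO value from a ScaLAPACK routine.

    Returns (argument_position, array_entry), where array_entry is None
    for a scalar argument, following the documented convention
    INFO = -i (scalar) or INFO = -(i*100+j) (j-th entry of array i).
    """
    if info >= 0:
        raise ValueError("only negative INFO values encode argument errors")
    code = -info
    if code >= 100:
        return code // 100, code % 100   # array argument: (i, j)
    return code, None                    # scalar argument: (i, None)
```

For example, INFO = -702 means the 2nd entry of the 7th argument (typically a descriptor entry) was illegal.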
.br scalapack-doc-1.5/man/manl/pddbtrf.l0100644000056400000620000000214606335610625017052 0ustar pfrauenfstaff.TH PDDBTRF l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PDDBTRF - compute a LU factorization of an N-by-N real banded diagonally dominant-like distributed matrix with bandwidth BWL, BWU .SH SYNOPSIS .TP 20 SUBROUTINE PDDBTRF( N, BWL, BWU, A, JA, DESCA, AF, LAF, WORK, LWORK, INFO ) .TP 20 .ti +4 INTEGER BWL, BWU, INFO, JA, LAF, LWORK, N .TP 20 .ti +4 INTEGER DESCA( * ) .TP 20 .ti +4 DOUBLE PRECISION A( * ), AF( * ), WORK( * ) .SH PURPOSE PDDBTRF computes a LU factorization of an N-by-N real banded diagonally dominant-like distributed matrix with bandwidth BWL, BWU: A(1:N, JA:JA+N-1). Reordering is used to increase parallelism in the factorization. This reordering results in factors that are DIFFERENT from those produced by equivalent sequential codes. These factors cannot be used directly by users; however, they can be used in .br subsequent calls to PDDBTRS to solve linear systems. .br The factorization has the form .br P A(1:N, JA:JA+N-1) P^T = L U .br where U is a banded upper triangular matrix and L is banded lower triangular, and P is a permutation matrix. 
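PDDBTRF's factorization P A P^T = L U applies Gaussian elimination without pivoting to a reordered matrix; diagonal dominance is what makes skipping the pivot search safe. The sequential kernel it mirrors can be sketched in Python on a small dense matrix (band storage and the parallel reordering P are omitted for clarity; this is an illustration, not the library algorithm):

```python
def lu_nopivot(a):
    """Unpivoted (Doolittle) LU factorization of a dense matrix, the kind of
    elimination PDDBTRF applies within each reordered block.  Safe only for
    diagonally dominant-like matrices, which is exactly PDDBTRF's assumption.
    Returns (L, U) as lists of lists, with unit diagonal in L."""
    n = len(a)
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    U = [row[:] for row in a]            # work on a copy of A
    for k in range(n):
        for i in range(k + 1, n):
            m = U[i][k] / U[k][k]        # no pivot search: relies on dominance
            L[i][k] = m
            for j in range(k, n):
                U[i][j] -= m * U[k][j]
    return L, U
```

On a banded diagonally dominant matrix such as tridiag(1, 4, 1) the multipliers stay small and L·U reproduces A exactly.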
.br scalapack-doc-1.5/man/manl/pddbtrs.l0100644000056400000620000000167106335610625017071 0ustar pfrauenfstaff.TH PDDBTRS l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PDDBTRS - solve a system of linear equations A(1:N, JA:JA+N-1) * X = B(IB:IB+N-1, 1:NRHS) .SH SYNOPSIS .TP 20 SUBROUTINE PDDBTRS( TRANS, N, BWL, BWU, NRHS, A, JA, DESCA, B, IB, DESCB, AF, LAF, WORK, LWORK, INFO ) .TP 20 .ti +4 CHARACTER TRANS .TP 20 .ti +4 INTEGER BWL, BWU, IB, INFO, JA, LAF, LWORK, N, NRHS .TP 20 .ti +4 INTEGER DESCA( * ), DESCB( * ) .TP 20 .ti +4 DOUBLE PRECISION A( * ), AF( * ), B( * ), WORK( * ) .SH PURPOSE PDDBTRS solves a system of linear equations or .br A(1:N, JA:JA+N-1)' * X = B(IB:IB+N-1, 1:NRHS) .br where A(1:N, JA:JA+N-1) is the matrix used to produce the factors stored in A(1:N,JA:JA+N-1) and AF by PDDBTRF. .br A(1:N, JA:JA+N-1) is an N-by-N real .br banded diagonally dominant-like distributed .br matrix with bandwidth BWL, BWU. .br Routine PDDBTRF MUST be called first. .br scalapack-doc-1.5/man/manl/pddbtrsv.l0100644000056400000620000000222006335610625017256 0ustar pfrauenfstaff.TH PDDBTRSV l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PDDBTRSV - solve a banded triangular system of linear equations A(1:N, JA:JA+N-1) * X = B(IB:IB+N-1, 1:NRHS) .SH SYNOPSIS .TP 21 SUBROUTINE PDDBTRSV( UPLO, TRANS, N, BWL, BWU, NRHS, A, JA, DESCA, B, IB, DESCB, AF, LAF, WORK, LWORK, INFO ) .TP 21 .ti +4 CHARACTER TRANS, UPLO .TP 21 .ti +4 INTEGER BWL, BWU, IB, INFO, JA, LAF, LWORK, N, NRHS .TP 21 .ti +4 INTEGER DESCA( * ), DESCB( * ) .TP 21 .ti +4 DOUBLE PRECISION A( * ), AF( * ), B( * ), WORK( * ) .SH PURPOSE PDDBTRSV solves a banded triangular system of linear equations or .br A(1:N, JA:JA+N-1)^T * X = B(IB:IB+N-1, 1:NRHS) where A(1:N, JA:JA+N-1) is a banded .br triangular matrix factor produced by the .br Gaussian elimination code PDDBTRF .br and is stored in A(1:N,JA:JA+N-1) and AF. 
.br The matrix stored in A(1:N, JA:JA+N-1) is either .br upper or lower triangular according to UPLO, .br and the choice of solving A(1:N, JA:JA+N-1) or A(1:N, JA:JA+N-1)^T is dictated by the user by the parameter TRANS. .br Routine PDDBTRF MUST be called first. .br scalapack-doc-1.5/man/manl/pddtsv.l0100644000056400000620000000140206335610625016723 0ustar pfrauenfstaff.TH PDDTSV l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PDDTSV - solve a system of linear equations A(1:N, JA:JA+N-1) * X = B(IB:IB+N-1, 1:NRHS) .SH SYNOPSIS .TP 19 SUBROUTINE PDDTSV( N, NRHS, DL, D, DU, JA, DESCA, B, IB, DESCB, WORK, LWORK, INFO ) .TP 19 .ti +4 INTEGER IB, INFO, JA, LWORK, N, NRHS .TP 19 .ti +4 INTEGER DESCA( * ), DESCB( * ) .TP 19 .ti +4 DOUBLE PRECISION B( * ), D( * ), DL( * ), DU( * ), WORK( * ) .SH PURPOSE PDDTSV solves a system of linear equations where A(1:N, JA:JA+N-1) is an N-by-N real .br tridiagonal diagonally dominant-like distributed .br matrix. .br Gaussian elimination without pivoting .br is used to factor a reordering .br of the matrix into L U. .br See PDDTTRF and PDDTTRS for details. .br scalapack-doc-1.5/man/manl/pddttrf.l0100644000056400000620000000214106335610625017067 0ustar pfrauenfstaff.TH PDDTTRF l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PDDTTRF - compute a LU factorization of an N-by-N real tridiagonal diagonally dominant-like distributed matrix A(1:N, JA:JA+N-1) .SH SYNOPSIS .TP 20 SUBROUTINE PDDTTRF( N, DL, D, DU, JA, DESCA, AF, LAF, WORK, LWORK, INFO ) .TP 20 .ti +4 INTEGER INFO, JA, LAF, LWORK, N .TP 20 .ti +4 INTEGER DESCA( * ) .TP 20 .ti +4 DOUBLE PRECISION AF( * ), D( * ), DL( * ), DU( * ), WORK( * ) .SH PURPOSE PDDTTRF computes a LU factorization of an N-by-N real tridiagonal diagonally dominant-like distributed matrix A(1:N, JA:JA+N-1). Reordering is used to increase parallelism in the factorization. 
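For the tridiagonal family (PDDTSV, PDDTTRF, PDDTTRS), the sequential counterpart of factor-then-solve without pivoting is the classic Thomas algorithm. An illustrative Python sketch, which like the parallel routines relies on the matrix being diagonally dominant-like:

```python
def thomas_solve(dl, d, du, b):
    """Solve a tridiagonal system by unpivoted elimination (Thomas algorithm),
    the sequential analogue of the PDDTTRF/PDDTTRS factor-solve pair.
    dl: sub-diagonal (length n-1), d: diagonal (length n),
    du: super-diagonal (length n-1), b: right-hand side (length n)."""
    n = len(d)
    dd, bb = d[:], b[:]                  # work on copies
    for i in range(1, n):                # forward elimination (the "L U" sweep)
        w = dl[i - 1] / dd[i - 1]        # multiplier; no pivoting
        dd[i] -= w * du[i - 1]
        bb[i] -= w * bb[i - 1]
    x = [0.0] * n
    x[-1] = bb[-1] / dd[-1]
    for i in range(n - 2, -1, -1):       # back substitution
        x[i] = (bb[i] - du[i] * x[i + 1]) / dd[i]
    return x
```

For tridiag(1, 4, 1) with right-hand side (5, 6, 5) the solution is (1, 1, 1), which is easy to verify by hand.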
This reordering results in factors that are DIFFERENT from those produced by equivalent sequential codes. These factors cannot be used directly by users; however, they can be used in .br subsequent calls to PDDTTRS to solve linear systems. .br The factorization has the form .br P A(1:N, JA:JA+N-1) P^T = L U .br where U is a tridiagonal upper triangular matrix and L is tridiagonal lower triangular, and P is a permutation matrix. .br scalapack-doc-1.5/man/manl/pddttrs.l0100644000056400000620000000165406335610625017114 0ustar pfrauenfstaff.TH PDDTTRS l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PDDTTRS - solve a system of linear equations A(1:N, JA:JA+N-1) * X = B(IB:IB+N-1, 1:NRHS) .SH SYNOPSIS .TP 20 SUBROUTINE PDDTTRS( TRANS, N, NRHS, DL, D, DU, JA, DESCA, B, IB, DESCB, AF, LAF, WORK, LWORK, INFO ) .TP 20 .ti +4 CHARACTER TRANS .TP 20 .ti +4 INTEGER IB, INFO, JA, LAF, LWORK, N, NRHS .TP 20 .ti +4 INTEGER DESCA( * ), DESCB( * ) .TP 20 .ti +4 DOUBLE PRECISION AF( * ), B( * ), D( * ), DL( * ), DU( * ), WORK( * ) .SH PURPOSE PDDTTRS solves a system of linear equations or .br A(1:N, JA:JA+N-1)' * X = B(IB:IB+N-1, 1:NRHS) .br where A(1:N, JA:JA+N-1) is the matrix used to produce the factors stored in A(1:N,JA:JA+N-1) and AF by PDDTTRF. .br A(1:N, JA:JA+N-1) is an N-by-N real .br tridiagonal diagonally dominant-like distributed .br matrix. .br Routine PDDTTRF MUST be called first. 
.br scalapack-doc-1.5/man/manl/pddttrsv.l0100644000056400000620000000224506335610625017277 0ustar pfrauenfstaff.TH PDDTTRSV l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PDDTTRSV - solve a tridiagonal triangular system of linear equations A(1:N, JA:JA+N-1) * X = B(IB:IB+N-1, 1:NRHS) .SH SYNOPSIS .TP 21 SUBROUTINE PDDTTRSV( UPLO, TRANS, N, NRHS, DL, D, DU, JA, DESCA, B, IB, DESCB, AF, LAF, WORK, LWORK, INFO ) .TP 21 .ti +4 CHARACTER TRANS, UPLO .TP 21 .ti +4 INTEGER IB, INFO, JA, LAF, LWORK, N, NRHS .TP 21 .ti +4 INTEGER DESCA( * ), DESCB( * ) .TP 21 .ti +4 DOUBLE PRECISION AF( * ), B( * ), D( * ), DL( * ), DU( * ), WORK( * ) .SH PURPOSE PDDTTRSV solves a tridiagonal triangular system of linear equations or .br A(1:N, JA:JA+N-1)^T * X = B(IB:IB+N-1, 1:NRHS) where A(1:N, JA:JA+N-1) is a tridiagonal .br triangular matrix factor produced by the .br Gaussian elimination code PDDTTRF .br and is stored in A(1:N,JA:JA+N-1) and AF. .br The matrix stored in A(1:N, JA:JA+N-1) is either .br upper or lower triangular according to UPLO, .br and the choice of solving A(1:N, JA:JA+N-1) or A(1:N, JA:JA+N-1)^T is dictated by the user via the parameter TRANS. .br Routine PDDTTRF MUST be called first. .br scalapack-doc-1.5/man/manl/pdgbsv.l0100644000056400000620000000140606335610625016710 0ustar pfrauenfstaff.TH PDGBSV l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PDGBSV - solve a system of linear equations A(1:N, JA:JA+N-1) * X = B(IB:IB+N-1, 1:NRHS) .SH SYNOPSIS .TP 19 SUBROUTINE PDGBSV( N, BWL, BWU, NRHS, A, JA, DESCA, IPIV, B, IB, DESCB, WORK, LWORK, INFO ) .TP 19 .ti +4 INTEGER BWL, BWU, IB, INFO, JA, LWORK, N, NRHS .TP 19 .ti +4 INTEGER DESCA( * ), DESCB( * ), IPIV( * ) .TP 19 .ti +4 DOUBLE PRECISION A( * ), B( * ), WORK( * ) .SH PURPOSE PDGBSV solves a system of linear equations where A(1:N, JA:JA+N-1) is an N-by-N real .br banded distributed .br matrix with bandwidth BWL, BWU. 
.br Gaussian elimination with pivoting .br is used to factor a reordering .br of the matrix into P L U. .br See PDGBTRF and PDGBTRS for details. .br scalapack-doc-1.5/man/manl/pdgbtrf.l0100644000056400000620000000237606335610625017062 0ustar pfrauenfstaff.TH PDGBTRF l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PDGBTRF - compute a LU factorization of an N-by-N real banded distributed matrix with bandwidth BWL, BWU .SH SYNOPSIS .TP 20 SUBROUTINE PDGBTRF( N, BWL, BWU, A, JA, DESCA, IPIV, AF, LAF, WORK, LWORK, INFO ) .TP 20 .ti +4 INTEGER BWL, BWU, INFO, JA, LAF, LWORK, N .TP 20 .ti +4 INTEGER DESCA( * ), IPIV( * ) .TP 20 .ti +4 DOUBLE PRECISION A( * ), AF( * ), WORK( * ) .SH PURPOSE PDGBTRF computes a LU factorization of an N-by-N real banded distributed matrix with bandwidth BWL, BWU: A(1:N, JA:JA+N-1). Reordering is used to increase parallelism in the factorization. This reordering results in factors that are DIFFERENT from those produced by equivalent sequential codes. These factors cannot be used directly by users; however, they can be used in .br subsequent calls to PDGBTRS to solve linear systems. .br The factorization has the form .br P A(1:N, JA:JA+N-1) Q = L U .br where U is a banded upper triangular matrix and L is banded lower triangular, and P and Q are permutation matrices. .br The matrix Q represents reordering of columns .br for parallelism's sake, while P represents .br reordering of rows for numerical stability using .br classic partial pivoting. 
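Unlike the dominant-like routines above, PDGBTRF keeps classic partial pivoting for numerical stability, giving P A Q = L U. The row-pivoting half of that factorization can be sketched in Python on a dense matrix (the column reordering Q, used only to expose parallelism, is omitted; this is an illustration, not the banded library code):

```python
def lu_partial_pivot(a):
    """Dense LU with classic partial (row) pivoting, the numerical-stability
    half of PDGBTRF's P A Q = L U.  Returns (perm, L, U) where perm is the
    row permutation such that A[perm[i]] equals row i of L U."""
    n = len(a)
    U = [row[:] for row in a]
    L = [[0.0] * n for _ in range(n)]
    perm = list(range(n))
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(U[i][k]))  # pivot search
        U[k], U[p] = U[p], U[k]          # swap rows of U, L and the permutation
        L[k], L[p] = L[p], L[k]
        perm[k], perm[p] = perm[p], perm[k]
        L[k][k] = 1.0
        for i in range(k + 1, n):
            m = U[i][k] / U[k][k]
            L[i][k] = m
            for j in range(k, n):
                U[i][j] -= m * U[k][j]
    return perm, L, U
```

Because rows are swapped before each elimination step, every multiplier in L is at most 1 in magnitude, which is the stability guarantee partial pivoting buys.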
.br scalapack-doc-1.5/man/manl/pdgbtrs.l0100644000056400000620000000165706335610625017100 0ustar pfrauenfstaff.TH PDGBTRS l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PDGBTRS - solve a system of linear equations A(1:N, JA:JA+N-1) * X = B(IB:IB+N-1, 1:NRHS) .SH SYNOPSIS .TP 20 SUBROUTINE PDGBTRS( TRANS, N, BWL, BWU, NRHS, A, JA, DESCA, IPIV, B, IB, DESCB, AF, LAF, WORK, LWORK, INFO ) .TP 20 .ti +4 CHARACTER TRANS .TP 20 .ti +4 INTEGER BWU, BWL, IB, INFO, JA, LAF, LWORK, N, NRHS .TP 20 .ti +4 INTEGER DESCA( * ), DESCB( * ), IPIV(*) .TP 20 .ti +4 DOUBLE PRECISION A( * ), AF( * ), B( * ), WORK( * ) .SH PURPOSE PDGBTRS solves a system of linear equations or .br A(1:N, JA:JA+N-1)' * X = B(IB:IB+N-1, 1:NRHS) .br where A(1:N, JA:JA+N-1) is the matrix used to produce the factors stored in A(1:N,JA:JA+N-1) and AF by PDGBTRF. .br A(1:N, JA:JA+N-1) is an N-by-N real .br banded distributed .br matrix with bandwidth BWL, BWU. .br Routine PDGBTRF MUST be called first. .br scalapack-doc-1.5/man/manl/pdgebd2.l0100644000056400000620000002231506335610626016735 0ustar pfrauenfstaff.TH PDGEBD2 l "12 May 1997" "LAPACK version 1.5" "LAPACK auxiliary routine (version 1.5)" .SH NAME PDGEBD2 - reduce a real general M-by-N distributed matrix sub( A ) = A(IA:IA+M-1,JA:JA+N-1) to upper or lower bidiagonal form B by an orthogonal transformation .SH SYNOPSIS .TP 20 SUBROUTINE PDGEBD2( M, N, A, IA, JA, DESCA, D, E, TAUQ, TAUP, WORK, LWORK, INFO ) .TP 20 .ti +4 INTEGER IA, INFO, JA, LWORK, M, N .TP 20 .ti +4 INTEGER DESCA( * ) .TP 20 .ti +4 DOUBLE PRECISION A( * ), D( * ), E( * ), TAUP( * ), TAUQ( * ), WORK( * ) .SH PURPOSE PDGEBD2 reduces a real general M-by-N distributed matrix sub( A ) = A(IA:IA+M-1,JA:JA+N-1) to upper or lower bidiagonal form B by an orthogonal transformation: Q' * sub( A ) * P = B. If M >= N, B is upper bidiagonal; if M < N, B is lower bidiagonal. Notes .br ===== .br Each global data object is described by an associated description vector. 
This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. 
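LOCr and LOCc above are exactly what the ScaLAPACK tool function NUMROC returns. The following is an illustrative Python transcription of its block-cyclic counting logic (a reading aid, not the Fortran source):

```python
def numroc(n, nb, iproc, isrc, nprocs):
    """Number of rows (or columns) of an N-wide global dimension, split into
    NB-sized blocks dealt cyclically over NPROCS processes starting at
    process ISRC, that land on process IPROC -- cf. ScaLAPACK's NUMROC."""
    mydist = (nprocs + iproc - isrc) % nprocs  # my block-distance from ISRC
    nblocks = n // nb                          # number of complete blocks
    num = (nblocks // nprocs) * nb             # whole rounds of the deal
    extra = nblocks % nprocs                   # leftover complete blocks
    if mydist < extra:
        num += nb                              # I get one extra full block
    elif mydist == extra:
        num += n % nb                          # I get the trailing partial block
    return num
```

For M = 10, MB_A = 3 and NPROW = 2 this yields 6 and 4 local rows on the two process rows, both within the stated upper bound ceil(ceil(10/3)/2)*3 = 6, and the local counts always sum back to the global dimension.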
.br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 M (global input) INTEGER The number of rows to be operated on, i.e. the number of rows of the distributed submatrix sub( A ). M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on, i.e. the number of columns of the distributed submatrix sub( A ). N >= 0. .TP 8 A (local input/local output) DOUBLE PRECISION pointer into the local memory to an array of dimension (LLD_A,LOCc(JA+N-1)). On entry, this array contains the local pieces of the general distributed matrix sub( A ). On exit, if M >= N, the diagonal and the first superdiagonal of sub( A ) are overwritten with the upper bidiagonal matrix B; the elements below the diagonal, with the array TAUQ, represent the orthogonal matrix Q as a product of elementary reflectors, and the elements above the first superdiagonal, with the array TAUP, represent the orthogonal matrix P as a product of elementary reflectors. If M < N, the diagonal and the first subdiagonal are overwritten with the lower bidiagonal matrix B; the elements below the first subdiagonal, with the array TAUQ, represent the orthogonal matrix Q as a product of elementary reflectors, and the elements above the diagonal, with the array TAUP, represent the orthogonal matrix P as a product of elementary reflectors. See Further Details. IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. 
The array descriptor for the distributed matrix A. .TP 8 D (local output) DOUBLE PRECISION array, dimension LOCc(JA+MIN(M,N)-1) if M >= N; LOCr(IA+MIN(M,N)-1) otherwise. The distributed diagonal elements of the bidiagonal matrix B: D(i) = A(i,i). D is tied to the distributed matrix A. .TP 8 E (local output) DOUBLE PRECISION array, dimension LOCr(IA+MIN(M,N)-1) if M >= N; LOCc(JA+MIN(M,N)-2) otherwise. The distributed off-diagonal elements of the bidiagonal distributed matrix B: if m >= n, E(i) = A(i,i+1) for i = 1,2,...,n-1; if m < n, E(i) = A(i+1,i) for i = 1,2,...,m-1. E is tied to the distributed matrix A. .TP 8 TAUQ (local output) DOUBLE PRECISION array dimension LOCc(JA+MIN(M,N)-1). The scalar factors of the elementary reflectors which represent the orthogonal matrix Q. TAUQ is tied to the distributed matrix A. See Further Details. TAUP (local output) DOUBLE PRECISION array, dimension LOCr(IA+MIN(M,N)-1). The scalar factors of the elementary reflectors which represent the orthogonal matrix P. TAUP is tied to the distributed matrix A. See Further Details. WORK (local workspace/local output) DOUBLE PRECISION array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. LWORK is local input and must be at least LWORK >= MAX( MpA0, NqA0 ) where NB = MB_A = NB_A, IROFFA = MOD( IA-1, NB ) IAROW = INDXG2P( IA, NB, MYROW, RSRC_A, NPROW ), IACOL = INDXG2P( JA, NB, MYCOL, CSRC_A, NPCOL ), MpA0 = NUMROC( M+IROFFA, NB, MYROW, IAROW, NPROW ), NqA0 = NUMROC( N+IROFFA, NB, MYCOL, IACOL, NPCOL ). INDXG2P and NUMROC are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. 
Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 INFO (local output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. .SH FURTHER DETAILS The matrices Q and P are represented as products of elementary reflectors: .br If m >= n, .br Q = H(1) H(2) . . . H(n) and P = G(1) G(2) . . . G(n-1) Each H(i) and G(i) has the form: .br H(i) = I - tauq * v * v' and G(i) = I - taup * u * u' where tauq and taup are real scalars, and v and u are real vectors; v(1:i-1) = 0, v(i) = 1, and v(i+1:m) is stored on exit in A(ia+i:ia+m-1,ja+i-1); .br u(1:i) = 0, u(i+1) = 1, and u(i+2:n) is stored on exit in A(ia+i-1,ja+i+1:ja+n-1); .br tauq is stored in TAUQ(ja+i-1) and taup in TAUP(ia+i-1). .br If m < n, .br Q = H(1) H(2) . . . H(m-1) and P = G(1) G(2) . . . G(m) Each H(i) and G(i) has the form: .br H(i) = I - tauq * v * v' and G(i) = I - taup * u * u' where tauq and taup are real scalars, and v and u are real vectors; v(1:i) = 0, v(i+1) = 1, and v(i+2:m) is stored on exit in A(ia+i+1:ia+m-1,ja+i-1); .br u(1:i-1) = 0, u(i) = 1, and u(i+1:n) is stored on exit in A(ia+i-1,ja+i:ja+n-1); .br tauq is stored in TAUQ(ja+i-1) and taup in TAUP(ia+i-1). .br The contents of sub( A ) on exit are illustrated by the following examples: .br m = 6 and n = 5 (m > n): m = 5 and n = 6 (m < n): ( d e u1 u1 u1 ) ( d u1 u1 u1 u1 u1 ) ( v1 d e u2 u2 ) ( e d u2 u2 u2 u2 ) ( v1 v2 d e u3 ) ( v1 e d u3 u3 u3 ) ( v1 v2 v3 d e ) ( v1 v2 e d u4 u4 ) ( v1 v2 v3 v4 d ) ( v1 v2 v3 e d u5 ) ( v1 v2 v3 v4 v5 ) .br where d and e denote diagonal and off-diagonal elements of B, vi denotes an element of the vector defining H(i), and ui an element of the vector defining G(i). 
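The elementary reflectors H(i) = I - tauq * v * v' described under FURTHER DETAILS can be generated as in LAPACK's real Householder construction. A small illustrative Python sketch that builds v (with v(1) = 1, as stored in sub( A )) and tau so that H x has zeros below its first entry (real arithmetic only; the sign choice avoids cancellation):

```python
import math

def householder(x):
    """Build an elementary reflector H = I - tau * v v^T with v[0] = 1 such
    that H x = (beta, 0, ..., 0) -- the representation PDGEBRD/PDGEBD2 store
    in A, TAUQ and TAUP.  Assumes x is not already a multiple of e1."""
    norm = math.sqrt(sum(t * t for t in x))
    beta = -math.copysign(norm, x[0])        # sign opposite x[0]: no cancellation
    v = [t / (x[0] - beta) for t in x]       # scale so the first entry is 1
    v[0] = 1.0
    tau = (beta - x[0]) / beta
    return v, tau, beta
```

For x = (3, 4): beta = -5, and applying H x = x - tau * v * (v . x) gives (-5, 0), with the below-diagonal part of v being what the routine writes back into sub( A ).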
.br Alignment requirements .br ====================== .br The distributed submatrix sub( A ) must verify some alignment proper- ties, namely the following expressions should be true: .br ( MB_A.EQ.NB_A .AND. IROFFA.EQ.ICOFFA ) .br scalapack-doc-1.5/man/manl/pdgebrd.l0100644000056400000620000002233506335610626017037 0ustar pfrauenfstaff.TH PDGEBRD l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PDGEBRD - reduce a real general M-by-N distributed matrix sub( A ) = A(IA:IA+M-1,JA:JA+N-1) to upper or lower bidiagonal form B by an orthogonal transformation .SH SYNOPSIS .TP 20 SUBROUTINE PDGEBRD( M, N, A, IA, JA, DESCA, D, E, TAUQ, TAUP, WORK, LWORK, INFO ) .TP 20 .ti +4 INTEGER IA, INFO, JA, LWORK, M, N .TP 20 .ti +4 INTEGER DESCA( * ) .TP 20 .ti +4 DOUBLE PRECISION A( * ), D( * ), E( * ), TAUP( * ), TAUQ( * ), WORK( * ) .SH PURPOSE PDGEBRD reduces a real general M-by-N distributed matrix sub( A ) = A(IA:IA+M-1,JA:JA+N-1) to upper or lower bidiagonal form B by an orthogonal transformation: Q' * sub( A ) * P = B. If M >= N, B is upper bidiagonal; if M < N, B is lower bidiagonal. Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. 
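All of these routines share the LWORK = -1 workspace-query convention described above: call once with LWORK = -1 to obtain the minimal/optimal size in WORK(1), allocate, then call again. The calling pattern, mocked in Python (pdgebrd_mock and its size formula are invented placeholders for illustration, not the real PDGEBRD interface or workspace requirement):

```python
def pdgebrd_mock(work, lwork, m=100, n=80, nb=8):
    """Stand-in for a ScaLAPACK routine honouring the LWORK = -1 workspace
    query; the size formula below is a made-up placeholder."""
    lwork_min = nb * (m + n + 1) + n     # placeholder, NOT PDGEBRD's formula
    if lwork == -1:                      # query: report size in WORK(1), do nothing
        work[0] = lwork_min
        return 0                         # INFO = 0
    if lwork < lwork_min:
        return -12                       # INFO = -i: LWORK (argument 12) too small
    # ... the real computation would go here ...
    return 0

# the two-call pattern:
work = [0.0]
pdgebrd_mock(work, -1)                   # 1) workspace query
work = [0.0] * int(work[0])              # 2) allocate the reported optimum
info = pdgebrd_mock(work, len(work))     # 3) the real call
```

This pattern is why WORK is documented as "local workspace/local output": even a failed or queried call writes the required size into WORK(1).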
.br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 M (global input) INTEGER The number of rows to be operated on, i.e. the number of rows of the distributed submatrix sub( A ). M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on, i.e. the number of columns of the distributed submatrix sub( A ). N >= 0. .TP 8 A (local input/local output) DOUBLE PRECISION pointer into the local memory to an array of dimension (LLD_A,LOCc(JA+N-1)). On entry, this array contains the local pieces of the general distributed matrix sub( A ). 
On exit, if M >= N, the diagonal and the first superdiagonal of sub( A ) are overwritten with the upper bidiagonal matrix B; the elements below the diagonal, with the array TAUQ, represent the orthogonal matrix Q as a product of elementary reflectors, and the elements above the first superdiagonal, with the array TAUP, represent the orthogonal matrix P as a product of elementary reflectors. If M < N, the diagonal and the first subdiagonal are overwritten with the lower bidiagonal matrix B; the elements below the first subdiagonal, with the array TAUQ, represent the orthogonal matrix Q as a product of elementary reflectors, and the elements above the diagonal, with the array TAUP, represent the orthogonal matrix P as a product of elementary reflectors. See Further Details. IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 D (local output) DOUBLE PRECISION array, dimension LOCc(JA+MIN(M,N)-1) if M >= N; LOCr(IA+MIN(M,N)-1) otherwise. The distributed diagonal elements of the bidiagonal matrix B: D(i) = A(i,i). D is tied to the distributed matrix A. .TP 8 E (local output) DOUBLE PRECISION array, dimension LOCr(IA+MIN(M,N)-1) if M >= N; LOCc(JA+MIN(M,N)-2) otherwise. The distributed off-diagonal elements of the bidiagonal distributed matrix B: if m >= n, E(i) = A(i,i+1) for i = 1,2,...,n-1; if m < n, E(i) = A(i+1,i) for i = 1,2,...,m-1. E is tied to the distributed matrix A. .TP 8 TAUQ (local output) DOUBLE PRECISION array dimension LOCc(JA+MIN(M,N)-1). The scalar factors of the elementary reflectors which represent the orthogonal matrix Q. TAUQ is tied to the distributed matrix A. See Further Details. TAUP (local output) DOUBLE PRECISION array, dimension LOCr(IA+MIN(M,N)-1). 
The scalar factors of the elementary reflectors which represent the orthogonal matrix P. TAUP is tied to the distributed matrix A. See Further Details. WORK (local workspace/local output) DOUBLE PRECISION array, dimension (LWORK) On exit, WORK( 1 ) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. LWORK is local input and must be at least LWORK >= NB*( MpA0 + NqA0 + 1 ) + NqA0 where NB = MB_A = NB_A, IROFFA = MOD( IA-1, NB ), ICOFFA = MOD( JA-1, NB ), IAROW = INDXG2P( IA, NB, MYROW, RSRC_A, NPROW ), IACOL = INDXG2P( JA, NB, MYCOL, CSRC_A, NPCOL ), MpA0 = NUMROC( M+IROFFA, NB, MYROW, IAROW, NPROW ), NqA0 = NUMROC( N+ICOFFA, NB, MYCOL, IACOL, NPCOL ). INDXG2P and NUMROC are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 INFO (global output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. .SH FURTHER DETAILS The matrices Q and P are represented as products of elementary reflectors: .br If m >= n, .br Q = H(1) H(2) . . . H(n) and P = G(1) G(2) . . . G(n-1) Each H(i) and G(i) has the form: .br H(i) = I - tauq * v * v' and G(i) = I - taup * u * u' where tauq and taup are real scalars, and v and u are real vectors; v(1:i-1) = 0, v(i) = 1, and v(i+1:m) is stored on exit in A(ia+i:ia+m-1,ja+i-1); .br u(1:i) = 0, u(i+1) = 1, and u(i+2:n) is stored on exit in A(ia+i-1,ja+i+1:ja+n-1); .br tauq is stored in TAUQ(ja+i-1) and taup in TAUP(ia+i-1). .br If m < n, .br Q = H(1) H(2) . . . 
H(m-1) and P = G(1) G(2) . . . G(m) Each H(i) and G(i) has the form: .br H(i) = I - tauq * v * v' and G(i) = I - taup * u * u' where tauq and taup are real scalars, and v and u are real vectors; v(1:i) = 0, v(i+1) = 1, and v(i+2:m) is stored on exit in A(ia+i+1:ia+m-1,ja+i-1); .br u(1:i-1) = 0, u(i) = 1, and u(i+1:n) is stored on exit in A(ia+i-1,ja+i:ja+n-1); .br tauq is stored in TAUQ(ja+i-1) and taup in TAUP(ia+i-1). .br The contents of sub( A ) on exit are illustrated by the following examples: .br m = 6 and n = 5 (m > n): m = 5 and n = 6 (m < n): ( d e u1 u1 u1 ) ( d u1 u1 u1 u1 u1 ) ( v1 d e u2 u2 ) ( e d u2 u2 u2 u2 ) ( v1 v2 d e u3 ) ( v1 e d u3 u3 u3 ) ( v1 v2 v3 d e ) ( v1 v2 e d u4 u4 ) ( v1 v2 v3 v4 d ) ( v1 v2 v3 e d u5 ) ( v1 v2 v3 v4 v5 ) .br where d and e denote diagonal and off-diagonal elements of B, vi denotes an element of the vector defining H(i), and ui an element of the vector defining G(i). .br Alignment requirements .br ====================== .br The distributed submatrix sub( A ) must verify some alignment proper- ties, namely the following expressions should be true: .br ( MB_A.EQ.NB_A .AND. 
IROFFA.EQ.ICOFFA ) .br scalapack-doc-1.5/man/manl/pdgecon.l0100644000056400000620000001554706335610626017056 0ustar pfrauenfstaff.TH PDGECON l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PDGECON - estimate the reciprocal of the condition number of a general distributed real matrix A(IA:IA+N-1,JA:JA+N-1), in either the 1-norm or the infinity-norm, using the LU factorization computed by PDGETRF .SH SYNOPSIS .TP 20 SUBROUTINE PDGECON( NORM, N, A, IA, JA, DESCA, ANORM, RCOND, WORK, LWORK, IWORK, LIWORK, INFO ) .TP 20 .ti +4 CHARACTER NORM .TP 20 .ti +4 INTEGER IA, INFO, JA, LIWORK, LWORK, N .TP 20 .ti +4 DOUBLE PRECISION ANORM, RCOND .TP 20 .ti +4 INTEGER DESCA( * ), IWORK( * ) .TP 20 .ti +4 DOUBLE PRECISION A( * ), WORK( * ) .SH PURPOSE PDGECON estimates the reciprocal of the condition number of a general distributed real matrix A(IA:IA+N-1,JA:JA+N-1), in either the 1-norm or the infinity-norm, using the LU factorization computed by PDGETRF. An estimate is obtained for norm(inv(A(IA:IA+N-1,JA:JA+N-1))), and the reciprocal of the condition number is computed as .br RCOND = 1 / ( norm( A(IA:IA+N-1,JA:JA+N-1) ) * norm( inv(A(IA:IA+N-1,JA:JA+N-1)) ) ). Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. 
The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 NORM (global input) CHARACTER Specifies whether the 1-norm condition number or the infinity-norm condition number is required: .br = '1' or 'O': 1-norm .br = 'I': Infinity-norm .TP 8 N (global input) INTEGER .br The order of the distributed matrix A(IA:IA+N-1,JA:JA+N-1). N >= 0. .TP 8 A (local input) DOUBLE PRECISION pointer into the local memory to an array of dimension ( LLD_A, LOCc(JA+N-1) ). 
On entry, this array contains the local pieces of the factors L and U from the factorization A(IA:IA+N-1,JA:JA+N-1) = P*L*U; the unit diagonal elements of L are not stored. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 ANORM (global input) DOUBLE PRECISION If NORM = '1' or 'O', the 1-norm of the original distributed matrix A(IA:IA+N-1,JA:JA+N-1). If NORM = 'I', the infinity-norm of the original distributed matrix A(IA:IA+N-1,JA:JA+N-1). .TP 8 RCOND (global output) DOUBLE PRECISION The reciprocal of the condition number of the distributed matrix A(IA:IA+N-1,JA:JA+N-1), computed as .br RCOND = 1 / ( norm( A(IA:IA+N-1,JA:JA+N-1) ) * .br norm( inv(A(IA:IA+N-1,JA:JA+N-1)) ) ). .TP 8 WORK (local workspace/local output) DOUBLE PRECISION array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. LWORK is local input and must be at least LWORK >= 2*LOCr(N+MOD(IA-1,MB_A)) + 2*LOCc(N+MOD(JA-1,NB_A)) + MAX( 2, MAX( NB_A*MAX( 1, CEIL(NPROW-1,NPCOL) ), LOCc(N+MOD(JA-1,NB_A)) + NB_A*MAX( 1, CEIL(NPCOL-1,NPROW) ) ). LOCr and LOCc values can be computed using the ScaLAPACK tool function NUMROC; NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 IWORK (local workspace/local output) INTEGER array, dimension (LIWORK) On exit, IWORK(1) returns the minimal and optimal LIWORK. 
.TP 8
LIWORK (local or global input) INTEGER
The dimension of the array IWORK. LIWORK is local input and must be at least LIWORK >= LOCr(N+MOD(IA-1,MB_A)).
If LIWORK = -1, then LIWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA.
.TP 8
INFO (global output) INTEGER
= 0: successful exit
.br
< 0: If the i-th argument is an array and the j-th entry had an illegal value, then INFO = -(i*100+j); if the i-th argument is a scalar and had an illegal value, then INFO = -i.
scalapack-doc-1.5/man/manl/pdgeequ.l
.TH PDGEEQU l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)"
.SH NAME
PDGEEQU - compute row and column scalings intended to equilibrate an M-by-N distributed matrix sub( A ) = A(IA:IA+M-1,JA:JA+N-1) and reduce its condition number
.SH SYNOPSIS
.TP 20
SUBROUTINE PDGEEQU( M, N, A, IA, JA, DESCA, R, C, ROWCND, COLCND, AMAX, INFO )
.TP 20
.ti +4
INTEGER IA, INFO, JA, M, N
.TP 20
.ti +4
DOUBLE PRECISION AMAX, COLCND, ROWCND
.TP 20
.ti +4
INTEGER DESCA( * )
.TP 20
.ti +4
DOUBLE PRECISION A( * ), C( * ), R( * )
.SH PURPOSE
PDGEEQU computes row and column scalings intended to equilibrate an M-by-N distributed matrix sub( A ) = A(IA:IA+M-1,JA:JA+N-1) and reduce its condition number. R returns the row scale factors and C the column scale factors, chosen to try to make the largest entry in each row and column of the distributed matrix B with elements B(i,j) = R(i) * A(i,j) * C(j) have absolute value 1.
.br
R(i) and C(j) are restricted to be between SMLNUM = smallest safe number and BIGNUM = largest safe number. Use of these scaling factors is not guaranteed to reduce the condition number of sub( A ), but it works well in practice.
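The scaling strategy described above can be illustrated serially. The following Python sketch (a simplified analogue of the serial LAPACK equilibration, not ScaLAPACK code; it ignores the data distribution and the SMLNUM/BIGNUM clamping) computes row scales first, then column scales on the row-scaled matrix:

```python
# Serial sketch of equilibration: choose R(i) and C(j) so that
# B(i,j) = R(i)*A(i,j)*C(j) has every row and column maximum close to 1.
A = [[1.0, 2.0e6],
     [3.0, 4.0e6]]
M, N = 2, 2

# Row scale factors: reciprocal of the largest magnitude in each row.
R = [1.0 / max(abs(a) for a in row) for row in A]
# Column scale factors, computed on the row-scaled matrix.
C = [1.0 / max(abs(A[i][j]) * R[i] for i in range(M)) for j in range(N)]

B = [[R[i] * A[i][j] * C[j] for j in range(N)] for i in range(M)]
rowcnd = min(R) / max(R)                       # analogue of ROWCND
colcnd = min(C) / max(C)                       # analogue of COLCND
amax = max(abs(a) for row in A for a in row)   # analogue of AMAX
```

For this matrix the row scales differ by only a factor of 2 (rowcnd = 0.5 >= 0.1), consistent with the ROWCND guidance above that scaling is then not worthwhile, even though AMAX itself is large.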
.br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. 
.br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 M (global input) INTEGER The number of rows to be operated on i.e the number of rows of the distributed submatrix sub( A ). M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on i.e the number of columns of the distributed submatrix sub( A ). N >= 0. .TP 8 A (local input) DOUBLE PRECISION pointer into the local memory to an array of dimension ( LLD_A, LOCc(JA+N-1) ), the local pieces of the M-by-N distributed matrix whose equilibration factors are to be computed. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 R (local output) DOUBLE PRECISION array, dimension LOCr(M_A) If INFO = 0 or INFO > IA+M-1, R(IA:IA+M-1) contains the row scale factors for sub( A ). R is aligned with the distributed matrix A, and replicated across every process column. R is tied to the distributed matrix A. .TP 8 C (local output) DOUBLE PRECISION array, dimension LOCc(N_A) If INFO = 0, C(JA:JA+N-1) contains the column scale factors for sub( A ). C is aligned with the distributed matrix A, and replicated down every process row. C is tied to the distri- buted matrix A. .TP 8 ROWCND (global output) DOUBLE PRECISION If INFO = 0 or INFO > IA+M-1, ROWCND contains the ratio of the smallest R(i) to the largest R(i) (IA <= i <= IA+M-1). 
If ROWCND >= 0.1 and AMAX is neither too large nor too small, it is not worth scaling by R(IA:IA+M-1).
.TP 8
COLCND (global output) DOUBLE PRECISION
If INFO = 0, COLCND contains the ratio of the smallest C(j) to the largest C(j) (JA <= j <= JA+N-1). If COLCND >= 0.1, it is not worth scaling by C(JA:JA+N-1).
.TP 8
AMAX (global output) DOUBLE PRECISION
Absolute value of the largest distributed matrix element. If AMAX is very close to overflow or very close to underflow, the matrix should be scaled.
.TP 8
INFO (global output) INTEGER
= 0: successful exit
.br
< 0: If the i-th argument is an array and the j-th entry had an illegal value, then INFO = -(i*100+j); if the i-th argument is a scalar and had an illegal value, then INFO = -i.
.br
> 0: If INFO = i, and i is
.br
<= M: the i-th row of the distributed matrix sub( A ) is exactly zero,
.br
> M: the (i-M)-th column of the distributed matrix sub( A ) is exactly zero.
scalapack-doc-1.5/man/manl/pdgehd2.l
.TH PDGEHD2 l "12 May 1997" "LAPACK version 1.5" "LAPACK auxiliary routine (version 1.5)"
.SH NAME
PDGEHD2 - reduce a real general distributed matrix sub( A ) to upper Hessenberg form H by an orthogonal similarity transformation
.SH SYNOPSIS
.TP 20
SUBROUTINE PDGEHD2( N, ILO, IHI, A, IA, JA, DESCA, TAU, WORK, LWORK, INFO )
.TP 20
.ti +4
INTEGER IA, IHI, ILO, INFO, JA, LWORK, N
.TP 20
.ti +4
INTEGER DESCA( * )
.TP 20
.ti +4
DOUBLE PRECISION A( * ), TAU( * ), WORK( * )
.SH PURPOSE
PDGEHD2 reduces a real general distributed matrix sub( A ) to upper Hessenberg form H by an orthogonal similarity transformation: Q' * sub( A ) * Q = H, where sub( A ) = A(IA:IA+N-1,JA:JA+N-1).
.br
Notes
.br
=====
.br
Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location.
.br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). 
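The NUMROC mapping just described can be sketched in serial Python. This is a hypothetical reimplementation for illustration (ScaLAPACK's NUMROC is a Fortran tool function); argument names follow the man page:

```python
import math

def numroc(n, nb, iproc, isrc, nprocs):
    """Number of elements of an n-element dimension, distributed block-
    cyclically with block size nb over nprocs processes (first block on
    process isrc), that are owned by process iproc."""
    mydist = (nprocs + iproc - isrc) % nprocs  # distance from the source process
    nblocks = n // nb                          # number of complete blocks
    num = (nblocks // nprocs) * nb             # whole "rounds" of blocks
    extra = nblocks % nprocs
    if mydist < extra:
        num += nb                              # one extra complete block
    elif mydist == extra:
        num += n % nb                          # the trailing partial block
    return num

# Every element is owned by exactly one process, and the upper bound
# LOCr(M) <= ceil(ceil(M/MB_A)/NPROW)*MB_A quoted in the text holds:
M, MB, NPROW = 13, 4, 3
counts = [numroc(M, MB, p, 0, NPROW) for p in range(NPROW)]
assert sum(counts) == M
assert all(c <= math.ceil(math.ceil(M / MB) / NPROW) * MB for c in counts)
```

For M = 13, MB_A = 4 and NPROW = 3, process 0 owns blocks 0 and 3 (the partial one), so LOCr is 5 there and 4 on the other two process rows.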
An upper bound for these quantities may be computed by:
.br
LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A
.br
LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A
.br
.SH ARGUMENTS
.TP 8
N (global input) INTEGER
The number of rows and columns to be operated on, i.e. the order of the distributed submatrix sub( A ). N >= 0.
.TP 8
ILO (global input) INTEGER
IHI (global input) INTEGER
It is assumed that sub( A ) is already upper triangular in rows IA:IA+ILO-2 and IA+IHI:IA+N-1 and columns JA:JA+ILO-2 and JA+IHI:JA+N-1. See Further Details. If N > 0,
.TP 8
A (local input/local output) DOUBLE PRECISION pointer into the local memory to an array of dimension (LLD_A,LOCc(JA+N-1)). On entry, this array contains the local pieces of the N-by-N general distributed matrix sub( A ) to be reduced. On exit, the upper triangle and the first subdiagonal of sub( A ) are overwritten with the upper Hessenberg matrix H, and the elements below the first subdiagonal, with the array TAU, represent the orthogonal matrix Q as a product of elementary reflectors. See Further Details.
.TP 8
IA (global input) INTEGER
The row index in the global array A indicating the first row of sub( A ).
.TP 8
JA (global input) INTEGER
The column index in the global array A indicating the first column of sub( A ).
.TP 8
DESCA (global and local input) INTEGER array of dimension DLEN_.
The array descriptor for the distributed matrix A.
.TP 8
TAU (local output) DOUBLE PRECISION array, dimension LOCc(JA+N-2)
The scalar factors of the elementary reflectors (see Further Details). Elements JA:JA+ILO-2 and JA+IHI:JA+N-2 of TAU are set to zero. TAU is tied to the distributed matrix A.
.TP 8
WORK (local workspace/local output) DOUBLE PRECISION array, dimension (LWORK)
On exit, WORK( 1 ) returns the minimal and optimal LWORK.
.TP 8
LWORK (local or global input) INTEGER
The dimension of the array WORK.
LWORK is local input and must be at least LWORK >= NB + MAX( NpA0, NB ) where NB = MB_A = NB_A, IROFFA = MOD( IA-1, NB ), IAROW = INDXG2P( IA, NB, MYROW, RSRC_A, NPROW ), NpA0 = NUMROC( IHI+IROFFA, NB, MYROW, IAROW, NPROW ), INDXG2P and NUMROC are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 INFO (local output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. .SH FURTHER DETAILS The matrix Q is represented as a product of (ihi-ilo) elementary reflectors .br Q = H(ilo) H(ilo+1) . . . H(ihi-1). .br Each H(i) has the form .br H(i) = I - tau * v * v' .br where tau is a real scalar, and v is a real vector with .br v(1:i) = 0, v(i+1) = 1 and v(ihi+1:n) = 0; v(i+2:ihi) is stored on exit in A(ia+ilo+i:ia+ihi-1,ja+ilo+i-2), and tau in TAU(ja+ilo+i-2). The contents of A(IA:IA+N-1,JA:JA+N-1) are illustrated by the follo- wing example, with n = 7, ilo = 2 and ihi = 6: .br on entry on exit .br ( a a a a a a a ) ( a a h h h h a ) ( a a a a a a ) ( a h h h h a ) ( a a a a a a ) ( h h h h h h ) ( a a a a a a ) ( v2 h h h h h ) ( a a a a a a ) ( v2 v3 h h h h ) ( a a a a a a ) ( v2 v3 v4 h h h ) ( a ) ( a ) where a denotes an element of the original matrix sub( A ), h denotes a modified element of the upper Hessenberg matrix H, and vi denotes an element of the vector defining H(ja+ilo+i-2). 
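The reflector pattern illustrated above (H zero below the first subdiagonal, v's stored where zeros were introduced) can be reproduced with a dense serial sketch. This is a textbook unblocked Householder reduction in Python/NumPy, not PDGEHD2 itself, with ilo = 1 and ihi = n:

```python
import numpy as np

def hessenberg_reduce(A):
    """Serial, unblocked Householder reduction to upper Hessenberg form:
    Q' * A * Q = H, with H zero below the first subdiagonal."""
    A = A.astype(float).copy()
    n = A.shape[0]
    Q = np.eye(n)
    for i in range(n - 2):
        x = A[i + 1:, i]                 # column to annihilate below the subdiagonal
        v = x.copy()
        v[0] += np.sign(x[0] if x[0] != 0 else 1.0) * np.linalg.norm(x)
        if np.linalg.norm(v) == 0:
            continue                     # column already reduced
        v /= np.linalg.norm(v)
        P = np.eye(n)
        P[i + 1:, i + 1:] -= 2.0 * np.outer(v, v)  # H(i) = I - tau * v * v'
        A = P @ A @ P                    # orthogonal similarity transformation
        Q = Q @ P                        # accumulate Q = H(1) H(2) ... H(n-2)
    return Q, A

rng = np.random.default_rng(0)
M = rng.standard_normal((6, 6))
Q, H = hessenberg_reduce(M)
assert np.allclose(np.tril(H, -2), 0.0, atol=1e-10)  # Hessenberg structure
assert np.allclose(Q @ H @ Q.T, M, atol=1e-10)       # similarity: Q H Q' = A
```

As in the man-page example, each step leaves the already-processed columns untouched and stores its v in the zeroed-out entries (here the sketch keeps A and Q separate for clarity).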
.br
Alignment requirements
.br
======================
.br
The distributed submatrix sub( A ) must verify some alignment properties, namely the following expression should be true:
.br
( MB_A.EQ.NB_A .AND. IROFFA.EQ.ICOFFA )
.br
scalapack-doc-1.5/man/manl/pdgehrd.l
.TH PDGEHRD l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)"
.SH NAME
PDGEHRD - reduce a real general distributed matrix sub( A ) to upper Hessenberg form H by an orthogonal similarity transformation
.SH SYNOPSIS
.TP 20
SUBROUTINE PDGEHRD( N, ILO, IHI, A, IA, JA, DESCA, TAU, WORK, LWORK, INFO )
.TP 20
.ti +4
INTEGER IA, IHI, ILO, INFO, JA, LWORK, N
.TP 20
.ti +4
INTEGER DESCA( * )
.TP 20
.ti +4
DOUBLE PRECISION A( * ), TAU( * ), WORK( * )
.SH PURPOSE
PDGEHRD reduces a real general distributed matrix sub( A ) to upper Hessenberg form H by an orthogonal similarity transformation: Q' * sub( A ) * Q = H, where sub( A ) = A(IA:IA+N-1,JA:JA+N-1).
.br
Notes
.br
=====
.br
Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location.
.br
Let A be a generic term for any 2D block cyclically distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array".
.br
NOTATION        STORED IN       EXPLANATION
.br
--------------- --------------- --------------------------------------
DTYPE_A(global) DESCA( DTYPE_ ) The descriptor type. In this case, DTYPE_A = 1.
.br
CTXT_A (global) DESCA( CTXT_ )  The BLACS context handle, indicating the BLACS process grid A is distributed over. The context itself is global, but the handle (the integer value) may vary.
.br
M_A (global)    DESCA( M_ )     The number of rows in the global array A.
.br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 N (global input) INTEGER The number of rows and columns to be operated on, i.e. the order of the distributed submatrix sub( A ). N >= 0. .TP 8 ILO (global input) INTEGER IHI (global input) INTEGER It is assumed that sub( A ) is already upper triangular in rows IA:IA+ILO-2 and IA+IHI:IA+N-1 and columns JA:JA+ILO-2 and JA+IHI:JA+N-1. See Further Details. If N > 0, .TP 8 A (local input/local output) DOUBLE PRECISION pointer into the local memory to an array of dimension (LLD_A,LOCc(JA+N-1)). On entry, this array contains the local pieces of the N-by-N general distributed matrix sub( A ) to be reduced. 
On exit, the upper triangle and the first subdiagonal of sub( A ) are overwritten with the upper Hessenberg matrix H, and the ele- ments below the first subdiagonal, with the array TAU, repre- sent the orthogonal matrix Q as a product of elementary reflectors. See Further Details. IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 TAU (local output) DOUBLE PRECISION array, dimension LOCc(JA+N-2) The scalar factors of the elementary reflectors (see Further Details). Elements JA:JA+ILO-2 and JA+IHI:JA+N-2 of TAU are set to zero. TAU is tied to the distributed matrix A. .TP 8 WORK (local workspace/local output) DOUBLE PRECISION array, dimension (LWORK) On exit, WORK( 1 ) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. LWORK is local input and must be at least LWORK >= NB*NB + NB*MAX( IHIP+1, IHLP+INLQ ) where NB = MB_A = NB_A, IROFFA = MOD( IA-1, NB ), ICOFFA = MOD( JA-1, NB ), IOFF = MOD( IA+ILO-2, NB ), IAROW = INDXG2P( IA, NB, MYROW, RSRC_A, NPROW ), IHIP = NUMROC( IHI+IROFFA, NB, MYROW, IAROW, NPROW ), ILROW = INDXG2P( IA+ILO-1, NB, MYROW, RSRC_A, NPROW ), IHLP = NUMROC( IHI-ILO+IOFF+1, NB, MYROW, ILROW, NPROW ), ILCOL = INDXG2P( JA+ILO-1, NB, MYCOL, CSRC_A, NPCOL ), INLQ = NUMROC( N-ILO+IOFF+1, NB, MYCOL, ILCOL, NPCOL ), INDXG2P and NUMROC are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. 
Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 INFO (global output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. .SH FURTHER DETAILS The matrix Q is represented as a product of (ihi-ilo) elementary reflectors .br Q = H(ilo) H(ilo+1) . . . H(ihi-1). .br Each H(i) has the form .br H(i) = I - tau * v * v' .br where tau is a real scalar, and v is a real vector with .br v(1:I) = 0, v(I+1) = 1 and v(IHI+1:N) = 0; v(I+2:IHI) is stored on exit in A(IA+ILO+I:IA+IHI-1,JA+ILO+I-2), and tau in TAU(JA+ILO+I-2). The contents of A(IA:IA+N-1,JA:JA+N-1) are illustrated by the follow- ing example, with N = 7, ILO = 2 and IHI = 6: .br on entry on exit .br ( a a a a a a a ) ( a a h h h h a ) ( a a a a a a ) ( a h h h h a ) ( a a a a a a ) ( h h h h h h ) ( a a a a a a ) ( v2 h h h h h ) ( a a a a a a ) ( v2 v3 h h h h ) ( a a a a a a ) ( v2 v3 v4 h h h ) ( a ) ( a ) where a denotes an element of the original matrix sub( A ), H denotes a modified element of the upper Hessenberg matrix H, and vi denotes an element of the vector defining H(JA+ILO+I-2). .br Alignment requirements .br ====================== .br The distributed submatrix sub( A ) must verify some alignment proper- ties, namely the following expression should be true: .br ( MB_A.EQ.NB_A .AND. 
IROFFA.EQ.ICOFFA )
.br
scalapack-doc-1.5/man/manl/pdgelq2.l
.TH PDGELQ2 l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)"
.SH NAME
PDGELQ2 - compute an LQ factorization of a real distributed M-by-N matrix sub( A ) = A(IA:IA+M-1,JA:JA+N-1) = L * Q
.SH SYNOPSIS
.TP 20
SUBROUTINE PDGELQ2( M, N, A, IA, JA, DESCA, TAU, WORK, LWORK, INFO )
.TP 20
.ti +4
INTEGER IA, INFO, JA, LWORK, M, N
.TP 20
.ti +4
INTEGER DESCA( * )
.TP 20
.ti +4
DOUBLE PRECISION A( * ), TAU( * ), WORK( * )
.SH PURPOSE
PDGELQ2 computes an LQ factorization of a real distributed M-by-N matrix sub( A ) = A(IA:IA+M-1,JA:JA+N-1) = L * Q.
.br
Notes
.br
=====
.br
Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location.
.br
Let A be a generic term for any 2D block cyclically distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array".
.br
NOTATION        STORED IN       EXPLANATION
.br
--------------- --------------- --------------------------------------
DTYPE_A(global) DESCA( DTYPE_ ) The descriptor type. In this case, DTYPE_A = 1.
.br
CTXT_A (global) DESCA( CTXT_ )  The BLACS context handle, indicating the BLACS process grid A is distributed over. The context itself is global, but the handle (the integer value) may vary.
.br
M_A (global)    DESCA( M_ )     The number of rows in the global array A.
.br
N_A (global)    DESCA( N_ )     The number of columns in the global array A.
.br
MB_A (global)   DESCA( MB_ )    The blocking factor used to distribute the rows of the array.
.br
NB_A (global)   DESCA( NB_ )    The blocking factor used to distribute the columns of the array.
.br
RSRC_A (global) DESCA( RSRC_ )  The process row over which the first row of the array A is distributed.
CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 M (global input) INTEGER The number of rows to be operated on, i.e. the number of rows of the distributed submatrix sub( A ). M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on, i.e. the number of columns of the distributed submatrix sub( A ). N >= 0. .TP 8 A (local input/local output) DOUBLE PRECISION pointer into the local memory to an array of dimension (LLD_A, LOCc(JA+N-1)). On entry, the local pieces of the M-by-N distributed matrix sub( A ) which is to be factored. On exit, the elements on and below the diagonal of sub( A ) contain the M by min(M,N) lower trapezoidal matrix L (L is lower triangular if M <= N); the elements above the diagonal, with the array TAU, repre- sent the orthogonal matrix Q as a product of elementary reflectors (see Further Details). IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). 
.TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 TAU (local output) DOUBLE PRECISION, array, dimension LOCr(IA+MIN(M,N)-1). This array contains the scalar factors of the elementary reflectors. TAU is tied to the distributed matrix A. .TP 8 WORK (local workspace/local output) DOUBLE PRECISION array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. LWORK is local input and must be at least LWORK >= Nq0 + MAX( 1, Mp0 ), where IROFF = MOD( IA-1, MB_A ), ICOFF = MOD( JA-1, NB_A ), IAROW = INDXG2P( IA, MB_A, MYROW, RSRC_A, NPROW ), IACOL = INDXG2P( JA, NB_A, MYCOL, CSRC_A, NPCOL ), Mp0 = NUMROC( M+IROFF, MB_A, MYROW, IAROW, NPROW ), Nq0 = NUMROC( N+ICOFF, NB_A, MYCOL, IACOL, NPCOL ), and NUMROC, INDXG2P are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 INFO (local output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. .SH FURTHER DETAILS The matrix Q is represented as a product of elementary reflectors Q = H(ia+k-1) H(ia+k-2) . . . H(ia), where k = min(m,n). Each H(i) has the form .br H(i) = I - tau * v * v' .br where tau is a real scalar, and v is a real vector with v(1:i-1)=0 and v(i) = 1; v(i+1:n) is stored on exit in A(ia+i-1,ja+i:ja+n-1), and tau in TAU(ia+i-1). 
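The L * Q factorization and reflector structure described above can be checked serially with NumPy. This is an illustration only (NumPy has no LQ driver, so it goes through QR of the transpose; PDGELQ2 instead applies the elementary reflectors directly):

```python
import numpy as np

# A = L * Q with L lower trapezoidal and Q having orthonormal rows.
# If A' = Q0 * R0 (a QR factorization), then A = R0' * Q0',
# so L = R0' and Q = Q0'.
rng = np.random.default_rng(1)
A = rng.standard_normal((4, 6))          # M-by-N with M <= N, so L is lower triangular

Q0, R0 = np.linalg.qr(A.T)               # A' = Q0 R0  (Q0: 6x4, R0: 4x4)
L, Q = R0.T, Q0.T                        # A = L Q

assert np.allclose(L @ Q, A)             # factorization reproduces A
assert np.allclose(np.triu(L, 1), 0.0)   # L is lower triangular since M <= N
assert np.allclose(Q @ Q.T, np.eye(4))   # rows of Q are orthonormal
```

This mirrors the on-exit layout: the lower trapezoid holds L, while the part above the diagonal (together with TAU) encodes Q.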
.br
scalapack-doc-1.5/man/manl/pdgelqf.l
.TH PDGELQF l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)"
.SH NAME
PDGELQF - compute an LQ factorization of a real distributed M-by-N matrix sub( A ) = A(IA:IA+M-1,JA:JA+N-1) = L * Q
.SH SYNOPSIS
.TP 20
SUBROUTINE PDGELQF( M, N, A, IA, JA, DESCA, TAU, WORK, LWORK, INFO )
.TP 20
.ti +4
INTEGER IA, INFO, JA, LWORK, M, N
.TP 20
.ti +4
INTEGER DESCA( * )
.TP 20
.ti +4
DOUBLE PRECISION A( * ), TAU( * ), WORK( * )
.SH PURPOSE
PDGELQF computes an LQ factorization of a real distributed M-by-N matrix sub( A ) = A(IA:IA+M-1,JA:JA+N-1) = L * Q.
.br
Notes
.br
=====
.br
Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location.
.br
Let A be a generic term for any 2D block cyclically distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array".
.br
NOTATION        STORED IN       EXPLANATION
.br
--------------- --------------- --------------------------------------
DTYPE_A(global) DESCA( DTYPE_ ) The descriptor type. In this case, DTYPE_A = 1.
.br
CTXT_A (global) DESCA( CTXT_ )  The BLACS context handle, indicating the BLACS process grid A is distributed over. The context itself is global, but the handle (the integer value) may vary.
.br
M_A (global)    DESCA( M_ )     The number of rows in the global array A.
.br
N_A (global)    DESCA( N_ )     The number of columns in the global array A.
.br
MB_A (global)   DESCA( MB_ )    The blocking factor used to distribute the rows of the array.
.br
NB_A (global)   DESCA( NB_ )    The blocking factor used to distribute the columns of the array.
.br
RSRC_A (global) DESCA( RSRC_ )  The process row over which the first row of the array A is distributed.
CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 M (global input) INTEGER The number of rows to be operated on, i.e. the number of rows of the distributed submatrix sub( A ). M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on, i.e. the number of columns of the distributed submatrix sub( A ). N >= 0. .TP 8 A (local input/local output) DOUBLE PRECISION pointer into the local memory to an array of dimension (LLD_A, LOCc(JA+N-1)). On entry, the local pieces of the M-by-N distributed matrix sub( A ) which is to be factored. On exit, the elements on and below the diagonal of sub( A ) contain the M by min(M,N) lower trapezoidal matrix L (L is lower triangular if M <= N); the elements above the diagonal, with the array TAU, repre- sent the orthogonal matrix Q as a product of elementary reflectors (see Further Details). IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). 
.TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 TAU (local output) DOUBLE PRECISION, array, dimension LOCr(IA+MIN(M,N)-1). This array contains the scalar factors of the elementary reflectors. TAU is tied to the distributed matrix A. .TP 8 WORK (local workspace/local output) DOUBLE PRECISION array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. LWORK is local input and must be at least LWORK >= MB_A * ( Mp0 + Nq0 + MB_A ), where IROFF = MOD( IA-1, MB_A ), ICOFF = MOD( JA-1, NB_A ), IAROW = INDXG2P( IA, MB_A, MYROW, RSRC_A, NPROW ), IACOL = INDXG2P( JA, NB_A, MYCOL, CSRC_A, NPCOL ), Mp0 = NUMROC( M+IROFF, MB_A, MYROW, IAROW, NPROW ), Nq0 = NUMROC( N+ICOFF, NB_A, MYCOL, IACOL, NPCOL ), and NUMROC, INDXG2P are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 INFO (global output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. .SH FURTHER DETAILS The matrix Q is represented as a product of elementary reflectors Q = H(ia+k-1) H(ia+k-2) . . . H(ia), where k = min(m,n). 
Each H(i) has the form .br H(i) = I - tau * v * v' .br where tau is a real scalar, and v is a real vector with v(1:i-1)=0 and v(i) = 1; v(i+1:n) is stored on exit in A(ia+i-1,ja+i:ja+n-1), and tau in TAU(ia+i-1). .br scalapack-doc-1.5/man/manl/pdgels.l .TH PDGELS l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PDGELS - solve overdetermined or underdetermined real linear systems involving an M-by-N matrix sub( A ) = A(IA:IA+M-1,JA:JA+N-1), .SH SYNOPSIS .TP 19 SUBROUTINE PDGELS( TRANS, M, N, NRHS, A, IA, JA, DESCA, B, IB, JB, DESCB, WORK, LWORK, INFO ) .TP 19 .ti +4 CHARACTER TRANS .TP 19 .ti +4 INTEGER IA, IB, INFO, JA, JB, LWORK, M, N, NRHS .TP 19 .ti +4 INTEGER DESCA( * ), DESCB( * ) .TP 19 .ti +4 DOUBLE PRECISION A( * ), B( * ), WORK( * ) .SH PURPOSE PDGELS solves overdetermined or underdetermined real linear systems involving an M-by-N matrix sub( A ) = A(IA:IA+M-1,JA:JA+N-1), or its transpose, using a QR or LQ factorization of sub( A ). It is assumed that sub( A ) has full rank. .br The following options are provided: .br 1. If TRANS = 'N' and m >= n: find the least squares solution of an overdetermined system, i.e., solve the least squares problem minimize || sub( B ) - sub( A )*X ||. .br 2. If TRANS = 'N' and m < n: find the minimum norm solution of an underdetermined system sub( A ) * X = sub( B ). .br 3. If TRANS = 'T' and m >= n: find the minimum norm solution of an underdetermined system sub( A )**T * X = sub( B ). .br 4. If TRANS = 'T' and m < n: find the least squares solution of an overdetermined system, i.e., solve the least squares problem minimize || sub( B ) - sub( A )**T * X ||. where sub( B ) denotes B( IB:IB+M-1, JB:JB+NRHS-1 ) when TRANS = 'N' and B( IB:IB+N-1, JB:JB+NRHS-1 ) otherwise.
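The four options can be condensed into a small dispatch. This Python helper (`pdgels_case` is hypothetical, for exposition only) mirrors the case analysis above:

```python
def pdgels_case(trans, m, n):
    """Which of the four PDGELS problem types applies (illustrative helper,
    not part of ScaLAPACK)."""
    if trans == 'N':
        # sub( B ) is M-by-NRHS; sub( A ) itself is used
        return 'least squares' if m >= n else 'minimum norm'
    # TRANS = 'T': sub( B ) is N-by-NRHS; sub( A )**T is used
    return 'minimum norm' if m >= n else 'least squares'

print(pdgels_case('N', 8, 3))  # -> least squares
print(pdgels_case('T', 8, 3))  # -> minimum norm
```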
Several right hand side vectors b and solution vectors x can be handled in a single call; When TRANS = 'N', the solution vectors are stored as the columns of the N-by-NRHS right hand side matrix sub( B ) and the M-by-NRHS right hand side matrix sub( B ) otherwise. .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. 
.br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 TRANS (global input) CHARACTER = 'N': the linear system involves sub( A ); .br = 'T': the linear system involves sub( A )**T. .TP 8 M (global input) INTEGER The number of rows to be operated on, i.e. the number of rows of the distributed submatrix sub( A ). M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on, i.e. the number of columns of the distributed submatrix sub( A ). N >= 0. .TP 8 NRHS (global input) INTEGER The number of right hand sides, i.e. the number of columns of the distributed submatrices sub( B ) and X. NRHS >= 0. .TP 8 A (local input/local output) DOUBLE PRECISION pointer into the local memory to an array of local dimension ( LLD_A, LOCc(JA+N-1) ). On entry, the M-by-N matrix A. if M >= N, sub( A ) is overwritten by details of its QR factorization as returned by PDGEQRF; if M < N, sub( A ) is overwritten by details of its LQ factorization as returned by PDGELQF. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. 
.TP 8 B (local input/local output) DOUBLE PRECISION pointer into the local memory to an array of local dimension (LLD_B, LOCc(JB+NRHS-1)). On entry, this array contains the local pieces of the distributed matrix B of right hand side vectors, stored columnwise; sub( B ) is M-by-NRHS if TRANS='N', and N-by-NRHS otherwise. On exit, sub( B ) is overwritten by the solution vectors, stored columnwise: if TRANS = 'N' and M >= N, rows 1 to N of sub( B ) contain the least squares solution vectors; the residual sum of squares for the solution in each column is given by the sum of squares of elements N+1 to M in that column; if TRANS = 'N' and M < N, rows 1 to N of sub( B ) contain the minimum norm solution vectors; if TRANS = 'T' and M >= N, rows 1 to M of sub( B ) contain the minimum norm solution vectors; if TRANS = 'T' and M < N, rows 1 to M of sub( B ) contain the least squares solution vectors; the residual sum of squares for the solution in each column is given by the sum of squares of elements M+1 to N in that column. .TP 8 IB (global input) INTEGER The row index in the global array B indicating the first row of sub( B ). .TP 8 JB (global input) INTEGER The column index in the global array B indicating the first column of sub( B ). .TP 8 DESCB (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix B. .TP 8 WORK (local workspace/local output) DOUBLE PRECISION array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. 
LWORK is local input and must be at least LWORK >= LTAU + MAX( LWF, LWS ) where If M >= N, then LTAU = NUMROC( JA+MIN(M,N)-1, NB_A, MYCOL, CSRC_A, NPCOL ), LWF = NB_A * ( MpA0 + NqA0 + NB_A ) LWS = MAX( (NB_A*(NB_A-1))/2, (NRHSqB0 + MpB0)*NB_A ) + NB_A * NB_A Else LTAU = NUMROC( IA+MIN(M,N)-1, MB_A, MYROW, RSRC_A, NPROW ), LWF = MB_A * ( MpA0 + NqA0 + MB_A ) LWS = MAX( (MB_A*(MB_A-1))/2, ( NpB0 + MAX( NqA0 + NUMROC( NUMROC( N+IROFFB, MB_A, 0, 0, NPROW ), MB_A, 0, 0, LCMP ), NRHSqB0 ) )*MB_A ) + MB_A * MB_A End if where LCMP = LCM / NPROW with LCM = ILCM( NPROW, NPCOL ), IROFFA = MOD( IA-1, MB_A ), ICOFFA = MOD( JA-1, NB_A ), IAROW = INDXG2P( IA, MB_A, MYROW, RSRC_A, NPROW ), IACOL = INDXG2P( JA, NB_A, MYCOL, CSRC_A, NPCOL ), MpA0 = NUMROC( M+IROFFA, MB_A, MYROW, IAROW, NPROW ), NqA0 = NUMROC( N+ICOFFA, NB_A, MYCOL, IACOL, NPCOL ), IROFFB = MOD( IB-1, MB_B ), ICOFFB = MOD( JB-1, NB_B ), IBROW = INDXG2P( IB, MB_B, MYROW, RSRC_B, NPROW ), IBCOL = INDXG2P( JB, NB_B, MYCOL, CSRC_B, NPCOL ), MpB0 = NUMROC( M+IROFFB, MB_B, MYROW, IBROW, NPROW ), NpB0 = NUMROC( N+IROFFB, MB_B, MYROW, IBROW, NPROW ), NRHSqB0 = NUMROC( NRHS+ICOFFB, NB_B, MYCOL, IBCOL, NPCOL ), ILCM, INDXG2P and NUMROC are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 INFO (global output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. 
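The LWORK bound for PDGELS uses LCM = ILCM( NPROW, NPCOL ), the least common multiple of the process-grid dimensions. A minimal Python equivalent of that tool function (an illustrative reimplementation, not the Fortran routine):

```python
from math import gcd

def ilcm(m, n):
    """Least common multiple of two positive integers, as the ScaLAPACK
    tool function ILCM computes for the process grid dimensions."""
    return m // gcd(m, n) * n

# e.g. a 4 x 6 process grid:
print(ilcm(4, 6))  # -> 12
```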
scalapack-doc-1.5/man/manl/pdgeql2.l0100644000056400000620000001442406335610626016766 0ustar pfrauenfstaff.TH PDGEQL2 l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PDGEQL2 - compute a QL factorization of a real distributed M-by-N matrix sub( A ) = A(IA:IA+M-1,JA:JA+N-1) = Q * L .SH SYNOPSIS .TP 20 SUBROUTINE PDGEQL2( M, N, A, IA, JA, DESCA, TAU, WORK, LWORK, INFO ) .TP 20 .ti +4 INTEGER IA, INFO, JA, LWORK, M, N .TP 20 .ti +4 INTEGER DESCA( * ) .TP 20 .ti +4 DOUBLE PRECISION A( * ), TAU( * ), WORK( * ) .SH PURPOSE PDGEQL2 computes a QL factorization of a real distributed M-by-N matrix sub( A ) = A(IA:IA+M-1,JA:JA+N-1) = Q * L. Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. 
CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 M (global input) INTEGER The number of rows to be operated on, i.e. the number of rows of the distributed submatrix sub( A ). M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on, i.e. the number of columns of the distributed submatrix sub( A ). N >= 0. .TP 8 A (local input/local output) DOUBLE PRECISION pointer into the local memory to an array of dimension (LLD_A, LOCc(JA+N-1)). On entry, the local pieces of the M-by-N distributed matrix sub( A ) which is to be factored. On exit, if M >= N, the lower triangle of the distributed submatrix A( IA+M-N:IA+M-1, JA:JA+N-1 ) contains the N-by-N lower triangular matrix L; if M <= N, the elements on and below the (N-M)-th superdiagonal contain the M by N lower trapezoidal matrix L; the remaining elements, with the array TAU, represent the orthogonal matrix Q as a product of elementary reflectors (see Further Details). 
IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 TAU (local output) DOUBLE PRECISION, array, dimension LOCc(JA+N-1) This array contains the scalar factors of the elementary reflectors. TAU is tied to the distributed matrix A. .TP 8 WORK (local workspace/local output) DOUBLE PRECISION array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. LWORK is local input and must be at least LWORK >= Mp0 + MAX( 1, Nq0 ), where IROFF = MOD( IA-1, MB_A ), ICOFF = MOD( JA-1, NB_A ), IAROW = INDXG2P( IA, MB_A, MYROW, RSRC_A, NPROW ), IACOL = INDXG2P( JA, NB_A, MYCOL, CSRC_A, NPCOL ), Mp0 = NUMROC( M+IROFF, MB_A, MYROW, IAROW, NPROW ), Nq0 = NUMROC( N+ICOFF, NB_A, MYCOL, IACOL, NPCOL ), and NUMROC, INDXG2P are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 INFO (local output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. .SH FURTHER DETAILS The matrix Q is represented as a product of elementary reflectors Q = H(ja+k-1) . . . H(ja+1) H(ja), where k = min(m,n). 
Each H(i) has the form .br H(i) = I - tau * v * v' .br where tau is a real scalar, and v is a real vector with .br v(m-k+i+1:m) = 0 and v(m-k+i) = 1; v(1:m-k+i-1) is stored on exit in A(ia:ia+m-k+i-2,ja+n-k+i-1), and tau in TAU(ja+n-k+i-1). .br scalapack-doc-1.5/man/manl/pdgeqlf.l0100644000056400000620000001443506335610626017054 0ustar pfrauenfstaff.TH PDGEQLF l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PDGEQLF - compute a QL factorization of a real distributed M-by-N matrix sub( A ) = A(IA:IA+M-1,JA:JA+N-1) = Q * L .SH SYNOPSIS .TP 20 SUBROUTINE PDGEQLF( M, N, A, IA, JA, DESCA, TAU, WORK, LWORK, INFO ) .TP 20 .ti +4 INTEGER IA, INFO, JA, LWORK, M, N .TP 20 .ti +4 INTEGER DESCA( * ) .TP 20 .ti +4 DOUBLE PRECISION A( * ), TAU( * ), WORK( * ) .SH PURPOSE PDGEQLF computes a QL factorization of a real distributed M-by-N matrix sub( A ) = A(IA:IA+M-1,JA:JA+N-1) = Q * L. Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. 
.br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 M (global input) INTEGER The number of rows to be operated on, i.e. the number of rows of the distributed submatrix sub( A ). M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on, i.e. the number of columns of the distributed submatrix sub( A ). N >= 0. .TP 8 A (local input/local output) DOUBLE PRECISION pointer into the local memory to an array of dimension (LLD_A, LOCc(JA+N-1)). On entry, the local pieces of the M-by-N distributed matrix sub( A ) which is to be factored. 
On exit, if M >= N, the lower triangle of the distributed submatrix A( IA+M-N:IA+M-1, JA:JA+N-1 ) contains the N-by-N lower triangular matrix L; if M <= N, the elements on and below the (N-M)-th superdiagonal contain the M by N lower trapezoidal matrix L; the remaining elements, with the array TAU, represent the orthogonal matrix Q as a product of elementary reflectors (see Further Details). IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 TAU (local output) DOUBLE PRECISION, array, dimension LOCc(JA+N-1) This array contains the scalar factors of the elementary reflectors. TAU is tied to the distributed matrix A. .TP 8 WORK (local workspace/local output) DOUBLE PRECISION array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. LWORK is local input and must be at least LWORK >= NB_A * ( Mp0 + Nq0 + NB_A ), where IROFF = MOD( IA-1, MB_A ), ICOFF = MOD( JA-1, NB_A ), IAROW = INDXG2P( IA, MB_A, MYROW, RSRC_A, NPROW ), IACOL = INDXG2P( JA, NB_A, MYCOL, CSRC_A, NPCOL ), Mp0 = NUMROC( M+IROFF, MB_A, MYROW, IAROW, NPROW ), Nq0 = NUMROC( N+ICOFF, NB_A, MYCOL, IACOL, NPCOL ), and NUMROC, INDXG2P are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. 
.TP 8 INFO (global output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. .SH FURTHER DETAILS The matrix Q is represented as a product of elementary reflectors Q = H(ja+k-1) . . . H(ja+1) H(ja), where k = min(m,n). Each H(i) has the form .br H(i) = I - tau * v * v' .br where tau is a real scalar, and v is a real vector with .br v(m-k+i+1:m) = 0 and v(m-k+i) = 1; v(1:m-k+i-1) is stored on exit in A(ia:ia+m-k+i-2,ja+n-k+i-1), and tau in TAU(ja+n-k+i-1). .br scalapack-doc-1.5/man/manl/pdgeqpf.l0100644000056400000620000001514106335610626017053 0ustar pfrauenfstaff.TH PDGEQPF l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PDGEQPF - compute a QR factorization with column pivoting of a M-by-N distributed matrix sub( A ) = A(IA:IA+M-1,JA:JA+N-1) .SH SYNOPSIS .TP 20 SUBROUTINE PDGEQPF( M, N, A, IA, JA, DESCA, IPIV, TAU, WORK, LWORK, INFO ) .TP 20 .ti +4 INTEGER IA, JA, INFO, LWORK, M, N .TP 20 .ti +4 INTEGER DESCA( * ), IPIV( * ) .TP 20 .ti +4 DOUBLE PRECISION A( * ), TAU( * ), WORK( * ) .SH PURPOSE PDGEQPF computes a QR factorization with column pivoting of a M-by-N distributed matrix sub( A ) = A(IA:IA+M-1,JA:JA+N-1): sub( A ) * P = Q * R. .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. 
.br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 M (global input) INTEGER The number of rows to be operated on, i.e. the number of rows of the distributed submatrix sub( A ). M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on, i.e. the number of columns of the distributed submatrix sub( A ). N >= 0. 
.TP 8 A (local input/local output) DOUBLE PRECISION pointer into the local memory to an array of dimension (LLD_A, LOCc(JA+N-1)). On entry, the local pieces of the M-by-N distributed matrix sub( A ) which is to be factored. On exit, the elements on and above the diagonal of sub( A ) contain the min(M,N) by N upper trapezoidal matrix R (R is upper triangular if M >= N); the elements below the diagonal, with the array TAU, repre- sent the orthogonal matrix Q as a product of elementary reflectors (see Further Details). IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 IPIV (local output) INTEGER array, dimension LOCc(JA+N-1). On exit, if IPIV(I) = K, the local i-th column of sub( A )*P was the global K-th column of sub( A ). IPIV is tied to the distributed matrix A. .TP 8 TAU (local output) DOUBLE PRECISION, array, dimension LOCc(JA+MIN(M,N)-1). This array contains the scalar factors TAU of the elementary reflectors. TAU is tied to the distributed matrix A. .TP 8 WORK (local workspace/local output) DOUBLE PRECISION array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. LWORK is local input and must be at least LWORK >= MAX(3,Mp0 + Nq0) + LOCc(JA+N-1)+Nq0. 
IROFF = MOD( IA-1, MB_A ), ICOFF = MOD( JA-1, NB_A ), IAROW = INDXG2P( IA, MB_A, MYROW, RSRC_A, NPROW ), IACOL = INDXG2P( JA, NB_A, MYCOL, CSRC_A, NPCOL ), Mp0 = NUMROC( M+IROFF, MB_A, MYROW, IAROW, NPROW ), Nq0 = NUMROC( N+ICOFF, NB_A, MYCOL, IACOL, NPCOL ), LOCc(JA+N-1) = NUMROC( JA+N-1, NB_A, MYCOL, CSRC_A, NPCOL ) and NUMROC, INDXG2P are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 INFO (global output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. .SH FURTHER DETAILS The matrix Q is represented as a product of elementary reflectors Q = H(1) H(2) . . . H(n) .br Each H(i) has the form .br H(i) = I - tau * v * v' .br where tau is a real scalar, and v is a real vector with v(1:i-1) = 0 and v(i) = 1; v(i+1:m) is stored on exit in A(ia+i:ia+m-1,ja+i-1). The matrix P is represented in jpvt (returned here in the array IPIV) as follows: If .br jpvt(j) = i .br then the jth column of P is the ith canonical unit vector.
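The pivot representation above says that if the j-th pivot entry equals i, then column j of P is the i-th canonical unit vector, so column j of sub( A )*P is column i of sub( A ). A tiny Python sketch of applying that encoding to a list of columns (illustrative only; not a ScaLAPACK routine):

```python
def permute_columns(cols, pivots):
    """Form A*P from the columns of A, where pivots is 1-based and
    pivots[j-1] = i means column j of A*P is column i of A."""
    return [cols[i - 1] for i in pivots]

# pivot order (2, 3, 1) on a 3-column matrix:
print(permute_columns(['a1', 'a2', 'a3'], [2, 3, 1]))  # -> ['a2', 'a3', 'a1']
```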
scalapack-doc-1.5/man/manl/pdgeqr2.l0100644000056400000620000001424406335610627016775 0ustar pfrauenfstaff.TH PDGEQR2 l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PDGEQR2 - compute a QR factorization of a real distributed M-by-N matrix sub( A ) = A(IA:IA+M-1,JA:JA+N-1) = Q * R .SH SYNOPSIS .TP 20 SUBROUTINE PDGEQR2( M, N, A, IA, JA, DESCA, TAU, WORK, LWORK, INFO ) .TP 20 .ti +4 INTEGER IA, INFO, JA, LWORK, M, N .TP 20 .ti +4 INTEGER DESCA( * ) .TP 20 .ti +4 DOUBLE PRECISION A( * ), TAU( * ), WORK( * ) .SH PURPOSE PDGEQR2 computes a QR factorization of a real distributed M-by-N matrix sub( A ) = A(IA:IA+M-1,JA:JA+N-1) = Q * R. Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. 
CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 M (global input) INTEGER The number of rows to be operated on, i.e. the number of rows of the distributed submatrix sub( A ). M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on, i.e. the number of columns of the distributed submatrix sub( A ). N >= 0. .TP 8 A (local input/local output) DOUBLE PRECISION pointer into the local memory to an array of dimension (LLD_A, LOCc(JA+N-1)). On entry, the local pieces of the M-by-N distributed matrix sub( A ) which is to be factored. On exit, the elements on and above the diagonal of sub( A ) contain the min(M,N) by N upper trapezoidal matrix R (R is upper triangular if M >= N); the elements below the diagonal, with the array TAU, represent the orthogonal matrix Q as a product of elementary reflectors (see Further Details). IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). 
.TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 TAU (local output) DOUBLE PRECISION, array, dimension LOCc(JA+MIN(M,N)-1). This array contains the scalar factors TAU of the elementary reflectors. TAU is tied to the distributed matrix A. .TP 8 WORK (local workspace/local output) DOUBLE PRECISION array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. LWORK is local input and must be at least LWORK >= Mp0 + MAX( 1, Nq0 ), where IROFF = MOD( IA-1, MB_A ), ICOFF = MOD( JA-1, NB_A ), IAROW = INDXG2P( IA, MB_A, MYROW, RSRC_A, NPROW ), IACOL = INDXG2P( JA, NB_A, MYCOL, CSRC_A, NPCOL ), Mp0 = NUMROC( M+IROFF, MB_A, MYROW, IAROW, NPROW ), Nq0 = NUMROC( N+ICOFF, NB_A, MYCOL, IACOL, NPCOL ), and NUMROC, INDXG2P are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 INFO (local output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-th entry had an illegal value, then INFO = -(i*100+j); if the i-th argument is a scalar and had an illegal value, then INFO = -i. .SH FURTHER DETAILS The matrix Q is represented as a product of elementary reflectors Q = H(ja) H(ja+1) . . . H(ja+k-1), where k = min(m,n). Each H(i) has the form .br H(i) = I - tau * v * v' .br where tau is a real scalar, and v is a real vector with v(1:i-1) = 0 and v(i) = 1; v(i+1:m) is stored on exit in A(ia+i:ia+m-1,ja+i-1), and tau in TAU(ja+i-1). 
.br .TH PDGEQRF l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PDGEQRF - compute a QR factorization of a real distributed M-by-N matrix sub( A ) = A(IA:IA+M-1,JA:JA+N-1) = Q * R .SH SYNOPSIS .TP 20 SUBROUTINE PDGEQRF( M, N, A, IA, JA, DESCA, TAU, WORK, LWORK, INFO ) .TP 20 .ti +4 INTEGER IA, INFO, JA, LWORK, M, N .TP 20 .ti +4 INTEGER DESCA( * ) .TP 20 .ti +4 DOUBLE PRECISION A( * ), TAU( * ), WORK( * ) .SH PURPOSE PDGEQRF computes a QR factorization of a real distributed M-by-N matrix sub( A ) = A(IA:IA+M-1,JA:JA+N-1) = Q * R. Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclically distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ ) The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distributed over. The context itself is global, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. 
CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 M (global input) INTEGER The number of rows to be operated on, i.e. the number of rows of the distributed submatrix sub( A ). M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on, i.e. the number of columns of the distributed submatrix sub( A ). N >= 0. .TP 8 A (local input/local output) DOUBLE PRECISION pointer into the local memory to an array of dimension (LLD_A, LOCc(JA+N-1)). On entry, the local pieces of the M-by-N distributed matrix sub( A ) which is to be factored. On exit, the elements on and above the diagonal of sub( A ) contain the min(M,N) by N upper trapezoidal matrix R (R is upper triangular if M >= N); the elements below the diagonal, with the array TAU, represent the orthogonal matrix Q as a product of elementary reflectors (see Further Details). IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). 
.TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 TAU (local output) DOUBLE PRECISION, array, dimension LOCc(JA+MIN(M,N)-1). This array contains the scalar factors TAU of the elementary reflectors. TAU is tied to the distributed matrix A. .TP 8 WORK (local workspace/local output) DOUBLE PRECISION array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. LWORK is local input and must be at least LWORK >= NB_A * ( Mp0 + Nq0 + NB_A ), where IROFF = MOD( IA-1, MB_A ), ICOFF = MOD( JA-1, NB_A ), IAROW = INDXG2P( IA, MB_A, MYROW, RSRC_A, NPROW ), IACOL = INDXG2P( JA, NB_A, MYCOL, CSRC_A, NPCOL ), Mp0 = NUMROC( M+IROFF, MB_A, MYROW, IAROW, NPROW ), Nq0 = NUMROC( N+ICOFF, NB_A, MYCOL, IACOL, NPCOL ), and NUMROC, INDXG2P are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 INFO (global output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. .SH FURTHER DETAILS The matrix Q is represented as a product of elementary reflectors Q = H(ja) H(ja+1) . . . H(ja+k-1), where k = min(m,n). 
Each H(i) has the form .br H(i) = I - tau * v * v' .br where tau is a real scalar, and v is a real vector with v(1:i-1) = 0 and v(i) = 1; v(i+1:m) is stored on exit in A(ia+i:ia+m-1,ja+i-1), and tau in TAU(ja+i-1). .br .TH PDGERFS l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PDGERFS - improve the computed solution to a system of linear equations and provide error bounds and backward error estimates for the solutions .SH SYNOPSIS .TP 20 SUBROUTINE PDGERFS( TRANS, N, NRHS, A, IA, JA, DESCA, AF, IAF, JAF, DESCAF, IPIV, B, IB, JB, DESCB, X, IX, JX, DESCX, FERR, BERR, WORK, LWORK, IWORK, LIWORK, INFO ) .TP 20 .ti +4 CHARACTER TRANS .TP 20 .ti +4 INTEGER IA, IAF, IB, IX, INFO, JA, JAF, JB, JX, LIWORK, LWORK, N, NRHS .TP 20 .ti +4 INTEGER DESCA( * ), DESCAF( * ), DESCB( * ), DESCX( * ), IPIV( * ), IWORK( * ) .TP 20 .ti +4 DOUBLE PRECISION A( * ), AF( * ), B( * ), BERR( * ), FERR( * ), WORK( * ), X( * ) .SH PURPOSE PDGERFS improves the computed solution to a system of linear equations and provides error bounds and backward error estimates for the solutions. Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclically distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ ) The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distributed over. 
The context itself is global, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br In the following comments, sub( A ), sub( X ) and sub( B ) denote respectively A(IA:IA+N-1,JA:JA+N-1), X(IX:IX+N-1,JX:JX+NRHS-1) and B(IB:IB+N-1,JB:JB+NRHS-1). .br .SH ARGUMENTS .TP 8 TRANS (global input) CHARACTER*1 Specifies the form of the system of equations. 
= 'N': sub( A ) * sub( X ) = sub( B ) (No transpose) .br = 'T': sub( A )**T * sub( X ) = sub( B ) (Transpose) .br = 'C': sub( A )**T * sub( X ) = sub( B ) (Conjugate transpose = Transpose) .TP 8 N (global input) INTEGER The order of the matrix sub( A ). N >= 0. .TP 8 NRHS (global input) INTEGER The number of right hand sides, i.e., the number of columns of the matrices sub( B ) and sub( X ). NRHS >= 0. .TP 8 A (local input) DOUBLE PRECISION pointer into the local memory to an array of local dimension (LLD_A,LOCc(JA+N-1)). This array contains the local pieces of the distributed matrix sub( A ). .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 AF (local input) DOUBLE PRECISION pointer into the local memory to an array of local dimension (LLD_AF,LOCc(JA+N-1)). This array contains the local pieces of the distributed factors of the matrix sub( A ) = P * L * U as computed by PDGETRF. .TP 8 IAF (global input) INTEGER The row index in the global array AF indicating the first row of sub( AF ). .TP 8 JAF (global input) INTEGER The column index in the global array AF indicating the first column of sub( AF ). .TP 8 DESCAF (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix AF. .TP 8 IPIV (local input) INTEGER array of dimension LOCr(M_AF)+MB_AF. This array contains the pivoting information as computed by PDGETRF. IPIV(i) -> The global row local row i was swapped with. This array is tied to the distributed matrix A. .TP 8 B (local input) DOUBLE PRECISION pointer into the local memory to an array of local dimension (LLD_B,LOCc(JB+NRHS-1)). This array contains the local pieces of the distributed matrix of right hand sides sub( B ). 
.TP 8 IB (global input) INTEGER The row index in the global array B indicating the first row of sub( B ). .TP 8 JB (global input) INTEGER The column index in the global array B indicating the first column of sub( B ). .TP 8 DESCB (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix B. .TP 8 X (local input and output) DOUBLE PRECISION pointer into the local memory to an array of local dimension (LLD_X,LOCc(JX+NRHS-1)). On entry, this array contains the local pieces of the distributed matrix solution sub( X ). On exit, the improved solution vectors. .TP 8 IX (global input) INTEGER The row index in the global array X indicating the first row of sub( X ). .TP 8 JX (global input) INTEGER The column index in the global array X indicating the first column of sub( X ). .TP 8 DESCX (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix X. .TP 8 FERR (local output) DOUBLE PRECISION array of local dimension LOCc(JB+NRHS-1). The estimated forward error bound for each solution vector of sub( X ). If XTRUE is the true solution corresponding to sub( X ), FERR is an estimated upper bound for the magnitude of the largest element in (sub( X ) - XTRUE) divided by the magnitude of the largest element in sub( X ). The estimate is as reliable as the estimate for RCOND, and is almost always a slight overestimate of the true error. This array is tied to the distributed matrix X. .TP 8 BERR (local output) DOUBLE PRECISION array of local dimension LOCc(JB+NRHS-1). The componentwise relative backward error of each solution vector (i.e., the smallest relative change in any entry of sub( A ) or sub( B ) that makes sub( X ) an exact solution). This array is tied to the distributed matrix X. .TP 8 WORK (local workspace/local output) DOUBLE PRECISION array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. 
.TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. LWORK is local input and must be at least LWORK >= 3*LOCr( N + MOD(IA-1,MB_A) ) If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 IWORK (local workspace/local output) INTEGER array, dimension (LIWORK) On exit, IWORK(1) returns the minimal and optimal LIWORK. .TP 8 LIWORK (local or global input) INTEGER The dimension of the array IWORK. LIWORK is local input and must be at least LIWORK >= LOCr( N + MOD(IB-1,MB_B) ). If LIWORK = -1, then LIWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 INFO (global output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. .SH PARAMETERS ITMAX is the maximum number of steps of iterative refinement. Notes ===== This routine temporarily returns when N <= 1. The distributed submatrices op( A ) and op( AF ) (respectively sub( X ) and sub( B ) ) should be distributed the same way on the same processes. These conditions ensure that sub( A ) and sub( AF ) (resp. sub( X ) and sub( B ) ) are "perfectly" aligned. 
Moreover, this routine requires the distributed submatrices sub( A ), sub( AF ), sub( X ), and sub( B ) to be aligned on a block boundary, i.e., if f(x,y) = MOD( x-1, y ): f( IA, DESCA( MB_ ) ) = f( JA, DESCA( NB_ ) ) = 0, f( IAF, DESCAF( MB_ ) ) = f( JAF, DESCAF( NB_ ) ) = 0, f( IB, DESCB( MB_ ) ) = f( JB, DESCB( NB_ ) ) = 0, and f( IX, DESCX( MB_ ) ) = f( JX, DESCX( NB_ ) ) = 0. .TH PDGERQ2 l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PDGERQ2 - compute an RQ factorization of a real distributed M-by-N matrix sub( A ) = A(IA:IA+M-1,JA:JA+N-1) = R * Q .SH SYNOPSIS .TP 20 SUBROUTINE PDGERQ2( M, N, A, IA, JA, DESCA, TAU, WORK, LWORK, INFO ) .TP 20 .ti +4 INTEGER IA, INFO, JA, LWORK, M, N .TP 20 .ti +4 INTEGER DESCA( * ) .TP 20 .ti +4 DOUBLE PRECISION A( * ), TAU( * ), WORK( * ) .SH PURPOSE PDGERQ2 computes an RQ factorization of a real distributed M-by-N matrix sub( A ) = A(IA:IA+M-1,JA:JA+N-1) = R * Q. Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclically distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ ) The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distributed over. The context itself is global, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. 
.br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 M (global input) INTEGER The number of rows to be operated on, i.e. the number of rows of the distributed submatrix sub( A ). M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on, i.e. the number of columns of the distributed submatrix sub( A ). N >= 0. .TP 8 A (local input/local output) DOUBLE PRECISION pointer into the local memory to an array of dimension (LLD_A, LOCc(JA+N-1)). On entry, the local pieces of the M-by-N distributed matrix sub( A ) which is to be factored. 
On exit, if M <= N, the upper triangle of A( IA:IA+M-1, JA+N-M:JA+N-1 ) contains the M by M upper triangular matrix R; if M >= N, the elements on and above the (M-N)-th subdiagonal contain the M by N upper trapezoidal matrix R; the remaining elements, with the array TAU, represent the orthogonal matrix Q as a product of elementary reflectors (see Further Details). IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 TAU (local output) DOUBLE PRECISION, array, dimension LOCr(IA+M-1) This array contains the scalar factors of the elementary reflectors. TAU is tied to the distributed matrix A. .TP 8 WORK (local workspace/local output) DOUBLE PRECISION array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. LWORK is local input and must be at least LWORK >= Nq0 + MAX( 1, Mp0 ), where IROFF = MOD( IA-1, MB_A ), ICOFF = MOD( JA-1, NB_A ), IAROW = INDXG2P( IA, MB_A, MYROW, RSRC_A, NPROW ), IACOL = INDXG2P( JA, NB_A, MYCOL, CSRC_A, NPCOL ), Mp0 = NUMROC( M+IROFF, MB_A, MYROW, IAROW, NPROW ), Nq0 = NUMROC( N+ICOFF, NB_A, MYCOL, IACOL, NPCOL ), and NUMROC, INDXG2P are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. 
.TP 8 INFO (local output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-th entry had an illegal value, then INFO = -(i*100+j); if the i-th argument is a scalar and had an illegal value, then INFO = -i. .SH FURTHER DETAILS The matrix Q is represented as a product of elementary reflectors Q = H(ia) H(ia+1) . . . H(ia+k-1), where k = min(m,n). Each H(i) has the form .br H(i) = I - tau * v * v' .br where tau is a real scalar, and v is a real vector with .br v(n-k+i+1:n) = 0 and v(n-k+i) = 1; v(1:n-k+i-1) is stored on exit in A(ia+m-k+i-1,ja:ja+n-k+i-2), and tau in TAU(ia+m-k+i-1). .br .TH PDGERQF l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PDGERQF - compute an RQ factorization of a real distributed M-by-N matrix sub( A ) = A(IA:IA+M-1,JA:JA+N-1) = R * Q .SH SYNOPSIS .TP 20 SUBROUTINE PDGERQF( M, N, A, IA, JA, DESCA, TAU, WORK, LWORK, INFO ) .TP 20 .ti +4 INTEGER IA, INFO, JA, LWORK, M, N .TP 20 .ti +4 INTEGER DESCA( * ) .TP 20 .ti +4 DOUBLE PRECISION A( * ), TAU( * ), WORK( * ) .SH PURPOSE PDGERQF computes an RQ factorization of a real distributed M-by-N matrix sub( A ) = A(IA:IA+M-1,JA:JA+N-1) = R * Q. Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclically distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ ) The descriptor type. In this case, DTYPE_A = 1. 
.br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distributed over. The context itself is global, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 M (global input) INTEGER The number of rows to be operated on, i.e. the number of rows of the distributed submatrix sub( A ). M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on, i.e. the number of columns of the distributed submatrix sub( A ). N >= 0. 
.TP 8 A (local input/local output) DOUBLE PRECISION pointer into the local memory to an array of dimension (LLD_A, LOCc(JA+N-1)). On entry, the local pieces of the M-by-N distributed matrix sub( A ) which is to be factored. On exit, if M <= N, the upper triangle of A( IA:IA+M-1, JA+N-M:JA+N-1 ) contains the M by M upper triangular matrix R; if M >= N, the elements on and above the (M-N)-th subdiagonal contain the M by N upper trapezoidal matrix R; the remaining elements, with the array TAU, represent the orthogonal matrix Q as a product of elementary reflectors (see Further Details). IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 TAU (local output) DOUBLE PRECISION, array, dimension LOCr(IA+M-1) This array contains the scalar factors of the elementary reflectors. TAU is tied to the distributed matrix A. .TP 8 WORK (local workspace/local output) DOUBLE PRECISION array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. LWORK is local input and must be at least LWORK >= MB_A * ( Mp0 + Nq0 + MB_A ), where IROFF = MOD( IA-1, MB_A ), ICOFF = MOD( JA-1, NB_A ), IAROW = INDXG2P( IA, MB_A, MYROW, RSRC_A, NPROW ), IACOL = INDXG2P( JA, NB_A, MYCOL, CSRC_A, NPCOL ), Mp0 = NUMROC( M+IROFF, MB_A, MYROW, IAROW, NPROW ), Nq0 = NUMROC( N+ICOFF, NB_A, MYCOL, IACOL, NPCOL ), and NUMROC, INDXG2P are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. 
Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 INFO (global output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-th entry had an illegal value, then INFO = -(i*100+j); if the i-th argument is a scalar and had an illegal value, then INFO = -i. .SH FURTHER DETAILS The matrix Q is represented as a product of elementary reflectors Q = H(ia) H(ia+1) . . . H(ia+k-1), where k = min(m,n). Each H(i) has the form .br H(i) = I - tau * v * v' .br where tau is a real scalar, and v is a real vector with .br v(n-k+i+1:n) = 0 and v(n-k+i) = 1; v(1:n-k+i-1) is stored on exit in A(ia+m-k+i-1,ja:ja+n-k+i-2), and tau in TAU(ia+m-k+i-1). .br .TH PDGESV l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PDGESV - compute the solution to a real system of linear equations sub( A ) * X = sub( B ), .SH SYNOPSIS .TP 19 SUBROUTINE PDGESV( N, NRHS, A, IA, JA, DESCA, IPIV, B, IB, JB, DESCB, INFO ) .TP 19 .ti +4 INTEGER IA, IB, INFO, JA, JB, N, NRHS .TP 19 .ti +4 INTEGER DESCA( * ), DESCB( * ), IPIV( * ) .TP 19 .ti +4 DOUBLE PRECISION A( * ), B( * ) .SH PURPOSE PDGESV computes the solution to a real system of linear equations where sub( A ) = A(IA:IA+N-1,JA:JA+N-1) is an N-by-N distributed matrix and X and sub( B ) = B(IB:IB+N-1,JB:JB+NRHS-1) are N-by-NRHS distributed matrices. .br The LU decomposition with partial pivoting and row interchanges is used to factor sub( A ) as sub( A ) = P * L * U, where P is a permutation matrix, L is unit lower triangular, and U is upper triangular. L and U are stored in sub( A ). The factored form of sub( A ) is then used to solve the system of equations sub( A ) * X = sub( B ). Notes .br ===== .br Each global data object is described by an associated description vector. 
This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. 
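The two-dimensional block cyclic mapping described above can be made concrete with a small Python sketch (function names are my own shorthand for the ScaLAPACK tool functions INDXG2P, INDXG2L and INDXL2G; the real routines carry extra dummy arguments) that maps a 1-based global index to its owning process and local index, and back:

```python
def g2p(ig, nb, src, nprocs):
    # As in INDXG2P: process coordinate owning global index ig (1-based),
    # for block size nb and source process src.
    return (src + (ig - 1) // nb) % nprocs

def g2l(ig, nb, nprocs):
    # As in INDXG2L: local (1-based) index of global index ig on its
    # owning process.
    return ((ig - 1) // (nb * nprocs)) * nb + (ig - 1) % nb + 1

def l2g(il, nb, iproc, src, nprocs):
    # As in INDXL2G: global index corresponding to local index il on
    # process iproc.
    mydist = (nprocs + iproc - src) % nprocs
    lblk = (il - 1) // nb
    return (lblk * nprocs + mydist) * nb + (il - 1) % nb + 1
```

Applying g2p/g2l independently to the row dimension (MB_A, RSRC_A, NPROW) and the column dimension (NB_A, CSRC_A, NPCOL) gives the process-grid coordinates and local storage position of any global matrix entry; l2g inverts the mapping.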
.br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br This routine requires square block decomposition ( MB_A = NB_A ). .SH ARGUMENTS .TP 8 N (global input) INTEGER The number of rows and columns to be operated on, i.e. the order of the distributed submatrix sub( A ). N >= 0. .TP 8 NRHS (global input) INTEGER The number of right hand sides, i.e., the number of columns of the distributed submatrix sub( B ). NRHS >= 0. .TP 8 A (local input/local output) DOUBLE PRECISION pointer into the local memory to an array of dimension (LLD_A,LOCc(JA+N-1)). On entry, the local pieces of the N-by-N distributed matrix sub( A ) to be factored. On exit, this array contains the local pieces of the factors L and U from the factorization sub( A ) = P*L*U; the unit diagonal elements of L are not stored. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 IPIV (local output) INTEGER array, dimension ( LOCr(M_A)+MB_A ) This array contains the pivoting information. IPIV(i) -> The global row local row i was swapped with. This array is tied to the distributed matrix A. .TP 8 B (local input/local output) DOUBLE PRECISION pointer into the local memory to an array of dimension (LLD_B,LOCc(JB+NRHS-1)). On entry, the right hand side distributed matrix sub( B ). On exit, if INFO = 0, sub( B ) is overwritten by the solution distributed matrix X.
.TP 8 IB (global input) INTEGER The row index in the global array B indicating the first row of sub( B ). .TP 8 JB (global input) INTEGER The column index in the global array B indicating the first column of sub( B ). .TP 8 DESCB (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix B. .TP 8 INFO (global output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-th entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. > 0: If INFO = K, U(IA+K-1,JA+K-1) is exactly zero. The factorization has been completed, but the factor U is exactly singular, so the solution could not be computed.
.TH PDGESVD l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PDGESVD - compute the singular value decomposition (SVD) of an M-by-N matrix A, optionally computing the left and/or right singular vectors .SH SYNOPSIS .TP 20 SUBROUTINE PDGESVD( JOBU, JOBVT, M, N, A, IA, JA, DESCA, S, U, IU, JU, DESCU, VT, IVT, JVT, DESCVT, WORK, LWORK, INFO ) .TP 20 .ti +4 CHARACTER JOBU, JOBVT .TP 20 .ti +4 INTEGER IA, INFO, IU, IVT, JA, JU, JVT, LWORK, M, N .TP 20 .ti +4 INTEGER DESCA( * ), DESCU( * ), DESCVT( * ) .TP 20 .ti +4 DOUBLE PRECISION A( * ), S( * ), U( * ), VT( * ), WORK( * ) .SH PURPOSE PDGESVD computes the singular value decomposition (SVD) of an M-by-N matrix A, optionally computing the left and/or right singular vectors. The SVD is written as A = U * SIGMA * transpose(V) .br where SIGMA is an M-by-N matrix which is zero except for its min(M,N) diagonal elements, U is an M-by-M orthogonal matrix, and V is an N-by-N orthogonal matrix. The diagonal elements of SIGMA are the singular values of A and the columns of U and V are the corresponding left and right singular vectors, respectively.
The singular values are returned in array S in decreasing order and only the first min(M,N) columns of U and rows of VT = V**T are computed. .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. 
Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS MP = number of local rows in A and U NQ = number of local columns in A and VT SIZE = min( M, N ) SIZEQ = number of local columns in U SIZEP = number of local rows in VT .TP 8 JOBU (global input) CHARACTER*1 Specifies options for computing all or part of the matrix U: .br = 'V': the first SIZE columns of U (the left singular vectors) are returned in the array U; = 'N': no columns of U (no left singular vectors) are computed. .TP 8 JOBVT (global input) CHARACTER*1 Specifies options for computing all or part of the matrix V**T: .br = 'V': the first SIZE rows of V**T (the right singular vectors) are returned in the array VT; = 'N': no rows of V**T (no right singular vectors) are computed. .TP 8 M (global input) INTEGER The number of rows of the input matrix A. M >= 0. .TP 8 N (global input) INTEGER The number of columns of the input matrix A. N >= 0. .TP 8 A (local input/workspace) block cyclic DOUBLE PRECISION array, global dimension (M, N), local dimension (MP, NQ) On exit, the contents of A are destroyed. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global input) INTEGER array of dimension DLEN_ The array descriptor for the distributed matrix A. 
.TP 8 S (global output) DOUBLE PRECISION array, dimension SIZE The singular values of A, sorted so that S(i) >= S(i+1). .TP 8 U (local output) DOUBLE PRECISION array, local dimension (MP, SIZEQ), global dimension (M, SIZE) if JOBU = 'V', U contains the first min(m,n) columns of U if JOBU = 'N', U is not referenced. .TP 8 IU (global input) INTEGER The row index in the global array U indicating the first row of sub( U ). .TP 8 JU (global input) INTEGER The column index in the global array U indicating the first column of sub( U ). .TP 8 DESCU (global input) INTEGER array of dimension DLEN_ The array descriptor for the distributed matrix U. .TP 8 VT (local output) DOUBLE PRECISION array, local dimension (SIZEP, NQ), global dimension (SIZE, N). If JOBVT = 'V', VT contains the first SIZE rows of V**T. If JOBVT = 'N', VT is not referenced. .TP 8 IVT (global input) INTEGER The row index in the global array VT indicating the first row of sub( VT ). .TP 8 JVT (global input) INTEGER The column index in the global array VT indicating the first column of sub( VT ). .TP 9 DESCVT (global input) INTEGER array of dimension DLEN_ The array descriptor for the distributed matrix VT. .TP 8 WORK (local workspace/output) DOUBLE PRECISION array, dimension (LWORK) On exit, if INFO = 0, WORK(1) returns the optimal LWORK; .TP 8 LWORK (local input) INTEGER The dimension of the array WORK. LWORK > 2 + 6*SIZEB + MAX(WATOBD, WBDTOSVD), where SIZEB = MAX(M,N), and WATOBD and WBDTOSVD refer, respectively, to the workspace required to bidiagonalize the matrix A and to go from the bidiagonal matrix to the singular value decomposition U*S*VT. For WATOBD, the following holds: WATOBD = MAX(MAX(WPDLANGE,WPDGEBRD), MAX(WPDLARED2D,WPDLARED1D)), where WPDLANGE, WPDLARED1D, WPDLARED2D, WPDGEBRD are the workspaces required respectively for the subprograms PDLANGE, PDLARED1D, PDLARED2D, PDGEBRD. 
Using the standard notation MP = NUMROC( M, MB, MYROW, DESCA( RSRC_ ), NPROW), NQ = NUMROC( N, NB, MYCOL, DESCA( CSRC_ ), NPCOL), the workspaces required for the above subprograms are WPDLANGE = MP, WPDLARED1D = NQ0, WPDLARED2D = MP0, WPDGEBRD = NB*(MP + NQ + 1) + NQ, where NQ0 and MP0 refer, respectively, to the values obtained at MYCOL = 0 and MYROW = 0. In general, the upper limit for the workspace is given by the workspace required on processor (0,0): WATOBD <= NB*(MP0 + NQ0 + 1) + NQ0. In case of a homogeneous process grid this upper limit can be used as an estimate of the minimum workspace for every processor. For WBDTOSVD, the following holds: WBDTOSVD = SIZE*(WANTU*NRU + WANTVT*NCVT) + MAX(WDBDSQR, MAX(WANTU*WPDORMBRQLN, WANTVT*WPDORMBRPRT)), where WANTU (resp. WANTVT) = 1 if left (resp. right) singular vectors are wanted, and 0 otherwise, and WDBDSQR, WPDORMBRQLN and WPDORMBRPRT refer respectively to the workspace required for the subprograms DBDSQR, PDORMBR(QLN), and PDORMBR(PRT), where QLN and PRT are the values of the arguments VECT, SIDE, and TRANS in the call to PDORMBR. NRU is equal to the local number of rows of the matrix U when distributed across a 1-dimensional "column" of processes. Analogously, NCVT is equal to the local number of columns of the matrix VT when distributed across a 1-dimensional "row" of processes. Calling the LAPACK procedure DBDSQR requires WDBDSQR = MAX(1, 2*SIZE + (2*SIZE - 4)*MAX(WANTU, WANTVT)) on every processor. Finally, WPDORMBRQLN = MAX( (NB*(NB-1))/2, (SIZEQ+MP)*NB)+NB*NB, WPDORMBRPRT = MAX( (MB*(MB-1))/2, (SIZEP+NQ)*MB )+MB*MB. If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum size for the work array. The required workspace is returned as the first element of WORK and no error message is issued by PXERBLA.
.br > 0: if DBDSQR did not converge. If INFO = MIN(M,N) + 1, then PDGESVD has detected heterogeneity by finding that eigenvalues were not identical across the process grid. In this case, the accuracy of the results from PDGESVD cannot be guaranteed.
.TH PDGESVX l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PDGESVX - use the LU factorization to compute the solution to a real system of linear equations A(IA:IA+N-1,JA:JA+N-1) * X = B(IB:IB+N-1,JB:JB+NRHS-1), .SH SYNOPSIS .TP 20 SUBROUTINE PDGESVX( FACT, TRANS, N, NRHS, A, IA, JA, DESCA, AF, IAF, JAF, DESCAF, IPIV, EQUED, R, C, B, IB, JB, DESCB, X, IX, JX, DESCX, RCOND, FERR, BERR, WORK, LWORK, IWORK, LIWORK, INFO ) .TP 20 .ti +4 CHARACTER EQUED, FACT, TRANS .TP 20 .ti +4 INTEGER IA, IAF, IB, INFO, IX, JA, JAF, JB, JX, LIWORK, LWORK, N, NRHS .TP 20 .ti +4 DOUBLE PRECISION RCOND .TP 20 .ti +4 INTEGER DESCA( * ), DESCAF( * ), DESCB( * ), DESCX( * ), IPIV( * ), IWORK( * ) .TP 20 .ti +4 DOUBLE PRECISION A( * ), AF( * ), B( * ), BERR( * ), C( * ), FERR( * ), R( * ), WORK( * ), X( * ) .SH PURPOSE PDGESVX uses the LU factorization to compute the solution to a real system of linear equations where A(IA:IA+N-1,JA:JA+N-1) is an N-by-N matrix and X and B(IB:IB+N-1,JB:JB+NRHS-1) are N-by-NRHS matrices. .br Error bounds on the solution and a condition estimate are also provided. .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array".
.br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH DESCRIPTION In the following description, A denotes A(IA:IA+N-1,JA:JA+N-1), B denotes B(IB:IB+N-1,JB:JB+NRHS-1) and X denotes .br X(IX:IX+N-1,JX:JX+NRHS-1). 
.br The following steps are performed: .br 1. If FACT = 'E', real scaling factors are computed to equilibrate the system: .br TRANS = 'N': diag(R)*A*diag(C) *inv(diag(C))*X = diag(R)*B TRANS = 'T': (diag(R)*A*diag(C))**T *inv(diag(R))*X = diag(C)*B TRANS = 'C': (diag(R)*A*diag(C))**H *inv(diag(R))*X = diag(C)*B Whether or not the system will be equilibrated depends on the scaling of the matrix A, but if equilibration is used, A is overwritten by diag(R)*A*diag(C) and B by diag(R)*B (if TRANS='N') or diag(C)*B (if TRANS = 'T' or 'C'). .br 2. If FACT = 'N' or 'E', the LU decomposition is used to factor the matrix A (after equilibration if FACT = 'E') as .br A = P * L * U, .br where P is a permutation matrix, L is a unit lower triangular matrix, and U is upper triangular. .br 3. The factored form of A is used to estimate the condition number of the matrix A. If the reciprocal of the condition number is less than machine precision, steps 4-6 are skipped. .br 4. The system of equations is solved for X using the factored form of A. .br 5. Iterative refinement is applied to improve the computed solution matrix and calculate error bounds and backward error estimates for it. .br 6. If FACT = 'E' and equilibration was used, the matrix X is premultiplied by diag(C) (if TRANS = 'N') or diag(R) (if TRANS = 'T' or 'C') so that it solves the original system before equilibration. .br .SH ARGUMENTS .TP 8 FACT (global input) CHARACTER Specifies whether or not the factored form of the matrix A(IA:IA+N-1,JA:JA+N-1) is supplied on entry, and if not, .br whether the matrix A(IA:IA+N-1,JA:JA+N-1) should be equilibrated before it is factored. = 'F': On entry, AF(IAF:IAF+N-1,JAF:JAF+N-1) and IPIV con- .br tain the factored form of A(IA:IA+N-1,JA:JA+N-1). If EQUED is not 'N', the matrix A(IA:IA+N-1,JA:JA+N-1) has been equilibrated with scaling factors given by R and C. A(IA:IA+N-1,JA:JA+N-1), AF(IAF:IAF+N-1,JAF:JAF+N-1), and IPIV are not modified. 
= 'N': The matrix A(IA:IA+N-1,JA:JA+N-1) will be copied to .br AF(IAF:IAF+N-1,JAF:JAF+N-1) and factored. .br = 'E': The matrix A(IA:IA+N-1,JA:JA+N-1) will be equili- brated if necessary, then copied to AF(IAF:IAF+N-1,JAF:JAF+N-1) and factored. .TP 8 TRANS (global input) CHARACTER .br Specifies the form of the system of equations: .br = 'N': A(IA:IA+N-1,JA:JA+N-1) * X(IX:IX+N-1,JX:JX+NRHS-1) .br = B(IB:IB+N-1,JB:JB+NRHS-1) (No transpose) .br = 'T': A(IA:IA+N-1,JA:JA+N-1)**T * X(IX:IX+N-1,JX:JX+NRHS-1) .br = B(IB:IB+N-1,JB:JB+NRHS-1) (Transpose) .br = 'C': A(IA:IA+N-1,JA:JA+N-1)**H * X(IX:IX+N-1,JX:JX+NRHS-1) .br = B(IB:IB+N-1,JB:JB+NRHS-1) (Transpose) .TP 8 N (global input) INTEGER The number of rows and columns to be operated on, i.e. the order of the distributed submatrix A(IA:IA+N-1,JA:JA+N-1). N >= 0. .TP 8 NRHS (global input) INTEGER The number of right-hand sides, i.e., the number of columns of the distributed submatrices B(IB:IB+N-1,JB:JB+NRHS-1) and .br X(IX:IX+N-1,JX:JX+NRHS-1). NRHS >= 0. .TP 8 A (local input/local output) DOUBLE PRECISION pointer into the local memory to an array of local dimension (LLD_A,LOCc(JA+N-1)). On entry, the N-by-N matrix A(IA:IA+N-1,JA:JA+N-1). If FACT = 'F' and EQUED is not 'N', .br then A(IA:IA+N-1,JA:JA+N-1) must have been equilibrated by .br the scaling factors in R and/or C. A(IA:IA+N-1,JA:JA+N-1) is not modified if FACT = 'F' or 'N', or if FACT = 'E' and EQUED = 'N' on exit. On exit, if EQUED .ne. 'N', A(IA:IA+N-1,JA:JA+N-1) is scaled as follows: .br EQUED = 'R': A(IA:IA+N-1,JA:JA+N-1) := .br diag(R) * A(IA:IA+N-1,JA:JA+N-1) .br EQUED = 'C': A(IA:IA+N-1,JA:JA+N-1) := .br A(IA:IA+N-1,JA:JA+N-1) * diag(C) .br EQUED = 'B': A(IA:IA+N-1,JA:JA+N-1) := .br diag(R) * A(IA:IA+N-1,JA:JA+N-1) * diag(C). .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). 
.TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 AF (local input or local output) DOUBLE PRECISION pointer into the local memory to an array of local dimension (LLD_AF,LOCc(JA+N-1)). If FACT = 'F', then AF(IAF:IAF+N-1,JAF:JAF+N-1) is an input argument and on entry contains the factors L and U from the factorization A(IA:IA+N-1,JA:JA+N-1) = P*L*U as computed by PDGETRF. If EQUED .ne. 'N', then AF is the factored form of the equilibrated matrix A(IA:IA+N-1,JA:JA+N-1). If FACT = 'N', then AF(IAF:IAF+N-1,JAF:JAF+N-1) is an output argument and on exit returns the factors L and U from the factorization A(IA:IA+N-1,JA:JA+N-1) = P*L*U of the original .br matrix A(IA:IA+N-1,JA:JA+N-1). If FACT = 'E', then AF(IAF:IAF+N-1,JAF:JAF+N-1) is an output argument and on exit returns the factors L and U from the factorization A(IA:IA+N-1,JA:JA+N-1) = P*L*U of the equili- .br brated matrix A(IA:IA+N-1,JA:JA+N-1) (see the description of .br A(IA:IA+N-1,JA:JA+N-1) for the form of the equilibrated matrix). .TP 8 IAF (global input) INTEGER The row index in the global array AF indicating the first row of sub( AF ). .TP 8 JAF (global input) INTEGER The column index in the global array AF indicating the first column of sub( AF ). .TP 8 DESCAF (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix AF. .TP 8 IPIV (local input or local output) INTEGER array, dimension LOCr(M_A)+MB_A. If FACT = 'F', then IPIV is an input argu- ment and on entry contains the pivot indices from the fac- torization A(IA:IA+N-1,JA:JA+N-1) = P*L*U as computed by PDGETRF; IPIV(i) -> The global row local row i was swapped with. This array must be aligned with A( IA:IA+N-1, * ). If FACT = 'N', then IPIV is an output argument and on exit contains the pivot indices from the factorization A(IA:IA+N-1,JA:JA+N-1) = P*L*U of the original matrix .br A(IA:IA+N-1,JA:JA+N-1). 
If FACT = 'E', then IPIV is an output argument and on exit contains the pivot indices from the factorization A(IA:IA+N-1,JA:JA+N-1) = P*L*U of the equilibrated matrix .br A(IA:IA+N-1,JA:JA+N-1). .TP 8 EQUED (global input or global output) CHARACTER Specifies the form of equilibration that was done. = 'N': No equilibration (always true if FACT = 'N'). .br = 'R': Row equilibration, i.e., A(IA:IA+N-1,JA:JA+N-1) has been premultiplied by diag(R). = 'C': Column equilibration, i.e., A(IA:IA+N-1,JA:JA+N-1) has been postmultiplied by diag(C). = 'B': Both row and column equilibration, i.e., .br A(IA:IA+N-1,JA:JA+N-1) has been replaced by .br diag(R) * A(IA:IA+N-1,JA:JA+N-1) * diag(C). EQUED is an input variable if FACT = 'F'; otherwise, it is an output variable. .TP 8 R (local input or local output) DOUBLE PRECISION array, dimension LOCr(M_A). The row scale factors for A(IA:IA+N-1,JA:JA+N-1). .br If EQUED = 'R' or 'B', A(IA:IA+N-1,JA:JA+N-1) is multiplied on the left by diag(R); if EQUED='N' or 'C', R is not acces- sed. R is an input variable if FACT = 'F'; otherwise, R is an output variable. If FACT = 'F' and EQUED = 'R' or 'B', each element of R must be positive. R is replicated in every process column, and is aligned with the distributed matrix A. .TP 8 C (local input or local output) DOUBLE PRECISION array, dimension LOCc(N_A). The column scale factors for A(IA:IA+N-1,JA:JA+N-1). .br If EQUED = 'C' or 'B', A(IA:IA+N-1,JA:JA+N-1) is multiplied on the right by diag(C); if EQUED = 'N' or 'R', C is not accessed. C is an input variable if FACT = 'F'; otherwise, C is an output variable. If FACT = 'F' and EQUED = 'C' or 'B', each element of C must be positive. C is replicated in every process row, and is aligned with the distributed matrix A. .TP 8 B (local input/local output) DOUBLE PRECISION pointer into the local memory to an array of local dimension (LLD_B,LOCc(JB+NRHS-1) ). On entry, the N-by-NRHS right-hand side matrix B(IB:IB+N-1,JB:JB+NRHS-1). 
On exit, if .br EQUED = 'N', B(IB:IB+N-1,JB:JB+NRHS-1) is not modified; if TRANS = 'N' and EQUED = 'R' or 'B', B is overwritten by diag(R)*B(IB:IB+N-1,JB:JB+NRHS-1); if TRANS = 'T' or 'C' .br and EQUED = 'C' or 'B', B(IB:IB+N-1,JB:JB+NRHS-1) is over- .br written by diag(C)*B(IB:IB+N-1,JB:JB+NRHS-1). .TP 8 IB (global input) INTEGER The row index in the global array B indicating the first row of sub( B ). .TP 8 JB (global input) INTEGER The column index in the global array B indicating the first column of sub( B ). .TP 8 DESCB (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix B. .TP 8 X (local input/local output) DOUBLE PRECISION pointer into the local memory to an array of local dimension (LLD_X, LOCc(JX+NRHS-1)). If INFO = 0, the N-by-NRHS solution matrix X(IX:IX+N-1,JX:JX+NRHS-1) to the original .br system of equations. Note that A(IA:IA+N-1,JA:JA+N-1) and .br B(IB:IB+N-1,JB:JB+NRHS-1) are modified on exit if EQUED .ne. 'N', and the solution to the equilibrated system is inv(diag(C))*X(IX:IX+N-1,JX:JX+NRHS-1) if TRANS = 'N' and EQUED = 'C' or 'B', or inv(diag(R))*X(IX:IX+N-1,JX:JX+NRHS-1) if TRANS = 'T' or 'C' and EQUED = 'R' or 'B'. .TP 8 IX (global input) INTEGER The row index in the global array X indicating the first row of sub( X ). .TP 8 JX (global input) INTEGER The column index in the global array X indicating the first column of sub( X ). .TP 8 DESCX (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix X. .TP 8 RCOND (global output) DOUBLE PRECISION The estimate of the reciprocal condition number of the matrix A(IA:IA+N-1,JA:JA+N-1) after equilibration (if done). If RCOND is less than the machine precision (in particular, if RCOND = 0), the matrix is singular to working precision. This condition is indicated by a return code of INFO > 0. 
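To illustrate what RCOND measures, the following pure-Python example (not ScaLAPACK code; names are my own) computes the reciprocal 1-norm condition number of a 2-by-2 matrix exactly via its inverse, and applies the "singular to working precision" test described for RCOND above:

```python
import sys

def one_norm_2x2(m):
    # 1-norm of a 2x2 matrix: maximum absolute column sum.
    return max(abs(m[0][0]) + abs(m[1][0]), abs(m[0][1]) + abs(m[1][1]))

def rcond_2x2(a, b, c, d):
    # Reciprocal condition number 1/(||A||_1 * ||inv(A)||_1) of
    # A = [[a, b], [c, d]], using the exact 2x2 inverse.
    # Returns 0.0 for an exactly singular matrix.
    det = a * d - b * c
    if det == 0.0:
        return 0.0
    inv = [[d / det, -b / det], [-c / det, a / det]]
    return 1.0 / (one_norm_2x2([[a, b], [c, d]]) * one_norm_2x2(inv))

eps = sys.float_info.epsilon
# The routine reports the matrix as singular to working precision
# (INFO = N+1) when RCOND < eps.
```

A well-scaled matrix such as the identity has RCOND = 1, while a matrix with a tiny pivot, e.g. diag(1, 1e-20), has RCOND far below machine precision and would trigger the INFO = N+1 path.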
.TP 8 FERR (local output) DOUBLE PRECISION array, dimension LOCc(N_B) The estimated forward error bounds for each solution vector X(j) (the j-th column of the solution matrix X(IX:IX+N-1,JX:JX+NRHS-1). If XTRUE is the true solution, FERR(j) bounds the magnitude of the largest entry in (X(j) - XTRUE) divided by the magnitude of the largest entry in X(j). The estimate is as reliable as the estimate for RCOND, and is almost always a slight overestimate of the true error. FERR is replicated in every process row, and is aligned with the matrices B and X. .TP 8 BERR (local output) DOUBLE PRECISION array, dimension LOCc(N_B). The componentwise relative backward error of each solution vector X(j) (i.e., the smallest relative change in any entry of A(IA:IA+N-1,JA:JA+N-1) or .br B(IB:IB+N-1,JB:JB+NRHS-1) that makes X(j) an exact solution). BERR is replicated in every process row, and is aligned with the matrices B and X. .TP 8 WORK (local workspace/local output) DOUBLE PRECISION array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. LWORK is local input and must be at least LWORK = MAX( PDGECON( LWORK ), PDGERFS( LWORK ) ) + LOCr( N_A ). If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 IWORK (local workspace/local output) INTEGER array, dimension (LIWORK) On exit, IWORK(1) returns the minimal and optimal LIWORK. .TP 8 LIWORK (local or global input) INTEGER The dimension of the array IWORK. LIWORK is local input and must be at least LIWORK = LOCr(N_A). If LIWORK = -1, then LIWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. 
Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 INFO (global output) INTEGER = 0: successful exit .br < 0: if INFO = -i, the i-th argument had an illegal value .br > 0: if INFO = i, and i is .br <= N: U(IA+I-1,JA+I-1) is exactly zero. The factorization has been completed, but the factor U is exactly singular, so the solution and error bounds could not be computed. = N+1: RCOND is less than machine precision. The factorization has been completed, but the matrix is singular to working precision, and the solution and error bounds have not been computed.
.TH PDGETF2 l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PDGETF2 - compute an LU factorization of a general M-by-N distributed matrix sub( A ) = A(IA:IA+M-1,JA:JA+N-1) using partial pivoting with row interchanges .SH SYNOPSIS .TP 20 SUBROUTINE PDGETF2( M, N, A, IA, JA, DESCA, IPIV, INFO ) .TP 20 .ti +4 INTEGER IA, INFO, JA, M, N .TP 20 .ti +4 INTEGER DESCA( * ), IPIV( * ) .TP 20 .ti +4 DOUBLE PRECISION A( * ) .SH PURPOSE PDGETF2 computes an LU factorization of a general M-by-N distributed matrix sub( A ) = A(IA:IA+M-1,JA:JA+N-1) using partial pivoting with row interchanges. The factorization has the form sub( A ) = P * L * U, where P is a permutation matrix, L is lower triangular with unit diagonal elements (lower trapezoidal if m > n), and U is upper triangular (upper trapezoidal if m < n). .br This is the right-looking Parallel Level 2 BLAS version of the algorithm. .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array.
Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). 
An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br This routine requires N <= NB_A-MOD(JA-1, NB_A) and square block decomposition ( MB_A = NB_A ). .br .SH ARGUMENTS .TP 8 M (global input) INTEGER The number of rows to be operated on, i.e. the number of rows of the distributed submatrix sub( A ). M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on, i.e. the number of columns of the distributed submatrix sub( A ). NB_A-MOD(JA-1, NB_A) >= N >= 0. .TP 8 A (local input/local output) DOUBLE PRECISION pointer into the local memory to an array of dimension (LLD_A, LOCc(JA+N-1)). On entry, this array contains the local pieces of the M-by-N distributed matrix sub( A ). On exit, this array contains the local pieces of the factors L and U from the factoriza- tion sub( A ) = P*L*U; the unit diagonal elements of L are not stored. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 IPIV (local output) INTEGER array, dimension ( LOCr(M_A)+MB_A ) This array contains the pivoting information. IPIV(i) -> The global row local row i was swapped with. This array is tied to the distributed matrix A. .TP 8 INFO (local output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. > 0: If INFO = K, U(IA+K-1,JA+K-1) is exactly zero. The factorization has been completed, but the factor U is exactly singular, and division by zero will occur if it is used to solve a system of equations. 
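The notes above express local storage sizes through the ScaLAPACK tool function NUMROC and give upper bounds for LOCr and LOCc. A plain-Python transcription of the standard NUMROC algorithm (an illustrative sketch for checking these quantities, not a replacement for the Fortran routine) makes the bound easy to verify:

```python
import math

def numroc(n, nb, iproc, isrcproc, nprocs):
    """Number of entries of an n-long dimension, blocked in chunks of nb,
    that land on process iproc when the first block lives on process
    isrcproc and nprocs processes share the dimension.
    (Plain-Python sketch of the ScaLAPACK TOOLS routine NUMROC.)"""
    mydist = (nprocs + iproc - isrcproc) % nprocs
    nblocks = n // nb                   # number of full blocks
    num = (nblocks // nprocs) * nb      # full rounds of blocks, one per process
    extrablocks = nblocks % nprocs
    if mydist < extrablocks:
        num += nb                       # one extra full block
    elif mydist == extrablocks:
        num += n % nb                   # the trailing partial block
    return num

# Example: M = 10 rows, MB_A = 2, NPROW = 2, RSRC_A = 0.
locr = [numroc(10, 2, p, 0, 2) for p in range(2)]   # -> [6, 4]
# Upper bound from the notes: LOCr(M) <= ceil( ceil(M/MB_A)/NPROW )*MB_A
bound = math.ceil(math.ceil(10 / 2) / 2) * 2        # -> 6
```

The per-process counts always sum to the global dimension, and each stays within the ceil-based bound quoted in the notes.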
scalapack-doc-1.5/man/manl/pdgetrf.l0100644000056400000620000001261206335610627017061 0ustar pfrauenfstaff.TH PDGETRF l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PDGETRF - compute an LU factorization of a general M-by-N distributed matrix sub( A ) = (IA:IA+M-1,JA:JA+N-1) using partial pivoting with row interchanges .SH SYNOPSIS .TP 20 SUBROUTINE PDGETRF( M, N, A, IA, JA, DESCA, IPIV, INFO ) .TP 20 .ti +4 INTEGER IA, INFO, JA, M, N .TP 20 .ti +4 INTEGER DESCA( * ), IPIV( * ) .TP 20 .ti +4 DOUBLE PRECISION A( * ) .SH PURPOSE PDGETRF computes an LU factorization of a general M-by-N distributed matrix sub( A ) = (IA:IA+M-1,JA:JA+N-1) using partial pivoting with row interchanges. The factorization has the form sub( A ) = P * L * U, where P is a permutation matrix, L is lower triangular with unit diagonal ele- ments (lower trapezoidal if m > n), and U is upper triangular (upper trapezoidal if m < n). L and U are stored in sub( A ). This is the right-looking Parallel Level 3 BLAS version of the algorithm. .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. 
.br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br This routine requires square block decomposition ( MB_A = NB_A ). .SH ARGUMENTS .TP 8 M (global input) INTEGER The number of rows to be operated on, i.e. the number of rows of the distributed submatrix sub( A ). M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on, i.e. the number of columns of the distributed submatrix sub( A ). N >= 0. .TP 8 A (local input/local output) DOUBLE PRECISION pointer into the local memory to an array of dimension (LLD_A, LOCc(JA+N-1)). On entry, this array contains the local pieces of the M-by-N distributed matrix sub( A ) to be factored. 
On exit, this array contains the local pieces of the factors L and U from the factorization sub( A ) = P*L*U; the unit diagonal ele- ments of L are not stored. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 IPIV (local output) INTEGER array, dimension ( LOCr(M_A)+MB_A ) This array contains the pivoting information. IPIV(i) -> The global row local row i was swapped with. This array is tied to the distributed matrix A. .TP 8 INFO (global output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. > 0: If INFO = K, U(IA+K-1,JA+K-1) is exactly zero. The factorization has been completed, but the factor U is exactly singular, and division by zero will occur if it is used to solve a system of equations. scalapack-doc-1.5/man/manl/pdgetri.l0100644000056400000620000001445006335610627017066 0ustar pfrauenfstaff.TH PDGETRI l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PDGETRI - compute the inverse of a distributed matrix using the LU factorization computed by PDGETRF .SH SYNOPSIS .TP 20 SUBROUTINE PDGETRI( N, A, IA, JA, DESCA, IPIV, WORK, LWORK, IWORK, LIWORK, INFO ) .TP 20 .ti +4 INTEGER IA, INFO, JA, LIWORK, LWORK, N .TP 20 .ti +4 INTEGER DESCA( * ), IPIV( * ), IWORK( * ) .TP 20 .ti +4 DOUBLE PRECISION A( * ), WORK( * ) .SH PURPOSE PDGETRI computes the inverse of a distributed matrix using the LU factorization computed by PDGETRF. This method inverts U and then computes the inverse of sub( A ) = A(IA:IA+N-1,JA:JA+N-1) denoted InvA by solving the system InvA*L = inv(U) for InvA. 
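The method described in PURPOSE (invert U, then solve InvA*L = inv(U)) can be mimicked on an ordinary dense example. The following plain-Python 2-by-2 toy (the matrix, helper names, and closed-form inverses are mine for illustration, not ScaLAPACK code) factors A = P*L*U with one row interchange and recovers inv(A) the same way, finishing with the column interchange that undoes the pivoting:

```python
# Fixed example: A = P*L*U with partial pivoting swaps rows 1 and 2.
A = [[4.0, 3.0], [6.0, 3.0]]
L = [[1.0, 0.0], [2.0 / 3.0, 1.0]]   # unit lower triangular
U = [[6.0, 3.0], [0.0, 1.0]]         # upper triangular

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# Closed-form inverses of the 2x2 triangular factors.
invL = [[1.0, 0.0], [-L[1][0], 1.0]]
invU = [[1.0 / U[0][0], -U[0][1] / (U[0][0] * U[1][1])],
        [0.0, 1.0 / U[1][1]]]

# Solve InvA0 * L = inv(U)  =>  InvA0 = inv(U) * inv(L).
InvA0 = matmul(invU, invL)

# Undo the pivoting: since P swapped rows 1 and 2 during factorization,
# inv(A) is InvA0 with its columns 1 and 2 swapped.
invA = [[row[1], row[0]] for row in InvA0]

I2 = matmul(invA, A)   # approximately the 2x2 identity
```

This is only the arithmetic skeleton; PDGETRI performs the same steps blockwise on the distributed factors produced by PDGETRF.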
.br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. 
.br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 N (global input) INTEGER The number of rows and columns to be operated on, i.e. the order of the distributed submatrix sub( A ). N >= 0. .TP 8 A (local input/local output) DOUBLE PRECISION pointer into the local memory to an array of dimension (LLD_A,LOCc(JA+N-1)). On entry, the local pieces of the L and U obtained by the factorization sub( A ) = P*L*U computed by PDGETRF. On exit, if INFO = 0, sub( A ) contains the inverse of the original distributed matrix sub( A ). .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 IPIV (local input) INTEGER array, dimension LOCr(M_A)+MB_A keeps track of the pivoting information. IPIV(i) is the global row index the local row i was swapped with. This array is tied to the distributed matrix A. .TP 8 WORK (local workspace/local output) DOUBLE PRECISION array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. LWORK is local input and must be at least LWORK = LOCr(N+MOD(IA-1,MB_A))*NB_A. WORK is used to keep a copy of at most an entire column block of sub( A ). If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. 
Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 IWORK (local workspace/local output) INTEGER array, dimension (LIWORK) On exit, IWORK(1) returns the minimal and optimal LIWORK. .TP 8 LIWORK (local or global input) INTEGER The dimension of the array IWORK used as workspace for physically transposing the pivots. LIWORK is local input and must be at least if NPROW == NPCOL then LIWORK = LOCc( N_A + MOD(JA-1, NB_A) ) + NB_A, else LIWORK = LOCc( N_A + MOD(JA-1, NB_A) ) + MAX( CEIL(CEIL(LOCr(M_A)/MB_A)/(LCM/NPROW)), NB_A ) where LCM is the least common multiple of process rows and columns (NPROW and NPCOL). end if If LIWORK = -1, then LIWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 INFO (global output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. > 0: If INFO = K, U(IA+K-1,JA+K-1) is exactly zero; the matrix is singular and its inverse could not be computed. 
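The INFO convention above packs both the argument number and, for array arguments, the offending entry into a single negative code. A small helper (plain Python; the function name and return shape are my own, chosen for illustration) shows how such a code is decoded:

```python
def decode_info(info):
    """Decode the global INFO convention used by these routines:
    INFO = -(i*100 + j): entry j of the i-th (array) argument was illegal,
    INFO = -i with 0 < i < 100: the i-th (scalar) argument was illegal,
    INFO = 0: success; INFO > 0 is routine-specific (e.g. a zero pivot)."""
    if info == 0:
        return ("success", None, None)
    if info > 0:
        return ("routine-specific", info, None)
    code = -info
    if code > 100:
        return ("array argument", code // 100, code % 100)
    return ("scalar argument", code, None)

# e.g. an illegal entry 2 in argument 6 is reported as INFO = -(6*100+2) = -602
```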
scalapack-doc-1.5/man/manl/pdgetrs.l0100644000056400000620000001331306335610627017075 0ustar pfrauenfstaff.TH PDGETRS l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PDGETRS - solve a system of distributed linear equations op( sub( A ) ) * X = sub( B ) with a general N-by-N distributed matrix sub( A ) using the LU factorization computed by PDGETRF .SH SYNOPSIS .TP 20 SUBROUTINE PDGETRS( TRANS, N, NRHS, A, IA, JA, DESCA, IPIV, B, IB, JB, DESCB, INFO ) .TP 20 .ti +4 CHARACTER TRANS .TP 20 .ti +4 INTEGER IA, IB, INFO, JA, JB, N, NRHS .TP 20 .ti +4 INTEGER DESCA( * ), DESCB( * ), IPIV( * ) .TP 20 .ti +4 DOUBLE PRECISION A( * ), B( * ) .SH PURPOSE PDGETRS solves a system of distributed linear equations sub( A ) denotes A(IA:IA+N-1,JA:JA+N-1), op( A ) = A or A**T and sub( B ) denotes B(IB:IB+N-1,JB:JB+NRHS-1). .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. 
.br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br This routine requires square block data decomposition ( MB_A=NB_A ). .SH ARGUMENTS .TP 8 TRANS (global input) CHARACTER Specifies the form of the system of equations: .br = 'N': sub( A ) * X = sub( B ) (No transpose) .br = 'T': sub( A )**T * X = sub( B ) (Transpose) .br = 'C': sub( A )**T * X = sub( B ) (Transpose) .TP 8 N (global input) INTEGER The number of rows and columns to be operated on, i.e. the order of the distributed submatrix sub( A ). N >= 0. .TP 8 NRHS (global input) INTEGER The number of right hand sides, i.e., the number of columns of the distributed submatrix sub( B ). NRHS >= 0. .TP 8 A (local input) DOUBLE PRECISION pointer into the local memory to an array of dimension (LLD_A, LOCc(JA+N-1)). 
On entry, this array contains the local pieces of the factors L and U from the factorization sub( A ) = P*L*U; the unit diagonal elements of L are not stored. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 IPIV (local input) INTEGER array, dimension ( LOCr(M_A)+MB_A ) This array contains the pivoting information. IPIV(i) -> The global row local row i was swapped with. This array is tied to the distributed matrix A. .TP 8 B (local input/local output) DOUBLE PRECISION pointer into the local memory to an array of dimension (LLD_B,LOCc(JB+NRHS-1)). On entry, the right hand sides sub( B ). On exit, sub( B ) is overwritten by the solution distributed matrix X. .TP 8 IB (global input) INTEGER The row index in the global array B indicating the first row of sub( B ). .TP 8 JB (global input) INTEGER The column index in the global array B indicating the first column of sub( B ). .TP 8 DESCB (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix B. .TP 8 INFO (global output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. 
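The IPIV description above records, for each local row, the global row it was swapped with, which ties local and global indexing together. The standard block-cyclic index formulas can be sketched in plain Python (simplified transcriptions of the ScaLAPACK TOOLS functions INDXG2P and INDXG2L; the real routines carry additional, unused arguments in their signatures):

```python
def indxg2p(ig, nb, isrcproc, nprocs):
    """Process coordinate owning global index ig (1-based) of a dimension
    distributed in blocks of nb whose first block sits on process isrcproc.
    (Simplified sketch of ScaLAPACK's INDXG2P.)"""
    return (isrcproc + (ig - 1) // nb) % nprocs

def indxg2l(ig, nb, nprocs):
    """Local (1-based) index of global index ig on its owning process.
    (Simplified sketch of ScaLAPACK's INDXG2L.)"""
    return nb * ((ig - 1) // (nb * nprocs)) + (ig - 1) % nb + 1

# Block size MB_A = 2, NPROW = 2, RSRC_A = 0:
# global rows 1,2 -> process 0 (local 1,2); rows 3,4 -> process 1 (local 1,2);
# rows 5,6 -> process 0 again (local 3,4); and so on.
```

With these two maps, a pivot entry IPIV(i) on a given process can be related back to the global row it names, which is why IPIV is said to be "tied to the distributed matrix A".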
scalapack-doc-1.5/man/manl/pdggqrf.l0100644000056400000620000002361206335610630017054 0ustar pfrauenfstaff.TH PDGGQRF l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PDGGQRF - compute a generalized QR factorization of an N-by-M matrix sub( A ) = A(IA:IA+N-1,JA:JA+M-1) and an N-by-P matrix sub( B ) = B(IB:IB+N-1,JB:JB+P-1) .SH SYNOPSIS .TP 20 SUBROUTINE PDGGQRF( N, M, P, A, IA, JA, DESCA, TAUA, B, IB, JB, DESCB, TAUB, WORK, LWORK, INFO ) .TP 20 .ti +4 INTEGER IA, IB, INFO, JA, JB, LWORK, M, N, P .TP 20 .ti +4 INTEGER DESCA( * ), DESCB( * ) .TP 20 .ti +4 DOUBLE PRECISION A( * ), B( * ), TAUA( * ), TAUB( * ), WORK( * ) .SH PURPOSE PDGGQRF computes a generalized QR factorization of an N-by-M matrix sub( A ) = A(IA:IA+N-1,JA:JA+M-1) and an N-by-P matrix sub( B ) = B(IB:IB+N-1,JB:JB+P-1): sub( A ) = Q*R, sub( B ) = Q*T*Z, .br where Q is an N-by-N orthogonal matrix, Z is a P-by-P orthogonal matrix, and R and T assume one of the forms: .br if N >= M, R = ( R11 ) M , or if N < M, R = ( R11 R12 ) N, ( 0 ) N-M N M-N M .br where R11 is upper triangular, and .br if N <= P, T = ( 0 T12 ) N, or if N > P, T = ( T11 ) N-P, P-N N ( T21 ) P P .br where T12 or T21 is upper triangular. .br In particular, if sub( B ) is square and nonsingular, the GQR factorization of sub( A ) and sub( B ) implicitly gives the QR factorization of inv( sub( B ) )* sub( A ): .br inv( sub( B ) )*sub( A )= Z'*(inv(T)*R) .br where inv( sub( B ) ) denotes the inverse of the matrix sub( B ), and Z' denotes the transpose of matrix Z. .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. 
In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). 
An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 N (global input) INTEGER The number of rows to be operated on i.e the number of rows of the distributed submatrices sub( A ) and sub( B ). N >= 0. .TP 8 M (global input) INTEGER The number of columns to be operated on i.e the number of columns of the distributed submatrix sub( A ). M >= 0. .TP 8 P (global input) INTEGER The number of columns to be operated on i.e the number of columns of the distributed submatrix sub( B ). P >= 0. .TP 8 A (local input/local output) DOUBLE PRECISION pointer into the local memory to an array of dimension (LLD_A, LOCc(JA+M-1)). On entry, the local pieces of the N-by-M distributed matrix sub( A ) which is to be factored. On exit, the elements on and above the diagonal of sub( A ) contain the min(N,M) by M upper trapezoidal matrix R (R is upper triangular if N >= M); the elements below the diagonal, with the array TAUA, represent the orthogonal matrix Q as a product of min(N,M) elementary reflectors (see Further Details). IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 TAUA (local output) DOUBLE PRECISION, array, dimension LOCc(JA+MIN(N,M)-1). This array contains the scalar factors TAUA of the elementary reflectors which represent the orthogonal matrix Q. TAUA is tied to the distributed matrix A. (see Further Details). B (local input/local output) DOUBLE PRECISION pointer into the local memory to an array of dimension (LLD_B, LOCc(JB+P-1)). On entry, the local pieces of the N-by-P distributed matrix sub( B ) which is to be factored. 
On exit, if N <= P, the upper triangle of B(IB:IB+N-1,JB+P-N:JB+P-1) contains the N by N upper triangular matrix T; if N > P, the elements on and above the (N-P)-th subdiagonal contain the N by P upper trapezoidal matrix T; the remaining elements, with the array TAUB, represent the orthogonal matrix Z as a product of elementary reflectors (see Further Details). IB (global input) INTEGER The row index in the global array B indicating the first row of sub( B ). .TP 8 JB (global input) INTEGER The column index in the global array B indicating the first column of sub( B ). .TP 8 DESCB (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix B. .TP 8 TAUB (local output) DOUBLE PRECISION, array, dimension LOCr(IB+N-1) This array contains the scalar factors of the elementary reflectors which represent the orthogonal unitary matrix Z. TAUB is tied to the distributed matrix B (see Further Details). .TP 8 WORK (local workspace/local output) DOUBLE PRECISION array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. 
LWORK is local input and must be at least LWORK >= MAX( NB_A * ( NpA0 + MqA0 + NB_A ), MAX( (NB_A*(NB_A-1))/2, (PqB0 + NpB0)*NB_A ) + NB_A * NB_A, MB_B * ( NpB0 + PqB0 + MB_B ) ), where IROFFA = MOD( IA-1, MB_A ), ICOFFA = MOD( JA-1, NB_A ), IAROW = INDXG2P( IA, MB_A, MYROW, RSRC_A, NPROW ), IACOL = INDXG2P( JA, NB_A, MYCOL, CSRC_A, NPCOL ), NpA0 = NUMROC( N+IROFFA, MB_A, MYROW, IAROW, NPROW ), MqA0 = NUMROC( M+ICOFFA, NB_A, MYCOL, IACOL, NPCOL ), IROFFB = MOD( IB-1, MB_B ), ICOFFB = MOD( JB-1, NB_B ), IBROW = INDXG2P( IB, MB_B, MYROW, RSRC_B, NPROW ), IBCOL = INDXG2P( JB, NB_B, MYCOL, CSRC_B, NPCOL ), NpB0 = NUMROC( N+IROFFB, MB_B, MYROW, IBROW, NPROW ), PqB0 = NUMROC( P+ICOFFB, NB_B, MYCOL, IBCOL, NPCOL ), and NUMROC, INDXG2P are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 INFO (global output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. .SH FURTHER DETAILS The matrix Q is represented as a product of elementary reflectors Q = H(ja) H(ja+1) . . . H(ja+k-1), where k = min(n,m). Each H(i) has the form .br H(i) = I - taua * v * v' .br where taua is a real scalar, and v is a real vector with .br v(1:i-1) = 0 and v(i) = 1; v(i+1:n) is stored on exit in .br A(ia+i:ia+n-1,ja+i-1), and taua in TAUA(ja+i-1). .br To form Q explicitly, use ScaLAPACK subroutine PDORGQR. .br To use Q to update another matrix, use ScaLAPACK subroutine PDORMQR. The matrix Z is represented as a product of elementary reflectors Z = H(ib) H(ib+1) . . . H(ib+k-1), where k = min(n,p). 
Each H(i) has the form .br H(i) = I - taub * v * v' .br where taub is a real scalar, and v is a real vector with .br v(p-k+i+1:p) = 0 and v(p-k+i) = 1; v(1:p-k+i-1) is stored on exit in B(ib+n-k+i-1,jb:jb+p-k+i-2), and taub in TAUB(ib+n-k+i-1). To form Z explicitly, use ScaLAPACK subroutine PDORGRQ. .br To use Z to update another matrix, use ScaLAPACK subroutine PDORMRQ. Alignment requirements .br ====================== .br The distributed submatrices sub( A ) and sub( B ) must verify some alignment properties, namely the following expression should be true: ( MB_A.EQ.MB_B .AND. IROFFA.EQ.IROFFB .AND. IAROW.EQ.IBROW ) scalapack-doc-1.5/man/manl/pdggrqf.l0100644000056400000620000002351206335610630017053 0ustar pfrauenfstaff.TH PDGGRQF l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PDGGRQF - compute a generalized RQ factorization of an M-by-N matrix sub( A ) = A(IA:IA+M-1,JA:JA+N-1) .SH SYNOPSIS .TP 20 SUBROUTINE PDGGRQF( M, P, N, A, IA, JA, DESCA, TAUA, B, IB, JB, DESCB, TAUB, WORK, LWORK, INFO ) .TP 20 .ti +4 INTEGER IA, IB, INFO, JA, JB, LWORK, M, N, P .TP 20 .ti +4 INTEGER DESCA( * ), DESCB( * ) .TP 20 .ti +4 DOUBLE PRECISION A( * ), B( * ), TAUA( * ), TAUB( * ), WORK( * ) .SH PURPOSE PDGGRQF computes a generalized RQ factorization of an M-by-N matrix sub( A ) = A(IA:IA+M-1,JA:JA+N-1) and a P-by-N matrix sub( B ) = B(IB:IB+P-1,JB:JB+N-1): .br sub( A ) = R*Q, sub( B ) = Z*T*Q, .br where Q is an N-by-N orthogonal matrix, Z is a P-by-P orthogonal matrix, and R and T assume one of the forms: .br if M <= N, R = ( 0 R12 ) M, or if M > N, R = ( R11 ) M-N, N-M M ( R21 ) N N .br where R12 or R21 is upper triangular, and .br if P >= N, T = ( T11 ) N , or if P < N, T = ( T11 T12 ) P, ( 0 ) P-N P N-P N .br where T11 is upper triangular. 
.br In particular, if sub( B ) is square and nonsingular, the GRQ factorization of sub( A ) and sub( B ) implicitly gives the RQ factorization of sub( A )*inv( sub( B ) ): .br sub( A )*inv( sub( B ) ) = (R*inv(T))*Z' .br where inv( sub( B ) ) denotes the inverse of the matrix sub( B ), and Z' denotes the transpose of matrix Z. .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. 
.br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 M (global input) INTEGER The number of rows to be operated on i.e the number of rows of the distributed submatrix sub( A ). M >= 0. .TP 8 P (global input) INTEGER The number of rows to be operated on i.e the number of rows of the distributed submatrix sub( B ). P >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on i.e the number of columns of the distributed submatrices sub( A ) and sub( B ). N >= 0. .TP 8 A (local input/local output) DOUBLE PRECISION pointer into the local memory to an array of dimension (LLD_A, LOCc(JA+N-1)). On entry, the local pieces of the M-by-N distributed matrix sub( A ) which is to be factored. On exit, if M <= N, the upper triangle of A( IA:IA+M-1, JA+N-M:JA+N-1 ) contains the M by M upper triangular matrix R; if M >= N, the elements on and above the (M-N)-th subdiagonal contain the M by N upper trapezoidal matrix R; the remaining elements, with the array TAUA, represent the orthogonal matrix Q as a product of elementary reflectors (see Further Details). IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. 
The array descriptor for the distributed matrix A. .TP 8 TAUA (local output) DOUBLE PRECISION, array, dimension LOCr(IA+M-1) This array contains the scalar factors of the elementary reflectors which represent the orthogonal matrix Q. TAUA is tied to the distributed matrix A (see Further Details). .TP 8 B (local input/local output) DOUBLE PRECISION pointer into the local memory to an array of dimension (LLD_B, LOCc(JB+N-1)). On entry, the local pieces of the P-by-N distributed matrix sub( B ) which is to be factored. On exit, the elements on and above the diagonal of sub( B ) contain the min(P,N) by N upper trapezoidal matrix T (T is upper triangular if P >= N); the elements below the diagonal, with the array TAUB, represent the orthogonal matrix Z as a product of elementary reflectors (see Further Details). .TP 8 IB (global input) INTEGER The row index in the global array B indicating the first row of sub( B ). .TP 8 JB (global input) INTEGER The column index in the global array B indicating the first column of sub( B ). .TP 8 DESCB (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix B. .TP 8 TAUB (local output) DOUBLE PRECISION, array, dimension LOCc(JB+MIN(P,N)-1). This array contains the scalar factors TAUB of the elementary reflectors which represent the orthogonal matrix Z. TAUB is tied to the distributed matrix B (see Further Details). .TP 8 WORK (local workspace/local output) DOUBLE PRECISION array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. 
LWORK is local input and must be at least LWORK >= MAX( MB_A * ( MpA0 + NqA0 + MB_A ), MAX( (MB_A*(MB_A-1))/2, (PpB0 + NqB0)*MB_A ) + MB_A * MB_A, NB_B * ( PpB0 + NqB0 + NB_B ) ), where IROFFA = MOD( IA-1, MB_A ), ICOFFA = MOD( JA-1, NB_A ), IAROW = INDXG2P( IA, MB_A, MYROW, RSRC_A, NPROW ), IACOL = INDXG2P( JA, NB_A, MYCOL, CSRC_A, NPCOL ), MpA0 = NUMROC( M+IROFFA, MB_A, MYROW, IAROW, NPROW ), NqA0 = NUMROC( N+ICOFFA, NB_A, MYCOL, IACOL, NPCOL ), IROFFB = MOD( IB-1, MB_B ), ICOFFB = MOD( JB-1, NB_B ), IBROW = INDXG2P( IB, MB_B, MYROW, RSRC_B, NPROW ), IBCOL = INDXG2P( JB, NB_B, MYCOL, CSRC_B, NPCOL ), PpB0 = NUMROC( P+IROFFB, MB_B, MYROW, IBROW, NPROW ), NqB0 = NUMROC( N+ICOFFB, NB_B, MYCOL, IBCOL, NPCOL ), and NUMROC, INDXG2P are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 INFO (global output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. .SH FURTHER DETAILS The matrix Q is represented as a product of elementary reflectors Q = H(ia) H(ia+1) . . . H(ia+k-1), where k = min(m,n). Each H(i) has the form .br H(i) = I - taua * v * v' .br where taua is a real scalar, and v is a real vector with .br v(n-k+i+1:n) = 0 and v(n-k+i) = 1; v(1:n-k+i-1) is stored on exit in A(ia+m-k+i-1,ja:ja+n-k+i-2), and taua in TAUA(ia+m-k+i-1). To form Q explicitly, use ScaLAPACK subroutine PDORGRQ. .br To use Q to update another matrix, use ScaLAPACK subroutine PDORMRQ. The matrix Z is represented as a product of elementary reflectors Z = H(jb) H(jb+1) . . . 
H(jb+k-1), where k = min(p,n). Each H(i) has the form .br H(i) = I - taub * v * v' .br where taub is a real scalar, and v is a real vector with .br v(1:i-1) = 0 and v(i) = 1; v(i+1:p) is stored on exit in .br B(ib+i:ib+p-1,jb+i-1), and taub in TAUB(jb+i-1). .br To form Z explicitly, use ScaLAPACK subroutine PDORGQR. .br To use Z to update another matrix, use ScaLAPACK subroutine PDORMQR. Alignment requirements .br ====================== .br The distributed submatrices sub( A ) and sub( B ) must verify some alignment properties, namely the following expression should be true: ( NB_A.EQ.NB_B .AND. ICOFFA.EQ.ICOFFB .AND. IACOL.EQ.IBCOL ) scalapack-doc-1.5/man/manl/pdlabad.l0100644000056400000620000000313406335610630017006 0ustar pfrauenfstaff.TH PDLABAD l "12 May 1997" "LAPACK version 1.5" "LAPACK auxiliary routine (version 1.5)" .SH NAME PDLABAD - take as input the values computed by PDLAMCH for underflow and overflow, and returns the square root of each of these values if the log of LARGE is sufficiently large .SH SYNOPSIS .TP 20 SUBROUTINE PDLABAD( ICTXT, SMALL, LARGE ) .TP 20 .ti +4 INTEGER ICTXT .TP 20 .ti +4 DOUBLE PRECISION LARGE, SMALL .SH PURPOSE PDLABAD takes as input the values computed by PDLAMCH for underflow and overflow, and returns the square root of each of these values if the log of LARGE is sufficiently large. This subroutine is intended to identify machines with a large exponent range, such as the Crays, and redefine the underflow and overflow limits to be the square roots of the values computed by PDLAMCH. This subroutine is needed because PDLAMCH does not compensate for poor arithmetic in the upper half of the exponent range, as is found on a Cray. .br In addition, this routine performs a global minimization and maximi- zation on these values, to support heterogeneous computing networks. .SH ARGUMENTS .TP 8 ICTXT (global input) INTEGER The BLACS context handle in which the computation takes place. 
.TP 8 SMALL (local input/local output) DOUBLE PRECISION On entry, the underflow threshold as computed by PDLAMCH. On exit, if LOG10(LARGE) is sufficiently large, the square root of SMALL, otherwise unchanged. .TP 8 LARGE (local input/local output) DOUBLE PRECISION On entry, the overflow threshold as computed by PDLAMCH. On exit, if LOG10(LARGE) is sufficiently large, the square root of LARGE, otherwise unchanged. scalapack-doc-1.5/man/manl/pdlabrd.l0100644000056400000620000002345006335610630017032 0ustar pfrauenfstaff.TH PDLABRD l "12 May 1997" "LAPACK version 1.5" "LAPACK auxiliary routine (version 1.5)" .SH NAME PDLABRD - reduce the first NB rows and columns of a real general M-by-N distributed matrix sub( A ) = A(IA:IA+M-1,JA:JA+N-1) to upper or lower bidiagonal form by an orthogonal transformation Q' * A * P, .SH SYNOPSIS .TP 20 SUBROUTINE PDLABRD( M, N, NB, A, IA, JA, DESCA, D, E, TAUQ, TAUP, X, IX, JX, DESCX, Y, IY, JY, DESCY, WORK ) .TP 20 .ti +4 INTEGER IA, IX, IY, JA, JX, JY, M, N, NB .TP 20 .ti +4 INTEGER DESCA( * ), DESCX( * ), DESCY( * ) .TP 20 .ti +4 DOUBLE PRECISION A( * ), D( * ), E( * ), TAUP( * ), TAUQ( * ), X( * ), Y( * ), WORK( * ) .SH PURPOSE PDLABRD reduces the first NB rows and columns of a real general M-by-N distributed matrix sub( A ) = A(IA:IA+M-1,JA:JA+N-1) to upper or lower bidiagonal form by an orthogonal transformation Q' * A * P, and returns the matrices X and Y which are needed to apply the transformation to the unreduced part of sub( A ). .br If M >= N, sub( A ) is reduced to upper bidiagonal form; if M < N, to lower bidiagonal form. .br This is an auxiliary routine called by PDGEBRD. .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. 
Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). 
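The NUMROC counts above can be sketched directly. The Python below is an illustrative reimplementation of the block-cyclic counting rule described in these Notes (a sketch, not the Fortran tool function itself); the sizes in the example are made up.

```python
def numroc(n, nb, iproc, isrcproc, nprocs):
    """Local count of an n-element dimension, distributed in blocks of
    size nb, owned by process iproc (of nprocs), when the first block
    resides on process isrcproc."""
    mydist = (nprocs + iproc - isrcproc) % nprocs  # distance from the source process
    nblocks = n // nb                              # number of full blocks
    count = (nblocks // nprocs) * nb               # full blocks every process owns
    extra = nblocks % nprocs                       # leftover full blocks
    if mydist < extra:
        count += nb                                # this process gets one extra full block
    elif mydist == extra:
        count += n % nb                            # this process gets the final partial block
    return count

# A 13-element dimension in blocks of 4 over 3 processes of a row/column:
counts = [numroc(13, 4, p, 0, 3) for p in range(3)]
```

With these inputs the local counts come out as [5, 4, 4]: they sum to 13 and each stays within the upper bound ceil( ceil(13/4)/3 )*4 = 8 quoted above.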
An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 M (global input) INTEGER The number of rows to be operated on, i.e. the number of rows of the distributed submatrix sub( A ). M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on, i.e. the number of columns of the distributed submatrix sub( A ). N >= 0. .TP 8 NB (global input) INTEGER The number of leading rows and columns of sub( A ) to be reduced. .TP 8 A (local input/local output) DOUBLE PRECISION pointer into the local memory to an array of dimension (LLD_A,LOCc(JA+N-1)). On entry, this array contains the local pieces of the general distributed matrix sub( A ) to be reduced. On exit, the first NB rows and columns of the matrix are overwritten; the rest of the distributed matrix sub( A ) is unchanged. If m >= n, elements on and below the diagonal in the first NB columns, with the array TAUQ, represent the orthogonal matrix Q as a product of elementary reflectors; and elements above the diagonal in the first NB rows, with the array TAUP, represent the orthogonal matrix P as a product of elementary reflectors. If m < n, elements below the diagonal in the first NB columns, with the array TAUQ, represent the orthogonal matrix Q as a product of elementary reflectors, and elements on and above the diagonal in the first NB rows, with the array TAUP, represent the orthogonal matrix P as a product of elementary reflectors. See Further Details. IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. 
.TP 8 D (local output) DOUBLE PRECISION array, dimension LOCr(IA+MIN(M,N)-1) if M >= N; LOCc(JA+MIN(M,N)-1) otherwise. The distributed diagonal elements of the bidiagonal matrix B: D(i) = A(ia+i-1,ja+i-1). D is tied to the distributed matrix A. .TP 8 E (local output) DOUBLE PRECISION array, dimension LOCr(IA+MIN(M,N)-1) if M >= N; LOCc(JA+MIN(M,N)-2) otherwise. The distributed off-diagonal elements of the bidiagonal distributed matrix B: if m >= n, E(i) = A(ia+i-1,ja+i) for i = 1,2,...,n-1; if m < n, E(i) = A(ia+i,ja+i-1) for i = 1,2,...,m-1. E is tied to the distributed matrix A. .TP 8 TAUQ (local output) DOUBLE PRECISION array dimension LOCc(JA+MIN(M,N)-1). The scalar factors of the elementary reflectors which represent the orthogonal matrix Q. TAUQ is tied to the distributed matrix A. See Further Details. TAUP (local output) DOUBLE PRECISION array, dimension LOCr(IA+MIN(M,N)-1). The scalar factors of the elementary reflectors which represent the orthogonal matrix P. TAUP is tied to the distributed matrix A. See Further Details. X (local output) DOUBLE PRECISION pointer into the local memory to an array of dimension (LLD_X,NB). On exit, the local pieces of the distributed M-by-NB matrix X(IX:IX+M-1,JX:JX+NB-1) required to update the unreduced part of sub( A ). .TP 8 IX (global input) INTEGER The row index in the global array X indicating the first row of sub( X ). .TP 8 JX (global input) INTEGER The column index in the global array X indicating the first column of sub( X ). .TP 8 DESCX (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix X. .TP 8 Y (local output) DOUBLE PRECISION pointer into the local memory to an array of dimension (LLD_Y,NB). On exit, the local pieces of the distributed N-by-NB matrix Y(IY:IY+N-1,JY:JY+NB-1) required to update the unreduced part of sub( A ). .TP 8 IY (global input) INTEGER The row index in the global array Y indicating the first row of sub( Y ). 
.TP 8 JY (global input) INTEGER The column index in the global array Y indicating the first column of sub( Y ). .TP 8 DESCY (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix Y. .TP 8 WORK (local workspace) DOUBLE PRECISION array, dimension (LWORK) LWORK >= NB_A + NQ, with NQ = NUMROC( N+MOD( IA-1, NB_Y ), NB_Y, MYCOL, IACOL, NPCOL ) IACOL = INDXG2P( JA, NB_A, MYCOL, CSRC_A, NPCOL ) INDXG2P and NUMROC are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. .SH FURTHER DETAILS The matrices Q and P are represented as products of elementary reflectors: .br Q = H(1) H(2) . . . H(nb) and P = G(1) G(2) . . . G(nb) Each H(i) and G(i) has the form: .br H(i) = I - tauq * v * v' and G(i) = I - taup * u * u' where tauq and taup are real scalars, and v and u are real vectors. If m >= n, v(1:i-1) = 0, v(i) = 1, and v(i:m) is stored on exit in A(ia+i-1:ia+m-1,ja+i-1); u(1:i) = 0, u(i+1) = 1, and u(i+1:n) is stored on exit in A(ia+i-1,ja+i:ja+n-1); tauq is stored in TAUQ(ja+i-1) and taup in TAUP(ia+i-1). .br If m < n, v(1:i) = 0, v(i+1) = 1, and v(i+1:m) is stored on exit in A(ia+i+1:ia+m-1,ja+i-1); u(1:i-1) = 0, u(i) = 1, and u(i:n) is stored on exit in A(ia+i-1,ja+i:ja+n-1); tauq is stored in TAUQ(ja+i-1) and taup in TAUP(ia+i-1). .br The elements of the vectors v and u together form the m-by-nb matrix V and the nb-by-n matrix U' which are needed, with X and Y, to apply the transformation to the unreduced part of the matrix, using a block update of the form: sub( A ) := sub( A ) - V*Y' - X*U'. 
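The block update named in the last sentence can be spelled out elementwise. The pure-Python sketch below (toy sizes and made-up values, not output of PDLABRD) shows what sub( A ) := sub( A ) - V*Y' - X*U' computes, with ' denoting transpose.

```python
def mat_sub_outer(A, V, Y, X, U):
    """Return A - V*Y' - X*U' for list-of-rows matrices.
    A is m-by-n; V, X are m-by-nb; Y, U are n-by-nb."""
    m, n, nb = len(A), len(A[0]), len(V[0])
    return [[A[i][j]
             - sum(V[i][k] * Y[j][k] for k in range(nb))   # (V*Y')[i][j]
             - sum(X[i][k] * U[j][k] for k in range(nb))   # (X*U')[i][j]
             for j in range(n)] for i in range(m)]

# nb = 1 toy example with m = 2, n = 2:
A = [[1.0, 2.0], [3.0, 4.0]]
V = [[1.0], [0.0]]; Y = [[1.0], [1.0]]   # V*Y' = [[1, 1], [0, 0]]
X = [[0.0], [1.0]]; U = [[2.0], [0.0]]   # X*U' = [[0, 0], [2, 0]]
A_upd = mat_sub_outer(A, V, Y, X, U)     # -> [[0, 1], [1, 4]]
```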
.br The contents of sub( A ) on exit are illustrated by the following examples with nb = 2: .br m = 6 and n = 5 (m > n): m = 5 and n = 6 (m < n): ( 1 1 u1 u1 u1 ) ( 1 u1 u1 u1 u1 u1 ) ( v1 1 1 u2 u2 ) ( 1 1 u2 u2 u2 u2 ) ( v1 v2 a a a ) ( v1 1 a a a a ) ( v1 v2 a a a ) ( v1 v2 a a a a ) ( v1 v2 a a a ) ( v1 v2 a a a a ) ( v1 v2 a a a ) .br where a denotes an element of the original matrix which is unchanged, vi denotes an element of the vector defining H(i), and ui an element of the vector defining G(i). .br scalapack-doc-1.5/man/manl/pdlacon.l0100644000056400000620000001302306335610630017035 0ustar pfrauenfstaff.TH PDLACON l "12 May 1997" "LAPACK version 1.5" "LAPACK auxiliary routine (version 1.5)" .SH NAME PDLACON - estimate the 1-norm of a square, real distributed matrix A .SH SYNOPSIS .TP 20 SUBROUTINE PDLACON( N, V, IV, JV, DESCV, X, IX, JX, DESCX, ISGN, EST, KASE ) .TP 20 .ti +4 INTEGER IV, IX, JV, JX, KASE, N .TP 20 .ti +4 DOUBLE PRECISION EST .TP 20 .ti +4 INTEGER DESCV( * ), DESCX( * ), ISGN( * ) .TP 20 .ti +4 DOUBLE PRECISION V( * ), X( * ) .SH PURPOSE PDLACON estimates the 1-norm of a square, real distributed matrix A. Reverse communication is used for evaluating matrix-vector products. X and V are aligned with the distributed matrix A, this information is implicitly contained within IV, IX, DESCV, and DESCX. .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. 
.br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 N (global input) INTEGER The length of the distributed vectors V and X. N >= 0. .TP 8 V (local workspace) DOUBLE PRECISION pointer into the local memory to an array of dimension LOCr(N+MOD(IV-1,MB_V)). On the final return, V = A*W, where EST = norm(V)/norm(W) (W is not returned). 
.TP 8 IV (global input) INTEGER The row index in the global array V indicating the first row of sub( V ). .TP 8 JV (global input) INTEGER The column index in the global array V indicating the first column of sub( V ). .TP 8 DESCV (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix V. .TP 8 X (local input/local output) DOUBLE PRECISION pointer into the local memory to an array of dimension LOCr(N+MOD(IX-1,MB_X)). On an intermediate return, X should be overwritten by A * X, if KASE=1, A' * X, if KASE=2, PDLACON must be re-called with all the other parameters unchanged. .TP 8 IX (global input) INTEGER The row index in the global array X indicating the first row of sub( X ). .TP 8 JX (global input) INTEGER The column index in the global array X indicating the first column of sub( X ). .TP 8 DESCX (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix X. .TP 8 ISGN (local workspace) INTEGER array, dimension LOCr(N+MOD(IX-1,MB_X)). ISGN is aligned with X and V. .TP 8 EST (global output) DOUBLE PRECISION An estimate (a lower bound) for norm(A). .TP 8 KASE (local input/local output) INTEGER On the initial call to PDLACON, KASE should be 0. On an intermediate return, KASE will be 1 or 2, indicating whether X should be overwritten by A * X or A' * X. On the final return from PDLACON, KASE will again be 0. .SH FURTHER DETAILS The serial version DLACON has been contributed by Nick Higham, University of Manchester. It was originally named SONEST, dated March 16, 1988. .br Reference: N.J. Higham, "FORTRAN codes for estimating the one-norm of a real or complex matrix, with applications to condition estimation", ACM Trans. Math. Soft., vol. 14, no. 4, pp. 381-396, December 1988. 
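The reverse-communication protocol is easiest to see in a serial analogue. The sketch below is a simplified Python rendition of the KASE loop (it is not the ScaLAPACK code, and it omits the extra-vector safeguard of Higham's full algorithm): the caller supplies the products A*X and A'*X that the KASE = 1 and KASE = 2 returns request.

```python
def onenorm_lower_bound(matvec, rmatvec, n, itmax=5):
    # matvec(x)  plays the role of the KASE = 1 reply: A * x
    # rmatvec(x) plays the role of the KASE = 2 reply: A' * x
    x = [1.0 / n] * n
    est = 0.0
    for _ in range(itmax):
        v = matvec(x)                       # caller overwrote X with A * X
        new_est = sum(abs(vi) for vi in v)
        if new_est <= est:                  # no improvement: final return (KASE = 0)
            break
        est = new_est
        s = [1.0 if vi >= 0 else -1.0 for vi in v]
        z = rmatvec(s)                      # caller overwrote X with A' * X
        j = max(range(n), key=lambda i: abs(z[i]))
        x = [0.0] * n
        x[j] = 1.0                          # probe the most promising column next
    return est                              # a lower bound on norm1(A)

A = [[1.0, 2.0], [3.0, 4.0]]                # norm1(A) = 6 (second column)
mv = lambda x: [sum(a * xi for a, xi in zip(row, x)) for row in A]
rmv = lambda x: [sum(A[i][j] * x[i] for i in range(2)) for j in range(2)]
est = onenorm_lower_bound(mv, rmv, 2)       # -> 6.0 here
```

As with PDLACON, the returned value is an estimate from below: it happens to be exact for this small matrix, but in general EST <= norm1(A).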
scalapack-doc-1.5/man/manl/pdlaconsb.l0100644000056400000620000001362306335610630017370 0ustar pfrauenfstaff.TH PDLACONSB l "12 May 1997" "LAPACK version 1.5 " "LAPACK routine (version 1.5 )" .SH NAME PDLACONSB - look for two consecutive small subdiagonal elements by seeing the effect of starting a double shift QR iteration given by H44, H33, & H43H34 and see if this would make a subdiagonal negligible .SH SYNOPSIS .TP 22 SUBROUTINE PDLACONSB( A, DESCA, I, L, M, H44, H33, H43H34, BUF, LWORK ) .TP 22 .ti +4 INTEGER I, L, LWORK, M .TP 22 .ti +4 DOUBLE PRECISION H33, H43H34, H44 .TP 22 .ti +4 INTEGER DESCA( * ) .TP 22 .ti +4 DOUBLE PRECISION A( * ), BUF( * ) .SH PURPOSE PDLACONSB looks for two consecutive small subdiagonal elements by seeing the effect of starting a double shift QR iteration given by H44, H33, & H43H34 and see if this would make a subdiagonal negligible. Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. 
.br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 A (global input) DOUBLE PRECISION array, dimension (DESCA(LLD_),*) On entry, the Hessenberg matrix whose tridiagonal part is being scanned. Unchanged on exit. .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 I (global input) INTEGER The global location of the bottom of the unreduced submatrix of A. Unchanged on exit. .TP 8 L (global input) INTEGER The global location of the top of the unreduced submatrix of A. Unchanged on exit. .TP 8 M (global output) INTEGER On exit, this yields the starting location of the QR double shift. This will satisfy: L <= M <= I-2. H44 H33 H43H34 (global input) DOUBLE PRECISION These three values are for the double shift QR iteration. 
.TP 8 BUF (local output) DOUBLE PRECISION array of size LWORK. .TP 8 LWORK (global input) INTEGER On exit, LWORK is the size of the work buffer. This must be at least 7*Ceil( Ceil( (I-L)/HBL ) / LCM(NPROW,NPCOL) ) Here LCM is least common multiple, and NPROWxNPCOL is the logical grid size. Logic: ====== Two consecutive small subdiagonal elements will stall convergence of a double shift if their product is small relatively even if each is not very small. Thus it is necessary to scan the "tridiagonal portion of the matrix." In the LAPACK algorithm DLAHQR, a loop of M goes from I-2 down to L and examines H(m,m),H(m+1,m+1),H(m+1,m),H(m,m+1),H(m-1,m-1),H(m,m-1), and H(m+2,m-1). Since these elements may be on separate processors, the first major loop (10) goes over the tridiagonal and has each node store whatever values of the 7 it has that the node owning H(m,m) does not. This will occur on a border and can happen in no more than 3 locations per block assuming square blocks. There are 5 buffers that each node stores these values: a buffer to send diagonally down and right, a buffer to send up, a buffer to send left, a buffer to send diagonally up and left and a buffer to send right. Each of these buffers is actually stored in one buffer BUF where BUF(ISTR1+1) starts the first buffer, BUF(ISTR2+1) starts the second, etc.. After the values are stored, if there are any values that a node needs, they will be sent and received. Then the next major loop passes over the data and searches for two consecutive small subdiagonals. Notes: This routine does a global maximum and must be called by all processes. Implemented by: G. 
Henry, November 17, 1996 scalapack-doc-1.5/man/manl/pdlacp2.l .TH PDLACP2 l "12 May 1997" "LAPACK version 1.5" "LAPACK auxiliary routine (version 1.5)" .SH NAME PDLACP2 - copies all or part of a distributed matrix A to another distributed matrix B .SH SYNOPSIS .TP 20 SUBROUTINE PDLACP2( UPLO, M, N, A, IA, JA, DESCA, B, IB, JB, DESCB ) .TP 20 .ti +4 CHARACTER UPLO .TP 20 .ti +4 INTEGER IA, IB, JA, JB, M, N .TP 20 .ti +4 INTEGER DESCA( * ), DESCB( * ) .TP 20 .ti +4 DOUBLE PRECISION A( * ), B( * ) .SH PURPOSE PDLACP2 copies all or part of a distributed matrix A to another distributed matrix B. No communication is performed, PDLACP2 performs a local copy sub( B ) := sub( A ), where sub( A ) denotes A(IA:IA+M-1,JA:JA+N-1) and sub( B ) denotes B(IB:IB+M-1,JB:JB+N-1). PDLACP2 requires that only one dimension of the matrix operands is distributed. .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distributed over. The context itself is global, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. 
.br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 UPLO (global input) CHARACTER Specifies the part of the distributed matrix sub( A ) to be copied: .br = 'U': Upper triangular part is copied; the strictly lower triangular part of sub( A ) is not referenced; = 'L': Lower triangular part is copied; the strictly upper triangular part of sub( A ) is not referenced; Otherwise: All of the matrix sub( A ) is copied. .TP 8 M (global input) INTEGER The number of rows to be operated on i.e the number of rows of the distributed submatrix sub( A ). M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on i.e the number of columns of the distributed submatrix sub( A ). N >= 0. 
.TP 8 A (local input) DOUBLE PRECISION pointer into the local memory to an array of dimension (LLD_A, LOCc(JA+N-1) ). This array contains the local pieces of the distributed matrix sub( A ) to be copied from. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 B (local output) DOUBLE PRECISION pointer into the local memory to an array of dimension (LLD_B, LOCc(JB+N-1) ). This array contains on exit the local pieces of the distributed matrix sub( B ) set as follows: if UPLO = 'U', B(IB+i-1,JB+j-1) = A(IA+i-1,JA+j-1), 1<=i<=j, 1<=j<=N; if UPLO = 'L', B(IB+i-1,JB+j-1) = A(IA+i-1,JA+j-1), j<=i<=M, 1<=j<=N; otherwise, B(IB+i-1,JB+j-1) = A(IA+i-1,JA+j-1), 1<=i<=M, 1<=j<=N. .TP 8 IB (global input) INTEGER The row index in the global array B indicating the first row of sub( B ). .TP 8 JB (global input) INTEGER The column index in the global array B indicating the first column of sub( B ). .TP 8 DESCB (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix B. scalapack-doc-1.5/man/manl/pdlacp3.l .TH PDLACP3 l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PDLACP3 - is an auxiliary routine that copies from a global parallel array into a local replicated array or vice versa .SH SYNOPSIS .TP 20 SUBROUTINE PDLACP3( M, I, A, DESCA, B, LDB, II, JJ, REV ) .TP 20 .ti +4 INTEGER I, II, JJ, LDB, M, REV .TP 20 .ti +4 INTEGER DESCA( * ) .TP 20 .ti +4 DOUBLE PRECISION A( * ), B( LDB, * ) .SH PURPOSE PDLACP3 is an auxiliary routine that copies from a global parallel array into a local replicated array or vice versa. 
Notice that the entire submatrix that is copied gets placed on one node or more. The receiving node can be specified precisely, or all nodes can receive, or just one row or column of nodes. .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. 
.br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 M (global input) INTEGER M is the order of the square submatrix that is copied. M >= 0. Unchanged on exit. .TP 8 I (global input) INTEGER A(I,I) is the global location that the copying starts from. Unchanged on exit. .TP 8 A (global input/output) DOUBLE PRECISION array, dimension (DESCA(LLD_),*) On entry, the parallel matrix to be copied into or from. On exit, if REV=1, the copied data. Unchanged on exit if REV=0. .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 B (local input/output) DOUBLE PRECISION array of size (LDB,M) If REV=0, this is the global portion of the array A(I:I+M-1,I:I+M-1). If REV=1, this is unchanged on exit. .TP 8 LDB (local input) INTEGER The leading dimension of B. .TP 8 II (global input) INTEGER By using REV 0 & 1, data can be sent out and returned again. If REV=0, then II is the destination row index for the node(s) receiving the replicated B. If II>=0, JJ>=0, then node (II,JJ) receives the data. If II=-1, JJ>=0, then all rows in column JJ receive the data. If II>=0, JJ=-1, then all cols in row II receive the data. If II=-1, JJ=-1, then all nodes receive the data. If REV<>0, then II is the source row index for the node(s) sending the replicated B. .TP 8 JJ (global input) INTEGER Similar description as II above. .TP 8 REV (global input) INTEGER Use REV = 0 to send global A into locally replicated B (on node (II,JJ)).
Use REV <> 0 to send locally replicated B from node (II,JJ) to its owner (which changes depending on its location in A) into the global A. Implemented by: G. Henry, May 1, 1997 .TH PDLACPY l "12 May 1997" "LAPACK version 1.5" "LAPACK auxiliary routine (version 1.5)" .SH NAME PDLACPY - copies all or part of a distributed matrix A to another distributed matrix B .SH SYNOPSIS .TP 20 SUBROUTINE PDLACPY( UPLO, M, N, A, IA, JA, DESCA, B, IB, JB, DESCB ) .TP 20 .ti +4 CHARACTER UPLO .TP 20 .ti +4 INTEGER IA, IB, JA, JB, M, N .TP 20 .ti +4 INTEGER DESCA( * ), DESCB( * ) .TP 20 .ti +4 DOUBLE PRECISION A( * ), B( * ) .SH PURPOSE PDLACPY copies all or part of a distributed matrix A to another distributed matrix B. No communication is performed; PDLACPY performs a local copy sub( B ) := sub( A ), where sub( A ) denotes A(IA:IA+M-1,JA:JA+N-1) and sub( B ) denotes B(IB:IB+M-1,JB:JB+N-1). Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A.
.br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 UPLO (global input) CHARACTER Specifies the part of the distributed matrix sub( A ) to be copied: .br = 'U': Upper triangular part is copied; the strictly lower triangular part of sub( A ) is not referenced; = 'L': Lower triangular part is copied; the strictly upper triangular part of sub( A ) is not referenced; Otherwise: All of the matrix sub( A ) is copied. .TP 8 M (global input) INTEGER The number of rows to be operated on i.e the number of rows of the distributed submatrix sub( A ). M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on i.e the number of columns of the distributed submatrix sub( A ). N >= 0. 
.TP 8 A (local input) DOUBLE PRECISION pointer into the local memory to an array of dimension (LLD_A, LOCc(JA+N-1) ). This array contains the local pieces of the distributed matrix sub( A ) to be copied from. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 B (local output) DOUBLE PRECISION pointer into the local memory to an array of dimension (LLD_B, LOCc(JB+N-1) ). This array contains on exit the local pieces of the distributed matrix sub( B ) set as follows: if UPLO = 'U', B(IB+i-1,JB+j-1) = A(IA+i-1,JA+j-1), 1<=i<=j, 1<=j<=N; if UPLO = 'L', B(IB+i-1,JB+j-1) = A(IA+i-1,JA+j-1), j<=i<=M, 1<=j<=N; otherwise, B(IB+i-1,JB+j-1) = A(IA+i-1,JA+j-1), 1<=i<=M, 1<=j<=N. .TP 8 IB (global input) INTEGER The row index in the global array B indicating the first row of sub( B ). .TP 8 JB (global input) INTEGER The column index in the global array B indicating the first column of sub( B ). .TP 8 DESCB (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix B. 
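The UPLO-dependent copy rules documented for the B argument above are easiest to see in a small serial sketch. The following Python fragment only illustrates the index ranges stated in this man page, translated to 0-based indexing; it is not the distributed Fortran routine, which applies the same pattern to the local pieces on each process without communication:

```python
def lacpy(uplo, m, n, a, b):
    """Copy part of the m-by-n matrix a into b, following the
    UPLO rules in the PDLACPY argument description (0-based)."""
    for j in range(n):
        if uplo == 'U':          # upper triangle: rows i <= j
            rows = range(0, min(j + 1, m))
        elif uplo == 'L':        # lower triangle: rows i >= j
            rows = range(min(j, m), m)
        else:                    # anything else: the full matrix
            rows = range(m)
        for i in rows:
            b[i][j] = a[i][j]
    return b

a = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
upper = lacpy('U', 3, 3, a, [[0] * 3 for _ in range(3)])
lower = lacpy('L', 3, 3, a, [[0] * 3 for _ in range(3)])
```

Entries of b outside the selected triangle are left untouched, matching the "not referenced" wording for the strictly lower (or upper) part of sub( A ).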
.TH PDLAEVSWP l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PDLAEVSWP - moves the eigenvectors (potentially unsorted) from where they are computed, to a ScaLAPACK standard block cyclic array, sorted so that the corresponding eigenvalues are sorted .SH SYNOPSIS .TP 22 SUBROUTINE PDLAEVSWP( N, ZIN, LDZI, Z, IZ, JZ, DESCZ, NVS, KEY, WORK, LWORK ) .TP 22 .ti +4 INTEGER IZ, JZ, LDZI, LWORK, N .TP 22 .ti +4 INTEGER DESCZ( * ), KEY( * ), NVS( * ) .TP 22 .ti +4 DOUBLE PRECISION WORK( * ), Z( * ), ZIN( LDZI, * ) .SH PURPOSE PDLAEVSWP moves the eigenvectors (potentially unsorted) from where they are computed, to a ScaLAPACK standard block cyclic array, sorted so that the corresponding eigenvalues are sorted. Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array.
.br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS NP = the number of rows local to a given process. NQ = the number of columns local to a given process. .TP 8 N (global input) INTEGER The order of the matrix A. N >= 0. .TP 8 ZIN (local input) DOUBLE PRECISION array, dimension ( LDZI, NVS(iam) ) The eigenvectors on input. Each eigenvector resides entirely in one process. Each process holds a contiguous set of NVS(iam) eigenvectors. The first eigenvector which the process holds is: sum for i=[0,iam-1) of NVS(i) .TP 8 LDZI (local input) INTEGER The leading dimension of the ZIN array. .TP 8 Z (local output) DOUBLE PRECISION array global dimension (N, N), local dimension (DESCZ(DLEN_), NQ) The eigenvectors on output. The eigenvectors are distributed in a block cyclic manner in both dimensions, with a block size of NB.
.TP 8 IZ (global input) INTEGER Z's global row index, which points to the beginning of the submatrix which is to be operated on. .TP 8 JZ (global input) INTEGER Z's global column index, which points to the beginning of the submatrix which is to be operated on. .TP 8 DESCZ (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix Z. .TP 8 NVS (global input) INTEGER array, dimension( nprocs+1 ) nvs(i) = number of eigenvectors held by processes [0,i-1) nvs(1) = number of eigenvectors held by [0,1-1) == 0 nvs(nprocs+1) = number of eigenvectors held by [0,nprocs) == total number of eigenvectors .TP 8 KEY (global input) INTEGER array, dimension( N ) Indicates the actual index (after sorting) for each of the eigenvectors. .TP 8 WORK (local workspace) DOUBLE PRECISION array, dimension (LWORK) .TP 8 LWORK (local input) INTEGER The dimension of WORK. .TH PDLAHQR l "12 May 1997" "LAPACK version 1.5" "LAPACK auxiliary routine (version 1.5)" .SH NAME PDLAHQR - an auxiliary routine used to find the Schur decomposition and/or eigenvalues of a matrix already in Hessenberg form from cols ILO to IHI .SH SYNOPSIS .TP 20 SUBROUTINE PDLAHQR( WANTT, WANTZ, N, ILO, IHI, A, DESCA, WR, WI, ILOZ, IHIZ, Z, DESCZ, WORK, LWORK, IWORK, ILWORK, INFO ) .TP 20 .ti +4 LOGICAL WANTT, WANTZ .TP 20 .ti +4 INTEGER IHI, IHIZ, ILO, ILOZ, ILWORK, INFO, LWORK, N, ROTN .TP 20 .ti +4 INTEGER DESCA( * ), DESCZ( * ), IWORK( * ) .TP 20 .ti +4 DOUBLE PRECISION A( * ), WI( * ), WORK( * ), WR( * ), Z( * ) .SH PURPOSE PDLAHQR is an auxiliary routine used to find the Schur decomposition and/or eigenvalues of a matrix already in Hessenberg form from cols ILO to IHI. Notes .br ===== .br Each global data object is described by an associated description vector.
This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. 
.br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 WANTT (global input) LOGICAL = .TRUE. : the full Schur form T is required; .br = .FALSE.: only eigenvalues are required. .TP 8 WANTZ (global input) LOGICAL .br = .TRUE. : the matrix of Schur vectors Z is required; .br = .FALSE.: Schur vectors are not required. .TP 8 N (global input) INTEGER The order of the Hessenberg matrix A (and Z if WANTZ). N >= 0. .TP 8 ILO (global input) INTEGER IHI (global input) INTEGER It is assumed that A is already upper quasi-triangular in rows and columns IHI+1:N, and that A(ILO,ILO-1) = 0 (unless ILO = 1). PDLAHQR works primarily with the Hessenberg submatrix in rows and columns ILO to IHI, but applies transformations to all of H if WANTT is .TRUE.. 1 <= ILO <= max(1,IHI); IHI <= N. .TP 8 A (global input/output) DOUBLE PRECISION array, dimension (DESCA(LLD_),*) On entry, the upper Hessenberg matrix A. On exit, if WANTT is .TRUE., A is upper quasi-triangular in rows and columns ILO:IHI, with any 2-by-2 or larger diagonal blocks not yet in standard form. If WANTT is .FALSE., the contents of A are unspecified on exit. .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 WR (global replicated output) DOUBLE PRECISION array, dimension (N) WI (global replicated output) DOUBLE PRECISION array, dimension (N) The real and imaginary parts, respectively, of the computed eigenvalues ILO to IHI are stored in the corresponding elements of WR and WI. 
If two eigenvalues are computed as a complex conjugate pair, they are stored in consecutive elements of WR and WI, say the i-th and (i+1)th, with WI(i) > 0 and WI(i+1) < 0. If WANTT is .TRUE., the eigenvalues are stored in the same order as on the diagonal of the Schur form returned in A. A may be returned with larger diagonal blocks until the next release. .TP 8 ILOZ (global input) INTEGER IHIZ (global input) INTEGER Specify the rows of Z to which transformations must be applied if WANTZ is .TRUE.. 1 <= ILOZ <= ILO; IHI <= IHIZ <= N. .TP 8 Z (global input/output) DOUBLE PRECISION array. If WANTZ is .TRUE., on entry Z must contain the current matrix Z of transformations accumulated by PDHSEQR, and on exit Z has been updated; transformations are applied only to the submatrix Z(ILOZ:IHIZ,ILO:IHI). If WANTZ is .FALSE., Z is not referenced. .TP 8 DESCZ (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix Z. .TP 8 WORK (local output) DOUBLE PRECISION array of size LWORK (Unless LWORK=-1, in which case WORK must be at least size 1) .TP 8 LWORK (local input) INTEGER WORK(LWORK) is a local array and LWORK is assumed big enough so that LWORK >= 3*N + MAX( 2*MAX(DESCZ(LLD_),DESCA(LLD_)) + 2*LOCc(N), 7*Ceil(N/HBL)/LCM(NPROW,NPCOL)) + MAX( 2*N, (8*LCM(NPROW,NPCOL)+2)**2 ) If LWORK=-1, then WORK(1) gets set to the above number and the code returns immediately. .TP 8 IWORK (global and local input) INTEGER array of size ILWORK This will hold some of the IBLK integer arrays. This is held as a place holder for a future release. Currently unreferenced. .TP 8 ILWORK (local input) INTEGER This will hold the size of the IWORK array. This is held as a place holder for a future release. Currently unreferenced. 
.TP 8 INFO (global output) INTEGER < 0: parameter number -INFO incorrect or inconsistent .br = 0: successful exit .br > 0: PDLAHQR failed to compute all the eigenvalues ILO to IHI in a total of 30*(IHI-ILO+1) iterations; if INFO = i, elements i+1:ihi of WR and WI contain those eigenvalues which have been successfully computed. Logic: This algorithm is very similar to _LAHQR. Unlike _LAHQR, instead of sending one double shift through the largest unreduced submatrix, this algorithm sends multiple double shifts and spaces them apart so that there can be parallelism across several processor row/columns. Another critical difference is that this algorithm aggregates multiple transforms together in order to apply them in a block fashion. Important Local Variables: IBLK = The maximum number of bulges that can be computed. Currently fixed. In future releases this won't be fixed. HBL = The square block size (HBL=DESCA(MB_)=DESCA(NB_)) ROTN = The number of transforms to block together NBULGE = The number of bulges that will be attempted on the current submatrix. IBULGE = The current number of bulges started. K1(*),K2(*) = The current bulge loops from K1(*) to K2(*). Subroutines: From LAPACK, this routine calls: DLAHQR -> Serial QR used to determine shifts and eigenvalues DLARFG -> Determine the Householder transforms From ScaLAPACK, this routine calls: PDLACONSB -> To determine where to start each iteration DLAMSH -> Sends multiple shifts through a small submatrix to see how the consecutive subdiagonals change (if PDLACONSB indicates we can start a run in the middle) PDLAWIL -> Given the shift, get the transformation DLASORTE -> Pair up eigenvalues so that reals are paired. PDLACP3 -> Parallel array to local replicated array copy & back. DLAREF -> Row/column reflector applier. Core routine here. PDLASMSUB -> Finds negligible subdiagonal elements. Current Notes and/or Restrictions: 1.)
This code requires the distributed block size to be square and at least six (6); unlike simpler codes like LU, this algorithm is extremely sensitive to block size. Unwise choices of too small a block size can lead to bad performance. 2.) This code requires A and Z to be distributed identically and have identical contexts. A future version may allow Z to have a different context in order to 1D row map it to all nodes (so no communication on Z is necessary). 3.) This release currently does not have a routine for resolving the Schur blocks into regular 2x2 form after this code is completed. Because of this, a significant performance impact is required while the deflation is done by sometimes a single column of processors. 4.) This code does not currently block the initial transforms so that none of the rows or columns for any bulge are completed until all are started. To offset pipeline start-up it is recommended that at least 2*LCM(NPROW,NPCOL) bulges are used (if possible). 5.) The maximum number of bulges currently supported is fixed at 32. In future versions this will be limited only by the incoming WORK and IWORK array. 6.) The matrix A must be in upper Hessenberg form. If elements below the subdiagonal are nonzero, the resulting transforms may be nonsimilar. This is also true with the LAPACK routine DLAHQR. 7.) For this release, this code has only been tested for RSRC_=CSRC_=0, but it has been written for the general case. 8.) Currently, all the eigenvalues are distributed to all the nodes. Future releases will probably distribute the eigenvalues by the column partitioning. 9.) The internals of this routine are subject to change. 10.) To optimize this for your architecture, try tuning DLAREF. 11.) This code has only been tested for WANTZ = .TRUE. and may behave unpredictably for WANTZ set to .FALSE. Implemented by: G.
Henry, May 1, 1997 .TH PDLAHRD l "12 May 1997" "LAPACK version 1.5" "LAPACK auxiliary routine (version 1.5)" .SH NAME PDLAHRD - reduces the first NB columns of a real general N-by-(N-K+1) distributed matrix A(IA:IA+N-1,JA:JA+N-K) so that elements below the k-th subdiagonal are zero .SH SYNOPSIS .TP 20 SUBROUTINE PDLAHRD( N, K, NB, A, IA, JA, DESCA, TAU, T, Y, IY, JY, DESCY, WORK ) .TP 20 .ti +4 INTEGER IA, IY, JA, JY, K, N, NB .TP 20 .ti +4 INTEGER DESCA( * ), DESCY( * ) .TP 20 .ti +4 DOUBLE PRECISION A( * ), T( * ), TAU( * ), WORK( * ), Y( * ) .SH PURPOSE PDLAHRD reduces the first NB columns of a real general N-by-(N-K+1) distributed matrix A(IA:IA+N-1,JA:JA+N-K) so that elements below the k-th subdiagonal are zero. The reduction is performed by an orthogonal similarity transformation Q' * A * Q. The routine returns the matrices V and T which determine Q as a block reflector I - V*T*V', and also the matrix Y = A * V * T. .br This is an auxiliary routine called by PDGEHRD. In the following comments sub( A ) denotes A(IA:IA+N-1,JA:JA+N-1). .br .SH ARGUMENTS .TP 8 N (global input) INTEGER The number of rows and columns to be operated on, i.e. the order of the distributed submatrix sub( A ). N >= 0. .TP 8 K (global input) INTEGER The offset for the reduction. Elements below the k-th subdiagonal in the first NB columns are reduced to zero. .TP 8 NB (global input) INTEGER The number of columns to be reduced. .TP 8 A (local input/local output) DOUBLE PRECISION pointer into the local memory to an array of dimension (LLD_A, LOCc(JA+N-K)). On entry, this array contains the local pieces of the N-by-(N-K+1) general distributed matrix A(IA:IA+N-1,JA:JA+N-K).
On exit, the elements on and above the k-th subdiagonal in the first NB columns are overwritten with the corresponding elements of the reduced distributed matrix; the elements below the k-th subdiagonal, with the array TAU, represent the matrix Q as a product of elementary reflectors. The other columns of A(IA:IA+N-1,JA:JA+N-K) are unchanged. See Further Details. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 TAU (local output) DOUBLE PRECISION array, dimension LOCc(JA+N-2) The scalar factors of the elementary reflectors (see Further Details). TAU is tied to the distributed matrix A. .TP 8 T (local output) DOUBLE PRECISION array, dimension (NB_A,NB_A) The upper triangular matrix T. .TP 8 Y (local output) DOUBLE PRECISION pointer into the local memory to an array of dimension (LLD_Y,NB_A). On exit, this array contains the local pieces of the N-by-NB distributed matrix Y. LLD_Y >= LOCr(IA+N-1). .TP 8 IY (global input) INTEGER The row index in the global array Y indicating the first row of sub( Y ). .TP 8 JY (global input) INTEGER The column index in the global array Y indicating the first column of sub( Y ). .TP 8 DESCY (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix Y. .TP 8 WORK (local workspace) DOUBLE PRECISION array, dimension (NB) .SH FURTHER DETAILS The matrix Q is represented as a product of nb elementary reflectors Q = H(1) H(2) . . . H(nb). .br Each H(i) has the form .br H(i) = I - tau * v * v' .br where tau is a real scalar, and v is a real vector with .br v(1:i+k-1) = 0, v(i+k) = 1; v(i+k+1:n) is stored on exit in A(ia+i+k:ia+n-1,ja+i-1), and tau in TAU(ja+i-1).
.br The elements of the vectors v together form the (n-k+1)-by-nb matrix V which is needed, with T and Y, to apply the transformation to the unreduced part of the matrix, using an update of the form: A(ia:ia+n-1,ja:ja+n-k) := (I-V*T*V')*(A(ia:ia+n-1,ja:ja+n-k)-Y*V'). The contents of A(ia:ia+n-1,ja:ja+n-k) on exit are illustrated by the following example with n = 7, k = 3 and nb = 2: .br ( a h a a a ) .br ( a h a a a ) .br ( a h a a a ) .br ( h h a a a ) .br ( v1 h a a a ) .br ( v1 v2 a a a ) .br ( v1 v2 a a a ) .br where a denotes an element of the original matrix .br A(ia:ia+n-1,ja:ja+n-k), h denotes a modified element of the upper Hessenberg matrix H, and vi denotes an element of the vector defining H(i). .br .TH PDLAMCH l "12 May 1997" "LAPACK version 1.5" "LAPACK auxiliary routine (version 1.5)" .SH NAME PDLAMCH - determines double precision machine parameters .SH SYNOPSIS .TP 17 DOUBLE PRECISION FUNCTION PDLAMCH( ICTXT, CMACH ) .TP 17 .ti +4 CHARACTER CMACH .TP 17 .ti +4 INTEGER ICTXT .SH PURPOSE PDLAMCH determines double precision machine parameters. .SH ARGUMENTS .TP 8 ICTXT (global input) INTEGER The BLACS context handle in which the computation takes place.
.TP 8 CMACH (global input) CHARACTER*1 Specifies the value to be returned by PDLAMCH: .br = 'E' or 'e', PDLAMCH := eps .br = 'S' or 's', PDLAMCH := sfmin .br = 'B' or 'b', PDLAMCH := base .br = 'P' or 'p', PDLAMCH := eps*base .br = 'N' or 'n', PDLAMCH := t .br = 'R' or 'r', PDLAMCH := rnd .br = 'M' or 'm', PDLAMCH := emin .br = 'U' or 'u', PDLAMCH := rmin .br = 'L' or 'l', PDLAMCH := emax .br = 'O' or 'o', PDLAMCH := rmax where .TP 6 eps = relative machine precision sfmin = safe minimum, such that 1/sfmin does not overflow base = base of the machine prec = eps*base t = number of (base) digits in the mantissa rnd = 1.0 when rounding occurs in addition, 0.0 otherwise emin = minimum exponent before (gradual) underflow rmin = underflow threshold - base**(emin-1) emax = largest exponent before overflow rmax = overflow threshold - (base**emax)*(1-eps) .TH PDLANGE l "12 May 1997" "LAPACK version 1.5" "LAPACK auxiliary routine (version 1.5)" .SH NAME PDLANGE - returns the value of the one norm, the Frobenius norm, the infinity norm, or the element of largest absolute value of a distributed matrix .SH SYNOPSIS .TP 17 DOUBLE PRECISION FUNCTION PDLANGE( NORM, M, N, A, IA, JA, DESCA, WORK ) .TP 17 .ti +4 CHARACTER NORM .TP 17 .ti +4 INTEGER IA, JA, M, N .TP 17 .ti +4 INTEGER DESCA( * ) .TP 17 .ti +4 DOUBLE PRECISION A( * ), WORK( * ) .SH PURPOSE PDLANGE returns the value of the one norm, or the Frobenius norm, or the infinity norm, or the element of largest absolute value of a distributed matrix sub( A ) = A(IA:IA+M-1, JA:JA+N-1).
.br PDLANGE returns the value .br ( max(abs(A(i,j))), NORM = 'M' or 'm' with IA <= i <= IA+M-1, ( and JA <= j <= JA+N-1, ( .br ( norm1( sub( A ) ), NORM = '1', 'O' or 'o' .br ( .br ( normI( sub( A ) ), NORM = 'I' or 'i' .br ( .br ( normF( sub( A ) ), NORM = 'F', 'f', 'E' or 'e' .br where norm1 denotes the one norm of a matrix (maximum column sum), normI denotes the infinity norm of a matrix (maximum row sum) and normF denotes the Frobenius norm of a matrix (square root of sum of squares). Note that max(abs(A(i,j))) is not a matrix norm. Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. 
.br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 NORM (global input) CHARACTER Specifies the value to be returned in PDLANGE as described above. .TP 8 M (global input) INTEGER The number of rows to be operated on i.e the number of rows of the distributed submatrix sub( A ). When M = 0, PDLANGE is set to zero. M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on i.e the number of columns of the distributed submatrix sub( A ). When N = 0, PDLANGE is set to zero. N >= 0. .TP 8 A (local input) DOUBLE PRECISION pointer into the local memory to an array of dimension (LLD_A, LOCc(JA+N-1)) containing the local pieces of the distributed matrix sub( A ). .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. 
.TP 8 WORK (local workspace) DOUBLE PRECISION array dimension (LWORK) LWORK >= 0 if NORM = 'M' or 'm' (not referenced), Nq0 if NORM = '1', 'O' or 'o', Mp0 if NORM = 'I' or 'i', 0 if NORM = 'F', 'f', 'E' or 'e' (not referenced), where IROFFA = MOD( IA-1, MB_A ), ICOFFA = MOD( JA-1, NB_A ), IAROW = INDXG2P( IA, MB_A, MYROW, RSRC_A, NPROW ), IACOL = INDXG2P( JA, NB_A, MYCOL, CSRC_A, NPCOL ), Mp0 = NUMROC( M+IROFFA, MB_A, MYROW, IAROW, NPROW ), Nq0 = NUMROC( N+ICOFFA, NB_A, MYCOL, IACOL, NPCOL ), INDXG2P and NUMROC are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. scalapack-doc-1.5/man/manl/pdlanhs.l0100644000056400000620000001253706335610631017060 0ustar pfrauenfstaff.TH PDLANHS l "12 May 1997" "LAPACK version 1.5" "LAPACK auxiliary routine (version 1.5)" .SH NAME PDLANHS - return the value of the one norm, or the Frobenius norm, .SH SYNOPSIS .TP 17 DOUBLE PRECISION FUNCTION PDLANHS( NORM, N, A, IA, JA, DESCA, WORK ) .TP 17 .ti +4 CHARACTER NORM .TP 17 .ti +4 INTEGER IA, JA, N .TP 17 .ti +4 INTEGER DESCA( * ) .TP 17 .ti +4 DOUBLE PRECISION A( * ), WORK( * ) .SH PURPOSE PDLANHS returns the value of the one norm, or the Frobenius norm, or the infinity norm, or the element of largest absolute value of a Hessenberg distributed matrix sub( A ) = A(IA:IA+N-1,JA:JA+N-1). PDLANHS returns the value .br ( max(abs(A(i,j))), NORM = 'M' or 'm' with IA <= i <= IA+N-1, ( and JA <= j <= JA+N-1, ( .br ( norm1( sub( A ) ), NORM = '1', 'O' or 'o' .br ( .br ( normI( sub( A ) ), NORM = 'I' or 'i' .br ( .br ( normF( sub( A ) ), NORM = 'F', 'f', 'E' or 'e' .br where norm1 denotes the one norm of a matrix (maximum column sum), normI denotes the infinity norm of a matrix (maximum row sum) and normF denotes the Frobenius norm of a matrix (square root of sum of squares). Note that max(abs(A(i,j))) is not a matrix norm. Notes .br ===== .br Each global data object is described by an associated description vector. 
This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. 
.br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 NORM (global input) CHARACTER Specifies the value to be returned in PDLANHS as described above. .TP 8 N (global input) INTEGER The number of rows and columns to be operated on, i.e., the number of rows and columns of the distributed submatrix sub( A ). When N = 0, PDLANHS is set to zero. N >= 0. .TP 8 A (local input) DOUBLE PRECISION pointer into the local memory to an array of dimension (LLD_A, LOCc(JA+N-1) ) containing the local pieces of sub( A ). .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 WORK (local workspace) DOUBLE PRECISION array dimension (LWORK) LWORK >= 0 if NORM = 'M' or 'm' (not referenced), Nq0 if NORM = '1', 'O' or 'o', Np0 if NORM = 'I' or 'i', 0 if NORM = 'F', 'f', 'E' or 'e' (not referenced), where IROFFA = MOD( IA-1, MB_A ), ICOFFA = MOD( JA-1, NB_A ), IAROW = INDXG2P( IA, MB_A, MYROW, RSRC_A, NPROW ), IACOL = INDXG2P( JA, NB_A, MYCOL, CSRC_A, NPCOL ), Np0 = NUMROC( N+IROFFA, MB_A, MYROW, IAROW, NPROW ), Nq0 = NUMROC( N+ICOFFA, NB_A, MYCOL, IACOL, NPCOL ), INDXG2P and NUMROC are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO.
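The LOCr()/LOCc() quantities and the ceil bound quoted above can be reproduced serially. The following is an unofficial Python sketch of the NUMROC block-cyclic counting logic (for illustration only; the authoritative version is the Fortran TOOLS function NUMROC, and the helper name locr_bound is hypothetical):

```python
import math

def numroc(n, nb, iproc, isrcproc, nprocs):
    """Number of elements of an n-long dimension, blocked by nb, that land
    on process iproc when the first block lives on process isrcproc
    (serial sketch of the ScaLAPACK TOOLS function NUMROC)."""
    mydist = (nprocs + iproc - isrcproc) % nprocs  # distance from the source process
    nblocks = n // nb                              # number of full blocks
    num = (nblocks // nprocs) * nb                 # complete rounds of blocks
    extrablks = nblocks % nprocs                   # leftover full blocks
    if mydist < extrablks:
        num += nb                                  # this process gets one extra full block
    elif mydist == extrablks:
        num += n % nb                              # this process gets the trailing partial block
    return num

def locr_bound(m, mb, nprow):
    """Upper bound from the man page: LOCr(M) <= ceil(ceil(M/MB_A)/NPROW)*MB_A."""
    return math.ceil(math.ceil(m / mb) / nprow) * mb
```

For example, M = 10 rows with MB_A = 2 over NPROW = 2 process rows gives 6 rows on process row 0 and 4 on process row 1, both within the bound of 6.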
scalapack-doc-1.5/man/manl/pdlansy.l0100644000056400000620000001445506335610631017102 0ustar pfrauenfstaff.TH PDLANSY l "12 May 1997" "LAPACK version 1.5" "LAPACK auxiliary routine (version 1.5)" .SH NAME PDLANSY - return the value of the one norm, or the Frobenius norm, .SH SYNOPSIS .TP 17 DOUBLE PRECISION FUNCTION PDLANSY( NORM, UPLO, N, A, IA, JA, DESCA, WORK ) .TP 17 .ti +4 CHARACTER NORM, UPLO .TP 17 .ti +4 INTEGER IA, JA, N .TP 17 .ti +4 INTEGER DESCA( * ) .TP 17 .ti +4 DOUBLE PRECISION A( * ), WORK( * ) .SH PURPOSE PDLANSY returns the value of the one norm, or the Frobenius norm, or the infinity norm, or the element of largest absolute value of a real symmetric distributed matrix sub( A ) = A(IA:IA+N-1,JA:JA+N-1). PDLANSY returns the value .br ( max(abs(A(i,j))), NORM = 'M' or 'm' with IA <= i <= IA+N-1, ( and JA <= j <= JA+N-1, ( .br ( norm1( sub( A ) ), NORM = '1', 'O' or 'o' .br ( .br ( normI( sub( A ) ), NORM = 'I' or 'i' .br ( .br ( normF( sub( A ) ), NORM = 'F', 'f', 'E' or 'e' .br where norm1 denotes the one norm of a matrix (maximum column sum), normI denotes the infinity norm of a matrix (maximum row sum) and normF denotes the Frobenius norm of a matrix (square root of sum of squares). Note that max(abs(A(i,j))) is not a matrix norm. Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. 
.br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 NORM (global input) CHARACTER Specifies the value to be returned in PDLANSY as described above. .TP 8 UPLO (global input) CHARACTER Specifies whether the upper or lower triangular part of the symmetric matrix sub( A ) is to be referenced. 
= 'U': Upper triangular part of sub( A ) is referenced, .br = 'L': Lower triangular part of sub( A ) is referenced. .TP 8 N (global input) INTEGER The number of rows and columns to be operated on, i.e., the number of rows and columns of the distributed submatrix sub( A ). When N = 0, PDLANSY is set to zero. N >= 0. .TP 8 A (local input) DOUBLE PRECISION pointer into the local memory to an array of dimension (LLD_A, LOCc(JA+N-1)) containing the local pieces of the symmetric distributed matrix sub( A ). If UPLO = 'U', the leading N-by-N upper triangular part of sub( A ) contains the upper triangular matrix whose norm is to be computed, and the strictly lower triangular part of this matrix is not referenced. If UPLO = 'L', the leading N-by-N lower triangular part of sub( A ) contains the lower triangular matrix whose norm is to be computed, and the strictly upper triangular part of sub( A ) is not referenced. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 WORK (local workspace) DOUBLE PRECISION array dimension (LWORK) LWORK >= 0 if NORM = 'M' or 'm' (not referenced), 2*Nq0+Np0+LDW if NORM = '1', 'O', 'o', 'I' or 'i', where LDW is given by: IF( NPROW.NE.NPCOL ) THEN LDW = MB_A*CEIL(CEIL(Np0/MB_A)/(LCM/NPROW)) ELSE LDW = 0 END IF 0 if NORM = 'F', 'f', 'E' or 'e' (not referenced), where LCM is the least common multiple of NPROW and NPCOL LCM = ILCM( NPROW, NPCOL ) and CEIL denotes the ceiling operation (ICEIL).
IROFFA = MOD( IA-1, MB_A ), ICOFFA = MOD( JA-1, NB_A ), IAROW = INDXG2P( IA, MB_A, MYROW, RSRC_A, NPROW ), IACOL = INDXG2P( JA, NB_A, MYCOL, CSRC_A, NPCOL ), Np0 = NUMROC( N+IROFFA, MB_A, MYROW, IAROW, NPROW ), Nq0 = NUMROC( N+ICOFFA, NB_A, MYCOL, IACOL, NPCOL ), ICEIL, ILCM, INDXG2P and NUMROC are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. scalapack-doc-1.5/man/manl/pdlantr.l0100644000056400000620000001370706335610631017073 0ustar pfrauenfstaff.TH PDLANTR l "12 May 1997" "LAPACK version 1.5" "LAPACK auxiliary routine (version 1.5)" .SH NAME PDLANTR - return the value of the one norm, or the Frobenius norm, .SH SYNOPSIS .TP 17 DOUBLE PRECISION FUNCTION PDLANTR( NORM, UPLO, DIAG, M, N, A, IA, JA, DESCA, WORK ) .TP 17 .ti +4 CHARACTER DIAG, NORM, UPLO .TP 17 .ti +4 INTEGER IA, JA, M, N .TP 17 .ti +4 INTEGER DESCA( * ) .TP 17 .ti +4 DOUBLE PRECISION A( * ), WORK( * ) .SH PURPOSE PDLANTR returns the value of the one norm, or the Frobenius norm, or the infinity norm, or the element of largest absolute value of a trapezoidal or triangular distributed matrix sub( A ) denoting A(IA:IA+M-1, JA:JA+N-1). .br PDLANTR returns the value .br ( max(abs(A(i,j))), NORM = 'M' or 'm' with ia <= i <= ia+m-1, ( and ja <= j <= ja+n-1, ( .br ( norm1( sub( A ) ), NORM = '1', 'O' or 'o' .br ( .br ( normI( sub( A ) ), NORM = 'I' or 'i' .br ( .br ( normF( sub( A ) ), NORM = 'F', 'f', 'E' or 'e' .br where norm1 denotes the one norm of a matrix (maximum column sum), normI denotes the infinity norm of a matrix (maximum row sum) and normF denotes the Frobenius norm of a matrix (square root of sum of squares). Note that max(abs(A(i,j))) is not a matrix norm. Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. 
.br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). 
An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 NORM (global input) CHARACTER Specifies the value to be returned in PDLANTR as described above. .TP 8 UPLO (global input) CHARACTER Specifies whether the matrix sub( A ) is upper or lower trapezoidal. = 'U': Upper trapezoidal .br = 'L': Lower trapezoidal Note that sub( A ) is triangular instead of trapezoidal if M = N. .TP 8 DIAG (global input) CHARACTER Specifies whether or not the distributed matrix sub( A ) has unit diagonal. = 'N': Non-unit diagonal .br = 'U': Unit diagonal .TP 8 M (global input) INTEGER The number of rows to be operated on i.e the number of rows of the distributed submatrix sub( A ). When M = 0, PDLANTR is set to zero. M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on i.e the number of columns of the distributed submatrix sub( A ). When N = 0, PDLANTR is set to zero. N >= 0. .TP 8 A (local input) DOUBLE PRECISION pointer into the local memory to an array of dimension (LLD_A, LOCc(JA+N-1) ) containing the local pieces of sub( A ). .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. 
.TP 8 WORK (local workspace) DOUBLE PRECISION array dimension (LWORK) LWORK >= 0 if NORM = 'M' or 'm' (not referenced), Nq0 if NORM = '1', 'O' or 'o', Mp0 if NORM = 'I' or 'i', 0 if NORM = 'F', 'f', 'E' or 'e' (not referenced), where IROFFA = MOD( IA-1, MB_A ), ICOFFA = MOD( JA-1, NB_A ), IAROW = INDXG2P( IA, MB_A, MYROW, RSRC_A, NPROW ), IACOL = INDXG2P( JA, NB_A, MYCOL, CSRC_A, NPCOL ), Mp0 = NUMROC( M+IROFFA, MB_A, MYROW, IAROW, NPROW ), Nq0 = NUMROC( N+ICOFFA, NB_A, MYCOL, IACOL, NPCOL ), INDXG2P and NUMROC are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. scalapack-doc-1.5/man/manl/pdlapiv.l0100644000056400000620000001552006335610631017061 0ustar pfrauenfstaff.TH PDLAPIV l "12 May 1997" "LAPACK version 1.5" "LAPACK auxiliary routine (version 1.5)" .SH NAME PDLAPIV - apply either P (permutation matrix indicated by IPIV) or inv( P ) to a general M-by-N distributed matrix sub( A ) = A(IA:IA+M-1,JA:JA+N-1), resulting in row or column pivoting .SH SYNOPSIS .TP 20 SUBROUTINE PDLAPIV( DIREC, ROWCOL, PIVROC, M, N, A, IA, JA, DESCA, IPIV, IP, JP, DESCIP, IWORK ) .TP 20 .ti +4 CHARACTER*1 DIREC, PIVROC, ROWCOL .TP 20 .ti +4 INTEGER IA, IP, JA, JP, M, N .TP 20 .ti +4 INTEGER DESCA( * ), DESCIP( * ), IPIV( * ), IWORK( * ) .TP 20 .ti +4 DOUBLE PRECISION A( * ) .SH PURPOSE PDLAPIV applies either P (permutation matrix indicated by IPIV) or inv( P ) to a general M-by-N distributed matrix sub( A ) = A(IA:IA+M-1,JA:JA+N-1), resulting in row or column pivoting. The pivot vector may be distributed across a process row or a column. The pivot vector should be aligned with the distributed matrix A. This routine will transpose the pivot vector if necessary. For example if the row pivots should be applied to the columns of sub( A ), pass ROWCOL='C' and PIVROC='C'. .br Notes .br ===== .br Each global data object is described by an associated description vector.
This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. 
.br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br Restrictions .br ============ .br IPIV must always be a distributed vector (not a matrix). Thus: IF( ROWCOL .EQ. 'C' ) THEN .br JP must be 1 .br ELSE .br IP must be 1 .br END IF .br The following restrictions apply when IPIV must be transposed: IF( ROWCOL.EQ.'C' .AND. PIVROC.EQ.'C') THEN .br DESCIP(MB_) must equal DESCA(NB_) .br ELSE IF( ROWCOL.EQ.'R' .AND. PIVROC.EQ.'R') THEN .br DESCIP(NB_) must equal DESCA(MB_) .br END IF .br .SH ARGUMENTS .TP 8 DIREC (global input) CHARACTER*1 Specifies in which order the permutation is applied: = 'F' (Forward) Applies pivots Forward from top of matrix. Computes P*sub( A ). = 'B' (Backward) Applies pivots Backward from bottom of matrix. Computes inv( P )*sub( A ). .TP 8 ROWCOL (global input) CHARACTER*1 Specifies if the rows or columns are to be permuted: = 'R' Rows will be permuted, = 'C' Columns will be permuted. .TP 8 PIVROC (global input) CHARACTER*1 Specifies whether IPIV is distributed over a process row or column: = 'R' IPIV distributed over a process row = 'C' IPIV distributed over a process column .TP 8 M (global input) INTEGER The number of rows to be operated on, i.e. the number of rows of the distributed submatrix sub( A ). M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on, i.e. the number of columns of the distributed submatrix sub( A ). N >= 0. .TP 8 A (local input/local output) DOUBLE PRECISION pointer into the local memory to an array of dimension (LLD_A, LOCc(JA+N-1)). On entry, this array contains the local pieces of the distributed submatrix sub( A ) to which the row or column interchanges will be applied.
On exit, the local pieces of the permuted distributed submatrix. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 IPIV (local input) INTEGER array, dimension >= LOCr(M_A)+MB_A if ROWCOL='R', otherwise LOCc(N_A)+NB_A. It contains the pivoting information. IPIV(i) is the global row (column) that local row (column) i was swapped with. The last piece of the array of size MB_A (resp. NB_A) is used as workspace. This array is tied to the distributed matrix A. .TP 8 IWORK (local workspace) INTEGER array, dimension (LDW) where LDW is equal to the workspace necessary for transposition, and the storage of the transposed IPIV: Let LCM be the least common multiple of NPROW and NPCOL. IF( ROWCOL.EQ.'R' .AND. PIVROC.EQ.'R' ) THEN IF( NPROW.EQ.NPCOL ) THEN LDW = LOCr( N_P + MOD(JP-1, NB_P) ) + NB_P ELSE LDW = LOCr( N_P + MOD(JP-1, NB_P) ) + NB_P * CEIL( CEIL(LOCc(N_P)/NB_P) / (LCM/NPCOL) ) END IF ELSE IF( ROWCOL.EQ.'C' .AND. PIVROC.EQ.'C' ) THEN IF( NPROW.EQ.NPCOL ) THEN LDW = LOCc( M_P + MOD(IP-1, MB_P) ) + MB_P ELSE LDW = LOCc( M_P + MOD(IP-1, MB_P) ) + MB_P * CEIL( CEIL(LOCr(M_P)/MB_P) / (LCM/NPROW) ) END IF ELSE IWORK is not referenced.
END IF scalapack-doc-1.5/man/manl/pdlapv2.l0100644000056400000620000001361506335610631016775 0ustar pfrauenfstaff.TH PDLAPV2 l "12 May 1997" "LAPACK version 1.5" "LAPACK auxiliary routine (version 1.5)" .SH NAME PDLAPV2 - apply either P (permutation matrix indicated by IPIV) or inv( P ) to a M-by-N distributed matrix sub( A ) denoting A(IA:IA+M-1,JA:JA+N-1), resulting in row or column pivoting .SH SYNOPSIS .TP 20 SUBROUTINE PDLAPV2( DIREC, ROWCOL, M, N, A, IA, JA, DESCA, IPIV, IP, JP, DESCIP ) .TP 20 .ti +4 CHARACTER DIREC, ROWCOL .TP 20 .ti +4 INTEGER IA, IP, JA, JP, M, N .TP 20 .ti +4 INTEGER DESCA( * ), DESCIP( * ), IPIV( * ) .TP 20 .ti +4 DOUBLE PRECISION A( * ) .SH PURPOSE PDLAPV2 applies either P (permutation matrix indicated by IPIV) or inv( P ) to a M-by-N distributed matrix sub( A ) denoting A(IA:IA+M-1,JA:JA+N-1), resulting in row or column pivoting. The pivot vector should be aligned with the distributed matrix A. For pivoting the rows of sub( A ), IPIV should be distributed along a process column and replicated over all process rows. Similarly, IPIV should be distributed along a process row and replicated over all process columns for column pivoting. .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over.
The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 DIREC (global input) CHARACTER Specifies in which order the permutation is applied: = 'F' (Forward) Applies pivots Forward from top of matrix. Computes P * sub( A ); = 'B' (Backward) Applies pivots Backward from bottom of matrix. Computes inv( P ) * sub( A ). .TP 8 ROWCOL (global input) CHARACTER Specifies if the rows or columns are to be permuted: = 'R' Rows will be permuted, = 'C' Columns will be permuted. 
.TP 8 M (global input) INTEGER The number of rows to be operated on, i.e. the number of rows of the distributed submatrix sub( A ). M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on, i.e. the number of columns of the distributed submatrix sub( A ). N >= 0. .TP 8 A (local input/local output) DOUBLE PRECISION pointer into the local memory to an array of dimension (LLD_A, LOCc(JA+N-1)). On entry, this local array contains the local pieces of the distributed matrix sub( A ) to which the row or columns interchanges will be applied. On exit, this array contains the local pieces of the permuted distributed matrix. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 IPIV (input) INTEGER array, dimension >= LOCr(M_A)+MB_A if ROWCOL = 'R', LOCc(N_A)+NB_A otherwise. It contains the pivoting information. IPIV(i) is the global row (column), local row (column) i was swapped with. The last piece of the array of size MB_A (resp. NB_A) is used as workspace. IPIV is tied to the distributed matrix A. .TP 8 IP (global input) INTEGER IPIV's global row index, which points to the beginning of the submatrix which is to be operated on. .TP 8 JP (global input) INTEGER IPIV's global column index, which points to the beginning of the submatrix which is to be operated on. .TP 8 DESCIP (global and local input) INTEGER array of dimension 8 The array descriptor for the distributed matrix IPIV. 
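The Forward/Backward semantics of DIREC can be illustrated with a serial analogue. The helper below is hypothetical (not part of ScaLAPACK) and uses 0-based indices, whereas IPIV holds global 1-based indices:

```python
def apply_row_pivots(a, ipiv, forward=True):
    """Serial sketch of the DIREC argument: forward applies the swaps
    top-down (computes P*A); backward applies them bottom-up
    (computes inv(P)*A). ipiv[i] is the row that row i was swapped with."""
    a = [row[:] for row in a]  # operate on a copy of the matrix
    order = range(len(ipiv)) if forward else reversed(range(len(ipiv)))
    for i in order:
        p = ipiv[i]
        if p != i:
            a[i], a[p] = a[p], a[i]  # swap rows i and ipiv[i]
    return a
```

Applying the pivots forward and then backward with the same IPIV restores the original matrix, i.e. inv(P)*(P*A) = A.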
scalapack-doc-1.5/man/manl/pdlaqge.l0100644000056400000620000001377606335610631017052 0ustar pfrauenfstaff.TH PDLAQGE l "12 May 1997" "LAPACK version 1.5" "LAPACK auxiliary routine (version 1.5)" .SH NAME PDLAQGE - equilibrate a general M-by-N distributed matrix sub( A ) = A(IA:IA+M-1,JA:JA+N-1) using the row and scaling factors in the vectors R and C .SH SYNOPSIS .TP 20 SUBROUTINE PDLAQGE( M, N, A, IA, JA, DESCA, R, C, ROWCND, COLCND, AMAX, EQUED ) .TP 20 .ti +4 CHARACTER EQUED .TP 20 .ti +4 INTEGER IA, JA, M, N .TP 20 .ti +4 DOUBLE PRECISION AMAX, COLCND, ROWCND .TP 20 .ti +4 INTEGER DESCA( * ) .TP 20 .ti +4 DOUBLE PRECISION A( * ), C( * ), R( * ) .SH PURPOSE PDLAQGE equilibrates a general M-by-N distributed matrix sub( A ) = A(IA:IA+M-1,JA:JA+N-1) using the row and scaling factors in the vectors R and C. Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. 
.br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 M (global input) INTEGER The number of rows to be operated on i.e the number of rows of the distributed submatrix sub( A ). M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on i.e the number of columns of the distributed submatrix sub( A ). N >= 0. .TP 8 A (local input/local output) DOUBLE PRECISION pointer into the local memory to an array of dimension (LLD_A,LOCc(JA+N-1)) containing on entry the M-by-N matrix sub( A ). On exit, the equilibrated distributed matrix. See EQUED for the form of the equilibrated distributed submatrix. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. 
The array descriptor for the distributed matrix A. .TP 8 R (local input) DOUBLE PRECISION array, dimension LOCr(M_A) The row scale factors for sub( A ). R is aligned with the distributed matrix A, and replicated across every process column. R is tied to the distributed matrix A. .TP 8 C (local input) DOUBLE PRECISION array, dimension LOCc(N_A) The column scale factors of sub( A ). C is aligned with the distributed matrix A, and replicated down every process row. C is tied to the distributed matrix A. .TP 8 ROWCND (global input) DOUBLE PRECISION The global ratio of the smallest R(i) to the largest R(i), IA <= i <= IA+M-1. .TP 8 COLCND (global input) DOUBLE PRECISION The global ratio of the smallest C(j) to the largest C(j), JA <= j <= JA+N-1. .TP 8 AMAX (global input) DOUBLE PRECISION Absolute value of largest distributed submatrix entry. .TP 8 EQUED (global output) CHARACTER Specifies the form of equilibration that was done. = 'N': No equilibration .br = 'R': Row equilibration, i.e., sub( A ) has been pre- .br multiplied by diag(R(IA:IA+M-1)), .br = 'C': Column equilibration, i.e., sub( A ) has been post- .br multiplied by diag(C(JA:JA+N-1)), .br = 'B': Both row and column equilibration, i.e., sub( A ) has been replaced by diag(R(IA:IA+M-1)) * sub( A ) * diag(C(JA:JA+N-1)). .SH PARAMETERS THRESH is a threshold value used to decide if row or column scaling should be done based on the ratio of the row or column scaling factors. If ROWCND < THRESH, row scaling is done, and if COLCND < THRESH, column scaling is done. LARGE and SMALL are threshold values used to decide if row scaling should be done based on the absolute size of the largest matrix element. If AMAX > LARGE or AMAX < SMALL, row scaling is done. 
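The Notes above define the local row and column counts LOCr()/LOCc() in terms of the ScaLAPACK tool function NUMROC. As an illustration, here is a serial Python transcription of that block-cyclic counting logic (a sketch for exposition, not the Fortran routine itself):

```python
import math

def numroc(n, nb, iproc, isrcproc, nprocs):
    """Number of rows/columns of an n-element dimension, distributed in
    blocks of size nb over nprocs processes, owned by process iproc when
    the first block resides on process isrcproc."""
    mydist = (nprocs + iproc - isrcproc) % nprocs  # distance from the source process
    nblocks = n // nb                              # number of full blocks in the dimension
    num = (nblocks // nprocs) * nb                 # whole "rounds" of blocks everyone gets
    extrablks = nblocks % nprocs                   # leftover full blocks
    if mydist < extrablks:
        num += nb                                  # this process gets one extra full block
    elif mydist == extrablks:
        num += n % nb                              # this process gets the final partial block
    return num

if __name__ == "__main__":
    n, nb, nprow = 10, 3, 4
    locr = [numroc(n, nb, p, 0, nprow) for p in range(nprow)]
    print(locr)                           # local counts per process row, e.g. [3, 3, 3, 1]
    assert sum(locr) == n                 # every row is owned by exactly one process
    bound = math.ceil(math.ceil(n / nb) / nprow) * nb
    assert all(l <= bound for l in locr)  # the LOCr upper bound quoted in the Notes
```

The two assertions check exactly the properties the Notes state: the local counts partition the global dimension, and each is bounded by ceil( ceil(M/MB_A)/NPROW )*MB_A.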
scalapack-doc-1.5/man/manl/pdlaqsy.l0100644000056400000620000001413106335610631017074 0ustar pfrauenfstaff.TH PDLAQSY l "12 May 1997" "LAPACK version 1.5" "LAPACK auxiliary routine (version 1.5)" .SH NAME PDLAQSY - equilibrate a symmetric distributed matrix sub( A ) = A(IA:IA+N-1,JA:JA+N-1) using the scaling factors in the vectors SR and SC .SH SYNOPSIS .TP 20 SUBROUTINE PDLAQSY( UPLO, N, A, IA, JA, DESCA, SR, SC, SCOND, AMAX, EQUED ) .TP 20 .ti +4 CHARACTER EQUED, UPLO .TP 20 .ti +4 INTEGER IA, JA, N .TP 20 .ti +4 DOUBLE PRECISION AMAX, SCOND .TP 20 .ti +4 INTEGER DESCA( * ) .TP 20 .ti +4 DOUBLE PRECISION A( * ), SC( * ), SR( * ) .SH PURPOSE PDLAQSY equilibrates a symmetric distributed matrix sub( A ) = A(IA:IA+N-1,JA:JA+N-1) using the scaling factors in the vectors SR and SC. Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. 
.br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 UPLO (global input) CHARACTER Specifies whether the upper or lower triangular part of the symmetric distributed matrix sub( A ) is to be referenced: .br = 'U': Upper triangular .br = 'L': Lower triangular .TP 8 N (global input) INTEGER The number of rows and columns to be operated on, i.e. the order of the distributed submatrix sub( A ). N >= 0. .TP 8 A (input/output) DOUBLE PRECISION pointer into the local memory to an array of local dimension (LLD_A,LOCc(JA+N-1)). On entry, the local pieces of the distributed symmetric matrix sub( A ). If UPLO = 'U', the leading N-by-N upper triangular part of sub( A ) contains the upper triangular part of the matrix, and the strictly lower triangular part of sub( A ) is not referenced. 
If UPLO = 'L', the leading N-by-N lower triangular part of sub( A ) contains the lower triangular part of the matrix, and the strictly upper trian- gular part of sub( A ) is not referenced. On exit, if EQUED = 'Y', the equilibrated matrix: .br diag(SR(IA:IA+N-1)) * sub( A ) * diag(SC(JA:JA+N-1)). .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 SR (local input) DOUBLE PRECISION array, dimension LOCr(M_A) The scale factors for A(IA:IA+M-1,JA:JA+N-1). SR is aligned with the distributed matrix A, and replicated across every process column. SR is tied to the distributed matrix A. .TP 8 SC (local input) DOUBLE PRECISION array, dimension LOCc(N_A) The scale factors for sub( A ). SC is aligned with the dis- tributed matrix A, and replicated down every process row. SC is tied to the distributed matrix A. .TP 8 SCOND (global input) DOUBLE PRECISION Ratio of the smallest SR(i) (respectively SC(j)) to the largest SR(i) (respectively SC(j)), with IA <= i <= IA+N-1 and JA <= j <= JA+N-1. .TP 8 AMAX (global input) DOUBLE PRECISION Absolute value of the largest distributed submatrix entry. .TP 8 EQUED (output) CHARACTER*1 Specifies whether or not equilibration was done. = 'N': No equilibration. .br = 'Y': Equilibration was done, i.e., sub( A ) has been re- .br placed by: .br diag(SR(IA:IA+N-1)) * sub( A ) * diag(SC(JA:JA+N-1)). .SH PARAMETERS THRESH is a threshold value used to decide if scaling should be done based on the ratio of the scaling factors. If SCOND < THRESH, scaling is done. LARGE and SMALL are threshold values used to decide if scaling should be done based on the absolute size of the largest matrix element. If AMAX > LARGE or AMAX < SMALL, scaling is done. 
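The PARAMETERS section above describes how THRESH, LARGE and SMALL decide whether equilibration is performed. A serial Python sketch of that decision follows; the constant values are assumptions patterned on the serial LAPACK defaults (the man page names the constants but does not give their values):

```python
import sys

# Assumed values, in the spirit of serial LAPACK's DLAQSY:
# THRESH = 0.1, SMALL = safe-minimum / precision, LARGE = 1 / SMALL.
THRESH = 0.1
SMALL = sys.float_info.min / sys.float_info.epsilon
LARGE = 1.0 / SMALL

def decide_equed(scond, amax):
    """Return 'N' if sub( A ) is left unscaled, or 'Y' if it should be
    replaced by diag(SR) * sub( A ) * diag(SC), per the rules above."""
    if scond >= THRESH and SMALL <= amax <= LARGE:
        return 'N'   # scale factors close to 1 and AMAX neither too large nor too small
    return 'Y'       # SCOND < THRESH, or AMAX outside [SMALL, LARGE]: scaling is done

print(decide_equed(0.5, 1.0))    # well-scaled matrix: 'N'
print(decide_equed(1e-3, 1.0))   # SCOND below THRESH: 'Y'
```

The same structure applies to PDLAQGE, except that ROWCND and COLCND are tested separately, yielding the four EQUED values 'N', 'R', 'C' and 'B'.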
scalapack-doc-1.5/man/manl/pdlared1d.l0100644000056400000620000001060506335610631017261 0ustar pfrauenfstaff.TH PDLARED1D l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PDLARED1D - redistribute a 1D array, assuming that the input array, BYCOL, is distributed across rows and that all process columns contain the same copy of BYCOL .SH SYNOPSIS .TP 22 SUBROUTINE PDLARED1D( N, IA, JA, DESC, BYCOL, BYALL, WORK, LWORK ) .TP 22 .ti +4 INTEGER IA, JA, LWORK, N .TP 22 .ti +4 INTEGER DESC( * ) .TP 22 .ti +4 DOUBLE PRECISION BYALL( * ), BYCOL( * ), WORK( LWORK ) .SH PURPOSE PDLARED1D redistributes a 1D array; on exit, the output array BYALL is replicated on every process and contains the entire array. .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. 
CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS NP = Number of local rows in BYCOL() .TP 8 N (global input) INTEGER The number of elements to be redistributed. N >= 0. 
.TP 8 IA (global input) INTEGER IA must be equal to 1 .TP 8 JA (global input) INTEGER JA must be equal to 1 .TP 8 DESC (global/local input) INTEGER Array of dimension 8 A 2D array descriptor, which describes BYCOL .TP 8 BYCOL (local input) distributed block cyclic DOUBLE PRECISION array global dimension (N), local dimension NP BYCOL is distributed across the process rows All process columns are assumed to contain the same value .TP 8 BYALL (global output) DOUBLE PRECISION global dimension( N ) local dimension (N) BYALL is exactly duplicated on all processes It contains the same values as BYCOL, but it is replicated across all processes rather than being distributed BYALL(i) = BYCOL( NUMROC(i,NB,MYROW,0,NPROW) ) on the procs whose MYROW == mod((i-1)/NB,NPROW) .TP 8 WORK (local workspace) DOUBLE PRECISION dimension (LWORK) Used to hold the buffers sent from one process to another .TP 8 LWORK (local input) INTEGER size of WORK array LWORK >= NUMROC(N, DESC( NB_ ), 0, 0, NPCOL) scalapack-doc-1.5/man/manl/pdlared2d.l0100644000056400000620000001062306335610631017262 0ustar pfrauenfstaff.TH PDLARED2D l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PDLARED2D - redistribute a 1D array, assuming that the input array, BYROW, is distributed across columns and that all process rows contain the same copy of BYROW .SH SYNOPSIS .TP 22 SUBROUTINE PDLARED2D( N, IA, JA, DESC, BYROW, BYALL, WORK, LWORK ) .TP 22 .ti +4 INTEGER IA, JA, LWORK, N .TP 22 .ti +4 INTEGER DESC( * ) .TP 22 .ti +4 DOUBLE PRECISION BYALL( * ), BYROW( * ), WORK( LWORK ) .SH PURPOSE PDLARED2D redistributes a 1D array; on exit, the output array BYALL is replicated on every process and contains the entire array. .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. 
Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). 
An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS NP = Number of local rows in BYROW() .TP 8 N (global input) INTEGER The number of elements to be redistributed. N >= 0. .TP 8 IA (global input) INTEGER IA must be equal to 1 .TP 8 JA (global input) INTEGER JA must be equal to 1 .TP 8 DESC (global/local input) INTEGER Array of dimension DLEN_ A 2D array descriptor, which describes BYROW .TP 8 BYROW (local input) distributed block cyclic DOUBLE PRECISION array global dimension (N), local dimension NP BYROW is distributed across the process columns All process rows are assumed to contain the same value .TP 8 BYALL (global output) DOUBLE PRECISION global dimension( N ) local dimension (N) BYALL is exactly duplicated on all processes It contains the same values as BYROW, but it is replicated across all processes rather than being distributed BYALL(i) = BYROW( NUMROC(i,NB,MYCOL,0,NPCOL) ) on the procs whose MYCOL == mod((i-1)/NB,NPCOL) .TP 8 WORK (local workspace) DOUBLE PRECISION dimension (LWORK) Used to hold the buffers sent from one process to another .TP 8 LWORK (local input) INTEGER size of WORK array LWORK >= NUMROC(N, DESC( NB_ ), 0, 0, NPCOL) scalapack-doc-1.5/man/manl/pdlarf.l0100644000056400000620000002006706335610631016674 0ustar pfrauenfstaff.TH PDLARF l "12 May 1997" "LAPACK version 1.5" "LAPACK auxiliary routine (version 1.5)" .SH NAME PDLARF - apply a real elementary reflector Q (or Q**T) to a real M-by-N distributed matrix sub( C ) = C(IC:IC+M-1,JC:JC+N-1), from either the left or the right .SH SYNOPSIS .TP 19 SUBROUTINE PDLARF( SIDE, M, N, V, IV, JV, DESCV, INCV, TAU, C, IC, JC, DESCC, WORK ) .TP 19 .ti +4 CHARACTER SIDE .TP 19 .ti +4 INTEGER IC, INCV, IV, JC, JV, M, N .TP 19 .ti +4 INTEGER DESCC( * ), DESCV( * ) .TP 19 .ti +4 DOUBLE PRECISION C( * ), TAU( * ), V( * ), WORK( * ) .SH PURPOSE PDLARF applies a real elementary 
reflector Q (or Q**T) to a real M-by-N distributed matrix sub( C ) = C(IC:IC+M-1,JC:JC+N-1), from either the left or the right. Q is represented in the form Q = I - tau * v * v' .br where tau is a real scalar and v is a real vector. .br If tau = 0, then Q is taken to be the unit matrix. .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. 
.br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br Because vectors may be viewed as a subclass of matrices, a distributed vector is considered to be a distributed matrix. Restrictions .br ============ .br If SIDE = 'Left' and INCV = 1, then the row process having the first entry V(IV,JV) must also have the first row of sub( C ). Moreover, MOD(IV-1,MB_V) must be equal to MOD(IC-1,MB_C), if INCV=M_V, only the last equality must be satisfied. .br If SIDE = 'Right' and INCV = M_V then the column process having the first entry V(IV,JV) must also have the first column of sub( C ) and MOD(JV-1,NB_V) must be equal to MOD(JC-1,NB_C), if INCV = 1 only the last equality must be satisfied. .br .SH ARGUMENTS .TP 8 SIDE (global input) CHARACTER = 'L': form Q * sub( C ), .br = 'R': form sub( C ) * Q, Q = Q**T. .TP 8 M (global input) INTEGER The number of rows to be operated on i.e the number of rows of the distributed submatrix sub( C ). M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on i.e the number of columns of the distributed submatrix sub( C ). N >= 0. 
.TP 8 V (local input) DOUBLE PRECISION pointer into the local memory to an array of dimension (LLD_V,*) containing the local pieces of the distributed vectors V representing the Householder transformation Q, V(IV:IV+M-1,JV) if SIDE = 'L' and INCV = 1, .br V(IV,JV:JV+M-1) if SIDE = 'L' and INCV = M_V, .br V(IV:IV+N-1,JV) if SIDE = 'R' and INCV = 1, .br V(IV,JV:JV+N-1) if SIDE = 'R' and INCV = M_V, The vector v in the representation of Q. V is not used if TAU = 0. .TP 8 IV (global input) INTEGER The row index in the global array V indicating the first row of sub( V ). .TP 8 JV (global input) INTEGER The column index in the global array V indicating the first column of sub( V ). .TP 8 DESCV (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix V. .TP 8 INCV (global input) INTEGER The global increment for the elements of V. Only two values of INCV are supported in this version, namely 1 and M_V. INCV must not be zero. .TP 8 TAU (local input) DOUBLE PRECISION, array, dimension LOCc(JV) if INCV = 1, and LOCr(IV) otherwise. This array contains the Householder scalars related to the Householder vectors. TAU is tied to the distributed matrix V. .TP 8 C (local input/local output) DOUBLE PRECISION pointer into the local memory to an array of dimension (LLD_C, LOCc(JC+N-1) ), containing the local pieces of sub( C ). On exit, sub( C ) is overwritten by the Q * sub( C ) if SIDE = 'L', or sub( C ) * Q if SIDE = 'R'. .TP 8 IC (global input) INTEGER The row index in the global array C indicating the first row of sub( C ). .TP 8 JC (global input) INTEGER The column index in the global array C indicating the first column of sub( C ). .TP 8 DESCC (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix C. 
.TP 8 WORK (local workspace) DOUBLE PRECISION array, dimension (LWORK) If INCV = 1, if SIDE = 'L', if IVCOL = ICCOL, LWORK >= NqC0 else LWORK >= MpC0 + MAX( 1, NqC0 ) end if else if SIDE = 'R', LWORK >= NqC0 + MAX( MAX( 1, MpC0 ), NUMROC( NUMROC( N+ICOFFC,NB_V,0,0,NPCOL ),NB_V,0,0,LCMQ ) ) end if else if INCV = M_V, if SIDE = 'L', LWORK >= MpC0 + MAX( MAX( 1, NqC0 ), NUMROC( NUMROC( M+IROFFC,MB_V,0,0,NPROW ),MB_V,0,0,LCMP ) ) else if SIDE = 'R', if IVROW = ICROW, LWORK >= MpC0 else LWORK >= NqC0 + MAX( 1, MpC0 ) end if end if end if where LCM is the least common multiple of NPROW and NPCOL and LCM = ILCM( NPROW, NPCOL ), LCMP = LCM / NPROW, LCMQ = LCM / NPCOL, IROFFC = MOD( IC-1, MB_C ), ICOFFC = MOD( JC-1, NB_C ), ICROW = INDXG2P( IC, MB_C, MYROW, RSRC_C, NPROW ), ICCOL = INDXG2P( JC, NB_C, MYCOL, CSRC_C, NPCOL ), MpC0 = NUMROC( M+IROFFC, MB_C, MYROW, ICROW, NPROW ), NqC0 = NUMROC( N+ICOFFC, NB_C, MYCOL, ICCOL, NPCOL ), ILCM, INDXG2P and NUMROC are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. Alignment requirements ====================== The distributed submatrices V(IV:*, JV:*) and C(IC:IC+M-1,JC:JC+N-1) must verify some alignment properties, namely the following expressions should be true: MB_V = NB_V, If INCV = 1, If SIDE = 'Left', ( MB_V.EQ.MB_C .AND. IROFFV.EQ.IROFFC .AND. IVROW.EQ.ICROW ) If SIDE = 'Right', ( MB_V.EQ.NB_A .AND. MB_V.EQ.NB_C .AND. IROFFV.EQ.ICOFFC ) else if INCV = M_V, If SIDE = 'Left', ( MB_V.EQ.NB_V .AND. MB_V.EQ.MB_C .AND. ICOFFV.EQ.IROFFC ) If SIDE = 'Right', ( NB_V.EQ.NB_C .AND. ICOFFV.EQ.ICOFFC .AND. 
IVCOL.EQ.ICCOL ) end if scalapack-doc-1.5/man/manl/pdlarfb.l0100644000056400000620000001761306335610632017042 0ustar pfrauenfstaff.TH PDLARFB l "12 May 1997" "LAPACK version 1.5" "LAPACK auxiliary routine (version 1.5)" .SH NAME PDLARFB - apply a real block reflector Q or its transpose Q**T to a real distributed M-by-N matrix sub( C ) = C(IC:IC+M-1,JC:JC+N-1) .SH SYNOPSIS .TP 20 SUBROUTINE PDLARFB( SIDE, TRANS, DIRECT, STOREV, M, N, K, V, IV, JV, DESCV, T, C, IC, JC, DESCC, WORK ) .TP 20 .ti +4 CHARACTER SIDE, TRANS, DIRECT, STOREV .TP 20 .ti +4 INTEGER IC, IV, JC, JV, K, M, N .TP 20 .ti +4 INTEGER DESCC( * ), DESCV( * ) .TP 20 .ti +4 DOUBLE PRECISION C( * ), T( * ), V( * ), WORK( * ) .SH PURPOSE PDLARFB applies a real block reflector Q or its transpose Q**T to a real distributed M-by-N matrix sub( C ) = C(IC:IC+M-1,JC:JC+N-1) from the left or the right. .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. 
.br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 SIDE (global input) CHARACTER = 'L': apply Q or Q**T from the Left; .br = 'R': apply Q or Q**T from the Right. .TP 8 TRANS (global input) CHARACTER .br = 'N': No transpose, apply Q; .br = 'T': Transpose, apply Q**T. .TP 8 DIRECT (global input) CHARACTER Indicates how Q is formed from a product of elementary reflectors = 'F': Q = H(1) H(2) . . . H(k) (Forward) .br = 'B': Q = H(k) . . . H(2) H(1) (Backward) .TP 8 STOREV (global input) CHARACTER Indicates how the vectors which define the elementary reflectors are stored: .br = 'C': Columnwise .br = 'R': Rowwise .TP 8 M (global input) INTEGER The number of rows to be operated on i.e the number of rows of the distributed submatrix sub( C ). M >= 0. 
.TP 8 N (global input) INTEGER The number of columns to be operated on i.e the number of columns of the distributed submatrix sub( C ). N >= 0. .TP 8 K (global input) INTEGER The order of the matrix T (= the number of elementary reflectors whose product defines the block reflector). .TP 8 V (local input) DOUBLE PRECISION pointer into the local memory to an array of dimension ( LLD_V, LOCc(JV+K-1) ) if STOREV = 'C', ( LLD_V, LOCc(JV+M-1)) if STOREV = 'R' and SIDE = 'L', ( LLD_V, LOCc(JV+N-1) ) if STOREV = 'R' and SIDE = 'R'. It contains the local pieces of the distributed vectors V representing the Householder transformation. See further details. If STOREV = 'C' and SIDE = 'L', LLD_V >= MAX(1,LOCr(IV+M-1)); if STOREV = 'C' and SIDE = 'R', LLD_V >= MAX(1,LOCr(IV+N-1)); if STOREV = 'R', LLD_V >= LOCr(IV+K-1). .TP 8 IV (global input) INTEGER The row index in the global array V indicating the first row of sub( V ). .TP 8 JV (global input) INTEGER The column index in the global array V indicating the first column of sub( V ). .TP 8 DESCV (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix V. .TP 8 T (local input) DOUBLE PRECISION array, dimension MB_V by MB_V if STOREV = 'R' and NB_V by NB_V if STOREV = 'C'. The trian- gular matrix T in the representation of the block reflector. .TP 8 C (local input/local output) DOUBLE PRECISION pointer into the local memory to an array of dimension (LLD_C,LOCc(JC+N-1)). On entry, the M-by-N distributed matrix sub( C ). On exit, sub( C ) is overwritten by Q*sub( C ) or Q'*sub( C ) or sub( C )*Q or sub( C )*Q'. .TP 8 IC (global input) INTEGER The row index in the global array C indicating the first row of sub( C ). .TP 8 JC (global input) INTEGER The column index in the global array C indicating the first column of sub( C ). .TP 8 DESCC (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix C. 
.TP 8 WORK (local workspace) DOUBLE PRECISION array, dimension (LWORK) If STOREV = 'C', if SIDE = 'L', LWORK >= ( NqC0 + MpC0 ) * K else if SIDE = 'R', LWORK >= ( NqC0 + MAX( NpV0 + NUMROC( NUMROC( N+ICOFFC, NB_V, 0, 0, NPCOL ), NB_V, 0, 0, LCMQ ), MpC0 ) ) * K end if else if STOREV = 'R', if SIDE = 'L', LWORK >= ( MpC0 + MAX( MqV0 + NUMROC( NUMROC( M+IROFFC, MB_V, 0, 0, NPROW ), MB_V, 0, 0, LCMP ), NqC0 ) ) * K else if SIDE = 'R', LWORK >= ( MpC0 + NqC0 ) * K end if end if where LCMQ = LCM / NPCOL with LCM = ILCM( NPROW, NPCOL ), IROFFV = MOD( IV-1, MB_V ), ICOFFV = MOD( JV-1, NB_V ), IVROW = INDXG2P( IV, MB_V, MYROW, RSRC_V, NPROW ), IVCOL = INDXG2P( JV, NB_V, MYCOL, CSRC_V, NPCOL ), MqV0 = NUMROC( M+ICOFFV, NB_V, MYCOL, IVCOL, NPCOL ), NpV0 = NUMROC( N+IROFFV, MB_V, MYROW, IVROW, NPROW ), IROFFC = MOD( IC-1, MB_C ), ICOFFC = MOD( JC-1, NB_C ), ICROW = INDXG2P( IC, MB_C, MYROW, RSRC_C, NPROW ), ICCOL = INDXG2P( JC, NB_C, MYCOL, CSRC_C, NPCOL ), MpC0 = NUMROC( M+IROFFC, MB_C, MYROW, ICROW, NPROW ), NpC0 = NUMROC( N+ICOFFC, MB_C, MYROW, ICROW, NPROW ), NqC0 = NUMROC( N+ICOFFC, NB_C, MYCOL, ICCOL, NPCOL ), ILCM, INDXG2P and NUMROC are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. Alignment requirements ====================== The distributed submatrices V(IV:*, JV:*) and C(IC:IC+M-1,JC:JC+N-1) must verify some alignment properties, namely the following expressions should be true: If STOREV = 'Columnwise' If SIDE = 'Left', ( MB_V.EQ.MB_C .AND. IROFFV.EQ.IROFFC .AND. IVROW.EQ.ICROW ) If SIDE = 'Right', ( MB_V.EQ.NB_C .AND. IROFFV.EQ.ICOFFC ) else if STOREV = 'Rowwise' If SIDE = 'Left', ( NB_V.EQ.MB_C .AND. ICOFFV.EQ.IROFFC ) If SIDE = 'Right', ( NB_V.EQ.NB_C .AND. ICOFFV.EQ.ICOFFC .AND. 
IVCOL.EQ.ICCOL ) end if scalapack-doc-1.5/man/manl/pdlarfg.l0100644000056400000620000001247106335610632017044 0ustar pfrauenfstaff.TH PDLARFG l "12 May 1997" "LAPACK version 1.5" "LAPACK auxiliary routine (version 1.5)" .SH NAME PDLARFG - generate a real elementary reflector H of order n, such that H * sub( X ) = H * ( x(iax,jax) ) = ( alpha ), H' * H = I .SH SYNOPSIS .TP 20 SUBROUTINE PDLARFG( N, ALPHA, IAX, JAX, X, IX, JX, DESCX, INCX, TAU ) .TP 20 .ti +4 INTEGER IAX, INCX, IX, JAX, JX, N .TP 20 .ti +4 DOUBLE PRECISION ALPHA .TP 20 .ti +4 INTEGER DESCX( * ) .TP 20 .ti +4 DOUBLE PRECISION TAU( * ), X( * ) .SH PURPOSE PDLARFG generates a real elementary reflector H of order n, such that .br H * sub( X ) = H * ( x(iax,jax) ) = ( alpha ), H' * H = I, .br ( x ) ( 0 ) .br where alpha is a scalar, and sub( X ) is an (N-1)-element real distributed vector X(IX:IX+N-2,JX) if INCX = 1 and X(IX,JX:JX+N-2) if INCX = DESCX(M_). H is represented in the form .br H = I - tau * ( 1 ) * ( 1 v' ) , .br ( v ) .br where tau is a real scalar and v is a real (N-1)-element .br vector. .br If the elements of sub( X ) are all zero, then tau = 0 and H is taken to be the unit matrix. .br Otherwise 1 <= tau <= 2. .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. 
.br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br Because vectors may be viewed as a subclass of matrices, a distributed vector is considered to be a distributed matrix. .SH ARGUMENTS .TP 8 N (global input) INTEGER The global order of the elementary reflector. N >= 0. .TP 8 ALPHA (local output) DOUBLE PRECISION On exit, alpha is computed in the process scope having the vector sub( X ). .TP 8 IAX (global input) INTEGER The global row index in X of X(IAX,JAX). .TP 8 JAX (global input) INTEGER The global column index in X of X(IAX,JAX). 
.TP 8 X (local input/local output) DOUBLE PRECISION, pointer into the local memory to an array of dimension (LLD_X,*). This array contains the local pieces of the distributed vector sub( X ). Before entry, the incremented array sub( X ) must contain the vector x. On exit, it is overwritten with the vector v. .TP 8 IX (global input) INTEGER The row index in the global array X indicating the first row of sub( X ). .TP 8 JX (global input) INTEGER The column index in the global array X indicating the first column of sub( X ). .TP 8 DESCX (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix X. .TP 8 INCX (global input) INTEGER The global increment for the elements of X. Only two values of INCX are supported in this version, namely 1 and M_X. INCX must not be zero. .TP 8 TAU (local output) DOUBLE PRECISION, array, dimension LOCc(JX) if INCX = 1, and LOCr(IX) otherwise. This array contains the Householder scalars related to the Householder vectors. TAU is tied to the distributed matrix X.
.TH PDLARFT l "12 May 1997" "LAPACK version 1.5" "LAPACK auxiliary routine (version 1.5)" .SH NAME PDLARFT - form the triangular factor T of a real block reflector H of order n, which is defined as a product of k elementary reflectors .SH SYNOPSIS .TP 20 SUBROUTINE PDLARFT( DIRECT, STOREV, N, K, V, IV, JV, DESCV, TAU, T, WORK ) .TP 20 .ti +4 CHARACTER DIRECT, STOREV .TP 20 .ti +4 INTEGER IV, JV, K, N .TP 20 .ti +4 INTEGER DESCV( * ) .TP 20 .ti +4 DOUBLE PRECISION TAU( * ), T( * ), V( * ), WORK( * ) .SH PURPOSE PDLARFT forms the triangular factor T of a real block reflector H of order n, which is defined as a product of k elementary reflectors. If DIRECT = 'F', H = H(1) H(2) . . . H(k) and T is upper triangular; If DIRECT = 'B', H = H(k) . . . H(2) H(1) and T is lower triangular. 
If STOREV = 'C', the vector which defines the elementary reflector H(i) is stored in the i-th column of the distributed matrix V, and H = I - V * T * V' .br If STOREV = 'R', the vector which defines the elementary reflector H(i) is stored in the i-th row of the distributed matrix V, and H = I - V' * T * V .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. 
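Throughout these pages, local array extents LOCr()/LOCc() are obtained from the ScaLAPACK tool function NUMROC, and owning process coordinates from INDXG2P. As a hedged illustration only — a Python transliteration of the standard block-cyclic bookkeeping, not part of ScaLAPACK's Fortran API (the Fortran INDXG2P also takes an unused IPROC argument, omitted here) — the two computations can be sketched as:

```python
def numroc(n, nb, iproc, isrcproc, nprocs):
    """Number of rows/columns of an n-element block-cyclic dimension
    (block size nb) owned by process iproc (0-based), where the first
    block resides on process isrcproc."""
    mydist = (nprocs + iproc - isrcproc) % nprocs  # distance from the source process
    nblocks = n // nb                              # number of complete blocks
    num = (nblocks // nprocs) * nb                 # whole cycles: every process gets these
    extrablks = nblocks % nprocs                   # leftover complete blocks
    if mydist < extrablks:
        num += nb                                  # this process gets one extra full block
    elif mydist == extrablks:
        num += n % nb                              # this process gets the trailing partial block
    return num

def indxg2p(indxglob, nb, isrcproc, nprocs):
    """Process coordinate owning global index indxglob (1-based)."""
    return (isrcproc + (indxglob - 1) // nb) % nprocs
```

For example, a dimension of 10 elements with block size 2 over 2 process rows yields 6 and 4 local elements, consistent with the upper bound ceil( ceil(10/2)/2 )*2 = 6 quoted in the Notes.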
.br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 DIRECT (global input) CHARACTER*1 Specifies the order in which the elementary reflectors are multiplied to form the block reflector: .br = 'F': H = H(1) H(2) . . . H(k) (Forward) .br = 'B': H = H(k) . . . H(2) H(1) (Backward) .TP 8 STOREV (global input) CHARACTER*1 Specifies how the vectors which define the elementary reflectors are stored (see also Further Details): .br = 'C': columnwise .br = 'R': rowwise .TP 8 N (global input) INTEGER The order of the block reflector H. N >= 0. .TP 8 K (global input) INTEGER The order of the triangular factor T (= the number of elementary reflectors). 1 <= K <= MB_V (= NB_V). .TP 8 V (input/output) DOUBLE PRECISION pointer into the local memory to an array of local dimension (LOCr(IV+N-1),LOCc(JV+K-1)) if STOREV = 'C', and (LOCr(IV+K-1),LOCc(JV+N-1)) if STOREV = 'R'. The distributed matrix V contains the Householder vectors. See further details. .TP 8 IV (global input) INTEGER The row index in the global array V indicating the first row of sub( V ). .TP 8 JV (global input) INTEGER The column index in the global array V indicating the first column of sub( V ). .TP 8 DESCV (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix V. 
.TP 8 TAU (local input) DOUBLE PRECISION, array, dimension LOCr(IV+K-1) if INCV = M_V, and LOCc(JV+K-1) otherwise. This array contains the Householder scalars related to the Householder vectors. TAU is tied to the distributed matrix V. .TP 8 T (local output) DOUBLE PRECISION array, dimension (NB_V,NB_V) if STOREV = 'Col', and (MB_V,MB_V) otherwise. It contains the k-by-k triangular factor of the block reflector associated with V. If DIRECT = 'F', T is upper triangular; if DIRECT = 'B', T is lower triangular. .TP 8 WORK (local workspace) DOUBLE PRECISION array, dimension (K*(K-1)/2) .SH FURTHER DETAILS The shape of the matrix V and the storage of the vectors which define the H(i) is best illustrated by the following example with n = 5 and k = 3. The elements equal to 1 are not stored; the corresponding array elements are modified but restored on exit. The rest of the array is not used.
.nf
DIRECT = 'F' and STOREV = 'C':         DIRECT = 'F' and STOREV = 'R':

V( IV:IV+N-1,   ( 1          )         V( IV:IV+K-1,   ( 1 v1 v1 v1 v1 )
   JV:JV+K-1 ) = ( v1  1      )            JV:JV+N-1 ) = (    1 v2 v2 v2 )
                ( v1 v2  1   )                          (       1 v3 v3 )
                ( v1 v2 v3   )
                ( v1 v2 v3   )

DIRECT = 'B' and STOREV = 'C':         DIRECT = 'B' and STOREV = 'R':

V( IV:IV+N-1,   ( v1 v2 v3 )           V( IV:IV+K-1,   ( v1 v1  1       )
   JV:JV+K-1 ) = ( v1 v2 v3 )              JV:JV+N-1 ) = ( v2 v2 v2  1    )
                (  1 v2 v3 )                            ( v3 v3 v3 v3  1 )
                (     1 v3 )
                (        1 )
.fi
.TH PDLARZ l "12 May 1997" "LAPACK version 1.5" "LAPACK auxiliary routine (version 1.5)" .SH NAME PDLARZ - apply a real elementary reflector Q (or Q**T) to a real M-by-N distributed matrix sub( C ) = C(IC:IC+M-1,JC:JC+N-1), from either the left or the right .SH SYNOPSIS .TP 19 SUBROUTINE PDLARZ( SIDE, M, N, L, V, IV, JV, DESCV, INCV, TAU, C, IC, JC, DESCC, WORK ) .TP 19 .ti +4 CHARACTER SIDE .TP 19 .ti +4 INTEGER IC, INCV, IV, JC, JV, L, M, N .TP 19 .ti +4 INTEGER DESCC( * ), DESCV( * ) .TP 19 .ti +4 DOUBLE PRECISION C( * ), TAU( * ), 
V( * ), WORK( * ) .SH PURPOSE PDLARZ applies a real elementary reflector Q (or Q**T) to a real M-by-N distributed matrix sub( C ) = C(IC:IC+M-1,JC:JC+N-1), from either the left or the right. Q is represented in the form Q = I - tau * v * v' .br where tau is a real scalar and v is a real vector. .br If tau = 0, then Q is taken to be the unit matrix. .br Q is a product of k elementary reflectors as returned by PDTZRZF. Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). 
Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br Because vectors may be viewed as a subclass of matrices, a distributed vector is considered to be a distributed matrix. Restrictions .br ============ .br If SIDE = 'Left' and INCV = 1, then the row process having the first entry V(IV,JV) must also own C(IC+M-L,JC:JC+N-1). Moreover, MOD(IV-1,MB_V) must be equal to MOD(IC+M-L-1,MB_C); if INCV=M_V, only the last equality must be satisfied. .br If SIDE = 'Right' and INCV = M_V then the column process having the first entry V(IV,JV) must also own C(IC:IC+M-1,JC+N-L) and MOD(JV-1,NB_V) must be equal to MOD(JC+N-L-1,NB_C); if INCV = 1 only the last equality must be satisfied. .br .SH ARGUMENTS .TP 8 SIDE (global input) CHARACTER = 'L': form Q * sub( C ), .br = 'R': form sub( C ) * Q, Q = Q**T. .TP 8 M (global input) INTEGER The number of rows to be operated on i.e the number of rows of the distributed submatrix sub( C ). M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on i.e the number of columns of the distributed submatrix sub( C ). N >= 0. .TP 8 L (global input) INTEGER The columns of the distributed submatrix sub( A ) containing the meaningful part of the Householder reflectors. 
If SIDE = 'L', M >= L >= 0, if SIDE = 'R', N >= L >= 0. .TP 8 V (local input) DOUBLE PRECISION pointer into the local memory to an array of dimension (LLD_V,*) containing the local pieces of the distributed vectors V representing the Householder transformation Q, V(IV:IV+L-1,JV) if SIDE = 'L' and INCV = 1, .br V(IV,JV:JV+L-1) if SIDE = 'L' and INCV = M_V, .br V(IV:IV+L-1,JV) if SIDE = 'R' and INCV = 1, .br V(IV,JV:JV+L-1) if SIDE = 'R' and INCV = M_V, The vector v in the representation of Q. V is not used if TAU = 0. .TP 8 IV (global input) INTEGER The row index in the global array V indicating the first row of sub( V ). .TP 8 JV (global input) INTEGER The column index in the global array V indicating the first column of sub( V ). .TP 8 DESCV (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix V. .TP 8 INCV (global input) INTEGER The global increment for the elements of V. Only two values of INCV are supported in this version, namely 1 and M_V. INCV must not be zero. .TP 8 TAU (local input) DOUBLE PRECISION, array, dimension LOCc(JV) if INCV = 1, and LOCr(IV) otherwise. This array contains the Householder scalars related to the Householder vectors. TAU is tied to the distributed matrix V. .TP 8 C (local input/local output) DOUBLE PRECISION pointer into the local memory to an array of dimension (LLD_C, LOCc(JC+N-1) ), containing the local pieces of sub( C ). On exit, sub( C ) is overwritten by the Q * sub( C ) if SIDE = 'L', or sub( C ) * Q if SIDE = 'R'. .TP 8 IC (global input) INTEGER The row index in the global array C indicating the first row of sub( C ). .TP 8 JC (global input) INTEGER The column index in the global array C indicating the first column of sub( C ). .TP 8 DESCC (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix C. 
.TP 8 WORK (local workspace) DOUBLE PRECISION array, dimension (LWORK) If INCV = 1, if SIDE = 'L', if IVCOL = ICCOL, LWORK >= NqC0 else LWORK >= MpC0 + MAX( 1, NqC0 ) end if else if SIDE = 'R', LWORK >= NqC0 + MAX( MAX( 1, MpC0 ), NUMROC( NUMROC( N+ICOFFC,NB_V,0,0,NPCOL ),NB_V,0,0,LCMQ ) ) end if else if INCV = M_V, if SIDE = 'L', LWORK >= MpC0 + MAX( MAX( 1, NqC0 ), NUMROC( NUMROC( M+IROFFC,MB_V,0,0,NPROW ),MB_V,0,0,LCMP ) ) else if SIDE = 'R', if IVROW = ICROW, LWORK >= MpC0 else LWORK >= NqC0 + MAX( 1, MpC0 ) end if end if end if where LCM is the least common multiple of NPROW and NPCOL and LCM = ILCM( NPROW, NPCOL ), LCMP = LCM / NPROW, LCMQ = LCM / NPCOL, IROFFC = MOD( IC-1, MB_C ), ICOFFC = MOD( JC-1, NB_C ), ICROW = INDXG2P( IC, MB_C, MYROW, RSRC_C, NPROW ), ICCOL = INDXG2P( JC, NB_C, MYCOL, CSRC_C, NPCOL ), MpC0 = NUMROC( M+IROFFC, MB_C, MYROW, ICROW, NPROW ), NqC0 = NUMROC( N+ICOFFC, NB_C, MYCOL, ICCOL, NPCOL ), ILCM, INDXG2P and NUMROC are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. Alignment requirements ====================== The distributed submatrices V(IV:*, JV:*) and C(IC:IC+M-1,JC:JC+N-1) must verify some alignment properties, namely the following expressions should be true: MB_V = NB_V, If INCV = 1, If SIDE = 'Left', ( MB_V.EQ.MB_C .AND. IROFFV.EQ.IROFFC .AND. IVROW.EQ.ICROW ) If SIDE = 'Right', ( MB_V.EQ.NB_A .AND. MB_V.EQ.NB_C .AND. IROFFV.EQ.ICOFFC ) else if INCV = M_V, If SIDE = 'Left', ( MB_V.EQ.NB_V .AND. MB_V.EQ.MB_C .AND. ICOFFV.EQ.IROFFC ) If SIDE = 'Right', ( NB_V.EQ.NB_C .AND. ICOFFV.EQ.ICOFFC .AND. 
IVCOL.EQ.ICCOL ) end if
.TH PDLARZB l "12 May 1997" "LAPACK version 1.5" "LAPACK auxiliary routine (version 1.5)" .SH NAME PDLARZB - apply a real block reflector Q or its transpose Q**T to a real distributed M-by-N matrix sub( C ) = C(IC:IC+M-1,JC:JC+N-1) .SH SYNOPSIS .TP 20 SUBROUTINE PDLARZB( SIDE, TRANS, DIRECT, STOREV, M, N, K, L, V, IV, JV, DESCV, T, C, IC, JC, DESCC, WORK ) .TP 20 .ti +4 CHARACTER DIRECT, SIDE, STOREV, TRANS .TP 20 .ti +4 INTEGER IC, IV, JC, JV, K, L, M, N .TP 20 .ti +4 INTEGER DESCC( * ), DESCV( * ) .TP 20 .ti +4 DOUBLE PRECISION C( * ), T( * ), V( * ), WORK( * ) .SH PURPOSE PDLARZB applies a real block reflector Q or its transpose Q**T to a real distributed M-by-N matrix sub( C ) = C(IC:IC+M-1,JC:JC+N-1) from the left or the right. .br Q is a product of k elementary reflectors as returned by PDTZRZF. Currently, only STOREV = 'R' and DIRECT = 'B' are supported. Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. 
.br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 SIDE (global input) CHARACTER = 'L': apply Q or Q**T from the Left; .br = 'R': apply Q or Q**T from the Right. .TP 8 TRANS (global input) CHARACTER .br = 'N': No transpose, apply Q; .br = 'T': Transpose, apply Q**T. .TP 8 DIRECT (global input) CHARACTER Indicates how H is formed from a product of elementary reflectors = 'F': H = H(1) H(2) . . . H(k) (Forward, not supported yet) .br = 'B': H = H(k) . . . 
H(2) H(1) (Backward) .TP 8 STOREV (global input) CHARACTER Indicates how the vectors which define the elementary reflectors are stored: .br = 'C': Columnwise (not supported yet) .br = 'R': Rowwise .TP 8 M (global input) INTEGER The number of rows to be operated on i.e the number of rows of the distributed submatrix sub( C ). M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on i.e the number of columns of the distributed submatrix sub( C ). N >= 0. .TP 8 K (global input) INTEGER The order of the matrix T (= the number of elementary reflectors whose product defines the block reflector). .TP 8 L (global input) INTEGER The columns of the distributed submatrix sub( A ) containing the meaningful part of the Householder reflectors. If SIDE = 'L', M >= L >= 0, if SIDE = 'R', N >= L >= 0. .TP 8 V (local input) DOUBLE PRECISION pointer into the local memory to an array of dimension (LLD_V, LOCc(JV+M-1)) if SIDE = 'L', (LLD_V, LOCc(JV+N-1)) if SIDE = 'R'. It contains the local pieces of the distributed vectors V representing the Householder transformation as returned by PDTZRZF. LLD_V >= LOCr(IV+K-1). .TP 8 IV (global input) INTEGER The row index in the global array V indicating the first row of sub( V ). .TP 8 JV (global input) INTEGER The column index in the global array V indicating the first column of sub( V ). .TP 8 DESCV (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix V. .TP 8 T (local input) DOUBLE PRECISION array, dimension MB_V by MB_V The lower triangular matrix T in the representation of the block reflector. .TP 8 C (local input/local output) DOUBLE PRECISION pointer into the local memory to an array of dimension (LLD_C,LOCc(JC+N-1)). On entry, the M-by-N distributed matrix sub( C ). On exit, sub( C ) is overwritten by Q*sub( C ) or Q'*sub( C ) or sub( C )*Q or sub( C )*Q'. .TP 8 IC (global input) INTEGER The row index in the global array C indicating the first row of sub( C ). 
.TP 8 JC (global input) INTEGER The column index in the global array C indicating the first column of sub( C ). .TP 8 DESCC (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix C. .TP 8 WORK (local workspace) DOUBLE PRECISION array, dimension (LWORK) If STOREV = 'C', if SIDE = 'L', LWORK >= ( NqC0 + MpC0 ) * K else if SIDE = 'R', LWORK >= ( NqC0 + MAX( NpV0 + NUMROC( NUMROC( N+ICOFFC, NB_V, 0, 0, NPCOL ), NB_V, 0, 0, LCMQ ), MpC0 ) ) * K end if else if STOREV = 'R', if SIDE = 'L', LWORK >= ( MpC0 + MAX( MqV0 + NUMROC( NUMROC( M+IROFFC, MB_V, 0, 0, NPROW ), MB_V, 0, 0, LCMP ), NqC0 ) ) * K else if SIDE = 'R', LWORK >= ( MpC0 + NqC0 ) * K end if end if where LCMQ = LCM / NPCOL with LCM = ILCM( NPROW, NPCOL ), IROFFV = MOD( IV-1, MB_V ), ICOFFV = MOD( JV-1, NB_V ), IVROW = INDXG2P( IV, MB_V, MYROW, RSRC_V, NPROW ), IVCOL = INDXG2P( JV, NB_V, MYCOL, CSRC_V, NPCOL ), MqV0 = NUMROC( M+ICOFFV, NB_V, MYCOL, IVCOL, NPCOL ), NpV0 = NUMROC( N+IROFFV, MB_V, MYROW, IVROW, NPROW ), IROFFC = MOD( IC-1, MB_C ), ICOFFC = MOD( JC-1, NB_C ), ICROW = INDXG2P( IC, MB_C, MYROW, RSRC_C, NPROW ), ICCOL = INDXG2P( JC, NB_C, MYCOL, CSRC_C, NPCOL ), MpC0 = NUMROC( M+IROFFC, MB_C, MYROW, ICROW, NPROW ), NpC0 = NUMROC( N+ICOFFC, MB_C, MYROW, ICROW, NPROW ), NqC0 = NUMROC( N+ICOFFC, NB_C, MYCOL, ICCOL, NPCOL ), ILCM, INDXG2P and NUMROC are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. Alignment requirements ====================== The distributed submatrices V(IV:*, JV:*) and C(IC:IC+M-1,JC:JC+N-1) must verify some alignment properties, namely the following expressions should be true: If STOREV = 'Columnwise' If SIDE = 'Left', ( MB_V.EQ.MB_C .AND. IROFFV.EQ.IROFFC .AND. IVROW.EQ.ICROW ) If SIDE = 'Right', ( MB_V.EQ.NB_C .AND. IROFFV.EQ.ICOFFC ) else if STOREV = 'Rowwise' If SIDE = 'Left', ( NB_V.EQ.MB_C .AND. ICOFFV.EQ.IROFFC ) If SIDE = 'Right', ( NB_V.EQ.NB_C .AND. 
ICOFFV.EQ.ICOFFC .AND. IVCOL.EQ.ICCOL ) end if
.TH PDLARZT l "12 May 1997" "LAPACK version 1.5" "LAPACK auxiliary routine (version 1.5)" .SH NAME PDLARZT - form the triangular factor T of a real block reflector H of order > n, which is defined as a product of k elementary reflectors as returned by PDTZRZF .SH SYNOPSIS .TP 20 SUBROUTINE PDLARZT( DIRECT, STOREV, N, K, V, IV, JV, DESCV, TAU, T, WORK ) .TP 20 .ti +4 CHARACTER DIRECT, STOREV .TP 20 .ti +4 INTEGER IV, JV, K, N .TP 20 .ti +4 INTEGER DESCV( * ) .TP 20 .ti +4 DOUBLE PRECISION TAU( * ), T( * ), V( * ), WORK( * ) .SH PURPOSE PDLARZT forms the triangular factor T of a real block reflector H of order > n, which is defined as a product of k elementary reflectors as returned by PDTZRZF. If DIRECT = 'F', H = H(1) H(2) . . . H(k) and T is upper triangular; If DIRECT = 'B', H = H(k) . . . H(2) H(1) and T is lower triangular. If STOREV = 'C', the vector which defines the elementary reflector H(i) is stored in the i-th column of the array V, and .br H = I - V * T * V' .br If STOREV = 'R', the vector which defines the elementary reflector H(i) is stored in the i-th row of the array V, and .br H = I - V' * T * V .br Currently, only STOREV = 'R' and DIRECT = 'B' are supported. Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. 
.br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 DIRECT (global input) CHARACTER Specifies the order in which the elementary reflectors are multiplied to form the block reflector: .br = 'F': H = H(1) H(2) . . . H(k) (Forward, not supported yet) .br = 'B': H = H(k) . . . 
H(2) H(1) (Backward) .TP 8 STOREV (global input) CHARACTER Specifies how the vectors which define the elementary reflectors are stored (see also Further Details): .br = 'R': rowwise .TP 8 N (global input) INTEGER The number of meaningful entries of the block reflector H. N >= 0. .TP 8 K (global input) INTEGER The order of the triangular factor T (= the number of elementary reflectors). 1 <= K <= MB_V (= NB_V). .TP 8 V (input/output) DOUBLE PRECISION pointer into the local memory to an array of local dimension (LOCr(IV+K-1),LOCc(JV+N-1)). The distributed matrix V contains the Householder vectors. See further details. .TP 8 IV (global input) INTEGER The row index in the global array V indicating the first row of sub( V ). .TP 8 JV (global input) INTEGER The column index in the global array V indicating the first column of sub( V ). .TP 8 DESCV (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix V. .TP 8 TAU (local input) DOUBLE PRECISION, array, dimension LOCr(IV+K-1) if INCV = M_V, and LOCc(JV+K-1) otherwise. This array contains the Householder scalars related to the Householder vectors. TAU is tied to the distributed matrix V. .TP 8 T (local output) DOUBLE PRECISION array, dimension (MB_V,MB_V) It contains the k-by-k triangular factor of the block reflector associated with V. T is lower triangular. .TP 8 WORK (local workspace) DOUBLE PRECISION array, dimension (K*(K-1)/2) .SH FURTHER DETAILS The shape of the matrix V and the storage of the vectors which define the H(i) is best illustrated by the following example with n = 5 and k = 3. The elements equal to 1 are not stored; the corresponding array elements are modified but restored on exit. The rest of the array is not used.
.nf
DIRECT = 'F' and STOREV = 'C':        DIRECT = 'F' and STOREV = 'R':

                                           ______V_____
       ( v1 v2 v3 )                       /            \
       ( v1 v2 v3 )                     ( v1 v1 v1 v1 v1 . . . . 1 )
   V = ( v1 v2 v3 )                     ( v2 v2 v2 v2 v2 . . . 1   )
       ( v1 v2 v3 )                     ( v3 v3 v3 v3 v3 . . 1     )
       ( v1 v2 v3 )
          .  .  .
          .  .  .
          1  .  .
             1  .
                1

DIRECT = 'B' and STOREV = 'C':        DIRECT = 'B' and STOREV = 'R':

                                           ______V_____
          1                               /            \
          .  1                          ( 1 . . . . v1 v1 v1 v1 v1 )
          .  .  1                       ( . 1 . . . v2 v2 v2 v2 v2 )
          .  .  .                       ( . . 1 . . v3 v3 v3 v3 v3 )
          .  .  .
       ( v1 v2 v3 )
       ( v1 v2 v3 )
   V = ( v1 v2 v3 )
       ( v1 v2 v3 )
       ( v1 v2 v3 )
.fi
.TH PDLASCL l "12 May 1997" "LAPACK version 1.5" "LAPACK auxiliary routine (version 1.5)" .SH NAME PDLASCL - multiply the M-by-N real distributed matrix sub( A ) denoting A(IA:IA+M-1,JA:JA+N-1) by the real scalar CTO/CFROM .SH SYNOPSIS .TP 20 SUBROUTINE PDLASCL( TYPE, CFROM, CTO, M, N, A, IA, JA, DESCA, INFO ) .TP 20 .ti +4 CHARACTER TYPE .TP 20 .ti +4 INTEGER IA, INFO, JA, M, N .TP 20 .ti +4 DOUBLE PRECISION CFROM, CTO .TP 20 .ti +4 INTEGER DESCA( * ) .TP 20 .ti +4 DOUBLE PRECISION A( * ) .SH PURPOSE PDLASCL multiplies the M-by-N real distributed matrix sub( A ) denoting A(IA:IA+M-1,JA:JA+N-1) by the real scalar CTO/CFROM. This is done without over/underflow as long as the final result CTO * A(I,J) / CFROM does not over/underflow. TYPE specifies that sub( A ) may be full, upper triangular, lower triangular or upper Hessenberg. .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. 
.br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 TYPE (global input) CHARACTER TYPE indicates the storage type of the input distributed matrix. = 'G': sub( A ) is a full matrix, .br = 'L': sub( A ) is a lower triangular matrix, .br = 'U': sub( A ) is an upper triangular matrix, .br = 'H': sub( A ) is an upper Hessenberg matrix.
.TP 8 CFROM (global input) DOUBLE PRECISION CTO (global input) DOUBLE PRECISION The distributed matrix sub( A ) is multiplied by CTO/CFROM. A(I,J) is computed without over/underflow if the final result CTO * A(I,J) / CFROM can be represented without over/underflow. CFROM must be nonzero. .TP 8 M (global input) INTEGER The number of rows to be operated on i.e the number of rows of the distributed submatrix sub( A ). M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on i.e the number of columns of the distributed submatrix sub( A ). N >= 0. .TP 8 A (local input/local output) DOUBLE PRECISION pointer into the local memory to an array of dimension (LLD_A,LOCc(JA+N-1)). This array contains the local pieces of the distributed matrix sub( A ). On exit, this array contains the local pieces of the distributed matrix multiplied by CTO/CFROM. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 INFO (local output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. 
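The over/underflow guard that PDLASCL promises can be pictured serially: the serial LAPACK counterpart DLASCL applies CTO/CFROM as a product of multipliers that are each safely representable, rather than forming the quotient once. Below is a simplified Python sketch of that loop; `scale_matrix` is a hypothetical helper name, and the sketch assumes CFROM is finite and nonzero (the distributed routine also checks its arguments and carries descriptor bookkeeping, none of which is shown here).

```python
import sys

def scale_matrix(a, cfrom, cto):
    """Multiply every entry of `a` (a list of rows) by cto/cfrom while
    keeping each intermediate multiplier representable.  Simplified
    sketch of the DLASCL scaling loop; assumes cfrom is finite, nonzero."""
    smlnum = sys.float_info.min      # smallest safe positive number
    bignum = 1.0 / smlnum            # its safe reciprocal
    cfromc, ctoc = cfrom, cto
    done = False
    while not done:
        cfrom1 = cfromc * smlnum
        cto1 = ctoc / bignum
        if abs(cfrom1) > abs(ctoc) and ctoc != 0.0:
            # the full quotient would underflow: peel off a factor smlnum
            mul, cfromc = smlnum, cfrom1
        elif abs(cto1) > abs(cfromc):
            # the full quotient would overflow: peel off a factor bignum
            mul, ctoc = bignum, cto1
        else:
            # the remaining quotient is safe to form directly
            mul, done = ctoc / cfromc, True
        a = [[x * mul for x in row] for row in a]
    return a
```

For well-scaled inputs the loop runs once and simply multiplies by CTO/CFROM; only near the underflow/overflow thresholds does it take extra passes.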
scalapack-doc-1.5/man/manl/pdlase2.l .TH PDLASE2 l "12 May 1997" "LAPACK version 1.5" "LAPACK auxiliary routine (version 1.5)" .SH NAME PDLASE2 - initialize an M-by-N distributed matrix sub( A ) denoting A(IA:IA+M-1,JA:JA+N-1) to BETA on the diagonal and ALPHA on the offdiagonals .SH SYNOPSIS .TP 20 SUBROUTINE PDLASE2( UPLO, M, N, ALPHA, BETA, A, IA, JA, DESCA ) .TP 20 .ti +4 CHARACTER UPLO .TP 20 .ti +4 INTEGER IA, JA, M, N .TP 20 .ti +4 DOUBLE PRECISION ALPHA, BETA .TP 20 .ti +4 INTEGER DESCA( * ) .TP 20 .ti +4 DOUBLE PRECISION A( * ) .SH PURPOSE PDLASE2 initializes an M-by-N distributed matrix sub( A ) denoting A(IA:IA+M-1,JA:JA+N-1) to BETA on the diagonal and ALPHA on the offdiagonals. PDLASE2 requires that only one dimension of the matrix operand is distributed. .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array.
.br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 UPLO (global input) CHARACTER Specifies the part of the distributed matrix sub( A ) to be set: .br = 'U': Upper triangular part is set; the strictly lower triangular part of sub( A ) is not changed; = 'L': Lower triangular part is set; the strictly upper triangular part of sub( A ) is not changed; Otherwise: All of the matrix sub( A ) is set. .TP 8 M (global input) INTEGER The number of rows to be operated on i.e the number of rows of the distributed submatrix sub( A ). M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on i.e the number of columns of the distributed submatrix sub( A ). N >= 0. .TP 8 ALPHA (global input) DOUBLE PRECISION The constant to which the offdiagonal elements are to be set. 
.TP 8 BETA (global input) DOUBLE PRECISION The constant to which the diagonal elements are to be set. .TP 8 A (local output) DOUBLE PRECISION pointer into the local memory to an array of dimension (LLD_A,LOCc(JA+N-1)). This array contains the local pieces of the distributed matrix sub( A ) to be set. On exit, the leading M-by-N submatrix sub( A ) is set as follows: if UPLO = 'U', A(IA+i-1,JA+j-1) = ALPHA, 1<=i<=j-1, 1<=j<=N, if UPLO = 'L', A(IA+i-1,JA+j-1) = ALPHA, j+1<=i<=M, 1<=j<=N, otherwise, A(IA+i-1,JA+j-1) = ALPHA, 1<=i<=M, 1<=j<=N, IA+i.NE.JA+j, and, for all UPLO, A(IA+i-1,JA+i-1) = BETA, 1<=i<=min(M,N). .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. scalapack-doc-1.5/man/manl/pdpbsv.l0100644000056400000620000000142106335610635016717 0ustar pfrauenfstaff.TH PDPBSV l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PDPBSV - solve a system of linear equations A(1:N, JA:JA+N-1) * X = B(IB:IB+N-1, 1:NRHS) .SH SYNOPSIS .TP 19 SUBROUTINE PDPBSV( UPLO, N, BW, NRHS, A, JA, DESCA, B, IB, DESCB, WORK, LWORK, INFO ) .TP 19 .ti +4 CHARACTER UPLO .TP 19 .ti +4 INTEGER BW, IB, INFO, JA, LWORK, N, NRHS .TP 19 .ti +4 INTEGER DESCA( * ), DESCB( * ) .TP 19 .ti +4 DOUBLE PRECISION A( * ), B( * ), WORK( * ) .SH PURPOSE PDPBSV solves a system of linear equations where A(1:N, JA:JA+N-1) is an N-by-N real .br banded symmetric positive definite distributed .br matrix with bandwidth BW. .br Cholesky factorization is used to factor a reordering of .br the matrix into L L'. .br See PDPBTRF and PDPBTRS for details. 
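The factor/solve split that PDPBSV distributes (factor A into L L', then two triangular solves, as PDPBTRF/PDPBTRS do on the band) is easiest to see in its dense, serial form. The Python sketch below ignores the band storage and the reordering the distributed routine applies; `cholesky_solve` is a hypothetical name used only for this illustration, and the matrix is assumed symmetric positive definite.

```python
import math

def cholesky_solve(a, b):
    """Solve A x = b for symmetric positive definite A (list of rows):
    factor A = L*L' (L lower triangular), then solve L y = b and L' x = y.
    Dense serial sketch of the algebra behind PDPBSV, not the routine."""
    n = len(a)
    lfac = [[0.0] * n for _ in range(n)]
    for j in range(n):                              # Cholesky factorization
        s = a[j][j] - sum(lfac[j][k] ** 2 for k in range(j))
        lfac[j][j] = math.sqrt(s)                   # requires s > 0 (SPD)
        for i in range(j + 1, n):
            lfac[i][j] = (a[i][j]
                          - sum(lfac[i][k] * lfac[j][k] for k in range(j))) / lfac[j][j]
    y = [0.0] * n
    for i in range(n):                              # forward solve L y = b
        y[i] = (b[i] - sum(lfac[i][k] * y[k] for k in range(i))) / lfac[i][i]
    x = [0.0] * n
    for i in reversed(range(n)):                    # back solve L' x = y
        x[i] = (y[i] - sum(lfac[k][i] * x[k] for k in range(i + 1, n))) / lfac[i][i]
    return x
```

For a banded A with bandwidth BW, L inherits the band (up to the reordering fill), which is why the distributed routine can work on the band alone.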
.br scalapack-doc-1.5/man/manl/pdlaset.l0100644000056400000620000001211406335610632017053 0ustar pfrauenfstaff.TH PDLASET l "12 May 1997" "LAPACK version 1.5" "LAPACK auxiliary routine (version 1.5)" .SH NAME PDLASET - initialize an M-by-N distributed matrix sub( A ) denoting A(IA:IA+M-1,JA:JA+N-1) to BETA on the diagonal and ALPHA on the offdiagonals .SH SYNOPSIS .TP 20 SUBROUTINE PDLASET( UPLO, M, N, ALPHA, BETA, A, IA, JA, DESCA ) .TP 20 .ti +4 CHARACTER UPLO .TP 20 .ti +4 INTEGER IA, JA, M, N .TP 20 .ti +4 DOUBLE PRECISION ALPHA, BETA .TP 20 .ti +4 INTEGER DESCA( * ) .TP 20 .ti +4 DOUBLE PRECISION A( * ) .SH PURPOSE PDLASET initializes an M-by-N distributed matrix sub( A ) denoting A(IA:IA+M-1,JA:JA+N-1) to BETA on the diagonal and ALPHA on the offdiagonals. Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. 
.br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 UPLO (global input) CHARACTER Specifies the part of the distributed matrix sub( A ) to be set: .br = 'U': Upper triangular part is set; the strictly lower triangular part of sub( A ) is not changed; = 'L': Lower triangular part is set; the strictly upper triangular part of sub( A ) is not changed; Otherwise: All of the matrix sub( A ) is set. .TP 8 M (global input) INTEGER The number of rows to be operated on i.e the number of rows of the distributed submatrix sub( A ). M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on i.e the number of columns of the distributed submatrix sub( A ). N >= 0. .TP 8 ALPHA (global input) DOUBLE PRECISION The constant to which the offdiagonal elements are to be set. .TP 8 BETA (global input) DOUBLE PRECISION The constant to which the diagonal elements are to be set. 
.TP 8 A (local output) DOUBLE PRECISION pointer into the local memory to an array of dimension (LLD_A,LOCc(JA+N-1)). This array contains the local pieces of the distributed matrix sub( A ) to be set. On exit, the leading M-by-N submatrix sub( A ) is set as follows: if UPLO = 'U', A(IA+i-1,JA+j-1) = ALPHA, 1<=i<=j-1, 1<=j<=N, if UPLO = 'L', A(IA+i-1,JA+j-1) = ALPHA, j+1<=i<=M, 1<=j<=N, otherwise, A(IA+i-1,JA+j-1) = ALPHA, 1<=i<=M, 1<=j<=N, IA+i.NE.JA+j, and, for all UPLO, A(IA+i-1,JA+i-1) = BETA, 1<=i<=min(M,N). .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. scalapack-doc-1.5/man/manl/pdlasmsub.l0100644000056400000620000001153106335610632017413 0ustar pfrauenfstaff.TH PDLASMSUB l "12 May 1997" "LAPACK version 1.5 " "LAPACK routine (version 1.5 )" .SH NAME PDLASMSUB - look for a small subdiagonal element from the bottom of the matrix that it can safely set to zero .SH SYNOPSIS .TP 22 SUBROUTINE PDLASMSUB( A, DESCA, I, L, K, SMLNUM, BUF, LWORK ) .TP 22 .ti +4 INTEGER I, K, L, LWORK .TP 22 .ti +4 DOUBLE PRECISION SMLNUM .TP 22 .ti +4 INTEGER DESCA( * ) .TP 22 .ti +4 DOUBLE PRECISION A( * ), BUF( * ) .SH PURPOSE PDLASMSUB looks for a small subdiagonal element from the bottom of the matrix that it can safely set to zero. Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". 
.br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 A (global input) DOUBLE PRECISION array, dimension (DESCA(LLD_),*) On entry, the Hessenberg matrix whose tridiagonal part is being scanned. 
Unchanged on exit. .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 I (global input) INTEGER The global location of the bottom of the unreduced submatrix of A. Unchanged on exit. .TP 8 L (global input) INTEGER The global location of the top of the unreduced submatrix of A. Unchanged on exit. .TP 8 K (global output) INTEGER On exit, this yields the bottom portion of the unreduced submatrix. This will satisfy: L <= K <= I-1. .TP 8 SMLNUM (global input) DOUBLE PRECISION On entry, a "small number" for the given matrix. Unchanged on exit. .TP 8 BUF (local output) DOUBLE PRECISION array of size LWORK. .TP 8 LWORK (global input) INTEGER On entry, LWORK is the size of the work buffer. This must be at least 2*Ceil( Ceil( (I-L)/HBL ) / LCM(NPROW,NPCOL) ) Here LCM is least common multiple, and NPROWxNPCOL is the logical grid size. Notes: This routine does a global maximum and must be called by all processes. This code is basically a parallelization of the following snip of LAPACK code from DLAHQR: Look for a single small subdiagonal element. DO 20 K = I, L + 1, -1 TST1 = ABS( H( K-1, K-1 ) ) + ABS( H( K, K ) ) IF( TST1.EQ.ZERO ) $ TST1 = DLANHS( '1', I-L+1, H( L, L ), LDH, WORK ) IF( ABS( H( K, K-1 ) ).LE.MAX( ULP*TST1, SMLNUM ) ) $ GO TO 30 20 CONTINUE 30 CONTINUE Implemented by: G.
Henry, November 17, 1996 scalapack-doc-1.5/man/manl/pdlassq.l0100644000056400000620000001225506335610632017074 0ustar pfrauenfstaff.TH PDLASSQ l "12 May 1997" "LAPACK version 1.5" "LAPACK auxiliary routine (version 1.5)" .SH NAME PDLASSQ - return the values scl and smsq such that ( scl**2 )*smsq = x( 1 )**2 +...+ x( n )**2 + ( scale**2 )*sumsq, .SH SYNOPSIS .TP 20 SUBROUTINE PDLASSQ( N, X, IX, JX, DESCX, INCX, SCALE, SUMSQ ) .TP 20 .ti +4 INTEGER IX, INCX, JX, N .TP 20 .ti +4 DOUBLE PRECISION SCALE, SUMSQ .TP 20 .ti +4 INTEGER DESCX( * ) .TP 20 .ti +4 DOUBLE PRECISION X( * ) .SH PURPOSE PDLASSQ returns the values scl and smsq such that where x( i ) = sub( X ) = X( IX+(JX-1)*DESCX(M_)+(i-1)*INCX ). The value of sumsq is assumed to be non-negative and scl returns the value .br scl = max( scale, abs( x( i ) ) ). .br scale and sumsq must be supplied in SCALE and SUMSQ respectively. SCALE and SUMSQ are overwritten by scl and ssq respectively. The routine makes only one pass through the vector sub( X ). Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. 
.br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br Because vectors may be viewed as a subclass of matrices, a distributed vector is considered to be a distributed matrix. The results are only available in the scope of sub( X ), i.e. if sub( X ) is distributed along a process row, the correct results are only available in this process row of the grid. Similarly, if sub( X ) is distributed along a process column, the correct results are only available in this process column of the grid. .br .SH ARGUMENTS .TP 8 N (global input) INTEGER The length of the distributed vector sub( X ). .TP 8 X (input) DOUBLE PRECISION The vector for which a scaled sum of squares is computed.
x( i ) = X(IX+(JX-1)*M_X +(i-1)*INCX ), 1 <= i <= n. .TP 8 IX (global input) INTEGER The row index in the global array X indicating the first row of sub( X ). .TP 8 JX (global input) INTEGER The column index in the global array X indicating the first column of sub( X ). .TP 8 DESCX (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix X. .TP 8 INCX (global input) INTEGER The global increment for the elements of X. Only two values of INCX are supported in this version, namely 1 and M_X. INCX must not be zero. .TP 8 SCALE (local input/local output) DOUBLE PRECISION On entry, the value scale in the equation above. On exit, SCALE is overwritten with scl, the scaling factor for the sum of squares. .TP 8 SUMSQ (local input/local output) DOUBLE PRECISION On entry, the value sumsq in the equation above. On exit, SUMSQ is overwritten with smsq, the basic sum of squares from which scl has been factored out. scalapack-doc-1.5/man/manl/pdlaswp.l .TH PDLASWP l "12 May 1997" "LAPACK version 1.5" "LAPACK auxiliary routine (version 1.5)" .SH NAME PDLASWP - perform a series of row or column interchanges on the distributed matrix sub( A ) = A(IA:IA+M-1,JA:JA+N-1) .SH SYNOPSIS .TP 20 SUBROUTINE PDLASWP( DIREC, ROWCOL, N, A, IA, JA, DESCA, K1, K2, IPIV ) .TP 20 .ti +4 CHARACTER DIREC, ROWCOL .TP 20 .ti +4 INTEGER IA, JA, K1, K2, N .TP 20 .ti +4 INTEGER DESCA( * ), IPIV( * ) .TP 20 .ti +4 DOUBLE PRECISION A( * ) .SH PURPOSE PDLASWP performs a series of row or column interchanges on the distributed matrix sub( A ) = A(IA:IA+M-1,JA:JA+N-1). One interchange is initiated for each of rows or columns K1 through K2 of sub( A ). This routine assumes that the pivoting information has already been broadcast along the process row or column. .br Also note that this routine will only work for K1-K2 being in the same MB (or NB) block.
If you want to pivot a full matrix, use PDLAPIV. .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. 
.br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 DIREC (global input) CHARACTER Specifies in which order the permutation is applied: = 'F' (Forward) = 'B' (Backward) .TP 8 ROWCOL (global input) CHARACTER Specifies if the rows or columns are permuted: = 'R' (Rows) = 'C' (Columns) .TP 8 N (global input) INTEGER If ROWCOL = 'R', the length of the rows of the distributed matrix A(*,JA:JA+N-1) to be permuted; If ROWCOL = 'C', the length of the columns of the distributed matrix A(IA:IA+N-1,*) to be permuted. .TP 8 A (local input/local output) DOUBLE PRECISION pointer into the local memory to an array of dimension (LLD_A, * ). On entry, this array contains the local pieces of the distri- buted matrix to which the row/columns interchanges will be applied. On exit the permuted distributed matrix. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 K1 (global input) INTEGER The first element of IPIV for which a row or column inter- change will be done. .TP 8 K2 (global input) INTEGER The last element of IPIV for which a row or column inter- change will be done. .TP 8 IPIV (local input) INTEGER array, dimension LOCr(M_A)+MB_A for row pivoting and LOCc(N_A)+NB_A for column pivoting. This array is tied to the matrix A, IPIV(K) = L implies rows (or columns) K and L are to be interchanged. 
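The IPIV convention just described (IPIV(K) = L means rows or columns K and L are exchanged, for K running from K1 to K2, forward or backward depending on DIREC) can be sketched serially for the row case. This is an illustration in Python with 0-based indices and a hypothetical helper name, not the distributed routine; the column case applies the same pattern to columns.

```python
def apply_row_swaps(a, ipiv, k1, k2, forward=True):
    """Apply the PDLASWP-style interchange pattern to `a` (a list of rows):
    for each k from k1 to k2 (reversed when forward=False, i.e. DIREC='B'),
    swap rows k and ipiv[k].  0-based sketch; Fortran IPIV is 1-based."""
    ks = range(k1, k2 + 1) if forward else range(k2, k1 - 1, -1)
    for k in ks:
        l = ipiv[k]
        if l != k:
            a[k], a[l] = a[l], a[k]
    return a
```

Applying the same IPIV backward undoes a forward application, which is how a factorization's pivoting is applied and later reversed.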
scalapack-doc-1.5/man/manl/pdlatra.l0100644000056400000620000000772006335610633017056 0ustar pfrauenfstaff.TH PDLATRA l "12 May 1997" "LAPACK version 1.5" "LAPACK auxiliary routine (version 1.5)" .SH NAME PDLATRA - compute the trace of an N-by-N distributed matrix sub( A ) denoting A( IA:IA+N-1, JA:JA+N-1 ) .SH SYNOPSIS .TP 17 DOUBLE PRECISION FUNCTION PDLATRA( N, A, IA, JA, DESCA ) .TP 17 .ti +4 INTEGER IA, JA, N .TP 17 .ti +4 INTEGER DESCA( * ) .TP 17 .ti +4 DOUBLE PRECISION A( * ) .SH PURPOSE PDLATRA computes the trace of an N-by-N distributed matrix sub( A ) denoting A( IA:IA+N-1, JA:JA+N-1 ). The result is left on every process of the grid. .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. 
CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 N (global input) INTEGER The number of rows and columns to be operated on, i.e. the order of the distributed submatrix sub( A ). N >= 0. .TP 8 A (local input) DOUBLE PRECISION pointer into the local memory to an array of dimension ( LLD_A, LOCc(JA+N-1) ). This array contains the local pieces of the distributed matrix whose trace is to be computed. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A.
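The LOCr/LOCc counts quoted in the Notes of all these pages come from the ScaLAPACK tool function NUMROC. Its counting logic is short enough to sketch in Python; this uses 0-based process coordinates and is an illustration of the algorithm, not the Fortran tool itself.

```python
def numroc(n, nb, iproc, isrcproc, nprocs):
    """How many of the N rows (or columns), dealt out in blocks of size NB
    over NPROCS processes starting at process ISRCPROC, land on process
    IPROC.  Sketch of the ScaLAPACK tool function NUMROC (0-based)."""
    mydist = (nprocs + iproc - isrcproc) % nprocs  # distance from source process
    nblocks = n // nb                   # number of complete blocks
    num = (nblocks // nprocs) * nb      # whole sweeps over all processes
    extrablocks = nblocks % nprocs      # leftover complete blocks
    if mydist < extrablocks:
        num += nb                       # this process gets one leftover block
    elif mydist == extrablocks:
        num += n % nb                   # this process gets the partial block
    return num
```

For example, 10 rows in blocks of 3 over 2 process rows yield LOCr counts of 6 and 4, consistent with the bound LOCr(10) <= ceil(ceil(10/3)/2)*3 = 6 stated above.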
scalapack-doc-1.5/man/manl/pdlatrd.l0100644000056400000620000002123306335610633017054 0ustar pfrauenfstaff.TH PDLATRD l "12 May 1997" "LAPACK version 1.5" "LAPACK auxiliary routine (version 1.5)" .SH NAME PDLATRD - reduce NB rows and columns of a real symmetric distributed matrix sub( A ) = A(IA:IA+N-1,JA:JA+N-1) to symmetric tridiagonal form by an orthogonal similarity transformation Q' * sub( A ) * Q, .SH SYNOPSIS .TP 20 SUBROUTINE PDLATRD( UPLO, N, NB, A, IA, JA, DESCA, D, E, TAU, W, IW, JW, DESCW, WORK ) .TP 20 .ti +4 CHARACTER UPLO .TP 20 .ti +4 INTEGER IA, IW, JA, JW, N, NB .TP 20 .ti +4 INTEGER DESCA( * ), DESCW( * ) .TP 20 .ti +4 DOUBLE PRECISION A( * ), D( * ), E( * ), TAU( * ), W( * ), WORK( * ) .SH PURPOSE PDLATRD reduces NB rows and columns of a real symmetric distributed matrix sub( A ) = A(IA:IA+N-1,JA:JA+N-1) to symmetric tridiagonal form by an orthogonal similarity transformation Q' * sub( A ) * Q, and returns the matrices V and W which are needed to apply the transformation to the unreduced part of sub( A ). .br If UPLO = 'U', PDLATRD reduces the last NB rows and columns of a matrix, of which the upper triangle is supplied; .br if UPLO = 'L', PDLATRD reduces the first NB rows and columns of a matrix, of which the lower triangle is supplied. .br This is an auxiliary routine called by PDSYTRD. .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. 
.br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 UPLO (global input) CHARACTER Specifies whether the upper or lower triangular part of the symmetric matrix sub( A ) is stored: .br = 'U': Upper triangular .br = 'L': Lower triangular .TP 8 N (global input) INTEGER The number of rows and columns to be operated on, i.e. the order of the distributed submatrix sub( A ). N >= 0. 
.TP 8 NB (global input) INTEGER The number of rows and columns to be reduced. .TP 8 A (local input/local output) DOUBLE PRECISION pointer into the local memory to an array of dimension (LLD_A,LOCc(JA+N-1)). On entry, this array contains the local pieces of the symmetric distributed matrix sub( A ). If UPLO = 'U', the leading N-by-N upper triangular part of sub( A ) contains the upper triangular part of the matrix, and its strictly lower triangular part is not referenced. If UPLO = 'L', the leading N-by-N lower triangular part of sub( A ) contains the lower triangular part of the matrix, and its strictly upper triangular part is not referenced. On exit, if UPLO = 'U', the last NB columns have been reduced to tridiagonal form, with the diagonal elements overwriting the diagonal elements of sub( A ); the elements above the diagonal with the array TAU, represent the orthogonal matrix Q as a product of elementary reflectors. If UPLO = 'L', the first NB columns have been reduced to tridiagonal form, with the diagonal elements overwriting the diagonal elements of sub( A ); the elements below the diagonal with the array TAU, represent the orthogonal matrix Q as a product of elementary reflectors; See Further Details. IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 D (local output) DOUBLE PRECISION array, dimension LOCc(JA+N-1) The diagonal elements of the tridiagonal matrix T: D(i) = A(i,i). D is tied to the distributed matrix A. .TP 8 E (local output) DOUBLE PRECISION array, dimension LOCc(JA+N-1) if UPLO = 'U', LOCc(JA+N-2) otherwise. The off-diagonal elements of the tridiagonal matrix T: E(i) = A(i,i+1) if UPLO = 'U', E(i) = A(i+1,i) if UPLO = 'L'. 
E is tied to the distributed matrix A. .TP 8 TAU (local output) DOUBLE PRECISION, array, dimension LOCc(JA+N-1). This array contains the scalar factors TAU of the elementary reflectors. TAU is tied to the distributed matrix A. .TP 8 W (local output) DOUBLE PRECISION pointer into the local memory to an array of dimension (LLD_W,NB_W), This array contains the local pieces of the N-by-NB_W matrix W required to update the unreduced part of sub( A ). .TP 8 IW (global input) INTEGER The row index in the global array W indicating the first row of sub( W ). .TP 8 JW (global input) INTEGER The column index in the global array W indicating the first column of sub( W ). .TP 8 DESCW (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix W. .TP 8 WORK (local workspace) DOUBLE PRECISION array, dimension (NB_A) .SH FURTHER DETAILS If UPLO = 'U', the matrix Q is represented as a product of elementary reflectors .br Q = H(n) H(n-1) . . . H(n-nb+1). .br Each H(i) has the form .br H(i) = I - tau * v * v' .br where tau is a real scalar, and v is a real vector with .br v(i:n) = 0 and v(i-1) = 1; v(1:i-1) is stored on exit in .br A(ia:ia+i-2,ja+i), and tau in TAU(ja+i-1). .br If UPLO = 'L', the matrix Q is represented as a product of elementary reflectors .br Q = H(1) H(2) . . . H(nb). .br Each H(i) has the form .br H(i) = I - tau * v * v' .br where tau is a real scalar, and v is a real vector with .br v(1:i) = 0 and v(i+1) = 1; v(i+2:n) is stored on exit in .br A(ia+i+1:ia+n-1,ja+i-1), and tau in TAU(ja+i-1). .br The elements of the vectors v together form the N-by-NB matrix V which is needed, with W, to apply the transformation to the unreduced part of the matrix, using a symmetric rank-2k update of the form: sub( A ) := sub( A ) - V*W' - W*V'. 
.br The contents of A on exit are illustrated by the following examples with n = 5 and nb = 2: .br if UPLO = 'U': if UPLO = 'L': .br ( a a a v4 v5 ) ( d ) ( a a v4 v5 ) ( 1 d ) ( a 1 v5 ) ( v1 1 a ) ( d 1 ) ( v1 v2 a a ) ( d ) ( v1 v2 a a a ) where d denotes a diagonal element of the reduced matrix, a denotes an element of the original matrix that is unchanged, and vi denotes an element of the vector defining H(i). .br scalapack-doc-1.5/man/manl/pdlatrs.l0100644000056400000620000000117306335610633017074 0ustar pfrauenfstaff.TH PDLATRS l "12 May 1997" "LAPACK version 1.5" "LAPACK auxiliary routine (version 1.5)" .SH NAME PDLATRS - solve a triangular system .SH SYNOPSIS .TP 20 SUBROUTINE PDLATRS( UPLO, TRANS, DIAG, NORMIN, N, A, IA, JA, DESCA, X, IX, JX, DESCX, SCALE, CNORM, WORK ) .TP 20 .ti +4 CHARACTER DIAG, NORMIN, TRANS, UPLO .TP 20 .ti +4 INTEGER IA, IX, JA, JX, N .TP 20 .ti +4 DOUBLE PRECISION SCALE .TP 20 .ti +4 INTEGER DESCA( * ), DESCX( * ) .TP 20 .ti +4 DOUBLE PRECISION A( * ), CNORM( * ), X( * ), WORK( * ) .SH PURPOSE PDLATRS solves a triangular system. This routine is unfinished at this time, but will be part of the next release. .br scalapack-doc-1.5/man/manl/pdlatrz.l0100644000056400000620000001466606335610633017115 0ustar pfrauenfstaff.TH PDLATRZ l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PDLATRZ - reduce the M-by-N ( M<=N ) real upper trapezoidal matrix sub( A ) = [ A(IA:IA+M-1,JA:JA+M-1) A(IA:IA+M-1,JA+N-L:JA+N-1) ] to upper triangular form by means of orthogonal transformations .SH SYNOPSIS .TP 20 SUBROUTINE PDLATRZ( M, N, L, A, IA, JA, DESCA, TAU, WORK ) .TP 20 .ti +4 INTEGER IA, JA, L, M, N .TP 20 .ti +4 INTEGER DESCA( * ) .TP 20 .ti +4 DOUBLE PRECISION A( * ), TAU( * ), WORK( * ) .SH PURPOSE PDLATRZ reduces the M-by-N ( M<=N ) real upper trapezoidal matrix sub( A ) = [ A(IA:IA+M-1,JA:JA+M-1) A(IA:IA+M-1,JA+N-L:JA+N-1) ] to upper triangular form by means of orthogonal transformations. 
The upper trapezoidal matrix sub( A ) is factored as .br sub( A ) = ( R 0 ) * Z, .br where Z is an N-by-N orthogonal matrix and R is an M-by-M upper triangular matrix. .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. 
.br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 M (global input) INTEGER The number of rows to be operated on, i.e. the number of rows of the distributed submatrix sub( A ). M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on, i.e. the number of columns of the distributed submatrix sub( A ). N >= 0. .TP 8 L (global input) INTEGER The columns of the distributed submatrix sub( A ) containing the meaningful part of the Householder reflectors. L > 0. .TP 8 A (local input/local output) DOUBLE PRECISION pointer into the local memory to an array of dimension (LLD_A, LOCc(JA+N-1)). On entry, the local pieces of the M-by-N distributed matrix sub( A ) which is to be factored. On exit, the leading M-by-M upper triangular part of sub( A ) contains the upper trian- gular matrix R, and elements N-L+1 to N of the first M rows of sub( A ), with the array TAU, represent the orthogonal matrix Z as a product of M elementary reflectors. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 TAU (local output) DOUBLE PRECISION, array, dimension LOCr(IA+M-1) This array contains the scalar factors of the elementary reflectors. TAU is tied to the distributed matrix A. 
.TP 8 WORK (local workspace) DOUBLE PRECISION array, dimension (LWORK) LWORK >= Nq0 + MAX( 1, Mp0 ), where IROFF = MOD( IA-1, MB_A ), ICOFF = MOD( JA-1, NB_A ), IAROW = INDXG2P( IA, MB_A, MYROW, RSRC_A, NPROW ), IACOL = INDXG2P( JA, NB_A, MYCOL, CSRC_A, NPCOL ), Mp0 = NUMROC( M+IROFF, MB_A, MYROW, IAROW, NPROW ), Nq0 = NUMROC( N+ICOFF, NB_A, MYCOL, IACOL, NPCOL ), and NUMROC, INDXG2P are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. .SH FURTHER DETAILS The factorization is obtained by Householder's method. The kth transformation matrix, Z( k ), which is used to introduce zeros into the (m - k + 1)th row of sub( A ), is given in the form .br Z( k ) = ( I 0 ), .br ( 0 T( k ) ) .br where .br T( k ) = I - tau*u( k )*u( k )', u( k ) = ( 1 ), ( 0 ) ( z( k ) ) tau is a scalar and z( k ) is an ( n - m ) element vector. tau and z( k ) are chosen to annihilate the elements of the kth row of sub( A ). .br The scalar tau is returned in the kth element of TAU and the vector u( k ) in the kth row of sub( A ), such that the elements of z( k ) are in a( k, m + 1 ), ..., a( k, n ). The elements of R are returned in the upper triangular part of sub( A ). .br Z is given by .br Z = Z( 1 ) * Z( 2 ) * ... * Z( m ). 
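The LWORK requirement above is expressed through the tool functions INDXG2P and NUMROC. As a hedged illustration, the formula can be evaluated on the host with local Python reimplementations of those two tools (these are not the library routines; in INDXG2P the iproc argument is a dummy, as in ScaLAPACK):

```python
def indxg2p(indxglob, nb, iproc, isrcproc, nprocs):
    """Process coordinate owning global index indxglob under a
    block-cyclic distribution (mirrors the ScaLAPACK tool INDXG2P)."""
    return (isrcproc + (indxglob - 1) // nb) % nprocs

def numroc(n, nb, iproc, isrcproc, nprocs):
    """Local element count (mirrors the ScaLAPACK tool NUMROC)."""
    mydist = (nprocs + iproc - isrcproc) % nprocs
    nblocks = n // nb
    count = (nblocks // nprocs) * nb
    extrablks = nblocks % nprocs
    if mydist < extrablks:
        count += nb
    elif mydist == extrablks:
        count += n % nb
    return count

def pdlatrz_lwork(m, n, ia, ja, mb, nb, myrow, mycol,
                  rsrc, csrc, nprow, npcol):
    """Evaluate LWORK >= Nq0 + MAX( 1, Mp0 ) as given in the man page."""
    iroff = (ia - 1) % mb
    icoff = (ja - 1) % nb
    iarow = indxg2p(ia, mb, myrow, rsrc, nprow)
    iacol = indxg2p(ja, nb, mycol, csrc, npcol)
    mp0 = numroc(m + iroff, mb, myrow, iarow, nprow)
    nq0 = numroc(n + icoff, nb, mycol, iacol, npcol)
    return nq0 + max(1, mp0)
```

For instance, a 5-by-7 submatrix starting at (IA,JA) = (1,1) with MB_A = NB_A = 2 on a 2 x 2 grid yields Mp0 = 3 and Nq0 = 4 on process (0,0), so LWORK >= 7 there.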
.br scalapack-doc-1.5/man/manl/pdlauu2.l0100644000056400000620000001162406335610633017001 0ustar pfrauenfstaff.TH PDLAUU2 l "12 May 1997" "LAPACK version 1.5" "LAPACK auxiliary routine (version 1.5)" .SH NAME PDLAUU2 - compute the product U * U' or L' * L, where the triangular factor U or L is stored in the upper or lower triangular part of the matrix sub( A ) = A(IA:IA+N-1,JA:JA+N-1) .SH SYNOPSIS .TP 20 SUBROUTINE PDLAUU2( UPLO, N, A, IA, JA, DESCA ) .TP 20 .ti +4 CHARACTER UPLO .TP 20 .ti +4 INTEGER IA, JA, N .TP 20 .ti +4 INTEGER DESCA( * ) .TP 20 .ti +4 DOUBLE PRECISION A( * ) .SH PURPOSE PDLAUU2 computes the product U * U' or L' * L, where the triangular factor U or L is stored in the upper or lower triangular part of the matrix sub( A ) = A(IA:IA+N-1,JA:JA+N-1). If UPLO = 'U' or 'u' then the upper triangle of the result is stored, overwriting the factor U in sub( A ). .br If UPLO = 'L' or 'l' then the lower triangle of the result is stored, overwriting the factor L in sub( A ). .br This is the unblocked form of the algorithm, calling Level 2 BLAS. No communication is performed by this routine, the matrix to operate on should be strictly local to one process. .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. 
The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 UPLO (global input) CHARACTER*1 Specifies whether the triangular factor stored in the matrix sub( A ) is upper or lower triangular: .br = 'U': Upper triangular, .br = 'L': Lower triangular. .TP 8 N (global input) INTEGER The number of rows and columns to be operated on, i.e. the order of the triangular factor U or L. N >= 0. 
.TP 8 A (local input/local output) DOUBLE PRECISION pointer into the local memory to an array of dimension (LLD_A, LOCc(JA+N-1)). On entry, the local pieces of the triangular factor L or U. On exit, if UPLO = 'U', the upper triangle of the distributed matrix sub( A ) is overwritten with the upper triangle of the product U * U'; if UPLO = 'L', the lower triangle of sub( A ) is overwritten with the lower triangle of the product L' * L. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. scalapack-doc-1.5/man/manl/pdlauum.l0100644000056400000620000001146206335610633017074 0ustar pfrauenfstaff.TH PDLAUUM l "12 May 1997" "LAPACK version 1.5" "LAPACK auxiliary routine (version 1.5)" .SH NAME PDLAUUM - compute the product U * U' or L' * L, where the triangular factor U or L is stored in the upper or lower triangular part of the distributed matrix sub( A ) = A(IA:IA+N-1,JA:JA+N-1) .SH SYNOPSIS .TP 20 SUBROUTINE PDLAUUM( UPLO, N, A, IA, JA, DESCA ) .TP 20 .ti +4 CHARACTER UPLO .TP 20 .ti +4 INTEGER IA, JA, N .TP 20 .ti +4 INTEGER DESCA( * ) .TP 20 .ti +4 DOUBLE PRECISION A( * ) .SH PURPOSE PDLAUUM computes the product U * U' or L' * L, where the triangular factor U or L is stored in the upper or lower triangular part of the distributed matrix sub( A ) = A(IA:IA+N-1,JA:JA+N-1). If UPLO = 'U' or 'u' then the upper triangle of the result is stored, overwriting the factor U in sub( A ). .br If UPLO = 'L' or 'l' then the lower triangle of the result is stored, overwriting the factor L in sub( A ). .br This is the blocked form of the algorithm, calling Level 3 PBLAS. Notes .br ===== .br Each global data object is described by an associated description vector. 
This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. 
.br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 UPLO (global input) CHARACTER*1 Specifies whether the triangular factor stored in the distributed matrix sub( A ) is upper or lower triangular: .br = 'U': Upper triangular .br = 'L': Lower triangular .TP 8 N (global input) INTEGER The number of rows and columns to be operated on, i.e. the order of the triangular factor U or L. N >= 0. .TP 8 A (local input/local output) DOUBLE PRECISION pointer into the local memory to an array of dimension (LLD_A, LOCc(JA+N-1)). On entry, the local pieces of the triangular factor L or U. On exit, if UPLO = 'U', the upper triangle of the distributed matrix sub( A ) is overwritten with the upper triangle of the product U * U'; if UPLO = 'L', the lower triangle of sub( A ) is overwritten with the lower triangle of the product L' * L. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. 
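The product computed by PDLAUUM can be checked against a trivial serial version. The sketch below (plain Python, for illustration only) forms the upper triangle of U * U' in place of U, leaving the strict lower triangle untouched, exactly as the UPLO = 'U' case above describes:

```python
def lauum_upper(a, n):
    """Serial sketch of the LAUUM operation for UPLO = 'U': overwrite the
    upper triangle of a (an n-by-n list of lists holding the upper
    triangular factor U) with the upper triangle of U * U'.  The strict
    lower triangle is neither referenced nor modified."""
    r = [row[:] for row in a]
    for i in range(n):
        for j in range(i, n):
            # (U * U')(i,j) = sum_k U(i,k) * U(j,k); since U is upper
            # triangular, only k >= max(i,j) = j contributes.
            r[i][j] = sum(a[i][k] * a[j][k] for k in range(j, n))
    return r
```

For UPLO = 'L' the analogous loop forms the lower triangle of L' * L.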
scalapack-doc-1.5/man/manl/pdlawil.l0100644000056400000620000000774206335610633017067 0ustar pfrauenfstaff.TH PDLAWIL l "12 May 1997" "LAPACK version 1.5 " "LAPACK routine (version 1.5 )" .SH NAME PDLAWIL - get the transform given by H44,H33, & H43H34 into V starting at row M .SH SYNOPSIS .TP 20 SUBROUTINE PDLAWIL( II, JJ, M, A, DESCA, H44, H33, H43H34, V ) .TP 20 .ti +4 INTEGER II, JJ, M .TP 20 .ti +4 DOUBLE PRECISION H33, H43H34, H44 .TP 20 .ti +4 INTEGER DESCA( * ) .TP 20 .ti +4 DOUBLE PRECISION A( * ), V( * ) .SH PURPOSE PDLAWIL gets the transform given by H44,H33, & H43H34 into V starting at row M. Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. 
CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 II (global input) INTEGER Row owner of H(M+2,M+2) .TP 8 JJ (global input) INTEGER Column owner of H(M+2,M+2) .TP 8 M (global input) INTEGER On entry, this is where the transform starts (row M.) Unchanged on exit. .TP 8 A (global input) DOUBLE PRECISION array, dimension (DESCA(LLD_),*) On entry, the Hessenberg matrix. Unchanged on exit. .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. Unchanged on exit. H44 H33 H43H34 (global input) DOUBLE PRECISION These three values are for the double shift QR iteration. Unchanged on exit. .TP 8 V (global output) DOUBLE PRECISION array of size 3. Contains the transform on output. Implemented by: G. 
Henry, November 17, 1996 scalapack-doc-1.5/man/manl/pdorg2l.l0100644000056400000620000001407506335610633017001 0ustar pfrauenfstaff.TH PDORG2L l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PDORG2L - generate an M-by-N real distributed matrix Q denoting A(IA:IA+M-1,JA:JA+N-1) with orthonormal columns, which is defined as the last N columns of a product of K elementary reflectors of order M Q = H(k) .SH SYNOPSIS .TP 20 SUBROUTINE PDORG2L( M, N, K, A, IA, JA, DESCA, TAU, WORK, LWORK, INFO ) .TP 20 .ti +4 INTEGER IA, INFO, JA, K, LWORK, M, N .TP 20 .ti +4 INTEGER DESCA( * ) .TP 20 .ti +4 DOUBLE PRECISION A( * ), TAU( * ), WORK( * ) .SH PURPOSE PDORG2L generates an M-by-N real distributed matrix Q denoting A(IA:IA+M-1,JA:JA+N-1) with orthonormal columns, which is defined as the last N columns of a product of K elementary reflectors of order M as returned by PDGEQLF. .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. 
.br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 M (global input) INTEGER The number of rows to be operated on i.e the number of rows of the distributed submatrix Q. M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on i.e the number of columns of the distributed submatrix Q. M >= N >= 0. .TP 8 K (global input) INTEGER The number of elementary reflectors whose product defines the matrix Q. N >= K >= 0. .TP 8 A (local input/local output) DOUBLE PRECISION pointer into the local memory to an array of dimension (LLD_A,LOCc(JA+N-1)). On entry, the j-th column must contain the vector which defines the elementary reflector H(j), JA+N-K <= j <= JA+N-1, as returned by PDGEQLF in the K columns of its distributed matrix argument A(IA:*,JA+N-K:JA+N-1). 
On exit, this array contains the local pieces of the M-by-N distributed matrix Q. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 TAU (local input) DOUBLE PRECISION, array, dimension LOCc(JA+N-1) This array contains the scalar factors TAU(j) of the elementary reflectors H(j) as returned by PDGEQLF. TAU is tied to the distributed matrix A. .TP 8 WORK (local workspace/local output) DOUBLE PRECISION array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. LWORK is local input and must be at least LWORK >= MpA0 + MAX( 1, NqA0 ), where IROFFA = MOD( IA-1, MB_A ), ICOFFA = MOD( JA-1, NB_A ), IAROW = INDXG2P( IA, MB_A, MYROW, RSRC_A, NPROW ), IACOL = INDXG2P( JA, NB_A, MYCOL, CSRC_A, NPCOL ), MpA0 = NUMROC( M+IROFFA, MB_A, MYROW, IAROW, NPROW ), NqA0 = NUMROC( N+ICOFFA, NB_A, MYCOL, IACOL, NPCOL ), INDXG2P and NUMROC are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 INFO (local output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. 
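Each factor H(j) above has the rank-one form H = I - tau * v * v'. A minimal Python sketch (illustrative only; tau = 2 / (v'v) is the choice that makes H orthogonal) shows why products of such reflectors have orthonormal columns:

```python
def reflector(v):
    """Dense elementary reflector H = I - tau * v * v' with
    tau = 2 / (v'v); any such H is symmetric and orthogonal."""
    n = len(v)
    tau = 2.0 / sum(x * x for x in v)
    return [[(1.0 if i == j else 0.0) - tau * v[i] * v[j]
             for j in range(n)] for i in range(n)]

def matmul(a, b):
    """Plain triple-loop matrix product for the check below."""
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]
```

Because each H(j) is orthogonal, so is any product H(1) H(2) ... H(k), and the leading columns of that product are orthonormal, which is what PDORG2L and PDORG2R return.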
scalapack-doc-1.5/man/manl/pdorg2r.l0100644000056400000620000001406006335610633017001 0ustar pfrauenfstaff.TH PDORG2R l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PDORG2R - generate an M-by-N real distributed matrix Q denoting A(IA:IA+M-1,JA:JA+N-1) with orthonormal columns, which is defined as the first N columns of a product of K elementary reflectors of order M Q = H(1) H(2) .SH SYNOPSIS .TP 20 SUBROUTINE PDORG2R( M, N, K, A, IA, JA, DESCA, TAU, WORK, LWORK, INFO ) .TP 20 .ti +4 INTEGER IA, INFO, JA, K, LWORK, M, N .TP 20 .ti +4 INTEGER DESCA( * ) .TP 20 .ti +4 DOUBLE PRECISION A( * ), TAU( * ), WORK( * ) .SH PURPOSE PDORG2R generates an M-by-N real distributed matrix Q denoting A(IA:IA+M-1,JA:JA+N-1) with orthonormal columns, which is defined as the first N columns of a product of K elementary reflectors of order M as returned by PDGEQRF. .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. 
.br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 M (global input) INTEGER The number of rows to be operated on i.e the number of rows of the distributed submatrix Q. M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on i.e the number of columns of the distributed submatrix Q. M >= N >= 0. .TP 8 K (global input) INTEGER The number of elementary reflectors whose product defines the matrix Q. N >= K >= 0. .TP 8 A (local input/local output) DOUBLE PRECISION pointer into the local memory to an array of dimension (LLD_A,LOCc(JA+N-1)). On entry, the j-th column must contain the vector which defines the elementary reflector H(j), JA <= j <= JA+K-1, as returned by PDGEQRF in the K columns of its array argument A(IA:*,JA:JA+K-1). 
On exit, this array contains the local pieces of the M-by-N distributed matrix Q. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 TAU (local input) DOUBLE PRECISION, array, dimension LOCc(JA+K-1). This array contains the scalar factors TAU(j) of the elementary reflectors H(j) as returned by PDGEQRF. TAU is tied to the distributed matrix A. .TP 8 WORK (local workspace/local output) DOUBLE PRECISION array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. LWORK is local input and must be at least LWORK >= MpA0 + MAX( 1, NqA0 ), where IROFFA = MOD( IA-1, MB_A ), ICOFFA = MOD( JA-1, NB_A ), IAROW = INDXG2P( IA, MB_A, MYROW, RSRC_A, NPROW ), IACOL = INDXG2P( JA, NB_A, MYCOL, CSRC_A, NPCOL ), MpA0 = NUMROC( M+IROFFA, MB_A, MYROW, IAROW, NPROW ), NqA0 = NUMROC( N+ICOFFA, NB_A, MYCOL, IACOL, NPCOL ), INDXG2P and NUMROC are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 INFO (local output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. 
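The LWORK bound above is expressed in terms of the ScaLAPACK tool function NUMROC, which returns the number of rows or columns of a block-cyclically distributed dimension that a given process owns. The real routine is Fortran (in the ScaLAPACK TOOLS directory, signature NUMROC( N, NB, IPROC, ISRCPROC, NPROCS )); the Python reimplementation below is a sketch for exposition only:

```python
def numroc(n, nb, iproc, isrcproc, nprocs):
    """Number of elements of an n-element dimension owned by process iproc,
    under block-cyclic distribution with block size nb, where the first
    block resides on process isrcproc."""
    mydist = (nprocs + iproc - isrcproc) % nprocs   # distance from source process
    nblocks = n // nb                               # number of full blocks
    count = (nblocks // nprocs) * nb                # whole rounds of full blocks
    extra = nblocks % nprocs
    if mydist < extra:
        count += nb          # one extra full block lands on this process
    elif mydist == extra:
        count += n % nb      # the trailing partial block lands here
    return count

# 10 rows, block size 3, 2 process rows, first block on process row 0:
# blocks have sizes [3, 3, 3, 1]; process 0 owns blocks 0 and 2, process 1 owns 1 and 3.
print(numroc(10, 3, 0, 0, 2))  # -> 6
print(numroc(10, 3, 1, 0, 2))  # -> 4
```

Summing numroc over all processes always recovers the global dimension, which is a convenient sanity check.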
scalapack-doc-1.5/man/manl/pdorgl2.l0100644000056400000620000001404606335610633016777 0ustar pfrauenfstaff.TH PDORGL2 l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PDORGL2 - generate an M-by-N real distributed matrix Q denoting A(IA:IA+M-1,JA:JA+N-1) with orthonormal rows, which is defined as the first M rows of a product of K elementary reflectors of order N Q = H(k) . . . H(2) H(1) .SH SYNOPSIS .TP 20 SUBROUTINE PDORGL2( M, N, K, A, IA, JA, DESCA, TAU, WORK, LWORK, INFO ) .TP 20 .ti +4 INTEGER IA, INFO, JA, K, LWORK, M, N .TP 20 .ti +4 INTEGER DESCA( * ) .TP 20 .ti +4 DOUBLE PRECISION A( * ), TAU( * ), WORK( * ) .SH PURPOSE PDORGL2 generates an M-by-N real distributed matrix Q denoting A(IA:IA+M-1,JA:JA+N-1) with orthonormal rows, which is defined as the first M rows of a product of K elementary reflectors of order N as returned by PDGELQF. .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclically distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ ) The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distributed over. The context itself is global, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. 
.br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 M (global input) INTEGER The number of rows to be operated on i.e the number of rows of the distributed submatrix Q. M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on i.e the number of columns of the distributed submatrix Q. N >= M >= 0. .TP 8 K (global input) INTEGER The number of elementary reflectors whose product defines the matrix Q. M >= K >= 0. .TP 8 A (local input/local output) DOUBLE PRECISION pointer into the local memory to an array of dimension (LLD_A,LOCc(JA+N-1)). On entry, the i-th row must contain the vector which defines the elementary reflector H(i), IA <= i <= IA+K-1, as returned by PDGELQF in the K rows of its distributed matrix argument A(IA:IA+K-1,JA:*). 
On exit, this array contains the local pieces of the M-by-N distributed matrix Q. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 TAU (local input) DOUBLE PRECISION, array, dimension LOCr(IA+K-1). This array contains the scalar factors TAU(i) of the elementary reflectors H(i) as returned by PDGELQF. TAU is tied to the distributed matrix A. .TP 8 WORK (local workspace/local output) DOUBLE PRECISION array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. LWORK is local input and must be at least LWORK >= NqA0 + MAX( 1, MpA0 ), where IROFFA = MOD( IA-1, MB_A ), ICOFFA = MOD( JA-1, NB_A ), IAROW = INDXG2P( IA, MB_A, MYROW, RSRC_A, NPROW ), IACOL = INDXG2P( JA, NB_A, MYCOL, CSRC_A, NPCOL ), MpA0 = NUMROC( M+IROFFA, MB_A, MYROW, IAROW, NPROW ), NqA0 = NUMROC( N+ICOFFA, NB_A, MYCOL, IACOL, NPCOL ), INDXG2P and NUMROC are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 INFO (local output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. 
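The Notes above give upper bounds LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A and LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A. These can be checked numerically against NUMROC; the sketch below reimplements NUMROC in Python purely for illustration:

```python
import math

def numroc(n, nb, iproc, isrcproc, nprocs):
    """Illustrative Python version of the ScaLAPACK tool function NUMROC."""
    mydist = (nprocs + iproc - isrcproc) % nprocs
    nblocks = n // nb
    count = (nblocks // nprocs) * nb
    extra = nblocks % nprocs
    if mydist < extra:
        count += nb
    elif mydist == extra:
        count += n % nb
    return count

# Verify LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A over a range of shapes.
for m in range(1, 40):
    for mb in (1, 2, 3, 5):
        for nprow in (1, 2, 3):
            bound = math.ceil(math.ceil(m / mb) / nprow) * mb
            for myrow in range(nprow):
                assert numroc(m, mb, myrow, 0, nprow) <= bound
print("bounds hold")
```

The bound is useful for sizing local arrays before the process grid (and hence the exact NUMROC value) is known.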
scalapack-doc-1.5/man/manl/pdorglq.l0100644000056400000620000001405706335610633017100 0ustar pfrauenfstaff.TH PDORGLQ l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PDORGLQ - generate an M-by-N real distributed matrix Q denoting A(IA:IA+M-1,JA:JA+N-1) with orthonormal rows, which is defined as the first M rows of a product of K elementary reflectors of order N Q = H(k) . . . H(2) H(1) .SH SYNOPSIS .TP 20 SUBROUTINE PDORGLQ( M, N, K, A, IA, JA, DESCA, TAU, WORK, LWORK, INFO ) .TP 20 .ti +4 INTEGER IA, INFO, JA, K, LWORK, M, N .TP 20 .ti +4 INTEGER DESCA( * ) .TP 20 .ti +4 DOUBLE PRECISION A( * ), TAU( * ), WORK( * ) .SH PURPOSE PDORGLQ generates an M-by-N real distributed matrix Q denoting A(IA:IA+M-1,JA:JA+N-1) with orthonormal rows, which is defined as the first M rows of a product of K elementary reflectors of order N as returned by PDGELQF. .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclically distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ ) The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distributed over. The context itself is global, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. 
.br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 M (global input) INTEGER The number of rows to be operated on i.e the number of rows of the distributed submatrix Q. M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on i.e the number of columns of the distributed submatrix Q. N >= M >= 0. .TP 8 K (global input) INTEGER The number of elementary reflectors whose product defines the matrix Q. M >= K >= 0. .TP 8 A (local input/local output) DOUBLE PRECISION pointer into the local memory to an array of dimension (LLD_A,LOCc(JA+N-1)). On entry, the i-th row must contain the vector which defines the elementary reflector H(i), IA <= i <= IA+K-1, as returned by PDGELQF in the K rows of its distributed matrix argument A(IA:IA+K-1,JA:*). 
On exit, this array contains the local pieces of the M-by-N distributed matrix Q. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 TAU (local input) DOUBLE PRECISION, array, dimension LOCr(IA+K-1). This array contains the scalar factors TAU(i) of the elementary reflectors H(i) as returned by PDGELQF. TAU is tied to the distributed matrix A. .TP 8 WORK (local workspace/local output) DOUBLE PRECISION array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. LWORK is local input and must be at least LWORK >= MB_A * ( MpA0 + NqA0 + MB_A ), where IROFFA = MOD( IA-1, MB_A ), ICOFFA = MOD( JA-1, NB_A ), IAROW = INDXG2P( IA, MB_A, MYROW, RSRC_A, NPROW ), IACOL = INDXG2P( JA, NB_A, MYCOL, CSRC_A, NPCOL ), MpA0 = NUMROC( M+IROFFA, MB_A, MYROW, IAROW, NPROW ), NqA0 = NUMROC( N+ICOFFA, NB_A, MYCOL, IACOL, NPCOL ), INDXG2P and NUMROC are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 INFO (global output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. 
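The PDORGLQ workspace requirement above, LWORK >= MB_A * ( MpA0 + NqA0 + MB_A ), can be evaluated per process given the descriptor and grid coordinates. The sketch below reimplements NUMROC and INDXG2P in Python for illustration (note: the real Fortran INDXG2P also takes an unused IPROC dummy argument, omitted here):

```python
def numroc(n, nb, iproc, isrcproc, nprocs):
    """Illustrative Python version of the ScaLAPACK tool function NUMROC."""
    mydist = (nprocs + iproc - isrcproc) % nprocs
    nblocks = n // nb
    count = (nblocks // nprocs) * nb
    extra = nblocks % nprocs
    if mydist < extra:
        count += nb
    elif mydist == extra:
        count += n % nb
    return count

def indxg2p(indxglob, nb, isrcproc, nprocs):
    """Process coordinate owning 1-based global index indxglob."""
    return (isrcproc + (indxglob - 1) // nb) % nprocs

def min_lwork_pdorglq(m, n, ia, ja, mb_a, nb_a, rsrc_a, csrc_a,
                      myrow, mycol, nprow, npcol):
    """Minimal LWORK for PDORGLQ on process (myrow, mycol), per the man page."""
    iroffa = (ia - 1) % mb_a
    icoffa = (ja - 1) % nb_a
    iarow = indxg2p(ia, mb_a, rsrc_a, nprow)
    iacol = indxg2p(ja, nb_a, csrc_a, npcol)
    mpa0 = numroc(m + iroffa, mb_a, myrow, iarow, nprow)
    nqa0 = numroc(n + icoffa, nb_a, mycol, iacol, npcol)
    return mb_a * (mpa0 + nqa0 + mb_a)

# 8x8 matrix, 2x2 blocks, 2x2 grid, viewed from process (0,0):
print(min_lwork_pdorglq(8, 8, 1, 1, 2, 2, 0, 0, 0, 0, 2, 2))  # -> 20
```

In practice the workspace query (LWORK = -1) is the safer route, since the routine itself then returns the minimal and optimal sizes in WORK(1).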
scalapack-doc-1.5/man/manl/pdorgql.l0100644000056400000620000001410606335610634017074 0ustar pfrauenfstaff.TH PDORGQL l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PDORGQL - generate an M-by-N real distributed matrix Q denoting A(IA:IA+M-1,JA:JA+N-1) with orthonormal columns, which is defined as the last N columns of a product of K elementary reflectors of order M Q = H(k) . . . H(2) H(1) .SH SYNOPSIS .TP 20 SUBROUTINE PDORGQL( M, N, K, A, IA, JA, DESCA, TAU, WORK, LWORK, INFO ) .TP 20 .ti +4 INTEGER IA, INFO, JA, K, LWORK, M, N .TP 20 .ti +4 INTEGER DESCA( * ) .TP 20 .ti +4 DOUBLE PRECISION A( * ), TAU( * ), WORK( * ) .SH PURPOSE PDORGQL generates an M-by-N real distributed matrix Q denoting A(IA:IA+M-1,JA:JA+N-1) with orthonormal columns, which is defined as the last N columns of a product of K elementary reflectors of order M as returned by PDGEQLF. .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclically distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ ) The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distributed over. The context itself is global, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. 
.br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 M (global input) INTEGER The number of rows to be operated on i.e the number of rows of the distributed submatrix Q. M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on i.e the number of columns of the distributed submatrix Q. M >= N >= 0. .TP 8 K (global input) INTEGER The number of elementary reflectors whose product defines the matrix Q. N >= K >= 0. .TP 8 A (local input/local output) DOUBLE PRECISION pointer into the local memory to an array of dimension (LLD_A,LOCc(JA+N-1)). On entry, the j-th column must contain the vector which defines the elementary reflector H(j), JA+N-K <= j <= JA+N-1, as returned by PDGEQLF in the K columns of its distributed matrix argument A(IA:*,JA+N-K:JA+N-1). 
On exit, this array contains the local pieces of the M-by-N distributed matrix Q. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 TAU (local input) DOUBLE PRECISION, array, dimension LOCc(JA+N-1) This array contains the scalar factors TAU(j) of the elementary reflectors H(j) as returned by PDGEQLF. TAU is tied to the distributed matrix A. .TP 8 WORK (local workspace/local output) DOUBLE PRECISION array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. LWORK is local input and must be at least LWORK >= NB_A * ( NqA0 + MpA0 + NB_A ), where IROFFA = MOD( IA-1, MB_A ), ICOFFA = MOD( JA-1, NB_A ), IAROW = INDXG2P( IA, MB_A, MYROW, RSRC_A, NPROW ), IACOL = INDXG2P( JA, NB_A, MYCOL, CSRC_A, NPCOL ), MpA0 = NUMROC( M+IROFFA, MB_A, MYROW, IAROW, NPROW ), NqA0 = NUMROC( N+ICOFFA, NB_A, MYCOL, IACOL, NPCOL ), INDXG2P and NUMROC are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 INFO (global output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. 
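The block-cyclic mapping described in the Notes determines which process owns a given global entry A(i,j): the owning process row and column each follow from INDXG2P applied to the row and column index. The sketch below is illustrative only (the `owner` helper is hypothetical, and the real Fortran INDXG2P takes an additional unused IPROC dummy argument):

```python
def indxg2p(indxglob, nb, isrcproc, nprocs):
    """Process coordinate that owns 1-based global index indxglob,
    for block size nb, with the first block on process isrcproc."""
    return (isrcproc + (indxglob - 1) // nb) % nprocs

def owner(i, j, mb, nb, rsrc, csrc, nprow, npcol):
    """(process row, process column) owning global entry A(i, j)."""
    return (indxg2p(i, mb, rsrc, nprow), indxg2p(j, nb, csrc, npcol))

# 2x2 process grid, 2x2 blocks, first block on process (0, 0):
print(owner(1, 1, 2, 2, 0, 0, 2, 2))  # -> (0, 0)
print(owner(3, 5, 2, 2, 0, 0, 2, 2))  # row block 1 -> proc row 1; col block 2 -> proc col 0: (1, 0)
```

This is exactly the mapping the descriptor entries MB_A, NB_A, RSRC_A and CSRC_A encode.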
scalapack-doc-1.5/man/manl/pdorgqr.l0100644000056400000620000001410506335610634017101 0ustar pfrauenfstaff.TH PDORGQR l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PDORGQR - generate an M-by-N real distributed matrix Q denoting A(IA:IA+M-1,JA:JA+N-1) with orthonormal columns, which is defined as the first N columns of a product of K elementary reflectors of order M Q = H(1) H(2) . . . H(k) .SH SYNOPSIS .TP 20 SUBROUTINE PDORGQR( M, N, K, A, IA, JA, DESCA, TAU, WORK, LWORK, INFO ) .TP 20 .ti +4 INTEGER IA, INFO, JA, K, LWORK, M, N .TP 20 .ti +4 INTEGER DESCA( * ) .TP 20 .ti +4 DOUBLE PRECISION A( * ), TAU( * ), WORK( * ) .SH PURPOSE PDORGQR generates an M-by-N real distributed matrix Q denoting A(IA:IA+M-1,JA:JA+N-1) with orthonormal columns, which is defined as the first N columns of a product of K elementary reflectors of order M as returned by PDGEQRF. .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclically distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ ) The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distributed over. The context itself is global, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. 
.br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 M (global input) INTEGER The number of rows to be operated on i.e the number of rows of the distributed submatrix Q. M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on i.e the number of columns of the distributed submatrix Q. M >= N >= 0. .TP 8 K (global input) INTEGER The number of elementary reflectors whose product defines the matrix Q. N >= K >= 0. .TP 8 A (local input/local output) DOUBLE PRECISION pointer into the local memory to an array of dimension (LLD_A,LOCc(JA+N-1)). On entry, the j-th column must contain the vector which defines the elementary reflector H(j), JA <= j <= JA+K-1, as returned by PDGEQRF in the K columns of its distributed matrix argument A(IA:*,JA:JA+K-1). 
On exit, this array contains the local pieces of the M-by-N distributed matrix Q. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 TAU (local input) DOUBLE PRECISION, array, dimension LOCc(JA+K-1) This array contains the scalar factors TAU(j) of the elementary reflectors H(j) as returned by PDGEQRF. TAU is tied to the distributed matrix A. .TP 8 WORK (local workspace/local output) DOUBLE PRECISION array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. LWORK is local input and must be at least LWORK >= NB_A * ( NqA0 + MpA0 + NB_A ), where IROFFA = MOD( IA-1, MB_A ), ICOFFA = MOD( JA-1, NB_A ), IAROW = INDXG2P( IA, MB_A, MYROW, RSRC_A, NPROW ), IACOL = INDXG2P( JA, NB_A, MYCOL, CSRC_A, NPCOL ), MpA0 = NUMROC( M+IROFFA, MB_A, MYROW, IAROW, NPROW ), NqA0 = NUMROC( N+ICOFFA, NB_A, MYCOL, IACOL, NPCOL ), INDXG2P and NUMROC are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 INFO (global output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. 
scalapack-doc-1.5/man/manl/pdorgr2.l0100644000056400000620000001406006335610634017002 0ustar pfrauenfstaff.TH PDORGR2 l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PDORGR2 - generate an M-by-N real distributed matrix Q denoting A(IA:IA+M-1,JA:JA+N-1) with orthonormal rows, which is defined as the last M rows of a product of K elementary reflectors of order N Q = H(1) H(2) . . . H(k) .SH SYNOPSIS .TP 20 SUBROUTINE PDORGR2( M, N, K, A, IA, JA, DESCA, TAU, WORK, LWORK, INFO ) .TP 20 .ti +4 INTEGER IA, INFO, JA, K, LWORK, M, N .TP 20 .ti +4 INTEGER DESCA( * ) .TP 20 .ti +4 DOUBLE PRECISION A( * ), TAU( * ), WORK( * ) .SH PURPOSE PDORGR2 generates an M-by-N real distributed matrix Q denoting A(IA:IA+M-1,JA:JA+N-1) with orthonormal rows, which is defined as the last M rows of a product of K elementary reflectors of order N as returned by PDGERQF. .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclically distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ ) The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distributed over. The context itself is global, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. 
.br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 M (global input) INTEGER The number of rows to be operated on i.e the number of rows of the distributed submatrix Q. M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on i.e the number of columns of the distributed submatrix Q. N >= M >= 0. .TP 8 K (global input) INTEGER The number of elementary reflectors whose product defines the matrix Q. M >= K >= 0. .TP 8 A (local input/local output) DOUBLE PRECISION pointer into the local memory to an array of dimension (LLD_A,LOCc(JA+N-1)). On entry, the i-th row must contain the vector which defines the elementary reflector H(i), IA+M-K <= i <= IA+M-1, as returned by PDGERQF in the K rows of its distributed matrix argument A(IA+M-K:IA+M-1,JA:*). 
On exit, this array contains the local pieces of the M-by-N distributed matrix Q. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 TAU (local input) DOUBLE PRECISION, array, dimension LOCr(IA+M-1) This array contains the scalar factors TAU(i) of the elementary reflectors H(i) as returned by PDGERQF. TAU is tied to the distributed matrix A. .TP 8 WORK (local workspace/local output) DOUBLE PRECISION array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. LWORK is local input and must be at least LWORK >= NqA0 + MAX( 1, MpA0 ), where IROFFA = MOD( IA-1, MB_A ), ICOFFA = MOD( JA-1, NB_A ), IAROW = INDXG2P( IA, MB_A, MYROW, RSRC_A, NPROW ), IACOL = INDXG2P( JA, NB_A, MYCOL, CSRC_A, NPCOL ), MpA0 = NUMROC( M+IROFFA, MB_A, MYROW, IAROW, NPROW ), NqA0 = NUMROC( N+ICOFFA, NB_A, MYCOL, IACOL, NPCOL ), INDXG2P and NUMROC are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 INFO (local output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. 
scalapack-doc-1.5/man/manl/pdorgrq.l0100644000056400000620000001407106335610634017103 0ustar pfrauenfstaff.TH PDORGRQ l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PDORGRQ - generate an M-by-N real distributed matrix Q denoting A(IA:IA+M-1,JA:JA+N-1) with orthonormal rows, which is defined as the last M rows of a product of K elementary reflectors of order N Q = H(1) H(2) .SH SYNOPSIS .TP 20 SUBROUTINE PDORGRQ( M, N, K, A, IA, JA, DESCA, TAU, WORK, LWORK, INFO ) .TP 20 .ti +4 INTEGER IA, INFO, JA, K, LWORK, M, N .TP 20 .ti +4 INTEGER DESCA( * ) .TP 20 .ti +4 DOUBLE PRECISION A( * ), TAU( * ), WORK( * ) .SH PURPOSE PDORGRQ generates an M-by-N real distributed matrix Q denoting A(IA:IA+M-1,JA:JA+N-1) with orthonormal rows, which is defined as the last M rows of a product of K elementary reflectors of order N as returned by PDGERQF. .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. 
.br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 M (global input) INTEGER The number of rows to be operated on i.e the number of rows of the distributed submatrix Q. M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on i.e the number of columns of the distributed submatrix Q. N >= M >= 0. .TP 8 K (global input) INTEGER The number of elementary reflectors whose product defines the matrix Q. M >= K >= 0. .TP 8 A (local input/local output) DOUBLE PRECISION pointer into the local memory to an array of dimension (LLD_A,LOCc(JA+N-1)). On entry, the i-th row must contain the vector which defines the elementary reflector H(i), IA+M-K <= i <= IA+M-1, as returned by PDGERQF in the K rows of its distributed matrix argument A(IA+M-K:IA+M-1,JA:*). 
On exit, this array contains the local pieces of the M-by-N distributed matrix Q. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 TAU (local input) DOUBLE PRECISION, array, dimension LOCr(IA+M-1) This array contains the scalar factors TAU(i) of the elementary reflectors H(i) as returned by PDGERQF. TAU is tied to the distributed matrix A. .TP 8 WORK (local workspace/local output) DOUBLE PRECISION array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. LWORK is local input and must be at least LWORK >= MB_A * ( MpA0 + NqA0 + MB_A ), where IROFFA = MOD( IA-1, MB_A ), ICOFFA = MOD( JA-1, NB_A ), IAROW = INDXG2P( IA, MB_A, MYROW, RSRC_A, NPROW ), IACOL = INDXG2P( JA, NB_A, MYCOL, CSRC_A, NPCOL ), MpA0 = NUMROC( M+IROFFA, MB_A, MYROW, IAROW, NPROW ), NqA0 = NUMROC( N+ICOFFA, NB_A, MYCOL, IACOL, NPCOL ), INDXG2P and NUMROC are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 INFO (global output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-th entry had an illegal value, then INFO = -(i*100+j); if the i-th argument is a scalar and had an illegal value, then INFO = -i. 
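The LOCr/LOCc quantities and every LWORK formula on these pages reduce to NUMROC. As a sketch for intuition (the authoritative version is the Fortran TOOLS routine), a Python re-implementation of NUMROC's standard block-cyclic count, checked against the ceil-based upper bound quoted in the Notes:

```python
import math

def numroc(n, nb, iproc, isrcproc, nprocs):
    """Count the rows/columns of an n-element dimension owned by process
    iproc, when the dimension is distributed in blocks of size nb over
    nprocs processes and the first block resides on process isrcproc."""
    mydist = (nprocs + iproc - isrcproc) % nprocs  # distance from source process
    nblocks = n // nb                              # number of full blocks
    num = (nblocks // nprocs) * nb                 # complete rounds of blocks
    extrablocks = nblocks % nprocs
    if mydist < extrablocks:
        num += nb                                  # one extra full block
    elif mydist == extrablocks:
        num += n % nb                              # the trailing partial block
    return num

# 10 rows, MB_A = 3, NPROW = 4, RSRC_A = 0: per-process-row counts,
# their total, and the bound LOCr(M) <= ceil(ceil(M/MB_A)/NPROW)*MB_A.
counts = [numroc(10, 3, p, 0, 4) for p in range(4)]
bound = math.ceil(math.ceil(10 / 3) / 4) * 3
print(counts, sum(counts), bound)
```

The counts always sum to n, and no process ever exceeds the ceil bound, which is why that bound is safe for sizing local arrays before the grid coordinates are known.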
scalapack-doc-1.5/man/manl/pdorm2l.l0100644000056400000620000001732106335610634017005 0ustar pfrauenfstaff.TH PDORM2L l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PDORM2L - overwrite the general real M-by-N distributed matrix sub( C ) = C(IC:IC+M-1,JC:JC+N-1) with Q*sub( C ), Q**T*sub( C ), sub( C )*Q or sub( C )*Q**T .SH SYNOPSIS .TP 20 SUBROUTINE PDORM2L( SIDE, TRANS, M, N, K, A, IA, JA, DESCA, TAU, C, IC, JC, DESCC, WORK, LWORK, INFO ) .TP 20 .ti +4 CHARACTER SIDE, TRANS .TP 20 .ti +4 INTEGER IA, IC, INFO, JA, JC, K, LWORK, M, N .TP 20 .ti +4 INTEGER DESCA( * ), DESCC( * ) .TP 20 .ti +4 DOUBLE PRECISION A( * ), C( * ), TAU( * ), WORK( * ) .SH PURPOSE PDORM2L overwrites the general real M-by-N distributed matrix sub( C ) = C(IC:IC+M-1,JC:JC+N-1) with .br SIDE = 'L', TRANS = 'N': Q * sub( C ) .br SIDE = 'L', TRANS = 'T': Q**T * sub( C ) .br SIDE = 'R', TRANS = 'N': sub( C ) * Q .br SIDE = 'R', TRANS = 'T': sub( C ) * Q**T .br where Q is a real orthogonal distributed matrix defined as the product of K elementary reflectors .br Q = H(k) . . . H(2) H(1) .br as returned by PDGEQLF. Q is of order M if SIDE = 'L' and of order N if SIDE = 'R'. .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclically distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ ) The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distributed over. The context itself is global, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. 
.br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 SIDE (global input) CHARACTER = 'L': apply Q or Q**T from the Left; .br = 'R': apply Q or Q**T from the Right. .TP 8 TRANS (global input) CHARACTER .br = 'N': No transpose, apply Q; .br = 'T': Transpose, apply Q**T. .TP 8 M (global input) INTEGER The number of rows to be operated on i.e the number of rows of the distributed submatrix sub( C ). M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on i.e the number of columns of the distributed submatrix sub( C ). N >= 0. .TP 8 K (global input) INTEGER The number of elementary reflectors whose product defines the matrix Q. 
If SIDE = 'L', M >= K >= 0, if SIDE = 'R', N >= K >= 0. .TP 8 A (local input) DOUBLE PRECISION pointer into the local memory to an array of dimension (LLD_A,LOCc(JA+K-1)). On entry, the j-th column must contain the vector which defines the elemen- tary reflector H(j), JA <= j <= JA+K-1, as returned by PDGEQLF in the K columns of its distributed matrix argument A(IA:*,JA:JA+K-1). A(IA:*,JA:JA+K-1) is modified by the routine but restored on exit. If SIDE = 'L', LLD_A >= MAX( 1, LOCr(IA+M-1) ), if SIDE = 'R', LLD_A >= MAX( 1, LOCr(IA+N-1) ). .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 TAU (local input) DOUBLE PRECISION, array, dimension LOCc(JA+N-1) This array contains the scalar factors TAU(j) of the elementary reflectors H(j) as returned by PDGEQLF. TAU is tied to the distributed matrix A. .TP 8 C (local input/local output) DOUBLE PRECISION pointer into the local memory to an array of dimension (LLD_C,LOCc(JC+N-1)). On entry, the local pieces of the distributed matrix sub(C). On exit, sub( C ) is overwritten by Q*sub( C ) or Q'*sub( C ) or sub( C )*Q' or sub( C )*Q. .TP 8 IC (global input) INTEGER The row index in the global array C indicating the first row of sub( C ). .TP 8 JC (global input) INTEGER The column index in the global array C indicating the first column of sub( C ). .TP 8 DESCC (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix C. .TP 8 WORK (local workspace/local output) DOUBLE PRECISION array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. 
LWORK is local input and must be at least If SIDE = 'L', LWORK >= MpC0 + MAX( 1, NqC0 ); if SIDE = 'R', LWORK >= NqC0 + MAX( MAX( 1, MpC0 ), NUMROC( NUMROC( N+ICOFFC,NB_A,0,0,NPCOL ),NB_A,0,0,LCMQ ) ); where LCMQ = LCM / NPCOL with LCM = ILCM( NPROW, NPCOL ), IROFFC = MOD( IC-1, MB_C ), ICOFFC = MOD( JC-1, NB_C ), ICROW = INDXG2P( IC, MB_C, MYROW, RSRC_C, NPROW ), ICCOL = INDXG2P( JC, NB_C, MYCOL, CSRC_C, NPCOL ), MpC0 = NUMROC( M+IROFFC, MB_C, MYROW, ICROW, NPROW ), NqC0 = NUMROC( N+ICOFFC, NB_C, MYCOL, ICCOL, NPCOL ), ILCM, INDXG2P and NUMROC are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 INFO (local output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-th entry had an illegal value, then INFO = -(i*100+j); if the i-th argument is a scalar and had an illegal value, then INFO = -i. Alignment requirements ====================== The distributed submatrices A(IA:*, JA:*) and C(IC:IC+M-1,JC:JC+N-1) must satisfy certain alignment properties, namely the following expressions should be true: If SIDE = 'L', ( MB_A.EQ.MB_C .AND. IROFFA.EQ.IROFFC .AND. IAROW.EQ.ICROW ) If SIDE = 'R', ( MB_A.EQ.NB_C .AND. 
IROFFA.EQ.ICOFFC ) scalapack-doc-1.5/man/manl/pdorm2r.l0100644000056400000620000001732206335610634017014 0ustar pfrauenfstaff.TH PDORM2R l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PDORM2R - overwrite the general real M-by-N distributed matrix sub( C ) = C(IC:IC+M-1,JC:JC+N-1) with Q*sub( C ), Q**T*sub( C ), sub( C )*Q or sub( C )*Q**T .SH SYNOPSIS .TP 20 SUBROUTINE PDORM2R( SIDE, TRANS, M, N, K, A, IA, JA, DESCA, TAU, C, IC, JC, DESCC, WORK, LWORK, INFO ) .TP 20 .ti +4 CHARACTER SIDE, TRANS .TP 20 .ti +4 INTEGER IA, IC, INFO, JA, JC, K, LWORK, M, N .TP 20 .ti +4 INTEGER DESCA( * ), DESCC( * ) .TP 20 .ti +4 DOUBLE PRECISION A( * ), C( * ), TAU( * ), WORK( * ) .SH PURPOSE PDORM2R overwrites the general real M-by-N distributed matrix sub( C ) = C(IC:IC+M-1,JC:JC+N-1) with .br SIDE = 'L', TRANS = 'N': Q * sub( C ) .br SIDE = 'L', TRANS = 'T': Q**T * sub( C ) .br SIDE = 'R', TRANS = 'N': sub( C ) * Q .br SIDE = 'R', TRANS = 'T': sub( C ) * Q**T .br where Q is a real orthogonal distributed matrix defined as the product of k elementary reflectors .br Q = H(1) H(2) . . . H(k) .br as returned by PDGEQRF. Q is of order M if SIDE = 'L' and of order N if SIDE = 'R'. .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclically distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ ) The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distributed over. The context itself is global, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. 
.br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 SIDE (global input) CHARACTER = 'L': apply Q or Q**T from the Left; .br = 'R': apply Q or Q**T from the Right. .TP 8 TRANS (global input) CHARACTER .br = 'N': No transpose, apply Q; .br = 'T': Transpose, apply Q**T. .TP 8 M (global input) INTEGER The number of rows to be operated on i.e the number of rows of the distributed submatrix sub( C ). M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on i.e the number of columns of the distributed submatrix sub( C ). N >= 0. .TP 8 K (global input) INTEGER The number of elementary reflectors whose product defines the matrix Q. 
If SIDE = 'L', M >= K >= 0, if SIDE = 'R', N >= K >= 0. .TP 8 A (local input) DOUBLE PRECISION pointer into the local memory to an array of dimension (LLD_A,LOCc(JA+K-1)). On entry, the j-th column must contain the vector which defines the elemen- tary reflector H(j), JA <= j <= JA+K-1, as returned by PDGEQRF in the K columns of its distributed matrix argument A(IA:*,JA:JA+K-1). A(IA:*,JA:JA+K-1) is modified by the routine but restored on exit. If SIDE = 'L', LLD_A >= MAX( 1, LOCr(IA+M-1) ); if SIDE = 'R', LLD_A >= MAX( 1, LOCr(IA+N-1) ). .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 TAU (local input) DOUBLE PRECISION, array, dimension LOCc(JA+K-1). This array contains the scalar factors TAU(j) of the elementary reflectors H(j) as returned by PDGEQRF. TAU is tied to the distributed matrix A. .TP 8 C (local input/local output) DOUBLE PRECISION pointer into the local memory to an array of dimension (LLD_C,LOCc(JC+N-1)). On entry, the local pieces of the distributed matrix sub(C). On exit, sub( C ) is overwritten by Q*sub( C ) or Q'*sub( C ) or sub( C )*Q' or sub( C )*Q. .TP 8 IC (global input) INTEGER The row index in the global array C indicating the first row of sub( C ). .TP 8 JC (global input) INTEGER The column index in the global array C indicating the first column of sub( C ). .TP 8 DESCC (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix C. .TP 8 WORK (local workspace/local output) DOUBLE PRECISION array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. 
LWORK is local input and must be at least If SIDE = 'L', LWORK >= MpC0 + MAX( 1, NqC0 ); if SIDE = 'R', LWORK >= NqC0 + MAX( MAX( 1, MpC0 ), NUMROC( NUMROC( N+ICOFFC,NB_A,0,0,NPCOL ),NB_A,0,0,LCMQ ) ); where LCMQ = LCM / NPCOL with LCM = ILCM( NPROW, NPCOL ), IROFFC = MOD( IC-1, MB_C ), ICOFFC = MOD( JC-1, NB_C ), ICROW = INDXG2P( IC, MB_C, MYROW, RSRC_C, NPROW ), ICCOL = INDXG2P( JC, NB_C, MYCOL, CSRC_C, NPCOL ), MpC0 = NUMROC( M+IROFFC, MB_C, MYROW, ICROW, NPROW ), NqC0 = NUMROC( N+ICOFFC, NB_C, MYCOL, ICCOL, NPCOL ), ILCM, INDXG2P and NUMROC are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 INFO (local output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-th entry had an illegal value, then INFO = -(i*100+j); if the i-th argument is a scalar and had an illegal value, then INFO = -i. Alignment requirements ====================== The distributed submatrices A(IA:*, JA:*) and C(IC:IC+M-1,JC:JC+N-1) must satisfy certain alignment properties, namely the following expressions should be true: If SIDE = 'L', ( MB_A.EQ.MB_C .AND. IROFFA.EQ.IROFFC .AND. IAROW.EQ.ICROW ) If SIDE = 'R', ( MB_A.EQ.NB_C .AND. 
IROFFA.EQ.ICOFFC ) scalapack-doc-1.5/man/manl/pdormbr.l0100644000056400000620000002405606335610634017076 0ustar pfrauenfstaff.TH PDORMBR l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PDORMBR - overwrite the general real distributed M-by-N matrix sub( C ) = C(IC:IC+M-1,JC:JC+N-1) with Q, Q**T, P or P**T, as determined by PDGEBRD .SH SYNOPSIS .TP 20 SUBROUTINE PDORMBR( VECT, SIDE, TRANS, M, N, K, A, IA, JA, DESCA, TAU, C, IC, JC, DESCC, WORK, LWORK, INFO ) .TP 20 .ti +4 CHARACTER SIDE, TRANS, VECT .TP 20 .ti +4 INTEGER IA, IC, INFO, JA, JC, K, LWORK, M, N .TP 20 .ti +4 INTEGER DESCA( * ), DESCC( * ) .TP 20 .ti +4 DOUBLE PRECISION A( * ), C( * ), TAU( * ), WORK( * ) .SH PURPOSE If VECT = 'Q', PDORMBR overwrites the general real distributed M-by-N matrix sub( C ) = C(IC:IC+M-1,JC:JC+N-1) with .br SIDE = 'L' SIDE = 'R' .br TRANS = 'N': Q * sub( C ) sub( C ) * Q .br TRANS = 'T': Q**T * sub( C ) sub( C ) * Q**T .br If VECT = 'P', PDORMBR overwrites sub( C ) with .br SIDE = 'L' SIDE = 'R' .br TRANS = 'N': P * sub( C ) sub( C ) * P .br TRANS = 'T': P**T * sub( C ) sub( C ) * P**T .br Here Q and P**T are the orthogonal distributed matrices determined by PDGEBRD when reducing a real distributed matrix A(IA:*,JA:*) to bidiagonal form: A(IA:*,JA:*) = Q * B * P**T. Q and P**T are defined as products of elementary reflectors H(i) and G(i) respectively. Let nq = m if SIDE = 'L' and nq = n if SIDE = 'R'. Thus nq is the order of the orthogonal matrix Q or P**T that is applied. If VECT = 'Q', A(IA:*,JA:*) is assumed to have been an NQ-by-K matrix: .br if nq >= k, Q = H(1) H(2) . . . H(k); .br if nq < k, Q = H(1) H(2) . . . H(nq-1). .br If VECT = 'P', A(IA:*,JA:*) is assumed to have been a K-by-NQ matrix: .br if k < nq, P = G(1) G(2) . . . G(k); .br if k >= nq, P = G(1) G(2) . . . G(nq-1). .br Notes .br ===== .br Each global data object is described by an associated description vector. 
This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. 
.br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 VECT (global input) CHARACTER = 'Q': apply Q or Q**T; .br = 'P': apply P or P**T. .TP 8 SIDE (global input) CHARACTER .br = 'L': apply Q, Q**T, P or P**T from the Left; .br = 'R': apply Q, Q**T, P or P**T from the Right. .TP 8 TRANS (global input) CHARACTER .br = 'N': No transpose, apply Q or P; .br = 'T': Transpose, apply Q**T or P**T. .TP 8 M (global input) INTEGER The number of rows to be operated on i.e the number of rows of the distributed submatrix sub( C ). M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on i.e the number of columns of the distributed submatrix sub( C ). N >= 0. .TP 8 K (global input) INTEGER If VECT = 'Q', the number of columns in the original distributed matrix reduced by PDGEBRD. If VECT = 'P', the number of rows in the original distributed matrix reduced by PDGEBRD. K >= 0. .TP 8 A (local input) DOUBLE PRECISION pointer into the local memory to an array of dimension (LLD_A,LOCc(JA+MIN(NQ,K)-1)) if VECT='Q', and (LLD_A,LOCc(JA+NQ-1)) if VECT = 'P'. NQ = M if SIDE = 'L', and NQ = N otherwise. The vectors which define the elementary reflectors H(i) and G(i), whose products determine the matrices Q and P, as returned by PDGEBRD. If VECT = 'Q', LLD_A >= max(1,LOCr(IA+NQ-1)); if VECT = 'P', LLD_A >= max(1,LOCr(IA+MIN(NQ,K)-1)). .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. 
The array descriptor for the distributed matrix A. .TP 8 TAU (local input) DOUBLE PRECISION array, dimension LOCc(JA+MIN(NQ,K)-1) if VECT = 'Q', LOCr(IA+MIN(NQ,K)-1) if VECT = 'P', TAU(i) must contain the scalar factor of the elementary reflector H(i) or G(i), which determines Q or P, as returned by PDGEBRD in its array argument TAUQ or TAUP. TAU is tied to the distributed matrix A. .TP 8 C (local input/local output) DOUBLE PRECISION pointer into the local memory to an array of dimension (LLD_C,LOCc(JC+N-1)). On entry, the local pieces of the distributed matrix sub(C). On exit, if VECT='Q', sub( C ) is overwritten by Q*sub( C ) or Q'*sub( C ) or sub( C )*Q' or sub( C )*Q; if VECT='P, sub( C ) is overwritten by P*sub( C ) or P'*sub( C ) or sub( C )*P or sub( C )*P'. .TP 8 IC (global input) INTEGER The row index in the global array C indicating the first row of sub( C ). .TP 8 JC (global input) INTEGER The column index in the global array C indicating the first column of sub( C ). .TP 8 DESCC (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix C. .TP 8 WORK (local workspace/local output) DOUBLE PRECISION array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. 
LWORK is local input and must be at least If SIDE = 'L', NQ = M; if( (VECT = 'Q' and NQ >= K) or (VECT <> 'Q' and NQ > K) ), IAA=IA; JAA=JA; MI=M; NI=N; ICC=IC; JCC=JC; else IAA=IA+1; JAA=JA; MI=M-1; NI=N; ICC=IC+1; JCC=JC; end if else if SIDE = 'R', NQ = N; if( (VECT = 'Q' and NQ >= K) or (VECT <> 'Q' and NQ > K) ), IAA=IA; JAA=JA; MI=M; NI=N; ICC=IC; JCC=JC; else IAA=IA; JAA=JA+1; MI=M; NI=N-1; ICC=IC; JCC=JC+1; end if end if If VECT = 'Q', If SIDE = 'L', LWORK >= MAX( (NB_A*(NB_A-1))/2, (NqC0 + MpC0)*NB_A ) + NB_A * NB_A else if SIDE = 'R', LWORK >= MAX( (NB_A*(NB_A-1))/2, ( NqC0 + MAX( NpA0 + NUMROC( NUMROC( NI+ICOFFC, NB_A, 0, 0, NPCOL ), NB_A, 0, 0, LCMQ ), MpC0 ) )*NB_A ) + NB_A * NB_A end if else if VECT <> 'Q', if SIDE = 'L', LWORK >= MAX( (MB_A*(MB_A-1))/2, ( MpC0 + MAX( MqA0 + NUMROC( NUMROC( MI+IROFFC, MB_A, 0, 0, NPROW ), MB_A, 0, 0, LCMP ), NqC0 ) )*MB_A ) + MB_A * MB_A else if SIDE = 'R', LWORK >= MAX( (MB_A*(MB_A-1))/2, (MpC0 + NqC0)*MB_A ) + MB_A * MB_A end if end if where LCMP = LCM / NPROW, LCMQ = LCM / NPCOL, with LCM = ILCM( NPROW, NPCOL ), IROFFA = MOD( IAA-1, MB_A ), ICOFFA = MOD( JAA-1, NB_A ), IAROW = INDXG2P( IAA, MB_A, MYROW, RSRC_A, NPROW ), IACOL = INDXG2P( JAA, NB_A, MYCOL, CSRC_A, NPCOL ), MqA0 = NUMROC( MI+ICOFFA, NB_A, MYCOL, IACOL, NPCOL ), NpA0 = NUMROC( NI+IROFFA, MB_A, MYROW, IAROW, NPROW ), IROFFC = MOD( ICC-1, MB_C ), ICOFFC = MOD( JCC-1, NB_C ), ICROW = INDXG2P( ICC, MB_C, MYROW, RSRC_C, NPROW ), ICCOL = INDXG2P( JCC, NB_C, MYCOL, CSRC_C, NPCOL ), MpC0 = NUMROC( MI+IROFFC, MB_C, MYROW, ICROW, NPROW ), NqC0 = NUMROC( NI+ICOFFC, NB_C, MYCOL, ICCOL, NPCOL ), INDXG2P and NUMROC are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. 
Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 INFO (global output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-th entry had an illegal value, then INFO = -(i*100+j); if the i-th argument is a scalar and had an illegal value, then INFO = -i. Alignment requirements ====================== The distributed submatrices A(IA:*, JA:*) and C(IC:IC+M-1,JC:JC+N-1) must satisfy certain alignment properties, namely the following expressions should be true: If VECT = 'Q', If SIDE = 'L', ( MB_A.EQ.MB_C .AND. IROFFA.EQ.IROFFC .AND. IAROW.EQ.ICROW ) If SIDE = 'R', ( MB_A.EQ.NB_C .AND. IROFFA.EQ.ICOFFC ) else If SIDE = 'L', ( MB_A.EQ.MB_C .AND. ICOFFA.EQ.IROFFC ) If SIDE = 'R', ( NB_A.EQ.NB_C .AND. ICOFFA.EQ.ICOFFC .AND. IACOL.EQ.ICCOL ) end if scalapack-doc-1.5/man/manl/pdormhr.l0100644000056400000620000002020206335610634017071 0ustar pfrauenfstaff.TH PDORMHR l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PDORMHR - overwrite the general real M-by-N distributed matrix sub( C ) = C(IC:IC+M-1,JC:JC+N-1) with Q*sub( C ), Q**T*sub( C ), sub( C )*Q or sub( C )*Q**T .SH SYNOPSIS .TP 20 SUBROUTINE PDORMHR( SIDE, TRANS, M, N, ILO, IHI, A, IA, JA, DESCA, TAU, C, IC, JC, DESCC, WORK, LWORK, INFO ) .TP 20 .ti +4 CHARACTER SIDE, TRANS .TP 20 .ti +4 INTEGER IA, IC, IHI, ILO, INFO, JA, JC, LWORK, M, N .TP 20 .ti +4 INTEGER DESCA( * ), DESCC( * ) .TP 20 .ti +4 DOUBLE PRECISION A( * ), C( * ), TAU( * ), WORK( * ) .SH PURPOSE PDORMHR overwrites the general real M-by-N distributed matrix sub( C ) = C(IC:IC+M-1,JC:JC+N-1) with .br SIDE = 'L', TRANS = 'N': Q * sub( C ) .br SIDE = 'L', TRANS = 'T': Q**T * sub( C ) .br SIDE = 'R', TRANS = 'N': sub( C ) * Q .br SIDE = 'R', TRANS = 'T': sub( C ) * Q**T .br where Q is a real orthogonal distributed matrix of order nq, with nq = m if SIDE = 'L' and nq = n if SIDE = 'R'. Q is defined as the product of IHI-ILO elementary reflectors, as returned by PDGEHRD: Q = H(ilo) H(ilo+1) . . . H(ihi-1). 
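The LWORK formulas on these pages lean on the tool function INDXG2P to locate the process row or column owning a global index (e.g. IAROW, ICCOL). A Python sketch of its standard definition, for intuition only (the Fortran TOOLS routine is authoritative):

```python
def indxg2p(indxglob, nb, iproc, isrcproc, nprocs):
    """Map a 1-based global row/column index to the coordinate of the
    process owning it, for block size nb, when the first block lives on
    process isrcproc.  The iproc argument is unused; it is kept only to
    mirror the Fortran interface."""
    return (isrcproc + (indxglob - 1) // nb) % nprocs

# With MB_A = 2, RSRC_A = 0 and NPROW = 3, global rows 1..8 cycle
# through the process rows in blocks of two:
print([indxg2p(i, 2, -1, 0, 3) for i in range(1, 9)])
```

Combined with IROFFA = MOD( IA-1, MB_A ), this is how the workspace formulas decide which process holds the leading block of sub( A ) without any communication.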
.br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. 
.br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 SIDE (global input) CHARACTER = 'L': apply Q or Q**T from the Left; .br = 'R': apply Q or Q**T from the Right. .TP 8 TRANS (global input) CHARACTER .br = 'N': No transpose, apply Q; .br = 'T': Transpose, apply Q**T. .TP 8 M (global input) INTEGER The number of rows to be operated on i.e the number of rows of the distributed submatrix sub( C ). M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on i.e the number of columns of the distributed submatrix sub( C ). N >= 0. .TP 8 ILO (global input) INTEGER IHI (global input) INTEGER ILO and IHI must have the same values as in the previous call of PDGEHRD. Q is equal to the unit matrix except in the distributed submatrix Q(ia+ilo:ia+ihi-1,ia+ilo:ja+ihi-1). If SIDE = 'L', 1 <= ILO <= IHI <= max(1,M); if SIDE = 'R', 1 <= ILO <= IHI <= max(1,N); ILO and IHI are relative indexes. .TP 8 A (local input) DOUBLE PRECISION pointer into the local memory to an array of dimension (LLD_A,LOCc(JA+M-1)) if SIDE='L', and (LLD_A,LOCc(JA+N-1)) if SIDE = 'R'. The vectors which define the elementary reflectors, as returned by PDGEHRD. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 TAU (local input) DOUBLE PRECISION, array, dimension LOCc(JA+M-2) if SIDE = 'L', and LOCc(JA+N-2) if SIDE = 'R'. 
This array contains the scalar factors TAU(j) of the elementary reflectors H(j) as returned by PDGEHRD. TAU is tied to the distributed matrix A. .TP 8 C (local input/local output) DOUBLE PRECISION pointer into the local memory to an array of dimension (LLD_C,LOCc(JC+N-1)). On entry, the local pieces of the distributed matrix sub(C). On exit, sub( C ) is overwritten by Q*sub( C ) or Q'*sub( C ) or sub( C )*Q' or sub( C )*Q. .TP 8 IC (global input) INTEGER The row index in the global array C indicating the first row of sub( C ). .TP 8 JC (global input) INTEGER The column index in the global array C indicating the first column of sub( C ). .TP 8 DESCC (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix C. .TP 8 WORK (local workspace/local output) DOUBLE PRECISION array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. LWORK is local input and must be at least IAA = IA + ILO; JAA = JA+ILO-1; If SIDE = 'L', MI = IHI-ILO; NI = N; ICC = IC + ILO; JCC = JC; LWORK >= MAX( (NB_A*(NB_A-1))/2, (NqC0 + MpC0)*NB_A ) + NB_A * NB_A else if SIDE = 'R', MI = M; NI = IHI-ILO; ICC = IC; JCC = JC + ILO; LWORK >= MAX( (NB_A*(NB_A-1))/2, ( NqC0 + MAX( NpA0 + NUMROC( NUMROC( NI+ICOFFC, NB_A, 0, 0, NPCOL ), NB_A, 0, 0, LCMQ ), MpC0 ) )*NB_A ) + NB_A * NB_A end if where LCMQ = LCM / NPCOL with LCM = ICLM( NPROW, NPCOL ), IROFFA = MOD( IAA-1, MB_A ), ICOFFA = MOD( JAA-1, NB_A ), IAROW = INDXG2P( IAA, MB_A, MYROW, RSRC_A, NPROW ), NpA0 = NUMROC( NI+IROFFA, MB_A, MYROW, IAROW, NPROW ), IROFFC = MOD( ICC-1, MB_C ), ICOFFC = MOD( JCC-1, NB_C ), ICROW = INDXG2P( ICC, MB_C, MYROW, RSRC_C, NPROW ), ICCOL = INDXG2P( JCC, NB_C, MYCOL, CSRC_C, NPCOL ), MpC0 = NUMROC( MI+IROFFC, MB_C, MYROW, ICROW, NPROW ), NqC0 = NUMROC( NI+ICOFFC, NB_C, MYCOL, ICCOL, NPCOL ), ILCM, INDXG2P and NUMROC are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can 
be determined by calling the subroutine BLACS_GRIDINFO. If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 INFO (global output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. Alignment requirements ====================== The distributed submatrices A(IA:*, JA:*) and C(IC:IC+M-1,JC:JC+N-1) must verify some alignment properties, namely the following expressions should be true: If SIDE = 'L', ( MB_A.EQ.MB_C .AND. IROFFA.EQ.IROFFC .AND. IAROW.EQ.ICROW ) If SIDE = 'R', ( MB_A.EQ.NB_C .AND. IROFFA.EQ.ICOFFC ) scalapack-doc-1.5/man/manl/pdorml2.l0100644000056400000620000001730306335610634017005 0ustar pfrauenfstaff.TH PDORML2 l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PDORML2 - overwrite the general real M-by-N distributed matrix sub( C ) = C(IC:IC+M-1,JC:JC+N-1) with SIDE = 'L' SIDE = 'R' TRANS = 'N' .SH SYNOPSIS .TP 20 SUBROUTINE PDORML2( SIDE, TRANS, M, N, K, A, IA, JA, DESCA, TAU, C, IC, JC, DESCC, WORK, LWORK, INFO ) .TP 20 .ti +4 CHARACTER SIDE, TRANS .TP 20 .ti +4 INTEGER IA, IC, INFO, JA, JC, K, LWORK, M, N .TP 20 .ti +4 INTEGER DESCA( * ), DESCC( * ) .TP 20 .ti +4 DOUBLE PRECISION A( * ), C( * ), TAU( * ), WORK( * ) .SH PURPOSE PDORML2 overwrites the general real M-by-N distributed matrix sub( C ) = C(IC:IC+M-1,JC:JC+N-1) with TRANS = 'T': Q**T * sub( C ) sub( C ) * Q**T .br where Q is a real orthogonal distributed matrix defined as the product of K elementary reflectors .br Q = H(k) . . . H(2) H(1) .br as returned by PDGELQF. Q is of order M if SIDE = 'L' and of order N if SIDE = 'R'. 
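The INFO convention used throughout these routines packs the argument number and, for array arguments, the offending entry into a single negative value. A small hypothetical helper (not part of ScaLAPACK) makes the encoding concrete:

```python
def decode_info(info):
    """Decode a negative INFO value from a ScaLAPACK routine.

    Returns (argument_index, entry_index), where entry_index is None
    when the offending argument was a scalar rather than an array."""
    if info >= 0:
        raise ValueError("only negative INFO values encode argument errors")
    code = -info
    if code >= 100:
        return code // 100, code % 100   # array argument i, entry j: -(i*100+j)
    return code, None                    # scalar argument i: -i
```

For example, an illegal fifth entry in the tenth argument would be reported as INFO = -1005 and decode to (10, 5).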
.br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. 
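The nine descriptor fields listed in these Notes occupy fixed positions DTYPE_, CTXT_, M_, N_, MB_, NB_, RSRC_, CSRC_, LLD_ in DESCA. The sketch below assembles such a descriptor in Python, mirroring what the ScaLAPACK tool routine DESCINIT stores (the field order is the standard DLEN_ = 9 layout; the argument order and example sizes here are illustrative, not the DESCINIT calling sequence):

```python
# Standard positions of the descriptor fields (Fortran 1-based indices)
DTYPE_, CTXT_, M_, N_, MB_, NB_, RSRC_, CSRC_, LLD_ = range(1, 10)

def make_desc(m, n, mb, nb, rsrc, csrc, ictxt, lld):
    """Build a 9-element array descriptor for a dense (DTYPE_A = 1) matrix."""
    desc = [0] * 9
    desc[DTYPE_ - 1] = 1      # dense block-cyclic matrix
    desc[CTXT_  - 1] = ictxt  # BLACS context handle
    desc[M_     - 1] = m      # rows in the global array
    desc[N_     - 1] = n      # columns in the global array
    desc[MB_    - 1] = mb     # row blocking factor
    desc[NB_    - 1] = nb     # column blocking factor
    desc[RSRC_  - 1] = rsrc   # process row holding the first row
    desc[CSRC_  - 1] = csrc   # process column holding the first column
    desc[LLD_   - 1] = lld    # local leading dimension, >= MAX(1, LOCr(M_A))
    return desc

# Example: 100 x 50 matrix in 8 x 8 blocks from process (0,0); LLD = 56 is the
# LOCr upper bound ceil( ceil(100/8)/2 )*8 on a grid with NPROW = 2.
desc_a = make_desc(100, 50, 8, 8, 0, 0, 0, 56)
```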
.br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 SIDE (global input) CHARACTER = 'L': apply Q or Q**T from the Left; .br = 'R': apply Q or Q**T from the Right. .TP 8 TRANS (global input) CHARACTER .br = 'N': No transpose, apply Q; .br = 'T': Transpose, apply Q**T. .TP 8 M (global input) INTEGER The number of rows to be operated on i.e the number of rows of the distributed submatrix sub( C ). M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on i.e the number of columns of the distributed submatrix sub( C ). N >= 0. .TP 8 K (global input) INTEGER The number of elementary reflectors whose product defines the matrix Q. If SIDE = 'L', M >= K >= 0, if SIDE = 'R', N >= K >= 0. .TP 8 A (local input) DOUBLE PRECISION pointer into the local memory to an array of dimension (LLD_A,LOCc(JA+M-1)) if SIDE='L', and (LLD_A,LOCc(JA+N-1)) if SIDE='R', where LLD_A >= max(1,LOCr(IA+K-1)); On entry, the i-th row must contain the vector which defines the elementary reflector H(i), IA <= i <= IA+K-1, as returned by PDGELQF in the K rows of its distributed matrix argument A(IA:IA+K-1,JA:*). .br A(IA:IA+K-1,JA:*) is modified by the routine but restored on exit. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 TAU (local input) DOUBLE PRECISION, array, dimension LOCc(IA+K-1). 
This array contains the scalar factors TAU(i) of the elementary reflectors H(i) as returned by PDGELQF. TAU is tied to the distributed matrix A. .TP 8 C (local input/local output) DOUBLE PRECISION pointer into the local memory to an array of dimension (LLD_C,LOCc(JC+N-1)). On entry, the local pieces of the distributed matrix sub(C). On exit, sub( C ) is overwritten by Q*sub( C ) or Q'*sub( C ) or sub( C )*Q' or sub( C )*Q. .TP 8 IC (global input) INTEGER The row index in the global array C indicating the first row of sub( C ). .TP 8 JC (global input) INTEGER The column index in the global array C indicating the first column of sub( C ). .TP 8 DESCC (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix C. .TP 8 WORK (local workspace/local output) DOUBLE PRECISION array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. LWORK is local input and must be at least If SIDE = 'L', LWORK >= MpC0 + MAX( MAX( 1, NqC0 ), NUMROC( NUMROC( M+IROFFC,MB_A,0,0,NPROW ),MB_A,0,0,LCMP ) ); if SIDE = 'R', LWORK >= NqC0 + MAX( 1, MpC0 ); where LCMP = LCM / NPROW with LCM = ICLM( NPROW, NPCOL ), IROFFC = MOD( IC-1, MB_C ), ICOFFC = MOD( JC-1, NB_C ), ICROW = INDXG2P( IC, MB_C, MYROW, RSRC_C, NPROW ), ICCOL = INDXG2P( JC, NB_C, MYCOL, CSRC_C, NPCOL ), MpC0 = NUMROC( M+IROFFC, MB_C, MYROW, ICROW, NPROW ), NqC0 = NUMROC( N+ICOFFC, NB_C, MYCOL, ICCOL, NPCOL ), ILCM, INDXG2P and NUMROC are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. 
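INDXG2P, which appears in the LWORK expressions above (e.g. ICROW = INDXG2P( IC, MB_C, MYROW, RSRC_C, NPROW )), maps a global row or column index to the coordinate of the process that owns it. A sketch of that mapping under the block-cyclic rule (illustrative, not the ScaLAPACK source):

```python
def indxg2p(indxglob, nb, iproc, isrcproc, nprocs):
    """Coordinate of the process owning global index indxglob (1-based) in a
    block-cyclic distribution with block size nb whose first block lives on
    process isrcproc. (iproc is unused by the formula; it is kept only to
    mirror the Fortran calling sequence.)"""
    return (isrcproc + (indxglob - 1) // nb) % nprocs
```

With block size 4 over 3 process rows starting at process 0, global rows 1-4 map to process 0, rows 5-8 to process 1, and rows 13-16 wrap back to process 0.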
.TP 8 INFO (local output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. Alignment requirements ====================== The distributed submatrices A(IA:*, JA:*) and C(IC:IC+M-1,JC:JC+N-1) must verify some alignment properties, namely the following expressions should be true: If SIDE = 'L', ( NB_A.EQ.MB_C .AND. ICOFFA.EQ.IROFFC ) If SIDE = 'R', ( NB_A.EQ.NB_C .AND. ICOFFA.EQ.ICOFFC .AND. IACOL.EQ.ICCOL ) scalapack-doc-1.5/man/manl/pdormlq.l0100644000056400000620000001771306335610634017111 0ustar pfrauenfstaff.TH PDORMLQ l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PDORMLQ - overwrite the general real M-by-N distributed matrix sub( C ) = C(IC:IC+M-1,JC:JC+N-1) with SIDE = 'L' SIDE = 'R' TRANS = 'N' .SH SYNOPSIS .TP 20 SUBROUTINE PDORMLQ( SIDE, TRANS, M, N, K, A, IA, JA, DESCA, TAU, C, IC, JC, DESCC, WORK, LWORK, INFO ) .TP 20 .ti +4 CHARACTER SIDE, TRANS .TP 20 .ti +4 INTEGER IA, IC, INFO, JA, JC, K, LWORK, M, N .TP 20 .ti +4 INTEGER DESCA( * ), DESCC( * ) .TP 20 .ti +4 DOUBLE PRECISION A( * ), C( * ), TAU( * ), WORK( * ) .SH PURPOSE PDORMLQ overwrites the general real M-by-N distributed matrix sub( C ) = C(IC:IC+M-1,JC:JC+N-1) with TRANS = 'T': Q**T * sub( C ) sub( C ) * Q**T .br where Q is a real orthogonal distributed matrix defined as the product of K elementary reflectors .br Q = H(k) . . . H(2) H(1) .br as returned by PDGELQF. Q is of order M if SIDE = 'L' and of order N if SIDE = 'R'. .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. 
In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). 
An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 SIDE (global input) CHARACTER = 'L': apply Q or Q**T from the Left; .br = 'R': apply Q or Q**T from the Right. .TP 8 TRANS (global input) CHARACTER .br = 'N': No transpose, apply Q; .br = 'T': Transpose, apply Q**T. .TP 8 M (global input) INTEGER The number of rows to be operated on i.e the number of rows of the distributed submatrix sub( C ). M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on i.e the number of columns of the distributed submatrix sub( C ). N >= 0. .TP 8 K (global input) INTEGER The number of elementary reflectors whose product defines the matrix Q. If SIDE = 'L', M >= K >= 0, if SIDE = 'R', N >= K >= 0. .TP 8 A (local input) DOUBLE PRECISION pointer into the local memory to an array of dimension (LLD_A,LOCc(JA+M-1)) if SIDE='L', and (LLD_A,LOCc(JA+N-1)) if SIDE='R', where LLD_A >= max(1,LOCr(IA+K-1)); On entry, the i-th row must contain the vector which defines the elementary reflector H(i), IA <= i <= IA+K-1, as returned by PDGELQF in the K rows of its distributed matrix argument A(IA:IA+K-1,JA:*). .br A(IA:IA+K-1,JA:*) is modified by the routine but restored on exit. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 TAU (local input) DOUBLE PRECISION, array, dimension LOCc(IA+K-1). This array contains the scalar factors TAU(i) of the elementary reflectors H(i) as returned by PDGELQF. TAU is tied to the distributed matrix A. .TP 8 C (local input/local output) DOUBLE PRECISION pointer into the local memory to an array of dimension (LLD_C,LOCc(JC+N-1)). 
On entry, the local pieces of the distributed matrix sub(C). On exit, sub( C ) is overwritten by Q*sub( C ) or Q'*sub( C ) or sub( C )*Q' or sub( C )*Q. .TP 8 IC (global input) INTEGER The row index in the global array C indicating the first row of sub( C ). .TP 8 JC (global input) INTEGER The column index in the global array C indicating the first column of sub( C ). .TP 8 DESCC (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix C. .TP 8 WORK (local workspace/local output) DOUBLE PRECISION array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. LWORK is local input and must be at least if SIDE = 'L', LWORK >= MAX( (MB_A*(MB_A-1))/2, ( MpC0 + MAX( MqA0 + NUMROC( NUMROC( M+IROFFC, MB_A, 0, 0, NPROW ), MB_A, 0, 0, LCMP ), NqC0 ) )*MB_A ) + MB_A * MB_A else if SIDE = 'R', LWORK >= MAX( (MB_A*(MB_A-1))/2, (MpC0 + NqC0)*MB_A ) + MB_A * MB_A end if where LCMP = LCM / NPROW with LCM = ICLM( NPROW, NPCOL ), IROFFA = MOD( IA-1, MB_A ), ICOFFA = MOD( JA-1, NB_A ), IACOL = INDXG2P( JA, NB_A, MYCOL, CSRC_A, NPCOL ), MqA0 = NUMROC( M+ICOFFA, NB_A, MYCOL, IACOL, NPCOL ), IROFFC = MOD( IC-1, MB_C ), ICOFFC = MOD( JC-1, NB_C ), ICROW = INDXG2P( IC, MB_C, MYROW, RSRC_C, NPROW ), ICCOL = INDXG2P( JC, NB_C, MYCOL, CSRC_C, NPCOL ), MpC0 = NUMROC( M+IROFFC, MB_C, MYROW, ICROW, NPROW ), NqC0 = NUMROC( N+ICOFFC, NB_C, MYCOL, ICCOL, NPCOL ), ILCM, INDXG2P and NUMROC are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. 
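LCM = ILCM( NPROW, NPCOL ) in the workspace formulas above is the least common multiple of the process-grid dimensions: the block-cyclic ownership pattern repeats with that period, and LCMP = LCM / NPROW (resp. LCMQ = LCM / NPCOL) enters the LWORK bounds. A sketch assuming the usual gcd-based definition:

```python
from math import gcd

def ilcm(m, n):
    """Least common multiple of m and n, the value the ScaLAPACK tool
    function ILCM returns."""
    return m * n // gcd(m, n)

# On a 2 x 3 process grid the ownership pattern repeats every 6 block rows/columns
nprow, npcol = 2, 3
lcm = ilcm(nprow, npcol)
lcmp = lcm // nprow   # LCMP in the LWORK formulas
lcmq = lcm // npcol   # LCMQ in the LWORK formulas
```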
.TP 8 INFO (global output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. Alignment requirements ====================== The distributed submatrices A(IA:*, JA:*) and C(IC:IC+M-1,JC:JC+N-1) must verify some alignment properties, namely the following expressions should be true: If SIDE = 'L', ( NB_A.EQ.MB_C .AND. ICOFFA.EQ.IROFFC ) If SIDE = 'R', ( NB_A.EQ.NB_C .AND. ICOFFA.EQ.ICOFFC .AND. IACOL.EQ.ICCOL ) scalapack-doc-1.5/man/manl/pdormql.l0100644000056400000620000001773106335610634017111 0ustar pfrauenfstaff.TH PDORMQL l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PDORMQL - overwrite the general real M-by-N distributed matrix sub( C ) = C(IC:IC+M-1,JC:JC+N-1) with SIDE = 'L' SIDE = 'R' TRANS = 'N' .SH SYNOPSIS .TP 20 SUBROUTINE PDORMQL( SIDE, TRANS, M, N, K, A, IA, JA, DESCA, TAU, C, IC, JC, DESCC, WORK, LWORK, INFO ) .TP 20 .ti +4 CHARACTER SIDE, TRANS .TP 20 .ti +4 INTEGER IA, IC, INFO, JA, JC, K, LWORK, M, N .TP 20 .ti +4 INTEGER DESCA( * ), DESCC( * ) .TP 20 .ti +4 DOUBLE PRECISION A( * ), C( * ), TAU( * ), WORK( * ) .SH PURPOSE PDORMQL overwrites the general real M-by-N distributed matrix sub( C ) = C(IC:IC+M-1,JC:JC+N-1) with TRANS = 'T': Q**T * sub( C ) sub( C ) * Q**T .br where Q is a real orthogonal distributed matrix defined as the product of K elementary reflectors .br Q = H(k) . . . H(2) H(1) .br as returned by PDGEQLF. Q is of order M if SIDE = 'L' and of order N if SIDE = 'R'. .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. 
In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). 
An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 SIDE (global input) CHARACTER = 'L': apply Q or Q**T from the Left; .br = 'R': apply Q or Q**T from the Right. .TP 8 TRANS (global input) CHARACTER .br = 'N': No transpose, apply Q; .br = 'T': Transpose, apply Q**T. .TP 8 M (global input) INTEGER The number of rows to be operated on i.e the number of rows of the distributed submatrix sub( C ). M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on i.e the number of columns of the distributed submatrix sub( C ). N >= 0. .TP 8 K (global input) INTEGER The number of elementary reflectors whose product defines the matrix Q. If SIDE = 'L', M >= K >= 0, if SIDE = 'R', N >= K >= 0. .TP 8 A (local input) DOUBLE PRECISION pointer into the local memory to an array of dimension (LLD_A,LOCc(JA+K-1)). On entry, the j-th column must contain the vector which defines the elemen- tary reflector H(j), JA <= j <= JA+K-1, as returned by PDGEQLF in the K columns of its distributed matrix argument A(IA:*,JA:JA+K-1). A(IA:*,JA:JA+K-1) is modified by the routine but restored on exit. If SIDE = 'L', LLD_A >= MAX( 1, LOCr(IA+M-1) ), if SIDE = 'R', LLD_A >= MAX( 1, LOCr(IA+N-1) ). .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 TAU (local input) DOUBLE PRECISION, array, dimension LOCc(JA+N-1) This array contains the scalar factors TAU(j) of the elementary reflectors H(j) as returned by PDGEQLF. TAU is tied to the distributed matrix A. 
.TP 8 C (local input/local output) DOUBLE PRECISION pointer into the local memory to an array of dimension (LLD_C,LOCc(JC+N-1)). On entry, the local pieces of the distributed matrix sub(C). On exit, sub( C ) is overwritten by Q*sub( C ) or Q'*sub( C ) or sub( C )*Q' or sub( C )*Q. .TP 8 IC (global input) INTEGER The row index in the global array C indicating the first row of sub( C ). .TP 8 JC (global input) INTEGER The column index in the global array C indicating the first column of sub( C ). .TP 8 DESCC (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix C. .TP 8 WORK (local workspace/local output) DOUBLE PRECISION array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. LWORK is local input and must be at least If SIDE = 'L', LWORK >= MAX( (NB_A*(NB_A-1))/2, (NqC0 + MpC0)*NB_A ) + NB_A * NB_A else if SIDE = 'R', LWORK >= MAX( (NB_A*(NB_A-1))/2, ( NqC0 + MAX( NpA0 + NUMROC( NUMROC( N+ICOFFC, NB_A, 0, 0, NPCOL ), NB_A, 0, 0, LCMQ ), MpC0 ) )*NB_A ) + NB_A * NB_A end if where LCMQ = LCM / NPCOL with LCM = ICLM( NPROW, NPCOL ), IROFFA = MOD( IA-1, MB_A ), ICOFFA = MOD( JA-1, NB_A ), IAROW = INDXG2P( IA, MB_A, MYROW, RSRC_A, NPROW ), NpA0 = NUMROC( N+IROFFA, MB_A, MYROW, IAROW, NPROW ), IROFFC = MOD( IC-1, MB_C ), ICOFFC = MOD( JC-1, NB_C ), ICROW = INDXG2P( IC, MB_C, MYROW, RSRC_C, NPROW ), ICCOL = INDXG2P( JC, NB_C, MYCOL, CSRC_C, NPCOL ), MpC0 = NUMROC( M+IROFFC, MB_C, MYROW, ICROW, NPROW ), NqC0 = NUMROC( N+ICOFFC, NB_C, MYCOL, ICCOL, NPCOL ), ILCM, INDXG2P and NUMROC are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. 
Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 INFO (global output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. Alignment requirements ====================== The distributed submatrices A(IA:*, JA:*) and C(IC:IC+M-1,JC:JC+N-1) must verify some alignment properties, namely the following expressions should be true: If SIDE = 'L', ( MB_A.EQ.MB_C .AND. IROFFA.EQ.IROFFC .AND. IAROW.EQ.ICROW ) If SIDE = 'R', ( MB_A.EQ.NB_C .AND. IROFFA.EQ.ICOFFC ) scalapack-doc-1.5/man/manl/pdormqr.l0100644000056400000620000001773206335610634017120 0ustar pfrauenfstaff.TH PDORMQR l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PDORMQR - overwrite the general real M-by-N distributed matrix sub( C ) = C(IC:IC+M-1,JC:JC+N-1) with SIDE = 'L' SIDE = 'R' TRANS = 'N' .SH SYNOPSIS .TP 20 SUBROUTINE PDORMQR( SIDE, TRANS, M, N, K, A, IA, JA, DESCA, TAU, C, IC, JC, DESCC, WORK, LWORK, INFO ) .TP 20 .ti +4 CHARACTER SIDE, TRANS .TP 20 .ti +4 INTEGER IA, IC, INFO, JA, JC, K, LWORK, M, N .TP 20 .ti +4 INTEGER DESCA( * ), DESCC( * ) .TP 20 .ti +4 DOUBLE PRECISION A( * ), C( * ), TAU( * ), WORK( * ) .SH PURPOSE PDORMQR overwrites the general real M-by-N distributed matrix sub( C ) = C(IC:IC+M-1,JC:JC+N-1) with TRANS = 'T': Q**T * sub( C ) sub( C ) * Q**T .br where Q is a real orthogonal distributed matrix defined as the product of k elementary reflectors .br Q = H(1) H(2) . . . H(k) .br as returned by PDGEQRF. Q is of order M if SIDE = 'L' and of order N if SIDE = 'R'. .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. 
.br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). 
An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 SIDE (global input) CHARACTER = 'L': apply Q or Q**T from the Left; .br = 'R': apply Q or Q**T from the Right. .TP 8 TRANS (global input) CHARACTER .br = 'N': No transpose, apply Q; .br = 'T': Transpose, apply Q**T. .TP 8 M (global input) INTEGER The number of rows to be operated on i.e the number of rows of the distributed submatrix sub( C ). M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on i.e the number of columns of the distributed submatrix sub( C ). N >= 0. .TP 8 K (global input) INTEGER The number of elementary reflectors whose product defines the matrix Q. If SIDE = 'L', M >= K >= 0, if SIDE = 'R', N >= K >= 0. .TP 8 A (local input) DOUBLE PRECISION pointer into the local memory to an array of dimension (LLD_A,LOCc(JA+K-1)). On entry, the j-th column must contain the vector which defines the elemen- tary reflector H(j), JA <= j <= JA+K-1, as returned by PDGEQRF in the K columns of its distributed matrix argument A(IA:*,JA:JA+K-1). A(IA:*,JA:JA+K-1) is modified by the routine but restored on exit. If SIDE = 'L', LLD_A >= MAX( 1, LOCr(IA+M-1) ); if SIDE = 'R', LLD_A >= MAX( 1, LOCr(IA+N-1) ). .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 TAU (local input) DOUBLE PRECISION, array, dimension LOCc(JA+K-1). This array contains the scalar factors TAU(j) of the elementary reflectors H(j) as returned by PDGEQRF. TAU is tied to the distributed matrix A. 
.TP 8 C (local input/local output) DOUBLE PRECISION pointer into the local memory to an array of dimension (LLD_C,LOCc(JC+N-1)). On entry, the local pieces of the distributed matrix sub(C). On exit, sub( C ) is overwritten by Q*sub( C ) or Q'*sub( C ) or sub( C )*Q' or sub( C )*Q. .TP 8 IC (global input) INTEGER The row index in the global array C indicating the first row of sub( C ). .TP 8 JC (global input) INTEGER The column index in the global array C indicating the first column of sub( C ). .TP 8 DESCC (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix C. .TP 8 WORK (local workspace/local output) DOUBLE PRECISION array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. LWORK is local input and must be at least If SIDE = 'L', LWORK >= MAX( (NB_A*(NB_A-1))/2, (NqC0 + MpC0)*NB_A ) + NB_A * NB_A else if SIDE = 'R', LWORK >= MAX( (NB_A*(NB_A-1))/2, ( NqC0 + MAX( NpA0 + NUMROC( NUMROC( N+ICOFFC, NB_A, 0, 0, NPCOL ), NB_A, 0, 0, LCMQ ), MpC0 ) )*NB_A ) + NB_A * NB_A end if where LCMQ = LCM / NPCOL with LCM = ICLM( NPROW, NPCOL ), IROFFA = MOD( IA-1, MB_A ), ICOFFA = MOD( JA-1, NB_A ), IAROW = INDXG2P( IA, MB_A, MYROW, RSRC_A, NPROW ), NpA0 = NUMROC( N+IROFFA, MB_A, MYROW, IAROW, NPROW ), IROFFC = MOD( IC-1, MB_C ), ICOFFC = MOD( JC-1, NB_C ), ICROW = INDXG2P( IC, MB_C, MYROW, RSRC_C, NPROW ), ICCOL = INDXG2P( JC, NB_C, MYCOL, CSRC_C, NPCOL ), MpC0 = NUMROC( M+IROFFC, MB_C, MYROW, ICROW, NPROW ), NqC0 = NUMROC( N+ICOFFC, NB_C, MYCOL, ICCOL, NPCOL ), ILCM, INDXG2P and NUMROC are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. 
Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 INFO (global output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. Alignment requirements ====================== The distributed submatrices A(IA:*, JA:*) and C(IC:IC+M-1,JC:JC+N-1) must verify some alignment properties, namely the following expressions should be true: If SIDE = 'L', ( MB_A.EQ.MB_C .AND. IROFFA.EQ.IROFFC .AND. IAROW.EQ.ICROW ) If SIDE = 'R', ( MB_A.EQ.NB_C .AND. IROFFA.EQ.ICOFFC ) .TH PDORMR2 l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PDORMR2 - overwrite the general real M-by-N distributed matrix sub( C ) = C(IC:IC+M-1,JC:JC+N-1) with .br SIDE = 'L' SIDE = 'R' .br TRANS = 'N': Q * sub( C ) sub( C ) * Q .br TRANS = 'T': Q**T * sub( C ) sub( C ) * Q**T .SH SYNOPSIS .TP 20 SUBROUTINE PDORMR2( SIDE, TRANS, M, N, K, A, IA, JA, DESCA, TAU, C, IC, JC, DESCC, WORK, LWORK, INFO ) .TP 20 .ti +4 CHARACTER SIDE, TRANS .TP 20 .ti +4 INTEGER IA, IC, INFO, JA, JC, K, LWORK, M, N .TP 20 .ti +4 INTEGER DESCA( * ), DESCC( * ) .TP 20 .ti +4 DOUBLE PRECISION A( * ), C( * ), TAU( * ), WORK( * ) .SH PURPOSE PDORMR2 overwrites the general real M-by-N distributed matrix sub( C ) = C(IC:IC+M-1,JC:JC+N-1) with .br SIDE = 'L' SIDE = 'R' .br TRANS = 'N': Q * sub( C ) sub( C ) * Q .br TRANS = 'T': Q**T * sub( C ) sub( C ) * Q**T .br where Q is a real orthogonal distributed matrix defined as the product of K elementary reflectors .br Q = H(1) H(2) . . . H(k) .br as returned by PDGERQF. Q is of order M if SIDE = 'L' and of order N if SIDE = 'R'. .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. 
.br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). 
An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 SIDE (global input) CHARACTER = 'L': apply Q or Q**T from the Left; .br = 'R': apply Q or Q**T from the Right. .TP 8 TRANS (global input) CHARACTER .br = 'N': No transpose, apply Q; .br = 'T': Transpose, apply Q**T. .TP 8 M (global input) INTEGER The number of rows to be operated on i.e the number of rows of the distributed submatrix sub( C ). M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on i.e the number of columns of the distributed submatrix sub( C ). N >= 0. .TP 8 K (global input) INTEGER The number of elementary reflectors whose product defines the matrix Q. If SIDE = 'L', M >= K >= 0, if SIDE = 'R', N >= K >= 0. .TP 8 A (local input) DOUBLE PRECISION pointer into the local memory to an array of dimension (LLD_A,LOCc(JA+M-1)) if SIDE='L', and (LLD_A,LOCc(JA+N-1)) if SIDE='R', where LLD_A >= MAX(1,LOCr(IA+K-1)); On entry, the i-th row must contain the vector which defines the elementary reflector H(i), IA <= i <= IA+K-1, as returned by PDGERQF in the K rows of its distributed matrix argument A(IA:IA+K-1,JA:*). .br A(IA:IA+K-1,JA:*) is modified by the routine but restored on exit. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 TAU (local input) DOUBLE PRECISION, array, dimension LOCc(IA+K-1). This array contains the scalar factors TAU(i) of the elementary reflectors H(i) as returned by PDGERQF. TAU is tied to the distributed matrix A. .TP 8 C (local input/local output) DOUBLE PRECISION pointer into the local memory to an array of dimension (LLD_C,LOCc(JC+N-1)). 
On entry, the local pieces of the distributed matrix sub(C). On exit, sub( C ) is overwritten by Q*sub( C ) or Q'*sub( C ) or sub( C )*Q' or sub( C )*Q. .TP 8 IC (global input) INTEGER The row index in the global array C indicating the first row of sub( C ). .TP 8 JC (global input) INTEGER The column index in the global array C indicating the first column of sub( C ). .TP 8 DESCC (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix C. .TP 8 WORK (local workspace/local output) DOUBLE PRECISION array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. LWORK is local input and must be at least If SIDE = 'L', LWORK >= MpC0 + MAX( MAX( 1, NqC0 ), NUMROC( NUMROC( M+IROFFC,MB_A,0,0,NPROW ),MB_A,0,0,LCMP ) ); if SIDE = 'R', LWORK >= NqC0 + MAX( 1, MpC0 ); where LCMP = LCM / NPROW with LCM = ICLM( NPROW, NPCOL ), IROFFC = MOD( IC-1, MB_C ), ICOFFC = MOD( JC-1, NB_C ), ICROW = INDXG2P( IC, MB_C, MYROW, RSRC_C, NPROW ), ICCOL = INDXG2P( JC, NB_C, MYCOL, CSRC_C, NPCOL ), MpC0 = NUMROC( M+IROFFC, MB_C, MYROW, ICROW, NPROW ), NqC0 = NUMROC( N+ICOFFC, NB_C, MYCOL, ICCOL, NPCOL ), ILCM, INDXG2P and NUMROC are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 INFO (local output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. 
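The INFO convention above (INFO = -(i*100+j) for an illegal entry of an array argument, INFO = -i for an illegal scalar argument) can be decoded mechanically. The helper below is an illustrative sketch, not part of ScaLAPACK:

```python
def decode_info(info):
    """Decode a ScaLAPACK-style INFO value following the convention
    INFO = -(i*100+j) (bad entry j of array argument i) and
    INFO = -i (bad scalar argument i)."""
    if info == 0:
        return "successful exit"
    if info > 0:
        return f"INFO = {info}"
    code = -info
    if code > 100:
        i, j = divmod(code, 100)  # i = argument index, j = entry index
        return f"argument {i} is an array whose entry {j} had an illegal value"
    return f"argument {code} had an illegal value"

print(decode_info(-902))  # e.g. entry 2 of the 9th argument (DESCA) was illegal
print(decode_info(-4))    # the 4th argument (a scalar) was illegal
```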
Alignment requirements ====================== The distributed submatrices A(IA:*, JA:*) and C(IC:IC+M-1,JC:JC+N-1) must verify some alignment properties, namely the following expressions should be true: If SIDE = 'L', ( NB_A.EQ.MB_C .AND. ICOFFA.EQ.IROFFC ) If SIDE = 'R', ( NB_A.EQ.NB_C .AND. ICOFFA.EQ.ICOFFC .AND. IACOL.EQ.ICCOL ) .TH PDORMR3 l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PDORMR3 - overwrite the general real M-by-N distributed matrix sub( C ) = C(IC:IC+M-1,JC:JC+N-1) with .br SIDE = 'L' SIDE = 'R' .br TRANS = 'N': Q * sub( C ) sub( C ) * Q .br TRANS = 'T': Q**T * sub( C ) sub( C ) * Q**T .SH SYNOPSIS .TP 20 SUBROUTINE PDORMR3( SIDE, TRANS, M, N, K, L, A, IA, JA, DESCA, TAU, C, IC, JC, DESCC, WORK, LWORK, INFO ) .TP 20 .ti +4 CHARACTER SIDE, TRANS .TP 20 .ti +4 INTEGER IA, IC, INFO, JA, JC, K, L, LWORK, M, N .TP 20 .ti +4 INTEGER DESCA( * ), DESCC( * ) .TP 20 .ti +4 DOUBLE PRECISION A( * ), C( * ), TAU( * ), WORK( * ) .SH PURPOSE PDORMR3 overwrites the general real M-by-N distributed matrix sub( C ) = C(IC:IC+M-1,JC:JC+N-1) with .br SIDE = 'L' SIDE = 'R' .br TRANS = 'N': Q * sub( C ) sub( C ) * Q .br TRANS = 'T': Q**T * sub( C ) sub( C ) * Q**T .br where Q is a real orthogonal distributed matrix defined as the product of K elementary reflectors .br Q = H(1) H(2) . . . H(k) .br as returned by PDTZRZF. Q is of order M if SIDE = 'L' and of order N if SIDE = 'R'. .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclically distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". 
.br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 SIDE (global input) CHARACTER = 'L': apply Q or Q**T from the Left; .br = 'R': apply Q or Q**T from the Right. 
.TP 8 TRANS (global input) CHARACTER .br = 'N': No transpose, apply Q; .br = 'T': Transpose, apply Q**T. .TP 8 M (global input) INTEGER The number of rows to be operated on i.e the number of rows of the distributed submatrix sub( C ). M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on i.e the number of columns of the distributed submatrix sub( C ). N >= 0. .TP 8 K (global input) INTEGER The number of elementary reflectors whose product defines the matrix Q. If SIDE = 'L', M >= K >= 0, if SIDE = 'R', N >= K >= 0. .TP 8 L (global input) INTEGER The columns of the distributed submatrix sub( A ) containing the meaningful part of the Householder reflectors. If SIDE = 'L', M >= L >= 0, if SIDE = 'R', N >= L >= 0. .TP 8 A (local input) DOUBLE PRECISION pointer into the local memory to an array of dimension (LLD_A,LOCc(JA+M-1)) if SIDE='L', and (LLD_A,LOCc(JA+N-1)) if SIDE='R', where LLD_A >= MAX(1,LOCr(IA+K-1)); On entry, the i-th row must contain the vector which defines the elementary reflector H(i), IA <= i <= IA+K-1, as returned by PDTZRZF in the K rows of its distributed matrix argument A(IA:IA+K-1,JA:*). .br A(IA:IA+K-1,JA:*) is modified by the routine but restored on exit. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 TAU (local input) DOUBLE PRECISION, array, dimension LOCc(IA+K-1). This array contains the scalar factors TAU(i) of the elementary reflectors H(i) as returned by PDTZRZF. TAU is tied to the distributed matrix A. .TP 8 C (local input/local output) DOUBLE PRECISION pointer into the local memory to an array of dimension (LLD_C,LOCc(JC+N-1)). On entry, the local pieces of the distributed matrix sub(C). 
On exit, sub( C ) is overwritten by Q*sub( C ) or Q'*sub( C ) or sub( C )*Q' or sub( C )*Q. .TP 8 IC (global input) INTEGER The row index in the global array C indicating the first row of sub( C ). .TP 8 JC (global input) INTEGER The column index in the global array C indicating the first column of sub( C ). .TP 8 DESCC (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix C. .TP 8 WORK (local workspace/local output) DOUBLE PRECISION array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. LWORK is local input and must be at least If SIDE = 'L', LWORK >= MpC0 + MAX( MAX( 1, NqC0 ), NUMROC( NUMROC( M+IROFFC,MB_A,0,0,NPROW ),MB_A,0,0,LCMP ) ); if SIDE = 'R', LWORK >= NqC0 + MAX( 1, MpC0 ); where LCMP = LCM / NPROW with LCM = ICLM( NPROW, NPCOL ), IROFFC = MOD( IC-1, MB_C ), ICOFFC = MOD( JC-1, NB_C ), ICROW = INDXG2P( IC, MB_C, MYROW, RSRC_C, NPROW ), ICCOL = INDXG2P( JC, NB_C, MYCOL, CSRC_C, NPCOL ), MpC0 = NUMROC( M+IROFFC, MB_C, MYROW, ICROW, NPROW ), NqC0 = NUMROC( N+ICOFFC, NB_C, MYCOL, ICCOL, NPCOL ), ILCM, INDXG2P and NUMROC are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 INFO (local output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. 
Alignment requirements ====================== The distributed submatrices A(IA:*, JA:*) and C(IC:IC+M-1,JC:JC+N-1) must verify some alignment properties, namely the following expressions should be true: If SIDE = 'L', ( NB_A.EQ.MB_C .AND. ICOFFA.EQ.IROFFC ) If SIDE = 'R', ( NB_A.EQ.NB_C .AND. ICOFFA.EQ.ICOFFC .AND. IACOL.EQ.ICCOL ) .TH PDORMRQ l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PDORMRQ - overwrite the general real M-by-N distributed matrix sub( C ) = C(IC:IC+M-1,JC:JC+N-1) with .br SIDE = 'L' SIDE = 'R' .br TRANS = 'N': Q * sub( C ) sub( C ) * Q .br TRANS = 'T': Q**T * sub( C ) sub( C ) * Q**T .SH SYNOPSIS .TP 20 SUBROUTINE PDORMRQ( SIDE, TRANS, M, N, K, A, IA, JA, DESCA, TAU, C, IC, JC, DESCC, WORK, LWORK, INFO ) .TP 20 .ti +4 CHARACTER SIDE, TRANS .TP 20 .ti +4 INTEGER IA, IC, INFO, JA, JC, K, LWORK, M, N .TP 20 .ti +4 INTEGER DESCA( * ), DESCC( * ) .TP 20 .ti +4 DOUBLE PRECISION A( * ), C( * ), TAU( * ), WORK( * ) .SH PURPOSE PDORMRQ overwrites the general real M-by-N distributed matrix sub( C ) = C(IC:IC+M-1,JC:JC+N-1) with .br SIDE = 'L' SIDE = 'R' .br TRANS = 'N': Q * sub( C ) sub( C ) * Q .br TRANS = 'T': Q**T * sub( C ) sub( C ) * Q**T .br where Q is a real orthogonal distributed matrix defined as the product of K elementary reflectors .br Q = H(1) H(2) . . . H(k) .br as returned by PDGERQF. Q is of order M if SIDE = 'L' and of order N if SIDE = 'R'. .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclically distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ ) The descriptor type. 
In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 SIDE (global input) CHARACTER = 'L': apply Q or Q**T from the Left; .br = 'R': apply Q or Q**T from the Right. .TP 8 TRANS (global input) CHARACTER .br = 'N': No transpose, apply Q; .br = 'T': Transpose, apply Q**T. 
.TP 8 M (global input) INTEGER The number of rows to be operated on i.e the number of rows of the distributed submatrix sub( C ). M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on i.e the number of columns of the distributed submatrix sub( C ). N >= 0. .TP 8 K (global input) INTEGER The number of elementary reflectors whose product defines the matrix Q. If SIDE = 'L', M >= K >= 0, if SIDE = 'R', N >= K >= 0. .TP 8 A (local input) DOUBLE PRECISION pointer into the local memory to an array of dimension (LLD_A,LOCc(JA+M-1)) if SIDE='L', and (LLD_A,LOCc(JA+N-1)) if SIDE='R', where LLD_A >= MAX(1,LOCr(IA+K-1)); On entry, the i-th row must contain the vector which defines the elementary reflector H(i), IA <= i <= IA+K-1, as returned by PDGERQF in the K rows of its distributed matrix argument A(IA:IA+K-1,JA:*). .br A(IA:IA+K-1,JA:*) is modified by the routine but restored on exit. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 TAU (local input) DOUBLE PRECISION, array, dimension LOCc(IA+K-1). This array contains the scalar factors TAU(i) of the elementary reflectors H(i) as returned by PDGERQF. TAU is tied to the distributed matrix A. .TP 8 C (local input/local output) DOUBLE PRECISION pointer into the local memory to an array of dimension (LLD_C,LOCc(JC+N-1)). On entry, the local pieces of the distributed matrix sub(C). On exit, sub( C ) is overwritten by Q*sub( C ) or Q'*sub( C ) or sub( C )*Q' or sub( C )*Q. .TP 8 IC (global input) INTEGER The row index in the global array C indicating the first row of sub( C ). .TP 8 JC (global input) INTEGER The column index in the global array C indicating the first column of sub( C ). 
.TP 8 DESCC (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix C. .TP 8 WORK (local workspace/local output) DOUBLE PRECISION array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. LWORK is local input and must be at least if SIDE = 'L', LWORK >= MAX( (MB_A*(MB_A-1))/2, ( MpC0 + MAX( MqA0 + NUMROC( NUMROC( M+IROFFC, MB_A, 0, 0, NPROW ), MB_A, 0, 0, LCMP ), NqC0 ) )*MB_A ) + MB_A * MB_A else if SIDE = 'R', LWORK >= MAX( (MB_A*(MB_A-1))/2, (MpC0 + NqC0)*MB_A ) + MB_A * MB_A end if where LCMP = LCM / NPROW with LCM = ICLM( NPROW, NPCOL ), IROFFA = MOD( IA-1, MB_A ), ICOFFA = MOD( JA-1, NB_A ), IACOL = INDXG2P( JA, NB_A, MYCOL, CSRC_A, NPCOL ), MqA0 = NUMROC( M+ICOFFA, NB_A, MYCOL, IACOL, NPCOL ), IROFFC = MOD( IC-1, MB_C ), ICOFFC = MOD( JC-1, NB_C ), ICROW = INDXG2P( IC, MB_C, MYROW, RSRC_C, NPROW ), ICCOL = INDXG2P( JC, NB_C, MYCOL, CSRC_C, NPCOL ), MpC0 = NUMROC( M+IROFFC, MB_C, MYROW, ICROW, NPROW ), NqC0 = NUMROC( N+ICOFFC, NB_C, MYCOL, ICCOL, NPCOL ), ILCM, INDXG2P and NUMROC are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 INFO (global output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. 
Alignment requirements ====================== The distributed submatrices A(IA:*, JA:*) and C(IC:IC+M-1,JC:JC+N-1) must verify some alignment properties, namely the following expressions should be true: If SIDE = 'L', ( NB_A.EQ.MB_C .AND. ICOFFA.EQ.IROFFC ) If SIDE = 'R', ( NB_A.EQ.NB_C .AND. ICOFFA.EQ.ICOFFC .AND. IACOL.EQ.ICCOL ) .TH PDORMRZ l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PDORMRZ - overwrite the general real M-by-N distributed matrix sub( C ) = C(IC:IC+M-1,JC:JC+N-1) with .br SIDE = 'L' SIDE = 'R' .br TRANS = 'N': Q * sub( C ) sub( C ) * Q .br TRANS = 'T': Q**T * sub( C ) sub( C ) * Q**T .SH SYNOPSIS .TP 20 SUBROUTINE PDORMRZ( SIDE, TRANS, M, N, K, L, A, IA, JA, DESCA, TAU, C, IC, JC, DESCC, WORK, LWORK, INFO ) .TP 20 .ti +4 CHARACTER SIDE, TRANS .TP 20 .ti +4 INTEGER IA, IC, INFO, JA, JC, K, L, LWORK, M, N .TP 20 .ti +4 INTEGER DESCA( * ), DESCC( * ) .TP 20 .ti +4 DOUBLE PRECISION A( * ), C( * ), TAU( * ), WORK( * ) .SH PURPOSE PDORMRZ overwrites the general real M-by-N distributed matrix sub( C ) = C(IC:IC+M-1,JC:JC+N-1) with .br SIDE = 'L' SIDE = 'R' .br TRANS = 'N': Q * sub( C ) sub( C ) * Q .br TRANS = 'T': Q**T * sub( C ) sub( C ) * Q**T .br where Q is a real orthogonal distributed matrix defined as the product of K elementary reflectors .br Q = H(1) H(2) . . . H(k) .br as returned by PDTZRZF. Q is of order M if SIDE = 'L' and of order N if SIDE = 'R'. .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclically distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". 
.br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 SIDE (global input) CHARACTER = 'L': apply Q or Q**T from the Left; .br = 'R': apply Q or Q**T from the Right. 
.TP 8 TRANS (global input) CHARACTER .br = 'N': No transpose, apply Q; .br = 'T': Transpose, apply Q**T. .TP 8 M (global input) INTEGER The number of rows to be operated on i.e the number of rows of the distributed submatrix sub( C ). M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on i.e the number of columns of the distributed submatrix sub( C ). N >= 0. .TP 8 K (global input) INTEGER The number of elementary reflectors whose product defines the matrix Q. If SIDE = 'L', M >= K >= 0, if SIDE = 'R', N >= K >= 0. .TP 8 L (global input) INTEGER The columns of the distributed submatrix sub( A ) containing the meaningful part of the Householder reflectors. If SIDE = 'L', M >= L >= 0, if SIDE = 'R', N >= L >= 0. .TP 8 A (local input) DOUBLE PRECISION pointer into the local memory to an array of dimension (LLD_A,LOCc(JA+M-1)) if SIDE='L', and (LLD_A,LOCc(JA+N-1)) if SIDE='R', where LLD_A >= MAX(1,LOCr(IA+K-1)); On entry, the i-th row must contain the vector which defines the elementary reflector H(i), IA <= i <= IA+K-1, as returned by PDTZRZF in the K rows of its distributed matrix argument A(IA:IA+K-1,JA:*). .br A(IA:IA+K-1,JA:*) is modified by the routine but restored on exit. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 TAU (local input) DOUBLE PRECISION, array, dimension LOCc(IA+K-1). This array contains the scalar factors TAU(i) of the elementary reflectors H(i) as returned by PDTZRZF. TAU is tied to the distributed matrix A. .TP 8 C (local input/local output) DOUBLE PRECISION pointer into the local memory to an array of dimension (LLD_C,LOCc(JC+N-1)). On entry, the local pieces of the distributed matrix sub(C). 
On exit, sub( C ) is overwritten by Q*sub( C ) or Q'*sub( C ) or sub( C )*Q' or sub( C )*Q. .TP 8 IC (global input) INTEGER The row index in the global array C indicating the first row of sub( C ). .TP 8 JC (global input) INTEGER The column index in the global array C indicating the first column of sub( C ). .TP 8 DESCC (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix C. .TP 8 WORK (local workspace/local output) DOUBLE PRECISION array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. LWORK is local input and must be at least if SIDE = 'L', LWORK >= MAX( (MB_A*(MB_A-1))/2, ( MpC0 + MAX( MqA0 + NUMROC( NUMROC( M+IROFFC, MB_A, 0, 0, NPROW ), MB_A, 0, 0, LCMP ), NqC0 ) )*MB_A ) + MB_A * MB_A else if SIDE = 'R', LWORK >= MAX( (MB_A*(MB_A-1))/2, (MpC0 + NqC0)*MB_A ) + MB_A * MB_A end if where LCMP = LCM / NPROW with LCM = ICLM( NPROW, NPCOL ), IROFFA = MOD( IA-1, MB_A ), ICOFFA = MOD( JA-1, NB_A ), IACOL = INDXG2P( JA, NB_A, MYCOL, CSRC_A, NPCOL ), MqA0 = NUMROC( M+ICOFFA, NB_A, MYCOL, IACOL, NPCOL ), IROFFC = MOD( IC-1, MB_C ), ICOFFC = MOD( JC-1, NB_C ), ICROW = INDXG2P( IC, MB_C, MYROW, RSRC_C, NPROW ), ICCOL = INDXG2P( JC, NB_C, MYCOL, CSRC_C, NPCOL ), MpC0 = NUMROC( M+IROFFC, MB_C, MYROW, ICROW, NPROW ), NqC0 = NUMROC( N+ICOFFC, NB_C, MYCOL, ICCOL, NPCOL ), ILCM, INDXG2P and NUMROC are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. 
.TP 8 INFO (global output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. Alignment requirements ====================== The distributed submatrices A(IA:*, JA:*) and C(IC:IC+M-1,JC:JC+N-1) must verify some alignment properties, namely the following expressions should be true: If SIDE = 'L', ( NB_A.EQ.MB_C .AND. ICOFFA.EQ.IROFFC ) If SIDE = 'R', ( NB_A.EQ.NB_C .AND. ICOFFA.EQ.ICOFFC .AND. IACOL.EQ.ICCOL ) scalapack-doc-1.5/man/manl/pdormtr.l0100644000056400000620000002051106335610635017111 0ustar pfrauenfstaff.TH PDORMTR l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PDORMTR - overwrite the general real M-by-N distributed matrix sub( C ) = C(IC:IC+M-1,JC:JC+N-1) with SIDE = 'L' SIDE = 'R' TRANS = 'N' .SH SYNOPSIS .TP 20 SUBROUTINE PDORMTR( SIDE, UPLO, TRANS, M, N, A, IA, JA, DESCA, TAU, C, IC, JC, DESCC, WORK, LWORK, INFO ) .TP 20 .ti +4 CHARACTER SIDE, TRANS, UPLO .TP 20 .ti +4 INTEGER IA, IC, INFO, JA, JC, LWORK, M, N .TP 20 .ti +4 INTEGER DESCA( * ), DESCC( * ) .TP 20 .ti +4 DOUBLE PRECISION A( * ), C( * ), TAU( * ), WORK( * ) .SH PURPOSE PDORMTR overwrites the general real M-by-N distributed matrix sub( C ) = C(IC:IC+M-1,JC:JC+N-1) with TRANS = 'T': Q**T * sub( C ) sub( C ) * Q**T .br where Q is a real orthogonal distributed matrix of order nq, with nq = m if SIDE = 'L' and nq = n if SIDE = 'R'. Q is defined as the product of nq-1 elementary reflectors, as returned by PDSYTRD: if UPLO = 'U', Q = H(nq-1) . . . H(2) H(1); .br if UPLO = 'L', Q = H(1) H(2) . . . H(nq-1). .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. 
.br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). 
An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 SIDE (global input) CHARACTER = 'L': apply Q or Q**T from the Left; .br = 'R': apply Q or Q**T from the Right. .TP 8 UPLO (global input) CHARACTER .br = 'U': Upper triangle of A(IA:*,JA:*) contains elementary reflectors from PDSYTRD; = 'L': Lower triangle of A(IA:*,JA:*) contains elementary reflectors from PDSYTRD. .TP 8 TRANS (global input) CHARACTER = 'N': No transpose, apply Q; .br = 'T': Transpose, apply Q**T. .TP 8 M (global input) INTEGER The number of rows to be operated on i.e the number of rows of the distributed submatrix sub( C ). M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on i.e the number of columns of the distributed submatrix sub( C ). N >= 0. .TP 8 A (local input) DOUBLE PRECISION pointer into the local memory to an array of dimension (LLD_A,LOCc(JA+M-1)) if SIDE='L', or (LLD_A,LOCc(JA+N-1)) if SIDE = 'R'. The vectors which define the elementary reflectors, as returned by PDSYTRD. If SIDE = 'L', LLD_A >= max(1,LOCr(IA+M-1)); if SIDE = 'R', LLD_A >= max(1,LOCr(IA+N-1)). .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 TAU (local input) DOUBLE PRECISION array, dimension LTAU, where if SIDE = 'L' and UPLO = 'U', LTAU = LOCc(M_A), if SIDE = 'L' and UPLO = 'L', LTAU = LOCc(JA+M-2), if SIDE = 'R' and UPLO = 'U', LTAU = LOCc(N_A), if SIDE = 'R' and UPLO = 'L', LTAU = LOCc(JA+N-2). TAU(i) must contain the scalar factor of the elementary reflector H(i), as returned by PDSYTRD. TAU is tied to the distributed matrix A. 
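The Notes above bound the local storage by LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A (and symmetrically for LOCc). The following Python sketch checks that bound against a direct block-cyclic count; local_count and locr_bound are illustrative names, not ScaLAPACK routines:

```python
import math

def local_count(m, mb, p, nprow):
    # Rows owned by process row p: row i belongs to block i//mb, and
    # blocks are dealt out cyclically over the NPROW process rows.
    return sum(1 for i in range(m) if (i // mb) % nprow == p)

def locr_bound(m, mb, nprow):
    # Upper bound from the Notes: LOCr(M) <= ceil( ceil(M/MB_A)/NPROW )*MB_A
    return math.ceil(math.ceil(m / mb) / nprow) * mb

# No process row ever holds more rows than the bound allows:
for m in (1, 7, 64, 100):
    for nprow in (1, 2, 3, 4):
        assert all(local_count(m, 5, p, nprow) <= locr_bound(m, 5, nprow)
                   for p in range(nprow))
```

The bound is what one typically uses to size local arrays statically when the process coordinates are not yet known.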
.TP 8 C (local input/local output) DOUBLE PRECISION pointer into the local memory to an array of dimension (LLD_C,LOCc(JC+N-1)). On entry, the local pieces of the distributed matrix sub(C). On exit, sub( C ) is overwritten by Q*sub( C ) or Q'*sub( C ) or sub( C )*Q' or sub( C )*Q. .TP 8 IC (global input) INTEGER The row index in the global array C indicating the first row of sub( C ). .TP 8 JC (global input) INTEGER The column index in the global array C indicating the first column of sub( C ). .TP 8 DESCC (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix C. .TP 8 WORK (local workspace/local output) DOUBLE PRECISION array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. LWORK is local input and must be at least If UPLO = 'U', IAA = IA, JAA = JA+1, ICC = IC, JCC = JC; else if UPLO = 'L', IAA = IA+1, JAA = JA; if SIDE = 'L', ICC = IC+1; JCC = JC; else ICC = IC; JCC = JC+1; end if end if If SIDE = 'L', MI = M-1; NI = N; LWORK >= MAX( (NB_A*(NB_A-1))/2, (NqC0 + MpC0)*NB_A ) + NB_A * NB_A else if SIDE = 'R', MI = M; NI = N-1; LWORK >= MAX( (NB_A*(NB_A-1))/2, ( NqC0 + MAX( NpA0 + NUMROC( NUMROC( NI+ICOFFC, NB_A, 0, 0, NPCOL ), NB_A, 0, 0, LCMQ ), MpC0 ) )*NB_A ) + NB_A * NB_A end if where LCMQ = LCM / NPCOL with LCM = ILCM( NPROW, NPCOL ), IROFFA = MOD( IAA-1, MB_A ), ICOFFA = MOD( JAA-1, NB_A ), IAROW = INDXG2P( IAA, MB_A, MYROW, RSRC_A, NPROW ), NpA0 = NUMROC( NI+IROFFA, MB_A, MYROW, IAROW, NPROW ), IROFFC = MOD( ICC-1, MB_C ), ICOFFC = MOD( JCC-1, NB_C ), ICROW = INDXG2P( ICC, MB_C, MYROW, RSRC_C, NPROW ), ICCOL = INDXG2P( JCC, NB_C, MYCOL, CSRC_C, NPCOL ), MpC0 = NUMROC( MI+IROFFC, MB_C, MYROW, ICROW, NPROW ), NqC0 = NUMROC( NI+ICOFFC, NB_C, MYCOL, ICCOL, NPCOL ), ILCM, INDXG2P and NUMROC are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO.
If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 INFO (global output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. Alignment requirements ====================== The distributed submatrices A(IA:*, JA:*) and C(IC:IC+M-1,JC:JC+N-1) must verify some alignment properties, namely the following expressions should be true: If SIDE = 'L', ( MB_A.EQ.MB_C .AND. IROFFA.EQ.IROFFC .AND. IAROW.EQ.ICROW ) If SIDE = 'R', ( MB_A.EQ.NB_C .AND. IROFFA.EQ.ICOFFC ) scalapack-doc-1.5/man/manl/pdpbtrf.l0100644000056400000620000000232006335610635017061 0ustar pfrauenfstaff.TH PDPBTRF l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PDPBTRF - compute a Cholesky factorization of an N-by-N real banded symmetric positive definite distributed matrix with bandwidth BW .SH SYNOPSIS .TP 20 SUBROUTINE PDPBTRF( UPLO, N, BW, A, JA, DESCA, AF, LAF, WORK, LWORK, INFO ) .TP 20 .ti +4 CHARACTER UPLO .TP 20 .ti +4 INTEGER BW, INFO, JA, LAF, LWORK, N .TP 20 .ti +4 INTEGER DESCA( * ) .TP 20 .ti +4 DOUBLE PRECISION A( * ), AF( * ), WORK( * ) .SH PURPOSE PDPBTRF computes a Cholesky factorization of an N-by-N real banded symmetric positive definite distributed matrix with bandwidth BW: A(1:N, JA:JA+N-1). Reordering is used to increase parallelism in the factorization. This reordering results in factors that are DIFFERENT from those produced by equivalent sequential codes. These factors cannot be used directly by users; however, they can be used in .br subsequent calls to PDPBTRS to solve linear systems. 
.br The factorization has the form .br P A(1:N, JA:JA+N-1) P^T = U'*U, if UPLO = 'U', or P A(1:N, JA:JA+N-1) P^T = L*L', if UPLO = 'L' .br where U is a banded upper triangular matrix and L is a banded lower triangular matrix, and P is a permutation matrix. .br scalapack-doc-1.5/man/manl/pdpbtrs.l0100644000056400000620000000170306335610635017102 0ustar pfrauenfstaff.TH PDPBTRS l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PDPBTRS - solve a system of linear equations A(1:N, JA:JA+N-1) * X = B(IB:IB+N-1, 1:NRHS) .SH SYNOPSIS .TP 20 SUBROUTINE PDPBTRS( UPLO, N, BW, NRHS, A, JA, DESCA, B, IB, DESCB, AF, LAF, WORK, LWORK, INFO ) .TP 20 .ti +4 CHARACTER UPLO .TP 20 .ti +4 INTEGER BW, IB, INFO, JA, LAF, LWORK, N, NRHS .TP 20 .ti +4 INTEGER DESCA( * ), DESCB( * ) .TP 20 .ti +4 DOUBLE PRECISION A( * ), AF( * ), B( * ), WORK( * ) .SH PURPOSE PDPBTRS solves a system of linear equations A(1:N, JA:JA+N-1) * X = B(IB:IB+N-1, 1:NRHS), where A(1:N, JA:JA+N-1) is the matrix used to produce the factors stored in A(1:N,JA:JA+N-1) and AF by PDPBTRF. .br A(1:N, JA:JA+N-1) is an N-by-N real .br banded symmetric positive definite distributed .br matrix with bandwidth BW. .br Depending on the value of UPLO, A stores either U or L in the equation A(1:N, JA:JA+N-1) = U'*U or L*L' as computed by PDPBTRF. .br Routine PDPBTRF MUST be called first.
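With the Cholesky factorization A = U'*U in hand, solving A*x = b reduces to one forward solve with U' followed by one back solve with U. The following Python sketch shows that arithmetic on a small dense example; it is a serial illustration only, not PDPBTRS's reordered parallel algorithm, and chol_solve is an illustrative name:

```python
def chol_solve(u, b):
    """Given an upper-triangular Cholesky factor u with A = u^T * u,
    solve A*x = b by a forward solve (u^T * y = b) and then a back
    solve (u * x = y). Serial dense illustration only."""
    n = len(b)
    y = [0.0] * n
    for i in range(n):                       # forward solve with u^T
        y[i] = (b[i] - sum(u[j][i] * y[j] for j in range(i))) / u[i][i]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):           # back solve with u
        x[i] = (y[i] - sum(u[i][j] * x[j] for j in range(i + 1, n))) / u[i][i]
    return x

# u = [[2,1],[0,2]] gives A = u^T*u = [[4,2],[2,5]]; solve A*x = [8,9].
```

In the distributed setting the same two triangular solves are applied to the reordered factors produced by PDPBTRF, which is why those factors cannot be used outside the PDPBTRF/PDPBTRS pair.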
.br scalapack-doc-1.5/man/manl/pdpbtrsv.l0100644000056400000620000000217506335610635017274 0ustar pfrauenfstaff.TH PDPBTRSV l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PDPBTRSV - solve a banded triangular system of linear equations A(1:N, JA:JA+N-1) * X = B(IB:IB+N-1, 1:NRHS) .SH SYNOPSIS .TP 21 SUBROUTINE PDPBTRSV( UPLO, TRANS, N, BW, NRHS, A, JA, DESCA, B, IB, DESCB, AF, LAF, WORK, LWORK, INFO ) .TP 21 .ti +4 CHARACTER TRANS, UPLO .TP 21 .ti +4 INTEGER BW, IB, INFO, JA, LAF, LWORK, N, NRHS .TP 21 .ti +4 INTEGER DESCA( * ), DESCB( * ) .TP 21 .ti +4 DOUBLE PRECISION A( * ), AF( * ), B( * ), WORK( * ) .SH PURPOSE PDPBTRSV solves a banded triangular system of linear equations .br A(1:N, JA:JA+N-1) * X = B(IB:IB+N-1, 1:NRHS) or .br A(1:N, JA:JA+N-1)^T * X = B(IB:IB+N-1, 1:NRHS), where A(1:N, JA:JA+N-1) is a banded .br triangular matrix factor produced by the .br Cholesky factorization code PDPBTRF .br and is stored in A(1:N,JA:JA+N-1) and AF. .br The matrix stored in A(1:N, JA:JA+N-1) is either .br upper or lower triangular according to UPLO, .br and the choice of solving A(1:N, JA:JA+N-1) or A(1:N, JA:JA+N-1)^T is dictated by the user via the parameter TRANS. .br Routine PDPBTRF MUST be called first.
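The effect of the TRANS parameter can be illustrated on a small dense upper-triangular system: TRANS = 'N' is a back substitution with U, TRANS = 'T' a forward substitution with U^T. This Python sketch shows only that scalar arithmetic, not PDPBTRSV's banded distributed algorithm, and tri_solve is an illustrative name:

```python
def tri_solve(u, b, trans='N'):
    """Solve u*x = b (trans='N') or u^T*x = b (trans='T') for a dense
    upper-triangular matrix u. Scalar illustration of the TRANS choice;
    PDPBTRSV operates on banded, block-cyclically distributed factors."""
    n = len(b)
    x = [0.0] * n
    if trans == 'N':                         # back substitution
        for i in range(n - 1, -1, -1):
            s = b[i] - sum(u[i][j] * x[j] for j in range(i + 1, n))
            x[i] = s / u[i][i]
    else:                                    # forward substitution on u^T
        for i in range(n):
            s = b[i] - sum(u[j][i] * x[j] for j in range(i))
            x[i] = s / u[i][i]
    return x
```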
.br scalapack-doc-1.5/man/manl/pdpocon.l0100644000056400000620000001511106335610635017064 0ustar pfrauenfstaff.TH PDPOCON l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PDPOCON - estimate the reciprocal of the condition number (in the 1-norm) of a real symmetric positive definite distributed matrix using the Cholesky factorization A = U**T*U or A = L*L**T computed by PDPOTRF .SH SYNOPSIS .TP 20 SUBROUTINE PDPOCON( UPLO, N, A, IA, JA, DESCA, ANORM, RCOND, WORK, LWORK, IWORK, LIWORK, INFO ) .TP 20 .ti +4 CHARACTER UPLO .TP 20 .ti +4 INTEGER IA, INFO, JA, LIWORK, LWORK, N .TP 20 .ti +4 DOUBLE PRECISION ANORM, RCOND .TP 20 .ti +4 INTEGER DESCA( * ), IWORK( * ) .TP 20 .ti +4 DOUBLE PRECISION A( * ), WORK( * ) .SH PURPOSE PDPOCON estimates the reciprocal of the condition number (in the 1-norm) of a real symmetric positive definite distributed matrix using the Cholesky factorization A = U**T*U or A = L*L**T computed by PDPOTRF. An estimate is obtained for norm(inv(A(IA:IA+N-1,JA:JA+N-1))), and the reciprocal of the condition number is computed as .br RCOND = 1 / ( norm( A(IA:IA+N-1,JA:JA+N-1) ) * norm( inv(A(IA:IA+N-1,JA:JA+N-1)) ) ). Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. 
The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 UPLO (global input) CHARACTER Specifies whether the factor stored in A(IA:IA+N-1,JA:JA+N-1) is upper or lower triangular. .br = 'U': Upper triangular .br = 'L': Lower triangular .TP 8 N (global input) INTEGER .br The order of the distributed matrix A(IA:IA+N-1,JA:JA+N-1). N >= 0. .TP 8 A (local input) DOUBLE PRECISION pointer into the local memory to an array of dimension ( LLD_A, LOCc(JA+N-1) ). 
On entry, this array contains the local pieces of the factors L or U from the Cholesky factorization A(IA:IA+N-1,JA:JA+N-1) = U'*U or L*L', as computed by PDPOTRF. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 ANORM (global input) DOUBLE PRECISION The 1-norm (or infinity-norm) of the symmetric distributed matrix A(IA:IA+N-1,JA:JA+N-1). .TP 8 RCOND (global output) DOUBLE PRECISION The reciprocal of the condition number of the distributed matrix A(IA:IA+N-1,JA:JA+N-1), computed as .br RCOND = 1 / ( norm( A(IA:IA+N-1,JA:JA+N-1) ) * .br norm( inv(A(IA:IA+N-1,JA:JA+N-1)) ) ). .TP 8 WORK (local workspace/local output) DOUBLE PRECISION array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. LWORK is local input and must be at least LWORK >= 2*LOCr(N+MOD(IA-1,MB_A)) + 2*LOCc(N+MOD(JA-1,NB_A)) + MAX( 2, MAX(NB_A*CEIL(NPROW-1,NPCOL),LOCc(N+MOD(JA-1,NB_A)) + NB_A*CEIL(NPCOL-1,NPROW)) ). If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 IWORK (local workspace/local output) INTEGER array, dimension (LIWORK) On exit, IWORK(1) returns the minimal and optimal LIWORK. .TP 8 LIWORK (local or global input) INTEGER The dimension of the array IWORK. LIWORK is local input and must be at least LIWORK >= LOCr(N+MOD(IA-1,MB_A)). 
If LIWORK = -1, then LIWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 INFO (global output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-th entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. scalapack-doc-1.5/man/manl/pdpoequ.l0100644000056400000620000001411706335610635017104 0ustar pfrauenfstaff.TH PDPOEQU l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PDPOEQU - compute row and column scalings intended to equilibrate a distributed symmetric positive definite matrix sub( A ) = A(IA:IA+N-1,JA:JA+N-1) and reduce its condition number (with respect to the two-norm) .SH SYNOPSIS .TP 20 SUBROUTINE PDPOEQU( N, A, IA, JA, DESCA, SR, SC, SCOND, AMAX, INFO ) .TP 20 .ti +4 INTEGER IA, INFO, JA, N .TP 20 .ti +4 DOUBLE PRECISION AMAX, SCOND .TP 20 .ti +4 INTEGER DESCA( * ) .TP 20 .ti +4 DOUBLE PRECISION A( * ), SC( * ), SR( * ) .SH PURPOSE PDPOEQU computes row and column scalings intended to equilibrate a distributed symmetric positive definite matrix sub( A ) = A(IA:IA+N-1,JA:JA+N-1) and reduce its condition number (with respect to the two-norm). SR and SC contain the scale factors, S(i) = 1/sqrt(A(i,i)), chosen so that the scaled distributed matrix B with elements B(i,j) = S(i)*A(i,j)*S(j) has ones on the diagonal. This choice of SR and SC puts the condition number of B within a factor N of the smallest possible condition number over all possible diagonal scalings. .br The scaling factors are stored along process rows in SR and along process columns in SC. The duplication of information greatly simplifies the application of the factors.
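The equilibration arithmetic described above (S(i) = 1/sqrt(A(i,i)), B(i,j) = S(i)*A(i,j)*S(j), SCOND the ratio of smallest to largest scale factor, AMAX the largest matrix element in magnitude) can be sketched serially in Python. This mirrors only the scalar formulas, not the distributed SR/SC replication; poequ is an illustrative name:

```python
import math

def poequ(a):
    """Serial sketch of PDPOEQU's arithmetic on a dense symmetric
    positive definite matrix a: scale factors s[i] = 1/sqrt(a[i][i]),
    scond = min(s)/max(s), amax = largest |a[i][j]|."""
    n = len(a)
    if any(a[i][i] <= 0 for i in range(n)):
        raise ValueError("nonpositive diagonal entry")   # the INFO > 0 case
    s = [1.0 / math.sqrt(a[i][i]) for i in range(n)]
    scond = min(s) / max(s)
    amax = max(abs(v) for row in a for v in row)
    return s, scond, amax

# The scaled matrix B(i,j) = S(i)*A(i,j)*S(j) then has ones on its diagonal:
a = [[4.0, 2.0], [2.0, 9.0]]
s, scond, amax = poequ(a)
b = [[s[i] * a[i][j] * s[j] for j in range(2)] for i in range(2)]
```

Per the SCOND description above, when SCOND >= 0.1 and AMAX is neither too large nor too small, applying the scaling is not worthwhile.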
.br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. 
.br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 N (global input) INTEGER The number of rows and columns to be operated on i.e the order of the distributed submatrix sub( A ). N >= 0. .TP 8 A (local input) DOUBLE PRECISION pointer into the local memory to an array of local dimension ( LLD_A, LOCc(JA+N-1) ), the N-by-N symmetric positive definite distributed matrix sub( A ) whose scaling factors are to be computed. Only the diagonal elements of sub( A ) are referenced. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 SR (local output) DOUBLE PRECISION array, dimension LOCr(M_A) If INFO = 0, SR(IA:IA+N-1) contains the row scale factors for sub( A ). SR is aligned with the distributed matrix A, and replicated across every process column. SR is tied to the distributed matrix A. .TP 8 SC (local output) DOUBLE PRECISION array, dimension LOCc(N_A) If INFO = 0, SC(JA:JA+N-1) contains the column scale factors .br for A(IA:IA+M-1,JA:JA+N-1). SC is aligned with the distribu- ted matrix A, and replicated down every process row. SC is tied to the distributed matrix A. .TP 8 SCOND (global output) DOUBLE PRECISION If INFO = 0, SCOND contains the ratio of the smallest SR(i) (or SC(j)) to the largest SR(i) (or SC(j)), with IA <= i <= IA+N-1 and JA <= j <= JA+N-1. 
If SCOND >= 0.1 and AMAX is neither too large nor too small, it is not worth scaling by SR (or SC). .TP 8 AMAX (global output) DOUBLE PRECISION Absolute value of largest matrix element. If AMAX is very close to overflow or very close to underflow, the matrix should be scaled. .TP 8 INFO (global output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. > 0: If INFO = K, the K-th diagonal entry of sub( A ) is nonpositive. scalapack-doc-1.5/man/manl/pdporfs.l0100644000056400000620000002365306335610635017111 0ustar pfrauenfstaff.TH PDPORFS l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PDPORFS - improve the computed solution to a system of linear equations when the coefficient matrix is symmetric positive definite and provides error bounds and backward error estimates for the solutions .SH SYNOPSIS .TP 20 SUBROUTINE PDPORFS( UPLO, N, NRHS, A, IA, JA, DESCA, AF, IAF, JAF, DESCAF, B, IB, JB, DESCB, X, IX, JX, DESCX, FERR, BERR, WORK, LWORK, IWORK, LIWORK, INFO ) .TP 20 .ti +4 CHARACTER UPLO .TP 20 .ti +4 INTEGER IA, IAF, IB, INFO, IX, JA, JAF, JB, JX, LIWORK, LWORK, N, NRHS .TP 20 .ti +4 INTEGER DESCA( * ), DESCAF( * ), DESCB( * ), DESCX( * ), IWORK( * ) .TP 20 .ti +4 DOUBLE PRECISION A( * ), AF( * ), B( * ), BERR( * ), FERR( * ), WORK( * ), X( * ) .SH PURPOSE PDPORFS improves the computed solution to a system of linear equations when the coefficient matrix is symmetric positive definite and provides error bounds and backward error estimates for the solutions. Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. 
Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). 
An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br In the following comments, sub( A ), sub( X ) and sub( B ) denote respectively A(IA:IA+N-1,JA:JA+N-1), X(IX:IX+N-1,JX:JX+NRHS-1) and B(IB:IB+N-1,JB:JB+NRHS-1). .br .SH ARGUMENTS .TP 8 UPLO (global input) CHARACTER*1 Specifies whether the upper or lower triangular part of the symmetric matrix sub( A ) is stored. = 'U': Upper triangular .br = 'L': Lower triangular .TP 8 N (global input) INTEGER The order of the matrix sub( A ). N >= 0. .TP 8 NRHS (global input) INTEGER The number of right hand sides, i.e., the number of columns of the matrices sub( B ) and sub( X ). NRHS >= 0. .TP 8 A (local input) DOUBLE PRECISION pointer into the local memory to an array of local dimension (LLD_A,LOCc(JA+N-1) ). This array contains the local pieces of the N-by-N symmetric distributed matrix sub( A ) to be factored. If UPLO = 'U', the leading N-by-N upper triangular part of sub( A ) contains the upper triangular part of the matrix, and its strictly lower triangular part is not referenced. If UPLO = 'L', the leading N-by-N lower triangular part of sub( A ) contains the lower triangular part of the distribu- ted matrix, and its strictly upper triangular part is not referenced. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 AF (local input) DOUBLE PRECISION pointer into the local memory to an array of local dimension (LLD_AF,LOCc(JA+N-1)). On entry, this array contains the factors L or U from the Cholesky factorization sub( A ) = L*L**T or U**T*U, as computed by PDPOTRF. 
.TP 8 IAF (global input) INTEGER The row index in the global array AF indicating the first row of sub( AF ). .TP 8 JAF (global input) INTEGER The column index in the global array AF indicating the first column of sub( AF ). .TP 8 DESCAF (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix AF. .TP 8 B (local input) DOUBLE PRECISION pointer into the local memory to an array of local dimension (LLD_B, LOCc(JB+NRHS-1) ). On entry, this array contains the the local pieces of the right hand sides sub( B ). .TP 8 IB (global input) INTEGER The row index in the global array B indicating the first row of sub( B ). .TP 8 JB (global input) INTEGER The column index in the global array B indicating the first column of sub( B ). .TP 8 DESCB (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix B. .TP 8 X (local input) DOUBLE PRECISION pointer into the local memory to an array of local dimension (LLD_X, LOCc(JX+NRHS-1) ). On entry, this array contains the the local pieces of the solution vectors sub( X ). On exit, it contains the improved solution vectors. .TP 8 IX (global input) INTEGER The row index in the global array X indicating the first row of sub( X ). .TP 8 JX (global input) INTEGER The column index in the global array X indicating the first column of sub( X ). .TP 8 DESCX (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix X. .TP 8 FERR (local output) DOUBLE PRECISION array of local dimension LOCc(JB+NRHS-1). The estimated forward error bound for each solution vector of sub( X ). If XTRUE is the true solution corresponding to sub( X ), FERR is an estimated upper bound for the magnitude of the largest element in (sub( X ) - XTRUE) divided by the magnitude of the largest element in sub( X ). The estimate is as reliable as the estimate for RCOND, and is almost always a slight overestimate of the true error. 
This array is tied to the distributed matrix X. .TP 8 BERR (local output) DOUBLE PRECISION array of local dimension LOCc(JB+NRHS-1). The componentwise relative backward error of each solution vector (i.e., the smallest re- lative change in any entry of sub( A ) or sub( B ) that makes sub( X ) an exact solution). This array is tied to the distributed matrix X. .TP 8 WORK (local workspace/local output) DOUBLE PRECISION array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. LWORK is local input and must be at least LWORK >= 3*LOCr( N + MOD( IA-1, MB_A ) ) If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 IWORK (local workspace/local output) INTEGER array, dimension (LIWORK) On exit, IWORK(1) returns the minimal and optimal LIWORK. .TP 8 LIWORK (local or global input) INTEGER The dimension of the array IWORK. LIWORK is local input and must be at least LIWORK >= LOCr( N + MOD( IB-1, MB_B ) ). If LIWORK = -1, then LIWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 INFO (global output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. .SH PARAMETERS ITMAX is the maximum number of steps of iterative refinement. Notes ===== This routine temporarily returns when N <= 1. 
The distributed submatrices op( A ) and op( AF ) (respectively sub( X ) and sub( B ) ) should be distributed the same way on the same processes. These conditions ensure that sub( A ) and sub( AF ) (resp. sub( X ) and sub( B ) ) are "perfectly" aligned. Moreover, this routine requires the distributed submatrices sub( A ), sub( AF ), sub( X ), and sub( B ) to be aligned on a block boundary, i.e., if f(x,y) = MOD( x-1, y ): f( IA, DESCA( MB_ ) ) = f( JA, DESCA( NB_ ) ) = 0, f( IAF, DESCAF( MB_ ) ) = f( JAF, DESCAF( NB_ ) ) = 0, f( IB, DESCB( MB_ ) ) = f( JB, DESCB( NB_ ) ) = 0, and f( IX, DESCX( MB_ ) ) = f( JX, DESCX( NB_ ) ) = 0. scalapack-doc-1.5/man/manl/pdposv.l0100644000056400000620000001462606335610636016750 0ustar pfrauenfstaff.TH PDPOSV l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PDPOSV - compute the solution to a real system of linear equations sub( A ) * X = sub( B ), .SH SYNOPSIS .TP 19 SUBROUTINE PDPOSV( UPLO, N, NRHS, A, IA, JA, DESCA, B, IB, JB, DESCB, INFO ) .TP 19 .ti +4 CHARACTER UPLO .TP 19 .ti +4 INTEGER IA, IB, INFO, JA, JB, N, NRHS .TP 19 .ti +4 INTEGER DESCA( * ), DESCB( * ) .TP 19 .ti +4 DOUBLE PRECISION A( * ), B( * ) .SH PURPOSE PDPOSV computes the solution to a real system of linear equations where sub( A ) denotes A(IA:IA+N-1,JA:JA+N-1) and is an N-by-N symmetric distributed positive definite matrix and X and sub( B ) denoting B(IB:IB+N-1,JB:JB+NRHS-1) are N-by-NRHS distributed matrices. .br The Cholesky decomposition is used to factor sub( A ) as .br sub( A ) = U**T * U, if UPLO = 'U', or sub( A ) = L * L**T, if UPLO = 'L', .br where U is an upper triangular matrix and L is a lower triangular matrix. The factored form of sub( A ) is then used to solve the system of equations. .br Notes .br ===== .br Each global data object is described by an associated description vector. 
This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclically distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ ) The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distributed over. The context itself is global, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. 
.br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br This routine requires square block decomposition ( MB_A = NB_A ). .SH ARGUMENTS .TP 8 UPLO (global input) CHARACTER = 'U': Upper triangle of sub( A ) is stored; .br = 'L': Lower triangle of sub( A ) is stored. .TP 8 N (global input) INTEGER The number of rows and columns to be operated on, i.e. the order of the distributed submatrix sub( A ). N >= 0. .TP 8 NRHS (global input) INTEGER The number of right hand sides, i.e., the number of columns of the distributed submatrix sub( B ). NRHS >= 0. .TP 8 A (local input/local output) DOUBLE PRECISION pointer into the local memory to an array of dimension (LLD_A, LOCc(JA+N-1)). On entry, this array contains the local pieces of the N-by-N symmetric distributed matrix sub( A ) to be factored. If UPLO = 'U', the leading N-by-N upper triangular part of sub( A ) contains the upper triangular part of the matrix, and its strictly lower triangular part is not referenced. If UPLO = 'L', the leading N-by-N lower triangular part of sub( A ) contains the lower triangular part of the distributed matrix, and its strictly upper triangular part is not referenced. On exit, if INFO = 0, this array contains the local pieces of the factor U or L from the Cholesky factorization sub( A ) = U**T*U or L*L**T. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. 
.TP 8 B (local input/local output) DOUBLE PRECISION pointer into the local memory to an array of dimension (LLD_B,LOCc(JB+NRHS-1)). On entry, the local pieces of the right hand sides distributed matrix sub( B ). On exit, if INFO = 0, sub( B ) is overwritten with the solution distributed matrix X. .TP 8 IB (global input) INTEGER The row index in the global array B indicating the first row of sub( B ). .TP 8 JB (global input) INTEGER The column index in the global array B indicating the first column of sub( B ). .TP 8 DESCB (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix B. .TP 8 INFO (global output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. .br > 0: If INFO = K, the leading minor of order K, .br A(IA:IA+K-1,JA:JA+K-1) is not positive definite, and the factorization could not be completed, and the solution has not been computed. 
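As a point of reference for what PDPOSV computes on the distributed data, the following is a serial sketch in plain Python: a Cholesky factorization sub( A ) = L*L**T followed by forward and back substitution for one right-hand side. The function names are illustrative only, not part of ScaLAPACK; the real routine performs the same factorization and solve on 2D block-cyclically distributed pieces.

```python
import math

def cholesky(A):
    """Lower-triangular L with A = L * L**T (serial analogue of the
    distributed factorization PDPOTRF performs inside PDPOSV)."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for j in range(n):
        s = A[j][j] - sum(L[j][k] ** 2 for k in range(j))
        if s <= 0.0:
            # Mirrors INFO > 0: a leading minor is not positive definite.
            raise ValueError("leading minor of order %d not positive definite" % (j + 1))
        L[j][j] = math.sqrt(s)
        for i in range(j + 1, n):
            L[i][j] = (A[i][j] - sum(L[i][k] * L[j][k] for k in range(j))) / L[j][j]
    return L

def solve_spd(A, b):
    """Solve A x = b via the Cholesky factor, one right-hand side,
    as PDPOSV does for each column of sub( B )."""
    n = len(A)
    L = cholesky(A)
    # Forward substitution: L y = b
    y = [0.0] * n
    for i in range(n):
        y[i] = (b[i] - sum(L[i][k] * y[k] for k in range(i))) / L[i][i]
    # Back substitution: L**T x = y
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (y[i] - sum(L[k][i] * x[k] for k in range(i + 1, n))) / L[i][i]
    return x
```

Only the mathematics carries over; in the distributed routine every operand is addressed through its descriptor and local pieces, as described in the argument list above.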
scalapack-doc-1.5/man/manl/pdposvx.l0100644000056400000620000003331106335610636017130 0ustar pfrauenfstaff.TH PDPOSVX l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PDPOSVX - use the Cholesky factorization A = U**T*U or A = L*L**T to compute the solution to a real system of linear equations A(IA:IA+N-1,JA:JA+N-1) * X = B(IB:IB+N-1,JB:JB+NRHS-1), .SH SYNOPSIS .TP 20 SUBROUTINE PDPOSVX( FACT, UPLO, N, NRHS, A, IA, JA, DESCA, AF, IAF, JAF, DESCAF, EQUED, SR, SC, B, IB, JB, DESCB, X, IX, JX, DESCX, RCOND, FERR, BERR, WORK, LWORK, IWORK, LIWORK, INFO ) .TP 20 .ti +4 CHARACTER EQUED, FACT, UPLO .TP 20 .ti +4 INTEGER IA, IAF, IB, INFO, IX, JA, JAF, JB, JX, LIWORK, LWORK, N, NRHS .TP 20 .ti +4 DOUBLE PRECISION RCOND .TP 20 .ti +4 INTEGER DESCA( * ), DESCAF( * ), DESCB( * ), DESCX( * ), IWORK( * ) .TP 20 .ti +4 DOUBLE PRECISION A( * ), AF( * ), B( * ), BERR( * ), FERR( * ), SC( * ), SR( * ), WORK( * ), X( * ) .SH PURPOSE PDPOSVX uses the Cholesky factorization A = U**T*U or A = L*L**T to compute the solution to a real system of linear equations where A(IA:IA+N-1,JA:JA+N-1) is an N-by-N matrix and X and B(IB:IB+N-1,JB:JB+NRHS-1) are N-by-NRHS matrices. .br Error bounds on the solution and a condition estimate are also provided. In the following comments Y denotes Y(IY:IY+M-1,JY:JY+K-1) a M-by-K matrix where Y can be A, AF, B and X. .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. 
In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH DESCRIPTION The following steps are performed: .br 1. 
If FACT = 'E', real scaling factors are computed to equilibrate the system: .br diag(SR) * A * diag(SC) * inv(diag(SC)) * X = diag(SR) * B Whether or not the system will be equilibrated depends on the scaling of the matrix A, but if equilibration is used, A is overwritten by diag(SR)*A*diag(SC) and B by diag(SR)*B. 2. If FACT = 'N' or 'E', the Cholesky decomposition is used to factor the matrix A (after equilibration if FACT = 'E') as A = U**T* U, if UPLO = 'U', or .br A = L * L**T, if UPLO = 'L', .br where U is an upper triangular matrix and L is a lower triangular matrix. .br 3. The factored form of A is used to estimate the condition number of the matrix A. If the reciprocal of the condition number is less than machine precision, steps 4-6 are skipped. .br 4. The system of equations is solved for X using the factored form of A. .br 5. Iterative refinement is applied to improve the computed solution matrix and calculate error bounds and backward error estimates for it. .br 6. If equilibration was used, the matrix X is premultiplied by diag(SR) so that it solves the original system before .br equilibration. .br .SH ARGUMENTS .TP 8 FACT (global input) CHARACTER Specifies whether or not the factored form of the matrix A is supplied on entry, and if not, whether the matrix A should be equilibrated before it is factored. = 'F': On entry, AF contains the factored form of A. If EQUED = 'Y', the matrix A has been equilibrated with scaling factors given by S. A and AF will not be modified. = 'N': The matrix A will be copied to AF and factored. .br = 'E': The matrix A will be equilibrated if necessary, then copied to AF and factored. .TP 8 UPLO (global input) CHARACTER = 'U': Upper triangle of A is stored; .br = 'L': Lower triangle of A is stored. .TP 8 N (global input) INTEGER The number of rows and columns to be operated on, i.e. the order of the distributed submatrix A(IA:IA+N-1,JA:JA+N-1). N >= 0. 
.TP 8 NRHS (global input) INTEGER The number of right hand sides, i.e., the number of columns of the distributed submatrices B and X. NRHS >= 0. .TP 8 A (local input/local output) DOUBLE PRECISION pointer into the local memory to an array of local dimension ( LLD_A, LOCc(JA+N-1) ). On entry, the symmetric matrix A, except if FACT = 'F' and EQUED = 'Y', then A must contain the equilibrated matrix diag(SR)*A*diag(SC). If UPLO = 'U', the leading N-by-N upper triangular part of A contains the upper triangular part of the matrix A, and the strictly lower triangular part of A is not referenced. If UPLO = 'L', the leading N-by-N lower triangular part of A contains the lower triangular part of the matrix A, and the strictly upper triangular part of A is not referenced. A is not modified if FACT = 'F' or 'N', or if FACT = 'E' and EQUED = 'N' on exit. On exit, if FACT = 'E' and EQUED = 'Y', A is overwritten by diag(SR)*A*diag(SC). .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 AF (local input or local output) DOUBLE PRECISION pointer into the local memory to an array of local dimension ( LLD_AF, LOCc(JA+N-1)). If FACT = 'F', then AF is an input argument and on entry contains the triangular factor U or L from the Cholesky factorization A = U**T*U or A = L*L**T, in the same storage format as A. If EQUED .ne. 'N', then AF is the factored form of the equilibrated matrix diag(SR)*A*diag(SC). If FACT = 'N', then AF is an output argument and on exit returns the triangular factor U or L from the Cholesky factorization A = U**T*U or A = L*L**T of the original matrix A. 
If FACT = 'E', then AF is an output argument and on exit returns the triangular factor U or L from the Cholesky factorization A = U**T*U or A = L*L**T of the equilibrated matrix A (see the description of A for the form of the equilibrated matrix). .TP 8 IAF (global input) INTEGER The row index in the global array AF indicating the first row of sub( AF ). .TP 8 JAF (global input) INTEGER The column index in the global array AF indicating the first column of sub( AF ). .TP 8 DESCAF (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix AF. .TP 8 EQUED (global input/global output) CHARACTER Specifies the form of equilibration that was done. = 'N': No equilibration (always true if FACT = 'N'). .br = 'Y': Equilibration was done, i.e., A has been replaced by diag(SR) * A * diag(SC). EQUED is an input variable if FACT = 'F'; otherwise, it is an output variable. .TP 8 SR (local input/local output) DOUBLE PRECISION array, dimension (LLD_A) The scale factors for A distributed across process rows; not accessed if EQUED = 'N'. SR is an input variable if FACT = 'F'; otherwise, SR is an output variable. If FACT = 'F' and EQUED = 'Y', each element of SR must be positive. .TP 8 SC (local input/local output) DOUBLE PRECISION array, dimension (LOC(N_A)) The scale factors for A distributed across process columns; not accessed if EQUED = 'N'. SC is an input variable if FACT = 'F'; otherwise, SC is an output variable. If FACT = 'F' and EQUED = 'Y', each element of SC must be positive. .TP 8 B (local input/local output) DOUBLE PRECISION pointer into the local memory to an array of local dimension ( LLD_B, LOCc(JB+NRHS-1) ). On entry, the N-by-NRHS right-hand side matrix B. On exit, if EQUED = 'N', B is not modified; if TRANS = 'N' and EQUED = 'R' or 'B', B is overwritten by diag(R)*B; if TRANS = 'T' or 'C' and EQUED = 'C' or 'B', B is overwritten by diag(C)*B. 
.TP 8 IB (global input) INTEGER The row index in the global array B indicating the first row of sub( B ). .TP 8 JB (global input) INTEGER The column index in the global array B indicating the first column of sub( B ). .TP 8 DESCB (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix B. .TP 8 X (local input/local output) DOUBLE PRECISION pointer into the local memory to an array of local dimension ( LLD_X, LOCc(JX+NRHS-1) ). If INFO = 0, the N-by-NRHS solution matrix X to the original system of equations. Note that A and B are modified on exit if EQUED .ne. 'N', and the solution to the equilibrated system is inv(diag(SC))*X if TRANS = 'N' and EQUED = 'C' or 'B', or inv(diag(SR))*X if TRANS = 'T' or 'C' and EQUED = 'R' or 'B'. .TP 8 IX (global input) INTEGER The row index in the global array X indicating the first row of sub( X ). .TP 8 JX (global input) INTEGER The column index in the global array X indicating the first column of sub( X ). .TP 8 DESCX (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix X. .TP 8 RCOND (global output) DOUBLE PRECISION The estimate of the reciprocal condition number of the matrix A after equilibration (if done). If RCOND is less than the machine precision (in particular, if RCOND = 0), the matrix is singular to working precision. This condition is indicated by a return code of INFO > 0, and the solution and error bounds are not computed. .TP 8 FERR (local output) DOUBLE PRECISION array, dimension (LOC(N_B)) The estimated forward error bounds for each solution vector X(j) (the j-th column of the solution matrix X). If XTRUE is the true solution, FERR(j) bounds the magnitude of the largest entry in (X(j) - XTRUE) divided by the magnitude of the largest entry in X(j). 
The quality of the error bound depends on the quality of the estimate of norm(inv(A)) computed in the code; if the estimate of norm(inv(A)) is accurate, the error bound is guaranteed. .TP 8 BERR (local output) DOUBLE PRECISION array, dimension (LOCc(N_B)) The componentwise relative backward error of each solution vector X(j) (i.e., the smallest relative change in any entry of A or B that makes X(j) an exact solution). .TP 8 WORK (local workspace/local output) DOUBLE PRECISION array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. LWORK is local input and must be at least LWORK = MAX( PDPOCON( LWORK ), PDPORFS( LWORK ) ) + LOCr( N_A ). If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 IWORK (local workspace/local output) INTEGER array, dimension (LIWORK) On exit, IWORK(1) returns the minimal and optimal LIWORK. .TP 8 LIWORK (local or global input) INTEGER The dimension of the array IWORK. LIWORK is local input and must be at least LIWORK = LOCr( N_A ). If LIWORK = -1, then LIWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 INFO (global output) INTEGER = 0: successful exit .br < 0: if INFO = -i, the i-th argument had an illegal value .br > 0: if INFO = i, and i is .br <= N: if INFO = i, the leading minor of order i of A is not positive definite, so the factorization could not be completed, and the solution and error bounds could not be computed. 
= N+1: RCOND is less than machine precision. The factorization has been completed, but the matrix is singular to working precision, and the solution and error bounds have not been computed. scalapack-doc-1.5/man/manl/pdpotf2.l0100644000056400000620000001261606335610636017010 0ustar pfrauenfstaff.TH PDPOTF2 l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PDPOTF2 - compute the Cholesky factorization of a real symmetric positive definite distributed matrix sub( A )=A(IA:IA+N-1,JA:JA+N-1) .SH SYNOPSIS .TP 20 SUBROUTINE PDPOTF2( UPLO, N, A, IA, JA, DESCA, INFO ) .TP 20 .ti +4 CHARACTER UPLO .TP 20 .ti +4 INTEGER IA, INFO, JA, N .TP 20 .ti +4 INTEGER DESCA( * ) .TP 20 .ti +4 DOUBLE PRECISION A( * ) .SH PURPOSE PDPOTF2 computes the Cholesky factorization of a real symmetric positive definite distributed matrix sub( A )=A(IA:IA+N-1,JA:JA+N-1). The factorization has the form .br sub( A ) = U' * U , if UPLO = 'U', or .br sub( A ) = L * L', if UPLO = 'L', .br where U is an upper triangular matrix and L is lower triangular. Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. 
.br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br This routine requires N <= NB_A-MOD(JA-1, NB_A) and square block decomposition ( MB_A = NB_A ). .br .SH ARGUMENTS .TP 8 UPLO (global input) CHARACTER = 'U': Upper triangle of sub( A ) is stored; .br = 'L': Lower triangle of sub( A ) is stored. .TP 8 N (global input) INTEGER The number of rows and columns to be operated on, i.e. the order of the distributed submatrix sub( A ). N >= 0. .TP 8 A (local input/local output) DOUBLE PRECISION pointer into the local memory to an array of dimension (LLD_A, LOCc(JA+N-1)). 
On entry, this array contains the local pieces of the N-by-N symmetric distributed matrix sub( A ) to be factored. If UPLO = 'U', the leading N-by-N upper triangular part of sub( A ) contains the upper triangular part of the matrix, and its strictly lower triangular part is not referenced. If UPLO = 'L', the leading N-by-N lower triangular part of sub( A ) contains the lower triangular part of the distributed matrix, and its strictly upper triangular part is not referenced. On exit, if UPLO = 'U', the upper triangular part of the distributed matrix contains the Cholesky factor U, if UPLO = 'L', the lower triangular part of the distributed matrix contains the Cholesky factor L. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 INFO (local output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. .br > 0: If INFO = K, the leading minor of order K, .br A(IA:IA+K-1,JA:JA+K-1) is not positive definite, and the factorization could not be completed. 
scalapack-doc-1.5/man/manl/pdpotrf.l0100644000056400000620000001262006335610636017103 0ustar pfrauenfstaff.TH PDPOTRF l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PDPOTRF - compute the Cholesky factorization of an N-by-N real symmetric positive definite distributed matrix sub( A ) denoting A(IA:IA+N-1, JA:JA+N-1) .SH SYNOPSIS .TP 20 SUBROUTINE PDPOTRF( UPLO, N, A, IA, JA, DESCA, INFO ) .TP 20 .ti +4 CHARACTER UPLO .TP 20 .ti +4 INTEGER IA, INFO, JA, N .TP 20 .ti +4 INTEGER DESCA( * ) .TP 20 .ti +4 DOUBLE PRECISION A( * ) .SH PURPOSE PDPOTRF computes the Cholesky factorization of an N-by-N real symmetric positive definite distributed matrix sub( A ) denoting A(IA:IA+N-1, JA:JA+N-1). The factorization has the form .br sub( A ) = U' * U , if UPLO = 'U', or .br sub( A ) = L * L', if UPLO = 'L', .br where U is an upper triangular matrix and L is lower triangular. Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. 
.br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br This routine requires square block decomposition ( MB_A = NB_A ). .SH ARGUMENTS .TP 8 UPLO (global input) CHARACTER = 'U': Upper triangle of sub( A ) is stored; .br = 'L': Lower triangle of sub( A ) is stored. .TP 8 N (global input) INTEGER The number of rows and columns to be operated on, i.e. the order of the distributed submatrix sub( A ). N >= 0. .TP 8 A (local input/local output) DOUBLE PRECISION pointer into the local memory to an array of dimension (LLD_A, LOCc(JA+N-1)). On entry, this array contains the local pieces of the N-by-N symmetric distributed matrix sub( A ) to be factored. If UPLO = 'U', the leading N-by-N upper triangular part of sub( A ) contains the upper triangular part of the matrix, and its strictly lower triangular part is not referenced. 
If UPLO = 'L', the leading N-by-N lower triangular part of sub( A ) contains the lower triangular part of the distribu- ted matrix, and its strictly upper triangular part is not referenced. On exit, if UPLO = 'U', the upper triangular part of the distributed matrix contains the Cholesky factor U, if UPLO = 'L', the lower triangular part of the distribu- ted matrix contains the Cholesky factor L. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 INFO (global output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. > 0: If INFO = K, the leading minor of order K, .br A(IA:IA+K-1,JA:JA+K-1) is not positive definite, and the factorization could not be completed. scalapack-doc-1.5/man/manl/pdpotri.l0100644000056400000620000001147106335610636017111 0ustar pfrauenfstaff.TH PDPOTRI l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PDPOTRI - compute the inverse of a real symmetric positive definite distributed matrix sub( A ) = A(IA:IA+N-1,JA:JA+N-1) using the Cholesky factorization sub( A ) = U**T*U or L*L**T computed by PDPOTRF .SH SYNOPSIS .TP 20 SUBROUTINE PDPOTRI( UPLO, N, A, IA, JA, DESCA, INFO ) .TP 20 .ti +4 CHARACTER UPLO .TP 20 .ti +4 INTEGER IA, INFO, JA, N .TP 20 .ti +4 INTEGER DESCA( * ) .TP 20 .ti +4 DOUBLE PRECISION A( * ) .SH PURPOSE PDPOTRI computes the inverse of a real symmetric positive definite distributed matrix sub( A ) = A(IA:IA+N-1,JA:JA+N-1) using the Cholesky factorization sub( A ) = U**T*U or L*L**T computed by PDPOTRF. 
Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. 
.br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 UPLO (global input) CHARACTER*1 = 'U': Upper triangle of sub( A ) is stored; .br = 'L': Lower triangle of sub( A ) is stored. .TP 8 N (global input) INTEGER The number of rows and columns to be operated on, i.e. the order of the distributed submatrix sub( A ). N >= 0. .TP 8 A (local input/local output) DOUBLE PRECISION pointer into the local memory to an array of dimension (LLD_A, LOCc(JA+N-1)). On entry, the local pieces of the triangular factor U or L from the Cholesky factorization of the distributed matrix sub( A ) = U**T*U or L*L**T, as computed by PDPOTRF. On exit, the local pieces of the upper or lower triangle of the (symmetric) inverse of sub( A ), overwriting the input factor U or L. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 INFO (global output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. > 0: If INFO = i, the (i,i) element of the factor U or L is zero, and the inverse could not be computed. 
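The LOCr/LOCc quantities described in the Notes above come from block-cyclic counting. A small Python sketch of that counting (an unofficial reimplementation for illustration, not a call into the ScaLAPACK tool function NUMROC):

```python
def numroc(n, nb, iproc, isrcproc, nprocs):
    """Number of rows/columns of an n-long dimension, in blocks of nb, owned by
    process coordinate iproc when the first block lives on process isrcproc.
    Mirrors the counting performed by the ScaLAPACK tool function NUMROC."""
    mydist = (nprocs + iproc - isrcproc) % nprocs   # distance from the source process
    nblocks = n // nb                               # number of complete blocks
    num = (nblocks // nprocs) * nb                  # whole rounds of blocks everyone gets
    extrablks = nblocks % nprocs                    # leftover complete blocks
    if mydist < extrablks:
        num += nb                                   # this process gets one extra full block
    elif mydist == extrablks:
        num += n % nb                               # this process gets the final partial block
    return num
```

Summing over all process coordinates recovers n, and each local count respects the upper bound LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A quoted above.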
scalapack-doc-1.5/man/manl/pdpotrs.l0100644000056400000620000001272606335610636017127 0ustar pfrauenfstaff.TH PDPOTRS l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PDPOTRS - solve a system of linear equations sub( A ) * X = sub( B ) A(IA:IA+N-1,JA:JA+N-1)*X = B(IB:IB+N-1,JB:JB+NRHS-1) .SH SYNOPSIS .TP 20 SUBROUTINE PDPOTRS( UPLO, N, NRHS, A, IA, JA, DESCA, B, IB, JB, DESCB, INFO ) .TP 20 .ti +4 CHARACTER UPLO .TP 20 .ti +4 INTEGER IA, IB, INFO, JA, JB, N, NRHS .TP 20 .ti +4 INTEGER DESCA( * ), DESCB( * ) .TP 20 .ti +4 DOUBLE PRECISION A( * ), B( * ) .SH PURPOSE PDPOTRS solves a system of linear equations where sub( A ) denotes A(IA:IA+N-1,JA:JA+N-1) and is a N-by-N symmetric positive definite distributed matrix using the Cholesky factorization sub( A ) = U**T*U or L*L**T computed by PDPOTRF. sub( B ) denotes the distributed matrix B(IB:IB+N-1,JB:JB+NRHS-1). Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. 
.br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br This routine requires square block decomposition ( MB_A = NB_A ). .SH ARGUMENTS .TP 8 UPLO (global input) CHARACTER = 'U': Upper triangle of sub( A ) is stored; .br = 'L': Lower triangle of sub( A ) is stored. .TP 8 N (global input) INTEGER The number of rows and columns to be operated on, i.e. the order of the distributed submatrix sub( A ). N >= 0. .TP 8 NRHS (global input) INTEGER The number of right hand sides, i.e., the number of columns of the distributed submatrix sub( B ). NRHS >= 0. .TP 8 A (local input) DOUBLE PRECISION pointer into local memory to an array of dimension (LLD_A, LOCc(JA+N-1)). On entry, this array contains the factors L or U from the Cholesky facto- rization sub( A ) = L*L**T or U**T*U, as computed by PDPOTRF. 
.TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 B (local input/local output) DOUBLE PRECISION pointer into the local memory to an array of local dimension (LLD_B,LOCc(JB+NRHS-1)). On entry, this array contains the local pieces of the right hand sides sub( B ). On exit, this array contains the local pieces of the solution distributed matrix X. .TP 8 IB (global input) INTEGER The row index in the global array B indicating the first row of sub( B ). .TP 8 JB (global input) INTEGER The column index in the global array B indicating the first column of sub( B ). .TP 8 DESCB (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix B. .TP 8 INFO (global output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. scalapack-doc-1.5/man/manl/pdptsv.l0100644000056400000620000000134406335610636016746 0ustar pfrauenfstaff.TH PDPTSV l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PDPTSV - solve a system of linear equations A(1:N, JA:JA+N-1) * X = B(IB:IB+N-1, 1:NRHS) .SH SYNOPSIS .TP 19 SUBROUTINE PDPTSV( N, NRHS, D, E, JA, DESCA, B, IB, DESCB, WORK, LWORK, INFO ) .TP 19 .ti +4 INTEGER IB, INFO, JA, LWORK, N, NRHS .TP 19 .ti +4 INTEGER DESCA( * ), DESCB( * ) .TP 19 .ti +4 DOUBLE PRECISION B( * ), D( * ), E( * ), WORK( * ) .SH PURPOSE PDPTSV solves a system of linear equations where A(1:N, JA:JA+N-1) is an N-by-N real .br tridiagonal symmetric positive definite distributed .br matrix. 
.br Cholesky factorization is used to factor a reordering of .br the matrix into L L'. .br See PDPTTRF and PDPTTRS for details. .br scalapack-doc-1.5/man/manl/pdpttrf.l0100644000056400000620000000223206335610636017106 0ustar pfrauenfstaff.TH PDPTTRF l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PDPTTRF - compute a Cholesky factorization of an N-by-N real tridiagonal symmetric positive definite distributed matrix A(1:N, JA:JA+N-1) .SH SYNOPSIS .TP 20 SUBROUTINE PDPTTRF( N, D, E, JA, DESCA, AF, LAF, WORK, LWORK, INFO ) .TP 20 .ti +4 INTEGER INFO, JA, LAF, LWORK, N .TP 20 .ti +4 INTEGER DESCA( * ) .TP 20 .ti +4 DOUBLE PRECISION AF( * ), D( * ), E( * ), WORK( * ) .SH PURPOSE PDPTTRF computes a Cholesky factorization of an N-by-N real tridiagonal symmetric positive definite distributed matrix A(1:N, JA:JA+N-1). Reordering is used to increase parallelism in the factorization. This reordering results in factors that are DIFFERENT from those produced by equivalent sequential codes. These factors cannot be used directly by users; however, they can be used in .br subsequent calls to PDPTTRS to solve linear systems. .br The factorization has the form .br P A(1:N, JA:JA+N-1) P^T = U' D U or .br P A(1:N, JA:JA+N-1) P^T = L D L', .br where U is a tridiagonal upper triangular matrix and L is tridiagonal lower triangular, and P is a permutation matrix. 
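Ignoring the permutation P that the parallel algorithm introduces, the underlying L D L' factorization of a symmetric positive definite tridiagonal matrix is a short recurrence. A serial Python sketch under that simplification (not the reordered factors PDPTTRF actually produces):

```python
def pttrf_serial(d, e):
    """Factor a symmetric positive definite tridiagonal matrix T = L D L^T.

    d: diagonal (length n); e: off-diagonal (length n-1).
    Returns (dd, l): dd is the diagonal of D, l the subdiagonal of the unit
    lower bidiagonal L.  Serial analogue only: the real routine PDPTTRF
    factors a reordering P T P^T to expose parallelism.
    """
    n = len(d)
    dd = [0.0] * n
    l = [0.0] * (n - 1)
    dd[0] = d[0]
    for i in range(n - 1):
        if dd[i] <= 0.0:
            raise ValueError("matrix is not positive definite")
        l[i] = e[i] / dd[i]                  # multiplier stored in L
        dd[i + 1] = d[i + 1] - e[i] * l[i]   # Schur complement update of the pivot
    return dd, l
```

Because the parallel factors differ from these serial ones, they are only usable through PDPTTRS, as the text above warns.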
.br scalapack-doc-1.5/man/manl/pdpttrs.l0100644000056400000620000000142606335610636017127 0ustar pfrauenfstaff.TH PDPTTRS l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PDPTTRS - solve a system of linear equations A(1:N, JA:JA+N-1) * X = B(IB:IB+N-1, 1:NRHS) .SH SYNOPSIS .TP 20 SUBROUTINE PDPTTRS( N, NRHS, D, E, JA, DESCA, B, IB, DESCB, AF, LAF, WORK, LWORK, INFO ) .TP 20 .ti +4 INTEGER IB, INFO, JA, LAF, LWORK, N, NRHS .TP 20 .ti +4 INTEGER DESCA( * ), DESCB( * ) .TP 20 .ti +4 DOUBLE PRECISION AF( * ), B( * ), D( * ), E( * ), WORK( * ) .SH PURPOSE PDPTTRS solves a system of linear equations where A(1:N, JA:JA+N-1) is the matrix used to produce the factors stored in A(1:N,JA:JA+N-1) and AF by PDPTTRF. .br A(1:N, JA:JA+N-1) is an N-by-N real .br tridiagonal symmetric positive definite distributed .br matrix. .br Routine PDPTTRF MUST be called first. .br scalapack-doc-1.5/man/manl/pdpttrsv.l0100644000056400000620000000220106335610636017305 0ustar pfrauenfstaff.TH PDPTTRSV l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PDPTTRSV - solve a tridiagonal triangular system of linear equations A(1:N, JA:JA+N-1) * X = B(IB:IB+N-1, 1:NRHS) .SH SYNOPSIS .TP 21 SUBROUTINE PDPTTRSV( UPLO, N, NRHS, D, E, JA, DESCA, B, IB, DESCB, AF, LAF, WORK, LWORK, INFO ) .TP 21 .ti +4 CHARACTER UPLO .TP 21 .ti +4 INTEGER IB, INFO, JA, LAF, LWORK, N, NRHS .TP 21 .ti +4 INTEGER DESCA( * ), DESCB( * ) .TP 21 .ti +4 DOUBLE PRECISION AF( * ), B( * ), D( * ), E( * ), WORK( * ) .SH PURPOSE PDPTTRSV solves a tridiagonal triangular system of linear equations or .br A(1:N, JA:JA+N-1)^T * X = B(IB:IB+N-1, 1:NRHS) where A(1:N, JA:JA+N-1) is a tridiagonal .br triangular matrix factor produced by the .br Cholesky factorization code PDPTTRF .br and is stored in A(1:N,JA:JA+N-1) and AF. 
.br The matrix stored in A(1:N, JA:JA+N-1) is either .br upper or lower triangular according to UPLO, .br and the choice of solving A(1:N, JA:JA+N-1) or A(1:N, JA:JA+N-1)^T is dictated by the user via the parameter UPLO. .br Routine PDPTTRF MUST be called first. .br scalapack-doc-1.5/man/manl/pdrscl.l0100644000056400000620000001115206335610636016713 0ustar pfrauenfstaff.TH PDRSCL l "12 May 1997" "LAPACK version 1.5" "LAPACK auxiliary routine (version 1.5)" .SH NAME PDRSCL - multiply an N-element real distributed vector sub( X ) by the real scalar 1/a .SH SYNOPSIS .TP 19 SUBROUTINE PDRSCL( N, SA, SX, IX, JX, DESCX, INCX ) .TP 19 .ti +4 INTEGER IX, INCX, JX, N .TP 19 .ti +4 DOUBLE PRECISION SA .TP 19 .ti +4 INTEGER DESCX( * ) .TP 19 .ti +4 DOUBLE PRECISION SX( * ) .SH PURPOSE PDRSCL multiplies an N-element real distributed vector sub( X ) by the real scalar 1/a. This is done without overflow or underflow as long as the final result sub( X )/a does not overflow or underflow, where sub( X ) denotes X(IX:IX+N-1,JX:JX), if INCX = 1, .br X(IX:IX,JX:JX+N-1), if INCX = M_X. Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector descA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DT_A (global) descA[ DT_ ] The descriptor type. In this case, DT_A = 1. .br CTXT_A (global) descA[ CTXT_ ] The BLACS context handle, indicating the BLACS process grid A is distributed over. The context itself is global, but the handle (the integer value) may vary. .br M_A (global) descA[ M_ ] The number of rows in the global array A. 
.br N_A (global) descA[ N_ ] The number of columns in the global array A. .br MB_A (global) descA[ MB_ ] The blocking factor used to distribu- te the rows of the array. .br NB_A (global) descA[ NB_ ] The blocking factor used to distribu- te the columns of the array. RSRC_A (global) descA[ RSRC_ ] The process row over which the first row of the array A is distributed. CSRC_A (global) descA[ CSRC_ ] The process column over which the first column of the array A is distributed. .br LLD_A (local) descA[ LLD_ ] The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br Because vectors may be seen as particular matrices, a distributed vector is considered to be a distributed matrix. .br .SH ARGUMENTS .TP 8 N (global input) pointer to INTEGER The number of components of the distributed vector sub( X ). N >= 0. .TP 8 SA (global input) DOUBLE PRECISION The scalar a which is used to divide each component of sub( X ). SA must be >= 0, or the subroutine will divide by zero. 
.TP 8 SX (local input/local output) DOUBLE PRECISION array containing the local pieces of a distributed matrix of dimension of at least ( (JX-1)*M_X + IX + ( N - 1 )*abs( INCX ) ) This array contains the entries of the distributed vector sub( X ). .TP 8 IX (global input) pointer to INTEGER The global row index of the submatrix of the distributed matrix X to operate on. .TP 8 JX (global input) pointer to INTEGER The global column index of the submatrix of the distributed matrix X to operate on. .TP 8 DESCX (global and local input) INTEGER array of dimension 8. The array descriptor of the distributed matrix X. .TP 8 INCX (global input) pointer to INTEGER The global increment for the elements of X. Only two values of INCX are supported in this version, namely 1 and M_X. scalapack-doc-1.5/man/manl/pdstebz.l0100644000056400000620000002003306335610636017075 0ustar pfrauenfstaff.TH PDSTEBZ l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PDSTEBZ - compute the eigenvalues of a symmetric tridiagonal matrix in parallel .SH SYNOPSIS .TP 20 SUBROUTINE PDSTEBZ( ICTXT, RANGE, ORDER, N, VL, VU, IL, IU, ABSTOL, D, E, M, NSPLIT, W, IBLOCK, ISPLIT, WORK, LWORK, IWORK, LIWORK, INFO ) .TP 20 .ti +4 CHARACTER ORDER, RANGE .TP 20 .ti +4 INTEGER ICTXT, IL, INFO, IU, LIWORK, LWORK, M, N, NSPLIT .TP 20 .ti +4 DOUBLE PRECISION ABSTOL, VL, VU .TP 20 .ti +4 INTEGER IBLOCK( * ), ISPLIT( * ), IWORK( * ) .TP 20 .ti +4 DOUBLE PRECISION D( * ), E( * ), W( * ), WORK( * ) .SH PURPOSE PDSTEBZ computes the eigenvalues of a symmetric tridiagonal matrix in parallel. The user may ask for all eigenvalues, all eigenvalues in the interval [VL, VU], or the eigenvalues indexed IL through IU. A static partitioning of work is done at the beginning of PDSTEBZ which results in all processes finding an (almost) equal number of eigenvalues. .br NOTE : It is assumed that the user is on an IEEE machine. 
If the user is not on an IEEE machine, set the compile-time flag NO_IEEE to 1 (in SLmake.inc). The features of IEEE arithmetic that are needed for the "fast" Sturm Count are : (a) infinity arithmetic (b) the sign bit of a single precision floating point number is assumed to be in the 32nd bit position (c) the sign of negative zero. .br See W. Kahan "Accurate Eigenvalues of a Symmetric Tridiagonal Matrix", Report CS41, Computer Science Dept., Stanford .br University, July 21, 1966. .br .SH ARGUMENTS .TP 8 ICTXT (global input) INTEGER The BLACS context handle. .TP 8 RANGE (global input) CHARACTER Specifies which eigenvalues are to be found. = 'A': ("All") all eigenvalues will be found. .br = 'V': ("Value") all eigenvalues in the interval [VL, VU] will be found. = 'I': ("Index") the IL-th through IU-th eigenvalues (of the entire matrix) will be found. .TP 8 ORDER (global input) CHARACTER Specifies the order in which the eigenvalues and their block numbers are stored in W and IBLOCK. = 'B': ("By Block") the eigenvalues will be grouped by split-off block (see IBLOCK, ISPLIT) and ordered from smallest to largest within the block. = 'E': ("Entire matrix") the eigenvalues for the entire matrix will be ordered from smallest to largest. .TP 8 N (global input) INTEGER The order of the tridiagonal matrix T. N >= 0. .TP 8 VL (global input) DOUBLE PRECISION If RANGE='V', the lower bound of the interval to be searched for eigenvalues. Eigenvalues less than VL will not be returned. Not referenced if RANGE='A' or 'I'. .TP 8 VU (global input) DOUBLE PRECISION If RANGE='V', the upper bound of the interval to be searched for eigenvalues. Eigenvalues greater than VU will not be returned. VU must be greater than VL. Not referenced if RANGE='A' or 'I'. .TP 8 IL (global input) INTEGER If RANGE='I', the index (from smallest to largest) of the smallest eigenvalue to be returned. IL must be at least 1. Not referenced if RANGE='A' or 'V'. 
.TP 8 IU (global input) INTEGER If RANGE='I', the index (from smallest to largest) of the largest eigenvalue to be returned. IU must be at least IL and no greater than N. Not referenced if RANGE='A' or 'V'. .TP 8 ABSTOL (global input) DOUBLE PRECISION The absolute tolerance for the eigenvalues. An eigenvalue (or cluster) is considered to be located if it has been determined to lie in an interval whose width is ABSTOL or less. If ABSTOL is less than or equal to zero, then ULP*|T| will be used, where |T| means the 1-norm of T. Eigenvalues will be computed most accurately when ABSTOL is set to the underflow threshold DLAMCH('U'), not zero. Note : If eigenvectors are desired later by inverse iteration ( PDSTEIN ), ABSTOL should be set to 2*PDLAMCH('S'). .TP 8 D (global input) DOUBLE PRECISION array, dimension (N) The n diagonal elements of the tridiagonal matrix T. To avoid overflow, the matrix must be scaled so that its largest entry is no greater than overflow**(1/2) * underflow**(1/4) in absolute value, and for greatest accuracy, it should not be much smaller than that. .TP 8 E (global input) DOUBLE PRECISION array, dimension (N-1) The (n-1) off-diagonal elements of the tridiagonal matrix T. To avoid overflow, the matrix must be scaled so that its largest entry is no greater than overflow**(1/2) * underflow**(1/4) in absolute value, and for greatest accuracy, it should not be much smaller than that. .TP 8 M (global output) INTEGER The actual number of eigenvalues found. 0 <= M <= N. (See also the description of INFO=2) .TP 8 NSPLIT (global output) INTEGER The number of diagonal blocks in the matrix T. 1 <= NSPLIT <= N. .TP 8 W (global output) DOUBLE PRECISION array, dimension (N) On exit, the first M elements of W contain the eigenvalues on all processes. .TP 8 IBLOCK (global output) INTEGER array, dimension (N) At each row/column j where E(j) is zero or small, the matrix T is considered to split into a block diagonal matrix. 
On exit IBLOCK(i) specifies which block (from 1 to the number of blocks) the eigenvalue W(i) belongs to. NOTE: in the (theoretically impossible) event that bisection does not converge for some or all eigenvalues, INFO is set to 1 and the ones for which it did not are identified by a negative block number. .TP 8 ISPLIT (global output) INTEGER array, dimension (N) The splitting points, at which T breaks up into submatrices. The first submatrix consists of rows/columns 1 to ISPLIT(1), the second of rows/columns ISPLIT(1)+1 through ISPLIT(2), etc., and the NSPLIT-th consists of rows/columns ISPLIT(NSPLIT-1)+1 through ISPLIT(NSPLIT)=N. (Only the first NSPLIT elements will actually be used, but since the user cannot know a priori what value NSPLIT will have, N words must be reserved for ISPLIT.) .TP 8 WORK (local workspace) DOUBLE PRECISION array, dimension ( MAX( 5*N, 7 ) ) .TP 8 LWORK (local input) INTEGER size of array WORK must be >= MAX( 5*N, 7 ) If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 IWORK (local workspace) INTEGER array, dimension ( MAX( 4*N, 14 ) ) .TP 8 LIWORK (local input) INTEGER size of array IWORK must be >= MAX( 4*N, 14, NPROCS ) If LIWORK = -1, then LIWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. 
.TP 8 INFO (global output) INTEGER = 0 : successful exit .br < 0 : if INFO = -i, the i-th argument had an illegal value .br > 0 : some or all of the eigenvalues failed to converge or .br were not computed: .br = 1 : Bisection failed to converge for some eigenvalues; these eigenvalues are flagged by a negative block number. The effect is that the eigenvalues may not be as accurate as the absolute and relative tolerances. This is generally caused by arithmetic which is less accurate than PDLAMCH says. = 2 : There is a mismatch between the number of eigenvalues output and the number desired. = 3 : RANGE='i', and the Gershgorin interval initially used was incorrect. No eigenvalues were computed. Probable cause: your machine has sloppy floating point arithmetic. Cure: Increase the PARAMETER "FUDGE", recompile, and try again. .SH PARAMETERS .TP 8 RELFAC DOUBLE PRECISION, default = 2.0 The relative tolerance. An interval [a,b] lies within "relative tolerance" if b-a < RELFAC*ulp*max(|a|,|b|), where "ulp" is the machine precision (distance from 1 to the next larger floating point number.) .TP 8 FUDGE DOUBLE PRECISION, default = 2.0 A "fudge factor" to widen the Gershgorin intervals. Ideally, a value of 1 should work, but on machines with sloppy arithmetic, this needs to be larger. The default for publicly released versions should be large enough to handle the worst machine around. Note that this has no effect on the accuracy of the solution. 
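At the heart of PDSTEBZ is bisection driven by a Sturm count: the number of eigenvalues of T below a shift x equals the number of negative pivots in the LDL^T factorization of T - xI (the "fast" version of this count is what relies on the IEEE infinity arithmetic noted above). A serial Python sketch of the count and of bisecting for the k-th eigenvalue; the distributed routine additionally partitions the work statically and applies the RELFAC/FUDGE safeguards:

```python
def sturm_count(d, e, x):
    """Number of eigenvalues of the symmetric tridiagonal matrix T
    (diagonal d, off-diagonal e) strictly less than x, counted as
    negative pivots of the LDL^T factorization of T - x*I."""
    tiny = 1e-300
    count = 0
    piv = d[0] - x
    if piv == 0.0:
        piv = -tiny              # perturb an exact zero pivot, as bisection codes do
    if piv < 0.0:
        count += 1
    for i in range(1, len(d)):
        piv = (d[i] - x) - e[i - 1] ** 2 / piv
        if piv == 0.0:
            piv = -tiny
        if piv < 0.0:
            count += 1
    return count

def kth_eigenvalue(d, e, k, lo, hi, tol=1e-12):
    """Bisect a bracketing interval [lo, hi] (e.g. from Gershgorin bounds)
    until the k-th smallest eigenvalue is located to within tol."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if sturm_count(d, e, mid) >= k:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```

For the 3-by-3 matrix with diagonal 2 and off-diagonal -1, the eigenvalues are 2-sqrt(2), 2, and 2+sqrt(2), so the count at x = 3 is two and bisection for k = 2 converges to 2.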
scalapack-doc-1.5/man/manl/pdstein.l0100644000056400000620000002576206335610637017107 0ustar pfrauenfstaff.TH PDSTEIN l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PDSTEIN - compute the eigenvectors of a symmetric tridiagonal matrix in parallel, using inverse iteration .SH SYNOPSIS .TP 20 SUBROUTINE PDSTEIN( N, D, E, M, W, IBLOCK, ISPLIT, ORFAC, Z, IZ, JZ, DESCZ, WORK, LWORK, IWORK, LIWORK, IFAIL, ICLUSTR, GAP, INFO ) .TP 20 .ti +4 INTEGER INFO, IZ, JZ, LIWORK, LWORK, M, N .TP 20 .ti +4 DOUBLE PRECISION ORFAC .TP 20 .ti +4 INTEGER DESCZ( * ), IBLOCK( * ), ICLUSTR( * ), IFAIL( * ), ISPLIT( * ), IWORK( * ) .TP 20 .ti +4 DOUBLE PRECISION D( * ), E( * ), GAP( * ), W( * ), WORK( * ), Z( * ) .SH PURPOSE PDSTEIN computes the eigenvectors of a symmetric tridiagonal matrix in parallel, using inverse iteration. The eigenvectors found correspond to user specified eigenvalues. PDSTEIN does not orthogonalize vectors that are on different processes. The extent of orthogonalization is controlled by the input parameter LWORK. Eigenvectors that are to be orthogonalized are computed by the same process. PDSTEIN decides on the allocation of work among the processes and then calls DSTEIN2 (modified LAPACK routine) on each individual process. If insufficient workspace is allocated, the expected orthogonalization may not be done. .br Note : If the eigenvectors obtained are not orthogonal, increase LWORK and run the code again. .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". 
.br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS P = NPROW * NPCOL is the total number of processes .TP 8 N (global input) INTEGER The order of the tridiagonal matrix T. N >= 0. 
.TP 8 D (global input) DOUBLE PRECISION array, dimension (N) The n diagonal elements of the tridiagonal matrix T. .TP 8 E (global input) DOUBLE PRECISION array, dimension (N-1) The (n-1) off-diagonal elements of the tridiagonal matrix T. .TP 8 M (global input) INTEGER The total number of eigenvectors to be found. 0 <= M <= N. .TP 8 W (global input/global output) DOUBLE PRECISION array, dim (M) On input, the first M elements of W contain all the eigenvalues for which eigenvectors are to be computed. The eigenvalues should be grouped by split-off block and ordered from smallest to largest within the block (The output array W from PDSTEBZ with ORDER='b' is expected here). This array should be replicated on all processes. On output, the first M elements contain the input eigenvalues in ascending order. Note : To obtain orthogonal vectors, it is best if eigenvalues are computed to highest accuracy ( this can be done by setting ABSTOL to the underflow threshold = DLAMCH('U') --- ABSTOL is an input parameter to PDSTEBZ ) .TP 8 IBLOCK (global input) INTEGER array, dimension (N) The submatrix indices associated with the corresponding eigenvalues in W -- 1 for eigenvalues belonging to the first submatrix from the top, 2 for those belonging to the second submatrix, etc. (The output array IBLOCK from PDSTEBZ is expected here). .TP 8 ISPLIT (global input) INTEGER array, dimension (N) The splitting points, at which T breaks up into submatrices. The first submatrix consists of rows/columns 1 to ISPLIT(1), the second of rows/columns ISPLIT(1)+1 through ISPLIT(2), etc., and the NSPLIT-th consists of rows/columns ISPLIT(NSPLIT-1)+1 through ISPLIT(NSPLIT)=N (The output array ISPLIT from PDSTEBZ is expected here.) .TP 8 ORFAC (global input) DOUBLE PRECISION ORFAC specifies which eigenvectors should be orthogonalized. Eigenvectors that correspond to eigenvalues which are within ORFAC*||T|| of each other are to be orthogonalized. 
However, if the workspace is insufficient (see LWORK), this tolerance may be decreased until all eigenvectors to be orthogonalized can be stored in one process. No orthogonalization will be done if ORFAC equals zero. A default value of 10^-3 is used if ORFAC is negative. ORFAC should be identical on all processes. .TP 8 Z (local output) DOUBLE PRECISION array, dimension (DESCZ(DLEN_), N/npcol + NB) Z contains the computed eigenvectors associated with the specified eigenvalues. Any vector which fails to converge is set to its current iterate after MAXITS iterations ( See DSTEIN2 ). On output, Z is distributed across the P processes in block cyclic format. .TP 8 IZ (global input) INTEGER Z's global row index, which points to the beginning of the submatrix which is to be operated on. .TP 8 JZ (global input) INTEGER Z's global column index, which points to the beginning of the submatrix which is to be operated on. .TP 8 DESCZ (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix Z. .TP 8 WORK (local workspace/global output) DOUBLE PRECISION array, dimension ( LWORK ) On output, WORK(1) gives a lower bound on the workspace ( LWORK ) that guarantees the user desired orthogonalization (see ORFAC). Note that this may overestimate the minimum workspace needed. .TP 8 LWORK (local input) integer LWORK controls the extent of orthogonalization which can be done. The number of eigenvectors for which storage is allocated on each process is NVEC = floor(( LWORK- max(5*N,NP00*MQ00) )/N). Eigenvectors corresponding to eigenvalue clusters of size NVEC - ceil(M/P) + 1 are guaranteed to be orthogonal ( the orthogonality is similar to that obtained from DSTEIN2). Note : LWORK must be no smaller than: max(5*N,NP00*MQ00) + ceil(M/P)*N, and should have the same input value on all processes. It is the minimum value of LWORK input on different processes that is significant. 
If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 IWORK (local workspace/global output) INTEGER array, dimension ( 3*N+P+1 ) On return, IWORK(1) contains the amount of integer workspace required. On return, the IWORK(2) through IWORK(P+2) indicate the eigenvectors computed by each process. Process I computes eigenvectors indexed IWORK(I+2)+1 thru' IWORK(I+3). .TP 8 LIWORK (local input) INTEGER Size of array IWORK. Must be >= 3*N + P + 1 If LIWORK = -1, then LIWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 IFAIL (global output) integer array, dimension (M) On normal exit, all elements of IFAIL are zero. If one or more eigenvectors fail to converge after MAXITS iterations (as in DSTEIN), then INFO > 0 is returned. If mod(INFO,M+1)>0, then for I=1 to mod(INFO,M+1), the eigenvector corresponding to the eigenvalue W(IFAIL(I)) failed to converge ( W refers to the array of eigenvalues on output ). ICLUSTR (global output) integer array, dimension (2*P) This output array contains indices of eigenvectors corresponding to a cluster of eigenvalues that could not be orthogonalized due to insufficient workspace (see LWORK, ORFAC and INFO). Eigenvectors corresponding to clusters of eigenvalues indexed ICLUSTR(2*I-1) to ICLUSTR(2*I), I = 1 to INFO/(M+1), could not be orthogonalized due to lack of workspace. Hence the eigenvectors corresponding to these clusters may not be orthogonal. ICLUSTR is a zero terminated array --- ( ICLUSTR(2*K).NE.0 .AND. ICLUSTR(2*K+1).EQ.0 ) if and only if K is the number of clusters. 
.TP 8 GAP (global output) DOUBLE PRECISION array, dimension (P) This output array contains the gap between eigenvalues whose eigenvectors could not be orthogonalized. The first INFO/(M+1) output values in this array correspond to the INFO/(M+1) clusters indicated by the array ICLUSTR. As a result, the dot product between eigenvectors corresponding to the I-th cluster may be as high as ( O(n)*macheps ) / GAP(I). .TP 8 INFO (global output) INTEGER = 0: successful exit .br < 0: if the i-th argument is an array and the j-th entry had an illegal value, then INFO = -(i*100+j); if the i-th argument is a scalar and had an illegal value, then INFO = -i. .br > 0: if mod(INFO,M+1) = I, then I eigenvectors failed to converge in MAXITS iterations; their indices are stored in the array IFAIL. If INFO/(M+1) = I, then eigenvectors corresponding to I clusters of eigenvalues could not be orthogonalized due to insufficient workspace; the indices of the clusters are stored in the array ICLUSTR. 
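Because PDSTEIN packs two failure counts into a single INFO value base (M+1), a caller has to decode it before inspecting IFAIL and ICLUSTR. The following sketch illustrates that decoding; the helper name is ours, not part of ScaLAPACK:

```python
# Decode PDSTEIN's composite INFO value (illustrative helper, not a
# ScaLAPACK routine).  For INFO > 0, the two counts are packed base (M+1):
#   mod(INFO, M+1) = number of eigenvectors that failed to converge
#                    (their indices are in IFAIL(1:count))
#   INFO / (M+1)   = number of clusters that could not be orthogonalized
#                    (their bounds are in ICLUSTR(1:2*count))
def decode_pdstein_info(info, m):
    if info < 0:
        # Argument error: -(i*100+j) for array arguments, -i for scalars.
        return {"arg_error": -info, "failed": 0, "clusters": 0}
    return {
        "arg_error": 0,
        "failed": info % (m + 1),
        "clusters": info // (m + 1),
    }

# Example: with M = 10, INFO = 25 means 3 eigenvectors failed to converge
# (25 mod 11 = 3) and 2 clusters lacked workspace (25 / 11 = 2).
print(decode_pdstein_info(25, 10))
```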
scalapack-doc-1.5/man/manl/pdsyev.l0100644000056400000620000002321406335610637016741 0ustar pfrauenfstaff.TH PDSYEV l "12 May 1997" "LAPACK version 1.3" "LAPACK routine (version 1.3)" .SH NAME .SH SYNOPSIS .TP 19 SUBROUTINE PDSYEV( JOBZ, UPLO, N, A, IA, JA, DESCA, W, Z, IZ, JZ, DESCZ, WORK, LWORK, INFO ) .TP 19 .ti +4 CHARACTER JOBZ, UPLO .TP 19 .ti +4 INTEGER IA, INFO, IZ, JA, JZ, LWORK, N .TP 19 .ti +4 INTEGER DESCA( * ), DESCZ( * ) .TP 19 .ti +4 DOUBLE PRECISION A( * ), W( * ), WORK( * ), Z( * ) .TP 19 .ti +4 INTEGER BLOCK_CYCLIC_2D, DLEN_, DTYPE_, CTXT_, M_, N_, MB_, NB_, RSRC_, CSRC_, LLD_ .TP 19 .ti +4 PARAMETER ( BLOCK_CYCLIC_2D = 1, DLEN_ = 9, DTYPE_ = 1, CTXT_ = 2, M_ = 3, N_ = 4, MB_ = 5, NB_ = 6, RSRC_ = 7, CSRC_ = 8, LLD_ = 9 ) .TP 19 .ti +4 DOUBLE PRECISION ZERO, ONE .TP 19 .ti +4 PARAMETER ( ZERO = 0.0D+0, ONE = 1.0D+0 ) .TP 19 .ti +4 INTEGER ITHVAL .TP 19 .ti +4 PARAMETER ( ITHVAL = 10 ) .TP 19 .ti +4 LOGICAL LOWER, WANTZ .TP 19 .ti +4 INTEGER CONTEXTC, CSRC_A, I, IACOL, IAROW, ICOFFA, IINFO, INDD, INDD2, INDE, INDE2, INDTAU, INDWORK, INDWORK2, IROFFA, IROFFZ, ISCALE, IZROW, J, K, LCM, LCMQ, LDC, LLWORK, LWMIN, MB_A, MB_Z, MYCOL, MYPCOLC, MYPROWC, MYROW, NB, NB_A, NB_Z, NN, NP, NPCOL, NPCOLC, NPROCS, NPROW, NPROWC, NQ, NRC, QRMEM, RSRC_A, RSRC_Z, SIZEMQRLEFT, SIZEMQRRIGHT .TP 19 .ti +4 DOUBLE PRECISION ANRM, BIGNUM, EPS, RMAX, RMIN, SAFMIN, SIGMA, SMLNUM .TP 19 .ti +4 INTEGER DESCQR( 10 ), IDUM1( 3 ), IDUM2( 3 ) .TP 19 .ti +4 LOGICAL LSAME .TP 19 .ti +4 INTEGER ILCM, INDXG2P, NUMROC, SL_GRIDRESHAPE .TP 19 .ti +4 DOUBLE PRECISION PDLAMCH, PDLANSY .TP 19 .ti +4 EXTERNAL LSAME, ILCM, INDXG2P, NUMROC, SL_GRIDRESHAPE, PDLAMCH, PDLANSY .TP 19 .ti +4 EXTERNAL BLACS_GRIDEXIT, BLACS_GRIDINFO, CHK1MAT, DCOPY, DESCINIT, DGAMN2D, DGAMX2D, DSCAL, DSTEQR2, PCHK2MAT, PDELGET, PDGEMR2D, PDLASCL, PDLASET, PDORMTR, PDSYTRD, PXERBLA .TP 19 .ti +4 INTRINSIC DBLE, ICHAR, MAX, MIN, MOD, SQRT .TP 19 .ti +4 IF( BLOCK_CYCLIC_2D*CSRC_*CTXT_*DLEN_*DTYPE_*LLD_*MB_*M_*NB_*N_* 
RSRC_.LT.0 )RETURN .TP 19 .ti +4 IF( N.EQ.0 ) RETURN .TP 19 .ti +4 CALL BLACS_GRIDINFO( DESCA( CTXT_ ), NPROW, NPCOL, MYROW, MYCOL ) .TP 19 .ti +4 INFO = 0 .TP 19 .ti +4 IF( NPROW.EQ.-1 ) THEN .TP 19 .ti +4 INFO = -( 700+CTXT_ ) .TP 19 .ti +4 ELSE IF( DESCA( CTXT_ ).NE.DESCZ( CTXT_ ) ) THEN .TP 19 .ti +4 INFO = -( 1200+CTXT_ ) .TP 19 .ti +4 ELSE .TP 19 .ti +4 CALL CHK1MAT( N, 3, N, 3, IA, JA, DESCA, 7, INFO ) .TP 19 .ti +4 CALL CHK1MAT( N, 3, N, 3, IZ, JZ, DESCZ, 12, INFO ) .TP 19 .ti +4 IF( INFO.EQ.0 ) THEN .TP 19 .ti +4 SAFMIN = PDLAMCH( DESCA( CTXT_ ), 'Safe minimum' ) .TP 19 .ti +4 EPS = PDLAMCH( DESCA( CTXT_ ), 'Precision' ) .TP 19 .ti +4 SMLNUM = SAFMIN / EPS .TP 19 .ti +4 BIGNUM = ONE / SMLNUM .TP 19 .ti +4 RMIN = SQRT( SMLNUM ) .TP 19 .ti +4 RMAX = MIN( SQRT( BIGNUM ), ONE / SQRT( SQRT( SAFMIN ) ) ) .TP 19 .ti +4 NPROCS = NPROW*NPCOL .TP 19 .ti +4 NB_A = DESCA( NB_ ) .TP 19 .ti +4 MB_A = DESCA( MB_ ) .TP 19 .ti +4 NB_Z = DESCZ( NB_ ) .TP 19 .ti +4 MB_Z = DESCZ( MB_ ) .TP 19 .ti +4 NB = NB_A .TP 19 .ti +4 LOWER = LSAME( UPLO, 'L' ) .TP 19 .ti +4 WANTZ = LSAME( JOBZ, 'V' ) .TP 19 .ti +4 RSRC_A = DESCA( RSRC_ ) .TP 19 .ti +4 CSRC_A = DESCA( CSRC_ ) .TP 19 .ti +4 RSRC_Z = DESCZ( RSRC_ ) .TP 19 .ti +4 LCM = ILCM( NPROW, NPCOL ) .TP 19 .ti +4 LCMQ = LCM / NPCOL .TP 19 .ti +4 IROFFA = MOD( IA-1, MB_A ) .TP 19 .ti +4 ICOFFA = MOD( JA-1, NB_A ) .TP 19 .ti +4 IROFFZ = MOD( IZ-1, MB_A ) .TP 19 .ti +4 IAROW = INDXG2P( 1, NB_A, MYROW, RSRC_A, NPROW ) .TP 19 .ti +4 IACOL = INDXG2P( 1, MB_A, MYCOL, CSRC_A, NPCOL ) .TP 19 .ti +4 IZROW = INDXG2P( 1, NB_A, MYROW, RSRC_Z, NPROW ) .TP 19 .ti +4 NP = NUMROC( N+IROFFA, NB_Z, MYROW, IAROW, NPROW ) .TP 19 .ti +4 NQ = NUMROC( N+ICOFFA, NB_Z, MYCOL, IACOL, NPCOL ) .TP 19 .ti +4 SIZEMQRLEFT = MAX( ( NB_A*( NB_A-1 ) ) / 2, ( NP+NQ )*NB_A ) + NB_A*NB_A .TP 19 .ti +4 SIZEMQRRIGHT = MAX( ( NB_A*( NB_A-1 ) ) / 2, ( NQ+MAX( NP+NUMROC( NUMROC( N+ICOFFA, NB_A, 0, 0, NPCOL ), NB, 0, 0, LCMQ ), NP ) )* NB_A ) + NB_A*NB_A .TP 19 .ti +4 LDC = 0 
.TP 19 .ti +4 IF( WANTZ ) THEN .TP 19 .ti +4 CONTEXTC = SL_GRIDRESHAPE( DESCA( CTXT_ ), 0, 1, 1, NPROCS, 1 ) .TP 19 .ti +4 CALL BLACS_GRIDINFO( CONTEXTC, NPROWC, NPCOLC, MYPROWC, MYPCOLC ) .TP 19 .ti +4 NRC = NUMROC( N, NB_A, MYPROWC, 0, NPROCS ) .TP 19 .ti +4 LDC = MAX( 1, NRC ) .TP 19 .ti +4 CALL DESCINIT( DESCQR, N, N, NB, NB, 0, 0, CONTEXTC, LDC, INFO ) .TP 19 .ti +4 END IF .TP 19 .ti +4 INDTAU = 1 .TP 19 .ti +4 INDE = INDTAU + N .TP 19 .ti +4 INDD = INDE + N .TP 19 .ti +4 INDD2 = INDD + N .TP 19 .ti +4 INDE2 = INDD2 + N .TP 19 .ti +4 INDWORK = INDE2 + N .TP 19 .ti +4 INDWORK2 = INDWORK + N*LDC .TP 19 .ti +4 LLWORK = LWORK - INDWORK + 1 .TP 19 .ti +4 NN = MAX( N, NB, 2 ) .TP 19 .ti +4 IF( WANTZ ) THEN .TP 19 .ti +4 QRMEM = 5*N + MAX( 2*NP+NQ+NB*NN, 2*NN-2 ) + N*LDC .TP 19 .ti +4 LWMIN = MAX( SIZEMQRLEFT, SIZEMQRRIGHT, QRMEM ) .TP 19 .ti +4 ELSE .TP 19 .ti +4 LWMIN = 5*N + 2*NP + NQ + NB*NN .TP 19 .ti +4 END IF .TP 19 .ti +4 END IF .TP 19 .ti +4 IF( INFO.EQ.0 ) THEN .TP 19 .ti +4 IF( .NOT.( WANTZ .OR. LSAME( JOBZ, 'N' ) ) ) THEN .TP 19 .ti +4 INFO = -1 .TP 19 .ti +4 ELSE IF( .NOT.( LOWER .OR. LSAME( UPLO, 'U' ) ) ) THEN .TP 19 .ti +4 INFO = -2 .TP 19 .ti +4 ELSE IF( LWORK.LT.LWMIN .AND. 
LWORK.NE.-1 ) THEN .TP 19 .ti +4 INFO = -14 .TP 19 .ti +4 ELSE IF( IROFFA.NE.IROFFZ ) THEN .TP 19 .ti +4 INFO = -10 .TP 19 .ti +4 ELSE IF( IROFFA.NE.0 ) THEN .TP 19 .ti +4 INFO = -5 .TP 19 .ti +4 ELSE IF( IAROW.NE.IZROW ) THEN .TP 19 .ti +4 INFO = -10 .TP 19 .ti +4 ELSE IF( DESCA( MB_ ).NE.DESCA( NB_ ) ) THEN .TP 19 .ti +4 INFO = -( 700+NB_ ) .TP 19 .ti +4 ELSE IF( DESCA( M_ ).NE.DESCZ( M_ ) ) THEN .TP 19 .ti +4 INFO = -( 1200+M_ ) .TP 19 .ti +4 ELSE IF( DESCA( N_ ).NE.DESCZ( N_ ) ) THEN .TP 19 .ti +4 INFO = -( 1200+N_ ) .TP 19 .ti +4 ELSE IF( DESCA( MB_ ).NE.DESCZ( MB_ ) ) THEN .TP 19 .ti +4 INFO = -( 1200+MB_ ) .TP 19 .ti +4 ELSE IF( DESCA( NB_ ).NE.DESCZ( NB_ ) ) THEN .TP 19 .ti +4 INFO = -( 1200+NB_ ) .TP 19 .ti +4 ELSE IF( DESCA( RSRC_ ).NE.DESCZ( RSRC_ ) ) THEN .TP 19 .ti +4 INFO = -( 1200+RSRC_ ) .TP 19 .ti +4 ELSE IF( DESCA( CTXT_ ).NE.DESCZ( CTXT_ ) ) THEN .TP 19 .ti +4 INFO = -( 1200+CTXT_ ) .TP 19 .ti +4 END IF .TP 19 .ti +4 END IF .TP 19 .ti +4 IF( WANTZ ) THEN .TP 19 .ti +4 IDUM1( 1 ) = ICHAR( 'V' ) .TP 19 .ti +4 ELSE .TP 19 .ti +4 IDUM1( 1 ) = ICHAR( 'N' ) .TP 19 .ti +4 END IF .TP 19 .ti +4 IDUM2( 1 ) = 1 .TP 19 .ti +4 IF( LOWER ) THEN .TP 19 .ti +4 IDUM1( 2 ) = ICHAR( 'L' ) .TP 19 .ti +4 ELSE .TP 19 .ti +4 IDUM1( 2 ) = ICHAR( 'U' ) .TP 19 .ti +4 END IF .TP 19 .ti +4 IDUM2( 2 ) = 2 .TP 19 .ti +4 IF( LWORK.EQ.-1 ) THEN .TP 19 .ti +4 IDUM1( 3 ) = -1 .TP 19 .ti +4 ELSE .TP 19 .ti +4 IDUM1( 3 ) = 1 .TP 19 .ti +4 END IF .TP 19 .ti +4 IDUM2( 3 ) = 3 .TP 19 .ti +4 CALL PCHK2MAT( N, 3, N, 3, IA, JA, DESCA, 7, N, 3, N, 3, IZ, JZ, DESCZ, 12, 3, IDUM1, IDUM2, INFO ) .TP 19 .ti +4 WORK( 1 ) = DBLE( LWMIN ) .TP 19 .ti +4 END IF .TP 19 .ti +4 IF( INFO.NE.0 ) THEN .TP 19 .ti +4 CALL PXERBLA( DESCA( CTXT_ ), 'PDSYEV', -INFO ) .TP 19 .ti +4 IF( WANTZ ) CALL BLACS_GRIDEXIT( CONTEXTC ) .TP 19 .ti +4 RETURN .TP 19 .ti +4 ELSE IF( LWORK.EQ.-1 ) THEN .TP 19 .ti +4 IF( WANTZ ) CALL BLACS_GRIDEXIT( CONTEXTC ) .TP 19 .ti +4 RETURN .TP 19 .ti +4 END IF .TP 19 .ti +4 ISCALE = 0 
.TP 19 .ti +4 ANRM = PDLANSY( '1', UPLO, N, A, IA, JA, DESCA, WORK( INDWORK ) ) .TP 19 .ti +4 IF( ANRM.GT.ZERO .AND. ANRM.LT.RMIN ) THEN .TP 19 .ti +4 ISCALE = 1 .TP 19 .ti +4 SIGMA = RMIN / ANRM .TP 19 .ti +4 ELSE IF( ANRM.GT.RMAX ) THEN .TP 19 .ti +4 ISCALE = 1 .TP 19 .ti +4 SIGMA = RMAX / ANRM .TP 19 .ti +4 END IF .TP 19 .ti +4 IF( ISCALE.EQ.1 ) THEN .TP 19 .ti +4 CALL PDLASCL( UPLO, ONE, SIGMA, N, N, A, IA, JA, DESCA, IINFO ) .TP 19 .ti +4 END IF .TP 19 .ti +4 CALL PDSYTRD( UPLO, N, A, IA, JA, DESCA, WORK( INDD ), WORK( INDE ), WORK( INDTAU ), WORK( INDWORK ), LLWORK, IINFO ) .TP 19 .ti +4 DO 10 I = 1, N .TP 19 .ti +4 CALL PDELGET( 'A', ' ', WORK( INDD2+I-1 ), A, I+IA-1, I+JA-1, DESCA ) .TP 19 .ti +4 10 CONTINUE .TP 19 .ti +4 IF( LSAME( UPLO, 'U' ) ) THEN .TP 19 .ti +4 DO 20 I = 1, N - 1 .TP 19 .ti +4 CALL PDELGET( 'A', ' ', WORK( INDE2+I-1 ), A, I+IA-1, I+JA, DESCA ) .TP 19 .ti +4 20 CONTINUE .TP 19 .ti +4 ELSE .TP 19 .ti +4 DO 30 I = 1, N - 1 .TP 19 .ti +4 CALL PDELGET( 'A', ' ', WORK( INDE2+I-1 ), A, I+IA, I+JA-1, DESCA ) .TP 19 .ti +4 30 CONTINUE .TP 19 .ti +4 END IF .TP 19 .ti +4 IF( WANTZ ) THEN .TP 19 .ti +4 CALL PDLASET( 'Full', N, N, ZERO, ONE, WORK( INDWORK ), 1, 1, DESCQR ) .TP 19 .ti +4 CALL DSTEQR2( 'I', N, WORK( INDD2 ), WORK( INDE2 ), WORK( INDWORK ), LDC, NRC, WORK( INDWORK2 ), INFO ) .TP 19 .ti +4 CALL PDGEMR2D( N, N, WORK( INDWORK ), 1, 1, DESCQR, Z, 1, 1, DESCZ, CONTEXTC ) .TP 19 .ti +4 CALL PDORMTR( 'L', UPLO, 'N', N, N, A, IA, JA, DESCA, WORK( INDTAU ), Z, IZ, JZ, DESCZ, WORK( INDWORK ), LLWORK, IINFO ) .TP 19 .ti +4 ELSE .TP 19 .ti +4 CALL DSTEQR2( 'N', N, WORK( INDD2 ), WORK( INDE2 ), WORK( INDWORK ), 1, 1, WORK( INDWORK2 ), INFO ) .TP 19 .ti +4 END IF .TP 19 .ti +4 CALL DCOPY( N, WORK( INDD2 ), 1, W, 1 ) .TP 19 .ti +4 IF( ISCALE.EQ.1 ) THEN .TP 19 .ti +4 CALL DSCAL( N, ONE / SIGMA, W, 1 ) .TP 19 .ti +4 END IF .TP 19 .ti +4 WORK( 1 ) = DBLE( LWMIN ) .TP 19 .ti +4 IF( WANTZ ) THEN .TP 19 .ti +4 CALL BLACS_GRIDEXIT( CONTEXTC ) .TP 19 .ti +4 
END IF .TP 19 .ti +4 IF( N.LE.ITHVAL ) THEN .TP 19 .ti +4 J = N .TP 19 .ti +4 K = 1 .TP 19 .ti +4 ELSE .TP 19 .ti +4 J = N / ITHVAL .TP 19 .ti +4 K = ITHVAL .TP 19 .ti +4 END IF .TP 19 .ti +4 DO 40 I = 1, J .TP 19 .ti +4 WORK( I+INDTAU ) = W( ( I-1 )*K+1 ) .TP 19 .ti +4 WORK( I+INDE ) = W( ( I-1 )*K+1 ) .TP 19 .ti +4 40 CONTINUE .TP 19 .ti +4 CALL DGAMN2D( DESCA( CTXT_ ), 'a', ' ', J, 1, WORK( 1+INDTAU ), J, 1, 1, -1, -1, 0 ) .TP 19 .ti +4 CALL DGAMX2D( DESCA( CTXT_ ), 'a', ' ', J, 1, WORK( 1+INDE ), J, 1, 1, -1, -1, 0 ) .TP 19 .ti +4 DO 50 I = 1, J .TP 19 .ti +4 IF( INFO.EQ.0 .AND. ( WORK( I+INDTAU )-WORK( I+INDE ).NE. ZERO ) ) THEN .TP 19 .ti +4 INFO = N + 1 .TP 19 .ti +4 END IF .TP 19 .ti +4 50 CONTINUE .TP 19 .ti +4 RETURN .TP 19 .ti +4 END .SH PURPOSE scalapack-doc-1.5/man/manl/pdsyevx.l0100644000056400000620000003403506335610637017134 0ustar pfrauenfstaff.TH PDSYEVX l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME .SH SYNOPSIS .TP 20 SUBROUTINE PDSYEVX( JOBZ, RANGE, UPLO, N, A, IA, JA, DESCA, VL, VU, IL, IU, ABSTOL, M, NZ, W, ORFAC, Z, IZ, JZ, DESCZ, WORK, LWORK, IWORK, LIWORK, IFAIL, ICLUSTR, GAP, INFO ) .TP 20 .ti +4 CHARACTER JOBZ, RANGE, UPLO .TP 20 .ti +4 INTEGER IA, IL, INFO, IU, IZ, JA, JZ, LIWORK, LWORK, M, N, NZ .TP 20 .ti +4 DOUBLE PRECISION ABSTOL, ORFAC, VL, VU .TP 20 .ti +4 INTEGER DESCA( * ), DESCZ( * ), ICLUSTR( * ), IFAIL( * ), IWORK( * ) .TP 20 .ti +4 DOUBLE PRECISION A( * ), GAP( * ), W( * ), WORK( * ), Z( * ) .TP 20 .ti +4 INTEGER BLOCK_CYCLIC_2D, DLEN_, DTYPE_, CTXT_, M_, N_, MB_, NB_, RSRC_, CSRC_, LLD_ .TP 20 .ti +4 PARAMETER ( BLOCK_CYCLIC_2D = 1, DLEN_ = 9, DTYPE_ = 1, CTXT_ = 2, M_ = 3, N_ = 4, MB_ = 5, NB_ = 6, RSRC_ = 7, CSRC_ = 8, LLD_ = 9 ) .TP 20 .ti +4 DOUBLE PRECISION ZERO, ONE, TEN, FIVE .TP 20 .ti +4 PARAMETER ( ZERO = 0.0D+0, ONE = 1.0D+0, TEN = 10.0D+0, FIVE = 5.0D+0 ) .TP 20 .ti +4 INTEGER IERREIN, IERRCLS, IERRSPC, IERREBZ .TP 20 .ti +4 PARAMETER ( IERREIN = 1, IERRCLS = 2, IERRSPC = 4, IERREBZ 
= 8 ) .TP 20 .ti +4 LOGICAL ALLEIG, INDEIG, LOWER, LQUERY, QUICKRETURN, VALEIG, WANTZ .TP 20 .ti +4 CHARACTER ORDER .TP 20 .ti +4 INTEGER CSRC_A, I, IACOL, IAROW, ICOFFA, IINFO, INDD, INDD2, INDE, INDE2, INDIBL, INDISP, INDTAU, INDWORK, IROFFA, IROFFZ, ISCALE, ISIZESTEBZ, ISIZESTEIN, IZROW, LALLWORK, LIWMIN, LLWORK, LWMIN, MAXEIGS, MB_A, MB_Z, MQ0, MYCOL, MYROW, NB, NB_A, NB_Z, NEIG, NN, NNP, NP0, NPCOL, NPROCS, NPROW, NSPLIT, NZZ, OFFSET, RSRC_A, RSRC_Z, SIZEORMTR, SIZESTEIN, SIZESYEVX .TP 20 .ti +4 DOUBLE PRECISION ABSTLL, ANRM, BIGNUM, EPS, RMAX, RMIN, SAFMIN, SIGMA, SMLNUM, VLL, VUU .TP 20 .ti +4 INTEGER IDUM1( 4 ), IDUM2( 4 ) .TP 20 .ti +4 LOGICAL LSAME .TP 20 .ti +4 INTEGER ICEIL, INDXG2P, NUMROC .TP 20 .ti +4 DOUBLE PRECISION PDLAMCH, PDLANSY .TP 20 .ti +4 EXTERNAL LSAME, ICEIL, INDXG2P, NUMROC, PDLAMCH, PDLANSY .TP 20 .ti +4 EXTERNAL BLACS_GRIDINFO, CHK1MAT, DGEBR2D, DGEBS2D, DLASRT, DSCAL, IGAMN2D, PCHK2MAT, PDELGET, PDLARED1D, PDLASCL, PDORMTR, PDSTEBZ, PDSTEIN, PDSYTRD, PXERBLA .TP 20 .ti +4 INTRINSIC ABS, DBLE, ICHAR, MAX, MIN, MOD, SQRT .TP 20 .ti +4 IF( BLOCK_CYCLIC_2D*CSRC_*CTXT_*DLEN_*DTYPE_*LLD_*MB_*M_*NB_*N_* RSRC_.LT.0 )RETURN .TP 20 .ti +4 QUICKRETURN = ( N.EQ.0 ) .TP 20 .ti +4 CALL BLACS_GRIDINFO( DESCA( CTXT_ ), NPROW, NPCOL, MYROW, MYCOL ) .TP 20 .ti +4 INFO = 0 .TP 20 .ti +4 IF( NPROW.EQ.-1 ) THEN .TP 20 .ti +4 INFO = -( 800+CTXT_ ) .TP 20 .ti +4 ELSE IF( DESCA( CTXT_ ).NE.DESCZ( CTXT_ ) ) THEN .TP 20 .ti +4 INFO = -( 2100+CTXT_ ) .TP 20 .ti +4 ELSE .TP 20 .ti +4 CALL CHK1MAT( N, 4, N, 4, IA, JA, DESCA, 8, INFO ) .TP 20 .ti +4 CALL CHK1MAT( N, 4, N, 4, IZ, JZ, DESCZ, 21, INFO ) .TP 20 .ti +4 IF( INFO.EQ.0 ) THEN .TP 20 .ti +4 SAFMIN = PDLAMCH( DESCA( CTXT_ ), 'Safe minimum' ) .TP 20 .ti +4 EPS = PDLAMCH( DESCA( CTXT_ ), 'Precision' ) .TP 20 .ti +4 SMLNUM = SAFMIN / EPS .TP 20 .ti +4 BIGNUM = ONE / SMLNUM .TP 20 .ti +4 RMIN = SQRT( SMLNUM ) .TP 20 .ti +4 RMAX = MIN( SQRT( BIGNUM ), ONE / SQRT( SQRT( SAFMIN ) ) ) .TP 20 .ti +4 NPROCS = 
NPROW*NPCOL .TP 20 .ti +4 LOWER = LSAME( UPLO, 'L' ) .TP 20 .ti +4 WANTZ = LSAME( JOBZ, 'V' ) .TP 20 .ti +4 ALLEIG = LSAME( RANGE, 'A' ) .TP 20 .ti +4 VALEIG = LSAME( RANGE, 'V' ) .TP 20 .ti +4 INDEIG = LSAME( RANGE, 'I' ) .TP 20 .ti +4 INDTAU = 1 .TP 20 .ti +4 INDE = INDTAU + N .TP 20 .ti +4 INDD = INDE + N .TP 20 .ti +4 INDD2 = INDD + N .TP 20 .ti +4 INDE2 = INDD2 + N .TP 20 .ti +4 INDWORK = INDE2 + N .TP 20 .ti +4 LLWORK = LWORK - INDWORK + 1 .TP 20 .ti +4 ISIZESTEIN = 3*N + NPROCS + 1 .TP 20 .ti +4 ISIZESTEBZ = MAX( 4*N, 14, NPROCS ) .TP 20 .ti +4 INDIBL = ( MAX( ISIZESTEIN, ISIZESTEBZ ) ) + 1 .TP 20 .ti +4 INDISP = INDIBL + N .TP 20 .ti +4 LQUERY = .FALSE. .TP 20 .ti +4 IF( LWORK.EQ.-1 .OR. LIWORK.EQ.-1 ) LQUERY = .TRUE. .TP 20 .ti +4 NNP = MAX( N, NPROCS+1, 4 ) .TP 20 .ti +4 LIWMIN = 6*NNP .TP 20 .ti +4 NPROCS = NPROW*NPCOL .TP 20 .ti +4 NB_A = DESCA( NB_ ) .TP 20 .ti +4 MB_A = DESCA( MB_ ) .TP 20 .ti +4 NB_Z = DESCZ( NB_ ) .TP 20 .ti +4 MB_Z = DESCZ( MB_ ) .TP 20 .ti +4 NB = NB_A .TP 20 .ti +4 NN = MAX( N, NB, 2 ) .TP 20 .ti +4 RSRC_A = DESCA( RSRC_ ) .TP 20 .ti +4 CSRC_A = DESCA( CSRC_ ) .TP 20 .ti +4 RSRC_Z = DESCZ( RSRC_ ) .TP 20 .ti +4 IROFFA = MOD( IA-1, MB_A ) .TP 20 .ti +4 ICOFFA = MOD( JA-1, NB_A ) .TP 20 .ti +4 IROFFZ = MOD( IZ-1, MB_A ) .TP 20 .ti +4 IAROW = INDXG2P( 1, NB_A, MYROW, RSRC_A, NPROW ) .TP 20 .ti +4 IACOL = INDXG2P( 1, MB_A, MYCOL, CSRC_A, NPCOL ) .TP 20 .ti +4 IZROW = INDXG2P( 1, NB_A, MYROW, RSRC_Z, NPROW ) .TP 20 .ti +4 NP0 = NUMROC( N+IROFFA, NB_Z, MYROW, IAROW, NPROW ) .TP 20 .ti +4 MQ0 = NUMROC( N+ICOFFA, NB_Z, MYCOL, IACOL, NPCOL ) .TP 20 .ti +4 IF( ( .NOT.WANTZ ) .OR. ( VALEIG .AND. ( .NOT.LQUERY ) ) ) THEN .TP 20 .ti +4 LWMIN = 5*N + MAX( 5*NN, NB*( NP0+1 ) ) .TP 20 .ti +4 NEIG = 0 .TP 20 .ti +4 ELSE .TP 20 .ti +4 IF( ALLEIG .OR. 
VALEIG ) THEN .TP 20 .ti +4 NEIG = N .TP 20 .ti +4 ELSE IF( INDEIG ) THEN .TP 20 .ti +4 NEIG = IU - IL + 1 .TP 20 .ti +4 END IF .TP 20 .ti +4 MQ0 = NUMROC( MAX( NEIG, NB, 2 ), NB, MYCOL, IACOL, NPCOL ) .TP 20 .ti +4 LWMIN = 5*N + MAX( 5*NN, NP0*MQ0+2*NB*NB ) + ICEIL( NEIG, NPROW*NPCOL )*NN .TP 20 .ti +4 END IF .TP 20 .ti +4 END IF .TP 20 .ti +4 IF( INFO.EQ.0 ) THEN .TP 20 .ti +4 IF( MYROW.EQ.0 .AND. MYCOL.EQ.0 ) THEN .TP 20 .ti +4 WORK( 1 ) = ABSTOL .TP 20 .ti +4 IF( VALEIG ) THEN .TP 20 .ti +4 WORK( 2 ) = VL .TP 20 .ti +4 WORK( 3 ) = VU .TP 20 .ti +4 ELSE .TP 20 .ti +4 WORK( 2 ) = ZERO .TP 20 .ti +4 WORK( 3 ) = ZERO .TP 20 .ti +4 END IF .TP 20 .ti +4 CALL DGEBS2D( DESCA( CTXT_ ), 'ALL', ' ', 3, 1, WORK, 3 ) .TP 20 .ti +4 ELSE .TP 20 .ti +4 CALL DGEBR2D( DESCA( CTXT_ ), 'ALL', ' ', 3, 1, WORK, 3, 0, 0 ) .TP 20 .ti +4 END IF .TP 20 .ti +4 IF( .NOT.( WANTZ .OR. LSAME( JOBZ, 'N' ) ) ) THEN .TP 20 .ti +4 INFO = -1 .TP 20 .ti +4 ELSE IF( .NOT.( ALLEIG .OR. VALEIG .OR. INDEIG ) ) THEN .TP 20 .ti +4 INFO = -2 .TP 20 .ti +4 ELSE IF( .NOT.( LOWER .OR. LSAME( UPLO, 'U' ) ) ) THEN .TP 20 .ti +4 INFO = -3 .TP 20 .ti +4 ELSE IF( VALEIG .AND. N.GT.0 .AND. VU.LE.VL ) THEN .TP 20 .ti +4 INFO = -10 .TP 20 .ti +4 ELSE IF( INDEIG .AND. ( IL.LT.1 .OR. IL.GT.MAX( 1, N ) ) ) THEN .TP 20 .ti +4 INFO = -11 .TP 20 .ti +4 ELSE IF( INDEIG .AND. ( IU.LT.MIN( N, IL ) .OR. IU.GT.N ) ) THEN .TP 20 .ti +4 INFO = -12 .TP 20 .ti +4 ELSE IF( LWORK.LT.LWMIN .AND. LWORK.NE.-1 ) THEN .TP 20 .ti +4 INFO = -23 .TP 20 .ti +4 ELSE IF( LIWORK.LT.LIWMIN .AND. LIWORK.NE.-1 ) THEN .TP 20 .ti +4 INFO = -25 .TP 20 .ti +4 ELSE IF( VALEIG .AND. ( ABS( WORK( 2 )-VL ).GT.FIVE*EPS* ABS( VL ) ) ) THEN .TP 20 .ti +4 INFO = -9 .TP 20 .ti +4 ELSE IF( VALEIG .AND. 
( ABS( WORK( 3 )-VU ).GT.FIVE*EPS* ABS( VU ) ) ) THEN .TP 20 .ti +4 INFO = -10 .TP 20 .ti +4 ELSE IF( ABS( WORK( 1 )-ABSTOL ).GT.FIVE*EPS*ABS( ABSTOL ) ) THEN .TP 20 .ti +4 INFO = -13 .TP 20 .ti +4 ELSE IF( IROFFA.NE.IROFFZ ) THEN .TP 20 .ti +4 INFO = -19 .TP 20 .ti +4 ELSE IF( IROFFA.NE.0 ) THEN .TP 20 .ti +4 INFO = -6 .TP 20 .ti +4 ELSE IF( IAROW.NE.IZROW ) THEN .TP 20 .ti +4 INFO = -19 .TP 20 .ti +4 ELSE IF( DESCA( MB_ ).NE.DESCA( NB_ ) ) THEN .TP 20 .ti +4 INFO = -( 800+NB_ ) .TP 20 .ti +4 ELSE IF( DESCA( M_ ).NE.DESCZ( M_ ) ) THEN .TP 20 .ti +4 INFO = -( 2100+M_ ) .TP 20 .ti +4 ELSE IF( DESCA( N_ ).NE.DESCZ( N_ ) ) THEN .TP 20 .ti +4 INFO = -( 2100+N_ ) .TP 20 .ti +4 ELSE IF( DESCA( MB_ ).NE.DESCZ( MB_ ) ) THEN .TP 20 .ti +4 INFO = -( 2100+MB_ ) .TP 20 .ti +4 ELSE IF( DESCA( NB_ ).NE.DESCZ( NB_ ) ) THEN .TP 20 .ti +4 INFO = -( 2100+NB_ ) .TP 20 .ti +4 ELSE IF( DESCA( RSRC_ ).NE.DESCZ( RSRC_ ) ) THEN .TP 20 .ti +4 INFO = -( 2100+RSRC_ ) .TP 20 .ti +4 ELSE IF( DESCA( CSRC_ ).NE.DESCZ( CSRC_ ) ) THEN .TP 20 .ti +4 INFO = -( 2100+CSRC_ ) .TP 20 .ti +4 ELSE IF( DESCA( CTXT_ ).NE.DESCZ( CTXT_ ) ) THEN .TP 20 .ti +4 INFO = -( 2100+CTXT_ ) .TP 20 .ti +4 END IF .TP 20 .ti +4 END IF .TP 20 .ti +4 IF( WANTZ ) THEN .TP 20 .ti +4 IDUM1( 1 ) = ICHAR( 'V' ) .TP 20 .ti +4 ELSE .TP 20 .ti +4 IDUM1( 1 ) = ICHAR( 'N' ) .TP 20 .ti +4 END IF .TP 20 .ti +4 IDUM2( 1 ) = 1 .TP 20 .ti +4 IF( LOWER ) THEN .TP 20 .ti +4 IDUM1( 2 ) = ICHAR( 'L' ) .TP 20 .ti +4 ELSE .TP 20 .ti +4 IDUM1( 2 ) = ICHAR( 'U' ) .TP 20 .ti +4 END IF .TP 20 .ti +4 IDUM2( 2 ) = 2 .TP 20 .ti +4 IF( ALLEIG ) THEN .TP 20 .ti +4 IDUM1( 3 ) = ICHAR( 'A' ) .TP 20 .ti +4 ELSE IF( INDEIG ) THEN .TP 20 .ti +4 IDUM1( 3 ) = ICHAR( 'I' ) .TP 20 .ti +4 ELSE .TP 20 .ti +4 IDUM1( 3 ) = ICHAR( 'V' ) .TP 20 .ti +4 END IF .TP 20 .ti +4 IDUM2( 3 ) = 3 .TP 20 .ti +4 IF( LQUERY ) THEN .TP 20 .ti +4 IDUM1( 4 ) = -1 .TP 20 .ti +4 ELSE .TP 20 .ti +4 IDUM1( 4 ) = 1 .TP 20 .ti +4 END IF .TP 20 .ti +4 IDUM2( 4 ) = 4 .TP 20 .ti +4 CALL 
PCHK2MAT( N, 4, N, 4, IA, JA, DESCA, 8, N, 4, N, 4, IZ, JZ, DESCZ, 21, 4, IDUM1, IDUM2, INFO ) .TP 20 .ti +4 WORK( 1 ) = DBLE( LWMIN ) .TP 20 .ti +4 IWORK( 1 ) = LIWMIN .TP 20 .ti +4 END IF .TP 20 .ti +4 IF( INFO.NE.0 ) THEN .TP 20 .ti +4 CALL PXERBLA( DESCA( CTXT_ ), 'PDSYEVX', -INFO ) .TP 20 .ti +4 RETURN .TP 20 .ti +4 ELSE IF( LQUERY ) THEN .TP 20 .ti +4 RETURN .TP 20 .ti +4 END IF .TP 20 .ti +4 IF( QUICKRETURN ) THEN .TP 20 .ti +4 IF( WANTZ ) THEN .TP 20 .ti +4 NZ = 0 .TP 20 .ti +4 ICLUSTR( 1 ) = 0 .TP 20 .ti +4 END IF .TP 20 .ti +4 M = 0 .TP 20 .ti +4 WORK( 1 ) = DBLE( LWMIN ) .TP 20 .ti +4 IWORK( 1 ) = LIWMIN .TP 20 .ti +4 RETURN .TP 20 .ti +4 END IF .TP 20 .ti +4 ABSTLL = ABSTOL .TP 20 .ti +4 ISCALE = 0 .TP 20 .ti +4 IF( VALEIG ) THEN .TP 20 .ti +4 VLL = VL .TP 20 .ti +4 VUU = VU .TP 20 .ti +4 ELSE .TP 20 .ti +4 VLL = ZERO .TP 20 .ti +4 VUU = ZERO .TP 20 .ti +4 END IF .TP 20 .ti +4 ANRM = PDLANSY( '1', UPLO, N, A, IA, JA, DESCA, WORK( INDWORK ) ) .TP 20 .ti +4 IF( ANRM.GT.ZERO .AND. ANRM.LT.RMIN ) THEN .TP 20 .ti +4 ISCALE = 1 .TP 20 .ti +4 SIGMA = RMIN / ANRM .TP 20 .ti +4 ANRM = ANRM*SIGMA .TP 20 .ti +4 ELSE IF( ANRM.GT.RMAX ) THEN .TP 20 .ti +4 ISCALE = 1 .TP 20 .ti +4 SIGMA = RMAX / ANRM .TP 20 .ti +4 ANRM = ANRM*SIGMA .TP 20 .ti +4 END IF .TP 20 .ti +4 IF( ISCALE.EQ.1 ) THEN .TP 20 .ti +4 CALL PDLASCL( UPLO, ONE, SIGMA, N, N, A, IA, JA, DESCA, IINFO ) .TP 20 .ti +4 IF( ABSTOL.GT.0 ) ABSTLL = ABSTOL*SIGMA .TP 20 .ti +4 IF( VALEIG ) THEN .TP 20 .ti +4 VLL = VL*SIGMA .TP 20 .ti +4 VUU = VU*SIGMA .TP 20 .ti +4 IF( VUU.EQ.VLL ) THEN .TP 20 .ti +4 VUU = VUU + 2*MAX( ABS( VUU )*EPS, SAFMIN ) .TP 20 .ti +4 END IF .TP 20 .ti +4 END IF .TP 20 .ti +4 END IF .TP 20 .ti +4 LALLWORK = LLWORK .TP 20 .ti +4 CALL PDSYTRD( UPLO, N, A, IA, JA, DESCA, WORK( INDD ), WORK( INDE ), WORK( INDTAU ), WORK( INDWORK ), LLWORK, IINFO ) .TP 20 .ti +4 OFFSET = 0 .TP 20 .ti +4 IF( IA.EQ.1 .AND. JA.EQ.1 .AND. RSRC_A.EQ.0 .AND. 
CSRC_A.EQ.0 ) THEN .TP 20 .ti +4 CALL PDLARED1D( N, IA, JA, DESCA, WORK( INDD ), WORK( INDD2 ), WORK( INDWORK ), LLWORK ) .TP 20 .ti +4 CALL PDLARED1D( N, IA, JA, DESCA, WORK( INDE ), WORK( INDE2 ), WORK( INDWORK ), LLWORK ) .TP 20 .ti +4 IF( .NOT.LOWER ) OFFSET = 1 .TP 20 .ti +4 ELSE .TP 20 .ti +4 DO 10 I = 1, N .TP 20 .ti +4 CALL PDELGET( 'A', ' ', WORK( INDD2+I-1 ), A, I+IA-1, I+JA-1, DESCA ) .TP 20 .ti +4 10 CONTINUE .TP 20 .ti +4 IF( LSAME( UPLO, 'U' ) ) THEN .TP 20 .ti +4 DO 20 I = 1, N - 1 .TP 20 .ti +4 CALL PDELGET( 'A', ' ', WORK( INDE2+I-1 ), A, I+IA-1, I+JA, DESCA ) .TP 20 .ti +4 20 CONTINUE .TP 20 .ti +4 ELSE .TP 20 .ti +4 DO 30 I = 1, N - 1 .TP 20 .ti +4 CALL PDELGET( 'A', ' ', WORK( INDE2+I-1 ), A, I+IA, I+JA-1, DESCA ) .TP 20 .ti +4 30 CONTINUE .TP 20 .ti +4 END IF .TP 20 .ti +4 END IF .TP 20 .ti +4 IF( WANTZ ) THEN .TP 20 .ti +4 ORDER = 'b' .TP 20 .ti +4 ELSE .TP 20 .ti +4 ORDER = 'e' .TP 20 .ti +4 END IF .TP 20 .ti +4 CALL PDSTEBZ( DESCA( CTXT_ ), RANGE, ORDER, N, VLL, VUU, IL, IU, ABSTLL, WORK( INDD2 ), WORK( INDE2+OFFSET ), M, NSPLIT, W, IWORK( INDIBL ), IWORK( INDISP ), WORK( INDWORK ), LLWORK, IWORK( 1 ), ISIZESTEBZ, IINFO ) .TP 20 .ti +4 IF( IINFO.NE.0 ) THEN .TP 20 .ti +4 INFO = INFO + IERREBZ .TP 20 .ti +4 DO 40 I = 1, M .TP 20 .ti +4 IWORK( INDIBL+I-1 ) = ABS( IWORK( INDIBL+I-1 ) ) .TP 20 .ti +4 40 CONTINUE .TP 20 .ti +4 END IF .TP 20 .ti +4 IF( WANTZ ) THEN .TP 20 .ti +4 IF( VALEIG ) THEN .TP 20 .ti +4 CALL IGAMN2D( DESCA( CTXT_ ), 'A', ' ', 1, 1, LALLWORK, 1, 1, 1, -1, -1, -1 ) .TP 20 .ti +4 MAXEIGS = DESCZ( N_ ) .TP 20 .ti +4 DO 50 NZ = MIN( MAXEIGS, M ), 0, -1 .TP 20 .ti +4 MQ0 = NUMROC( NZ, NB, 0, 0, NPCOL ) .TP 20 .ti +4 SIZESTEIN = ICEIL( NZ, NPROCS )*N + MAX( 5*N, NP0*MQ0 ) .TP 20 .ti +4 SIZEORMTR = MAX( ( NB*( NB-1 ) ) / 2, ( MQ0+NP0 )*NB ) + NB*NB .TP 20 .ti +4 SIZESYEVX = MAX( SIZESTEIN, SIZEORMTR ) .TP 20 .ti +4 IF( SIZESYEVX.LE.LALLWORK ) GO TO 60 .TP 20 .ti +4 50 CONTINUE .TP 20 .ti +4 60 CONTINUE .TP 20 .ti +4 ELSE .TP 20 .ti 
+4 NZ = M .TP 20 .ti +4 END IF .TP 20 .ti +4 NZ = MAX( NZ, 0 ) .TP 20 .ti +4 IF( NZ.NE.M ) THEN .TP 20 .ti +4 INFO = INFO + IERRSPC .TP 20 .ti +4 DO 70 I = 1, M .TP 20 .ti +4 IFAIL( I ) = 0 .TP 20 .ti +4 70 CONTINUE .TP 20 .ti +4 IF( NSPLIT.GT.1 ) THEN .TP 20 .ti +4 CALL DLASRT( 'I', M, W, IINFO ) .TP 20 .ti +4 IF( NZ.GT.0 ) THEN .TP 20 .ti +4 VUU = W( NZ ) - TEN*( EPS*ANRM+SAFMIN ) .TP 20 .ti +4 IF( VLL.GE.VUU ) THEN .TP 20 .ti +4 NZZ = 0 .TP 20 .ti +4 ELSE .TP 20 .ti +4 CALL PDSTEBZ( DESCA( CTXT_ ), RANGE, ORDER, N, VLL, VUU, IL, IU, ABSTLL, WORK( INDD2 ), WORK( INDE2+OFFSET ), NZZ, NSPLIT, W, IWORK( INDIBL ), IWORK( INDISP ), WORK( INDWORK ), LLWORK, IWORK( 1 ), ISIZESTEBZ, IINFO ) .TP 20 .ti +4 END IF .TP 20 .ti +4 IF( MOD( INFO / IERREBZ, 1 ).EQ.0 ) THEN .TP 20 .ti +4 IF( NZZ.GT.NZ .OR. IINFO.NE.0 ) THEN .TP 20 .ti +4 INFO = INFO + IERREBZ .TP 20 .ti +4 END IF .TP 20 .ti +4 END IF .TP 20 .ti +4 END IF .TP 20 .ti +4 NZ = MIN( NZ, NZZ ) .TP 20 .ti +4 END IF .TP 20 .ti +4 END IF .TP 20 .ti +4 CALL PDSTEIN( N, WORK( INDD2 ), WORK( INDE2+OFFSET ), NZ, W, IWORK( INDIBL ), IWORK( INDISP ), ORFAC, Z, IZ, JZ, DESCZ, WORK( INDWORK ), LALLWORK, IWORK( 1 ), ISIZESTEIN, IFAIL, ICLUSTR, GAP, IINFO ) .TP 20 .ti +4 IF( IINFO.GE.NZ+1 ) INFO = INFO + IERRCLS .TP 20 .ti +4 IF( MOD( IINFO, NZ+1 ).NE.0 ) INFO = INFO + IERREIN .TP 20 .ti +4 IF( NZ.GT.0 ) THEN .TP 20 .ti +4 CALL PDORMTR( 'L', UPLO, 'N', N, NZ, A, IA, JA, DESCA, WORK( INDTAU ), Z, IZ, JZ, DESCZ, WORK( INDWORK ), LLWORK, IINFO ) .TP 20 .ti +4 END IF .TP 20 .ti +4 END IF .TP 20 .ti +4 IF( ISCALE.EQ.1 ) THEN .TP 20 .ti +4 CALL DSCAL( M, ONE / SIGMA, W, 1 ) .TP 20 .ti +4 END IF .TP 20 .ti +4 WORK( 1 ) = DBLE( LWMIN ) .TP 20 .ti +4 IWORK( 1 ) = LIWMIN .TP 20 .ti +4 RETURN .TP 20 .ti +4 END .SH PURPOSE scalapack-doc-1.5/man/manl/pdsygs2.l0100644000056400000620000001420406335610637017021 0ustar pfrauenfstaff.TH PDSYGS2 l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PDSYGS2 - reduce a real 
symmetric-definite generalized eigenproblem to standard form .SH SYNOPSIS .TP 20 SUBROUTINE PDSYGS2( IBTYPE, UPLO, N, A, IA, JA, DESCA, B, IB, JB, DESCB, INFO ) .TP 20 .ti +4 CHARACTER UPLO .TP 20 .ti +4 INTEGER IA, IB, IBTYPE, INFO, JA, JB, N .TP 20 .ti +4 INTEGER DESCA( * ), DESCB( * ) .TP 20 .ti +4 DOUBLE PRECISION A( * ), B( * ) .SH PURPOSE PDSYGS2 reduces a real symmetric-definite generalized eigenproblem to standard form. In the following, sub( A ) denotes A( IA:IA+N-1, JA:JA+N-1 ) and sub( B ) denotes B( IB:IB+N-1, JB:JB+N-1 ). .br If IBTYPE = 1, the problem is sub( A )*x = lambda*sub( B )*x, and sub( A ) is overwritten by inv(U**T)*sub( A )*inv(U) or inv(L)*sub( A )*inv(L**T). .br If IBTYPE = 2 or 3, the problem is sub( A )*sub( B )*x = lambda*x or sub( B )*sub( A )*x = lambda*x, and sub( A ) is overwritten by U*sub( A )*U**T or L**T*sub( A )*L. .br sub( B ) must have been previously factorized as U**T*U or L*L**T by PDPOTRF. .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclically distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ ) The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distributed over. The context itself is global, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. 
.br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 9 IBTYPE (global input) INTEGER = 1: compute inv(U**T)*sub( A )*inv(U) or inv(L)*sub( A )*inv(L**T); = 2 or 3: compute U*sub( A )*U**T or L**T*sub( A )*L. .TP 8 UPLO (global input) CHARACTER .br = 'U': Upper triangle of sub( A ) is stored and sub( B ) is factored as U**T*U; = 'L': Lower triangle of sub( A ) is stored and sub( B ) is factored as L*L**T. .TP 8 N (global input) INTEGER The order of the matrices sub( A ) and sub( B ). N >= 0. .TP 8 A (local input/local output) DOUBLE PRECISION pointer into the local memory to an array of dimension (LLD_A, LOCc(JA+N-1)). On entry, this array contains the local pieces of the N-by-N symmetric distributed matrix sub( A ). 
If UPLO = 'U', the leading N-by-N upper triangular part of sub( A ) contains the upper triangular part of the matrix, and its strictly lower triangular part is not referenced. If UPLO = 'L', the leading N-by-N lower triangular part of sub( A ) contains the lower triangular part of the matrix, and its strictly upper triangular part is not referenced. On exit, if INFO = 0, the transformed matrix, stored in the same format as sub( A ). .TP 8 IA (global input) INTEGER A's global row index, which points to the beginning of the submatrix which is to be operated on. .TP 8 JA (global input) INTEGER A's global column index, which points to the beginning of the submatrix which is to be operated on. .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 B (local input) DOUBLE PRECISION pointer into the local memory to an array of dimension (LLD_B, LOCc(JB+N-1)). On entry, this array contains the local pieces of the triangular factor from the Cholesky factorization of sub( B ), as returned by PDPOTRF. .TP 8 IB (global input) INTEGER B's global row index, which points to the beginning of the submatrix which is to be operated on. .TP 8 JB (global input) INTEGER B's global column index, which points to the beginning of the submatrix which is to be operated on. .TP 8 DESCB (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix B. .TP 8 INFO (global output) INTEGER = 0: successful exit .br < 0: if the i-th argument is an array and the j-th entry had an illegal value, then INFO = -(i*100+j); if the i-th argument is a scalar and had an illegal value, then INFO = -i. 
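The local dimensions LOCr and LOCc described in the Notes are computed by the ScaLAPACK tool function NUMROC. The following Python transcription of the standard NUMROC algorithm (an illustrative reimplementation, not the ScaLAPACK source) also checks the upper bound LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A quoted above:

```python
import math

def numroc(n, nb, iproc, isrcproc, nprocs):
    """Number of rows/columns of an N-element dimension, distributed in
    blocks of size NB over NPROCS processes (first block held by process
    ISRCPROC), that land on process IPROC -- mirrors ScaLAPACK's NUMROC."""
    mydist = (nprocs + iproc - isrcproc) % nprocs  # distance from source process
    nblocks = n // nb                              # number of full blocks
    num = (nblocks // nprocs) * nb                 # whole rounds of blocks
    extrablks = nblocks % nprocs                   # leftover full blocks
    if mydist < extrablks:
        num += nb                                  # one extra full block
    elif mydist == extrablks:
        num += n % nb                              # the trailing partial block
    return num

# Every element lands on exactly one process, so the per-process counts
# sum to N, and each count respects LOCr(M) <= ceil(ceil(M/MB)/NPROW)*MB.
n, nb, nprow = 10, 3, 4
counts = [numroc(n, nb, p, 0, nprow) for p in range(nprow)]
assert sum(counts) == n
bound = math.ceil(math.ceil(n / nb) / nprow) * nb
assert all(c <= bound for c in counts)
```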
scalapack-doc-1.5/man/manl/pdsygst.l
.TH PDSYGST l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PDSYGST - reduce a real symmetric-definite generalized eigenproblem to standard form .SH SYNOPSIS .TP 20 SUBROUTINE PDSYGST( IBTYPE, UPLO, N, A, IA, JA, DESCA, B, IB, JB, DESCB, SCALE, INFO ) .TP 20 .ti +4 CHARACTER UPLO .TP 20 .ti +4 INTEGER IA, IB, IBTYPE, INFO, JA, JB, N .TP 20 .ti +4 DOUBLE PRECISION SCALE .TP 20 .ti +4 INTEGER DESCA( * ), DESCB( * ) .TP 20 .ti +4 DOUBLE PRECISION A( * ), B( * ) .SH PURPOSE PDSYGST reduces a real symmetric-definite generalized eigenproblem to standard form. In the following, sub( A ) denotes A( IA:IA+N-1, JA:JA+N-1 ) and sub( B ) denotes B( IB:IB+N-1, JB:JB+N-1 ). .br If IBTYPE = 1, the problem is sub( A )*x = lambda*sub( B )*x, and sub( A ) is overwritten by inv(U**T)*sub( A )*inv(U) or inv(L)*sub( A )*inv(L**T). .br If IBTYPE = 2 or 3, the problem is sub( A )*sub( B )*x = lambda*x or sub( B )*sub( A )*x = lambda*x, and sub( A ) is overwritten by U*sub( A )*U**T or L**T*sub( A )*L. .br sub( B ) must have been previously factorized as U**T*U or L*L**T by PDPOTRF. .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclically distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ ) The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distributed over. 
The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 9 IBTYPE (global input) INTEGER = 1: compute inv(U**T)*sub( A )*inv(U) or inv(L)*sub( A )*inv(L**T); = 2 or 3: compute U*sub( A )*U**T or L**T*sub( A )*L. .TP 8 UPLO (global input) CHARACTER .br = 'U': Upper triangle of sub( A ) is stored and sub( B ) is factored as U**T*U; = 'L': Lower triangle of sub( A ) is stored and sub( B ) is factored as L*L**T. .TP 8 N (global input) INTEGER The order of the matrices sub( A ) and sub( B ). N >= 0. 
.TP 8 A (local input/local output) DOUBLE PRECISION pointer into the local memory to an array of dimension (LLD_A, LOCc(JA+N-1)). On entry, this array contains the local pieces of the N-by-N symmetric distributed matrix sub( A ). If UPLO = 'U', the leading N-by-N upper triangular part of sub( A ) contains the upper triangular part of the matrix, and its strictly lower triangular part is not referenced. If UPLO = 'L', the leading N-by-N lower triangular part of sub( A ) contains the lower triangular part of the matrix, and its strictly upper triangular part is not referenced. On exit, if INFO = 0, the transformed matrix, stored in the same format as sub( A ). .TP 8 IA (global input) INTEGER A's global row index, which points to the beginning of the submatrix which is to be operated on. .TP 8 JA (global input) INTEGER A's global column index, which points to the beginning of the submatrix which is to be operated on. .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 B (local input) DOUBLE PRECISION pointer into the local memory to an array of dimension (LLD_B, LOCc(JB+N-1)). On entry, this array contains the local pieces of the triangular factor from the Cholesky factorization of sub( B ), as returned by PDPOTRF. .TP 8 IB (global input) INTEGER B's global row index, which points to the beginning of the submatrix which is to be operated on. .TP 8 JB (global input) INTEGER B's global column index, which points to the beginning of the submatrix which is to be operated on. .TP 8 DESCB (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix B. .TP 8 SCALE (global output) DOUBLE PRECISION Amount by which the eigenvalues should be scaled to compensate for the scaling performed in this routine. At present, SCALE is always returned as 1.0, it is returned here to allow for future enhancement. 
.TP 8 INFO (global output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. scalapack-doc-1.5/man/manl/pdsygvx.l .TH PDSYGVX l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PDSYGVX - compute all the eigenvalues, and optionally, the eigenvectors of a real generalized symmetric-definite eigenproblem .SH SYNOPSIS .TP 20 SUBROUTINE PDSYGVX( IBTYPE, JOBZ, RANGE, UPLO, N, A, IA, JA, DESCA, B, IB, JB, DESCB, VL, VU, IL, IU, ABSTOL, M, NZ, W, ORFAC, Z, IZ, JZ, DESCZ, WORK, LWORK, IWORK, LIWORK, IFAIL, ICLUSTR, GAP, INFO ) .TP 20 .ti +4 CHARACTER JOBZ, RANGE, UPLO .TP 20 .ti +4 INTEGER IA, IB, IBTYPE, IL, INFO, IU, IZ, JA, JB, JZ, LIWORK, LWORK, M, N, NZ .TP 20 .ti +4 DOUBLE PRECISION ABSTOL, ORFAC, VL, VU .TP 20 .ti +4 INTEGER DESCA( * ), DESCB( * ), DESCZ( * ), ICLUSTR( * ), IFAIL( * ), IWORK( * ) .TP 20 .ti +4 DOUBLE PRECISION A( * ), B( * ), GAP( * ), W( * ), WORK( * ), Z( * ) .TP 20 .ti +4 INTEGER BLOCK_CYCLIC_2D, DLEN_, DTYPE_, CTXT_, M_, N_, MB_, NB_, RSRC_, CSRC_, LLD_ .TP 20 .ti +4 PARAMETER ( BLOCK_CYCLIC_2D = 1, DLEN_ = 9, DTYPE_ = 1, CTXT_ = 2, M_ = 3, N_ = 4, MB_ = 5, NB_ = 6, RSRC_ = 7, CSRC_ = 8, LLD_ = 9 ) .TP 20 .ti +4 DOUBLE PRECISION ONE .TP 20 .ti +4 PARAMETER ( ONE = 1.0D+0 ) .TP 20 .ti +4 DOUBLE PRECISION FIVE, ZERO .TP 20 .ti +4 PARAMETER ( FIVE = 5.0D+0, ZERO = 0.0D+0 ) .TP 20 .ti +4 INTEGER IERRNPD .TP 20 .ti +4 PARAMETER ( IERRNPD = 16 ) .TP 20 .ti +4 LOGICAL ALLEIG, INDEIG, LQUERY, UPPER, VALEIG, WANTZ .TP 20 .ti +4 CHARACTER TRANS .TP 20 .ti +4 INTEGER IACOL, IAROW, IBCOL, IBROW, ICOFFA, ICOFFB, ICTXT, IROFFA, IROFFB, LIWMIN, LWMIN, MQ0, MYCOL, MYROW, NB, NEIG, NN, NP0, NPCOL, NPROW .TP 20 .ti +4 DOUBLE PRECISION EPS, SCALE .TP 20 .ti +4 INTEGER IDUM1( 5 ), IDUM2( 5 ) .TP 20 .ti +4 LOGICAL LSAME .TP 20 .ti +4 INTEGER ICEIL, INDXG2P, NUMROC .TP 20 .ti +4 DOUBLE PRECISION PDLAMCH .TP 20 .ti +4 EXTERNAL LSAME, ICEIL, 
INDXG2P, NUMROC, PDLAMCH .TP 20 .ti +4 EXTERNAL BLACS_GRIDINFO, CHK1MAT, DGEBR2D, DGEBS2D, DSCAL, PCHK1MAT, PCHK2MAT, PDPOTRF, PDSYEVX, PDSYGST, PDTRMM, PDTRSM, PXERBLA .TP 20 .ti +4 INTRINSIC ABS, DBLE, ICHAR, MAX, MIN, MOD .TP 20 .ti +4 IF( BLOCK_CYCLIC_2D*CSRC_*CTXT_*DLEN_*DTYPE_*LLD_*MB_*M_*NB_*N_* RSRC_.LT.0 )RETURN .TP 20 .ti +4 ICTXT = DESCA( CTXT_ ) .TP 20 .ti +4 CALL BLACS_GRIDINFO( ICTXT, NPROW, NPCOL, MYROW, MYCOL ) .TP 20 .ti +4 INFO = 0 .TP 20 .ti +4 IF( NPROW.EQ.-1 ) THEN .TP 20 .ti +4 INFO = -( 900+CTXT_ ) .TP 20 .ti +4 ELSE IF( DESCA( CTXT_ ).NE.DESCB( CTXT_ ) ) THEN .TP 20 .ti +4 INFO = -( 1300+CTXT_ ) .TP 20 .ti +4 ELSE IF( DESCA( CTXT_ ).NE.DESCZ( CTXT_ ) ) THEN .TP 20 .ti +4 INFO = -( 2600+CTXT_ ) .TP 20 .ti +4 ELSE .TP 20 .ti +4 EPS = PDLAMCH( DESCA( CTXT_ ), 'Precision' ) .TP 20 .ti +4 WANTZ = LSAME( JOBZ, 'V' ) .TP 20 .ti +4 UPPER = LSAME( UPLO, 'U' ) .TP 20 .ti +4 ALLEIG = LSAME( RANGE, 'A' ) .TP 20 .ti +4 VALEIG = LSAME( RANGE, 'V' ) .TP 20 .ti +4 INDEIG = LSAME( RANGE, 'I' ) .TP 20 .ti +4 CALL CHK1MAT( N, 4, N, 4, IA, JA, DESCA, 9, INFO ) .TP 20 .ti +4 CALL CHK1MAT( N, 4, N, 4, IB, JB, DESCB, 13, INFO ) .TP 20 .ti +4 CALL CHK1MAT( N, 4, N, 4, IZ, JZ, DESCZ, 26, INFO ) .TP 20 .ti +4 IF( INFO.EQ.0 ) THEN .TP 20 .ti +4 IF( MYROW.EQ.0 .AND. 
MYCOL.EQ.0 ) THEN .TP 20 .ti +4 WORK( 1 ) = ABSTOL .TP 20 .ti +4 IF( VALEIG ) THEN .TP 20 .ti +4 WORK( 2 ) = VL .TP 20 .ti +4 WORK( 3 ) = VU .TP 20 .ti +4 ELSE .TP 20 .ti +4 WORK( 2 ) = ZERO .TP 20 .ti +4 WORK( 3 ) = ZERO .TP 20 .ti +4 END IF .TP 20 .ti +4 CALL DGEBS2D( DESCA( CTXT_ ), 'ALL', ' ', 3, 1, WORK, 3 ) .TP 20 .ti +4 ELSE .TP 20 .ti +4 CALL DGEBR2D( DESCA( CTXT_ ), 'ALL', ' ', 3, 1, WORK, 3, 0, 0 ) .TP 20 .ti +4 END IF .TP 20 .ti +4 IAROW = INDXG2P( IA, DESCA( MB_ ), MYROW, DESCA( RSRC_ ), NPROW ) .TP 20 .ti +4 IBROW = INDXG2P( IB, DESCB( MB_ ), MYROW, DESCB( RSRC_ ), NPROW ) .TP 20 .ti +4 IACOL = INDXG2P( JA, DESCA( NB_ ), MYCOL, DESCA( CSRC_ ), NPCOL ) .TP 20 .ti +4 IBCOL = INDXG2P( JB, DESCB( NB_ ), MYCOL, DESCB( CSRC_ ), NPCOL ) .TP 20 .ti +4 IROFFA = MOD( IA-1, DESCA( MB_ ) ) .TP 20 .ti +4 ICOFFA = MOD( JA-1, DESCA( NB_ ) ) .TP 20 .ti +4 IROFFB = MOD( IB-1, DESCB( MB_ ) ) .TP 20 .ti +4 ICOFFB = MOD( JB-1, DESCB( NB_ ) ) .TP 20 .ti +4 LQUERY = .FALSE. .TP 20 .ti +4 IF( LWORK.EQ.-1 .OR. LIWORK.EQ.-1 ) LQUERY = .TRUE. .TP 20 .ti +4 LIWMIN = 6*MAX( N, ( NPROW*NPCOL )+1, 4 ) .TP 20 .ti +4 NB = DESCA( MB_ ) .TP 20 .ti +4 NN = MAX( N, NB, 2 ) .TP 20 .ti +4 NP0 = NUMROC( NN, NB, 0, 0, NPROW ) .TP 20 .ti +4 IF( ( .NOT.WANTZ ) .OR. ( VALEIG .AND. ( .NOT.LQUERY ) ) ) THEN .TP 20 .ti +4 LWMIN = 5*N + MAX( 5*NN, NB*( NP0+1 ) ) .TP 20 .ti +4 NEIG = 0 .TP 20 .ti +4 ELSE .TP 20 .ti +4 IF( ALLEIG .OR. VALEIG ) THEN .TP 20 .ti +4 NEIG = N .TP 20 .ti +4 ELSE IF( INDEIG ) THEN .TP 20 .ti +4 NEIG = IU - IL + 1 .TP 20 .ti +4 END IF .TP 20 .ti +4 MQ0 = NUMROC( MAX( NEIG, NB, 2 ), NB, 0, 0, NPCOL ) .TP 20 .ti +4 LWMIN = 5*N + MAX( 5*NN, NP0*MQ0+2*NB*NB ) + ICEIL( NEIG, NPROW*NPCOL )*NN .TP 20 .ti +4 END IF .TP 20 .ti +4 IF( IBTYPE.LT.1 .OR. IBTYPE.GT.3 ) THEN .TP 20 .ti +4 INFO = -1 .TP 20 .ti +4 ELSE IF( .NOT.( WANTZ .OR. LSAME( JOBZ, 'N' ) ) ) THEN .TP 20 .ti +4 INFO = -2 .TP 20 .ti +4 ELSE IF( .NOT.( ALLEIG .OR. VALEIG .OR. 
INDEIG ) ) THEN .TP 20 .ti +4 INFO = -3 .TP 20 .ti +4 ELSE IF( .NOT.UPPER .AND. .NOT.LSAME( UPLO, 'L' ) ) THEN .TP 20 .ti +4 INFO = -4 .TP 20 .ti +4 ELSE IF( N.LT.0 ) THEN .TP 20 .ti +4 INFO = -5 .TP 20 .ti +4 ELSE IF( IROFFA.NE.0 ) THEN .TP 20 .ti +4 INFO = -7 .TP 20 .ti +4 ELSE IF( ICOFFA.NE.0 ) THEN .TP 20 .ti +4 INFO = -8 .TP 20 .ti +4 ELSE IF( DESCA( MB_ ).NE.DESCA( NB_ ) ) THEN .TP 20 .ti +4 INFO = -( 900+NB_ ) .TP 20 .ti +4 ELSE IF( DESCA( M_ ).NE.DESCB( M_ ) ) THEN .TP 20 .ti +4 INFO = -( 1300+M_ ) .TP 20 .ti +4 ELSE IF( DESCA( N_ ).NE.DESCB( N_ ) ) THEN .TP 20 .ti +4 INFO = -( 1300+N_ ) .TP 20 .ti +4 ELSE IF( DESCA( MB_ ).NE.DESCB( MB_ ) ) THEN .TP 20 .ti +4 INFO = -( 1300+MB_ ) .TP 20 .ti +4 ELSE IF( DESCA( NB_ ).NE.DESCB( NB_ ) ) THEN .TP 20 .ti +4 INFO = -( 1300+NB_ ) .TP 20 .ti +4 ELSE IF( DESCA( RSRC_ ).NE.DESCB( RSRC_ ) ) THEN .TP 20 .ti +4 INFO = -( 1300+RSRC_ ) .TP 20 .ti +4 ELSE IF( DESCA( CSRC_ ).NE.DESCB( CSRC_ ) ) THEN .TP 20 .ti +4 INFO = -( 1300+CSRC_ ) .TP 20 .ti +4 ELSE IF( DESCA( CTXT_ ).NE.DESCB( CTXT_ ) ) THEN .TP 20 .ti +4 INFO = -( 1300+CTXT_ ) .TP 20 .ti +4 ELSE IF( DESCA( M_ ).NE.DESCZ( M_ ) ) THEN .TP 20 .ti +4 INFO = -( 2200+M_ ) .TP 20 .ti +4 ELSE IF( DESCA( N_ ).NE.DESCZ( N_ ) ) THEN .TP 20 .ti +4 INFO = -( 2200+N_ ) .TP 20 .ti +4 ELSE IF( DESCA( MB_ ).NE.DESCZ( MB_ ) ) THEN .TP 20 .ti +4 INFO = -( 2200+MB_ ) .TP 20 .ti +4 ELSE IF( DESCA( NB_ ).NE.DESCZ( NB_ ) ) THEN .TP 20 .ti +4 INFO = -( 2200+NB_ ) .TP 20 .ti +4 ELSE IF( DESCA( RSRC_ ).NE.DESCZ( RSRC_ ) ) THEN .TP 20 .ti +4 INFO = -( 2200+RSRC_ ) .TP 20 .ti +4 ELSE IF( DESCA( CSRC_ ).NE.DESCZ( CSRC_ ) ) THEN .TP 20 .ti +4 INFO = -( 2200+CSRC_ ) .TP 20 .ti +4 ELSE IF( DESCA( CTXT_ ).NE.DESCZ( CTXT_ ) ) THEN .TP 20 .ti +4 INFO = -( 2200+CTXT_ ) .TP 20 .ti +4 ELSE IF( IROFFB.NE.0 .OR. IBROW.NE.IAROW ) THEN .TP 20 .ti +4 INFO = -11 .TP 20 .ti +4 ELSE IF( ICOFFB.NE.0 .OR. IBCOL.NE.IACOL ) THEN .TP 20 .ti +4 INFO = -12 .TP 20 .ti +4 ELSE IF( VALEIG .AND. N.GT.0 .AND. 
VU.LE.VL ) THEN .TP 20 .ti +4 INFO = -15 .TP 20 .ti +4 ELSE IF( INDEIG .AND. ( IL.LT.1 .OR. IL.GT.MAX( 1, N ) ) ) THEN .TP 20 .ti +4 INFO = -16 .TP 20 .ti +4 ELSE IF( INDEIG .AND. ( IU.LT.MIN( N, IL ) .OR. IU.GT.N ) ) THEN .TP 20 .ti +4 INFO = -17 .TP 20 .ti +4 ELSE IF( VALEIG .AND. ( ABS( WORK( 2 )-VL ).GT.FIVE*EPS* ABS( VL ) ) ) THEN .TP 20 .ti +4 INFO = -14 .TP 20 .ti +4 ELSE IF( VALEIG .AND. ( ABS( WORK( 3 )-VU ).GT.FIVE*EPS* ABS( VU ) ) ) THEN .TP 20 .ti +4 INFO = -15 .TP 20 .ti +4 ELSE IF( ABS( WORK( 1 )-ABSTOL ).GT.FIVE*EPS*ABS( ABSTOL ) ) THEN .TP 20 .ti +4 INFO = -18 .TP 20 .ti +4 ELSE IF( LWORK.LT.LWMIN .AND. LWORK.NE.-1 ) THEN .TP 20 .ti +4 INFO = -28 .TP 20 .ti +4 ELSE IF( LIWORK.LT.LIWMIN .AND. LIWORK.NE.-1 ) THEN .TP 20 .ti +4 INFO = -30 .TP 20 .ti +4 END IF .TP 20 .ti +4 END IF .TP 20 .ti +4 IDUM1( 1 ) = IBTYPE .TP 20 .ti +4 IDUM2( 1 ) = 1 .TP 20 .ti +4 IF( WANTZ ) THEN .TP 20 .ti +4 IDUM1( 2 ) = ICHAR( 'V' ) .TP 20 .ti +4 ELSE .TP 20 .ti +4 IDUM1( 2 ) = ICHAR( 'N' ) .TP 20 .ti +4 END IF .TP 20 .ti +4 IDUM2( 2 ) = 2 .TP 20 .ti +4 IF( UPPER ) THEN .TP 20 .ti +4 IDUM1( 3 ) = ICHAR( 'U' ) .TP 20 .ti +4 ELSE .TP 20 .ti +4 IDUM1( 3 ) = ICHAR( 'L' ) .TP 20 .ti +4 END IF .TP 20 .ti +4 IDUM2( 3 ) = 3 .TP 20 .ti +4 IF( ALLEIG ) THEN .TP 20 .ti +4 IDUM1( 4 ) = ICHAR( 'A' ) .TP 20 .ti +4 ELSE IF( INDEIG ) THEN .TP 20 .ti +4 IDUM1( 4 ) = ICHAR( 'I' ) .TP 20 .ti +4 ELSE .TP 20 .ti +4 IDUM1( 4 ) = ICHAR( 'V' ) .TP 20 .ti +4 END IF .TP 20 .ti +4 IDUM2( 4 ) = 4 .TP 20 .ti +4 IF( LQUERY ) THEN .TP 20 .ti +4 IDUM1( 5 ) = -1 .TP 20 .ti +4 ELSE .TP 20 .ti +4 IDUM1( 5 ) = 1 .TP 20 .ti +4 END IF .TP 20 .ti +4 IDUM2( 5 ) = 5 .TP 20 .ti +4 CALL PCHK2MAT( N, 4, N, 4, IA, JA, DESCA, 9, N, 4, N, 4, IB, JB, DESCB, 13, 5, IDUM1, IDUM2, INFO ) .TP 20 .ti +4 CALL PCHK1MAT( N, 4, N, 4, IZ, JZ, DESCZ, 26, 0, IDUM1, IDUM2, INFO ) .TP 20 .ti +4 END IF .TP 20 .ti +4 WORK( 1 ) = DBLE( LWMIN ) .TP 20 .ti +4 IWORK( 1 ) = LIWMIN .TP 20 .ti +4 IF( INFO.NE.0 ) THEN .TP 20 .ti +4 CALL 
PXERBLA( ICTXT, 'PDSYGVX ', -INFO ) .TP 20 .ti +4 RETURN .TP 20 .ti +4 ELSE IF( LQUERY ) THEN .TP 20 .ti +4 RETURN .TP 20 .ti +4 END IF .TP 20 .ti +4 CALL PDPOTRF( UPLO, N, B, IB, JB, DESCB, INFO ) .TP 20 .ti +4 IF( INFO.NE.0 ) THEN .TP 20 .ti +4 IFAIL( 1 ) = INFO .TP 20 .ti +4 INFO = IERRNPD .TP 20 .ti +4 RETURN .TP 20 .ti +4 END IF .TP 20 .ti +4 CALL PDSYGST( IBTYPE, UPLO, N, A, IA, JA, DESCA, B, IB, JB, DESCB, SCALE, INFO ) .TP 20 .ti +4 CALL PDSYEVX( JOBZ, RANGE, UPLO, N, A, IA, JA, DESCA, VL, VU, IL, IU, ABSTOL, M, NZ, W, ORFAC, Z, IZ, JZ, DESCZ, WORK, LWORK, IWORK, LIWORK, IFAIL, ICLUSTR, GAP, INFO ) .TP 20 .ti +4 IF( WANTZ ) THEN .TP 20 .ti +4 NEIG = M .TP 20 .ti +4 IF( IBTYPE.EQ.1 .OR. IBTYPE.EQ.2 ) THEN .TP 20 .ti +4 IF( UPPER ) THEN .TP 20 .ti +4 TRANS = 'N' .TP 20 .ti +4 ELSE .TP 20 .ti +4 TRANS = 'T' .TP 20 .ti +4 END IF .TP 20 .ti +4 CALL PDTRSM( 'Left', UPLO, TRANS, 'Non-unit', N, NEIG, ONE, B, IB, JB, DESCB, Z, IZ, JZ, DESCZ ) .TP 20 .ti +4 ELSE IF( IBTYPE.EQ.3 ) THEN .TP 20 .ti +4 IF( UPPER ) THEN .TP 20 .ti +4 TRANS = 'T' .TP 20 .ti +4 ELSE .TP 20 .ti +4 TRANS = 'N' .TP 20 .ti +4 END IF .TP 20 .ti +4 CALL PDTRMM( 'Left', UPLO, TRANS, 'Non-unit', N, NEIG, ONE, B, IB, JB, DESCB, Z, IZ, JZ, DESCZ ) .TP 20 .ti +4 END IF .TP 20 .ti +4 END IF .TP 20 .ti +4 IF( SCALE.NE.ONE ) THEN .TP 20 .ti +4 CALL DSCAL( N, SCALE, W, 1 ) .TP 20 .ti +4 END IF .TP 20 .ti +4 RETURN .TP 20 .ti +4 END .SH PURPOSE PDSYGVX computes all the eigenvalues, and optionally, the eigenvectors of a real generalized symmetric-definite eigenproblem, of the form sub( A )*x = lambda*sub( B )*x, sub( A )*sub( B )*x = lambda*x, or sub( B )*sub( A )*x = lambda*x. Here sub( A ) denotes A( IA:IA+N-1, JA:JA+N-1 ) and sub( B ) denotes B( IB:IB+N-1, JB:JB+N-1 ); sub( A ) and sub( B ) are assumed to be symmetric, and sub( B ) is also positive definite. scalapack-doc-1.5/man/manl/pdsytd2.l .TH PDSYTD2 l "12 May 1997" "LAPACK version 1.5" "LAPACK auxiliary routine (version 1.5)" .SH NAME PDSYTD2 - reduce a real symmetric matrix sub( A ) to symmetric tridiagonal form T by an orthogonal similarity transformation .SH SYNOPSIS .TP 20 SUBROUTINE PDSYTD2( UPLO, N, A, IA, JA, DESCA, D, E, TAU, WORK, LWORK, INFO ) .TP 20 .ti +4 CHARACTER UPLO .TP 20 .ti +4 INTEGER IA, INFO, JA, LWORK, N .TP 20 .ti +4 INTEGER DESCA( * ) .TP 20 .ti +4 DOUBLE PRECISION A( * 
), D( * ), E( * ), TAU( * ), WORK( * ) .SH PURPOSE PDSYTD2 reduces a real symmetric matrix sub( A ) to symmetric tridiagonal form T by an orthogonal similarity transformation: Q' * sub( A ) * Q = T, where sub( A ) = A(IA:IA+N-1,JA:JA+N-1). Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. 
.br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 UPLO (global input) CHARACTER Specifies whether the upper or lower triangular part of the symmetric matrix sub( A ) is stored: .br = 'U': Upper triangular .br = 'L': Lower triangular .TP 8 N (global input) INTEGER The number of rows and columns to be operated on, i.e. the order of the distributed submatrix sub( A ). N >= 0. .TP 8 A (local input/local output) DOUBLE PRECISION pointer into the local memory to an array of dimension (LLD_A,LOCc(JA+N-1)). On entry, this array contains the local pieces of the symmetric distributed matrix sub( A ). If UPLO = 'U', the leading N-by-N upper triangular part of sub( A ) contains the upper triangular part of the matrix, and its strictly lower triangular part is not referenced. If UPLO = 'L', the leading N-by-N lower triangular part of sub( A ) contains the lower triangular part of the matrix, and its strictly upper triangular part is not referenced. 
On exit, if UPLO = 'U', the diagonal and first superdiagonal of sub( A ) are overwritten by the corresponding elements of the tridiagonal matrix T, and the elements above the first superdiagonal, with the array TAU, represent the orthogonal matrix Q as a product of elementary reflectors; if UPLO = 'L', the diagonal and first subdiagonal of sub( A ) are overwritten by the corresponding elements of the tridiagonal matrix T, and the elements below the first subdiagonal, with the array TAU, represent the orthogonal matrix Q as a product of elementary reflectors. See Further Details. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 D (local output) DOUBLE PRECISION array, dimension LOCc(JA+N-1) The diagonal elements of the tridiagonal matrix T: D(i) = A(i,i). D is tied to the distributed matrix A. .TP 8 E (local output) DOUBLE PRECISION array, dimension LOCc(JA+N-1) if UPLO = 'U', LOCc(JA+N-2) otherwise. The off-diagonal elements of the tridiagonal matrix T: E(i) = A(i,i+1) if UPLO = 'U', E(i) = A(i+1,i) if UPLO = 'L'. E is tied to the distributed matrix A. .TP 8 TAU (local output) DOUBLE PRECISION, array, dimension LOCc(JA+N-1). This array contains the scalar factors TAU of the elementary reflectors. TAU is tied to the distributed matrix A. .TP 8 WORK (local workspace/local output) DOUBLE PRECISION array, dimension (LWORK) On exit, WORK( 1 ) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. LWORK is local input and must be at least LWORK >= 3*N. If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. 
Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 INFO (local output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. .SH FURTHER DETAILS If UPLO = 'U', the matrix Q is represented as a product of elementary reflectors .br Q = H(n-1) . . . H(2) H(1). .br Each H(i) has the form .br H(i) = I - tau * v * v' .br where tau is a real scalar, and v is a real vector with .br v(i+1:n) = 0 and v(i) = 1; v(1:i-1) is stored on exit in .br A(ia:ia+i-2,ja+i), and tau in TAU(ja+i-1). .br If UPLO = 'L', the matrix Q is represented as a product of elementary reflectors .br Q = H(1) H(2) . . . H(n-1). .br Each H(i) has the form .br H(i) = I - tau * v * v' .br where tau is a real scalar, and v is a real vector with .br v(1:i) = 0 and v(i+1) = 1; v(i+2:n) is stored on exit in .br A(ia+i+1:ia+n-1,ja+i-1), and tau in TAU(ja+i-1). .br The contents of sub( A ) on exit are illustrated by the following examples with n = 5: .br if UPLO = 'U': .br ( d e v2 v3 v4 ) .br ( d e v3 v4 ) .br ( d e v4 ) .br ( d e ) .br ( d ) .br if UPLO = 'L': .br ( d ) .br ( e d ) .br ( v1 e d ) .br ( v1 v2 e d ) .br ( v1 v2 v3 e d ) .br where d and e denote diagonal and off-diagonal elements of T, and vi denotes an element of the vector defining H(i). .br Alignment requirements .br ====================== .br The distributed submatrix sub( A ) must satisfy some alignment properties, namely the following expression should be true: .br ( MB_A.EQ.NB_A .AND. IROFFA.EQ.ICOFFA ) with .br IROFFA = MOD( IA-1, MB_A ) and ICOFFA = MOD( JA-1, NB_A ). 
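The alignment requirement just stated is simple modular arithmetic on the starting indices and blocking factors. A small illustrative Python check (the function name is ours; MB_A, NB_A, IA, JA are the quantities defined in the Notes above):

```python
def satisfies_alignment(ia, ja, mb_a, nb_a):
    """Check the PDSYTD2 alignment requirement
    ( MB_A.EQ.NB_A .AND. IROFFA.EQ.ICOFFA ), where
    IROFFA = MOD( IA-1, MB_A ) and ICOFFA = MOD( JA-1, NB_A ).
    Illustrative sketch, not a ScaLAPACK routine."""
    iroffa = (ia - 1) % mb_a     # row offset within the first block
    icoffa = (ja - 1) % nb_a     # column offset within the first block
    return mb_a == nb_a and iroffa == icoffa

# Square 64x64 blocking; sub( A ) starting at (IA, JA) = (65, 65)
# lies exactly on a block boundary, so IROFFA = ICOFFA = 0:
print(satisfies_alignment(65, 65, 64, 64))   # True
# Shifting the starting row by one breaks the requirement:
print(satisfies_alignment(66, 65, 64, 64))   # False
```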
scalapack-doc-1.5/man/manl/pdsytrd.l0100644000056400000620000002075206335610637017124 0ustar pfrauenfstaff.TH PDSYTRD l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PDSYTRD - reduce a real symmetric matrix sub( A ) to symmetric tridiagonal form T by an orthogonal similarity transformation .SH SYNOPSIS .TP 20 SUBROUTINE PDSYTRD( UPLO, N, A, IA, JA, DESCA, D, E, TAU, WORK, LWORK, INFO ) .TP 20 .ti +4 CHARACTER UPLO .TP 20 .ti +4 INTEGER IA, INFO, JA, LWORK, N .TP 20 .ti +4 INTEGER DESCA( * ) .TP 20 .ti +4 DOUBLE PRECISION A( * ), D( * ), E( * ), TAU( * ), WORK( * ) .SH PURPOSE PDSYTRD reduces a real symmetric matrix sub( A ) to symmetric tridiagonal form T by an orthogonal similarity transformation: Q' * sub( A ) * Q = T, where sub( A ) = A(IA:IA+N-1,JA:JA+N-1). Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. 
.br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 UPLO (global input) CHARACTER Specifies whether the upper or lower triangular part of the symmetric matrix sub( A ) is stored: .br = 'U': Upper triangular .br = 'L': Lower triangular .TP 8 N (global input) INTEGER The number of rows and columns to be operated on, i.e. the order of the distributed submatrix sub( A ). N >= 0. .TP 8 A (local input/local output) DOUBLE PRECISION pointer into the local memory to an array of dimension (LLD_A,LOCc(JA+N-1)). On entry, this array contains the local pieces of the symmetric distributed matrix sub( A ). If UPLO = 'U', the leading N-by-N upper triangular part of sub( A ) contains the upper triangular part of the matrix, and its strictly lower triangular part is not referenced. 
If UPLO = 'L', the leading N-by-N lower triangular part of sub( A ) contains the lower triangular part of the matrix, and its strictly upper triangular part is not referenced. On exit, if UPLO = 'U', the diagonal and first superdiagonal of sub( A ) are overwritten by the corresponding elements of the tridiagonal matrix T, and the elements above the first superdiagonal, with the array TAU, represent the orthogonal matrix Q as a product of elementary reflectors; if UPLO = 'L', the diagonal and first subdiagonal of sub( A ) are overwritten by the corresponding elements of the tridiagonal matrix T, and the elements below the first subdiagonal, with the array TAU, represent the orthogonal matrix Q as a product of elementary reflectors. See Further Details. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 D (local output) DOUBLE PRECISION array, dimension LOCc(JA+N-1) The diagonal elements of the tridiagonal matrix T: D(i) = A(i,i). D is tied to the distributed matrix A. .TP 8 E (local output) DOUBLE PRECISION array, dimension LOCc(JA+N-1) if UPLO = 'U', LOCc(JA+N-2) otherwise. The off-diagonal elements of the tridiagonal matrix T: E(i) = A(i,i+1) if UPLO = 'U', E(i) = A(i+1,i) if UPLO = 'L'. E is tied to the distributed matrix A. .TP 8 TAU (local output) DOUBLE PRECISION, array, dimension LOCc(JA+N-1). This array contains the scalar factors TAU of the elementary reflectors. TAU is tied to the distributed matrix A. .TP 8 WORK (local workspace/local output) DOUBLE PRECISION array, dimension (LWORK) On exit, WORK( 1 ) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. 
LWORK is local input and must be at least LWORK >= MAX( NB * ( NP +1 ), 3 * NB ) where NB = MB_A = NB_A, NP = NUMROC( N, NB, MYROW, IAROW, NPROW ), IAROW = INDXG2P( IA, NB, MYROW, RSRC_A, NPROW ). INDXG2P and NUMROC are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 INFO (global output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. .SH FURTHER DETAILS If UPLO = 'U', the matrix Q is represented as a product of elementary reflectors .br Q = H(n-1) . . . H(2) H(1). .br Each H(i) has the form .br H(i) = I - tau * v * v' .br where tau is a real scalar, and v is a real vector with .br v(i+1:n) = 0 and v(i) = 1; v(1:i-1) is stored on exit in .br A(ia:ia+i-2,ja+i), and tau in TAU(ja+i-1). .br If UPLO = 'L', the matrix Q is represented as a product of elementary reflectors .br Q = H(1) H(2) . . . H(n-1). .br Each H(i) has the form .br H(i) = I - tau * v * v' .br where tau is a real scalar, and v is a real vector with .br v(1:i) = 0 and v(i+1) = 1; v(i+2:n) is stored on exit in .br A(ia+i+1:ia+n-1,ja+i-1), and tau in TAU(ja+i-1). .br The contents of sub( A ) on exit are illustrated by the following examples with n = 5: .br if UPLO = 'U': .br ( d e v2 v3 v4 ) .br ( d e v3 v4 ) .br ( d e v4 ) .br ( d e ) .br ( d ) .br if UPLO = 'L': .br ( d ) .br ( e d ) .br ( v1 e d ) .br ( v1 v2 e d ) .br ( v1 v2 v3 e d ) .br where d and e denote diagonal and off-diagonal elements of T, and vi denotes an element of the vector defining H(i). 
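The reflector representation H(i) = I - tau * v * v' in Further Details can be exercised serially on a small dense matrix. The NumPy sketch below is a plain in-core illustration of the UPLO = 'L' reduction idea, not a call to PDSYTRD; it normalizes each v (so tau = 2) instead of using LAPACK's v(i+1) = 1 storage convention, and verifies that Q' * A * Q is tridiagonal.

```python
import numpy as np

def tridiagonalize_lower(A):
    """Serial sketch of Q' * A * Q = T with Q = H(1) H(2) ... H(n-1),
    H(i) = I - tau * v * v'. Here v is normalized, so tau = 2;
    illustration only, not the ScaLAPACK/LAPACK storage scheme."""
    A = np.array(A, dtype=float)
    n = A.shape[0]
    Q = np.eye(n)
    for i in range(n - 2):
        x = A[i + 1:, i]                  # entries to reduce in column i
        x0 = x[0] if x[0] != 0 else 1.0
        alpha = -np.sign(x0) * np.linalg.norm(x)   # sign avoids cancellation
        v = np.zeros(n)
        v[i + 1:] = x
        v[i + 1] -= alpha                 # v = x - alpha * e1 (padded)
        nv = np.linalg.norm(v)
        if nv == 0.0:
            continue                      # column is already reduced
        v /= nv
        H = np.eye(n) - 2.0 * np.outer(v, v)   # symmetric, orthogonal
        A = H @ A @ H                     # orthogonal similarity transform
        Q = Q @ H                         # accumulate Q = H(1) H(2) ...
    return Q, A

rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))
A = M + M.T                               # symmetric test matrix, n = 5
Q, T = tridiagonalize_lower(A)
off = max(np.abs(np.triu(T, 2)).max(), np.abs(np.tril(T, -2)).max())
print(off < 1e-10, np.allclose(Q.T @ A @ Q, T))   # True True
```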
.br Alignment requirements .br ====================== .br The distributed submatrix sub( A ) must satisfy some alignment properties, namely the following expression should be true: .br ( MB_A.EQ.NB_A .AND. IROFFA.EQ.ICOFFA .AND. IROFFA.EQ.0 ) with IROFFA = MOD( IA-1, MB_A ) and ICOFFA = MOD( JA-1, NB_A ). scalapack-doc-1.5/man/manl/pdtrcon.l .TH PDTRCON l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PDTRCON - estimate the reciprocal of the condition number of a triangular distributed matrix A(IA:IA+N-1,JA:JA+N-1), in either the 1-norm or the infinity-norm .SH SYNOPSIS .TP 20 SUBROUTINE PDTRCON( NORM, UPLO, DIAG, N, A, IA, JA, DESCA, RCOND, WORK, LWORK, IWORK, LIWORK, INFO ) .TP 20 .ti +4 CHARACTER DIAG, NORM, UPLO .TP 20 .ti +4 INTEGER IA, JA, INFO, LIWORK, LWORK, N .TP 20 .ti +4 DOUBLE PRECISION RCOND .TP 20 .ti +4 INTEGER DESCA( * ), IWORK( * ) .TP 20 .ti +4 DOUBLE PRECISION A( * ), WORK( * ) .SH PURPOSE PDTRCON estimates the reciprocal of the condition number of a triangular distributed matrix A(IA:IA+N-1,JA:JA+N-1), in either the 1-norm or the infinity-norm. The norm of A(IA:IA+N-1,JA:JA+N-1) is computed and an estimate is obtained for norm(inv(A(IA:IA+N-1,JA:JA+N-1))), then the reciprocal of the condition number is computed as .br RCOND = 1 / ( norm( A(IA:IA+N-1,JA:JA+N-1) ) * norm( inv(A(IA:IA+N-1,JA:JA+N-1)) ) ). Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". 
.br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). 
An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 NORM (global input) CHARACTER Specifies whether the 1-norm condition number or the infinity-norm condition number is required: .br = '1' or 'O': 1-norm; .br = 'I': Infinity-norm. .TP 8 UPLO (global input) CHARACTER .br = 'U': A(IA:IA+N-1,JA:JA+N-1) is upper triangular; .br = 'L': A(IA:IA+N-1,JA:JA+N-1) is lower triangular. .TP 8 DIAG (global input) CHARACTER .br = 'N': A(IA:IA+N-1,JA:JA+N-1) is non-unit triangular; .br = 'U': A(IA:IA+N-1,JA:JA+N-1) is unit triangular. .TP 8 N (global input) INTEGER .br The order of the distributed matrix A(IA:IA+N-1,JA:JA+N-1). N >= 0. .TP 8 A (local input) DOUBLE PRECISION pointer into the local memory to an array of dimension ( LLD_A, LOCc(JA+N-1) ). This array contains the local pieces of the triangular distributed matrix A(IA:IA+N-1,JA:JA+N-1). If UPLO = 'U', the leading N-by-N upper triangular part of this distributed matrix contains the upper triangular matrix, and its strictly lower triangular part is not referenced. If UPLO = 'L', the leading N-by-N lower triangular part of this distributed matrix contains the lower triangular matrix, and the strictly upper triangular part is not referenced. If DIAG = 'U', the diagonal elements of A(IA:IA+N-1,JA:JA+N-1) are also not referenced and are assumed to be 1. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. 
.TP 8 RCOND (global output) DOUBLE PRECISION The reciprocal of the condition number of the distributed matrix A(IA:IA+N-1,JA:JA+N-1), computed as .br RCOND = 1 / ( norm( A(IA:IA+N-1,JA:JA+N-1) ) * .br norm( inv(A(IA:IA+N-1,JA:JA+N-1)) ) ). .TP 8 WORK (local workspace/local output) DOUBLE PRECISION array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. LWORK is local input and must be at least LWORK >= 2*LOCr(N+MOD(IA-1,MB_A)) + LOCc(N+MOD(JA-1,NB_A)) + MAX( 2, MAX( NB_A*MAX( 1, CEIL(NPROW-1,NPCOL) ), LOCc(N+MOD(JA-1,NB_A)) + NB_A*MAX( 1, CEIL(NPCOL-1,NPROW) ) ) ). If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 IWORK (local workspace/local output) INTEGER array, dimension (LIWORK) On exit, IWORK(1) returns the minimal and optimal LIWORK. .TP 8 LIWORK (local or global input) INTEGER The dimension of the array IWORK. LIWORK is local input and must be at least LIWORK >= LOCr(N+MOD(IA-1,MB_A)). If LIWORK = -1, then LIWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 INFO (global output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. 
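The INFO encoding described above (argument index times 100 plus entry index for array arguments, plain negated index for scalars) can be decoded mechanically. The helper below is a hypothetical illustration for reading error codes, not part of ScaLAPACK:

```python
def decode_info(info):
    """Decode a negative INFO value from a ScaLAPACK routine.

    Returns (i, j): i is the 1-based index of the offending argument,
    and j is the offending entry within it (j = 0 means the argument
    was a scalar, i.e. INFO was simply -i).
    """
    if info >= 0:
        raise ValueError("only negative INFO values encode argument errors")
    code = -info
    if code >= 100:
        return code // 100, code % 100  # array argument: INFO = -(i*100+j)
    return code, 0                      # scalar argument: INFO = -i

print(decode_info(-802))  # -> (8, 2): entry 2 of argument 8 was illegal
print(decode_info(-4))    # -> (4, 0): scalar argument 4 was illegal
```

For PDTRCON, for example, INFO = -802 would point at the second entry of the eighth argument, DESCA.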
scalapack-doc-1.5/man/manl/pdtrrfs.l0100644000056400000620000002274706335610637017125 0ustar pfrauenfstaff.TH PDTRRFS l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PDTRRFS - provide error bounds and backward error estimates for the solution to a system of linear equations with a triangular coefficient matrix .SH SYNOPSIS .TP 20 SUBROUTINE PDTRRFS( UPLO, TRANS, DIAG, N, NRHS, A, IA, JA, DESCA, B, IB, JB, DESCB, X, IX, JX, DESCX, FERR, BERR, WORK, LWORK, IWORK, LIWORK, INFO ) .TP 20 .ti +4 CHARACTER DIAG, TRANS, UPLO .TP 20 .ti +4 INTEGER INFO, IA, IB, IX, JA, JB, JX, LIWORK, LWORK, N, NRHS .TP 20 .ti +4 INTEGER DESCA( * ), DESCB( * ), DESCX( * ), IWORK( * ) .TP 20 .ti +4 DOUBLE PRECISION A( * ), B( * ), BERR( * ), FERR( * ), WORK( * ), X( * ) .SH PURPOSE PDTRRFS provides error bounds and backward error estimates for the solution to a system of linear equations with a triangular coefficient matrix. The solution matrix X must be computed by PDTRTRS or some other means before entering this routine. PDTRRFS does not do iterative refinement because doing so cannot improve the backward error. Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. 
.br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br In the following comments, sub( A ), sub( X ) and sub( B ) denote respectively A(IA:IA+N-1,JA:JA+N-1), X(IX:IX+N-1,JX:JX+NRHS-1) and B(IB:IB+N-1,JB:JB+NRHS-1). .br .SH ARGUMENTS .TP 8 UPLO (global input) CHARACTER*1 = 'U': sub( A ) is upper triangular; .br = 'L': sub( A ) is lower triangular. .TP 8 TRANS (global input) CHARACTER*1 Specifies the form of the system of equations. 
= 'N': sub( A ) * sub( X ) = sub( B ) (No transpose) .br = 'T': sub( A )**T * sub( X ) = sub( B ) (Transpose) .br = 'C': sub( A )**T * sub( X ) = sub( B ) (Conjugate transpose = Transpose) .TP 8 DIAG (global input) CHARACTER*1 = 'N': sub( A ) is non-unit triangular; .br = 'U': sub( A ) is unit triangular. .TP 8 N (global input) INTEGER The order of the matrix sub( A ). N >= 0. .TP 8 NRHS (global input) INTEGER The number of right hand sides, i.e., the number of columns of the matrices sub( B ) and sub( X ). NRHS >= 0. .TP 8 A (local input) DOUBLE PRECISION pointer into the local memory to an array of local dimension (LLD_A,LOCc(JA+N-1) ). This array contains the local pieces of the original triangular distributed matrix sub( A ). If UPLO = 'U', the leading N-by-N upper triangular part of sub( A ) contains the upper triangular part of the matrix, and its strictly lower triangular part is not referenced. If UPLO = 'L', the leading N-by-N lower triangular part of sub( A ) contains the lower triangular part of the distributed matrix, and its strictly upper triangular part is not referenced. If DIAG = 'U', the diagonal elements of sub( A ) are also not referenced and are assumed to be 1. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 B (local input) DOUBLE PRECISION pointer into the local memory to an array of local dimension (LLD_B, LOCc(JB+NRHS-1) ). On entry, this array contains the local pieces of the right hand sides sub( B ). .TP 8 IB (global input) INTEGER The row index in the global array B indicating the first row of sub( B ). .TP 8 JB (global input) INTEGER The column index in the global array B indicating the first column of sub( B ). 
.TP 8 DESCB (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix B. .TP 8 X (local input) DOUBLE PRECISION pointer into the local memory to an array of local dimension (LLD_X, LOCc(JX+NRHS-1) ). On entry, this array contains the local pieces of the solution vectors sub( X ). .TP 8 IX (global input) INTEGER The row index in the global array X indicating the first row of sub( X ). .TP 8 JX (global input) INTEGER The column index in the global array X indicating the first column of sub( X ). .TP 8 DESCX (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix X. .TP 8 FERR (local output) DOUBLE PRECISION array of local dimension LOCc(JB+NRHS-1). The estimated forward error bounds for each solution vector of sub( X ). If XTRUE is the true solution, FERR bounds the magnitude of the largest entry in (sub( X ) - XTRUE) divided by the magnitude of the largest entry in sub( X ). The estimate is as reliable as the estimate for RCOND, and is almost always a slight overestimate of the true error. This array is tied to the distributed matrix X. .TP 8 BERR (local output) DOUBLE PRECISION array of local dimension LOCc(JB+NRHS-1). The componentwise relative backward error of each solution vector (i.e., the smallest relative change in any entry of sub( A ) or sub( B ) that makes sub( X ) an exact solution). This array is tied to the distributed matrix X. .TP 8 WORK (local workspace/local output) DOUBLE PRECISION array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. LWORK is local input and must be at least LWORK >= 3*LOCr( N + MOD( IA-1, MB_A ) ). If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. 
Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 IWORK (local workspace/local output) INTEGER array, dimension (LIWORK) On exit, IWORK(1) returns the minimal and optimal LIWORK. .TP 8 LIWORK (local or global input) INTEGER The dimension of the array IWORK. LIWORK is local input and must be at least LIWORK >= LOCr( N + MOD( IB-1, MB_B ) ). If LIWORK = -1, then LIWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 INFO (global output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. Notes ===== This routine temporarily returns when N <= 1. The distributed submatrices sub( X ) and sub( B ) should be distributed the same way on the same processes. These conditions ensure that sub( X ) and sub( B ) are "perfectly" aligned. Moreover, this routine requires the distributed submatrices sub( A ), sub( X ), and sub( B ) to be aligned on a block boundary, i.e., if f(x,y) = MOD( x-1, y ): f( IA, DESCA( MB_ ) ) = f( JA, DESCA( NB_ ) ) = 0, f( IB, DESCB( MB_ ) ) = f( JB, DESCB( NB_ ) ) = 0, and f( IX, DESCX( MB_ ) ) = f( JX, DESCX( NB_ ) ) = 0. 
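The block-boundary condition stated above is easy to check programmatically. Below is a small sketch, illustrative only, using the f(x,y) = MOD( x-1, y ) notation from the note:

```python
def f(x, y):
    # f(x,y) = MOD(x-1, y): offset of global index x within its distribution block
    return (x - 1) % y

def aligned_on_block_boundary(ia, mb, ja, nb):
    """True when a submatrix starting at global index (ia, ja) begins exactly
    on a block boundary of an MB-by-NB block-cyclic distribution, i.e.
    f(ia, MB) = f(ja, NB) = 0 as required for sub( A ), sub( X ), sub( B )."""
    return f(ia, mb) == 0 and f(ja, nb) == 0

# With MB_A = NB_A = 64: starting indices 1 and 65/129 sit on block
# boundaries, while IA = 2 does not.
print(aligned_on_block_boundary(1, 64, 1, 64))     # -> True
print(aligned_on_block_boundary(65, 64, 129, 64))  # -> True
print(aligned_on_block_boundary(2, 64, 1, 64))     # -> False
```

The same predicate would be applied to (IA, JA), (IB, JB), and (IX, JX) with their respective descriptors.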
scalapack-doc-1.5/man/manl/pdtrti2.l0100644000056400000620000001206706335610640017015 0ustar pfrauenfstaff.TH PDTRTI2 l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PDTRTI2 - compute the inverse of a real upper or lower triangular block matrix sub( A ) = A(IA:IA+N-1,JA:JA+N-1) .SH SYNOPSIS .TP 20 SUBROUTINE PDTRTI2( UPLO, DIAG, N, A, IA, JA, DESCA, INFO ) .TP 20 .ti +4 CHARACTER DIAG, UPLO .TP 20 .ti +4 INTEGER IA, INFO, JA, N .TP 20 .ti +4 INTEGER DESCA( * ) .TP 20 .ti +4 DOUBLE PRECISION A( * ) .SH PURPOSE PDTRTI2 computes the inverse of a real upper or lower triangular block matrix sub( A ) = A(IA:IA+N-1,JA:JA+N-1). This matrix should be contained in one and only one process memory space (local operation). Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. 
.br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 UPLO (global input) CHARACTER*1 = 'U': sub( A ) is upper triangular; .br = 'L': sub( A ) is lower triangular. .TP 8 DIAG (global input) CHARACTER*1 .br = 'N': sub( A ) is non-unit triangular .br = 'U': sub( A ) is unit triangular .TP 8 N (global input) INTEGER The number of rows and columns to be operated on, i.e. the order of the distributed submatrix sub( A ). N >= 0. .TP 8 A (local input/local output) DOUBLE PRECISION pointer into the local memory to an array of dimension (LLD_A,LOCc(JA+N-1)), this array contains the local pieces of the triangular matrix sub( A ). If UPLO = 'U', the leading N-by-N upper triangular part of the matrix sub( A ) contains the upper triangular matrix, and the strictly lower triangular part of sub( A ) is not referenced. 
If UPLO = 'L', the leading N-by-N lower triangular part of the matrix sub( A ) contains the lower triangular matrix, and the strictly upper triangular part of sub( A ) is not referenced. If DIAG = 'U', the diagonal elements of sub( A ) are also not referenced and are assumed to be 1. On exit, the (triangular) inverse of the original matrix, in the same storage format. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 INFO (local output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. scalapack-doc-1.5/man/manl/pdtrtri.l0100644000056400000620000001215506335610640017113 0ustar pfrauenfstaff.TH PDTRTRI l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PDTRTRI - compute the inverse of an upper or lower triangular distributed matrix sub( A ) = A(IA:IA+N-1,JA:JA+N-1) .SH SYNOPSIS .TP 20 SUBROUTINE PDTRTRI( UPLO, DIAG, N, A, IA, JA, DESCA, INFO ) .TP 20 .ti +4 CHARACTER DIAG, UPLO .TP 20 .ti +4 INTEGER IA, INFO, JA, N .TP 20 .ti +4 INTEGER DESCA( * ) .TP 20 .ti +4 DOUBLE PRECISION A( * ) .SH PURPOSE PDTRTRI computes the inverse of an upper or lower triangular distributed matrix sub( A ) = A(IA:IA+N-1,JA:JA+N-1). Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. 
In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). 
An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 UPLO (global input) CHARACTER Specifies whether the distributed matrix sub( A ) is upper or lower triangular: .br = 'U': Upper triangular, .br = 'L': Lower triangular. .TP 8 DIAG (global input) CHARACTER Specifies whether or not the distributed matrix sub( A ) is unit triangular: .br = 'N': Non-unit triangular, .br = 'U': Unit triangular. .TP 8 N (global input) INTEGER The number of rows and columns to be operated on, i.e. the order of the distributed submatrix sub( A ). N >= 0. .TP 8 A (local input/local output) DOUBLE PRECISION pointer into the local memory to an array of dimension (LLD_A,LOCc(JA+N-1)). On entry, this array contains the local pieces of the triangular matrix sub( A ). If UPLO = 'U', the leading N-by-N upper triangular part of the matrix sub( A ) contains the upper triangular matrix to be inverted, and the strictly lower triangular part of sub( A ) is not referenced. If UPLO = 'L', the leading N-by-N lower triangular part of the matrix sub( A ) contains the lower triangular matrix, and the strictly upper triangular part of sub( A ) is not referenced. On exit, the (triangular) inverse of the original matrix. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 INFO (global output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. > 0: If INFO = K, A(IA+K-1,JA+K-1) is exactly zero. 
The triangular matrix sub( A ) is singular and its inverse cannot be computed. scalapack-doc-1.5/man/manl/pdtrtrs.l0100644000056400000620000001444506335610640017131 0ustar pfrauenfstaff.TH PDTRTRS l "12 May 1997" "LAPACK version 1.5" "LAPACK auxiliary routine (version 1.5)" .SH NAME PDTRTRS - solve a triangular system of the form sub( A ) * X = sub( B ) or sub( A )**T * X = sub( B ), .SH SYNOPSIS .TP 20 SUBROUTINE PDTRTRS( UPLO, TRANS, DIAG, N, NRHS, A, IA, JA, DESCA, B, IB, JB, DESCB, INFO ) .TP 20 .ti +4 CHARACTER DIAG, TRANS, UPLO .TP 20 .ti +4 INTEGER IA, IB, INFO, JA, JB, N, NRHS .TP 20 .ti +4 INTEGER DESCA( * ), DESCB( * ) .TP 20 .ti +4 DOUBLE PRECISION A( * ), B( * ) .SH PURPOSE PDTRTRS solves a triangular system of the form sub( A ) * X = sub( B ) or sub( A )**T * X = sub( B ), where sub( A ) denotes A(IA:IA+N-1,JA:JA+N-1) and is a triangular distributed matrix of order N, and B(IB:IB+N-1,JB:JB+NRHS-1) is an N-by-NRHS distributed matrix denoted by sub( B ). A check is made to verify that sub( A ) is nonsingular. .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ ) The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distributed over. The context itself is global, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. 
.br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 UPLO (global input) CHARACTER = 'U': sub( A ) is upper triangular; .br = 'L': sub( A ) is lower triangular. .TP 8 TRANS (global input) CHARACTER .br Specifies the form of the system of equations: .br = 'N': Solve sub( A ) * X = sub( B ) (No transpose) .br = 'T': Solve sub( A )**T * X = sub( B ) (Transpose) .br = 'C': Solve sub( A )**T * X = sub( B ) (Transpose) .TP 8 DIAG (global input) CHARACTER .br = 'N': sub( A ) is non-unit triangular; .br = 'U': sub( A ) is unit triangular. .TP 8 N (global input) INTEGER The number of rows and columns to be operated on i.e the order of the distributed submatrix sub( A ). N >= 0. 
.TP 8 NRHS (global input) INTEGER The number of right hand sides, i.e., the number of columns of the distributed matrix sub( B ). NRHS >= 0. .TP 8 A (local input) DOUBLE PRECISION pointer into the local memory to an array of dimension (LLD_A,LOCc(JA+N-1) ). This array contains the local pieces of the distributed triangular matrix sub( A ). If UPLO = 'U', the leading N-by-N upper triangular part of sub( A ) contains the upper triangular matrix, and the strictly lower triangular part of sub( A ) is not referenced. If UPLO = 'L', the leading N-by-N lower triangular part of sub( A ) contains the lower triangular matrix, and the strictly upper triangular part of sub( A ) is not referenced. If DIAG = 'U', the diagonal elements of sub( A ) are also not referenced and are assumed to be 1. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 B (local input/local output) DOUBLE PRECISION pointer into the local memory to an array of dimension (LLD_B,LOCc(JB+NRHS-1)). On entry, this array contains the local pieces of the right hand side distributed matrix sub( B ). On exit, if INFO = 0, sub( B ) is overwritten by the solution matrix X. .TP 8 IB (global input) INTEGER The row index in the global array B indicating the first row of sub( B ). .TP 8 JB (global input) INTEGER The column index in the global array B indicating the first column of sub( B ). .TP 8 DESCB (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix B. 
.TP 8 INFO (output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. > 0: If INFO = i, the i-th diagonal element of sub( A ) is zero, indicating that the submatrix is singular and the solutions X have not been computed. scalapack-doc-1.5/man/manl/pdtzrzf.l0100644000056400000620000001567606335610640017141 0ustar pfrauenfstaff.TH PDTZRZF l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PDTZRZF - reduce the M-by-N ( M<=N ) real upper trapezoidal matrix sub( A ) = A(IA:IA+M-1,JA:JA+N-1) to upper triangular form by means of orthogonal transformations .SH SYNOPSIS .TP 20 SUBROUTINE PDTZRZF( M, N, A, IA, JA, DESCA, TAU, WORK, LWORK, INFO ) .TP 20 .ti +4 INTEGER IA, INFO, JA, LWORK, M, N .TP 20 .ti +4 INTEGER DESCA( * ) .TP 20 .ti +4 DOUBLE PRECISION A( * ), TAU( * ), WORK( * ) .SH PURPOSE PDTZRZF reduces the M-by-N ( M<=N ) real upper trapezoidal matrix sub( A ) = A(IA:IA+M-1,JA:JA+N-1) to upper triangular form by means of orthogonal transformations. The upper trapezoidal matrix sub( A ) is factored as .br sub( A ) = ( R 0 ) * Z, .br where Z is an N-by-N orthogonal matrix and R is an M-by-M upper triangular matrix. .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. 
.br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 M (global input) INTEGER The number of rows to be operated on, i.e. the number of rows of the distributed submatrix sub( A ). M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on, i.e. the number of columns of the distributed submatrix sub( A ). N >= 0. 
.TP 8 A (local input/local output) DOUBLE PRECISION pointer into the local memory to an array of dimension (LLD_A, LOCc(JA+N-1)). On entry, the local pieces of the M-by-N distributed matrix sub( A ) which is to be factored. On exit, the leading M-by-M upper triangular part of sub( A ) contains the upper triangular matrix R, and elements M+1 to N of the first M rows of sub( A ), with the array TAU, represent the orthogonal matrix Z as a product of M elementary reflectors. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 TAU (local output) DOUBLE PRECISION, array, dimension LOCr(IA+M-1) This array contains the scalar factors of the elementary reflectors. TAU is tied to the distributed matrix A. .TP 8 WORK (local workspace/local output) DOUBLE PRECISION array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. LWORK is local input and must be at least LWORK >= MB_A * ( Mp0 + Nq0 + MB_A ), where IROFF = MOD( IA-1, MB_A ), ICOFF = MOD( JA-1, NB_A ), IAROW = INDXG2P( IA, MB_A, MYROW, RSRC_A, NPROW ), IACOL = INDXG2P( JA, NB_A, MYCOL, CSRC_A, NPCOL ), Mp0 = NUMROC( M+IROFF, MB_A, MYROW, IAROW, NPROW ), Nq0 = NUMROC( N+ICOFF, NB_A, MYCOL, IACOL, NPCOL ), and NUMROC, INDXG2P are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. 
.TP 8 INFO (global output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. .SH FURTHER DETAILS The factorization is obtained by Householder's method. The kth transformation matrix, Z( k ), which is used to introduce zeros into the (m - k + 1)th row of sub( A ), is given in the form .br Z( k ) = ( I 0 ), .br ( 0 T( k ) ) .br where .br T( k ) = I - tau*u( k )*u( k )', u( k ) = ( 1 ), ( 0 ) ( z( k ) ) tau is a scalar and z( k ) is an ( n - m ) element vector. tau and z( k ) are chosen to annihilate the elements of the kth row of sub( A ). .br The scalar tau is returned in the kth element of TAU and the vector u( k ) in the kth row of sub( A ), such that the elements of z( k ) are in a( k, m + 1 ), ..., a( k, n ). The elements of R are returned in the upper triangular part of sub( A ). .br Z is given by .br Z = Z( 1 ) * Z( 2 ) * ... * Z( m ). .br scalapack-doc-1.5/man/manl/pdzsum1.l0100644000056400000620000001302106335610640017017 0ustar pfrauenfstaff.TH PDZSUM1 l "12 May 1997" "LAPACK version 1.5" "LAPACK auxiliary routine (version 1.5)" .SH NAME PDZSUM1 - return the sum of absolute values of a complex distributed vector sub( X ) in ASUM, .SH SYNOPSIS .TP 20 SUBROUTINE PDZSUM1( N, ASUM, X, IX, JX, DESCX, INCX ) .TP 20 .ti +4 INTEGER IX, INCX, JX, N .TP 20 .ti +4 DOUBLE PRECISION ASUM .TP 20 .ti +4 INTEGER DESCX( * ) .TP 20 .ti +4 COMPLEX*16 X( * ) .SH PURPOSE PDZSUM1 returns the sum of absolute values of a complex distributed vector sub( X ) in ASUM, where sub( X ) denotes X(IX:IX+N-1,JX:JX), if INCX = 1, .br X(IX:IX,JX:JX+N-1), if INCX = M_X. Based on PDZASUM from the Level 1 PBLAS. The change is .br to use the 'genuine' absolute value. .br The serial version of this routine was originally contributed by Nick Higham for use with ZLACON. 
.br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. 
.br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br Because vectors may be viewed as a subclass of matrices, a distributed vector is considered to be a distributed matrix. When the result of a vector-oriented PBLAS call is a scalar, it will be made available only within the scope which owns the vector(s) being operated on. Let X be a generic term for the input vector(s). Then, the processes which receive the answer will be (note that if an operation involves more than one vector, the processes which re- ceive the result will be the union of the following calculation for each vector): .br If N = 1, M_X = 1 and INCX = 1, then one can't determine if a process row or process column owns the vector operand, therefore only the process of coordinate {RSRC_X, CSRC_X} receives the result; If INCX = M_X, then sub( X ) is a vector distributed over a process row. Each process part of this row receives the result; .br If INCX = 1, then sub( X ) is a vector distributed over a process column. Each process part of this column receives the result; .SH PARAMETERS .TP 8 N (global input) pointer to INTEGER The number of components of the distributed vector sub( X ). N >= 0. .TP 8 ASUM (local output) pointer to DOUBLE PRECISION The sum of absolute values of the distributed vector sub( X ) only in its scope. .TP 8 X (local input) COMPLEX*16 array containing the local pieces of a distributed matrix of dimension of at least ( (JX-1)*M_X + IX + ( N - 1 )*abs( INCX ) ) This array contains the entries of the distributed vector sub( X ). .TP 8 IX (global input) pointer to INTEGER The global row index of the submatrix of the distributed matrix X to operate on. 
.TP 8 JX (global input) pointer to INTEGER The global column index of the submatrix of the distributed matrix X to operate on. .TP 8 DESCX (global and local input) INTEGER array of dimension 8. The array descriptor of the distributed matrix X. .TP 8 INCX (global input) pointer to INTEGER The global increment for the elements of X. Only two values of INCX are supported in this version, namely 1 and M_X. scalapack-doc-1.5/man/manl/pscsum1.l0100644000056400000620000001276306335610640017023 0ustar pfrauenfstaff.TH PSCSUM1 l "12 May 1997" "LAPACK version 1.5" "LAPACK auxiliary routine (version 1.5)" .SH NAME PSCSUM1 - return the sum of absolute values of a complex distributed vector sub( X ) in ASUM, .SH SYNOPSIS .TP 20 SUBROUTINE PSCSUM1( N, ASUM, X, IX, JX, DESCX, INCX ) .TP 20 .ti +4 INTEGER IX, INCX, JX, N .TP 20 .ti +4 REAL ASUM .TP 20 .ti +4 INTEGER DESCX( * ) .TP 20 .ti +4 COMPLEX X( * ) .SH PURPOSE PSCSUM1 returns the sum of absolute values of a complex distributed vector sub( X ) in ASUM, where sub( X ) denotes X(IX:IX+N-1,JX:JX), if INCX = 1, .br X(IX:IX,JX:JX+N-1), if INCX = M_X. Based on PSCASUM from the Level 1 PBLAS. The change is .br to use the 'genuine' absolute value. .br The serial version of this routine was originally contributed by Nick Higham for use with CLACON. .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. 
.br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br Because vectors may be viewed as a subclass of matrices, a distributed vector is considered to be a distributed matrix. When the result of a vector-oriented PBLAS call is a scalar, it will be made available only within the scope which owns the vector(s) being operated on. Let X be a generic term for the input vector(s). 
Then, the processes which receive the answer will be (note that if an operation involves more than one vector, the processes which re- ceive the result will be the union of the following calculation for each vector): .br If N = 1, M_X = 1 and INCX = 1, then one can't determine if a process row or process column owns the vector operand, therefore only the process of coordinate {RSRC_X, CSRC_X} receives the result; If INCX = M_X, then sub( X ) is a vector distributed over a process row. Each process part of this row receives the result; .br If INCX = 1, then sub( X ) is a vector distributed over a process column. Each process part of this column receives the result; .SH PARAMETERS .TP 8 N (global input) pointer to INTEGER The number of components of the distributed vector sub( X ). N >= 0. .TP 8 ASUM (local output) pointer to REAL The sum of absolute values of the distributed vector sub( X ) only in its scope. .TP 8 X (local input) COMPLEX array containing the local pieces of a distributed matrix of dimension of at least ( (JX-1)*M_X + IX + ( N - 1 )*abs( INCX ) ) This array contains the entries of the distributed vector sub( X ). .TP 8 IX (global input) pointer to INTEGER The global row index of the submatrix of the distributed matrix X to operate on. .TP 8 JX (global input) pointer to INTEGER The global column index of the submatrix of the distributed matrix X to operate on. .TP 8 DESCX (global and local input) INTEGER array of dimension 8. The array descriptor of the distributed matrix X. .TP 8 INCX (global input) pointer to INTEGER The global increment for the elements of X. Only two values of INCX are supported in this version, namely 1 and M_X. 
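The one change PSCSUM1 makes relative to the Level 1 PBLAS routine PSCASUM is the use of the 'genuine' complex absolute value |x| = sqrt(Re(x)^2 + Im(x)^2) instead of the cheaper |Re(x)| + |Im(x)| convention of the BLAS CASUM routines. The difference is easy to see serially; the following pure-Python sketch illustrates the two conventions and is not the distributed routine itself:

```python
def sum1(x):
    """PSCSUM1-style sum: 'genuine' absolute values |x_i|."""
    return sum(abs(z) for z in x)

def asum(x):
    """PSCASUM/CASUM-style sum: |Re(x_i)| + |Im(x_i)|."""
    return sum(abs(z.real) + abs(z.imag) for z in x)

x = [3 + 4j, -1 + 0j, 0 - 2j]
print(sum1(x))  # 5 + 1 + 2 = 8.0
print(asum(x))  # (3+4) + (1+0) + (0+2) = 10.0
```

Since |Re(z)| + |Im(z)| >= |z| for every complex z, the CASUM-style value always bounds the genuine sum from above; PSCSUM1 exists because ZLACON/CLACON-style condition estimation needs the genuine value.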
scalapack-doc-1.5/man/manl/psdbsv.l0100644000056400000620000000140306335610640016716 0ustar pfrauenfstaff.TH PSDBSV l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PSDBSV - solve a system of linear equations A(1:N, JA:JA+N-1) * X = B(IB:IB+N-1, 1:NRHS) .SH SYNOPSIS .TP 19 SUBROUTINE PSDBSV( N, BWL, BWU, NRHS, A, JA, DESCA, B, IB, DESCB, WORK, LWORK, INFO ) .TP 19 .ti +4 INTEGER BWL, BWU, IB, INFO, JA, LWORK, N, NRHS .TP 19 .ti +4 INTEGER DESCA( * ), DESCB( * ) .TP 19 .ti +4 REAL A( * ), B( * ), WORK( * ) .SH PURPOSE PSDBSV solves a system of linear equations where A(1:N, JA:JA+N-1) is an N-by-N real .br banded diagonally dominant-like distributed .br matrix with bandwidth BWL, BWU. .br Gaussian elimination without pivoting .br is used to factor a reordering .br of the matrix into L U. .br See PSDBTRF and PSDBTRS for details. .br scalapack-doc-1.5/man/manl/psdbtrf.l0100644000056400000620000000213206335610640017061 0ustar pfrauenfstaff.TH PSDBTRF l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PSDBTRF - compute a LU factorization of an N-by-N real banded diagonally dominant-like distributed matrix with bandwidth BWL, BWU .SH SYNOPSIS .TP 20 SUBROUTINE PSDBTRF( N, BWL, BWU, A, JA, DESCA, AF, LAF, WORK, LWORK, INFO ) .TP 20 .ti +4 INTEGER BWL, BWU, INFO, JA, LAF, LWORK, N .TP 20 .ti +4 INTEGER DESCA( * ) .TP 20 .ti +4 REAL A( * ), AF( * ), WORK( * ) .SH PURPOSE PSDBTRF computes a LU factorization of an N-by-N real banded diagonally dominant-like distributed matrix with bandwidth BWL, BWU: A(1:N, JA:JA+N-1). Reordering is used to increase parallelism in the factorization. This reordering results in factors that are DIFFERENT from those produced by equivalent sequential codes. These factors cannot be used directly by users; however, they can be used in .br subsequent calls to PSDBTRS to solve linear systems. 
.br The factorization has the form .br P A(1:N, JA:JA+N-1) P^T = L U .br where U is a banded upper triangular matrix and L is banded lower triangular, and P is a permutation matrix. .br scalapack-doc-1.5/man/manl/psdbtrs.l0100644000056400000620000000165506335610640017107 0ustar pfrauenfstaff.TH PSDBTRS l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PSDBTRS - solve a system of linear equations A(1:N, JA:JA+N-1) * X = B(IB:IB+N-1, 1:NRHS) .SH SYNOPSIS .TP 20 SUBROUTINE PSDBTRS( TRANS, N, BWL, BWU, NRHS, A, JA, DESCA, B, IB, DESCB, AF, LAF, WORK, LWORK, INFO ) .TP 20 .ti +4 CHARACTER TRANS .TP 20 .ti +4 INTEGER BWL, BWU, IB, INFO, JA, LAF, LWORK, N, NRHS .TP 20 .ti +4 INTEGER DESCA( * ), DESCB( * ) .TP 20 .ti +4 REAL A( * ), AF( * ), B( * ), WORK( * ) .SH PURPOSE PSDBTRS solves a system of linear equations or .br A(1:N, JA:JA+N-1)' * X = B(IB:IB+N-1, 1:NRHS) .br where A(1:N, JA:JA+N-1) is the matrix used to produce the factors stored in A(1:N,JA:JA+N-1) and AF by PSDBTRF. .br A(1:N, JA:JA+N-1) is an N-by-N real .br banded diagonally dominant-like distributed .br matrix with bandwidth BWL, BWU. .br Routine PSDBTRF MUST be called first. 
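Local storage for every routine on these pages is sized via NUMROC, following the LOCr()/LOCc() formulas quoted in the descriptor notes. The following pure-Python transcription of the block-cyclic counting rule is an illustrative sketch, not the ScaLAPACK Fortran tool function:

```python
def numroc(n, nb, iproc, isrcproc, nprocs):
    """Number of rows/columns of an n-element dimension, distributed in
    blocks of size nb over nprocs processes starting at process
    isrcproc, that land on process iproc."""
    mydist = (nprocs + iproc - isrcproc) % nprocs  # distance from the source process
    nblocks = n // nb                   # number of full blocks
    result = (nblocks // nprocs) * nb   # whole rounds of blocks everyone gets
    extrablks = nblocks % nprocs        # leftover full blocks
    if mydist < extrablks:
        result += nb                    # one extra full block
    elif mydist == extrablks:
        result += n % nb                # the trailing partial block
    return result

# Local row counts on a 2-process column for M = 10, MB_A = 3, RSRC_A = 0:
print([numroc(10, 3, p, 0, 2) for p in (0, 1)])  # [6, 4]
```

Note that both values respect the upper bound from the notes, LOCr(M) <= ceil(ceil(M/MB_A)/NPROW)*MB_A = ceil(4/2)*3 = 6, and that the local counts always sum to the global dimension.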
.br scalapack-doc-1.5/man/manl/psdbtrsv.l0100644000056400000620000000220406335610640017264 0ustar pfrauenfstaff.TH PSDBTRSV l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PSDBTRSV - solve a banded triangular system of linear equations A(1:N, JA:JA+N-1) * X = B(IB:IB+N-1, 1:NRHS) .SH SYNOPSIS .TP 21 SUBROUTINE PSDBTRSV( UPLO, TRANS, N, BWL, BWU, NRHS, A, JA, DESCA, B, IB, DESCB, AF, LAF, WORK, LWORK, INFO ) .TP 21 .ti +4 CHARACTER TRANS, UPLO .TP 21 .ti +4 INTEGER BWL, BWU, IB, INFO, JA, LAF, LWORK, N, NRHS .TP 21 .ti +4 INTEGER DESCA( * ), DESCB( * ) .TP 21 .ti +4 REAL A( * ), AF( * ), B( * ), WORK( * ) .SH PURPOSE PSDBTRSV solves a banded triangular system of linear equations or .br A(1:N, JA:JA+N-1)^T * X = B(IB:IB+N-1, 1:NRHS) where A(1:N, JA:JA+N-1) is a banded .br triangular matrix factor produced by the .br Gaussian elimination code PSDBTRF .br and is stored in A(1:N,JA:JA+N-1) and AF. .br The matrix stored in A(1:N, JA:JA+N-1) is either .br upper or lower triangular according to UPLO, .br and the choice of solving A(1:N, JA:JA+N-1) or A(1:N, JA:JA+N-1)^T is dictated by the user by the parameter TRANS. .br Routine PSDBTRF MUST be called first. .br scalapack-doc-1.5/man/manl/psdtsv.l0100644000056400000620000000136606335610640016750 0ustar pfrauenfstaff.TH PSDTSV l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PSDTSV - solve a system of linear equations A(1:N, JA:JA+N-1) * X = B(IB:IB+N-1, 1:NRHS) .SH SYNOPSIS .TP 19 SUBROUTINE PSDTSV( N, NRHS, DL, D, DU, JA, DESCA, B, IB, DESCB, WORK, LWORK, INFO ) .TP 19 .ti +4 INTEGER IB, INFO, JA, LWORK, N, NRHS .TP 19 .ti +4 INTEGER DESCA( * ), DESCB( * ) .TP 19 .ti +4 REAL B( * ), D( * ), DL( * ), DU( * ), WORK( * ) .SH PURPOSE PSDTSV solves a system of linear equations where A(1:N, JA:JA+N-1) is an N-by-N real .br tridiagonal diagonally dominant-like distributed .br matrix.
.br Gaussian elimination without pivoting .br is used to factor a reordering .br of the matrix into L U. .br See PSDTTRF and PSDTTRS for details. .br scalapack-doc-1.5/man/manl/psdttrf.l0100644000056400000620000000212506335610640017105 0ustar pfrauenfstaff.TH PSDTTRF l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PSDTTRF - compute a LU factorization of an N-by-N real tridiagonal diagonally dominant-like distributed matrix A(1:N, JA:JA+N-1) .SH SYNOPSIS .TP 20 SUBROUTINE PSDTTRF( N, DL, D, DU, JA, DESCA, AF, LAF, WORK, LWORK, INFO ) .TP 20 .ti +4 INTEGER INFO, JA, LAF, LWORK, N .TP 20 .ti +4 INTEGER DESCA( * ) .TP 20 .ti +4 REAL AF( * ), D( * ), DL( * ), DU( * ), WORK( * ) .SH PURPOSE PSDTTRF computes a LU factorization of an N-by-N real tridiagonal diagonally dominant-like distributed matrix A(1:N, JA:JA+N-1). Reordering is used to increase parallelism in the factorization. This reordering results in factors that are DIFFERENT from those produced by equivalent sequential codes. These factors cannot be used directly by users; however, they can be used in .br subsequent calls to PSDTTRS to solve linear systems. .br The factorization has the form .br P A(1:N, JA:JA+N-1) P^T = L U .br where U is a tridiagonal upper triangular matrix and L is tridiagonal lower triangular, and P is a permutation matrix. 
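PSDTTRF's reordered parallel factorization produces factors different from serial ones, but the serial kernel it generalizes, an LU factorization of a diagonally dominant tridiagonal matrix without pivoting, is compact. A pure-Python sketch (illustrative only, not the ScaLAPACK code path) using the same DL/D/DU band storage:

```python
def tridiag_lu(dl, d, du):
    """LU of a tridiagonal matrix without pivoting.

    dl: sub-diagonal (n-1), d: diagonal (n), du: super-diagonal (n-1).
    Returns (l, u): L is unit lower bidiagonal with sub-diagonal l,
    U is upper bidiagonal with diagonal u and super-diagonal du
    (unchanged). Safe only for diagonally dominant-like matrices,
    as the man page requires.
    """
    n = len(d)
    u = [0.0] * n
    l = [0.0] * (n - 1)
    u[0] = d[0]
    for i in range(1, n):
        l[i - 1] = dl[i - 1] / u[i - 1]     # elimination multiplier
        u[i] = d[i] - l[i - 1] * du[i - 1]  # updated pivot
    return l, u

def tridiag_solve(l, u, du, b):
    """Solve L*U*x = b given the factors from tridiag_lu."""
    n = len(u)
    y = list(b)
    for i in range(1, n):            # forward substitution with L
        y[i] -= l[i - 1] * y[i - 1]
    x = [0.0] * n
    x[-1] = y[-1] / u[-1]
    for i in range(n - 2, -1, -1):   # back substitution with U
        x[i] = (y[i] - du[i] * x[i + 1]) / u[i]
    return x
```

For example, with d = [4, 4, 4], dl = du = [1, 1] and right-hand side b = [6, 12, 14], the solve returns x close to [1, 2, 3]. The distributed routine additionally applies the permutation P on both sides to decouple work across processes.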
.br scalapack-doc-1.5/man/manl/psdttrs.l0100644000056400000620000000164006335610640017123 0ustar pfrauenfstaff.TH PSDTTRS l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PSDTTRS - solve a system of linear equations A(1:N, JA:JA+N-1) * X = B(IB:IB+N-1, 1:NRHS) .SH SYNOPSIS .TP 20 SUBROUTINE PSDTTRS( TRANS, N, NRHS, DL, D, DU, JA, DESCA, B, IB, DESCB, AF, LAF, WORK, LWORK, INFO ) .TP 20 .ti +4 CHARACTER TRANS .TP 20 .ti +4 INTEGER IB, INFO, JA, LAF, LWORK, N, NRHS .TP 20 .ti +4 INTEGER DESCA( * ), DESCB( * ) .TP 20 .ti +4 REAL AF( * ), B( * ), D( * ), DL( * ), DU( * ), WORK( * ) .SH PURPOSE PSDTTRS solves a system of linear equations or .br A(1:N, JA:JA+N-1)' * X = B(IB:IB+N-1, 1:NRHS) .br where A(1:N, JA:JA+N-1) is the matrix used to produce the factors stored in A(1:N,JA:JA+N-1) and AF by PSDTTRF. .br A(1:N, JA:JA+N-1) is an N-by-N real .br tridiagonal diagonally dominant-like distributed .br matrix. .br Routine PSDTTRF MUST be called first. .br scalapack-doc-1.5/man/manl/psdttrsv.l0100644000056400000620000000223106335610641017307 0ustar pfrauenfstaff.TH PSDTTRSV l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PSDTTRSV - solve a tridiagonal triangular system of linear equations A(1:N, JA:JA+N-1) * X = B(IB:IB+N-1, 1:NRHS) .SH SYNOPSIS .TP 21 SUBROUTINE PSDTTRSV( UPLO, TRANS, N, NRHS, DL, D, DU, JA, DESCA, B, IB, DESCB, AF, LAF, WORK, LWORK, INFO ) .TP 21 .ti +4 CHARACTER TRANS, UPLO .TP 21 .ti +4 INTEGER IB, INFO, JA, LAF, LWORK, N, NRHS .TP 21 .ti +4 INTEGER DESCA( * ), DESCB( * ) .TP 21 .ti +4 REAL AF( * ), B( * ), D( * ), DL( * ), DU( * ), WORK( * ) .SH PURPOSE PSDTTRSV solves a tridiagonal triangular system of linear equations or .br A(1:N, JA:JA+N-1)^T * X = B(IB:IB+N-1, 1:NRHS) where A(1:N, JA:JA+N-1) is a tridiagonal .br triangular matrix factor produced by the .br Gaussian elimination code PSDTTRF .br and is stored in A(1:N,JA:JA+N-1) and AF.
.br The matrix stored in A(1:N, JA:JA+N-1) is either .br upper or lower triangular according to UPLO, .br and the choice of solving A(1:N, JA:JA+N-1) or A(1:N, JA:JA+N-1)^T is dictated by the user by the parameter TRANS. .br Routine PSDTTRF MUST be called first. .br scalapack-doc-1.5/man/manl/psgbsv.l0100644000056400000620000000137206335610641016727 0ustar pfrauenfstaff.TH PSGBSV l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PSGBSV - solve a system of linear equations A(1:N, JA:JA+N-1) * X = B(IB:IB+N-1, 1:NRHS) .SH SYNOPSIS .TP 19 SUBROUTINE PSGBSV( N, BWL, BWU, NRHS, A, JA, DESCA, IPIV, B, IB, DESCB, WORK, LWORK, INFO ) .TP 19 .ti +4 INTEGER BWL, BWU, IB, INFO, JA, LWORK, N, NRHS .TP 19 .ti +4 INTEGER DESCA( * ), DESCB( * ), IPIV( * ) .TP 19 .ti +4 REAL A( * ), B( * ), WORK( * ) .SH PURPOSE PSGBSV solves a system of linear equations where A(1:N, JA:JA+N-1) is an N-by-N real .br banded distributed .br matrix with bandwidth BWL, BWU. .br Gaussian elimination with pivoting .br is used to factor a reordering .br of the matrix into P L U. .br See PSGBTRF and PSGBTRS for details. .br scalapack-doc-1.5/man/manl/psgbtrf.l0100644000056400000620000000236206335610641017072 0ustar pfrauenfstaff.TH PSGBTRF l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PSGBTRF - compute a LU factorization of an N-by-N real banded distributed matrix with bandwidth BWL, BWU .SH SYNOPSIS .TP 20 SUBROUTINE PSGBTRF( N, BWL, BWU, A, JA, DESCA, IPIV, AF, LAF, WORK, LWORK, INFO ) .TP 20 .ti +4 INTEGER BWL, BWU, INFO, JA, LAF, LWORK, N .TP 20 .ti +4 INTEGER DESCA( * ), IPIV( * ) .TP 20 .ti +4 REAL A( * ), AF( * ), WORK( * ) .SH PURPOSE PSGBTRF computes a LU factorization of an N-by-N real banded distributed matrix with bandwidth BWL, BWU: A(1:N, JA:JA+N-1). Reordering is used to increase parallelism in the factorization. This reordering results in factors that are DIFFERENT from those produced by equivalent sequential codes. 
These factors cannot be used directly by users; however, they can be used in .br subsequent calls to PSGBTRS to solve linear systems. .br The factorization has the form .br P A(1:N, JA:JA+N-1) Q = L U .br where U is a banded upper triangular matrix and L is banded lower triangular, and P and Q are permutation matrices. .br The matrix Q represents reordering of columns .br for parallelism's sake, while P represents .br reordering of rows for numerical stability using .br classic partial pivoting. .br scalapack-doc-1.5/man/manl/psgbtrs.l0100644000056400000620000000164306335610641017110 0ustar pfrauenfstaff.TH PSGBTRS l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PSGBTRS - solve a system of linear equations A(1:N, JA:JA+N-1) * X = B(IB:IB+N-1, 1:NRHS) .SH SYNOPSIS .TP 20 SUBROUTINE PSGBTRS( TRANS, N, BWL, BWU, NRHS, A, JA, DESCA, IPIV, B, IB, DESCB, AF, LAF, WORK, LWORK, INFO ) .TP 20 .ti +4 CHARACTER TRANS .TP 20 .ti +4 INTEGER BWU, BWL, IB, INFO, JA, LAF, LWORK, N, NRHS .TP 20 .ti +4 INTEGER DESCA( * ), DESCB( * ), IPIV(*) .TP 20 .ti +4 REAL A( * ), AF( * ), B( * ), WORK( * ) .SH PURPOSE PSGBTRS solves a system of linear equations or .br A(1:N, JA:JA+N-1)' * X = B(IB:IB+N-1, 1:NRHS) .br where A(1:N, JA:JA+N-1) is the matrix used to produce the factors stored in A(1:N,JA:JA+N-1) and AF by PSGBTRF. .br A(1:N, JA:JA+N-1) is an N-by-N real .br banded distributed .br matrix with bandwidth BWL, BWU. .br Routine PSGBTRF MUST be called first. 
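All of these routines report argument errors through the shared INFO convention documented on these pages: INFO = -i for an illegal scalar argument i, and INFO = -(i*100+j) when entry j of array argument i is illegal. A small hypothetical helper (not part of ScaLAPACK) makes the decoding explicit:

```python
def decode_info(info):
    """Decode a negative ScaLAPACK INFO value.

    Returns (arg_index, entry_index); entry_index is None when the
    offending argument was a scalar. Hypothetical helper, not a
    ScaLAPACK routine.
    """
    if info >= 0:
        raise ValueError("no argument error encoded")
    code = -info
    if code >= 100:
        return code // 100, code % 100  # array argument, bad entry
    return code, None                   # scalar argument

print(decode_info(-602))  # (6, 2): entry 2 of argument 6
print(decode_info(-3))    # (3, None): scalar argument 3
```

With LWORK = -1 (a workspace query), no such error is raised for the work arrays; the optimal sizes are simply returned in WORK(1) and PXERBLA stays silent.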
.br scalapack-doc-1.5/man/manl/psgebd2.l0100644000056400000620000002217106335610641016751 0ustar pfrauenfstaff.TH PSGEBD2 l "12 May 1997" "LAPACK version 1.5" "LAPACK auxiliary routine (version 1.5)" .SH NAME PSGEBD2 - reduce a real general M-by-N distributed matrix sub( A ) = A(IA:IA+M-1,JA:JA+N-1) to upper or lower bidiagonal form B by an orthogonal transformation .SH SYNOPSIS .TP 20 SUBROUTINE PSGEBD2( M, N, A, IA, JA, DESCA, D, E, TAUQ, TAUP, WORK, LWORK, INFO ) .TP 20 .ti +4 INTEGER IA, INFO, JA, LWORK, M, N .TP 20 .ti +4 INTEGER DESCA( * ) .TP 20 .ti +4 REAL A( * ), D( * ), E( * ), TAUP( * ), TAUQ( * ), WORK( * ) .SH PURPOSE PSGEBD2 reduces a real general M-by-N distributed matrix sub( A ) = A(IA:IA+M-1,JA:JA+N-1) to upper or lower bidiagonal form B by an orthogonal transformation: Q' * sub( A ) * P = B. If M >= N, B is upper bidiagonal; if M < N, B is lower bidiagonal. Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. 
.br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 M (global input) INTEGER The number of rows to be operated on, i.e. the number of rows of the distributed submatrix sub( A ). M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on, i.e. the number of columns of the distributed submatrix sub( A ). N >= 0. .TP 8 A (local input/local output) REAL pointer into the local memory to an array of dimension (LLD_A,LOCc(JA+N-1)). On entry, this array contains the local pieces of the general distributed matrix sub( A ). 
On exit, if M >= N, the diagonal and the first superdiagonal of sub( A ) are overwritten with the upper bidiagonal matrix B; the elements below the diagonal, with the array TAUQ, represent the orthogonal matrix Q as a product of elementary reflectors, and the elements above the first superdiagonal, with the array TAUP, represent the orthogonal matrix P as a product of elementary reflectors. If M < N, the diagonal and the first subdiagonal are overwritten with the lower bidiagonal matrix B; the elements below the first subdiagonal, with the array TAUQ, represent the orthogonal matrix Q as a product of elementary reflectors, and the elements above the diagonal, with the array TAUP, represent the orthogonal matrix P as a product of elementary reflectors. See Further Details. IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 D (local output) REAL array, dimension LOCc(JA+MIN(M,N)-1) if M >= N; LOCr(IA+MIN(M,N)-1) otherwise. The distributed diagonal elements of the bidiagonal matrix B: D(i) = A(i,i). D is tied to the distributed matrix A. .TP 8 E (local output) REAL array, dimension LOCr(IA+MIN(M,N)-1) if M >= N; LOCc(JA+MIN(M,N)-2) otherwise. The distributed off-diagonal elements of the bidiagonal distributed matrix B: if m >= n, E(i) = A(i,i+1) for i = 1,2,...,n-1; if m < n, E(i) = A(i+1,i) for i = 1,2,...,m-1. E is tied to the distributed matrix A. .TP 8 TAUQ (local output) REAL array dimension LOCc(JA+MIN(M,N)-1). The scalar factors of the elementary reflectors which represent the orthogonal matrix Q. TAUQ is tied to the distributed matrix A. See Further Details. TAUP (local output) REAL array, dimension LOCr(IA+MIN(M,N)-1). 
The scalar factors of the elementary reflectors which represent the orthogonal matrix P. TAUP is tied to the distributed matrix A. See Further Details. WORK (local workspace/local output) REAL array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. LWORK is local input and must be at least LWORK >= MAX( MpA0, NqA0 ) where NB = MB_A = NB_A, IROFFA = MOD( IA-1, NB ) IAROW = INDXG2P( IA, NB, MYROW, RSRC_A, NPROW ), IACOL = INDXG2P( JA, NB, MYCOL, CSRC_A, NPCOL ), MpA0 = NUMROC( M+IROFFA, NB, MYROW, IAROW, NPROW ), NqA0 = NUMROC( N+IROFFA, NB, MYCOL, IACOL, NPCOL ). INDXG2P and NUMROC are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 INFO (local output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. .SH FURTHER DETAILS The matrices Q and P are represented as products of elementary reflectors: .br If m >= n, .br Q = H(1) H(2) . . . H(n) and P = G(1) G(2) . . . G(n-1) Each H(i) and G(i) has the form: .br H(i) = I - tauq * v * v' and G(i) = I - taup * u * u' where tauq and taup are real scalars, and v and u are real vectors; v(1:i-1) = 0, v(i) = 1, and v(i+1:m) is stored on exit in A(ia+i:ia+m-1,ja+i-1); .br u(1:i) = 0, u(i+1) = 1, and u(i+2:n) is stored on exit in A(ia+i-1,ja+i+1:ja+n-1); .br tauq is stored in TAUQ(ja+i-1) and taup in TAUP(ia+i-1). .br If m < n, .br Q = H(1) H(2) . . . H(m-1) and P = G(1) G(2) . . . 
G(m) Each H(i) and G(i) has the form: .br H(i) = I - tauq * v * v' and G(i) = I - taup * u * u' where tauq and taup are real scalars, and v and u are real vectors; v(1:i) = 0, v(i+1) = 1, and v(i+2:m) is stored on exit in A(ia+i+1:ia+m-1,ja+i-1); .br u(1:i-1) = 0, u(i) = 1, and u(i+1:n) is stored on exit in A(ia+i-1,ja+i:ja+n-1); .br tauq is stored in TAUQ(ja+i-1) and taup in TAUP(ia+i-1). .br The contents of sub( A ) on exit are illustrated by the following examples: .br m = 6 and n = 5 (m > n): m = 5 and n = 6 (m < n): ( d e u1 u1 u1 ) ( d u1 u1 u1 u1 u1 ) ( v1 d e u2 u2 ) ( e d u2 u2 u2 u2 ) ( v1 v2 d e u3 ) ( v1 e d u3 u3 u3 ) ( v1 v2 v3 d e ) ( v1 v2 e d u4 u4 ) ( v1 v2 v3 v4 d ) ( v1 v2 v3 e d u5 ) ( v1 v2 v3 v4 v5 ) .br where d and e denote diagonal and off-diagonal elements of B, vi denotes an element of the vector defining H(i), and ui an element of the vector defining G(i). .br Alignment requirements .br ====================== .br The distributed submatrix sub( A ) must verify some alignment proper- ties, namely the following expressions should be true: .br ( MB_A.EQ.NB_A .AND. IROFFA.EQ.ICOFFA ) .br scalapack-doc-1.5/man/manl/psgebrd.l0100644000056400000620000002221106335610641017044 0ustar pfrauenfstaff.TH PSGEBRD l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PSGEBRD - reduce a real general M-by-N distributed matrix sub( A ) = A(IA:IA+M-1,JA:JA+N-1) to upper or lower bidiagonal form B by an orthogonal transformation .SH SYNOPSIS .TP 20 SUBROUTINE PSGEBRD( M, N, A, IA, JA, DESCA, D, E, TAUQ, TAUP, WORK, LWORK, INFO ) .TP 20 .ti +4 INTEGER IA, INFO, JA, LWORK, M, N .TP 20 .ti +4 INTEGER DESCA( * ) .TP 20 .ti +4 REAL A( * ), D( * ), E( * ), TAUP( * ), TAUQ( * ), WORK( * ) .SH PURPOSE PSGEBRD reduces a real general M-by-N distributed matrix sub( A ) = A(IA:IA+M-1,JA:JA+N-1) to upper or lower bidiagonal form B by an orthogonal transformation: Q' * sub( A ) * P = B. 
If M >= N, B is upper bidiagonal; if M < N, B is lower bidiagonal. Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. 
.br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 M (global input) INTEGER The number of rows to be operated on, i.e. the number of rows of the distributed submatrix sub( A ). M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on, i.e. the number of columns of the distributed submatrix sub( A ). N >= 0. .TP 8 A (local input/local output) REAL pointer into the local memory to an array of dimension (LLD_A,LOCc(JA+N-1)). On entry, this array contains the local pieces of the general distributed matrix sub( A ). On exit, if M >= N, the diagonal and the first superdiagonal of sub( A ) are overwritten with the upper bidiagonal matrix B; the elements below the diagonal, with the array TAUQ, represent the orthogonal matrix Q as a product of elementary reflectors, and the elements above the first superdiagonal, with the array TAUP, represent the orthogonal matrix P as a product of elementary reflectors. If M < N, the diagonal and the first subdiagonal are overwritten with the lower bidiagonal matrix B; the elements below the first subdiagonal, with the array TAUQ, represent the orthogonal matrix Q as a product of elementary reflectors, and the elements above the diagonal, with the array TAUP, represent the orthogonal matrix P as a product of elementary reflectors. See Further Details. IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. 
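As an illustration of the descriptor just described, here is what a valid DESCA might look like for a 100-by-100 matrix with 8-by-8 blocks on a 2-by-3 process grid (a Python sketch; the context value 0 is a placeholder, since the real handle comes from BLACS grid initialization and varies per process):

```python
import math

m, n = 100, 100          # global matrix size
mb, nb = 8, 8            # row/column blocking factors
nprow, npcol = 2, 3      # process grid dimensions
ictxt = 0                # BLACS context handle (placeholder value; the real
                         # handle comes from BLACS grid initialization)

# LLD_A must satisfy LLD_A >= MAX(1, LOCr(M_A)); the upper bound from the
# Notes, ceil(ceil(M/MB_A)/NPROW)*MB_A, is always a safe choice:
lld = max(1, math.ceil(math.ceil(m / mb) / nprow) * mb)

#        DTYPE_ CTXT_  M_ N_ MB_ NB_ RSRC_ CSRC_ LLD_
desca = [1,     ictxt, m, n, mb, nb, 0,    0,    lld]
```

The entry order shown (DTYPE_, CTXT_, M_, N_, MB_, NB_, RSRC_, CSRC_, LLD_) follows the NOTATION table in the Notes above.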
.TP 8 D (local output) REAL array, dimension LOCc(JA+MIN(M,N)-1) if M >= N; LOCr(IA+MIN(M,N)-1) otherwise. The distributed diagonal elements of the bidiagonal matrix B: D(i) = A(i,i). D is tied to the distributed matrix A. .TP 8 E (local output) REAL array, dimension LOCr(IA+MIN(M,N)-1) if M >= N; LOCc(JA+MIN(M,N)-2) otherwise. The distributed off-diagonal elements of the bidiagonal distributed matrix B: if m >= n, E(i) = A(i,i+1) for i = 1,2,...,n-1; if m < n, E(i) = A(i+1,i) for i = 1,2,...,m-1. E is tied to the distributed matrix A. .TP 8 TAUQ (local output) REAL array dimension LOCc(JA+MIN(M,N)-1). The scalar factors of the elementary reflectors which represent the orthogonal matrix Q. TAUQ is tied to the distributed matrix A. See Further Details. TAUP (local output) REAL array, dimension LOCr(IA+MIN(M,N)-1). The scalar factors of the elementary reflectors which represent the orthogonal matrix P. TAUP is tied to the distributed matrix A. See Further Details. WORK (local workspace/local output) REAL array, dimension (LWORK) On exit, WORK( 1 ) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. LWORK is local input and must be at least LWORK >= NB*( MpA0 + NqA0 + 1 ) + NqA0 where NB = MB_A = NB_A, IROFFA = MOD( IA-1, NB ), ICOFFA = MOD( JA-1, NB ), IAROW = INDXG2P( IA, NB, MYROW, RSRC_A, NPROW ), IACOL = INDXG2P( JA, NB, MYCOL, CSRC_A, NPCOL ), MpA0 = NUMROC( M+IROFFA, NB, MYROW, IAROW, NPROW ), NqA0 = NUMROC( N+ICOFFA, NB, MYCOL, IACOL, NPCOL ). INDXG2P and NUMROC are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. 
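The PSGEBRD bound above can be evaluated ahead of time with small serial Python models of the INDXG2P and NUMROC tool functions (sketches of their documented arithmetic, not the Fortran sources; psgebrd_lwork is a hypothetical helper name):

```python
def indxg2p(ig, nb, isrc, nprocs):
    """Process row/column owning global index ig (1-based): a serial
    model of the INDXG2P tool function's arithmetic."""
    return (isrc + (ig - 1) // nb) % nprocs

def numroc(n, nb, iproc, isrc, nprocs):
    """Local row/column count on process iproc: a serial model of NUMROC."""
    mydist = (nprocs + iproc - isrc) % nprocs   # block distance from source
    nblocks = n // nb                           # full blocks in this dimension
    num = (nblocks // nprocs) * nb              # whole rounds of blocks
    extra = nblocks % nprocs
    if mydist < extra:
        num += nb                               # one leftover full block
    elif mydist == extra:
        num += n % nb                           # the trailing partial block
    return num

def psgebrd_lwork(m, n, ia, ja, nbk, myrow, mycol, rsrc, csrc, nprow, npcol):
    """Evaluate LWORK >= NB*( MpA0 + NqA0 + 1 ) + NqA0 for one process
    (assumes MB_A = NB_A = nbk, as the alignment requirements demand)."""
    iroffa = (ia - 1) % nbk
    icoffa = (ja - 1) % nbk
    iarow = indxg2p(ia, nbk, rsrc, nprow)
    iacol = indxg2p(ja, nbk, csrc, npcol)
    mpa0 = numroc(m + iroffa, nbk, myrow, iarow, nprow)
    nqa0 = numroc(n + icoffa, nbk, mycol, iacol, npcol)
    return nbk * (mpa0 + nqa0 + 1) + nqa0
```

In practice the LWORK = -1 workspace query described above is the authoritative way to size WORK; a model like this is only useful for allocation planning and sanity checks.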
.TP 8 INFO (global output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. .SH FURTHER DETAILS The matrices Q and P are represented as products of elementary reflectors: .br If m >= n, .br Q = H(1) H(2) . . . H(n) and P = G(1) G(2) . . . G(n-1) Each H(i) and G(i) has the form: .br H(i) = I - tauq * v * v' and G(i) = I - taup * u * u' where tauq and taup are real scalars, and v and u are real vectors; v(1:i-1) = 0, v(i) = 1, and v(i+1:m) is stored on exit in A(ia+i:ia+m-1,ja+i-1); .br u(1:i) = 0, u(i+1) = 1, and u(i+2:n) is stored on exit in A(ia+i-1,ja+i+1:ja+n-1); .br tauq is stored in TAUQ(ja+i-1) and taup in TAUP(ia+i-1). .br If m < n, .br Q = H(1) H(2) . . . H(m-1) and P = G(1) G(2) . . . G(m) Each H(i) and G(i) has the form: .br H(i) = I - tauq * v * v' and G(i) = I - taup * u * u' where tauq and taup are real scalars, and v and u are real vectors; v(1:i) = 0, v(i+1) = 1, and v(i+2:m) is stored on exit in A(ia+i+1:ia+m-1,ja+i-1); .br u(1:i-1) = 0, u(i) = 1, and u(i+1:n) is stored on exit in A(ia+i-1,ja+i:ja+n-1); .br tauq is stored in TAUQ(ja+i-1) and taup in TAUP(ia+i-1). .br The contents of sub( A ) on exit are illustrated by the following examples: .br m = 6 and n = 5 (m > n): m = 5 and n = 6 (m < n): ( d e u1 u1 u1 ) ( d u1 u1 u1 u1 u1 ) ( v1 d e u2 u2 ) ( e d u2 u2 u2 u2 ) ( v1 v2 d e u3 ) ( v1 e d u3 u3 u3 ) ( v1 v2 v3 d e ) ( v1 v2 e d u4 u4 ) ( v1 v2 v3 v4 d ) ( v1 v2 v3 e d u5 ) ( v1 v2 v3 v4 v5 ) .br where d and e denote diagonal and off-diagonal elements of B, vi denotes an element of the vector defining H(i), and ui an element of the vector defining G(i). .br Alignment requirements .br ====================== .br The distributed submatrix sub( A ) must verify some alignment proper- ties, namely the following expressions should be true: .br ( MB_A.EQ.NB_A .AND. 
IROFFA.EQ.ICOFFA ) .br scalapack-doc-1.5/man/manl/psgecon.l0100644000056400000620000001543706335610641017070 0ustar pfrauenfstaff.TH PSGECON l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PSGECON - estimate the reciprocal of the condition number of a general distributed real matrix A(IA:IA+N-1,JA:JA+N-1), in either the 1-norm or the infinity-norm, using the LU factorization computed by PSGETRF .SH SYNOPSIS .TP 20 SUBROUTINE PSGECON( NORM, N, A, IA, JA, DESCA, ANORM, RCOND, WORK, LWORK, IWORK, LIWORK, INFO ) .TP 20 .ti +4 CHARACTER NORM .TP 20 .ti +4 INTEGER IA, INFO, JA, LIWORK, LWORK, N .TP 20 .ti +4 REAL ANORM, RCOND .TP 20 .ti +4 INTEGER DESCA( * ), IWORK( * ) .TP 20 .ti +4 REAL A( * ), WORK( * ) .SH PURPOSE PSGECON estimates the reciprocal of the condition number of a general distributed real matrix A(IA:IA+N-1,JA:JA+N-1), in either the 1-norm or the infinity-norm, using the LU factorization computed by PSGETRF. An estimate is obtained for norm(inv(A(IA:IA+N-1,JA:JA+N-1))), and the reciprocal of the condition number is computed as .br RCOND = 1 / ( norm( A(IA:IA+N-1,JA:JA+N-1) ) * norm( inv(A(IA:IA+N-1,JA:JA+N-1)) ) ). Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. 
The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 NORM (global input) CHARACTER Specifies whether the 1-norm condition number or the infinity-norm condition number is required: .br = '1' or 'O': 1-norm .br = 'I': Infinity-norm .TP 8 N (global input) INTEGER .br The order of the distributed matrix A(IA:IA+N-1,JA:JA+N-1). N >= 0. .TP 8 A (local input) REAL pointer into the local memory to an array of dimension ( LLD_A, LOCc(JA+N-1) ). 
On entry, this array contains the local pieces of the factors L and U from the factorization A(IA:IA+N-1,JA:JA+N-1) = P*L*U; the unit diagonal elements of L are not stored. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 ANORM (global input) REAL If NORM = '1' or 'O', the 1-norm of the original distributed matrix A(IA:IA+N-1,JA:JA+N-1). If NORM = 'I', the infinity-norm of the original distributed matrix A(IA:IA+N-1,JA:JA+N-1). .TP 8 RCOND (global output) REAL The reciprocal of the condition number of the distributed matrix A(IA:IA+N-1,JA:JA+N-1), computed as .br RCOND = 1 / ( norm( A(IA:IA+N-1,JA:JA+N-1) ) * .br norm( inv(A(IA:IA+N-1,JA:JA+N-1)) ) ). .TP 8 WORK (local workspace/local output) REAL array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. LWORK is local input and must be at least LWORK >= 2*LOCr(N+MOD(IA-1,MB_A)) + 2*LOCc(N+MOD(JA-1,NB_A)) + MAX( 2, MAX( NB_A*MAX( 1, CEIL(NPROW-1,NPCOL) ), LOCc(N+MOD(JA-1,NB_A)) + NB_A*MAX( 1, CEIL(NPCOL-1,NPROW) ) ) ). LOCr and LOCc values can be computed using the ScaLAPACK tool function NUMROC; NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 IWORK (local workspace/local output) INTEGER array, dimension (LIWORK) On exit, IWORK(1) returns the minimal and optimal LIWORK. 
.TP 8 LIWORK (local or global input) INTEGER The dimension of the array IWORK. LIWORK is local input and must be at least LIWORK >= LOCr(N+MOD(IA-1,MB_A)). If LIWORK = -1, then LIWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 INFO (global output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. scalapack-doc-1.5/man/manl/psgeequ.l0100644000056400000620000001434206335610641017075 0ustar pfrauenfstaff.TH PSGEEQU l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PSGEEQU - compute row and column scalings intended to equilibrate an M-by-N distributed matrix sub( A ) = A(IA:IA+M-1,JA:JA+N-1) and reduce its condition number .SH SYNOPSIS .TP 20 SUBROUTINE PSGEEQU( M, N, A, IA, JA, DESCA, R, C, ROWCND, COLCND, AMAX, INFO ) .TP 20 .ti +4 INTEGER IA, INFO, JA, M, N .TP 20 .ti +4 REAL AMAX, COLCND, ROWCND .TP 20 .ti +4 INTEGER DESCA( * ) .TP 20 .ti +4 REAL A( * ), C( * ), R( * ) .SH PURPOSE PSGEEQU computes row and column scalings intended to equilibrate an M-by-N distributed matrix sub( A ) = A(IA:IA+M-1,JA:JA+N-1) and reduce its condition number. R returns the row scale factors and C the column scale factors, chosen to try to make the largest entry in each row and column of the distributed matrix B with elements B(i,j) = R(i) * A(i,j) * C(j) have absolute value 1. .br R(i) and C(j) are restricted to be between SMLNUM = smallest safe number and BIGNUM = largest safe number. Use of these scaling factors is not guaranteed to reduce the condition number of sub( A ) but works well in practice. 
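A serial sketch of the scale factors just described (equilibrate is a hypothetical helper name; it assumes no exactly-zero row or column, which the real routine instead reports through INFO > 0, and it omits the SMLNUM/BIGNUM clamping):

```python
def equilibrate(a):
    """Row/column scale factors for a dense matrix: a serial sketch of
    the quantities PSGEEQU computes."""
    r = [1.0 / max(abs(x) for x in row) for row in a]        # row factors
    cols = list(zip(*a))
    c = [1.0 / max(abs(r[i] * col[i]) for i in range(len(col)))
         for col in cols]                                     # column factors
    rowcnd = min(r) / max(r)    # ratio of smallest R(i) to largest R(i)
    colcnd = min(c) / max(c)
    amax = max(abs(x) for row in a for x in row)
    return r, c, rowcnd, colcnd, amax

r, c, rowcnd, colcnd, amax = equilibrate([[1.0, 100.0],
                                          [0.01, 1.0]])
# every row and column of B(i,j) = r[i]*a[i][j]*c[j] now peaks at |1|
```

Here ROWCND comes out as 0.01, signalling (per the ROWCND description below, threshold 0.1) that row scaling is worthwhile for this matrix.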
.br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. 
.br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 M (global input) INTEGER The number of rows to be operated on i.e the number of rows of the distributed submatrix sub( A ). M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on i.e the number of columns of the distributed submatrix sub( A ). N >= 0. .TP 8 A (local input) REAL pointer into the local memory to an array of dimension ( LLD_A, LOCc(JA+N-1) ), the local pieces of the M-by-N distributed matrix whose equilibration factors are to be computed. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 R (local output) REAL array, dimension LOCr(M_A) If INFO = 0 or INFO > IA+M-1, R(IA:IA+M-1) contains the row scale factors for sub( A ). R is aligned with the distributed matrix A, and replicated across every process column. R is tied to the distributed matrix A. .TP 8 C (local output) REAL array, dimension LOCc(N_A) If INFO = 0, C(JA:JA+N-1) contains the column scale factors for sub( A ). C is aligned with the distributed matrix A, and replicated down every process row. C is tied to the distri- buted matrix A. .TP 8 ROWCND (global output) REAL If INFO = 0 or INFO > IA+M-1, ROWCND contains the ratio of the smallest R(i) to the largest R(i) (IA <= i <= IA+M-1). If ROWCND >= 0.1 and AMAX is neither too large nor too small, it is not worth scaling by R(IA:IA+M-1). 
.TP 8 COLCND (global output) REAL If INFO = 0, COLCND contains the ratio of the smallest C(j) to the largest C(j) (JA <= j <= JA+N-1). If COLCND >= 0.1, it is not worth scaling by C(JA:JA+N-1). .TP 8 AMAX (global output) REAL Absolute value of largest distributed matrix element. If AMAX is very close to overflow or very close to underflow, the matrix should be scaled. .TP 8 INFO (global output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. > 0: If INFO = i, and i is .br <= M: the i-th row of the distributed matrix sub( A ) is exactly zero, > M: the (i-M)-th column of the distributed matrix sub( A ) is exactly zero. scalapack-doc-1.5/man/manl/psgehd2.l0100644000056400000620000001647706335610641016773 0ustar pfrauenfstaff.TH PSGEHD2 l "12 May 1997" "LAPACK version 1.5" "LAPACK auxiliary routine (version 1.5)" .SH NAME PSGEHD2 - reduce a real general distributed matrix sub( A ) to upper Hessenberg form H by an orthogonal similarity transforma- tion .SH SYNOPSIS .TP 20 SUBROUTINE PSGEHD2( N, ILO, IHI, A, IA, JA, DESCA, TAU, WORK, LWORK, INFO ) .TP 20 .ti +4 INTEGER IA, IHI, ILO, INFO, JA, LWORK, N .TP 20 .ti +4 INTEGER DESCA( * ) .TP 20 .ti +4 REAL A( * ), TAU( * ), WORK( * ) .SH PURPOSE PSGEHD2 reduces a real general distributed matrix sub( A ) to upper Hessenberg form H by an orthogonal similarity transforma- tion: Q' * sub( A ) * Q = H, where sub( A ) = A(IA+N-1:IA+N-1,JA+N-1:JA+N-1). .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. 
In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). 
An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 N (global input) INTEGER The number of rows and columns to be operated on, i.e. the order of the distributed submatrix sub( A ). N >= 0. .TP 8 ILO (global input) INTEGER IHI (global input) INTEGER It is assumed that sub( A ) is already upper triangular in rows IA:IA+ILO-2 and IA+IHI:IA+N-1 and columns JA:JA+ILO-2 and JA+IHI:JA+N-1. See Further Details. If N > 0, 1 <= ILO <= IHI <= N; otherwise set ILO = 1 and IHI = N. .TP 8 A (local input/local output) REAL pointer into the local memory to an array of dimension (LLD_A,LOCc(JA+N-1)). On entry, this array contains the local pieces of the N-by-N general distributed matrix sub( A ) to be reduced. On exit, the upper triangle and the first subdiagonal of sub( A ) are overwritten with the upper Hessenberg matrix H, and the elements below the first subdiagonal, with the array TAU, represent the orthogonal matrix Q as a product of elementary reflectors. See Further Details. IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 TAU (local output) REAL array, dimension LOCc(JA+N-2) The scalar factors of the elementary reflectors (see Further Details). Elements JA:JA+ILO-2 and JA+IHI:JA+N-2 of TAU are set to zero. TAU is tied to the distributed matrix A. .TP 8 WORK (local workspace/local output) REAL array, dimension (LWORK) On exit, WORK( 1 ) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. 
LWORK is local input and must be at least LWORK >= NB + MAX( NpA0, NB ) where NB = MB_A = NB_A, IROFFA = MOD( IA-1, NB ), IAROW = INDXG2P( IA, NB, MYROW, RSRC_A, NPROW ), NpA0 = NUMROC( IHI+IROFFA, NB, MYROW, IAROW, NPROW ), INDXG2P and NUMROC are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 INFO (local output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. .SH FURTHER DETAILS The matrix Q is represented as a product of (ihi-ilo) elementary reflectors .br Q = H(ilo) H(ilo+1) . . . H(ihi-1). .br Each H(i) has the form .br H(i) = I - tau * v * v' .br where tau is a real scalar, and v is a real vector with .br v(1:i) = 0, v(i+1) = 1 and v(ihi+1:n) = 0; v(i+2:ihi) is stored on exit in A(ia+ilo+i:ia+ihi-1,ja+ilo+i-2), and tau in TAU(ja+ilo+i-2). The contents of A(IA:IA+N-1,JA:JA+N-1) are illustrated by the follo- wing example, with n = 7, ilo = 2 and ihi = 6: .br on entry on exit .br ( a a a a a a a ) ( a a h h h h a ) ( a a a a a a ) ( a h h h h a ) ( a a a a a a ) ( h h h h h h ) ( a a a a a a ) ( v2 h h h h h ) ( a a a a a a ) ( v2 v3 h h h h ) ( a a a a a a ) ( v2 v3 v4 h h h ) ( a ) ( a ) where a denotes an element of the original matrix sub( A ), h denotes a modified element of the upper Hessenberg matrix H, and vi denotes an element of the vector defining H(ja+ilo+i-2). 
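The reflector form above, H(i) = I - tau * v * v', can be checked numerically with a tiny dense sketch (pure Python; apply_reflector is a hypothetical helper, and the choice tau = 2/v'v makes H orthogonal):

```python
def apply_reflector(a, v, tau):
    """Return H*A where H = I - tau * v * v' (dense serial sketch)."""
    rows, cols = len(a), len(a[0])
    vta = [sum(v[i] * a[i][j] for i in range(rows)) for j in range(cols)]  # v'*A
    return [[a[i][j] - tau * v[i] * vta[j] for j in range(cols)]
            for i in range(rows)]

# v follows the documented pattern v(1:i) = 0, v(i+1) = 1 (here with i = 1):
v = [0.0, 1.0, 1.0]
tau = 2.0 / sum(x * x for x in v)                # tau = 2/v'v
hv = apply_reflector([[x] for x in v], v, tau)   # H applied to v itself
# a Householder reflector sends v to -v, and applying H twice restores v
```

This is only a correctness illustration of the algebra; the routine itself applies such reflectors blockwise to the distributed matrix.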
.br Alignment requirements .br ====================== .br The distributed submatrix sub( A ) must verify some alignment proper- ties, namely the following expression should be true: .br ( MB_A.EQ.NB_A .AND. IROFFA.EQ.ICOFFA ) .br scalapack-doc-1.5/man/manl/psgehrd.l0100644000056400000620000001713306335610641017061 0ustar pfrauenfstaff.TH PSGEHRD l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PSGEHRD - reduce a real general distributed matrix sub( A ) to upper Hessenberg form H by an orthogonal similarity transforma- tion .SH SYNOPSIS .TP 20 SUBROUTINE PSGEHRD( N, ILO, IHI, A, IA, JA, DESCA, TAU, WORK, LWORK, INFO ) .TP 20 .ti +4 INTEGER IA, IHI, ILO, INFO, JA, LWORK, N .TP 20 .ti +4 INTEGER DESCA( * ) .TP 20 .ti +4 REAL A( * ), TAU( * ), WORK( * ) .SH PURPOSE PSGEHRD reduces a real general distributed matrix sub( A ) to upper Hessenberg form H by an orthogonal similarity transforma- tion: Q' * sub( A ) * Q = H, where sub( A ) = A(IA+N-1:IA+N-1,JA+N-1:JA+N-1). .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. 
.br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 N (global input) INTEGER The number of rows and columns to be operated on, i.e. the order of the distributed submatrix sub( A ). N >= 0. .TP 8 ILO (global input) INTEGER IHI (global input) INTEGER It is assumed that sub( A ) is already upper triangular in rows IA:IA+ILO-2 and IA+IHI:IA+N-1 and columns JA:JA+ILO-2 and JA+IHI:JA+N-1. See Further Details. If N > 0, 1 <= ILO <= IHI <= N; otherwise set ILO = 1 and IHI = N. .TP 8 A (local input/local output) REAL pointer into the local memory to an array of dimension (LLD_A,LOCc(JA+N-1)). On entry, this array contains the local pieces of the N-by-N general distributed matrix sub( A ) to be reduced. 
On exit, the upper triangle and the first subdiagonal of sub( A ) are overwritten with the upper Hessenberg matrix H, and the ele- ments below the first subdiagonal, with the array TAU, repre- sent the orthogonal matrix Q as a product of elementary reflectors. See Further Details. IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 TAU (local output) REAL array, dimension LOCc(JA+N-2) The scalar factors of the elementary reflectors (see Further Details). Elements JA:JA+ILO-2 and JA+IHI:JA+N-2 of TAU are set to zero. TAU is tied to the distributed matrix A. .TP 8 WORK (local workspace/local output) REAL array, dimension (LWORK) On exit, WORK( 1 ) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. LWORK is local input and must be at least LWORK >= NB*NB + NB*MAX( IHIP+1, IHLP+INLQ ) where NB = MB_A = NB_A, IROFFA = MOD( IA-1, NB ), ICOFFA = MOD( JA-1, NB ), IOFF = MOD( IA+ILO-2, NB ), IAROW = INDXG2P( IA, NB, MYROW, RSRC_A, NPROW ), IHIP = NUMROC( IHI+IROFFA, NB, MYROW, IAROW, NPROW ), ILROW = INDXG2P( IA+ILO-1, NB, MYROW, RSRC_A, NPROW ), IHLP = NUMROC( IHI-ILO+IOFF+1, NB, MYROW, ILROW, NPROW ), ILCOL = INDXG2P( JA+ILO-1, NB, MYCOL, CSRC_A, NPCOL ), INLQ = NUMROC( N-ILO+IOFF+1, NB, MYCOL, ILCOL, NPCOL ), INDXG2P and NUMROC are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. 
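All routines in these pages report argument errors through the same encoding (shown in each INFO entry): INFO = -(i*100+j) when entry j of array argument i is invalid, and INFO = -i for a bad scalar argument. A few lines suffice to decode it (decode_info is a hypothetical helper, not a ScaLAPACK routine):

```python
def decode_info(info):
    """Decode a negative INFO value from the PXERBLA convention:
    returns (argument_index, entry_index), with entry_index 0 for
    a scalar argument."""
    assert info < 0
    code = -info
    if code > 100:
        return code // 100, code % 100   # entry j of array argument i
    return code, 0                        # scalar argument i

# In PSGEHRD's argument list, DESCA is the 7th argument, so INFO = -702
# would flag its 2nd entry (CTXT_), while INFO = -3 flags the scalar IHI.
assert decode_info(-702) == (7, 2)
assert decode_info(-3) == (3, 0)
```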
.TP 8
INFO (global output) INTEGER = 0: successful exit .br
< 0: If the i-th argument is an array and the j-th entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i.
.SH FURTHER DETAILS The matrix Q is represented as a product of (ihi-ilo) elementary reflectors .br
Q = H(ilo) H(ilo+1) . . . H(ihi-1). .br
Each H(i) has the form .br
H(i) = I - tau * v * v' .br
where tau is a real scalar, and v is a real vector with .br
v(1:I) = 0, v(I+1) = 1 and v(IHI+1:N) = 0; v(I+2:IHI) is stored on exit in A(IA+ILO+I:IA+IHI-1,JA+ILO+I-2), and tau in TAU(JA+ILO+I-2). The contents of A(IA:IA+N-1,JA:JA+N-1) are illustrated by the following example, with N = 7, ILO = 2 and IHI = 6: .br
        on entry                        on exit
.br
( a   a   a   a   a   a   a )    ( a   a   h   h   h   h   a )
.br
(     a   a   a   a   a   a )    (     a   h   h   h   h   a )
.br
(     a   a   a   a   a   a )    (     h   h   h   h   h   h )
.br
(     a   a   a   a   a   a )    (     v2  h   h   h   h   h )
.br
(     a   a   a   a   a   a )    (     v2  v3  h   h   h   h )
.br
(     a   a   a   a   a   a )    (     v2  v3  v4  h   h   h )
.br
(                         a )    (                         a )
.br
where a denotes an element of the original matrix sub( A ), h denotes a modified element of the upper Hessenberg matrix H, and vi denotes an element of the vector defining H(JA+ILO+I-2). .br
Alignment requirements .br
====================== .br
The distributed submatrix sub( A ) must satisfy some alignment properties, namely the following expression should be true: .br
( MB_A.EQ.NB_A .AND.
IROFFA.EQ.ICOFFA ) .br scalapack-doc-1.5/man/manl/psgelq2.l0100644000056400000620000001416206335610641017001 0ustar pfrauenfstaff.TH PSGELQ2 l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PSGELQ2 - compute a LQ factorization of a real distributed M-by-N matrix sub( A ) = A(IA:IA+M-1,JA:JA+N-1) = L * Q .SH SYNOPSIS .TP 20 SUBROUTINE PSGELQ2( M, N, A, IA, JA, DESCA, TAU, WORK, LWORK, INFO ) .TP 20 .ti +4 INTEGER IA, INFO, JA, LWORK, M, N .TP 20 .ti +4 INTEGER DESCA( * ) .TP 20 .ti +4 REAL A( * ), TAU( * ), WORK( * ) .SH PURPOSE PSGELQ2 computes a LQ factorization of a real distributed M-by-N matrix sub( A ) = A(IA:IA+M-1,JA:JA+N-1) = L * Q. Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. 
CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 M (global input) INTEGER The number of rows to be operated on, i.e. the number of rows of the distributed submatrix sub( A ). M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on, i.e. the number of columns of the distributed submatrix sub( A ). N >= 0. .TP 8 A (local input/local output) REAL pointer into the local memory to an array of dimension (LLD_A, LOCc(JA+N-1)). On entry, the local pieces of the M-by-N distributed matrix sub( A ) which is to be factored. On exit, the elements on and below the diagonal of sub( A ) contain the M by min(M,N) lower trapezoidal matrix L (L is lower triangular if M <= N); the elements above the diagonal, with the array TAU, repre- sent the orthogonal matrix Q as a product of elementary reflectors (see Further Details). IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). 
.TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 TAU (local output) REAL, array, dimension LOCr(IA+MIN(M,N)-1). This array contains the scalar factors of the elementary reflectors. TAU is tied to the distributed matrix A. .TP 8 WORK (local workspace/local output) REAL array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. LWORK is local input and must be at least LWORK >= Nq0 + MAX( 1, Mp0 ), where IROFF = MOD( IA-1, MB_A ), ICOFF = MOD( JA-1, NB_A ), IAROW = INDXG2P( IA, MB_A, MYROW, RSRC_A, NPROW ), IACOL = INDXG2P( JA, NB_A, MYCOL, CSRC_A, NPCOL ), Mp0 = NUMROC( M+IROFF, MB_A, MYROW, IAROW, NPROW ), Nq0 = NUMROC( N+ICOFF, NB_A, MYCOL, IACOL, NPCOL ), and NUMROC, INDXG2P are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 INFO (local output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. .SH FURTHER DETAILS The matrix Q is represented as a product of elementary reflectors Q = H(ia+k-1) H(ia+k-2) . . . H(ia), where k = min(m,n). Each H(i) has the form .br H(i) = I - tau * v * v' .br where tau is a real scalar, and v is a real vector with v(1:i-1)=0 and v(i) = 1; v(i+1:n) is stored on exit in A(ia+i-1,ja+i:ja+n-1), and tau in TAU(ia+i-1). 
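The Notes above define the local quantities LOCr() and LOCc() in terms of the ScaLAPACK tool function NUMROC. When sizing local buffers outside of Fortran, the same block-cyclic counting is easy to reproduce; the following Python function is an illustrative re-implementation of NUMROC's logic (a sketch for checking sizes, not the library routine itself):

```python
def numroc(n, nb, iproc, isrcproc, nprocs):
    """Local element count of an n-element dimension distributed in
    nb-sized blocks over nprocs processes, as seen by process iproc,
    with the first block held by process isrcproc."""
    mydist = (nprocs + iproc - isrcproc) % nprocs  # distance from the source process
    nblocks = n // nb                              # number of complete blocks
    num = (nblocks // nprocs) * nb                 # whole rounds of blocks everyone gets
    extrablks = nblocks % nprocs                   # leftover complete blocks
    if mydist < extrablks:
        num += nb                                  # this process holds a leftover full block
    elif mydist == extrablks:
        num += n % nb                              # this process holds the final partial block
    return num

# Example: M = 9 rows, MB_A = 2, process row 0 of NPROW = 2, RSRC_A = 0
print(numroc(9, 2, 0, 0, 2))   # -> 5 (global rows 1:2, 5:6 and 9)
```

Summing the result over all process rows (or columns) recovers the global extent, and each local count respects the upper bound ceil( ceil(M/MB_A)/NPROW )*MB_A quoted in the Notes.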
.br scalapack-doc-1.5/man/manl/psgelqf.l0100644000056400000620000001417306335610641017067 0ustar pfrauenfstaff.TH PSGELQF l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PSGELQF - compute a LQ factorization of a real distributed M-by-N matrix sub( A ) = A(IA:IA+M-1,JA:JA+N-1) = L * Q .SH SYNOPSIS .TP 20 SUBROUTINE PSGELQF( M, N, A, IA, JA, DESCA, TAU, WORK, LWORK, INFO ) .TP 20 .ti +4 INTEGER IA, INFO, JA, LWORK, M, N .TP 20 .ti +4 INTEGER DESCA( * ) .TP 20 .ti +4 REAL A( * ), TAU( * ), WORK( * ) .SH PURPOSE PSGELQF computes a LQ factorization of a real distributed M-by-N matrix sub( A ) = A(IA:IA+M-1,JA:JA+N-1) = L * Q. Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. 
CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 M (global input) INTEGER The number of rows to be operated on, i.e. the number of rows of the distributed submatrix sub( A ). M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on, i.e. the number of columns of the distributed submatrix sub( A ). N >= 0. .TP 8 A (local input/local output) REAL pointer into the local memory to an array of dimension (LLD_A, LOCc(JA+N-1)). On entry, the local pieces of the M-by-N distributed matrix sub( A ) which is to be factored. On exit, the elements on and below the diagonal of sub( A ) contain the M by min(M,N) lower trapezoidal matrix L (L is lower triangular if M <= N); the elements above the diagonal, with the array TAU, repre- sent the orthogonal matrix Q as a product of elementary reflectors (see Further Details). IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). 
.TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 TAU (local output) REAL, array, dimension LOCr(IA+MIN(M,N)-1). This array contains the scalar factors of the elementary reflectors. TAU is tied to the distributed matrix A. .TP 8 WORK (local workspace/local output) REAL array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. LWORK is local input and must be at least LWORK >= MB_A * ( Mp0 + Nq0 + MB_A ), where IROFF = MOD( IA-1, MB_A ), ICOFF = MOD( JA-1, NB_A ), IAROW = INDXG2P( IA, MB_A, MYROW, RSRC_A, NPROW ), IACOL = INDXG2P( JA, NB_A, MYCOL, CSRC_A, NPCOL ), Mp0 = NUMROC( M+IROFF, MB_A, MYROW, IAROW, NPROW ), Nq0 = NUMROC( N+ICOFF, NB_A, MYCOL, IACOL, NPCOL ), and NUMROC, INDXG2P are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 INFO (global output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. .SH FURTHER DETAILS The matrix Q is represented as a product of elementary reflectors Q = H(ia+k-1) H(ia+k-2) . . . H(ia), where k = min(m,n). Each H(i) has the form .br H(i) = I - tau * v * v' .br where tau is a real scalar, and v is a real vector with v(1:i-1)=0 and v(i) = 1; v(i+1:n) is stored on exit in A(ia+i-1,ja+i:ja+n-1), and tau in TAU(ia+i-1). 
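The LWORK lower bound quoted above is pure integer arithmetic over the descriptor, so it can be evaluated before any workspace is allocated. The Python sketch below mirrors the NUMROC and INDXG2P tool functions (the Fortran INDXG2P also takes an unused IPROC argument, dropped here for brevity) and evaluates the PSGELQF bound LWORK >= MB_A * ( Mp0 + Nq0 + MB_A ); it is an illustration of the formula, not library code:

```python
def numroc(n, nb, iproc, isrcproc, nprocs):
    # Block-cyclic element count, mirroring the NUMROC tool function.
    mydist = (nprocs + iproc - isrcproc) % nprocs
    nblocks = n // nb
    num = (nblocks // nprocs) * nb
    extrablks = nblocks % nprocs
    if mydist < extrablks:
        num += nb
    elif mydist == extrablks:
        num += n % nb
    return num

def indxg2p(indxglob, nb, isrcproc, nprocs):
    # Process coordinate owning global index indxglob (simplified INDXG2P).
    return (isrcproc + (indxglob - 1) // nb) % nprocs

def psgelqf_lwork_min(m, n, ia, ja, mb_a, nb_a, rsrc_a, csrc_a,
                      myrow, mycol, nprow, npcol):
    # Evaluate LWORK >= MB_A * ( Mp0 + Nq0 + MB_A ) from the man page.
    iroff = (ia - 1) % mb_a
    icoff = (ja - 1) % nb_a
    iarow = indxg2p(ia, mb_a, rsrc_a, nprow)
    iacol = indxg2p(ja, nb_a, csrc_a, npcol)
    mp0 = numroc(m + iroff, mb_a, myrow, iarow, nprow)
    nq0 = numroc(n + icoff, nb_a, mycol, iacol, npcol)
    return mb_a * (mp0 + nq0 + mb_a)

# Example: a 1000 x 1000 matrix in 64 x 64 blocks on a 2 x 3 grid,
# as seen from process (0,0):
print(psgelqf_lwork_min(1000, 1000, 1, 1, 64, 64, 0, 0, 0, 0, 2, 3))  # -> 59904
```

In practice the portable route is still the LWORK = -1 workspace query described above, since the routine itself reports the minimal and optimal sizes in WORK(1).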
.br scalapack-doc-1.5/man/manl/psgels.l0100644000056400000620000002215606335610642016724 0ustar pfrauenfstaff.TH PSGELS l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME
PSGELS - solve overdetermined or underdetermined real linear systems involving an M-by-N matrix sub( A ) = A(IA:IA+M-1,JA:JA+N-1), .SH SYNOPSIS .TP 19
SUBROUTINE PSGELS( TRANS, M, N, NRHS, A, IA, JA, DESCA, B, IB, JB, DESCB, WORK, LWORK, INFO ) .TP 19 .ti +4
CHARACTER TRANS .TP 19 .ti +4
INTEGER IA, IB, INFO, JA, JB, LWORK, M, N, NRHS .TP 19 .ti +4
INTEGER DESCA( * ), DESCB( * ) .TP 19 .ti +4
REAL A( * ), B( * ), WORK( * ) .SH PURPOSE PSGELS solves overdetermined or underdetermined real linear systems involving an M-by-N matrix sub( A ) = A(IA:IA+M-1,JA:JA+N-1), or its transpose, using a QR or LQ factorization of sub( A ). It is assumed that sub( A ) has full rank. .br
The following options are provided: .br
1. If TRANS = 'N' and m >= n: find the least squares solution of an overdetermined system, i.e., solve the least squares problem minimize || sub( B ) - sub( A )*X ||. .br
2. If TRANS = 'N' and m < n: find the minimum norm solution of an underdetermined system sub( A ) * X = sub( B ). .br
3. If TRANS = 'T' and m >= n: find the minimum norm solution of an underdetermined system sub( A )**T * X = sub( B ). .br
4. If TRANS = 'T' and m < n: find the least squares solution of an overdetermined system, i.e., solve the least squares problem minimize || sub( B ) - sub( A )**T * X ||, where sub( B ) denotes B( IB:IB+M-1, JB:JB+NRHS-1 ) when TRANS = 'N' and B( IB:IB+N-1, JB:JB+NRHS-1 ) otherwise. Several right hand side vectors b and solution vectors x can be handled in a single call; when TRANS = 'N', the solution vectors are stored as the columns of the N-by-NRHS right hand side matrix sub( B ), and as the columns of the M-by-NRHS right hand side matrix sub( B ) otherwise. .br
Notes .br
===== .br
Each global data object is described by an associated description vector.
This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. 
.br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 TRANS (global input) CHARACTER = 'N': the linear system involves sub( A ); .br = 'T': the linear system involves sub( A )**T. .TP 8 M (global input) INTEGER The number of rows to be operated on, i.e. the number of rows of the distributed submatrix sub( A ). M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on, i.e. the number of columns of the distributed submatrix sub( A ). N >= 0. .TP 8 NRHS (global input) INTEGER The number of right hand sides, i.e. the number of columns of the distributed submatrices sub( B ) and X. NRHS >= 0. .TP 8 A (local input/local output) REAL pointer into the local memory to an array of local dimension ( LLD_A, LOCc(JA+N-1) ). On entry, the M-by-N matrix A. if M >= N, sub( A ) is overwritten by details of its QR factorization as returned by PSGEQRF; if M < N, sub( A ) is overwritten by details of its LQ factorization as returned by PSGELQF. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 B (local input/local output) REAL pointer into the local memory to an array of local dimension (LLD_B, LOCc(JB+NRHS-1)). On entry, this array contains the local pieces of the distributed matrix B of right hand side vectors, stored columnwise; sub( B ) is M-by-NRHS if TRANS='N', and N-by-NRHS otherwise. 
On exit, sub( B ) is overwritten by the solution vectors, stored columnwise: if TRANS = 'N' and M >= N, rows 1 to N of sub( B ) contain the least squares solution vectors; the residual sum of squares for the solution in each column is given by the sum of squares of elements N+1 to M in that column; if TRANS = 'N' and M < N, rows 1 to N of sub( B ) contain the minimum norm solution vectors; if TRANS = 'T' and M >= N, rows 1 to M of sub( B ) contain the minimum norm solution vectors; if TRANS = 'T' and M < N, rows 1 to M of sub( B ) contain the least squares solution vectors; the residual sum of squares for the solution in each column is given by the sum of squares of elements M+1 to N in that column. .TP 8 IB (global input) INTEGER The row index in the global array B indicating the first row of sub( B ). .TP 8 JB (global input) INTEGER The column index in the global array B indicating the first column of sub( B ). .TP 8 DESCB (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix B. .TP 8 WORK (local workspace/local output) REAL array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. 
LWORK is local input and must be at least LWORK >= LTAU + MAX( LWF, LWS ) where If M >= N, then LTAU = NUMROC( JA+MIN(M,N)-1, NB_A, MYCOL, CSRC_A, NPCOL ), LWF = NB_A * ( MpA0 + NqA0 + NB_A ) LWS = MAX( (NB_A*(NB_A-1))/2, (NRHSqB0 + MpB0)*NB_A ) + NB_A * NB_A Else LTAU = NUMROC( IA+MIN(M,N)-1, MB_A, MYROW, RSRC_A, NPROW ), LWF = MB_A * ( MpA0 + NqA0 + MB_A ) LWS = MAX( (MB_A*(MB_A-1))/2, ( NpB0 + MAX( NqA0 + NUMROC( NUMROC( N+IROFFB, MB_A, 0, 0, NPROW ), MB_A, 0, 0, LCMP ), NRHSqB0 ) )*MB_A ) + MB_A * MB_A End if where LCMP = LCM / NPROW with LCM = ILCM( NPROW, NPCOL ), IROFFA = MOD( IA-1, MB_A ), ICOFFA = MOD( JA-1, NB_A ), IAROW = INDXG2P( IA, MB_A, MYROW, RSRC_A, NPROW ), IACOL = INDXG2P( JA, NB_A, MYCOL, CSRC_A, NPCOL ), MpA0 = NUMROC( M+IROFFA, MB_A, MYROW, IAROW, NPROW ), NqA0 = NUMROC( N+ICOFFA, NB_A, MYCOL, IACOL, NPCOL ), IROFFB = MOD( IB-1, MB_B ), ICOFFB = MOD( JB-1, NB_B ), IBROW = INDXG2P( IB, MB_B, MYROW, RSRC_B, NPROW ), IBCOL = INDXG2P( JB, NB_B, MYCOL, CSRC_B, NPCOL ), MpB0 = NUMROC( M+IROFFB, MB_B, MYROW, IBROW, NPROW ), NpB0 = NUMROC( N+IROFFB, MB_B, MYROW, IBROW, NPROW ), NRHSqB0 = NUMROC( NRHS+ICOFFB, NB_B, MYCOL, IBCOL, NPCOL ), ILCM, INDXG2P and NUMROC are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 INFO (global output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. 
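The INFO convention above (shared by all of the PSGE* driver and computational routines) packs the position of the failing argument and, for array arguments such as DESCA, the offending entry into a single negative integer. A small illustrative decoder of that convention (the helper name and messages are generic, not part of ScaLAPACK, and argument indices are assumed to be below 100):

```python
def decode_info(info):
    """Decode the ScaLAPACK INFO error convention:
    INFO = -i for an illegal scalar argument i, and
    INFO = -(i*100+j) for an illegal entry j of array argument i."""
    if info == 0:
        return "successful exit"
    if info > 0:
        return f"routine-specific failure code {info}"
    v = -info
    if v >= 100:
        i, j = divmod(v, 100)
        return f"argument {i}: entry {j} of the array had an illegal value"
    return f"argument {v} had an illegal value"

# E.g. INFO = -806 flags entry 6 of the 8th (array) argument:
print(decode_info(-806))  # -> argument 8: entry 6 of the array had an illegal value
```

For PSGELS, argument 8 is DESCA, so INFO = -806 would point at DESCA( 6 ), i.e. the NB_ descriptor entry.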
scalapack-doc-1.5/man/manl/psgeql2.l0100644000056400000620000001434406335610642017004 0ustar pfrauenfstaff.TH PSGEQL2 l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PSGEQL2 - compute a QL factorization of a real distributed M-by-N matrix sub( A ) = A(IA:IA+M-1,JA:JA+N-1) = Q * L .SH SYNOPSIS .TP 20 SUBROUTINE PSGEQL2( M, N, A, IA, JA, DESCA, TAU, WORK, LWORK, INFO ) .TP 20 .ti +4 INTEGER IA, INFO, JA, LWORK, M, N .TP 20 .ti +4 INTEGER DESCA( * ) .TP 20 .ti +4 REAL A( * ), TAU( * ), WORK( * ) .SH PURPOSE PSGEQL2 computes a QL factorization of a real distributed M-by-N matrix sub( A ) = A(IA:IA+M-1,JA:JA+N-1) = Q * L. Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. 
CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 M (global input) INTEGER The number of rows to be operated on, i.e. the number of rows of the distributed submatrix sub( A ). M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on, i.e. the number of columns of the distributed submatrix sub( A ). N >= 0. .TP 8 A (local input/local output) REAL pointer into the local memory to an array of dimension (LLD_A, LOCc(JA+N-1)). On entry, the local pieces of the M-by-N distributed matrix sub( A ) which is to be factored. On exit, if M >= N, the lower triangle of the distributed submatrix A( IA+M-N:IA+M-1, JA:JA+N-1 ) contains the N-by-N lower triangular matrix L; if M <= N, the elements on and below the (N-M)-th superdiagonal contain the M by N lower trapezoidal matrix L; the remaining elements, with the array TAU, represent the orthogonal matrix Q as a product of elementary reflectors (see Further Details). 
IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 TAU (local output) REAL, array, dimension LOCc(JA+N-1) This array contains the scalar factors of the elementary reflectors. TAU is tied to the distributed matrix A. .TP 8 WORK (local workspace/local output) REAL array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. LWORK is local input and must be at least LWORK >= Mp0 + MAX( 1, Nq0 ), where IROFF = MOD( IA-1, MB_A ), ICOFF = MOD( JA-1, NB_A ), IAROW = INDXG2P( IA, MB_A, MYROW, RSRC_A, NPROW ), IACOL = INDXG2P( JA, NB_A, MYCOL, CSRC_A, NPCOL ), Mp0 = NUMROC( M+IROFF, MB_A, MYROW, IAROW, NPROW ), Nq0 = NUMROC( N+ICOFF, NB_A, MYCOL, IACOL, NPCOL ), and NUMROC, INDXG2P are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 INFO (local output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. .SH FURTHER DETAILS The matrix Q is represented as a product of elementary reflectors Q = H(ja+k-1) . . . H(ja+1) H(ja), where k = min(m,n). 
Each H(i) has the form .br H(i) = I - tau * v * v' .br where tau is a real scalar, and v is a real vector with .br v(m-k+i+1:m) = 0 and v(m-k+i) = 1; v(1:m-k+i-1) is stored on exit in A(ia:ia+m-k+i-2,ja+n-k+i-1), and tau in TAU(ja+n-k+i-1). .br scalapack-doc-1.5/man/manl/psgeqlf.l0100644000056400000620000001435506335610642017072 0ustar pfrauenfstaff.TH PSGEQLF l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PSGEQLF - compute a QL factorization of a real distributed M-by-N matrix sub( A ) = A(IA:IA+M-1,JA:JA+N-1) = Q * L .SH SYNOPSIS .TP 20 SUBROUTINE PSGEQLF( M, N, A, IA, JA, DESCA, TAU, WORK, LWORK, INFO ) .TP 20 .ti +4 INTEGER IA, INFO, JA, LWORK, M, N .TP 20 .ti +4 INTEGER DESCA( * ) .TP 20 .ti +4 REAL A( * ), TAU( * ), WORK( * ) .SH PURPOSE PSGEQLF computes a QL factorization of a real distributed M-by-N matrix sub( A ) = A(IA:IA+M-1,JA:JA+N-1) = Q * L. Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. 
.br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 M (global input) INTEGER The number of rows to be operated on, i.e. the number of rows of the distributed submatrix sub( A ). M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on, i.e. the number of columns of the distributed submatrix sub( A ). N >= 0. .TP 8 A (local input/local output) REAL pointer into the local memory to an array of dimension (LLD_A, LOCc(JA+N-1)). On entry, the local pieces of the M-by-N distributed matrix sub( A ) which is to be factored. 
On exit, if M >= N, the lower triangle of the distributed submatrix A( IA+M-N:IA+M-1, JA:JA+N-1 ) contains the N-by-N lower triangular matrix L; if M <= N, the elements on and below the (N-M)-th superdiagonal contain the M by N lower trapezoidal matrix L; the remaining elements, with the array TAU, represent the orthogonal matrix Q as a product of elementary reflectors (see Further Details). IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 TAU (local output) REAL, array, dimension LOCc(JA+N-1) This array contains the scalar factors of the elementary reflectors. TAU is tied to the distributed matrix A. .TP 8 WORK (local workspace/local output) REAL array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. LWORK is local input and must be at least LWORK >= NB_A * ( Mp0 + Nq0 + NB_A ), where IROFF = MOD( IA-1, MB_A ), ICOFF = MOD( JA-1, NB_A ), IAROW = INDXG2P( IA, MB_A, MYROW, RSRC_A, NPROW ), IACOL = INDXG2P( JA, NB_A, MYCOL, CSRC_A, NPCOL ), Mp0 = NUMROC( M+IROFF, MB_A, MYROW, IAROW, NPROW ), Nq0 = NUMROC( N+ICOFF, NB_A, MYCOL, IACOL, NPCOL ), and NUMROC, INDXG2P are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. 
.TP 8 INFO (global output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. .SH FURTHER DETAILS The matrix Q is represented as a product of elementary reflectors Q = H(ja+k-1) . . . H(ja+1) H(ja), where k = min(m,n). Each H(i) has the form .br H(i) = I - tau * v * v' .br where tau is a real scalar, and v is a real vector with .br v(m-k+i+1:m) = 0 and v(m-k+i) = 1; v(1:m-k+i-1) is stored on exit in A(ia:ia+m-k+i-2,ja+n-k+i-1), and tau in TAU(ja+n-k+i-1). .br scalapack-doc-1.5/man/manl/psgeqpf.l .TH PSGEQPF l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PSGEQPF - compute a QR factorization with column pivoting of an M-by-N distributed matrix sub( A ) = A(IA:IA+M-1,JA:JA+N-1) .SH SYNOPSIS .TP 20 SUBROUTINE PSGEQPF( M, N, A, IA, JA, DESCA, IPIV, TAU, WORK, LWORK, INFO ) .TP 20 .ti +4 INTEGER IA, JA, INFO, LWORK, M, N .TP 20 .ti +4 INTEGER DESCA( * ), IPIV( * ) .TP 20 .ti +4 REAL A( * ), TAU( * ), WORK( * ) .SH PURPOSE PSGEQPF computes a QR factorization with column pivoting of an M-by-N distributed matrix sub( A ) = A(IA:IA+M-1,JA:JA+N-1): sub( A ) * P = Q * R. .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ ) The descriptor type. In this case, DTYPE_A = 1.
.br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distributed over. The context itself is global, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 M (global input) INTEGER The number of rows to be operated on, i.e. the number of rows of the distributed submatrix sub( A ). M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on, i.e. the number of columns of the distributed submatrix sub( A ). N >= 0.
.TP 8 A (local input/local output) REAL pointer into the local memory to an array of dimension (LLD_A, LOCc(JA+N-1)). On entry, the local pieces of the M-by-N distributed matrix sub( A ) which is to be factored. On exit, the elements on and above the diagonal of sub( A ) contain the min(M,N) by N upper trapezoidal matrix R (R is upper triangular if M >= N); the elements below the diagonal, with the array TAU, represent the orthogonal matrix Q as a product of elementary reflectors (see Further Details). IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 IPIV (local output) INTEGER array, dimension LOCc(JA+N-1). On exit, if IPIV(I) = K, the local i-th column of sub( A )*P was the global K-th column of sub( A ). IPIV is tied to the distributed matrix A. .TP 8 TAU (local output) REAL, array, dimension LOCc(JA+MIN(M,N)-1). This array contains the scalar factors TAU of the elementary reflectors. TAU is tied to the distributed matrix A. .TP 8 WORK (local workspace/local output) REAL array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. LWORK is local input and must be at least LWORK >= MAX(3,Mp0 + Nq0) + LOCc(JA+N-1)+Nq0, where IROFF = MOD( IA-1, MB_A ), ICOFF = MOD( JA-1, NB_A ), IAROW = INDXG2P( IA, MB_A, MYROW, RSRC_A, NPROW ), IACOL = INDXG2P( JA, NB_A, MYCOL, CSRC_A, NPCOL ), Mp0 = NUMROC( M+IROFF, MB_A, MYROW, IAROW, NPROW ), Nq0 = NUMROC( N+ICOFF, NB_A, MYCOL, IACOL, NPCOL ), LOCc(JA+N-1) = NUMROC( JA+N-1, NB_A, MYCOL, CSRC_A, NPCOL ) and NUMROC, INDXG2P are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO.
If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 INFO (global output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. .SH FURTHER DETAILS The matrix Q is represented as a product of elementary reflectors Q = H(1) H(2) . . . H(n) .br Each H(i) has the form .br H(i) = I - tau * v * v' .br where tau is a real scalar, and v is a real vector with v(1:i-1) = 0 and v(i) = 1; v(i+1:m) is stored on exit in A(ia+i-1:ia+m-1,ja+i-1). The matrix P is represented in IPIV as follows: If .br IPIV(j) = i .br then the jth column of P is the ith canonical unit vector. scalapack-doc-1.5/man/manl/psgeqr2.l .TH PSGEQR2 l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PSGEQR2 - compute a QR factorization of a real distributed M-by-N matrix sub( A ) = A(IA:IA+M-1,JA:JA+N-1) = Q * R .SH SYNOPSIS .TP 20 SUBROUTINE PSGEQR2( M, N, A, IA, JA, DESCA, TAU, WORK, LWORK, INFO ) .TP 20 .ti +4 INTEGER IA, INFO, JA, LWORK, M, N .TP 20 .ti +4 INTEGER DESCA( * ) .TP 20 .ti +4 REAL A( * ), TAU( * ), WORK( * ) .SH PURPOSE PSGEQR2 computes a QR factorization of a real distributed M-by-N matrix sub( A ) = A(IA:IA+M-1,JA:JA+N-1) = Q * R. Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA.
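As a concrete illustration of the description vector just introduced, the sketch below builds the conventional nine-entry dense-matrix descriptor (DTYPE_, CTXT_, M_, N_, MB_, NB_, RSRC_, CSRC_, LLD_ occupying positions 1 through 9) in Python. The helper name make_desc and the sample matrix sizes are illustrative assumptions, not part of the library.

```python
# Sketch of the 9-element array descriptor (DESC) summarized in these Notes.
# The entry order follows the conventional ScaLAPACK positions: DESC(1) is
# DTYPE_, DESC(2) is CTXT_, ..., DESC(9) is LLD_ (here 0-indexed in Python).

def make_desc(m, n, mb, nb, rsrc, csrc, ctxt, lld):
    """Build a dense-matrix descriptor (DTYPE_A = 1) as a Python list."""
    return [1, ctxt, m, n, mb, nb, rsrc, csrc, lld]

# A 100 x 50 global matrix in 8 x 8 blocks, distributed from process (0, 0):
desc_a = make_desc(m=100, n=50, mb=8, nb=8, rsrc=0, csrc=0, ctxt=0, lld=56)
assert desc_a[0] == 1          # DTYPE_A: dense-matrix descriptor type
assert desc_a[4:6] == [8, 8]   # MB_A, NB_A blocking factors
```

In an actual program this bookkeeping is done by the library (DESCINIT), and the local leading dimension LLD_A must satisfy LLD_A >= MAX(1,LOCr(M_A)) as stated above; the value 56 here is just a legal sample.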
In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ ) The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distributed over. The context itself is global, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 M (global input) INTEGER The number of rows to be operated on, i.e.
the number of rows of the distributed submatrix sub( A ). M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on, i.e. the number of columns of the distributed submatrix sub( A ). N >= 0. .TP 8 A (local input/local output) REAL pointer into the local memory to an array of dimension (LLD_A, LOCc(JA+N-1)). On entry, the local pieces of the M-by-N distributed matrix sub( A ) which is to be factored. On exit, the elements on and above the diagonal of sub( A ) contain the min(M,N) by N upper trapezoidal matrix R (R is upper triangular if M >= N); the elements below the diagonal, with the array TAU, represent the orthogonal matrix Q as a product of elementary reflectors (see Further Details). IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 TAU (local output) REAL, array, dimension LOCc(JA+MIN(M,N)-1). This array contains the scalar factors TAU of the elementary reflectors. TAU is tied to the distributed matrix A. .TP 8 WORK (local workspace/local output) REAL array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. LWORK is local input and must be at least LWORK >= Mp0 + MAX( 1, Nq0 ), where IROFF = MOD( IA-1, MB_A ), ICOFF = MOD( JA-1, NB_A ), IAROW = INDXG2P( IA, MB_A, MYROW, RSRC_A, NPROW ), IACOL = INDXG2P( JA, NB_A, MYCOL, CSRC_A, NPCOL ), Mp0 = NUMROC( M+IROFF, MB_A, MYROW, IAROW, NPROW ), Nq0 = NUMROC( N+ICOFF, NB_A, MYCOL, IACOL, NPCOL ), and NUMROC, INDXG2P are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. 
If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 INFO (local output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. .SH FURTHER DETAILS The matrix Q is represented as a product of elementary reflectors Q = H(ja) H(ja+1) . . . H(ja+k-1), where k = min(m,n). Each H(i) has the form .br H(i) = I - tau * v * v' .br where tau is a real scalar, and v is a real vector with v(1:i-1) = 0 and v(i) = 1; v(i+1:m) is stored on exit in A(ia+i:ia+m-1,ja+i-1), and tau in TAU(ja+i-1). .br scalapack-doc-1.5/man/manl/psgeqrf.l .TH PSGEQRF l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PSGEQRF - compute a QR factorization of a real distributed M-by-N matrix sub( A ) = A(IA:IA+M-1,JA:JA+N-1) = Q * R .SH SYNOPSIS .TP 20 SUBROUTINE PSGEQRF( M, N, A, IA, JA, DESCA, TAU, WORK, LWORK, INFO ) .TP 20 .ti +4 INTEGER IA, INFO, JA, LWORK, M, N .TP 20 .ti +4 INTEGER DESCA( * ) .TP 20 .ti +4 REAL A( * ), TAU( * ), WORK( * ) .SH PURPOSE PSGEQRF computes a QR factorization of a real distributed M-by-N matrix sub( A ) = A(IA:IA+M-1,JA:JA+N-1) = Q * R. Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array".
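The descriptor entries tabulated in these Notes feed the ScaLAPACK tool function NUMROC, which yields the local counts LOCr() and LOCc(). The Python sketch below transcribes the usual block-cyclic counting logic so the quoted formulas and upper bounds can be checked numerically; it is a simplified stand-in for illustration, not the library routine itself.

```python
# Simplified transcription of what NUMROC computes: the number of rows (or
# columns) of a block-cyclically distributed matrix owned by one process.
# Arguments mirror the Fortran routine: n (global extent), nb (block size),
# iproc (calling process coordinate), isrcproc (RSRC_/CSRC_), nprocs
# (NPROW/NPCOL).
import math

def numroc(n, nb, iproc, isrcproc, nprocs):
    mydist = (nprocs + iproc - isrcproc) % nprocs  # distance from source process
    nblocks = n // nb                              # number of full nb-sized blocks
    count = (nblocks // nprocs) * nb               # whole rounds of blocks
    extra = nblocks % nprocs
    if mydist < extra:
        count += nb                                # one extra full block
    elif mydist == extra:
        count += n % nb                            # the trailing partial block
    return count

# The local counts over a process row/column add up to the global size, and
# each respects the bound ceil( ceil(n/nb)/nprocs )*nb stated in the Notes.
n, nb, nprocs = 100, 8, 3
counts = [numroc(n, nb, p, 0, nprocs) for p in range(nprocs)]
assert sum(counts) == n
assert all(c <= math.ceil(math.ceil(n / nb) / nprocs) * nb for c in counts)
```

For example, 100 rows in blocks of 8 over 3 process rows gives local counts of 36, 32 and 32.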
.br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ ) The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distributed over. The context itself is global, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 M (global input) INTEGER The number of rows to be operated on, i.e. the number of rows of the distributed submatrix sub( A ). M >= 0.
.TP 8 N (global input) INTEGER The number of columns to be operated on, i.e. the number of columns of the distributed submatrix sub( A ). N >= 0. .TP 8 A (local input/local output) REAL pointer into the local memory to an array of dimension (LLD_A, LOCc(JA+N-1)). On entry, the local pieces of the M-by-N distributed matrix sub( A ) which is to be factored. On exit, the elements on and above the diagonal of sub( A ) contain the min(M,N) by N upper trapezoidal matrix R (R is upper triangular if M >= N); the elements below the diagonal, with the array TAU, represent the orthogonal matrix Q as a product of elementary reflectors (see Further Details). IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 TAU (local output) REAL, array, dimension LOCc(JA+MIN(M,N)-1). This array contains the scalar factors TAU of the elementary reflectors. TAU is tied to the distributed matrix A. .TP 8 WORK (local workspace/local output) REAL array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. LWORK is local input and must be at least LWORK >= NB_A * ( Mp0 + Nq0 + NB_A ), where IROFF = MOD( IA-1, MB_A ), ICOFF = MOD( JA-1, NB_A ), IAROW = INDXG2P( IA, MB_A, MYROW, RSRC_A, NPROW ), IACOL = INDXG2P( JA, NB_A, MYCOL, CSRC_A, NPCOL ), Mp0 = NUMROC( M+IROFF, MB_A, MYROW, IAROW, NPROW ), Nq0 = NUMROC( N+ICOFF, NB_A, MYCOL, IACOL, NPCOL ), and NUMROC, INDXG2P are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. 
If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 INFO (global output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. .SH FURTHER DETAILS The matrix Q is represented as a product of elementary reflectors Q = H(ja) H(ja+1) . . . H(ja+k-1), where k = min(m,n). Each H(i) has the form .br H(i) = I - tau * v * v' .br where tau is a real scalar, and v is a real vector with v(1:i-1) = 0 and v(i) = 1; v(i+1:m) is stored on exit in A(ia+i:ia+m-1,ja+i-1), and tau in TAU(ja+i-1). .br scalapack-doc-1.5/man/manl/psgerfs.l .TH PSGERFS l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PSGERFS - improve the computed solution to a system of linear equations and provide error bounds and backward error estimates for the solutions .SH SYNOPSIS .TP 20 SUBROUTINE PSGERFS( TRANS, N, NRHS, A, IA, JA, DESCA, AF, IAF, JAF, DESCAF, IPIV, B, IB, JB, DESCB, X, IX, JX, DESCX, FERR, BERR, WORK, LWORK, IWORK, LIWORK, INFO ) .TP 20 .ti +4 CHARACTER TRANS .TP 20 .ti +4 INTEGER IA, IAF, IB, IX, INFO, JA, JAF, JB, JX, LIWORK, LWORK, N, NRHS .TP 20 .ti +4 INTEGER DESCA( * ), DESCAF( * ), DESCB( * ), DESCX( * ), IPIV( * ), IWORK( * ) .TP 20 .ti +4 REAL A( * ), AF( * ), B( * ), BERR( * ), FERR( * ), WORK( * ), X( * ) .SH PURPOSE PSGERFS improves the computed solution to a system of linear equations and provides error bounds and backward error estimates for the solutions. Notes .br ===== .br Each global data object is described by an associated description vector.
This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ ) The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distributed over. The context itself is global, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row.
.br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br In the following comments, sub( A ), sub( X ) and sub( B ) denote respectively A(IA:IA+N-1,JA:JA+N-1), X(IX:IX+N-1,JX:JX+NRHS-1) and B(IB:IB+N-1,JB:JB+NRHS-1). .br .SH ARGUMENTS .TP 8 TRANS (global input) CHARACTER*1 Specifies the form of the system of equations. = 'N': sub( A ) * sub( X ) = sub( B ) (No transpose) .br = 'T': sub( A )**T * sub( X ) = sub( B ) (Transpose) .br = 'C': sub( A )**T * sub( X ) = sub( B ) (Conjugate transpose = Transpose) .TP 8 N (global input) INTEGER The order of the matrix sub( A ). N >= 0. .TP 8 NRHS (global input) INTEGER The number of right hand sides, i.e., the number of columns of the matrices sub( B ) and sub( X ). NRHS >= 0. .TP 8 A (local input) REAL pointer into the local memory to an array of local dimension (LLD_A,LOCc(JA+N-1)). This array contains the local pieces of the distributed matrix sub( A ). .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 AF (local input) REAL pointer into the local memory to an array of local dimension (LLD_AF,LOCc(JA+N-1)). This array contains the local pieces of the distributed factors of the matrix sub( A ) = P * L * U as computed by PSGETRF. .TP 8 IAF (global input) INTEGER The row index in the global array AF indicating the first row of sub( AF ). 
.TP 8 JAF (global input) INTEGER The column index in the global array AF indicating the first column of sub( AF ). .TP 8 DESCAF (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix AF. .TP 8 IPIV (local input) INTEGER array of dimension LOCr(M_AF)+MB_AF. This array contains the pivoting information as computed by PSGETRF. IPIV(i) -> The global row local row i was swapped with. This array is tied to the distributed matrix A. .TP 8 B (local input) REAL pointer into the local memory to an array of local dimension (LLD_B,LOCc(JB+NRHS-1)). This array contains the local pieces of the distributed matrix of right hand sides sub( B ). .TP 8 IB (global input) INTEGER The row index in the global array B indicating the first row of sub( B ). .TP 8 JB (global input) INTEGER The column index in the global array B indicating the first column of sub( B ). .TP 8 DESCB (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix B. .TP 8 X (local input and output) REAL pointer into the local memory to an array of local dimension (LLD_X,LOCc(JX+NRHS-1)). On entry, this array contains the local pieces of the distributed matrix solution sub( X ). On exit, the improved solution vectors. .TP 8 IX (global input) INTEGER The row index in the global array X indicating the first row of sub( X ). .TP 8 JX (global input) INTEGER The column index in the global array X indicating the first column of sub( X ). .TP 8 DESCX (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix X. .TP 8 FERR (local output) REAL array of local dimension LOCc(JB+NRHS-1). The estimated forward error bound for each solution vector of sub( X ). If XTRUE is the true solution corresponding to sub( X ), FERR is an estimated upper bound for the magnitude of the largest element in (sub( X ) - XTRUE) divided by the magnitude of the largest element in sub( X ). 
The estimate is as reliable as the estimate for RCOND, and is almost always a slight overestimate of the true error. This array is tied to the distributed matrix X. .TP 8 BERR (local output) REAL array of local dimension LOCc(JB+NRHS-1). The componentwise relative backward error of each solution vector (i.e., the smallest relative change in any entry of sub( A ) or sub( B ) that makes sub( X ) an exact solution). This array is tied to the distributed matrix X. .TP 8 WORK (local workspace/local output) REAL array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. LWORK is local input and must be at least LWORK >= 3*LOCr( N + MOD(IA-1,MB_A) ). If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 IWORK (local workspace/local output) INTEGER array, dimension (LIWORK) On exit, IWORK(1) returns the minimal and optimal LIWORK. .TP 8 LIWORK (local or global input) INTEGER The dimension of the array IWORK. LIWORK is local input and must be at least LIWORK >= LOCr( N + MOD(IB-1,MB_B) ). If LIWORK = -1, then LIWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 INFO (global output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. .SH PARAMETERS ITMAX is the maximum number of steps of iterative refinement. Notes ===== This routine temporarily returns when N <= 1.
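The componentwise backward error reported in BERR (the smallest relative change in any entry of sub( A ) or sub( B ) that makes sub( X ) an exact solution) has the classical Oettli-Prager closed form max_i |b - A*x|_i / (|A||x| + |b|)_i. The serial NumPy sketch below evaluates that formula on a toy system; the function name is illustrative, and this is the dense serial analogue of the quantity PSGERFS computes on distributed data.

```python
# Serial illustration of the componentwise relative backward error (BERR):
# berr = max_i |b - A x|_i / (|A| |x| + |b|)_i  (Oettli-Prager formula).
import numpy as np

def componentwise_backward_error(a, x, b):
    residual = np.abs(b - a @ x)
    denom = np.abs(a) @ np.abs(x) + np.abs(b)  # componentwise scaling
    return float(np.max(residual / denom))

a = np.array([[2.0, 1.0], [1.0, 3.0]])
x = np.array([1.0, 1.0])
b = a @ x                                      # x solves a x = b exactly
assert componentwise_backward_error(a, x, b) < 1e-15
```

An exactly satisfied system gives a backward error of zero; a solution computed in finite precision typically gives a small multiple of the machine epsilon, which iterative refinement (the job of this routine) drives down.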
The distributed submatrices op( A ) and op( AF ) (respectively sub( X ) and sub( B ) ) should be distributed the same way on the same processes. These conditions ensure that sub( A ) and sub( AF ) (resp. sub( X ) and sub( B ) ) are "perfectly" aligned. Moreover, this routine requires the distributed submatrices sub( A ), sub( AF ), sub( X ), and sub( B ) to be aligned on a block boundary, i.e., if f(x,y) = MOD( x-1, y ): f( IA, DESCA( MB_ ) ) = f( JA, DESCA( NB_ ) ) = 0, f( IAF, DESCAF( MB_ ) ) = f( JAF, DESCAF( NB_ ) ) = 0, f( IB, DESCB( MB_ ) ) = f( JB, DESCB( NB_ ) ) = 0, and f( IX, DESCX( MB_ ) ) = f( JX, DESCX( NB_ ) ) = 0. scalapack-doc-1.5/man/manl/psgerq2.l .TH PSGERQ2 l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PSGERQ2 - compute an RQ factorization of a real distributed M-by-N matrix sub( A ) = A(IA:IA+M-1,JA:JA+N-1) = R * Q .SH SYNOPSIS .TP 20 SUBROUTINE PSGERQ2( M, N, A, IA, JA, DESCA, TAU, WORK, LWORK, INFO ) .TP 20 .ti +4 INTEGER IA, INFO, JA, LWORK, M, N .TP 20 .ti +4 INTEGER DESCA( * ) .TP 20 .ti +4 REAL A( * ), TAU( * ), WORK( * ) .SH PURPOSE PSGERQ2 computes an RQ factorization of a real distributed M-by-N matrix sub( A ) = A(IA:IA+M-1,JA:JA+N-1) = R * Q. Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ ) The descriptor type. In this case, DTYPE_A = 1.
.br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distributed over. The context itself is global, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 M (global input) INTEGER The number of rows to be operated on, i.e. the number of rows of the distributed submatrix sub( A ). M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on, i.e. the number of columns of the distributed submatrix sub( A ). N >= 0.
.TP 8 A (local input/local output) REAL pointer into the local memory to an array of dimension (LLD_A, LOCc(JA+N-1)). On entry, the local pieces of the M-by-N distributed matrix sub( A ) which is to be factored. On exit, if M <= N, the upper triangle of A( IA:IA+M-1, JA+N-M:JA+N-1 ) contains the M by M upper triangular matrix R; if M >= N, the elements on and above the (M-N)-th subdiagonal contain the M by N upper trapezoidal matrix R; the remaining elements, with the array TAU, represent the orthogonal matrix Q as a product of elementary reflectors (see Further Details). IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 TAU (local output) REAL, array, dimension LOCr(IA+M-1) This array contains the scalar factors of the elementary reflectors. TAU is tied to the distributed matrix A. .TP 8 WORK (local workspace/local output) REAL array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. LWORK is local input and must be at least LWORK >= Nq0 + MAX( 1, Mp0 ), where IROFF = MOD( IA-1, MB_A ), ICOFF = MOD( JA-1, NB_A ), IAROW = INDXG2P( IA, MB_A, MYROW, RSRC_A, NPROW ), IACOL = INDXG2P( JA, NB_A, MYCOL, CSRC_A, NPCOL ), Mp0 = NUMROC( M+IROFF, MB_A, MYROW, IAROW, NPROW ), Nq0 = NUMROC( N+ICOFF, NB_A, MYCOL, IACOL, NPCOL ), and NUMROC, INDXG2P are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. 
Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 INFO (global output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. .SH FURTHER DETAILS The matrix Q is represented as a product of elementary reflectors Q = H(ia) H(ia+1) . . . H(ia+k-1), where k = min(m,n). Each H(i) has the form .br H(i) = I - tau * v * v' .br where tau is a real scalar, and v is a real vector with .br v(n-k+i+1:n) = 0 and v(n-k+i) = 1; v(1:n-k+i-1) is stored on exit in A(ia+m-k+i-1,ja:ja+n-k+i-2), and tau in TAU(ia+m-k+i-1). .br
.TH PSGERQF l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PSGERQF - compute an RQ factorization of a real distributed M-by-N matrix sub( A ) = A(IA:IA+M-1,JA:JA+N-1) = R * Q .SH SYNOPSIS .TP 20 SUBROUTINE PSGERQF( M, N, A, IA, JA, DESCA, TAU, WORK, LWORK, INFO ) .TP 20 .ti +4 INTEGER IA, INFO, JA, LWORK, M, N .TP 20 .ti +4 INTEGER DESCA( * ) .TP 20 .ti +4 REAL A( * ), TAU( * ), WORK( * ) .SH PURPOSE PSGERQF computes an RQ factorization of a real distributed M-by-N matrix sub( A ) = A(IA:IA+M-1,JA:JA+N-1) = R * Q. Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array".
.br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 M (global input) INTEGER The number of rows to be operated on, i.e. the number of rows of the distributed submatrix sub( A ). M >= 0. 
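The LOCr() and LOCc() quantities described above are exactly what the NUMROC tool routine computes. As a standalone illustration (not part of the man page itself), the following Python sketch mirrors the standard NUMROC block-cyclic counting algorithm and checks it against the documented upper bound; the example sizes (M = 10, MB_A = 2, NPROW = 2) are chosen arbitrarily:

```python
import math

def numroc(n, nb, iproc, isrcproc, nprocs):
    """Number of rows/columns of an n-element dimension, distributed in
    nb-sized blocks over nprocs processes, owned by process iproc when
    the first block resides on process isrcproc (mirrors ScaLAPACK's
    NUMROC tool function)."""
    mydist = (nprocs + iproc - isrcproc) % nprocs  # distance from the source process
    nblocks = n // nb                              # number of full blocks
    num = (nblocks // nprocs) * nb                 # whole rounds of full blocks
    extrablks = nblocks % nprocs                   # leftover full blocks
    if mydist < extrablks:
        num += nb                                  # this process gets one extra full block
    elif mydist == extrablks:
        num += n % nb                              # this process gets the final partial block
    return num

# Example: M = 10 rows, MB_A = 2, RSRC_A = 0, NPROW = 2
locr = [numroc(10, 2, p, 0, 2) for p in range(2)]
print(locr)                      # LOCr on process rows 0 and 1
assert sum(locr) == 10           # every row is owned by exactly one process row

# Documented upper bound: LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A
bound = math.ceil(math.ceil(10 / 2) / 2) * 2
assert all(l <= bound for l in locr)
```

Note that the counts are generally unequal across processes; the bound above is attained only on the process rows holding the extra blocks.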
.TP 8 N (global input) INTEGER The number of columns to be operated on, i.e. the number of columns of the distributed submatrix sub( A ). N >= 0. .TP 8 A (local input/local output) REAL pointer into the local memory to an array of dimension (LLD_A, LOCc(JA+N-1)). On entry, the local pieces of the M-by-N distributed matrix sub( A ) which is to be factored. On exit, if M <= N, the upper triangle of A( IA:IA+M-1, JA+N-M:JA+N-1 ) contains the M by M upper triangular matrix R; if M >= N, the elements on and above the (M-N)-th subdiagonal contain the M by N upper trapezoidal matrix R; the remaining elements, with the array TAU, represent the orthogonal matrix Q as a product of elementary reflectors (see Further Details). IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 TAU (local output) REAL, array, dimension LOCr(IA+M-1) This array contains the scalar factors of the elementary reflectors. TAU is tied to the distributed matrix A. .TP 8 WORK (local workspace/local output) REAL array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. LWORK is local input and must be at least LWORK >= MB_A * ( Mp0 + Nq0 + MB_A ), where IROFF = MOD( IA-1, MB_A ), ICOFF = MOD( JA-1, NB_A ), IAROW = INDXG2P( IA, MB_A, MYROW, RSRC_A, NPROW ), IACOL = INDXG2P( JA, NB_A, MYCOL, CSRC_A, NPCOL ), Mp0 = NUMROC( M+IROFF, MB_A, MYROW, IAROW, NPROW ), Nq0 = NUMROC( N+ICOFF, NB_A, MYCOL, IACOL, NPCOL ), and NUMROC, INDXG2P are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. 
If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 INFO (global output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. .SH FURTHER DETAILS The matrix Q is represented as a product of elementary reflectors Q = H(ia) H(ia+1) . . . H(ia+k-1), where k = min(m,n). Each H(i) has the form .br H(i) = I - tau * v * v' .br where tau is a real scalar, and v is a real vector with .br v(n-k+i+1:n) = 0 and v(n-k+i) = 1; v(1:n-k+i-1) is stored on exit in A(ia+m-k+i-1,ja:ja+n-k+i-2), and tau in TAU(ia+m-k+i-1). .br
.TH PSGESV l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PSGESV - compute the solution to a real system of linear equations sub( A ) * X = sub( B ), .SH SYNOPSIS .TP 19 SUBROUTINE PSGESV( N, NRHS, A, IA, JA, DESCA, IPIV, B, IB, JB, DESCB, INFO ) .TP 19 .ti +4 INTEGER IA, IB, INFO, JA, JB, N, NRHS .TP 19 .ti +4 INTEGER DESCA( * ), DESCB( * ), IPIV( * ) .TP 19 .ti +4 REAL A( * ), B( * ) .SH PURPOSE PSGESV computes the solution to a real system of linear equations where sub( A ) = A(IA:IA+N-1,JA:JA+N-1) is an N-by-N distributed matrix and X and sub( B ) = B(IB:IB+N-1,JB:JB+NRHS-1) are N-by-NRHS distributed matrices. .br The LU decomposition with partial pivoting and row interchanges is used to factor sub( A ) as sub( A ) = P * L * U, where P is a permutation matrix, L is unit lower triangular, and U is upper triangular. L and U are stored in sub( A ).
The factored form of sub( A ) is then used to solve the system of equations sub( A ) * X = sub( B ). Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. 
.br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br This routine requires square block decomposition ( MB_A = NB_A ). .SH ARGUMENTS .TP 8 N (global input) INTEGER The number of rows and columns to be operated on, i.e. the order of the distributed submatrix sub( A ). N >= 0. .TP 8 NRHS (global input) INTEGER The number of right hand sides, i.e., the number of columns of the distributed submatrix sub( B ). NRHS >= 0. .TP 8 A (local input/local output) REAL pointer into the local memory to an array of dimension (LLD_A,LOCc(JA+N-1)). On entry, the local pieces of the N-by-N distributed matrix sub( A ) to be factored. On exit, this array contains the local pieces of the factors L and U from the factorization sub( A ) = P*L*U; the unit diagonal elements of L are not stored. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 IPIV (local output) INTEGER array, dimension ( LOCr(M_A)+MB_A ) This array contains the pivoting information. IPIV(i) -> The global row local row i was swapped with. This array is tied to the distributed matrix A. .TP 8 B (local input/local output) REAL pointer into the local memory to an array of dimension (LLD_B,LOCc(JB+NRHS-1)). On entry, the right hand side distributed matrix sub( B ).
On exit, if INFO = 0, sub( B ) is overwritten by the solution distributed matrix X. .TP 8 IB (global input) INTEGER The row index in the global array B indicating the first row of sub( B ). .TP 8 JB (global input) INTEGER The column index in the global array B indicating the first column of sub( B ). .TP 8 DESCB (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix B. .TP 8 INFO (global output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. > 0: If INFO = K, U(IA+K-1,JA+K-1) is exactly zero. The factorization has been completed, but the factor U is exactly singular, so the solution could not be computed.
.TH PSGESVD l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PSGESVD - compute the singular value decomposition (SVD) of an M-by-N matrix A, optionally computing the left and/or right singular vectors .SH SYNOPSIS .TP 20 SUBROUTINE PSGESVD( JOBU, JOBVT, M, N, A, IA, JA, DESCA, S, U, IU, JU, DESCU, VT, IVT, JVT, DESCVT, WORK, LWORK, INFO ) .TP 20 .ti +4 CHARACTER JOBU, JOBVT .TP 20 .ti +4 INTEGER IA, INFO, IU, IVT, JA, JU, JVT, LWORK, M, N .TP 20 .ti +4 INTEGER DESCA( * ), DESCU( * ), DESCVT( * ) .TP 20 .ti +4 REAL A( * ), S( * ), U( * ), VT( * ), WORK( * ) .SH PURPOSE PSGESVD computes the singular value decomposition (SVD) of an M-by-N matrix A, optionally computing the left and/or right singular vectors. The SVD is written as A = U * SIGMA * transpose(V) .br where SIGMA is an M-by-N matrix which is zero except for its min(M,N) diagonal elements, U is an M-by-M orthogonal matrix, and V is an N-by-N orthogonal matrix.
The diagonal elements of SIGMA are the singular values of A and the columns of U and V are the corresponding left and right singular vectors, respectively. The singular values are returned in array S in decreasing order and only the first min(M,N) columns of U and rows of VT = V**T are computed. .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distributed over. The context itself is global, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q.
LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS MP = number of local rows in A and U NQ = number of local columns in A and VT SIZE = min( M, N ) SIZEQ = number of local columns in U SIZEP = number of local rows in VT .TP 8 JOBU (global input) CHARACTER*1 Specifies options for computing all or part of the matrix U: .br = 'V': the first SIZE columns of U (the left singular vectors) are returned in the array U; = 'N': no columns of U (no left singular vectors) are computed. .TP 8 JOBVT (global input) CHARACTER*1 Specifies options for computing all or part of the matrix V**T: .br = 'V': the first SIZE rows of V**T (the right singular vectors) are returned in the array VT; = 'N': no rows of V**T (no right singular vectors) are computed. .TP 8 M (global input) INTEGER The number of rows of the input matrix A. M >= 0. .TP 8 N (global input) INTEGER The number of columns of the input matrix A. N >= 0. .TP 8 A (local input/workspace) block cyclic REAL array, global dimension (M, N), local dimension (MP, NQ) On exit, the contents of A are destroyed. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). 
.TP 8 DESCA (global input) INTEGER array of dimension DLEN_ The array descriptor for the distributed matrix A. .TP 8 S (global output) REAL array, dimension SIZE The singular values of A, sorted so that S(i) >= S(i+1). .TP 8 U (local output) REAL array, local dimension (MP, SIZEQ), global dimension (M, SIZE) if JOBU = 'V', U contains the first min(m,n) columns of U if JOBU = 'N', U is not referenced. .TP 8 IU (global input) INTEGER The row index in the global array U indicating the first row of sub( U ). .TP 8 JU (global input) INTEGER The column index in the global array U indicating the first column of sub( U ). .TP 8 DESCU (global input) INTEGER array of dimension DLEN_ The array descriptor for the distributed matrix U. .TP 8 VT (local output) REAL array, local dimension (SIZEP, NQ), global dimension (SIZE, N). If JOBVT = 'V', VT contains the first SIZE rows of V**T. If JOBVT = 'N', VT is not referenced. .TP 8 IVT (global input) INTEGER The row index in the global array VT indicating the first row of sub( VT ). .TP 8 JVT (global input) INTEGER The column index in the global array VT indicating the first column of sub( VT ). .TP 9 DESCVT (global input) INTEGER array of dimension DLEN_ The array descriptor for the distributed matrix VT. .TP 8 WORK (local workspace/output) REAL array, dimension (LWORK) On exit, if INFO = 0, WORK(1) returns the optimal LWORK; .TP 8 LWORK (local input) INTEGER The dimension of the array WORK. LWORK > 2 + 6*SIZEB + MAX(WATOBD, WBDTOSVD), where SIZEB = MAX(M,N), and WATOBD and WBDTOSVD refer, respectively, to the workspace required to bidiagonalize the matrix A and to go from the bidiagonal matrix to the singular value decomposition U*S*VT. For WATOBD, the following holds: WATOBD = MAX(MAX(WPSLANGE,WPSGEBRD), MAX(WPSLARED2D,WPSLARED1D)), where WPSLANGE, WPSLARED1D, WPSLARED2D, WPSGEBRD are the workspaces required respectively for the subprograms PSLANGE, PSLARED1D, PSLARED2D, PSGEBRD. 
Using the standard notation MP = NUMROC( M, MB, MYROW, DESCA( RSRC_ ), NPROW), NQ = NUMROC( N, NB, MYCOL, DESCA( CSRC_ ), NPCOL), the workspaces required for the above subprograms are WPSLANGE = MP, WPSLARED1D = NQ0, WPSLARED2D = MP0, WPSGEBRD = NB*(MP + NQ + 1) + NQ, where NQ0 and MP0 refer, respectively, to the values obtained at MYCOL = 0 and MYROW = 0. In general, the upper limit for the workspace is given by a workspace required on processor (0,0): WATOBD <= NB*(MP0 + NQ0 + 1) + NQ0. In case of a homogeneous process grid this upper limit can be used as an estimate of the minimum workspace for every processor. For WBDTOSVD, the following holds: WBDTOSVD = SIZE*(WANTU*NRU + WANTVT*NCVT) + MAX(WDBDSQR, MAX(WANTU*WPSORMBRQLN, WANTVT*WPSORMBRPRT)), .TP -1 where WANTU (WANTVT) = 1 if left (right) singular vectors are wanted, and 0 otherwise, and WDBDSQR, WPSORMBRQLN and WPSORMBRPRT refer respectively to the workspace required for the subprograms DBDSQR, PSORMBR(QLN), and PSORMBR(PRT), where QLN and PRT are the values of the arguments VECT, SIDE, and TRANS in the call to PSORMBR. NRU is equal to the local number of rows of the matrix U when distributed across a 1-dimensional "column" of processes. Analogously, NCVT is equal to the local number of columns of the matrix VT when distributed across a 1-dimensional "row" of processes. Calling the LAPACK procedure DBDSQR requires WDBDSQR = MAX(1, 2*SIZE + (2*SIZE - 4)*MAX(WANTU, WANTVT)) on every processor. Finally, WPSORMBRQLN = MAX( (NB*(NB-1))/2, (SIZEQ+MP)*NB)+NB*NB, WPSORMBRPRT = MAX( (MB*(MB-1))/2, (SIZEP+NQ)*MB )+MB*MB. If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum size for the work array. The required workspace is returned as the first element of WORK and no error message is issued by PXERBLA. .TP 8 INFO (output) INTEGER = 0: successful exit. .br < 0: if INFO = -i, the i-th argument had an illegal value.
.br > 0: if SBDSQR did not converge. If INFO = MIN(M,N) + 1, then PSGESVD has detected heterogeneity by finding that singular values were not identical across the process grid. In this case, the accuracy of the results from PSGESVD cannot be guaranteed.
.TH PSGESVX l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PSGESVX - use the LU factorization to compute the solution to a real system of linear equations A(IA:IA+N-1,JA:JA+N-1) * X = B(IB:IB+N-1,JB:JB+NRHS-1), .SH SYNOPSIS .TP 20 SUBROUTINE PSGESVX( FACT, TRANS, N, NRHS, A, IA, JA, DESCA, AF, IAF, JAF, DESCAF, IPIV, EQUED, R, C, B, IB, JB, DESCB, X, IX, JX, DESCX, RCOND, FERR, BERR, WORK, LWORK, IWORK, LIWORK, INFO ) .TP 20 .ti +4 CHARACTER EQUED, FACT, TRANS .TP 20 .ti +4 INTEGER IA, IAF, IB, INFO, IX, JA, JAF, JB, JX, LIWORK, LWORK, N, NRHS .TP 20 .ti +4 REAL RCOND .TP 20 .ti +4 INTEGER DESCA( * ), DESCAF( * ), DESCB( * ), DESCX( * ), IPIV( * ), IWORK( * ) .TP 20 .ti +4 REAL A( * ), AF( * ), B( * ), BERR( * ), C( * ), FERR( * ), R( * ), WORK( * ), X( * ) .SH PURPOSE PSGESVX uses the LU factorization to compute the solution to a real system of linear equations where A(IA:IA+N-1,JA:JA+N-1) is an N-by-N matrix and X and B(IB:IB+N-1,JB:JB+NRHS-1) are N-by-NRHS matrices. .br Error bounds on the solution and a condition estimate are also provided. .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array".
.br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH DESCRIPTION In the following description, A denotes A(IA:IA+N-1,JA:JA+N-1), B denotes B(IB:IB+N-1,JB:JB+NRHS-1) and X denotes .br X(IX:IX+N-1,JX:JX+NRHS-1). 
.br The following steps are performed: .br 1. If FACT = 'E', real scaling factors are computed to equilibrate the system: .br TRANS = 'N': diag(R)*A*diag(C) *inv(diag(C))*X = diag(R)*B TRANS = 'T': (diag(R)*A*diag(C))**T *inv(diag(R))*X = diag(C)*B TRANS = 'C': (diag(R)*A*diag(C))**H *inv(diag(R))*X = diag(C)*B Whether or not the system will be equilibrated depends on the scaling of the matrix A, but if equilibration is used, A is overwritten by diag(R)*A*diag(C) and B by diag(R)*B (if TRANS='N') or diag(C)*B (if TRANS = 'T' or 'C'). .br 2. If FACT = 'N' or 'E', the LU decomposition is used to factor the matrix A (after equilibration if FACT = 'E') as .br A = P * L * U, .br where P is a permutation matrix, L is a unit lower triangular matrix, and U is upper triangular. .br 3. The factored form of A is used to estimate the condition number of the matrix A. If the reciprocal of the condition number is less than machine precision, steps 4-6 are skipped. .br 4. The system of equations is solved for X using the factored form of A. .br 5. Iterative refinement is applied to improve the computed solution matrix and calculate error bounds and backward error estimates for it. .br 6. If FACT = 'E' and equilibration was used, the matrix X is premultiplied by diag(C) (if TRANS = 'N') or diag(R) (if TRANS = 'T' or 'C') so that it solves the original system before equilibration. .br .SH ARGUMENTS .TP 8 FACT (global input) CHARACTER Specifies whether or not the factored form of the matrix A(IA:IA+N-1,JA:JA+N-1) is supplied on entry, and if not, .br whether the matrix A(IA:IA+N-1,JA:JA+N-1) should be equilibrated before it is factored. = 'F': On entry, AF(IAF:IAF+N-1,JAF:JAF+N-1) and IPIV con- .br tain the factored form of A(IA:IA+N-1,JA:JA+N-1). If EQUED is not 'N', the matrix A(IA:IA+N-1,JA:JA+N-1) has been equilibrated with scaling factors given by R and C. A(IA:IA+N-1,JA:JA+N-1), AF(IAF:IAF+N-1,JAF:JAF+N-1), and IPIV are not modified. 
= 'N': The matrix A(IA:IA+N-1,JA:JA+N-1) will be copied to .br AF(IAF:IAF+N-1,JAF:JAF+N-1) and factored. .br = 'E': The matrix A(IA:IA+N-1,JA:JA+N-1) will be equili- brated if necessary, then copied to AF(IAF:IAF+N-1,JAF:JAF+N-1) and factored. .TP 8 TRANS (global input) CHARACTER .br Specifies the form of the system of equations: .br = 'N': A(IA:IA+N-1,JA:JA+N-1) * X(IX:IX+N-1,JX:JX+NRHS-1) .br = B(IB:IB+N-1,JB:JB+NRHS-1) (No transpose) .br = 'T': A(IA:IA+N-1,JA:JA+N-1)**T * X(IX:IX+N-1,JX:JX+NRHS-1) .br = B(IB:IB+N-1,JB:JB+NRHS-1) (Transpose) .br = 'C': A(IA:IA+N-1,JA:JA+N-1)**H * X(IX:IX+N-1,JX:JX+NRHS-1) .br = B(IB:IB+N-1,JB:JB+NRHS-1) (Transpose) .TP 8 N (global input) INTEGER The number of rows and columns to be operated on, i.e. the order of the distributed submatrix A(IA:IA+N-1,JA:JA+N-1). N >= 0. .TP 8 NRHS (global input) INTEGER The number of right-hand sides, i.e., the number of columns of the distributed submatrices B(IB:IB+N-1,JB:JB+NRHS-1) and .br X(IX:IX+N-1,JX:JX+NRHS-1). NRHS >= 0. .TP 8 A (local input/local output) REAL pointer into the local memory to an array of local dimension (LLD_A,LOCc(JA+N-1)). On entry, the N-by-N matrix A(IA:IA+N-1,JA:JA+N-1). If FACT = 'F' and EQUED is not 'N', .br then A(IA:IA+N-1,JA:JA+N-1) must have been equilibrated by .br the scaling factors in R and/or C. A(IA:IA+N-1,JA:JA+N-1) is not modified if FACT = 'F' or 'N', or if FACT = 'E' and EQUED = 'N' on exit. On exit, if EQUED .ne. 'N', A(IA:IA+N-1,JA:JA+N-1) is scaled as follows: .br EQUED = 'R': A(IA:IA+N-1,JA:JA+N-1) := .br diag(R) * A(IA:IA+N-1,JA:JA+N-1) .br EQUED = 'C': A(IA:IA+N-1,JA:JA+N-1) := .br A(IA:IA+N-1,JA:JA+N-1) * diag(C) .br EQUED = 'B': A(IA:IA+N-1,JA:JA+N-1) := .br diag(R) * A(IA:IA+N-1,JA:JA+N-1) * diag(C). .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). 
.TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 AF (local input or local output) REAL pointer into the local memory to an array of local dimension (LLD_AF,LOCc(JA+N-1)). If FACT = 'F', then AF(IAF:IAF+N-1,JAF:JAF+N-1) is an input argument and on entry contains the factors L and U from the factorization A(IA:IA+N-1,JA:JA+N-1) = P*L*U as computed by PSGETRF. If EQUED .ne. 'N', then AF is the factored form of the equilibrated matrix A(IA:IA+N-1,JA:JA+N-1). If FACT = 'N', then AF(IAF:IAF+N-1,JAF:JAF+N-1) is an output argument and on exit returns the factors L and U from the factorization A(IA:IA+N-1,JA:JA+N-1) = P*L*U of the original .br matrix A(IA:IA+N-1,JA:JA+N-1). If FACT = 'E', then AF(IAF:IAF+N-1,JAF:JAF+N-1) is an output argument and on exit returns the factors L and U from the factorization A(IA:IA+N-1,JA:JA+N-1) = P*L*U of the equili- .br brated matrix A(IA:IA+N-1,JA:JA+N-1) (see the description of .br A(IA:IA+N-1,JA:JA+N-1) for the form of the equilibrated matrix). .TP 8 IAF (global input) INTEGER The row index in the global array AF indicating the first row of sub( AF ). .TP 8 JAF (global input) INTEGER The column index in the global array AF indicating the first column of sub( AF ). .TP 8 DESCAF (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix AF. .TP 8 IPIV (local input or local output) INTEGER array, dimension LOCr(M_A)+MB_A. If FACT = 'F', then IPIV is an input argu- ment and on entry contains the pivot indices from the fac- torization A(IA:IA+N-1,JA:JA+N-1) = P*L*U as computed by PSGETRF; IPIV(i) -> The global row local row i was swapped with. This array must be aligned with A( IA:IA+N-1, * ). If FACT = 'N', then IPIV is an output argument and on exit contains the pivot indices from the factorization A(IA:IA+N-1,JA:JA+N-1) = P*L*U of the original matrix .br A(IA:IA+N-1,JA:JA+N-1). 
If FACT = 'E', then IPIV is an output argument and on exit contains the pivot indices from the factorization A(IA:IA+N-1,JA:JA+N-1) = P*L*U of the equilibrated matrix .br A(IA:IA+N-1,JA:JA+N-1). .TP 8 EQUED (global input or global output) CHARACTER Specifies the form of equilibration that was done. = 'N': No equilibration (always true if FACT = 'N'). .br = 'R': Row equilibration, i.e., A(IA:IA+N-1,JA:JA+N-1) has been premultiplied by diag(R). = 'C': Column equilibration, i.e., A(IA:IA+N-1,JA:JA+N-1) has been postmultiplied by diag(C). = 'B': Both row and column equilibration, i.e., .br A(IA:IA+N-1,JA:JA+N-1) has been replaced by .br diag(R) * A(IA:IA+N-1,JA:JA+N-1) * diag(C). EQUED is an input variable if FACT = 'F'; otherwise, it is an output variable. .TP 8 R (local input or local output) REAL array, dimension LOCr(M_A). The row scale factors for A(IA:IA+N-1,JA:JA+N-1). .br If EQUED = 'R' or 'B', A(IA:IA+N-1,JA:JA+N-1) is multiplied on the left by diag(R); if EQUED='N' or 'C', R is not acces- sed. R is an input variable if FACT = 'F'; otherwise, R is an output variable. If FACT = 'F' and EQUED = 'R' or 'B', each element of R must be positive. R is replicated in every process column, and is aligned with the distributed matrix A. .TP 8 C (local input or local output) REAL array, dimension LOCc(N_A). The column scale factors for A(IA:IA+N-1,JA:JA+N-1). .br If EQUED = 'C' or 'B', A(IA:IA+N-1,JA:JA+N-1) is multiplied on the right by diag(C); if EQUED = 'N' or 'R', C is not accessed. C is an input variable if FACT = 'F'; otherwise, C is an output variable. If FACT = 'F' and EQUED = 'C' or 'B', each element of C must be positive. C is replicated in every process row, and is aligned with the distributed matrix A. .TP 8 B (local input/local output) REAL pointer into the local memory to an array of local dimension (LLD_B,LOCc(JB+NRHS-1) ). On entry, the N-by-NRHS right-hand side matrix B(IB:IB+N-1,JB:JB+NRHS-1). 
On exit, if .br EQUED = 'N', B(IB:IB+N-1,JB:JB+NRHS-1) is not modified; if TRANS = 'N' and EQUED = 'R' or 'B', B is overwritten by diag(R)*B(IB:IB+N-1,JB:JB+NRHS-1); if TRANS = 'T' or 'C' .br and EQUED = 'C' or 'B', B(IB:IB+N-1,JB:JB+NRHS-1) is over- .br written by diag(C)*B(IB:IB+N-1,JB:JB+NRHS-1). .TP 8 IB (global input) INTEGER The row index in the global array B indicating the first row of sub( B ). .TP 8 JB (global input) INTEGER The column index in the global array B indicating the first column of sub( B ). .TP 8 DESCB (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix B. .TP 8 X (local input/local output) REAL pointer into the local memory to an array of local dimension (LLD_X, LOCc(JX+NRHS-1)). If INFO = 0, the N-by-NRHS solution matrix X(IX:IX+N-1,JX:JX+NRHS-1) to the original .br system of equations. Note that A(IA:IA+N-1,JA:JA+N-1) and .br B(IB:IB+N-1,JB:JB+NRHS-1) are modified on exit if EQUED .ne. 'N', and the solution to the equilibrated system is inv(diag(C))*X(IX:IX+N-1,JX:JX+NRHS-1) if TRANS = 'N' and EQUED = 'C' or 'B', or inv(diag(R))*X(IX:IX+N-1,JX:JX+NRHS-1) if TRANS = 'T' or 'C' and EQUED = 'R' or 'B'. .TP 8 IX (global input) INTEGER The row index in the global array X indicating the first row of sub( X ). .TP 8 JX (global input) INTEGER The column index in the global array X indicating the first column of sub( X ). .TP 8 DESCX (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix X. .TP 8 RCOND (global output) REAL The estimate of the reciprocal condition number of the matrix A(IA:IA+N-1,JA:JA+N-1) after equilibration (if done). If RCOND is less than the machine precision (in particular, if RCOND = 0), the matrix is singular to working precision. This condition is indicated by a return code of INFO > 0. 
.TP 8 FERR (local output) REAL array, dimension LOCc(N_B) The estimated forward error bounds for each solution vector X(j) (the j-th column of the solution matrix X(IX:IX+N-1,JX:JX+NRHS-1). If XTRUE is the true solution, FERR(j) bounds the magnitude of the largest entry in (X(j) - XTRUE) divided by the magnitude of the largest entry in X(j). The estimate is as reliable as the estimate for RCOND, and is almost always a slight overestimate of the true error. FERR is replicated in every process row, and is aligned with the matrices B and X. .TP 8 BERR (local output) REAL array, dimension LOCc(N_B). The componentwise relative backward error of each solution vector X(j) (i.e., the smallest relative change in any entry of A(IA:IA+N-1,JA:JA+N-1) or .br B(IB:IB+N-1,JB:JB+NRHS-1) that makes X(j) an exact solution). BERR is replicated in every process row, and is aligned with the matrices B and X. .TP 8 WORK (local workspace/local output) REAL array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. LWORK is local input and must be at least LWORK = MAX( PSGECON( LWORK ), PSGERFS( LWORK ) ) + LOCr( N_A ). If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 IWORK (local workspace/local output) INTEGER array, dimension (LIWORK) On exit, IWORK(1) returns the minimal and optimal LIWORK. .TP 8 LIWORK (local or global input) INTEGER The dimension of the array IWORK. LIWORK is local input and must be at least LIWORK = LOCr(N_A). If LIWORK = -1, then LIWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. 
Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 INFO (global output) INTEGER = 0: successful exit .br < 0: if INFO = -i, the i-th argument had an illegal value .br > 0: if INFO = i, and i is .br <= N: U(IA+I-1,IA+I-1) is exactly zero. The factorization has been completed, but the factor U is exactly singular, so the solution and error bounds could not be computed. = N+1: RCOND is less than machine precision. The factorization has been completed, but the matrix is singular to working precision, and the solution and error bounds have not been computed. scalapack-doc-1.5/man/manl/psgetf2.l0100644000056400000620000001260006335610643016773 0ustar pfrauenfstaff.TH PSGETF2 l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PSGETF2 - compute an LU factorization of a general M-by-N distributed matrix sub( A ) = A(IA:IA+M-1,JA:JA+N-1) using partial pivoting with row interchanges .SH SYNOPSIS .TP 20 SUBROUTINE PSGETF2( M, N, A, IA, JA, DESCA, IPIV, INFO ) .TP 20 .ti +4 INTEGER IA, INFO, JA, M, N .TP 20 .ti +4 INTEGER DESCA( * ), IPIV( * ) .TP 20 .ti +4 REAL A( * ) .SH PURPOSE PSGETF2 computes an LU factorization of a general M-by-N distributed matrix sub( A ) = A(IA:IA+M-1,JA:JA+N-1) using partial pivoting with row interchanges. The factorization has the form sub( A ) = P * L * U, where P is a permutation matrix, L is lower triangular with unit diagonal elements (lower trapezoidal if m > n), and U is upper triangular (upper trapezoidal if m < n). .br This is the right-looking Parallel Level 2 BLAS version of the algorithm. .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. 
Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). 
An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br This routine requires N <= NB_A-MOD(JA-1, NB_A) and square block decomposition ( MB_A = NB_A ). .br .SH ARGUMENTS .TP 8 M (global input) INTEGER The number of rows to be operated on, i.e. the number of rows of the distributed submatrix sub( A ). M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on, i.e. the number of columns of the distributed submatrix sub( A ). NB_A-MOD(JA-1, NB_A) >= N >= 0. .TP 8 A (local input/local output) REAL pointer into the local memory to an array of dimension (LLD_A, LOCc(JA+N-1)). On entry, this array contains the local pieces of the M-by-N distributed matrix sub( A ). On exit, this array contains the local pieces of the factors L and U from the factoriza- tion sub( A ) = P*L*U; the unit diagonal elements of L are not stored. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 IPIV (local output) INTEGER array, dimension ( LOCr(M_A)+MB_A ) This array contains the pivoting information. IPIV(i) -> The global row local row i was swapped with. This array is tied to the distributed matrix A. .TP 8 INFO (local output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. > 0: If INFO = K, U(IA+K-1,JA+K-1) is exactly zero. The factorization has been completed, but the factor U is exactly singular, and division by zero will occur if it is used to solve a system of equations. 
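The right-looking, partial-pivoting elimination that PSGETF2 performs on its block column can be illustrated with a small serial, dense analogue. This is not the distributed routine: it is a plain Python sketch (0-based indices, no BLACS grid, no INFO-style singularity reporting), but the packed storage of L and U in the input array and the IPIV convention "piv[i] is the row that row i was swapped with" mirror the man page.

```python
def lu_factor(a):
    """Serial, dense analogue of the unblocked right-looking LU
    factorization with partial pivoting described for PSGETF2.
    `a` is a square matrix given as a list of row lists; on return it
    holds the factors packed as in the man page: U on and above the
    diagonal, the multipliers of L strictly below it (the unit
    diagonal of L is not stored).  piv[i] is the (0-based) row that
    row i was interchanged with, mirroring the IPIV convention.
    No singularity check (the INFO > 0 case) is done in this sketch."""
    n = len(a)
    piv = list(range(n))
    for k in range(n):
        # partial pivoting: largest magnitude in column k, rows k..n-1
        p = max(range(k, n), key=lambda i: abs(a[i][k]))
        piv[k] = p
        a[k], a[p] = a[p], a[k]
        # rank-1 (right-looking) update of the trailing submatrix
        for i in range(k + 1, n):
            a[i][k] /= a[k][k]          # multiplier, stored where L lives
            for j in range(k + 1, n):
                a[i][j] -= a[i][k] * a[k][j]
    return piv
```

Applying the recorded interchanges to the original matrix and multiplying the unpacked factors reproduces it, i.e. P*A = L*U, which is exactly the "sub( A ) = P*L*U" statement of the PURPOSE section.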
scalapack-doc-1.5/man/manl/psgetrf.l0100644000056400000620000001256206335610643017102 0ustar pfrauenfstaff.TH PSGETRF l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PSGETRF - compute an LU factorization of a general M-by-N distributed matrix sub( A ) = (IA:IA+M-1,JA:JA+N-1) using partial pivoting with row interchanges .SH SYNOPSIS .TP 20 SUBROUTINE PSGETRF( M, N, A, IA, JA, DESCA, IPIV, INFO ) .TP 20 .ti +4 INTEGER IA, INFO, JA, M, N .TP 20 .ti +4 INTEGER DESCA( * ), IPIV( * ) .TP 20 .ti +4 REAL A( * ) .SH PURPOSE PSGETRF computes an LU factorization of a general M-by-N distributed matrix sub( A ) = (IA:IA+M-1,JA:JA+N-1) using partial pivoting with row interchanges. The factorization has the form sub( A ) = P * L * U, where P is a permutation matrix, L is lower triangular with unit diagonal ele- ments (lower trapezoidal if m > n), and U is upper triangular (upper trapezoidal if m < n). L and U are stored in sub( A ). This is the right-looking Parallel Level 3 BLAS version of the algorithm. .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. 
.br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br This routine requires square block decomposition ( MB_A = NB_A ). .SH ARGUMENTS .TP 8 M (global input) INTEGER The number of rows to be operated on, i.e. the number of rows of the distributed submatrix sub( A ). M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on, i.e. the number of columns of the distributed submatrix sub( A ). N >= 0. .TP 8 A (local input/local output) REAL pointer into the local memory to an array of dimension (LLD_A, LOCc(JA+N-1)). On entry, this array contains the local pieces of the M-by-N distributed matrix sub( A ) to be factored. 
On exit, this array contains the local pieces of the factors L and U from the factorization sub( A ) = P*L*U; the unit diagonal ele- ments of L are not stored. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 IPIV (local output) INTEGER array, dimension ( LOCr(M_A)+MB_A ) This array contains the pivoting information. IPIV(i) -> The global row local row i was swapped with. This array is tied to the distributed matrix A. .TP 8 INFO (global output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. > 0: If INFO = K, U(IA+K-1,JA+K-1) is exactly zero. The factorization has been completed, but the factor U is exactly singular, and division by zero will occur if it is used to solve a system of equations. scalapack-doc-1.5/man/manl/psgetri.l0100644000056400000620000001440406335610643017102 0ustar pfrauenfstaff.TH PSGETRI l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PSGETRI - compute the inverse of a distributed matrix using the LU factorization computed by PSGETRF .SH SYNOPSIS .TP 20 SUBROUTINE PSGETRI( N, A, IA, JA, DESCA, IPIV, WORK, LWORK, IWORK, LIWORK, INFO ) .TP 20 .ti +4 INTEGER IA, INFO, JA, LIWORK, LWORK, N .TP 20 .ti +4 INTEGER DESCA( * ), IPIV( * ), IWORK( * ) .TP 20 .ti +4 REAL A( * ), WORK( * ) .SH PURPOSE PSGETRI computes the inverse of a distributed matrix using the LU factorization computed by PSGETRF. This method inverts U and then computes the inverse of sub( A ) = A(IA:IA+N-1,JA:JA+N-1) denoted InvA by solving the system InvA*L = inv(U) for InvA. 
.br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. 
.br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 N (global input) INTEGER The number of rows and columns to be operated on, i.e. the order of the distributed submatrix sub( A ). N >= 0. .TP 8 A (local input/local output) REAL pointer into the local memory to an array of dimension (LLD_A,LOCc(JA+N-1)). On entry, the local pieces of the L and U obtained by the factorization sub( A ) = P*L*U computed by PSGETRF. On exit, if INFO = 0, sub( A ) contains the inverse of the original distributed matrix sub( A ). .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 IPIV (local input) INTEGER array, dimension LOCr(M_A)+MB_A keeps track of the pivoting information. IPIV(i) is the global row index the local row i was swapped with. This array is tied to the distributed matrix A. .TP 8 WORK (local workspace/local output) REAL array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. LWORK is local input and must be at least LWORK = LOCr(N+MOD(IA-1,MB_A))*NB_A. WORK is used to keep a copy of at most an entire column block of sub( A ). If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. 
Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 IWORK (local workspace/local output) INTEGER array, dimension (LIWORK) On exit, IWORK(1) returns the minimal and optimal LIWORK. .TP 8 LIWORK (local or global input) INTEGER The dimension of the array IWORK used as workspace for physically transposing the pivots. LIWORK is local input and must be at least if NPROW == NPCOL then LIWORK = LOCc( N_A + MOD(JA-1, NB_A) ) + NB_A, else LIWORK = LOCc( N_A + MOD(JA-1, NB_A) ) + MAX( CEIL(CEIL(LOCr(M_A)/MB_A)/(LCM/NPROW)), NB_A ) where LCM is the least common multiple of process rows and columns (NPROW and NPCOL). end if If LIWORK = -1, then LIWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 INFO (global output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. > 0: If INFO = K, U(IA+K-1,IA+K-1) is exactly zero; the matrix is singular and its inverse could not be computed. 
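The inversion scheme stated in PSGETRI's PURPOSE section — invert U, then solve InvA*L = inv(U) for InvA — can be sketched serially. The following is an illustrative Python version under my own conventions (0-based indices, factors packed as by the sketch above a list-of-rows matrix, pivots applied as column interchanges at the end); it is not the distributed implementation and the helper name `lu_inverse` is hypothetical.

```python
def lu_inverse(a, piv):
    """Serial analogue of PSGETRI's method: given the packed L/U
    factors `a` and pivot list `piv` from a PSGETRF-style
    factorization A = P*L*U, invert the triangular factor U, solve
    InvA * L = inv(U) for InvA, then undo the row interchanges as
    column interchanges of InvA."""
    n = len(a)
    # invert U by back substitution, one column at a time
    inv_u = [[0.0] * n for _ in range(n)]
    for j in range(n):
        for i in range(j, -1, -1):
            s = 1.0 if i == j else 0.0
            for k in range(i + 1, j + 1):
                s -= a[i][k] * inv_u[k][j]
            inv_u[i][j] = s / a[i][i]
    # solve X * L = inv(U): columns of X from right to left, since
    # column j needs columns j+1..n-1 (L has unit diagonal, its
    # strict lower part is stored in `a`)
    x = [row[:] for row in inv_u]
    for j in range(n - 2, -1, -1):
        for i in range(n):
            for k in range(j + 1, n):
                x[i][j] -= x[i][k] * a[k][j]
    # inv(A) = X * S(n-1) * ... * S(0) where S(k) swaps k and piv[k]:
    # apply the interchanges to the columns of X in reverse order
    for k in range(n - 1, -1, -1):
        p = piv[k]
        if p != k:
            for i in range(n):
                x[i][k], x[i][p] = x[i][p], x[i][k]
    return x
```

Multiplying the result against the original matrix should give the identity to working precision, matching the "contains the inverse of the original distributed matrix" contract for INFO = 0.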
scalapack-doc-1.5/man/manl/psgetrs.l0100644000056400000620000001324706335610643017120 0ustar pfrauenfstaff.TH PSGETRS l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PSGETRS - solve a system of distributed linear equations op( sub( A ) ) * X = sub( B ) with a general N-by-N distributed matrix sub( A ) using the LU factorization computed by PSGETRF .SH SYNOPSIS .TP 20 SUBROUTINE PSGETRS( TRANS, N, NRHS, A, IA, JA, DESCA, IPIV, B, IB, JB, DESCB, INFO ) .TP 20 .ti +4 CHARACTER TRANS .TP 20 .ti +4 INTEGER IA, IB, INFO, JA, JB, N, NRHS .TP 20 .ti +4 INTEGER DESCA( * ), DESCB( * ), IPIV( * ) .TP 20 .ti +4 REAL A( * ), B( * ) .SH PURPOSE PSGETRS solves a system of distributed linear equations sub( A ) denotes A(IA:IA+N-1,JA:JA+N-1), op( A ) = A or A**T and sub( B ) denotes B(IB:IB+N-1,JB:JB+NRHS-1). .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. 
.br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br This routine requires square block data decomposition ( MB_A=NB_A ). .SH ARGUMENTS .TP 8 TRANS (global input) CHARACTER Specifies the form of the system of equations: .br = 'N': sub( A ) * X = sub( B ) (No transpose) .br = 'T': sub( A )**T * X = sub( B ) (Transpose) .br = 'C': sub( A )**T * X = sub( B ) (Transpose) .TP 8 N (global input) INTEGER The number of rows and columns to be operated on, i.e. the order of the distributed submatrix sub( A ). N >= 0. .TP 8 NRHS (global input) INTEGER The number of right hand sides, i.e., the number of columns of the distributed submatrix sub( B ). NRHS >= 0. .TP 8 A (local input) REAL pointer into the local memory to an array of dimension (LLD_A, LOCc(JA+N-1)). 
On entry, this array contains the local pieces of the factors L and U from the factorization sub( A ) = P*L*U; the unit diagonal elements of L are not stored. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 IPIV (local input) INTEGER array, dimension ( LOCr(M_A)+MB_A ) This array contains the pivoting information. IPIV(i) -> The global row local row i was swapped with. This array is tied to the distributed matrix A. .TP 8 B (local input/local output) REAL pointer into the local memory to an array of dimension (LLD_B,LOCc(JB+NRHS-1)). On entry, the right hand sides sub( B ). On exit, sub( B ) is overwritten by the solution distributed matrix X. .TP 8 IB (global input) INTEGER The row index in the global array B indicating the first row of sub( B ). .TP 8 JB (global input) INTEGER The column index in the global array B indicating the first column of sub( B ). .TP 8 DESCB (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix B. .TP 8 INFO (global output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. 
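PSGETRS solves op( sub( A ) )*X = sub( B ) from the factors of PSGETRF by a permutation followed by two triangular substitutions. A serial Python sketch of that sequence, under assumed conventions (0-based pivots, factors packed as above, one right-hand side), not the distributed code:

```python
def lu_solve(f, piv, b, trans='N'):
    """Serial analogue of PSGETRS: solve op(A)*x = b from the packed
    L/U factors `f` and pivots `piv` of a PSGETRF-style factorization
    A = P*L*U.  trans='N' solves A*x = b; trans='T' (or 'C', which is
    the same in the real case) solves A**T*x = b, as in the TRANS
    argument of the man page."""
    n = len(f)
    x = list(b)
    if trans == 'N':
        for k in range(n):                  # x <- P*b (apply interchanges)
            p = piv[k]
            x[k], x[p] = x[p], x[k]
        for k in range(n):                  # forward solve L*y = P*b
            for i in range(k + 1, n):
                x[i] -= f[i][k] * x[k]
        for k in range(n - 1, -1, -1):      # back solve U*x = y
            for j in range(k + 1, n):
                x[k] -= f[k][j] * x[j]
            x[k] /= f[k][k]
    else:
        for k in range(n):                  # forward solve U**T*y = b
            for i in range(k):
                x[k] -= f[i][k] * x[i]
            x[k] /= f[k][k]
        for k in range(n - 1, -1, -1):      # back solve L**T*z = y
            for j in range(k + 1, n):
                x[k] -= f[j][k] * x[j]
        for k in range(n - 1, -1, -1):      # undo interchanges in reverse
            p = piv[k]
            x[k], x[p] = x[p], x[k]
    return x
```

Note the factorization is done once and the solve is reused for every right-hand side and for both values of TRANS, which is the point of separating PSGETRF from PSGETRS.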
scalapack-doc-1.5/man/manl/psggqrf.l0100644000056400000620000002350206335610643017075 0ustar pfrauenfstaff.TH PSGGQRF l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PSGGQRF - compute a generalized QR factorization of an N-by-M matrix sub( A ) = A(IA:IA+N-1,JA:JA+M-1) and an N-by-P matrix sub( B ) = B(IB:IB+N-1,JB:JB+P-1) .SH SYNOPSIS .TP 20 SUBROUTINE PSGGQRF( N, M, P, A, IA, JA, DESCA, TAUA, B, IB, JB, DESCB, TAUB, WORK, LWORK, INFO ) .TP 20 .ti +4 INTEGER IA, IB, INFO, JA, JB, LWORK, M, N, P .TP 20 .ti +4 INTEGER DESCA( * ), DESCB( * ) .TP 20 .ti +4 REAL A( * ), B( * ), TAUA( * ), TAUB( * ), WORK( * ) .SH PURPOSE PSGGQRF computes a generalized QR factorization of an N-by-M matrix sub( A ) = A(IA:IA+N-1,JA:JA+M-1) and an N-by-P matrix sub( B ) = B(IB:IB+N-1,JB:JB+P-1): sub( A ) = Q*R, sub( B ) = Q*T*Z, .br where Q is an N-by-N orthogonal matrix, Z is a P-by-P orthogonal matrix, and R and T assume one of the forms: .br if N >= M, R = ( R11 ) M , or if N < M, R = ( R11 R12 ) N, ( 0 ) N-M N M-N M .br where R11 is upper triangular, and .br if N <= P, T = ( 0 T12 ) N, or if N > P, T = ( T11 ) N-P, P-N N ( T21 ) P P .br where T12 or T21 is upper triangular. .br In particular, if sub( B ) is square and nonsingular, the GQR factorization of sub( A ) and sub( B ) implicitly gives the QR factorization of inv( sub( B ) )* sub( A ): .br inv( sub( B ) )*sub( A )= Z'*(inv(T)*R) .br where inv( sub( B ) ) denotes the inverse of the matrix sub( B ), and Z' denotes the transpose of matrix Z. .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". 
.br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 N (global input) INTEGER The number of rows to be operated on i.e the number of rows of the distributed submatrices sub( A ) and sub( B ). N >= 0. 
.TP 8 M (global input) INTEGER The number of columns to be operated on i.e the number of columns of the distributed submatrix sub( A ). M >= 0. .TP 8 P (global input) INTEGER The number of columns to be operated on i.e the number of columns of the distributed submatrix sub( B ). P >= 0. .TP 8 A (local input/local output) REAL pointer into the local memory to an array of dimension (LLD_A, LOCc(JA+M-1)). On entry, the local pieces of the N-by-M distributed matrix sub( A ) which is to be factored. On exit, the elements on and above the diagonal of sub( A ) contain the min(N,M) by M upper trapezoidal matrix R (R is upper triangular if N >= M); the elements below the diagonal, with the array TAUA, represent the orthogonal matrix Q as a product of min(N,M) elementary reflectors (see Further Details). IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 TAUA (local output) REAL, array, dimension LOCc(JA+MIN(N,M)-1). This array contains the scalar factors TAUA of the elementary reflectors which represent the orthogonal matrix Q. TAUA is tied to the distributed matrix A. (see Further Details). B (local input/local output) REAL pointer into the local memory to an array of dimension (LLD_B, LOCc(JB+P-1)). On entry, the local pieces of the N-by-P distributed matrix sub( B ) which is to be factored. On exit, if N <= P, the upper triangle of B(IB:IB+N-1,JB+P-N:JB+P-1) contains the N by N upper triangular matrix T; if N > P, the elements on and above the (N-P)-th subdiagonal contain the N by P upper trapezoidal matrix T; the remaining elements, with the array TAUB, represent the orthogonal matrix Z as a product of elementary reflectors (see Further Details). 
IB (global input) INTEGER The row index in the global array B indicating the first row of sub( B ). .TP 8 JB (global input) INTEGER The column index in the global array B indicating the first column of sub( B ). .TP 8 DESCB (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix B. .TP 8 TAUB (local output) REAL, array, dimension LOCr(IB+N-1) This array contains the scalar factors of the elementary reflectors which represent the orthogonal unitary matrix Z. TAUB is tied to the distributed matrix B (see Further Details). .TP 8 WORK (local workspace/local output) REAL array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. LWORK is local input and must be at least LWORK >= MAX( NB_A * ( NpA0 + MqA0 + NB_A ), MAX( (NB_A*(NB_A-1))/2, (PqB0 + NpB0)*NB_A ) + NB_A * NB_A, MB_B * ( NpB0 + PqB0 + MB_B ) ), where IROFFA = MOD( IA-1, MB_A ), ICOFFA = MOD( JA-1, NB_A ), IAROW = INDXG2P( IA, MB_A, MYROW, RSRC_A, NPROW ), IACOL = INDXG2P( JA, NB_A, MYCOL, CSRC_A, NPCOL ), NpA0 = NUMROC( N+IROFFA, MB_A, MYROW, IAROW, NPROW ), MqA0 = NUMROC( M+ICOFFA, NB_A, MYCOL, IACOL, NPCOL ), IROFFB = MOD( IB-1, MB_B ), ICOFFB = MOD( JB-1, NB_B ), IBROW = INDXG2P( IB, MB_B, MYROW, RSRC_B, NPROW ), IBCOL = INDXG2P( JB, NB_B, MYCOL, CSRC_B, NPCOL ), NpB0 = NUMROC( N+IROFFB, MB_B, MYROW, IBROW, NPROW ), PqB0 = NUMROC( P+ICOFFB, NB_B, MYCOL, IBCOL, NPCOL ), and NUMROC, INDXG2P are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. 
.TP 8 INFO (global output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-th entry had an illegal value, then INFO = -(i*100+j); if the i-th argument is a scalar and had an illegal value, then INFO = -i. .SH FURTHER DETAILS The matrix Q is represented as a product of elementary reflectors Q = H(ja) H(ja+1) . . . H(ja+k-1), where k = min(n,m). Each H(i) has the form .br H(i) = I - taua * v * v' .br where taua is a real scalar, and v is a real vector with .br v(1:i-1) = 0 and v(i) = 1; v(i+1:n) is stored on exit in .br A(ia+i:ia+n-1,ja+i-1), and taua in TAUA(ja+i-1). .br To form Q explicitly, use ScaLAPACK subroutine PSORGQR. .br To use Q to update another matrix, use ScaLAPACK subroutine PSORMQR. The matrix Z is represented as a product of elementary reflectors Z = H(ib) H(ib+1) . . . H(ib+k-1), where k = min(n,p). Each H(i) has the form .br H(i) = I - taub * v * v' .br where taub is a real scalar, and v is a real vector with .br v(p-k+i+1:p) = 0 and v(p-k+i) = 1; v(1:p-k+i-1) is stored on exit in B(ib+n-k+i-1,jb:jb+p-k+i-2), and taub in TAUB(ib+n-k+i-1). To form Z explicitly, use ScaLAPACK subroutine PSORGRQ. .br To use Z to update another matrix, use ScaLAPACK subroutine PSORMRQ. Alignment requirements .br ====================== .br The distributed submatrices sub( A ) and sub( B ) must satisfy certain alignment properties; namely, the following expression must be true: ( MB_A.EQ.MB_B .AND. IROFFA.EQ.IROFFB .AND. 
IAROW.EQ.IBROW )
.TH PSGGRQF l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PSGGRQF - compute a generalized RQ factorization of an M-by-N matrix sub( A ) = A(IA:IA+M-1,JA:JA+N-1) .SH SYNOPSIS .TP 20 SUBROUTINE PSGGRQF( M, P, N, A, IA, JA, DESCA, TAUA, B, IB, JB, DESCB, TAUB, WORK, LWORK, INFO ) .TP 20 .ti +4 INTEGER IA, IB, INFO, JA, JB, LWORK, M, N, P .TP 20 .ti +4 INTEGER DESCA( * ), DESCB( * ) .TP 20 .ti +4 REAL A( * ), B( * ), TAUA( * ), TAUB( * ), WORK( * ) .SH PURPOSE PSGGRQF computes a generalized RQ factorization of an M-by-N matrix sub( A ) = A(IA:IA+M-1,JA:JA+N-1) and a P-by-N matrix sub( B ) = B(IB:IB+P-1,JB:JB+N-1): .br sub( A ) = R*Q, sub( B ) = Z*T*Q, .br where Q is an N-by-N orthogonal matrix, Z is a P-by-P orthogonal matrix, and R and T assume one of the forms: .br if M <= N, R = ( 0 R12 ) M, or if M > N, R = ( R11 ) M-N, N-M M ( R21 ) N N .br where R12 or R21 is upper triangular, and .br if P >= N, T = ( T11 ) N , or if P < N, T = ( T11 T12 ) P, ( 0 ) P-N P N-P N .br where T11 is upper triangular. .br In particular, if sub( B ) is square and nonsingular, the GRQ factorization of sub( A ) and sub( B ) implicitly gives the RQ factorization of sub( A )*inv( sub( B ) ): .br sub( A )*inv( sub( B ) ) = (R*inv(T))*Z' .br where inv( sub( B ) ) denotes the inverse of the matrix sub( B ), and Z' denotes the transpose of the matrix Z. .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclically distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". 
.br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ ) The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distributed over. The context itself is global, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 M (global input) INTEGER The number of rows to be operated on, i.e. the number of rows of the distributed submatrix sub( A ). M >= 0. 
.TP 8 P (global input) INTEGER The number of rows to be operated on, i.e. the number of rows of the distributed submatrix sub( B ). P >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on, i.e. the number of columns of the distributed submatrices sub( A ) and sub( B ). N >= 0. .TP 8 A (local input/local output) REAL pointer into the local memory to an array of dimension (LLD_A, LOCc(JA+N-1)). On entry, the local pieces of the M-by-N distributed matrix sub( A ) which is to be factored. On exit, if M <= N, the upper triangle of A( IA:IA+M-1, JA+N-M:JA+N-1 ) contains the M by M upper triangular matrix R; if M >= N, the elements on and above the (M-N)-th subdiagonal contain the M by N upper trapezoidal matrix R; the remaining elements, with the array TAUA, represent the orthogonal matrix Q as a product of elementary reflectors (see Further Details). .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 TAUA (local output) REAL, array, dimension LOCr(IA+M-1). This array contains the scalar factors of the elementary reflectors which represent the orthogonal matrix Q. TAUA is tied to the distributed matrix A (see Further Details). .TP 8 B (local input/local output) REAL pointer into the local memory to an array of dimension (LLD_B, LOCc(JB+N-1)). On entry, the local pieces of the P-by-N distributed matrix sub( B ) which is to be factored. On exit, the elements on and above the diagonal of sub( B ) contain the min(P,N) by N upper trapezoidal matrix T (T is upper triangular if P >= N); the elements below the diagonal, with the array TAUB, represent the orthogonal matrix Z as a product of elementary reflectors (see Further Details). 
IB (global input) INTEGER The row index in the global array B indicating the first row of sub( B ). .TP 8 JB (global input) INTEGER The column index in the global array B indicating the first column of sub( B ). .TP 8 DESCB (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix B. .TP 8 TAUB (local output) REAL, array, dimension LOCc(JB+MIN(P,N)-1). This array contains the scalar factors TAUB of the elementary reflectors which represent the orthogonal matrix Z. TAUB is tied to the distributed matrix B (see Further Details). WORK (local workspace/local output) REAL array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. LWORK is local input and must be at least LWORK >= MAX( MB_A * ( MpA0 + NqA0 + MB_A ), MAX( (MB_A*(MB_A-1))/2, (PpB0 + NqB0)*MB_A ) + MB_A * MB_A, NB_B * ( PpB0 + NqB0 + NB_B ) ), where IROFFA = MOD( IA-1, MB_A ), ICOFFA = MOD( JA-1, NB_A ), IAROW = INDXG2P( IA, MB_A, MYROW, RSRC_A, NPROW ), IACOL = INDXG2P( JA, NB_A, MYCOL, CSRC_A, NPCOL ), MpA0 = NUMROC( M+IROFFA, MB_A, MYROW, IAROW, NPROW ), NqA0 = NUMROC( N+ICOFFA, NB_A, MYCOL, IACOL, NPCOL ), IROFFB = MOD( IB-1, MB_B ), ICOFFB = MOD( JB-1, NB_B ), IBROW = INDXG2P( IB, MB_B, MYROW, RSRC_B, NPROW ), IBCOL = INDXG2P( JB, NB_B, MYCOL, CSRC_B, NPCOL ), PpB0 = NUMROC( P+IROFFB, MB_B, MYROW, IBROW, NPROW ), NqB0 = NUMROC( N+ICOFFB, NB_B, MYCOL, IBCOL, NPCOL ), and NUMROC, INDXG2P are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. 
.TP 8 INFO (global output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-th entry had an illegal value, then INFO = -(i*100+j); if the i-th argument is a scalar and had an illegal value, then INFO = -i. .SH FURTHER DETAILS The matrix Q is represented as a product of elementary reflectors Q = H(ia) H(ia+1) . . . H(ia+k-1), where k = min(m,n). Each H(i) has the form .br H(i) = I - taua * v * v' .br where taua is a real scalar, and v is a real vector with .br v(n-k+i+1:n) = 0 and v(n-k+i) = 1; v(1:n-k+i-1) is stored on exit in A(ia+m-k+i-1,ja:ja+n-k+i-2), and taua in TAUA(ia+m-k+i-1). To form Q explicitly, use ScaLAPACK subroutine PSORGRQ. .br To use Q to update another matrix, use ScaLAPACK subroutine PSORMRQ. The matrix Z is represented as a product of elementary reflectors Z = H(jb) H(jb+1) . . . H(jb+k-1), where k = min(p,n). Each H(i) has the form .br H(i) = I - taub * v * v' .br where taub is a real scalar, and v is a real vector with .br v(1:i-1) = 0 and v(i) = 1; v(i+1:p) is stored on exit in .br B(ib+i:ib+p-1,jb+i-1), and taub in TAUB(jb+i-1). .br To form Z explicitly, use ScaLAPACK subroutine PSORGQR. .br To use Z to update another matrix, use ScaLAPACK subroutine PSORMQR. Alignment requirements .br ====================== .br The distributed submatrices sub( A ) and sub( B ) must satisfy certain alignment properties; namely, the following expression must be true: ( NB_A.EQ.NB_B .AND. ICOFFA.EQ.ICOFFB .AND. 
IACOL.EQ.IBCOL )
.TH PSLABAD l "12 May 1997" "LAPACK version 1.5" "LAPACK auxiliary routine (version 1.5)" .SH NAME PSLABAD - take as input the values computed by PSLAMCH for underflow and overflow, and return the square root of each of these values if the log of LARGE is sufficiently large .SH SYNOPSIS .TP 20 SUBROUTINE PSLABAD( ICTXT, SMALL, LARGE ) .TP 20 .ti +4 INTEGER ICTXT .TP 20 .ti +4 REAL LARGE, SMALL .SH PURPOSE PSLABAD takes as input the values computed by PSLAMCH for underflow and overflow, and returns the square root of each of these values if the log of LARGE is sufficiently large. This subroutine is intended to identify machines with a large exponent range, such as the Crays, and redefine the underflow and overflow limits to be the square roots of the values computed by PSLAMCH. This subroutine is needed because PSLAMCH does not compensate for poor arithmetic in the upper half of the exponent range, as is found on a Cray. .br In addition, this routine performs a global minimization and maximization on these values, to support heterogeneous computing networks. .SH ARGUMENTS .TP 8 ICTXT (global input) INTEGER The BLACS context handle in which the computation takes place. .TP 8 SMALL (local input/local output) REAL On entry, the underflow threshold as computed by PSLAMCH. On exit, if LOG10(LARGE) is sufficiently large, the square root of SMALL, otherwise unchanged. .TP 8 LARGE (local input/local output) REAL On entry, the overflow threshold as computed by PSLAMCH. On exit, if LOG10(LARGE) is sufficiently large, the square root of LARGE, otherwise unchanged. 
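The threshold test PSLABAD describes can be sketched in a few lines. The following is a minimal serial sketch in Python, not the ScaLAPACK routine itself: the 2000.0 cutoff is taken from the serial LAPACK routine SLABAD/DLABAD and should be treated as an assumption here, and the global minimization/maximization over the process grid that PSLABAD additionally performs is omitted.

```python
import math

def labad(small, large):
    # Serial sketch of the [SD]LABAD logic that PSLABAD builds on.
    # On machines with a very large exponent range, the usable underflow
    # and overflow limits are replaced by their square roots.
    if math.log10(large) > 2000.0:
        return math.sqrt(small), math.sqrt(large)
    return small, large

# IEEE double precision: log10(overflow) is about 308, far below the
# cutoff, so the thresholds come back unchanged.
s, l = labad(1e-300, 1e300)
```

On IEEE hardware the routine is effectively a no-op; the square-root branch only fires on machines whose exponent range is so wide that the top half of it is not trustworthy.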
.TH PSLABRD l "12 May 1997" "LAPACK version 1.5" "LAPACK auxiliary routine (version 1.5)" .SH NAME PSLABRD - reduce the first NB rows and columns of a real general M-by-N distributed matrix sub( A ) = A(IA:IA+M-1,JA:JA+N-1) to upper or lower bidiagonal form by an orthogonal transformation Q' * A * P .SH SYNOPSIS .TP 20 SUBROUTINE PSLABRD( M, N, NB, A, IA, JA, DESCA, D, E, TAUQ, TAUP, X, IX, JX, DESCX, Y, IY, JY, DESCY, WORK ) .TP 20 .ti +4 INTEGER IA, IX, IY, JA, JX, JY, M, N, NB .TP 20 .ti +4 INTEGER DESCA( * ), DESCX( * ), DESCY( * ) .TP 20 .ti +4 REAL A( * ), D( * ), E( * ), TAUP( * ), TAUQ( * ), X( * ), Y( * ), WORK( * ) .SH PURPOSE PSLABRD reduces the first NB rows and columns of a real general M-by-N distributed matrix sub( A ) = A(IA:IA+M-1,JA:JA+N-1) to upper or lower bidiagonal form by an orthogonal transformation Q' * A * P, and returns the matrices X and Y which are needed to apply the transformation to the unreduced part of sub( A ). .br If M >= N, sub( A ) is reduced to upper bidiagonal form; if M < N, to lower bidiagonal form. .br This is an auxiliary routine called by PSGEBRD. .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclically distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ ) The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distributed over. 
The context itself is global, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 M (global input) INTEGER The number of rows to be operated on, i.e. the number of rows of the distributed submatrix sub( A ). M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on, i.e. the number of columns of the distributed submatrix sub( A ). N >= 0. .TP 8 NB (global input) INTEGER The number of leading rows and columns of sub( A ) to be reduced. 
.TP 8 A (local input/local output) REAL pointer into the local memory to an array of dimension (LLD_A,LOCc(JA+N-1)). On entry, this array contains the local pieces of the general distributed matrix sub( A ) to be reduced. On exit, the first NB rows and columns of the matrix are overwritten; the rest of the distributed matrix sub( A ) is unchanged. If m >= n, elements on and below the diagonal in the first NB columns, with the array TAUQ, represent the orthogonal matrix Q as a product of elementary reflectors; and elements above the diagonal in the first NB rows, with the array TAUP, represent the orthogonal matrix P as a product of elementary reflectors. If m < n, elements below the diagonal in the first NB columns, with the array TAUQ, represent the orthogonal matrix Q as a product of elementary reflectors, and elements on and above the diagonal in the first NB rows, with the array TAUP, represent the orthogonal matrix P as a product of elementary reflectors. See Further Details. IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 D (local output) REAL array, dimension LOCr(IA+MIN(M,N)-1) if M >= N; LOCc(JA+MIN(M,N)-1) otherwise. The distributed diagonal elements of the bidiagonal matrix B: D(i) = A(ia+i-1,ja+i-1). D is tied to the distributed matrix A. .TP 8 E (local output) REAL array, dimension LOCr(IA+MIN(M,N)-1) if M >= N; LOCc(JA+MIN(M,N)-2) otherwise. The distributed off-diagonal elements of the bidiagonal distributed matrix B: if m >= n, E(i) = A(ia+i-1,ja+i) for i = 1,2,...,n-1; if m < n, E(i) = A(ia+i,ja+i-1) for i = 1,2,...,m-1. E is tied to the distributed matrix A. .TP 8 TAUQ (local output) REAL array dimension LOCc(JA+MIN(M,N)-1). 
The scalar factors of the elementary reflectors which represent the orthogonal matrix Q. TAUQ is tied to the distributed matrix A. See Further Details. TAUP (local output) REAL array, dimension LOCr(IA+MIN(M,N)-1). The scalar factors of the elementary reflectors which represent the orthogonal matrix P. TAUP is tied to the distributed matrix A. See Further Details. X (local output) REAL pointer into the local memory to an array of dimension (LLD_X,NB). On exit, the local pieces of the distributed M-by-NB matrix X(IX:IX+M-1,JX:JX+NB-1) required to update the unreduced part of sub( A ). .TP 8 IX (global input) INTEGER The row index in the global array X indicating the first row of sub( X ). .TP 8 JX (global input) INTEGER The column index in the global array X indicating the first column of sub( X ). .TP 8 DESCX (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix X. .TP 8 Y (local output) REAL pointer into the local memory to an array of dimension (LLD_Y,NB). On exit, the local pieces of the distributed N-by-NB matrix Y(IY:IY+N-1,JY:JY+NB-1) required to update the unreduced part of sub( A ). .TP 8 IY (global input) INTEGER The row index in the global array Y indicating the first row of sub( Y ). .TP 8 JY (global input) INTEGER The column index in the global array Y indicating the first column of sub( Y ). .TP 8 DESCY (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix Y. .TP 8 WORK (local workspace) REAL array, dimension (LWORK) LWORK >= NB_A + NQ, with NQ = NUMROC( N+MOD( IA-1, NB_Y ), NB_Y, MYCOL, IACOL, NPCOL ) IACOL = INDXG2P( JA, NB_A, MYCOL, CSRC_A, NPCOL ) INDXG2P and NUMROC are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. .SH FURTHER DETAILS The matrices Q and P are represented as products of elementary reflectors: .br Q = H(1) H(2) . . . H(nb) and P = G(1) G(2) . . . 
G(nb) Each H(i) and G(i) has the form: .br H(i) = I - tauq * v * v' and G(i) = I - taup * u * u' where tauq and taup are real scalars, and v and u are real vectors. If m >= n, v(1:i-1) = 0, v(i) = 1, and v(i:m) is stored on exit in A(ia+i-1:ia+m-1,ja+i-1); u(1:i) = 0, u(i+1) = 1, and u(i+1:n) is stored on exit in A(ia+i-1,ja+i:ja+n-1); tauq is stored in TAUQ(ja+i-1) and taup in TAUP(ia+i-1). .br If m < n, v(1:i) = 0, v(i+1) = 1, and v(i+1:m) is stored on exit in A(ia+i+1:ia+m-1,ja+i-1); u(1:i-1) = 0, u(i) = 1, and u(i:n) is stored on exit in A(ia+i-1,ja+i:ja+n-1); tauq is stored in TAUQ(ja+i-1) and taup in TAUP(ia+i-1). .br The elements of the vectors v and u together form the m-by-nb matrix V and the nb-by-n matrix U' which are needed, with X and Y, to apply the transformation to the unreduced part of the matrix, using a block update of the form: sub( A ) := sub( A ) - V*Y' - X*U'. .br The contents of sub( A ) on exit are illustrated by the following examples with nb = 2: .br m = 6 and n = 5 (m > n):    m = 5 and n = 6 (m < n):
.br
( 1  1  u1 u1 u1 )          ( 1  u1 u1 u1 u1 u1 )
.br
( v1 1  1  u2 u2 )          ( 1  1  u2 u2 u2 u2 )
.br
( v1 v2 a  a  a  )          ( v1 1  a  a  a  a  )
.br
( v1 v2 a  a  a  )          ( v1 v2 a  a  a  a  )
.br
( v1 v2 a  a  a  )          ( v1 v2 a  a  a  a  )
.br
( v1 v2 a  a  a  )
.br
where a denotes an element of the original matrix which is unchanged, vi denotes an element of the vector defining H(i), and ui an element of the vector defining G(i). 
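The elementary reflectors H(i) = I - tau * v * v' used throughout these routines are never formed as explicit matrices; they are applied as two vector operations. A minimal serial sketch in plain Python (not ScaLAPACK; the function name is chosen here for illustration):

```python
def apply_reflector(tau, v, x):
    # y = (I - tau * v * v') * x without forming the matrix:
    # first alpha = tau * (v' * x), then y = x - alpha * v.
    alpha = tau * sum(vi * xi for vi, xi in zip(v, x))
    return [xi - alpha * vi for vi, xi in zip(v, x)]

# With tau = 2 / (v' * v) the reflector is a Householder reflection,
# so applying it twice returns the original vector: H * (H * x) = x.
v = [1.0, 1.0]
tau = 2.0 / sum(vi * vi for vi in v)
x = [3.0, 4.0]
y = apply_reflector(tau, v, x)   # reflect once
z = apply_reflector(tau, v, y)   # reflect again: back to x
```

This is why storing only v and tau (in A and TAUQ/TAUP above) suffices to represent the orthogonal factors Q and P.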
.br
.TH PSLACON l "12 May 1997" "LAPACK version 1.5" "LAPACK auxiliary routine (version 1.5)" .SH NAME PSLACON - estimate the 1-norm of a square, real distributed matrix A .SH SYNOPSIS .TP 20 SUBROUTINE PSLACON( N, V, IV, JV, DESCV, X, IX, JX, DESCX, ISGN, EST, KASE ) .TP 20 .ti +4 INTEGER IV, IX, JV, JX, KASE, N .TP 20 .ti +4 REAL EST .TP 20 .ti +4 INTEGER DESCV( * ), DESCX( * ), ISGN( * ) .TP 20 .ti +4 REAL V( * ), X( * ) .SH PURPOSE PSLACON estimates the 1-norm of a square, real distributed matrix A. Reverse communication is used for evaluating matrix-vector products. X and V are aligned with the distributed matrix A; this information is implicitly contained within IV, IX, DESCV, and DESCX. .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclically distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ ) The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distributed over. The context itself is global, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. 
.br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 N (global input) INTEGER The length of the distributed vectors V and X. N >= 0. .TP 8 V (local workspace) REAL pointer into the local memory to an array of dimension LOCr(N+MOD(IV-1,MB_V)). On the final return, V = A*W, where EST = norm(V)/norm(W) (W is not returned). .TP 8 IV (global input) INTEGER The row index in the global array V indicating the first row of sub( V ). .TP 8 JV (global input) INTEGER The column index in the global array V indicating the first column of sub( V ). .TP 8 DESCV (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix V. .TP 8 X (local input/local output) REAL pointer into the local memory to an array of dimension LOCr(N+MOD(IX-1,MB_X)). 
On an intermediate return, X should be overwritten by A * X, if KASE=1, or by A' * X, if KASE=2, and PSLACON must be re-called with all the other parameters unchanged. .TP 8 IX (global input) INTEGER The row index in the global array X indicating the first row of sub( X ). .TP 8 JX (global input) INTEGER The column index in the global array X indicating the first column of sub( X ). .TP 8 DESCX (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix X. .TP 8 ISGN (local workspace) INTEGER array, dimension LOCr(N+MOD(IX-1,MB_X)). ISGN is aligned with X and V. .TP 8 EST (global output) REAL An estimate (a lower bound) for norm(A). .TP 8 KASE (local input/local output) INTEGER On the initial call to PSLACON, KASE should be 0. On an intermediate return, KASE will be 1 or 2, indicating whether X should be overwritten by A * X or A' * X. On the final return from PSLACON, KASE will again be 0. .SH FURTHER DETAILS The serial version SLACON has been contributed by Nick Higham, University of Manchester. It was originally named SONEST, dated March 16, 1988. .br Reference: N.J. Higham, "FORTRAN codes for estimating the one-norm of a real or complex matrix, with applications to condition estimation", ACM Trans. Math. Soft., vol. 14, no. 4, pp. 381-396, December 1988. 
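The reverse-communication pattern above (the caller applies A when KASE=1 and A' when KASE=2, then re-calls the routine) implements Hager's 1-norm estimator, the algorithm behind SLACON/SONEST. A minimal serial sketch in Python, with callbacks standing in for the two KASE exchanges; the function names are illustrative, not part of any library:

```python
def estimate_one_norm(matvec, tmatvec, n, itmax=5):
    # Hager's method: each pass corresponds to one KASE = 1 exchange
    # (apply A) and one KASE = 2 exchange (apply A') of the
    # reverse-communication loop in the routine above.
    x = [1.0 / n] * n
    est = 0.0
    for _ in range(itmax):
        v = matvec(x)                        # KASE = 1: v := A * x
        new_est = sum(abs(vi) for vi in v)
        if new_est <= est:
            break                            # estimate stopped improving
        est = new_est
        sgn = [1.0 if vi >= 0.0 else -1.0 for vi in v]
        z = tmatvec(sgn)                     # KASE = 2: z := A' * sign(v)
        j = max(range(n), key=lambda k: abs(z[k]))
        if abs(z[j]) <= sum(z[k] * x[k] for k in range(n)):
            break                            # no better unit vector found
        x = [0.0] * n
        x[j] = 1.0                           # restart from e_j
    return est                               # a lower bound on norm1(A)

A = [[1.0, 2.0], [3.0, 4.0]]
n = len(A)
mv = lambda x: [sum(A[i][k] * x[k] for k in range(n)) for i in range(n)]
mtv = lambda x: [sum(A[k][i] * x[k] for k in range(n)) for i in range(n)]
est = estimate_one_norm(mv, mtv, n)          # true 1-norm here is 6.0
```

The reverse-communication interface exists precisely so that the caller, not the estimator, decides how the products A*x and A'*x are formed; in ScaLAPACK these are distributed PBLAS operations.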
.TH PSLACONSB l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PSLACONSB - look for two consecutive small subdiagonal elements by seeing the effect of starting a double shift QR iteration given by H44, H33, & H43H34 and seeing whether this would make a subdiagonal negligible .SH SYNOPSIS .TP 22 SUBROUTINE PSLACONSB( A, DESCA, I, L, M, H44, H33, H43H34, BUF, LWORK ) .TP 22 .ti +4 INTEGER I, L, LWORK, M .TP 22 .ti +4 REAL H33, H43H34, H44 .TP 22 .ti +4 INTEGER DESCA( * ) .TP 22 .ti +4 REAL A( * ), BUF( * ) .SH PURPOSE PSLACONSB looks for two consecutive small subdiagonal elements by seeing the effect of starting a double shift QR iteration given by H44, H33, & H43H34 and seeing whether this would make a subdiagonal negligible. Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclically distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ ) The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distributed over. The context itself is global, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. 
.br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 A (global input) REAL array, dimension (DESCA(LLD_),*) On entry, the Hessenberg matrix whose tridiagonal part is being scanned. Unchanged on exit. .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 I (global input) INTEGER The global location of the bottom of the unreduced submatrix of A. Unchanged on exit. .TP 8 L (global input) INTEGER The global location of the top of the unreduced submatrix of A. Unchanged on exit. .TP 8 M (global output) INTEGER On exit, this yields the starting location of the QR double shift. This will satisfy: L <= M <= I-2. H44 H33 H43H34 (global input) REAL These three values are for the double shift QR iteration. .TP 8 BUF (local output) REAL array of size LWORK. 
.TP 8 LWORK (global input) INTEGER On exit, LWORK is the size of the work buffer. This must be at least 7*Ceil( Ceil( (I-L)/HBL ) / LCM(NPROW,NPCOL) ) Here LCM is least common multiple, and NPROWxNPCOL is the logical grid size. Logic: ====== Two consecutive small subdiagonal elements will stall convergence of a double shift if their product is relatively small, even if neither element is very small by itself. Thus it is necessary to scan the "tridiagonal portion of the matrix." In the LAPACK algorithm DLAHQR, a loop over M goes from I-2 down to L and examines H(m,m), H(m+1,m+1), H(m+1,m), H(m,m+1), H(m-1,m-1), H(m,m-1), and H(m+2,m-1). Since these elements may be on separate processors, the first major loop (10) goes over the tridiagonal and has each node store whatever values of the 7 it has that the node owning H(m,m) does not. This will occur on a border and can happen in no more than 3 locations per block, assuming square blocks. There are 5 buffers in which each node stores these values: a buffer to send diagonally down and right, a buffer to send up, a buffer to send left, a buffer to send diagonally up and left, and a buffer to send right. Each of these buffers is actually stored in one buffer BUF, where BUF(ISTR1+1) starts the first buffer, BUF(ISTR2+1) starts the second, etc. After the values are stored, if there are any values that a node needs, they will be sent and received. Then the next major loop passes over the data and searches for two consecutive small subdiagonals. Notes: This routine does a global maximum and must be called by all processes. Implemented by: G. 
Henry, November 17, 1996 scalapack-doc-1.5/man/manl/pslacp2.l0100644000056400000620000001266306335610643016776 0ustar pfrauenfstaff.TH PSLACP2 l "12 May 1997" "LAPACK version 1.5" "LAPACK auxiliary routine (version 1.5)" .SH NAME PSLACP2 - copies all or part of a distributed matrix A to another distributed matrix B .SH SYNOPSIS .TP 20 SUBROUTINE PSLACP2( UPLO, M, N, A, IA, JA, DESCA, B, IB, JB, DESCB ) .TP 20 .ti +4 CHARACTER UPLO .TP 20 .ti +4 INTEGER IA, IB, JA, JB, M, N .TP 20 .ti +4 INTEGER DESCA( * ), DESCB( * ) .TP 20 .ti +4 REAL A( * ), B( * ) .SH PURPOSE PSLACP2 copies all or part of a distributed matrix A to another distributed matrix B. No communication is performed; PSLACP2 performs a local copy sub( B ) := sub( A ), where sub( A ) denotes A(IA:IA+M-1,JA:JA+N-1) and sub( B ) denotes B(IB:IB+M-1,JB:JB+N-1). PSLACP2 requires that only one dimension of the matrix operands is distributed. .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array.
.br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 UPLO (global input) CHARACTER Specifies the part of the distributed matrix sub( A ) to be copied: .br = 'U': Upper triangular part is copied; the strictly lower triangular part of sub( A ) is not referenced; = 'L': Lower triangular part is copied; the strictly upper triangular part of sub( A ) is not referenced; Otherwise: All of the matrix sub( A ) is copied. .TP 8 M (global input) INTEGER The number of rows to be operated on i.e the number of rows of the distributed submatrix sub( A ). M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on i.e the number of columns of the distributed submatrix sub( A ). N >= 0. .TP 8 A (local input) REAL pointer into the local memory to an array of dimension (LLD_A, LOCc(JA+N-1) ). 
This array contains the local pieces of the distributed matrix sub( A ) to be copied from. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 B (local output) REAL pointer into the local memory to an array of dimension (LLD_B, LOCc(JB+N-1) ). This array contains on exit the local pieces of the distributed matrix sub( B ) set as follows: if UPLO = 'U', B(IB+i-1,JB+j-1) = A(IA+i-1,JA+j-1), 1<=i<=j, 1<=j<=N; if UPLO = 'L', B(IB+i-1,JB+j-1) = A(IA+i-1,JA+j-1), j<=i<=M, 1<=j<=N; otherwise, B(IB+i-1,JB+j-1) = A(IA+i-1,JA+j-1), 1<=i<=M, 1<=j<=N. .TP 8 IB (global input) INTEGER The row index in the global array B indicating the first row of sub( B ). .TP 8 JB (global input) INTEGER The column index in the global array B indicating the first column of sub( B ). .TP 8 DESCB (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix B. scalapack-doc-1.5/man/manl/pslacp3.l0100644000056400000620000001204306335610643016767 0ustar pfrauenfstaff.TH PSLACP3 l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PSLACP3 - an auxiliary routine that copies from a global parallel array into a local replicated array or vice versa .SH SYNOPSIS .TP 20 SUBROUTINE PSLACP3( M, I, A, DESCA, B, LDB, II, JJ, REV ) .TP 20 .ti +4 INTEGER I, II, JJ, LDB, M, REV .TP 20 .ti +4 INTEGER DESCA( * ) .TP 20 .ti +4 REAL A( * ), B( LDB, * ) .SH PURPOSE PSLACP3 is an auxiliary routine that copies from a global parallel array into a local replicated array or vice versa. Notice that the entire submatrix that is copied is placed on one or more nodes.
The receiving node can be specified precisely, or all nodes can receive, or just one row or column of nodes. .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. 
.br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 M (global input) INTEGER M is the order of the square submatrix that is copied. M >= 0. Unchanged on exit. .TP 8 I (global input) INTEGER A(I,I) is the global location that the copying starts from. Unchanged on exit. .TP 8 A (global input/output) REAL array, dimension (DESCA(LLD_),*) On entry, the parallel matrix to be copied into or from. On exit, if REV=1, the copied data. Unchanged on exit if REV=0. .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 B (local input/output) REAL array of size (LDB,M) If REV=0, this is the replicated copy of the array A(I:I+M-1,I:I+M-1). If REV=1, this is unchanged on exit. .TP 8 LDB (local input) INTEGER The leading dimension of B. .TP 8 II (global input) INTEGER By using REV 0 & 1, data can be sent out and returned again. If REV=0, then II is the destination row index for the node(s) receiving the replicated B. If II>=0,JJ>=0, then node (II,JJ) receives the data. If II=-1,JJ>=0, then all rows in column JJ receive the data. If II>=0,JJ=-1, then all cols in row II receive the data. If II=-1,JJ=-1, then all nodes receive the data. If REV<>0, then II is the source row index for the node(s) sending the replicated B. .TP 8 JJ (global input) INTEGER Similar description as II above. .TP 8 REV (global input) INTEGER Use REV = 0 to send global A into locally replicated B (on node (II,JJ)).
Use REV <> 0 to send locally replicated B from node (II,JJ) to its owner (which changes depending on its location in A) into the global A. Implemented by: G. Henry, May 1, 1997 scalapack-doc-1.5/man/manl/pslacpy.l0100644000056400000620000001254306335610644017103 0ustar pfrauenfstaff.TH PSLACPY l "12 May 1997" "LAPACK version 1.5" "LAPACK auxiliary routine (version 1.5)" .SH NAME PSLACPY - copies all or part of a distributed matrix A to another distributed matrix B .SH SYNOPSIS .TP 20 SUBROUTINE PSLACPY( UPLO, M, N, A, IA, JA, DESCA, B, IB, JB, DESCB ) .TP 20 .ti +4 CHARACTER UPLO .TP 20 .ti +4 INTEGER IA, IB, JA, JB, M, N .TP 20 .ti +4 INTEGER DESCA( * ), DESCB( * ) .TP 20 .ti +4 REAL A( * ), B( * ) .SH PURPOSE PSLACPY copies all or part of a distributed matrix A to another distributed matrix B. No communication is performed; PSLACPY performs a local copy sub( B ) := sub( A ), where sub( A ) denotes A(IA:IA+M-1,JA:JA+N-1) and sub( B ) denotes B(IB:IB+M-1,JB:JB+N-1). Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A.
.br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 UPLO (global input) CHARACTER Specifies the part of the distributed matrix sub( A ) to be copied: .br = 'U': Upper triangular part is copied; the strictly lower triangular part of sub( A ) is not referenced; = 'L': Lower triangular part is copied; the strictly upper triangular part of sub( A ) is not referenced; Otherwise: All of the matrix sub( A ) is copied. .TP 8 M (global input) INTEGER The number of rows to be operated on i.e the number of rows of the distributed submatrix sub( A ). M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on i.e the number of columns of the distributed submatrix sub( A ). N >= 0. 
.TP 8 A (local input) REAL pointer into the local memory to an array of dimension (LLD_A, LOCc(JA+N-1) ). This array contains the local pieces of the distributed matrix sub( A ) to be copied from. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 B (local output) REAL pointer into the local memory to an array of dimension (LLD_B, LOCc(JB+N-1) ). This array contains on exit the local pieces of the distributed matrix sub( B ) set as follows: if UPLO = 'U', B(IB+i-1,JB+j-1) = A(IA+i-1,JA+j-1), 1<=i<=j, 1<=j<=N; if UPLO = 'L', B(IB+i-1,JB+j-1) = A(IA+i-1,JA+j-1), j<=i<=M, 1<=j<=N; otherwise, B(IB+i-1,JB+j-1) = A(IA+i-1,JA+j-1), 1<=i<=M, 1<=j<=N. .TP 8 IB (global input) INTEGER The row index in the global array B indicating the first row of sub( B ). .TP 8 JB (global input) INTEGER The column index in the global array B indicating the first column of sub( B ). .TP 8 DESCB (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix B. 
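The LOCr()/LOCc() quantities quoted in the Notes of these pages are computed by the ScaLAPACK tool function NUMROC. As an illustration only (not part of the man pages themselves), the following Python sketch re-implements the block-cyclic counting that NUMROC performs; the function name and argument order mirror the Fortran original, but this serial re-implementation is ours.

```python
def numroc(n, nb, iproc, isrcproc, nprocs):
    """Count the elements of an n-element dimension, in blocks of nb,
    owned by process coordinate iproc when the distribution starts at
    process isrcproc and cycles over nprocs processes."""
    mydist = (nprocs + iproc - isrcproc) % nprocs  # distance from the source process
    nblocks = n // nb                   # number of full blocks
    num = (nblocks // nprocs) * nb      # every process gets this many elements
    extrablocks = nblocks % nprocs      # leftover full blocks
    if mydist < extrablocks:
        num += nb                       # one extra full block
    elif mydist == extrablocks:
        num += n % nb                   # the trailing partial block
    return num

# LOCr(M) for M=10, MB_A=3, RSRC_A=0 on a NPROW=2 grid: the blocks
# (3,3,3,1) go to process rows 0,1,0,1, so rows own 6 and 4 elements.
print(numroc(10, 3, 0, 0, 2), numroc(10, 3, 1, 0, 2))
```

The values always sum to n over all processes, and each is bounded by the ceil( ceil(M/MB_A)/NPROW )*MB_A estimate given above.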
scalapack-doc-1.5/man/manl/pslaevswp.l0100644000056400000620000001213206335610644017446 0ustar pfrauenfstaff.TH PSLAEVSWP l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PSLAEVSWP - move the eigenvectors (potentially unsorted) from where they are computed, to a ScaLAPACK standard block cyclic array, sorted so that the corresponding eigenvalues are sorted .SH SYNOPSIS .TP 22 SUBROUTINE PSLAEVSWP( N, ZIN, LDZI, Z, IZ, JZ, DESCZ, NVS, KEY, WORK, LWORK ) .TP 22 .ti +4 INTEGER IZ, JZ, LDZI, LWORK, N .TP 22 .ti +4 INTEGER DESCZ( * ), KEY( * ), NVS( * ) .TP 22 .ti +4 REAL WORK( * ), Z( * ), ZIN( LDZI, * ) .SH PURPOSE PSLAEVSWP moves the eigenvectors (potentially unsorted) from where they are computed, to a ScaLAPACK standard block cyclic array, sorted so that the corresponding eigenvalues are sorted. Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. 
.br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS NP = the number of rows local to a given process. NQ = the number of columns local to a given process. .TP 8 N (global input) INTEGER The order of the matrix A. N >= 0. .TP 8 ZIN (local input) REAL array, dimension ( LDZI, NVS(iam) ) The eigenvectors on input. Each eigenvector resides entirely in one process. Each process holds a contiguous set of NVS(iam) eigenvectors. The first eigenvector which the process holds is: sum for i=[0,iam-1) of NVS(i) .TP 8 LDZI (local input) INTEGER leading dimension of the ZIN array .TP 8 Z (local output) REAL array global dimension (N, N), local dimension (DESCZ(DLEN_), NQ) The eigenvectors on output. The eigenvectors are distributed in a block cyclic manner in both dimensions, with a block size of NB.
.TP 8 IZ (global input) INTEGER Z's global row index, which points to the beginning of the submatrix which is to be operated on. .TP 8 JZ (global input) INTEGER Z's global column index, which points to the beginning of the submatrix which is to be operated on. .TP 8 DESCZ (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix Z. .TP 8 NVS (global input) INTEGER array, dimension( nprocs+1 ) nvs(i) = number of eigenvectors held by processes [0,i-1). nvs(1) = number of eigenvectors held by [0,1-1) == 0. nvs(nprocs+1) = number of eigenvectors held by [0,nprocs) == total number of eigenvectors .TP 8 KEY (global input) INTEGER array, dimension( N ) Indicates the actual index (after sorting) for each of the eigenvectors. .TP 8 WORK (local workspace) REAL array, dimension (LWORK) .TP 8 LWORK (local input) INTEGER dimension of WORK scalapack-doc-1.5/man/manl/pslahqr.l0100644000056400000620000002457506335610644017102 0ustar pfrauenfstaff.TH PSLAHQR l "12 May 1997" "LAPACK version 1.5" "LAPACK auxiliary routine (version 1.5)" .SH NAME PSLAHQR - an auxiliary routine used to find the Schur decomposition and/or eigenvalues of a matrix already in Hessenberg form from cols ILO to IHI .SH SYNOPSIS .TP 20 SUBROUTINE PSLAHQR( WANTT, WANTZ, N, ILO, IHI, A, DESCA, WR, WI, ILOZ, IHIZ, Z, DESCZ, WORK, LWORK, IWORK, ILWORK, INFO ) .TP 20 .ti +4 LOGICAL WANTT, WANTZ .TP 20 .ti +4 INTEGER IHI, IHIZ, ILO, ILOZ, ILWORK, INFO, LWORK, N, ROTN .TP 20 .ti +4 INTEGER DESCA( * ), DESCZ( * ), IWORK( * ) .TP 20 .ti +4 REAL A( * ), WI( * ), WORK( * ), WR( * ), Z( * ) .SH PURPOSE PSLAHQR is an auxiliary routine used to find the Schur decomposition and/or eigenvalues of a matrix already in Hessenberg form from cols ILO to IHI. Notes .br ===== .br Each global data object is described by an associated description vector.
This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. 
.br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 WANTT (global input) LOGICAL = .TRUE. : the full Schur form T is required; .br = .FALSE.: only eigenvalues are required. .TP 8 WANTZ (global input) LOGICAL .br = .TRUE. : the matrix of Schur vectors Z is required; .br = .FALSE.: Schur vectors are not required. .TP 8 N (global input) INTEGER The order of the Hessenberg matrix A (and Z if WANTZ). N >= 0. .TP 8 ILO (global input) INTEGER IHI (global input) INTEGER It is assumed that A is already upper quasi-triangular in rows and columns IHI+1:N, and that A(ILO,ILO-1) = 0 (unless ILO = 1). PSLAHQR works primarily with the Hessenberg submatrix in rows and columns ILO to IHI, but applies transformations to all of H if WANTT is .TRUE.. 1 <= ILO <= max(1,IHI); IHI <= N. .TP 8 A (global input/output) REAL array, dimension (DESCA(LLD_),*) On entry, the upper Hessenberg matrix A. On exit, if WANTT is .TRUE., A is upper quasi-triangular in rows and columns ILO:IHI, with any 2-by-2 or larger diagonal blocks not yet in standard form. If WANTT is .FALSE., the contents of A are unspecified on exit. .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 WR (global replicated output) REAL array, dimension (N) WI (global replicated output) REAL array, dimension (N) The real and imaginary parts, respectively, of the computed eigenvalues ILO to IHI are stored in the corresponding elements of WR and WI. If two eigenvalues are computed as a complex conjugate pair, they are stored in consecutive elements of WR and WI, say the i-th and (i+1)th, with WI(i) > 0 and WI(i+1) < 0. 
If WANTT is .TRUE., the eigenvalues are stored in the same order as on the diagonal of the Schur form returned in A. A may be returned with larger diagonal blocks until the next release. .TP 8 ILOZ (global input) INTEGER IHIZ (global input) INTEGER Specify the rows of Z to which transformations must be applied if WANTZ is .TRUE.. 1 <= ILOZ <= ILO; IHI <= IHIZ <= N. .TP 8 Z (global input/output) REAL array. If WANTZ is .TRUE., on entry Z must contain the current matrix Z of transformations accumulated by PDHSEQR, and on exit Z has been updated; transformations are applied only to the submatrix Z(ILOZ:IHIZ,ILO:IHI). If WANTZ is .FALSE., Z is not referenced. .TP 8 DESCZ (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix Z. .TP 8 WORK (local output) REAL array of size LWORK (Unless LWORK=-1, in which case WORK must be at least size 1) .TP 8 LWORK (local input) INTEGER WORK(LWORK) is a local array and LWORK is assumed big enough so that LWORK >= 3*N + MAX( 2*MAX(DESCZ(LLD_),DESCA(LLD_)) + 2*LOCc(N), 7*Ceil(N/HBL)/LCM(NPROW,NPCOL)) + MAX( 2*N, (8*LCM(NPROW,NPCOL)+2)**2 ) If LWORK=-1, then WORK(1) gets set to the above number and the code returns immediately. .TP 8 IWORK (global and local input) INTEGER array of size ILWORK This will hold some of the IBLK integer arrays. This is held as a place holder for a future release. Currently unreferenced. .TP 8 ILWORK (local input) INTEGER This will hold the size of the IWORK array. This is held as a place holder for a future release. Currently unreferenced. .TP 8 INFO (global output) INTEGER < 0: parameter number -INFO incorrect or inconsistent .br = 0: successful exit .br > 0: PSLAHQR failed to compute all the eigenvalues ILO to IHI in a total of 30*(IHI-ILO+1) iterations; if INFO = i, elements i+1:ihi of WR and WI contain those eigenvalues which have been successfully computed. Logic: This algorithm is very similar to _LAHQR. 
Unlike _LAHQR, instead of sending one double shift through the largest unreduced submatrix, this algorithm sends multiple double shifts and spaces them apart so that there can be parallelism across several processor rows/columns. Another critical difference is that this algorithm aggregates multiple transforms together in order to apply them in a block fashion. Important Local Variables: IBLK = The maximum number of bulges that can be computed. Currently fixed. In future releases this won't be fixed. HBL = The square block size (HBL=DESCA(MB_)=DESCA(NB_)) ROTN = The number of transforms to block together NBULGE = The number of bulges that will be attempted on the current submatrix. IBULGE = The current number of bulges started. K1(*),K2(*) = The current bulge loops from K1(*) to K2(*). Subroutines: From LAPACK, this routine calls: SLAHQR -> Serial QR used to determine shifts and eigenvalues SLARFG -> Determine the Householder transforms From ScaLAPACK, this routine calls: PSLACONSB -> To determine where to start each iteration SLAMSH -> Sends multiple shifts through a small submatrix to see how the consecutive subdiagonals change (if PSLACONSB indicates we can start a run in the middle) PSLAWIL -> Given the shift, get the transformation SLASORTE -> Pair up eigenvalues so that reals are paired. PSLACP3 -> Parallel array to local replicated array copy & back. SLAREF -> Row/column reflector applier. Core routine here. PSLASMSUB -> Finds negligible subdiagonal elements. Current Notes and/or Restrictions: 1.) This code requires the distributed block size to be square and at least six (6); unlike simpler codes like LU, this algorithm is extremely sensitive to block size. Unwise choices of too small a block size can lead to bad performance. 2.) This code requires A and Z to be distributed identically and have identical contxts. A future version may allow Z to have a different contxt to 1D row map it to all nodes (so no communication on Z is necessary.) 3.)
This release currently does not have a routine for resolving the Schur blocks into regular 2x2 form after this code is completed. Because of this, a significant performance penalty is incurred while the deflation is done, sometimes by a single column of processors. 4.) This code does not currently block the initial transforms so that none of the rows or columns for any bulge are completed until all are started. To offset pipeline start-up it is recommended that at least 2*LCM(NPROW,NPCOL) bulges are used (if possible). 5.) The maximum number of bulges currently supported is fixed at 32. In future versions this will be limited only by the incoming WORK and IWORK array. 6.) The matrix A must be in upper Hessenberg form. If elements below the subdiagonal are nonzero, the resulting transforms may be nonsimilar. This is also true with the LAPACK routine SLAHQR. 7.) For this release, this code has only been tested for RSRC_=CSRC_=0, but it has been written for the general case. 8.) Currently, all the eigenvalues are distributed to all the nodes. Future releases will probably distribute the eigenvalues by the column partitioning. 9.) The internals of this routine are subject to change. 10.) To optimize this for your architecture, try tuning SLAREF. 11.) This code has only been tested for WANTZ = .TRUE. and may behave unpredictably for WANTZ set to .FALSE. Implemented by: G.
Henry, May 1, 1997 scalapack-doc-1.5/man/manl/pslahrd.l0100644000056400000620000001050006335610644017054 0ustar pfrauenfstaff.TH PSLAHRD l "12 May 1997" "LAPACK version 1.5" "LAPACK auxiliary routine (version 1.5)" .SH NAME PSLAHRD - reduce the first NB columns of a real general N-by-(N-K+1) distributed matrix A(IA:IA+N-1,JA:JA+N-K) so that elements below the k-th subdiagonal are zero .SH SYNOPSIS .TP 20 SUBROUTINE PSLAHRD( N, K, NB, A, IA, JA, DESCA, TAU, T, Y, IY, JY, DESCY, WORK ) .TP 20 .ti +4 INTEGER IA, IY, JA, JY, K, N, NB .TP 20 .ti +4 INTEGER DESCA( * ), DESCY( * ) .TP 20 .ti +4 REAL A( * ), T( * ), TAU( * ), WORK( * ), Y( * ) .SH PURPOSE PSLAHRD reduces the first NB columns of a real general N-by-(N-K+1) distributed matrix A(IA:IA+N-1,JA:JA+N-K) so that elements below the k-th subdiagonal are zero. The reduction is performed by an orthogo- nal similarity transformation Q' * A * Q. The routine returns the matrices V and T which determine Q as a block reflector I - V*T*V', and also the matrix Y = A * V * T. .br This is an auxiliary routine called by PSGEHRD. In the following comments sub( A ) denotes A(IA:IA+N-1,JA:JA+N-1). .br .SH ARGUMENTS .TP 8 N (global input) INTEGER The number of rows and columns to be operated on, i.e. the order of the distributed submatrix sub( A ). N >= 0. .TP 8 K (global input) INTEGER The offset for the reduction. Elements below the k-th subdiagonal in the first NB columns are reduced to zero. .TP 8 NB (global input) INTEGER The number of columns to be reduced. .TP 8 A (local input/local output) REAL pointer into the local memory to an array of dimension (LLD_A, LOCc(JA+N-K)). On entry, this array contains the local pieces of the N-by-(N-K+1) general distributed matrix A(IA:IA+N-1,JA:JA+N-K).
On exit, the elements on and above the k-th subdiagonal in the first NB columns are overwritten with the corresponding elements of the reduced distributed matrix; the elements below the k-th subdiagonal, with the array TAU, represent the matrix Q as a product of elementary reflectors. The other columns of A(IA:IA+N-1,JA:JA+N-K) are unchanged. See Further Details. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 TAU (local output) REAL array, dimension LOCc(JA+N-2) The scalar factors of the elementary reflectors (see Further Details). TAU is tied to the distributed matrix A. .TP 8 T (local output) REAL array, dimension (NB_A,NB_A) The upper triangular matrix T. .TP 8 Y (local output) REAL pointer into the local memory to an array of dimension (LLD_Y,NB_A). On exit, this array contains the local pieces of the N-by-NB distributed matrix Y. LLD_Y >= LOCr(IA+N-1). .TP 8 IY (global input) INTEGER The row index in the global array Y indicating the first row of sub( Y ). .TP 8 JY (global input) INTEGER The column index in the global array Y indicating the first column of sub( Y ). .TP 8 DESCY (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix Y. .TP 8 WORK (local workspace) REAL array, dimension (NB) .SH FURTHER DETAILS The matrix Q is represented as a product of nb elementary reflectors Q = H(1) H(2) . . . H(nb). .br Each H(i) has the form .br H(i) = I - tau * v * v' .br where tau is a real scalar, and v is a real vector with .br v(1:i+k-1) = 0, v(i+k) = 1; v(i+k+1:n) is stored on exit in A(ia+i+k:ia+n-1,ja+i-1), and tau in TAU(ja+i-1).
.br The elements of the vectors v together form the (n-k+1)-by-nb matrix V which is needed, with T and Y, to apply the transformation to the unreduced part of the matrix, using an update of the form: A(ia:ia+n-1,ja:ja+n-k) := (I-V*T*V')*(A(ia:ia+n-1,ja:ja+n-k)-Y*V'). The contents of A(ia:ia+n-1,ja:ja+n-k) on exit are illustrated by the following example with n = 7, k = 3 and nb = 2: .br ( a h a a a ) .br ( a h a a a ) .br ( a h a a a ) .br ( h h a a a ) .br ( v1 h a a a ) .br ( v1 v2 a a a ) .br ( v1 v2 a a a ) .br where a denotes an element of the original matrix .br A(ia:ia+n-1,ja:ja+n-k), h denotes a modified element of the upper Hessenberg matrix H, and vi denotes an element of the vector defining H(i). .br scalapack-doc-1.5/man/manl/pslamch.l0100644000056400000620000000250606335610644017055 0ustar pfrauenfstaff.TH PSLAMCH l "12 May 1997" "LAPACK version 1.5" "LAPACK auxiliary routine (version 1.5)" .SH NAME PSLAMCH - determine single precision machine parameters .SH SYNOPSIS .TP 14 REAL FUNCTION PSLAMCH( ICTXT, CMACH ) .TP 14 .ti +4 CHARACTER CMACH .TP 14 .ti +4 INTEGER ICTXT .SH PURPOSE PSLAMCH determines single precision machine parameters. .SH ARGUMENTS .TP 8 ICTXT (global input) INTEGER The BLACS context handle in which the computation takes place. 
.TP 8 CMACH (global input) CHARACTER*1 Specifies the value to be returned by PSLAMCH: .br = 'E' or 'e', PSLAMCH := eps .br = 'S' or 's', PSLAMCH := sfmin .br = 'B' or 'b', PSLAMCH := base .br = 'P' or 'p', PSLAMCH := eps*base .br = 'N' or 'n', PSLAMCH := t .br = 'R' or 'r', PSLAMCH := rnd .br = 'M' or 'm', PSLAMCH := emin .br = 'U' or 'u', PSLAMCH := rmin .br = 'L' or 'l', PSLAMCH := emax .br = 'O' or 'o', PSLAMCH := rmax where .TP 6 eps = relative machine precision sfmin = safe minimum, such that 1/sfmin does not overflow base = base of the machine prec = eps*base t = number of (base) digits in the mantissa rnd = 1.0 when rounding occurs in addition, 0.0 otherwise emin = minimum exponent before (gradual) underflow rmin = underflow threshold - base**(emin-1) emax = largest exponent before overflow rmax = overflow threshold - (base**emax)*(1-eps) 
scalapack-doc-1.5/man/manl/pslange.l
.TH PSLANGE l "12 May 1997" "LAPACK version 1.5" "LAPACK auxiliary routine (version 1.5)" .SH NAME PSLANGE - return the value of the one norm, or the Frobenius norm, .SH SYNOPSIS .TP 14 REAL FUNCTION PSLANGE( NORM, M, N, A, IA, JA, DESCA, WORK ) .TP 14 .ti +4 CHARACTER NORM .TP 14 .ti +4 INTEGER IA, JA, M, N .TP 14 .ti +4 INTEGER DESCA( * ) .TP 14 .ti +4 REAL A( * ), WORK( * ) .SH PURPOSE PSLANGE returns the value of the one norm, or the Frobenius norm, or the infinity norm, or the element of largest absolute value of a distributed matrix sub( A ) = A(IA:IA+M-1, JA:JA+N-1). 
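The four quantities PSLANGE can return (max-abs, one norm, infinity norm, Frobenius norm) can be sketched in plain Python for an ordinary, non-distributed matrix stored as a list of rows. This is an illustrative analogue only, under that assumption; PSLANGE itself computes the same values for the distributed sub( A ):

```python
import math

def pslange_like(norm, a):
    """Return the value PSLANGE would, for a dense matrix 'a'
    given as a list of rows (not block-cyclically distributed)."""
    if norm in ('M', 'm'):                        # largest absolute value
        return max(abs(x) for row in a for x in row)
    if norm in ('1', 'O', 'o'):                   # one norm: max column sum
        return max(sum(abs(row[j]) for row in a) for j in range(len(a[0])))
    if norm in ('I', 'i'):                        # infinity norm: max row sum
        return max(sum(abs(x) for x in row) for row in a)
    if norm in ('F', 'f', 'E', 'e'):              # Frobenius norm
        return math.sqrt(sum(x * x for row in a for x in row))
    raise ValueError(norm)

a = [[1.0, -2.0],
     [3.0,  4.0]]
print(pslange_like('1', a))   # 6.0 (column sums are 4 and 6)
print(pslange_like('I', a))   # 7.0 (row sums are 3 and 7)
```

In the distributed routine, the WORK array described below holds the partial column or row sums that must be combined across the process grid for the one and infinity norms.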
.br PSLANGE returns the value .br ( max(abs(A(i,j))), NORM = 'M' or 'm' with IA <= i <= IA+M-1, ( and JA <= j <= JA+N-1, ( .br ( norm1( sub( A ) ), NORM = '1', 'O' or 'o' .br ( .br ( normI( sub( A ) ), NORM = 'I' or 'i' .br ( .br ( normF( sub( A ) ), NORM = 'F', 'f', 'E' or 'e' .br where norm1 denotes the one norm of a matrix (maximum column sum), normI denotes the infinity norm of a matrix (maximum row sum) and normF denotes the Frobenius norm of a matrix (square root of sum of squares). Note that max(abs(A(i,j))) is not a matrix norm. Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. 
.br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 NORM (global input) CHARACTER Specifies the value to be returned in PSLANGE as described above. .TP 8 M (global input) INTEGER The number of rows to be operated on i.e the number of rows of the distributed submatrix sub( A ). When M = 0, PSLANGE is set to zero. M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on i.e the number of columns of the distributed submatrix sub( A ). When N = 0, PSLANGE is set to zero. N >= 0. .TP 8 A (local input) REAL pointer into the local memory to an array of dimension (LLD_A, LOCc(JA+N-1)) containing the local pieces of the distributed matrix sub( A ). .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. 
.TP 8 WORK (local workspace) REAL array dimension (LWORK) LWORK >= 0 if NORM = 'M' or 'm' (not referenced), Nq0 if NORM = '1', 'O' or 'o', Mp0 if NORM = 'I' or 'i', 0 if NORM = 'F', 'f', 'E' or 'e' (not referenced), where IROFFA = MOD( IA-1, MB_A ), ICOFFA = MOD( JA-1, NB_A ), IAROW = INDXG2P( IA, MB_A, MYROW, RSRC_A, NPROW ), IACOL = INDXG2P( JA, NB_A, MYCOL, CSRC_A, NPCOL ), Mp0 = NUMROC( M+IROFFA, MB_A, MYROW, IAROW, NPROW ), Nq0 = NUMROC( N+ICOFFA, NB_A, MYCOL, IACOL, NPCOL ), INDXG2P and NUMROC are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. scalapack-doc-1.5/man/manl/pslanhs.l0100644000056400000620000001245706335610644017104 0ustar pfrauenfstaff.TH PSLANHS l "12 May 1997" "LAPACK version 1.5" "LAPACK auxiliary routine (version 1.5)" .SH NAME PSLANHS - return the value of the one norm, or the Frobenius norm, .SH SYNOPSIS .TP 14 REAL FUNCTION PSLANHS( NORM, N, A, IA, JA, DESCA, WORK ) .TP 14 .ti +4 CHARACTER NORM .TP 14 .ti +4 INTEGER IA, JA, N .TP 14 .ti +4 INTEGER DESCA( * ) .TP 14 .ti +4 REAL A( * ), WORK( * ) .SH PURPOSE PSLANHS returns the value of the one norm, or the Frobenius norm, or the infinity norm, or the element of largest absolute value of a Hessenberg distributed matrix sub( A ) = A(IA:IA+N-1,JA:JA+N-1). PSLANHS returns the value .br ( max(abs(A(i,j))), NORM = 'M' or 'm' with IA <= i <= IA+N-1, ( and JA <= j <= JA+N-1, ( .br ( norm1( sub( A ) ), NORM = '1', 'O' or 'o' .br ( .br ( normI( sub( A ) ), NORM = 'I' or 'i' .br ( .br ( normF( sub( A ) ), NORM = 'F', 'f', 'E' or 'e' .br where norm1 denotes the one norm of a matrix (maximum column sum), normI denotes the infinity norm of a matrix (maximum row sum) and normF denotes the Frobenius norm of a matrix (square root of sum of squares). Note that max(abs(A(i,j))) is not a matrix norm. Notes .br ===== .br Each global data object is described by an associated description vector. 
This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. 
.br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 NORM (global input) CHARACTER Specifies the value to be returned in PSLANHS as described above. .TP 8 N (global input) INTEGER The number of rows and columns to be operated on, i.e. the number of rows and columns of the distributed submatrix sub( A ). When N = 0, PSLANHS is set to zero. N >= 0. .TP 8 A (local input) REAL pointer into the local memory to an array of dimension (LLD_A, LOCc(JA+N-1) ) containing the local pieces of sub( A ). .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 WORK (local workspace) REAL array dimension (LWORK) LWORK >= 0 if NORM = 'M' or 'm' (not referenced), Nq0 if NORM = '1', 'O' or 'o', Np0 if NORM = 'I' or 'i', 0 if NORM = 'F', 'f', 'E' or 'e' (not referenced), where IROFFA = MOD( IA-1, MB_A ), ICOFFA = MOD( JA-1, NB_A ), IAROW = INDXG2P( IA, MB_A, MYROW, RSRC_A, NPROW ), IACOL = INDXG2P( JA, NB_A, MYCOL, CSRC_A, NPCOL ), Np0 = NUMROC( N+IROFFA, MB_A, MYROW, IAROW, NPROW ), Nq0 = NUMROC( N+ICOFFA, NB_A, MYCOL, IACOL, NPCOL ), INDXG2P and NUMROC are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. 
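The LOCr()/LOCc() counting rule that NUMROC implements, referred to throughout these pages, can be sketched in Python. This is a straightforward translation of the block-cyclic counting logic, offered as an illustration rather than the Fortran tool routine itself:

```python
def numroc(n, nb, iproc, isrcproc, nprocs):
    """Number of elements of an n-element dimension, distributed in
    blocks of size nb over nprocs processes (first block on process
    isrcproc), that land on process iproc."""
    mydist = (nprocs + iproc - isrcproc) % nprocs   # distance from source process
    nblocks = n // nb                               # number of full blocks
    num = (nblocks // nprocs) * nb                  # complete wraps of full blocks
    extrablks = nblocks % nprocs                    # leftover full blocks
    if mydist < extrablks:
        num += nb                                   # this process gets an extra full block
    elif mydist == extrablks:
        num += n % nb                               # this process gets the trailing partial block
    return num

# 13 rows in blocks of 3 over a 2-process column, first block on process 0:
print([numroc(13, 3, p, 0, 2) for p in range(2)])   # [7, 6]
```

Note that the per-process counts always sum to n, and each count respects the upper bound ceil( ceil(n/nb)/nprocs )*nb quoted above.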
scalapack-doc-1.5/man/manl/pslansy.l0100644000056400000620000001437506335610644017126 0ustar pfrauenfstaff.TH PSLANSY l "12 May 1997" "LAPACK version 1.5" "LAPACK auxiliary routine (version 1.5)" .SH NAME PSLANSY - return the value of the one norm, or the Frobenius norm, .SH SYNOPSIS .TP 14 REAL FUNCTION PSLANSY( NORM, UPLO, N, A, IA, JA, DESCA, WORK ) .TP 14 .ti +4 CHARACTER NORM, UPLO .TP 14 .ti +4 INTEGER IA, JA, N .TP 14 .ti +4 INTEGER DESCA( * ) .TP 14 .ti +4 REAL A( * ), WORK( * ) .SH PURPOSE PSLANSY returns the value of the one norm, or the Frobenius norm, or the infinity norm, or the element of largest absolute value of a real symmetric distributed matrix sub( A ) = A(IA:IA+N-1,JA:JA+N-1). PSLANSY returns the value .br ( max(abs(A(i,j))), NORM = 'M' or 'm' with IA <= i <= IA+N-1, ( and JA <= j <= JA+N-1, ( .br ( norm1( sub( A ) ), NORM = '1', 'O' or 'o' .br ( .br ( normI( sub( A ) ), NORM = 'I' or 'i' .br ( .br ( normF( sub( A ) ), NORM = 'F', 'f', 'E' or 'e' .br where norm1 denotes the one norm of a matrix (maximum column sum), normI denotes the infinity norm of a matrix (maximum row sum) and normF denotes the Frobenius norm of a matrix (square root of sum of squares). Note that max(abs(A(i,j))) is not a matrix norm. Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. 
.br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 NORM (global input) CHARACTER Specifies the value to be returned in PSLANSY as described above. .TP 8 UPLO (global input) CHARACTER Specifies whether the upper or lower triangular part of the symmetric matrix sub( A ) is to be referenced. 
= 'U': Upper triangular part of sub( A ) is referenced, .br = 'L': Lower triangular part of sub( A ) is referenced. .TP 8 N (global input) INTEGER The number of rows and columns to be operated on i.e the number of rows and columns of the distributed submatrix sub( A ). When N = 0, PSLANSY is set to zero. N >= 0. .TP 8 A (local input) REAL pointer into the local memory to an array of dimension (LLD_A, LOCc(JA+N-1)) containing the local pieces of the symmetric distributed matrix sub( A ). If UPLO = 'U', the leading N-by-N upper triangular part of sub( A ) contains the upper triangular matrix which norm is to be computed, and the strictly lower triangular part of this matrix is not referenced. If UPLO = 'L', the leading N-by-N lower triangular part of sub( A ) contains the lower triangular matrix which norm is to be computed, and the strictly upper triangular part of sub( A ) is not referenced. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 WORK (local workspace) REAL array dimension (LWORK) LWORK >= 0 if NORM = 'M' or 'm' (not referenced), 2*Nq0+Np0+LDW if NORM = '1', 'O', 'o', 'I' or 'i', where LDW is given by: IF( NPROW.NE.NPCOL ) THEN LDW = MB_A*CEIL(CEIL(Np0/MB_A)/(LCM/NPROW)) ELSE LDW = 0 END IF 0 if NORM = 'F', 'f', 'E' or 'e' (not referenced), where LCM is the least common multiple of NPROW and NPCOL LCM = ILCM( NPROW, NPCOL ) and CEIL denotes the ceiling operation (ICEIL). 
IROFFA = MOD( IA-1, MB_A ), ICOFFA = MOD( JA-1, NB_A ), IAROW = INDXG2P( IA, MB_A, MYROW, RSRC_A, NPROW ), IACOL = INDXG2P( JA, NB_A, MYCOL, CSRC_A, NPCOL ), Np0 = NUMROC( N+IROFFA, MB_A, MYROW, IAROW, NPROW ), Nq0 = NUMROC( N+ICOFFA, NB_A, MYCOL, IACOL, NPCOL ), ICEIL, ILCM, INDXG2P and NUMROC are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. scalapack-doc-1.5/man/manl/pslantr.l0100644000056400000620000001362706335610644017117 0ustar pfrauenfstaff.TH PSLANTR l "12 May 1997" "LAPACK version 1.5" "LAPACK auxiliary routine (version 1.5)" .SH NAME PSLANTR - return the value of the one norm, or the Frobenius norm, .SH SYNOPSIS .TP 14 REAL FUNCTION PSLANTR( NORM, UPLO, DIAG, M, N, A, IA, JA, DESCA, WORK ) .TP 14 .ti +4 CHARACTER DIAG, NORM, UPLO .TP 14 .ti +4 INTEGER IA, JA, M, N .TP 14 .ti +4 INTEGER DESCA( * ) .TP 14 .ti +4 REAL A( * ), WORK( * ) .SH PURPOSE PSLANTR returns the value of the one norm, or the Frobenius norm, or the infinity norm, or the element of largest absolute value of a trapezoidal or triangular distributed matrix sub( A ) denoting A(IA:IA+M-1, JA:JA+N-1). .br PSLANTR returns the value .br ( max(abs(A(i,j))), NORM = 'M' or 'm' with ia <= i <= ia+m-1, ( and ja <= j <= ja+n-1, ( .br ( norm1( sub( A ) ), NORM = '1', 'O' or 'o' .br ( .br ( normI( sub( A ) ), NORM = 'I' or 'i' .br ( .br ( normF( sub( A ) ), NORM = 'F', 'f', 'E' or 'e' .br where norm1 denotes the one norm of a matrix (maximum column sum), normI denotes the infinity norm of a matrix (maximum row sum) and normF denotes the Frobenius norm of a matrix (square root of sum of squares). Note that max(abs(A(i,j))) is not a matrix norm. Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. 
.br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). 
An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 NORM (global input) CHARACTER Specifies the value to be returned in PSLANTR as described above. .TP 8 UPLO (global input) CHARACTER Specifies whether the matrix sub( A ) is upper or lower trapezoidal. = 'U': Upper trapezoidal .br = 'L': Lower trapezoidal Note that sub( A ) is triangular instead of trapezoidal if M = N. .TP 8 DIAG (global input) CHARACTER Specifies whether or not the distributed matrix sub( A ) has unit diagonal. = 'N': Non-unit diagonal .br = 'U': Unit diagonal .TP 8 M (global input) INTEGER The number of rows to be operated on i.e the number of rows of the distributed submatrix sub( A ). When M = 0, PSLANTR is set to zero. M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on i.e the number of columns of the distributed submatrix sub( A ). When N = 0, PSLANTR is set to zero. N >= 0. .TP 8 A (local input) REAL pointer into the local memory to an array of dimension (LLD_A, LOCc(JA+N-1) ) containing the local pieces of sub( A ). .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. 
.TP 8 WORK (local workspace) REAL array dimension (LWORK) LWORK >= 0 if NORM = 'M' or 'm' (not referenced), Nq0 if NORM = '1', 'O' or 'o', Mp0 if NORM = 'I' or 'i', 0 if NORM = 'F', 'f', 'E' or 'e' (not referenced), where IROFFA = MOD( IA-1, MB_A ), ICOFFA = MOD( JA-1, NB_A ), IAROW = INDXG2P( IA, MB_A, MYROW, RSRC_A, NPROW ), IACOL = INDXG2P( JA, NB_A, MYCOL, CSRC_A, NPCOL ), Mp0 = NUMROC( M+IROFFA, MB_A, MYROW, IAROW, NPROW ), Nq0 = NUMROC( N+ICOFFA, NB_A, MYCOL, IACOL, NPCOL ), INDXG2P and NUMROC are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. 
scalapack-doc-1.5/man/manl/pslapiv.l
.TH PSLAPIV l "12 May 1997" "LAPACK version 1.5" "LAPACK auxiliary routine (version 1.5)" .SH NAME PSLAPIV - apply either P (permutation matrix indicated by IPIV) or inv( P ) to a general M-by-N distributed matrix sub( A ) = A(IA:IA+M-1,JA:JA+N-1), resulting in row or column pivoting .SH SYNOPSIS .TP 20 SUBROUTINE PSLAPIV( DIREC, ROWCOL, PIVROC, M, N, A, IA, JA, DESCA, IPIV, IP, JP, DESCIP, IWORK ) .TP 20 .ti +4 CHARACTER*1 DIREC, PIVROC, ROWCOL .TP 20 .ti +4 INTEGER IA, IP, JA, JP, M, N .TP 20 .ti +4 INTEGER DESCA( * ), DESCIP( * ), IPIV( * ), IWORK( * ) .TP 20 .ti +4 REAL A( * ) .SH PURPOSE PSLAPIV applies either P (permutation matrix indicated by IPIV) or inv( P ) to a general M-by-N distributed matrix sub( A ) = A(IA:IA+M-1,JA:JA+N-1), resulting in row or column pivoting. The pivot vector may be distributed across a process row or a column. The pivot vector should be aligned with the distributed matrix A. This routine will transpose the pivot vector if necessary. For example if the row pivots should be applied to the columns of sub( A ), pass ROWCOL='C' and PIVROC='C'. .br Notes .br ===== .br Each global data object is described by an associated description vector. 
This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. 
.br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br Restrictions .br ============ .br IPIV must always be a distributed vector (not a matrix). Thus: IF( ROWCOL .EQ. 'C' ) THEN .br JP must be 1 .br ELSE .br IP must be 1 .br END IF .br The following restrictions apply when IPIV must be transposed: IF( ROWCOL.EQ.'C' .AND. PIVROC.EQ.'C') THEN .br DESCIP(MB_) must equal DESCA(NB_) .br ELSE IF( ROWCOL.EQ.'R' .AND. PIVROC.EQ.'R') THEN .br DESCIP(NB_) must equal DESCA(MB_) .br END IF .br .SH ARGUMENTS .TP 8 DIREC (global input) CHARACTER*1 Specifies in which order the permutation is applied: = 'F' (Forward) Applies pivots Forward from top of matrix. Computes P*sub( A ). = 'B' (Backward) Applies pivots Backward from bottom of matrix. Computes inv( P )*sub( A ). .TP 8 ROWCOL (global input) CHARACTER*1 Specifies if the rows or columns are to be permuted: = 'R' Rows will be permuted, = 'C' Columns will be permuted. .TP 8 PIVROC (global input) CHARACTER*1 Specifies whether IPIV is distributed over a process row or column: = 'R' IPIV distributed over a process row = 'C' IPIV distributed over a process column .TP 8 M (global input) INTEGER The number of rows to be operated on, i.e. the number of rows of the distributed submatrix sub( A ). M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on, i.e. the number of columns of the distributed submatrix sub( A ). N >= 0. .TP 8 A (local input/local output) REAL pointer into the local memory to an array of dimension (LLD_A, LOCc(JA+N-1)). On entry, this array contains the local pieces of the distributed submatrix sub( A ) to which the row or column interchanges will be applied. 
On exit, the local pieces of the permuted distributed submatrix. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 IPIV (local input) INTEGER array, dimension >= LOCr(M_A)+MB_A if ROWCOL='R', otherwise LOCc(N_A)+NB_A. It contains the pivoting information. IPIV(i) is the global row (column), local row (column) i was swapped with. The last piece of the array of size MB_A (resp. NB_A) is used as workspace. This array is tied to the distributed matrix A. .TP 8 IWORK (local workspace) INTEGER array, dimension (LDW) where LDW is equal to the workspace necessary for transposition, and the storage of the transposed IPIV: Let LCM be the least common multiple of NPROW and NPCOL. IF( ROWCOL.EQ.'R' .AND. PIVROC.EQ.'R' ) THEN IF( NPROW.EQ.NPCOL ) THEN LDW = LOCr( N_P + MOD(JP-1, NB_P) ) + NB_P ELSE LDW = LOCr( N_P + MOD(JP-1, NB_P) ) + NB_P * CEIL( CEIL(LOCc(N_P)/NB_P) / (LCM/NPCOL) ) END IF ELSE IF( ROWCOL.EQ.'C' .AND. PIVROC.EQ.'C' ) THEN IF( NPROW.EQ.NPCOL ) THEN LDW = LOCc( M_P + MOD(IP-1, MB_P) ) + MB_P ELSE LDW = LOCc( M_P + MOD(IP-1, MB_P) ) + MB_P * CEIL( CEIL(LOCr(M_P)/MB_P) / (LCM/NPROW) ) END IF ELSE IWORK is not referenced. 
END IF 
scalapack-doc-1.5/man/manl/pslapv2.l
.TH PSLAPV2 l "12 May 1997" "LAPACK version 1.5" "LAPACK auxiliary routine (version 1.5)" .SH NAME PSLAPV2 - apply either P (permutation matrix indicated by IPIV) or inv( P ) to an M-by-N distributed matrix sub( A ) denoting A(IA:IA+M-1,JA:JA+N-1), resulting in row or column pivoting .SH SYNOPSIS .TP 20 SUBROUTINE PSLAPV2( DIREC, ROWCOL, M, N, A, IA, JA, DESCA, IPIV, IP, JP, DESCIP ) .TP 20 .ti +4 CHARACTER DIREC, ROWCOL .TP 20 .ti +4 INTEGER IA, IP, JA, JP, M, N .TP 20 .ti +4 INTEGER DESCA( * ), DESCIP( * ), IPIV( * ) .TP 20 .ti +4 REAL A( * ) .SH PURPOSE PSLAPV2 applies either P (permutation matrix indicated by IPIV) or inv( P ) to an M-by-N distributed matrix sub( A ) denoting A(IA:IA+M-1,JA:JA+N-1), resulting in row or column pivoting. The pivot vector should be aligned with the distributed matrix A. For pivoting the rows of sub( A ), IPIV should be distributed along a process column and replicated over all process rows. Similarly, IPIV should be distributed along a process row and replicated over all process columns for column pivoting. .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distributed over. 
The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 DIREC (global input) CHARACTER Specifies in which order the permutation is applied: = 'F' (Forward) Applies pivots Forward from top of matrix. Computes P * sub( A ); = 'B' (Backward) Applies pivots Backward from bottom of matrix. Computes inv( P ) * sub( A ). .TP 8 ROWCOL (global input) CHARACTER Specifies if the rows or columns are to be permuted: = 'R' Rows will be permuted, = 'C' Columns will be permuted. 
.TP 8 M (global input) INTEGER The number of rows to be operated on, i.e. the number of rows of the distributed submatrix sub( A ). M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on, i.e. the number of columns of the distributed submatrix sub( A ). N >= 0. .TP 8 A (local input/local output) REAL pointer into the local memory to an array of dimension (LLD_A, LOCc(JA+N-1)). On entry, this local array contains the local pieces of the distributed matrix sub( A ) to which the row or columns interchanges will be applied. On exit, this array contains the local pieces of the permuted distributed matrix. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 IPIV (input) INTEGER array, dimension >= LOCr(M_A)+MB_A if ROWCOL = 'R', LOCc(N_A)+NB_A otherwise. It contains the pivoting information. IPIV(i) is the global row (column), local row (column) i was swapped with. The last piece of the array of size MB_A (resp. NB_A) is used as workspace. IPIV is tied to the distributed matrix A. .TP 8 IP (global input) INTEGER IPIV's global row index, which points to the beginning of the submatrix which is to be operated on. .TP 8 JP (global input) INTEGER IPIV's global column index, which points to the beginning of the submatrix which is to be operated on. .TP 8 DESCIP (global and local input) INTEGER array of dimension 8 The array descriptor for the distributed matrix IPIV. 
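The DIREC convention above ('F' computes P * sub( A ), 'B' computes inv( P ) * sub( A )) is easiest to see in a serial sketch. The helper below is hypothetical: it applies the same sequence of row swaps to an ordinary in-memory matrix, ignoring the block-cyclic distribution, the workspace tail of IPIV, and all BLACS communication that PSLAPV2 actually performs.

```python
def apply_pivots(a, ipiv, direc="F"):
    """Serial analogue of row pivoting: apply P (direc='F') or inv(P)
    (direc='B') to the rows of a, where ipiv[i] is the 0-based row
    that row i was swapped with.  Swaps are applied in place."""
    order = range(len(ipiv)) if direc == "F" else reversed(range(len(ipiv)))
    for i in order:
        j = ipiv[i]
        if j != i:
            a[i], a[j] = a[j], a[i]
    return a
```

Applying the pivots forward and then backward with the same IPIV restores the original matrix, which is exactly the sense in which 'B' applies inv( P ).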
scalapack-doc-1.5/man/manl/pslaqge.l0100644000056400000620000001363606335610644017070 0ustar pfrauenfstaff.TH PSLAQGE l "12 May 1997" "LAPACK version 1.5" "LAPACK auxiliary routine (version 1.5)" .SH NAME PSLAQGE - equilibrate a general M-by-N distributed matrix sub( A ) = A(IA:IA+M-1,JA:JA+N-1) using the row and scaling factors in the vectors R and C .SH SYNOPSIS .TP 20 SUBROUTINE PSLAQGE( M, N, A, IA, JA, DESCA, R, C, ROWCND, COLCND, AMAX, EQUED ) .TP 20 .ti +4 CHARACTER EQUED .TP 20 .ti +4 INTEGER IA, JA, M, N .TP 20 .ti +4 REAL AMAX, COLCND, ROWCND .TP 20 .ti +4 INTEGER DESCA( * ) .TP 20 .ti +4 REAL A( * ), C( * ), R( * ) .SH PURPOSE PSLAQGE equilibrates a general M-by-N distributed matrix sub( A ) = A(IA:IA+M-1,JA:JA+N-1) using the row and scaling factors in the vectors R and C. Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. 
.br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 M (global input) INTEGER The number of rows to be operated on i.e the number of rows of the distributed submatrix sub( A ). M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on i.e the number of columns of the distributed submatrix sub( A ). N >= 0. .TP 8 A (local input/local output) REAL pointer into the local memory to an array of dimension (LLD_A,LOCc(JA+N-1)) containing on entry the M-by-N matrix sub( A ). On exit, the equilibrated distributed matrix. See EQUED for the form of the equilibrated distributed submatrix. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. 
The array descriptor for the distributed matrix A. .TP 8 R (local input) REAL array, dimension LOCr(M_A) The row scale factors for sub( A ). R is aligned with the distributed matrix A, and replicated across every process column. R is tied to the distributed matrix A. .TP 8 C (local input) REAL array, dimension LOCc(N_A) The column scale factors of sub( A ). C is aligned with the distributed matrix A, and replicated down every process row. C is tied to the distributed matrix A. .TP 8 ROWCND (global input) REAL The global ratio of the smallest R(i) to the largest R(i), IA <= i <= IA+M-1. .TP 8 COLCND (global input) REAL The global ratio of the smallest C(i) to the largest C(i), JA <= j <= JA+N-1. .TP 8 AMAX (global input) REAL Absolute value of largest distributed submatrix entry. .TP 8 EQUED (global output) CHARACTER Specifies the form of equilibration that was done. = 'N': No equilibration .br = 'R': Row equilibration, i.e., sub( A ) has been pre- .br multiplied by diag(R(IA:IA+M-1)), .br = 'C': Column equilibration, i.e., sub( A ) has been post- .br multiplied by diag(C(JA:JA+N-1)), .br = 'B': Both row and column equilibration, i.e., sub( A ) has been replaced by diag(R(IA:IA+M-1)) * sub( A ) * diag(C(JA:JA+N-1)). .SH PARAMETERS THRESH is a threshold value used to decide if row or column scaling should be done based on the ratio of the row or column scaling factors. If ROWCND < THRESH, row scaling is done, and if COLCND < THRESH, column scaling is done. LARGE and SMALL are threshold values used to decide if row scaling should be done based on the absolute size of the largest matrix element. If AMAX > LARGE or AMAX < SMALL, row scaling is done. 
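The EQUED logic described above can be sketched serially. The helper below is an illustration only: it scales an in-memory matrix by R and C and reports EQUED, using a made-up THRESH value and omitting the AMAX / LARGE / SMALL test that the PARAMETERS paragraph describes.

```python
THRESH = 0.1  # illustrative threshold; the library uses its own internal value

def equilibrate(a, r, c, rowcnd, colcnd):
    """Serial sketch of PSLAQGE-style equilibration: scale rows by R and/or
    columns by C when the corresponding condition ratio falls below THRESH,
    and report what was done via the returned EQUED character."""
    do_row = rowcnd < THRESH
    do_col = colcnd < THRESH
    for i, row in enumerate(a):
        for j, _ in enumerate(row):
            if do_row:
                a[i][j] *= r[i]
            if do_col:
                a[i][j] *= c[j]
    return {(False, False): "N", (True, False): "R",
            (False, True): "C", (True, True): "B"}[(do_row, do_col)]
```

With both ratios below THRESH the result is diag(R) * A * diag(C) and EQUED = 'B', matching the description of the 'B' case above.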
scalapack-doc-1.5/man/manl/pslaqsy.l0100644000056400000620000001400506335610644017117 0ustar pfrauenfstaff.TH PSLAQSY l "12 May 1997" "LAPACK version 1.5" "LAPACK auxiliary routine (version 1.5)" .SH NAME PSLAQSY - equilibrate a symmetric distributed matrix sub( A ) = A(IA:IA+N-1,JA:JA+N-1) using the scaling factors in the vectors SR and SC .SH SYNOPSIS .TP 20 SUBROUTINE PSLAQSY( UPLO, N, A, IA, JA, DESCA, SR, SC, SCOND, AMAX, EQUED ) .TP 20 .ti +4 CHARACTER EQUED, UPLO .TP 20 .ti +4 INTEGER IA, JA, N .TP 20 .ti +4 REAL AMAX, SCOND .TP 20 .ti +4 INTEGER DESCA( * ) .TP 20 .ti +4 REAL A( * ), SC( * ), SR( * ) .SH PURPOSE PSLAQSY equilibrates a symmetric distributed matrix sub( A ) = A(IA:IA+N-1,JA:JA+N-1) using the scaling factors in the vectors SR and SC. Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. 
.br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 UPLO (global input) CHARACTER Specifies whether the upper or lower triangular part of the symmetric distributed matrix sub( A ) is to be referenced: .br = 'U': Upper triangular .br = 'L': Lower triangular .TP 8 N (global input) INTEGER The number of rows and columns to be operated on, i.e. the order of the distributed submatrix sub( A ). N >= 0. .TP 8 A (input/output) REAL pointer into the local memory to an array of local dimension (LLD_A,LOCc(JA+N-1)). On entry, the local pieces of the distributed symmetric matrix sub( A ). If UPLO = 'U', the leading N-by-N upper triangular part of sub( A ) contains the upper triangular part of the matrix, and the strictly lower triangular part of sub( A ) is not referenced. 
If UPLO = 'L', the leading N-by-N lower triangular part of sub( A ) contains the lower triangular part of the matrix, and the strictly upper trian- gular part of sub( A ) is not referenced. On exit, if EQUED = 'Y', the equilibrated matrix: .br diag(SR(IA:IA+N-1)) * sub( A ) * diag(SC(JA:JA+N-1)). .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 SR (local input) REAL array, dimension LOCr(M_A) The scale factors for A(IA:IA+M-1,JA:JA+N-1). SR is aligned with the distributed matrix A, and replicated across every process column. SR is tied to the distributed matrix A. .TP 8 SC (local input) REAL array, dimension LOCc(N_A) The scale factors for sub( A ). SC is aligned with the dis- tributed matrix A, and replicated down every process row. SC is tied to the distributed matrix A. .TP 8 SCOND (global input) REAL Ratio of the smallest SR(i) (respectively SC(j)) to the largest SR(i) (respectively SC(j)), with IA <= i <= IA+N-1 and JA <= j <= JA+N-1. .TP 8 AMAX (global input) REAL Absolute value of the largest distributed submatrix entry. .TP 8 EQUED (output) CHARACTER*1 Specifies whether or not equilibration was done. = 'N': No equilibration. .br = 'Y': Equilibration was done, i.e., sub( A ) has been re- .br placed by: .br diag(SR(IA:IA+N-1)) * sub( A ) * diag(SC(JA:JA+N-1)). .SH PARAMETERS THRESH is a threshold value used to decide if scaling should be done based on the ratio of the scaling factors. If SCOND < THRESH, scaling is done. LARGE and SMALL are threshold values used to decide if scaling should be done based on the absolute size of the largest matrix element. If AMAX > LARGE or AMAX < SMALL, scaling is done. 
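The LOCr() and LOCc() counts quoted throughout these Notes sections come from the ScaLAPACK tool function NUMROC. A Python transcription of the standard block-cyclic counting (a sketch for illustration, not a substitute for the Fortran tool) makes the bookkeeping concrete:

```python
def numroc(n, nb, iproc, isrcproc, nprocs):
    """Number of rows/columns of an n-element dimension, distributed in
    blocks of size nb over nprocs processes starting at process isrcproc,
    that land on process iproc (the ScaLAPACK NUMROC computation)."""
    mydist = (nprocs + iproc - isrcproc) % nprocs  # distance from source proc
    nblocks = n // nb                              # number of full blocks
    num = (nblocks // nprocs) * nb                 # whole rounds of blocks
    extrablks = nblocks % nprocs                   # leftover full blocks
    if mydist < extrablks:
        num += nb                                  # one extra full block
    elif mydist == extrablks:
        num += n % nb                              # the final partial block
    return num
```

Summing numroc over all processes recovers n, and each local count respects the upper bound LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A quoted above.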
scalapack-doc-1.5/man/manl/pslared1d.l .TH PSLARED1D l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PSLARED1D - redistribute a 1D array. It assumes that the input array, BYCOL, is distributed across process rows and that all process columns contain the same copy of BYCOL .SH SYNOPSIS .TP 22 SUBROUTINE PSLARED1D( N, IA, JA, DESC, BYCOL, BYALL, WORK, LWORK ) .TP 22 .ti +4 INTEGER IA, JA, LWORK, N .TP 22 .ti +4 INTEGER DESC( * ) .TP 22 .ti +4 REAL BYALL( * ), BYCOL( * ), WORK( LWORK ) .SH PURPOSE PSLARED1D redistributes a 1D array so that, on exit, the output array BYALL contains the entire array replicated on every process. .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distributed over. The context itself is global, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. 
CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS NP = Number of local rows in BYCOL() .TP 8 N (global input) INTEGER The number of elements to be redistributed. N >= 0. 
.TP 8 IA (global input) INTEGER IA must be equal to 1 .TP 8 JA (global input) INTEGER JA must be equal to 1 .TP 8 DESC (global/local input) INTEGER Array of dimension 8 A 2D array descriptor, which describes BYCOL .TP 8 BYCOL (local input) distributed block cyclic REAL array global dimension (N), local dimension NP BYCOL is distributed across the process rows. All process columns are assumed to contain the same value .TP 8 BYALL (global output) REAL global dimension( N ) local dimension (N) BYALL is exactly duplicated on all processes. It contains the same values as BYCOL, but it is replicated across all processes rather than being distributed: BYALL(i) = BYCOL( NUMROC(i,NB,MYROW,0,NPROW) ) on the procs whose MYROW == mod((i-1)/NB,NPROW) .TP 8 WORK (local workspace) REAL dimension (LWORK) Used to hold the buffers sent from one process to another .TP 8 LWORK (local input) INTEGER size of WORK array LWORK >= NUMROC(N, DESC( NB_ ), 0, 0, NPCOL) scalapack-doc-1.5/man/manl/pslared2d.l .TH PSLARED2D l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PSLARED2D - redistribute a 1D array. It assumes that the input array, BYROW, is distributed across process columns and that all process rows contain the same copy of BYROW .SH SYNOPSIS .TP 22 SUBROUTINE PSLARED2D( N, IA, JA, DESC, BYROW, BYALL, WORK, LWORK ) .TP 22 .ti +4 INTEGER IA, JA, LWORK, N .TP 22 .ti +4 INTEGER DESC( * ) .TP 22 .ti +4 REAL BYALL( * ), BYROW( * ), WORK( LWORK ) .SH PURPOSE PSLARED2D redistributes a 1D array so that, on exit, the output array BYALL contains the entire array replicated on every process. .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. 
In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). 
An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS NP = Number of local rows in BYROW() .TP 8 N (global input) INTEGER The number of elements to be redistributed. N >= 0. .TP 8 IA (global input) INTEGER IA must be equal to 1 .TP 8 JA (global input) INTEGER JA must be equal to 1 .TP 8 DESC (global/local input) INTEGER Array of dimension DLEN_ A 2D array descriptor, which describes BYROW .TP 8 BYROW (local input) distributed block cyclic REAL array global dimension (N), local dimension NP BYROW is distributed across the process columns. All process rows are assumed to contain the same value .TP 8 BYALL (global output) REAL global dimension( N ) local dimension (N) BYALL is exactly duplicated on all processes. It contains the same values as BYROW, but it is replicated across all processes rather than being distributed: BYALL(i) = BYROW( NUMROC(i,NB,MYROW,0,NPROW) ) on the procs whose MYROW == mod((i-1)/NB,NPROW) .TP 8 WORK (local workspace) REAL dimension (LWORK) Used to hold the buffers sent from one process to another .TP 8 LWORK (local input) INTEGER size of WORK array LWORK >= NUMROC(N, DESC( NB_ ), 0, 0, NPCOL) scalapack-doc-1.5/man/manl/pslarf.l .TH PSLARF l "12 May 1997" "LAPACK version 1.5" "LAPACK auxiliary routine (version 1.5)" .SH NAME PSLARF - applies a real elementary reflector Q (or Q**T) to a real M-by-N distributed matrix sub( C ) = C(IC:IC+M-1,JC:JC+N-1), from either the left or the right .SH SYNOPSIS .TP 19 SUBROUTINE PSLARF( SIDE, M, N, V, IV, JV, DESCV, INCV, TAU, C, IC, JC, DESCC, WORK ) .TP 19 .ti +4 CHARACTER SIDE .TP 19 .ti +4 INTEGER IC, INCV, IV, JC, JV, M, N .TP 19 .ti +4 INTEGER DESCC( * ), DESCV( * ) .TP 19 .ti +4 REAL C( * ), TAU( * ), V( * ), WORK( * ) .SH PURPOSE PSLARF applies a real elementary reflector Q (or Q**T) to a real M-by-N distributed 
matrix sub( C ) = C(IC:IC+M-1,JC:JC+N-1), from either the left or the right. Q is represented in the form Q = I - tau * v * v' .br where tau is a real scalar and v is a real vector. .br If tau = 0, then Q is taken to be the unit matrix. .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. 
.br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br Because vectors may be viewed as a subclass of matrices, a distributed vector is considered to be a distributed matrix. Restrictions .br ============ .br If SIDE = 'Left' and INCV = 1, then the row process having the first entry V(IV,JV) must also have the first row of sub( C ). Moreover, MOD(IV-1,MB_V) must be equal to MOD(IC-1,MB_C), if INCV=M_V, only the last equality must be satisfied. .br If SIDE = 'Right' and INCV = M_V then the column process having the first entry V(IV,JV) must also have the first column of sub( C ) and MOD(JV-1,NB_V) must be equal to MOD(JC-1,NB_C), if INCV = 1 only the last equality must be satisfied. .br .SH ARGUMENTS .TP 8 SIDE (global input) CHARACTER = 'L': form Q * sub( C ), .br = 'R': form sub( C ) * Q, Q = Q**T. .TP 8 M (global input) INTEGER The number of rows to be operated on i.e the number of rows of the distributed submatrix sub( C ). M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on i.e the number of columns of the distributed submatrix sub( C ). N >= 0. 
.TP 8 V (local input) REAL pointer into the local memory to an array of dimension (LLD_V,*) containing the local pieces of the distributed vectors V representing the Householder transformation Q, V(IV:IV+M-1,JV) if SIDE = 'L' and INCV = 1, .br V(IV,JV:JV+M-1) if SIDE = 'L' and INCV = M_V, .br V(IV:IV+N-1,JV) if SIDE = 'R' and INCV = 1, .br V(IV,JV:JV+N-1) if SIDE = 'R' and INCV = M_V, The vector v in the representation of Q. V is not used if TAU = 0. .TP 8 IV (global input) INTEGER The row index in the global array V indicating the first row of sub( V ). .TP 8 JV (global input) INTEGER The column index in the global array V indicating the first column of sub( V ). .TP 8 DESCV (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix V. .TP 8 INCV (global input) INTEGER The global increment for the elements of V. Only two values of INCV are supported in this version, namely 1 and M_V. INCV must not be zero. .TP 8 TAU (local input) REAL, array, dimension LOCc(JV) if INCV = 1, and LOCr(IV) otherwise. This array contains the Householder scalars related to the Householder vectors. TAU is tied to the distributed matrix V. .TP 8 C (local input/local output) REAL pointer into the local memory to an array of dimension (LLD_C, LOCc(JC+N-1) ), containing the local pieces of sub( C ). On exit, sub( C ) is overwritten by the Q * sub( C ) if SIDE = 'L', or sub( C ) * Q if SIDE = 'R'. .TP 8 IC (global input) INTEGER The row index in the global array C indicating the first row of sub( C ). .TP 8 JC (global input) INTEGER The column index in the global array C indicating the first column of sub( C ). .TP 8 DESCC (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix C. 
.TP 8 WORK (local workspace) REAL array, dimension (LWORK) If INCV = 1, if SIDE = 'L', if IVCOL = ICCOL, LWORK >= NqC0 else LWORK >= MpC0 + MAX( 1, NqC0 ) end if else if SIDE = 'R', LWORK >= NqC0 + MAX( MAX( 1, MpC0 ), NUMROC( NUMROC( N+ICOFFC,NB_V,0,0,NPCOL ),NB_V,0,0,LCMQ ) ) end if else if INCV = M_V, if SIDE = 'L', LWORK >= MpC0 + MAX( MAX( 1, NqC0 ), NUMROC( NUMROC( M+IROFFC,MB_V,0,0,NPROW ),MB_V,0,0,LCMP ) ) else if SIDE = 'R', if IVROW = ICROW, LWORK >= MpC0 else LWORK >= NqC0 + MAX( 1, MpC0 ) end if end if end if where LCM is the least common multiple of NPROW and NPCOL and LCM = ILCM( NPROW, NPCOL ), LCMP = LCM / NPROW, LCMQ = LCM / NPCOL, IROFFC = MOD( IC-1, MB_C ), ICOFFC = MOD( JC-1, NB_C ), ICROW = INDXG2P( IC, MB_C, MYROW, RSRC_C, NPROW ), ICCOL = INDXG2P( JC, NB_C, MYCOL, CSRC_C, NPCOL ), MpC0 = NUMROC( M+IROFFC, MB_C, MYROW, ICROW, NPROW ), NqC0 = NUMROC( N+ICOFFC, NB_C, MYCOL, ICCOL, NPCOL ), ILCM, INDXG2P and NUMROC are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. Alignment requirements ====================== The distributed submatrices V(IV:*, JV:*) and C(IC:IC+M-1,JC:JC+N-1) must verify some alignment properties, namely the following expressions should be true: MB_V = NB_V, If INCV = 1, If SIDE = 'Left', ( MB_V.EQ.MB_C .AND. IROFFV.EQ.IROFFC .AND. IVROW.EQ.ICROW ) If SIDE = 'Right', ( MB_V.EQ.NB_A .AND. MB_V.EQ.NB_C .AND. IROFFV.EQ.ICOFFC ) else if INCV = M_V, If SIDE = 'Left', ( MB_V.EQ.NB_V .AND. MB_V.EQ.MB_C .AND. ICOFFV.EQ.IROFFC ) If SIDE = 'Right', ( NB_V.EQ.NB_C .AND. ICOFFV.EQ.ICOFFC .AND. 
IVCOL.EQ.ICCOL ) end if scalapack-doc-1.5/man/manl/pslarfb.l .TH PSLARFB l "12 May 1997" "LAPACK version 1.5" "LAPACK auxiliary routine (version 1.5)" .SH NAME PSLARFB - applies a real block reflector Q or its transpose Q**T to a real distributed M-by-N matrix sub( C ) = C(IC:IC+M-1,JC:JC+N-1) .SH SYNOPSIS .TP 20 SUBROUTINE PSLARFB( SIDE, TRANS, DIRECT, STOREV, M, N, K, V, IV, JV, DESCV, T, C, IC, JC, DESCC, WORK ) .TP 20 .ti +4 CHARACTER SIDE, TRANS, DIRECT, STOREV .TP 20 .ti +4 INTEGER IC, IV, JC, JV, K, M, N .TP 20 .ti +4 INTEGER DESCC( * ), DESCV( * ) .TP 20 .ti +4 REAL C( * ), T( * ), V( * ), WORK( * ) .SH PURPOSE PSLARFB applies a real block reflector Q or its transpose Q**T to a real distributed M-by-N matrix sub( C ) = C(IC:IC+M-1,JC:JC+N-1) from the left or the right. .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distributed over. The context itself is global, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. 
.br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 SIDE (global input) CHARACTER = 'L': apply Q or Q**T from the Left; .br = 'R': apply Q or Q**T from the Right. .TP 8 TRANS (global input) CHARACTER .br = 'N': No transpose, apply Q; .br = 'T': Transpose, apply Q**T. .TP 8 DIRECT (global input) CHARACTER Indicates how Q is formed from a product of elementary reflectors = 'F': Q = H(1) H(2) . . . H(k) (Forward) .br = 'B': Q = H(k) . . . H(2) H(1) (Backward) .TP 8 STOREV (global input) CHARACTER Indicates how the vectors which define the elementary reflectors are stored: .br = 'C': Columnwise .br = 'R': Rowwise .TP 8 M (global input) INTEGER The number of rows to be operated on i.e the number of rows of the distributed submatrix sub( C ). M >= 0. 
.TP 8 N (global input) INTEGER The number of columns to be operated on i.e the number of columns of the distributed submatrix sub( C ). N >= 0. .TP 8 K (global input) INTEGER The order of the matrix T (= the number of elementary reflectors whose product defines the block reflector). .TP 8 V (local input) REAL pointer into the local memory to an array of dimension ( LLD_V, LOCc(JV+K-1) ) if STOREV = 'C', ( LLD_V, LOCc(JV+M-1)) if STOREV = 'R' and SIDE = 'L', ( LLD_V, LOCc(JV+N-1) ) if STOREV = 'R' and SIDE = 'R'. It contains the local pieces of the distributed vectors V representing the Householder transformation. See further details. If STOREV = 'C' and SIDE = 'L', LLD_V >= MAX(1,LOCr(IV+M-1)); if STOREV = 'C' and SIDE = 'R', LLD_V >= MAX(1,LOCr(IV+N-1)); if STOREV = 'R', LLD_V >= LOCr(IV+K-1). .TP 8 IV (global input) INTEGER The row index in the global array V indicating the first row of sub( V ). .TP 8 JV (global input) INTEGER The column index in the global array V indicating the first column of sub( V ). .TP 8 DESCV (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix V. .TP 8 T (local input) REAL array, dimension MB_V by MB_V if STOREV = 'R' and NB_V by NB_V if STOREV = 'C'. The trian- gular matrix T in the representation of the block reflector. .TP 8 C (local input/local output) REAL pointer into the local memory to an array of dimension (LLD_C,LOCc(JC+N-1)). On entry, the M-by-N distributed matrix sub( C ). On exit, sub( C ) is overwritten by Q*sub( C ) or Q'*sub( C ) or sub( C )*Q or sub( C )*Q'. .TP 8 IC (global input) INTEGER The row index in the global array C indicating the first row of sub( C ). .TP 8 JC (global input) INTEGER The column index in the global array C indicating the first column of sub( C ). .TP 8 DESCC (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix C. 
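The workspace bounds for this routine are expressed through local counts such as MpC0 and NqC0, all produced by the ScaLAPACK tool function NUMROC described in the Notes above. As a sketch of what NUMROC computes, here is a small Python transcription of its block-cyclic counting (illustrative only; the Fortran NUMROC in ScaLAPACK's TOOLS directory is the reference):

```python
def numroc(n, nb, iproc, isrcproc, nprocs):
    """Count the rows/columns of an n-element global dimension, split into
    nb-sized blocks dealt cyclically over nprocs processes, that land on
    process coordinate iproc (the first block lives on isrcproc)."""
    mydist = (nprocs + iproc - isrcproc) % nprocs  # distance from the source process
    nblocks = n // nb                              # number of complete blocks
    count = (nblocks // nprocs) * nb               # full "rounds" every process gets
    extrablocks = nblocks % nprocs                 # leftover complete blocks
    if mydist < extrablocks:
        count += nb                                # one extra complete block
    elif mydist == extrablocks:
        count += n % nb                            # the trailing partial block
    return count

# 13 columns in blocks of 3 over a 4-process row: the per-process counts
# sum back to 13, and each respects ceil(ceil(13/3)/4)*3 = 6.
counts = [numroc(13, 3, p, 0, 4) for p in range(4)]
```

The counts over a process row or column always sum back to the global dimension, and each one respects the upper bound LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A quoted in the Notes.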
.TP 8 WORK (local workspace) REAL array, dimension (LWORK) If STOREV = 'C', if SIDE = 'L', LWORK >= ( NqC0 + MpC0 ) * K else if SIDE = 'R', LWORK >= ( NqC0 + MAX( NpV0 + NUMROC( NUMROC( N+ICOFFC, NB_V, 0, 0, NPCOL ), NB_V, 0, 0, LCMQ ), MpC0 ) ) * K end if else if STOREV = 'R', if SIDE = 'L', LWORK >= ( MpC0 + MAX( MqV0 + NUMROC( NUMROC( M+IROFFC, MB_V, 0, 0, NPROW ), MB_V, 0, 0, LCMP ), NqC0 ) ) * K else if SIDE = 'R', LWORK >= ( MpC0 + NqC0 ) * K end if end if where LCMP = LCM / NPROW and LCMQ = LCM / NPCOL with LCM = ILCM( NPROW, NPCOL ), IROFFV = MOD( IV-1, MB_V ), ICOFFV = MOD( JV-1, NB_V ), IVROW = INDXG2P( IV, MB_V, MYROW, RSRC_V, NPROW ), IVCOL = INDXG2P( JV, NB_V, MYCOL, CSRC_V, NPCOL ), MqV0 = NUMROC( M+ICOFFV, NB_V, MYCOL, IVCOL, NPCOL ), NpV0 = NUMROC( N+IROFFV, MB_V, MYROW, IVROW, NPROW ), IROFFC = MOD( IC-1, MB_C ), ICOFFC = MOD( JC-1, NB_C ), ICROW = INDXG2P( IC, MB_C, MYROW, RSRC_C, NPROW ), ICCOL = INDXG2P( JC, NB_C, MYCOL, CSRC_C, NPCOL ), MpC0 = NUMROC( M+IROFFC, MB_C, MYROW, ICROW, NPROW ), NpC0 = NUMROC( N+ICOFFC, MB_C, MYROW, ICROW, NPROW ), NqC0 = NUMROC( N+ICOFFC, NB_C, MYCOL, ICCOL, NPCOL ), ILCM, INDXG2P and NUMROC are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. Alignment requirements ====================== The distributed submatrices V(IV:*, JV:*) and C(IC:IC+M-1,JC:JC+N-1) must satisfy some alignment properties, namely the following expressions should be true: If STOREV = 'Columnwise' If SIDE = 'Left', ( MB_V.EQ.MB_C .AND. IROFFV.EQ.IROFFC .AND. IVROW.EQ.ICROW ) If SIDE = 'Right', ( MB_V.EQ.NB_C .AND. IROFFV.EQ.ICOFFC ) else if STOREV = 'Rowwise' If SIDE = 'Left', ( NB_V.EQ.MB_C .AND. ICOFFV.EQ.IROFFC ) If SIDE = 'Right', ( NB_V.EQ.NB_C .AND. ICOFFV.EQ.ICOFFC .AND.
IVCOL.EQ.ICCOL ) end if
scalapack-doc-1.5/man/manl/pslarfg.l
.TH PSLARFG l "12 May 1997" "LAPACK version 1.5" "LAPACK auxiliary routine (version 1.5)" .SH NAME PSLARFG - generate a real elementary reflector H of order n, such that H * sub( X ) = H * ( x(iax,jax) ) = ( alpha ), H' * H = I .SH SYNOPSIS .TP 20 SUBROUTINE PSLARFG( N, ALPHA, IAX, JAX, X, IX, JX, DESCX, INCX, TAU ) .TP 20 .ti +4 INTEGER IAX, INCX, IX, JAX, JX, N .TP 20 .ti +4 REAL ALPHA .TP 20 .ti +4 INTEGER DESCX( * ) .TP 20 .ti +4 REAL TAU( * ), X( * ) .SH PURPOSE PSLARFG generates a real elementary reflector H of order n, such that .br H * ( x(iax,jax) ) = ( alpha ), H' * H = I, .br ( x ) ( 0 ) .br where alpha is a scalar, and sub( X ) is an (N-1)-element real distributed vector X(IX:IX+N-2,JX) if INCX = 1 and X(IX,JX:JX+N-2) if INCX = DESCX(M_). H is represented in the form .br H = I - tau * ( 1 ) * ( 1 v' ) , .br ( v ) .br where tau is a real scalar and v is a real (N-1)-element .br vector. .br If the elements of sub( X ) are all zero, then tau = 0 and H is taken to be the unit matrix. .br Otherwise 1 <= tau <= 2. .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclically distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ ) The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distributed over. The context itself is global, but the handle (the integer value) may vary.
.br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br Because vectors may be viewed as a subclass of matrices, a distributed vector is considered to be a distributed matrix. .SH ARGUMENTS .TP 8 N (global input) INTEGER The global order of the elementary reflector. N >= 0. .TP 8 ALPHA (local output) REAL On exit, alpha is computed in the process scope having the vector sub( X ). .TP 8 IAX (global input) INTEGER The global row index in X of X(IAX,JAX). .TP 8 JAX (global input) INTEGER The global column index in X of X(IAX,JAX). 
.TP 8 X (local input/local output) REAL, pointer into the local memory to an array of dimension (LLD_X,*). This array contains the local pieces of the distributed vector sub( X ). Before entry, the incremented array sub( X ) must contain the vector x. On exit, it is overwritten with the vector v. .TP 8 IX (global input) INTEGER The row index in the global array X indicating the first row of sub( X ). .TP 8 JX (global input) INTEGER The column index in the global array X indicating the first column of sub( X ). .TP 8 DESCX (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix X. .TP 8 INCX (global input) INTEGER The global increment for the elements of X. Only two values of INCX are supported in this version, namely 1 and M_X. INCX must not be zero. .TP 8 TAU (local output) REAL, array, dimension LOCc(JX) if INCX = 1, and LOCr(IX) otherwise. This array contains the Householder scalars related to the Householder vectors. TAU is tied to the distributed matrix X. scalapack-doc-1.5/man/manl/pslarft.l0100644000056400000620000001504006335610645017077 0ustar pfrauenfstaff.TH PSLARFT l "12 May 1997" "LAPACK version 1.5" "LAPACK auxiliary routine (version 1.5)" .SH NAME PSLARFT - form the triangular factor T of a real block reflector H of order n, which is defined as a product of k elementary reflectors .SH SYNOPSIS .TP 20 SUBROUTINE PSLARFT( DIRECT, STOREV, N, K, V, IV, JV, DESCV, TAU, T, WORK ) .TP 20 .ti +4 CHARACTER DIRECT, STOREV .TP 20 .ti +4 INTEGER IV, JV, K, N .TP 20 .ti +4 INTEGER DESCV( * ) .TP 20 .ti +4 REAL TAU( * ), T( * ), V( * ), WORK( * ) .SH PURPOSE PSLARFT forms the triangular factor T of a real block reflector H of order n, which is defined as a product of k elementary reflectors. If DIRECT = 'F', H = H(1) H(2) . . . H(k) and T is upper triangular; If DIRECT = 'B', H = H(k) . . . H(2) H(1) and T is lower triangular. 
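The factors H(i) accumulated here are the elementary reflectors produced by PSLARFG above: H = I - tau * v * v' with a unit first entry in v and, for nonzero sub( X ), 1 <= tau <= 2. A dense serial Python sketch of applying one such reflector from the left (illustrative only; this is not the distributed code path, and the helper name is made up):

```python
def apply_reflector_left(tau, v, c):
    """Apply H = I - tau * v * v' from the left to the columns of C
    (dense, serial sketch). v follows the LAPACK/ScaLAPACK convention
    of a unit first entry; C is a list of rows, modified in place."""
    m, n = len(c), len(c[0])
    for j in range(n):
        w = sum(v[i] * c[i][j] for i in range(m))   # w = v' * C(:,j)
        for i in range(m):
            c[i][j] -= tau * v[i] * w               # C(:,j) -= tau * v * w
    return c

# Reflector annihilating x = (3, 4)': with v = (1, 0.5)' and tau = 1.6,
# H * x = (-5, 0)'; note tau lies in [1, 2] as the PSLARFG notes state.
hx = apply_reflector_left(1.6, [1.0, 0.5], [[3.0], [4.0]])
```

Since H is orthogonal and symmetric, applying the same reflector twice restores the original columns, which is an easy sanity check for any reflector code.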
If STOREV = 'C', the vector which defines the elementary reflector H(i) is stored in the i-th column of the distributed matrix V, and H = I - V * T * V' .br If STOREV = 'R', the vector which defines the elementary reflector H(i) is stored in the i-th row of the distributed matrix V, and H = I - V' * T * V .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. 
.br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 DIRECT (global input) CHARACTER*1 Specifies the order in which the elementary reflectors are multiplied to form the block reflector: .br = 'F': H = H(1) H(2) . . . H(k) (Forward) .br = 'B': H = H(k) . . . H(2) H(1) (Backward) .TP 8 STOREV (global input) CHARACTER*1 Specifies how the vectors which define the elementary reflectors are stored (see also Further Details): .br = 'R': rowwise .TP 8 N (global input) INTEGER The order of the block reflector H. N >= 0. .TP 8 K (global input) INTEGER The order of the triangular factor T (= the number of elementary reflectors). 1 <= K <= MB_V (= NB_V). .TP 8 V (input/output) REAL pointer into the local memory to an array of local dimension (LOCr(IV+N-1),LOCc(JV+K-1)) if STOREV = 'C', and (LOCr(IV+K-1),LOCc(JV+N-1)) if STOREV = 'R'. The distributed matrix V contains the Householder vectors. See further details. .TP 8 IV (global input) INTEGER The row index in the global array V indicating the first row of sub( V ). .TP 8 JV (global input) INTEGER The column index in the global array V indicating the first column of sub( V ). .TP 8 DESCV (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix V. .TP 8 TAU (local input) REAL, array, dimension LOCr(IV+K-1) if INCV = M_V, and LOCc(JV+K-1) otherwise. 
This array contains the Householder scalars related to the Householder vectors. TAU is tied to the distributed matrix V. .TP 8 T (local output) REAL array, dimension (NB_V,NB_V) if STOREV = 'Col', and (MB_V,MB_V) otherwise. It contains the k-by-k triangular factor of the block reflector associated with V. If DIRECT = 'F', T is upper triangular; if DIRECT = 'B', T is lower triangular. .TP 8 WORK (local workspace) REAL array, dimension (K*(K-1)/2) .SH FURTHER DETAILS The shape of the matrix V and the storage of the vectors which define the H(i) is best illustrated by the following example with n = 5 and k = 3. The elements equal to 1 are not stored; the corresponding array elements are modified but restored on exit. The rest of the array is not used. .br DIRECT = 'F' and STOREV = 'C': DIRECT = 'F' and STOREV = 'R': V( IV:IV+N-1, ( 1 ) V( IV:IV+K-1, ( 1 v1 v1 v1 v1 ) JV:JV+K-1 ) = ( v1 1 ) JV:JV+N-1 ) = ( 1 v2 v2 v2 ) ( v1 v2 1 ) ( 1 v3 v3 ) ( v1 v2 v3 ) .br ( v1 v2 v3 ) .br DIRECT = 'B' and STOREV = 'C': DIRECT = 'B' and STOREV = 'R': V( IV:IV+N-1, ( v1 v2 v3 ) V( IV:IV+K-1, ( v1 v1 1 ) JV:JV+K-1 ) = ( v1 v2 v3 ) JV:JV+N-1 ) = ( v2 v2 v2 1 ) ( 1 v2 v3 ) ( v3 v3 v3 v3 1 ) ( 1 v3 ) .br ( 1 ) .br
scalapack-doc-1.5/man/manl/pslarz.l
.TH PSLARZ l "12 May 1997" "LAPACK version 1.5" "LAPACK auxiliary routine (version 1.5)" .SH NAME PSLARZ - apply a real elementary reflector Q (or Q**T) to a real M-by-N distributed matrix sub( C ) = C(IC:IC+M-1,JC:JC+N-1), from either the left or the right .SH SYNOPSIS .TP 19 SUBROUTINE PSLARZ( SIDE, M, N, L, V, IV, JV, DESCV, INCV, TAU, C, IC, JC, DESCC, WORK ) .TP 19 .ti +4 CHARACTER SIDE .TP 19 .ti +4 INTEGER IC, INCV, IV, JC, JV, L, M, N .TP 19 .ti +4 INTEGER DESCC( * ), DESCV( * ) .TP 19 .ti +4 REAL C( * ), TAU( * ), V( * ), WORK( * ) .SH PURPOSE PSLARZ applies a real elementary reflector Q (or Q**T) to a real M-by-N distributed matrix sub( C ) =
C(IC:IC+M-1,JC:JC+N-1), from either the left or the right. Q is represented in the form Q = I - tau * v * v' .br where tau is a real scalar and v is a real vector. .br If tau = 0, then Q is taken to be the unit matrix. .br Q is a product of k elementary reflectors as returned by PSTZRZF. Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. 
.br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br Because vectors may be viewed as a subclass of matrices, a distributed vector is considered to be a distributed matrix. Restrictions .br ============ .br If SIDE = 'Left' and INCV = 1, then the row process having the first entry V(IV,JV) must also own C(IC+M-L,JC:JC+N-1). Moreover, MOD(IV-1,MB_V) must be equal to MOD(IC+N-L-1,MB_C), if INCV=M_V, only the last equality must be satisfied. .br If SIDE = 'Right' and INCV = M_V then the column process having the first entry V(IV,JV) must also own C(IC:IC+M-1,JC+N-L) and MOD(JV-1,NB_V) must be equal to MOD(JC+N-L-1,NB_C), if INCV = 1 only the last equality must be satisfied. .br .SH ARGUMENTS .TP 8 SIDE (global input) CHARACTER = 'L': form Q * sub( C ), .br = 'R': form sub( C ) * Q, Q = Q**T. .TP 8 M (global input) INTEGER The number of rows to be operated on i.e the number of rows of the distributed submatrix sub( C ). M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on i.e the number of columns of the distributed submatrix sub( C ). N >= 0. .TP 8 L (global input) INTEGER The columns of the distributed submatrix sub( A ) containing the meaningful part of the Householder reflectors. If SIDE = 'L', M >= L >= 0, if SIDE = 'R', N >= L >= 0. 
.TP 8 V (local input) REAL pointer into the local memory to an array of dimension (LLD_V,*) containing the local pieces of the distributed vectors V representing the Householder transformation Q, V(IV:IV+L-1,JV) if SIDE = 'L' and INCV = 1, .br V(IV,JV:JV+L-1) if SIDE = 'L' and INCV = M_V, .br V(IV:IV+L-1,JV) if SIDE = 'R' and INCV = 1, .br V(IV,JV:JV+L-1) if SIDE = 'R' and INCV = M_V, The vector v in the representation of Q. V is not used if TAU = 0. .TP 8 IV (global input) INTEGER The row index in the global array V indicating the first row of sub( V ). .TP 8 JV (global input) INTEGER The column index in the global array V indicating the first column of sub( V ). .TP 8 DESCV (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix V. .TP 8 INCV (global input) INTEGER The global increment for the elements of V. Only two values of INCV are supported in this version, namely 1 and M_V. INCV must not be zero. .TP 8 TAU (local input) REAL, array, dimension LOCc(JV) if INCV = 1, and LOCr(IV) otherwise. This array contains the Householder scalars related to the Householder vectors. TAU is tied to the distributed matrix V. .TP 8 C (local input/local output) REAL pointer into the local memory to an array of dimension (LLD_C, LOCc(JC+N-1) ), containing the local pieces of sub( C ). On exit, sub( C ) is overwritten by the Q * sub( C ) if SIDE = 'L', or sub( C ) * Q if SIDE = 'R'. .TP 8 IC (global input) INTEGER The row index in the global array C indicating the first row of sub( C ). .TP 8 JC (global input) INTEGER The column index in the global array C indicating the first column of sub( C ). .TP 8 DESCC (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix C. 
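The vector v applied here comes from PSTZRZF and has the RZ layout: a unit first entry, a block of zeros, and then the L meaningful entries at the bottom, which is why the Restrictions above tie the process owning V(IV,JV) to the process owning the last L rows (or columns) of sub( C ). A dense serial Python sketch of that structure (illustrative only; the helper name is hypothetical and not part of ScaLAPACK):

```python
def apply_rz_reflector_left(tau, vl, c):
    """Apply Q = I - tau * v * v' from the left, where the implicit vector
    is v = (1, 0, ..., 0, vl')' -- a unit entry, zeros, then the L
    meaningful entries vl, mirroring the PSTZRZF reflector layout.
    C is a list of rows, modified in place."""
    m, n = len(c), len(c[0])
    l = len(vl)
    v = [1.0] + [0.0] * (m - l - 1) + list(vl)  # materialize the RZ vector
    for j in range(n):
        w = sum(v[i] * c[i][j] for i in range(m))   # w = v' * C(:,j)
        for i in range(m):
            c[i][j] -= tau * v[i] * w               # C(:,j) -= tau * v * w
    return c
```

With tau = 0 the convention in the Purpose section makes Q the identity; with tau = 2 / (v' * v), Q is orthogonal and symmetric, so applying it twice restores sub( C ).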
.TP 8 WORK (local workspace) REAL array, dimension (LWORK) If INCV = 1, if SIDE = 'L', if IVCOL = ICCOL, LWORK >= NqC0 else LWORK >= MpC0 + MAX( 1, NqC0 ) end if else if SIDE = 'R', LWORK >= NqC0 + MAX( MAX( 1, MpC0 ), NUMROC( NUMROC( N+ICOFFC,NB_V,0,0,NPCOL ),NB_V,0,0,LCMQ ) ) end if else if INCV = M_V, if SIDE = 'L', LWORK >= MpC0 + MAX( MAX( 1, NqC0 ), NUMROC( NUMROC( M+IROFFC,MB_V,0,0,NPROW ),MB_V,0,0,LCMP ) ) else if SIDE = 'R', if IVROW = ICROW, LWORK >= MpC0 else LWORK >= NqC0 + MAX( 1, MpC0 ) end if end if end if where LCM is the least common multiple of NPROW and NPCOL and LCM = ILCM( NPROW, NPCOL ), LCMP = LCM / NPROW, LCMQ = LCM / NPCOL, IROFFC = MOD( IC-1, MB_C ), ICOFFC = MOD( JC-1, NB_C ), ICROW = INDXG2P( IC, MB_C, MYROW, RSRC_C, NPROW ), ICCOL = INDXG2P( JC, NB_C, MYCOL, CSRC_C, NPCOL ), MpC0 = NUMROC( M+IROFFC, MB_C, MYROW, ICROW, NPROW ), NqC0 = NUMROC( N+ICOFFC, NB_C, MYCOL, ICCOL, NPCOL ), ILCM, INDXG2P and NUMROC are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. Alignment requirements ====================== The distributed submatrices V(IV:*, JV:*) and C(IC:IC+M-1,JC:JC+N-1) must verify some alignment properties, namely the following expressions should be true: MB_V = NB_V, If INCV = 1, If SIDE = 'Left', ( MB_V.EQ.MB_C .AND. IROFFV.EQ.IROFFC .AND. IVROW.EQ.ICROW ) If SIDE = 'Right', ( MB_V.EQ.NB_A .AND. MB_V.EQ.NB_C .AND. IROFFV.EQ.ICOFFC ) else if INCV = M_V, If SIDE = 'Left', ( MB_V.EQ.NB_V .AND. MB_V.EQ.MB_C .AND. ICOFFV.EQ.IROFFC ) If SIDE = 'Right', ( NB_V.EQ.NB_C .AND. ICOFFV.EQ.ICOFFC .AND. 
IVCOL.EQ.ICCOL ) end if
scalapack-doc-1.5/man/manl/pslarzb.l
.TH PSLARZB l "12 May 1997" "LAPACK version 1.5" "LAPACK auxiliary routine (version 1.5)" .SH NAME PSLARZB - apply a real block reflector Q or its transpose Q**T to a real distributed M-by-N matrix sub( C ) = C(IC:IC+M-1,JC:JC+N-1) .SH SYNOPSIS .TP 20 SUBROUTINE PSLARZB( SIDE, TRANS, DIRECT, STOREV, M, N, K, L, V, IV, JV, DESCV, T, C, IC, JC, DESCC, WORK ) .TP 20 .ti +4 CHARACTER DIRECT, SIDE, STOREV, TRANS .TP 20 .ti +4 INTEGER IC, IV, JC, JV, K, L, M, N .TP 20 .ti +4 INTEGER DESCC( * ), DESCV( * ) .TP 20 .ti +4 REAL C( * ), T( * ), V( * ), WORK( * ) .SH PURPOSE PSLARZB applies a real block reflector Q or its transpose Q**T to a real distributed M-by-N matrix sub( C ) = C(IC:IC+M-1,JC:JC+N-1) from the left or the right. .br Q is a product of k elementary reflectors as returned by PSTZRZF. Currently, only STOREV = 'R' and DIRECT = 'B' are supported. Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclically distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ ) The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distributed over. The context itself is global, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A.
.br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 SIDE (global input) CHARACTER = 'L': apply Q or Q**T from the Left; .br = 'R': apply Q or Q**T from the Right. .TP 8 TRANS (global input) CHARACTER .br = 'N': No transpose, apply Q; .br = 'T': Transpose, apply Q**T. .TP 8 DIRECT (global input) CHARACTER Indicates how H is formed from a product of elementary reflectors = 'F': H = H(1) H(2) . . . H(k) (Forward, not supported yet) .br = 'B': H = H(k) . . . 
H(2) H(1) (Backward) .TP 8 STOREV (global input) CHARACTER Indicates how the vectors which define the elementary reflectors are stored: .br = 'C': Columnwise (not supported yet) .br = 'R': Rowwise .TP 8 M (global input) INTEGER The number of rows to be operated on i.e the number of rows of the distributed submatrix sub( C ). M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on i.e the number of columns of the distributed submatrix sub( C ). N >= 0. .TP 8 K (global input) INTEGER The order of the matrix T (= the number of elementary reflectors whose product defines the block reflector). .TP 8 L (global input) INTEGER The columns of the distributed submatrix sub( A ) containing the meaningful part of the Householder reflectors. If SIDE = 'L', M >= L >= 0, if SIDE = 'R', N >= L >= 0. .TP 8 V (local input) REAL pointer into the local memory to an array of dimension (LLD_V, LOCc(JV+M-1)) if SIDE = 'L', (LLD_V, LOCc(JV+N-1)) if SIDE = 'R'. It contains the local pieces of the distributed vectors V representing the Householder transformation as returned by PSTZRZF. LLD_V >= LOCr(IV+K-1). .TP 8 IV (global input) INTEGER The row index in the global array V indicating the first row of sub( V ). .TP 8 JV (global input) INTEGER The column index in the global array V indicating the first column of sub( V ). .TP 8 DESCV (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix V. .TP 8 T (local input) REAL array, dimension MB_V by MB_V The lower triangular matrix T in the representation of the block reflector. .TP 8 C (local input/local output) REAL pointer into the local memory to an array of dimension (LLD_C,LOCc(JC+N-1)). On entry, the M-by-N distributed matrix sub( C ). On exit, sub( C ) is overwritten by Q*sub( C ) or Q'*sub( C ) or sub( C )*Q or sub( C )*Q'. .TP 8 IC (global input) INTEGER The row index in the global array C indicating the first row of sub( C ). 
.TP 8 JC (global input) INTEGER The column index in the global array C indicating the first column of sub( C ). .TP 8 DESCC (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix C. .TP 8 WORK (local workspace) REAL array, dimension (LWORK) If STOREV = 'C', if SIDE = 'L', LWORK >= ( NqC0 + MpC0 ) * K else if SIDE = 'R', LWORK >= ( NqC0 + MAX( NpV0 + NUMROC( NUMROC( N+ICOFFC, NB_V, 0, 0, NPCOL ), NB_V, 0, 0, LCMQ ), MpC0 ) ) * K end if else if STOREV = 'R', if SIDE = 'L', LWORK >= ( MpC0 + MAX( MqV0 + NUMROC( NUMROC( M+IROFFC, MB_V, 0, 0, NPROW ), MB_V, 0, 0, LCMP ), NqC0 ) ) * K else if SIDE = 'R', LWORK >= ( MpC0 + NqC0 ) * K end if end if where LCMP = LCM / NPROW and LCMQ = LCM / NPCOL with LCM = ILCM( NPROW, NPCOL ), IROFFV = MOD( IV-1, MB_V ), ICOFFV = MOD( JV-1, NB_V ), IVROW = INDXG2P( IV, MB_V, MYROW, RSRC_V, NPROW ), IVCOL = INDXG2P( JV, NB_V, MYCOL, CSRC_V, NPCOL ), MqV0 = NUMROC( M+ICOFFV, NB_V, MYCOL, IVCOL, NPCOL ), NpV0 = NUMROC( N+IROFFV, MB_V, MYROW, IVROW, NPROW ), IROFFC = MOD( IC-1, MB_C ), ICOFFC = MOD( JC-1, NB_C ), ICROW = INDXG2P( IC, MB_C, MYROW, RSRC_C, NPROW ), ICCOL = INDXG2P( JC, NB_C, MYCOL, CSRC_C, NPCOL ), MpC0 = NUMROC( M+IROFFC, MB_C, MYROW, ICROW, NPROW ), NpC0 = NUMROC( N+ICOFFC, MB_C, MYROW, ICROW, NPROW ), NqC0 = NUMROC( N+ICOFFC, NB_C, MYCOL, ICCOL, NPCOL ), ILCM, INDXG2P and NUMROC are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. Alignment requirements ====================== The distributed submatrices V(IV:*, JV:*) and C(IC:IC+M-1,JC:JC+N-1) must satisfy some alignment properties, namely the following expressions should be true: If STOREV = 'Columnwise' If SIDE = 'Left', ( MB_V.EQ.MB_C .AND. IROFFV.EQ.IROFFC .AND. IVROW.EQ.ICROW ) If SIDE = 'Right', ( MB_V.EQ.NB_C .AND. IROFFV.EQ.ICOFFC ) else if STOREV = 'Rowwise' If SIDE = 'Left', ( NB_V.EQ.MB_C .AND. ICOFFV.EQ.IROFFC ) If SIDE = 'Right', ( NB_V.EQ.NB_C .AND.
ICOFFV.EQ.ICOFFC .AND. IVCOL.EQ.ICCOL ) end if scalapack-doc-1.5/man/manl/pslarzt.l0100644000056400000620000001562606335610645017135 0ustar pfrauenfstaff.TH PSLARZT l "12 May 1997" "LAPACK version 1.5" "LAPACK auxiliary routine (version 1.5)" .SH NAME PSLARZT - form the triangular factor T of a real block reflector H of order > n, which is defined as a product of k elementary reflectors as returned by PSTZRZF .SH SYNOPSIS .TP 20 SUBROUTINE PSLARZT( DIRECT, STOREV, N, K, V, IV, JV, DESCV, TAU, T, WORK ) .TP 20 .ti +4 CHARACTER DIRECT, STOREV .TP 20 .ti +4 INTEGER IV, JV, K, N .TP 20 .ti +4 INTEGER DESCV( * ) .TP 20 .ti +4 REAL TAU( * ), T( * ), V( * ), WORK( * ) .SH PURPOSE PSLARZT forms the triangular factor T of a real block reflector H of order > n, which is defined as a product of k elementary reflectors as returned by PSTZRZF. If DIRECT = 'F', H = H(1) H(2) . . . H(k) and T is upper triangular; If DIRECT = 'B', H = H(k) . . . H(2) H(1) and T is lower triangular. If STOREV = 'C', the vector which defines the elementary reflector H(i) is stored in the i-th column of the array V, and .br H = I - V * T * V' .br If STOREV = 'R', the vector which defines the elementary reflector H(i) is stored in the i-th row of the array V, and .br H = I - V' * T * V .br Currently, only STOREV = 'R' and DIRECT = 'B' are supported. Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. 
.br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 DIRECT (global input) CHARACTER Specifies the order in which the elementary reflectors are multiplied to form the block reflector: .br = 'F': H = H(1) H(2) . . . H(k) (Forward, not supported yet) .br = 'B': H = H(k) . . . 
H(2) H(1) (Backward) .TP 8 STOREV (global input) CHARACTER Specifies how the vectors which define the elementary reflectors are stored (see also Further Details): .br = 'R': rowwise .TP 8 N (global input) INTEGER The number of meaningful entries of the block reflector H. N >= 0. .TP 8 K (global input) INTEGER The order of the triangular factor T (= the number of elementary reflectors). 1 <= K <= MB_V (= NB_V). .TP 8 V (input/output) REAL pointer into the local memory to an array of local dimension (LOCr(IV+K-1),LOCc(JV+N-1)). The distributed matrix V contains the Householder vectors. See further details. .TP 8 IV (global input) INTEGER The row index in the global array V indicating the first row of sub( V ). .TP 8 JV (global input) INTEGER The column index in the global array V indicating the first column of sub( V ). .TP 8 DESCV (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix V. .TP 8 TAU (local input) REAL, array, dimension LOCr(IV+K-1) if INCV = M_V, and LOCc(JV+K-1) otherwise. This array contains the Householder scalars related to the Householder vectors. TAU is tied to the distributed matrix V. .TP 8 T (local output) REAL array, dimension (MB_V,MB_V) It contains the k-by-k triangular factor of the block reflector associated with V. T is lower triangular. .TP 8 WORK (local workspace) REAL array, dimension (K*(K-1)/2) .SH FURTHER DETAILS The shape of the matrix V and the storage of the vectors which define the H(i) is best illustrated by the following example with n = 5 and k = 3. The elements equal to 1 are not stored; the corresponding array elements are modified but restored on exit. The rest of the array is not used. .br DIRECT = 'F' and STOREV = 'C': DIRECT = 'F' and STOREV = 'R': ______V_____ .br ( v1 v2 v3 ) / \ ( v1 v2 v3 ) ( v1 v1 v1 v1 v1 . . . . 1 ) V = ( v1 v2 v3 ) ( v2 v2 v2 v2 v2 . . . 1 ) ( v1 v2 v3 ) ( v3 v3 v3 v3 v3 . . 1 ) ( v1 v2 v3 ) .br . . . .br . . . .br 1 . . .br 1 . 
.br 1 .br DIRECT = 'B' and STOREV = 'C': DIRECT = 'B' and STOREV = 'R': ______V_____ 1 / \ . 1 ( 1 . . . . v1 v1 v1 v1 v1 ) . . 1 ( . 1 . . . v2 v2 v2 v2 v2 ) . . . ( . . 1 . . v3 v3 v3 v3 v3 ) . . . .br ( v1 v2 v3 ) .br ( v1 v2 v3 ) .br V = ( v1 v2 v3 ) .br ( v1 v2 v3 ) .br ( v1 v2 v3 ) .br scalapack-doc-1.5/man/manl/pslascl.l0100644000056400000620000001233706335610645017073 0ustar pfrauenfstaff.TH PSLASCL l "12 May 1997" "LAPACK version 1.5" "LAPACK auxiliary routine (version 1.5)" .SH NAME PSLASCL - multiply the M-by-N real distributed matrix sub( A ) denoting A(IA:IA+M-1,JA:JA+N-1) by the real scalar CTO/CFROM .SH SYNOPSIS .TP 20 SUBROUTINE PSLASCL( TYPE, CFROM, CTO, M, N, A, IA, JA, DESCA, INFO ) .TP 20 .ti +4 CHARACTER TYPE .TP 20 .ti +4 INTEGER IA, INFO, JA, M, N .TP 20 .ti +4 REAL CFROM, CTO .TP 20 .ti +4 INTEGER DESCA( * ) .TP 20 .ti +4 REAL A( * ) .SH PURPOSE PSLASCL multiplies the M-by-N real distributed matrix sub( A ) denoting A(IA:IA+M-1,JA:JA+N-1) by the real scalar CTO/CFROM. This is done without over/underflow as long as the final result CTO * A(I,J) / CFROM does not over/underflow. TYPE specifies that sub( A ) may be full, upper triangular, lower triangular or upper Hessenberg. .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. 
The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 TYPE (global input) CHARACTER TYPE indicates the storage type of the input distributed matrix. = 'G': sub( A ) is a full matrix, .br = 'L': sub( A ) is a lower triangular matrix, .br = 'U': sub( A ) is an upper triangular matrix, .br = 'H': sub( A ) is an upper Hessenberg matrix. .TP 8 CFROM (global input) REAL CTO (global input) REAL The distributed matrix sub( A ) is multiplied by CTO/CFROM. 
A(I,J) is computed without over/underflow if the final result CTO * A(I,J) / CFROM can be represented without over/underflow. CFROM must be nonzero. .TP 8 M (global input) INTEGER The number of rows to be operated on i.e the number of rows of the distributed submatrix sub( A ). M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on i.e the number of columns of the distributed submatrix sub( A ). N >= 0. .TP 8 A (local input/local output) REAL pointer into the local memory to an array of dimension (LLD_A,LOCc(JA+N-1)). This array contains the local pieces of the distributed matrix sub( A ). On exit, this array contains the local pieces of the distributed matrix multiplied by CTO/CFROM. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 INFO (local output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-th entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. 
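The Notes sections of these man pages repeatedly express local array extents through the tool function NUMROC, e.g. LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). As an illustration, here is a serial C transcription of the block-cyclic counting NUMROC performs; the C rendering is ours (ScaLAPACK's tool function is Fortran), but the argument order matches the calls shown above:

```c
/* C sketch of ScaLAPACK's NUMROC tool function: the number of rows or
 * columns of a global dimension n, distributed block-cyclically with
 * block size nb over nprocs processes, that are owned by process iproc
 * when the first block is held by process isrcproc.                   */
int numroc(int n, int nb, int iproc, int isrcproc, int nprocs)
{
    int mydist  = (nprocs + iproc - isrcproc) % nprocs; /* distance from the source process */
    int nblocks = n / nb;                               /* number of complete blocks        */
    int num     = (nblocks / nprocs) * nb;              /* blocks shared evenly by everyone */
    int extra   = nblocks % nprocs;                     /* leftover complete blocks         */

    if (mydist < extra)
        num += nb;        /* this process holds one of the leftover full blocks */
    else if (mydist == extra)
        num += n % nb;    /* this process holds the trailing partial block      */
    return num;
}
```

For example, N = 10 with NB_A = 2 over NPCOL = 2 process columns gives 6 columns on the source process column and 4 on the other, consistent with the bound LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A = 6 quoted in the Notes.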
scalapack-doc-1.5/man/manl/pslase2.l0100644000056400000620000001214006335610645016773 0ustar pfrauenfstaff.TH PSLASE2 l "12 May 1997" "LAPACK version 1.5" "LAPACK auxiliary routine (version 1.5)" .SH NAME PSLASE2 - initialize an M-by-N distributed matrix sub( A ) denoting A(IA:IA+M-1,JA:JA+N-1) to BETA on the diagonal and ALPHA on the offdiagonals .SH SYNOPSIS .TP 20 SUBROUTINE PSLASE2( UPLO, M, N, ALPHA, BETA, A, IA, JA, DESCA ) .TP 20 .ti +4 CHARACTER UPLO .TP 20 .ti +4 INTEGER IA, JA, M, N .TP 20 .ti +4 REAL ALPHA, BETA .TP 20 .ti +4 INTEGER DESCA( * ) .TP 20 .ti +4 REAL A( * ) .SH PURPOSE PSLASE2 initializes an M-by-N distributed matrix sub( A ) denoting A(IA:IA+M-1,JA:JA+N-1) to BETA on the diagonal and ALPHA on the offdiagonals. PSLASE2 requires that only one dimension of the matrix operand is distributed. .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. 
.br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 UPLO (global input) CHARACTER Specifies the part of the distributed matrix sub( A ) to be set: .br = 'U': Upper triangular part is set; the strictly lower triangular part of sub( A ) is not changed; = 'L': Lower triangular part is set; the strictly upper triangular part of sub( A ) is not changed; Otherwise: All of the matrix sub( A ) is set. .TP 8 M (global input) INTEGER The number of rows to be operated on i.e the number of rows of the distributed submatrix sub( A ). M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on i.e the number of columns of the distributed submatrix sub( A ). N >= 0. .TP 8 ALPHA (global input) REAL The constant to which the offdiagonal elements are to be set. 
.TP 8 BETA (global input) REAL The constant to which the diagonal elements are to be set. .TP 8 A (local output) REAL pointer into the local memory to an array of dimension (LLD_A,LOCc(JA+N-1)). This array contains the local pieces of the distributed matrix sub( A ) to be set. On exit, the leading M-by-N submatrix sub( A ) is set as follows: if UPLO = 'U', A(IA+i-1,JA+j-1) = ALPHA, 1<=i<=j-1, 1<=j<=N, if UPLO = 'L', A(IA+i-1,JA+j-1) = ALPHA, j+1<=i<=M, 1<=j<=N, otherwise, A(IA+i-1,JA+j-1) = ALPHA, 1<=i<=M, 1<=j<=N, IA+i.NE.JA+j, and, for all UPLO, A(IA+i-1,JA+i-1) = BETA, 1<=i<=min(M,N). .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. scalapack-doc-1.5/man/manl/pslaset.l0100644000056400000620000001202006335610645017072 0ustar pfrauenfstaff.TH PSLASET l "12 May 1997" "LAPACK version 1.5" "LAPACK auxiliary routine (version 1.5)" .SH NAME PSLASET - initialize an M-by-N distributed matrix sub( A ) denoting A(IA:IA+M-1,JA:JA+N-1) to BETA on the diagonal and ALPHA on the offdiagonals .SH SYNOPSIS .TP 20 SUBROUTINE PSLASET( UPLO, M, N, ALPHA, BETA, A, IA, JA, DESCA ) .TP 20 .ti +4 CHARACTER UPLO .TP 20 .ti +4 INTEGER IA, JA, M, N .TP 20 .ti +4 REAL ALPHA, BETA .TP 20 .ti +4 INTEGER DESCA( * ) .TP 20 .ti +4 REAL A( * ) .SH PURPOSE PSLASET initializes an M-by-N distributed matrix sub( A ) denoting A(IA:IA+M-1,JA:JA+N-1) to BETA on the diagonal and ALPHA on the offdiagonals. Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. 
Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). 
An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 UPLO (global input) CHARACTER Specifies the part of the distributed matrix sub( A ) to be set: .br = 'U': Upper triangular part is set; the strictly lower triangular part of sub( A ) is not changed; = 'L': Lower triangular part is set; the strictly upper triangular part of sub( A ) is not changed; Otherwise: All of the matrix sub( A ) is set. .TP 8 M (global input) INTEGER The number of rows to be operated on i.e the number of rows of the distributed submatrix sub( A ). M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on i.e the number of columns of the distributed submatrix sub( A ). N >= 0. .TP 8 ALPHA (global input) REAL The constant to which the offdiagonal elements are to be set. .TP 8 BETA (global input) REAL The constant to which the diagonal elements are to be set. .TP 8 A (local output) REAL pointer into the local memory to an array of dimension (LLD_A,LOCc(JA+N-1)). This array contains the local pieces of the distributed matrix sub( A ) to be set. On exit, the leading M-by-N submatrix sub( A ) is set as follows: if UPLO = 'U', A(IA+i-1,JA+j-1) = ALPHA, 1<=i<=j-1, 1<=j<=N, if UPLO = 'L', A(IA+i-1,JA+j-1) = ALPHA, j+1<=i<=M, 1<=j<=N, otherwise, A(IA+i-1,JA+j-1) = ALPHA, 1<=i<=M, 1<=j<=N, IA+i.NE.JA+j, and, for all UPLO, A(IA+i-1,JA+i-1) = BETA, 1<=i<=min(M,N). .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. 
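The UPLO-dependent assignment rules spelled out in the A argument description above (ALPHA on the selected off-diagonal part, BETA on the diagonal, the opposite strict triangle untouched) can be illustrated with a small serial sketch that applies the same elementwise rule to a local column-major array. This shows only the local arithmetic, not the distributed routine, and the C helper name is ours:

```c
/* Serial illustration of the setting rule PSLASET/PSLASE2 apply to the
 * distributed submatrix: BETA on the diagonal, ALPHA elsewhere, with
 * UPLO restricting which strict triangle is written. The array a is
 * column-major with leading dimension lda, as in Fortran.             */
void laset_local(char uplo, int m, int n, float alpha, float beta,
                 float *a, int lda)
{
    for (int j = 0; j < n; ++j)
        for (int i = 0; i < m; ++i) {
            if (i == j)
                a[i + j * lda] = beta;                    /* diagonal       */
            else if ((uplo == 'U' && i < j) ||            /* strict upper   */
                     (uplo == 'L' && i > j) ||            /* strict lower   */
                     (uplo != 'U' && uplo != 'L'))        /* whole matrix   */
                a[i + j * lda] = alpha;
            /* otherwise the opposite strict triangle is left unchanged */
        }
}
```

With UPLO = 'U' on a 3-by-3 array, the diagonal becomes BETA, the strict upper triangle becomes ALPHA, and the strict lower triangle keeps its previous contents, matching the formulas given for A above.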
scalapack-doc-1.5/man/manl/pslasmsub.l0100644000056400000620000001143506335610646017442 0ustar pfrauenfstaff.TH PSLASMSUB l "12 May 1997" "LAPACK version 1.5 " "LAPACK routine (version 1.5 )" .SH NAME PSLASMSUB - look for a small subdiagonal element from the bottom of the matrix that it can safely set to zero .SH SYNOPSIS .TP 22 SUBROUTINE PSLASMSUB( A, DESCA, I, L, K, SMLNUM, BUF, LWORK ) .TP 22 .ti +4 INTEGER I, K, L, LWORK .TP 22 .ti +4 REAL SMLNUM .TP 22 .ti +4 INTEGER DESCA( * ) .TP 22 .ti +4 REAL A( * ), BUF( * ) .SH PURPOSE PSLASMSUB looks for a small subdiagonal element from the bottom of the matrix that it can safely set to zero. Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. 
CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 A (global input) REAL array, dimension (DESCA(LLD_),*) On entry, the Hessenberg matrix whose tridiagonal part is being scanned. Unchanged on exit. .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 I (global input) INTEGER The global location of the bottom of the unreduced submatrix of A. Unchanged on exit. .TP 8 L (global input) INTEGER The global location of the top of the unreduced submatrix of A. Unchanged on exit. .TP 8 K (global output) INTEGER On exit, this yields the bottom portion of the unreduced submatrix. This will satisfy: L <= K <= I-1. .TP 8 SMLNUM (global input) REAL On entry, a "small number" for the given matrix. Unchanged on exit. .TP 8 BUF (local output) REAL array of size LWORK. .TP 8 LWORK (global input) INTEGER On exit, LWORK is the size of the work buffer. 
This must be at least 2*Ceil( Ceil( (I-L)/HBL ) / LCM(NPROW,NPCOL) ) Here LCM is least common multiple, and NPROWxNPCOL is the logical grid size. Notes: This routine does a global maximum and must be called by all processes. This code is basically a parallelization of the following snip of LAPACK code from SLAHQR: Look for a single small subdiagonal element. DO 20 K = I, L + 1, -1 TST1 = ABS( H( K-1, K-1 ) ) + ABS( H( K, K ) ) IF( TST1.EQ.ZERO ) $ TST1 = SLANHS( '1', I-L+1, H( L, L ), LDH, WORK ) IF( ABS( H( K, K-1 ) ).LE.MAX( ULP*TST1, SMLNUM ) ) $ GO TO 30 20 CONTINUE 30 CONTINUE Implemented by: G. Henry, November 17, 1996 scalapack-doc-1.5/man/manl/pslassq.l0100644000056400000620000001216106335610646017114 0ustar pfrauenfstaff.TH PSLASSQ l "12 May 1997" "LAPACK version 1.5" "LAPACK auxiliary routine (version 1.5)" .SH NAME PSLASSQ - return the values scl and smsq such that ( scl**2 )*smsq = x( 1 )**2 +...+ x( n )**2 + ( scale**2 )*sumsq, .SH SYNOPSIS .TP 20 SUBROUTINE PSLASSQ( N, X, IX, JX, DESCX, INCX, SCALE, SUMSQ ) .TP 20 .ti +4 INTEGER IX, INCX, JX, N .TP 20 .ti +4 REAL SCALE, SUMSQ .TP 20 .ti +4 INTEGER DESCX( * ) .TP 20 .ti +4 REAL X( * ) .SH PURPOSE PSLASSQ returns the values scl and smsq such that ( scl**2 )*smsq = x( 1 )**2 +...+ x( n )**2 + ( scale**2 )*sumsq, where x( i ) = sub( X ) = X( IX+(JX-1)*DESCX(M_)+(i-1)*INCX ). The value of sumsq is assumed to be non-negative and scl returns the value .br scl = max( scale, abs( x( i ) ) ). .br scale and sumsq must be supplied in SCALE and SUMSQ respectively. SCALE and SUMSQ are overwritten by scl and smsq respectively. The routine makes only one pass through the vector sub( X ). Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. 
In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). 
An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br Because vectors may be viewed as a subclass of matrices, a distributed vector is considered to be a distributed matrix. The results are only available in the scope of sub( X ), i.e if sub( X ) is distributed along a process row, the correct results are only available in this process row of the grid. Similarly if sub( X ) is distributed along a process column, the correct results are only available in this process column of the grid. .br .SH ARGUMENTS .TP 8 N (global input) INTEGER The length of the distributed vector sub( X ). .TP 8 X (input) REAL The vector for which a scaled sum of squares is computed. x( i ) = X(IX+(JX-1)*M_X +(i-1)*INCX ), 1 <= i <= n. .TP 8 IX (global input) INTEGER The row index in the global array X indicating the first row of sub( X ). .TP 8 JX (global input) INTEGER The column index in the global array X indicating the first column of sub( X ). .TP 8 DESCX (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix X. .TP 8 INCX (global input) INTEGER The global increment for the elements of X. Only two values of INCX are supported in this version, namely 1 and M_X. INCX must not be zero. .TP 8 SCALE (local input/local output) REAL On entry, the value scale in the equation above. On exit, SCALE is overwritten with scl , the scaling factor for the sum of squares. .TP 8 SUMSQ (local input/local output) REAL On entry, the value sumsq in the equation above. On exit, SUMSQ is overwritten with smsq , the basic sum of squares from which scl has been factored out. 
scalapack-doc-1.5/man/manl/pslaswp.l0100644000056400000620000001224106335610646017116 0ustar pfrauenfstaff.TH PSLASWP l "12 May 1997" "LAPACK version 1.5" "LAPACK auxiliary routine (version 1.5)" .SH NAME PSLASWP - perform a series of row or column interchanges on the distributed matrix sub( A ) = A(IA:IA+M-1,JA:JA+N-1) .SH SYNOPSIS .TP 20 SUBROUTINE PSLASWP( DIREC, ROWCOL, N, A, IA, JA, DESCA, K1, K2, IPIV ) .TP 20 .ti +4 CHARACTER DIREC, ROWCOL .TP 20 .ti +4 INTEGER IA, JA, K1, K2, N .TP 20 .ti +4 INTEGER DESCA( * ), IPIV( * ) .TP 20 .ti +4 REAL A( * ) .SH PURPOSE PSLASWP performs a series of row or column interchanges on the distributed matrix sub( A ) = A(IA:IA+M-1,JA:JA+N-1). One interchange is initiated for each of rows or columns K1 through K2 of sub( A ). This routine assumes that the pivoting information has already been broadcast along the process row or column. .br Also note that this routine will only work for K1-K2 being in the same MB (or NB) block. If you want to pivot a full matrix, use PSLAPIV. .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. 
.br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 DIREC (global input) CHARACTER Specifies in which order the permutation is applied: = 'F' (Forward) = 'B' (Backward) .TP 8 ROWCOL (global input) CHARACTER Specifies if the rows or columns are permuted: = 'R' (Rows) = 'C' (Columns) .TP 8 N (global input) INTEGER If ROWCOL = 'R', the length of the rows of the distributed matrix A(*,JA:JA+N-1) to be permuted; If ROWCOL = 'C', the length of the columns of the distributed matrix A(IA:IA+N-1,*) to be permuted. .TP 8 A (local input/local output) REAL pointer into the local memory to an array of dimension (LLD_A, * ). 
On entry, this array contains the local pieces of the distri- buted matrix to which the row/column interchanges will be applied. On exit, the permuted distributed matrix. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 K1 (global input) INTEGER The first element of IPIV for which a row or column inter- change will be done. .TP 8 K2 (global input) INTEGER The last element of IPIV for which a row or column inter- change will be done. .TP 8 IPIV (local input) INTEGER array, dimension LOCr(M_A)+MB_A for row pivoting and LOCc(N_A)+NB_A for column pivoting. This array is tied to the matrix A, IPIV(K) = L implies rows (or columns) K and L are to be interchanged. scalapack-doc-1.5/man/manl/pslatra.l0100644000056400000620000000765406335610646017077 0ustar pfrauenfstaff.TH PSLATRA l "12 May 1997" "LAPACK version 1.5" "LAPACK auxiliary routine (version 1.5)" .SH NAME PSLATRA - compute the trace of an N-by-N distributed matrix sub( A ) denoting A( IA:IA+N-1, JA:JA+N-1 ) .SH SYNOPSIS .TP 14 REAL FUNCTION PSLATRA( N, A, IA, JA, DESCA ) .TP 14 .ti +4 INTEGER IA, JA, N .TP 14 .ti +4 INTEGER DESCA( * ) .TP 14 .ti +4 REAL A( * ) .SH PURPOSE PSLATRA computes the trace of an N-by-N distributed matrix sub( A ) denoting A( IA:IA+N-1, JA:JA+N-1 ). The result is left on every process of the grid. .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. 
In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). 
An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 N (global input) INTEGER The number of rows and columns to be operated on, i.e. the order of the distributed submatrix sub( A ). N >= 0. .TP 8 A (local input) REAL pointer into the local memory to an array of dimension ( LLD_A, LOCc(JA+N-1) ). This array contains the local pieces of the distributed matrix whose trace is to be computed. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. scalapack-doc-1.5/man/manl/pslatrd.l0100644000056400000620000002110706335610646017077 0ustar pfrauenfstaff.TH PSLATRD l "12 May 1997" "LAPACK version 1.5" "LAPACK auxiliary routine (version 1.5)" .SH NAME PSLATRD - reduce NB rows and columns of a real symmetric distributed matrix sub( A ) = A(IA:IA+N-1,JA:JA+N-1) to symmetric tridiagonal form by an orthogonal similarity transformation Q' * sub( A ) * Q, .SH SYNOPSIS .TP 20 SUBROUTINE PSLATRD( UPLO, N, NB, A, IA, JA, DESCA, D, E, TAU, W, IW, JW, DESCW, WORK ) .TP 20 .ti +4 CHARACTER UPLO .TP 20 .ti +4 INTEGER IA, IW, JA, JW, N, NB .TP 20 .ti +4 INTEGER DESCA( * ), DESCW( * ) .TP 20 .ti +4 REAL A( * ), D( * ), E( * ), TAU( * ), W( * ), WORK( * ) .SH PURPOSE PSLATRD reduces NB rows and columns of a real symmetric distributed matrix sub( A ) = A(IA:IA+N-1,JA:JA+N-1) to symmetric tridiagonal form by an orthogonal similarity transformation Q' * sub( A ) * Q, and returns the matrices V and W which are needed to apply the transformation to the unreduced part of sub( A ). 
.br If UPLO = 'U', PSLATRD reduces the last NB rows and columns of a matrix, of which the upper triangle is supplied; .br if UPLO = 'L', PSLATRD reduces the first NB rows and columns of a matrix, of which the lower triangle is supplied. .br This is an auxiliary routine called by PSSYTRD. .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. 
.br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 UPLO (global input) CHARACTER Specifies whether the upper or lower triangular part of the symmetric matrix sub( A ) is stored: .br = 'U': Upper triangular .br = 'L': Lower triangular .TP 8 N (global input) INTEGER The number of rows and columns to be operated on, i.e. the order of the distributed submatrix sub( A ). N >= 0. .TP 8 NB (global input) INTEGER The number of rows and columns to be reduced. .TP 8 A (local input/local output) REAL pointer into the local memory to an array of dimension (LLD_A,LOCc(JA+N-1)). On entry, this array contains the local pieces of the symmetric distributed matrix sub( A ). If UPLO = 'U', the leading N-by-N upper triangular part of sub( A ) contains the upper triangular part of the matrix, and its strictly lower triangular part is not referenced. If UPLO = 'L', the leading N-by-N lower triangular part of sub( A ) contains the lower triangular part of the matrix, and its strictly upper triangular part is not referenced. On exit, if UPLO = 'U', the last NB columns have been reduced to tridiagonal form, with the diagonal elements overwriting the diagonal elements of sub( A ); the elements above the diagonal with the array TAU, represent the orthogonal matrix Q as a product of elementary reflectors. 
If UPLO = 'L', the first NB columns have been reduced to tridiagonal form, with the diagonal elements overwriting the diagonal elements of sub( A ); the elements below the diagonal with the array TAU, represent the orthogonal matrix Q as a product of elementary reflectors; See Further Details. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 D (local output) REAL array, dimension LOCc(JA+N-1) The diagonal elements of the tridiagonal matrix T: D(i) = A(i,i). D is tied to the distributed matrix A. .TP 8 E (local output) REAL array, dimension LOCc(JA+N-1) if UPLO = 'U', LOCc(JA+N-2) otherwise. The off-diagonal elements of the tridiagonal matrix T: E(i) = A(i,i+1) if UPLO = 'U', E(i) = A(i+1,i) if UPLO = 'L'. E is tied to the distributed matrix A. .TP 8 TAU (local output) REAL, array, dimension LOCc(JA+N-1). This array contains the scalar factors TAU of the elementary reflectors. TAU is tied to the distributed matrix A. .TP 8 W (local output) REAL pointer into the local memory to an array of dimension (LLD_W,NB_W), This array contains the local pieces of the N-by-NB_W matrix W required to update the unreduced part of sub( A ). .TP 8 IW (global input) INTEGER The row index in the global array W indicating the first row of sub( W ). .TP 8 JW (global input) INTEGER The column index in the global array W indicating the first column of sub( W ). .TP 8 DESCW (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix W. .TP 8 WORK (local workspace) REAL array, dimension (NB_A) .SH FURTHER DETAILS If UPLO = 'U', the matrix Q is represented as a product of elementary reflectors .br Q = H(n) H(n-1) . . . H(n-nb+1). 
.br Each H(i) has the form .br H(i) = I - tau * v * v' .br where tau is a real scalar, and v is a real vector with .br v(i:n) = 0 and v(i-1) = 1; v(1:i-1) is stored on exit in .br A(ia:ia+i-2,ja+i), and tau in TAU(ja+i-1). .br If UPLO = 'L', the matrix Q is represented as a product of elementary reflectors .br Q = H(1) H(2) . . . H(nb). .br Each H(i) has the form .br H(i) = I - tau * v * v' .br where tau is a real scalar, and v is a real vector with .br v(1:i) = 0 and v(i+1) = 1; v(i+2:n) is stored on exit in .br A(ia+i+1:ia+n-1,ja+i-1), and tau in TAU(ja+i-1). .br The elements of the vectors v together form the N-by-NB matrix V which is needed, with W, to apply the transformation to the unreduced part of the matrix, using a symmetric rank-2k update of the form: sub( A ) := sub( A ) - V*W' - W*V'. .br The contents of A on exit are illustrated by the following examples with n = 5 and nb = 2: .br if UPLO = 'U': if UPLO = 'L': .br ( a a a v4 v5 ) ( d ) .br ( a a v4 v5 ) ( 1 d ) .br ( a 1 v5 ) ( v1 1 a ) .br ( d 1 ) ( v1 v2 a a ) .br ( d ) ( v1 v2 a a a ) .br where d denotes a diagonal element of the reduced matrix, a denotes an element of the original matrix that is unchanged, and vi denotes an element of the vector defining H(i). .br scalapack-doc-1.5/man/manl/pslatrs.l0100644000056400000620000000114306335610646017114 0ustar pfrauenfstaff.TH PSLATRS l "12 May 1997" "LAPACK version 1.5" "LAPACK auxiliary routine (version 1.5)" .SH NAME PSLATRS - solve a triangular system .SH SYNOPSIS .TP 20 SUBROUTINE PSLATRS( UPLO, TRANS, DIAG, NORMIN, N, A, IA, JA, DESCA, X, IX, JX, DESCX, SCALE, CNORM, WORK ) .TP 20 .ti +4 CHARACTER DIAG, NORMIN, TRANS, UPLO .TP 20 .ti +4 INTEGER IA, IX, JA, JX, N .TP 20 .ti +4 REAL SCALE .TP 20 .ti +4 INTEGER DESCA( * ), DESCX( * ) .TP 20 .ti +4 REAL A( * ), CNORM( * ), X( * ), WORK( * ) .SH PURPOSE PSLATRS solves a triangular system. This routine is unfinished at this time, but will be part of the next release. 
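The elementary-reflector algebra in PSLATRD's FURTHER DETAILS above (H(i) = I - tau * v * v') can be sanity-checked with a small serial NumPy sketch. This is an illustration only, not the distributed routine; the trailing entries of v are arbitrary example values, and tau is chosen as 2/(v'v), which makes H orthogonal for a real unit-direction reflector.

```python
import numpy as np

# Serial illustration of the reflector form used by PSLATRD:
#   H(i) = I - tau * v * v'
# with the UPLO = 'L' storage pattern v(1:i) = 0, v(i+1) = 1 (here i = 1,
# 1-based). The values 0.3, -0.2, 0.5 below v's unit entry are arbitrary.
n = 5
v = np.zeros(n)
v[1] = 1.0                      # the implicit unit entry v(i+1) = 1
v[2:] = [0.3, -0.2, 0.5]        # example values stored below the diagonal
tau = 2.0 / (v @ v)             # choice that makes H exactly orthogonal
h = np.eye(n) - tau * np.outer(v, v)

# H is symmetric and orthogonal, hence involutory: H' * H = H * H = I.
assert np.allclose(h.T @ h, np.eye(n))
assert np.allclose(h @ h, np.eye(n))
```

Applying NB such reflectors and accumulating V and W is what lets PSLATRD update the unreduced part with the single rank-2k correction sub( A ) - V*W' - W*V' quoted above.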
.br scalapack-doc-1.5/man/manl/pslatrz.l0100644000056400000620000001460506335610646017132 0ustar pfrauenfstaff.TH PSLATRZ l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PSLATRZ - reduce the M-by-N ( M<=N ) real upper trapezoidal matrix sub( A ) = [ A(IA:IA+M-1,JA:JA+M-1) A(IA:IA+M-1,JA+N-L:JA+N-1) ] to upper triangular form by means of orthogonal transformations .SH SYNOPSIS .TP 20 SUBROUTINE PSLATRZ( M, N, L, A, IA, JA, DESCA, TAU, WORK ) .TP 20 .ti +4 INTEGER IA, JA, L, M, N .TP 20 .ti +4 INTEGER DESCA( * ) .TP 20 .ti +4 REAL A( * ), TAU( * ), WORK( * ) .SH PURPOSE PSLATRZ reduces the M-by-N ( M<=N ) real upper trapezoidal matrix sub( A ) = [ A(IA:IA+M-1,JA:JA+M-1) A(IA:IA+M-1,JA+N-L:JA+N-1) ] to upper triangular form by means of orthogonal transformations. The upper trapezoidal matrix sub( A ) is factored as .br sub( A ) = ( R 0 ) * Z, .br where Z is an N-by-N orthogonal matrix and R is an M-by-M upper triangular matrix. .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. 
.br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 M (global input) INTEGER The number of rows to be operated on, i.e. the number of rows of the distributed submatrix sub( A ). M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on, i.e. the number of columns of the distributed submatrix sub( A ). N >= 0. .TP 8 L (global input) INTEGER The columns of the distributed submatrix sub( A ) containing the meaningful part of the Householder reflectors. L > 0. .TP 8 A (local input/local output) REAL pointer into the local memory to an array of dimension (LLD_A, LOCc(JA+N-1)). On entry, the local pieces of the M-by-N distributed matrix sub( A ) which is to be factored. 
On exit, the leading M-by-M upper triangular part of sub( A ) contains the upper trian- gular matrix R, and elements N-L+1 to N of the first M rows of sub( A ), with the array TAU, represent the orthogonal matrix Z as a product of M elementary reflectors. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 TAU (local output) REAL, array, dimension LOCr(IA+M-1) This array contains the scalar factors of the elementary reflectors. TAU is tied to the distributed matrix A. .TP 8 WORK (local workspace) REAL array, dimension (LWORK) LWORK >= Nq0 + MAX( 1, Mp0 ), where IROFF = MOD( IA-1, MB_A ), ICOFF = MOD( JA-1, NB_A ), IAROW = INDXG2P( IA, MB_A, MYROW, RSRC_A, NPROW ), IACOL = INDXG2P( JA, NB_A, MYCOL, CSRC_A, NPCOL ), Mp0 = NUMROC( M+IROFF, MB_A, MYROW, IAROW, NPROW ), Nq0 = NUMROC( N+ICOFF, NB_A, MYCOL, IACOL, NPCOL ), and NUMROC, INDXG2P are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. .SH FURTHER DETAILS The factorization is obtained by Householder's method. The kth transformation matrix, Z( k ), which is used to introduce zeros into the (m - k + 1)th row of sub( A ), is given in the form .br Z( k ) = ( I 0 ), .br ( 0 T( k ) ) .br where .br T( k ) = I - tau*u( k )*u( k )', u( k ) = ( 1 ), ( 0 ) ( z( k ) ) tau is a scalar and z( k ) is an ( n - m ) element vector. tau and z( k ) are chosen to annihilate the elements of the kth row of sub( A ). .br The scalar tau is returned in the kth element of TAU and the vector u( k ) in the kth row of sub( A ), such that the elements of z( k ) are in a( k, m + 1 ), ..., a( k, n ). The elements of R are returned in the upper triangular part of sub( A ). 
.br Z is given by .br Z = Z( 1 ) * Z( 2 ) * ... * Z( m ). .br scalapack-doc-1.5/man/manl/pslauu2.l0100644000056400000620000001157406335610646017030 0ustar pfrauenfstaff.TH PSLAUU2 l "12 May 1997" "LAPACK version 1.5" "LAPACK auxiliary routine (version 1.5)" .SH NAME PSLAUU2 - compute the product U * U' or L' * L, where the triangular factor U or L is stored in the upper or lower triangular part of the matrix sub( A ) = A(IA:IA+N-1,JA:JA+N-1) .SH SYNOPSIS .TP 20 SUBROUTINE PSLAUU2( UPLO, N, A, IA, JA, DESCA ) .TP 20 .ti +4 CHARACTER UPLO .TP 20 .ti +4 INTEGER IA, JA, N .TP 20 .ti +4 INTEGER DESCA( * ) .TP 20 .ti +4 REAL A( * ) .SH PURPOSE PSLAUU2 computes the product U * U' or L' * L, where the triangular factor U or L is stored in the upper or lower triangular part of the matrix sub( A ) = A(IA:IA+N-1,JA:JA+N-1). If UPLO = 'U' or 'u' then the upper triangle of the result is stored, overwriting the factor U in sub( A ). .br If UPLO = 'L' or 'l' then the lower triangle of the result is stored, overwriting the factor L in sub( A ). .br This is the unblocked form of the algorithm, calling Level 2 BLAS. No communication is performed by this routine, the matrix to operate on should be strictly local to one process. .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. 
.br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 UPLO (global input) CHARACTER*1 Specifies whether the triangular factor stored in the matrix sub( A ) is upper or lower triangular: .br = 'U': Upper triangular, .br = 'L': Lower triangular. .TP 8 N (global input) INTEGER The number of rows and columns to be operated on, i.e. the order of the triangular factor U or L. N >= 0. 
.TP 8 A (local input/local output) REAL pointer into the local memory to an array of dimension (LLD_A, LOCc(JA+N-1)). On entry, the local pieces of the triangular factor L or U. On exit, if UPLO = 'U', the upper triangle of the distributed matrix sub( A ) is overwritten with the upper triangle of the product U * U'; if UPLO = 'L', the lower triangle of sub( A ) is overwritten with the lower triangle of the product L' * L. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. scalapack-doc-1.5/man/manl/pslauum.l0100644000056400000620000001143206335610646017114 0ustar pfrauenfstaff.TH PSLAUUM l "12 May 1997" "LAPACK version 1.5" "LAPACK auxiliary routine (version 1.5)" .SH NAME PSLAUUM - compute the product U * U' or L' * L, where the triangular factor U or L is stored in the upper or lower triangular part of the distributed matrix sub( A ) = A(IA:IA+N-1,JA:JA+N-1) .SH SYNOPSIS .TP 20 SUBROUTINE PSLAUUM( UPLO, N, A, IA, JA, DESCA ) .TP 20 .ti +4 CHARACTER UPLO .TP 20 .ti +4 INTEGER IA, JA, N .TP 20 .ti +4 INTEGER DESCA( * ) .TP 20 .ti +4 REAL A( * ) .SH PURPOSE PSLAUUM computes the product U * U' or L' * L, where the triangular factor U or L is stored in the upper or lower triangular part of the distributed matrix sub( A ) = A(IA:IA+N-1,JA:JA+N-1). If UPLO = 'U' or 'u' then the upper triangle of the result is stored, overwriting the factor U in sub( A ). .br If UPLO = 'L' or 'l' then the lower triangle of the result is stored, overwriting the factor L in sub( A ). .br This is the blocked form of the algorithm, calling Level 3 PBLAS. Notes .br ===== .br Each global data object is described by an associated description vector. 
This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. 
.br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 UPLO (global input) CHARACTER*1 Specifies whether the triangular factor stored in the distributed matrix sub( A ) is upper or lower triangular: .br = 'U': Upper triangular .br = 'L': Lower triangular .TP 8 N (global input) INTEGER The number of rows and columns to be operated on, i.e. the order of the triangular factor U or L. N >= 0. .TP 8 A (local input/local output) REAL pointer into the local memory to an array of dimension (LLD_A, LOCc(JA+N-1)). On entry, the local pieces of the triangular factor L or U. On exit, if UPLO = 'U', the upper triangle of the distributed matrix sub( A ) is overwritten with the upper triangle of the product U * U'; if UPLO = 'L', the lower triangle of sub( A ) is overwritten with the lower triangle of the product L' * L. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. 
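The triangle-overwriting semantics of PSLAUUM described above can be illustrated with a serial NumPy analogue. This is a sketch only, not the distributed routine: the helper name `lauum_upper` is invented for illustration, and it mimics just the UPLO = 'U' case, where the upper triangle of U * U' overwrites the stored factor U while the unreferenced lower triangle is left untouched.

```python
import numpy as np

def lauum_upper(a):
    """Serial analogue of PSLAUUM with UPLO = 'U' (illustration only):
    the factor U lives in the upper triangle of a; return a copy whose
    upper triangle holds triu(U * U') and whose strict lower triangle
    is the original (unreferenced) data."""
    u = np.triu(a)                          # extract the factor U
    return np.triu(u @ u.T) + np.tril(a, -1)

# The 9.0 entries below the diagonal are junk that must survive unchanged.
a = np.array([[2.0, 1.0, 0.5],
              [9.0, 3.0, 1.5],
              [9.0, 9.0, 4.0]])
b = lauum_upper(a)
```

The blocked routine (PSLAUUM) and the unblocked one (PSLAUU2) compute the same result; they differ only in that the former uses Level 3 PBLAS over distributed blocks while the latter works on strictly local data with Level 2 BLAS.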
scalapack-doc-1.5/man/manl/pslawil.l0100644000056400000620000000764706335610646017116 0ustar pfrauenfstaff.TH PSLAWIL l "12 May 1997" "LAPACK version 1.5 " "LAPACK routine (version 1.5 )" .SH NAME PSLAWIL - get the transform given by H44,H33, & H43H34 into V starting at row M .SH SYNOPSIS .TP 20 SUBROUTINE PSLAWIL( II, JJ, M, A, DESCA, H44, H33, H43H34, V ) .TP 20 .ti +4 INTEGER II, JJ, M .TP 20 .ti +4 REAL H33, H43H34, H44 .TP 20 .ti +4 INTEGER DESCA( * ) .TP 20 .ti +4 REAL A( * ), V( * ) .SH PURPOSE PSLAWIL gets the transform given by H44,H33, & H43H34 into V starting at row M. Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. 
.br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 II (global input) INTEGER Row owner of H(M+2,M+2) .TP 8 JJ (global input) INTEGER Column owner of H(M+2,M+2) .TP 8 M (global input) INTEGER On entry, this is where the transform starts (row M.) Unchanged on exit. .TP 8 A (global input) REAL array, dimension (DESCA(LLD_),*) On entry, the Hessenberg matrix. Unchanged on exit. .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. Unchanged on exit. H44 H33 H43H34 (global input) REAL These three values are for the double shift QR iteration. Unchanged on exit. .TP 8 V (global output) REAL array of size 3. Contains the transform on output. Implemented by: G. 
Henry, November 17, 1996 scalapack-doc-1.5/man/manl/psorg2l.l0100644000056400000620000001401506335610646017016 0ustar pfrauenfstaff.TH PSORG2L l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PSORG2L - generate an M-by-N real distributed matrix Q denoting A(IA:IA+M-1,JA:JA+N-1) with orthonormal columns, which is defined as the last N columns of a product of K elementary reflectors of order M Q = H(k) .SH SYNOPSIS .TP 20 SUBROUTINE PSORG2L( M, N, K, A, IA, JA, DESCA, TAU, WORK, LWORK, INFO ) .TP 20 .ti +4 INTEGER IA, INFO, JA, K, LWORK, M, N .TP 20 .ti +4 INTEGER DESCA( * ) .TP 20 .ti +4 REAL A( * ), TAU( * ), WORK( * ) .SH PURPOSE PSORG2L generates an M-by-N real distributed matrix Q denoting A(IA:IA+M-1,JA:JA+N-1) with orthonormal columns, which is defined as the last N columns of a product of K elementary reflectors of order M as returned by PSGEQLF. .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. 
.br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. .br CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 M (global input) INTEGER The number of rows to be operated on, i.e., the number of rows of the distributed submatrix Q. M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on, i.e., the number of columns of the distributed submatrix Q. M >= N >= 0. .TP 8 K (global input) INTEGER The number of elementary reflectors whose product defines the matrix Q. N >= K >= 0. .TP 8 A (local input/local output) REAL pointer into the local memory to an array of dimension (LLD_A,LOCc(JA+N-1)). On entry, the j-th column must contain the vector which defines the elementary reflector H(j), JA+N-K <= j <= JA+N-1, as returned by PSGEQLF in the K columns of its distributed matrix argument A(IA:*,JA+N-K:JA+N-1).
On exit, this array contains the local pieces of the M-by-N distributed matrix Q. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 TAU (local input) REAL, array, dimension LOCc(JA+N-1) This array contains the scalar factors TAU(j) of the elementary reflectors H(j) as returned by PSGEQLF. TAU is tied to the distributed matrix A. .TP 8 WORK (local workspace/local output) REAL array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. LWORK is local input and must be at least LWORK >= MpA0 + MAX( 1, NqA0 ), where IROFFA = MOD( IA-1, MB_A ), ICOFFA = MOD( JA-1, NB_A ), IAROW = INDXG2P( IA, MB_A, MYROW, RSRC_A, NPROW ), IACOL = INDXG2P( JA, NB_A, MYCOL, CSRC_A, NPCOL ), MpA0 = NUMROC( M+IROFFA, MB_A, MYROW, IAROW, NPROW ), NqA0 = NUMROC( N+ICOFFA, NB_A, MYCOL, IACOL, NPCOL ), INDXG2P and NUMROC are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 INFO (local output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-th entry had an illegal value, then INFO = -(i*100+j); if the i-th argument is a scalar and had an illegal value, then INFO = -i.
.TH PSORG2R l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PSORG2R - generate an M-by-N real distributed matrix Q denoting A(IA:IA+M-1,JA:JA+N-1) with orthonormal columns, which is defined as the first N columns of a product of K elementary reflectors of order M Q = H(1) H(2) .SH SYNOPSIS .TP 20 SUBROUTINE PSORG2R( M, N, K, A, IA, JA, DESCA, TAU, WORK, LWORK, INFO ) .TP 20 .ti +4 INTEGER IA, INFO, JA, K, LWORK, M, N .TP 20 .ti +4 INTEGER DESCA( * ) .TP 20 .ti +4 REAL A( * ), TAU( * ), WORK( * ) .SH PURPOSE PSORG2R generates an M-by-N real distributed matrix Q denoting A(IA:IA+M-1,JA:JA+N-1) with orthonormal columns, which is defined as the first N columns of a product of K elementary reflectors of order M as returned by PSGEQRF. .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclically distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ ) The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distributed over. The context itself is global, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array.
.br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 M (global input) INTEGER The number of rows to be operated on i.e the number of rows of the distributed submatrix Q. M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on i.e the number of columns of the distributed submatrix Q. M >= N >= 0. .TP 8 K (global input) INTEGER The number of elementary reflectors whose product defines the matrix Q. N >= K >= 0. .TP 8 A (local input/local output) REAL pointer into the local memory to an array of dimension (LLD_A,LOCc(JA+N-1)). On entry, the j-th column must contain the vector which defines the elementary reflector H(j), JA <= j <= JA+K-1, as returned by PSGEQRF in the K columns of its array argument A(IA:*,JA:JA+K-1). On exit, this array contains the local pieces of the M-by-N distributed matrix Q. 
.TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 TAU (local input) REAL, array, dimension LOCc(JA+K-1). This array contains the scalar factors TAU(j) of the elementary reflectors H(j) as returned by PSGEQRF. TAU is tied to the distributed matrix A. .TP 8 WORK (local workspace/local output) REAL array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. LWORK is local input and must be at least LWORK >= MpA0 + MAX( 1, NqA0 ), where IROFFA = MOD( IA-1, MB_A ), ICOFFA = MOD( JA-1, NB_A ), IAROW = INDXG2P( IA, MB_A, MYROW, RSRC_A, NPROW ), IACOL = INDXG2P( JA, NB_A, MYCOL, CSRC_A, NPCOL ), MpA0 = NUMROC( M+IROFFA, MB_A, MYROW, IAROW, NPROW ), NqA0 = NUMROC( N+ICOFFA, NB_A, MYCOL, IACOL, NPCOL ), INDXG2P and NUMROC are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 INFO (local output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. 
.TH PSORGL2 l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PSORGL2 - generate an M-by-N real distributed matrix Q denoting A(IA:IA+M-1,JA:JA+N-1) with orthonormal rows, which is defined as the first M rows of a product of K elementary reflectors of order N Q = H(k) .SH SYNOPSIS .TP 20 SUBROUTINE PSORGL2( M, N, K, A, IA, JA, DESCA, TAU, WORK, LWORK, INFO ) .TP 20 .ti +4 INTEGER IA, INFO, JA, K, LWORK, M, N .TP 20 .ti +4 INTEGER DESCA( * ) .TP 20 .ti +4 REAL A( * ), TAU( * ), WORK( * ) .SH PURPOSE PSORGL2 generates an M-by-N real distributed matrix Q denoting A(IA:IA+M-1,JA:JA+N-1) with orthonormal rows, which is defined as the first M rows of a product of K elementary reflectors of order N as returned by PSGELQF. .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclically distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ ) The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distributed over. The context itself is global, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array.
.br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 M (global input) INTEGER The number of rows to be operated on i.e the number of rows of the distributed submatrix Q. M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on i.e the number of columns of the distributed submatrix Q. N >= M >= 0. .TP 8 K (global input) INTEGER The number of elementary reflectors whose product defines the matrix Q. M >= K >= 0. .TP 8 A (local input/local output) REAL pointer into the local memory to an array of dimension (LLD_A,LOCc(JA+N-1)). On entry, the i-th row must contain the vector which defines the elementary reflector H(i), IA <= i <= IA+K-1, as returned by PSGELQF in the K rows of its distributed matrix argument A(IA:IA+K-1,JA:*). 
On exit, this array contains the local pieces of the M-by-N distributed matrix Q. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 TAU (local input) REAL, array, dimension LOCr(IA+K-1). This array contains the scalar factors TAU(i) of the elementary reflectors H(i) as returned by PSGELQF. TAU is tied to the distributed matrix A. .TP 8 WORK (local workspace/local output) REAL array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. LWORK is local input and must be at least LWORK >= NqA0 + MAX( 1, MpA0 ), where IROFFA = MOD( IA-1, MB_A ), ICOFFA = MOD( JA-1, NB_A ), IAROW = INDXG2P( IA, MB_A, MYROW, RSRC_A, NPROW ), IACOL = INDXG2P( JA, NB_A, MYCOL, CSRC_A, NPCOL ), MpA0 = NUMROC( M+IROFFA, MB_A, MYROW, IAROW, NPROW ), NqA0 = NUMROC( N+ICOFFA, NB_A, MYCOL, IACOL, NPCOL ), INDXG2P and NUMROC are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 INFO (local output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. 
.TH PSORGLQ l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PSORGLQ - generate an M-by-N real distributed matrix Q denoting A(IA:IA+M-1,JA:JA+N-1) with orthonormal rows, which is defined as the first M rows of a product of K elementary reflectors of order N Q = H(k) .SH SYNOPSIS .TP 20 SUBROUTINE PSORGLQ( M, N, K, A, IA, JA, DESCA, TAU, WORK, LWORK, INFO ) .TP 20 .ti +4 INTEGER IA, INFO, JA, K, LWORK, M, N .TP 20 .ti +4 INTEGER DESCA( * ) .TP 20 .ti +4 REAL A( * ), TAU( * ), WORK( * ) .SH PURPOSE PSORGLQ generates an M-by-N real distributed matrix Q denoting A(IA:IA+M-1,JA:JA+N-1) with orthonormal rows, which is defined as the first M rows of a product of K elementary reflectors of order N as returned by PSGELQF. .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclically distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ ) The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distributed over. The context itself is global, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array.
.br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 M (global input) INTEGER The number of rows to be operated on i.e the number of rows of the distributed submatrix Q. M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on i.e the number of columns of the distributed submatrix Q. N >= M >= 0. .TP 8 K (global input) INTEGER The number of elementary reflectors whose product defines the matrix Q. M >= K >= 0. .TP 8 A (local input/local output) REAL pointer into the local memory to an array of dimension (LLD_A,LOCc(JA+N-1)). On entry, the i-th row must contain the vector which defines the elementary reflector H(i), IA <= i <= IA+K-1, as returned by PSGELQF in the K rows of its distributed matrix argument A(IA:IA+K-1,JA:*). 
On exit, this array contains the local pieces of the M-by-N distributed matrix Q. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 TAU (local input) REAL, array, dimension LOCr(IA+K-1). This array contains the scalar factors TAU(i) of the elementary reflectors H(i) as returned by PSGELQF. TAU is tied to the distributed matrix A. .TP 8 WORK (local workspace/local output) REAL array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. LWORK is local input and must be at least LWORK >= MB_A * ( MpA0 + NqA0 + MB_A ), where IROFFA = MOD( IA-1, MB_A ), ICOFFA = MOD( JA-1, NB_A ), IAROW = INDXG2P( IA, MB_A, MYROW, RSRC_A, NPROW ), IACOL = INDXG2P( JA, NB_A, MYCOL, CSRC_A, NPCOL ), MpA0 = NUMROC( M+IROFFA, MB_A, MYROW, IAROW, NPROW ), NqA0 = NUMROC( N+ICOFFA, NB_A, MYCOL, IACOL, NPCOL ), INDXG2P and NUMROC are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 INFO (global output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. 
.TH PSORGQL l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PSORGQL - generate an M-by-N real distributed matrix Q denoting A(IA:IA+M-1,JA:JA+N-1) with orthonormal columns, which is defined as the last N columns of a product of K elementary reflectors of order M Q = H(k) .SH SYNOPSIS .TP 20 SUBROUTINE PSORGQL( M, N, K, A, IA, JA, DESCA, TAU, WORK, LWORK, INFO ) .TP 20 .ti +4 INTEGER IA, INFO, JA, K, LWORK, M, N .TP 20 .ti +4 INTEGER DESCA( * ) .TP 20 .ti +4 REAL A( * ), TAU( * ), WORK( * ) .SH PURPOSE PSORGQL generates an M-by-N real distributed matrix Q denoting A(IA:IA+M-1,JA:JA+N-1) with orthonormal columns, which is defined as the last N columns of a product of K elementary reflectors of order M as returned by PSGEQLF. .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclically distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ ) The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distributed over. The context itself is global, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array.
.br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 M (global input) INTEGER The number of rows to be operated on i.e the number of rows of the distributed submatrix Q. M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on i.e the number of columns of the distributed submatrix Q. M >= N >= 0. .TP 8 K (global input) INTEGER The number of elementary reflectors whose product defines the matrix Q. N >= K >= 0. .TP 8 A (local input/local output) REAL pointer into the local memory to an array of dimension (LLD_A,LOCc(JA+N-1)). On entry, the j-th column must contain the vector which defines the elementary reflector H(j), JA+N-K <= j <= JA+N-1, as returned by PSGEQLF in the K columns of its distributed matrix argument A(IA:*,JA+N-K:JA+N-1). 
On exit, this array contains the local pieces of the M-by-N distributed matrix Q. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 TAU (local input) REAL, array, dimension LOCc(JA+N-1) This array contains the scalar factors TAU(j) of the elementary reflectors H(j) as returned by PSGEQLF. TAU is tied to the distributed matrix A. .TP 8 WORK (local workspace/local output) REAL array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. LWORK is local input and must be at least LWORK >= NB_A * ( NqA0 + MpA0 + NB_A ), where IROFFA = MOD( IA-1, MB_A ), ICOFFA = MOD( JA-1, NB_A ), IAROW = INDXG2P( IA, MB_A, MYROW, RSRC_A, NPROW ), IACOL = INDXG2P( JA, NB_A, MYCOL, CSRC_A, NPCOL ), MpA0 = NUMROC( M+IROFFA, MB_A, MYROW, IAROW, NPROW ), NqA0 = NUMROC( N+ICOFFA, NB_A, MYCOL, IACOL, NPCOL ), INDXG2P and NUMROC are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 INFO (global output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. 
.TH PSORGQR l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PSORGQR - generate an M-by-N real distributed matrix Q denoting A(IA:IA+M-1,JA:JA+N-1) with orthonormal columns, which is defined as the first N columns of a product of K elementary reflectors of order M Q = H(1) H(2) .SH SYNOPSIS .TP 20 SUBROUTINE PSORGQR( M, N, K, A, IA, JA, DESCA, TAU, WORK, LWORK, INFO ) .TP 20 .ti +4 INTEGER IA, INFO, JA, K, LWORK, M, N .TP 20 .ti +4 INTEGER DESCA( * ) .TP 20 .ti +4 REAL A( * ), TAU( * ), WORK( * ) .SH PURPOSE PSORGQR generates an M-by-N real distributed matrix Q denoting A(IA:IA+M-1,JA:JA+N-1) with orthonormal columns, which is defined as the first N columns of a product of K elementary reflectors of order M as returned by PSGEQRF. .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclically distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ ) The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distributed over. The context itself is global, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array.
.br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 M (global input) INTEGER The number of rows to be operated on i.e the number of rows of the distributed submatrix Q. M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on i.e the number of columns of the distributed submatrix Q. M >= N >= 0. .TP 8 K (global input) INTEGER The number of elementary reflectors whose product defines the matrix Q. N >= K >= 0. .TP 8 A (local input/local output) REAL pointer into the local memory to an array of dimension (LLD_A,LOCc(JA+N-1)). On entry, the j-th column must contain the vector which defines the elementary reflector H(j), JA <= j <= JA+K-1, as returned by PSGEQRF in the K columns of its distributed matrix argument A(IA:*,JA:JA+K-1). 
On exit, this array contains the local pieces of the M-by-N distributed matrix Q. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 TAU (local input) REAL, array, dimension LOCc(JA+K-1) This array contains the scalar factors TAU(j) of the elementary reflectors H(j) as returned by PSGEQRF. TAU is tied to the distributed matrix A. .TP 8 WORK (local workspace/local output) REAL array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. LWORK is local input and must be at least LWORK >= NB_A * ( NqA0 + MpA0 + NB_A ), where IROFFA = MOD( IA-1, MB_A ), ICOFFA = MOD( JA-1, NB_A ), IAROW = INDXG2P( IA, MB_A, MYROW, RSRC_A, NPROW ), IACOL = INDXG2P( JA, NB_A, MYCOL, CSRC_A, NPCOL ), MpA0 = NUMROC( M+IROFFA, MB_A, MYROW, IAROW, NPROW ), NqA0 = NUMROC( N+ICOFFA, NB_A, MYCOL, IACOL, NPCOL ), INDXG2P and NUMROC are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 INFO (global output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. 
scalapack-doc-1.5/man/manl/psorgr2.l
.TH PSORGR2 l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PSORGR2 - generate an M-by-N real distributed matrix Q denoting A(IA:IA+M-1,JA:JA+N-1) with orthonormal rows, which is defined as the last M rows of a product of K elementary reflectors of order N Q = H(1) H(2) .SH SYNOPSIS .TP 20 SUBROUTINE PSORGR2( M, N, K, A, IA, JA, DESCA, TAU, WORK, LWORK, INFO ) .TP 20 .ti +4 INTEGER IA, INFO, JA, K, LWORK, M, N .TP 20 .ti +4 INTEGER DESCA( * ) .TP 20 .ti +4 REAL A( * ), TAU( * ), WORK( * ) .SH PURPOSE PSORGR2 generates an M-by-N real distributed matrix Q denoting A(IA:IA+M-1,JA:JA+N-1) with orthonormal rows, which is defined as the last M rows of a product of K elementary reflectors of order N as returned by PSGERQF. .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclically distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ ) The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distributed over. The context itself is global, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. 
.br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 M (global input) INTEGER The number of rows to be operated on i.e the number of rows of the distributed submatrix Q. M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on i.e the number of columns of the distributed submatrix Q. N >= M >= 0. .TP 8 K (global input) INTEGER The number of elementary reflectors whose product defines the matrix Q. M >= K >= 0. .TP 8 A (local input/local output) REAL pointer into the local memory to an array of dimension (LLD_A,LOCc(JA+N-1)). On entry, the i-th row must contain the vector which defines the elementary reflector H(i), IA+M-K <= i <= IA+M-1, as returned by PSGERQF in the K rows of its distributed matrix argument A(IA+M-K:IA+M-1,JA:*). 
On exit, this array contains the local pieces of the M-by-N distributed matrix Q. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 TAU (local input) REAL, array, dimension LOCr(IA+M-1) This array contains the scalar factors TAU(i) of the elementary reflectors H(i) as returned by PSGERQF. TAU is tied to the distributed matrix A. .TP 8 WORK (local workspace/local output) REAL array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. LWORK is local input and must be at least LWORK >= NqA0 + MAX( 1, MpA0 ), where IROFFA = MOD( IA-1, MB_A ), ICOFFA = MOD( JA-1, NB_A ), IAROW = INDXG2P( IA, MB_A, MYROW, RSRC_A, NPROW ), IACOL = INDXG2P( JA, NB_A, MYCOL, CSRC_A, NPCOL ), MpA0 = NUMROC( M+IROFFA, MB_A, MYROW, IAROW, NPROW ), NqA0 = NUMROC( N+ICOFFA, NB_A, MYCOL, IACOL, NPCOL ), INDXG2P and NUMROC are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 INFO (local output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. 
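Each process can evaluate its own minimal LWORK from the formula above before allocating WORK (or use the LWORK = -1 query). The following Python sketch mirrors the PSORGR2 bound LWORK >= NqA0 + MAX( 1, MpA0 ); the helper names `numroc`, `indxg2p`, and `psorgr2_lwork` follow the tool functions named in the text, but this is an illustrative reconstruction, not ScaLAPACK code:

```python
def numroc(n, nb, iproc, isrcproc, nprocs):
    # Block-cyclic element count, as computed by the ScaLAPACK tool function NUMROC.
    mydist = (nprocs + iproc - isrcproc) % nprocs
    nblocks = n // nb
    num = (nblocks // nprocs) * nb
    extra = nblocks % nprocs
    if mydist < extra:
        num += nb
    elif mydist == extra:
        num += n % nb
    return num

def indxg2p(i, nb, isrcproc, nprocs):
    # Process coordinate owning global index i (1-based), as in INDXG2P.
    return (isrcproc + (i - 1) // nb) % nprocs

def psorgr2_lwork(m, n, ia, ja, mb, nb, rsrc, csrc, myrow, mycol, nprow, npcol):
    # Direct transcription of the LWORK formula in the PSORGR2 man page.
    iroffa = (ia - 1) % mb
    icoffa = (ja - 1) % nb
    iarow = indxg2p(ia, mb, rsrc, nprow)
    iacol = indxg2p(ja, nb, csrc, npcol)
    mpa0 = numroc(m + iroffa, mb, myrow, iarow, nprow)
    nqa0 = numroc(n + icoffa, nb, mycol, iacol, npcol)
    return nqa0 + max(1, mpa0)
```

For a 6-by-8 matrix with 2-by-2 blocks on a 2-by-2 grid (IA = JA = 1, RSRC = CSRC = 0), process (0,0) needs LWORK >= 8 while process (1,1) needs only 6, which is why LWORK is a local quantity.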
scalapack-doc-1.5/man/manl/psorgrq.l
.TH PSORGRQ l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PSORGRQ - generate an M-by-N real distributed matrix Q denoting A(IA:IA+M-1,JA:JA+N-1) with orthonormal rows, which is defined as the last M rows of a product of K elementary reflectors of order N Q = H(1) H(2) .SH SYNOPSIS .TP 20 SUBROUTINE PSORGRQ( M, N, K, A, IA, JA, DESCA, TAU, WORK, LWORK, INFO ) .TP 20 .ti +4 INTEGER IA, INFO, JA, K, LWORK, M, N .TP 20 .ti +4 INTEGER DESCA( * ) .TP 20 .ti +4 REAL A( * ), TAU( * ), WORK( * ) .SH PURPOSE PSORGRQ generates an M-by-N real distributed matrix Q denoting A(IA:IA+M-1,JA:JA+N-1) with orthonormal rows, which is defined as the last M rows of a product of K elementary reflectors of order N as returned by PSGERQF. .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclically distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ ) The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distributed over. The context itself is global, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. 
.br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 M (global input) INTEGER The number of rows to be operated on i.e the number of rows of the distributed submatrix Q. M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on i.e the number of columns of the distributed submatrix Q. N >= M >= 0. .TP 8 K (global input) INTEGER The number of elementary reflectors whose product defines the matrix Q. M >= K >= 0. .TP 8 A (local input/local output) REAL pointer into the local memory to an array of dimension (LLD_A,LOCc(JA+N-1)). On entry, the i-th row must contain the vector which defines the elementary reflector H(i), IA+M-K <= i <= IA+M-1, as returned by PSGERQF in the K rows of its distributed matrix argument A(IA+M-K:IA+M-1,JA:*). 
On exit, this array contains the local pieces of the M-by-N distributed matrix Q. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 TAU (local input) REAL, array, dimension LOCr(IA+M-1) This array contains the scalar factors TAU(i) of the elementary reflectors H(i) as returned by PSGERQF. TAU is tied to the distributed matrix A. .TP 8 WORK (local workspace/local output) REAL array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. LWORK is local input and must be at least LWORK >= MB_A * ( MpA0 + NqA0 + MB_A ), where IROFFA = MOD( IA-1, MB_A ), ICOFFA = MOD( JA-1, NB_A ), IAROW = INDXG2P( IA, MB_A, MYROW, RSRC_A, NPROW ), IACOL = INDXG2P( JA, NB_A, MYCOL, CSRC_A, NPCOL ), MpA0 = NUMROC( M+IROFFA, MB_A, MYROW, IAROW, NPROW ), NqA0 = NUMROC( N+ICOFFA, NB_A, MYCOL, IACOL, NPCOL ), INDXG2P and NUMROC are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 INFO (global output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. 
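The INFO convention described above packs the argument index and, for array arguments, the offending entry into a single negative value. A small illustrative Python helper (hypothetical, not part of ScaLAPACK) that decodes it:

```python
def decode_info(info):
    """Decode the ScaLAPACK INFO error convention documented in these pages:
    0 means success, -i flags scalar argument i, and -(i*100+j) flags
    entry j of array argument i. Positive values are not used here."""
    if info == 0:
        return "successful exit"
    code = -info
    if code >= 100:
        i, j = divmod(code, 100)
        return f"array argument {i}: entry {j} had an illegal value"
    return f"scalar argument {code} had an illegal value"

# In PSORGRQ the descriptor DESCA is argument 7 and CTXT_ is descriptor entry 2,
# so an invalid context would be reported as INFO = -702:
print(decode_info(-702))
```

This is why the man pages distinguish scalar and array arguments in the INFO description: the two cases use different encodings of the same integer.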
scalapack-doc-1.5/man/manl/psorm2l.l
.TH PSORM2L l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PSORM2L - overwrite the general real M-by-N distributed matrix sub( C ) = C(IC:IC+M-1,JC:JC+N-1) .SH SYNOPSIS .TP 20 SUBROUTINE PSORM2L( SIDE, TRANS, M, N, K, A, IA, JA, DESCA, TAU, C, IC, JC, DESCC, WORK, LWORK, INFO ) .TP 20 .ti +4 CHARACTER SIDE, TRANS .TP 20 .ti +4 INTEGER IA, IC, INFO, JA, JC, K, LWORK, M, N .TP 20 .ti +4 INTEGER DESCA( * ), DESCC( * ) .TP 20 .ti +4 REAL A( * ), C( * ), TAU( * ), WORK( * ) .SH PURPOSE PSORM2L overwrites the general real M-by-N distributed matrix sub( C ) = C(IC:IC+M-1,JC:JC+N-1) with .br SIDE = 'L' SIDE = 'R' .br TRANS = 'N': Q * sub( C ) sub( C ) * Q .br TRANS = 'T': Q**T * sub( C ) sub( C ) * Q**T .br where Q is a real orthogonal distributed matrix defined as the product of K elementary reflectors .br Q = H(k) . . . H(2) H(1) .br as returned by PSGEQLF. Q is of order M if SIDE = 'L' and of order N if SIDE = 'R'. .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclically distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ ) The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distributed over. The context itself is global, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. 
.br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 SIDE (global input) CHARACTER = 'L': apply Q or Q**T from the Left; .br = 'R': apply Q or Q**T from the Right. .TP 8 TRANS (global input) CHARACTER .br = 'N': No transpose, apply Q; .br = 'T': Transpose, apply Q**T. .TP 8 M (global input) INTEGER The number of rows to be operated on i.e the number of rows of the distributed submatrix sub( C ). M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on i.e the number of columns of the distributed submatrix sub( C ). N >= 0. .TP 8 K (global input) INTEGER The number of elementary reflectors whose product defines the matrix Q. 
If SIDE = 'L', M >= K >= 0, if SIDE = 'R', N >= K >= 0. .TP 8 A (local input) REAL pointer into the local memory to an array of dimension (LLD_A,LOCc(JA+K-1)). On entry, the j-th column must contain the vector which defines the elemen- tary reflector H(j), JA <= j <= JA+K-1, as returned by PSGEQLF in the K columns of its distributed matrix argument A(IA:*,JA:JA+K-1). A(IA:*,JA:JA+K-1) is modified by the routine but restored on exit. If SIDE = 'L', LLD_A >= MAX( 1, LOCr(IA+M-1) ), if SIDE = 'R', LLD_A >= MAX( 1, LOCr(IA+N-1) ). .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 TAU (local input) REAL, array, dimension LOCc(JA+N-1) This array contains the scalar factors TAU(j) of the elementary reflectors H(j) as returned by PSGEQLF. TAU is tied to the distributed matrix A. .TP 8 C (local input/local output) REAL pointer into the local memory to an array of dimension (LLD_C,LOCc(JC+N-1)). On entry, the local pieces of the distributed matrix sub(C). On exit, sub( C ) is overwritten by Q*sub( C ) or Q'*sub( C ) or sub( C )*Q' or sub( C )*Q. .TP 8 IC (global input) INTEGER The row index in the global array C indicating the first row of sub( C ). .TP 8 JC (global input) INTEGER The column index in the global array C indicating the first column of sub( C ). .TP 8 DESCC (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix C. .TP 8 WORK (local workspace/local output) REAL array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. 
LWORK is local input and must be at least If SIDE = 'L', LWORK >= MpC0 + MAX( 1, NqC0 ); if SIDE = 'R', LWORK >= NqC0 + MAX( MAX( 1, MpC0 ), NUMROC( NUMROC( N+ICOFFC,NB_A,0,0,NPCOL ),NB_A,0,0,LCMQ ) ); where LCMQ = LCM / NPCOL with LCM = ILCM( NPROW, NPCOL ), IROFFC = MOD( IC-1, MB_C ), ICOFFC = MOD( JC-1, NB_C ), ICROW = INDXG2P( IC, MB_C, MYROW, RSRC_C, NPROW ), ICCOL = INDXG2P( JC, NB_C, MYCOL, CSRC_C, NPCOL ), MpC0 = NUMROC( M+IROFFC, MB_C, MYROW, ICROW, NPROW ), NqC0 = NUMROC( N+ICOFFC, NB_C, MYCOL, ICCOL, NPCOL ), ILCM, INDXG2P and NUMROC are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 INFO (local output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-th entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. Alignment requirements ====================== The distributed submatrices A(IA:*, JA:*) and C(IC:IC+M-1,JC:JC+N-1) must verify some alignment properties, namely the following expressions should be true: If SIDE = 'L', ( MB_A.EQ.MB_C .AND. IROFFA.EQ.IROFFC .AND. IAROW.EQ.ICROW ) If SIDE = 'R', ( MB_A.EQ.NB_C .AND. 
IROFFA.EQ.ICOFFC )
scalapack-doc-1.5/man/manl/psorm2r.l
.TH PSORM2R l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PSORM2R - overwrite the general real M-by-N distributed matrix sub( C ) = C(IC:IC+M-1,JC:JC+N-1) .SH SYNOPSIS .TP 20 SUBROUTINE PSORM2R( SIDE, TRANS, M, N, K, A, IA, JA, DESCA, TAU, C, IC, JC, DESCC, WORK, LWORK, INFO ) .TP 20 .ti +4 CHARACTER SIDE, TRANS .TP 20 .ti +4 INTEGER IA, IC, INFO, JA, JC, K, LWORK, M, N .TP 20 .ti +4 INTEGER DESCA( * ), DESCC( * ) .TP 20 .ti +4 REAL A( * ), C( * ), TAU( * ), WORK( * ) .SH PURPOSE PSORM2R overwrites the general real M-by-N distributed matrix sub( C ) = C(IC:IC+M-1,JC:JC+N-1) with .br SIDE = 'L' SIDE = 'R' .br TRANS = 'N': Q * sub( C ) sub( C ) * Q .br TRANS = 'T': Q**T * sub( C ) sub( C ) * Q**T .br where Q is a real orthogonal distributed matrix defined as the product of k elementary reflectors .br Q = H(1) H(2) . . . H(k) .br as returned by PSGEQRF. Q is of order M if SIDE = 'L' and of order N if SIDE = 'R'. .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclically distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ ) The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distributed over. The context itself is global, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. 
.br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 SIDE (global input) CHARACTER = 'L': apply Q or Q**T from the Left; .br = 'R': apply Q or Q**T from the Right. .TP 8 TRANS (global input) CHARACTER .br = 'N': No transpose, apply Q; .br = 'T': Transpose, apply Q**T. .TP 8 M (global input) INTEGER The number of rows to be operated on i.e the number of rows of the distributed submatrix sub( C ). M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on i.e the number of columns of the distributed submatrix sub( C ). N >= 0. .TP 8 K (global input) INTEGER The number of elementary reflectors whose product defines the matrix Q. 
If SIDE = 'L', M >= K >= 0, if SIDE = 'R', N >= K >= 0. .TP 8 A (local input) REAL pointer into the local memory to an array of dimension (LLD_A,LOCc(JA+K-1)). On entry, the j-th column must contain the vector which defines the elemen- tary reflector H(j), JA <= j <= JA+K-1, as returned by PSGEQRF in the K columns of its distributed matrix argument A(IA:*,JA:JA+K-1). A(IA:*,JA:JA+K-1) is modified by the routine but restored on exit. If SIDE = 'L', LLD_A >= MAX( 1, LOCr(IA+M-1) ); if SIDE = 'R', LLD_A >= MAX( 1, LOCr(IA+N-1) ). .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 TAU (local input) REAL, array, dimension LOCc(JA+K-1). This array contains the scalar factors TAU(j) of the elementary reflectors H(j) as returned by PSGEQRF. TAU is tied to the distributed matrix A. .TP 8 C (local input/local output) REAL pointer into the local memory to an array of dimension (LLD_C,LOCc(JC+N-1)). On entry, the local pieces of the distributed matrix sub(C). On exit, sub( C ) is overwritten by Q*sub( C ) or Q'*sub( C ) or sub( C )*Q' or sub( C )*Q. .TP 8 IC (global input) INTEGER The row index in the global array C indicating the first row of sub( C ). .TP 8 JC (global input) INTEGER The column index in the global array C indicating the first column of sub( C ). .TP 8 DESCC (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix C. .TP 8 WORK (local workspace/local output) REAL array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. 
LWORK is local input and must be at least If SIDE = 'L', LWORK >= MpC0 + MAX( 1, NqC0 ); if SIDE = 'R', LWORK >= NqC0 + MAX( MAX( 1, MpC0 ), NUMROC( NUMROC( N+ICOFFC,NB_A,0,0,NPCOL ),NB_A,0,0,LCMQ ) ); where LCMQ = LCM / NPCOL with LCM = ILCM( NPROW, NPCOL ), IROFFC = MOD( IC-1, MB_C ), ICOFFC = MOD( JC-1, NB_C ), ICROW = INDXG2P( IC, MB_C, MYROW, RSRC_C, NPROW ), ICCOL = INDXG2P( JC, NB_C, MYCOL, CSRC_C, NPCOL ), MpC0 = NUMROC( M+IROFFC, MB_C, MYROW, ICROW, NPROW ), NqC0 = NUMROC( N+ICOFFC, NB_C, MYCOL, ICCOL, NPCOL ), ILCM, INDXG2P and NUMROC are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 INFO (local output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-th entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. Alignment requirements ====================== The distributed submatrices A(IA:*, JA:*) and C(IC:IC+M-1,JC:JC+N-1) must verify some alignment properties, namely the following expressions should be true: If SIDE = 'L', ( MB_A.EQ.MB_C .AND. IROFFA.EQ.IROFFC .AND. IAROW.EQ.ICROW ) If SIDE = 'R', ( MB_A.EQ.NB_C .AND. 
IROFFA.EQ.ICOFFC )
scalapack-doc-1.5/man/manl/psormbr.l
.TH PSORMBR l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PSORMBR - overwrite the general real distributed M-by-N matrix sub( C ) = C(IC:IC+M-1,JC:JC+N-1) .SH SYNOPSIS .TP 20 SUBROUTINE PSORMBR( VECT, SIDE, TRANS, M, N, K, A, IA, JA, DESCA, TAU, C, IC, JC, DESCC, WORK, LWORK, INFO ) .TP 20 .ti +4 CHARACTER SIDE, TRANS, VECT .TP 20 .ti +4 INTEGER IA, IC, INFO, JA, JC, K, LWORK, M, N .TP 20 .ti +4 INTEGER DESCA( * ), DESCC( * ) .TP 20 .ti +4 REAL A( * ), C( * ), TAU( * ), WORK( * ) .SH PURPOSE If VECT = 'Q', PSORMBR overwrites the general real distributed M-by-N matrix sub( C ) = C(IC:IC+M-1,JC:JC+N-1) with .br SIDE = 'L' SIDE = 'R' .br TRANS = 'N': Q * sub( C ) sub( C ) * Q .br TRANS = 'T': Q**T * sub( C ) sub( C ) * Q**T .br If VECT = 'P', PSORMBR overwrites sub( C ) with .br SIDE = 'L' SIDE = 'R' .br TRANS = 'N': P * sub( C ) sub( C ) * P .br TRANS = 'T': P**T * sub( C ) sub( C ) * P**T .br Here Q and P**T are the orthogonal distributed matrices determined by PSGEBRD when reducing a real distributed matrix A(IA:*,JA:*) to bidiagonal form: A(IA:*,JA:*) = Q * B * P**T. Q and P**T are defined as products of elementary reflectors H(i) and G(i) respectively. Let nq = m if SIDE = 'L' and nq = n if SIDE = 'R'. Thus nq is the order of the orthogonal matrix Q or P**T that is applied. If VECT = 'Q', A(IA:*,JA:*) is assumed to have been an NQ-by-K matrix: .br if nq >= k, Q = H(1) H(2) . . . H(k); .br if nq < k, Q = H(1) H(2) . . . H(nq-1). .br If VECT = 'P', A(IA:*,JA:*) is assumed to have been a K-by-NQ matrix: .br if k < nq, P = G(1) G(2) . . . G(k); .br if k >= nq, P = G(1) G(2) . . . G(nq-1). .br Notes .br ===== .br Each global data object is described by an associated description vector. 
This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. 
.br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 VECT (global input) CHARACTER = 'Q': apply Q or Q**T; .br = 'P': apply P or P**T. .TP 8 SIDE (global input) CHARACTER .br = 'L': apply Q, Q**T, P or P**T from the Left; .br = 'R': apply Q, Q**T, P or P**T from the Right. .TP 8 TRANS (global input) CHARACTER .br = 'N': No transpose, apply Q or P; .br = 'T': Transpose, apply Q**T or P**T. .TP 8 M (global input) INTEGER The number of rows to be operated on i.e the number of rows of the distributed submatrix sub( C ). M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on i.e the number of columns of the distributed submatrix sub( C ). N >= 0. .TP 8 K (global input) INTEGER If VECT = 'Q', the number of columns in the original distributed matrix reduced by PSGEBRD. If VECT = 'P', the number of rows in the original distributed matrix reduced by PSGEBRD. K >= 0. .TP 8 A (local input) REAL pointer into the local memory to an array of dimension (LLD_A,LOCc(JA+MIN(NQ,K)-1)) if VECT='Q', and (LLD_A,LOCc(JA+NQ-1)) if VECT = 'P'. NQ = M if SIDE = 'L', and NQ = N otherwise. The vectors which define the elementary reflectors H(i) and G(i), whose products determine the matrices Q and P, as returned by PSGEBRD. If VECT = 'Q', LLD_A >= max(1,LOCr(IA+NQ-1)); if VECT = 'P', LLD_A >= max(1,LOCr(IA+MIN(NQ,K)-1)). .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. 
The array descriptor for the distributed matrix A. .TP 8 TAU (local input) REAL array, dimension LOCc(JA+MIN(NQ,K)-1) if VECT = 'Q', LOCr(IA+MIN(NQ,K)-1) if VECT = 'P', TAU(i) must contain the scalar factor of the elementary reflector H(i) or G(i), which determines Q or P, as returned by PDGEBRD in its array argument TAUQ or TAUP. TAU is tied to the distributed matrix A. .TP 8 C (local input/local output) REAL pointer into the local memory to an array of dimension (LLD_C,LOCc(JC+N-1)). On entry, the local pieces of the distributed matrix sub(C). On exit, if VECT='Q', sub( C ) is overwritten by Q*sub( C ) or Q'*sub( C ) or sub( C )*Q' or sub( C )*Q; if VECT='P, sub( C ) is overwritten by P*sub( C ) or P'*sub( C ) or sub( C )*P or sub( C )*P'. .TP 8 IC (global input) INTEGER The row index in the global array C indicating the first row of sub( C ). .TP 8 JC (global input) INTEGER The column index in the global array C indicating the first column of sub( C ). .TP 8 DESCC (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix C. .TP 8 WORK (local workspace/local output) REAL array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. 
LWORK is local input and must be at least If SIDE = 'L', NQ = M; if( (VECT = 'Q' and NQ >= K) or (VECT <> 'Q' and NQ > K) ), IAA=IA; JAA=JA; MI=M; NI=N; ICC=IC; JCC=JC; else IAA=IA+1; JAA=JA; MI=M-1; NI=N; ICC=IC+1; JCC=JC; end if else if SIDE = 'R', NQ = N; if( (VECT = 'Q' and NQ >= K) or (VECT <> 'Q' and NQ > K) ), IAA=IA; JAA=JA; MI=M; NI=N; ICC=IC; JCC=JC; else IAA=IA; JAA=JA+1; MI=M; NI=N-1; ICC=IC; JCC=JC+1; end if end if If VECT = 'Q', If SIDE = 'L', LWORK >= MAX( (NB_A*(NB_A-1))/2, (NqC0 + MpC0)*NB_A ) + NB_A * NB_A else if SIDE = 'R', LWORK >= MAX( (NB_A*(NB_A-1))/2, ( NqC0 + MAX( NpA0 + NUMROC( NUMROC( NI+ICOFFC, NB_A, 0, 0, NPCOL ), NB_A, 0, 0, LCMQ ), MpC0 ) )*NB_A ) + NB_A * NB_A end if else if VECT <> 'Q', if SIDE = 'L', LWORK >= MAX( (MB_A*(MB_A-1))/2, ( MpC0 + MAX( MqA0 + NUMROC( NUMROC( MI+IROFFC, MB_A, 0, 0, NPROW ), MB_A, 0, 0, LCMP ), NqC0 ) )*MB_A ) + MB_A * MB_A else if SIDE = 'R', LWORK >= MAX( (MB_A*(MB_A-1))/2, (MpC0 + NqC0)*MB_A ) + MB_A * MB_A end if end if where LCMP = LCM / NPROW, LCMQ = LCM / NPCOL, with LCM = ILCM( NPROW, NPCOL ), IROFFA = MOD( IAA-1, MB_A ), ICOFFA = MOD( JAA-1, NB_A ), IAROW = INDXG2P( IAA, MB_A, MYROW, RSRC_A, NPROW ), IACOL = INDXG2P( JAA, NB_A, MYCOL, CSRC_A, NPCOL ), MqA0 = NUMROC( MI+ICOFFA, NB_A, MYCOL, IACOL, NPCOL ), NpA0 = NUMROC( NI+IROFFA, MB_A, MYROW, IAROW, NPROW ), IROFFC = MOD( ICC-1, MB_C ), ICOFFC = MOD( JCC-1, NB_C ), ICROW = INDXG2P( ICC, MB_C, MYROW, RSRC_C, NPROW ), ICCOL = INDXG2P( JCC, NB_C, MYCOL, CSRC_C, NPCOL ), MpC0 = NUMROC( MI+IROFFC, MB_C, MYROW, ICROW, NPROW ), NqC0 = NUMROC( NI+ICOFFC, NB_C, MYCOL, ICCOL, NPCOL ), ILCM, INDXG2P and NUMROC are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays.
Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 INFO (global output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. Alignment requirements ====================== The distributed submatrices A(IA:*, JA:*) and C(IC:IC+M-1,JC:JC+N-1) must verify some alignment properties, namely the following expressions should be true: If VECT = 'Q', If SIDE = 'L', ( MB_A.EQ.MB_C .AND. IROFFA.EQ.IROFFC .AND. IAROW.EQ.ICROW ) If SIDE = 'R', ( MB_A.EQ.NB_C .AND. IROFFA.EQ.ICOFFC ) else If SIDE = 'L', ( MB_A.EQ.MB_C .AND. ICOFFA.EQ.IROFFC ) If SIDE = 'R', ( NB_A.EQ.NB_C .AND. ICOFFA.EQ.ICOFFC .AND. IACOL.EQ.ICCOL ) end if
scalapack-doc-1.5/man/manl/psormhr.l
.TH PSORMHR l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PSORMHR - overwrite the general real M-by-N distributed matrix sub( C ) = C(IC:IC+M-1,JC:JC+N-1) .SH SYNOPSIS .TP 20 SUBROUTINE PSORMHR( SIDE, TRANS, M, N, ILO, IHI, A, IA, JA, DESCA, TAU, C, IC, JC, DESCC, WORK, LWORK, INFO ) .TP 20 .ti +4 CHARACTER SIDE, TRANS .TP 20 .ti +4 INTEGER IA, IC, IHI, ILO, INFO, JA, JC, LWORK, M, N .TP 20 .ti +4 INTEGER DESCA( * ), DESCC( * ) .TP 20 .ti +4 REAL A( * ), C( * ), TAU( * ), WORK( * ) .SH PURPOSE PSORMHR overwrites the general real M-by-N distributed matrix sub( C ) = C(IC:IC+M-1,JC:JC+N-1) with
.br
                     SIDE = 'L'            SIDE = 'R'
.br
TRANS = 'N':     Q * sub( C )          sub( C ) * Q
.br
TRANS = 'T':     Q**T * sub( C )       sub( C ) * Q**T
.br
where Q is a real orthogonal distributed matrix of order nq, with nq = m if SIDE = 'L' and nq = n if SIDE = 'R'. Q is defined as the product of IHI-ILO elementary reflectors, as returned by PSGEHRD: Q = H(ilo) H(ilo+1) . . . H(ihi-1).
.br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. 
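The mapping the Notes describe, from a global index to the process that owns it and the position in that process's local storage, can be sketched for one dimension in Python. This is an illustrative helper only (the function name is mine, not a ScaLAPACK routine) and it assumes the first block lives on process 0, i.e. RSRC_A = 0 or CSRC_A = 0:

```python
def owner_and_local(i, nb, nprocs):
    """For 1-based global index i of a dimension distributed in nb-sized
    blocks over nprocs processes (first block on process 0), return
    (owning process, 1-based local index on that process)."""
    block = (i - 1) // nb                      # which block holds index i
    proc = block % nprocs                      # blocks are dealt out cyclically
    local = nb * (block // nprocs) + (i - 1) % nb + 1
    return proc, local

# Global row 7 with MB_A = 2 on NPROW = 3 process rows:
# rows 1-2 -> p0, 3-4 -> p1, 5-6 -> p2, 7-8 -> p0 again.
print(owner_and_local(7, 2, 3))   # (0, 3)
```

Counting, per process, how many global indices map to it reproduces exactly the LOCr()/LOCc() quantities defined above.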
.br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 SIDE (global input) CHARACTER = 'L': apply Q or Q**T from the Left; .br = 'R': apply Q or Q**T from the Right. .TP 8 TRANS (global input) CHARACTER .br = 'N': No transpose, apply Q; .br = 'T': Transpose, apply Q**T. .TP 8 M (global input) INTEGER The number of rows to be operated on, i.e., the number of rows of the distributed submatrix sub( C ). M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on, i.e., the number of columns of the distributed submatrix sub( C ). N >= 0. .TP 8 ILO (global input) INTEGER IHI (global input) INTEGER ILO and IHI must have the same values as in the previous call of PSGEHRD. Q is equal to the unit matrix except in the distributed submatrix Q(ia+ilo:ia+ihi-1,ja+ilo:ja+ihi-1). If SIDE = 'L', 1 <= ILO <= IHI <= max(1,M); if SIDE = 'R', 1 <= ILO <= IHI <= max(1,N); ILO and IHI are relative indices. .TP 8 A (local input) REAL pointer into the local memory to an array of dimension (LLD_A,LOCc(JA+M-1)) if SIDE='L', and (LLD_A,LOCc(JA+N-1)) if SIDE = 'R'. The vectors which define the elementary reflectors, as returned by PSGEHRD. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 TAU (local input) REAL, array, dimension LOCc(JA+M-2) if SIDE = 'L', and LOCc(JA+N-2) if SIDE = 'R'.
This array contains the scalar factors TAU(j) of the elementary reflectors H(j) as returned by PSGEHRD. TAU is tied to the distributed matrix A. .TP 8 C (local input/local output) REAL pointer into the local memory to an array of dimension (LLD_C,LOCc(JC+N-1)). On entry, the local pieces of the distributed matrix sub(C). On exit, sub( C ) is overwritten by Q*sub( C ) or Q'*sub( C ) or sub( C )*Q' or sub( C )*Q. .TP 8 IC (global input) INTEGER The row index in the global array C indicating the first row of sub( C ). .TP 8 JC (global input) INTEGER The column index in the global array C indicating the first column of sub( C ). .TP 8 DESCC (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix C. .TP 8 WORK (local workspace/local output) REAL array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. LWORK is local input and must be at least IAA = IA + ILO; JAA = JA+ILO-1; If SIDE = 'L', MI = IHI-ILO; NI = N; ICC = IC + ILO; JCC = JC; LWORK >= MAX( (NB_A*(NB_A-1))/2, (NqC0 + MpC0)*NB_A ) + NB_A * NB_A else if SIDE = 'R', MI = M; NI = IHI-ILO; ICC = IC; JCC = JC + ILO; LWORK >= MAX( (NB_A*(NB_A-1))/2, ( NqC0 + MAX( NpA0 + NUMROC( NUMROC( NI+ICOFFC, NB_A, 0, 0, NPCOL ), NB_A, 0, 0, LCMQ ), MpC0 ) )*NB_A ) + NB_A * NB_A end if where LCMQ = LCM / NPCOL with LCM = ILCM( NPROW, NPCOL ), IROFFA = MOD( IAA-1, MB_A ), ICOFFA = MOD( JAA-1, NB_A ), IAROW = INDXG2P( IAA, MB_A, MYROW, RSRC_A, NPROW ), NpA0 = NUMROC( NI+IROFFA, MB_A, MYROW, IAROW, NPROW ), IROFFC = MOD( ICC-1, MB_C ), ICOFFC = MOD( JCC-1, NB_C ), ICROW = INDXG2P( ICC, MB_C, MYROW, RSRC_C, NPROW ), ICCOL = INDXG2P( JCC, NB_C, MYCOL, CSRC_C, NPCOL ), MpC0 = NUMROC( MI+IROFFC, MB_C, MYROW, ICROW, NPROW ), NqC0 = NUMROC( NI+ICOFFC, NB_C, MYCOL, ICCOL, NPCOL ), ILCM, INDXG2P and NUMROC are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by
calling the subroutine BLACS_GRIDINFO. If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 INFO (global output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. Alignment requirements ====================== The distributed submatrices A(IA:*, JA:*) and C(IC:IC+M-1,JC:JC+N-1) must verify some alignment properties, namely the following expressions should be true: If SIDE = 'L', ( MB_A.EQ.MB_C .AND. IROFFA.EQ.IROFFC .AND. IAROW.EQ.ICROW ) If SIDE = 'R', ( MB_A.EQ.NB_C .AND. IROFFA.EQ.ICOFFC )
scalapack-doc-1.5/man/manl/psorml2.l
.TH PSORML2 l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PSORML2 - overwrite the general real M-by-N distributed matrix sub( C ) = C(IC:IC+M-1,JC:JC+N-1) .SH SYNOPSIS .TP 20 SUBROUTINE PSORML2( SIDE, TRANS, M, N, K, A, IA, JA, DESCA, TAU, C, IC, JC, DESCC, WORK, LWORK, INFO ) .TP 20 .ti +4 CHARACTER SIDE, TRANS .TP 20 .ti +4 INTEGER IA, IC, INFO, JA, JC, K, LWORK, M, N .TP 20 .ti +4 INTEGER DESCA( * ), DESCC( * ) .TP 20 .ti +4 REAL A( * ), C( * ), TAU( * ), WORK( * ) .SH PURPOSE PSORML2 overwrites the general real M-by-N distributed matrix sub( C ) = C(IC:IC+M-1,JC:JC+N-1) with
.br
                     SIDE = 'L'            SIDE = 'R'
.br
TRANS = 'N':     Q * sub( C )          sub( C ) * Q
.br
TRANS = 'T':     Q**T * sub( C )       sub( C ) * Q**T
.br
where Q is a real orthogonal distributed matrix defined as the product of K elementary reflectors .br Q = H(k) . . . H(2) H(1) .br as returned by PSGELQF. Q is of order M if SIDE = 'L' and of order N if SIDE = 'R'.
.br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. 
.br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 SIDE (global input) CHARACTER = 'L': apply Q or Q**T from the Left; .br = 'R': apply Q or Q**T from the Right. .TP 8 TRANS (global input) CHARACTER .br = 'N': No transpose, apply Q; .br = 'T': Transpose, apply Q**T. .TP 8 M (global input) INTEGER The number of rows to be operated on i.e the number of rows of the distributed submatrix sub( C ). M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on i.e the number of columns of the distributed submatrix sub( C ). N >= 0. .TP 8 K (global input) INTEGER The number of elementary reflectors whose product defines the matrix Q. If SIDE = 'L', M >= K >= 0, if SIDE = 'R', N >= K >= 0. .TP 8 A (local input) REAL pointer into the local memory to an array of dimension (LLD_A,LOCc(JA+M-1)) if SIDE='L', and (LLD_A,LOCc(JA+N-1)) if SIDE='R', where LLD_A >= max(1,LOCr(IA+K-1)); On entry, the i-th row must contain the vector which defines the elementary reflector H(i), IA <= i <= IA+K-1, as returned by PSGELQF in the K rows of its distributed matrix argument A(IA:IA+K-1,JA:*). .br A(IA:IA+K-1,JA:*) is modified by the routine but restored on exit. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 TAU (local input) REAL, array, dimension LOCc(IA+K-1). 
This array contains the scalar factors TAU(i) of the elementary reflectors H(i) as returned by PSGELQF. TAU is tied to the distributed matrix A. .TP 8 C (local input/local output) REAL pointer into the local memory to an array of dimension (LLD_C,LOCc(JC+N-1)). On entry, the local pieces of the distributed matrix sub(C). On exit, sub( C ) is overwritten by Q*sub( C ) or Q'*sub( C ) or sub( C )*Q' or sub( C )*Q. .TP 8 IC (global input) INTEGER The row index in the global array C indicating the first row of sub( C ). .TP 8 JC (global input) INTEGER The column index in the global array C indicating the first column of sub( C ). .TP 8 DESCC (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix C. .TP 8 WORK (local workspace/local output) REAL array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. LWORK is local input and must be at least If SIDE = 'L', LWORK >= MpC0 + MAX( MAX( 1, NqC0 ), NUMROC( NUMROC( M+IROFFC,MB_A,0,0,NPROW ),MB_A,0,0,LCMP ) ); if SIDE = 'R', LWORK >= NqC0 + MAX( 1, MpC0 ); where LCMP = LCM / NPROW with LCM = ILCM( NPROW, NPCOL ), IROFFC = MOD( IC-1, MB_C ), ICOFFC = MOD( JC-1, NB_C ), ICROW = INDXG2P( IC, MB_C, MYROW, RSRC_C, NPROW ), ICCOL = INDXG2P( JC, NB_C, MYCOL, CSRC_C, NPCOL ), MpC0 = NUMROC( M+IROFFC, MB_C, MYROW, ICROW, NPROW ), NqC0 = NUMROC( N+ICOFFC, NB_C, MYCOL, ICCOL, NPCOL ), ILCM, INDXG2P and NUMROC are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA.
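The LWORK formulas above repeatedly call INDXG2P to find which process row or column owns a global index. Its arithmetic is simple enough to sketch in Python; this is an illustrative transliteration of ScaLAPACK's indxg2p.f, in which the third argument (the MYROW/MYCOL slot) is accepted but unused, as in the Fortran source:

```python
def indxg2p(indxglob, nb, iproc, isrcproc, nprocs):
    """Process coordinate owning 1-based global index indxglob of a
    dimension distributed in nb-sized blocks starting on process
    isrcproc. iproc is kept for signature compatibility only."""
    return (isrcproc + (indxglob - 1) // nb) % nprocs

# Global rows 1..8, MB = 2, first block on process row 0, NPROW = 3:
# rows 1-2 -> 0, 3-4 -> 1, 5-6 -> 2, 7-8 -> 0.
print([indxg2p(i, 2, 0, 0, 3) for i in (1, 2, 3, 5, 7)])   # [0, 0, 1, 2, 0]
```

With a nonzero source process (RSRC_C or CSRC_C), the same formula simply rotates the ownership pattern by that offset.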
.TP 8 INFO (local output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. Alignment requirements ====================== The distributed submatrices A(IA:*, JA:*) and C(IC:IC+M-1,JC:JC+N-1) must verify some alignment properties, namely the following expressions should be true: If SIDE = 'L', ( NB_A.EQ.MB_C .AND. ICOFFA.EQ.IROFFC ) If SIDE = 'R', ( NB_A.EQ.NB_C .AND. ICOFFA.EQ.ICOFFC .AND. IACOL.EQ.ICCOL )
scalapack-doc-1.5/man/manl/psormlq.l
.TH PSORMLQ l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PSORMLQ - overwrite the general real M-by-N distributed matrix sub( C ) = C(IC:IC+M-1,JC:JC+N-1) .SH SYNOPSIS .TP 20 SUBROUTINE PSORMLQ( SIDE, TRANS, M, N, K, A, IA, JA, DESCA, TAU, C, IC, JC, DESCC, WORK, LWORK, INFO ) .TP 20 .ti +4 CHARACTER SIDE, TRANS .TP 20 .ti +4 INTEGER IA, IC, INFO, JA, JC, K, LWORK, M, N .TP 20 .ti +4 INTEGER DESCA( * ), DESCC( * ) .TP 20 .ti +4 REAL A( * ), C( * ), TAU( * ), WORK( * ) .SH PURPOSE PSORMLQ overwrites the general real M-by-N distributed matrix sub( C ) = C(IC:IC+M-1,JC:JC+N-1) with
.br
                     SIDE = 'L'            SIDE = 'R'
.br
TRANS = 'N':     Q * sub( C )          sub( C ) * Q
.br
TRANS = 'T':     Q**T * sub( C )       sub( C ) * Q**T
.br
where Q is a real orthogonal distributed matrix defined as the product of K elementary reflectors .br Q = H(k) . . . H(2) H(1) .br as returned by PSGELQF. Q is of order M if SIDE = 'L' and of order N if SIDE = 'R'. .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclically distributed array. Such a global array has an associated description vector DESCA.
In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). 
An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 SIDE (global input) CHARACTER = 'L': apply Q or Q**T from the Left; .br = 'R': apply Q or Q**T from the Right. .TP 8 TRANS (global input) CHARACTER .br = 'N': No transpose, apply Q; .br = 'T': Transpose, apply Q**T. .TP 8 M (global input) INTEGER The number of rows to be operated on i.e the number of rows of the distributed submatrix sub( C ). M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on i.e the number of columns of the distributed submatrix sub( C ). N >= 0. .TP 8 K (global input) INTEGER The number of elementary reflectors whose product defines the matrix Q. If SIDE = 'L', M >= K >= 0, if SIDE = 'R', N >= K >= 0. .TP 8 A (local input) REAL pointer into the local memory to an array of dimension (LLD_A,LOCc(JA+M-1)) if SIDE='L', and (LLD_A,LOCc(JA+N-1)) if SIDE='R', where LLD_A >= max(1,LOCr(IA+K-1)); On entry, the i-th row must contain the vector which defines the elementary reflector H(i), IA <= i <= IA+K-1, as returned by PSGELQF in the K rows of its distributed matrix argument A(IA:IA+K-1,JA:*). .br A(IA:IA+K-1,JA:*) is modified by the routine but restored on exit. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 TAU (local input) REAL, array, dimension LOCc(IA+K-1). This array contains the scalar factors TAU(i) of the elementary reflectors H(i) as returned by PSGELQF. TAU is tied to the distributed matrix A. .TP 8 C (local input/local output) REAL pointer into the local memory to an array of dimension (LLD_C,LOCc(JC+N-1)). 
On entry, the local pieces of the distributed matrix sub(C). On exit, sub( C ) is overwritten by Q*sub( C ) or Q'*sub( C ) or sub( C )*Q' or sub( C )*Q. .TP 8 IC (global input) INTEGER The row index in the global array C indicating the first row of sub( C ). .TP 8 JC (global input) INTEGER The column index in the global array C indicating the first column of sub( C ). .TP 8 DESCC (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix C. .TP 8 WORK (local workspace/local output) REAL array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. LWORK is local input and must be at least if SIDE = 'L', LWORK >= MAX( (MB_A*(MB_A-1))/2, ( MpC0 + MAX( MqA0 + NUMROC( NUMROC( M+IROFFC, MB_A, 0, 0, NPROW ), MB_A, 0, 0, LCMP ), NqC0 ) )*MB_A ) + MB_A * MB_A else if SIDE = 'R', LWORK >= MAX( (MB_A*(MB_A-1))/2, (MpC0 + NqC0)*MB_A ) + MB_A * MB_A end if where LCMP = LCM / NPROW with LCM = ILCM( NPROW, NPCOL ), IROFFA = MOD( IA-1, MB_A ), ICOFFA = MOD( JA-1, NB_A ), IACOL = INDXG2P( JA, NB_A, MYCOL, CSRC_A, NPCOL ), MqA0 = NUMROC( M+ICOFFA, NB_A, MYCOL, IACOL, NPCOL ), IROFFC = MOD( IC-1, MB_C ), ICOFFC = MOD( JC-1, NB_C ), ICROW = INDXG2P( IC, MB_C, MYROW, RSRC_C, NPROW ), ICCOL = INDXG2P( JC, NB_C, MYCOL, CSRC_C, NPCOL ), MpC0 = NUMROC( M+IROFFC, MB_C, MYROW, ICROW, NPROW ), NqC0 = NUMROC( N+ICOFFC, NB_C, MYCOL, ICCOL, NPCOL ), ILCM, INDXG2P and NUMROC are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA.
.TP 8 INFO (global output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. Alignment requirements ====================== The distributed submatrices A(IA:*, JA:*) and C(IC:IC+M-1,JC:JC+N-1) must verify some alignment properties, namely the following expressions should be true: If SIDE = 'L', ( NB_A.EQ.MB_C .AND. ICOFFA.EQ.IROFFC ) If SIDE = 'R', ( NB_A.EQ.NB_C .AND. ICOFFA.EQ.ICOFFC .AND. IACOL.EQ.ICCOL )
scalapack-doc-1.5/man/manl/psormql.l
.TH PSORMQL l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PSORMQL - overwrite the general real M-by-N distributed matrix sub( C ) = C(IC:IC+M-1,JC:JC+N-1) .SH SYNOPSIS .TP 20 SUBROUTINE PSORMQL( SIDE, TRANS, M, N, K, A, IA, JA, DESCA, TAU, C, IC, JC, DESCC, WORK, LWORK, INFO ) .TP 20 .ti +4 CHARACTER SIDE, TRANS .TP 20 .ti +4 INTEGER IA, IC, INFO, JA, JC, K, LWORK, M, N .TP 20 .ti +4 INTEGER DESCA( * ), DESCC( * ) .TP 20 .ti +4 REAL A( * ), C( * ), TAU( * ), WORK( * ) .SH PURPOSE PSORMQL overwrites the general real M-by-N distributed matrix sub( C ) = C(IC:IC+M-1,JC:JC+N-1) with
.br
                     SIDE = 'L'            SIDE = 'R'
.br
TRANS = 'N':     Q * sub( C )          sub( C ) * Q
.br
TRANS = 'T':     Q**T * sub( C )       sub( C ) * Q**T
.br
where Q is a real orthogonal distributed matrix defined as the product of K elementary reflectors .br Q = H(k) . . . H(2) H(1) .br as returned by PSGEQLF. Q is of order M if SIDE = 'L' and of order N if SIDE = 'R'. .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclically distributed array. Such a global array has an associated description vector DESCA.
In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). 
An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 SIDE (global input) CHARACTER = 'L': apply Q or Q**T from the Left; .br = 'R': apply Q or Q**T from the Right. .TP 8 TRANS (global input) CHARACTER .br = 'N': No transpose, apply Q; .br = 'T': Transpose, apply Q**T. .TP 8 M (global input) INTEGER The number of rows to be operated on i.e the number of rows of the distributed submatrix sub( C ). M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on i.e the number of columns of the distributed submatrix sub( C ). N >= 0. .TP 8 K (global input) INTEGER The number of elementary reflectors whose product defines the matrix Q. If SIDE = 'L', M >= K >= 0, if SIDE = 'R', N >= K >= 0. .TP 8 A (local input) REAL pointer into the local memory to an array of dimension (LLD_A,LOCc(JA+K-1)). On entry, the j-th column must contain the vector which defines the elemen- tary reflector H(j), JA <= j <= JA+K-1, as returned by PSGEQLF in the K columns of its distributed matrix argument A(IA:*,JA:JA+K-1). A(IA:*,JA:JA+K-1) is modified by the routine but restored on exit. If SIDE = 'L', LLD_A >= MAX( 1, LOCr(IA+M-1) ), if SIDE = 'R', LLD_A >= MAX( 1, LOCr(IA+N-1) ). .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 TAU (local input) REAL, array, dimension LOCc(JA+N-1) This array contains the scalar factors TAU(j) of the elementary reflectors H(j) as returned by PSGEQLF. TAU is tied to the distributed matrix A. .TP 8 C (local input/local output) REAL pointer into the local memory to an array of dimension (LLD_C,LOCc(JC+N-1)). 
On entry, the local pieces of the distributed matrix sub(C). On exit, sub( C ) is overwritten by Q*sub( C ) or Q'*sub( C ) or sub( C )*Q' or sub( C )*Q. .TP 8 IC (global input) INTEGER The row index in the global array C indicating the first row of sub( C ). .TP 8 JC (global input) INTEGER The column index in the global array C indicating the first column of sub( C ). .TP 8 DESCC (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix C. .TP 8 WORK (local workspace/local output) REAL array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. LWORK is local input and must be at least If SIDE = 'L', LWORK >= MAX( (NB_A*(NB_A-1))/2, (NqC0 + MpC0)*NB_A ) + NB_A * NB_A else if SIDE = 'R', LWORK >= MAX( (NB_A*(NB_A-1))/2, ( NqC0 + MAX( NpA0 + NUMROC( NUMROC( N+ICOFFC, NB_A, 0, 0, NPCOL ), NB_A, 0, 0, LCMQ ), MpC0 ) )*NB_A ) + NB_A * NB_A end if where LCMQ = LCM / NPCOL with LCM = ILCM( NPROW, NPCOL ), IROFFA = MOD( IA-1, MB_A ), ICOFFA = MOD( JA-1, NB_A ), IAROW = INDXG2P( IA, MB_A, MYROW, RSRC_A, NPROW ), NpA0 = NUMROC( N+IROFFA, MB_A, MYROW, IAROW, NPROW ), IROFFC = MOD( IC-1, MB_C ), ICOFFC = MOD( JC-1, NB_C ), ICROW = INDXG2P( IC, MB_C, MYROW, RSRC_C, NPROW ), ICCOL = INDXG2P( JC, NB_C, MYCOL, CSRC_C, NPCOL ), MpC0 = NUMROC( M+IROFFC, MB_C, MYROW, ICROW, NPROW ), NqC0 = NUMROC( N+ICOFFC, NB_C, MYCOL, ICCOL, NPCOL ), ILCM, INDXG2P and NUMROC are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA.
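The INFO error-code convention used throughout these routines packs both the argument position and, for array arguments such as descriptors, the offending entry into one negative integer: INFO = -(i*100+j) for entry j of argument i, or INFO = -i for a scalar argument. A small Python helper (illustrative only, not part of ScaLAPACK) shows how to unpack it:

```python
def decode_info(info):
    """Unpack a negative INFO value from a ScaLAPACK routine.
    Returns (argument_index, entry_index); entry_index is None when the
    offending argument was a scalar (INFO = -i rather than -(i*100+j))."""
    if info >= 0:
        raise ValueError("INFO >= 0 does not encode an argument error")
    code = -info
    if code >= 100:
        return code // 100, code % 100   # array argument i, entry j
    return code, None                    # scalar argument i

print(decode_info(-904))   # (9, 4): entry 4 of the 9th argument
print(decode_info(-3))     # (3, None): the 3rd (scalar) argument
```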
.TP 8 INFO (global output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-th entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. Alignment requirements ====================== The distributed submatrices A(IA:*, JA:*) and C(IC:IC+M-1,JC:JC+N-1) must verify some alignment properties, namely the following expressions should be true: If SIDE = 'L', ( MB_A.EQ.MB_C .AND. IROFFA.EQ.IROFFC .AND. IAROW.EQ.ICROW ) If SIDE = 'R', ( MB_A.EQ.NB_C .AND. IROFFA.EQ.ICOFFC ) scalapack-doc-1.5/man/manl/psormqr.l0100644000056400000620000001763606335610650017140 0ustar pfrauenfstaff.TH PSORMQR l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PSORMQR - overwrite the general real M-by-N distributed matrix sub( C ) = C(IC:IC+M-1,JC:JC+N-1) with Q * sub( C ), Q**T * sub( C ), sub( C ) * Q or sub( C ) * Q**T .SH SYNOPSIS .TP 20 SUBROUTINE PSORMQR( SIDE, TRANS, M, N, K, A, IA, JA, DESCA, TAU, C, IC, JC, DESCC, WORK, LWORK, INFO ) .TP 20 .ti +4 CHARACTER SIDE, TRANS .TP 20 .ti +4 INTEGER IA, IC, INFO, JA, JC, K, LWORK, M, N .TP 20 .ti +4 INTEGER DESCA( * ), DESCC( * ) .TP 20 .ti +4 REAL A( * ), C( * ), TAU( * ), WORK( * ) .SH PURPOSE PSORMQR overwrites the general real M-by-N distributed matrix sub( C ) = C(IC:IC+M-1,JC:JC+N-1) with Q * sub( C ) if SIDE = 'L' and TRANS = 'N', Q**T * sub( C ) if SIDE = 'L' and TRANS = 'T', sub( C ) * Q if SIDE = 'R' and TRANS = 'N', or sub( C ) * Q**T if SIDE = 'R' and TRANS = 'T', .br where Q is a real orthogonal distributed matrix defined as the product of k elementary reflectors .br Q = H(1) H(2) . . . H(k) .br as returned by PSGEQRF. Q is of order M if SIDE = 'L' and of order N if SIDE = 'R'. .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclically distributed array. Such a global array has an associated description vector DESCA. 
In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ ) The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distributed over. The context itself is global, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). 
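The block-cyclic counting that NUMROC performs can be sketched in Python. This is an illustration only, not part of ScaLAPACK; the function mirrors the Fortran tool function's arguments (global extent, block size, calling process coordinate, source process, number of processes, all process coordinates zero-based as in the descriptor).

```python
import math

def numroc(n, nb, iproc, isrcproc, nprocs):
    """Local size of an n-wide dimension split into nb-sized blocks dealt
    cyclically over nprocs processes starting at isrcproc, as seen from
    process iproc (analogue of the ScaLAPACK tool function NUMROC)."""
    mydist = (nprocs + iproc - isrcproc) % nprocs  # distance from source process
    nblocks = n // nb                              # number of full blocks
    num = (nblocks // nprocs) * nb                 # complete rounds of the deal
    extrablocks = nblocks % nprocs                 # left-over full blocks
    if mydist < extrablocks:
        num += nb                                  # this process gets one more full block
    elif mydist == extrablocks:
        num += n % nb                              # this process gets the trailing partial block
    return num

# The local pieces over one process row/column add up to the global
# dimension, and each respects the ceil( ceil(n/nb)/nprocs )*nb bound.
n, nb, nprocs = 10, 3, 2
sizes = [numroc(n, nb, p, 0, nprocs) for p in range(nprocs)]  # [6, 4]
bound = math.ceil(math.ceil(n / nb) / nprocs) * nb            # 6
```

The upper bound quoted in these pages is attained exactly when every process except the last one involved holds only full blocks, as in this example.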
An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 SIDE (global input) CHARACTER = 'L': apply Q or Q**T from the Left; .br = 'R': apply Q or Q**T from the Right. .TP 8 TRANS (global input) CHARACTER .br = 'N': No transpose, apply Q; .br = 'T': Transpose, apply Q**T. .TP 8 M (global input) INTEGER The number of rows to be operated on, i.e. the number of rows of the distributed submatrix sub( C ). M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on, i.e. the number of columns of the distributed submatrix sub( C ). N >= 0. .TP 8 K (global input) INTEGER The number of elementary reflectors whose product defines the matrix Q. If SIDE = 'L', M >= K >= 0, if SIDE = 'R', N >= K >= 0. .TP 8 A (local input) REAL pointer into the local memory to an array of dimension (LLD_A,LOCc(JA+K-1)). On entry, the j-th column must contain the vector which defines the elementary reflector H(j), JA <= j <= JA+K-1, as returned by PSGEQRF in the K columns of its distributed matrix argument A(IA:*,JA:JA+K-1). A(IA:*,JA:JA+K-1) is modified by the routine but restored on exit. If SIDE = 'L', LLD_A >= MAX( 1, LOCr(IA+M-1) ); if SIDE = 'R', LLD_A >= MAX( 1, LOCr(IA+N-1) ). .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 TAU (local input) REAL, array, dimension LOCc(JA+K-1). This array contains the scalar factors TAU(j) of the elementary reflectors H(j) as returned by PSGEQRF. TAU is tied to the distributed matrix A. .TP 8 C (local input/local output) REAL pointer into the local memory to an array of dimension (LLD_C,LOCc(JC+N-1)). 
On entry, the local pieces of the distributed matrix sub(C). On exit, sub( C ) is overwritten by Q*sub( C ) or Q'*sub( C ) or sub( C )*Q' or sub( C )*Q. .TP 8 IC (global input) INTEGER The row index in the global array C indicating the first row of sub( C ). .TP 8 JC (global input) INTEGER The column index in the global array C indicating the first column of sub( C ). .TP 8 DESCC (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix C. .TP 8 WORK (local workspace/local output) REAL array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. LWORK is local input and must be at least If SIDE = 'L', LWORK >= MAX( (NB_A*(NB_A-1))/2, (NqC0 + MpC0)*NB_A ) + NB_A * NB_A else if SIDE = 'R', LWORK >= MAX( (NB_A*(NB_A-1))/2, ( NqC0 + MAX( NpA0 + NUMROC( NUMROC( N+ICOFFC, NB_A, 0, 0, NPCOL ), NB_A, 0, 0, LCMQ ), MpC0 ) )*NB_A ) + NB_A * NB_A end if where LCMQ = LCM / NPCOL with LCM = ILCM( NPROW, NPCOL ), IROFFA = MOD( IA-1, MB_A ), ICOFFA = MOD( JA-1, NB_A ), IAROW = INDXG2P( IA, MB_A, MYROW, RSRC_A, NPROW ), NpA0 = NUMROC( N+IROFFA, MB_A, MYROW, IAROW, NPROW ), IROFFC = MOD( IC-1, MB_C ), ICOFFC = MOD( JC-1, NB_C ), ICROW = INDXG2P( IC, MB_C, MYROW, RSRC_C, NPROW ), ICCOL = INDXG2P( JC, NB_C, MYCOL, CSRC_C, NPCOL ), MpC0 = NUMROC( M+IROFFC, MB_C, MYROW, ICROW, NPROW ), NqC0 = NUMROC( N+ICOFFC, NB_C, MYCOL, ICCOL, NPCOL ), ILCM, INDXG2P and NUMROC are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. 
.TP 8 INFO (global output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-th entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. Alignment requirements ====================== The distributed submatrices A(IA:*, JA:*) and C(IC:IC+M-1,JC:JC+N-1) must verify some alignment properties, namely the following expressions should be true: If SIDE = 'L', ( MB_A.EQ.MB_C .AND. IROFFA.EQ.IROFFC .AND. IAROW.EQ.ICROW ) If SIDE = 'R', ( MB_A.EQ.NB_C .AND. IROFFA.EQ.ICOFFC ) scalapack-doc-1.5/man/manl/psormr2.l0100644000056400000620000001720706335610650017033 0ustar pfrauenfstaff.TH PSORMR2 l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PSORMR2 - overwrite the general real M-by-N distributed matrix sub( C ) = C(IC:IC+M-1,JC:JC+N-1) with Q * sub( C ), Q**T * sub( C ), sub( C ) * Q or sub( C ) * Q**T .SH SYNOPSIS .TP 20 SUBROUTINE PSORMR2( SIDE, TRANS, M, N, K, A, IA, JA, DESCA, TAU, C, IC, JC, DESCC, WORK, LWORK, INFO ) .TP 20 .ti +4 CHARACTER SIDE, TRANS .TP 20 .ti +4 INTEGER IA, IC, INFO, JA, JC, K, LWORK, M, N .TP 20 .ti +4 INTEGER DESCA( * ), DESCC( * ) .TP 20 .ti +4 REAL A( * ), C( * ), TAU( * ), WORK( * ) .SH PURPOSE PSORMR2 overwrites the general real M-by-N distributed matrix sub( C ) = C(IC:IC+M-1,JC:JC+N-1) with Q * sub( C ) if SIDE = 'L' and TRANS = 'N', Q**T * sub( C ) if SIDE = 'L' and TRANS = 'T', sub( C ) * Q if SIDE = 'R' and TRANS = 'N', or sub( C ) * Q**T if SIDE = 'R' and TRANS = 'T', .br where Q is a real orthogonal distributed matrix defined as the product of K elementary reflectors .br Q = H(1) H(2) . . . H(k) .br as returned by PSGERQF. Q is of order M if SIDE = 'L' and of order N if SIDE = 'R'. .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclically distributed array. Such a global array has an associated description vector DESCA. 
In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ ) The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distributed over. The context itself is global, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). 
An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 SIDE (global input) CHARACTER = 'L': apply Q or Q**T from the Left; .br = 'R': apply Q or Q**T from the Right. .TP 8 TRANS (global input) CHARACTER .br = 'N': No transpose, apply Q; .br = 'T': Transpose, apply Q**T. .TP 8 M (global input) INTEGER The number of rows to be operated on, i.e. the number of rows of the distributed submatrix sub( C ). M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on, i.e. the number of columns of the distributed submatrix sub( C ). N >= 0. .TP 8 K (global input) INTEGER The number of elementary reflectors whose product defines the matrix Q. If SIDE = 'L', M >= K >= 0, if SIDE = 'R', N >= K >= 0. .TP 8 A (local input) REAL pointer into the local memory to an array of dimension (LLD_A,LOCc(JA+M-1)) if SIDE='L', and (LLD_A,LOCc(JA+N-1)) if SIDE='R', where LLD_A >= MAX(1,LOCr(IA+K-1)); On entry, the i-th row must contain the vector which defines the elementary reflector H(i), IA <= i <= IA+K-1, as returned by PSGERQF in the K rows of its distributed matrix argument A(IA:IA+K-1,JA:*). .br A(IA:IA+K-1,JA:*) is modified by the routine but restored on exit. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 TAU (local input) REAL, array, dimension LOCc(IA+K-1). This array contains the scalar factors TAU(i) of the elementary reflectors H(i) as returned by PSGERQF. TAU is tied to the distributed matrix A. .TP 8 C (local input/local output) REAL pointer into the local memory to an array of dimension (LLD_C,LOCc(JC+N-1)). 
On entry, the local pieces of the distributed matrix sub(C). On exit, sub( C ) is overwritten by Q*sub( C ) or Q'*sub( C ) or sub( C )*Q' or sub( C )*Q. .TP 8 IC (global input) INTEGER The row index in the global array C indicating the first row of sub( C ). .TP 8 JC (global input) INTEGER The column index in the global array C indicating the first column of sub( C ). .TP 8 DESCC (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix C. .TP 8 WORK (local workspace/local output) REAL array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. LWORK is local input and must be at least If SIDE = 'L', LWORK >= MpC0 + MAX( MAX( 1, NqC0 ), NUMROC( NUMROC( M+IROFFC,MB_A,0,0,NPROW ),MB_A,0,0,LCMP ) ); if SIDE = 'R', LWORK >= NqC0 + MAX( 1, MpC0 ); where LCMP = LCM / NPROW with LCM = ILCM( NPROW, NPCOL ), IROFFC = MOD( IC-1, MB_C ), ICOFFC = MOD( JC-1, NB_C ), ICROW = INDXG2P( IC, MB_C, MYROW, RSRC_C, NPROW ), ICCOL = INDXG2P( JC, NB_C, MYCOL, CSRC_C, NPCOL ), MpC0 = NUMROC( M+IROFFC, MB_C, MYROW, ICROW, NPROW ), NqC0 = NUMROC( N+ICOFFC, NB_C, MYCOL, ICCOL, NPCOL ), ILCM, INDXG2P and NUMROC are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 INFO (local output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-th entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. 
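The INFO error encoding described above is easy to decode mechanically. Below is a minimal Python sketch; the helper name decode_info is invented for illustration and is not a ScaLAPACK routine.

```python
def decode_info(info):
    """Split a negative ScaLAPACK INFO value into (argument, entry).

    INFO = -(i*100+j) flags entry j of array argument i;
    INFO = -i flags scalar argument i (entry is None).
    Returns None for a successful exit (INFO >= 0).
    """
    if info >= 0:
        return None
    code = -info
    if code >= 100:
        return (code // 100, code % 100)  # i-th (array) argument, j-th entry
    return (code, None)                   # i-th scalar argument

# DESCA is the 9th argument of PSORMR2, so INFO = -902 would flag DESCA(2).
bad_desc = decode_info(-902)   # (9, 2)
bad_scalar = decode_info(-5)   # (5, None)
```

Because INFO is global for most of these routines, every process in the grid reports the same value, so decoding it once on any process suffices.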
Alignment requirements ====================== The distributed submatrices A(IA:*, JA:*) and C(IC:IC+M-1,JC:JC+N-1) must verify some alignment properties, namely the following expressions should be true: If SIDE = 'L', ( NB_A.EQ.MB_C .AND. ICOFFA.EQ.IROFFC ) If SIDE = 'R', ( NB_A.EQ.NB_C .AND. ICOFFA.EQ.ICOFFC .AND. IACOL.EQ.ICCOL ) scalapack-doc-1.5/man/manl/psormr3.l0100644000056400000620000001753206335610650017035 0ustar pfrauenfstaff.TH PSORMR3 l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PSORMR3 - overwrite the general real M-by-N distributed matrix sub( C ) = C(IC:IC+M-1,JC:JC+N-1) with Q * sub( C ), Q**T * sub( C ), sub( C ) * Q or sub( C ) * Q**T .SH SYNOPSIS .TP 20 SUBROUTINE PSORMR3( SIDE, TRANS, M, N, K, L, A, IA, JA, DESCA, TAU, C, IC, JC, DESCC, WORK, LWORK, INFO ) .TP 20 .ti +4 CHARACTER SIDE, TRANS .TP 20 .ti +4 INTEGER IA, IC, INFO, JA, JC, K, L, LWORK, M, N .TP 20 .ti +4 INTEGER DESCA( * ), DESCC( * ) .TP 20 .ti +4 REAL A( * ), C( * ), TAU( * ), WORK( * ) .SH PURPOSE PSORMR3 overwrites the general real M-by-N distributed matrix sub( C ) = C(IC:IC+M-1,JC:JC+N-1) with Q * sub( C ) if SIDE = 'L' and TRANS = 'N', Q**T * sub( C ) if SIDE = 'L' and TRANS = 'T', sub( C ) * Q if SIDE = 'R' and TRANS = 'N', or sub( C ) * Q**T if SIDE = 'R' and TRANS = 'T', .br where Q is a real orthogonal distributed matrix defined as the product of K elementary reflectors .br Q = H(1) H(2) . . . H(k) .br as returned by PSTZRZF. Q is of order M if SIDE = 'L' and of order N if SIDE = 'R'. .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclically distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ ) The descriptor type. 
In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distributed over. The context itself is global, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 SIDE (global input) CHARACTER = 'L': apply Q or Q**T from the Left; .br = 'R': apply Q or Q**T from the Right. .TP 8 TRANS (global input) CHARACTER .br = 'N': No transpose, apply Q; .br = 'T': Transpose, apply Q**T. 
.TP 8 M (global input) INTEGER The number of rows to be operated on, i.e. the number of rows of the distributed submatrix sub( C ). M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on, i.e. the number of columns of the distributed submatrix sub( C ). N >= 0. .TP 8 K (global input) INTEGER The number of elementary reflectors whose product defines the matrix Q. If SIDE = 'L', M >= K >= 0, if SIDE = 'R', N >= K >= 0. .TP 8 L (global input) INTEGER The columns of the distributed submatrix sub( A ) containing the meaningful part of the Householder reflectors. If SIDE = 'L', M >= L >= 0, if SIDE = 'R', N >= L >= 0. .TP 8 A (local input) REAL pointer into the local memory to an array of dimension (LLD_A,LOCc(JA+M-1)) if SIDE='L', and (LLD_A,LOCc(JA+N-1)) if SIDE='R', where LLD_A >= MAX(1,LOCr(IA+K-1)); On entry, the i-th row must contain the vector which defines the elementary reflector H(i), IA <= i <= IA+K-1, as returned by PSTZRZF in the K rows of its distributed matrix argument A(IA:IA+K-1,JA:*). .br A(IA:IA+K-1,JA:*) is modified by the routine but restored on exit. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 TAU (local input) REAL, array, dimension LOCc(IA+K-1). This array contains the scalar factors TAU(i) of the elementary reflectors H(i) as returned by PSTZRZF. TAU is tied to the distributed matrix A. .TP 8 C (local input/local output) REAL pointer into the local memory to an array of dimension (LLD_C,LOCc(JC+N-1)). On entry, the local pieces of the distributed matrix sub(C). On exit, sub( C ) is overwritten by Q*sub( C ) or Q'*sub( C ) or sub( C )*Q' or sub( C )*Q. 
.TP 8 IC (global input) INTEGER The row index in the global array C indicating the first row of sub( C ). .TP 8 JC (global input) INTEGER The column index in the global array C indicating the first column of sub( C ). .TP 8 DESCC (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix C. .TP 8 WORK (local workspace/local output) REAL array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. LWORK is local input and must be at least If SIDE = 'L', LWORK >= MpC0 + MAX( MAX( 1, NqC0 ), NUMROC( NUMROC( M+IROFFC,MB_A,0,0,NPROW ),MB_A,0,0,LCMP ) ); if SIDE = 'R', LWORK >= NqC0 + MAX( 1, MpC0 ); where LCMP = LCM / NPROW with LCM = ILCM( NPROW, NPCOL ), IROFFC = MOD( IC-1, MB_C ), ICOFFC = MOD( JC-1, NB_C ), ICROW = INDXG2P( IC, MB_C, MYROW, RSRC_C, NPROW ), ICCOL = INDXG2P( JC, NB_C, MYCOL, CSRC_C, NPCOL ), MpC0 = NUMROC( M+IROFFC, MB_C, MYROW, ICROW, NPROW ), NqC0 = NUMROC( N+ICOFFC, NB_C, MYCOL, ICCOL, NPCOL ), ILCM, INDXG2P and NUMROC are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 INFO (local output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-th entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. Alignment requirements ====================== The distributed submatrices A(IA:*, JA:*) and C(IC:IC+M-1,JC:JC+N-1) must verify some alignment properties, namely the following expressions should be true: If SIDE = 'L', ( NB_A.EQ.MB_C .AND. 
ICOFFA.EQ.IROFFC ) If SIDE = 'R', ( NB_A.EQ.NB_C .AND. ICOFFA.EQ.ICOFFC .AND. IACOL.EQ.ICCOL ) scalapack-doc-1.5/man/manl/psormrq.l0100644000056400000620000001762006335610650017131 0ustar pfrauenfstaff.TH PSORMRQ l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PSORMRQ - overwrite the general real M-by-N distributed matrix sub( C ) = C(IC:IC+M-1,JC:JC+N-1) with Q * sub( C ), Q**T * sub( C ), sub( C ) * Q or sub( C ) * Q**T .SH SYNOPSIS .TP 20 SUBROUTINE PSORMRQ( SIDE, TRANS, M, N, K, A, IA, JA, DESCA, TAU, C, IC, JC, DESCC, WORK, LWORK, INFO ) .TP 20 .ti +4 CHARACTER SIDE, TRANS .TP 20 .ti +4 INTEGER IA, IC, INFO, JA, JC, K, LWORK, M, N .TP 20 .ti +4 INTEGER DESCA( * ), DESCC( * ) .TP 20 .ti +4 REAL A( * ), C( * ), TAU( * ), WORK( * ) .SH PURPOSE PSORMRQ overwrites the general real M-by-N distributed matrix sub( C ) = C(IC:IC+M-1,JC:JC+N-1) with Q * sub( C ) if SIDE = 'L' and TRANS = 'N', Q**T * sub( C ) if SIDE = 'L' and TRANS = 'T', sub( C ) * Q if SIDE = 'R' and TRANS = 'N', or sub( C ) * Q**T if SIDE = 'R' and TRANS = 'T', .br where Q is a real orthogonal distributed matrix defined as the product of K elementary reflectors .br Q = H(1) H(2) . . . H(k) .br as returned by PSGERQF. Q is of order M if SIDE = 'L' and of order N if SIDE = 'R'. .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclically distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ ) The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distributed over. The context itself is global, but the handle (the integer value) may vary. 
.br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 SIDE (global input) CHARACTER = 'L': apply Q or Q**T from the Left; .br = 'R': apply Q or Q**T from the Right. .TP 8 TRANS (global input) CHARACTER .br = 'N': No transpose, apply Q; .br = 'T': Transpose, apply Q**T. .TP 8 M (global input) INTEGER The number of rows to be operated on, i.e. the number of rows of the distributed submatrix sub( C ). M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on, i.e. the number of columns of the distributed submatrix sub( C ). N >= 0. 
.TP 8 K (global input) INTEGER The number of elementary reflectors whose product defines the matrix Q. If SIDE = 'L', M >= K >= 0, if SIDE = 'R', N >= K >= 0. .TP 8 A (local input) REAL pointer into the local memory to an array of dimension (LLD_A,LOCc(JA+M-1)) if SIDE='L', and (LLD_A,LOCc(JA+N-1)) if SIDE='R', where LLD_A >= MAX(1,LOCr(IA+K-1)); On entry, the i-th row must contain the vector which defines the elementary reflector H(i), IA <= i <= IA+K-1, as returned by PSGERQF in the K rows of its distributed matrix argument A(IA:IA+K-1,JA:*). .br A(IA:IA+K-1,JA:*) is modified by the routine but restored on exit. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 TAU (local input) REAL, array, dimension LOCc(IA+K-1). This array contains the scalar factors TAU(i) of the elementary reflectors H(i) as returned by PSGERQF. TAU is tied to the distributed matrix A. .TP 8 C (local input/local output) REAL pointer into the local memory to an array of dimension (LLD_C,LOCc(JC+N-1)). On entry, the local pieces of the distributed matrix sub(C). On exit, sub( C ) is overwritten by Q*sub( C ) or Q'*sub( C ) or sub( C )*Q' or sub( C )*Q. .TP 8 IC (global input) INTEGER The row index in the global array C indicating the first row of sub( C ). .TP 8 JC (global input) INTEGER The column index in the global array C indicating the first column of sub( C ). .TP 8 DESCC (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix C. .TP 8 WORK (local workspace/local output) REAL array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. 
LWORK is local input and must be at least if SIDE = 'L', LWORK >= MAX( (MB_A*(MB_A-1))/2, ( MpC0 + MAX( MqA0 + NUMROC( NUMROC( M+IROFFC, MB_A, 0, 0, NPROW ), MB_A, 0, 0, LCMP ), NqC0 ) )*MB_A ) + MB_A * MB_A else if SIDE = 'R', LWORK >= MAX( (MB_A*(MB_A-1))/2, (MpC0 + NqC0)*MB_A ) + MB_A * MB_A end if where LCMP = LCM / NPROW with LCM = ILCM( NPROW, NPCOL ), IROFFA = MOD( IA-1, MB_A ), ICOFFA = MOD( JA-1, NB_A ), IACOL = INDXG2P( JA, NB_A, MYCOL, CSRC_A, NPCOL ), MqA0 = NUMROC( M+ICOFFA, NB_A, MYCOL, IACOL, NPCOL ), IROFFC = MOD( IC-1, MB_C ), ICOFFC = MOD( JC-1, NB_C ), ICROW = INDXG2P( IC, MB_C, MYROW, RSRC_C, NPROW ), ICCOL = INDXG2P( JC, NB_C, MYCOL, CSRC_C, NPCOL ), MpC0 = NUMROC( M+IROFFC, MB_C, MYROW, ICROW, NPROW ), NqC0 = NUMROC( N+ICOFFC, NB_C, MYCOL, ICCOL, NPCOL ), ILCM, INDXG2P and NUMROC are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 INFO (global output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-th entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. Alignment requirements ====================== The distributed submatrices A(IA:*, JA:*) and C(IC:IC+M-1,JC:JC+N-1) must verify some alignment properties, namely the following expressions should be true: If SIDE = 'L', ( NB_A.EQ.MB_C .AND. ICOFFA.EQ.IROFFC ) If SIDE = 'R', ( NB_A.EQ.NB_C .AND. ICOFFA.EQ.ICOFFC .AND. 
IACOL.EQ.ICCOL ) scalapack-doc-1.5/man/manl/psormrz.l0100644000056400000620000002014306335610650017134 0ustar pfrauenfstaff.TH PSORMRZ l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PSORMRZ - overwrite the general real M-by-N distributed matrix sub( C ) = C(IC:IC+M-1,JC:JC+N-1) with Q * sub( C ), Q**T * sub( C ), sub( C ) * Q or sub( C ) * Q**T .SH SYNOPSIS .TP 20 SUBROUTINE PSORMRZ( SIDE, TRANS, M, N, K, L, A, IA, JA, DESCA, TAU, C, IC, JC, DESCC, WORK, LWORK, INFO ) .TP 20 .ti +4 CHARACTER SIDE, TRANS .TP 20 .ti +4 INTEGER IA, IC, INFO, JA, JC, K, L, LWORK, M, N .TP 20 .ti +4 INTEGER DESCA( * ), DESCC( * ) .TP 20 .ti +4 REAL A( * ), C( * ), TAU( * ), WORK( * ) .SH PURPOSE PSORMRZ overwrites the general real M-by-N distributed matrix sub( C ) = C(IC:IC+M-1,JC:JC+N-1) with Q * sub( C ) if SIDE = 'L' and TRANS = 'N', Q**T * sub( C ) if SIDE = 'L' and TRANS = 'T', sub( C ) * Q if SIDE = 'R' and TRANS = 'N', or sub( C ) * Q**T if SIDE = 'R' and TRANS = 'T', .br where Q is a real orthogonal distributed matrix defined as the product of K elementary reflectors .br Q = H(1) H(2) . . . H(k) .br as returned by PSTZRZF. Q is of order M if SIDE = 'L' and of order N if SIDE = 'R'. .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclically distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ ) The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distributed over. The context itself is global, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. 
.br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 SIDE (global input) CHARACTER = 'L': apply Q or Q**T from the Left; .br = 'R': apply Q or Q**T from the Right. .TP 8 TRANS (global input) CHARACTER .br = 'N': No transpose, apply Q; .br = 'T': Transpose, apply Q**T. .TP 8 M (global input) INTEGER The number of rows to be operated on i.e the number of rows of the distributed submatrix sub( C ). M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on i.e the number of columns of the distributed submatrix sub( C ). N >= 0. .TP 8 K (global input) INTEGER The number of elementary reflectors whose product defines the matrix Q. 
If SIDE = 'L', M >= K >= 0, if SIDE = 'R', N >= K >= 0. .TP 8 L (global input) INTEGER The columns of the distributed submatrix sub( A ) containing the meaningful part of the Householder reflectors. If SIDE = 'L', M >= L >= 0, if SIDE = 'R', N >= L >= 0. .TP 8 A (local input) REAL pointer into the local memory to an array of dimension (LLD_A,LOCc(JA+M-1)) if SIDE='L', and (LLD_A,LOCc(JA+N-1)) if SIDE='R', where LLD_A >= MAX(1,LOCr(IA+K-1)); On entry, the i-th row must contain the vector which defines the elementary reflector H(i), IA <= i <= IA+K-1, as returned by PSTZRZF in the K rows of its distributed matrix argument A(IA:IA+K-1,JA:*). .br A(IA:IA+K-1,JA:*) is modified by the routine but restored on exit. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 TAU (local input) REAL, array, dimension LOCc(IA+K-1). This array contains the scalar factors TAU(i) of the elementary reflectors H(i) as returned by PSTZRZF. TAU is tied to the distributed matrix A. .TP 8 C (local input/local output) REAL pointer into the local memory to an array of dimension (LLD_C,LOCc(JC+N-1)). On entry, the local pieces of the distributed matrix sub(C). On exit, sub( C ) is overwritten by Q*sub( C ) or Q'*sub( C ) or sub( C )*Q' or sub( C )*Q. .TP 8 IC (global input) INTEGER The row index in the global array C indicating the first row of sub( C ). .TP 8 JC (global input) INTEGER The column index in the global array C indicating the first column of sub( C ). .TP 8 DESCC (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix C. 
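The workspace formulas below are expressed in terms of the ScaLAPACK tool function NUMROC, which computes how many rows or columns of a block-cyclically distributed dimension land on a given process. A Python transcription of its logic (a sketch following the standard TOOLS/numroc.f algorithm; argument names mirror the Fortran ones):

```python
def numroc(n, nb, iproc, isrcproc, nprocs):
    """Number of rows/columns of a global dimension N, distributed in
    blocks of size NB over NPROCS processes (first block owned by
    process ISRCPROC), that end up on process IPROC."""
    # Distance of this process from the one holding the first block.
    mydist = (nprocs + iproc - isrcproc) % nprocs
    nblocks = n // nb                  # number of full blocks
    num = (nblocks // nprocs) * nb     # complete "rounds" of blocks
    extrablks = nblocks % nprocs       # leftover full blocks
    if mydist < extrablks:
        num += nb                      # this process gets one extra full block
    elif mydist == extrablks:
        num += n % nb                  # this process gets the final partial block
    return num

# Two process rows, 10 global rows, MB = 2:
# process row 0 owns 6 local rows, process row 1 owns 4.
print(numroc(10, 2, 0, 0, 2), numroc(10, 2, 1, 0, 2))
```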
.TP 8 WORK (local workspace/local output) REAL array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. LWORK is local input and must be at least if SIDE = 'L', LWORK >= MAX( (MB_A*(MB_A-1))/2, ( MpC0 + MAX( MqA0 + NUMROC( NUMROC( M+IROFFC, MB_A, 0, 0, NPROW ), MB_A, 0, 0, LCMP ), NqC0 ) )*MB_A ) + MB_A * MB_A else if SIDE = 'R', LWORK >= MAX( (MB_A*(MB_A-1))/2, (MpC0 + NqC0)*MB_A ) + MB_A * MB_A end if where LCMP = LCM / NPROW with LCM = ICLM( NPROW, NPCOL ), IROFFA = MOD( IA-1, MB_A ), ICOFFA = MOD( JA-1, NB_A ), IACOL = INDXG2P( JA, NB_A, MYCOL, CSRC_A, NPCOL ), MqA0 = NUMROC( M+ICOFFA, NB_A, MYCOL, IACOL, NPCOL ), IROFFC = MOD( IC-1, MB_C ), ICOFFC = MOD( JC-1, NB_C ), ICROW = INDXG2P( IC, MB_C, MYROW, RSRC_C, NPROW ), ICCOL = INDXG2P( JC, NB_C, MYCOL, CSRC_C, NPCOL ), MpC0 = NUMROC( M+IROFFC, MB_C, MYROW, ICROW, NPROW ), NqC0 = NUMROC( N+ICOFFC, NB_C, MYCOL, ICCOL, NPCOL ), ILCM, INDXG2P and NUMROC are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 INFO (global output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. Alignment requirements ====================== The distributed submatrices A(IA:*, JA:*) and C(IC:IC+M-1,JC:JC+N-1) must verify some alignment properties, namely the following expressions should be true: If SIDE = 'L', ( NB_A.EQ.MB_C .AND. ICOFFA.EQ.IROFFC ) If SIDE = 'R', ( NB_A.EQ.NB_C .AND. ICOFFA.EQ.ICOFFC .AND. 
IACOL.EQ.ICCOL ) scalapack-doc-1.5/man/manl/psormtr.l0100644000056400000620000002041506335610650017130 0ustar pfrauenfstaff.TH PSORMTR l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PSORMTR - overwrite the general real M-by-N distributed matrix sub( C ) = C(IC:IC+M-1,JC:JC+N-1) with SIDE = 'L' SIDE = 'R' TRANS = 'N' .SH SYNOPSIS .TP 20 SUBROUTINE PSORMTR( SIDE, UPLO, TRANS, M, N, A, IA, JA, DESCA, TAU, C, IC, JC, DESCC, WORK, LWORK, INFO ) .TP 20 .ti +4 CHARACTER SIDE, TRANS, UPLO .TP 20 .ti +4 INTEGER IA, IC, INFO, JA, JC, LWORK, M, N .TP 20 .ti +4 INTEGER DESCA( * ), DESCC( * ) .TP 20 .ti +4 REAL A( * ), C( * ), TAU( * ), WORK( * ) .SH PURPOSE PSORMTR overwrites the general real M-by-N distributed matrix sub( C ) = C(IC:IC+M-1,JC:JC+N-1) with TRANS = 'T': Q**T * sub( C ) sub( C ) * Q**T .br where Q is a real orthogonal distributed matrix of order nq, with nq = m if SIDE = 'L' and nq = n if SIDE = 'R'. Q is defined as the product of nq-1 elementary reflectors, as returned by PSSYTRD: if UPLO = 'U', Q = H(nq-1) . . . H(2) H(1); .br if UPLO = 'L', Q = H(1) H(2) . . . H(nq-1). .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. 
.br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 SIDE (global input) CHARACTER = 'L': apply Q or Q**T from the Left; .br = 'R': apply Q or Q**T from the Right. .TP 8 UPLO (global input) CHARACTER .br = 'U': Upper triangle of A(IA:*,JA:*) contains elementary reflectors from PSSYTRD; = 'L': Lower triangle of A(IA:*,JA:*) contains elementary reflectors from PSSYTRD. .TP 8 TRANS (global input) CHARACTER = 'N': No transpose, apply Q; .br = 'T': Transpose, apply Q**T. 
.TP 8 M (global input) INTEGER The number of rows to be operated on i.e the number of rows of the distributed submatrix sub( C ). M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on i.e the number of columns of the distributed submatrix sub( C ). N >= 0. .TP 8 A (local input) REAL pointer into the local memory to an array of dimension (LLD_A,LOCc(JA+M-1)) if SIDE='L', or (LLD_A,LOCc(JA+N-1)) if SIDE = 'R'. The vectors which define the elementary reflectors, as returned by PSSYTRD. If SIDE = 'L', LLD_A >= max(1,LOCr(IA+M-1)); if SIDE = 'R', LLD_A >= max(1,LOCr(IA+N-1)). .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 TAU (local input) REAL array, dimension LTAU, where if SIDE = 'L' and UPLO = 'U', LTAU = LOCc(M_A), if SIDE = 'L' and UPLO = 'L', LTAU = LOCc(JA+M-2), if SIDE = 'R' and UPLO = 'U', LTAU = LOCc(N_A), if SIDE = 'R' and UPLO = 'L', LTAU = LOCc(JA+N-2). TAU(i) must contain the scalar factor of the elementary reflector H(i), as returned by PSSYTRD. TAU is tied to the distributed matrix A. .TP 8 C (local input/local output) REAL pointer into the local memory to an array of dimension (LLD_C,LOCc(JC+N-1)). On entry, the local pieces of the distributed matrix sub(C). On exit, sub( C ) is overwritten by Q*sub( C ) or Q'*sub( C ) or sub( C )*Q' or sub( C )*Q. .TP 8 IC (global input) INTEGER The row index in the global array C indicating the first row of sub( C ). .TP 8 JC (global input) INTEGER The column index in the global array C indicating the first column of sub( C ). .TP 8 DESCC (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix C. 
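The LWORK = -1 workspace query described below follows a two-call protocol: first call with LWORK = -1 to learn the optimal size (returned in WORK(1)), then allocate and call again. This Python mock (the routine here is a stand-in with an invented size formula, not a real ScaLAPACK binding) illustrates only the calling pattern:

```python
def p_routine(c, work, lwork):
    """Stand-in for a ScaLAPACK routine; pretends to need len(c) + 16 workspace."""
    needed = len(c) + 16
    if lwork == -1:            # workspace query: report size, do no work
        work[0] = needed
        return 0
    if lwork < needed:         # cf. INFO < 0 for an illegal LWORK argument
        return -17
    # ... the actual computation would happen here ...
    return 0

c = [0.0] * 100
work = [0.0]
p_routine(c, work, -1)              # first call: query optimal LWORK
lwork = int(work[0])                # allocate what was requested
work = [0.0] * lwork
info = p_routine(c, work, lwork)    # second call: do the work
```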
.TP 8 WORK (local workspace/local output) REAL array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. LWORK is local input and must be at least If UPLO = 'U', IAA = IA, JAA = JA+1, ICC = IC, JCC = JC; else UPLO = 'L', IAA = IA+1, JAA = JA; if SIDE = 'L', ICC = IC+1; JCC = JC; else ICC = IC; JCC = JC+1; end if end if If SIDE = 'L', MI = M-1; NI = N; LWORK >= MAX( (NB_A*(NB_A-1))/2, (NqC0 + MpC0)*NB_A ) + NB_A * NB_A else if SIDE = 'R', MI = M; MI = N-1; LWORK >= MAX( (NB_A*(NB_A-1))/2, ( NqC0 + MAX( NpA0 + NUMROC( NUMROC( NI+ICOFFC, NB_A, 0, 0, NPCOL ), NB_A, 0, 0, LCMQ ), MpC0 ) )*NB_A ) + NB_A * NB_A end if where LCMQ = LCM / NPCOL with LCM = ICLM( NPROW, NPCOL ), IROFFA = MOD( IAA-1, MB_A ), ICOFFA = MOD( JAA-1, NB_A ), IAROW = INDXG2P( IAA, MB_A, MYROW, RSRC_A, NPROW ), NpA0 = NUMROC( NI+IROFFA, MB_A, MYROW, IAROW, NPROW ), IROFFC = MOD( ICC-1, MB_C ), ICOFFC = MOD( JCC-1, NB_C ), ICROW = INDXG2P( ICC, MB_C, MYROW, RSRC_C, NPROW ), ICCOL = INDXG2P( JCC, NB_C, MYCOL, CSRC_C, NPCOL ), MpC0 = NUMROC( MI+IROFFC, MB_C, MYROW, ICROW, NPROW ), NqC0 = NUMROC( NI+ICOFFC, NB_C, MYCOL, ICCOL, NPCOL ), ILCM, INDXG2P and NUMROC are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 INFO (global output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. 
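The INFO error encoding above can be unpacked mechanically: negative values below -100 name both the offending argument and the entry within it. A small decoder (a hypothetical helper for illustration, not part of ScaLAPACK):

```python
def decode_info(info):
    """Decode the INFO value returned by a ScaLAPACK routine.

    = 0          -> successful exit
    -i           -> scalar argument i had an illegal value
    -(i*100 + j) -> entry j of array argument i had an illegal value
    """
    if info == 0:
        return "successful exit"
    if info > 0:
        return "routine-specific failure, INFO = %d" % info
    code = -info
    if code >= 100:
        return "array argument %d, entry %d is illegal" % (code // 100, code % 100)
    return "scalar argument %d is illegal" % code

print(decode_info(-502))   # array argument 5, entry 2
```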
Alignment requirements ====================== The distributed submatrices A(IA:*, JA:*) and C(IC:IC+M-1,JC:JC+N-1) must verify some alignment properties, namely the following expressions should be true: If SIDE = 'L', ( MB_A.EQ.MB_C .AND. IROFFA.EQ.IROFFC .AND. IAROW.EQ.ICROW ) If SIDE = 'R', ( MB_A.EQ.NB_C .AND. IROFFA.EQ.ICOFFC ) scalapack-doc-1.5/man/manl/pspbsv.l0100644000056400000620000000140506335610650016735 0ustar pfrauenfstaff.TH PSPBSV l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PSPBSV - solve a system of linear equations A(1:N, JA:JA+N-1) * X = B(IB:IB+N-1, 1:NRHS) .SH SYNOPSIS .TP 19 SUBROUTINE PSPBSV( UPLO, N, BW, NRHS, A, JA, DESCA, B, IB, DESCB, WORK, LWORK, INFO ) .TP 19 .ti +4 CHARACTER UPLO .TP 19 .ti +4 INTEGER BW, IB, INFO, JA, LWORK, N, NRHS .TP 19 .ti +4 INTEGER DESCA( * ), DESCB( * ) .TP 19 .ti +4 REAL A( * ), B( * ), WORK( * ) .SH PURPOSE PSPBSV solves a system of linear equations where A(1:N, JA:JA+N-1) is an N-by-N real .br banded symmetric positive definite distributed .br matrix with bandwidth BW. .br Cholesky factorization is used to factor a reordering of .br the matrix into L L'. .br See PSPBTRF and PSPBTRS for details. .br scalapack-doc-1.5/man/manl/pspbtrf.l0100644000056400000620000000230406335610650017077 0ustar pfrauenfstaff.TH PSPBTRF l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PSPBTRF - compute a Cholesky factorization of an N-by-N real banded symmetric positive definite distributed matrix with bandwidth BW .SH SYNOPSIS .TP 20 SUBROUTINE PSPBTRF( UPLO, N, BW, A, JA, DESCA, AF, LAF, WORK, LWORK, INFO ) .TP 20 .ti +4 CHARACTER UPLO .TP 20 .ti +4 INTEGER BW, INFO, JA, LAF, LWORK, N .TP 20 .ti +4 INTEGER DESCA( * ) .TP 20 .ti +4 REAL A( * ), AF( * ), WORK( * ) .SH PURPOSE PSPBTRF computes a Cholesky factorization of an N-by-N real banded symmetric positive definite distributed matrix with bandwidth BW: A(1:N, JA:JA+N-1). 
Reordering is used to increase parallelism in the factorization. This reordering results in factors that are DIFFERENT from those produced by equivalent sequential codes. These factors cannot be used directly by users; however, they can be used in
.br
subsequent calls to PSPBTRS to solve linear systems.
.br
The factorization has the form
.br
P A(1:N, JA:JA+N-1) P^T = U'*U, if UPLO = 'U', or
P A(1:N, JA:JA+N-1) P^T = L*L', if UPLO = 'L'
.br
where U is a banded upper triangular matrix and L is banded lower triangular, and P is a permutation matrix.
.br
scalapack-doc-1.5/man/manl/pspbtrs.l
.TH PSPBTRS l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)"
.SH NAME
PSPBTRS - solve a system of linear equations A(1:N, JA:JA+N-1) * X = B(IB:IB+N-1, 1:NRHS)
.SH SYNOPSIS
.TP 20
SUBROUTINE PSPBTRS( UPLO, N, BW, NRHS, A, JA, DESCA, B, IB, DESCB, AF, LAF, WORK, LWORK, INFO )
.TP 20
.ti +4
CHARACTER UPLO
.TP 20
.ti +4
INTEGER BW, IB, INFO, JA, LAF, LWORK, N, NRHS
.TP 20
.ti +4
INTEGER DESCA( * ), DESCB( * )
.TP 20
.ti +4
REAL A( * ), AF( * ), B( * ), WORK( * )
.SH PURPOSE
PSPBTRS solves a system of linear equations where A(1:N, JA:JA+N-1) is the matrix used to produce the factors stored in A(1:N, JA:JA+N-1) and AF by PSPBTRF.
.br
A(1:N, JA:JA+N-1) is an N-by-N real banded symmetric positive definite distributed matrix with bandwidth BW.
.br
Depending on the value of UPLO, A stores either U or L in the equation A(1:N, JA:JA+N-1) = U'*U or L*L' as computed by PSPBTRF.
.br
Routine PSPBTRF MUST be called first.
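For reference, the serial L*L' factorization that PSPBTRF parallelizes can be sketched on a small dense symmetric positive definite matrix. This sketch ignores the band structure, the permutation P, and the distribution entirely; it is not the parallel algorithm, only the underlying Cholesky form:

```python
import math

def cholesky_lower(a):
    """Return L (lower triangular, nested lists) with A = L * L'."""
    n = len(a)
    L = [[0.0] * n for _ in range(n)]
    for j in range(n):
        s = a[j][j] - sum(L[j][k] ** 2 for k in range(j))
        if s <= 0.0:
            # cf. INFO > 0: leading minor is not positive definite
            raise ValueError("matrix is not positive definite")
        L[j][j] = math.sqrt(s)
        for i in range(j + 1, n):
            L[i][j] = (a[i][j] - sum(L[i][k] * L[j][k] for k in range(j))) / L[j][j]
    return L

A = [[4.0, 2.0], [2.0, 5.0]]
L = cholesky_lower(A)   # L = [[2, 0], [1, 2]], and L*L' reproduces A
```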
.br
scalapack-doc-1.5/man/manl/pspbtrsv.l
.TH PSPBTRSV l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)"
.SH NAME
PSPBTRSV - solve a banded triangular system of linear equations A(1:N, JA:JA+N-1) * X = B(IB:IB+N-1, 1:NRHS)
.SH SYNOPSIS
.TP 21
SUBROUTINE PSPBTRSV( UPLO, TRANS, N, BW, NRHS, A, JA, DESCA, B, IB, DESCB, AF, LAF, WORK, LWORK, INFO )
.TP 21
.ti +4
CHARACTER TRANS, UPLO
.TP 21
.ti +4
INTEGER BW, IB, INFO, JA, LAF, LWORK, N, NRHS
.TP 21
.ti +4
INTEGER DESCA( * ), DESCB( * )
.TP 21
.ti +4
REAL A( * ), AF( * ), B( * ), WORK( * )
.SH PURPOSE
PSPBTRSV solves a banded triangular system of linear equations
.br
A(1:N, JA:JA+N-1) * X = B(IB:IB+N-1, 1:NRHS)
.br
or
.br
A(1:N, JA:JA+N-1)^T * X = B(IB:IB+N-1, 1:NRHS)
.br
where A(1:N, JA:JA+N-1) is a banded triangular matrix factor produced by the Cholesky factorization code PSPBTRF and is stored in A(1:N, JA:JA+N-1) and AF.
.br
The matrix stored in A(1:N, JA:JA+N-1) is either upper or lower triangular according to UPLO, and the choice of solving A(1:N, JA:JA+N-1) or A(1:N, JA:JA+N-1)^T is made by the user via the parameter TRANS.
.br
Routine PSPBTRF MUST be called first.
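In the serial dense case, the two solves that TRANS selects between reduce to ordinary forward and back substitution with the Cholesky factor; a sketch ignoring the band structure and the distribution:

```python
def forward_sub(L, b):
    """Solve L*y = b for lower triangular L (the TRANS = 'N' solve)."""
    n = len(b)
    y = [0.0] * n
    for i in range(n):
        y[i] = (b[i] - sum(L[i][k] * y[k] for k in range(i))) / L[i][i]
    return y

def back_sub_trans(L, y):
    """Solve L'*x = y with the same factor (the TRANS = 'T' solve)."""
    n = len(y)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (y[i] - sum(L[k][i] * x[k] for k in range(i + 1, n))) / L[i][i]
    return x

# A = L*L' with L = [[2,0],[1,2]]; solving A*x = b is two triangular sweeps.
L = [[2.0, 0.0], [1.0, 2.0]]
b = [6.0, 13.0]
x = back_sub_trans(L, forward_sub(L, b))   # x solves A*x = b
```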
.br scalapack-doc-1.5/man/manl/pspocon.l0100644000056400000620000001500106335610650017076 0ustar pfrauenfstaff.TH PSPOCON l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PSPOCON - estimate the reciprocal of the condition number (in the 1-norm) of a real symmetric positive definite distributed matrix using the Cholesky factorization A = U**T*U or A = L*L**T computed by PSPOTRF .SH SYNOPSIS .TP 20 SUBROUTINE PSPOCON( UPLO, N, A, IA, JA, DESCA, ANORM, RCOND, WORK, LWORK, IWORK, LIWORK, INFO ) .TP 20 .ti +4 CHARACTER UPLO .TP 20 .ti +4 INTEGER IA, INFO, JA, LIWORK, LWORK, N .TP 20 .ti +4 REAL ANORM, RCOND .TP 20 .ti +4 INTEGER DESCA( * ), IWORK( * ) .TP 20 .ti +4 REAL A( * ), WORK( * ) .SH PURPOSE PSPOCON estimates the reciprocal of the condition number (in the 1-norm) of a real symmetric positive definite distributed matrix using the Cholesky factorization A = U**T*U or A = L*L**T computed by PSPOTRF. An estimate is obtained for norm(inv(A(IA:IA+N-1,JA:JA+N-1))), and the reciprocal of the condition number is computed as .br RCOND = 1 / ( norm( A(IA:IA+N-1,JA:JA+N-1) ) * norm( inv(A(IA:IA+N-1,JA:JA+N-1)) ) ). Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. 
.br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 UPLO (global input) CHARACTER Specifies whether the factor stored in A(IA:IA+N-1,JA:JA+N-1) is upper or lower triangular. .br = 'U': Upper triangular .br = 'L': Lower triangular .TP 8 N (global input) INTEGER .br The order of the distributed matrix A(IA:IA+N-1,JA:JA+N-1). N >= 0. .TP 8 A (local input) REAL pointer into the local memory to an array of dimension ( LLD_A, LOCc(JA+N-1) ). 
On entry, this array contains the local pieces of the factors L or U from the Cholesky factorization A(IA:IA+N-1,JA:JA+N-1) = U'*U or L*L', as computed by PSPOTRF. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 ANORM (global input) REAL The 1-norm (or infinity-norm) of the symmetric distributed matrix A(IA:IA+N-1,JA:JA+N-1). .TP 8 RCOND (global output) REAL The reciprocal of the condition number of the distributed matrix A(IA:IA+N-1,JA:JA+N-1), computed as .br RCOND = 1 / ( norm( A(IA:IA+N-1,JA:JA+N-1) ) * .br norm( inv(A(IA:IA+N-1,JA:JA+N-1)) ) ). .TP 8 WORK (local workspace/local output) REAL array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. LWORK is local input and must be at least LWORK >= 2*LOCr(N+MOD(IA-1,MB_A)) + 2*LOCc(N+MOD(JA-1,NB_A)) + MAX( 2, MAX(NB_A*CEIL(NPROW-1,NPCOL),LOCc(N+MOD(JA-1,NB_A)) + NB_A*CEIL(NPCOL-1,NPROW)) ). If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 IWORK (local workspace/local output) INTEGER array, dimension (LIWORK) On exit, IWORK(1) returns the minimal and optimal LIWORK. .TP 8 LIWORK (local or global input) INTEGER The dimension of the array IWORK. LIWORK is local input and must be at least LIWORK >= LOCr(N+MOD(IA-1,MB_A)). If LIWORK = -1, then LIWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. 
Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 INFO (global output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. scalapack-doc-1.5/man/manl/pspoequ.l0100644000056400000620000001377306335610651017130 0ustar pfrauenfstaff.TH PSPOEQU l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PSPOEQU - compute row and column scalings intended to equilibrate a distributed symmetric positive definite matrix sub( A ) = A(IA:IA+N-1,JA:JA+N-1) and reduce its condition number (with respect to the two-norm) .SH SYNOPSIS .TP 20 SUBROUTINE PSPOEQU( N, A, IA, JA, DESCA, SR, SC, SCOND, AMAX, INFO ) .TP 20 .ti +4 INTEGER IA, INFO, JA, N .TP 20 .ti +4 REAL AMAX, SCOND .TP 20 .ti +4 INTEGER DESCA( * ) .TP 20 .ti +4 REAL A( * ), SC( * ), SR( * ) .SH PURPOSE PSPOEQU computes row and column scalings intended to equilibrate a distributed symmetric positive definite matrix sub( A ) = A(IA:IA+N-1,JA:JA+N-1) and reduce its condition number (with respect to the two-norm). SR and SC contain the scale factors, S(i) = 1/sqrt(A(i,i)), chosen so that the scaled distri- buted matrix B with elements B(i,j) = S(i)*A(i,j)*S(j) has ones on the diagonal. This choice of SR and SC puts the condition number of B within a factor N of the smallest possible condition number over all possible diagonal scalings. .br The scaling factor are stored along process rows in SR and along process columns in SC. The duplication of information simplifies greatly the application of the factors. .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. 
.br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). 
An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 N (global input) INTEGER The number of rows and columns to be operated on i.e the order of the distributed submatrix sub( A ). N >= 0. .TP 8 A (local input) REAL pointer into the local memory to an array of local dimension ( LLD_A, LOCc(JA+N-1) ), the N-by-N symmetric positive definite distributed matrix sub( A ) whose scaling factors are to be computed. Only the diagonal elements of sub( A ) are referenced. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 SR (local output) REAL array, dimension LOCr(M_A) If INFO = 0, SR(IA:IA+N-1) contains the row scale factors for sub( A ). SR is aligned with the distributed matrix A, and replicated across every process column. SR is tied to the distributed matrix A. .TP 8 SC (local output) REAL array, dimension LOCc(N_A) If INFO = 0, SC(JA:JA+N-1) contains the column scale factors .br for A(IA:IA+M-1,JA:JA+N-1). SC is aligned with the distribu- ted matrix A, and replicated down every process row. SC is tied to the distributed matrix A. .TP 8 SCOND (global output) REAL If INFO = 0, SCOND contains the ratio of the smallest SR(i) (or SC(j)) to the largest SR(i) (or SC(j)), with IA <= i <= IA+N-1 and JA <= j <= JA+N-1. If SCOND >= 0.1 and AMAX is neither too large nor too small, it is not worth scaling by SR (or SC). .TP 8 AMAX (global output) REAL Absolute value of largest matrix element. If AMAX is very close to overflow or very close to underflow, the matrix should be scaled. 
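The scaling PSPOEQU describes can be checked on a small dense example: with S(i) = 1/sqrt(A(i,i)), the scaled matrix B(i,j) = S(i)*A(i,j)*S(j) has ones on the diagonal, and SCOND is the ratio of the smallest to the largest scale factor. A serial sketch with no distribution (function and variable names are illustrative):

```python
import math

def equilibrate(a):
    """Return (s, b, scond, amax) for a symmetric positive definite A."""
    n = len(a)
    s = [1.0 / math.sqrt(a[i][i]) for i in range(n)]   # S(i) = 1/sqrt(A(i,i))
    b = [[s[i] * a[i][j] * s[j] for j in range(n)] for i in range(n)]
    scond = min(s) / max(s)                            # cf. SCOND
    amax = max(abs(a[i][j]) for i in range(n) for j in range(n))  # cf. AMAX
    return s, b, scond, amax

A = [[4.0, 2.0], [2.0, 16.0]]
s, B, scond, amax = equilibrate(A)
# B has unit diagonal; scond = (1/4)/(1/2) = 0.5; amax = 16.
```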
.TP 8 INFO (global output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. > 0: If INFO = K, the K-th diagonal entry of sub( A ) is nonpositive. scalapack-doc-1.5/man/manl/psporfs.l0100644000056400000620000002351306335610651017121 0ustar pfrauenfstaff.TH PSPORFS l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PSPORFS - improve the computed solution to a system of linear equations when the coefficient matrix is symmetric positive definite and provides error bounds and backward error estimates for the solutions .SH SYNOPSIS .TP 20 SUBROUTINE PSPORFS( UPLO, N, NRHS, A, IA, JA, DESCA, AF, IAF, JAF, DESCAF, B, IB, JB, DESCB, X, IX, JX, DESCX, FERR, BERR, WORK, LWORK, IWORK, LIWORK, INFO ) .TP 20 .ti +4 CHARACTER UPLO .TP 20 .ti +4 INTEGER IA, IAF, IB, INFO, IX, JA, JAF, JB, JX, LIWORK, LWORK, N, NRHS .TP 20 .ti +4 INTEGER DESCA( * ), DESCAF( * ), DESCB( * ), DESCX( * ), IWORK( * ) .TP 20 .ti +4 REAL A( * ), AF( * ), B( * ), BERR( * ), FERR( * ), WORK( * ), X( * ) .SH PURPOSE PSPORFS improves the computed solution to a system of linear equations when the coefficient matrix is symmetric positive definite and provides error bounds and backward error estimates for the solutions. Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. 
In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br In the following comments, sub( A ), sub( X ) and sub( B ) denote respectively A(IA:IA+N-1,JA:JA+N-1), X(IX:IX+N-1,JX:JX+NRHS-1) and B(IB:IB+N-1,JB:JB+NRHS-1). .br .SH ARGUMENTS .TP 8 UPLO (global input) CHARACTER*1 Specifies whether the upper or lower triangular part of the symmetric matrix sub( A ) is stored. 
= 'U': Upper triangular .br = 'L': Lower triangular .TP 8 N (global input) INTEGER The order of the matrix sub( A ). N >= 0. .TP 8 NRHS (global input) INTEGER The number of right hand sides, i.e., the number of columns of the matrices sub( B ) and sub( X ). NRHS >= 0. .TP 8 A (local input) REAL pointer into the local memory to an array of local dimension (LLD_A,LOCc(JA+N-1) ). This array contains the local pieces of the N-by-N symmetric distributed matrix sub( A ) to be factored. If UPLO = 'U', the leading N-by-N upper triangular part of sub( A ) contains the upper triangular part of the matrix, and its strictly lower triangular part is not referenced. If UPLO = 'L', the leading N-by-N lower triangular part of sub( A ) contains the lower triangular part of the distributed matrix, and its strictly upper triangular part is not referenced. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 AF (local input) REAL pointer into the local memory to an array of local dimension (LLD_AF,LOCc(JA+N-1)). On entry, this array contains the factors L or U from the Cholesky factorization sub( A ) = L*L**T or U**T*U, as computed by PSPOTRF. .TP 8 IAF (global input) INTEGER The row index in the global array AF indicating the first row of sub( AF ). .TP 8 JAF (global input) INTEGER The column index in the global array AF indicating the first column of sub( AF ). .TP 8 DESCAF (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix AF. .TP 8 B (local input) REAL pointer into the local memory to an array of local dimension (LLD_B, LOCc(JB+NRHS-1) ). On entry, this array contains the local pieces of the right hand sides sub( B ). 
.TP 8 IB (global input) INTEGER The row index in the global array B indicating the first row of sub( B ). .TP 8 JB (global input) INTEGER The column index in the global array B indicating the first column of sub( B ). .TP 8 DESCB (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix B. .TP 8 X (local input/local output) REAL pointer into the local memory to an array of local dimension (LLD_X, LOCc(JX+NRHS-1) ). On entry, this array contains the local pieces of the solution vectors sub( X ). On exit, it contains the improved solution vectors. .TP 8 IX (global input) INTEGER The row index in the global array X indicating the first row of sub( X ). .TP 8 JX (global input) INTEGER The column index in the global array X indicating the first column of sub( X ). .TP 8 DESCX (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix X. .TP 8 FERR (local output) REAL array of local dimension LOCc(JB+NRHS-1). The estimated forward error bound for each solution vector of sub( X ). If XTRUE is the true solution corresponding to sub( X ), FERR is an estimated upper bound for the magnitude of the largest element in (sub( X ) - XTRUE) divided by the magnitude of the largest element in sub( X ). The estimate is as reliable as the estimate for RCOND, and is almost always a slight overestimate of the true error. This array is tied to the distributed matrix X. .TP 8 BERR (local output) REAL array of local dimension LOCc(JB+NRHS-1). The componentwise relative backward error of each solution vector (i.e., the smallest relative change in any entry of sub( A ) or sub( B ) that makes sub( X ) an exact solution). This array is tied to the distributed matrix X. .TP 8 WORK (local workspace/local output) REAL array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. 
LWORK is local input and must be at least LWORK >= 3*LOCr( N + MOD( IA-1, MB_A ) ) If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 IWORK (local workspace/local output) INTEGER array, dimension (LIWORK) On exit, IWORK(1) returns the minimal and optimal LIWORK. .TP 8 LIWORK (local or global input) INTEGER The dimension of the array IWORK. LIWORK is local input and must be at least LIWORK >= LOCr( N + MOD( IB-1, MB_B ) ). If LIWORK = -1, then LIWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 INFO (global output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. .SH PARAMETERS ITMAX is the maximum number of steps of iterative refinement. Notes ===== This routine temporarily returns when N <= 1. The distributed submatrices op( A ) and op( AF ) (respectively sub( X ) and sub( B ) ) should be distributed the same way on the same processes. These conditions ensure that sub( A ) and sub( AF ) (resp. sub( X ) and sub( B ) ) are "perfectly" aligned. Moreover, this routine requires the distributed submatrices sub( A ), sub( AF ), sub( X ), and sub( B ) to be aligned on a block boundary, i.e., if f(x,y) = MOD( x-1, y ): f( IA, DESCA( MB_ ) ) = f( JA, DESCA( NB_ ) ) = 0, f( IAF, DESCAF( MB_ ) ) = f( JAF, DESCAF( NB_ ) ) = 0, f( IB, DESCB( MB_ ) ) = f( JB, DESCB( NB_ ) ) = 0, and f( IX, DESCX( MB_ ) ) = f( JX, DESCX( NB_ ) ) = 0. 
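The block-boundary alignment condition above can be checked directly before calling the routine. The sketch below models the test f(x,y) = MOD(x-1,y) described in the Notes; the helper names are illustrative and not part of ScaLAPACK's API.

```python
# Model of the alignment test f(x,y) = MOD(x-1, y) from the Notes above.
# Index and blocking names (IA, JA, MB_, NB_) mirror the man page; the
# function names themselves are hypothetical.

def f(x, y):
    return (x - 1) % y

def aligned(ia, ja, mb, nb):
    """True when sub(A) starting at global index (ia, ja) sits on a block boundary."""
    return f(ia, mb) == 0 and f(ja, nb) == 0

print(aligned(1, 1, 64, 64))    # True: the top-left corner is always aligned
print(aligned(65, 65, 64, 64))  # True: one full 64x64 block down and across
print(aligned(2, 1, 64, 64))    # False: row offset falls inside a block
```

Applying the same test with (IAF, JAF), (IB, JB), and (IX, JX) against their own MB_/NB_ entries covers all four conditions listed above.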
scalapack-doc-1.5/man/manl/psposv.l0100644000056400000620000001456206335610651016763 0ustar pfrauenfstaff.TH PSPOSV l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PSPOSV - compute the solution to a real system of linear equations sub( A ) * X = sub( B ), .SH SYNOPSIS .TP 19 SUBROUTINE PSPOSV( UPLO, N, NRHS, A, IA, JA, DESCA, B, IB, JB, DESCB, INFO ) .TP 19 .ti +4 CHARACTER UPLO .TP 19 .ti +4 INTEGER IA, IB, INFO, JA, JB, N, NRHS .TP 19 .ti +4 INTEGER DESCA( * ), DESCB( * ) .TP 19 .ti +4 REAL A( * ), B( * ) .SH PURPOSE PSPOSV computes the solution to a real system of linear equations where sub( A ) denotes A(IA:IA+N-1,JA:JA+N-1) and is an N-by-N symmetric distributed positive definite matrix and X and sub( B ) denoting B(IB:IB+N-1,JB:JB+NRHS-1) are N-by-NRHS distributed matrices. .br The Cholesky decomposition is used to factor sub( A ) as .br sub( A ) = U**T * U, if UPLO = 'U', or sub( A ) = L * L**T, if UPLO = 'L', .br where U is an upper triangular matrix and L is a lower triangular matrix. The factored form of sub( A ) is then used to solve the system of equations. .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. 
.br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br This routine requires square block decomposition ( MB_A = NB_A ). .SH ARGUMENTS .TP 8 UPLO (global input) CHARACTER = 'U': Upper triangle of sub( A ) is stored; .br = 'L': Lower triangle of sub( A ) is stored. .TP 8 N (global input) INTEGER The number of rows and columns to be operated on, i.e. the order of the distributed submatrix sub( A ). N >= 0. .TP 8 NRHS (global input) INTEGER The number of right hand sides, i.e., the number of columns of the distributed submatrix sub( B ). NRHS >= 0. 
.TP 8 A (local input/local output) REAL pointer into the local memory to an array of dimension (LLD_A, LOCc(JA+N-1)). On entry, this array contains the local pieces of the N-by-N symmetric distributed matrix sub( A ) to be factored. If UPLO = 'U', the leading N-by-N upper triangular part of sub( A ) contains the upper triangular part of the matrix, and its strictly lower triangular part is not referenced. If UPLO = 'L', the leading N-by-N lower triangular part of sub( A ) contains the lower triangular part of the distribu- ted matrix, and its strictly upper triangular part is not referenced. On exit, if INFO = 0, this array contains the local pieces of the factor U or L from the Cholesky factori- zation sub( A ) = U**T*U or L*L**T. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 B (local input/local output) REAL pointer into the local memory to an array of dimension (LLD_B,LOC(JB+NRHS-1)). On entry, the local pieces of the right hand sides distribu- ted matrix sub( B ). On exit, if INFO = 0, sub( B ) is over- written with the solution distributed matrix X. .TP 8 IB (global input) INTEGER The row index in the global array B indicating the first row of sub( B ). .TP 8 JB (global input) INTEGER The column index in the global array B indicating the first column of sub( B ). .TP 8 DESCB (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix B. .TP 8 INFO (global output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. 
> 0: If INFO = K, the leading minor of order K, .br A(IA:IA+K-1,JA:JA+K-1) is not positive definite, and the factorization could not be completed, and the solution has not been computed. scalapack-doc-1.5/man/manl/psposvx.l0100644000056400000620000003307106335610651017147 0ustar pfrauenfstaff.TH PSPOSVX l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PSPOSVX - use the Cholesky factorization A = U**T*U or A = L*L**T to compute the solution to a real system of linear equations A(IA:IA+N-1,JA:JA+N-1) * X = B(IB:IB+N-1,JB:JB+NRHS-1), .SH SYNOPSIS .TP 20 SUBROUTINE PSPOSVX( FACT, UPLO, N, NRHS, A, IA, JA, DESCA, AF, IAF, JAF, DESCAF, EQUED, SR, SC, B, IB, JB, DESCB, X, IX, JX, DESCX, RCOND, FERR, BERR, WORK, LWORK, IWORK, LIWORK, INFO ) .TP 20 .ti +4 CHARACTER EQUED, FACT, UPLO .TP 20 .ti +4 INTEGER IA, IAF, IB, INFO, IX, JA, JAF, JB, JX, LIWORK, LWORK, N, NRHS .TP 20 .ti +4 REAL RCOND .TP 20 .ti +4 INTEGER DESCA( * ), DESCAF( * ), DESCB( * ), DESCX( * ), IWORK( * ) .TP 20 .ti +4 REAL A( * ), AF( * ), B( * ), BERR( * ), FERR( * ), SC( * ), SR( * ), WORK( * ), X( * ) .SH PURPOSE PSPOSVX uses the Cholesky factorization A = U**T*U or A = L*L**T to compute the solution to a real system of linear equations where A(IA:IA+N-1,JA:JA+N-1) is an N-by-N matrix and X and B(IB:IB+N-1,JB:JB+NRHS-1) are N-by-NRHS matrices. .br Error bounds on the solution and a condition estimate are also provided. In the following comments Y denotes Y(IY:IY+M-1,JY:JY+K-1) a M-by-K matrix where Y can be A, AF, B and X. .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. 
In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH DESCRIPTION The following steps are performed: .br 1. 
If FACT = 'E', real scaling factors are computed to equilibrate the system: .br diag(SR) * A * diag(SC) * inv(diag(SC)) * X = diag(SR) * B Whether or not the system will be equilibrated depends on the scaling of the matrix A, but if equilibration is used, A is overwritten by diag(SR)*A*diag(SC) and B by diag(SR)*B. 2. If FACT = 'N' or 'E', the Cholesky decomposition is used to factor the matrix A (after equilibration if FACT = 'E') as A = U**T* U, if UPLO = 'U', or .br A = L * L**T, if UPLO = 'L', .br where U is an upper triangular matrix and L is a lower triangular matrix. .br 3. The factored form of A is used to estimate the condition number of the matrix A. If the reciprocal of the condition number is less than machine precision, steps 4-6 are skipped. .br 4. The system of equations is solved for X using the factored form of A. .br 5. Iterative refinement is applied to improve the computed solution matrix and calculate error bounds and backward error estimates for it. .br 6. If equilibration was used, the matrix X is premultiplied by diag(SR) so that it solves the original system before .br equilibration. .br .SH ARGUMENTS .TP 8 FACT (global input) CHARACTER Specifies whether or not the factored form of the matrix A is supplied on entry, and if not, whether the matrix A should be equilibrated before it is factored. = 'F': On entry, AF contains the factored form of A. If EQUED = 'Y', the matrix A has been equilibrated with scaling factors given by S. A and AF will not be modified. = 'N': The matrix A will be copied to AF and factored. .br = 'E': The matrix A will be equilibrated if necessary, then copied to AF and factored. .TP 8 UPLO (global input) CHARACTER = 'U': Upper triangle of A is stored; .br = 'L': Lower triangle of A is stored. .TP 8 N (global input) INTEGER The number of rows and columns to be operated on, i.e. the order of the distributed submatrix A(IA:IA+N-1,JA:JA+N-1). N >= 0. 
.TP 8 NRHS (global input) INTEGER The number of right hand sides, i.e., the number of columns of the distributed submatrices B and X. NRHS >= 0. .TP 8 A (local input/local output) REAL pointer into the local memory to an array of local dimension ( LLD_A, LOCc(JA+N-1) ). On entry, the symmetric matrix A, except if FACT = 'F' and EQUED = 'Y', then A must contain the equilibrated matrix diag(SR)*A*diag(SC). If UPLO = 'U', the leading N-by-N upper triangular part of A contains the upper triangular part of the matrix A, and the strictly lower triangular part of A is not referenced. If UPLO = 'L', the leading N-by-N lower triangular part of A contains the lower triangular part of the matrix A, and the strictly upper triangular part of A is not referenced. A is not modified if FACT = 'F' or 'N', or if FACT = 'E' and EQUED = 'N' on exit. On exit, if FACT = 'E' and EQUED = 'Y', A is overwritten by diag(SR)*A*diag(SC). .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 AF (local input or local output) REAL pointer into the local memory to an array of local dimension ( LLD_AF, LOCc(JA+N-1)). If FACT = 'F', then AF is an input argument and on entry contains the triangular factor U or L from the Cholesky factorization A = U**T*U or A = L*L**T, in the same storage format as A. If EQUED .ne. 'N', then AF is the factored form of the equilibrated matrix diag(SR)*A*diag(SC). If FACT = 'N', then AF is an output argument and on exit returns the triangular factor U or L from the Cholesky factorization A = U**T*U or A = L*L**T of the original matrix A. 
If FACT = 'E', then AF is an output argument and on exit returns the triangular factor U or L from the Cholesky factorization A = U**T*U or A = L*L**T of the equilibrated matrix A (see the description of A for the form of the equilibrated matrix). .TP 8 IAF (global input) INTEGER The row index in the global array AF indicating the first row of sub( AF ). .TP 8 JAF (global input) INTEGER The column index in the global array AF indicating the first column of sub( AF ). .TP 8 DESCAF (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix AF. .TP 8 EQUED (global input/global output) CHARACTER Specifies the form of equilibration that was done. = 'N': No equilibration (always true if FACT = 'N'). .br = 'Y': Equilibration was done, i.e., A has been replaced by diag(SR) * A * diag(SC). EQUED is an input variable if FACT = 'F'; otherwise, it is an output variable. .TP 8 SR (local input/local output) REAL array, dimension (LLD_A) The scale factors for A distributed across process rows; not accessed if EQUED = 'N'. SR is an input variable if FACT = 'F'; otherwise, SR is an output variable. If FACT = 'F' and EQUED = 'Y', each element of SR must be positive. .TP 8 SC (local input/local output) REAL array, dimension (LOC(N_A)) The scale factors for A distributed across process columns; not accessed if EQUED = 'N'. SC is an input variable if FACT = 'F'; otherwise, SC is an output variable. If FACT = 'F' and EQUED = 'Y', each element of SC must be positive. .TP 8 B (local input/local output) REAL pointer into the local memory to an array of local dimension ( LLD_B, LOCc(JB+NRHS-1) ). On entry, the N-by-NRHS right-hand side matrix B. On exit, if EQUED = 'N', B is not modified; if EQUED = 'Y', B is overwritten by diag(SR)*B. .TP 8 IB (global input) INTEGER The row index in the global array B indicating the first row of sub( B ). 
.TP 8 JB (global input) INTEGER The column index in the global array B indicating the first column of sub( B ). .TP 8 DESCB (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix B. .TP 8 X (local input/local output) REAL pointer into the local memory to an array of local dimension ( LLD_X, LOCc(JX+NRHS-1) ). If INFO = 0, the N-by-NRHS solution matrix X to the original system of equations. Note that A and B are modified on exit if EQUED = 'Y', and the solution to the equilibrated system is then inv(diag(SC))*X. .TP 8 IX (global input) INTEGER The row index in the global array X indicating the first row of sub( X ). .TP 8 JX (global input) INTEGER The column index in the global array X indicating the first column of sub( X ). .TP 8 DESCX (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix X. .TP 8 RCOND (global output) REAL The estimate of the reciprocal condition number of the matrix A after equilibration (if done). If RCOND is less than the machine precision (in particular, if RCOND = 0), the matrix is singular to working precision. This condition is indicated by a return code of INFO > 0, and the solution and error bounds are not computed. .TP 8 FERR (local output) REAL array, dimension (LOC(N_B)) The estimated forward error bounds for each solution vector X(j) (the j-th column of the solution matrix X). If XTRUE is the true solution, FERR(j) bounds the magnitude of the largest entry in (X(j) - XTRUE) divided by the magnitude of the largest entry in X(j). The quality of the error bound depends on the quality of the estimate of norm(inv(A)) computed in the code; if the estimate of norm(inv(A)) is accurate, the error bound is guaranteed. 
.TP 8 BERR (local output) REAL array, dimension (LOC(N_B)) The componentwise relative backward error of each solution vector X(j) (i.e., the smallest relative change in any entry of A or B that makes X(j) an exact solution). .TP 8 WORK (local workspace/local output) REAL array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. LWORK is local input and must be at least LWORK = MAX( PSPOCON( LWORK ), PSPORFS( LWORK ) ) + LOCr( N_A ). LWORK = 3*DESCA( LLD_ ) If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 IWORK (local workspace/local output) INTEGER array, dimension (LIWORK) On exit, IWORK(1) returns the minimal and optimal LIWORK. .TP 8 LIWORK (local or global input) INTEGER The dimension of the array IWORK. LIWORK is local input and must be at least LIWORK = DESCA( LLD_ ) LIWORK = LOCr(N_A). If LIWORK = -1, then LIWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 INFO (global output) INTEGER = 0: successful exit .br < 0: if INFO = -i, the i-th argument had an illegal value .br > 0: if INFO = i, and i is .br <= N: if INFO = i, the leading minor of order i of A is not positive definite, so the factorization could not be completed, and the solution and error bounds could not be computed. = N+1: RCOND is less than machine precision. The factorization has been completed, but the matrix is singular to working precision, and the solution and error bounds have not been computed. 
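The negative INFO convention described in these pages (INFO = -(i*100+j) for entry j of array argument i, INFO = -i for scalar argument i) can be unpacked mechanically when diagnosing a failed call. The decoder below is a sketch; the function name is illustrative, not part of ScaLAPACK.

```python
# Sketch of the INFO error-code convention used throughout these pages:
# INFO = -(i*100+j) flags entry j of array argument i (e.g. a descriptor),
# INFO = -i flags scalar argument i. decode_info is a hypothetical helper.

def decode_info(info):
    if info >= 0:
        return None                                # success or K > 0 condition
    code = -info
    if code >= 100:
        return ("array", code // 100, code % 100)  # (kind, argument, entry)
    return ("scalar", code, None)

print(decode_info(-702))  # entry 2 of the 7th argument is illegal
print(decode_info(-3))    # the 3rd (scalar) argument is illegal
```

For example, a call to PSPORFS returning INFO = -702 would point at the second entry of its seventh argument, DESCA.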
scalapack-doc-1.5/man/manl/pspotf2.l0100644000056400000620000001256606335610651017030 0ustar pfrauenfstaff.TH PSPOTF2 l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PSPOTF2 - compute the Cholesky factorization of a real symmetric positive definite distributed matrix sub( A )=A(IA:IA+N-1,JA:JA+N-1) .SH SYNOPSIS .TP 20 SUBROUTINE PSPOTF2( UPLO, N, A, IA, JA, DESCA, INFO ) .TP 20 .ti +4 CHARACTER UPLO .TP 20 .ti +4 INTEGER IA, INFO, JA, N .TP 20 .ti +4 INTEGER DESCA( * ) .TP 20 .ti +4 REAL A( * ) .SH PURPOSE PSPOTF2 computes the Cholesky factorization of a real symmetric positive definite distributed matrix sub( A )=A(IA:IA+N-1,JA:JA+N-1). The factorization has the form .br sub( A ) = U' * U , if UPLO = 'U', or .br sub( A ) = L * L', if UPLO = 'L', .br where U is an upper triangular matrix and L is lower triangular. Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. 
.br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br This routine requires N <= NB_A-MOD(JA-1, NB_A) and square block decomposition ( MB_A = NB_A ). .br .SH ARGUMENTS .TP 8 UPLO (global input) CHARACTER = 'U': Upper triangle of sub( A ) is stored; .br = 'L': Lower triangle of sub( A ) is stored. .TP 8 N (global input) INTEGER The number of rows and columns to be operated on, i.e. the order of the distributed submatrix sub( A ). N >= 0. .TP 8 A (local input/local output) REAL pointer into the local memory to an array of dimension (LLD_A, LOCc(JA+N-1)). On entry, this array contains the local pieces of the N-by-N symmetric distributed matrix sub( A ) to be factored. If UPLO = 'U', the leading N-by-N upper triangular part of sub( A ) contains the upper triangular part of the matrix, and its strictly lower triangular part is not referenced. 
If UPLO = 'L', the leading N-by-N lower triangular part of sub( A ) contains the lower triangular part of the distribu- ted matrix, and its strictly upper triangular part is not referenced. On exit, if UPLO = 'U', the upper triangular part of the distributed matrix contains the Cholesky factor U, if UPLO = 'L', the lower triangular part of the distribu- ted matrix contains the Cholesky factor L. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 INFO (local output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. > 0: If INFO = K, the leading minor of order K, .br A(IA:IA+K-1,JA:JA+K-1) is not positive definite, and the factorization could not be completed. scalapack-doc-1.5/man/manl/pspotrf.l0100644000056400000620000001257006335610651017123 0ustar pfrauenfstaff.TH PSPOTRF l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PSPOTRF - compute the Cholesky factorization of an N-by-N real symmetric positive definite distributed matrix sub( A ) denoting A(IA:IA+N-1, JA:JA+N-1) .SH SYNOPSIS .TP 20 SUBROUTINE PSPOTRF( UPLO, N, A, IA, JA, DESCA, INFO ) .TP 20 .ti +4 CHARACTER UPLO .TP 20 .ti +4 INTEGER IA, INFO, JA, N .TP 20 .ti +4 INTEGER DESCA( * ) .TP 20 .ti +4 REAL A( * ) .SH PURPOSE PSPOTRF computes the Cholesky factorization of an N-by-N real symmetric positive definite distributed matrix sub( A ) denoting A(IA:IA+N-1, JA:JA+N-1). 
The factorization has the form .br sub( A ) = U' * U , if UPLO = 'U', or .br sub( A ) = L * L', if UPLO = 'L', .br where U is an upper triangular matrix and L is lower triangular. Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. 
.br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br This routine requires square block decomposition ( MB_A = NB_A ). .SH ARGUMENTS .TP 8 UPLO (global input) CHARACTER = 'U': Upper triangle of sub( A ) is stored; .br = 'L': Lower triangle of sub( A ) is stored. .TP 8 N (global input) INTEGER The number of rows and columns to be operated on, i.e. the order of the distributed submatrix sub( A ). N >= 0. .TP 8 A (local input/local output) REAL pointer into the local memory to an array of dimension (LLD_A, LOCc(JA+N-1)). On entry, this array contains the local pieces of the N-by-N symmetric distributed matrix sub( A ) to be factored. If UPLO = 'U', the leading N-by-N upper triangular part of sub( A ) contains the upper triangular part of the matrix, and its strictly lower triangular part is not referenced. If UPLO = 'L', the leading N-by-N lower triangular part of sub( A ) contains the lower triangular part of the distribu- ted matrix, and its strictly upper triangular part is not referenced. On exit, if UPLO = 'U', the upper triangular part of the distributed matrix contains the Cholesky factor U, if UPLO = 'L', the lower triangular part of the distribu- ted matrix contains the Cholesky factor L. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. 
The array descriptor for the distributed matrix A. .TP 8 INFO (global output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. > 0: If INFO = K, the leading minor of order K, .br A(IA:IA+K-1,JA:JA+K-1) is not positive definite, and the factorization could not be completed. scalapack-doc-1.5/man/manl/pspotri.l0100644000056400000620000001144106335610651017122 0ustar pfrauenfstaff.TH PSPOTRI l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PSPOTRI - compute the inverse of a real symmetric positive definite distributed matrix sub( A ) = A(IA:IA+N-1,JA:JA+N-1) using the Cholesky factorization sub( A ) = U**T*U or L*L**T computed by PSPOTRF .SH SYNOPSIS .TP 20 SUBROUTINE PSPOTRI( UPLO, N, A, IA, JA, DESCA, INFO ) .TP 20 .ti +4 CHARACTER UPLO .TP 20 .ti +4 INTEGER IA, INFO, JA, N .TP 20 .ti +4 INTEGER DESCA( * ) .TP 20 .ti +4 REAL A( * ) .SH PURPOSE PSPOTRI computes the inverse of a real symmetric positive definite distributed matrix sub( A ) = A(IA:IA+N-1,JA:JA+N-1) using the Cholesky factorization sub( A ) = U**T*U or L*L**T computed by PSPOTRF. Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. 
.br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 UPLO (global input) CHARACTER*1 = 'U': Upper triangle of sub( A ) is stored; .br = 'L': Lower triangle of sub( A ) is stored. .TP 8 N (global input) INTEGER The number of rows and columns to be operated on, i.e. the order of the distributed submatrix sub( A ). N >= 0. 
.TP 8 A (local input/local output) REAL pointer into the local memory to an array of dimension (LLD_A, LOCc(JA+N-1)). On entry, the local pieces of the triangular factor U or L from the Cholesky factorization of the distributed matrix sub( A ) = U**T*U or L*L**T, as computed by PSPOTRF. On exit, the local pieces of the upper or lower triangle of the (symmetric) inverse of sub( A ), overwriting the input factor U or L. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 INFO (global output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. > 0: If INFO = i, the (i,i) element of the factor U or L is zero, and the inverse could not be computed. scalapack-doc-1.5/man/manl/pspotrs.l0100644000056400000620000001266206335610651017142 0ustar pfrauenfstaff.TH PSPOTRS l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PSPOTRS - solve a system of linear equations sub( A ) * X = sub( B ) A(IA:IA+N-1,JA:JA+N-1)*X = B(IB:IB+N-1,JB:JB+NRHS-1) .SH SYNOPSIS .TP 20 SUBROUTINE PSPOTRS( UPLO, N, NRHS, A, IA, JA, DESCA, B, IB, JB, DESCB, INFO ) .TP 20 .ti +4 CHARACTER UPLO .TP 20 .ti +4 INTEGER IA, IB, INFO, JA, JB, N, NRHS .TP 20 .ti +4 INTEGER DESCA( * ), DESCB( * ) .TP 20 .ti +4 REAL A( * ), B( * ) .SH PURPOSE PSPOTRS solves a system of linear equations where sub( A ) denotes A(IA:IA+N-1,JA:JA+N-1) and is a N-by-N symmetric positive definite distributed matrix using the Cholesky factorization sub( A ) = U**T*U or L*L**T computed by PSPOTRF. sub( B ) denotes the distributed matrix B(IB:IB+N-1,JB:JB+NRHS-1). 
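The INFO error convention used throughout these routines (INFO = -(i*100+j) when the j-th entry of array argument i had an illegal value, INFO = -i when scalar argument i had an illegal value, INFO > 0 for a routine-specific numerical failure) can be decoded mechanically. The following Python sketch is illustrative only and is not part of ScaLAPACK; the function name decode_info is invented for this example:

```python
def decode_info(info):
    """Decode the INFO value returned by ScaLAPACK routines such as
    PSPOTRF/PSPOTRI/PSPOTRS, per the -(i*100+j) convention documented
    in their man pages.  Returns a short diagnosis string."""
    if info == 0:
        return "successful exit"
    if info < 0:
        code = -info
        if code >= 100:
            # encoded as i*100 + j: array argument i, entry j
            i, j = divmod(code, 100)
            return f"argument {i} is an array; entry {j} had an illegal value"
        return f"argument {code} had an illegal value"
    # INFO > 0 is routine-specific (e.g. for PSPOTRF the leading minor
    # of order INFO is not positive definite)
    return f"numerical failure with code {info} (see the routine's man page)"
```

For example, INFO = -702 from PSPOTRS would indicate that entry 2 of its 7th argument (DESCA, per the synopsis above) had an illegal value.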
Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. 
.br
The values of LOCr() and LOCc() may be determined via a call to the
ScaLAPACK tool function, NUMROC:
.br
LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ),
LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ).
An upper bound for these quantities may be computed by:
.br
LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A
.br
LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A
.br
This routine requires square block decomposition ( MB_A = NB_A ).
.SH ARGUMENTS
.TP 8
UPLO (global input) CHARACTER
= 'U': Upper triangle of sub( A ) is stored;
.br
= 'L': Lower triangle of sub( A ) is stored.
.TP 8
N (global input) INTEGER
The number of rows and columns to be operated on, i.e., the order of the
distributed submatrix sub( A ). N >= 0.
.TP 8
NRHS (global input) INTEGER
The number of right hand sides, i.e., the number of columns of the
distributed submatrix sub( B ). NRHS >= 0.
.TP 8
A (local input) REAL pointer into local memory to an array of dimension
(LLD_A, LOCc(JA+N-1)). On entry, this array contains the factors L or U
from the Cholesky factorization sub( A ) = L*L**T or U**T*U, as computed
by PSPOTRF.
.TP 8
IA (global input) INTEGER
The row index in the global array A indicating the first row of sub( A ).
.TP 8
JA (global input) INTEGER
The column index in the global array A indicating the first column of
sub( A ).
.TP 8
DESCA (global and local input) INTEGER array of dimension DLEN_.
The array descriptor for the distributed matrix A.
.TP 8
B (local input/local output) REAL pointer into the local memory to an
array of local dimension (LLD_B,LOCc(JB+NRHS-1)). On entry, this array
contains the local pieces of the right hand sides sub( B ). On exit,
this array contains the local pieces of the solution distributed
matrix X.
.TP 8
IB (global input) INTEGER
The row index in the global array B indicating the first row of sub( B ).
.TP 8
JB (global input) INTEGER
The column index in the global array B indicating the first column of
sub( B ).
.TP 8 DESCB (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix B. .TP 8 INFO (global output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. scalapack-doc-1.5/man/manl/psptsv.l0100644000056400000620000000133006335610651016755 0ustar pfrauenfstaff.TH PSPTSV l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PSPTSV - solve a system of linear equations A(1:N, JA:JA+N-1) * X = B(IB:IB+N-1, 1:NRHS) .SH SYNOPSIS .TP 19 SUBROUTINE PSPTSV( N, NRHS, D, E, JA, DESCA, B, IB, DESCB, WORK, LWORK, INFO ) .TP 19 .ti +4 INTEGER IB, INFO, JA, LWORK, N, NRHS .TP 19 .ti +4 INTEGER DESCA( * ), DESCB( * ) .TP 19 .ti +4 REAL B( * ), D( * ), E( * ), WORK( * ) .SH PURPOSE PSPTSV solves a system of linear equations where A(1:N, JA:JA+N-1) is an N-by-N real .br tridiagonal symmetric positive definite distributed .br matrix. .br Cholesky factorization is used to factor a reordering of .br the matrix into L L'. .br See PSPTTRF and PSPTTRS for details. .br scalapack-doc-1.5/man/manl/pspttrf.l0100644000056400000620000000221606335610651017124 0ustar pfrauenfstaff.TH PSPTTRF l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PSPTTRF - compute a Cholesky factorization of an N-by-N real tridiagonal symmetric positive definite distributed matrix A(1:N, JA:JA+N-1) .SH SYNOPSIS .TP 20 SUBROUTINE PSPTTRF( N, D, E, JA, DESCA, AF, LAF, WORK, LWORK, INFO ) .TP 20 .ti +4 INTEGER INFO, JA, LAF, LWORK, N .TP 20 .ti +4 INTEGER DESCA( * ) .TP 20 .ti +4 REAL AF( * ), D( * ), E( * ), WORK( * ) .SH PURPOSE PSPTTRF computes a Cholesky factorization of an N-by-N real tridiagonal symmetric positive definite distributed matrix A(1:N, JA:JA+N-1). Reordering is used to increase parallelism in the factorization. 
This reordering results in factors that are DIFFERENT from those produced by equivalent sequential codes. These factors cannot be used directly by users; however, they can be used in .br subsequent calls to PSPTTRS to solve linear systems. .br The factorization has the form .br P A(1:N, JA:JA+N-1) P^T = U' D U or .br P A(1:N, JA:JA+N-1) P^T = L D L', .br where U is a tridiagonal upper triangular matrix and L is tridiagonal lower triangular, and P is a permutation matrix. .br scalapack-doc-1.5/man/manl/pspttrs.l0100644000056400000620000000141206335610651017136 0ustar pfrauenfstaff.TH PSPTTRS l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PSPTTRS - solve a system of linear equations A(1:N, JA:JA+N-1) * X = B(IB:IB+N-1, 1:NRHS) .SH SYNOPSIS .TP 20 SUBROUTINE PSPTTRS( N, NRHS, D, E, JA, DESCA, B, IB, DESCB, AF, LAF, WORK, LWORK, INFO ) .TP 20 .ti +4 INTEGER IB, INFO, JA, LAF, LWORK, N, NRHS .TP 20 .ti +4 INTEGER DESCA( * ), DESCB( * ) .TP 20 .ti +4 REAL AF( * ), B( * ), D( * ), E( * ), WORK( * ) .SH PURPOSE PSPTTRS solves a system of linear equations where A(1:N, JA:JA+N-1) is the matrix used to produce the factors stored in A(1:N,JA:JA+N-1) and AF by PSPTTRF. .br A(1:N, JA:JA+N-1) is an N-by-N real .br tridiagonal symmetric positive definite distributed .br matrix. .br Routine PSPTTRF MUST be called first. 
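The reordered parallel factors produced by PSPTTRF differ from serial ones, but the underlying L*D*L' factorization of a symmetric positive definite tridiagonal matrix is easy to state. The following serial Python sketch (illustrative only; the function names are invented, and this is NOT the reordered ScaLAPACK algorithm) factors a matrix given by its diagonal d and off-diagonal e, then solves with the factors, in the manner of the serial LAPACK routines xPTTRF/xPTTRS:

```python
def pttrf_serial(d, e):
    """Serial L*D*L' factorization of a symmetric positive definite
    tridiagonal matrix with diagonal d (length n) and off-diagonal e
    (length n-1).  Returns (pivots, multipliers); raises ValueError
    on a matrix that is not positive definite."""
    d = list(d)
    l = []
    n = len(d)
    for i in range(n - 1):
        if d[i] <= 0.0:
            raise ValueError(f"leading minor of order {i+1} is not positive definite")
        li = e[i] / d[i]            # multiplier L(i+1,i)
        l.append(li)
        d[i + 1] -= li * e[i]       # Schur-complement update of the next pivot
    if d[-1] <= 0.0:
        raise ValueError(f"leading minor of order {n} is not positive definite")
    return d, l

def pttrs_serial(d, l, b):
    """Solve L*D*L' x = b using the factors from pttrf_serial."""
    n = len(d)
    x = list(b)
    for i in range(1, n):           # forward solve with unit bidiagonal L
        x[i] -= l[i - 1] * x[i - 1]
    for i in range(n):              # diagonal solve with D
        x[i] /= d[i]
    for i in range(n - 2, -1, -1):  # back solve with L'
        x[i] -= l[i] * x[i + 1]
    return x
```

For instance, factoring diag(2,2,2) with off-diagonal (-1,-1) and solving against the right-hand side (1,0,1) recovers the solution (1,1,1).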
.br
scalapack-doc-1.5/man/manl/pspttrsv.l0100644000056400000620000000216506335610652017333 0ustar pfrauenfstaff.TH PSPTTRSV l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)"
.SH NAME
PSPTTRSV - solve a tridiagonal triangular system of linear equations A(1:N, JA:JA+N-1) * X = B(IB:IB+N-1, 1:NRHS)
.SH SYNOPSIS
.TP 21
SUBROUTINE PSPTTRSV( UPLO, N, NRHS, D, E, JA, DESCA, B, IB, DESCB, AF, LAF, WORK, LWORK, INFO )
.TP 21
.ti +4
CHARACTER UPLO
.TP 21
.ti +4
INTEGER IB, INFO, JA, LAF, LWORK, N, NRHS
.TP 21
.ti +4
INTEGER DESCA( * ), DESCB( * )
.TP 21
.ti +4
REAL AF( * ), B( * ), D( * ), E( * ), WORK( * )
.SH PURPOSE
PSPTTRSV solves a tridiagonal triangular system of linear equations
.br
A(1:N, JA:JA+N-1) * X = B(IB:IB+N-1, 1:NRHS)
.br
or
.br
A(1:N, JA:JA+N-1)^T * X = B(IB:IB+N-1, 1:NRHS),
.br
where A(1:N, JA:JA+N-1) is a tridiagonal triangular matrix factor produced by the Cholesky factorization code PSPTTRF and is stored in A(1:N,JA:JA+N-1) and AF.
.br
The matrix stored in A(1:N, JA:JA+N-1) is either upper or lower triangular according to UPLO, and the choice of solving A(1:N, JA:JA+N-1) or its transpose A(1:N, JA:JA+N-1)^T is made by the user through the parameter UPLO.
.br
Routine PSPTTRF MUST be called first.
.br
scalapack-doc-1.5/man/manl/psrscl.l0100644000056400000620000001107206335610652016731 0ustar pfrauenfstaff.TH PSRSCL l "12 May 1997" "LAPACK version 1.5" "LAPACK auxiliary routine (version 1.5)"
.SH NAME
PSRSCL - multiply an N-element real distributed vector sub( X ) by the real scalar 1/a
.SH SYNOPSIS
.TP 19
SUBROUTINE PSRSCL( N, SA, SX, IX, JX, DESCX, INCX )
.TP 19
.ti +4
INTEGER IX, INCX, JX, N
.TP 19
.ti +4
REAL SA
.TP 19
.ti +4
INTEGER DESCX( * )
.TP 19
.ti +4
REAL SX( * )
.SH PURPOSE
PSRSCL multiplies an N-element real distributed vector sub( X ) by the real scalar 1/a. This is done without overflow or underflow as long as the final result sub( X )/a does not overflow or underflow, where sub( X ) denotes X(IX:IX+N-1,JX:JX) if INCX = 1, or
.br
X(IX:IX,JX:JX+N-1) if INCX = M_X.
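The overflow-safe scaling by 1/a that PSRSCL performs on its local vector pieces follows the approach of the serial LAPACK auxiliary routine SRSCL: rather than dividing directly, multiply repeatedly by safe factors until the remaining multiplier cnum/cden is representable. A Python sketch of that serial scaling step (illustrative only; this is not the ScaLAPACK code, and rscl is an invented name):

```python
import sys

def rscl(x, a):
    """Scale the list x by 1/a without intermediate overflow or
    underflow, in the spirit of LAPACK's SRSCL/DRSCL.  The final
    result x/a must itself be representable."""
    smlnum = sys.float_info.min      # smallest positive normal number
    bignum = 1.0 / smlnum
    cden, cnum = a, 1.0
    out = list(x)
    while True:
        cden1 = cden * smlnum
        cnum1 = cnum / bignum
        if abs(cden1) > abs(cnum) and cnum != 0.0:
            # pre-multiply x by smlnum if cden is large compared to cnum
            mul, done = smlnum, False
            cden = cden1
        elif abs(cnum1) > abs(cden):
            # pre-multiply x by bignum if cden is small compared to cnum
            mul, done = bignum, False
            cnum = cnum1
        else:
            # the remaining multiplier cnum/cden is now representable
            mul, done = cnum / cden, True
        out = [v * mul for v in out]
        if done:
            return out
```

A naive division by a = 1e-300 would overflow an intermediate reciprocal in single precision; the stepwise multipliers above avoid that while producing the same final result.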
Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector descA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DT_A (global) descA[ DT_ ] The descriptor type. In this case, DT_A = 1. .br CTXT_A (global) descA[ CTXT_ ] The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) descA[ M_ ] The number of rows in the global array A. .br N_A (global) descA[ N_ ] The number of columns in the global array A. .br MB_A (global) descA[ MB_ ] The blocking factor used to distribu- te the rows of the array. .br NB_A (global) descA[ NB_ ] The blocking factor used to distribu- te the columns of the array. RSRC_A (global) descA[ RSRC_ ] The process row over which the first row of the array A is distributed. CSRC_A (global) descA[ CSRC_ ] The process column over which the first column of the array A is distributed. .br LLD_A (local) descA[ LLD_ ] The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. 
.br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br Because vectors may be seen as particular matrices, a distributed vector is considered to be a distributed matrix. .br .SH ARGUMENTS .TP 8 N (global input) pointer to INTEGER The number of components of the distributed vector sub( X ). N >= 0. .TP 8 SA (global input) REAL The scalar a which is used to divide each component of sub( X ). SA must be >= 0, or the subroutine will divide by zero. .TP 8 SX (local input/local output) REAL array containing the local pieces of a distributed matrix of dimension of at least ( (JX-1)*M_X + IX + ( N - 1 )*abs( INCX ) ) This array contains the entries of the distributed vector sub( X ). .TP 8 IX (global input) pointer to INTEGER The global row index of the submatrix of the distributed matrix X to operate on. .TP 8 JX (global input) pointer to INTEGER The global column index of the submatrix of the distributed matrix X to operate on. .TP 8 DESCX (global and local input) INTEGER array of dimension 8. The array descriptor of the distributed matrix X. .TP 8 INCX (global input) pointer to INTEGER The global increment for the elements of X. Only two values of INCX are supported in this version, namely 1 and M_X. 
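The LOCr()/LOCc() quantities used throughout these pages are computed by the ScaLAPACK tool function NUMROC. A Python transliteration of its block-cyclic counting logic (illustrative; the real routine is Fortran, with the same five arguments) makes the formulas above concrete:

```python
def numroc(n, nb, iproc, isrcproc, nprocs):
    """Number of rows/columns of an n-element dimension, distributed in
    blocks of size nb over nprocs processes, that are owned by process
    iproc when process isrcproc holds the first block.  Python sketch
    of the ScaLAPACK tool function NUMROC."""
    mydist = (nprocs + iproc - isrcproc) % nprocs  # distance from the source process
    nblocks = n // nb                  # number of full blocks
    loc = (nblocks // nprocs) * nb     # elements from complete rounds of blocks
    extrablks = nblocks % nprocs       # leftover full blocks
    if mydist < extrablks:
        loc += nb                      # this process gets one extra full block
    elif mydist == extrablks:
        loc += n % nb                  # this process gets the trailing partial block
    return loc
```

With N = 10, NB = 3 over NPROCS = 2 (ISRCPROC = 0), process 0 owns blocks {0,2} for 6 elements and process 1 owns blocks {1,3} for 4; the totals sum to N and respect the bound LOCr(10) <= ceil(ceil(10/3)/2)*3 = 6 given in the notes above.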
scalapack-doc-1.5/man/manl/psstebz.l0100644000056400000620000001763706335610652017122 0ustar pfrauenfstaff.TH PSSTEBZ l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)"
.SH NAME
PSSTEBZ - compute the eigenvalues of a symmetric tridiagonal matrix in parallel
.SH SYNOPSIS
.TP 20
SUBROUTINE PSSTEBZ( ICTXT, RANGE, ORDER, N, VL, VU, IL, IU, ABSTOL, D, E, M, NSPLIT, W, IBLOCK, ISPLIT, WORK, LWORK, IWORK, LIWORK, INFO )
.TP 20
.ti +4
CHARACTER ORDER, RANGE
.TP 20
.ti +4
INTEGER ICTXT, IL, INFO, IU, LIWORK, LWORK, M, N, NSPLIT
.TP 20
.ti +4
REAL ABSTOL, VL, VU
.TP 20
.ti +4
INTEGER IBLOCK( * ), ISPLIT( * ), IWORK( * )
.TP 20
.ti +4
REAL D( * ), E( * ), W( * ), WORK( * )
.SH PURPOSE
PSSTEBZ computes the eigenvalues of a symmetric tridiagonal matrix in parallel. The user may ask for all eigenvalues, all eigenvalues in the interval [VL, VU], or the eigenvalues indexed IL through IU. A static partitioning of work is done at the beginning of PSSTEBZ which results in all processes finding an (almost) equal number of eigenvalues.
.br
NOTE : It is assumed that the user is on an IEEE machine. If the user is not on an IEEE machine, set the compile time flag NO_IEEE to 1 (in SLmake.inc). The features of IEEE arithmetic that are needed for the "fast" Sturm Count are : (a) infinity arithmetic (b) the sign bit of a double precision floating point number is assumed to be in the 32nd or 64th bit position (c) the sign of negative zero.
.br
See W. Kahan, "Accurate Eigenvalues of a Symmetric Tridiagonal Matrix", Report CS41, Computer Science Dept., Stanford
.br
University, July 21, 1966.
.br
.SH ARGUMENTS
.TP 8
ICTXT (global input) INTEGER
The BLACS context handle.
.TP 8
RANGE (global input) CHARACTER
Specifies which eigenvalues are to be found.
= 'A': ("All") all eigenvalues will be found.
.br
= 'V': ("Value") all eigenvalues in the interval [VL, VU] will be found.
.br
= 'I': ("Index") the IL-th through IU-th eigenvalues (of the entire matrix) will be found.
.TP 8 ORDER (global input) CHARACTER Specifies the order in which the eigenvalues and their block numbers are stored in W and IBLOCK. = 'B': ("By Block") the eigenvalues will be grouped by split-off block (see IBLOCK, ISPLIT) and ordered from smallest to largest within the block. = 'E': ("Entire matrix") the eigenvalues for the entire matrix will be ordered from smallest to largest. .TP 8 N (global input) INTEGER The order of the tridiagonal matrix T. N >= 0. .TP 8 VL (global input) REAL If RANGE='V', the lower bound of the interval to be searched for eigenvalues. Eigenvalues less than VL will not be returned. Not referenced if RANGE='A' or 'I'. .TP 8 VU (global input) REAL If RANGE='V', the upper bound of the interval to be searched for eigenvalues. Eigenvalues greater than VU will not be returned. VU must be greater than VL. Not referenced if RANGE='A' or 'I'. .TP 8 IL (global input) INTEGER If RANGE='I', the index (from smallest to largest) of the smallest eigenvalue to be returned. IL must be at least 1. Not referenced if RANGE='A' or 'V'. .TP 8 IU (global input) INTEGER If RANGE='I', the index (from smallest to largest) of the largest eigenvalue to be returned. IU must be at least IL and no greater than N. Not referenced if RANGE='A' or 'V'. .TP 8 ABSTOL (global input) REAL The absolute tolerance for the eigenvalues. An eigenvalue (or cluster) is considered to be located if it has been determined to lie in an interval whose width is ABSTOL or less. If ABSTOL is less than or equal to zero, then ULP*|T| will be used, where |T| means the 1-norm of T. Eigenvalues will be computed most accurately when ABSTOL is set to the underflow threshold SLAMCH('U'), not zero. Note : If eigenvectors are desired later by inverse iteration ( PSSTEIN ), ABSTOL should be set to 2*PSLAMCH('S'). .TP 8 D (global input) REAL array, dimension (N) The n diagonal elements of the tridiagonal matrix T. 
To avoid overflow, the matrix must be scaled so that its largest entry is no greater than overflow**(1/2) * underflow**(1/4) in absolute value, and for greatest accuracy, it should not be much smaller than that. .TP 8 E (global input) REAL array, dimension (N-1) The (n-1) off-diagonal elements of the tridiagonal matrix T. To avoid overflow, the matrix must be scaled so that its largest entry is no greater than overflow**(1/2) * underflow**(1/4) in absolute value, and for greatest accuracy, it should not be much smaller than that. .TP 8 M (global output) INTEGER The actual number of eigenvalues found. 0 <= M <= N. (See also the description of INFO=2) .TP 8 NSPLIT (global output) INTEGER The number of diagonal blocks in the matrix T. 1 <= NSPLIT <= N. .TP 8 W (global output) REAL array, dimension (N) On exit, the first M elements of W contain the eigenvalues on all processes. .TP 8 IBLOCK (global output) INTEGER array, dimension (N) At each row/column j where E(j) is zero or small, the matrix T is considered to split into a block diagonal matrix. On exit IBLOCK(i) specifies which block (from 1 to the number of blocks) the eigenvalue W(i) belongs to. NOTE: in the (theoretically impossible) event that bisection does not converge for some or all eigenvalues, INFO is set to 1 and the ones for which it did not are identified by a negative block number. .TP 8 ISPLIT (global output) INTEGER array, dimension (N) The splitting points, at which T breaks up into submatrices. The first submatrix consists of rows/columns 1 to ISPLIT(1), the second of rows/columns ISPLIT(1)+1 through ISPLIT(2), etc., and the NSPLIT-th consists of rows/columns ISPLIT(NSPLIT-1)+1 through ISPLIT(NSPLIT)=N. (Only the first NSPLIT elements will actually be used, but since the user cannot know a priori what value NSPLIT will have, N words must be reserved for ISPLIT.) 
.TP 8 WORK (local workspace) REAL array, dimension ( MAX( 5*N, 7 ) ) .TP 8 LWORK (local input) INTEGER size of array WORK must be >= MAX( 5*N, 7 ) If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 IWORK (local workspace) INTEGER array, dimension ( MAX( 4*N, 14 ) ) .TP 8 LIWORK (local input) INTEGER size of array IWORK must be >= MAX( 4*N, 14, NPROCS ) If LIWORK = -1, then LIWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 INFO (global output) INTEGER = 0 : successful exit .br < 0 : if INFO = -i, the i-th argument had an illegal value .br > 0 : some or all of the eigenvalues failed to converge or .br were not computed: .br = 1 : Bisection failed to converge for some eigenvalues; these eigenvalues are flagged by a negative block number. The effect is that the eigenvalues may not be as accurate as the absolute and relative tolerances. This is generally caused by arithmetic which is less accurate than PSLAMCH says. = 2 : There is a mismatch between the number of eigenvalues output and the number desired. = 3 : RANGE='i', and the Gershgorin interval initially used was incorrect. No eigenvalues were computed. Probable cause: your machine has sloppy floating point arithmetic. Cure: Increase the PARAMETER "FUDGE", recompile, and try again. .SH PARAMETERS .TP 8 RELFAC REAL, default = 2.0 The relative tolerance. An interval [a,b] lies within "relative tolerance" if b-a < RELFAC*ulp*max(|a|,|b|), where "ulp" is the machine precision (distance from 1 to the next larger floating point number.) 
.TP 8 FUDGE REAL, default = 2.0 A "fudge factor" to widen the Gershgorin intervals. Ideally, a value of 1 should work, but on machines with sloppy arithmetic, this needs to be larger. The default for publicly released versions should be large enough to handle the worst machine around. Note that this has no effect on the accuracy of the solution. scalapack-doc-1.5/man/manl/psstein.l0100644000056400000620000002560606335610652017120 0ustar pfrauenfstaff.TH PSSTEIN l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PSSTEIN - compute the eigenvectors of a symmetric tridiagonal matrix in parallel, using inverse iteration .SH SYNOPSIS .TP 20 SUBROUTINE PSSTEIN( N, D, E, M, W, IBLOCK, ISPLIT, ORFAC, Z, IZ, JZ, DESCZ, WORK, LWORK, IWORK, LIWORK, IFAIL, ICLUSTR, GAP, INFO ) .TP 20 .ti +4 INTEGER INFO, IZ, JZ, LIWORK, LWORK, M, N .TP 20 .ti +4 REAL ORFAC .TP 20 .ti +4 INTEGER DESCZ( * ), IBLOCK( * ), ICLUSTR( * ), IFAIL( * ), ISPLIT( * ), IWORK( * ) .TP 20 .ti +4 REAL D( * ), E( * ), GAP( * ), W( * ), WORK( * ), Z( * ) .SH PURPOSE PSSTEIN computes the eigenvectors of a symmetric tridiagonal matrix in parallel, using inverse iteration. The eigenvectors found correspond to user specified eigenvalues. PSSTEIN does not orthogonalize vectors that are on different processes. The extent of orthogonalization is controlled by the input parameter LWORK. Eigenvectors that are to be orthogonalized are computed by the same process. PSSTEIN decides on the allocation of work among the processes and then calls SSTEIN2 (modified LAPACK routine) on each individual process. If insufficient workspace is allocated, the expected orthogonalization may not be done. .br Note : If the eigenvectors obtained are not orthogonal, increase LWORK and run the code again. .br Notes .br ===== .br Each global data object is described by an associated description vector. 
This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. 
.br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS P = NPROW * NPCOL is the total number of processes .TP 8 N (global input) INTEGER The order of the tridiagonal matrix T. N >= 0. .TP 8 D (global input) REAL array, dimension (N) The n diagonal elements of the tridiagonal matrix T. .TP 8 E (global input) REAL array, dimension (N-1) The (n-1) off-diagonal elements of the tridiagonal matrix T. .TP 8 M (global input) INTEGER The total number of eigenvectors to be found. 0 <= M <= N. .TP 8 W (global input/global output) REAL array, dim (M) On input, the first M elements of W contain all the eigenvalues for which eigenvectors are to be computed. The eigenvalues should be grouped by split-off block and ordered from smallest to largest within the block (The output array W from PSSTEBZ with ORDER='b' is expected here). This array should be replicated on all processes. On output, the first M elements contain the input eigenvalues in ascending order. Note : To obtain orthogonal vectors, it is best if eigenvalues are computed to highest accuracy ( this can be done by setting ABSTOL to the underflow threshold = SLAMCH('U') --- ABSTOL is an input parameter to PSSTEBZ ) .TP 8 IBLOCK (global input) INTEGER array, dimension (N) The submatrix indices associated with the corresponding eigenvalues in W -- 1 for eigenvalues belonging to the first submatrix from the top, 2 for those belonging to the second submatrix, etc. (The output array IBLOCK from PSSTEBZ is expected here). .TP 8 ISPLIT (global input) INTEGER array, dimension (N) The splitting points, at which T breaks up into submatrices. 
The first submatrix consists of rows/columns 1 to ISPLIT(1), the second of rows/columns ISPLIT(1)+1 through ISPLIT(2), etc., and the NSPLIT-th consists of rows/columns ISPLIT(NSPLIT-1)+1 through ISPLIT(NSPLIT)=N (The output array ISPLIT from PSSTEBZ is expected here.) .TP 8 ORFAC (global input) REAL ORFAC specifies which eigenvectors should be orthogonalized. Eigenvectors that correspond to eigenvalues which are within ORFAC*||T|| of each other are to be orthogonalized. However, if the workspace is insufficient (see LWORK), this tolerance may be decreased until all eigenvectors to be orthogonalized can be stored in one process. No orthogonalization will be done if ORFAC equals zero. A default value of 10^-3 is used if ORFAC is negative. ORFAC should be identical on all processes. .TP 8 Z (local output) REAL array, dimension (DESCZ(DLEN_), N/npcol + NB) Z contains the computed eigenvectors associated with the specified eigenvalues. Any vector which fails to converge is set to its current iterate after MAXITS iterations ( See SSTEIN2 ). On output, Z is distributed across the P processes in block cyclic format. .TP 8 IZ (global input) INTEGER Z's global row index, which points to the beginning of the submatrix which is to be operated on. .TP 8 JZ (global input) INTEGER Z's global column index, which points to the beginning of the submatrix which is to be operated on. .TP 8 DESCZ (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix Z. .TP 8 WORK (local workspace/global output) REAL array, dimension ( LWORK ) On output, WORK(1) gives a lower bound on the workspace ( LWORK ) that guarantees the user desired orthogonalization (see ORFAC). Note that this may overestimate the minimum workspace needed. .TP 8 LWORK (local input) integer LWORK controls the extent of orthogonalization which can be done. The number of eigenvectors for which storage is allocated on each process is NVEC = floor(( LWORK- max(5*N,NP00*MQ00) )/N). 
Eigenvectors corresponding to eigenvalue clusters of size NVEC - ceil(M/P) + 1 are guaranteed to be orthogonal (the orthogonality is similar to that obtained from SSTEIN2). Note: LWORK must be no smaller than max(5*N,NP00*MQ00) + ceil(M/P)*N, and should have the same input value on all processes. It is the minimum value of LWORK input on the different processes that is significant. If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA.
.TP 8
IWORK (local workspace/global output) INTEGER array, dimension ( 3*N+P+1 )
On return, IWORK(1) contains the amount of integer workspace required. IWORK(2) through IWORK(P+2) indicate the eigenvectors computed by each process: process I computes the eigenvectors indexed IWORK(I+2)+1 through IWORK(I+3).
.TP 8
LIWORK (local input) INTEGER
Size of the array IWORK. Must be >= 3*N + P + 1.
If LIWORK = -1, then LIWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA.
.TP 8
IFAIL (global output) INTEGER array, dimension (M)
On normal exit, all elements of IFAIL are zero. If one or more eigenvectors fail to converge after MAXITS iterations (as in SSTEIN), then INFO > 0 is returned. If mod(INFO,M+1) > 0, then for I = 1 to mod(INFO,M+1), the eigenvector corresponding to the eigenvalue W(IFAIL(I)) failed to converge (W refers to the array of eigenvalues on output).
.TP 8
ICLUSTR (global output) INTEGER array, dimension (2*P)
This output array contains indices of eigenvectors corresponding to a cluster of eigenvalues that could not be orthogonalized due to insufficient workspace (see LWORK, ORFAC and INFO).
Eigenvectors corresponding to clusters of eigenvalues indexed ICLUSTR(2*I-1) to ICLUSTR(2*I), I = 1 to INFO/(M+1), could not be orthogonalized due to lack of workspace. Hence the eigenvectors corresponding to these clusters may not be orthogonal. ICLUSTR is a zero-terminated array: ( ICLUSTR(2*K).NE.0 .AND. ICLUSTR(2*K+1).EQ.0 ) if and only if K is the number of clusters.
.TP 8
GAP (global output) REAL array, dimension (P)
This output array contains the gap between eigenvalues whose eigenvectors could not be orthogonalized. The INFO/(M+1) output values in this array correspond to the INFO/(M+1) clusters indicated by the array ICLUSTR. As a result, the dot product between eigenvectors corresponding to the I-th cluster may be as high as ( O(n)*macheps ) / GAP(I).
.TP 8
INFO (global output) INTEGER
= 0: successful exit
.br
< 0: if the I-th argument is an array and the J-th entry had an illegal value, then INFO = -(I*100+J); if the I-th argument is a scalar and had an illegal value, then INFO = -I.
.br
> 0: if mod(INFO,M+1) = I, then I eigenvectors failed to converge in MAXITS iterations; their indices are stored in the array IFAIL. If INFO/(M+1) = I, then eigenvectors corresponding to I clusters of eigenvalues could not be orthogonalized due to insufficient workspace; the indices of the clusters are stored in the array ICLUSTR.
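The LOCr/LOCc quantities used throughout these argument descriptions are computed by the ScaLAPACK tool function NUMROC. As an illustration only (not the library routine itself), here is a Python sketch of the standard NUMROC block-cyclic counting algorithm, assuming 0-based process coordinates as in the Fortran source:

```python
def numroc(n, nb, iproc, isrcproc, nprocs):
    """Local count of an n-element dimension distributed in nb-sized blocks
    over nprocs processes, as seen by process iproc, when the first block
    resides on process isrcproc (all process coordinates 0-based)."""
    # distance of this process from the process holding the first block
    mydist = (nprocs + iproc - isrcproc) % nprocs
    nblocks = n // nb                    # number of complete blocks
    result = (nblocks // nprocs) * nb    # complete blocks every process owns
    extrablocks = nblocks % nprocs       # leftover complete blocks
    if mydist < extrablocks:
        result += nb                     # one extra complete block
    elif mydist == extrablocks:
        result += n % nb                 # the trailing partial block
    return result

# A 10-element dimension in blocks of 2 over a 2-process grid dimension:
# process 0 owns blocks 0, 2, 4 and process 1 owns blocks 1, 3.
print(numroc(10, 2, 0, 0, 2))  # -> 6
print(numroc(10, 2, 1, 0, 2))  # -> 4
```

The per-process counts always sum to n, and each count respects the upper bound quoted above, LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A.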
scalapack-doc-1.5/man/manl/pssyev.l0100644000056400000620000002313406335610652016756 0ustar pfrauenfstaff.TH PSSYEV l "12 May 1997" "LAPACK version 1.3" "LAPACK routine (version 1.3)" .SH NAME .SH SYNOPSIS .TP 19 SUBROUTINE PSSYEV( JOBZ, UPLO, N, A, IA, JA, DESCA, W, Z, IZ, JZ, DESCZ, WORK, LWORK, INFO ) .TP 19 .ti +4 CHARACTER JOBZ, UPLO .TP 19 .ti +4 INTEGER IA, INFO, IZ, JA, JZ, LWORK, N .TP 19 .ti +4 INTEGER DESCA( * ), DESCZ( * ) .TP 19 .ti +4 REAL A( * ), W( * ), WORK( * ), Z( * ) .TP 19 .ti +4 INTEGER BLOCK_CYCLIC_2D, DLEN_, DTYPE_, CTXT_, M_, N_, MB_, NB_, RSRC_, CSRC_, LLD_ .TP 19 .ti +4 PARAMETER ( BLOCK_CYCLIC_2D = 1, DLEN_ = 9, DTYPE_ = 1, CTXT_ = 2, M_ = 3, N_ = 4, MB_ = 5, NB_ = 6, RSRC_ = 7, CSRC_ = 8, LLD_ = 9 ) .TP 19 .ti +4 REAL ZERO, ONE .TP 19 .ti +4 PARAMETER ( ZERO = 0.0E+0, ONE = 1.0E+0 ) .TP 19 .ti +4 INTEGER ITHVAL .TP 19 .ti +4 PARAMETER ( ITHVAL = 10 ) .TP 19 .ti +4 LOGICAL LOWER, WANTZ .TP 19 .ti +4 INTEGER CONTEXTC, CSRC_A, I, IACOL, IAROW, ICOFFA, IINFO, INDD, INDD2, INDE, INDE2, INDTAU, INDWORK, INDWORK2, IROFFA, IROFFZ, ISCALE, IZROW, J, K, LCM, LCMQ, LDC, LLWORK, LWMIN, MB_A, MB_Z, MYCOL, MYPCOLC, MYPROWC, MYROW, NB, NB_A, NB_Z, NN, NP, NPCOL, NPCOLC, NPROCS, NPROW, NPROWC, NQ, NRC, QRMEM, RSRC_A, RSRC_Z, SIZEMQRLEFT, SIZEMQRRIGHT .TP 19 .ti +4 REAL ANRM, BIGNUM, EPS, RMAX, RMIN, SAFMIN, SIGMA, SMLNUM .TP 19 .ti +4 INTEGER DESCQR( 10 ), IDUM1( 3 ), IDUM2( 3 ) .TP 19 .ti +4 LOGICAL LSAME .TP 19 .ti +4 INTEGER ILCM, INDXG2P, NUMROC, SL_GRIDRESHAPE .TP 19 .ti +4 REAL PSLAMCH, PSLANSY .TP 19 .ti +4 EXTERNAL LSAME, ILCM, INDXG2P, NUMROC, SL_GRIDRESHAPE, PSLAMCH, PSLANSY .TP 19 .ti +4 EXTERNAL BLACS_GRIDEXIT, BLACS_GRIDINFO, CHK1MAT, DESCINIT, PCHK2MAT, PSELGET, PSGEMR2D, PSLASCL, PSLASET, PSORMTR, PSSYTRD, PXERBLA, SCOPY, SGAMN2D, SGAMX2D, SSCAL, SSTEQR2 .TP 19 .ti +4 INTRINSIC ICHAR, MAX, MIN, MOD, REAL, SQRT .TP 19 .ti +4 IF( BLOCK_CYCLIC_2D*CSRC_*CTXT_*DLEN_*DTYPE_*LLD_*MB_*M_*NB_*N_* RSRC_.LT.0 )RETURN .TP 19 .ti +4 IF( N.EQ.0 ) 
RETURN .TP 19 .ti +4 CALL BLACS_GRIDINFO( DESCA( CTXT_ ), NPROW, NPCOL, MYROW, MYCOL ) .TP 19 .ti +4 INFO = 0 .TP 19 .ti +4 IF( NPROW.EQ.-1 ) THEN .TP 19 .ti +4 INFO = -( 700+CTXT_ ) .TP 19 .ti +4 ELSE IF( DESCA( CTXT_ ).NE.DESCZ( CTXT_ ) ) THEN .TP 19 .ti +4 INFO = -( 1200+CTXT_ ) .TP 19 .ti +4 ELSE .TP 19 .ti +4 CALL CHK1MAT( N, 3, N, 3, IA, JA, DESCA, 7, INFO ) .TP 19 .ti +4 CALL CHK1MAT( N, 3, N, 3, IZ, JZ, DESCZ, 12, INFO ) .TP 19 .ti +4 IF( INFO.EQ.0 ) THEN .TP 19 .ti +4 SAFMIN = PSLAMCH( DESCA( CTXT_ ), 'Safe minimum' ) .TP 19 .ti +4 EPS = PSLAMCH( DESCA( CTXT_ ), 'Precision' ) .TP 19 .ti +4 SMLNUM = SAFMIN / EPS .TP 19 .ti +4 BIGNUM = ONE / SMLNUM .TP 19 .ti +4 RMIN = SQRT( SMLNUM ) .TP 19 .ti +4 RMAX = MIN( SQRT( BIGNUM ), ONE / SQRT( SQRT( SAFMIN ) ) ) .TP 19 .ti +4 NPROCS = NPROW*NPCOL .TP 19 .ti +4 NB_A = DESCA( NB_ ) .TP 19 .ti +4 MB_A = DESCA( MB_ ) .TP 19 .ti +4 NB_Z = DESCZ( NB_ ) .TP 19 .ti +4 MB_Z = DESCZ( MB_ ) .TP 19 .ti +4 NB = NB_A .TP 19 .ti +4 LOWER = LSAME( UPLO, 'L' ) .TP 19 .ti +4 WANTZ = LSAME( JOBZ, 'V' ) .TP 19 .ti +4 RSRC_A = DESCA( RSRC_ ) .TP 19 .ti +4 CSRC_A = DESCA( CSRC_ ) .TP 19 .ti +4 RSRC_Z = DESCZ( RSRC_ ) .TP 19 .ti +4 LCM = ILCM( NPROW, NPCOL ) .TP 19 .ti +4 LCMQ = LCM / NPCOL .TP 19 .ti +4 IROFFA = MOD( IA-1, MB_A ) .TP 19 .ti +4 ICOFFA = MOD( JA-1, NB_A ) .TP 19 .ti +4 IROFFZ = MOD( IZ-1, MB_A ) .TP 19 .ti +4 IAROW = INDXG2P( 1, NB_A, MYROW, RSRC_A, NPROW ) .TP 19 .ti +4 IACOL = INDXG2P( 1, MB_A, MYCOL, CSRC_A, NPCOL ) .TP 19 .ti +4 IZROW = INDXG2P( 1, NB_A, MYROW, RSRC_Z, NPROW ) .TP 19 .ti +4 NP = NUMROC( N+IROFFA, NB_Z, MYROW, IAROW, NPROW ) .TP 19 .ti +4 NQ = NUMROC( N+ICOFFA, NB_Z, MYCOL, IACOL, NPCOL ) .TP 19 .ti +4 SIZEMQRLEFT = MAX( ( NB_A*( NB_A-1 ) ) / 2, ( NP+NQ )*NB_A ) + NB_A*NB_A .TP 19 .ti +4 SIZEMQRRIGHT = MAX( ( NB_A*( NB_A-1 ) ) / 2, ( NQ+MAX( NP+NUMROC( NUMROC( N+ICOFFA, NB_A, 0, 0, NPCOL ), NB, 0, 0, LCMQ ), NP ) )* NB_A ) + NB_A*NB_A .TP 19 .ti +4 LDC = 0 .TP 19 .ti +4 IF( WANTZ ) THEN .TP 19 .ti +4 
CONTEXTC = SL_GRIDRESHAPE( DESCA( CTXT_ ), 0, 1, 1, NPROCS, 1 ) .TP 19 .ti +4 CALL BLACS_GRIDINFO( CONTEXTC, NPROWC, NPCOLC, MYPROWC, MYPCOLC ) .TP 19 .ti +4 NRC = NUMROC( N, NB_A, MYPROWC, 0, NPROCS ) .TP 19 .ti +4 LDC = MAX( 1, NRC ) .TP 19 .ti +4 CALL DESCINIT( DESCQR, N, N, NB, NB, 0, 0, CONTEXTC, LDC, INFO ) .TP 19 .ti +4 END IF .TP 19 .ti +4 INDTAU = 1 .TP 19 .ti +4 INDE = INDTAU + N .TP 19 .ti +4 INDD = INDE + N .TP 19 .ti +4 INDD2 = INDD + N .TP 19 .ti +4 INDE2 = INDD2 + N .TP 19 .ti +4 INDWORK = INDE2 + N .TP 19 .ti +4 INDWORK2 = INDWORK + N*LDC .TP 19 .ti +4 LLWORK = LWORK - INDWORK + 1 .TP 19 .ti +4 NN = MAX( N, NB, 2 ) .TP 19 .ti +4 IF( WANTZ ) THEN .TP 19 .ti +4 QRMEM = 5*N + MAX( 2*NP+NQ+NB*NN, 2*NN-2 ) + N*LDC .TP 19 .ti +4 LWMIN = MAX( SIZEMQRLEFT, SIZEMQRRIGHT, QRMEM ) .TP 19 .ti +4 ELSE .TP 19 .ti +4 LWMIN = 5*N + 2*NP + NQ + NB*NN .TP 19 .ti +4 END IF .TP 19 .ti +4 END IF .TP 19 .ti +4 IF( INFO.EQ.0 ) THEN .TP 19 .ti +4 IF( .NOT.( WANTZ .OR. LSAME( JOBZ, 'N' ) ) ) THEN .TP 19 .ti +4 INFO = -1 .TP 19 .ti +4 ELSE IF( .NOT.( LOWER .OR. LSAME( UPLO, 'U' ) ) ) THEN .TP 19 .ti +4 INFO = -2 .TP 19 .ti +4 ELSE IF( LWORK.LT.LWMIN .AND. 
LWORK.NE.-1 ) THEN .TP 19 .ti +4 INFO = -14 .TP 19 .ti +4 ELSE IF( IROFFA.NE.IROFFZ ) THEN .TP 19 .ti +4 INFO = -10 .TP 19 .ti +4 ELSE IF( IROFFA.NE.0 ) THEN .TP 19 .ti +4 INFO = -5 .TP 19 .ti +4 ELSE IF( IAROW.NE.IZROW ) THEN .TP 19 .ti +4 INFO = -10 .TP 19 .ti +4 ELSE IF( DESCA( MB_ ).NE.DESCA( NB_ ) ) THEN .TP 19 .ti +4 INFO = -( 700+NB_ ) .TP 19 .ti +4 ELSE IF( DESCA( M_ ).NE.DESCZ( M_ ) ) THEN .TP 19 .ti +4 INFO = -( 1200+M_ ) .TP 19 .ti +4 ELSE IF( DESCA( N_ ).NE.DESCZ( N_ ) ) THEN .TP 19 .ti +4 INFO = -( 1200+N_ ) .TP 19 .ti +4 ELSE IF( DESCA( MB_ ).NE.DESCZ( MB_ ) ) THEN .TP 19 .ti +4 INFO = -( 1200+MB_ ) .TP 19 .ti +4 ELSE IF( DESCA( NB_ ).NE.DESCZ( NB_ ) ) THEN .TP 19 .ti +4 INFO = -( 1200+NB_ ) .TP 19 .ti +4 ELSE IF( DESCA( RSRC_ ).NE.DESCZ( RSRC_ ) ) THEN .TP 19 .ti +4 INFO = -( 1200+RSRC_ ) .TP 19 .ti +4 ELSE IF( DESCA( CTXT_ ).NE.DESCZ( CTXT_ ) ) THEN .TP 19 .ti +4 INFO = -( 1200+CTXT_ ) .TP 19 .ti +4 END IF .TP 19 .ti +4 END IF .TP 19 .ti +4 IF( WANTZ ) THEN .TP 19 .ti +4 IDUM1( 1 ) = ICHAR( 'V' ) .TP 19 .ti +4 ELSE .TP 19 .ti +4 IDUM1( 1 ) = ICHAR( 'N' ) .TP 19 .ti +4 END IF .TP 19 .ti +4 IDUM2( 1 ) = 1 .TP 19 .ti +4 IF( LOWER ) THEN .TP 19 .ti +4 IDUM1( 2 ) = ICHAR( 'L' ) .TP 19 .ti +4 ELSE .TP 19 .ti +4 IDUM1( 2 ) = ICHAR( 'U' ) .TP 19 .ti +4 END IF .TP 19 .ti +4 IDUM2( 2 ) = 2 .TP 19 .ti +4 IF( LWORK.EQ.-1 ) THEN .TP 19 .ti +4 IDUM1( 3 ) = -1 .TP 19 .ti +4 ELSE .TP 19 .ti +4 IDUM1( 3 ) = 1 .TP 19 .ti +4 END IF .TP 19 .ti +4 IDUM2( 3 ) = 3 .TP 19 .ti +4 CALL PCHK2MAT( N, 3, N, 3, IA, JA, DESCA, 7, N, 3, N, 3, IZ, JZ, DESCZ, 12, 3, IDUM1, IDUM2, INFO ) .TP 19 .ti +4 WORK( 1 ) = REAL( LWMIN ) .TP 19 .ti +4 END IF .TP 19 .ti +4 IF( INFO.NE.0 ) THEN .TP 19 .ti +4 CALL PXERBLA( DESCA( CTXT_ ), 'PSSYEV', -INFO ) .TP 19 .ti +4 IF( WANTZ ) CALL BLACS_GRIDEXIT( CONTEXTC ) .TP 19 .ti +4 RETURN .TP 19 .ti +4 ELSE IF( LWORK.EQ.-1 ) THEN .TP 19 .ti +4 IF( WANTZ ) CALL BLACS_GRIDEXIT( CONTEXTC ) .TP 19 .ti +4 RETURN .TP 19 .ti +4 END IF .TP 19 .ti +4 ISCALE = 0 
.TP 19 .ti +4 ANRM = PSLANSY( '1', UPLO, N, A, IA, JA, DESCA, WORK( INDWORK ) ) .TP 19 .ti +4 IF( ANRM.GT.ZERO .AND. ANRM.LT.RMIN ) THEN .TP 19 .ti +4 ISCALE = 1 .TP 19 .ti +4 SIGMA = RMIN / ANRM .TP 19 .ti +4 ELSE IF( ANRM.GT.RMAX ) THEN .TP 19 .ti +4 ISCALE = 1 .TP 19 .ti +4 SIGMA = RMAX / ANRM .TP 19 .ti +4 END IF .TP 19 .ti +4 IF( ISCALE.EQ.1 ) THEN .TP 19 .ti +4 CALL PSLASCL( UPLO, ONE, SIGMA, N, N, A, IA, JA, DESCA, IINFO ) .TP 19 .ti +4 END IF .TP 19 .ti +4 CALL PSSYTRD( UPLO, N, A, IA, JA, DESCA, WORK( INDD ), WORK( INDE ), WORK( INDTAU ), WORK( INDWORK ), LLWORK, IINFO ) .TP 19 .ti +4 DO 10 I = 1, N .TP 19 .ti +4 CALL PSELGET( 'A', ' ', WORK( INDD2+I-1 ), A, I+IA-1, I+JA-1, DESCA ) .TP 19 .ti +4 10 CONTINUE .TP 19 .ti +4 IF( LSAME( UPLO, 'U' ) ) THEN .TP 19 .ti +4 DO 20 I = 1, N - 1 .TP 19 .ti +4 CALL PSELGET( 'A', ' ', WORK( INDE2+I-1 ), A, I+IA-1, I+JA, DESCA ) .TP 19 .ti +4 20 CONTINUE .TP 19 .ti +4 ELSE .TP 19 .ti +4 DO 30 I = 1, N - 1 .TP 19 .ti +4 CALL PSELGET( 'A', ' ', WORK( INDE2+I-1 ), A, I+IA, I+JA-1, DESCA ) .TP 19 .ti +4 30 CONTINUE .TP 19 .ti +4 END IF .TP 19 .ti +4 IF( WANTZ ) THEN .TP 19 .ti +4 CALL PSLASET( 'Full', N, N, ZERO, ONE, WORK( INDWORK ), 1, 1, DESCQR ) .TP 19 .ti +4 CALL SSTEQR2( 'I', N, WORK( INDD2 ), WORK( INDE2 ), WORK( INDWORK ), LDC, NRC, WORK( INDWORK2 ), INFO ) .TP 19 .ti +4 CALL PSGEMR2D( N, N, WORK( INDWORK ), 1, 1, DESCQR, Z, 1, 1, DESCZ, CONTEXTC ) .TP 19 .ti +4 CALL PSORMTR( 'L', UPLO, 'N', N, N, A, IA, JA, DESCA, WORK( INDTAU ), Z, IZ, JZ, DESCZ, WORK( INDWORK ), LLWORK, IINFO ) .TP 19 .ti +4 ELSE .TP 19 .ti +4 CALL SSTEQR2( 'N', N, WORK( INDD2 ), WORK( INDE2 ), WORK( INDWORK ), 1, 1, WORK( INDWORK2 ), INFO ) .TP 19 .ti +4 END IF .TP 19 .ti +4 CALL SCOPY( N, WORK( INDD2 ), 1, W, 1 ) .TP 19 .ti +4 IF( ISCALE.EQ.1 ) THEN .TP 19 .ti +4 CALL SSCAL( N, ONE / SIGMA, W, 1 ) .TP 19 .ti +4 END IF .TP 19 .ti +4 WORK( 1 ) = REAL( LWMIN ) .TP 19 .ti +4 IF( WANTZ ) THEN .TP 19 .ti +4 CALL BLACS_GRIDEXIT( CONTEXTC ) .TP 19 .ti +4 
END IF .TP 19 .ti +4 IF( N.LE.ITHVAL ) THEN .TP 19 .ti +4 J = N .TP 19 .ti +4 K = 1 .TP 19 .ti +4 ELSE .TP 19 .ti +4 J = N / ITHVAL .TP 19 .ti +4 K = ITHVAL .TP 19 .ti +4 END IF .TP 19 .ti +4 DO 40 I = 1, J .TP 19 .ti +4 WORK( I+INDTAU ) = W( ( I-1 )*K+1 ) .TP 19 .ti +4 WORK( I+INDE ) = W( ( I-1 )*K+1 ) .TP 19 .ti +4 40 CONTINUE .TP 19 .ti +4 CALL SGAMN2D( DESCA( CTXT_ ), 'a', ' ', J, 1, WORK( 1+INDTAU ), J, 1, 1, -1, -1, 0 ) .TP 19 .ti +4 CALL SGAMX2D( DESCA( CTXT_ ), 'a', ' ', J, 1, WORK( 1+INDE ), J, 1, 1, -1, -1, 0 ) .TP 19 .ti +4 DO 50 I = 1, J .TP 19 .ti +4 IF( INFO.EQ.0 .AND. ( WORK( I+INDTAU )-WORK( I+INDE ).NE. ZERO ) ) THEN .TP 19 .ti +4 INFO = N + 1 .TP 19 .ti +4 END IF .TP 19 .ti +4 50 CONTINUE .TP 19 .ti +4 RETURN .TP 19 .ti +4 END .SH PURPOSE scalapack-doc-1.5/man/manl/pssyevx.l0100644000056400000620000003374106335610652017153 0ustar pfrauenfstaff.TH PSSYEVX l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME .SH SYNOPSIS .TP 20 SUBROUTINE PSSYEVX( JOBZ, RANGE, UPLO, N, A, IA, JA, DESCA, VL, VU, IL, IU, ABSTOL, M, NZ, W, ORFAC, Z, IZ, JZ, DESCZ, WORK, LWORK, IWORK, LIWORK, IFAIL, ICLUSTR, GAP, INFO ) .TP 20 .ti +4 CHARACTER JOBZ, RANGE, UPLO .TP 20 .ti +4 INTEGER IA, IL, INFO, IU, IZ, JA, JZ, LIWORK, LWORK, M, N, NZ .TP 20 .ti +4 REAL ABSTOL, ORFAC, VL, VU .TP 20 .ti +4 INTEGER DESCA( * ), DESCZ( * ), ICLUSTR( * ), IFAIL( * ), IWORK( * ) .TP 20 .ti +4 REAL A( * ), GAP( * ), W( * ), WORK( * ), Z( * ) .TP 20 .ti +4 INTEGER BLOCK_CYCLIC_2D, DLEN_, DTYPE_, CTXT_, M_, N_, MB_, NB_, RSRC_, CSRC_, LLD_ .TP 20 .ti +4 PARAMETER ( BLOCK_CYCLIC_2D = 1, DLEN_ = 9, DTYPE_ = 1, CTXT_ = 2, M_ = 3, N_ = 4, MB_ = 5, NB_ = 6, RSRC_ = 7, CSRC_ = 8, LLD_ = 9 ) .TP 20 .ti +4 REAL ZERO, ONE, TEN, FIVE .TP 20 .ti +4 PARAMETER ( ZERO = 0.0E+0, ONE = 1.0E+0, TEN = 10.0E+0, FIVE = 5.0E+0 ) .TP 20 .ti +4 INTEGER IERREIN, IERRCLS, IERRSPC, IERREBZ .TP 20 .ti +4 PARAMETER ( IERREIN = 1, IERRCLS = 2, IERRSPC = 4, IERREBZ = 8 ) .TP 20 .ti +4 LOGICAL ALLEIG, 
INDEIG, LOWER, LQUERY, QUICKRETURN, VALEIG, WANTZ .TP 20 .ti +4 CHARACTER ORDER .TP 20 .ti +4 INTEGER CSRC_A, I, IACOL, IAROW, ICOFFA, IINFO, INDD, INDD2, INDE, INDE2, INDIBL, INDISP, INDTAU, INDWORK, IROFFA, IROFFZ, ISCALE, ISIZESTEBZ, ISIZESTEIN, IZROW, LALLWORK, LIWMIN, LLWORK, LWMIN, MAXEIGS, MB_A, MB_Z, MQ0, MYCOL, MYROW, NB, NB_A, NB_Z, NEIG, NN, NNP, NP0, NPCOL, NPROCS, NPROW, NSPLIT, NZZ, OFFSET, RSRC_A, RSRC_Z, SIZEORMTR, SIZESTEIN, SIZESYEVX .TP 20 .ti +4 REAL ABSTLL, ANRM, BIGNUM, EPS, RMAX, RMIN, SAFMIN, SIGMA, SMLNUM, VLL, VUU .TP 20 .ti +4 INTEGER IDUM1( 4 ), IDUM2( 4 ) .TP 20 .ti +4 LOGICAL LSAME .TP 20 .ti +4 INTEGER ICEIL, INDXG2P, NUMROC .TP 20 .ti +4 REAL PSLAMCH, PSLANSY .TP 20 .ti +4 EXTERNAL LSAME, ICEIL, INDXG2P, NUMROC, PSLAMCH, PSLANSY .TP 20 .ti +4 EXTERNAL BLACS_GRIDINFO, CHK1MAT, IGAMN2D, PCHK2MAT, PSELGET, PSLARED1D, PSLASCL, PSORMTR, PSSTEBZ, PSSTEIN, PSSYTRD, PXERBLA, SGEBR2D, SGEBS2D, SLASRT, SSCAL .TP 20 .ti +4 INTRINSIC ABS, ICHAR, MAX, MIN, MOD, REAL, SQRT .TP 20 .ti +4 IF( BLOCK_CYCLIC_2D*CSRC_*CTXT_*DLEN_*DTYPE_*LLD_*MB_*M_*NB_*N_* RSRC_.LT.0 )RETURN .TP 20 .ti +4 QUICKRETURN = ( N.EQ.0 ) .TP 20 .ti +4 CALL BLACS_GRIDINFO( DESCA( CTXT_ ), NPROW, NPCOL, MYROW, MYCOL ) .TP 20 .ti +4 INFO = 0 .TP 20 .ti +4 IF( NPROW.EQ.-1 ) THEN .TP 20 .ti +4 INFO = -( 800+CTXT_ ) .TP 20 .ti +4 ELSE IF( DESCA( CTXT_ ).NE.DESCZ( CTXT_ ) ) THEN .TP 20 .ti +4 INFO = -( 2100+CTXT_ ) .TP 20 .ti +4 ELSE .TP 20 .ti +4 CALL CHK1MAT( N, 4, N, 4, IA, JA, DESCA, 8, INFO ) .TP 20 .ti +4 CALL CHK1MAT( N, 4, N, 4, IZ, JZ, DESCZ, 21, INFO ) .TP 20 .ti +4 IF( INFO.EQ.0 ) THEN .TP 20 .ti +4 SAFMIN = PSLAMCH( DESCA( CTXT_ ), 'Safe minimum' ) .TP 20 .ti +4 EPS = PSLAMCH( DESCA( CTXT_ ), 'Precision' ) .TP 20 .ti +4 SMLNUM = SAFMIN / EPS .TP 20 .ti +4 BIGNUM = ONE / SMLNUM .TP 20 .ti +4 RMIN = SQRT( SMLNUM ) .TP 20 .ti +4 RMAX = MIN( SQRT( BIGNUM ), ONE / SQRT( SQRT( SAFMIN ) ) ) .TP 20 .ti +4 NPROCS = NPROW*NPCOL .TP 20 .ti +4 LOWER = LSAME( UPLO, 'L' ) .TP 20 .ti +4 
WANTZ = LSAME( JOBZ, 'V' ) .TP 20 .ti +4 ALLEIG = LSAME( RANGE, 'A' ) .TP 20 .ti +4 VALEIG = LSAME( RANGE, 'V' ) .TP 20 .ti +4 INDEIG = LSAME( RANGE, 'I' ) .TP 20 .ti +4 INDTAU = 1 .TP 20 .ti +4 INDE = INDTAU + N .TP 20 .ti +4 INDD = INDE + N .TP 20 .ti +4 INDD2 = INDD + N .TP 20 .ti +4 INDE2 = INDD2 + N .TP 20 .ti +4 INDWORK = INDE2 + N .TP 20 .ti +4 LLWORK = LWORK - INDWORK + 1 .TP 20 .ti +4 ISIZESTEIN = 3*N + NPROCS + 1 .TP 20 .ti +4 ISIZESTEBZ = MAX( 4*N, 14, NPROCS ) .TP 20 .ti +4 INDIBL = ( MAX( ISIZESTEIN, ISIZESTEBZ ) ) + 1 .TP 20 .ti +4 INDISP = INDIBL + N .TP 20 .ti +4 LQUERY = .FALSE. .TP 20 .ti +4 IF( LWORK.EQ.-1 .OR. LIWORK.EQ.-1 ) LQUERY = .TRUE. .TP 20 .ti +4 NNP = MAX( N, NPROCS+1, 4 ) .TP 20 .ti +4 LIWMIN = 6*NNP .TP 20 .ti +4 NPROCS = NPROW*NPCOL .TP 20 .ti +4 NB_A = DESCA( NB_ ) .TP 20 .ti +4 MB_A = DESCA( MB_ ) .TP 20 .ti +4 NB_Z = DESCZ( NB_ ) .TP 20 .ti +4 MB_Z = DESCZ( MB_ ) .TP 20 .ti +4 NB = NB_A .TP 20 .ti +4 NN = MAX( N, NB, 2 ) .TP 20 .ti +4 RSRC_A = DESCA( RSRC_ ) .TP 20 .ti +4 CSRC_A = DESCA( CSRC_ ) .TP 20 .ti +4 RSRC_Z = DESCZ( RSRC_ ) .TP 20 .ti +4 IROFFA = MOD( IA-1, MB_A ) .TP 20 .ti +4 ICOFFA = MOD( JA-1, NB_A ) .TP 20 .ti +4 IROFFZ = MOD( IZ-1, MB_A ) .TP 20 .ti +4 IAROW = INDXG2P( 1, NB_A, MYROW, RSRC_A, NPROW ) .TP 20 .ti +4 IACOL = INDXG2P( 1, MB_A, MYCOL, CSRC_A, NPCOL ) .TP 20 .ti +4 IZROW = INDXG2P( 1, NB_A, MYROW, RSRC_Z, NPROW ) .TP 20 .ti +4 NP0 = NUMROC( N+IROFFA, NB_Z, MYROW, IAROW, NPROW ) .TP 20 .ti +4 MQ0 = NUMROC( N+ICOFFA, NB_Z, MYCOL, IACOL, NPCOL ) .TP 20 .ti +4 IF( ( .NOT.WANTZ ) .OR. ( VALEIG .AND. ( .NOT.LQUERY ) ) ) THEN .TP 20 .ti +4 LWMIN = 5*N + MAX( 5*NN, NB*( NP0+1 ) ) .TP 20 .ti +4 NEIG = 0 .TP 20 .ti +4 ELSE .TP 20 .ti +4 IF( ALLEIG .OR. 
VALEIG ) THEN .TP 20 .ti +4 NEIG = N .TP 20 .ti +4 ELSE IF( INDEIG ) THEN .TP 20 .ti +4 NEIG = IU - IL + 1 .TP 20 .ti +4 END IF .TP 20 .ti +4 MQ0 = NUMROC( MAX( NEIG, NB, 2 ), NB, MYCOL, IACOL, NPCOL ) .TP 20 .ti +4 LWMIN = 5*N + MAX( 5*NN, NP0*MQ0+2*NB*NB ) + ICEIL( NEIG, NPROW*NPCOL )*NN .TP 20 .ti +4 END IF .TP 20 .ti +4 END IF .TP 20 .ti +4 IF( INFO.EQ.0 ) THEN .TP 20 .ti +4 IF( MYROW.EQ.0 .AND. MYCOL.EQ.0 ) THEN .TP 20 .ti +4 WORK( 1 ) = ABSTOL .TP 20 .ti +4 IF( VALEIG ) THEN .TP 20 .ti +4 WORK( 2 ) = VL .TP 20 .ti +4 WORK( 3 ) = VU .TP 20 .ti +4 ELSE .TP 20 .ti +4 WORK( 2 ) = ZERO .TP 20 .ti +4 WORK( 3 ) = ZERO .TP 20 .ti +4 END IF .TP 20 .ti +4 CALL SGEBS2D( DESCA( CTXT_ ), 'ALL', ' ', 3, 1, WORK, 3 ) .TP 20 .ti +4 ELSE .TP 20 .ti +4 CALL SGEBR2D( DESCA( CTXT_ ), 'ALL', ' ', 3, 1, WORK, 3, 0, 0 ) .TP 20 .ti +4 END IF .TP 20 .ti +4 IF( .NOT.( WANTZ .OR. LSAME( JOBZ, 'N' ) ) ) THEN .TP 20 .ti +4 INFO = -1 .TP 20 .ti +4 ELSE IF( .NOT.( ALLEIG .OR. VALEIG .OR. INDEIG ) ) THEN .TP 20 .ti +4 INFO = -2 .TP 20 .ti +4 ELSE IF( .NOT.( LOWER .OR. LSAME( UPLO, 'U' ) ) ) THEN .TP 20 .ti +4 INFO = -3 .TP 20 .ti +4 ELSE IF( VALEIG .AND. N.GT.0 .AND. VU.LE.VL ) THEN .TP 20 .ti +4 INFO = -10 .TP 20 .ti +4 ELSE IF( INDEIG .AND. ( IL.LT.1 .OR. IL.GT.MAX( 1, N ) ) ) THEN .TP 20 .ti +4 INFO = -11 .TP 20 .ti +4 ELSE IF( INDEIG .AND. ( IU.LT.MIN( N, IL ) .OR. IU.GT.N ) ) THEN .TP 20 .ti +4 INFO = -12 .TP 20 .ti +4 ELSE IF( LWORK.LT.LWMIN .AND. LWORK.NE.-1 ) THEN .TP 20 .ti +4 INFO = -23 .TP 20 .ti +4 ELSE IF( LIWORK.LT.LIWMIN .AND. LIWORK.NE.-1 ) THEN .TP 20 .ti +4 INFO = -25 .TP 20 .ti +4 ELSE IF( VALEIG .AND. ( ABS( WORK( 2 )-VL ).GT.FIVE*EPS* ABS( VL ) ) ) THEN .TP 20 .ti +4 INFO = -9 .TP 20 .ti +4 ELSE IF( VALEIG .AND. 
( ABS( WORK( 3 )-VU ).GT.FIVE*EPS* ABS( VU ) ) ) THEN .TP 20 .ti +4 INFO = -10 .TP 20 .ti +4 ELSE IF( ABS( WORK( 1 )-ABSTOL ).GT.FIVE*EPS*ABS( ABSTOL ) ) THEN .TP 20 .ti +4 INFO = -13 .TP 20 .ti +4 ELSE IF( IROFFA.NE.IROFFZ ) THEN .TP 20 .ti +4 INFO = -19 .TP 20 .ti +4 ELSE IF( IROFFA.NE.0 ) THEN .TP 20 .ti +4 INFO = -6 .TP 20 .ti +4 ELSE IF( IAROW.NE.IZROW ) THEN .TP 20 .ti +4 INFO = -19 .TP 20 .ti +4 ELSE IF( DESCA( MB_ ).NE.DESCA( NB_ ) ) THEN .TP 20 .ti +4 INFO = -( 800+NB_ ) .TP 20 .ti +4 ELSE IF( DESCA( M_ ).NE.DESCZ( M_ ) ) THEN .TP 20 .ti +4 INFO = -( 2100+M_ ) .TP 20 .ti +4 ELSE IF( DESCA( N_ ).NE.DESCZ( N_ ) ) THEN .TP 20 .ti +4 INFO = -( 2100+N_ ) .TP 20 .ti +4 ELSE IF( DESCA( MB_ ).NE.DESCZ( MB_ ) ) THEN .TP 20 .ti +4 INFO = -( 2100+MB_ ) .TP 20 .ti +4 ELSE IF( DESCA( NB_ ).NE.DESCZ( NB_ ) ) THEN .TP 20 .ti +4 INFO = -( 2100+NB_ ) .TP 20 .ti +4 ELSE IF( DESCA( RSRC_ ).NE.DESCZ( RSRC_ ) ) THEN .TP 20 .ti +4 INFO = -( 2100+RSRC_ ) .TP 20 .ti +4 ELSE IF( DESCA( CSRC_ ).NE.DESCZ( CSRC_ ) ) THEN .TP 20 .ti +4 INFO = -( 2100+CSRC_ ) .TP 20 .ti +4 ELSE IF( DESCA( CTXT_ ).NE.DESCZ( CTXT_ ) ) THEN .TP 20 .ti +4 INFO = -( 2100+CTXT_ ) .TP 20 .ti +4 END IF .TP 20 .ti +4 END IF .TP 20 .ti +4 IF( WANTZ ) THEN .TP 20 .ti +4 IDUM1( 1 ) = ICHAR( 'V' ) .TP 20 .ti +4 ELSE .TP 20 .ti +4 IDUM1( 1 ) = ICHAR( 'N' ) .TP 20 .ti +4 END IF .TP 20 .ti +4 IDUM2( 1 ) = 1 .TP 20 .ti +4 IF( LOWER ) THEN .TP 20 .ti +4 IDUM1( 2 ) = ICHAR( 'L' ) .TP 20 .ti +4 ELSE .TP 20 .ti +4 IDUM1( 2 ) = ICHAR( 'U' ) .TP 20 .ti +4 END IF .TP 20 .ti +4 IDUM2( 2 ) = 2 .TP 20 .ti +4 IF( ALLEIG ) THEN .TP 20 .ti +4 IDUM1( 3 ) = ICHAR( 'A' ) .TP 20 .ti +4 ELSE IF( INDEIG ) THEN .TP 20 .ti +4 IDUM1( 3 ) = ICHAR( 'I' ) .TP 20 .ti +4 ELSE .TP 20 .ti +4 IDUM1( 3 ) = ICHAR( 'V' ) .TP 20 .ti +4 END IF .TP 20 .ti +4 IDUM2( 3 ) = 3 .TP 20 .ti +4 IF( LQUERY ) THEN .TP 20 .ti +4 IDUM1( 4 ) = -1 .TP 20 .ti +4 ELSE .TP 20 .ti +4 IDUM1( 4 ) = 1 .TP 20 .ti +4 END IF .TP 20 .ti +4 IDUM2( 4 ) = 4 .TP 20 .ti +4 CALL 
PCHK2MAT( N, 4, N, 4, IA, JA, DESCA, 8, N, 4, N, 4, IZ, JZ, DESCZ, 21, 4, IDUM1, IDUM2, INFO ) .TP 20 .ti +4 WORK( 1 ) = REAL( LWMIN ) .TP 20 .ti +4 IWORK( 1 ) = LIWMIN .TP 20 .ti +4 END IF .TP 20 .ti +4 IF( INFO.NE.0 ) THEN .TP 20 .ti +4 CALL PXERBLA( DESCA( CTXT_ ), 'PSSYEVX', -INFO ) .TP 20 .ti +4 RETURN .TP 20 .ti +4 ELSE IF( LQUERY ) THEN .TP 20 .ti +4 RETURN .TP 20 .ti +4 END IF .TP 20 .ti +4 IF( QUICKRETURN ) THEN .TP 20 .ti +4 IF( WANTZ ) THEN .TP 20 .ti +4 NZ = 0 .TP 20 .ti +4 ICLUSTR( 1 ) = 0 .TP 20 .ti +4 END IF .TP 20 .ti +4 M = 0 .TP 20 .ti +4 WORK( 1 ) = REAL( LWMIN ) .TP 20 .ti +4 IWORK( 1 ) = LIWMIN .TP 20 .ti +4 RETURN .TP 20 .ti +4 END IF .TP 20 .ti +4 ABSTLL = ABSTOL .TP 20 .ti +4 ISCALE = 0 .TP 20 .ti +4 IF( VALEIG ) THEN .TP 20 .ti +4 VLL = VL .TP 20 .ti +4 VUU = VU .TP 20 .ti +4 ELSE .TP 20 .ti +4 VLL = ZERO .TP 20 .ti +4 VUU = ZERO .TP 20 .ti +4 END IF .TP 20 .ti +4 ANRM = PSLANSY( '1', UPLO, N, A, IA, JA, DESCA, WORK( INDWORK ) ) .TP 20 .ti +4 IF( ANRM.GT.ZERO .AND. ANRM.LT.RMIN ) THEN .TP 20 .ti +4 ISCALE = 1 .TP 20 .ti +4 SIGMA = RMIN / ANRM .TP 20 .ti +4 ANRM = ANRM*SIGMA .TP 20 .ti +4 ELSE IF( ANRM.GT.RMAX ) THEN .TP 20 .ti +4 ISCALE = 1 .TP 20 .ti +4 SIGMA = RMAX / ANRM .TP 20 .ti +4 ANRM = ANRM*SIGMA .TP 20 .ti +4 END IF .TP 20 .ti +4 IF( ISCALE.EQ.1 ) THEN .TP 20 .ti +4 CALL PSLASCL( UPLO, ONE, SIGMA, N, N, A, IA, JA, DESCA, IINFO ) .TP 20 .ti +4 IF( ABSTOL.GT.0 ) ABSTLL = ABSTOL*SIGMA .TP 20 .ti +4 IF( VALEIG ) THEN .TP 20 .ti +4 VLL = VL*SIGMA .TP 20 .ti +4 VUU = VU*SIGMA .TP 20 .ti +4 IF( VUU.EQ.VLL ) THEN .TP 20 .ti +4 VUU = VUU + 2*MAX( ABS( VUU )*EPS, SAFMIN ) .TP 20 .ti +4 END IF .TP 20 .ti +4 END IF .TP 20 .ti +4 END IF .TP 20 .ti +4 LALLWORK = LLWORK .TP 20 .ti +4 CALL PSSYTRD( UPLO, N, A, IA, JA, DESCA, WORK( INDD ), WORK( INDE ), WORK( INDTAU ), WORK( INDWORK ), LLWORK, IINFO ) .TP 20 .ti +4 OFFSET = 0 .TP 20 .ti +4 IF( IA.EQ.1 .AND. JA.EQ.1 .AND. RSRC_A.EQ.0 .AND. 
CSRC_A.EQ.0 ) THEN .TP 20 .ti +4 CALL PSLARED1D( N, IA, JA, DESCA, WORK( INDD ), WORK( INDD2 ), WORK( INDWORK ), LLWORK ) .TP 20 .ti +4 CALL PSLARED1D( N, IA, JA, DESCA, WORK( INDE ), WORK( INDE2 ), WORK( INDWORK ), LLWORK ) .TP 20 .ti +4 IF( .NOT.LOWER ) OFFSET = 1 .TP 20 .ti +4 ELSE .TP 20 .ti +4 DO 10 I = 1, N .TP 20 .ti +4 CALL PSELGET( 'A', ' ', WORK( INDD2+I-1 ), A, I+IA-1, I+JA-1, DESCA ) .TP 20 .ti +4 10 CONTINUE .TP 20 .ti +4 IF( LSAME( UPLO, 'U' ) ) THEN .TP 20 .ti +4 DO 20 I = 1, N - 1 .TP 20 .ti +4 CALL PSELGET( 'A', ' ', WORK( INDE2+I-1 ), A, I+IA-1, I+JA, DESCA ) .TP 20 .ti +4 20 CONTINUE .TP 20 .ti +4 ELSE .TP 20 .ti +4 DO 30 I = 1, N - 1 .TP 20 .ti +4 CALL PSELGET( 'A', ' ', WORK( INDE2+I-1 ), A, I+IA, I+JA-1, DESCA ) .TP 20 .ti +4 30 CONTINUE .TP 20 .ti +4 END IF .TP 20 .ti +4 END IF .TP 20 .ti +4 IF( WANTZ ) THEN .TP 20 .ti +4 ORDER = 'b' .TP 20 .ti +4 ELSE .TP 20 .ti +4 ORDER = 'e' .TP 20 .ti +4 END IF .TP 20 .ti +4 CALL PSSTEBZ( DESCA( CTXT_ ), RANGE, ORDER, N, VLL, VUU, IL, IU, ABSTLL, WORK( INDD2 ), WORK( INDE2+OFFSET ), M, NSPLIT, W, IWORK( INDIBL ), IWORK( INDISP ), WORK( INDWORK ), LLWORK, IWORK( 1 ), ISIZESTEBZ, IINFO ) .TP 20 .ti +4 IF( IINFO.NE.0 ) THEN .TP 20 .ti +4 INFO = INFO + IERREBZ .TP 20 .ti +4 DO 40 I = 1, M .TP 20 .ti +4 IWORK( INDIBL+I-1 ) = ABS( IWORK( INDIBL+I-1 ) ) .TP 20 .ti +4 40 CONTINUE .TP 20 .ti +4 END IF .TP 20 .ti +4 IF( WANTZ ) THEN .TP 20 .ti +4 IF( VALEIG ) THEN .TP 20 .ti +4 CALL IGAMN2D( DESCA( CTXT_ ), 'A', ' ', 1, 1, LALLWORK, 1, 1, 1, -1, -1, -1 ) .TP 20 .ti +4 MAXEIGS = DESCZ( N_ ) .TP 20 .ti +4 DO 50 NZ = MIN( MAXEIGS, M ), 0, -1 .TP 20 .ti +4 MQ0 = NUMROC( NZ, NB, 0, 0, NPCOL ) .TP 20 .ti +4 SIZESTEIN = ICEIL( NZ, NPROCS )*N + MAX( 5*N, NP0*MQ0 ) .TP 20 .ti +4 SIZEORMTR = MAX( ( NB*( NB-1 ) ) / 2, ( MQ0+NP0 )*NB ) + NB*NB .TP 20 .ti +4 SIZESYEVX = MAX( SIZESTEIN, SIZEORMTR ) .TP 20 .ti +4 IF( SIZESYEVX.LE.LALLWORK ) GO TO 60 .TP 20 .ti +4 50 CONTINUE .TP 20 .ti +4 60 CONTINUE .TP 20 .ti +4 ELSE .TP 20 .ti 
+4 NZ = M .TP 20 .ti +4 END IF .TP 20 .ti +4 NZ = MAX( NZ, 0 ) .TP 20 .ti +4 IF( NZ.NE.M ) THEN .TP 20 .ti +4 INFO = INFO + IERRSPC .TP 20 .ti +4 DO 70 I = 1, M .TP 20 .ti +4 IFAIL( I ) = 0 .TP 20 .ti +4 70 CONTINUE .TP 20 .ti +4 IF( NSPLIT.GT.1 ) THEN .TP 20 .ti +4 CALL SLASRT( 'I', M, W, IINFO ) .TP 20 .ti +4 IF( NZ.GT.0 ) THEN .TP 20 .ti +4 VUU = W( NZ ) - TEN*( EPS*ANRM+SAFMIN ) .TP 20 .ti +4 IF( VLL.GE.VUU ) THEN .TP 20 .ti +4 NZZ = 0 .TP 20 .ti +4 ELSE .TP 20 .ti +4 CALL PSSTEBZ( DESCA( CTXT_ ), RANGE, ORDER, N, VLL, VUU, IL, IU, ABSTLL, WORK( INDD2 ), WORK( INDE2+OFFSET ), NZZ, NSPLIT, W, IWORK( INDIBL ), IWORK( INDISP ), WORK( INDWORK ), LLWORK, IWORK( 1 ), ISIZESTEBZ, IINFO ) .TP 20 .ti +4 END IF .TP 20 .ti +4 IF( MOD( INFO / IERREBZ, 1 ).EQ.0 ) THEN .TP 20 .ti +4 IF( NZZ.GT.NZ .OR. IINFO.NE.0 ) THEN .TP 20 .ti +4 INFO = INFO + IERREBZ .TP 20 .ti +4 END IF .TP 20 .ti +4 END IF .TP 20 .ti +4 END IF .TP 20 .ti +4 NZ = MIN( NZ, NZZ ) .TP 20 .ti +4 END IF .TP 20 .ti +4 END IF .TP 20 .ti +4 CALL PSSTEIN( N, WORK( INDD2 ), WORK( INDE2+OFFSET ), NZ, W, IWORK( INDIBL ), IWORK( INDISP ), ORFAC, Z, IZ, JZ, DESCZ, WORK( INDWORK ), LALLWORK, IWORK( 1 ), ISIZESTEIN, IFAIL, ICLUSTR, GAP, IINFO ) .TP 20 .ti +4 IF( IINFO.GE.NZ+1 ) INFO = INFO + IERRCLS .TP 20 .ti +4 IF( MOD( IINFO, NZ+1 ).NE.0 ) INFO = INFO + IERREIN .TP 20 .ti +4 IF( NZ.GT.0 ) THEN .TP 20 .ti +4 CALL PSORMTR( 'L', UPLO, 'N', N, NZ, A, IA, JA, DESCA, WORK( INDTAU ), Z, IZ, JZ, DESCZ, WORK( INDWORK ), LLWORK, IINFO ) .TP 20 .ti +4 END IF .TP 20 .ti +4 END IF .TP 20 .ti +4 IF( ISCALE.EQ.1 ) THEN .TP 20 .ti +4 CALL SSCAL( M, ONE / SIGMA, W, 1 ) .TP 20 .ti +4 END IF .TP 20 .ti +4 WORK( 1 ) = REAL( LWMIN ) .TP 20 .ti +4 IWORK( 1 ) = LIWMIN .TP 20 .ti +4 RETURN .TP 20 .ti +4 END .SH PURPOSE scalapack-doc-1.5/man/manl/pssygs2.l0100644000056400000620000001414006335610652017034 0ustar pfrauenfstaff.TH PSSYGS2 l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PSSYGS2 - reduce a real 
symmetric-definite generalized eigenproblem to standard form .SH SYNOPSIS .TP 20 SUBROUTINE PSSYGS2( IBTYPE, UPLO, N, A, IA, JA, DESCA, B, IB, JB, DESCB, INFO ) .TP 20 .ti +4 CHARACTER UPLO .TP 20 .ti +4 INTEGER IA, IB, IBTYPE, INFO, JA, JB, N .TP 20 .ti +4 INTEGER DESCA( * ), DESCB( * ) .TP 20 .ti +4 REAL A( * ), B( * ) .SH PURPOSE PSSYGS2 reduces a real symmetric-definite generalized eigenproblem to standard form. In the following sub( A ) denotes A( IA:IA+N-1, JA:JA+N-1 ) and sub( B ) denotes B( IB:IB+N-1, JB:JB+N-1 ). .br If IBTYPE = 1, the problem is sub( A )*x = lambda*sub( B )*x, and sub( A ) is overwritten by inv(U**T)*sub( A )*inv(U) or inv(L)*sub( A )*inv(L**T) .br If IBTYPE = 2 or 3, the problem is sub( A )*sub( B )*x = lambda*x or sub( B )*sub( A )*x = lambda*x, and sub( A ) is overwritten by U*sub( A )*U**T or L**T*sub( A )*L. .br sub( B ) must have been previously factorized as U**T*U or L*L**T by PSPOTRF. .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. 
.br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 9 IBTYPE (global input) INTEGER = 1: compute inv(U**T)*sub( A )*inv(U) or inv(L)*sub( A )*inv(L**T); = 2 or 3: compute U*sub( A )*U**T or L**T*sub( A )*L. .TP 8 UPLO (global input) CHARACTER .br = 'U': Upper triangle of sub( A ) is stored and sub( B ) is factored as U**T*U; = 'L': Lower triangle of sub( A ) is stored and sub( B ) is factored as L*L**T. .TP 8 N (global input) INTEGER The order of the matrices sub( A ) and sub( B ). N >= 0. .TP 8 A (local input/local output) REAL pointer into the local memory to an array of dimension (LLD_A, LOCc(JA+N-1)). On entry, this array contains the local pieces of the N-by-N symmetric distributed matrix sub( A ). 
If UPLO = 'U', the leading N-by-N upper triangular part of sub( A ) contains the upper triangular part of the matrix, and its strictly lower triangular part is not referenced. If UPLO = 'L', the leading N-by-N lower triangular part of sub( A ) contains the lower triangular part of the matrix, and its strictly upper triangular part is not referenced. On exit, if INFO = 0, the transformed matrix, stored in the same format as sub( A ). .TP 8 IA (global input) INTEGER A's global row index, which points to the beginning of the submatrix which is to be operated on. .TP 8 JA (global input) INTEGER A's global column index, which points to the beginning of the submatrix which is to be operated on. .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 B (local input) REAL pointer into the local memory to an array of dimension (LLD_B, LOCc(JB+N-1)). On entry, this array contains the local pieces of the triangular factor from the Cholesky factorization of sub( B ), as returned by PSPOTRF. .TP 8 IB (global input) INTEGER B's global row index, which points to the beginning of the submatrix which is to be operated on. .TP 8 JB (global input) INTEGER B's global column index, which points to the beginning of the submatrix which is to be operated on. .TP 8 DESCB (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix B. .TP 8 INFO (global output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. 
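The Notes above define LOCr and LOCc through the ScaLAPACK tool function NUMROC, together with an upper bound. The counting rule can be checked outside the library; the following is an illustrative pure-Python transcription of that rule (not a ScaLAPACK call, and the matrix/grid sizes below are made-up examples):

```python
import math

def numroc(n, nb, iproc, isrcproc, nprocs):
    """Number of rows (or columns) of an n-wide global dimension, blocked
    in chunks of nb, that land on process iproc when the first block is
    held by process isrcproc in a ring of nprocs processes."""
    mydist = (nprocs + iproc - isrcproc) % nprocs
    nblocks = n // nb                   # number of complete blocks
    num = (nblocks // nprocs) * nb      # full rounds shared by everyone
    extrablks = nblocks % nprocs        # leftover complete blocks
    if mydist < extrablks:
        num += nb                       # one extra complete block
    elif mydist == extrablks:
        num += n % nb                   # the trailing partial block
    return num

# LOCr for a 1000-row matrix with MB_A = 64 on a grid with NPROW = 2, RSRC_A = 0
n, nb, nprow = 1000, 64, 2
locr = [numroc(n, nb, p, 0, nprow) for p in range(nprow)]
assert sum(locr) == n                   # every row lives somewhere
# the bound quoted above: LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A
bound = math.ceil(math.ceil(n / nb) / nprow) * nb
assert all(l <= bound for l in locr)
```

The same function with the column blocking factor NB_A, MYCOL, CSRC_A and NPCOL gives LOCc.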
.\" scalapack-doc-1.5/man/manl/pssygst.l
.TH PSSYGST l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)"
.SH NAME
PSSYGST - reduce a real symmetric-definite generalized eigenproblem to standard form
.SH SYNOPSIS
.TP 20
SUBROUTINE PSSYGST( IBTYPE, UPLO, N, A, IA, JA, DESCA, B, IB, JB, DESCB, SCALE, INFO )
.TP 20
.ti +4
CHARACTER UPLO
.TP 20
.ti +4
INTEGER IA, IB, IBTYPE, INFO, JA, JB, N
.TP 20
.ti +4
REAL SCALE
.TP 20
.ti +4
INTEGER DESCA( * ), DESCB( * )
.TP 20
.ti +4
REAL A( * ), B( * )
.SH PURPOSE
PSSYGST reduces a real symmetric-definite generalized eigenproblem to standard form. In the following, sub( A ) denotes A( IA:IA+N-1, JA:JA+N-1 ) and sub( B ) denotes B( IB:IB+N-1, JB:JB+N-1 ).
.br
If IBTYPE = 1, the problem is sub( A )*x = lambda*sub( B )*x, and sub( A ) is overwritten by inv(U**T)*sub( A )*inv(U) or inv(L)*sub( A )*inv(L**T).
.br
If IBTYPE = 2 or 3, the problem is sub( A )*sub( B )*x = lambda*x or sub( B )*sub( A )*x = lambda*x, and sub( A ) is overwritten by U*sub( A )*U**T or L**T*sub( A )*L.
.br
sub( B ) must have been previously factorized as U**T*U or L*L**T by PSPOTRF.
.br
Notes
.br
=====
.br
Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location.
.br
Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array".
.br
NOTATION        STORED IN      EXPLANATION
.br
--------------- -------------- --------------------------------------
DTYPE_A(global) DESCA( DTYPE_ ) The descriptor type. In this case, DTYPE_A = 1.
.br
CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distributed over.
The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 9 IBTYPE (global input) INTEGER = 1: compute inv(U**T)*sub( A )*inv(U) or inv(L)*sub( A )*inv(L**T); = 2 or 3: compute U*sub( A )*U**T or L**T*sub( A )*L. .TP 8 UPLO (global input) CHARACTER .br = 'U': Upper triangle of sub( A ) is stored and sub( B ) is factored as U**T*U; = 'L': Lower triangle of sub( A ) is stored and sub( B ) is factored as L*L**T. .TP 8 N (global input) INTEGER The order of the matrices sub( A ) and sub( B ). N >= 0. 
.TP 8 A (local input/local output) REAL pointer into the local memory to an array of dimension (LLD_A, LOCc(JA+N-1)). On entry, this array contains the local pieces of the N-by-N symmetric distributed matrix sub( A ). If UPLO = 'U', the leading N-by-N upper triangular part of sub( A ) contains the upper triangular part of the matrix, and its strictly lower triangular part is not referenced. If UPLO = 'L', the leading N-by-N lower triangular part of sub( A ) contains the lower triangular part of the matrix, and its strictly upper triangular part is not referenced. On exit, if INFO = 0, the transformed matrix, stored in the same format as sub( A ). .TP 8 IA (global input) INTEGER A's global row index, which points to the beginning of the submatrix which is to be operated on. .TP 8 JA (global input) INTEGER A's global column index, which points to the beginning of the submatrix which is to be operated on. .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 B (local input) REAL pointer into the local memory to an array of dimension (LLD_B, LOCc(JB+N-1)). On entry, this array contains the local pieces of the triangular factor from the Cholesky factorization of sub( B ), as returned by PSPOTRF. .TP 8 IB (global input) INTEGER B's global row index, which points to the beginning of the submatrix which is to be operated on. .TP 8 JB (global input) INTEGER B's global column index, which points to the beginning of the submatrix which is to be operated on. .TP 8 DESCB (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix B. .TP 8 SCALE (global output) REAL Amount by which the eigenvalues should be scaled to compensate for the scaling performed in this routine. At present, SCALE is always returned as 1.0, it is returned here to allow for future enhancement. 
.TP 8
INFO (global output) INTEGER
= 0: successful exit
.br
< 0: If the i-th argument is an array and the j-th entry had an illegal value, then INFO = -(i*100+j); if the i-th argument is a scalar and had an illegal value, then INFO = -i.
.\" scalapack-doc-1.5/man/manl/pssygvx.l
.TH PSSYGVX l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)"
.SH NAME
PSSYGVX - compute all the eigenvalues, and optionally, the eigenvectors, of a real generalized symmetric-definite eigenproblem
.SH SYNOPSIS
.TP 20
SUBROUTINE PSSYGVX( IBTYPE, JOBZ, RANGE, UPLO, N, A, IA, JA, DESCA, B, IB, JB, DESCB, VL, VU, IL, IU, ABSTOL, M, NZ, W, ORFAC, Z, IZ, JZ, DESCZ, WORK, LWORK, IWORK, LIWORK, IFAIL, ICLUSTR, GAP, INFO )
.TP 20
.ti +4
CHARACTER JOBZ, RANGE, UPLO
.TP 20
.ti +4
INTEGER IA, IB, IBTYPE, IL, INFO, IU, IZ, JA, JB, JZ, LIWORK, LWORK, M, N, NZ
.TP 20
.ti +4
REAL ABSTOL, ORFAC, VL, VU
.TP 20
.ti +4
INTEGER DESCA( * ), DESCB( * ), DESCZ( * ), ICLUSTR( * ), IFAIL( * ), IWORK( * )
.TP 20
.ti +4
REAL A( * ), B( * ), GAP( * ), W( * ), WORK( * ), Z( * )
.TP 20
.ti +4
INTEGER BLOCK_CYCLIC_2D, DLEN_, DTYPE_, CTXT_, M_, N_, MB_, NB_, RSRC_, CSRC_, LLD_
.TP 20
.ti +4
PARAMETER ( BLOCK_CYCLIC_2D = 1, DLEN_ = 9, DTYPE_ = 1, CTXT_ = 2, M_ = 3, N_ = 4, MB_ = 5, NB_ = 6, RSRC_ = 7, CSRC_ = 8, LLD_ = 9 )
.TP 20
.ti +4
REAL ONE
.TP 20
.ti +4
PARAMETER ( ONE = 1.0E+0 )
.TP 20
.ti +4
REAL FIVE, ZERO
.TP 20
.ti +4
PARAMETER ( FIVE = 5.0E+0, ZERO = 0.0E+0 )
.TP 20
.ti +4
INTEGER IERRNPD
.TP 20
.ti +4
PARAMETER ( IERRNPD = 16 )
.TP 20
.ti +4
LOGICAL ALLEIG, INDEIG, LQUERY, UPPER, VALEIG, WANTZ
.TP 20
.ti +4
CHARACTER TRANS
.TP 20
.ti +4
INTEGER IACOL, IAROW, IBCOL, IBROW, ICOFFA, ICOFFB, ICTXT, IROFFA, IROFFB, LIWMIN, LWMIN, MQ0, MYCOL, MYROW, NB, NEIG, NN, NP0, NPCOL, NPROW
.TP 20
.ti +4
REAL EPS, SCALE
.TP 20
.ti +4
INTEGER IDUM1( 5 ), IDUM2( 5 )
.TP 20
.ti +4
LOGICAL LSAME
.TP 20
.ti +4
INTEGER ICEIL, INDXG2P, NUMROC
.TP 20
.ti +4
REAL PSLAMCH
.TP 20
.ti +4
EXTERNAL LSAME, ICEIL, INDXG2P, NUMROC, PSLAMCH
.TP 20
.ti +4
EXTERNAL BLACS_GRIDINFO, CHK1MAT,
PCHK1MAT, PCHK2MAT, PSPOTRF, PSSYEVX, PSSYGST, PSTRMM, PSTRSM, PXERBLA, SGEBR2D, SGEBS2D, SSCAL .TP 20 .ti +4 INTRINSIC ABS, ICHAR, MAX, MIN, MOD, REAL .TP 20 .ti +4 IF( BLOCK_CYCLIC_2D*CSRC_*CTXT_*DLEN_*DTYPE_*LLD_*MB_*M_*NB_*N_* RSRC_.LT.0 )RETURN .TP 20 .ti +4 ICTXT = DESCA( CTXT_ ) .TP 20 .ti +4 CALL BLACS_GRIDINFO( ICTXT, NPROW, NPCOL, MYROW, MYCOL ) .TP 20 .ti +4 INFO = 0 .TP 20 .ti +4 IF( NPROW.EQ.-1 ) THEN .TP 20 .ti +4 INFO = -( 900+CTXT_ ) .TP 20 .ti +4 ELSE IF( DESCA( CTXT_ ).NE.DESCB( CTXT_ ) ) THEN .TP 20 .ti +4 INFO = -( 1300+CTXT_ ) .TP 20 .ti +4 ELSE IF( DESCA( CTXT_ ).NE.DESCZ( CTXT_ ) ) THEN .TP 20 .ti +4 INFO = -( 2600+CTXT_ ) .TP 20 .ti +4 ELSE .TP 20 .ti +4 EPS = PSLAMCH( DESCA( CTXT_ ), 'Precision' ) .TP 20 .ti +4 WANTZ = LSAME( JOBZ, 'V' ) .TP 20 .ti +4 UPPER = LSAME( UPLO, 'U' ) .TP 20 .ti +4 ALLEIG = LSAME( RANGE, 'A' ) .TP 20 .ti +4 VALEIG = LSAME( RANGE, 'V' ) .TP 20 .ti +4 INDEIG = LSAME( RANGE, 'I' ) .TP 20 .ti +4 CALL CHK1MAT( N, 4, N, 4, IA, JA, DESCA, 9, INFO ) .TP 20 .ti +4 CALL CHK1MAT( N, 4, N, 4, IB, JB, DESCB, 13, INFO ) .TP 20 .ti +4 CALL CHK1MAT( N, 4, N, 4, IZ, JZ, DESCZ, 26, INFO ) .TP 20 .ti +4 IF( INFO.EQ.0 ) THEN .TP 20 .ti +4 IF( MYROW.EQ.0 .AND. 
MYCOL.EQ.0 ) THEN .TP 20 .ti +4 WORK( 1 ) = ABSTOL .TP 20 .ti +4 IF( VALEIG ) THEN .TP 20 .ti +4 WORK( 2 ) = VL .TP 20 .ti +4 WORK( 3 ) = VU .TP 20 .ti +4 ELSE .TP 20 .ti +4 WORK( 2 ) = ZERO .TP 20 .ti +4 WORK( 3 ) = ZERO .TP 20 .ti +4 END IF .TP 20 .ti +4 CALL SGEBS2D( DESCA( CTXT_ ), 'ALL', ' ', 3, 1, WORK, 3 ) .TP 20 .ti +4 ELSE .TP 20 .ti +4 CALL SGEBR2D( DESCA( CTXT_ ), 'ALL', ' ', 3, 1, WORK, 3, 0, 0 ) .TP 20 .ti +4 END IF .TP 20 .ti +4 IAROW = INDXG2P( IA, DESCA( MB_ ), MYROW, DESCA( RSRC_ ), NPROW ) .TP 20 .ti +4 IBROW = INDXG2P( IB, DESCB( MB_ ), MYROW, DESCB( RSRC_ ), NPROW ) .TP 20 .ti +4 IACOL = INDXG2P( JA, DESCA( NB_ ), MYCOL, DESCA( CSRC_ ), NPCOL ) .TP 20 .ti +4 IBCOL = INDXG2P( JB, DESCB( NB_ ), MYCOL, DESCB( CSRC_ ), NPCOL ) .TP 20 .ti +4 IROFFA = MOD( IA-1, DESCA( MB_ ) ) .TP 20 .ti +4 ICOFFA = MOD( JA-1, DESCA( NB_ ) ) .TP 20 .ti +4 IROFFB = MOD( IB-1, DESCB( MB_ ) ) .TP 20 .ti +4 ICOFFB = MOD( JB-1, DESCB( NB_ ) ) .TP 20 .ti +4 LQUERY = .FALSE. .TP 20 .ti +4 IF( LWORK.EQ.-1 .OR. LIWORK.EQ.-1 ) LQUERY = .TRUE. .TP 20 .ti +4 LIWMIN = 6*MAX( N, ( NPROW*NPCOL )+1, 4 ) .TP 20 .ti +4 NB = DESCA( MB_ ) .TP 20 .ti +4 NN = MAX( N, NB, 2 ) .TP 20 .ti +4 NP0 = NUMROC( NN, NB, 0, 0, NPROW ) .TP 20 .ti +4 IF( ( .NOT.WANTZ ) .OR. ( VALEIG .AND. ( .NOT.LQUERY ) ) ) THEN .TP 20 .ti +4 LWMIN = 5*N + MAX( 5*NN, NB*( NP0+1 ) ) .TP 20 .ti +4 NEIG = 0 .TP 20 .ti +4 ELSE .TP 20 .ti +4 IF( ALLEIG .OR. VALEIG ) THEN .TP 20 .ti +4 NEIG = N .TP 20 .ti +4 ELSE IF( INDEIG ) THEN .TP 20 .ti +4 NEIG = IU - IL + 1 .TP 20 .ti +4 END IF .TP 20 .ti +4 MQ0 = NUMROC( MAX( NEIG, NB, 2 ), NB, 0, 0, NPCOL ) .TP 20 .ti +4 LWMIN = 5*N + MAX( 5*NN, NP0*MQ0+2*NB*NB ) + ICEIL( NEIG, NPROW*NPCOL )*NN .TP 20 .ti +4 END IF .TP 20 .ti +4 IF( IBTYPE.LT.1 .OR. IBTYPE.GT.3 ) THEN .TP 20 .ti +4 INFO = -1 .TP 20 .ti +4 ELSE IF( .NOT.( WANTZ .OR. LSAME( JOBZ, 'N' ) ) ) THEN .TP 20 .ti +4 INFO = -2 .TP 20 .ti +4 ELSE IF( .NOT.( ALLEIG .OR. VALEIG .OR. 
INDEIG ) ) THEN .TP 20 .ti +4 INFO = -3 .TP 20 .ti +4 ELSE IF( .NOT.UPPER .AND. .NOT.LSAME( UPLO, 'L' ) ) THEN .TP 20 .ti +4 INFO = -4 .TP 20 .ti +4 ELSE IF( N.LT.0 ) THEN .TP 20 .ti +4 INFO = -5 .TP 20 .ti +4 ELSE IF( IROFFA.NE.0 ) THEN .TP 20 .ti +4 INFO = -7 .TP 20 .ti +4 ELSE IF( ICOFFA.NE.0 ) THEN .TP 20 .ti +4 INFO = -8 .TP 20 .ti +4 ELSE IF( DESCA( MB_ ).NE.DESCA( NB_ ) ) THEN .TP 20 .ti +4 INFO = -( 900+NB_ ) .TP 20 .ti +4 ELSE IF( DESCA( M_ ).NE.DESCB( M_ ) ) THEN .TP 20 .ti +4 INFO = -( 1300+M_ ) .TP 20 .ti +4 ELSE IF( DESCA( N_ ).NE.DESCB( N_ ) ) THEN .TP 20 .ti +4 INFO = -( 1300+N_ ) .TP 20 .ti +4 ELSE IF( DESCA( MB_ ).NE.DESCB( MB_ ) ) THEN .TP 20 .ti +4 INFO = -( 1300+MB_ ) .TP 20 .ti +4 ELSE IF( DESCA( NB_ ).NE.DESCB( NB_ ) ) THEN .TP 20 .ti +4 INFO = -( 1300+NB_ ) .TP 20 .ti +4 ELSE IF( DESCA( RSRC_ ).NE.DESCB( RSRC_ ) ) THEN .TP 20 .ti +4 INFO = -( 1300+RSRC_ ) .TP 20 .ti +4 ELSE IF( DESCA( CSRC_ ).NE.DESCB( CSRC_ ) ) THEN .TP 20 .ti +4 INFO = -( 1300+CSRC_ ) .TP 20 .ti +4 ELSE IF( DESCA( CTXT_ ).NE.DESCB( CTXT_ ) ) THEN .TP 20 .ti +4 INFO = -( 1300+CTXT_ ) .TP 20 .ti +4 ELSE IF( DESCA( M_ ).NE.DESCZ( M_ ) ) THEN .TP 20 .ti +4 INFO = -( 2200+M_ ) .TP 20 .ti +4 ELSE IF( DESCA( N_ ).NE.DESCZ( N_ ) ) THEN .TP 20 .ti +4 INFO = -( 2200+N_ ) .TP 20 .ti +4 ELSE IF( DESCA( MB_ ).NE.DESCZ( MB_ ) ) THEN .TP 20 .ti +4 INFO = -( 2200+MB_ ) .TP 20 .ti +4 ELSE IF( DESCA( NB_ ).NE.DESCZ( NB_ ) ) THEN .TP 20 .ti +4 INFO = -( 2200+NB_ ) .TP 20 .ti +4 ELSE IF( DESCA( RSRC_ ).NE.DESCZ( RSRC_ ) ) THEN .TP 20 .ti +4 INFO = -( 2200+RSRC_ ) .TP 20 .ti +4 ELSE IF( DESCA( CSRC_ ).NE.DESCZ( CSRC_ ) ) THEN .TP 20 .ti +4 INFO = -( 2200+CSRC_ ) .TP 20 .ti +4 ELSE IF( DESCA( CTXT_ ).NE.DESCZ( CTXT_ ) ) THEN .TP 20 .ti +4 INFO = -( 2200+CTXT_ ) .TP 20 .ti +4 ELSE IF( IROFFB.NE.0 .OR. IBROW.NE.IAROW ) THEN .TP 20 .ti +4 INFO = -11 .TP 20 .ti +4 ELSE IF( ICOFFB.NE.0 .OR. IBCOL.NE.IACOL ) THEN .TP 20 .ti +4 INFO = -12 .TP 20 .ti +4 ELSE IF( VALEIG .AND. N.GT.0 .AND. 
VU.LE.VL ) THEN .TP 20 .ti +4 INFO = -15 .TP 20 .ti +4 ELSE IF( INDEIG .AND. ( IL.LT.1 .OR. IL.GT.MAX( 1, N ) ) ) THEN .TP 20 .ti +4 INFO = -16 .TP 20 .ti +4 ELSE IF( INDEIG .AND. ( IU.LT.MIN( N, IL ) .OR. IU.GT.N ) ) THEN .TP 20 .ti +4 INFO = -17 .TP 20 .ti +4 ELSE IF( VALEIG .AND. ( ABS( WORK( 2 )-VL ).GT.FIVE*EPS* ABS( VL ) ) ) THEN .TP 20 .ti +4 INFO = -14 .TP 20 .ti +4 ELSE IF( VALEIG .AND. ( ABS( WORK( 3 )-VU ).GT.FIVE*EPS* ABS( VU ) ) ) THEN .TP 20 .ti +4 INFO = -15 .TP 20 .ti +4 ELSE IF( ABS( WORK( 1 )-ABSTOL ).GT.FIVE*EPS*ABS( ABSTOL ) ) THEN .TP 20 .ti +4 INFO = -18 .TP 20 .ti +4 ELSE IF( LWORK.LT.LWMIN .AND. LWORK.NE.-1 ) THEN .TP 20 .ti +4 INFO = -28 .TP 20 .ti +4 ELSE IF( LIWORK.LT.LIWMIN .AND. LIWORK.NE.-1 ) THEN .TP 20 .ti +4 INFO = -30 .TP 20 .ti +4 END IF .TP 20 .ti +4 END IF .TP 20 .ti +4 IDUM1( 1 ) = IBTYPE .TP 20 .ti +4 IDUM2( 1 ) = 1 .TP 20 .ti +4 IF( WANTZ ) THEN .TP 20 .ti +4 IDUM1( 2 ) = ICHAR( 'V' ) .TP 20 .ti +4 ELSE .TP 20 .ti +4 IDUM1( 2 ) = ICHAR( 'N' ) .TP 20 .ti +4 END IF .TP 20 .ti +4 IDUM2( 2 ) = 2 .TP 20 .ti +4 IF( UPPER ) THEN .TP 20 .ti +4 IDUM1( 3 ) = ICHAR( 'U' ) .TP 20 .ti +4 ELSE .TP 20 .ti +4 IDUM1( 3 ) = ICHAR( 'L' ) .TP 20 .ti +4 END IF .TP 20 .ti +4 IDUM2( 3 ) = 3 .TP 20 .ti +4 IF( ALLEIG ) THEN .TP 20 .ti +4 IDUM1( 4 ) = ICHAR( 'A' ) .TP 20 .ti +4 ELSE IF( INDEIG ) THEN .TP 20 .ti +4 IDUM1( 4 ) = ICHAR( 'I' ) .TP 20 .ti +4 ELSE .TP 20 .ti +4 IDUM1( 4 ) = ICHAR( 'V' ) .TP 20 .ti +4 END IF .TP 20 .ti +4 IDUM2( 4 ) = 4 .TP 20 .ti +4 IF( LQUERY ) THEN .TP 20 .ti +4 IDUM1( 5 ) = -1 .TP 20 .ti +4 ELSE .TP 20 .ti +4 IDUM1( 5 ) = 1 .TP 20 .ti +4 END IF .TP 20 .ti +4 IDUM2( 5 ) = 5 .TP 20 .ti +4 CALL PCHK2MAT( N, 4, N, 4, IA, JA, DESCA, 9, N, 4, N, 4, IB, JB, DESCB, 13, 5, IDUM1, IDUM2, INFO ) .TP 20 .ti +4 CALL PCHK1MAT( N, 4, N, 4, IZ, JZ, DESCZ, 26, 0, IDUM1, IDUM2, INFO ) .TP 20 .ti +4 END IF .TP 20 .ti +4 WORK( 1 ) = REAL( LWMIN ) .TP 20 .ti +4 IWORK( 1 ) = LIWMIN .TP 20 .ti +4 IF( INFO.NE.0 ) THEN .TP 20 .ti +4 CALL 
PXERBLA( ICTXT, 'PSSYGVX ', -INFO )
.TP 20
.ti +4
RETURN
.TP 20
.ti +4
ELSE IF( LQUERY ) THEN
.TP 20
.ti +4
RETURN
.TP 20
.ti +4
END IF
.TP 20
.ti +4
CALL PSPOTRF( UPLO, N, B, IB, JB, DESCB, INFO )
.TP 20
.ti +4
IF( INFO.NE.0 ) THEN
.TP 20
.ti +4
IFAIL( 1 ) = INFO
.TP 20
.ti +4
INFO = IERRNPD
.TP 20
.ti +4
RETURN
.TP 20
.ti +4
END IF
.TP 20
.ti +4
CALL PSSYGST( IBTYPE, UPLO, N, A, IA, JA, DESCA, B, IB, JB, DESCB, SCALE, INFO )
.TP 20
.ti +4
CALL PSSYEVX( JOBZ, RANGE, UPLO, N, A, IA, JA, DESCA, VL, VU, IL, IU, ABSTOL, M, NZ, W, ORFAC, Z, IZ, JZ, DESCZ, WORK, LWORK, IWORK, LIWORK, IFAIL, ICLUSTR, GAP, INFO )
.TP 20
.ti +4
IF( WANTZ ) THEN
.TP 20
.ti +4
NEIG = M
.TP 20
.ti +4
IF( IBTYPE.EQ.1 .OR. IBTYPE.EQ.2 ) THEN
.TP 20
.ti +4
IF( UPPER ) THEN
.TP 20
.ti +4
TRANS = 'N'
.TP 20
.ti +4
ELSE
.TP 20
.ti +4
TRANS = 'T'
.TP 20
.ti +4
END IF
.TP 20
.ti +4
CALL PSTRSM( 'Left', UPLO, TRANS, 'Non-unit', N, NEIG, ONE, B, IB, JB, DESCB, Z, IZ, JZ, DESCZ )
.TP 20
.ti +4
ELSE IF( IBTYPE.EQ.3 ) THEN
.TP 20
.ti +4
IF( UPPER ) THEN
.TP 20
.ti +4
TRANS = 'T'
.TP 20
.ti +4
ELSE
.TP 20
.ti +4
TRANS = 'N'
.TP 20
.ti +4
END IF
.TP 20
.ti +4
CALL PSTRMM( 'Left', UPLO, TRANS, 'Non-unit', N, NEIG, ONE, B, IB, JB, DESCB, Z, IZ, JZ, DESCZ )
.TP 20
.ti +4
END IF
.TP 20
.ti +4
END IF
.TP 20
.ti +4
IF( SCALE.NE.ONE ) THEN
.TP 20
.ti +4
CALL SSCAL( N, SCALE, W, 1 )
.TP 20
.ti +4
END IF
.TP 20
.ti +4
RETURN
.TP 20
.ti +4
END
.SH PURPOSE
PSSYGVX computes all the eigenvalues, and optionally, the eigenvectors, of a real generalized symmetric-definite eigenproblem, of the form sub( A )*x = lambda*sub( B )*x, sub( A )*sub( B )*x = lambda*x, or sub( B )*sub( A )*x = lambda*x. Here sub( A ) denotes A( IA:IA+N-1, JA:JA+N-1 ), which is assumed to be symmetric, and sub( B ) denotes B( IB:IB+N-1, JB:JB+N-1 ), which is assumed to be symmetric positive definite.
.br
The routine factorizes sub( B ) with PSPOTRF, reduces the problem to standard form with PSSYGST, solves the standard symmetric eigenproblem with PSSYEVX and, when eigenvectors are requested, back-transforms them with PSTRSM or PSTRMM.
.\" scalapack-doc-1.5/man/manl/pssytd2.l
.TH PSSYTD2 l "12 May 1997" "LAPACK version 1.5" "LAPACK auxiliary routine (version 1.5)"
.SH NAME
PSSYTD2 - reduce a real symmetric matrix sub( A ) to symmetric tridiagonal form T by an orthogonal similarity transformation
.SH SYNOPSIS
.TP 20
SUBROUTINE PSSYTD2( UPLO, N, A, IA, JA, DESCA, D, E, TAU, WORK, LWORK, INFO )
.TP 20
.ti +4
CHARACTER UPLO
.TP 20
.ti +4
INTEGER IA, INFO, JA, LWORK, N
.TP 20
.ti +4
INTEGER DESCA( * )
.TP 20
.ti +4
REAL A( * ), D( * ), E(
* ), TAU( * ), WORK( * ) .SH PURPOSE PSSYTD2 reduces a real symmetric matrix sub( A ) to symmetric tridiagonal form T by an orthogonal similarity transformation: Q' * sub( A ) * Q = T, where sub( A ) = A(IA:IA+N-1,JA:JA+N-1). Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. 
.br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 UPLO (global input) CHARACTER Specifies whether the upper or lower triangular part of the symmetric matrix sub( A ) is stored: .br = 'U': Upper triangular .br = 'L': Lower triangular .TP 8 N (global input) INTEGER The number of rows and columns to be operated on, i.e. the order of the distributed submatrix sub( A ). N >= 0. .TP 8 A (local input/local output) REAL pointer into the local memory to an array of dimension (LLD_A,LOCc(JA+N-1)). On entry, this array contains the local pieces of the symmetric distributed matrix sub( A ). If UPLO = 'U', the leading N-by-N upper triangular part of sub( A ) contains the upper triangular part of the matrix, and its strictly lower triangular part is not referenced. If UPLO = 'L', the leading N-by-N lower triangular part of sub( A ) contains the lower triangular part of the matrix, and its strictly upper triangular part is not referenced. 
On exit, if UPLO = 'U', the diagonal and first superdiagonal of sub( A ) are over- written by the corresponding elements of the tridiagonal matrix T, and the elements above the first superdiagonal, with the array TAU, represent the orthogonal matrix Q as a product of elementary reflectors; if UPLO = 'L', the diagonal and first subdiagonal of sub( A ) are overwritten by the corresponding elements of the tridiagonal matrix T, and the elements below the first subdiagonal, with the array TAU, represent the orthogonal matrix Q as a product of elementary reflectors. See Further Details. IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 D (local output) REAL array, dimension LOCc(JA+N-1) The diagonal elements of the tridiagonal matrix T: D(i) = A(i,i). D is tied to the distributed matrix A. .TP 8 E (local output) REAL array, dimension LOCc(JA+N-1) if UPLO = 'U', LOCc(JA+N-2) otherwise. The off-diagonal elements of the tridiagonal matrix T: E(i) = A(i,i+1) if UPLO = 'U', E(i) = A(i+1,i) if UPLO = 'L'. E is tied to the distributed matrix A. .TP 8 TAU (local output) REAL, array, dimension LOCc(JA+N-1). This array contains the scalar factors TAU of the elementary reflectors. TAU is tied to the distributed matrix A. .TP 8 WORK (local workspace/local output) REAL array, dimension (LWORK) On exit, WORK( 1 ) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. LWORK is local input and must be at least LWORK >= 3*N. If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. 
Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA.
.TP 8
INFO (local output) INTEGER
= 0: successful exit
.br
< 0: If the i-th argument is an array and the j-th entry had an illegal value, then INFO = -(i*100+j); if the i-th argument is a scalar and had an illegal value, then INFO = -i.
.SH FURTHER DETAILS
If UPLO = 'U', the matrix Q is represented as a product of elementary reflectors
.br
Q = H(n-1) . . . H(2) H(1).
.br
Each H(i) has the form
.br
H(i) = I - tau * v * v'
.br
where tau is a real scalar, and v is a real vector with
.br
v(i+1:n) = 0 and v(i) = 1; v(1:i-1) is stored on exit in
.br
A(ia:ia+i-2,ja+i), and tau in TAU(ja+i-1).
.br
If UPLO = 'L', the matrix Q is represented as a product of elementary reflectors
.br
Q = H(1) H(2) . . . H(n-1).
.br
Each H(i) has the form
.br
H(i) = I - tau * v * v'
.br
where tau is a real scalar, and v is a real vector with
.br
v(1:i) = 0 and v(i+1) = 1; v(i+2:n) is stored on exit in
.br
A(ia+i+1:ia+n-1,ja+i-1), and tau in TAU(ja+i-1).
.br
The contents of sub( A ) on exit are illustrated by the following examples with n = 5:
.nf
if UPLO = 'U':                 if UPLO = 'L':

( d   e   v2  v3  v4 )         ( d                  )
(     d   e   v3  v4 )         ( e   d              )
(         d   e   v4 )         ( v1  e   d          )
(             d   e  )         ( v1  v2  e   d      )
(                 d  )         ( v1  v2  v3  e   d  )
.fi
where d and e denote diagonal and off-diagonal elements of T, and vi denotes an element of the vector defining H(i).
.br
Alignment requirements
.br
======================
.br
The distributed submatrix sub( A ) must verify some alignment properties, namely the following expression should be true:
.br
( MB_A.EQ.NB_A .AND. IROFFA.EQ.ICOFFA ) with
.br
IROFFA = MOD( IA-1, MB_A ) and ICOFFA = MOD( JA-1, NB_A ).
.\" scalapack-doc-1.5/man/manl/pssytrd.l
.TH PSSYTRD l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)"
.SH NAME
PSSYTRD - reduce a real symmetric matrix sub( A ) to symmetric tridiagonal form T by an orthogonal similarity transformation
.SH SYNOPSIS
.TP 20
SUBROUTINE PSSYTRD( UPLO, N, A, IA, JA, DESCA, D, E, TAU, WORK, LWORK, INFO )
.TP 20
.ti +4
CHARACTER UPLO
.TP 20
.ti +4
INTEGER IA, INFO, JA, LWORK, N
.TP 20
.ti +4
INTEGER DESCA( * )
.TP 20
.ti +4
REAL A( * ), D( * ), E( * ), TAU( * ), WORK( * )
.SH PURPOSE
PSSYTRD reduces a real symmetric matrix sub( A ) to symmetric tridiagonal form T by an orthogonal similarity transformation: Q' * sub( A ) * Q = T, where sub( A ) = A(IA:IA+N-1,JA:JA+N-1).
.br
Notes
.br
=====
.br
Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location.
.br
Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array".
.br
NOTATION        STORED IN      EXPLANATION
.br
--------------- -------------- --------------------------------------
DTYPE_A(global) DESCA( DTYPE_ ) The descriptor type. In this case, DTYPE_A = 1.
.br
CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distributed over. The context itself is global, but the handle (the integer value) may vary.
.br
M_A (global) DESCA( M_ ) The number of rows in the global array A.
.br
N_A (global) DESCA( N_ ) The number of columns in the global array A.
.br
MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array.
.br
NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array.
.br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 UPLO (global input) CHARACTER Specifies whether the upper or lower triangular part of the symmetric matrix sub( A ) is stored: .br = 'U': Upper triangular .br = 'L': Lower triangular .TP 8 N (global input) INTEGER The number of rows and columns to be operated on, i.e. the order of the distributed submatrix sub( A ). N >= 0. .TP 8 A (local input/local output) REAL pointer into the local memory to an array of dimension (LLD_A,LOCc(JA+N-1)). On entry, this array contains the local pieces of the symmetric distributed matrix sub( A ). If UPLO = 'U', the leading N-by-N upper triangular part of sub( A ) contains the upper triangular part of the matrix, and its strictly lower triangular part is not referenced. 
If UPLO = 'L', the leading N-by-N lower triangular part of sub( A ) contains the lower triangular part of the matrix, and its strictly upper triangular part is not referenced. On exit, if UPLO = 'U', the diagonal and first superdiagonal of sub( A ) are over- written by the corresponding elements of the tridiagonal matrix T, and the elements above the first superdiagonal, with the array TAU, represent the orthogonal matrix Q as a product of elementary reflectors; if UPLO = 'L', the diagonal and first subdiagonal of sub( A ) are overwritten by the corresponding elements of the tridiagonal matrix T, and the elements below the first subdiagonal, with the array TAU, represent the orthogonal matrix Q as a product of elementary reflectors. See Further Details. IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 D (local output) REAL array, dimension LOCc(JA+N-1) The diagonal elements of the tridiagonal matrix T: D(i) = A(i,i). D is tied to the distributed matrix A. .TP 8 E (local output) REAL array, dimension LOCc(JA+N-1) if UPLO = 'U', LOCc(JA+N-2) otherwise. The off-diagonal elements of the tridiagonal matrix T: E(i) = A(i,i+1) if UPLO = 'U', E(i) = A(i+1,i) if UPLO = 'L'. E is tied to the distributed matrix A. .TP 8 TAU (local output) REAL, array, dimension LOCc(JA+N-1). This array contains the scalar factors TAU of the elementary reflectors. TAU is tied to the distributed matrix A. .TP 8 WORK (local workspace/local output) REAL array, dimension (LWORK) On exit, WORK( 1 ) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. 
LWORK is local input and must be at least LWORK >= MAX( NB * ( NP + 1 ), 3 * NB ), where NB = MB_A = NB_A, NP = NUMROC( N, NB, MYROW, IAROW, NPROW ), and IAROW = INDXG2P( IA, NB, MYROW, RSRC_A, NPROW ). INDXG2P and NUMROC are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA.
.TP 8
INFO (global output) INTEGER
= 0: successful exit
.br
< 0: If the i-th argument is an array and the j-th entry had an illegal value, then INFO = -(i*100+j); if the i-th argument is a scalar and had an illegal value, then INFO = -i.
.SH FURTHER DETAILS
If UPLO = 'U', the matrix Q is represented as a product of elementary reflectors
.br
Q = H(n-1) . . . H(2) H(1).
.br
Each H(i) has the form
.br
H(i) = I - tau * v * v'
.br
where tau is a real scalar, and v is a real vector with
.br
v(i+1:n) = 0 and v(i) = 1; v(1:i-1) is stored on exit in
.br
A(ia:ia+i-2,ja+i), and tau in TAU(ja+i-1).
.br
If UPLO = 'L', the matrix Q is represented as a product of elementary reflectors
.br
Q = H(1) H(2) . . . H(n-1).
.br
Each H(i) has the form
.br
H(i) = I - tau * v * v'
.br
where tau is a real scalar, and v is a real vector with
.br
v(1:i) = 0 and v(i+1) = 1; v(i+2:n) is stored on exit in
.br
A(ia+i+1:ia+n-1,ja+i-1), and tau in TAU(ja+i-1).
.br
The contents of sub( A ) on exit are illustrated by the following examples with n = 5:
.nf
if UPLO = 'U':                 if UPLO = 'L':

( d   e   v2  v3  v4 )         ( d                  )
(     d   e   v3  v4 )         ( e   d              )
(         d   e   v4 )         ( v1  e   d          )
(             d   e  )         ( v1  v2  e   d      )
(                 d  )         ( v1  v2  v3  e   d  )
.fi
where d and e denote diagonal and off-diagonal elements of T, and vi denotes an element of the vector defining H(i).
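The INFO convention used throughout these pages encodes, for array arguments, both the argument position and the offending entry in a single negative value. A tiny sketch (a hypothetical helper for illustration, not part of ScaLAPACK) decodes it:

```python
def decode_info(info):
    """Decode a negative INFO from a ScaLAPACK routine.
    Array argument: INFO = -(i*100 + j), i = argument position, j = entry.
    Scalar argument: INFO = -i."""
    code = -info
    if code > 100:
        return code // 100, code % 100   # (argument index, entry index)
    return code, None                    # scalar argument, no entry index

# e.g. INFO = -(900+2): entry 2 (the context) of argument 9 (a descriptor)
assert decode_info(-902) == (9, 2)
assert decode_info(-5) == (5, None)      # scalar argument 5 was illegal
```

This mirrors the error codes visible in the PSSYGVX checking logic above, such as INFO = -( 900+CTXT_ ).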
.br Alignment requirements .br ====================== .br The distributed submatrix sub( A ) must satisfy some alignment properties, namely the following expression should be true: .br ( MB_A.EQ.NB_A .AND. IROFFA.EQ.ICOFFA .AND. IROFFA.EQ.0 ) with IROFFA = MOD( IA-1, MB_A ) and ICOFFA = MOD( JA-1, NB_A ).
.TH PSTRCON l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PSTRCON - estimate the reciprocal of the condition number of a triangular distributed matrix A(IA:IA+N-1,JA:JA+N-1), in either the 1-norm or the infinity-norm .SH SYNOPSIS .TP 20 SUBROUTINE PSTRCON( NORM, UPLO, DIAG, N, A, IA, JA, DESCA, RCOND, WORK, LWORK, IWORK, LIWORK, INFO ) .TP 20 .ti +4 CHARACTER DIAG, NORM, UPLO .TP 20 .ti +4 INTEGER IA, JA, INFO, LIWORK, LWORK, N .TP 20 .ti +4 REAL RCOND .TP 20 .ti +4 INTEGER DESCA( * ), IWORK( * ) .TP 20 .ti +4 REAL A( * ), WORK( * ) .SH PURPOSE PSTRCON estimates the reciprocal of the condition number of a triangular distributed matrix A(IA:IA+N-1,JA:JA+N-1), in either the 1-norm or the infinity-norm. The norm of A(IA:IA+N-1,JA:JA+N-1) is computed and an estimate is obtained for norm(inv(A(IA:IA+N-1,JA:JA+N-1))), then the reciprocal of the condition number is computed as .br RCOND = 1 / ( norm( A(IA:IA+N-1,JA:JA+N-1) ) * norm( inv(A(IA:IA+N-1,JA:JA+N-1)) ) ). Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclically distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". 
.br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). 
An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 NORM (global input) CHARACTER Specifies whether the 1-norm condition number or the infinity-norm condition number is required: .br = '1' or 'O': 1-norm; .br = 'I': Infinity-norm. .TP 8 UPLO (global input) CHARACTER .br = 'U': A(IA:IA+N-1,JA:JA+N-1) is upper triangular; .br = 'L': A(IA:IA+N-1,JA:JA+N-1) is lower triangular. .TP 8 DIAG (global input) CHARACTER .br = 'N': A(IA:IA+N-1,JA:JA+N-1) is non-unit triangular; .br = 'U': A(IA:IA+N-1,JA:JA+N-1) is unit triangular. .TP 8 N (global input) INTEGER .br The order of the distributed matrix A(IA:IA+N-1,JA:JA+N-1). N >= 0. .TP 8 A (local input) REAL pointer into the local memory to an array of dimension ( LLD_A, LOCc(JA+N-1) ). This array contains the local pieces of the triangular distributed matrix A(IA:IA+N-1,JA:JA+N-1). If UPLO = 'U', the leading N-by-N upper triangular part of this distributed matrix contains the upper triangular matrix, and its strictly lower triangular part is not referenced. If UPLO = 'L', the leading N-by-N lower triangular part of this distributed matrix contains the lower triangular matrix, and the strictly upper triangular part is not referenced. If DIAG = 'U', the diagonal elements of A(IA:IA+N-1,JA:JA+N-1) are also not referenced and are assumed to be 1. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 RCOND (global output) REAL The reciprocal of the condition number of the distributed matrix A(IA:IA+N-1,JA:JA+N-1), computed as .br RCOND = 1 / ( norm( A(IA:IA+N-1,JA:JA+N-1) ) * .br norm( inv(A(IA:IA+N-1,JA:JA+N-1)) ) ). 
.TP 8 WORK (local workspace/local output) REAL array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. LWORK is local input and must be at least LWORK >= 2*LOCr(N+MOD(IA-1,MB_A)) + LOCc(N+MOD(JA-1,NB_A)) + MAX( 2, MAX( NB_A*MAX( 1, CEIL(NPROW-1,NPCOL) ), LOCc(N+MOD(JA-1,NB_A)) + NB_A*MAX( 1, CEIL(NPCOL-1,NPROW) ) ). If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 IWORK (local workspace/local output) INTEGER array, dimension (LIWORK) On exit, IWORK(1) returns the minimal and optimal LIWORK. .TP 8 LIWORK (local or global input) INTEGER The dimension of the array IWORK. LIWORK is local input and must be at least LIWORK >= LOCr(N+MOD(IA-1,MB_A)). If LIWORK = -1, then LIWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 INFO (global output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. 
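The Notes section of each routine gives the upper bounds LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A and LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A. A small Python sketch, checking the bound against the exact per-process count returned by NUMROC; the sizes at the bottom are illustrative assumptions.

```python
import math

def numroc(n, nb, iproc, isrc, nprocs):
    # Exact local count, mirroring the ScaLAPACK tool routine NUMROC.
    mydist = (nprocs + iproc - isrc) % nprocs
    nblocks = n // nb
    num = (nblocks // nprocs) * nb
    extra = nblocks % nprocs
    if mydist < extra:
        num += nb
    elif mydist == extra:
        num += n % nb
    return num

def loc_bound(k, nb, nprocs):
    # Upper bound from the Notes: LOC( K ) <= ceil( ceil(K/NB)/NPROCS )*NB.
    return math.ceil(math.ceil(k / nb) / nprocs) * nb

# Assumed example: a dimension of 250 elements, block size 8, 3 processes.
k, nb, p = 250, 8, 3
exact = [numroc(k, nb, i, 0, p) for i in range(p)]
assert sum(exact) == k                       # the local pieces tile the dimension
assert max(exact) <= loc_bound(k, nb, p)     # the bound holds for every process
print(exact, loc_bound(k, nb, p))
```

This bound is what one would use to size local arrays statically when the exact process coordinates are not yet known.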
.TH PSTRRFS l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PSTRRFS - provide error bounds and backward error estimates for the solution to a system of linear equations with a triangular coefficient matrix .SH SYNOPSIS .TP 20 SUBROUTINE PSTRRFS( UPLO, TRANS, DIAG, N, NRHS, A, IA, JA, DESCA, B, IB, JB, DESCB, X, IX, JX, DESCX, FERR, BERR, WORK, LWORK, IWORK, LIWORK, INFO ) .TP 20 .ti +4 CHARACTER DIAG, TRANS, UPLO .TP 20 .ti +4 INTEGER INFO, IA, IB, IX, JA, JB, JX, LIWORK, LWORK, N, NRHS .TP 20 .ti +4 INTEGER DESCA( * ), DESCB( * ), DESCX( * ), IWORK( * ) .TP 20 .ti +4 REAL A( * ), B( * ), BERR( * ), FERR( * ), WORK( * ), X( * ) .SH PURPOSE PSTRRFS provides error bounds and backward error estimates for the solution to a system of linear equations with a triangular coefficient matrix. The solution matrix X must be computed by PSTRTRS or some other means before entering this routine. PSTRRFS does not do iterative refinement because doing so cannot improve the backward error. Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclically distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ ) The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distributed over. The context itself is global, but the handle (the integer value) may vary. 
.br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br In the following comments, sub( A ), sub( X ) and sub( B ) denote respectively A(IA:IA+N-1,JA:JA+N-1), X(IX:IX+N-1,JX:JX+NRHS-1) and B(IB:IB+N-1,JB:JB+NRHS-1). .br .SH ARGUMENTS .TP 8 UPLO (global input) CHARACTER*1 = 'U': sub( A ) is upper triangular; .br = 'L': sub( A ) is lower triangular. .TP 8 TRANS (global input) CHARACTER*1 Specifies the form of the system of equations. 
= 'N': sub( A ) * sub( X ) = sub( B ) (No transpose) .br = 'T': sub( A )**T * sub( X ) = sub( B ) (Transpose) .br = 'C': sub( A )**T * sub( X ) = sub( B ) (Conjugate transpose = Transpose) .TP 8 DIAG (global input) CHARACTER*1 = 'N': sub( A ) is non-unit triangular; .br = 'U': sub( A ) is unit triangular. .TP 8 N (global input) INTEGER The order of the matrix sub( A ). N >= 0. .TP 8 NRHS (global input) INTEGER The number of right hand sides, i.e., the number of columns of the matrices sub( B ) and sub( X ). NRHS >= 0. .TP 8 A (local input) REAL pointer into the local memory to an array of local dimension (LLD_A,LOCc(JA+N-1) ). This array contains the local pieces of the original triangular distributed matrix sub( A ). If UPLO = 'U', the leading N-by-N upper triangular part of sub( A ) contains the upper triangular part of the matrix, and its strictly lower triangular part is not referenced. If UPLO = 'L', the leading N-by-N lower triangular part of sub( A ) contains the lower triangular part of the distributed matrix, and its strictly upper triangular part is not referenced. If DIAG = 'U', the diagonal elements of sub( A ) are also not referenced and are assumed to be 1. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 B (local input) REAL pointer into the local memory to an array of local dimension (LLD_B, LOCc(JB+NRHS-1) ). On entry, this array contains the local pieces of the right hand sides sub( B ). .TP 8 IB (global input) INTEGER The row index in the global array B indicating the first row of sub( B ). .TP 8 JB (global input) INTEGER The column index in the global array B indicating the first column of sub( B ). 
.TP 8 DESCB (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix B. .TP 8 X (local input) REAL pointer into the local memory to an array of local dimension (LLD_X, LOCc(JX+NRHS-1) ). On entry, this array contains the local pieces of the solution vectors sub( X ). .TP 8 IX (global input) INTEGER The row index in the global array X indicating the first row of sub( X ). .TP 8 JX (global input) INTEGER The column index in the global array X indicating the first column of sub( X ). .TP 8 DESCX (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix X. .TP 8 FERR (local output) REAL array of local dimension LOCc(JB+NRHS-1). The estimated forward error bounds for each solution vector of sub( X ). If XTRUE is the true solution, FERR bounds the magnitude of the largest entry in (sub( X ) - XTRUE) divided by the magnitude of the largest entry in sub( X ). The estimate is as reliable as the estimate for RCOND, and is almost always a slight overestimate of the true error. This array is tied to the distributed matrix X. .TP 8 BERR (local output) REAL array of local dimension LOCc(JB+NRHS-1). The componentwise relative backward error of each solution vector (i.e., the smallest relative change in any entry of sub( A ) or sub( B ) that makes sub( X ) an exact solution). This array is tied to the distributed matrix X. .TP 8 WORK (local workspace/local output) REAL array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. LWORK is local input and must be at least LWORK >= 3*LOCr( N + MOD( IA-1, MB_A ) ). If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. 
Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 IWORK (local workspace/local output) INTEGER array, dimension (LIWORK) On exit, IWORK(1) returns the minimal and optimal LIWORK. .TP 8 LIWORK (local or global input) INTEGER The dimension of the array IWORK. LIWORK is local input and must be at least LIWORK >= LOCr( N + MOD( IB-1, MB_B ) ). If LIWORK = -1, then LIWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 INFO (global output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. Notes ===== This routine temporarily returns when N <= 1. The distributed submatrices sub( X ) and sub( B ) should be distributed the same way on the same processes. These conditions ensure that sub( X ) and sub( B ) are "perfectly" aligned. Moreover, this routine requires the distributed submatrices sub( A ), sub( X ), and sub( B ) to be aligned on a block boundary, i.e., if f(x,y) = MOD( x-1, y ): f( IA, DESCA( MB_ ) ) = f( JA, DESCA( NB_ ) ) = 0, f( IB, DESCB( MB_ ) ) = f( JB, DESCB( NB_ ) ) = 0, and f( IX, DESCX( MB_ ) ) = f( JX, DESCX( NB_ ) ) = 0. 
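Every routine in this file reports argument errors through the same INFO convention: INFO = -(i*100+j) for an illegal entry j of array argument i, INFO = -i for an illegal scalar argument i, and a positive INFO for a routine-specific failure (for the triangular solvers and inverters, a zero diagonal). A small hedged Python decoder of that convention; the function name and message strings are illustrative, not part of ScaLAPACK.

```python
def describe_info(info):
    """Decode the ScaLAPACK INFO convention used by these man pages:
    0 = success; -(i*100+j) = entry j of array argument i was illegal;
    -i = scalar argument i was illegal; > 0 = routine-specific failure
    (here: a zero diagonal element, i.e. a singular triangular matrix)."""
    if info == 0:
        return "successful exit"
    if info < 0:
        code = -info
        if code > 100:                    # i*100 + j with i >= 1, j >= 1
            i, j = divmod(code, 100)
            return f"array argument {i}: entry {j} had an illegal value"
        return f"scalar argument {code} had an illegal value"
    return f"diagonal element {info} is exactly zero (matrix is singular)"

print(describe_info(-902))   # array argument 9 (e.g. DESCA), entry 2
print(describe_info(-4))     # scalar argument 4
print(describe_info(3))      # positive INFO: singularity at position 3
```

Such a decoder is handy in a driver that checks INFO after each P*TR* call instead of letting PXERBLA abort.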
.TH PSTRTI2 l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PSTRTI2 - compute the inverse of a real upper or lower triangular block matrix sub( A ) = A(IA:IA+N-1,JA:JA+N-1) .SH SYNOPSIS .TP 20 SUBROUTINE PSTRTI2( UPLO, DIAG, N, A, IA, JA, DESCA, INFO ) .TP 20 .ti +4 CHARACTER DIAG, UPLO .TP 20 .ti +4 INTEGER IA, INFO, JA, N .TP 20 .ti +4 INTEGER DESCA( * ) .TP 20 .ti +4 REAL A( * ) .SH PURPOSE PSTRTI2 computes the inverse of a real upper or lower triangular block matrix sub( A ) = A(IA:IA+N-1,JA:JA+N-1). This matrix should be contained in one and only one process memory space (local operation). Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclically distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ ) The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distributed over. The context itself is global, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. 
.br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 UPLO (global input) CHARACTER*1 = 'U': sub( A ) is upper triangular; .br = 'L': sub( A ) is lower triangular. .TP 8 DIAG (global input) CHARACTER*1 .br = 'N': sub( A ) is non-unit triangular .br = 'U': sub( A ) is unit triangular .TP 8 N (global input) INTEGER The number of rows and columns to be operated on, i.e. the order of the distributed submatrix sub( A ). N >= 0. .TP 8 A (local input/local output) REAL pointer into the local memory to an array of dimension (LLD_A,LOCc(JA+N-1)), this array contains the local pieces of the triangular matrix sub( A ). If UPLO = 'U', the leading N-by-N upper triangular part of the matrix sub( A ) contains the upper triangular matrix, and the strictly lower triangular part of sub( A ) is not referenced. 
If UPLO = 'L', the leading N-by-N lower triangular part of the matrix sub( A ) contains the lower triangular matrix, and the strictly upper triangular part of sub( A ) is not referenced. If DIAG = 'U', the diagonal elements of sub( A ) are also not referenced and are assumed to be 1. On exit, the (triangular) inverse of the original matrix, in the same storage format. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 INFO (local output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-th entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i.
.TH PSTRTRI l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PSTRTRI - compute the inverse of an upper or lower triangular distributed matrix sub( A ) = A(IA:IA+N-1,JA:JA+N-1) .SH SYNOPSIS .TP 20 SUBROUTINE PSTRTRI( UPLO, DIAG, N, A, IA, JA, DESCA, INFO ) .TP 20 .ti +4 CHARACTER DIAG, UPLO .TP 20 .ti +4 INTEGER IA, INFO, JA, N .TP 20 .ti +4 INTEGER DESCA( * ) .TP 20 .ti +4 REAL A( * ) .SH PURPOSE PSTRTRI computes the inverse of an upper or lower triangular distributed matrix sub( A ) = A(IA:IA+N-1,JA:JA+N-1). Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclically distributed array. Such a global array has an associated description vector DESCA. 
In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). 
An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 UPLO (global input) CHARACTER Specifies whether the distributed matrix sub( A ) is upper or lower triangular: .br = 'U': Upper triangular, .br = 'L': Lower triangular. .TP 8 DIAG (global input) CHARACTER Specifies whether or not the distributed matrix sub( A ) is unit triangular: .br = 'N': Non-unit triangular, .br = 'U': Unit triangular. .TP 8 N (global input) INTEGER The number of rows and columns to be operated on, i.e. the order of the distributed submatrix sub( A ). N >= 0. .TP 8 A (local input/local output) REAL pointer into the local memory to an array of dimension (LLD_A,LOCc(JA+N-1)). On entry, this array contains the local pieces of the triangular matrix sub( A ). If UPLO = 'U', the leading N-by-N upper triangular part of the matrix sub( A ) contains the upper triangular matrix to be inverted, and the strictly lower triangular part of sub( A ) is not referenced. If UPLO = 'L', the leading N-by-N lower triangular part of the matrix sub( A ) contains the lower triangular matrix, and the strictly upper triangular part of sub( A ) is not referenced. On exit, the (triangular) inverse of the original matrix. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 INFO (global output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. > 0: If INFO = K, A(IA+K-1,JA+K-1) is exactly zero. 
The triangular matrix sub( A ) is singular and its inverse cannot be computed.
.TH PSTRTRS l "12 May 1997" "LAPACK version 1.5" "LAPACK auxiliary routine (version 1.5)" .SH NAME PSTRTRS - solve a triangular system of the form sub( A ) * X = sub( B ) or sub( A )**T * X = sub( B ), .SH SYNOPSIS .TP 20 SUBROUTINE PSTRTRS( UPLO, TRANS, DIAG, N, NRHS, A, IA, JA, DESCA, B, IB, JB, DESCB, INFO ) .TP 20 .ti +4 CHARACTER DIAG, TRANS, UPLO .TP 20 .ti +4 INTEGER IA, IB, INFO, JA, JB, N, NRHS .TP 20 .ti +4 INTEGER DESCA( * ), DESCB( * ) .TP 20 .ti +4 REAL A( * ), B( * ) .SH PURPOSE PSTRTRS solves a triangular system of the form sub( A ) * X = sub( B ) or sub( A )**T * X = sub( B ), where sub( A ) denotes A(IA:IA+N-1,JA:JA+N-1) and is a triangular distributed matrix of order N, and B(IB:IB+N-1,JB:JB+NRHS-1) is an N-by-NRHS distributed matrix denoted by sub( B ). A check is made to verify that sub( A ) is nonsingular. .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclically distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ ) The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distributed over. The context itself is global, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. 
.br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 UPLO (global input) CHARACTER = 'U': sub( A ) is upper triangular; .br = 'L': sub( A ) is lower triangular. .TP 8 TRANS (global input) CHARACTER .br Specifies the form of the system of equations: .br = 'N': Solve sub( A ) * X = sub( B ) (No transpose) .br = 'T': Solve sub( A )**T * X = sub( B ) (Transpose) .br = 'C': Solve sub( A )**T * X = sub( B ) (Conjugate transpose = Transpose) .TP 8 DIAG (global input) CHARACTER .br = 'N': sub( A ) is non-unit triangular; .br = 'U': sub( A ) is unit triangular. .TP 8 N (global input) INTEGER The number of rows and columns to be operated on, i.e., the order of the distributed submatrix sub( A ). N >= 0. 
.TP 8 NRHS (global input) INTEGER The number of right hand sides, i.e., the number of columns of the distributed matrix sub( B ). NRHS >= 0. .TP 8 A (local input) REAL pointer into the local memory to an array of dimension (LLD_A,LOCc(JA+N-1) ). This array contains the local pieces of the distributed triangular matrix sub( A ). If UPLO = 'U', the leading N-by-N upper triangular part of sub( A ) contains the upper triangular matrix, and the strictly lower triangular part of sub( A ) is not referenced. If UPLO = 'L', the leading N-by-N lower triangular part of sub( A ) contains the lower triangular matrix, and the strictly upper triangular part of sub( A ) is not referenced. If DIAG = 'U', the diagonal elements of sub( A ) are also not referenced and are assumed to be 1. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 B (local input/local output) REAL pointer into the local memory to an array of dimension (LLD_B,LOCc(JB+NRHS-1)). On entry, this array contains the local pieces of the right hand side distributed matrix sub( B ). On exit, if INFO = 0, sub( B ) is overwritten by the solution matrix X. .TP 8 IB (global input) INTEGER The row index in the global array B indicating the first row of sub( B ). .TP 8 JB (global input) INTEGER The column index in the global array B indicating the first column of sub( B ). .TP 8 DESCB (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix B. .TP 8 INFO (output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. 
> 0: If INFO = i, the i-th diagonal element of sub( A ) is zero, indicating that the submatrix is singular and the solutions X have not been computed.
.TH PSTZRZF l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PSTZRZF - reduce the M-by-N ( M<=N ) real upper trapezoidal matrix sub( A ) = A(IA:IA+M-1,JA:JA+N-1) to upper triangular form by means of orthogonal transformations .SH SYNOPSIS .TP 20 SUBROUTINE PSTZRZF( M, N, A, IA, JA, DESCA, TAU, WORK, LWORK, INFO ) .TP 20 .ti +4 INTEGER IA, INFO, JA, LWORK, M, N .TP 20 .ti +4 INTEGER DESCA( * ) .TP 20 .ti +4 REAL A( * ), TAU( * ), WORK( * ) .SH PURPOSE PSTZRZF reduces the M-by-N ( M<=N ) real upper trapezoidal matrix sub( A ) = A(IA:IA+M-1,JA:JA+N-1) to upper triangular form by means of orthogonal transformations. The upper trapezoidal matrix sub( A ) is factored as .br sub( A ) = ( R 0 ) * Z, .br where Z is an N-by-N orthogonal matrix and R is an M-by-M upper triangular matrix. .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclically distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ ) The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distributed over. The context itself is global, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. 
.br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 M (global input) INTEGER The number of rows to be operated on, i.e. the number of rows of the distributed submatrix sub( A ). M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on, i.e. the number of columns of the distributed submatrix sub( A ). N >= 0. .TP 8 A (local input/local output) REAL pointer into the local memory to an array of dimension (LLD_A, LOCc(JA+N-1)). On entry, the local pieces of the M-by-N distributed matrix sub( A ) which is to be factored. 
On exit, the leading M-by-M upper triangular part of sub( A ) contains the upper trian- gular matrix R, and elements M+1 to N of the first M rows of sub( A ), with the array TAU, represent the orthogonal matrix Z as a product of M elementary reflectors. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 TAU (local output) REAL, array, dimension LOCr(IA+M-1) This array contains the scalar factors of the elementary reflectors. TAU is tied to the distributed matrix A. .TP 8 WORK (local workspace/local output) REAL array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. LWORK is local input and must be at least LWORK >= MB_A * ( Mp0 + Nq0 + MB_A ), where IROFF = MOD( IA-1, MB_A ), ICOFF = MOD( JA-1, NB_A ), IAROW = INDXG2P( IA, MB_A, MYROW, RSRC_A, NPROW ), IACOL = INDXG2P( JA, NB_A, MYCOL, CSRC_A, NPCOL ), Mp0 = NUMROC( M+IROFF, MB_A, MYROW, IAROW, NPROW ), Nq0 = NUMROC( N+ICOFF, NB_A, MYCOL, IACOL, NPCOL ), and NUMROC, INDXG2P are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 INFO (global output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. 
.SH FURTHER DETAILS The factorization is obtained by Householder's method. The kth transformation matrix, Z( k ), which is used to introduce zeros into the (m - k + 1)th row of sub( A ), is given in the form .br Z( k ) = ( I 0 ), .br ( 0 T( k ) ) .br where .br T( k ) = I - tau*u( k )*u( k )', u( k ) = ( 1 ) .br ( 0 ) .br ( z( k ) ) .br tau is a scalar and z( k ) is an ( n - m ) element vector. tau and z( k ) are chosen to annihilate the elements of the kth row of sub( A ). .br The scalar tau is returned in the kth element of TAU and the vector u( k ) in the kth row of sub( A ), such that the elements of z( k ) are in a( k, m + 1 ), ..., a( k, n ). The elements of R are returned in the upper triangular part of sub( A ). .br Z is given by .br Z = Z( 1 ) * Z( 2 ) * ... * Z( m ). .br scalapack-doc-1.5/man/manl/pzdbsv.l0100644000056400000620000000141406335610653016733 0ustar pfrauenfstaff.TH PZDBSV l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PZDBSV - solve a system of linear equations A(1:N, JA:JA+N-1) * X = B(IB:IB+N-1, 1:NRHS) .SH SYNOPSIS .TP 19 SUBROUTINE PZDBSV( N, BWL, BWU, NRHS, A, JA, DESCA, B, IB, DESCB, WORK, LWORK, INFO ) .TP 19 .ti +4 INTEGER BWL, BWU, IB, INFO, JA, LWORK, N, NRHS .TP 19 .ti +4 INTEGER DESCA( * ), DESCB( * ) .TP 19 .ti +4 COMPLEX*16 A( * ), B( * ), WORK( * ) .SH PURPOSE PZDBSV solves a system of linear equations where A(1:N, JA:JA+N-1) is an N-by-N complex .br banded diagonally dominant-like distributed .br matrix with bandwidth BWL, BWU. .br Gaussian elimination without pivoting .br is used to factor a reordering .br of the matrix into L U. .br See PZDBTRF and PZDBTRS for details.
.br scalapack-doc-1.5/man/manl/pzdbtrf.l0100644000056400000620000000214606335610653017101 0ustar pfrauenfstaff.TH PZDBTRF l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PZDBTRF - compute an LU factorization of an N-by-N complex banded diagonally dominant-like distributed matrix with bandwidth BWL, BWU .SH SYNOPSIS .TP 20 SUBROUTINE PZDBTRF( N, BWL, BWU, A, JA, DESCA, AF, LAF, WORK, LWORK, INFO ) .TP 20 .ti +4 INTEGER BWL, BWU, INFO, JA, LAF, LWORK, N .TP 20 .ti +4 INTEGER DESCA( * ) .TP 20 .ti +4 COMPLEX*16 A( * ), AF( * ), WORK( * ) .SH PURPOSE PZDBTRF computes an LU factorization of an N-by-N complex banded diagonally dominant-like distributed matrix with bandwidth BWL, BWU: A(1:N, JA:JA+N-1). Reordering is used to increase parallelism in the factorization. This reordering results in factors that are DIFFERENT from those produced by equivalent sequential codes. These factors cannot be used directly by users; however, they can be used in .br subsequent calls to PZDBTRS to solve linear systems. .br The factorization has the form .br P A(1:N, JA:JA+N-1) P^T = L U .br where U is a banded upper triangular matrix and L is banded lower triangular, and P is a permutation matrix.
.br scalapack-doc-1.5/man/manl/pzdbtrs.l0100644000056400000620000000166606335610653017124 0ustar pfrauenfstaff.TH PZDBTRS l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PZDBTRS - solve a system of linear equations A(1:N, JA:JA+N-1) * X = B(IB:IB+N-1, 1:NRHS) .SH SYNOPSIS .TP 20 SUBROUTINE PZDBTRS( TRANS, N, BWL, BWU, NRHS, A, JA, DESCA, B, IB, DESCB, AF, LAF, WORK, LWORK, INFO ) .TP 20 .ti +4 CHARACTER TRANS .TP 20 .ti +4 INTEGER BWL, BWU, IB, INFO, JA, LAF, LWORK, N, NRHS .TP 20 .ti +4 INTEGER DESCA( * ), DESCB( * ) .TP 20 .ti +4 COMPLEX*16 A( * ), AF( * ), B( * ), WORK( * ) .SH PURPOSE PZDBTRS solves a system of linear equations .br A(1:N, JA:JA+N-1) * X = B(IB:IB+N-1, 1:NRHS) .br or .br A(1:N, JA:JA+N-1)' * X = B(IB:IB+N-1, 1:NRHS) .br where A(1:N, JA:JA+N-1) is the matrix used to produce the factors stored in A(1:N,JA:JA+N-1) and AF by PZDBTRF. .br A(1:N, JA:JA+N-1) is an N-by-N complex .br banded diagonally dominant-like distributed .br matrix with bandwidth BWL, BWU. .br Routine PZDBTRF MUST be called first. .br scalapack-doc-1.5/man/manl/pzdbtrsv.l0100644000056400000620000000221206335610653017276 0ustar pfrauenfstaff.TH PZDBTRSV l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PZDBTRSV - solve a banded triangular system of linear equations A(1:N, JA:JA+N-1) * X = B(IB:IB+N-1, 1:NRHS) .SH SYNOPSIS .TP 21 SUBROUTINE PZDBTRSV( UPLO, TRANS, N, BWL, BWU, NRHS, A, JA, DESCA, B, IB, DESCB, AF, LAF, WORK, LWORK, INFO ) .TP 21 .ti +4 CHARACTER TRANS, UPLO .TP 21 .ti +4 INTEGER BWL, BWU, IB, INFO, JA, LAF, LWORK, N, NRHS .TP 21 .ti +4 INTEGER DESCA( * ), DESCB( * ) .TP 21 .ti +4 COMPLEX*16 A( * ), AF( * ), B( * ), WORK( * ) .SH PURPOSE PZDBTRSV solves a banded triangular system of linear equations .br A(1:N, JA:JA+N-1) * X = B(IB:IB+N-1, 1:NRHS) .br or .br A(1:N, JA:JA+N-1)^H * X = B(IB:IB+N-1, 1:NRHS) .br where A(1:N, JA:JA+N-1) is a banded .br triangular matrix factor produced by the .br Gaussian elimination code PZDBTRF .br and is stored in A(1:N,JA:JA+N-1) and AF.
.br The matrix stored in A(1:N, JA:JA+N-1) is either .br upper or lower triangular according to UPLO, .br and the choice of solving A(1:N, JA:JA+N-1) or A(1:N, JA:JA+N-1)^H is dictated by the user by the parameter TRANS. .br Routine PZDBTRF MUST be called first. .br scalapack-doc-1.5/man/manl/pzdrscl.l0100644000056400000620000001114506335610653017106 0ustar pfrauenfstaff.TH PZDRSCL l "12 May 1997" "LAPACK version 1.5" "LAPACK auxiliary routine (version 1.5)" .SH NAME PZDRSCL - multiply an N-element complex distributed vector sub( X ) by the real scalar 1/a .SH SYNOPSIS .TP 20 SUBROUTINE PZDRSCL( N, SA, SX, IX, JX, DESCX, INCX ) .TP 20 .ti +4 INTEGER IX, INCX, JX, N .TP 20 .ti +4 DOUBLE PRECISION SA .TP 20 .ti +4 INTEGER DESCX( * ) .TP 20 .ti +4 COMPLEX*16 SX( * ) .SH PURPOSE PZDRSCL multiplies an N-element complex distributed vector sub( X ) by the real scalar 1/a. This is done without overflow or underflow as long as the final sub( X )/a does not overflow or underflow. .br where sub( X ) denotes X(IX:IX+N-1,JX:JX), if INCX = 1, .br X(IX:IX,JX:JX+N-1), if INCX = M_X. Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclically distributed array. Such a global array has an associated description vector descA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DT_A (global) descA[ DT_ ] The descriptor type. In this case, DT_A = 1. .br CTXT_A (global) descA[ CTXT_ ] The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) descA[ M_ ] The number of rows in the global array A.
.br N_A (global) descA[ N_ ] The number of columns in the global array A. .br MB_A (global) descA[ MB_ ] The blocking factor used to distribu- te the rows of the array. .br NB_A (global) descA[ NB_ ] The blocking factor used to distribu- te the columns of the array. RSRC_A (global) descA[ RSRC_ ] The process row over which the first row of the array A is distributed. CSRC_A (global) descA[ CSRC_ ] The process column over which the first column of the array A is distributed. .br LLD_A (local) descA[ LLD_ ] The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br Because vectors may be seen as particular matrices, a distributed vector is considered to be a distributed matrix. .br .SH ARGUMENTS .TP 8 N (global input) pointer to INTEGER The number of components of the distributed vector sub( X ). N >= 0. .TP 8 SA (global input) DOUBLE PRECISION The scalar a which is used to divide each component of sub( X ). SA must be nonzero, or the subroutine will divide by zero.
.TP 8 SX (local input/local output) COMPLEX*16 array containing the local pieces of a distributed matrix of dimension of at least ( (JX-1)*M_X + IX + ( N - 1 )*abs( INCX ) ) This array contains the entries of the distributed vector sub( X ). .TP 8 IX (global input) pointer to INTEGER The global row index of the submatrix of the distributed matrix X to operate on. .TP 8 JX (global input) pointer to INTEGER The global column index of the submatrix of the distributed matrix X to operate on. .TP 8 DESCX (global and local input) INTEGER array of dimension 8. The array descriptor of the distributed matrix X. .TP 8 INCX (global input) pointer to INTEGER The global increment for the elements of X. Only two values of INCX are supported in this version, namely 1 and M_X. scalapack-doc-1.5/man/manl/pzdtsv.l0100644000056400000620000000137706335610653016765 0ustar pfrauenfstaff.TH PZDTSV l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PZDTSV - solve a system of linear equations A(1:N, JA:JA+N-1) * X = B(IB:IB+N-1, 1:NRHS) .SH SYNOPSIS .TP 19 SUBROUTINE PZDTSV( N, NRHS, DL, D, DU, JA, DESCA, B, IB, DESCB, WORK, LWORK, INFO ) .TP 19 .ti +4 INTEGER IB, INFO, JA, LWORK, N, NRHS .TP 19 .ti +4 INTEGER DESCA( * ), DESCB( * ) .TP 19 .ti +4 COMPLEX*16 B( * ), D( * ), DL( * ), DU( * ), WORK( * ) .SH PURPOSE PZDTSV solves a system of linear equations where A(1:N, JA:JA+N-1) is an N-by-N complex .br tridiagonal diagonally dominant-like distributed .br matrix. .br Gaussian elimination without pivoting .br is used to factor a reordering .br of the matrix into L U. .br See PZDTTRF and PZDTTRS for details. 
.br scalapack-doc-1.5/man/manl/pzdttrf.l0100644000056400000620000000214106335610653017116 0ustar pfrauenfstaff.TH PZDTTRF l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PZDTTRF - compute an LU factorization of an N-by-N complex tridiagonal diagonally dominant-like distributed matrix A(1:N, JA:JA+N-1) .SH SYNOPSIS .TP 20 SUBROUTINE PZDTTRF( N, DL, D, DU, JA, DESCA, AF, LAF, WORK, LWORK, INFO ) .TP 20 .ti +4 INTEGER INFO, JA, LAF, LWORK, N .TP 20 .ti +4 INTEGER DESCA( * ) .TP 20 .ti +4 COMPLEX*16 AF( * ), D( * ), DL( * ), DU( * ), WORK( * ) .SH PURPOSE PZDTTRF computes an LU factorization of an N-by-N complex tridiagonal diagonally dominant-like distributed matrix A(1:N, JA:JA+N-1). Reordering is used to increase parallelism in the factorization. This reordering results in factors that are DIFFERENT from those produced by equivalent sequential codes. These factors cannot be used directly by users; however, they can be used in .br subsequent calls to PZDTTRS to solve linear systems. .br The factorization has the form .br P A(1:N, JA:JA+N-1) P^T = L U .br where U is a tridiagonal upper triangular matrix and L is tridiagonal lower triangular, and P is a permutation matrix.
.br scalapack-doc-1.5/man/manl/pzdttrs.l0100644000056400000620000000165106335610653017140 0ustar pfrauenfstaff.TH PZDTTRS l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PZDTTRS - solve a system of linear equations A(1:N, JA:JA+N-1) * X = B(IB:IB+N-1, 1:NRHS) .SH SYNOPSIS .TP 20 SUBROUTINE PZDTTRS( TRANS, N, NRHS, DL, D, DU, JA, DESCA, B, IB, DESCB, AF, LAF, WORK, LWORK, INFO ) .TP 20 .ti +4 CHARACTER TRANS .TP 20 .ti +4 INTEGER IB, INFO, JA, LAF, LWORK, N, NRHS .TP 20 .ti +4 INTEGER DESCA( * ), DESCB( * ) .TP 20 .ti +4 COMPLEX*16 AF( * ), B( * ), D( * ), DL( * ), DU( * ), WORK( * ) .SH PURPOSE PZDTTRS solves a system of linear equations .br A(1:N, JA:JA+N-1) * X = B(IB:IB+N-1, 1:NRHS) .br or .br A(1:N, JA:JA+N-1)' * X = B(IB:IB+N-1, 1:NRHS) .br where A(1:N, JA:JA+N-1) is the matrix used to produce the factors stored in A(1:N,JA:JA+N-1) and AF by PZDTTRF. .br A(1:N, JA:JA+N-1) is an N-by-N complex .br tridiagonal diagonally dominant-like distributed .br matrix. .br Routine PZDTTRF MUST be called first. .br scalapack-doc-1.5/man/manl/pzdttrsv.l0100644000056400000620000000223706335610653017327 0ustar pfrauenfstaff.TH PZDTTRSV l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PZDTTRSV - solve a tridiagonal triangular system of linear equations A(1:N, JA:JA+N-1) * X = B(IB:IB+N-1, 1:NRHS) .SH SYNOPSIS .TP 21 SUBROUTINE PZDTTRSV( UPLO, TRANS, N, NRHS, DL, D, DU, JA, DESCA, B, IB, DESCB, AF, LAF, WORK, LWORK, INFO ) .TP 21 .ti +4 CHARACTER TRANS, UPLO .TP 21 .ti +4 INTEGER IB, INFO, JA, LAF, LWORK, N, NRHS .TP 21 .ti +4 INTEGER DESCA( * ), DESCB( * ) .TP 21 .ti +4 COMPLEX*16 AF( * ), B( * ), D( * ), DL( * ), DU( * ), WORK( * ) .SH PURPOSE PZDTTRSV solves a tridiagonal triangular system of linear equations .br A(1:N, JA:JA+N-1) * X = B(IB:IB+N-1, 1:NRHS) .br or .br A(1:N, JA:JA+N-1)^H * X = B(IB:IB+N-1, 1:NRHS) .br where A(1:N, JA:JA+N-1) is a tridiagonal .br triangular matrix factor produced by the .br Gaussian elimination code PZDTTRF .br and is stored in A(1:N,JA:JA+N-1) and AF.
.br The matrix stored in A(1:N, JA:JA+N-1) is either .br upper or lower triangular according to UPLO, .br and the choice of solving A(1:N, JA:JA+N-1) or A(1:N, JA:JA+N-1)^H is dictated by the user by the parameter TRANS. .br Routine PZDTTRF MUST be called first. .br scalapack-doc-1.5/man/manl/pzgbsv.l0100644000056400000620000000140306335610654016735 0ustar pfrauenfstaff.TH PZGBSV l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PZGBSV - solve a system of linear equations A(1:N, JA:JA+N-1) * X = B(IB:IB+N-1, 1:NRHS) .SH SYNOPSIS .TP 19 SUBROUTINE PZGBSV( N, BWL, BWU, NRHS, A, JA, DESCA, IPIV, B, IB, DESCB, WORK, LWORK, INFO ) .TP 19 .ti +4 INTEGER BWL, BWU, IB, INFO, JA, LWORK, N, NRHS .TP 19 .ti +4 INTEGER DESCA( * ), DESCB( * ), IPIV( * ) .TP 19 .ti +4 COMPLEX*16 A( * ), B( * ), WORK( * ) .SH PURPOSE PZGBSV solves a system of linear equations where A(1:N, JA:JA+N-1) is an N-by-N complex .br banded distributed .br matrix with bandwidth BWL, BWU. .br Gaussian elimination with pivoting .br is used to factor a reordering .br of the matrix into P L U. .br See PZGBTRF and PZGBTRS for details. .br scalapack-doc-1.5/man/manl/pzgbtrf.l0100644000056400000620000000237606335610654017112 0ustar pfrauenfstaff.TH PZGBTRF l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PZGBTRF - compute a LU factorization of an N-by-N complex banded distributed matrix with bandwidth BWL, BWU .SH SYNOPSIS .TP 20 SUBROUTINE PZGBTRF( N, BWL, BWU, A, JA, DESCA, IPIV, AF, LAF, WORK, LWORK, INFO ) .TP 20 .ti +4 INTEGER BWL, BWU, INFO, JA, LAF, LWORK, N .TP 20 .ti +4 INTEGER DESCA( * ), IPIV( * ) .TP 20 .ti +4 COMPLEX*16 A( * ), AF( * ), WORK( * ) .SH PURPOSE PZGBTRF computes a LU factorization of an N-by-N complex banded distributed matrix with bandwidth BWL, BWU: A(1:N, JA:JA+N-1). Reordering is used to increase parallelism in the factorization. 
This reordering results in factors that are DIFFERENT from those produced by equivalent sequential codes. These factors cannot be used directly by users; however, they can be used in .br subsequent calls to PZGBTRS to solve linear systems. .br The factorization has the form .br P A(1:N, JA:JA+N-1) Q = L U .br where U is a banded upper triangular matrix and L is banded lower triangular, and P and Q are permutation matrices. .br The matrix Q represents reordering of columns .br for parallelism's sake, while P represents .br reordering of rows for numerical stability using .br classic partial pivoting. .br scalapack-doc-1.5/man/manl/pzgbtrs.l0100644000056400000620000000165406335610654017125 0ustar pfrauenfstaff.TH PZGBTRS l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PZGBTRS - solve a system of linear equations A(1:N, JA:JA+N-1) * X = B(IB:IB+N-1, 1:NRHS) .SH SYNOPSIS .TP 20 SUBROUTINE PZGBTRS( TRANS, N, BWL, BWU, NRHS, A, JA, DESCA, IPIV, B, IB, DESCB, AF, LAF, WORK, LWORK, INFO ) .TP 20 .ti +4 CHARACTER TRANS .TP 20 .ti +4 INTEGER BWU, BWL, IB, INFO, JA, LAF, LWORK, N, NRHS .TP 20 .ti +4 INTEGER DESCA( * ), DESCB( * ), IPIV(*) .TP 20 .ti +4 COMPLEX*16 A( * ), AF( * ), B( * ), WORK( * ) .SH PURPOSE PZGBTRS solves a system of linear equations or .br A(1:N, JA:JA+N-1)' * X = B(IB:IB+N-1, 1:NRHS) .br where A(1:N, JA:JA+N-1) is the matrix used to produce the factors stored in A(1:N,JA:JA+N-1) and AF by PZGBTRF. .br A(1:N, JA:JA+N-1) is an N-by-N complex .br banded distributed .br matrix with bandwidth BWL, BWU. .br Routine PZGBTRF MUST be called first. 
.br scalapack-doc-1.5/man/manl/pzgebd2.l0100644000056400000620000002232506335610654016765 0ustar pfrauenfstaff.TH PZGEBD2 l "12 May 1997" "LAPACK version 1.5" "LAPACK auxiliary routine (version 1.5)" .SH NAME PZGEBD2 - reduce a complex general M-by-N distributed matrix sub( A ) = A(IA:IA+M-1,JA:JA+N-1) to upper or lower bidiagonal form B by a unitary transformation .SH SYNOPSIS .TP 20 SUBROUTINE PZGEBD2( M, N, A, IA, JA, DESCA, D, E, TAUQ, TAUP, WORK, LWORK, INFO ) .TP 20 .ti +4 INTEGER IA, INFO, JA, LWORK, M, N .TP 20 .ti +4 INTEGER DESCA( * ) .TP 20 .ti +4 DOUBLE PRECISION D( * ), E( * ) .TP 20 .ti +4 COMPLEX*16 A( * ), TAUP( * ), TAUQ( * ), WORK( * ) .SH PURPOSE PZGEBD2 reduces a complex general M-by-N distributed matrix sub( A ) = A(IA:IA+M-1,JA:JA+N-1) to upper or lower bidiagonal form B by a unitary transformation: Q' * sub( A ) * P = B. If M >= N, B is upper bidiagonal; if M < N, B is lower bidiagonal. Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclically distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A.
.br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 M (global input) INTEGER The number of rows to be operated on, i.e. the number of rows of the distributed submatrix sub( A ). M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on, i.e. the number of columns of the distributed submatrix sub( A ). N >= 0. .TP 8 A (local input/local output) COMPLEX*16 pointer into the local memory to an array of dimension (LLD_A,LOCc(JA+N-1)). On entry, this array contains the local pieces of the general distributed matrix sub( A ). 
On exit, if M >= N, the diagonal and the first superdiagonal of sub( A ) are overwritten with the upper bidiagonal matrix B; the elements below the diagonal, with the array TAUQ, represent the unitary matrix Q as a product of elementary reflectors, and the elements above the first superdiagonal, with the array TAUP, represent the orthogonal matrix P as a product of elementary reflectors. If M < N, the diagonal and the first subdiagonal are overwritten with the lower bidiagonal matrix B; the elements below the first subdiagonal, with the array TAUQ, represent the unitary matrix Q as a product of elementary reflectors, and the elements above the diagonal, with the array TAUP, represent the orthogonal matrix P as a product of elementary reflectors. See Further Details. IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 D (local output) DOUBLE PRECISION array, dimension LOCc(JA+MIN(M,N)-1) if M >= N; LOCr(IA+MIN(M,N)-1) otherwise. The distributed diagonal elements of the bidiagonal matrix B: D(i) = A(i,i). D is tied to the distributed matrix A. .TP 8 E (local output) DOUBLE PRECISION array, dimension LOCr(IA+MIN(M,N)-1) if M >= N; LOCc(JA+MIN(M,N)-2) otherwise. The distributed off-diagonal elements of the bidiagonal distributed matrix B: if m >= n, E(i) = A(i,i+1) for i = 1,2,...,n-1; if m < n, E(i) = A(i+1,i) for i = 1,2,...,m-1. E is tied to the distributed matrix A. .TP 8 TAUQ (local output) COMPLEX*16 array dimension LOCc(JA+MIN(M,N)-1). The scalar factors of the elementary reflectors which represent the unitary matrix Q. TAUQ is tied to the distributed matrix A. See Further Details. TAUP (local output) COMPLEX*16 array, dimension LOCr(IA+MIN(M,N)-1). 
The scalar factors of the elementary reflectors which represent the unitary matrix P. TAUP is tied to the distributed matrix A. See Further Details. WORK (local workspace/local output) COMPLEX*16 array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. LWORK is local input and must be at least LWORK >= MAX( MpA0, NqA0 ) where NB = MB_A = NB_A, IROFFA = MOD( IA-1, NB ) IAROW = INDXG2P( IA, NB, MYROW, RSRC_A, NPROW ), IACOL = INDXG2P( JA, NB, MYCOL, CSRC_A, NPCOL ), MpA0 = NUMROC( M+IROFFA, NB, MYROW, IAROW, NPROW ), NqA0 = NUMROC( N+IROFFA, NB, MYCOL, IACOL, NPCOL ). INDXG2P and NUMROC are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 INFO (local output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. .SH FURTHER DETAILS The matrices Q and P are represented as products of elementary reflectors: .br If m >= n, .br Q = H(1) H(2) . . . H(n) and P = G(1) G(2) . . . G(n-1) Each H(i) and G(i) has the form: .br H(i) = I - tauq * v * v' and G(i) = I - taup * u * u' where tauq and taup are complex scalars, and v and u are complex vectors; .br v(1:i-1) = 0, v(i) = 1, and v(i+1:m) is stored on exit in A(ia+i:ia+m-1,ja+i-1); .br u(1:i) = 0, u(i+1) = 1, and u(i+2:n) is stored on exit in A(ia+i-1,ja+i+1:ja+n-1); .br tauq is stored in TAUQ(ja+i-1) and taup in TAUP(ia+i-1). .br If m < n, .br Q = H(1) H(2) . . . H(m-1) and P = G(1) G(2) . . . 
G(m) Each H(i) and G(i) has the form: .br H(i) = I - tauq * v * v' and G(i) = I - taup * u * u' where tauq and taup are complex scalars, and v and u are complex vectors; .br v(1:i) = 0, v(i+1) = 1, and v(i+2:m) is stored on exit in A(ia+i+1:ia+m-1,ja+i-1); .br u(1:i-1) = 0, u(i) = 1, and u(i+1:n) is stored on exit in A(ia+i-1,ja+i:ja+n-1); .br tauq is stored in TAUQ(ja+i-1) and taup in TAUP(ia+i-1). .br The contents of sub( A ) on exit are illustrated by the following examples: .br m = 6 and n = 5 (m > n): m = 5 and n = 6 (m < n): .br ( d e u1 u1 u1 ) ( d u1 u1 u1 u1 u1 ) .br ( v1 d e u2 u2 ) ( e d u2 u2 u2 u2 ) .br ( v1 v2 d e u3 ) ( v1 e d u3 u3 u3 ) .br ( v1 v2 v3 d e ) ( v1 v2 e d u4 u4 ) .br ( v1 v2 v3 v4 d ) ( v1 v2 v3 e d u5 ) .br ( v1 v2 v3 v4 v5 ) .br where d and e denote diagonal and off-diagonal elements of B, vi denotes an element of the vector defining H(i), and ui an element of the vector defining G(i). .br Alignment requirements .br ====================== .br The distributed submatrix sub( A ) must verify some alignment properties, namely the following expressions should be true: .br ( MB_A.EQ.NB_A .AND. IROFFA.EQ.ICOFFA ) .br scalapack-doc-1.5/man/manl/pzgebrd.l0100644000056400000620000002234506335610654017067 0ustar pfrauenfstaff.TH PZGEBRD l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PZGEBRD - reduce a complex general M-by-N distributed matrix sub( A ) = A(IA:IA+M-1,JA:JA+N-1) to upper or lower bidiagonal form B by a unitary transformation .SH SYNOPSIS .TP 20 SUBROUTINE PZGEBRD( M, N, A, IA, JA, DESCA, D, E, TAUQ, TAUP, WORK, LWORK, INFO ) .TP 20 .ti +4 INTEGER IA, INFO, JA, LWORK, M, N .TP 20 .ti +4 INTEGER DESCA( * ) .TP 20 .ti +4 DOUBLE PRECISION D( * ), E( * ) .TP 20 .ti +4 COMPLEX*16 A( * ), TAUP( * ), TAUQ( * ), WORK( * ) .SH PURPOSE PZGEBRD reduces a complex general M-by-N distributed matrix sub( A ) = A(IA:IA+M-1,JA:JA+N-1) to upper or lower bidiagonal form B by a unitary transformation: Q' * sub( A ) * P = B.
If M >= N, B is upper bidiagonal; if M < N, B is lower bidiagonal. Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. 
.br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 M (global input) INTEGER The number of rows to be operated on, i.e. the number of rows of the distributed submatrix sub( A ). M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on, i.e. the number of columns of the distributed submatrix sub( A ). N >= 0. .TP 8 A (local input/local output) COMPLEX*16 pointer into the local memory to an array of dimension (LLD_A,LOCc(JA+N-1)). On entry, this array contains the local pieces of the general distributed matrix sub( A ). On exit, if M >= N, the diagonal and the first superdiagonal of sub( A ) are overwritten with the upper bidiagonal matrix B; the elements below the diagonal, with the array TAUQ, represent the unitary matrix Q as a product of elementary reflectors, and the elements above the first superdiagonal, with the array TAUP, represent the unitary matrix P as a product of elementary reflectors. If M < N, the diagonal and the first subdiagonal are overwritten with the lower bidiagonal matrix B; the elements below the first subdiagonal, with the array TAUQ, represent the unitary matrix Q as a product of elementary reflectors, and the elements above the diagonal, with the array TAUP, represent the unitary matrix P as a product of elementary reflectors. See Further Details. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A.
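The LOCr/LOCc computation described in the Notes can be sketched in plain Python. The numroc helper below is a hypothetical mirror of the documented semantics of the Fortran tool function NUMROC (1-based global sizes, 0-based process coordinates), not the real implementation:

```python
import math

def numroc(n, nb, iproc, isrcproc, nprocs):
    """Number of rows/columns of a block-cyclically distributed matrix
    owned by process iproc (a sketch of ScaLAPACK's NUMROC)."""
    # Distance of this process from the source process, around the ring.
    mydist = (nprocs + iproc - isrcproc) % nprocs
    nblocks = n // nb                      # number of complete blocks
    result = (nblocks // nprocs) * nb      # whole block cycles owned by everyone
    extrablocks = nblocks % nprocs         # leftover complete blocks
    if mydist < extrablocks:
        result += nb                       # this process gets one extra block
    elif mydist == extrablocks:
        result += n % nb                   # this process gets the partial block
    return result

# LOCr(M) for M = 100, MB_A = 8, process row 1 of NPROW = 4, RSRC_A = 0:
locr = numroc(100, 8, 1, 0, 4)
# The documented upper bound: LOCr(M) <= ceil( ceil(M/MB_A)/NPROW )*MB_A
bound = math.ceil(math.ceil(100 / 8) / 4) * 8
```

Summing numroc over all NPROW process rows recovers the global row count, and every local count stays below the ceiling bound quoted above.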
.TP 8 D (local output) DOUBLE PRECISION array, dimension LOCc(JA+MIN(M,N)-1) if M >= N; LOCr(IA+MIN(M,N)-1) otherwise. The distributed diagonal elements of the bidiagonal matrix B: D(i) = A(i,i). D is tied to the distributed matrix A. .TP 8 E (local output) DOUBLE PRECISION array, dimension LOCr(IA+MIN(M,N)-1) if M >= N; LOCc(JA+MIN(M,N)-2) otherwise. The distributed off-diagonal elements of the bidiagonal distributed matrix B: if m >= n, E(i) = A(i,i+1) for i = 1,2,...,n-1; if m < n, E(i) = A(i+1,i) for i = 1,2,...,m-1. E is tied to the distributed matrix A. .TP 8 TAUQ (local output) COMPLEX*16 array dimension LOCc(JA+MIN(M,N)-1). The scalar factors of the elementary reflectors which represent the unitary matrix Q. TAUQ is tied to the distributed matrix A. See Further Details. TAUP (local output) COMPLEX*16 array, dimension LOCr(IA+MIN(M,N)-1). The scalar factors of the elementary reflectors which represent the unitary matrix P. TAUP is tied to the distributed matrix A. See Further Details. WORK (local workspace/local output) COMPLEX*16 array, dimension (LWORK) On exit, WORK( 1 ) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. LWORK is local input and must be at least LWORK >= NB*( MpA0 + NqA0 + 1 ) + NqA0 where NB = MB_A = NB_A, IROFFA = MOD( IA-1, NB ), ICOFFA = MOD( JA-1, NB ), IAROW = INDXG2P( IA, NB, MYROW, RSRC_A, NPROW ), IACOL = INDXG2P( JA, NB, MYCOL, CSRC_A, NPCOL ), MpA0 = NUMROC( M+IROFFA, NB, MYROW, IAROW, NPROW ), NqA0 = NUMROC( N+ICOFFA, NB, MYCOL, IACOL, NPCOL ). INDXG2P and NUMROC are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. 
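The LWORK lower bound above can be evaluated ahead of time on the host. The sketch below is a hypothetical Python transcription of the documented formula; numroc and indxg2p are stand-ins mirroring the documented behavior of the ScaLAPACK tool functions of the same names, not the real routines:

```python
def numroc(n, nb, iproc, isrcproc, nprocs):
    # Sketch of ScaLAPACK's NUMROC (see the Notes section).
    mydist = (nprocs + iproc - isrcproc) % nprocs
    nblocks = n // nb
    result = (nblocks // nprocs) * nb
    extra = nblocks % nprocs
    if mydist < extra:
        result += nb
    elif mydist == extra:
        result += n % nb
    return result

def indxg2p(indx, nb, iproc, isrcproc, nprocs):
    # Process coordinate owning 1-based global index indx.
    return (isrcproc + (indx - 1) // nb) % nprocs

def pzgebrd_lwork_min(m, n, ia, ja, nb, myrow, mycol,
                      rsrc_a, csrc_a, nprow, npcol):
    """Minimum LWORK per the formula in this page:
    LWORK >= NB*( MpA0 + NqA0 + 1 ) + NqA0, with NB = MB_A = NB_A."""
    iroffa = (ia - 1) % nb
    icoffa = (ja - 1) % nb
    iarow = indxg2p(ia, nb, myrow, rsrc_a, nprow)
    iacol = indxg2p(ja, nb, mycol, csrc_a, npcol)
    mpa0 = numroc(m + iroffa, nb, myrow, iarow, nprow)
    nqa0 = numroc(n + icoffa, nb, mycol, iacol, npcol)
    return nb * (mpa0 + nqa0 + 1) + nqa0
```

In practice the workspace query (LWORK = -1) is the authoritative way to size WORK; a host-side estimate like this is only useful for planning memory budgets.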
.TP 8 INFO (global output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. .SH FURTHER DETAILS The matrices Q and P are represented as products of elementary reflectors: .br If m >= n, .br Q = H(1) H(2) . . . H(n) and P = G(1) G(2) . . . G(n-1) Each H(i) and G(i) has the form: .br H(i) = I - tauq * v * v' and G(i) = I - taup * u * u' where tauq and taup are complex scalars, and v and u are complex vectors; .br v(1:i-1) = 0, v(i) = 1, and v(i+1:m) is stored on exit in A(ia+i:ia+m-1,ja+i-1); .br u(1:i) = 0, u(i+1) = 1, and u(i+2:n) is stored on exit in A(ia+i-1,ja+i+1:ja+n-1); .br tauq is stored in TAUQ(ja+i-1) and taup in TAUP(ia+i-1). .br If m < n, .br Q = H(1) H(2) . . . H(m-1) and P = G(1) G(2) . . . G(m) Each H(i) and G(i) has the form: .br H(i) = I - tauq * v * v' and G(i) = I - taup * u * u' where tauq and taup are complex scalars, and v and u are complex vectors; .br v(1:i) = 0, v(i+1) = 1, and v(i+2:m) is stored on exit in A(ia+i+1:ia+m-1,ja+i-1); .br u(1:i-1) = 0, u(i) = 1, and u(i+1:n) is stored on exit in A(ia+i-1,ja+i:ja+n-1); .br tauq is stored in TAUQ(ja+i-1) and taup in TAUP(ia+i-1). .br The contents of sub( A ) on exit are illustrated by the following examples: .br m = 6 and n = 5 (m > n): m = 5 and n = 6 (m < n): ( d e u1 u1 u1 ) ( d u1 u1 u1 u1 u1 ) ( v1 d e u2 u2 ) ( e d u2 u2 u2 u2 ) ( v1 v2 d e u3 ) ( v1 e d u3 u3 u3 ) ( v1 v2 v3 d e ) ( v1 v2 e d u4 u4 ) ( v1 v2 v3 v4 d ) ( v1 v2 v3 e d u5 ) ( v1 v2 v3 v4 v5 ) .br where d and e denote diagonal and off-diagonal elements of B, vi denotes an element of the vector defining H(i), and ui an element of the vector defining G(i). .br Alignment requirements .br ====================== .br The distributed submatrix sub( A ) must verify some alignment proper- ties, namely the following expressions should be true: .br ( MB_A.EQ.NB_A .AND. 
IROFFA.EQ.ICOFFA ) .br scalapack-doc-1.5/man/manl/pzgecon.l0100644000056400000620000001552406335610654017100 0ustar pfrauenfstaff.TH PZGECON l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PZGECON - estimate the reciprocal of the condition number of a general distributed complex matrix A(IA:IA+N-1,JA:JA+N-1), in either the 1-norm or the infinity-norm, using the LU factorization computed by PZGETRF .SH SYNOPSIS .TP 20 SUBROUTINE PZGECON( NORM, N, A, IA, JA, DESCA, ANORM, RCOND, WORK, LWORK, RWORK, LRWORK, INFO ) .TP 20 .ti +4 CHARACTER NORM .TP 20 .ti +4 INTEGER IA, INFO, JA, LRWORK, LWORK, N .TP 20 .ti +4 DOUBLE PRECISION ANORM, RCOND .TP 20 .ti +4 INTEGER DESCA( * ) .TP 20 .ti +4 DOUBLE PRECISION RWORK( * ) .TP 20 .ti +4 COMPLEX*16 A( * ), WORK( * ) .SH PURPOSE PZGECON estimates the reciprocal of the condition number of a general distributed complex matrix A(IA:IA+N-1,JA:JA+N-1), in either the 1-norm or the infinity-norm, using the LU factorization computed by PZGETRF. An estimate is obtained for norm(inv(A(IA:IA+N-1,JA:JA+N-1))), and the reciprocal of the condition number is computed as .br RCOND = 1 / ( norm( A(IA:IA+N-1,JA:JA+N-1) ) * norm( inv(A(IA:IA+N-1,JA:JA+N-1)) ) ). Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. 
The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 NORM (global input) CHARACTER Specifies whether the 1-norm condition number or the infinity-norm condition number is required: .br = '1' or 'O': 1-norm .br = 'I': Infinity-norm .TP 8 N (global input) INTEGER .br The order of the distributed matrix A(IA:IA+N-1,JA:JA+N-1). N >= 0. .TP 8 A (local input) COMPLEX*16 pointer into the local memory to an array of dimension ( LLD_A, LOCc(JA+N-1) ). 
On entry, this array contains the local pieces of the factors L and U from the factorization A(IA:IA+N-1,JA:JA+N-1) = P*L*U; the unit diagonal elements of L are not stored. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 ANORM (global input) DOUBLE PRECISION If NORM = '1' or 'O', the 1-norm of the original distributed matrix A(IA:IA+N-1,JA:JA+N-1). If NORM = 'I', the infinity-norm of the original distributed matrix A(IA:IA+N-1,JA:JA+N-1). .TP 8 RCOND (global output) DOUBLE PRECISION The reciprocal of the condition number of the distributed matrix A(IA:IA+N-1,JA:JA+N-1), computed as .br RCOND = 1 / ( norm( A(IA:IA+N-1,JA:JA+N-1) ) * .br norm( inv(A(IA:IA+N-1,JA:JA+N-1)) ) ). .TP 8 WORK (local workspace/local output) COMPLEX*16 array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. LWORK is local input and must be at least LWORK >= 2*LOCr(N+MOD(IA-1,MB_A)) + MAX( 2, MAX(NB_A*CEIL(NPROW-1,NPCOL),LOCc(N+MOD(JA-1,NB_A)) + NB_A*CEIL(NPCOL-1,NPROW)) ). LOCr and LOCc values can be computed using the ScaLAPACK tool function NUMROC; NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 RWORK (local workspace/local output) DOUBLE PRECISION array, dimension (LRWORK) On exit, RWORK(1) returns the minimal and optimal LRWORK. 
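For intuition, the RCOND quantity defined above can be evaluated directly for a tiny serial matrix. This is only an illustration of what PZGECON estimates: it inverts the matrix explicitly (via the standard closed-form 2x2 inverse), which the routine deliberately avoids by working from the LU factors:

```python
def one_norm(a):
    """1-norm of a matrix given as a list of rows: max absolute column sum."""
    ncols = len(a[0])
    return max(sum(abs(row[j]) for row in a) for j in range(ncols))

# A 2x2 example: det(A) = 4*3 - 2*1 = 10, inverse by the closed form.
a = [[4.0, 2.0], [1.0, 3.0]]
det = a[0][0] * a[1][1] - a[0][1] * a[1][0]
ainv = [[ a[1][1] / det, -a[0][1] / det],
        [-a[1][0] / det,  a[0][0] / det]]

anorm = one_norm(a)               # max column sum: |4|+|1| = 5
rcond = 1.0 / (anorm * one_norm(ainv))
```

A well-conditioned matrix gives RCOND near 1; RCOND near the machine precision signals that solutions of linear systems with this matrix may have no correct digits.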
.TP 8 LRWORK (local or global input) INTEGER The dimension of the array RWORK. LRWORK is local input and must be at least LRWORK >= 2*LOCc(N+MOD(JA-1,NB_A)). If LRWORK = -1, then LRWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 INFO (global output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. .TH PZGEEQU l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PZGEEQU - compute row and column scalings intended to equilibrate an M-by-N distributed matrix sub( A ) = A(IA:IA+M-1,JA:JA+N-1) and reduce its condition number .SH SYNOPSIS .TP 20 SUBROUTINE PZGEEQU( M, N, A, IA, JA, DESCA, R, C, ROWCND, COLCND, AMAX, INFO ) .TP 20 .ti +4 INTEGER IA, INFO, JA, M, N .TP 20 .ti +4 DOUBLE PRECISION AMAX, COLCND, ROWCND .TP 20 .ti +4 INTEGER DESCA( * ) .TP 20 .ti +4 DOUBLE PRECISION C( * ), R( * ) .TP 20 .ti +4 COMPLEX*16 A( * ) .SH PURPOSE PZGEEQU computes row and column scalings intended to equilibrate an M-by-N distributed matrix sub( A ) = A(IA:IA+M-1,JA:JA+N-1) and reduce its condition number. R returns the row scale factors and C the column scale factors, chosen to try to make the largest entry in each row and column of the distributed matrix B with elements B(i,j) = R(i) * A(i,j) * C(j) have absolute value 1. .br R(i) and C(j) are restricted to be between SMLNUM = smallest safe number and BIGNUM = largest safe number. Use of these scaling factors is not guaranteed to reduce the condition number of sub( A ) but works well in practice.
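The row and column scalings described in the Purpose section can be sketched serially. The helper below is a hypothetical illustration of how such scale factors are chosen (row factors first, then column factors of the row-scaled matrix); it omits the SMLNUM/BIGNUM clamping and the distributed layout:

```python
def equilibrate(a):
    """Row/column scale factors in the spirit of the routine above:
    R(i) = 1 / max_j |A(i,j)|, then C(j) = 1 / max_i ( |A(i,j)| * R(i) )."""
    r = [1.0 / max(abs(x) for x in row) for row in a]
    ncols = len(a[0])
    c = [1.0 / max(abs(a[i][j]) * r[i] for i in range(len(a)))
         for j in range(ncols)]
    return r, c

# A badly scaled 2x2 example.
a = [[100.0, 2.0], [1.0, 0.5]]
r, c = equilibrate(a)
# Every entry of B(i,j) = R(i) * A(i,j) * C(j) has absolute value <= 1,
# and each row and column of B attains absolute value 1 somewhere.
```

This is why ROWCND/COLCND matter: when they are already close to 1 (>= 0.1 per the thresholds below), the scale factors are nearly uniform and scaling buys nothing.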
.br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. 
.br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 M (global input) INTEGER The number of rows to be operated on i.e the number of rows of the distributed submatrix sub( A ). M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on i.e the number of columns of the distributed submatrix sub( A ). N >= 0. .TP 8 A (local input) COMPLEX*16 pointer into the local memory to an array of dimension ( LLD_A, LOCc(JA+N-1) ), the local pieces of the M-by-N distributed matrix whose equilibration factors are to be computed. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 R (local output) DOUBLE PRECISION array, dimension LOCr(M_A) If INFO = 0 or INFO > IA+M-1, R(IA:IA+M-1) contains the row scale factors for sub( A ). R is aligned with the distributed matrix A, and replicated across every process column. R is tied to the distributed matrix A. .TP 8 C (local output) DOUBLE PRECISION array, dimension LOCc(N_A) If INFO = 0, C(JA:JA+N-1) contains the column scale factors for sub( A ). C is aligned with the distributed matrix A, and replicated down every process row. C is tied to the distri- buted matrix A. .TP 8 ROWCND (global output) DOUBLE PRECISION If INFO = 0 or INFO > IA+M-1, ROWCND contains the ratio of the smallest R(i) to the largest R(i) (IA <= i <= IA+M-1). 
If ROWCND >= 0.1 and AMAX is neither too large nor too small, it is not worth scaling by R(IA:IA+M-1). .TP 8 COLCND (global output) DOUBLE PRECISION If INFO = 0, COLCND contains the ratio of the smallest C(j) to the largest C(j) (JA <= j <= JA+N-1). If COLCND >= 0.1, it is not worth scaling by C(JA:JA+N-1). .TP 8 AMAX (global output) DOUBLE PRECISION Absolute value of largest distributed matrix element. If AMAX is very close to overflow or very close to underflow, the matrix should be scaled. .TP 8 INFO (global output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. > 0: If INFO = i, and i is .br <= M: the i-th row of the distributed matrix sub( A ) is exactly zero, > M: the (i-M)-th column of the distributed matrix sub( A ) is exactly zero. .TH PZGEHD2 l "12 May 1997" "LAPACK version 1.5" "LAPACK auxiliary routine (version 1.5)" .SH NAME PZGEHD2 - reduce a complex general distributed matrix sub( A ) to upper Hessenberg form H by a unitary similarity transformation .SH SYNOPSIS .TP 20 SUBROUTINE PZGEHD2( N, ILO, IHI, A, IA, JA, DESCA, TAU, WORK, LWORK, INFO ) .TP 20 .ti +4 INTEGER IA, IHI, ILO, INFO, JA, LWORK, N .TP 20 .ti +4 INTEGER DESCA( * ) .TP 20 .ti +4 COMPLEX*16 A( * ), TAU( * ), WORK( * ) .SH PURPOSE PZGEHD2 reduces a complex general distributed matrix sub( A ) to upper Hessenberg form H by a unitary similarity transformation: Q' * sub( A ) * Q = H, where .br sub( A ) = A(IA:IA+N-1,JA:JA+N-1). .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location.
.br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). 
An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 N (global input) INTEGER The number of rows and columns to be operated on, i.e. the order of the distributed submatrix sub( A ). N >= 0. .TP 8 ILO (global input) INTEGER IHI (global input) INTEGER It is assumed that sub( A ) is already upper triangular in rows IA:IA+ILO-2 and IA+IHI:IA+N-1 and columns JA:JA+ILO-2 and JA+IHI:JA+N-1. See Further Details. If N > 0, .TP 8 A (local input/local output) COMPLEX*16 pointer into the local memory to an array of dimension (LLD_A,LOCc(JA+N-1)). On entry, this array contains the local pieces of the N-by-N general distributed matrix sub( A ) to be reduced. On exit, the upper triangle and the first subdiagonal of sub( A ) are overwritten with the upper Hessenberg matrix H, and the elements below the first subdiagonal, with the array TAU, represent the unitary matrix Q as a product of elementary reflectors. See Further Details. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 TAU (local output) COMPLEX*16 array, dimension LOCc(JA+N-2) The scalar factors of the elementary reflectors (see Further Details). Elements JA:JA+ILO-2 and JA+IHI:JA+N-2 of TAU are set to zero. TAU is tied to the distributed matrix A. .TP 8 WORK (local workspace/local output) COMPLEX*16 array, dimension (LWORK) On exit, WORK( 1 ) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK.
LWORK is local input and must be at least LWORK >= NB + MAX( NpA0, NB ) where NB = MB_A = NB_A, IROFFA = MOD( IA-1, NB ), IAROW = INDXG2P( IA, NB, MYROW, RSRC_A, NPROW ), NpA0 = NUMROC( IHI+IROFFA, NB, MYROW, IAROW, NPROW ), INDXG2P and NUMROC are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 INFO (local output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. .SH FURTHER DETAILS The matrix Q is represented as a product of (ihi-ilo) elementary reflectors .br Q = H(ilo) H(ilo+1) . . . H(ihi-1). .br Each H(i) has the form .br H(i) = I - tau * v * v' .br where tau is a complex scalar, and v is a complex vector with v(1:i) = 0, v(i+1) = 1 and v(ihi+1:n) = 0; v(i+2:ihi) is stored on exit in A(ia+ilo+i:ia+ihi-1,ja+ilo+i-2), and tau in TAU(ja+ilo+i-2). The contents of A(IA:IA+N-1,JA:JA+N-1) are illustrated by the follo- wing example, with n = 7, ilo = 2 and ihi = 6: .br on entry on exit .br ( a a a a a a a ) ( a a h h h h a ) ( a a a a a a ) ( a h h h h a ) ( a a a a a a ) ( h h h h h h ) ( a a a a a a ) ( v2 h h h h h ) ( a a a a a a ) ( v2 v3 h h h h ) ( a a a a a a ) ( v2 v3 v4 h h h ) ( a ) ( a ) where a denotes an element of the original matrix sub( A ), h denotes a modified element of the upper Hessenberg matrix H, and vi denotes an element of the vector defining H(ja+ilo+i-2). 
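The elementary reflectors H(i) = I - tau * v * v' described above can be illustrated in real arithmetic. The sketch below builds one Householder reflector with v(1) = 1, matching the storage convention described (only v(2:) needs storing), and applies it to a vector; it is a serial illustration, not the ScaLAPACK code:

```python
import math

def householder(x):
    """Build tau and v (with v[0] = 1) so that
    (I - tau * v * v^T) x = (beta, 0, ..., 0). Real-arithmetic sketch."""
    alpha = x[0]
    norm = math.sqrt(sum(t * t for t in x))
    beta = -math.copysign(norm, alpha)     # sign chosen to avoid cancellation
    tau = (beta - alpha) / beta
    v = [1.0] + [t / (alpha - beta) for t in x[1:]]
    return tau, v, beta

def apply_reflector(tau, v, x):
    """Compute (I - tau * v * v^T) x without forming the matrix."""
    s = tau * sum(vi * xi for vi, xi in zip(v, x))
    return [xi - s * vi for vi, xi in zip(v, x)]

x = [3.0, 4.0]
tau, v, beta = householder(x)
hx = apply_reflector(tau, v, x)   # first entry ~ beta with |beta| = ||x||, rest ~ 0
```

Applying such reflectors from the left and right, one column/row at a time, is exactly how the reduction accumulates Q while overwriting sub( A ) with H and the v vectors.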
.br Alignment requirements .br ====================== .br The distributed submatrix sub( A ) must verify some alignment properties, namely the following expression should be true: .br ( MB_A.EQ.NB_A .AND. IROFFA.EQ.ICOFFA ) .br .TH PZGEHRD l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PZGEHRD - reduce a complex general distributed matrix sub( A ) to upper Hessenberg form H by a unitary similarity transformation .SH SYNOPSIS .TP 20 SUBROUTINE PZGEHRD( N, ILO, IHI, A, IA, JA, DESCA, TAU, WORK, LWORK, INFO ) .TP 20 .ti +4 INTEGER IA, IHI, ILO, INFO, JA, LWORK, N .TP 20 .ti +4 INTEGER DESCA( * ) .TP 20 .ti +4 COMPLEX*16 A( * ), TAU( * ), WORK( * ) .SH PURPOSE PZGEHRD reduces a complex general distributed matrix sub( A ) to upper Hessenberg form H by a unitary similarity transformation: Q' * sub( A ) * Q = H, where .br sub( A ) = A(IA:IA+N-1,JA:JA+N-1). .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ ) The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distributed over. The context itself is global, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A.
.br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 N (global input) INTEGER The number of rows and columns to be operated on, i.e. the order of the distributed submatrix sub( A ). N >= 0. .TP 8 ILO (global input) INTEGER IHI (global input) INTEGER It is assumed that sub( A ) is already upper triangular in rows IA:IA+ILO-2 and IA+IHI:IA+N-1 and columns JA:JA+ILO-2 and JA+IHI:JA+N-1. See Further Details. If N > 0, .TP 8 A (local input/local output) COMPLEX*16 pointer into the local memory to an array of dimension (LLD_A,LOCc(JA+N-1)). On entry, this array contains the local pieces of the N-by-N general distributed matrix sub( A ) to be reduced. 
On exit, the upper triangle and the first subdiagonal of sub( A ) are overwritten with the upper Hessenberg matrix H, and the ele- ments below the first subdiagonal, with the array TAU, repre- sent the unitary matrix Q as a product of elementary reflectors. See Further Details. IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 TAU (local output) COMPLEX*16 array, dimension LOCc(JA+N-2) The scalar factors of the elementary reflectors (see Further Details). Elements JA:JA+ILO-2 and JA+IHI:JA+N-2 of TAU are set to zero. TAU is tied to the distributed matrix A. .TP 8 WORK (local workspace/local output) COMPLEX*16 array, dimension (LWORK) On exit, WORK( 1 ) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. LWORK is local input and must be at least LWORK >= NB*NB + NB*MAX( IHIP+1, IHLP+INLQ ) where NB = MB_A = NB_A, IROFFA = MOD( IA-1, NB ), ICOFFA = MOD( JA-1, NB ), IOFF = MOD( IA+ILO-2, NB ), IAROW = INDXG2P( IA, NB, MYROW, RSRC_A, NPROW ), IHIP = NUMROC( IHI+IROFFA, NB, MYROW, IAROW, NPROW ), ILROW = INDXG2P( IA+ILO-1, NB, MYROW, RSRC_A, NPROW ), IHLP = NUMROC( IHI-ILO+IOFF+1, NB, MYROW, ILROW, NPROW ), ILCOL = INDXG2P( JA+ILO-1, NB, MYCOL, CSRC_A, NPCOL ), INLQ = NUMROC( N-ILO+IOFF+1, NB, MYCOL, ILCOL, NPCOL ), INDXG2P and NUMROC are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. 
Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 INFO (global output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. .SH FURTHER DETAILS The matrix Q is represented as a product of (ihi-ilo) elementary reflectors .br Q = H(ilo) H(ilo+1) . . . H(ihi-1). .br Each H(i) has the form .br H(i) = I - tau * v * v' .br where tau is a complex scalar, and v is a complex vector with v(1:I) = 0, v(I+1) = 1 and v(IHI+1:N) = 0; v(I+2:IHI) is stored on exit in A(IA+ILO+I:IA+IHI-1,JA+ILO+I-2), and tau in TAU(JA+ILO+I-2). The contents of A(IA:IA+N-1,JA:JA+N-1) are illustrated by the follow- ing example, with N = 7, ILO = 2 and IHI = 6: .br on entry on exit .br ( a a a a a a a ) ( a a h h h h a ) ( a a a a a a ) ( a h h h h a ) ( a a a a a a ) ( h h h h h h ) ( a a a a a a ) ( v2 h h h h h ) ( a a a a a a ) ( v2 v3 h h h h ) ( a a a a a a ) ( v2 v3 v4 h h h ) ( a ) ( a ) where a denotes an element of the original matrix sub( A ), H denotes a modified element of the upper Hessenberg matrix H, and vi denotes an element of the vector defining H(JA+ILO+I-2). .br Alignment requirements .br ====================== .br The distributed submatrix sub( A ) must verify some alignment proper- ties, namely the following expression should be true: .br ( MB_A.EQ.NB_A .AND. 
IROFFA.EQ.ICOFFA ) .br scalapack-doc-1.5/man/manl/pzgelq2.l .TH PZGELQ2 l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PZGELQ2 - compute an LQ factorization of a complex distributed M-by-N matrix sub( A ) = A(IA:IA+M-1,JA:JA+N-1) = L * Q .SH SYNOPSIS .TP 20 SUBROUTINE PZGELQ2( M, N, A, IA, JA, DESCA, TAU, WORK, LWORK, INFO ) .TP 20 .ti +4 INTEGER IA, INFO, JA, LWORK, M, N .TP 20 .ti +4 INTEGER DESCA( * ) .TP 20 .ti +4 COMPLEX*16 A( * ), TAU( * ), WORK( * ) .SH PURPOSE PZGELQ2 computes an LQ factorization of a complex distributed M-by-N matrix sub( A ) = A(IA:IA+M-1,JA:JA+N-1) = L * Q. Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclically distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ ) The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distributed over. The context itself is global, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed.
CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 M (global input) INTEGER The number of rows to be operated on, i.e. the number of rows of the distributed submatrix sub( A ). M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on, i.e. the number of columns of the distributed submatrix sub( A ). N >= 0. .TP 8 A (local input/local output) COMPLEX*16 pointer into the local memory to an array of dimension (LLD_A, LOCc(JA+N-1)). On entry, the local pieces of the M-by-N distributed matrix sub( A ) which is to be factored. On exit, the elements on and below the diagonal of sub( A ) contain the M by min(M,N) lower trapezoidal matrix L (L is lower triangular if M <= N); the elements above the diagonal, with the array TAU, represent the unitary matrix Q as a product of elementary reflectors (see Further Details). IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ).
.TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 TAU (local output) COMPLEX*16, array, dimension LOCr(IA+MIN(M,N)-1). This array contains the scalar factors of the elementary reflectors. TAU is tied to the distributed matrix A. .TP 8 WORK (local workspace/local output) COMPLEX*16 array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. LWORK is local input and must be at least LWORK >= Nq0 + MAX( 1, Mp0 ), where IROFF = MOD( IA-1, MB_A ), ICOFF = MOD( JA-1, NB_A ), IAROW = INDXG2P( IA, MB_A, MYROW, RSRC_A, NPROW ), IACOL = INDXG2P( JA, NB_A, MYCOL, CSRC_A, NPCOL ), Mp0 = NUMROC( M+IROFF, MB_A, MYROW, IAROW, NPROW ), Nq0 = NUMROC( N+ICOFF, NB_A, MYCOL, IACOL, NPCOL ), and NUMROC, INDXG2P are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 INFO (local output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. .SH FURTHER DETAILS The matrix Q is represented as a product of elementary reflectors Q = H(ia+k-1)' H(ia+k-2)' . . . H(ia)', where k = min(m,n). Each H(i) has the form .br H(i) = I - tau * v * v' .br where tau is a complex scalar, and v is a complex vector with v(1:i-1) = 0 and v(i) = 1; conjg(v(i+1:n)) is stored on exit in A(ia+i-1,ja+i:ja+n-1), and tau in TAU(ia+i-1). 
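The reflector form H(i) = I - tau * v * v' with v(i) = 1 described above can be illustrated serially. The following plain-Python sketch (real arithmetic only, not part of ScaLAPACK; the helper names are invented here) builds one elementary reflector and applies it without ever forming H explicitly:

```python
def make_reflector(x):
    """Build tau and v (with v[0] == 1) so that (I - tau*v*v') @ x = [beta, 0, ..., 0].

    Serial real-arithmetic sketch of the reflectors described above;
    the ScaLAPACK routines store v and tau in A and TAU instead.
    """
    norm = sum(xi * xi for xi in x) ** 0.5
    beta = -norm if x[0] >= 0 else norm      # sign chosen to avoid cancellation
    v = list(x)
    v[0] -= beta                             # v = x - beta*e1
    scale = v[0]
    v = [vi / scale for vi in v]             # normalize so that v[0] == 1
    tau = 2.0 / sum(vi * vi for vi in v)     # makes H*v = -v, H unitary
    return tau, v, beta

def apply_reflector(tau, v, x):
    """Compute (I - tau*v*v') @ x without forming the matrix."""
    coeff = tau * sum(vi * xi for vi, xi in zip(v, x))
    return [xi - coeff * vi for vi, xi in zip(v, x)]

tau, v, beta = make_reflector([3.0, 4.0])
print(apply_reflector(tau, v, [3.0, 4.0]))   # -> [-5.0, 0.0]
```

The reflector zeroes everything below the leading entry, which is exactly how the factorization routines annihilate one column (or row) at a time.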
.br scalapack-doc-1.5/man/manl/pzgelqf.l .TH PZGELQF l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PZGELQF - compute an LQ factorization of a complex distributed M-by-N matrix sub( A ) = A(IA:IA+M-1,JA:JA+N-1) = L * Q .SH SYNOPSIS .TP 20 SUBROUTINE PZGELQF( M, N, A, IA, JA, DESCA, TAU, WORK, LWORK, INFO ) .TP 20 .ti +4 INTEGER IA, INFO, JA, LWORK, M, N .TP 20 .ti +4 INTEGER DESCA( * ) .TP 20 .ti +4 COMPLEX*16 A( * ), TAU( * ), WORK( * ) .SH PURPOSE PZGELQF computes an LQ factorization of a complex distributed M-by-N matrix sub( A ) = A(IA:IA+M-1,JA:JA+N-1) = L * Q. Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclically distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ ) The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distributed over. The context itself is global, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed.
CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 M (global input) INTEGER The number of rows to be operated on, i.e. the number of rows of the distributed submatrix sub( A ). M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on, i.e. the number of columns of the distributed submatrix sub( A ). N >= 0. .TP 8 A (local input/local output) COMPLEX*16 pointer into the local memory to an array of dimension (LLD_A, LOCc(JA+N-1)). On entry, the local pieces of the M-by-N distributed matrix sub( A ) which is to be factored. On exit, the elements on and below the diagonal of sub( A ) contain the M by min(M,N) lower trapezoidal matrix L (L is lower triangular if M <= N); the elements above the diagonal, with the array TAU, represent the unitary matrix Q as a product of elementary reflectors (see Further Details). IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ).
.TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 TAU (local output) COMPLEX*16, array, dimension LOCr(IA+MIN(M,N)-1). This array contains the scalar factors of the elementary reflectors. TAU is tied to the distributed matrix A. .TP 8 WORK (local workspace/local output) COMPLEX*16 array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. LWORK is local input and must be at least LWORK >= MB_A * ( Mp0 + Nq0 + MB_A ), where IROFF = MOD( IA-1, MB_A ), ICOFF = MOD( JA-1, NB_A ), IAROW = INDXG2P( IA, MB_A, MYROW, RSRC_A, NPROW ), IACOL = INDXG2P( JA, NB_A, MYCOL, CSRC_A, NPCOL ), Mp0 = NUMROC( M+IROFF, MB_A, MYROW, IAROW, NPROW ), Nq0 = NUMROC( N+ICOFF, NB_A, MYCOL, IACOL, NPCOL ), and NUMROC, INDXG2P are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 INFO (global output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. .SH FURTHER DETAILS The matrix Q is represented as a product of elementary reflectors Q = H(ia+k-1)' H(ia+k-2)' . . . H(ia)', where k = min(m,n). 
Each H(i) has the form .br H(i) = I - tau * v * v' .br where tau is a complex scalar, and v is a complex vector with v(1:i-1) = 0 and v(i) = 1; conjg(v(i+1:n)) is stored on exit in A(ia+i-1,ja+i:ja+n-1), and tau in TAU(ia+i-1). .br scalapack-doc-1.5/man/manl/pzgels.l .TH PZGELS l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PZGELS - solve overdetermined or underdetermined complex linear systems involving an M-by-N matrix sub( A ) = A(IA:IA+M-1,JA:JA+N-1), .SH SYNOPSIS .TP 19 SUBROUTINE PZGELS( TRANS, M, N, NRHS, A, IA, JA, DESCA, B, IB, JB, DESCB, WORK, LWORK, INFO ) .TP 19 .ti +4 CHARACTER TRANS .TP 19 .ti +4 INTEGER IA, IB, INFO, JA, JB, LWORK, M, N, NRHS .TP 19 .ti +4 INTEGER DESCA( * ), DESCB( * ) .TP 19 .ti +4 COMPLEX*16 A( * ), B( * ), WORK( * ) .SH PURPOSE PZGELS solves overdetermined or underdetermined complex linear systems involving an M-by-N matrix sub( A ) = A(IA:IA+M-1,JA:JA+N-1), or its conjugate-transpose, using a QR or LQ factorization of sub( A ). It is assumed that sub( A ) has full rank. .br The following options are provided: .br 1. If TRANS = 'N' and m >= n: find the least squares solution of an overdetermined system, i.e., solve the least squares problem minimize || sub( B ) - sub( A )*X ||. .br 2. If TRANS = 'N' and m < n: find the minimum norm solution of an underdetermined system sub( A ) * X = sub( B ). .br 3. If TRANS = 'C' and m >= n: find the minimum norm solution of an underdetermined system sub( A )**H * X = sub( B ). .br 4. If TRANS = 'C' and m < n: find the least squares solution of an overdetermined system, i.e., solve the least squares problem minimize || sub( B ) - sub( A )**H * X ||. where sub( B ) denotes B( IB:IB+M-1, JB:JB+NRHS-1 ) when TRANS = 'N' and B( IB:IB+N-1, JB:JB+NRHS-1 ) otherwise.
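As a serial illustration of option 1 above (TRANS = 'N', m >= n), the sketch below solves a tiny least squares problem through a QR factorization, which is the same approach PZGELS takes on the distributed sub( A ). This is plain Python with made-up data, not a ScaLAPACK call:

```python
def qr_least_squares(A, b):
    """Solve min ||b - A x|| for a full-rank m-by-n A with m >= n via QR,
    mirroring serially what PZGELS does for TRANS = 'N' on distributed data."""
    m, n = len(A), len(A[0])
    # Classical Gram-Schmidt: A = Q R, Q m-by-n with orthonormal columns,
    # R n-by-n upper triangular.
    Q = [[0.0] * n for _ in range(m)]
    R = [[0.0] * n for _ in range(n)]
    for j in range(n):
        v = [A[i][j] for i in range(m)]
        for k in range(j):
            R[k][j] = sum(Q[i][k] * A[i][j] for i in range(m))
            v = [v[i] - R[k][j] * Q[i][k] for i in range(m)]
        R[j][j] = sum(vi * vi for vi in v) ** 0.5
        for i in range(m):
            Q[i][j] = v[i] / R[j][j]
    # Solve R x = Q' b by back substitution.
    qtb = [sum(Q[i][j] * b[i] for i in range(m)) for j in range(n)]
    x = [0.0] * n
    for j in range(n - 1, -1, -1):
        x[j] = (qtb[j] - sum(R[j][k] * x[k] for k in range(j + 1, n))) / R[j][j]
    return x

# Fit b ~ x0 + x1*t at t = 0, 1, 2 (overdetermined: m = 3 > n = 2).
x = qr_least_squares([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]], [1.0, 2.0, 4.0])
print(x)  # approximately [5/6, 3/2]
```

The residual sum of squares left in rows n+1..m of sub( B ), as described under the B argument below, is the part of b orthogonal to the column space of A.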
Several right hand side vectors b and solution vectors x can be handled in a single call; when TRANS = 'N' the solution vectors are stored as the columns of the N-by-NRHS matrix sub( B ), and as the columns of the M-by-NRHS matrix sub( B ) otherwise. .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclically distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ ) The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distributed over. The context itself is global, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q.
.br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 TRANS (global input) CHARACTER = 'N': the linear system involves sub( A ); .br = 'C': the linear system involves sub( A )**H. .TP 8 M (global input) INTEGER The number of rows to be operated on, i.e. the number of rows of the distributed submatrix sub( A ). M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on, i.e. the number of columns of the distributed submatrix sub( A ). N >= 0. .TP 8 NRHS (global input) INTEGER The number of right hand sides, i.e. the number of columns of the distributed submatrices sub( B ) and X. NRHS >= 0. .TP 8 A (local input/local output) COMPLEX*16 pointer into the local memory to an array of local dimension ( LLD_A, LOCc(JA+N-1) ). On entry, the M-by-N matrix A. if M >= N, sub( A ) is overwritten by details of its QR factorization as returned by PZGEQRF; if M < N, sub( A ) is overwritten by details of its LQ factorization as returned by PZGELQF. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. 
.TP 8 B (local input/local output) COMPLEX*16 pointer into the local memory to an array of local dimension (LLD_B, LOCc(JB+NRHS-1)). On entry, this array contains the local pieces of the distributed matrix B of right hand side vectors, stored columnwise; sub( B ) is M-by-NRHS if TRANS='N', and N-by-NRHS otherwise. On exit, sub( B ) is overwritten by the solution vectors, stored columnwise: if TRANS = 'N' and M >= N, rows 1 to N of sub( B ) contain the least squares solution vectors; the residual sum of squares for the solution in each column is given by the sum of squares of elements N+1 to M in that column; if TRANS = 'N' and M < N, rows 1 to N of sub( B ) contain the minimum norm solution vectors; if TRANS = 'C' and M >= N, rows 1 to M of sub( B ) contain the minimum norm solution vectors; if TRANS = 'C' and M < N, rows 1 to M of sub( B ) contain the least squares solution vectors; the residual sum of squares for the solution in each column is given by the sum of squares of elements M+1 to N in that column. .TP 8 IB (global input) INTEGER The row index in the global array B indicating the first row of sub( B ). .TP 8 JB (global input) INTEGER The column index in the global array B indicating the first column of sub( B ). .TP 8 DESCB (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix B. .TP 8 WORK (local workspace/local output) COMPLEX*16 array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. 
LWORK is local input and must be at least LWORK >= LTAU + MAX( LWF, LWS ) where If M >= N, then LTAU = NUMROC( JA+MIN(M,N)-1, NB_A, MYCOL, CSRC_A, NPCOL ), LWF = NB_A * ( MpA0 + NqA0 + NB_A ) LWS = MAX( (NB_A*(NB_A-1))/2, (NRHSqB0 + MpB0)*NB_A ) + NB_A * NB_A Else LTAU = NUMROC( IA+MIN(M,N)-1, MB_A, MYROW, RSRC_A, NPROW ), LWF = MB_A * ( MpA0 + NqA0 + MB_A ) LWS = MAX( (MB_A*(MB_A-1))/2, ( NpB0 + MAX( NqA0 + NUMROC( NUMROC( N+IROFFB, MB_A, 0, 0, NPROW ), MB_A, 0, 0, LCMP ), NRHSqB0 ) )*MB_A ) + MB_A * MB_A End if where LCMP = LCM / NPROW with LCM = ILCM( NPROW, NPCOL ), IROFFA = MOD( IA-1, MB_A ), ICOFFA = MOD( JA-1, NB_A ), IAROW = INDXG2P( IA, MB_A, MYROW, RSRC_A, NPROW ), IACOL = INDXG2P( JA, NB_A, MYCOL, CSRC_A, NPCOL ), MpA0 = NUMROC( M+IROFFA, MB_A, MYROW, IAROW, NPROW ), NqA0 = NUMROC( N+ICOFFA, NB_A, MYCOL, IACOL, NPCOL ), IROFFB = MOD( IB-1, MB_B ), ICOFFB = MOD( JB-1, NB_B ), IBROW = INDXG2P( IB, MB_B, MYROW, RSRC_B, NPROW ), IBCOL = INDXG2P( JB, NB_B, MYCOL, CSRC_B, NPCOL ), MpB0 = NUMROC( M+IROFFB, MB_B, MYROW, IBROW, NPROW ), NpB0 = NUMROC( N+IROFFB, MB_B, MYROW, IBROW, NPROW ), NRHSqB0 = NUMROC( NRHS+ICOFFB, NB_B, MYCOL, IBCOL, NPCOL ), ILCM, INDXG2P and NUMROC are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 INFO (global output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. 
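The INFO convention used throughout these routines (INFO = -(i*100+j) when entry j of array argument i is illegal, INFO = -i when scalar argument i is illegal) can be decoded mechanically. The helper below is a hypothetical convenience for interpreting such PXERBLA-style codes, not part of the library:

```python
def decode_info(info):
    """Decode a negative INFO value per the ScaLAPACK convention:
    INFO = -(i*100 + j) -> entry j of array argument i was illegal;
    INFO = -i           -> scalar argument i was illegal."""
    if info >= 0:
        return None                   # 0 is success; positive codes are routine-specific
    code = -info
    if code >= 100:
        return {"argument": code // 100, "entry": code % 100}
    return {"argument": code}

print(decode_info(-803))   # entry 3 of the 8th argument (e.g. a DESC array) was illegal
print(decode_info(-6))     # the 6th (scalar) argument was illegal
```

Decoding is useful because a global INFO is the only diagnostic returned when PXERBLA's message is suppressed.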
scalapack-doc-1.5/man/manl/pzgeql2.l .TH PZGEQL2 l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PZGEQL2 - compute a QL factorization of a complex distributed M-by-N matrix sub( A ) = A(IA:IA+M-1,JA:JA+N-1) = Q * L .SH SYNOPSIS .TP 20 SUBROUTINE PZGEQL2( M, N, A, IA, JA, DESCA, TAU, WORK, LWORK, INFO ) .TP 20 .ti +4 INTEGER IA, INFO, JA, LWORK, M, N .TP 20 .ti +4 INTEGER DESCA( * ) .TP 20 .ti +4 COMPLEX*16 A( * ), TAU( * ), WORK( * ) .SH PURPOSE PZGEQL2 computes a QL factorization of a complex distributed M-by-N matrix sub( A ) = A(IA:IA+M-1,JA:JA+N-1) = Q * L. Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclically distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ ) The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distributed over. The context itself is global, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed.
CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 M (global input) INTEGER The number of rows to be operated on, i.e. the number of rows of the distributed submatrix sub( A ). M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on, i.e. the number of columns of the distributed submatrix sub( A ). N >= 0. .TP 8 A (local input/local output) COMPLEX*16 pointer into the local memory to an array of dimension (LLD_A, LOCc(JA+N-1)). On entry, the local pieces of the M-by-N distributed matrix sub( A ) which is to be factored. On exit, if M >= N, the lower triangle of the distributed submatrix A( IA+M-N:IA+M-1, JA:JA+N-1 ) contains the N-by-N lower triangular matrix L; if M <= N, the elements on and below the (N-M)-th superdiagonal contain the M by N lower trapezoidal matrix L; the remaining elements, with the array TAU, represent the unitary matrix Q as a product of elementary reflectors (see Further Details). 
IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 TAU (local output) COMPLEX*16, array, dimension LOCc(JA+N-1) This array contains the scalar factors of the elementary reflectors. TAU is tied to the distributed matrix A. .TP 8 WORK (local workspace/local output) COMPLEX*16 array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. LWORK is local input and must be at least LWORK >= Mp0 + MAX( 1, Nq0 ), where IROFF = MOD( IA-1, MB_A ), ICOFF = MOD( JA-1, NB_A ), IAROW = INDXG2P( IA, MB_A, MYROW, RSRC_A, NPROW ), IACOL = INDXG2P( JA, NB_A, MYCOL, CSRC_A, NPCOL ), Mp0 = NUMROC( M+IROFF, MB_A, MYROW, IAROW, NPROW ), Nq0 = NUMROC( N+ICOFF, NB_A, MYCOL, IACOL, NPCOL ), and NUMROC, INDXG2P are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 INFO (local output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. .SH FURTHER DETAILS The matrix Q is represented as a product of elementary reflectors Q = H(ja+k-1) . . . H(ja+1) H(ja), where k = min(m,n). 
Each H(i) has the form .br H(i) = I - tau * v * v' .br where tau is a complex scalar, and v is a complex vector with v(m-k+i+1:m) = 0 and v(m-k+i) = 1; v(1:m-k+i-1) is stored on exit in A(ia:ia+m-k+i-2,ja+n-k+i-1), and tau in TAU(ja+n-k+i-1). .br scalapack-doc-1.5/man/manl/pzgeqlf.l .TH PZGEQLF l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PZGEQLF - compute a QL factorization of a complex distributed M-by-N matrix sub( A ) = A(IA:IA+M-1,JA:JA+N-1) = Q * L .SH SYNOPSIS .TP 20 SUBROUTINE PZGEQLF( M, N, A, IA, JA, DESCA, TAU, WORK, LWORK, INFO ) .TP 20 .ti +4 INTEGER IA, INFO, JA, LWORK, M, N .TP 20 .ti +4 INTEGER DESCA( * ) .TP 20 .ti +4 COMPLEX*16 A( * ), TAU( * ), WORK( * ) .SH PURPOSE PZGEQLF computes a QL factorization of a complex distributed M-by-N matrix sub( A ) = A(IA:IA+M-1,JA:JA+N-1) = Q * L. Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclically distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ ) The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distributed over. The context itself is global, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array.
.br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 M (global input) INTEGER The number of rows to be operated on, i.e. the number of rows of the distributed submatrix sub( A ). M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on, i.e. the number of columns of the distributed submatrix sub( A ). N >= 0. .TP 8 A (local input/local output) COMPLEX*16 pointer into the local memory to an array of dimension (LLD_A, LOCc(JA+N-1)). On entry, the local pieces of the M-by-N distributed matrix sub( A ) which is to be factored. 
On exit, if M >= N, the lower triangle of the distributed submatrix A( IA+M-N:IA+M-1, JA:JA+N-1 ) contains the N-by-N lower triangular matrix L; if M <= N, the elements on and below the (N-M)-th superdiagonal contain the M by N lower trapezoidal matrix L; the remaining elements, with the array TAU, represent the unitary matrix Q as a product of elementary reflectors (see Further Details). IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 TAU (local output) COMPLEX*16, array, dimension LOCc(JA+N-1) This array contains the scalar factors of the elementary reflectors. TAU is tied to the distributed matrix A. .TP 8 WORK (local workspace/local output) COMPLEX*16 array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. LWORK is local input and must be at least LWORK >= NB_A * ( Mp0 + Nq0 + NB_A ), where IROFF = MOD( IA-1, MB_A ), ICOFF = MOD( JA-1, NB_A ), IAROW = INDXG2P( IA, MB_A, MYROW, RSRC_A, NPROW ), IACOL = INDXG2P( JA, NB_A, MYCOL, CSRC_A, NPCOL ), Mp0 = NUMROC( M+IROFF, MB_A, MYROW, IAROW, NPROW ), Nq0 = NUMROC( N+ICOFF, NB_A, MYCOL, IACOL, NPCOL ), and NUMROC, INDXG2P are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. 
.TP 8 INFO (global output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. .SH FURTHER DETAILS The matrix Q is represented as a product of elementary reflectors Q = H(ja+k-1) . . . H(ja+1) H(ja), where k = min(m,n). Each H(i) has the form .br H(i) = I - tau * v * v' .br where tau is a complex scalar, and v is a complex vector with v(m-k+i+1:m) = 0 and v(m-k+i) = 1; v(1:m-k+i-1) is stored on exit in A(ia:ia+m-k+i-2,ja+n-k+i-1), and tau in TAU(ja+n-k+i-1). .br scalapack-doc-1.5/man/manl/pzgeqpf.l .TH PZGEQPF l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PZGEQPF - compute a QR factorization with column pivoting of an M-by-N distributed matrix sub( A ) = A(IA:IA+M-1,JA:JA+N-1) .SH SYNOPSIS .TP 20 SUBROUTINE PZGEQPF( M, N, A, IA, JA, DESCA, IPIV, TAU, WORK, LWORK, RWORK, LRWORK, INFO ) .TP 20 .ti +4 INTEGER IA, JA, INFO, LRWORK, LWORK, M, N .TP 20 .ti +4 INTEGER DESCA( * ), IPIV( * ) .TP 20 .ti +4 DOUBLE PRECISION RWORK( * ) .TP 20 .ti +4 COMPLEX*16 A( * ), TAU( * ), WORK( * ) .SH PURPOSE PZGEQPF computes a QR factorization with column pivoting of an M-by-N distributed matrix sub( A ) = A(IA:IA+M-1,JA:JA+N-1): sub( A ) * P = Q * R. .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclically distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array".
.br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 M (global input) INTEGER The number of rows to be operated on, i.e. the number of rows of the distributed submatrix sub( A ). M >= 0. 
.TP 8 N (global input) INTEGER The number of columns to be operated on, i.e. the number of columns of the distributed submatrix sub( A ). N >= 0. .TP 8 A (local input/local output) COMPLEX*16 pointer into the local memory to an array of dimension (LLD_A, LOCc(JA+N-1)). On entry, the local pieces of the M-by-N distributed matrix sub( A ) which is to be factored. On exit, the elements on and above the diagonal of sub( A ) contain the min(M,N) by N upper trapezoidal matrix R (R is upper triangular if M >= N); the elements below the diagonal, with the array TAU, repre- sent the unitary matrix Q as a product of elementary reflectors (see Further Details). IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 IPIV (local output) INTEGER array, dimension LOCc(JA+N-1). On exit, if IPIV(I) = K, the local i-th column of sub( A )*P was the global K-th column of sub( A ). IPIV is tied to the distributed matrix A. .TP 8 TAU (local output) COMPLEX*16, array, dimension LOCc(JA+MIN(M,N)-1). This array contains the scalar factors TAU of the elementary reflectors. TAU is tied to the distributed matrix A. .TP 8 WORK (local workspace/local output) COMPLEX*16 array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. LWORK is local input and must be at least LWORK >= MAX(3,Mp0 + Nq0). If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. 
.TP 8 RWORK (local workspace/local output) DOUBLE PRECISION array, dimension (LRWORK) On exit, RWORK(1) returns the minimal and optimal LRWORK. .TP 8 LRWORK (local or global input) INTEGER The dimension of the array RWORK. LRWORK is local input and must be at least LRWORK >= LOCc(JA+N-1)+Nq0. IROFF = MOD( IA-1, MB_A ), ICOFF = MOD( JA-1, NB_A ), IAROW = INDXG2P( IA, MB_A, MYROW, RSRC_A, NPROW ), IACOL = INDXG2P( JA, NB_A, MYCOL, CSRC_A, NPCOL ), Mp0 = NUMROC( M+IROFF, MB_A, MYROW, IAROW, NPROW ), Nq0 = NUMROC( N+ICOFF, NB_A, MYCOL, IACOL, NPCOL ), LOCc(JA+N-1) = NUMROC( JA+N-1, NB_A, MYCOL, CSRC_A, NPCOL ) and NUMROC, INDXG2P are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. If LRWORK = -1, then LRWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 INFO (global output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. .SH FURTHER DETAILS The matrix Q is represented as a product of elementary reflectors Q = H(1) H(2) . . . H(n) .br Each H(i) has the form .br H(i) = I - tau * v * v' .br where tau is a complex scalar, and v is a complex vector with v(1:i-1) = 0 and v(i) = 1; v(i+1:m) is stored on exit in .br A(ia+i:ia+m-1,ja+i-1). .br The matrix P is represented in IPIV as follows: If .br IPIV(j) = i .br then the jth column of P is the ith canonical unit vector. 
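The LOCr/LOCc quantities and the LWORK/LRWORK lower bounds quoted above all reduce to calls to the two tool functions NUMROC and INDXG2P. The following Python sketch is NOT part of ScaLAPACK; it is an illustrative transliteration modeled on the Fortran tool routines, used here with hypothetical sizes (M=1000, N=800, 64x64 blocks, a 2x3 grid, IA=JA=1, RSRC_A=CSRC_A=0) to evaluate the PZGEQPF bound LWORK >= MAX(3, Mp0+Nq0) on every process:

```python
# Illustrative Python transliteration (NOT part of ScaLAPACK) of the two
# tool functions used in the LWORK/LRWORK formulas above.

def numroc(n, nb, iproc, isrcproc, nprocs):
    """Number of rows/columns of an n-long dimension, blocked by nb and
    dealt cyclically over nprocs processes, that land on process iproc."""
    mydist = (nprocs + iproc - isrcproc) % nprocs   # distance from the source process
    nblocks = n // nb                               # number of complete blocks
    num = (nblocks // nprocs) * nb                  # blocks owned in full deal cycles
    extrablks = nblocks % nprocs                    # leftover complete blocks
    if mydist < extrablks:
        num += nb                                   # one extra complete block
    elif mydist == extrablks:
        num += n % nb                               # the trailing partial block
    return num

def indxg2p(indxglob, nb, iproc, isrcproc, nprocs):
    """Process coordinate owning global index indxglob (1-based).
    iproc is unused; kept only for signature parity with the Fortran."""
    return (isrcproc + (indxglob - 1) // nb) % nprocs

# Hypothetical example: the PZGEQPF bound LWORK >= MAX(3, Mp0 + Nq0).
M, N, MB_A, NB_A, NPROW, NPCOL = 1000, 800, 64, 64, 2, 3
IROFF = (1 - 1) % MB_A                              # MOD(IA-1, MB_A) with IA = 1
ICOFF = (1 - 1) % NB_A                              # MOD(JA-1, NB_A) with JA = 1
IAROW = indxg2p(1, MB_A, 0, 0, NPROW)               # RSRC_A = 0
IACOL = indxg2p(1, NB_A, 0, 0, NPCOL)               # CSRC_A = 0
lwork_min = 0                                       # max of the bound over the grid
for myrow in range(NPROW):
    for mycol in range(NPCOL):
        Mp0 = numroc(M + IROFF, MB_A, myrow, IAROW, NPROW)
        Nq0 = numroc(N + ICOFF, NB_A, mycol, IACOL, NPCOL)
        lwork_min = max(lwork_min, max(3, Mp0 + Nq0))
```

Summed over a process row or column, numroc recovers the global extent, and each per-process value respects the upper bound LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A stated in the Notes.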
scalapack-doc-1.5/man/manl/pzgeqr2.l0100644000056400000620000001423106335610655017020 0ustar pfrauenfstaff.TH PZGEQR2 l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PZGEQR2 - compute a QR factorization of a complex distributed M-by-N matrix sub( A ) = A(IA:IA+M-1,JA:JA+N-1) = Q * R .SH SYNOPSIS .TP 20 SUBROUTINE PZGEQR2( M, N, A, IA, JA, DESCA, TAU, WORK, LWORK, INFO ) .TP 20 .ti +4 INTEGER IA, INFO, JA, LWORK, M, N .TP 20 .ti +4 INTEGER DESCA( * ) .TP 20 .ti +4 COMPLEX*16 A( * ), TAU( * ), WORK( * ) .SH PURPOSE PZGEQR2 computes a QR factorization of a complex distributed M-by-N matrix sub( A ) = A(IA:IA+M-1,JA:JA+N-1) = Q * R. Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. 
CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 M (global input) INTEGER The number of rows to be operated on, i.e. the number of rows of the distributed submatrix sub( A ). M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on, i.e. the number of columns of the distributed submatrix sub( A ). N >= 0. .TP 8 A (local input/local output) COMPLEX*16 pointer into the local memory to an array of dimension (LLD_A, LOCc(JA+N-1)). On entry, the local pieces of the M-by-N distributed matrix sub( A ) which is to be factored. On exit, the elements on and above the diagonal of sub( A ) contain the min(M,N) by N upper trapezoidal matrix R (R is upper triangular if M >= N); the elements below the diagonal, with the array TAU, represent the unitary matrix Q as a product of elementary reflectors (see Further Details). IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). 
.TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 TAU (local output) COMPLEX*16, array, dimension LOCc(JA+MIN(M,N)-1). This array contains the scalar factors TAU of the elementary reflectors. TAU is tied to the distributed matrix A. .TP 8 WORK (local workspace/local output) COMPLEX*16 array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. LWORK is local input and must be at least LWORK >= Mp0 + MAX( 1, Nq0 ), where IROFF = MOD( IA-1, MB_A ), ICOFF = MOD( JA-1, NB_A ), IAROW = INDXG2P( IA, MB_A, MYROW, RSRC_A, NPROW ), IACOL = INDXG2P( JA, NB_A, MYCOL, CSRC_A, NPCOL ), Mp0 = NUMROC( M+IROFF, MB_A, MYROW, IAROW, NPROW ), Nq0 = NUMROC( N+ICOFF, NB_A, MYCOL, IACOL, NPCOL ), and NUMROC, INDXG2P are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 INFO (local output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. .SH FURTHER DETAILS The matrix Q is represented as a product of elementary reflectors Q = H(ja) H(ja+1) . . . H(ja+k-1), where k = min(m,n). Each H(j) has the form .br H(j) = I - tau * v * v' .br where tau is a complex scalar, and v is a complex vector with v(1:j-1) = 0 and v(j) = 1; v(j+1:m) is stored on exit in .br A(ia+j:ia+m-1,ja+j-1), and tau in TAU(ja+j-1). 
.br scalapack-doc-1.5/man/manl/pzgeqrf.l0100644000056400000620000001424206335610655017106 0ustar pfrauenfstaff.TH PZGEQRF l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PZGEQRF - compute a QR factorization of a complex distributed M-by-N matrix sub( A ) = A(IA:IA+M-1,JA:JA+N-1) = Q * R .SH SYNOPSIS .TP 20 SUBROUTINE PZGEQRF( M, N, A, IA, JA, DESCA, TAU, WORK, LWORK, INFO ) .TP 20 .ti +4 INTEGER IA, INFO, JA, LWORK, M, N .TP 20 .ti +4 INTEGER DESCA( * ) .TP 20 .ti +4 COMPLEX*16 A( * ), TAU( * ), WORK( * ) .SH PURPOSE PZGEQRF computes a QR factorization of a complex distributed M-by-N matrix sub( A ) = A(IA:IA+M-1,JA:JA+N-1) = Q * R. Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. 
CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 M (global input) INTEGER The number of rows to be operated on, i.e. the number of rows of the distributed submatrix sub( A ). M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on, i.e. the number of columns of the distributed submatrix sub( A ). N >= 0. .TP 8 A (local input/local output) COMPLEX*16 pointer into the local memory to an array of dimension (LLD_A, LOCc(JA+N-1)). On entry, the local pieces of the M-by-N distributed matrix sub( A ) which is to be factored. On exit, the elements on and above the diagonal of sub( A ) contain the min(M,N) by N upper trapezoidal matrix R (R is upper triangular if M >= N); the elements below the diagonal, with the array TAU, represent the unitary matrix Q as a product of elementary reflectors (see Further Details). IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). 
.TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 TAU (local output) COMPLEX*16, array, dimension LOCc(JA+MIN(M,N)-1). This array contains the scalar factors TAU of the elementary reflectors. TAU is tied to the distributed matrix A. .TP 8 WORK (local workspace/local output) COMPLEX*16 array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. LWORK is local input and must be at least LWORK >= NB_A * ( Mp0 + Nq0 + NB_A ), where IROFF = MOD( IA-1, MB_A ), ICOFF = MOD( JA-1, NB_A ), IAROW = INDXG2P( IA, MB_A, MYROW, RSRC_A, NPROW ), IACOL = INDXG2P( JA, NB_A, MYCOL, CSRC_A, NPCOL ), Mp0 = NUMROC( M+IROFF, MB_A, MYROW, IAROW, NPROW ), Nq0 = NUMROC( N+ICOFF, NB_A, MYCOL, IACOL, NPCOL ), and NUMROC, INDXG2P are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 INFO (global output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. .SH FURTHER DETAILS The matrix Q is represented as a product of elementary reflectors Q = H(ja) H(ja+1) . . . H(ja+k-1), where k = min(m,n). 
Each H(j) has the form .br H(j) = I - tau * v * v' .br where tau is a complex scalar, and v is a complex vector with v(1:j-1) = 0 and v(j) = 1; v(j+1:m) is stored on exit in .br A(ia+j:ia+m-1,ja+j-1), and tau in TAU(ja+j-1). .br scalapack-doc-1.5/man/manl/pzgerfs.l0100644000056400000620000002340506335610655017111 0ustar pfrauenfstaff.TH PZGERFS l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PZGERFS - improve the computed solution to a system of linear equations and provide error bounds and backward error estimates for the solutions .SH SYNOPSIS .TP 20 SUBROUTINE PZGERFS( TRANS, N, NRHS, A, IA, JA, DESCA, AF, IAF, JAF, DESCAF, IPIV, B, IB, JB, DESCB, X, IX, JX, DESCX, FERR, BERR, WORK, LWORK, RWORK, LRWORK, INFO ) .TP 20 .ti +4 CHARACTER TRANS .TP 20 .ti +4 INTEGER IA, IAF, IB, IX, INFO, JA, JAF, JB, JX, LRWORK, LWORK, N, NRHS .TP 20 .ti +4 INTEGER DESCA( * ), DESCAF( * ), DESCB( * ), DESCX( * ), IPIV( * ) .TP 20 .ti +4 DOUBLE PRECISION BERR( * ), FERR( * ), RWORK( * ) .TP 20 .ti +4 COMPLEX*16 A( * ), AF( * ), B( * ), WORK( * ), X( * ) .SH PURPOSE PZGERFS improves the computed solution to a system of linear equations and provides error bounds and backward error estimates for the solutions. Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. 
.br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br In the following comments, sub( A ), sub( X ) and sub( B ) denote respectively A(IA:IA+N-1,JA:JA+N-1), X(IX:IX+N-1,JX:JX+NRHS-1) and B(IB:IB+N-1,JB:JB+NRHS-1). .br .SH ARGUMENTS .TP 8 TRANS (global input) CHARACTER*1 Specifies the form of the system of equations. 
= 'N': sub( A ) * sub( X ) = sub( B ) (No transpose) .br = 'T': sub( A )**T * sub( X ) = sub( B ) (Transpose) .br = 'C': sub( A )**H * sub( X ) = sub( B ) (Conjugate transpose) .TP 8 N (global input) INTEGER The order of the matrix sub( A ). N >= 0. .TP 8 NRHS (global input) INTEGER The number of right hand sides, i.e., the number of columns of the matrices sub( B ) and sub( X ). NRHS >= 0. .TP 8 A (local input) COMPLEX*16 pointer into the local memory to an array of local dimension (LLD_A,LOCc(JA+N-1)). This array contains the local pieces of the distributed matrix sub( A ). .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 AF (local input) COMPLEX*16 pointer into the local memory to an array of local dimension (LLD_AF,LOCc(JA+N-1)). This array contains the local pieces of the distributed factors of the matrix sub( A ) = P * L * U as computed by PZGETRF. .TP 8 IAF (global input) INTEGER The row index in the global array AF indicating the first row of sub( AF ). .TP 8 JAF (global input) INTEGER The column index in the global array AF indicating the first column of sub( AF ). .TP 8 DESCAF (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix AF. .TP 8 IPIV (local input) INTEGER array of dimension LOCr(M_AF)+MB_AF. This array contains the pivoting information as computed by PZGETRF. IPIV(i) -> The global row local row i was swapped with. This array is tied to the distributed matrix A. .TP 8 B (local input) COMPLEX*16 pointer into the local memory to an array of local dimension (LLD_B,LOCc(JB+NRHS-1)). This array contains the local pieces of the distributed matrix of right hand sides sub( B ). 
.TP 8 IB (global input) INTEGER The row index in the global array B indicating the first row of sub( B ). .TP 8 JB (global input) INTEGER The column index in the global array B indicating the first column of sub( B ). .TP 8 DESCB (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix B. .TP 8 X (local input and output) COMPLEX*16 pointer into the local memory to an array of local dimension (LLD_X,LOCc(JX+NRHS-1)). On entry, this array contains the local pieces of the distributed matrix solution sub( X ). On exit, the improved solution vectors. .TP 8 IX (global input) INTEGER The row index in the global array X indicating the first row of sub( X ). .TP 8 JX (global input) INTEGER The column index in the global array X indicating the first column of sub( X ). .TP 8 DESCX (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix X. .TP 8 FERR (local output) DOUBLE PRECISION array of local dimension LOCc(JB+NRHS-1). The estimated forward error bound for each solution vector of sub( X ). If XTRUE is the true solution corresponding to sub( X ), FERR is an estimated upper bound for the magnitude of the largest element in (sub( X ) - XTRUE) divided by the magnitude of the largest element in sub( X ). The estimate is as reliable as the estimate for RCOND, and is almost always a slight overestimate of the true error. This array is tied to the distributed matrix X. .TP 8 BERR (local output) DOUBLE PRECISION array of local dimension LOCc(JB+NRHS-1). The componentwise relative backward error of each solution vector (i.e., the smallest re- lative change in any entry of sub( A ) or sub( B ) that makes sub( X ) an exact solution). This array is tied to the distributed matrix X. .TP 8 WORK (local workspace/local output) COMPLEX*16 array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. 
.TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. LWORK is local input and must be at least LWORK >= 2*LOCr( N + MOD(IA-1,MB_A) ) If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 RWORK (local workspace/local output) DOUBLE PRECISION array, dimension (LRWORK) On exit, RWORK(1) returns the minimal and optimal LRWORK. .TP 8 LRWORK (local or global input) INTEGER The dimension of the array RWORK. LRWORK is local input and must be at least LRWORK >= LOCr( N + MOD(IB-1,MB_B) ). If LRWORK = -1, then LRWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 INFO (global output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. .SH PARAMETERS ITMAX is the maximum number of steps of iterative refinement. Notes ===== This routine temporarily returns when N <= 1. The distributed submatrices op( A ) and op( AF ) (respectively sub( X ) and sub( B ) ) should be distributed the same way on the same processes. These conditions ensure that sub( A ) and sub( AF ) (resp. sub( X ) and sub( B ) ) are "perfectly" aligned. 
Moreover, this routine requires the distributed submatrices sub( A ), sub( AF ), sub( X ), and sub( B ) to be aligned on a block boundary, i.e., if f(x,y) = MOD( x-1, y ): f( IA, DESCA( MB_ ) ) = f( JA, DESCA( NB_ ) ) = 0, f( IAF, DESCAF( MB_ ) ) = f( JAF, DESCAF( NB_ ) ) = 0, f( IB, DESCB( MB_ ) ) = f( JB, DESCB( NB_ ) ) = 0, and f( IX, DESCX( MB_ ) ) = f( JX, DESCX( NB_ ) ) = 0. scalapack-doc-1.5/man/manl/pzgerq2.l0100644000056400000620000001435306335610655017025 0ustar pfrauenfstaff.TH PZGERQ2 l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PZGERQ2 - compute a RQ factorization of a complex distributed M-by-N matrix sub( A ) = A(IA:IA+M-1,JA:JA+N-1) = R * Q .SH SYNOPSIS .TP 20 SUBROUTINE PZGERQ2( M, N, A, IA, JA, DESCA, TAU, WORK, LWORK, INFO ) .TP 20 .ti +4 INTEGER IA, INFO, JA, LWORK, M, N .TP 20 .ti +4 INTEGER DESCA( * ) .TP 20 .ti +4 COMPLEX*16 A( * ), TAU( * ), WORK( * ) .SH PURPOSE PZGERQ2 computes a RQ factorization of a complex distributed M-by-N matrix sub( A ) = A(IA:IA+M-1,JA:JA+N-1) = R * Q. Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. 
.br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 M (global input) INTEGER The number of rows to be operated on, i.e. the number of rows of the distributed submatrix sub( A ). M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on, i.e. the number of columns of the distributed submatrix sub( A ). N >= 0. .TP 8 A (local input/local output) COMPLEX*16 pointer into the local memory to an array of dimension (LLD_A, LOCc(JA+N-1)). On entry, the local pieces of the M-by-N distributed matrix sub( A ) which is to be factored. 
On exit, if M <= N, the upper triangle of A( IA:IA+M-1, JA+N-M:JA+N-1 ) contains the M by M upper triangular matrix R; if M >= N, the elements on and above the (M-N)-th subdiagonal contain the M by N upper trapezoidal matrix R; the remaining elements, with the array TAU, represent the unitary matrix Q as a product of elementary reflectors (see Further Details). IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 TAU (local output) COMPLEX*16, array, dimension LOCr(IA+M-1) This array contains the scalar factors of the elementary reflectors. TAU is tied to the distributed matrix A. .TP 8 WORK (local workspace/local output) COMPLEX*16 array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. LWORK is local input and must be at least LWORK >= Nq0 + MAX( 1, Mp0 ), where IROFF = MOD( IA-1, MB_A ), ICOFF = MOD( JA-1, NB_A ), IAROW = INDXG2P( IA, MB_A, MYROW, RSRC_A, NPROW ), IACOL = INDXG2P( JA, NB_A, MYCOL, CSRC_A, NPCOL ), Mp0 = NUMROC( M+IROFF, MB_A, MYROW, IAROW, NPROW ), Nq0 = NUMROC( N+ICOFF, NB_A, MYCOL, IACOL, NPCOL ), and NUMROC, INDXG2P are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. 
.TP 8 INFO (local output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-th entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. .SH FURTHER DETAILS The matrix Q is represented as a product of elementary reflectors Q = H(ia)' H(ia+1)' . . . H(ia+k-1)', where k = min(m,n). Each H(i) has the form .br H(i) = I - tau * v * v' .br where tau is a complex scalar, and v is a complex vector with v(n-k+i+1:n) = 0 and v(n-k+i) = 1; conjg(v(1:n-k+i-1)) is stored on exit in A(ia+m-k+i-1,ja:ja+n-k+i-2), and tau in TAU(ia+m-k+i-1). scalapack-doc-1.5/man/manl/pzgerqf.l0100644000056400000620000001436406335610655017113 0ustar pfrauenfstaff.TH PZGERQF l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PZGERQF - compute an RQ factorization of a complex distributed M-by-N matrix sub( A ) = A(IA:IA+M-1,JA:JA+N-1) = R * Q .SH SYNOPSIS .TP 20 SUBROUTINE PZGERQF( M, N, A, IA, JA, DESCA, TAU, WORK, LWORK, INFO ) .TP 20 .ti +4 INTEGER IA, INFO, JA, LWORK, M, N .TP 20 .ti +4 INTEGER DESCA( * ) .TP 20 .ti +4 COMPLEX*16 A( * ), TAU( * ), WORK( * ) .SH PURPOSE PZGERQF computes an RQ factorization of a complex distributed M-by-N matrix sub( A ) = A(IA:IA+M-1,JA:JA+N-1) = R * Q. Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclically distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1.

.br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 M (global input) INTEGER The number of rows to be operated on, i.e. the number of rows of the distributed submatrix sub( A ). M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on, i.e. the number of columns of the distributed submatrix sub( A ). N >= 0. 
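The workspace bound PZGERQF documents under LWORK below, MB_A*( Mp0 + Nq0 + MB_A ), is built from the tool functions NUMROC and INDXG2P. A Python sketch of evaluating it on one process, under the standard block-cyclic formulas (simplified: the Fortran INDXG2P also accepts the calling process coordinate, which it does not use):

```python
def indxg2p(indxglob, nb, isrcproc, nprocs):
    """Process coordinate owning global index indxglob (1-based)."""
    return (isrcproc + (indxglob - 1) // nb) % nprocs

def numroc(n, nb, iproc, isrcproc, nprocs):
    """Local count of an n-element dimension on process iproc (block-cyclic)."""
    mydist = (nprocs + iproc - isrcproc) % nprocs  # distance from source process
    nblocks = n // nb                              # number of full blocks
    num = (nblocks // nprocs) * nb                 # whole sweeps of full blocks
    extrablks = nblocks % nprocs                   # leftover full blocks
    if mydist < extrablks:
        num += nb                                  # one extra full block
    elif mydist == extrablks:
        num += n % nb                              # the final partial block
    return num

def pzgerqf_lwork(m, n, ia, ja, mb, nb, myrow, mycol, rsrc, csrc, nprow, npcol):
    """Evaluate LWORK >= MB_A*( Mp0 + Nq0 + MB_A ) from the LWORK notes."""
    iroff, icoff = (ia - 1) % mb, (ja - 1) % nb
    iarow = indxg2p(ia, mb, rsrc, nprow)
    iacol = indxg2p(ja, nb, csrc, npcol)
    mp0 = numroc(m + iroff, mb, myrow, iarow, nprow)
    nq0 = numroc(n + icoff, nb, mycol, iacol, npcol)
    return mb * (mp0 + nq0 + mb)

# 4 x 4 matrix, 2 x 2 blocks, 2 x 2 grid, as seen from process (0,0):
lw = pzgerqf_lwork(4, 4, 1, 1, 2, 2, 0, 0, 0, 0, 2, 2)
```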
.TP 8 A (local input/local output) COMPLEX*16 pointer into the local memory to an array of dimension (LLD_A, LOCc(JA+N-1)). On entry, the local pieces of the M-by-N distributed matrix sub( A ) which is to be factored. On exit, if M <= N, the upper triangle of A( IA:IA+M-1, JA+N-M:JA+N-1 ) contains the M by M upper triangular matrix R; if M >= N, the elements on and above the (M-N)-th subdiagonal contain the M by N upper trapezoidal matrix R; the remaining elements, with the array TAU, represent the unitary matrix Q as a product of elementary reflectors (see Further Details). IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 TAU (local output) COMPLEX*16, array, dimension LOCr(IA+M-1) This array contains the scalar factors of the elementary reflectors. TAU is tied to the distributed matrix A. .TP 8 WORK (local workspace/local output) COMPLEX*16 array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. LWORK is local input and must be at least LWORK >= MB_A * ( Mp0 + Nq0 + MB_A ), where IROFF = MOD( IA-1, MB_A ), ICOFF = MOD( JA-1, NB_A ), IAROW = INDXG2P( IA, MB_A, MYROW, RSRC_A, NPROW ), IACOL = INDXG2P( JA, NB_A, MYCOL, CSRC_A, NPCOL ), Mp0 = NUMROC( M+IROFF, MB_A, MYROW, IAROW, NPROW ), Nq0 = NUMROC( N+ICOFF, NB_A, MYCOL, IACOL, NPCOL ), and NUMROC, INDXG2P are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. 
Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 INFO (global output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. .SH FURTHER DETAILS The matrix Q is represented as a product of elementary reflectors Q = H(ia)' H(ia+1)' . . . H(ia+k-1)', where k = min(m,n). Each H(i) has the form .br H(i) = I - tau * v * v' .br where tau is a complex scalar, and v is a complex vector with v(n-k+i+1:n) = 0 and v(n-k+i) = 1; conjg(v(1:n-k+i-1)) is stored on exit in A(ia+m-k+i-1,ja:ja+n-k+i-2), and tau in TAU(ia+m-k+i-1). scalapack-doc-1.5/man/manl/pzgesv.l0100644000056400000620000001377006335610655016753 0ustar pfrauenfstaff.TH PZGESV l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PZGESV - compute the solution to a complex system of linear equations sub( A ) * X = sub( B ), .SH SYNOPSIS .TP 19 SUBROUTINE PZGESV( N, NRHS, A, IA, JA, DESCA, IPIV, B, IB, JB, DESCB, INFO ) .TP 19 .ti +4 INTEGER IA, IB, INFO, JA, JB, N, NRHS .TP 19 .ti +4 INTEGER DESCA( * ), DESCB( * ), IPIV( * ) .TP 19 .ti +4 COMPLEX*16 A( * ), B( * ) .SH PURPOSE PZGESV computes the solution to a complex system of linear equations where sub( A ) = A(IA:IA+N-1,JA:JA+N-1) is an N-by-N distributed matrix and X and sub( B ) = B(IB:IB+N-1,JB:JB+NRHS-1) are N-by-NRHS distributed matrices. .br The LU decomposition with partial pivoting and row interchanges is used to factor sub( A ) as sub( A ) = P * L * U, where P is a permu- tation matrix, L is unit lower triangular, and U is upper triangular. L and U are stored in sub( A ). The factored form of sub( A ) is then used to solve the system of equations sub( A ) * X = sub( B ). Notes .br ===== .br Each global data object is described by an associated description vector. 
This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. 
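The LOCr and LOCc counts just defined are what the tool function NUMROC returns. A pure-Python transcription of its block-cyclic counting logic (a sketch for illustration, not the Fortran source), together with the ceiling upper bound from these notes:

```python
import math

def numroc(n, nb, iproc, isrcproc, nprocs):
    """Number of rows/columns of an n-element dimension, blocked with factor
    nb, that land on process iproc when the distribution starts on process
    isrcproc over nprocs processes (block-cyclic)."""
    mydist = (nprocs + iproc - isrcproc) % nprocs  # distance from source process
    nblocks = n // nb                              # number of full blocks
    num = (nblocks // nprocs) * nb                 # whole sweeps of full blocks
    extrablks = nblocks % nprocs                   # leftover full blocks
    if mydist < extrablks:
        num += nb                                  # one extra full block
    elif mydist == extrablks:
        num += n % nb                              # the final partial block
    return num

# 10 rows in blocks of 2 over 2 process rows: process 0 holds blocks 0, 2, 4.
loc0 = numroc(10, 2, 0, 0, 2)                      # 6 rows on process row 0
loc1 = numroc(10, 2, 1, 0, 2)                      # 4 rows on process row 1
bound = math.ceil(math.ceil(10 / 2) / 2) * 2       # LOCr bound from the notes
```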
.br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br This routine requires square block decomposition ( MB_A = NB_A ). .SH ARGUMENTS .TP 8 N (global input) INTEGER The number of rows and columns to be operated on, i.e. the order of the distributed submatrix sub( A ). N >= 0. .TP 8 NRHS (global input) INTEGER The number of right-hand sides, i.e., the number of columns of the distributed submatrix sub( B ). NRHS >= 0. .TP 8 A (local input/local output) COMPLEX*16 pointer into the local memory to an array of dimension (LLD_A,LOCc(JA+N-1)). On entry, the local pieces of the N-by-N distributed matrix sub( A ) to be factored. On exit, this array contains the local pieces of the factors L and U from the factorization sub( A ) = P*L*U; the unit diagonal elements of L are not stored. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 IPIV (local output) INTEGER array, dimension ( LOCr(M_A)+MB_A ) This array contains the pivoting information. IPIV(i) is the global row with which local row i was swapped. This array is tied to the distributed matrix A. .TP 8 B (local input/local output) COMPLEX*16 pointer into the local memory to an array of dimension (LLD_B,LOCc(JB+NRHS-1)). On entry, the right-hand side distributed matrix sub( B ). On exit, if INFO = 0, sub( B ) is overwritten by the solution distributed matrix X.
.TP 8 IB (global input) INTEGER The row index in the global array B indicating the first row of sub( B ). .TP 8 JB (global input) INTEGER The column index in the global array B indicating the first column of sub( B ). .TP 8 DESCB (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix B. .TP 8 INFO (global output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. > 0: If INFO = K, U(IA+K-1,JA+K-1) is exactly zero. The factorization has been completed, but the factor U is exactly singular, so the solution could not be computed. scalapack-doc-1.5/man/manl/pzgesvx.l0100644000056400000620000004050106335610655017133 0ustar pfrauenfstaff.TH PZGESVX l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PZGESVX - use the LU factorization to compute the solution to a complex system of linear equations A(IA:IA+N-1,JA:JA+N-1) * X = B(IB:IB+N-1,JB:JB+NRHS-1), .SH SYNOPSIS .TP 20 SUBROUTINE PZGESVX( FACT, TRANS, N, NRHS, A, IA, JA, DESCA, AF, IAF, JAF, DESCAF, IPIV, EQUED, R, C, B, IB, JB, DESCB, X, IX, JX, DESCX, RCOND, FERR, BERR, WORK, LWORK, RWORK, LRWORK, INFO ) .TP 20 .ti +4 CHARACTER EQUED, FACT, TRANS .TP 20 .ti +4 INTEGER IA, IAF, IB, INFO, IX, JA, JAF, JB, JX, LRWORK, LWORK, N, NRHS .TP 20 .ti +4 DOUBLE PRECISION RCOND .TP 20 .ti +4 INTEGER DESCA( * ), DESCAF( * ), DESCB( * ), DESCX( * ), IPIV( * ) .TP 20 .ti +4 DOUBLE PRECISION BERR( * ), C( * ), FERR( * ), R( * ), RWORK( * ) .TP 20 .ti +4 COMPLEX*16 A( * ), AF( * ), B( * ), WORK( * ), X( * ) .SH PURPOSE PZGESVX uses the LU factorization to compute the solution to a complex system of linear equations where A(IA:IA+N-1,JA:JA+N-1) is an N-by-N matrix and X and B(IB:IB+N-1,JB:JB+NRHS-1) are N-by-NRHS matrices. .br Error bounds on the solution and a condition estimate are also provided. 
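The LU factor-and-solve that these driver routines perform on the distributed sub( A ) can be sketched serially on an ordinary dense array. Illustration only; the parallel routines distribute exactly this work over the BLACS process grid:

```python
def lu_solve(a, b):
    """Serial sketch of sub(A) = P*L*U with partial pivoting, then solve.

    a: n x n list of lists, overwritten by L and U (the unit diagonal of L
    is not stored); b: right-hand side list, overwritten by the solution,
    mirroring how the driver overwrites sub( A ) and sub( B )."""
    n = len(a)
    for k in range(n):
        # Partial pivoting: bring the row with the largest |a[i][k]| up.
        p = max(range(k, n), key=lambda i: abs(a[i][k]))
        if a[p][k] == 0.0:
            raise ZeroDivisionError(f"U({k},{k}) is exactly zero")  # INFO = k+1
        a[k], a[p] = a[p], a[k]
        b[k], b[p] = b[p], b[k]
        for i in range(k + 1, n):              # eliminate below the pivot
            a[i][k] /= a[k][k]                 # multiplier, stored as L
            for j in range(k + 1, n):
                a[i][j] -= a[i][k] * a[k][j]
            b[i] -= a[i][k] * b[k]             # forward-substitute L on the fly
    for k in range(n - 1, -1, -1):             # back-substitute U
        b[k] = (b[k] - sum(a[k][j] * b[j] for j in range(k + 1, n))) / a[k][k]
    return b

# 2x + y = 3, x + 3y = 5  has the solution x = 0.8, y = 1.4.
x = lu_solve([[2.0, 1.0], [1.0, 3.0]], [3.0, 5.0])
```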
.br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. 
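The nine descriptor entries tabulated above (DTYPE_ through LLD_) are conventionally packed by the ScaLAPACK tool routine DESCINIT. A minimal Python sketch of that packing, assuming the standard DLEN_ = 9 layout for DTYPE_A = 1 (the real DESCINIT also validates its arguments against the grid and returns INFO):

```python
def descinit(m, n, mb, nb, irsrc, icsrc, ictxt, lld):
    """Pack a ScaLAPACK-style array descriptor (DTYPE_A = 1, DLEN_ = 9).

    Entry order follows the notes above:
    DTYPE_, CTXT_, M_, N_, MB_, NB_, RSRC_, CSRC_, LLD_."""
    assert lld >= 1, "LLD_A must be at least MAX(1, LOCr(M_A))"
    return [1, ictxt, m, n, mb, nb, irsrc, icsrc, lld]

# A 1000 x 500 global matrix in 64 x 64 blocks, first block on process (0,0),
# in a hypothetical BLACS context 0 with local leading dimension 512:
desca = descinit(1000, 500, 64, 64, 0, 0, 0, 512)
m_a, nb_a = desca[2], desca[5]   # M_A = 1000, NB_A = 64
```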
.br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH DESCRIPTION In the following description, A denotes A(IA:IA+N-1,JA:JA+N-1), B denotes B(IB:IB+N-1,JB:JB+NRHS-1) and X denotes .br X(IX:IX+N-1,JX:JX+NRHS-1). .br The following steps are performed: .br 1. If FACT = 'E', real scaling factors are computed to equilibrate the system: .br TRANS = 'N': diag(R)*A*diag(C) *inv(diag(C))*X = diag(R)*B TRANS = 'T': (diag(R)*A*diag(C))**T *inv(diag(R))*X = diag(C)*B TRANS = 'C': (diag(R)*A*diag(C))**H *inv(diag(R))*X = diag(C)*B Whether or not the system will be equilibrated depends on the scaling of the matrix A, but if equilibration is used, A is overwritten by diag(R)*A*diag(C) and B by diag(R)*B (if TRANS='N') or diag(C)*B (if TRANS = 'T' or 'C'). .br 2. If FACT = 'N' or 'E', the LU decomposition is used to factor the matrix A (after equilibration if FACT = 'E') as .br A = P * L * U, .br where P is a permutation matrix, L is a unit lower triangular matrix, and U is upper triangular. .br 3. The factored form of A is used to estimate the condition number of the matrix A. If the reciprocal of the condition number is less than machine precision, steps 4-6 are skipped. .br 4. The system of equations is solved for X using the factored form of A. .br 5. Iterative refinement is applied to improve the computed solution matrix and calculate error bounds and backward error estimates for it. .br 6. If FACT = 'E' and equilibration was used, the matrix X is premultiplied by diag(C) (if TRANS = 'N') or diag(R) (if TRANS = 'T' or 'C') so that it solves the original system before equilibration. 
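Step 1 above computes R and C so that diag(R)*A*diag(C) has entries of comparable magnitude. A serial Python sketch of the common choice (reciprocal row maxima, then reciprocal scaled column maxima; the actual equilibration code also guards against overflow and underflow, which this sketch does not):

```python
def equilibrate(a):
    """Row scales R and column scales C so every entry of diag(R)*A*diag(C)
    has magnitude at most 1 (square-matrix sketch of the FACT = 'E' path)."""
    n = len(a)
    r = [1.0 / max(abs(v) for v in row) for row in a]        # row maxima
    c = [1.0 / max(r[i] * abs(a[i][j]) for i in range(n))    # scaled col maxima
         for j in range(n)]
    scaled = [[r[i] * a[i][j] * c[j] for j in range(n)] for i in range(n)]
    return r, c, scaled

# A badly scaled 2 x 2 example: one entry is six orders of magnitude larger.
r, c, s = equilibrate([[1.0, 1e6], [2.0, 4.0]])
# After scaling, every |s[i][j]| <= 1.
```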
.br .SH ARGUMENTS .TP 8 FACT (global input) CHARACTER Specifies whether or not the factored form of the matrix A(IA:IA+N-1,JA:JA+N-1) is supplied on entry, and if not, .br whether the matrix A(IA:IA+N-1,JA:JA+N-1) should be equilibrated before it is factored. = 'F': On entry, AF(IAF:IAF+N-1,JAF:JAF+N-1) and IPIV con- .br tain the factored form of A(IA:IA+N-1,JA:JA+N-1). If EQUED is not 'N', the matrix A(IA:IA+N-1,JA:JA+N-1) has been equilibrated with scaling factors given by R and C. A(IA:IA+N-1,JA:JA+N-1), AF(IAF:IAF+N-1,JAF:JAF+N-1), and IPIV are not modified. = 'N': The matrix A(IA:IA+N-1,JA:JA+N-1) will be copied to .br AF(IAF:IAF+N-1,JAF:JAF+N-1) and factored. .br = 'E': The matrix A(IA:IA+N-1,JA:JA+N-1) will be equili- brated if necessary, then copied to AF(IAF:IAF+N-1,JAF:JAF+N-1) and factored. .TP 8 TRANS (global input) CHARACTER .br Specifies the form of the system of equations: .br = 'N': A(IA:IA+N-1,JA:JA+N-1) * X(IX:IX+N-1,JX:JX+NRHS-1) .br = B(IB:IB+N-1,JB:JB+NRHS-1) (No transpose) .br = 'T': A(IA:IA+N-1,JA:JA+N-1)**T * X(IX:IX+N-1,JX:JX+NRHS-1) .br = B(IB:IB+N-1,JB:JB+NRHS-1) (Transpose) .br = 'C': A(IA:IA+N-1,JA:JA+N-1)**H * X(IX:IX+N-1,JX:JX+NRHS-1) .br = B(IB:IB+N-1,JB:JB+NRHS-1) (Conjugate transpose) .TP 8 N (global input) INTEGER The number of rows and columns to be operated on, i.e. the order of the distributed submatrix A(IA:IA+N-1,JA:JA+N-1). N >= 0. .TP 8 NRHS (global input) INTEGER The number of right-hand sides, i.e., the number of columns of the distributed submatrices B(IB:IB+N-1,JB:JB+NRHS-1) and .br X(IX:IX+N-1,JX:JX+NRHS-1). NRHS >= 0. .TP 8 A (local input/local output) COMPLEX*16 pointer into the local memory to an array of local dimension (LLD_A,LOCc(JA+N-1)). On entry, the N-by-N matrix A(IA:IA+N-1,JA:JA+N-1). If FACT = 'F' and EQUED is not 'N', .br then A(IA:IA+N-1,JA:JA+N-1) must have been equilibrated by .br the scaling factors in R and/or C. 
A(IA:IA+N-1,JA:JA+N-1) is not modified if FACT = 'F' or 'N', or if FACT = 'E' and EQUED = 'N' on exit. On exit, if EQUED .ne. 'N', A(IA:IA+N-1,JA:JA+N-1) is scaled as follows: .br EQUED = 'R': A(IA:IA+N-1,JA:JA+N-1) := .br diag(R) * A(IA:IA+N-1,JA:JA+N-1) .br EQUED = 'C': A(IA:IA+N-1,JA:JA+N-1) := .br A(IA:IA+N-1,JA:JA+N-1) * diag(C) .br EQUED = 'B': A(IA:IA+N-1,JA:JA+N-1) := .br diag(R) * A(IA:IA+N-1,JA:JA+N-1) * diag(C). .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 AF (local input or local output) COMPLEX*16 pointer into the local memory to an array of local dimension (LLD_AF,LOCc(JA+N-1)). If FACT = 'F', then AF(IAF:IAF+N-1,JAF:JAF+N-1) is an input argument and on entry contains the factors L and U from the factorization A(IA:IA+N-1,JA:JA+N-1) = P*L*U as computed by PZGETRF. If EQUED .ne. 'N', then AF is the factored form of the equilibrated matrix A(IA:IA+N-1,JA:JA+N-1). If FACT = 'N', then AF(IAF:IAF+N-1,JAF:JAF+N-1) is an output argument and on exit returns the factors L and U from the factorization A(IA:IA+N-1,JA:JA+N-1) = P*L*U of the original .br matrix A(IA:IA+N-1,JA:JA+N-1). If FACT = 'E', then AF(IAF:IAF+N-1,JAF:JAF+N-1) is an output argument and on exit returns the factors L and U from the factorization A(IA:IA+N-1,JA:JA+N-1) = P*L*U of the equili- .br brated matrix A(IA:IA+N-1,JA:JA+N-1) (see the description of .br A(IA:IA+N-1,JA:JA+N-1) for the form of the equilibrated matrix). .TP 8 IAF (global input) INTEGER The row index in the global array AF indicating the first row of sub( AF ). .TP 8 JAF (global input) INTEGER The column index in the global array AF indicating the first column of sub( AF ). 
.TP 8 DESCAF (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix AF. .TP 8 IPIV (local input or local output) INTEGER array, dimension LOCr(M_A)+MB_A. If FACT = 'F', then IPIV is an input argu- ment and on entry contains the pivot indices from the fac- torization A(IA:IA+N-1,JA:JA+N-1) = P*L*U as computed by PZGETRF; IPIV(i) -> The global row local row i was swapped with. This array must be aligned with A( IA:IA+N-1, * ). If FACT = 'N', then IPIV is an output argument and on exit contains the pivot indices from the factorization A(IA:IA+N-1,JA:JA+N-1) = P*L*U of the original matrix .br A(IA:IA+N-1,JA:JA+N-1). If FACT = 'E', then IPIV is an output argument and on exit contains the pivot indices from the factorization A(IA:IA+N-1,JA:JA+N-1) = P*L*U of the equilibrated matrix .br A(IA:IA+N-1,JA:JA+N-1). .TP 8 EQUED (global input or global output) CHARACTER Specifies the form of equilibration that was done. = 'N': No equilibration (always true if FACT = 'N'). .br = 'R': Row equilibration, i.e., A(IA:IA+N-1,JA:JA+N-1) has been premultiplied by diag(R). = 'C': Column equilibration, i.e., A(IA:IA+N-1,JA:JA+N-1) has been postmultiplied by diag(C). = 'B': Both row and column equilibration, i.e., .br A(IA:IA+N-1,JA:JA+N-1) has been replaced by .br diag(R) * A(IA:IA+N-1,JA:JA+N-1) * diag(C). EQUED is an input variable if FACT = 'F'; otherwise, it is an output variable. .TP 8 R (local input or local output) DOUBLE PRECISION array, dimension LOCr(M_A). The row scale factors for A(IA:IA+N-1,JA:JA+N-1). .br If EQUED = 'R' or 'B', A(IA:IA+N-1,JA:JA+N-1) is multiplied on the left by diag(R); if EQUED='N' or 'C', R is not acces- sed. R is an input variable if FACT = 'F'; otherwise, R is an output variable. If FACT = 'F' and EQUED = 'R' or 'B', each element of R must be positive. R is replicated in every process column, and is aligned with the distributed matrix A. 
.TP 8 C (local input or local output) DOUBLE PRECISION array, dimension LOCc(N_A). The column scale factors for A(IA:IA+N-1,JA:JA+N-1). .br If EQUED = 'C' or 'B', A(IA:IA+N-1,JA:JA+N-1) is multiplied on the right by diag(C); if EQUED = 'N' or 'R', C is not accessed. C is an input variable if FACT = 'F'; otherwise, C is an output variable. If FACT = 'F' and EQUED = 'C' or 'B', each element of C must be positive. C is replicated in every process row, and is aligned with the distributed matrix A. .TP 8 B (local input/local output) COMPLEX*16 pointer into the local memory to an array of local dimension (LLD_B,LOCc(JB+NRHS-1) ). On entry, the N-by-NRHS right-hand side matrix B(IB:IB+N-1,JB:JB+NRHS-1). On exit, if .br EQUED = 'N', B(IB:IB+N-1,JB:JB+NRHS-1) is not modified; if TRANS = 'N' and EQUED = 'R' or 'B', B is overwritten by diag(R)*B(IB:IB+N-1,JB:JB+NRHS-1); if TRANS = 'T' or 'C' .br and EQUED = 'C' or 'B', B(IB:IB+N-1,JB:JB+NRHS-1) is over- .br written by diag(C)*B(IB:IB+N-1,JB:JB+NRHS-1). .TP 8 IB (global input) INTEGER The row index in the global array B indicating the first row of sub( B ). .TP 8 JB (global input) INTEGER The column index in the global array B indicating the first column of sub( B ). .TP 8 DESCB (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix B. .TP 8 X (local input/local output) COMPLEX*16 pointer into the local memory to an array of local dimension (LLD_X, LOCc(JX+NRHS-1)). If INFO = 0, the N-by-NRHS solution matrix X(IX:IX+N-1,JX:JX+NRHS-1) to the original .br system of equations. Note that A(IA:IA+N-1,JA:JA+N-1) and .br B(IB:IB+N-1,JB:JB+NRHS-1) are modified on exit if EQUED .ne. 'N', and the solution to the equilibrated system is inv(diag(C))*X(IX:IX+N-1,JX:JX+NRHS-1) if TRANS = 'N' and EQUED = 'C' or 'B', or inv(diag(R))*X(IX:IX+N-1,JX:JX+NRHS-1) if TRANS = 'T' or 'C' and EQUED = 'R' or 'B'. 
.TP 8 IX (global input) INTEGER The row index in the global array X indicating the first row of sub( X ). .TP 8 JX (global input) INTEGER The column index in the global array X indicating the first column of sub( X ). .TP 8 DESCX (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix X. .TP 8 RCOND (global output) DOUBLE PRECISION The estimate of the reciprocal condition number of the matrix A(IA:IA+N-1,JA:JA+N-1) after equilibration (if done). If RCOND is less than the machine precision (in particular, if RCOND = 0), the matrix is singular to working precision. This condition is indicated by a return code of INFO > 0. .TP 8 FERR (local output) DOUBLE PRECISION array, dimension LOCc(N_B) The estimated forward error bounds for each solution vector X(j) (the j-th column of the solution matrix X(IX:IX+N-1,JX:JX+NRHS-1). If XTRUE is the true solution, FERR(j) bounds the magnitude of the largest entry in (X(j) - XTRUE) divided by the magnitude of the largest entry in X(j). The estimate is as reliable as the estimate for RCOND, and is almost always a slight overestimate of the true error. FERR is replicated in every process row, and is aligned with the matrices B and X. .TP 8 BERR (local output) DOUBLE PRECISION array, dimension LOCc(N_B). The componentwise relative backward error of each solution vector X(j) (i.e., the smallest relative change in any entry of A(IA:IA+N-1,JA:JA+N-1) or .br B(IB:IB+N-1,JB:JB+NRHS-1) that makes X(j) an exact solution). BERR is replicated in every process row, and is aligned with the matrices B and X. .TP 8 WORK (local workspace/local output) COMPLEX*16 array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. LWORK is local input and must be at least LWORK = MAX( PZGECON( LWORK ), PZGERFS( LWORK ) ) + LOCr( N_A ). 
If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 RWORK (local workspace/local output) DOUBLE PRECISION array, dimension (LRWORK) On exit, RWORK(1) returns the minimal and optimal LRWORK. .TP 8 LRWORK (local or global input) INTEGER The dimension of the array RWORK. LRWORK is local input and must be at least LRWORK = 2*LOCc(N_A). If LRWORK = -1, then LRWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 INFO (global output) INTEGER = 0: successful exit .br < 0: if INFO = -i, the i-th argument had an illegal value .br > 0: if INFO = i, and i is .br <= N: U(IA+I-1,JA+I-1) is exactly zero. The factorization has been completed, but the factor U is exactly singular, so the solution and error bounds could not be computed. = N+1: RCOND is less than machine precision. The factorization has been completed, but the matrix is singular to working precision, and the solution and error bounds have not been computed.
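The two INFO error conventions used across these pages (plain -i for a bad scalar argument, -(i*100+j) for a bad entry j of array argument i) can be decoded mechanically. A small illustrative helper (the function name is ours, not ScaLAPACK's):

```python
def decode_info(info):
    """Decode the ScaLAPACK INFO error conventions described above.

    Returns a human-readable diagnosis for INFO < 0; zero and positive
    values are routine-specific and reported as such."""
    if info == 0:
        return "successful exit"
    if info > 0:
        return f"routine-specific failure code {info}"
    code = -info
    if code >= 100:                       # array argument: -(i*100 + j)
        i, j = divmod(code, 100)
        return f"argument {i} (array): entry {j} had an illegal value"
    return f"argument {code} (scalar) had an illegal value"

msg = decode_info(-602)   # entry 2 of the 6th argument was illegal
```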
scalapack-doc-1.5/man/manl/pzgetf2.l0100644000056400000620000001261406335610656017013 0ustar pfrauenfstaff.TH PZGETF2 l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PZGETF2 - compute an LU factorization of a general M-by-N distributed matrix sub( A ) = A(IA:IA+M-1,JA:JA+N-1) using partial pivoting with row interchanges .SH SYNOPSIS .TP 20 SUBROUTINE PZGETF2( M, N, A, IA, JA, DESCA, IPIV, INFO ) .TP 20 .ti +4 INTEGER IA, INFO, JA, M, N .TP 20 .ti +4 INTEGER DESCA( * ), IPIV( * ) .TP 20 .ti +4 COMPLEX*16 A( * ) .SH PURPOSE PZGETF2 computes an LU factorization of a general M-by-N distributed matrix sub( A ) = A(IA:IA+M-1,JA:JA+N-1) using partial pivoting with row interchanges. The factorization has the form sub( A ) = P * L * U, where P is a permutation matrix, L is lower triangular with unit diagonal elements (lower trapezoidal if m > n), and U is upper triangular (upper trapezoidal if m < n). .br This is the right-looking Parallel Level 2 BLAS version of the algorithm. .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. 
.br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br This routine requires N <= NB_A-MOD(JA-1, NB_A) and square block decomposition ( MB_A = NB_A ). .br .SH ARGUMENTS .TP 8 M (global input) INTEGER The number of rows to be operated on, i.e. the number of rows of the distributed submatrix sub( A ). M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on, i.e. the number of columns of the distributed submatrix sub( A ). NB_A-MOD(JA-1, NB_A) >= N >= 0. .TP 8 A (local input/local output) COMPLEX*16 pointer into the local memory to an array of dimension (LLD_A, LOCc(JA+N-1)). 
On entry, this array contains the local pieces of the M-by-N distributed matrix sub( A ). On exit, this array contains the local pieces of the factors L and U from the factorization sub( A ) = P*L*U; the unit diagonal elements of L are not stored.
.TP 8
IA (global input) INTEGER
The row index in the global array A indicating the first row of sub( A ).
.TP 8
JA (global input) INTEGER
The column index in the global array A indicating the first column of sub( A ).
.TP 8
DESCA (global and local input) INTEGER array of dimension DLEN_.
The array descriptor for the distributed matrix A.
.TP 8
IPIV (local output) INTEGER array, dimension ( LOCr(M_A)+MB_A )
This array contains the pivoting information. IPIV(i) is the global row that local row i was swapped with. This array is tied to the distributed matrix A.
.TP 8
INFO (local output) INTEGER
= 0: successful exit
.br
< 0: If the i-th argument is an array and the j-th entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i.
.br
> 0: If INFO = K, U(IA+K-1,JA+K-1) is exactly zero. The factorization has been completed, but the factor U is exactly singular, and division by zero will occur if it is used to solve a system of equations.
scalapack-doc-1.5/man/manl/pzgetrf.l
.TH PZGETRF l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)"
.SH NAME
PZGETRF - compute an LU factorization of a general M-by-N distributed matrix sub( A ) = A(IA:IA+M-1,JA:JA+N-1) using partial pivoting with row interchanges
.SH SYNOPSIS
.TP 20
SUBROUTINE PZGETRF( M, N, A, IA, JA, DESCA, IPIV, INFO )
.TP 20
.ti +4
INTEGER IA, INFO, JA, M, N
.TP 20
.ti +4
INTEGER DESCA( * ), IPIV( * )
.TP 20
.ti +4
COMPLEX*16 A( * )
.SH PURPOSE
PZGETRF computes an LU factorization of a general M-by-N distributed matrix sub( A ) = A(IA:IA+M-1,JA:JA+N-1) using partial pivoting with row interchanges.
The factorization has the form sub( A ) = P * L * U, where P is a permutation matrix, L is lower triangular with unit diagonal ele- ments (lower trapezoidal if m > n), and U is upper triangular (upper trapezoidal if m < n). L and U are stored in sub( A ). This is the right-looking Parallel Level 3 BLAS version of the algorithm. .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. 
.br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br This routine requires square block decomposition ( MB_A = NB_A ). .SH ARGUMENTS .TP 8 M (global input) INTEGER The number of rows to be operated on, i.e. the number of rows of the distributed submatrix sub( A ). M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on, i.e. the number of columns of the distributed submatrix sub( A ). N >= 0. .TP 8 A (local input/local output) COMPLEX*16 pointer into the local memory to an array of dimension (LLD_A, LOCc(JA+N-1)). On entry, this array contains the local pieces of the M-by-N distributed matrix sub( A ) to be factored. On exit, this array contains the local pieces of the factors L and U from the factorization sub( A ) = P*L*U; the unit diagonal ele- ments of L are not stored. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 IPIV (local output) INTEGER array, dimension ( LOCr(M_A)+MB_A ) This array contains the pivoting information. IPIV(i) -> The global row local row i was swapped with. This array is tied to the distributed matrix A. 
.TP 8
INFO (global output) INTEGER
= 0: successful exit
.br
< 0: If the i-th argument is an array and the j-th entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i.
.br
> 0: If INFO = K, U(IA+K-1,JA+K-1) is exactly zero. The factorization has been completed, but the factor U is exactly singular, and division by zero will occur if it is used to solve a system of equations.
scalapack-doc-1.5/man/manl/pzgetri.l
.TH PZGETRI l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)"
.SH NAME
PZGETRI - compute the inverse of a distributed matrix using the LU factorization computed by PZGETRF
.SH SYNOPSIS
.TP 20
SUBROUTINE PZGETRI( N, A, IA, JA, DESCA, IPIV, WORK, LWORK, IWORK, LIWORK, INFO )
.TP 20
.ti +4
INTEGER IA, INFO, JA, LIWORK, LWORK, N
.TP 20
.ti +4
INTEGER DESCA( * ), IPIV( * ), IWORK( * )
.TP 20
.ti +4
COMPLEX*16 A( * ), WORK( * )
.SH PURPOSE
PZGETRI computes the inverse of a distributed matrix using the LU factorization computed by PZGETRF. This method inverts U and then computes the inverse of sub( A ) = A(IA:IA+N-1,JA:JA+N-1), denoted InvA, by solving the system InvA*L = inv(U) for InvA.
.br
Notes
.br
=====
.br
Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location.
.br
Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array".
.br
NOTATION        STORED IN      EXPLANATION
.br
--------------- -------------- --------------------------------------
DTYPE_A(global) DESCA( DTYPE_ ) The descriptor type. In this case, DTYPE_A = 1.
.br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 N (global input) INTEGER The number of rows and columns to be operated on, i.e. the order of the distributed submatrix sub( A ). N >= 0. .TP 8 A (local input/local output) COMPLEX*16 pointer into the local memory to an array of dimension (LLD_A,LOCc(JA+N-1)). 
On entry, the local pieces of the L and U obtained by the factorization sub( A ) = P*L*U computed by PZGETRF. On exit, if INFO = 0, sub( A ) contains the inverse of the original distributed matrix sub( A ). .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 IPIV (local input) INTEGER array, dimension LOCr(M_A)+MB_A keeps track of the pivoting information. IPIV(i) is the global row index the local row i was swapped with. This array is tied to the distributed matrix A. .TP 8 WORK (local workspace/local output) COMPLEX*16 array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. LWORK is local input and must be at least LWORK = LOCr(N+MOD(IA-1,MB_A))*NB_A. WORK is used to keep a copy of at most an entire column block of sub( A ). If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 IWORK (local workspace/local output) INTEGER array, dimension (LIWORK) On exit, IWORK(1) returns the minimal and optimal LIWORK. .TP 8 LIWORK (local or global input) INTEGER The dimension of the array IWORK used as workspace for physically transposing the pivots. LIWORK is local input and must be at least if NPROW == NPCOL then LIWORK = LOCc( N_A + MOD(JA-1, NB_A) ) + NB_A, else LIWORK = LOCc( N_A + MOD(JA-1, NB_A) ) + MAX( CEIL(CEIL(LOCr(M_A)/MB_A)/(LCM/NPROW)), NB_A ) where LCM is the least common multiple of process rows and columns (NPROW and NPCOL). 
end if
If LIWORK = -1, then LIWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA.
.TP 8
INFO (global output) INTEGER
= 0: successful exit
.br
< 0: If the i-th argument is an array and the j-th entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i.
.br
> 0: If INFO = K, U(IA+K-1,JA+K-1) is exactly zero; the matrix is singular and its inverse could not be computed.
scalapack-doc-1.5/man/manl/pzgetrs.l
.TH PZGETRS l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)"
.SH NAME
PZGETRS - solve a system of distributed linear equations op( sub( A ) ) * X = sub( B ) with a general N-by-N distributed matrix sub( A ) using the LU factorization computed by PZGETRF
.SH SYNOPSIS
.TP 20
SUBROUTINE PZGETRS( TRANS, N, NRHS, A, IA, JA, DESCA, IPIV, B, IB, JB, DESCB, INFO )
.TP 20
.ti +4
CHARACTER TRANS
.TP 20
.ti +4
INTEGER IA, IB, INFO, JA, JB, N, NRHS
.TP 20
.ti +4
INTEGER DESCA( * ), DESCB( * ), IPIV( * )
.TP 20
.ti +4
COMPLEX*16 A( * ), B( * )
.SH PURPOSE
PZGETRS solves a system of distributed linear equations op( sub( A ) ) * X = sub( B ) with a general N-by-N distributed matrix sub( A ) using the LU factorization computed by PZGETRF. sub( A ) denotes A(IA:IA+N-1,JA:JA+N-1), op( A ) = A, A**T or A**H, and sub( B ) denotes B(IB:IB+N-1,JB:JB+NRHS-1).
.br
Notes
.br
=====
.br
Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location.
.br
Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array".
.br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br This routine requires square block data decomposition ( MB_A=NB_A ). 
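The LOCr() and LOCc() quantities and their upper bounds described in the Notes above can be made concrete with a short Python model. This is an illustrative sketch only, not part of ScaLAPACK; the authoritative implementation is the ScaLAPACK tool function NUMROC, and the function below simply mirrors its documented block-cyclic counting:

```python
import math

def numroc(n, nb, iproc, isrc, nprocs):
    """Model of the ScaLAPACK tool function NUMROC: the number of rows or
    columns of an n-element dimension, distributed in blocks of size nb
    over nprocs processes, that land on process iproc when the first
    block resides on process isrc."""
    mydist = (nprocs + iproc - isrc) % nprocs   # distance from the source process
    nblocks = n // nb                           # number of complete blocks
    num = (nblocks // nprocs) * nb              # complete blocks every process owns
    extra = nblocks % nprocs
    if mydist < extra:
        num += nb                               # one additional complete block
    elif mydist == extra:
        num += n % nb                           # the trailing partial block
    return num

# Example: M = 9 global rows, MB_A = 2, NPROW = 2, RSRC_A = 0.
locr = [numroc(9, 2, p, 0, 2) for p in range(2)]
# Upper bound from the Notes: LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A
bound = math.ceil(math.ceil(9 / 2) / 2) * 2
assert sum(locr) == 9                           # every global row is owned by some process
assert all(x <= bound for x in locr)            # the documented bound holds
```

Note how the two processes receive 5 and 4 rows respectively, both within the documented bound of 6; summing NUMROC over a process row or column always recovers the global dimension.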
.SH ARGUMENTS .TP 8 TRANS (global input) CHARACTER Specifies the form of the system of equations: .br = 'N': sub( A ) * X = sub( B ) (No transpose) .br = 'T': sub( A )**T * X = sub( B ) (Transpose) .br = 'C': sub( A )**H * X = sub( B ) (Conjugate transpose) .TP 8 N (global input) INTEGER The number of rows and columns to be operated on, i.e. the order of the distributed submatrix sub( A ). N >= 0. .TP 8 NRHS (global input) INTEGER The number of right hand sides, i.e., the number of columns of the distributed submatrix sub( B ). NRHS >= 0. .TP 8 A (local input) COMPLEX*16 pointer into the local memory to an array of dimension (LLD_A, LOCc(JA+N-1)). On entry, this array contains the local pieces of the factors L and U from the factorization sub( A ) = P*L*U; the unit diagonal elements of L are not stored. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 IPIV (local input) INTEGER array, dimension ( LOCr(M_A)+MB_A ) This array contains the pivoting information. IPIV(i) -> The global row local row i was swapped with. This array is tied to the distributed matrix A. .TP 8 B (local input/local output) COMPLEX*16 pointer into the local memory to an array of dimension (LLD_B,LOCc(JB+NRHS-1)). On entry, the right hand sides sub( B ). On exit, sub( B ) is overwritten by the solution distributed matrix X. .TP 8 IB (global input) INTEGER The row index in the global array B indicating the first row of sub( B ). .TP 8 JB (global input) INTEGER The column index in the global array B indicating the first column of sub( B ). .TP 8 DESCB (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix B. 
.TP 8
INFO (global output) INTEGER
= 0: successful exit
.br
< 0: If the i-th argument is an array and the j-th entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i.
scalapack-doc-1.5/man/manl/pzggqrf.l
.TH PZGGQRF l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)"
.SH NAME
PZGGQRF - compute a generalized QR factorization of an N-by-M matrix sub( A ) = A(IA:IA+N-1,JA:JA+M-1) and an N-by-P matrix sub( B ) = B(IB:IB+N-1,JB:JB+P-1)
.SH SYNOPSIS
.TP 20
SUBROUTINE PZGGQRF( N, M, P, A, IA, JA, DESCA, TAUA, B, IB, JB, DESCB, TAUB, WORK, LWORK, INFO )
.TP 20
.ti +4
INTEGER IA, IB, INFO, JA, JB, LWORK, M, N, P
.TP 20
.ti +4
INTEGER DESCA( * ), DESCB( * )
.TP 20
.ti +4
COMPLEX*16 A( * ), B( * ), TAUA( * ), TAUB( * ), WORK( * )
.SH PURPOSE
PZGGQRF computes a generalized QR factorization of an N-by-M matrix sub( A ) = A(IA:IA+N-1,JA:JA+M-1) and an N-by-P matrix sub( B ) = B(IB:IB+N-1,JB:JB+P-1):
.br
sub( A ) = Q*R, sub( B ) = Q*T*Z,
.br
where Q is an N-by-N unitary matrix, Z is a P-by-P unitary matrix, and R and T assume one of the forms:
.br
if N >= M,  R = ( R11 ) M   ,   or if N < M,  R = ( R11  R12 ) N,
                (  0  ) N-M
                   M                                 M    M-N
.br
where R11 is upper triangular, and
.br
if N <= P,  T = ( 0  T12 ) N,   or if N > P,  T = ( T11 ) N-P,
                                                  ( T21 ) P
                  P-N  N                             P
.br
where T12 or T21 is upper triangular.
.br
In particular, if sub( B ) is square and nonsingular, the GQR factorization of sub( A ) and sub( B ) implicitly gives the QR factorization of inv( sub( B ) )*sub( A ):
.br
inv( sub( B ) )*sub( A ) = Z'*(inv(T)*R)
.br
where inv( sub( B ) ) denotes the inverse of the matrix sub( B ), and Z' denotes the conjugate transpose of matrix Z.
.br
Notes
.br
=====
.br
Each global data object is described by an associated description vector.
This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. 
.br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 N (global input) INTEGER The number of rows to be operated on i.e the number of rows of the distributed submatrices sub( A ) and sub( B ). N >= 0. .TP 8 M (global input) INTEGER The number of columns to be operated on i.e the number of columns of the distributed submatrix sub( A ). M >= 0. .TP 8 P (global input) INTEGER The number of columns to be operated on i.e the number of columns of the distributed submatrix sub( B ). P >= 0. .TP 8 A (local input/local output) COMPLEX*16 pointer into the local memory to an array of dimension (LLD_A, LOCc(JA+M-1)). On entry, the local pieces of the N-by-M distributed matrix sub( A ) which is to be factored. On exit, the elements on and above the diagonal of sub( A ) contain the min(N,M) by M upper trapezoidal matrix R (R is upper triangular if N >= M); the elements below the diagonal, with the array TAUA, represent the unitary matrix Q as a product of min(N,M) elementary reflectors (see Further Details). IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 TAUA (local output) COMPLEX*16, array, dimension LOCc(JA+MIN(N,M)-1). This array contains the scalar factors TAUA of the elementary reflectors which represent the unitary matrix Q. TAUA is tied to the distributed matrix A. (see Further Details). 
B (local input/local output) COMPLEX*16 pointer into the local memory to an array of dimension (LLD_B, LOCc(JB+P-1)). On entry, the local pieces of the N-by-P distributed matrix sub( B ) which is to be factored. On exit, if N <= P, the upper triangle of B(IB:IB+N-1,JB+P-N:JB+P-1) contains the N by N upper triangular matrix T; if N > P, the elements on and above the (N-P)-th subdiagonal contain the N by P upper trapezoidal matrix T; the remaining elements, with the array TAUB, represent the unitary matrix Z as a product of elementary reflectors (see Further Details). IB (global input) INTEGER The row index in the global array B indicating the first row of sub( B ). .TP 8 JB (global input) INTEGER The column index in the global array B indicating the first column of sub( B ). .TP 8 DESCB (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix B. .TP 8 TAUB (local output) COMPLEX*16, array, dimension LOCr(IB+N-1) This array contains the scalar factors of the elementary reflectors which represent the unitary matrix Z. TAUB is tied to the distributed matrix B (see Further Details). WORK (local workspace/local output) COMPLEX*16 array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. 
LWORK is local input and must be at least LWORK >= MAX( NB_A * ( NpA0 + MqA0 + NB_A ), MAX( (NB_A*(NB_A-1))/2, (PqB0 + NpB0)*NB_A ) + NB_A * NB_A, MB_B * ( NpB0 + PqB0 + MB_B ) ), where IROFFA = MOD( IA-1, MB_A ), ICOFFA = MOD( JA-1, NB_A ), IAROW = INDXG2P( IA, MB_A, MYROW, RSRC_A, NPROW ), IACOL = INDXG2P( JA, NB_A, MYCOL, CSRC_A, NPCOL ), NpA0 = NUMROC( N+IROFFA, MB_A, MYROW, IAROW, NPROW ), MqA0 = NUMROC( M+ICOFFA, NB_A, MYCOL, IACOL, NPCOL ), IROFFB = MOD( IB-1, MB_B ), ICOFFB = MOD( JB-1, NB_B ), IBROW = INDXG2P( IB, MB_B, MYROW, RSRC_B, NPROW ), IBCOL = INDXG2P( JB, NB_B, MYCOL, CSRC_B, NPCOL ), NpB0 = NUMROC( N+IROFFB, MB_B, MYROW, IBROW, NPROW ), PqB0 = NUMROC( P+ICOFFB, NB_B, MYCOL, IBCOL, NPCOL ), and NUMROC, INDXG2P are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 INFO (global output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. .SH FURTHER DETAILS The matrix Q is represented as a product of elementary reflectors Q = H(ja) H(ja+1) . . . H(ja+k-1), where k = min(n,m). Each H(i) has the form .br H(i) = I - taua * v * v' .br where taua is a complex scalar, and v is a complex vector with v(1:i-1) = 0 and v(i) = 1; v(i+1:n) is stored on exit in .br A(ia+i:ia+n-1,ja+i-1), and taua in TAUA(ja+i-1). .br To form Q explicitly, use ScaLAPACK subroutine PZUNGQR. .br To use Q to update another matrix, use ScaLAPACK subroutine PZUNMQR. The matrix Z is represented as a product of elementary reflectors Z = H(ib)' H(ib+1)' . . . 
H(ib+k-1)', where k = min(n,p). Each H(i) has the form
.br
H(i) = I - taub * v * v'
.br
where taub is a complex scalar, and v is a complex vector with v(p-k+i+1:p) = 0 and v(p-k+i) = 1; conjg(v(1:p-k+i-1)) is stored on exit in B(ib+n-k+i-1,jb:jb+p-k+i-2), and taub in TAUB(ib+n-k+i-1).
To form Z explicitly, use ScaLAPACK subroutine PZUNGRQ.
.br
To use Z to update another matrix, use ScaLAPACK subroutine PZUNMRQ.
Alignment requirements
.br
======================
.br
The distributed submatrices sub( A ) and sub( B ) must verify some alignment properties, namely the following expression should be true:
( MB_A.EQ.MB_B .AND. IROFFA.EQ.IROFFB .AND. IAROW.EQ.IBROW )
scalapack-doc-1.5/man/manl/pzggrqf.l
.TH PZGGRQF l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)"
.SH NAME
PZGGRQF - compute a generalized RQ factorization of an M-by-N matrix sub( A ) = A(IA:IA+M-1,JA:JA+N-1) and a P-by-N matrix sub( B ) = B(IB:IB+P-1,JB:JB+N-1)
.SH SYNOPSIS
.TP 20
SUBROUTINE PZGGRQF( M, P, N, A, IA, JA, DESCA, TAUA, B, IB, JB, DESCB, TAUB, WORK, LWORK, INFO )
.TP 20
.ti +4
INTEGER IA, IB, INFO, JA, JB, LWORK, M, N, P
.TP 20
.ti +4
INTEGER DESCA( * ), DESCB( * )
.TP 20
.ti +4
COMPLEX*16 A( * ), B( * ), TAUA( * ), TAUB( * ), WORK( * )
.SH PURPOSE
PZGGRQF computes a generalized RQ factorization of an M-by-N matrix sub( A ) = A(IA:IA+M-1,JA:JA+N-1) and a P-by-N matrix sub( B ) = B(IB:IB+P-1,JB:JB+N-1):
.br
sub( A ) = R*Q, sub( B ) = Z*T*Q,
.br
where Q is an N-by-N unitary matrix, Z is a P-by-P unitary matrix, and R and T assume one of the forms:
.br
if M <= N,  R = ( 0  R12 ) M,   or if M > N,  R = ( R11 ) M-N,
                                                  ( R21 ) N
                  N-M  M                             N
.br
where R12 or R21 is upper triangular, and
.br
if P >= N,  T = ( T11 ) N   ,   or if P < N,  T = ( T11  T12 ) P,
                (  0  ) P-N
                   N                                 N    N-P
.br
where T11 is upper triangular.
.br In particular, if sub( B ) is square and nonsingular, the GRQ factorization of sub( A ) and sub( B ) implicitly gives the RQ factorization of sub( A )*inv( sub( B ) ): .br sub( A )*inv( sub( B ) ) = (R*inv(T))*Z' .br where inv( sub( B ) ) denotes the inverse of the matrix sub( B ), and Z' denotes the conjugate transpose of matrix Z. .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. 
.br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 M (global input) INTEGER The number of rows to be operated on i.e the number of rows of the distributed submatrix sub( A ). M >= 0. .TP 8 P (global input) INTEGER The number of rows to be operated on i.e the number of rows of the distributed submatrix sub( B ). P >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on i.e the number of columns of the distributed submatrices sub( A ) and sub( B ). N >= 0. .TP 8 A (local input/local output) COMPLEX*16 pointer into the local memory to an array of dimension (LLD_A, LOCc(JA+N-1)). On entry, the local pieces of the M-by-N distributed matrix sub( A ) which is to be factored. On exit, if M <= N, the upper triangle of A( IA:IA+M-1, JA+N-M:JA+N-1 ) contains the M by M upper triangular matrix R; if M >= N, the elements on and above the (M-N)-th subdiagonal contain the M by N upper trapezoidal matrix R; the remaining elements, with the array TAUA, represent the unitary matrix Q as a product of elementary reflectors (see Further Details). IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. 
The array descriptor for the distributed matrix A. .TP 8 TAUA (local output) COMPLEX*16, array, dimension LOCr(IA+M-1) This array contains the scalar factors of the elementary reflectors which represent the unitary matrix Q. TAUA is tied to the distributed matrix A (see Further Details). .TP 8 B (local input/local output) COMPLEX*16 pointer into the local memory to an array of dimension (LLD_B, LOCc(JB+N-1)). On entry, the local pieces of the P-by-N distributed matrix sub( B ) which is to be factored. On exit, the elements on and above the diagonal of sub( B ) contain the min(P,N) by N upper trapezoidal matrix T (T is upper triangular if P >= N); the elements below the diagonal, with the array TAUB, represent the unitary matrix Z as a product of elementary reflectors (see Further Details). .TP 8 IB (global input) INTEGER The row index in the global array B indicating the first row of sub( B ). .TP 8 JB (global input) INTEGER The column index in the global array B indicating the first column of sub( B ). .TP 8 DESCB (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix B. .TP 8 TAUB (local output) COMPLEX*16, array, dimension LOCc(JB+MIN(P,N)-1). This array contains the scalar factors TAUB of the elementary reflectors which represent the unitary matrix Z. TAUB is tied to the distributed matrix B (see Further Details). .TP 8 WORK (local workspace/local output) COMPLEX*16 array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. 
LWORK is local input and must be at least LWORK >= MAX( MB_A * ( MpA0 + NqA0 + MB_A ), MAX( (MB_A*(MB_A-1))/2, (PpB0 + NqB0)*MB_A ) + MB_A * MB_A, NB_B * ( PpB0 + NqB0 + NB_B ) ), where IROFFA = MOD( IA-1, MB_A ), ICOFFA = MOD( JA-1, NB_A ), IAROW = INDXG2P( IA, MB_A, MYROW, RSRC_A, NPROW ), IACOL = INDXG2P( JA, NB_A, MYCOL, CSRC_A, NPCOL ), MpA0 = NUMROC( M+IROFFA, MB_A, MYROW, IAROW, NPROW ), NqA0 = NUMROC( N+ICOFFA, NB_A, MYCOL, IACOL, NPCOL ), IROFFB = MOD( IB-1, MB_B ), ICOFFB = MOD( JB-1, NB_B ), IBROW = INDXG2P( IB, MB_B, MYROW, RSRC_B, NPROW ), IBCOL = INDXG2P( JB, NB_B, MYCOL, CSRC_B, NPCOL ), PpB0 = NUMROC( P+IROFFB, MB_B, MYROW, IBROW, NPROW ), NqB0 = NUMROC( N+ICOFFB, NB_B, MYCOL, IBCOL, NPCOL ), and NUMROC, INDXG2P are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 INFO (global output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. .SH FURTHER DETAILS The matrix Q is represented as a product of elementary reflectors Q = H(ia)' H(ia+1)' . . . H(ia+k-1)', where k = min(m,n). Each H(i) has the form .br H(i) = I - taua * v * v' .br where taua is a complex scalar, and v is a complex vector with v(n-k+i+1:n) = 0 and v(n-k+i) = 1; conjg(v(1:n-k+i-1)) is stored on exit in A(ia+m-k+i-1,ja:ja+n-k+i-2), and taua in TAUA(ia+m-k+i-1). To form Q explicitly, use ScaLAPACK subroutine PZUNGRQ. .br To use Q to update another matrix, use ScaLAPACK subroutine PZUNMRQ. The matrix Z is represented as a product of elementary reflectors Z = H(jb) H(jb+1) . . . 
H(jb+k-1), where k = min(p,n). Each H(i) has the form .br H(i) = I - taub * v * v' .br where taub is a complex scalar, and v is a complex vector with v(1:i-1) = 0 and v(i) = 1; v(i+1:p) is stored on exit in .br B(ib+i:ib+p-1,jb+i-1), and taub in TAUB(jb+i-1). .br To form Z explicitly, use ScaLAPACK subroutine PZUNGQR. .br To use Z to update another matrix, use ScaLAPACK subroutine PZUNMQR. Alignment requirements .br ====================== .br The distributed submatrices sub( A ) and sub( B ) must satisfy the following alignment properties, i.e., the following expression must be true: ( NB_A.EQ.NB_B .AND. ICOFFA.EQ.ICOFFB .AND. IACOL.EQ.IBCOL ) scalapack-doc-1.5/man/manl/pzheevx.l0100644000056400000620000003541606335610656017130 0ustar pfrauenfstaff.TH PZHEEVX l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PZHEEVX - compute selected eigenvalues and, optionally, eigenvectors of a complex Hermitian matrix .SH SYNOPSIS .TP 20 SUBROUTINE PZHEEVX( JOBZ, RANGE, UPLO, N, A, IA, JA, DESCA, VL, VU, IL, IU, ABSTOL, M, NZ, W, ORFAC, Z, IZ, JZ, DESCZ, WORK, LWORK, RWORK, LRWORK, IWORK, LIWORK, IFAIL, ICLUSTR, GAP, INFO ) .TP 20 .ti +4 CHARACTER JOBZ, RANGE, UPLO .TP 20 .ti +4 INTEGER IA, IL, INFO, IU, IZ, JA, JZ, LIWORK, LRWORK, LWORK, M, N, NZ .TP 20 .ti +4 DOUBLE PRECISION ABSTOL, ORFAC, VL, VU .TP 20 .ti +4 INTEGER DESCA( * ), DESCZ( * ), ICLUSTR( * ), IFAIL( * ), IWORK( * ) .TP 20 .ti +4 DOUBLE PRECISION GAP( * ), RWORK( * ), W( * ) .TP 20 .ti +4 COMPLEX*16 A( * ), WORK( * ), Z( * ) .TP 20 .ti +4 INTEGER BLOCK_CYCLIC_2D, DLEN_, DTYPE_, CTXT_, M_, N_, MB_, NB_, RSRC_, CSRC_, LLD_ .TP 20 .ti +4 PARAMETER ( BLOCK_CYCLIC_2D = 1, DLEN_ = 9, DTYPE_ = 1, CTXT_ = 2, M_ = 3, N_ = 4, MB_ = 5, NB_ = 6, RSRC_ = 7, CSRC_ = 8, LLD_ = 9 ) .TP 20 .ti +4 DOUBLE PRECISION ZERO, ONE, TEN, FIVE .TP 20 .ti +4 PARAMETER ( ZERO = 0.0D+0, ONE = 1.0D+0, TEN = 10.0D+0, FIVE = 5.0D+0 ) .TP 20 .ti +4 INTEGER IERREIN, IERRCLS, IERRSPC, IERREBZ .TP 20 .ti +4 PARAMETER ( IERREIN = 1, IERRCLS = 2, IERRSPC = 4, IERREBZ = 8 ) .TP 20 .ti +4 LOGICAL ALLEIG, INDEIG, LOWER, LQUERY, 
QUICKRETURN, VALEIG, WANTZ .TP 20 .ti +4 CHARACTER ORDER .TP 20 .ti +4 INTEGER CSRC_A, I, IACOL, IAROW, ICOFFA, IINFO, INDD, INDD2, INDE, INDE2, INDIBL, INDISP, INDRWORK, INDTAU, INDWORK, IROFFA, IROFFZ, ISCALE, ISIZESTEBZ, ISIZESTEIN, IZROW, LALLWORK, LIWMIN, LLRWORK, LLWORK, LRWMIN, LWMIN, MAXEIGS, MB_A, MB_Z, MQ0, MYCOL, MYROW, NB, NB_A, NB_Z, NEIG, NN, NNP, NP0, NPCOL, NPROCS, NPROW, NQ0, NSPLIT, NZZ, OFFSET, RSRC_A, RSRC_Z, SIZEHEEVX, SIZEORMTR, SIZESTEIN .TP 20 .ti +4 DOUBLE PRECISION ABSTLL, ANRM, BIGNUM, EPS, RMAX, RMIN, SAFMIN, SIGMA, SMLNUM, VLL, VUU .TP 20 .ti +4 INTEGER IDUM1( 4 ), IDUM2( 4 ) .TP 20 .ti +4 LOGICAL LSAME .TP 20 .ti +4 INTEGER ICEIL, INDXG2P, NUMROC .TP 20 .ti +4 DOUBLE PRECISION PDLAMCH, PZLANHE .TP 20 .ti +4 EXTERNAL LSAME, ICEIL, INDXG2P, NUMROC, PDLAMCH, PZLANHE .TP 20 .ti +4 EXTERNAL BLACS_GRIDINFO, CHK1MAT, DGEBR2D, DGEBS2D, DLASRT, DSCAL, IGAMN2D, PCHK2MAT, PDLARED1D, PDSTEBZ, PXERBLA, PZELGET, PZHETRD, PZLASCL, PZSTEIN, PZUNMTR .TP 20 .ti +4 INTRINSIC ABS, DBLE, DCMPLX, ICHAR, MAX, MIN, MOD, SQRT .TP 20 .ti +4 IF( BLOCK_CYCLIC_2D*CSRC_*CTXT_*DLEN_*DTYPE_*LLD_*MB_*M_*NB_*N_* RSRC_.LT.0 )RETURN .TP 20 .ti +4 QUICKRETURN = ( N.EQ.0 ) .TP 20 .ti +4 CALL BLACS_GRIDINFO( DESCA( CTXT_ ), NPROW, NPCOL, MYROW, MYCOL ) .TP 20 .ti +4 INFO = 0 .TP 20 .ti +4 IF( NPROW.EQ.-1 ) THEN .TP 20 .ti +4 INFO = -( 800+CTXT_ ) .TP 20 .ti +4 ELSE IF( DESCA( CTXT_ ).NE.DESCZ( CTXT_ ) ) THEN .TP 20 .ti +4 INFO = -( 2100+CTXT_ ) .TP 20 .ti +4 ELSE .TP 20 .ti +4 CALL CHK1MAT( N, 4, N, 4, IA, JA, DESCA, 8, INFO ) .TP 20 .ti +4 CALL CHK1MAT( N, 4, N, 4, IZ, JZ, DESCZ, 21, INFO ) .TP 20 .ti +4 IF( INFO.EQ.0 ) THEN .TP 20 .ti +4 SAFMIN = PDLAMCH( DESCA( CTXT_ ), 'Safe minimum' ) .TP 20 .ti +4 EPS = PDLAMCH( DESCA( CTXT_ ), 'Precision' ) .TP 20 .ti +4 SMLNUM = SAFMIN / EPS .TP 20 .ti +4 BIGNUM = ONE / SMLNUM .TP 20 .ti +4 RMIN = SQRT( SMLNUM ) .TP 20 .ti +4 RMAX = MIN( SQRT( BIGNUM ), ONE / SQRT( SQRT( SAFMIN ) ) ) .TP 20 .ti +4 NPROCS = NPROW*NPCOL .TP 20 .ti +4 
LOWER = LSAME( UPLO, 'L' ) .TP 20 .ti +4 WANTZ = LSAME( JOBZ, 'V' ) .TP 20 .ti +4 ALLEIG = LSAME( RANGE, 'A' ) .TP 20 .ti +4 VALEIG = LSAME( RANGE, 'V' ) .TP 20 .ti +4 INDEIG = LSAME( RANGE, 'I' ) .TP 20 .ti +4 INDTAU = 1 .TP 20 .ti +4 INDWORK = INDTAU + N .TP 20 .ti +4 LLWORK = LWORK - INDWORK + 1 .TP 20 .ti +4 INDE = 1 .TP 20 .ti +4 INDD = INDE + N .TP 20 .ti +4 INDD2 = INDD + N .TP 20 .ti +4 INDE2 = INDD2 + N .TP 20 .ti +4 INDRWORK = INDE2 + N .TP 20 .ti +4 LLRWORK = LRWORK - INDRWORK + 1 .TP 20 .ti +4 ISIZESTEIN = 3*N + NPROCS + 1 .TP 20 .ti +4 ISIZESTEBZ = MAX( 4*N, 14, NPROCS ) .TP 20 .ti +4 INDIBL = ( MAX( ISIZESTEIN, ISIZESTEBZ ) ) + 1 .TP 20 .ti +4 INDISP = INDIBL + N .TP 20 .ti +4 LQUERY = .FALSE. .TP 20 .ti +4 IF( LWORK.EQ.-1 .OR. LIWORK.EQ.-1 .OR. LRWORK.EQ.-1 ) LQUERY = .TRUE. .TP 20 .ti +4 NNP = MAX( N, NPROCS+1, 4 ) .TP 20 .ti +4 LIWMIN = 6*NNP .TP 20 .ti +4 NPROCS = NPROW*NPCOL .TP 20 .ti +4 NB_A = DESCA( NB_ ) .TP 20 .ti +4 MB_A = DESCA( MB_ ) .TP 20 .ti +4 NB_Z = DESCZ( NB_ ) .TP 20 .ti +4 MB_Z = DESCZ( MB_ ) .TP 20 .ti +4 NB = NB_A .TP 20 .ti +4 NN = MAX( N, NB, 2 ) .TP 20 .ti +4 RSRC_A = DESCA( RSRC_ ) .TP 20 .ti +4 CSRC_A = DESCA( CSRC_ ) .TP 20 .ti +4 RSRC_Z = DESCZ( RSRC_ ) .TP 20 .ti +4 IROFFA = MOD( IA-1, MB_A ) .TP 20 .ti +4 ICOFFA = MOD( JA-1, NB_A ) .TP 20 .ti +4 IROFFZ = MOD( IZ-1, MB_A ) .TP 20 .ti +4 IAROW = INDXG2P( 1, NB_A, MYROW, RSRC_A, NPROW ) .TP 20 .ti +4 IACOL = INDXG2P( 1, MB_A, MYCOL, CSRC_A, NPCOL ) .TP 20 .ti +4 IZROW = INDXG2P( 1, NB_A, MYROW, RSRC_Z, NPROW ) .TP 20 .ti +4 NP0 = NUMROC( N+IROFFA, NB_Z, MYROW, IAROW, NPROW ) .TP 20 .ti +4 MQ0 = NUMROC( N+ICOFFA, NB_Z, MYCOL, IACOL, NPCOL ) .TP 20 .ti +4 IF( ( .NOT.WANTZ ) .OR. ( VALEIG .AND. ( .NOT.LQUERY ) ) ) THEN .TP 20 .ti +4 LWMIN = N + MAX( NB*( NP0+1 ), 3 ) .TP 20 .ti +4 LRWMIN = 5*NN + 4*N .TP 20 .ti +4 NEIG = 0 .TP 20 .ti +4 ELSE .TP 20 .ti +4 IF( ALLEIG .OR. 
VALEIG ) THEN .TP 20 .ti +4 NEIG = N .TP 20 .ti +4 ELSE IF( INDEIG ) THEN .TP 20 .ti +4 NEIG = IU - IL + 1 .TP 20 .ti +4 END IF .TP 20 .ti +4 MQ0 = NUMROC( MAX( NEIG, NB, 2 ), NB, MYCOL, IACOL, NPCOL ) .TP 20 .ti +4 NQ0 = NUMROC( NN, NB, 0, 0, NPCOL ) .TP 20 .ti +4 LWMIN = N + ( NP0+NQ0+NB )*NB .TP 20 .ti +4 LRWMIN = 4*N + MAX( 5*NN, NP0*MQ0 ) + ICEIL( NEIG, NPROW*NPCOL )*NN .TP 20 .ti +4 END IF .TP 20 .ti +4 END IF .TP 20 .ti +4 IF( INFO.EQ.0 ) THEN .TP 20 .ti +4 IF( MYROW.EQ.0 .AND. MYCOL.EQ.0 ) THEN .TP 20 .ti +4 RWORK( 1 ) = ABSTOL .TP 20 .ti +4 IF( VALEIG ) THEN .TP 20 .ti +4 RWORK( 2 ) = VL .TP 20 .ti +4 RWORK( 3 ) = VU .TP 20 .ti +4 ELSE .TP 20 .ti +4 RWORK( 2 ) = ZERO .TP 20 .ti +4 RWORK( 3 ) = ZERO .TP 20 .ti +4 END IF .TP 20 .ti +4 CALL DGEBS2D( DESCA( CTXT_ ), 'ALL', ' ', 3, 1, RWORK, 3 ) .TP 20 .ti +4 ELSE .TP 20 .ti +4 CALL DGEBR2D( DESCA( CTXT_ ), 'ALL', ' ', 3, 1, RWORK, 3, 0, 0 ) .TP 20 .ti +4 END IF .TP 20 .ti +4 IF( .NOT.( WANTZ .OR. LSAME( JOBZ, 'N' ) ) ) THEN .TP 20 .ti +4 INFO = -1 .TP 20 .ti +4 ELSE IF( .NOT.( ALLEIG .OR. VALEIG .OR. INDEIG ) ) THEN .TP 20 .ti +4 INFO = -2 .TP 20 .ti +4 ELSE IF( .NOT.( LOWER .OR. LSAME( UPLO, 'U' ) ) ) THEN .TP 20 .ti +4 INFO = -3 .TP 20 .ti +4 ELSE IF( VALEIG .AND. N.GT.0 .AND. VU.LE.VL ) THEN .TP 20 .ti +4 INFO = -10 .TP 20 .ti +4 ELSE IF( INDEIG .AND. ( IL.LT.1 .OR. IL.GT.MAX( 1, N ) ) ) THEN .TP 20 .ti +4 INFO = -11 .TP 20 .ti +4 ELSE IF( INDEIG .AND. ( IU.LT.MIN( N, IL ) .OR. IU.GT.N ) ) THEN .TP 20 .ti +4 INFO = -12 .TP 20 .ti +4 ELSE IF( LWORK.LT.LWMIN .AND. LWORK.NE.-1 ) THEN .TP 20 .ti +4 INFO = -23 .TP 20 .ti +4 ELSE IF( LRWORK.LT.LRWMIN .AND. LRWORK.NE.-1 ) THEN .TP 20 .ti +4 INFO = -25 .TP 20 .ti +4 ELSE IF( LIWORK.LT.LIWMIN .AND. LIWORK.NE.-1 ) THEN .TP 20 .ti +4 INFO = -27 .TP 20 .ti +4 ELSE IF( VALEIG .AND. ( ABS( RWORK( 2 )-VL ).GT.FIVE*EPS* ABS( VL ) ) ) THEN .TP 20 .ti +4 INFO = -9 .TP 20 .ti +4 ELSE IF( VALEIG .AND. 
( ABS( RWORK( 3 )-VU ).GT.FIVE*EPS* ABS( VU ) ) ) THEN .TP 20 .ti +4 INFO = -10 .TP 20 .ti +4 ELSE IF( ABS( RWORK( 1 )-ABSTOL ).GT.FIVE*EPS* ABS( ABSTOL ) ) THEN .TP 20 .ti +4 INFO = -13 .TP 20 .ti +4 ELSE IF( IROFFA.NE.IROFFZ ) THEN .TP 20 .ti +4 INFO = -19 .TP 20 .ti +4 ELSE IF( IROFFA.NE.0 ) THEN .TP 20 .ti +4 INFO = -6 .TP 20 .ti +4 ELSE IF( IAROW.NE.IZROW ) THEN .TP 20 .ti +4 INFO = -19 .TP 20 .ti +4 ELSE IF( DESCA( MB_ ).NE.DESCA( NB_ ) ) THEN .TP 20 .ti +4 INFO = -( 800+NB_ ) .TP 20 .ti +4 ELSE IF( DESCA( M_ ).NE.DESCZ( M_ ) ) THEN .TP 20 .ti +4 INFO = -( 2100+M_ ) .TP 20 .ti +4 ELSE IF( DESCA( N_ ).NE.DESCZ( N_ ) ) THEN .TP 20 .ti +4 INFO = -( 2100+N_ ) .TP 20 .ti +4 ELSE IF( DESCA( MB_ ).NE.DESCZ( MB_ ) ) THEN .TP 20 .ti +4 INFO = -( 2100+MB_ ) .TP 20 .ti +4 ELSE IF( DESCA( NB_ ).NE.DESCZ( NB_ ) ) THEN .TP 20 .ti +4 INFO = -( 2100+NB_ ) .TP 20 .ti +4 ELSE IF( DESCA( RSRC_ ).NE.DESCZ( RSRC_ ) ) THEN .TP 20 .ti +4 INFO = -( 2100+RSRC_ ) .TP 20 .ti +4 ELSE IF( DESCA( CSRC_ ).NE.DESCZ( CSRC_ ) ) THEN .TP 20 .ti +4 INFO = -( 2100+CSRC_ ) .TP 20 .ti +4 ELSE IF( DESCA( CTXT_ ).NE.DESCZ( CTXT_ ) ) THEN .TP 20 .ti +4 INFO = -( 2100+CTXT_ ) .TP 20 .ti +4 END IF .TP 20 .ti +4 END IF .TP 20 .ti +4 IF( WANTZ ) THEN .TP 20 .ti +4 IDUM1( 1 ) = ICHAR( 'V' ) .TP 20 .ti +4 ELSE .TP 20 .ti +4 IDUM1( 1 ) = ICHAR( 'N' ) .TP 20 .ti +4 END IF .TP 20 .ti +4 IDUM2( 1 ) = 1 .TP 20 .ti +4 IF( LOWER ) THEN .TP 20 .ti +4 IDUM1( 2 ) = ICHAR( 'L' ) .TP 20 .ti +4 ELSE .TP 20 .ti +4 IDUM1( 2 ) = ICHAR( 'U' ) .TP 20 .ti +4 END IF .TP 20 .ti +4 IDUM2( 2 ) = 2 .TP 20 .ti +4 IF( ALLEIG ) THEN .TP 20 .ti +4 IDUM1( 3 ) = ICHAR( 'A' ) .TP 20 .ti +4 ELSE IF( INDEIG ) THEN .TP 20 .ti +4 IDUM1( 3 ) = ICHAR( 'I' ) .TP 20 .ti +4 ELSE .TP 20 .ti +4 IDUM1( 3 ) = ICHAR( 'V' ) .TP 20 .ti +4 END IF .TP 20 .ti +4 IDUM2( 3 ) = 3 .TP 20 .ti +4 IF( LQUERY ) THEN .TP 20 .ti +4 IDUM1( 4 ) = -1 .TP 20 .ti +4 ELSE .TP 20 .ti +4 IDUM1( 4 ) = 1 .TP 20 .ti +4 END IF .TP 20 .ti +4 IDUM2( 4 ) = 4 .TP 20 .ti +4 CALL 
PCHK2MAT( N, 4, N, 4, IA, JA, DESCA, 8, N, 4, N, 4, IZ, JZ, DESCZ, 21, 4, IDUM1, IDUM2, INFO ) .TP 20 .ti +4 WORK( 1 ) = DCMPLX( LWMIN ) .TP 20 .ti +4 RWORK( 1 ) = DBLE( LRWMIN ) .TP 20 .ti +4 IWORK( 1 ) = LIWMIN .TP 20 .ti +4 END IF .TP 20 .ti +4 IF( INFO.NE.0 ) THEN .TP 20 .ti +4 CALL PXERBLA( DESCA( CTXT_ ), 'PZHEEVX', -INFO ) .TP 20 .ti +4 RETURN .TP 20 .ti +4 ELSE IF( LQUERY ) THEN .TP 20 .ti +4 RETURN .TP 20 .ti +4 END IF .TP 20 .ti +4 IF( QUICKRETURN ) THEN .TP 20 .ti +4 IF( WANTZ ) THEN .TP 20 .ti +4 NZ = 0 .TP 20 .ti +4 ICLUSTR( 1 ) = 0 .TP 20 .ti +4 END IF .TP 20 .ti +4 M = 0 .TP 20 .ti +4 WORK( 1 ) = DCMPLX( LWMIN ) .TP 20 .ti +4 RWORK( 1 ) = DBLE( LRWMIN ) .TP 20 .ti +4 IWORK( 1 ) = LIWMIN .TP 20 .ti +4 RETURN .TP 20 .ti +4 END IF .TP 20 .ti +4 ABSTLL = ABSTOL .TP 20 .ti +4 ISCALE = 0 .TP 20 .ti +4 IF( VALEIG ) THEN .TP 20 .ti +4 VLL = VL .TP 20 .ti +4 VUU = VU .TP 20 .ti +4 ELSE .TP 20 .ti +4 VLL = ZERO .TP 20 .ti +4 VUU = ZERO .TP 20 .ti +4 END IF .TP 20 .ti +4 ANRM = PZLANHE( '1', UPLO, N, A, IA, JA, DESCA, RWORK( INDRWORK ) ) .TP 20 .ti +4 IF( ANRM.GT.ZERO .AND. 
ANRM.LT.RMIN ) THEN .TP 20 .ti +4 ISCALE = 1 .TP 20 .ti +4 SIGMA = RMIN / ANRM .TP 20 .ti +4 ANRM = ANRM*SIGMA .TP 20 .ti +4 ELSE IF( ANRM.GT.RMAX ) THEN .TP 20 .ti +4 ISCALE = 1 .TP 20 .ti +4 SIGMA = RMAX / ANRM .TP 20 .ti +4 ANRM = ANRM*SIGMA .TP 20 .ti +4 END IF .TP 20 .ti +4 IF( ISCALE.EQ.1 ) THEN .TP 20 .ti +4 CALL PZLASCL( UPLO, ONE, SIGMA, N, N, A, IA, JA, DESCA, IINFO ) .TP 20 .ti +4 IF( ABSTOL.GT.0 ) ABSTLL = ABSTOL*SIGMA .TP 20 .ti +4 IF( VALEIG ) THEN .TP 20 .ti +4 VLL = VL*SIGMA .TP 20 .ti +4 VUU = VU*SIGMA .TP 20 .ti +4 IF( VUU.EQ.VLL ) THEN .TP 20 .ti +4 VUU = VUU + 2*MAX( ABS( VUU )*EPS, SAFMIN ) .TP 20 .ti +4 END IF .TP 20 .ti +4 END IF .TP 20 .ti +4 END IF .TP 20 .ti +4 LALLWORK = LLRWORK .TP 20 .ti +4 CALL PZHETRD( UPLO, N, A, IA, JA, DESCA, RWORK( INDD ), RWORK( INDE ), WORK( INDTAU ), WORK( INDWORK ), LLWORK, IINFO ) .TP 20 .ti +4 OFFSET = 0 .TP 20 .ti +4 IF( IA.EQ.1 .AND. JA.EQ.1 .AND. RSRC_A.EQ.0 .AND. CSRC_A.EQ.0 ) THEN .TP 20 .ti +4 CALL PDLARED1D( N, IA, JA, DESCA, RWORK( INDD ), RWORK( INDD2 ), RWORK( INDRWORK ), LLRWORK ) .TP 20 .ti +4 CALL PDLARED1D( N, IA, JA, DESCA, RWORK( INDE ), RWORK( INDE2 ), RWORK( INDRWORK ), LLRWORK ) .TP 20 .ti +4 IF( .NOT.LOWER ) OFFSET = 1 .TP 20 .ti +4 ELSE .TP 20 .ti +4 DO 10 I = 1, N .TP 20 .ti +4 CALL PZELGET( 'A', ' ', WORK( INDD2+I-1 ), A, I+IA-1, I+JA-1, DESCA ) .TP 20 .ti +4 RWORK( INDD2+I-1 ) = DBLE( WORK( INDD2+I-1 ) ) .TP 20 .ti +4 10 CONTINUE .TP 20 .ti +4 IF( LSAME( UPLO, 'U' ) ) THEN .TP 20 .ti +4 DO 20 I = 1, N - 1 .TP 20 .ti +4 CALL PZELGET( 'A', ' ', WORK( INDE2+I-1 ), A, I+IA-1, I+JA, DESCA ) .TP 20 .ti +4 RWORK( INDE2+I-1 ) = DBLE( WORK( INDE2+I-1 ) ) .TP 20 .ti +4 20 CONTINUE .TP 20 .ti +4 ELSE .TP 20 .ti +4 DO 30 I = 1, N - 1 .TP 20 .ti +4 CALL PZELGET( 'A', ' ', WORK( INDE2+I-1 ), A, I+IA, I+JA-1, DESCA ) .TP 20 .ti +4 RWORK( INDE2+I-1 ) = DBLE( WORK( INDE2+I-1 ) ) .TP 20 .ti +4 30 CONTINUE .TP 20 .ti +4 END IF .TP 20 .ti +4 END IF .TP 20 .ti +4 IF( WANTZ ) THEN .TP 20 .ti +4 ORDER = 'b' 
.TP 20 .ti +4 ELSE .TP 20 .ti +4 ORDER = 'e' .TP 20 .ti +4 END IF .TP 20 .ti +4 CALL PDSTEBZ( DESCA( CTXT_ ), RANGE, ORDER, N, VLL, VUU, IL, IU, ABSTLL, RWORK( INDD2 ), RWORK( INDE2+OFFSET ), M, NSPLIT, W, IWORK( INDIBL ), IWORK( INDISP ), RWORK( INDRWORK ), LLRWORK, IWORK( 1 ), ISIZESTEBZ, IINFO ) .TP 20 .ti +4 IF( IINFO.NE.0 ) THEN .TP 20 .ti +4 INFO = INFO + IERREBZ .TP 20 .ti +4 DO 40 I = 1, M .TP 20 .ti +4 IWORK( INDIBL+I-1 ) = ABS( IWORK( INDIBL+I-1 ) ) .TP 20 .ti +4 40 CONTINUE .TP 20 .ti +4 END IF .TP 20 .ti +4 IF( WANTZ ) THEN .TP 20 .ti +4 IF( VALEIG ) THEN .TP 20 .ti +4 CALL IGAMN2D( DESCA( CTXT_ ), 'A', ' ', 1, 1, LALLWORK, 1, 1, 1, -1, -1, -1 ) .TP 20 .ti +4 MAXEIGS = DESCZ( N_ ) .TP 20 .ti +4 DO 50 NZ = MIN( MAXEIGS, M ), 0, -1 .TP 20 .ti +4 MQ0 = NUMROC( NZ, NB, 0, 0, NPCOL ) .TP 20 .ti +4 SIZESTEIN = ICEIL( NZ, NPROCS )*N + MAX( 5*N, NP0*MQ0 ) .TP 20 .ti +4 SIZEORMTR = MAX( ( NB*( NB-1 ) ) / 2, ( MQ0+NP0 )*NB ) + NB*NB .TP 20 .ti +4 SIZEHEEVX = MAX( SIZESTEIN, SIZEORMTR ) .TP 20 .ti +4 IF( SIZEHEEVX.LE.LALLWORK ) GO TO 60 .TP 20 .ti +4 50 CONTINUE .TP 20 .ti +4 60 CONTINUE .TP 20 .ti +4 ELSE .TP 20 .ti +4 NZ = M .TP 20 .ti +4 END IF .TP 20 .ti +4 NZ = MAX( NZ, 0 ) .TP 20 .ti +4 IF( NZ.NE.M ) THEN .TP 20 .ti +4 INFO = INFO + IERRSPC .TP 20 .ti +4 DO 70 I = 1, M .TP 20 .ti +4 IFAIL( I ) = 0 .TP 20 .ti +4 70 CONTINUE .TP 20 .ti +4 IF( NSPLIT.GT.1 ) THEN .TP 20 .ti +4 CALL DLASRT( 'I', M, W, IINFO ) .TP 20 .ti +4 IF( NZ.GT.0 ) THEN .TP 20 .ti +4 VUU = W( NZ ) - TEN*( EPS*ANRM+SAFMIN ) .TP 20 .ti +4 IF( VLL.GE.VUU ) THEN .TP 20 .ti +4 NZZ = 0 .TP 20 .ti +4 ELSE .TP 20 .ti +4 CALL PDSTEBZ( DESCA( CTXT_ ), RANGE, ORDER, N, VLL, VUU, IL, IU, ABSTLL, RWORK( INDD2 ), RWORK( INDE2+ OFFSET ), NZZ, NSPLIT, W, IWORK( INDIBL ), IWORK( INDISP ), RWORK( INDRWORK ), LLRWORK, IWORK( 1 ), ISIZESTEBZ, IINFO ) .TP 20 .ti +4 END IF .TP 20 .ti +4 IF( MOD( INFO / IERREBZ, 1 ).EQ.0 ) THEN .TP 20 .ti +4 IF( NZZ.GT.NZ .OR. 
IINFO.NE.0 ) THEN .TP 20 .ti +4 INFO = INFO + IERREBZ .TP 20 .ti +4 END IF .TP 20 .ti +4 END IF .TP 20 .ti +4 END IF .TP 20 .ti +4 NZ = MIN( NZ, NZZ ) .TP 20 .ti +4 END IF .TP 20 .ti +4 END IF .TP 20 .ti +4 CALL PZSTEIN( N, RWORK( INDD2 ), RWORK( INDE2+OFFSET ), NZ, W, IWORK( INDIBL ), IWORK( INDISP ), ORFAC, Z, IZ, JZ, DESCZ, RWORK( INDRWORK ), LALLWORK, IWORK( 1 ), ISIZESTEIN, IFAIL, ICLUSTR, GAP, IINFO ) .TP 20 .ti +4 IF( IINFO.GE.NZ+1 ) INFO = INFO + IERRCLS .TP 20 .ti +4 IF( MOD( IINFO, NZ+1 ).NE.0 ) INFO = INFO + IERREIN .TP 20 .ti +4 IF( NZ.GT.0 ) THEN .TP 20 .ti +4 CALL PZUNMTR( 'L', UPLO, 'N', N, NZ, A, IA, JA, DESCA, WORK( INDTAU ), Z, IZ, JZ, DESCZ, WORK( INDWORK ), LLWORK, IINFO ) .TP 20 .ti +4 END IF .TP 20 .ti +4 END IF .TP 20 .ti +4 IF( ISCALE.EQ.1 ) THEN .TP 20 .ti +4 CALL DSCAL( M, ONE / SIGMA, W, 1 ) .TP 20 .ti +4 END IF .TP 20 .ti +4 WORK( 1 ) = DCMPLX( LWMIN ) .TP 20 .ti +4 RWORK( 1 ) = DBLE( LRWMIN ) .TP 20 .ti +4 IWORK( 1 ) = LIWMIN .TP 20 .ti +4 RETURN .TP 20 .ti +4 END .SH PURPOSE scalapack-doc-1.5/man/manl/pzhegs2.l0100644000056400000620000001417006335610656017013 0ustar pfrauenfstaff.TH PZHEGS2 l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PZHEGS2 - reduce a complex Hermitian-definite generalized eigenproblem to standard form .SH SYNOPSIS .TP 20 SUBROUTINE PZHEGS2( IBTYPE, UPLO, N, A, IA, JA, DESCA, B, IB, JB, DESCB, INFO ) .TP 20 .ti +4 CHARACTER UPLO .TP 20 .ti +4 INTEGER IA, IB, IBTYPE, INFO, JA, JB, N .TP 20 .ti +4 INTEGER DESCA( * ), DESCB( * ) .TP 20 .ti +4 COMPLEX*16 A( * ), B( * ) .SH PURPOSE PZHEGS2 reduces a complex Hermitian-definite generalized eigenproblem to standard form. In the following sub( A ) denotes A( IA:IA+N-1, JA:JA+N-1 ) and sub( B ) denotes B( IB:IB+N-1, JB:JB+N-1 ). 
.br If IBTYPE = 1, the problem is sub( A )*x = lambda*sub( B )*x, and sub( A ) is overwritten by inv(U**H)*sub( A )*inv(U) or inv(L)*sub( A )*inv(L**H) .br If IBTYPE = 2 or 3, the problem is sub( A )*sub( B )*x = lambda*x or sub( B )*sub( A )*x = lambda*x, and sub( A ) is overwritten by U*sub( A )*U**H or L**H*sub( A )*L. .br sub( B ) must have been previously factorized as U**H*U or L*L**H by PZPOTRF. .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). 
Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 9 IBTYPE (global input) INTEGER = 1: compute inv(U**H)*sub( A )*inv(U) or inv(L)*sub( A )*inv(L**H); = 2 or 3: compute U*sub( A )*U**H or L**H*sub( A )*L. .TP 8 UPLO (global input) CHARACTER .br = 'U': Upper triangle of sub( A ) is stored and sub( B ) is factored as U**H*U; = 'L': Lower triangle of sub( A ) is stored and sub( B ) is factored as L*L**H. .TP 8 N (global input) INTEGER The order of the matrices sub( A ) and sub( B ). N >= 0. .TP 8 A (local input/local output) COMPLEX*16 pointer into the local memory to an array of dimension (LLD_A, LOCc(JA+N-1)). On entry, this array contains the local pieces of the N-by-N Hermitian distributed matrix sub( A ). If UPLO = 'U', the leading N-by-N upper triangular part of sub( A ) contains the upper triangular part of the matrix, and its strictly lower triangular part is not referenced. If UPLO = 'L', the leading N-by-N lower triangular part of sub( A ) contains the lower triangular part of the matrix, and its strictly upper triangular part is not referenced. On exit, if INFO = 0, the transformed matrix, stored in the same format as sub( A ). 
.TP 8 IA (global input) INTEGER A's global row index, which points to the beginning of the submatrix which is to be operated on. .TP 8 JA (global input) INTEGER A's global column index, which points to the beginning of the submatrix which is to be operated on. .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 B (local input) COMPLEX*16 pointer into the local memory to an array of dimension (LLD_B, LOCc(JB+N-1)). On entry, this array contains the local pieces of the triangular factor from the Cholesky factorization of sub( B ), as returned by PZPOTRF. .TP 8 IB (global input) INTEGER B's global row index, which points to the beginning of the submatrix which is to be operated on. .TP 8 JB (global input) INTEGER B's global column index, which points to the beginning of the submatrix which is to be operated on. .TP 8 DESCB (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix B. .TP 8 INFO (global output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. scalapack-doc-1.5/man/manl/pzhegst.l0100644000056400000620000001463706335610656017125 0ustar pfrauenfstaff.TH PZHEGST l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PZHEGST - reduce a complex Hermitian-definite generalized eigenproblem to standard form .SH SYNOPSIS .TP 20 SUBROUTINE PZHEGST( IBTYPE, UPLO, N, A, IA, JA, DESCA, B, IB, JB, DESCB, SCALE, INFO ) .TP 20 .ti +4 CHARACTER UPLO .TP 20 .ti +4 INTEGER IA, IB, IBTYPE, INFO, JA, JB, N .TP 20 .ti +4 DOUBLE PRECISION SCALE .TP 20 .ti +4 INTEGER DESCA( * ), DESCB( * ) .TP 20 .ti +4 COMPLEX*16 A( * ), B( * ) .SH PURPOSE PZHEGST reduces a complex Hermitian-definite generalized eigenproblem to standard form. 
In the following sub( A ) denotes A( IA:IA+N-1, JA:JA+N-1 ) and sub( B ) denotes B( IB:IB+N-1, JB:JB+N-1 ). .br If IBTYPE = 1, the problem is sub( A )*x = lambda*sub( B )*x, and sub( A ) is overwritten by inv(U**H)*sub( A )*inv(U) or inv(L)*sub( A )*inv(L**H) .br If IBTYPE = 2 or 3, the problem is sub( A )*sub( B )*x = lambda*x or sub( B )*sub( A )*x = lambda*x, and sub( A ) is overwritten by U*sub( A )*U**H or L**H*sub( A )*L. .br sub( B ) must have been previously factorized as U**H*U or L*L**H by PZPOTRF. .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. 
LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 9 IBTYPE (global input) INTEGER = 1: compute inv(U**H)*sub( A )*inv(U) or inv(L)*sub( A )*inv(L**H); = 2 or 3: compute U*sub( A )*U**H or L**H*sub( A )*L. .TP 8 UPLO (global input) CHARACTER .br = 'U': Upper triangle of sub( A ) is stored and sub( B ) is factored as U**H*U; = 'L': Lower triangle of sub( A ) is stored and sub( B ) is factored as L*L**H. .TP 8 N (global input) INTEGER The order of the matrices sub( A ) and sub( B ). N >= 0. .TP 8 A (local input/local output) COMPLEX*16 pointer into the local memory to an array of dimension (LLD_A, LOCc(JA+N-1)). On entry, this array contains the local pieces of the N-by-N Hermitian distributed matrix sub( A ). If UPLO = 'U', the leading N-by-N upper triangular part of sub( A ) contains the upper triangular part of the matrix, and its strictly lower triangular part is not referenced. If UPLO = 'L', the leading N-by-N lower triangular part of sub( A ) contains the lower triangular part of the matrix, and its strictly upper triangular part is not referenced. On exit, if INFO = 0, the transformed matrix, stored in the same format as sub( A ). 
.TP 8 IA (global input) INTEGER A's global row index, which points to the beginning of the submatrix which is to be operated on. .TP 8 JA (global input) INTEGER A's global column index, which points to the beginning of the submatrix which is to be operated on. .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 B (local input) COMPLEX*16 pointer into the local memory to an array of dimension (LLD_B, LOCc(JB+N-1)). On entry, this array contains the local pieces of the triangular factor from the Cholesky factorization of sub( B ), as returned by PZPOTRF. .TP 8 IB (global input) INTEGER B's global row index, which points to the beginning of the submatrix which is to be operated on. .TP 8 JB (global input) INTEGER B's global column index, which points to the beginning of the submatrix which is to be operated on. .TP 8 DESCB (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix B. .TP 8 SCALE (global output) DOUBLE PRECISION Amount by which the eigenvalues should be scaled to compensate for the scaling performed in this routine. At present, SCALE is always returned as 1.0, it is returned here to allow for future enhancement. .TP 8 INFO (global output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. 
scalapack-doc-1.5/man/manl/pzhegvx.l0100644000056400000620000002457006335610657017132 0ustar pfrauenfstaff.TH PZHEGVX l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME .SH SYNOPSIS .TP 20 SUBROUTINE PZHEGVX( IBTYPE, JOBZ, RANGE, UPLO, N, A, IA, JA, DESCA, B, IB, JB, DESCB, VL, VU, IL, IU, ABSTOL, M, NZ, W, ORFAC, Z, IZ, JZ, DESCZ, WORK, LWORK, RWORK, LRWORK, IWORK, LIWORK, IFAIL, ICLUSTR, GAP, INFO ) .TP 20 .ti +4 CHARACTER JOBZ, RANGE, UPLO .TP 20 .ti +4 INTEGER IA, IB, IBTYPE, IL, INFO, IU, IZ, JA, JB, JZ, LIWORK, LRWORK, LWORK, M, N, NZ .TP 20 .ti +4 DOUBLE PRECISION ABSTOL, ORFAC, VL, VU .TP 20 .ti +4 INTEGER DESCA( * ), DESCB( * ), DESCZ( * ), ICLUSTR( * ), IFAIL( * ), IWORK( * ) .TP 20 .ti +4 DOUBLE PRECISION GAP( * ), RWORK( * ), W( * ) .TP 20 .ti +4 COMPLEX*16 A( * ), B( * ), WORK( * ), Z( * ) .TP 20 .ti +4 INTEGER BLOCK_CYCLIC_2D, DLEN_, DTYPE_, CTXT_, M_, N_, MB_, NB_, RSRC_, CSRC_, LLD_ .TP 20 .ti +4 PARAMETER ( BLOCK_CYCLIC_2D = 1, DLEN_ = 9, DTYPE_ = 1, CTXT_ = 2, M_ = 3, N_ = 4, MB_ = 5, NB_ = 6, RSRC_ = 7, CSRC_ = 8, LLD_ = 9 ) .TP 20 .ti +4 COMPLEX*16 ONE .TP 20 .ti +4 PARAMETER ( ONE = 1.0D+0 ) .TP 20 .ti +4 DOUBLE PRECISION FIVE, ZERO .TP 20 .ti +4 PARAMETER ( FIVE = 5.0D+0, ZERO = 0.0D+0 ) .TP 20 .ti +4 INTEGER IERRNPD .TP 20 .ti +4 PARAMETER ( IERRNPD = 16 ) .TP 20 .ti +4 LOGICAL ALLEIG, INDEIG, LQUERY, UPPER, VALEIG, WANTZ .TP 20 .ti +4 CHARACTER TRANS .TP 20 .ti +4 INTEGER IACOL, IAROW, IBCOL, IBROW, ICOFFA, ICOFFB, ICTXT, IROFFA, IROFFB, LIWMIN, LRWMIN, LWMIN, MQ0, MYCOL, MYROW, NB, NEIG, NN, NP0, NPCOL, NPROW .TP 20 .ti +4 DOUBLE PRECISION EPS, SCALE .TP 20 .ti +4 INTEGER IDUM1( 5 ), IDUM2( 5 ) .TP 20 .ti +4 LOGICAL LSAME .TP 20 .ti +4 INTEGER ICEIL, INDXG2P, NUMROC .TP 20 .ti +4 DOUBLE PRECISION PDLAMCH .TP 20 .ti +4 EXTERNAL LSAME, ICEIL, INDXG2P, NUMROC, PDLAMCH .TP 20 .ti +4 EXTERNAL BLACS_GRIDINFO, CHK1MAT, DGEBR2D, DGEBS2D, DSCAL, PCHK1MAT, PCHK2MAT, PXERBLA, PZHEEVX, PZHEGST, PZPOTRF, PZTRMM, PZTRSM .TP 20 
.ti +4 INTRINSIC ABS, DBLE, DCMPLX, ICHAR, MAX, MIN, MOD .TP 20 .ti +4 IF( BLOCK_CYCLIC_2D*CSRC_*CTXT_*DLEN_*DTYPE_*LLD_*MB_*M_*NB_*N_* RSRC_.LT.0 )RETURN .TP 20 .ti +4 ICTXT = DESCA( CTXT_ ) .TP 20 .ti +4 CALL BLACS_GRIDINFO( ICTXT, NPROW, NPCOL, MYROW, MYCOL ) .TP 20 .ti +4 INFO = 0 .TP 20 .ti +4 IF( NPROW.EQ.-1 ) THEN .TP 20 .ti +4 INFO = -( 900+CTXT_ ) .TP 20 .ti +4 ELSE IF( DESCA( CTXT_ ).NE.DESCB( CTXT_ ) ) THEN .TP 20 .ti +4 INFO = -( 1300+CTXT_ ) .TP 20 .ti +4 ELSE IF( DESCA( CTXT_ ).NE.DESCZ( CTXT_ ) ) THEN .TP 20 .ti +4 INFO = -( 2600+CTXT_ ) .TP 20 .ti +4 ELSE .TP 20 .ti +4 EPS = PDLAMCH( DESCA( CTXT_ ), 'Precision' ) .TP 20 .ti +4 WANTZ = LSAME( JOBZ, 'V' ) .TP 20 .ti +4 UPPER = LSAME( UPLO, 'U' ) .TP 20 .ti +4 ALLEIG = LSAME( RANGE, 'A' ) .TP 20 .ti +4 VALEIG = LSAME( RANGE, 'V' ) .TP 20 .ti +4 INDEIG = LSAME( RANGE, 'I' ) .TP 20 .ti +4 CALL CHK1MAT( N, 4, N, 4, IA, JA, DESCA, 9, INFO ) .TP 20 .ti +4 CALL CHK1MAT( N, 4, N, 4, IB, JB, DESCB, 13, INFO ) .TP 20 .ti +4 CALL CHK1MAT( N, 4, N, 4, IZ, JZ, DESCZ, 26, INFO ) .TP 20 .ti +4 IF( INFO.EQ.0 ) THEN .TP 20 .ti +4 IF( MYROW.EQ.0 .AND. 
MYCOL.EQ.0 ) THEN .TP 20 .ti +4 RWORK( 1 ) = ABSTOL .TP 20 .ti +4 IF( VALEIG ) THEN .TP 20 .ti +4 RWORK( 2 ) = VL .TP 20 .ti +4 RWORK( 3 ) = VU .TP 20 .ti +4 ELSE .TP 20 .ti +4 RWORK( 2 ) = ZERO .TP 20 .ti +4 RWORK( 3 ) = ZERO .TP 20 .ti +4 END IF .TP 20 .ti +4 CALL DGEBS2D( DESCA( CTXT_ ), 'ALL', ' ', 3, 1, RWORK, 3 ) .TP 20 .ti +4 ELSE .TP 20 .ti +4 CALL DGEBR2D( DESCA( CTXT_ ), 'ALL', ' ', 3, 1, RWORK, 3, 0, 0 ) .TP 20 .ti +4 END IF .TP 20 .ti +4 IAROW = INDXG2P( IA, DESCA( MB_ ), MYROW, DESCA( RSRC_ ), NPROW ) .TP 20 .ti +4 IBROW = INDXG2P( IB, DESCB( MB_ ), MYROW, DESCB( RSRC_ ), NPROW ) .TP 20 .ti +4 IACOL = INDXG2P( JA, DESCA( NB_ ), MYCOL, DESCA( CSRC_ ), NPCOL ) .TP 20 .ti +4 IBCOL = INDXG2P( JB, DESCB( NB_ ), MYCOL, DESCB( CSRC_ ), NPCOL ) .TP 20 .ti +4 IROFFA = MOD( IA-1, DESCA( MB_ ) ) .TP 20 .ti +4 ICOFFA = MOD( JA-1, DESCA( NB_ ) ) .TP 20 .ti +4 IROFFB = MOD( IB-1, DESCB( MB_ ) ) .TP 20 .ti +4 ICOFFB = MOD( JB-1, DESCB( NB_ ) ) .TP 20 .ti +4 LQUERY = .FALSE. .TP 20 .ti +4 IF( LWORK.EQ.-1 .OR. LIWORK.EQ.-1 .OR. LRWORK.EQ.-1 ) LQUERY = .TRUE. .TP 20 .ti +4 LIWMIN = 6*MAX( N, ( NPROW*NPCOL )+1, 4 ) .TP 20 .ti +4 NB = DESCA( MB_ ) .TP 20 .ti +4 NN = MAX( N, NB, 2 ) .TP 20 .ti +4 NP0 = NUMROC( NN, NB, 0, 0, NPROW ) .TP 20 .ti +4 IF( ( .NOT.WANTZ ) .OR. ( VALEIG .AND. ( .NOT.LQUERY ) ) ) THEN .TP 20 .ti +4 LWMIN = N + MAX( NB*( NP0+1 ), 3 ) .TP 20 .ti +4 LRWMIN = 5*NN + 4*N .TP 20 .ti +4 NEIG = 0 .TP 20 .ti +4 ELSE .TP 20 .ti +4 IF( ALLEIG .OR. VALEIG ) THEN .TP 20 .ti +4 NEIG = N .TP 20 .ti +4 ELSE IF( INDEIG ) THEN .TP 20 .ti +4 NEIG = IU - IL + 1 .TP 20 .ti +4 END IF .TP 20 .ti +4 MQ0 = NUMROC( MAX( NEIG, NB, 2 ), NB, 0, 0, NPCOL ) .TP 20 .ti +4 LWMIN = N + ( NP0+MQ0+NB )*NB .TP 20 .ti +4 LRWMIN = 4*N + MAX( 5*NN, NP0*MQ0 ) + ICEIL( NEIG, NPROW*NPCOL )*NN .TP 20 .ti +4 END IF .TP 20 .ti +4 IF( IBTYPE.LT.1 .OR. IBTYPE.GT.3 ) THEN .TP 20 .ti +4 INFO = -1 .TP 20 .ti +4 ELSE IF( .NOT.( WANTZ .OR. 
LSAME( JOBZ, 'N' ) ) ) THEN .TP 20 .ti +4 INFO = -2 .TP 20 .ti +4 ELSE IF( .NOT.( ALLEIG .OR. VALEIG .OR. INDEIG ) ) THEN .TP 20 .ti +4 INFO = -3 .TP 20 .ti +4 ELSE IF( .NOT.UPPER .AND. .NOT.LSAME( UPLO, 'L' ) ) THEN .TP 20 .ti +4 INFO = -4 .TP 20 .ti +4 ELSE IF( N.LT.0 ) THEN .TP 20 .ti +4 INFO = -5 .TP 20 .ti +4 ELSE IF( IROFFA.NE.0 ) THEN .TP 20 .ti +4 INFO = -7 .TP 20 .ti +4 ELSE IF( ICOFFA.NE.0 ) THEN .TP 20 .ti +4 INFO = -8 .TP 20 .ti +4 ELSE IF( DESCA( MB_ ).NE.DESCA( NB_ ) ) THEN .TP 20 .ti +4 INFO = -( 900+NB_ ) .TP 20 .ti +4 ELSE IF( DESCA( M_ ).NE.DESCB( M_ ) ) THEN .TP 20 .ti +4 INFO = -( 1300+M_ ) .TP 20 .ti +4 ELSE IF( DESCA( N_ ).NE.DESCB( N_ ) ) THEN .TP 20 .ti +4 INFO = -( 1300+N_ ) .TP 20 .ti +4 ELSE IF( DESCA( MB_ ).NE.DESCB( MB_ ) ) THEN .TP 20 .ti +4 INFO = -( 1300+MB_ ) .TP 20 .ti +4 ELSE IF( DESCA( NB_ ).NE.DESCB( NB_ ) ) THEN .TP 20 .ti +4 INFO = -( 1300+NB_ ) .TP 20 .ti +4 ELSE IF( DESCA( RSRC_ ).NE.DESCB( RSRC_ ) ) THEN .TP 20 .ti +4 INFO = -( 1300+RSRC_ ) .TP 20 .ti +4 ELSE IF( DESCA( CSRC_ ).NE.DESCB( CSRC_ ) ) THEN .TP 20 .ti +4 INFO = -( 1300+CSRC_ ) .TP 20 .ti +4 ELSE IF( DESCA( CTXT_ ).NE.DESCB( CTXT_ ) ) THEN .TP 20 .ti +4 INFO = -( 1300+CTXT_ ) .TP 20 .ti +4 ELSE IF( DESCA( M_ ).NE.DESCZ( M_ ) ) THEN .TP 20 .ti +4 INFO = -( 2200+M_ ) .TP 20 .ti +4 ELSE IF( DESCA( N_ ).NE.DESCZ( N_ ) ) THEN .TP 20 .ti +4 INFO = -( 2200+N_ ) .TP 20 .ti +4 ELSE IF( DESCA( MB_ ).NE.DESCZ( MB_ ) ) THEN .TP 20 .ti +4 INFO = -( 2200+MB_ ) .TP 20 .ti +4 ELSE IF( DESCA( NB_ ).NE.DESCZ( NB_ ) ) THEN .TP 20 .ti +4 INFO = -( 2200+NB_ ) .TP 20 .ti +4 ELSE IF( DESCA( RSRC_ ).NE.DESCZ( RSRC_ ) ) THEN .TP 20 .ti +4 INFO = -( 2200+RSRC_ ) .TP 20 .ti +4 ELSE IF( DESCA( CSRC_ ).NE.DESCZ( CSRC_ ) ) THEN .TP 20 .ti +4 INFO = -( 2200+CSRC_ ) .TP 20 .ti +4 ELSE IF( DESCA( CTXT_ ).NE.DESCZ( CTXT_ ) ) THEN .TP 20 .ti +4 INFO = -( 2200+CTXT_ ) .TP 20 .ti +4 ELSE IF( IROFFB.NE.0 .OR. IBROW.NE.IAROW ) THEN .TP 20 .ti +4 INFO = -11 .TP 20 .ti +4 ELSE IF( ICOFFB.NE.0 .OR. 
IBCOL.NE.IACOL ) THEN .TP 20 .ti +4 INFO = -12 .TP 20 .ti +4 ELSE IF( VALEIG .AND. N.GT.0 .AND. VU.LE.VL ) THEN .TP 20 .ti +4 INFO = -15 .TP 20 .ti +4 ELSE IF( INDEIG .AND. ( IL.LT.1 .OR. IL.GT.MAX( 1, N ) ) ) THEN .TP 20 .ti +4 INFO = -16 .TP 20 .ti +4 ELSE IF( INDEIG .AND. ( IU.LT.MIN( N, IL ) .OR. IU.GT.N ) ) THEN .TP 20 .ti +4 INFO = -17 .TP 20 .ti +4 ELSE IF( VALEIG .AND. ( ABS( RWORK( 2 )-VL ).GT.FIVE*EPS* ABS( VL ) ) ) THEN .TP 20 .ti +4 INFO = -14 .TP 20 .ti +4 ELSE IF( VALEIG .AND. ( ABS( RWORK( 3 )-VU ).GT.FIVE*EPS* ABS( VU ) ) ) THEN .TP 20 .ti +4 INFO = -15 .TP 20 .ti +4 ELSE IF( ABS( RWORK( 1 )-ABSTOL ).GT.FIVE*EPS* ABS( ABSTOL ) ) THEN .TP 20 .ti +4 INFO = -18 .TP 20 .ti +4 ELSE IF( LWORK.LT.LWMIN .AND. LWORK.NE.-1 ) THEN .TP 20 .ti +4 INFO = -28 .TP 20 .ti +4 ELSE IF( LRWORK.LT.LRWMIN .AND. LRWORK.NE.-1 ) THEN .TP 20 .ti +4 INFO = -30 .TP 20 .ti +4 ELSE IF( LIWORK.LT.LIWMIN .AND. LIWORK.NE.-1 ) THEN .TP 20 .ti +4 INFO = -32 .TP 20 .ti +4 END IF .TP 20 .ti +4 END IF .TP 20 .ti +4 IDUM1( 1 ) = IBTYPE .TP 20 .ti +4 IDUM2( 1 ) = 1 .TP 20 .ti +4 IF( WANTZ ) THEN .TP 20 .ti +4 IDUM1( 2 ) = ICHAR( 'V' ) .TP 20 .ti +4 ELSE .TP 20 .ti +4 IDUM1( 2 ) = ICHAR( 'N' ) .TP 20 .ti +4 END IF .TP 20 .ti +4 IDUM2( 2 ) = 2 .TP 20 .ti +4 IF( UPPER ) THEN .TP 20 .ti +4 IDUM1( 3 ) = ICHAR( 'U' ) .TP 20 .ti +4 ELSE .TP 20 .ti +4 IDUM1( 3 ) = ICHAR( 'L' ) .TP 20 .ti +4 END IF .TP 20 .ti +4 IDUM2( 3 ) = 3 .TP 20 .ti +4 IF( ALLEIG ) THEN .TP 20 .ti +4 IDUM1( 4 ) = ICHAR( 'A' ) .TP 20 .ti +4 ELSE IF( INDEIG ) THEN .TP 20 .ti +4 IDUM1( 4 ) = ICHAR( 'I' ) .TP 20 .ti +4 ELSE .TP 20 .ti +4 IDUM1( 4 ) = ICHAR( 'V' ) .TP 20 .ti +4 END IF .TP 20 .ti +4 IDUM2( 4 ) = 4 .TP 20 .ti +4 IF( LQUERY ) THEN .TP 20 .ti +4 IDUM1( 5 ) = -1 .TP 20 .ti +4 ELSE .TP 20 .ti +4 IDUM1( 5 ) = 1 .TP 20 .ti +4 END IF .TP 20 .ti +4 IDUM2( 5 ) = 5 .TP 20 .ti +4 CALL PCHK2MAT( N, 4, N, 4, IA, JA, DESCA, 9, N, 4, N, 4, IB, JB, DESCB, 13, 5, IDUM1, IDUM2, INFO ) .TP 20 .ti +4 CALL PCHK1MAT( N, 4, N, 4, IZ, JZ, 
DESCZ, 26, 0, IDUM1, IDUM2, INFO ) .TP 20 .ti +4 END IF .TP 20 .ti +4 WORK( 1 ) = DCMPLX( DBLE( LWMIN ) ) .TP 20 .ti +4 RWORK( 1 ) = DBLE( LRWMIN ) .TP 20 .ti +4 IWORK( 1 ) = LIWMIN .TP 20 .ti +4 IF( INFO.NE.0 ) THEN .TP 20 .ti +4 CALL PXERBLA( ICTXT, 'PZHEGVX ', -INFO ) .TP 20 .ti +4 RETURN .TP 20 .ti +4 ELSE IF( LQUERY ) THEN .TP 20 .ti +4 RETURN .TP 20 .ti +4 END IF .TP 20 .ti +4 CALL PZPOTRF( UPLO, N, B, IB, JB, DESCB, INFO ) .TP 20 .ti +4 IF( INFO.NE.0 ) THEN .TP 20 .ti +4 IFAIL( 1 ) = INFO .TP 20 .ti +4 INFO = IERRNPD .TP 20 .ti +4 RETURN .TP 20 .ti +4 END IF .TP 20 .ti +4 CALL PZHEGST( IBTYPE, UPLO, N, A, IA, JA, DESCA, B, IB, JB, DESCB, SCALE, INFO ) .TP 20 .ti +4 CALL PZHEEVX( JOBZ, RANGE, UPLO, N, A, IA, JA, DESCA, VL, VU, IL, IU, ABSTOL, M, NZ, W, ORFAC, Z, IZ, JZ, DESCZ, WORK, LWORK, RWORK, LRWORK, IWORK, LIWORK, IFAIL, ICLUSTR, GAP, INFO ) .TP 20 .ti +4 IF( WANTZ ) THEN .TP 20 .ti +4 NEIG = M .TP 20 .ti +4 IF( IBTYPE.EQ.1 .OR. IBTYPE.EQ.2 ) THEN .TP 20 .ti +4 IF( UPPER ) THEN .TP 20 .ti +4 TRANS = 'N' .TP 20 .ti +4 ELSE .TP 20 .ti +4 TRANS = 'C' .TP 20 .ti +4 END IF .TP 20 .ti +4 CALL PZTRSM( 'Left', UPLO, TRANS, 'Non-unit', N, NEIG, ONE, B, IB, JB, DESCB, Z, IZ, JZ, DESCZ ) .TP 20 .ti +4 ELSE IF( IBTYPE.EQ.3 ) THEN .TP 20 .ti +4 IF( UPPER ) THEN .TP 20 .ti +4 TRANS = 'C' .TP 20 .ti +4 ELSE .TP 20 .ti +4 TRANS = 'N' .TP 20 .ti +4 END IF .TP 20 .ti +4 CALL PZTRMM( 'Left', UPLO, TRANS, 'Non-unit', N, NEIG, ONE, B, IB, JB, DESCB, Z, IZ, JZ, DESCZ ) .TP 20 .ti +4 END IF .TP 20 .ti +4 END IF .TP 20 .ti +4 IF( SCALE.NE.ONE ) THEN .TP 20 .ti +4 CALL DSCAL( N, SCALE, W, 1 ) .TP 20 .ti +4 END IF .TP 20 .ti +4 RETURN .TP 20 .ti +4 END .SH PURPOSE scalapack-doc-1.5/man/manl/pzhetd2.l0100644000056400000620000002031706335610657017012 0ustar pfrauenfstaff.TH PZHETD2 l "12 May 1997" "LAPACK version 1.5" "LAPACK auxiliary routine (version 1.5)" .SH NAME PZHETD2 - reduce a complex Hermitian matrix sub( A ) to Hermitian tridiagonal form T by an unitary similarity 
transformation .SH SYNOPSIS .TP 20 SUBROUTINE PZHETD2( UPLO, N, A, IA, JA, DESCA, D, E, TAU, WORK, LWORK, INFO ) .TP 20 .ti +4 CHARACTER UPLO .TP 20 .ti +4 INTEGER IA, INFO, JA, LWORK, N .TP 20 .ti +4 INTEGER DESCA( * ) .TP 20 .ti +4 DOUBLE PRECISION D( * ), E( * ) .TP 20 .ti +4 COMPLEX*16 A( * ), TAU( * ), WORK( * ) .SH PURPOSE PZHETD2 reduces a complex Hermitian matrix sub( A ) to Hermitian tridiagonal form T by an unitary similarity transformation: Q' * sub( A ) * Q = T, where sub( A ) = A(IA:IA+N-1,JA:JA+N-1). Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. 
LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 UPLO (global input) CHARACTER Specifies whether the upper or lower triangular part of the Hermitian matrix sub( A ) is stored: .br = 'U': Upper triangular .br = 'L': Lower triangular .TP 8 N (global input) INTEGER The number of rows and columns to be operated on, i.e. the order of the distributed submatrix sub( A ). N >= 0. .TP 8 A (local input/local output) COMPLEX*16 pointer into the local memory to an array of dimension (LLD_A,LOCc(JA+N-1)). On entry, this array contains the local pieces of the Hermitian distributed matrix sub( A ). If UPLO = 'U', the leading N-by-N upper triangular part of sub( A ) contains the upper triangular part of the matrix, and its strictly lower triangular part is not referenced. If UPLO = 'L', the leading N-by-N lower triangular part of sub( A ) contains the lower triangular part of the matrix, and its strictly upper triangular part is not referenced. 
On exit, if UPLO = 'U', the diagonal and first superdiagonal of sub( A ) are over- written by the corresponding elements of the tridiagonal matrix T, and the elements above the first superdiagonal, with the array TAU, represent the unitary matrix Q as a product of elementary reflectors; if UPLO = 'L', the diagonal and first subdiagonal of sub( A ) are overwritten by the corresponding elements of the tridiagonal matrix T, and the elements below the first subdiagonal, with the array TAU, represent the unitary matrix Q as a product of elementary reflectors. See Further Details. IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 D (local output) DOUBLE PRECISION array, dimension LOCc(JA+N-1) The diagonal elements of the tridiagonal matrix T: D(i) = A(i,i). D is tied to the distributed matrix A. .TP 8 E (local output) DOUBLE PRECISION array, dimension LOCc(JA+N-1) if UPLO = 'U', LOCc(JA+N-2) otherwise. The off-diagonal elements of the tridiagonal matrix T: E(i) = A(i,i+1) if UPLO = 'U', E(i) = A(i+1,i) if UPLO = 'L'. E is tied to the distributed matrix A. .TP 8 TAU (local output) COMPLEX*16, array, dimension LOCc(JA+N-1). This array contains the scalar factors TAU of the elementary reflectors. TAU is tied to the distributed matrix A. .TP 8 WORK (local workspace/local output) COMPLEX*16 array, dimension (LWORK) On exit, WORK( 1 ) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. LWORK is local input and must be at least LWORK >= 3*N. If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. 
Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 INFO (local output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. .SH FURTHER DETAILS If UPLO = 'U', the matrix Q is represented as a product of elementary reflectors .br Q = H(n-1) . . . H(2) H(1). .br Each H(i) has the form .br H(i) = I - tau * v * v' .br where tau is a complex scalar, and v is a complex vector with v(i+1:n) = 0 and v(i) = 1; v(1:i-1) is stored on exit in .br A(ia:ia+i-2,ja+i), and tau in TAU(ja+i-1). .br If UPLO = 'L', the matrix Q is represented as a product of elementary reflectors .br Q = H(1) H(2) . . . H(n-1). .br Each H(i) has the form .br H(i) = I - tau * v * v' .br where tau is a complex scalar, and v is a complex vector with v(1:i) = 0 and v(i+1) = 1; v(i+2:n) is stored on exit in .br A(ia+i+1:ia+n-1,ja+i-1), and tau in TAU(ja+i-1). .br The contents of sub( A ) on exit are illustrated by the following examples with n = 5: .br if UPLO = 'U': if UPLO = 'L': .br ( d e v2 v3 v4 ) ( d ) ( d e v3 v4 ) ( e d ) ( d e v4 ) ( v1 e d ) ( d e ) ( v1 v2 e d ) ( d ) ( v1 v2 v3 e d ) where d and e denote diagonal and off-diagonal elements of T, and vi denotes an element of the vector defining H(i). .br Alignment requirements .br ====================== .br The distributed submatrix sub( A ) must verify some alignment proper- ties, namely the following expression should be true: .br ( MB_A.EQ.NB_A .AND. IROFFA.EQ.ICOFFA ) with .br IROFFA = MOD( IA-1, MB_A ) and ICOFFA = MOD( JA-1, NB_A ). 
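The Notes sections above define LOCr() and LOCc() in terms of the ScaLAPACK tool function NUMROC, together with the upper bound LOCr(M) <= ceil(ceil(M/MB_A)/NPROW)*MB_A. A plain-Python transcription of that block-cyclic counting (a sketch for illustration; the real routine is the Fortran tool function NUMROC, and `locr_bound` is a hypothetical helper for the bound quoted in the man pages):

```python
def numroc(n, nb, iproc, isrcproc, nprocs):
    """Count of an n-element dimension, distributed in blocks of size nb
    over nprocs processes (first block on process isrcproc), owned by
    process iproc.  Mirrors ScaLAPACK's NUMROC tool function.
    """
    mydist = (nprocs + iproc - isrcproc) % nprocs  # distance from the source process
    nblocks = n // nb                              # number of complete blocks
    num = (nblocks // nprocs) * nb                 # whole rounds every process receives
    extrablocks = nblocks % nprocs
    if mydist < extrablocks:
        num += nb                                  # one additional complete block
    elif mydist == extrablocks:
        num += n % nb                              # the trailing partial block
    return num

def locr_bound(m, mb, nprow):
    """Upper bound from the man pages: ceil(ceil(m/mb)/nprow)*mb."""
    ceil_div = lambda a, b: -(-a // b)
    return ceil_div(ceil_div(m, mb), nprow) * mb
```

With this, LOCr(M) = numroc(M, MB_A, MYROW, RSRC_A, NPROW) and LOCc(N) = numroc(N, NB_A, MYCOL, CSRC_A, NPCOL); summing numroc over all process rows (or columns) recovers the global dimension, and every local count stays within locr_bound.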
scalapack-doc-1.5/man/manl/pzhetrd.l0100644000056400000620000002075606335610657017121 0ustar pfrauenfstaff.TH PZHETRD l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PZHETRD - reduce a complex Hermitian matrix sub( A ) to Hermitian tridiagonal form T by an unitary similarity transformation .SH SYNOPSIS .TP 20 SUBROUTINE PZHETRD( UPLO, N, A, IA, JA, DESCA, D, E, TAU, WORK, LWORK, INFO ) .TP 20 .ti +4 CHARACTER UPLO .TP 20 .ti +4 INTEGER IA, INFO, JA, LWORK, N .TP 20 .ti +4 INTEGER DESCA( * ) .TP 20 .ti +4 DOUBLE PRECISION D( * ), E( * ) .TP 20 .ti +4 COMPLEX*16 A( * ), TAU( * ), WORK( * ) .SH PURPOSE PZHETRD reduces a complex Hermitian matrix sub( A ) to Hermitian tridiagonal form T by an unitary similarity transformation: Q' * sub( A ) * Q = T, where sub( A ) = A(IA:IA+N-1,JA:JA+N-1). Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. 
.br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 UPLO (global input) CHARACTER Specifies whether the upper or lower triangular part of the Hermitian matrix sub( A ) is stored: .br = 'U': Upper triangular .br = 'L': Lower triangular .TP 8 N (global input) INTEGER The number of rows and columns to be operated on, i.e. the order of the distributed submatrix sub( A ). N >= 0. .TP 8 A (local input/local output) COMPLEX*16 pointer into the local memory to an array of dimension (LLD_A,LOCc(JA+N-1)). On entry, this array contains the local pieces of the Hermitian distributed matrix sub( A ). If UPLO = 'U', the leading N-by-N upper triangular part of sub( A ) contains the upper triangular part of the matrix, and its strictly lower triangular part is not referenced. 
If UPLO = 'L', the leading N-by-N lower triangular part of sub( A ) contains the lower triangular part of the matrix, and its strictly upper triangular part is not referenced. On exit, if UPLO = 'U', the diagonal and first superdiagonal of sub( A ) are over- written by the corresponding elements of the tridiagonal matrix T, and the elements above the first superdiagonal, with the array TAU, represent the unitary matrix Q as a product of elementary reflectors; if UPLO = 'L', the diagonal and first subdiagonal of sub( A ) are overwritten by the corresponding elements of the tridiagonal matrix T, and the elements below the first subdiagonal, with the array TAU, represent the unitary matrix Q as a product of elementary reflectors. See Further Details. IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 D (local output) DOUBLE PRECISION array, dimension LOCc(JA+N-1) The diagonal elements of the tridiagonal matrix T: D(i) = A(i,i). D is tied to the distributed matrix A. .TP 8 E (local output) DOUBLE PRECISION array, dimension LOCc(JA+N-1) if UPLO = 'U', LOCc(JA+N-2) otherwise. The off-diagonal elements of the tridiagonal matrix T: E(i) = A(i,i+1) if UPLO = 'U', E(i) = A(i+1,i) if UPLO = 'L'. E is tied to the distributed matrix A. .TP 8 TAU (local output) COMPLEX*16, array, dimension LOCc(JA+N-1). This array contains the scalar factors TAU of the elementary reflectors. TAU is tied to the distributed matrix A. .TP 8 WORK (local workspace/local output) COMPLEX*16 array, dimension (LWORK) On exit, WORK( 1 ) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. 
LWORK is local input and must be at least LWORK >= MAX( NB * ( NP +1 ), 3 * NB ) where NB = MB_A = NB_A, NP = NUMROC( N, NB, MYROW, IAROW, NPROW ), IAROW = INDXG2P( IA, NB, MYROW, RSRC_A, NPROW ). INDXG2P and NUMROC are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 INFO (global output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. .SH FURTHER DETAILS If UPLO = 'U', the matrix Q is represented as a product of elementary reflectors .br Q = H(n-1) . . . H(2) H(1). .br Each H(i) has the form .br H(i) = I - tau * v * v' .br where tau is a complex scalar, and v is a complex vector with v(i+1:n) = 0 and v(i) = 1; v(1:i-1) is stored on exit in .br A(ia:ia+i-2,ja+i), and tau in TAU(ja+i-1). .br If UPLO = 'L', the matrix Q is represented as a product of elementary reflectors .br Q = H(1) H(2) . . . H(n-1). .br Each H(i) has the form .br H(i) = I - tau * v * v' .br where tau is a complex scalar, and v is a complex vector with v(1:i) = 0 and v(i+1) = 1; v(i+2:n) is stored on exit in .br A(ia+i+1:ia+n-1,ja+i-1), and tau in TAU(ja+i-1). .br The contents of sub( A ) on exit are illustrated by the following examples with n = 5: .br if UPLO = 'U': if UPLO = 'L': .br ( d e v2 v3 v4 ) ( d ) ( d e v3 v4 ) ( e d ) ( d e v4 ) ( v1 e d ) ( d e ) ( v1 v2 e d ) ( d ) ( v1 v2 v3 e d ) where d and e denote diagonal and off-diagonal elements of T, and vi denotes an element of the vector defining H(i). 
.br Alignment requirements .br ====================== .br The distributed submatrix sub( A ) must verify some alignment proper- ties, namely the following expression should be true: .br ( MB_A.EQ.NB_A .AND. IROFFA.EQ.ICOFFA .AND. IROFFA.EQ.0 ) with IROFFA = MOD( IA-1, MB_A ) and ICOFFA = MOD( JA-1, NB_A ). scalapack-doc-1.5/man/manl/pzlabrd.l0100644000056400000620000002360406335610657017072 0ustar pfrauenfstaff.TH PZLABRD l "12 May 1997" "LAPACK version 1.5" "LAPACK auxiliary routine (version 1.5)" .SH NAME PZLABRD - reduce the first NB rows and columns of a complex general M-by-N distributed matrix sub( A ) = A(IA:IA+M-1,JA:JA+N-1) to upper or lower bidiagonal form by an unitary transformation Q' * A * P, and returns the matrices X and Y which are needed to apply the transfor- mation to the unreduced part of sub( A ) .SH SYNOPSIS .TP 20 SUBROUTINE PZLABRD( M, N, NB, A, IA, JA, DESCA, D, E, TAUQ, TAUP, X, IX, JX, DESCX, Y, IY, JY, DESCY, WORK ) .TP 20 .ti +4 INTEGER IA, IX, IY, JA, JX, JY, M, N, NB .TP 20 .ti +4 INTEGER DESCA( * ), DESCX( * ), DESCY( * ) .TP 20 .ti +4 DOUBLE PRECISION D( * ), E( * ) .TP 20 .ti +4 COMPLEX*16 A( * ), TAUP( * ), TAUQ( * ), X( * ), Y( * ), WORK( * ) .SH PURPOSE PZLABRD reduces the first NB rows and columns of a complex general M-by-N distributed matrix sub( A ) = A(IA:IA+M-1,JA:JA+N-1) to upper or lower bidiagonal form by an unitary transformation Q' * A * P, and returns the matrices X and Y which are needed to apply the transfor- mation to the unreduced part of sub( A ). If M >= N, sub( A ) is reduced to upper bidiagonal form; if M < N, to lower bidiagonal form. .br This is an auxiliary routine called by PZGEBRD. .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. 
Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). 
An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 M (global input) INTEGER The number of rows to be operated on, i.e. the number of rows of the distributed submatrix sub( A ). M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on, i.e. the number of columns of the distributed submatrix sub( A ). N >= 0. .TP 8 NB (global input) INTEGER The number of leading rows and columns of sub( A ) to be reduced. .TP 8 A (local input/local output) COMPLEX*16 pointer into the local memory to an array of dimension (LLD_A,LOCc(JA+N-1)). On entry, this array contains the local pieces of the general distributed matrix sub( A ) to be reduced. On exit, the first NB rows and columns of the matrix are overwritten; the rest of the distributed matrix sub( A ) is unchanged. If m >= n, elements on and below the diagonal in the first NB columns, with the array TAUQ, represent the unitary matrix Q as a product of elementary reflectors; and elements above the diagonal in the first NB rows, with the array TAUP, represent the unitary matrix P as a product of elementary reflectors. If m < n, elements below the diagonal in the first NB columns, with the array TAUQ, represent the unitary matrix Q as a product of elementary reflectors, and elements on and above the diagonal in the first NB rows, with the array TAUP, represent the unitary matrix P as a product of elementary reflectors. See Further Details. IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 D (local output) DOUBLE PRECISION array, dimension LOCr(IA+MIN(M,N)-1) if M >= N; LOCc(JA+MIN(M,N)-1) otherwise. 
The distributed diagonal elements of the bidiagonal matrix B: D(i) = A(ia+i-1,ja+i-1). D is tied to the distributed matrix A. .TP 8 E (local output) DOUBLE PRECISION array, dimension LOCr(IA+MIN(M,N)-1) if M >= N; LOCc(JA+MIN(M,N)-2) otherwise. The distributed off-diagonal elements of the bidiagonal distributed matrix B: if m >= n, E(i) = A(ia+i-1,ja+i) for i = 1,2,...,n-1; if m < n, E(i) = A(ia+i,ja+i-1) for i = 1,2,...,m-1. E is tied to the distributed matrix A. .TP 8 TAUQ (local output) COMPLEX*16 array dimension LOCc(JA+MIN(M,N)-1). The scalar factors of the elementary reflectors which represent the unitary matrix Q. TAUQ is tied to the distributed matrix A. See Further Details. TAUP (local output) COMPLEX*16 array, dimension LOCr(IA+MIN(M,N)-1). The scalar factors of the elementary reflectors which represent the unitary matrix P. TAUP is tied to the distributed matrix A. See Further Details. X (local output) COMPLEX*16 pointer into the local memory to an array of dimension (LLD_X,NB). On exit, the local pieces of the distributed M-by-NB matrix X(IX:IX+M-1,JX:JX+NB-1) required to update the unreduced part of sub( A ). .TP 8 IX (global input) INTEGER The row index in the global array X indicating the first row of sub( X ). .TP 8 JX (global input) INTEGER The column index in the global array X indicating the first column of sub( X ). .TP 8 DESCX (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix X. .TP 8 Y (local output) COMPLEX*16 pointer into the local memory to an array of dimension (LLD_Y,NB). On exit, the local pieces of the distributed N-by-NB matrix Y(IY:IY+N-1,JY:JY+NB-1) required to update the unreduced part of sub( A ). .TP 8 IY (global input) INTEGER The row index in the global array Y indicating the first row of sub( Y ). .TP 8 JY (global input) INTEGER The column index in the global array Y indicating the first column of sub( Y ). 
.TP 8 DESCY (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix Y. .TP 8 WORK (local workspace) COMPLEX*16 array, dimension (LWORK) LWORK >= NB_A + NQ, with NQ = NUMROC( N+MOD( IA-1, NB_Y ), NB_Y, MYCOL, IACOL, NPCOL ) IACOL = INDXG2P( JA, NB_A, MYCOL, CSRC_A, NPCOL ) INDXG2P and NUMROC are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. .SH FURTHER DETAILS The matrices Q and P are represented as products of elementary reflectors: .br Q = H(1) H(2) . . . H(nb) and P = G(1) G(2) . . . G(nb) Each H(i) and G(i) has the form: .br H(i) = I - tauq * v * v' and G(i) = I - taup * u * u' where tauq and taup are complex scalars, and v and u are complex vectors. .br If m >= n, v(1:i-1) = 0, v(i) = 1, and v(i:m) is stored on exit in A(ia+i-1:ia+m-1,ja+i-1); u(1:i) = 0, u(i+1) = 1, and u(i+1:n) is stored on exit in A(ia+i-1,ja+i:ja+n-1); tauq is stored in TAUQ(ja+i-1) and taup in TAUP(ia+i-1). .br If m < n, v(1:i) = 0, v(i+1) = 1, and v(i+1:m) is stored on exit in A(ia+i+1:ia+m-1,ja+i-1); u(1:i-1) = 0, u(i) = 1, and u(i:n) is stored on exit in A(ia+i-1,ja+i:ja+n-1); tauq is stored in TAUQ(ja+i-1) and taup in TAUP(ia+i-1). .br The elements of the vectors v and u together form the m-by-nb matrix V and the nb-by-n matrix U' which are needed, with X and Y, to apply the transformation to the unreduced part of the matrix, using a block update of the form: sub( A ) := sub( A ) - V*Y' - X*U'. 
.br The contents of sub( A ) on exit are illustrated by the following examples with nb = 2: .br m = 6 and n = 5 (m > n): .br ( 1 1 u1 u1 u1 ) .br ( v1 1 1 u2 u2 ) .br ( v1 v2 a a a ) .br ( v1 v2 a a a ) .br ( v1 v2 a a a ) .br ( v1 v2 a a a ) .br m = 5 and n = 6 (m < n): .br ( 1 u1 u1 u1 u1 u1 ) .br ( 1 1 u2 u2 u2 u2 ) .br ( v1 1 a a a a ) .br ( v1 v2 a a a a ) .br ( v1 v2 a a a a ) .br where a denotes an element of the original matrix which is unchanged, vi denotes an element of the vector defining H(i), and ui an element of the vector defining G(i). .br .TH PZLACGV l "12 May 1997" "LAPACK version 1.5" "LAPACK auxiliary routine (version 1.5)" .SH NAME PZLACGV - conjugate a complex vector of length N, sub( X ), where sub( X ) denotes X(IX,JX:JX+N-1) if INCX = DESCX( M_ ) and X(IX:IX+N-1,JX) if INCX = 1 .SH SYNOPSIS .TP 20 SUBROUTINE PZLACGV( N, X, IX, JX, DESCX, INCX ) .TP 20 .ti +4 INTEGER INCX, IX, JX, N .TP 20 .ti +4 INTEGER DESCX( * ) .TP 20 .ti +4 COMPLEX*16 X( * ) .SH PURPOSE PZLACGV conjugates a complex vector of length N, sub( X ), where sub( X ) denotes X(IX,JX:JX+N-1) if INCX = DESCX( M_ ) and X(IX:IX+N-1,JX) if INCX = 1. Notes ===== Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distributed over. The context itself is global, but the handle (the integer value) may vary. 
.br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br Because vectors may be viewed as a subclass of matrices, a distributed vector is considered to be a distributed matrix. .SH ARGUMENTS .TP 8 N (global input) INTEGER The length of the distributed vector sub( X ). .TP 8 X (local input/local output) COMPLEX*16 pointer into the local memory to an array of dimension (LLD_X,*). On entry the vector to be conjugated x( i ) = X(IX+(JX-1)*M_X +(i-1)*INCX ), 1 <= i <= N. On exit the conjugated vector. .TP 8 IX (global input) INTEGER The row index in the global array X indicating the first row of sub( X ). 
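The element mapping x( i ) = X(IX+(JX-1)*M_X+(i-1)*INCX) amounts to walking the local buffer with a fixed stride: INCX = 1 steps down a stored column, INCX = M_X steps across a stored row. A minimal serial sketch of the conjugation itself (a hypothetical helper, not the ScaLAPACK routine):

```c
#include <assert.h>
#include <complex.h>

/* Conjugate n elements of a column-major array x, stepping by incx.
 * incx = 1 visits a stored column; incx = ldx visits a stored row. */
static void conjugate_strided(int n, double complex *x, int incx)
{
    for (int i = 0; i < n; i++)
        x[i * incx] = conj(x[i * incx]);
}
```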
.TP 8 JX (global input) INTEGER The column index in the global array X indicating the first column of sub( X ). .TP 8 DESCX (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix X. .TP 8 INCX (global input) INTEGER The global increment for the elements of X. Only two values of INCX are supported in this version, namely 1 and M_X. INCX must not be zero. .TH PZLACON l "12 May 1997" "LAPACK version 1.5" "LAPACK auxiliary routine (version 1.5)" .SH NAME PZLACON - estimate the 1-norm of a square, complex distributed matrix A .SH SYNOPSIS .TP 20 SUBROUTINE PZLACON( N, V, IV, JV, DESCV, X, IX, JX, DESCX, EST, KASE ) .TP 20 .ti +4 INTEGER IV, IX, JV, JX, KASE, N .TP 20 .ti +4 DOUBLE PRECISION EST .TP 20 .ti +4 INTEGER DESCV( * ), DESCX( * ) .TP 20 .ti +4 COMPLEX*16 V( * ), X( * ) .SH PURPOSE PZLACON estimates the 1-norm of a square, complex distributed matrix A. Reverse communication is used for evaluating matrix-vector products. X and V are aligned with the distributed matrix A; this information is implicitly contained within IV, IX, DESCV, and DESCX. Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distributed over. 
The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 N (global input) INTEGER The length of the distributed vectors V and X. N >= 0. .TP 8 V (local workspace) COMPLEX*16 pointer into the local memory to an array of dimension LOCr(N+MOD(IV-1,MB_V)). On the final return, V = A*W, where EST = norm(V)/norm(W) (W is not returned). .TP 8 IV (global input) INTEGER The row index in the global array V indicating the first row of sub( V ). 
.TP 8 JV (global input) INTEGER The column index in the global array V indicating the first column of sub( V ). .TP 8 DESCV (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix V. .TP 8 X (local input/local output) COMPLEX*16 pointer into the local memory to an array of dimension LOCr(N+MOD(IX-1,MB_X)). On an intermediate return, X should be overwritten by A * X, if KASE=1, A' * X, if KASE=2, where A' is the conjugate transpose of A, and PZLACON must be re-called with all the other parameters unchanged. .TP 8 IX (global input) INTEGER The row index in the global array X indicating the first row of sub( X ). .TP 8 JX (global input) INTEGER The column index in the global array X indicating the first column of sub( X ). .TP 8 DESCX (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix X. .TP 8 EST (global output) DOUBLE PRECISION An estimate (a lower bound) for norm(A). .TP 8 KASE (local input/local output) INTEGER On the initial call to PZLACON, KASE should be 0. On an intermediate return, KASE will be 1 or 2, indicating whether X should be overwritten by A * X or A' * X. On the final return from PZLACON, KASE will again be 0. .SH FURTHER DETAILS The serial version ZLACON has been contributed by Nick Higham, University of Manchester. It was originally named SONEST, dated March 16, 1988. .br Reference: N.J. Higham, "FORTRAN codes for estimating the one-norm of a real or complex matrix, with applications to condition estimation", ACM Trans. Math. Soft., vol. 14, no. 4, pp. 381-396, December 1988. 
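The KASE protocol above drives Hager's 1-norm estimation algorithm (per the Higham reference). A serial, real-valued sketch of the underlying iteration makes the protocol concrete: the two matrix-vector products the caller must supply correspond exactly to the KASE = 1 and KASE = 2 returns. This is an illustration only, not the distributed routine.

```c
#include <assert.h>
#include <math.h>
#include <string.h>

#define NMAX 16

/* Serial, real-valued sketch of the 1-norm estimator underlying
 * (P)ZLACON (Hager's algorithm).  The distributed routine runs the
 * same iteration by reverse communication: KASE = 1 asks the caller
 * for v = A*x, KASE = 2 for z = A'*x.  A is n-by-n row-major,
 * n <= NMAX is assumed for this sketch. */
static double est_norm1(int n, const double *A)
{
    double x[NMAX], v[NMAX], z[NMAX], est = 0.0;
    for (int i = 0; i < n; i++) x[i] = 1.0 / n;
    for (int it = 0; it < 5; it++) {            /* xLACON also caps its iterations */
        est = 0.0;                              /* KASE = 1 step: v = A*x, est = ||v||_1 */
        for (int i = 0; i < n; i++) {
            v[i] = 0.0;
            for (int j = 0; j < n; j++) v[i] += A[i*n + j] * x[j];
            est += fabs(v[i]);
        }
        for (int i = 0; i < n; i++) {           /* KASE = 2 step: z = A' * sign(v) */
            z[i] = 0.0;
            for (int j = 0; j < n; j++)
                z[i] += A[j*n + i] * (v[j] >= 0.0 ? 1.0 : -1.0);
        }
        int jmax = 0;
        double ztx = 0.0;
        for (int i = 0; i < n; i++) {
            ztx += z[i] * x[i];
            if (fabs(z[i]) > fabs(z[jmax])) jmax = i;
        }
        if (fabs(z[jmax]) <= ztx) break;        /* converged: EST is a lower bound */
        memset(x, 0, (size_t)n * sizeof x[0]);  /* otherwise restart from e_jmax */
        x[jmax] = 1.0;
    }
    return est;
}
```

As the EST argument documentation states, the result is a lower bound on norm1(A); for many matrices it is exact.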
.TH PZLACP2 l "12 May 1997" "LAPACK version 1.5" "LAPACK auxiliary routine (version 1.5)" .SH NAME PZLACP2 - copy all or part of a distributed matrix A to another distributed matrix B .SH SYNOPSIS .TP 20 SUBROUTINE PZLACP2( UPLO, M, N, A, IA, JA, DESCA, B, IB, JB, DESCB ) .TP 20 .ti +4 CHARACTER UPLO .TP 20 .ti +4 INTEGER IA, IB, JA, JB, M, N .TP 20 .ti +4 INTEGER DESCA( * ), DESCB( * ) .TP 20 .ti +4 COMPLEX*16 A( * ), B( * ) .SH PURPOSE PZLACP2 copies all or part of a distributed matrix A to another distributed matrix B. No communication is performed; PZLACP2 performs a local copy sub( B ) := sub( A ), where sub( A ) denotes A(IA:IA+M-1,JA:JA+N-1) and sub( B ) denotes B(IB:IB+M-1,JB:JB+N-1). PZLACP2 requires that only one dimension of the matrix operands is distributed. .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distributed over. The context itself is global, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. 
.br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 UPLO (global input) CHARACTER Specifies the part of the distributed matrix sub( A ) to be copied: .br = 'U': Upper triangular part is copied; the strictly lower triangular part of sub( A ) is not referenced; = 'L': Lower triangular part is copied; the strictly upper triangular part of sub( A ) is not referenced; Otherwise: All of the matrix sub( A ) is copied. .TP 8 M (global input) INTEGER The number of rows to be operated on i.e the number of rows of the distributed submatrix sub( A ). M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on i.e the number of columns of the distributed submatrix sub( A ). N >= 0. .TP 8 A (local input) COMPLEX*16 pointer into the local memory to an array of dimension (LLD_A, LOCc(JA+N-1) ). 
This array contains the local pieces of the distributed matrix sub( A ) to be copied from. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 B (local output) COMPLEX*16 pointer into the local memory to an array of dimension (LLD_B, LOCc(JB+N-1) ). This array contains on exit the local pieces of the distributed matrix sub( B ) set as follows: if UPLO = 'U', B(IB+i-1,JB+j-1) = A(IA+i-1,JA+j-1), 1<=i<=j, 1<=j<=N; if UPLO = 'L', B(IB+i-1,JB+j-1) = A(IA+i-1,JA+j-1), j<=i<=M, 1<=j<=N; otherwise, B(IB+i-1,JB+j-1) = A(IA+i-1,JA+j-1), 1<=i<=M, 1<=j<=N. .TP 8 IB (global input) INTEGER The row index in the global array B indicating the first row of sub( B ). .TP 8 JB (global input) INTEGER The column index in the global array B indicating the first column of sub( B ). .TP 8 DESCB (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix B. .TH PZLACPY l "12 May 1997" "LAPACK version 1.5" "LAPACK auxiliary routine (version 1.5)" .SH NAME PZLACPY - copy all or part of a distributed matrix A to another distributed matrix B .SH SYNOPSIS .TP 20 SUBROUTINE PZLACPY( UPLO, M, N, A, IA, JA, DESCA, B, IB, JB, DESCB ) .TP 20 .ti +4 CHARACTER UPLO .TP 20 .ti +4 INTEGER IA, IB, JA, JB, M, N .TP 20 .ti +4 INTEGER DESCA( * ), DESCB( * ) .TP 20 .ti +4 COMPLEX*16 A( * ), B( * ) .SH PURPOSE PZLACPY copies all or part of a distributed matrix A to another distributed matrix B. No communication is performed; PZLACPY performs a local copy sub( B ) := sub( A ), where sub( A ) denotes A(IA:IA+M-1,JA:JA+N-1) and sub( B ) denotes B(IB:IB+M-1,JB:JB+N-1). 
Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. 
.br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 UPLO (global input) CHARACTER Specifies the part of the distributed matrix sub( A ) to be copied: .br = 'U': Upper triangular part is copied; the strictly lower triangular part of sub( A ) is not referenced; = 'L': Lower triangular part is copied; the strictly upper triangular part of sub( A ) is not referenced; Otherwise: All of the matrix sub( A ) is copied. .TP 8 M (global input) INTEGER The number of rows to be operated on i.e the number of rows of the distributed submatrix sub( A ). M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on i.e the number of columns of the distributed submatrix sub( A ). N >= 0. .TP 8 A (local input) COMPLEX*16 pointer into the local memory to an array of dimension (LLD_A, LOCc(JA+N-1) ). This array contains the local pieces of the distributed matrix sub( A ) to be copied from. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 B (local output) COMPLEX*16 pointer into the local memory to an array of dimension (LLD_B, LOCc(JB+N-1) ). This array contains on exit the local pieces of the distributed matrix sub( B ) set as follows: if UPLO = 'U', B(IB+i-1,JB+j-1) = A(IA+i-1,JA+j-1), 1<=i<=j, 1<=j<=N; if UPLO = 'L', B(IB+i-1,JB+j-1) = A(IA+i-1,JA+j-1), j<=i<=M, 1<=j<=N; otherwise, B(IB+i-1,JB+j-1) = A(IA+i-1,JA+j-1), 1<=i<=M, 1<=j<=N. 
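The UPLO-dependent copy rule documented for PZLACP2/PZLACPY can be sketched serially on plain row-major arrays (a hypothetical helper for illustration; the real routines operate on the local pieces of distributed matrices):

```c
#include <assert.h>

/* 'U' copies entries with i <= j, 'L' copies entries with j <= i,
 * any other UPLO copies the whole m-by-n rectangle.  Entries of B
 * outside the copied part are left untouched. */
static void tri_copy(char uplo, int m, int n, const double *A, double *B)
{
    for (int i = 0; i < m; i++)
        for (int j = 0; j < n; j++) {
            if (uplo == 'U' && i > j) continue;  /* skip strict lower part */
            if (uplo == 'L' && j > i) continue;  /* skip strict upper part */
            B[i*n + j] = A[i*n + j];
        }
}
```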
.TP 8 IB (global input) INTEGER The row index in the global array B indicating the first row of sub( B ). .TP 8 JB (global input) INTEGER The column index in the global array B indicating the first column of sub( B ). .TP 8 DESCB (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix B. .TH PZLAEVSWP l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PZLAEVSWP - move the eigenvectors (potentially unsorted) from where they are computed, to a ScaLAPACK standard block cyclic array, sorted so that the corresponding eigenvalues are sorted .SH SYNOPSIS .TP 22 SUBROUTINE PZLAEVSWP( N, ZIN, LDZI, Z, IZ, JZ, DESCZ, NVS, KEY, RWORK, LRWORK ) .TP 22 .ti +4 INTEGER IZ, JZ, LDZI, LRWORK, N .TP 22 .ti +4 INTEGER DESCZ( * ), KEY( * ), NVS( * ) .TP 22 .ti +4 DOUBLE PRECISION RWORK( * ), ZIN( LDZI, * ) .TP 22 .ti +4 COMPLEX*16 Z( * ) .SH PURPOSE PZLAEVSWP moves the eigenvectors (potentially unsorted) from where they are computed, to a ScaLAPACK standard block cyclic array, sorted so that the corresponding eigenvalues are sorted. Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distributed over. 
The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS NP = the number of rows local to a given process. NQ = the number of columns local to a given process. .TP 8 N (global input) INTEGER The order of the matrix A. N >= 0. .TP 8 ZIN (local input) DOUBLE PRECISION array, dimension ( LDZI, NVS(iam) ) The eigenvectors on input. Each eigenvector resides entirely in one process. Each process holds a contiguous set of NVS(iam) eigenvectors. 
The first eigenvector which the process holds is: sum for i=[0,iam-1) of NVS(i) .TP 8 LDZI (local input) INTEGER leading dimension of the ZIN array .TP 8 Z (local output) COMPLEX*16 array, global dimension (N, N), local dimension (DESCZ(DLEN_), NQ) The eigenvectors on output. The eigenvectors are distributed in a block cyclic manner in both dimensions, with a block size of NB. .TP 8 IZ (global input) INTEGER Z's global row index, which points to the beginning of the submatrix which is to be operated on. .TP 8 JZ (global input) INTEGER Z's global column index, which points to the beginning of the submatrix which is to be operated on. .TP 8 DESCZ (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix Z. .TP 8 NVS (global input) INTEGER array, dimension( nprocs+1 ) nvs(i) = number of eigenvectors held by processes [0,i-1) nvs(1) = number of eigenvectors held by [0,1-1) == 0 nvs(nprocs+1) = number of eigenvectors held by [0,nprocs) == total number of eigenvectors .TP 8 KEY (global input) INTEGER array, dimension( N ) Indicates the actual index (after sorting) for each of the eigenvectors. 
.TP 9 RWORK (local workspace) DOUBLE PRECISION array, dimension (LRWORK) .TP 9 LRWORK (local input) INTEGER dimension of RWORK .TH PZLAHRD l "12 May 1997" "LAPACK version 1.5" "LAPACK auxiliary routine (version 1.5)" .SH NAME PZLAHRD - reduce the first NB columns of a complex general N-by-(N-K+1) distributed matrix A(IA:IA+N-1,JA:JA+N-K) so that elements below the k-th subdiagonal are zero .SH SYNOPSIS .TP 20 SUBROUTINE PZLAHRD( N, K, NB, A, IA, JA, DESCA, TAU, T, Y, IY, JY, DESCY, WORK ) .TP 20 .ti +4 INTEGER IA, IY, JA, JY, K, N, NB .TP 20 .ti +4 INTEGER DESCA( * ), DESCY( * ) .TP 20 .ti +4 COMPLEX*16 A( * ), T( * ), TAU( * ), WORK( * ), Y( * ) .SH PURPOSE PZLAHRD reduces the first NB columns of a complex general N-by-(N-K+1) distributed matrix A(IA:IA+N-1,JA:JA+N-K) so that elements below the k-th subdiagonal are zero. The reduction is performed by a unitary similarity transformation Q' * A * Q. The routine returns the matrices V and T which determine Q as a block reflector I - V*T*V', and also the matrix Y = A * V * T. .br This is an auxiliary routine called by PZGEHRD. In the following comments sub( A ) denotes A(IA:IA+N-1,JA:JA+N-1). .br .SH ARGUMENTS .TP 8 N (global input) INTEGER The number of rows and columns to be operated on, i.e. the order of the distributed submatrix sub( A ). N >= 0. .TP 8 K (global input) INTEGER The offset for the reduction. Elements below the k-th subdiagonal in the first NB columns are reduced to zero. .TP 8 NB (global input) INTEGER The number of columns to be reduced. .TP 8 A (local input/local output) COMPLEX*16 pointer into the local memory to an array of dimension (LLD_A, LOCc(JA+N-K)). On entry, this array contains the local pieces of the N-by-(N-K+1) general distributed matrix A(IA:IA+N-1,JA:JA+N-K). 
On exit, the elements on and above the k-th subdiagonal in the first NB columns are overwritten with the corresponding elements of the reduced distributed matrix; the elements below the k-th subdiagonal, with the array TAU, represent the matrix Q as a product of elementary reflectors. The other columns of A(IA:IA+N-1,JA:JA+N-K) are unchanged. See Further Details. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 TAU (local output) COMPLEX*16 array, dimension LOCc(JA+N-2) The scalar factors of the elementary reflectors (see Further Details). TAU is tied to the distributed matrix A. .TP 8 T (local output) COMPLEX*16 array, dimension (NB_A,NB_A) The upper triangular matrix T. .TP 8 Y (local output) COMPLEX*16 pointer into the local memory to an array of dimension (LLD_Y,NB_A). On exit, this array contains the local pieces of the N-by-NB distributed matrix Y. LLD_Y >= LOCr(IA+N-1). .TP 8 IY (global input) INTEGER The row index in the global array Y indicating the first row of sub( Y ). .TP 8 JY (global input) INTEGER The column index in the global array Y indicating the first column of sub( Y ). .TP 8 DESCY (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix Y. .TP 8 WORK (local workspace) COMPLEX*16 array, dimension (NB) .SH FURTHER DETAILS The matrix Q is represented as a product of nb elementary reflectors Q = H(1) H(2) . . . H(nb). .br Each H(i) has the form .br H(i) = I - tau * v * v' .br where tau is a complex scalar, and v is a complex vector with v(1:i+k-1) = 0, v(i+k) = 1; v(i+k+1:n) is stored on exit in A(ia+i+k:ia+n-1,ja+i-1), and tau in TAU(ja+i-1). 
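Each elementary reflector H(i) = I - tau * v * v' acts on a vector through a rank-one correction. A real-valued sketch using the classical Householder form H = I - 2*u*u'/(u'u) (names are illustrative, not part of the routine's interface):

```c
#include <assert.h>
#include <math.h>

/* Apply H = I - 2*u*u'/(u'u) to x in place: x := x - 2 (u'x)/(u'u) u. */
static void apply_householder(int n, const double *u, double *x)
{
    double uu = 0.0, ux = 0.0;
    for (int i = 0; i < n; i++) { uu += u[i]*u[i]; ux += u[i]*x[i]; }
    for (int i = 0; i < n; i++)
        x[i] -= 2.0 * ux / uu * u[i];
}
```

Choosing u = x + sign(x(1))*||x||*e1 maps x to a multiple of e1, which is how the reduction annihilates the entries below the subdiagonal one column at a time.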
.br The elements of the vectors v together form the (n-k+1)-by-nb matrix V which is needed, with T and Y, to apply the transformation to the unreduced part of the matrix, using an update of the form: A(ia:ia+n-1,ja:ja+n-k) := (I-V*T*V')*(A(ia:ia+n-1,ja:ja+n-k)-Y*V'). The contents of A(ia:ia+n-1,ja:ja+n-k) on exit are illustrated by the following example with n = 7, k = 3 and nb = 2: .br ( a h a a a ) .br ( a h a a a ) .br ( a h a a a ) .br ( h h a a a ) .br ( v1 h a a a ) .br ( v1 v2 a a a ) .br ( v1 v2 a a a ) .br where a denotes an element of the original matrix .br A(ia:ia+n-1,ja:ja+n-k), h denotes a modified element of the upper Hessenberg matrix H, and vi denotes an element of the vector defining H(i). .br .TH PZLANGE l "12 May 1997" "LAPACK version 1.5" "LAPACK auxiliary routine (version 1.5)" .SH NAME PZLANGE - return the value of the one norm, or the Frobenius norm, or the infinity norm, or the element of largest absolute value of a distributed matrix .SH SYNOPSIS .TP 17 DOUBLE PRECISION FUNCTION PZLANGE( NORM, M, N, A, IA, JA, DESCA, WORK ) .TP 17 .ti +4 CHARACTER NORM .TP 17 .ti +4 INTEGER IA, JA, M, N .TP 17 .ti +4 INTEGER DESCA( * ) .TP 17 .ti +4 DOUBLE PRECISION WORK( * ) .TP 17 .ti +4 COMPLEX*16 A( * ) .SH PURPOSE PZLANGE returns the value of the one norm, or the Frobenius norm, or the infinity norm, or the element of largest absolute value of a distributed matrix sub( A ) = A(IA:IA+M-1, JA:JA+N-1). .br PZLANGE returns the value .br ( max(abs(A(i,j))), NORM = 'M' or 'm' with IA <= i <= IA+M-1, ( and JA <= j <= JA+N-1, ( .br ( norm1( sub( A ) ), NORM = '1', 'O' or 'o' .br ( .br ( normI( sub( A ) ), NORM = 'I' or 'i' .br ( .br ( normF( sub( A ) ), NORM = 'F', 'f', 'E' or 'e' .br where norm1 denotes the one norm of a matrix (maximum column sum), normI denotes the infinity norm of a matrix (maximum row sum) and normF denotes the Frobenius norm of a matrix (square root of sum of squares). Note that max(abs(A(i,j))) is not a matrix norm. 
Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. 
.br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 NORM (global input) CHARACTER Specifies the value to be returned in PZLANGE as described above. .TP 8 M (global input) INTEGER The number of rows to be operated on i.e the number of rows of the distributed submatrix sub( A ). When M = 0, PZLANGE is set to zero. M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on i.e the number of columns of the distributed submatrix sub( A ). When N = 0, PZLANGE is set to zero. N >= 0. .TP 8 A (local input) COMPLEX*16 pointer into the local memory to an array of dimension (LLD_A, LOCc(JA+N-1)) containing the local pieces of the distributed matrix sub( A ). .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 WORK (local workspace) DOUBLE PRECISION array dimension (LWORK) LWORK >= 0 if NORM = 'M' or 'm' (not referenced), Nq0 if NORM = '1', 'O' or 'o', Mp0 if NORM = 'I' or 'i', 0 if NORM = 'F', 'f', 'E' or 'e' (not referenced), where IROFFA = MOD( IA-1, MB_A ), ICOFFA = MOD( JA-1, NB_A ), IAROW = INDXG2P( IA, MB_A, MYROW, RSRC_A, NPROW ), IACOL = INDXG2P( JA, NB_A, MYCOL, CSRC_A, NPCOL ), Mp0 = NUMROC( M+IROFFA, MB_A, MYROW, IAROW, NPROW ), Nq0 = NUMROC( N+ICOFFA, NB_A, MYCOL, IACOL, NPCOL ), INDXG2P and NUMROC are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. 
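The WORK-space formulas above lean on NUMROC, ScaLAPACK's tool function for counting how many rows or columns of a block-cyclically distributed dimension land on one process. The following is a plain-Python sketch of that counting logic (a transcription for illustration, not the Fortran routine itself), together with a check of the ceil-based upper bound quoted above:

```python
import math

def numroc(n, nb, iproc, isrcproc, nprocs):
    """Sketch of ScaLAPACK's NUMROC: number of the n rows (or columns),
    distributed in blocks of size nb over nprocs processes starting at
    process isrcproc, that land on process iproc."""
    mydist = (nprocs + iproc - isrcproc) % nprocs  # distance from source process
    nblocks = n // nb                              # number of full blocks
    num = (nblocks // nprocs) * nb                 # whole rounds of the dealing
    extrablks = nblocks % nprocs                   # leftover full blocks
    if mydist < extrablks:
        num += nb                                  # one extra full block
    elif mydist == extrablks:
        num += n % nb                              # the trailing partial block
    return num

# Every element lands on exactly one process, and each local count stays
# within the bound LOC(N) <= ceil(ceil(N/NB)/NPROCS)*NB from the man page.
n, nb, nprocs = 10, 3, 4
counts = [numroc(n, nb, p, 0, nprocs) for p in range(nprocs)]
bound = math.ceil(math.ceil(n / nb) / nprocs) * nb
print(counts, sum(counts), bound)   # [3, 3, 3, 1] 10 3
```

The same helper evaluates the Mp0/Nq0 workspace quantities once IROFFA, ICOFFA, IAROW and IACOL are known.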
scalapack-doc-1.5/man/manl/pzlanhe.l0100644000056400000620000001447706335610657017105 0ustar pfrauenfstaff.TH PZLANHE l "12 May 1997" "LAPACK version 1.5" "LAPACK auxiliary routine (version 1.5)" .SH NAME PZLANHE - return the value of the one norm, or the Frobenius norm, .SH SYNOPSIS .TP 17 DOUBLE PRECISION FUNCTION PZLANHE( NORM, UPLO, N, A, IA, JA, DESCA, WORK ) .TP 17 .ti +4 CHARACTER NORM, UPLO .TP 17 .ti +4 INTEGER IA, JA, N .TP 17 .ti +4 INTEGER DESCA( * ) .TP 17 .ti +4 DOUBLE PRECISION WORK( * ) .TP 17 .ti +4 COMPLEX*16 A( * ) .SH PURPOSE PZLANHE returns the value of the one norm, or the Frobenius norm, or the infinity norm, or the element of largest absolute value of a complex hermitian distributed matrix sub(A) = A(IA:IA+N-1,JA:JA+N-1). PZLANHE returns the value .br ( max(abs(A(i,j))), NORM = 'M' or 'm' with IA <= i <= IA+N-1, ( and JA <= j <= JA+N-1, ( .br ( norm1( sub( A ) ), NORM = '1', 'O' or 'o' .br ( .br ( normI( sub( A ) ), NORM = 'I' or 'i' .br ( .br ( normF( sub( A ) ), NORM = 'F', 'f', 'E' or 'e' .br where norm1 denotes the one norm of a matrix (maximum column sum), normI denotes the infinity norm of a matrix (maximum row sum) and normF denotes the Frobenius norm of a matrix (square root of sum of squares). Note that max(abs(A(i,j))) is not a matrix norm. Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. 
.br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 NORM (global input) CHARACTER Specifies the value to be returned in PZLANHE as described above. .TP 8 UPLO (global input) CHARACTER Specifies whether the upper or lower triangular part of the hermitian matrix sub( A ) is to be referenced. 
= 'U': Upper triangular part of sub( A ) is referenced, .br = 'L': Lower triangular part of sub( A ) is referenced. .TP 8 N (global input) INTEGER The number of rows and columns to be operated on i.e the number of rows and columns of the distributed submatrix sub( A ). When N = 0, PZLANHE is set to zero. N >= 0. .TP 8 A (local input) COMPLEX*16 pointer into the local memory to an array of dimension (LLD_A, LOCc(JA+N-1)) containing the local pieces of the hermitian distributed matrix sub( A ). If UPLO = 'U', the leading N-by-N upper triangular part of sub( A ) contains the upper triangular matrix which norm is to be computed, and the strictly lower triangular part of this matrix is not referenced. If UPLO = 'L', the leading N-by-N lower triangular part of sub( A ) contains the lower triangular matrix which norm is to be computed, and the strictly upper triangular part of sub( A ) is not referenced. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 WORK (local workspace) DOUBLE PRECISION array dimension (LWORK) LWORK >= 0 if NORM = 'M' or 'm' (not referenced), 2*Nq0+Np0+LDW if NORM = '1', 'O', 'o', 'I' or 'i', where LDW is given by: IF( NPROW.NE.NPCOL ) THEN LDW = MB_A*CEIL(CEIL(Np0/MB_A)/(LCM/NPROW)) ELSE LDW = 0 END IF 0 if NORM = 'F', 'f', 'E' or 'e' (not referenced), where LCM is the least common multiple of NPROW and NPCOL LCM = ILCM( NPROW, NPCOL ) and CEIL denotes the ceiling operation (ICEIL). 
IROFFA = MOD( IA-1, MB_A ), ICOFFA = MOD( JA-1, NB_A ), IAROW = INDXG2P( IA, MB_A, MYROW, RSRC_A, NPROW ), IACOL = INDXG2P( JA, NB_A, MYCOL, CSRC_A, NPCOL ), Np0 = NUMROC( N+IROFFA, MB_A, MYROW, IAROW, NPROW ), Nq0 = NUMROC( N+ICOFFA, NB_A, MYCOL, IACOL, NPCOL ), ICEIL, ILCM, INDXG2P and NUMROC are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. scalapack-doc-1.5/man/manl/pzlanhs.l0100644000056400000620000001256106335610657017113 0ustar pfrauenfstaff.TH PZLANHS l "12 May 1997" "LAPACK version 1.5" "LAPACK auxiliary routine (version 1.5)" .SH NAME PZLANHS - return the value of the one norm, or the Frobenius norm, .SH SYNOPSIS .TP 17 DOUBLE PRECISION FUNCTION PZLANHS( NORM, N, A, IA, JA, DESCA, WORK ) .TP 17 .ti +4 CHARACTER NORM .TP 17 .ti +4 INTEGER IA, JA, N .TP 17 .ti +4 INTEGER DESCA( * ) .TP 17 .ti +4 DOUBLE PRECISION WORK( * ) .TP 17 .ti +4 COMPLEX*16 A( * ) .SH PURPOSE PZLANHS returns the value of the one norm, or the Frobenius norm, or the infinity norm, or the element of largest absolute value of a Hessenberg distributed matrix sub( A ) = A(IA:IA+N-1,JA:JA+N-1). PZLANHS returns the value .br ( max(abs(A(i,j))), NORM = 'M' or 'm' with IA <= i <= IA+N-1, ( and JA <= j <= JA+N-1, ( .br ( norm1( sub( A ) ), NORM = '1', 'O' or 'o' .br ( .br ( normI( sub( A ) ), NORM = 'I' or 'i' .br ( .br ( normF( sub( A ) ), NORM = 'F', 'f', 'E' or 'e' .br where norm1 denotes the one norm of a matrix (maximum column sum), normI denotes the infinity norm of a matrix (maximum row sum) and normF denotes the Frobenius norm of a matrix (square root of sum of squares). Note that max(abs(A(i,j))) is not a matrix norm. Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. 
.br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). 
An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 NORM (global input) CHARACTER Specifies the value to be returned in PZLANHS as described above. .TP 8 N (global input) INTEGER The number of rows and columns to be operated on i.e the number of rows and columns of the distributed submatrix sub( A ). When N = 0, PZLANHS is set to zero. N >= 0. .TP 8 A (local input) COMPLEX*16 pointer into the local memory to an array of dimension (LLD_A, LOCc(JA+N-1) ) containing the local pieces of sub( A ). .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 WORK (local workspace) DOUBLE PRECISION array dimension (LWORK) LWORK >= 0 if NORM = 'M' or 'm' (not referenced), Nq0 if NORM = '1', 'O' or 'o', Mp0 if NORM = 'I' or 'i', 0 if NORM = 'F', 'f', 'E' or 'e' (not referenced), where IROFFA = MOD( IA-1, MB_A ), ICOFFA = MOD( JA-1, NB_A ), IAROW = INDXG2P( IA, MB_A, MYROW, RSRC_A, NPROW ), IACOL = INDXG2P( JA, NB_A, MYCOL, CSRC_A, NPCOL ), Np0 = NUMROC( N+IROFFA, MB_A, MYROW, IAROW, NPROW ), Nq0 = NUMROC( N+ICOFFA, NB_A, MYCOL, IACOL, NPCOL ), INDXG2P and NUMROC are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. 
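The workspace formulas also use INDXG2P, which maps a global row or column index to the process coordinate that owns it. Its core is a one-line formula; here is a plain-Python sketch (the unused IPROC dummy argument of the Fortran routine is dropped for brevity):

```python
def indxg2p(indxglob, nb, isrcproc, nprocs):
    """Sketch of ScaLAPACK's INDXG2P: the process coordinate owning
    global index indxglob (1-based) under a block-cyclic distribution
    with block size nb, first block on process isrcproc."""
    return (isrcproc + (indxglob - 1) // nb) % nprocs

# With NB = 2 and 3 processes starting at process 0, global indices
# 1..8 are dealt out two at a time, wrapping around the process ring:
owners = [indxg2p(i, 2, 0, 3) for i in range(1, 9)]
print(owners)   # [0, 0, 1, 1, 2, 2, 0, 0]
```

This is exactly the mapping the descriptor entries MB_A/NB_A and RSRC_A/CSRC_A describe: IAROW = INDXG2P(IA, MB_A, RSRC_A, NPROW) gives the process row holding the first row of sub( A ).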
scalapack-doc-1.5/man/manl/pzlansy.l0100644000056400000620000001447706335610660017136 0ustar pfrauenfstaff.TH PZLANSY l "12 May 1997" "LAPACK version 1.5" "LAPACK auxiliary routine (version 1.5)" .SH NAME PZLANSY - return the value of the one norm, or the Frobenius norm, .SH SYNOPSIS .TP 17 DOUBLE PRECISION FUNCTION PZLANSY( NORM, UPLO, N, A, IA, JA, DESCA, WORK ) .TP 17 .ti +4 CHARACTER NORM, UPLO .TP 17 .ti +4 INTEGER IA, JA, N .TP 17 .ti +4 INTEGER DESCA( * ) .TP 17 .ti +4 DOUBLE PRECISION WORK( * ) .TP 17 .ti +4 COMPLEX*16 A( * ) .SH PURPOSE PZLANSY returns the value of the one norm, or the Frobenius norm, or the infinity norm, or the element of largest absolute value of a real symmetric distributed matrix sub( A ) = A(IA:IA+N-1,JA:JA+N-1). PZLANSY returns the value .br ( max(abs(A(i,j))), NORM = 'M' or 'm' with IA <= i <= IA+N-1, ( and JA <= j <= JA+N-1, ( .br ( norm1( sub( A ) ), NORM = '1', 'O' or 'o' .br ( .br ( normI( sub( A ) ), NORM = 'I' or 'i' .br ( .br ( normF( sub( A ) ), NORM = 'F', 'f', 'E' or 'e' .br where norm1 denotes the one norm of a matrix (maximum column sum), normI denotes the infinity norm of a matrix (maximum row sum) and normF denotes the Frobenius norm of a matrix (square root of sum of squares). Note that max(abs(A(i,j))) is not a matrix norm. Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. 
.br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 NORM (global input) CHARACTER Specifies the value to be returned in PZLANSY as described above. .TP 8 UPLO (global input) CHARACTER Specifies whether the upper or lower triangular part of the symmetric matrix sub( A ) is to be referenced. 
= 'U': Upper triangular part of sub( A ) is referenced, .br = 'L': Lower triangular part of sub( A ) is referenced. .TP 8 N (global input) INTEGER The number of rows and columns to be operated on i.e the number of rows and columns of the distributed submatrix sub( A ). When N = 0, PZLANSY is set to zero. N >= 0. .TP 8 A (local input) COMPLEX*16 pointer into the local memory to an array of dimension (LLD_A, LOCc(JA+N-1)) containing the local pieces of the symmetric distributed matrix sub( A ). If UPLO = 'U', the leading N-by-N upper triangular part of sub( A ) contains the upper triangular matrix which norm is to be computed, and the strictly lower triangular part of this matrix is not referenced. If UPLO = 'L', the leading N-by-N lower triangular part of sub( A ) contains the lower triangular matrix which norm is to be computed, and the strictly upper triangular part of sub( A ) is not referenced. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 WORK (local workspace) DOUBLE PRECISION array dimension (LWORK) LWORK >= 0 if NORM = 'M' or 'm' (not referenced), 2*Nq0+Np0+LDW if NORM = '1', 'O', 'o', 'I' or 'i', where LDW is given by: IF( NPROW.NE.NPCOL ) THEN LDW = MB_A*CEIL(CEIL(Np0/MB_A)/(LCM/NPROW)) ELSE LDW = 0 END IF 0 if NORM = 'F', 'f', 'E' or 'e' (not referenced), where LCM is the least common multiple of NPROW and NPCOL LCM = ILCM( NPROW, NPCOL ) and CEIL denotes the ceiling operation (ICEIL). 
IROFFA = MOD( IA-1, MB_A ), ICOFFA = MOD( JA-1, NB_A ), IAROW = INDXG2P( IA, MB_A, MYROW, RSRC_A, NPROW ), IACOL = INDXG2P( JA, NB_A, MYCOL, CSRC_A, NPCOL ), Np0 = NUMROC( N+IROFFA, MB_A, MYROW, IAROW, NPROW ), Nq0 = NUMROC( N+ICOFFA, NB_A, MYCOL, IACOL, NPCOL ), ICEIL, ILCM, INDXG2P and NUMROC are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. scalapack-doc-1.5/man/manl/pzlantr.l0100644000056400000620000001373106335610660017120 0ustar pfrauenfstaff.TH PZLANTR l "12 May 1997" "LAPACK version 1.5" "LAPACK auxiliary routine (version 1.5)" .SH NAME PZLANTR - return the value of the one norm, or the Frobenius norm, .SH SYNOPSIS .TP 17 DOUBLE PRECISION FUNCTION PZLANTR( NORM, UPLO, DIAG, M, N, A, IA, JA, DESCA, WORK ) .TP 17 .ti +4 CHARACTER DIAG, NORM, UPLO .TP 17 .ti +4 INTEGER IA, JA, M, N .TP 17 .ti +4 INTEGER DESCA( * ) .TP 17 .ti +4 DOUBLE PRECISION WORK( * ) .TP 17 .ti +4 COMPLEX*16 A( * ) .SH PURPOSE PZLANTR returns the value of the one norm, or the Frobenius norm, or the infinity norm, or the element of largest absolute value of a trapezoidal or triangular distributed matrix sub( A ) denoting A(IA:IA+M-1, JA:JA+N-1). .br PZLANTR returns the value .br ( max(abs(A(i,j))), NORM = 'M' or 'm' with ia <= i <= ia+m-1, ( and ja <= j <= ja+n-1, ( .br ( norm1( sub( A ) ), NORM = '1', 'O' or 'o' .br ( .br ( normI( sub( A ) ), NORM = 'I' or 'i' .br ( .br ( normF( sub( A ) ), NORM = 'F', 'f', 'E' or 'e' .br where norm1 denotes the one norm of a matrix (maximum column sum), normI denotes the infinity norm of a matrix (maximum row sum) and normF denotes the Frobenius norm of a matrix (square root of sum of squares). Note that max(abs(A(i,j))) is not a matrix norm. Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. 
.br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). 
An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 NORM (global input) CHARACTER Specifies the value to be returned in PZLANTR as described above. .TP 8 UPLO (global input) CHARACTER Specifies whether the matrix sub( A ) is upper or lower trapezoidal. = 'U': Upper trapezoidal .br = 'L': Lower trapezoidal Note that sub( A ) is triangular instead of trapezoidal if M = N. .TP 8 DIAG (global input) CHARACTER Specifies whether or not the distributed matrix sub( A ) has unit diagonal. = 'N': Non-unit diagonal .br = 'U': Unit diagonal .TP 8 M (global input) INTEGER The number of rows to be operated on i.e the number of rows of the distributed submatrix sub( A ). When M = 0, PZLANTR is set to zero. M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on i.e the number of columns of the distributed submatrix sub( A ). When N = 0, PZLANTR is set to zero. N >= 0. .TP 8 A (local input) COMPLEX*16 pointer into the local memory to an array of dimension (LLD_A, LOCc(JA+N-1) ) containing the local pieces of sub( A ). .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. 
.TP 8 WORK (local workspace) DOUBLE PRECISION array dimension (LWORK) LWORK >= 0 if NORM = 'M' or 'm' (not referenced), Nq0 if NORM = '1', 'O' or 'o', Mp0 if NORM = 'I' or 'i', 0 if NORM = 'F', 'f', 'E' or 'e' (not referenced), where IROFFA = MOD( IA-1, MB_A ), ICOFFA = MOD( JA-1, NB_A ), IAROW = INDXG2P( IA, MB_A, MYROW, RSRC_A, NPROW ), IACOL = INDXG2P( JA, NB_A, MYCOL, CSRC_A, NPCOL ), Mp0 = NUMROC( M+IROFFA, MB_A, MYROW, IAROW, NPROW ), Nq0 = NUMROC( N+ICOFFA, NB_A, MYCOL, IACOL, NPCOL ), INDXG2P and NUMROC are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. scalapack-doc-1.5/man/manl/pzlapiv.l0100644000056400000620000001550406335610660017113 0ustar pfrauenfstaff.TH PZLAPIV l "12 May 1997" "LAPACK version 1.5" "LAPACK auxiliary routine (version 1.5)" .SH NAME PZLAPIV - apply either P (permutation matrix indicated by IPIV) or inv( P ) to a general M-by-N distributed matrix sub( A ) = A(IA:IA+M-1,JA:JA+N-1), resulting in row or column pivoting .SH SYNOPSIS .TP 20 SUBROUTINE PZLAPIV( DIREC, ROWCOL, PIVROC, M, N, A, IA, JA, DESCA, IPIV, IP, JP, DESCIP, IWORK ) .TP 20 .ti +4 CHARACTER*1 DIREC, PIVROC, ROWCOL .TP 20 .ti +4 INTEGER IA, IP, JA, JP, M, N .TP 20 .ti +4 INTEGER DESCA( * ), DESCIP( * ), IPIV( * ), IWORK( * ) .TP 20 .ti +4 COMPLEX*16 A( * ) .SH PURPOSE PZLAPIV applies either P (permutation matrix indicated by IPIV) or inv( P ) to a general M-by-N distributed matrix sub( A ) = A(IA:IA+M-1,JA:JA+N-1), resulting in row or column pivoting. The pivot vector may be distributed across a process row or a column. The pivot vector should be aligned with the distributed matrix A. This routine will transpose the pivot vector if necessary. For example, if the row pivots should be applied to the columns of sub( A ), pass ROWCOL='C' and PIVROC='C'. .br Notes .br ===== .br Each global data object is described by an associated description vector. 
This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. 
.br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br Restrictions .br ============ .br IPIV must always be a distributed vector (not a matrix). Thus: IF( ROWPIV .EQ. 'C' ) THEN .br JP must be 1 .br ELSE .br IP must be 1 .br END IF .br The following restrictions apply when IPIV must be transposed: IF( ROWPIV.EQ.'C' .AND. PIVROC.EQ.'C') THEN .br DESCIP(MB_) must equal DESCA(NB_) .br ELSE IF( ROWPIV.EQ.'R' .AND. PIVROC.EQ.'R') THEN .br DESCIP(NB_) must equal DESCA(MB_) .br END IF .br .SH ARGUMENTS .TP 8 DIREC (global input) CHARACTER*1 Specifies in which order the permutation is applied: = 'F' (Forward) Applies pivots Forward from top of matrix. Computes P*sub( A ). = 'B' (Backward) Applies pivots Backward from bottom of matrix. Computes inv( P )*sub( A ). .TP 8 ROWCOL (global input) CHARACTER*1 Specifies if the rows or columns are to be permuted: = 'R' Rows will be permuted, = 'C' Columns will be permuted. .TP 8 PIVROC (global input) CHARACTER*1 Specifies whether IPIV is distributed over a process row or column: = 'R' IPIV distributed over a process row = 'C' IPIV distributed over a process column .TP 8 M (global input) INTEGER The number of rows to be operated on, i.e. the number of rows of the distributed submatrix sub( A ). M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on, i.e. the number of columns of the distributed submatrix sub( A ). N >= 0. .TP 8 A (local input/local output) COMPLEX*16 pointer into the local memory to an array of dimension (LLD_A, LOCc(JA+N-1)). On entry, this array contains the local pieces of the distributed submatrix sub( A ) to which the row or column interchanges will be applied. 
On exit, the local pieces of the permuted distributed submatrix. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 IPIV (local input) INTEGER array, dimension >= LOCr(M_A)+MB_A if ROWCOL='R', otherwise LOCc(N_A)+NB_A. It contains the pivoting information. IPIV(i) is the global row (column), local row (column) i was swapped with. The last piece of the array of size MB_A (resp. NB_A) is used as workspace. This array is tied to the distributed matrix A. .TP 8 IWORK (local workspace) INTEGER array, dimension (LDW) where LDW is equal to the workspace necessary for transposition, and the storage of the transposed IPIV: Let LCM be the least common multiple of NPROW and NPCOL. IF( ROWCOL.EQ.'R' .AND. PIVROC.EQ.'R' ) THEN IF( NPROW.EQ.NPCOL ) THEN LDW = LOCr( N_P + MOD(JP-1, NB_P) ) + NB_P ELSE LDW = LOCr( N_P + MOD(JP-1, NB_P) ) + NB_P * CEIL( CEIL(LOCc(N_P)/NB_P) / (LCM/NPCOL) ) END IF ELSE IF( ROWCOL.EQ.'C' .AND. PIVROC.EQ.'C' ) THEN IF( NPROW.EQ.NPCOL ) THEN LDW = LOCc( M_P + MOD(IP-1, MB_P) ) + MB_P ELSE LDW = LOCc( M_P + MOD(IP-1, MB_P) ) + MB_P * CEIL( CEIL(LOCr(M_P)/MB_P) / (LCM/NPROW) ) END IF ELSE IWORK is not referenced. 
END IF scalapack-doc-1.5/man/manl/pzlapv2.l0100644000056400000620000001360106335610660017020 0ustar pfrauenfstaff.TH PZLAPV2 l "12 May 1997" "LAPACK version 1.5" "LAPACK auxiliary routine (version 1.5)" .SH NAME PZLAPV2 - apply either P (permutation matrix indicated by IPIV) or inv( P ) to an M-by-N distributed matrix sub( A ) denoting A(IA:IA+M-1,JA:JA+N-1), resulting in row or column pivoting .SH SYNOPSIS .TP 20 SUBROUTINE PZLAPV2( DIREC, ROWCOL, M, N, A, IA, JA, DESCA, IPIV, IP, JP, DESCIP ) .TP 20 .ti +4 CHARACTER DIREC, ROWCOL .TP 20 .ti +4 INTEGER IA, IP, JA, JP, M, N .TP 20 .ti +4 INTEGER DESCA( * ), DESCIP( * ), IPIV( * ) .TP 20 .ti +4 COMPLEX*16 A( * ) .SH PURPOSE PZLAPV2 applies either P (permutation matrix indicated by IPIV) or inv( P ) to an M-by-N distributed matrix sub( A ) denoting A(IA:IA+M-1,JA:JA+N-1), resulting in row or column pivoting. The pivot vector should be aligned with the distributed matrix A. For pivoting the rows of sub( A ), IPIV should be distributed along a process column and replicated over all process rows. Similarly, IPIV should be distributed along a process row and replicated over all process columns for column pivoting. .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. 
The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 DIREC (global input) CHARACTER Specifies in which order the permutation is applied: = 'F' (Forward) Applies pivots Forward from top of matrix. Computes P * sub( A ); = 'B' (Backward) Applies pivots Backward from bottom of matrix. Computes inv( P ) * sub( A ). .TP 8 ROWCOL (global input) CHARACTER Specifies if the rows or columns are to be permuted: = 'R' Rows will be permuted, = 'C' Columns will be permuted. 
.TP 8 M (global input) INTEGER The number of rows to be operated on, i.e. the number of rows of the distributed submatrix sub( A ). M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on, i.e. the number of columns of the distributed submatrix sub( A ). N >= 0. .TP 8 A (local input/local output) COMPLEX*16 pointer into the local memory to an array of dimension (LLD_A, LOCc(JA+N-1)). On entry, this local array contains the local pieces of the distributed matrix sub( A ) to which the row or columns interchanges will be applied. On exit, this array contains the local pieces of the permuted distributed matrix. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 IPIV (input) INTEGER array, dimension >= LOCr(M_A)+MB_A if ROWCOL = 'R', LOCc(N_A)+NB_A otherwise. It contains the pivoting information. IPIV(i) is the global row (column), local row (column) i was swapped with. The last piece of the array of size MB_A (resp. NB_A) is used as workspace. IPIV is tied to the distributed matrix A. .TP 8 IP (global input) INTEGER IPIV's global row index, which points to the beginning of the submatrix which is to be operated on. .TP 8 JP (global input) INTEGER IPIV's global column index, which points to the beginning of the submatrix which is to be operated on. .TP 8 DESCIP (global and local input) INTEGER array of dimension 8 The array descriptor for the distributed matrix IPIV. 
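The LOCr() and LOCc() quantities quoted throughout these pages are defined via the ScaLAPACK tool function NUMROC. The following serial Python sketch mirrors the standard NUMROC computation for a 1D block-cyclic distribution; the Python function and the test values are illustrative stand-ins for the real Fortran tool function.

```python
def numroc(n, nb, iproc, isrcproc, nprocs):
    """Number of rows (or columns) of an n-wide global dimension, blocked
    by nb, that land on process iproc when the blocks are dealt cyclically
    over nprocs processes starting at process isrcproc (serial sketch)."""
    mydist = (nprocs + iproc - isrcproc) % nprocs  # distance from the source process
    nblocks = n // nb                              # number of full blocks
    num = (nblocks // nprocs) * nb                 # complete sweeps of full blocks
    extrablocks = nblocks % nprocs                 # leftover full blocks
    if mydist < extrablocks:
        num += nb                                  # this process gets one more full block
    elif mydist == extrablocks:
        num += n % nb                              # this process gets the trailing partial block
    return num
```

For example, with N = 10, NB = 2 and NPCOL = 2, process columns 0 and 1 own 6 and 4 columns respectively, and both values respect the upper bound ceil( ceil(N/NB)/NPCOL )*NB = 6 quoted in the Notes.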
scalapack-doc-1.5/man/manl/pzlaqge.l0100644000056400000620000001402006335610660017061 0ustar pfrauenfstaff.TH PZLAQGE l "12 May 1997" "LAPACK version 1.5" "LAPACK auxiliary routine (version 1.5)" .SH NAME PZLAQGE - equilibrate a general M-by-N distributed matrix sub( A ) = A(IA:IA+M-1,JA:JA+N-1) using the row and scaling factors in the vectors R and C .SH SYNOPSIS .TP 20 SUBROUTINE PZLAQGE( M, N, A, IA, JA, DESCA, R, C, ROWCND, COLCND, AMAX, EQUED ) .TP 20 .ti +4 CHARACTER EQUED .TP 20 .ti +4 INTEGER IA, JA, M, N .TP 20 .ti +4 DOUBLE PRECISION AMAX, COLCND, ROWCND .TP 20 .ti +4 INTEGER DESCA( * ) .TP 20 .ti +4 DOUBLE PRECISION C( * ), R( * ) .TP 20 .ti +4 COMPLEX*16 A( * ) .SH PURPOSE PZLAQGE equilibrates a general M-by-N distributed matrix sub( A ) = A(IA:IA+M-1,JA:JA+N-1) using the row and scaling factors in the vectors R and C. Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. 
.br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 M (global input) INTEGER The number of rows to be operated on i.e the number of rows of the distributed submatrix sub( A ). M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on i.e the number of columns of the distributed submatrix sub( A ). N >= 0. .TP 8 A (local input/local output) COMPLEX*16 pointer into the local memory to an array of dimension (LLD_A,LOCc(JA+N-1)) containing on entry the M-by-N matrix sub( A ). On exit, the equilibrated distributed matrix. See EQUED for the form of the equilibrated distributed submatrix. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). 
.TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 R (local input) DOUBLE PRECISION array, dimension LOCr(M_A) The row scale factors for sub( A ). R is aligned with the distributed matrix A, and replicated across every process column. R is tied to the distributed matrix A. .TP 8 C (local input) DOUBLE PRECISION array, dimension LOCc(N_A) The column scale factors of sub( A ). C is aligned with the distributed matrix A, and replicated down every process row. C is tied to the distributed matrix A. .TP 8 ROWCND (global input) DOUBLE PRECISION The global ratio of the smallest R(i) to the largest R(i), IA <= i <= IA+M-1. .TP 8 COLCND (global input) DOUBLE PRECISION The global ratio of the smallest C(i) to the largest C(i), JA <= j <= JA+N-1. .TP 8 AMAX (global input) DOUBLE PRECISION Absolute value of largest distributed submatrix entry. .TP 8 EQUED (global output) CHARACTER Specifies the form of equilibration that was done. = 'N': No equilibration .br = 'R': Row equilibration, i.e., sub( A ) has been pre- .br multiplied by diag(R(IA:IA+M-1)), .br = 'C': Column equilibration, i.e., sub( A ) has been post- .br multiplied by diag(C(JA:JA+N-1)), .br = 'B': Both row and column equilibration, i.e., sub( A ) has been replaced by diag(R(IA:IA+M-1)) * sub( A ) * diag(C(JA:JA+N-1)). .SH PARAMETERS THRESH is a threshold value used to decide if row or column scaling should be done based on the ratio of the row or column scaling factors. If ROWCND < THRESH, row scaling is done, and if COLCND < THRESH, column scaling is done. LARGE and SMALL are threshold values used to decide if row scaling should be done based on the absolute size of the largest matrix element. If AMAX > LARGE or AMAX < SMALL, row scaling is done. 
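As a concrete illustration of the rules in the PARAMETERS paragraph above, here is a serial Python analogue of the scaling PZLAQGE applies, written for a whole (non-distributed) matrix. The function name and the THRESH/LARGE/SMALL defaults are illustrative stand-ins, not the library's internal values.

```python
def equilibrate(a, r, c, rowcnd, colcnd, amax,
                thresh=0.1, large=1e38, small=1e-38):
    """Serial sketch of GE equilibration: decide which scalings to apply
    per the ROWCND/COLCND/AMAX tests, then form diag(R)*A, A*diag(C),
    or diag(R)*A*diag(C). Returns the scaled matrix and EQUED."""
    # Row scaling if the row factors vary too much, or AMAX is out of range.
    do_row = rowcnd < thresh or amax > large or amax < small
    # Column scaling if the column factors vary too much.
    do_col = colcnd < thresh
    if do_row:
        a = [[r[i] * aij for aij in row] for i, row in enumerate(a)]
    if do_col:
        a = [[aij * c[j] for j, aij in enumerate(row)] for row in a]
    equed = {(False, False): 'N', (True, False): 'R',
             (False, True): 'C', (True, True): 'B'}[(do_row, do_col)]
    return a, equed
```

With ROWCND below the threshold and COLCND above it, only rows are scaled and EQUED comes back 'R', matching the EQUED description above.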
scalapack-doc-1.5/man/manl/pzlaqsy.l0100644000056400000620000001415306335610660017130 0ustar pfrauenfstaff.TH PZLAQSY l "12 May 1997" "LAPACK version 1.5" "LAPACK auxiliary routine (version 1.5)" .SH NAME PZLAQSY - equilibrate a symmetric distributed matrix sub( A ) = A(IA:IA+N-1,JA:JA+N-1) using the scaling factors in the vectors SR and SC .SH SYNOPSIS .TP 20 SUBROUTINE PZLAQSY( UPLO, N, A, IA, JA, DESCA, SR, SC, SCOND, AMAX, EQUED ) .TP 20 .ti +4 CHARACTER EQUED, UPLO .TP 20 .ti +4 INTEGER IA, JA, N .TP 20 .ti +4 DOUBLE PRECISION AMAX, SCOND .TP 20 .ti +4 INTEGER DESCA( * ) .TP 20 .ti +4 DOUBLE PRECISION SC( * ), SR( * ) .TP 20 .ti +4 COMPLEX*16 A( * ) .SH PURPOSE PZLAQSY equilibrates a symmetric distributed matrix sub( A ) = A(IA:IA+N-1,JA:JA+N-1) using the scaling factors in the vectors SR and SC. Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. 
.br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 UPLO (global input) CHARACTER Specifies whether the upper or lower triangular part of the symmetric distributed matrix sub( A ) is to be referenced: .br = 'U': Upper triangular .br = 'L': Lower triangular .TP 8 N (global input) INTEGER The number of rows and columns to be operated on, i.e. the order of the distributed submatrix sub( A ). N >= 0. .TP 8 A (input/output) COMPLEX*16 pointer into the local memory to an array of local dimension (LLD_A,LOCc(JA+N-1)). On entry, the local pieces of the distributed symmetric matrix sub( A ). If UPLO = 'U', the leading N-by-N upper triangular part of sub( A ) contains the upper triangular part of the matrix, and the strictly lower triangular part of sub( A ) is not referenced. 
If UPLO = 'L', the leading N-by-N lower triangular part of sub( A ) contains the lower triangular part of the matrix, and the strictly upper trian- gular part of sub( A ) is not referenced. On exit, if EQUED = 'Y', the equilibrated matrix: .br diag(SR(IA:IA+N-1)) * sub( A ) * diag(SC(JA:JA+N-1)). .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 SR (local input) DOUBLE PRECISION array, dimension LOCr(M_A) The scale factors for A(IA:IA+M-1,JA:JA+N-1). SR is aligned with the distributed matrix A, and replicated across every process column. SR is tied to the distributed matrix A. .TP 8 SC (local input) DOUBLE PRECISION array, dimension LOCc(N_A) The scale factors for sub( A ). SC is aligned with the dis- tributed matrix A, and replicated down every process row. SC is tied to the distributed matrix A. .TP 8 SCOND (global input) DOUBLE PRECISION Ratio of the smallest SR(i) (respectively SC(j)) to the largest SR(i) (respectively SC(j)), with IA <= i <= IA+N-1 and JA <= j <= JA+N-1. .TP 8 AMAX (global input) DOUBLE PRECISION Absolute value of the largest distributed submatrix entry. .TP 8 EQUED (output) CHARACTER*1 Specifies whether or not equilibration was done. = 'N': No equilibration. .br = 'Y': Equilibration was done, i.e., sub( A ) has been re- .br placed by: .br diag(SR(IA:IA+N-1)) * sub( A ) * diag(SC(JA:JA+N-1)). .SH PARAMETERS THRESH is a threshold value used to decide if scaling should be done based on the ratio of the scaling factors. If SCOND < THRESH, scaling is done. LARGE and SMALL are threshold values used to decide if scaling should be done based on the absolute size of the largest matrix element. If AMAX > LARGE or AMAX < SMALL, scaling is done. 
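The symmetric case can be sketched the same way; scaling both sides with the same factors (the EQUED = 'Y' formula diag(SR)*sub( A )*diag(SC) with SR and SC holding the same values) is what keeps the matrix symmetric after equilibration. Again a serial Python analogue with an illustrative name and threshold, operating on a full matrix rather than one triangle:

```python
def equilibrate_sy(a, s, scond, amax,
                   thresh=0.1, large=1e38, small=1e-38):
    """Serial sketch of symmetric equilibration: if SCOND or AMAX triggers
    scaling, replace A by diag(s)*A*diag(s); otherwise leave it alone.
    Returns the (possibly scaled) matrix and EQUED ('N' or 'Y')."""
    if scond >= thresh and small <= amax <= large:
        return a, 'N'                      # no equilibration needed
    n = len(a)
    # Same factor on the left and the right preserves symmetry.
    scaled = [[s[i] * a[i][j] * s[j] for j in range(n)] for i in range(n)]
    return scaled, 'Y'
```

Note that a symmetric input stays symmetric: the (i,j) and (j,i) entries are multiplied by the same product s[i]*s[j].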
scalapack-doc-1.5/man/manl/pzlarf.l0100644000056400000620000002001506335610660016715 0ustar pfrauenfstaff.TH PZLARF l "12 May 1997" "LAPACK version 1.5" "LAPACK auxiliary routine (version 1.5)" .SH NAME PZLARF - applies a complex elementary reflector Q to a complex M-by-N distributed matrix sub( C ) = C(IC:IC+M-1,JC:JC+N-1), from either the left or the right .SH SYNOPSIS .TP 19 SUBROUTINE PZLARF( SIDE, M, N, V, IV, JV, DESCV, INCV, TAU, C, IC, JC, DESCC, WORK ) .TP 19 .ti +4 CHARACTER SIDE .TP 19 .ti +4 INTEGER IC, INCV, IV, JC, JV, M, N .TP 19 .ti +4 INTEGER DESCC( * ), DESCV( * ) .TP 19 .ti +4 COMPLEX*16 C( * ), TAU( * ), V( * ), WORK( * ) .SH PURPOSE PZLARF applies a complex elementary reflector Q to a complex M-by-N distributed matrix sub( C ) = C(IC:IC+M-1,JC:JC+N-1), from either the left or the right. Q is represented in the form Q = I - tau * v * v' .br where tau is a complex scalar and v is a complex vector. .br If tau = 0, then Q is taken to be the unit matrix. .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclically distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ ) The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distributed over. The context itself is global, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. 
.br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br Because vectors may be viewed as a subclass of matrices, a distributed vector is considered to be a distributed matrix. Restrictions .br ============ .br If SIDE = 'Left' and INCV = 1, then the row process having the first entry V(IV,JV) must also have the first row of sub( C ). Moreover, MOD(IV-1,MB_V) must be equal to MOD(IC-1,MB_C), if INCV=M_V, only the last equality must be satisfied. .br If SIDE = 'Right' and INCV = M_V then the column process having the first entry V(IV,JV) must also have the first column of sub( C ) and MOD(JV-1,NB_V) must be equal to MOD(JC-1,NB_C), if INCV = 1 only the last equality must be satisfied. 
.br .SH ARGUMENTS .TP 8 SIDE (global input) CHARACTER = 'L': form Q * sub( C ), .br = 'R': form sub( C ) * Q. .TP 8 M (global input) INTEGER The number of rows to be operated on i.e the number of rows of the distributed submatrix sub( C ). M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on i.e the number of columns of the distributed submatrix sub( C ). N >= 0. .TP 8 V (local input) COMPLEX*16 pointer into the local memory to an array of dimension (LLD_V,*) containing the local pieces of the distributed vectors V representing the Householder transformation Q, V(IV:IV+M-1,JV) if SIDE = 'L' and INCV = 1, .br V(IV,JV:JV+M-1) if SIDE = 'L' and INCV = M_V, .br V(IV:IV+N-1,JV) if SIDE = 'R' and INCV = 1, .br V(IV,JV:JV+N-1) if SIDE = 'R' and INCV = M_V, The vector v in the representation of Q. V is not used if TAU = 0. .TP 8 IV (global input) INTEGER The row index in the global array V indicating the first row of sub( V ). .TP 8 JV (global input) INTEGER The column index in the global array V indicating the first column of sub( V ). .TP 8 DESCV (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix V. .TP 8 INCV (global input) INTEGER The global increment for the elements of V. Only two values of INCV are supported in this version, namely 1 and M_V. INCV must not be zero. .TP 8 TAU (local input) COMPLEX*16, array, dimension LOCc(JV) if INCV = 1, and LOCr(IV) otherwise. This array contains the Householder scalars related to the Householder vectors. TAU is tied to the distributed matrix V. .TP 8 C (local input/local output) COMPLEX*16 pointer into the local memory to an array of dimension (LLD_C, LOCc(JC+N-1) ), containing the local pieces of sub( C ). On exit, sub( C ) is overwritten by the Q * sub( C ) if SIDE = 'L', or sub( C ) * Q if SIDE = 'R'. .TP 8 IC (global input) INTEGER The row index in the global array C indicating the first row of sub( C ). 
.TP 8 JC (global input) INTEGER The column index in the global array C indicating the first column of sub( C ). .TP 8 DESCC (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix C. .TP 8 WORK (local workspace) COMPLEX*16 array, dimension (LWORK) If INCV = 1, if SIDE = 'L', if IVCOL = ICCOL, LWORK >= NqC0 else LWORK >= MpC0 + MAX( 1, NqC0 ) end if else if SIDE = 'R', LWORK >= NqC0 + MAX( MAX( 1, MpC0 ), NUMROC( NUMROC( N+ICOFFC,NB_V,0,0,NPCOL ),NB_V,0,0,LCMQ ) ) end if else if INCV = M_V, if SIDE = 'L', LWORK >= MpC0 + MAX( MAX( 1, NqC0 ), NUMROC( NUMROC( M+IROFFC,MB_V,0,0,NPROW ),MB_V,0,0,LCMP ) ) else if SIDE = 'R', if IVROW = ICROW, LWORK >= MpC0 else LWORK >= NqC0 + MAX( 1, MpC0 ) end if end if end if where LCM is the least common multiple of NPROW and NPCOL and LCM = ILCM( NPROW, NPCOL ), LCMP = LCM / NPROW, LCMQ = LCM / NPCOL, IROFFC = MOD( IC-1, MB_C ), ICOFFC = MOD( JC-1, NB_C ), ICROW = INDXG2P( IC, MB_C, MYROW, RSRC_C, NPROW ), ICCOL = INDXG2P( JC, NB_C, MYCOL, CSRC_C, NPCOL ), MpC0 = NUMROC( M+IROFFC, MB_C, MYROW, ICROW, NPROW ), NqC0 = NUMROC( N+ICOFFC, NB_C, MYCOL, ICCOL, NPCOL ), ILCM, INDXG2P and NUMROC are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. Alignment requirements ====================== The distributed submatrices V(IV:*, JV:*) and C(IC:IC+M-1,JC:JC+N-1) must verify some alignment properties, namely the following expressions should be true: MB_V = NB_V, If INCV = 1, If SIDE = 'Left', ( MB_V.EQ.MB_C .AND. IROFFV.EQ.IROFFC .AND. IVROW.EQ.ICROW ) If SIDE = 'Right', ( MB_V.EQ.NB_A .AND. MB_V.EQ.NB_C .AND. IROFFV.EQ.ICOFFC ) else if INCV = M_V, If SIDE = 'Left', ( MB_V.EQ.NB_V .AND. MB_V.EQ.MB_C .AND. ICOFFV.EQ.IROFFC ) If SIDE = 'Right', ( NB_V.EQ.NB_C .AND. ICOFFV.EQ.ICOFFC .AND. 
IVCOL.EQ.ICCOL ) end if scalapack-doc-1.5/man/manl/pzlarfb.l0100644000056400000620000001767606335610660017076 0ustar pfrauenfstaff.TH PZLARFB l "12 May 1997" "LAPACK version 1.5" "LAPACK auxiliary routine (version 1.5)" .SH NAME PZLARFB - applies a complex block reflector Q or its conjugate transpose Q**H to a complex M-by-N distributed matrix sub( C ) denoting C(IC:IC+M-1,JC:JC+N-1), from the left or the right .SH SYNOPSIS .TP 20 SUBROUTINE PZLARFB( SIDE, TRANS, DIRECT, STOREV, M, N, K, V, IV, JV, DESCV, T, C, IC, JC, DESCC, WORK ) .TP 20 .ti +4 CHARACTER SIDE, TRANS, DIRECT, STOREV .TP 20 .ti +4 INTEGER IC, IV, JC, JV, K, M, N .TP 20 .ti +4 INTEGER DESCC( * ), DESCV( * ) .TP 20 .ti +4 COMPLEX*16 C( * ), T( * ), V( * ), WORK( * ) .SH PURPOSE PZLARFB applies a complex block reflector Q or its conjugate transpose Q**H to a complex M-by-N distributed matrix sub( C ) denoting C(IC:IC+M-1,JC:JC+N-1), from the left or the right. Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclically distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ ) The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distributed over. The context itself is global, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. 
.br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 SIDE (global input) CHARACTER = 'L': apply Q or Q**H from the Left; .br = 'R': apply Q or Q**H from the Right. .TP 8 TRANS (global input) CHARACTER .br = 'N': No transpose, apply Q; .br = 'C': Conjugate transpose, apply Q**H. .TP 8 DIRECT (global input) CHARACTER Indicates how Q is formed from a product of elementary reflectors = 'F': Q = H(1) H(2) . . . H(k) (Forward) .br = 'B': Q = H(k) . . . 
H(2) H(1) (Backward) .TP 8 STOREV (global input) CHARACTER Indicates how the vectors which define the elementary reflectors are stored: .br = 'C': Columnwise .br = 'R': Rowwise .TP 8 M (global input) INTEGER The number of rows to be operated on i.e the number of rows of the distributed submatrix sub( C ). M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on i.e the number of columns of the distributed submatrix sub( C ). N >= 0. .TP 8 K (global input) INTEGER The order of the matrix T (= the number of elementary reflectors whose product defines the block reflector). .TP 8 V (local input) COMPLEX*16 pointer into the local memory to an array of dimension ( LLD_V, LOCc(JV+K-1) ) if STOREV = 'C', ( LLD_V, LOCc(JV+M-1)) if STOREV = 'R' and SIDE = 'L', ( LLD_V, LOCc(JV+N-1) ) if STOREV = 'R' and SIDE = 'R'. It contains the local pieces of the distributed vectors V representing the Householder transformation. See further details. If STOREV = 'C' and SIDE = 'L', LLD_V >= MAX(1,LOCr(IV+M-1)); if STOREV = 'C' and SIDE = 'R', LLD_V >= MAX(1,LOCr(IV+N-1)); if STOREV = 'R', LLD_V >= LOCr(IV+K-1). .TP 8 IV (global input) INTEGER The row index in the global array V indicating the first row of sub( V ). .TP 8 JV (global input) INTEGER The column index in the global array V indicating the first column of sub( V ). .TP 8 DESCV (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix V. .TP 8 T (local input) COMPLEX*16 array, dimension MB_V by MB_V if STOREV = 'R' and NB_V by NB_V if STOREV = 'C'. The trian- gular matrix T in the representation of the block reflector. .TP 8 C (local input/local output) COMPLEX*16 pointer into the local memory to an array of dimension (LLD_C,LOCc(JC+N-1)). On entry, the M-by-N distributed matrix sub( C ). On exit, sub( C ) is overwritten by Q*sub( C ) or Q'*sub( C ) or sub( C )*Q or sub( C )*Q'. 
.TP 8 IC (global input) INTEGER The row index in the global array C indicating the first row of sub( C ). .TP 8 JC (global input) INTEGER The column index in the global array C indicating the first column of sub( C ). .TP 8 DESCC (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix C. .TP 8 WORK (local workspace) COMPLEX*16 array, dimension (LWORK) If STOREV = 'C', if SIDE = 'L', LWORK >= ( NqC0 + MpC0 ) * K else if SIDE = 'R', LWORK >= ( NqC0 + MAX( NpV0 + NUMROC( NUMROC( N+ICOFFC, NB_V, 0, 0, NPCOL ), NB_V, 0, 0, LCMQ ), MpC0 ) ) * K end if else if STOREV = 'R', if SIDE = 'L', LWORK >= ( MpC0 + MAX( MqV0 + NUMROC( NUMROC( M+IROFFC, MB_V, 0, 0, NPROW ), MB_V, 0, 0, LCMP ), NqC0 ) ) * K else if SIDE = 'R', LWORK >= ( MpC0 + NqC0 ) * K end if end if where LCMQ = LCM / NPCOL with LCM = ICLM( NPROW, NPCOL ), IROFFV = MOD( IV-1, MB_V ), ICOFFV = MOD( JV-1, NB_V ), IVROW = INDXG2P( IV, MB_V, MYROW, RSRC_V, NPROW ), IVCOL = INDXG2P( JV, NB_V, MYCOL, CSRC_V, NPCOL ), MqV0 = NUMROC( M+ICOFFV, NB_V, MYCOL, IVCOL, NPCOL ), NpV0 = NUMROC( N+IROFFV, MB_V, MYROW, IVROW, NPROW ), IROFFC = MOD( IC-1, MB_C ), ICOFFC = MOD( JC-1, NB_C ), ICROW = INDXG2P( IC, MB_C, MYROW, RSRC_C, NPROW ), ICCOL = INDXG2P( JC, NB_C, MYCOL, CSRC_C, NPCOL ), MpC0 = NUMROC( M+IROFFC, MB_C, MYROW, ICROW, NPROW ), NpC0 = NUMROC( N+ICOFFC, MB_C, MYROW, ICROW, NPROW ), NqC0 = NUMROC( N+ICOFFC, NB_C, MYCOL, ICCOL, NPCOL ), ILCM, INDXG2P and NUMROC are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. Alignment requirements ====================== The distributed submatrices V(IV:*, JV:*) and C(IC:IC+M-1,JC:JC+N-1) must verify some alignment properties, namely the following expressions should be true: If STOREV = 'Columnwise' If SIDE = 'Left', ( MB_V.EQ.MB_C .AND. IROFFV.EQ.IROFFC .AND. IVROW.EQ.ICROW ) If SIDE = 'Right', ( MB_V.EQ.NB_C .AND. 
IROFFV.EQ.ICOFFC ) else if STOREV = 'Rowwise' If SIDE = 'Left', ( NB_V.EQ.MB_C .AND. ICOFFV.EQ.IROFFC ) If SIDE = 'Right', ( NB_V.EQ.NB_C .AND. ICOFFV.EQ.ICOFFC .AND. IVCOL.EQ.ICCOL ) end if scalapack-doc-1.5/man/manl/pzlarfc.l0100644000056400000620000002000106335610660017063 0ustar pfrauenfstaff.TH PZLARFC l "12 May 1997" "LAPACK version 1.5" "LAPACK auxiliary routine (version 1.5)" .SH NAME PZLARFC - applies a complex elementary reflector Q**H to a complex M-by-N distributed matrix sub( C ) = C(IC:IC+M-1,JC:JC+N-1), from either the left or the right .SH SYNOPSIS .TP 20 SUBROUTINE PZLARFC( SIDE, M, N, V, IV, JV, DESCV, INCV, TAU, C, IC, JC, DESCC, WORK ) .TP 20 .ti +4 CHARACTER SIDE .TP 20 .ti +4 INTEGER IC, INCV, IV, JC, JV, M, N .TP 20 .ti +4 INTEGER DESCC( * ), DESCV( * ) .TP 20 .ti +4 COMPLEX*16 C( * ), TAU( * ), V( * ), WORK( * ) .SH PURPOSE PZLARFC applies a complex elementary reflector Q**H to a complex M-by-N distributed matrix sub( C ) = C(IC:IC+M-1,JC:JC+N-1), from either the left or the right. Q is represented in the form Q = I - tau * v * v' .br where tau is a complex scalar and v is a complex vector. .br If tau = 0, then Q is taken to be the unit matrix. .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclically distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ ) The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distributed over. 
The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br Because vectors may be viewed as a subclass of matrices, a distributed vector is considered to be a distributed matrix. Restrictions .br ============ .br If SIDE = 'Left' and INCV = 1, then the row process having the first entry V(IV,JV) must also have the first row of sub( C ). Moreover, MOD(IV-1,MB_V) must be equal to MOD(IC-1,MB_C), if INCV=M_V, only the last equality must be satisfied. 
.br If SIDE = 'Right' and INCV = M_V then the column process having the first entry V(IV,JV) must also have the first column of sub( C ) and MOD(JV-1,NB_V) must be equal to MOD(JC-1,NB_C), if INCV = 1 only the last equality must be satisfied. .br .SH ARGUMENTS .TP 8 SIDE (global input) CHARACTER = 'L': form Q**H * sub( C ), .br = 'R': form sub( C ) * Q**H. .TP 8 M (global input) INTEGER The number of rows to be operated on i.e the number of rows of the distributed submatrix sub( C ). M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on i.e the number of columns of the distributed submatrix sub( C ). N >= 0. .TP 8 V (local input) COMPLEX*16 pointer into the local memory to an array of dimension (LLD_V,*) containing the local pieces of the distributed vectors V representing the Householder transformation Q, V(IV:IV+M-1,JV) if SIDE = 'L' and INCV = 1, .br V(IV,JV:JV+M-1) if SIDE = 'L' and INCV = M_V, .br V(IV:IV+N-1,JV) if SIDE = 'R' and INCV = 1, .br V(IV,JV:JV+N-1) if SIDE = 'R' and INCV = M_V, The vector v in the representation of Q. V is not used if TAU = 0. .TP 8 IV (global input) INTEGER The row index in the global array V indicating the first row of sub( V ). .TP 8 JV (global input) INTEGER The column index in the global array V indicating the first column of sub( V ). .TP 8 DESCV (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix V. .TP 8 INCV (global input) INTEGER The global increment for the elements of V. Only two values of INCV are supported in this version, namely 1 and M_V. INCV must not be zero. .TP 8 TAU (local input) COMPLEX*16, array, dimension LOCc(JV) if INCV = 1, and LOCr(IV) otherwise. This array contains the Householder scalars related to the Householder vectors. TAU is tied to the distributed matrix V. 
.TP 8 C (local input/local output) COMPLEX*16 pointer into the local memory to an array of dimension (LLD_C, LOCc(JC+N-1) ), containing the local pieces of sub( C ). On exit, sub( C ) is overwritten by the Q**H * sub( C ) if SIDE = 'L', or sub( C ) * Q**H if SIDE = 'R'. .TP 8 IC (global input) INTEGER The row index in the global array C indicating the first row of sub( C ). .TP 8 JC (global input) INTEGER The column index in the global array C indicating the first column of sub( C ). .TP 8 DESCC (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix C. .TP 8 WORK (local workspace) COMPLEX*16 array, dimension (LWORK) If INCV = 1, if SIDE = 'L', if IVCOL = ICCOL, LWORK >= NqC0 else LWORK >= MpC0 + MAX( 1, NqC0 ) end if else if SIDE = 'R', LWORK >= NqC0 + MAX( MAX( 1, MpC0 ), NUMROC( NUMROC( N+ICOFFC,NB_V,0,0,NPCOL ),NB_V,0,0,LCMQ ) ) end if else if INCV = M_V, if SIDE = 'L', LWORK >= MpC0 + MAX( MAX( 1, NqC0 ), NUMROC( NUMROC( M+IROFFC,MB_V,0,0,NPROW ),MB_V,0,0,LCMP ) ) else if SIDE = 'R', if IVROW = ICROW, LWORK >= MpC0 else LWORK >= NqC0 + MAX( 1, MpC0 ) end if end if end if where LCM is the least common multiple of NPROW and NPCOL and LCM = ILCM( NPROW, NPCOL ), LCMP = LCM / NPROW, LCMQ = LCM / NPCOL, IROFFC = MOD( IC-1, MB_C ), ICOFFC = MOD( JC-1, NB_C ), ICROW = INDXG2P( IC, MB_C, MYROW, RSRC_C, NPROW ), ICCOL = INDXG2P( JC, NB_C, MYCOL, CSRC_C, NPCOL ), MpC0 = NUMROC( M+IROFFC, MB_C, MYROW, ICROW, NPROW ), NqC0 = NUMROC( N+ICOFFC, NB_C, MYCOL, ICCOL, NPCOL ), ILCM, INDXG2P and NUMROC are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. Alignment requirements ====================== The distributed submatrices V(IV:*, JV:*) and C(IC:IC+M-1,JC:JC+N-1) must verify some alignment properties, namely the following expressions should be true: MB_V = NB_V, If INCV = 1, If SIDE = 'Left', ( MB_V.EQ.MB_C .AND. IROFFV.EQ.IROFFC .AND. 
IVROW.EQ.ICROW ) If SIDE = 'Right', ( MB_V.EQ.NB_A .AND. MB_V.EQ.NB_C .AND. IROFFV.EQ.ICOFFC ) else if INCV = M_V, If SIDE = 'Left', ( MB_V.EQ.NB_V .AND. MB_V.EQ.MB_C .AND. ICOFFV.EQ.IROFFC ) If SIDE = 'Right', ( NB_V.EQ.NB_C .AND. ICOFFV.EQ.ICOFFC .AND. IVCOL.EQ.ICCOL ) end if scalapack-doc-1.5/man/manl/pzlarfg.l0100644000056400000620000001256606335610660017100 0ustar pfrauenfstaff.TH PZLARFG l "12 May 1997" "LAPACK version 1.5" "LAPACK auxiliary routine (version 1.5)" .SH NAME PZLARFG - generate a complex elementary reflector H of order n, such that H * sub( X ) = H * ( x(iax,jax) ; x ) = ( alpha ; 0 ), H' * H = I .SH SYNOPSIS .TP 20 SUBROUTINE PZLARFG( N, ALPHA, IAX, JAX, X, IX, JX, DESCX, INCX, TAU ) .TP 20 .ti +4 INTEGER IAX, INCX, IX, JAX, JX, N .TP 20 .ti +4 COMPLEX*16 ALPHA .TP 20 .ti +4 INTEGER DESCX( * ) .TP 20 .ti +4 COMPLEX*16 TAU( * ), X( * ) .SH PURPOSE PZLARFG generates a complex elementary reflector H of order n, such that .br H * ( x(iax,jax) ; x ) = ( alpha ; 0 ), H' * H = I, .br where alpha is a real scalar, and sub( X ) is an (N-1)-element complex distributed vector X(IX:IX+N-2,JX) if INCX = 1 and X(IX,JX:JX+N-2) if INCX = DESCX(M_). H is represented in the form H = I - tau * ( 1 ) * ( 1 v' ) , .br ( v ) .br where tau is a complex scalar and v is a complex (N-1)-element vector. Note that H is not Hermitian. .br If the elements of sub( X ) are all zero and X(IAX,JAX) is real, then tau = 0 and H is taken to be the unit matrix. .br Otherwise 1 <= real(tau) <= 2 and abs(tau-1) <= 1. .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". 
.br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br Because vectors may be viewed as a subclass of matrices, a distributed vector is considered to be a distributed matrix. 
.SH ARGUMENTS .TP 8 N (global input) INTEGER The global order of the elementary reflector. N >= 0. .TP 8 ALPHA (local output) COMPLEX*16 On exit, alpha is computed in the process scope having the vector sub( X ). .TP 8 IAX (global input) INTEGER The global row index in X of X(IAX,JAX). .TP 8 JAX (global input) INTEGER The global column index in X of X(IAX,JAX). .TP 8 X (local input/local output) COMPLEX*16, pointer into the local memory to an array of dimension (LLD_X,*). This array contains the local pieces of the distributed vector sub( X ). Before entry, the incremented array sub( X ) must contain the vector x. On exit, it is overwritten with the vector v. .TP 8 IX (global input) INTEGER The row index in the global array X indicating the first row of sub( X ). .TP 8 JX (global input) INTEGER The column index in the global array X indicating the first column of sub( X ). .TP 8 DESCX (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix X. .TP 8 INCX (global input) INTEGER The global increment for the elements of X. Only two values of INCX are supported in this version, namely 1 and M_X. INCX must not be zero. .TP 8 TAU (local output) COMPLEX*16, array, dimension LOCc(JX) if INCX = 1, and LOCr(IX) otherwise. This array contains the Householder scalars related to the Householder vectors. TAU is tied to the distributed matrix X. 
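The arithmetic PZLARFG performs on the distributed vector is the classical Householder construction. The following is a serial, real-valued sketch in plain Python of that construction only; the names `house` and `apply_house` are illustrative and not part of ScaLAPACK, and all of the block-cyclic bookkeeping that PZLARFG actually does is omitted.

```python
import math

def house(x):
    """Given x = [x1, x2, ..., xn], return (alpha, v, tau) such that
    H = I - tau * u * u', with u = [1] + v, maps x onto [alpha, 0, ..., 0].
    Serial real analogue of the reflector described in the PZLARFG page."""
    x1, tail = x[0], x[1:]
    if all(e == 0.0 for e in tail):
        return x1, [0.0] * len(tail), 0.0     # tau = 0: H is the unit matrix
    norm = math.sqrt(sum(e * e for e in x))
    alpha = -math.copysign(norm, x1)          # alpha = -sign(x1) * ||x||
    tau = (alpha - x1) / alpha                # satisfies 1 <= tau <= 2 here
    v = [e / (x1 - alpha) for e in tail]      # v overwrites x(2:n), as in the docs
    return alpha, v, tau

def apply_house(v, tau, x):
    """Apply H = I - tau * u * u' (u = [1] + v) to a vector x."""
    u = [1.0] + v
    w = sum(ui * xi for ui, xi in zip(u, x))
    return [xi - tau * w * ui for ui, xi in zip(u, x)]
```

For example, `house([3.0, 4.0])` yields alpha = -5, and applying the resulting reflector to `[3.0, 4.0]` annihilates the second entry, matching the H * ( x(iax,jax) ; x ) = ( alpha ; 0 ) description above.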
scalapack-doc-1.5/man/manl/pzlarft.l0100644000056400000620000001510406335610660017104 0ustar pfrauenfstaff.TH PZLARFT l "12 May 1997" "LAPACK version 1.5" "LAPACK auxiliary routine (version 1.5)" .SH NAME PZLARFT - form the triangular factor T of a complex block reflector H of order n, which is defined as a product of k elementary reflectors .SH SYNOPSIS .TP 20 SUBROUTINE PZLARFT( DIRECT, STOREV, N, K, V, IV, JV, DESCV, TAU, T, WORK ) .TP 20 .ti +4 CHARACTER DIRECT, STOREV .TP 20 .ti +4 INTEGER IV, JV, K, N .TP 20 .ti +4 INTEGER DESCV( * ) .TP 20 .ti +4 COMPLEX*16 TAU( * ), T( * ), V( * ), WORK( * ) .SH PURPOSE PZLARFT forms the triangular factor T of a complex block reflector H of order n, which is defined as a product of k elementary reflectors. If DIRECT = 'F', H = H(1) H(2) . . . H(k) and T is upper triangular; If DIRECT = 'B', H = H(k) . . . H(2) H(1) and T is lower triangular. If STOREV = 'C', the vector which defines the elementary reflector H(i) is stored in the i-th column of the distributed matrix V, and H = I - V * T * V' .br If STOREV = 'R', the vector which defines the elementary reflector H(i) is stored in the i-th row of the distributed matrix V, and H = I - V' * T * V .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. 
The context itself is global, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 DIRECT (global input) CHARACTER*1 Specifies the order in which the elementary reflectors are multiplied to form the block reflector: .br = 'F': H = H(1) H(2) . . . H(k) (Forward) .br = 'B': H = H(k) . . . H(2) H(1) (Backward) .TP 8 STOREV (global input) CHARACTER*1 Specifies how the vectors which define the elementary reflectors are stored (see also Further Details): .br = 'C': columnwise .br = 'R': rowwise .TP 8 N (global input) INTEGER The order of the block reflector H. 
N >= 0. .TP 8 K (global input) INTEGER The order of the triangular factor T (= the number of elementary reflectors). 1 <= K <= MB_V (= NB_V). .TP 8 V (input/output) COMPLEX*16 pointer into the local memory to an array of local dimension (LOCr(IV+N-1),LOCc(JV+K-1)) if STOREV = 'C', and (LOCr(IV+K-1),LOCc(JV+N-1)) if STOREV = 'R'. The distributed matrix V contains the Householder vectors. See further details. .TP 8 IV (global input) INTEGER The row index in the global array V indicating the first row of sub( V ). .TP 8 JV (global input) INTEGER The column index in the global array V indicating the first column of sub( V ). .TP 8 DESCV (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix V. .TP 8 TAU (local input) COMPLEX*16, array, dimension LOCr(IV+K-1) if INCV = M_V, and LOCc(JV+K-1) otherwise. This array contains the Householder scalars related to the Householder vectors. TAU is tied to the distributed matrix V. .TP 8 T (local output) COMPLEX*16 array, dimension (NB_V,NB_V) if STOREV = 'Col', and (MB_V,MB_V) otherwise. It contains the k-by-k triangular factor of the block reflector asso- ciated with V. If DIRECT = 'F', T is upper triangular; if DIRECT = 'B', T is lower triangular. .TP 8 WORK (local workspace) COMPLEX*16 array, dimension (K*(K-1)/2) .SH FURTHER DETAILS The shape of the matrix V and the storage of the vectors which define the H(i) is best illustrated by the following example with n = 5 and k = 3. The elements equal to 1 are not stored; the corresponding array elements are modified but restored on exit. The rest of the array is not used. 
.br DIRECT = 'F' and STOREV = 'C': DIRECT = 'F' and STOREV = 'R': V( IV:IV+N-1, ( 1 ) V( IV:IV+K-1, ( 1 v1 v1 v1 v1 ) JV:JV+K-1 ) = ( v1 1 ) JV:JV+N-1 ) = ( 1 v2 v2 v2 ) ( v1 v2 1 ) ( 1 v3 v3 ) ( v1 v2 v3 ) .br ( v1 v2 v3 ) .br DIRECT = 'B' and STOREV = 'C': DIRECT = 'B' and STOREV = 'R': V( IV:IV+N-1, ( v1 v2 v3 ) V( IV:IV+K-1, ( v1 v1 1 ) JV:JV+K-1 ) = ( v1 v2 v3 ) JV:JV+N-1 ) = ( v2 v2 v2 1 ) ( 1 v2 v3 ) ( v3 v3 v3 v3 1 ) ( 1 v3 ) .br ( 1 ) .br scalapack-doc-1.5/man/manl/pzlarz.l0100644000056400000620000002043206335610660016744 0ustar pfrauenfstaff.TH PZLARZ l "12 May 1997" "LAPACK version 1.5" "LAPACK auxiliary routine (version 1.5)" .SH NAME PZLARZ - apply a complex elementary reflector Q to a complex M-by-N distributed matrix sub( C ) = C(IC:IC+M-1,JC:JC+N-1), from either the left or the right .SH SYNOPSIS .TP 19 SUBROUTINE PZLARZ( SIDE, M, N, L, V, IV, JV, DESCV, INCV, TAU, C, IC, JC, DESCC, WORK ) .TP 19 .ti +4 CHARACTER SIDE .TP 19 .ti +4 INTEGER IC, INCV, IV, JC, JV, L, M, N .TP 19 .ti +4 INTEGER DESCC( * ), DESCV( * ) .TP 19 .ti +4 COMPLEX*16 C( * ), TAU( * ), V( * ), WORK( * ) .SH PURPOSE PZLARZ applies a complex elementary reflector Q to a complex M-by-N distributed matrix sub( C ) = C(IC:IC+M-1,JC:JC+N-1), from either the left or the right. Q is represented in the form Q = I - tau * v * v' .br where tau is a complex scalar and v is a complex vector. .br If tau = 0, then Q is taken to be the unit matrix. .br Q is a product of k elementary reflectors as returned by PZTZRZF. Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". 
.br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br Because vectors may be viewed as a subclass of matrices, a distributed vector is considered to be a distributed matrix. 
Restrictions .br ============ .br If SIDE = 'Left' and INCV = 1, then the row process having the first entry V(IV,JV) must also own C(IC+M-L,JC:JC+N-1). Moreover, MOD(IV-1,MB_V) must be equal to MOD(IC+N-L-1,MB_C), if INCV=M_V, only the last equality must be satisfied. .br If SIDE = 'Right' and INCV = M_V then the column process having the first entry V(IV,JV) must also own C(IC:IC+M-1,JC+N-L) and MOD(JV-1,NB_V) must be equal to MOD(JC+N-L-1,NB_C), if INCV = 1 only the last equality must be satisfied. .br .SH ARGUMENTS .TP 8 SIDE (global input) CHARACTER = 'L': form Q * sub( C ), .br = 'R': form sub( C ) * Q. .TP 8 M (global input) INTEGER The number of rows to be operated on i.e the number of rows of the distributed submatrix sub( C ). M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on i.e the number of columns of the distributed submatrix sub( C ). N >= 0. .TP 8 L (global input) INTEGER The columns of the distributed submatrix sub( A ) containing the meaningful part of the Householder reflectors. If SIDE = 'L', M >= L >= 0, if SIDE = 'R', N >= L >= 0. .TP 8 V (local input) COMPLEX*16 pointer into the local memory to an array of dimension (LLD_V,*) containing the local pieces of the distributed vectors V representing the Householder transformation Q, V(IV:IV+L-1,JV) if SIDE = 'L' and INCV = 1, .br V(IV,JV:JV+L-1) if SIDE = 'L' and INCV = M_V, .br V(IV:IV+L-1,JV) if SIDE = 'R' and INCV = 1, .br V(IV,JV:JV+L-1) if SIDE = 'R' and INCV = M_V, The vector v in the representation of Q. V is not used if TAU = 0. .TP 8 IV (global input) INTEGER The row index in the global array V indicating the first row of sub( V ). .TP 8 JV (global input) INTEGER The column index in the global array V indicating the first column of sub( V ). .TP 8 DESCV (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix V. .TP 8 INCV (global input) INTEGER The global increment for the elements of V. 
Only two values of INCV are supported in this version, namely 1 and M_V. INCV must not be zero. .TP 8 TAU (local input) COMPLEX*16, array, dimension LOCc(JV) if INCV = 1, and LOCr(IV) otherwise. This array contains the Householder scalars related to the Householder vectors. TAU is tied to the distributed matrix V. .TP 8 C (local input/local output) COMPLEX*16 pointer into the local memory to an array of dimension (LLD_C, LOCc(JC+N-1) ), containing the local pieces of sub( C ). On exit, sub( C ) is overwritten by the Q * sub( C ) if SIDE = 'L', or sub( C ) * Q if SIDE = 'R'. .TP 8 IC (global input) INTEGER The row index in the global array C indicating the first row of sub( C ). .TP 8 JC (global input) INTEGER The column index in the global array C indicating the first column of sub( C ). .TP 8 DESCC (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix C. .TP 8 WORK (local workspace) COMPLEX*16 array, dimension (LWORK) If INCV = 1, if SIDE = 'L', if IVCOL = ICCOL, LWORK >= NqC0 else LWORK >= MpC0 + MAX( 1, NqC0 ) end if else if SIDE = 'R', LWORK >= NqC0 + MAX( MAX( 1, MpC0 ), NUMROC( NUMROC( N+ICOFFC,NB_V,0,0,NPCOL ),NB_V,0,0,LCMQ ) ) end if else if INCV = M_V, if SIDE = 'L', LWORK >= MpC0 + MAX( MAX( 1, NqC0 ), NUMROC( NUMROC( M+IROFFC,MB_V,0,0,NPROW ),MB_V,0,0,LCMP ) ) else if SIDE = 'R', if IVROW = ICROW, LWORK >= MpC0 else LWORK >= NqC0 + MAX( 1, MpC0 ) end if end if end if where LCM is the least common multiple of NPROW and NPCOL and LCM = ILCM( NPROW, NPCOL ), LCMP = LCM / NPROW, LCMQ = LCM / NPCOL, IROFFC = MOD( IC-1, MB_C ), ICOFFC = MOD( JC-1, NB_C ), ICROW = INDXG2P( IC, MB_C, MYROW, RSRC_C, NPROW ), ICCOL = INDXG2P( JC, NB_C, MYCOL, CSRC_C, NPCOL ), MpC0 = NUMROC( M+IROFFC, MB_C, MYROW, ICROW, NPROW ), NqC0 = NUMROC( N+ICOFFC, NB_C, MYCOL, ICCOL, NPCOL ), ILCM, INDXG2P and NUMROC are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. 
Alignment requirements ====================== The distributed submatrices V(IV:*, JV:*) and C(IC:IC+M-1,JC:JC+N-1) must satisfy some alignment properties, namely the following expressions should be true: MB_V = NB_V, If INCV = 1, If SIDE = 'Left', ( MB_V.EQ.MB_C .AND. IROFFV.EQ.IROFFC .AND. IVROW.EQ.ICROW ) If SIDE = 'Right', ( MB_V.EQ.NB_A .AND. MB_V.EQ.NB_C .AND. IROFFV.EQ.ICOFFC ) else if INCV = M_V, If SIDE = 'Left', ( MB_V.EQ.NB_V .AND. MB_V.EQ.MB_C .AND. ICOFFV.EQ.IROFFC ) If SIDE = 'Right', ( NB_V.EQ.NB_C .AND. ICOFFV.EQ.ICOFFC .AND. IVCOL.EQ.ICCOL ) end if scalapack-doc-1.5/man/manl/pzlarzb.l0100644000056400000620000002011106335610661017101 0ustar pfrauenfstaff.TH PZLARZB l "12 May 1997" "LAPACK version 1.5" "LAPACK auxiliary routine (version 1.5)" .SH NAME PZLARZB - apply a complex block reflector Q or its conjugate transpose Q**H to a complex M-by-N distributed matrix sub( C ) denoting C(IC:IC+M-1,JC:JC+N-1), from the left or the right .SH SYNOPSIS .TP 20 SUBROUTINE PZLARZB( SIDE, TRANS, DIRECT, STOREV, M, N, K, L, V, IV, JV, DESCV, T, C, IC, JC, DESCC, WORK ) .TP 20 .ti +4 CHARACTER DIRECT, SIDE, STOREV, TRANS .TP 20 .ti +4 INTEGER IC, IV, JC, JV, K, L, M, N .TP 20 .ti +4 INTEGER DESCC( * ), DESCV( * ) .TP 20 .ti +4 COMPLEX*16 C( * ), T( * ), V( * ), WORK( * ) .SH PURPOSE PZLARZB applies a complex block reflector Q or its conjugate transpose Q**H to a complex M-by-N distributed matrix sub( C ) denoting C(IC:IC+M-1,JC:JC+N-1), from the left or the right. Q is a product of k elementary reflectors as returned by PZTZRZF. Currently, only STOREV = 'R' and DIRECT = 'B' are supported. Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. 
In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). 
An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 SIDE (global input) CHARACTER = 'L': apply Q or Q**H from the Left; .br = 'R': apply Q or Q**H from the Right. .TP 8 TRANS (global input) CHARACTER .br = 'N': No transpose, apply Q; .br = 'C': Conjugate transpose, apply Q**H. .TP 8 DIRECT (global input) CHARACTER Indicates how H is formed from a product of elementary reflectors = 'F': H = H(1) H(2) . . . H(k) (Forward, not supported yet) .br = 'B': H = H(k) . . . H(2) H(1) (Backward) .TP 8 STOREV (global input) CHARACTER Indicates how the vectors which define the elementary reflectors are stored: .br = 'C': Columnwise (not supported yet) .br = 'R': Rowwise .TP 8 M (global input) INTEGER The number of rows to be operated on i.e the number of rows of the distributed submatrix sub( C ). M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on i.e the number of columns of the distributed submatrix sub( C ). N >= 0. .TP 8 K (global input) INTEGER The order of the matrix T (= the number of elementary reflectors whose product defines the block reflector). .TP 8 L (global input) INTEGER The columns of the distributed submatrix sub( A ) containing the meaningful part of the Householder reflectors. If SIDE = 'L', M >= L >= 0, if SIDE = 'R', N >= L >= 0. .TP 8 V (local input) COMPLEX*16 pointer into the local memory to an array of dimension (LLD_V, LOCc(JV+M-1)) if SIDE = 'L', (LLD_V, LOCc(JV+N-1)) if SIDE = 'R'. It contains the local pieces of the distributed vectors V representing the Householder transformation as returned by PZTZRZF. LLD_V >= LOCr(IV+K-1). .TP 8 IV (global input) INTEGER The row index in the global array V indicating the first row of sub( V ). .TP 8 JV (global input) INTEGER The column index in the global array V indicating the first column of sub( V ). 
.TP 8 DESCV (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix V. .TP 8 T (local input) COMPLEX*16 array, dimension MB_V by MB_V The lower triangular matrix T in the representation of the block reflector. .TP 8 C (local input/local output) COMPLEX*16 pointer into the local memory to an array of dimension (LLD_C,LOCc(JC+N-1)). On entry, the M-by-N distributed matrix sub( C ). On exit, sub( C ) is overwritten by Q*sub( C ) or Q'*sub( C ) or sub( C )*Q or sub( C )*Q'. .TP 8 IC (global input) INTEGER The row index in the global array C indicating the first row of sub( C ). .TP 8 JC (global input) INTEGER The column index in the global array C indicating the first column of sub( C ). .TP 8 DESCC (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix C. .TP 8 WORK (local workspace) COMPLEX*16 array, dimension (LWORK) If STOREV = 'C', if SIDE = 'L', LWORK >= ( NqC0 + MpC0 ) * K else if SIDE = 'R', LWORK >= ( NqC0 + MAX( NpV0 + NUMROC( NUMROC( N+ICOFFC, NB_V, 0, 0, NPCOL ), NB_V, 0, 0, LCMQ ), MpC0 ) ) * K end if else if STOREV = 'R', if SIDE = 'L', LWORK >= ( MpC0 + MAX( MqV0 + NUMROC( NUMROC( M+IROFFC, MB_V, 0, 0, NPROW ), MB_V, 0, 0, LCMP ), NqC0 ) ) * K else if SIDE = 'R', LWORK >= ( MpC0 + NqC0 ) * K end if end if where LCMQ = LCM / NPCOL with LCM = ILCM( NPROW, NPCOL ), IROFFV = MOD( IV-1, MB_V ), ICOFFV = MOD( JV-1, NB_V ), IVROW = INDXG2P( IV, MB_V, MYROW, RSRC_V, NPROW ), IVCOL = INDXG2P( JV, NB_V, MYCOL, CSRC_V, NPCOL ), MqV0 = NUMROC( M+ICOFFV, NB_V, MYCOL, IVCOL, NPCOL ), NpV0 = NUMROC( N+IROFFV, MB_V, MYROW, IVROW, NPROW ), IROFFC = MOD( IC-1, MB_C ), ICOFFC = MOD( JC-1, NB_C ), ICROW = INDXG2P( IC, MB_C, MYROW, RSRC_C, NPROW ), ICCOL = INDXG2P( JC, NB_C, MYCOL, CSRC_C, NPCOL ), MpC0 = NUMROC( M+IROFFC, MB_C, MYROW, ICROW, NPROW ), NpC0 = NUMROC( N+ICOFFC, MB_C, MYROW, ICROW, NPROW ), NqC0 = NUMROC( N+ICOFFC, NB_C, MYCOL, ICCOL, NPCOL ), ILCM, INDXG2P 
and NUMROC are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. Alignment requirements ====================== The distributed submatrices V(IV:*, JV:*) and C(IC:IC+M-1,JC:JC+N-1) must verify some alignment properties, namely the following expressions should be true: If STOREV = 'Columnwise' If SIDE = 'Left', ( MB_V.EQ.MB_C .AND. IROFFV.EQ.IROFFC .AND. IVROW.EQ.ICROW ) If SIDE = 'Right', ( MB_V.EQ.NB_C .AND. IROFFV.EQ.ICOFFC ) else if STOREV = 'Rowwise' If SIDE = 'Left', ( NB_V.EQ.MB_C .AND. ICOFFV.EQ.IROFFC ) If SIDE = 'Right', ( NB_V.EQ.NB_C .AND. ICOFFV.EQ.ICOFFC .AND. IVCOL.EQ.ICCOL ) end if .TH PZLARZC l "12 May 1997" "LAPACK version 1.5" "LAPACK auxiliary routine (version 1.5)" .SH NAME PZLARZC - apply a complex elementary reflector Q**H to a complex M-by-N distributed matrix sub( C ) = C(IC:IC+M-1,JC:JC+N-1), .SH SYNOPSIS .TP 20 SUBROUTINE PZLARZC( SIDE, M, N, L, V, IV, JV, DESCV, INCV, TAU, C, IC, JC, DESCC, WORK ) .TP 20 .ti +4 CHARACTER SIDE .TP 20 .ti +4 INTEGER IC, INCV, IV, JC, JV, L, M, N .TP 20 .ti +4 INTEGER DESCC( * ), DESCV( * ) .TP 20 .ti +4 COMPLEX*16 C( * ), TAU( * ), V( * ), WORK( * ) .SH PURPOSE PZLARZC applies a complex elementary reflector Q**H to a complex M-by-N distributed matrix sub( C ) = C(IC:IC+M-1,JC:JC+N-1), from either the left or the right. Q is represented in the form Q = I - tau * v * v' .br where tau is a complex scalar and v is a complex vector. .br If tau = 0, then Q is taken to be the unit matrix. .br Q is a product of k elementary reflectors as returned by PZTZRZF. Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location.
.br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). 
An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br Because vectors may be viewed as a subclass of matrices, a distributed vector is considered to be a distributed matrix. Restrictions .br ============ .br If SIDE = 'Left' and INCV = 1, then the row process having the first entry V(IV,JV) must also own C(IC+M-L,JC:JC+N-1). Moreover, MOD(IV-1,MB_V) must be equal to MOD(IC+M-L-1,MB_C), if INCV=M_V, only the last equality must be satisfied. .br If SIDE = 'Right' and INCV = M_V then the column process having the first entry V(IV,JV) must also own C(IC:IC+M-1,JC+N-L) and MOD(JV-1,NB_V) must be equal to MOD(JC+N-L-1,NB_C), if INCV = 1 only the last equality must be satisfied. .br .SH ARGUMENTS .TP 8 SIDE (global input) CHARACTER = 'L': form Q**H * sub( C ), .br = 'R': form sub( C ) * Q**H. .TP 8 M (global input) INTEGER The number of rows to be operated on i.e the number of rows of the distributed submatrix sub( C ). M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on i.e the number of columns of the distributed submatrix sub( C ). N >= 0. .TP 8 L (global input) INTEGER The columns of the distributed submatrix sub( A ) containing the meaningful part of the Householder reflectors. If SIDE = 'L', M >= L >= 0, if SIDE = 'R', N >= L >= 0. .TP 8 V (local input) COMPLEX*16 pointer into the local memory to an array of dimension (LLD_V,*) containing the local pieces of the distributed vectors V representing the Householder transformation Q, V(IV:IV+L-1,JV) if SIDE = 'L' and INCV = 1, .br V(IV,JV:JV+L-1) if SIDE = 'L' and INCV = M_V, .br V(IV:IV+L-1,JV) if SIDE = 'R' and INCV = 1, .br V(IV,JV:JV+L-1) if SIDE = 'R' and INCV = M_V, The vector v in the representation of Q. V is not used if TAU = 0. .TP 8 IV (global input) INTEGER The row index in the global array V indicating the first row of sub( V ).
.TP 8 JV (global input) INTEGER The column index in the global array V indicating the first column of sub( V ). .TP 8 DESCV (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix V. .TP 8 INCV (global input) INTEGER The global increment for the elements of V. Only two values of INCV are supported in this version, namely 1 and M_V. INCV must not be zero. .TP 8 TAU (local input) COMPLEX*16, array, dimension LOCc(JV) if INCV = 1, and LOCr(IV) otherwise. This array contains the Householder scalars related to the Householder vectors. TAU is tied to the distributed matrix V. .TP 8 C (local input/local output) COMPLEX*16 pointer into the local memory to an array of dimension (LLD_C, LOCc(JC+N-1) ), containing the local pieces of sub( C ). On exit, sub( C ) is overwritten by Q**H * sub( C ) if SIDE = 'L', or sub( C ) * Q**H if SIDE = 'R'. .TP 8 IC (global input) INTEGER The row index in the global array C indicating the first row of sub( C ). .TP 8 JC (global input) INTEGER The column index in the global array C indicating the first column of sub( C ). .TP 8 DESCC (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix C.
.TP 8 WORK (local workspace) COMPLEX*16 array, dimension (LWORK) If INCV = 1, if SIDE = 'L', if IVCOL = ICCOL, LWORK >= NqC0 else LWORK >= MpC0 + MAX( 1, NqC0 ) end if else if SIDE = 'R', LWORK >= NqC0 + MAX( MAX( 1, MpC0 ), NUMROC( NUMROC( N+ICOFFC,NB_V,0,0,NPCOL ),NB_V,0,0,LCMQ ) ) end if else if INCV = M_V, if SIDE = 'L', LWORK >= MpC0 + MAX( MAX( 1, NqC0 ), NUMROC( NUMROC( M+IROFFC,MB_V,0,0,NPROW ),MB_V,0,0,LCMP ) ) else if SIDE = 'R', if IVROW = ICROW, LWORK >= MpC0 else LWORK >= NqC0 + MAX( 1, MpC0 ) end if end if end if where LCM is the least common multiple of NPROW and NPCOL and LCM = ILCM( NPROW, NPCOL ), LCMP = LCM / NPROW, LCMQ = LCM / NPCOL, IROFFC = MOD( IC-1, MB_C ), ICOFFC = MOD( JC-1, NB_C ), ICROW = INDXG2P( IC, MB_C, MYROW, RSRC_C, NPROW ), ICCOL = INDXG2P( JC, NB_C, MYCOL, CSRC_C, NPCOL ), MpC0 = NUMROC( M+IROFFC, MB_C, MYROW, ICROW, NPROW ), NqC0 = NUMROC( N+ICOFFC, NB_C, MYCOL, ICCOL, NPCOL ), ILCM, INDXG2P and NUMROC are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. Alignment requirements ====================== The distributed submatrices V(IV:*, JV:*) and C(IC:IC+M-1,JC:JC+N-1) must verify some alignment properties, namely the following expressions should be true: MB_V = NB_V, If INCV = 1, If SIDE = 'Left', ( MB_V.EQ.MB_C .AND. IROFFV.EQ.IROFFC .AND. IVROW.EQ.ICROW ) If SIDE = 'Right', ( MB_V.EQ.NB_A .AND. MB_V.EQ.NB_C .AND. IROFFV.EQ.ICOFFC ) else if INCV = M_V, If SIDE = 'Left', ( MB_V.EQ.NB_V .AND. MB_V.EQ.MB_C .AND. ICOFFV.EQ.IROFFC ) If SIDE = 'Right', ( NB_V.EQ.NB_C .AND. ICOFFV.EQ.ICOFFC .AND. 
IVCOL.EQ.ICCOL ) end if .TH PZLARZT l "12 May 1997" "LAPACK version 1.5" "LAPACK auxiliary routine (version 1.5)" .SH NAME PZLARZT - form the triangular factor T of a complex block reflector H of order > n, which is defined as a product of k elementary reflectors as returned by PZTZRZF .SH SYNOPSIS .TP 20 SUBROUTINE PZLARZT( DIRECT, STOREV, N, K, V, IV, JV, DESCV, TAU, T, WORK ) .TP 20 .ti +4 CHARACTER DIRECT, STOREV .TP 20 .ti +4 INTEGER IV, JV, K, N .TP 20 .ti +4 INTEGER DESCV( * ) .TP 20 .ti +4 COMPLEX*16 TAU( * ), T( * ), V( * ), WORK( * ) .SH PURPOSE PZLARZT forms the triangular factor T of a complex block reflector H of order > n, which is defined as a product of k elementary reflectors as returned by PZTZRZF. If DIRECT = 'F', H = H(1) H(2) . . . H(k) and T is upper triangular; If DIRECT = 'B', H = H(k) . . . H(2) H(1) and T is lower triangular. If STOREV = 'C', the vector which defines the elementary reflector H(i) is stored in the i-th column of the array V, and .br H = I - V * T * V' .br If STOREV = 'R', the vector which defines the elementary reflector H(i) is stored in the i-th row of the array V, and .br H = I - V' * T * V .br Currently, only STOREV = 'R' and DIRECT = 'B' are supported. Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1.
.br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 DIRECT (global input) CHARACTER Specifies the order in which the elementary reflectors are multiplied to form the block reflector: .br = 'F': H = H(1) H(2) . . . H(k) (Forward, not supported yet) .br = 'B': H = H(k) . . . 
H(2) H(1) (Backward) .TP 8 STOREV (global input) CHARACTER Specifies how the vectors which define the elementary reflectors are stored (see also Further Details): .br = 'R': rowwise .TP 8 N (global input) INTEGER The number of meaningful entries of the block reflector H. N >= 0. .TP 8 K (global input) INTEGER The order of the triangular factor T (= the number of elementary reflectors). 1 <= K <= MB_V (= NB_V). .TP 8 V (input/output) COMPLEX*16 pointer into the local memory to an array of local dimension (LOCr(IV+K-1),LOCc(JV+N-1)). The distributed matrix V contains the Householder vectors. See further details. .TP 8 IV (global input) INTEGER The row index in the global array V indicating the first row of sub( V ). .TP 8 JV (global input) INTEGER The column index in the global array V indicating the first column of sub( V ). .TP 8 DESCV (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix V. .TP 8 TAU (local input) COMPLEX*16, array, dimension LOCr(IV+K-1) if INCV = M_V, and LOCc(JV+K-1) otherwise. This array contains the Householder scalars related to the Householder vectors. TAU is tied to the distributed matrix V. .TP 8 T (local output) COMPLEX*16 array, dimension (MB_V,MB_V) It contains the k-by-k triangular factor of the block reflector associated with V. T is lower triangular. .TP 8 WORK (local workspace) COMPLEX*16 array, dimension (K*(K-1)/2) .SH FURTHER DETAILS The shape of the matrix V and the storage of the vectors which define the H(i) is best illustrated by the following example with n = 5 and k = 3. The elements equal to 1 are not stored; the corresponding array elements are modified but restored on exit. The rest of the array is not used. .br DIRECT = 'F' and STOREV = 'C': DIRECT = 'F' and STOREV = 'R': ______V_____ .br ( v1 v2 v3 ) / \ ( v1 v2 v3 ) ( v1 v1 v1 v1 v1 . . . . 1 ) V = ( v1 v2 v3 ) ( v2 v2 v2 v2 v2 . . . 1 ) ( v1 v2 v3 ) ( v3 v3 v3 v3 v3 . . 1 ) ( v1 v2 v3 ) .br . . . .br . . . .br 1 . 
. .br 1 . .br 1 .br DIRECT = 'B' and STOREV = 'C': DIRECT = 'B' and STOREV = 'R': ______V_____ 1 / \ . 1 ( 1 . . . . v1 v1 v1 v1 v1 ) . . 1 ( . 1 . . . v2 v2 v2 v2 v2 ) . . . ( . . 1 . . v3 v3 v3 v3 v3 ) . . . .br ( v1 v2 v3 ) .br ( v1 v2 v3 ) .br V = ( v1 v2 v3 ) .br ( v1 v2 v3 ) .br ( v1 v2 v3 ) .br .TH PZLASCL l "12 May 1997" "LAPACK version 1.5" "LAPACK auxiliary routine (version 1.5)" .SH NAME PZLASCL - multiply the M-by-N complex distributed matrix sub( A ) denoting A(IA:IA+M-1,JA:JA+N-1) by the real scalar CTO/CFROM .SH SYNOPSIS .TP 20 SUBROUTINE PZLASCL( TYPE, CFROM, CTO, M, N, A, IA, JA, DESCA, INFO ) .TP 20 .ti +4 CHARACTER TYPE .TP 20 .ti +4 INTEGER IA, INFO, JA, M, N .TP 20 .ti +4 DOUBLE PRECISION CFROM, CTO .TP 20 .ti +4 INTEGER DESCA( * ) .TP 20 .ti +4 COMPLEX*16 A( * ) .SH PURPOSE PZLASCL multiplies the M-by-N complex distributed matrix sub( A ) denoting A(IA:IA+M-1,JA:JA+N-1) by the real scalar CTO/CFROM. This is done without over/underflow as long as the final result CTO * A(I,J) / CFROM does not over/underflow. TYPE specifies that sub( A ) may be full, upper triangular, lower triangular or upper Hessenberg. .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1.
.br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 TYPE (global input) CHARACTER TYPE indicates the storage type of the input distributed matrix. = 'G': sub( A ) is a full matrix, .br = 'L': sub( A ) is a lower triangular matrix, .br = 'U': sub( A ) is an upper triangular matrix, .br = 'H': sub( A ) is an upper Hessenberg matrix.
.TP 8 CFROM (global input) DOUBLE PRECISION CTO (global input) DOUBLE PRECISION The distributed matrix sub( A ) is multiplied by CTO/CFROM. A(I,J) is computed without over/underflow if the final result CTO * A(I,J) / CFROM can be represented without over/underflow. CFROM must be nonzero. .TP 8 M (global input) INTEGER The number of rows to be operated on i.e the number of rows of the distributed submatrix sub( A ). M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on i.e the number of columns of the distributed submatrix sub( A ). N >= 0. .TP 8 A (local input/local output) COMPLEX*16 pointer into the local memory to an array of dimension (LLD_A,LOCc(JA+N-1)). This array contains the local pieces of the distributed matrix sub( A ). On exit, this array contains the local pieces of the distributed matrix multiplied by CTO/CFROM. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 INFO (local output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. 
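The over/underflow guard described in PURPOSE can be sketched serially. The following is a simplified, real-valued model of the LAPACK-style scaling loop (the actual routine operates on the local pieces of a complex distributed matrix); the function name `safe_scale` is illustrative, not part of the library:

```c
#include <assert.h>
#include <float.h>
#include <math.h>

/* Apply the factor cto/cfrom to a[0..n-1] without forming an
 * intermediate that over/underflows, provided the final result is
 * representable.  smlnum/bignum model the machine safe minimum and
 * its reciprocal: the factor is applied in bounded steps. */
static void safe_scale(double *a, int n, double cfrom, double cto)
{
    double smlnum = DBL_MIN;       /* safe minimum */
    double bignum = 1.0 / smlnum;  /* its reciprocal */
    double cfromc = cfrom, ctoc = cto;
    int done = 0;

    while (!done) {
        double cfrom1 = cfromc * smlnum;
        double mul;
        if (fabs(cfrom1) > fabs(ctoc) && ctoc != 0.0) {
            mul = smlnum;          /* shrink by a safe factor, retry */
            cfromc = cfrom1;
        } else {
            double cto1 = ctoc / bignum;
            if (fabs(cto1) > fabs(cfromc)) {
                mul = bignum;      /* grow by a safe factor, retry */
                ctoc = cto1;
            } else {
                mul = ctoc / cfromc;  /* final, representable factor */
                done = 1;
            }
        }
        for (int i = 0; i < n; ++i)
            a[i] *= mul;
    }
}
```

When cto/cfrom is directly representable, the loop runs once with `mul = cto/cfrom`; only extreme ratios trigger the multi-step path.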
.TH PZLASE2 l "12 May 1997" "LAPACK version 1.5" "LAPACK auxiliary routine (version 1.5)" .SH NAME PZLASE2 - initialize an M-by-N distributed matrix sub( A ) denoting A(IA:IA+M-1,JA:JA+N-1) to BETA on the diagonal and ALPHA on the offdiagonals .SH SYNOPSIS .TP 20 SUBROUTINE PZLASE2( UPLO, M, N, ALPHA, BETA, A, IA, JA, DESCA ) .TP 20 .ti +4 CHARACTER UPLO .TP 20 .ti +4 INTEGER IA, JA, M, N .TP 20 .ti +4 COMPLEX*16 ALPHA, BETA .TP 20 .ti +4 INTEGER DESCA( * ) .TP 20 .ti +4 COMPLEX*16 A( * ) .SH PURPOSE PZLASE2 initializes an M-by-N distributed matrix sub( A ) denoting A(IA:IA+M-1,JA:JA+N-1) to BETA on the diagonal and ALPHA on the offdiagonals. PZLASE2 requires that only one dimension of the matrix operand is distributed. .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array.
.br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 UPLO (global input) CHARACTER Specifies the part of the distributed matrix sub( A ) to be set: .br = 'U': Upper triangular part is set; the strictly lower triangular part of sub( A ) is not changed; = 'L': Lower triangular part is set; the strictly upper triangular part of sub( A ) is not changed; Otherwise: All of the matrix sub( A ) is set. .TP 8 M (global input) INTEGER The number of rows to be operated on i.e the number of rows of the distributed submatrix sub( A ). M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on i.e the number of columns of the distributed submatrix sub( A ). N >= 0. .TP 8 ALPHA (global input) COMPLEX*16 The constant to which the offdiagonal elements are to be set. 
.TP 8 BETA (global input) COMPLEX*16 The constant to which the diagonal elements are to be set. .TP 8 A (local output) COMPLEX*16 pointer into the local memory to an array of dimension (LLD_A,LOCc(JA+N-1)). This array contains the local pieces of the distributed matrix sub( A ) to be set. On exit, the leading M-by-N submatrix sub( A ) is set as follows: if UPLO = 'U', A(IA+i-1,JA+j-1) = ALPHA, 1<=i<=j-1, 1<=j<=N, if UPLO = 'L', A(IA+i-1,JA+j-1) = ALPHA, j+1<=i<=M, 1<=j<=N, otherwise, A(IA+i-1,JA+j-1) = ALPHA, 1<=i<=M, 1<=j<=N, IA+i.NE.JA+j, and, for all UPLO, A(IA+i-1,JA+i-1) = BETA, 1<=i<=min(M,N). .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TH PZLASET l "12 May 1997" "LAPACK version 1.5" "LAPACK auxiliary routine (version 1.5)" .SH NAME PZLASET - initialize an M-by-N distributed matrix sub( A ) denoting A(IA:IA+M-1,JA:JA+N-1) to BETA on the diagonal and ALPHA on the offdiagonals .SH SYNOPSIS .TP 20 SUBROUTINE PZLASET( UPLO, M, N, ALPHA, BETA, A, IA, JA, DESCA ) .TP 20 .ti +4 CHARACTER UPLO .TP 20 .ti +4 INTEGER IA, JA, M, N .TP 20 .ti +4 COMPLEX*16 ALPHA, BETA .TP 20 .ti +4 INTEGER DESCA( * ) .TP 20 .ti +4 COMPLEX*16 A( * ) .SH PURPOSE PZLASET initializes an M-by-N distributed matrix sub( A ) denoting A(IA:IA+M-1,JA:JA+N-1) to BETA on the diagonal and ALPHA on the offdiagonals. Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location.
.br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). 
An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 UPLO (global input) CHARACTER Specifies the part of the distributed matrix sub( A ) to be set: .br = 'U': Upper triangular part is set; the strictly lower triangular part of sub( A ) is not changed; = 'L': Lower triangular part is set; the strictly upper triangular part of sub( A ) is not changed; Otherwise: All of the matrix sub( A ) is set. .TP 8 M (global input) INTEGER The number of rows to be operated on i.e the number of rows of the distributed submatrix sub( A ). M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on i.e the number of columns of the distributed submatrix sub( A ). N >= 0. .TP 8 ALPHA (global input) COMPLEX*16 The constant to which the offdiagonal elements are to be set. .TP 8 BETA (global input) COMPLEX*16 The constant to which the diagonal elements are to be set. .TP 8 A (local output) COMPLEX*16 pointer into the local memory to an array of dimension (LLD_A,LOCc(JA+N-1)). This array contains the local pieces of the distributed matrix sub( A ) to be set. On exit, the leading M-by-N submatrix sub( A ) is set as follows: if UPLO = 'U', A(IA+i-1,JA+j-1) = ALPHA, 1<=i<=j-1, 1<=j<=N, if UPLO = 'L', A(IA+i-1,JA+j-1) = ALPHA, j+1<=i<=M, 1<=j<=N, otherwise, A(IA+i-1,JA+j-1) = ALPHA, 1<=i<=M, 1<=j<=N, IA+i.NE.JA+j, and, for all UPLO, A(IA+i-1,JA+i-1) = BETA, 1<=i<=min(M,N). .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. 
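The LOCr/LOCc counts quoted throughout these pages come from NUMROC. The following C transcription of the block-cyclic counting rule (a sketch written for illustration, mirroring the ScaLAPACK TOOLS routine; `numroc_c` and `locr_bound` are not library names) also checks the ceiling upper bound stated above:

```c
#include <assert.h>

/* Number of entries of an N-element dimension, distributed in blocks
 * of size NB over NPROCS processes, owned by process IPROC when the
 * first block resides on process ISRCPROC. */
static int numroc_c(int n, int nb, int iproc, int isrcproc, int nprocs)
{
    int mydist = (nprocs + iproc - isrcproc) % nprocs; /* my block offset */
    int nblocks = n / nb;                  /* number of full blocks */
    int count = (nblocks / nprocs) * nb;   /* whole sweeps over the grid */
    int extrablks = nblocks % nprocs;      /* leftover full blocks */
    if (mydist < extrablks)
        count += nb;                       /* one extra full block */
    else if (mydist == extrablks)
        count += n % nb;                   /* the trailing partial block */
    return count;
}

/* The bound from the man pages: LOCr(M) <= ceil( ceil(M/MB)/NPROW )*MB,
 * and symmetrically for LOCc(N). */
static int locr_bound(int n, int nb, int nprocs)
{
    int nblocks = (n + nb - 1) / nb;           /* ceil(n/nb) */
    return ((nblocks + nprocs - 1) / nprocs) * nb;
}
```

For example, 10 rows in blocks of 3 over a 2-row grid give 6 rows on process row 0 and 4 on process row 1, both within the bound ceil(ceil(10/3)/2)*3 = 6.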
.TH PZLASSQ l "12 May 1997" "LAPACK version 1.5" "LAPACK auxiliary routine (version 1.5)" .SH NAME PZLASSQ - return the values scl and smsq such that ( scl**2 )*smsq = x( 1 )**2 +...+ x( n )**2 + ( scale**2 )*sumsq, .SH SYNOPSIS .TP 20 SUBROUTINE PZLASSQ( N, X, IX, JX, DESCX, INCX, SCALE, SUMSQ ) .TP 20 .ti +4 INTEGER IX, INCX, JX, N .TP 20 .ti +4 DOUBLE PRECISION SCALE, SUMSQ .TP 20 .ti +4 INTEGER DESCX( * ) .TP 20 .ti +4 COMPLEX*16 X( * ) .SH PURPOSE PZLASSQ returns the values scl and smsq such that ( scl**2 )*smsq = x( 1 )**2 +...+ x( n )**2 + ( scale**2 )*sumsq, where x( i ) = sub( X ) = abs( X( IX+(JX-1)*DESCX(M_)+(i-1)*INCX ) ). The value of sumsq is assumed to be at least unity and the value of ssq will then satisfy .br 1.0 .le. ssq .le. ( sumsq + 2*n ). .br scale is assumed to be non-negative and scl returns the value scl = max( scale, abs( real( x( i ) ) ), abs( aimag( x( i ) ) ) ), i .br scale and sumsq must be supplied in SCALE and SUMSQ respectively. SCALE and SUMSQ are overwritten by scl and ssq respectively. The routine makes only one pass through the vector sub( X ). Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary.
.br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br Because vectors may be viewed as a subclass of matrices, a distributed vector is considered to be a distributed matrix. The results are only available in the scope of sub( X ), i.e if sub( X ) is distributed along a process row, the correct results are only available in this process row of the grid. Similarly if sub( X ) is distributed along a process column, the correct results are only available in this process column of the grid. .br .SH ARGUMENTS .TP 8 N (global input) INTEGER The length of the distributed vector sub( X ).
.TP 8 X (input) COMPLEX*16 The vector for which a scaled sum of squares is computed. x( i ) = X(IX+(JX-1)*M_X +(i-1)*INCX ), 1 <= i <= n. .TP 8 IX (global input) INTEGER The row index in the global array X indicating the first row of sub( X ). .TP 8 JX (global input) INTEGER The column index in the global array X indicating the first column of sub( X ). .TP 8 DESCX (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix X. .TP 8 INCX (global input) INTEGER The global increment for the elements of X. Only two values of INCX are supported in this version, namely 1 and M_X. INCX must not be zero. .TP 8 SCALE (local input/local output) DOUBLE PRECISION On entry, the value scale in the equation above. On exit, SCALE is overwritten with scl, the scaling factor for the sum of squares. .TP 8 SUMSQ (local input/local output) DOUBLE PRECISION On entry, the value sumsq in the equation above. On exit, SUMSQ is overwritten with smsq, the basic sum of squares from which scl has been factored out. scalapack-doc-1.5/man/manl/pzlaswp.l0100644000056400000620000001225506335610661017127 0ustar pfrauenfstaff.TH PZLASWP l "12 May 1997" "LAPACK version 1.5" "LAPACK auxiliary routine (version 1.5)" .SH NAME PZLASWP - perform a series of row or column interchanges on the distributed matrix sub( A ) = A(IA:IA+M-1,JA:JA+N-1) .SH SYNOPSIS .TP 20 SUBROUTINE PZLASWP( DIREC, ROWCOL, N, A, IA, JA, DESCA, K1, K2, IPIV ) .TP 20 .ti +4 CHARACTER DIREC, ROWCOL .TP 20 .ti +4 INTEGER IA, JA, K1, K2, N .TP 20 .ti +4 INTEGER DESCA( * ), IPIV( * ) .TP 20 .ti +4 COMPLEX*16 A( * ) .SH PURPOSE PZLASWP performs a series of row or column interchanges on the distributed matrix sub( A ) = A(IA:IA+M-1,JA:JA+N-1). One interchange is initiated for each of rows or columns K1 through K2 of sub( A ). This routine assumes that the pivoting information has already been broadcast along the process row or column. 
.br Also note that this routine will only work for K1-K2 being in the same MB (or NB) block. If you want to pivot a full matrix, use PZLAPIV. .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. 
.br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 DIREC (global input) CHARACTER Specifies in which order the permutation is applied: = 'F' (Forward) = 'B' (Backward) .TP 8 ROWCOL (global input) CHARACTER Specifies if the rows or columns are permuted: = 'R' (Rows) = 'C' (Columns) .TP 8 N (global input) INTEGER If ROWCOL = 'R', the length of the rows of the distributed matrix A(*,JA:JA+N-1) to be permuted; If ROWCOL = 'C', the length of the columns of the distributed matrix A(IA:IA+N-1,*) to be permuted. .TP 8 A (local input/local output) COMPLEX*16 pointer into the local memory to an array of dimension (LLD_A, * ). On entry, this array contains the local pieces of the distri- buted matrix to which the row/columns interchanges will be applied. On exit the permuted distributed matrix. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 K1 (global input) INTEGER The first element of IPIV for which a row or column inter- change will be done. .TP 8 K2 (global input) INTEGER The last element of IPIV for which a row or column inter- change will be done. .TP 8 IPIV (local input) INTEGER array, dimension LOCr(M_A)+MB_A for row pivoting and LOCc(N_A)+NB_A for column pivoting. 
This array is tied to the matrix A, IPIV(K) = L implies rows (or columns) K and L are to be interchanged. scalapack-doc-1.5/man/manl/pzlatra.l0100644000056400000620000000767606335610661017117 0ustar pfrauenfstaff.TH PZLATRA l "12 May 1997" "LAPACK version 1.5" "LAPACK auxiliary routine (version 1.5)" .SH NAME PZLATRA - compute the trace of an N-by-N distributed matrix sub( A ) denoting A( IA:IA+N-1, JA:JA+N-1 ) .SH SYNOPSIS .TP 20 COMPLEX*16 FUNCTION PZLATRA( N, A, IA, JA, DESCA ) .TP 20 .ti +4 INTEGER IA, JA, N .TP 20 .ti +4 INTEGER DESCA( * ) .TP 20 .ti +4 COMPLEX*16 A( * ) .SH PURPOSE PZLATRA computes the trace of an N-by-N distributed matrix sub( A ) denoting A( IA:IA+N-1, JA:JA+N-1 ). The result is left on every process of the grid. .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. 
.br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 N (global input) INTEGER The number of rows and columns to be operated on, i.e. the order of the distributed submatrix sub( A ). N >= 0. .TP 8 A (local input) COMPLEX*16 pointer into the local memory to an array of dimension ( LLD_A, LOCc(JA+N-1) ). This array contains the local pieces of the distributed matrix whose trace is to be computed. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. 
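The Notes sections above reduce all local storage questions to calls of NUMROC. As an informal illustration only (a Python sketch of the standard block-cyclic counting rule, not the ScaLAPACK tool function itself), that computation can be expressed as:

```python
def numroc(n, nb, iproc, isrcproc, nprocs):
    """Number of rows/columns of an n-long dimension, split into blocks
    of size nb over nprocs processes, owned by process iproc when the
    first block resides on process isrcproc."""
    mydist = (nprocs + iproc - isrcproc) % nprocs   # distance from the source process
    nblocks = n // nb                               # number of complete blocks
    num = (nblocks // nprocs) * nb                  # whole "rounds" of blocks
    extrablks = nblocks % nprocs                    # leftover complete blocks
    if mydist < extrablks:
        num += nb                                   # one extra complete block
    elif mydist == extrablks:
        num += n % nb                               # the trailing partial block
    return num
```

For example, a dimension of n = 10 with block size nb = 2 over a 2-process row starting at process 0 assigns 6 elements to process 0 and 4 to process 1; every element is owned by exactly one process, so the counts always sum to n and each respects the ceil( ceil(n/nb)/nprocs )*nb bound quoted above.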
scalapack-doc-1.5/man/manl/pzlatrd.l0100644000056400000620000002140106335610661017100 0ustar pfrauenfstaff.TH PZLATRD l "12 May 1997" "LAPACK version 1.5" "LAPACK auxiliary routine (version 1.5)" .SH NAME PZLATRD - reduce NB rows and columns of a complex Hermitian distributed matrix sub( A ) = A(IA:IA+N-1,JA:JA+N-1) to complex tridiagonal form by a unitary similarity transformation Q' * sub( A ) * Q, and return the matrices V and W which are needed to apply the transformation to the unreduced part of sub( A ) .SH SYNOPSIS .TP 20 SUBROUTINE PZLATRD( UPLO, N, NB, A, IA, JA, DESCA, D, E, TAU, W, IW, JW, DESCW, WORK ) .TP 20 .ti +4 CHARACTER UPLO .TP 20 .ti +4 INTEGER IA, IW, JA, JW, N, NB .TP 20 .ti +4 INTEGER DESCA( * ), DESCW( * ) .TP 20 .ti +4 DOUBLE PRECISION D( * ), E( * ) .TP 20 .ti +4 COMPLEX*16 A( * ), TAU( * ), W( * ), WORK( * ) .SH PURPOSE PZLATRD reduces NB rows and columns of a complex Hermitian distributed matrix sub( A ) = A(IA:IA+N-1,JA:JA+N-1) to complex tridiagonal form by a unitary similarity transformation Q' * sub( A ) * Q, and returns the matrices V and W which are needed to apply the transformation to the unreduced part of sub( A ). If UPLO = 'U', PZLATRD reduces the last NB rows and columns of a matrix, of which the upper triangle is supplied; .br if UPLO = 'L', PZLATRD reduces the first NB rows and columns of a matrix, of which the lower triangle is supplied. .br This is an auxiliary routine called by PZHETRD. .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". 
.br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). 
An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 UPLO (global input) CHARACTER Specifies whether the upper or lower triangular part of the Hermitian matrix sub( A ) is stored: .br = 'U': Upper triangular .br = 'L': Lower triangular .TP 8 N (global input) INTEGER The number of rows and columns to be operated on, i.e. the order of the distributed submatrix sub( A ). N >= 0. .TP 8 NB (global input) INTEGER The number of rows and columns to be reduced. .TP 8 A (local input/local output) COMPLEX*16 pointer into the local memory to an array of dimension (LLD_A,LOCc(JA+N-1)). On entry, this array contains the local pieces of the Hermitian distributed matrix sub( A ). If UPLO = 'U', the leading N-by-N upper triangular part of sub( A ) contains the upper triangular part of the matrix, and its strictly lower triangular part is not referenced. If UPLO = 'L', the leading N-by-N lower triangular part of sub( A ) contains the lower triangular part of the matrix, and its strictly upper triangular part is not referenced. On exit, if UPLO = 'U', the last NB columns have been reduced to tridiagonal form, with the diagonal elements overwriting the diagonal elements of sub( A ); the elements above the diagonal with the array TAU, represent the unitary matrix Q as a product of elementary reflectors. If UPLO = 'L', the first NB columns have been reduced to tridiagonal form, with the diagonal elements overwriting the diagonal elements of sub( A ); the elements below the diagonal with the array TAU, represent the unitary matrix Q as a product of elementary reflectors; See Further Details. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). 
.TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 D (local output) DOUBLE PRECISION array, dimension LOCc(JA+N-1) The diagonal elements of the tridiagonal matrix T: D(i) = A(i,i). D is tied to the distributed matrix A. .TP 8 E (local output) DOUBLE PRECISION array, dimension LOCc(JA+N-1) if UPLO = 'U', LOCc(JA+N-2) otherwise. The off-diagonal elements of the tridiagonal matrix T: E(i) = A(i,i+1) if UPLO = 'U', E(i) = A(i+1,i) if UPLO = 'L'. E is tied to the distributed matrix A. .TP 8 TAU (local output) COMPLEX*16, array, dimension LOCc(JA+N-1). This array contains the scalar factors TAU of the elementary reflectors. TAU is tied to the distributed matrix A. .TP 8 W (local output) COMPLEX*16 pointer into the local memory to an array of dimension (LLD_W,NB_W), This array contains the local pieces of the N-by-NB_W matrix W required to update the unreduced part of sub( A ). .TP 8 IW (global input) INTEGER The row index in the global array W indicating the first row of sub( W ). .TP 8 JW (global input) INTEGER The column index in the global array W indicating the first column of sub( W ). .TP 8 DESCW (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix W. .TP 8 WORK (local workspace) COMPLEX*16 array, dimension (NB_A) .SH FURTHER DETAILS If UPLO = 'U', the matrix Q is represented as a product of elementary reflectors .br Q = H(n) H(n-1) . . . H(n-nb+1). .br Each H(i) has the form .br H(i) = I - tau * v * v' .br where tau is a complex scalar, and v is a complex vector with v(i:n) = 0 and v(i-1) = 1; v(1:i-1) is stored on exit in .br A(ia:ia+i-2,ja+i), and tau in TAU(ja+i-1). .br If UPLO = 'L', the matrix Q is represented as a product of elementary reflectors .br Q = H(1) H(2) . . . H(nb). 
.br Each H(i) has the form .br H(i) = I - tau * v * v' .br where tau is a complex scalar, and v is a complex vector with v(1:i) = 0 and v(i+1) = 1; v(i+2:n) is stored on exit in .br A(ia+i+1:ia+n-1,ja+i-1), and tau in TAU(ja+i-1). .br The elements of the vectors v together form the N-by-NB matrix V which is needed, with W, to apply the transformation to the unreduced part of the matrix, using a Hermitian rank-2k update of the form: sub( A ) := sub( A ) - V*W' - W*V'. .br The contents of A on exit are illustrated by the following examples with n = 5 and nb = 2: .br if UPLO = 'U': if UPLO = 'L': .br ( a a a v4 v5 ) ( d ) .br ( a a v4 v5 ) ( 1 d ) .br ( a 1 v5 ) ( v1 1 a ) .br ( d 1 ) ( v1 v2 a a ) .br ( d ) ( v1 v2 a a a ) .br where d denotes a diagonal element of the reduced matrix, a denotes an element of the original matrix that is unchanged, and vi denotes an element of the vector defining H(i). .br scalapack-doc-1.5/man/manl/pzlatrs.l0100644000056400000620000000122306335610661017117 0ustar pfrauenfstaff.TH PZLATRS l "12 May 1997" "LAPACK version 1.5" "LAPACK auxiliary routine (version 1.5)" .SH NAME PZLATRS - solve a triangular system .SH SYNOPSIS .TP 20 SUBROUTINE PZLATRS( UPLO, TRANS, DIAG, NORMIN, N, A, IA, JA, DESCA, X, IX, JX, DESCX, SCALE, CNORM, WORK ) .TP 20 .ti +4 CHARACTER DIAG, NORMIN, TRANS, UPLO .TP 20 .ti +4 INTEGER IA, IX, JA, JX, N .TP 20 .ti +4 DOUBLE PRECISION SCALE .TP 20 .ti +4 INTEGER DESCA( * ), DESCX( * ) .TP 20 .ti +4 DOUBLE PRECISION CNORM( * ) .TP 20 .ti +4 COMPLEX*16 A( * ), X( * ), WORK( * ) .SH PURPOSE PZLATRS solves a triangular system. This routine is unfinished at this time, but will be part of the next release. 
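The FURTHER DETAILS sections above represent Q as a product of elementary reflectors H(i) = I - tau * v * v'. As an informal illustration (plain Python complex arithmetic; the helper name is ours, not a ScaLAPACK routine), applying one such reflector to a vector amounts to:

```python
def apply_reflector(tau, v, x):
    # y = ( I - tau * v * v' ) * x, where v' is the conjugate transpose of v
    inner = sum(vi.conjugate() * xi for vi, xi in zip(v, x))  # v' * x
    return [xi - tau * vi * inner for vi, xi in zip(v, x)]
```

With a real vector v = (1, 1) and tau = 2/(v'v) = 1 this is the classical Householder reflection, which sends (1, 0) to (0, -1); PZLATRD chooses tau and v so that the reflectors annihilate the off-tridiagonal entries of each processed row or column.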
.br scalapack-doc-1.5/man/manl/pzlatrz.l0100644000056400000620000001455206335610661017137 0ustar pfrauenfstaff.TH PZLATRZ l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PZLATRZ - reduce the M-by-N ( M<=N ) complex upper trapezoidal matrix sub( A ) = [A(IA:IA+M-1,JA:JA+M-1) A(IA:IA+M-1,JA+N-L:JA+N-1)] .SH SYNOPSIS .TP 20 SUBROUTINE PZLATRZ( M, N, L, A, IA, JA, DESCA, TAU, WORK ) .TP 20 .ti +4 INTEGER IA, JA, L, M, N .TP 20 .ti +4 INTEGER DESCA( * ) .TP 20 .ti +4 COMPLEX*16 A( * ), TAU( * ), WORK( * ) .SH PURPOSE PZLATRZ reduces the M-by-N ( M<=N ) complex upper trapezoidal matrix sub( A ) = [A(IA:IA+M-1,JA:JA+M-1) A(IA:IA+M-1,JA+N-L:JA+N-1)] to upper triangular form by means of unitary transformations. The upper trapezoidal matrix sub( A ) is factored as .br sub( A ) = ( R 0 ) * Z, .br where Z is an N-by-N unitary matrix and R is an M-by-M upper triangular matrix. .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. 
.br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 M (global input) INTEGER The number of rows to be operated on, i.e. the number of rows of the distributed submatrix sub( A ). M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on, i.e. the number of columns of the distributed submatrix sub( A ). N >= 0. .TP 8 L (global input) INTEGER The columns of the distributed submatrix sub( A ) containing the meaningful part of the Householder reflectors. L > 0. .TP 8 A (local input/local output) COMPLEX*16 pointer into the local memory to an array of dimension (LLD_A, LOCc(JA+N-1)). On entry, the local pieces of the M-by-N distributed matrix sub( A ) which is to be factored. 
On exit, the leading M-by-M upper triangular part of sub( A ) contains the upper trian- gular matrix R, and elements N-L+1 to N of the first M rows of sub( A ), with the array TAU, represent the unitary matrix Z as a product of M elementary reflectors. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 TAU (local output) COMPLEX*16, array, dimension LOCr(IA+M-1) This array contains the scalar factors of the elementary reflectors. TAU is tied to the distributed matrix A. .TP 8 WORK (local workspace) COMPLEX*16 array, dimension (LWORK) LWORK >= Nq0 + MAX( 1, Mp0 ), where IROFF = MOD( IA-1, MB_A ), ICOFF = MOD( JA-1, NB_A ), IAROW = INDXG2P( IA, MB_A, MYROW, RSRC_A, NPROW ), IACOL = INDXG2P( JA, NB_A, MYCOL, CSRC_A, NPCOL ), Mp0 = NUMROC( M+IROFF, MB_A, MYROW, IAROW, NPROW ), Nq0 = NUMROC( N+ICOFF, NB_A, MYCOL, IACOL, NPCOL ), and NUMROC, INDXG2P are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. .SH FURTHER DETAILS The factorization is obtained by Householder's method. The kth transformation matrix, Z( k ), whose conjugate transpose is used to introduce zeros into the (m - k + 1)th row of sub( A ), is given in the form .br Z( k ) = ( I 0 ), .br ( 0 T( k ) ) .br where .br T( k ) = I - tau*u( k )*u( k )', u( k ) = ( 1 ), ( 0 ) ( z( k ) ) tau is a scalar and z( k ) is an ( n - m ) element vector. tau and z( k ) are chosen to annihilate the elements of the kth row of sub( A ). .br The scalar tau is returned in the kth element of TAU and the vector u( k ) in the kth row of sub( A ), such that the elements of z( k ) are in a( k, m + 1 ), ..., a( k, n ). 
The elements of R are returned in the upper triangular part of sub( A ). .br Z is given by .br Z = Z( 1 ) * Z( 2 ) * ... * Z( m ). .br scalapack-doc-1.5/man/manl/pzlauu2.l0100644000056400000620000001161006335610662017024 0ustar pfrauenfstaff.TH PZLAUU2 l "12 May 1997" "LAPACK version 1.5" "LAPACK auxiliary routine (version 1.5)" .SH NAME PZLAUU2 - compute the product U * U' or L' * L, where the triangular factor U or L is stored in the upper or lower triangular part of the matrix sub( A ) = A(IA:IA+N-1,JA:JA+N-1) .SH SYNOPSIS .TP 20 SUBROUTINE PZLAUU2( UPLO, N, A, IA, JA, DESCA ) .TP 20 .ti +4 CHARACTER UPLO .TP 20 .ti +4 INTEGER IA, JA, N .TP 20 .ti +4 INTEGER DESCA( * ) .TP 20 .ti +4 COMPLEX*16 A( * ) .SH PURPOSE PZLAUU2 computes the product U * U' or L' * L, where the triangular factor U or L is stored in the upper or lower triangular part of the matrix sub( A ) = A(IA:IA+N-1,JA:JA+N-1). If UPLO = 'U' or 'u' then the upper triangle of the result is stored, overwriting the factor U in sub( A ). .br If UPLO = 'L' or 'l' then the lower triangle of the result is stored, overwriting the factor L in sub( A ). .br This is the unblocked form of the algorithm, calling Level 2 BLAS. No communication is performed by this routine, the matrix to operate on should be strictly local to one process. .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. 
.br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 UPLO (global input) CHARACTER*1 Specifies whether the triangular factor stored in the matrix sub( A ) is upper or lower triangular: .br = 'U': Upper triangular, .br = 'L': Lower triangular. .TP 8 N (global input) INTEGER The number of rows and columns to be operated on, i.e. the order of the triangular factor U or L. N >= 0. 
.TP 8 A (local input/local output) COMPLEX*16 pointer into the local memory to an array of dimension (LLD_A, LOCc(JA+N-1)). On entry, the local pieces of the triangular factor L or U. On exit, if UPLO = 'U', the upper triangle of the distributed matrix sub( A ) is overwritten with the upper triangle of the product U * U'; if UPLO = 'L', the lower triangle of sub( A ) is overwritten with the lower triangle of the product L' * L. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. scalapack-doc-1.5/man/manl/pzlauum.l0100644000056400000620000001144606335610662017126 0ustar pfrauenfstaff.TH PZLAUUM l "12 May 1997" "LAPACK version 1.5" "LAPACK auxiliary routine (version 1.5)" .SH NAME PZLAUUM - compute the product U * U' or L' * L, where the triangular factor U or L is stored in the upper or lower triangular part of the distributed matrix sub( A ) = A(IA:IA+N-1,JA:JA+N-1) .SH SYNOPSIS .TP 20 SUBROUTINE PZLAUUM( UPLO, N, A, IA, JA, DESCA ) .TP 20 .ti +4 CHARACTER UPLO .TP 20 .ti +4 INTEGER IA, JA, N .TP 20 .ti +4 INTEGER DESCA( * ) .TP 20 .ti +4 COMPLEX*16 A( * ) .SH PURPOSE PZLAUUM computes the product U * U' or L' * L, where the triangular factor U or L is stored in the upper or lower triangular part of the distributed matrix sub( A ) = A(IA:IA+N-1,JA:JA+N-1). If UPLO = 'U' or 'u' then the upper triangle of the result is stored, overwriting the factor U in sub( A ). .br If UPLO = 'L' or 'l' then the lower triangle of the result is stored, overwriting the factor L in sub( A ). .br This is the blocked form of the algorithm, calling Level 3 PBLAS. Notes .br ===== .br Each global data object is described by an associated description vector. 
This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. 
.br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 UPLO (global input) CHARACTER*1 Specifies whether the triangular factor stored in the distributed matrix sub( A ) is upper or lower triangular: .br = 'U': Upper triangular .br = 'L': Lower triangular .TP 8 N (global input) INTEGER The number of rows and columns to be operated on, i.e. the order of the triangular factor U or L. N >= 0. .TP 8 A (local input/local output) COMPLEX*16 pointer into the local memory to an array of dimension (LLD_A, LOCc(JA+N-1)). On entry, the local pieces of the triangular factor L or U. On exit, if UPLO = 'U', the upper triangle of the distributed matrix sub( A ) is overwritten with the upper triangle of the product U * U'; if UPLO = 'L', the lower triangle of sub( A ) is overwritten with the lower triangle of the product L' * L. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. 
scalapack-doc-1.5/man/manl/pzmax1.l0100644000056400000620000001341106335610662016643 0ustar pfrauenfstaff.TH PZMAX1 l "12 May 1997" "LAPACK version 1.5" "LAPACK auxiliary routine (version 1.5)" .SH NAME PZMAX1 - compute the global index of the maximum element in absolute value of a distributed vector sub( X ) .SH SYNOPSIS .TP 19 SUBROUTINE PZMAX1( N, AMAX, INDX, X, IX, JX, DESCX, INCX ) .TP 19 .ti +4 INTEGER INDX, INCX, IX, JX, N .TP 19 .ti +4 COMPLEX*16 AMAX .TP 19 .ti +4 INTEGER DESCX( * ) .TP 19 .ti +4 COMPLEX*16 X( * ) .SH PURPOSE PZMAX1 computes the global index of the maximum element in absolute value of a distributed vector sub( X ). The global index is returned in INDX and the value is returned in AMAX, .br where sub( X ) denotes X(IX:IX+N-1,JX) if INCX = 1, .br X(IX,JX:JX+N-1) if INCX = M_X. .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. 
.br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br Because vectors may be viewed as a subclass of matrices, a distributed vector is considered to be a distributed matrix. When the result of a vector-oriented PBLAS call is a scalar, it will be made available only within the scope which owns the vector(s) being operated on. Let X be a generic term for the input vector(s). Then, the processes which receive the answer will be (note that if an operation involves more than one vector, the processes which re- ceive the result will be the union of the following calculation for each vector): .br If N = 1, M_X = 1 and INCX = 1, then one can't determine if a process row or process column owns the vector operand, therefore only the process of coordinate {RSRC_X, CSRC_X} receives the result; If INCX = M_X, then sub( X ) is a vector distributed over a process row. 
Each process part of this row receives the result; .br If INCX = 1, then sub( X ) is a vector distributed over a process column. Each process part of this column receives the result; Based on PZAMAX from Level 1 PBLAS. The change is to use the 'genuine' absolute value. .br The serial version was contributed to LAPACK by Nick Higham for use with ZLACON. .br .SH ARGUMENTS .TP 8 N (global input) pointer to INTEGER The number of components of the distributed vector sub( X ). N >= 0. .TP 8 AMAX (global output) pointer to DOUBLE PRECISION The absolute value of the largest entry of the distributed vector sub( X ) only in the scope of sub( X ). .TP 8 INDX (global output) pointer to INTEGER The global index of the element of the distributed vector sub( X ) whose real part has maximum absolute value. .TP 8 X (local input) COMPLEX*16 array containing the local pieces of a distributed matrix of dimension of at least ( (JX-1)*M_X + IX + ( N - 1 )*abs( INCX ) ) This array contains the entries of the distributed vector sub( X ). .TP 8 IX (global input) INTEGER The row index in the global array X indicating the first row of sub( X ). .TP 8 JX (global input) INTEGER The column index in the global array X indicating the first column of sub( X ). .TP 8 DESCX (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix X. .TP 8 INCX (global input) INTEGER The global increment for the elements of X. Only two values of INCX are supported in this version, namely 1 and M_X. INCX must not be zero. 
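The distinction drawn above, PZMAX1 using the "genuine" absolute value where the Level 1 PBLAS routine PZAMAX uses the cheaper pseudo-modulus |Re(x)| + |Im(x)|, can be reproduced on a plain list. This Python sketch is illustrative only; the helper names are not ScaLAPACK routines, and the distributed-vector machinery is omitted.

```python
def max1_index(x):
    """1-based index (as in Fortran) of the entry of largest true
    modulus |x_k| -- the serial analogue of PZMAX1's index selection."""
    return max(range(len(x)), key=lambda k: abs(x[k])) + 1

def amax_index(x):
    """1-based index of the entry maximizing |Re| + |Im| -- the
    pseudo-modulus used by the Level 1 PBLAS routine PZAMAX."""
    return max(range(len(x)), key=lambda k: abs(x[k].real) + abs(x[k].imag)) + 1

# 3+3i has pseudo-modulus 6 but true modulus ~4.24; 5 has both equal to 5,
# so the two selection rules disagree on this vector.
x = [1 + 0j, 3 + 3j, 5 + 0j]
print(max1_index(x))  # -> 3 (|5| > |3+3i| ~ 4.24)
print(amax_index(x))  # -> 2 (|3| + |3| = 6 > 5)
```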
scalapack-doc-1.5/man/manl/pzpbsv.l0100644000056400000620000000141606335610662016751 0ustar pfrauenfstaff.TH PZPBSV l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PZPBSV - solve a system of linear equations A(1:N, JA:JA+N-1) * X = B(IB:IB+N-1, 1:NRHS) .SH SYNOPSIS .TP 19 SUBROUTINE PZPBSV( UPLO, N, BW, NRHS, A, JA, DESCA, B, IB, DESCB, WORK, LWORK, INFO ) .TP 19 .ti +4 CHARACTER UPLO .TP 19 .ti +4 INTEGER BW, IB, INFO, JA, LWORK, N, NRHS .TP 19 .ti +4 INTEGER DESCA( * ), DESCB( * ) .TP 19 .ti +4 COMPLEX*16 A( * ), B( * ), WORK( * ) .SH PURPOSE PZPBSV solves a system of linear equations where A(1:N, JA:JA+N-1) is an N-by-N complex .br banded symmetric positive definite distributed .br matrix with bandwidth BW. .br Cholesky factorization is used to factor a reordering of .br the matrix into L L'. .br See PZPBTRF and PZPBTRS for details. .br scalapack-doc-1.5/man/manl/pzpbtrf.l0100644000056400000620000000232006335610662017107 0ustar pfrauenfstaff.TH PZPBTRF l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PZPBTRF - compute a Cholesky factorization of an N-by-N complex banded symmetric positive definite distributed matrix with bandwidth BW .SH SYNOPSIS .TP 20 SUBROUTINE PZPBTRF( UPLO, N, BW, A, JA, DESCA, AF, LAF, WORK, LWORK, INFO ) .TP 20 .ti +4 CHARACTER UPLO .TP 20 .ti +4 INTEGER BW, INFO, JA, LAF, LWORK, N .TP 20 .ti +4 INTEGER DESCA( * ) .TP 20 .ti +4 COMPLEX*16 A( * ), AF( * ), WORK( * ) .SH PURPOSE PZPBTRF computes a Cholesky factorization of an N-by-N complex banded symmetric positive definite distributed matrix with bandwidth BW: A(1:N, JA:JA+N-1). Reordering is used to increase parallelism in the factorization. This reordering results in factors that are DIFFERENT from those produced by equivalent sequential codes. These factors cannot be used directly by users; however, they can be used in .br subsequent calls to PZPBTRS to solve linear systems. 
.br The factorization has the form .br P A(1:N, JA:JA+N-1) P^T = U' U , if UPLO = 'U', or P A(1:N, JA:JA+N-1) P^T = L L', if UPLO = 'L' .br where U is a banded upper triangular matrix and L is banded lower triangular, and P is a permutation matrix. .br scalapack-doc-1.5/man/manl/pzpbtrs.l0100644000056400000620000000170006335610662017125 0ustar pfrauenfstaff.TH PZPBTRS l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PZPBTRS - solve a system of linear equations A(1:N, JA:JA+N-1) * X = B(IB:IB+N-1, 1:NRHS) .SH SYNOPSIS .TP 20 SUBROUTINE PZPBTRS( UPLO, N, BW, NRHS, A, JA, DESCA, B, IB, DESCB, AF, LAF, WORK, LWORK, INFO ) .TP 20 .ti +4 CHARACTER UPLO .TP 20 .ti +4 INTEGER BW, IB, INFO, JA, LAF, LWORK, N, NRHS .TP 20 .ti +4 INTEGER DESCA( * ), DESCB( * ) .TP 20 .ti +4 COMPLEX*16 A( * ), AF( * ), B( * ), WORK( * ) .SH PURPOSE PZPBTRS solves a system of linear equations where A(1:N, JA:JA+N-1) is the matrix used to produce the factors stored in A(1:N,JA:JA+N-1) and AF by PZPBTRF. .br A(1:N, JA:JA+N-1) is an N-by-N complex .br banded symmetric positive definite distributed .br matrix with bandwidth BW. .br Depending on the value of UPLO, A stores either U or L in the equation A(1:N, JA:JA+N-1) = U'*U or L*L' as computed by PZPBTRF. .br Routine PZPBTRF MUST be called first. 
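The factorization A = U'*U (or L*L') used by PZPBTRF/PZPBTRS above can be illustrated on a tiny serial example. The following Python sketch computes a dense lower Cholesky factor and verifies the reconstruction; it is illustrative only and omits the permutation P and the banded, distributed storage that the parallel routines use.

```python
def cholesky_lower(a):
    """Dense lower Cholesky factor L with A = L * L^T, for a small
    symmetric positive definite matrix given as a list of rows.
    Serial sketch only -- PZPBTRF additionally reorders the matrix
    (the permutation P) to increase parallelism."""
    n = len(a)
    l = [[0.0] * n for _ in range(n)]
    for j in range(n):
        s = a[j][j] - sum(l[j][k] ** 2 for k in range(j))
        l[j][j] = s ** 0.5
        for i in range(j + 1, n):
            l[i][j] = (a[i][j] - sum(l[i][k] * l[j][k] for k in range(j))) / l[j][j]
    return l

# Tridiagonal (bandwidth BW = 1) symmetric positive definite test matrix.
a = [[4.0, 1.0, 0.0],
     [1.0, 4.0, 1.0],
     [0.0, 1.0, 4.0]]
l = cholesky_lower(a)

# Reconstruct L * L^T and compare it with A entry by entry.
recon = [[sum(l[i][k] * l[j][k] for k in range(3)) for j in range(3)]
         for i in range(3)]
assert all(abs(recon[i][j] - a[i][j]) < 1e-12 for i in range(3) for j in range(3))
```

Note that the factor preserves the band structure here; in the parallel routine the reordered factors are different from the serial ones and are usable only through PZPBTRS.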
.br scalapack-doc-1.5/man/manl/pzpbtrsv.l0100644000056400000620000000216706335610662017323 0ustar pfrauenfstaff.TH PZPBTRSV l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PZPBTRSV - solve a banded triangular system of linear equations A(1:N, JA:JA+N-1) * X = B(IB:IB+N-1, 1:NRHS) .SH SYNOPSIS .TP 21 SUBROUTINE PZPBTRSV( UPLO, TRANS, N, BW, NRHS, A, JA, DESCA, B, IB, DESCB, AF, LAF, WORK, LWORK, INFO ) .TP 21 .ti +4 CHARACTER TRANS, UPLO .TP 21 .ti +4 INTEGER BW, IB, INFO, JA, LAF, LWORK, N, NRHS .TP 21 .ti +4 INTEGER DESCA( * ), DESCB( * ) .TP 21 .ti +4 COMPLEX*16 A( * ), AF( * ), B( * ), WORK( * ) .SH PURPOSE PZPBTRSV solves a banded triangular system of linear equations or .br A(1:N, JA:JA+N-1)^H * X = B(IB:IB+N-1, 1:NRHS) where A(1:N, JA:JA+N-1) is a banded .br triangular matrix factor produced by the .br Cholesky factorization code PZPBTRF .br and is stored in A(1:N,JA:JA+N-1) and AF. .br The matrix stored in A(1:N, JA:JA+N-1) is either .br upper or lower triangular according to UPLO, .br and the choice of solving A(1:N, JA:JA+N-1) or A(1:N, JA:JA+N-1)^H is dictated by the user by the parameter TRANS. .br Routine PZPBTRF MUST be called first. 
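The two triangular sweeps that PZPBTRSV performs (and that PZPBTRS chains together) correspond, in serial form, to forward and back substitution. The Python sketch below shows those sweeps on a tiny dense example; the function names are illustrative, and the permutation and banded distributed storage of the parallel code are omitted.

```python
def solve_lower(l, b):
    """Forward substitution for L y = b (L lower triangular) --
    the serial analogue of the UPLO='L', TRANS='N' sweep."""
    n = len(b)
    y = [0.0] * n
    for i in range(n):
        y[i] = (b[i] - sum(l[i][k] * y[k] for k in range(i))) / l[i][i]
    return y

def solve_upper(u, b):
    """Back substitution for U x = b (U upper triangular) -- the
    companion sweep applied to the (conjugate-)transposed factor."""
    n = len(b)
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (b[i] - sum(u[i][k] * x[k] for k in range(i + 1, n))) / u[i][i]
    return x

# Solve (L L^T) x = b by two triangular sweeps, as PZPBTRS does with
# the factors produced by PZPBTRF (here without the permutation P).
l  = [[2.0, 0.0], [1.0, 1.0]]   # lower factor
lt = [[2.0, 1.0], [0.0, 1.0]]   # its transpose
b  = [2.0, 2.0]
y = solve_lower(l, b)           # y = [1.0, 1.0]
x = solve_upper(lt, y)          # x = [0.0, 1.0]
assert y == [1.0, 1.0] and x == [0.0, 1.0]
```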
.br scalapack-doc-1.5/man/manl/pzpocon.l0100644000056400000620000001511106335610662017112 0ustar pfrauenfstaff.TH PZPOCON l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PZPOCON - estimate the reciprocal of the condition number (in the 1-norm) of a complex Hermitian positive definite distributed matrix using the Cholesky factorization A = U**H*U or A = L*L**H computed by PZPOTRF .SH SYNOPSIS .TP 20 SUBROUTINE PZPOCON( UPLO, N, A, IA, JA, DESCA, ANORM, RCOND, WORK, LWORK, RWORK, LRWORK, INFO ) .TP 20 .ti +4 CHARACTER UPLO .TP 20 .ti +4 INTEGER IA, INFO, JA, LRWORK, LWORK, N .TP 20 .ti +4 DOUBLE PRECISION ANORM, RCOND .TP 20 .ti +4 INTEGER DESCA( * ) .TP 20 .ti +4 DOUBLE PRECISION RWORK( * ) .TP 20 .ti +4 COMPLEX*16 A( * ), WORK( * ) .SH PURPOSE PZPOCON estimates the reciprocal of the condition number (in the 1-norm) of a complex Hermitian positive definite distributed matrix using the Cholesky factorization A = U**H*U or A = L*L**H computed by PZPOTRF. An estimate is obtained for norm(inv(A(IA:IA+N-1,JA:JA+N-1))), and the reciprocal of the condition number is computed as .br RCOND = 1 / ( norm( A(IA:IA+N-1,JA:JA+N-1) ) * norm( inv(A(IA:IA+N-1,JA:JA+N-1)) ) ). Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. 
The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 UPLO (global input) CHARACTER Specifies whether the factor stored in A(IA:IA+N-1,JA:JA+N-1) is upper or lower triangular. .br = 'U': Upper triangular .br = 'L': Lower triangular .TP 8 N (global input) INTEGER .br The order of the distributed matrix A(IA:IA+N-1,JA:JA+N-1). N >= 0. .TP 8 A (local input) COMPLEX*16 pointer into the local memory to an array of dimension ( LLD_A, LOCc(JA+N-1) ). 
On entry, this array contains the local pieces of the factors L or U from the Cholesky factorization A(IA:IA+N-1,JA:JA+N-1) = U'*U or L*L', as computed by PZPOTRF. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 ANORM (global input) DOUBLE PRECISION The 1-norm (or infinity-norm) of the hermitian distributed matrix A(IA:IA+N-1,JA:JA+N-1). .TP 8 RCOND (global output) DOUBLE PRECISION The reciprocal of the condition number of the distributed matrix A(IA:IA+N-1,JA:JA+N-1), computed as .br RCOND = 1 / ( norm( A(IA:IA+N-1,JA:JA+N-1) ) * .br norm( inv(A(IA:IA+N-1,JA:JA+N-1)) ) ). .TP 8 WORK (local workspace/local output) COMPLEX*16 array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. LWORK is local input and must be at least LWORK >= 2*LOCr(N+MOD(IA-1,MB_A)) + MAX( 2, MAX(NB_A*MAX(1,CEIL(P-1,Q)),LOCc(N+MOD(JA-1,NB_A)) + NB_A*MAX(1,CEIL(Q-1,P))) ). If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 RWORK (local workspace/local output) DOUBLE PRECISION array, dimension (LRWORK) On exit, RWORK(1) returns the minimal and optimal LRWORK. .TP 8 LRWORK (local or global input) INTEGER The dimension of the array RWORK. LRWORK is local input and must be at least LRWORK >= 2*LOCc(N+MOD(JA-1,NB_A)). If LRWORK = -1, then LRWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. 
Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 INFO (global output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. scalapack-doc-1.5/man/manl/pzpoequ.l0100644000056400000620000001414106335610662017127 0ustar pfrauenfstaff.TH PZPOEQU l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PZPOEQU - compute row and column scalings intended to equilibrate a distributed Hermitian positive definite matrix sub( A ) = A(IA:IA+N-1,JA:JA+N-1) and reduce its condition number (with respect to the two-norm) .SH SYNOPSIS .TP 20 SUBROUTINE PZPOEQU( N, A, IA, JA, DESCA, SR, SC, SCOND, AMAX, INFO ) .TP 20 .ti +4 INTEGER IA, INFO, JA, N .TP 20 .ti +4 DOUBLE PRECISION AMAX, SCOND .TP 20 .ti +4 INTEGER DESCA( * ) .TP 20 .ti +4 DOUBLE PRECISION SC( * ), SR( * ) .TP 20 .ti +4 COMPLEX*16 A( * ) .SH PURPOSE PZPOEQU computes row and column scalings intended to equilibrate a distributed Hermitian positive definite matrix sub( A ) = A(IA:IA+N-1,JA:JA+N-1) and reduce its condition number (with respect to the two-norm). SR and SC contain the scale factors, S(i) = 1/sqrt(A(i,i)), chosen so that the scaled distri- buted matrix B with elements B(i,j) = S(i)*A(i,j)*S(j) has ones on the diagonal. This choice of SR and SC puts the condition number of B within a factor N of the smallest possible condition number over all possible diagonal scalings. .br The scaling factor are stored along process rows in SR and along process columns in SC. The duplication of information simplifies greatly the application of the factors. .br Notes .br ===== .br Each global data object is described by an associated description vector. 
This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. 
.br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 N (global input) INTEGER The number of rows and columns to be operated on i.e the order of the distributed submatrix sub( A ). N >= 0. .TP 8 A (local input) COMPLEX*16 pointer into the local memory to an array of local dimension ( LLD_A, LOCc(JA+N-1) ), the N-by-N Hermitian positive definite distributed matrix sub( A ) whose scaling factors are to be computed. Only the diagonal elements of sub( A ) are referenced. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 SR (local output) DOUBLE PRECISION array, dimension LOCr(M_A) If INFO = 0, SR(IA:IA+N-1) contains the row scale factors for sub( A ). SR is aligned with the distributed matrix A, and replicated across every process column. SR is tied to the distributed matrix A. .TP 8 SC (local output) DOUBLE PRECISION array, dimension LOCc(N_A) If INFO = 0, SC(JA:JA+N-1) contains the column scale factors .br for A(IA:IA+M-1,JA:JA+N-1). SC is aligned with the distribu- ted matrix A, and replicated down every process row. SC is tied to the distributed matrix A. .TP 8 SCOND (global output) DOUBLE PRECISION If INFO = 0, SCOND contains the ratio of the smallest SR(i) (or SC(j)) to the largest SR(i) (or SC(j)), with IA <= i <= IA+N-1 and JA <= j <= JA+N-1. 
If SCOND >= 0.1 and AMAX is neither too large nor too small, it is not worth scaling by SR (or SC). .TP 8 AMAX (global output) DOUBLE PRECISION Absolute value of largest matrix element. If AMAX is very close to overflow or very close to underflow, the matrix should be scaled. .TP 8 INFO (global output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. > 0: If INFO = K, the K-th diagonal entry of sub( A ) is nonpositive. scalapack-doc-1.5/man/manl/pzporfs.l0100644000056400000620000002365606335610662017142 0ustar pfrauenfstaff.TH PZPORFS l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PZPORFS - improve the computed solution to a system of linear equations when the coefficient matrix is Hermitian positive definite and provides error bounds and backward error estimates for the solutions .SH SYNOPSIS .TP 20 SUBROUTINE PZPORFS( UPLO, N, NRHS, A, IA, JA, DESCA, AF, IAF, JAF, DESCAF, B, IB, JB, DESCB, X, IX, JX, DESCX, FERR, BERR, WORK, LWORK, RWORK, LRWORK, INFO ) .TP 20 .ti +4 CHARACTER UPLO .TP 20 .ti +4 INTEGER IA, IAF, IB, INFO, IX, JA, JAF, JB, JX, LRWORK, LWORK, N, NRHS .TP 20 .ti +4 INTEGER DESCA( * ), DESCAF( * ), DESCB( * ), DESCX( * ) .TP 20 .ti +4 COMPLEX*16 A( * ), AF( * ), B( * ), BERR( * ), FERR( * ), WORK( * ), X( * ) .TP 20 .ti +4 DOUBLE PRECISION RWORK( * ) .SH PURPOSE PZPORFS improves the computed solution to a system of linear equations when the coefficient matrix is Hermitian positive definite and provides error bounds and backward error estimates for the solutions. Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. 
.br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). 
An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br In the following comments, sub( A ), sub( X ) and sub( B ) denote respectively A(IA:IA+N-1,JA:JA+N-1), X(IX:IX+N-1,JX:JX+NRHS-1) and B(IB:IB+N-1,JB:JB+NRHS-1). .br .SH ARGUMENTS .TP 8 UPLO (global input) CHARACTER*1 Specifies whether the upper or lower triangular part of the Hermitian matrix sub( A ) is stored. = 'U': Upper triangular .br = 'L': Lower triangular .TP 8 N (global input) INTEGER The order of the matrix sub( A ). N >= 0. .TP 8 NRHS (global input) INTEGER The number of right hand sides, i.e., the number of columns of the matrices sub( B ) and sub( X ). NRHS >= 0. .TP 8 A (local input) COMPLEX*16 pointer into the local memory to an array of local dimension (LLD_A,LOCc(JA+N-1) ). This array contains the local pieces of the N-by-N Hermitian distributed matrix sub( A ) to be factored. If UPLO = 'U', the leading N-by-N upper triangular part of sub( A ) contains the upper triangular part of the matrix, and its strictly lower triangular part is not referenced. If UPLO = 'L', the leading N-by-N lower triangular part of sub( A ) contains the lower triangular part of the distribu- ted matrix, and its strictly upper triangular part is not referenced. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 AF (local input) COMPLEX*16 pointer into the local memory to an array of local dimension (LLD_AF,LOCc(JA+N-1)). On entry, this array contains the factors L or U from the Cholesky factorization sub( A ) = L*L**H or U**H*U, as computed by PZPOTRF. 
.TP 8 IAF (global input) INTEGER The row index in the global array AF indicating the first row of sub( AF ). .TP 8 JAF (global input) INTEGER The column index in the global array AF indicating the first column of sub( AF ). .TP 8 DESCAF (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix AF. .TP 8 B (local input) COMPLEX*16 pointer into the local memory to an array of local dimension (LLD_B, LOCc(JB+NRHS-1) ). On entry, this array contains the local pieces of the right hand sides sub( B ). .TP 8 IB (global input) INTEGER The row index in the global array B indicating the first row of sub( B ). .TP 8 JB (global input) INTEGER The column index in the global array B indicating the first column of sub( B ). .TP 8 DESCB (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix B. .TP 8 X (local input/local output) COMPLEX*16 pointer into the local memory to an array of local dimension (LLD_X, LOCc(JX+NRHS-1) ). On entry, this array contains the local pieces of the solution vectors sub( X ). On exit, it contains the improved solution vectors. .TP 8 IX (global input) INTEGER The row index in the global array X indicating the first row of sub( X ). .TP 8 JX (global input) INTEGER The column index in the global array X indicating the first column of sub( X ). .TP 8 DESCX (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix X. .TP 8 FERR (local output) DOUBLE PRECISION array of local dimension LOCc(JB+NRHS-1). The estimated forward error bound for each solution vector of sub( X ). If XTRUE is the true solution corresponding to sub( X ), FERR is an estimated upper bound for the magnitude of the largest element in (sub( X ) - XTRUE) divided by the magnitude of the largest element in sub( X ). The estimate is as reliable as the estimate for RCOND, and is almost always a slight overestimate of the true error. 
This array is tied to the distributed matrix X. .TP 8 BERR (local output) DOUBLE PRECISION array of local dimension LOCc(JB+NRHS-1). The componentwise relative backward error of each solution vector (i.e., the smallest re- lative change in any entry of sub( A ) or sub( B ) that makes sub( X ) an exact solution). This array is tied to the distributed matrix X. .TP 8 WORK (local workspace/local output) COMPLEX*16 array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. LWORK is local input and must be at least LWORK >= 2*LOCr( N + MOD( IA-1, MB_A ) ) If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 RWORK (local workspace/local output) DOUBLE PRECISION array, dimension (LRWORK) On exit, RWORK(1) returns the minimal and optimal LRWORK. .TP 8 LRWORK (local or global input) INTEGER The dimension of the array RWORK. LRWORK is local input and must be at least LRWORK >= LOCr( N + MOD( IB-1, MB_B ) ). If LRWORK = -1, then LRWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 INFO (global output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. .SH PARAMETERS ITMAX is the maximum number of steps of iterative refinement. Notes ===== This routine temporarily returns when N <= 1. 
The distributed submatrices sub( A ) and sub( AF ) (respectively sub( X ) and sub( B ) ) should be distributed the same way on the same processes. These conditions ensure that sub( A ) and sub( AF ) (resp. sub( X ) and sub( B ) ) are "perfectly" aligned. Moreover, this routine requires the distributed submatrices sub( A ), sub( AF ), sub( X ), and sub( B ) to be aligned on a block boundary, i.e., if f(x,y) = MOD( x-1, y ): f( IA, DESCA( MB_ ) ) = f( JA, DESCA( NB_ ) ) = 0, f( IAF, DESCAF( MB_ ) ) = f( JAF, DESCAF( NB_ ) ) = 0, f( IB, DESCB( MB_ ) ) = f( JB, DESCB( NB_ ) ) = 0, and f( IX, DESCX( MB_ ) ) = f( JX, DESCX( NB_ ) ) = 0. 
scalapack-doc-1.5/man/manl/pzposv.l
.TH PZPOSV l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PZPOSV - compute the solution to a complex system of linear equations sub( A ) * X = sub( B ), .SH SYNOPSIS .TP 19 SUBROUTINE PZPOSV( UPLO, N, NRHS, A, IA, JA, DESCA, B, IB, JB, DESCB, INFO ) .TP 19 .ti +4 CHARACTER UPLO .TP 19 .ti +4 INTEGER IA, IB, INFO, JA, JB, N, NRHS .TP 19 .ti +4 INTEGER DESCA( * ), DESCB( * ) .TP 19 .ti +4 COMPLEX*16 A( * ), B( * ) .SH PURPOSE PZPOSV computes the solution to a complex system of linear equations where sub( A ) denotes A(IA:IA+N-1,JA:JA+N-1) and is an N-by-N Hermitian positive definite distributed matrix, and X and sub( B ), denoting B(IB:IB+N-1,JB:JB+NRHS-1), are N-by-NRHS distributed matrices. .br The Cholesky decomposition is used to factor sub( A ) as .br sub( A ) = U**H * U, if UPLO = 'U', or sub( A ) = L * L**H, if UPLO = 'L', .br where U is an upper triangular matrix and L is a lower triangular matrix. The factored form of sub( A ) is then used to solve the system of equations. .br Notes .br ===== .br Each global data object is described by an associated description vector. 
This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. 
.br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br This routine requires square block decomposition ( MB_A = NB_A ). .SH ARGUMENTS .TP 8 UPLO (global input) CHARACTER = 'U': Upper triangle of sub( A ) is stored; .br = 'L': Lower triangle of sub( A ) is stored. .TP 8 N (global input) INTEGER The number of rows and columns to be operated on, i.e. the order of the distributed submatrix sub( A ). N >= 0. .TP 8 NRHS (global input) INTEGER The number of right hand sides, i.e., the number of columns of the distributed submatrix sub( B ). NRHS >= 0. .TP 8 A (local input/local output) COMPLEX*16 pointer into the local memory to an array of dimension (LLD_A, LOCc(JA+N-1)). On entry, this array contains the local pieces of the N-by-N Hermitian distributed matrix sub( A ) to be factored. If UPLO = 'U', the leading N-by-N upper triangular part of sub( A ) contains the upper triangular part of the matrix, and its strictly lower triangular part is not referenced. If UPLO = 'L', the leading N-by-N lower triangular part of sub( A ) contains the lower triangular part of the distributed matrix, and its strictly upper triangular part is not referenced. On exit, if INFO = 0, this array contains the local pieces of the factor U or L from the Cholesky factorization sub( A ) = U**H*U or L*L**H. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. 
.TP 8 B (local input/local output) COMPLEX*16 pointer into the local memory to an array of dimension (LLD_B,LOCc(JB+NRHS-1)). On entry, the local pieces of the right hand sides distributed matrix sub( B ). On exit, if INFO = 0, sub( B ) is overwritten with the solution distributed matrix X. .TP 8 IB (global input) INTEGER The row index in the global array B indicating the first row of sub( B ). .TP 8 JB (global input) INTEGER The column index in the global array B indicating the first column of sub( B ). .TP 8 DESCB (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix B. .TP 8 INFO (global output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-th entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. > 0: If INFO = K, the leading minor of order K, .br A(IA:IA+K-1,JA:JA+K-1) is not positive definite, and the factorization could not be completed, and the solution has not been computed. 
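The INFO encoding above (INFO = -(i*100+j) for a bad array entry, INFO = -i for a bad scalar, INFO = K for a non-positive-definite leading minor) is common to the ScaLAPACK drivers on this page. A small illustrative decoder, assuming this convention (the function itself is not part of ScaLAPACK):

```python
def decode_info(info):
    """Turn a ScaLAPACK-style INFO value into a human-readable message."""
    if info == 0:
        return "successful exit"
    if info < 0:
        k = -info
        if k > 100:
            # INFO = -(i*100+j): argument i is an array, entry j was illegal
            i, j = divmod(k, 100)
            return f"argument {i} (array): entry {j} had an illegal value"
        # INFO = -i: argument i is a scalar with an illegal value
        return f"argument {k} (scalar) had an illegal value"
    # INFO = K > 0: factorization failure
    return f"leading minor of order {info} is not positive definite"
```

For instance, INFO = -702 decodes to argument 7, entry 2.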
scalapack-doc-1.5/man/manl/pzposvx.l
.TH PZPOSVX l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PZPOSVX - use the Cholesky factorization A = U**H*U or A = L*L**H to compute the solution to a complex system of linear equations A(IA:IA+N-1,JA:JA+N-1) * X = B(IB:IB+N-1,JB:JB+NRHS-1), .SH SYNOPSIS .TP 20 SUBROUTINE PZPOSVX( FACT, UPLO, N, NRHS, A, IA, JA, DESCA, AF, IAF, JAF, DESCAF, EQUED, SR, SC, B, IB, JB, DESCB, X, IX, JX, DESCX, RCOND, FERR, BERR, WORK, LWORK, RWORK, LRWORK, INFO ) .TP 20 .ti +4 CHARACTER EQUED, FACT, UPLO .TP 20 .ti +4 INTEGER IA, IAF, IB, INFO, IX, JA, JAF, JB, JX, LRWORK, LWORK, N, NRHS .TP 20 .ti +4 DOUBLE PRECISION RCOND .TP 20 .ti +4 INTEGER DESCA( * ), DESCAF( * ), DESCB( * ), DESCX( * ) .TP 20 .ti +4 DOUBLE PRECISION BERR( * ), FERR( * ), SC( * ), SR( * ), RWORK( * ) .TP 20 .ti +4 COMPLEX*16 A( * ), AF( * ), B( * ), WORK( * ), X( * ) .SH PURPOSE PZPOSVX uses the Cholesky factorization A = U**H*U or A = L*L**H to compute the solution to a complex system of linear equations where A(IA:IA+N-1,JA:JA+N-1) is an N-by-N matrix and X and B(IB:IB+N-1,JB:JB+NRHS-1) are N-by-NRHS matrices. .br Error bounds on the solution and a condition estimate are also provided. In the following comments Y denotes Y(IY:IY+M-1,JY:JY+K-1), an M-by-K matrix, where Y can be A, AF, B and X. .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclically distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". 
.br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH DESCRIPTION The following steps are performed: .br 1. 
If FACT = 'E', real scaling factors are computed to equilibrate the system: .br diag(SR) * A * diag(SC) * inv(diag(SC)) * X = diag(SR) * B Whether or not the system will be equilibrated depends on the scaling of the matrix A, but if equilibration is used, A is overwritten by diag(SR)*A*diag(SC) and B by diag(SR)*B. 2. If FACT = 'N' or 'E', the Cholesky decomposition is used to factor the matrix A (after equilibration if FACT = 'E') as A = U**H * U, if UPLO = 'U', or .br A = L * L**H, if UPLO = 'L', .br where U is an upper triangular matrix and L is a lower triangular matrix. .br 3. The factored form of A is used to estimate the condition number of the matrix A. If the reciprocal of the condition number is less than machine precision, steps 4-6 are skipped. .br 4. The system of equations is solved for X using the factored form of A. .br 5. Iterative refinement is applied to improve the computed solution matrix and calculate error bounds and backward error estimates for it. .br 6. If equilibration was used, the matrix X is premultiplied by diag(SC) so that it solves the original system before .br equilibration. .br .SH ARGUMENTS .TP 8 FACT (global input) CHARACTER Specifies whether or not the factored form of the matrix A is supplied on entry, and if not, whether the matrix A should be equilibrated before it is factored. = 'F': On entry, AF contains the factored form of A. If EQUED = 'Y', the matrix A has been equilibrated with scaling factors given by SR and SC. A and AF will not be modified. = 'N': The matrix A will be copied to AF and factored. .br = 'E': The matrix A will be equilibrated if necessary, then copied to AF and factored. .TP 8 UPLO (global input) CHARACTER = 'U': Upper triangle of A is stored; .br = 'L': Lower triangle of A is stored. .TP 8 N (global input) INTEGER The number of rows and columns to be operated on, i.e. the order of the distributed submatrix A(IA:IA+N-1,JA:JA+N-1). N >= 0. 
.TP 8 NRHS (global input) INTEGER The number of right hand sides, i.e., the number of columns of the distributed submatrices B and X. NRHS >= 0. .TP 8 A (local input/local output) COMPLEX*16 pointer into the local memory to an array of local dimension ( LLD_A, LOCc(JA+N-1) ). On entry, the Hermitian matrix A, except if FACT = 'F' and EQUED = 'Y', then A must contain the equilibrated matrix diag(SR)*A*diag(SC). If UPLO = 'U', the leading N-by-N upper triangular part of A contains the upper triangular part of the matrix A, and the strictly lower triangular part of A is not referenced. If UPLO = 'L', the leading N-by-N lower triangular part of A contains the lower triangular part of the matrix A, and the strictly upper triangular part of A is not referenced. A is not modified if FACT = 'F' or 'N', or if FACT = 'E' and EQUED = 'N' on exit. On exit, if FACT = 'E' and EQUED = 'Y', A is overwritten by diag(SR)*A*diag(SC). .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 AF (local input or local output) COMPLEX*16 pointer into the local memory to an array of local dimension ( LLD_AF, LOCc(JA+N-1)). If FACT = 'F', then AF is an input argument and on entry contains the triangular factor U or L from the Cholesky factorization A = U**H*U or A = L*L**H, in the same storage format as A. If EQUED .ne. 'N', then AF is the factored form of the equilibrated matrix diag(SR)*A*diag(SC). If FACT = 'N', then AF is an output argument and on exit returns the triangular factor U or L from the Cholesky factorization A = U**H*U or A = L*L**H of the original matrix A. 
If FACT = 'E', then AF is an output argument and on exit returns the triangular factor U or L from the Cholesky factorization A = U**H*U or A = L*L**H of the equilibrated matrix A (see the description of A for the form of the equilibrated matrix). .TP 8 IAF (global input) INTEGER The row index in the global array AF indicating the first row of sub( AF ). .TP 8 JAF (global input) INTEGER The column index in the global array AF indicating the first column of sub( AF ). .TP 8 DESCAF (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix AF. .TP 8 EQUED (global input/global output) CHARACTER Specifies the form of equilibration that was done. = 'N': No equilibration (always true if FACT = 'N'). .br = 'Y': Equilibration was done, i.e., A has been replaced by diag(SR) * A * diag(SC). EQUED is an input variable if FACT = 'F'; otherwise, it is an output variable. .TP 8 SR (local input/local output) DOUBLE PRECISION array, dimension (LLD_A) The scale factors for A distributed across process rows; not accessed if EQUED = 'N'. SR is an input variable if FACT = 'F'; otherwise, SR is an output variable. If FACT = 'F' and EQUED = 'Y', each element of SR must be positive. .TP 8 SC (local input/local output) DOUBLE PRECISION array, dimension (LOCc(N_A)) The scale factors for A distributed across process columns; not accessed if EQUED = 'N'. SC is an input variable if FACT = 'F'; otherwise, SC is an output variable. If FACT = 'F' and EQUED = 'Y', each element of SC must be positive. .TP 8 B (local input/local output) COMPLEX*16 pointer into the local memory to an array of local dimension ( LLD_B, LOCc(JB+NRHS-1) ). On entry, the N-by-NRHS right-hand side matrix B. On exit, if EQUED = 'N', B is not modified; if EQUED = 'Y', B is overwritten by diag(SR)*B. 
.TP 8 IB (global input) INTEGER The row index in the global array B indicating the first row of sub( B ). .TP 8 JB (global input) INTEGER The column index in the global array B indicating the first column of sub( B ). .TP 8 DESCB (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix B. .TP 8 X (local input/local output) COMPLEX*16 pointer into the local memory to an array of local dimension ( LLD_X, LOCc(JX+NRHS-1) ). If INFO = 0, the N-by-NRHS solution matrix X to the original system of equations. Note that A and B are modified on exit if EQUED .ne. 'N', and the solution to the equilibrated system is inv(diag(SC))*X if EQUED = 'Y'. .TP 8 IX (global input) INTEGER The row index in the global array X indicating the first row of sub( X ). .TP 8 JX (global input) INTEGER The column index in the global array X indicating the first column of sub( X ). .TP 8 DESCX (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix X. .TP 8 RCOND (global output) DOUBLE PRECISION The estimate of the reciprocal condition number of the matrix A after equilibration (if done). If RCOND is less than the machine precision (in particular, if RCOND = 0), the matrix is singular to working precision. This condition is indicated by a return code of INFO > 0, and the solution and error bounds are not computed. .TP 8 FERR (local output) DOUBLE PRECISION array, dimension (LOCc(N_B)) The estimated forward error bounds for each solution vector X(j) (the j-th column of the solution matrix X). If XTRUE is the true solution, FERR(j) bounds the magnitude of the largest entry in (X(j) - XTRUE) divided by the magnitude of the largest entry in X(j). 
The quality of the error bound depends on the quality of the estimate of norm(inv(A)) computed in the code; if the estimate of norm(inv(A)) is accurate, the error bound is guaranteed. .TP 8 BERR (local output) DOUBLE PRECISION array, dimension (LOC(N_B)) The componentwise relative backward error of each solution vector X(j) (i.e., the smallest relative change in any entry of A or B that makes X(j) an exact solution). .TP 8 WORK (local workspace/local output) COMPLEX*16 array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. LWORK is local input and must be at least LWORK = MAX( PZPOCON( LWORK ), PZPORFS( LWORK ) ) + LOCr( N_A ). LWORK = 3*DESCA( LLD_ ) If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 RWORK (local workspace/local output) DOUBLE PRECISION array, dimension (LRWORK) On exit, RWORK(1) returns the minimal and optimal LRWORK. .TP 8 LRWORK (local or global input) INTEGER The dimension of the array RWORK. LRWORK is local input and must be at least LRWORK = 2*LOCc(N_A). If LRWORK = -1, then LRWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 INFO (global output) INTEGER = 0: successful exit .br < 0: if INFO = -i, the i-th argument had an illegal value .br > 0: if INFO = i, and i is .br <= N: if INFO = i, the leading minor of order i of A is not positive definite, so the factorization could not be completed, and the solution and error bounds could not be computed. = N+1: RCOND is less than machine precision. 
The factorization has been completed, but the matrix is singular to working precision, and the solution and error bounds have not been computed. 
scalapack-doc-1.5/man/manl/pzpotf2.l
.TH PZPOTF2 l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PZPOTF2 - compute the Cholesky factorization of a complex Hermitian positive definite distributed matrix sub( A )=A(IA:IA+N-1,JA:JA+N-1) .SH SYNOPSIS .TP 20 SUBROUTINE PZPOTF2( UPLO, N, A, IA, JA, DESCA, INFO ) .TP 20 .ti +4 CHARACTER UPLO .TP 20 .ti +4 INTEGER IA, INFO, JA, N .TP 20 .ti +4 INTEGER DESCA( * ) .TP 20 .ti +4 COMPLEX*16 A( * ) .SH PURPOSE PZPOTF2 computes the Cholesky factorization of a complex Hermitian positive definite distributed matrix sub( A )=A(IA:IA+N-1,JA:JA+N-1). The factorization has the form .br sub( A ) = U' * U , if UPLO = 'U', or .br sub( A ) = L * L', if UPLO = 'L', .br where U is an upper triangular matrix and L is lower triangular. Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclically distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ ) The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distributed over. The context itself is global, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. 
.br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br This routine requires N <= NB_A-MOD(JA-1, NB_A) and square block decomposition ( MB_A = NB_A ). .br .SH ARGUMENTS .TP 8 UPLO (global input) CHARACTER = 'U': Upper triangle of sub( A ) is stored; .br = 'L': Lower triangle of sub( A ) is stored. .TP 8 N (global input) INTEGER The number of rows and columns to be operated on, i.e. the order of the distributed submatrix sub( A ). N >= 0. .TP 8 A (local input/local output) COMPLEX*16 pointer into the local memory to an array of dimension (LLD_A, LOCc(JA+N-1)). On entry, this array contains the local pieces of the N-by-N Hermitian distributed matrix sub( A ) to be factored. 
If UPLO = 'U', the leading N-by-N upper triangular part of sub( A ) contains the upper triangular part of the matrix, and its strictly lower triangular part is not referenced. If UPLO = 'L', the leading N-by-N lower triangular part of sub( A ) contains the lower triangular part of the distributed matrix, and its strictly upper triangular part is not referenced. On exit, if UPLO = 'U', the upper triangular part of the distributed matrix contains the Cholesky factor U, if UPLO = 'L', the lower triangular part of the distributed matrix contains the Cholesky factor L. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 INFO (local output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-th entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. > 0: If INFO = K, the leading minor of order K, .br A(IA:IA+K-1,JA:JA+K-1) is not positive definite, and the factorization could not be completed. 
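PZPOTF2 is the unblocked kernel used inside the blocked PZPOTRF. The arithmetic it performs is the classical Cholesky algorithm; a serial, real-arithmetic Python sketch of that algorithm (illustrative only, not the ScaLAPACK code, which works on complex distributed data) makes the INFO = K failure condition concrete:

```python
import math

def cholesky_lower(a):
    """Unblocked Cholesky A = L * L^T for a real symmetric positive
    definite matrix a (list of lists); returns lower triangular L."""
    n = len(a)
    L = [[0.0] * n for _ in range(n)]
    for j in range(n):
        # Diagonal entry: subtract squares of already-computed row entries.
        s = a[j][j] - sum(L[j][k] ** 2 for k in range(j))
        if s <= 0.0:
            # Mirrors INFO = K > 0: leading minor of order j+1 not positive definite.
            raise ValueError(f"leading minor of order {j + 1} is not positive definite")
        L[j][j] = math.sqrt(s)
        # Column below the diagonal.
        for i in range(j + 1, n):
            L[i][j] = (a[i][j] - sum(L[i][k] * L[j][k] for k in range(j))) / L[j][j]
    return L
```

For A = [[4, 2], [2, 3]] this yields L = [[2, 0], [1, sqrt(2)]], and L*L^T reconstructs A.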
scalapack-doc-1.5/man/manl/pzpotrf.l
.TH PZPOTRF l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PZPOTRF - compute the Cholesky factorization of an N-by-N complex Hermitian positive definite distributed matrix sub( A ) denoting A(IA:IA+N-1, JA:JA+N-1) .SH SYNOPSIS .TP 20 SUBROUTINE PZPOTRF( UPLO, N, A, IA, JA, DESCA, INFO ) .TP 20 .ti +4 CHARACTER UPLO .TP 20 .ti +4 INTEGER IA, INFO, JA, N .TP 20 .ti +4 INTEGER DESCA( * ) .TP 20 .ti +4 COMPLEX*16 A( * ) .SH PURPOSE PZPOTRF computes the Cholesky factorization of an N-by-N complex Hermitian positive definite distributed matrix sub( A ) denoting A(IA:IA+N-1, JA:JA+N-1). The factorization has the form .br sub( A ) = U' * U , if UPLO = 'U', or .br sub( A ) = L * L', if UPLO = 'L', .br where U is an upper triangular matrix and L is lower triangular. Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclically distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ ) The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distributed over. The context itself is global, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. 
.br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br This routine requires square block decomposition ( MB_A = NB_A ). .SH ARGUMENTS .TP 8 UPLO (global input) CHARACTER = 'U': Upper triangle of sub( A ) is stored; .br = 'L': Lower triangle of sub( A ) is stored. .TP 8 N (global input) INTEGER The number of rows and columns to be operated on, i.e. the order of the distributed submatrix sub( A ). N >= 0. .TP 8 A (local input/local output) COMPLEX*16 pointer into the local memory to an array of dimension (LLD_A, LOCc(JA+N-1)). On entry, this array contains the local pieces of the N-by-N Hermitian distributed matrix sub( A ) to be factored. If UPLO = 'U', the leading N-by-N upper triangular part of sub( A ) contains the upper triangular part of the matrix, and its strictly lower triangular part is not referenced. 
If UPLO = 'L', the leading N-by-N lower triangular part of sub( A ) contains the lower triangular part of the distributed matrix, and its strictly upper triangular part is not referenced. On exit, if UPLO = 'U', the upper triangular part of the distributed matrix contains the Cholesky factor U, if UPLO = 'L', the lower triangular part of the distributed matrix contains the Cholesky factor L. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 INFO (global output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-th entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. > 0: If INFO = K, the leading minor of order K, .br A(IA:IA+K-1,JA:JA+K-1) is not positive definite, and the factorization could not be completed. 
scalapack-doc-1.5/man/manl/pzpotri.l
.TH PZPOTRI l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PZPOTRI - compute the inverse of a complex Hermitian positive definite distributed matrix sub( A ) = A(IA:IA+N-1,JA:JA+N-1) using the Cholesky factorization sub( A ) = U**H*U or L*L**H computed by PZPOTRF .SH SYNOPSIS .TP 20 SUBROUTINE PZPOTRI( UPLO, N, A, IA, JA, DESCA, INFO ) .TP 20 .ti +4 CHARACTER UPLO .TP 20 .ti +4 INTEGER IA, INFO, JA, N .TP 20 .ti +4 INTEGER DESCA( * ) .TP 20 .ti +4 COMPLEX*16 A( * ) .SH PURPOSE PZPOTRI computes the inverse of a complex Hermitian positive definite distributed matrix sub( A ) = A(IA:IA+N-1,JA:JA+N-1) using the Cholesky factorization sub( A ) = U**H*U or L*L**H computed by PZPOTRF. 
Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. 
.br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 UPLO (global input) CHARACTER*1 = 'U': Upper triangle of sub( A ) is stored; .br = 'L': Lower triangle of sub( A ) is stored. .TP 8 N (global input) INTEGER The number of rows and columns to be operated on, i.e. the order of the distributed submatrix sub( A ). N >= 0. .TP 8 A (local input/local output) COMPLEX*16 pointer into the local memory to an array of dimension (LLD_A, LOCc(JA+N-1)). On entry, the local pieces of the triangular factor U or L from the Cholesky factorization of the distributed matrix sub( A ) = U**H*U or L*L**H, as computed by PZPOTRF. On exit, the local pieces of the upper or lower triangle of the (Hermitian) inverse of sub( A ), overwriting the input factor U or L. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 INFO (global output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. > 0: If INFO = i, the (i,i) element of the factor U or L is zero, and the inverse could not be computed. 
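The LOCr/LOCc bookkeeping above can be made concrete with a small sketch. The following Python translation of the NUMROC block-cyclic counting rule is an illustration only (a real code would call the ScaLAPACK tool function itself); the function name and pure-Python form are this note's own:

```python
def numroc(n, nb, iproc, isrcproc, nprocs):
    """Number of rows (or columns) of an n-long dimension, distributed in
    blocks of size nb over nprocs processes starting at process isrcproc,
    that end up on process iproc (mirrors ScaLAPACK's NUMROC rule)."""
    mydist = (nprocs + iproc - isrcproc) % nprocs  # distance from the source process
    nblocks = n // nb                              # number of complete blocks
    num = (nblocks // nprocs) * nb                 # whole rounds of blocks everyone gets
    extrablks = nblocks % nprocs                   # leftover complete blocks
    if mydist < extrablks:
        num += nb                                  # one extra complete block
    elif mydist == extrablks:
        num += n % nb                              # the trailing partial block
    return num

# Example: M = 10 rows, MB_A = 2, NPROW = 2, RSRC_A = 0.
# Process row 0 owns blocks 0, 2, 4 (6 rows); process row 1 owns blocks 1, 3 (4 rows),
# and both counts respect the bound ceil(ceil(10/2)/2)*2 = 6 quoted above.
```

Summing the counts over all process rows recovers M, and each count stays below the ceil(ceil(M/MB_A)/NPROW)*MB_A bound given in the notes.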
scalapack-doc-1.5/man/manl/pzpotrs.l0100644000056400000620000001270406335610663017151 0ustar pfrauenfstaff.TH PZPOTRS l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PZPOTRS - solve a system of linear equations sub( A ) * X = sub( B ) A(IA:IA+N-1,JA:JA+N-1)*X = B(IB:IB+N-1,JB:JB+NRHS-1) .SH SYNOPSIS .TP 20 SUBROUTINE PZPOTRS( UPLO, N, NRHS, A, IA, JA, DESCA, B, IB, JB, DESCB, INFO ) .TP 20 .ti +4 CHARACTER UPLO .TP 20 .ti +4 INTEGER IA, IB, INFO, JA, JB, N, NRHS .TP 20 .ti +4 INTEGER DESCA( * ), DESCB( * ) .TP 20 .ti +4 COMPLEX*16 A( * ), B( * ) .SH PURPOSE PZPOTRS solves a system of linear equations where sub( A ) denotes A(IA:IA+N-1,JA:JA+N-1) and is a N-by-N hermitian positive definite distributed matrix using the Cholesky factorization sub( A ) = U**H*U or L*L**H computed by PZPOTRF. sub( B ) denotes the distributed matrix B(IB:IB+N-1,JB:JB+NRHS-1). Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. 
.br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br This routine requires square block decomposition ( MB_A = NB_A ). .SH ARGUMENTS .TP 8 UPLO (global input) CHARACTER = 'U': Upper triangle of sub( A ) is stored; .br = 'L': Lower triangle of sub( A ) is stored. .TP 8 N (global input) INTEGER The number of rows and columns to be operated on, i.e. the order of the distributed submatrix sub( A ). N >= 0. .TP 8 NRHS (global input) INTEGER The number of right hand sides, i.e., the number of columns of the distributed submatrix sub( B ). NRHS >= 0. .TP 8 A (local input) COMPLEX*16 pointer into local memory to an array of dimension (LLD_A, LOCc(JA+N-1)). On entry, this array contains the factors L or U from the Cholesky facto- rization sub( A ) = L*L**H or U**H*U, as computed by PZPOTRF. 
.TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 B (local input/local output) COMPLEX*16 pointer into the local memory to an array of local dimension (LLD_B,LOCc(JB+NRHS-1)). On entry, this array contains the local pieces of the right hand sides sub( B ). On exit, this array contains the local pieces of the solution distributed matrix X. .TP 8 IB (global input) INTEGER The row index in the global array B indicating the first row of sub( B ). .TP 8 JB (global input) INTEGER The column index in the global array B indicating the first column of sub( B ). .TP 8 DESCB (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix B. .TP 8 INFO (global output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. .TH PZPTSV l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PZPTSV - solve a system of linear equations A(1:N, JA:JA+N-1) * X = B(IB:IB+N-1, 1:NRHS) .SH SYNOPSIS .TP 19 SUBROUTINE PZPTSV( UPLO, N, NRHS, D, E, JA, DESCA, B, IB, DESCB, WORK, LWORK, INFO ) .TP 19 .ti +4 CHARACTER UPLO .TP 19 .ti +4 INTEGER IB, INFO, JA, LWORK, N, NRHS .TP 19 .ti +4 INTEGER DESCA( * ), DESCB( * ) .TP 19 .ti +4 COMPLEX*16 B( * ), E( * ), WORK( * ) .TP 19 .ti +4 DOUBLE PRECISION D( * ) .SH PURPOSE PZPTSV solves a system of linear equations where A(1:N, JA:JA+N-1) is an N-by-N complex .br tridiagonal symmetric positive definite distributed .br matrix.
.br Cholesky factorization is used to factor a reordering of .br the matrix into L L'. .br See PZPTTRF and PZPTTRS for details. .br .TH PZPTTRF l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PZPTTRF - compute a Cholesky factorization of an N-by-N complex tridiagonal symmetric positive definite distributed matrix A(1:N, JA:JA+N-1) .SH SYNOPSIS .TP 20 SUBROUTINE PZPTTRF( N, D, E, JA, DESCA, AF, LAF, WORK, LWORK, INFO ) .TP 20 .ti +4 INTEGER INFO, JA, LAF, LWORK, N .TP 20 .ti +4 INTEGER DESCA( * ) .TP 20 .ti +4 COMPLEX*16 AF( * ), E( * ), WORK( * ) .TP 20 .ti +4 DOUBLE PRECISION D( * ) .SH PURPOSE PZPTTRF computes a Cholesky factorization of an N-by-N complex tridiagonal symmetric positive definite distributed matrix A(1:N, JA:JA+N-1). Reordering is used to increase parallelism in the factorization. This reordering results in factors that are DIFFERENT from those produced by equivalent sequential codes. These factors cannot be used directly by users; however, they can be used in .br subsequent calls to PZPTTRS to solve linear systems. .br The factorization has the form .br P A(1:N, JA:JA+N-1) P^T = U' D U or .br P A(1:N, JA:JA+N-1) P^T = L D L', .br where U is a tridiagonal upper triangular matrix and L is tridiagonal lower triangular, and P is a permutation matrix.
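To make the L D L' form concrete, here is a minimal sequential sketch of the *unpermuted, real-symmetric* analogue (P = I, real data in place of the complex Hermitian case; the helper name is this note's own). It is not the parallel reordered algorithm PZPTTRF actually uses, whose factors differ from these:

```python
def ldlt_tridiag(d, e):
    """Sequential L D L' factorization of a symmetric positive definite
    tridiagonal matrix with diagonal d (length n) and off-diagonal e
    (length n-1). Returns the diagonal of D and the subdiagonal of the
    unit lower-bidiagonal factor L."""
    n = len(d)
    dd = [0.0] * n        # diagonal entries of D
    l = [0.0] * (n - 1)   # subdiagonal entries of L (L has unit diagonal)
    dd[0] = d[0]
    for i in range(1, n):
        l[i - 1] = e[i - 1] / dd[i - 1]            # eliminate the subdiagonal entry
        dd[i] = d[i] - l[i - 1] ** 2 * dd[i - 1]   # Schur-complement update of the pivot
    return dd, l
```

For the 3-by-3 matrix with d = (2, 2, 2) and e = (-1, -1), this yields D = diag(2, 3/2, 4/3) and L-subdiagonal (-1/2, -2/3), and multiplying L D L' back recovers the original tridiagonal entries.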
.br .TH PZPTTRS l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PZPTTRS - solve a system of linear equations A(1:N, JA:JA+N-1) * X = B(IB:IB+N-1, 1:NRHS) .SH SYNOPSIS .TP 20 SUBROUTINE PZPTTRS( UPLO, N, NRHS, D, E, JA, DESCA, B, IB, DESCB, AF, LAF, WORK, LWORK, INFO ) .TP 20 .ti +4 CHARACTER UPLO .TP 20 .ti +4 INTEGER IB, INFO, JA, LAF, LWORK, N, NRHS .TP 20 .ti +4 INTEGER DESCA( * ), DESCB( * ) .TP 20 .ti +4 COMPLEX*16 AF( * ), B( * ), E( * ), WORK( * ) .TP 20 .ti +4 DOUBLE PRECISION D( * ) .SH PURPOSE PZPTTRS solves a system of linear equations where A(1:N, JA:JA+N-1) is the matrix used to produce the factors stored in A(1:N,JA:JA+N-1) and AF by PZPTTRF. .br A(1:N, JA:JA+N-1) is an N-by-N complex .br tridiagonal symmetric positive definite distributed .br matrix. .br Depending on the value of UPLO, A stores either U or L in the equation A(1:N, JA:JA+N-1) = U' D U or L D L' as computed by PZPTTRF. Routine PZPTTRF MUST be called first.
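All of the routines documented here share one error-reporting convention: INFO = -(i*100+j) flags an illegal j-th entry inside the i-th (array) argument, while INFO = -i flags an illegal i-th scalar argument. A small sketch of decoding it (the helper name is this note's own):

```python
def decode_info(info):
    """Decode a negative INFO value under the ScaLAPACK man-page
    convention: -(i*100+j) -> entry j of array argument i,
    -i (with i < 100) -> scalar argument i."""
    if info >= 0:
        raise ValueError("only negative INFO encodes an argument error")
    code = -info
    if code >= 100:
        return {"argument": code // 100, "entry": code % 100}
    return {"argument": code, "entry": None}
```

For example, INFO = -502 points at entry 2 of the 5th argument, while INFO = -3 points at the 3rd (scalar) argument.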
.br .TH PZPTTRSV l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PZPTTRSV - solve a tridiagonal triangular system of linear equations A(1:N, JA:JA+N-1) * X = B(IB:IB+N-1, 1:NRHS) .SH SYNOPSIS .TP 21 SUBROUTINE PZPTTRSV( UPLO, TRANS, N, NRHS, D, E, JA, DESCA, B, IB, DESCB, AF, LAF, WORK, LWORK, INFO ) .TP 21 .ti +4 CHARACTER TRANS, UPLO .TP 21 .ti +4 INTEGER IB, INFO, JA, LAF, LWORK, N, NRHS .TP 21 .ti +4 INTEGER DESCA( * ), DESCB( * ) .TP 21 .ti +4 COMPLEX*16 AF( * ), B( * ), E( * ), WORK( * ) .TP 21 .ti +4 DOUBLE PRECISION D( * ) .SH PURPOSE PZPTTRSV solves a tridiagonal triangular system of linear equations .br A(1:N, JA:JA+N-1) * X = B(IB:IB+N-1, 1:NRHS) .br or .br A(1:N, JA:JA+N-1)^H * X = B(IB:IB+N-1, 1:NRHS) where A(1:N, JA:JA+N-1) is a tridiagonal .br triangular matrix factor produced by the .br Cholesky factorization code PZPTTRF .br and is stored in A(1:N,JA:JA+N-1) and AF. .br The matrix stored in A(1:N, JA:JA+N-1) is either .br upper or lower triangular according to UPLO, .br and the choice of solving A(1:N, JA:JA+N-1) or A(1:N, JA:JA+N-1)^H is dictated by the user by the parameter TRANS. .br Routine PZPTTRF MUST be called first.
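The PZPTTRF/PZPTTRS(V) split follows the usual factor-then-solve pattern. As a purely sequential illustration of the solve phase (unpermuted, real-symmetric case; the helper name is this note's own, not ScaLAPACK's parallel algorithm), solving L D L' x = b with a unit lower-bidiagonal L takes one forward sweep, one diagonal scaling, and one backward sweep:

```python
def ldlt_solve(dd, l, b):
    """Solve L D L' x = b, where L is unit lower bidiagonal with
    subdiagonal l (length n-1) and D is diagonal with entries dd
    (length n), e.g. as produced by a sequential tridiagonal
    LDL' factorization."""
    n = len(dd)
    x = list(b)
    for i in range(1, n):            # forward substitution: L y = b
        x[i] -= l[i - 1] * x[i - 1]
    for i in range(n):               # diagonal solve: D z = y
        x[i] /= dd[i]
    for i in range(n - 2, -1, -1):   # back substitution: L' x = z
        x[i] -= l[i] * x[i + 1]
    return x
```

With the factors D = diag(2, 3/2, 4/3), L-subdiagonal (-1/2, -2/3) of the matrix tridiag(-1; 2; -1) and right-hand side b = (1, 0, 1), the sweep recovers x = (1, 1, 1).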
.br scalapack-doc-1.5/man/manl/pzstein.l0100644000056400000620000002600406335610663017122 0ustar pfrauenfstaff.TH PZSTEIN l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PZSTEIN - compute the eigenvectors of a symmetric tridiagonal matrix in parallel, using inverse iteration .SH SYNOPSIS .TP 20 SUBROUTINE PZSTEIN( N, D, E, M, W, IBLOCK, ISPLIT, ORFAC, Z, IZ, JZ, DESCZ, WORK, LWORK, IWORK, LIWORK, IFAIL, ICLUSTR, GAP, INFO ) .TP 20 .ti +4 INTEGER INFO, IZ, JZ, LIWORK, LWORK, M, N .TP 20 .ti +4 DOUBLE PRECISION ORFAC .TP 20 .ti +4 INTEGER DESCZ( * ), IBLOCK( * ), ICLUSTR( * ), IFAIL( * ), ISPLIT( * ), IWORK( * ) .TP 20 .ti +4 DOUBLE PRECISION D( * ), E( * ), GAP( * ), W( * ), WORK( * ) .TP 20 .ti +4 COMPLEX*16 Z( * ) .SH PURPOSE PZSTEIN computes the eigenvectors of a symmetric tridiagonal matrix in parallel, using inverse iteration. The eigenvectors found correspond to user specified eigenvalues. PZSTEIN does not orthogonalize vectors that are on different processes. The extent of orthogonalization is controlled by the input parameter LWORK. Eigenvectors that are to be orthogonalized are computed by the same process. PZSTEIN decides on the allocation of work among the processes and then calls DSTEIN2 (modified LAPACK routine) on each individual process. If insufficient workspace is allocated, the expected orthogonalization may not be done. .br Note : If the eigenvectors obtained are not orthogonal, increase LWORK and run the code again. .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". 
.br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS P = NPROW * NPCOL is the total number of processes .TP 8 N (global input) INTEGER The order of the tridiagonal matrix T. N >= 0. 
.TP 8 D (global input) DOUBLE PRECISION array, dimension (N) The n diagonal elements of the tridiagonal matrix T. .TP 8 E (global input) DOUBLE PRECISION array, dimension (N-1) The (n-1) off-diagonal elements of the tridiagonal matrix T. .TP 8 M (global input) INTEGER The total number of eigenvectors to be found. 0 <= M <= N. .TP 8 W (global input/global output) DOUBLE PRECISION array, dim (M) On input, the first M elements of W contain all the eigenvalues for which eigenvectors are to be computed. The eigenvalues should be grouped by split-off block and ordered from smallest to largest within the block (The output array W from PDSTEBZ with ORDER='b' is expected here). This array should be replicated on all processes. On output, the first M elements contain the input eigenvalues in ascending order. Note : To obtain orthogonal vectors, it is best if eigenvalues are computed to highest accuracy ( this can be done by setting ABSTOL to the underflow threshold = DLAMCH('U') --- ABSTOL is an input parameter to PDSTEBZ ) .TP 8 IBLOCK (global input) INTEGER array, dimension (N) The submatrix indices associated with the corresponding eigenvalues in W -- 1 for eigenvalues belonging to the first submatrix from the top, 2 for those belonging to the second submatrix, etc. (The output array IBLOCK from PDSTEBZ is expected here). .TP 8 ISPLIT (global input) INTEGER array, dimension (N) The splitting points, at which T breaks up into submatrices. The first submatrix consists of rows/columns 1 to ISPLIT(1), the second of rows/columns ISPLIT(1)+1 through ISPLIT(2), etc., and the NSPLIT-th consists of rows/columns ISPLIT(NSPLIT-1)+1 through ISPLIT(NSPLIT)=N (The output array ISPLIT from PDSTEBZ is expected here.) .TP 8 ORFAC (global input) DOUBLE PRECISION ORFAC specifies which eigenvectors should be orthogonalized. Eigenvectors that correspond to eigenvalues which are within ORFAC*||T|| of each other are to be orthogonalized. 
However, if the workspace is insufficient (see LWORK), this tolerance may be decreased until all eigenvectors to be orthogonalized can be stored in one process. No orthogonalization will be done if ORFAC equals zero. A default value of 10^-3 is used if ORFAC is negative. ORFAC should be identical on all processes. .TP 8 Z (local output) COMPLEX*16 array, dimension (DESCZ(DLEN_), N/npcol + NB) Z contains the computed eigenvectors associated with the specified eigenvalues. Any vector which fails to converge is set to its current iterate after MAXITS iterations ( See DSTEIN2 ). On output, Z is distributed across the P processes in block cyclic format. .TP 8 IZ (global input) INTEGER Z's global row index, which points to the beginning of the submatrix which is to be operated on. .TP 8 JZ (global input) INTEGER Z's global column index, which points to the beginning of the submatrix which is to be operated on. .TP 8 DESCZ (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix Z. .TP 8 WORK (local workspace/global output) DOUBLE PRECISION array, dimension ( LWORK ) On output, WORK(1) gives a lower bound on the workspace ( LWORK ) that guarantees the user desired orthogonalization (see ORFAC). Note that this may overestimate the minimum workspace needed. .TP 8 LWORK (local input) integer LWORK controls the extent of orthogonalization which can be done. The number of eigenvectors for which storage is allocated on each process is NVEC = floor(( LWORK- max(5*N,NP00*MQ00) )/N). Eigenvectors corresponding to eigenvalue clusters of size NVEC - ceil(M/P) + 1 are guaranteed to be orthogonal ( the orthogonality is similar to that obtained from ZSTEIN2). Note : LWORK must be no smaller than: max(5*N,NP00*MQ00) + ceil(M/P)*N, and should have the same input value on all processes. It is the minimum value of LWORK input on different processes that is significant. 
If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 IWORK (local workspace/global output) INTEGER array, dimension ( 3*N+P+1 ) On return, IWORK(1) contains the amount of integer workspace required. On return, the IWORK(2) through IWORK(P+2) indicate the eigenvectors computed by each process. Process I computes eigenvectors indexed IWORK(I+2)+1 thru' IWORK(I+3). .TP 8 LIWORK (local input) INTEGER Size of array IWORK. Must be >= 3*N + P + 1 If LIWORK = -1, then LIWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 IFAIL (global output) integer array, dimension (M) On normal exit, all elements of IFAIL are zero. If one or more eigenvectors fail to converge after MAXITS iterations (as in ZSTEIN), then INFO > 0 is returned. If mod(INFO,M+1)>0, then for I=1 to mod(INFO,M+1), the eigenvector corresponding to the eigenvalue W(IFAIL(I)) failed to converge ( W refers to the array of eigenvalues on output ). ICLUSTR (global output) integer array, dimension (2*P) This output array contains indices of eigenvectors corresponding to a cluster of eigenvalues that could not be orthogonalized due to insufficient workspace (see LWORK, ORFAC and INFO). Eigenvectors corresponding to clusters of eigenvalues indexed ICLUSTR(2*I-1) to ICLUSTR(2*I), I = 1 to INFO/(M+1), could not be orthogonalized due to lack of workspace. Hence the eigenvectors corresponding to these clusters may not be orthogonal. ICLUSTR is a zero terminated array --- ( ICLUSTR(2*K).NE.0 .AND. ICLUSTR(2*K+1).EQ.0 ) if and only if K is the number of clusters. 
.TP 8 GAP (global output) DOUBLE PRECISION array, dimension (P) This output array contains the gap between eigenvalues whose eigenvectors could not be orthogonalized. The INFO/M output values in this array correspond to the INFO/(M+1) clusters indicated by the array ICLUSTR. As a result, the dot product between eigenvectors corresponding to the I^th cluster may be as high as ( O(n)*macheps ) / GAP(I). .TP 8 INFO (global output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. < 0 : if INFO = -I, the I-th argument had an illegal value .br > 0 : if mod(INFO,M+1) = I, then I eigenvectors failed to converge in MAXITS iterations. Their indices are stored in the array IFAIL. if INFO/(M+1) = I, then eigenvectors corresponding to I clusters of eigenvalues could not be orthogonalized due to insufficient workspace. The indices of the clusters are stored in the array ICLUSTR. scalapack-doc-1.5/man/manl/pztrcon.l0100644000056400000620000001612306335610663017126 0ustar pfrauenfstaff.TH PZTRCON l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PZTRCON - estimate the reciprocal of the condition number of a triangular distributed matrix A(IA:IA+N-1,JA:JA+N-1), in either the 1-norm or the infinity-norm .SH SYNOPSIS .TP 20 SUBROUTINE PZTRCON( NORM, UPLO, DIAG, N, A, IA, JA, DESCA, RCOND, WORK, LWORK, RWORK, LRWORK, INFO ) .TP 20 .ti +4 CHARACTER DIAG, NORM, UPLO .TP 20 .ti +4 INTEGER IA, JA, INFO, LRWORK, LWORK, N .TP 20 .ti +4 DOUBLE PRECISION RCOND .TP 20 .ti +4 INTEGER DESCA( * ) .TP 20 .ti +4 DOUBLE PRECISION RWORK( * ) .TP 20 .ti +4 COMPLEX*16 A( * ), WORK( * ) .SH PURPOSE PZTRCON estimates the reciprocal of the condition number of a triangular distributed matrix A(IA:IA+N-1,JA:JA+N-1), in either the 1-norm or the infinity-norm. 
The norm of A(IA:IA+N-1,JA:JA+N-1) is computed and an estimate is obtained for norm(inv(A(IA:IA+N-1,JA:JA+N-1))), then the reciprocal of the condition number is computed as .br RCOND = 1 / ( norm( A(IA:IA+N-1,JA:JA+N-1) ) * norm( inv(A(IA:IA+N-1,JA:JA+N-1)) ) ). Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. 
.br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 NORM (global input) CHARACTER Specifies whether the 1-norm condition number or the infinity-norm condition number is required: .br = '1' or 'O': 1-norm; .br = 'I': Infinity-norm. .TP 8 UPLO (global input) CHARACTER .br = 'U': A(IA:IA+N-1,JA:JA+N-1) is upper triangular; .br = 'L': A(IA:IA+N-1,JA:JA+N-1) is lower triangular. .TP 8 DIAG (global input) CHARACTER .br = 'N': A(IA:IA+N-1,JA:JA+N-1) is non-unit triangular; .br = 'U': A(IA:IA+N-1,JA:JA+N-1) is unit triangular. .TP 8 N (global input) INTEGER .br The order of the distributed matrix A(IA:IA+N-1,JA:JA+N-1). N >= 0. .TP 8 A (local input) COMPLEX*16 pointer into the local memory to an array of dimension ( LLD_A, LOCc(JA+N-1) ). This array contains the local pieces of the triangular distributed matrix A(IA:IA+N-1,JA:JA+N-1). If UPLO = 'U', the leading N-by-N upper triangular part of this distributed matrix contains the upper triangular matrix, and its strictly lower triangular part is not referenced. If UPLO = 'L', the leading N-by-N lower triangular part of this distributed matrix contains the lower triangular matrix, and the strictly upper triangular part is not referenced. If DIAG = 'U', the diagonal elements of A(IA:IA+N-1,JA:JA+N-1) are also not referenced and are assumed to be 1.
.TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 RCOND (global output) DOUBLE PRECISION The reciprocal of the condition number of the distributed matrix A(IA:IA+N-1,JA:JA+N-1), computed as .br RCOND = 1 / ( norm( A(IA:IA+N-1,JA:JA+N-1) ) * .br norm( inv(A(IA:IA+N-1,JA:JA+N-1)) ) ). .TP 8 WORK (local workspace/local output) COMPLEX*16 array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. LWORK is local input and must be at least LWORK >= 2*LOCr(N+MOD(IA-1,MB_A)) + MAX( 2, MAX(NB_A*CEIL(P-1,Q),LOCc(N+MOD(JA-1,NB_A)) + NB_A*CEIL(Q-1,P)) ). If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 RWORK (local workspace/local output) DOUBLE PRECISION array, dimension (LRWORK) On exit, RWORK(1) returns the minimal and optimal LRWORK. .TP 8 LRWORK (local or global input) INTEGER The dimension of the array RWORK. LRWORK is local input and must be at least LRWORK >= LOCc(N+MOD(JA-1,NB_A)). If LRWORK = -1, then LRWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. 
.TP 8 INFO (global output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. scalapack-doc-1.5/man/manl/pztrrfs.l0100644000056400000620000002274406335610663017147 0ustar pfrauenfstaff.TH PZTRRFS l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PZTRRFS - provide error bounds and backward error estimates for the solution to a system of linear equations with a triangular coefficient matrix .SH SYNOPSIS .TP 20 SUBROUTINE PZTRRFS( UPLO, TRANS, DIAG, N, NRHS, A, IA, JA, DESCA, B, IB, JB, DESCB, X, IX, JX, DESCX, FERR, BERR, WORK, LWORK, RWORK, LRWORK, INFO ) .TP 20 .ti +4 CHARACTER DIAG, TRANS, UPLO .TP 20 .ti +4 INTEGER INFO, IA, IB, IX, JA, JB, JX, LRWORK, LWORK, N, NRHS .TP 20 .ti +4 INTEGER DESCA( * ), DESCB( * ), DESCX( * ) .TP 20 .ti +4 DOUBLE PRECISION BERR( * ), FERR( * ), RWORK( * ) .TP 20 .ti +4 COMPLEX*16 A( * ), B( * ), WORK( * ), X( * ) .SH PURPOSE PZTRRFS provides error bounds and backward error estimates for the solution to a system of linear equations with a triangular coefficient matrix. The solution matrix X must be computed by PZTRTRS or some other means before entering this routine. PZTRRFS does not do iterative refinement because doing so cannot improve the backward error. Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". 
.br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br In the following comments, sub( A ), sub( X ) and sub( B ) denote respectively A(IA:IA+N-1,JA:JA+N-1), X(IX:IX+N-1,JX:JX+NRHS-1) and B(IB:IB+N-1,JB:JB+NRHS-1). 
.br .SH ARGUMENTS .TP 8 UPLO (global input) CHARACTER*1 = 'U': sub( A ) is upper triangular; .br = 'L': sub( A ) is lower triangular. .TP 8 TRANS (global input) CHARACTER*1 Specifies the form of the system of equations. = 'N': sub( A ) * sub( X ) = sub( B ) (No transpose) .br = 'T': sub( A )**T * sub( X ) = sub( B ) (Transpose) .br = 'C': sub( A )**H * sub( X ) = sub( B ) (Conjugate transpose) .TP 8 DIAG (global input) CHARACTER*1 = 'N': sub( A ) is non-unit triangular; .br = 'U': sub( A ) is unit triangular. .TP 8 N (global input) INTEGER The order of the matrix sub( A ). N >= 0. .TP 8 NRHS (global input) INTEGER The number of right hand sides, i.e., the number of columns of the matrices sub( B ) and sub( X ). NRHS >= 0. .TP 8 A (local input) COMPLEX*16 pointer into the local memory to an array of local dimension (LLD_A,LOCc(JA+N-1) ). This array contains the local pieces of the original triangular distributed matrix sub( A ). If UPLO = 'U', the leading N-by-N upper triangular part of sub( A ) contains the upper triangular part of the matrix, and its strictly lower triangular part is not referenced. If UPLO = 'L', the leading N-by-N lower triangular part of sub( A ) contains the lower triangular part of the distributed matrix, and its strictly upper triangular part is not referenced. If DIAG = 'U', the diagonal elements of sub( A ) are also not referenced and are assumed to be 1. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 B (local input) COMPLEX*16 pointer into the local memory to an array of local dimension (LLD_B, LOCc(JB+NRHS-1) ). On entry, this array contains the local pieces of the right hand sides sub( B ). 
.TP 8 IB (global input) INTEGER The row index in the global array B indicating the first row of sub( B ). .TP 8 JB (global input) INTEGER The column index in the global array B indicating the first column of sub( B ). .TP 8 DESCB (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix B. .TP 8 X (local input) COMPLEX*16 pointer into the local memory to an array of local dimension (LLD_X, LOCc(JX+NRHS-1) ). On entry, this array contains the local pieces of the solution vectors sub( X ). .TP 8 IX (global input) INTEGER The row index in the global array X indicating the first row of sub( X ). .TP 8 JX (global input) INTEGER The column index in the global array X indicating the first column of sub( X ). .TP 8 DESCX (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix X. .TP 8 FERR (local output) DOUBLE PRECISION array of local dimension LOCc(JB+NRHS-1). The estimated forward error bounds for each solution vector of sub( X ). If XTRUE is the true solution, FERR bounds the magnitude of the largest entry in (sub( X ) - XTRUE) divided by the magnitude of the largest entry in sub( X ). The estimate is as reliable as the estimate for RCOND, and is almost always a slight overestimate of the true error. This array is tied to the distributed matrix X. .TP 8 BERR (local output) DOUBLE PRECISION array of local dimension LOCc(JB+NRHS-1). The componentwise relative backward error of each solution vector (i.e., the smallest relative change in any entry of sub( A ) or sub( B ) that makes sub( X ) an exact solution). This array is tied to the distributed matrix X. .TP 8 WORK (local workspace/local output) COMPLEX*16 array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. LWORK is local input and must be at least LWORK >= 2*LOCr( N + MOD( IA-1, MB_A ) ). 
If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 RWORK (local workspace/local output) DOUBLE PRECISION array, dimension (LRWORK) On exit, RWORK(1) returns the minimal and optimal LRWORK. .TP 8 LRWORK (local or global input) INTEGER The dimension of the array RWORK. LRWORK is local input and must be at least LRWORK >= LOCr( N + MOD( IB-1, MB_B ) ). If LRWORK = -1, then LRWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 INFO (global output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. Notes ===== This routine returns immediately when N <= 1. The distributed submatrices sub( X ) and sub( B ) should be distributed the same way on the same processes. These conditions ensure that sub( X ) and sub( B ) are "perfectly" aligned. Moreover, this routine requires the distributed submatrices sub( A ), sub( X ), and sub( B ) to be aligned on a block boundary, i.e., if f(x,y) = MOD( x-1, y ): f( IA, DESCA( MB_ ) ) = f( JA, DESCA( NB_ ) ) = 0, f( IB, DESCB( MB_ ) ) = f( JB, DESCB( NB_ ) ) = 0, and f( IX, DESCX( MB_ ) ) = f( JX, DESCX( NB_ ) ) = 0. 
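The INFO error-code convention above packs an argument index and an array entry into a single negative value. A short decoder (a hypothetical helper for illustration, not part of ScaLAPACK) makes the convention concrete:

```python
def decode_info(info):
    """Decode the ScaLAPACK INFO convention:
    INFO = -i         -> scalar argument i had an illegal value,
    INFO = -(i*100+j) -> entry j of array argument i had an illegal value."""
    if info == 0:
        return "successful exit"
    if info > 0:
        return f"routine-specific failure code {info}"
    code = -info
    if code >= 100:
        # Array argument: high digits index the argument, low two the entry.
        return f"argument {code // 100}, entry {code % 100} had an illegal value"
    return f"argument {code} had an illegal value"

assert decode_info(-6) == "argument 6 had an illegal value"
assert decode_info(-902) == "argument 9, entry 2 had an illegal value"
```

For example, INFO = -902 from PZTRRFS would point at entry 2 of the ninth argument, DESCA.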
scalapack-doc-1.5/man/manl/pztrti2.l .TH PZTRTI2 l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PZTRTI2 - compute the inverse of a complex upper or lower triangular block matrix sub( A ) = A(IA:IA+N-1,JA:JA+N-1) .SH SYNOPSIS .TP 20 SUBROUTINE PZTRTI2( UPLO, DIAG, N, A, IA, JA, DESCA, INFO ) .TP 20 .ti +4 CHARACTER DIAG, UPLO .TP 20 .ti +4 INTEGER IA, INFO, JA, N .TP 20 .ti +4 INTEGER DESCA( * ) .TP 20 .ti +4 COMPLEX*16 A( * ) .SH PURPOSE PZTRTI2 computes the inverse of a complex upper or lower triangular block matrix sub( A ) = A(IA:IA+N-1,JA:JA+N-1). This matrix should be contained in one and only one process memory space (local operation). Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distributed over. The context itself is global, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. 
.br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 UPLO (global input) CHARACTER*1 = 'U': sub( A ) is upper triangular; .br = 'L': sub( A ) is lower triangular. .TP 8 DIAG (global input) CHARACTER*1 .br = 'N': sub( A ) is non-unit triangular .br = 'U': sub( A ) is unit triangular .TP 8 N (global input) INTEGER The number of rows and columns to be operated on, i.e. the order of the distributed submatrix sub( A ). N >= 0. .TP 8 A (local input/local output) COMPLEX*16 pointer into the local memory to an array of dimension (LLD_A,LOCc(JA+N-1)), this array contains the local pieces of the triangular matrix sub( A ). If UPLO = 'U', the leading N-by-N upper triangular part of the matrix sub( A ) contains the upper triangular matrix, and the strictly lower triangular part of sub( A ) is not referenced. 
If UPLO = 'L', the leading N-by-N lower triangular part of the matrix sub( A ) contains the lower triangular matrix, and the strictly upper triangular part of sub( A ) is not referenced. If DIAG = 'U', the diagonal elements of sub( A ) are also not referenced and are assumed to be 1. On exit, the (triangular) inverse of the original matrix, in the same storage format. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 INFO (local output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. scalapack-doc-1.5/man/manl/pztrtri.l .TH PZTRTRI l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PZTRTRI - compute the inverse of an upper or lower triangular distributed matrix sub( A ) = A(IA:IA+N-1,JA:JA+N-1) .SH SYNOPSIS .TP 20 SUBROUTINE PZTRTRI( UPLO, DIAG, N, A, IA, JA, DESCA, INFO ) .TP 20 .ti +4 CHARACTER DIAG, UPLO .TP 20 .ti +4 INTEGER IA, INFO, JA, N .TP 20 .ti +4 INTEGER DESCA( * ) .TP 20 .ti +4 COMPLEX*16 A( * ) .SH PURPOSE PZTRTRI computes the inverse of an upper or lower triangular distributed matrix sub( A ) = A(IA:IA+N-1,JA:JA+N-1). Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. 
In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). 
An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 UPLO (global input) CHARACTER Specifies whether the distributed matrix sub( A ) is upper or lower triangular: .br = 'U': Upper triangular, .br = 'L': Lower triangular. .TP 8 DIAG (global input) CHARACTER Specifies whether or not the distributed matrix sub( A ) is unit triangular: .br = 'N': Non-unit triangular, .br = 'U': Unit triangular. .TP 8 N (global input) INTEGER The number of rows and columns to be operated on, i.e. the order of the distributed submatrix sub( A ). N >= 0. .TP 8 A (local input/local output) COMPLEX*16 pointer into the local memory to an array of dimension (LLD_A,LOCc(JA+N-1)). On entry, this array contains the local pieces of the triangular matrix sub( A ). If UPLO = 'U', the leading N-by-N upper triangular part of the matrix sub( A ) contains the upper triangular matrix to be inverted, and the strictly lower triangular part of sub( A ) is not referenced. If UPLO = 'L', the leading N-by-N lower triangular part of the matrix sub( A ) contains the lower triangular matrix, and the strictly upper triangular part of sub( A ) is not referenced. On exit, the (triangular) inverse of the original matrix. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 INFO (global output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. > 0: If INFO = K, A(IA+K-1,JA+K-1) is exactly zero. 
The triangular matrix sub( A ) is singular and its inverse cannot be computed. scalapack-doc-1.5/man/manl/pztrtrs.l .TH PZTRTRS l "12 May 1997" "LAPACK version 1.5" "LAPACK auxiliary routine (version 1.5)" .SH NAME PZTRTRS - solve a triangular system of the form sub( A ) * X = sub( B ) or sub( A )**T * X = sub( B ) or sub( A )**H * X = sub( B ) .SH SYNOPSIS .TP 20 SUBROUTINE PZTRTRS( UPLO, TRANS, DIAG, N, NRHS, A, IA, JA, DESCA, B, IB, JB, DESCB, INFO ) .TP 20 .ti +4 CHARACTER DIAG, TRANS, UPLO .TP 20 .ti +4 INTEGER IA, IB, INFO, JA, JB, N, NRHS .TP 20 .ti +4 INTEGER DESCA( * ), DESCB( * ) .TP 20 .ti +4 COMPLEX*16 A( * ), B( * ) .SH PURPOSE PZTRTRS solves a triangular system of the form sub( A ) * X = sub( B ), sub( A )**T * X = sub( B ) or sub( A )**H * X = sub( B ), where sub( A ) denotes A(IA:IA+N-1,JA:JA+N-1), a triangular distributed matrix of order N, and B(IB:IB+N-1,JB:JB+NRHS-1) is an N-by-NRHS distributed matrix denoted by sub( B ). A check is made to verify that sub( A ) is nonsingular. .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distributed over. The context itself is global, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. 
.br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 UPLO (global input) CHARACTER = 'U': sub( A ) is upper triangular; .br = 'L': sub( A ) is lower triangular. .TP 8 TRANS (global input) CHARACTER .br Specifies the form of the system of equations: .br = 'N': Solve sub( A ) * X = sub( B ) (No transpose) .br = 'T': Solve sub( A )**T * X = sub( B ) (Transpose) .br = 'C': Solve sub( A )**H * X = sub( B ) (Conjugate transpose) .TP 8 DIAG (global input) CHARACTER .br = 'N': sub( A ) is non-unit triangular; .br = 'U': sub( A ) is unit triangular. 
.TP 8 N (global input) INTEGER The number of rows and columns to be operated on i.e the order of the distributed submatrix sub( A ). N >= 0. .TP 8 NRHS (global input) INTEGER The number of right hand sides, i.e., the number of columns of the distributed matrix sub( B ). NRHS >= 0. .TP 8 A (local input) COMPLEX*16 pointer into the local memory to an array of dimension (LLD_A,LOCc(JA+N-1) ). This array contains the local pieces of the distributed triangular matrix sub( A ). If UPLO = 'U', the leading N-by-N upper triangular part of sub( A ) contains the upper triangular matrix, and the strictly lower triangular part of sub( A ) is not referenced. If UPLO = 'L', the leading N-by-N lower triangular part of sub( A ) contains the lower triangular matrix, and the strictly upper triangular part of sub( A ) is not referenced. If DIAG = 'U', the diagonal elements of sub( A ) are also not referenced and are assumed to be 1. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 B (local input/local output) COMPLEX*16 pointer into the local memory to an array of dimension (LLD_B,LOCc(JB+NRHS-1)). On entry, this array contains the local pieces of the right hand side distributed matrix sub( B ). On exit, if INFO = 0, sub( B ) is overwritten by the solution matrix X. .TP 8 IB (global input) INTEGER The row index in the global array B indicating the first row of sub( B ). .TP 8 JB (global input) INTEGER The column index in the global array B indicating the first column of sub( B ). .TP 8 DESCB (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix B. 
.TP 8 INFO (global output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. > 0: If INFO = i, the i-th diagonal element of sub( A ) is zero, indicating that the submatrix is singular and the solutions X have not been computed. scalapack-doc-1.5/man/manl/pztzrzf.l .TH PZTZRZF l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PZTZRZF - reduce the M-by-N ( M<=N ) complex upper trapezoidal matrix sub( A ) = A(IA:IA+M-1,JA:JA+N-1) to upper triangular form by means of unitary transformations .SH SYNOPSIS .TP 20 SUBROUTINE PZTZRZF( M, N, A, IA, JA, DESCA, TAU, WORK, LWORK, INFO ) .TP 20 .ti +4 INTEGER IA, INFO, JA, LWORK, M, N .TP 20 .ti +4 INTEGER DESCA( * ) .TP 20 .ti +4 COMPLEX*16 A( * ), TAU( * ), WORK( * ) .SH PURPOSE PZTZRZF reduces the M-by-N ( M<=N ) complex upper trapezoidal matrix sub( A ) = A(IA:IA+M-1,JA:JA+N-1) to upper triangular form by means of unitary transformations. The upper trapezoidal matrix sub( A ) is factored as .br sub( A ) = ( R 0 ) * Z, .br where Z is an N-by-N unitary matrix and R is an M-by-M upper triangular matrix. .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. 
.br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 M (global input) INTEGER The number of rows to be operated on, i.e. the number of rows of the distributed submatrix sub( A ). M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on, i.e. the number of columns of the distributed submatrix sub( A ). N >= 0. 
.TP 8 A (local input/local output) COMPLEX*16 pointer into the local memory to an array of dimension (LLD_A, LOCc(JA+N-1)). On entry, the local pieces of the M-by-N distributed matrix sub( A ) which is to be factored. On exit, the leading M-by-M upper triangular part of sub( A ) contains the upper trian- gular matrix R, and elements M+1 to N of the first M rows of sub( A ), with the array TAU, represent the unitary matrix Z as a product of M elementary reflectors. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 TAU (local output) COMPLEX*16, array, dimension LOCr(IA+M-1) This array contains the scalar factors of the elementary reflectors. TAU is tied to the distributed matrix A. .TP 8 WORK (local workspace/local output) COMPLEX*16 array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. LWORK is local input and must be at least LWORK >= MB_A * ( Mp0 + Nq0 + MB_A ), where IROFF = MOD( IA-1, MB_A ), ICOFF = MOD( JA-1, NB_A ), IAROW = INDXG2P( IA, MB_A, MYROW, RSRC_A, NPROW ), IACOL = INDXG2P( JA, NB_A, MYCOL, CSRC_A, NPCOL ), Mp0 = NUMROC( M+IROFF, MB_A, MYROW, IAROW, NPROW ), Nq0 = NUMROC( N+ICOFF, NB_A, MYCOL, IACOL, NPCOL ), and NUMROC, INDXG2P are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. 
.TP 8 INFO (global output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. .SH FURTHER DETAILS The factorization is obtained by Householder's method. The kth transformation matrix, Z( k ), whose conjugate transpose is used to introduce zeros into the (m - k + 1)th row of sub( A ), is given in the form .br Z( k ) = ( I 0 ) .br ( 0 T( k ) ), .br where .br T( k ) = I - tau*u( k )*u( k )', with u( k ) the column vector ( 1, 0, z( k ) )', where tau is a scalar and z( k ) is an ( n - m ) element vector. tau and z( k ) are chosen to annihilate the elements of the kth row of sub( A ). .br The scalar tau is returned in the kth element of TAU and the vector u( k ) in the kth row of sub( A ), such that the elements of z( k ) are in a( k, m + 1 ), ..., a( k, n ). The elements of R are returned in the upper triangular part of sub( A ). .br Z is given by .br Z = Z( 1 ) * Z( 2 ) * ... * Z( m ). .br scalapack-doc-1.5/man/manl/pzung2l.l .TH PZUNG2L l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PZUNG2L - generate an M-by-N complex distributed matrix Q denoting A(IA:IA+M-1,JA:JA+N-1) with orthonormal columns, which is defined as the last N columns of a product of K elementary reflectors of order M, Q = H(k) . . . H(2) H(1) .SH SYNOPSIS .TP 20 SUBROUTINE PZUNG2L( M, N, K, A, IA, JA, DESCA, TAU, WORK, LWORK, INFO ) .TP 20 .ti +4 INTEGER IA, INFO, JA, K, LWORK, M, N .TP 20 .ti +4 INTEGER DESCA( * ) .TP 20 .ti +4 COMPLEX*16 A( * ), TAU( * ), WORK( * ) .SH PURPOSE PZUNG2L generates an M-by-N complex distributed matrix Q denoting A(IA:IA+M-1,JA:JA+N-1) with orthonormal columns, which is defined as the last N columns of a product of K elementary reflectors of order M as returned by PZGEQLF. 
.br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. 
.br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 M (global input) INTEGER The number of rows to be operated on i.e the number of rows of the distributed submatrix Q. M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on i.e the number of columns of the distributed submatrix Q. M >= N >= 0. .TP 8 K (global input) INTEGER The number of elementary reflectors whose product defines the matrix Q. N >= K >= 0. .TP 8 A (local input/local output) COMPLEX*16 pointer into the local memory to an array of dimension (LLD_A,LOCc(JA+N-1)). On entry, the j-th column must contain the vector which defines the elementary reflector H(j), JA+N-K <= j <= JA+N-1, as returned by PZGEQLF in the K columns of its distributed matrix argument A(IA:*,JA+N-K:JA+N-1). On exit, this array contains the local pieces of the M-by-N distributed matrix Q. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 TAU (local input) COMPLEX*16, array, dimension LOCc(JA+N-1) This array contains the scalar factors TAU(j) of the elementary reflectors H(j) as returned by PZGEQLF. TAU is tied to the distributed matrix A. .TP 8 WORK (local workspace/local output) COMPLEX*16 array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. 
LWORK is local input and must be at least LWORK >= MpA0 + MAX( 1, NqA0 ), where IROFFA = MOD( IA-1, MB_A ), ICOFFA = MOD( JA-1, NB_A ), IAROW = INDXG2P( IA, MB_A, MYROW, RSRC_A, NPROW ), IACOL = INDXG2P( JA, NB_A, MYCOL, CSRC_A, NPCOL ), MpA0 = NUMROC( M+IROFFA, MB_A, MYROW, IAROW, NPROW ), NqA0 = NUMROC( N+ICOFFA, NB_A, MYCOL, IACOL, NPCOL ). INDXG2P and NUMROC are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 INFO (local output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-th entry had an illegal value, then INFO = -(i*100+j); if the i-th argument is a scalar and had an illegal value, then INFO = -i.
.TH PZUNG2R l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PZUNG2R - generate an M-by-N complex distributed matrix Q denoting A(IA:IA+M-1,JA:JA+N-1) with orthonormal columns, which is defined as the first N columns of a product of K elementary reflectors of order M Q = H(1) H(2) . . . H(k) .SH SYNOPSIS .TP 20 SUBROUTINE PZUNG2R( M, N, K, A, IA, JA, DESCA, TAU, WORK, LWORK, INFO ) .TP 20 .ti +4 INTEGER IA, INFO, JA, K, LWORK, M, N .TP 20 .ti +4 INTEGER DESCA( * ) .TP 20 .ti +4 COMPLEX*16 A( * ), TAU( * ), WORK( * ) .SH PURPOSE PZUNG2R generates an M-by-N complex distributed matrix Q denoting A(IA:IA+M-1,JA:JA+N-1) with orthonormal columns, which is defined as the first N columns of a product of K elementary reflectors of order M as returned by PZGEQRF. .br Notes .br ===== .br Each global data object is described by an associated description vector.
This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclically distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ ) The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distributed over. The context itself is global, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. .br CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). .br Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row.
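The LWORK formulas in these pages use the tool function INDXG2P to map a global row or column index to the coordinate of the process that owns it. Its logic is simple enough to sketch in a few lines of Python (illustrative, assuming the reference implementation; the Fortran routine also takes a dummy IPROC argument, omitted here):

```python
def indxg2p(indxglob, nb, isrcproc, nprocs):
    """Process coordinate owning global (1-based) index indxglob, for
    block size nb, with the first block on process isrcproc."""
    return (isrcproc + (indxglob - 1) // nb) % nprocs

# With MB_A = 2, RSRC_A = 0 and NPROW = 3: global rows 1-2 live on process
# row 0, rows 3-4 on row 1, rows 5-6 on row 2, rows 7-8 wrap back to row 0.
print([indxg2p(i, 2, 0, 3) for i in range(1, 9)])
```

This is how IAROW and IACOL in the LWORK formulas are obtained from IA and JA.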
.br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 M (global input) INTEGER The number of rows to be operated on i.e the number of rows of the distributed submatrix Q. M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on i.e the number of columns of the distributed submatrix Q. M >= N >= 0. .TP 8 K (global input) INTEGER The number of elementary reflectors whose product defines the matrix Q. N >= K >= 0. .TP 8 A (local input/local output) COMPLEX*16 pointer into the local memory to an array of dimension (LLD_A,LOCc(JA+N-1)). On entry, the j-th column must contain the vector which defines the elementary reflector H(j), JA <= j <= JA+K-1, as returned by PZGEQRF in the K columns of its array argument A(IA:*,JA:JA+K-1). On exit, this array contains the local pieces of the M-by-N distributed matrix Q. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 TAU (local input) COMPLEX*16, array, dimension LOCc(JA+K-1). This array contains the scalar factors TAU(j) of the elementary reflectors H(j) as returned by PZGEQRF. TAU is tied to the distributed matrix A. .TP 8 WORK (local workspace/local output) COMPLEX*16 array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. 
LWORK is local input and must be at least LWORK >= MpA0 + MAX( 1, NqA0 ), where IROFFA = MOD( IA-1, MB_A ), ICOFFA = MOD( JA-1, NB_A ), IAROW = INDXG2P( IA, MB_A, MYROW, RSRC_A, NPROW ), IACOL = INDXG2P( JA, NB_A, MYCOL, CSRC_A, NPCOL ), MpA0 = NUMROC( M+IROFFA, MB_A, MYROW, IAROW, NPROW ), NqA0 = NUMROC( N+ICOFFA, NB_A, MYCOL, IACOL, NPCOL ). INDXG2P and NUMROC are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 INFO (local output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-th entry had an illegal value, then INFO = -(i*100+j); if the i-th argument is a scalar and had an illegal value, then INFO = -i.
.TH PZUNGL2 l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PZUNGL2 - generate an M-by-N complex distributed matrix Q denoting A(IA:IA+M-1,JA:JA+N-1) with orthonormal rows, which is defined as the first M rows of a product of K elementary reflectors of order N Q = H(k)' . . . H(2)' H(1)' .SH SYNOPSIS .TP 20 SUBROUTINE PZUNGL2( M, N, K, A, IA, JA, DESCA, TAU, WORK, LWORK, INFO ) .TP 20 .ti +4 INTEGER IA, INFO, JA, K, LWORK, M, N .TP 20 .ti +4 INTEGER DESCA( * ) .TP 20 .ti +4 COMPLEX*16 A( * ), TAU( * ), WORK( * ) .SH PURPOSE PZUNGL2 generates an M-by-N complex distributed matrix Q denoting A(IA:IA+M-1,JA:JA+N-1) with orthonormal rows, which is defined as the first M rows of a product of K elementary reflectors of order N as returned by PZGELQF. .br Notes .br ===== .br Each global data object is described by an associated description vector.
This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclically distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ ) The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distributed over. The context itself is global, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. .br CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). .br Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row.
.br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 M (global input) INTEGER The number of rows to be operated on i.e the number of rows of the distributed submatrix Q. M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on i.e the number of columns of the distributed submatrix Q. N >= M >= 0. .TP 8 K (global input) INTEGER The number of elementary reflectors whose product defines the matrix Q. M >= K >= 0. .TP 8 A (local input/local output) COMPLEX*16 pointer into the local memory to an array of dimension (LLD_A,LOCc(JA+N-1)). On entry, the i-th row must contain the vector which defines the elementary reflector H(i), IA <= i <= IA+K-1, as returned by PZGELQF in the K rows of its distributed matrix argument A(IA:IA+K-1,JA:*). On exit, this array contains the local pieces of the M-by-N distributed matrix Q. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 TAU (local input) COMPLEX*16, array, dimension LOCr(IA+K-1). This array contains the scalar factors TAU(i) of the elementary reflectors H(i) as returned by PZGELQF. TAU is tied to the distributed matrix A. .TP 8 WORK (local workspace/local output) COMPLEX*16 array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. 
LWORK is local input and must be at least LWORK >= NqA0 + MAX( 1, MpA0 ), where IROFFA = MOD( IA-1, MB_A ), ICOFFA = MOD( JA-1, NB_A ), IAROW = INDXG2P( IA, MB_A, MYROW, RSRC_A, NPROW ), IACOL = INDXG2P( JA, NB_A, MYCOL, CSRC_A, NPCOL ), MpA0 = NUMROC( M+IROFFA, MB_A, MYROW, IAROW, NPROW ), NqA0 = NUMROC( N+ICOFFA, NB_A, MYCOL, IACOL, NPCOL ). INDXG2P and NUMROC are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 INFO (local output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-th entry had an illegal value, then INFO = -(i*100+j); if the i-th argument is a scalar and had an illegal value, then INFO = -i.
.TH PZUNGLQ l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PZUNGLQ - generate an M-by-N complex distributed matrix Q denoting A(IA:IA+M-1,JA:JA+N-1) with orthonormal rows, which is defined as the first M rows of a product of K elementary reflectors of order N Q = H(k)' . . . H(2)' H(1)' .SH SYNOPSIS .TP 20 SUBROUTINE PZUNGLQ( M, N, K, A, IA, JA, DESCA, TAU, WORK, LWORK, INFO ) .TP 20 .ti +4 INTEGER IA, INFO, JA, K, LWORK, M, N .TP 20 .ti +4 INTEGER DESCA( * ) .TP 20 .ti +4 COMPLEX*16 A( * ), TAU( * ), WORK( * ) .SH PURPOSE PZUNGLQ generates an M-by-N complex distributed matrix Q denoting A(IA:IA+M-1,JA:JA+N-1) with orthonormal rows, which is defined as the first M rows of a product of K elementary reflectors of order N as returned by PZGELQF. .br Notes .br ===== .br Each global data object is described by an associated description vector.
This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclically distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ ) The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distributed over. The context itself is global, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. .br CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). .br Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row.
.br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 M (global input) INTEGER The number of rows to be operated on i.e the number of rows of the distributed submatrix Q. M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on i.e the number of columns of the distributed submatrix Q. N >= M >= 0. .TP 8 K (global input) INTEGER The number of elementary reflectors whose product defines the matrix Q. M >= K >= 0. .TP 8 A (local input/local output) COMPLEX*16 pointer into the local memory to an array of dimension (LLD_A,LOCc(JA+N-1)). On entry, the i-th row must contain the vector which defines the elementary reflector H(i), IA <= i <= IA+K-1, as returned by PZGELQF in the K rows of its distributed matrix argument A(IA:IA+K-1,JA:*). On exit, this array contains the local pieces of the M-by-N distributed matrix Q. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 TAU (local input) COMPLEX*16, array, dimension LOCr(IA+K-1). This array contains the scalar factors TAU(i) of the elementary reflectors H(i) as returned by PZGELQF. TAU is tied to the distributed matrix A. .TP 8 WORK (local workspace/local output) COMPLEX*16 array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. 
LWORK is local input and must be at least LWORK >= MB_A * ( MpA0 + NqA0 + MB_A ), where IROFFA = MOD( IA-1, MB_A ), ICOFFA = MOD( JA-1, NB_A ), IAROW = INDXG2P( IA, MB_A, MYROW, RSRC_A, NPROW ), IACOL = INDXG2P( JA, NB_A, MYCOL, CSRC_A, NPCOL ), MpA0 = NUMROC( M+IROFFA, MB_A, MYROW, IAROW, NPROW ), NqA0 = NUMROC( N+ICOFFA, NB_A, MYCOL, IACOL, NPCOL ). INDXG2P and NUMROC are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 INFO (global output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-th entry had an illegal value, then INFO = -(i*100+j); if the i-th argument is a scalar and had an illegal value, then INFO = -i.
.TH PZUNGQL l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PZUNGQL - generate an M-by-N complex distributed matrix Q denoting A(IA:IA+M-1,JA:JA+N-1) with orthonormal columns, which is defined as the last N columns of a product of K elementary reflectors of order M Q = H(k) . . . H(2) H(1) .SH SYNOPSIS .TP 20 SUBROUTINE PZUNGQL( M, N, K, A, IA, JA, DESCA, TAU, WORK, LWORK, INFO ) .TP 20 .ti +4 INTEGER IA, INFO, JA, K, LWORK, M, N .TP 20 .ti +4 INTEGER DESCA( * ) .TP 20 .ti +4 COMPLEX*16 A( * ), TAU( * ), WORK( * ) .SH PURPOSE PZUNGQL generates an M-by-N complex distributed matrix Q denoting A(IA:IA+M-1,JA:JA+N-1) with orthonormal columns, which is defined as the last N columns of a product of K elementary reflectors of order M as returned by PZGEQLF. .br Notes .br ===== .br Each global data object is described by an associated description vector.
This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclically distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ ) The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distributed over. The context itself is global, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. .br CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). .br Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row.
.br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 M (global input) INTEGER The number of rows to be operated on i.e the number of rows of the distributed submatrix Q. M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on i.e the number of columns of the distributed submatrix Q. M >= N >= 0. .TP 8 K (global input) INTEGER The number of elementary reflectors whose product defines the matrix Q. N >= K >= 0. .TP 8 A (local input/local output) COMPLEX*16 pointer into the local memory to an array of dimension (LLD_A,LOCc(JA+N-1)). On entry, the j-th column must contain the vector which defines the elementary reflector H(j), JA+N-K <= j <= JA+N-1, as returned by PZGEQLF in the K columns of its distributed matrix argument A(IA:*,JA+N-K:JA+N-1). On exit, this array contains the local pieces of the M-by-N distributed matrix Q. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 TAU (local input) COMPLEX*16, array, dimension LOCc(JA+N-1) This array contains the scalar factors TAU(j) of the elementary reflectors H(j) as returned by PZGEQLF. TAU is tied to the distributed matrix A. .TP 8 WORK (local workspace/local output) COMPLEX*16 array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. 
LWORK is local input and must be at least LWORK >= NB_A * ( NqA0 + MpA0 + NB_A ), where IROFFA = MOD( IA-1, MB_A ), ICOFFA = MOD( JA-1, NB_A ), IAROW = INDXG2P( IA, MB_A, MYROW, RSRC_A, NPROW ), IACOL = INDXG2P( JA, NB_A, MYCOL, CSRC_A, NPCOL ), MpA0 = NUMROC( M+IROFFA, MB_A, MYROW, IAROW, NPROW ), NqA0 = NUMROC( N+ICOFFA, NB_A, MYCOL, IACOL, NPCOL ). INDXG2P and NUMROC are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 INFO (global output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-th entry had an illegal value, then INFO = -(i*100+j); if the i-th argument is a scalar and had an illegal value, then INFO = -i.
.TH PZUNGQR l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PZUNGQR - generate an M-by-N complex distributed matrix Q denoting A(IA:IA+M-1,JA:JA+N-1) with orthonormal columns, which is defined as the first N columns of a product of K elementary reflectors of order M Q = H(1) H(2) . . . H(k) .SH SYNOPSIS .TP 20 SUBROUTINE PZUNGQR( M, N, K, A, IA, JA, DESCA, TAU, WORK, LWORK, INFO ) .TP 20 .ti +4 INTEGER IA, INFO, JA, K, LWORK, M, N .TP 20 .ti +4 INTEGER DESCA( * ) .TP 20 .ti +4 COMPLEX*16 A( * ), TAU( * ), WORK( * ) .SH PURPOSE PZUNGQR generates an M-by-N complex distributed matrix Q denoting A(IA:IA+M-1,JA:JA+N-1) with orthonormal columns, which is defined as the first N columns of a product of K elementary reflectors of order M as returned by PZGEQRF. .br Notes .br ===== .br Each global data object is described by an associated description vector.
This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclically distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ ) The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distributed over. The context itself is global, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. .br CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). .br Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row.
.br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 M (global input) INTEGER The number of rows to be operated on i.e the number of rows of the distributed submatrix Q. M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on i.e the number of columns of the distributed submatrix Q. M >= N >= 0. .TP 8 K (global input) INTEGER The number of elementary reflectors whose product defines the matrix Q. N >= K >= 0. .TP 8 A (local input/local output) COMPLEX*16 pointer into the local memory to an array of dimension (LLD_A,LOCc(JA+N-1)). On entry, the j-th column must contain the vector which defines the elementary reflector H(j), JA <= j <= JA+K-1, as returned by PZGEQRF in the K columns of its distributed matrix argument A(IA:*,JA:JA+K-1). On exit, this array contains the local pieces of the M-by-N distributed matrix Q. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 TAU (local input) COMPLEX*16, array, dimension LOCc(JA+K-1) This array contains the scalar factors TAU(j) of the elementary reflectors H(j) as returned by PZGEQRF. TAU is tied to the distributed matrix A. .TP 8 WORK (local workspace/local output) COMPLEX*16 array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. 
LWORK is local input and must be at least LWORK >= NB_A * ( NqA0 + MpA0 + NB_A ), where IROFFA = MOD( IA-1, MB_A ), ICOFFA = MOD( JA-1, NB_A ), IAROW = INDXG2P( IA, MB_A, MYROW, RSRC_A, NPROW ), IACOL = INDXG2P( JA, NB_A, MYCOL, CSRC_A, NPCOL ), MpA0 = NUMROC( M+IROFFA, MB_A, MYROW, IAROW, NPROW ), NqA0 = NUMROC( N+ICOFFA, NB_A, MYCOL, IACOL, NPCOL ). INDXG2P and NUMROC are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 INFO (global output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-th entry had an illegal value, then INFO = -(i*100+j); if the i-th argument is a scalar and had an illegal value, then INFO = -i.
.TH PZUNGR2 l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PZUNGR2 - generate an M-by-N complex distributed matrix Q denoting A(IA:IA+M-1,JA:JA+N-1) with orthonormal rows, which is defined as the last M rows of a product of K elementary reflectors of order N Q = H(1)' H(2)' . . . H(k)' .SH SYNOPSIS .TP 20 SUBROUTINE PZUNGR2( M, N, K, A, IA, JA, DESCA, TAU, WORK, LWORK, INFO ) .TP 20 .ti +4 INTEGER IA, INFO, JA, K, LWORK, M, N .TP 20 .ti +4 INTEGER DESCA( * ) .TP 20 .ti +4 COMPLEX*16 A( * ), TAU( * ), WORK( * ) .SH PURPOSE PZUNGR2 generates an M-by-N complex distributed matrix Q denoting A(IA:IA+M-1,JA:JA+N-1) with orthonormal rows, which is defined as the last M rows of a product of K elementary reflectors of order N as returned by PZGERQF. .br Notes .br ===== .br Each global data object is described by an associated description vector.
This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclically distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ ) The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distributed over. The context itself is global, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. .br CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). .br Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row.
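All of these routines report argument errors through the INFO convention described under ARGUMENTS: INFO = -(i*100+j) when entry j of array argument i is illegal, and INFO = -i when scalar argument i is illegal. A hypothetical Python helper (not part of ScaLAPACK) that decodes such a value:

```python
def decode_info(info):
    """Decode a negative ScaLAPACK INFO value.

    Returns (i, j) for INFO = -(i*100+j), or (i, None) for INFO = -i.
    """
    if info >= 0:
        raise ValueError("INFO >= 0 does not encode an argument error")
    code = -info
    if code > 100:
        return code // 100, code % 100  # array argument i, entry j
    return code, None                   # scalar argument i

# e.g. for PZUNGR2, INFO = -702 means entry 2 of the 7th argument (DESCA)
# was illegal, while INFO = -3 points at the 3rd scalar argument (K).
print(decode_info(-702))
print(decode_info(-3))
```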
.br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 M (global input) INTEGER The number of rows to be operated on i.e the number of rows of the distributed submatrix Q. M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on i.e the number of columns of the distributed submatrix Q. N >= M >= 0. .TP 8 K (global input) INTEGER The number of elementary reflectors whose product defines the matrix Q. M >= K >= 0. .TP 8 A (local input/local output) COMPLEX*16 pointer into the local memory to an array of dimension (LLD_A,LOCc(JA+N-1)). On entry, the i-th row must contain the vector which defines the elementary reflector H(i), IA+M-K <= i <= IA+M-1, as returned by PZGERQF in the K rows of its distributed matrix argument A(IA+M-K:IA+M-1,JA:*). On exit, this array contains the local pieces of the M-by-N distributed matrix Q. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 TAU (local input) COMPLEX*16, array, dimension LOCr(IA+M-1) This array contains the scalar factors TAU(i) of the elementary reflectors H(i) as returned by PZGERQF. TAU is tied to the distributed matrix A. .TP 8 WORK (local workspace/local output) COMPLEX*16 array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. 
LWORK is local input and must be at least LWORK >= NqA0 + MAX( 1, MpA0 ), where IROFFA = MOD( IA-1, MB_A ), ICOFFA = MOD( JA-1, NB_A ), IAROW = INDXG2P( IA, MB_A, MYROW, RSRC_A, NPROW ), IACOL = INDXG2P( JA, NB_A, MYCOL, CSRC_A, NPCOL ), MpA0 = NUMROC( M+IROFFA, MB_A, MYROW, IAROW, NPROW ), NqA0 = NUMROC( N+ICOFFA, NB_A, MYCOL, IACOL, NPCOL ), INDXG2P and NUMROC are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 INFO (local output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-th entry had an illegal value, then INFO = -(i*100+j); if the i-th argument is a scalar and had an illegal value, then INFO = -i.
scalapack-doc-1.5/man/manl/pzungrq.l
.TH PZUNGRQ l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PZUNGRQ - generate an M-by-N complex distributed matrix Q denoting A(IA:IA+M-1,JA:JA+N-1) with orthonormal rows, which is defined as the last M rows of a product of K elementary reflectors of order N, Q = H(1)' H(2)' . . . H(k)' .SH SYNOPSIS .TP 20 SUBROUTINE PZUNGRQ( M, N, K, A, IA, JA, DESCA, TAU, WORK, LWORK, INFO ) .TP 20 .ti +4 INTEGER IA, INFO, JA, K, LWORK, M, N .TP 20 .ti +4 INTEGER DESCA( * ) .TP 20 .ti +4 COMPLEX*16 A( * ), TAU( * ), WORK( * ) .SH PURPOSE PZUNGRQ generates an M-by-N complex distributed matrix Q denoting A(IA:IA+M-1,JA:JA+N-1) with orthonormal rows, which is defined as the last M rows of a product of K elementary reflectors of order N as returned by PZGERQF. .br Notes .br ===== .br Each global data object is described by an associated description vector.
This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. 
.br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 M (global input) INTEGER The number of rows to be operated on i.e the number of rows of the distributed submatrix Q. M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on i.e the number of columns of the distributed submatrix Q. N >= M >= 0. .TP 8 K (global input) INTEGER The number of elementary reflectors whose product defines the matrix Q. M >= K >= 0. .TP 8 A (local input/local output) COMPLEX*16 pointer into the local memory to an array of dimension (LLD_A,LOCc(JA+N-1)). On entry, the i-th row must contain the vector which defines the elementary reflector H(i), IA+M-K <= i <= IA+M-1, as returned by PZGERQF in the K rows of its distributed matrix argument A(IA+M-K:IA+M-1,JA:*). On exit, this array contains the local pieces of the M-by-N distributed matrix Q. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 TAU (local input) COMPLEX*16, array, dimension LOCr(IA+M-1) This array contains the scalar factors TAU(i) of the elementary reflectors H(i) as returned by PZGERQF. TAU is tied to the distributed matrix A. .TP 8 WORK (local workspace/local output) COMPLEX*16 array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. 
LWORK is local input and must be at least LWORK >= MB_A * ( MpA0 + NqA0 + MB_A ), where IROFFA = MOD( IA-1, MB_A ), ICOFFA = MOD( JA-1, NB_A ), IAROW = INDXG2P( IA, MB_A, MYROW, RSRC_A, NPROW ), IACOL = INDXG2P( JA, NB_A, MYCOL, CSRC_A, NPCOL ), MpA0 = NUMROC( M+IROFFA, MB_A, MYROW, IAROW, NPROW ), NqA0 = NUMROC( N+ICOFFA, NB_A, MYCOL, IACOL, NPCOL ), INDXG2P and NUMROC are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 INFO (global output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-th entry had an illegal value, then INFO = -(i*100+j); if the i-th argument is a scalar and had an illegal value, then INFO = -i.
scalapack-doc-1.5/man/manl/pzunm2l.l
.TH PZUNM2L l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PZUNM2L - overwrite the general complex M-by-N distributed matrix sub( C ) = C(IC:IC+M-1,JC:JC+N-1) with Q*sub( C ), sub( C )*Q, Q**H*sub( C ) or sub( C )*Q**H .SH SYNOPSIS .TP 20 SUBROUTINE PZUNM2L( SIDE, TRANS, M, N, K, A, IA, JA, DESCA, TAU, C, IC, JC, DESCC, WORK, LWORK, INFO ) .TP 20 .ti +4 CHARACTER SIDE, TRANS .TP 20 .ti +4 INTEGER IA, IC, INFO, JA, JC, K, LWORK, M, N .TP 20 .ti +4 INTEGER DESCA( * ), DESCC( * ) .TP 20 .ti +4 COMPLEX*16 A( * ), C( * ), TAU( * ), WORK( * ) .SH PURPOSE PZUNM2L overwrites the general complex M-by-N distributed matrix sub( C ) = C(IC:IC+M-1,JC:JC+N-1) with .br SIDE = 'L' SIDE = 'R' .br TRANS = 'N': Q * sub( C ) sub( C ) * Q .br TRANS = 'C': Q**H * sub( C ) sub( C ) * Q**H .br where Q is a complex unitary distributed matrix defined as the product of K elementary reflectors .br Q = H(k) . . . H(2) H(1) .br as returned by PZGEQLF.
Q is of order M if SIDE = 'L' and of order N if SIDE = 'R'. .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. 
.br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 SIDE (global input) CHARACTER = 'L': apply Q or Q**H from the Left; .br = 'R': apply Q or Q**H from the Right. .TP 8 TRANS (global input) CHARACTER .br = 'N': No transpose, apply Q; .br = 'C': Conjugate transpose, apply Q**H. .TP 8 M (global input) INTEGER The number of rows to be operated on i.e the number of rows of the distributed submatrix sub( C ). M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on i.e the number of columns of the distributed submatrix sub( C ). N >= 0. .TP 8 K (global input) INTEGER The number of elementary reflectors whose product defines the matrix Q. If SIDE = 'L', M >= K >= 0, if SIDE = 'R', N >= K >= 0. .TP 8 A (local input) COMPLEX*16 pointer into the local memory to an array of dimension (LLD_A,LOCc(JA+K-1)). On entry, the j-th column must contain the vector which defines the elemen- tary reflector H(j), JA <= j <= JA+K-1, as returned by PZGEQLF in the K columns of its distributed matrix argument A(IA:*,JA:JA+K-1). A(IA:*,JA:JA+K-1) is modified by the routine but restored on exit. If SIDE = 'L', LLD_A >= MAX( 1, LOCr(IA+M-1) ), if SIDE = 'R', LLD_A >= MAX( 1, LOCr(IA+N-1) ). .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. 
.TP 8 TAU (local input) COMPLEX*16, array, dimension LOCc(JA+N-1) This array contains the scalar factors TAU(j) of the elementary reflectors H(j) as returned by PZGEQLF. TAU is tied to the distributed matrix A. .TP 8 C (local input/local output) COMPLEX*16 pointer into the local memory to an array of dimension (LLD_C,LOCc(JC+N-1)). On entry, the local pieces of the distributed matrix sub(C). On exit, sub( C ) is overwritten by Q*sub( C ) or Q'*sub( C ) or sub( C )*Q' or sub( C )*Q. .TP 8 IC (global input) INTEGER The row index in the global array C indicating the first row of sub( C ). .TP 8 JC (global input) INTEGER The column index in the global array C indicating the first column of sub( C ). .TP 8 DESCC (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix C. .TP 8 WORK (local workspace/local output) COMPLEX*16 array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. LWORK is local input and must be at least: If SIDE = 'L', LWORK >= MpC0 + MAX( 1, NqC0 ); if SIDE = 'R', LWORK >= NqC0 + MAX( MAX( 1, MpC0 ), NUMROC( NUMROC( N+ICOFFC,NB_A,0,0,NPCOL ),NB_A,0,0,LCMQ ) ); where LCMQ = LCM / NPCOL with LCM = ILCM( NPROW, NPCOL ), IROFFC = MOD( IC-1, MB_C ), ICOFFC = MOD( JC-1, NB_C ), ICROW = INDXG2P( IC, MB_C, MYROW, RSRC_C, NPROW ), ICCOL = INDXG2P( JC, NB_C, MYCOL, CSRC_C, NPCOL ), MpC0 = NUMROC( M+IROFFC, MB_C, MYROW, ICROW, NPROW ), NqC0 = NUMROC( N+ICOFFC, NB_C, MYCOL, ICCOL, NPCOL ), ILCM, INDXG2P and NUMROC are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA.
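As a worked illustration of the two LWORK bounds above, the sketch below evaluates them in Python. Here numroc and indxg2p are simplified ports of the ScaLAPACK tool functions (INDXG2P's unused IPROC argument is dropped), and pzunm2l_lwork is a hypothetical helper name, not part of ScaLAPACK.

```python
from math import gcd

def numroc(n, nb, iproc, isrc, nprocs):
    # Simplified port of NUMROC: local row/column count for one process.
    mydist = (nprocs + iproc - isrc) % nprocs
    nblocks = n // nb
    count = (nblocks // nprocs) * nb
    extra = nblocks % nprocs
    if mydist < extra:
        count += nb
    elif mydist == extra:
        count += n % nb
    return count

def indxg2p(indxglob, nb, isrc, nprocs):
    # Simplified port of INDXG2P: process coordinate owning a global index.
    return (isrc + (indxglob - 1) // nb) % nprocs

def pzunm2l_lwork(side, m, n, ic, jc, mb_c, nb_c, nb_a,
                  rsrc_c, csrc_c, myrow, mycol, nprow, npcol):
    # Lower bound on LWORK as stated in the text above (hypothetical helper).
    iroffc = (ic - 1) % mb_c
    icoffc = (jc - 1) % nb_c
    icrow = indxg2p(ic, mb_c, rsrc_c, nprow)
    iccol = indxg2p(jc, nb_c, csrc_c, npcol)
    mpc0 = numroc(m + iroffc, mb_c, myrow, icrow, nprow)
    nqc0 = numroc(n + icoffc, nb_c, mycol, iccol, npcol)
    if side == 'L':
        return mpc0 + max(1, nqc0)
    lcmq = (nprow * npcol // gcd(nprow, npcol)) // npcol   # ILCM(...)/NPCOL
    inner = numroc(numroc(n + icoffc, nb_a, 0, 0, npcol), nb_a, 0, 0, lcmq)
    return nqc0 + max(max(1, mpc0), inner)

# 8-by-8 sub( C ) aligned at (1,1) on a 2x2 grid with 2x2 blocks,
# as seen from process (0,0): MpC0 = NqC0 = 4.
print(pzunm2l_lwork('L', 8, 8, 1, 1, 2, 2, 2, 0, 0, 0, 0, 2, 2))   # -> 8
```

In a real program one would instead call PZUNM2L with LWORK = -1 and read the optimal size back from WORK(1), as described above.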
.TP 8 INFO (local output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-th entry had an illegal value, then INFO = -(i*100+j); if the i-th argument is a scalar and had an illegal value, then INFO = -i. Alignment requirements ====================== The distributed submatrices A(IA:*, JA:*) and C(IC:IC+M-1,JC:JC+N-1) must satisfy the following alignment properties: If SIDE = 'L', ( MB_A.EQ.MB_C .AND. IROFFA.EQ.IROFFC .AND. IAROW.EQ.ICROW ) If SIDE = 'R', ( MB_A.EQ.NB_C .AND. IROFFA.EQ.ICOFFC )
scalapack-doc-1.5/man/manl/pzunm2r.l
.TH PZUNM2R l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PZUNM2R - overwrite the general complex M-by-N distributed matrix sub( C ) = C(IC:IC+M-1,JC:JC+N-1) with Q*sub( C ), sub( C )*Q, Q**H*sub( C ) or sub( C )*Q**H .SH SYNOPSIS .TP 20 SUBROUTINE PZUNM2R( SIDE, TRANS, M, N, K, A, IA, JA, DESCA, TAU, C, IC, JC, DESCC, WORK, LWORK, INFO ) .TP 20 .ti +4 CHARACTER SIDE, TRANS .TP 20 .ti +4 INTEGER IA, IC, INFO, JA, JC, K, LWORK, M, N .TP 20 .ti +4 INTEGER DESCA( * ), DESCC( * ) .TP 20 .ti +4 COMPLEX*16 A( * ), C( * ), TAU( * ), WORK( * ) .SH PURPOSE PZUNM2R overwrites the general complex M-by-N distributed matrix sub( C ) = C(IC:IC+M-1,JC:JC+N-1) with .br SIDE = 'L' SIDE = 'R' .br TRANS = 'N': Q * sub( C ) sub( C ) * Q .br TRANS = 'C': Q**H * sub( C ) sub( C ) * Q**H .br where Q is a complex unitary distributed matrix defined as the product of k elementary reflectors .br Q = H(1) H(2) . . . H(k) .br as returned by PZGEQRF. Q is of order M if SIDE = 'L' and of order N if SIDE = 'R'. .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclically distributed array. Such a global array has an associated description vector DESCA.
In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). 
An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 SIDE (global input) CHARACTER = 'L': apply Q or Q**H from the Left; .br = 'R': apply Q or Q**H from the Right. .TP 8 TRANS (global input) CHARACTER .br = 'N': No transpose, apply Q; .br = 'C': Conjugate transpose, apply Q**H. .TP 8 M (global input) INTEGER The number of rows to be operated on i.e the number of rows of the distributed submatrix sub( C ). M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on i.e the number of columns of the distributed submatrix sub( C ). N >= 0. .TP 8 K (global input) INTEGER The number of elementary reflectors whose product defines the matrix Q. If SIDE = 'L', M >= K >= 0, if SIDE = 'R', N >= K >= 0. .TP 8 A (local input) COMPLEX*16 pointer into the local memory to an array of dimension (LLD_A,LOCc(JA+K-1)). On entry, the j-th column must contain the vector which defines the elemen- tary reflector H(j), JA <= j <= JA+K-1, as returned by PZGEQRF in the K columns of its distributed matrix argument A(IA:*,JA:JA+K-1). A(IA:*,JA:JA+K-1) is modified by the routine but restored on exit. If SIDE = 'L', LLD_A >= MAX( 1, LOCr(IA+M-1) ); if SIDE = 'R', LLD_A >= MAX( 1, LOCr(IA+N-1) ). .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 TAU (local input) COMPLEX*16, array, dimension LOCc(JA+K-1). This array contains the scalar factors TAU(j) of the elementary reflectors H(j) as returned by PZGEQRF. TAU is tied to the distributed matrix A. 
.TP 8 C (local input/local output) COMPLEX*16 pointer into the local memory to an array of dimension (LLD_C,LOCc(JC+N-1)). On entry, the local pieces of the distributed matrix sub(C). On exit, sub( C ) is overwritten by Q*sub( C ) or Q'*sub( C ) or sub( C )*Q' or sub( C )*Q. .TP 8 IC (global input) INTEGER The row index in the global array C indicating the first row of sub( C ). .TP 8 JC (global input) INTEGER The column index in the global array C indicating the first column of sub( C ). .TP 8 DESCC (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix C. .TP 8 WORK (local workspace/local output) COMPLEX*16 array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. LWORK is local input and must be at least: If SIDE = 'L', LWORK >= MpC0 + MAX( 1, NqC0 ); if SIDE = 'R', LWORK >= NqC0 + MAX( MAX( 1, MpC0 ), NUMROC( NUMROC( N+ICOFFC,NB_A,0,0,NPCOL ),NB_A,0,0,LCMQ ) ); where LCMQ = LCM / NPCOL with LCM = ILCM( NPROW, NPCOL ), IROFFC = MOD( IC-1, MB_C ), ICOFFC = MOD( JC-1, NB_C ), ICROW = INDXG2P( IC, MB_C, MYROW, RSRC_C, NPROW ), ICCOL = INDXG2P( JC, NB_C, MYCOL, CSRC_C, NPCOL ), MpC0 = NUMROC( M+IROFFC, MB_C, MYROW, ICROW, NPROW ), NqC0 = NUMROC( N+ICOFFC, NB_C, MYCOL, ICCOL, NPCOL ), ILCM, INDXG2P and NUMROC are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA.
.TP 8 INFO (local output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-th entry had an illegal value, then INFO = -(i*100+j); if the i-th argument is a scalar and had an illegal value, then INFO = -i. Alignment requirements ====================== The distributed submatrices A(IA:*, JA:*) and C(IC:IC+M-1,JC:JC+N-1) must satisfy the following alignment properties: If SIDE = 'L', ( MB_A.EQ.MB_C .AND. IROFFA.EQ.IROFFC .AND. IAROW.EQ.ICROW ) If SIDE = 'R', ( MB_A.EQ.NB_C .AND. IROFFA.EQ.ICOFFC )
scalapack-doc-1.5/man/manl/pzunmbr.l
.TH PZUNMBR l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PZUNMBR - overwrite the general complex distributed M-by-N matrix sub( C ) = C(IC:IC+M-1,JC:JC+N-1) with Q, Q**H, P or P**H applied from the left or the right .SH SYNOPSIS .TP 20 SUBROUTINE PZUNMBR( VECT, SIDE, TRANS, M, N, K, A, IA, JA, DESCA, TAU, C, IC, JC, DESCC, WORK, LWORK, INFO ) .TP 20 .ti +4 CHARACTER SIDE, TRANS, VECT .TP 20 .ti +4 INTEGER IA, IC, INFO, JA, JC, K, LWORK, M, N .TP 20 .ti +4 INTEGER DESCA( * ), DESCC( * ) .TP 20 .ti +4 COMPLEX*16 A( * ), C( * ), TAU( * ), WORK( * ) .SH PURPOSE If VECT = 'Q', PZUNMBR overwrites the general complex distributed M-by-N matrix sub( C ) = C(IC:IC+M-1,JC:JC+N-1) with .br SIDE = 'L' SIDE = 'R' .br TRANS = 'N': Q * sub( C ) sub( C ) * Q .br TRANS = 'C': Q**H * sub( C ) sub( C ) * Q**H .br If VECT = 'P', PZUNMBR overwrites sub( C ) with .br SIDE = 'L' SIDE = 'R' .br TRANS = 'N': P * sub( C ) sub( C ) * P .br TRANS = 'C': P**H * sub( C ) sub( C ) * P**H .br Here Q and P**H are the unitary distributed matrices determined by PZGEBRD when reducing a complex distributed matrix A(IA:*,JA:*) to bidiagonal form: A(IA:*,JA:*) = Q * B * P**H. Q and P**H are defined as products of elementary reflectors H(i) and G(i) respectively. Let nq = m if SIDE = 'L' and nq = n if SIDE = 'R'. Thus nq is the order of the unitary matrix Q or P**H that is applied.
.br If VECT = 'Q', A(IA:*,JA:*) is assumed to have been an NQ-by-K matrix: .br if nq >= k, Q = H(1) H(2) . . . H(k); .br if nq < k, Q = H(1) H(2) . . . H(nq-1). .br If VECT = 'P', A(IA:*,JA:*) is assumed to have been a K-by-NQ matrix: .br if k < nq, P = G(1) G(2) . . . G(k); .br if k >= nq, P = G(1) G(2) . . . G(nq-1). .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. 
.br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 VECT (global input) CHARACTER = 'Q': apply Q or Q**H; .br = 'P': apply P or P**H. .TP 8 SIDE (global input) CHARACTER .br = 'L': apply Q, Q**H, P or P**H from the Left; .br = 'R': apply Q, Q**H, P or P**H from the Right. .TP 8 TRANS (global input) CHARACTER .br = 'N': No transpose, apply Q or P; .br = 'C': Conjugate transpose, apply Q**H or P**H. .TP 8 M (global input) INTEGER The number of rows to be operated on i.e the number of rows of the distributed submatrix sub( C ). M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on i.e the number of columns of the distributed submatrix sub( C ). N >= 0. .TP 8 K (global input) INTEGER If VECT = 'Q', the number of columns in the original distributed matrix reduced by PZGEBRD. If VECT = 'P', the number of rows in the original distributed matrix reduced by PZGEBRD. K >= 0. .TP 8 A (local input) COMPLEX*16 pointer into the local memory to an array of dimension (LLD_A,LOCc(JA+MIN(NQ,K)-1)) if VECT='Q', and (LLD_A,LOCc(JA+NQ-1)) if VECT = 'P'. NQ = M if SIDE = 'L', and NQ = N otherwise. The vectors which define the elementary reflectors H(i) and G(i), whose products determine the matrices Q and P, as returned by PZGEBRD. If VECT = 'Q', LLD_A >= max(1,LOCr(IA+NQ-1)); if VECT = 'P', LLD_A >= max(1,LOCr(IA+MIN(NQ,K)-1)). 
.TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 TAU (local input) COMPLEX*16 array, dimension LOCc(JA+MIN(NQ,K)-1) if VECT = 'Q', LOCr(IA+MIN(NQ,K)-1) if VECT = 'P', TAU(i) must contain the scalar factor of the elementary reflector H(i) or G(i), which determines Q or P, as returned by PDGEBRD in its array argument TAUQ or TAUP. TAU is tied to the distributed matrix A. .TP 8 C (local input/local output) COMPLEX*16 pointer into the local memory to an array of dimension (LLD_C,LOCc(JC+N-1)). On entry, the local pieces of the distributed matrix sub(C). On exit, if VECT='Q', sub( C ) is overwritten by Q*sub( C ) or Q'*sub( C ) or sub( C )*Q' or sub( C )*Q; if VECT='P, sub( C ) is overwritten by P*sub( C ) or P'*sub( C ) or sub( C )*P or sub( C )*P'. .TP 8 IC (global input) INTEGER The row index in the global array C indicating the first row of sub( C ). .TP 8 JC (global input) INTEGER The column index in the global array C indicating the first column of sub( C ). .TP 8 DESCC (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix C. .TP 8 WORK (local workspace/local output) COMPLEX*16 array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. 
LWORK is local input and must be at least: If SIDE = 'L', NQ = M; if( (VECT = 'Q' and NQ >= K) or (VECT <> 'Q' and NQ > K) ), IAA=IA; JAA=JA; MI=M; NI=N; ICC=IC; JCC=JC; else IAA=IA+1; JAA=JA; MI=M-1; NI=N; ICC=IC+1; JCC=JC; end if else if SIDE = 'R', NQ = N; if( (VECT = 'Q' and NQ >= K) or (VECT <> 'Q' and NQ > K) ), IAA=IA; JAA=JA; MI=M; NI=N; ICC=IC; JCC=JC; else IAA=IA; JAA=JA+1; MI=M; NI=N-1; ICC=IC; JCC=JC+1; end if end if If VECT = 'Q', If SIDE = 'L', LWORK >= MAX( (NB_A*(NB_A-1))/2, (NqC0 + MpC0)*NB_A ) + NB_A * NB_A else if SIDE = 'R', LWORK >= MAX( (NB_A*(NB_A-1))/2, ( NqC0 + MAX( NpA0 + NUMROC( NUMROC( NI+ICOFFC, NB_A, 0, 0, NPCOL ), NB_A, 0, 0, LCMQ ), MpC0 ) )*NB_A ) + NB_A * NB_A end if else if VECT <> 'Q', if SIDE = 'L', LWORK >= MAX( (MB_A*(MB_A-1))/2, ( MpC0 + MAX( MqA0 + NUMROC( NUMROC( MI+IROFFC, MB_A, 0, 0, NPROW ), MB_A, 0, 0, LCMP ), NqC0 ) )*MB_A ) + MB_A * MB_A else if SIDE = 'R', LWORK >= MAX( (MB_A*(MB_A-1))/2, (MpC0 + NqC0)*MB_A ) + MB_A * MB_A end if end if where LCMP = LCM / NPROW, LCMQ = LCM / NPCOL, with LCM = ILCM( NPROW, NPCOL ), IROFFA = MOD( IAA-1, MB_A ), ICOFFA = MOD( JAA-1, NB_A ), IAROW = INDXG2P( IAA, MB_A, MYROW, RSRC_A, NPROW ), IACOL = INDXG2P( JAA, NB_A, MYCOL, CSRC_A, NPCOL ), MqA0 = NUMROC( MI+ICOFFA, NB_A, MYCOL, IACOL, NPCOL ), NpA0 = NUMROC( NI+IROFFA, MB_A, MYROW, IAROW, NPROW ), IROFFC = MOD( ICC-1, MB_C ), ICOFFC = MOD( JCC-1, NB_C ), ICROW = INDXG2P( ICC, MB_C, MYROW, RSRC_C, NPROW ), ICCOL = INDXG2P( JCC, NB_C, MYCOL, CSRC_C, NPCOL ), MpC0 = NUMROC( MI+IROFFC, MB_C, MYROW, ICROW, NPROW ), NqC0 = NUMROC( NI+ICOFFC, NB_C, MYCOL, ICCOL, NPCOL ), INDXG2P and NUMROC are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays.
Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 INFO (global output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-th entry had an illegal value, then INFO = -(i*100+j); if the i-th argument is a scalar and had an illegal value, then INFO = -i. Alignment requirements ====================== The distributed submatrices A(IA:*, JA:*) and C(IC:IC+M-1,JC:JC+N-1) must satisfy the following alignment properties, i.e. the following expressions should be true: If VECT = 'Q', If SIDE = 'L', ( MB_A.EQ.MB_C .AND. IROFFA.EQ.IROFFC .AND. IAROW.EQ.ICROW ) If SIDE = 'R', ( MB_A.EQ.NB_C .AND. IROFFA.EQ.ICOFFC ) else If SIDE = 'L', ( MB_A.EQ.MB_C .AND. ICOFFA.EQ.IROFFC ) If SIDE = 'R', ( NB_A.EQ.NB_C .AND. ICOFFA.EQ.ICOFFC .AND. IACOL.EQ.ICCOL ) end if
scalapack-doc-1.5/man/manl/pzunmhr.l
.TH PZUNMHR l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PZUNMHR - overwrite the general complex M-by-N distributed matrix sub( C ) = C(IC:IC+M-1,JC:JC+N-1) with Q*sub( C ), Q**H*sub( C ), sub( C )*Q or sub( C )*Q**H .SH SYNOPSIS .TP 20 SUBROUTINE PZUNMHR( SIDE, TRANS, M, N, ILO, IHI, A, IA, JA, DESCA, TAU, C, IC, JC, DESCC, WORK, LWORK, INFO ) .TP 20 .ti +4 CHARACTER SIDE, TRANS .TP 20 .ti +4 INTEGER IA, IC, IHI, ILO, INFO, JA, JC, LWORK, M, N .TP 20 .ti +4 INTEGER DESCA( * ), DESCC( * ) .TP 20 .ti +4 COMPLEX*16 A( * ), C( * ), TAU( * ), WORK( * ) .SH PURPOSE PZUNMHR overwrites the general complex M-by-N distributed matrix sub( C ) = C(IC:IC+M-1,JC:JC+N-1) with Q * sub( C ) if SIDE = 'L' and TRANS = 'N', Q**H * sub( C ) if SIDE = 'L' and TRANS = 'C', sub( C ) * Q if SIDE = 'R' and TRANS = 'N', or sub( C ) * Q**H if SIDE = 'R' and TRANS = 'C', .br where Q is a complex unitary distributed matrix of order nq, with nq = m if SIDE = 'L' and nq = n if SIDE = 'R'. Q is defined as the product of IHI-ILO elementary reflectors, as returned by PZGEHRD: Q = H(ilo) H(ilo+1) . . . H(ihi-1).
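The LOCr()/LOCc() quantities and LWORK formulas throughout these pages all reduce to calls of the ScaLAPACK tool function NUMROC, which counts how many rows or columns of a block-cyclically distributed dimension land on one process. Its counting rule can be sketched in Python (an illustrative reimplementation, not the library's Fortran routine):

```python
def numroc(n, nb, iproc, isrcproc, nprocs):
    """Number of rows/columns of an n-wide dimension, distributed in
    blocks of nb over nprocs processes starting at isrcproc, owned by
    process iproc (mirrors ScaLAPACK's NUMROC)."""
    mydist = (nprocs + iproc - isrcproc) % nprocs  # distance from source process
    nblocks = n // nb                   # number of full blocks
    num = (nblocks // nprocs) * nb      # full "rounds" of blocks everyone gets
    extrablocks = nblocks % nprocs      # leftover full blocks, dealt cyclically
    if mydist < extrablocks:
        num += nb                       # one of the leftover full blocks
    elif mydist == extrablocks:
        num += n % nb                   # the final partial block, if any
    return num

# Example: N = 10, NB = 2 on a 1 x 2 process grid:
# process 0 owns blocks 0, 2, 4 (6 columns); process 1 owns blocks 1, 3 (4).
print([numroc(10, 2, p, 0, 2) for p in range(2)])  # [6, 4]
```

Summing numroc over all processes always recovers n, which is a handy sanity check when sizing local arrays.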
.br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. 
.br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 SIDE (global input) CHARACTER = 'L': apply Q or Q**H from the Left; .br = 'R': apply Q or Q**H from the Right. .TP 8 TRANS (global input) CHARACTER .br = 'N': No transpose, apply Q; .br = 'C': Conjugate transpose, apply Q**H. .TP 8 M (global input) INTEGER The number of rows to be operated on, i.e. the number of rows of the distributed submatrix sub( C ). M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on, i.e. the number of columns of the distributed submatrix sub( C ). N >= 0. .TP 8 ILO (global input) INTEGER IHI (global input) INTEGER ILO and IHI must have the same values as in the previous call of PZGEHRD. Q is equal to the unit matrix except in the distributed submatrix Q(ia+ilo:ia+ihi-1,ja+ilo:ja+ihi-1). If SIDE = 'L', 1 <= ILO <= IHI <= max(1,M); if SIDE = 'R', 1 <= ILO <= IHI <= max(1,N); ILO and IHI are relative indexes. .TP 8 A (local input) COMPLEX*16 pointer into the local memory to an array of dimension (LLD_A,LOCc(JA+M-1)) if SIDE = 'L', and (LLD_A,LOCc(JA+N-1)) if SIDE = 'R'. The vectors which define the elementary reflectors, as returned by PZGEHRD. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 TAU (local input) COMPLEX*16, array, dimension LOCc(JA+M-2) if SIDE = 'L', and LOCc(JA+N-2) if SIDE = 'R'.
This array contains the scalar factors TAU(j) of the elementary reflectors H(j) as returned by PZGEHRD. TAU is tied to the distributed matrix A. .TP 8 C (local input/local output) COMPLEX*16 pointer into the local memory to an array of dimension (LLD_C,LOCc(JC+N-1)). On entry, the local pieces of the distributed matrix sub( C ). On exit, sub( C ) is overwritten by Q*sub( C ) or Q'*sub( C ) or sub( C )*Q' or sub( C )*Q. .TP 8 IC (global input) INTEGER The row index in the global array C indicating the first row of sub( C ). .TP 8 JC (global input) INTEGER The column index in the global array C indicating the first column of sub( C ). .TP 8 DESCC (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix C. .TP 8 WORK (local workspace/local output) COMPLEX*16 array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. LWORK is local input and must be at least: IAA = IA + ILO; JAA = JA+ILO-1; If SIDE = 'L', MI = IHI-ILO; NI = N; ICC = IC + ILO; JCC = JC; LWORK >= MAX( (NB_A*(NB_A-1))/2, (NqC0 + MpC0)*NB_A ) + NB_A * NB_A else if SIDE = 'R', MI = M; NI = IHI-ILO; ICC = IC; JCC = JC + ILO; LWORK >= MAX( (NB_A*(NB_A-1))/2, ( NqC0 + MAX( NpA0 + NUMROC( NUMROC( NI+ICOFFC, NB_A, 0, 0, NPCOL ), NB_A, 0, 0, LCMQ ), MpC0 ) )*NB_A ) + NB_A * NB_A end if where LCMQ = LCM / NPCOL with LCM = ILCM( NPROW, NPCOL ), IROFFA = MOD( IAA-1, MB_A ), ICOFFA = MOD( JAA-1, NB_A ), IAROW = INDXG2P( IAA, MB_A, MYROW, RSRC_A, NPROW ), NpA0 = NUMROC( NI+IROFFA, MB_A, MYROW, IAROW, NPROW ), IROFFC = MOD( ICC-1, MB_C ), ICOFFC = MOD( JCC-1, NB_C ), ICROW = INDXG2P( ICC, MB_C, MYROW, RSRC_C, NPROW ), ICCOL = INDXG2P( JCC, NB_C, MYCOL, CSRC_C, NPCOL ), MpC0 = NUMROC( MI+IROFFC, MB_C, MYROW, ICROW, NPROW ), NqC0 = NUMROC( NI+ICOFFC, NB_C, MYCOL, ICCOL, NPCOL ), ILCM, INDXG2P and NUMROC are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be
determined by calling the subroutine BLACS_GRIDINFO. If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 INFO (global output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-th entry had an illegal value, then INFO = -(i*100+j); if the i-th argument is a scalar and had an illegal value, then INFO = -i. Alignment requirements ====================== The distributed submatrices A(IA:*, JA:*) and C(IC:IC+M-1,JC:JC+N-1) must satisfy the following alignment properties, i.e. the following expressions should be true: If SIDE = 'L', ( MB_A.EQ.MB_C .AND. IROFFA.EQ.IROFFC .AND. IAROW.EQ.ICROW ) If SIDE = 'R', ( MB_A.EQ.NB_C .AND. IROFFA.EQ.ICOFFC )
scalapack-doc-1.5/man/manl/pzunml2.l
.TH PZUNML2 l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PZUNML2 - overwrite the general complex M-by-N distributed matrix sub( C ) = C(IC:IC+M-1,JC:JC+N-1) with Q*sub( C ), Q**H*sub( C ), sub( C )*Q or sub( C )*Q**H .SH SYNOPSIS .TP 20 SUBROUTINE PZUNML2( SIDE, TRANS, M, N, K, A, IA, JA, DESCA, TAU, C, IC, JC, DESCC, WORK, LWORK, INFO ) .TP 20 .ti +4 CHARACTER SIDE, TRANS .TP 20 .ti +4 INTEGER IA, IC, INFO, JA, JC, K, LWORK, M, N .TP 20 .ti +4 INTEGER DESCA( * ), DESCC( * ) .TP 20 .ti +4 COMPLEX*16 A( * ), C( * ), TAU( * ), WORK( * ) .SH PURPOSE PZUNML2 overwrites the general complex M-by-N distributed matrix sub( C ) = C(IC:IC+M-1,JC:JC+N-1) with Q * sub( C ) if SIDE = 'L' and TRANS = 'N', Q**H * sub( C ) if SIDE = 'L' and TRANS = 'C', sub( C ) * Q if SIDE = 'R' and TRANS = 'N', or sub( C ) * Q**H if SIDE = 'R' and TRANS = 'C', .br where Q is a complex unitary distributed matrix defined as the product of K elementary reflectors .br Q = H(k)' . . . H(2)' H(1)' .br as returned by PZGELQF. Q is of order M if SIDE = 'L' and of order N if SIDE = 'R'.
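The INFO error convention repeated in each of these pages (INFO = -(i*100+j) when entry j of array argument i is illegal, INFO = -i when scalar argument i is illegal) can be decoded mechanically. A small illustrative Python helper, not part of ScaLAPACK:

```python
def decode_info(info):
    """Decode a negative ScaLAPACK INFO value into a pair
    (argument position, array entry or None)."""
    if info >= 0:
        return None            # 0 means success; positive values are routine-specific
    code = -info
    if code > 100:
        return code // 100, code % 100   # entry j of array argument i
    return code, None                    # scalar argument i

print(decode_info(-902))  # (9, 2): entry 2 of argument 9, e.g. a bad descriptor field
print(decode_info(-3))    # (3, None): scalar argument 3
```

Since argument positions and descriptor entries both start at 1, the two encodings never collide: array-argument codes are always greater than 100.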
.br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. 
.br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 SIDE (global input) CHARACTER = 'L': apply Q or Q**H from the Left; .br = 'R': apply Q or Q**H from the Right. .TP 8 TRANS (global input) CHARACTER .br = 'N': No transpose, apply Q; .br = 'C': Conjugate transpose, apply Q**H. .TP 8 M (global input) INTEGER The number of rows to be operated on i.e the number of rows of the distributed submatrix sub( C ). M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on i.e the number of columns of the distributed submatrix sub( C ). N >= 0. .TP 8 K (global input) INTEGER The number of elementary reflectors whose product defines the matrix Q. If SIDE = 'L', M >= K >= 0, if SIDE = 'R', N >= K >= 0. .TP 8 A (local input) COMPLEX*16 pointer into the local memory to an array of dimension (LLD_A,LOCc(JA+M-1)) if SIDE='L', and (LLD_A,LOCc(JA+N-1)) if SIDE='R', where LLD_A >= max(1,LOCr(IA+K-1)); On entry, the i-th row must contain the vector which defines the elementary reflector H(i), IA <= i <= IA+K-1, as returned by PZGELQF in the K rows of its distributed matrix argument A(IA:IA+K-1,JA:*). .br A(IA:IA+K-1,JA:*) is modified by the routine but restored on exit. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 TAU (local input) COMPLEX*16, array, dimension LOCc(IA+K-1). 
This array contains the scalar factors TAU(i) of the elementary reflectors H(i) as returned by PZGELQF. TAU is tied to the distributed matrix A. .TP 8 C (local input/local output) COMPLEX*16 pointer into the local memory to an array of dimension (LLD_C,LOCc(JC+N-1)). On entry, the local pieces of the distributed matrix sub( C ). On exit, sub( C ) is overwritten by Q*sub( C ) or Q'*sub( C ) or sub( C )*Q' or sub( C )*Q. .TP 8 IC (global input) INTEGER The row index in the global array C indicating the first row of sub( C ). .TP 8 JC (global input) INTEGER The column index in the global array C indicating the first column of sub( C ). .TP 8 DESCC (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix C. .TP 8 WORK (local workspace/local output) COMPLEX*16 array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. LWORK is local input and must be at least: If SIDE = 'L', LWORK >= MpC0 + MAX( MAX( 1, NqC0 ), NUMROC( NUMROC( M+IROFFC,MB_A,0,0,NPROW ),MB_A,0,0,LCMP ) ); if SIDE = 'R', LWORK >= NqC0 + MAX( 1, MpC0 ); where LCMP = LCM / NPROW with LCM = ILCM( NPROW, NPCOL ), IROFFC = MOD( IC-1, MB_C ), ICOFFC = MOD( JC-1, NB_C ), ICROW = INDXG2P( IC, MB_C, MYROW, RSRC_C, NPROW ), ICCOL = INDXG2P( JC, NB_C, MYCOL, CSRC_C, NPCOL ), MpC0 = NUMROC( M+IROFFC, MB_C, MYROW, ICROW, NPROW ), NqC0 = NUMROC( N+ICOFFC, NB_C, MYCOL, ICCOL, NPCOL ), ILCM, INDXG2P and NUMROC are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA.
.TP 8 INFO (local output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-th entry had an illegal value, then INFO = -(i*100+j); if the i-th argument is a scalar and had an illegal value, then INFO = -i. Alignment requirements ====================== The distributed submatrices A(IA:*, JA:*) and C(IC:IC+M-1,JC:JC+N-1) must satisfy the following alignment properties, i.e. the following expressions should be true: If SIDE = 'L', ( NB_A.EQ.MB_C .AND. ICOFFA.EQ.IROFFC ) If SIDE = 'R', ( NB_A.EQ.NB_C .AND. ICOFFA.EQ.ICOFFC .AND. IACOL.EQ.ICCOL )
scalapack-doc-1.5/man/manl/pzunmlq.l
.TH PZUNMLQ l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PZUNMLQ - overwrite the general complex M-by-N distributed matrix sub( C ) = C(IC:IC+M-1,JC:JC+N-1) with Q*sub( C ), Q**H*sub( C ), sub( C )*Q or sub( C )*Q**H .SH SYNOPSIS .TP 20 SUBROUTINE PZUNMLQ( SIDE, TRANS, M, N, K, A, IA, JA, DESCA, TAU, C, IC, JC, DESCC, WORK, LWORK, INFO ) .TP 20 .ti +4 CHARACTER SIDE, TRANS .TP 20 .ti +4 INTEGER IA, IC, INFO, JA, JC, K, LWORK, M, N .TP 20 .ti +4 INTEGER DESCA( * ), DESCC( * ) .TP 20 .ti +4 COMPLEX*16 A( * ), C( * ), TAU( * ), WORK( * ) .SH PURPOSE PZUNMLQ overwrites the general complex M-by-N distributed matrix sub( C ) = C(IC:IC+M-1,JC:JC+N-1) with Q * sub( C ) if SIDE = 'L' and TRANS = 'N', Q**H * sub( C ) if SIDE = 'L' and TRANS = 'C', sub( C ) * Q if SIDE = 'R' and TRANS = 'N', or sub( C ) * Q**H if SIDE = 'R' and TRANS = 'C', .br where Q is a complex unitary distributed matrix defined as the product of K elementary reflectors .br Q = H(k)' . . . H(2)' H(1)' .br as returned by PZGELQF. Q is of order M if SIDE = 'L' and of order N if SIDE = 'R'. .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclically distributed array. Such a global array has an associated description vector DESCA.
In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). 
An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 SIDE (global input) CHARACTER = 'L': apply Q or Q**H from the Left; .br = 'R': apply Q or Q**H from the Right. .TP 8 TRANS (global input) CHARACTER .br = 'N': No transpose, apply Q; .br = 'C': Conjugate transpose, apply Q**H. .TP 8 M (global input) INTEGER The number of rows to be operated on i.e the number of rows of the distributed submatrix sub( C ). M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on i.e the number of columns of the distributed submatrix sub( C ). N >= 0. .TP 8 K (global input) INTEGER The number of elementary reflectors whose product defines the matrix Q. If SIDE = 'L', M >= K >= 0, if SIDE = 'R', N >= K >= 0. .TP 8 A (local input) COMPLEX*16 pointer into the local memory to an array of dimension (LLD_A,LOCc(JA+M-1)) if SIDE='L', and (LLD_A,LOCc(JA+N-1)) if SIDE='R', where LLD_A >= max(1,LOCr(IA+K-1)); On entry, the i-th row must contain the vector which defines the elementary reflector H(i), IA <= i <= IA+K-1, as returned by PZGELQF in the K rows of its distributed matrix argument A(IA:IA+K-1,JA:*). .br A(IA:IA+K-1,JA:*) is modified by the routine but restored on exit. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 TAU (local input) COMPLEX*16, array, dimension LOCc(IA+K-1). This array contains the scalar factors TAU(i) of the elementary reflectors H(i) as returned by PZGELQF. TAU is tied to the distributed matrix A. .TP 8 C (local input/local output) COMPLEX*16 pointer into the local memory to an array of dimension (LLD_C,LOCc(JC+N-1)). 
On entry, the local pieces of the distributed matrix sub( C ). On exit, sub( C ) is overwritten by Q*sub( C ) or Q'*sub( C ) or sub( C )*Q' or sub( C )*Q. .TP 8 IC (global input) INTEGER The row index in the global array C indicating the first row of sub( C ). .TP 8 JC (global input) INTEGER The column index in the global array C indicating the first column of sub( C ). .TP 8 DESCC (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix C. .TP 8 WORK (local workspace/local output) COMPLEX*16 array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. LWORK is local input and must be at least: if SIDE = 'L', LWORK >= MAX( (MB_A*(MB_A-1))/2, ( MpC0 + MAX( MqA0 + NUMROC( NUMROC( M+IROFFC, MB_A, 0, 0, NPROW ), MB_A, 0, 0, LCMP ), NqC0 ) )*MB_A ) + MB_A * MB_A else if SIDE = 'R', LWORK >= MAX( (MB_A*(MB_A-1))/2, (MpC0 + NqC0)*MB_A ) + MB_A * MB_A end if where LCMP = LCM / NPROW with LCM = ILCM( NPROW, NPCOL ), IROFFA = MOD( IA-1, MB_A ), ICOFFA = MOD( JA-1, NB_A ), IACOL = INDXG2P( JA, NB_A, MYCOL, CSRC_A, NPCOL ), MqA0 = NUMROC( M+ICOFFA, NB_A, MYCOL, IACOL, NPCOL ), IROFFC = MOD( IC-1, MB_C ), ICOFFC = MOD( JC-1, NB_C ), ICROW = INDXG2P( IC, MB_C, MYROW, RSRC_C, NPROW ), ICCOL = INDXG2P( JC, NB_C, MYCOL, CSRC_C, NPCOL ), MpC0 = NUMROC( M+IROFFC, MB_C, MYROW, ICROW, NPROW ), NqC0 = NUMROC( N+ICOFFC, NB_C, MYCOL, ICCOL, NPCOL ), ILCM, INDXG2P and NUMROC are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA.
.TP 8 INFO (global output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-th entry had an illegal value, then INFO = -(i*100+j); if the i-th argument is a scalar and had an illegal value, then INFO = -i. Alignment requirements ====================== The distributed submatrices A(IA:*, JA:*) and C(IC:IC+M-1,JC:JC+N-1) must satisfy the following alignment properties, i.e. the following expressions should be true: If SIDE = 'L', ( NB_A.EQ.MB_C .AND. ICOFFA.EQ.IROFFC ) If SIDE = 'R', ( NB_A.EQ.NB_C .AND. ICOFFA.EQ.ICOFFC .AND. IACOL.EQ.ICCOL )
scalapack-doc-1.5/man/manl/pzunmql.l
.TH PZUNMQL l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PZUNMQL - overwrite the general complex M-by-N distributed matrix sub( C ) = C(IC:IC+M-1,JC:JC+N-1) with Q*sub( C ), Q**H*sub( C ), sub( C )*Q or sub( C )*Q**H .SH SYNOPSIS .TP 20 SUBROUTINE PZUNMQL( SIDE, TRANS, M, N, K, A, IA, JA, DESCA, TAU, C, IC, JC, DESCC, WORK, LWORK, INFO ) .TP 20 .ti +4 CHARACTER SIDE, TRANS .TP 20 .ti +4 INTEGER IA, IC, INFO, JA, JC, K, LWORK, M, N .TP 20 .ti +4 INTEGER DESCA( * ), DESCC( * ) .TP 20 .ti +4 COMPLEX*16 A( * ), C( * ), TAU( * ), WORK( * ) .SH PURPOSE PZUNMQL overwrites the general complex M-by-N distributed matrix sub( C ) = C(IC:IC+M-1,JC:JC+N-1) with Q * sub( C ) if SIDE = 'L' and TRANS = 'N', Q**H * sub( C ) if SIDE = 'L' and TRANS = 'C', sub( C ) * Q if SIDE = 'R' and TRANS = 'N', or sub( C ) * Q**H if SIDE = 'R' and TRANS = 'C', .br where Q is a complex unitary distributed matrix defined as the product of K elementary reflectors .br Q = H(k) . . . H(2) H(1) .br as returned by PZGEQLF. Q is of order M if SIDE = 'L' and of order N if SIDE = 'R'. .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclically distributed array. Such a global array has an associated description vector DESCA.
In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). 
An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 SIDE (global input) CHARACTER = 'L': apply Q or Q**H from the Left; .br = 'R': apply Q or Q**H from the Right. .TP 8 TRANS (global input) CHARACTER .br = 'N': No transpose, apply Q; .br = 'C': Conjugate transpose, apply Q**H. .TP 8 M (global input) INTEGER The number of rows to be operated on i.e the number of rows of the distributed submatrix sub( C ). M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on i.e the number of columns of the distributed submatrix sub( C ). N >= 0. .TP 8 K (global input) INTEGER The number of elementary reflectors whose product defines the matrix Q. If SIDE = 'L', M >= K >= 0, if SIDE = 'R', N >= K >= 0. .TP 8 A (local input) COMPLEX*16 pointer into the local memory to an array of dimension (LLD_A,LOCc(JA+K-1)). On entry, the j-th column must contain the vector which defines the elemen- tary reflector H(j), JA <= j <= JA+K-1, as returned by PZGEQLF in the K columns of its distributed matrix argument A(IA:*,JA:JA+K-1). A(IA:*,JA:JA+K-1) is modified by the routine but restored on exit. If SIDE = 'L', LLD_A >= MAX( 1, LOCr(IA+M-1) ), if SIDE = 'R', LLD_A >= MAX( 1, LOCr(IA+N-1) ). .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 TAU (local input) COMPLEX*16, array, dimension LOCc(JA+N-1) This array contains the scalar factors TAU(j) of the elementary reflectors H(j) as returned by PZGEQLF. TAU is tied to the distributed matrix A. 
.TP 8 C (local input/local output) COMPLEX*16 pointer into the local memory to an array of dimension (LLD_C,LOCc(JC+N-1)). On entry, the local pieces of the distributed matrix sub( C ). On exit, sub( C ) is overwritten by Q*sub( C ) or Q'*sub( C ) or sub( C )*Q' or sub( C )*Q. .TP 8 IC (global input) INTEGER The row index in the global array C indicating the first row of sub( C ). .TP 8 JC (global input) INTEGER The column index in the global array C indicating the first column of sub( C ). .TP 8 DESCC (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix C. .TP 8 WORK (local workspace/local output) COMPLEX*16 array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. LWORK is local input and must be at least: If SIDE = 'L', LWORK >= MAX( (NB_A*(NB_A-1))/2, (NqC0 + MpC0)*NB_A ) + NB_A * NB_A else if SIDE = 'R', LWORK >= MAX( (NB_A*(NB_A-1))/2, ( NqC0 + MAX( NpA0 + NUMROC( NUMROC( N+ICOFFC, NB_A, 0, 0, NPCOL ), NB_A, 0, 0, LCMQ ), MpC0 ) )*NB_A ) + NB_A * NB_A end if where LCMQ = LCM / NPCOL with LCM = ILCM( NPROW, NPCOL ), IROFFA = MOD( IA-1, MB_A ), ICOFFA = MOD( JA-1, NB_A ), IAROW = INDXG2P( IA, MB_A, MYROW, RSRC_A, NPROW ), NpA0 = NUMROC( N+IROFFA, MB_A, MYROW, IAROW, NPROW ), IROFFC = MOD( IC-1, MB_C ), ICOFFC = MOD( JC-1, NB_C ), ICROW = INDXG2P( IC, MB_C, MYROW, RSRC_C, NPROW ), ICCOL = INDXG2P( JC, NB_C, MYCOL, CSRC_C, NPCOL ), MpC0 = NUMROC( M+IROFFC, MB_C, MYROW, ICROW, NPROW ), NqC0 = NUMROC( N+ICOFFC, NB_C, MYCOL, ICCOL, NPCOL ), ILCM, INDXG2P and NUMROC are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays.
Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 INFO (global output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-th entry had an illegal value, then INFO = -(i*100+j); if the i-th argument is a scalar and had an illegal value, then INFO = -i. Alignment requirements ====================== The distributed submatrices A(IA:*, JA:*) and C(IC:IC+M-1,JC:JC+N-1) must satisfy the following alignment properties, i.e. the following expressions should be true: If SIDE = 'L', ( MB_A.EQ.MB_C .AND. IROFFA.EQ.IROFFC .AND. IAROW.EQ.ICROW ) If SIDE = 'R', ( MB_A.EQ.NB_C .AND. IROFFA.EQ.ICOFFC )
scalapack-doc-1.5/man/manl/pzunmqr.l
.TH PZUNMQR l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PZUNMQR - overwrite the general complex M-by-N distributed matrix sub( C ) = C(IC:IC+M-1,JC:JC+N-1) with Q*sub( C ), Q**H*sub( C ), sub( C )*Q or sub( C )*Q**H .SH SYNOPSIS .TP 20 SUBROUTINE PZUNMQR( SIDE, TRANS, M, N, K, A, IA, JA, DESCA, TAU, C, IC, JC, DESCC, WORK, LWORK, INFO ) .TP 20 .ti +4 CHARACTER SIDE, TRANS .TP 20 .ti +4 INTEGER IA, IC, INFO, JA, JC, K, LWORK, M, N .TP 20 .ti +4 INTEGER DESCA( * ), DESCC( * ) .TP 20 .ti +4 COMPLEX*16 A( * ), C( * ), TAU( * ), WORK( * ) .SH PURPOSE PZUNMQR overwrites the general complex M-by-N distributed matrix sub( C ) = C(IC:IC+M-1,JC:JC+N-1) with Q * sub( C ) if SIDE = 'L' and TRANS = 'N', Q**H * sub( C ) if SIDE = 'L' and TRANS = 'C', sub( C ) * Q if SIDE = 'R' and TRANS = 'N', or sub( C ) * Q**H if SIDE = 'R' and TRANS = 'C', .br where Q is a complex unitary distributed matrix defined as the product of k elementary reflectors .br Q = H(1) H(2) . . . H(k) .br as returned by PZGEQRF. Q is of order M if SIDE = 'L' and of order N if SIDE = 'R'. .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location.
.br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). 
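The LOCr/LOCc bookkeeping just defined is plain integer arithmetic. As an illustrative sketch (in Python rather than the Fortran of the library), the block-cyclic local-extent count computed by the ScaLAPACK tool function NUMROC can be written like this:

```python
def numroc(n, nb, iproc, isrcproc, nprocs):
    """Number of rows/columns of an n-element dimension, distributed in
    blocks of size nb over nprocs processes (first block on process
    isrcproc), that land on process iproc.  Sketch of ScaLAPACK's NUMROC."""
    mydist = (nprocs + iproc - isrcproc) % nprocs  # my distance from the source process
    nblocks = n // nb                              # number of full blocks
    count = (nblocks // nprocs) * nb               # full rounds of blocks everyone gets
    extrablocks = nblocks % nprocs                 # leftover full blocks
    if mydist < extrablocks:
        count += nb                                # I receive one extra full block
    elif mydist == extrablocks:
        count += n % nb                            # I receive the trailing partial block
    return count

# Every element lands on exactly one process, so the counts sum to n:
counts = [numroc(10, 2, p, 0, 2) for p in range(2)]
```

Summing the counts over a process row or column recovers the global dimension, and each count respects the upper bound LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A quoted in these pages.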
An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 SIDE (global input) CHARACTER = 'L': apply Q or Q**H from the Left; .br = 'R': apply Q or Q**H from the Right. .TP 8 TRANS (global input) CHARACTER .br = 'N': No transpose, apply Q; .br = 'C': Conjugate transpose, apply Q**H. .TP 8 M (global input) INTEGER The number of rows to be operated on i.e the number of rows of the distributed submatrix sub( C ). M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on i.e the number of columns of the distributed submatrix sub( C ). N >= 0. .TP 8 K (global input) INTEGER The number of elementary reflectors whose product defines the matrix Q. If SIDE = 'L', M >= K >= 0, if SIDE = 'R', N >= K >= 0. .TP 8 A (local input) COMPLEX*16 pointer into the local memory to an array of dimension (LLD_A,LOCc(JA+K-1)). On entry, the j-th column must contain the vector which defines the elemen- tary reflector H(j), JA <= j <= JA+K-1, as returned by PZGEQRF in the K columns of its distributed matrix argument A(IA:*,JA:JA+K-1). A(IA:*,JA:JA+K-1) is modified by the routine but restored on exit. If SIDE = 'L', LLD_A >= MAX( 1, LOCr(IA+M-1) ); if SIDE = 'R', LLD_A >= MAX( 1, LOCr(IA+N-1) ). .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 TAU (local input) COMPLEX*16, array, dimension LOCc(JA+K-1). This array contains the scalar factors TAU(j) of the elementary reflectors H(j) as returned by PZGEQRF. TAU is tied to the distributed matrix A. 
.TP 8 C (local input/local output) COMPLEX*16 pointer into the local memory to an array of dimension (LLD_C,LOCc(JC+N-1)). On entry, the local pieces of the distributed matrix sub(C). On exit, sub( C ) is overwritten by Q*sub( C ) or Q'*sub( C ) or sub( C )*Q' or sub( C )*Q. .TP 8 IC (global input) INTEGER The row index in the global array C indicating the first row of sub( C ). .TP 8 JC (global input) INTEGER The column index in the global array C indicating the first column of sub( C ). .TP 8 DESCC (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix C. .TP 8 WORK (local workspace/local output) COMPLEX*16 array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. LWORK is local input and must be at least If SIDE = 'L', LWORK >= MAX( (NB_A*(NB_A-1))/2, (NqC0 + MpC0)*NB_A ) + NB_A * NB_A else if SIDE = 'R', LWORK >= MAX( (NB_A*(NB_A-1))/2, ( NqC0 + MAX( NpA0 + NUMROC( NUMROC( N+ICOFFC, NB_A, 0, 0, NPCOL ), NB_A, 0, 0, LCMQ ), MpC0 ) )*NB_A ) + NB_A * NB_A end if where LCMQ = LCM / NPCOL with LCM = ICLM( NPROW, NPCOL ), IROFFA = MOD( IA-1, MB_A ), ICOFFA = MOD( JA-1, NB_A ), IAROW = INDXG2P( IA, MB_A, MYROW, RSRC_A, NPROW ), NpA0 = NUMROC( N+IROFFA, MB_A, MYROW, IAROW, NPROW ), IROFFC = MOD( IC-1, MB_C ), ICOFFC = MOD( JC-1, NB_C ), ICROW = INDXG2P( IC, MB_C, MYROW, RSRC_C, NPROW ), ICCOL = INDXG2P( JC, NB_C, MYCOL, CSRC_C, NPCOL ), MpC0 = NUMROC( M+IROFFC, MB_C, MYROW, ICROW, NPROW ), NqC0 = NUMROC( N+ICOFFC, NB_C, MYCOL, ICCOL, NPCOL ), ILCM, INDXG2P and NUMROC are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. 
Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 INFO (global output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-th entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. Alignment requirements ====================== The distributed submatrices A(IA:*, JA:*) and C(IC:IC+M-1,JC:JC+N-1) must verify some alignment properties, namely the following expressions should be true: If SIDE = 'L', ( MB_A.EQ.MB_C .AND. IROFFA.EQ.IROFFC .AND. IAROW.EQ.ICROW ) If SIDE = 'R', ( MB_A.EQ.NB_C .AND. IROFFA.EQ.ICOFFC ) scalapack-doc-1.5/man/manl/pzunmr2.l .TH PZUNMR2 l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PZUNMR2 - overwrite the general complex M-by-N distributed matrix sub( C ) = C(IC:IC+M-1,JC:JC+N-1) with Q*sub( C ), Q**H*sub( C ), sub( C )*Q or sub( C )*Q**H, depending on SIDE and TRANS .SH SYNOPSIS .TP 20 SUBROUTINE PZUNMR2( SIDE, TRANS, M, N, K, A, IA, JA, DESCA, TAU, C, IC, JC, DESCC, WORK, LWORK, INFO ) .TP 20 .ti +4 CHARACTER SIDE, TRANS .TP 20 .ti +4 INTEGER IA, IC, INFO, JA, JC, K, LWORK, M, N .TP 20 .ti +4 INTEGER DESCA( * ), DESCC( * ) .TP 20 .ti +4 COMPLEX*16 A( * ), C( * ), TAU( * ), WORK( * ) .SH PURPOSE PZUNMR2 overwrites the general complex M-by-N distributed matrix sub( C ) = C(IC:IC+M-1,JC:JC+N-1) with Q * sub( C ) if SIDE = 'L' and TRANS = 'N', Q**H * sub( C ) if SIDE = 'L' and TRANS = 'C', sub( C ) * Q if SIDE = 'R' and TRANS = 'N', or sub( C ) * Q**H if SIDE = 'R' and TRANS = 'C', .br where Q is a complex unitary distributed matrix defined as the product of K elementary reflectors .br Q = H(1)' H(2)' . . . H(k)' .br as returned by PZGERQF. Q is of order M if SIDE = 'L' and of order N if SIDE = 'R'. .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. 
.br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). 
An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 SIDE (global input) CHARACTER = 'L': apply Q or Q**H from the Left; .br = 'R': apply Q or Q**H from the Right. .TP 8 TRANS (global input) CHARACTER .br = 'N': No transpose, apply Q; .br = 'C': Conjugate transpose, apply Q**H. .TP 8 M (global input) INTEGER The number of rows to be operated on i.e the number of rows of the distributed submatrix sub( C ). M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on i.e the number of columns of the distributed submatrix sub( C ). N >= 0. .TP 8 K (global input) INTEGER The number of elementary reflectors whose product defines the matrix Q. If SIDE = 'L', M >= K >= 0, if SIDE = 'R', N >= K >= 0. .TP 8 A (local input) COMPLEX*16 pointer into the local memory to an array of dimension (LLD_A,LOCc(JA+M-1)) if SIDE='L', and (LLD_A,LOCc(JA+N-1)) if SIDE='R', where LLD_A >= MAX(1,LOCr(IA+K-1)); On entry, the i-th row must contain the vector which defines the elementary reflector H(i), IA <= i <= IA+K-1, as returned by PZGERQF in the K rows of its distributed matrix argument A(IA:IA+K-1,JA:*). .br A(IA:IA+K-1,JA:*) is modified by the routine but restored on exit. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 TAU (local input) COMPLEX*16, array, dimension LOCc(IA+K-1). This array contains the scalar factors TAU(i) of the elementary reflectors H(i) as returned by PZGERQF. TAU is tied to the distributed matrix A. .TP 8 C (local input/local output) COMPLEX*16 pointer into the local memory to an array of dimension (LLD_C,LOCc(JC+N-1)). 
On entry, the local pieces of the distributed matrix sub(C). On exit, sub( C ) is overwritten by Q*sub( C ) or Q'*sub( C ) or sub( C )*Q' or sub( C )*Q. .TP 8 IC (global input) INTEGER The row index in the global array C indicating the first row of sub( C ). .TP 8 JC (global input) INTEGER The column index in the global array C indicating the first column of sub( C ). .TP 8 DESCC (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix C. .TP 8 WORK (local workspace/local output) COMPLEX*16 array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. LWORK is local input and must be at least If SIDE = 'L', LWORK >= MpC0 + MAX( MAX( 1, NqC0 ), NUMROC( NUMROC( M+IROFFC,MB_A,0,0,NPROW ),MB_A,0,0,LCMP ) ); if SIDE = 'R', LWORK >= NqC0 + MAX( 1, MpC0 ); where LCMP = LCM / NPROW with LCM = ICLM( NPROW, NPCOL ), IROFFC = MOD( IC-1, MB_C ), ICOFFC = MOD( JC-1, NB_C ), ICROW = INDXG2P( IC, MB_C, MYROW, RSRC_C, NPROW ), ICCOL = INDXG2P( JC, NB_C, MYCOL, CSRC_C, NPCOL ), MpC0 = NUMROC( M+IROFFC, MB_C, MYROW, ICROW, NPROW ), NqC0 = NUMROC( N+ICOFFC, NB_C, MYCOL, ICCOL, NPCOL ), ILCM, INDXG2P and NUMROC are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 INFO (local output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. 
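The LWORK formulas above lean on the tool function INDXG2P, which maps a global index to the process row or column that owns it, and on the block offsets IROFF/ICOFF. A Python sketch of this arithmetic (the library versions are Fortran INTEGER functions in ScaLAPACK's TOOLS directory):

```python
def indxg2p(indxglob, nb, iproc, isrcproc, nprocs):
    """Process coordinate owning 1-based global index indxglob of a
    dimension distributed in blocks of nb, first block on isrcproc.
    Sketch of ScaLAPACK's INDXG2P; iproc is unused, kept only to
    mirror the library's calling sequence."""
    return (isrcproc + (indxglob - 1) // nb) % nprocs

def iroff(i, mb):
    """Offset of global index i within its block, i.e. MOD( I-1, MB ),
    as used for IROFFA, ICOFFA, IROFFC and ICOFFC above."""
    return (i - 1) % mb
```

For example, with MB_A = 2, RSRC_A = 0 and NPROW = 2, global rows 1-2 live on process row 0, rows 3-4 on process row 1, rows 5-6 back on process row 0, and so on.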
Alignment requirements ====================== The distributed submatrices A(IA:*, JA:*) and C(IC:IC+M-1,JC:JC+N-1) must verify some alignment properties, namely the following expressions should be true: If SIDE = 'L', ( NB_A.EQ.MB_C .AND. ICOFFA.EQ.IROFFC ) If SIDE = 'R', ( NB_A.EQ.NB_C .AND. ICOFFA.EQ.ICOFFC .AND. IACOL.EQ.ICCOL ) scalapack-doc-1.5/man/manl/pzunmr3.l .TH PZUNMR3 l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PZUNMR3 - overwrite the general complex M-by-N distributed matrix sub( C ) = C(IC:IC+M-1,JC:JC+N-1) with Q*sub( C ), Q**H*sub( C ), sub( C )*Q or sub( C )*Q**H, depending on SIDE and TRANS .SH SYNOPSIS .TP 20 SUBROUTINE PZUNMR3( SIDE, TRANS, M, N, K, L, A, IA, JA, DESCA, TAU, C, IC, JC, DESCC, WORK, LWORK, INFO ) .TP 20 .ti +4 CHARACTER SIDE, TRANS .TP 20 .ti +4 INTEGER IA, IC, INFO, JA, JC, K, L, LWORK, M, N .TP 20 .ti +4 INTEGER DESCA( * ), DESCC( * ) .TP 20 .ti +4 COMPLEX*16 A( * ), C( * ), TAU( * ), WORK( * ) .SH PURPOSE PZUNMR3 overwrites the general complex M-by-N distributed matrix sub( C ) = C(IC:IC+M-1,JC:JC+N-1) with Q * sub( C ) if SIDE = 'L' and TRANS = 'N', Q**H * sub( C ) if SIDE = 'L' and TRANS = 'C', sub( C ) * Q if SIDE = 'R' and TRANS = 'N', or sub( C ) * Q**H if SIDE = 'R' and TRANS = 'C', .br where Q is a complex unitary distributed matrix defined as the product of K elementary reflectors .br Q = H(1)' H(2)' . . . H(k)' .br as returned by PZTZRZF. Q is of order M if SIDE = 'L' and of order N if SIDE = 'R'. .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclically distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". 
.br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 SIDE (global input) CHARACTER = 'L': apply Q or Q**H from the Left; .br = 'R': apply Q or Q**H from the Right. 
.TP 8 TRANS (global input) CHARACTER .br = 'N': No transpose, apply Q; .br = 'C': Conjugate transpose, apply Q**H. .TP 8 M (global input) INTEGER The number of rows to be operated on i.e the number of rows of the distributed submatrix sub( C ). M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on i.e the number of columns of the distributed submatrix sub( C ). N >= 0. .TP 8 K (global input) INTEGER The number of elementary reflectors whose product defines the matrix Q. If SIDE = 'L', M >= K >= 0, if SIDE = 'R', N >= K >= 0. .TP 8 L (global input) INTEGER The columns of the distributed submatrix sub( A ) containing the meaningful part of the Householder reflectors. If SIDE = 'L', M >= L >= 0, if SIDE = 'R', N >= L >= 0. .TP 8 A (local input) COMPLEX*16 pointer into the local memory to an array of dimension (LLD_A,LOCc(JA+M-1)) if SIDE='L', and (LLD_A,LOCc(JA+N-1)) if SIDE='R', where LLD_A >= MAX(1,LOCr(IA+K-1)); On entry, the i-th row must contain the vector which defines the elementary reflector H(i), IA <= i <= IA+K-1, as returned by PZTZRZF in the K rows of its distributed matrix argument A(IA:IA+K-1,JA:*). .br A(IA:IA+K-1,JA:*) is modified by the routine but restored on exit. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 TAU (local input) COMPLEX*16, array, dimension LOCc(IA+K-1). This array contains the scalar factors TAU(i) of the elementary reflectors H(i) as returned by PZTZRZF. TAU is tied to the distributed matrix A. .TP 8 C (local input/local output) COMPLEX*16 pointer into the local memory to an array of dimension (LLD_C,LOCc(JC+N-1)). On entry, the local pieces of the distributed matrix sub(C). 
On exit, sub( C ) is overwritten by Q*sub( C ) or Q'*sub( C ) or sub( C )*Q' or sub( C )*Q. .TP 8 IC (global input) INTEGER The row index in the global array C indicating the first row of sub( C ). .TP 8 JC (global input) INTEGER The column index in the global array C indicating the first column of sub( C ). .TP 8 DESCC (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix C. .TP 8 WORK (local workspace/local output) COMPLEX*16 array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. LWORK is local input and must be at least If SIDE = 'L', LWORK >= MpC0 + MAX( MAX( 1, NqC0 ), NUMROC( NUMROC( M+IROFFC,MB_A,0,0,NPROW ),MB_A,0,0,LCMP ) ); if SIDE = 'R', LWORK >= NqC0 + MAX( 1, MpC0 ); where LCMP = LCM / NPROW with LCM = ICLM( NPROW, NPCOL ), IROFFC = MOD( IC-1, MB_C ), ICOFFC = MOD( JC-1, NB_C ), ICROW = INDXG2P( IC, MB_C, MYROW, RSRC_C, NPROW ), ICCOL = INDXG2P( JC, NB_C, MYCOL, CSRC_C, NPCOL ), MpC0 = NUMROC( M+IROFFC, MB_C, MYROW, ICROW, NPROW ), NqC0 = NUMROC( N+ICOFFC, NB_C, MYCOL, ICCOL, NPCOL ), ILCM, INDXG2P and NUMROC are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 INFO (local output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. 
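The INFO convention above encodes both the position of the offending argument and, for array arguments such as the descriptors, the offending entry. A small decoder makes the scheme concrete; `decode_info` is a hypothetical helper written for illustration, not part of ScaLAPACK:

```python
def decode_info(info):
    """Decode a negative INFO value using the convention described above:
    INFO = -i          -> scalar argument i had an illegal value;
    INFO = -(i*100+j)  -> entry j of array argument i had an illegal value.
    Returns (argument_index, entry_index_or_None).  Hypothetical helper."""
    if info >= 0:
        raise ValueError("only negative INFO values encode argument errors")
    code = -info
    if code < 100:
        return code, None            # scalar argument
    return code // 100, code % 100   # array argument and entry within it
```

For instance, INFO = -902 points at entry 2 of the ninth argument, while INFO = -3 points at the third (scalar) argument.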
Alignment requirements ====================== The distributed submatrices A(IA:*, JA:*) and C(IC:IC+M-1,JC:JC+N-1) must verify some alignment properties, namely the following expressions should be true: If SIDE = 'L', ( NB_A.EQ.MB_C .AND. ICOFFA.EQ.IROFFC ) If SIDE = 'R', ( NB_A.EQ.NB_C .AND. ICOFFA.EQ.ICOFFC .AND. IACOL.EQ.ICCOL ) scalapack-doc-1.5/man/manl/pzunmrq.l .TH PZUNMRQ l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PZUNMRQ - overwrite the general complex M-by-N distributed matrix sub( C ) = C(IC:IC+M-1,JC:JC+N-1) with Q*sub( C ), Q**H*sub( C ), sub( C )*Q or sub( C )*Q**H, depending on SIDE and TRANS .SH SYNOPSIS .TP 20 SUBROUTINE PZUNMRQ( SIDE, TRANS, M, N, K, A, IA, JA, DESCA, TAU, C, IC, JC, DESCC, WORK, LWORK, INFO ) .TP 20 .ti +4 CHARACTER SIDE, TRANS .TP 20 .ti +4 INTEGER IA, IC, INFO, JA, JC, K, LWORK, M, N .TP 20 .ti +4 INTEGER DESCA( * ), DESCC( * ) .TP 20 .ti +4 COMPLEX*16 A( * ), C( * ), TAU( * ), WORK( * ) .SH PURPOSE PZUNMRQ overwrites the general complex M-by-N distributed matrix sub( C ) = C(IC:IC+M-1,JC:JC+N-1) with Q * sub( C ) if SIDE = 'L' and TRANS = 'N', Q**H * sub( C ) if SIDE = 'L' and TRANS = 'C', sub( C ) * Q if SIDE = 'R' and TRANS = 'N', or sub( C ) * Q**H if SIDE = 'R' and TRANS = 'C', .br where Q is a complex unitary distributed matrix defined as the product of K elementary reflectors .br Q = H(1)' H(2)' . . . H(k)' .br as returned by PZGERQF. Q is of order M if SIDE = 'L' and of order N if SIDE = 'R'. .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclically distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". 
.br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 SIDE (global input) CHARACTER = 'L': apply Q or Q**H from the Left; .br = 'R': apply Q or Q**H from the Right. 
.TP 8 TRANS (global input) CHARACTER .br = 'N': No transpose, apply Q; .br = 'C': Conjugate transpose, apply Q**H. .TP 8 M (global input) INTEGER The number of rows to be operated on i.e the number of rows of the distributed submatrix sub( C ). M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on i.e the number of columns of the distributed submatrix sub( C ). N >= 0. .TP 8 K (global input) INTEGER The number of elementary reflectors whose product defines the matrix Q. If SIDE = 'L', M >= K >= 0, if SIDE = 'R', N >= K >= 0. .TP 8 A (local input) COMPLEX*16 pointer into the local memory to an array of dimension (LLD_A,LOCc(JA+M-1)) if SIDE='L', and (LLD_A,LOCc(JA+N-1)) if SIDE='R', where LLD_A >= MAX(1,LOCr(IA+K-1)); On entry, the i-th row must contain the vector which defines the elementary reflector H(i), IA <= i <= IA+K-1, as returned by PZGERQF in the K rows of its distributed matrix argument A(IA:IA+K-1,JA:*). .br A(IA:IA+K-1,JA:*) is modified by the routine but restored on exit. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 TAU (local input) COMPLEX*16, array, dimension LOCc(IA+K-1). This array contains the scalar factors TAU(i) of the elementary reflectors H(i) as returned by PZGERQF. TAU is tied to the distributed matrix A. .TP 8 C (local input/local output) COMPLEX*16 pointer into the local memory to an array of dimension (LLD_C,LOCc(JC+N-1)). On entry, the local pieces of the distributed matrix sub(C). On exit, sub( C ) is overwritten by Q*sub( C ) or Q'*sub( C ) or sub( C )*Q' or sub( C )*Q. .TP 8 IC (global input) INTEGER The row index in the global array C indicating the first row of sub( C ). 
.TP 8 JC (global input) INTEGER The column index in the global array C indicating the first column of sub( C ). .TP 8 DESCC (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix C. .TP 8 WORK (local workspace/local output) COMPLEX*16 array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. LWORK is local input and must be at least if SIDE = 'L', LWORK >= MAX( (MB_A*(MB_A-1))/2, ( MpC0 + MAX( MqA0 + NUMROC( NUMROC( M+IROFFC, MB_A, 0, 0, NPROW ), MB_A, 0, 0, LCMP ), NqC0 ) )*MB_A ) + MB_A * MB_A else if SIDE = 'R', LWORK >= MAX( (MB_A*(MB_A-1))/2, (MpC0 + NqC0)*MB_A ) + MB_A * MB_A end if where LCMP = LCM / NPROW with LCM = ICLM( NPROW, NPCOL ), IROFFA = MOD( IA-1, MB_A ), ICOFFA = MOD( JA-1, NB_A ), IACOL = INDXG2P( JA, NB_A, MYCOL, CSRC_A, NPCOL ), MqA0 = NUMROC( M+ICOFFA, NB_A, MYCOL, IACOL, NPCOL ), IROFFC = MOD( IC-1, MB_C ), ICOFFC = MOD( JC-1, NB_C ), ICROW = INDXG2P( IC, MB_C, MYROW, RSRC_C, NPROW ), ICCOL = INDXG2P( JC, NB_C, MYCOL, CSRC_C, NPCOL ), MpC0 = NUMROC( M+IROFFC, MB_C, MYROW, ICROW, NPROW ), NqC0 = NUMROC( N+ICOFFC, NB_C, MYCOL, ICCOL, NPCOL ), ILCM, INDXG2P and NUMROC are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 INFO (global output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. 
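The LWORK = -1 workspace query described above is a two-call protocol: call once with LWORK = -1 to learn the required size from WORK(1), allocate, then call again to do the work. A Python sketch of the pattern; `p_routine` and its size formula are made up for illustration, standing in for any ScaLAPACK routine that follows this convention:

```python
def p_routine(n, work, lwork):
    """Stand-in for a ScaLAPACK routine obeying the LWORK = -1
    workspace-query convention.  The size formula 3*n + 16 is
    invented for this sketch only."""
    required = 3 * n + 16
    if lwork == -1:          # query: report the requirement, do no work
        work[0] = required
        return 0
    if lwork < required:     # too little workspace: flag argument 3 (LWORK)
        return -3
    # ... the real computation would happen here, using work[:required] ...
    return 0

def call_with_query(n):
    work = [0.0]
    assert p_routine(n, work, -1) == 0    # first call: workspace query
    work = [0.0] * int(work[0])           # allocate what was requested
    return p_routine(n, work, len(work))  # second call: do the work
```

Note that in a query the routine returns successfully without touching anything but WORK(1), and that an undersized LWORK is reported through the usual negative-INFO convention.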
Alignment requirements ====================== The distributed submatrices A(IA:*, JA:*) and C(IC:IC+M-1,JC:JC+N-1) must verify some alignment properties, namely the following expressions should be true: If SIDE = 'L', ( NB_A.EQ.MB_C .AND. ICOFFA.EQ.IROFFC ) If SIDE = 'R', ( NB_A.EQ.NB_C .AND. ICOFFA.EQ.ICOFFC .AND. IACOL.EQ.ICCOL ) scalapack-doc-1.5/man/manl/pzunmrz.l .TH PZUNMRZ l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PZUNMRZ - overwrite the general complex M-by-N distributed matrix sub( C ) = C(IC:IC+M-1,JC:JC+N-1) with Q*sub( C ), Q**H*sub( C ), sub( C )*Q or sub( C )*Q**H, depending on SIDE and TRANS .SH SYNOPSIS .TP 20 SUBROUTINE PZUNMRZ( SIDE, TRANS, M, N, K, L, A, IA, JA, DESCA, TAU, C, IC, JC, DESCC, WORK, LWORK, INFO ) .TP 20 .ti +4 CHARACTER SIDE, TRANS .TP 20 .ti +4 INTEGER IA, IC, INFO, JA, JC, K, L, LWORK, M, N .TP 20 .ti +4 INTEGER DESCA( * ), DESCC( * ) .TP 20 .ti +4 COMPLEX*16 A( * ), C( * ), TAU( * ), WORK( * ) .SH PURPOSE PZUNMRZ overwrites the general complex M-by-N distributed matrix sub( C ) = C(IC:IC+M-1,JC:JC+N-1) with Q * sub( C ) if SIDE = 'L' and TRANS = 'N', Q**H * sub( C ) if SIDE = 'L' and TRANS = 'C', sub( C ) * Q if SIDE = 'R' and TRANS = 'N', or sub( C ) * Q**H if SIDE = 'R' and TRANS = 'C', .br where Q is a complex unitary distributed matrix defined as the product of K elementary reflectors .br Q = H(1)' H(2)' . . . H(k)' .br as returned by PZTZRZF. Q is of order M if SIDE = 'L' and of order N if SIDE = 'R'. .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. .br Let A be a generic term for any 2D block cyclically distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". 
.br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 SIDE (global input) CHARACTER = 'L': apply Q or Q**H from the Left; .br = 'R': apply Q or Q**H from the Right. 
.TP 8 TRANS (global input) CHARACTER .br = 'N': No transpose, apply Q; .br = 'C': Conjugate transpose, apply Q**H. .TP 8 M (global input) INTEGER The number of rows to be operated on i.e the number of rows of the distributed submatrix sub( C ). M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on i.e the number of columns of the distributed submatrix sub( C ). N >= 0. .TP 8 K (global input) INTEGER The number of elementary reflectors whose product defines the matrix Q. If SIDE = 'L', M >= K >= 0, if SIDE = 'R', N >= K >= 0. .TP 8 L (global input) INTEGER The columns of the distributed submatrix sub( A ) containing the meaningful part of the Householder reflectors. If SIDE = 'L', M >= L >= 0, if SIDE = 'R', N >= L >= 0. .TP 8 A (local input) COMPLEX*16 pointer into the local memory to an array of dimension (LLD_A,LOCc(JA+M-1)) if SIDE='L', and (LLD_A,LOCc(JA+N-1)) if SIDE='R', where LLD_A >= MAX(1,LOCr(IA+K-1)); On entry, the i-th row must contain the vector which defines the elementary reflector H(i), IA <= i <= IA+K-1, as returned by PZTZRZF in the K rows of its distributed matrix argument A(IA:IA+K-1,JA:*). .br A(IA:IA+K-1,JA:*) is modified by the routine but restored on exit. .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 TAU (local input) COMPLEX*16, array, dimension LOCc(IA+K-1). This array contains the scalar factors TAU(i) of the elementary reflectors H(i) as returned by PZTZRZF. TAU is tied to the distributed matrix A. .TP 8 C (local input/local output) COMPLEX*16 pointer into the local memory to an array of dimension (LLD_C,LOCc(JC+N-1)). On entry, the local pieces of the distributed matrix sub(C). 
On exit, sub( C ) is overwritten by Q*sub( C ) or Q'*sub( C ) or sub( C )*Q' or sub( C )*Q. .TP 8 IC (global input) INTEGER The row index in the global array C indicating the first row of sub( C ). .TP 8 JC (global input) INTEGER The column index in the global array C indicating the first column of sub( C ). .TP 8 DESCC (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix C. .TP 8 WORK (local workspace/local output) COMPLEX*16 array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. LWORK is local input and must be at least if SIDE = 'L', LWORK >= MAX( (MB_A*(MB_A-1))/2, ( MpC0 + MAX( MqA0 + NUMROC( NUMROC( M+IROFFC, MB_A, 0, 0, NPROW ), MB_A, 0, 0, LCMP ), NqC0 ) )*MB_A ) + MB_A * MB_A else if SIDE = 'R', LWORK >= MAX( (MB_A*(MB_A-1))/2, (MpC0 + NqC0)*MB_A ) + MB_A * MB_A end if where LCMP = LCM / NPROW with LCM = ICLM( NPROW, NPCOL ), IROFFA = MOD( IA-1, MB_A ), ICOFFA = MOD( JA-1, NB_A ), IACOL = INDXG2P( JA, NB_A, MYCOL, CSRC_A, NPCOL ), MqA0 = NUMROC( M+ICOFFA, NB_A, MYCOL, IACOL, NPCOL ), IROFFC = MOD( IC-1, MB_C ), ICOFFC = MOD( JC-1, NB_C ), ICROW = INDXG2P( IC, MB_C, MYROW, RSRC_C, NPROW ), ICCOL = INDXG2P( JC, NB_C, MYCOL, CSRC_C, NPCOL ), MpC0 = NUMROC( M+IROFFC, MB_C, MYROW, ICROW, NPROW ), NqC0 = NUMROC( N+ICOFFC, NB_C, MYCOL, ICCOL, NPCOL ), ILCM, INDXG2P and NUMROC are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. 
.TP 8 INFO (global output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. Alignment requirements ====================== The distributed submatrices A(IA:*, JA:*) and C(IC:IC+M-1,JC:JC+N-1) must verify some alignment properties, namely the following expressions should be true: If SIDE = 'L', ( NB_A.EQ.MB_C .AND. ICOFFA.EQ.IROFFC ) If SIDE = 'R', ( NB_A.EQ.NB_C .AND. ICOFFA.EQ.ICOFFC .AND. IACOL.EQ.ICCOL ) scalapack-doc-1.5/man/manl/pzunmtr.l0100644000056400000620000002047306335610665017153 0ustar pfrauenfstaff.TH PZUNMTR l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME PZUNMTR - overwrite the general complex M-by-N distributed matrix sub( C ) = C(IC:IC+M-1,JC:JC+N-1) with SIDE = 'L' SIDE = 'R' TRANS = 'N' .SH SYNOPSIS .TP 20 SUBROUTINE PZUNMTR( SIDE, UPLO, TRANS, M, N, A, IA, JA, DESCA, TAU, C, IC, JC, DESCC, WORK, LWORK, INFO ) .TP 20 .ti +4 CHARACTER SIDE, TRANS, UPLO .TP 20 .ti +4 INTEGER IA, IC, INFO, JA, JC, LWORK, M, N .TP 20 .ti +4 INTEGER DESCA( * ), DESCC( * ) .TP 20 .ti +4 COMPLEX*16 A( * ), C( * ), TAU( * ), WORK( * ) .SH PURPOSE PZUNMTR overwrites the general complex M-by-N distributed matrix sub( C ) = C(IC:IC+M-1,JC:JC+N-1) with TRANS = 'C': Q**H * sub( C ) sub( C ) * Q**H .br where Q is a complex unitary distributed matrix of order nq, with nq = m if SIDE = 'L' and nq = n if SIDE = 'R'. Q is defined as the product of nq-1 elementary reflectors, as returned by PZHETRD: if UPLO = 'U', Q = H(nq-1) . . . H(2) H(1); .br if UPLO = 'L', Q = H(1) H(2) . . . H(nq-1). .br Notes .br ===== .br Each global data object is described by an associated description vector. This vector stores the information required to establish the mapping between an object element and its corresponding process and memory location. 
.br Let A be a generic term for any 2D block cyclicly distributed array. Such a global array has an associated description vector DESCA. In the following comments, the character _ should be read as "of the global array". .br NOTATION STORED IN EXPLANATION .br --------------- -------------- -------------------------------------- DTYPE_A(global) DESCA( DTYPE_ )The descriptor type. In this case, DTYPE_A = 1. .br CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating the BLACS process grid A is distribu- ted over. The context itself is glo- bal, but the handle (the integer value) may vary. .br M_A (global) DESCA( M_ ) The number of rows in the global array A. .br N_A (global) DESCA( N_ ) The number of columns in the global array A. .br MB_A (global) DESCA( MB_ ) The blocking factor used to distribute the rows of the array. .br NB_A (global) DESCA( NB_ ) The blocking factor used to distribute the columns of the array. .br RSRC_A (global) DESCA( RSRC_ ) The process row over which the first row of the array A is distributed. CSRC_A (global) DESCA( CSRC_ ) The process column over which the first column of the array A is distributed. .br LLD_A (local) DESCA( LLD_ ) The leading dimension of the local array. LLD_A >= MAX(1,LOCr(M_A)). Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has dimension p x q. .br LOCr( K ) denotes the number of elements of K that a process would receive if K were distributed over the p processes of its process column. .br Similarly, LOCc( K ) denotes the number of elements of K that a process would receive if K were distributed over the q processes of its process row. .br The values of LOCr() and LOCc() may be determined via a call to the ScaLAPACK tool function, NUMROC: .br LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ), LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ). 
An upper bound for these quantities may be computed by: .br LOCr( M ) <= ceil( ceil(M/MB_A)/NPROW )*MB_A .br LOCc( N ) <= ceil( ceil(N/NB_A)/NPCOL )*NB_A .br .SH ARGUMENTS .TP 8 SIDE (global input) CHARACTER = 'L': apply Q or Q**H from the Left; .br = 'R': apply Q or Q**H from the Right. .TP 8 UPLO (global input) CHARACTER .br = 'U': Upper triangle of A(IA:*,JA:*) contains elementary reflectors from PZHETRD; = 'L': Lower triangle of A(IA:*,JA:*) contains elementary reflectors from PZHETRD. .TP 8 TRANS (global input) CHARACTER = 'N': No transpose, apply Q; .br = 'C': Conjugate transpose, apply Q**H. .TP 8 M (global input) INTEGER The number of rows to be operated on i.e the number of rows of the distributed submatrix sub( C ). M >= 0. .TP 8 N (global input) INTEGER The number of columns to be operated on i.e the number of columns of the distributed submatrix sub( C ). N >= 0. .TP 8 A (local input) COMPLEX*16 pointer into the local memory to an array of dimension (LLD_A,LOCc(JA+M-1)) if SIDE='L', or (LLD_A,LOCc(JA+N-1)) if SIDE = 'R'. The vectors which define the elementary reflectors, as returned by PZHETRD. If SIDE = 'L', LLD_A >= max(1,LOCr(IA+M-1)); if SIDE = 'R', LLD_A >= max(1,LOCr(IA+N-1)). .TP 8 IA (global input) INTEGER The row index in the global array A indicating the first row of sub( A ). .TP 8 JA (global input) INTEGER The column index in the global array A indicating the first column of sub( A ). .TP 8 DESCA (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix A. .TP 8 TAU (local input) COMPLEX*16 array, dimension LTAU, where if SIDE = 'L' and UPLO = 'U', LTAU = LOCc(M_A), if SIDE = 'L' and UPLO = 'L', LTAU = LOCc(JA+M-2), if SIDE = 'R' and UPLO = 'U', LTAU = LOCc(N_A), if SIDE = 'R' and UPLO = 'L', LTAU = LOCc(JA+N-2). TAU(i) must contain the scalar factor of the elementary reflector H(i), as returned by PZHETRD. TAU is tied to the distributed matrix A. 
.TP 8 C (local input/local output) COMPLEX*16 pointer into the local memory to an array of dimension (LLD_C,LOCc(JC+N-1)). On entry, the local pieces of the distributed matrix sub(C). On exit, sub( C ) is overwritten by Q*sub( C ) or Q'*sub( C ) or sub( C )*Q' or sub( C )*Q. .TP 8 IC (global input) INTEGER The row index in the global array C indicating the first row of sub( C ). .TP 8 JC (global input) INTEGER The column index in the global array C indicating the first column of sub( C ). .TP 8 DESCC (global and local input) INTEGER array of dimension DLEN_. The array descriptor for the distributed matrix C. .TP 8 WORK (local workspace/local output) COMPLEX*16 array, dimension (LWORK) On exit, WORK(1) returns the minimal and optimal LWORK. .TP 8 LWORK (local or global input) INTEGER The dimension of the array WORK. LWORK is local input and must be at least If UPLO = 'U', IAA = IA, JAA = JA+1, ICC = IC, JCC = JC; else UPLO = 'L', IAA = IA+1, JAA = JA; if SIDE = 'L', ICC = IC+1; JCC = JC; else ICC = IC; JCC = JC+1; end if end if If SIDE = 'L', MI = M-1; NI = N; LWORK >= MAX( (NB_A*(NB_A-1))/2, (NqC0 + MpC0)*NB_A ) + NB_A * NB_A else if SIDE = 'R', MI = M; MI = N-1; LWORK >= MAX( (NB_A*(NB_A-1))/2, ( NqC0 + MAX( NpA0 + NUMROC( NUMROC( NI+ICOFFC, NB_A, 0, 0, NPCOL ), NB_A, 0, 0, LCMQ ), MpC0 ) )*NB_A ) + NB_A * NB_A end if where LCMQ = LCM / NPCOL with LCM = ICLM( NPROW, NPCOL ), IROFFA = MOD( IAA-1, MB_A ), ICOFFA = MOD( JAA-1, NB_A ), IAROW = INDXG2P( IAA, MB_A, MYROW, RSRC_A, NPROW ), NpA0 = NUMROC( NI+IROFFA, MB_A, MYROW, IAROW, NPROW ), IROFFC = MOD( ICC-1, MB_C ), ICOFFC = MOD( JCC-1, NB_C ), ICROW = INDXG2P( ICC, MB_C, MYROW, RSRC_C, NPROW ), ICCOL = INDXG2P( JCC, NB_C, MYCOL, CSRC_C, NPCOL ), MpC0 = NUMROC( MI+IROFFC, MB_C, MYROW, ICROW, NPROW ), NqC0 = NUMROC( NI+ICOFFC, NB_C, MYCOL, ICCOL, NPCOL ), ILCM, INDXG2P and NUMROC are ScaLAPACK tool functions; MYROW, MYCOL, NPROW and NPCOL can be determined by calling the subroutine BLACS_GRIDINFO. 
If LWORK = -1, then LWORK is global input and a workspace query is assumed; the routine only calculates the minimum and optimal size for all work arrays. Each of these values is returned in the first entry of the corresponding work array, and no error message is issued by PXERBLA. .TP 8 INFO (global output) INTEGER = 0: successful exit .br < 0: If the i-th argument is an array and the j-entry had an illegal value, then INFO = -(i*100+j), if the i-th argument is a scalar and had an illegal value, then INFO = -i. Alignment requirements ====================== The distributed submatrices A(IA:*, JA:*) and C(IC:IC+M-1,JC:JC+N-1) must verify some alignment properties, namely the following expressions should be true: If SIDE = 'L', ( MB_A.EQ.MB_C .AND. IROFFA.EQ.IROFFC .AND. IAROW.EQ.ICROW ) If SIDE = 'R', ( MB_A.EQ.NB_C .AND. IROFFA.EQ.ICOFFC ) scalapack-doc-1.5/man/manl/sdbtf2.l .SH NAME SDBTF2 - compute an LU factorization of a real m-by-n band matrix A without using partial pivoting or row interchanges .SH SYNOPSIS .TP 19 SUBROUTINE SDBTF2( M, N, KL, KU, AB, LDAB, INFO ) .TP 19 .ti +4 INTEGER INFO, KL, KU, LDAB, M, N .TP 19 .ti +4 REAL AB( LDAB, * ) .SH PURPOSE SDBTF2 computes an LU factorization of a real m-by-n band matrix A without using partial pivoting or row interchanges. This is the unblocked version of the algorithm, calling Level 2 BLAS. .SH ARGUMENTS .TP 8 M (input) INTEGER The number of rows of the matrix A. M >= 0. .TP 8 N (input) INTEGER The number of columns of the matrix A. N >= 0. .TP 8 KL (input) INTEGER The number of subdiagonals within the band of A. KL >= 0. .TP 8 KU (input) INTEGER The number of superdiagonals within the band of A. KU >= 0. .TP 8 AB (input/output) REAL array, dimension (LDAB,N) On entry, the matrix A in band storage, in rows KL+1 to 2*KL+KU+1; rows 1 to KL of the array need not be set. 
The j-th column of A is stored in the j-th column of the array AB as follows: AB(kl+ku+1+i-j,j) = A(i,j) for max(1,j-ku)<=i<=min(m,j+kl) On exit, details of the factorization: U is stored as an upper triangular band matrix with KL+KU superdiagonals in rows 1 to KL+KU+1, and the multipliers used during the factorization are stored in rows KL+KU+2 to 2*KL+KU+1. See below for further details. .TP 8 LDAB (input) INTEGER The leading dimension of the array AB. LDAB >= 2*KL+KU+1. .TP 8 INFO (output) INTEGER = 0: successful exit .br < 0: if INFO = -i, the i-th argument had an illegal value .br > 0: if INFO = +i, U(i,i) is exactly zero. The factorization has been completed, but the factor U is exactly singular, and division by zero will occur if it is used to solve a system of equations. .SH FURTHER DETAILS The band storage scheme is illustrated by the following example, when M = N = 6, KL = 2, KU = 1: .br On entry: On exit: .br * a12 a23 a34 a45 a56 * u12 u23 u34 u45 u56 a11 a22 a33 a44 a55 a66 u11 u22 u33 u44 u55 u66 a21 a32 a43 a54 a65 * m21 m32 m43 m54 m65 * a31 a42 a53 a64 * * m31 m42 m53 m64 * * Array elements marked * are not used by the routine; elements marked + need not be set on entry, but are required by the routine to store elements of U, because of fill-in resulting from the row .br interchanges. .br scalapack-doc-1.5/man/manl/sdbtrf.l0100644000056400000620000000434706335610665016722 0ustar pfrauenfstaff.SH NAME SDBTRF - compute an LU factorization of a real m-by-n band matrix A without using partial pivoting or row interchanges .SH SYNOPSIS .TP 19 SUBROUTINE SDBTRF( M, N, KL, KU, AB, LDAB, INFO ) .TP 19 .ti +4 INTEGER INFO, KL, KU, LDAB, M, N .TP 19 .ti +4 REAL AB( LDAB, * ) .SH PURPOSE Sdbtrf computes an LU factorization of a real m-by-n band matrix A without using partial pivoting or row interchanges. This is the blocked version of the algorithm, calling Level 3 BLAS. .SH ARGUMENTS .TP 8 M (input) INTEGER The number of rows of the matrix A. M >= 0. 
.TP 8 N (input) INTEGER The number of columns of the matrix A. N >= 0. .TP 8 KL (input) INTEGER The number of subdiagonals within the band of A. KL >= 0. .TP 8 KU (input) INTEGER The number of superdiagonals within the band of A. KU >= 0. .TP 8 AB (input/output) REAL array, dimension (LDAB,N) On entry, the matrix A in band storage, in rows KL+1 to 2*KL+KU+1; rows 1 to KL of the array need not be set. The j-th column of A is stored in the j-th column of the array AB as follows: AB(kl+ku+1+i-j,j) = A(i,j) for max(1,j-ku)<=i<=min(m,j+kl) On exit, details of the factorization: U is stored as an upper triangular band matrix with KL+KU superdiagonals in rows 1 to KL+KU+1, and the multipliers used during the factorization are stored in rows KL+KU+2 to 2*KL+KU+1. See below for further details. .TP 8 LDAB (input) INTEGER The leading dimension of the array AB. LDAB >= 2*KL+KU+1. .TP 8 INFO (output) INTEGER = 0: successful exit .br < 0: if INFO = -i, the i-th argument had an illegal value .br > 0: if INFO = +i, U(i,i) is exactly zero. The factorization has been completed, but the factor U is exactly singular, and division by zero will occur if it is used to solve a system of equations. .SH FURTHER DETAILS The band storage scheme is illustrated by the following example, when M = N = 6, KL = 2, KU = 1: .br On entry: On exit: .br * a12 a23 a34 a45 a56 * u12 u23 u34 u45 u56 a11 a22 a33 a44 a55 a66 u11 u22 u33 u44 u55 u66 a21 a32 a43 a54 a65 * m21 m32 m43 m54 m65 * a31 a42 a53 a64 * * m31 m42 m53 m64 * * Array elements marked * are not used by the routine. 
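The band storage scheme described above can be illustrated with a short sketch (plain Python, not part of ScaLAPACK; the helper name `to_band_storage` is invented for this example). It packs a dense m-by-n band matrix into the 2*KL+KU+1 rows that SDBTF2/SDBTRF expect, using the mapping AB(kl+ku+1+i-j,j) = A(i,j):

```python
def to_band_storage(A, kl, ku):
    """Pack a dense m-by-n matrix into LAPACK-style band storage.

    Row kl+ku+1 (1-based) of AB holds the main diagonal of A; rows
    1..kl are left as zeros here because the factorization routines
    use them to store fill-in (the multipliers of L).
    """
    m, n = len(A), len(A[0])
    ldab = 2 * kl + ku + 1
    AB = [[0.0] * n for _ in range(ldab)]
    for j in range(1, n + 1):                              # 1-based column
        for i in range(max(1, j - ku), min(m, j + kl) + 1):
            AB[kl + ku + i - j][j - 1] = A[i - 1][j - 1]   # AB(kl+ku+1+i-j, j)
    return AB
```

For the 3-by-3 case with KL = KU = 1, the main diagonal lands in row KL+KU+1 = 3 of AB, the superdiagonal in row 2, and the subdiagonal in row 4, matching the layout pictured in FURTHER DETAILS.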
.br scalapack-doc-1.5/man/manl/sdttrf.l .TH SDTTRF l "12 May 1997" "modified LAPACK routine" "LAPACK routine (version 2.0)" .SH NAME SDTTRF - compute an LU factorization of a real tridiagonal matrix A using elimination without partial pivoting .SH SYNOPSIS .TP 19 SUBROUTINE SDTTRF( N, DL, D, DU, INFO ) .TP 19 .ti +4 INTEGER INFO, N .TP 19 .ti +4 REAL D( * ), DL( * ), DU( * ) .SH PURPOSE SDTTRF computes an LU factorization of a real tridiagonal matrix A using elimination without partial pivoting. The factorization has the form .br A = L * U .br where L is a product of unit lower bidiagonal matrices and U is upper triangular with nonzeros in only the main diagonal and first superdiagonal. .br .SH ARGUMENTS .TP 8 N (input) INTEGER The order of the matrix A. N >= 0. .TP 8 DL (input/output) REAL array, dimension (N-1) On entry, DL must contain the (n-1) subdiagonal elements of A. On exit, DL is overwritten by the (n-1) multipliers that define the matrix L from the LU factorization of A. .TP 8 D (input/output) REAL array, dimension (N) On entry, D must contain the diagonal elements of A. On exit, D is overwritten by the n diagonal elements of the upper triangular matrix U from the LU factorization of A. .TP 8 DU (input/output) REAL array, dimension (N-1) On entry, DU must contain the (n-1) superdiagonal elements of A. On exit, DU is overwritten by the (n-1) elements of the first superdiagonal of U. .TP 8 INFO (output) INTEGER = 0: successful exit .br < 0: if INFO = -i, the i-th argument had an illegal value .br > 0: if INFO = i, U(i,i) is exactly zero. The factorization has been completed, but the factor U is exactly singular, and division by zero will occur if it is used to solve a system of equations. 
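The elimination SDTTRF performs, and the forward/backward sweeps an SDTTRSV-style solver then applies, can be sketched in a few lines of Python. This is a toy model, not the library routine: the function names `dttrf`/`dttrs` are invented, there is no pivoting, and a zero pivot simply raises where the real routine would set INFO = i:

```python
def dttrf(dl, d, du):
    """LU of a tridiagonal matrix without pivoting: A = L*U.

    Overwrites dl with the multipliers of the unit lower bidiagonal L
    and d with the diagonal of U; du (the first superdiagonal of U)
    is unchanged in value, mirroring SDTTRF's in-place convention.
    """
    n = len(d)
    for i in range(n - 1):
        dl[i] = dl[i] / d[i]          # multiplier m = a(i+1,i) / u(i,i)
        d[i + 1] -= dl[i] * du[i]     # u(i+1,i+1) = a(i+1,i+1) - m*u(i,i+1)
    return dl, d, du

def dttrs(dl, d, du, b):
    """Solve A*x = b from the factors: L*y = b, then U*x = y.

    Combines the two triangular sweeps that SDTTRSV exposes
    separately via its UPLO argument.
    """
    n = len(d)
    x = list(b)
    for i in range(1, n):             # forward substitution with unit L
        x[i] -= dl[i - 1] * x[i - 1]
    x[n - 1] /= d[n - 1]              # back substitution with U
    for i in range(n - 2, -1, -1):
        x[i] = (x[i] - du[i] * x[i + 1]) / d[i]
    return x
```

Without pivoting the factorization can break down (or lose accuracy) on matrices a pivoted routine would handle, which is why these routines are confined to the structured systems arising inside ScaLAPACK's divide-and-conquer solvers.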
scalapack-doc-1.5/man/manl/sdttrsv.l .TH SDTTRSV l "12 May 1997" "modified LAPACK routine" "LAPACK routine (version 2.0)" .SH NAME SDTTRSV - solve one of the systems of equations L * X = B, L**T * X = B, U * X = B, or U**T * X = B .SH SYNOPSIS .TP 20 SUBROUTINE SDTTRSV( UPLO, TRANS, N, NRHS, DL, D, DU, B, LDB, INFO ) .TP 20 .ti +4 CHARACTER UPLO, TRANS .TP 20 .ti +4 INTEGER INFO, LDB, N, NRHS .TP 20 .ti +4 REAL B( LDB, * ), D( * ), DL( * ), DU( * ) .SH PURPOSE SDTTRSV solves one of the systems of equations .br L * X = B, L**T * X = B, or .br U * X = B, U**T * X = B, .br with factors of the tridiagonal matrix A from the LU factorization computed by SDTTRF. .br .SH ARGUMENTS .TP 8 UPLO (input) CHARACTER*1 Specifies whether to solve with L or U. .TP 8 TRANS (input) CHARACTER Specifies the form of the system of equations: .br = 'N': A * X = B (No transpose) .br = 'T': A**T * X = B (Transpose) .br = 'C': A**H * X = B (Conjugate transpose; the same as 'T' for this real routine) .TP 8 N (input) INTEGER The order of the matrix A. N >= 0. .TP 8 NRHS (input) INTEGER The number of right hand sides, i.e., the number of columns of the matrix B. NRHS >= 0. .TP 8 DL (input) REAL array, dimension (N-1) The (n-1) multipliers that define the matrix L from the LU factorization of A. .TP 8 D (input) REAL array, dimension (N) The n diagonal elements of the upper triangular matrix U from the LU factorization of A. .TP 8 DU (input) REAL array, dimension (N-1) The (n-1) elements of the first superdiagonal of U. .TP 8 B (input/output) REAL array, dimension (LDB,NRHS) On entry, the right hand side matrix B. On exit, B is overwritten by the solution matrix X. .TP 8 LDB (input) INTEGER The leading dimension of the array B. LDB >= max(1,N). 
.TP 8 INFO (output) INTEGER = 0: successful exit .br < 0: if INFO = -i, the i-th argument had an illegal value scalapack-doc-1.5/man/manl/slamsh.l0100644000056400000620000000444406335610666016724 0ustar pfrauenfstaff.TH SLAMSH l "12 May 1997" "LAPACK version 1.5" "LAPACK auxiliary routine (version 1.5)" .SH NAME SLAMSH - send multiple shifts through a small (single node) matrix to see how consecutive small subdiagonal elements are modified by subsequent shifts in an effort to maximize the number of bulges that can be sent through .SH SYNOPSIS .TP 18 SUBROUTINE SLAMSH ( S, LDS, NBULGE, JBLK, H, LDH, N, ULP ) .TP 18 .ti +4 INTEGER LDS, NBULGE, JBLK, LDH, N .TP 18 .ti +4 REAL ULP .TP 18 .ti +4 REAL S(LDS,*), H(LDH,*) .SH PURPOSE SLAMSH sends multiple shifts through a small (single node) matrix to see how consecutive small subdiagonal elements are modified by subsequent shifts in an effort to maximize the number of bulges that can be sent through. SLAMSH should only be called when there are multiple shifts/bulges (NBULGE > 1) and the first shift is starting in the middle of an unreduced Hessenberg matrix because of two or more consecutive small subdiagonal elements. .br .SH ARGUMENTS .TP 8 S (local input/output) REAL array, (LDS,*) On entry, the matrix of shifts. Only the 2x2 diagonal of S is referenced. It is assumed that S has JBLK double shifts (size 2). On exit, the data is rearranged in the best order for applying. .TP 8 LDS (local input) INTEGER On entry, the leading dimension of S. Unchanged on exit. 1 < NBULGE <= JBLK <= LDS/2 .TP 8 NBULGE (local input/output) INTEGER On entry, the number of bulges to send through H ( >1 ). NBULGE should be less than the maximum determined (JBLK). 1 < NBULGE <= JBLK <= LDS/2 On exit, the maximum number of bulges that can be sent through. .TP 8 JBLK (local input) INTEGER On entry, the number of shifts determined for S. Unchanged on exit. 
.TP 8 H (local input/output) REAL array (LDH,N) On entry, the local matrix to apply the shifts on. H should be aligned so that the starting row is 2. On exit, the data is destroyed. .TP 8 LDH (local input) INTEGER On entry, the leading dimension of H. Unchanged on exit. .TP 8 N (local input) INTEGER On entry, the size of H. If all the bulges are expected to go through, N should be at least 4*NBULGE+2. Otherwise, NBULGE may be reduced by this routine. .TP 8 ULP (local input) REAL On entry, machine precision. Unchanged on exit. Implemented by: G. Henry, May 1, 1997 scalapack-doc-1.5/man/manl/slaref.l .TH SLAREF l "12 May 1997" "LAPACK version 1.5" "LAPACK auxiliary routine (version 1.5)" .SH NAME SLAREF - applies one or several Householder reflectors of size 3 to one or two matrices (if column is specified) on either their rows or columns .SH SYNOPSIS .TP 19 SUBROUTINE SLAREF( TYPE, A, LDA, WANTZ, Z, LDZ, BLOCK, IROW1, ICOL1, ISTART, ISTOP, ITMP1, ITMP2, LILOZ, LIHIZ, VECS, V2, V3, T1, T2, T3 ) .TP 19 .ti +4 LOGICAL BLOCK, WANTZ .TP 19 .ti +4 CHARACTER TYPE .TP 19 .ti +4 INTEGER ICOL1, IROW1, ISTART, ISTOP, ITMP1, ITMP2, LDA, LDZ, LIHIZ, LILOZ .TP 19 .ti +4 REAL T1, T2, T3, V2, V3 .TP 19 .ti +4 REAL A( LDA, * ), VECS( * ), Z( LDZ, * ) .SH PURPOSE SLAREF applies one or several Householder reflectors of size 3 to one or two matrices (if column is specified) on either their rows or columns. .SH ARGUMENTS .TP 8 TYPE (global input) CHARACTER*1 If 'R': Apply reflectors to the rows of the matrix (apply from left) Otherwise: Apply reflectors to the columns of the matrix Unchanged on exit. .TP 8 A (global input/output) REAL array, (LDA,*) On entry, the matrix to receive the reflections. The updated matrix on exit. .TP 8 LDA (local input) INTEGER On entry, the leading dimension of A. Unchanged on exit. .TP 8 WANTZ (global input) LOGICAL If .TRUE., then apply any column reflections to Z as well. 
If .FALSE., then do no additional work on Z. .TP 8 Z (global input/output) REAL array, (LDZ,*) On entry, the second matrix to receive column reflections. This is changed only if WANTZ is set. .TP 8 LDZ (local input) INTEGER On entry, the leading dimension of Z. Unchanged on exit. .TP 8 BLOCK (global input) LOGICAL If .TRUE., then apply several reflectors at once and read their data from the VECS array. If .FALSE., apply the single reflector given by V2, V3, T1, T2, and T3. .TP 8 IROW1 (local input/output) INTEGER On entry, the local row element of A. Undefined on output. .TP 8 ICOL1 (local input/output) INTEGER On entry, the local column element of A. Undefined on output. .TP 8 ISTART (global input) INTEGER Specifies the "number" of the first reflector. This is used as an index into VECS if BLOCK is set. ISTART is ignored if BLOCK is .FALSE.. .TP 8 ISTOP (global input) INTEGER Specifies the "number" of the last reflector. This is used as an index into VECS if BLOCK is set. ISTOP is ignored if BLOCK is .FALSE.. .TP 8 ITMP1 (local input) INTEGER Starting range into A. For rows, this is the local first column. For columns, this is the local first row. .TP 8 ITMP2 (local input) INTEGER Ending range into A. For rows, this is the local last column. For columns, this is the local last row. LILOZ LIHIZ (local input) INTEGER These serve the same purpose as ITMP1,ITMP2 but for Z when WANTZ is set. .TP 8 VECS (global input) REAL array of size 3*N (matrix size) This holds the size 3 reflectors one after another and this is only accessed when BLOCK is .TRUE. V2 V3 T1 T2 T3 (global input/output) REAL This holds information on a single size 3 Householder reflector and is read when BLOCK is .FALSE., and overwritten when BLOCK is .TRUE. Implemented by: G. 
Henry, May 1, 1997 scalapack-doc-1.5/man/manl/slasorte.l0100644000056400000620000000262106335610666017264 0ustar pfrauenfstaff.TH SLASORTE l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME SLASORTE - sort eigenpairs so that real eigenpairs are together and complex are together .SH SYNOPSIS .TP 20 SUBROUTINE SLASORTE ( S, LDS, J, OUT, INFO ) .TP 20 .ti +4 INTEGER INFO, J, LDS .TP 20 .ti +4 REAL OUT( J, * ), S( LDS, * ) .SH PURPOSE SLASORTE sorts eigenpairs so that real eigenpairs are together and complex are together. This way one can employ 2x2 shifts easily since every 2nd subdiagonal is guaranteed to be zero. .br This routine does no parallel work and makes no calls. .br .SH ARGUMENTS .TP 8 S (local input/output) REAL array, dimension LDS On entry, a matrix already in Schur form. On exit, the diagonal blocks of S have been rewritten to pair the eigenvalues. The resulting matrix is no longer similar to the input. .TP 8 LDS (local input) INTEGER On entry, the leading dimension of the local array S. Unchanged on exit. .TP 8 J (local input) INTEGER On entry, the order of the matrix S. Unchanged on exit. .TP 8 OUT (local input/output) REAL array, dimension Jx2 This is the work buffer required by this routine. .TP 8 INFO (local input) INTEGER This is set if the input matrix had an odd number of real eigenvalues and things couldn't be paired or if the input matrix S was not originally in Schur form. 0 indicates successful completion. Implemented by: G. 
Henry, May 1, 1997 scalapack-doc-1.5/man/manl/slasrt2.l .TH SLASRT2 l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME SLASRT2 - sort the numbers in D in increasing order (if ID = 'I') or in decreasing order (if ID = 'D') .SH SYNOPSIS .TP 20 SUBROUTINE SLASRT2( ID, N, D, KEY, INFO ) .TP 20 .ti +4 CHARACTER ID .TP 20 .ti +4 INTEGER INFO, N .TP 20 .ti +4 INTEGER KEY( * ) .TP 20 .ti +4 REAL D( * ) .SH PURPOSE Sort the numbers in D in increasing order (if ID = 'I') or in decreasing order (if ID = 'D'). Use Quick Sort, reverting to Insertion sort on arrays of size <= 20. Dimension of STACK limits N to about 2**32. .br .SH ARGUMENTS .TP 8 ID (input) CHARACTER*1 = 'I': sort D in increasing order; .br = 'D': sort D in decreasing order. .TP 8 N (input) INTEGER The length of the array D. .TP 8 D (input/output) REAL array, dimension (N) On entry, the array to be sorted. On exit, D has been sorted into increasing order (D(1) <= ... <= D(N) ) or into decreasing order (D(1) >= ... >= D(N) ), depending on ID. 
.TP 8 KEY (input/output) INTEGER array, dimension (N) On entry, KEY contains a key to each of the entries in D(). Typically, KEY(I) = I for all I. On exit, KEY is permuted in exactly the same manner as D() was permuted from input to output. Therefore, if KEY(I) = I for all I upon input, then D_out(I) = D_in(KEY(I)). .TP 8 INFO (output) INTEGER = 0: successful exit .br < 0: if INFO = -i, the i-th argument had an illegal value scalapack-doc-1.5/man/manl/spttrsv.l .TH SPTTRSV l "12 May 1997" "modified LAPACK routine" "LAPACK routine (version 2.0)" .SH NAME SPTTRSV - solve one of the triangular systems L**T * X = B, or L * X = B .SH SYNOPSIS .TP 20 SUBROUTINE SPTTRSV( TRANS, N, NRHS, D, E, B, LDB, INFO ) .TP 20 .ti +4 CHARACTER TRANS .TP 20 .ti +4 INTEGER INFO, LDB, N, NRHS .TP 20 .ti +4 REAL D( * ) .TP 20 .ti +4 REAL B( LDB, * ), E( * ) .SH PURPOSE SPTTRSV solves one of the triangular systems L**T * X = B, or L * X = B, where L is the Cholesky factor of a symmetric positive definite tridiagonal matrix A such that .br A = L*D*L**T (computed by SPTTRF). .br .SH ARGUMENTS .TP 8 TRANS (input) CHARACTER Specifies the form of the system of equations: .br = 'N': L * X = B (No transpose) .br = 'T': L**T * X = B (Transpose) .TP 8 N (input) INTEGER The order of the tridiagonal matrix A. N >= 0. .TP 8 NRHS (input) INTEGER The number of right hand sides, i.e., the number of columns of the matrix B. NRHS >= 0. .TP 8 D (input) REAL array, dimension (N) The n diagonal elements of the diagonal matrix D from the factorization computed by SPTTRF. .TP 8 E (input) REAL array, dimension (N-1) The (n-1) off-diagonal elements of the unit bidiagonal factor L from the factorization computed by SPTTRF. .TP 8 B (input/output) REAL array, dimension (LDB,NRHS) On entry, the right hand side matrix B. On exit, the solution matrix X. .TP 8 LDB (input) INTEGER The leading dimension of the array B. 
LDB >= max(1,N). .TP 8 INFO (output) INTEGER = 0: successful exit .br < 0: if INFO = -i, the i-th argument had an illegal value scalapack-doc-1.5/man/manl/sstein2.l0100644000056400000620000000666706335610666017035 0ustar pfrauenfstaff.TH SSTEIN2 l "12 May 1997" "LAPACK version 1.5" "LAPACK routine (version 1.5)" .SH NAME SSTEIN2 - compute the eigenvectors of a real symmetric tridiagonal matrix T corresponding to specified eigenvalues, using inverse iteration .SH SYNOPSIS .TP 20 SUBROUTINE SSTEIN2( N, D, E, M, W, IBLOCK, ISPLIT, ORFAC, Z, LDZ, WORK, IWORK, IFAIL, INFO ) .TP 20 .ti +4 INTEGER INFO, LDZ, M, N .TP 20 .ti +4 REAL ORFAC .TP 20 .ti +4 INTEGER IBLOCK( * ), IFAIL( * ), ISPLIT( * ), IWORK( * ) .TP 20 .ti +4 REAL D( * ), E( * ), W( * ), WORK( * ), Z( LDZ, * ) .SH PURPOSE SSTEIN2 computes the eigenvectors of a real symmetric tridiagonal matrix T corresponding to specified eigenvalues, using inverse iteration. The maximum number of iterations allowed for each eigenvector is specified by an internal parameter MAXITS (currently set to 5). .SH ARGUMENTS .TP 8 N (input) INTEGER The order of the matrix. N >= 0. .TP 8 D (input) REAL array, dimension (N) The n diagonal elements of the tridiagonal matrix T. .TP 8 E (input) REAL array, dimension (N) The (n-1) subdiagonal elements of the tridiagonal matrix T, in elements 1 to N-1. E(N) need not be set. .TP 8 M (input) INTEGER The number of eigenvectors to be found. 0 <= M <= N. .TP 8 W (input) REAL array, dimension (N) The first M elements of W contain the eigenvalues for which eigenvectors are to be computed. The eigenvalues should be grouped by split-off block and ordered from smallest to largest within the block. ( The output array W from SSTEBZ with ORDER = 'B' is expected here. 
) .TP 8 IBLOCK (input) INTEGER array, dimension (N) The submatrix indices associated with the corresponding eigenvalues in W; IBLOCK(i)=1 if eigenvalue W(i) belongs to the first submatrix from the top, =2 if W(i) belongs to the second submatrix, etc. ( The output array IBLOCK from SSTEBZ is expected here. ) .TP 8 ISPLIT (input) INTEGER array, dimension (N) The splitting points, at which T breaks up into submatrices. The first submatrix consists of rows/columns 1 to ISPLIT( 1 ), the second of rows/columns ISPLIT( 1 )+1 through ISPLIT( 2 ), etc. ( The output array ISPLIT from SSTEBZ is expected here. ) .TP 8 ORFAC (input) REAL ORFAC specifies which eigenvectors should be orthogonalized. Eigenvectors that correspond to eigenvalues which are within ORFAC*||T|| of each other are to be orthogonalized. .TP 8 Z (output) REAL array, dimension (LDZ, M) The computed eigenvectors. The eigenvector associated with the eigenvalue W(i) is stored in the i-th column of Z. Any vector which fails to converge is set to its current iterate after MAXITS iterations. .TP 8 LDZ (input) INTEGER The leading dimension of the array Z. LDZ >= max(1,N). .TP 8 WORK (workspace) REAL array, dimension (5*N) .TP 8 IWORK (workspace) INTEGER array, dimension (N) .TP 8 IFAIL (output) INTEGER array, dimension (M) On normal exit, all elements of IFAIL are zero. If one or more eigenvectors fail to converge after MAXITS iterations, then their indices are stored in array IFAIL. .TP 8 INFO (output) INTEGER = 0: successful exit. .br < 0: if INFO = -i, the i-th argument had an illegal value .br > 0: if INFO = i, then i eigenvectors failed to converge in MAXITS iterations. Their indices are stored in array IFAIL. .SH PARAMETERS .TP 8 MAXITS INTEGER, default = 5 The maximum number of iterations performed. .TP 8 EXTRA INTEGER, default = 2 The number of iterations performed after norm growth criterion is satisfied, should be at least 1. 
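The inverse iteration that SSTEIN2 performs (at most MAXITS solves per eigenvector, each followed by normalization) can be sketched as follows. This is a hedged Python illustration, not the LAPACK implementation: `inverse_iteration` is a hypothetical name, the tridiagonal solve is unpivoted, the starting vector is fixed, and the ORFAC reorthogonalization and IFAIL bookkeeping are omitted:

```python
import math

def inverse_iteration(d, e, w, maxits=5):
    """Eigenvector of the symmetric tridiagonal matrix T (diagonal d,
    off-diagonal e) for the eigenvalue estimate w, by repeatedly solving
    (T - w*I) x = v and normalizing.  Simplified sketch of the scheme
    SSTEIN2 uses; no pivoting, no reorthogonalization against other
    eigenvectors (cf. ORFAC), no failure reporting (cf. IFAIL)."""
    n = len(d)
    v = [1.0 / math.sqrt(n)] * n            # starting vector
    shift = [di - w for di in d]            # diagonal of T - w*I
    for _ in range(maxits):
        c = list(shift)                     # pivots of the LU sweep
        x = list(v)
        for i in range(1, n):               # forward elimination
            m = e[i - 1] / c[i - 1]
            c[i] -= m * e[i - 1]
            x[i] -= m * x[i - 1]
        x[n - 1] /= c[n - 1]                # back substitution
        for i in range(n - 2, -1, -1):
            x[i] = (x[i] - e[i] * x[i + 1]) / c[i]
        nrm = math.sqrt(sum(t * t for t in x))
        v = [t / nrm for t in x]            # normalized iterate
    return v
```

For example, with T = tridiag(-1, 2, -1) of order 3 and w near the smallest eigenvalue 2 - sqrt(2), the iterate converges to (1/2, sqrt(2)/2, 1/2); the closer w is to a true eigenvalue, the faster each solve amplifies the wanted eigenvector component.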
scalapack-doc-1.5/man/manl/ssteqr2.l0100644000056400000620000000503706335610666017037 0ustar pfrauenfstaff.TH SSTEQR2 l "12 May 1997" "modified LAPACK routine" "LAPACK routine (version 2.0)" .SH NAME SSTEQR2 - a modified version of LAPACK routine SSTEQR .SH SYNOPSIS .TP 20 SUBROUTINE SSTEQR2( COMPZ, N, D, E, Z, LDZ, NR, WORK, INFO ) .TP 20 .ti +4 CHARACTER COMPZ .TP 20 .ti +4 INTEGER INFO, LDZ, N, NR .TP 20 .ti +4 REAL D( * ), E( * ), WORK( * ), Z( LDZ, * ) .SH PURPOSE SSTEQR2 is a modified version of LAPACK routine SSTEQR. SSTEQR2 computes all eigenvalues and, optionally, eigenvectors of a symmetric tridiagonal matrix using the implicit QL or QR method. SSTEQR2 has been modified to allow each ScaLAPACK process running SSTEQR2 to perform updates on a distributed matrix Q. Proper usage of SSTEQR2 can be gleaned from examination of ScaLAPACK's PSSYEV. .br .SH ARGUMENTS .TP 8 COMPZ (input) CHARACTER*1 = 'N': Compute eigenvalues only. .br = 'I': Compute eigenvalues and eigenvectors of the tridiagonal matrix. Z must be initialized to the identity matrix by PDLASET or DLASET prior to entering this subroutine. .TP 8 N (input) INTEGER The order of the matrix. N >= 0. .TP 8 D (input/output) REAL array, dimension (N) On entry, the diagonal elements of the tridiagonal matrix. On exit, if INFO = 0, the eigenvalues in ascending order. .TP 8 E (input/output) REAL array, dimension (N-1) On entry, the (n-1) subdiagonal elements of the tridiagonal matrix. On exit, E has been destroyed. .TP 8 Z (local input/local output) REAL array, global dimension (N, N), local dimension (LDZ, NR). On entry, if COMPZ = 'V', then Z contains the orthogonal matrix used in the reduction to tridiagonal form. On exit, if INFO = 0, then if COMPZ = 'V', Z contains the orthonormal eigenvectors of the original symmetric matrix, and if COMPZ = 'I', Z contains the orthonormal eigenvectors of the symmetric tridiagonal matrix. If COMPZ = 'N', then Z is not referenced. .TP 8 LDZ (input) INTEGER The leading dimension of the array Z. 
LDZ >= 1, and if eigenvectors are desired, then LDZ >= max(1,N). .TP 8 NR (input) INTEGER NR = MAX(1, NUMROC( N, NB, MYPROW, 0, NPROCS ) ). If COMPZ = 'N', then NR is not referenced. .TP 8 WORK (workspace) REAL array, dimension (max(1,2*N-2)) If COMPZ = 'N', then WORK is not referenced. .TP 8 INFO (output) INTEGER = 0: successful exit .br < 0: if INFO = -i, the i-th argument had an illegal value .br > 0: the algorithm has failed to find all the eigenvalues in a total of 30*N iterations; if INFO = i, then i elements of E have not converged to zero; on exit, D and E contain the elements of a symmetric tridiagonal matrix which is orthogonally similar to the original matrix. scalapack-doc-1.5/man/manl/zdbtf2.l0100644000056400000620000000464106335610666016627 0ustar pfrauenfstaff.TH ZDBTF2 l "12 May 1997" "modified LAPACK routine" "LAPACK routine (version 2.0)" .SH NAME ZDBTF2 - compute an LU factorization of a complex m-by-n band matrix A without using partial pivoting or row interchanges .SH SYNOPSIS .TP 19 SUBROUTINE ZDBTF2( M, N, KL, KU, AB, LDAB, INFO ) .TP 19 .ti +4 INTEGER INFO, KL, KU, LDAB, M, N .TP 19 .ti +4 COMPLEX*16 AB( LDAB, * ) .SH PURPOSE ZDBTF2 computes an LU factorization of a complex m-by-n band matrix A without using partial pivoting or row interchanges. This is the unblocked version of the algorithm, calling Level 2 BLAS. .SH ARGUMENTS .TP 8 M (input) INTEGER The number of rows of the matrix A. M >= 0. .TP 8 N (input) INTEGER The number of columns of the matrix A. N >= 0. .TP 8 KL (input) INTEGER The number of subdiagonals within the band of A. KL >= 0. .TP 8 KU (input) INTEGER The number of superdiagonals within the band of A. KU >= 0. .TP 8 AB (input/output) COMPLEX*16 array, dimension (LDAB,N) On entry, the matrix A in band storage, in rows KL+1 to 2*KL+KU+1; rows 1 to KL of the array need not be set. 
The j-th column of A is stored in the j-th column of the array AB as follows: AB(kl+ku+1+i-j,j) = A(i,j) for max(1,j-ku)<=i<=min(m,j+kl) On exit, details of the factorization: U is stored as an upper triangular band matrix with KL+KU superdiagonals in rows 1 to KL+KU+1, and the multipliers used during the factorization are stored in rows KL+KU+2 to 2*KL+KU+1. See below for further details. .TP 8 LDAB (input) INTEGER The leading dimension of the array AB. LDAB >= 2*KL+KU+1. .TP 8 INFO (output) INTEGER = 0: successful exit .br < 0: if INFO = -i, the i-th argument had an illegal value .br > 0: if INFO = +i, U(i,i) is exactly zero. The factorization has been completed, but the factor U is exactly singular, and division by zero will occur if it is used to solve a system of equations. .SH FURTHER DETAILS The band storage scheme is illustrated by the following example, when M = N = 6, KL = 2, KU = 1: .br On entry: On exit: .br * a12 a23 a34 a45 a56 * u12 u23 u34 u45 u56 a11 a22 a33 a44 a55 a66 u11 u22 u33 u44 u55 u66 a21 a32 a43 a54 a65 * m21 m32 m43 m54 m65 * a31 a42 a53 a64 * * m31 m42 m53 m64 * * Array elements marked * are not used by the routine. .br scalapack-doc-1.5/man/manl/zdbtrf.l0100644000056400000620000000435506335610666016731 0ustar pfrauenfstaff.TH ZDBTRF l "12 May 1997" "modified LAPACK routine" "LAPACK routine (version 2.0)" .SH NAME ZDBTRF - compute an LU factorization of a complex m-by-n band matrix A without using partial pivoting or row interchanges .SH SYNOPSIS .TP 19 SUBROUTINE ZDBTRF( M, N, KL, KU, AB, LDAB, INFO ) .TP 19 .ti +4 INTEGER INFO, KL, KU, LDAB, M, N .TP 19 .ti +4 COMPLEX*16 AB( LDAB, * ) .SH PURPOSE ZDBTRF computes an LU factorization of a complex m-by-n band matrix A without using partial pivoting or row interchanges. This is the blocked version of the algorithm, calling Level 3 BLAS. .SH ARGUMENTS .TP 8 M (input) INTEGER The number of rows of the matrix A. M >= 0. 
.TP 8 N (input) INTEGER The number of columns of the matrix A. N >= 0. .TP 8 KL (input) INTEGER The number of subdiagonals within the band of A. KL >= 0. .TP 8 KU (input) INTEGER The number of superdiagonals within the band of A. KU >= 0. .TP 8 AB (input/output) COMPLEX*16 array, dimension (LDAB,N) On entry, the matrix A in band storage, in rows KL+1 to 2*KL+KU+1; rows 1 to KL of the array need not be set. The j-th column of A is stored in the j-th column of the array AB as follows: AB(kl+ku+1+i-j,j) = A(i,j) for max(1,j-ku)<=i<=min(m,j+kl) On exit, details of the factorization: U is stored as an upper triangular band matrix with KL+KU superdiagonals in rows 1 to KL+KU+1, and the multipliers used during the factorization are stored in rows KL+KU+2 to 2*KL+KU+1. See below for further details. .TP 8 LDAB (input) INTEGER The leading dimension of the array AB. LDAB >= 2*KL+KU+1. .TP 8 INFO (output) INTEGER = 0: successful exit .br < 0: if INFO = -i, the i-th argument had an illegal value .br > 0: if INFO = +i, U(i,i) is exactly zero. The factorization has been completed, but the factor U is exactly singular, and division by zero will occur if it is used to solve a system of equations. .SH FURTHER DETAILS The band storage scheme is illustrated by the following example, when M = N = 6, KL = 2, KU = 1: .br On entry: On exit: .br * a12 a23 a34 a45 a56 * u12 u23 u34 u45 u56 a11 a22 a33 a44 a55 a66 u11 u22 u33 u44 u55 u66 a21 a32 a43 a54 a65 * m21 m32 m43 m54 m65 * a31 a42 a53 a64 * * m31 m42 m53 m64 * * Array elements marked * are not used by the routine. 
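The AB(kl+ku+1+i-j,j) storage formula above can be checked with a small sketch. Python is used here purely for illustration; `to_band_storage` is a hypothetical helper, not part of ScaLAPACK, and 0-based list indexing shifts the man page's 1-based row formula by one:

```python
def to_band_storage(A, kl, ku):
    """Pack a dense m-by-n matrix A (list of rows) into the (2*KL+KU+1)-by-N
    band layout that ZDBTRF/ZDBTF2 expect.  In the man page's 1-based terms,
    AB(kl+ku+1+i-j, j) = A(i, j) for max(1, j-ku) <= i <= min(m, j+kl),
    and rows 1..KL are left unset for the factorization.  With 0-based
    Python indices the row formula becomes kl+ku+i-j."""
    m, n = len(A), len(A[0])
    ldab = 2 * kl + ku + 1                    # LDAB >= 2*KL+KU+1
    AB = [[0.0] * n for _ in range(ldab)]
    for j in range(n):                        # 0-based column index
        for i in range(max(0, j - ku), min(m, j + kl + 1)):
            AB[kl + ku + i - j][j] = A[i][j]  # 1-based row kl+ku+1+i-j
    return AB
```

With M = N = 6, KL = 2, KU = 1 this reproduces the "On entry" picture above: a11 lands in AB row KL+KU+1 = 4, a12 in row 3, and a21 in row 5 (1-based), with the * positions left at zero.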
.br scalapack-doc-1.5/man/manl/zdttrf.l0100644000056400000620000000332206335610666016744 0ustar pfrauenfstaff.TH ZDTTRF l "12 May 1997" "modified LAPACK routine" "LAPACK routine (version 2.0)" .SH NAME ZDTTRF - compute an LU factorization of a complex tridiagonal matrix A using elimination without partial pivoting .SH SYNOPSIS .TP 19 SUBROUTINE ZDTTRF( N, DL, D, DU, INFO ) .TP 19 .ti +4 INTEGER INFO, N .TP 19 .ti +4 COMPLEX*16 D( * ), DL( * ), DU( * ) .SH PURPOSE ZDTTRF computes an LU factorization of a complex tridiagonal matrix A using elimination without partial pivoting. The factorization has the form .br A = L * U .br where L is a product of unit lower bidiagonal .br matrices and U is upper triangular with nonzeros in only the main diagonal and first superdiagonal. .br .SH ARGUMENTS .TP 8 N (input) INTEGER The order of the matrix A. N >= 0. .TP 8 DL (input/output) COMPLEX*16 array, dimension (N-1) On entry, DL must contain the (n-1) subdiagonal elements of A. On exit, DL is overwritten by the (n-1) multipliers that define the matrix L from the LU factorization of A. .TP 8 D (input/output) COMPLEX*16 array, dimension (N) On entry, D must contain the diagonal elements of A. On exit, D is overwritten by the n diagonal elements of the upper triangular matrix U from the LU factorization of A. .TP 8 DU (input/output) COMPLEX*16 array, dimension (N-1) On entry, DU must contain the (n-1) superdiagonal elements of A. On exit, DU is overwritten by the (n-1) elements of the first superdiagonal of U. .TP 8 INFO (output) INTEGER = 0: successful exit .br < 0: if INFO = -i, the i-th argument had an illegal value .br > 0: if INFO = i, U(i,i) is exactly zero. The factorization has been completed, but the factor U is exactly singular, and division by zero will occur if it is used to solve a system of equations. 
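The elimination ZDTTRF documents — unit lower bidiagonal L, upper bidiagonal U — reduces to a two-term recurrence over the diagonal. The following Python sketch (hypothetical name `zdttrf_sketch`, not the LAPACK source) shows how DL and D are overwritten and how a zero pivot maps to INFO > 0; note that without pivoting the first superdiagonal of U coincides with the original DU:

```python
def zdttrf_sketch(dl, d, du):
    """Tridiagonal LU without pivoting, A = L*U, in the style documented
    for ZDTTRF: dl is overwritten by the multipliers of the unit lower
    bidiagonal L and d by the diagonal of U.  In this no-pivoting variant
    the first superdiagonal of U equals du, so du is left unchanged.
    Returns 0 on success, or the 1-based index i of the first pivot
    U(i,i) that is exactly zero (mirroring INFO > 0)."""
    n = len(d)
    for i in range(n - 1):
        if d[i] == 0:
            return i + 1                 # zero pivot U(i,i)
        dl[i] = dl[i] / d[i]             # multiplier L(i+1, i)
        d[i + 1] = d[i + 1] - dl[i] * du[i]
    if n > 0 and d[n - 1] == 0:
        return n                         # last pivot is zero
    return 0                             # successful exit
```

ZDTTRSV can then solve with these factors by one forward sweep with L and one backward sweep with U.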
scalapack-doc-1.5/man/manl/zdttrsv.l0100644000056400000620000000352706335610666017156 0ustar pfrauenfstaff.TH ZDTTRSV l "12 May 1997" "modified LAPACK routine" "LAPACK routine (version 2.0)" .SH NAME ZDTTRSV - solve one of the systems of equations L * X = B, L**T * X = B, or L**H * X = B, .SH SYNOPSIS .TP 20 SUBROUTINE ZDTTRSV( UPLO, TRANS, N, NRHS, DL, D, DU, B, LDB, INFO ) .TP 20 .ti +4 CHARACTER UPLO, TRANS .TP 20 .ti +4 INTEGER INFO, LDB, N, NRHS .TP 20 .ti +4 COMPLEX*16 B( LDB, * ), D( * ), DL( * ), DU( * ) .SH PURPOSE ZDTTRSV solves one of the systems of equations L * X = B, L**T * X = B, or L**H * X = B, U * X = B, U**T * X = B, or U**H * X = B, .br with factors of the tridiagonal matrix A from the LU factorization computed by ZDTTRF. .br .SH ARGUMENTS .TP 8 UPLO (input) CHARACTER*1 Specifies whether to solve with L or U. .TP 8 TRANS (input) CHARACTER Specifies the form of the system of equations: .br = 'N': A * X = B (No transpose) .br = 'T': A**T * X = B (Transpose) .br = 'C': A**H * X = B (Conjugate transpose) .TP 8 N (input) INTEGER The order of the matrix A. N >= 0. .TP 8 NRHS (input) INTEGER The number of right hand sides, i.e., the number of columns of the matrix B. NRHS >= 0. .TP 8 DL (input) COMPLEX*16 array, dimension (N-1) The (n-1) multipliers that define the matrix L from the LU factorization of A. .TP 8 D (input) COMPLEX*16 array, dimension (N) The n diagonal elements of the upper triangular matrix U from the LU factorization of A. .TP 8 DU (input) COMPLEX*16 array, dimension (N-1) The (n-1) elements of the first superdiagonal of U. .TP 8 B (input/output) COMPLEX*16 array, dimension (LDB,NRHS) On entry, the right hand side matrix B. On exit, B is overwritten by the solution matrix X. .TP 8 LDB (input) INTEGER The leading dimension of the array B. LDB >= max(1,N). 
.TP 8 INFO (output) INTEGER = 0: successful exit .br < 0: if INFO = -i, the i-th argument had an illegal value scalapack-doc-1.5/man/manl/zpttrsv.l0100644000056400000620000000413306335610666017164 0ustar pfrauenfstaff.TH ZPTTRSV l "12 May 1997" "modified LAPACK routine" "LAPACK routine (version 2.0)" .SH NAME ZPTTRSV - solve one of the triangular systems L * X = B, or L**H * X = B, .SH SYNOPSIS .TP 20 SUBROUTINE ZPTTRSV( UPLO, TRANS, N, NRHS, D, E, B, LDB, INFO ) .TP 20 .ti +4 CHARACTER UPLO, TRANS .TP 20 .ti +4 INTEGER INFO, LDB, N, NRHS .TP 20 .ti +4 DOUBLE PRECISION D( * ) .TP 20 .ti +4 COMPLEX*16 B( LDB, * ), E( * ) .SH PURPOSE ZPTTRSV solves one of the triangular systems L * X = B, or L**H * X = B, U * X = B, or U**H * X = B, .br where L or U is the Cholesky factor of a Hermitian positive definite tridiagonal matrix A such that .br A = U**H*D*U or A = L*D*L**H (computed by ZPTTRF). .br .SH ARGUMENTS .TP 8 UPLO (input) CHARACTER*1 Specifies whether the superdiagonal or the subdiagonal of the tridiagonal matrix A is stored and the form of the factorization: .br = 'U': E is the superdiagonal of U, and A = U'*D*U; .br = 'L': E is the subdiagonal of L, and A = L*D*L'. (The two forms are equivalent if A is real.) .TP 8 TRANS (input) CHARACTER Specifies the form of the system of equations: .br = 'N': L * X = B (No transpose) .br = 'N': U * X = B (No transpose) .br = 'C': U**H * X = B (Conjugate transpose) .br = 'C': L**H * X = B (Conjugate transpose) .TP 8 N (input) INTEGER The order of the tridiagonal matrix A. N >= 0. .TP 8 NRHS (input) INTEGER The number of right hand sides, i.e., the number of columns of the matrix B. NRHS >= 0. .TP 8 D (input) DOUBLE PRECISION array, dimension (N) The n diagonal elements of the diagonal matrix D from the factorization computed by ZPTTRF. .TP 8 E (input) COMPLEX*16 array, dimension (N-1) The (n-1) off-diagonal elements of the unit bidiagonal factor U or L from the factorization computed by ZPTTRF (see UPLO). 
.TP 8 B (input/output) COMPLEX*16 array, dimension (LDB,NRHS) On entry, the right hand side matrix B. On exit, the solution matrix X. .TP 8 LDB (input) INTEGER The leading dimension of the array B. LDB >= max(1,N). .TP 8 INFO (output) INTEGER = 0: successful exit .br < 0: if INFO = -i, the i-th argument had an illegal value scalapack-doc-1.5/pblasqref.ps0100644000056400000620000026616306710272402016076 0ustar pfrauenfstaff
[PostScript payload omitted: pblasqref.ps, the PBLAS Quick Reference Guide, 2 pages, generated from pblasqref.dvi by dvips 5.526.]
01FFE00000FFF000007FF80001FFE00000FFF000007FF80001FFE00000FFF000007FF800 01FFE00000FFF000007FF800FFFFFFC07FFFFFE03FFFFFF0FFFFFFC07FFFFFE03FFFFFF0 FFFFFFC07FFFFFE03FFFFFF0FFFFFFC07FFFFFE03FFFFFF0FFFFFFC07FFFFFE03FFFFFF0 5C2E7CAD63>I<007FC001FFC00000FFFFC00FFFF80000FFFFC03FFFFE0000FFFFC0FFFF FF0000FFFFC1FC07FF8000FFFFC3E003FFC00003FFC7C001FFC00001FFCF0001FFE00001 FFDE0000FFE00001FFDC0000FFE00001FFFC0000FFF00001FFF80000FFF00001FFF00000 FFF00001FFF00000FFF00001FFF00000FFF00001FFE00000FFF00001FFE00000FFF00001 FFE00000FFF00001FFE00000FFF00001FFE00000FFF00001FFE00000FFF00001FFE00000 FFF00001FFE00000FFF00001FFE00000FFF00001FFE00000FFF00001FFE00000FFF00001 FFE00000FFF00001FFE00000FFF00001FFE00000FFF00001FFE00000FFF00001FFE00000 FFF00001FFE00000FFF00001FFE00000FFF00001FFE00000FFF00001FFE00000FFF00001 FFE00000FFF00001FFE00000FFF00001FFE00000FFF00001FFE00000FFF00001FFE00000 FFF00001FFE00000FFF000FFFFFFC07FFFFFE0FFFFFFC07FFFFFE0FFFFFFC07FFFFFE0FF FFFFC07FFFFFE0FFFFFFC07FFFFFE03B2E7CAD42>I<00000FFF0000000000FFFFF00000 0007FFFFFE0000001FFFFFFF8000003FFC03FFC00000FFE0007FF00001FF80001FF80003 FF00000FFC0007FE000007FE000FFE000007FF000FFC000003FF001FFC000003FF803FFC 000003FFC03FF8000001FFC03FF8000001FFC07FF8000001FFE07FF8000001FFE07FF800 0001FFE0FFF8000001FFF0FFF8000001FFF0FFF8000001FFF0FFF8000001FFF0FFF80000 01FFF0FFF8000001FFF0FFF8000001FFF0FFF8000001FFF0FFF8000001FFF0FFF8000001 FFF07FF8000001FFE07FF8000001FFE07FF8000001FFE07FF8000001FFE03FFC000003FF C03FFC000003FFC01FFC000003FF801FFE000007FF800FFE000007FF0007FF00000FFE00 03FF80001FFC0001FFC0003FF80000FFE0007FF000007FFC03FFE000001FFFFFFF800000 07FFFFFE00000000FFFFF0000000000FFF000000342E7DAD3B>I<007FC00FFC000000FF FFC07FFFC00000FFFFC3FFFFF00000FFFFCFFFFFFC0000FFFFDFF01FFF0000FFFFFF8007 FF800003FFFE0003FFC00001FFF80000FFE00001FFF00000FFF00001FFE000007FF80001 FFE000003FFC0001FFE000003FFC0001FFE000003FFE0001FFE000001FFE0001FFE00000 1FFE0001FFE000001FFF0001FFE000001FFF0001FFE000000FFF0001FFE000000FFF8001 
FFE000000FFF8001FFE000000FFF8001FFE000000FFF8001FFE000000FFF8001FFE00000 0FFF8001FFE000000FFF8001FFE000000FFF8001FFE000000FFF8001FFE000000FFF8001 FFE000000FFF0001FFE000001FFF0001FFE000001FFF0001FFE000001FFE0001FFE00000 1FFE0001FFE000003FFC0001FFE000003FFC0001FFE000007FF80001FFF000007FF80001 FFF80000FFF00001FFFC0001FFE00001FFFE0003FFC00001FFFF000FFF800001FFFFE03F FE000001FFEFFFFFFC000001FFE3FFFFF0000001FFE0FFFF80000001FFE01FF800000001 FFE0000000000001FFE0000000000001FFE0000000000001FFE0000000000001FFE00000 00000001FFE0000000000001FFE0000000000001FFE0000000000001FFE0000000000001 FFE0000000000001FFE0000000000001FFE0000000000001FFE0000000000001FFE00000 00000001FFE00000000000FFFFFFC000000000FFFFFFC000000000FFFFFFC000000000FF FFFFC000000000FFFFFFC00000000039427CAD42>I<00FF803F8000FFFF80FFF000FFFF 83FFFC00FFFF87FFFE00FFFF8FC3FF00FFFF8F07FF0003FF9E0FFF8001FFBC0FFF8001FF B80FFF8001FFF80FFF8001FFF00FFF8001FFF007FF0001FFF007FF0001FFE003FE0001FF E000F80001FFE000000001FFE000000001FFC000000001FFC000000001FFC000000001FF C000000001FFC000000001FFC000000001FFC000000001FFC000000001FFC000000001FF C000000001FFC000000001FFC000000001FFC000000001FFC000000001FFC000000001FF C000000001FFC000000001FFC000000001FFC000000001FFC000000001FFC000000001FF C000000001FFC000000001FFC0000000FFFFFFE00000FFFFFFE00000FFFFFFE00000FFFF FFE00000FFFFFFE00000292E7CAD31>114 D<000FFF00E0007FFFE3E001FFFFFFE007FF FFFFE00FF801FFE01FC0003FE03F80000FE03F000007E07F000007E07F000003E0FF0000 03E0FF000003E0FF800003E0FFC00003E0FFF0000000FFFE000000FFFFF800007FFFFFC0 007FFFFFF0003FFFFFFC001FFFFFFF000FFFFFFF8007FFFFFFC003FFFFFFE000FFFFFFF0 003FFFFFF00003FFFFF800001FFFF8000000FFFC0000001FFC7800000FFCF8000007FCF8 000003FCFC000003FCFC000003FCFE000003F8FE000003F8FF000003F8FF800007F0FFC0 000FF0FFF0001FE0FFFC00FFC0FFFFFFFF80FC7FFFFE00F81FFFF800E003FF8000262E7C AD2F>I<007FE000003FF000FFFFE0007FFFF000FFFFE0007FFFF000FFFFE0007FFFF000 FFFFE0007FFFF000FFFFE0007FFFF00003FFE00001FFF00001FFE00000FFF00001FFE000 
00FFF00001FFE00000FFF00001FFE00000FFF00001FFE00000FFF00001FFE00000FFF000 01FFE00000FFF00001FFE00000FFF00001FFE00000FFF00001FFE00000FFF00001FFE000 00FFF00001FFE00000FFF00001FFE00000FFF00001FFE00000FFF00001FFE00000FFF000 01FFE00000FFF00001FFE00000FFF00001FFE00000FFF00001FFE00000FFF00001FFE000 00FFF00001FFE00000FFF00001FFE00000FFF00001FFE00000FFF00001FFE00000FFF000 01FFE00000FFF00001FFE00000FFF00001FFE00001FFF00001FFE00001FFF00001FFE000 01FFF00001FFE00003FFF00000FFE00007FFF00000FFE0000F7FF000007FE0001F7FF000 007FF0003E7FF800003FFC00FC7FFFE0001FFFFFF87FFFE00007FFFFE07FFFE00001FFFF 807FFFE000003FFE007FFFE03B2E7CAD42>117 D E /Fd 39 122 df<38787838081010204080050A79960B>39 D45 D<3078F06005047D830B>I<01E006300C1018101010301030106030603060306030C060 C060C060C040C0C080808180C10046003C000C157B9412>48 D<004000C000C003801D80 01800180030003000300030006000600060006000C000C000C000C001800FF800A157C94 12>I<0C1E1E0C0000000000003078F060070E7D8D0B>58 D<0001800001800003800003 80000780000780000B800013800013800021C00021C00041C00081C00081C00101C001FF C00201C00201C00401C00801C00801C01801E0FE07F815177E961A>65 D<03FFF000E03800E01C00E00C00E00C01C00C01C01C01C01C01C03803807003FFC003FF E00380700700300700380700380700380E00700E00700E00E00E01C01C0380FFFE001617 7E9619>I<003F0400E0880380580600380C00381C0010380010300010700010600000E0 0000E00000E00000C00000C00040C00040C00080E00080E0010060020030040018180007 E00016177A961A>I<03FFF000E01800E00C00E00600E00701C00301C00301C00301C003 03800703800703800703800707000E07000E07000C07001C0E00180E00300E00600E00C0 1C0380FFFC0018177E961B>I<003F0400E0880380580600380C00381C00103800103000 10700010600000E00000E00000E00000C01FF8C001C0C001C0C001C0E00380E003806003 80300780181B0007E10016177A961C>71 D<07FE00E000E000E000E001C001C001C001C0 038003800380038007000700070007000E000E000E000E001C00FF800F177E960E>73 D<03FE0FC000E0070000E0040000E0080000E0100001C0200001C0800001C1000001C200 0003860000038E000003A7000003C700000783800007038000070380000701C0000E01C0 
000E00E0000E00E0000E00E0001C00F000FF83FC001A177E961B>75 D<03FE0000E00000E00000E00000E00001C00001C00001C00001C0000380000380000380 000380000700000700200700200700400E00400E00C00E00800E01801C0780FFFF001317 7E9616>I<03F0003F8000F000780000B800780000B800B80000B8013800013801700001 3802700001380270000138047000023808E000021C08E000021C10E000021C10E000041C 21C000041C41C000041C41C000041C81C000081D038000081D038000080E038000080E03 8000180C070000FE083FE00021177E9620>I<03FFE000E03800E01C00E00C00E00C01C0 1C01C01C01C01C01C0380380700380E003FF800380000700000700000700000700000E00 000E00000E00000E00001C0000FF800016177E9618>80 D<007C40018280030180060180 0601800C01000C01000E00000E00000FC00007F80003FC00007C00000E00000E00000600 200600400C00400C00600800601000D8600087C00012177D9614>83 D<7FC1FC1C00601C00401C00401C00403800803800803800803800807001007001007001 00700100E00200E00200E00200E00200E00400E00800E008006030003040001F80001617 79961A>85 D<03900C70187030303060606060606060C0C0C0C840C841C862D01C700D0E 7C8D12>97 D<7C0018001800180018003000300030003000678068C070406060C060C060 C060C06080C080C08180C10046003C000B177C9610>I<07C00C6030E020E06000C000C0 00C00080008000C020C04061803E000B0E7C8D10>I<003E000C000C000C000C00180018 0018001803B00C70187030303060606060606060C0C0C0C840C841C862D01C700F177C96 12>I<07801840302060206040FF80C000C000C000C000C020C04061803E000B0E7C8D10> I<001C0036003E006C00600060006000C000C007F800C000C000C0018001800180018001 8003000300030003000200060006006600E400C80070000F1D81960B>I<01C4063C0C1C 181C1818301830183018203020302030307011E00E600060006060C0E0C0C3807E000E14 7E8D10>I<1F0006000600060006000C000C000C000C0019E01A301C1018103030303030 3030306060606460C460C8C048C0700E177D9612>I<030706000000000000182C4C4C8C 18181830326264243808177D960B>I<1F0006000600060006000C000C000C000C001838 184C189C191832003C003F00318060C060C860C860C8C0D0C0600E177E9610>107 D<3E0C0C0C0C181818183030303060606060C0C8C8C8D06007177D9609>I<30783C0049 8CC6004E0502004C0602009C0E0600180C0600180C0600180C060030180C0030180C8030 
181880301818806030090060300E00190E7D8D1D>I<3078498C4E044C049C0C180C180C 180C30183019303130316012601C100E7D8D14>I<078018C0304060606060C060C060C0 6080C080C08180C10046003C000B0E7B8D12>I<0C3812C4130613062606060606060606 0C0C0C0C0C180C101A2019C018001800300030003000FC000F147F8D12>I<30F04B184E 384C3098001800180018003000300030003000600060000D0E7D8D0F>114 D<07800C4018E018E038001E001F8007C000C060C0E0C0C180C3003E000B0E7D8D0F>I< 060006000C000C000C000C00FF8018001800180030003000300030006000610061006200 6400380009147D930C>I<38042C0C4C0C4C0C8C18181818181818303030323032307218 B40F1C0F0E7D8D13>I<38102C184C184C188C1018101810181030203020304030401880 0F000D0E7D8D10>I<38042C0C4C0C4C0C8C18181818181818303030303030307018E00F 60006000C0E0C0E18043003C000E147D8D11>121 D E /Fe 2 85 df<0FF1FE0180300180300300600300600300600300600600C007FFC00600C00600C00C 01800C01800C01800C0180180300FF1FE017117E9019>72 D<3FFFC03060C040604040C0 4080C04080C04000C0000180000180000180000180000300000300000300000300000600 003FE00012117E9012>84 D E /Ff 7 89 df<60F0F070101020204040040A7D830A>59 D<0000C00000C00001C00001C00003C00005C00005E00008E00018E00010E00020E00020 E00040E00080E00080E001FFF0010070020070040070040070080070180070FE03FE1717 7F961A>65 D<001F8200E04403802C07001C0C001C1C0008380008300008700008600000 E00000E00000E00000C00000C00020C00020C00040E000406000806001003002001C1C00 07E00017177E9619>67 D<07FFF80000E00E0000E0030000E0038000E0018001C001C001 C001C001C000C001C000C0038001C0038001C0038001C0038001C0070003800700038007 000300070007000E000E000E000C000E0018000E0070001C01C000FFFF00001A177F961D >I<07FFFF8000E0038000E0010000E0010000E0010001C0010001C0010001C0400001C0 4000038080000381800003FF800003818000070100000701020007010200070004000E00 04000E000C000E0008000E0018001C007000FFFFF00019177F961A>I<003E1000C1A001 00E00200600600600C00400C00400E00000F000007E00007FC0001FE00003F0000078000 0380000380200180400300400300600600600400D8180087E00014177E9615>83 D<03FE0FE0007807000078060000380C0000380800003C1000001C2000001E4000000E80 
00000F00000007000000070000000F8000001380000023C0000061C00000C1C0000181E0 000100E0000200F000040070001C007000FF03FE001B177F961D>88 D E /Fg 1 51 df<7FFFE0FFFFE0C00060C00060C00060C00060C00060C00060C00060C0 0060C00060C00060C00060C00060C00060C00060C00060FFFFE07FFFE013137D9419>50 D E /Fh 46 121 df<070007000700E738FFF87FF01FC01FC07FF0FFF8E7380700070007 000D0E7E9012>42 D<60F0F0600404798312>46 D<0018003800380070007000E000E001 C001C001C003800380070007000E000E001C001C001C003800380070007000E000E000C0 000D1A7E9612>I<07C00FE01C703838701C701CE00EE00EE00EE00EE00EE00EE00EE01E 701C701C38381C700FE007C00F147F9312>I<060006000E001E00FE00EE000E000E000E 000E000E000E000E000E000E000E000E000E00FFE0FFE00B147D9312>I<00F001F00370 037006700E700C701C70387038707070E070FFFEFFFE007000700070007003FE03FE0F14 7F9312>52 D<7FF07FF07000700070007000700070007F807FE06070007000384038E038 E038E07070E03FC01F000D147E9312>I<01F007F80E1C181C381C70007000E7C0EFF0F8 38F01CE00EE00EE00E700E700E301C38381FF007C00F147F9312>I<07C01FF038387018 E01CE00CE00EE00E701E383E1FEE0FCE000E001C001C7018703870F03FC00F800F147F93 12>57 D<03E007F01E18381C30FC71FE739EE30EE70EE70EE70EE70EE30C739C71F830F0 38001E0E07FE03F80F147F9312>64 D<038007C007C006C006C00EE00EE00EE00EE00C60 1C701C701C701FF01FF0383838383838FC7EFC7E0F147F9312>II<03E60FFE1C3E381E700E700E600EE000E000E000E000E000E000600E700E700E381C 1C380FF003E00F147F9312>III73 D76 DII<3FE07FF07070E038E038E038E038E038E038E038E038E038E038E0 38E038E038E03870707FF03FE00D147E9312>II82 D<1F303FF070F0E070E070E070E00070007F003FC00FE000F00078003860 38E038E030F070FFE0CF800D147E9312>I85 D<7C7C7C7C3C701CF01EE00FE00FC007C0078003800780 07C00FC00EE01EE01C701C703838FC7EFC7E0F147F9312>88 D<1FC0003FF00038380010 1C00001C0007FC003FFC00781C00E01C00E01C00E01C00703C003FFF801FCF80110E7F8D 12>97 DI<07F01FF8383870106000E000E000E000E000600070 3838381FF007E00D0E7E8D12>I<00F800F8003800380038003807B81FF8387870386038 E038E038E038E0386038707838781FFE0FBE0F147F9312>I<07801FE0387070706038E0 
38FFF8FFF8E0006000703838381FF007C00D0E7E8D12>I<007E00FF01C7038203800380 7FFEFFFE03800380038003800380038003800380038003803FF83FF81014809312>I<0F 9E1FFF38E7707070707070707038E03FC03F8070003FE03FF83FFC701EE00EE00EE00E60 0C783C1FF00FE010167F8D12>I<06000F000F0006000000000000007F007F0007000700 07000700070007000700070007000700FFF0FFF00C157D9412>105 D107 DIII< 0F803FE038E07070E038E038E038E038E038F078707038E03FE00F800D0E7E8D12>II114 D<1FF03FF06070C070E0007F003FE00FF000786018E018F0 30FFE0DFC00D0E7E8D12>I<06000E000E000E007FF8FFF80E000E000E000E000E000E00 0E000E1C0E1C0E1C07F801E00E127F9112>IIII<7C7C7C7C1CF00EE00FC007C00380078007C00EE0 1EF01C70FC7EFC7E0F0E7F8D12>I E /Fi 68 123 df<00FC000782000E07001C07001C 02001C00001C00001C00001C0000FFFF001C07001C07001C07001C07001C07001C07001C 07001C07001C07001C07001C07001C07007F1FC01217809614>12 D<60F0F070101020204040040A7D960A>39 D<0102040C1818303070606060E0E0E0E0E0 E0E0E0E0E060606070303018180C04020108227D980E>I<8040203018180C0C0E060606 070707070707070707070606060E0C0C18183020408008227E980E>I<020002000200C2 18F2783AE00F800F803AE0F278C2180200020002000D0E7E9812>I<60F0F07010102020 4040040A7D830A>44 DI<60F0F06004047D830A>I<07C0183030 18701C600C600CE00EE00EE00EE00EE00EE00EE00EE00EE00E600C600C701C30181C7007 C00F157F9412>48 D<06000E00FE000E000E000E000E000E000E000E000E000E000E000E 000E000E000E000E000E000E00FFE00B157D9412>I<0F8030E040708030C038E0384038 003800700070006000C00180030006000C08080810183FF07FF0FFF00D157E9412>I<0F E030306018701C701C001C00180038006007E000300018000C000E000EE00EE00EC00C40 1830300FE00F157F9412>I<00300030007000F001F00170027004700870187010702070 4070C070FFFE0070007000700070007003FE0F157F9412>I<60307FE07FC04400400040 00400040004F8070E040700030003800384038E038E0388030406020C01F000D157E9412 >I<07E018302018600C600C700C78183E101F6007C00FF018F8607C601EC00EC006C006 C004600C38300FE00F157F9412>56 D<07C0183030186018E00CE00CE00EE00EE00E601E 301E186E0F8E000E000C001C70187018603020E01F800F157F9412>I<60F0F060000000 00000060F0F060040E7D8D0A>I61 
D<00FC000303000C00C01000202078102184104302 084701C88601C48E01C48E01C48E01C48E01C48E01C48601C44701C44303C42184C82078 701000000C001C0300F000FF0016177E961B>64 D<001000003800003800003800005C00 005C00005C00008E00008E00008E0001070001070002038002038002038007FFC00401C0 0401C00800E00800E01800F03800F0FE03FE17177F961A>II<00FC100383300E 00B01C0070380030300030700010600010E00010E00000E00000E00000E00000E00000E0 00106000107000103000203800201C00400E008003830000FC0014177E9619>I III<007E080381980600580C0038180018300018700008700008E00008 E00000E00000E00000E00000E003FEE000387000387000383000381800380C0038060038 0380D8007F0817177E961C>III<0FF8 00E000E000E000E000E000E000E000E000E000E000E000E000E000E000E000E000E0E0E0 E0E0C1C061801F000D177E9612>IIIII<00FC000303000E01C01C00E0380070300030 700038600018E0001CE0001CE0001CE0001CE0001CE0001CE0001C700038700038300030 3800701C00E00E01C003030000FC0016177E961B>II82 D<0FC4302C601C400CC004C004C004E00070007F003FE00FF801FC001C000E0006800680 068006C004E008D81087E00F177E9614>I<7FFFF8603818403808403808803804803804 803804003800003800003800003800003800003800003800003800003800003800003800 00380000380000380000380003FF8016177F9619>II87 D89 DI<3FC0706070302038003803F81E3830387038E039E039E07970FF1F1E100E7F 8D12>97 DI<07F01838303870106000E000E000E000E000600070083008183007 C00D0E7F8D10>I<003E00000E00000E00000E00000E00000E00000E00000E00000E0007 CE001C3E00300E00700E00600E00E00E00E00E00E00E00E00E00600E00700E00301E0018 2E0007CF8011177F9614>I<0FC0186030307038E018FFF8E000E000E000600070083010 183007C00D0E7F8D10>I<03E006700E701C201C001C001C001C001C00FF801C001C001C 001C001C001C001C001C001C001C001C001C007F800C1780960B>I<0F9E18E330607070 70707070306018C02F80200060003FE03FF83FFC600EC006C006C006600C38380FE01015 7F8D12>II<307878300000000000F8383838383838383838383838FE07177F96 0A>I<0300078007800300000000000000000000001F8003800380038003800380038003 80038003800380038003800380038003804380E300E7007C00091D82960B>IIIII<07C018303018600C600CE00EE00EE00EE00EE00E701C301818 
3007C00F0E7F8D12>II<07C2001C2600381E00700E00600E00E00E00E00E00E00E00E00E00600E0070 0E00301E001C2E0007CE00000E00000E00000E00000E00000E00003F8011147F8D13>I< F9E03A703C703820380038003800380038003800380038003800FF000C0E7F8D0E>I<1F 4060C0C040C040E000FF007F801FC001E080608060C060E0C09F000B0E7F8D0E>I<0800 08000800180018003800FFC0380038003800380038003800380038403840384038401C80 0F000A147F930E>IIIIIII E /Fj 41 122 df<000FF000007FFC0001F80E0003E01F0007C03F000F 803F000F803F000F801E000F800C000F8000000F8000000F8000000F800000FFFFFF00FF FFFF000F801F000F801F000F801F000F801F000F801F000F801F000F801F000F801F000F 801F000F801F000F801F000F801F000F801F000F801F000F801F000F801F000F801F000F 801F007FF0FFE07FF0FFE01B237FA21F>12 D<387CFEFFFF7F3B03030306060C1C187020 08117C8610>44 D<03FC000FFF003C1FC07007E07C07F0FE03F0FE03F8FE03F8FE01F87C 01F83803F80003F80003F00003F00007E00007C0000F80001F00003E0000380000700000 E01801C0180380180700180E00380FFFF01FFFF03FFFF07FFFF0FFFFF0FFFFF015207D9F 1C>50 D<00FE0007FFC00F07E01E03F03F03F03F81F83F81F83F81F81F03F81F03F00003 F00003E00007C0001F8001FE0001FF000007C00001F00001F80000FC0000FC3C00FE7E00 FEFF00FEFF00FEFF00FEFF00FC7E01FC7801F81E07F00FFFC001FE0017207E9F1C>I<00 0070000000007000000000F800000000F800000000F800000001FC00000001FC00000003 FE00000003FE00000003FE000000067F000000067F0000000C7F8000000C3F8000000C3F 800000181FC00000181FC00000301FE00000300FE00000700FF000006007F000006007F0 0000C007F80000FFFFF80001FFFFFC00018001FC00018001FC00030001FE00030000FE00 070000FF000600007F000600007F00FFE007FFF8FFE007FFF825227EA12A>65 DI68 D<0003FE0040001FFFC0C0007F00F1C001F8003FC003F0000FC007C00007C00FC00003C0 1F800003C03F000001C03F000001C07F000000C07E000000C07E000000C0FE00000000FE 00000000FE00000000FE00000000FE00000000FE00000000FE00000000FE000FFFFC7E00 0FFFFC7F00001FC07F00001FC03F00001FC03F00001FC01F80001FC00FC0001FC007E000 1FC003F0001FC001FC003FC0007F80E7C0001FFFC3C00003FF00C026227DA12C>71 D73 D76 DI<0007 FC0000003FFF800000FC07E00003F001F80007E000FC000FC0007E001F80003F001F8000 
3F003F00001F803F00001F807F00001FC07E00000FC07E00000FC0FE00000FE0FE00000F E0FE00000FE0FE00000FE0FE00000FE0FE00000FE0FE00000FE0FE00000FE0FE00000FE0 7E00000FC07F00001FC07F00001FC03F00001F803F80003F801F80003F000FC0007E0007 E000FC0003F001F80000FC07E000003FFF80000007FC000023227DA12A>79 DI< 0007FC0000003FFF800000FC07E00003F001F80007E000FC000FC0007E001F80003F001F 80003F003F00001F803F00001F807F00001FC07E00000FC07E00000FC0FE00000FE0FE00 000FE0FE00000FE0FE00000FE0FE00000FE0FE00000FE0FE00000FE0FE00000FE0FE0000 0FE07E00000FC07F00001FC07F00001FC03F00001F803F81F03F801F83F83F000FC70C7E 0007E606FC0003F607F80000FF07E000003FFF80000007FF80200000038020000001C020 000001E0E0000001FFE0000001FFC0000000FFC0000000FFC00000007F800000007F0000 00001E00232C7DA12A>II<01FE0207FF861F01FE3C007E7C001E78000E78000EF80006F80006FC 0006FC0000FF0000FFE0007FFF007FFFC03FFFF01FFFF80FFFFC03FFFE003FFE0003FE00 007F00003F00003FC0001FC0001FC0001FE0001EE0001EF0003CFC003CFF00F8C7FFE080 FF8018227DA11F>I<7FFFFFFF807FFFFFFF807E03F80F807803F807807003F803806003 F80180E003F801C0E003F801C0C003F800C0C003F800C0C003F800C0C003F800C00003F8 00000003F800000003F800000003F800000003F800000003F800000003F800000003F800 000003F800000003F800000003F800000003F800000003F800000003F800000003F80000 0003F800000003F800000003F800000003F800000003F8000001FFFFF00001FFFFF00022 227EA127>II<0FFC003FFF807E07C07E03E07E01E07E01F03C01F00001F00001F0003FF003FDF0 1FC1F03F01F07E01F0FC01F0FC01F0FC01F0FC01F07E02F07E0CF81FF87F07E03F18167E 951B>97 DI<00FF8007FFE00F83F01F03F03E03F07E03F07C01E07C0000FC0000FC 0000FC0000FC0000FC0000FC00007C00007E00007E00003E00181F00300FC06007FFC000 FF0015167E9519>I<0001FE000001FE0000003E0000003E0000003E0000003E0000003E 0000003E0000003E0000003E0000003E0000003E0000003E0001FC3E0007FFBE000F81FE 001F007E003E003E007E003E007C003E00FC003E00FC003E00FC003E00FC003E00FC003E 00FC003E00FC003E00FC003E007C003E007C003E003E007E001F00FE000F83BE0007FF3F C001FC3FC01A237EA21F>I<00FE0007FF800F87C01E01E03E01F07C00F07C00F8FC00F8 
FC00F8FFFFF8FFFFF8FC0000FC0000FC00007C00007C00007E00003E00181F00300FC070 03FFC000FF0015167E951A>I<001FC0007FE000F1F001E3F003E3F007C3F007C1E007C0 0007C00007C00007C00007C00007C000FFFE00FFFE0007C00007C00007C00007C00007C0 0007C00007C00007C00007C00007C00007C00007C00007C00007C00007C00007C00007C0 0007C0003FFC003FFC00142380A211>I<01FE0F0007FFBF800F87C7801F03E7801E01E0 003E01F0003E01F0003E01F0003E01F0003E01F0001E01E0001F03E0000F87C0000FFF80 0009FE000018000000180000001C0000001FFFE0000FFFF80007FFFE001FFFFF003C003F 0078000F80F0000780F0000780F0000780F000078078000F003C001E001F007C000FFFF8 0001FFC00019217F951C>II<1C003E007F007F007F003E001C0000000000000000 00000000000000FF00FF001F001F001F001F001F001F001F001F001F001F001F001F001F 001F001F001F001F001F00FFE0FFE00B247EA310>I107 DIII<00FE0007FFC00F83E01E00F03E00F87C007C7C007C7C 007CFC007EFC007EFC007EFC007EFC007EFC007EFC007E7C007C7C007C3E00F81F01F00F 83E007FFC000FE0017167E951C>II114 D<0FF3003FFF00781F00600700E00300E00300F00300FC00007FE000 7FF8003FFE000FFF0001FF00000F80C00780C00380E00380E00380F00700FC0E00EFFC00 C7F00011167E9516>I<0180000180000180000180000380000380000780000780000F80 003F8000FFFF00FFFF000F80000F80000F80000F80000F80000F80000F80000F80000F80 000F80000F80000F81800F81800F81800F81800F81800F830007C30003FE0000F8001120 7F9F16>IIIIII E end TeXDict begin @landscape 1 0 bop eop 2 1 bop -135 -37 a Fj(Meaning)19 b(of)g(pre\014xes)-110 42 y Fi(S)12 b(-)f Fh(REAL)268 b Fi(C)12 b(-)f Fh(COMPLEX)-110 81 y Fi(D)h(-)f Fh(DOUBLE)16 b(PRECISION)47 b Fi(Z)11 b(-)g Fh(COMPLEX*16)285 121 y Fi(\(ma)o(y)f(not)g(b)q(e)h(supp)q(orted) 285 160 y(b)o(y)g(all)g(mac)o(hines\))-135 290 y Fj(Lev)n(el)17 b(2)i(and)g(Lev)n(el)e(3)i(PBLAS)-135 336 y(Matrix)f(T)n(yp)r(es)-135 415 y Fi(GE)12 b(-)f(GEneral)-135 455 y(SY)h(-)g(SYmmetric)-135 494 y(HE)g(-)g(HErmitian)-135 533 y(TR)h(-)e(TRiangular)-135 698 y Fj(Lev)n(el)17 b(2)i(and)g(Lev)n(el)e(3)i(PBLAS)-135 745 y(Options)-135 823 y Fi(Dumm)o(y)8 b(options)f(argumen)o(ts)g(are)h (declared)f(as)i(CHARA)o(C-)-135 
863 y(TER*1)j(and)e(ma)o(y)h(b)q(e)g (passed)f(as)h(c)o(haracter)e(strings.)-135 902 y(TRANS)p Fg(2)16 b Fi(=)g('No)f(transp)q(ose',)d('T)m(ransp)q(ose',)i ('Conjugate)-135 942 y(transp)q(ose',)9 b(\()p Ff(X)q(;)17 b(X)146 930 y Fe(T)170 942 y Ff(;)g(X)229 930 y Fe(H)258 942 y Fi(\))-135 981 y(UPLO)c(=)f('Upp)q(er)f(triangular',)d('Lo)o(w)o (er)j(T)m(riangular')-135 1021 y(DIA)o(G)h(=)g('Non-unit)e (triangular',)e('Unit)j(triangular')-135 1060 y(SIDE)f(=)i('Left')d(or) h('Righ)o(t')f(\(A)i(or)f(op\(A\))g(on)g(the)g(left,)g(or)g(A)-135 1099 y(or)h(op\(A\))g(on)g(the)g(righ)o(t\))-135 1178 y(F)m(or)k(real)f(matrices,)f(TRANS)p Fg(2)k Fi(=)e('T')h(and)e(TRANS)p Fg(2)i Fi(=)-135 1218 y('C')c(ha)o(v)o(e)f(the)f(same)h(meaning.)-135 1297 y(F)m(or)e(Hermitian)f(matrices,)f(TRANS)p Fg(2)p Fi(='T')12 b(is)d(not)g(allo)o(w)o(ed.)-135 1376 y(F)m(or)f(complex)e (symmetric)g(matrices,)g(TRANS)p Fg(2)p Fi(='C')k(is)e(not)-135 1415 y(allo)o(w)o(ed.)-135 1580 y Fj(Obtaining)19 b(the)f(soft)n(w)n (are)i(via)e(netlib)-135 1659 y Fi(In)24 b(order)e(to)i(get)f (instructions)e(for)i(do)o(wnloading)e(the)-135 1698 y(PBLAS,)28 b(send)e(email)g(to)h Fh(netlib@or)o(nl.)o(go)o(v)e Fi(and)h(in)-135 1737 y(the)h(b)q(o)q(dy)f(of)h(the)f(message)g(t)o(yp) q(e)g Fh(send)16 b(index)g(from)-135 1777 y(scalapack)p Fi(.)-135 1856 y(Send)11 b(commen)o(ts,)d(questions)h(to)j Fh(scalapack)o(@cs)o(.ut)o(k.e)o(du)o Fi(.)860 -37 y Fj(Arra)n(y)19 b(Descriptor,)e(Incremen)n(t)860 42 y Fi(The)g(arra)o(y)f(descriptor)e Ff(D)q(E)r(S)r(C)r(A)k Fi(is)f(an)g(in)o(teger)e(arra)o(y)g(of)i(di-)860 81 y(mension)c(9.)27 b(It)15 b(describ)q(es)e(the)i(t)o(w)o(o-dimensiona)o (l)e(blo)q(c)o(k-cycli)o(c)860 121 y(mapping)c(of)i(the)g(matrix)f(A.) 860 200 y(The)e(\014rst)f(t)o(w)o(o)h(en)o(tries)f(are)g(the)g (descriptor)f(t)o(yp)q(e)g(and)h(the)h(BLA)o(CS)860 239 y(con)o(text.)20 b(The)14 b(third)f(and)g(fourth)f(en)o(tries)g(are)h (the)g(dimensions)860 279 y(of)i(the)f(matrix)f(\(ro)o(w,)i(column\).) 
23 b(The)15 b(\014fth)e(and)h(sixth)g(en)o(tries)860 318 y(are)d(the)g(ro)o(w-)h(and)e(column)g(blo)q(c)o(k)g(sizes)h(used)g (to)g(distribute)e(the)860 358 y(matrix.)14 b(The)d(sev)o(en)o(th)f (and)h(eigh)o(th)f(are)h(the)f(co)q(ordinates)f(of)i(the)860 397 y(pro)q(cess)h(con)o(taining)f(the)i(\014rst)f(en)o(try)h(of)g(the) g(matrix.)19 b(The)14 b(last)860 436 y(en)o(try)8 b(con)o(tains)f(the)i (leading)e(dimension)f(of)j(the)g(lo)q(cal)f(arra)o(y)f(con-)860 476 y(taining)j(the)g(matrix)g(elemen)o(ts.)860 555 y(The)h(incremen)o (t)c(sp)q(eci\014ed)h(for)i(v)o(ectors)f(is)h(alw)o(a)o(ys)g(global.)j (So)d(far)860 594 y(only)h(1)g(and)g(DESCA\(M)p 1206 594 11 2 v 13 w(\))g(are)g(supp)q(orted.)860 719 y Fj(References)860 798 y Fi(J.)j(Dongarra)e(and)h(R.)h(C.)g(Whaley)m(,)f(LAP)m(A)o(CK,)j (W)m(orking)d(Note)860 838 y(94,)k Fd(A)h(User's)g(Guide)f(to)h(the)g (BLA)o(CS)g(v1.0)p Fi(,)h(Computer)14 b(Sci-)860 877 y(ence)8 b(Dept.)14 b(T)m(ec)o(hnical)8 b(Rep)q(ort)f(CS-95-281,)h (Univ)o(ersit)o(y)g(of)h(T)m(en-)860 917 y(nessee,)17 b(Kno)o(xville,)f(Marc)o(h,)h(1995.)30 b(T)m(o)17 b(receiv)o(e)e(a)i(p) q(ostscript)860 956 y(cop)o(y)m(,)c(send)g(email)g(to)g(netlib@ornl.go) o(v)e(and)i(in)g(the)h(mail)e(mes-)860 995 y(sage)f(t)o(yp)q(e:)j Fh(send)i(lawn94.ps)f(from)h(lapack/law)o(ns)o(.)860 1074 y Fi(J.)h(Choi,)g(J.)f(Dongarra,)g(and)f(D.)i(W)m(alk)o(er,)f Fd(PB-BLAS:)i(A)g(Set)860 1114 y(of)e(Par)n(al)r(lel)h(Blo)n(ck)g (Basic)f(Line)n(ar)h(A)o(lgebr)n(a)h(Subr)n(outines)p Fi(,)e(Pro-)860 1153 y(ceedings)9 b(of)h(Scalable)e(High)j(P)o (erformance)c(Computing)h(Confer-)860 1193 y(ence)f(\(Kno)o(xville,)g (TN\),)i(pp.)f(534-541,)f(IEEE)h(Computer)e(So)q(ciet)o(y)860 1232 y(Press,)11 b(Ma)o(y)g(23-25,)f(1994.)860 1311 y(J.)16 b(Choi,)h(J.)f(Demmel,)f(I.)h(Dhillon,)f(J.)h(Dongarra,)f(S.)g(Ostrou-) 860 1350 y(c)o(ho)o(v,)f(A.)i(P)o(etitet,)e(K.)h(Stanley)m(,)e(D.)i(W)m (alk)o(er)f(and)g(R.)h(C.)g(Wha-)860 1390 y(ley)m(,)c(LAP)m(A)o(CK,)i (W)m(orking)c(Note)i(95,)g 
Fd(Sc)n(aLAP)m(A)o(CK:)j(A)f(Sc)n(alable)860 1429 y(Line)n(ar)f(A)o(lgebr)n(a)h(libr)n(ary)f(for)f(Distribute)n(d)g (Memory)h(Concurr)n(ent)860 1469 y(Computers)17 b(-)g(Design)g(Issues)f (and)h(Performanc)n(e)p Fi(,)i(Computer)860 1508 y(Science)12 b(Dept.)23 b(T)m(ec)o(hnical)13 b(Rep)q(ort)f(CS-95-283,)h(Univ)o (ersit)o(y)g(of)860 1548 y(T)m(ennessee,)d(Kno)o(xville,)g(Marc)o(h)i (1995.)j(T)m(o)d(receiv)o(e)e(a)i(p)q(ostscript)860 1587 y(cop)o(y)m(,)c(send)f(email)f(to)i(netlib@ornl.go)n(v)d(and)i(in)h (the)f(mail)g(message)860 1626 y(t)o(yp)q(e:)14 b Fh(send)i(lawn95.ps)f (from)h(lapack/lawn)o(s.)860 1705 y Fi(J.)11 b(Choi,)f(J.)g(Dongarra,)f (S.)h(Ostrouc)o(ho)o(v,)e(A.)j(P)o(etitet,)e(D.)h(W)m(alk)o(er)860 1745 y(and)j(R.)h(C.)g(Whaley)m(,)f(LAP)m(A)o(CK,)j(W)m(orking)c(Note)i (100,)f Fd(A)j(Pr)n(o-)860 1784 y(p)n(osal)h(for)f(a)g(Set)h(of)f(Par)n (al)r(lel)h(Basic)g(Line)n(ar)g(A)o(lgebr)n(a)h(Subpr)n(o-)860 1824 y(gr)n(ams)p Fi(,)d(Computer)c(Science)g(Dept.)21 b(T)m(ec)o(hnical)11 b(Rep)q(ort)h(CS-95-)860 1863 y(292,)18 b(Univ)o(ersit)o(y)e(of)i(T)m(ennessee,)f(Kno)o(xville,)g(July)g(1995.) 
33 b(T)m(o)860 1903 y(receiv)o(e)16 b(a)i(p)q(ostscript)d(cop)o(y)m(,)j (send)e(email)h(to)g(netlib@ornl.go)n(v)860 1942 y(and)k(in)g(the)f (mail)h(message)e(t)o(yp)q(e:)34 b Fh(send)16 b(lawn100.ps)f(from)860 1981 y(lapack/lawn)o(s.)1960 197 y Fc(P)m(arallel)1960 380 y(Basic)1960 563 y(Linear)1960 746 y(Algebra)1960 937 y(Subprograms)1960 1088 y Fb(Release)j(1.0)1960 1204 y Fj(Univ)n(ersit)n(y)f(of)i(T)-5 b(ennessee)1960 1313 y Fa(Marc)o(h)12 b(28,)g(1995)1960 1729 y Fj(A)19 b(Quic)n(k)f (Reference)f(Guide)p eop end userdict /end-hook known{end-hook}if %%EndDocument endTexFig eop %%Page: 2 2 2 1 bop -390 -71 a 43573823 44994723 -40258437 -52099153 0 0 startTexFig 180 rotate -390 -71 a %%BeginDocument: pblasqref1.ps %DVIPSCommandLine: dvips pblasqref1.dvi -t landscape -o pblasqref1.ps %DVIPSParameters: dpi=300, comments removed %DVIPSSource: TeX output 1996.05.16:1416 /TeXDict 250 dict def TeXDict begin /N{def}def /B{bind def}N /S{exch}N /X{S N}B /TR{translate}N /isls false N /vsize 11 72 mul N /hsize 8.5 72 mul N /landplus90{false}def /@rigin{isls{[0 landplus90{1 -1}{-1 1} ifelse 0 0 0]concat}if 72 Resolution div 72 VResolution div neg scale isls{landplus90{VResolution 72 div vsize mul 0 exch}{Resolution -72 div hsize mul 0}ifelse TR}if Resolution VResolution vsize -72 div 1 add mul TR matrix currentmatrix dup dup 4 get round 4 exch put dup dup 5 get round 5 exch put setmatrix}N /@landscape{/isls true N}B /@manualfeed{ statusdict /manualfeed true put}B /@copies{/#copies X}B /FMat[1 0 0 -1 0 0]N /FBB[0 0 0 0]N /nn 0 N /IE 0 N /ctr 0 N /df-tail{/nn 8 dict N nn begin /FontType 3 N /FontMatrix fntrx N /FontBBox FBB N string /base X array /BitMaps X /BuildChar{CharBuilder}N /Encoding IE N end dup{/foo setfont}2 array copy cvx N load 0 nn put /ctr 0 N[}B /df{/sf 1 N /fntrx FMat N df-tail}B /dfs{div /sf X /fntrx[sf 0 0 sf neg 0 0]N df-tail}B /E{ pop nn dup definefont setfont}B /ch-width{ch-data dup length 5 sub get} B /ch-height{ch-data dup length 4 sub get}B 
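The two-dimensional block-cyclic mapping described under "Array
Descriptor, Increment" above can be made concrete with a small sketch.
The Python below is illustrative only (the PBLAS themselves are
Fortran, and the function names here are invented for the example);
the arithmetic mirrors the kind of global-to-local index conversion
the ScaLAPACK support routines perform. Given a 0-based global row
index, the row block size MB (the fifth descriptor entry), and the
number of process rows, it computes which process row owns the entry
and where that entry lands in the owner's local array.

```python
# Illustrative sketch of block-cyclic indexing (not part of the PBLAS).
# Global rows are dealt out in blocks of MB rows to NPROW process rows
# in round-robin order, starting at process row RSRC.

def owner(ig, mb, nprow, rsrc=0):
    """Process-row coordinate that owns global row ig (0-based)."""
    return (rsrc + ig // mb) % nprow

def local_index(ig, mb, nprow):
    """Local 0-based row index of global row ig on its owning process."""
    full_cycles = ig // (mb * nprow)   # complete rounds of the deal so far
    return full_cycles * mb + ig % mb  # rows already held, plus offset in block

# Example: 10 global rows, block size MB = 2, NPROW = 2 process rows.
# Rows 0-1 go to process 0, rows 2-3 to process 1, rows 4-5 back to
# process 0, and so on.
print([owner(i, 2, 2) for i in range(10)])
# -> [0, 0, 1, 1, 0, 0, 1, 1, 0, 0]
```

With this layout, process 0 holds global rows 0, 1, 4, 5, 8, 9 as its
local rows 0 through 5, which is exactly the contiguous local storage
whose leading dimension the last descriptor entry records.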
/ch-xoff{128 ch-data dup length 3 sub get sub}B /ch-yoff{ch-data dup length 2 sub get 127 sub}B /ch-dx{ch-data dup length 1 sub get}B /ch-image{ch-data dup type /stringtype ne{ctr get /ctr ctr 1 add N}if}B /id 0 N /rw 0 N /rc 0 N /gp 0 N /cp 0 N /G 0 N /sf 0 N /CharBuilder{save 3 1 roll S dup /base get 2 index get S /BitMaps get S get /ch-data X pop /ctr 0 N ch-dx 0 ch-xoff ch-yoff ch-height sub ch-xoff ch-width add ch-yoff setcachedevice ch-width ch-height true[1 0 0 -1 -.1 ch-xoff sub ch-yoff .1 add]{ ch-image}imagemask restore}B /D{/cc X dup type /stringtype ne{]}if nn /base get cc ctr put nn /BitMaps get S ctr S sf 1 ne{dup dup length 1 sub dup 2 index S get sf div put}if put /ctr ctr 1 add N}B /I{cc 1 add D }B /bop{userdict /bop-hook known{bop-hook}if /SI save N @rigin 0 0 moveto /V matrix currentmatrix dup 1 get dup mul exch 0 get dup mul add .99 lt{/QV}{/RV}ifelse load def pop pop}N /eop{SI restore showpage userdict /eop-hook known{eop-hook}if}N /@start{userdict /start-hook known{start-hook}if pop /VResolution X /Resolution X 1000 div /DVImag X /IE 256 array N 0 1 255{IE S 1 string dup 0 3 index put cvn put}for 65781.76 div /vsize X 65781.76 div /hsize X}N /p{show}N /RMat[1 0 0 -1 0 0]N /BDot 260 string N /rulex 0 N /ruley 0 N /v{/ruley X /rulex X V}B /V {}B /RV statusdict begin /product where{pop product dup length 7 ge{0 7 getinterval dup(Display)eq exch 0 4 getinterval(NeXT)eq or}{pop false} ifelse}{false}ifelse end{{gsave TR -.1 -.1 TR 1 1 scale rulex ruley false RMat{BDot}imagemask grestore}}{{gsave TR -.1 -.1 TR rulex ruley scale 1 1 false RMat{BDot}imagemask grestore}}ifelse B /QV{gsave transform round exch round exch itransform moveto rulex 0 rlineto 0 ruley neg rlineto rulex neg 0 rlineto fill grestore}B /a{moveto}B /delta 0 N /tail{dup /delta X 0 rmoveto}B /M{S p delta add tail}B /b{S p tail} B /c{-4 M}B /d{-3 M}B /e{-2 M}B /f{-1 M}B /g{0 M}B /h{1 M}B /i{2 M}B /j{ 3 M}B /k{4 M}B /w{0 rmoveto}B /l{p -4 w}B /m{p -3 w}B /n{p -2 w}B /o{p -1 w}B /q{p 1 
w}B /r{p 2 w}B /s{p 3 w}B /t{p 4 w}B /x{0 S rmoveto}B /y{ 3 2 roll p a}B /bos{/SS save N}B /eos{SS restore}B end TeXDict begin 52099146 40258431 1000 300 300 (/a/rudolph/snow/homes/petitet/PAPERS/PBLAS/PAPER39/pblasqref1.dvi) @start /Fa 1 1 df0 D E /Fb 2 51 df<187898181818181818181818181818FF08107D8F0F>49 D<1F00618040C08060C0 600060006000C00180030006000C00102020207FC0FFC00B107F8F0F>I E /Fc 6 117 df<0FF1FE0180300180300300600300600300600300600600C007FFC006 00C00600C00C01800C01800C01800C0180180300FF1FE017117E9019>72 D<3FFFC03060C040604040C04080C04080C04000C0000180000180000180000180000300 000300000300000300000600003FE00012117E9012>84 D<040C00000000003058989830 30606464683006127E910B>105 D<3C000C000C00180018001800187031903230340038 007F00618061906190C1A0C0C00C117E9010>107 D<0F001080218020003E001F000180 8080C00083007C00090B7D8A0F>115 D<08181818FF30303030606062646438080F7E8E 0C>I E /Fd 7 108 df0 D<4001C0036006300C18180C30 066003C00180018003C006600C301818300C6006C003400110127B901B>2 D<03000000030000000300000006000000060000000C0000001800000030000000FFFFFF F8FFFFFFF830000000180000000C00000006000000060000000300000003000000030000 001D127D9023>32 D<03000C0003000C0003000C0006000600060006000C000300180001 80300000C0FFFFFFF8FFFFFFF8300000C0180001800C000300060006000600060003000C 0003000C0003000C001D127D9023>36 D<7FC0007FF000003800000C0000060000030000 0300000300000180000180FFFF80FFFF8000018000018000030000030000030000060000 0C000038007FF0007FC00011167D9218>51 D106 D<4040C060C060C060C060C0 60C060C060C060C060C060C060C060C060C060C060C060C060C060C060C060C060C060C0 60C060C060C060C060C060C060C060C060C06040400B227D9812>I E /Fe 25 122 df<03C0000C3040101840201880600C80C00C80C00D00C00E00800E0080 0C00C01C00C02C0060C4801F0300120E7E8D17>11 D<000F0000308000C0C00100400100 600200C00400C0040080040180083F00083E000801000801801001801001801001801001 80300300300300300600280C0044180043E0004000004000008000008000008000008000 00131D7F9614>I<60F0F070101020204040040A7D830A>59 D<0000C00000C00001C000 
01C00003C00005C00005E00008E00018E00010E00020E00020E00040E00080E00080E001 FFF0010070020070040070040070080070180070FE03FE17177F961A>65 D<07FFF800E00E00E00700E00300E00301C00301C00701C00701C00E03803C03FFF003FF F003803C07001C07000E07000E07000E0E001C0E001C0E00380E00701C01E0FFFF001817 7F961B>I<001F8200E04403802C07001C0C001C1C0008380008300008700008600000E0 0000E00000E00000C00000C00020C00020C00040E000406000806001003002001C1C0007 E00017177E9619>I<07FE00E000E000E000E001C001C001C001C0038003800380038007 000700070007000E000E000E000E001C00FFC00F177E960F>73 D<07FFF00000E01C0000 E0060000E0070000E0070001C0070001C0070001C0070001C00E0003801C000380700003 FF80000380E000070070000700380007003800070038000E0070000E0070000E0070800E 0070801C003100FF801E0019177F961B>82 D<03FE0FE0007807000078060000380C0000 380800003C1000001C2000001E4000000E8000000F00000007000000070000000F800000 1380000023C0000061C00000C1C0000181E0000100E0000200F000040070001C007000FF 03FE001B177F961D>88 D<071018F0307060706060C060C060C06080C080C480C4C1C446 C838700E0E7E8D13>97 D<07C00C20107020706000C000C000C00080008000C010C02060 C03F000C0E7E8D0F>99 D<003E000C000C000C000C0018001800180018073018F0307060 706060C060C060C06080C080C480C4C1C446C838700F177E9612>I<07C01C2030106010 6020FFC0C000C000C000C000C010402060C01F000C0E7E8D10>I<030003800300000000 0000000000000000001C002400460046008C000C0018001800180031003100320032001C 0009177F960C>105 D<1F0006000600060006000C000C000C000C00181C1866188E190C 32003C003F00318060C060C460C460C4C0C8C0700F177E9612>107 D<383C1E0044C6630047028100460301008E0703000C0603000C0603000C060300180C06 00180C0620180C0C20180C0C40301804C0301807001B0E7F8D1F>109 D<383C0044C6004702004602008E06000C06000C06000C0600180C00180C401818401818 80300980300E00120E7F8D15>I<07C00C20101020186018C018C018C01880308030C060 C0C061803E000D0E7E8D11>I<1C3C22462382230346030603060306030C060C060C0C0C 081A3019E018001800300030003000FC001014808D12>I<30F049184E384C309C001800 180018003000300030003000600060000D0E7F8D10>114 D<07C00C201870187038001E 
000FC003E000606060E060C0C0C1803F000C0E7E8D10>I<030003000600060006000600 FFC00C000C000C001800180018001800300030803080310032001C000A147F930D>I<1C 0200260600460600460600860C000C0C000C0C000C0C001818001818801818801838800C 5900078E00110E7F8D14>I<0F1F0011A18020C38020C300418000018000018000018000 030000030200C30200E70400C5080078F000110E7F8D14>120 D<1C0226064606460686 0C0C0C0C0C0C0C18181818181818380C7007B000300060706070C021801E000F147F8D11 >I E /Ff 1 51 df<7FFFE0FFFFE0C00060C00060C00060C00060C00060C00060C00060 C00060C00060C00060C00060C00060C00060C00060C00060FFFFE07FFFE013137D9419> 50 D E /Fg 18 121 df<00FC000782000E07001C07001C02001C00001C00001C00001C 0000FFFF001C07001C07001C07001C07001C07001C07001C07001C07001C07001C07001C 07001C07007F1FC01217809614>12 D22 D<0102040C181830307060 6060E0E0E0E0E0E0E0E0E0E060606070303018180C04020108227D980E>40 D<8040203018180C0C0E060606070707070707070707070606060E0C0C18183020408008 227E980E>I<003000003000003000003000003000003000003000003000003000003000 003000FFFFFCFFFFFC003000003000003000003000003000003000003000003000003000 00300000300016187E931B>43 D<60F0F070101020204040040A7D830A>I<06000E00FE 000E000E000E000E000E000E000E000E000E000E000E000E000E000E000E000E000E00FF E00B157D9412>49 D<0F8030E040708030C038E0384038003800700070006000C0018003 0006000C08080810183FF07FF0FFF00D157E9412>I61 D<00FC100383300E00B01C0070 380030300030700010600010E00010E00000E00000E00000E00000E00000E00010600010 7000103000203800201C00400E008003830000FC0014177E9619>67 DI<0FC4302C601C400CC004C004C004E00070007F003FE00FF801FC001C000E00 06800680068006C004E008D81087E00F177E9614>83 D90 D<0FC0186030307038E018FFF8E000E000E000600070083010183007C00D0E7F8D10> 101 D112 D 114 D<1F4060C0C040C040E000FF007F801FC001E080608060C060E0C09F000B0E7F8D0E >I 120 D E /Fh 42 121 df<0180038006000C0018003800300070007000E000E000E000E0 00E000E000E000700070003000380018000C0006000380018009197B9612>40 D<80C06030181C0C0E0E070707070707070E0E0C1C183060C08008197C9612>I<60F0F8 78183030E0C00509798312>44 
D<0F803FC070E0E070E038E038403800380030007000E0 00C00180030006000C00183830387FF87FF80D147E9312>50 D<038007C007C006C006C0 0EE00EE00EE00EE00C601C701C701C701FF01FF0383838383838FC7EFC7E0F147F9312> 65 DI<03E60FFE1C3E381E700E700E600EE000E000E000E000 E000E000600E700E700E381C1C380FF003E00F147F9312>II< FFFEFFFE380E380E380E3800380038E038E03FE03FE038E038E03800380E380E380E380E FFFEFFFE0F147F9312>I<07CC0FFC1C7C383C701C701C601CE000E000E000E07EE07EE0 1C601C701C703C383C1C7C0FFC07DC0F147F9312>71 DII<07F807F800E000E000E000E000E000E000E000E000E000E000E000 E000E040E0E0E0E1C07F803F000D147E9312>IIIII<3FE07FF07070E038 E038E038E038E038E038E038E038E038E038E038E038E038E03870707FF03FE00D147E93 12>II82 D<1F303FF070F0E070E070 E070E00070007F003FC00FE000F0007800386038E038E030F070FFE0CF800D147E9312> I<7FFEFFFEE38EE38EE38E03800380038003800380038003800380038003800380038003 800FE00FE00F147F9312>IIII<7C7C7C7C3C701C F01EE00FE00FC007C007800380078007C00FC00EE01EE01C701C703838FC7EFC7E0F147F 9312>II<1FC0003FF000383800101C00001C0007FC003FFC00 781C00E01C00E01C00E01C00703C003FFF801FCF80110E7F8D12>97 D<07F01FF8383870106000E000E000E000E0006000703838381FF007E00D0E7E8D12>99 D<00F800F8003800380038003807B81FF8387870386038E038E038E038E0386038707838 781FFE0FBE0F147F9312>I<07801FE0387070706038E038FFF8FFF8E000600070383838 1FF007C00D0E7E8D12>I<06000F000F0006000000000000007F007F0007000700070007 00070007000700070007000700FFF0FFF00C157D9412>105 D 108 DII<0F803FE038E07070E038E038E038E038E038F07870 7038E03FE00F800D0E7E8D12>II114 D<1FF03FF06070C070E0007F003FE00FF000786018E018F030FFE0DFC00D0E7E8D12>I< 06000E000E000E007FF8FFF80E000E000E000E000E000E000E000E1C0E1C0E1C07F801E0 0E127F9112>I118 D<7C7C7C7C1CF00EE00FC007C00380078007C00EE01EF01C70FC7EFC7E 0F0E7F8D12>120 D E /Fi 11 119 df<00180000780001F800FFF800FFF80001F80001 F80001F80001F80001F80001F80001F80001F80001F80001F80001F80001F80001F80001 F80001F80001F80001F80001F80001F80001F80001F80001F80001F80001F80001F8007F FFE07FFFE013207C9F1C>49 
D<03FC000FFF003C1FC07007E07C07F0FE03F0FE03F8FE03 F8FE01F87C01F83803F80003F80003F00003F00007E00007C0000F80001F00003E000038 0000700000E01801C0180380180700180E00380FFFF01FFFF03FFFF07FFFF0FFFFF0FFFF F015207D9F1C>I<00FE0007FFC00F07E01E03F03F03F03F81F83F81F83F81F81F03F81F 03F00003F00003E00007C0001F8001FE0001FF000007C00001F00001F80000FC0000FC3C 00FE7E00FEFF00FEFF00FEFF00FEFF00FC7E01FC7801F81E07F00FFFC001FE0017207E9F 1C>I<000070000000007000000000F800000000F800000000F800000001FC00000001FC 00000003FE00000003FE00000003FE000000067F000000067F0000000C7F8000000C3F80 00000C3F800000181FC00000181FC00000301FE00000300FE00000700FF000006007F000 006007F00000C007F80000FFFFF80001FFFFFC00018001FC00018001FC00030001FE0003 0000FE00070000FF000600007F000600007F00FFE007FFF8FFE007FFF825227EA12A>65 DI76 D80 D<01FE0207FF861F01FE3C007E7C001E78000E78000EF80006F80006FC0006FC0000FF00 00FFE0007FFF007FFFC03FFFF01FFFF80FFFFC03FFFE003FFE0003FE00007F00003F0000 3FC0001FC0001FC0001FE0001EE0001EF0003CFC003CFF00F8C7FFE080FF8018227DA11F >83 D<00FE0007FF800F87C01E01E03E01F07C00F07C00F8FC00F8FC00F8FFFFF8FFFFF8 FC0000FC0000FC00007C00007C00007E00003E00181F00300FC07003FFC000FF0015167E 951A>101 D108 D 118 D E end TeXDict begin @landscape 1 0 bop -465 8 a Fi(Lev)n(el)17 b(1)h(PBLAS)-249 87 y Fh(dim)f(scalar)103 b(vector)316 b(vector)2295 b Fg(pre\014xes)-465 126 y Fh(P)p Ff(2)p Fh(SWAP)66 b(\()17 b(N,)246 b(X,)17 b(IX,)g(JX,)f(DESCX,)g(INCX,)g(Y,)h(IY,)f(JY,)h(DESCY,)e(INCY)h(\))886 b Fe(x)10 b Fd($)g Fe(y)1002 b Fg(S,)11 b(D,)h(C,)g(Z)-465 166 y Fh(P)p Ff(2)p Fh(SCAL)66 b(\()17 b(N,)g(ALPHA,)121 b(X,)17 b(IX,)g(JX,)f(DESCX,)g(INCX)g(\))1309 b Fe(x)10 b Fd( )g Fe(\013x)976 b Fg(S,)11 b(D,)h(C,)g(Z,)f(CS,)h(ZD)-465 205 y Fh(P)p Ff(2)p Fh(COPY)66 b(\()17 b(N,)246 b(X,)17 b(IX,)g(JX,)f(DESCX,)g(INCX,)g(Y,)h(IY,)f(JY,)h(DESCY,)e(INCY)h(\))886 b Fe(y)12 b Fd( )e Fe(x)1000 b Fg(S,)11 b(D,)h(C,)g(Z)-465 245 y Fh(P)p Ff(2)p Fh(AXPY)66 b(\()17 b(N,)g(ALPHA,)121 b(X,)17 
b(IX,)g(JX,)f(DESCX,)g(INCX,)g(Y,)h(IY,)f(JY,)h(DESCY,)e(INCY)h (\))886 b Fe(y)12 b Fd( )e Fe(\013x)d Fg(+)i Fe(y)917 b Fg(S,)11 b(D,)h(C,)g(Z)-465 285 y Fh(P)p Ff(2)p Fh(DOT)84 b(\()17 b(N,)g(DOT,)157 b(X,)17 b(IX,)g(JX,)f(DESCX,)g(INCX,)g(Y,)h (IY,)f(JY,)h(DESCY,)e(INCY)h(\))886 b Fe(dot)10 b Fd( )g Fe(x)1889 274 y Fc(T)1913 285 y Fe(y)930 b Fg(S,)11 b(D)-465 326 y Fh(P)p Ff(2)p Fh(DOTU)66 b(\()17 b(N,)g(DOTU,)139 b(X,)17 b(IX,)g(JX,)f(DESCX,)g(INCX,)g(Y,)h(IY,)f(JY,)h(DESCY,)e(INCY)h (\))886 b Fe(dotu)11 b Fd( )f Fe(x)1910 314 y Fc(T)1934 326 y Fe(y)909 b Fg(C,)12 b(Z)-465 367 y Fh(P)p Ff(2)p Fh(DOTC)66 b(\()17 b(N,)g(DOTC,)139 b(X,)17 b(IX,)g(JX,)f(DESCX,)g (INCX,)g(Y,)h(IY,)f(JY,)h(DESCY,)e(INCY)h(\))886 b Fe(dotc)10 b Fd( )g Fe(x)1904 355 y Fc(H)1933 367 y Fe(y)910 b Fg(C,)12 b(Z)-465 406 y Fh(P)p Ff(2)p Fh(NRM2)66 b(\()17 b(N,)g(NORM2,)121 b(X,)17 b(IX,)g(JX,)f(DESCX,)g(INCX)g(\))1309 b Fe(nor)q(m)p Fg(2)10 b Fd( )g(k)p Fe(x)p Fd(k)1981 411 y Fb(2)2860 406 y Fg(S,)h(D,)h(SC,)g(DZ)-465 445 y Fh(P)p Ff(2)p Fh(ASUM)66 b(\()17 b(N,)g(ASUM,)139 b(X,)17 b(IX,)g(JX,)f(DESCX,)g (INCX)g(\))1309 b Fe(asum)11 b Fd( )f(k)p Fe(r)q(e)p Fg(\()p Fe(x)p Fg(\))p Fd(k)2025 450 y Fb(1)2049 445 y Fg(+)e Fd(k)p Fe(im)p Fg(\()p Fe(x)p Fg(\))p Fd(k)2211 450 y Fb(1)2860 445 y Fg(S,)j(D,)h(SC,)g(DZ)-465 485 y Fh(P)p Ff(2)p Fh(AMAX)66 b(\()17 b(N,)g(AMAX,)f(INDX,)33 b(X,)17 b(IX,)g(JX,)f(DESCX,)g(INCX)g(\))1309 b Fe(indx)11 b Fd( )f Fg(1)1911 473 y Fc(st)1951 485 y Fe(k)h Fd(3)e(j)p Fe(Re)p Fg(\()p Fe(x)2100 491 y Fc(k)2119 485 y Fg(\))p Fd(j)e Fg(+)h Fd(j)p Fe(I)s(m)p Fg(\()p Fe(x)2278 491 y Fc(k)2297 485 y Fg(\))p Fd(j)539 b Fg(S,)11 b(D,)h(C,)g(Z)1990 524 y(=)e Fe(max)p Fg(\()p Fd(j)p Fe(Re)p Fg(\()p Fe(x)2198 529 y Fc(i)2210 524 y Fg(\))p Fd(j)e Fg(+)g Fd(j)p Fe(I)s(m)p Fg(\()p Fe(x)2370 529 y Fc(i)2383 524 y Fg(\))p Fd(j)p Fg(\))h(=)h Fe(amax)-465 610 y Fi(Lev)n(el)17 b(2)h(PBLAS)-249 689 y Fh(options)209 b(dim)52 b(scalar)16 b(matrix)209 b(vector)298 b(scalar)15 b(vector)-465 
730 y(P)p Ff(2)p Fh(GEMV)66 b(\()123 b(TRANS,)e(M,)17 b(N,)g(ALPHA,)f(A,)g(IA,)h(JA,)g (DESCA,)e(X,)i(IX,)g(JX,)f(DESCX,)g(INCX,)f(BETA,)h(Y,)h(IY,)g(JY,)f (DESCY,)g(INCY)g(\))180 b Fe(y)12 b Fd( )e Fe(\013op)p Fg(\()p Fe(A)p Fg(\))p Fe(x)d Fg(+)h Fe(\014)r(y)q(;)e(op)p Fg(\()p Fe(A)p Fg(\))k(=)g Fe(A;)c(A)2274 718 y Fc(T)2299 730 y Fe(;)f(A)2340 718 y Fc(H)2370 730 y Fe(;)g(A)k Fd(\000)f Fe(m)g Fd(\002)g Fe(n)310 b Fg(S,)11 b(D,)h(C,)g(Z)-465 769 y Fh(P)p Ff(2)p Fh(HEMV)66 b(\()17 b(UPLO,)298 b(N,)17 b(ALPHA,)f(A,)g(IA,)h(JA,)g(DESCA,)e(X,)i(IX,)g(JX,)f(DESCX,)g(INCX,)f (BETA,)h(Y,)h(IY,)g(JY,)f(DESCY,)g(INCY)g(\))180 b Fe(y)12 b Fd( )e Fe(\013Ax)e Fg(+)g Fe(\014)r(y)869 b Fg(C,)12 b(Z)-465 808 y Fh(P)p Ff(2)p Fh(SYMV)66 b(\()17 b(UPLO,)298 b(N,)17 b(ALPHA,)f(A,)g(IA,)h(JA,)g(DESCA,)e(X,)i(IX,)g(JX,)f(DESCX,)g (INCX,)f(BETA,)h(Y,)h(IY,)g(JY,)f(DESCY,)g(INCY)g(\))180 b Fe(y)12 b Fd( )e Fe(\013Ax)e Fg(+)g Fe(\014)r(y)869 b Fg(S,)11 b(D)-465 849 y Fh(P)p Ff(2)p Fh(TRMV)66 b(\()17 b(UPLO,)f(TRANS,)f(DIAG,)69 b(N,)141 b(A,)16 b(IA,)h(JA,)g(DESCA,)e(X,) i(IX,)g(JX,)f(DESCX,)g(INCX)g(\))709 b Fe(x)10 b Fd( )g Fe(\013Ax;)c(x)j Fd( )h Fe(\013A)2049 837 y Fc(T)2074 849 y Fe(x;)5 b(x)10 b Fd( )g Fe(\013A)2233 837 y Fc(H)2262 849 y Fe(x;)568 b Fg(S,)11 b(D,)h(C,)g(Z)-465 890 y Fh(P)p Ff(2)p Fh(TRSV)66 b(\()17 b(UPLO,)f(TRANS,)f(DIAG,)69 b(N,)141 b(A,)16 b(IA,)h(JA,)g(DESCA,)e(X,)i(IX,)g(JX,)f(DESCX,)g(INCX) g(\))709 b Fe(x)10 b Fd( )g Fe(\013A)1890 878 y Fa(\000)p Fb(1)1931 890 y Fe(x;)c(x)k Fd( )g Fe(\013A)2091 878 y Fa(\000)p Fc(T)2139 890 y Fe(x;)c(x)j Fd( )h Fe(\013A)2298 878 y Fa(\000)p Fc(H)2351 890 y Fe(x;)479 b Fg(S,)11 b(D,)h(C,)g(Z)-249 968 y Fh(options)209 b(dim)52 b(scalar)16 b(vector)315 b(vector)g(matrix)-465 1009 y(P)p Ff(2)p Fh(GER)84 b(\()352 b(M,)17 b(N,)g(ALPHA,)f(X,)g(IX,)h(JX,)g(DESCX,)e (INCX,)h(Y,)h(IY,)f(JY,)h(DESCY,)e(INCY,)h(A,)h(IA,)g(JA,)f(DESCA)g(\)) 286 b Fe(A)11 b Fd( )f Fe(\013xy)1909 997 y Fc(T)1941 1009 y Fg(+)e Fe(A;)e(A)j Fd(\000)f Fe(m)g 
Fd(\002)g Fe(n)677 b Fg(S,)11 b(D)-465 1050 y Fh(P)p Ff(2)p Fh(GERU)66 b(\()352 b(M,)17 b(N,)g(ALPHA,)f(X,)g(IX,)h(JX,)g(DESCX,)e(INCX,)h(Y,)h (IY,)f(JY,)h(DESCY,)e(INCY,)h(A,)h(IA,)g(JA,)f(DESCA)g(\))286 b Fe(A)11 b Fd( )f Fe(\013xy)1909 1038 y Fc(T)1941 1050 y Fg(+)e Fe(A;)e(A)j Fd(\000)f Fe(m)g Fd(\002)g Fe(n)677 b Fg(C,)12 b(Z)-465 1090 y Fh(P)p Ff(2)p Fh(GERC)66 b(\()352 b(M,)17 b(N,)g(ALPHA,)f(X,)g(IX,)h(JX,)g(DESCX,)e(INCX,)h(Y,)h(IY,)f (JY,)h(DESCY,)e(INCY,)h(A,)h(IA,)g(JA,)f(DESCA)g(\))286 b Fe(A)11 b Fd( )f Fe(\013xy)1909 1078 y Fc(H)1945 1090 y Fg(+)f Fe(A;)d(A)i Fd(\000)g Fe(m)h Fd(\002)f Fe(n)672 b Fg(C,)12 b(Z)-465 1131 y Fh(P)p Ff(2)p Fh(HER)84 b(\()17 b(UPLO,)298 b(N,)17 b(ALPHA,)f(X,)g(IX,)h(JX,)g(DESCX,)e(INCX,)439 b(A,)17 b(IA,)g(JA,)f(DESCA)g(\))286 b Fe(A)11 b Fd( )f Fe(\013xx)1911 1119 y Fc(H)1947 1131 y Fg(+)e Fe(A)852 b Fg(C,)12 b(Z)-465 1171 y Fh(P)p Ff(2)p Fh(HER2)66 b(\()17 b(UPLO,)298 b(N,)17 b(ALPHA,)f(X,)g(IX,)h(JX,)g(DESCX,)e(INCX,)h(Y,)h (IY,)f(JY,)h(DESCY,)e(INCY,)h(A,)h(IA,)g(JA,)f(DESCA)g(\))286 b Fe(A)11 b Fd( )f Fe(\013xy)1909 1160 y Fc(H)1945 1171 y Fg(+)f Fe(y)q Fg(\()p Fe(\013x)p Fg(\))2070 1160 y Fc(H)2106 1171 y Fg(+)f Fe(A)693 b Fg(C,)12 b(Z)-465 1212 y Fh(P)p Ff(2)p Fh(SYR)84 b(\()17 b(UPLO,)298 b(N,)17 b(ALPHA,)f(X,)g(IX,)h(JX,)g(DESCX,)e(INCX,)439 b(A,)17 b(IA,)g(JA,)f(DESCA)g(\))286 b Fe(A)11 b Fd( )f Fe(\013xx)1911 1200 y Fc(T)1942 1212 y Fg(+)e Fe(A)857 b Fg(S,)11 b(D)-465 1252 y Fh(P)p Ff(2)p Fh(SYR2)66 b(\()17 b(UPLO,)298 b(N,)17 b(ALPHA,)f(X,)g(IX,)h(JX,)g(DESCX,)e(INCX,)h(Y,)h(IY,)f(JY,)h(DESCY,)e (INCY,)h(A,)h(IA,)g(JA,)f(DESCA)g(\))286 b Fe(A)11 b Fd( )f Fe(\013xy)1909 1241 y Fc(T)1941 1252 y Fg(+)e Fe(\013y)q(x)2037 1241 y Fc(T)2070 1252 y Fg(+)g Fe(A)729 b Fg(S,)11 b(D)-465 1378 y Fi(Lev)n(el)17 b(3)h(PBLAS)-249 1457 y Fh(options)474 b(dim)105 b(scalar)15 b(matrix)210 b(matrix)f(scalar)16 b(matrix)-465 1497 y(P)p Ff(2)p Fh(GEMM)66 b(\()229 b(TRANSA,)15 b(TRANSB,)121 b(M,)17 
b(N,)g(K,)g(ALPHA,)e(A,)i(IA,)g(JA,)f(DESCA,)g(B,)h(IB,)f(JB,)h(DESCB,) e(BETA,)h(C,)h(IC,)g(JC,)f(DESCC)g(\))74 b Fe(C)13 b Fd( )d Fe(\013op)p Fg(\()p Fe(A)p Fg(\))p Fe(op)p Fg(\()p Fe(B)r Fg(\))5 b(+)k Fe(\014)r(C;)c(op)p Fg(\()p Fe(X)s Fg(\))k(=)h Fe(X)q(;)c(X)2375 1485 y Fc(T)2399 1497 y Fe(;)f(X)2446 1485 y Fc(H)2475 1497 y Fe(;)g(C)11 b Fd(\000)d Fe(m)g Fd(\002)g Fe(n)204 b Fg(S,)11 b(D,)h(C,)g(Z)-465 1538 y Fh(P)p Ff(2)p Fh(SYMM)66 b(\()17 b(SIDE,)f(UPLO,)404 b(M,)17 b(N,)70 b(ALPHA,)15 b(A,)i(IA,)g(JA,)f(DESCA,)g(B,)h(IB,)f(JB,) h(DESCB,)e(BETA,)h(C,)h(IC,)g(JC,)f(DESCC)g(\))74 b Fe(C)13 b Fd( )d Fe(\013AB)f Fg(+)f Fe(\014)r(C;)e(C)12 b Fd( )e Fe(\013B)r(A)d Fg(+)i Fe(\014)r(C;)c(C)11 b Fd(\000)d Fe(m)g Fd(\002)g Fe(n;)f(A)j Fg(=)g Fe(A)2579 1526 y Fc(T)2860 1538 y Fg(S,)h(D,)h(C,)g(Z)-465 1578 y Fh(P)p Ff(2)p Fh(HEMM)66 b(\()17 b(SIDE,)f(UPLO,)404 b(M,)17 b(N,)70 b(ALPHA,)15 b(A,)i(IA,)g(JA,)f(DESCA,)g(B,)h(IB,)f(JB,)h (DESCB,)e(BETA,)h(C,)h(IC,)g(JC,)f(DESCC)g(\))74 b Fe(C)13 b Fd( )d Fe(\013AB)f Fg(+)f Fe(\014)r(C;)e(C)12 b Fd( )e Fe(\013B)r(A)d Fg(+)i Fe(\014)r(C;)c(C)11 b Fd(\000)d Fe(m)g Fd(\002)g Fe(n;)f(A)j Fg(=)g Fe(A)2579 1567 y Fc(H)2860 1578 y Fg(C,)i(Z)-465 1619 y Fh(P)p Ff(2)p Fh(SYRK)66 b(\()123 b(UPLO,)16 b(TRANS,)333 b(N,)17 b(K,)g(ALPHA,)e(A,) i(IA,)g(JA,)f(DESCA,)333 b(BETA,)16 b(C,)h(IC,)g(JC,)f(DESCC)g(\))74 b Fe(C)13 b Fd( )d Fe(\013AA)1924 1607 y Fc(T)1956 1619 y Fg(+)f Fe(\014)r(C;)c(C)13 b Fd( )d Fe(\013A)2186 1607 y Fc(T)2210 1619 y Fe(A)e Fg(+)h Fe(\014)r(C;)c(C)11 b Fd(\000)d Fe(n)g Fd(\002)h Fe(n)361 b Fg(S,)11 b(D,)h(C,)g(Z)-465 1659 y Fh(P)p Ff(2)p Fh(HERK)66 b(\()123 b(UPLO,)16 b(TRANS,)333 b(N,)17 b(K,)g(ALPHA,)e(A,)i(IA,)g(JA,)f(DESCA,)333 b(BETA,)16 b(C,)h(IC,)g(JC,)f(DESCC)g(\))74 b Fe(C)13 b Fd( )d Fe(\013AA)1924 1648 y Fc(H)1961 1659 y Fg(+)e Fe(\014)r(C;)e(C)12 b Fd( )e Fe(\013A)2190 1648 y Fc(H)2219 1659 y Fe(A)f Fg(+)f Fe(\014)r(C;)e(C)k Fd(\000)e Fe(n)g Fd(\002)h Fe(n)352 b Fg(C,)12 b(Z)-465 1700 y Fh(P)p Ff(2)p 
Fh(SYR2K)48 b(\()123 b(UPLO,)16 b(TRANS,)333 b(N,)17 b(K,)g(ALPHA,)e(A,)i(IA,)g (JA,)f(DESCA,)g(B,)h(IB,)f(JB,)h(DESCB,)e(BETA,)h(C,)h(IC,)g(JC,)f (DESCC)g(\))74 b Fe(C)13 b Fd( )d Fe(\013AB)1927 1688 y Fc(T)1958 1700 y Fg(+)f Fe(\013B)r(A)2072 1688 y Fc(T)2103 1700 y Fg(+)f Fe(\014)r(C;)e(C)12 b Fd( )e Fe(\013A)2332 1688 y Fc(T)2357 1700 y Fe(B)f Fg(+)f Fe(\013B)2480 1688 y Fc(T)2503 1700 y Fe(A)h Fg(+)f Fe(\014)r(C;)e(C)k Fd(\000)e Fe(n)h Fd(\002)f Fe(n)68 b Fg(S,)11 b(D,)h(C,)g(Z)-465 1741 y Fh(P)p Ff(2)p Fh(HER2K)48 b(\()123 b(UPLO,)16 b(TRANS,)333 b(N,)17 b(K,)g(ALPHA,)e(A,)i(IA,)g(JA,)f(DESCA,)g(B,)h (IB,)f(JB,)h(DESCB,)e(BETA,)h(C,)h(IC,)g(JC,)f(DESCC)g(\))74 b Fe(C)13 b Fd( )d Fe(\013AB)1927 1729 y Fc(H)1963 1741 y Fg(+)i(\026)-22 b Fe(\013B)r(A)2076 1729 y Fc(H)2112 1741 y Fg(+)8 b Fe(\014)r(C;)e(C)12 b Fd( )e Fe(\013A)2341 1729 y Fc(H)2370 1741 y Fe(B)f Fg(+)j(\026)-21 b Fe(\013)o(B)2493 1729 y Fc(H)2522 1741 y Fe(A)8 b Fg(+)g Fe(\014)r(C;)e(C)k Fd(\000)e Fe(n)h Fd(\002)f Fe(n)50 b Fg(C,)12 b(Z)-465 1781 y Fh(P)p Ff(2)p Fh(TRAN)66 b(\()617 b(M,)17 b(N,)70 b(ALPHA,)15 b(A,)i(IA,)g(JA,)f(DESCA,)333 b(BETA,)16 b(C,)h(IC,)g(JC,)f(DESCC)g(\))74 b Fe(C)13 b Fd( )d Fe(\014)r(C)g Fg(+)e Fe(\013A)1990 1769 y Fc(T)2014 1781 y Fe(;)e(A)i Fd(\000)g Fe(n)h Fd(\002)f Fe(m;)e(C)k Fd(\000)f Fe(m)f Fd(\002)g Fe(n)483 b Fg(S,)11 b(D)-465 1822 y Fh(P)p Ff(2)p Fh(TRANU)48 b(\()617 b(M,)17 b(N,)70 b(ALPHA,)15 b(A,)i(IA,)g(JA,)f(DESCA,)333 b(BETA,)16 b(C,)h(IC,)g(JC,)f(DESCC)g(\)) 74 b Fe(C)13 b Fd( )d Fe(\014)r(C)g Fg(+)e Fe(\013A)1990 1810 y Fc(T)2014 1822 y Fe(;)e(A)i Fd(\000)g Fe(n)h Fd(\002)f Fe(m;)e(C)k Fd(\000)f Fe(m)f Fd(\002)g Fe(n)483 b Fg(C,)12 b(Z)-465 1862 y Fh(P)p Ff(2)p Fh(TRANC)48 b(\()617 b(M,)17 b(N,)70 b(ALPHA,)15 b(A,)i(IA,)g(JA,)f(DESCA,)333 b(BETA,)16 b(C,)h(IC,)g(JC,)f(DESCC)g(\))74 b Fe(C)13 b Fd( )d Fe(\014)r(C)g Fg(+)e Fe(\013A)1990 1851 y Fc(H)2019 1862 y Fe(;)d(A)k Fd(\000)f Fe(n)g Fd(\002)h Fe(m;)c(C)11 b Fd(\000)d Fe(m)g Fd(\002)g Fe(n)479 b 
Fg(C,)12 b(Z)-465 1903 y Fh(P)p Ff(2)p Fh(TRMM)66 b(\()17 b(SIDE,)f(UPLO,)g(TRANSA,)156 b(DIAG,)16 b(M,)h(N,)70 b(ALPHA,)15 b(A,)i(IA,)g(JA,)f(DESCA,)g(B,)h (IB,)f(JB,)h(DESCB)f(\))497 b Fe(B)11 b Fd( )f Fe(\013op)p Fg(\()p Fe(A)p Fg(\))p Fe(B)r(;)5 b(B)11 b Fd( )f Fe(\013B)r(op)p Fg(\()p Fe(A)p Fg(\))p Fe(;)t(op)p Fg(\()p Fe(A)p Fg(\))g(=)g Fe(A;)c(A)2447 1891 y Fc(T)2472 1903 y Fe(;)f(A)2513 1891 y Fc(H)2543 1903 y Fe(;)g(B)10 b Fd(\000)e Fe(m)g Fd(\002)g Fe(n)135 b Fg(S,)11 b(D,)h(C,)g(Z)-465 1943 y Fh(P)p Ff(2)p Fh(TRSM)66 b(\()17 b(SIDE,)f(UPLO,)g(TRANSA,)156 b(DIAG,)16 b(M,)h(N,)70 b(ALPHA,)15 b(A,)i(IA,)g(JA,)f(DESCA,)g(B,)h (IB,)f(JB,)h(DESCB)f(\))497 b Fe(B)11 b Fd( )f Fe(\013op)p Fg(\()p Fe(A)1947 1932 y Fa(\000)p Fb(1)1988 1943 y Fg(\))p Fe(B)r(;)5 b(B)11 b Fd( )f Fe(\013B)r(op)p Fg(\()p Fe(A)2256 1932 y Fa(\000)p Fb(1)2296 1943 y Fg(\))p Fe(;)c(op)p Fg(\()p Fe(A)p Fg(\))j(=)h Fe(A;)d(A)2530 1932 y Fc(T)2554 1943 y Fe(;)f(A)2596 1932 y Fc(H)2625 1943 y Fe(;)g(B)j Fd(\000)f Fe(m)g Fd(\002)h Fe(n)52 b Fg(S,)11 b(D,)h(C,)g(Z)p eop end userdict /end-hook known{end-hook}if %%EndDocument endTexFig eop %%Trailer end userdict /end-hook known{end-hook}if %%EOF scalapack-doc-1.5/README0100644000056400000620000000245606335611037014430 0ustar pfrauenfstaffScaLAPACK, version 1.5 DATE: May 1, 1997 To install the ScaLAPACK documentation, do the following: 1) Copy the ScaLAPACK manual pages to the desired location. This will probably be /usr/local/man/manl or other.dir/man/manl. % cp man/manl/* /usr/local/man/manl Be aware that if the manpages are installed in some directory other than /usr/local/man/manl you will need to add that directory to your MANPATH environment variable. First type: % echo $MANPATH If it says that MANPATH is undefined, type: % setenv MANPATH /usr/man:/usr/local/man:/user_path/man_dir Otherwise, if MANPATH is defined, type: % setenv MANPATH ${MANPATH}:/user_path/man_dir where man_dir is the directory of manual pages containing /man/manl. 
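The two `setenv` commands above are csh syntax. For completeness, a sketch of the same step for POSIX shells (sh, bash, ksh); `MANDIR` is a stand-in for the README's `/user_path/man_dir` placeholder and is not part of the distribution:

```shell
# sh/bash equivalent of the csh "setenv MANPATH ..." step above.
# MANDIR stands in for the README's /user_path/man_dir placeholder.
MANDIR=/user_path/man_dir
if [ -z "$MANPATH" ]; then
    # MANPATH undefined: seed it with the usual system man trees.
    MANPATH=/usr/man:/usr/local/man:$MANDIR
else
    # MANPATH already defined: append the new directory.
    MANPATH=$MANPATH:$MANDIR
fi
export MANPATH
```

Either branch leaves the new directory at the end of MANPATH, so `man` will still search the system trees first.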
   ('%' is assumed to be your shell prompt.)

2) Add the ScaLAPACK manual pages to the whatis database for keyword
   searching via 'man -k':

   % catman -w -M /usr/local/man
   or
   % catman -w -M other.dir/man

3) Test out the installation.  First make sure your current shell
   knows about the installation:

   % rehash

   Now test out the manual pages:

   % man pdgetrf

   You should get the man page for pdgetrf.  BE AWARE that your
   request should be in lower case.

If you have any comments/suggestions, please direct them to
scalapack@cs.utk.edu.

scalapack-doc-1.5/scalapackqref.ps

[PostScript source of the ScaLAPACK Quick Reference Guide:
scalapackqref.dvi rendered with dvipsk 5.58f; 2 pages, landscape,
letter paper; TeX output dated 1997.05.11.  The dvips prolog and
bitmap font data are not reproduced here.]
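For scripted installs, the check in step 3 can be made non-interactive by testing for the page file directly instead of running `man`. A minimal sketch; `has_manpage` is a helper name invented here (not part of the ScaLAPACK distribution), and the `.l` suffix matches the manl section the README installs into:

```shell
# has_manpage MANTREE NAME -- succeed if NAME's section-l page is
# installed under MANTREE/manl (the layout used by this README).
has_manpage() {
    [ -f "$1/manl/$2.l" ]
}

# Usage sketch:
#   has_manpage /usr/local/man pdgetrf && man pdgetrf
```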
8001F00F8001F00F8001F00F8001F00F8001F00F8001F00F8001F00F8001F00FFFFFF00F FFFFF00FFFFFF00FFFFFF00FFFFFF00F8001F00F8001F00F8001F00F8001F00F8001F00F 8001F00F8001F00F8001F00F8001F00F8001F00F8001F00F8001F00F8001F00F8001F07F F00FFEFFF81FFFFFF81FFFFFF81FFF7FF00FFE20297FA823>II<001FFFE0003FFFF0003FFFF0003FFFF0001FFFE000003E00 00003E0000003E0000003E0000003E0000003E0000003E0000003E0000003E0000003E00 00003E0000003E0000003E0000003E0000003E0000003E0000003E0000003E0000003E00 00003E0000003E0000003E0000003E0000003E0000003E0000003E007C003E00FE003E00 FE003E00FE007E00FE00FE00FF03FC007FFFF8007FFFF8001FFFE0000FFFC00001FE0000 1C2A7DA823>I<7FE01FF8FFF03FF8FFF03FF8FFF03FF87FE01FF80F000F800F001F000F 003F000F007E000F007C000F00F8000F01F8000F03F0000F03E0000F07C0000F0FC0000F 1F80000F1F80000F3F80000F7FC0000FFFE0000FFBE0000FF1F0000FF1F0000FE0F8000F C0F8000F807C000F807C000F003E000F003E000F001F000F001F000F000F800F000FC00F 0007C00F0003E07FE007FCFFF00FFCFFF00FFCFFF00FFC7FE007FC1E297EA823>I<7FFE 0000FFFF0000FFFF0000FFFF00007FFE000007C0000007C0000007C0000007C0000007C0 000007C0000007C0000007C0000007C0000007C0000007C0000007C0000007C0000007C0 000007C0000007C0000007C0000007C0000007C0000007C0000007C0000007C0000007C0 000007C0000007C0007C07C0007C07C0007C07C0007C07C0007C07C0007C07C0007C7FFF FFFCFFFFFFFCFFFFFFFCFFFFFFFC7FFFFFF81E297EA823>II<7FC01FF8FFC03FFCFFE03FFCFFE03FFC7FF01FF8 0F7003C00F7003C00F7803C00F3803C00F3803C00F3C03C00F3C03C00F1C03C00F1E03C0 0F1E03C00F0E03C00F0F03C00F0F03C00F0F03C00F0783C00F0783C00F0783C00F03C3C0 0F03C3C00F03C3C00F01C3C00F01E3C00F01E3C00F00E3C00F00F3C00F00F3C00F0073C0 0F0073C00F007BC00F003BC00F003BC07FE03FC0FFF01FC0FFF01FC0FFF00FC07FE00F80 1E297EA823>I<03FFF0000FFFFC001FFFFE003FFFFF003FFFFF007F807F807E001F807C 000F807C000F80FC000FC0F80007C0F80007C0F80007C0F80007C0F80007C0F80007C0F8 0007C0F80007C0F80007C0F80007C0F80007C0F80007C0F80007C0F80007C0F80007C0F8 0007C0F80007C0F80007C0F80007C0F80007C0F80007C0F80007C0FC000FC0FC000FC07C 
000F807C000F807E001F807F807F803FFFFF003FFFFF001FFFFE000FFFFC0003FFF0001A 2B7CA923>II<03FFF0 000FFFFC001FFFFE003FFFFF003FFFFF007F807F807E001F807C000F807C000F80FC000F C0F80007C0F80007C0F80007C0F80007C0F80007C0F80007C0F80007C0F80007C0F80007 C0F80007C0F80007C0F80007C0F80007C0F80007C0F80007C0F80007C0F80007C0F80007 C0F80007C0F80007C0F80007C0F81F07C0F81F87C0FC1F8FC07C0FCF807C0FCF807E07FF 807F87FF803FFFFF003FFFFF001FFFFE000FFFFC0003FFFC000000FC0000007E0000007E 0000003F0000003F0000001F8000001F8000000F801A337CA923>I<7FFFC00000FFFFF8 0000FFFFFC0000FFFFFF00007FFFFF00000F807F80000F801FC0000F800FC0000F8007E0 000F8003E0000F8003E0000F8003E0000F8003E0000F8003E0000F8007E0000F800FC000 0F801FC0000F807F80000FFFFF00000FFFFF00000FFFFC00000FFFFE00000FFFFF00000F 807F00000F803F80000F801F80000F800F80000F800F80000F800F80000F800F80000F80 0F80000F800F80000F800F87000F800F8F800F800F8F800F800FCF807FF00FDF80FFF807 FF80FFF807FF00FFF803FF007FF001FE00000000F800212A7FA823>I<00FF838003FFE3 C007FFFFC01FFFFFC03FFFFFC07FC0FFC07F003FC0FE001FC0FC000FC0FC000FC0F8000F C0F80007C0F80007C0FC000380FC0000007E0000007F0000003FF000001FFF00000FFFF0 0007FFFC0001FFFE00001FFF800001FFC000001FC000000FE0000007E0000003F0000003 F0700001F0F80001F0F80001F0F80001F0FC0003F0FC0003F0FE0007E0FF000FE0FFE03F C0FFFFFFC0FFFFFF00FFFFFE00F1FFFC00703FE0001C2B7DA923>I<7FFFFFF8FFFFFFFC FFFFFFFCFFFFFFFCFFFFFFFCF807C07CF807C07CF807C07CF807C07CF807C07C7007C038 0007C0000007C0000007C0000007C0000007C0000007C0000007C0000007C0000007C000 0007C0000007C0000007C0000007C0000007C0000007C0000007C0000007C0000007C000 0007C0000007C0000007C0000007C0000007C0000007C0000007C00000FFFE0001FFFF00 01FFFF0001FFFF0000FFFE001E297EA823>II<7FF00FFEFFF00FFFFFF00FFFFFF00FFF7FF00FFE0F80 01F00F8001F007C003E007C003E007C003E007C003E003E007C003E007C003E007C003E0 07C003F00FC001F00F8001F00F8001F00F8001F00F8000F81F0000F81F0000F81F0000F8 1F00007C3E00007C3E00007C3E00007C3E00003C3C00003E7C00003E7C00003E7C00001E 7800001E7800001E7800001E7800001FF800000FF000000FF000000FF0000007E0000007 
E000202A7FA823>II<7FF07FF07FF8FFF07FF8FFF07FF8FFF07FF07FF007E03F0003E03E0003F07E00 01F07C0001F8FC0000F8F80000FDF800007DF000007FF000003FE000003FE000001FC000 001FC000000F8000000F8000000FC000001FC000001FE000003FE000003FF000007DF000 007CF80000F8F80000F87C0001F07C0001F03E0003F03E0003E01F0007E01F0007C00F80 0FC00F807FE03FF8FFF03FFCFFF03FFCFFF03FFC7FE03FF81E297EA823>I<7FF00FFEFF F81FFFFFF81FFFFFF81FFF7FF00FFE07C003E007E007E003E007C003F007C001F00FC001 F80F8000F81F8000FC1F00007C1F00007C3E00003E3E00003E7E00001F7C00001F7C0000 0FF800000FF8000007F0000007F0000007E0000003E0000003E0000003E0000003E00000 03E0000003E0000003E0000003E0000003E0000003E0000003E0000003E000001FFC0000 3FFE00003FFE00003FFE00001FFC0020297FA823>I<3FFFFFE07FFFFFF07FFFFFF07FFF FFF07FFFFFF07C0007E07C000FE07C001FC07C001F807C003F8038007F0000007E000000 FE000001FC000001F8000003F8000007F0000007E000000FE000001FC000001F8000003F 8000007F0000007E000000FE000001FC000001F8000003F8000007F0000007E000E00FE0 01F01FC001F01F8001F03F8001F07F0001F07E0001F0FFFFFFF0FFFFFFF0FFFFFFF0FFFF FFF07FFFFFE01C297DA823>I E /Fb 47 122 df<0000FF00000007FFE000001F80F000 003E003800007C007C0000F800FC0001F000FC0003F000FC0003E000780003E000300003 E000000003E000000003E000000003E000000003E000000003E000000003E000000003E0 007C00FFFFFFFC00FFFFFFFC0003E000FC0003E0007C0003E0007C0003E0007C0003E000 7C0003E0007C0003E0007C0003E0007C0003E0007C0003E0007C0003E0007C0003E0007C 0003E0007C0003E0007C0003E0007C0003E0007C0003E0007C0003E0007C0003E0007C00 03E0007C0003E0007C0003E0007C0003E0007C0003E0007C0007F000FE007FFF0FFFE07F FF0FFFE0232F7FAE27>12 D<3C00F07E01F8FF03FCFF03FCFF83FEFF83FE7F81FE3D80F6 01800601800601800603800E03000C03000C07001C0600180E00381C00703800E07001C0 60018017157EAD23>34 D<00030007000E001C0038007000F001E001C003C0078007800F 000F001E001E001E003C003C003C003C0078007800780078007800F800F800F000F000F0 00F000F000F000F000F000F000F000F000F800F800780078007800780078003C003C003C 003C001E001E001E000F000F000780078003C001C001E000F000700038001C000E000700 0310437AB11B>40 
DI<3C007E00FF00FF00FF80FF807F803D800180018001800380030003000700 06000E001C0038007000600009157A8714>44 D<000000C0000001C0000003C000000380 0000038000000780000007000000070000000F0000000E0000000E0000001E0000001C00 00001C0000003C00000038000000780000007000000070000000F0000000E0000000E000 0001E0000001C0000001C0000003C00000038000000780000007000000070000000F0000 000E0000000E0000001E0000001C0000001C0000003C0000003800000038000000780000 0070000000F0000000E0000000E0000001E0000001C0000001C0000003C0000003800000 0380000007800000070000000F0000000E0000000E0000001E0000001C0000001C000000 3C0000003800000038000000780000007000000070000000F0000000E0000000E0000000 1A437CB123>47 D<3C7EFFFFFFFF7E3C000000000000000000000000003C7EFFFFFFFF7E 3C081D7A9C14>58 D<000001800000000003C00000000003C00000000003C00000000007 E00000000007E0000000000FF0000000000FF0000000000FF0000000001BF80000000019 F80000000019F80000000030FC0000000030FC0000000070FE00000000607E0000000060 7E00000000C03F00000000C03F00000000C03F00000001801F80000001801F8000000380 1FC0000003000FC0000003000FC00000060007E00000060007E00000060007E000000C00 03F000000C0003F000001FFFFFF800001FFFFFF80000180001F80000300000FC00003000 00FC0000300000FC00006000007E00006000007E0000E000007F0000C000003F0000C000 003F0001C000001F8003C000001F8007C000001FC00FF000003FE0FFFC0003FFFFFFFC00 03FFFF302F7EAE35>65 DI< 00001FF000C00000FFFE01C00003F00F83C0000F8001E3C0003F000077C0007C00003FC0 01F800001FC003F000000FC007E0000007C007E0000007C00FC0000003C01FC0000003C0 1F80000001C03F80000001C03F00000001C07F00000000C07F00000000C07F00000000C0 FE0000000000FE0000000000FE0000000000FE0000000000FE0000000000FE0000000000 FE0000000000FE0000000000FE0000000000FE0000000000FE00000000007F0000000000 7F00000000C07F00000000C03F00000000C03F80000000C01F80000001C01FC000000180 0FC00000018007E00000038007E00000070003F00000060001F800000E00007C00001C00 003F00007800000F8001E0000003F00FC0000000FFFE000000001FF000002A2F7CAD33> IIII<00001FF000 
C00000FFFE01C00003F00F83C0000F8001E3C0003F000077C0007C00003FC001F800001F C003F000000FC007E0000007C007E0000007C00FC0000003C01FC0000003C01F80000001 C03F80000001C03F00000001C07F00000000C07F00000000C07F00000000C0FE00000000 00FE0000000000FE0000000000FE0000000000FE0000000000FE0000000000FE00000000 00FE0000000000FE0000000000FE00001FFFFEFE00001FFFFE7F0000001FE07F0000000F C07F0000000FC03F0000000FC03F8000000FC01F8000000FC01FC000000FC00FC000000F C007E000000FC007E000000FC003F000000FC001F800001FC0007C00001FC0003F00003F C0000F8000F3C00003F007C1C00000FFFF00C000001FF800002F2F7CAD37>II77 DI<00003FF000000001FFFE 00000007E01F8000001F8007E000003E0001F00000FC0000FC0001F800007E0003F00000 3F0007E000001F8007C000000F800FC000000FC01F80000007E01F80000007E03F000000 03F03F00000003F07F00000003F87F00000003F87E00000001F87E00000001F8FE000000 01FCFE00000001FCFE00000001FCFE00000001FCFE00000001FCFE00000001FCFE000000 01FCFE00000001FCFE00000001FCFE00000001FC7F00000003F87F00000003F87F000000 03F83F00000003F03F80000007F01F80000007E01F80000007E00FC000000FC00FE00000 1FC007E000001F8003F000003F0001F800007E0000FC0000FC00007E0001F800001F8007 E0000007E01F80000001FFFE000000003FF000002E2F7CAD37>II82 D<003F803001FFF07007C07C700F000EF01E0007F03C0003F0 780001F0780000F0700000F0F0000070F0000070F0000070F0000030F8000030F8000030 FC0000007E0000007F0000003FE000003FFE00001FFFE0000FFFFC0007FFFF0001FFFF80 003FFFE00003FFE000003FF0000007F8000001F8000000F8000000FC0000007CC000007C C000003CC000003CC000003CE000003CE000003CE0000078F0000078F8000070FC0000F0 FE0001E0F78003C0E3F00F00E07FFE00C00FF0001E2F7CAD27>I<7FFFFFFFFFF87FFFFF FFFFF87F000FC003F87C000FC000F870000FC0003870000FC0003860000FC0001860000F C00018E0000FC0001CE0000FC0001CC0000FC0000CC0000FC0000CC0000FC0000CC0000F C0000CC0000FC0000C00000FC0000000000FC0000000000FC0000000000FC0000000000F C0000000000FC0000000000FC0000000000FC0000000000FC0000000000FC0000000000F C0000000000FC0000000000FC0000000000FC0000000000FC0000000000FC0000000000F 
C0000000000FC0000000000FC0000000000FC0000000000FC0000000000FC0000000000F C0000000000FC0000000000FC0000000000FC0000000000FC0000000001FE00000001FFF FFE000001FFFFFE0002E2D7EAC33>III<03000C07001C0E00381C00703800E03000C07001 C0600180600180E00380C00300C00300C00300DE0378FF03FCFF83FEFF83FE7F81FE7F81 FE3F00FC1E0078171577AD23>92 D<00FF000007FFC0000F01F0001C00F8003F007C003F 003E003F003E003F003F001E001F0000001F0000001F0000001F0000001F000007FF0000 7FFF0001FE1F0007F01F001FC01F003F801F007F001F007E001F00FE001F06FC001F06FC 001F06FC001F06FC003F06FE003F067E007F067F00EF8C1F83C7FC0FFF03F801FC01E01F 207D9E23>97 D<07C0000000FFC0000000FFC00000000FC000000007C000000007C00000 0007C000000007C000000007C000000007C000000007C000000007C000000007C0000000 07C000000007C000000007C000000007C0FE000007C7FF800007CF03E00007DC01F00007 F8007C0007F0007E0007E0003E0007C0001F0007C0001F8007C0001F8007C0000F8007C0 000FC007C0000FC007C0000FC007C0000FC007C0000FC007C0000FC007C0000FC007C000 0FC007C0000FC007C0001F8007C0001F8007C0001F0007C0003F0007E0003E0007F0007C 0007B000F80007BC01F000070E07E0000607FF80000001FC0000222F7EAD27>I<001FE0 00007FFC0001F01E0003E0070007C01F800F801F801F001F803F001F803E000F007E0000 007E0000007C000000FC000000FC000000FC000000FC000000FC000000FC000000FC0000 00FC000000FC0000007E0000007E0000007E0000C03F0000C01F0001C01F8001800FC003 8007E0070001F03E00007FF800001FC0001A207E9E1F>I<000000F80000001FF8000000 1FF800000001F800000000F800000000F800000000F800000000F800000000F800000000 F800000000F800000000F800000000F800000000F800000000F800000000F800000FE0F8 00007FF8F80001F81EF80003E007F80007C003F8000F8001F8001F0001F8003F0000F800 3E0000F8007E0000F8007E0000F800FC0000F800FC0000F800FC0000F800FC0000F800FC 0000F800FC0000F800FC0000F800FC0000F800FC0000F8007C0000F8007E0000F8007E00 00F8003E0001F8001F0001F8001F8003F8000F8007F80003E00EFC0001F03CFFC0007FF0 FFC0001FC0F800222F7EAD27>I<001F800000FFF00003E0780007C03E000F801E001F00 1F001F000F803E000F807E0007807E0007C07C0007C0FC0007C0FC0007C0FC0007C0FFFF 
FFC0FFFFFFC0FC000000FC000000FC000000FC000000FC0000007E0000007E0000003E00 00C03F0000C01F0001C00F8003800FC0030003E00F0001F03C00007FF800001FC0001A20 7E9E1F>I<003F00F800FFC3FE03E1FF1E07807C1E0F807C0C1F003E001F003E003E001F 003E001F003E001F003E001F003E001F003E001F003E001F001F003E001F003E000F807C 00078078000FE1F0000CFFC0001C3F00001C0000001C0000001C0000001E0000001F0000 000FFFF8000FFFFF0007FFFFC00FFFFFF01E0007F83C0000F87800007CF800007CF00000 3CF000003CF000003CF000003CF800007C7C0000F83E0001F01F0003E007E01F8001FFFE 00003FF0001F2D7E9D23>103 D<07C0000000FFC0000000FFC00000000FC000000007C0 00000007C000000007C000000007C000000007C000000007C000000007C000000007C000 000007C000000007C000000007C000000007C000000007C0FE000007C3FF800007C703E0 0007DE01F00007F801F00007F000F80007F000F80007E000F80007E000F80007C000F800 07C000F80007C000F80007C000F80007C000F80007C000F80007C000F80007C000F80007 C000F80007C000F80007C000F80007C000F80007C000F80007C000F80007C000F80007C0 00F80007C000F80007C000F8000FE001FC00FFFE1FFFC0FFFE1FFFC0222E7EAD27>I<07 800FC01FE01FE01FE01FE00FC007800000000000000000000000000000000007C0FFC0FF C00FC007C007C007C007C007C007C007C007C007C007C007C007C007C007C007C007C007 C007C007C007C007C007C007C00FE0FFFCFFFC0E2E7EAD14>I<07C0000000FFC0000000 FFC00000000FC000000007C000000007C000000007C000000007C000000007C000000007 C000000007C000000007C000000007C000000007C000000007C000000007C000000007C0 00000007C01FFE0007C01FFE0007C00FF00007C007C00007C007800007C00E000007C01C 000007C038000007C070000007C0E0000007C3C0000007C7C0000007CFE0000007DFF000 0007F9F0000007F0F8000007E0FC000007C07E000007C03E000007C01F000007C01F8000 07C00FC00007C007C00007C003E00007C003F00007C001F8000FE003FC00FFFE07FF80FF FE07FF80212E7EAD25>107 D<07C0FFC0FFC00FC007C007C007C007C007C007C007C007 C007C007C007C007C007C007C007C007C007C007C007C007C007C007C007C007C007C007 C007C007C007C007C007C007C007C007C007C007C007C007C007C00FE0FFFEFFFE0F2E7E AD14>I<07C07F0007F000FFC3FFC03FFC00FFC783F0783F000FCE01F8E01F8007DC00F9 
C00F8007F800FF800FC007F0007F0007C007E0007E0007C007E0007E0007C007C0007C00 07C007C0007C0007C007C0007C0007C007C0007C0007C007C0007C0007C007C0007C0007 C007C0007C0007C007C0007C0007C007C0007C0007C007C0007C0007C007C0007C0007C0 07C0007C0007C007C0007C0007C007C0007C0007C007C0007C0007C007C0007C0007C007 C0007C0007C007C0007C0007C00FE000FE000FE0FFFE0FFFE0FFFEFFFE0FFFE0FFFE371E 7E9D3C>I<07C0FE0000FFC3FF8000FFC703E0000FDE01F00007F801F00007F000F80007 F000F80007E000F80007E000F80007C000F80007C000F80007C000F80007C000F80007C0 00F80007C000F80007C000F80007C000F80007C000F80007C000F80007C000F80007C000 F80007C000F80007C000F80007C000F80007C000F80007C000F80007C000F8000FE001FC 00FFFE1FFFC0FFFE1FFFC0221E7E9D27>I<001FE000007FF80001F03E0003C00F000780 07800F0003C01F0003E03E0001F03E0001F07C0000F87C0000F87C0000F8FC0000FCFC00 00FCFC0000FCFC0000FCFC0000FCFC0000FCFC0000FCFC0000FCFC0000FC7C0000F87C00 00F83E0001F03E0001F01F0003E01F0003E00F8007C007C00F8001F03E00007FF800001F E0001E207E9E23>I<07C0FE0000FFC7FF8000FFCF03E0000FDC01F00007F800FC0007F0 007E0007E0003E0007C0003F0007C0001F8007C0001F8007C0001F8007C0000FC007C000 0FC007C0000FC007C0000FC007C0000FC007C0000FC007C0000FC007C0000FC007C0001F C007C0001F8007C0001F8007C0003F0007C0003F0007E0007E0007F0007C0007F000F800 07FC01F00007CE07E00007C7FF800007C1FC000007C000000007C000000007C000000007 C000000007C000000007C000000007C000000007C000000007C00000000FE0000000FFFE 000000FFFE000000222B7E9D27>I<0781F8FF87FEFF8E3F0F9C3F07B83F07B03F07F01E 07E00007E00007E00007E00007C00007C00007C00007C00007C00007C00007C00007C000 07C00007C00007C00007C00007C00007C00007C00007C0000FE000FFFF00FFFF00181E7E 9D1C>114 D<01FE1807FFB81E01F83C00F8780078F00038F00038F00018F00018F80018 FC0018FF00007FF0003FFF001FFFC00FFFF001FFF8001FFC0001FCC0007EC0003EC0003E E0001EE0001EF0001EF0001EF8003CF8003CFC0078FF01F0E3FFC0C0FF0017207E9E1C> I<00600000600000600000600000E00000E00000E00001E00003E00003E00007E0001FE0 00FFFFF0FFFFF003E00003E00003E00003E00003E00003E00003E00003E00003E00003E0 
0003E00003E00003E00003E00003E00003E01803E01803E01803E01803E01803E01803E0 1803E03801F03001F07000F860003FE0000F80152A7FA81B>I<07C000F800FFC01FF800 FFC01FF8000FC001F80007C000F80007C000F80007C000F80007C000F80007C000F80007 C000F80007C000F80007C000F80007C000F80007C000F80007C000F80007C000F80007C0 00F80007C000F80007C000F80007C000F80007C000F80007C000F80007C001F80007C001 F80007C001F80007C003F80003E007F80003E00EFC0001F81CFFC0007FF8FFC0001FE0F8 00221F7E9D27>IIII< FFFC01FFC0FFFC01FFC00FE0007E0007E0007C0007E000380003E000300003F000700001 F000600001F000600000F800C00000F800C00000FC01C000007C018000007E038000003E 030000003E030000001F060000001F060000001F8E0000000F8C0000000F8C00000007D8 00000007D800000003F000000003F000000003F000000001E000000001E000000000C000 000000C0000000018000000001800000000380000000030000007803000000FC06000000 FC06000000FC0C000000FC1C000000783800000070700000003FE00000000F8000000022 2B7F9C25>I E /Fc 32 123 df<00001FF8000003FFFF00000FFFFF80003FF00FC000FF 800FE001FF001FE001FE001FE003FE003FF003FC001FE003FC001FE003FC001FE003FC00 078003FC00000003FC00000003FC00000003FC00FFF0FFFFFFFFF0FFFFFFFFF0FFFFFFFF F0FFFFFFFFF003FC000FF003FC000FF003FC000FF003FC000FF003FC000FF003FC000FF0 03FC000FF003FC000FF003FC000FF003FC000FF003FC000FF003FC000FF003FC000FF003 FC000FF003FC000FF003FC000FF003FC000FF003FC000FF003FC000FF003FC000FF003FC 000FF003FC000FF03FFFC0FFFF3FFFC0FFFF3FFFC0FFFF3FFFC0FFFF282E7FAD2D>12 D68 DI<000003FF80038000007FFFF007800001FFFFFC0F80000FFFFFFF1F80001FFF803FFF 80007FF8000FFF8000FFE00003FF8001FF800001FF8003FF000000FF8007FE0000007F80 0FFC0000007F801FFC0000003F801FF80000001F803FF80000001F803FF00000001F807F F00000000F807FF00000000F807FF00000000F807FE00000000000FFE00000000000FFE0 0000000000FFE00000000000FFE00000000000FFE00000000000FFE00000000000FFE000 00000000FFE00000000000FFE00000000000FFE00007FFFFFE7FE00007FFFFFE7FF00007 FFFFFE7FF00007FFFFFE7FF0000001FF803FF0000001FF803FF8000001FF801FF8000001 FF801FFC000001FF800FFC000001FF8007FE000001FF8003FF000001FF8001FF800001FF 
8000FFE00003FF80007FF80007FF80001FFF801FFF80000FFFFFFFFF800001FFFFFF0F80 00007FFFFC0180000003FFC0000037307CAE40>71 D76 DI80 D82 D<001FF8038000FFFF07 8003FFFFCF8007FFFFFF801FF00FFF801FC001FF803F80007F807F00003F807E00001F80 7E00001F80FE00000F80FE00000F80FE00000780FF00000780FF80000780FFC0000000FF F00000007FFF0000007FFFF000007FFFFF80003FFFFFE0001FFFFFF8000FFFFFFE0007FF FFFF0003FFFFFF8000FFFFFFC0001FFFFFC00001FFFFE000000FFFE0000000FFF0000000 3FF00000001FF00000000FF0F000000FF0F0000007F0F0000007F0F0000007F0F8000007 F0F8000007E0FC00000FE0FE00000FC0FF00001FC0FFC0003F80FFFC00FF00FFFFFFFE00 F9FFFFFC00F03FFFF000E003FF800024307CAE2D>I86 D<007FF8000003FFFF00000FFFFFC0001FE01FF0001FF007F8001FF003FC001FF003FC00 1FF001FE000FE001FE0007C001FE00000001FE00000001FE000000FFFE00003FFFFE0001 FFFFFE0007FFC1FE001FFC01FE003FE001FE007FC001FE00FF8001FE00FF0001FE00FF00 01FE00FF0001FE00FF0003FE00FF8003FE007FC007FF003FF03FFFF81FFFFEFFF807FFF8 7FF800FFC01FF8251E7E9D28>97 D<03F0000000FFF0000000FFF0000000FFF0000000FF F00000000FF00000000FF00000000FF00000000FF00000000FF00000000FF00000000FF0 0000000FF00000000FF00000000FF00000000FF00000000FF03FE0000FF1FFFC000FF7FF FF000FFFC0FF800FFE003FC00FFC001FE00FF8000FF00FF0000FF00FF0000FF80FF00007 F80FF00007FC0FF00007FC0FF00007FC0FF00007FC0FF00007FC0FF00007FC0FF00007FC 0FF00007FC0FF00007FC0FF00007F80FF00007F80FF0000FF80FF0000FF00FF8001FF00F FC001FE00FFE003FC00FDF80FF800FC7FFFE000F81FFF8000F007FC000262E7DAD2D>I< 000FFE00007FFFC001FFFFF007FC07F80FF00FF81FE00FF83FE00FF83FC00FF87FC007F0 7F8003E07F800000FF800000FF800000FF800000FF800000FF800000FF800000FF800000 FF8000007F8000007FC000007FC000003FE0003C3FE0003C1FF0007C0FF800F807FE03F0 01FFFFE0007FFF80000FFC001E1E7D9D24>I<0000000FC0000003FFC0000003FFC00000 03FFC0000003FFC00000003FC00000003FC00000003FC00000003FC00000003FC0000000 3FC00000003FC00000003FC00000003FC00000003FC00000003FC0000FF83FC0007FFF3F C001FFFFFFC007FC07FFC00FF001FFC01FE0007FC03FE0003FC03FC0003FC07FC0003FC0 
7F80003FC07F80003FC0FF80003FC0FF80003FC0FF80003FC0FF80003FC0FF80003FC0FF 80003FC0FF80003FC0FF80003FC0FF80003FC07F80003FC07FC0003FC03FC0003FC03FC0 007FC01FE000FFC00FF003FFC007FC0FFFFC01FFFFBFFC00FFFE3FFC001FF03FFC262E7D AD2D>I<000FFC00007FFF8001FFFFE007FC0FF00FF003F81FE001FC1FE000FC3FC000FE 7FC0007E7F80007E7F80007FFF80007FFF80007FFFFFFFFFFFFFFFFFFFFFFFFFFF800000 FF800000FF8000007F8000007FC000007FC000003FC0000F1FE0000F0FF0001F07F8007E 03FE01FC01FFFFF8007FFFE00007FF00201E7E9D25>I<0000FF800007FFE0001FFFF000 7FC7F000FF0FF801FE0FF801FE0FF803FC0FF803FC0FF803FC07F003FC008003FC000003 FC000003FC000003FC000003FC0000FFFFFC00FFFFFC00FFFFFC00FFFFFC0003FC000003 FC000003FC000003FC000003FC000003FC000003FC000003FC000003FC000003FC000003 FC000003FC000003FC000003FC000003FC000003FC000003FC000003FC000003FC000003 FC000003FC000003FC00003FFFE0003FFFE0003FFFE0003FFFE0001D2E7EAD19>I<0000 0007C0001FF81FF000FFFF7FF803FFFFFDF807F81FF3F80FE007F3F81FE007F9F81FC003 F8F03FC003FC003FC003FC003FC003FC003FC003FC003FC003FC003FC003FC001FC003F8 001FE007F8000FE007F00007F81FE00007FFFFC0000FFFFF00000F1FF800001E00000000 1F000000001F000000001F800000001FFFFFC0000FFFFFF8000FFFFFFF0007FFFFFF8007 FFFFFFC00FFFFFFFE03FFFFFFFE07F00003FF0FE00000FF0FC000007F0FC000003F0FC00 0003F0FC000003F0FE000007F07F00000FE03F80001FC01FF000FF8007FFFFFE0001FFFF F800001FFF8000252D7E9E29>I<07C00FE01FF03FF83FF83FF83FF83FF81FF00FE007C0 00000000000000000000000003F0FFF0FFF0FFF0FFF00FF00FF00FF00FF00FF00FF00FF0 0FF00FF00FF00FF00FF00FF00FF00FF00FF00FF00FF00FF00FF00FF0FFFFFFFFFFFFFFFF 102F7CAE17>105 D<03F0FFF0FFF0FFF0FFF00FF00FF00FF00FF00FF00FF00FF00FF00F F00FF00FF00FF00FF00FF00FF00FF00FF00FF00FF00FF00FF00FF00FF00FF00FF00FF00F F00FF00FF00FF00FF00FF00FF00FF00FF00FF00FF0FFFFFFFFFFFFFFFF102E7CAD17> 108 D<07E00FF8001FF000FFE07FFE00FFFC00FFE1FFFF83FFFF00FFE7E07FCFC0FF80FF EF803FDF007F800FFE003FFC007F800FFE001FFC003FC00FFC001FF8003FC00FF8001FF0 003FC00FF8001FF0003FC00FF0001FE0003FC00FF0001FE0003FC00FF0001FE0003FC00F 
F0001FE0003FC00FF0001FE0003FC00FF0001FE0003FC00FF0001FE0003FC00FF0001FE0 003FC00FF0001FE0003FC00FF0001FE0003FC00FF0001FE0003FC00FF0001FE0003FC00F F0001FE0003FC00FF0001FE0003FC00FF0001FE0003FC00FF0001FE0003FC0FFFF01FFFE 03FFFCFFFF01FFFE03FFFCFFFF01FFFE03FFFCFFFF01FFFE03FFFC3E1E7C9D45>I<07E0 1FF000FFE07FFC00FFE1FFFF00FFE7E0FF80FFEF807F800FFE007FC00FFC003FC00FFC00 3FC00FF8003FC00FF8003FC00FF0003FC00FF0003FC00FF0003FC00FF0003FC00FF0003F C00FF0003FC00FF0003FC00FF0003FC00FF0003FC00FF0003FC00FF0003FC00FF0003FC0 0FF0003FC00FF0003FC00FF0003FC00FF0003FC0FFFF03FFFCFFFF03FFFCFFFF03FFFCFF FF03FFFC261E7C9D2D>I<0007FE0000007FFFE00001FFFFF80003FC03FC0007F000FE00 0FE0007F001FC0003F803FC0003FC03F80001FC07F80001FE07F80001FE0FF80001FF0FF 80001FF0FF80001FF0FF80001FF0FF80001FF0FF80001FF0FF80001FF0FF80001FF07F80 001FE07F80001FE07FC0003FE03FC0003FC01FC0003F801FE0007F800FF000FF0003FC03 FC0001FFFFF800007FFFE0000007FE0000241E7E9D29>I<03F03FE000FFF1FFFC00FFF7 FFFF00FFFFC0FF80FFFE007FC00FFC003FE00FF8001FF00FF0001FF00FF0000FF80FF000 0FF80FF0000FFC0FF00007FC0FF00007FC0FF00007FC0FF00007FC0FF00007FC0FF00007 FC0FF00007FC0FF00007FC0FF0000FF80FF0000FF80FF0000FF80FF0001FF00FF8001FF0 0FFC003FE00FFE007FC00FFF81FF800FF7FFFE000FF1FFF8000FF07FC0000FF00000000F F00000000FF00000000FF00000000FF00000000FF00000000FF00000000FF00000000FF0 000000FFFF000000FFFF000000FFFF000000FFFF000000262B7D9D2D>I<000FF003C000 7FFE07C001FFFF87C007FE07CFC00FF803FFC01FF001FFC03FE000FFC03FE0007FC07FC0 003FC07FC0003FC07FC0003FC0FF80003FC0FF80003FC0FF80003FC0FF80003FC0FF8000 3FC0FF80003FC0FF80003FC0FF80003FC0FFC0003FC07FC0003FC07FC0003FC03FE0007F C03FE000FFC01FF000FFC00FF803FFC007FC0FFFC001FFFFBFC0007FFE3FC0001FF03FC0 0000003FC00000003FC00000003FC00000003FC00000003FC00000003FC00000003FC000 00003FC00000003FC0000003FFFC000003FFFC000003FFFC000003FFFC262B7D9D2B>I< 07E07E00FFE1FF80FFE3FFE0FFE78FE0FFEF1FF00FFE1FF00FFC1FF00FFC1FF00FF80FE0 0FF807C00FF800000FF000000FF000000FF000000FF000000FF000000FF000000FF00000 
0FF000000FF000000FF000000FF000000FF000000FF000000FF000000FF00000FFFF8000 FFFF8000FFFF8000FFFF80001C1E7D9D22>I<01FF8E0007FFFE001FFFFE003F00FE007C 007E0078003E00F8001E00F8001E00FC001E00FF000000FFF00000FFFF80007FFFE0003F FFF8001FFFFE000FFFFE0003FFFF00003FFF800000FF8000003F80F0001F80F0000F80F8 000F80F8000F80FC001F00FE001F00FF807E00FFFFFC00FBFFF000E0FF8000191E7D9D20 >I<003C0000003C0000003C0000003C0000007C0000007C0000007C000000FC000000FC 000001FC000003FC000007FC00001FFFFF00FFFFFF00FFFFFF00FFFFFF0003FC000003FC 000003FC000003FC000003FC000003FC000003FC000003FC000003FC000003FC000003FC 000003FC000003FC000003FC000003FC03C003FC03C003FC03C003FC03C003FC03C003FC 03C003FC07C001FE078001FF0F8000FFFF00003FFC00000FF0001A2A7FA920>I<03F000 0FC0FFF003FFC0FFF003FFC0FFF003FFC0FFF003FFC00FF0003FC00FF0003FC00FF0003F C00FF0003FC00FF0003FC00FF0003FC00FF0003FC00FF0003FC00FF0003FC00FF0003FC0 0FF0003FC00FF0003FC00FF0003FC00FF0003FC00FF0003FC00FF0003FC00FF0003FC00F F0007FC00FF0007FC00FF000FFC007F001FFC007F807FFFC03FFFFBFFC00FFFE3FFC003F F03FFC261E7C9D2D>II 120 DI<3FFFFFFC3FFFFFFC3FFFFFFC3FC00FF83F001FF03E003FF07C007F E07C00FFC07C01FF807801FF007803FE007807FE00000FFC00001FF800001FF000003FE0 00007FE03C00FFC03C01FF803C01FF003C03FE007C07FE007C0FFC007C1FF800F83FF000 F83FE003F87FC00FF8FFFFFFF8FFFFFFF8FFFFFFF81E1E7E9D24>I E /Fd 13 121 df68 DI<00000FFF800007000000 FFFFF8000F000007FFFFFF001F00001FFFFFFFC03F00003FFFFFFFF07F0000FFFC00FFF8 FF0001FFE0000FFDFF0003FF800001FFFF0007FE0000007FFF000FFC0000003FFF000FF8 0000000FFF001FF800000007FF001FF000000003FF003FF000000003FF003FE000000001 FF007FE000000000FF007FE000000000FF007FE0000000007F00FFE0000000007F00FFE0 000000003F00FFE0000000003F00FFF0000000003F00FFF0000000003F00FFF800000000 1F00FFF8000000001F00FFFC000000001F00FFFE000000001F00FFFF000000000000FFFF C000000000007FFFF000000000007FFFFF00000000007FFFFFF8000000003FFFFFFF8000 00003FFFFFFFFC0000001FFFFFFFFFC000001FFFFFFFFFF000000FFFFFFFFFFC000007FF 
FFFFFFFF000003FFFFFFFFFFC00001FFFFFFFFFFE00000FFFFFFFFFFF000007FFFFFFFFF F800003FFFFFFFFFFC00000FFFFFFFFFFE000003FFFFFFFFFE000000FFFFFFFFFF000000 1FFFFFFFFF80000000FFFFFFFF800000000FFFFFFFC0000000007FFFFFC00000000007FF FFE00000000000FFFFE000000000003FFFE000000000000FFFF0000000000007FFF00000 00000003FFF0000000000003FFF0780000000001FFF0F80000000000FFF0F80000000000 FFF0F80000000000FFF0F800000000007FF0F800000000007FF0FC00000000007FF0FC00 000000007FF0FC00000000007FE0FE00000000007FE0FE00000000007FE0FF0000000000 FFC0FF0000000000FFC0FF8000000000FFC0FFC000000001FF80FFE000000001FF00FFF0 00000003FF00FFFC00000007FE00FFFF0000000FFC00FFFFC000001FF800FFFFF800007F F000FF1FFFC003FFE000FE0FFFFFFFFFC000FC03FFFFFFFF0000F8007FFFFFFC0000F000 0FFFFFF00000E000007FFF0000003C5479D24B>83 D<000003FFC0000000003FFFFC0000 0001FFFFFF00000007FFFFFFC000000FFF81FFE000003FFC007FF800007FF8003FFC0000 FFF0001FFE0001FFE0000FFE0003FFC00007FF0007FFC00007FF800FFF800003FF800FFF 800003FFC01FFF800001FFC01FFF000001FFC03FFF000001FFE03FFF000001FFE07FFF00 0000FFE07FFE000000FFE07FFE000000FFF07FFE000000FFF0FFFE000000FFF0FFFE0000 00FFF0FFFE000000FFF0FFFE000000FFF0FFFFFFFFFFFFF0FFFFFFFFFFFFF0FFFFFFFFFF FFF0FFFFFFFFFFFFE0FFFE0000000000FFFE0000000000FFFE0000000000FFFE00000000 00FFFE0000000000FFFE00000000007FFE00000000007FFE00000000007FFF0000000000 3FFF00000000003FFF00000000003FFF00000000E01FFF00000001F01FFF80000003F00F FF80000003F007FFC0000007E007FFC0000007E003FFE000000FC001FFF000001FC000FF F800003F80007FFC0000FF00001FFE0003FE00000FFFC03FF8000003FFFFFFF0000000FF FFFFC00000001FFFFE0000000001FFF0000034387CB63D>101 D<007F000000FF800003 FFE00007FFF00007FFF0000FFFF8000FFFF8000FFFF8000FFFF8000FFFF8000FFFF8000F FFF80007FFF00007FFF00003FFE00000FF8000007F000000000000000000000000000000 000000000000000000000000000000000000000000000000000000000000000000000000 000000003FF000FFFFF000FFFFF000FFFFF000FFFFF000FFFFF00001FFF00000FFF00000 FFF00000FFF00000FFF00000FFF00000FFF00000FFF00000FFF00000FFF00000FFF00000 
FFF00000FFF00000FFF00000FFF00000FFF00000FFF00000FFF00000FFF00000FFF00000 FFF00000FFF00000FFF00000FFF00000FFF00000FFF00000FFF00000FFF00000FFF00000 FFF00000FFF00000FFF00000FFF00000FFF00000FFF00000FFF00000FFF00000FFF00000 FFF00000FFF00000FFF00000FFF00000FFF000FFFFFFE0FFFFFFE0FFFFFFE0FFFFFFE0FF FFFFE01B547BD325>105 D<003FF000FFFFF000FFFFF000FFFFF000FFFFF000FFFFF000 01FFF00000FFF00000FFF00000FFF00000FFF00000FFF00000FFF00000FFF00000FFF000 00FFF00000FFF00000FFF00000FFF00000FFF00000FFF00000FFF00000FFF00000FFF000 00FFF00000FFF00000FFF00000FFF00000FFF00000FFF00000FFF00000FFF00000FFF000 00FFF00000FFF00000FFF00000FFF00000FFF00000FFF00000FFF00000FFF00000FFF000 00FFF00000FFF00000FFF00000FFF00000FFF00000FFF00000FFF00000FFF00000FFF000 00FFF00000FFF00000FFF00000FFF00000FFF00000FFF00000FFF00000FFF00000FFF000 00FFF00000FFF00000FFF00000FFF00000FFF00000FFF00000FFF00000FFF00000FFF000 00FFF00000FFF00000FFF00000FFF00000FFF00000FFF00000FFF00000FFF00000FFF000 FFFFFFF0FFFFFFF0FFFFFFF0FFFFFFF0FFFFFFF01C537BD225>108 D<003FF0001FFC000000FFE00000FFFFF000FFFFC00007FFFE0000FFFFF003FFFFF0001F FFFF8000FFFFF00FFFFFF8007FFFFFC000FFFFF01FE07FFC00FF03FFE000FFFFF03F001F FE01F800FFF00003FFF07C001FFF03E000FFF80000FFF0F0000FFF0780007FF80000FFF1 E0000FFF8F00007FFC0000FFF3C0000FFF9E00007FFC0000FFF7800007FFBC00003FFC00 00FFF7800007FFFC00003FFE0000FFFF000007FFF800003FFE0000FFFE000007FFF00000 3FFE0000FFFE000007FFF000003FFE0000FFFC000007FFE000003FFE0000FFFC000007FF E000003FFE0000FFFC000007FFE000003FFE0000FFFC000007FFE000003FFE0000FFF800 0007FFC000003FFE0000FFF8000007FFC000003FFE0000FFF8000007FFC000003FFE0000 FFF8000007FFC000003FFE0000FFF8000007FFC000003FFE0000FFF8000007FFC000003F FE0000FFF8000007FFC000003FFE0000FFF8000007FFC000003FFE0000FFF8000007FFC0 00003FFE0000FFF8000007FFC000003FFE0000FFF8000007FFC000003FFE0000FFF80000 07FFC000003FFE0000FFF8000007FFC000003FFE0000FFF8000007FFC000003FFE0000FF F8000007FFC000003FFE0000FFF8000007FFC000003FFE0000FFF8000007FFC000003FFE 
0000FFF8000007FFC000003FFE0000FFF8000007FFC000003FFE0000FFF8000007FFC000 003FFE0000FFF8000007FFC000003FFE0000FFF8000007FFC000003FFE0000FFF8000007 FFC000003FFE0000FFF8000007FFC000003FFE0000FFF8000007FFC000003FFE0000FFF8 000007FFC000003FFE0000FFF8000007FFC000003FFE0000FFF8000007FFC000003FFE00 00FFF8000007FFC000003FFE0000FFF8000007FFC000003FFE00FFFFFFF807FFFFFFC03F FFFFFEFFFFFFF807FFFFFFC03FFFFFFEFFFFFFF807FFFFFFC03FFFFFFEFFFFFFF807FFFF FFC03FFFFFFEFFFFFFF807FFFFFFC03FFFFFFE67367BB570>I<003FF001FFE0000000FF FFF00FFFFE000000FFFFF03FFFFFC00000FFFFF0FFFFFFF00000FFFFF3FF01FFF80000FF FFF7F8007FFE000003FFFFE0001FFF000000FFFF80000FFF800000FFFF000007FFC00000 FFFE000007FFE00000FFFC000003FFF00000FFF8000001FFF80000FFF8000001FFF80000 FFF8000000FFFC0000FFF8000000FFFC0000FFF8000000FFFE0000FFF80000007FFE0000 FFF80000007FFF0000FFF80000007FFF0000FFF80000007FFF0000FFF80000007FFF0000 FFF80000003FFF8000FFF80000003FFF8000FFF80000003FFF8000FFF80000003FFF8000 FFF80000003FFF8000FFF80000003FFF8000FFF80000003FFF8000FFF80000003FFF8000 FFF80000003FFF8000FFF80000003FFF8000FFF80000003FFF8000FFF80000003FFF8000 FFF80000003FFF0000FFF80000007FFF0000FFF80000007FFF0000FFF80000007FFF0000 FFF80000007FFE0000FFF8000000FFFE0000FFF8000000FFFE0000FFF8000000FFFC0000 FFF8000001FFFC0000FFF8000001FFF80000FFFC000003FFF00000FFFC000003FFF00000 FFFE000007FFE00000FFFF00000FFFC00000FFFF80001FFF800000FFFFC0003FFF000000 FFFFF000FFFC000000FFFBFE07FFF8000000FFF8FFFFFFE0000000FFF87FFFFF80000000 FFF81FFFFC00000000FFF803FFC000000000FFF800000000000000FFF800000000000000 FFF800000000000000FFF800000000000000FFF800000000000000FFF800000000000000 FFF800000000000000FFF800000000000000FFF800000000000000FFF800000000000000 FFF800000000000000FFF800000000000000FFF800000000000000FFF800000000000000 FFF800000000000000FFF800000000000000FFF8000000000000FFFFFFF80000000000FF FFFFF80000000000FFFFFFF80000000000FFFFFFF80000000000FFFFFFF8000000000041 4D7BB54B>112 D<007FE003FE00FFFFE00FFF80FFFFE03FFFE0FFFFE07FFFF0FFFFE0FE 
E0000000000000000000000007FFFFE0000000000000000000000007FFFFE00000000000 00000000000007FFFFE0000000000000000000000007FFFFE00000000000000000000000 07FFFFE0000000000000000000000007FFFFE0000000000000000000000007FFFFE00000 00000000000000000007FFFFE0000000000000000000000007FFFFE00000000000000000 00000007FFFFE0000000000000000000000007FFFFE0000000000000000000000007FFFF E0000000000000000000000007FFFFE0000000000000000000000007FFFFE00000000000 00000000000007FFFFE0000000000000000000000007FFFFE00000000000000000000000 07FFFFE0000000000000000000000007FFFFE0000000000000003FFF800007FFFFE00000 000000000FFFFFFC0007FFFFE0000000000000FFFFFFFF8007FFFFE0000000000007FFFF FFFFE007FFFFE000000000001FFFFFFFFFF807FFFFE000000000007FFFFFFFFFFE07FFFF E00000000001FFFFFFFFFFFF87FFFFE00000000007FFFFFE007FFFC7FFFFE0000000001F FFFFE00007FFF7FFFFE0000000003FFFFF000000FFFFFFFFE0000000007FFFFC0000003F FFFFFFE000000001FFFFF80000000FFFFFFFE000000003FFFFE000000007FFFFFFE00000 0007FFFFC000000003FFFFFFE00000000FFFFF8000000001FFFFFFE00000001FFFFF0000 0000007FFFFFE00000003FFFFE00000000007FFFFFE00000003FFFFE00000000003FFFFF E00000007FFFFC00000000001FFFFFE0000000FFFFF800000000000FFFFFE0000001FFFF F800000000000FFFFFE0000001FFFFF000000000000FFFFFE0000003FFFFF00000000000 0FFFFFE0000003FFFFF000000000000FFFFFE0000007FFFFE000000000000FFFFFE00000 07FFFFE000000000000FFFFFE000000FFFFFE000000000000FFFFFE000000FFFFFC00000 0000000FFFFFE000001FFFFFC000000000000FFFFFE000001FFFFFC000000000000FFFFF E000001FFFFFC000000000000FFFFFE000003FFFFFC000000000000FFFFFE000003FFFFF C000000000000FFFFFE000003FFFFF8000000000000FFFFFE000007FFFFF800000000000 0FFFFFE000007FFFFF8000000000000FFFFFE000007FFFFF8000000000000FFFFFE00000 7FFFFF8000000000000FFFFFE000007FFFFF8000000000000FFFFFE00000FFFFFF800000 0000000FFFFFE00000FFFFFF8000000000000FFFFFE00000FFFFFF8000000000000FFFFF E00000FFFFFF8000000000000FFFFFE00000FFFFFF8000000000000FFFFFE00000FFFFFF 8000000000000FFFFFE00000FFFFFF8000000000000FFFFFE00000FFFFFF800000000000 
0FFFFFE00000FFFFFF8000000000000FFFFFE00000FFFFFF8000000000000FFFFFE00000 FFFFFF8000000000000FFFFFE00000FFFFFF8000000000000FFFFFE00000FFFFFF800000 0000000FFFFFE00000FFFFFF8000000000000FFFFFE00000FFFFFF8000000000000FFFFF E00000FFFFFF8000000000000FFFFFE000007FFFFF8000000000000FFFFFE000007FFFFF 8000000000000FFFFFE000007FFFFF8000000000000FFFFFE000007FFFFF800000000000 0FFFFFE000007FFFFF8000000000000FFFFFE000003FFFFF8000000000000FFFFFE00000 3FFFFFC000000000000FFFFFE000003FFFFFC000000000000FFFFFE000003FFFFFC00000 0000000FFFFFE000001FFFFFC000000000000FFFFFE000001FFFFFC000000000000FFFFF E000000FFFFFC000000000000FFFFFE000000FFFFFC000000000000FFFFFE000000FFFFF E000000000000FFFFFE0000007FFFFE000000000000FFFFFE0000007FFFFE00000000000 0FFFFFE0000003FFFFF000000000000FFFFFE0000003FFFFF000000000000FFFFFE00000 01FFFFF000000000001FFFFFE0000000FFFFF800000000003FFFFFE0000000FFFFF80000 0000007FFFFFE00000007FFFFC00000000007FFFFFE00000003FFFFE0000000000FFFFFF E00000001FFFFE0000000001FFFFFFE00000000FFFFF0000000007FFFFFFE00000000FFF FF800000000FFFFFFFE000000007FFFFC00000001FFFFFFFF000000001FFFFF00000007F FFFFFFF000000000FFFFF8000000FFFFFFFFFE000000007FFFFE000003FFEFFFFFFFFF80 00003FFFFF80001FFFCFFFFFFFFF8000000FFFFFF801FFFF0FFFFFFFFF80000003FFFFFF FFFFFE0FFFFFFFFF80000001FFFFFFFFFFF80FFFFFFFFF800000007FFFFFFFFFE00FFFFF FFFF800000000FFFFFFFFF800FFFFFFFFF8000000001FFFFFFFC000FFFFFFFFF80000000 003FFFFFE0000FFFFFFFFF800000000000FFFE00000FFFFE000000>113 144 120 270 129 I<00000000007FFFC000000000000000000FFFFFFF00000000000000 00FFFFFFFFE000000000000007FFFFFFFFFC0000000000001FFFFFFFFFFF000000000000 7FFFFFFFFFFFC00000000001FFFFFFFFFFFFF00000000007FFFFF803FFFFF8000000001F FFFF80003FFFFE000000003FFFFE00000FFFFF000000007FFFF8000003FFFF80000001FF FFF0000001FFFFC0000003FFFFC00000007FFFE0000007FFFF800000003FFFF000000FFF FF000000003FFFF000001FFFFF000000001FFFF800003FFFFE000000000FFFFC00003FFF FC000000000FFFFE00007FFFFC0000000007FFFE0000FFFFF80000000003FFFF0001FFFF 
F80000000003FFFF0001FFFFF00000000003FFFF8003FFFFF00000000001FFFF8003FFFF E00000000001FFFF8007FFFFE00000000001FFFFC007FFFFE00000000000FFFFC00FFFFF E00000000000FFFFE00FFFFFC00000000000FFFFE01FFFFFC00000000000FFFFE01FFFFF C000000000007FFFE01FFFFFC000000000007FFFF03FFFFFC000000000007FFFF03FFFFF C000000000007FFFF03FFFFF8000000000007FFFF07FFFFF8000000000007FFFF07FFFFF 8000000000007FFFF07FFFFF8000000000007FFFF87FFFFF8000000000003FFFF87FFFFF 8000000000003FFFF8FFFFFF8000000000003FFFF8FFFFFF8000000000003FFFF8FFFFFF 8000000000003FFFF8FFFFFFFFFFFFFFFFFFFFFFF8FFFFFFFFFFFFFFFFFFFFFFF8FFFFFF FFFFFFFFFFFFFFFFF8FFFFFFFFFFFFFFFFFFFFFFF8FFFFFFFFFFFFFFFFFFFFFFF8FFFFFF FFFFFFFFFFFFFFFFF0FFFFFF800000000000000000FFFFFF800000000000000000FFFFFF 800000000000000000FFFFFF800000000000000000FFFFFF800000000000000000FFFFFF 800000000000000000FFFFFF8000000000000000007FFFFF8000000000000000007FFFFF 8000000000000000007FFFFF8000000000000000007FFFFF8000000000000000007FFFFF C000000000000000003FFFFFC000000000000000003FFFFFC000000000000000003FFFFF C000000000000000001FFFFFC000000000000000001FFFFFC000000000000000001FFFFF E000000000000000000FFFFFE000000000000000000FFFFFE0000000000000000007FFFF E000000000000007F007FFFFF00000000000000FF803FFFFF00000000000000FF803FFFF F80000000000000FF801FFFFF80000000000001FF801FFFFFC0000000000001FF000FFFF FC0000000000003FF0007FFFFE0000000000007FE0003FFFFF0000000000007FE0003FFF FF000000000000FFC0001FFFFF800000000001FF80000FFFFFC00000000003FF800007FF FFF00000000007FF000003FFFFF8000000001FFE000001FFFFFC000000003FFC0000007F FFFF00000000FFF80000003FFFFFE0000003FFF00000001FFFFFFC00001FFFE000000007 FFFFFFC003FFFFC000000003FFFFFFFFFFFFFF0000000000FFFFFFFFFFFFFE0000000000 3FFFFFFFFFFFF800000000000FFFFFFFFFFFE0000000000001FFFFFFFFFF800000000000 003FFFFFFFFC0000000000000003FFFFFFC000000000000000000FFFF8000000005D5F7A DD6A>I[<0000000000003FFF80000000000000000FFFFFF000000000000000FFFFFFFE00 000000000007FFFFFFFF0000000000001FFFFFFFFFC000000000007FFFFFFFFFE0000000 
0001FFFFFFFFFFF00000000007FFFFF807FFF8000000001FFFFFC00FFFFC000000003FFF FE001FFFFE000000007FFFFC003FFFFE00000000FFFFF0003FFFFF00000001FFFFE0007F FFFF00000003FFFFC0007FFFFF00000007FFFFC000FFFFFF80000007FFFF8000FFFFFF80 00000FFFFF0000FFFFFF8000001FFFFF0000FFFFFF8000001FFFFE0000FFFFFF8000003F FFFE0000FFFFFF8000003FFFFC0000FFFFFF8000007FFFFC0000FFFFFF8000007FFFFC00 007FFFFF0000007FFFFC00007FFFFF0000007FFFF800003FFFFE000000FFFFF800001FFF FC000000FFFFF800001FFFFC000000FFFFF800000FFFF8000000FFFFF8000003FFE00000 00FFFFF8000000FF80000000FFFFF80000000000000000FFFFF80000000000000000FFFF F80000000000000000FFFFF80000000000000000FFFFF80000000000000000FFFFF80000 000000000000FFFFF80000000000000000FFFFF80000000000000000FFFFF80000000000 000000FFFFF80000000000000000FFFFF80000000000000000FFFFF80000000000000000 FFFFF80000000000000000FFFFF80000000000000000FFFFF80000000000000000FFFFF8 0000000000000000FFFFF80000000000000000FFFFF80000000000000000FFFFF8000000 0000000000FFFFF80000000000000000FFFFF80000000000000000FFFFF8000000000000 FFFFFFFFFFFFFFF8000000FFFFFFFFFFFFFFF8000000FFFFFFFFFFFFFFF8000000FFFFFF FFFFFFFFF8000000FFFFFFFFFFFFFFF8000000FFFFFFFFFFFFFFF8000000FFFFFFFFFFFF FFF8000000FFFFFFFFFFFFFFF8000000FFFFFFFFFFFFFFF80000000000FFFFFC00000000 00000000FFFFFC0000000000000000FFFFFC0000000000000000FFFFFC00000000000000 00FFFFFC0000000000000000FFFFFC0000000000000000FFFFFC0000000000000000FFFF FC0000000000000000FFFFFC0000000000000000FFFFFC0000000000000000FFFFFC0000 000000000000FFFFFC0000000000000000FFFFFC0000000000000000FFFFFC0000000000 000000FFFFFC0000000000000000FFFFFC0000000000000000FFFFFC0000000000000000 FFFFFC0000000000000000FFFFFC0000000000000000FFFFFC0000000000000000FFFFFC 0000000000000000FFFFFC0000000000000000FFFFFC0000000000000000FFFFFC000000 0000000000FFFFFC0000000000000000FFFFFC0000000000000000FFFFFC000000000000 0000FFFFFC0000000000000000FFFFFC0000000000000000FFFFFC0000000000000000FF FFFC0000000000000000FFFFFC0000000000000000FFFFFC0000000000000000FFFFFC00 
00000000000000FFFFFC0000000000000000FFFFFC0000000000000000FFFFFC00000000 00000000FFFFFC0000000000000000FFFFFC0000000000000000FFFFFC00000000000000 00FFFFFC0000000000000000FFFFFC0000000000000000FFFFFC0000000000000000FFFF FC0000000000000000FFFFFC0000000000000000FFFFFC0000000000000000FFFFFC0000 000000000000FFFFFC0000000000000000FFFFFC0000000000000000FFFFFC0000000000 000000FFFFFC0000000000000000FFFFFC0000000000000000FFFFFC0000000000000000 FFFFFC0000000000000000FFFFFC0000000000000000FFFFFC0000000000000000FFFFFC 0000000000000000FFFFFC0000000000000000FFFFFC0000000000000000FFFFFC000000 0000000000FFFFFC0000000000000000FFFFFC0000000000000000FFFFFC000000000000 0000FFFFFC0000000000000000FFFFFC0000000000000000FFFFFC0000000000000000FF FFFC0000000000000000FFFFFC0000000000000000FFFFFC0000000000000000FFFFFC00 00000000000000FFFFFC0000000000000000FFFFFC0000000000000000FFFFFC00000000 00000000FFFFFC0000000000003FFFFFFFFFFFFF000000003FFFFFFFFFFFFF000000003F FFFFFFFFFFFF000000003FFFFFFFFFFFFF000000003FFFFFFFFFFFFF000000003FFFFFFF FFFFFF000000003FFFFFFFFFFFFF000000003FFFFFFFFFFFFF000000003FFFFFFFFFFFFF 00000000>81 144 121 271 71 I[<0000001FF000000000000000000000007FFFFFF000 000000000000000000FFFFFFFFF000000000000000000000FFFFFFFFF000000000000000 000000FFFFFFFFF000000000000000000000FFFFFFFFF000000000000000000000FFFFFF FFF000000000000000000000FFFFFFFFF000000000000000000000FFFFFFFFF000000000 000000000000FFFFFFFFF000000000000000000000FFFFFFFFF000000000000000000000 003FFFFFF0000000000000000000000007FFFFF0000000000000000000000007FFFFF000 0000000000000000000003FFFFF0000000000000000000000003FFFFF000000000000000 0000000003FFFFF0000000000000000000000003FFFFF0000000000000000000000003FF FFF0000000000000000000000003FFFFF0000000000000000000000003FFFFF000000000 0000000000000003FFFFF0000000000000000000000003FFFFF000000000000000000000 0003FFFFF0000000000000000000000003FFFFF0000000000000000000000003FFFFF000 0000000000000000000003FFFFF0000000000000000000000003FFFFF000000000000000 
0000000003FFFFF0000000000000000000000003FFFFF0000000000000000000000003FF FFF0000000000000000000000003FFFFF0000000000000000000000003FFFFF000000000 0000000000000003FFFFF0000000000000000000000003FFFFF000000000000000000000 0003FFFFF0000000000000000000000003FFFFF0000000000000000000000003FFFFF000 0000000000000000000003FFFFF0000000000000000000000003FFFFF000000000000000 0000000003FFFFF0000000000000000000000003FFFFF0000000000000000000000003FF FFF0000000000000000000000003FFFFF0000000000000000000000003FFFFF000000000 0000000000000003FFFFF0000000000000000000000003FFFFF000000000000000000000 0003FFFFF0000000000000000000000003FFFFF0000000000000000000000003FFFFF000 0000000000000000000003FFFFF0000003FFFC00000000000003FFFFF000003FFFFFE000 0000000003FFFFF00000FFFFFFF8000000000003FFFFF00007FFFFFFFE000000000003FF FFF0001FFFFFFFFF800000000003FFFFF0003FFFFFFFFFE00000000003FFFFF000FFFFFF FFFFF00000000003FFFFF001FFFC03FFFFF80000000003FFFFF003FFC000FFFFFC000000 0003FFFFF007FE00007FFFFE0000000003FFFFF00FF800003FFFFE0000000003FFFFF01F E000003FFFFF0000000003FFFFF03FC000001FFFFF8000000003FFFFF07F8000001FFFFF 8000000003FFFFF0FE0000001FFFFFC000000003FFFFF1FC0000000FFFFFC000000003FF FFF1F80000000FFFFFC000000003FFFFF3F80000000FFFFFE000000003FFFFF7F0000000 0FFFFFE000000003FFFFF7E000000007FFFFE000000003FFFFFFC000000007FFFFE00000 0003FFFFFFC000000007FFFFE000000003FFFFFF8000000007FFFFF000000003FFFFFF80 00000007FFFFF000000003FFFFFF0000000007FFFFF000000003FFFFFF0000000007FFFF F000000003FFFFFE0000000007FFFFF000000003FFFFFE0000000007FFFFF000000003FF FFFE0000000007FFFFF000000003FFFFFC0000000007FFFFF000000003FFFFFC00000000 07FFFFF000000003FFFFFC0000000007FFFFF000000003FFFFFC0000000007FFFFF00000 0003FFFFF80000000007FFFFF000000003FFFFF80000000007FFFFF000000003FFFFF800 00000007FFFFF000000003FFFFF80000000007FFFFF000000003FFFFF80000000007FFFF F000000003FFFFF80000000007FFFFF000000003FFFFF80000000007FFFFF000000003FF FFF80000000007FFFFF000000003FFFFF80000000007FFFFF000000003FFFFF800000000 
07FFFFF000000003FFFFF80000000007FFFFF000000003FFFFF80000000007FFFFF00000 0003FFFFF80000000007FFFFF000000003FFFFF80000000007FFFFF000000003FFFFF800 00000007FFFFF000000003FFFFF80000000007FFFFF000000003FFFFF80000000007FFFF F000000003FFFFF80000000007FFFFF000000003FFFFF80000000007FFFFF000000003FF FFF80000000007FFFFF000000003FFFFF80000000007FFFFF000000003FFFFF800000000 07FFFFF000000003FFFFF80000000007FFFFF000000003FFFFF80000000007FFFFF00000 0003FFFFF80000000007FFFFF000000003FFFFF80000000007FFFFF000000003FFFFF800 00000007FFFFF000000003FFFFF80000000007FFFFF000000003FFFFF80000000007FFFF F000000003FFFFF80000000007FFFFF000000003FFFFF80000000007FFFFF000000003FF FFF80000000007FFFFF000000003FFFFF80000000007FFFFF000000003FFFFF800000000 07FFFFF000000003FFFFF80000000007FFFFF000000003FFFFF80000000007FFFFF00000 0003FFFFF80000000007FFFFF000000003FFFFF80000000007FFFFF000000003FFFFF800 00000007FFFFF000000003FFFFF80000000007FFFFF000000003FFFFF80000000007FFFF F000000003FFFFF80000000007FFFFF000000003FFFFF80000000007FFFFF000000003FF FFF80000000007FFFFF000000003FFFFF80000000007FFFFF000000003FFFFF800000000 07FFFFF000000003FFFFF80000000007FFFFF000000003FFFFF80000000007FFFFF00000 0003FFFFF80000000007FFFFF000000003FFFFF80000000007FFFFF000000003FFFFF800 00000007FFFFF00000FFFFFFFFFFFFE001FFFFFFFFFFFFC0FFFFFFFFFFFFE001FFFFFFFF FFFFC0FFFFFFFFFFFFE001FFFFFFFFFFFFC0FFFFFFFFFFFFE001FFFFFFFFFFFFC0FFFFFF FFFFFFE001FFFFFFFFFFFFC0FFFFFFFFFFFFE001FFFFFFFFFFFFC0FFFFFFFFFFFFE001FF FFFFFFFFFFC0FFFFFFFFFFFFE001FFFFFFFFFFFFC0FFFFFFFFFFFFE001FFFFFFFFFFFFC0 >114 143 119 270 129 104 D[<00003FC00000000000FFF00000000003FFFC00000000 07FFFE000000000FFFFF000000001FFFFF800000003FFFFFC00000007FFFFFE00000007F FFFFE0000000FFFFFFF0000000FFFFFFF0000001FFFFFFF8000001FFFFFFF8000001FFFF FFF8000001FFFFFFF8000001FFFFFFF8000001FFFFFFF8000001FFFFFFF8000001FFFFFF F8000000FFFFFFF0000000FFFFFFF00000007FFFFFE00000007FFFFFE00000003FFFFFC0 0000001FFFFF800000000FFFFF0000000007FFFE0000000003FFFC0000000000FFF00000 
0000003FC000000000000000000000000000000000000000000000000000000000000000 000000000000000000000000000000000000000000000000000000000000000000000000 000000000000000000000000000000000000000000000000000000000000000000000000 000000000000000000000000000000000000000000000000000000000000000000000000 00000000000000000000000000001FF00000007FFFFFF00000FFFFFFFFF00000FFFFFFFF F00000FFFFFFFFF00000FFFFFFFFF00000FFFFFFFFF00000FFFFFFFFF00000FFFFFFFFF0 0000FFFFFFFFF00000FFFFFFFFF00000003FFFFFF000000007FFFFF000000007FFFFF000 000003FFFFF000000003FFFFF000000003FFFFF000000003FFFFF000000003FFFFF00000 0003FFFFF000000003FFFFF000000003FFFFF000000003FFFFF000000003FFFFF0000000 03FFFFF000000003FFFFF000000003FFFFF000000003FFFFF000000003FFFFF000000003 FFFFF000000003FFFFF000000003FFFFF000000003FFFFF000000003FFFFF000000003FF FFF000000003FFFFF000000003FFFFF000000003FFFFF000000003FFFFF000000003FFFF F000000003FFFFF000000003FFFFF000000003FFFFF000000003FFFFF000000003FFFFF0 00000003FFFFF000000003FFFFF000000003FFFFF000000003FFFFF000000003FFFFF000 000003FFFFF000000003FFFFF000000003FFFFF000000003FFFFF000000003FFFFF00000 0003FFFFF000000003FFFFF000000003FFFFF000000003FFFFF000000003FFFFF0000000 03FFFFF000000003FFFFF000000003FFFFF000000003FFFFF000000003FFFFF000000003 FFFFF000000003FFFFF000000003FFFFF000000003FFFFF000000003FFFFF000000003FF FFF000000003FFFFF000000003FFFFF000000003FFFFF000000003FFFFF000000003FFFF F000000003FFFFF000000003FFFFF000000003FFFFF000000003FFFFF000000003FFFFF0 00000003FFFFF000000003FFFFF000000003FFFFF00000FFFFFFFFFFFF80FFFFFFFFFFFF 80FFFFFFFFFFFF80FFFFFFFFFFFF80FFFFFFFFFFFF80FFFFFFFFFFFF80FFFFFFFFFFFF80 FFFFFFFFFFFF80FFFFFFFFFFFF80>49 144 119 271 65 I[<0000001FF0000000000000 000000007FFFFFF0000000000000000000FFFFFFFFF0000000000000000000FFFFFFFFF0 000000000000000000FFFFFFFFF0000000000000000000FFFFFFFFF00000000000000000 00FFFFFFFFF0000000000000000000FFFFFFFFF0000000000000000000FFFFFFFFF00000 00000000000000FFFFFFFFF0000000000000000000FFFFFFFFF000000000000000000000 
3FFFFFF00000000000000000000007FFFFF00000000000000000000007FFFFF000000000 00000000000003FFFFF00000000000000000000003FFFFF00000000000000000000003FF FFF00000000000000000000003FFFFF00000000000000000000003FFFFF0000000000000 0000000003FFFFF00000000000000000000003FFFFF00000000000000000000003FFFFF0 0000000000000000000003FFFFF00000000000000000000003FFFFF00000000000000000 000003FFFFF00000000000000000000003FFFFF00000000000000000000003FFFFF00000 000000000000000003FFFFF00000000000000000000003FFFFF000000000000000000000 03FFFFF00000000000000000000003FFFFF00000000000000000000003FFFFF000000000 00000000000003FFFFF00000000000000000000003FFFFF00000000000000000000003FF FFF00000000000000000000003FFFFF00000000000000000000003FFFFF0000000000000 0000000003FFFFF00000000000000000000003FFFFF00000000000000000000003FFFFF0 0000000000000000000003FFFFF00000000000000000000003FFFFF00000000000000000 000003FFFFF00000000000000000000003FFFFF00000000000000000000003FFFFF00000 000000000000000003FFFFF00000000000000000000003FFFFF000000000000000000000 03FFFFF00000000000000000000003FFFFF00000000000000000000003FFFFF000000000 00000000000003FFFFF00000000000000000000003FFFFF00000003FFFFFFFFFC00003FF FFF00000003FFFFFFFFFC00003FFFFF00000003FFFFFFFFFC00003FFFFF00000003FFFFF FFFFC00003FFFFF00000003FFFFFFFFFC00003FFFFF00000003FFFFFFFFFC00003FFFFF0 0000003FFFFFFFFFC00003FFFFF00000003FFFFFFFFFC00003FFFFF00000003FFFFFFFFF C00003FFFFF000000001FFFFFE00000003FFFFF0000000003FFFC000000003FFFFF00000 00003FFF0000000003FFFFF0000000007FFE0000000003FFFFF000000000FFFC00000000 03FFFFF000000001FFF80000000003FFFFF000000003FFE00000000003FFFFF00000000F FFC00000000003FFFFF00000001FFF800000000003FFFFF00000003FFF000000000003FF FFF00000007FFC000000000003FFFFF0000001FFF8000000000003FFFFF0000003FFF000 0000000003FFFFF0000007FFE0000000000003FFFFF000000FFFC0000000000003FFFFF0 00001FFF00000000000003FFFFF000007FFE00000000000003FFFFF00000FFFC00000000 000003FFFFF00001FFF800000000000003FFFFF00003FFF000000000000003FFFFF00007 
FFC000000000000003FFFFF0001FFF8000000000000003FFFFF0003FFF00000000000000 03FFFFF0007FFE0000000000000003FFFFF000FFF80000000000000003FFFFF003FFFC00 00000000000003FFFFF007FFFC0000000000000003FFFFF00FFFFE0000000000000003FF FFF01FFFFF0000000000000003FFFFF03FFFFF8000000000000003FFFFF0FFFFFFC00000 0000000003FFFFF1FFFFFFC000000000000003FFFFF3FFFFFFE000000000000003FFFFF7 FFFFFFF000000000000003FFFFFFFFFFFFF800000000000003FFFFFFFFFFFFF800000000 000003FFFFFFFFFFFFFC00000000000003FFFFFFFFFFFFFE00000000000003FFFFFFF1FF FFFF00000000000003FFFFFFE0FFFFFF00000000000003FFFFFFC0FFFFFF800000000000 03FFFFFF807FFFFFC0000000000003FFFFFF003FFFFFE0000000000003FFFFFC001FFFFF F0000000000003FFFFF8001FFFFFF0000000000003FFFFF0000FFFFFF8000000000003FF FFE00007FFFFFC000000000003FFFFE00003FFFFFE000000000003FFFFE00003FFFFFE00 0000000003FFFFE00001FFFFFF000000000003FFFFE00000FFFFFF800000000003FFFFE0 00007FFFFFC00000000003FFFFE000003FFFFFC00000000003FFFFE000003FFFFFE00000 000003FFFFE000001FFFFFF00000000003FFFFE000000FFFFFF80000000003FFFFE00000 07FFFFFC0000000003FFFFE0000007FFFFFC0000000003FFFFE0000003FFFFFE00000000 03FFFFE0000001FFFFFF0000000003FFFFE0000000FFFFFF8000000003FFFFE0000000FF FFFF8000000003FFFFE00000007FFFFFC000000003FFFFE00000003FFFFFE000000003FF FFE00000001FFFFFF000000003FFFFE00000000FFFFFF000000003FFFFE00000000FFFFF F800000003FFFFE000000007FFFFFC00000003FFFFE000000003FFFFFE00000003FFFFE0 00000001FFFFFF00000003FFFFE000000001FFFFFF00000003FFFFE000000000FFFFFF80 000003FFFFE000000000FFFFFFC0000003FFFFE000000001FFFFFFF000FFFFFFFFFFFF80 007FFFFFFFFFFFFFFFFFFFFFFF80007FFFFFFFFFFFFFFFFFFFFFFF80007FFFFFFFFFFFFF FFFFFFFFFF80007FFFFFFFFFFFFFFFFFFFFFFF80007FFFFFFFFFFFFFFFFFFFFFFF80007F FFFFFFFFFFFFFFFFFFFFFF80007FFFFFFFFFFFFFFFFFFFFFFF80007FFFFFFFFFFFFFFFFF FFFFFF80007FFFFFFFFFFF>112 143 121 270 123 107 D<0000003FE0000003FFFC00 00000000007FFFFFE000003FFFFFE000000000FFFFFFFFE00000FFFFFFF800000000FFFF FFFFE00007FFFFFFFE00000000FFFFFFFFE0001FFFFFFFFF80000000FFFFFFFFE0003FFF 
FFFFFFE0000000FFFFFFFFE000FFFFFFFFFFF0000000FFFFFFFFE001FFFC03FFFFF80000 00FFFFFFFFE003FFC000FFFFFC000000FFFFFFFFE007FE00007FFFFE000000FFFFFFFFE0 0FF800003FFFFE000000003FFFFFE01FE000003FFFFF0000000007FFFFE03FC000001FFF FF8000000007FFFFE07F8000001FFFFF8000000003FFFFE0FE0000001FFFFFC000000003 FFFFE1FC0000000FFFFFC000000003FFFFE1F80000000FFFFFC000000003FFFFE3F80000 000FFFFFE000000003FFFFE7F00000000FFFFFE000000003FFFFE7E000000007FFFFE000 000003FFFFEFC000000007FFFFE000000003FFFFEFC000000007FFFFE000000003FFFFFF 8000000007FFFFF000000003FFFFFF8000000007FFFFF000000003FFFFFF0000000007FF FFF000000003FFFFFF0000000007FFFFF000000003FFFFFE0000000007FFFFF000000003 FFFFFE0000000007FFFFF000000003FFFFFE0000000007FFFFF000000003FFFFFC000000 0007FFFFF000000003FFFFFC0000000007FFFFF000000003FFFFFC0000000007FFFFF000 000003FFFFFC0000000007FFFFF000000003FFFFF80000000007FFFFF000000003FFFFF8 0000000007FFFFF000000003FFFFF80000000007FFFFF000000003FFFFF80000000007FF FFF000000003FFFFF80000000007FFFFF000000003FFFFF80000000007FFFFF000000003 FFFFF80000000007FFFFF000000003FFFFF80000000007FFFFF000000003FFFFF8000000 0007FFFFF000000003FFFFF80000000007FFFFF000000003FFFFF80000000007FFFFF000 000003FFFFF80000000007FFFFF000000003FFFFF80000000007FFFFF000000003FFFFF8 0000000007FFFFF000000003FFFFF80000000007FFFFF000000003FFFFF80000000007FF FFF000000003FFFFF80000000007FFFFF000000003FFFFF80000000007FFFFF000000003 FFFFF80000000007FFFFF000000003FFFFF80000000007FFFFF000000003FFFFF8000000 0007FFFFF000000003FFFFF80000000007FFFFF000000003FFFFF80000000007FFFFF000 000003FFFFF80000000007FFFFF000000003FFFFF80000000007FFFFF000000003FFFFF8 0000000007FFFFF000000003FFFFF80000000007FFFFF000000003FFFFF80000000007FF FFF000000003FFFFF80000000007FFFFF000000003FFFFF80000000007FFFFF000000003 FFFFF80000000007FFFFF000000003FFFFF80000000007FFFFF000000003FFFFF8000000 0007FFFFF000000003FFFFF80000000007FFFFF000000003FFFFF80000000007FFFFF000 000003FFFFF80000000007FFFFF000000003FFFFF80000000007FFFFF000000003FFFFF8 
0000000007FFFFF000000003FFFFF80000000007FFFFF000000003FFFFF80000000007FF FFF000000003FFFFF80000000007FFFFF000000003FFFFF80000000007FFFFF000000003 FFFFF80000000007FFFFF000000003FFFFF80000000007FFFFF000000003FFFFF8000000 0007FFFFF000000003FFFFF80000000007FFFFF000000003FFFFF80000000007FFFFF000 000003FFFFF80000000007FFFFF000000003FFFFF80000000007FFFFF000000003FFFFF8 0000000007FFFFF000000003FFFFF80000000007FFFFF00000FFFFFFFFFFFFE001FFFFFF FFFFFFC0FFFFFFFFFFFFE001FFFFFFFFFFFFC0FFFFFFFFFFFFE001FFFFFFFFFFFFC0FFFF FFFFFFFFE001FFFFFFFFFFFFC0FFFFFFFFFFFFE001FFFFFFFFFFFFC0FFFFFFFFFFFFE001 FFFFFFFFFFFFC0FFFFFFFFFFFFE001FFFFFFFFFFFFC0FFFFFFFFFFFFE001FFFFFFFFFFFF C0FFFFFFFFFFFFE001FFFFFFFFFFFFC0725D77DC81>110 D<00000000001FFFF0000000 00000000000007FFFFFFC000000000000000007FFFFFFFFC0000000000000003FFFFFFFF FF800000000000000FFFFFFFFFFFE00000000000007FFFFFFFFFFFFC000000000001FFFF FFFFFFFFFF000000000003FFFFFC007FFFFF80000000000FFFFFC00007FFFFE000000000 1FFFFE000000FFFFF0000000007FFFF80000003FFFFC00000000FFFFF00000001FFFFE00 000001FFFFC000000007FFFF00000003FFFF8000000003FFFF80000007FFFF0000000001 FFFFC000000FFFFE0000000000FFFFE000001FFFFC00000000007FFFF000003FFFFC0000 0000007FFFF800007FFFF800000000003FFFFC00007FFFF800000000003FFFFC0000FFFF F000000000001FFFFE0001FFFFF000000000001FFFFF0001FFFFE000000000000FFFFF00 03FFFFE000000000000FFFFF8003FFFFE000000000000FFFFF8007FFFFE000000000000F FFFFC007FFFFC0000000000007FFFFC00FFFFFC0000000000007FFFFE00FFFFFC0000000 000007FFFFE01FFFFFC0000000000007FFFFF01FFFFFC0000000000007FFFFF01FFFFFC0 000000000007FFFFF03FFFFF80000000000003FFFFF83FFFFF80000000000003FFFFF83F FFFF80000000000003FFFFF83FFFFF80000000000003FFFFF87FFFFF80000000000003FF FFFC7FFFFF80000000000003FFFFFC7FFFFF80000000000003FFFFFC7FFFFF8000000000 0003FFFFFC7FFFFF80000000000003FFFFFCFFFFFF80000000000003FFFFFEFFFFFF8000 0000000003FFFFFEFFFFFF80000000000003FFFFFEFFFFFF80000000000003FFFFFEFFFF FF80000000000003FFFFFEFFFFFF80000000000003FFFFFEFFFFFF80000000000003FFFF 
FEFFFFFF80000000000003FFFFFEFFFFFF80000000000003FFFFFEFFFFFF800000000000 03FFFFFEFFFFFF80000000000003FFFFFEFFFFFF80000000000003FFFFFEFFFFFF800000 00000003FFFFFEFFFFFF80000000000003FFFFFEFFFFFF80000000000003FFFFFE7FFFFF 80000000000003FFFFFC7FFFFF80000000000003FFFFFC7FFFFF80000000000003FFFFFC 7FFFFF80000000000003FFFFFC7FFFFF80000000000003FFFFFC3FFFFFC0000000000007 FFFFF83FFFFFC0000000000007FFFFF83FFFFFC0000000000007FFFFF81FFFFFC0000000 000007FFFFF01FFFFFC0000000000007FFFFF01FFFFFC0000000000007FFFFF00FFFFFC0 000000000007FFFFE00FFFFFE000000000000FFFFFE007FFFFE000000000000FFFFFC007 FFFFE000000000000FFFFFC003FFFFF000000000001FFFFF8003FFFFF000000000001FFF FF8001FFFFF000000000001FFFFF0001FFFFF800000000003FFFFF0000FFFFF800000000 003FFFFE00007FFFFC00000000007FFFFC00007FFFFE0000000000FFFFFC00003FFFFE00 00000000FFFFF800001FFFFF0000000001FFFFF000000FFFFF8000000003FFFFE0000007 FFFFC000000007FFFFC0000003FFFFF00000001FFFFF80000001FFFFF80000003FFFFF00 0000007FFFFE000000FFFFFC000000003FFFFFC00007FFFFF8000000001FFFFFFC007FFF FFF00000000007FFFFFFFFFFFFFFC00000000001FFFFFFFFFFFFFF0000000000007FFFFF FFFFFFFC0000000000001FFFFFFFFFFFF000000000000003FFFFFFFFFF80000000000000 007FFFFFFFFC000000000000000007FFFFFFC00000000000000000001FFFF00000000000 675F7ADD74>I<0000007FC00001FF80000000FFFFFFC0001FFFF80000FFFFFFFFC0007F FFFE0000FFFFFFFFC001FFFFFF8000FFFFFFFFC003FFFFFFE000FFFFFFFFC007FFFFFFF0 00FFFFFFFFC00FFFFFFFF800FFFFFFFFC01FFF07FFFC00FFFFFFFFC03FF00FFFFE00FFFF FFFFC07FC01FFFFE00FFFFFFFFC0FF803FFFFF00003FFFFFC0FF003FFFFF000007FFFFC1 FE007FFFFF800007FFFFC3FC007FFFFF800003FFFFC3F8007FFFFF800003FFFFC7F0007F FFFF800003FFFFC7E0007FFFFF800003FFFFCFE0007FFFFF800003FFFFCFC0007FFFFF80 0003FFFFDFC0007FFFFF800003FFFFDF80003FFFFF000003FFFFDF80003FFFFF000003FF FFFF00001FFFFE000003FFFFFF00001FFFFE000003FFFFFF00000FFFFC000003FFFFFE00 0007FFF8000003FFFFFE000003FFF0000003FFFFFE000000FFC0000003FFFFFC00000000 00000003FFFFFC0000000000000003FFFFFC0000000000000003FFFFFC00000000000000 
ScaLAPACK Quick Reference Guide to the Driver Routines, Release 1.5

Simple Drivers

Simple Driver Routines for Linear Equations

  General:
      PSGESV( N, NRHS, A, IA, JA, DESCA, IPIV, B, IB, JB, DESCB, INFO )
      PCGESV( N, NRHS, A, IA, JA, DESCA, IPIV, B, IB, JB, DESCB, INFO )
  General Band (no pivoting):
      PSDBSV( N, BWL, BWU, NRHS, A, JA, DESCA, B, IB, DESCB, WORK, LWORK, INFO )
      PCDBSV( N, BWL, BWU, NRHS, A, JA, DESCA, B, IB, DESCB, WORK, LWORK, INFO )
  General Band (partial pivoting):
      PSGBSV( N, BWL, BWU, NRHS, A, JA, DESCA, IPIV, B, IB, DESCB, WORK, LWORK, INFO )
      PCGBSV( N, BWL, BWU, NRHS, A, JA, DESCA, IPIV, B, IB, DESCB, WORK, LWORK, INFO )
  General Tridiagonal (no pivoting):
      PSDTSV( N, NRHS, DL, D, DU, JA, DESCA, B, IB, DESCB, WORK, LWORK, INFO )
      PCDTSV( N, NRHS, DL, D, DU, JA, DESCA, B, IB, DESCB, WORK, LWORK, INFO )
  Symmetric/Hermitian Positive Definite:
      PSPOSV( UPLO, N, NRHS, A, IA, JA, DESCA, B, IB, JB, DESCB, INFO )
      PCPOSV( UPLO, N, NRHS, A, IA, JA, DESCA, B, IB, JB, DESCB, INFO )
  Symmetric/Hermitian Positive Definite Band:
      PSPBSV( UPLO, N, BW, NRHS, A, JA, DESCA, B, IB, DESCB, WORK, LWORK, INFO )
      PCPBSV( UPLO, N, BW, NRHS, A, JA, DESCA, B, IB, DESCB, WORK, LWORK, INFO )
  Symmetric/Hermitian Positive Definite Tridiagonal:
      PSPTSV( N, NRHS, D, E, JA, DESCA, B, IB, DESCB, WORK, LWORK, INFO )
      PCPTSV( N, NRHS, D, E, JA, DESCA, B, IB, DESCB, WORK, LWORK, INFO )

Simple Driver Routines for Standard and Generalized Linear Least Squares Problems

  Solve Using Orthogonal Factor, Assuming Full Rank:
      PSGELS( TRANS, M, N, NRHS, A, IA, JA, DESCA, B, IB, JB, DESCB, WORK, LWORK, INFO )
      PCGELS( TRANS, M, N, NRHS, A, IA, JA, DESCA, B, IB, JB, DESCB, WORK, LWORK, INFO )

Simple Driver Routines for Standard Eigenvalue and Singular Value Problems

  Symmetric/Hermitian, Eigenvalues/vectors:
      PSSYEV( JOBZ, UPLO, N, A, IA, JA, DESCA, W, Z, IZ, JZ, DESCZ, WORK, LWORK, INFO )
  General, Singular Values/Vectors:
      PSGESVD( JOBU, JOBVT, M, N, A, IA, JA, DESCA, S, U, IU, JU, DESCU, VT, IVT, JVT, DESCVT,
               WORK, LWORK, INFO )

Expert Drivers

Expert Driver Routines for Linear Equations

  General:
      PSGESVX( FACT, TRANS, N, NRHS, A, IA, JA, DESCA, AF, IAF, JAF, DESCAF, IPIV, EQUED, R, C,
               B, IB, JB, DESCB, X, IX, JX, DESCX, RCOND, FERR, BERR,
               WORK, LWORK, IWORK, LIWORK, INFO )
      PCGESVX( FACT, TRANS, N, NRHS, A, IA, JA, DESCA, AF, IAF, JAF, DESCAF, IPIV, EQUED, R, C,
               B, IB, JB, DESCB, X, IX, JX, DESCX, RCOND, FERR, BERR,
               WORK, LWORK, RWORK, LRWORK, INFO )
  Symmetric/Hermitian Positive Definite:
      PSPOSVX( FACT, UPLO, N, NRHS, A, IA, JA, DESCA, AF, IAF, JAF, DESCAF, EQUED, S,
               B, IB, JB, DESCB, X, IX, JX, DESCX, RCOND, FERR, BERR,
               WORK, LWORK, IWORK, LIWORK, INFO )
      PCPOSVX( FACT, UPLO, N, NRHS, A, IA, JA, DESCA, AF, IAF, JAF, DESCAF, EQUED, S,
               B, IB, JB, DESCB, X, IX, JX, DESCX, RCOND, FERR, BERR,
               WORK, LWORK, RWORK, LRWORK, INFO )

Expert Driver Routines for Standard and Generalized Symmetric Eigenvalue Problems

  Symmetric, Eigenvalues/vectors:
      PSSYEVX( JOBZ, RANGE, UPLO, N, A, IA, JA, DESCA, VL, VU, IL, IU, ABSTOL, M, NZ, W, ORFAC,
               Z, IZ, JZ, DESCZ, WORK, LWORK, IWORK, LIWORK, IFAIL, ICLUSTR, GAP, INFO )
  Hermitian, Eigenvalues/vectors:
      PCHEEVX( JOBZ, RANGE, UPLO, N, A, IA, JA, DESCA, VL, VU, IL, IU, ABSTOL, M, NZ, W, ORFAC,
               Z, IZ, JZ, DESCZ, WORK, LWORK, RWORK, LRWORK, IWORK, LIWORK,
               IFAIL, ICLUSTR, GAP, INFO )
  Symmetric (generalized), Eigenvalues/vectors:
      PSSYGVX( IBTYPE, JOBZ, RANGE, UPLO, N, A, IA, JA, DESCA, B, IB, JB, DESCB, VL, VU, IL, IU,
               ABSTOL, M, NZ, W, ORFAC, Z, IZ, JZ, DESCZ, WORK, LWORK, IWORK, LIWORK,
               IFAIL, ICLUSTR, GAP, INFO )
  Hermitian (generalized), Eigenvalues/vectors:
      PCHEGVX( IBTYPE, JOBZ, RANGE, UPLO, N, A, IA, JA, DESCA, B, IB, JB, DESCB, VL, VU, IL, IU,
               ABSTOL, M, NZ, W, ORFAC, Z, IZ, JZ, DESCZ, WORK, LWORK,
               RWORK, LRWORK, IWORK, LIWORK, IFAIL, ICLUSTR, GAP, INFO )

Meaning of prefixes

  Routines beginning with "PS" are available in:
      PS - REAL
      PD - DOUBLE PRECISION
  Routines beginning with "PC" are available in:
      PC - COMPLEX
      PZ - COMPLEX*16
  Note: COMPLEX*16 may not be supported by all machines.