The conjugate gradient method is not suitable for nonsymmetric systems because the residual vectors cannot be made orthogonal with short recurrences, as proved in Voevodin (1983) and Faber and Manteuffel (1984). The generalized minimal residual method retains orthogonality of the residuals by using long recurrences, at the cost of a larger storage demand. The biconjugate gradient method (BCG) takes another approach, replacing the orthogonal sequence of residuals by two mutually orthogonal sequences, at the price of no longer providing a minimization.
The update relations for residuals in the conjugate gradient method are augmented in the biconjugate gradient method by relations that are similar but based on A^T instead of A. Thus we update two sequences of residuals
r^{(i)} = r^{(i-1)} - \alpha_i A p^{(i)},    (1)
\tilde{r}^{(i)} = \tilde{r}^{(i-1)} - \alpha_i A^T \tilde{p}^{(i)},    (2)
and two sequences of search directions
p^{(i)} = r^{(i-1)} + \beta_{i-1} p^{(i-1)},    (3)
\tilde{p}^{(i)} = \tilde{r}^{(i-1)} + \beta_{i-1} \tilde{p}^{(i-1)}.    (4)
The choices
\alpha_i = \frac{\tilde{r}^{(i-1)T} r^{(i-1)}}{\tilde{p}^{(i)T} A p^{(i)}},    (5)
\beta_i = \frac{\tilde{r}^{(i)T} r^{(i)}}{\tilde{r}^{(i-1)T} r^{(i-1)}}    (6)
ensure the orthogonality relations
\tilde{r}^{(i)T} r^{(j)} = \tilde{p}^{(i)T} A p^{(j)} = 0    (7)
if i \neq j.
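To make the recurrences (1)-(6) concrete, here is a minimal sketch of the unpreconditioned BCG iteration in Python/NumPy. It assumes A is a real nonsymmetric matrix stored as a NumPy array and uses the common choice \tilde{r}^{(0)} = r^{(0)}; the function name bicg, the tolerance, and the breakdown thresholds are illustrative, not taken from any library.

```python
import numpy as np

def bicg(A, b, x0=None, tol=1e-8, maxiter=500):
    # Unpreconditioned BCG sketch following equations (1)-(6).
    n = len(b)
    x = np.zeros(n) if x0 is None else np.asarray(x0, dtype=float).copy()
    r = b - A @ x                     # residual r^(0)
    rt = r.copy()                     # shadow residual r~^(0) (a common choice)
    p = np.zeros(n)
    pt = np.zeros(n)
    rho_prev = 1.0
    for i in range(1, maxiter + 1):
        rho = rt @ r                  # r~^(i-1)^T r^(i-1)
        if abs(rho) < 1e-14:          # breakdown of type (8)
            raise RuntimeError("BCG breakdown: r~^T r ~= 0")
        if i == 1:
            p, pt = r.copy(), rt.copy()
        else:
            beta = rho / rho_prev     # equation (6)
            p = r + beta * p          # equation (3)
            pt = rt + beta * pt       # equation (4)
        q = A @ p                     # regular product A p^(i)
        qt = A.T @ pt                 # transpose product A^T p~^(i)
        sigma = pt @ q                # p~^(i)^T A p^(i)
        if abs(sigma) < 1e-14:        # breakdown of type (9)
            raise RuntimeError("BCG breakdown: p~^T A p ~= 0")
        alpha = rho / sigma           # equation (5)
        x = x + alpha * p
        r = r - alpha * q             # equation (1)
        rt = rt - alpha * qt          # equation (2)
        rho_prev = rho
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            break
    return x

# Illustrative use on a small nonsymmetric, diagonally dominant system.
A = np.array([[4.0, 1.0, 0.0],
              [0.5, 3.0, 1.0],
              [0.0, 0.5, 5.0]])
b = np.array([1.0, 2.0, 3.0])
print(bicg(A, b))
```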
Few theoretical results are known about the convergence of the biconjugate gradient method. For symmetric positive definite systems, the method delivers the same results as the conjugate gradient method, but at twice the cost per iteration. For nonsymmetric matrices, it has been shown that in phases of the process where there is significant reduction of the norm of the residual, the method is more or less comparable to the full generalized minimal residual method in terms of numbers of iterations (Freund and Nachtigal 1991). In practice, this is often confirmed, but it is also observed that the convergence behavior may be quite irregular, and the method may even break down. The breakdown situation due to the possible event that
\tilde{r}^{(i)T} r^{(i)} \approx 0    (8)
can be circumvented by so-called look-ahead strategies (Parlett et al. 1985). The other breakdown situation,
\tilde{p}^{(i)T} A p^{(i)} \approx 0    (9)
occurs when the LU decomposition fails (cf. the conjugate gradient method), and can be repaired by using another decomposition. This is done, for example, in some versions of the quasi-minimal residual method.
Sometimes, breakdown or near breakdown situations can be satisfactorily avoided by a restart at the iteration step immediately before the (near) breakdown step. Another possibility is to switch to a more robust (but possibly more expensive) method such as the generalized minimal residual method.
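As a sketch of the second option, SciPy's bicg and gmres solvers can be combined so that a (near) breakdown or stall of BCG triggers a switch to GMRES; the name solve_with_fallback is illustrative, and treating any nonzero info flag as a reason to switch is a simplification.

```python
import numpy as np
from scipy.sparse.linalg import bicg, gmres

def solve_with_fallback(A, b):
    # Try BCG first; SciPy signals breakdown or non-convergence
    # through a nonzero info flag.
    x, info = bicg(A, b)
    if info == 0:
        return x
    # Switch to the more robust (but typically more expensive) GMRES,
    # restarting from the last BCG iterate.
    x, info = gmres(A, b, x0=x)
    return x
```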
BCG requires computing a matrix-vector product A p^{(i)} and a transpose product A^T \tilde{p}^{(i)}. In some applications the latter product may be impossible to perform, for instance when the matrix is not formed explicitly and the regular product is only available in operator form, e.g., as a function call.
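The following sketch illustrates this requirement with SciPy's bicg solver (not the sketch above): when the matrix is available only as a function, both the regular product and the transpose product have to be supplied, here through a LinearOperator with matvec and rmatvec. The bidiagonal operator is an illustrative example, not tied to any particular application.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, bicg

n = 100
d = np.arange(2.0, n + 2.0)           # diagonal of an illustrative bidiagonal matrix

def matvec(v):                        # y = A v
    v = np.asarray(v).ravel()
    y = d * v
    y[:-1] += 0.5 * v[1:]             # superdiagonal entries 0.5
    return y

def rmatvec(v):                       # y = A^T v, needed by BCG
    v = np.asarray(v).ravel()
    y = d * v
    y[1:] += 0.5 * v[:-1]             # transpose puts 0.5 on the subdiagonal
    return y

A = LinearOperator((n, n), matvec=matvec, rmatvec=rmatvec)
b = np.ones(n)
x, info = bicg(A, b)                  # would not be applicable without rmatvec
```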
In a parallel environment, the two matrix-vector products can theoretically be performed simultaneously; however, in a distributed-memory environment, there will be extra communication costs associated with one of the two matrix-vector products, depending upon the storage scheme for A. A duplicate copy of the matrix will alleviate this problem, at the cost of doubling the storage requirements for the matrix.
Care must also be exercised in choosing the preconditioner, since similar problems arise during the two solves involving the preconditioning matrix.
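A small sketch of what this means in practice: with a preconditioning matrix M, every BCG iteration performs one solve with M for the regular residual and one solve with M^T for the shadow residual, so the factorization (or splitting) of M must support both. The matrix M and the residuals below are illustrative placeholders.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

n = 5
rng = np.random.default_rng(0)
M = np.eye(n) + 0.1 * rng.standard_normal((n, n))   # illustrative preconditioner
lu, piv = lu_factor(M)                 # factor M once, reuse in every iteration

r = rng.standard_normal(n)             # residual r^(i)
rt = rng.standard_normal(n)            # shadow residual r~^(i)

z = lu_solve((lu, piv), r)             # solve M z = r
zt = lu_solve((lu, piv), rt, trans=1)  # solve M^T z~ = r~
```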
It is difficult to make a fair comparison between the generalized minimal residual method (GMRES) and BCG. GMRES really minimizes a residual, but at the cost of increasing work for keeping all residuals orthogonal and increasing memory demands. BCG does not minimize a residual, but often its accuracy is comparable to that of GMRES, at the cost of twice the number of matrix-vector products per iteration step. However, the generation of the basis vectors is relatively cheap and the memory requirements are modest. Several variants of BCG have been proposed (e.g., the conjugate gradient squared method and the biconjugate gradient stabilized method) that increase the effectiveness of this class of methods in certain circumstances.
REFERENCES:
Barrett, R.; Berry, M.; Chan, T. F.; Demmel, J.; Donato, J.; Dongarra, J.; Eijkhout, V.; Pozo, R.; Romine, C.; and van der Vorst, H. Templates for the Solution of Linear Systems: Building Blocks for Iterative Methods, 2nd ed. Philadelphia, PA: SIAM, 1994. http://www.netlib.org/linalg/html_templates/Templates.html.
Faber, V. and Manteuffel, T. "Necessary and Sufficient Conditions for the Existence of a Conjugate Gradient Method." SIAM J. Numer. Anal. 21, 315-339, 1984.
Freund, R. and Nachtigal, N. "QMR: A Quasi-Minimal Residual Method for Non-Hermitian Linear Systems." Numer. Math. 60, 315-339, 1991.
Parlett, B. N.; Taylor, D. R.; and Liu, Z. A. "A Look-Ahead Lanczos Algorithm for Unsymmetric Matrices." Math. Comput. 44, 105-124, 1985.
Voevodin, V. "The Problem of Non-Self-Adjoint Generalization of the Conjugate Gradient Method is Closed." U.S.S.R. Comput. Maths. and Math. Phys. 23, 143-144, 1983.