(ch:eps)=
# EPS: Eigenvalue Problem Solver

The Eigenvalue Problem Solver (`EPS`) is the main object provided by SLEPc. It is used to specify a linear eigenvalue problem, either in standard or generalized form, and provides uniform and efficient access to all of the linear eigensolvers included in the package. Conceptually, the level of abstraction occupied by `EPS` is similar to that of other solvers in PETSc such as {external:doc}`KSP` for solving linear systems of equations.

{#sec:eig label="sec:eig"}
## Eigenvalue Problems

In this section, we briefly present some basic concepts about eigenvalue problems, as well as general techniques used to solve them. The description is not intended to be exhaustive. The objective is simply to define terms that will be referred to throughout the rest of the manual. Readers who are familiar with the terminology and the solution approach can skip this section. For a more comprehensive description, we refer the reader to monographs such as {cite:p}`Stewart:2001:MAV`, {cite:p}`Bai:2000:TSA`, {cite:p}`Saad:1992:NML` or {cite:p}`Parlett:1980:SEP`. A historical perspective of the topic can be found in {cite:p}`Golub:2000:EC2`. See also the SLEPc [technical reports](#str).

In the standard formulation, the linear eigenvalue problem consists in the determination of $\lambda\in\mathbb{C}$ for which the equation

```{math}
:label: eq:eigstd

Ax=\lambda x
```

has a nontrivial solution, where $A\in\mathbb{C}^{n\times n}$ and $x\in\mathbb{C}^n$. The scalar $\lambda$ and the vector $x$ are called eigenvalue and (right) eigenvector, respectively. Note that they can be complex even when the matrix is real. If $\lambda$ is an eigenvalue of $A$, then $\bar{\lambda}$ is an eigenvalue of its conjugate transpose, $A^*$, or, equivalently,

```{math}
:label: eq:eigstdleft

y^*\!A=\lambda\, y^*,
```

where $y$ is called the left eigenvector.

In many applications, the problem is formulated as

```{math}
:label: eq:eiggen

Ax=\lambda Bx,
```

where $B\in\mathbb{C}^{n\times n}$, which is known as the generalized eigenvalue problem. Usually, this problem is solved by reformulating it in standard form, for example $B^{-1}Ax=\lambda x$ if $B$ is non-singular.

SLEPc focuses on the solution of problems in which the matrices are large and sparse. Hence, only methods that preserve sparsity are considered. These methods obtain the solution from the information generated by the application of the operator to various vectors (the operator is a simple function of matrices $A$ and $B$), that is, matrices are only used in matrix-vector products. This not only maintains sparsity, but also allows the solution of problems in which the matrices are not available explicitly.

In practical analyses, out of the $n$ possible solutions, typically only a few eigenpairs $(\lambda,x)$ are considered relevant, either at the extremities of the spectrum, in an interval, or in a region of the complex plane. Depending on the application, either eigenvalues or eigenvectors (or both) are required. In some cases, left eigenvectors are also of interest.

### Projection Methods

Most eigensolvers provided by SLEPc perform a Rayleigh-Ritz projection for extracting the spectral approximations, that is, they project the problem onto a low-dimensional subspace that is built appropriately. Suppose that an orthogonal basis of this subspace is given by $V_j=[v_1,v_2,\ldots,v_j]$.
If the solutions of the projected (reduced) problem $B_js=\theta s$ (where $B_j=V_j^*AV_j$) are denoted by $(\theta_i,s_i)$, $i=1,2,\ldots,j$, then the approximate eigenpairs $(\tilde{\lambda}_i,\tilde{x}_i)$ of the original problem (Ritz value and Ritz vector) are obtained as

```{math}
:label: eq:ritz-value

\tilde{\lambda}_i=\theta_i,
```

```{math}
:label: eq:ritz-vector

\tilde{x}_i=V_js_i.
```

Starting from this general idea, eigensolvers differ from each other in which subspace is used, how it is built, and other technicalities aimed at improving convergence, reducing storage requirements, etc.

The subspace

```{math}
:label: eq:krylov

\mathcal{K}_m(A,v)\equiv\mathrm{span}\left\{v,Av,A^2v,\ldots,A^{m-1}v\right\},
```

is called the $m$-th Krylov subspace corresponding to $A$ and $v$. Methods that use subspaces of this kind to carry out the projection are called Krylov methods. One example of such methods is the Arnoldi algorithm: starting with $v_1$, $\|v_1\|_2=1$, the Arnoldi basis generation process can be expressed by the recurrence

```{math}
:label: eq:arnoldi

v_{j+1}h_{j+1,j}=w_j=Av_j-\sum_{i=1}^jh_{i,j}v_i,
```

where $h_{i,j}$ are the scalar coefficients obtained in the Gram-Schmidt orthogonalization of $Av_j$ with respect to $v_i$, $i=1,2,\ldots,j$, and $h_{j+1,j}=\|w_j\|_2$. Then, the columns of $V_j$ span the Krylov subspace $\mathcal{K}_j(A,v_1)$ and $Ax=\lambda x$ is projected into $H_js=\theta s$, where $H_j$ is an upper Hessenberg matrix with elements $h_{i,j}$, which are 0 for $i\geq j+2$. The related Lanczos algorithms obtain a projected matrix that is tridiagonal.

A generalization of the above methods is the block Krylov strategy, in which the starting vector $v_1$ is replaced by a full rank $n\times p$ matrix $V_1$. This allows for better convergence properties when there are multiple eigenvalues and can provide better data management on some computer architectures. Block tridiagonal and block Hessenberg matrices are then obtained as projections.

It is generally assumed (and observed) that the Lanczos and Arnoldi algorithms find solutions at the extremities of the spectrum. Their convergence pattern, however, is strongly related to the eigenvalue distribution. Slow convergence may be experienced in the presence of tightly clustered eigenvalues. The maximum allowable $j$ may be reached without having achieved convergence for all desired solutions. Then, restarting is usually a useful technique, and different strategies exist for that purpose. However, convergence can still be very slow and acceleration strategies must be applied. Usually, these techniques consist in computing eigenpairs of a transformed operator and then recovering the solution of the original problem. The aim of these transformations is twofold. On one hand, they make it possible to obtain eigenvalues other than those lying in the periphery of the spectrum. On the other hand, the separation of the eigenvalues of interest is improved in the transformed spectrum, thus leading to faster convergence. The most commonly used spectral transformation is called shift-and-invert, which works with the operator $(A-\sigma I)^{-1}$. It allows the computation of eigenvalues closest to $\sigma$ with very good separation properties. When using this approach, a linear system of equations, $(A-\sigma I)y=x$, must be solved in each iteration of the eigenvalue process.
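The good separation can be understood from the spectral mapping induced by this transformation: if $(\lambda,x)$ is an eigenpair of $A$ and $\sigma$ is not an eigenvalue, then $x$ is also an eigenvector of $(A-\sigma I)^{-1}$, with eigenvalue

```{math}
\theta=\frac{1}{\lambda-\sigma},
```

so the eigenvalues closest to $\sigma$ become the largest ones (in magnitude) of the transformed operator, precisely where Krylov methods converge fastest, and the original eigenvalues are recovered as $\lambda=\sigma+1/\theta$.
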
### Preconditioned Eigensolvers

In many applications, Krylov eigensolvers perform very well because Krylov subspaces are optimal in a certain theoretical sense. However, these methods may not be appropriate in some situations, such as the computation of interior eigenvalues. In that case, the spectral transformation mentioned above may not be a viable option, or it may be too costly. For these reasons, other types of eigensolvers such as Davidson and Jacobi-Davidson rely on a different way of expanding the subspace. Instead of satisfying the Krylov relation, these methods compute the new basis vector by solving the so-called correction equation. The resulting subspace may be richer in the direction of the desired eigenvectors. These solvers may be competitive, especially for computing interior eigenvalues. From a practical point of view, the correction equation may be seen as a cheap replacement for the shift-and-invert system of equations, $(A-\sigma I)y=x$. By cheap we mean that it may be solved inaccurately, without compromising robustness, via a preconditioned iterative linear solver. For this reason, these are known as *preconditioned* eigensolvers.

### Related Problems

In many applications, such as the analysis of damped vibrating systems, the problem to be solved is a *polynomial eigenvalue problem* (PEP), or more generally a *nonlinear eigenvalue problem* (NEP). For these, the reader is referred to chapters [](#ch:pep) and [](#ch:nep). Another linear algebra problem that is very closely related to the eigenvalue problem is the *singular value decomposition* (SVD), see chapter [](#ch:svd).

## Basic Usage

The `EPS` module in SLEPc is used in a similar way to PETSc modules such as {external:doc}`KSP`. All the information related to an eigenvalue problem is handled via a context variable. The usual object management functions are available (`EPSCreate`, `EPSDestroy`, `EPSView`, `EPSSetFromOptions`). In addition, the `EPS` object provides functions for setting several parameters such as the number of eigenvalues to compute, the dimension of the subspace, the portion of the spectrum of interest, the requested tolerance, or the maximum number of iterations allowed.

The solution of the problem is obtained in several steps. First of all, the matrices associated with the eigenproblem are specified via `EPSSetOperators`, and `EPSSetProblemType` is used to specify the type of problem. Then, a call to `EPSSolve` is made, which invokes the subroutine for the selected eigensolver. `EPSGetConverged` can be used afterwards to determine how many of the requested eigenpairs have converged to working accuracy. Finally, `EPSGetEigenpair` is used to retrieve the eigenvalues and eigenvectors.

In order to illustrate the basic functionality of the `EPS` package, a simple example is shown in figure [](#fig:ex-eps). The example code implements the solution of a simple standard eigenvalue problem. Code for setting up the matrix $A$ is not shown and error-checking code is omitted.
```{code-block} c
:name: fig:ex-eps
:caption: Example code for basic solution with `EPS`
:linenos:

EPS         eps;      /* eigensolver context */
Mat         A;        /* matrix of Ax=kx */
Vec         xr, xi;   /* eigenvector, x */
PetscScalar kr, ki;   /* eigenvalue, k */
PetscInt    j, nconv;
PetscReal   error;

EPSCreate( PETSC_COMM_WORLD, &eps );
EPSSetOperators( eps, A, NULL );
EPSSetProblemType( eps, EPS_NHEP );
EPSSetFromOptions( eps );
EPSSolve( eps );
EPSGetConverged( eps, &nconv );
for (j=0; j<nconv; j++) {
  EPSGetEigenpair( eps, j, &kr, &ki, xr, xi );
  EPSComputeError( eps, j, EPS_ERROR_RELATIVE, &error );
}
EPSDestroy( &eps );
```

### Controlling and Monitoring Convergence

The tolerance used by the convergence test can be specified at run time with the option `-eps_tol <tol>` or inside the program with the function `EPSSetTolerances`

```{code} c
EPSSetTolerances(EPS eps,PetscReal tol,PetscInt max_it);
```

The third parameter of this function allows the programmer to modify the maximum number of iterations allowed to the solution algorithm, which can also be set via `-eps_max_it <its>`.

#### Convergence Check

The error estimates used for the convergence test are based on the residual norm, as discussed in section [](#sec:errbnd). Most eigensolvers explicitly compute the residual of the relevant eigenpairs during the iteration, but Krylov solvers use a cheap formula instead, which allows tracking many eigenpairs simultaneously. When using a spectral transformation, this formula may give too optimistic bounds (corresponding to the residual of the transformed problem, not the original problem). In such cases, the user can force the computation of the residual with `EPSSetTrueResidual`

```{code} c
EPSSetTrueResidual(EPS eps,PetscBool trueres);
```

or with `-eps_true_residual`.

:::{table} Available possibilities for the convergence criterion.
:name: tab:convergence

| Convergence criterion    | `EPSConv`       | Command line key | Error bound                      |
|--------------------------|-----------------|------------------|----------------------------------|
| Absolute                 | `EPS_CONV_ABS`  | `-eps_conv_abs`  | $\|r\|$                          |
| Relative to eigenvalue   | `EPS_CONV_REL`  | `-eps_conv_rel`  | $\|r\|/\|\lambda\|$              |
| Relative to matrix norms | `EPS_CONV_NORM` | `-eps_conv_norm` | $\|r\|/(\|A\|+\|\lambda\|\|B\|)$ |
| User-defined             | `EPS_CONV_USER` | `-eps_conv_user` | user function                    |
:::

From the residual norm, the error bound can be computed in different ways, see table [](#tab:convergence). This can be set via the corresponding command-line switch or with `EPSSetConvergenceTest`

```{code} c
EPSSetConvergenceTest(EPS eps,EPSConv conv);
```

The default is to use the criterion relative to the eigenvalue (note: when computing eigenvalues close to the origin this criterion will likely give very poor accuracy, so the user is advised to use `EPS_CONV_ABS` in that case). Finally, a custom convergence criterion may be established by specifying a user function (`EPSSetConvergenceTestFunction`).

Error estimates used internally by eigensolvers for checking convergence may be different from the error bounds provided by `EPSComputeError`. At the end of the solution process, error estimates are available via `EPSGetErrorEstimate`

```{code} c
EPSGetErrorEstimate(EPS eps,PetscInt j,PetscReal *errest);
```

By default, the eigensolver will stop iterating when the current number of eigenpairs satisfying the convergence test is equal to (or greater than) the number of requested eigenpairs, or when the maximum number of iterations has been reached. However, it is also possible to provide a user-defined stopping test that may decide to quit earlier, see `EPSSetStoppingTest`.

#### Monitors

Error estimates can be displayed during execution of the solution algorithm, as a way of monitoring convergence. There are several such monitors available.
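Built-in monitors are described next. As an illustration of the underlying callback interface, the following is a hedged sketch of a user-defined monitor routine; the signature is the one assumed for user monitors in recent SLEPc/PETSc versions, and the printed message is arbitrary.

```{code} c
/* Hypothetical user-defined monitor: at each iteration it receives the current
   eigenvalue approximations (eigr/eigi), their error estimates (errest, nest of
   them) and the number of already converged pairs (nconv). */
PetscErrorCode MyMonitor(EPS eps,PetscInt its,PetscInt nconv,PetscScalar *eigr,
                         PetscScalar *eigi,PetscReal *errest,PetscInt nest,void *ctx)
{
  PetscReal err = (nconv < nest) ? errest[nconv] : 0.0;   /* first unconverged estimate */

  PetscFunctionBeginUser;
  PetscCall(PetscPrintf(PETSC_COMM_WORLD,"  it %" PetscInt_FMT ": nconv=%" PetscInt_FMT
                        ", first unconverged error estimate %g\n",its,nconv,(double)err));
  PetscFunctionReturn(PETSC_SUCCESS);
}
```

Such a routine would be registered before `EPSSolve` with `EPSMonitorSet(eps,MyMonitor,NULL,NULL)`, where the last two arguments (an optional user context and its destroy routine) are not used in this sketch.
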
The user can activate them via the options database (see examples below), or within the code with `EPSMonitorSet`. By default, the solvers run silently without displaying information about the iteration. Also, application programmers can provide their own routines to perform the monitoring by using the function `EPSMonitorSet`. The most basic monitor prints one approximate eigenvalue together with its associated error estimate in each iteration. The shown eigenvalue is the first unconverged one. ```{code} console $ ./ex9 -eps_nev 1 -eps_tol 1e-6 -eps_monitor 1 EPS nconv=0 first unconverged value (error) -0.0695109+2.10989i (2.38956768e-01) 2 EPS nconv=0 first unconverged value (error) -0.0231046+2.14902i (1.09212525e-01) 3 EPS nconv=0 first unconverged value (error) -0.000633399+2.14178i (2.67086904e-02) 4 EPS nconv=0 first unconverged value (error) 9.89074e-05+2.13924i (6.62097793e-03) 5 EPS nconv=0 first unconverged value (error) -0.000149404+2.13976i (1.53444214e-02) 6 EPS nconv=0 first unconverged value (error) 0.000183676+2.13939i (2.85521004e-03) 7 EPS nconv=0 first unconverged value (error) 0.000192479+2.13938i (9.97563492e-04) 8 EPS nconv=0 first unconverged value (error) 0.000192534+2.13938i (1.77259863e-04) 9 EPS nconv=0 first unconverged value (error) 0.000192557+2.13938i (2.82539990e-05) 10 EPS nconv=0 first unconverged value (error) 0.000192559+2.13938i (2.51440008e-06) 11 EPS nconv=2 first unconverged value (error) -0.671923+2.52712i (8.92724972e-05) ``` Graphical monitoring (in an X display) is also available with `-eps_monitor draw::draw_lg`. Figure [Monitor](#fig:plot-monitor) shows the result of the following sample command line: ```{code} console $ ./ex9 -n 200 -eps_nev 12 -eps_tol 1e-12 -eps_monitor draw::draw_lg -draw_pause .2 ``` Again, only the error estimate of one eigenvalue is drawn. The spikes in the last part of the plot indicate convergence of one eigenvalue and switching to the next. ```{figure} ../../_static/images/manual/png/monitor.png :alt: SLEPc convergence monitor :name: fig:plot-monitor :scale: 60 Default convergence monitor. ``` The two previously mentioned monitors have an alternative version (`*_all`) that processes all eigenvalues instead of just the first one. Figure [Monitor-all](#fig:plot-monitorall) corresponds to the same example but with `-eps_monitor_all draw::draw_lg`. Note that these variants have a side effect: they force the computation of all error estimates even if the method would not normally do so. ```{figure} ../../_static/images/manual/png/monitorall.png :alt: SLEPc convergence monitor (all eigenvalues) :name: fig:plot-monitorall :scale: 60 Simultaneous convergence monitor for all eigenvalues. ``` A less verbose monitor is `-eps_monitor_conv`, which simply displays the iteration number at which convergence takes place. Note that several monitors can be used at the same time. 
```{code} console $ ./ex9 -n 200 -eps_nev 8 -eps_tol 1e-12 -eps_monitor_conv 79 EPS converged value (error) #0 4.64001e-06+2.13951i (8.26091148e-13) 79 EPS converged value (error) #1 4.64001e-06-2.13951i (8.26091148e-13) 93 EPS converged value (error) #2 -0.674926+2.52867i (6.85260521e-13) 93 EPS converged value (error) #3 -0.674926-2.52867i (6.85260521e-13) 94 EPS converged value (error) #4 -1.79963+3.03259i (5.48825386e-13) 94 EPS converged value (error) #5 -1.79963-3.03259i (5.48825386e-13) 98 EPS converged value (error) #6 -3.37383+3.55626i (2.74909207e-13) 98 EPS converged value (error) #7 -3.37383-3.55626i (2.74909207e-13) ``` {#sec:epsviewers} ### Viewing the Solution The computed solution (eigenvalues and eigenvectors) can be viewed in different ways, exploiting the flexibility of {external:doc}`PetscViewer`s. The API functions for this are `EPSValuesView` and `EPSVectorsView`. We next illustrate their usage via the command line. The command-line option `-eps_view_values` shows the computed eigenvalues on the standard output at the end of `EPSSolve`. It admits an argument to specify {external:doc}`PetscViewer` options, for instance the following will create a Matlab command file `myeigenvalues.m` to load the eigenvalues in Matlab: ```{code} console $ ./ex1 -n 120 -eps_nev 8 -eps_view_values :myeigenvalues.m:ascii_matlab ``` ```{code} console Lambda_EPS_0xb430f0_0 = [ 3.9993259306070224e+00 3.9973041767976509e+00 3.9939361013742269e+00 3.9892239746533216e+00 3.9831709729353331e+00 3.9757811763634532e+00 3.9670595661733632e+00 3.9570120213355646e+00 ]; ``` One particular instance of this option is `-eps_view_values draw`, that will plot the computed approximations of the eigenvalues on an X window. See Figure [Eigenvalues](#fig:plot-eigs) for an example. ```{figure} ../../_static/images/manual/png/ploteigs.png :alt: SLEPc eigenvalues plot :name: fig:plot-eigs :scale: 60 Eigenvalues plot. ``` Similarly, eigenvectors may be viewed with `-eps_view_vectors`, either in text form, in Matlab format, in binary format, or as a draw. All eigenvectors are viewed, one after the other. As an example, the next line will dump eigenvectors to the binary file `evec.bin`: ```{code} console $ ./ex1 -n 120 -eps_nev 8 -eps_view_vectors binary:evec.bin ``` Two more related functions are available: `EPSErrorView` and `EPSConvergedReasonView`. These will show computed errors and the converged reason (plus number of iterations), respectively. Again, we illustrate its use via the command line. The option `-eps_error_relative` will show eigenvalues whose relative error are below the tolerance. The different types of errors have their corresponding options, see table [](#tab:errors). A more detailed output can be obtained as follows: ```{code} console $ ./ex1 -n 120 -eps_nev 8 -eps_error_relative ::ascii_info_detail ---------------------- -------------------- k ||Ax-kx||/||kx|| ---------------------- -------------------- 3.999326 1.26221e-09 3.997304 3.82982e-10 3.993936 2.76971e-09 3.989224 4.94104e-10 3.983171 6.19307e-10 3.975781 5.9628e-10 3.967060 2.32347e-09 3.957012 6.12436e-09 ---------------------- -------------------- ``` Finally, the option for showing the converged reason is: ```{code} console $ ./ex1 -n 120 -eps_nev 8 -eps_converged_reason Linear eigensolve converged (8 eigenpairs) due to CONVERGED_TOL; iterations 14 ``` ## Advanced Usage This section includes the description of advanced features of the eigensolver object. 
Default settings are appropriate for most applications and modification is unnecessary for normal usage.

### Initial Guesses

In this subsection, we consider the possibility of providing initial guesses so that the eigensolver can exploit this information to get the answer faster. Most of the algorithms implemented in `EPS` iteratively build and improve a basis of a certain subspace, which will eventually become an eigenspace corresponding to the wanted eigenvalues. In some solvers, such as those of Krylov type, this basis is constructed starting from an initial vector, $v_1$, whereas in other solvers, such as those of Davidson type, an arbitrary subspace can be used to start the method. By default, `EPS` initializes the starting vector or the initial subspace randomly. This default is a reasonable choice. However, it is also possible to supply an initial subspace with the function `EPSSetInitialSpace`

```{code} c
EPSSetInitialSpace(EPS eps,PetscInt n,Vec is[]);
```

In some cases, a suitable initial space can accelerate convergence significantly, for instance when the eigenvalue calculation is one of a sequence of closely related problems, where the eigenspace of one problem is fed as the initial guess for the next problem.

Note that if the eigensolver supports only a single initial vector, but several guesses are provided, then all except the first one will be discarded. One could still build a vector that is rich in the directions of all guesses, by taking a linear combination of them, but this is less effective than using a solver that considers all guesses as a subspace.

### Dealing with Deflation Subspaces

In some applications, when solving an eigenvalue problem the user wishes to use a priori knowledge about the solution. This is the case when an invariant subspace has already been computed (e.g., in a previous `EPSSolve` call) or when a basis of the null-space is known.

Consider the following example. Given a graph $G$, with vertex set $V$ and edges $E$, the Laplacian matrix of $G$ is a sparse symmetric positive semidefinite matrix $L$ with elements

```{math}
:label: eq:laplacian

l_{ij}=\left\{\begin{array}{cl} d(v_i) & \mathrm{if}\;i=j\\ -1 & \mathrm{if}\;e_{ij}\in E\\ 0&\mathrm{otherwise} \end{array}\right.
```

where $d(v_i)$ is the degree of vertex $v_i$. This matrix is singular since all row sums are equal to zero. The constant vector is an eigenvector with zero eigenvalue, and if the graph is connected then all other eigenvalues are positive. The so-called Fiedler vector is the eigenvector associated with the smallest nonzero eigenvalue, and can be used in heuristics for a number of graph manipulations such as partitioning. One possible way of computing this vector with SLEPc is to instruct the eigensolver to search for the smallest eigenvalue (with `EPSSetWhichEigenpairs` or by using a spectral transformation as described in the next chapter) while preventing it from computing the already known zero eigenvalue. For this, the user must provide a basis for the invariant subspace (in this case just the vector $[1,1,\ldots,1]^T$) so that the eigensolver can *deflate* this subspace. This process is very similar to what eigensolvers normally do with invariant subspaces associated with eigenvalues as they converge. In other words, when a deflation space has been specified, the eigensolver works with the restriction of the problem to the orthogonal complement of this subspace.
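A minimal sketch of this particular computation is shown below; it assumes that the Laplacian `L` has already been assembled, uses the function `EPSSetDeflationSpace` described next, and omits all error checking.

```{code} c
/* Sketch: compute the Fiedler vector of a graph Laplacian L by deflating the
   known null-space vector [1,1,...,1]^T */
EPS eps;
Vec ones;

MatCreateVecs(L,&ones,NULL);
VecSet(ones,1.0);                              /* constant vector: eigenvector of the zero eigenvalue */

EPSCreate(PETSC_COMM_WORLD,&eps);
EPSSetOperators(eps,L,NULL);
EPSSetProblemType(eps,EPS_HEP);                /* L is symmetric */
EPSSetWhichEigenpairs(eps,EPS_SMALLEST_REAL);  /* smallest remaining eigenvalue after deflation */
EPSSetDeflationSpace(eps,1,&ones);             /* work in the orthogonal complement of [1,...,1]^T */
EPSSetFromOptions(eps);
EPSSolve(eps);                                 /* first computed eigenvector approximates the Fiedler vector */

VecDestroy(&ones);
EPSDestroy(&eps);
```
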
The following function can be used to provide the `EPS` object with some basis vectors corresponding to a subspace that should be deflated during the solution process. `EPSSetDeflationSpace` ```{code} c EPSSetDeflationSpace(EPS eps,PetscInt n,Vec defl[]) ``` The value `n` indicates how many vectors are passed in argument `defl`. The deflation space can be any subspace but typically it is most useful in the case of an invariant subspace or a null-space. In any case, SLEPc internally checks to see if all (or part of) the provided subspace is a null-space of the associated linear system (see section [](#sec:lin)). In this case, this null-space is attached to the coefficient matrix of the linear solver (see PETSc's function {external:doc}`MatSetNullSpace`) to enable the solution of singular systems. In practice, this allows the computation of eigenvalues of singular pencils (i.e., when $A$ and $B$ share a common null-space). {#sec:orthog} ### Orthogonalization Internally, eigensolvers in `EPS` often need to orthogonalize a vector against a set of vectors (for instance, when building an orthonormal basis of a Krylov subspace). This operation is carried out typically by a Gram-Schmidt orthogonalization procedure. The user is able to adjust several options related to this algorithm, although the default behavior is good for most cases, and we strongly suggest not to change any of these settings. This topic is covered in detail in {cite:t}`str-1`. {#sec:region} ### Specifying a Region for Filtering Some solvers in `EPS` (and other solver classes as well) can take into consideration a user-defined region of the complex plane, e.g., an ellipse. A region can be used for two purposes: - To instruct the eigensolver to compute all eigenvalues lying in that region. This is available only in eigensolvers based on the contour integral technique (`EPSCISS`). - To filter out eigenvalues outside the region. In this way, eigenvalues lying inside the region get higher priority during the iteration and are more likely to be returned as computed solutions. Regions are specified by means of an `RG` object. This object is handled internally in the `EPS` solver, as other auxiliary objects, and can be extracted with `EPSGetRG` ```{code} c EPSGetRG(EPS eps,RG *rg); ``` to set the options that define the region. These options can also be set in the command line. The following example computes largest magnitude eigenvalues, but restricting to an ellipse of radius 0.5 centered at the origin (with vertical scale 0.1): ```{code} console $ ./ex1 -rg_type ellipse -rg_ellipse_center 0 -rg_ellipse_radius 0.5 -rg_ellipse_vscale 0.1 ``` If one wants to use the region to specify where eigenvalues should *not* be computed, then the region must be the complement of the specified one. The next command line computes the smallest eigenvalues not contained in the ellipse: ```{code} console $ ./ex1 -eps_smallest_magnitude -rg_type ellipse -rg_ellipse_center 0 -rg_ellipse_radius 0.5 -rg_ellipse_vscale 0.1 -rg_complement ``` Additional details of the `RG` class can be found in section [](#sec:sys). {#sec:large-nev} ### Computing a Large Portion of the Spectrum We now consider the case when the user requests a relatively large number of eigenpairs (the related case of computing all eigenvalues in a given interval is addressed in section [](#sec:slice)). 
To fix ideas, suppose that the problem size (the dimension of the matrix, denoted as `n`) is of the order of several hundred thousand, and the user wants `nev` to be approximately 5,000 (recall the notation of `EPSSetDimensions` in section [](#sec:defprob)).

The first comment is that, for such large values of `nev`, the rule of thumb suggested in section [](#sec:defprob) for selecting the value of `ncv` ($\mathtt{ncv}\geq2\cdot\mathtt{nev}$) may be inappropriate. For small values of `nev`, this rule of thumb is intended to provide the solver with a sufficiently large subspace. But for large values of `nev`, it may be enough to set `ncv` slightly larger than `nev`.

The second thing to take into account has to do with costs, both in terms of storage and in terms of computational effort. This issue depends on the particular eigensolver used, but generally speaking it can be summarized in the following points:

1. It is necessary to store a basis of the subspace, that is, `ncv` vectors of length `n`.
2. A considerable part of the computation is devoted to orthogonalization of the basis vectors, whose cost is roughly of order $\mathtt{ncv}^2\cdot\mathtt{n}$.
3. Within the eigensolution process, a projected eigenproblem of order `ncv` is built. At least one dense matrix of this dimension has to be stored.
4. Solving the projected eigenproblem has a computational cost of order $\mathtt{ncv}^3$. Typically, such problems need to be solved many times within the eigensolver iteration.

It is clear that a large value of `ncv` implies a high storage requirement (points 1 and 3, especially point 1), and a high computational cost (points 2 and 4, especially point 2). However, such large eigenproblems are customarily solved in parallel with many processors. In that case, the basis vectors are stored in a distributed way and the associated operations are parallelized, so that points 1 and 2 become benign as long as sufficiently many processors are used. Then points 3 and 4 become really critical, since in the current SLEPc version the projected eigenproblem (and its associated operations) are not treated in parallel. In conclusion, the user must be aware that using a large `ncv` value introduces a serial step in the computation with high cost that cannot be amortized by increasing the number of processors.

From SLEPc 3.0.0, another parameter, `mpd`, has been introduced to alleviate this problem. The name `mpd` stands for maximum projected dimension. The idea is to bound the size of the projected eigenproblem so that steps 3 and 4 work with a dimension of `mpd` at most, while steps 1 and 2 still work with a bigger dimension, up to `ncv`. Suppose we want to compute `nev`=5000. Setting `ncv`=10000 or even `ncv`=6000 would be prohibitively expensive, for the reasons explained above. But if we set, e.g., `mpd`=600 then the overhead of steps 3 and 4 will be considerably diminished. Of course, this reduces the potential of approximation at each outer iteration of the algorithm, but with more iterations the same result should be obtained. The benefits will be especially noticeable in the setting of parallel computation with many processors. Note that it is not necessary to set both `ncv` and `mpd`.
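As an illustration (a sketch only; the values are those of the example above), these dimensions could be set programmatically with `EPSSetDimensions`, where `PETSC_DETERMINE` (or `PETSC_DEFAULT` in older versions) leaves the choice of `ncv` to SLEPc:

```{code} c
EPSSetDimensions(eps,5000,PETSC_DETERMINE,600);   /* nev=5000, ncv chosen by SLEPc, mpd=600 */
```

The same parameters can also be given in the command line.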
For instance, one can do ```{code} console $ ./program -eps_nev 5000 -eps_mpd 600 ``` {#sec:harmonic} ### Computing Interior Eigenvalues with Harmonic Extraction The standard Rayleigh-Ritz projection procedure described in section [](#sec:eig) is most appropriate for approximating eigenvalues located at the periphery of the spectrum, especially those of largest magnitude. Most eigensolvers in SLEPc are restarted, meaning that the projection is carried out repeatedly with increasingly good subspaces. An effective restarting mechanism, such as that implemented in Krylov-Schur, improves the subspace by realizing a filtering effect that tries to eliminate components in the direction of unwanted eigenvectors. In that way, it is possible to compute eigenvalues located anywhere in the spectrum, even in its interior. Even though in theory eigensolvers could be able to approximate interior eigenvalues with a standard extraction technique, in practice convergence difficulties may arise that prevent success. The problem comes from the property that Ritz values (the approximate eigenvalues provided by the standard projection procedure) converge from the interior to the periphery of the spectrum. That is, the Ritz values that stabilize first are those in the periphery, so convergence of interior ones requires the previous convergence of all eigenvalues between them and the periphery. Furthermore, this convergence behaviour usually implies that restarting is carried out with bad approximations, so the restart is ineffective and global convergence is severely damaged. Harmonic projection is a variation that uses a target value, $\tau$, around which the user wants to compute eigenvalues (see, e.g., {cite:p}`Morgan:2006:HRA`). The theory establishes that harmonic Ritz values converge in such a way that eigenvalues closest to the target stabilize first, and also that no unconverged value is ever close to the target, so restarting is safe in this case. As a conclusion, eigensolvers with harmonic extraction may be effective in computing interior eigenvalues. Whether it works or not in practical cases depends on the particular distribution of the spectrum. In order to use harmonic extraction in SLEPc, it is necessary to indicate it explicitly, and provide the target value as described in section [](#sec:defprob) (default is $\tau=0$). The type of extraction can be set with: `EPSSetExtraction` ```{code} c EPSSetExtraction(EPS eps,EPSExtraction extr); ``` Available possibilities are `EPS_RITZ` for standard projection and `EPS_HARMONIC` for harmonic projection (other alternatives such as refined extraction are still experimental). A command line example would be: ```{code} console $ ./ex5 -m 45 -eps_harmonic -eps_target 0.8 -eps_ncv 60 ``` The example computes the eigenvalue closest to $\tau=0.8$ of a real non-symmetric matrix of order 1035. Note that `ncv` has been set to a larger value than would be necessary for computing the largest magnitude eigenvalues. In general, users should expect a much slower convergence when computing interior eigenvalues compared to extreme eigenvalues. Increasing the value of `ncv` may help. Currently, harmonic extraction is available in the default `EPS` solver, that is, Krylov-Schur, and also in Arnoldi, GD, and JD. {#sec:balancing} ### Balancing for Non-Hermitian Problems In problems where the matrix has a large norm, $\|A\|_2$, the roundoff errors introduced by the eigensolver may be large. 
The goal of balancing is to apply a simple similarity transformation, $DAD^{-1}$, that keeps the eigenvalues unaltered but reduces the matrix norm, thus enhancing the accuracy of the computed eigenpairs. Obviously, this makes sense only in the non-Hermitian case. The matrix $D$ is chosen to be diagonal, so balancing amounts to scaling the matrix rows and columns appropriately. In SLEPc, the matrix $DAD^{-1}$ is not formed explicitly. Instead, the operators of table [](#tab:op) are preceded by a multiplication by $D^{-1}$ and followed by a multiplication by $D$. This allows for balancing in the case of problems with an implicit matrix. A simple and effective Krylov balancing technique, described in {cite:p}`Chen:2000:BSM`, is implemented in SLEPc. The user calls the following subroutine to activate it. `EPSSetBalance` ```{code} c EPSSetBalance(EPS eps,EPSBalance bal,PetscInt its,PetscReal cutoff); ``` Two variants are available, one-sided and two-sided, and there is also the possibility for the user to provide a pre-computed $D$ matrix. {#sec:structured} ### Structured Eigenvalue Problems Structured eigenvalue problems are those whose defining matrices are structured, i.e., their $n^2$ entries depend on less than $n^2$ parameters. Symmetry is the most obvious structure, and it is supported in SLEPc solvers via the `EPS_HEP` and `EPS_GHEP` problem types, see table [](#tab:ptype). The last entries listed in table [](#tab:ptype) address other types of structured eigenproblems, which are discussed in this subsection. Preserving the algebraic structure can help preserve physically relevant symmetries in the eigenvalues of the matrix and may improve the accuracy and efficiency of the eigensolver. For example, in quadratic eigenvalue problems arising from gyroscopic systems (see section [](#sec:qep)), eigenvalues appear in quadruples $\{\lambda,-\lambda,\bar\lambda,-\bar\lambda\}$, i.e., the spectrum is symmetric with respect to both the real and imaginary axes. This problem can be linearized to a $2n\times 2n$ skew-Hamiltonian/Hamiltonian pencil with the same eigenvalues. A structure-preserving eigensolver will give a more accurate answer because it enforces the structure throughout the computation. Unless otherwise stated, the structured eigenproblems discussed below are only supported in the default `EPS` solver, Krylov-Schur. The idea is that the user creates the structured matrix with a helper function such as `MatCreateBSE` (see below), and then selects the appropriate problem type with `EPSSetProblemType`, in this case `EPS_BSE`. This will instruct the solver to exploit the problem structure. Alternatively, one can solve the problem as `EPS_NHEP`, in which case the solver will neglect the structure. #### Bethe-Salpeter One structured eigenproblem that has raised interest recently is related to the Bethe--Salpeter equation, which is relevant in many state-of-the-art computational physics codes. For instance, in the Yambo software {cite:p}`Sangalli:2019:MPT` it is used to evaluate optical properties. It is commonly formulated as an eigenvalue problem with a block-structured matrix, ```{math} :label: eq:bse H = \begin{bmatrix} R & C \\ -C^H & -R^T \end{bmatrix}, ``` where $R$ is Hermitian and $C$ is complex symmetric. For a good enough approximation of the optical absorption spectrum, it is sufficient to compute a few eigenvalues with a customized version of Krylov-Schur. In this problem, eigenvalues are real and come in pairs $\{\lambda,-\lambda\}$. 
The eigenvalues of interest are those with the smallest magnitude, which in this case lie in the middle of the spectrum. Usually, both right and left eigenvectors are required, but the left eigenvectors can be obtained inexpensively once the corresponding right ones are known. The helper function to generate the matrix $H$ of equation {math:numref}`eq:bse` from the blocks $R$ and $C$ is `MatCreateBSE`, and the associated problem type is `EPS_BSE` (or `-eps_bse` from the command line). It is possible to select among a few variants of the solver with the function `EPSKrylovSchurSetBSEType`. Further details about the implementation of the SLEPc solvers for the BSE can be found in {cite:p}`Alvarruiz:2025:VTR`.

#### Hamiltonian

The Hamiltonian structure is relevant in many applications, particularly in control theory. A (complex) Hamiltonian matrix has the block structure

```{math}
:label: eq:hamilt

H = \begin{bmatrix} A & B \\ C & -A^* \end{bmatrix},
```

where $A$, $B$ and $C$ are either real with $B=B^T$, $C=C^T$, or complex with $B=B^*$, $C=C^*$. In the real case, eigenvalues appear in pairs $\{\lambda,-\lambda\}$, or in quadruples $\{\lambda,-\lambda,\bar\lambda,-\bar\lambda\}$ if they are complex. For a complex Hamiltonian matrix, if $\lambda$ is an eigenvalue, then $-\bar\lambda$ is also an eigenvalue.

A structure-preserving eigensolver has been implemented in `EPS` (in Krylov-Schur), which is activated by selecting the `EPS_HAMILT` problem type (or `-eps_hamiltonian` from the command line). Note that the matrix $H$ of {math:numref}`eq:hamilt` must be created with the helper function `MatCreateHamiltonian` in order to use this solver.

:::{warning}
The structure-preserving eigensolver for Hamiltonian eigenvalue problems should be considered experimental. Depending on the problem, it may become numerically unstable after some iterations, in which case the solver will abort, returning fewer eigenvalues than requested.
:::

```{rubric} Footnotes
```

```{eval-rst}
.. bibliography::
   :filter: docname in docnames
```