Frequently Asked Questions
See also the PETSc FAQ.
Send all maintenance requests to the SLEPc developers via the email address.
No, but SLEPc-related queries can be posted in the petsc-users mailing list.
You can join the slepc-announce mailing list by following the instructions in the Contact section. We will announce new major releases through this mailing list.
An alternative is to subscribe to the RSS news feed on the SLEPc front webpage, where, in addition to new releases, we also announce the publication of patches.
When writing a scientific paper that makes use of SLEPc, you should cite at least reference [1] in the list of references. In addition, if you use specific SLEPc features (such as computational intervals) that are described in other papers from the list, we suggest citing those as well.
No, the only requirement to use SLEPc is to have PETSc installed on your system. Additionally, if you want access to eigensolvers not included in SLEPc, you will have to install other libraries (e.g. ARPACK). See also the comment on linear solvers in FAQ #10 below.
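For instance, assuming a recent SLEPc version whose configure script can download and build ARPACK automatically, support for it can be enabled with an option such as the following (a sketch; check ./configure --help for the exact option names):
$ ./configure --download-arpack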
This does not apply to version 3.6 or later; see FAQ #10 below.
Probably you are dealing with a generalized eigenproblem (or a standard eigenproblem with shift-and-invert) and solving the linear systems with the default direct solver. By default, SLEPc uses a direct linear solver via PETSc's redundant mechanism, which allows direct solvers to be used in parallel executions but does not perform a truly parallel factorization. In order to get speedup in parallel executions, you need to configure PETSc with a parallel direct linear solver such as MUMPS. For details, see the section "Solution of Linear Systems" in SLEPc's user manual.
The best way to learn how to use SLEPc is probably to follow these steps:
- First of all, get acquainted with PETSc if you are not already familiar with it (see the PETSc tutorials page).
- Read through the entire SLEPc Users Manual. In a first reading, one may skip the "advanced usage" sections.
- Follow the hands-on exercises, trying the examples with an available SLEPc installation.
- Use the example programs available in the SLEPc distribution as a basis for your own programs.
- Use the on-line manual pages as a reference for individual routines.
We also provide several video-tutorials.
The shift-and-invert spectral transformation (and Cayley as well) is intended for computing the eigenvalues closest to a given value σ (the shift). Those eigenvalues closest to the shift become dominant in the transformed spectrum, so in SLEPc 3.0.0 one had to use EPS_LARGEST_MAGNITUDE (the default) for this situation. For example (the last option can be omitted because it is the default):
$ ./ex1 -st_type sinvert -st_shift 3.5 -eps_largest_magnitude
In contrast, in SLEPc 3.1 the approach is to specify the target value directly in EPS (with EPSSetTarget) and indicate that we want to compute the eigenvalues closest to the target, with EPS_TARGET_MAGNITUDE. For example (again, the last option can be omitted):
$ ./ex1 -st_type sinvert -eps_target 3.5 -eps_target_magnitude
The value of the shift need not be provided because it is taken from the target value.
Note that another difference is that in 3.1 eigenvalues are returned in the correct order, that is, the first one is the closest to the target, and so on.
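In source code, the equivalent calls look roughly as follows (a minimal sketch assuming the matrices A and B have already been created, with the usual error checking via PetscCall):
  EPS eps;
  ST  st;

  PetscCall(EPSCreate(PETSC_COMM_WORLD,&eps));
  PetscCall(EPSSetOperators(eps,A,B));                        /* pass B=NULL for a standard problem */
  PetscCall(EPSSetTarget(eps,3.5));                           /* equivalent to -eps_target 3.5 */
  PetscCall(EPSSetWhichEigenpairs(eps,EPS_TARGET_MAGNITUDE)); /* equivalent to -eps_target_magnitude */
  PetscCall(EPSGetST(eps,&st));
  PetscCall(STSetType(st,STSINVERT));                         /* equivalent to -st_type sinvert */
  PetscCall(EPSSetFromOptions(eps));
  PetscCall(EPSSolve(eps));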
After the solver has finished, the solution can be retrieved with EPSGetEigenpair. In the Vr (and Vi) arguments, one can pass NULL (if the eigenvector is not required) or a valid Vec object. This means the vector must have been created previously, for example with VecCreate, VecDuplicate, or MatCreateVecs; see for instance ex7. The same applies to the analog functions in SVD, PEP, and NEP.
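As a minimal sketch (assuming A is the matrix passed to EPSSetOperators and eps has already been solved):
  Vec         xr,xi;
  PetscScalar kr,ki;
  PetscInt    i,nconv;

  PetscCall(MatCreateVecs(A,&xr,NULL));   /* create vectors compatible with the matrix layout */
  PetscCall(MatCreateVecs(A,&xi,NULL));
  PetscCall(EPSGetConverged(eps,&nconv));
  for (i=0;i<nconv;i++) {
    PetscCall(EPSGetEigenpair(eps,i,&kr,&ki,xr,xi));
    /* process the eigenvalue (kr,ki) and eigenvector (xr,xi) here */
  }
  PetscCall(VecDestroy(&xr));
  PetscCall(VecDestroy(&xi));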
In 3.6 and later versions, the shift-and-invert spectral transformation defaults to using preonly+lu for solving linear systems. If you run with more than one MPI process this will fail, unless you use an external package for the parallel LU factorization. This is explained in the section "Solution of Linear Systems" in SLEPc's user manual. In previous versions of SLEPc this would not generate an error, since the default was redundant rather than plain lu.
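For example, assuming MUMPS was included when PETSc was configured (see the configuration sketch above), a parallel shift-and-invert run could be launched with something like the following (the solver-type option name corresponds to recent PETSc versions):
$ mpiexec -n 4 ./ex1 -st_type sinvert -eps_target 3.5 -st_pc_factor_mat_solver_type mumps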
SLEPc (and PETSc) provides pkg-config files that can be used from makefiles as well as from CMake files. See the discussion at Issue #19 - Detecting SLEPc via CMake.
Krylov methods (and in particular the default SLEPc eigensolver, Krylov-Schur) are good for approximating eigenvalues in the periphery of the spectrum. Assuming an eigenproblem with real eigenvalues only, the use of EPS_SMALLEST_MAGNITUDE will be appropriate only if all eigenvalues are either positive or negative. Otherwise, the smallest magnitude eigenvalues lie in the interior of the spectrum, and convergence will likely be very slow. The usual approach for computing interior eigenvalues is the shift-and-invert spectral transformation (see chapter 3 of the users manual). Hence, instead of -eps_smallest_magnitude one would generally prefer -st_type sinvert -eps_target 0.
Matrix preallocation is extremely important, especially for large matrices. See the performance chapter of the PETSc users manual.
Note: since PETSc version 3.19 the Mat data structures have been changed so that the performance is reasonably good even without preallocation.
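For illustration, here is a sketch of preallocating a tridiagonal matrix of global size n (the nonzero counts are problem-dependent; these values are only an example):
  Mat A;

  PetscCall(MatCreate(PETSC_COMM_WORLD,&A));
  PetscCall(MatSetSizes(A,PETSC_DECIDE,PETSC_DECIDE,n,n));
  PetscCall(MatSetFromOptions(A));
  /* at most 3 nonzeros per row; for the parallel format, at most 3 in the
     diagonal block and 1 in the off-diagonal block */
  PetscCall(MatSeqAIJSetPreallocation(A,3,NULL));
  PetscCall(MatMPIAIJSetPreallocation(A,3,NULL,1,NULL));
  /* ... fill with MatSetValues() and assemble as usual ... */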
The following instructions can be followed to install the conda-forge variant of slepc4py with scalar type complex.
# Download Miniforge
wget https://github.com/conda-forge/miniforge/releases/latest/download/Miniforge3-Linux-x86_64.sh
# Install Miniforge in $HOME/miniforge
bash Miniforge3-Linux-x86_64.sh -b -p ~/miniforge
# Activate the Miniforge base environment
source ~/miniforge/bin/activate
# Create a new test environment with a starting package specification set.
# The package specification is <package>=<version>==<build_string>.
# To request the latest version, we can use '*'.
# For {petsc|slepc}[4py], the build string starts with either 'real'
# or 'complex', depending on the configured scalar type. The 'real'
# variants have a build number offset by 100, so they take precedence
# if a specific <build_string> is not requested.
# Long story short, we ask conda to create a new environment, initially installing
# the complex variant of the latest slepc4py version, with the following command
conda create --name testenv slepc4py=*=*complex*
# Activate the test environment
conda activate testenv
# Verify we are running the complex variant
python -c '
from slepc4py import SLEPc
from petsc4py import PETSc
print(PETSc.ScalarType)
'
Make sure you select the petsc+complex variant:
spack install py-slepc4py ^petsc+complex
This will install PETSc with complex scalars, together with SLEPc as well as petsc4py and slepc4py. Beforehand, you can run spack spec py-slepc4py ^petsc+complex to check what is going to be installed.
A real symmetric matrix has real eigenvectors, but when building SLEPc with complex scalars the computed eigenvectors have nonzero imaginary part. The rationale is the following: in real scalars, if x is a unit-norm eigenvector then -x is also a valid eigenvector; in complex scalars, if x is a unit-norm eigenvector then alpha*x is also a valid eigenvector, where alpha is a generalized sign, i.e., alpha=exp(theta*j) for any theta. So if one wants the imaginary part to be zero, the eigenvectors returned by SLEPc must be normalized a posteriori, as is done for example in ex20.c (or the equivalent python example ex7.py). SLEPc does not know whether the input matrix is real or complex, so it cannot normalize the vectors internally.
Note that the simple scaling strategy shown in those examples will not be sufficient in case of degenerate eigenvalues, i.e., eigenvalues with multiplicity larger than one.
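For illustration, one possible strategy is to multiply the eigenvector by the conjugate phase of its entry of largest magnitude, so that this entry becomes real and positive. The following helper is a minimal sequential sketch (the function name is hypothetical, it is not the exact code of ex20.c, and it relies on PetscCall/PETSC_SUCCESS from recent PETSc versions):
/* Hypothetical helper: rotate a complex vector so that its entry of
   largest magnitude becomes real and positive (sequential Vec assumed). */
PetscErrorCode NormalizePhase(Vec x)
{
  PetscScalar *a,alpha;
  PetscReal   max=0.0;
  PetscInt    i,n,imax=0;

  PetscFunctionBeginUser;
  PetscCall(VecGetLocalSize(x,&n));
  PetscCall(VecGetArray(x,&a));
  for (i=0;i<n;i++) {
    if (PetscAbsScalar(a[i])>max) { max = PetscAbsScalar(a[i]); imax = i; }
  }
  if (max>0.0) {
    alpha = PetscConj(a[imax])/max;    /* unit-modulus factor that cancels the phase of a[imax] */
    for (i=0;i<n;i++) a[i] *= alpha;
  }
  PetscCall(VecRestoreArray(x,&a));
  PetscFunctionReturn(PETSC_SUCCESS);
}
In parallel, the entry of largest magnitude and its phase would have to be determined globally across all processes before scaling.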