Frequently Asked Questions

See also the PETSc FAQ.

1. Where should I send SLEPc bug reports and questions?

Send all maintenance requests to the SLEPc developers via the email address given in the Contact section.

2. Is there a SLEPc users mailing list?

No, but SLEPc-related queries can be posted in the petsc-users mailing list.

3. How can I receive announcements of new SLEPc versions?

You can join the slepc-announce mailing list by following the instructions in the Contact section. We will update users regarding new major releases through this mailing list.

An alternative is to subscribe to the RSS news feed on the SLEPc front page. In addition to new releases, the publication of patches is also announced there.

4. How should I cite SLEPc?

When writing a scientific paper that makes use of SLEPc, you should cite at least reference [1] in the list of references. In addition, if you use specific SLEPc features (such as computational intervals) that have papers on the list, we suggest citing them as well.

5. Apart from PETSc, is it necessary to install other software to use SLEPc?

No, the only requirement to use SLEPc is to have PETSc installed on your system. However, if you want to have access to eigensolvers not included in SLEPc, you will have to install other libraries (e.g. ARPACK). See also the comment on linear solvers in FAQ #10 below.

6. I do not see any speedup when using more than one process

This answer does not apply to version 3.6 or later; see FAQ #10 below.

Most likely you are dealing with a generalized eigenproblem (or a standard eigenproblem with shift-and-invert) and solving the linear systems with the default direct solver. By default, SLEPc uses a direct linear solver via PETSc's redundant mechanism, which allows the use of direct solvers in parallel executions but is not a truly parallel factorization. In order to get speedup in parallel executions, you need to configure PETSc with a parallel direct linear solver such as MUMPS. For details, see the section "Solution of Linear Systems" in SLEPc's user manual.
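For reference, PETSc can be configured to download and build MUMPS like this (a sketch; a real configure line will normally include other options as well, such as compilers and optimization flags):

```shell
$ ./configure --download-mumps --download-scalapack
```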

7. What is the recommended way of learning SLEPc?

The best way of learning to use SLEPc is probably to follow these steps:

We also provide several video tutorials.

8. From 3.0.0 to 3.1 the behaviour of shift-and-invert has changed

The shift-and-invert spectral transformation (and Cayley as well) is intended for computing the eigenvalues closest to a given value σ (the shift). Those eigenvalues closest to the shift become dominant in the transformed spectrum, so in SLEPc 3.0.0 one had to use EPS_LARGEST_MAGNITUDE (the default) for this situation. For example (the last option can be omitted because it is the default):

$ ./ex1 -st_type sinvert -st_shift 3.5 -eps_largest_magnitude

In contrast, in SLEPc 3.1 the approach is to specify the target value directly in EPS (with EPSSetTarget) and indicate that we want to compute eigenvalues closest to the target, with EPS_TARGET_MAGNITUDE. For example (again, the last option can be omitted):

$ ./ex1 -st_type sinvert -eps_target 3.5 -eps_target_magnitude

The value of the shift need not be provided because it is taken from the target value.

Note that another difference is that in 3.1 eigenvalues are returned in the correct order, that is, the first one is the closest to the target, and so on.
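In source code, the 3.1-style settings correspond to calls such as the following (a minimal sketch using the PetscCall error-checking macro of recent PETSc versions; eps is assumed to be an already created EPS object with the problem matrices set):

```c
ST st;

/* Compute eigenvalues closest to the target value 3.5 */
PetscCall(EPSSetTarget(eps,3.5));
PetscCall(EPSSetWhichEigenpairs(eps,EPS_TARGET_MAGNITUDE));

/* Select shift-and-invert; the shift is taken from the target */
PetscCall(EPSGetST(eps,&st));
PetscCall(STSetType(st,STSINVERT));
```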

9. I get an error when retrieving the eigenvector

After the solver has finished, the solution can be retrieved with EPSGetEigenpair. In the Vr (and Vi) argument, one can pass NULL (if the eigenvector is not required) or a valid Vec object. That is, the vector must have been created beforehand, for example with VecCreate, VecDuplicate, or MatCreateVecs (see for instance ex7). The same applies to the analog functions in SVD, PEP, and NEP.
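For illustration, a valid way of creating the vectors before the call is the following (a sketch; A is assumed to be the problem matrix and eps an EPS object on which EPSSolve has already been run):

```c
Vec         xr, xi;          /* real and imaginary parts of the eigenvector */
PetscScalar kr, ki;          /* real and imaginary parts of the eigenvalue  */
PetscInt    i, nconv;

/* Create vectors compatible with the matrix layout */
PetscCall(MatCreateVecs(A,&xr,NULL));
PetscCall(MatCreateVecs(A,&xi,NULL));

PetscCall(EPSGetConverged(eps,&nconv));
for (i=0;i<nconv;i++) {
  /* Vr/Vi must be NULL or valid (already created) Vec objects */
  PetscCall(EPSGetEigenpair(eps,i,&kr,&ki,xr,xi));
}

PetscCall(VecDestroy(&xr));
PetscCall(VecDestroy(&xi));
```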

10. I get an error when running shift-and-invert in parallel

In 3.6 and later versions, the shift-and-invert spectral transformation defaults to using preonly+lu for solving linear systems. If you run with more than one MPI process this will fail, unless you use an external package for the parallel LU factorization. This is explained in section "Solution of Linear Systems" in SLEPc's user manual. In previous versions of SLEPc, this would not generate an error since it was using redundant rather than plain lu.
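Assuming PETSc was configured with MUMPS, a parallel shift-and-invert run could then look like this (a sketch; the executable name and target value are illustrative, and the exact solver option name may differ in older PETSc versions):

```shell
$ mpiexec -n 4 ./ex1 -st_type sinvert -eps_target 3.5 \
    -st_ksp_type preonly -st_pc_type lu \
    -st_pc_factor_mat_solver_type mumps
```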

11. Building an application with CMake or pkg-config

SLEPc (and PETSc) provides pkg-config files that can be used from makefiles as well as from CMake files. See the discussion at Issue #19 - Detecting SLEPc via CMake.
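For example, a CMake project could locate SLEPc through pkg-config as follows (a sketch; it assumes PKG_CONFIG_PATH contains the directory holding SLEPc's .pc file, typically $SLEPC_DIR/$PETSC_ARCH/lib/pkgconfig, and the project name is illustrative):

```cmake
cmake_minimum_required(VERSION 3.10)
project(myapp C)

# Locate SLEPc via its pkg-config file and create an imported target
find_package(PkgConfig REQUIRED)
pkg_check_modules(SLEPC REQUIRED IMPORTED_TARGET slepc)

add_executable(myapp main.c)
target_link_libraries(myapp PkgConfig::SLEPC)
```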

12. Why is it generally a bad idea to use EPS_SMALLEST_MAGNITUDE?

Krylov methods (and in particular the default SLEPc eigensolver, Krylov-Schur) are good for approximating eigenvalues in the periphery of the spectrum. Assuming an eigenproblem with real eigenvalues only, the use of EPS_SMALLEST_MAGNITUDE will be appropriate only if all eigenvalues are either positive or negative. Otherwise, the smallest magnitude eigenvalues lie in the interior of the spectrum, and therefore the convergence will likely be very slow. The usual approach for computing interior eigenvalues is the shift-and-invert spectral transformation (see chapter 3 of the user manual). Hence, instead of -eps_smallest_magnitude one would generally prefer -st_type sinvert -eps_target 0.

13. Creating a sparse matrix gets terribly slow when I increase the matrix size

Matrix preallocation is extremely important, especially for large matrices. See the related PETSc FAQ - Assembling large sparse matrices takes a long time.
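As an illustration, the following sketch preallocates and assembles a tridiagonal matrix (at most 3 nonzeros per row; the off-process estimate of 1 nonzero per row in the MPI case is an assumption valid for this simple structure):

```c
Mat      A;
PetscInt n = 100000, i, Istart, Iend;

PetscCall(MatCreate(PETSC_COMM_WORLD,&A));
PetscCall(MatSetSizes(A,PETSC_DECIDE,PETSC_DECIDE,n,n));
PetscCall(MatSetFromOptions(A));
/* Preallocate before setting values; only the call matching the
   actual matrix type (seqaij or mpiaij) takes effect */
PetscCall(MatSeqAIJSetPreallocation(A,3,NULL));
PetscCall(MatMPIAIJSetPreallocation(A,3,NULL,1,NULL));

PetscCall(MatGetOwnershipRange(A,&Istart,&Iend));
for (i=Istart;i<Iend;i++) {
  if (i>0)   PetscCall(MatSetValue(A,i,i-1,-1.0,INSERT_VALUES));
  if (i<n-1) PetscCall(MatSetValue(A,i,i+1,-1.0,INSERT_VALUES));
  PetscCall(MatSetValue(A,i,i,2.0,INSERT_VALUES));
}
PetscCall(MatAssemblyBegin(A,MAT_FINAL_ASSEMBLY));
PetscCall(MatAssemblyEnd(A,MAT_FINAL_ASSEMBLY));
```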