# Resolvent operator with FF/PETSc (MatMatSolve?)

Hello FF developers,

I am interested in implementing a resolvent (input/output) analysis framework using the FreeFEM/PETSc interface.

To do this, I need to construct a matrix defined by a product of several sparse matrices and matrix inverses. The matrix of interest is Hermitian, with dimension n\times n. It is defined as L=B^HR^HM_qRB.

Here, B is a m\times n matrix of 1s and 0s with m\geq n, M_q is a positive semi-definite m\times m matrix, and R is an m\times m matrix defined by an inverse as R=(i\omega M_q+J)^{-1}.

With the exception of R, I can construct the Mat objects for each of the basic components of L, and I understand how to perform the necessary MatMatMult() operations. However, I do not know how to use MUMPS to apply R. It seems like this would require MatMatSolve(), but I couldn’t find any documentation on it in FreeFEM. Has anyone encountered a similar problem and found a solution?

For context, I need to construct this matrix in order to solve the eigenvalue problem (using the SLEPc interface):

L\tilde{\mathbf{f}}=\lambda M_f\tilde{\mathbf{f}}, where M_f is a positive definite n\times n matrix.
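As a side note (this derivation is not in the original post, but follows directly from the definitions above): L is a Gram-type product, which is why the problem can be posed as a Hermitian eigenvalue problem:

```latex
L \;=\; B^H R^H M_q R B \;=\; (RB)^H M_q (RB),
\qquad
\tilde{\mathbf{f}}^H L \tilde{\mathbf{f}}
\;=\; \left(RB\tilde{\mathbf{f}}\right)^H M_q \left(RB\tilde{\mathbf{f}}\right) \;\ge\; 0,
```

so with M_q positive semi-definite and M_f positive definite, all eigenvalues \lambda are real and non-negative.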

A good overview with more detail on this subject is given here (see Section 4).

You cannot access R from MUMPS; nobody can, except the MUMPS developers. So you’ll need to solve (i\omega M_q+J)C = B using KSPSolve() in your .edp, first converting B to a dense Mat, since KSPMatSolve() (in the PETSc library) only handles dense right-hand sides. This will be extremely costly: these inversions make your L dense, and depending on which \lambda you are looking for, SLEPc may have to “invert” L, so it may be best to think of an approximation of L^{-1} that would not need an explicit representation of your L.
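In linear-algebra terms, the suggestion above amounts to factoring i\omega M_q+J once and back-substituting against every column of B. A minimal SciPy sketch of that idea (all matrices below are small hypothetical stand-ins, not the operators from the original post):

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Hypothetical stand-ins: m x m operators, m x n selection matrix B of 0s and 1s.
m, n, omega = 8, 3, 2.0
rng = np.random.default_rng(0)
Mq = sp.diags(rng.uniform(0.5, 1.5, m)).tocsc()            # PSD stand-in for M_q
J = sp.random(m, m, density=0.4, random_state=0).tocsc()   # stand-in Jacobian
B = sp.csc_matrix((np.ones(n), (rng.choice(m, n, replace=False), np.arange(n))),
                  shape=(m, n))

A = (1j * omega * Mq + J).tocsc()
lu = spla.splu(A)                          # one LU factorization, reused per column
C = lu.solve(B.toarray().astype(complex))  # dense multi-column RHS: C = R @ B

# C is dense even though A and B are sparse -- this is the cost warned about above.
print(np.allclose(A @ C, B.toarray()))  # → True
```

The factorization is done once; the expense comes from the n dense back-substitutions and from storing the dense result.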

See FreeFem-tutorial - Section 8 - example14.edp for an example of solves with multiple right-hand sides.


@prj thank you for the quick reply.

You are correct that this would be very expensive. Thanks to your comment, I think I see the solution. I’ll state it briefly here in case it helps anyone in the future.

1. Define a func which computes the action of the operator L on \tilde{\mathbf{f}} i.e. \mathbf{x}=L\tilde{\mathbf{f}}.

2. Use this func to create the Arnoldi basis in the EVP in a matrix-free approach (à la FreeFem-tutorial - Section 8 - example11.edp). This way the LU factorization is computed only once and applied only as many times as needed to build the Arnoldi subspace.

That would indeed be orders of magnitude cheaper to compute.
This is like for PCD preconditioning, cf. FreeFem-sources/oseen-2d-PETSc.edp at develop · FreeFem/FreeFem-sources · GitHub, the PCD operator is never assembled, but is computed instead by a sequence of MatMult() or KSPSolve(). In your case, you’ll need to implement:

• MatMult() in a func as a sequence of MatMult(B, ...) + KSPSolve(R, ...) + MatMult(Mq, ...) + KSPSolveHermitianTranspose(R, ...) + MatMultHermitianTranspose(B, ...)
• PCApply() – for L^{-1} if you are doing shift-and-invert in SLEPc for smallest \lambda – in a func (maybe an initial approximation could be simply M_q^{-1})
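The MatMult() sequence in the first bullet can be sketched in SciPy notation, with splu’s trans='H' standing in for KSPSolveHermitianTranspose() (all matrices below are small hypothetical stand-ins, not the operators from the thread):

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

m, n, omega = 8, 3, 2.0
rng = np.random.default_rng(1)
Mq = sp.diags(rng.uniform(0.5, 1.5, m)).tocsc()            # PSD stand-in for M_q
J = sp.random(m, m, density=0.4, random_state=1).tocsc()   # stand-in Jacobian
B = sp.csc_matrix((np.ones(n), (rng.choice(m, n, replace=False), np.arange(n))),
                  shape=(m, n))

lu = spla.splu((1j * omega * Mq + J).tocsc())  # factored once, reused below

def apply_L(f):
    """x = L f = B^H R^H M_q R B f, applied without ever forming L."""
    x = B @ f                    # MatMult(B, ...)
    x = lu.solve(x)              # KSPSolve(R, ...): (i w Mq + J)^{-1} x
    x = Mq @ x                   # MatMult(Mq, ...)
    x = lu.solve(x, trans='H')   # KSPSolveHermitianTranspose(R, ...)
    return B.conj().T @ x        # MatMultHermitianTranspose(B, ...)
```

Each application of L costs two triangular back-substitutions and three sparse products; the factorization itself is never repeated.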

LU decomposition of what? L? You won’t have that. R or M_q? Yes, whatever you choose as a preconditioner for these two operators, PETSc is smart enough to reuse them throughout the Arnoldi iterations.

Thanks, yes, I was referring to the LU decomposition of R. I can see that PETSc is automatically reusing that factorization.

Thank you for your suggestions. I believe I am close to having this working, but I am a bit confused on the required syntax.

I have defined a func complex[int] Lop(complex[int]& inPETSC) {...} to handle the sequence of MatMult()s and KSPSolve()s that define the operator L. I have tested this function independently (i.e., outside of EPSSolve()) and can confirm that it works as intended.

My struggle right now is with getting EPSSolve() to do what I want. In particular, I don’t understand how to use the precon argument used in your examples. My current code is similar to what I’ve given below. It runs without error, but the results I’m getting don’t make sense yet. This could be due to BCs or something else, but I wondered if you see an obvious error, given my lack of experience with PETSc.

```freefem
Mat<complex> L(Mf, Lop); // Mf is a Mat<complex>, Lop is a func
int nev = getARGV("-nev", 1);
string EPSparams = " -eps_nev " + nev + " " +
                   " -eps_type krylovschur " +
                   " -eps_largest_real " +
                   " -st_pc_type none " +
                   " -eps_gen_hermitian ";
complex[int] val(nev);
Xh<complex>[int] deff(vec)(nev); // Xh is a fespace
int k = EPSSolve(L, Mf, vectors = vec, values = val, sparams = EPSparams);
```
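For anyone reading along, this shell-matrix generalized Hermitian eigensolve has a close SciPy analogue, sketched below with small hypothetical stand-in matrices: LinearOperator plays the role of Mat<complex> L(Mf, Lop), and eigsh that of EPSSolve with -eps_gen_hermitian:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

m, n, omega = 8, 3, 2.0
rng = np.random.default_rng(2)
Mq = sp.diags(rng.uniform(0.5, 1.5, m)).tocsc()            # PSD stand-in for M_q
J = sp.random(m, m, density=0.4, random_state=2).tocsc()   # stand-in Jacobian
B = sp.csc_matrix((np.ones(n), (rng.choice(m, n, replace=False), np.arange(n))),
                  shape=(m, n))
Mf = sp.diags(rng.uniform(1.0, 2.0, n)).tocsc()            # positive definite M_f

lu = spla.splu((1j * omega * Mq + J).tocsc())              # factored once

def Lop(f):
    # L f = B^H R^H M_q R B f, computed matrix-free
    x = lu.solve(B @ f)                                    # R B f
    return B.conj().T @ lu.solve(Mq @ x, trans='H')

L = spla.LinearOperator((n, n), matvec=Lop, dtype=complex)  # shell matrix

# Largest eigenvalues of L f = lambda Mf f ('LA' here ~ -eps_largest_real)
vals = spla.eigsh(L, k=1, M=Mf, which='LA', return_eigenvectors=False)
```

Since L is Hermitian positive semi-definite and Mf is positive definite, the computed eigenvalues should come out real and non-negative.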


I believe you don’t need -st_pc_type none with -eps_largest_real. What do you mean by “the results […] don’t make sense”? Are you getting a meaningful -eps_view? What about -eps_monitor and -eps_converged_reason?


Yes, these are all meaningful. I think the issue is with my varfs/BCs. Thank you for your helpful insights as always @prj