I am interested in implementing a resolvent (input/output) analysis framework using the FreeFEM/PETSc interface.
To do this, I need to construct a matrix defined by a series of products of several sparse matrices and matrix inverses. The matrix of interest is Hermitian, of dimension n\times n, and is defined as L = B^H R^H M_q R B.
Here, B is an m\times n matrix of 1s and 0s with m\geq n, M_q is a positive semi-definite m\times m matrix, and R is an m\times m matrix defined through a matrix inverse as R = (i\omega M_q + J)^{-1}.
With the exception of R, I can construct the Mat objects for each of the basic components of L, and I understand how to perform the necessary MatMatMult() operations. However, I do not know how to use MUMPS to find R. It seems like this would require MatMatSolve(), but I couldn't find any documentation on it in FreeFEM. Has anyone encountered a similar problem and found a solution?
For context, I need to construct this matrix in order to solve the eigenvalue problem (using the SLEPc interface):
L\tilde{\mathbf{f}}=\lambda M_f\tilde{\mathbf{f}}, where M_f is a positive definite n\times n matrix.
A good overview with more detail on this subject is given here (see Section 4).
You cannot access R from MUMPS; nobody can, except the MUMPS developers. So you'll need to solve (i\omega M_q + J)C = B with KSPSolve() in your .edp, after first converting B to a dense Mat, since KSPMatSolve() (in the PETSc library) only handles dense right-hand sides. This will be extremely costly: these inversions make your L dense, and depending on which \lambda you are looking for, SLEPc may have to "invert" L. So it may be best to think of an approximation of L^{-1} that would not need an explicit representation of your L.
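Just to make the cost concrete, here is a purely illustrative, sequential sketch of what forming C = RB column by column would look like; the names iwMqJ (a Mat<complex> holding i\omega M_q + J, with a direct factorization attached via set()), B, and the sizes m and n are assumptions, not identifiers from the posts above:
complex[int] ej(n), bj(m), cj(m);
for (int j = 0; j < n; j++) {
    ej = 0; ej[j] = 1.0;     // j-th canonical basis vector
    MatMult(B, ej, bj);      // bj = j-th column of B
    KSPSolve(iwMqJ, bj, cj); // cj = (i*omega*Mq + J)^{-1} bj, i.e. the j-th column of the dense R*B
}
That is n sparse direct solves just to form R*B, before even touching the remaining products.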
You are correct that this would be very expensive. Thanks to your comment, I think I see the solution. I’ll state it briefly here in case it helps anyone in the future.
Define a func which computes the action of the operator L on \tilde{\mathbf{f}}, i.e. \mathbf{x} = L\tilde{\mathbf{f}}.
Use this func to build the Arnoldi basis for the EVP in a matrix-free approach (à la FreeFem-tutorial - Section 8 - example11.edp). This way the LU factorization is computed only once and applied only as many times as needed to build the Arnoldi subspace.
That would indeed be orders of magnitude cheaper to compute.
This is like PCD preconditioning (cf. FreeFem-sources/oseen-2d-PETSc.edp at develop · FreeFem/FreeFem-sources · GitHub): the PCD operator is never assembled, but is instead applied through a sequence of MatMult() or KSPSolve(). In your case, you'll need to implement:
MatMult() in a func, as a sequence of MatMult(B, ...) + KSPSolve(R, ...) + MatMult(Mq, ...) + KSPSolveHermitianTranspose(R, ...) + MatMultHermitianTranspose(B, ...) (a sketch is given after this list)
PCApply() in a func, for L^{-1} if you are doing shift-and-invert in SLEPc for the smallest \lambda (maybe an initial approximation could simply be M_q^{-1})
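Concretely, a minimal sketch of the first func might look like the following; the names iwMqJ (a Mat<complex> holding i\omega M_q + J, with a MUMPS LU attached via set()), B, Mq, and the local state-space vector length mLoc are assumptions, so adapt them to your own script:
func complex[int] Lop(complex[int]& f) {
    complex[int] t1(mLoc), t2(mLoc), t3(mLoc), t4(mLoc), x(f.n);
    MatMult(B, f, t1);                          // t1 = B f
    KSPSolve(iwMqJ, t1, t2);                    // t2 = R t1 = (i*omega*Mq + J)^{-1} t1
    MatMult(Mq, t2, t3);                        // t3 = Mq t2
    KSPSolveHermitianTranspose(iwMqJ, t3, t4);  // t4 = R^H t3
    MatMultHermitianTranspose(B, t4, x);        // x  = B^H t4 = L f
    return x;
}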
LU decomposition of what? L? You won’t have that. R or M_q? Yes, whatever you choose as a preconditioner for these two operators, PETSc is smart enough to reuse them throughout the Arnoldi iterations.
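For instance, attaching an exact MUMPS LU to the (assumed) Mat iwMqJ holding i\omega M_q + J is a one-liner; every subsequent KSPSolve()/KSPSolveHermitianTranspose() inside the operator func then reuses that single factorization:
set(iwMqJ, sparams = "-ksp_type preonly -pc_type lu -pc_factor_mat_solver_type mumps");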
Thank you for your suggestions. I believe I am close to having this working, but I am a bit confused about the required syntax.
I have defined a func complex[int] Lop(complex[int]& inPETSC) {...} to handle the sequence of MatMult()s and KSPSolve()s that define the operator L. I have tested this function independently (i.e. outside of EPSSolve()) and can confirm that it works as intended.
My struggle right now is how to get EPSSolve() to do what I want. In particular, I don't understand how to use the precon argument used in your examples. My current code is similar to what I've given below. It runs without error, but the results I'm getting don't make sense yet. This could be due to BCs or something else, but I wondered if you see an obvious error, given my lack of experience with PETSc.
Mat<complex> L(Mf, Lop); // Mf is a Mat<complex>, Lop is a func
int nev = getARGV("-nev",1);
string EPSparams = " -eps_nev " + nev + " " +
" -eps_type krylovschur " +
" -eps_largest_real " +
" -st_pc_type none " +
" -eps_gen_hermitian ";
complex[int] val(nev);
Xh<complex>[int] deff(vec)(nev); // Xh is a fespace
int k = EPSSolve(L, Mf, vectors = vec, values = val, sparams = EPSparams);
I believe you don’t need -st_pc_type none with -eps_largest_real. What do you mean by “the results […] don’t make sense”? Are you getting a meaningful -eps_view? What about -eps_monitor and -eps_converged_reason?
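For reference, those are standard SLEPc command-line options, so one way to get that output would simply be to append them to the sparams string in the snippet above (dropping -st_pc_type none), for example:
string EPSparams = " -eps_nev " + nev + " -eps_type krylovschur -eps_largest_real -eps_gen_hermitian " +
                   " -eps_view -eps_monitor -eps_converged_reason ";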
I am considering using the resolvent framework, and your code is very helpful. Could you share the variational formulation of vMq, vMf and LocPQ, and the corresponding FE spaces?
Hi @aszaboa and @tea, I’ve opened the ff-bifbox repo to the public now. Note that the code is still under development and some things may not work perfectly yet.
You can see/test my resolvent analysis implementation by following the commands in examples/garnaud_2012.
I have a small puzzle about rslvcompute.edp in your ff-bifbox repo. First, you use tgv = -2 for the linear operator and tgv = -20 for the response norm. Is this better than tgv = -1 for the linear operator and no tgv for the response norm? (I used to apply ARPACK to solve the problem with matrices constructed that way in FreeFem++ and didn't encounter problems.) Second, to compute the response modes, I think the complex array gm or qm is in PETSc indexing, and its length should be sized as qm(J.n) instead of qm(XMh.n).
Thanks very much for your code, I have learned a lot from it!
Thanks for the message. I’m glad you found the codes helpful! To answer your post:
Yes, tgv=-2 is needed here instead of tgv=-1, because we are solving both with KSPSolve() (which needs row elimination to enforce the BCs) and with KSPSolveHermitianTranspose() (which needs column elimination to enforce the BCs). Another option would be the usual tgv=1e30 penalization, but I did not use it here to avoid potential issues with iterative solvers.
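Schematically (not the actual ff-bifbox lines; the varf name vJ is a placeholder, J and XMh as in your message), the assembly with tgv = -2 looks like:
J = vJ(XMh, XMh, tgv = -2); // eliminate both rows and columns of Dirichlet DOFs, consistent for KSPSolve() and KSPSolveHermitianTranspose()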
Yes, you are correct – good catch. However, this does not cause problems in this case since ChangeNumbering() automatically resizes the arrays.
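For reference, the conversion looks schematically like this (um and qm are placeholder names; whether XMh needs a def macro depends on the problem); the PETSc-side array ends up with the local length associated with J, whatever size it was declared with:
XMh<complex> um;                                               // a mode in FreeFEM (FE-space) numbering
complex[int] qm(0);                                            // ChangeNumbering() resizes this to the local PETSc length
ChangeNumbering(J, um[], qm);                                  // FreeFEM numbering -> PETSc numbering
ChangeNumbering(J, um[], qm, inverse = true, exchange = true); // PETSc numbering -> FreeFEM numbering, with ghost exchange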