SLEPc-Complex eigenmodes

Hello forum,

I am using FreeFEM++ 4.8 on Windows 10. Lately, I’ve tried several eigenvalue problems in parallel using SLEPc-complex and I have a few questions.

  1. Is there some sort of value scaling in SLEPc when it comes to computing the eigenmodes? In the example “navier-stokes-2d-SLEPc-complex” the Vec values range in the interval [0, 0.03455], which seems a bit low. I also checked and compared the results from other cases of mine by running them on a single core (using FreeFEM++ and EigenValue) and in parallel (with FreeFEM++-mpi and EPSSolve). The eigenvalues obtained are the same and the spatial structures are identical; the only thing that differs is the isovalue range, as you can see in the attached photo.

  2. Is there any way to make the graph resulting from "-eps_view_values draw" bigger, maybe with custom axes?
  3. If I use "plot" instead of "plotMPI" I only get a single subdomain. Can I use plot in MPI without these partitioning lines?
  4. Since ParaView can't handle P2 elements, I preferred MATLAB for post-processing, but in parallel it doesn't save the files anymore. Is there some option I must activate on the command line (such as -wg for plotting or -v for verbosity)?

Sorry for the long post and thank you in advance.


Regards,
Robert

  1. I believe there can be scaling, which could explain the difference in isovalues
  2. You can try to add -draw_size 800,800 to your command line parameters
  3. You can only plot from rank #0, so if you want to plot within FreeFEM without those lines, you’ll need to gather the solution on rank #0 and then plot. It’s rather inefficient, and I’d encourage you to use ParaView instead (even if it does handle P_2, I don’t think your eyes handle P_2 as well :smiley:)
  4. Which file? You can do -eps_view_vectors ::ascii_matlab to export the eigenvectors to MATLAB (see the combined sketch of points 2 and 4 after this list)
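Putting points 2 and 4 together, here is a minimal sketch of how those options could be forwarded through the sparams string of an EPSSolve call (A, B, EigenVAL, EigenVEC and nev are placeholders for whatever your script already defines, not names taken from this thread):

string ssparams = " -eps_nev " + nev                            // number of requested eigenvalues
                + " -eps_view_values draw -draw_size 800,800"   // larger eigenvalue plot (point 2)
                + " -eps_view_vectors ::ascii_matlab";          // MATLAB-readable eigenvectors (point 4)
int k = EPSSolve(A, B, values = EigenVAL, vectors = EigenVEC, sparams = ssparams);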

I am sorry for the late answer, thank you for the explanations. I followed your advice and used ParaView for post-processing, but I still get only a part of the domain (a single subdomain), as you can see in the attached picture. I also attach the part of the code that handles plotting and saving the data. This problem occurs when running in parallel; with a single-core simulation (by which I mean that I don’t load the SLEPc-complex library) it works fine and I get the full geometry.

load "iovtk"
load "SLEPc-complex"
include "macro_ddm.idp"
//…EQUATIONS…//
int nev = 2; //Number of desired eigenvalues
//…Eigenvalues settings, tolerances etc…//
XXXXFES<complex>[int] [EigenVecu](nev); //array of finite element functions to store the eigenvectors
//PLOT EIGENMODES/SENSITIVITY MAP//
//Create finite element spaces//
fespace Xh(Th,P2); //For the eigenmodes of velocity
Xh EMur;
macro def1(i)i //Macro for the field plotting
int[int] Order = [1]; //Change order to 1 for paraview
string DataName = "EigenMode";

for (int i=0; i<nev; i++)
{
// Direct eigenmodes plot//
EMur=real(EigenVecu[i]); //Real part of the streamwise component direct eigenmode
savevtk("…Path…\EMur.vtu", Th, EMur, dataname=DataName, order=Order);

//Below plot works://
// plotMPI(Th, EMur, P2, def1, real, cmm = "Leading StreamwiseVelocity (Real part) eigenmode")
}

When I open EMur.vtu in ParaView I get the result shown in the screenshot.

Do not load iovtk, it’s for sequential post-processing (hence the single subdomain view). For parallel post-processing, everything is already bundled in SLEPc-complex.
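Concretely, only the header of the script needs to change; a minimal sketch, assuming the rest stays as posted:

// load "iovtk"        // remove this: sequential VTK output, it only writes the local subdomain
load "SLEPc-complex"   // parallel savevtk is already available through this plugin stack
include "macro_ddm.idp"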

Thank you, I have the full geometry now. But even in ParaView I still have the partitioning lines, as you can see. Is there any chance of getting rid of them?

Sure, you can add a filter of type “Threshold”, according to “Label”, and only keep positive or null values.

Hello again. The “Threshold” filter worked fine. Now I am facing another small problem. I want to compute a dot product between the eigenmodes plotted above and their adjoints. It should look like this:

for (i=0; i<nev; i++)
{
// Sensitivity = (sqrt(abs(EigenVecu[i])^2+abs(EigenVecv[i])^2))*(sqrt(abs(AdjEigenVecu[i])^2+abs(AdjEigenVecv[i])^2))/(int2d(Th)(EigenVecu[i]*AdjEigenVecu[i]+EigenVecv[i]*AdjEigenVecv[i]));

Sensitivity = (sqrt(abs(EigenVecu[i])^2+abs(EigenVecv[i])^2))*(sqrt(abs(AdjEigenVecu[i])^2+abs(AdjEigenVecv[i])^2));

Sr=real(Sensitivity); //Real part of the streamwise component direct eigenmode
Si=imag(Sensitivity);
savevtk("C:\Users\Results\Resultsensitivitat.vtu", Th, [Sr, Si], dataname=DataName, order=Order);
}

Normally I use the first formula for Sensitivity, the commented one that contains the integral over Th, but when I run it in parallel on 2+ cores I get results like the ones attached in the first figure; it looks as if the results/partitions are overlapped.
However, if I still use the SLEPc library but with 1 core (-np 1) it works: I get the expected spatial structure, but again, as I said in an earlier post, the intensity seems to be scaled (I still haven’t found out what SLEPc scales the results with respect to). Removing the integral when running on several cores partially solved the problem (I get the expected structure), but with very low values. Is there another way I should write the integral when running in parallel? Thank you.


The way you compute the integral in parallel is wrong. int2d(Th)(EigenVecu[i]*AdjEigenVecu[i]+EigenVecv[i]*AdjEigenVecv[i]) is defined on each process, and they all have different values, since all Th are different. You can check this by simply doing a cout on all MPI processes. You need to do a global reduction, as well as scale the vectors so that the overlapping unknowns are counted with multiplicity.

complex wrong = int2d(Th)(EigenVecu[i]*AdjEigenVecu[i]+EigenVecv[i]*AdjEigenVecv[i]);
cout << mpirank << " " << wrong << endl;
Vh<complex> scaledVecu = EigenVecu[i];
for[i, v : DistA.D] scaledVecu[][i] *= v; // DistA = the Mat you pass to EPSSolve
Vh<complex> scaledVecv = EigenVecv[i];
for[i, v : DistA.D] scaledVecv[][i] *= v;
wrong = int2d(Th)(scaledVecu*AdjEigenVecu[i]+scaledVecv*AdjEigenVecv[i]);
complex reduce;
mpiAllReduce(wrong, reduce, mpiCommWorld, mpiSUM);
if(mpirank == 0)
        cout << reduce << endl;

I used this FreeFEM example for debugging, you can copy/paste this at the end to see it working, it may then be easier for you to adapt.

diff --git a/examples/hpddm/laplace-2d-SLEPc-complex.edp b/examples/hpddm/laplace-2d-SLEPc-complex.edp
index 719d306b..77a12657 100644
--- a/examples/hpddm/laplace-2d-SLEPc-complex.edp
+++ b/examples/hpddm/laplace-2d-SLEPc-complex.edp
@@ -154,4 +154,13 @@ for(int i=0;i<k;i++){
             complex,// Type
             cmm = "Psi("+i+")  EV = "+EigenVAL[i]
            )
+    complex wrong = int2d(Th)(EigenVEC[i]*EigenVEC[i]);
+    cout << mpirank << " " << wrong << endl;
+    Temp[] = EigenVEC[i][];
+    for[i, v : DistA.D] Temp[][i] *= v;
+    wrong = int2d(Th)(EigenVEC[i]*Temp);
+    complex reduce;
+    mpiAllReduce(wrong, reduce, mpiCommWorld, mpiSUM);
+    if(mpirank == 0)
+        cout << reduce << endl;
 }
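For reference, such an example is typically launched along these lines (assuming a standard FreeFEM installation; adjust the number of processes and the path to your setup):

ff-mpirun -np 4 laplace-2d-SLEPc-complex.edp -wg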

Thank you, I will try this method.

Hello again. I tried to apply the following debug code (where Xh is my finite element space and J is the name given to the Mat), and the error I get is shown in the first attached photo:

complex wrong = int2d(Th)(EigenVecu[i]*AdjEigenVecu[i]+EigenVecv[i]*AdjEigenVecv[i]);
cout << mpirank << " " << wrong << endl;
Xh scaledVecu = EigenVecu[i]; //EigenVecu[i] - same error
for[i, v : J] scaledVecu[i] = v;
Xh scaledVecv = EigenVecv[i][];
for[i, v : J] scaledVecv[][i] = v;
wrong = int2d(Th)(scaledVecu*AdjEigenVecu[i]+scaledVecv*AdjEigenVecv[i]);
complex reduce;
mpiAllReduce(wrong, reduce, mpiCommWorld, mpiSUM);
if(mpirank == 0)
cout << reduce << endl;

Another question I have is about ParaView post-processing, as you can see in the second attached photo. I tried several filters (point data to cell data / point interpolate / Delaunay) to interpolate/smooth the results. The results are good if I process them in MATLAB or directly in FreeFEM++, so I don’t think the mesh is a big problem. Is there another filter to apply, or a filter applied on top of another filter? The above filters were applied directly on the .pvd file.

Thank you very much for your help and patience
Regards,
Robert

No idea what J is. As you can see in my snippet above, you need the equivalent of DistA.D, where DistA is the Mat you pass to EPSSolve. What order are you using in savevtk?
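To make the first point concrete, a hedged sketch with the naming from your snippet, assuming J really is the Mat created with createMat and passed to EPSSolve, and that scaledVecu lives in the same complex (vectorial) space as the eigenvectors so that its coefficient array has the same length as J.D:

// the partition of unity is the array attached to the distributed Mat, i.e. J.D, not J itself
for[j, v : J.D] scaledVecu[][j] *= v;   // weight each local coefficient once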

I will check the matrix again. In savevtk I use order 1, but everything is computed with order 2. I attach a small part of the code that creates the finite element spaces and exports the data. If I change Order=[2,2…] I only get the Labels in the .pvd file.

fespace Xh(Th,P2);
fespace Mh(Th,P1);
Xh EMur, EMui, EMvr, EMvi;
Xh AdjEMur, AdjEMui, AdjEMvr, AdjEMvi;
Mh EMpr, EMpi;
Mh AdjEMpr, AdjEMpi;

  Xh<complex> Sensitivity;		
  Xh Sr, Si;		

int[int] Order = [1,1,1,1,1,1,1,1,1];
string DataName= "Velocity Pressure EigenmodesAxial EigenmodesRadial EigenmodesPressure " +
"AdjointEigenmodesAxial AdjointEigenmodesRadial AdjointEigenmodesPressure " +
"Sensitivity";
for (i=0; i<nev; i++)
{

EMur=real(EigenVecu[i]);
EMui=imag(EigenVecu[i]);
EMvr=real(EigenVecv[i]);
EMvi=imag(EigenVecv[i]);
EMpr=real(EigenVecp[i]);
EMpi=imag(EigenVecp[i]);
AdjEMur=real(AdjEigenVecu[i]);
AdjEMui=imag(AdjEigenVecu[i]);
AdjEMvr=real(AdjEigenVecv[i]);
AdjEMvi=imag(AdjEigenVecv[i]);
AdjEMpr=real(AdjEigenVecp[i]);
AdjEMpi=imag(AdjEigenVecp[i]);
Sensitivity=…
Sr=real(Sensitivity);
Si=imag(Sensitivity);
savevtk("C:\Users\Desktop\Results\Reynolds"+Re+"\Results.vtu", Th,
[u, v], p,
[EMur, EMui], [EMvr, EMvi], [EMpr, EMpi],
[AdjEMur, AdjEMui], [AdjEMvr, AdjEMvi], [AdjEMpr, AdjEMpi],
[Sr, Si],
dataname=DataName, order=Order);
}

Dear professor,

While I was trying to fix int2d(Th)(EigenVecu[i]*AdjEigenVecu[i]+EigenVecv[i]*AdjEigenVecv[i]) using the Laplace 2d example, I noticed a difference in how the distributed matrices are written compared to Navier-Stokes 2d SLEPc complex. I applied both ways of writing to my case, which is also a flow around a cylinder. The code looks like this (I didn’t write out the equations as they are quite long):

load "SLEPc-complex"
include "macro_ddm.idp"
mesh Th = …
func E = [P2, P2, P2, P1];
fespace XXXXFES(Th,E);

int[int][int] intersection; // local-to-neighbors renumbering
real[int] D; // partition of unity

//Problem definition
varf LNSE =…
varf b =…

//Matrix creation

//VARIANT I //

Mat J;
macro def(i) [i, i#A, i#B, i#C]
macro init(i) [i, i, i, i]
createMat(Th, J, E);
J = LNSE(XXXXFES, XXXXFES);
matrix B = b(XXXXFES, XXXXFES);
Mat M(J, B);

int k = EPSSolve(J, M, vectors = …, values = …, sparams = …);

//VARIANT II //

matrix J = LNSE(XXXXFES, XXXXFES);
matrix B = b(XXXXFES, XXXXFES);

Mat DistJ(J,intersection, D);
Mat DistB(DistJ, B);

int k = EPSSolve(DistJ, DistB, vectors = …, values = …, sparams = …);

What is the difference between VARIANT I and VARIANT II? I am asking because the eigenvalues are the same in both cases but the eigenvectors are different. The SLEPc documentation says that the eigenvectors are normalized (I guess by sqrt(a^2+b^2+…), where a, b, … are the components of the vector). Now that I have two different sets of values for the eigenvectors I am a bit confused. I looked into “macro_ddm.idp” but I didn’t find where the eigenvectors are normalized. My questions would be:

  1. What exactly are “intersection” and “D” doing to the Mat DistJ?
  2. Is it possible to eliminate the normalization of the eigenvectors (maybe from macro_ddm.idp)?
  3. Is there something I could improve in writing the files for ParaVIEW post-processing in the code posted in the last post?
    Thank you,
    Robert
  1. Do not use variant II; it’s an old API. Variant I is much easier to follow (you don’t need to guess what those variables are).
  2. Do not touch macro_ddm.idp, it has nothing to do with the eigensolver. You can look at the SLEPc manual to understand how the vectors are normalized and how to adjust this (I’m not sure this is possible); see the note after this list.
  3. If what you write is what you want, I guess there is no real point in trying to improve that part of the script, is there?
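Regarding 2., a hedged side note based on my reading of the SLEPc users manual rather than on anything in this thread: the returned eigenvectors are simply scaled to unit norm, which for a complex vector x means

\|x\|_2 = \sqrt{\sum_k x_k \overline{x_k}} = \sqrt{\sum_k |x_k|^2} = 1,

so the complex conjugate only enters through the modulus; this would also be consistent with identical spatial structures but different isovalue ranges.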

Thank you for the answer. I will keep going with Variant I. About 2), I read in the manual how the vectors are normalized; I wanted to see how the norm is written in the program (especially the complex conjugate) and maybe do some simple tests. About 3), I thought there was a better way to export the results to avoid that plotting problem, but I will just try some filter combinations in ParaView.

Coming back to int2d(Th)(EigenVecu[i]*AdjEigenVecu[i]+EigenVecv[i]*AdjEigenVecv[i]), I attach a very simplified code with a simplified integral:

load "SLEPc-complex"
include "macro_ddm.idp"

mesh Th=…

func E = [P2, P2, P2, P1];

fespace XXXXFES(Th,E);
fespace XFES(Th,P2);

varf LNSE=…
varf b=…

Mat M, AdjM;
macro def(j) [j, j#A, j#B, j#C]
macro init(j) [j, j, j, j]
createMat(Th, M, E);

M = LNSE(XXXXFES, XXXXFES);
matrix B = b(XXXXFES, XXXXFES);

Mat J(M, B);

int nev = 1; //nb. of eigenvalues
int ncv = 2*nev; //nb. of eigenvectors
real[int] D;

complex[int] EigenVal(nev); //store eigenvalues
XXXXFES<complex>[int] [EigenVecu,EigenVecv,EigenVecw,EigenVecp](nev); //store eigenvectors

int i, k;
k = EPSSolve(M, J, values=EigenVal, vectors=EigenVecu, sparams = …);

for (i=0; i<k; i++)
{

complex wrong = int2d(Th)(EigenVecu[i]+EigenVecv[i]);
cout << mpirank << " " << wrong << endl;

Xh scaledVecu = EigenVecu[i];
for[i, su : M.D] scaledVecu *= su;

Xh scaledVecv = EigenVecv[i];
for[i, su : M.D] scaledVecv *= su;

complex wrong2 = int2d(Th)(scaledVecu*scaledVecv);
complex reduce;
mpiAllReduce(wrong, reduce, mpiCommWorld, mpiSUM);
if(mpirank == 0)
cout << reduce << endl;

cout<<"wrong2 = " << wrong2 <<endl;

}

As you can see in the attached picture, I indeed have different values for the integral because all “Th” are different, but after the global reduction the integral (redefined as wrong2) is 0 and I am unable to find the reason. Compared to your example, I changed “v” to “su” (v is defined as the radial velocity in my case) and scaledVecu[i] *= v to scaledVecu *= su due to an array error. Let me know if I should add more information to the script.

Regards,
Robert

int2d(Th)(scaledVecu*scaledVecv) is wrong, if you want to compute u * v, you need to scale either u or v, not both.
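In other words, a minimal sketch of the intended change, reusing the names from your snippet and assuming scaledVecu has already been weighted coefficient-by-coefficient with M.D (as in the laplace-2d diff above):

complex wrong2 = int2d(Th)(scaledVecu*EigenVecv[i]);   // one weighted factor, one untouched factor
complex reduce2;
mpiAllReduce(wrong2, reduce2, mpiCommWorld, mpiSUM);   // the global sum is the integral over the whole domain
if(mpirank == 0)
    cout << "global integral = " << reduce2 << endl;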

I made the following change but I still get "wrong2 = (0,0)":

for (i=0; i<k; i++)
{

complex wrong = int2d(Th)(EigenVecu[i]+EigenVecv[i]);
cout << mpirank << " " << wrong << endl;

Xh scaledVecu = EigenVecu[i];
for[i, su : M.D] scaledVecu *= su;

//Xh scaledVecv = EigenVecv[i];
//for[i, su : M.D] scaledVecv *= su;

complex wrong2 = int2d(Th)(scaledVecu*EigenVecv[i]);
complex reduce;
mpiAllReduce(wrong, reduce, mpiCommWorld, mpiSUM);
if(mpirank == 0)
cout << reduce << endl;

cout<<"wrong2 = " << wrong2 <<endl;

}

Well, you didn’t use my code properly… scaledVecu[] should be scaledVecu[][i]. Just use my script, which gives correct results, and adapt it to your needs if you don’t know how to debug your code.

With `scaledVecu[i] *=` I get the attached error. This is how I tried it the first time:

for (i=0; i<k; i++)
{

complex wrong = int2d(Th)(EigenVecu[i]+EigenVecv[i]);
cout << mpirank << " " << wrong << endl;

Xh < complex > scaledVecu = EigenVecu[i]; //also tried =EigenVecu[i]
for[i, su : M.D] scaledVecu[i] *= su;

complex wrong2 = int2d(Th)(scaledVecu*EigenVecv[i]);
complex reduce;
mpiAllReduce(wrong, reduce, mpiCommWorld, mpiSUM);
if(mpirank == 0)
cout << reduce << endl;

cout<<"wrong2 = " << wrong2 <<endl;

}

Line 236 being: for[i, su : M.D] scaledVecu[i] *= su;

You are not using the proper vector then.

I’ll repeat myself.

Just use my script, which is giving correct results, and adapt it to your needs if you don’t know how to debug your code.