I have tried to simplify them as much as I can. After running baseflow.edp with one or multiple CPUs, baseflow.txt and baseflow.msh are created and used as inputs for eigenmodes.edp. Running with one CPU gives the eigenvalue (0.0571415, 0.738642), while with 2 CPUs the eigenvalue is (0.00873975, 0.676901).
Please make an earnest effort to simplify this. Eliminate any required user inputs so I can just run the program without any input. You may find your bug simply by cleaning up this code.
One other suggestion: I would encourage you to favor the implementation strategy of Moulin et al. over the StabFEM implementation strategy you are using. For a modest 2-D configuration, you should be able to neglect all the intricacies of the preconditioner and get very good performance with direct methods using MUMPS.
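To make that concrete, here is a rough sketch of what "direct methods with MUMPS" can look like through the PETSc/SLEPc interface. The geometry, P1 space, shift target, and matrix names below are placeholders (a plain Laplace eigenproblem in real arithmetic), not your actual Navier-Stokes operators; for those you would use the complex interface (SLEPc-complex) with your Jacobian and mass matrices. The essential part is the options string, where the shift-and-invert solve is a plain MUMPS LU factorization, with no preconditioner tuning at all:

```
// Rough sketch only: generalized eigenproblem with SLEPc, shift-and-invert
// done through a MUMPS LU factorization (placeholder Laplace problem).
load "SLEPc"
macro dimension()2// EOM
include "macro_ddm.idp"

mesh Th = square(40, 40);                    // placeholder geometry
buildDmesh(Th)                               // distributed (overlapping) mesh
fespace Vh(Th, P1);

varf vA(u, v) = int2d(Th)(dx(u)*dx(v) + dy(u)*dy(v)) + on(1, 2, 3, 4, u = 0);
varf vB(u, v) = int2d(Th)(u*v);
Mat A;
Mat B;
createMat(Th, A, P1)
createMat(Th, B, P1)
A = vA(Vh, Vh);
B = vB(Vh, Vh);

int nev = 5;                                 // number of requested eigenpairs
real[int] val(nev);                          // eigenvalues
Vh[int] vec(nev);                            // eigenvectors
int k = EPSSolve(A, B, values = val, vectors = vec, sparams =
    "-eps_type krylovschur -eps_nev " + nev +
    " -eps_target 20 -eps_target_magnitude" +
    " -st_type sinvert -st_ksp_type preonly" +
    " -st_pc_type lu -st_pc_factor_mat_solver_type mumps");
if(mpirank == 0) {
    for(int i = 0; i < min(k, nev); ++i)
        cout << "eigenvalue " << i << " = " << val[i] << endl;
}
```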
Although I followed the readme.md of Moulin, I couldn't run eigensolver.edp; I get an error in the decomposition. nonlinear-solver.edp runs properly. This is on Windows, using the latest version, 4.6.
Since your problem is two-dimensional, you can also simply use the code from the FreeFEM repository, navier-stokes-2d-PETSc.edp and navier-stokes-2d-SLEPc-complex.edp. Then add T, and then add rho. Always try to increase complexity gradually; as Chris said, your "MWE" is far from minimal, so it's kind of tricky to help you out…
Also, feel free to send me the error you are getting with the code from Moulin et al.; I used the solver just yesterday without a problem (but I'm using the develop branch).
OK, good. Then, as I said earlier, since your problem seems to be two-dimensional (for now), I'd suggest you use these files as a starting point. The code from Moulin et al. is mostly useful if you are looking at very large-scale 3D problems, which is not the case right now. I'll investigate this in the meantime, but I think it is fixed in the develop branch, so it will be good with FreeFEM 4.7, to be released soon.
I mainly do this because I need the non-overlapping distributed mesh for some calculations. Now the question is how this restriction matrix (a 2D array) relates to the R array that we obtain from the restrict function. I mean, how can I use this restriction (of type int[int][int]) to perform a manipulation like u[] = uGlobal[](R) (as R is obviously an array of type int[int])? I couldn't figure it out by reading the source code of the macro_ddm.idp file. Is there any link to mpirank for the first dimension of this array?
I'd highly suggest that you transition to the "new school approach", which is more flexible, more efficient, and should be less intrusive in your code. You can then use this to get the local subdomain without overlap.
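For reference, a minimal sketch of that workflow, assuming that by "new school approach" we mean the buildDmesh/createMat macros from macro_ddm.idp (the Laplace problem, P1 space, and solver options below are placeholders):

```
// Sketch of the buildDmesh/createMat workflow with a placeholder Laplace problem.
load "PETSc"
macro dimension()2// EOM
include "macro_ddm.idp"

mesh Th = square(40, 40);        // global mesh (placeholder)
buildDmesh(Th)                   // Th becomes the local overlapping subdomain
fespace Vh(Th, P1);

Mat A;
createMat(Th, A, P1)             // distributed matrix attached to the decomposition

varf vPb(u, v) = int2d(Th)(dx(u)*dx(v) + dy(u)*dy(v))
               + int2d(Th)(v)
               + on(1, 2, 3, 4, u = 0);
A = vPb(Vh, Vh);
real[int] rhs = vPb(0, Vh);

set(A, sparams = "-ksp_type preonly -pc_type lu -pc_factor_mat_solver_type mumps");
Vh u;
u[] = A^-1 * rhs;                // direct solve with MUMPS through PETSc
```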
Is there any link to mpirank for the first dimension of this array?
I’m sorry but I don’t understand this question.
Also, in your case, the n2o array together with the restrict keyword will help you go from local with overlap to local without overlap and vice versa. You can follow the same procedure as in this thread (forget about the int[int][int] restriction, it has nothing to do with this), though here it is used to go from global to local and vice versa.
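To make the procedure concrete, here is a minimal sketch of both directions mentioned above: global to local with the n2o hook plus restrict, and local-with-overlap to local-without-overlap with trunc(..., new2old = ...) plus restrict. The variable names, the P1 space, and the element indicator used for the non-overlapping part are assumptions:

```
// Sketch: global -> local (overlap) via n2o + restrict, then
// local overlap -> local no-overlap via trunc(new2old) + restrict.
load "PETSc"
macro dimension()2// EOM
include "macro_ddm.idp"

mesh Th = square(40, 40);
mesh ThGlobal = Th;              // keep a copy of the global mesh
int[int] n2o;                    // local-element -> global-element map
macro ThN2O()n2o// EOM           // ask buildDmesh to fill n2o
buildDmesh(Th)

fespace Vh(Th, P1);
fespace VhGlobal(ThGlobal, P1);

// global -> local (overlapping): R maps local DOFs to global DOFs
int[int] R = restrict(Vh, VhGlobal, n2o);
VhGlobal uGlobal = x + y;        // some global field (placeholder)
Vh u;
u[] = uGlobal[](R);              // restriction of the global field to this subdomain

// local overlapping -> local non-overlapping: keep is any P0 indicator of the
// elements you want to retain on this rank (placeholder, replace with your own)
fespace Ph(Th, P0);
Ph keep = 1;
int[int] n2oNo;
mesh ThNo = trunc(Th, keep > 0.5, new2old = n2oNo);
fespace VhNo(ThNo, P1);
int[int] RNo = restrict(VhNo, Vh, n2oNo);
VhNo uNo;
uNo[] = u[](RNo);                // same manipulation, overlap -> no overlap
```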
Thank you @prj for your quick reply. Indeed, I have no issue using n2o and restrict to go from the local overlapping mesh to the local non-overlapping one. My question was more about using the int[int][int] restriction to go from local to global.
So, I will transfer my code to the new approach and use the procedure discussed in this thread to go from local to global.
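For reference, a sketch of the reverse direction (local to global) built on the same R array from restrict: each rank scatters its local values into a global-sized array, contributions are summed with mpiAllReduce, and shared DOFs are averaged with a counter. The averaging is just one simple choice of mine and assumes the local fields agree on the overlap; names and the P1 space are placeholders:

```
// Sketch: gather a local (overlapping) field into a global one via R + mpiAllReduce.
load "PETSc"
macro dimension()2// EOM
include "macro_ddm.idp"

mesh Th = square(40, 40);
mesh ThGlobal = Th;
int[int] n2o;
macro ThN2O()n2o// EOM
buildDmesh(Th)

fespace Vh(Th, P1);
fespace VhGlobal(ThGlobal, P1);
int[int] R = restrict(Vh, VhGlobal, n2o);

Vh u = x + y;                                // local field to gather (placeholder)
real[int] num(VhGlobal.ndof);
real[int] den(VhGlobal.ndof);
num = 0;
den = 0;
for(int i = 0; i < R.n; ++i) {
    num[R[i]] += u[][i];                     // accumulate local values in global numbering
    den[R[i]] += 1.0;                        // count how many ranks touch each global DOF
}
real[int] numGlob(VhGlobal.ndof);
real[int] denGlob(VhGlobal.ndof);
mpiAllReduce(num, numGlob, mpiCommWorld, mpiSUM);
mpiAllReduce(den, denGlob, mpiCommWorld, mpiSUM);
VhGlobal uGlobal;
for(int i = 0; i < VhGlobal.ndof; ++i)
    uGlobal[][i] = numGlob[i] / max(denGlob[i], 1.0);   // average on shared DOFs
```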