Saving and loading global/distributed arrays

Baseflow.edp (6.7 KB) Eigenmodes.edp (8.5 KB)

I have tried to simplify them as much as I can. After running Baseflow.edp with one or more CPUs, baseflow.txt and baseflow.msh are created and used as inputs for Eigenmodes.edp. Running with one CPU gives the eigenvalue (0.0571415,0.738642), while with 2 CPUs the eigenvalue is (0.00873975,0.676901).
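
(For context, this kind of save/load workflow in FreeFEM looks roughly like the sketch below; the variable names and the [P2, P2, P1] space are illustrative placeholders, not the exact content of the attached files. Note that when running on several processes, each rank only holds its local, renumbered piece of a field, so writing the array directly produces data that depends on the number of processes unless it is first gathered into a single global array, which is what the end of this thread discusses.)

// In Baseflow.edp (sketch): save the mesh and the baseflow
mesh Th = square(40, 40);          // placeholder for the real geometry
fespace Vh(Th, [P2, P2, P1]);      // placeholder Taylor-Hood space
Vh [ubx, uby, ubp];                // placeholder baseflow field
savemesh(Th, "baseflow.msh");
{
    // with more than one process, either gather a global array first or let only rank 0 write
    ofstream file("baseflow.txt");
    file << ubx[];
}

// In Eigenmodes.edp (sketch): reload them
mesh ThB = readmesh("baseflow.msh");
fespace VhB(ThB, [P2, P2, P1]);
VhB [ubBx, ubBy, ubBp];
{
    ifstream file("baseflow.txt");
    file >> ubBx[];
}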

Thanks for your help.
Cyrille

Please make an earnest effort to simplify this. Eliminate any required user inputs so I can just run the program without any input. You may find your bug simply by cleaning up this code.

OK, thanks. It is difficult to know how much to simplify, but I will try my best, and as you said, I may find the issue along the way. Thanks for your support.

One other suggestion: I would encourage you to favor the implementation strategy of Moulin et al. over the StabFEM implementation strategy you are using. For a modest 2-D configuration, you should be able to neglect all the intricacies of the preconditioner and get very good performance with direct methods using MUMPS.
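
As an illustration of that last point, here is a hedged sketch (not code from this thread's attachments) of a generalized eigenvalue problem solved with the FreeFEM SLEPc interface, where the shift-and-invert step is factorized exactly by MUMPS; a plain Laplacian stands in for the linearized Navier-Stokes operator, and the mesh, finite element space, and target are placeholders:

load "SLEPc-complex"                       // complex PETSc/SLEPc interface
macro dimension()2// EOM
include "macro_ddm.idp"
macro def(i)i// EOM                        // scalar unknown
macro init(i)i// EOM

mesh Th = square(40, 40);                  // stand-in for the actual geometry
buildDmesh(Th)                             // Th becomes the local overlapping subdomain
fespace Vh(Th, P1);
Mat<complex> A;
Mat<complex> B;
createMat(Th, A, P1)
createMat(Th, B, P1)

// stiffness and mass matrices of a plain Laplacian, used here as a stand-in
// for the linearized operator and its mass matrix (the 1. + 0.*1i factor just
// makes the varf complex so it matches Mat<complex>)
varf vA(u, v) = int2d(Th)((1. + 0.*1i) * (dx(u)*dx(v) + dy(u)*dy(v))) + on(1, 2, 3, 4, u = 0);
varf vB(u, v) = int2d(Th)((1. + 0.*1i) * u * v);
A = vA(Vh, Vh);
B = vB(Vh, Vh);

int nev = 10;
complex[int] val(nev);                     // eigenvalues
Vh<complex>[int] vec(nev);                 // eigenvectors
// shift-and-invert with an exact LU factorization by MUMPS: no preconditioner tuning needed
string params = "-eps_type krylovschur -eps_nev " + nev + " -st_type sinvert "
              + "-eps_target 20 -st_pc_type lu -st_pc_factor_mat_solver_type mumps";
int k = EPSSolve(A, B, values = val, vectors = vec, sparams = params);
if(mpirank == 0)
    for(int i = 0; i < min(k, nev); ++i)
        cout << "lambda[" << i << "] = " << val[i] << endl;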

Thanks for the advice,

Although I followed the readme.md of Moulin, I could not run eigensolver.edp; I get an error in the decomposition. nonlinear-solver.edp runs properly. This is on Windows with the latest version, 4.6.

What is your error?

When I run on Ubuntu, I get:

 Error opening file State/sol_FlatPlate3D.mesh_0-4.dat
  current line = 51 mpirank 0 / 4
Exec error : Error opening file
   -- number :1
 Error opening file State/sol_FlatPlate3D.mesh_3-4.dat
  current line = 51 mpirank 3 / 4
 Error opening file State/sol_FlatPlate3D.mesh_1-4.dat
  current line = 51 mpirank 1 / 4
 Error opening file State/sol_FlatPlate3D.mesh_2-4.dat
  current line = 51 mpirank 2 / 4

but the error is caught and the solver proceeds. Is this what you see?

Since your problem is two-dimensional, you can also simply use the code from the FreeFEM repository, navier-stokes-2d-PETSc.edp and navier-stokes-2d-SLEPc-complex.edp. Then add T, and then add rho. Always try to increase complexity gradually; your “MWE” is far from being minimal, as Chris said, so it’s kind of tricky to help you out…

Also, feel free to send me the error you are getting with the code from Moulin et al.; I just used the solver yesterday without a problem (but I’m using the develop branch).

No, it looks like some memory error in MatCreate at line 83 of gcreate.c; see the attached log file: Eigensolver.log (6.7 KB).

Are you using the same number of processes for the nonlinear solver and the eigensolver?

Yes, I use 2 for both, to minimize the screen output from the error.

Do the other SLEPc complex examples from FreeFEM run correctly?

Yes, no issue; for example, navier-stokes-2d-PETSc.edp and navier-stokes-2d-SLEPc-complex.edp work perfectly.

OK, good. Then, as I said earlier, since your problem seems to be two dimensional (for now), I’d suggest you use these files as a starting point. The code from Moulin et al. is mostly useful if you look at very large-scale 3D problems, which is not the case right now. I’ll investigate this in the meantime, but I think this is fixed in the develop branch, so it will be good with FreeFEM 4.7, to be released soon.

Yes, I will do that. Still, my main objective, like Chris’s, is to have a single solution file and a single eigenvector file, instead of multiple files depending on the number of CPUs.

In the meantime I have tried exchange_changeNumbering.edp, but I am getting the following error on an assert:

44 : assert(changed.n == A.n No operator .n for type

Error line number 44, in file exchange_changeNumbering.edp, before token n

With one or two CPUs, I get the same error.

What is this file? Why such an assert?

The “exchange_changeNumbering.edp” file was sent by you on July 10.

You need the develop branch to use .n on A. If you don’t use this branch, just comment out that assert.

Dear @prj,

Sorry for posting something on a closed topic, but I think it’s relevant to this.

I use the old-school approach for partitioning a mesh with HPDDM, which indeed looks something like this, based on the FreeFEM examples:

real[int] D;                      // partition of unity
int[int][int] restriction;        // local-to-neighbors renumbering
int[int] n2o;                     // MeshNoOverlap -> Mesh (overlapping) element numbering
{
    fespace SpaceP0Temp(Mesh, P0);
    SpaceP0Temp part;
    if(mpirank == 0)
        partitionerSeq(part[], Mesh, mpisize);
    partitionerPar(part[], Mesh, mpiCommWorld, mpisize);   // last argument: number of subdomains
    buildWithPartitioning(Mesh, part[], 1, restriction, D, Pk, mpiCommWorld);
    part = part;                  // re-interpolate the partition on the local (overlapping) mesh
    MeshNoOverlap = trunc(Mesh, abs(part - mpirank) < 1e-1, new2old = n2o);
}

I mainly do this because I need the non-overlapping distributed mesh for some calculations. Now the question is how this restriction matrix (a 2-D array) relates to the R array obtained from the restrict function. I mean, how can I use this restriction (of type int[int][int]) to perform a manipulation like u[] = uGlobal[](R) (as R is obviously an array of type int[int])? I couldn’t figure it out by reading the source code of the macro_ddm.idp file. Is there any link to mpirank for the first dimension of this array?

I’d highly suggest that you transition to the “new school approach”, which is more flexible, efficient, and should be less intrusive in your code. You can then use this to get the local subdomain without overlap.
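
For concreteness, here is a minimal sketch of that new approach, assuming a recent FreeFEM with macro_ddm.idp (the ThN2O hook used to keep the local-to-global element numbering may require the develop branch at the time of writing), with a placeholder Laplace problem:

load "PETSc"
macro dimension()2// EOM
include "macro_ddm.idp"
macro def(i)i// EOM                 // scalar unknown
macro init(i)i// EOM
func Pk = P1;

mesh Th = square(40, 40);           // global mesh, built on every process
mesh ThGlobal = Th;                 // keep a copy of the global mesh for later transfers
int[int] n2o;
macro ThN2O()n2o// EOM              // asks buildDmesh to fill n2o (local element -> global element)
buildDmesh(Th)                      // Th is now the local overlapping subdomain

fespace Vh(Th, Pk);
Mat A;
createMat(Th, A, Pk)                // distributed matrix attached to the decomposition

varf vPb(u, v) = int2d(Th)(dx(u)*dx(v) + dy(u)*dy(v)) + int2d(Th)(v) + on(1, 2, 3, 4, u = 0);
A = vPb(Vh, Vh);
real[int] rhs = vPb(0, Vh);
set(A, sparams = "-ksp_type preonly -pc_type lu -pc_factor_mat_solver_type mumps");
Vh u;
u[] = A^-1 * rhs;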

Is there any link to mpirank for the first dimension of this array?

I’m sorry but I don’t understand this question.
Also, in your case, the n2o array + the restrict keyword will help you go from local with overlap to local without overlap and vice versa. You can follow the same procedure as in this thread (forget about the int[int][int] restriction, it has nothing to do with this), though there it’s used to go from global to local and vice versa.
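
In code, the n2o + restrict part looks roughly like this, reusing the names of your snippet above (Mesh, MeshNoOverlap, n2o, Pk are assumed to be the ones defined there):

fespace Vh(Mesh, Pk);                      // space on the local overlapping mesh
fespace VhNo(MeshNoOverlap, Pk);           // space on the local non-overlapping mesh
int[int] R = restrict(VhNo, Vh, n2o);      // for each VhNo DOF, its index in Vh
Vh u;                                      // some local field with overlap
VhNo uNo;
uNo[] = u[](R);                            // local with overlap -> local without overlap
u[](R) = uNo[];                            // and back (only the non-overlapping DOFs are touched)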

Thank you @prj for your quick reply. Indeed, I have no issue using n2o and restrict to go from the local overlapping mesh to the local non-overlapping one. My question was more about using the int[int][int] restriction to go from local to global.

So, I will transfer my code to the new approach and use the procedure discussed in this thread to go from local to global.
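
For reference, that local-to-global step can be sketched as follows (reusing ThGlobal, n2o, Vh, Pk, and u from the sketch earlier in this thread; the gather assumes the local values agree in the overlap, which they do after a consistent solve, and the file name is just an example):

fespace VhGlobal(ThGlobal, Pk);                      // space on the global mesh (same on every process)
int[int] R = restrict(Vh, VhGlobal, n2o);            // for each local DOF, its global index
VhGlobal uGlobal;
real[int] tmp(VhGlobal.ndof);
tmp = -1.0e+30;
tmp(R) = u[];                                        // scatter the local values to their global positions
mpiAllReduce(tmp, uGlobal[], mpiCommWorld, mpiMAX);  // values agree in the overlap, so a max recovers the field
if(mpirank == 0) {
    ofstream file("baseflow.txt");
    file << uGlobal[];                               // a single file, independent of the number of processes
}
// the reverse direction (global -> local) is simply u[] = uGlobal[](R);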