Saving and loading global/distributed arrays

Yes, I agree. This is not as elegant as I thought it might be. :neutral_face: Oh well! Hopefully it will be useful to some.

I made huge progress thanks to your help.
But I don’t get the same result with 2 cpus compared to one cpu.
I can see that the problem is at the ghost cells.
I am getting the following message:
"Warning: -- Your set of boundary condition is incompatible with the mesh label."
This message does not appear with only one cpu.
Any hints/tips?

"

The warning is to be expected, because “artificial” boundaries are created at the interfaces between subdomains by the domain decomposition. You can suppress it using -v 0 on the command line. If you have problems at the ghost cells, you are probably missing a synchronization of the values, e.g., exchange = true in the changeNumbering call. But this is only one of many possible explanations. If you can, it would be best to extract the part of the code that does not give satisfying results into a MWE, and I’ll help you debug.
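
For what it’s worth, here is a minimal sketch of what I mean, on a toy Laplacian rather than your code; the names are illustrative, and the exact macro/function names (buildDmesh/createMat, changeNumbering vs. ChangeNumbering) may differ slightly depending on your FreeFEM version:

```
load "PETSc"
macro dimension()2// EOM
include "macro_ddm.idp"

// Toy parallel Laplacian: solve, then go back and forth between the FreeFEM and
// PETSc numberings, synchronizing the ghost (duplicated) unknowns on the way back.
mesh Th = square(20, 20);
buildDmesh(Th)
Mat A;
createMat(Th, A, P1)
fespace Vh(Th, P1);
varf vLap(u, v) = int2d(Th)(dx(u)*dx(v) + dy(u)*dy(v)) + int2d(Th)(v)
                + on(1, 2, 3, 4, u = 0);
A = vLap(Vh, Vh, tgv = -1);
real[int] rhs = vLap(0, Vh, tgv = -1);
Vh u;
u[] = A^-1 * rhs;

real[int] uPETSc;                                      // PETSc (non-overlapping) numbering
changeNumbering(A, u[], uPETSc);                       // FreeFEM -> PETSc numbering
changeNumbering(A, u[], uPETSc, inverse = true, exchange = true);
// inverse = true goes back to the FreeFEM numbering, and exchange = true updates
// the ghost values shared between neighboring subdomains.
```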

Baseflow.edp (6.7 KB) Eigenmodes.edp (8.5 KB)

I have tried to simplify them as much as I can. After running baseflow.edp with one or multiple cpus, baseflow.txt and baseflow.msh are created and used as inputs for eigenmodes.edp. Running with one cpu gives the eigenvalue (0.0571415,0.738642), and with 2 cpus the eigenvalue is (0.00873975,0.676901).

Thanks for your help.
Cyrille

Please make an earnest effort to simplify this. Eliminate any required user inputs so I can just run the program without any input. You may find your bug simply by cleaning up this code.

Ok, thanks. It is difficult to know how much to simplify, but I will try my best, and as you said, I may find the issue along the way. Thanks for your support.

One other suggestion: I would encourage you to favor the implementation strategy of Moulin et al. over the StabFEM implementation strategy you are using. For a modest 2-D configuration, you can skip all the intricacies of the preconditioner and get very good performance with a direct method using MUMPS.
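
To make this concrete, here is a minimal, self-contained sketch on a toy generalized eigenproblem, not your Navier-Stokes operator; the matrix and array names are placeholders, and the macro names may differ slightly between FreeFEM versions. The point is only the sparams string: shift-and-invert with an exact MUMPS LU factorization, which carries over to your Jacobian and mass matrices:

```
load "SLEPc-complex"
macro dimension()2// EOM
include "macro_ddm.idp"

// Toy generalized eigenproblem A x = lambda B x (Neumann Laplacian / mass matrix),
// solved with shift-and-invert and an exact MUMPS LU factorization.
mesh Th = square(40, 40);
buildDmesh(Th)
Mat<complex> A, B;
createMat(Th, A, P1)
createMat(Th, B, P1)
fespace Vh(Th, P1);
varf vA(u, v) = int2d(Th)(dx(u)*dx(v) + dy(u)*dy(v));  // stiffness
varf vB(u, v) = int2d(Th)(u*v);                        // mass
A = vA(Vh, Vh);
B = vB(Vh, Vh);

int nev = 5;                      // must match -eps_nev below
complex[int] val(nev);            // eigenvalues
Vh<complex>[int] vec(nev);        // eigenvectors
int k = EPSSolve(A, B, values = val, vectors = vec,
                 sparams = "-eps_type krylovschur -eps_nev 5 -eps_target 10 "
                         + "-st_type sinvert -st_ksp_type preonly "
                         + "-st_pc_type lu -st_pc_factor_mat_solver_type mumps");
if (mpirank == 0)
    for (int i = 0; i < min(k, nev); ++i)
        cout << "lambda[" << i << "] = " << val[i] << endl;
```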

Thanks for the advice.

However, even following the readme.md of Moulin et al., I could not run eigensolver.edp: I get an error in the decomposition. nonlinear-solver.edp runs properly. This is on a Windows system with the latest version, 4.6.

What is your error?

When I run on Ubuntu, I get:

 Error opening file State/sol_FlatPlate3D.mesh_0-4.dat
  current line = 51 mpirank 0 / 4
Exec error : Error opening file
   -- number :1
 Error opening file State/sol_FlatPlate3D.mesh_3-4.dat
  current line = 51 mpirank 3 / 4
 Error opening file State/sol_FlatPlate3D.mesh_1-4.dat
  current line = 51 mpirank 1 / 4
 Error opening file State/sol_FlatPlate3D.mesh_2-4.dat
  current line = 51 mpirank 2 / 4

but the error is caught and the solver proceeds. Is this what you see?

Since your problem is two dimensional, you can also simply use the code from the FreeFEM repository, navier-stokes-2d-PETSc.edp and navier-stokes-2d-SLEPc-complex.edp. Then add T, and then add rho. Always try to increase complexity gradually; as Chris said, your “MWE” is far from minimal, so it’s kind of tricky to help you out…

Also, feel free to send me the error you are getting with the code from Moulin et al.; I used the solver just yesterday without a problem (but I’m using the develop branch).

No. Eigensolver.log (6.7 KB) It looks like some memory error in MatCreate, at line 83 of gcreate.c; see the attached log file.

Are you using the same number of processes for the nonlinear solver and the eigensolver?

Yes, I use 2 to minimize the screen output from the error.

Do the other SLEPc complex examples from FreeFEM run correctly?

Yes, no issue; for example, navier-stokes-2d-PETSc.edp and navier-stokes-2d-SLEPc-complex.edp work perfectly.

OK, good. Then, as I said earlier, since your problem seems to be two dimensional (for now), I’d suggest you use these files as a starting point. The code from Moulin et al. is mostly useful if you look at very large-scale 3D problems, which is not the case right now. I’ll investigate this in the meantime, but I think this is fixed in the develop branch, so it will be good with FreeFEM 4.7, to be released soon.

Yes, I will do that. Still, like Chris, my main objective is to have only one solution file and one eigenvector file, instead of one per cpu.

In the meantime I have tried exchange_changeNumbering.edp, but I am getting the following assertion error:

44 : assert(changed.n == A.n No operator .n for type

Error line number 44, in file exchange_changeNumbering.edp, before token n

With one or 2 cpus, same error.

What is this file? Why such an assert?

The “exchange_changeNumbering.edp” file was sent by you on July 10.

You need the develop branch to use .n on a Mat like A. If you are not using this branch, just comment out that assert.
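
In other words (illustrative, assuming the assert looks like what the error message shows):

```
// exchange_changeNumbering.edp, around line 44: A.n is only available on the
// develop branch, so with FreeFEM <= 4.6 simply disable the check:
// assert(changed.n == A.n);
```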