Saving and loading global/distributed arrays

Hello FreeFEM developers!

Is there an easy way to gather/scatter distributed arrays on local partitions to/from a global array for file I/O under the HPDDM framework developed by @prj? In the examples I’ve seen (e.g. navier-stokes-2d-PETSc.edp), the file I/O is handled in an efficient distributed fashion where the local solution on each mpirank is saved/loaded in a separate file. How could I perform file I/O via a single file on a single process?

In essence, I’d like to gather a distributed solution into a single array on mpirank 0 and save it as a single file for post-processing purposes. Then, I could similarly load that single solution file on mpirank 0 and scatter it to the various other processors for other computations.

I imagine that macro_ddm.idp already has a convenient way to do this, but I don’t know of any examples or documentation which could give some clues. Any suggestions would be helpful!

Thank you,
Chris

Sure thing, see this followed by that, or this followed by that. Both examples do exactly what you are asking for: from an initial global mesh, distribute it, keep the information to go from global to local, and once the parallel payload is over, go back from local to global and gather the solution (either on process 0 or everywhere).


What is the purpose of this? (pasted below)

real[int] sol;
changeNumbering(A, solx[], sol);
changeNumbering(A, solx[], sol, inverse = true);

As I see it, this

  1. Copies the FE array solx[] in FreeFEM numbering to sol in PETSc numbering.
  2. Copies sol back into FreeFEM numbering.

I feel as if I’m missing a subtle trick here…

You have to always keep in mind two things when using PETSc in FreeFEM:

  1. FreeFEM numbering has ghost unknowns (overlapping decomposition)
  2. PETSc numbering has no ghost unknowns (each unknown in PETSc numbering is assigned to a single process)

So what this little piece of code does is basically:

  1. go from FreeFEM to PETSc, which is very cheap: it basically creates a vector sol of dimension lower than solx[] and only keeps the PETSc unknowns assigned to the local process
  2. go from PETSc to FreeFEM. The trick here is that the additional parameter exchange is not used (it is false by default), so the values of the ghost unknowns are not updated and are set to 0

Why is all this needed? It's just a very fancy way of making sure duplicated unknowns (because of ghost elements) are not summed multiple times in the follow-up MPI reduction.
You can also simply remove those two changeNumbering calls and do something like solx[] .*= A.D;, which will scale the solution by the partition of unity. Duplicated unknowns (because of ghost elements) will then be summed but weighted, so you'll still get a solution consistent with a run on 1 process.
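
To make this concrete, here is a minimal sketch of the two options (assuming a Mat A and an FE function solx already set up with createMat, as above):

real[int] sol;
// Option 1: round-trip through PETSc numbering; ghost values end up set to 0
changeNumbering(A, solx[], sol);                 // FreeFEM -> PETSc (ghost unknowns dropped)
changeNumbering(A, solx[], sol, inverse = true); // PETSc -> FreeFEM, no exchange, ghosts left at 0
// Option 2: stay in FreeFEM numbering and weight by the partition of unity instead
// solx[] .*= A.D;
// Either way, duplicated unknowns are no longer counted multiple times in a follow-up mpiReduce/mpiAllReduce with mpiSUM.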

See this tutorial and the attached example for more.
exchange_changeNumbering.edp (1.9 KB)


Very enlightening. Thank you for the detailed explanation! :relaxed:

I am having the same issue. From the reply, I understand that the information from global to local is given by "see this followed by that". In that example, the matrix A is needed in "changeNumbering(A, solx[], sol);", so it requires createMat, doesn't it? Am I right that this should include
{
macro def(i)[i, i#y, i#z]// EOM // vector field definition
macro init(i)[i, i, i]// EOM // vector field initialization
createMat(Th, A, Pk)
}

Thanks for your help.
Cyrille

That, plus the definition of the macro YourMeshName#N2O, so that you can later on use int[int] rest = restrict(VhLocal, VhGlobal, YourMeshName#N2O); to get the mapping from local to global.
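
Putting those pieces together, a minimal sketch of that setup could look like the following (Th, Pk, and the FE spaces are placeholders; Th is the global mesh before buildDmesh is called):

mesh ThGlobal = Th;             // keep a copy of the original global mesh
int[int] n2o;                   // new-to-old element numbering, filled by buildDmesh
macro ThN2O()n2o// EOM          // must be named <MeshName>N2O, here ThN2O for the mesh Th
buildDmesh(Th)                  // Th is now the local (distributed) mesh
Mat A;
createMat(Th, A, Pk)
fespace VhLocal(Th, Pk);        // FE space on the local mesh
fespace VhGlobal(ThGlobal, Pk); // FE space on the global mesh
int[int] rest = restrict(VhLocal, VhGlobal, n2o); // local-to-global index map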

Dear Chris,

Are you willing to share your code, especially the global-to-local part? Like you, I am trying to load a solution (the mean flow from the Navier-Stokes equations) on mpirank 0 and scatter it to all processors for eigenvalue analysis. There are some concepts that I can't quite understand.
Thanks
Cyrille

@cyrillebreard here is how I do it:

load "PETSc"
macro dimension()2//
include "macro_ddm.idp" 
string workdir = getARGV("-dir", "");
string meshfile = getARGV("-msh", "");
string filein = getARGV("-fi", "");
string fileout = getARGV("-fo", "");
func Pk = []; // define FE space
mesh Th = readmesh(workdir + meshfile + ".msh");
fespace Vh(Th, Pk);
Vh def(u); // define u globally
u[] = loadfile(workdir, filein, meshfile); // function to load file
Mat A;
int[int] n2o;
macro ThN2O()n2o// EOM
buildDmesh(Th);
createMat(Th, A, Pk);
def(u) = def(u); // define u on local partitions

/*Main part of code
...
*/

real[int] q;
changeNumbering(A, u[], q);
if(fileout !=""){
  changeNumbering(A, u[], q, inverse = true);
  mesh Thg = readmesh(workdir + meshfile + ".msh");
  fespace Vhg(Thg, Pk);
  Vhg def(ug), def(uo);
  int[int] rest = restrict(Vh, Vhg, n2o);
  for[i, v : rest] ug[][v] = u[][i];
  mpiReduce(ug[], uo[], processor(0, mpiCommWorld), mpiSUM);
  real[int] qo = uo[];
  if (mpirank == 0) printfile(workdir, fileout, qo); // function to write file
}

The load and print functions are basically just ifstream and ofstream. Note that I made some simplifications from my actual code to (hopefully) make this easier to understand, but this may have also led to a few typos or inconsistencies.
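
For reference, here is a minimal sketch of what such helpers might boil down to, using FreeFEM's stream operators on arrays (the .sol extension and single-file layout are assumptions, not necessarily Chris's actual format):

// writing, on process 0 only, once uo[] holds the reduced global solution:
if (mpirank == 0) {
  ofstream file(workdir + fileout + ".sol");
  file << uo[]; // writes the array length followed by its entries
}
// reading the global file back into the global FE function u, before buildDmesh/createMat
// (every process reads it, as in the snippet above):
{
  ifstream file(workdir + filein + ".sol");
  file >> u[]; // u[] is already sized by the global fespace
}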

I think the important thing here is knowing when you are working with local or global variables. Once you call buildDmesh and createMat, lots of things change! I spent a good chunk of time figuring out what the macro_ddm.idp file actually does before I got everything to work.

By the way @prj, are there any efforts underway to expand the documentation for the PETSc interface in FreeFEM, or are things still evolving too rapidly to make that effort worthwhile? Obviously I’m still learning, but I’d be happy to help however I could.

I’ve added some documentation in my tutorial w.r.t.:

  1. restrict and new2old functions, see this slide and that example
  2. N2O for Dmesh, see this slide and that example

@cyrillebreard, what you want to do is exactly, I think, what is done in this script which computes a baseflow, followed by that script which does linear stability analysis. 3D scripts are available there as well.

@cmd, the interface is almost complete now, so it's not evolving very fast, just new functionalities here and there, plus bug fixes. What more would you like to see in the tutorial? Would you like to contribute? It's missing parallel mesh adaptation and parallel interpolation. I'll probably add these sometime soon.

@prj I will message you separately to avoid making this a very long thread :slight_smile:

Dear Chris,

Thanks for sharing the code, and especially this line:
def(u) = def(u); // define u on local partitions

I would never have figured out that this is the way to "transfer" the data from global to local.

Thanks again.

@prj, like Chris, I want to read one "sol" file and not several files as in navier-stokes-2d-SLEPc-complex.edp.

Thanks for your prompt reply.
Cyrille


def(u) = def(u); // define u on local partitions

This is actually not the smartest thing to do since @cmd is using macro ThN2O()n2o// EOM. Indeed, this line relies on the interpolator, which may be very costly, and even inaccurate for very complex meshes. It is much faster and safer to use the n2o array plus the restrict keyword. See the penultimate line of this example.

u[] = uGlobal[](R); // notice the [] here, which means there is indeed no interpolation

No interpolation, just integer manipulation.
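
Adapted to the variable names in Chris's snippet (assuming the global function uGlobal and its space Vhg on the global mesh are kept around from before buildDmesh), the global-to-local transfer becomes:

int[int] R = restrict(Vh, Vhg, n2o); // local-to-global DOF map built from the ThN2O information
u[] = uGlobal[](R);                  // pure index gathering into the local unknowns, no interpolation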


@prj Thanks for the tip! This is indeed a better solution.

@prj would it be worth adding something similar to the ThName#N2O macro to automatically get this restriction information using the DDM macros? I'm thinking of something like:

int[int] n2o, restriction;
macro ThN2O()n2o// EOM
macro ThRestriction()restriction// EOM
buildDmesh(Th);
createMat(Th, A, Pk);

where createMat will automatically populate the restriction array if the ThRestriction macro exists. This might be a little bit misleading since the restriction is linked to the FE space on the mesh and not the mesh itself, but I think this would be a nice feature if it doesn’t exist already. :slight_smile:

It doesn’t exist, and it’s not that trivial to add, because you also need to pass the original global mesh.
Maybe something like

int[int] n2o, restriction;
mesh backup = Th;
macro ThRestriction()restriction// EOM
macro ThOriginal()backup// EOM 
buildDmesh(Th)
createMat(Th, A, Pk)

But this is a bit of a mouthful, IMHO…

Anyway, here you go f36059d.
With the following diff for example13.edp.

diff --git a/section_8/example13.edp b/section_8/example13.edp
index cda1adc..63fc255 100644
--- a/section_8/example13.edp
+++ b/section_8/example13.edp
@@ -19,3 +19,3 @@ broadcast(processor(0), ThGlobal);

-int[int] n2o;
+int[int] n2o, R;
 mesh Th = ThGlobal;
@@ -27,5 +27,8 @@ fespace Vh(Th, Pk); // full global mesh
 fespace VhGlobal(ThGlobal, Pk); // distributed local mesh
+macro ThOriginal()ThGlobal// EOM
+macro ThRestriction()R// EOM
+Mat A;
+createMat(Th, A, Pk)

 Vh u = x^2 + y^2;
-int[int] R = restrict(Vh, VhGlobal, n2o);
 VhGlobal uGlobal, uReduce;
@@ -42,4 +45,2 @@ plot(uReduce, wait = 1, cmm = "u on the global mesh (weighted before summed)");

-Mat A;
-createMat(Th, A, Pk)
 u = x^2 + y^2;

Yes, I agree. This is not as elegant as I thought it might be. :neutral_face: Oh well! Hopefully it will be useful to some.

I made huge progress thanks to your help.
But I don't get the same result with 2 CPUs compared to one CPU.
I can see that the problem is at the ghost cells.
I am getting the following message:
"Warning: -- Your set of boundary condition is incompatible with the mesh label."
This message does not appear with only one CPU.
Any hints/tips?

"

The warning is to be expected, because artificial boundaries are created at the interfaces between subdomains by the domain decomposition. You can suppress it using -v 0 on the command line. If you have problems at the ghost cells, you are probably missing a synchronization of the values, e.g., exchange = true during the changeNumbering. But this is only one of many possible explanations. If you can, it would be best to extract the part of the code which does not give satisfying results into a MWE, and I'll help you debug.
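
For instance, if the local solution comes back from a PETSc-numbered vector before being used again on the local mesh, the ghost values need to be synchronized; a minimal sketch (assuming a Mat A and a local FE function u, as in the snippets above):

real[int] q;
changeNumbering(A, u[], q);                                  // FreeFEM -> PETSc
changeNumbering(A, u[], q, inverse = true, exchange = true); // PETSc -> FreeFEM, ghost values updated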