Full parallel adaptmesh

Hi!

From several examples, I understood how to perform mesh adaptation. In particular, in the parallel examples, I noticed that the following steps are taken:

  1. Compute the global solution from the solutions on the local meshes (with solution .*= matrice.D, restrict, etc.).
  2. Perform adaptmesh on the global solution on proc 0 only, then broadcast the new mesh to all other procs.
  3. Interpolate the global solution on the new mesh.
  4. Re-create the partitioning.
  5. Compute local solutions from the global solution.

I wonder if it would be possible to do something similar fully in parallel, i.e., have all procs run adaptmesh instead of only proc 0. I understand that this is not trivial, as it means that, after partitioning, the overlapping elements would have to be identical across processes.
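For reference, steps 1–5 above can be sketched roughly as follows. This is a hedged sketch only: the names `Thg`, `Vhg`, `rest`, `Pk`, and the `err` value are assumptions patterned on the usual PETSc examples, not code from this thread.

```freefem
// Hypothetical sketch of the rank-0 adaptation workflow; adapt names to your script
real[int] uPOU = u[] .* A.D;                     // 1. weight local dofs by the partition of unity
Vhg uR, uG;                                      //    Vhg: fespace on the global mesh Thg
uR[](rest) = uPOU;                               //    scatter local dofs into the global numbering
mpiAllReduce(uR[], uG[], mpiCommWorld, mpiSUM);  //    sum contributions -> global solution
if(mpirank == 0)
    Thg = adaptmesh(Thg, uG, err = 1.0e-2);      // 2. adapt on proc 0 only
broadcast(processor(0), Thg);                    //    then broadcast the new mesh
uG = uG;                                         // 3. interpolate uG onto the new mesh
Th = Thg;                                        // 4. re-create the partitioning
Mat Adapt;
createMat(Th, Adapt, Pk);
A = Adapt;
u = uG;                                          // 5. local solution from the global one
```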

Best,

Lucas

Hello Lucas,
You explicitly mention adaptmesh, which only works in 2D, so I’m guessing you are currently focusing on a 2D problem. Unfortunately:

  1. ParMmg only works in 3D;
  2. I’m not sure adaptmesh is 100% deterministic, and it is certainly sequential.

You can try to do an mpiAllReduce instead of an mpiReduce and call adaptmesh on all processes, but please be careful and make sure that the output mesh is the same on all processes, e.g., by checking ThOutput.nt and ThOutput.nv.
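If you try that route, the consistency check could look something like this (a sketch under assumptions: `Thg` is the redundant global mesh, `uG` the reduced global solution, and the `adaptmesh` arguments are placeholders):

```freefem
mesh ThOutput = adaptmesh(Thg, uG, err = 1.0e-2); // every rank adapts redundantly
// the adapted mesh must be identical on all ranks; comparing element and
// vertex counts is a cheap (necessary, not sufficient) sanity check
real[int] loc = [ThOutput.nt, ThOutput.nv];
real[int] glob(2);
mpiAllReduce(loc, glob, mpiCommWorld, mpiMAX);
assert(loc[0] == glob[0] && loc[1] == glob[1]);
```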
You can also perform step 5 in parallel, using the transfer macro.
All in all, the adaptation step is supposed to be quite cheap in 2D; it is a completely different story in 3D. Is that not the case in your application?
In 3D, you can do everything in parallel, see, e.g., laplace-adapt-dist-3d-PETSc.edp. Only the initial mesh is created redundantly on all processes, but this can be bypassed, see, e.g., reconstructDmesh.edp.

Thank you Pierre for this clear answer.

Indeed, I am working on 2D cases; I guess for now I can live with sequential mesh adaptation ^^

Hi Pierre, would it be possible to update the example newton-adaptmesh-2d-PETSc.edp to use the createMat() and buildDmesh() macros (perhaps also showing the use of transfer()?) instead of using the older buildMat() macro?

I am facing a problem with a parallel code that iteratively calls adaptmesh, and I think I am messing up the PETSc numbering somewhere.

Would the example laplace-adapt-3d-PETSc.edp (in FreeFem/FreeFem-sources on GitHub) help you in any way?

Thank you for pointing that one out. I will play around with this to see if I can figure out my issue.

This did help. It turns out my issue was that I was doing this:

real[int] q;
changeNumbering(A, u[], q, inverse = true, exchange = false);
Vhg uG, uR;
uR = u; // interpolation from Vh to Vhg — this was the bug
mpiAllReduce(uR[], uG[], mpiCommWorld, mpiSUM);

instead of this:

real[int] q;
changeNumbering(A, u[], q, inverse = true, exchange = false);
Vhg uG, uR;
for[i, v : restu] uR[][v] = u[][i]; // copy dofs through the restriction array, no interpolation
mpiAllReduce(uR[], uG[], mpiCommWorld, mpiSUM);

Thanks for the guidance!

Best to avoid the interpolator whenever you can 🙂

On that note, is there a better way to update the fespace besides using interpolation after mesh adaptation?

Right now, I do this:

broadcast(processor(0), Thg);
uG = uG; // interpolate the old solution onto the new global mesh
{
  Th = Thg;
  Mat Adapt;
  createMatu(Th, Adapt, Pk);
  A = Adapt;
}
u = uG; // interpolation from the global space Vhg to the local space Vh

but it would be nice to replace the last line with something like:

{
  int[int] temprest;
  temprest = restrict(Vh, Vhg, n2o);
  rest.resize(temprest.n);
  rest = temprest;
}
u[] = uG[](rest);

This gets the job done. I’m not sure this is what you are looking for?

diff --git a/examples/hpddm/laplace-adapt-3d-PETSc.edp b/examples/hpddm/laplace-adapt-3d-PETSc.edp
index 7f7bb3d6..72c7da7d 100644
--- a/examples/hpddm/laplace-adapt-3d-PETSc.edp
+++ b/examples/hpddm/laplace-adapt-3d-PETSc.edp
@@ -57,6 +57,15 @@ for(int i = 0; i < iMax; ++i) {
         createMat(Th3, Adapt, P1);
         A = Adapt;
         u = 0.0;
+        hReduced = hReduced; // this may be costly, interpolation on the global adapted mesh
+        {
+          int[int] temprest;
+          temprest = restrict(Vh, VhBackup, n2o);
+          rest.resize(temprest.n);
+          rest = temprest;
+        }
+        u[] = hReduced[](rest);
+        plot(u);
         err *= 0.5;
     }
 }

This code is still calling the interpolator on the line u = 0.0; though, correct? Is there a way to bypass that?

Since 0.0 is an analytical function, this is not calling the interpolator, just evaluating the function.
