From several examples, I have understood how to perform mesh adaptation. In particular, in the parallel examples, I noticed that the following steps are taken:
1. Compute the global solution from the solutions on the local meshes (with solution .*= matrice.D, restrict, etc.).
2. Perform adaptmesh on the global solution, on proc 0 only, and then broadcast the new mesh to all other procs.
3. Interpolate the global solution on the new mesh.
4. Re-create the partitioning.
5. Compute the local solutions from the global solution.
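For reference, the steps above can be sketched as follows in FreeFEM. This is a minimal illustration, not taken from a specific example: it assumes the global field u already holds the reduced solution on every rank (e.g., obtained with an mpiAllReduce in step 1), and the mesh sizes and error tolerance are arbitrary.

```freefem
load "PETSc"
macro dimension()2// EOM
include "macro_ddm.idp"

mesh Th = square(40, 40);                // global mesh, kept redundantly on all ranks
fespace Vh(Th, P1);
Vh u = sin(pi*x)*sin(pi*y);              // stand-in for the reduced global solution

if(mpirank == 0)
    Th = adaptmesh(Th, u, err = 1.0e-2); // step 2: adapt on proc 0 only...
broadcast(processor(0), Th);             // ...then broadcast the new mesh
u = u;                                   // step 3: interpolate u on the new mesh

mesh ThLocal = Th;
buildDmesh(ThLocal)                      // step 4: re-create the partitioning
fespace VhLocal(ThLocal, P1);
VhLocal uLocal = u;                      // step 5: local solution from the global one
```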
I wonder if it would be possible to do something similar fully in parallel, i.e., have all procs run adaptmesh, not only proc 0. I understand that this is not trivial, as it means that after partitioning we would need identical overlapping elements.
Hello Lucas,
You are explicitly mentioning adaptmesh, which only works in 2D, so I'm guessing that you are currently focusing on a 2D problem. Unfortunately:
- ParMmg only works in 3D;
- I'm not sure adaptmesh is 100% deterministic, and it is certainly sequential.
You can try to do an mpiAllReduce instead of an mpiReduce, and call adaptmesh on all processes, but please be careful and make sure that the output mesh is the same on all processes, e.g., by checking ThOutput.nt and ThOutput.nv.
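A hedged sketch of that consistency check (the field and tolerance are illustrative stand-ins); note that matching nt and nv counts are a necessary but not a sufficient condition, since two meshes could in principle have the same sizes but different connectivity:

```freefem
mesh Th = square(40, 40);
fespace Vh(Th, P1);
Vh u = sin(pi*x)*sin(pi*y);              // stand-in for the mpiAllReduce'd solution

mesh ThOutput = adaptmesh(Th, u, err = 1.0e-2); // runs on ALL ranks
int nt = ThOutput.nt, nv = ThOutput.nv;
int ntMin, ntMax, nvMin, nvMax;
mpiAllReduce(nt, ntMin, mpiCommWorld, mpiMIN);
mpiAllReduce(nt, ntMax, mpiCommWorld, mpiMAX);
mpiAllReduce(nv, nvMin, mpiCommWorld, mpiMIN);
mpiAllReduce(nv, nvMax, mpiCommWorld, mpiMAX);
// abort if any rank produced a different element or vertex count
assert(ntMin == ntMax && nvMin == nvMax);
```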
You can also perform step 5 in parallel, using the transfer macro.
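For illustration, here is a sketch of the transfer macro from macro_ddm.idp between two distributed meshes. The six-argument form and the names below are my assumption; please check the transfer examples shipped with FreeFEM for the exact calling convention.

```freefem
load "PETSc"
macro dimension()2// EOM
include "macro_ddm.idp"

mesh ThOld = square(20, 20);   // stand-in for the pre-adaptation mesh
mesh ThNew = square(40, 40);   // stand-in for the adapted mesh
buildDmesh(ThOld)              // both meshes are partitioned independently
buildDmesh(ThNew)
fespace VhOld(ThOld, P1);
fespace VhNew(ThNew, P1);
VhOld uOld = x*y;              // field living on the old partitioning
VhNew uNew;
transfer(ThOld, P1, uOld, ThNew, P1, uNew) // parallel interpolation ThOld -> ThNew
```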
All in all, the adaptation step is supposed to be quite cheap in 2D; it is a completely different story in 3D. Is that not the case in your application?
In 3D, you can do everything in parallel, see, e.g., laplace-adapt-dist-3d-PETSc.edp. Only the initial mesh is created redundantly on all processes, but this can be bypassed, see, e.g., reconstructDmesh.edp.
Hi Pierre, would it be possible to update the example newton-adaptmesh-2d-PETSc.edp to use the createMat() and buildDmesh() macros (perhaps also showing the use of transfer()?) instead of using the older buildMat() macro?
I am facing a problem with a parallel code that iteratively calls adaptmesh, and I think I am messing up the PETSc numbering somewhere.