Reassembling distributed mesh

Dear FreeFem users,

I would like to perform mesh adaptation in parallel, then reassemble the distributed mesh into a global one. Is there a convenient high-level macro or an example which demonstrates this?

Thanks
Andras

There are multiple examples: [1,2,3] are just some of them.

Thanks, but I already found all those examples. Example 2 does sequential adaptation on a global mesh (but I want to perform parallel adaptation); example 3 does parallel adaptation, but there the global mesh is never recovered. I also tried to understand example 1, and I thought the global mesh could be recovered with the macro gatherDmesh(Th, comm, ThGather); however, I did not manage to do it. I also find that example hard to understand: e.g., the int/int division in line 9 looks weird, and it does not work with 5 processes. I also tried to export the gathered mesh to ParaView with the communicator = mpiCommSelf option, but I only got a part of the mesh.
I think I found the solution in this video: I can just call parmmg3d on the global mesh. However, I would greatly appreciate some clarification on both the redistributeDmesh example and on whether my solution is correct or not.
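For reference, here is roughly what I have in mind (only a sketch; I am assuming the mshmet and parmmg plugins, and the metric keyword of parmmg3d is my guess based on the mmg3d interface):

load "msh3"
load "mshmet"
load "parmmg"

mesh3 Th3 = cube(10, 10, 10);        // stand-in for the reassembled global mesh
fespace Vh3(Th3, P1);
Vh3 u = sin(pi*x)*cos(pi*y)*z;       // stand-in for the solution driving the adaptation
Vh3 met;
met[] = mshmet(Th3, u);              // size metric built from u
Th3 = parmmg3d(Th3, metric = met[]); // remesh the global mesh in parallel with ParMmg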

it does not work with 5 processes

I don’t know how you tested that, but with the develop branch, it’s working as expected.

$ ff-mpirun -n 5 redistributeDmesh.edp -v 0 -wg && echo $?
0

gatherDmesh will do what you want, i.e., gather a distributed mesh into a subcommunicator of your original communicator (in your case, I guess you want a single process in the output communicator, so -sizeComm 1 in redistributeDmesh.edp).
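For the record, a minimal sketch of the gather-on-one-process case (untested here; I am assuming the standard macro_ddm.idp workflow and FreeFem's mpiGroup/mpiComm constructors to build the one-process subcommunicator):

load "PETSc"
load "msh3"
macro dimension()3// EOM
include "macro_ddm.idp"

mesh3 Th = cube(10, 10, 10); // initial global mesh, replicated on every process
buildDmesh(Th);              // Th becomes the local part of the distributed mesh

// subcommunicator containing only process 0, i.e., the -sizeComm 1 case
int[int] procs = [0];
mpiGroup group(procs);
mpiComm comm(mpiCommWorld, group);

mesh3 ThGather;
gatherDmesh(Th, comm, ThGather);
if(mpirank == 0)
    savemesh(ThGather, "global.mesh"); // only processes in comm hold the gathered mesh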

BTW, why do you need to reassemble the distributed mesh in the first place if you are able to do mesh adaptation in parallel?

Thanks for the quick reply.
Thanks for the info, the redistributeDmesh.edp example makes much more sense now. I use the precompiled 4.9 version, which is likely why the example did not work for me. It looks like I cannot keep postponing compiling the development branch any longer :smiley: I will try this option out, thank you.

I like to keep a global mesh since, with a global .vtu output, I do not need to post-process the internal edges with the threshold filter in ParaView. Furthermore, also in ParaView, the distributed mesh did not allow me to use particle tracers (although I tried that a long time ago, and there might be a ParaView method to glue the mesh together). Moreover, keeping the global mesh allows me to integrate on it. But the more I think about it, the more I agree that the code should be parallelized as much as possible (e.g., I just realized that it is possible to compute integrals over the whole domain in parallel using the Mat(vec, vec) operation with an additional multiplication by the mass matrix, while also watching out for the partition of unity), as sketched below.
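For instance, this is the kind of computation I mean (just a sketch, untested, assuming the usual buildDmesh/createMat setup from macro_ddm.idp):

load "PETSc"
macro dimension()2// EOM
include "macro_ddm.idp"

mesh Th = square(40, 40);
buildDmesh(Th);
fespace Vh(Th, P1);
Mat M;
createMat(Th, M, P1);

varf vMass(u, v) = int2d(Th)(u * v);
M = vMass(Vh, Vh);      // distributed mass matrix

Vh u = x^2 + y^2;       // some field to integrate
Vh one = 1.;
// u' * M * 1 should equal the integral of u over the whole domain,
// with the overlap/partition of unity handled by the Mat(vec, vec) product
real I = M(u[], one[]);
if(mpirank == 0)
    cout << "integral = " << I << endl;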

I see. Indeed, for postprocessing purposes, a centralized mesh can be beneficial. Just be aware that it will likely ruin the scalability of your application. Any sort of integration or scalar product can be computed in a distributed fashion, either through Mat(vec, vec) as you say, or using a nonoverlapping decomposition; see, for example, FreeFem-sources/minimal-surface-Tao-2d-PETSc.edp at develop · FreeFem/FreeFem-sources · GitHub. Let me know if you need additional help.

@prj FYI, I tried to rerun the redistributeDmesh.edp script. It works on both the precompiled 4.9 and a recently compiled develop branch, except when I run it with -sizeComm 1: in that case, I get an assertion error at line 45.

Yes, the assertion failure is expected. Good that the rest is working.

I see, I was not sure whether it was intended or not :slight_smile: