Dear FreeFem users,
I would like to perform mesh adaptation in parallel, then reassemble the distributed mesh into a global one. Is there a convenient high-level macro or an example which demonstrates this?
Thanks
Andras
Thanks, but I had already found all those examples. Example 2 does sequential adaptation on a global mesh (but I want to perform parallel adaptation); example 3 does parallel adaptation, but the global mesh is not recovered there. I also tried to understand example 1, and I thought the macro gatherDmesh(Th, comm, ThGather) could recover the global mesh; however, I did not manage to do it. I also find this example hard to understand, e.g. the int/int division in line 9 looks weird, and it does not work with 5 processes. I also tried to export the gathered mesh to ParaView with the communicator = mpiCommSelf option, but I got only a part of the mesh.
I think I found the solution in this video: I can just call parmmg3d on the global mesh. However, I would greatly appreciate some clarification, both on the redistributeDmesh example and on whether my solution is correct or not.
it does not work with 5 processes
I don’t know how you tested that, but with the develop branch, it’s working as expected.
$ ff-mpirun -n 5 redistributeDmesh.edp -v 0 -wg && echo $?
0
gatherDmesh will do what you want, i.e., gather a distributed mesh into a subcommunicator of your original communicator (in your case, I guess you want a single process in the output communicator, so -sizeComm 1 in redistributeDmesh.edp).
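Roughly, the usage looks like this (a minimal, untested sketch: buildDmesh and savevtk are the standard macro_ddm.idp/iovtk calls, but the way I build the single-rank subcommunicator with mpiComm here is only an illustration of what -sizeComm 1 does for you in redistributeDmesh.edp):

load "PETSc"
load "msh3"
load "iovtk"
macro dimension()3// EOM
include "macro_ddm.idp"

mesh3 Th = cube(10, 10, 10);
buildDmesh(Th)                 // distribute Th over mpiCommWorld

// split mpiCommWorld so that only rank 0 gets color 0 (illustrative, cf. -sizeComm 1)
mpiComm comm(mpiCommWorld, mpirank == 0 ? 0 : 1, mpirank);
mesh3 ThGather;
gatherDmesh(Th, comm, ThGather)        // gather the distributed pieces

if(mpirank == 0)
    savevtk("gathered.vtu", ThGather, communicator = mpiCommSelf);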
BTW, why do you need to reassemble the distributed mesh in the first place if you are able to do mesh adaptation in parallel?
Thanks for the quick reply.
Thanks for the info; now the redistributeDmesh.edp example makes more sense. I use the precompiled 4.9 version, which is likely why the example did not work for me. It looks like I cannot keep postponing compiling the development branch any longer; I will try this option out, thank you.
I like to keep a global mesh since with a global .vtu output I do not need to post-process the internal edges with the Threshold filter in ParaView. Furthermore, in ParaView the distributed mesh did not allow me to use particle tracers (although I tried that a long time ago, and there might be a ParaView way to glue the mesh together). Moreover, keeping the global mesh allows me to integrate on it. But the more I think about it, the more I agree that the code should be parallelized as much as possible (e.g., I just realized that it is possible to calculate integrals on the whole domain in parallel using the Mat(vec,vec) operation, with an additional multiplication with the mass matrix, and also watching out for the partition of unity).
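Something like this is what I have in mind (an untested sketch with the standard createMat setup; the integral of u is computed as 1^T M u, which every rank obtains without any gathering):

load "PETSc"
load "msh3"
macro dimension()3// EOM
include "macro_ddm.idp"

mesh3 Th = cube(10, 10, 10);
Mat M;
createMat(Th, M, P1)           // distributes Th and builds the partition of unity

fespace Vh(Th, P1);
Vh u = x + y + z;              // some function to integrate
Vh one = 1.0;

varf vMass(w, v) = int3d(Th)(w * v);
M = vMass(Vh, Vh);             // distributed mass matrix

real I = M(u[], one[]);        // global int3d(Th)(u), consistent on every rank
if(mpirank == 0)
    cout << "integral = " << I << endl;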
I see. Indeed, for postprocessing purposes, a centralized mesh could be beneficial. Just be aware that it will likely ruin the scalability of your application. Any sort of integration or scalar product can be computed in a distributed fashion, either through Mat(vec,vec) as you say, or using a nonoverlapping decomposition; see for example FreeFem-sources/minimal-surface-Tao-2d-PETSc.edp at develop · FreeFem/FreeFem-sources · GitHub. Let me know if you need additional help.
@prj FYI, I tried to rerun the redistributeDmesh.edp script. It works on both the precompiled 4.9 and on a recently compiled develop branch, except when I run it with -sizeComm 1: in that case, I get an assertion error at line 45.
Yes, the assertion failure is expected. Good that the rest is working.
I see, I was not sure whether it was intended or not.
If I may jump into the conversation: I definitely agree that computing integrals without reconstructing the global mesh / solution is better, but I am wondering whether it is possible to compute integrals in ParaView as a double check.
It seems that saving the solution with the usual savevtk command in FF and then using the Integrate Variables filter in ParaView does not give correct results (perhaps because overlapping regions are counted multiple times?). Do you know if there is a way to get correct results, either by saving the .vtu files differently in FF and/or by computing the integrals differently in ParaView?
Thank you in advance!
I think you first need to apply the Threshold filter with a label of -111111 to remove ghost unknowns. Maybe this gives better results? Or you can save the results on the nonoverlapping decomposition.
Thank you. It seems that the label -111111 corresponds to faces (triangles) and not to elements (tetrahedra), so filtering out this label does not change the values of 3D integrals. So it really is a problem of multiplicity, and I do not see an obvious, simple way to handle that in ParaView.
Is there an FF example showing how to save results correctly on the nonoverlapping decomposition when the mesh has already been created and saved in another code with
mesh3 Th=readmesh3("filename.mesh");
(...)
saveDmesh(Th,"meshname")
and is simply loaded in the current code with
mesh3 Th;
loadDmesh(Th,"meshname")
and when the solution is vectorial (e.g. defined in [P1,P2])?
After loadDmesh, you can call createPartition like in FreeFem-sources/minimal-surface-Tao-2d-PETSc.edp at master · FreeFem/FreeFem-sources · GitHub. Then interpolate your solution in the local subdomain without overlap, and use savevtk as in the overlapping case.
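Schematically, something like this (an untested sketch; it assumes, as in that example, that the P0 field part equals 1 on the elements owned by the current rank, so truncating on part yields the nonoverlapping subdomain):

load "PETSc"
load "msh3"
load "iovtk"
macro dimension()3// EOM
include "macro_ddm.idp"

mesh3 Th;
loadDmesh(Th, "meshname")

fespace Ph(Th, P0);
Ph part;
createPartition(Th, part[], P0)        // part == 1 on elements owned by this rank

fespace Wh(Th, [P1, P2]);              // your vectorial space
Wh [u1, u2];                           // ... your distributed solution ...

mesh3 ThNo = trunc(Th, abs(part - 1.0) < 1e-1);   // local subdomain without overlap
fespace WhNo(ThNo, [P1, P2]);
WhNo [v1, v2] = [u1, u2];              // interpolation onto the nonoverlapping mesh

int[int] Order = [1, 1];
savevtk("sol.vtu", ThNo, v1, v2, order = Order);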
Thank you. The code looks like this:
verbosity = 0;
load "PETSc-complex"
load "msh3"
load "gmsh"
macro dimension()3// EOM
include "macro_ddm.idp"
mesh3 Th;
loadDmesh(Th, "meshname")
{
fespace Ph(Th, P0);
Ph part;
createPartition(Th, part[], P0)
}
and returns the following message:
Out of bound 0 <=0 < 0 array type = P2KNIS_IlEE
current line = 424 mpirank 0 / 32
Exec error : Out of bound in operator []
-- number :1
Exec error : Out of bound in operator []
-- number :1
err code 8 , mpirank 0
What did I miss?
Could you please share a fully runnable code? I tried on examples/hpddm/save-load-Dmesh.edp
, and it’s working with no such error.