Is it possible to directly merge the dispatched meshes into a whole mesh

Dear all,

I know that if we need information about the global mesh after DmeshCreate(Th), we have to save the global mesh as a backup beforehand.

I just wonder if it is possible to directly merge the dispatched meshes back into a whole mesh without a prior backup.

Yes, look for DmeshReconstruct() in the examples.

Many thanks. I have tried the command DmeshReconstruct(); I find it generates a distributed overlapping mesh from dispatched non-overlapping meshes.

load "PETSc"                     // provides partitionerSeq/partitionerPar
macro dimension()2// EOM
include "macro_ddm.idp"          // provides DmeshReconstruct

mesh ThGlobal = square(40, 40);
fespace Ph(ThGlobal, P0);
Ph part;
if(mpirank == 0) {
    partitionerSeq(part[], ThGlobal, mpisize);
}
partitionerPar(part[], ThGlobal, mpiCommWorld, mpisize);
mesh Th = trunc(ThGlobal, abs(part - mpirank) < 1.0e-2, renum = 1, label = -111112);
DmeshReconstruct(Th);

I have a further question: is it possible to generate a whole non-overlapping mesh?

Please reformulate your last question.

Sorry, let me restate my question, that is:

I have a global mesh Th, for example generated by mesh Th = square(40, 40); at this point, every processor shares the same global Th.

After DmeshCreate(Th), it becomes an overlapping distributed mesh, and every piece belongs to one processor; at this point, every processor has its own, different local Th.

My question is whether it is possible to directly recover the global mesh (as in Picture 1) from the dispatched meshes (as in Picture 2), so that every processor has the same global Th again.

You could just save the initial global mesh in a backup variable.
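A minimal sketch of this backup pattern (assuming the standard FreeFEM PETSc headers; the exact macro names may differ between versions):

```freefem
load "PETSc"
macro dimension()2// EOM
include "macro_ddm.idp"

mesh Th = square(40, 40);
mesh ThBackup = Th;   // copy of the global mesh, kept on every process
DmeshCreate(Th);      // Th is now the local overlapping piece
// ThBackup still holds the full global mesh on every process
```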

Sorry to bother you again, professor, I have another irrelevant question,

I want to measure the running time of my parallel program. At first, I used mpiWtime(), but I found it returns different values on different processors, as the following picture shows.

I am not sure whether it is reliable to only count the running time of processor 0. (I want to compare the times for 4, 16, 64, and 256 processors.)
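One common pattern (a sketch, not taken from this thread) is to synchronize all processes before starting the timer and then take the maximum elapsed time over all ranks, since the slowest rank determines the true wall time:

```freefem
real t0, elapsed, tMax;
mpiBarrier(mpiCommWorld);      // start everyone together
t0 = mpiWtime();
// ... the work you want to time ...
elapsed = mpiWtime() - t0;
mpiAllReduce(elapsed, tMax, mpiCommWorld, mpiMAX);  // slowest rank wins
if(mpirank == 0)
    cout << "wall time: " << tMax << " s" << endl;
```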

I know another way is to use the Linux command time, like

time ff-mpirun -n 6 ex7-compute_pi_parallel.edp -v 0

How can I reliably compare the running times for different numbers of processors?

Use PETSc -log_view with stages in your script.
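For example, the option can simply be appended to the run command (it is forwarded to PETSc, which prints a timing summary at the end of the run):

```shell
ff-mpirun -n 4 ex7-compute_pi_parallel.edp -v 0 -log_view
```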

Thank you very much. -log_view works well and I got some valuable information.

By the way, do you think the average time (here, the Time row and Avg column) is a good metric for comparing parallel efficiency? And are there other good metrics for measuring parallel efficiency?

You’ll find help in the PETSc manual. Below the numbers you are reporting, there is a column “Ratio” which highlights the load imbalance between processes.
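For reference, the textbook definitions (not from this thread): with T(p) the wall time on p processes, the strong-scaling speedup and parallel efficiency are

```latex
S(p) = \frac{T(1)}{T(p)}, \qquad E(p) = \frac{S(p)}{p} = \frac{T(1)}{p\,T(p)}
```

An efficiency E(p) close to 1 indicates good scaling; the "Ratio" column in -log_view complements this by exposing load imbalance between processes.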