How to use ParMmg for mesh adaptation when performing parallel computations within the FFDDM framework?

Hello,

I am currently using FreeFEM to solve three-dimensional elasticity problems with approximately 5–10 million degrees of freedom. I encountered the following issues when using ParMmg for mesh adaptation.

1. In the official examples repository’s ffddm folder, I only found one case using ‘adaptmesh’ for adaptive meshing in a 2D problem (‘Richard-2d-mpi-adaptmesh.edp’). I am now working on a 3D elasticity problem, and I understand that ‘parmmg’ is better suited for 3D mesh adaptation in parallel. However, there is no ‘parmmg’ example in the ffddm folder. I did find the following parmmg-based mesh-adaptation examples:
‘distributed-parmmg.edp’
‘laplace-adapt-3d-PETSc.edp’
‘laplace-adapt-dist-3d-PETSc.edp’
All of them use the PETSc framework for parallelization. Consequently, I also tried solving my 3D elasticity problem within the PETSc framework and emulated their mesh‑adaptation approach.

The key portion of my code is as follows:

// preamble needed by the snippet below (the mesh Th, the elasticity tensor C,
// and the macros epsV/epsTrace are defined earlier and omitted here)
load "PETSc"                  // distributed linear algebra (Mat, set, ...)
load "parmmg"                 // parmmg3d
load "mshmet"                 // mshmet metric computation
macro dimension()3//          // must be defined before including macro_ddm.idp
include "macro_ddm.idp"

func Pk = [P1, P1, P1];       // vector FE space
macro def(u) [u, u#B, u#C] // // vector field definition
macro init(u) [u, u, u] //    // vector field initialization
fespace Vh(Th, Pk);
Mat A;                        // distributed matrix
MatCreate(Th, A, Pk);         // same call as in the adaptation loop below

// Variational formulation
varf vA(def(u), def(v))
    = int3d(Th)(epsV(u)' * C * epsV(v))
    + on(nDrSME1, u = 0)
    + on(nDrSME2, uB = 0)
    + on(nDrSME3, uC = 0);

varf vb(def(u), def(v))
    = int3d(Th)((3.0 * lambda + 2.0 * mu) * alphaTE * DeltaT * epsTrace(v))
    + on(nDrSME1, u = 0)
    + on(nDrSME2, uB = 0)
    + on(nDrSME3, uC = 0);
for(int i = 0; i < 4; ++i) { // adaptation loop: solve, compute metric, remesh
    // Matrix form
    matrix Aloc = vA(Vh, Vh);
    real[int] b = vb(0, Vh);

    // PETSc matrix
    A = Aloc;
    
    set(A, sparams = "-pc_type hypre -pc_hypre_boomeramg_coarsen_type HMIS "
                   + "-pc_hypre_boomeramg_interp_type ext+i "
                   + "-ksp_type cg -ksp_monitor -ksp_converged_reason", bs = 3);
    Vh<real> def(uA);
    uA[] = A^-1 * b;

    mesh3 ThParMmg;
    int[int] n2o;
    int[int][int] communicators;
    ParMmgCreateCommunicators(Th, ThParMmg, n2o, communicators); // parallel interface communicators for ParMmg
    
    fespace Wh(ThParMmg, P1);
    Wh uAbs = sqrt(uA^2 + uAB^2 + uAC^2);
    real[int] met = mshmet(Th, abs(uAbs), aniso = 0, nbregul = 1, hmin = 1e-6, hmax = 200e-6, err = errm);
    
    ThParMmg = parmmg3d(ThParMmg, metric = met, nodeCommunicators = communicators);
    DmeshReconstruct(ThParMmg); // rebuild the distributed-mesh data after remeshing

    DmeshCopy(ThParMmg, Th);    // the adapted mesh becomes the new working mesh
    Mat Adapt;
    MatCreate(Th, Adapt, Pk);   // new distributed matrix on the adapted mesh
    A = Adapt;
    u = 0.0;
    errm *= 0.5;
}
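One detail I am unsure about: the metric is computed on Th, while parmmg3d runs on ThParMmg, so I believe it has to be renumbered through n2o before the call. Here is a sketch of what I think ‘laplace-adapt-dist-3d-PETSc.edp’ does (the names WhTh, WhParMmg, rest, and metParMmg are mine):

// Sketch only: gather the metric in the vertex ordering of ThParMmg via n2o.
fespace WhTh(Th, P1);
fespace WhParMmg(ThParMmg, P1);
int[int] rest = restrict(WhParMmg, WhTh, n2o); // ThParMmg dof -> Th dof
real[int] metParMmg(WhParMmg.ndof);
for (int j = 0; j < rest.n; ++j)
    metParMmg[j] = met[rest[j]];
ThParMmg = parmmg3d(ThParMmg, metric = metParMmg, nodeCommunicators = communicators);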

However, an error occurred during the simulation; the error message is as follows:
468 @ WhPartPrivate def(func2vec) [func2vec, func2vecB, func2vecC] ; the array size must be 1 not 3
Error line number 468, in file macro: partitionPrivate in /usr/local/lib/ff++/4.15/idp/macro_ddm.idp, before token ;
Invalid array size for vectorial fespace function
How can I modify my code to resolve the above error?

2. I am still primarily using the FFDDM framework for parallel simulations, and I would like to ask whether there is an official example that uses ParMmg for three-dimensional elasticity problems.

Below is the core section of my simulation code:

for (int i = 0; i < 5; i++) {
    // rebuild the decomposition and the solver on the (possibly adapted) global mesh
    ffddmbuildDmesh(E3d, ThGlobal, mpiCommWorld);
    ffddmbuildDfespace(E3d, E3d, real, def, init, Pk);
    ffddmsetupOperator(E3d, E3d, Varf);
    ffddmsetupPrecond(E3d, Varf); // one-level preconditioner
    ffddmgeneosetup(E3d, Varf);   // two-level (GenEO) preconditioner

    // Domain decomposition solve
    real[int] rhs(1);             // resized by ffddmbuildrhs
    ffddmbuildrhs(E3d, Varf, rhs);
    real[int] x0(rhs.n);
    x0 = 0;

    E3dVhi def(u), def(err);
    u[] = E3dfGMRES(x0, rhs, 1.e-6, 500, "right");
    E3dwritesummary;

    // the global finite element space defined on the global mesh prmesh#Thglob
    E3dVhglob def(uglob);
    E3dfromVhi(u[], E3dVhglob, uglob[]);

    // metric from the displacement magnitude, then global remeshing
    real[int] met = mshmet(ThGlobal, sqrt(uglob^2 + uglobB^2 + uglobC^2), normalization = 1, aniso = 0, nbregul = 1, hmin = 1e-6, hmax = 200e-6, err = errm);
    ThGlobal = parmmg3d(ThGlobal, metric = met, hgrad = 2.3, requiredTriangle = rt, verbose = 1);
    broadcast(processor(0), ThGlobal);

    errm *= 0.5;
}

When running the code above, adaptive mesh refinement succeeds with fewer than eight processes, but with more than sixteen processes it fails with an “out of memory” error. Could you point out what might be wrong in my code or suggest areas for improvement?
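One possible factor I would like to confirm: in the loop above, every MPI rank holds its own copy of the global mesh and calls the remesher on it, so the total memory use grows with the number of processes. Would restricting the global remeshing to rank 0 and broadcasting the result, along the lines below, be the right direction? (An untested sketch that reuses the met computed by mshmet above; it assumes the sequential mmg3d remesher from load "mmg" in place of parmmg3d.)

// Untested sketch: remesh the global mesh on rank 0 only, then broadcast,
// so the remesher does not run once per rank. Assumes load "mmg".
if (mpirank == 0)
    ThGlobal = mmg3d(ThGlobal, metric = met, hgrad = 2.3, requiredTriangle = rt, verbose = 1);
broadcast(processor(0), ThGlobal);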

3. Additionally, in the code above, a domain decomposition is performed after each adaptive mesh update. Is there a way to perform the domain decomposition just once and then let each subdomain carry out its mesh adaptation independently, without triggering a global decomposition again each time? This could reduce the overall computation time. For example, the core code could be modified as follows:

ffddmbuildDmesh(E3d, ThGlobal, mpiCommWorld); // decomposition built only once, outside the loop
ffddmbuildDfespace(E3d, E3d, real, def, init, Pk);
ffddmsetupOperator(E3d, E3d, Varf);
ffddmsetupPrecond(E3d, Varf); // one-level preconditioner
ffddmgeneosetup(E3d, Varf);   // two-level (GenEO) preconditioner
real[int] rhs(1);             // resized by ffddmbuildrhs
for (int i = 0; i < 5; i++) {
    // Domain decomposition solve
    ffddmbuildrhs(E3d, Varf, rhs);
    real[int] x0(rhs.n);
    x0 = 0;

    E3dVhi def(u), def(err);
    u[] = E3dfGMRES(x0, rhs, 1.e-6, 500, "right");
    E3dwritesummary;

    // the global finite element space defined on the global mesh prmesh#Thglob
    E3dVhglob def(uglob);
    E3dfromVhi(u[], E3dVhglob, uglob[]);

    real[int] met = mshmet(ThGlobal, sqrt(uglob^2 + uglobB^2 + uglobC^2), normalization = 1, aniso = 0, nbregul = 1, hmin = 1e-6, hmax = 200e-6, err = errm);
    ThGlobal = parmmg3d(ThGlobal, metric = met, hgrad = 2.3, requiredTriangle = rt, verbose = 1);
    broadcast(processor(0), ThGlobal);

    errm *= 0.5;
    savevtk("utot" + i + ".vtu", ThGlobal, sqrt(uglob^2 + uglobB^2 + uglobC^2), dataname = "utot", order = Order1);
}

After running the code above, I found that once the adaptive refinement completes, the mesh within each subdomain is unchanged from before the adaptation, while the global mesh has been updated. How should I modify the code to achieve the intended behavior described above?

Thank you very much.

If you don’t share runnable code, then I can’t help you. My guess is that you are missing some macro definition, which should be straightforward to fix if you send your code in full.
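For reference, "the array size must be 1 not 3" means that a 3-component vector definition was applied where a scalar was expected: the message shows macro_ddm.idp expanding your def macro on its scalar partitioning space WhPartPrivate. A minimal illustration of the mismatch (hypothetical names, not a confirmed fix):

// Illustration only: a vector def macro applied to a scalar fespace
// reproduces "the array size must be 1 not 3".
macro def(u) [u, u#B, u#C] // // 3-component definition
fespace WhPart(Th, P1);       // scalar space, like the one used for partitioning
// WhPart def(w);             // fails: the array size must be 1 not 3
WhPart w;                     // a scalar function is what is expected here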