Level-set with parmmg3d

Dear FreeFem users,

I am trying to use parmmg3d from FreeFEM in level-set mode, but I am not sure whether this functionality is actually supported by the current FreeFEM interface to ParMmg.

From reading the parmmg.cpp plugin source in FreeFEM, it seems that the parameters

  • iso
  • ls

do exist and are forwarded to the ParMmg API. However, in practice, the call does not seem to modify the mesh. So, is ParMmg level-set mode really supported through parmmg3d in FreeFEM?

In a minimal test, if I run ParMmg externally from the shell with something like

parmmg_O3 -in mesh.mesh -sol phi.sol -ls 0 -out out.mesh

the level-set mode seems to be available.

But when I call it from FreeFEM with parmmg3d(..., metric=..., ls=0), the mesh does not change, and I also get the message

## Error: MMG5_scale_scalarMetric: at least 1 wrong metric.

Here is a minimal example:

load "mshmet"
load "PETSc"
load "parmmg"
load "medit"
macro dimension()3// EOM
include "macro_ddm.idp"

func real Heav(real x, real beta, real eta)
{
  return (tanh(beta*eta) + tanh(beta*(x-eta))) /
         (tanh(beta*eta) + tanh(beta*(1.0-eta)));
}

int[int] fforder = [1];
real err = 0.05;
int n = 15;

mesh3 Th3=cube(n,n,n);
mesh3 ThBackup = Th3;

Mat A;
int[int] n2o;
NewMacro Th3N2O() n2o EndMacro
DmeshCreate(Th3);

MatCreate(Th3, A, P1);
fespace Vh(Th3, P1);
Vh u;
plotDmesh(Th3, cmm = "Th3");

Vh uh = sqrt((x-0.5)^2 + (y-0.5)^2 + (z-0.5)^2) - 0.3;
// Smooth
uh[] += 0.5;
uh = Heav(uh, 40, 0.5) - 0.5;

fespace VhBackup(ThBackup, P1);
VhBackup h, hReduced;
uh[] .*= A.D;
int[int] rest = restrict(Vh, VhBackup, n2o);
for[i, v : rest] h[][v] = uh[][i];
mpiAllReduce(h[], hReduced[], mpiCommWorld, mpiSUM); // centralize distributed solution

// real[int] met = mshmet(ThBackup, hReduced, aniso = 0, hmin = 1.0e-3, hmax = 2.0e-1, err = err);
ThBackup = parmmg3d(ThBackup, metric = uh[], hgrad = 2.3, verbose=3, ls=0);
Th3 = ThBackup;
Mat Adapt;
MatCreate(Th3, Adapt, P1);
A = Adapt;
cout << "mpirank # " << mpirank << " : Th3.nv " << Th3.nv << ", Th3.nt " << Th3.nt << endl;
plotDmesh(Th3, cmm = "Th3Adapt");

Thank you

You are missing the iso = 1 parameter to make this work with mmg3d. It still doesn’t work with parmmg3d, though. Where are the mesh.mesh, phi.sol, and out.mesh coming from in your test?
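For reference, with mmg3d alone the call should look something like this (a minimal sketch; phi is a P1 level-set function, and I am assuming the level-set values are passed through the metric argument, as in your parmmg3d call):

```freefem
load "mmg"
fespace Vh(Th3, P1);
// signed distance to a sphere of radius 0.3 centered in the unit cube
Vh phi = sqrt((x-0.5)^2 + (y-0.5)^2 + (z-0.5)^2) - 0.3;
// iso = 1 switches on isovalue (level-set) discretization mode;
// ls = 0.0 is the isovalue along which the mesh is split
Th3 = mmg3d(Th3, metric = phi[], iso = 1, ls = 0.0);
```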

Dear @prj, thank you!
I ran ParMmg from the terminal on the centralized mesh ThBackup and the reduced solution hReduced, which I saved by adding

if(mpirank == 0){
    savemesh(ThBackup, "mesh.mesh");
    savesol("phi.sol", ThBackup, hReduced);
}

before the FreeFEM call to parmmg3d.
I attach here the full code

load "mshmet"
load "PETSc"
load "parmmg"
load "medit"
macro dimension()3// EOM
include "macro_ddm.idp"

func real Heav(real x, real beta, real eta)
{
  return (tanh(beta*eta) + tanh(beta*(x-eta))) /
         (tanh(beta*eta) + tanh(beta*(1.0-eta)));
}

int[int] fforder = [1];
real err = 0.05;
int n = 15;

mesh3 Th3=cube(n,n,n);
mesh3 ThBackup = Th3;

Mat A;
int[int] n2o;
NewMacro Th3N2O() n2o EndMacro
DmeshCreate(Th3);

MatCreate(Th3, A, P1);
fespace Vh(Th3, P1);
Vh u;
plotDmesh(Th3, cmm = "Th3");

Vh uh = sqrt((x-0.5)^2 + (y-0.5)^2 + (z-0.5)^2) - 0.3;
// Smooth
uh[] += 0.5;
uh = Heav(uh, 40, 0.5) - 0.5;

fespace VhBackup(ThBackup, P1);
VhBackup h, hReduced;
uh[] .*= A.D;
int[int] rest = restrict(Vh, VhBackup, n2o);
for[i, v : rest] h[][v] = uh[][i];
mpiAllReduce(h[], hReduced[], mpiCommWorld, mpiSUM); // centralize distributed solution

// real[int] met = mshmet(ThBackup, hReduced, aniso = 0, hmin = 1.0e-3, hmax = 2.0e-1, err = err);

if(mpirank == 0){
    savemesh(ThBackup, "mesh.mesh");
    savesol("phi.sol", ThBackup, hReduced);
}

ThBackup = parmmg3d(ThBackup, metric = uh[], hgrad = 2.3, verbose=3, ls=0);
Th3 = ThBackup;
Mat Adapt;
MatCreate(Th3, Adapt, P1);
A = Adapt;
cout << "mpirank # " << mpirank << " : Th3.nv " << Th3.nv << ", Th3.nt " << Th3.nt << endl;
plotDmesh(Th3, cmm = "Th3Adapt");

Thanks, Giacomo

Thanks, I have reproduced the issue and I’m working on a fix. Should be good in a couple of minutes.

Fixed in Fix ParMmg in level-set mode · FreeFem/FreeFem-sources@2cb676e · GitHub. You also need to fix your .edp: you are missing the iso = 1 parameter, and you should use hReduced[] instead of uh[] for the metric argument. Let me know if you run into other issues. Thanks for the report.

Wow, thanks a lot!
Yes, you are right, I need to call parmmg as
ThBackup = parmmg3d(ThBackup, metric = hReduced[], hgrad = 2.3, verbose=3, ls=0, iso=1);

I’ve recompiled FreeFEM on the develop branch, but the error ## Error: MMG5_scale_scalarMetric: at least 1 wrong metric. persists.

I also notice that when I call parmmg3d with a metric (computed e.g. with mshmet) instead of hReduced, together with iso=1, the parmmg output shows that the isovalue discretization phase is activated:

  &&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&
   MODULE PARMMGLIB_CENTRALIZED: IMB-LJLL : 1.5.0 (Nov. 01, 2024)
  &&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&
     git branch: HEAD
     git commit: cd8a6e340e466e857b0bb9952f115d8999337167
     git date:   2024-11-01 09:12:00 +0100


  -- PMMG: CHECK INPUT DATA
  -- CHECK INPUT DATA COMPLETED.     0.000s

  -- PHASE 1 : ANALYSIS AND MESH DISTRIBUTION

  -- ANALYSIS
  -- PARALLEL MESH QUALITY  4096   20250
     BEST   0.657267  AVRG.   0.657267  WRST.   0.657267 (ELT 251)
     HISTOGRAMM:  100.00 % > 0.12

  -- PHASE 1a: ISOVALUE DISCRETIZATION     

 Unexpected error:  Segmentation fault

I know that the metric information and the iso-requirement are not compatible in this case, it was only a test. Hope that it helps.

Thank you again,
Giacomo

I cannot reproduce the error.

'/Volumes/Data/repositories/petsc/arch-darwin-c-opt-real/bin/mpiexec' -n 4 /Volumes/Data/repositories/FreeFem-sources-opt/src/mpi/FreeFem++-mpi -nw 'parmmg.edp' -v 0

  &&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&
   MODULE PARMMGLIB_CENTRALIZED: IMB-LJLL : 1.5.0 (Nov. 01, 2024)
  &&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&
     git branch: HEAD
     git commit: cd8a6e340e466e857b0bb9952f115d8999337167
     git date:   2024-11-01 09:12:00 +0100


  -- PMMG: CHECK INPUT DATA
  -- CHECK INPUT DATA COMPLETED.     0.000s

  -- PHASE 1 : ANALYSIS AND MESH DISTRIBUTION

  -- ANALYSIS
  -- PARALLEL MESH QUALITY  4096   20250
     BEST   0.657267  AVRG.   0.657267  WRST.   0.657267 (PROC 0 - ELT 251)
     HISTOGRAMM:  100.00 % > 0.12

  -- PHASE 1a: ISOVALUE DISCRETIZATION     
  -- PHASE 1a COMPLETED     0.037s
       LS discretization OK: no non-manifold topology.

  -- ANALYSIS COMPLETED    0.067s

  -- PARTITIONING
  -- PARTITIONING COMPLETED    0.021s
  -- PHASE 1 COMPLETED.     0.088s

  -- PHASE 2 : ISOTROPIC MESHING

  ## Warning: MMG3D_chkmani: at least 1 tetra with 4 boundary faces.

  -- PARALLEL MESH QUALITY  1087   5890
     BEST   0.993480  AVRG.   0.652338  WRST.   0.089281 (PROC 1 - ELT 1111)
     HISTOGRAMM:   99.90 % > 0.12
  -- PHASE 2 COMPLETED.     0.768s

  -- PHASE 3 : MERGE MESHES OVER PROCESSORS
  -- PHASE 3 COMPLETED.     0.001s

  -- PHASE 4 : MESH PACKED UP
  -- PHASE 4 COMPLETED.     0.000s

   PARMMGLIB_CENTRALIZED: ELAPSED TIME  0.858s

  &&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&
   END OF MODULE PARMMGLIB_CENTRALIZED: IMB-LJLL 
  &&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&
mpirank # 0 : Th3.nv 507, Th3.nt 2312
mpirank # 1 : Th3.nv 524, Th3.nt 2375
mpirank # 2 : Th3.nv 534, Th3.nt 2375
mpirank # 3 : Th3.nv 557, Th3.nt 2491