Integral over nested meshes

Dear all,

I am trying to evaluate an integral of a solution obtained on nested meshes.
More concretely, I would like to evaluate the integral of the solution in the following example:

According to a previous entry (Issue with transfer between two meshes), maybe I should add the n2o operator for the mesh “ThFine”, as in the example “maxwell-3d-PETSc.edp”.
However, I don’t know how to do this with “buildMatEdgeRecursive”.

Would someone show me how to do this?
Or, is there any other method?

Thank you in advance.

Instead of computing a plain integral, you want to compute a weighted integral using the partition of unity.
E.g., instead of

real val = int3d(Th)(real(u));
mpiAllReduce(val, ..., mpiSUM);

Do

real[int] D = MG[0].D; // partition of unity on the fine mesh
u[] .*= D; // entry-wise scaling of the DOF vector by the partition of unity
real val = int3d(Th)(real(u));
mpiAllReduce(val, ..., mpiSUM);
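
Put together (just a sketch, assuming u is a real-valued FE function on the local fine mesh Th; the complex case only changes the element types), the full pattern looks like:

real[int] D = MG[0].D;                         // partition of unity on the fine mesh
u[] .*= D;                                     // entry-wise scaling of the DOF vector
real val = int3d(Th)(u);                       // local weighted contribution
real valG;
mpiAllReduce(val, valG, mpiCommWorld, mpiSUM); // sum the contributions of all processes
if(mpirank == 0)
    cout << "weighted integral = " << valG << endl;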

Dear prj,
Thank you so much for your response.

To confirm the integral value, I evaluated the volume of a unit cube based on your method.

//volume of a unit cube
fespace VhP1(ThFine, P1);
VhP1 volint = 1.0;
real valwrong = int3d(ThFine)(volint);
real valwrongG;
mpiAllReduce(valwrong, valwrongG, mpiCommWorld, mpiSUM);
real[int] D = MG[0].D; // partition of unity on the fine mesh, OK
volint[] .*= D; // entry-wise scaling by the partition of unity
real val = int3d(ThFine)(volint);
real valG;
mpiAllReduce(val, valG, mpiCommWorld, mpiSUM);
if(mpirank==0){
  cout << "valwrongG = " << valwrongG << endl;
  cout << "valG = " << valG << endl;
}

If I run the code with np=1, I obtain the correct results, valwrongG=1 and valG=1.
However, when I run it with np=4, the values are valwrongG=1.709913 and valG=8.724715e-01. That is, both deviate from 1, and even valG seems to contain an error.

Am I missing something?

OK, there are multiple reasons why it’s not working (one is that you are mixing the partition of unity, which is of Nédélec type, with a P1 function). On top of that, there are some issues in macro_ddm.idp, or missing pieces, that need to be addressed to make this work. I’ll try to think of something and get back to you ASAP. Sorry about that.

I have a solution, far from ideal, but it gets the job done. Please fetch Adds buildMatEdgeRecursiveWithPartitioning. · FreeFem/FreeFem-sources@57d5ad4 · GitHub.
Then, the following code works and always returns a volume of 1.

diff --git a/examples/hpddm/maxwell-mg-3d-PETSc-complex.edp b/examples/hpddm/maxwell-mg-3d-PETSc-complex.edp
index a70f15b8..c1a872ba 100644
--- a/examples/hpddm/maxwell-mg-3d-PETSc-complex.edp
+++ b/examples/hpddm/maxwell-mg-3d-PETSc-complex.edp
@@ -28,3 +28,25 @@ func Pk = Edge03d;
 func PkPart = Edge03ds0;
-buildMatEdgeRecursive(Th, s, level, P, MG, Pk, mpiCommWorld, PkPart, defPart, initPart);
+fespace PhFine(Th[0], P0);
+PhFine partFine;
+{
+    fespace PhCoarse(Th[level - 1], P0);
+    PhCoarse part;
+    partitionerSeq(part[], Th[level - 1], mpisize);
+    partitionerPar(part[], Th[level - 1], mpiCommSelf, mpisize);
+    buildMatEdgeRecursiveWithPartitioning(Th, part[], s, level, P, MG, Pk, mpiCommWorld, PkPart, defPart, initPart);
+    part = part;     // re-interpolate, as the macro may have modified Th[level - 1]
+    partFine = part; // transfer the coarse partitioning onto the fine mesh Th[0]
+}
+
+{
+    // volume of a unit cube
+    mesh3 ThNo = trunc(Th[0], abs(partFine-mpirank) < 1e-2);
+    fespace VhP1(ThNo, P1);
+    VhP1 volint = 1.0;
+    real val = int3d(ThNo)(volint);
+    real valG;
+    mpiAllReduce(val, valG, mpiCommWorld, mpiSUM);
+    if(mpirank==0)
+        cout << "valG = " << valG << endl;
+}
 

The idea is to let you store the partitioning so that you can recover a nonoverlapping decomposition on the fine level, and thus properly integrate any variable without duplicated unknowns (see the sketch below). Let me know if something is not clear to you.
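
As an illustration (just a sketch, with u denoting the fine-level solution computed on Th[0] in the example above), the same nonoverlapping submesh can be reused to integrate the solution itself:

mesh3 ThNo = trunc(Th[0], abs(partFine - mpirank) < 1e-2); // local nonoverlapping submesh
real val = int3d(ThNo)(real(u)); // u lives on Th[0] and is evaluated at the quadrature points of ThNo
real valG;
mpiAllReduce(val, valG, mpiCommWorld, mpiSUM); // local contributions do not overlap, so the sum is the global integral
if(mpirank == 0)
    cout << "integral of real(u) = " << valG << endl;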

Thank you so much! It worked very well in my environment.

Dear prj,

Thank you so much for this solution.
I have an additional question.
I would like to solve Maxwell’s equations with periodic boundary conditions, based on the example in this post.
However, the numbers of nodes on the periodic surfaces of the cube do not match when I use your macro “buildMatEdgeRecursiveWithPartitioning”, because of the partitioning.
I think a macro like
macro ThPeriodicity()labPeriodic//
will solve this problem as in the following example.

However, I don’t know how to define such a macro because we define the mesh with an array like
Th[level - 1] = cube(n, n, n, [x, y, z], label = LL);

It would be very helpful if you could show me a solution when you are available.
Thank you in advance.

I believe they should match. Could you please provide a full running example that highlights the exact issue?

Dear prj,

Thank you for your response.
I modified the example a little bit as follows:

//  run with MPI:  ff-mpirun -np 4 script.edp
// NBPROC 4

load "PETSc-complex"
load "Element_Mixte3d"
macro dimension()3// EOM
include "macro_ddm.idp"
include "cube.idp"

macro Curl(ux, uy, uz)[dy(uz)-dz(uy), dz(ux)-dx(uz), dx(uy)-dy(ux)]// EOM
macro CrossN(ux, uy, uz)[uy*N.z-uz*N.y, uz*N.x-ux*N.z, ux*N.y-uy*N.x]// EOM
macro def(i)[i, i#y, i#z]// EOM
macro init(i)[i, i, i]// EOM
macro defPart(i)i// EOM
macro initPart(i)i// EOM

int level = 2;
int s = 3;

Mat<complex>[int] MG(level);
mesh3[int] Th(level);
matrix[int] P(level - 1);
//int Robin = 2;
//int[int] LL = [Robin, Robin, Robin, Robin, Robin, Robin];
real n = getARGV("-n", 21) / s;
Th[level - 1] = cube(n, n, n, [x, y, z]);
func perio = [[1,x,z],[3,x,z],[2,y,z],[4,y,z],[5,x,y],[6,x,y]];

//func Pk = Edge03d;
//func PkPart = Edge03ds0;
macro Pk() Edge03d, periodic = perio//EOM
macro PkPart() Edge03ds0, periodic = perio//EOM

//buildMatEdgeRecursive(Th, s, level, P, MG, Pk, mpiCommWorld, PkPart, defPart, initPart);
fespace PhFine(Th[0], P0);
PhFine partFine;
{
    fespace PhCoarse(Th[level - 1], P0);
    PhCoarse part;
    partitionerSeq(part[], Th[level - 1], mpisize);
    partitionerPar(part[], Th[level - 1], mpiCommSelf, mpisize);
    buildMatEdgeRecursiveWithPartitioning(Th, part[], s, level, P, MG, Pk, mpiCommWorld, PkPart, defPart, initPart);
    part = part;
    partFine = part;
}

func k = 6 * pi;
func f = exp(-60 * ((x-0.5)^2 + (y-0.5)^2 + (z-0.5)^2));

complex[int] rhs;
matrix<complex>[int] Opt(level);
for(int i = 0; i < level; ++i) {
    fespace Vh(Th[i], Pk);
    varf vPb([Ex,Ey,Ez], [vx,vy,vz]) =
        int3d(Th[i])(Curl(vx,vy,vz)'*Curl(Ex,Ey,Ez))
      + int3d(Th[i])(-k^2*[vx,vy,vz]'*[Ex,Ey,Ez])
      //+ int2d(Th[i],Robin)(1i*k*CrossN(vx,vy,vz)'*CrossN(Ex,Ey,Ez))
      + int2d(Th[i],-111111)(1i*k*CrossN(vx,vy,vz)'*CrossN(Ex,Ey,Ez));
    Opt[i] = vPb(Vh, Vh, sym = 1);
    MG[i] = Opt[i];
    if(i == 0) {
        varf vRHS([Ex,Ey,Ez],[vx,vy,vz]) = -int3d(Th[i])([vx,vy,vz]'*[0,0,f]);
        rhs.resize(Vh.ndof);
        rhs = vRHS(0, Vh);
    }
}
set(MG, P, sparams = "-pc_type mg -ksp_monitor -ksp_view_final_residual -ksp_type fgmres -ksp_gmres_restart 200 -ksp_max_it 200");
set(MG, 0, sparams = "-mg_coarse_ksp_type gmres -mg_coarse_ksp_rtol 1e-1 -mg_coarse_ksp_pc_side right " + " -mg_coarse_ksp_max_it 50 -mg_coarse_ksp_converged_reason -mg_coarse_pc_type asm -mg_coarse_sub_pc_factor_mat_solver_type mumps -mg_coarse_sub_pc_type cholesky -mg_coarse_pc_asm_type restrict", O = Opt[level - 1]);
set(MG, level - 1, sparams = "-mg_levels_ksp_type richardson -mg_levels_ksp_pc_side left -mg_levels_pc_type asm -mg_levels_sub_pc_type cholesky -mg_levels_sub_pc_factor_mat_solver_type mumps -mg_levels_pc_asm_type restrict", O = Opt[0]);
fespace Vh(Th[0], Pk);
Vh<complex> def(u);
u[] = MG[0]^-1 * rhs;
mesh3 ThFine = Th[0];
fespace VhFine(ThFine, P1);
VhFine plt = sqrt(real(u)^2 + real(uy)^2 + real(uz)^2);
plotD(ThFine, plt, cmm = "Solution");
int[int] fforder = [1, 1];
savevtk("maxwell-mg-3d.vtu", ThFine, [real(u), real(uy), real(uz)], [imag(u), imag(uy), imag(uz)], order = fforder, bin = 1);

It runs with np=1. However, an error about the number of nodes on the surfaces appears when np>=2.

This is not expected to work: you need to make sure your partitioning preserves periodicity. See, e.g., FreeFem-sources/diffusion-periodic-balanced-2d-PETSc.edp at develop · FreeFem/FreeFem-sources · GitHub.

Thank you for your reply.
In my understanding, I need to combine the code from lines 24–105 of “diffusion-periodic-balanced-2d-PETSc.edp” with your macro “buildMatEdgeRecursiveWithPartitioning”. Is that right?
I will modify the code based on your advice.

There are many ways to achieve what you need. Another way is to decompose the initial coarse cube into two parts: a part touching the periodic boundary, and the rest. Then call partitionerSeq() on the rest. The partitioning made up of “the part touching the periodic boundary” + “the rest + 1” will preserve periodicity. Process #0 will be in charge of the part touching the periodic boundary, and the other processes will handle the rest of the domain.
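
Something along these lines could do it, reusing the variables of your script above (an untested sketch: the one-cell boundary layer selected through eps, and the names nearBnd, ThRest, and partRest, are only for illustration; it also assumes mpisize >= 2):

fespace PhCoarse(Th[level - 1], P0);
PhCoarse part;
real eps = 1.0 / n; // roughly one layer of coarse elements next to the periodic faces
// P0 indicator of the elements whose barycenter lies in the boundary layer
PhCoarse nearBnd = ((x < eps) || (x > 1 - eps) || (y < eps) || (y > 1 - eps) || (z < eps) || (z > 1 - eps));
mesh3 ThRest = trunc(Th[level - 1], nearBnd < 0.5); // "the rest", away from the periodic boundary
fespace PhRest(ThRest, P0);
PhRest partRest;
partitionerSeq(partRest[], ThRest, mpisize - 1); // partition the rest among processes 1, ..., mpisize - 1
PhCoarse partRestOnTh = partRest; // interpolate back onto the coarse cube (0 in the boundary layer)
part = (1 - nearBnd) * (partRestOnTh + 1); // 0 on the layer (process #0), "the rest + 1" elsewhere
buildMatEdgeRecursiveWithPartitioning(Th, part[], s, level, P, MG, Pk, mpiCommWorld, PkPart, defPart, initPart);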

I understood the concept well. I would like to implement the periodic boundary conditions using one of the methods you suggested. Thank you so much!

Let me know if you get stuck; I’d be happy to help.