Integral over nested meshes

Dear all,

I am trying to evaluate an integral of the solution obtained on nested meshes.
More concretely, I’d like to evaluate the integral of the solution in the following example:

According to the previous entry (Issue with transfer between two meshes), perhaps I should add the n2o operator for the mesh “ThFine”, as in the example “maxwell-3d-PETSc.edp”.
However, I don’t know how to do this with “buildMatEdgeRecursive”.

Would someone show me how to do this?
Or, is there any other method?

Thank you in advance.

Instead of computing an integral, you want to compute a weighted integral, using the partition of unity.
E.g., instead of

real val = int3d(Th)(real(u));
mpiAllReduce(val, ..., mpiSUM);

Do

real[int] D = MG[0].D; // partition of unity on the fine mesh
u[] .*= D; // entry-wise scaling of the degrees of freedom by the partition of unity
real val = int3d(Th)(u);
mpiAllReduce(val, ..., mpiSUM);
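To see why this weighted reduction gives the correct value, here is a minimal sketch in plain Python of the underlying idea (a hypothetical dof-level setting, not FreeFEM code): each degree of freedom shared by several overlapping subdomains carries weights D that sum to 1, so the mpiAllReduce of the weighted local sums counts every dof exactly once.

```python
# Hypothetical 1D setting: 6 global dof values.
u_global = [2.0, 3.0, 5.0, 7.0, 11.0, 13.0]

# Two overlapping "subdomains": dofs 0-3 on rank 0, dofs 2-5 on rank 1.
subdomains = [[0, 1, 2, 3], [2, 3, 4, 5]]

# Multiplicity of each global dof (how many subdomains hold a copy).
mult = [0] * len(u_global)
for sd in subdomains:
    for i in sd:
        mult[i] += 1

# Partition of unity: weight 1/multiplicity, so the weights of each dof sum to 1.
naive = 0.0      # unweighted reduction: overlap is double-counted
weighted = 0.0   # weighted reduction, as in mpiAllReduce(..., mpiSUM)
for sd in subdomains:
    naive += sum(u_global[i] for i in sd)
    weighted += sum(u_global[i] / mult[i] for i in sd)

print(naive)     # 53.0: dofs 2 and 3 are counted twice
print(weighted)  # 41.0 == sum(u_global): each dof counted exactly once
```

In FreeFEM, the weights come ready-made as `MG[0].D`, matched to the finite element space used to build the operators (which is what bites below when the spaces are mixed).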

Dear prj,
Thank you so much for your response.

To confirm the integral value, I evaluated the volume of a unit cube based on your method.

//volume of a unit cube
fespace VhP1(ThFine, P1);
VhP1 volint = 1.0;
real valwrong = int3d(ThFine)(volint);
real valwrongG;
mpiAllReduce(valwrong, valwrongG, mpiCommWorld, mpiSUM);
real[int] D = MG[0].D; // partition of unity on the fine mesh, OK
volint[] .*= D; // entry-wise scaling by the partition of unity
real val = int3d(ThFine)(volint);
real valG;
mpiAllReduce(val, valG, mpiCommWorld, mpiSUM);
if(mpirank==0){
  cout << "valwrongG = " << valwrongG << endl;
  cout << "valG = " << valG << endl;
}

If I run the code with np=1, I obtain the correct results valwrongG=1 and valG=1.
However, when I run the code with np=4, they evaluate to valwrongG=1.709913 and valG=8.724715e-01. That is, both deviate from 1, and even valG seems to contain an error.

Am I missing something?

OK, there are multiple reasons why it’s not working (one is that you are mixing the partition of unity, which is of Nédélec type, with a P_1 function). Still, there are some issues, or missing pieces, in macro_ddm.idp that need fixing to make this work. I’ll try to think of something and get back to you ASAP. Sorry about that.

I have a solution, far from ideal, but it gets the job done. Please fetch the commit FreeFem/FreeFem-sources@57d5ad4 (“Adds buildMatEdgeRecursiveWithPartitioning”) from GitHub.
Then, the following code works and always returns a volume of 1.

diff --git a/examples/hpddm/maxwell-mg-3d-PETSc-complex.edp b/examples/hpddm/maxwell-mg-3d-PETSc-complex.edp
index a70f15b8..c1a872ba 100644
--- a/examples/hpddm/maxwell-mg-3d-PETSc-complex.edp
+++ b/examples/hpddm/maxwell-mg-3d-PETSc-complex.edp
@@ -28,3 +28,25 @@ func Pk = Edge03d;
 func PkPart = Edge03ds0;
-buildMatEdgeRecursive(Th, s, level, P, MG, Pk, mpiCommWorld, PkPart, defPart, initPart);
+fespace PhFine(Th[0], P0);
+PhFine partFine;
+{
+    fespace PhCoarse(Th[level - 1], P0);
+    PhCoarse part;
+    partitionerSeq(part[], Th[level - 1], mpisize);
+    partitionerPar(part[], Th[level - 1], mpiCommSelf, mpisize);
+    buildMatEdgeRecursiveWithPartitioning(Th, part[], s, level, P, MG, Pk, mpiCommWorld, PkPart, defPart, initPart);
+    part = part;
+    partFine = part;
+}
+
+{
+    // volume of a unit cube
+    mesh3 ThNo = trunc(Th[0], abs(partFine-mpirank) < 1e-2);
+    fespace VhP1(ThNo, P1);
+    VhP1 volint = 1.0;
+    real val = int3d(ThNo)(volint);
+    real valG;
+    mpiAllReduce(val, valG, mpiCommWorld, mpiSUM);
+    if(mpirank==0)
+        cout << "valG = " << valG << endl;
+}
 

The idea is to let you store the partitioning, so that you can obtain a nonoverlapping decomposition on the fine level and thus integrate any variable properly, without duplicated unknowns. Let me know if something is not clear to you.
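The nonoverlapping approach can be sketched in plain Python as well (a hypothetical element-level setting, not FreeFEM code): a P0 partitioning field assigns each element to exactly one rank, like `trunc(Th[0], abs(partFine - mpirank) < 1e-2)` above, so plain local integrals sum to the global integral with no partition-of-unity weighting at all.

```python
# Hypothetical element "volumes" of a 1D mesh of the unit interval (sum to 1).
volumes = [0.25, 0.25, 0.25, 0.25]

# P0 partitioning field: the owner rank of each element (like partFine).
owner = [0, 0, 1, 1]
nranks = 2

# Each rank integrates only over its own elements (its truncated mesh ThNo).
local = [sum(v for v, o in zip(volumes, owner) if o == r) for r in range(nranks)]

# The mpiAllReduce(..., mpiSUM) step over all ranks.
total = sum(local)
print(total)  # 1.0, the measure of the unit interval, with no double counting
```

Because every element has exactly one owner, no dof-level weighting is needed, which is why the truncated-mesh version returns exactly 1 regardless of np.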

Thank you so much! It worked very well in my environment.