Poisson Equation (Parallel Implementation)

Hello, I am new to parallel computing with FreeFem++, so I started by implementing a Poisson problem with domain decomposition, based on some examples I found online. I first implemented this code serially; in that case I obtain an error in the L2 norm of order 10^-14. When I run the same code in parallel, the error in the L2 norm is of order 10^-6, using either 1 or 4 processors. I do not understand where this huge difference comes from. Any help/suggestions would be greatly appreciated. Here is a minimal version of the parallel implementation:

load "PETSc"
macro dimension()2// EOM
include "macro_ddm.idp"

/* ----- Declare/Define user variables ----- */
int nx=50; //number of mesh segments in the x direction
int ny=25; //number of mesh segments in the y direction
real a1=0;
real b1=20;
real c1=0;
real d1=10;
/* ----- END Declare/Define user variables ----- */



/* ----- Mesh Generation ----- */
// Borders
real L=b1-a1;
real H=d1-c1;
border rect1(t=0,1){x=a1+L*t;y=c1;label=1;};
border rect2(t=0,1){x=b1;y=c1+H*t;label=2;};
border rect3(t=0,1){x=a1+L*(1-t);y=d1;label=3;};
border rect4(t=0,1){x=a1;y=c1+H*(1-t);label=4;};

mesh Th = buildmesh(rect1(nx)+rect2(ny)+rect3(nx)+rect4(ny));
plot(Th, wait=true, ps="poisson_2d_mesh.ps"); //Display and save mesh
/* ----- END Mesh Generation ----- */

fespace Vh(Th, P2);


// Declare/Define source term (zero right-hand side here)
Vh fh1;
fh1 = 0;


varf a(u, v)
   = int2d(Th)(dx(u)*dx(v) + dy(u)*dy(v))
   + int2d(Th)(-fh1*v)
   + on(4, u=0.0)
   + on(2, u=1.0);

Mat A;
createMat(Th, A, P2);
set(A, sparams="-pc_type hypre");

real[int] b(Vh.ndof);

A = a(Vh, Vh);
b = a(0, Vh);

Vh uh;


uh[] = A^-1 * b;


/* ----- Compute L2 error ----- */
real errorL2, normL2;
real errorH1, normH1;

// Exact solution: linear in x, with u=0 on label 4 and u=1 on label 2
func u1 = (x-a1)/(b1-a1);
Vh u = u1;




// Compute L2 relative error
normL2 = sqrt(int2d(Th)(u^2));
errorL2 = sqrt(int2d(Th)((u-uh)^2)) / normL2;
//cout << "L2-error relative is " << errorL2 << endl;

// Compute H1 relative error
normH1 = sqrt(normL2^2 + int2d(Th)(dx(u)^2 + dy(u)^2));
errorH1 = sqrt( sqrt(int2d(Th)((u-uh)^2)) + int2d(Th)((dx(u)-dx(uh))^2 + (dy(u)-dy(uh))^2)) / normH1;
//cout << "H1-error relative is " << errorH1 << endl;
/* ----- End Compute L2 error ----- */


real globalL2error;
real globalH1error;

mpiAllReduce(errorL2, globalL2error, mpiCommWorld, mpiSUM);
mpiAllReduce(errorH1, globalH1error, mpiCommWorld, mpiSUM);
if (mpirank == 0){
    cout << "The errorL2 is " << globalL2error << endl;
    cout << "The errorH1 is " << globalH1error << endl;
}

You are summing contributions computed on the overlapping local meshes (each process's Th overlaps its neighbors'), so you either need to use the partition of unity or compute the contributions on non-overlapping subdomains, see, e.g., FreeFem-sources/examples/hpddm/minimal-surface-Tao-2d-PETSc.edp at master · FreeFem/FreeFem-sources · GitHub.
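For illustration, here is a minimal sketch of the partition-of-unity option, assuming the createPartition macro provided by macro_ddm.idp in recent FreeFem versions (the names localSq and globalSq are made up for this example). Each element of the local mesh is weighted so the overlap is counted exactly once when the local contributions are summed, and the squared integral is reduced before any square root is taken:

// Sketch: global L2 error with a partition of unity (assumes Th, u, uh as above)
fespace Ph(Th, P0);
Ph part;
createPartition(Th, part[], P0);
// Weight the integrand by part so overlapping elements are not double-counted
real localSq = int2d(Th)(part*(u - uh)^2);
real globalSq;
mpiAllReduce(localSq, globalSq, mpiCommWorld, mpiSUM);
if (mpirank == 0)
    cout << "global L2 error = " << sqrt(globalSq) << endl;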


See also Integral in parallel - #4 by prj. Please search the forum.


Thank you so much for your suggestions and for your time. I tried the partition of unity and also scaling the integrals, but the errors are still large compared to the serial code.
If you have any additional time, could you take a look at my implementation of the partition of unity to see whether I did anything wrong? Thank you so much in advance!

// Define the partition of unity

fespace Vhpart(Th, P0);  // P0 elements for the partition function

Vhpart part;  // Partition function
createPartition(Th, part[], P0);  // Build the partition of unity on the local mesh Th
mesh ThNo = trunc(Th, abs(part - 1.0) < 1e-1);  // Keep only elements fully owned by this process


normL2 = sqrt(int2d(ThNo)(u^2));
errorL2 = sqrt(int2d(ThNo)((u - uh)^2)) / normL2;

// Compute H1 error on the truncated mesh ThNo
normH1 = sqrt(normL2^2 + int2d(ThNo)(dx(u)^2 + dy(u)^2));
errorH1 = sqrt(sqrt(int2d(ThNo)((u - uh)^2)) + int2d(ThNo)((dx(u) - dx(uh))^2 + (dy(u) - dy(uh))^2)) / normH1;

/* ----- MPI Reduce ----- */
real globalL2error;
real globalH1error;

mpiAllReduce(errorL2, globalL2error, mpiCommWorld, mpiSUM);
mpiAllReduce(errorH1, globalH1error, mpiCommWorld, mpiSUM);

if (mpirank == 0) {
    cout << "The global L2 error is " << globalL2error << endl;
    cout << "The global H1 error is " << globalH1error << endl;
}

Of course, the sum of square roots is not the same as the square root of the sum: you must reduce the squared integrals across processes first, and only take the square roots (and form the ratios) afterwards.
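Concretely, here is a hedged sketch of the corrected reduction, reusing the names from the snippet above: accumulate the squared local integrals in an array, reduce once with mpiAllReduce, then form the relative errors from the global sums.

// Reduce the *squared* local integrals; take square roots only afterwards
real[int] loc(4), glob(4);
loc[0] = int2d(ThNo)((u - uh)^2);                          // local ||u-uh||_L2^2
loc[1] = int2d(ThNo)(u^2);                                 // local ||u||_L2^2
loc[2] = int2d(ThNo)((dx(u)-dx(uh))^2 + (dy(u)-dy(uh))^2); // local |u-uh|_H1^2
loc[3] = int2d(ThNo)(dx(u)^2 + dy(u)^2);                   // local |u|_H1^2
mpiAllReduce(loc, glob, mpiCommWorld, mpiSUM);
if (mpirank == 0) {
    real errL2 = sqrt(glob[0]/glob[1]);
    real errH1 = sqrt((glob[0] + glob[2])/(glob[1] + glob[3]));
    cout << "The global L2 error is " << errL2 << endl;
    cout << "The global H1 error is " << errH1 << endl;
}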