Maxwell equation with periodic boundary conditions using PETSc

Hello,

I’m working on a Maxwell problem with periodic boundary conditions. I would like to parallelize my implementation, so I used the Maxwell3D_PETSc example as a starting point. However, I’m facing a problem: I don’t know how to deal with the macro ThPkPart in the periodic setup. I would therefore appreciate some help with adapting this example to the case of periodic boundary conditions. Here is a naive attempt on my part:

load "PETSc"                        // PETSc plugin
load "mat_edgeP1"
load "Element_Mixte3d"
macro dimension()3// EOM            // 2D or 3D
include "macro_ddm.idp"
include "cube.idp"

macro Curl(ux, uy, uz)[dy(uz)-dz(uy), dz(ux)-dx(uz), dx(uy)-dy(ux)]// EOM
macro CrossN(ux, uy, uz)[uy*N.z-uz*N.y, uz*N.x-ux*N.z, ux*N.y-uy*N.x]// EOM
macro Pk() Edge03d, periodic=[[LP[0],y,z],[LP[1],y,z]] // EOM
func PkNode = P1;

int[int] LP = [1,2,3,4,5,6];
mesh3 Th = cube(getARGV("-global", 10), getARGV("-global", 10), getARGV("-global", 10));

fespace Wh(Th, Pk);

Mat A;
int[int] n2o;
macro ThN2O()n2o// EOM              // keep the new-to-old element map (used by restrict below)
macro ThPeriodicity() LP // EOM     // pass the periodic face labels to the DD macros
DmeshCreate(Th);

{
    macro ThPkPart() Edge03ds0// EOM special FE for the domain partitioning
    MatCreate(Th, A, Pk);
}

real[int] rhs(Wh.ndof);
varf vPb([Ex,Ey,Ez],[vx,vy,vz]) =
    int3d(Th)(Curl(vx,vy,vz)'*Curl(Ex,Ey,Ez))
    + int3d(Th)([vx,vy,vz]'*[Ex,Ey,Ez])
    + on(1, Ex=0, Ey=0, Ez=0);
A = vPb(Wh, Wh);
func f = exp(-2.0*((x-0.5)^2+(y-0.5)^2+(z-0.5)^2));
varf vPbRhs([Ex,Ey,Ez],[vx,vy,vz]) =
    - int3d(Th)([vx,vy,vz]'*[0,0,f])
    + on(1, Ex=0,Ey=0,Ez=0);
rhs = vPbRhs(0, Wh);

{
    // auxiliary operators needed by the hypre AMS preconditioner
    Mat B;
    MatCreate(Th, B, P1);               // nodal (P1) distributed matrix
    fespace VhNode(Th, PkNode);
    matrix GLoc;
    MatrixEdgeP1(GLoc, Th);             // local discrete gradient (P1 -> edge)
    Mat G(A, B, GLoc);                  // distributed discrete gradient
    // vertex coordinates in PETSc numbering, one row per component
    VhNode coord = x;
    real[int] tmp;
    ChangeNumbering(B, coord[], tmp);
    real[int, int] coordPETSc(3, tmp.n);
    for(int j = 0; j < 3; ++j) {
        for[i, ui : tmp] coordPETSc(j, i) = ui;
        if(j == 0)
            coord = y;
        else if(j == 1)
            coord = z;
        ChangeNumbering(B, coord[], tmp);
    }
    set(A, sparams = "-pc_type hypre -pc_hypre_type ams -ksp_monitor -ksp_view -ksp_max_it 40", coordinates = coordPETSc, gradient = G);
}

Wh [solx, soly, solz];
solx[] = A^-1 * rhs;
if(!NoGraphicWindow) {
    real[int] sol;
    ChangeNumbering(A, solx[], sol);                 // FreeFEM -> PETSc numbering
    ChangeNumbering(A, solx[], sol, inverse = true); // and back, to make duplicated unknowns consistent
    mesh3 ThPlt = cube(getARGV("-global", 10), getARGV("-global", 10), getARGV("-global", 10));
    fespace WhPlt(ThPlt, Pk);
    WhPlt [pltx, plty, pltz];
    WhPlt [reducex, reducey, reducez];
    int[int] rest = restrict(Wh, WhPlt, n2o);
    for[i, v : rest] pltx[][v] = solx[][i];
    mpiReduce(pltx[], reducex[], processor(0, mpiCommWorld), mpiSUM);
    if(mpirank == 0)
        medit("Global solution", ThPlt, [real(reducex), real(reducey), real(reducez)]);
}

Thank you!

I’m not sure edge elements handle periodic boundary conditions; this is not related to PETSc.

$ cat bug.edp 
load "Element_Mixte3d"
include "getARGV.idp"
include "cube.idp"

macro Pk() Edge03d, periodic=[[LP[0],y,z],[LP[1],y,z]] // EOM
func PkNode = P1;

int[int] LP = [1,2,3,4,5,6];
mesh3 Th = cube(getARGV("-global", 10), getARGV("-global", 10), getARGV("-global", 10));

fespace Wh(Th, Pk);
$ FreeFem++ bug.edp -v 0
  current line = 11
Assertion fail : (hmn>1.0e-20)
	line :215, in file lgmesh3.cpp
Assertion fail : (hmn>1.0e-20)
	line :215, in file lgmesh3.cpp
 err code 6 ,  mpirank 0

Dear Professor Prj,

Thank you for your reply. I made a change to the mesh generation command (shown below), and the error associated with the finite element space definition no longer occurred, although I don’t understand why. After this change the code seems to work, but only with one process; with more processes I get a PETSc error. But, as you mentioned, edge elements may not handle periodic boundary conditions. The full code that generates a PETSc error with more than one process follows the snippet.
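For reference, this is the only change I made to the mesh generation, switching from the lowercase cube call to the Cube function from cube.idp with a fixed size:

// before: failed with the assertion above
mesh3 Th = cube(getARGV("-global", 10), getARGV("-global", 10), getARGV("-global", 10));
// after: the fespace definition now succeeds
int n = 3;
mesh3 Th = Cube(5*n, 5*n, 5*n);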


load "PETSc"                        // PETSc plugin
load "mat_edgeP1"
load "Element_Mixte3d"
macro dimension()3// EOM            // 2D or 3D
include "macro_ddm.idp"
include "cube.idp"

macro Curl(ux, uy, uz)[dy(uz)-dz(uy), dz(ux)-dx(uz), dx(uy)-dy(ux)]// EOM
macro CrossN(ux, uy, uz)[uy*N.z-uz*N.y, uz*N.x-ux*N.z, ux*N.y-uy*N.x]// EOM
macro Pk() Edge03d, periodic=[[LP[0],y,z],[LP[1],y,z],[LP[2],x,z],[LP[3],x,z],[LP[4],x,y],[LP[5],x,y]] // EOM
func PkNode = P1;

int n = 3;
int[int] LP = [1,2,3,4,5,6];
mesh3 Th = Cube(5*n, 5*n, 5*n);
//medit("Th", Th);

fespace Wh(Th, Pk);
Mat A;
int[int] n2o;
macro ThN2O()n2o// EOM
macro ThPeriodicity() LP // EOM
DmeshCreate(Th);

{
    macro ThPkPart() Edge03ds0// EOM special FE for the domain partitioning
    MatCreate(Th, A, Pk);
}

real[int] rhs(Wh.ndof);
varf vPb([Ex,Ey,Ez],[vx,vy,vz]) =
    int3d(Th)(Curl(vx,vy,vz)'*Curl(Ex,Ey,Ez))
    + int3d(Th)([vx,vy,vz]'*[Ex,Ey,Ez]);
    //+ on(1, Ex=0, Ey=0, Ez=0);
A = vPb(Wh, Wh);
func f = exp(-1.0*((x-0.5)^2+(y-0.5)^2+(z-0.5)^2));
varf vPbRhs([Ex,Ey,Ez],[vx,vy,vz]) =
    - int3d(Th)([vx,vy,vz]'*[0,0,f]);
    //+ on(1, Ex=0, Ey=0, Ez=0);
rhs = vPbRhs(0, Wh);

{
    Mat B;
    MatCreate(Th, B, P1);
    fespace VhNode(Th, PkNode);
    matrix GLoc;
    MatrixEdgeP1(GLoc, Th);
    Mat G(A, B, GLoc);
    VhNode coord = x;
    real[int] tmp;
    ChangeNumbering(B, coord[], tmp);
    real[int, int] coordPETSc(3, tmp.n);
    for(int j = 0; j < 3; ++j) {
        for[i, ui : tmp] coordPETSc(j, i) = ui;
        if(j == 0)
            coord = y;
        else if(j == 1)
            coord = z;
        ChangeNumbering(B, coord[], tmp);
    }
    set(A, sparams = "-pc_type hypre -pc_hypre_type ams -ksp_monitor -ksp_view -ksp_max_it 200", coordinates = coordPETSc, gradient = G);
}

Wh [solx, soly, solz];
solx[] = A^-1 * rhs;

if(!NoGraphicWindow) {
    real[int] sol;
    ChangeNumbering(A, solx[], sol);
    ChangeNumbering(A, solx[], sol, inverse = true);
    mesh3 ThPlt = Cube(5*n, 5*n, 5*n);
    fespace WhPlt(ThPlt, Pk);
    WhPlt [pltx, plty, pltz];
    WhPlt [reducex, reducey, reducez];
    int[int] rest = restrict(Wh, WhPlt, n2o);
    for[i, v : rest] pltx[][v] = solx[][i];
    mpiReduce(pltx[], reducex[], processor(0, mpiCommWorld), mpiSUM);
    if(mpirank == 0)
        medit("Global solution", ThPlt, [real(reducex), real(reducey), real(reducez)]);
}

Thank you!

Very interesting “bug”. The fix is to first define:

macro PkPart() Edge03ds0, periodic=[[LP[0],y,z],[LP[1],y,z],[LP[2],x,z],[LP[3],x,z],[LP[4],x,y],[LP[5],x,y]] // EOM

And then to use it as:

    macro ThPkPart() PkPart// EOM special FE for the domain partitioning
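Put together, the partitioning block of your code becomes (a minimal sketch, assuming the same LP labels as in your script):

macro PkPart() Edge03ds0, periodic=[[LP[0],y,z],[LP[1],y,z],[LP[2],x,z],[LP[3],x,z],[LP[4],x,y],[LP[5],x,y]] // EOM

{
    macro ThPkPart() PkPart// EOM special FE for the domain partitioning
    MatCreate(Th, A, Pk);
}

That way, the finite element used for the domain partitioning carries the same periodicity as Pk itself.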

Yes indeed, a very interesting/weird “bug”.

It now works fine with more than one process. It will certainly be helpful for my applications.

Thank you so much for your precise and quick replies!
Best regards!