PETSc with fully periodic boundary conditions

Dear all,

I am trying to use PETSc in 3D with fully periodic boundary conditions.
Here is my code:

load "msh3"
load "medit"
include "getARGV.idp"
load "PETSc"     
include "macro_ddm.idp"             
macro dimension() 3 // EOM // 2D or 3D


//######################################################################  MESHING ###################################################################
int number = getARGV( "-number", 1);
int Nb = 20;
real R = 0.15; 

// Parameters
searchMethod=1; // Safer search algorithm
real a = 1, d = 2*R, h = 1;
int nnb = Nb, nni = Nb;
int nz = 12;
func zmin = 0;
func zmax = h;


// Mesh 2D
border b1(t=0.5, -0.5){x=a*t; y=-a/2; label=1;}
border b2(t=0.5, -0.5){x=a/2; y=a*t; label=2;}
border b3(t=0.5, -0.5){x=a*t; y=a/2; label=3;}
border b4(t=0.5, -0.5){x=-a/2; y=a*t; label=4;}

border i(t=0, 2*pi){ x = R*cos(t); y = -R*sin(t); label=7; }

mesh Th = buildmesh(b1(-nnb) + b3(nnb) + b2(-nnb) + b4(nnb) + i(nni));
                    
{ // Cleaning the memory correctly
    int[int] old2new(0:Th.nv-1);
    fespace Vh2(Th, P1);
    Vh2 sorder = x + y;
    sort(sorder[], old2new);
    int[int] new2old = old2new^-1; // Inverse permutation
    Th = change(Th, renumv=new2old);
}

// Mesh 3D
int[int] rup = [0, 5], rlow = [0, 6], rmid = [1, 1, 2, 2, 3, 3, 4, 4, 7, 7], rtet = [0, 41];
mesh3 Th3 = buildlayers(Th, nz, zbound=[zmin, zmax],
    reftet=rtet, reffacemid=rmid, reffaceup=rup, reffacelow=rlow);

//plot(Th3, wait=1); 
//int Meshsize = Th3.nt;
//cout << Meshsize << endl;

//############################################################################################################################################

macro def(i)[i, i#B, i#C, i#D]// EOM     // vector field definition
macro init(u)[u,u,u,u]// EOM
macro Grad(u) [dx(u),dy(u),dz(u)]// EOM
macro div(u1,u2,u3) (dx(u1)+dy(u2)+dz(u3)) //EOM

int[int] labPeriodic = [1, 3, 2, 4, 5, 6];

Mat A;
int[int] n2o;
macro Th3N2O()n2o//
macro Th3Periodic()[[labPeriodic[0], x,z], [labPeriodic[1],  x, z], [labPeriodic[2],  y, z], [labPeriodic[3], y, z], [labPeriodic[4], x,y], [labPeriodic[5], x,y]]//
macro Pk() [P2, P2, P2, P1], periodic = Th3Periodic// EOM

DmeshCreate(Th3);
MatCreate(Th3, A, Pk);

fespace Vh(Th3, Pk);
func F=[1.,0,0];

varf vPb([u1, u2, u3, p], [v1, v2, v3, q])  = int3d(Th3,qforder=3)( Grad(u1)'*Grad(v1) +  Grad(u2)'*Grad(v2) +  Grad(u3)'*Grad(v3) - div(u1,u2,u3)*q - div(v1,v2,v3)*p  + 1e-7*q*p)
                                             +int3d(Th3, qforder = 3)(F'*[v1, v2, v3])
                                            + on(7, u1=0, u2=0, u3=0);


real[int] rhs = vPb(0, Vh);
A = vPb(Vh, Vh);

set(A,sparams="-ksp_type gmres -pc_type none -ksp_monitor_true_residual -ksp_converged_reason");
Vh def(u);                         // local solution

u[] = A^-1 * rhs;

However, the solver does not converge for this simple case, and I get

Linear solve did not converge due to DIVERGED_BREAKDOWN iterations 30.

Does someone have an idea what could be wrong in my implementation?

Thank you in advance,

Best regards,

Loïc,

I think you should put something for pc_type.
Also, even with the renumbering of Th (Th = change(Th, renumv=new2old)), there is probably a problem with the periodic opposite faces, which do not match when you use PETSc with several processes (because of the distributed mesh).
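For instance, instead of -pc_type none you could start from a direct factorization, which is robust for moderate problem sizes (a sketch based on your set(A, ...) line; the available PC types depend on your PETSc build):

```freefem
// Sketch: replace "-pc_type none" with an actual preconditioner,
// e.g. LU factorization, which works well at moderate discretizations.
set(A, sparams = "-ksp_type gmres -pc_type lu "
               + "-ksp_monitor_true_residual -ksp_converged_reason");
```

With -pc_type none, GMRES is applied to the unpreconditioned saddle-point system, which is a hard test for any Krylov method.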

I am using only one process, which is enough for my application. I am using PETSc only to have access to more preconditioning options. I have tried other preconditioners as well, but I get the same behaviour.

Best regards,

Loïc,

When I run your code with -pc_type lu, Nb = 10, and 1 process, it works:

0 KSP preconditioned resid norm 1.262935356419e+01 true resid norm 1.249711693053e-02 ||r(i)||/||b|| 1.000000000000e+00
1 KSP preconditioned resid norm 2.530565463067e-09 true resid norm 2.224209956742e-14 ||r(i)||/||b|| 1.779778463389e-12
Linear solve converged due to CONVERGED_RTOL iterations 1

Thanks, but when the discretization is refined I would like to use an iterative solver with a preconditioner. Would you know a solver/preconditioner adapted for this case, since I have some trouble with GMRES?

Best regards,

Loïc

I don’t know what the “good” choices are.

Thanks,
As soon as I use GMRES, the linear solve diverges.

Do not use the default tgv value. If you switch to tgv = -1, GMRES won’t break down.
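Concretely, that means passing tgv = -1 when assembling both the matrix and the right-hand side, so the Dirichlet rows from on(7, ...) are eliminated exactly instead of being penalized with the default huge value (a minimal sketch adapted from the code above):

```freefem
// Assemble with tgv = -1: Dirichlet degrees of freedom are eliminated
// (diagonal set to 1, row zeroed) instead of being penalized with the
// default huge tgv (~1e30), which otherwise pollutes the spectrum of
// the operator and can make GMRES break down.
real[int] rhs = vPb(0, Vh, tgv = -1);
A = vPb(Vh, Vh, tgv = -1);
```

Both the matrix and the right-hand side must be assembled with the same tgv value, otherwise the boundary conditions are inconsistent.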