Exporting Matrices of the AS Method for the Poisson Equation

I want to decompose a 3D mesh (sphere) into multiple subdomains and export the stiffness matrices / load vectors to solve the problem with an external CG solver.
Unfortunately it seems that the local matrices are non-symmetric.

// ff-mpirun -np 4 sphere2.edp -wg -ffddm_overlap 2 -ffddm_schwarz_method asm

macro dimension 3 // EOM

verbosity = 0;

load "msh3"
load "tetgen"
// load "medit"
// load "iovtk"

include "MeshSurface.idp"
include "ffddm.idp"

real radius = 5;
real hh = radius * 0.1;
meshS ThS = Sphere(radius, hh, 1, 0., 0., 0., 1);
// meshS ThS = Sphere(radius, 500., 500., 500., hh, 1, 1); // according to the documentation
// plot(ThS,wait=1);

real[int] domain = [0., 0., 0., 1, hh^3]; // x, y, z, region attribute, max tetrahedron volume
mesh3 Th3 = tetg(ThS, switch="pqaAAQYY", nbofregions=1, regionlist=domain);
// savevtk("sphere.vtu", Th3);

// Mesh refinement
cout << "Num vertices coarse: " << Th3.nv << ", hmax = " << Th3.hmax << endl;
// medit("Coarse", Th3);

int refineSteps = 0;
for (int i = 0; i < refineSteps; i++)
{
    Th3 = trunc(Th3, 1, split=2);
    cout << "Num vertices fine: " << Th3.nv << ", hmax = " << Th3.hmax << endl;
}
// medit("Fine", Th3);


func bc = x^3*z^2*sin(y);

macro Grad3(u) [dx(u),dy(u),dz(u)] // EOM

macro Varf(varfName, meshName, VhName)
    varf varfName(u,v) = int3d(meshName)(Grad3(u)'* Grad3(v)) + on(1, u=bc);
// EOM

// Domain decomposition
ffddmbuildDmesh( LapMesh , Th3 , mpiCommWorld )
// medit("sphere" + mpirank, LapMeshThi);
int[int] ll = labels(LapMeshThi);
meshS surf = extract(LapMeshThi, label=ll);
cout << "Rank " << mpirank << ": Num vertices " << LapMeshThi.nv << ", extracted vertices: " << surf.nv << endl;
// savevtk("Th" + mpirank + ".vtk", LapMeshThi);

macro def(u) u// EOM
macro init(u) u// EOM
ffddmbuildDfespace(LapFE, LapMesh, real, def, init, P1)
ffddmsetupOperator( PB , LapFE , Varf )

func f = -6*x*z^2*sin(y)+x^3*z^2*sin(y) -2*x^3*sin(y);
macro myVarfrhs(varfName, meshName, VhName)
    varf varfName(u,v) = int3d(meshName)(f*v) + on(1, u=bc); // rhs with f = -Laplacian(bc), manufactured solution
// EOM

real[int] rhsi(0);
ffddmbuildrhs(PB, myVarfrhs, rhsi)

if (mpisize == 1)
{
    ofstream fout("A_global.txt");
    fout << PBaRd[0] << endl;
}
else
{
    ofstream fout("A_local" + mpirank + ".txt");
    fout << PBaRd[mpirank] << endl;
}

if (mpisize == 1)
{
    ofstream fout("R_global.txt");
    fout << LapFERih[0] << endl;
}
else
{
    ofstream fout("R_local" + mpirank + ".txt");
    fout << LapFERih[mpirank] << endl;
}

if (mpisize == 1)
{
    ofstream fout("rhs_global.txt");
    fout << rhsi << endl;
}
else
{
    ofstream fout("rhs_local" + mpirank + ".txt");
    fout << rhsi << endl;
}

if (mpisize == 1)
{
    ofstream fout("PoU_global.txt");
    fout << LapFEDih[0] << endl;
}
else
{
    ofstream fout("PoU_local" + mpirank + ".txt");
    fout << LapFEDih[mpirank] << endl;
}
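
For reference, these dumps are exactly the ingredients of the one-level additive Schwarz preconditioner M^-1 = sum_i Ri^T Ai^-1 Ri (with the partition-of-unity matrices Di you get the restricted variant, M^-1 = sum_i Ri^T Di Ai^-1 Ri). Below is a minimal sketch of the external CG solve in Python/SciPy; the two parsers are assumptions about the text layout FreeFEM writes and may need adapting to your version, and the global matrix and right-hand side are taken from a single-process run.

import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def load_coo(path):
    # Assumed layout of FreeFEM's matrix dump: '#' comment lines, one header
    # line starting with "n m nnz", then one "i j value" triplet per line.
    lines = [l for l in open(path) if l.strip() and not l.startswith("#")]
    n, m = (int(t) for t in lines[0].split()[:2])
    trip = np.array([l.split()[:3] for l in lines[1:]], dtype=float)
    base = int(trip[:, :2].min())  # detect 0- or 1-based indices
    return sp.coo_matrix((trip[:, 2], (trip[:, 0].astype(int) - base,
                                       trip[:, 1].astype(int) - base)),
                         shape=(n, m)).tocsr()

def load_vec(path):
    # Assumed layout of FreeFEM's real[int] dump: the length, then the entries.
    vals = np.array(open(path).read().split(), dtype=float)
    return vals[1:]

N = 4                                      # number of subdomains (MPI ranks)
A = load_coo("A_global.txt")               # global matrix, from a 1-process run
b = load_vec("rhs_global.txt")             # global rhs, from a 1-process run
Ai = [load_coo("A_local%d.txt" % i) for i in range(N)]  # local Dirichlet matrices
Ri = [load_coo("R_local%d.txt" % i) for i in range(N)]  # restrictions, local x global
solve = [spla.factorized(Ai[i].tocsc()) for i in range(N)]  # factor each A_i once

def asm(r):
    # one-level ASM: M^-1 r = sum_i Ri^T Ai^-1 (Ri r)
    z = np.zeros_like(r)
    for i in range(N):
        z += Ri[i].T @ solve[i](Ri[i] @ r)
    return z

M = spla.LinearOperator(A.shape, matvec=asm)
x, info = spla.cg(A, b, M=M)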

You can do that with PETSc quite trivially, with -ksp_type cg -pc_type asm. The interface is much more concise as well, see this example.

Here is the script using PETSc CG + ASM. You can also export the matrix and precondition it externally using the additional command line argument -mat_view ascii:Poisson.

macro dimension 3 // EOM

verbosity = 0;

load "msh3"
load "tetgen"
// load "medit"
// load "iovtk"

include "MeshSurface.idp"
load "PETSc"
macro def(u) u// EOM
macro init(u) u// EOM
include "macro_ddm.idp"

real radius = 5;
real hh = radius * 0.1;
meshS ThS = Sphere(radius, hh, 1, 0., 0., 0., 1);
// meshS ThS = Sphere(radius, 500., 500., 500., hh, 1, 1); // according to the documentation
// plot(ThS,wait=1);

real[int] domain = [0., 0., 0., 1, hh^3]; // x, y, z, region attribute, max tetrahedron volume
mesh3 Th3 = tetg(ThS, switch="pqaAAQYY", nbofregions=1, regionlist=domain);
// savevtk("sphere.vtu", Th3);

// Mesh refinement
cout << "Num vertices coarse: " << Th3.nv << ", hmax = " << Th3.hmax << endl;
// medit("Coarse", Th3);

int refineSteps = 0;
for (int i = 0; i < refineSteps; i++)
{
    Th3 = trunc(Th3, 1, split=2);
    cout << "Num vertices fine: " << Th3.nv << ", hmax = " << Th3.hmax << endl;
}
// medit("Fine", Th3);


func bc = x^3*z^2*sin(y);

macro Grad3(u) [dx(u),dy(u),dz(u)] // EOM

// Domain decomposition
buildDmesh( Th3 )
// medit("sphere" + mpirank, Th3);
int[int] ll = labels(Th3);
meshS surf = extract(Th3, label=ll);
cout << "Rank " << mpirank << ": Num vertices " << Th3.nv << ", extracted vertices: " << surf.nv << endl;
// savevtk("Th" + mpirank + ".vtk", Th3);

Mat A;
createMat( Th3, A, P1 )

func f = -6*x*z^2*sin(y)+x^3*z^2*sin(y) -2*x^3*sin(y);
varf Poisson(u,v) = int3d(Th3)(Grad3(u)'* Grad3(v)) + int3d(Th3)(f*v) + on(1, u=bc); // rhs with f = -Laplacian(bc), manufactured solution
fespace Vh(Th3, P1);
A = Poisson(Vh, Vh, sym = 1);
real[int] rhs = Poisson(0, Vh);
set(A, sparams = "-pc_type asm -pc_asm_overlap 2 -pc_asm_type basic -sub_pc_type cholesky -ksp_type cg -ksp_monitor -ksp_view");
real[int] sol = A^-1 * rhs;

Thank you for your script.

I have run it with “ff-mpirun -n 4 sphere3.edp -mat_view ascii:Poisson” and I get a “Poisson” text file containing a matrix description.
But it seems to contain only one partition? At least it says “Mat Object: 1 MPI processes”.

How can I get all the submatrices? Is there a way to visualize the subdomains?

You are correct, it’s the global matrix. You are free to partition this matrix however you deem appropriate in your external solver if you don’t want to use PETSc CG.

Is there a way to export the local matrices from the ASM as well?

I would like to implement the whole AS+CG algorithm and then later compare it to the solution of FreeFem. I just need some program to do the meshing/partitioning/assembly.

Here is what you can do:

  1. replace sym = 1 by sym = 0
  2. replace the current sparams by:
set(A, sparams = "-pc_type gasm -pc_gasm_overlap 2 -pc_gasm_type basic -sub_pc_type cholesky -ksp_type cg -ksp_monitor -ksp_view -pc_gasm_view_subdomains -ksp_view_mat ascii:Poisson:ascii_matlab -ksp_view_rhs ascii:rhs:ascii_matlab -sub_ksp_view_mat ascii:Poisson_"+mpirank+":ascii_matlab");

This will export a Poisson file, which is the global system, and one Poisson_rank file per process with the local subdomain matrix. In the KSP view, you’ll see the indices of each subdomain (inner means without overlap, outer means with overlap). It also saves the global right-hand side.
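
To load these into an external code: for a matrix, the ascii_matlab viewer writes the nonzeros as 1-based (row, column, value) triplets between a line “zzz = [” and a closing “];”, followed by a spconvert call. A small Python sketch under that assumption (double-check against the files your PETSc version produces):

import scipy.sparse as sp

def load_petsc_matlab(path):
    # Parse a PETSc ascii_matlab matrix dump: 1-based (row, col, value)
    # triplets listed between "zzz = [" and "];".
    rows, cols, vals = [], [], []
    inside = False
    for line in open(path):
        s = line.strip()
        if s.startswith("zzz = ["):
            inside = True
        elif inside and s.startswith("];"):
            break
        elif inside:
            i, j, v = s.split()
            rows.append(int(i) - 1)
            cols.append(int(j) - 1)
            vals.append(float(v))
    return sp.coo_matrix((vals, (rows, cols))).tocsr()

A = load_petsc_matlab("Poisson")     # global system
A0 = load_petsc_matlab("Poisson_0")  # local matrix of rank 0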

Thank you very much, that was really helpful.

Can I export the restriction matrices and the partition of unity as well?

The restriction is the list of indices of the outer subdomain, which you get with -ksp_view. In your external solver, you can define a matrix of dimension (size of the local matrix, which you also get from the -ksp_view) x (size of the global matrix, 5654 in your case). Then you fill each row with a one at the column index given in the -ksp_view, as in the sketch below.
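
In other words, each restriction R_i is a boolean matrix with a single 1 per row, sitting at the global column index of each local degree of freedom. A Python sketch of that construction, with a hypothetical index list standing in for the outer-subdomain indices copied from the -ksp_view output:

import numpy as np
import scipy.sparse as sp

def restriction(outer_indices, nglobal):
    # R_i: (local size) x (global size); row k has a single 1 in the column
    # given by the k-th outer-subdomain index from the -ksp_view output.
    nlocal = len(outer_indices)
    return sp.csr_matrix((np.ones(nlocal), (np.arange(nlocal), outer_indices)),
                         shape=(nlocal, nglobal))

# hypothetical outer index list for one subdomain, global size 5654
Ri = restriction([0, 1, 5, 42, 57], 5654)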