Global solution with PETSc

For verification purposes I would like to compare a solution computed in parallel with the sequentially computed one.

As a basis for a minimal working example, I took an example from Pierre Jolivet’s tutorial and tried to reduce the distributed local solutions to a global one. I tried to accomplish this the same way it is done in the maxwell-3d-PETSc.edp example:

load "PETSc"
load "medit"

macro dimension()2// EOM
include "macro_ddm.idp"

real c = 6.25;
int N  = 80;

mesh M;
{
    border a(t=0,1) { x=t;   y=0;   label=1; };
    border b(t=0,1) { x=1;   y=t;   label=1; };
    border c(t=0,1) { x=1-t; y=1;   label=1; };
    border d(t=0,1) { x=0;   y=1-t; label=1; };
    M = buildmesh(a(N)+b(N)+c(N)+d(N));
}

mesh ThPlt = M;

load "Element_P3"
func Pk = P3;
Mat J;

// new-to-old element numbering, meant to be filled when the Mat is created and used later by restrict
int[int] n2o;
macro ThN2O()n2o// EOM

createMat(M, J, Pk)

fespace Vh(M, Pk);
Vh u;
// boundary data, linearized (Jacobian) form vJ, and nonlinear residual form vRes
func BC = cos(pi*x)*cos(pi*y);
varf vInit(w, v) = on(1, w = BC);
varf vJ(w, v) = int2d(M)(dx(w)*dx(v) + dy(w)*dy(v) - c*exp(u)*w*v) + on(1, w = 0);
varf vRes(w, v) = int2d(M)(dx(u)*dx(v) + dy(u)*dy(v) - c*exp(u)*v) + on(1, w = u);
int it = 1;
real tol = 1.0e-6;
u[] = vInit(0, Vh, tgv = -1);
real[int] b = u[];
for(int i = 1; i < it; ++i) {
    real[int] v = vRes(0, Vh, tgv = -1);
    v -= b;
    real res = sqrt(J(v, v));
    if(mpirank == 0)
        cout << "Norm of the residual = " << res << endl;
    if(res < tol)
        break;
    matrix Loc = vJ(Vh, Vh, tgv = -1);
    J = Loc;
    real[int] inc = J^-1 * v;
    u[] -= inc;
    plotD(M, u, cmm = "Solution at iteration #" + i);
}
plotD(M, u, cmm = "Final solution");



// gather the distributed local solution onto the replicated global mesh ThPlt and show it on rank 0
if(!NoGraphicWindow) {
    real[int] sol;
    changeNumbering(J, u[], sol);
    changeNumbering(J, u[], sol, inverse = true);
    fespace WhPlt(ThPlt, Pk);
    WhPlt plt;
    WhPlt reduce;
    cout << mpirank << " " << u[].n << " " << sol.n << " " << Vh.ndof << " " << WhPlt.ndof << " " <<endl;
    int[int] rest = restrict(Vh, WhPlt, n2o);
    for[i, v : rest] plt[][v] = sol[i];
    mpiReduce(plt[], reduce[], processor(0, mpiCommWorld), mpiSUM);
    if(mpirank == 0)
        medit("Global solution", ThPlt, reduce); // To keep this example short I kept medit here instead of saving the solution or calculating the difference to a reference solution
}

But the restrict call (I think) results in a segfault:
[1]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range

I would appreciate it if someone would point out my mistake.

You made two small mistakes that explain the segfault. Here is the fix:

  1. macro ThN2O()n2o// EOM should read macro MN2O()n2o// EOM, because your mesh is named M, not Th as in maxwell-3d-PETSc.edp.
  2. changeNumbering(J, u[], sol); goes from FreeFEM to PETSc numbering, and changeNumbering(J, u[], sol, inverse = true); goes back from PETSc to FreeFEM numbering. You want to look at the values of u[], not sol, so for[i, v : rest] plt[][v] = sol[i]; is wrong and should read for[i, v : rest] plt[][v] = u[][i]; (see the sketch below).
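
For reference, here is a minimal sketch of the two corrections in context; everything else in the script stays as posted:

int[int] n2o;
// the macro name must match the mesh name, M, so that createMat actually fills n2o
macro MN2O()n2o// EOM
createMat(M, J, Pk)

// in the gathering block, copy the FreeFEM-numbered values of u[], not sol
int[int] rest = restrict(Vh, WhPlt, n2o);
for[i, v : rest] plt[][v] = u[][i];
mpiReduce(plt[], reduce[], processor(0, mpiCommWorld), mpiSUM);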

Let me know if I can be of any further assistance.


I can’t reply to you in that way. How can I get in touch with you?

Please do not spam other threads…

I seem to be having the same issue with the segmentation fault. I took the stokes-3d-PETSc example and adapted it based on my understanding of the original example above and the corrections from @prj. Here is the resulting edp file:

stokes-3d-PETScagain.edp (2.1 KB)

By systematically removing lines, I too find that the error only appears once the line

int[int] rest = restrict(Whbackup, Wh, n2o);

is included. As best I can tell, the two points from the corrections are already addressed: my mesh is named Th, matching the macro line, and I use the values of u[] in the for loop. I appreciate any suggestions on fixing this issue.

You are using buildMat; you need to use createMat instead. With the following line, I don’t get any segmentation fault.

createMat(Th, A, Pk)
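
For context, a rough sketch of the pattern this relies on, assuming the mesh, Mat, and finite element in your file are still named Th, A, and Pk: the N2O macro must be in place before the Mat is created so that n2o actually gets filled (with buildMat it presumably stays empty, which is why restrict then reads out of range).

int[int] n2o;
macro ThN2O()n2o// EOM
Mat A;
createMat(Th, A, Pk) // createMat, not buildMat, so that n2o is filled for the later restrict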

That works perfectly, thank you. I’ve been reviewing the other active thread here and now see that the createMat command is common to the two examples you mentioned at the top, which specifically gather the results at the end of the program.

As a follow-up question (maybe better moved to that thread if more pertinent): I notice that the file output now matches what another user described when running an older version of one of the examples, namely a single PVD file and four VTU files. The suggestion you gave them pointed to a revised version of that example that used the line

export("sol_stokes_3d_io", Th, sol, orderAndName, comm);

and it looks like that fixed the issue for them. In the most recent version of the example, that line has been revised again and now reads:

savevtk("stokes-io-3d.vtu", Th, [u, uB, uC], uD, order = fforder, dataname = "velocity pressure");

Is it advisable to switch back to the earlier export command to create a single VTU file as the output, or is there a way to alter my current version of the code to do so?

stokes-3d-PETScagain.edp (2.0 KB)

Do not use export. Use savevtk on the process which gathers the solution, e.g., mpirank == 0, with the additional parameter communicator = mpiCommSelf.
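
A minimal sketch of that call, reusing the ThPlt mesh and the gathered field reduce from the first example in this thread (the file name is arbitrary):

load "iovtk" // for savevtk, if not already loaded
if(mpirank == 0)
    savevtk("global-solution.vtu", ThPlt, reduce, dataname = "u", communicator = mpiCommSelf);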


Thank you, passing that parameter to savevtk, while making sure it is called on the mpirank == 0 process, does build a single mesh. However, while it contains the full geometry, it only keeps the solution from one of the four regions (at least from what I see in ParaView). Here is the current version of the script (I also had to alter the way the scalar and vector fields are passed, to match the Stokes output example):

stokes-3d-PETScagain.edp (2.1 KB)

and here are the PVD and VTU files it outputs:

output.zip (133.7 KB)

Well, of course… you don’t save the reduced solution. Same for the medit command…
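
To spell that out with hypothetical names ([uPlt, ...] is the local solution copied onto a replicated mesh ThBackup via restrict, [uRed, ...] the gathered result, and Pk and fforder are as in your script): only the reduced components should be passed to savevtk or medit, not the local u, uB, uC, uD.

fespace WhBackup(ThBackup, Pk);
WhBackup [uPlt, uPltB, uPltC, uPltD], [uRed, uRedB, uRedC, uRedD];
int[int] rest = restrict(Wh, WhBackup, n2o); // same restrict pattern as in the first example
for[i, v : rest] uPlt[][v] = u[][i];
mpiReduce(uPlt[], uRed[], processor(0, mpiCommWorld), mpiSUM); // sum the per-rank pieces on rank 0
if(mpirank == 0)
    savevtk("stokes-3d-global.vtu", ThBackup, [uRed, uRedB, uRedC], uRedD,
        order = fforder, dataname = "velocity pressure", communicator = mpiCommSelf);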


That did the trick, thank you very much for all your help! I attach the current version of the code for posterity.

stokes-3d-PETScagain.edp (2.1 KB)