Iterative solver and direct solver results are hugely different with some meshes

Hey FreeFem team, I was recently debugging my code and found that, with a mesh generated by Gmsh, the results from the direct solver and the iterative solver under ffddm are totally different. I believe the problem comes from the mesh, since I tested with other meshes built directly in FreeFEM.
My question is: in which cases will the results of the iterative solver and the direct solver disagree? The mesh from Gmsh looks fine. Thank you.

If you use PETSc, there are automatic tools to help you diagnose such a scenario, e.g., computing the algebraic error at the end of each linear solve, or getting a message telling you whether the iterative solver converged or broke down.

Thanks for the reply, prj. Could you please show me how to do so? Do I need to pass the flags via the terminal?

Here is a simple example. You can set parameters either via the command line or through the set function.

$ cat simple.edp
load "PETSc"
include "macro_ddm.idp"

macro grad(u)[dx(u), dy(u)]// EOM   // two-dimensional gradient
func Pk = P1;

mesh Th = square(getARGV("-global", 40), getARGV("-global", 40));
Mat A;
createMat(Th, A, Pk)

fespace Wh(Th, Pk);
varf vPb(u, v) = int2d(Th)(grad(u)' * grad(v)) + int2d(Th)(v) + on(1, u = 0.0);

set(A, sparams = "-ksp_converged_reason -ksp_view_final_residual");
A = vPb(Wh, Wh, tgv = -1);
real[int] rhs = vPb(0, Wh, tgv = -1);
real[int] u = A^-1 * rhs;

set(A, sparams = "-ksp_max_it 20");   // cap the iteration count: the second solve below breaks down
u = A^-1 * rhs;
$ ff-mpirun -n 4 simple.edp -v 0
Linear solve converged due to CONVERGED_RTOL iterations 102
KSP final norm of residual 4.46724e-07
Linear solve did not converge due to DIVERGED_ITS iterations 20
KSP final norm of residual 0.0108944

OK, the message I got is “Linear solve did not converge due to DIVERGED_ITS iterations 10000 & KSP final norm of residual 0.00101905”. It seems my system is bad.

What kind of preconditioner are you using (-ksp_view)? Do you get the proper solution with an exact method (-pc_type lu)?
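With the set function from the snippet above, those two checks would read roughly as follows (a sketch reusing A and rhs from simple.edp):

set(A, sparams = "-ksp_view");                      // show which KSP/PC is actually being used
u = A^-1 * rhs;
set(A, sparams = "-ksp_type preonly -pc_type lu");  // exact LU factorization, no iterations
u = A^-1 * rhs;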

The output of -ksp_view looks like this:

When I change it to -pc_type lu, it is good: the result matches the direct solver, and the KSP residual converges as well.

Are you solving an elasticity problem? BJacobi + ILU as subdomain solver is probably the worst kind of preconditioner for this problem. You can use optimal multigrid methods, see the 2D or 3D parameters. If this still does not converge well, could you share your script?
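The parameters in those examples essentially boil down to switching to algebraic multigrid; as a rough sketch (the exact options in the linked 2D/3D examples may differ):

// smoothed-aggregation multigrid instead of the default Block Jacobi + ILU
set(A, sparams = "-pc_type gamg -ksp_type gmres -ksp_rtol 1e-8 -ksp_converged_reason");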

Yes, I am solving an elasticity problem. I will check the link you gave me, thank you very much. Now I know how to check the residual of the PETSc iterative solver. I will let you know whether the 2D or 3D parameters work.

One more question: since I passed the PETSc flags like this, how can I pass "nearnullspace = Rb" to FreeFem?

There is no real point in using the ffddm wrapper on top of PETSc here; it mostly just adds boilerplate code, so I’d stick to plain PETSc code. But I think you can pass the near-null space by doing the following just before calling elastictyfGMRES:

set(elastictypetscOP, sparams = "...", nearnullspace = ..., bs = dim);

Do not forget to put bs = dim (2 or 3); it’s automatically set in the PETSc example, but you’ll need it in your script as well.

Note: there is a typo in your script; it reads elasticty instead of elasticity. If you fix that, you’ll need to use the variable elasticitypetscOP.
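For completeness, in 2D the near kernel Rb consists of the three rigid body modes, built roughly like this (a sketch, assuming a vector fespace Wh and the usual def(u) = [u, u#B] convention from the PETSc examples):

Wh[int] def(Rb)(3);           // 3 rigid body modes in 2D
[Rb[0], RbB[0]] = [1, 0];     // translation along x
[Rb[1], RbB[1]] = [0, 1];     // translation along y
[Rb[2], RbB[2]] = [y, -x];    // in-plane rotation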

Thanks for the reply. I am using ffddm because the way it passes the varf seems easy to me; I tried the PETSc framework before and could not manage it at that stage. Thanks for the information.

The PETSc interface does not rely on any macro: you define your varf as you would in any FreeFEM script. It’s much simpler than ffddm in that sense.
Just look at the elasticity example again.
If you type your varf in plain text, I can give you the PETSc “equivalent”, which will be basically exactly the same.
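To illustrate, a minimal 2D elasticity solve with the plain PETSc interface would look roughly like this (a sketch modeled after the elasticity example; the Lamé coefficients, boundary label, and load are placeholders):

load "PETSc"
include "macro_ddm.idp"

macro def(u)[u, u#B]//                 // vector field definition
macro init(u)[u, u]//                  // vector field initialization
real mu = 1.0, lambda = 1.0;           // placeholder Lamé coefficients
macro epsilon(u)[dx(u), dy(u#B), (dy(u) + dx(u#B)) / sqrt(2.0)]//
macro div(u)(dx(u) + dy(u#B))//
func Pk = [P1, P1];

mesh Th = square(40, 40);
Mat A;
createMat(Th, A, Pk)

fespace Wh(Th, Pk);
varf vPb([u, uB], [v, vB]) = int2d(Th)(lambda * div(u) * div(v) + 2.0 * mu * (epsilon(u)' * epsilon(v))) + int2d(Th)(-1.0 * vB) + on(1, u = 0.0, uB = 0.0);
A = vPb(Wh, Wh, tgv = -1);
real[int] rhs = vPb(0, Wh, tgv = -1);
set(A, sparams = "-pc_type gamg -ksp_converged_reason", bs = 2);
real[int] sol = A^-1 * rhs;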

Sure, I will test it. Thanks!