what is the proper way to solve it efficiently?
What are you using currently? And why is it not giving you appropriate performance?
will PETSc reuse the inversion of the matrix?
Yes.
Unfortunately, I got a blow-up with my new code; every variable looks weird even at the first time step.
I am afraid that my boundary condition is imposed in the wrong way. It seems that constructing the right-hand side with
varf rhs(u, v) = int2d(Th)(uold*v) + on(1, u = 0); real[int] b = rhs(0, Vh, tgv = -1);
is wrong.
Maybe multiplying uold by a mass matrix, M*uold[], is the correct approach; I just thought the two were the same thing.
Actually, I am not sure how the time-derivative term should be discretized with finite elements.
Anyway, I will figure out what’s going wrong as soon as possible.
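For reference, here is a minimal sequential sketch, assuming a backward-Euler time step of size dt and homogeneous Dirichlet data on label 1 (neither of which is stated above), showing where the two right-hand sides actually differ: they agree at the interior DoFs, but only the varf assembly writes the Dirichlet value into the constrained rows, so rhs(0, Vh, tgv = -1) and M*uold[] are not the same thing.

mesh Th = square(50, 50);
fespace Vh(Th, P1);
Vh uold = x*(1. - x)*y*(1. - y);   // previous time step, zero on the boundary
real dt = 0.01;
// (a) right-hand side assembled from a varf: with tgv = -1, the on() term writes
//     the Dirichlet value (here 0) into the rows of the constrained DoFs
varf vrhs(u, v) = int2d(Th)(uold*v/dt) + on(1, u = 0);
real[int] b1 = vrhs(0, Vh, tgv = -1);
// (b) right-hand side built as a mass-matrix product: same values at interior DoFs,
//     but the constrained rows keep int2d(Th)(uold*v/dt) and would have to be
//     overwritten with the Dirichlet value by hand
varf vmass(u, v) = int2d(Th)(u*v/dt);
matrix M = vmass(Vh, Vh);
real[int] b2 = M*uold[];
real[int] diff = b1 - b2;
cout << "max difference (only at the DoFs on label 1): " << diff.linfty << endl;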
Hello, I am working on a pressure Poisson equation, and your tutorial (Section 8, example 5, the Poisson equation without a Dirichlet BC) has been very helpful. In the process, I came across a couple of questions:
The augmented system [[ A, b ], [ b', 0 ]] introduces one additional degree of freedom. Why is this extra DoF assigned to process 0?
How should I set up an appropriate solver? I tried gamg + gmres, but the performance is poor: it is very slow and even diverges when the mesh size reaches n = 200. I think this is due to the zero entry on the diagonal.
PCFIELDSPLIT and it will converge in a single iteration).
Hi, @prj,
Thanks for your reply. I’ve tried what you suggested, and the following sparams gives a converged result, but I’m not sure whether it is actually the correct or recommended configuration:
set(N, sparams = "-ksp_type fgmres -pc_type fieldsplit "
+ "-pc_fieldsplit_schur_precondition self -pc_fieldsplit_type schur -pc_fieldsplit_schur_fact_type full -pc_fieldsplit_detect_saddle_point "
+ "-fieldsplit_0_pc_type gamg -fieldsplit_0_ksp_type preonly ");
Could you please take a quick look at my solver settings?
If they give you satisfactory results, then good. It’s not what I recommended just prior, but that is fine.
Hello @prj,
It has been a few days since our last discussion.
For a Poisson problem with pure Neumann boundary conditions (i.e. PPE), is it possible to handle the null space using MatNullSpace? I know a similar problem was posted by @cmd, but I am not sure how to deal with it in practice.
If yes, how should it be specified through the PETSc interface?
I tried pinning one DoF via SetBC(), but this introduces a Dirichlet constraint, so the solution is spatially affected. I also tried removing the constant mode by modifying the RHS (subtracting the mean via an inner product), but this does not seem to work correctly in parallel.
Best regards,
Haocheng Weng
There is a named parameter in the set() command to specify the nullspace.
Hello @prj,
Thanks for the tip about the named parameter in set() for specifying the null space.
I searched for examples and tried the following minimal test for a Poisson problem with pure Neumann boundary conditions:
load "PETSc"
macro dimension()2//
include "macro_ddm.idp"
macro grad(u)[dx(u), dy(u)]//
func Pk = P2;
mesh Th = square(100,100);
Mat A;
MatCreate(Th, A, Pk);
fespace Vh(Th, Pk);
func f = 1 + x - y;
varf vA(u,v) = intN(Th)( grad(u)'*grad(v) ) + intN(Th)( f*v );
matrix Loc = vA(Vh, Vh, tgv = -2);
A = Loc;
real[int] b = vA(0, Vh);
// constant nullspace
real[int,int] Rb(Vh.ndof, 1);
Rb(:,0) = 1.0;
set(A,
    sparams = "-ksp_type preonly -pc_type lu -ksp_monitor_true_residual",
    nullspace = Rb);
Vh u;
u[] = A^-1 * b;
plotD(Th, u, cmm = "Solution");
Solvers such as gmres + gamg diverge.
Using lu + preonly does return a solution, but the result is distorted: the global trend looks reasonable, while local structures appear incorrect.
Is specifying the null space alone sufficient in this case?
Best regards,
Haocheng Weng
Dear Haocheng Weng,
Theoretically, imposing the condition u(x0) = 0 gives a unique solution, where x0 is an arbitrary point.
Did you try setBC, imposing a single DoF to be zero? This needs to be done at a single point of the whole domain, hence for example only on proc 0.
Another difficulty is that, for a solution to exist, it is necessary that \int_\Omega f = 0 (to see this, take v = 1 as the test function). In particular, you need to subtract from f its average over the whole domain, otherwise you get wrong results.
I think you need to do both corrections: subtract the mean from f, and fix a single DoF to 0.
To compute the integral of f over the whole domain in parallel, you need to use a partition of unity.
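For the minimal test posted above, f = 1 + x - y is an analytic function, so one simple (assumed) way to realize the mean subtraction is to compute the mean on a replicated global mesh that every process can build, or directly on the global square(100, 100) before it is distributed; the partition-of-unity computation described above becomes necessary once f only exists as a distributed finite-element field. A sketch:

mesh ThGlobal = square(100, 100);                  // same global mesh built on every process
func f = 1 + x - y;
real fMean = int2d(ThGlobal)(f) / int2d(ThGlobal)(1.);
func fTilde = f - fMean;                           // now int2d(ThGlobal)(fTilde) = 0
// use fTilde instead of f in the linear form:
// varf vA(u, v) = intN(Th)(grad(u)'*grad(v)) + intN(Th)(fTilde*v);

The single-DoF fix that Prof. Bouchut mentions is a separate step and is not shown here.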
Dear Prof. Bouchut,
Thank you for your suggestion. You are right: enforcing \int_\Omega f = 0 is crucial. After subtracting the global mean of f, the solution becomes consistent.
By the way, regarding SetBC(), which is correct: tgv = -1 or tgv = -2?
In particular, does tgv = -2 zero out both the row and the column associated with the constrained DoF, so that this DoF no longer contributes to the other DoFs?
Best regards,
Haocheng Weng
tgv = -1 is always correct. Here the value -2 of tgv is also correct because the right-hand side entry (corresponding to the imposed DoF) is zero. A good point of tgv = -2 is that it leads to a symmetric matrix, so that (depending on the method used for the linear system) it could give a more stable or faster resolution.
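To illustrate the difference on a small sequential example of my own (not from this thread):

mesh Th = square(20, 20);
fespace Vh(Th, P1);
varf va(u, v) = int2d(Th)(dx(u)*dx(v) + dy(u)*dy(v)) + on(1, u = 0);
matrix A1 = va(Vh, Vh, tgv = -1); // constrained rows replaced by identity rows, columns untouched -> generally nonsymmetric
matrix A2 = va(Vh, Vh, tgv = -2); // rows and columns eliminated -> symmetric matrix

With tgv = -2, the eliminated columns would normally have to be accounted for in the right-hand side, which is safe here only because the imposed value is zero, as explained above.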
Dear Prof. Bouchut,
I noticed that once \int_{\Omega} f = 0 is enforced, the solver seems able to return a solution even without fixing a DoF or explicitly specifying a null space. The solution is not unique, but it does satisfy the equation.
When only the pressure gradient is of interest, is it then unnecessary to use SetBC() or to explicitly fix the null space?
Best regards,
Haocheng Weng
I think that it is necessary in principle to fix a dof in order to have an invertible matrix. However, when an iterative solver is used, it can converge to a solution even if there is no uniqueness, which is good for you. Hence you can skip fixing a dof as long as the solver does the job. If at some point it diverges, you know what you have to do!