Efficient solver and preconditioner for a 3D Stokes problem

Test_Case_Stokes.zip (5.4 KB)

I run this with mpiexec -np 4 FreeFem++-mpi Stokes_PAR_PETSc_3D.edp -split #meshlevel

Loïc,

Your problem is not properly formulated. I tried ff-mpirun -n 8 Stokes_PAR_PETSc_3D.edp -v 0 -split 5 and it does not converge in two iterations even with LU.

Never mind, I just saw that you are using -ksp_rtol 1E-50; what are you trying to do?

I keep this relative tolerance rtol out of habit, but in all cases the problem converges on the absolute tolerance ksp_atol. So I don’t think this value has any influence, does it?

Loïc,

Of course it does, it’s a terrible value to use.

What might be wrong with the formulation of my problem?

Loïc,

You are using a terrible -ksp_rtol value. If you don’t know what a parameter is for, don’t change it.

I thought that since my code converges on the absolute tolerance, the relative tolerance had no effect.

What is a reasonable value for the relative tolerance?

Loïc,

As I told you, if you don’t know something, leave it at the default.

Okay, thanks!
Do you think that keeping this value at its default will help me solve bigger cases?

By default this value is set to rtol = 1e-5. Now my code converges on the relative tolerance without reaching the absolute tolerance that I have fixed. Isn’t this a problem for controlling convergence?
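
To be sure I compare the two criteria consistently, here is a minimal sketch of how both tolerances could be passed through the FreeFEM/PETSc interface; the matrix name A stands for the distributed Mat assembled from the Stokes varf, and the values are only illustrative. If I read the PETSc manual correctly, the default test stops the KSP as soon as the residual norm drops below max(rtol * ||b||, atol), so whichever bound is hit first wins.

    // minimal sketch, illustrative values only;
    // A is assumed to be the distributed Mat assembled from the Stokes varf
    load "PETSc"
    set(A, sparams = "-ksp_type fgmres "
                   + "-ksp_rtol 1e-5 "                      // PETSc default
                   + "-ksp_atol 1e-10 "                     // absolute safeguard
                   + "-ksp_monitor -ksp_converged_reason"); // report which test fired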

Loïc,

Well, did you check the quality of the solution?

On my personal machine, on a small case, I get the same values for the error calculations.

My concern is that it becomes a problem to validate, for example, a new finite element if one does not impose an absolute tolerance each time the code is run on a finer mesh.

I agree that 1e-50 is a bad value, but can we choose an intermediate value?

I think I imposed this value to be sure that my code converges on the absolute tolerance.

Loïc,

By the way, I have run the code on the HPC cluster with the default value of rtol. This does not make my code more efficient.

I think the bottleneck is the LU factorization, since the PETSc iterations never even start.

Loïc,

Right, so as I told you, look at the literature: what seems like a possible option to precondition this with a fieldsplit?

Yes, I have found this paper: https://arxiv.org/pdf/2006.06052.pdf

In it, the PETSc implementation uses a FieldSplit preconditioner of type PC_COMPOSITE_SCHUR with the LOWER approximate block factorization.

This is somewhat similar to the example in the FreeFEM Git repository.

In “Preconditioned iterative methods for Stokes flow problems arising in computational geodynamics” (ScienceDirect), they also apply FGMRES to the pressure Schur complement.

I have also found other papers on efficient ways to solve the Stokes problem, but they are theoretical and do not explain the PETSc parameters.
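
To make this concrete before running anything big, here is the kind of option set I have in mind, adapted from the FreeFEM PETSc fieldsplit examples; the composite space Wh (three velocity components plus pressure), the Mat A, and the GAMG/Jacobi choices for the sub-blocks are my own assumptions, not something taken from the papers.

    // sketch of a Schur-complement fieldsplit for the velocity/pressure blocks;
    // Wh = [P2, P2, P2, P1] composite space and A the distributed Mat are assumed
    Wh [ux, uy, uz, p] = [1, 1, 1, 2]; // tag velocity dofs with field 1, pressure with 2
    string[int] names(2);
    names[0] = "velocity";
    names[1] = "pressure";
    set(A, sparams = "-ksp_type fgmres "
                   + "-pc_type fieldsplit -pc_fieldsplit_type schur "
                   + "-pc_fieldsplit_schur_fact_type lower "   // as in the arXiv paper
                   + "-pc_fieldsplit_schur_precondition selfp "
                   + "-fieldsplit_velocity_pc_type gamg "
                   + "-fieldsplit_pressure_ksp_type gmres "
                   + "-fieldsplit_pressure_pc_type jacobi",
        fields = ux[], names = names);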

Loïc,

@prj
Using the solver proposed in the Git repository, I have managed to run a case with 750,000 tetrahedra on the HPC cluster.
The first iterations are

  0 KSP Residual norm 2.302593429934e+32
  1 KSP Residual norm 2.850333886622e+17
  2 KSP Residual norm 7.955100808977e+16
  3 KSP Residual norm 5.824808136534e+16
  4 KSP Residual norm 4.807673227723e+16
  5 KSP Residual norm 4.186721907602e+16
  6 KSP Residual norm 3.757306120399e+16
  7 KSP Residual norm 3.437661273431e+16
  8 KSP Residual norm 3.187794629662e+16

This is progress; however, I am wondering why the first residual is so high and how we can decrease it to make the code more efficient. I have also changed the rtol, since the drop between the first and second iterations is large while the code has clearly not converged.

Best regards,

Loïc,

You probably aren’t using tgv=-1

Indeed, I was using tgv = 1e30. I have relaunched the code with tgv = -1, and now the first residual is
0 KSP Residual norm 2.302593520324e+02
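
For the record, the only change was at assembly time (the varf and space names below are placeholders for the ones in my script): with tgv = -1, FreeFEM eliminates the Dirichlet rows exactly, putting a 1 on the diagonal instead of the 1e30 penalty, so the residual is no longer scaled by that huge factor.

    // placeholder names: vStokes is the Stokes varf, Wh the composite space, A the Mat
    A = vStokes(Wh, Wh, tgv = -1);            // exact elimination of the Dirichlet rows
    real[int] rhs = vStokes(0, Wh, tgv = -1); // right-hand side assembled consistently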

I will see whether this allows the code to run in a reasonable time with reasonable CPU requirements.

Best regards,

Loïc,

It really looks like you are just trying random parameters, with big test cases, without trying to understand why it is working or not. That’s pointless.

Hi @prj,

Yes, indeed you are right.

A priori, I have no problem with small cases, which I am able to solve with a classical GMRES/LU solver.

Now I am trying to go a step further by increasing the number of elements to see what happens (nevertheless, my problem remains relatively small).

To be honest, I haven’t found much documentation on how to solve such cases efficiently as the number of elements increases. So I just try different methods, changing parameters to see what happens and trying to understand their influence.

This may not be the right approach. Since this is the first time I have tried something like this, I would be happy if you could share the right strategy, or we could talk when you have some free time.

Thank you in advance,

Best regards,

Loïc,