Stokes PETSC solver but in a doubled-size space

Dear all

I learnt from https://github.com/mbarzegary/navier-stokes-solver (thanks @prj and @mojtaba.barzegari) an efficient PETSc solver for the Stokes solution [u, v, w, p] in the FEM space [P2, P2, P2, P1]:

set(A, sparams = "-ksp_monitor -ksp_type fgmres -ksp_converged_reason "
+ "-pc_type fieldsplit -pc_fieldsplit_type schur "
+ "-fieldsplit_velocity_pc_type gamg -fieldsplit_velocity_pc_gamg_sym_graph true -fieldsplit_pressure_ksp_max_it 5 "
+ "-fieldsplit_pressure_pc_type jacobi -fieldsplit_velocity_ksp_type preonly -pc_fieldsplit_schur_fact_type full",
fields = ux[], names = names);

with

string[int] names(2);
names[0] = "velocity";
names[1] = "pressure";

Now I have two velocity fields and two pressure fields, [u1, v1, w1, u2, v2, w2, p1, p2], in the FEM space [P2, P2, P2, P2, P2, P2, P1, P1]. Is the parameter set-up still the same?
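For concreteness, here is the naive extension I have in mind, tagging all six velocity components as one field and both pressures as the other (just a sketch; Wh and the component names are my own notation):

fespace Wh(Th, [P2, P2, P2, P2, P2, P2, P1, P1]);
Wh [u1, v1, w1, u2, v2, w2, p1, p2] = [1, 1, 1, 1, 1, 1, 2, 2]; // field tags for fieldsplit
set(A, sparams = "-ksp_type fgmres -pc_type fieldsplit -pc_fieldsplit_type schur "
+ "-pc_fieldsplit_schur_fact_type full -fieldsplit_velocity_pc_type gamg "
+ "-fieldsplit_pressure_pc_type jacobi",
fields = u1[], names = names);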

BTW, the second velocity and pressure fields come from the adjoint problem.

Many thanks in advance!

It all depends on how you want to precondition your linear system. But I do think it’s a bad idea to group everything together.

Thank you @prj .

The reason I solve the primal and adjoint equations (which are fully coupled) in a monolithic manner is that I want to avoid iterating between the two problems – the convergence of such iterations is also painful.

Look at the linear system [[K, B'], [B, 0]], which is quite similar to the Stokes system; just the sizes of both K and B are doubled. So the preconditioner can also be built similarly from the Schur complement. Do you think the above parameter set will solve this in the most efficient manner?
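Explicitly, the way I picture it, the doubled system keeps the same saddle-point structure and the same Schur complement:

$$\begin{bmatrix} K & B' \\ B & 0 \end{bmatrix} \begin{bmatrix} u \\ p \end{bmatrix} = \begin{bmatrix} f \\ 0 \end{bmatrix}, \qquad S = -B K^{-1} B',$$

where u now stacks the primal and adjoint velocities [u1, v1, w1, u2, v2, w2] and p the two pressures [p1, p2].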

I doubt it will be as efficient as solving this in a staggered way, but I’d love to be proven wrong!

Is it necessary for you to write a separate problem for your adjoint equations? I agree that the monolithic approach is probably not optimal.

Once you create your linear operator for the primal problem (to be solved with KSPSolve()), the discrete adjoint operator is given by its (conjugate) transpose and can be solved using KSPSolveConjugateTranspose() [complex] or KSPSolveTranspose() [real] without having to reconstruct the matrix or refactorize the preconditioner (though you will need to perform the linear iterations again to solve the adjoint system). Note that the same preconditioner may or may not work efficiently for the adjoint problem.
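In FreeFEM, the pattern could look roughly like this (a sketch only; vPrimal, bPrimal, and bAdjoint are placeholder names, and I am assuming the A'^-1 syntax dispatches to the transpose solve, which is worth double-checking):

Mat A;
createMat(Th, A, Pk); // Pk = [P2, P2, P2, P1] macro, as in the Stokes set-up above
A = vPrimal(Vh, Vh); // assemble and precondition the primal operator once
set(A, sparams = "-ksp_type fgmres -pc_type fieldsplit"); // shared by both solves
u[] = A^-1 * bPrimal[];   // primal solve (KSPSolve underneath)
v[] = A'^-1 * bAdjoint[]; // discrete adjoint solve, no re-assembly or refactorization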


Thank you both, @prj and @cmd. Will compare the performance of the monolithic and decoupled formulations in detail…

BTW, my problem is large-scale 3D FSI, and the mesh is reconstructed in advance for every time frame. I will present this work at the CFC 2023 conference, 25-28 April in Cannes; hope to talk with you there if you happen to attend as well.



[figure: worm_mesh]

Dear @cmd: I implemented the decoupled version; however, the linear solver for the adjoint equations struggles to converge (although the primal solver converges in 10 to 20 iterations)… do you expect this?
test.zip (1.4 MB)

The test is a pure fluid problem, not FSI. The adjoint equations couple to the primal ones through a volume integral, while the primal equations couple to the adjoint ones through a boundary condition.
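In block form, the coupled system I am solving monolithically looks like (my notation):

$$\begin{bmatrix} A & C_{bc} \\ C_{vol} & A^* \end{bmatrix} \begin{bmatrix} x_{primal} \\ x_{adjoint} \end{bmatrix} = \begin{bmatrix} b_{primal} \\ b_{adjoint} \end{bmatrix},$$

where C_{vol} is the volume-integral coupling into the adjoint equations and C_{bc} is the boundary-condition coupling back into the primal ones.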

Hi there. It is hard to say, but since you are using different BCs for your primal and adjoint equations, I can understand why the convergence behavior could be quite different. Are you sure that your variational forms for the system are correct, including the bilinear concomitant in the adjoint system?

From what I can see, the equations are, in general, only one-way coupled: the primal system is independent of the adjoint one. Also, in the particular case you’ve sent of g=0, the primal system is homogeneous, and the solution is trivial. In this case, the adjoint system is also homogeneous and has a trivial solution.

The primal-adjoint system is derived from an optimal control problem: minimising the viscous dissipation over a boundary velocity profile, constrained by the PDEs (the NS equations in this case), and using the Lagrange multiplier (adjoint variable) method to enforce the constraints.

If you take the variation with respect to the primal variables (which gives you the adjoint equations), the coupling volume integral in the adjoint equations comes from the variation of the objective function (the viscous dissipation in this case).

The variation with respect to the control variable (the boundary velocity in this case) provides the optimality condition, which relates the control variable to the adjoint variables; this goes back into the primal equations as a boundary condition.
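Schematically (my notation, with u_b the boundary control):

$$\mathcal{L}(u, p; v, q; u_b) = J(u) + \int_\Omega v \cdot R_m(u, p) + \int_\Omega q\, \nabla\cdot u,$$

where J(u) is the viscous dissipation and R_m the momentum residual. Setting \delta_{(u,p)}\mathcal{L} = 0 yields the adjoint equations (with the volume source \delta_u J, i.e. the coupling integral above), and \delta_{u_b}\mathcal{L} = 0 yields the optimality condition that enters the primal boundary condition.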

There is a previous paper, A monolithic one-velocity-field optimal control formulation for fluid–structure interaction problems with large solid deformation - ScienceDirect, which uses a distributed control; it is similar to the boundary control in the current code.

In the script you’ve sent, I see the following definitions:

  varf primal([ux, uy, uz, p], [uhx, uhy, uhz, ph]) =
      int3d(Th)( nuf/2*(DD(u):DD(uh))
               - div(uh)*p - div(u)*ph - penal*p*ph )
    + int3d(Th)( g*uhy )
    + on(1, 2, 3, ux = ubx, uy = uby, uz = ubz)
    + on(4, 5, 6, 7, 8, 9, ux = 0, uy = 0, uz = 0);

  varf adjoint([vx, vy, vz, q], [vhx, vhy, vhz, qh]) =
      int3d(Th)( nuf/2*(DD(v):DD(vh))
               - div(vh)*q - div(v)*qh - penal*q*qh )
    + int2d(Th, 1, 2, 3)( vect(vh)'*vect(v) )
    + int3d(Th)( -nuf/4*(DD(u):DD(vh)) )
    + on(4, 5, 6, 7, 8, 9, vx = 0, vy = 0, vz = 0);

where g=0.0, ubx=uby=ubz=0.0, penal=0.0, nuf=1.0. Hence, we could simplify the forms to:

  varf primal([ux, uy, uz, p], [uhx, uhy, uhz, ph]) =
      int3d(Th)( nuf/2*(DD(u):DD(uh))
               - div(uh)*p - div(u)*ph )
    + on(1, 2, 3, 4, 5, 6, 7, 8, 9, ux = 0, uy = 0, uz = 0);

  varf adjoint([vx, vy, vz, q], [vhx, vhy, vhz, qh]) =
      int3d(Th)( nuf/2*(DD(v):DD(vh))
               - div(vh)*q - div(v)*qh )
    + int2d(Th, 1, 2, 3)( vect(vh)'*vect(v) )
    + int3d(Th)( -nuf/4*(DD(u):DD(vh)) )
    + on(4, 5, 6, 7, 8, 9, vx = 0, vy = 0, vz = 0);

Since the solution to the primal problem is trivial, there will be no forcing on the adjoint problem and that solution will also be trivial. If my understanding here is correct, maybe this isn’t a good test case?

Are you sure that the BC written for the adjoint problem on surfaces 1, 2, and 3 is well-posed?

Yes, your understanding is correct, except that [ubx, uby, ubz] is not zero, but

ubx=ul2x+1./alpha*r(rl2)'*r(rl2)*vx - 1./alpha*r(rl2)'*vect(v)*(x-rl2x);
uby=ul2y+1./alpha*r(rl2)'*r(rl2)*vy - 1./alpha*r(rl2)'*vect(v)*(y-rl2y);
ubz=ul2z+1./alpha*r(rl2)'*r(rl2)*vz - 1./alpha*r(rl2)'*vect(v)*(z-rl2z);

where [ul2x, ul2y, ul2z] is given and is not zero – the linear velocity measured in the laboratory. The remaining part comes from the angular velocity, which is related to the adjoint solution and is zero only in the first iteration step.
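In vector form (writing r = x - x_{l2} for the macro r(rl2), which is my reading of the code), the control is

$$u_b = u_{l2} + \frac{1}{\alpha}\left(|r|^2\, v - (r \cdot v)\, r\right) = u_{l2} + \frac{1}{\alpha}\, r \times (v \times r),$$

i.e. the measured linear velocity plus an angular contribution driven by the adjoint velocity v.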

The boundary integral for the adjoint equation comes from the variation of L with respect to u:

[equation image]

with

[equation image]

\Omega_s is the solid domain (the worm); we can forget this for the moment and only consider a fluid problem in \Omega - \Omega_s.

The well-posedness is a good question; it is difficult to analyse. But, compared with classical boundary control problems, I have just decomposed the boundary velocity into its linear and angular components. What do you think about the well-posedness?

Ah, I see. Thanks for clarifying. Unfortunately, I do not know about the well-posedness. You are enforcing some type of Robin-like condition resulting from the sum of the implicit Neumann conditions from integrating the viscous and pressure terms by parts, plus the vect(vh)'*vect(v) term. I’m not sure how this behaves.
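Schematically (my reading of the varf, up to signs and constants), the natural condition on surfaces 1, 2, and 3 would be something like

$$\frac{\nu}{2}\, \mathrm{D}(v)\, n - q\, n + v = \frac{\nu}{4}\, \mathrm{D}(u)\, n \quad \text{on } \Gamma_{1,2,3},$$

i.e. the Neumann flux from integration by parts plus a zeroth-order term from the boundary mass matrix.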

Yes, I will implement a regularisation method (a boundary force or a Lagrange multiplier to enforce the Dirichlet condition):
Dirichlet control.pdf (2.8 MB)
In this case, the adjoint equation would have a homogeneous boundary condition (but we have to solve a surface Laplace equation for the control), yet it would still interact with the primal equations through the volume integral, which I think is what slows down the linear solver – I guess the numerical influence of the Robin boundary is trivial (although it may not be quite right mathematically).