Several Lagrange multipliers using fieldsplit

Hello FreeFEM users,

I’m studying the following system:

Mat NN = [[ A    , pBEx , pBEy , pBEz , pBMx , pBMy , pBMz  ],
          [ pBEx', -1/kt, 0    , 0    , 0    , 0    , 0     ],
          [ pBEy', 0    , -1/kt, 0    , 0    , 0    , 0     ],
          [ pBEz', 0    , 0    , -1/ka, 0    , 0    , 0     ],
          [ pBMx', 0    , 0    , 0    , -1/kf, 0    , 0     ],
          [ pBMy', 0    , 0    , 0    , 0    , -1/kf, 0     ],
          [ pBMz', 0    , 0    , 0    , 0    , 0    , -1/kth]];

Here, A is a large matrix coming from 3D elasticity varf, and the pBxxx are linear forms used to enforce boundary conditions via six Lagrange multipliers. More precisely, the goal is to create a mean spring effect between two surfaces of the solid, where the kxx are the stiffnesses of these springs.
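For context, a minimal sketch of the kind of assembly I am doing (the geometry, labels, stiffness value, and varf terms below are placeholders, not my actual file, and I show a single multiplier):

```freefem
load "PETSc"
load "msh3"

real kt = 1.0e6;                      // placeholder spring stiffness
mesh3 Th = cube(5, 5, 5);             // placeholder geometry
fespace Wh(Th, [P1, P1, P1]);         // displacement space

// elasticity varf (actual terms omitted; a mass term stands in here)
varf vA([ux, uy, uz], [vx, vy, vz])
    = int3d(Th)(ux*vx + uy*vy + uz*vz)
    + on(1, ux = 0, uy = 0, uz = 0);
matrix Loc = vA(Wh, Wh);
Mat A(Loc);                           // sequential PETSc Mat

// one linear form per multiplier, e.g. the x-resultant on face label 2
varf vBEx([ux, uy, uz], [vx, vy, vz]) = int2d(Th, 2)(vx);
real[int] pBEx = vBEx(0, Wh);

// 2x2 version with a single multiplier; the full 7x7 system is analogous
Mat NN = [[ A    , pBEx  ],
          [ pBEx', -1/kt ]];
```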

I managed to solve this problem on a small mesh using PETSc + LU and obtained correct results.

I now want to increase the mesh size and use Hypre preconditioning. Even on small meshes, the system does not converge with six springs, but it works with just one.

Correct me if I’m wrong, but a good way to handle this problem could be to use splitfield:

  • Solve A using Hypre.

  • Solve the rest using LU.

I’m struggling with implementing this using set(NN, sparams=" ") and specifying the block sizes.

I also haven’t found a way to rewrite the problem in a more compact form like NN = [A B; B' S], which would be more convenient if I want to handle more springs in the future.
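For illustration, the compact layout I have in mind would look something like this (an untested sketch; I am not sure of the exact syntax for filling B column by column, and all names reuse the forms and stiffnesses defined above):

```freefem
// stack the six linear forms as columns of a full array, then convert to sparse
real[int, int] Bfull(Wh.ndof, 6);
Bfull(:, 0) = vBEx(0, Wh);
Bfull(:, 1) = vBEy(0, Wh);
Bfull(:, 2) = vBEz(0, Wh);
Bfull(:, 3) = vBMx(0, Wh);
Bfull(:, 4) = vBMy(0, Wh);
Bfull(:, 5) = vBMz(0, Wh);
matrix B = Bfull;                 // rectangular coupling block

// diagonal spring block S = -diag(1/kt, 1/kt, 1/ka, 1/kf, 1/kf, 1/kth)
real[int, int] Sfull(6, 6);
Sfull = 0;
Sfull(0, 0) = -1/kt;  Sfull(1, 1) = -1/kt;  Sfull(2, 2) = -1/ka;
Sfull(3, 3) = -1/kf;  Sfull(4, 4) = -1/kf;  Sfull(5, 5) = -1/kth;
matrix S = Sfull;

Mat NN = [[ A , B ],
          [ B', S ]];
```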

Do you have any suggestions on how to do this?

Thanks,

What is splitfield? Please share a running code, that should work seamlessly.

My mistake, I swapped splitfield and fieldsplit (as in -pc_type fieldsplit)! It was correct in my .edp, though.

Please find a working example with
set(NN, sparams = "-pc_type lu -ksp_view -ksp_monitor -ksp_converged_reason");

(and not working with)
set(NN, sparams = "-ksp_type gmres -pc_type fieldsplit "
    + " -fieldsplit_0_indices 0:nA -fieldsplit_1_indices nA+1:nA+6 "
    + " -fieldsplit_0_pc_type hypre -fieldsplit_1_pc_type lu "
    + " -ksp_view -ksp_monitor -ksp_converged_reason");

ex0.edp (4.9 KB)

and the associated mesh:

Mod1.zip (967.2 KB)

What is -fieldsplit_0_indices and -fieldsplit_1_indices?

In my understanding, these indices are used to delimit the blocks solved with the various preconditioners, but they seem to have absolutely no effect.

Did I get the keyword wrong for specifying the separation, or is this not the correct approach for this type of problem at all?

Where did you get this keyword from? It is definitely the correct approach. I have the fix, but I’m trying to figure out how you ended up using the indices option, which I cannot find anywhere.

Finding no clues on how to solve my problem in this forum or in the Git examples, I tried AI. Sometimes it works and sometimes it invents unknown keywords!

Oh, of course, it’s AI producing nonsense; I should have known better. This will work: -pc_fieldsplit_0_fields 0 -pc_fieldsplit_1_fields 1,2,3,4,5,6 instead of the nonsensical indices.

Using

set(NN, sparams = "-ksp_type gmres -pc_type fieldsplit "
    + " -pc_fieldsplit_0_fields 0 -pc_fieldsplit_1_fields 1,2,3,4,5,6 "
    + " -fieldsplit_0_pc_type hypre -fieldsplit_1_pc_type lu "
    + " -ksp_view -ksp_monitor -ksp_converged_reason");

It reacts, but maybe not in the expected way:

[6]PETSC ERROR: --------------------- Error Message --------------------------------------------------------------
[6]PETSC ERROR: Argument out of range
[6]PETSC ERROR: Field 1 requested but only 1 exist

What are your FreeFEM and PETSc versions?

I’m currently using petsc/3.14.2 and freefem/4.14

freefem/4.15 is available on my machine. With this version, there are no PETSc errors anymore and it does iterate.

However, I’m surprised by the results obtained with -pc_type fieldsplit. Trying various parameters, I got “Linear solve did not converge due to DIVERGED_BREAKDOWN”, and I failed to reproduce the results obtained by using -pc_type lu alone on the global system.

Just use FreeFEM develop branch, there is no error there and the code runs fine.

$ ff-mpirun -n 4 ex0.edp -v 0
  0 KSP Residual norm 9.454872682505e+13
  1 KSP Residual norm 4.725899220351e-03
  Linear solve converged due to CONVERGED_RTOL iterations 1
[...]

By develop branch, do you mean Release FreeFEM v4.14 · FreeFem/FreeFem-sources · GitHub?

I’ll check with the IT team that it is actually the version installed on our machine and not (Release FreeFEM v4.14 · FreeFem/FreeFem-sources · GitHub)

I see here (Release FreeFEM v4.14 · FreeFem/FreeFem-sources · GitHub) that the required PETSc version is 3.20.2, which is quite far from the one I’m using (3.14.2). Could that explain why we don’t get the same results with the same .edp file?

(3.14.2 is the latest available on my machine, I’ll ask for an update)

Hello again,

I’m still struggling with my problem because I get different results with and without fieldsplit.

My physical case is simple: I can find an analytical solution and easily tell whether a numerical solution is correct or not.

Here, the Lagrange multipliers represent the forces and moments applied by the springs.

If I solve the problem using a global LU for elasticity and Lagrange multipliers,
set(NN, sparams = "-pc_type lu -ksp_view -ksp_monitor");
I get correct values for the mean displacements/forces and for the Lagrange multipliers.

However, if I try to solve it using fieldsplit with two LU factorizations, one for elasticity and one for the Lagrange multipliers,
set(NN, sparams = "-ksp_type gmres -pc_type fieldsplit "
    + " -pc_fieldsplit_0_fields 0 -pc_fieldsplit_1_fields 1,2 "
    + " -fieldsplit_0_pc_type lu -fieldsplit_1_pc_type lu "
    + " -ksp_view -ksp_monitor -ksp_converged_reason"
);

I get wrong values for the mean displacements/forces but, surprisingly, the Lagrange multiplier values are correct!

From a physical point of view, I would guess that having correct Lagrange multipliers (i.e. correct forces applied by the springs) means that I’m solving my problem correctly, but that I’m then handling incorrect u fields when I postprocess them to extract mean displacements/forces, as if there were an indexing issue.

But that is hard to believe, as I am only changing the set(NN, sparams = "…") call between the two cases, not the construction of NN nor the postprocessing part.

(I checked with the IT team: v4.14 really is the develop version and is used with PETSc 3.20. I had confused the version available via "module load petsc" with the one actually called by ff-mpirun.)

Without a code to run, I can’t help you. And again, you should update your FreeFEM version to use the develop branch, without having to resort to installs from the IT team which is lagging behind.

Thank you prj. Please find here an .edp file to reproduce the two behaviors (global LU vs fieldsplit). It is a simplified version with just two Lagrange multipliers:

Mod2.edp (9.6 KB)

The mesh used is the same

Mod1.zip (967.2 KB)

I can also give you the cout files so you can check that we get the same results. Both were obtained using ff-mpirun -n 4 and FreeFEM v4.15:

Cout.zip (1.1 KB)

What matters here are the values of the Lagrange multipliers, the mean efforts/moments/displacements, and the rotations in Z. The different values correspond to the means over the surfaces defined below:

I’ll check how to launch FreeFEM directly from the develop branch.

There are at least two issues in the code:

  1. You are not using a Schur complement approach, though you should, given the small number of Lagrange multipliers.
  2. The vectors you are putting in the block matrix are not consistent with the boundary conditions: on the first row (except for the diagonal block), you should use tgv = -10 and linear forms without Dirichlet BC.

This gives slightly better results, see attached.

schur.txt.zip (3.1 KB)
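In option form, the Schur complement setup described above looks roughly like this (a sketch only: the exact Schur options, in particular -pc_fieldsplit_schur_precondition, may need tuning for this problem, and vA, vBEx, Wh, NN refer to the names used earlier in the thread):

```freefem
// A assembled as usual; the first-row off-diagonal blocks come from
// linear forms WITHOUT Dirichlet BC, assembled with tgv = -10 so the
// entries at constrained dofs are zeroed consistently
matrix Loc    = vA(Wh, Wh);
real[int] pBEx = vBEx(0, Wh, tgv = -10);

set(NN, sparams = "-ksp_type gmres -pc_type fieldsplit "
    + "-pc_fieldsplit_type schur -pc_fieldsplit_schur_fact_type full "
    + "-pc_fieldsplit_schur_precondition self "
    + "-pc_fieldsplit_0_fields 0 -pc_fieldsplit_1_fields 1,2 "
    + "-fieldsplit_0_pc_type hypre "
    + "-fieldsplit_1_ksp_type gmres -fieldsplit_1_pc_type none "
    + "-ksp_view -ksp_monitor -ksp_converged_reason");
```

With -pc_fieldsplit_schur_precondition self, the second split is solved iteratively on the (small) Schur complement itself, which avoids having to factor a matrix-free operator with LU.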

Thanks for the advice! Could you share your modified .edp file?

I don’t think it will run until you have an up-to-date FreeFEM version (as it relies on some new PETSc features).

Mod2.edp (10.2 KB)