Large-scale FSI example with parallel implementation

Hi, everyone.

I am currently working on simulations of fluid-structure interaction (FSI). I know how to assemble the matrices for the fluid part, the solid part, and their coupling, and I can write parallel code for the decoupled fluid or solid problems separately, following the tutorials of Professor Pierre Jolivet.

But I don’t know how to implement the coupled problem in parallel. For example, I need to construct different meshes for the solid and the fluid, and maybe different PETSc Mat? (Though I think the solid and fluid parts would then be solved by different cores.) And then use PETSc changeOperator or some other command for the total coupled Mat? I am a little confused. Is there any related example?

Thanks a lot~

I’m sorry, I’m not at liberty to share any code, but what you want to do is perfectly feasible. For FSI, the difficulty is knowing how you want to couple the physics: strongly, e.g., using ALE, cf. slide 20 of https://joliv.et/FreeFem-tutorial/2019.pdf, or weakly, cf. “Topology optimization of thermal fluid–structure systems using body-fitted meshes and parallel computing” or https://www.sciencedirect.com/science/article/abs/pii/S0307904X21003966. Let me know if you have some specific questions.

Many thanks for your reply.

I am working on the same problem with ALE. At the moment, when I solve the eigenvalue and resolvent problems, I first assemble all the matrices in FreeFem++ without parallelization and then export them to a parallel eigenvalue solver (this is not a good approach, and I don’t use a preconditioner).

I have two specific questions about parallelizing the code:

  1. I have to create a PETSc Mat, but with the ALE method the mesh (Th) is decomposed into at least two parts, fluid (ThF) and solid (ThS), plus the extension field (ThE). So, to build the final Mat J (as on slide 20 of your tutorial), should I first trunc the mesh, build each sub-Mat (for example Jqq on ThS), and then assemble Mat J, or should I build Mat J on Th first and then trunc the mesh? (Maybe I should test it myself first…)
  2. After I have assembled Mat J, I don’t know the exact way to set the preconditioner for the 4×4 block Mat J. I know that for a 2×2 block matrix a Schur complement can be used as the preconditioner, implemented by hand with -pc_type shell. For a 4×4 block matrix there should be a similar method, but I don’t know whether I can directly use -pc_type fieldsplit instead of writing it on my own.

Because I hadn’t figured out these questions before and mostly followed existing examples when parallelizing my fluid-only project, I don’t know how to handle them for this FSI problem… Would you mind giving me some ideas? Thanks again.

  1. You should build each Mat separately, since, e.g., there are no solid unknowns on the fluid mesh;
  2. it would indeed be much more convenient to use -pc_type fieldsplit; see the sketch after this list.
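
For illustration, here is a minimal sketch (a toy Stokes-like two-field problem on a unit square, not code from this thread) of how fields are declared for -pc_type fieldsplit in the FreeFEM/PETSc interface; with a 4×4 block Mat J you would number the fields 1 to 4 instead of 1 and 2:

load "PETSc"
macro dimension()2// EOM
include "macro_ddm.idp"

macro def(i)[i, i#B, i#C]// EOM
macro init(i)[i, i, i]// EOM
func Pk = [P2, P2, P1];

mesh Th = square(40, 40);
buildDmesh(Th)
Mat A;
createMat(Th, A, Pk)
fespace Wh(Th, Pk);
// lid-driven cavity: vector Laplacian plus incompressibility constraint,
// with a tiny pressure penalization to fix the constant pressure mode
varf vStokes([u, uB, p], [v, vB, q]) = int2d(Th)(
      dx(u)*dx(v) + dy(u)*dy(v) + dx(uB)*dx(vB) + dy(uB)*dy(vB)
    - p * (dx(v) + dy(vB)) - q * (dx(u) + dy(uB)) - 1e-10 * p * q)
    + on(1, 2, 4, u = 0, uB = 0) + on(3, u = 1, uB = 0);
A = vStokes(Wh, Wh);
real[int] b = vStokes(0, Wh);
// assign each unknown to a PETSc field: 1 = velocity, 2 = pressure
Wh [fu, fuB, fp] = [1, 1, 2];
set(A, sparams = "-pc_type fieldsplit -pc_fieldsplit_type schur "
    + "-pc_fieldsplit_schur_precondition selfp -ksp_monitor",
    fields = fu[]);
Wh [u, uB, p];
u[] = A^-1 * b;

PETSc creates one index set per distinct value in fields. Note that -pc_fieldsplit_type schur is restricted to two splits; with more than two fields you would typically switch to additive or multiplicative, or nest splits.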

Hi, Professor Pierre Jolivet.
I’m working toward this problem, starting from a simple conjugate heat transfer problem. Following the discussion in Changing Matrix components PETSc Parallel - #10 by prj, I build the Mat that couples the fluid part on thF to the Lagrange multiplier on the interface thL.

Mat Aff, Ass;
{
    buildDmesh(thF);
    createMat(thF, Aff, P2);
}
{
    buildDmesh(thS);
    createMat(thS, Ass, P2);
}
{
    fespace PhL(thGL, P0);
    PhL Ltemp = chi(thF); // characteristic function of a mesh
    thL = trunc(thGL, abs(Ltemp - 1.0) < 1e-2);
}
varf aff(uF,vF) = int2d(thF)(kF*(dx(uF) * dx(vF)+dy(uF) * dy(vF)));
varf alf(uL,vF) = int1d(thL)(uL * vF);

Aff = aff(XhF,XhF);

matrix rtoFtemp = interpolate(XhL,XhF);
matrix Alftemp = alf(XhL,XhF);
Mat rtoF(Aff, restriction = rtoFtemp);
Mat Alf(Aff,rtoF,Alftemp);

This is OK, but for thS it fails.

varf ass(uS,vS) = int2d(thS)(kS*(dx(uS) * dx(vS)+dy(uS) * dy(vS)));
varf als(uL,vS) = int1d(thL)(-1.0 * uL * vS);
Ass = ass(XhS,XhS);

matrix rtoStemp = interpolate(XhL,XhS);
matrix Alstemp = als(XhL,XhS);
Mat rtoS(Ass, restriction = rtoStemp);
Mat Als(Ass,rtoS,Alstemp);

I think the problem is that I build thL from thF, not from thS.

The code fails on this line of test_parallel.edp (2.4 KB). I am trying to parallelize it from test_seq.edp (1.7 KB).

Would you mind giving me some suggestions on this problem? Many thanks~

This is the failure:

  current line = 108 mpirank 0 / 2
Assertion fail : (mList->n == mList->nnz)
	line :3804, in file ./PETSc-code.hpp

Your restriction matrices are not properly built. You should avoid interpolate, because it yields non-Boolean matrices.

Hi, Professor Pierre Jolivet,
Many thanks for your reply. I found the problem: the matrix rtoStemp built with interpolate has elements that are far from 0 or 1.

  192     10693 0.21299877250727042299
  192     10694 0.89619315367679119877
  192     10711 -0.10919192618406167727
  193     10718 1
  194     10693 1
  195     10693 0.9296007754733229822
  195     10694 0.093107490182908467236
  195     10711 -0.0227082656562314876

However, I have a small question. I partition the 1D interface mesh thL with trunc from thGL, so I can define the n2o from thGL to thL. If I want to use restrict to build the passage matrices, I should first compute the n2o from thF to thL and from thS to thL. How can I evaluate these? Maybe I should handle the passage matrices manually and find their common elements (because I have the n2o of (1) thGF to thF, (2) thGL to thL, and (3) thG to both thGF and thGL, from which I could then evaluate (4) thF to thL), or is there an easier way? The composition I have in mind is sketched below.
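
In terms of element numberings, something like this (just a toy sketch; thG, thA, thB are placeholders for a global mesh and two truncations of it):

mesh thG = square(10, 10); // stands in for the global mesh
int[int] n2oA, n2oB;
mesh thA = trunc(thG, x < 0.6, new2old = n2oA); // n2oA[i] = element of thG behind element i of thA
mesh thB = trunc(thG, x > 0.3, new2old = n2oB);
// invert the second numbering, then compose:
// n2oAB[i] = element of thB matching element i of thA, or -1 if absent
int[int] o2nB(thG.nt);
o2nB = -1;
for(int j = 0; j < n2oB.n; ++j)
    o2nB[n2oB[j]] = j;
int[int] n2oAB(n2oA.n);
for(int i = 0; i < n2oA.n; ++i)
    n2oAB[i] = o2nB[n2oA[i]];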

Thanks very much~

I think you want to use extract instead of trunc. Cf. this example FreeFem-sources/examples/hpddm/helmholtz-coupled-2d-PETSc-complex.edp at develop · FreeFem/FreeFem-sources · GitHub, where there is a similar coupling between a “volume” and a “surface”.
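
For reference, a minimal sketch of extract on a toy mesh (label 1 here is just an assumed interface label):

load "msh3" // for the meshL type
mesh Th = square(20, 20);
int[int] ll = [1];                   // labels of the edges to extract
meshL ThL = extract(Th, label = ll); // line mesh made only of those edges

Since ThL is built from edges of Th itself, its vertices coincide with those of Th, which should keep the corresponding restriction Boolean.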

Thanks very much, I have tried this example.

  1. I think that in this example the local interface mesh is built with trunc, as I did before. I built the local interface mesh following this example and only changed Th to ThF (the local fluid mesh).
  2. As a result, some DOFs of the local interface mesh do not belong to the local solid mesh on the same process (I checked this by constructing the interpolation matrices from the global solid mesh to the local interface mesh and to the local solid mesh). I don’t know whether this is a problem in itself, but I think it is probably what makes the local restriction matrix from the solid part to the interface non-Boolean (there is no such problem for the fluid part).

meshL ThL;
{
    int[int] labels = labels(Th);
    if(labels.max > 0) {
        fespace PhL(ThG, P0);
        PhL u = (region == interface) * chi(Th);
        ThL = trunc(ThG, abs(u - 1.0) < 1.0e-2);
    }
}

Would you mind giving me advice on this example and this problem? Thanks very much.

Please share the current code.

OK, Professor Pierre Jolivet. Many thanks for your patience.

I have updated the code with another way to construct the restriction matrix S2L, in lines 137 to 162, from the local solid mesh to the local interface mesh.

My idea is that the restriction matrix SG2L, from the global solid mesh to the local interface mesh, is always Boolean when built with interpolate, so I compare it with the restriction SG2S, from the global solid mesh to the local solid mesh, to find the elements of the restriction matrix S2L (because composing SG2S with S2L should give SG2L) and build S2L as a Boolean matrix, instead of building it directly with interpolate as S2Ltemp:

S2Ltemp = interpolate(XhL,XhS);

Now the Mat Aa can be built, but the linear solve fails.

Then I found that the reason might be that the local interface mesh is not fully shared by the local solid mesh on the same process. For example, run with

ff-mpirun -np 2 test_parallel.edp

The element S2Ltemp(20,767) = 0.39 on process 0 (obviously non-Boolean). During my construction of S2L, I find that this DOF from SG2L cannot be found in SG2S on process 0, but it can on process 1. So I have trouble building the restriction matrix S2L on process 0. (By the way, only the solid part has this problem; the fluid part coupled with the interface is OK.)

test_parallel.edp (4.4 KB)

Your code is too long for me to go through it carefully, but I’ll just note that you partition thF and thS separately. You are less likely to run into such errors if you partition the full mesh once, and then extract all local meshes with trunc and n2o. This is what is done in FreeFem-sources/examples/hpddm/restriction-2d-PETSc.edp at master · FreeFem/FreeFem-sources · GitHub.
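
In short, the core pattern of that example is (sketch):

load "PETSc"
macro dimension()2// EOM
include "macro_ddm.idp"

mesh ThGlobal = square(40, 40);
mesh Th = ThGlobal;     // Th is overwritten with its local part
int[int] n2o;           // filled with the local-to-global element numbering
macro ThN2O()n2o// EOM
buildDmesh(Th)
Mat A;
createMat(Th, A, P1)
fespace Vh(Th, P1);
fespace VhGlobal(ThGlobal, P1);
// Boolean restriction, as an index map: rest[i] = global DOF behind local DOF i
int[int] rest = restrict(Vh, VhGlobal, n2o);

With a single partitioned mesh, each sub-mesh (fluid, solid, interface) inherits the same distribution through trunc and its own n2o, so matching DOFs are much more likely to end up on the same process.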

Hi, Professor Pierre Jolivet.

Following your suggestions, I have solved the problem. I define thUserPartitioning first and then partition the global mesh, though I don’t fully understand why this avoids the problem mentioned before.
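
In outline it looks like this (a sketch with placeholder geometry and a deliberately crude hand-made partitioning; th stands for my global coupled mesh, thF for the fluid part, and I assume the <mesh>UserPartitioning macro convention of macro_ddm.idp):

load "PETSc"
macro dimension()2// EOM
include "macro_ddm.idp"

mesh th = square(40, 40);      // stands in for the global coupled mesh
mesh thF = trunc(th, x < 0.5); // stands in for the global fluid part
fespace Ph(th, P0);
// crude partitioning into vertical bands; in practice one would rather
// call the partitionerSeq/partitionerPar macros of macro_ddm.idp
Ph part = floor(x * mpisize);
// interpolate the same P0 partitioning on the sub-mesh so that both
// meshes are distributed consistently across the processes
fespace PhF(thF, P0);
PhF partF = part;
macro thUserPartitioning()part[]// EOM
macro thFUserPartitioning()partF[]// EOM
buildDmesh(th)
buildDmesh(thF)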

This script now works for the coupled thermal conduction problem, and I will try to build a parallel solver for the coupled fluid-structure interaction problem on top of it.
test_parallel.edp (4.3 KB)

Thank you very much again.

Great news, now the “fun” part begins! 🙂