Nowadays I am doing some simulations of fluid-structure interaction. I know how to assemble the matrices for the fluid, the solid, and their coupling. I also know how to write a parallel code for the decoupled fluid or solid part separately, following the tutorials of Professor Pierre Jolivet.
But I don’t know how to implement the coupled problem in parallel. For example, I need to construct different meshes for the solid and the fluid, and maybe different PETSc Mat objects? (Though I think the solid and fluid parts would be solved on different cores.) Then, with PETSc, changeOperator or some other command for the total coupled Mat? I am a little confused. Is there any related example?
I am working on the same problem with ALE. Currently, when I solve the eigenvalue problem and the resolvent problem, I first assemble all the matrices in FreeFem++ without parallelization and then export them to a parallel eigenvalue solver (this is not a good way, and I don’t use a preconditioner).
I have two specific questions about parallelizing the code:
I have to create a Mat for PETSc, but with the ALE method the mesh (Th) is decomposed into at least two parts, fluid (ThF) and solid (ThS), plus the extension field (ThE). So, to build the final Mat J (as in your tutorial at slide 20), should I first trunc the mesh, build each sub-Mat (for example Jqq on ThS), and then assemble Mat J? Or should I build Mat J for Th first and then trunc the mesh? (Maybe I should test it myself first…)
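To make the question concrete, a rough sketch of the first option (all names are illustrative, the coupling varf cfs and the label interfaceLabel are only placeholders, and the rectangular block is built from the two diagonal Mats as in the tutorial):

```
// rough sketch, illustrative names only
Mat Jff, Jqq;
buildDmesh(ThF);
createMat(ThF, Jff, P2); // fluid diagonal block on the truncated fluid mesh
buildDmesh(ThS);
createMat(ThS, Jqq, P2); // solid diagonal block on the truncated solid mesh
fespace VhF(ThF, P2);
fespace VhS(ThS, P2);
varf cfs(q, vF) = int1d(ThF, interfaceLabel)(q * vF); // placeholder coupling term
matrix locJfq = cfs(VhS, VhF); // local rectangular coupling matrix
Mat Jfq(Jff, Jqq, locJfq);     // distributed rectangular block
// monolithic operator as a block Mat
Mat J = [[Jff,  Jfq],
         [Jfq', Jqq]];
```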
After I have assembled Mat J, I don’t know the exact way to set the preconditioner for the 4x4 block Mat J. I know that for a 2x2 block matrix, a Schur complement can be used as a preconditioner, written with -pc_shell. For a 4x4 block matrix there should be a similar method, but I don’t know whether I can directly use -pc_type fieldsplit instead of writing it myself.
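To make the second question concrete, this is the kind of thing I am wondering whether it could work (a sketch only; I assume the fields named parameter of set can flag each unknown of the block operator with a field index):

```
// rough sketch: one field index per unknown of the 4x4 block operator
real[int] fieldIdx(J.n);
// fieldIdx must be filled with 1 (fluid), 2 (solid), 3 (extension), 4 (multiplier)
set(J, sparams = "-ksp_type fgmres -pc_type fieldsplit "
               + "-pc_fieldsplit_type multiplicative",
       fields = fieldIdx);
```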
Because I hadn’t figured out these questions before and mostly followed examples to parallelize only the fluid equations of my project, I don’t know how to address them for this FSI problem… Would you mind giving me some ideas? Thanks again.
Hi, Professor Pierre Jolivet.
I’m working toward this problem starting from a simple conjugate heat transfer problem. Following the discussion in Changing Matrix components PETSc Parallel - #10 by prj, I build the Mat coupling the fluid part on thF to a Lagrange multiplier on the interface thL.
// XhF on thF and XhL on thL are assumed declared earlier (e.g. fespace XhF(thF, P2))
Mat Aff, Ass;
{
  buildDmesh(thF);
  createMat(thF, Aff, P2);
}
{
  buildDmesh(thS);
  createMat(thS, Ass, P2);
}
{
  // keep only the part of the global interface mesh thGL carried by thF
  fespace PhL(thGL, P0);
  PhL Ltemp = chi(thF); // characteristic function of the local fluid mesh
  thL = trunc(thGL, abs(Ltemp - 1.0) < 1e-2);
}
// fluid diffusion on thF and fluid/multiplier coupling on the interface thL
varf aff(uF, vF) = int2d(thF)(kF * (dx(uF)*dx(vF) + dy(uF)*dy(vF)));
varf alf(uL, vF) = int1d(thL)(uL * vF);
Aff = aff(XhF, XhF);
matrix rtoFtemp = interpolate(XhL, XhF); // restriction from fluid to interface
matrix Alftemp = alf(XhL, XhF);
Mat rtoF(Aff, restriction = rtoFtemp);   // distributed interface numbering
Mat Alf(Aff, rtoF, Alftemp);             // rectangular coupling Mat
Hi, Professor Pierre Jolivet,
Many thanks for your reply. I found the problem: the matrix rtoStemp built with “interpolate” has elements very far from 0 or 1.
However, I have a small question. I partition the 1D interface mesh thL with “trunc” from thGL, so I can define the n2o from thGL to thL. If I want to use “restrict” to build the passage matrices, I should first compute the n2o from thF to thL and from thS to thL. How can I evaluate this? Maybe I should handle these passage matrices manually and find their common elements (because I have the n2o of (1) thGF to thF, (2) thGL to thL, and (3) thG to both thGF and thGL, from which I could evaluate (4) thF to thL), or is there an easier way?
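To make the question concrete, the manual route I am considering looks roughly like this (just a sketch; I assume the new-to-old arrays n2oF (thGF to thF) and n2oL (thGL to thL) were saved when the meshes were truncated/distributed):

```
// rough sketch of composing dof maps through the global meshes
fespace XhGF(thGF, P2);
fespace XhGL(thGL, P2);
int[int] RF = restrict(XhF, XhGF, n2oF); // local fluid dof -> global fluid dof
int[int] RL = restrict(XhL, XhGL, n2oL); // local interface dof -> global interface dof
int[int] invRF(XhGF.ndof);
invRF = -1;
for (int i = 0; i < RF.n; ++i)
  invRF[RF[i]] = i; // global fluid dof -> local fluid dof (or -1)
// with a global interface -> global fluid dof correspondence (e.g. from
// interpolate(XhGL, XhGF) on the global meshes), RL and invRF can then be
// chained to map each local interface dof to a local fluid dof
```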
I think that in this example, the local interface mesh is built with “trunc”, as I did before. I built the local interface mesh following this example, only changing Th to ThF (the local fluid mesh).
As a result, some DoFs of the local interface mesh do not belong to the local solid mesh on the same process (I checked this by constructing the interpolation matrices from the global solid mesh to the local interface mesh and to the local solid mesh). I don’t know whether this will be a problem, but I think it is probably what makes the local restriction matrix from the solid part to the interface non-Boolean (there is no problem for the fluid part).
OK, Professor Pierre Jolivet. Many thanks for your patience.
I have updated the code with another way to construct the restriction matrix S2L (lines 137 to 162) from the local solid mesh to the local interface mesh.
My idea is that the restriction matrix SG2L from the global solid mesh to the local interface mesh is always Boolean when built with “interpolate”. I compare it with the restriction SG2S from the global solid mesh to the local solid mesh to find the entries of the restriction matrix S2L (because SG2S * S2L should equal SG2L) and build it as a Boolean matrix, instead of directly using “interpolate” to build it as S2Ltemp:
S2Ltemp = interpolate(XhL,XhS);
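The comparison itself could be sketched like this (a rough sketch, not the actual lines 137 to 162; I assume the COO triples of the Boolean matrices can be extracted with [I, J, C] = A, and the result size is inferred from the largest indices):

```
// rough sketch: build a Boolean S2L by matching SG2L against SG2S
matrix SG2L = interpolate(XhL, XhGS); // global solid -> local interface (Boolean)
matrix SG2S = interpolate(XhS, XhGS); // global solid -> local solid (Boolean)
int[int] IL(1), JL(1); real[int] CL(1);
[IL, JL, CL] = SG2L;
int[int] IS(1), JS(1); real[int] CS(1);
[IS, JS, CS] = SG2S;
int[int] g2l(XhGS.ndof); // global solid dof -> local solid dof (or -1)
g2l = -1;
for (int k = 0; k < IS.n; ++k)
  if (abs(CS[k] - 1.0) < 1e-6) g2l[JS[k]] = IS[k];
int nnz = 0;
int[int] I2(IL.n), J2(IL.n); real[int] C2(IL.n);
for (int k = 0; k < IL.n; ++k)
  if (abs(CL[k] - 1.0) < 1e-6 && g2l[JL[k]] >= 0) {
    I2[nnz] = IL[k]; J2[nnz] = g2l[JL[k]]; C2[nnz] = 1.0; ++nnz;
  }
I2.resize(nnz); J2.resize(nnz); C2.resize(nnz);
matrix S2L = [I2, J2, C2]; // Boolean restriction, local solid -> local interface
```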
Now Mat Aa can be built, but the linear solve fails.
I then found that the reason might be that the local interface mesh is not shared by the local solid mesh on the same process. For example, running with
ff-mpirun -np 2 test_parallel.edp
gives the entry S2Ltemp(20,767) = 0.39 on process 0 (obviously non-Boolean). During my construction of S2L, I found that this DoF from SG2L could not be found in SG2S on process 0, but only in SG2S on process 1. So I cannot build the restriction matrix S2L on process 0. (By the way, only the solid part has this problem; the fluid part coupled with the interface is fine.)
Following your suggestions, I have solved the problem: I define thUserPartitioning first and then partition the global mesh, though I don’t fully understand why this avoids the problem mentioned above.
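For the record, the fix looks roughly like this (a sketch from memory, not the exact script; I assume the macro_ddm.idp convention that a macro named <mesh>UserPartitioning, filled with partitionerSeq/partitionerPar, is picked up by buildDmesh):

```
// rough sketch: one partitioning of the global mesh, reused for both submeshes
fespace PhG(thG, P0);
PhG partG;
partitionerSeq(partG[], thG, mpisize); // element coloring of the global mesh
partitionerPar(partG[], thG, mpiCommWorld, mpisize);
// transfer the element colors to the fluid and solid submeshes
fespace PhF(thF, P0);
PhF partF = partG;
macro thFUserPartitioning()partF[]// buildDmesh(thF) reuses this partition
buildDmesh(thF);
fespace PhS(thS, P0);
PhS partS = partG;
macro thSUserPartitioning()partS[]//
buildDmesh(thS);
```

This way the fluid and solid pieces on each process are cut from the same global partition, so the interface DoFs of one part should not end up on a different process than the other part.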
This script now works for a coupled thermal conduction problem, and I will try to build a parallel solver for the coupled fluid-structure interaction problem from it. test_parallel.edp (4.3 KB)