Call PETSc matrix operator MatDiagonalSet (…)

Thanks. OK, maybe version is the problem. I will try to build the source code…

Dear Prof. Pierre Jolivet,

Sorry for bothering you again. I have now installed v4.9 for Windows, and all the operations can work now. However,

When I use one processor, everything is right: the results match my serial code. But when I use two processors, I can see from the plot that the interpolation is wrong.

Sorry for sending you the code (attached), but you can run it directly. The problem is a benchmark FSI problem: a vertical leaflet in a channel flow. The channel flow is discretised on a stationary mesh Th, and the vertical leaflet is discretised on a separate, moving mesh Ths. The interaction is implemented by FEM interpolation; I developed this method with Olivier Pironneau.

Thank you very much for all your help!

fsi_2d.edp (4.0 KB)

I’m not sure why that is. The parallel interpolation is not really designed for your problem (with meshes of greatly different dimensions).
In your case, your leaflet is very small, so it may be counterproductive to distribute it across processes. Instead, you could, for example, let process #0 hold the solid mesh. Then, distribute the fluid mesh by making sure that the solid mesh only lives in the portion of the fluid mesh held by process #0. That way, you can do all fluid <-> solid interpolation on process #0, “in sequential” (which may be better since the domain is tiny), and do all fluid work in parallel.

Thank you, I will give it a try. However, this is just a test. The motivation is to simulate the whole heart including all the valves, in which case the solid (valves and tissues) would be large.

It seems I always have to run the following code together (inside the time loop) in order to compute the interpolation matrix P (mesh Ths is moving but Th is static):

createMat(Th, A, PV1);
createMat(Ths, B, PV1);
transferMat(Th, PV1, A, Ths, PV1, B, P);
A = fluid(Rh, Rh);
B = solid(Rhs, Rhs);

One problem is that I quickly run out of memory, I guess because createMat() is called again and again.

Another problem is that matrix A is static, so it would be better to move it outside the time loop. However, when I do this, it shows the following error:

Can I somehow keep only the following two lines inside the time loop, and the other three lines outside, since only Ths and matrix B change as time evolves?

transferMat(Th, PV1, A, Ths, PV1, B, P);
B = solid(Rhs, Rhs);

Yes, of course you can. Maybe for good measure add MatDestroy(P); before the call to transferMat.
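Putting that answer together with the snippet from the earlier post, the time loop could be restructured like this (a sketch reusing the names from the thread; the loop bound nt is illustrative):

```freefem
// Th and A are static, so they are set up once; only Ths moves, so only
// B and P are rebuilt at each time step.
createMat(Th, A, PV1);             // once, before the time loop
A = fluid(Rh, Rh);                 // static fluid matrix
createMat(Ths, B, PV1);

for(int it = 0; it < nt; ++it) {
    // ... move Ths here ...
    B = solid(Rhs, Rhs);           // solid matrix on the moved mesh
    MatDestroy(P);                 // free the previous interpolation matrix
    transferMat(Th, PV1, A, Ths, PV1, B, P);
    // ... use P ...
}
```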

Dear Prof. Pierre Jolivet,

You know, after solving a PDE for u, we can use dx(u) to get the derivative of u.

However, in the parallel case it seems we cannot do so directly: I found that the results differ between one processor and two processors.

Do we have to gather (centralise) u on one processor, compute the derivative, and then distribute the results across the processors? Or is there a simpler trick?


You can, but you need to synchronize the solution at ghost elements, see line 36 of (use dx instead of b).

Sorry, I am too silly to understand the details. I actually want to compute the deformation tensor F as follows:

dispx[] += wx[]*dt;  dispy[] += wy[]*dt;
f11=dx(dispx)+1; f12=dy(dispx); f21=dx(dispy); f22=dy(dispy)+1;

However, I found F was not right near the ghost elements.
Do you mean I should put
exchange(A, dx, scaled = true); exchange(A, dy, scaled = true);
between the above two lines of code? How should I choose matrix A? I have tried…which was problematic.

You need to exchange your f11, f12, etc. A must be a Mat defined using the same fespace as your f11, f12, etc.
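In code, that answer could look like the following sketch (Ths, dispx, and dispy reuse names from the thread; Wh is an assumed fespace name, matching whatever space f11, …, f22 actually live in):

```freefem
// Sketch of the fix: exchange the components of F themselves, using a Mat
// built on the same fespace as f11, f12, f21, f22 (here the assumed Wh).
fespace Wh(Ths, P1);
Wh f11, f12, f21, f22;
Mat A;
createMat(Ths, A, P1);

f11 = dx(dispx) + 1; f12 = dy(dispx);
f21 = dx(dispy);     f22 = dy(dispy) + 1;
// note the []: exchange acts on the array of degrees of freedom
exchange(A, f11[], scaled = true);
exchange(A, f12[], scaled = true);
exchange(A, f21[], scaled = true);
exchange(A, f22[], scaled = true);
```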

Thank you very much! I did the following:

but it shows the following operator error:

f11[] instead of f11…

Dear prj,
Can I ask you one more question about the interpolation matrix: when I use transferMat(), is the interpolation matrix it produces always linear, or does it use the shape functions, so that it could be high order when using a high-order fespace?

It will use whatever you put in PV1 and P (using your notations from this post), so it can be of high order.
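Concretely, that means the order is set by the approximation macro passed to createMat/transferMat. A sketch, with PV2 an assumed macro name standing in for the PV1 of the earlier posts:

```freefem
// With a P2 macro in place of PV1, P becomes a P2-to-P2 interpolation matrix.
macro PV2()P2// EOM
Mat A;
Mat B;
Mat P;
createMat(Th, A, PV2);
createMat(Ths, B, PV2);
transferMat(Th, PV2, A, Ths, PV2, B, P);
```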

Should I do
exchange(A, f11[], scaled = true);
immediately after, or can I do operations on f11 first, such as computing the inverse F^{-1} = P, or even matrix operations such as F*F' + I = P, and only then call
exchange(A, p11[], scaled = true);

Or does the order not matter?

How would that matter if none of the operations involve either f11 or A?

Sorry, the operation does involve f11, which is a component of the matrix F. In this case, should I perform the operation first, or exchange() first?

Well, if you want the proper value, of course you need to exchange() beforehand. I’m sorry, I’m not sure I understand the question correctly…

Some very simple examples are:
should I exchange(…a…) first, then compute b, c or d?
or can I first compute b, c or d, then exchange(…b…), exchange(…c…) or exchange(…d…)?
or are they the same?

I think it’s important you try to understand why it would be the same for b and d but different for c if you don’t do the exchange() beforehand.