Call PETSc matrix operator MatDiagonalSet (…)

I’m not sure why that is. The parallel interpolation is not really designed for your problem (meshes of vastly different sizes).
In your case, your leaflet is very small, so it may be counterproductive to distribute it across processes. Instead, you could, for example, let process #0 hold the solid mesh. Then, distribute the fluid mesh while making sure that the solid mesh lives only in the portion of the fluid mesh held by process #0. That way, you can do all fluid <-> solid interpolation on process #0, sequentially (which may be better since that domain is tiny), and do all the fluid work in parallel.
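
A rough sketch of that idea (not from the thread itself; it assumes the distributed fluid mesh was built so that the solid mesh Ths sits entirely inside Thf, the local fluid piece on process #0, and it uses FreeFEM’s sequential interpolate to build the transfer matrix):

// every rank holds Thf, its local piece of the distributed fluid mesh;
// only rank 0 also holds the full, sequential solid mesh Ths
if (mpirank == 0) {
    fespace Vf(Thf, P1);               // fluid space on rank 0's local piece
    fespace Vs(Ths, P1);               // sequential solid space
    matrix Pfs = interpolate(Vs, Vf);  // sequential fluid -> solid interpolation
    // ... interpolate fluid data to the solid, solve the solid problem,
    // and interpolate back, all on process #0 ...
}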

Thank you, I will give it a try. However, this is just a test; the motivation is to simulate the whole heart, including all the valves, in which case the solid (valves and tissue) would be large.

It seems I always have to run the following lines together (inside the time loop) in order to compute the interpolation matrix P (mesh Ths is moving but Th is static):

createMat(Th, A, PV1)                   // fluid-side Mat on the static mesh Th
createMat(Ths, B, PV1)                  // solid-side Mat on the moving mesh Ths
transferMat(Th, PV1, A, Ths, PV1, B, P) // interpolation matrix P between the two
A = fluid(Rh, Rh);
B = solid(Rhs, Rhs);

One problem is that I quickly run out of memory, I guess because createMat() is called again and again.

Another problem is that matrix A is static, so it would be better to assemble it outside the time loop. However, when I do this, I get the following error:

Can I somehow put only the following two lines inside the time loop, and the other three lines outside it, since only Ths and matrix B change as time evolves?

transferMat(Th, PV1, A, Ths, PV1, B, P)
B = solid(Rhs, Rhs);

Yes, of course you can. Maybe for good measure add MatDestroy(P); before the call to transferMat.
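
A minimal sketch of that restructuring (reusing the names from the snippets above; nSteps, the time loop, and the mesh motion are placeholders):

Mat A;
Mat B;
Mat P;
createMat(Th, A, PV1)      // static fluid side, built once
createMat(Ths, B, PV1)
A = fluid(Rh, Rh);         // static fluid matrix, assembled once
for (int step = 0; step < nSteps; step++) {
    // ... move Ths ...
    B = solid(Rhs, Rhs);   // only the solid side changes
    MatDestroy(P);         // free the previous interpolation matrix
    transferMat(Th, PV1, A, Ths, PV1, B, P)
    // ... use P ...
}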

Dear Prof. Pierre Jolivet,

You know, after solving a PDE for u, we can use dx(u) to get the derivative of u.

However, in the parallel case, it seems we cannot do so directly: I found that the results differ between one processor and two processors.

Do we have to gather (centralise) u on one processor, compute the derivative, and then distribute the result back across the processors? Or is there a simple trick?

Best,
Yongxing.

You can, but you need to synchronize the solution at ghost elements, see line 36 of http://jolivet.perso.enseeiht.fr/FreeFem-tutorial/section_8/example2.edp.html (use dx instead of b).
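
Spelled out, the pattern is (a sketch; Vh is the fespace u lives on, and A is a Mat built with createMat on that same fespace):

Vh dxu = dx(u);                    // derivative; wrong at ghost unknowns, since each
                                   // process interpolates using only its own elements
exchange(A, dxu[], scaled = true); // make the values consistent at ghost unknowns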

Sorry, I am too silly to understand the details. I actually want to compute the deformation tensor F as follows:

dispx[] += wx[]*dt; dispy[] += wy[]*dt;  // update the displacement
f11 = dx(dispx) + 1; f12 = dy(dispx);    // F = I + grad(disp)
f21 = dx(dispy); f22 = dy(dispy) + 1;

However, I found that F was not right near the ghost elements.
Do you mean I should put
exchange(A, dx, scaled = true); exchange(A, dy, scaled = true);
between the above two lines of code? How should I choose the matrix A? I have tried…which was problematic.

You need to exchange your f11, f12, etc. A must be a Mat defined using the same fespace as your f11, f12, etc.
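
Something along these lines (a sketch; Pk is a placeholder for the finite element of the fespace that f11, …, f22 live on, and A here is the Mat prj refers to, not the fluid matrix from earlier in the thread):

Mat A;
createMat(Th, A, Pk)   // Mat on the same fespace as f11, ..., f22
f11 = dx(dispx) + 1; f12 = dy(dispx);
f21 = dx(dispy); f22 = dy(dispy) + 1;
exchange(A, f11[], scaled = true); exchange(A, f12[], scaled = true);
exchange(A, f21[], scaled = true); exchange(A, f22[], scaled = true);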

Thank you very much! I did the following:


but it shows the following operator error:

f11[] instead of f11…
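
That is, the call presumably needs the array of degrees of freedom rather than the finite element function itself:

exchange(A, f11[], scaled = true);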

Dear prj,
Can I ask you one more question about the interpolation matrix: when I use transferMat(), is the interpolation matrix it produces always linear, or does it use the shape functions, so that it can be high order when using a high-order fespace?
Best,
Yongxing.

It will use whatever you put in PV1 and P (using your notations from this post), so it can be of high order.
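
For instance, a P2 variant of the earlier calls would look like this (a sketch, everything else unchanged):

createMat(Th, A, P2)
createMat(Ths, B, P2)
transferMat(Th, P2, A, Ths, P2, B, P)   // P2 shape functions, so P is higher order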

Should I do
exchange(A, f11[], scaled = true)
immediately after
f11 = dx(dispx)
or can I first do operations on f11, such as computing the inverse F^{-1} = P, or even some matrix operations such as F*F' + I = P…, and only then call
exchange(A, p11[], scaled = true)

or does the order not matter?

How would that matter if none of the operations involve f11 or A?

Sorry, the operations do involve f11, which is a component of the matrix F. In this case, should I perform the operations first, or exchange() first?

Well, if you want the proper value, of course you need to exchange() beforehand. I’m sorry, I’m not sure I understand the question correctly…

Some very simple examples are:
a = dx(disp);
b = a + 1;
c = a*a;
d = sin(a);
Should I exchange(…a…) first, and then compute b, c, or d?
Or can I first compute b, c, or d, and then exchange(…b…), exchange(…c…), or exchange(…d…)?
Or are they the same?

I think it’s important you try to understand why it would be the same for b and d but different for c if you don’t do the exchange() beforehand.
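
One way to see part of it (assuming the scaled exchange acts at shared unknowns as a partition-of-unity weighted average, v <- sum_j D_j v_j with sum_j D_j = 1): such an average commutes with affine maps, since sum_j D_j (a_j + 1) = (sum_j D_j a_j) + 1, so for b = a + 1 the order does not matter; but sum_j D_j (a_j*a_j) is not (sum_j D_j a_j)*(sum_j D_j a_j) in general, so for c = a*a exchanging after the operation gives a different value.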

Dear prj,

I think your method is very interesting. That is, let process #0 hold the solid mesh. Then, distribute the fluid mesh by making sure that the solid mesh only lives in the portion of the fluid mesh held by process #0.

While my specific physics problem may differ from those discussed here, I encounter a similar issue regarding the coupling between a small computational domain and a large one, for example, the linear stability analysis of a single bubble moving in a uniform flow.

However, I don’t understand how exactly this method is implemented: for example, how do I force a specific region of the large computational domain into the mesh partition held by process #0?

Best,
wqLiu
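
For what it’s worth, one possible direction (a sketch only; whether macro_ddm.idp honors a user-supplied partitioning through a Th#UserPartitioning macro, and the exact hook name, are assumptions to verify against the current macro_ddm.idp): build a P0 field assigning to each element the rank it should go to, force the region that must stay on process #0 to rank 0, and declare it before buildDmesh.

load "PETSc"
macro dimension()2// EOM
include "macro_ddm.idp"

mesh Th = square(100, 100);                  // large (fluid) domain
fespace Ph(Th, P0);
Ph part = 1 + floor((mpisize - 1)*x*0.999);  // illustrative rule: strips for ranks 1 .. mpisize-1
part = (x < 0.25 && y < 0.25) ? 0 : part;    // region holding the small domain -> rank 0
macro ThUserPartitioning()part[]// EOM       // assumed hook for a user-defined partitioning
buildDmesh(Th)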