I have a question about computing gradients when the domain is decomposed and the solve is parallelized. I solved a solid mechanics problem and obtained the displacement solution u. I now want to compute dx(u1)+dy(u2)+dz(u3), the volume strain of the object.
I find that the result I get in a parallel context differs from the result I get when I mpiAllReduce the displacement into a global vector and then compute the volume strain. I am guessing the global one is correct. Is there a way to properly compute the gradient of a vector field under parallelization with PETSc?
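To make the question concrete, here is a minimal sketch of the kind of setup being described, assuming a FreeFEM + PETSc domain decomposition; the mesh, space, and variable names (Th, u1/u2/u3, eps) are illustrative, not taken from the actual script:

```freefem
// Hypothetical sketch: distributed 3D mesh and the naive per-rank
// evaluation of the volume strain. All names are illustrative.
load "PETSc"
macro dimension()3// EOM
include "macro_ddm.idp"

mesh3 Th = cube(10, 10, 10);
buildDmesh(Th)                      // decompose Th across MPI ranks

func Pk = [P1, P1, P1];
fespace Vh(Th, Pk);
Vh [u1, u2, u3];                    // displacement from the solid-mechanics solve

fespace Wh(Th, P1);
Wh eps = dx(u1) + dy(u2) + dz(u3); // naive local evaluation: may differ from the
                                   // global result near subdomain interfaces
                                   // unless ghost values are synchronized
```

The per-rank evaluation only sees the local (overlapping) piece of the mesh, which is why it can disagree with the result computed from the reassembled global displacement.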

Thank you for your response. Under parallelization I simply compute dx(u1)+dy(u2)+dz(u3). As you mentioned, the ghost values need to be handled properly in this case, but I have no idea how to do that. Do you have any examples I can refer to?
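A hedged sketch of one way ghost values are commonly synchronized with the FreeFEM PETSc interface: a Mat built on the same finite element space carries the communication pattern, and exchange() uses it to make the duplicated degrees of freedom consistent across ranks. The exact call signature should be checked against the FreeFEM-PETSc examples; names here are illustrative:

```freefem
// Sketch (assumed API): synchronize ghost dofs of a P1 field via a Mat
// built on the same space. Names (Th, A, eps) are illustrative.
load "PETSc"
macro dimension()3// EOM
include "macro_ddm.idp"

mesh3 Th = cube(10, 10, 10);
buildDmesh(Th)

Mat A;
createMat(Th, A, P1)               // distributed operator on the P1 space

fespace Wh(Th, P1);
Wh eps;                            // e.g. eps = dx(u1) + dy(u2) + dz(u3)
exchange(A, eps[], scaled = true); // make ghost values consistent across ranks
```

The `scaled = true` flag is used in the distributed examples so that contributions in the overlap are weighted by the partition of unity rather than double-counted.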

I ran into a few problems related to this exchange() function:

It seems exchange() does not work for P0 elements.

If I have a vector field u in P1 and I compute the gradient dx(u[0]), would it be better to assign this gradient to a P0 function, i.e., f = dx(u[0]) with f defined on a P0 space?
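For what it is worth, the derivative of a P1 function is piecewise constant per element, so a P0 space can hold it exactly, whereas interpolating it back into P1 averages the element values at vertices. A small illustrative sketch (names are hypothetical):

```freefem
// Sketch: dx of a P1 field is element-wise constant, so P0 stores it
// exactly; P1 interpolation would smooth it at vertices.
load "msh3"
mesh3 Th = cube(10, 10, 10);
fespace Vh(Th, [P1, P1, P1]);
fespace Wh0(Th, P0);
Vh [u1, u2, u3];
Wh0 f = dx(u1);   // exact per-element representation of the derivative
```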

Hi,
Here I attach the files to reproduce the result. The script reads the local solutions (computed on 4 cores) and the meshes, then computes the gradient from the solution. However, inconsistencies can still be seen in the VTK result even after applying exchange(). test.zip (1.5 MB)

What inconsistency are you talking about? You are exporting a piecewise constant function as a piecewise linear one, so that is probably why you are seeing inconsistencies. If I fix the order of the output function, I don’t see inconsistencies in ParaView.
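The fix described above can be sketched as follows, assuming the iovtk-style savevtk where the `order` array selects cell data (0, matching P0) versus point data (1, matching P1); the field and filename are illustrative:

```freefem
// Sketch: export a P0 field with the matching output order so ParaView
// shows cell data instead of a spurious per-vertex interpolation.
load "iovtk"
load "msh3"
mesh3 Th = cube(10, 10, 10);
fespace Wh0(Th, P0);
Wh0 eps = x + y + z;  // stand-in for the computed volume strain
int[int] ord = [0];   // 0 = piecewise constant (cell data), 1 = P1 (point data)
savevtk("eps.vtu", Th, eps, dataname = "eps", order = ord);
```

Exporting a P0 field with order 1 forces an interpolation to vertices, which is exactly where neighboring subdomains can appear to disagree.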