Transient elasticity problem: sequential vs parallel

Hello,

I am trying to solve the transient elasticity equation using the Newmark-beta algorithm. The code yields correct results when I run it on a single process; however, when switching to parallel, the results become weird. I did not have any issue with the static problem, both in sequential and in parallel. For info, I use the SCOTCH library to decompose the domain and the plain MPI approach (no libraries such as ffddm or HPDDM involved).
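
For reference, the standard Newmark-β updates for displacement and velocity are:

$$
u_{n+1} = u_n + \Delta t\,\dot{u}_n + \Delta t^2\big[(\tfrac{1}{2}-\beta)\,\ddot{u}_n + \beta\,\ddot{u}_{n+1}\big],
\qquad
\dot{u}_{n+1} = \dot{u}_n + \Delta t\big[(1-\gamma)\,\ddot{u}_n + \gamma\,\ddot{u}_{n+1}\big],
$$

with the equilibrium equation $M\ddot{u}_{n+1} + K u_{n+1} = f_{n+1}$ (plus damping if present) solved at each step.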

Why not use PETSc? You can simply use the same script and just switch from matrix to Mat. You can post the sequential script here if you want.
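
For instance, here is a minimal sketch on a scalar Poisson problem, just to show the switch (Th, Vh, vA, and the solver options are placeholders, not the names of your script):

```
// minimal sketch: same varf as in a sequential script, but with a
// distributed mesh and a PETSc Mat instead of a plain matrix
load "PETSc"
macro dimension()2// EOM
include "macro_ddm.idp"

mesh Th = square(40, 40);
buildDmesh(Th)                  // distribute the mesh across processes
fespace Vh(Th, P1);
varf vA(u, v) = int2d(Th)(dx(u)*dx(v) + dy(u)*dy(v))
              + int2d(Th)(v)
              + on(1, 2, 3, 4, u = 0);
Mat A;                          // was: matrix A;
createMat(Th, A, P1)            // attach the distributed structure
A = vA(Vh, Vh);
real[int] rhs = vA(0, Vh);
set(A, sparams = "-ksp_type gmres -pc_type gamg");
Vh u;
u[] = A^-1 * rhs;               // same syntax as in the sequential script
```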

Hello,

Thanks for your reply. I have not looked into PETSc yet. The code is attached below; when running it on one process the results are correct, but when I run it in parallel I get weird results.

NM_seq&par.edp (7.0 KB)

Actually, you can just save each vector to a file and see where the sequential and parallel codes start to diverge.
My guess is that you do not enforce the boundary conditions properly: you compute sums of vectors but never re-enforce the Dirichlet conditions. You only see this error appear in parallel because it is more sensitive to round-off errors.
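
Something along these lines inside the time loop (a minimal sketch; u, iter, and the file name are placeholders):

```
// dump the current vector, one file per process and per time step,
// then diff against the sequential run
mesh Th = square(10, 10);
fespace Vh(Th, P1);
Vh u = x + y;       // stands for your displacement field
int iter = 0;       // stands for your time-step counter
{
    ofstream dump("u_step" + iter + "_rank" + mpirank + ".txt");
    dump.precision(16);
    dump << u[] << endl;
}
```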

I see. This is weird, but in the results I am getting, the displacement vector, say, has its entries multiplied by the number of processes. I did not encounter this type of problem with the static problem. Do you have any idea where I should look? Thank you again.

Where do you see that the displacement vector is being multiplied in the first place (which line)?

Starting from line 177 (dummy5), all these vectors have their entries multiplied by the number of processes.

Have you checked whether Dirichlet boundary conditions are properly enforced or not?

The boundary conditions are enforced on lines 86, 92, and 96; then I compute the opposite of the linear form by multiplying by -1 on line 125. There is also a fixed boundary that I enforce when defining the stiffness matrix on line 65.

Right, but during the time-stepping the conditions may not be enforced properly, since you sum vectors.
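
One way to check, and fix, this is to re-impose the condition after each sum with the tgv = -1 trick, along these lines (a minimal sketch; the mesh, the label, and the field are placeholders):

```
// re-impose a homogeneous Dirichlet condition on a vector obtained by
// summing other vectors
mesh Th = square(10, 10);
fespace Vh(Th, P1);
Vh u = x*y;                          // stands for a vector built from sums
varf vOn(w, v) = on(1, w = 1);
real[int] D = vOn(0, Vh, tgv = -1);  // D[i] = 1 on Dirichlet dofs, 0 elsewhere
for (int i = 0; i < Vh.ndof; ++i)
    if (D[i] == 1) u[][i] = 0;       // reset the constrained dofs
```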

I think I found the problem. On line 161, I integrate the force on a 2D surface using int2d. Apparently that is what causes the problem: when I switch to int3d and run the code on one process vs. several, I get the same correct result. However, I want to apply the force on a specific surface. Honestly, I do not quite understand why this error happens.

Good catch. Just remove mpirank from the integral and I think you should be good to go.

Removing mpirank from this line yields wrong results. I tried removing mpirank from all the integrals; it worked, but then there is no difference between running the code in sequential and in parallel (same execution time).

Ah, correct, because of master = -1: every process assembles its own contribution and these are summed, so any integral that is not restricted to the local partition is counted once per process. Sorry, I don’t like this interface and don’t use it anymore, so it’s hard for me to help you out. I’d encourage you to switch to PETSc, where you don’t need all those mpirank in the integrals. It would definitely be easier for me to figure things out.
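
To illustrate the effect (a minimal sketch; the label and the form are placeholders, and the mpiAllReduce stands for the summation that the distributed solver effectively performs):

```
// every process integrates the full boundary, so summing the per-process
// contributions multiplies the force by the number of processes
mesh Th = square(10, 10);
fespace Vh(Th, P1);
varf vF(u, v) = int1d(Th, 1)(v);     // unrestricted surface integral
real[int] bLoc = vF(0, Vh);
real[int] bGlob(bLoc.n);
mpiAllReduce(bLoc, bGlob, mpiCommWorld, mpiSUM);
// bGlob == mpisize * bLoc: entries multiplied by the number of processes
```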

I started converting the code to PETSc. How can I define the same matrices as on lines 86, 92, and 96? Is it set(K, sparams = "-ksp_type gmres -ksp_max_it 200");? And what about the others?
Thank you again for your input.

If you’ve never used PETSc in FreeFEM, please have a look at the following tutorial: Pierre Jolivet introduction to parallel FreeFEM part 1 - YouTube. You’ll see that very few adjustments are needed to your original sequential script (without all the mpirank mess in the integrals).
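
As a minimal sketch of what the setup could look like with two distributed matrices, each with its own solver options (the forms vK and vM, Pk, and the labels are placeholders for those of your script):

```
load "PETSc"
macro dimension()3// EOM
include "macro_ddm.idp"
macro def(i)[i, i#B, i#C]// EOM      // vector field definition
macro init(i)[i, i, i]// EOM         // vector field initialization
load "msh3"

mesh3 Th = cube(10, 10, 10);
buildDmesh(Th)                       // distribute the mesh
func Pk = [P1, P1, P1];
fespace Vh(Th, Pk);

varf vK([u, uB, uC], [v, vB, vC])    // stands for your stiffness form
    = int3d(Th)(dx(u)*dx(v) + dy(uB)*dy(vB) + dz(uC)*dz(vC))
    + on(1, u = 0, uB = 0, uC = 0);
varf vM([u, uB, uC], [v, vB, vC])    // stands for your mass form
    = int3d(Th)(u*v + uB*vB + uC*vC);

Mat K;
Mat M;
createMat(Th, K, Pk)                 // one distributed structure per matrix
createMat(Th, M, Pk)
K = vK(Vh, Vh);
M = vM(Vh, Vh);
// each Mat carries its own Krylov solver, so each gets its own set():
set(K, sparams = "-ksp_type gmres -ksp_max_it 200");
set(M, sparams = "-ksp_type cg -pc_type jacobi");
// K^-1 * b and M^-1 * b then use their respective options
```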