Parallel fluid code

Dear all,

Is there an example of a parallel code for the 3D Navier-Stokes equations?

Best,
Yongxing.

Yes, GitHub - prj-/moulin2019al: Augmented Lagrangian Preconditioner for Hydrodynamic Stability Analysis.

This is brilliant. Thank you very much!
Yongxing.

@yongxing I have developed a simple 3D NS solver for my work based on several examples I found in the FF repository. It is simpler than what Pierre has developed (so maybe easier to get started with) as it works only for relatively low Reynolds numbers. You may want to have a look at that as well: GitHub - mbarzegary/navier-stokes-solver: Parallel Navier-Stokes solver for coupled diffusion-convection PDEs

It works using a fieldsplit preconditioner and is coupled with a diffusion-convection equation. There are some 2D and 3D example mesh files with which you can give it a quick try!
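For anyone who has not set up a fieldsplit preconditioner in FreeFEM before, here is a minimal, self-contained sketch, not the solver from the repository: a plain 2D Stokes problem with a Taylor-Hood [P2, P2, P1] pair and an additive fieldsplit, in the spirit of the stokes-fieldsplit examples shipped with FreeFEM. The mesh, boundary conditions, and PETSc option strings are illustrative assumptions, not Mojtaba's settings.

```freefem
// minimal 2D Stokes sketch with a PETSc fieldsplit preconditioner
// run with, e.g.: ff-mpirun -np 4 stokes-fieldsplit-sketch.edp
load "PETSc"
macro dimension()2// EOM
include "macro_ddm.idp"

macro def(i)[i, i#B, i#C]// EOM      // vector field definition used by macro_ddm.idp
macro init(i)[i, i, i]// EOM         // vector field initialization
macro grad(u)[dx(u), dy(u)]// EOM
macro div(u)(dx(u) + dy(u#B))// EOM  // note the # concatenation
func Pk = [P2, P2, P1];              // Taylor-Hood pair

mesh Th = square(40, 40);            // lid-driven cavity geometry
Mat A;
createMat(Th, A, Pk)                 // distributed PETSc matrix
fespace Wh(Th, Pk);

varf vStokes([u, uB, p], [v, vB, q])
    = int2d(Th)(grad(u)' * grad(v) + grad(uB)' * grad(vB)
              - div(u) * q - div(v) * p - 1e-10 * p * q)
    + on(1, 2, 4, u = 0, uB = 0) + on(3, u = 1, uB = 0);
A = vStokes(Wh, Wh);
real[int] rhs = vStokes(0, Wh);

// tell PETSc which unknowns belong to which split: 1 = velocity, 2 = pressure
Wh [flagU, flagUB, flagP] = [1, 1, 2];
string[int] names(2);
names[0] = "velocity";
names[1] = "pressure";
set(A, sparams = "-ksp_type fgmres -ksp_monitor -pc_type fieldsplit -pc_fieldsplit_type additive "
               + "-fieldsplit_velocity_pc_type gamg -fieldsplit_pressure_pc_type jacobi",
       fields = flagU[], names = names);

Wh [u, uB, p];
u[] = A^-1 * rhs;                    // solve the coupled velocity-pressure system
```

The split names ("velocity", "pressure") are what the -fieldsplit_*_ prefixes in the option string refer to, so trying other inner preconditioners is just a matter of editing sparams.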

@mojtaba.barzegari: Thank you very much! I will have a look at your code.
Yongxing.

@mojtaba.barzegari, thanks again for your code, which is brilliant and indeed easy to use.
I am just trying to understand some details of your code:

  1. What’s the difference between dx(u#x) and dx(u)?
  2. Do you have a paper about this, and did you test its scalability? I ran the 3D coarse mesh (~10^4 nodes) using 8 cores, and only got 100-step results after 5 hours (your maximum number of steps is 400, which would then take about 20 hours). How long did the fine mesh (~10^5 nodes) take when you tested it?
  3. Do you think it’s easy to modify your code to allow mesh movement?

Best,
Yongxing.
  1. dx(u#x) concatenates the macro argument u with the literal suffix x; if you call the macro with velocity, for example, you’ll end up with dx(velocityx) in your varf (see the sketch after this list);
  2. the preconditioner choice of Mojtaba for the 3D test case is not the most appropriate from a performance point of view: extremely robust, but probably slower than a full LU;
  3. yes, it is easy to allow mesh displacement in any FreeFEM + PETSc script.
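To make item 1 concrete, here is a tiny sketch of the # concatenation; the mesh and variable names (Th, ux, uy, uz) are made up for the illustration and are not taken from Mojtaba's code:

```freefem
load "msh3"
// # glues the macro argument to a literal suffix, so one macro works for any base name
macro div(u) (dx(u#x) + dy(u#y) + dz(u#z)) // EOM

mesh3 Th = cube(5, 5, 5);
fespace Vh(Th, [P2, P2, P2, P1]);

// With trial functions named ux, uy, uz, the call div(u) expands textually to
//     dx(ux) + dy(uy) + dz(uz)
// whereas dx(u) alone would refer to a variable literally named u.
varf vDiv([ux, uy, uz, p], [vx, vy, vz, q]) = int3d(Th)(div(u) * q);

// Likewise, calling the macro with "velocity" yields dx(velocityx) + dy(velocityy) + dz(velocityz).
```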

Thank you Pierre @prj for taking care of this.

@yongxing Yes, as Pierre mentioned, the performance of the fieldsplit is not great, but a full LU preconditioner scales terribly in memory, so for a 3D case it quickly becomes impossible to use an LU preconditioner (as you can see, I used one for the 2D code). To improve the performance, I had made a couple of changes in the version of the code I use for my own work, but I had forgotten to push them to the repository. I have now made a new commit, so please grab the new version and see what has changed: I changed the viscosity formulation to dynamic viscosity and removed the density from the equation. With this change you should see a significant performance improvement (each iteration takes about 20 seconds to run using 6 MPI cores).
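For reference, one hedged reading of that change (the commit itself is the authoritative source, and the wording leaves some ambiguity between dynamic and kinematic viscosity): if the net effect is dividing the momentum equation by the constant density ρ, the system being assembled becomes

```latex
% momentum balance with density \rho and dynamic viscosity \mu
\rho\left(\partial_t \mathbf{u} + (\mathbf{u}\cdot\nabla)\mathbf{u}\right)
  - \mu\,\Delta\mathbf{u} + \nabla p = \mathbf{f},
\qquad \nabla\cdot\mathbf{u} = 0
% dividing through by \rho, with \nu = \mu/\rho and \tilde{p} = p/\rho
\partial_t \mathbf{u} + (\mathbf{u}\cdot\nabla)\mathbf{u}
  - \nu\,\Delta\mathbf{u} + \nabla\tilde{p} = \mathbf{f}/\rho,
\qquad \nabla\cdot\mathbf{u} = 0
```

Under that reading the pressure unknown is rescaled by 1/ρ, so the assembled blocks, and therefore what the preconditioner sees, are not identical to the previous version.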

Unfortunately, I don’t have a paper about this as I wrote this code using various techniques and models I found in the FF examples and Pierre’s personal website, but the formulation is relatively standard, so in any finite element CFD book you can find the underlying theory.


Thank you both very much for helping me with this.
@mojtaba.barzegari: I have downloaded your code and I can see the modification. This seems similar to a non-dimensional equation, although the geometry is still dimensional.
It is also interesting that this improves the efficiency. Is this equivalent to multiplying the diagonal of the preconditioner matrix by a constant (the density)? Can you briefly explain why it dramatically reduces the number of iterations?
Best,
Yongxing.

@yongxing Sorry for my late reply.

What do you mean by the dimensional geometry? FF doesn’t care about the units specified in the mesh files, so they are always dimensionless.
Regarding the preconditioner: unfortunately, I don’t have much information about the fieldsplit preconditioner, so I don’t know exactly how having a lower coefficient improves its convergence rate.

Pierre definitely has more insight into this. @prj, do you have any idea about this? The question is why this change (see this commit) dramatically improves the performance.

You are not solving the same problem (the scaling factor is different), so it seems normal not to have the same convergence behavior.