Unsteady three-dimensional PETSc

Hello everyone,
I am trying to solve a simple case of 3D laminar flow inside a pipe using PETSc with FreeFEM++ v4.8. This is part of the code I use (I removed only the part that saves files):

> load "PETSc"
> macro dimension()3				//EOM
> macro def(j)[j, j#B, j#C, j#D]    //EOM
> macro init(j)[j, j, j, j]		   //EOM
> 
> include "macro_ddm.idp"
> load "iovtk" 			
> load "msh3"				
> load "medit"			
> 
> 
> mesh3 Th=readmesh3("....mesh");		
> plot(Th, wait=true);
> 
> func FES=[P2,P2,P2,P1];					
> fespace XXXXFES(Th,FES);				
> XXXXFES[u,v,w,p];						
> XXXXFES[hx,hy,hz,q];					 
> XXXXFES[up,vp,wp,pp];					
> 	
> 
> real InletVel = 1;					
> real dt = 0.01;             		    
> int iter = 3;						
> real Re = 1500.;						
> 
> int Wall = 3;
> int Inlet = 12;
> int Outlet = 10;
> 
> [u,v,w,p]=[0.,0.,0.,0.];
> 
> varf LNS ([u, v, w, p],[hx, hy, hz, q])
> 	= int3d(Th)(
> 			   (1./dt)*(u*hx+v*hy+w*hz))
> 	 +int3d(Th)(
> 			   +(1./Re)*(dx(u)*dx(hx)+dy(u)*dy(hx)+dz(u)*dz(hx))		
> 			   +(1./Re)*(dx(v)*dx(hy)+dy(v)*dy(hy)+dz(v)*dz(hy))		
> 			   +(1./Re)*(dx(w)*dx(hz)+dy(w)*dy(hz)+dz(w)*dz(hz))		
> 			   -p*(dx(hx)+dy(hy)+dz(hz))								
> 			   -(dx(u)+dy(v)+dz(w))*q				  					   
> 			   -1e-10*p*q												
> 			   )
> 	 +int3d(Th)(
> 			   +(1./dt)*convect([up,vp,wp],-dt,up)*hx
>                +(1./dt)*convect([up,vp,wp],-dt,vp)*hy
> 		       +(1./dt)*convect([up,vp,wp],-dt,wp)*hz
> 		       )
> 			
> 			+ on(Wall, u=0., v=0., w=0.)
> 			+ on(Inlet, u=0., v=0., w=InletVel);
> 			
> 			
> string[int] Variables(2);
> Variables[0] = "Velocity";
> Variables[1] = "Pressure";
> 	   		
> Mat J;
> createMat(Th, J, FES);
> J = LNS(XXXXFES,XXXXFES, tgv=-1);
> real[int] Jrhs = LNS(0, XXXXFES, tgv=-1);
> set(J, sparams = "-ksp_type bicg -pc_type jacobi", fields = u[], names = Variables);
> 
> int i;
> for(i = 0; i<=iter; i++)
> {
> 	up[] = u[];
>	u[]=J^-1*Jrhs;
> }

When I run with -np 2 or more I get: "FreeFEM++-mpi.exe aborted the job. abort code 50176059". I ran it with -malloc_debug to check whether it is a memory error and I get: "FreeFEM++-mpi.exe aborted the job. abort code 70033056".
If I run with -np 1 it runs, but it seems that my loop is not correct, as the velocity/pressure do not update after each iteration.
Any guidance to fix this problem is appreciated. Thank you in advance!
Regards,
Robert

You need to update your right-hand side at every time step.
My first advice would be to do a simple sequential script (without a Mat, but a plain matrix).
Then, when it’s working, switch to a distributed matrix. The fact that your code is not correct even with one process means that you first need to fix your discretization/time-stepper.
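For instance, a minimal sequential sketch (untested, reusing the unknowns and the varf LNS from your script; the names NS and NSrhs and the choice of sparsesolver are just placeholders) with the right-hand side reassembled at every time step:

matrix NS = LNS(XXXXFES, XXXXFES, tgv=-1); // plain FreeFEM matrix, no Mat
set(NS, solver = sparsesolver);
real[int] NSrhs(XXXXFES.ndof);
for (int i = 0; i <= iter; i++)
{
    up[] = u[];                       // store the previous time step
    NSrhs = LNS(0, XXXXFES, tgv=-1);  // reassemble the RHS: convect uses up, which has just changed
    u[] = NS^-1 * NSrhs;              // solve for the new [u, v, w, p]
}

Once this sequential loop behaves as expected, the same structure carries over to the distributed Mat version.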

Dear professor, thank you. It is working now: I moved the right-hand-side assembly inside the loop and changed the preconditioner from Jacobi to LU. I have two more questions, more or less related to the topic:

  1. The first is about the way the results are written when I use PETSc. I took your advice, wrote a simple sequential script, and saved the results at every time step (say I have 10 time steps). In that case I get one main file plus 10 sub-files, and if I open the main file in ParaView I can see the results over all time steps. When I use PETSc, I instead get 10 main files, each pointing to another "x" files, "x" being the number of processes used: basically one file for the full domain and "x" files for the sub-domains obtained from the decomposition. The problem is that I do not get a single file containing the results over all time steps; I have to load each file to see the results at a specific time step. Attached is the script I use to save the results in both cases. What should I change to save the results from all time steps in a single file? Is it also possible to avoid saving the results of each sub-domain? In a bigger case that could save a lot of time/memory by not writing every file. Let me know if I should attach some screenshots if you have not seen this problem before.

int[int] Order = [1,1];
string DataName = "Velocity Pressure";

for (int i = 0; i <= iter; i++)
{
    if(mpirank == 0)
        cout << "Iter " << i << endl;

    up[] = u[];

    NSrhs = LNS(0, XXXXFES, tgv=-1);  // reassemble the RHS each step

    u[] = NS^-1 * NSrhs;

    savevtk("Path\Results" + i + ".vtu", Th,
            [u, v, w], p,
            dataname=DataName, order=Order);
}

  2. I'm planning on performing some simulations of turbulent flow (probably Re ~ O(10^6)); could you recommend which preconditioners I should look into?

Best regards,
Robert

  1. Writing the solution to a single file means centralizing said solution on a single process. It's a serious bottleneck once you start looking at decompositions with thousands of processes. It's doable nonetheless; you can, for example, search the forum to find a solution.
  2. It will all depend on your discretization scheme and time-stepping.

Thank you!
I partially solved it by writing something like this:

mesh3 Th = readmesh3…
func FES = [P2, P2, P2, P1];
fespace XXXXFES(Th, FES);
int[int] n2o;
mesh3 ThBackup = Th;
macro ThN2O()n2o// EOM
Mat NS;
createMat(Th, NS, FES);
varf LNS …
int[int] Order = [1,1];
string DataName = "Velocity Pressure";

for (int i = 0; i <= iter; i++)
{
    up[] = u[];
    NSrhs = LNS(0, XXXXFES, tgv=-1);
    u[] = NS^-1 * NSrhs;
    real[int] sol;
    ChangeNumbering(NS, u[], sol);                  // to the PETSc numbering
    ChangeNumbering(NS, u[], sol, inverse = true);  // and back before gathering
    fespace XXXXFESBackup(ThBackup, FES);
    XXXXFESBackup [ub, vb, wb, pb];                 // Backup (global mesh)
    XXXXFESBackup [ured, vred, wred, pred];         // Reduced
    int[int] rest = restrict(XXXXFES, XXXXFESBackup, n2o);

    for[j, v : rest] ub[][v] = u[][j];
    mpiReduce(ub[], ured[], processor(0, mpiCommWorld), mpiSUM);
    if(mpirank == 0)
        savevtk("Results" + i + ".vtu", ThBackup,
                [ured, vred, wred], pred,
                dataname=DataName, order=Order, communicator = mpiCommSelf);
}
Now I have a group file, but if I open it, it actually opens only Results0.pvd and the "Time step" is stuck at 0. I tried to work around it by opening the last results file, Results10.pvd, extracting the previous time steps from it, and creating a group of time steps, but it didn't work. Is there another combination of filters to apply, or am I missing something in the code?
Regards,
Robert

Yes, you are missing something. See the append option for saving a sequence of files, e.g., in this example.
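For instance, keeping the names from your script (a rough sketch, untested), the call inside the time loop would use a single fixed file name:

if(mpirank == 0)
    savevtk("Results.vtu", ThBackup, [ured, vred, wred], pred,
            dataname = DataName, order = Order,
            communicator = mpiCommSelf, append = true);

With append = true, each call should add a new time step to the same Results.pvd group file instead of producing a separate Results0.pvd, Results1.pvd, and so on.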

Thank you, it works!

savevtk("Results.vtu", ThBackup, [ured, vred, wred], pred,
        dataname=DataName, order=Order, communicator = mpiCommSelf, append=true);