Solve the 3D-MHD model in parallel using PETSc

Hi, everyone!
I want to solve a 3D MHD model, but when I use solver=UMFPACK, I get an out-of-memory error on the [16,16,16] grid, and the run is extremely slow.
My aim is to compute the error convergence order of the scheme on a sequence of meshes, with the finest reaching [32,32,32]. I have learned that parallel computing with PETSc can speed up the run and reduce memory usage.

But I’m a beginner at FreeFEM and not sure how to convert my current code into PETSc parallel code. Can someone help me? Thank you, and I look forward to your valuable suggestions!

MHD3D_varf.edp (17.4 KB)

What issue are you facing?

Thank you for your reply!
When I run on the [16,16,16] grid, I get an out-of-memory error. Do you have any suggestions to solve this problem?

Could you please send the log in full?

Sure.

  406 :     << errH1(i, 4) << " " << orderH1(i, 4) << " " << endl;
  407 : };
  408 :  sizestack + 1024 =36372  ( 35348 )

 Error Umfpack -1 :  out_of_memory   current line = 313
Exec error :  Error Umfpack -1 :  out_of_memory
   -- number :1
Exec error :  Error Umfpack -1 :  out_of_memory
   -- number :1
 err code 8 ,  mpirank 0

Well, you are still using UMFPACK.

Dear Dainy-Jia,
you can do it as in the attached file:
MHD3D-PETSc.edp (21.4 KB)

Some of the choices are probably not optimal (prj will know better), in particular the preconditioner (“lu” is used).
It does not necessarily run faster than with a single process.
I think it would be worthwhile to have a scheme with decoupled u, E, B, so that the matrices are smaller.
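For reference, the usual pattern for moving a sequential FreeFEM script to PETSc follows the `macro_ddm.idp` workflow: distribute the mesh, build a distributed `Mat`, and solve through `set(..., sparams = ...)` instead of `solver=UMFPACK`. Below is a minimal sketch on a placeholder Poisson problem (the mesh, FE space, and `varf` are stand-ins, not the actual MHD system from the attachment); the same structure applies to the coupled MHD matrix:

```freefem
// Minimal FreeFEM-PETSc sketch; run with: ff-mpirun -np 4 script.edp
// The Poisson varf is a placeholder for the real (MHD) bilinear form.
load "PETSc"
load "msh3"
macro dimension()3// EOM
include "macro_ddm.idp"

mesh3 Th = cube(16, 16, 16);
Mat A;
buildDmesh(Th)        // distribute the mesh across the MPI processes
fespace Vh(Th, P1);
createMat(Th, A, P1)  // distributed matrix matching the local FE space

macro grad(u) [dx(u), dy(u), dz(u)]// EOM
varf vProblem(u, v) = int3d(Th)(grad(u)' * grad(v)) + int3d(Th)(v)
                    + on(1, u = 0);
A = vProblem(Vh, Vh);
real[int] rhs = vProblem(0, Vh);

// Parallel direct solve (MUMPS) as a first step; an iterative solver
// with a fieldsplit preconditioner would scale better for large grids.
set(A, sparams = "-pc_type lu -pc_factor_mat_solver_type mumps");
Vh u;
u[] = A^-1 * rhs;
```

With this setup, memory is spread over the processes, which is usually what makes the [32,32,32] grid reachable when a one-process UMFPACK factorization runs out of memory.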