Enquiry about MPI

Dear all,

I am new to FreeFEM and have recently run into some problems with MPI.
My algorithm is iterative, and in each iteration there are many sub-problems that are independent of each other, so I would like to solve them in parallel. However, it seems that every step of the whole algorithm gets parallelized together, which is not what I want. Could you tell me whether I can parallelize just the per-iteration part of the program? If not, is there another way around this? Thanks a lot!
Sincerely,

Liga

Please look at this very similar post: Solving a lot of independent ODEs in parallel.
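The usual pattern for a batch of independent solves is to spread them over the processes with the loop index, roughly like this (untested sketch; the sin() call just stands in for solving one sub-problem, and nSub is a made-up size):

// run with: ff-mpirun -np 4 script.edp
int nSub = 100;                  // number of independent sub-problems
real[int] res(nSub);
res = 0;

// round-robin: process r handles sub-problems r, r+mpisize, r+2*mpisize, ...
for (int i = mpirank; i < nSub; i += mpisize)
    res[i] = sin(1.0*i);         // placeholder for solving sub-problem i

cout << "process " << mpirank << " of " << mpisize << " done" << endl;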

Thank you so much for your reply. I will try it.

Thanks for your earlier reply. I still have some questions about this topic. My code has the following form:

Stage 1: not in parallel

Stage 2: in parallel

Stage 3: not in parallel

In addition, Stage 2 and Stage 3 are inside an iteration loop. Could you tell me whether I can parallelize just Stage 2, and whether there are any code examples like this? Thanks!

Sincerely yours,
Liga

You could simply surround stage 1 and stage 3 with if (mpirank == 0) { /* do stage 1 or 3 */ }, but that is extremely wasteful, since all other processes will sit idle. Why are stage 1 and stage 3 not parallel?
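For stage 1, that pattern looks roughly like this (untested sketch; the assignment to u1 is just a placeholder for the real solve):

mesh Th = square(40, 40);        // every process builds the same mesh
fespace Vh(Th, P1);
Vh u1;

// Stage 1: only process 0 computes, the others wait here
if (mpirank == 0)
    u1 = x*y;                    // placeholder for the real stage 1 solve

// hand process 0's solution to everyone before stage 2 starts
broadcast(processor(0), u1[]);

// Stage 2: every process can now use u1 for its own sub-problems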

Thanks for your reply. Stage 1 produces one solution, which is then used in stage 2. Stage 2 consists of many independent sub-problems, hence it can be parallelized, and stage 3 uses all of the stage 2 solutions.
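Concretely, the data flow is something like this (illustrative sketch, with each stage 2 result reduced to a single number; it assumes a FreeFEM version where mpiAllReduce accepts real[int] arrays):

int nSub = 50;
real[int] mine(nSub), glob(nSub);
mine = 0;

// Stage 2: each process fills only its own entries
for (int i = mpirank; i < nSub; i += mpisize)
    mine[i] = 1.0 + i;           // placeholder for sub-problem i's result

// untouched entries stay 0, so a sum rebuilds the full array on every process
mpiAllReduce(mine, glob, mpiCommWorld, mpiSUM);

// Stage 3: every process now sees all stage 2 results in glob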

“Stage 1 produces one solution”

Why can’t this be done in parallel?

“Stage 3 employs all the solutions of stage 2”

Again, what makes this stage not runnable in parallel?

Sorry, it definitely could run in parallel and would give the correct solution on all processes. I am a beginner with MPI, and I was just wondering whether we can parallelize only part of the code.

I see. It’s actually very easy to parallelize the solution phase using PETSc; you just need to add a couple of keywords. But if you want to parallelize only part of the script, you need to use if conditions on mpirank, as I wrote earlier. Again, that is a rather bad idea; it is best to use three different scripts: a sequential one for stage 1, a parallel one for stage 2, and a sequential one for stage 3.
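For the PETSc route, the “couple of keywords” amounts to something like the diffusion example shipped with FreeFEM (sketch of a distributed Poisson solve; the solver flags in sparams are only a suggestion and may need adjusting to your installation):

load "PETSc"                     // PETSc interface
macro dimension()2// EOM
include "macro_ddm.idp"          // distributed-mesh helper macros

macro grad(u)[dx(u), dy(u)]// EOM
mesh Th = square(100, 100);      // global mesh
Mat A;
createMat(Th, A, P1)             // partitions Th and sets up the distributed matrix

fespace Vh(Th, P1);
varf vPb(u, v) = int2d(Th)(grad(u)' * grad(v)) + int2d(Th)(v) + on(1, u = 0);
A = vPb(Vh, Vh, tgv = -1);
real[int] rhs = vPb(0, Vh, tgv = -1);
set(A, sparams = "-pc_type gamg");
Vh u;
u[] = A^-1 * rhs;                // this solve runs in parallel on all processes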

That is a nice idea. Thank you so much for your help and patience.