Dear all,
How can I output the solution at a given point of a 3D mesh when computing in parallel with PETSc?
Please give me some advice.
Thank you in advance.
Best,
Liu
You need to find which process is in charge of the point, for example using the function chi, and then output the solution on that process.
load "msh3"
mesh3 Th = cube(20, 20, 20);
cout << chi(Th)(-1, 0, 0) << " " << chi(Th)(0.5, 0.5, 0.5) << endl;
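For the original question (printing the value at a chosen point in parallel), a minimal sketch along the following lines should do, assuming a recent FreeFEM version where buildDmesh is available; the coordinates x0, y0, z0 and the field uh are placeholders, and with the default overlapping decomposition more than one process may contain the point and print:
load "msh3"
load "PETSc"
macro dimension()3// EOM
include "macro_ddm.idp"

mesh3 Th = cube(20, 20, 20);
buildDmesh(Th)                      // distribute the mesh among the processes
fespace Vh(Th, P1);
Vh uh = x*y*z;                      // placeholder for the computed solution
real x0 = 0.5, y0 = 0.5, z0 = 0.5;  // assumed coordinates of the point of interest
if (chi(Th)(x0, y0, z0) > 0.5)      // the local mesh of this process contains the point
    cout << "process " << mpirank << ": uh(x0, y0, z0) = " << uh(x0, y0, z0) << endl;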
Dear Professor,
Many thanks for your answer.
I have another question.
For an elasticity problem, I want to simulate the propagation of an elastic wave through a 3D mesh over time. This requires passing the displacements of adjacent particles in the 3D mesh from one time step to the next. I have already solved this problem with a serial code, which is as follows:
"…
…
…
solve Lame([u1,u2,u3],[v1,v2,v3],solver=CG,master=-1)=
int3d(Th1)(density*idt2*u1*v1) - int3d(Th1)(density*2*idt2*u1f*v1) + int3d(Th1)(density*idt2*u1b*v1) +
int3d(Th1)(density*idt2*u2*v2) - int3d(Th1)(density*2*idt2*u2f*v2) + int3d(Th1)(density*idt2*u2b*v2) +
int3d(Th1)(density*idt2*u3*v3) - int3d(Th1)(density*2*idt2*u3f*v3) + int3d(Th1)(density*idt2*u3b*v3) +
int3d(Th1)(lambda*div(u1,u2,u3)*div(v1,v2,v3) + 2.*mu*(epsilon(u1,u2,u3)'*epsilon(v1,v2,v3)))
+ on(2001, u1=0., u3=0., u2=amplitude*(1.-2.*(pif*(t-tt))^2)*exp(-(pif*(t-tt))^2))
+ on(1001,2002,2003,2004,2005,2006,2007, u1=0., u2=0., u3=0.);
u1b=0;
u1f= u1b;
u1=u1f;
u2b=0;
u2f=u2b;
u2=u2f;
u3b=0;
u3f=u3b;
u3=u3f;
for(int i=0;i<300;++i)
{
t=t+dt;
Lame;
u1b=u1f;
u1f=u1;
u2b=u2f;
u2f=u2;
u3b=u3f;
u3f=u3;
plot (u1);
}."
However, the corresponding solution from the parallel computation with PETSc is wrong. The parallel code is as follows:
"…
…
…
varf Lame([u1,u2,u3],[v1,v2,v3])
= int3d(Th1)(density1*idt2*u1*v1) - int3d(Th1)(density1*2*idt2*u1f*v1) + int3d(Th1)(density1*idt2*u1b*v1) +
int3d(Th1)(density1*idt2*u2*v2) - int3d(Th1)(density1*2*idt2*u2f*v2) + int3d(Th1)(density1*idt2*u2b*v2) +
int3d(Th1)(density1*idt2*u3*v3) - int3d(Th1)(density1*2*idt2*u3f*v3) + int3d(Th1)(density1*idt2*u3b*v3) +
int3d(Th1)(lambda1*div(u1,u2,u3)*div(v1,v2,v3) + 2.*mu1*(epsilon(u1,u2,u3)'*epsilon(v1,v2,v3)))
+ on(2002, u1=0., u3=0., u2=0.001*(1.-2.*(pif*(t-tt))^2)*exp(-(pif*(t-tt))^2));
[u1b,u2b,u3b]=[0.,0.,0.];
[u1f,u2f,u3f]=[u1b,u2b,u3b];
[u1,u2,u3]=[u1f,u2f,u3f];
Mat A;
createMat(Th, A, Pk)
for(int i=0;i<300;++i) {
t=t+dt;
matrix Loc = Lame(Vh, Vh,tgv=-1);
real[int] rhs = Lame(0, Vh,tgv=-1);
set(A, sparams = "-ksp_view -ksp_max_it 100");
Vh def(u);
A = Loc;
u = A^-1 * rhs;
[u1b,u2b,u3b]=[u1f,u2f,u3f];
[u1f,u2f,u3f]=[u,uB,uC];
macro def1(u)// EOM
plotMPI (u1);
}."
I cannot find the reason for the wrong solution. I have searched through the examples, but I have not found any relevant ones; of course, I may have missed them.
Please give me some advice.
Thank you very much.
Best,
Liu
You should add -pc_type lu to your set of sparams.
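For example, the option can simply be appended to the string of options you already pass:
set(A, sparams = "-ksp_view -ksp_max_it 100 -pc_type lu");
With a direct LU factorization, the question of whether the Krylov solver has converged within 100 iterations no longer arises.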
Dear Professor,
Following your suggestion, I have added “-pc_type lu” to the set of sparams. However, the solution over the time iterations is still wrong.
The parallel computing code is as follows,
"…
…
fespace Vh(Th,Pk);
Vh [u,uB,uC], [v,vB,vC],[u1f,u2f,u3f]=[.0,.0,.0],[u1b,u2b,u3b]=[.0,.0,.0];
…
varf Lame([u,uB,uC],[v,vB,vC])
= int3d(Th)(density*idt2*u*v) - int3d(Th)(density*2*idt2*u1f*v) + int3d(Th)(density*idt2*u1b*v) +
int3d(Th)(density*idt2*uB*vB) - int3d(Th)(density*2*idt2*u2f*vB) + int3d(Th)(density*idt2*u2b*vB) +
int3d(Th)(density*idt2*uC*vC) - int3d(Th)(density*2*idt2*u3f*vC) + int3d(Th)(density*idt2*u3b*vC) +
int3d(Th)(lambda*div(u,uB,uC)*div(v,vB,vC) + 2.*mu*(epsilon(u,uB,uC)'*epsilon(v,vB,vC)))
+ on(1001, u=0., uB=0., uC=0.00001*(1.-2.*(pif*(t-tt))^2)*exp(-(pif*(t-tt))^2))
+ on(2001,2002,2003,2004,2005,2006,2007, u=0., uB=0., uC=0.);
[u1b,u2b,u3b]=[0.,0.,0.];
[u1f,u2f,u3f]=[u1b,u2b,u3b];
[u,uB,uC]=[u1f,u2f,u3f];
Mat A;
createMat(Th, A, Pk)
for(int i=0;i<100;++i) {
t=t+dt;
matrix Loc = Lame(Vh, Vh,tgv=-1); //solve the problem
real[int] rhs = Lame(0, Vh,tgv=-1);
set(A, sparams = "-ksp_view -ksp_max_it 100 -pc_type lu");
Vh def(u); // local solution
A = Loc;
u = A^-1 * rhs;
macro def1(u)u// EOM
//Time iteration of displacement
[u1b,u2b,u3b]=[u1f,u2f,u3f];
[u1f,u2f,u3f]=[u,uB,uC];
macro params()cmm = "Global solution time="+t, fill = 1// EOM
plotMPI(Th, def1(u), P1, def1, real, params)
int[int] Order=[1,1];
savevtk("uhydrate6_y.vtu", Th, [u,uB,uC], dataname="[u,uB,uC]", order=Order, append=true);
}."
The obtained solution increases monotonically with the number of iterations, from values of order -10^20 up to +10^20. Such a solution is clearly wrong: it should fluctuate around 0, as obtained with the serial FreeFem++ code.
Please give me some advice. Thank you very much.
Best regards,
Liu
Dear Professor,
Could you give me some advice about the problem above?
Thank you very much.
Best,
Liu
I cannot run the code, so I cannot help.
Dear Professor,
Thank you very much for your reply.
The code file and the .msh file (the second attachment) are attached to this reply.
Parallel_11.edp (4.8 KB)
Quartz_water_calcite6.edp (2.2 MB)
In addition, my computer has 32 processors. However, when I run the code on more than 8 processes, it does not produce a solution, yet it gives no error message.
Please give me some advice.
Best wishes,
Liu
You did not attach Background_fracture_hydrate6.msh (or Background_fracture_hydrate6.edp that I would then rename). I don’t understand what the problem is: your code works on 8 or fewer processes, but not on more?
Dear Professor,
Thank you so much for your reply.
You are exactly right: the second file, “Quartz_water_calcite6.edp”, is the .msh file. You can rename it to “Background_fracture_hydrate6.msh”.
I have two questions. The first is the one you mentioned: my code works on 8 or fewer processes, but not on more. The second, and the most important one, is that the solution obtained with the parallel code increases monotonically with the number of iterations, from values of order -10^20 up to +10^20. Such a solution is clearly wrong: it should fluctuate around 0, with magnitudes of order 10^-4, as obtained with the serial FreeFem++ code and shown in the figure below.
(figure: displacement over time from the serial code, fluctuating around 0)
So, could you help me find the problem in the parallel computing code, please?
Thank you very much.
Best,
Liu
Do you get the proper solution when running your code with a single MPI process?
Sorry, I did not run my code with a single MPI process. But I obtain the proper solution using the serial code (one process).
One process with PETSc? The exact same code with a single process is running OK?
Sorry, professor, I have not run my code with a single MPI process with PETSc.
OK, then you need to first check that you get the same result with 1 process with PETSc. If not, then the problem is likely not coming from PETSc, but the fact that you are in practice running two scripts that are different, and thus you should not expect the same results.
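For instance, assuming you launch your jobs with the standard ff-mpirun wrapper, something like
ff-mpirun -np 1 Parallel_11.edp
should give a field directly comparable with the one from the sequential script on the same mesh.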
OK, professor, I am now running my program with a single MPI process with PETSc. I will tell you the result later.
So, for now you think that my parallel code is correct for solving this problem, right?
Also, why can the code not run when the number of processes is greater than 8?
Dear Professor,
In order to help solve the problem effectively, I have attached the serial code to this reply.
Quartz_water_calcite6 (2).edp (4.7 KB)
I have carefully compared the two codes and I think the parallel code is right. However, I am afraid there is something I do not understand, so please check the two codes to find the potential problem.
Thank you very much.
Dear Professor,
I have obtained the solution with 1 process with PETSc; it is similar to the result from 8 processes with PETSc, and it is not the same as the result from the serial code.
In addition, I know that I cannot expect exactly the same results from the two codes. However, the trend and order of magnitude of the results from MPI with PETSc are wrong. Therefore, I suspect there is something wrong with my parallel code, so I am asking for your help.
Do you have any other suggestions?
Best,
Liu
You need to figure out why the “PETSc with a single process” and the sequential codes are giving you different results. This is not expected. For that, I would start with a simplified code instead of using your huge variational formulation.
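For example, a reduced test along these lines (a plain 3D Poisson problem instead of your elasticity formulation; this is only an illustrative sketch, not your script) is easy to compare against its purely sequential counterpart:
load "msh3"
load "PETSc"
macro dimension()3// EOM
include "macro_ddm.idp"
macro def(i)i// EOM
macro init(i)i// EOM
macro grad(u)[dx(u), dy(u), dz(u)]// EOM

mesh3 Th = cube(10, 10, 10);
Mat A;
createMat(Th, A, P1)                 // distributes Th and builds the distributed matrix
fespace Vh(Th, P1);

varf Poisson(u, v) = int3d(Th)(grad(u)' * grad(v)) + int3d(Th)(v) + on(1, u = 0);

A = Poisson(Vh, Vh, tgv = -1);
real[int] rhs = Poisson(0, Vh, tgv = -1);
set(A, sparams = "-pc_type lu");
Vh u;
u[] = A^-1 * rhs;                    // local solution on this process
macro params()cmm = "Global solution"// EOM
plotMPI(Th, u, P1, def, real, params)
Once such a reduced problem gives the same field with 1 process, with several processes, and with the sequential solver, the same checks can be repeated on the elasticity formulation term by term.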
Yes, Professor, I am trying to figure out what the problem is. Once I have narrowed it down, I will ask for your help again. Thank you very much.