Varf including a finite element function that is the solution of a different PDE, in parallel computing with PETSc

Hi FreeFem++ developers!

Thank you very much for making FreeFem++.

I would like to perform 3D topology optimization using PETSc, which involves a 3D elasticity problem and a reaction-diffusion equation (RDE).
The structure is represented by a characteristic (chi) function obtained from the solution of the RDE.

First, I tried the following code for the parallel computation of the 3D elasticity problem.
The varf includes chiP, which is updated at each iteration of the optimization process.
chiP is obtained from the RDE (which is solved sequentially, not in parallel) before the elasticity problem is solved.


load "PETSc"                        // PETSc plugin
macro dimension()3// EOM            // 2D or 3D
include "macro_ddm.idp"  

// generate mesh (the global mesh3 Sh; omitted in this snippet)
buildDmesh(Sh);

func Pk = [P2, P2, P2]; // vector finite element space
fespace Wh(Sh, Pk);

Mat A;

// the following two macros are required by "createMat"
macro def(i)[i, i#B, i#C]// EOM     // vector field definition
macro init(i)[i, i, i]// EOM        // vector field initialization
createMat(Sh, A, Pk)

Wh def(u);

macro epsilon(u) 	[dx(u),dy(u#B),dz(u#C),(dz(u#B)+dy(u#C)), (dz(u)+dx(u#C)), (dy(u)+dx(u#B))] // strain tensor
macro D [
			[2.*mu+lambda,	lambda,			lambda,			0,		0,		0	], 
			[lambda,		2.*mu+lambda,	lambda,			0,		0,		0	], 
			[lambda,		lambda,			2.*mu+lambda,	0,		0,		0	], 
			[0,				0,				0,				mu,		0,		0	], 
			[0,				0,				0,				0,		mu,		0	], 
			[0,				0,				0,				0,		0,		mu	]
		] //elastic tensor
		
macro Aotomori [
			[2.*A2+A1	,	A1,				A1,				0,		0,		0	], 
			[A1,			2.*A2+A1	,	A1,				0,		0,		0	], 
			[A1,			A1,				2.*A2+A1	,	0,		0,		0	], 
			[0,				0,				0,				A2	,	0,		0	], 
			[0,				0,				0,				0	,	A2,		0	], 
			[0,				0,				0,				0	,	0,		A2	]
		] //elastic tensor
		
real gZ = -0.01;
macro div(u)(dx(u) + dy(u#B) + dz(u#C))// EOM

varf vPb(def(u), def(v)) = int3d(Sh)((D * epsilon(u))' * epsilon(v) * E * chiP)
	+ int2d(Sh, traction)(gZ * vC)
	+ on(wall, u = 0.0, uB = 0.0, uC = 0.0);

A = vPb(Wh, Wh);

// define right hand side vector
real[int] rhs = vPb(0, Wh);

// set solver
set(A, sparams = "-pc_type gamg -ksp_view -ksp_max_it 200", bs = 3);

// solve
u[] = A^-1 * rhs;


The code runs without errors, but during the optimization process the change in chiP has no effect when the elasticity problem is solved.
How can I pass the chiP computed by the RDE to the parallel solver of the elasticity problem?

Sorry if similar questions have been asked in the past.
I would appreciate your response.

You did not copy/paste your code properly.

Thank you for taking the time to reply despite your busy schedule.

I have sent you part of the code.
Should I send you the code for the entire optimization?
I wish there were a way to send it to you privately.

You can send it in private.

Dear Professor Jolivet,
Thank you for helping me through personal communication.

What I wanted to do was send the solution, which is computed sequentially, to the decomposed subdomains used in the parallel computation.
I had not looked into the official documentation closely enough; this can be accomplished with the following command:


broadcast(processor(int rk), Data D)

This means that process rk broadcasts Data D to all processes inside MPI_COMM_WORLD.
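
For example, here is a minimal sketch of the idea (the names ThGlobal, PhGlobal, PhLocal, and chiPLocal are placeholders of mine, and the actual RDE solve is omitted): the field is computed on the global mesh on rank 0, its degree-of-freedom vector is broadcast, and the result is interpolated onto the local subdomain mesh used by the parallel varf.

load "PETSc"
load "msh3"
macro dimension()3// EOM
include "macro_ddm.idp"

mesh3 ThGlobal = cube(10, 10, 10);      // placeholder global mesh (assumption for this sketch)
mesh3 Sh = ThGlobal;                    // copy that will be decomposed
buildDmesh(Sh);                         // Sh is now the local subdomain mesh

fespace PhGlobal(ThGlobal, P1);         // sequential space holding chiP
fespace PhLocal(Sh, P1);                // local space on the subdomain

PhGlobal chiP;
if (mpirank == 0) {
    // ... solve the RDE sequentially here and store its solution in chiP ...
    chiP = 1.0;                         // placeholder value
}
broadcast(processor(0), chiP[]);        // send the DOF vector of chiP to every process

PhLocal chiPLocal = chiP;               // interpolate the global field onto the local mesh
// chiPLocal can now be used in the parallel elasticity varf, e.g.
// int3d(Sh)((D * epsilon(u))' * epsilon(v) * E * chiPLocal)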

Many thanks,
Keita Kambayashi

Yes, this calls MPI_Bcast internally.