Composite finite element spaces with SLEPc

Dear FreeFEM community,

I have been trying the new composite finite element space feature, which seems quite powerful. However, when I attempted to use a composite FE space together with EPSSolve, a problem arose: how do I correctly create a finite element function to store the eigenvectors?

In previous examples, this is typically done with something like:

Wh<complex>[int] def(vec)(nev); // array to store eigenvectors

My question is: how should I create vec in the composite FE framework?
I tried to imitate the example examples/hpddm/navier-stokes-2d-SLEPc-complex.edp, but failed.

My code is attached; it reports the following error:

   67 : complex[int]     val(nev);      // array to store eigenvalues
   68 : Xh<complex>
 Error line number 68, in file convection-FSI-composite-SLEPc.edp, before  token >
syntax error
  current line = 68 mpirank 0 / 1
Compile error : syntax error
	line number :68, >
error Compile error : syntax error
	line number :68, >
 code = 1 mpirank: 0

convection-FSI-composite-SLEPc.edp (1.8 KB)

Best regards,
H. Weng

This question has nothing to do with SLEPc; it's a composite space issue. This doesn't work:

func PF = [P2, P2, P1];
func PT = P2;

mesh ThT = square(100,100);
mesh ThF = square(100,50,[x*0.5,y]);
fespace VhF(ThF, PF);
fespace VhT(ThT, PT);

fespace Xh(<VhF,VhT>);
Xh<complex>[int] vec;

Yes, it seems that declaring an array of this type is not supported in the current FreeFEM version.
In my problem, is it possible to use an array like complex[int, int] vec(Xh.ndof, nev) to store the eigenvectors?

Yes, that is enough.
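For the record, a minimal sketch of that workaround (with Xh the composite space defined above; the value of nev is illustrative):

int nev = 10;                         // number of requested eigenpairs
complex[int]      val(nev);           // eigenvalues
complex[int, int] vec(Xh.ndof, nev);  // plain array, one column per eigenvector,
                                      // instead of the unsupported Xh<complex>[int] declaration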

I tried to create a complex[int,int] array to store the eigenvectors, and then I got:

   73 : int nev = 2;
   74 : complex[int]     val(nev);      // array to store eigenvalues
   75 : //Xh<complex>[int] [[u1,u2,p],T](nev); // array to store eigenvectors
   76 : complex[int,int]   vec(Xh.ndof,nev);
   77 : 
   78 : 
   79 : complex s = getARGV("-shift_real", 1.0e0) + getARGV("-shift_imag", 1e-15) * 1i;
   80 : string params = "-eps_tol 1.0e-15 -eps_nev " + nev + " " +
   81 :     "-eps_ncv 40 -eps_type krylovschur -st_type sinver
  ... : t -eps_monitor_all " +
   82 :     "-eps_target_magniude -eps_target "+ real(s) + "+" + imag(s) + "i";
   83 : 
   84 : int k = EPSSolve(J, M, vectors = vec, values  = val, sparams = params)Impossible to cast <P3KNMISt7complexIdEE> in <13FEbaseArrayKnISt7complexIdEE>
	 (	  <13FEbaseArrayKnISt7complexIdEE> :   <11FEbaseArrayISt7complexIdE5v_fesE> )
	 (	  <13FEbaseArrayKnISt7complexIdEE> :   <St4pairIP11FEbaseArrayISt7complexIdE6v_fes3EiE> )
	 (	  <13FEbaseArrayKnISt7complexIdEE> :   <11FEbaseArrayISt7complexIdE6v_fes3E> )
	 (	  <13FEbaseArrayKnISt7complexIdEE> :   <St4pairIP11FEbaseArrayISt7complexIdE6v_fesLEiE> )
	 (	  <13FEbaseArrayKnISt7complexIdEE> :   <11FEbaseArrayISt7complexIdE6v_fesLE> )
	 (	  <13FEbaseArrayKnISt7complexIdEE> :   <St4pairIP11FEbaseArrayISt7complexIdE6v_fesSEiE> )
	 (	  <13FEbaseArrayKnISt7complexIdEE> :   <11FEbaseArrayISt7complexIdE6v_fesSE> )
	 (	  <13FEbaseArrayKnISt7complexIdEE> :   <St4pairIP11FEbaseArrayISt7complexIdE5v_fesEiE> )

 Error line number 84, in file FSI-composite.edp, before  token )

If I understand correctly, this suggests that EPSSolve only accepts the FEbaseArray type?
Could you please take another look? I’d really appreciate it!

Just look at the SLEPc examples in examples/hpddm. You have to use another named parameter, not vectors.

I’ve reviewed all the SLEPc examples, and they consistently use FE function arrays like Vh[int], Vh<real>[int], or Vh<complex>[int]. As we’ve previously discussed, this type of declaration isn’t implemented for composite spaces yet. Perhaps the composite space functionality isn’t fully mature at this stage? :confused: For now, I will go back to using a block system.

Use the named parameter array.
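For concreteness, a minimal sketch of the resulting call (with vec and val declared as above, and assuming J and M are the distributed Mat<complex> operators; the SLEPc options are illustrative):

string params = "-eps_tol 1.0e-8 -eps_nev " + nev + " -eps_type krylovschur";
int k = EPSSolve(J, M, array = vec, values = val, sparams = params); // array instead of vectors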

For composite spaces involving different meshes, how is domain decomposition implemented? Specifically, when physical domains have overlapping regions, do they share the same decomposition?

The decomposition is done at the algebraic level (the matrix is decomposed into stripes of contiguous rows, like a typical PETSc Mat).

Hi, @prj

In a conjugate heat transfer problem, the fluid domain is usually a subset of the heat domain (which includes both fluid and solid). I have the following questions:

  1. How can I perform the same decomposition on both meshes?
    I have an idea: mark the fluid domain with a specific region, and then use trunc to retrieve the decomposed fluid mesh after the heat domain has been decomposed.

  2. If DDM is too complicated, can I bypass it using the algebraic method you mentioned?
    How can I use such an algebraic method instead of DDM, as done in many examples?

  1. There are multiple examples in the FreeFEM repository, e.g., FreeFem-sources/examples/hpddm/restriction-2d-PETSc.edp at develop · FreeFem/FreeFem-sources · GitHub or FreeFem-sources/examples/hpddm/helmholtz-coupled-2d-PETSc-complex.edp at develop · FreeFem/FreeFem-sources · GitHub.
  2. It is not too complicated, and it is the only way to do assemblies in parallel.
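For question 1, here is a rough sketch of the trunc-based idea (the geometry and names are illustrative; the restriction example linked above shows the complete pattern):

load "PETSc"
macro dimension()2//
include "macro_ddm.idp"

border cOuter(t=0, 2*pi){x = cos(t); y = sin(t); label = 2;}
border cInner(t=0, 2*pi){x = 0.5*cos(t); y = 0.5*sin(t); label = 1;}
mesh ThHeat = buildmesh(cOuter(100) + cInner(100));  // heat domain: solid annulus + fluid disk (two regions)
int fluidRegion = ThHeat(0., 0.).region;             // region number of the triangle containing (0,0), i.e. the fluid disk
buildDmesh(ThHeat)                                    // distribute the heat mesh
mesh ThFluid = trunc(ThHeat, region == fluidRegion); // local fluid mesh, consistent with the heat decomposition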

Hello! I am not sure how to solve a block system with EPSSolve.
My left-hand side is:

 J= [ [AA,  AB,  0   ],
      [BA', BB,  BC  ],
      [0 ,  BC', 0   ]  ];

and my right-hand side is:

Mat<complex> Loc = [[M1M   ,0      ,0      ],
                    [0      ,M2M   ,0      ],
                    [0      ,0     ,M3M    ]];

The unknowns associated with the three blocks are T, [u1,u2], and p.

Could you please check out my script to solve a conjugate heat transfer eigenvalue problem:

//  run with MPI:  ff-mpirun -np 4 script.edp
// NBPROC 4
//load "iovtk"

load "PETSc-complex"

macro dimension()2//
include "macro_ddm.idp"

func PkU = [P2, P2];
func Pkp = P1;
func PkT = P2;

macro grad(u)[dx(u), dy(u)]//
macro div(u)(dx(u#1) + dy(u#2))//
macro UgradV(u, v)[[u#1, u#2]' * [dx(v#1), dy(v#1)],
                   [u#1, u#2]' * [dx(v#2), dy(v#2)]]//

int inner = 1;
int outer = 2;

border c1(t=0,2*pi){
  x=0.5*cos(t);
  y=0.5*sin(t);
  label=inner;
}

border c2(t=0,2*pi){
  x=1*cos(t);
  y=1*sin(t);
  label=outer;
}

mesh Th1=buildmesh(c1(-100)+c2(100));
mesh Th2=buildmesh(c1(100));
mesh Th3=Th1+Th2;
plot(Th3,wait=1);

fespace VhU(Th1,[P2,P2]);
fespace Vhp(Th1,P1);

fespace VhT(Th3,P2);


VhU<complex> [uc1,uc2]=[0,0];

VhT<complex> Tc=y+1;

plot(Th3,Tc,wait=1);

real Pr = 1;
real Ra = 1e3;
real tM=getARGV("-tM", 0.5); // thetaM
real kappa,nu;

func real DefCoef()
{
	nu = sqrt(Pr/Ra);
	kappa = 1./sqrt(Ra*Pr);
	return 0.; // a func with a declared return type needs a return statement
}
DefCoef();



varf A11(T,TT)  = int2d(Th3)( 
    kappa * (grad(T)' * grad(TT)) 
    + [uc1,uc2]'*[dx(T), dy(T)]*TT) 
    + on(outer, T=-y);

varf A12([u1,u2],[TT]) = int2d(Th1)(
    [u1,u2]'*[dx(Tc), dy(Tc)]*TT)
    //+ on(inner,outer, u1 = 0, u2 = 0)
    ;

varf A21([v1,v2],[T]) = int2d(Th1)(
    - (T)*v2)
    
    /* + on(outer, T=-y) */;

varf A22([u1,u2],[v1,v2]) = int2d(Th1)(
      (UgradV(uc, u) + UgradV(u, uc))' * [v1, v2]
    + nu * (grad(u1)' * grad(v1) +
            grad(u2)' * grad(v2)))
    + on(inner,outer, u1 = 0, u2 = 0);
    
varf A23([p],[v1,v2]) = int2d(Th1)(
    - p * div(v))
        ;
varf mass1(T,TT) = int2d(Th3)( T*TT );
varf mass2([u1,u2],[v1,v2]) = int2d(Th1)( [u1,u2]'*[v1,v2] );
varf mass3(p,q) = int2d(Th1)( 0*p*q );


matrix<complex> A11m=A11(VhT,VhT);
matrix<complex> A12m=A12(VhU,VhT);
matrix<complex> A21m=A21(VhT,VhU);
matrix<complex> A22m=A22(VhU,VhU);
matrix<complex> A23m=A23(Vhp,VhU);
//Mat<complex> ;
Mat<complex> AA=A11m;
Mat<complex> AB=A12m;
Mat<complex> BA=A21m;
Mat<complex> BB=A22m;
Mat<complex> BC=A23m;
Mat<complex> J= [ [AA,  AB,  0   ],
                  [BA', BB,  BC  ],
                  [0 ,  BC', 0   ]  ];

matrix<complex> M1m = mass1(VhT, VhT);
matrix<complex> M2m = mass2(VhU, VhU);
matrix<complex> M3m = mass3(Vhp, Vhp);

Mat<complex> M1M = M1m;
Mat<complex> M2M = M2m;
Mat<complex> M3M = M3m;


Mat<complex> Loc = [[M1M   ,0      ,0      ],
                    [0      ,M2M   ,0      ],
                    [0      ,0      ,M3M   ]];


Mat<complex> M(J, Loc, clean = true);


//fespace Wh(Th3, [P2, P2, P2, P1]); // complete space [u, v, T, p]
int dofGlobal = VhT.ndof + VhU.ndof + Vhp.ndof;


int nev = 10;
complex[int,int] vec(dofGlobal,nev);

complex[int]     val(nev);      // array to store eigenvalues

//Wh<complex>[int] [vec,vecB,vecC,vecD](nev); // array to store eigenvectors
complex s = getARGV("-shift_real", 1.0e-6) + getARGV("-shift_imag", 0.6) * 1i;
string params = "-eps_tol 1.0e-6 -eps_nev " + nev + " " +
    "-eps_type krylovschur -eps_monitor_all " +
    "-eps_target " + real(s) + "+" + imag(s) + "i";
int k = EPSSolve(J, M, array = vec, values  = val, sparams = params);

I am not sure what’s wrong with my code.

What is the issue with your code?

Hello,
The example FreeFem-sources/examples/hpddm/helmholtz-coupled-2d-PETSc-complex.edp at develop · FreeFem/FreeFem-sources · GitHub cannot run with the -user_partitioning parameter.

ff-mpirun -n 4 helmholtz-coupled-2d-PETSc-complex.edp -wg finishes normally,

but ff-mpirun -n 4 helmholtz-coupled-2d-PETSc-complex.edp -wg -user_partitioning prints:

  0 KSP Residual norm 7.501711698142e-03
 ** On entry to ZGEMV parameter number  6 had an illegal value
--------------------------------------------------------------------------
mpiexec has exited due to process rank 3 with PID 0 on
node whc-MU71-SU0-00 exiting improperly. There are three reasons this could occur:

1. this process did not call "init" before exiting, but others in
the job did. This can cause a job to hang indefinitely while it waits
for all processes to call "init". By rule, if one process calls "init",
then ALL processes must call "init" prior to termination.

2. this process called "init", but exited without calling "finalize".
By rule, all processes that call "init" MUST call "finalize" prior to
exiting or it will be considered an "abnormal termination"

3. this process called "MPI_Abort" or "orte_abort" and the mca parameter
orte_create_session_dirs is set to false. In this case, the run-time cannot
detect that the abort call was an abnormal termination. Hence, the only
error message you will receive is this one.

This may have caused other processes in the application to be
terminated by signals sent by mpiexec (as reported here).

You can avoid this message by specifying -quiet on the mpiexec command line.

Moreover, could you please explain the purpose of partitionerSeq, partitionerPar, and CoherentGlobalMesh?

Thanks for the report, this is fixed in Properly-sized interpolation matrix · FreeFem/FreeFem-sources@b406672 · GitHub, but it also needs a PR from Htool Avoid empty-sized gemv by prj- · Pull Request #64 · htool-ddm/htool · GitHub, so it will take a while to get everything integrated. partitionerSeq is to call a sequential mesh partitioner, partitionerPar is to call a distributed-memory mesh partitioner or broadcast the result from a sequential mesh partitioner, and CoherentGlobalMesh is to make sure that the numbering of the mesh is coherent for doing either a FEM or a BEM assembly. These questions are unrelated to your initial inquiries (I believe?), so it would be best to open new posts.
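For reference, a rough sketch of how the first two are typically driven from a script (assuming the default partitioner selected by macro_ddm.idp; the resulting element-to-rank field would then be passed on to the mesh-distribution step, as in the -user_partitioning branch of the cited example):

load "PETSc"
macro dimension()2//
include "macro_ddm.idp"

mesh Th = square(50, 50);
fespace Ph(Th, P0);
Ph part;                                           // one value per element: destination rank
if(mpirank == 0)
    partitionerSeq(part[], Th, mpisize);           // sequential partitioner on rank 0
partitionerPar(part[], Th, mpiCommWorld, mpisize); // broadcast the result (or call a parallel partitioner)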