Lagrange multiplier for Dirichlet boundary condition

Dear all,
Is there an example of implementing a Dirichlet boundary condition using the Lagrange multiplier method?
Thank you in advance!
Best,
Yongxing

For an example with a Lagrange multiplier, see: FreeFem-sources/laplace-lagrange-PETSc.edp at develop · FreeFem/FreeFem-sources · GitHub
You’ll obviously have to change this a bit to make it work with Dirichlet conditions. For example, change the definitions of B and pB so that they have dimensions equal to the number of DoFs along the Dirichlet boundary.
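Roughly, a sequential version of that idea looks like the sketch below (untested and written from memory, not taken from the linked script; the cube, the label 1, and the boundary value 1 are just placeholders). The multiplier lives on a meshS extracted from the Dirichlet boundary, so the coupling block B has one row per boundary DoF:

// untested sketch: Poisson problem, u = 1 enforced on label 1 through a
// Lagrange multiplier defined on the extracted boundary mesh
load "msh3"
mesh3 Th = cube(10, 10, 10);
int[int] labs = [1];
meshS ThS = extract(Th, label=labs);

fespace Vh(Th, P1);   // primal unknown
fespace Sh(ThS, P1);  // multiplier, only on the Dirichlet boundary

varf va(u, v) = int3d(Th)(dx(u)*dx(v) + dy(u)*dy(v) + dz(u)*dz(v)) + int3d(Th)(1.*v);
varf vb(u, l) = int2d(ThS)(u*l);        // coupling: rows in Sh, columns in Vh
varf vg(unused, l) = int2d(ThS)(1.*l);  // right-hand side of the constraint u = 1

matrix A = va(Vh, Vh);
matrix B = vb(Vh, Sh);                  // Sh.ndof x Vh.ndof, non-square
real[int] f = va(0, Vh);
real[int] g = vg(0, Sh);

matrix M = [[A, B'], [B, 0]];           // saddle-point block matrix
real[int] rhs(Vh.ndof + Sh.ndof);
rhs(0:Vh.ndof - 1) = f;
rhs(Vh.ndof:rhs.n - 1) = g;
set(M, solver=sparsesolver);
real[int] sol(rhs.n);
sol = M^-1 * rhs;

Vh u;
u[] = sol(0:Vh.ndof - 1);               // keep only the primal part of the solution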

This is great. Thank you @cmd !
BTW, do we have an H^2 FEM space?

Hi Chris @cmd , thanks for your previous help. Do we have to use PETSc to implement this?
I simply did this in FreeFem:
heat3d_lagrange_multiplier.edp (1 KB)
but it returns a zero result, and I cannot see what is wrong.

diff forum3*
58c58
< u[]=sol;
---
> 
marchywka@happy:/home/documents/cpp/proj/freefem/play$ 
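In other words (my reading of the diff; the full script is only in the attachment): the solution of the linear system is never copied back into the FE function, so u keeps its initial zero values. Line 58 should read

u[] = sol; // copy the solver's result back into the FE function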

Thank you. What a silly mistake!

Dear @cmd @marchywka , thank you both for your previous help; I have made good progress using a Lagrange multiplier to implement the Dirichlet BC. However, there are problems when I try to parallelise my code.
It seems FreeFem++ and ff-mpirun -n 1 … compile code differently. I found the following three operations are not allowed with ff-mpirun -n 1 …, while they work well with FreeFem++:

1. int[int] labs = [1];
    ThS = extract(ThG, label=labs);
2. matrix UAtemp = ua(Vh,Sh);
3. createMat(ThS, UU, Pk)

It seems createMat(ThS, UU, Pk) does not work for a surface mesh ThS? Also, it seems the second command is illegal if Vh and Sh are two different FE spaces defined on two different meshes?

Can you please help me have a look? The sequential code works well.
stokes_lagrange_parallel.edp (2.2 KB)
stokes_lagrange.edp (1.5 KB)

You need this to make it work: FreeFem-sources/helmholtz-3d-surf-PETSc-complex.edp at master · FreeFem/FreeFem-sources · GitHub.

Thank you @prj , I am trying to do the following, but it seems macro dimension()3 and macro dimension()3S conflict with each other.

int nn = 10;

real mu=1.e0;
real u0x=1, u0y=0, u0z=0, u1x=0,u1y=0,u1z=0;

mesh3 Th = cube(nn, nn, nn);
mesh3 ThG=Th;

func Pk=[P2,P2,P2,P1];
func Pm=[P2,P2,P2];

int[int] fn2o;
macro ThN2O() fn2o //

buildDmesh(Th);

int[int] labs = [1];
meshS ThS = extract(Th, label=labs);

fespace Sh(ThS, Pm);
Sh [ax,ay,az];

fespace Vh(Th, Pk);
Vh [ux,uy,uz,p];

fespace VhG(ThG, Pk);		
VhG [uxG,uyG,uzG,pG],[uxR,uyR,uzR,pR];

int[int] findex = restrict(Vh, VhG, fn2o);

macro Grad(u) [[dx(u#x), dy(u#x), dz(u#x)],
	       [dx(u#y), dy(u#y), dz(u#y)],
	       [dx(u#z), dy(u#z), dz(u#z)]] //

macro div(u) (dx(u#x)+dy(u#y)+dz(u#z)) //

macro vect(u) [u#x, u#y, u#z] //

  
varf uu([ux,uy,uz,p],[uhx,uhy,uhz,ph]) = int3d(Th)(mu*(Grad(u):Grad(uh)) -div(uh)*p -div(u)*ph)
  +on(2,3,4,5,6, ux=0, uy=0, uz=0);

varf ua([ux,uy,uz,p],[ahx,ahy,ahz]) = int2d(ThS)(vect(u)'*vect(ah));

varf aa([ax,ay,az],[ahx,ahy,ahz]) = int2d(ThS)(0.*vect(ah)'*vect(a));

varf af([ax,ay,az],[ahx,ahy,ahz]) = int2d(ThS)(vect(u0)'*vect(ah));

Mat UU, AA;
{
    macro dimension()3//
    macro def(i)[i, i#B, i#C, i#D]//
    macro init(i)[i, i, i, i]//
    createMat(Th, UU, Pk)
    }
{
    macro dimension()3S//
    macro def(i)[i, i#B, i#C]//
    macro init(i)[i, i, i]//
    createMat(ThS, AA, Pm)
}

UU = uu(Vh, Vh, tgv=-1);
AA = aa(Sh, Sh);

matrix UAtemp = ua(Vh,Sh);
Mat UA(UU,AA,UAtemp);

real[int] Af = af(0,Sh);

Mat M=[[UU, UA'],
       [UA, AA]];

How can we build the non-square Mat UA ?

See FreeFem-sources/helmholtz-coupled-2d-PETSc-complex.edp at 1312e9d9c06fec8d3d49f1452df6f951901d85c9 · FreeFem/FreeFem-sources · GitHub.
You need to use the « restriction » parameter for the second diagonal block.

Dear @prj , @marchywka, @cmd ,

Thank you for your previous help and sorry to bother you again. I have now implemented the parallel version of Lagrange multiplier method. However, there are two problems:

  1. It only works with one CPU. For two cores, it shows the following error:

 --- global numbering created (in 3.909512e-03)
 --- global CSR created (in 5.946880e-04)
 --- global numbering created (in 5.056100e-05)
 --- global CSR created (in 5.102900e-05)
 --- global numbering created (in 4.742200e-05)
 --- global CSR created (in 4.979300e-05)
malloc(): invalid size (unsorted)
malloc(): invalid size (unsorted)

  2. I put the pressure and the Lagrange multiplier in one block, and the solver’s convergence becomes very slow.
    stokes_lagrange_parallel.edp (3.1 KB)

Looking forward to your reply.

Best wishes,
Yongxing


Even with a single process, your code is not right:

Linear solve converged due to CONVERGED_RTOL iterations 204
 --- system solved with PETSc (in 7.606550e-01)
CHECK_KN: !u.step || u.n == n in file: ./../femlib/RNM_opc.hpp, line 72
  current line = 153 mpirank 0 / 1
Assertion fail : (0)
	line :53, in file ./../femlib/RNM.hpp
Assertion fail : (0)
	line :53, in file ./../femlib/RNM.hpp
 err code 6 ,  mpirank 0

You need to be careful with the dimensions of your arrays.

With two processes, you are not passing the proper indices to fieldsplit, see the PETSc error below.

[0]PETSC ERROR: --------------------- Error Message --------------------------------------------------------------
[0]PETSC ERROR: Arguments are incompatible
[0]PETSC ERROR: Could not find index set
[0]PETSC ERROR: WARNING! There are option(s) set that were not used! Could be the program crashed before they were used or a spelling mistake, etc!

This is wrong:

us(0:undof - 1) = 1.;
us(undof : tndof - 1) = 2.;

You must pass PETSc dimensions, not FreeFEM dimensions, see stokes-block-2d-PETSc.edp.
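Something along these lines (untested; uI*/aI* are placeholder names for whatever FE functions hold your initial guess, while UU, AA, Vh, Sh are the objects from your script):

Vh [uIx, uIy, uIz, uIp];               // placeholder initial guess, velocity/pressure block
Sh [aIx, aIy, aIz];                    // placeholder initial guess, multiplier block
[uIx, uIy, uIz, uIp] = [1., 1., 1., 1.];
[aIx, aIy, aIz] = [2., 2., 2.];
real[int] uPETSc, aPETSc;
ChangeNumbering(UU, uIx[], uPETSc);    // FreeFEM -> PETSc numbering, block by block
ChangeNumbering(AA, aIx[], aPETSc);
real[int] us(uPETSc.n + aPETSc.n);     // PETSc dimensions, not undof/tndof
us(0:uPETSc.n - 1) = uPETSc;
us(uPETSc.n:us.n - 1) = aPETSc;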

OK, thank you. I will have a look. Interestingly, one core works on my machine.

No, you think it works, but it doesn’t; you should build FreeFEM with debugging enabled to check the error yourself. It’s obvious that there is an error at line 153 of your script: the left-hand side ux is only for the velocities while the right-hand side has both velocities and pressure => dimension mismatch.


Dear @prj @marchywka , sorry to bother you again…
I have now used ChangeNumbering() to convert the FreeFem dimensions to PETSc dimensions as follows. However, the error is still there:


f(0:undof + pndof - 1) = 0;
//f(undof + pndof : tndof - 1) = Af;
real[int] Afnew;
ChangeNumbering(AA, Afx[], Afnew);
f(undof + pndof : tndof - 1) = Afnew;

us(undof:tndof - 1) = 2.;
real[int] ufnew;
ChangeNumbering(UU, ufx[], ufnew);
us(0:undof - 1) = ufnew;

When I use one core to run my code, there is no error and the result is correct, but there is a problem when using two cores.
stokes_lagrange_parallel_1.edp (3.4 KB)

The error looks like:

BTW, what was the command to output the error message as you showed above? I use
ff-mpirun -n 2 stokes_lagrange_parallel_1.edp -v 10

Thanks very much for your help.

Your dimensions are not consistent. Just run your code in debug mode, you’ll get:

CHECK_KN: !u.step || u.n == n in file: ./../femlib/RNM_opc.hpp, line 72
  current line = 136 mpirank 0 / 2
Assertion fail : (0)
	line :53, in file ./../femlib/RNM.hpp

Thank you @prj .

It seems there is still a mismatch of the dimensions, but I cannot understand why.

So, do you have an example of using a Lagrange multiplier to implement Dirichlet boundary conditions (a PETSc parallel version)?

Because there are two meshes, the volume mesh plus its surface mesh (where the multiplier is defined), which is a bit complicated; there may be an error related to this in my code.

I am not sure whether I should do meshS ThS = extract(Th, label=labs); after
buildDmesh(Th) or before?

You are again mixing FreeFEM and PETSc dimensions. andof is a FreeFEM dimension, f is a FreeFEM vector, but you assign a PETSc vector Afnew of PETSc dimension. That is not consistent and you need to fix that. buildDmesh() is properly called in your script.
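One consistent way (untested sketch, reusing the varf name af and the Mat names UU, AA from your snippet above; rU*/rA* are placeholders) is to keep everything on the PETSc side: convert each block's FreeFEM right-hand side with ChangeNumbering() and size f with the PETSc lengths, so that no FreeFEM ndof (undof, pndof, andof, tndof) appears in the indexing:

Vh [rUx, rUy, rUz, rUp];        // placeholder: FreeFEM RHS of the velocity/pressure block
Sh [rAx, rAy, rAz];             // placeholder: FreeFEM RHS of the multiplier block
rUx[] = 0;                      // your script sets this block of f to zero
rAx[] = af(0, Sh);              // this is Af in your script
real[int] fU, fA;
ChangeNumbering(UU, rUx[], fU); // FreeFEM -> PETSc numbering
ChangeNumbering(AA, rAx[], fA);
real[int] f(fU.n + fA.n);       // PETSc dimension of the coupled system
f(0:fU.n - 1) = fU;
f(fU.n:f.n - 1) = fA;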

If I do
meshS ThS = extract(Th, label=labs);
after
buildDmesh(Th),
does this mean ThS has already been partitioned and distributed to different cores?

And ThS would not include overlapping surfaces; instead, it is partitioned based on the original labels label=labs?
