Boundary condition

Dear FreeFem community, I need your help to impose the boundary condition U.N=0 on Gamma, where U=(ux,uy,uz) is the velocity and N=(Nx,Ny,Nz) is the unit normal vector.
For U=0 we put +on(Gamma, ux=0, uy=0, uz=0), but I don't know what to write for the condition U.N=0.
Thanks for your help!

The simplest approach is to put


where (vx,vy,vz) is the test function.
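The snippet is missing here; a penalty term of the intended shape could read (a sketch; tgv, the mesh Th, and the label Gamma are assumptions):

```freefem
// penalize u.N = 0 on the boundary labeled Gamma (3d: boundary integral is int2d)
// tgv is a large penalty constant, e.g. real tgv = 1e30;
+ int2d(Th,Gamma)( tgv*(ux*N.x + uy*N.y + uz*N.z)
                      *(vx*N.x + vy*N.y + vz*N.z) )
```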

It will work if your boundary is the union of a finite number of planar pieces.
However it will give bad results if your boundary is genuinely curved,
because then the normal changes direction from one boundary cell
to the next, and the condition ux*N.x+uy*N.y+uz*N.z=0
imposes too many constraints: since u is continuous from one boundary cell to the other, it leads to u.N1=0 and u.N2=0 at the intersection of two boundary cells with respective normals N1 and N2 (in 2d this would impose ux=uy=0 there).

A more involved answer is to take


where r is a new unknown (lagrange multiplier) and w is its associated test function.
This will work for a curved boundary.
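The extra terms of such a saddle-point formulation might look like this (a sketch; the multiplier space and the names are assumptions, not the original code):

```freefem
// r: Lagrange multiplier unknown, w: its test function (assumed declared)
// added to the weak form, these enforce u.N = 0 weakly on Gamma
+ int2d(Th,Gamma)( r*(vx*N.x + vy*N.y + vz*N.z) )   // multiplier acts on v.N
+ int2d(Th,Gamma)( w*(ux*N.x + uy*N.y + uz*N.z) )   // tests the constraint u.N = 0
```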


Many thanks for your help. My geometry is a cylinder, and the boundary where I want to impose the condition U.n=0 is the curved face. I am trying to solve this problem by introducing the new variable (Lagrange multiplier) into the problem.
Thanks again.


I have a similar problem. In my case, I only prescribed the Dirichlet conditions and, before solving the linear system Ax = b, I removed from matrix A and vector b the rows and columns associated with the degrees of freedom on the boundary where I want du/dn = 0 to hold.

Your method is possible when you can identify the degrees of freedom that you want to keep and the degrees of freedom that you want to eliminate (i.e. make vanish).
When you cannot do that (because the condition you want to impose involves several degrees of freedom simultaneously in such a way that you cannot simply decouple them) you need to apply some penalty method either directly or with a Lagrange multiplier.
This is the case for the condition U.n=0 that couples the components Ux,Uy of U in a way that depends on the position on the boundary (because n changes direction over the boundary). (Remark that your method could work however if you use cylindrical decomposition of the velocity vector).
Except for very particular cases or methods, for a really curved boundary only the Lagrange multiplier method is possible.

If you want to avoid using a Lagrange multiplier (because it enlarges the matrix to invert), some other ways to penalize the normal boundary conditions are possible.

In 3D you can use for example the boundary vertices. You first have to define a continuous approximate normal on the boundary (label 1 denotes the boundary):

fespace Vh1(Th,[P2,P2,P2]);
varf bdryn([w1,w2,w3],[v1,v2,v3])=int2d(Th,1)(v1*N.x+v2*N.y+v3*N.z);
Vh1 [normalappx,normalappy,normalappz];
normalappx[] = bdryn(0,Vh1); // assemble the linear form: a (non-normalized) approximate boundary normal

Then you can add to your problem
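The term itself did not survive above; it plausibly had the following shape, using the approximate normal just computed (a sketch; tgv is an assumption):

```freefem
// vertex-based penalty: the lumped quadrature qf1pTlump evaluates the
// boundary integral at the three vertices of each boundary triangle
+ int2d(Th,1,qft=qf1pTlump)( tgv*(ux*normalappx + uy*normalappy + uz*normalappz)
                                *(vx*normalappx + vy*normalappy + vz*normalappz) )
```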


Setting qft=qf1pTlump is a trick to get the integral on the boundary triangle performed on the three vertices of the triangle. Thus the int2d plays the role of an int0d on the boundary vertices.

Another way is to use the boundary edges:
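In 2d, for instance, such an edge-based penalty could read (a sketch; tgv and the label are assumptions):

```freefem
// one quadrature point per boundary edge (midpoint rule qfe=qf1pE),
// so the penalty acts on one value of u.N per edge
+ int1d(Th,1,qfe=qf1pE)( tgv*(ux*N.x + uy*N.y)*(vx*N.x + vy*N.y) )
```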


This edge formula will work if your velocity fespace is P2, but not for P1 (there are too many edges compared to the boundary degrees of freedom of P1).
You can find the explanations in

Example files (2d and 3d) are:
normal2d.edp (1.3 KB)

normal3d.edp (2.1 KB)


Dear Bouchut, I have a related problem with a curved slippery boundary. I tried to solve the Stokes equation with a slip boundary condition. For Couette flow, this penalty method works and agrees with the analytic solution quite well. However, when I impose this B.C. on the internal boundary of Taylor-Couette flow, the error is somewhat large, especially when b is around 0.001.

Will the Lagrange multiplier method give a more accurate result?
Here my code attached:
Stokes_Rayleigh-Taylor_slip.edp (2.4 KB)
Could you give me some advice?

Dear Quentin,
I have evaluated the L2 error by setting

real error=sqrt(int2d(Th)((u1-u1a)^2+(u2-u2a)^2));
cout<<"error L2="<<error<<endl<<endl;

The output gives a smooth behavior with respect to the values of b, with small values of the L2 error. I think that the numerical results are correct.
It is misleading to look at the relative error as in your formula. When evaluating a quantity (q-qa)/qa,
the result is large whenever qa is small, which does not mean that the approximation is bad, since here qa is not a norm (qa small does not mean that |ua| is small).
I replaced err=(umin1-umin2)/umin1 by err=(umin1-umin2)/umax1, where umax1 is the Linfty norm of ua; it gives excellent results.
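In FreeFem that normalization might be written as (a sketch; U1a is an assumed FE function holding the analytic solution):

```freefem
// normalize by the sup norm of the analytic solution, not by a pointwise value
real umax1 = U1a[].linfty;            // Linfty norm of the FE array (assumed name)
real err   = (umin1 - umin2)/umax1;   // stays bounded even where ua vanishes
```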

Dear Bouchut,
I think you are right, thanks a lot.
I replaced the computation of Ua_min: instead of taking it from the FE function U = sqrt(u1a^2+u2a^2); I evaluated the analytic expression directly, and I saw the difference.
I have found that Ua is not correct in a specific interval around b_i, which is also related to the mesh size h: with a coarser mesh the interval shifts to a larger order. I tried n = 10, 20, 40, 80, where n is the number of vertices on the internal boundary, and I get roughly b_i ~ h^2.

I don't know why this error exists; I used to take it for granted that a result coming from an analytic expression should be exact up to rounding error. The abnormal behaviour of Ua around b_i is that the min value is very close to zero! Following your suggestion, I plotted |u-ua| with U2=sqrt((u1-u1a)^2+(u2-u2a)^2); the plot of U2 is below:

I don't know where this error comes from. If it is rooted in the computation U = sqrt(u1a^2+u2a^2); then when Ua approaches zero at some point, u1a and u2a should approach zero simultaneously, but no such point should exist according to the analytic expression. If the computation of U is correct, the error should come from u1a and u2a themselves, generated by the FE interpolation.
Do you have any comments about this?

Indeed, in general it is not correct to evaluate the error as (umin1-umin2)/umin1,
because it could be that umin1=0 and umin2=1e-10, for example. This corresponds to an excellent approximation, whereas (umin1-umin2)/umin1 = infinity!

Your error estimation based on the norm makes sense; it gives insight over the entire domain rather than at a single point.

Since my research focuses on the implementation of a slip boundary condition, my error considerations are mainly about the accuracy of the velocity on the boundary; maybe I should do something like


on the boundary.
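A boundary-restricted error of that kind could be (a sketch; label 1 assumed for the slip boundary):

```freefem
// L2 error of the velocity restricted to the slip boundary (label 1 assumed)
real errorBdry = sqrt( int1d(Th,1)( (u1-u1a)^2 + (u2-u2a)^2 ) );
cout << "boundary error L2 = " << errorBdry << endl;
```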

Something remains unclear. My confusion now is about why the result U=|(u1a,u2a)| deviates from the exact analytic solution. Allowing for a certain inaccuracy of the numerical solution, the previous plot of |u-ua| shows that the deviation happens adjacent to the internal boundary.
If my understanding is correct, such deviation comes from the FE interpolation of the analytic expressions

u1a = (Ar + r/B)*(-y/r)

and, similarly,

u2a = (Ar + r/B)*(x/r).

Each step may bring some deviation from the exact analytic solution.

I once thought that the finite element function given by the analytic expression was essentially exact and could readily serve as a reference for the numerical solution, but now I have some doubts about my previous approach. In any case, it has been an instructive example for me.

I think you should look at the error over the whole domain, because the boundary values are imposed almost optimally, so measuring only the boundary error would be misleadingly accurate. Still, measuring the boundary error is interesting as a check that you are not doing something wrong!
A fundamental idea in the numerical analysis of finite elements is that the optimal approximation error is obtained when you reach the interpolation of the exact solution in your approximation space. Indeed you cannot hope to obtain a better numerical solution than the interpolation (of any kind) of the analytic solution onto the mesh. In this interpolation, the important point is the order of accuracy. This accuracy is related to the finite element used; here it is P2, thus we expect at best an L2 error in h^2, with h the mesh size.
Maybe you were hoping for an error smaller than that, which is not possible!

In any case you have to remember that interpolation of the exact solution on the mesh is different from the exact solution.
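One way to see this is to measure the gap between the interpolant and the analytic expression itself (a sketch; u1aExact and u2aExact stand for the analytic formulas and are assumed names):

```freefem
// interpolate the analytic expressions onto the velocity space, then measure
// the interpolation error; this is a lower bound for the achievable FE error
Vh u1i = u1aExact, u2i = u2aExact;   // FE interpolation of the exact solution
real errInterp = sqrt( int2d(Th)( (u1i-u1aExact)^2 + (u2i-u2aExact)^2 ) );
cout << "interpolation error L2 = " << errInterp << endl;
```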

Sorry for the previous bother, now I understand that :

\int_\Omega \nabla u\cdot (\nabla v)^T=\int_\Omega div(v\cdot \nabla u)=\int_\Gamma (v\cdot \nabla u)\cdot \textbf{n}=-\int_\Gamma \kappa u\cdot v,

and the transpose operator ' seems not to be compatible with the tensor scalar product operator :, so that the term 0.5*Grad(v1,v2):(Grad(u1,u2)+Grad(u1,u2)') is actually evaluated as Grad(v1,v2):Grad(u1,u2); this means an additional term related to the curvature needs to be added to the weak form.

I do not understand well what you mean. Let me call Du(u1,u2) = 0.5*(Grad(u1,u2)+Grad(u1,u2)') the symmetrized gradient, as in your macro.
Then Grad(v1,v2):Du = Dv:Du because Du is symmetric, but this is different from Grad(v1,v2):Grad(u1,u2).
In particular they induce different boundary terms when integrating by parts.
In mechanics the form Grad(v1,v2):Du=Dv:Du is the right one.

Dear Bouchut,
I think it is because when the macro Du(u1,u2) (0.5*Grad(u1,u2)+0.5*Grad(u1,u2)')// is parsed by FreeFEM, the transpose operator ' seems to be neglected. When FreeFEM reads Grad(v1,v2):Du(u1,u2) in the problem statement, a different equation is solved, with a wrong boundary term, and no error is reported. I made some tests: substituting Grad(v1,v2):Du(u1,u2) with Grad(v1,v2):Grad(u1,u2) gives the same result, so a syntax like A:B' may be illegal in FreeFEM.
I found that the right practice is to use
macro epsilon(u1,u2) [dx(u1), dy(u2), (dx(u2)+dy(u1))*0.5, (dx(u2)+dy(u1))*0.5] //
and write a vector scalar product like 2*epsilon(u1,u2)'*epsilon(v1,v2), which gives a correct result.
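For reference, the viscous term written with this macro could look like the following (a sketch; nu is an assumed viscosity name):

```freefem
// sketch: the viscous term with the vector macro, avoiding ':' and ' together;
// the duplicated shear entry plus the factor 2 reproduce 2*nu*eps(u):eps(v)
+ int2d(Th)( 2*nu*(epsilon(u1,u2)'*epsilon(v1,v2)) )
```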

I see. Probably FreeFem cannot do both : and ' in the same instruction; you need to write two lines, one for each.

I found that the L2 error is mainly influenced by a fluctuation of the velocity on the boundary. The error reaches its maximum at the vertices and attenuates to a rather low level within a single triangular element:

The result given by
+ int1d(Th,qfe=qf1pE,1)(c2*[-N.y,N.x]'*[v1,v2]*[-N.y,N.x]'*[u1,u2]*(1.0/b))
seems more precise but is always below the analytic one, while the normal vector approximated by [normalappx, normalappy] gives a result much larger than the analytic solution when b<<1.
I have tried things like adding a thin layer of mesh near the boundary, and using a mixed boundary term like

+ int1d(Th,qfe=qf1pE,1)(c1*(u2*normalappx-u1*normalappy)*(v2*normalappx-v1*normalappy)*(1.0/b))
+ int1d(Th,qfe=qf1pE,1)(c2*[-N.y,N.x]'*[v1,v2]*[-N.y,N.x]'*[u1,u2]*(1.0/b))
+ int1d(Th,qfe=qf1pElump,1)(c3*(u2*normalappx-u1*normalappy)*(v2*normalappx-v1*normalappy)*(1.0/b))

where c_1+c_2+c_3=1. Such trials work sometimes, but I cannot find a general approach to smooth the velocity profile.
Now that the L2 error is small, I am hoping for a smooth velocity profile; is there a better approach?

Remark: I fixed this bug

 macro Du(u1,u2) (0.5*Grad(u1,u2)+0.5*Grad(u1,u2)')//

two weeks ago, so it is no longer present in the current version.

Sorry for this.

Thanks for the debug!

You should take care that the computation of [normalappx, normalappy] that I gave is simplified and not fully consistent: it gives something that is collinear to the normal but not normalized. This is not a problem for the penalty terms, since there is a tgv factor, so normal or c*normal behaves similarly.
But for your terms that are not penalty terms, the constant matters.
Therefore maybe you should use a fully consistent approximation of the normal via an L2 projection (which requires solving a linear system), as for example (not tested)

solve projN([normalappx, normalappy],[v1, v2])=
int1d(Th)([normalappx, normalappy]'*[v1, v2])-int1d(Th)(N'*[v1, v2])
+int2d(Th)(1e-10*[normalappx, normalappy]'*[v1, v2]);