Laplace eigenproblem with Robin boundary condition

Dear FreeFem users and developers,

I need to solve the eigenequation of the Laplace operator in two
spatial dimensions, with a homogeneous Robin boundary condition:

Laplace phi = lambda phi   in the domain
a phi + b \partial_n phi = 0   on the boundary
int phi^2 = 1   (integral over the domain)

I have seen in the documentation and the examples that FreeFem makes it
possible to solve the eigenequation with Dirichlet boundary conditions,
and I also found some hints on Neumann bc. In some sense, I need
everything between these two: the parameters a and b may take arbitrary
values (constant for the moment, space-dependent later).

Is there a way to formulate this problem in FreeFem? I searched the
documentation and thought about it, but I see no way. Maybe you have
an idea?

If I understand the documentation correctly, Dirichlet bc are
implemented by penalisation: the corresponding entries in the matrices
of the discretised eigensystem are set to values so large that phi is
forced to zero there in the end. Can this strategy be extended to
arbitrary linear boundary conditions? Is the FreeFem problem language
ready to formulate such a penalisation (beyond Dirichlet)?

All hints and good ideas are welcome,
Michael

Hi,
For the Robin condition I would say something like

int1d(Th,2)(K * v * w)

for Robin conditions on the boundary labelled 2. For your case, I think K=-a/b or something like this… This is discussed in the documentation.
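Where that factor comes from, in short (a sketch only; the sign depends on the convention used for the Laplacian, here Laplace phi = lambda phi as in your post):

```latex
% multiply \Delta\phi = \lambda\phi by a test function v and integrate by parts:
-\int_\Omega \nabla\phi\cdot\nabla v \,dx
  + \int_{\partial\Omega} v\,\partial_n\phi \,ds
  = \lambda \int_\Omega \phi\, v \,dx
% the Robin condition a\phi + b\,\partial_n\phi = 0 gives, for b \neq 0,
\partial_n\phi = -\frac{a}{b}\,\phi
% so the boundary term becomes -\frac{a}{b}\int_{\partial\Omega} \phi\, v \,ds,
% which is the int1d term with K = -a/b.
```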

For the constraint, I suppose, it is integral on the whole domain right? If so, it is not worth implementing this (as long as your problem is linear). Indeed you can always normalize your solution afterwards…

Hello Julien,

thanks for your quick answer. Let me reformulate the problem in terms of matrices: the bulk eigensystem becomes a generalised eigensystem, where the matrix A is the discretised Laplacian and B holds the weights from the FEM discretisation (the mass matrix). The vector x contains the degrees of freedom of the field phi:

A x = lambda B x

The boundary condition enters as another matrix C: we get the constraint on the eigensystem

C x = 0

The matrix C depends on the parameters a,b. I agree that the matrix C can be implemented as you indicate. It is an integral of test functions over the boundary of the domain:

C(a,b) = int1d(Th,2)(K(a,b) * v * w)

The point that makes it difficult is to impose that constraint on the above eigensystem. This is the core of my question.

As you say, the integral constraint is not a problem; it is dealt with by any eigensolver (FreeFem probably uses SLEPc?):

x.T B x = 1
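To make this concrete for myself, here is a small sketch in Python (1D interval, P1 elements, numpy only; the Robin coefficients and the mesh size are placeholders, and I use the convention -Laplace, so the boundary factor enters with a plus sign):

```python
import numpy as np

n = 100                       # number of P1 elements on [0, 1]
h = 1.0 / n                   # mesh size
a, b = 1.0, 1.0               # placeholder Robin coefficients

# P1 stiffness matrix A (discretised -Laplace) and mass matrix B
A = (np.diag(2.0 * np.ones(n + 1))
     - np.diag(np.ones(n), 1) - np.diag(np.ones(n), -1)) / h
A[0, 0] = A[-1, -1] = 1.0 / h
B = (np.diag(4.0 * np.ones(n + 1))
     + np.diag(np.ones(n), 1) + np.diag(np.ones(n), -1)) * h / 6.0
B[0, 0] = B[-1, -1] = h / 3.0

# the boundary matrix C: in 1D, the int1d term K*v*w is just K at the endpoints
K = a / b
A[0, 0] += K
A[-1, -1] += K

# generalised eigenproblem A x = lambda B x via Cholesky reduction
L = np.linalg.cholesky(B)
Linv = np.linalg.inv(L)
lams, Y = np.linalg.eigh(Linv @ A @ Linv.T)
X = Linv.T @ Y                # back-transformed eigenvectors

# normalise afterwards so that x^T B x = 1
x = X[:, 0] / np.sqrt(X[:, 0] @ B @ X[:, 0])
print(abs(x @ B @ x - 1.0) < 1e-10)   # True
```

The point is that C is folded into A before the solve, and the normalisation x^T B x = 1 is done afterwards, as you suggested.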

I read the documentation you referred to. There, the Poisson equation is discussed. I try to rephrase the idea in terms of matrices: we have the bulk equation

A x = f

and the boundary condition

C x = b

The final equation we solve is then eq. (23) on the documentation webpage:

(A+C) x = f+b

I have two problems now:

  1. I fail to see how to apply this idea to the eigenproblem, and
  2. I fail to see why the matrix equation (A+C)x=f+b is equivalent to the Poisson problem with boundary conditions. I would have expected that we somehow constrain the degrees of freedom at the boundary to satisfy Cx=b, while the bulk degrees of freedom satisfy Ax=f. How does adding the matrices, A+C, achieve this?

Did you look at the example codes here?
I would say simply change the variational form for A by adding the Robin boundary condition.
Again, normalize afterwards…
How about this?
LapEigenValueRobin.edp (2.0 KB)

Your example in the file LapEigenValueRobin.edp works very well and produces results that look very convincing. I am a bit surprised by this, because I still do not understand why the implemented equations produce a solution of the initially posed problem (that is point 2 in my previous post, about the difference between A+C and the matrix A projected onto the null-space of the constraint C – any comments on that are very welcome). I will have to do further checks to convince myself that the produced result is indeed the result I want. There should be an analytic solution available in the square domain your example uses. We will see…
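As a first such check, here is a sketch in Python (numpy only) of what I have in mind: on the square the problem separates, so I compare the smallest eigenvalue of a 1D P1 discretisation, with the Robin term added to the stiffness matrix, against the root of the analytic transcendental equation. The value k = a/b = 1 is just an example, and I use the sign convention -Laplace phi = lambda phi:

```python
import numpy as np

n = 200                       # number of P1 elements on [0, 1]
h = 1.0 / n
k = 1.0                       # k = a/b, example value

# P1 stiffness and mass matrices, Robin term k*phi*v at the endpoints
diag = 2.0 * np.ones(n + 1); diag[0] = diag[-1] = 1.0
A = (np.diag(diag) - np.diag(np.ones(n), 1) - np.diag(np.ones(n), -1)) / h
A[0, 0] += k
A[-1, -1] += k
diag = 4.0 * np.ones(n + 1); diag[0] = diag[-1] = 2.0
B = (np.diag(diag) + np.diag(np.ones(n), 1) + np.diag(np.ones(n), -1)) * h / 6.0

# smallest eigenvalue of A x = lambda B x (Cholesky reduction)
L = np.linalg.cholesky(B)
Linv = np.linalg.inv(L)
lam = np.linalg.eigvalsh(Linv @ A @ Linv.T)[0]

# analytic: -u'' = w^2 u with u'(0) = k u(0), u'(1) = -k u(1)
# leads to  2 k w cos(w) + (k^2 - w^2) sin(w) = 0;
# bisection for the smallest positive root
g = lambda w: 2.0 * k * w * np.cos(w) + (k * k - w * w) * np.sin(w)
lo, hi = 0.5, np.pi / 2.0     # g changes sign on this interval for k = 1
for _ in range(100):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if g(lo) * g(mid) > 0 else (lo, mid)
print(abs(lam - mid ** 2) < 1e-3)   # True: the FEM eigenvalue matches
```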

For the moment, I thank you for your help.

I would like to come back to my problem with the Robin boundary conditions. I have now passed to space-dependent prefactors in the Robin boundary condition: alpha(x,y) and beta(x,y). Their dependence on space changes nothing in the procedure you suggested in your example LapEigenValueRobin.edp, so I would expect it to still work. I chose the Robin coefficients as simple sine functions, visualised in LapEigenValueVaryRobin_AlphaBeta.pdf (28.9 KB)

However, the eigensolutions fail to satisfy the boundary conditions: they do not fail just a bit, they seem to ignore them entirely and fall back to a zero-normal-derivative boundary condition instead. Here is the FreeFem script that I used: LapEigenValueVaryRobin.edp (2.2 KB). Already for the first eigenvector, a constant function with eigenvalue zero, you can see clearly that it does not satisfy the nontrivial boundary conditions.

I verified that I did not make any obvious mistake: I played with lots of different functions alpha(x,y) and beta(x,y). I implemented a crude finite-differences scheme to get an idea of what the eigenvectors might look like: LapEigenValueVaryRobin_FiniteDifferences.pdf (873.3 KB). And there is some chance to get an analytic solution.

The above observation has grave consequences: the boundary conditions cannot be implemented in matrices as (A+C). It is probably necessary to pass to (PAP), where P is the projector onto the null-space of C. This point is not limited to eigenequations; it concerns the general way boundary conditions are implemented. I would expect that this issue has been debated at length within the finite-element community, but I find no trace of it. Are you aware of any scientific paper discussing this issue? Or could you provide me with the right keywords for my search?

Hi.
I am not sure what you want to do exactly with the alpha and beta, but
the way you implemented it, K is zero everywhere. It follows from

(xi-0.2) % 1.0

which is always zero…
So indeed, you have just Neumann conditions, because you have K=0…
If you implement your K differently, I think it will work.

To explain in more detail, I want to solve the following problem:

Laplace phi(x,y) = lambda phi(x,y) in the rectangle
alpha(x,y)phi(x,y) + beta(x,y)\partial_n phi(x,y) = 0 as boundary condition
integral phi^2(x,y) = 1

The functions alpha and beta vary as indicated in the plot of my previous post. For this, I first get xi as the parameter along the boundary, starting at 0 and ending at 1. I take alpha and beta to be periodic continuous functions of this parameter xi.
The 0.2 is a shift, and the modulo maps the shifted parameter back to the interval [0,1].

Maybe I misunderstood what the % operator does in Freefem?

Is there a way I can just plot the functions alpha, beta, K?

I guess so. If you simply plot this function, (xi-0.2) % 1.0, you’ll see that it is zero everywhere. So I think it is the way you actually implement alpha and beta that is the problem. Indeed, if you plot alpha, beta, or their ratio K, you’ll see that it is constant and zero…
I suggest that you check that the implementation of these coefficients does what you expect. After that, the eigenvalue solver should work as expected…
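If the wrap-around of the shifted parameter is what you want, a floor-based wrap avoids the % operator entirely. A quick sketch in Python, with the 0.2 shift and a sine coefficient as placeholders:

```python
import math

shift = 0.2   # the shift used for xi in the script

# wrap t back into [0, 1) without using the % operator
def wrap(t):
    return t - math.floor(t)

# intended periodic coefficient, e.g. a sine of the wrapped parameter
def alpha(xi):
    return math.sin(2.0 * math.pi * wrap(xi - shift))

print(wrap(0.5 - shift))   # 0.3, not 0
print(wrap(0.1 - shift))   # 0.9 (wraps around)
```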

The Freefem documentation calls % the “remainder from the division”. For example, when I divide 0.5 by 1.0, the remainder is 0.5. I know this operation as “%” in Python and as “fmod” in C. Both give me 0.5 as the result. The manpage of fmod defines the “remainder of the division” as follows:

these functions return the value x - n*y, for some integer n, such that the returned value has the same sign as x and a magnitude less than the magnitude of y

Freefem, however, calculates 0.5 % 1.0 = 0.0.
Is there any good reason why Freefem deviates from the above definition?
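For reference, this is the behaviour I expect, checked in Python (math.fmod follows the C definition):

```python
import math

print(0.5 % 1.0)            # 0.5  (Python's float modulo)
print(math.fmod(0.5, 1.0))  # 0.5  (C's fmod)
print(int(0.5) % int(1.0))  # 0    (what an integer-only % would compute)
```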

% is likely defined only for integers, so 0.5 % 1.0 may be evaluated as 0 % 1, hence the result. This is only my conjecture.

The documentation claims it to be defined for int and for real.

Do not trust the documentation.

I confess that I did not consult the documentation for % before Julien pointed me to the problem with the modulo operator :wink: I expected it to work as in Python and C.

Independent of this, I think that you might want to change the behaviour of the % operator, not the documentation.

Feel free to submit a pull request changing the behaviour how you see fit.