I am learning to write FreeFem code for the Cahn-Hilliard equation with Neumann boundary conditions. I have simplified one of the nonlinear terms, but it seems that the int2d function requires a bilinear form. I believe this is where my problem lies. How should I write the integration for a nonlinear form?
Thank you very much for your guidance. I have reviewed the code Cahn-Hilliard.edp you shared. However, I encountered an issue during the Newton iteration: it seems that the Jacobian form varf vhJ should be obtained by differentiating the residual form varf vdJ with respect to the unknowns, but I am unable to derive varf vhJ myself. Could you kindly provide some guidance on this?
I agree with your equations. You have to solve \int 2\frac{v^{n+1/2}-v^n}{\delta t}\phi+\int\nabla z^{n+1/2}\cdot\nabla\phi-\int g^{n+1/2}\phi=0, \int -z^{n+1/2}\psi+\int a\nabla v^{n+1/2}\cdot\nabla\psi+\int F'(v^{n+1/2})\psi=0.
The Newton method defines a sequence (v_k,z_k) such that as k\to\infty, one has v_k\to v^{n+1/2}, z_k\to z^{n+1/2}. The sequence is defined through \int 2\frac{v_{k+1}-v^n}{\delta t}\phi+\int\nabla z_{k+1}\cdot\nabla\phi-\int g^{n+1/2}\phi=0, \int -z_{k+1}\psi+\int a\nabla v_{k+1}\cdot\nabla\psi+\int \bigl(F'(v_k)+F''(v_k)(v_{k+1}-v_k)\bigr)\psi=0.
Denoting \delta v=v_{k+1}-v_k, \delta z=z_{k+1}-z_k it can be written \int 2\frac{\delta v}{\delta t}\phi+\int\nabla \delta z\cdot\nabla\phi=-\Bigl(\int 2\frac{v_k-v^n}{\delta t}\phi+\int\nabla z_{k}\cdot\nabla\phi-\int g^{n+1/2}\phi\Bigr), \int -\delta z\psi+\int a\nabla \delta v\cdot\nabla\psi+\int \bigl(F''(v_k)\delta v\bigr)\psi=-\Bigl(\int -z_{k}\psi+\int a\nabla v_{k}\cdot\nabla\psi+\int F'(v_k)\psi\Bigr).
In this system, the linear operator on the left-hand side is the Jacobian matrix (defined in the code by vhJ), and the right-hand side is minus the residual.
The instruction auxv[]=vdJ(0,Vh2) computes the residual.
Then vv[]=H^-1*auxv[] computes vv=-(\delta v,\delta z).
Finally oldv[]-=vv[] (or equivalently oldv[]=oldv[]-vv[]) computes (v_{k+1},z_{k+1})=(v_k,z_k)+(\delta v,\delta z).
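For illustration, here is a Python analogue of these three instructions on a toy 2x2 nonlinear system. The functions `residual` and `jacobian` are hypothetical stand-ins for the forms assembled from vdJ and vhJ, not the actual Cahn-Hilliard forms; the system F(X)=0 below is chosen only so the loop has a known solution (1,1).

```python
import numpy as np

def residual(X):
    # Toy nonlinear system F(X) = 0 standing in for the Cahn-Hilliard residual;
    # one of its exact solutions is X = (1, 1).
    x, y = X
    return np.array([x**2 + y - 2.0, x + y**2 - 2.0])

def jacobian(X):
    # dF(X): the analogue of the matrix H assembled from varf vhJ.
    x, y = X
    return np.array([[2.0 * x, 1.0],
                     [1.0,     2.0 * y]])

oldv = np.array([2.0, 0.5])  # initial guess, playing the role of (v_0, z_0)
for k in range(50):
    auxv = residual(oldv)                        # auxv[] = vdJ(0, Vh2): the residual
    vv = np.linalg.solve(jacobian(oldv), auxv)   # vv[] = H^-1 * auxv[], i.e. vv = -(delta v, delta z)
    oldv -= vv                                   # oldv[] -= vv[]: X_{k+1} = X_k + (delta v, delta z)
    if np.linalg.norm(vv) < 1e-12:
        break

print(oldv)  # converges to approximately [1. 1.]
```

The structure of the loop (assemble residual, solve with the Jacobian, update in place) is exactly the structure of the three FreeFem instructions above.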
Thank you so much for your guidance. Previously, when using Newton’s iteration, I always computed the Jacobian matrix through differentiation and then determined the solution’s progression direction. Your approach has been truly enlightening to me, and I deeply appreciate it!
Indeed, the trick behind Newton’s method is as follows.
When you have to solve F(X)=0 (X and F(X) are vectors), the Newton method writes X_{k+1}-X_k=-dF(X_k)^{-1}F(X_k).
You can write it as F(X_k)+dF(X_k)(X_{k+1}-X_k)=0\qquad (1)
In other words, instead of solving F(X_{k+1})=0, you replace F by its first order expansion around X_k, it gives (1).
This is what I wrote in my previous message: starting from the equations on (v^{n+1/2},z^{n+1/2}) I derive the equations on (v_{k+1},z_{k+1}) by making a first order expansion around (v_k,z_k).
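To see concretely that solving (1) reproduces the classical update, here is a minimal scalar sketch. The choice F(x)=x^2-2 is purely illustrative (its root is \sqrt{2}), not something from the thread.

```python
import math

def F(x):
    # Solve F(x) = 0 with the illustrative choice F(x) = x^2 - 2.
    return x * x - 2.0

def dF(x):
    # Derivative of F.
    return 2.0 * x

x = 2.0  # X_0
for k in range(10):
    # Equation (1): F(X_k) + dF(X_k)(X_{k+1} - X_k) = 0, solved for X_{k+1}.
    # This is exactly the classical update X_{k+1} = X_k - dF(X_k)^{-1} F(X_k).
    x = x - F(x) / dF(x)

print(abs(x - math.sqrt(2.0)) < 1e-12)  # True
```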
Thanks again for your answer. I studied the code Cahn-Hilliard.edp again. The space and time steps change together in the code, which I feel is not correct for testing the convergence order, so I fixed one to test the other. When I fixed the time step, the spatial convergence order was still close to second order; however, after I fixed the spatial step size, whether on a coarse or a fine grid, the temporal convergence order only hovers around 0.01. Why is that?
You can vary the time step and the mesh size together; that is not incorrect. Doing them separately will take much more of your time. In my opinion, varying them together is the best way to do it. Maybe there is another issue in your code. You can post your code here.
A simple way to understand what happens is to imagine that the error is the sum of two terms, one for space error and one for time error: E=Ah^\alpha+Bdt^\beta,
for some positive constants A,B, \alpha,\beta.
Then you see that if h is fixed and you let dt decrease, E behaves like dt^\beta as long as dt is not too small (Bdt^\beta\gg Ah^\alpha); but once dt becomes small, E no longer decays: E\simeq Ah^\alpha, which is a positive constant. Thus if you measure the order of convergence in time at fixed h, you will find zero (unless you restrict the measurement to time steps that are not too small, i.e. Bdt^\beta\gg Ah^\alpha).
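This stalling can be reproduced directly from the error model itself. The constants A, B, \alpha, \beta below are arbitrary illustrative values, not measurements from any actual scheme.

```python
import math

# Synthetic error model E = A h^alpha + B dt^beta (assumed constants).
A, B, alpha, beta = 1.0, 1.0, 2.0, 1.0
h = 0.05  # fixed mesh size

def E(h, dt):
    return A * h**alpha + B * dt**beta

# Observed temporal order between successive halvings of dt at fixed h.
dts = [0.1 * 2.0**(-i) for i in range(10)]
rates = []
for dt1, dt2 in zip(dts, dts[1:]):
    rate = math.log(E(h, dt1) / E(h, dt2)) / math.log(dt1 / dt2)
    rates.append(rate)
    print(f"dt = {dt2:.2e}  observed order = {rate:.3f}")
# The observed order starts near beta = 1 and decays toward 0
# once B dt^beta drops below A h^alpha.
```

Running this shows the observed order sliding from roughly 1 down toward 0 as dt shrinks, exactly the behavior reported in the question.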
The same is true for the converse (fixed dt and h decreasing).
Therefore you see that the only correct way to evaluate the convergence rate is to have both dt and h varying together.
The standard way to do it is as follows.
–Imagine that you want to prove that your scheme is at least first-order in time and space (i.e. \alpha\geq 1 and \beta\geq 1).
You take dt=Ch for some constant C and measure the rate (in terms of dt or h, it is equivalent since they are proportional). If you find a rate at least 1, you are done.
–Next, if you want to prove that your scheme is at least second-order in time and space, you do the same dt=Ch, and see if the observed rate is at least 2.
–Now, imagine that you want to prove that your scheme is at least second-order in space and first-order in time (i.e. \alpha\geq 2 and \beta\geq 1). Then you take dt=Ch^2 and measure the rate. Because of this relation, the rate in h that you measure is necessarily twice the measured rate in dt.
Thus if you find at least second-order in space (or equivalently first-order in time), you are done.
– If you don’t know the respective rate in time and space, you can do several tries (dt=Ch, dt=Ch^2…).
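Here is a small sketch of this coupled-refinement procedure on the same synthetic error model (the constants are assumed for illustration; in practice E would come from computed errors against a reference solution).

```python
import math

# Error model E = A h^alpha + B dt^beta with assumed true orders:
# second order in space (alpha = 2), first order in time (beta = 1).
A, B, alpha, beta = 1.0, 1.0, 2.0, 1.0

def E(h, dt):
    return A * h**alpha + B * dt**beta

def observed_order(coupling, hs):
    # Measure the rate in h between the two finest levels, with dt = coupling(h).
    h1, h2 = hs[-2], hs[-1]
    return math.log(E(h1, coupling(h1)) / E(h2, coupling(h2))) / math.log(h1 / h2)

hs = [0.1 * 2.0**(-i) for i in range(10)]

# dt = C h: the measured order is min(alpha, beta) = 1.
print(round(observed_order(lambda h: h, hs), 2))      # ~1.0
# dt = C h^2: the measured order in h is min(alpha, 2*beta) = 2.
print(round(observed_order(lambda h: h * h, hs), 2))  # ~2.0
```

With dt=Ch the time error dominates and the coupled test recovers the first-order rate; with dt=Ch^2 both error terms scale like h^2 and the test recovers the second-order spatial rate, as described above.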
I hope this is clear.
Thank you very much for your explanation! When I saw the expression, I understood where my misunderstanding was. I hadn’t thought about this issue so deeply before; I was only familiar with the CFL condition. It was only when I encountered problems with the convergence order in my test code that I thought of your code, which led me to this question. I truly lack knowledge in this area, and I sincerely appreciate your guidance and teaching.