Parallel 3D Schrodinger Eigenvalues/vectors error with Dirichlet BC

Hi, I’m looking to solve the 3D Schrödinger equation using SLEPc with FreeFEM. As part of my code testing I’ve computed the first 4 eigenvalues/eigenvectors for the “particle in a box” model, with a 10x10x10 box. The eigenvalues I expect are (in units hbar = m = 1) 0.049 with 3-fold degeneracy, and 0.197. The corresponding states should be of the form sin(pi*x/10), sin(pi*y/10), sin(pi*z/10), and sin(2*pi*x/10). The last state is degenerate with the corresponding y and z states; I’m not sure how FreeFEM would decide which state to pick. However, when I attempt to solve this model I instead find eigenvalues 14.97, 30.1, 30.1, 30.36, and the eigenstates appear to be incorrect in shape and, at the very least, unnormalised. When I remove all boundary conditions I find what appear to be the correct eigenstates for Neumann boundary conditions on the box, correctly normalised, and the correct eigenvalues, except for a spurious mode with a negative eigenvalue ~10^-17 and a corresponding spurious eigenstate. Introducing an isotropic quadratic potential, I find all eigenvalues and states to be correct with Neumann BC. My questions are as follows:

  1. How can I get the correct eigenvalues/eigenvectors for the Dirichlet BC case?
  2. What is causing the spurious mode in the Neumann case and how can I prevent it?
  3. In line 198 of my .edp file I attempt to calculate the density from the exact ground state for 3 non-interacting particles in a box with Neumann conditions. However, this integral gives 0, and I’m not sure why.
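For reference, the expected numbers above come from the one-dimensional factor energies (a standard result, quoted here for context)

E_n = \frac{n^2 \pi^2}{2 L^2}, \qquad L = 10,

i.e. \pi^2/200 \approx 0.049 and 4\pi^2/200 \approx 0.197 (see the footnote further down for the corrected 3D product states).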

I’m receiving an error that new users cannot upload files, so please find below a copy of my code:
load "msh3"             //Converts mesh files to FreeFem format
load "medit"            //.mesh Medit format, use if .msh isn't working
load "mshmet"           //Used for mesh truncation/adaptation
load "parmmg"           //Used with mshmet
load "gmsh"             //Read .msh files for input
load "PETSc"            //Library for parallel processing
macro dimension()3//    //Tells FreeFem we're in 3D
include "macro_ddm.idp" //Set of macros for partitioning mesh
macro def(i)i//         //Macro for field definition
macro init(i)i//
macro grad(u) [dx(u), dy(u), dz(u)] // //Macro to calculate gradient of scalar field

//"C:\Program Files\Microsoft MPI\Bin\mpiexec.exe" -n 16 FreeFem++-mpi.exe AnalyticSchrodinger.edp -v 0 -wg
//string meshFileName = "CubeSmall";    //Testing w/ medit format
//mesh3 Mesh = readmesh3(meshFileName);
//mesh3 Mesh = gmshload3("CubeSmall.msh");  //Testing w/ gmsh format
//Loads mesh from gmsh .msh file
//DM plex("CubeSmallv4.msh", communicator = mpiCommSelf);
//mesh3 Mesh = plex;
mesh3 Mesh = cube(10,10,10);
real vacPerm = 0.008854188; //Vacuum permittivity in farads/nanometre
real elecCharge = 1.6022*(10^(-19)); //Electron charge in Coulombs
real kineticFactor = 3.80998*(10^(-20)); //hbar^2/2m in units eV(nm^2)

real[int] GaAs = [13.1, 0.063, 1.521];

real[int] AlGaAs = [12.11, 0.0904, 1.726];

Mesh = trunc(Mesh, 1, split=2, label = 11);
//Refines mesh by converting from n nodes to n*split nodes
//Increasing split gives worse eigenvalues
//Increasing cores gives better eigenvalues
if(mpirank == 0)
cout << labels(Mesh) << endl;

mesh3 meshGlobal = Mesh;
//Copy of mesh saved in order to work with functions over the whole finite element space

int[int] n2omesh;
macro MeshN2O()n2omesh//
//Macro keeps global numbering of mesh in array n2omesh

if(mpirank == 0)
cout << labels(Mesh) << endl;

int[int][int] intersection; //Renumbering for local mesh to global
real[int] D; //Partition of unity to convert from local to global
func Pk = P1; //Finite element order for psi
int s = 1; //refinement factor local mesh

build(Mesh,
s,
intersection,
D,
Pk,
mpiCommWorld
) //Builds local distributed mesh

//buildDmesh(Mesh)
//Partitions mesh into -n subdomains with ghost elements for BCs, called local mesh

fespace Vh(Mesh, Pk); //Local finite element space for Psi
fespace Wh(Mesh, P0); //Local finite element space for effective mass
Wh Vtrial = (10^100)*(z >= 5);

//Wh elecMass = (0<=z<=5)*(GaAs[1])+(5<z<=70)*(AlGaAs[1])+(5<z<=10)*(GaAs[1]);  //Electron mass as a function of z
//Wh bandgap = (0<=z<=5)*(GaAs[2])+(5<z<=7)*(AlGaAs[2])+(7<z<=10)*(GaAs[2]);   //Band gap as a function of z
varf vSchrodinger(psi,v) = int3d(Mesh)(grad(psi)'*grad(v)*0.5)  //variational form Schrodinger eq *kineticFactor/elecMass*(10^(20))
                    //+int3d(Mesh)(psi*v)//bandgap*v*psi*(10^(20))
                    //+int3d(Mesh)(0.5*((x-5)^2+(y-5)^2+(z-5)^2)*v*psi) //Potential term
                    //+on(11, psi=0.0)  //Wavefunction vanishes on device surface
                    //+on(15, psi=0.0)  //No boundary conditions gives right eigenvalues
                    //+on(9, psi=0.0)   //Probably 1e-20 giving right BC?
                    //+on(16, psi=0.0)
                    //+on(8, psi=0.0)
                    //+on(13, psi=0.0)
                    +on(6, psi=0.0)
                    +on(5, psi=0.0)
                    +on(4, psi=0.0)
                    +on(3, psi=0.0)
                    +on(2, psi=0.0)
                    +on(1, psi=0.0);
                    //+on(17, psi=0.0)
                    //+on(14, psi=0.0);
                    //+on(7, psi=0.0)
                    //+on(10, psi=0.0);

varf RHS(psi,v) = int3d(Mesh)(psi*v); //Right hand side variational form

matrix A = vSchrodinger(Vh,Vh, tgv=-2); //FreeFEM matrices for LHS and RHS
matrix B = RHS(Vh,Vh,eps=1e-20);

Mat DistA(A, intersection, D, clean = true); //Distributed PETSc matrices over processes
Mat DistB(DistA, B, clean = true);

real[int,int] space(DistA.n,1);
space = 1;

real[int] eigenval(0); //Array for eigenvalues
Vh[int] def(eigenvec)(1); //Array for eigenvectors

int nEV = 4;
real relTol = 1e-15;
int maxIt = 20000;
string ssparams = //Parameters for eigenvalue solver
" -eps_nev " + nEV + //Number of eigenvalues nEV
" -eps_type krylovschur" + //Krylov Schur decomposition
" -eps_target "+ 0.0 + //Shift value
" -st_type sinvert " + //Using shift invert method
" -st_pc_type lu " + //Cholesky preconditioner
" -eps_view " + //Outputs eigenvalues to terminal
" -eps_gen_nonhermitian " + //Hermitian matrix
" -eps_tol " + relTol + //Tolerance on egenvalue calculation
" -eps_max_it " + maxIt //Maximum no. iterations for solver
;

int k = EPSSolve(DistA,DistB, //SLEPc Parallel eigenvalue solver
vectors = eigenvec, //Array to store eigenfunctions
values = eigenval, //Array to store eigenvalues
sparams = ssparams
);
k=min(k,nEV); //Ignores converged eigenvalues outside range
Vh Temp;

for(int i=0;i<k;i++){
if(!mpirank) cout << " Eigenvalue #"+i+" = "+eigenval[i] << endl;
Temp = eigenvec[i];
plotMPI(Mesh, // The local mesh
Temp, // The local solution
Pk, // Local FE-space
def, // Macro for field definition
real, // Type
cmm = "Psi("+i+")  EV = "+eigenval[i]
)
}
//Eigenvectors are normalised, can test using this loop
Vh densityTotal;
for(int j=0;j<k;j++){
Vh scaledPsi = eigenvec[j];
for[j,v : DistA.D] scaledPsi[][j] *= v;
densityTotal = densityTotal + scaledPsi*eigenvec[j];
real localNorm = int3d(Mesh)(scaledPsi*eigenvec[j]); //Calculates square integral
real globalNorm;
mpiAllReduce(localNorm, globalNorm, mpiCommWorld, mpiSUM); //Accounts for ghost element multiplicity in norm
if(mpirank == 0)
cout << globalNorm << endl; //Shows eigenvectors are normalised
}

//for(int j=0;j<k;j++){
    //Vh scaledPsi = eigenvec[j];
    //for[j,v : DistA.D] scaledPsi[][j] *= v;
    //real localNorm = int3d(Mesh)(scaledPsi*eigenvec[j]);   //Calculates square integral
    //real globalNorm;
    //mpiAllReduce(localNorm, globalNorm, mpiCommWorld, mpiSUM); //Accounts for ghost element multiplicity in norm
    //if(mpirank == 0)
      //cout << reduce << endl; //Shows eigenvectors are normalised
//}
for(int j=0;j<k;j++){
Vh scaledPsi = eigenvec[j];
for[j,v : DistA.D] scaledPsi[][j] *= v;
Vh density = scaledPsi*eigenvec[j];
plotMPI(Mesh, // The local mesh
density, // The local solution
Pk, // Local FE-space
def, // Macro for field definition
real, // Type
cmm = "|Psi|^2("+j+")  EV = "+eigenval[j]
)

}
plotMPI(Mesh, // The local mesh
densityTotal, // The local solution
Pk, // Local FE-space
def, // Macro for field definition
real, // Type
cmm = "n"
)

Vh exactPlaneWaveDensity3 = ((1/500)^(0.5))*((cos(pi*x/10))^2+(cos(pi*y/10))^2+(cos(pi*z/10))^2);
real localElectronNumber = int3d(Mesh)(densityTotal);
real globalElectronNumber;
mpiAllReduce(localElectronNumber, globalElectronNumber, mpiCommWorld, mpiSUM);
if(mpirank == 0)
cout << "globalElectronNumber = " + globalElectronNumber << endl; //Checks density
densityTotal = densityTotal * nEV/globalElectronNumber; //Corrects for compounding floating point error in density

//real correctedLocalElectronNumber = int3d(Mesh)(densityTotal);
//real correctedGlobalElectronNumber;
//mpiAllReduce(correctedLocalElectronNumber, correctedGlobalElectronNumber, mpiCommWorld, mpiSUM);
//if(mpirank == 0)
// cout << "correctedGlobalElectronNumber = " + correctedGlobalElectronNumber << endl; //Checks density has been corrected

real localError = int3d(Mesh)(exactPlaneWaveDensity3 - densityTotal);
real globalError;
mpiAllReduce(localError, globalError, mpiCommWorld, mpiSUM); //Accounts for ghost element multiplicity in norm
if(mpirank == 0)
cout << "globalError = " + globalError << endl; //Shows eigenvectors are normalised
real localExactNumber = int3d(Mesh)(exactPlaneWaveDensity3);
real globalExactNumber;
mpiAllReduce(localExactNumber, globalExactNumber, mpiCommWorld, mpiSUM); //Accounts for ghost element multiplicity in norm
if(mpirank == 0)
cout << "globalExactNumber = " + globalExactNumber << endl; //Shows eigenvectors are normalised

//int[int] fforder(1);
//fforder = 1; //1 for P1 or higher, 0 for P0
//savevtk("pretty_test_quantum.vtu", Mesh, densityTotal, order = fforder);

After some more testing, it seems changing line 112 to eps_gen_hermitian solved the normalisation issue. Furthermore, I believe the degeneracy pattern of the eigenvalues* is correct for the Dirichlet BC case. However, I am unsure why this scaling is being applied to the eigenvalues.

*I made a slight error in my original post; for boundary conditions of 0 on the box walls, the ground state is
\Psi_{0} = \sin(\pi x/10)\,\sin(\pi y/10)\,\sin(\pi z/10)
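The 3-fold degenerate first excited states then follow by promoting one factor, e.g.

\Psi_{1} \propto \sin(2\pi x/10)\,\sin(\pi y/10)\,\sin(\pi z/10),

and likewise with the factor of 2 moved to y or z: the ground state sits at 3\pi^2/200 \approx 0.148 and the first excited level at 6\pi^2/200 \approx 0.296.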

Why aren’t you setting tgv when getting matrix B? If the two are not consistent, that will screw up the Dirichlet conditions, and if one of the coefficients is huge it will probably confuse the solver. Setting them both to one of the negative values gets rid of the spurious elements.
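As a minimal sketch of what consistent elimination would look like for the generalised problem A x = lambda B x (tgv semantics as I understand the FreeFem++ convention; see the documentation links further down):

matrix<real> A = vSchrodinger(Vh, Vh, tgv=-2); //Dirichlet rows/columns eliminated symmetrically, 1 on the diagonal
matrix<real> B = RHS(Vh, Vh, tgv=-20);         //Dirichlet rows/columns zeroed entirely, so constrained DOFs add no spurious eigenvalues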

There is no way to specify which eigenvectors you get on degenerate eigenvalues, although given a set of them you could probably construct something on that subspace.
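For example, a rough sketch (untested; the target state and the index offsets are hypothetical) of picking, inside a 2-fold degenerate pair, the combination with maximal overlap with an analytic state:

Vh target = sin(2*pi*x/10)*sin(pi*y/10)*sin(pi*z/10); //Analytic state to align with (hypothetical choice)
real[int] c(2);
for(int i = 0; i < 2; i++){
    Vh scaled = eigenvec[1+i];            //Assumes the degenerate pair sits at indices 1 and 2
    for[l, v : DistA.D] scaled[][l] *= v; //Partition of unity, avoids double counting on ghost elements
    real loc = int3d(Mesh)(scaled*target);
    real glob;
    mpiAllReduce(loc, glob, mpiCommWorld, mpiSUM);
    c[i] = glob;
}
Vh aligned = c[0]*eigenvec[1] + c[1]*eigenvec[2]; //Closest combination to the target, renormalise afterwards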

If you can post your code as an attachment that makes it easier to run locally.

Hi, thanks for replying. I wasn’t aware that I needed to set tgv on B as well; I’ll try that and report back. In the meantime I checked the eigenvalues with eps_gen_hermitian, which gave eigenvalues of 45 and three of roughly 60. I would expect the next-highest energy level to be twice the lowest, so I’m not sure what is causing this behaviour.

load "msh3"             //Converts mesh files to FreeFem format
load "medit"            //.mesh Medit format, use if .msh isnt working
load "mshmet"           //Used for mesh truncation/adaptation
load "parmmg"           //Used with mshmet
load "gmsh"             //Read .msh files for input
load "PETSc"            //Library for parallel processing
macro dimension()3//    //Tells FreeFem we're in 3D
include "macro_ddm.idp" //Set of macros for partitioning mesh
macro def(i)i//         //Macro for field definition
macro init(i)i//
macro grad(u) [dx(u), dy(u), dz(u)] // //Macro to calculate gradient scalar field

//"C:\Program Files\Microsoft MPI\Bin\mpiexec.exe" -n 16  FreeFem++-mpi.exe AnalyticSchrodinger.edp - v 0 -wg
//string meshFileName = "CubeSmall";    //Testing w/ medit format
//mesh3 Mesh = readmesh3(meshFileName);
//mesh3 Mesh = gmshload3("CubeSmall.msh");  //Testing w/ gmsh format
//Loads mesh from gmsh .msh file
//DM plex("CubeSmallv4.msh", communicator = mpiCommSelf);
//mesh3 Mesh = plex;
mesh3 Mesh = cube(10,10,10);
real vacPerm = 0.008854188;  //Vacuum permittivity in farads/nanometre
real elecCharge = 1.6022*(10^(-19)); //Electron charge in Coulombs
real kineticFactor = 3.80998*(10^(-20)); //hbar^2/2m in units eV(nm^2)

real[int] GaAs = [13.1, 0.063, 1.521];

real[int] AlGaAs = [12.11, 0.0904, 1.726];

Mesh = trunc(Mesh, 1, split=1, label = 11);
//Refines mesh by converting from n nodes to n*split nodes
//Increasing split gives worse eigenvalues
//Increasing cores gives better eigenvalues
if(mpirank == 0)
    cout << labels(Mesh) << endl;

mesh3 meshGlobal = Mesh;
//Copy of mesh saved in order to work with functions over the whole finite element space

int[int] n2omesh;
macro MeshN2O()n2omesh//
//Macro keeps global numbering of mesh in array n2omesh

if(mpirank == 0)
    cout << labels(Mesh) << endl;

int[int][int] intersection;   //Renumbering for local mesh to global
real[int] D;                  //Partition of unity to convert from local to global
func Pk = P1;                 //Finite element order for psi
int s = 1;                    //refinement factor local mesh

build(Mesh,
        s,
        intersection,
        D,
        Pk,
        mpiCommWorld
        ) //Builds local distributed mesh

//buildDmesh(Mesh)
//Partitions mesh into -n subdomains with ghost elements for BCs, called local mesh

fespace Vh(Mesh, Pk);   //Local finite element space for Psi 
fespace Wh(Mesh, P0);   //Local finite element space for effective mass 
Wh Vtrial = (10^100)*(z >= 5);

//Wh elecMass = (0<=z<=5)*(GaAs[1])+(5<z<=70)*(AlGaAs[1])+(5<z<=10)*(GaAs[1]);  //Electron mass as a function of z
//Wh bandgap = (0<=z<=5)*(GaAs[2])+(5<z<=7)*(AlGaAs[2])+(7<z<=10)*(GaAs[2]);   //Band gap as a function of z
varf vSchrodinger(psi,v) = int3d(Mesh)(grad(psi)'*grad(v)*0.5)  //variational form Schrodinger eq *kineticFactor/elecMass*(10^(20))
                    //+int3d(Mesh)(psi*v)//bandgap*v*psi*(10^(20))
                    +int3d(Mesh)(0.5*((x-5)^2+(y-5)^2+(z-5)^2)*v*psi) //Potential term
                    //+on(11, psi=0.0)  //Wavefunction vanishes on device surface
                    //+on(15, psi=0.0)  //No boundary conditions gives right eigenvalues
                    //+on(9, psi=0.0)   //Probably 1e-20 giving right BC?
                    //+on(16, psi=0.0)   
                    //+on(8, psi=0.0)
                    //+on(13, psi=0.0)
                    +on(6, psi=0.0)
                    +on(5, psi=0.0)
                    +on(4, psi=0.0)
                    +on(3, psi=0.0)
                    +on(2, psi=0.0)
                    +on(1, psi=0.0);
                    //+on(17, psi=0.0)
                    //+on(14, psi=0.0);   
                    //+on(7, psi=0.0)
                    //+on(10, psi=0.0);

varf RHS(psi,v) = int3d(Mesh)(psi*v); //Right hand side variational form

matrix<real> A = vSchrodinger(Vh,Vh, tgv=-2); //FreeFEM matrices for LHS and RHS
matrix<real> B = RHS(Vh,Vh,eps=1e-20, tgv=-2);

Mat DistA(A, intersection, D, clean = true);  //Distributed PETSc matrices over processes
Mat DistB(DistA, B, clean = true);

real[int,int] space(DistA.n,1);
space = 1;

real[int] eigenval(0);          //Array for eigenvalues
Vh<real>[int] def(eigenvec)(1); //Array for eigenvectors

int nEV = 4;
real relTol = 1e-15;
int maxIt = 20000;
string ssparams =            //Parameters for eigenvalue solver
  " -eps_nev " + nEV       + //Number of eigenvalues nEV
  " -eps_type krylovschur" + //Krylov Schur decomposition
  " -eps_target "+ 0.0     + //Shift value
  " -st_type sinvert "     + //Using shift invert method
  " -st_pc_type lu " + //Cholesky preconditioner
  " -eps_view "            + //Outputs eigenvalues to terminal
  " -eps_gen_hermitian "   + //Hermitian matrix
  " -eps_tol " + relTol    + //Tolerance on egenvalue calculation
  " -eps_max_it " + maxIt    //Maximum no. iterations for solver
  ;

int k = EPSSolve(DistA,DistB,   //SLEPc Parallel eigenvalue solver
vectors = eigenvec,             //Array to store eigenfunctions
values  = eigenval,             //Array to store eigenvalues
sparams = ssparams  
);
k=min(k,nEV); //Ignores converged eigenvalues outside range
Vh<real> Temp;

for(int i=0;i<k;i++){
    if(!mpirank) cout << " Eigenvalue #"+i+" = "+eigenval[i]<<endl;
    Temp = eigenvec[i];
    plotMPI(Mesh,   // The local mesh
            Temp,   // The local solution
            Pk,     // Local FE-space
            def,    // Macro for field definition
            real,   // Type
            cmm = "Psi("+i+")  EV = "+eigenval[i]
           )
}
//Eigenvectors are normalised, can test using this loop
Vh<real> densityTotal;
for(int j=0;j<k;j++){
  Vh scaledPsi = eigenvec[j];
  for[j,v : DistA.D] scaledPsi[][j] *= v;
  densityTotal = densityTotal + scaledPsi*eigenvec[j];
  real localNorm = int3d(Mesh)(scaledPsi*eigenvec[j]);   //Calculates square integral
  real globalNorm;
  mpiAllReduce(localNorm, globalNorm, mpiCommWorld, mpiSUM); //Accounts for ghost element multiplicity in norm 
  if(mpirank == 0)
      cout << globalNorm << endl; //Shows eigenvectors are normalised
}

//for(int j=0;j<k;j++){
    //Vh scaledPsi = eigenvec[j];
    //for[j,v : DistA.D] scaledPsi[][j] *= v;
    //real localNorm = int3d(Mesh)(scaledPsi*eigenvec[j]);   //Calculates square integral
    //real globalNorm;
    //mpiAllReduce(localNorm, globalNorm, mpiCommWorld, mpiSUM); //Accounts for ghost element multiplicity in norm 
    //if(mpirank == 0)
      //cout << reduce << endl; //Shows eigenvectors are normalised
//}
for(int j=0;j<k;j++){
    Vh scaledPsi = eigenvec[j];
    for[j,v : DistA.D] scaledPsi[][j] *= v;
    Vh density = scaledPsi*eigenvec[j]; 
    plotMPI(Mesh,   // The local mesh
            density,   // The local solution
            Pk,     // Local FE-space
            def,    // Macro for field definition
            real,   // Type
            cmm = "|Psi|^2("+j+")  EV = "+eigenval[j]
           )
    
}
plotMPI(Mesh,   // The local mesh
        densityTotal,   // The local solution
        Pk,     // Local FE-space
        def,    // Macro for field definition
        real,   // Type
        cmm = "n"
        )

Vh exactPlaneWaveDensity3 = ((1/500)^(0.5))*((cos(pi*x/10))^2+(cos(pi*y/10))^2+(cos(pi*z/10))^2);
real localElectronNumber = int3d(Mesh)(densityTotal);
real globalElectronNumber;
mpiAllReduce(localElectronNumber, globalElectronNumber, mpiCommWorld, mpiSUM); 
if(mpirank == 0)
    cout << "globalElectronNumber = " + globalElectronNumber << endl; //Checks density
densityTotal = densityTotal * nEV/globalElectronNumber; //Corrects for compounding floating point error in density

//real correctedLocalElectronNumber = int3d(Mesh)(densityTotal);
//real correctedGlobalElectronNumber;
//mpiAllReduce(correctedLocalElectronNumber, correctedGlobalElectronNumber, mpiCommWorld, mpiSUM); 
//if(mpirank == 0)
//    cout << "correctedGlobalElectronNumber = " + correctedGlobalElectronNumber << endl; //Checks density has been corrected

real localError = int3d(Mesh)(exactPlaneWaveDensity3 - densityTotal);
real globalError;
mpiAllReduce(localError, globalError, mpiCommWorld, mpiSUM); //Accounts for ghost element multiplicity in norm 
if(mpirank == 0)
    cout << "globalError = " + globalError << endl; //Shows eigenvectors are normalised
real localExactNumber = int3d(Mesh)(exactPlaneWaveDensity3);
real globalExactNumber;
mpiAllReduce(localExactNumber, globalExactNumber, mpiCommWorld, mpiSUM); //Accounts for ghost element multiplicity in norm 
if(mpirank == 0)
    cout << "globalExactNumber = " + globalExactNumber << endl; //Shows eigenvectors are normalised

//int[int] fforder(1);
//fforder = 1; //1 for P1 or higher, 0 for P0
//savevtk("pretty_test_quantum.vtu", Mesh, densityTotal, order = fforder);

Setting tgv=-2 on B did not seem to fix this eigenvalue scaling issue.

That’s because you need -20 on B, not -2, and of course you need to fix your varf as well.


Apologies about the varf error. I’ve now set tgv to -20 on B, which gives eigenvalues of 15.41, 31.45, 31.45, and 32.56. Fixing the varf has restored the correct pattern, and after setting eps_gen_hermitian my eigenstates are normalised. However, the scaling on the eigenvalues remains, and I’m still unsure why I’m getting 0 from the integral on line 198. Is there any documentation on the various tgv values?

Typical error: something/10 returns 0; you need to do something/10.0 (floating-point division instead of integer division).
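For example:

real a = 1/500;   //Both operands are integers, so integer division: a = 0
real b = 1.0/500; //Floating-point division: b = 0.002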


You can search the manual for “tgv”; there is a short paragraph on the values. As stated above, your “1/500” is zero :)

However, take a look at your potential and see if it is what you think. The cube mesh AFAICT has coordinates in [0,1], while your quadratic is centered at 5.
forum12.edp (8.7 KB)

This should display your potential at z=0.
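If the intent is a genuine 10x10x10 box, one fix (a sketch using the optional coordinate-transformation argument of cube; the cell counts only set the resolution) would be:

mesh3 Mesh = cube(10, 10, 10, [10*x, 10*y, 10*z]); //10 cells per side, scaled from the unit cube to [0,10]^3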

For tgv, you can look at both FreeFem-sources/tgv-test.edp at 1312e9d9c06fec8d3d49f1452df6f951901d85c9 · FreeFem/FreeFem-sources · GitHub and slide 9 of https://joliv.et/FreeFem-tutorial/main.pdf.


Thank you for the tgv documentation links; I think I understand what’s going on there. Changing the integers to floating points in my integral gave a non-zero value.
@marchywka It seems the cube is 1x1x1 as you say; when I change the values of the arguments of cube, the eigenvalues remain the same and match a 1x1x1 box. Thanks very much to both of you for your help.
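As a sanity check, a unit Dirichlet cube in these units has

E = \frac{\pi^2}{2}\left(n_x^2 + n_y^2 + n_z^2\right),

i.e. a ground state at 3\pi^2/2 \approx 14.80 and a 3-fold degenerate first excited level at 6\pi^2/2 \approx 29.61, close to the reported 15.41 and ~31.5 (the remainder presumably being P1 discretisation error on a coarse mesh).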

But do you use the quadratic potential, which is apparently centered at 5 instead of 0.5? You can change the medit params or just dump it. Note too that fixing one problem can make the results a lot worse; you need to fix everything :) Also check the signs, as the grad-grad term had an integration by parts. And you really have two potentials: the intended one and the finite size of the mesh. Higher states will look like a particle in a box, not a harmonic oscillator.