TetGen for Adaptation using residual error indicator

Hi everyone,

I am trying to implement something similar to “Adaptation using residual error indicator” but in 3D.
Since, to my understanding, adaptmesh does not work in 3D, I tried to use tetgreconstruction from TetGen instead. My questions concern the application of this function:
Can I use the parameter sizeofvolume similar to h in adaptmesh, i.e. to give a new volume size to each degree of freedom?
In the 2D case, one can use MeshSizecomputation to compute the current h. How can I compute the current volume sufficiently accurately?
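For context, here is roughly the call I have in mind (a sketch only; I am assuming `sizeofvolume` accepts a P1 field, and `volNew` is a hypothetical target field — `volume` is FreeFEM's per-element volume keyword in 3D, the analogue of `area` in 2D):

```freefem
load "msh3"
load "tetgen"

mesh3 Th = cube(10, 10, 10);
fespace Ph(Th, P0);
fespace Vh(Th, P1);
Ph vol = volume;          // current volume of each tetrahedron
// hypothetical: derive target volumes from an error indicator eta (P0)
// Vh volNew = ...;       // e.g. interpolate vol/2 where eta is large
// Th = tetgreconstruction(Th, switch = "raAQ", sizeofvolume = volNew);
```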

Thanks for your help.

Just a quick note: for 3D, as adaptation can be quite costly, you might have a better chance trying mshmet + parmmg3D, which can be done on distributed meshes in parallel. I am not familiar with these tools, but I think they are explained in this and this example.
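For the mshmet part, the idea is roughly the following (a sketch; parameter names are those of the mshmet plugin, the values and the solution `u` are purely illustrative):

```freefem
load "msh3"
load "mshmet"

mesh3 Th = cube(10, 10, 10);
fespace Vh(Th, P1);
Vh u = sin(pi*x)*cos(pi*y)*z;   // stand-in for a computed solution
// compute a mesh-size metric from u, bounded between hmin and hmax
real[int] met = mshmet(Th, u, hmin = 1e-3, hmax = 0.3, err = 1e-2);
// met can then be handed to mmg3d/parmmg3d as the metric
```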

Thank you @aszaboa for your suggestion!
I have been looking into MPI, but I feel like I don’t know enough about computers to understand what it is doing. Wouldn’t each of the four processes do the same thing, since it isn’t specified anywhere how they should differ?

The command buildDmesh(Th) in the program automatically distributes the meshes between processes so that each process sees only a part of the computational domain. If you want to get familiar with the parallel capabilities of FreeFem, I recommend this tutorial (and also part two).
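As a minimal illustration of what buildDmesh does (a sketch; run with e.g. `ff-mpirun -np 4 script.edp`):

```freefem
load "PETSc"
macro dimension()2// EOM
include "macro_ddm.idp"

mesh Th = square(40, 40);   // every process starts from the same global mesh
buildDmesh(Th)              // each process now keeps only its part (+ ghost elements)
cout << "process " << mpirank << "/" << mpisize
     << " sees " << Th.nt << " triangles" << endl;
```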

Thanks again, I watched both videos and they were very helpful!
So I’m now trying to use SLEPc to solve my problem, still in 2D. But I have run into a new problem: the eigenvalues in the SLEPc code are completely wrong and I can’t find where I went wrong. I attach both the parallel and the sequential code, which should yield the same eigenvalues but obviously don’t. If you could have a look I would be very grateful.
Edit: I noticed there is a segmentation fault, but I don’t know where it is coming from; it might have something to do with the faulty calculation.
annulus-petsc.edp (1.3 KB)
annulus-seq.edp (1.5 KB)

You are missing -st_type sinvert in your sparams. Then, you’ll get:

Eigenvalues = 
   39.14136
   41.01602
   41.01719
   46.62223
   46.62730

Thank you, now this works!
As I said in the beginning, I’m doing adaptive mesh refinement, for now in 2D; could you help me get it sorted with MPI? I’m stuck on the following: I have decomposed my mesh and computed the eigenvalues. Then I estimated the error based on the resulting eigenfunction and eigenvalue and refined the mesh accordingly. This all works, but the meshes are refined per process, so the overlaps no longer coincide. Now I want to re-compute the matrices, eigenfunctions and eigenvalues to do another mesh adaptation. I think the program gets stuck on createMat, and I fear it is because my mesh is still decomposed. In your video from the FreeFem Days, “Pierre Jolivet introduction to parallel FreeFEM part 2”, you talked briefly about mesh adaptation at the end, having assumed in the first part that the mesh is global. Do I need to make my mesh global, and if so, does that even work given that I have changed the decomposed parts of the original mesh?
annulus-petsc-adaptation-loop.edp (7.7 KB)

Your script is quite long. However, I think the issue is that if you want to use adaptmesh, you need to call it on a global mesh, that is, one that is not partitioned. This means you need to save a copy of your mesh before buildDmesh (let’s call it ThGlobal), call adaptmesh on ThGlobal, update the mesh Th from ThGlobal, and then call buildDmesh again. The following example demonstrates the adaptation procedure.
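A minimal sketch of that procedure (hGlobal is a hypothetical size field already gathered on the global mesh; building it from the distributed pieces is a separate step):

```freefem
load "PETSc"
macro dimension()2// EOM
include "macro_ddm.idp"

mesh ThGlobal = square(40, 40);   // keep an un-partitioned copy
mesh Th = ThGlobal;
buildDmesh(Th)                    // Th is now distributed
// ... solve on Th, compute the indicator, reduce it onto ThGlobal ...
fespace VhGlobal(ThGlobal, P1);
VhGlobal hGlobal = 0.05;          // hypothetical target mesh size
ThGlobal = adaptmesh(ThGlobal, hGlobal, IsMetric = 1);  // adapt the global mesh
Th = ThGlobal;                    // reset the working mesh
buildDmesh(Th)                    // partition the adapted mesh again
```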

Thank you for your input. If what I hope for won’t work, I’ll gladly try it like this!
But what I am hoping for is to be able to do the mesh refinement with MPI as well, as it takes quite a long time to execute. This of course might also be due to my inexperienced coding: I am looping over all triangles to find the ones I want to refine, i.e. the ones that should get a smaller size h. (This happens at the end of the macro ReMeshIndicator.)
Do you have an idea to optimise my loop or to maybe still do adaptation on the distributed mesh?

In 2D, there are no available tools for distributed mesh adaptation. But what you are doing seems quite expensive, and I’d highly suggest switching to Mmg, which is quite cheap in 2D.
If you want to stick with adaptmesh, I’d suggest rewriting your function ReMeshIndicator as a small C++ plugin, because loops in FreeFEM are usually quite slow.

I don’t prefer any of the possible tools; I’m rather overwhelmed by the choices :) (I’m also generally a bit overwhelmed by FreeFem, so excuse me for asking silly questions.)
But given that I need to write the analogue code in 3D after, it makes sense not to stick with adaptmesh.
What do you mean by switching to Mmg? As I understand it, I could use parmmg3d and mmg3d to remesh (in 3D), but to compute the new mesh size I still need something else, right?
My idea now is that I could compute the new mesh size (which is an FE-space function) with what I coded inside ReMeshIndicator on each process, and then “pull” it over onto the global mesh using ThN2O as in the video. Then I could adapt the global backup mesh with whichever mesh adaptation tool.
But here I already run into the next problem. I tried to copy exactly what you do in the video, but when I apply the restrict function I get an error that I don’t understand.
I’m so sorry for asking so many questions but I very much appreciate your help!
annulus-petsc-N2O.edp (870 Bytes)

restrict must be called after buildDmesh.
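That is, the ThN2O macro must be defined before buildDmesh, and the restrict call must come after it — a sketch:

```freefem
load "PETSc"
macro dimension()2// EOM
include "macro_ddm.idp"

mesh ThGlobal = square(20, 20);
mesh Th = ThGlobal;
int[int] n2o;
macro ThN2O()n2o// EOM  (keeps the local-to-global element map during partitioning)
buildDmesh(Th)                                 // must come first
fespace Vh(Th, P1);
fespace VhGlobal(ThGlobal, P1);
int[int] rest = restrict(Vh, VhGlobal, n2o);   // only valid after buildDmesh
```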

Of course, thanks!

I’m back on 3D trying to remesh using ParMmg. I have been looking at distributed-parmmg.edp and have a few questions regarding ParMmg:
What is the purpose of ThParMmg? I see it is also a distributed mesh, and I don’t understand what it is needed for. Why can’t the metric be defined on Vh(Th, P1)?
What is the difference between reconstructDmesh and buildDmesh?

ParMmg only deals with non-overlapping distributed meshes, so Th can’t be used because it has ghost elements. reconstructDmesh rebuilds ghost elements and subdomain connectivity starting from a non-overlapping distributed mesh, while buildDmesh builds ghost elements and subdomain connectivity from a global mesh that is duplicated on all processes.
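In other words (a sketch; ThNoOverlap stands for a hypothetical mesh that is already distributed without overlap, as ParMmg produces):

```freefem
load "PETSc"
load "msh3"
macro dimension()3// EOM
include "macro_ddm.idp"

mesh3 Th = cube(10, 10, 10);     // duplicated global mesh on every process
buildDmesh(Th)                   // partitions it and adds ghost elements

// mesh3 ThNoOverlap = ...;      // already distributed, no ghost elements
// reconstructDmesh(ThNoOverlap) // rebuilds ghosts + subdomain connectivity
```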

I see. And is there a way to “reduce” a function from Vh(Th,Pk) onto VhParMmg(ThParMmg,Pk) in order to use it as a metric? I reckon createParMmgCommunicators generates some sort of correspondence, but what are the roles of n2o and communicators respectively?

That’s done in FreeFem-sources/laplace-adapt-dist-3d-PETSc.edp at master · FreeFem/FreeFem-sources · GitHub. n2o is used to go back and forth between Th and ThParMmg, and communicators is a variable needed by ParMmg.
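Roughly, the flow in that example is as follows (a hedged sketch from memory: the argument order of createParMmgCommunicators and the exact named parameters of the parmmg3d call should be taken from the linked example, and `met` is a stand-in metric):

```freefem
load "PETSc"
load "msh3"
load "parmmg"
macro dimension()3// EOM
include "macro_ddm.idp"

mesh3 Th = cube(10, 10, 10);
buildDmesh(Th)
fespace Vh(Th, P1);
Vh met = 0.1;                     // metric on the overlapping distributed mesh
mesh3 ThParMmg;
int[int] n2o;                     // Th <-> ThParMmg correspondence
int[int][int] communicators;      // parallel interface info required by ParMmg
createParMmgCommunicators(Th, ThParMmg, n2o, communicators)
fespace VhParMmg(ThParMmg, P1);
int[int] rest = restrict(VhParMmg, Vh, n2o);  // dof map ThParMmg -> Th
VhParMmg metParMmg;
metParMmg[] = met[](rest);                    // “reduce” the metric onto ThParMmg
// then hand ThParMmg, metParMmg[], and communicators to parmmg3d
// (see laplace-adapt-dist-3d-PETSc.edp for the exact call)
```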

Thanks again for all your help.
I have now organised my 2D problem in the following way: I create the mesh Th and its backup ThBackup, then I distribute Th onto the processes, solve the eigenvalue problem, and compute the error estimate and the new mesh size there. Then I reduce the new mesh size onto ThBackup and adapt ThBackup with it.
The problem now is that there is always quite a large error estimate at the “artificial boundary” that is generated by splitting the mesh for the processes. Since these elements are overlapping anyway, my idea is to just disregard the error on the triangles on this boundary.
Is there a way to know the indices of exactly those ghost elements?

And then I have another question: I’m not sure what you are assuming, but I am working with shared memory, not distributed memory. Hence my tutor asked whether there are parallelisation tools other than MPI that might be better suited to shared memory.

The problem now is that there is always quite a large error estimate at the “artificial boundary” that is generated by splitting the mesh for the processes. Since these elements are overlapping anyway, my idea is to just disregard the error on the triangles on this boundary.
Is there a way to know the indices of exactly those ghost elements?

Could you please share a minimal running example?
But yes, you can access ghost degrees of freedom by looking at A.D (assuming A is a Mat). This will give you a real[int]. Whenever abs(A.D[i]) < 1e-4 (or any epsilon value really), then this is a ghost degree of freedom. You could also do something similar as in https://github.com/FreeFem/FreeFem-sources/blob/develop/examples/hpddm/minimal-surface-Tao-2d-PETSc.edp#L18-L21.
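For instance (a sketch, assuming A was created with createMat on Th with P1 elements):

```freefem
load "PETSc"
macro dimension()2// EOM
include "macro_ddm.idp"

mesh Th = square(20, 20);
buildDmesh(Th)
Mat A;
createMat(Th, A, P1)
real[int] D = A.D;          // partition of unity on the local dofs
int nGhost = 0;
for (int i = 0; i < D.n; ++i)
    if (abs(D[i]) < 1e-4)   // dof i is a ghost degree of freedom
        nGhost++;
cout << "process " << mpirank << " has " << nGhost << " ghost dofs" << endl;
```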

other parallelisation tools than MPI which might be more optimal for shared memory?

MPI is the only way to do assembly in parallel in FreeFEM. You can solve linear systems with PARDISO, which uses shared-memory parallelism, but you’ll need the Intel MKL.

I’ve attached my code, reduced as much as I could, though I’m sorry it’s still quite long (I feel it needs all the components that are still there). You can see that, basically from adaptation 2 onwards, the errors on the boundary prevent anything else from being refined.
large-error.edp (8.4 KB)

Is there a way to immediately get the index of the triangle, or to get the indices of the triangles that a given degree of freedom belongs to? Because my error indicator is a function in Ph(Th, P0).

This is very interesting, but I somehow run into the problem that after changing the mesh, the local-to-global correspondence ThN2O no longer works, even if I define it after truncating.

I’ve attached my code reduced as much as I could

I don’t think this is the case. It looks like you have a problem with a global reduction, so this could probably be reduced to 10 lines or so. I’m sorry, I can’t debug a 200+ line code example.

Because my error indicator is a function on Ph(Th,P0)

Then you can do

real[int] D;
createPartition(Th, D, P0);

and use the same condition as above (abs(D[i]) < 1e-4) to get the list of ghost elements.
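Putting it together for a P0 indicator (a sketch; eta is a hypothetical error indicator):

```freefem
load "PETSc"
macro dimension()2// EOM
include "macro_ddm.idp"

mesh Th = square(20, 20);
buildDmesh(Th)
fespace Ph(Th, P0);
Ph eta = x + y;                  // stand-in for the error indicator
real[int] D;
createPartition(Th, D, P0);
for (int k = 0; k < Ph.ndof; ++k)
    if (abs(D[k]) < 1e-4)
        eta[][k] = 0;            // disregard the indicator on ghost triangles
```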

even if I define it after truncating

You’ll need two n2o arrays: one from the local mesh without overlap to the local mesh with overlap, and another from the local mesh with overlap to the global mesh.
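Assuming you have those two arrays (all names and values below are hypothetical), composing them gives the direct no-overlap-to-global map:

```freefem
// n2oNoOverlap[k]: element k of the no-overlap mesh -> index in the overlapping local mesh
// n2oLocal[j]:     element j of the overlapping local mesh -> index in the global mesh
int[int] n2oNoOverlap = [0, 2, 3];   // illustrative data
int[int] n2oLocal = [5, 1, 4, 7];
int[int] n2oGlobal(n2oNoOverlap.n);
for (int k = 0; k < n2oNoOverlap.n; ++k)
    n2oGlobal[k] = n2oLocal[n2oNoOverlap[k]];   // composed map: [5, 4, 7]
```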