GMSH and FREEFEM for 3D meshes

Hi everybody,

I have some doubts about the correct use of GMSH and FreeFEM for 3D meshes.

(1) I would like to understand the difference between:
load "msh3"
load "gmsh"

mesh3 Th = gmshload3("maillage.msh");
and
mesh3 Th = readmesh3("maillage.mesh");

(2) As I read in a 2018 article, FreeFEM can't read *.msh files, only *.mesh?
But with the correct use of Physical Volume or Physical Surface it seems to me that *.msh files work. So, what's true?

(3) Another piece of information I saw on a forum claims that we should take care of the format as follows:
Mesh.Format=1;
// Select the older mesh version, for compatibility with FreeFEM
Mesh.MshFileVersion = 2.2;

(4) And finally, on this link: https://www.um.es/freefem/ff++/pmwiki.php?n=Main.GMSH3D
mesh3 Th1 = gmshload3("essai.msh");
mesh3 Th = tetg(Th1, switch="pqaAAYYQ");
I read that the mesh loaded from Gmsh is not correct for FreeFEM and that it should be transformed with the tetg command, etc.
For the last two commands I get an error.


So, I would be happy to understand and follow the best way to use GMSH and FreeFEM for 3D meshes.
Thanks in advance.

In 3D, .msh => Gmsh format, .mesh => Inria MEDIT format. If you use Gmsh format, you need to stick to version 2.2 of the format. You can use a newer version by using DMPlex from PETSc (which can load many other file formats like .h5, see this example).
The script you are linking from https://www.um.es is too old, the syntax is not the same anymore.
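
For reference, here is a minimal sketch of the two load routes (the file names are placeholders for your own files; the .msh must be written in MSH format 2.2, and the .mesh is the Inria MEDIT export of the same geometry):

load "msh3"
load "gmsh"
// Gmsh route: the .msh must have been written with Mesh.MshFileVersion = 2.2
mesh3 ThGmsh = gmshload3("maillage.msh");
// MEDIT route: the same geometry exported by Gmsh in Inria .mesh format
mesh3 ThMedit = readmesh3("maillage.mesh");
cout << ThGmsh.nv << " nodes (Gmsh), " << ThMedit.nv << " nodes (MEDIT)" << endl;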

Thank you for your prompt and thorough response. So I will keep the *.msh format, version 2.2, as I usually do.

In fact, I wanted to understand this problem because I have a model that includes mesh elements with a wide size range, for microtech problems. In GMSH I created a mesh with 20 000 nodes (4 DOF/node) with dimensions in [µm] (otherwise GMSH fails, since there is a thin layer of 50 nm). The smallest element size is about 10 nm and the biggest about 50 µm. Then I import this .msh into FreeFEM with gmshload3, etc., followed by the commands:
real scale=1e-6;
Th3 = movemesh(Th3, [x * scale, y * scale, z * scale]);
in order to keep the dimensions in [m] in FreeFEM, consistent with the physical constants in SI units, and it works…

But when I use a model with 35 000 nodes (4 DOF/node), the results are ZERO everywhere. I wonder if it is a problem of double precision in the link between GMSH and FreeFEM, or a problem with the numerical computations. Why does it work at 20 000 nodes but not at 35 000 nodes? My first idea concerns the meshes, hence my questions…
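
For what it is worth, here is the small sanity check I could run after the rescale (just a sketch, the file name is a placeholder), printing the node count and the coordinate extrema to make sure the units are what I expect:

load "msh3"
load "gmsh"
mesh3 Th3 = gmshload3("maillage.msh");   // placeholder file name
real scale = 1e-6;                       // µm -> m
Th3 = movemesh(Th3, [x*scale, y*scale, z*scale]);
// interpolate the coordinates to inspect their range after rescaling
fespace Ph(Th3, P1);
Ph xx = x;
Ph yy = y;
Ph zz = z;
cout << "nodes: " << Th3.nv << ", tetrahedra: " << Th3.nt << endl;
cout << "x in [" << xx[].min << ", " << xx[].max << "] m" << endl;
cout << "y in [" << yy[].min << ", " << yy[].max << "] m" << endl;
cout << "z in [" << zz[].min << ", " << zz[].max << "] m" << endl;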

Thanks a lot.

  1. you can have Gmsh export the mesh to Inria MEDIT format (see the sketch after this list)
  2. it’s hard to tell why it would fail without the script/mesh file
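
Regarding point 1, a minimal check once you have the MEDIT export (the file name is a placeholder): read it with readmesh3 and print the labels, so you can compare them with the Physical Surface / Physical Volume tags of your .geo:

load "msh3"
mesh3 Th = readmesh3("maillage.mesh");   // MEDIT export of the same geometry
int[int] labs = labels(Th);              // boundary labels seen by FreeFEM
int[int] regs = regions(Th);             // region (volume) labels
cout << "boundary labels: " << labs << endl;
cout << "region labels:   " << regs << endl;
cout << "nv = " << Th.nv << ", nt = " << Th.nt << ", nbe = " << Th.nbe << endl;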

Thanks!

Ok.
(1) Why the MEDIT format? Do you think it is better for precision, or something else?
(2) If I don't solve my problem, OK, I will post it (because it is rather long), and I am sure you will give me advice not only for this problem but also for other topics!
Thank you very much.

Maybe better, not sure. Just let me know.

Ok.
I tried with the MEDIT format and my second file (i.e. with 35 000 nodes) works. And I tried again with the Gmsh format and it works too!! But I had inserted the command:
Mesh.Format=1; in the .geo file for GMSH.

I saw the commands facemerge and ptmerge for movemesh, but I didn't use them since it works…
I also saw the checkmesh command; should I use it?

To conclude, I will try another file with 50 000 nodes, using commands like the following (in the .geo file for Gmsh):
Field[1] = MathEval;
Field[1].F = Sprintf("%g + (%g-%g)/%g*Sqrt(x*x+y*y+z*z)", lcmin, lc3, lcmin, dmax3);
to refine the mesh in a region, since I now use the CSG commands of Gmsh.

(I use sparsesolver for solving).
thanks again.

Hi,

So I built the models (see my last post) and they work fine.

The solvers used were either UMFPACK or sparsesolver, with P1 interpolation. But with P2 interpolation I get a ZERO solution everywhere… When I use a model with 2000 nodes, both P1 and P2 work, but with a model of 7000 nodes P2 fails. Do you have any idea why P2 fails?

Thanks in advance.

Probably because UMFPACK runs out of memory. I can help you switch to more efficient methods, mostly parallel solvers, if you want, but I need to have a look at the meshes/scripts.
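
To give you an idea, here is a rough sketch of what the parallel version looks like, based on the standard FreeFEM/PETSc diffusion examples (a toy Laplace problem on a cube, not your actual forms; for your complex-valued problem you would use load "PETSc-complex" and a complex varf instead). It is run with something like ff-mpirun -np 4 script.edp:

load "PETSc"                         // "PETSc-complex" for complex arithmetic
load "msh3"
macro dimension()3// EOM
include "macro_ddm.idp"

macro def(u)u// EOM
macro init(u)u// EOM
macro grad(u)[dx(u), dy(u), dz(u)]// EOM
func Pk = P2;

mesh3 Th = cube(20, 20, 20);         // toy mesh instead of the real geometry
buildDmesh(Th)                       // distribute the mesh over the MPI processes
Mat A;
createMat(Th, A, Pk)                 // distributed matrix matching the decomposition

fespace Wh(Th, Pk);
varf vPb(u, v) = int3d(Th)(grad(u)' * grad(v)) + int3d(Th)(v) + on(1, u = 0);
A = vPb(Wh, Wh, tgv = -1);
real[int] rhs = vPb(0, Wh, tgv = -1);

set(A, sparams = "-ksp_type cg -pc_type gamg -ksp_monitor");
Wh u;
u[] = A^-1 * rhs;                    // Krylov solve instead of the sequential direct solver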

Hi, sorry to answer so late, I have had a lot of teaching activities these last few days. Here is my code, perhaps naive as I am not very familiar with the C++-like syntax. I have inserted some remarks (REM:) in the code where I have doubts.

If you can debug my file, you can then also try the other parameters {tau1 = tau2 = 1e-9 and freq = 1e9}.

This code shows a way of working with [u1,u2,u3,uT], because I got an error with my first code, in which I chose [u1,u2,u3] and [uT] and used a block matrix: the final matrix is square, but when I ran the code it reported that some matrices were not square?? Of course there are rectangular matrices, but once assembled the whole system should be square, shouldn't it?

I think that using a block matrix is more efficient, but I failed, so I give here the two files.
I hope to understand my mistakes; my aim is to have an elegant file.
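
To be clear, this is the kind of block-matrix pattern I was aiming for, written as a sketch on a toy cube mesh with simplified, real-valued forms (not my actual problem): the diagonal blocks are square, the coupling blocks are rectangular, and the assembled matrix is square.

load "msh3"
mesh3 Th = cube(5, 5, 5);                  // small toy mesh, not my real geometry
fespace Vh(Th, [P1, P1, P1]);              // mechanical unknowns [u1, u2, u3]
fespace Wh(Th, P1);                        // thermal unknown uT

// square diagonal blocks (mass + stiffness, so the toy system is invertible without BCs)
varf vA([u1, u2, u3], [v1, v2, v3])
  = int3d(Th)( dx(u1)*dx(v1) + dy(u1)*dy(v1) + dz(u1)*dz(v1)
             + dx(u2)*dx(v2) + dy(u2)*dy(v2) + dz(u2)*dz(v2)
             + dx(u3)*dx(v3) + dy(u3)*dy(v3) + dz(u3)*dz(v3)
             + u1*v1 + u2*v2 + u3*v3 );
varf vD(uT, vT) = int3d(Th)( dx(uT)*dx(vT) + dy(uT)*dy(vT) + dz(uT)*dz(vT) + uT*vT );

// rectangular coupling blocks
varf vB([uT], [v1, v2, v3]) = int3d(Th)( uT*(v1 + v2 + v3) );   // Vh.ndof x Wh.ndof
varf vC([u1, u2, u3], [vT]) = int3d(Th)( (u1 + u2 + u3)*vT );   // Wh.ndof x Vh.ndof

matrix AA = vA(Vh, Vh);                    // square block
matrix DD = vD(Wh, Wh);                    // square block
matrix BB = vB(Wh, Vh);                    // rectangular block
matrix CC = vC(Vh, Wh);                    // rectangular block
matrix M = [[AA, BB],
            [CC, DD]];                     // the assembled system is square

cout << "M is " << M.n << " x " << M.m << endl;
real[int] rhs(M.n);
rhs = 1.;                                  // dummy right-hand side, just to test the solve
set(M, solver = sparsesolver);
real[int] sol = M^-1 * rhs;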

Sincerely, thanks in advance.
fichiers.zip (1.0 MB)

The code is quite huge. Two questions:

  1. what's the issue with load "PETSc-complex"?
  2. it will be easier for me to fix only test.edp, is that OK?

The issue indeed comes from the memory consumption of the direct solve. I can solve that no problem using parallel computing, but we need to fix load "PETSc-complex" first, because otherwise you won’t be able to run the code I send you.
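
To isolate the loading issue, you can try a script that does nothing but load the plugin (hypothetical file name test-load.edp, run for example with ff-mpirun -np 1 test-load.edp):

load "PETSc-complex"
cout << "PETSc-complex loaded, mpirank = " << mpirank << endl;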

Also, your mesh is not correct.
You can just do the following to see that it’s breaking FreeFEM.

load "gmsh"
load "msh3"
mesh3 Th = gmshload3("maillage.msh");
Th = trunc(Th, 1);

Thanks for your help…

  1. FreeFEM gives:
    current line = 54
    Assertion fail : (ii==MotClef.end())
    line :120, in file lex.cpp
    error Assertion fail : (ii==MotClef.end())
    line :120, in file lex.cpp
    code = 6 mpirank: 0

  2. if you fix the first one, it would be very kind of you. In fact, the second file is the same as the first one, except for the block-matrix method and the related varf.

Do you need any other information?
Thanks.

  1. does that happen even if you comment load "PETSc" and just leave load "PETSc-complex" in your .edp?
  2. I need a working mesh file (see my second message later today); maillage.msh can't be used.

My mesh is not correct? Why? I can give you my .geo file for GMSH.
Excuse me, but what is the aim of the trunc command?

Yes, please send either a .mesh or a .geo.
It is not needed per se there, but it is usually a sign that something is not correct with the mesh. Also, it is needed for parallel computing to define subdomains (trunc is used to extract the local domains on each process).

Indeed, it seems to work when I comment load "PETSc" and leave the other one. The computations are running…
OK, I will give you the .geo file right away, and I can also give you, and test, the *.mesh file.

The .geo plus the command used to generate the mesh is good. The parallel script is ready, but I want to double-check that I'm not sending you garbage first, so I need a working mesh :)

In fact I used many files, so I show you the main .geo file. Here you get 19 030 nodes whereas with the original mesh it is 15 010 nodes, but it is exactly the same file. You can play with lc1, lc2 and lc3 to modulate the mesh size and refinement.
geo.zip (1.2 KB)

Yes, so your .geo is definitely wrong. If you open it with Gmsh, you'll see that, as surfaces, you only have the following two.


This is wrong: all boundary elements must be assigned to a surface, otherwise FreeFEM does not know about the geometry of the domain.