Overlapping mesh - integral calculation

Dear FreeFem++ Users!

I am trying to solve the linearized Navier-Stokes equations using an overlapping mesh, based on the available codes. Although I am managing to get things done, there are a few things I do not understand, which annoy me very much.

Calculating an integral over the whole computational domain for an overlapping distributed mesh: is this the only way? And if so, could someone please explain the code a little bit more? I do not understand what the commands

Ph part;
if (mpirank == 0) partitionerSeq(part[], Th, mpisize);
partitionerPar(part[], Th, mpiCommWorld, mpisize);

do.

Also, what is the way to gather a function from an overlapping distributed domain onto a single processor? Something similar is done in the mesh adaptation examples, but I am not quite sure how that works either.

Thank you for your help
András

Dear András,
You may be interested in the following tutorial, which explains in a little more detail why things work that way.
If you have specific questions, please feel free to ask them here.

Ph part;
if (mpirank == 0) partitionerSeq(part[], Th, mpisize);
partitionerPar(part[], Th, mpiCommWorld, mpisize);

This does the domain decomposition: basically, it splits a global mesh into local meshes.
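As for the integral itself: once the mesh has been distributed (with buildWithPartitioning() or buildDmesh()) and part has been re-interpolated on the local mesh, one way to avoid counting the overlap twice is to truncate to a non-overlapping local mesh and reduce the local contributions. A minimal sketch, reusing the names from the snippet above (the integrand x*y is only a placeholder):

mesh ThNo = trunc(Th, abs(part - mpirank) < 1e-1); // keep only the elements owned by this process
real locInt = int2d(ThNo)(x*y);                    // local contribution, no element counted twice
real globInt;
mpiAllReduce(locInt, globInt, mpiCommWorld, mpiSUM);
if (mpirank == 0)
    cout << "global integral = " << globInt << endl;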
There are multiple ways to go from a distributed to a centralized solution. A slightly better approach is given there, using the your_mesh_name#N2O parameter.
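For reference, here is a rough sketch of how that gather can look with the higher-level macros, following the conventions I believe are used in the macro_ddm.idp examples (the mesh_name#N2O hook, the restrict() keyword, and the partition of unity A.D attached to a Mat); treat it as a sketch and check the tutorial for the exact conventions:

load "PETSc"
macro dimension()2// EOM
include "macro_ddm.idp"
macro def(i)i// EOM
macro init(i)i// EOM

mesh ThGlobal = square(50, 50);              // global mesh, kept on every process
mesh Th = ThGlobal;                          // will become the local overlapping mesh
int[int] n2o;                                // local-to-global element numbering
macro ThN2O()n2o// so that buildDmesh() fills n2o
buildDmesh(Th)
Mat A;
createMat(Th, A, P1)                         // also provides the partition of unity A.D

fespace Vh(Th, P1);
fespace VhGlobal(ThGlobal, P1);
Vh u = x + y;                                // some distributed function
int[int] rest = restrict(Vh, VhGlobal, n2o); // local dof -> global dof numbering
real[int] uR(VhGlobal.ndof), uG(VhGlobal.ndof);
real[int] scaled = u[];
scaled .*= A.D;                              // scale by the partition of unity
uR = 0;
for (int i = 0; i < rest.n; ++i)
    uR[rest[i]] = scaled[i];                 // scatter local values into a global-sized vector
mpiAllReduce(uR, uG, mpiCommWorld, mpiSUM);  // overlap contributions now sum to the true values
VhGlobal uGlobal;
uGlobal[] = uG;                              // centralized copy, identical on every rank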
If you want to solve Navier–Stokes equations, there is a code readily available here (2D) or there (3D).

Let me know if you need some more details.


Thank you so much for the fast reply. If further questions arise, I will let you know!

Hello,

I have a question. I have a code which solves the Navier-Stokes equations and, based on the solution, performs a mesh adaptation. This works fine; I do it by creating an overlapping mesh with the command build. However, since I want to calculate a number of integrals on the domain later, I would also like to create a non-overlapping mesh.
The creation of this mesh works fine, I tested it with a simple integral, but if I want to solve the Navier-Stokes equations again after the adaptation, EPSSolve fails. If I plot the solution after the adaptation, I get the following image:

This tells me that the adaptation itself probably works fine, but the function values get mixed up later. Here is the portion of my code which I think is relevant:

SNESSolve(dJ, funcJ, funcRes, xPETSc, sparams = "-snes_monitor -snes_linesearch_order 1 -snes_atol " + tolNewton);
changeNumbering(dJ, ub1[], xPETSc, inverse = true, exchange = false);
/* ########## adapt ########## */
{
fespace VhZero(thzero, Pk);
VhZero def(uG), def(uReduce);
def(uReduce) = [ub1, ub2, pb];
mpiAllReduce(uReduce[], uG[], mpiCommWorld, mpiSUM);

if(mpirank==0){
	thzero = adaptmesh(thzero, def(uG),nbvx=nNbvx,errg = errgPar, ratio = ratioPar, hmin = hminPar, hmax = hmaxPar, err = errPar);
	cout << "successful adaptation step, number of triangles: " +  thzero.nt + ",number of vertices: " + thzero.nv + ", max. edge size: " + thzero.hmax + ", min. edge size: " + thzero.hmin <<endl;
};
broadcast(processor(0), thzero);
/*  interpolation of the old solution to the new grid  */ 
def(uG) = def(uG);
th = thzero;
[ub1, ub2, pb] = def(uG);
intersection.resize(0);
D.resize(0);
s=1;
mpiBarrier(mpiCommWorld);
part = 0;
if (mpirank == 0) partitionerSeq(part[], th, mpisize);
partitionerPar(part[], th, mpiCommWorld, mpisize);
buildWithPartitioning(th, part[], s, intersection, D, Pk, mpiCommWorld); // data for PETSc or HPDDM
part = part;
thNo = trunc(th, abs(part - mpirank) < 0.1);
Mat dAdapt(Uvvp.ndof, intersection, D);
dJ = dAdapt;
[ub1, ub2, pb] = [ub1, ub2, pb];
}
set(dJ, sparams = linsolvePar);
xPETSc.resize(0);

Thanks for the help in advance!
András

Any chance you could paste the full code so that I can run it, surrounded by three single quotes ```, please?

like this;
easier for me to paste in my editor;

FWIW, I’ve just added mesh adaptation in this commit. There are some comments so I hope it’s clear to you.

I’d advise not using buildWithPartitioning(), because this is a rather low-level macro. Instead, you can use higher-level, more user-friendly macros like buildDmesh() or createMat(). They will allow you to do much more powerful operations such as interpolation in parallel, mesh adaptation in parallel, and so on.
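For what it’s worth, here is a minimal sketch of that workflow on a scalar Laplacian, just to show the shape of the calls (the PETSc options are arbitrary and only meant as an example):

load "PETSc"
macro dimension()2// EOM
include "macro_ddm.idp"
macro def(i)i// EOM
macro init(i)i// EOM

mesh Th = square(50, 50);
buildDmesh(Th)                        // Th becomes the local overlapping mesh of this process
Mat A;
createMat(Th, A, P1)                  // distributed matrix + partition of unity, no manual arrays

fespace Vh(Th, P1);
varf vPb(u, v) = int2d(Th)(dx(u)*dx(v) + dy(u)*dy(v)) + int2d(Th)(v) + on(1, 2, 3, 4, u = 0);
A = vPb(Vh, Vh);
real[int] rhs = vPb(0, Vh);
set(A, sparams = "-ksp_type cg -pc_type gamg");
Vh u;
u[] = A^-1 * rhs;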

While creating the MWE from my multiple .idp files, I managed to find the mistake. It was related to improper usage of different variables.

Thank you for the new example with comments, this helps a lot. Also, thanks for pointing out the new high-level macros; they are barely mentioned in the documentation. I will definitely try to use them.

Good to know you fixed your issue. I need to update my tutorial indeed… 🙂

Dear @prj,

I ran into trouble while trying to interpolate data between two distributed meshes. On the same computational domain, I have two meshes with different resolutions. For both resolutions, I have a distributed overlapping, a distributed non-overlapping, and a non-distributed copy. I am struggling to figure out how to interpolate a function from one mesh resolution to the other. Direct parallel interpolation does not seem to work properly. I also tried to interpolate between the two resolutions on a single processor on the non-distributed mesh (similar to the adaptation examples), but in that case the function values get “over-summed”, and furthermore the interpolation still does not work properly.

I have a feeling I have to use the command restrict, but I am not sure how. I created the following MWE to illustrate the problem:

load "PETSc"

macro trueRestrict()true//
macro removeZeros()true//
macro dimension()2//
include "macro_ddm.idp"

macro def(i)[i, i#B, i#C]// 
macro init(i)[i,i,i]//

int[int] Order = [1];  

/* ################### domain splitting variables PETSc ################### */
int s = 1;                                             // Refinement factor / split factor??
int[int][int] intersection;                            // Local-to-neighbors renumbering
real[int] D;                                           // Partition of unity
func Pk = [P2,P2,P1];                                  // must be the same as the one used during the solution

mesh th = square(50,50);
mesh thzero = th;
mesh thNo = th;

/* FE space definition */ 
fespace Uvvp(th,Pk);    
fespace UUU(th,P1); 
fespace Ppart(th,P0); 

Uvvp [ub1, ub2, pb];
UUU uuu=0, vvv = 0; 
Ppart part;

{

    if(mpirank == 0){
		partitionerSeq(part[], th, mpisize);
	}
    partitionerPar(part[], th, mpiCommWorld, mpisize);
    buildWithPartitioning(th, part[], 1, intersection, D, Pk, mpiCommWorld);
    part = part;				
    thNo = trunc(th, abs(part - mpirank) < 1e-1);

}


/* ################### domain splitting variables PETSc, mesh2 ################### */
int sBF = 1;                                             // Refinement factor / split factor??
int[int][int] intersectionBF;                            // Local-to-neighbors renumbering
real[int] DBF;                                           // Partition of unity

mesh thBF = square(100,100);
mesh thzeroBF = thBF;
mesh thNoBF = square(100,100);

/* FE space definition */ 
fespace UvvpBF(thBF,Pk);   
fespace UUUBF(thBF,P1); 
fespace PpartBF(thBF,P0); 

UvvpBF [ubBF1, ubBF2, pbBF];
UUUBF uuuBF=0, vvvBF = 0; 
PpartBF partBF;

{

    if(mpirank == 0){
		partitionerSeq(partBF[], thBF, mpisize);
	}
    partitionerPar(partBF[], thBF, mpiCommWorld, mpisize);
    buildWithPartitioning(thBF, partBF[], sBF, intersectionBF, DBF, Pk, mpiCommWorld);
    partBF = partBF;				
    thNoBF = trunc(thBF, abs(partBF - mpirank) < 1e-1);

}
mpiBarrier(mpiCommWorld);

/* ############################### interpolation? ############################### */

// test function
[ubBF1, ubBF2, pbBF] = [x, y, x*y];

// test function plot on original mesh
uuuBF = ubBF1;
vvvBF = ubBF2;
savevtk("orig_thBF",thBF,[uuuBF,vvvBF],dataname = "u", order = Order); 

// ##################### direct interpolation (?) - does not seem to work
[ub1, ub2, pb] = [ubBF1, ubBF2, pbBF];
uuu = ub1;
vvv = ub2;
savevtk("orig_th",th,[uuu,vvv],dataname = "u", order = Order); 

  
// ##################### first to non-overlapping, then collapsing - over-summing, and the interpolation also fails
{
	// to non-overlapping
	fespace UvvpBFNo(thNoBF,Pk);  
	UvvpBFNo [ubBFNo1, ubBFNo2, pbBFNo] = [ubBF1, ubBF2, pbBF];

	// to the non-distributed
	fespace UvvpBFzero(thzeroBF,Pk);
	UvvpBFzero def(uGBFzero), def(uRBFzero);
	def(uRBFzero) = [ubBF1, ubBF2, pbBF];
	mpiAllReduce(uRBFzero[], uGBFzero[], mpiCommWorld, mpiSUM);
	
	fespace Uvvpzero(thzero,Pk);
	Uvvpzero def(uGzero);
	
	if(mpirank==0){
		def(uGzero) = def(uGBFzero);
	}
	
	broadcast(processor(0), uGzero[]);
	[ub1, ub2, pb] = def(uGzero);
	
	
}
 

uuu = ub1;
vvv = ub2;
savevtk("orig_th_v2",th,[uuu,vvv],dataname = "u", order = Order); 

I load PETSc at the beginning because I check the meshes with the command savevtk.
I ran the code on Ubuntu 18.04 LTS, using FreeFem++ version 4.4.

Please help me figure out what is wrong with the code, or point me toward an example that can help me. I looked through the documentation, but I did not manage to figure this out.

Thanks
András

You cannot use direct interpolation: this is a sequential kernel, so you’d need to have the same subdomain meshes, which is not the case given that you partition both global meshes without any constraints.

That being said, you are using the low-level API, which I explicitly recommended not to do in a previous post.

Instead, you can use higher-level, more user-friendly macros like buildDmesh() or createMat(). They will allow you to do much more powerful operations such as interpolation in parallel […].

Your problem is in fact addressed in this other post. You can also have a look at this other example. Let me know if something is not clear to you.
What you do could work, but you are not using the partition of unity, so everything is over-summed on the overlap. Plus, you have to go from local to global to local again, which is rather inefficient.
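To make the partition-of-unity point concrete on your MWE: the array DBF filled by buildWithPartitioning() is exactly that partition of unity for UvvpBF, so scaling the local degrees of freedom by it before the reduction removes the over-summing (it does not remove the local-to-global-to-local inefficiency, though). A sketch reusing your variable names:

UvvpBF [ud1, ud2, pd] = [ubBF1, ubBF2, pbBF];  // copy of the local solution
ud1[] .*= DBF;                                 // scale every dof by the partition of unity
fespace UvvpBFzero(thzeroBF, Pk);
UvvpBFzero def(uGBFzero), def(uRBFzero);
def(uRBFzero) = [ud1, ud2, pd];                // interpolate the scaled field on the global mesh
mpiAllReduce(uRBFzero[], uGBFzero[], mpiCommWorld, mpiSUM); // the overlap now sums to the true values

The rest of your block (interpolation onto thzero and the broadcast) can stay as it is, but the parallel interpolation from the linked example remains the cleaner and faster route.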


Thanks for the quick reply! I will definitely switch to the suggested commands!

No problem. Let me know if you run into issues while updating, but it should significantly simplify your scripts.