Dear @prj,
I ran into trouble while trying to interpolate data between two distributed meshes. On the same computational domain, I have two meshes of different resolution, and for each resolution I keep a distributed-overlapping, a distributed-non-overlapping, and a non-distributed copy. I am struggling to figure out how to interpolate a function from one resolution to the other. Direct parallel interpolation does not seem to work properly. I also tried interpolating between the two resolutions on a single process via the non-distributed meshes (similar to the adaptation examples), but in that case the function values get "over-summed", and the interpolation still does not work properly.
I have a feeling I have to use the command restrict, but I am not sure how (I put a sketch of what I mean after the MWE). I created the following MWE to illustrate the problem:
load "PETSc"
macro trueRestrict()true//
macro removeZeros()true//
macro dimension()2//
include "macro_ddm.idp"
macro def(i)[i, i#B, i#C]//
macro init(i)[i,i,i]//
int[int] Order = [1];
/* ################### domain splitting variables PETSc ################### */
int s = 1; // refinement (split) factor for the local meshes
int[int][int] intersection; // Local-to-neighbors renumbering
real[int] D; // Partition of unity
func Pk = [P2,P2,P1]; // must be the same as the one used during the solution
mesh th = square(50,50);
mesh thzero = th;
mesh thNo = th;
/* FE space definition */
fespace Uvvp(th,Pk);
fespace UUU(th,P1);
fespace Ppart(th,P0);
Uvvp [ub1, ub2, pb];
UUU uuu=0, vvv = 0;
Ppart part;
{
if(mpirank == 0){
partitionerSeq(part[], th, mpisize);
}
partitionerPar(part[], th, mpiCommWorld, mpisize);
buildWithPartitioning(th, part[], s, intersection, D, Pk, mpiCommWorld);
part = part; // re-interpolate the P0 partition field onto the new local mesh
thNo = trunc(th, abs(part - mpirank) < 1e-1); // non-overlapping local mesh
}
/* ################### domain splitting variables PETSc, mesh2 ################### */
int sBF = 1; // refinement (split) factor for the local meshes
int[int][int] intersectionBF; // Local-to-neighbors renumbering
real[int] DBF; // Partition of unity
mesh thBF = square(100,100);
mesh thzeroBF = thBF;
mesh thNoBF = thBF;
/* FE space definition */
fespace UvvpBF(thBF,Pk);
fespace UUUBF(thBF,P1);
fespace PpartBF(thBF,P0);
UvvpBF [ubBF1, ubBF2, pbBF];
UUUBF uuuBF=0, vvvBF = 0;
PpartBF partBF;
{
if(mpirank == 0){
partitionerSeq(partBF[], thBF, mpisize);
}
partitionerPar(partBF[], thBF, mpiCommWorld, mpisize);
buildWithPartitioning(thBF, partBF[], sBF, intersectionBF, DBF, Pk, mpiCommWorld);
partBF = partBF; // re-interpolate the P0 partition field onto the new local mesh
thNoBF = trunc(thBF, abs(partBF - mpirank) < 1e-1); // non-overlapping local mesh
}
mpiBarrier(mpiCommWorld);
/* ############################### interpolation? ############################### */
// test function
[ubBF1, ubBF2, pbBF] = [x, y, x*y];
// test function plot on original mesh
uuuBF = ubBF1;
vvvBF = ubBF2;
savevtk("orig_thBF",thBF,[uuuBF,vvvBF],dataname = "u", order = Order);
// ##################### direct interpolation (?) - does not seem to work
[ub1, ub2, pb] = [ubBF1, ubBF2, pbBF];
uuu = ub1;
vvv = ub2;
savevtk("orig_th",th,[uuu,vvv],dataname = "u", order = Order);
// ##################### first to the non-overlapping mesh, then collapsing onto the global mesh - the values get over-summed, and the interpolation is wrong as well
{
// to non-overlapping
fespace UvvpBFNo(thNoBF,Pk);
UvvpBFNo [ubBFNo1, ubBFNo2, pbBFNo] = [ubBF1, ubBF2, pbBF];
// to the non-distributed
fespace UvvpBFzero(thzeroBF,Pk);
UvvpBFzero def(uGBFzero), def(uRBFzero);
def(uRBFzero) = [ubBF1, ubBF2, pbBF];
mpiAllReduce(uRBFzero[], uGBFzero[], mpiCommWorld, mpiSUM);
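// (I suspect the over-summing happens here: DOFs in the overlap regions are
// added once per subdomain, since nothing scales them by the partition of
// unity DBF before the reduction)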
fespace Uvvpzero(thzero,Pk);
Uvvpzero def(uGzero);
if(mpirank==0){
def(uGzero) = def(uGBFzero);
}
broadcast(processor(0), uGzero[]);
[ub1, ub2, pb] = def(uGzero);
}
uuu = ub1;
vvv = ub2;
savevtk("orig_th_v2",th,[uuu,vvv],dataname = "u", order = Order);
I include PETSc at the beginning because I check the meshes using the command savevtk.
I ran the code on Ubuntu 18.04 LTS with FreeFem++ version 4.4.
Please help me figure out what is wrong with the code, or point me toward an example that could help. I looked over the documentation, but I did not manage to figure this out.
Thanks
András