Use of restriction on a meshL and matrix-free operator

Hi,

I would like to define a square matrix-free operator whose dimension corresponds to the size of the restricted space associated with the left boundary of the 2D domain. So basically the operator behaves as:
another_vector_on_1D_space = matrix_on_1D_space * input_vector_on_1D_space
vector_on_2D_space = prolongation_matrix * another_vector_on_1D_space
another_vector_on_2D_space = matrix_on_2D_space * vector_on_2D_space
return_vector_on_1D_space = restrictionMatrix * another_vector_on_2D_space
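
In plain sequential FreeFEM (without the PETSc/SLEPc distribution of the attached script), what I have in mind is roughly the following; the mesh, the boundary label, the two varf's, and the use of interpolate() to build the restriction are only illustrative and may differ from what the script actually does:

  load "msh3"                              // for the meshL type
  mesh Th = square(20, 20);                // 2d domain
  int[int] labs = [4];                     // left boundary of square() has label 4
  meshL ThL = extract(Th, label=labs);     // 1d curve mesh of that boundary
  fespace Vh(Th, P1);                      // 2d space
  fespace VhL(ThL, P1);                    // restricted 1d space

  varf vA(u, v) = int2d(Th)(dx(u)*dx(v) + dy(u)*dy(v) + u*v);
  matrix A2d = vA(Vh, Vh);                 // matrix_on_2D_space
  varf vB(u, v) = int1d(ThL)(u*v);
  matrix B1d = vB(VhL, VhL);               // matrix_on_1D_space

  matrix Rmat = interpolate(VhL, Vh);      // restrictionMatrix, VhL.ndof x Vh.ndof
  matrix Pmat = Rmat';                     // prolongation_matrix, here just the transpose

  real[int] f(VhL.ndof), u(VhL.ndof), w(Vh.ndof), vv(Vh.ndof), r(VhL.ndof);
  f = 1.0;
  u = B1d * f;                             // another_vector_on_1D_space
  w = Pmat * u;                            // vector_on_2D_space
  vv = A2d * w;                            // another_vector_on_2D_space
  r = Rmat * vv;                           // return_vector_on_1D_space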

I started to implement this in the following script: meshL_2d_SLEPc.edp (1.5 KB)
Obviously there is something I do not understand; I guess my operator sizes are inconsistent.

Could someone please help me here?

Best,

Lucas

PS: It is not clear to me why A.n is different from Vh.ndof. Could someone please explain why?

I'll look into your script, but first:

PS: It is not clear to me why A.n is different from Vh.ndof. Could someone please explain why?

A.n is the local size of the PETSc Mat (without the overlapping unknowns), while Vh.ndof is the local size of the FreeFEM `fespace` (with the overlapping unknowns).
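
To see the difference, here is a minimal sketch following the usual createMat pattern (mesh size and discretization are made up) that prints both numbers on each rank; with more than one process, A.n excludes the ghost unknowns while Vh.ndof includes them:

  load "PETSc"
  macro dimension()2// EOM
  include "macro_ddm.idp"
  macro def(i)i// EOM
  macro init(i)i// EOM
  func Pk = P1;

  mesh Th = square(40, 40);    // global mesh, partitioned by createMat
  Mat A;
  createMat(Th, A, Pk)         // builds the local mesh and the local-to-global map
  fespace Vh(Th, Pk);          // local space, including the overlapping unknowns

  cout << "rank " << mpirank << ": A.n = " << A.n << ", Vh.ndof = " << Vh.ndof << endl;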

Thanks for this explanation.

Knowing this I may be able to correct something in the script, I will do that tomorrow.

In fact, your script is correct; only the vector sizes are wrong. You are using the FreeFEM notation (*) instead of MatMult, so you need to use the FreeFEM lengths, like this:

  real[int] u(VhL.ndof), v(VhL.ndof), w(Vh.ndof), vv(Vh.ndof);
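
With those lengths, the apply function of the shell operator then looks roughly as follows (B, A, R, and Pro are placeholders for the 1D operator, the 2D operator, the restriction, and the prolongation of your script):

  func real[int] op(real[int]& xin) {
    real[int] u(VhL.ndof), v(VhL.ndof), w(Vh.ndof), vv(Vh.ndof);
    u = B * xin;     // matrix_on_1D_space * input_vector_on_1D_space
    w = Pro * u;     // prolongation_matrix * another_vector_on_1D_space
    vv = A * w;      // matrix_on_2D_space * vector_on_2D_space
    v = R * vv;      // restrictionMatrix * another_vector_on_2D_space
    return v;
  }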

I now get a consistent result with an increasing number of processes.

$ mpirun -n 1 FreeFem++-mpi meshL_2d_SLEPc.edp -eps_view_values -v 0
Mat Object: 1 MPI processes
  type: shell
  rows=41, cols=41
Eigenvalues =
   0.02498
$ mpirun -n 4 FreeFem++-mpi meshL_2d_SLEPc.edp -eps_view_values -v 0                                                                           
Mat Object: 4 MPI processes
  type: shell
  rows=41, cols=41
Eigenvalues =
   0.02498

And one additional question on that point: what happens if the boundary labelled 1 is partitioned across multiple processes? Do I have to do something then, such as communicating the ghost values?

Best,

Lucas

It should be done for you, unless something else is broken.