Dear FreeFem users,
I’m interested in performing some matrix-vector and matrix-matrix products in parallel. The matrices, in general, come from varf and interpolate, and the vectors are solutions of FE problems. Right now it seems that all the products are performed in serial, but since I’m dealing with 3D problems, at some point some products get stuck due to the large dimensions of the matrices and vectors.
Is there any way to parallelize the computation of matrix(sparse)-vector or matrix(sparse)-matrix(sparse) products?
Thank you in advance,
Yes, you can use PETSc, which does all this efficiently. See, for example, this line of FreeFem-sources/diffusion-2d-PETSc.edp in the FreeFem/FreeFem-sources repository on GitHub, which computes a distributed (sparse) matrix-vector product.
And what about the matrix-matrix product? That is where I’m having more problems: I’m trying to multiply two FE sparse matrices and, when the number of dof is too large, it does not work.
You can, for example, call MatPtAP. It depends on what kind of distributed matrices you want to multiply (same fespace or not, same mesh or not, etc.).
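For reference, PETSc’s MatPtAP computes the triple product PᵀAP, which is the usual Galerkin projection of a matrix A through an interpolation-type matrix P. A minimal sketch of that algebra in plain SciPy (not FreeFem or PETSc code; the matrices here are random placeholders):

```python
# Illustration (SciPy, not FreeFem/PETSc): MatPtAP returns P^T * A * P,
# i.e. the projection of A through the columns of P (e.g. an
# interpolation matrix between two finite-element spaces).
import numpy as np
import scipy.sparse as sp

rng = np.random.default_rng(0)
A = sp.random(8, 8, density=0.3, random_state=rng, format="csr")  # stands in for a varf matrix
P = sp.random(8, 5, density=0.3, random_state=rng, format="csr")  # stands in for an interpolation matrix

PtAP = P.T @ A @ P  # what MatPtAP computes; here an (5 x 5) sparse matrix
print(PtAP.shape)
```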
I’m interested in computing a matrix-matrix product, M1*M2, where M1 comes from a varf with the same fespaces, and M2 comes from an interpolate(Vh1, Vh2).
Then I would use this M = M1*M2 to compute M*u, with u a FE solution.
Is this MatPtAP useful for this case?
Why do you need to compute M explicitly? Why not do two successive matrix-vector products? Also, you did not answer my previous question: are the fespaces defined on the same mesh?
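The suggestion above rests on associativity: M*u = (M1*M2)*u = M1*(M2*u), so two cheap sparse matrix-vector products give the same result as forming the (potentially much denser) product M explicitly. A small SciPy sketch of the idea (not FreeFem code; sizes and matrices are illustrative):

```python
# Illustration (SciPy, not FreeFem): avoid the explicit sparse-sparse
# product M = M1*M2, which is what blows up for large numbers of dof,
# and instead apply the factors one after the other:
#     M*u = (M1*M2)*u = M1*(M2*u)
import numpy as np
import scipy.sparse as sp

rng = np.random.default_rng(1)
M1 = sp.random(6, 6, density=0.4, random_state=rng, format="csr")  # stands in for the varf matrix
M2 = sp.random(6, 4, density=0.4, random_state=rng, format="csr")  # stands in for the interpolation matrix
u = rng.standard_normal(4)                                         # stands in for a FE solution vector

v_two_matvecs = M1 @ (M2 @ u)   # two successive matrix-vector products
v_explicit = (M1 @ M2) @ u      # explicit matrix-matrix product, then a matvec

print(np.allclose(v_two_matvecs, v_explicit))
```

The two-matvec form never stores M, so its memory footprint stays proportional to the nonzeros of M1 and M2 alone.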
All the fespaces are defined on the same mesh. I will try to see whether I can do it without matrix-matrix products.