PC failed due to FACTOR_OUTMEMORY

Hello, I encountered the following problem when running FreeFem++-mpi (with load "PETSc-complex"):

set(A, sparams = "-pc_type lu -pc_factor_mat_solver_type mumps -ksp_converged_reason");

Linear solve did not converge due to DIVERGED_PC_FAILED iterations 0
PC failed due to FACTOR_OUTMEMORY
Linear solve did not converge due to DIVERGED_PC_FAILED iterations 0
PC failed due to FACTOR_OUTMEMORY
[0]PETSC ERROR: ------------------------------------------------------------------------
[0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range
[0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger
[0]PETSC ERROR: or see https://petsc.org/release/faq/
[0]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to find memory corruption errors
[0]PETSC ERROR: configure using --with-debugging=yes, recompile, link, and run
[0]PETSC ERROR: to get more information on the crash.
[0]PETSC ERROR: --------------------- Error Message --------------------------------------------------------------
[0]PETSC ERROR: Signal received
[0]PETSC ERROR: See https://petsc.org/release/faq/ for trouble shooting.
[0]PETSC ERROR: Petsc Release Version 3.16.1, Nov 01, 2021
[0]PETSC ERROR: FreeFem++-mpi on a named LAPTOP-4T98F0QS by weiqi Fri Mar 11 10:04:30 2022
[0]PETSC ERROR: Configure options --prefix=/mingw64/ff-petsc/c MAKEFLAGS= --with-debugging=0 COPTFLAGS=“-O3 -mtune=generic” CXXOPTFLAGS=“-O3 -mtune=generic” FOPTFLAGS=“-O3 -mtune=generic” --with-cxx-dialect=C++11 --with-ssl=0 --with-x=0 --with-fortran-bindings=0 --with-cudac=0 --with-shared-libraries=0 --with-cc=gcc --with-cxx=g++ CXXFLAGS=-fno-stack-protector CFLAGS=-fno-stack-protector FFLAGS=“-g -O2” --with-mpi-lib=/c/Windows/System32/msmpi.dll --with-mpi-include=/c/builds/workspace/deployEXE/3rdparty/include/msmpi --with-mpiexec=“/C/Program\ Files/Microsoft\ MPI/Bin/mpiexec” --with-fc=gfortran --with-scalar-type=complex --with-blaslapack-include= --with-blaslapack-lib=/mingw64/bin/libopenblas.dll --with-metis-dir=/mingw64/ff-petsc/r --with-ptscotch-dir=/mingw64/ff-petsc/r --with-mmg-dir=/mingw64/ff-petsc/r --with-parmmg-dir=/mingw64/ff-petsc/r --with-superlu-dir=/mingw64/ff-petsc/r --with-suitesparse-dir=/mingw64/ff-petsc/r --with-parmetis-dir=/mingw64/ff-petsc/r --with-tetgen-dir=/mingw64/ff-petsc/r --download-slepc --download-hpddm --download-htool --with-scalapack-dir=/mingw64/ff-petsc/r --with-mumps-dir=/mingw64/ff-petsc/r PETSC_ARCH=fc
[0]PETSC ERROR: #1 User provided function() at unknown file:0
[0]PETSC ERROR: Run with -malloc_debug to check if memory corruption is causing the crash.

job aborted:
[ranks] message

[0] application aborted
aborting MPI_COMM_WORLD (comm=0x44000000), error 59, comm rank 0

Get more memory in your machine or use a less expensive preconditioner.

Hello, does that mean that -pc_type should be switched to a more efficient (less memory-hungry) option? For example,
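something along these lines, swapping the exact LU factorization for a Krylov method with a cheaper domain-decomposition preconditioner (this is only a guess on my part; I am not sure these particular options are suitable for my complex-valued system):

set(A, sparams = "-ksp_type gmres -pc_type asm -sub_pc_type ilu -ksp_converged_reason");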

Yes, that is what it means.

Hello, I have tried several iterative methods, but the following problems still occur, even though I have enough memory. Is it possible that a bug in my program code is causing the following errors?

set(A, sparams = "-pc_type lu -ksp_type bcgs -ksp_converged_reason");

Linear solve converged due to CONVERGED_RTOL iterations 1
--- system solved with PETSc (in 1.196600e-03)
-- Square mesh : nb vertices =289 , nb triangles = 512 , nb boundary edges 64
--- partition of unity built (in 3.340000e-05)
--- global numbering created (in 7.240000e-05)
--- global CSR created (in 3.937000e-04)
[0]PETSC ERROR: ------------------------------------------------------------------------
[0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range
[0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger
[0]PETSC ERROR: or see https://petsc.org/release/faq/
[0]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to find memory corruption errors
[0]PETSC ERROR: configure using --with-debugging=yes, recompile, link, and run
[0]PETSC ERROR: to get more information on the crash.
[0]PETSC ERROR: --------------------- Error Message --------------------------------------------------------------
[0]PETSC ERROR: Signal received
[0]PETSC ERROR: See https://petsc.org/release/faq/ for trouble shooting.
[0]PETSC ERROR: Petsc Release Version 3.16.1, Nov 01, 2021
[0]PETSC ERROR: FreeFem++-mpi on a named LAPTOP-4T98F0QS by weiqi Sun Mar 13 22:04:19 2022
[0]PETSC ERROR: Configure options --prefix=/mingw64/ff-petsc/c MAKEFLAGS= --with-debugging=0 COPTFLAGS=“-O3 -mtune=generic” CXXOPTFLAGS=“-O3 -mtune=generic” FOPTFLAGS=“-O3 -mtune=generic” --with-cxx-dialect=C++11 --with-ssl=0 --with-x=0 --with-fortran-bindings=0 --with-cudac=0 --with-shared-libraries=0 --with-cc=gcc --with-cxx=g++ CXXFLAGS=-fno-stack-protector CFLAGS=-fno-stack-protector FFLAGS=“-g -O2” --with-mpi-lib=/c/Windows/System32/msmpi.dll --with-mpi-include=/c/builds/workspace/deployEXE/3rdparty/include/msmpi --with-mpiexec=“/C/Program\ Files/Microsoft\ MPI/Bin/mpiexec” --with-fc=gfortran --with-scalar-type=complex --with-blaslapack-include= --with-blaslapack-lib=/mingw64/bin/libopenblas.dll --with-metis-dir=/mingw64/ff-petsc/r --with-ptscotch-dir=/mingw64/ff-petsc/r --with-mmg-dir=/mingw64/ff-petsc/r --with-parmmg-dir=/mingw64/ff-petsc/r --with-superlu-dir=/mingw64/ff-petsc/r --with-suitesparse-dir=/mingw64/ff-petsc/r --with-parmetis-dir=/mingw64/ff-petsc/r --with-tetgen-dir=/mingw64/ff-petsc/r --download-slepc --download-hpddm --download-htool --with-scalapack-dir=/mingw64/ff-petsc/r --with-mumps-dir=/mingw64/ff-petsc/r PETSC_ARCH=fc
[0]PETSC ERROR: #1 User provided function() at unknown file:0
[0]PETSC ERROR: Run with -malloc_debug to check if memory corruption is causing the crash.

job aborted:
[ranks] message

[0] application aborted
aborting MPI_COMM_WORLD (comm=0x44000000), error 59, comm rank 0

There is a bug in your program; nothing in your last log is related to PC failed due to FACTOR_OUTMEMORY...

Hello, how can I find out which bug caused the program to crash? I have checked many times and the above errors still appear. Alternatively, could you provide an example of using PETSc to compute the error and convergence order? Thank you very much.

Which error and which convergence order?

Hello, I wrote a program with FreeFEM++ to compute the spatial error and convergence order. The computed spatial convergence order and error are consistent with the theoretical results. However, once the spatial mesh is refined, the computation becomes very time-consuming, and the following errors occur during the calculation:

Error umpfack umfpack_zl_numeric status -1
Error umfpack_di_solve status -3

so I rewrote it with the parallel version, but it keeps running out of memory and I do not know what is causing it. Could you please provide an example of computing the spatial error and convergence order?
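For reference, the sequential version is structured roughly as follows; this is a simplified sketch with a manufactured solution and placeholder names, not my actual problem:

// Solve -Laplace(u) = f on the unit square with a manufactured solution,
// compute the L2 error on successively refined meshes, and estimate the order.
int n = 16;                                // initial mesh resolution
real errprev = 0;
for (int k = 0; k < 3; k++) {
    mesh Th = square(n, n);                // uniform mesh of the unit square
    fespace Vh(Th, P1);
    func uex = sin(pi*x)*sin(pi*y);        // manufactured exact solution
    func f = 2*pi^2*sin(pi*x)*sin(pi*y);   // matching right-hand side
    Vh u, v;
    solve poisson(u, v) = int2d(Th)(dx(u)*dx(v) + dy(u)*dy(v))
                        - int2d(Th)(f*v)
                        + on(1, 2, 3, 4, u = 0);
    real errk = sqrt(int2d(Th)((u - uex)^2));  // L2 error on the current mesh
    cout << "h = " << 1./n << ", L2 error = " << errk << endl;
    if (k > 0)
        cout << "observed order = " << log(errprev/errk)/log(2.) << endl;
    errprev = errk;
    n = n*2;                               // halve the mesh size
}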

Hello, what is the cause of this?

-- Square mesh : nb vertices =4225 , nb triangles = 8192 , nb boundary edges 256
--- partition of unity built (in 4.049999e-05)
--- global numbering created (in 1.829900e-03)
--- global CSR created (in 3.854000e-04)
--- global numbering created (in 3.106000e-04)
--- global CSR created (in 7.018900e-03)
--- system solved with PETSc (in 1.927317e-01)
--- global numbering created (in 4.037000e-04)
--- global CSR created (in 6.861900e-03)
(nan,nan) != (nan,nan) diff (nan,nan) => Sorry error in Optimization (e) add: int2d(Th,optimize=0)(...)
remark if you add (..., optimize=2) then you remove this check (be careful);
current line = 63 mpirank 0 / 1
Exec error : In Optimized version
-- number :1
Exec error : In Optimized version
-- number :1

Something is wrong in your varf at line 63.
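For reference, the optimize=0 / optimize=2 parameters mentioned in that message only change how the integral is assembled and whether the consistency check runs; they will not fix the NaN itself, which comes from the varf. On a hypothetical bilinear form (not your actual line 63), the syntax would look like:

varf a(u, v) = int2d(Th, optimize=0)(dx(u)*dx(v) + dy(u)*dy(v))  // assemble this term without the optimization
             + on(1, u = 0);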