times: compile 0.4s, execution 7313.08s, mpirank:1
 ######## We forget of deleting 1 Nb pointer, 0Bytes , mpirank 1, memory leak =474560
 CodeAlloc : nb ptr 10268, size :756536 mpirank: 1
times: compile 0.4s, execution 7313.09s, mpirank:7
 ######## We forget of deleting 1 Nb pointer, 0Bytes , mpirank 7, memory leak =469840
 CodeAlloc : nb ptr 10268, size :756536 mpirank: 7
times: compile 0.39s, execution 7312.54s, mpirank:4
 ######## We forget of deleting 1 Nb pointer, 0Bytes , mpirank 4, memory leak =477520
 CodeAlloc : nb ptr 10268, size :756536 mpirank: 4
times: compile 0.42s, execution 7313.1s, mpirank:5
 ######## We forget of deleting 1 Nb pointer, 0Bytes , mpirank 5, memory leak =475296
 CodeAlloc : nb ptr 10268, size :756536 mpirank: 5
times: compile 2.800000e-01s, execution 7.312630e+03s, mpirank:0
 ######## We forget of deleting 1 Nb pointer, 0Bytes , mpirank 0, memory leak =460112
 CodeAlloc : nb ptr 10265, size :756424 mpirank: 0
Ok: Normal End
times: compile 0.36s, execution 7313.1s, mpirank:3
 ######## We forget of deleting 1 Nb pointer, 0Bytes , mpirank 3, memory leak =480016
 CodeAlloc : nb ptr 10268, size :756536 mpirank: 3
times: compile 0.38s, execution 7312.64s, mpirank:2
 ######## We forget of deleting 1 Nb pointer, 0Bytes , mpirank 2, memory leak =479424
 CodeAlloc : nb ptr 10268, size :756536 mpirank: 2
times: compile 0.38s, execution 7312.66s, mpirank:6
 ######## We forget of deleting 1 Nb pointer, 0Bytes , mpirank 6, memory leak =476864
 CodeAlloc : nb ptr 10268, size :756536 mpirank: 6

************************************************************************************************************************
***            WIDEN YOUR WINDOW TO 120 CHARACTERS.  Use 'enscript -r -fCourier9' to print this document            ***
************************************************************************************************************************

---------------------------------------------- PETSc Performance Summary: ----------------------------------------------

/home/mdebastiani/FreeFem-sources/src/mpi/FreeFem++-mpi on a named compute-4-7 with 8 processors, by mdebastiani Tue Nov 30 00:16:09 2021
Using Petsc Release Version 3.16.1, Nov 01, 2021

                         Max       Max/Min     Avg       Total
Time (sec):           7.345e+03     1.000   7.345e+03
Objects:              5.200e+01     1.000   5.200e+01
Flop:                 6.619e+13     1.170   6.078e+13  4.863e+14
Flop/sec:             9.012e+09     1.170   8.275e+09  6.620e+10
MPI Messages:         2.155e+02     1.731   1.725e+02  1.380e+03
MPI Message Lengths:  5.592e+07     1.222   3.010e+05  4.154e+08
MPI Reductions:       8.100e+01     1.000

Flop counting convention: 1 flop = 1 real number operation of type (multiply/divide/add/subtract)
                          e.g., VecAXPY() for real vectors of length N --> 2N flop
                          and VecAXPY() for complex vectors of length N --> 8N flop

Summary of Stages:   ----- Time ------  ----- Flop ------  --- Messages ---  -- Message Lengths --  -- Reductions --
                        Avg     %Total     Avg     %Total    Count   %Total     Avg        %Total    Count   %Total
 0:      Main Stage: 7.3451e+03 100.0%  4.8628e+14 100.0%  1.380e+03 100.0%  3.010e+05     100.0%  6.300e+01  77.8%

------------------------------------------------------------------------------------------------------------------------
See the 'Profiling' chapter of the users' manual for details on interpreting output.
Phase summary info:
   Count: number of times phase was executed
   Time and Flop: Max - maximum over all processors
                  Ratio - ratio of maximum to minimum over all processors
   Mess: number of messages sent
   AvgLen: average message length (bytes)
   Reduct: number of global reductions
   Global: entire computation
   Stage: stages of a computation. Set stages with PetscLogStagePush() and PetscLogStagePop().
      %T - percent time in this phase         %F - percent flop in this phase
      %M - percent messages in this phase     %L - percent message lengths in this phase
      %R - percent reductions in this phase
   Total Mflop/s: 10e-6 * (sum of flop over all processors)/(max time over all processors)
------------------------------------------------------------------------------------------------------------------------
Event                Count      Time (sec)     Flop                              --- Global ---  --- Stage ----  Total
                   Max Ratio  Max     Ratio   Max  Ratio  Mess   AvgLen  Reduct  %T %F %M %L %R  %T %F %M %L %R Mflop/s
------------------------------------------------------------------------------------------------------------------------

--- Event Stage 0: Main Stage

BuildTwoSided          7 1.0 4.2107e+00  4.3 0.00e+00 0.0 1.7e+02 4.0e+00 7.0e+00  0  0 13  0  9   0  0 13  0 11        0
BuildTwoSidedF         3 1.0 4.2094e+00  4.3 0.00e+00 0.0 0.0e+00 0.0e+00 3.0e+00  0  0  0  0  4   0  0  0  0  5        0
MatMult                3 1.0 8.7536e-02  1.1 4.49e+07 1.0 1.9e+02 1.5e+04 1.0e+00  0  0 14  1  1   0  0 14  1  2     4035
MatSolve               6 1.0 1.3173e+01  1.0 6.62e+13 1.2 9.9e+02 4.0e+05 2.1e+01  0 100 72 95 26  0 100 72 95 33 36910610
MatLUFactorSym         1 1.0 5.7673e+01  1.0 0.00e+00 0.0 0.0e+00 0.0e+00 4.0e+00  1  0  0  0  5   1  0  0  0  6        0
MatLUFactorNum         3 1.0 7.0932e+03  1.0 5.98e+09 1.2 0.0e+00 0.0e+00 0.0e+00 97  0  0  0  0  97  0  0  0  0        6
MatAssemblyBegin       4 1.0 4.2094e+00  4.3 0.00e+00 0.0 0.0e+00 0.0e+00 3.0e+00  0  0  0  0  4   0  0  0  0  5        0
MatAssemblyEnd         4 1.0 2.7687e+00 84.1 0.00e+00 0.0 0.0e+00 0.0e+00 4.0e+00  0  0  0  0  5   0  0  0  0  6        0
MatZeroEntries         2 1.0 1.3750e-02  1.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0        0
VecMDot                3 1.0 3.7821e-03  2.4 2.81e+06 1.0 0.0e+00 0.0e+00 3.0e+00  0  0  0  0  4   0  0  0  0  5     5838
VecNorm                6 1.0 7.9971e-03  1.7 5.61e+06 1.0 0.0e+00 0.0e+00 6.0e+00  0  0  0  0  7   0  0  0  0 10     5522
VecScale               6 1.0 1.9835e-03  1.5 2.81e+06 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0    11133
VecCopy                3 1.0 3.8776e-03  1.4 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0        0
VecSet                 8 1.0 1.1161e-02  2.9 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0        0
VecAXPY                3 1.0 4.0760e-03  1.4 2.81e+06 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     5417
VecMAXPY               6 1.0 4.0975e-03  1.1 5.61e+06 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0    10778
VecScatterBegin       15 1.0 6.1542e-02  1.3 0.00e+00 0.0 7.3e+02 4.2e+05 1.0e+01  0  0 53 74 12   0  0 53 74 16        0
VecScatterEnd         15 1.0 6.9603e-02  2.4 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0        0
VecNormalize           6 1.0 9.6202e-03  1.6 8.42e+06 1.0 0.0e+00 0.0e+00 6.0e+00  0  0  0  0  7   0  0  0  0 10     6886
SFSetGraph             4 1.0 7.2432e-04  1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0        0
SFSetUp                5 1.0 2.1337e-02  1.1 0.00e+00 0.0 3.5e+02 1.3e+05 4.0e+00  0  0 25 10  5   0  0 25 10  6        0
SFPack                15 1.0 3.2780e-02  1.3 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0        0
SFUnpack              15 1.0 9.6803e-03  1.3 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0        0
KSPSetUp               3 1.0 6.4166e-03  2.3 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0        0
KSPSolve               3 1.0 1.3290e+01  1.0 6.62e+13 1.2 1.2e+03 3.4e+05 3.1e+01  0 100 86 95 38  0 100 86 95 49 36587155
KSPGMRESOrthog         3 1.0 5.2869e-03  1.7 5.61e+06 1.0 0.0e+00 0.0e+00 3.0e+00  0  0  0  0  4   0  0  0  0  5     8353
PCSetUp                3 1.0 7.1509e+03  1.0 5.98e+09 1.2 0.0e+00 0.0e+00 4.0e+00 97  0  0  0  5  97  0  0  0  6        6
PCApply                6 1.0 1.3173e+01  1.0 6.62e+13 1.2 9.9e+02 4.0e+05 2.1e+01  0 100 72 95 26  0 100 72 95 33 36910573
------------------------------------------------------------------------------------------------------------------------

Memory usage is given in bytes:

Object Type          Creations   Destructions     Memory  Descendants' Mem.
Reports information only for process 0.

--- Event Stage 0: Main Stage

              Matrix     6              6    169508808     0.
              Vector    21             21     72606184     0.
           Index Set    12             12     11885344     0.
   Star Forest Graph     7              7         8440     0.
       Krylov Solver     1              1        18856     0.
      Preconditioner     1              1         1024     0.
    Distributed Mesh     1              1         5056     0.
     Discrete System     1              1          904     0.
           Weak Form     1              1          624     0.
              Viewer     1              0            0     0.
========================================================================================================================
Average time to get PetscTime(): 2.37022e-08
Average time for MPI_Barrier(): 2.50479e-06
Average time for zero size MPI_Send(): 1.72039e-06
#PETSc Option Table entries:
-log_view
-nw Single_DTT_TF_linear_ramp_discharge_MPI_v2.edp
-pc_type lu
#End of PETSc Option Table entries
Compiled without FORTRAN kernels
Compiled with full precision matrices (default)
sizeof(short) 2 sizeof(int) 4 sizeof(long) 8 sizeof(void*) 8 sizeof(PetscScalar) 8 sizeof(PetscInt) 4
Configure options: --prefix=/home/mdebastiani/FreeFem-sources/ff-petsc/r MAKEFLAGS= --with-debugging=0 COPTFLAGS="-O3 -mtune=native" CXXOPTFLAGS="-O3 -mtune=native" FOPTFLAGS="-O3 -mtune=native" --with-cxx-dialect=C++11 --with-ssl=0 --with-x=0 --with-fortran-bindings=0 --with-cudac=0 --with-cc=/opt/ohpc/pub/mpi/openmpi3-gnu8/3.1.4/bin/mpicc --with-cxx=/opt/ohpc/pub/mpi/openmpi3-gnu8/3.1.4/bin/mpic++ --with-fc=/opt/ohpc/pub/mpi/openmpi3-gnu8/3.1.4/bin/mpif90 --with-scalar-type=real --download-f2cblaslapack --download-metis --download-ptscotch --download-hypre --download-parmetis --download-mmg --download-parmmg --download-superlu --download-suitesparse --download-tetgen --download-slepc --download-hpddm --download-cmake --download-scalapack --download-mumps --download-slepc-configure-arguments=--download-arpack=https://github.com/prj-/arpack-ng/archive/b64dccb.tar.gz PETSC_ARCH=fr
-----------------------------------------
Libraries compiled on 2021-11-25 19:36:22 on compute-4-7
Machine characteristics: Linux-3.10.0-957.el7.x86_64-x86_64-with-centos-7.6.1810-Core
Using PETSc directory: /home/mdebastiani/FreeFem-sources/ff-petsc/r
Using PETSc arch:
-----------------------------------------
Using C compiler: /opt/ohpc/pub/mpi/openmpi3-gnu8/3.1.4/bin/mpicc -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -O3 -mtune=native
Using Fortran compiler: /opt/ohpc/pub/mpi/openmpi3-gnu8/3.1.4/bin/mpif90 -fPIC -Wall -ffree-line-length-0 -Wno-unused-dummy-argument -O3 -mtune=native
-----------------------------------------
Using include paths: -I/home/mdebastiani/FreeFem-sources/ff-petsc/r/include
-----------------------------------------
Using C linker: /opt/ohpc/pub/mpi/openmpi3-gnu8/3.1.4/bin/mpicc
Using Fortran linker: /opt/ohpc/pub/mpi/openmpi3-gnu8/3.1.4/bin/mpif90
Using libraries: -Wl,-rpath,/home/mdebastiani/FreeFem-sources/ff-petsc/r/lib -L/home/mdebastiani/FreeFem-sources/ff-petsc/r/lib -lpetsc -Wl,-rpath,/home/mdebastiani/FreeFem-sources/ff-petsc/r/lib -L/home/mdebastiani/FreeFem-sources/ff-petsc/r/lib -Wl,-rpath,/opt/ohpc/pub/mpi/openmpi3-gnu8/3.1.4/lib -L/opt/ohpc/pub/mpi/openmpi3-gnu8/3.1.4/lib -Wl,-rpath,/opt/ohpc/pub/compiler/gcc/8.3.0/lib/gcc/x86_64-pc-linux-gnu/8.3.0 -L/opt/ohpc/pub/compiler/gcc/8.3.0/lib/gcc/x86_64-pc-linux-gnu/8.3.0 -Wl,-rpath,/opt/ohpc/pub/compiler/gcc/8.3.0/lib64 -L/opt/ohpc/pub/compiler/gcc/8.3.0/lib64 -Wl,-rpath,/opt/ohpc/pub/compiler/gcc/8.3.0/lib -L/opt/ohpc/pub/compiler/gcc/8.3.0/lib -lHYPRE -lcmumps -ldmumps -lsmumps -lzmumps -lmumps_common -lpord -lscalapack -lspqr -lumfpack -lklu -lcholmod -lbtf -lccolamd -lcolamd -lcamd -lamd -lsuitesparseconfig -lsuperlu -lf2clapack -lf2cblas -lparmmg -lmmg -lmmg3d -lptesmumps -lptscotchparmetis -lptscotch -lptscotcherr -lesmumps -lscotch -lscotcherr -lparmetis -lmetis -ltet -lm -lstdc++ -ldl -lmpi_usempif08 -lmpi_usempi_ignore_tkr -lmpi_mpifh -lmpi -lgfortran -lm -lgfortran -lm -lgcc_s -lquadmath -lpthread -lrt -lquadmath -lstdc++ -ldl
-----------------------------------------
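Note on the stage summary: the "Phase summary info" block above says that stages are set with PetscLogStagePush() and PetscLogStagePop(); in this run everything is logged under the single default "Main Stage". For reference only, here is a minimal C sketch (not part of the run above; the stage names "Assembly" and "Solve" are hypothetical) of how custom stages can be registered so that -log_view prints a separate event table per stage:

    #include <petscsys.h>

    int main(int argc, char **argv)
    {
      PetscErrorCode ierr;
      PetscLogStage  assembly, solve;  /* hypothetical stage handles, not taken from the run above */

      ierr = PetscInitialize(&argc, &argv, NULL, NULL); if (ierr) return ierr;

      /* Register named stages once; they show up in the -log_view "Summary of Stages". */
      ierr = PetscLogStageRegister("Assembly", &assembly); CHKERRQ(ierr);
      ierr = PetscLogStageRegister("Solve", &solve); CHKERRQ(ierr);

      ierr = PetscLogStagePush(assembly); CHKERRQ(ierr);
      /* ... matrix/vector assembly would go here ... */
      ierr = PetscLogStagePop(); CHKERRQ(ierr);

      ierr = PetscLogStagePush(solve); CHKERRQ(ierr);
      /* ... KSPSolve()/PCApply() work would go here ... */
      ierr = PetscLogStagePop(); CHKERRQ(ierr);

      /* With -log_view, PetscFinalize() then reports each registered stage separately. */
      ierr = PetscFinalize();
      return ierr;
    }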