************************************************************************************************************************
***             WIDEN YOUR WINDOW TO 120 CHARACTERS.  Use 'enscript -r -fCourier9' to print this document            ***
************************************************************************************************************************

---------------------------------------------- PETSc Performance Summary: ----------------------------------------------

/home/aszabo/FFinstall/FreeFem-sources/src/mpi/FreeFem++-mpi on a arch-FreeFem named aszabo-VirtualBox with 3 processors, by aszabo Wed Feb 23 17:19:40 2022
Using Petsc Development GIT revision: v3.16.2-556-g9562881bde  GIT Date: 2022-01-03 17:09:36 +0000

                         Max       Max/Min     Avg       Total
Time (sec):           1.169e+03     1.000   1.169e+03
Objects:              5.960e+02     1.002   5.953e+02
Flop:                 2.435e+11     1.056   2.357e+11  7.072e+11
Flop/sec:             2.083e+08     1.056   2.017e+08  6.050e+08
MPI Messages:         4.682e+04     1.000   4.682e+04  1.405e+05
MPI Message Lengths:  1.117e+09     1.968   1.817e+04  2.552e+09
MPI Reductions:       2.774e+04     1.000

Flop counting convention: 1 flop = 1 real number operation of type (multiply/divide/add/subtract)
                            e.g., VecAXPY() for real vectors of length N --> 2N flop
                            and VecAXPY() for complex vectors of length N --> 8N flop

Summary of Stages:   ----- Time ------  ----- Flop ------  --- Messages ---  -- Message Lengths --  -- Reductions --
                        Avg     %Total     Avg     %Total    Count   %Total     Avg        %Total    Count   %Total
 0:      Main Stage: 1.1690e+03 100.0%  7.0725e+11 100.0%  1.405e+05 100.0%  1.817e+04     100.0%  2.773e+04  99.9%

------------------------------------------------------------------------------------------------------------------------
See the 'Profiling' chapter of the users' manual for details on interpreting output.
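The Avg and Total columns above are tied together by simple identities: the total flop rate is the summed flop divided by the maximum wall time, and the average message length is the total bytes over the total message count. A minimal sketch (plain Python, values transcribed from the summary table above) checks that the reported totals are self-consistent:

```python
# Values transcribed from the summary table above (Max / Avg / Total columns).
time_max        = 1.169e+03  # Time (sec), Max over the 3 ranks
flop_total      = 7.072e+11  # Flop, Total over all ranks
flop_rate_total = 6.050e+08  # Flop/sec, Total
msg_total       = 1.405e+05  # MPI Messages, Total
msg_len_total   = 2.552e+09  # MPI Message Lengths, Total (bytes)
msg_len_avg     = 1.817e+04  # MPI Message Lengths, Avg (bytes)

def close(a, b, tol=0.01):
    """Relative agreement to within ~1% (the table prints 4 significant digits)."""
    return abs(a - b) / abs(b) < tol

# Total flop rate = summed flop / max wall time.
assert close(flop_total / time_max, flop_rate_total)
# Average message length = total bytes / total message count.
assert close(msg_len_total / msg_total, msg_len_avg)
print("summary totals are self-consistent")
```

The same cross-check works on any `-log_view` summary and is a quick way to catch transcription errors when copying numbers out of a log.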
Phase summary info:
   Count: number of times phase was executed
   Time and Flop: Max - maximum over all processors
                  Ratio - ratio of maximum to minimum over all processors
   Mess: number of messages sent
   AvgLen: average message length (bytes)
   Reduct: number of global reductions
   Global: entire computation
   Stage: stages of a computation. Set stages with PetscLogStagePush() and PetscLogStagePop().
      %T - percent time in this phase         %F - percent flop in this phase
      %M - percent messages in this phase     %L - percent message lengths in this phase
      %R - percent reductions in this phase
   Total Mflop/s: 10e-6 * (sum of flop over all processors)/(max time over all processors)
------------------------------------------------------------------------------------------------------------------------
Event                Count      Time (sec)     Flop                              --- Global ---  --- Stage ----  Total
                   Max Ratio  Max     Ratio   Max  Ratio  Mess   AvgLen  Reduct  %T %F %M %L %R  %T %F %M %L %R Mflop/s
------------------------------------------------------------------------------------------------------------------------

--- Event Stage 0: Main Stage

BuildTwoSided         78 1.0 1.2652e+01120.3 0.00e+00 0.0 6.6e+01 4.0e+00 7.2e+01 1 0 0 0 0 1 0 0 0 0 0
BuildTwoSidedF        57 1.0 1.2638e+01121.5 0.00e+00 0.0 0.0e+00 0.0e+00 5.7e+01 1 0 0 0 0 1 0 0 0 0 0
MatMult            12872 1.0 1.2569e+02 1.1 1.11e+11 1.1 7.7e+04 1.8e+04 7.0e+00 11 46 55 56 0 11 46 55 56 0 2577
MatSolve           10440 1.0 6.8209e+01 1.1 7.17e+10 1.1 0.0e+00 0.0e+00 0.0e+00 5 29 0 0 0 5 29 0 0 0 3009
MatLUFactorNum        24 1.0 5.0093e+00 1.1 2.29e+09 1.0 0.0e+00 0.0e+00 0.0e+00 0 1 0 0 0 0 1 0 0 0 1348
MatILUFactorSym        3 1.0 5.5856e-02 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
MatAssemblyBegin     209 1.0 1.2639e+01120.5 0.00e+00 0.0 0.0e+00 0.0e+00 5.7e+01 1 0 0 0 0 1 0 0 0 0 0
MatAssemblyEnd       209 1.0 5.7296e+0031.3 0.00e+00 0.0 0.0e+00 0.0e+00 3.6e+01 0 0 0 0 0 0 0 0 0 0 0
MatGetRowIJ            3 1.0 8.7200e-07 1.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
MatCreateSubMats      24 1.0 1.4826e+00 1.0 0.00e+00 0.0 2.2e+02 3.5e+05 3.0e+00 0 0 0 3 0 0 0 0 3 0 0
MatCreateSubMat       56 1.0 7.4251e+00 1.0 0.00e+00 0.0 9.6e+01 4.6e+04 1.3e+02 1 0 0 0 0 1 0 0 0 0 0
MatGetOrdering         3 1.0 9.2361e-04 1.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
MatIncreaseOvrlp       3 1.0 5.7279e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 3.0e+00 0 0 0 0 0 0 0 0 0 0 0
MatZeroEntries         7 1.0 2.1803e-01 1.3 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
KSPSetUp              72 1.0 1.9404e-02 1.6 0.00e+00 0.0 0.0e+00 0.0e+00 4.0e+00 0 0 0 0 0 0 0 0 0 0 0
KSPSolve               8 1.0 2.4582e+02 1.0 2.43e+11 1.1 1.4e+05 1.8e+04 2.7e+04 21100100 98 99 21100100 98 99 2870
KSPGMRESOrthog      9630 1.0 3.0414e+01 1.0 5.08e+10 1.0 0.0e+00 0.0e+00 9.6e+03 3 21 0 0 35 3 21 0 0 35 4894
PCSetUp               57 1.0 1.4124e+01 1.0 2.29e+09 1.0 3.5e+02 2.5e+05 2.9e+02 1 1 0 3 1 1 1 0 3 1 478
PCSetUpOnBlocks     1212 1.0 5.0742e+00 1.1 2.29e+09 1.0 0.0e+00 0.0e+00 0.0e+00 0 1 0 0 0 0 1 0 0 0 1330
PCApply              404 1.0 2.0466e+02 1.0 1.93e+11 1.1 1.4e+05 1.7e+04 2.7e+04 18 79 98 94 96 18 79 98 94 96 2731
PCApplyOnBlocks    10440 1.0 6.9938e+01 1.1 7.17e+10 1.1 0.0e+00 0.0e+00 0.0e+00 6 29 0 0 0 6 29 0 0 0 2935
KSPSolve_FS_0        404 1.0 9.6449e+01 1.0 1.00e+11 1.1 6.5e+04 1.7e+04 1.1e+04 8 41 46 42 39 8 41 46 42 39 3010
KSPSolve_FS_1        404 1.0 4.0734e+01 1.0 3.87e+10 1.1 2.9e+04 1.7e+04 4.8e+03 3 16 21 19 17 3 16 21 19 17 2750
KSPSolve_FS_2        404 1.0 3.4766e+01 1.0 3.23e+10 1.1 2.5e+04 1.7e+04 4.1e+03 3 13 17 16 15 3 13 17 16 15 2692
KSPSolve_FS_3        404 1.0 1.1237e+00 1.0 1.34e+09 1.1 1.2e+04 2.9e+03 6.9e+03 0 1 9 1 25 0 1 9 1 25 3451
VecDot                 8 1.0 7.9846e-03 1.8 6.07e+06 1.0 0.0e+00 0.0e+00 8.0e+00 0 0 0 0 0 0 0 0 0 0 2228
VecMDot             9630 1.0 1.5982e+01 1.1 2.54e+10 1.0 0.0e+00 0.0e+00 9.6e+03 1 11 0 0 35 1 11 0 0 35 4657
VecTDot             4444 1.0 2.1574e-01 1.6 1.44e+08 1.1 0.0e+00 0.0e+00 4.4e+03 0 0 0 0 16 0 0 0 0 16 1944
VecNorm            13285 1.0 1.6632e+01 3.0 2.93e+09 1.0 0.0e+00 0.0e+00 1.3e+04 1 1 0 0 48 1 1 0 0 48 516
VecScale           12072 1.0 1.0122e+00 1.1 1.52e+09 1.0 0.0e+00 0.0e+00 0.0e+00 0 1 0 0 0 0 1 0 0 0 4416
VecCopy             4474 1.0 4.5559e-01 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
VecSet             34612 1.0 5.3038e+00 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
VecAXPY             5264 1.0 4.1918e-01 1.4 4.32e+08 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 3011
VecAYPX             1616 1.0 2.0053e-02 1.1 5.25e+07 1.1 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 7605
VecWAXPY               8 1.0 1.0456e-02 1.2 3.04e+06 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 851
VecMAXPY           10852 1.0 1.6158e+01 1.0 2.79e+10 1.0 0.0e+00 0.0e+00 0.0e+00 1 12 0 0 0 1 12 0 0 0 5067
VecPointwiseMult    2424 1.0 4.9232e-02 1.1 3.94e+07 1.1 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 2323
VecScatterBegin    57878 1.0 9.5285e+00 1.0 0.00e+00 0.0 1.4e+05 1.7e+04 1.5e+01 1 0100 96 0 1 0100 96 0 0
VecScatterEnd      57878 1.0 1.6129e+01 2.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 1 0 0 0 0 1 0 0 0 0 0
VecReduceArith        16 1.0 7.4496e-03 1.1 1.21e+07 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 4777
VecReduceComm          8 1.0 1.9542e-02 1.3 0.00e+00 0.0 0.0e+00 0.0e+00 8.0e+00 0 0 0 0 0 0 0 0 0 0 0
VecNormalize       10440 1.0 3.8492e+00 1.1 3.79e+09 1.0 0.0e+00 0.0e+00 1.0e+04 0 2 0 0 38 0 2 0 0 38 2888
SFSetGraph            22 1.0 1.4876e-03 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
SFSetUp               21 1.0 3.8760e-02 1.2 0.00e+00 0.0 1.3e+02 6.9e+03 1.5e+01 0 0 0 0 0 0 0 0 0 0 0
SFReduceBegin      10440 1.0 1.8989e+00 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
SFReduceEnd        10440 1.0 8.2362e-03 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
SFPack             57878 1.0 6.8432e-01 1.4 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
SFUnpack           57878 1.0 3.5253e-02 3.4 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
SNESSolve              1 1.0 1.1553e+03 1.0 2.43e+11 1.1 1.4e+05 1.8e+04 2.8e+04 99100100100100 99100100100100 612
SNESSetUp              1 1.0 3.0625e-05 1.5 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
SNESFunctionEval       9 1.0 3.4247e+02 1.0 0.00e+00 0.0 5.4e+01 2.0e+05 0.0e+00 29 0 0 0 0 29 0 0 0 0 0
SNESJacobianEval       8 1.0 5.6385e+02 1.0 0.00e+00 0.0 1.8e+02 1.3e+05 2.9e+02 48 0 0 1 1 48 0 0 1 1 0
SNESLineSearch         8 1.0 3.0588e+02 1.0 6.07e+08 1.1 9.6e+01 1.2e+05 3.2e+01 26 0 0 0 0 26 0 0 0 0 6
------------------------------------------------------------------------------------------------------------------------

Memory usage is given in bytes:

Object Type          Creations   Destructions     Memory  Descendants' Mem.
Reports information only for process 0.

--- Event Stage 0: Main Stage

              Matrix    44             44   1299577960     0.
       Krylov Solver     9              9       808728     0.
     DMKSP interface     1              1          664     0.
      Preconditioner     9              9         9256     0.
           Index Set   107            107     20652304     0.
   IS L to G Mapping     3              3      5695272     0.
              Vector   371            371    742459024     0.
   Star Forest Graph    32             32        37728     0.
              Viewer     2              1          848     0.
                SNES     1              1         1548     0.
              DMSNES     1              1          696     0.
      SNESLineSearch     1              1         1008     0.
    Distributed Mesh     5              5        25240     0.
     Discrete System     5              5         4520     0.
           Weak Form     5              5         3120     0.
========================================================================================================================
Average time to get PetscTime(): 4.06999e-08
Average time for MPI_Barrier(): 1.66828e-05
Average time for zero size MPI_Send(): 3.86033e-06
#PETSc Option Table entries:
-Dfirstmesh=1
-fieldsplit_p_ksp_max_it 5
-fieldsplit_p_ksp_type cg
-fieldsplit_p_pc_type jacobi
-fieldsplit_vX_ksp_converged_reason
-fieldsplit_vX_ksp_gmres_restart 50
-fieldsplit_vX_ksp_pc_side right
-fieldsplit_vX_ksp_rtol 0.1
-fieldsplit_vX_ksp_type gmres
-fieldsplit_vX_pc_asm_overlap 1
-fieldsplit_vX_pc_type asm
-fieldsplit_vX_sub_pc_type ilu
-fieldsplit_vY_ksp_converged_reason
-fieldsplit_vY_ksp_gmres_restart 50
-fieldsplit_vY_ksp_pc_side right
-fieldsplit_vY_ksp_rtol 0.1
-fieldsplit_vY_ksp_type gmres
-fieldsplit_vY_pc_asm_overlap 1
-fieldsplit_vY_pc_type asm
-fieldsplit_vY_sub_pc_type ilu
-fieldsplit_vZ_ksp_converged_reason
-fieldsplit_vZ_ksp_gmres_restart 50
-fieldsplit_vZ_ksp_pc_side right
-fieldsplit_vZ_ksp_rtol 0.1
-fieldsplit_vZ_ksp_type gmres
-fieldsplit_vZ_pc_asm_overlap 1
-fieldsplit_vZ_pc_type asm
-fieldsplit_vZ_sub_pc_type ilu
-inintMesh MVG0mult1p30.mesh
-ksp_gmres_restart 200
-ksp_monitor
-ksp_rtol 0.1
-ksp_type fgmres
-log_view
-nw ../include/Nonlinear-solver.edp
-ocase MVG0mult1p30
-outFinalMesh 1
-pc_fieldsplit_type multiplicative
-pc_type fieldsplit
-Re 350
-snes_atol 1e-06
-snes_linesearch_order 1
-snes_monitor
-v 0
-wdir MVG0
#End of PETSc Option Table entries
Compiled without FORTRAN kernels
Compiled with full precision matrices (default)
sizeof(short) 2 sizeof(int) 4 sizeof(long) 8 sizeof(void*) 8 sizeof(PetscScalar) 8 sizeof(PetscInt) 4
Configure options: --download-mumps --download-parmetis --download-metis --download-hypre --download-superlu --download-slepc --download-hpddm --download-ptscotch --download-suitesparse --download-scalapack --download-tetgen --with-fortran-bindings=no --with-scalar-type=real --with-debugging=no
-----------------------------------------
Libraries compiled on 2022-01-03 20:34:29 on aszabo-VirtualBox
Machine characteristics: Linux-5.4.0-91-generic-x86_64-with-Ubuntu-18.04-bionic
Using PETSc directory: /home/aszabo/FFinstall/petsc
Using PETSc arch: arch-FreeFem
-----------------------------------------
Using C compiler: mpicc -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O
Using Fortran compiler: mpif90 -fPIC -Wall -ffree-line-length-0 -Wno-unused-dummy-argument -g -O
-----------------------------------------
Using include paths: -I/home/aszabo/FFinstall/petsc/include -I/home/aszabo/FFinstall/petsc/arch-FreeFem/include
-----------------------------------------
Using C linker: mpicc
Using Fortran linker: mpif90
Using libraries: -Wl,-rpath,/home/aszabo/FFinstall/petsc/arch-FreeFem/lib -L/home/aszabo/FFinstall/petsc/arch-FreeFem/lib -lpetsc -Wl,-rpath,/home/aszabo/FFinstall/petsc/arch-FreeFem/lib -L/home/aszabo/FFinstall/petsc/arch-FreeFem/lib -Wl,-rpath,/usr/lib/x86_64-linux-gnu/openmpi/lib -L/usr/lib/x86_64-linux-gnu/openmpi/lib -Wl,-rpath,/usr/lib/gcc/x86_64-linux-gnu/7 -L/usr/lib/gcc/x86_64-linux-gnu/7 -lHYPRE -lspqr -lumfpack -lklu -lcholmod -lbtf -lccolamd -lcolamd -lcamd -lamd -lsuitesparseconfig -lcmumps -ldmumps -lsmumps -lzmumps -lmumps_common -lpord -lscalapack -lsuperlu -llapack -lblas -lptesmumps -lptscotchparmetis -lptscotch -lptscotcherr -lesmumps -lscotch -lscotcherr -lparmetis -lmetis -ltet -lm -lX11 -lstdc++ -ldl -lmpi_usempif08 -lmpi_usempi_ignore_tkr -lmpi_mpifh -lmpi -lgfortran -lm -lgfortran -lm -lgcc_s -lquadmath -lpthread -lrt -lquadmath -lstdc++ -ldl
-----------------------------------------
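Each event row in the table above follows the fixed column order given in its header: event name, Count (Max, Ratio), Time (Max, Ratio), Flop (Max, Ratio), Mess, AvgLen, Reduct, five Global percentages, five Stage percentages, and Total Mflop/s. A minimal parsing sketch, assuming plain whitespace-separated columns (rows where a ratio or percentage overflows its column width and fuses with its neighbor, as in the BuildTwoSided or KSPSolve rows above, would need extra fixed-width splitting):

```python
def parse_event_row(row):
    """Split one -log_view event row into a few named fields.

    Assumes whitespace-separated columns; fused wide columns
    (e.g. '1.2652e+01120.3') are NOT handled by this sketch.
    """
    tok = row.split()
    return {
        "event":    tok[0],            # event name
        "count":    int(tok[1]),       # Count, Max over ranks
        "time_max": float(tok[3]),     # Time (sec), Max
        "flop_max": float(tok[5]),     # Flop, Max
        "pct_time": int(tok[10]),      # %T, Global
        "mflops":   int(tok[-1]),      # Total Mflop/s
    }

# The MatMult row from the event table above:
row = ("MatMult 12872 1.0 1.2569e+02 1.1 1.11e+11 1.1 "
       "7.7e+04 1.8e+04 7.0e+00 11 46 55 56 0 11 46 55 56 0 2577")
r = parse_event_row(row)
print(r["event"], r["time_max"], "s,", r["pct_time"], "% of total time")
```

Applied across the table, this kind of extraction makes it easy to sort events by time or flop rate; here it immediately surfaces that SNESJacobianEval (48% of time) and SNESFunctionEval (29%) dominate the run, not the linear solves.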