Compiling with your own PETSc and SLEPc builds

I’m trying to systematize the compilation of FreeFEM on my machines, making it as clean as possible, and I’d like to build FreeFEM against my own PETSc and SLEPc builds (installed in custom prefixes), but I haven’t been able to succeed. Apparently you can pass --with-petsc="$PETSC_DIR"/lib/conf/petscvariables, but there’s no equivalent for SLEPc.

How are you supposed to tell configure where to find SLEPc?

Also, even with --with-petsc="$PETSC_DIR"/lib/conf/petscvariables, configure does find my PETSc build and prints several lines confirming it, yet it still ends with “with PETSC: no”.

I’d prefer to avoid FreeFEM building its own PETSc/SLEPc, because that bundled build looks like a spaghetti Makefile, prone to problems when reusing the tree on future systems.

Thanks for any suggestions you can give!


Yes, you are definitely right. Please have a look at this tutorial to figure out the exact variables that need to be set. There is no --with-slepc in FreeFEM ./configure, so I’d suggest (as done in the tutorial) configuring your PETSc with --download-slepc. If you see a bunch of “no” lines with respect to PETSc, it probably means that FreeFEM ./configure is not finding the same MPI implementation that you used to compile PETSc. Feel free to post your configure.log and we will help you figure this out.
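For reference, a PETSc configure along those lines might look like the sketch below. This is only an illustration under assumptions: the prefix and MPI paths are placeholders for your machine, and the extra --download-* packages are optional.

```shell
# Illustrative PETSc configure with SLEPc built alongside it.
# /opt/petsc-freefem and /opt/openmpi are placeholder paths.
cd "$PETSC_DIR"
./configure \
    --prefix=/opt/petsc-freefem \
    --with-mpi-dir=/opt/openmpi \
    --download-slepc \
    --download-scalapack
make
make install
```

The key point from the answer above is the MPI match: FreeFem’s ./configure must pick up the same MPI compilers (the same mpicc/mpicxx on your PATH) that PETSc was configured with, otherwise the PETSc lines come out as “no”.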

Thanks a lot, but if I add --download-slepc, it will do another build of SLEPc, won’t it? I’d like to avoid that. Isn’t it possible to tell FreeFEM to use an already existing SLEPc build?

It will build SLEPc inside PETSc, not inside FreeFEM. Do you want to have separate PETSc and SLEPc builds? Why not build SLEPc with PETSc? As I said before, there is no --with-slepc in FreeFEM ./configure. It is probably not too hard to add one, but I can’t give you a time estimate. In the meantime, if you don’t want to build SLEPc with PETSc (again, I’m curious why; if it’s failing, please post a PETSc issue and I’ll fix it there), you can, after ./configure, edit your plugin/seq/WHERE_LIBRARY-config and add the following lines:

slepc LD -L/something/lib -lslepc
slepc INCLUDE  -I/something/include
slepccomplex LD -L/something-complex/lib -lslepc
slepccomplex INCLUDE  -I/something-complex/include

Thanks a lot!! Yes, I have no problem building both with the same installation prefix. (I tend to use a different installation prefix for each third-party package I install, because that is much cleaner for my systems and development environment, but I can rebuild PETSc/SLEPc into the same prefix.) I’ll try again, and if it fails I’ll report back.

BTW, can I also build the real and the complex versions into the same prefix, or should I avoid that because of possible clashes between headers/libs?

So much cleaner? If you have hypre, MUMPS, SCOTCH, Metis, SuperLU, PETSc, SLEPc… builds in both debug and release mode, in both real and complex arithmetics, with both 32-bit and 64-bit integers, scattered across different folders, I wouldn’t call that much cleaner. But this is just my personal taste, I guess. You still haven’t told me why you don’t want to use --download-slepc in your PETSc build. Do you ./configure SLEPc with some custom parameters?
You can’t use the same prefix (why do you need a prefix at all? why not adjust ${PETSC_ARCH}?): most of these libraries are plain C, so the real and complex builds export the same symbols, and one build would overwrite the other.
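To illustrate the ${PETSC_ARCH} suggestion, here is a hedged sketch of keeping several configurations inside one PETSc source tree; the arch names are arbitrary labels I made up for the example.

```shell
# Sketch: two in-tree builds distinguished by PETSC_ARCH instead of
# separate install prefixes. Arch names are placeholders.
cd "$PETSC_DIR"
./configure PETSC_ARCH=arch-real    --with-scalar-type=real    --download-slepc
make PETSC_ARCH=arch-real all
./configure PETSC_ARCH=arch-complex --with-scalar-type=complex --download-slepc
make PETSC_ARCH=arch-complex all
# Each build lives under $PETSC_DIR/$PETSC_ARCH, so nothing is overwritten.
```

This is what avoids the clash described above: real and complex libraries end up in different arch directories rather than on top of each other in one prefix.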

Yes, as I said in my previous message, I have no problem building PETSc together with SLEPc. However, for the moment, and because of time constraints, I cannot perform the build with the approach I follow for every third-party software I use (read: a GoboLinux-like approach, with every package installed into its own isolated directory; the reasons for that choice would be off-topic, but whenever I adopt some software for continued use, I always install it that way, and yes, I have important reasons for that).

But right now, being able to run some FreeFEM simulations has a higher priority than installing FreeFEM my standard way, so I’m going to follow the appendix on the “optimal compilation process” from the tutorial above and build FreeFEM as a temporary installation. When I have time, I’ll redo it with my standard approach.

BTW, the tutorial above is in a format from which the text cannot be copied and pasted, so you have to type the commands yourself (no problem, though).

Kind regards, and thanks a lot.

You can access a copy/paste-friendly version here.


How can I make the Python configure build system avoid re-downloading source code that it has already downloaded? When configuring PETSc, and indirectly building ScaLAPACK from PETSc (because of the --download-scalapack flag), I’m getting an error on macOS because the ${objslamov} objects in Makefile.parallel from ScaLAPACK are missing the -isysroot flag pointing to the macOS SDK (while the rest of the objects in Makefile.parallel keep the flag). The fix is shockingly simple (just add the -isysroot flag to Makefile.parallel and ScaLAPACK builds successfully), but when I rerun the PETSc configure line from your tutorial, it cleans ScaLAPACK, downloads it again, and of course fails :frowning_face:

Is there any way to tell the Python configure not to re-download a package and just use the version already downloaded?

That’s a strange error; maybe it’s best if you send us the PETSc configure.log. In the meantime, you can patch the ScaLAPACK .tar.gz the way you want. Then, instead of --download-scalapack, you can just do --download-scalapack=/path/to/patched/scalapack.tar.gz.
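The patched-tarball workflow suggested above could look like this sketch; the version number and file names are placeholders for whatever ScaLAPACK tarball your PETSc downloads.

```shell
# Unpack, patch, repack, then point PETSc at the local tarball.
# scalapack-2.1.0 is a placeholder version.
tar xzf scalapack-2.1.0.tgz
# ...edit scalapack-2.1.0/Makefile.parallel here (e.g. add -isysroot)...
tar czf scalapack-patched.tar.gz scalapack-2.1.0
# Reconfigure PETSc with the local archive instead of a fresh download:
./configure <your-other-flags> --download-scalapack=$PWD/scalapack-patched.tar.gz
```

Because configure is handed a local archive, it no longer cleans and re-downloads the package, so the manual fix survives reconfiguration.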


Thanks a lot for the workaround!! Maybe it’s not a bug; perhaps I’m just not putting the -isysroot flag in the variables that ScaLAPACK expects (I define it in my CFLAGS, CXXFLAGS and FFLAGS). The reason for the failure is that the Makefile.parallel in the ScaLAPACK version used by PETSc (which, by the way, is not the latest ScaLAPACK: the latest doesn’t ship a Makefile.parallel at all) uses different variables for compiling 4 object files (only CCFLAGS and CDEFS), while for the rest of the files it uses CFLAGS, CCFLAGS and CDEFS. So those 4 object files (whose optimization flag is forced to -O0, by the way) fail, while the rest don’t. I guess that if I had added my -isysroot flag to either CCFLAGS or CDEFS, it would have worked.
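For anyone hitting the same thing, the edit can be scripted rather than done by hand. The sketch below fabricates a toy Makefile.parallel just to demonstrate the substitution (the real file comes from the ScaLAPACK tarball), and the SDK path is a placeholder; on macOS you would use SDK=$(xcrun --show-sdk-path).

```shell
# Toy Makefile.parallel standing in for the one shipped with ScaLAPACK.
printf 'CCFLAGS = -O0\nCDEFS = -DAdd_\n' > Makefile.parallel
# Placeholder SDK path; on macOS: SDK=$(xcrun --show-sdk-path)
SDK=/path/to/MacOSX.sdk
# Prepend -isysroot to CCFLAGS, the variable the failing objects use.
sed -i.bak "s|^CCFLAGS =|CCFLAGS = -isysroot $SDK|" Makefile.parallel
cat Makefile.parallel
```

The sed -i.bak form works with both BSD (macOS) and GNU sed, leaving the original file as Makefile.parallel.bak.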

Thanks a lot!

I’ve submitted an MR to use the latest ScaLAPACK in PETSc, and thus in FreeFEM. Thanks for reporting this.

Building FreeFEM is not for the faint of heart :slightly_smiling_face: …I’m certainly glad I followed your tutorial before attempting to build it as separate packages.

Anyway, it seems I still haven’t succeeded (or at least I found a suspicious situation): when I reached the part that says “if PETSc is not detected, overwrite MPIRUN”, PETSc was detected, but SLEPc wasn’t (and despite that, HPDDM is reported as enabled for the build, which I find suspicious, because I thought both PETSc and SLEPc had to be available for HPDDM support).

This is what I’m getting (BTW, it’s also a bit surprising that complex support is missing; I thought your tutorial provided it):

configure: WARNING: unrecognized options: --disable-iohdf5
configure:   freefem++ used download:  
configure:   --  Dynamic load facility: yes 
configure:   --  ARPACK (eigen value): no 
configure:   --  UMFPACK (sparse solver): yes 
configure:   --  BLAS: yes 
configure:   --  with MPI:     yes
configure:     --  with PETSC: yes / PETSC complex: no 
configure:     --  with SLEPC: no / SLEPC complex: no 
configure:     --  with hpddm: yes (need MPI & c++11: yes) 
configure:     --  with htool: no (need MPI & c++11: yes) 
configure:   --  without libs:  tetgen ipopt mmg3d mshmet gmm mumps_seq nlopt superlu yams pipe MMAP NewSolver pardiso htool 
configure:   --  without plugin:  tetgen.dylib ff-Ipopt.dylib mmg3d-v4.0.dylib mshmet.dylib aniso.dylib ilut.dylib MUMPS_seq.dylib MUMPS.dylib ff-NLopt.dylib SuperLu.dylib freeyams.dylib pipe.dylib ff-mmap-semaphore.dylib NewSolver.dylib PARDISO.dylib htool.dylib 
configure:     progs: FreeFem++-nw bamg cvmsh2  FreeFem++-mpi ffmedit ffglut 
configure:  petsc dirs do no exist  , do build do:
configure: cd 3rdparty/ff-petsc/;  make

The first install is never easy. Once you get the hang of it, it’s easy peasy.
It’s weird that there is no “SLEPC: yes” (unless you didn’t use the --download-slepc option in PETSc ./configure). My tutorial does not provide the instructions for PETSc complex, but it’s fairly easy. Just export a new PETSC_ARCH, export PETSC_ARCH=freefem-complex, and reconfigure PETSc with the added flag --with-scalar-type=complex. Then pass FreeFEM ./configure the added flag --with-petsc_complex=${PETSC_DIR}/${PETSC_ARCH}/lib/conf/petscvariables
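Putting those steps together, a sketch of the complex-arithmetic build might look like this; REAL_ARCH is a placeholder for whatever arch name your existing real build used, and the rest of the flags are taken from the advice above.

```shell
# Reconfigure PETSc for complex scalars under a second arch name.
cd "$PETSC_DIR"
export PETSC_ARCH=freefem-complex
./configure --with-scalar-type=complex --download-slepc
make

# Then, in the FreeFEM source tree, point configure at both builds.
# REAL_ARCH is a placeholder for your real-arithmetic arch name.
./configure \
    --with-petsc="$PETSC_DIR/$REAL_ARCH/lib/conf/petscvariables" \
    --with-petsc_complex="$PETSC_DIR/$PETSC_ARCH/lib/conf/petscvariables"
```

Keeping the two arches under one $PETSC_DIR is what makes both petscvariables files available to a single FreeFEM configure run.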

Yes, I followed your tutorial step by step, so it was with --download-slepc (and moreover, I checked that it was built with PETSc).

Anyway, if I run make after configure, I immediately hit an error (in libMesh, which I guess is the first thing being built):

libtool: error: **unrecognised option: '-static'**
make[4]: *** [libMesh.a] Error 1
make[3]: *** [lib/libMesh.a] Error 2
make[2]: *** [all-recursive] Error 1
make[1]: *** [all-recursive] Error 1
make: *** [all] Error 2

which leads me to ask whether the official macOS builds are done with some script. If so, having access to that script would be a great help to me.

I don’t want to install the precompiled package because I have a very strict rule of never installing anything in /usr/local on my machines, and the precompiled packages install everything there.

As a workaround, I might try to use FreeFEM on a Linux VM inside MacOS.

First the problem with ScaLAPACK, now this. I think there is something wrong with your working environment: this is a very trivial file to compile and link, and it has nothing to do with PETSc. Anyway, if you want everything in one place, just follow these instructions, but don’t forget to pass --prefix=${PWD} when configuring.
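A minimal sketch of the self-contained in-tree install hinted at above; the clone location is an example, and the exact sequence of autotools steps may differ from the linked instructions.

```shell
# Keep the whole installation inside the source tree via --prefix=${PWD}.
git clone https://github.com/FreeFem/FreeFem-sources
cd FreeFem-sources
autoreconf -i
./configure --prefix=${PWD}
make
make install   # binaries and libraries stay under this directory
```

With --prefix=${PWD}, nothing touches /usr/local, which also addresses the strict-install-location rule mentioned earlier in the thread.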