GCC Compile Farm Project
Contents
What is the GCC Compile Farm
The GCC Compile Farm project maintains a set of machines of various architectures and provides SSH access to Free Software developers (GCC and others: GPL, BSD, MIT, ...) to build, test and debug Free, Libre and Open Source Software. It is not a free cluster for computationally intensive computing using Free Software. Once your account application (see below) is approved, you get full SSH access to all the Compile Farm machines (current and new).
Architectures currently available:
- i686
- x86_64
- riscv64le
- ppc64be/le (including POWER7, POWER8, POWER9, POWER10)
- chrp32be
- aarch64le
- sparc32/64be
- mips64be
- loongarch64
Operating systems currently available:
- AIX 7.1, 7.3
- AlmaLinux 8.10, 9.4
- Alpine Linux
- CentOS 7.9, 8 Stream
- Debian stretch, buster, bullseye, bookworm, jessie, trixie, unstable, testing
- FreeBSD 14.0, 15.0
- macOS 12.6
- NetBSD 10.0
- OpenBSD 7.5, 7.6
- OpenSUSE Leap 15
- Rocky 9.4
- Solaris 10, 11.3, 11.4
- Ubuntu jammy
See the detailed list here, with architectures and OS: https://portal.cfarm.net/machines/list/
How to obtain an account?
If you are working on a piece of free software (GCC or any other GPL, BSD, MIT, ... project) and need SSH access to the farm for compilation, debugging and testing on various architectures, you may apply for a GCC Compile Farm account at https://portal.cfarm.net/users/new/
After approval and account creation, the compile farm machines should be used only for free software development; see this free software license list.
See "usage" below for usage rules.
How to Get Involved
There is a mailing list for farm-related discussions: https://lists.tetaneutral.net/listinfo/cfarm-users
Please don't use this mailing list for support requests (see https://portal.cfarm.net/tickets/ instead)
Hardware Wishlist
- Any suggestions? Vendor contacts welcomed.
- RISC-V
Similar projects
Public Hurd Boxen (GNU/Hurd)
Usage
Warning: compile farm machines' disks are not RAIDed and not backed up. Data may be removed or lost without notice. Don't assume anything about data persistence; please use git/SVN or rsync to keep your scripts and crontab somewhere safe.
Mailing list, SSH keys and a ticket system are hosted at https://portal.cfarm.net/. If you need to replace your ssh key, use the Lost password button on https://portal.cfarm.net/login/
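Connecting is then plain SSH with your registered key. For example (gcc110 is just an example machine; hostnames follow the gccXXX.fsffrance.org pattern mentioned in the machine-setup section below, and replace "yourlogin" with your farm login):
ssh yourlogin@gcc110.fsffrance.org
# on machines where sshd also listens on port 443 (useful behind restrictive firewalls):
ssh -p 443 yourlogin@gcc110.fsffrance.org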
To request installation of a package or to report a problem with a machine, please use the tracking system in preference to the mailing list.
Use the ulimit command to reduce the risk of a DoS caused by your script/program. Example: ulimit -S -t 3600 -v 2000000 -u 1200
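Broken out, that example sets soft limits on CPU time, virtual memory and process count (the values are only suggestions, adjust them to your workload):
ulimit -S -t 3600     # soft limit: at most 3600 CPU seconds per process
ulimit -S -v 2000000  # soft limit: ~2 GB of virtual memory (value in KiB)
ulimit -S -u 1200     # soft limit: at most 1200 processes for this user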
Some machines have limited RAM and CPU, so please do not set up a crontab on those machines without discussing it on the mailing list first.
On machines with limited disk space, please clean up automatically as much as possible, and on other machines remember to remove old unused data.
For automatic jobs on an N-core machine, please launch no more than N/2 runnable processes in total, and if you see that your cron job runs at the same time as another user's, please coordinate a time shift. You can also use https://portal.cfarm.net/munin/ to plan your cron jobs.
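For example, on an 8-core machine a nightly job could be throttled and shifted away from the common midnight slot like this (a sketch only; the script name and time slot are placeholders):
# crontab entry: run at 03:17 at low priority, with -j4 on an 8-core machine
17 3 * * * nice -n 19 $HOME/bin/nightly-build.sh -j4 >$HOME/nightly-build.log 2>&1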
The systems are provided for the development of Free, Libre and Open Source Software, not for using software or running computationally-intensive computations unrelated to building, testing and debugging.
Services and software installed on farm machines
On most CFARM machines:
Many system packages are pre-installed; if you are missing something, feel free to request it: https://portal.cfarm.net/tickets/
Various pre-built toolchains are available under /opt/cfarm/ (writable for everyone if you wish to add something)
Munin is installed to collect system statistics, graphs are available at https://portal.cfarm.net/munin/
Build tips:
To build trunk on some machines you have to configure with --with-mpfr=/opt/cfarm/mpfr-2.4.1 --with-gmp=/opt/cfarm/gmp-4.2.4 --with-mpc=/opt/cfarm/mpc-0.8 (other versions are also available depending on the machine); see the combined sketch after these tips.
If you enable LTO you need to use --with-libelf=/opt/cfarm/libelf-0.8.12
To build trunk in 64-bit mode on sparc64 and powerpc64 you have to use --with-mpfr=/opt/cfarm/mpfr-2.4.1-64 --with-gmp=/opt/cfarm/gmp-4.2.4-64 --with-mpc=/opt/cfarm/mpc-0.8-64. For gcc64, use --with-mpfr=/opt/cfarm/mpfr-latest --with-gmp=/opt/cfarm/gmp-latest --with-mpc=/opt/cfarm/mpc-latest --disable-libstdcxx-pch, and export LD_LIBRARY_PATH='/opt/cfarm/mpfr-latest/lib:/opt/cfarm/gmp-latest/lib:/opt/cfarm/mpc-latest/lib:/opt/cfarm/gmp-5.0.5/lib'
To debug 64-bit binaries on sparc64 and powerpc64 you have to use /opt/cfarm/gdb-6.8-64/bin/gdb
On mips64el-linux, to compile the default GCC use --build=mipsel-linux-gnu --enable-targets=all; if you need a 64-bit build, use export CC="gcc -mabi=n32" and --with-mpfr=/opt/cfarm/mpfr-2.4.1-n32 --with-gmp=/opt/cfarm/gmp-4.2.4-n32 --with-mpc=/opt/cfarm/mpc-0.8-n32.
- To build trunk on powerpc-ibm-aix (AIX):
PATH=/opt/freeware/bin:$PATH
.../src/configure --disable-werror --enable-languages=c,c++ --with-gmp=/opt/cfarm --with-libiconv-prefix=/opt/cfarm --disable-libstdcxx-pch
make SHELL=/bin/bash CONFIG_SHELL=/bin/bash
To be able to git push to the AIX machine cfarm119, set git config remote.cfarm119.receivepack /opt/freeware/bin/git-receive-pack on the system you push from.
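Putting the tips above together, a typical trunk build on a Linux farm machine might look like the following sketch (paths and library versions vary per machine, so check /opt/cfarm/ first, and remember the N/2 jobs rule from the usage section when picking -j):
mkdir build && cd build
../gcc-src/configure --prefix=$HOME/gcc-trunk \
    --enable-languages=c,c++ \
    --with-gmp=/opt/cfarm/gmp-4.2.4 \
    --with-mpfr=/opt/cfarm/mpfr-2.4.1 \
    --with-mpc=/opt/cfarm/mpc-0.8
make -j4 && make install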
Projects Ideas
ChristianJoensson could set up a sparc-linux cross compiler
TomTromey proposed GCJ testing; free Java software often has testsuites which are useful for GCC testing. Also, setting up BuildBot would be very handy: http://buildbot.sourceforge.net
- Several developers (3) have expressed interest in doing normal GCC development on the machines instead of, or in addition to, their own.
AndrewPinski suggested setting up Openbench http://www.exactcode.de/oss/openbench/
SpenserGilliland suggested building binary images for ARM Multimedia Support in Buildroot.
Currently Running
Port GCC to Intel's 16-bit architecture.
RaskIngemannLambertsen is trying to port GCC to the Intel 8086 CPU family. Nodes gcc01, gcc03, gcc04, gcc07, gcc08 and gcc09 are used for testing patches that could affect existing targets. Tests are run at low priority and use of the nodes is sporadic. The Intel 8086 CPU has both 8-bit registers and 16-bit registers. The work on getting GCC to fully support such CPUs includes:
- Fixing the assumption in subreg_get_info() (rtlanal.c) that if a value is stored in multiple hard registers, then those hard registers all have the same size. To fix this, subreg_get_info() will be rewritten. The targets that are the trickiest to get right are i?86-*-* with -m128bit-long-double and powerpc-unknown-eabispe. Note: This part has been postponed because the new lower-subreg pass reduces the problem and I've worked around the cases that subreg_get_info() can't currently handle.
- In reload.c, fixing find_valid_class() and callers having the same problem as subreg_get_info().
- Fixing unspecified reload bugs as they turn up.
General bug fixes and enhancements are also tested from time to time.
Automatic bootstrap and regression testing
One can use the script from GCC sources contrib/patch_tester.sh for setting up an automatic tester on a machine. The patch should contain several markers that instruct the tester where to send the results by email, what branch and revision of GCC to use, and the configure and make flags. One can use the prepare_patch.sh script to fill in all this information and to select the defaults for each case.
An example of a patch header for the HEAD revision of autovect-branch, configuring only C, C++ and Fortran, using vectorization during bootstrap, and only running the vectorization-specific tests:
email:foo@bar.com
branch:autovect-branch
revision:HEAD
configure:--enable-languages=c,c++,fortran
make:CFLAGS="-g" BOOT_CFLAGS="-O2 -g -msse2 -ftree-vectorize"
check:RUNTESTFLAGS=vect.exp
Autobuilds for coLinux
HenryNe is using node gcc11 for building coLinux from source. The build uses the mingw32 cross target and runs once per day at low priority.
CGNU Project
rpeckhoff is documenting the operation of the current GCC build system on nodes gcc11-gcc14. He is using graphviz, Doxygen, and his own scripts to help discover and document source interdependencies. His project's progress is at http://cgnu.rpeckhoff.org
Testing
StevenBosscher is running LRA branch bootstrap and check in loop with languages C, C++ and Fortran on ia64-unknown-linux-gnu and powerpc64-unknown-linux-gnu. There's also a cross-tester from powerpc64-linux to mipsisa64-elf for languages C and C++ only. Successful build+test cycles are reported to the gcc-testresults mailing list. The testers run within 24 hours of a change on the LRA branch.
host    arch       branch  loop time
gcc110  powerpc64  lra     2h00 (-j 8)
gcc110  mipsisa64  lra     2h00 (-j 8)
gcc66   ia64       lra     8h30 (-j 2)
Developing the Win64 port of GCC
http://mingw-w64.sf.net is committed to creating a viable platform for using GCC on Windows natively. We run build and testsuites constantly and foster development and porting of mainstream applications such as Firefox (http://www.mozilla.org) and VLC (http://www.videolan.org) to the Win64 platform.
Cross compile testing
MikeStein is running cross compile tests at low priority and reports the results to gcc-testresults. He tests various branches, patches and targets.
RTEMS Project
JoelSherrill is periodically running cross compile tests of various RTEMS (http://www.rtems.org) targets and reporting the results to gcc-testresults. The current focus is on the GCC SVN trunk with the binutils, gdb and newlib CVS heads. C, C++, and Ada languages are tested where possible. The targets currently tested are listed below along with the RTEMS Board Support Package (BSP) and simulator used.
- arm-rtems (edb7312 BSP on Skyeye)
- h8300-rtems (h8sim BSP on gdb h8 simulator)
- i386-rtems (pc386 BSP on qemu simulator)
- m32c-rtems (m32csim BSP on gdb m32c simulator)
- m32r-rtems (m32rsim BSP on gdb m32r simulator)
- mips-rtems (jmr3904 BSP on gdb mips jmr3904 simulator)
- sh-rtems (simsh1 BSP on gdb SuperH simulator)
- powerpc-rtems (psim BSP on gdb psim simulator)
- sparc-rtems (sis BSP on gdb sis/erc32 simulator)
The bfin and m68k (ColdFire) targets will be added once Skyeye (http://www.skyeye.org) adds some missing instructions that GCC 4.3 and newer generate which are currently not supported by Skyeye.
There are some test infrastructure issues which negatively impact the results on all RTEMS targets.
- Because each tested BSP is compiled with a specific set of CPU CFLAGS, a number of tests fail on each target: the BSP's CPU compilation flags override those being tested, so the expected assembly is not generated.
- RTEMS targets do not gather profiling information. As such all profiling tests fail.
RTEMS testing is normally done on gcc12. It is not currently run automatically and may move to another machine when it is done automatically.
BTG Project
BTG is a BitTorrent p2p client with a daemonized backend. Daily builds/packaging/regression testing.
GNU Guile daily builds
Ludovic Courtès builds and runs the test suite of GNU Guile on gcc11 (x86-64), gcc30 (alphaev56) and gcc31 (sparc64) using Autobuild. Build results are available here.
GNU SASL, Libidn, Shishi, GSS, GnuTLS, etc daily builds
Simon Josefsson builds and runs the test suite of several projects. Build results are available here.
SBCL testing
GaborMelis and NikodemusSiivola build and test SBCL on x86-64, sparc, alpha and ppc.
CSQL Main Memory Database Testing
prabatuty builds and tests the CSQL main memory database on gcc14 (x86_64), sparc and ppc.
C++ library testing
JonathanWakely uses Compile Farm machines to build and test libstdc++.
lvv::array
A C++ STL-compatible container, specialized for x86_64 and capable of vector operations.
LOPTI
A mathematical optimization library (derivative-free, unconstrained solvers).
Botan
Botan is a BSD licensed crypto library. JackLloyd uses the compile farm to test builds and develop CPU-specific optimizations, mostly on the non-x86/x86_64 machines.
YAPET
YAPET is a GPL licensed text based password manager. RafaelOstertag uses the compile farm to test and assure interoperability of the binary file structure between different architectures.
lnostdal
Building and testing of the latest SBCL against several Common Lisp packages via a clbuild fork: http://gitorious.org/clbuild
Development and testing of some of the Common Lisp projects hosted here: http://gitorious.org/~lnostdal
FIM : Fbi IMproved
FIM (Fbi IMproved, http://savannah.gnu.org/projects/fbi-improved) is automatically built and tested on various platforms.
GNU CLISP
CLISP is an ANSI Common Lisp implementation.
SamSteingold uses the compile farm for automated build and testing on various platforms.
Ruby
TanakaAkira uses the compile farm to test Ruby.
Perl
NicholasClark uses the compile farm to test perl portability on platforms he doesn't otherwise have access to.
Stellarium
HansLambermont uses the compile farm for continuous build integration of the Stellarium project.
Stellarium is a free open source planetarium for your computer. It shows a realistic sky in 3D, just like what you see with the naked eye, binoculars or a telescope.
The build automation uses BuildBot, a system to automate the compile/test cycle required by most software projects to validate code changes.
Xapian
Olly Betts is using the compile farm for portability testing of Xapian.
Buildroot
PeterKorsgaard is running randpackageconfig builds of Buildroot on gcc10.
FATE
MichaelKostylev is running Libav Automated Testing Environment on various machines.
Concurrency Kit
Samy Al Bahra is porting and performance testing Concurrency Kit on idle machines.
DynCall
DanielAdler is porting dyncall to sparc/sparc64 on gcc54 and gcc64.
OpenSCAD
Don B does porting, regression testing, and bug fixing of OpenSCAD on gcc20, gcc76, and gcc110.
BRL-CAD
ChristopherSeanMorrison is involved in on-going cross-platform compilation testing of BRL-CAD. Most day-to-day testing is on gcc10 and gcc110. The compile farm is a fantastic resource being provided (thank you!).
POWER architecture
SeanMcGovern uses the IBM POWER nodes to both learn the architecture and do some light porting work.
Libamqp
EamonWalshe is using the compile farm to build and test libamqp with various gcc versions and different architectures.
Biosig and related projects
Biosig is a software library for processing biomedical signal data (like EEG and ECG). It consists of several parts: "biosig for octave and matlab" provides functions for analysing the data in Octave and Matlab, while biosig4c++/libbiosig provides a common interface for accessing over 40 different data formats. libbiosig is used for data conversion and within sigviewer (a viewing and scoring software), and has bindings to various languages. libbiosig also supports big/little endian conversion for the various data formats and target platforms. AloisSchloegl is using the compile farm for testing libbiosig on various platforms, and the testing will be extended to related projects.
NaN-toolbox for Octave with OpenMP enabled (octave-nan)
The NaN-toolbox is a statistics and machine learning toolbox for use in Octave that supports parallel data processing on shared-memory multi-core machines. So far, some performance tests comparing Octave and Matlab have been done: http://pub.ist.ac.at/~schloegl/publications/TR_OpenMP_OctaveMatlabPerformance.pdf AloisSchloegl would like to compare different multi-core systems with the same performance test.
Free Pascal nightly testsuite
The Free Pascal compiler is a free software Pascal compiler. It has a testsuite, with a number of machines performing compilation and testsuite runs every night. Results are visible on a special page. Pierre Muller added these nightly build/testsuite runs for powerpc and powerpc64 on the gcc110 machine.
GNU Texinfo
PatriceDumas uses the compile farm to test GNU Texinfo portability on platforms he doesn't otherwise have access to.
Wolfpack Empire
Empire is a real time, multiplayer, Internet-based war game, and one of the oldest computer games still being played. MarkusArmbruster uses the compile farm to ensure it stays portable.
Dpkg
GuillemJover uses the non-Linux hosts to port dpkg to those systems.
Libbsd, Libmd
GuillemJover uses the non-Linux hosts to port libbsd and libmd to those systems.
Your Project here
Your description here.
Compile farm system administration
Improvements to the management software running the farm
See https://framagit.org/compile-farm/gccfarm/-/issues/
Creating a new user
Users must request account creation themselves at https://portal.cfarm.net/users/new/
An admin can then approve or reject the pending requests from the admin site (https://portal.cfarm.net/admin/).
Once approved, the user receives an email to set their password.
TODO: describe the scripts that actually create the user on all machines
Retiring/suspending a user
Go to the administration page and find the user: https://portal.cfarm.net/admin/gccfarm_user/farmuser
On the user page, uncheck "Active" and save. It will prevent login on the management website, and also queue a task to remove the SSH keys from all farm machines. Either wait for the cron job that does it, or run the task manually if it's urgent.
Then, cleanup needs to be done manually if it's necessary. There are scripts to help (SSH root on portal.cfarm.net):
~/ansible/disable_crontab.sh: self-explanatory. Beware of offline machines, the script will of course not do anything on these.
~/ansible/show_processes.sh: self-explanatory, you then need to manually kill any residual process on farm machines. The output of the script is currently messy.
~/ansible/disk_usage.sh: this script can take a very long time to run if the user has many files. Be patient. See "Cleanup data when a machine has a full disk" below for a strategy to cleanup.
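A minimal sketch of that manual cleanup flow (assuming, as for disk_usage.sh, that the other helper scripts take the login as an argument):
# as root on portal.cfarm.net
cd ~/ansible
./disable_crontab.sh <login>   # remove the user's crontabs on all machines
./show_processes.sh <login>    # list leftovers, then kill them by hand
./disk_usage.sh <login>        # slow if the user has many files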
Adding a new machine
Preliminary work
- collect information about the hoster: organization name, should it appear on our website?
- collect contact information: email, website, name/email of somebody from the organization, URL to ticket system, IRC...
- ensure that the admin VM (root on portal.cfarm.net) can access the machine as root over SSH
- determine a name gccXXX. Currently: <100 in France, 1XY in the US, 2XY in Europe, 3XY in Canada. Try to keep machines hosted by the same organisation close together (same X) and switch to the next X for a different host (for instance "250")
- ask guerby to add the name to the DNS
Ansible setup
Everything here is done in /root/ansible on the admin VM.
- add the machine to the "hosts" file, in the right section according to its OS and OS version
- create "host_vars/gccXXX.yml" with specific configuration variables if needed.
- run the "setup.yml" playbook in test mode:
# ansible-playbook setup.yml -l 'gccXXX' --diff --check
- if everything seems good, run it for real:
# ansible-playbook setup.yml -l 'gccXXX' --diff
- do the same for the "munin-node.yml" playbook that installs munin
- the "munin-master.yml" is different because it runs on the local machine to configure the munin master:
# ansible-playbook munin-master.yml --diff --check
# ansible-playbook munin-master.yml --diff
- finally, install packages. This playbook takes a long time to run and is the most prone to breakages.
# ansible-playbook packages.yml --diff --check
# ansible-playbook packages.yml --diff
Finishing the munin setup
After some time, the machine should appear in https://portal.cfarm.net/munin
The munin master runs every 30 minutes, so the new machine will not appear immediately. In addition, it may need several munin runs to start showing data.
The final configuration is to add the machine to the aggregate disk graph: https://portal.cfarm.net/munin/gccfarm/all/df_abs.html
To do this, we need to know the "munin internal name" of the data disk (either "/home" if it is separate, or "/"). Look for the internal name here: https://portal.cfarm.net/munin/gccfarm/gccXXX/df_abs.html
Then add it "group_vars/munin-master.yml", and run the "munin-master.yml" playbook again.
Management software setup
To register the new machine on the compile farm website and deploy user accounts and SSH keys:
- go to the admin site at https://portal.cfarm.net/admin/
- add the hoster if it does not exist already:
- enter the organization name and whether it should be displayed publicly
- add one or more contacts, which can be a person or a contact option (ticket system, generic contact address, IRC, etc). This will not be made public.
- add a new machine. Most fields should be self-explanatory:
- make sure to use "gccXXX" as name and "gccXXX.fsffrance.org" as hostname
- ansible configures sshd to also listen on port 443; try to see if you can actually connect on this port (there might be a firewall) and set the ports to "22,443" if it works
- description: try to keep it short. The OS, CPU and arch will be detected automatically and displayed separately in the list of machines.
- other fields should have reasonable defaults
- check that the system can access the machine over SSH to fetch technical information:
su - admin-ansible
./manage.sh gather_info gccXXX
- if it works, it should print information to stdout and fill up https://portal.cfarm.net/machines/list/
- next, we'll create users. This can take a long time. First try to see if it works, and kill it after a few users:
./manage.sh deploy_users -m gccXXX
^C
- then run it for real:
./manage.sh deploy_users --apply -m gccXXX
- go take a coffee or two, and then do the same with SSH keys:
./manage.sh deploy_sshkeys -m gccXXX
^C
./manage.sh deploy_sshkeys --apply -m gccXXX
- congratulations, the machine is now ready for general usage! You can write a news article and even send an email to the cfarm-announces mailing list if the occasion warrants it.
Install software packages on all machines
This is done in /root/ansible/packages.yml on the management host.
Cleanup data when a machine has a full disk
Tools
The first step is to determine how much data is used and by whom: "ncdu" is a great tool for this.
Then there is a script that uses ansible to compute the disk usage of a user on all farm machines:
/root/ansible/disk_usage.sh <username>
It might take a very long time to run if the user has lots of files. If you kill the script, the "du" operation will still continue to run on the farm machines, so be careful not to kill and re-run the script several times or you will seriously impact I/O on the farm machines.
Cleanup strategy
Currently the strategy looks like this; it should be automated:
1) look at who is taking the most space on machines that are near full
2) run the script that computes the disk usage of the user on *all* farm machines
# On portal.cfarm.net
/root/ansible/disk_usage.sh <login>
3) send the user an email asking them to look at the data and clean up stale data, including the output of the script
4) if the email bounces, try to find another email address that works
5) if you can't easily find another email, look manually at the data (with "ncdu", "ls -lhart") and clean up data that takes the most space and "looks old" (i.e. created/accessed for the last time several years ago); see the sketch below
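For step 5, a possible way to spot large, long-untouched files (GNU find assumed; adjust the path and thresholds):
# files over 100 MB not modified for ~2 years: the first cleanup candidates
find /home/<login> -xdev -type f -size +100M -mtime +730 -exec ls -lh {} +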
OS upgrades on the management host
The management host runs Debian, and it must be upgraded to the latest stable release from time to time.
Preliminary work:
- check that the future version of Python is supported by our Django and Ansible code
- make sure the machine has up-to-date packages
- make sure that borg backups are working
- take an additional backup of the Django database:
su - admin-ansible
./manage.sh dumpdata --natural-primary --natural-foreign -o cfarm-database-YYYYMMDD.json
- scp this database to somewhere safe (it contains personal data)
You can now perform a standard Debian full-upgrade. When prompted about modified config files, keep the local version.
After the upgrade:
- recreate the python virtualenv for the Django-based portal:
su - admin
cd /var/www/html/gccfarm/
python3 -m venv venv-pyXY  # change pyXY to the actual version
. venv-pyXY/bin/activate
pip install -r requirements/production.txt
- update the systemd service file at /etc/systemd/system/gccfarm.service with the new venv path
- update the admin scripts with the new venv path:
su - admin-ansible
vim manage.sh deploy_ssh_backlog.sh deploy_ssh_queue.sh
- reboot
- check that the web portal works
- check that management tasks work:
su - admin-ansible
./manage.sh gather_info
- update /etc/ssh/ssh_config.d/cfarm.conf with specific client SSH config if needed to connect to older hosts
- upgrade the postgresql version (carefully; read the man pages first):
# Replace NEW_VERSION and OLD_VERSION appropriately below.
# Check current state
pg_lsclusters
du -hsc /var/lib/postgresql/*
# Drop the empty database created automatically with the new version:
pg_dropcluster --stop NEW_VERSION main
# Upgrade old to new (this copies data):
pg_upgradecluster OLD_VERSION main
# Check that the web portal works, then drop the old database:
pg_dropcluster OLD_VERSION main
- reinstall the global version of Ansible used for setup, munin and package installs. We try to use the same base version as our gccfarm_py3 branch for consistency with user and SSH key deployments.
# This is the version we use as of 2024-05.
# With recent pip, we need to force installation to /usr/local
pip install --break-system-packages 'ansible-core>=2.14,<2.15'
- check that Ansible works:
cd /root/ansible
ansible --version
ansible-playbook setup.yml --check
Final steps as usual for Debian upgrades:
- apt autoremove
- apt clean
- last reboot to make sure everything is OK
List of machines
See https://portal.cfarm.net/machines/list/ for an up-to-date and automatically-generated list of machines. The information below may be out-of-date.
Datacenter http://www.fsffrance.org/ Rennes, static public IP, 100 Mbit/s up/down
name   disk  CPU Notes
gcc10  2TB   2x12x1.5 GHz AMD Opteron Magny-Cours / 64 GB RAM / Supermicro AS-1022G-BTF / Debian x86-64
gcc12  580G  2x2x2.0 GHz AMD Opteron 2212 / 4GB RAM / Dell SC1345 / Debian x86-64
Datacenter http://www.smile.fr/, static public IP, 100 Mbit/s up/down
name   disk  CPU Notes
gcc13  580G  2x2x2.0 GHz AMD Opteron 2212 / 4GB RAM / Dell SC1345 / Debian x86-64
gcc14  750G  2x4x3.0 GHz Intel Xeon X5450 / 16GB RAM / Dell Poweredge 1950 / Debian x86-64
Note: incoming ports for user services are limited to tcp/9400-9500
Datacenter http://www.inria.fr/saclay/, static public IP, ssh only
name   disk  CPU Notes
gcc15  160G  1x2x2.8 GHz Intel Xeon 2.8 (Paxville DP) / 1 GB RAM / Dell SC1425 / Debian x86-64
gcc16  580G  2x4x2.2 GHz AMD Opteron 8354 (Barcelona B3) / 16 GB RAM / Debian x86-64
Datacenter http://www.irill.org/, static public IP
name   disk  CPU Notes
gcc20  1TB   2x6x2.93 GHz Intel Dual Xeon X5670 2.93 GHz 12 cores 24 threads / 24 GB RAM / Debian amd64
Datacenter http://iut.ups-tlse.fr/, static public IP
name   disk  CPU Notes
gcc21  15TB  1x6x1.6 GHz Intel Xeon E5-2603 v3 6 cores / 64 GB RAM / Ubuntu amd64
Datacenter Infosat Telecom http://www.infosat-telecom.fr/, static public IP
name   disk  CPU Notes
gcc70  160G  2x3.2 GHz Intel Xeon 3.2E (Irwindale) / 3 GB RAM / Dell Poweredge SC1425 / NetBSD 5.1 amd64
Datacenter tetaneutral.net http://tetaneutral.net/, Toulouse, France, static public IP and IPv6
name   disk  CPU Notes
gcc45  1TB   4x3.0 GHz AMD Athlon II X4 640 / 4 GB RAM / Debian i386
gcc22  NFS   MIPS Cavium Octeon II V0.1 EdgeRouter Pro
gcc23  NFS   MIPS Cavium Octeon II V0.1 EdgeRouter Pro
gcc24  NFS   MIPS Cavium Octeon II V0.1 EdgeRouter Pro
gcc67  3TB   8x2x3.4 GHz AMD Ryzen 7 1700X / 32G RAM / Debian GNU/Linux 9 (stretch)
gcc68  3TB   4x2x3.2 GHz AMD Ryzen 5 1400 / 32G RAM / Debian GNU/Linux 9 (stretch)
Datacenter INSA Rouen http://www.insa-rouen.fr, France
name   disk  CPU Notes
gcc75  2TB   4x2x3.4 GHz Core i7-2600 / 16 GB RAM / 2TB
gcc76  2TB   4x2x3.4 GHz Core i7-2600 / 16 GB RAM / 2TB
Gcc76 contains over 10 virtual machines, listed in /etc/hosts. /scratch is an NFS mount shared between them.
Datacenter OSUOSL http://osuosl.org/, Oregon, USA, static public IP
name    disk   CPU Notes
gcc110  2TB    2x8x4x3.55 GHz IBM POWER7 / 64 GB RAM / IBM Power 730 Express server / CentOS 7 ppc64
gcc111  2TB    4x6x4x3.70 GHz IBM POWER7 / 128 GB RAM / IBM Power 730 Express server / AIX 7.1
gcc112  2TB    2x10x8x3.42 GHz IBM POWER8 / 256 GB RAM / IBM POWER System S822 / CentOS 7 ppc64le
gcc113  500GB  8x2.4 GHz aarch64 / 32 GB RAM / APM X-Gene Mustang board / Ubuntu 14.04.3 LTS
gcc114  500GB  8x2.4 GHz aarch64 / 32 GB RAM / APM X-Gene Mustang board / Ubuntu 14.04.3 LTS
gcc115  500GB  8x2.4 GHz aarch64 / 32 GB RAM / APM X-Gene Mustang board / Ubuntu 14.04.3 LTS
gcc116  500GB  8x2.4 GHz aarch64 / 32 GB RAM / APM X-Gene Mustang board / Ubuntu 14.04.3 LTS
gcc117  500GB  8x2 GHz aarch64 / 16 GB RAM / AMD Opteron 1100 / openSUSE Leap 42.1
gcc118  500GB  8x2 GHz aarch64 / 16 GB RAM / AMD Opteron 1100 / openSUSE Leap 42.1
gcc119  2TB    2x8x4x4.15 GHz IBM POWER8 / 192 GB RAM / IBM POWER System S822 / AIX 7.2
gcc120  11TB   4x24x2x3.9 GHz IBM POWER10 / 2 TB RAM / IBM POWER System E1050 / AlmaLinux 9
gcc135  26TB   2x16x4x3.8 GHz IBM POWER9 / 264 GB RAM / IBM POWER System / CentOS 7 ppc64le
Datacenter http://www.fsffrance.org/ Paris, FTTH static IP, 100 Mbit/s down, 50 Mbit/s up
Currently empty.
Datacenter Laurent GUERBY, http://www.guerby.org/, France, DSL dynamic IP, 10 Mbit/s down, 1 Mbit/s up
Currently empty.
Datacenter http://www.hackershells.com/ San Francisco, USA, static public IP, 1 Mbit/s
Currently empty.
Datacenter Melbourne, Australia, 10 Mbit/s DSL
Currently empty.
Datacenter http://isvtec.com/, static public IP, 100 Mbit/s up/down
Currently empty.
Datacenter http://www.macaq.org/, DSL dynamic IP, 10 Mbit/s down, 1 Mbit/s up, Ubuntu breezy 5.10
Currently empty.
Datacenter http://www.mekensleep.com/, DSL dynamic IP, 10 Mbit/s down, 1 Mbit/s up
Currently empty.
Datacenter http://www.skyrock.com/, static public IP, 1000 Mbit/s up/down
Currently empty.
Datacenter http://www.pateam.org/ http://www.esiee.fr/, 100 Mbit/s up/down
Currently empty.
Offline
name    port   disk  CPU Notes
gcc01   9061   16G   2x1.0 GHz Pentium 3 / 1 GB RAM / Dell Poweredge 1550 + additional 32 GB disk, donated
gcc02   9062   16G   2x1.0 GHz Pentium 3 / 1 GB RAM / Dell Poweredge 1550, donated
gcc03   9063   16G   2x1.26 GHz Pentium 3 / 1 GB RAM / Dell Poweredge 1550, donated
gcc05   9065   16G   2x1.0 GHz Pentium 3 / 1 GB RAM / Dell Poweredge 1550, donated
gcc06   9066   16G   2x1.0 GHz Pentium 3 / 1 GB RAM / Dell Poweredge 1550, donated
gcc07   9067   32G   2x1.26 GHz Pentium 3 / 1 GB RAM / Dell Poweredge 1550, donated
gcc09   9068   32G   2x0.93 GHz Pentium 3 / 1 GB RAM / Dell Poweredge 1550, donated
gcc08          32G   2x1.26 GHz Pentium 3 / 1 GB RAM / Dell Poweredge 1550, donated
gcc11          580G  2x2x2.0 GHz AMD Opteron 2212 / 4GB RAM / Dell SC1345 / Debian x86-64, /home disk dead 20120613
gcc17          580G  2x4x2.2 GHz AMD Opteron 8354 (Barcelona B3) / 16 GB RAM / Debian x86-64
gcc30          17G   0.4 GHz Alpha EV56 / 2GB RAM / AlphaServer 1200 5/400 => offline, to relocate
gcc31          51G   2x0.4 GHz TI UltraSparc II (BlackBird) / 2 GB RAM / Sun Enterprise 250 => offline, to relocate
gcc33   19033  1TB   0.8 GHz Freescale i.MX515 / 512 MB RAM / Efika MX Client Dev Board / Ubuntu armv7l
gcc34   19034  1TB   0.8 GHz Freescale i.MX515 / 512 MB RAM / Efika MX Client Dev Board / Ubuntu armv7l
gcc35   19035  1TB   0.8 GHz Freescale i.MX515 / 512 MB RAM / Efika MX Client Dev Board / Ubuntu armv7l
gcc36   19036  1TB   0.8 GHz Freescale i.MX515 / 512 MB RAM / Efika MX Client Dev Board / Ubuntu armv7l
gcc37   19037  1TB   0.8 GHz Freescale i.MX515 / 512 MB RAM / Efika MX Client Dev Board / Ubuntu armv7l
gcc38          1TB   3.2 GHz IBM Cell BE / 256 MB RAM / Sony Playstation 3 / Debian powerpc, ex IRILL
gcc40          160G  1.8 GHz IBM PowerPC 970 (G5) / 512 MB RAM / Apple PowerMac G5 / Debian powerpc
gcc41   9091   18G   0.73 GHz Itanium Merced / 1GB RAM / HP workstation i2000 => too old, please use gcc60
gcc42          160G  0.8 GHz ICT Loongson 2F / 512 MB RAM / Lemote Fuloong 6004 Linux mini PC / Debian mipsel
gcc43   9093   60G   1.4 GHz Powerpc G4 7447A / 1GB RAM / Apple Mac Mini
gcc46          250G  1.66 GHz Intel Atom D510 2 cores 4 threads / 4 GB RAM / Debian amd64, ex IRILL
gcc47          250G  1.66 GHz Intel Atom D510 2 cores 4 threads / 4 GB RAM / Debian amd64, ex IRILL
gcc49          2TB   4x0.9 GHz ICT Loongson 3A / 2 GB RAM / prototype board / Debian mipsel
gcc50   9080   250G  0.6 GHz ARM XScale-80219 / 512 MB RAM / Thecus N2100 NAS
gcc51          60G   0.8 GHz ICT Loongson 2F / 1 GB RAM / Lemote YeeLoong 8089 notebook / Debian mipsel
gcc52   9082   1TB   0.8 GHz ICT Loongson 2F / 512 MB RAM / Gdium Liberty 1000 notebook / Mandriva 2009.1 mipsel
gcc53   9083   80G   2x1.25 GHz PowerPC 7455 G4 / 1.5 GB RAM / PowerMac G4 dual processor
gcc54          36G   0.5 GHz TI UltraSparc IIe (Hummingbird) / 1.5 GB RAM / Sun Netra T1 200 / Debian sparc
gcc55   9085   250G  1.2 GHz Marvell Kirkwood 88F6281 (Feroceon) / 512 MB RAM / Marvell SheevaPlug / Ubuntu armel
gcc56   9086   320G  1.2 GHz Marvell Kirkwood 88F6281 (Feroceon) / 512 MB RAM / Marvell SheevaPlug / Ubuntu armel
gcc57   9087   320G  1.2 GHz Marvell Kirkwood 88F6281 (Feroceon) / 512 MB RAM / Marvell SheevaPlug / Ubuntu armel
gcc60   9200   72G   2x1.3 GHz Intel Itanium 2 (Madison) / 6 GB RAM / HP zx6000 / Debian ia64
gcc61   9201   36G   2x0.55 GHz HP PA-8600 / 3.5 GB RAM / HP 9000/785/J6000 / Debian hppa
gcc62   9202   36G   6x0.4 GHz TI UltraSparc II (BlackBird) / 5 GB RAM / Sun Enterprise 4500 / Debian sparc
gcc63   9203   72G   8x4x1 GHz Sun UltraSparc T1 (Niagara) / 8 GB RAM / Sun Fire T1000 / Debian sparc
gcc64   9204   72G   1 GHz Sun UltraSPARC-IIIi / 1 GB RAM / Sun V210 / OpenBSD 5.1 sparc64
gcc66   9206   72G   2x1.3 GHz Intel Itanium 2 (Madison) / 12 GB RAM / HP rx2600 / Debian ia64
gcc100         1TB   2x2.6 GHz AMD Opteron 252 / 1GB RAM / Debian x86_64
gcc101         1TB   2x2.6 GHz AMD Opteron 252 / 1GB RAM / FreeBSD 8 x86_64
gcc200  8010   80G   4x0.4 GHz TI UltraSparc II (BlackBird) / 4 GB RAM / Sun E250 / Gentoo sparc64
gcc201  8011   80G   4x0.4 GHz TI UltraSparc II (BlackBird) / 4 GB RAM / Sun E250 / Gentoo sparc64
Note: /home is shared between gcc100 and gcc101.
News
See https://portal.cfarm.net/news/ for more recent news and a RSS feed.
- 20170721 gcc202 UltraSparc T5 is now online
- 20170526 gna.org closed, new site is https://cfarm.tetaneutral.net/
- 20170519 gcc67 and gcc68 AMD donated Ryzen machines are now online
- 20161122 gcc117 and gcc118 are now online
- 20130628 gcc111 is now online at http://osuosl.org
- 20111130 gcc66 is now online
- 20111102 gcc110 is now online at http://osuosl.org
- 20111027 gcc42/45/51 are back online at http://tetaneutral.net
- 20110318 gcc46 and gcc47 are now online
- 20110311 gcc20 is now online
- 20110212 gcc70 is now online
- 20110120 gcc45 is now online
- 20100729 gcc13/14 are back online
- 20100720 gcc10 is now online
- 20100702 gcc13/14 are offline
- 20100610 gcc43 is now online
- 20100610 gcc64 is now online
- 20100610 gcc52 repaired back online
- 20100510 gcc33/34/35/36/37 are now online
- 20100510 gcc63 is now online
- 20100311 gcc201 is now online
- 20100310 gcc16 power supply replaced, the machine is back online.
- 20100309 gcc100 is now online.
- 20100305 gcc11 /home disk replaced.
- 20100301 gcc200 is now online.
- 20100226 gcc16 and gcc17 power supply swapped, gcc16 offline, power supply replacement ordered
- 20100226 gcc11 /home read only: disk failing, replacement ordered
- 20100119 FSF Paris datacenter is now using cable for internet access
- 20100115 FSF Paris datacenter is back online using Wifi as temporary solution
- 20100105 FSF Paris datacenter is offline due to DSL line cut
- 20091223 gcc17 offline, after investigation: power supply dead
- 20091216 New machines gcc56, gcc57 and gcc42 are now online
- 20091204 gcc52 failing, now offline, support contacted
- 20091130 /opt/cfarm/mpc-0.8 installed.
- 20091005 gcc6x machines no longer need a proxy for http/ftp/web
- 20091005 installed /opt/cfarm/libelf-0.8.12 for LTO
- 20090929 gcc40/50/53/54 are up in their new FSF France datacenter in Paris
- 20090921 gcc11/12 are up in their new FSF France datacenter in Rennes
- 20090831 gcc12 is down, please use gcc13 until gcc12 is restored
- 20090814 planned downtime for gcc11/gcc12 at the end of august
- 20090601 gcc13 is now a hot backup of gcc12 for openvpn.
- 20090518 gcc62 Sun Enterprise 4500 with 6 processors is online
- 20090511 gcc61 upgraded: processor now PA8600 at 0.55 GHz and disk doubled to 36GB
- 20090504 gcc51 got a new working BIOS from Lemote engineers and is now back online.
- 20090501 farm user count reaches 101
- 20090423 discussions with hardware support for gcc51
- 20090408 installed /opt/cfarm/gmp-4.2.4 and /opt/cfarm/mpfr-2.4.1 on x86 and x86_64 machines
- 20090407 gcc0x are back online
- 20090312 gcc55 armv5tel-linux machine is online
- 20090304 gcc61 hppa-linux machine is online
- 20090304 gcc60 ia64-linux machine is online
- 20090301 gcc51 mips64el-linux machine is online, including 64 bits toolchain
- 20090224 gcc41 ia64-linux machine is online
- 20090220 gcc54 sparc64-linux machine is online
- 20090218 gcc40 powerpc64-linux machine is online
- 20090205 gcc52 mips64el-linux machine is online, 32 bits toolchain only
- 20090129 gcc53 powerpc-linux machine is online
- 20090123 gcc51 mips64el-linux machine is online but not stable yet, accounts to be created when stable.
- 20090122 gcc50 disk failed, new disk installed
- 20081214 add arm platform gcc50
- 20080704 all macaq.org machines (gcc01,2,3,5,6,7,9) are back online but without NFS crossmounts
- 20080618 gcc11 disk failed early morning and the machine has been reinstalled with a brand new disk
- 20080616 gcc17 was unreachable again, reboot and it's back. Need to plug them into a remote-controllable UPS plug.
- 20080606 gcc17 has been off for about 2 days and is now back online (no reason in logs).
- 20080524 installed gcc30, dual alpha EV56 machine
- 20080522 two qemu-arm machines available from gcc17: gcc171 and gcc172, thanks to arthur
- 20080518 gcc11 and gcc12 moved to new datacenter (same IP).
- 20080515 applied Debian security patch http://www.debian.org/security/2008/dsa-1576
- 20080515 gcc16 and gcc17 now have a public IP
- 20080509 installed gcc16 and gcc17
- 20080426 upgraded gcc04 to Ubuntu 8.04 LTS, unfortunately not stable
- 20080415 installed the locales needed by the c++ testsuite on all machines
- 20080414 macaq.org machines are back online except gcc03
- 20080411 skyrock.com network down for a few hours, no machine reboot
- 20080321 macaq.org machines unreachable
- 20080314 gcc15 is online in http://www.inria.fr/saclay/ datacenter
- 20080313 gcc14 is online in skyrock.com datacenter
- 20080307 gcc04 is online for tests, AMD Phenom quad core based machine
- 20080229 gcc11 /home disk has been replaced by a brand new disk
- 20080223 gcc13 Mail now works.
- 20080222 gcc13 back online in new http://www.skyrock.com/ datacenter
- 20080220 gcc11 offline for disk testing and FS rebuild
- 20080213 gcc08 offline for two months, Mail now works on all CFARM machines.
- 20080204 CFARM reaches 25 users (26 with LaurentGuerby)
- 20080122 Created GNA project and mailing list https://gna.org/projects/gcc-cfarm/
- 20080121 gcc11 up again after FS rebuild, gcc13 disk failure analysis started
- 20080116 gcc01..7+9 are up again at the http://www.macaq.org/ datacenter (email not working though)
- 20071206 gcc11 and gcc13 down, gcc13 disk dead
- 20071204 Everything is back to normal, Mail is working.
- 20071202 Mail is still not working and gcc11 and 13 crashed, stay tuned
- 20071125 gcc11/12/13 moved to new datacenter, downtime 1700 UTC to 2000 UTC
- 20070722 gcc11/12/13 moved to datacenter, gcc01..09 stopped, gcc08 online at a temporary location
- 20070624 gcc11/12/13 installed
- 20070519 GCC 4.2.0 installed in /n/b01/guerby/release/4.2.0/bin
- 20070222 8 GCC release installs (all languages, Ada included) are available in /n/b01/guerby/release/X.Y.Z/bin
- 20061113 svn 1.4.2 is available in /opt/cfarm/subversion-1.4.2/bin (shaves 200MB from a checkout/update)
- 20061101 Installed packages so that 4.3 bootstraps (thanks manuel)
- 20060122 Automatic regression tester (a build and check for c,ada for each revision on trunk)
- 20060121 Ada regression hunt 14 build and check on CFARM
- 20051215 Uniform NFS mounts between machines
- 20051214 Created a Welcome guide for CFARM users
- 20051213 8 machines up in biprocessor mode, all should reboot without console intervention as long as there is no extended power cut
- 20051213 1 machine has a dead disk
- 20051212 6 machines setup with tools and user accounts, updated to ubuntu 5.10
- 20051210 First full bootstrap+test cycle, see http://gcc.gnu.org/ml/gcc-testresults/2005-12/msg00569.html
- 20051208 isvtec.com staff has now put the machines online (one machine has a disk problem, all are recognizing only one CPU)
- 20051201 The machines are now in their new datacenter (telecity, Paris, France) but not online yet
- 20050830 The machines are now in their datacenter (redbus, Paris, France) but not online yet
- 20050815 Following the directions of EmmanuelDreyfus, LaurentGuerby has installed NetBSD 2.0.2 on one of the machines.
- 20050807 LaurentGuerby has installed Ubuntu 5.04 on the 9 machines
History and Sponsors
In August 2005 FSF France received as a donation from BNP Paribas 9 Dell Poweredge 1550 dual-processor 1U machines, each with one SCSI disk and 1 GB RAM, totalling 19.5 GHz of processors, distributed as follows:
- 3 dual Pentium III 1.25 GHz (two with 36 GB disks, one with 18 GB)
- 5 dual Pentium III 1.00 GHz (18 GB disk each)
- 1 dual Pentium III 0.933 GHz (36 GB disk, but the disk died 20051213)
The machines are about four years old, so of course there may be hardware problems in the coming years, but we might also be able to get cheap parts on the used market (or from other donations).
Hosting for those 9 1U machines is donated by the http://isvtec.com/ staff in a Paris datacenter (provided we maintain low use of external bandwidth).
In June 2007 FSF France purchased 3 Dell SC1345 to replace older Dells that were taken offline in http://isvtec.com datacenter.
In January 2008 http://www.macaq.org/ donated hosting for the older Dells which were brought back online.
In February 2008 http://www.skyrock.com/ donated hosting and gcc13 was moved to the new datacenter.
In March 2008:
- BNP Paribas convinced Dell to give FSF France a discounted price for gcc14.
- http://www.skyrock.com/ donated hosting for one more machine, gcc14.
- INRIA Saclay donated hosting for gcc15, http://www.inria.fr/saclay/
The GCC Compile Farm wants to thank all the sponsors who make this project in support of free software a reality.
In May 2008 the GCC Compile Farm gained two dual quad-core machines, gcc16 and gcc17, donated by AMD, with hosting donated by INRIA Saclay; many thanks to:
- Sebastian Pop and Christophe Harle of AMD for donating the two machines
- Albert Cohen, Sylvain Girbal and Philippe Lubrano of INRIA Saclay for donating hosting and setup help
- Loic Dachary and Eva Mathieu of FSF France for handling orders of various equipment including a UPS
In May 2008 the GCC Compile Farm gained access to an alphaev56 machine at LRI: http://www.lri.fr/
In July 2008 the GCC Compile Farm gained access to a sparc machine at LRI: http://www.lri.fr/
In December 2008 the GCC Compile Farm gained access to an ARM machine.
In January 2009 the GCC Compile Farm gained access to MIPS and powerpc32 machines.
In February 2009 the GCC Compile Farm gained access to powerpc64 provided by a private donor and an ia64 machine donated by LORIA http://www.loria.fr/ who got it from HP http://www.hp.com/
In March 2009 the GCC Compile Farm gained access to a dual ia64 Madison machine and a dual PA8500 machine both hosted and donated by Thibaut VARENE from http://www.pateam.org/ , hosting provided by ESIEE Paris http://www.esiee.fr/
In March 2009 the GCC Compile Farm gained access to a machine with ARM Feroceon 88FR131 at 1.2 GHz, a "SheevaPlug" prototype donated by Marvell http://www.marvell.com
In May 2009 the GCC Compile Farm gained access to a Sun Enterprise 4500 with 6 cpus, machine donated by William Bonnet http://www.wbonnet.net/ , installed by Thibaut VARENE from http://www.pateam.org/ , hosting provided by ESIEE Paris http://www.esiee.fr/
In March 2010 the GCC Compile Farm gained access to a pair of Sun E250s with 4 CPUs each, hosting and machines donated by Chris from Melbourne
In March 2010 the GCC Compile Farm gained access to a bi-Opteron machine in San Francisco, USA, hosting donated by vianet and machine donated by http://www.hackershells.com/
In May 2010 the GCC Compile Farm gained access to 5 Efika MX Client Dev Boards donated by Genesi USA http://www.genesi-usa.com
In June 2010 the GCC Compile Farm gained access to a powerpc G4 Mac Mini donated by Jerome Nicolle, installed by Dominique Le Campion
In June 2010 the GCC Compile Farm gained access to a sparc64 V210 server donated by Arthur Fernandez, installed by Thibaut VARENE from http://www.pateam.org/ , hosting provided by ESIEE Paris http://www.esiee.fr/
In July 2010 the GCC Compile Farm gained one 24-core machine with 64 GB of RAM, gcc10; the two twelve-core Magny-Cours processors were donated by AMD and funding for the rest of the machine was provided by FSF France.
In July 2010 http://smile.fr donated hosting for gcc13 and gcc14.
In February 2011 Infosat Telecom http://www.infosat-telecom.fr/ donated hosting for gcc70
In March 2011 Intel http://www.intel.com donated one 12-core, 24-thread machine with 24 GB of RAM, and two Atom D510 systems
In March 2011 IRILL http://www.irill.org/ donated hosting for many farm machines
In October 2011 FSF France http://www.fsffrance.org/ sponsored hosting of farm machines at http://tetaneutral.net in Toulouse, France
In November 2011 IBM http://ibm.com/ made available a POWER7 server hosted at OSUOSL http://osuosl.org/
In August 2016 the GCC Compile Farm gained two 8-core AArch64 machines with 16 GB of RAM, gcc117 and gcc118, donated by AMD.