GCC Compile Farm Project

What is the GCC Compile Farm

The GCC Compile Farm project maintains a set of machines of various architectures and provides SSH access to Free Software developers (GCC and others; GPL, BSD, MIT, ...) so they can build, test and debug Free, Libre and Open Source Software. It is not a free cluster for computationally intensive computing using Free Software. Once your account application (see below) is approved, you get full SSH access to all the Compile Farm machines, current and new.

See the detailed list here, with architectures and OS: https://portal.cfarm.net/machines/list/


How to obtain an account?

If you are working on a piece of free software (GCC or any other GPL, BSD, MIT, ... project) and need SSH access to the farm for compilation, debugging and testing on various architectures, you may apply for a GCC Compile Farm account at https://portal.cfarm.net/users/new/

After approval and account creation, the compile farm machines should be used only for free software development; see this free software license list.

See "usage" below for usage rules.

How to Get Involved?

There is a mailing list for farm-related discussions: https://lists.tetaneutral.net/listinfo/cfarm-users

Please don't use this mailing list for support requests (see https://portal.cfarm.net/tickets/ instead).

Hardware Wishlist

Similar projects

Usage

Warning: compile farm machines' disks are not RAIDed and are not backed up. Data may be removed or lost without notice. Don't assume anything about data persistence; please use git/SVN or rsync to keep your scripts and crontab somewhere safe.
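For example, a minimal sketch of backing up your crontab and scripts to a machine you control (the remote host and paths are hypothetical placeholders):

# Save the current crontab next to your scripts, then copy everything
# to an external machine (replace user@example.org and the paths with your own).
crontab -l > ~/scripts/crontab.backup
rsync -avz ~/scripts/ user@example.org:backup/cfarm-scripts/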

The mailing list, SSH keys and a ticket system are hosted at https://portal.cfarm.net/. If you need to replace your SSH key, use the Lost password button on https://portal.cfarm.net/login/

To request the installation of a package or to report a problem with a machine, please use the tracking system in preference to the mailing list.

Use the ulimit command to reduce the risk of your script/program causing a denial of service. Example: ulimit -S -t 3600 -v 2000000 -u 1200
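A minimal sketch of a wrapper script that applies such limits before launching a build (the limit values are the ones from the example above, and the build command is only an illustration; adjust both to your job):

#!/bin/sh
# Soft-limit CPU time to 1 hour, virtual memory to about 2 GB and the
# number of processes to 1200 for everything started from this script.
ulimit -S -t 3600 -v 2000000 -u 1200
# Example workload: a parallel build kept well below the core count.
make -j2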

Some machines have limited RAM and CPU, so please do not set up a crontab on those machines without discussing it on the mailing list first.

On machines with limited disk space, please clean up automatically as much as possible; on other machines, do not fill the disk with old, unused files.

For automatic jobs on an N-core machine, please launch no more than N/2 runnable processes in total, and if you see that your cron job runs at the same time as another user's, please coordinate a time shift (see the sketch below). You can also use https://portal.cfarm.net/munin/ to plan your cron jobs.
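As an illustration only (the time, script path and job count are hypothetical), a crontab entry that runs a nightly job at low priority and keeps parallelism at N/2 on an 8-core machine:

# m h dom mon dow  command
# Nightly build at 03:15, reniced to the lowest CPU priority,
# using 4 parallel jobs on an 8-core machine (N/2). The script is a placeholder.
15 3 * * * nice -n 19 $HOME/scripts/nightly-build.sh -j 4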

The systems are provided for the development of Free, Libre and Open Source Software, not for using software or running computationally-intensive computations unrelated to building, testing and debugging.

Services and software installed on farm machines

On most CFARM machines:

Build tips (probably not up-to-date):

Project Ideas

Currently Running

Port GCC to Intel's 16-bit architecture.

RaskIngemannLambertsen is trying to port GCC to the Intel 8086 CPU family. Nodes gcc01, gcc03, gcc04, gcc07, gcc08 and gcc09 are used for testing patches that could affect existing targets. Tests are run at low priority and use of the nodes is sporadic. The Intel 8086 CPU has both 8-bit registers and 16-bit registers. The work on getting GCC to fully support such CPUs includes:

  1. Fixing the assumption in subreg_get_info() (rtlanal.c) that if a value is stored in multiple hard registers, then those hard registers all have the same size. To fix that, subreg_get_info() will be rewritten. The targets that are the trickiest to get right are i?86-*-* with -m128bit-long-double and powerpc-unknown-eabispe. Note: This part has been postponed because the new lower-subreg pass reduces the problem and I've worked around the cases that subreg_get_info() can't currently handle.
  2. In reload.c, fixing find_valid_class() and its callers, which have the same problem as subreg_get_info().
  3. Fixing unspecified reload bugs as they turn up.

General bug fixes and enhancements are also tested from time to time.

Maintaining the GNU/Hurd tool chain

tschwinge is using node gcc45 for maintaining the GNU/Hurd toolchain. This means building cross-binutils, cross-GCC, cross-compiling glibc, and suchlike. Working with various versions of the involved programs means using a lot of disk space; feel free to request a cleanup if you need space on the machine's storage.

Automatic bootstrap and regression testing

One can use the script contrib/patch_tester.sh from the GCC sources to set up an automatic tester on a machine. The patch should contain several markers that instruct the tester where to send the results by email, what branch and revision of GCC to use, and the configure and make flags. One can use the prepare_patch.sh script to fill in all this information and to select the defaults for each case.

An example of a patch header for the HEAD version of autovect-branch, configuring only c, c++, and fortran, using vectorization during bootstrap, and only checking the vectorization-specific tests:

email:foo@bar.com
branch:autovect-branch
revision:HEAD
configure:--enable-languages=c,c++,fortran
make:CFLAGS="-g" BOOT_CFLAGS="-O2 -g -msse2 -ftree-vectorize"
check:RUNTESTFLAGS=vect.exp

Autobuilds for coLinux

HenryNe is using node gcc11 for building coLinux from source. It uses the mingw32 cross target and runs once per day with low priority.

CGNU Project

rpeckhoff is documenting the operation of the current gcc build system on nodes gcc11-gcc14. He is using graphviz, Doxygen, and his own scripts to help discover and document source interdependencies. His project's progress is at http://cgnu.rpeckhoff.org/.

Testing

StevenBosscher is running LRA branch bootstrap and check in a loop with languages c, c++ and fortran on ia64-unknown-linux-gnu and powerpc64-unknown-linux-gnu. There is also a cross-tester from powerpc64-linux to mipsisa64-elf for languages c and c++ only. Successful build+test cycles are reported to the gcc-testresults mailing list. The testers run within 24 hours of a change on the LRA branch.

host   arch      branch loop time
gcc110 powerpc64 lra    2h00 (-j 8)
gcc110 mipsisa64 lra    2h00 (-j 8)
gcc66  ia64      lra    8h30 (-j 2)

Developing the Win64 port of GCC

The mingw-w64 project (http://mingw-w64.sf.net/) is committed to creating a viable platform for using gcc on Windows natively. We run builds and testsuites constantly, and foster the development and porting of mainstream applications such as Firefox (http://www.mozilla.org) and VLC (http://www.videolan.org) to the Win64 platform.

Cross compile testing

MikeStein is running cross compile tests at a low priority and reports the results to gcc-testresults. He tests various branches, patches, and targets.

RTEMS Project

JoelSherrill is periodically running cross compile tests of various RTEMS (http://www.rtems.org) targets and reporting the results to gcc-testresults. The current focus is on the GCC SVN trunk with the binutils, gdb, and newlib CVS heads. C, C++, and Ada languages are tested where possible. The targets currently tested are listed below along with the RTEMS Board Support Package (BSP) and simulator used.

The bfin and m68k (ColdFire) targets will be added once Skyeye (http://www.skyeye.org) adds support for some instructions that GCC 4.3 and newer generate and which it currently does not support.

There are some test infrastructure issues which negatively impact the results on all RTEMS targets.

RTEMS testing is normally done on gcc12. It is not currently run automatically and may move to another machine when it is done automatically.

BTG Project

BTG is a BitTorrent p2p client with a daemonized backend. Daily builds/packaging/regression testing.

GNU Guile daily builds

Ludovic Courtès builds and runs the test suite of GNU Guile on gcc11 (x86-64), gcc30 (alphaev56) and gcc31 (sparc64) using Autobuild. Build results are available here.

GNU SASL, Libidn, Shishi, GSS, GnuTLS, etc daily builds

Simon Josefsson builds and runs the test suite of several projects. Build results are available here.

SBCL testing

GaborMelis and NikodemusSiivola build and test SBCL on x86-64, sparc, alpha and ppc.

CSQL Main Memory Database Testing

prabatuty builds and tests the CSQL main memory database on gcc14 (x86_64), sparc and ppc.

C++ library testing

JonathanWakely uses compile farm machines to build and test libstdc++.

lvv::array

An STL-compatible C++ container, specialized for x86_64 and capable of vector operations.

LOPTI

A mathematical optimization library (derivative-free, unconstrained solvers).

Botan

Botan is a BSD-licensed crypto library. JackLloyd uses the compile farm to test builds and develop CPU-specific optimizations, mostly on the non-x86/x86_64 machines.

YAPET

YAPET is a GPL-licensed, text-based password manager. RafaelOstertag uses the compile farm to test and ensure interoperability of the binary file structure between different architectures.

lnostdal

(No project name yet; this is really a mix of projects. Reach me at larsnostdal@gmail.com.)

FIM : Fbi IMproved

GNU CLISP

Ruby

TanakaAkira uses the compile farm to test Ruby.

Perl

NicholasClark uses the compile farm to test Perl portability on platforms he doesn't otherwise have access to.

Stellarium

HansLambermont uses the compile farm for continuous build integration of the Stellarium project.

Xapian

Olly Betts is using the compile farm for portability testing of Xapian.

Buildroot

PeterKorsgaard is running randpackageconfig builds of Buildroot on gcc10.

FATE

MichaelKostylev is running Libav Automated Testing Environment on various machines.

Concurrency Kit

Samy Al Bahra is porting and performance testing Concurrency Kit on idle machines.

DynCall

DanielAdler is porting dyncall to sparc/sparc64 on gcc54 and gcc64.

OpenSCAD

Don B does porting, regression testing, and bug fixing of OpenSCAD on gcc20, gcc76, and gcc110.

BRL-CAD

ChristopherSeanMorrison is involved in ongoing cross-platform compilation testing of BRL-CAD. Most day-to-day testing is on gcc10 and gcc110. The compile farm is a fantastic resource (thank you!).

POWER architecture

SeanMcGovern wants to use the IBM POWER nodes to both learn the architecture and do some light porting work.

Libamqp

EamonWalshe is using the compile farm to build and test libamqp with various gcc versions and different architectures.

Biosig

Biosig is a software library for processing biomedical signal data (like EEG, ECG). It consists of several parts: "biosig for octave and matlab" provides functions for analysing the data in Octave and Matlab, while biosig4c++/libbiosig provides a common interface for accessing over 40 different data formats. libbiosig is used for data conversion and within sigviewer (a viewing and scoring application), and has bindings to various languages. libbiosig also supports big/little-endian conversion for the various data formats and target platforms. AloisSchloegl is using the compile farm for testing libbiosig on various platforms. Eventually, the testing will be extended to related projects.

NaN-toolbox for Octave with OpenMP enabled (octave-nan)

The NaN-toolbox is a statistics and machine learning toolbox for use in Octave that supports parallel data processing on shared-memory multi-core machines. Some performance tests comparing Octave and Matlab have already been done (see http://pub.ist.ac.at/~schloegl/publications/TR_OpenMP_OctaveMatlabPerformance.pdf). AloisSchloegl would like to compare different multi-core systems with the same performance test.

Free Pascal nightly testsuite

Free Pascal is a free software Pascal compiler. It has a testsuite, and a number of machines perform compilation and testsuite runs every night. Results are visible on a special page. Pierre Muller added these nightly build/testsuite runs for powerpc and powerpc64 on the gcc110 machine.

GNU Texinfo

PatriceDumas uses the compile farm to test GNU Texinfo portability on platforms he doesn't otherwise have access to.

Wolfpack Empire

Empire is a real time, multiplayer, Internet-based war game, and one of the oldest computer games still being played. MarkusArmbruster uses the compile farm to ensure it stays portable.

Dpkg

GuillemJover uses the non-Linux hosts to port dpkg to those systems.

Libbsd, Libmd

GuillemJover uses the non-Linux hosts to port libbsd and libmd to those systems.

Your Project here

your description here

Compile farm system administration

Improvements to the management software running the farm

See https://framagit.org/compile-farm/gccfarm/-/issues/

New user

Users must request account creation themselves:

An admin can then approve or reject the pending requests:

Once approved, the user receives an email to define his/her password.

TODO: describe the scripts that actually create the user on all machines

Retiring/suspending a user

Go to the administration page and find the user: https://portal.cfarm.net/admin/gccfarm_user/farmuser/

On the user page, uncheck "Active" and save. This will prevent login on the management website, and also queue a task to remove the SSH keys from all farm machines. Either wait for the cron job that does it, or run the task manually if it's urgent.

Then, cleanup needs to be done manually if necessary. There are scripts to help (SSH as root on portal.cfarm.net):

Add a new machine

Preliminary work

Ansible setup

Everything here is done in /root/ansible on the admin VM.

For each playbook, first do a dry run with --check to review the changes that would be made, then run it again without --check to apply them:

# ansible-playbook setup.yml -l 'gccXXX' --diff --check
# ansible-playbook setup.yml -l 'gccXXX' --diff

# ansible-playbook munin-master.yml --diff --check
# ansible-playbook munin-master.yml --diff

# ansible-playbook packages.yml --diff --check
# ansible-playbook packages.yml --diff

Finishing the munin setup

After some time, the machine should start appearing in https://portal.cfarm.net/munin/

The munin master runs every 30 minutes, so the new machine will not appear immediately. In addition, it may need several munin runs to start showing data.

The final configuration is to add the machine to the aggregate disk graph: https://portal.cfarm.net/munin/gccfarm/all/df_abs.html

To do this, we need to know the "munin internal name" of the data disk (either "/home" if it is separate, or "/"). Look for the internal name here: https://portal.cfarm.net/munin/gccfarm/gccXXX/df_abs.html

Then add it "group_vars/munin-master.yml", and run the "munin-master.yml" playbook again.

Management software setup

We will now register the new machine on the compile farm website, and deploy user accounts and SSH keys.

su - admin-ansible
./manage.sh gather_info gccXXX

./manage.sh deploy_users -m gccXXX
^C

./manage.sh deploy_users --apply -m gccXXX

./manage.sh deploy_sshkeys -m gccXXX
^C
./manage.sh deploy_sshkeys --apply -m gccXXX

Install software packages on all machines

TODO

Cleanup data when a machine has a full disk

Tools

The first step is to determine how much data is used and by whom: "ncdu" is a great tool for this.

Then there is a script that uses ansible to compute the disk usage of a user on all farm machines:

/root/ansible/disk_usage.sh <username>

It might take a very long time to run if the user has lots of files. If you kill the script, the "du" operations will continue to run on the farm machines, so be careful not to kill and re-run the script several times or you will seriously impact I/O on the farm machines.

Cleanup strategy

Currently the strategy looks like this (it should eventually be automated):

1) look at who is taking the most space on machines that are near full

2) run the script that computes the disk usage of the user on *all* farm machines

# On portal.cfarm.net
/root/ansible/disk_usage.sh <login>

3) send an email asking to look at the data and clean up stale data, with

4) if the email bounces, try to find another email address that works

5) if you can't easily find another email, look manually at the data (with "ncdu", "ls -lhart") and clean up the data that takes the most space and "looks old" (i.e. created/accessed for the last time several years ago); see the sketch below
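A minimal sketch of this kind of manual inspection (the username and thresholds are hypothetical; double-check before deleting anything):

# Show the biggest directories under a user's home, largest first.
du -xsh /home/someuser/* 2>/dev/null | sort -rh | head -n 20
# List large files not accessed for roughly three years (removal candidates).
find /home/someuser -xdev -type f -atime +1095 -size +100M -ls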

List of machines

See https://portal.cfarm.net/machines/list/ for an up-to-date and automatically-generated list of machines. The information below may be severely out-of-date.

Datacenter http://www.fsffrance.org/ Rennes , static public IP, 100 Mbit/s up/down

name   disk  CPU          Notes
gcc10  2TB   2x12x1.5 GHz AMD Opteron Magny-Cours / 64 GB RAM / Supermicro AS-1022G-BTF / Debian x86-64
gcc12  580G  2x 2x2.0 GHz AMD Opteron 2212 / 4GB RAM / Dell SC1345 / Debian x86-64

Datacenter http://www.smile.fr/ , static public IP, 100 Mbit/s up/down

name   disk  CPU         Notes
gcc13  580G  2x2x2.0 GHz AMD Opteron 2212 / 4GB RAM / Dell SC1345 / Debian x86-64
gcc14  750G  2x4x3.0 GHz Intel Xeon X5450 / 16GB RAM / Dell Poweredge 1950 / Debian x86-64

Note: incoming ports for user services are limited to tcp/9400-9500

Datacenter http://www.inria.fr/saclay/ , static public IP , ssh only

name   disk  CPU         Notes
gcc15  160G  1x2x2.8 GHz Intel Xeon 2.8 (Paxville DP) / 1 GB RAM / Dell SC1425 / Debian x86-64
gcc16  580G  2x4x2.2 GHz AMD Opteron 8354 (Barcelona B3) / 16 GB RAM / Debian x86-64

Datacenter http://www.irill.org/ , static public IP

name   disk  CPU          Notes
gcc20   1TB  2x6x2.93 GHz Intel Dual Xeon X5670 2.93 GHz 12 cores 24 threads / 24 GB RAM / Debian amd64

Datacenter http://iut.ups-tlse.fr/ , static public IP

name   disk  CPU          Notes
gcc21   15TB  1x6x1.6 GHz Intel Xeon E5-2603 v3 6 cores / 64 GB RAM / Ubuntu amd64

Datacenter Infosat Telecom http://www.infosat-telecom.fr/ , Static public IP

name   port disk   CPU        Notes
gcc70       160G   2x3.2 GHz  Intel Xeon 3.2E (Irwindale) / 3 GB RAM / Dell Poweredge SC1425 / NetBSD 5.1 amd64

Datacenter tetaneutral.net http://tetaneutral.net/ , Toulouse, FRANCE, Static public IP and IPv6

name   port disk   CPU        Notes
gcc45        1TB   4x3.0  GHz AMD Athlon II X4 640 / 4 GB RAM / Debian i386
gcc22        NFS              MIPS Cavium Octeon II V0.1 EdgeRouter Pro
gcc23        NFS              MIPS Cavium Octeon II V0.1 EdgeRouter Pro
gcc24        NFS              MIPS Cavium Octeon II V0.1 EdgeRouter Pro
gcc67        3TB  8x2x3.4 GHz AMD Ryzen 7 1700X / 32G RAM / Debian GNU/Linux 9 (stretch)
gcc68        3TB  4x2x3.2 GHz AMD Ryzen 5 1400 / 32G RAM / Debian GNU/Linux 9 (stretch)

Datacenter INSA Rouen http://www.insa-rouen.fr , France

name   port disk   CPU        Notes
gcc75       2TB    4x2x3.4 GHz Core i7-2600 / 16 GB RAM / 2TB
gcc76       2TB    4x2x3.4 GHz Core i7-2600 / 16 GB RAM / 2TB

Gcc76 contains over 10 virtual machines, listed in /etc/hosts. /scratch is an NFS mount shared between them.

Datacenter OSUOSL http://osuosl.org/ , Oregon, USA, Static public IP

name   port disk   CPU           Notes
gcc110      2TB    2x8x4x3.55 GHz IBM POWER7 / 64 GB RAM / IBM Power 730 Express server / CentOS 7 ppc64
gcc111      2TB    4x6x4x3.70 GHz IBM POWER7 / 128 GB RAM / IBM Power 730 Express server / AIX 7.1
gcc112      2TB    2x10x8x3.42 GHz IBM POWER8 / 256 GB RAM / IBM POWER System S822 / CentOS 7 ppc64le
gcc113      500GB  8x2.4 GHz     aarch64 / 32 GB RAM / APM X-Gene Mustang board / Ubuntu 14.04.3 LTS
gcc114      500GB  8x2.4 GHz     aarch64 / 32 GB RAM / APM X-Gene Mustang board / Ubuntu 14.04.3 LTS
gcc115      500GB  8x2.4 GHz     aarch64 / 32 GB RAM / APM X-Gene Mustang board / Ubuntu 14.04.3 LTS
gcc116      500GB  8x2.4 GHz     aarch64 / 32 GB RAM / APM X-Gene Mustang board / Ubuntu 14.04.3 LTS
gcc117      500GB  8x2 GHz       aarch64 / 16 GB RAM / AMD Opteron 1100 / openSUSE Leap 42.1
gcc118      500GB  8x2 GHz       aarch64 / 16 GB RAM / AMD Opteron 1100 / openSUSE Leap 42.1
gcc119      2TB    2x8x4x4.15 GHz IBM POWER8 / 192 GB RAM / IBM POWER System S822 / AIX 7.2
gcc120      11TB   4x24x2x3.9 GHz IBM POWER10 / 2 TB RAM / IBM POWER System E1050 / AlmaLinux 9
gcc135      26TB   2x16x4x3.8 GHz IBM POWER9 / 264 GB RAM / IBM POWER System / CentOS 7 ppc64le

Datacenter http://www.fsffrance.org/ Paris , FTTH static IP, 100 Mbit/s down, 50 Mbit/s up

Currently empty

Datacenter Laurent GUERBY, http://www.guerby.org/, France, DSL dynamic IP, 10 Mbit/s down, 1 MBit/s up

Currently empty.

Datacenter http://www.hackershells.com/ San Francisco, USA, static public IP 1 Mbit/s

Currently empty.

Datacenter Melbourne, Australia 10 Mbit/s DSL

Currently empty.

Datacenter http://isvtec.com/ , static public IP, 100 Mbit/s up/down

Currently empty.

Datacenter http://www.macaq.org/ , DSL dynamic IP, 10 Mbit/s down, 1 MBit/s up, ubuntu breezy 5.10

Currently empty.

Datacenter http://www.mekensleep.com/ , DSL dynamic IP, 10 Mbit/s down, 1 MBit/s up

Currently empty.

Datacenter http://www.skyrock.com/ , static public IP, 1000 Mbit/s up/down

Currently empty.

Datacenter http://www.pateam.org/ http://www.esiee.fr/ , 100 MBit/s up/down

Currently empty.

Offline

name   port disk  CPU        Notes
gcc01  9061  16G   2x1.0  GHz Pentium 3 / 1 GB RAM / Dell Poweredge 1550 + additional 32 GB disk, donated
gcc02  9062  16G   2x1.0  GHz Pentium 3 / 1 GB RAM / Dell Poweredge 1550, donated
gcc03  9063  16G   2x1.26 GHz Pentium 3 / 1 GB RAM / Dell Poweredge 1550, donated
gcc05  9065  16G   2x1.0  GHz Pentium 3 / 1 GB RAM / Dell Poweredge 1550, donated
gcc06  9066  16G   2x1.0  GHz Pentium 3 / 1 GB RAM / Dell Poweredge 1550, donated
gcc07  9067  32G   2X1.26 GHz Pentium 3 / 1 GB RAM / Dell Poweredge 1550, donated
gcc09  9068  32G   2x0.93 GHz Pentium 3 / 1 GB RAM / Dell Poweredge 1550, donated
gcc08        32G   2x1.26 GHz Pentium 3 / 1 GB RAM / Dell Poweredge 1550, donated
gcc11        580G  2x 2x2.0 GHz AMD Opteron 2212 / 4GB RAM / Dell SC1345 / Debian x86-64 /home/disk dead 20120613
gcc17  580G  2x4x2.2 GHz AMD Opteron 8354 (Barcelona B3) / 16 GB RAM / Debian x86-64
gcc30        17G     0.4  GHz Alpha EV56 / 2GB RAM / AlphaServer 1200 5/400 => offline, to relocate
gcc31        51G   2x0.4  GHz TI UltraSparc II (BlackBird) / 2 GB RAM / Sun Enterprise 250 => offline, to relocate
gcc33 19033  1TB     0.8  GHz Freescale i.MX515 / 512 MB RAM / Efika MX Client Dev Board / Ubuntu armv7l
gcc34 19034  1TB     0.8  GHz Freescale i.MX515 / 512 MB RAM / Efika MX Client Dev Board / Ubuntu armv7l
gcc35 19035  1TB     0.8  GHz Freescale i.MX515 / 512 MB RAM / Efika MX Client Dev Board / Ubuntu armv7l
gcc36 19036  1TB     0.8  GHz Freescale i.MX515 / 512 MB RAM / Efika MX Client Dev Board / Ubuntu armv7l
gcc37 19037  1TB     0.8  GHz Freescale i.MX515 / 512 MB RAM / Efika MX Client Dev Board / Ubuntu armv7l
gcc38   1TB      3.2  GHz IBM Cell BE / 256 MB RAM / Sony Playstation 3 / Debian powerpc ex IRILL
gcc40  160G      1.8  GHz IBM PowerPC 970 (G5) / 512 MB RAM / Apple PowerMac G5 / Debian powerpc
gcc41  9091  18G     0.73 GHz Itanium Merced / 1GB RAM / HP workstation i2000 => too old please use gcc60
gcc42       160G     0.8  GHz ICT Loongson 2F / 512 MB RAM / Lemote Fuloong 6004 Linux mini PC / Debian mipsel
gcc43  9093  60G     1.4  GHz Powerpc G4 7447A / 1GB RAM / Apple Mac Mini
gcc46  250G      1.66 GHz Intel Atom D510 2 cores 4 threads / 4 GB RAM / Debian amd64 ex IRILL
gcc47  250G      1.66 GHz Intel Atom D510 2 cores 4 threads / 4 GB RAM / Debian amd64 ex IRILL
gcc49        2TB   4x0.9  GHz ICT Loongson 3A / 2 GB RAM / prototype board / Debian mipsel
gcc50  9080 250G     0.6  GHz ARM XScale-80219 / 512 MB RAM / Thecus N2100 NAS
gcc51        60G     0.8  GHz ICT Loongson 2F /   1 GB RAM / Lemote YeeLoong 8089 notebook / Debian mipsel
gcc52  9082  1TB     0.8  GHz ICT Loongson 2F / 512 MB RAM / Gdium Liberty 1000 notebook / Mandriva 2009.1 mipsel
gcc53  9083  80G   2x1.25 GHz PowerPC 7455 G4  / 1.5 GB RAM / PowerMac G4 dual processor
gcc54   36G      0.5  GHz TI UltraSparc IIe (Hummingbird) / 1.5 GB RAM / Sun Netra T1 200 / Debian sparc
gcc55  9085 250G     1.2  GHz Marvell Kirkwood 88F6281 (Feroceon) / 512 MB RAM / Marvell SheevaPlug / Ubuntu armel
gcc56  9086 320G     1.2  GHz Marvell Kirkwood 88F6281 (Feroceon) / 512 MB RAM / Marvell SheevaPlug / Ubuntu armel
gcc57  9087 320G     1.2  GHz Marvell Kirkwood 88F6281 (Feroceon) / 512 MB RAM / Marvell SheevaPlug / Ubuntu armel
gcc60  9200  72G   2x1.3  GHz Intel Itanium 2 (Madison) / 6 GB RAM / HP zx6000 / Debian ia64
gcc61  9201  36G   2x0.55 GHz HP PA-8600 / 3.5 GB RAM / HP 9000/785/J6000 / Debian hppa
gcc62  9202  36G   6x0.4  GHz TI UltraSparc II (BlackBird) / 5 GB RAM / Sun Enterprise 4500 / Debian sparc
gcc63  9203  72G   8x4x1  GHz Sun UltraSparc T1 (Niagara) / 8 GB RAM / Sun Fire T1000 / Debian sparc
gcc64  9204  72G       1  GHz Sun UltraSPARC-IIIi / 1 GB RAM / Sun V210 / OpenBSD 5.1 sparc64
gcc66  9206  72G   2x1.3  GHz Intel Itanium 2 (Madison) / 12 GB RAM / HP rx2600 / Debian ia64
gcc100       1TB   2x2.6 GHz  AMD Opteron 252 / 1GB RAM running Debian x86_64
gcc101       1TB   2x2.6 GHz  AMD Opteron 252 / 1GB RAM running FreeBSD 8 x86_64
gcc200 8010  80G   4x0.4 GHz  TI UltraSparc II (BlackBird) / 4 GB RAM / Sun E250 / Gentoo sparc64
gcc201 8011  80G   4x0.4 GHz  TI UltraSparc II (BlackBird) / 4 GB RAM / Sun E250 / Gentoo sparc64

Note: /home is shared between gcc100 and gcc101.

News

See https://portal.cfarm.net/news/ for more recent news and an RSS feed.

History and Sponsors

In August 2005 FSF France received as a donation from BNP Paribas nine dual-processor Dell PowerEdge 1550 1U machines, each with one SCSI disk and 1 GB RAM, totalling 19.5 GHz of processors, distributed as follows:

The machines are about four years old, so of course there may be hardware problems in the coming years, but we might also be able to get cheap parts on the used market (or from other donations).

Hosting for those 9 1U machines is donated by the http://isvtec.com/ staff in a Paris datacenter (provided we maintain low use of external bandwidth).

In June 2007 FSF France purchased 3 Dell SC1345 machines to replace older Dells that were taken offline in the http://isvtec.com datacenter.

In January 2008 http://www.macaq.org/ donated hosting for the older Dells which were brought back online.

In February 2008 http://www.skyrock.com/ donated hosting and gcc13 was moved to the new datacenter.

In March 2008

The GCC Compile Farm wants to thank all the sponsors who make this project in support of free software a reality.

In May 2008 the GCC Compile Farm gained two bi-quad-core machines, gcc16 and gcc17, donated by AMD, with hosting donated by INRIA Saclay; many thanks to:

In May 2008 the GCC Compile Farm gained access to an alphaev56 machine at LRI: http://www.lri.fr/

In July 2008 the GCC Compile Farm gained access to a sparc machine at LRI: http://www.lri.fr/

In December 2008 the GCC Compile Farm gained access to an ARM machine.

In January 2009 the GCC Compile Farm gained access to MIPS and powerpc32 machines.

In February 2009 the GCC Compile Farm gained access to a powerpc64 machine provided by a private donor and an ia64 machine donated by LORIA http://www.loria.fr/ who got it from HP http://www.hp.com/

In March 2009 the GCC Compile Farm gained access to a dual ia64 Madison machine and a dual PA8500 machine both hosted and donated by Thibaut VARENE from http://www.pateam.org/ , hosting provided by ESIEE Paris http://www.esiee.fr/

In March 2009 the GCC Compile Farm gained access to a machine with ARM Feroceon 88FR131 at 1.2 GHz, a "SheevaPlug" prototype donated by Marvell http://www.marvell.com

In May 2009 the GCC Compile Farm gained access to a Sun Enterprise 4500 with 6 cpus, machine donated by William Bonnet http://www.wbonnet.net/ , installed by Thibaut VARENE from http://www.pateam.org/ , hosting provided by ESIEE Paris http://www.esiee.fr/

In March 2010 the GCC Compile Farm gained access to a pair of Sun E250 with 4 cpus each, hosting and machine donated by Chris from Melbourne

In March 2010 the GCC Compile Farm gained access to a bi-Opteron machine in San Francisco, USA, hosting donated by vianet and machine donated by http://www.hackershells.com/

In May 2010 the GCC Compile Farm gained access to 5 Efika MX Client Dev Boards donated by Genesi USA http://www.genesi-usa.com

In June 2010 the GCC Compile Farm gained access to a powerpc G4 Mac Mini donated by Jerome Nicolle, installed by Dominique Le Campion

In June 2010 the GCC Compile Farm gained access to a sparc64 V210 server donated by Arthur Fernandez, installed by Thibaut VARENE from http://www.pateam.org/ , hosting provided by ESIEE Paris http://www.esiee.fr/

In July 2010 the GCC Compile Farm gained one 24-core machine with 64 GB of RAM, gcc10; the two twelve-core Magny-Cours processors were donated by AMD and funding for the rest of the machine was provided by FSF France.

In July 2010 http://smile.fr donated hosting for gcc13 and gcc14.

In February 2011 Infosat Telecom http://www.infosat-telecom.fr/ donated hosting for gcc70

In March 2011 Intel http://www.intel.com donated one 12-core, 24-thread machine with 24 GB of RAM, and two Atom D510 systems

In March 2011 IRILL http://www.irill.org/ donated hosting for many farm machines

In October 2011 FSF France http://www.fsffrance.org/ sponsored hosting of farm machines at http://tetaneutral.net in Toulouse, France

In November 2011 IBM http://ibm.com/ made available a POWER7 server hosted at OSUOSL http://osuosl.org/

In August 2016 the GCC Compile Farm gained two 8-core AArch64 machines with 16 GB of RAM, gcc117 and gcc118, donated by AMD.

History of this page before 20081219
