It would be a very good thing to have an official way to generate 16-bit binaries on x86 (i386), even if all that is provided is a command-line flag that emits a .code16gcc directive at the top of the generated assembly file.
In the Linux kernel, we currently have to do a bunch of hacks to make sure that gcc doesn't issue any instructions *before* the .code16gcc directive:
$(call cc-option, -fno-toplevel-reorder,\
$(call cc-option, -fno-unit-at-a-time)) \
$(call cc-option, -fno-stack-protector) \
... and the list is likely to grow, which is why providing an actually supported compiler flag for this would be desirable.
Note that GCC does not even support real 16-bit code generation for x86, so pretending GCC's output is 16-bit code is a joke.
Why can't you just write the 16-bit binary support in assembly for the kernel?
It is much cleaner to have it in C. We converted the assembly code to C back in 2007 and it has been much easier to maintain ever since. It works fine, thankyouverymuch; it isn't *optimal* 16-bit code, but it is real and valid 16-bit code and we use it as such.
Sure, optimization would be nice. Do we care? Not a lot.
Nice idea. Not likely to be implemented any time soon, because it's quite complicated to get right.
It takes more than just placing a directive in the right spot. Even with the right set of flags the compiler would still emit 32-bit instructions. Actually disabling 32-bit instructions is a lot more work: it would mean adding a TARGET_16BITS check or an "enabled" attribute to almost all patterns in i386.md and friends. That would not help the vast majority of users, and kernel people would complain even more about the inevitable slowdown (and, as usual, point out the general incompetence of gcc hackers :-)
(Why/when is 16-bit code still necessary anyway? Before entering protected mode?)
You are missing the plain fact that *it is already working*.
.code16gcc is a binutils directive which takes 32-bit code emitted by gcc and assembles it to produce valid (although suboptimal) 16-bit code. So it really *is* just a matter of putting the directive in the right place.
And yes, it is used before entering protected mode.
Note that LLVM/Clang now has a -m16 option which does the same thing, again without needing dirty hacks to ensure that asm(".code16gcc") really *is* the first thing the assembler sees.
This could also be implemented in binutils as a --code16gcc option, in which case gcc users would have to use "-m32 -Wa,--code16gcc". Ugly, but it would work.
I put the initial -m16 support on hjl/x86/m16 branch at:
We need to add some run-time testcases and fix any bugs.
Date: Tue Jan 28 16:22:45 2014
New Revision: 207196
Add -m16 support for x86
The .code16gcc directive was added to binutils back in 1999:
'.code16gcc' provides experimental support for generating 16-bit code
from gcc, and differs from '.code16' in that 'call', 'ret', 'enter',
'leave', 'push', 'pop', 'pusha', 'popa', 'pushf', and 'popf'
instructions default to 32-bit size. This is so that the stack pointer
is manipulated in the same way over function calls, allowing access to
function parameters at the same stack offsets as in 32-bit mode.
'.code16gcc' also automatically adds address size prefixes where
necessary to use the 32-bit addressing modes that gcc generates.
It encodes the 32-bit instructions generated by GCC in 16-bit format so that GCC can be used to produce 16-bit code. For that to work, the .code16gcc directive must be placed at the very beginning of the assembly file. This patch adds -m16 to the x86 backend by:
1. Add -m16 and make it mutually exclusive with -m32, -m64 and -mx32.
2. Treat -m16 like -m32 so that --32 is passed to assembler.
3. Output .code16gcc at the very beginning of the assembly code.
4. Turn off 64-bit ISA when -m16 is used.
* config/i386/gnu-user64.h (SPEC_32): Add "m16|" to "m32".
* config/i386/i386.c (ix86_option_override_internal): Turn off
OPTION_MASK_ISA_64BIT, OPTION_MASK_ABI_X32 and OPTION_MASK_ABI_64.
(x86_file_start): Output .code16gcc for TARGET_16BIT.
* config/i386/i386.h (TARGET_16BIT): New macro.
* config/i386/i386.opt: Add m16.
* doc/invoke.texi: Document -m16.
Thanks. This appears to work for me to build the Linux kernel's 16-bit boot code with the patch at
Out of curiosity: given that hjl's -m16 patch has been merged, is there a reason for keeping this open? Is anyone hoping for more complete support for an x86-16 target, or at least to drop the .code16gcc kludge?
Fixed for 4.9.0.