This is the mail archive of the gcc@gcc.gnu.org mailing list for the GCC project.



Re: a nifty feature for c preprocessor


On 28/12/11 21:57, R A wrote:

yes, i do realize that the c preprocessor is but a text substitution tool from days past, when programmers were only starting to develop the rudiments of high-level programming. but the reason i'm sticking with the c preprocessor is the fact that the code i write with it is extremely portable. copy the code and you can use it in any IDE or stand-alone compiler, it's as simple as that. i have considered using gnu make, writing scripts with m4 and other parsers or lexers, but sticking with the preprocessor's minimalism is still too attractive an idea.


If you want portable, use features that already exist. Lots of people write lots of C code that is portable across huge ranges of compilers and target processors.


And if you want portable pre-processing or code generation, use something that generates the code rather than inventing tools and features that don't exist and never will. It is also quite common to use scripts in languages like perl or python to generate tables and other pre-calculated values for inclusion in C code.

about the built-in features in C and C++ that alleviate the extensive use of the preprocessor, like inline functions and static consts: the fact is NOT ALL compilers out there will optimize a function so that it does not have to use a return stack. simply using a macro FORCES the compiler to do so. the same goes for static const: if you use a precompiled value, you are forcing immediate addressing, something of a good optimization. so it's still mostly an issue of portability of optimization.


Most modern compilers will do a pretty reasonable job of constant propagation and of calculating expressions built from constant values. And most will apply "inline" as you would expect, unless you intentionally hamper the compiler by not enabling optimisations. Using macros, incidentally, does not "FORCE" the compiler to do anything - I know of at least one compiler that will take common sections of code (whether from macros or "normal" text) and refactor them into artificial functions, spending stack space and run-time speed to reduce code size. And "immediate addressing" is not necessarily a good optimisation - beware of generalisations like that. Let the compiler do what it is good at - generating optimal code for the target in question - and don't try to second-guess it. You will only end up with bigger and slower code.
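
As a minimal sketch of that point (the names and values here are invented for illustration): with optimisation enabled, a typical modern compiler generates the same code for a function-like macro and for an equivalent "static inline" function, folding both down to a constant.

#include <stdio.h>

#define SCALE_MACRO(x)   ((x) * 16 + 3)

static inline int scale_func(int x)
{
    return x * 16 + 3;          /* same expression as the macro */
}

int main(void)
{
    /* At -O1 or higher, both calls are typically folded to the literal 163. */
    printf("%d\n", SCALE_MACRO(10));
    printf("%d\n", scale_func(10));
    return 0;
}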


templates i have no problem with; i wish there were a C dialect that could integrate them, so i wouldn't be forced to use C++ and all the bloat that usually comes from a lot of its implementation (by that i mean that performance close to C is, i think, very possible for C++'s library).


C++ does not have bloat. The only C++ feature that can occasionally lead to larger or slower code, or to fewer optimisations, than the same code in C is exceptions - if you don't need them, disable them with "-fno-exceptions". Other than that, C++ is zero-cost compared to C: you only pay for the features you use.


but, of course, one has to ask "if you're making your code portable
to any C compiler, why do you want gcc to change (or modify it for
your own use)? you should be persuading the c committee." well,
that's the thing, it's harder to do the latter, so by doing this, i
can demonstrate that it's a SIMPLE, but good idea.


It's not a good idea, and it would not be simple to implement.


I really don't want to discourage someone from wanting to contribute to gcc development, but this is very much a dead-end idea. I applaud your enthusiasm, but keep a check on reality - you are an amateur just starting C programming. C has been used for the last forty years - with gcc coming up for its 25th birthday this spring. If this idea were that simple, and that good, it would already be implemented. As you gain experience and knowledge with C (and possibly C++), you will quickly find that a preprocessor like you describe is neither necessary nor desirable.

mvh.,

David



----------------------------------------
Date: Wed, 28 Dec 2011 10:57:28 +0100
From: david@westcontrol.com
To: ren_zokuken01@hotmail.com
CC: gcc@gcc.gnu.org
Subject: Re: FW: a nifty feature for c preprocessor

On 28/12/2011 07:48, R A wrote:

i'm an amateur programmer who just started learning C. i like most of the features, especially the c preprocessor that it comes packed with. it's an extremely portable way of implementing metaprogramming in C.

though i've always thought it lacked a single feature -- an
"evaluation" feature.



I think you have missed the point about the C pre-processor. It is
not a "metaprogramming" language - it is a simple text substitution
macro processor. It does not have any understanding of the symbols
(except for "#") in the code, nor does it support recursion - it's
pure text substitution. Your suggestion would therefore need a
complete re-design of the C pre-processor. And the result is not a
feature that people would want.
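
A small self-contained illustration of that "pure text substitution" behaviour (the variable name is made up for the example): a self-referential macro is expanded exactly once and never recurses.

#include <stdio.h>

int foo = 0;
#define foo (foo + 1)      /* self-referential: expanded exactly once, no recursion */

int main(void)
{
    int x = foo;           /* becomes: int x = (foo + 1); the inner foo is the variable */
    printf("%d\n", x);     /* prints 1 */
    return 0;
}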

Many uses of the C pre-processor are deprecated with modern use of
C and C++. Where possible, it is usually better programming
practice to use a "static const" instead of a simple numeric
"#define", and a "static inline" function instead of a
function-like macro. With C++, even more pre-processor
functionality can be replaced by language features - templates give
you metaprogramming. There are plenty of exceptions, of course, but
in general it is better to use a feature that is part of the
language itself (C or C++) rather than the preprocessor.
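
A brief sketch of those replacements (the identifiers are invented for illustration):

/* Preprocessor-only style: */
#define BUFFER_SIZE_OLD   256
#define SQUARE_OLD(x)     ((x) * (x))

/* Usually better practice - typed objects and real functions: */
static const int buffer_size = 256;

static inline int square(int x)
{
    return x * x;
}

Note that SQUARE_OLD(i++) evaluates its argument twice, while square(i++) does not - one of the classic macro pitfalls the language features avoid.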

It looks like you are wanting to get the compiler to pre-calculate
results rather than have them calculated at run-time. That's a
good idea - so the gcc developers have worked hard to make the
compiler do that in many cases. If your various expressions here
boil down to constants that the compiler can see, and you have at
least some optimisation enabled, then it will pre-calculate the
results.
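
For example (a sketch; the constant values are invented for illustration), an expression like the one below is evaluated entirely at compile time, so no pre-processor "eval" is needed:

enum { a = 5, b = 10, c = 3 };

static const int n = (5*a)/c + (10*b)/c + ((5*a) % c + (10*b) % c)/c;   /* folded to 41 at compile time */

int get_n(void)
{
    return n;   /* with optimisation enabled this typically compiles to "return 41" */
}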


If you have particular need of more complicated pre-processing, then what you want is generally some sort of code generator. C has a simple enough syntax - write code in any language you want (C itself, or anything else) that outputs a C file. I've done that a few times, such as for scripts to generate CRC tables.
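
A minimal sketch of such a generator (file and symbol names invented for illustration): a small C program that writes out a pre-calculated CRC-32 lookup table, using the standard reflected polynomial, as a C source file to be compiled into the real project.

/* gen_crc32_table.c - run once and redirect its output to a .c file. */
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    printf("/* Generated file - do not edit. */\n");
    printf("const unsigned long crc32_table[256] = {\n");
    for (int i = 0; i < 256; i++) {
        uint32_t c = (uint32_t)i;
        for (int k = 0; k < 8; k++)
            c = (c & 1) ? 0xEDB88320u ^ (c >> 1) : c >> 1;
        printf("    0x%08lXul,%s", (unsigned long)c,
               (i % 4 == 3) ? "\n" : " ");
    }
    printf("};\n");
    return 0;
}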

And if you really want to use a pre-processing macro style, then
there are more powerful languages suited to that. You could use
PHP, for example - while the output of a PHP script is usually
HTML, there is no reason why it couldn't be used as a C
pre-processor.




say i have these definitions:

#define MACRO_1 (x/y)*y
#define MACRO_2 sqrt(a)
#define MACRO_3 calc13()
....
#define MACRO_15 (a + b)/c


now, all throughout the codebase, whenever any of MACRO_1, MACRO_2 (and so forth) needs to be called, it is conveniently "indexed" by another macro expansion:

#define CONCAT(a, b) a##b
#define CONCAT_VAR(a, b) CONCAT(a, b)

#define MASTER_MACRO(N) CONCAT_VAR(MACRO_, N)

now, if we use MASTER_MACRO with a "direct" value:

MASTER_MACRO(10)

or

#define N 10
MASTER_MACRO(N)

both will work.
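
For illustration, a complete compilable version of that working case (MACRO_10 here is a made-up entry standing in for the real table):

#include <stdio.h>

#define MACRO_10           (2 * 21)

#define CONCAT(a, b)       a##b
#define CONCAT_VAR(a, b)   CONCAT(a, b)
#define MASTER_MACRO(N)    CONCAT_VAR(MACRO_, N)

#define N 10

int main(void)
{
    /* N expands to 10 before the paste, so MASTER_MACRO(N) becomes
     * MACRO_10, which in turn expands to (2 * 21). */
    printf("%d\n", MASTER_MACRO(N));    /* prints 42 */
    return 0;
}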


but substitute this with:


#define N ((5*a)/c + (10*b)/c + ((5*a) % c + (10*b) % c)/c)

and MASTER_MACRO expands to:

MACRO_((5*a)/c + (10*b)/c + ((5*a) % c + (10*b) % c)/c)

which, of course, is wrong. there are other workarounds, and many times this scheme can be avoided altogether. but it could be made to work (elegantly) by adding an "eval" preprocessor operation:

so we redefine MASTER_MACRO this way:

#define MASTER_MACRO(N) CONCAT_VAR(MACRO_, eval(N))

which evaluates correctly.

this nifty trick (though a bit more extended than what i elaborated above) can also be used to *finally* have increments and decrements (among others). since "eval" forces the evaluation of an *arithmetic* expression (for now), it will force the evaluation of an expression, then define it to itself. this will of course trigger a redefinition warning from our beloved preprocessor, but the defined effect would be:

#define X (((14*x)/y)/z)      /* say this evaluates to simply 3 */

incrementing X will simply be:

#define X eval(eval(X) + 1)   /* 1) will be evaluated as 4 before any token substitution */
#define X eval(eval(X) + 1)   /* 2) will be evaluated as 5 before any token substitution */

that easy.

to suppress the redef warnings, we can have another directive like force_redef (which can only work in conjunction with eval):

#force_redef X eval(eval(X) + 1)


i'm just confused :-S... why hasn't this been suggested? i would love to have this incorporated (even just in test builds) into gcc. it would make my code so, so much more manageable and virtually extensible to more platforms.

i would love to have a go at it and probably modify the gcc preprocessor, but since i know nothing of its implementation details, i don't know where to begin. i was hoping that, this being a gnu implementation, it's been heavily modularized (the fact that gcc was heavily revised back then to use abstract syntax trees, gimple, etc., past version 2.95 -- ???). so i can easily "interrupt" the parsing operation (i wouldn't dare implement a pre-preprocessing operation, being big and redundant), then substitute the eval, then make the whole parsing go again.

any advice for a novice? thnx.





