This section lists changes that people frequently request, but which we do not make because we think GCC is better without them.
Checking the number and type of arguments passed to a function that has only an old-style definition and no prototype. Such a feature would work only occasionally: only for calls that appear in the same file as the called function, following the definition. The only way to check all calls reliably is to add a prototype for the function. But adding a prototype eliminates the motivation for this feature, so the feature is not worthwhile.
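For illustration, here is a minimal sketch (the function name is invented); the prototype is what lets the compiler check every call:

     /* An old-style definition by itself can be checked only against
        calls that follow it in the same file.  */
     int
     scale (x, factor)
          int x;
          int factor;
     {
       return x * factor;
     }

     /* A prototype, normally placed in a header that every caller
        includes, lets the compiler diagnose calls with the wrong
        number or types of arguments.  */
     int scale (int x, int factor);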
Shift count operands are probably signed more often than unsigned, so warning whenever a signed expression is used as a shift count would cause far more annoyance than good.
Assignments of signed values to unsigned variables must be very common; warning about them would cause more annoyance than good.
Coming as I do from a Lisp background, I balk at the idea that there is something dangerous about discarding a value. There are functions that return values which some callers may find useful; it makes no sense to clutter the program with a cast to void whenever the value isn't useful.
Making -fshort-enums the default.
This would cause storage layout to be incompatible with most other C compilers. And it doesn't seem very important, given that you can get the same result in other ways. The case where it matters most is when the enumeration-valued object is inside a structure, and in that case you can specify a field width explicitly.
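For example (a sketch with invented names), when the enumeration value lives inside a structure you can fix its size with an explicit bit-field width instead of changing the layout of every enum with -fshort-enums:

     enum color { RED, GREEN, BLUE };

     struct pixel
     {
       unsigned int color : 8;   /* holds an enum color value in 8 bits */
       unsigned int flags : 8;
     };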
The ISO C standard leaves it up to the implementation whether a bit-field declared plain int is signed or not. This in effect creates two alternative dialects of C.
The GNU C compiler supports both dialects; you can specify the signed dialect with -fsigned-bitfields and the unsigned dialect with -funsigned-bitfields. However, this leaves open the question of which dialect to use by default.
Currently, the preferred dialect makes plain bit-fields signed, because this is simplest. Since int is the same as signed int in every other context, it is cleanest for them to be the same in bit-fields as well.
Some computer manufacturers have published Application Binary Interface standards which specify that plain bit-fields should be unsigned. It is a mistake, however, to say anything about this issue in an ABI. This is because the handling of plain bit-fields distinguishes two dialects of C. Both dialects are meaningful on every type of machine. Whether a particular object file was compiled using signed bit-fields or unsigned is of no concern to other object files, even if they access the same bit-fields in the same data structures.
A given program is written in one or the other of these two dialects. The program stands a chance to work on most any machine if it is compiled with the proper dialect. It is unlikely to work at all if compiled with the wrong dialect.
Many users appreciate the GNU C compiler because it provides an environment that is uniform across machines. These users would be inconvenienced if the compiler treated plain bit-fields differently on certain machines.
Occasionally users write programs intended only for a particular machine type. On these occasions, the users would benefit if the GNU C compiler were to support by default the same dialect as the other compilers on that machine. But such applications are rare. And users writing a program to run on more than one type of machine cannot possibly benefit from this kind of compatibility.
This is why GCC does and will treat plain bit-fields in the same fashion on all types of machines (by default).
There are some arguments for making bit-fields unsigned by default on all machines. If, for example, this becomes a universal de facto standard, it would make sense for GCC to go along with it. This is something to be considered in the future.
(Of course, users strongly concerned about portability should indicate explicitly in each bit-field whether it is signed or not. In this way, they write programs which have the same meaning in both C dialects.)
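Such an explicit declaration might look like this (a sketch with invented member names):

     struct flags
     {
       signed int mode : 3;      /* explicitly signed: same in both dialects */
       unsigned int level : 4;   /* explicitly unsigned: same in both dialects */
       int state : 2;            /* plain int: signedness depends on the dialect */
     };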
Undefining __STDC__ when -ansi is not used.
Currently, GCC defines __STDC__ unconditionally. This provides good results in practice.
Programmers normally use conditionals on __STDC__ to ask whether it is safe to use certain features of ISO C, such as function prototypes or ISO token concatenation. Since plain gcc supports all the features of ISO C, the correct answer to these questions is "yes".
Some users try to use __STDC__ to check for the availability of certain library facilities. This is actually incorrect usage in an ISO C program, because the ISO C standard says that a conforming freestanding implementation should define __STDC__ even though it does not have the library facilities. gcc -ansi -pedantic is a conforming freestanding implementation, and it is therefore required to define __STDC__, even though it does not come with an ISO C library.
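In other words, a test such as the following sketch proves nothing about the library (the header chosen here is just an example):

     /* Incorrect: __STDC__ describes the language, not the library.
        A conforming freestanding implementation defines __STDC__ even
        though <stdio.h> and the rest of the hosted library may be
        absent.  */
     #if __STDC__
     #include <stdio.h>
     #endif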
Sometimes people say that defining __STDC__ in a compiler that does not completely conform to the ISO C standard somehow violates the standard. This is illogical. The standard is a standard for compilers that claim to support ISO C, such as gcc -ansi, not for other compilers such as plain gcc. Whatever the ISO C standard says is relevant to the design of plain gcc without -ansi only for pragmatic reasons, not as a requirement.
GCC normally defines __STDC__ to be 1, and in addition defines __STRICT_ANSI__ if you specify the -ansi option, or a -std option for strict conformance to some version of ISO C.
On some hosts, system include files use a different convention, where __STDC__ is normally 0, but is 1 if the user specifies strict conformance to the C Standard. GCC follows the host convention when processing system include files, but when processing user files it follows the usual GNU C convention.
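Consequently, code that wants to expose something only when strict conformance has not been requested should test __STRICT_ANSI__ rather than __STDC__. A hypothetical sketch:

     /* __STDC__ is 1 either way; __STRICT_ANSI__ is defined only when
        -ansi or a strict -std option is given.  The declaration below
        is invented for illustration.  */
     #ifndef __STRICT_ANSI__
     /* Declarations beyond the requested ISO C standard can be
        exposed here, as many system headers do.  */
     extern int my_nonstandard_call (void);
     #endif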
Undefining __STDC__ in C++.
Programs written to compile with C++-to-C translators get the value of __STDC__ that goes with the C compiler that is subsequently used. These programs must test __STDC__ to determine what kind of C preprocessor that compiler uses: whether they should concatenate tokens in the ISO C fashion or in the traditional fashion.
These programs work properly with GNU C++ if __STDC__ is defined. They would not work otherwise.
In addition, many header files are written to provide prototypes in ISO C but not in traditional C. Many of these header files can work without change in C++ provided __STDC__ is defined. If __STDC__ is not defined, they will all fail, and will all need to be changed to test explicitly for C++ as well.
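Such a header typically contains a conditional like the following sketch (the declaration is invented for illustration):

     #if __STDC__
     extern long lookup (const char *name, int flags);   /* ISO C prototype */
     #else
     extern long lookup ();                               /* traditional C */
     #endif

Because GNU C++ defines __STDC__, the prototyped branch is used and the header compiles as C++ without modification; if __STDC__ were undefined in C++, the conditional would also have to test __cplusplus.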
Historically, GCC has not deleted "empty" loops under the assumption that the most likely reason you would put one in a program is to have a delay, so deleting them will not make real programs run any faster.
However, that practice rests on the assumption that optimization of a nonempty loop cannot produce an empty one; this holds for C but is not always the case for C++. Moreover, with -funroll-loops small "empty" loops are already removed, so the current behavior is both sub-optimal and inconsistent and will change in the future.
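For example, a delay written as an empty loop, such as the sketch below, may then be removed by the optimizer; declaring the loop counter volatile keeps the loop, because accesses to a volatile object may not be optimized away:

     /* A busy-wait delay written as an empty loop.  A compiler that
        deletes empty loops may remove it entirely.  */
     void
     delay (void)
     {
       int i;
       for (i = 0; i < 1000000; i++)
         ;
     }

     /* Declaring the counter volatile forces the loads and stores to
        remain, and with them the loop.  */
     void
     delay_volatile (void)
     {
       volatile int i;
       for (i = 0; i < 1000000; i++)
         ;
     }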
It is never safe to depend on the order of evaluation of side effects. For example, a function call like this may very well behave differently from one compiler to another:
     void func (int, int);
     int i = 2;
     func (i++, i++);
There is no guarantee (in either the C or the C++ standard language definitions) that the increments will be evaluated in any particular order. Either increment might happen first. func might get the arguments 2, 3, or it might get 3, 2, or even 2, 2.
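If a particular ordering is required, it has to be written explicitly; one possible rewrite of the call above is sketched below:

     /* Separate statements introduce sequence points, so the
        arguments are now always 2 and 3.  */
     int i = 2;
     int first = i++;
     int second = i++;
     func (first, second);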
Strictly speaking, there is no prohibition in the ISO C standard against allowing structures with volatile fields in registers, but it does not seem to make any sense and is probably not what you wanted to do. So the compiler will give an error message in this case.
Some ISO C testsuites report failure when the compiler does not produce an error message for a certain program.
ISO C requires a "diagnostic" message for certain kinds of invalid programs, but a warning is defined by GCC to count as a diagnostic. If GCC produces a warning but not an error, that is correct ISO C support. If test suites call this "failure", they should be run with the GCC option -pedantic-errors, which will turn these warnings into errors.