Reputation: 224
I'd like information on the behavior of pre-standard "K&R-style" function declaration syntax when used in conjunction with explicit function prototypes as introduced by ANSI. Specifically, the syntax that looks like this:
int foo(a)
int a;
{
/* ... */
}
as opposed to like this:
int foo(int a) {
/* ... */
}
Note that I am referring specifically to the function declaration syntax, not the usage of unprototyped functions.
Much has been made of how the former syntax does not create a function prototype. My research indicates that, if the function were defined as above, a subsequent call foo(8, 6, 7, 5, 3, 0, 9) would result in undefined behavior; whereas with the latter syntax, foo(8, 6, 7, 5, 3, 0, 9) would actually be invalid. This makes sense, but I explicitly forward-declare all my functions in the first place. If the compiler ever had to rely on a prototype generated from the definition, I'd already consider that a flaw in my code; so I make sure to use compiler warnings that notify me if I ever fail to forward-declare a function.
Assuming that proper forward-declarations are in place (in this case, int foo(int);), is the K&R function declaration syntax still unsafe? If so, how? Does the usage of the new syntax negate the prototype that's already there? At least one person has apparently claimed that forward-declaring functions before defining them in the K&R style is actually illegal, but I've done it and it compiles and runs just fine.
Consider the following code:
/* 1 */ #include <stdio.h>
/* 2 */ void f(int); /*** <- PROTOTYPE IS RIGHT HERE ****/
/* 3 */ void f(a)
/* 4 */ int a;
/* 5 */ {
/* 6 */ printf("YOUR LUCKY NUMBER IS %d\n", a);
/* 7 */ }
/* 8 */
/* 9 */ int main(argc, argv)
/* 10 */ int argc;
/* 11 */ char **argv;
/* 12 */ {
/* 13 */ f(1);
/* 14 */ return 0;
/* 15 */ }
When given this code verbatim, gcc -Wall and clang -Weverything both issue no warning and produce programs that, when run, print YOUR LUCKY NUMBER IS 1 followed by a newline.
If f(1) in main() is replaced with f(1, 2), gcc issues a "too many arguments" error on that line, with the "declared here" note notably indicating line 3, not line 2. In clang, this is a warning, not an error, and no note indicating a declaration line is included.
If f(1) in main() is replaced with f("hello world"), gcc issues an integer conversion warning on that line, with a note indicating line 3 and reading "expected 'int' but argument is of type 'char *'". clang gives a similar error, sans note.
If f(1) in main() is replaced with f("hello", "world"), the above results are both given, in sequence.
My question is this: assuming function prototypes are already provided, is the K&R syntax any less safe than the style with inline type keywords? The answer indicated by my research is, "Nope, not a bit", but the overwhelmingly negative, apparently near-unanimous opinion of the older style of type declaration makes me wonder if there isn't something I'm overlooking. Is there anything I'm overlooking?
Upvotes: 0
Views: 626
Reputation: 222526
My research indicates that, if the function were defined as above, a subsequent call foo(8, 6, 7, 5, 3, 0, 9) would result in undefined behavior; whereas with the latter syntax, foo(8, 6, 7, 5, 3, 0, 9) would actually be invalid.
This is correct. Given int foo(a) int a; {}, the call has undefined behavior per C 2018 6.5.2.2 6:
If the expression that denotes the called function has a type that does not include a prototype,… If the number of arguments does not equal the number of parameters, the behavior is undefined.
And, given int foo(int a);, the call violates a constraint, per C 2018 6.5.2.2 2:
If the expression that denotes the called function has a type that includes a prototype, the number of arguments shall agree with the number of parameters.
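A minimal sketch of the distinction between those two paragraphs, using hypothetical function names not taken from the question:
int old_style(a)       /* no prototype: a call such as old_style(1, 2) need not be    */
int a;                 /* diagnosed, but it has undefined behavior per 6.5.2.2 6      */
{
    return a;
}
int new_style(int a)   /* prototype: new_style(1, 2) violates the constraint in       */
{                      /* 6.5.2.2 2 and requires a diagnostic                          */
    return a;
}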
Assuming that proper forward-declarations are in place (in this case, int foo(int);), is the K&R function declaration syntax still unsafe?
If a function has both a declaration with a prototype, as it would in your forward declaration, and a definition without the prototype (using the old K&R syntax), the resulting type of the identifier is that of the prototyped version. The type of a function declared with a parameter list can be merged with the type of a function declared with the K&R syntax. First C 2018 6.7.6.3 15 tells us the two types are compatible:
For two function types to be compatible, both shall specify compatible return types. … If one type has a parameter type list and the other type is specified by a function definition that contains a (possibly empty) identifier list, both shall agree in the number of parameters, and the type of each prototype parameter shall be compatible with the type that results from the application of the default argument promotions to the type of the corresponding identifier.…
Then C 2018 6.2.7 3 tells us they can be merged:
A composite type can be constructed from two types that are compatible; it is a type that is compatible with both of the two types and satisfies the following conditions:
…
— If only one type is a function type with a parameter type list (a function prototype), the composite type is a function prototype with the parameter type list.
…
And C 2018 6.2.7 4 tells us the identifier takes on the composite type:
For an identifier with internal or external linkage declared in a scope in which a prior declaration of that identifier is visible, if the prior declaration specifies internal or external linkage, the type of the identifier at the later declaration becomes the composite type.
Thus, if you have both int foo(int a); and int foo(a) int a; {}, foo has type int foo(int a).
This implies that if every function is declared with a prototype, defining them without a prototype is just as safe as defining them with a prototype, in regard to the semantics of calling them. (I do not comment on the possibility that one style or the other might be more or less susceptible to errors caused by mistaken edits or other aspects unrelated to the actual semantics of function calls.)
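As a minimal sketch of this composite-type behavior (hypothetical function twice; compile as C17 or earlier, since C23 removes K&R-style definitions):
#include <stdio.h>

int twice(int);      /* prototype: the composite type of twice is int (int) */

int twice(n)         /* K&R-style definition; its type merges with the prototype above */
int n;
{
    return 2 * n;
}

int main(void)
{
    printf("%d\n", twice(21));   /* checked against the prototype */
    /* twice(1, 2); */           /* would violate the 6.5.2.2 2 constraint and be diagnosed */
    return 0;
}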
Note, however, that the types in the prototype must match the types in the K&R-style definition after default argument promotion. For example, these types are compatible:
void foo(int a);
void foo(a)
char a; // Promotion changes char to int.
{
}
void bar(double a);
void bar(a)
float a; // Promotion changes float to double.
{
}
and these types are not:
void foo(char a);
void foo(a)
char a; // char is promoted to int, which is not compatible with char.
{
}
void bar(float a);
void bar(a)
float a; // float is promoted to double, which is not compatible with float.
{
}
Upvotes: 2
Reputation: 753675
The code in the question isn't very problematic; handling int presents few problems. Where it gets tricky is in a function like this one:
int another(int c, int s, double f);
int another(c, s, f)
short s;
float f;
char c;
{
return f * (s + c); // Nonsense - but it compiles cleanly enough
}
Note that the prototype for that is not
int another(char c, short s, float f);
It is interesting that GCC accepts both prototypes unless you add -pedantic (or -Wpedantic) to the compilation options. This is a documented GCC extension — §6.38 Prototypes and Old-Style Function Definitions. By contrast, clang complains (as a warning if the -Werror option isn't specified — and it complains by default, even without -Wall or -Wextra, etc.):
$ clang -O3 -g -std=c11 -Wall -Wextra -Werror -Wmissing-prototypes -Wstrict-prototypes -c kr19.c
kr19.c:7:10: error: promoted type 'int' of K&R function parameter is not compatible with the
parameter type 'char' declared in a previous prototype [-Werror,-Wknr-promoted-parameter]
char c;
^
kr19.c:2:18: note: previous declaration is here
int another(char c, short s, float f);
^
kr19.c:5:11: error: promoted type 'int' of K&R function parameter is not compatible with the
parameter type 'short' declared in a previous prototype [-Werror,-Wknr-promoted-parameter]
short s;
^
kr19.c:2:27: note: previous declaration is here
int another(char c, short s, float f);
^
kr19.c:6:11: error: promoted type 'double' of K&R function parameter is not compatible with the
parameter type 'float' declared in a previous prototype [-Werror,-Wknr-promoted-parameter]
float f;
^
kr19.c:2:36: note: previous declaration is here
int another(char c, short s, float f);
^
3 errors generated.
$
As long as you recognize this discrepancy for the shorter types and your prototypes match the promoted types, you should not actually run into trouble defining the functions using K&R notation and declaring them using prototype notation.
However, there is no obvious benefit to the discrepancy — if you've written the prototype correctly in a header, why not use that prototype declaration as the basis of the function definition?
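For comparison, a sketch of the same another function converted to prototype notation, so that the definition lexically matches the declaration int another(int c, int s, double f); given above:
int another(int c, int s, double f)
{
    return f * (s + c);   /* same nonsensical body; the definition now matches the prototype */
}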
I am still working on a code base that has some K&R function definitions from the 80s and early 90s. Most such functions do have a prototype in a header, with the promoted types in the prototypes. I am actively cleaning it up to convert all function definitions to prototype notation, ensuring that there is always a prototype in scope before any non-static function is defined or any function is called. The local convention is to redundantly declare static functions at the top of the file; that works too.
In my own code, I never use K&R notation — not even for parameterless functions (I always use function(void) for those).
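A short sketch of that distinction (hypothetical function name; the two lines are alternative declarations, not meant to appear together):
int get_flag(void);   /* prototype: takes no arguments, so get_flag(42) is a constraint violation */
int get_flag();       /* pre-C23: declares no prototype, so the compiler need not check arguments */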
Since it is explicitly marked as 'obsolescent' in the standard (C11 §6.11 Future directions), I recommend strongly against using the K&R notation in any modern C code:
The use of function declarators with empty parentheses (not prototype-format parameter type declarators) is an obsolescent feature.
The use of function definitions with separate parameter identifier and declaration lists (not prototype-format parameter type and identifier declarators) is an obsolescent feature.
Upvotes: 1
Reputation: 180181
Note that I am referring specifically to the function declaration syntax, not the usage of unprototyped functions.
You seem to be playing a little fast and loose with language. I think you mean you're talking about the effect of the choice of function definition syntax. Function definitions provide function declarations, and those declarations may (ANSI style) or may not (K&R style) include prototypes. Function declarations that are not part of definitions (forward declarations) also may or may not provide prototypes. It is important to understand that behavior differs a bit depending on whether there is an in-scope prototype.
Much has been made of how the former syntax does not create a function prototype. My research indicates that, if the function were defined as above, a subsequent call foo(8, 6, 7, 5, 3, 0, 9) would result in undefined behavior;
Yes. This arises from a language constraint (so not only is behavior undefined, but violations must be diagnosed). In C11, the relevant text is paragraph 6.5.2.2/2:
If the expression that denotes the called function has a type that includes a prototype, the number of arguments shall agree with the number of parameters. Each argument shall have a type such that its value may be assigned to an object with the unqualified version of the type of its corresponding parameter.
Note that that is about agreement of function-call arguments with an in-scope prototype, regardless of whether the prototype actually matches the function's definition. (But other rules require all declarations of the same function type, including its definition, to be compatible.)
whereas with the latter syntax, foo(8, 6, 7, 5, 3, 0, 9) would actually be invalid.
Yes? You seem to be drawing a distinction that I do not follow between what is "invalid" and what has undefined behavior. Code that is "invalid" by any definition I can think of definitely has undefined behavior. Anyway, the behavior in this case too is explicitly undefined, for if there is an in-scope prototype, then 6.5.2.2/2 (above) applies regardless of the form of the function's definition. If there is not an in-scope prototype then paragraph 6.5.2.2/6 (in C11) applies instead. It says, in part:
If the expression that denotes the called function has a type that does not include a prototype [...] If the number of arguments does not equal the number of parameters, the behavior is undefined.
The compiler might not recognize or diagnose such an error, and you might test it on some particular implementation without observing any ill effects, but the behavior is definitely undefined, and under some circumstances, on some implementations, things will break.
My question is this: assuming function prototypes are already provided, is the K&R syntax any less safe than the style with inline type keywords?
If the same prototype for function f() is in scope both where the function is defined and where it is called (good practice in any case), and if the K&R-style definition of f() specifies the same return type and parameter types that the prototype does, then everything will work fine. The multiple compatible declarations in scope at the function definition are combined to form a composite type for the function that will happen to be the same as the type declared by the prototype. The semantics are exactly as if the definition used ANSI syntax directly. This is addressed in section 6.2.7 and related sections of the standard.
There is, however, an additional maintenance burden in maintaining a function definition that does not lexically match its prototype. There is also increased opportunity for errors arising from the parameter types in the parameter declaration list not matching the prototype, and in this sense the approach is less safe than simply using ANSI style throughout.
However, if there is no prototype for f() in scope at the point of its K&R-style definition, and if any of its parameters have type float or integer types smaller than int, then there is more opportunity for error. Notwithstanding the declared parameter types, the function implementation will expect the arguments to have been subjected to the default argument promotions, and the behavior is undefined if in fact they have not been. It is possible to write a prototype that anticipates that need, for use where f() is called, but it is all too easy to get that wrong. In this sense the combination you propose is indeed less safe.
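A sketch of what "anticipating that need" looks like, using a hypothetical function g and assuming no other declarations are in scope:
void g(double);       /* correct declaration for call sites: it names the promoted type        */
/* void g(float); */  /* the "obvious" prototype, but incompatible with the definition below   */

void g(x)
float x;              /* the implementation expects a value that was promoted to double */
{
    /* ... */
}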
Upvotes: 0
Reputation: 1
My question is this: assuming function prototypes are already provided, is the K&R syntax any less safe than the style with inline type keywords?
It is downright WRONG. See Default argument promotions in C function calls for details.
It is also undefined behavior. Per 6.5.2.2 Function calls, paragraph 9 of the C11 standard:
If the function is defined with a type that is not compatible with the type (of the expression) pointed to by the expression that denotes the called function, the behavior is undefined.
You should NEVER mix prototypes with functions defined in the old K&R style.
A function written in K&R style expects that its arguments have undergone default argument promotion when they were passed.
Code that calls a function with a prototype does not promote the arguments to the function.
So the arguments passed to the function are not what the function expects to get.
End of story.
Do not do it.
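A sketch of the kind of mismatch this answer warns about (hypothetical function h): the prototype names the unpromoted type, so it is incompatible with the K&R-style definition. In one translation unit this violates a constraint and must be diagnosed; split across files, the call compiles but has undefined behavior per the paragraph quoted above.
void h(float x);   /* prototype with the unpromoted type float */

void h(x)          /* K&R-style definition: x is expected to arrive promoted to double, */
float x;           /* so this definition's type is incompatible with the prototype      */
{
    /* ... */
}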
Upvotes: -2
Reputation: 67476
There is no difference at all. You can define them as you prefer. They are interchangeable. But of course no one sane would advise using the prehistoric notation.
My research indicates that, if the function were defined as above, a subsequent call foo(8, 6, 7, 5, 3, 0, 9) would result in undefined behavior; whereas with the latter syntax, foo(8, 6, 7, 5, 3, 0, 9) would actually be invalid.
Your research is wrong; the language standard allows using functions without prototypes, as the following example shows:
https://onlinegdb.com/SkKF4DAtE
#include <stdio.h>

/* foo and foo1 are called below with no declaration in scope,
   relying on implicit function declarations (removed in C99). */
int main()
{
printf("%d\n", foo(2,3,4,5,6,7,8,9));
printf("%d\n", foo1(2,3,4,5,6,7,8,9));
return 0;
}
int foo(a)
int a;
{
return a*a;
}
int foo1(int a)
{
return a*a;
}
Upvotes: 0