Reputation: 170539
This question is not specific to C++, just uses C++ stuff as examples.
A widespread opinion is that "treat all warnings as errors" (like the Visual C++ /WX
option) is good because a warning is a bug waiting to happen (btw the linked thread is full of "I target for zero warnings" statements).
The only counterargument I've seen so far is that some third-party code will not compile without warnings.
Okay, let's pretend for the duration of this question that the compiler has a means of temporarily disabling warnings in some code (like this in Visual C++):
#pragma warning(push)
#pragma warning(disable:X)
#include <ThirdParty.h>
#pragma warning(pop)
and then the third party code is not a problem anymore.
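For GCC and Clang, as far as I know, the equivalent would be the diagnostic pragmas; a rough sketch (the particular warning name here is just an example):
#pragma GCC diagnostic push                                // save the current diagnostic state
#pragma GCC diagnostic ignored "-Wdeprecated-declarations" // silence one specific warning
#include <ThirdParty.h>                                    // warnings from this header are suppressed
#pragma GCC diagnostic pop                                 // restore the previous state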
Assuming we fully control all the code (no third-party code, or we can disable warnings in third-party code only), what are the reasons for not treating warnings as errors?
Upvotes: 4
Views: 1916
Reputation: 4016
I disagree with the assertion that there's "no reason at all".
I personally think the correct approach is a zero-warnings policy, but without treating warnings as errors. My assertion is that this increases programmer productivity and responsibility.
I have done both approaches within teams: 1) warnings as errors, and 2) zero warnings as policy, but not enforced by the compiler. In both cases releases were made without warnings, and the warning count was kept around zero. In the latter case, however, there were occasional stretches where the warning count crept up to a handful for a brief period of time.
This led to higher quality code and a better team. To see how, I'll try to come up with a reasonable example from memory. If you're sufficiently clever and motivated you'll be able to poke holes in it, but try to stretch your memory and imagination and I think you'll see my point, regardless of any flaws in my example.
Let's say you have some legacy, C-style code that has been using signed ints for indexing, and using negative values for some kind of special handling. You want to modernize the code to take advantage of std algorithms, and maybe something offered by boost. You fix one corner of the code and it's a great proof of concept, so you want to add it to the code-review stack because you're pretty sure you want to do the whole thing that way.
Eventually the signed stuff will disappear, but at the moment you're getting warnings that you're comparing signed and unsigned ints. If your company is enforcing warning-free builds, you could sprinkle in casts or bolt on temporary shim code just to keep the build warning-free.
This same thing can occur for a large number of reasons. All of these are inferior to pushing a few warnings, discussing the warnings in your team meeting, and cleaning them up at some point in the next few weeks. Casts and temporary code linger and pollute the code for years to come, whereas tracked warnings in a motivated team will get cleaned up quickly.
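To make the scenario concrete, here is a minimal sketch (the names are invented for illustration) of the kind of transitional code that triggers the warning:
#include <vector>

// Legacy interface still hands us a signed index; -1 means "not found".
int findLegacyIndex();

bool contains(const std::vector<int>& values, int legacyIndex)
{
    // warning: comparison between signed and unsigned integer
    // (MSVC C4018 / GCC -Wsign-compare) during the transition
    return legacyIndex >= 0 && legacyIndex < values.size();
}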
At this point some people will claim "yeah but people won't be motivated enough" or "not my team, they are all crap" and so on (at least I've frequently heard these arguments). I find this is usually an example of the Fundamental Attribution Error. If you treat your colleagues like they are irresponsible, uncaring sots, they will tend to behave that way.
While there is a large body of social science to back this up, I can only offer my personal experiences, which are of course anecdotal. I have worked in two teams where we began with a large legacy code base and a crapload of warnings (thousands). This is a terrible situation to be sure. In addition to the thousands of reported warnings, many more warning types were being ignored, since enabling them would have polluted the compiler output too much.
Team 1 : warnings as warnings, but not tolerated
In team 1, we tracked the number of warnings in Jenkins, and in our weekly team meetings we talked about how we were doing on the warning count. It was a 5-man team, of which two of us really cared. When one of us would reduce the warning level, the other would sing their praises in the meeting. When cleaning up a warning removed an undiscovered bug, we advertised the fact. After a few months of this, two of the other coders joined in and we quickly (within a year) had the warnings down to zero. From time to time the warnings crept up, sometimes into the tens or twenties. When that happened, the developer responsible would come in and say sorry, explain why they were there, and say when they expected to get them cleaned up. The one guy who never really got motivated was at least motivated enough by the peer pressure not to add any warnings.
This led to a much improved team atmosphere. In some cases we had productive discussions about the best way to deal with a particular warning or class of warnings, which made us all better programmers and improved the code. We all got in the habit of caring about cleaner code, which made other discussions -- like whether a 1000-line recursive function with ten parameters was good coding -- much easier.
Team 2: Warnings as errors
Team 2 was virtually identical to team 1 at the beginning: the same big legacy code base full of warnings, the same number of developers, motivated and unmotivated. In team 2, though, one of us (me) wanted to leave warnings as warnings but concentrate on reducing them. My other motivated colleague claimed that all the other developers were a**holes, that if we didn't make warnings errors they would never get rid of them, and what possible benefit could you get from not doing it? I explained my experience in team 1, but he wasn't having any of it.
I still work in team 2. It's a year later and our code is warning-free, but it's full of quick hacks to get rid of warnings and unnecessary code. While we have eliminated a lot of real problems, many potential problems have been swept under the rug.
Further, team cohesion hasn't improved one bit. If anything it has degraded, and management is contemplating breaking up the team for that reason. The colleagues who never cared about warnings still don't, and people's concern for quality hasn't increased. In fact the opposite has occurred: whenever anyone talks about code quality, that set of people rolls their eyes and treats it as another oppressive, silly management obsession that only decreases productivity and has little benefit.
Even worse, whenever we want to enable another warning that would provide useful information, it's a major project: we would either have to change our warnings-as-errors policy, or fix all the existing instances before turning the new warning on. Because we never got the practice of watching and fixing warnings through intrinsic motivation, the former would likely cause real problems, so instead we add the task to our backlog, and it lingers and lingers.
Another example, which led me to discover this question and write this answer, is C++14's [[deprecated]] attribute, which I'd like to use to mark deprecated code so we can gradually phase it out as part of our warnings cleanup. That, however, is incompatible with all-warnings-as-errors.
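As a sketch of what I have in mind (the function names are invented for illustration), marking a legacy entry point like this produces exactly the kind of warning that a blanket warnings-as-errors policy turns into a build break:
// Old signed-index API we want to phase out gradually.
[[deprecated("use findByKey() instead")]] int findByIndex(int index);

int findByKey(const char* key);

void caller()
{
    findByIndex(3); // deprecation warning here; with /WX or -Werror the build fails
}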
My advice: Treat warnings as errors psychologically within your team, but don't tell your compiler to do so. Perhaps treat some particularly pernicious warnings as errors, but be discriminating.
Upvotes: 3
Reputation: 81277
In many cases, it's possible for changes in one assembly, or changes in a language, to cause code which used to compile cleanly to start issuing warnings. In some cases, tolerating such warnings temporarily may be better than requiring that all of the edits necessary to eliminate them be performed before anything can be tested.
Upvotes: 1
Reputation: 552
I'd hugely prefer working with a codebase that compiles with a few warnings to one where warning-generating code has been allowed to proliferate because certain 'not negotiable' warnings were turned off (and I won't pretend this wouldn't happen). At least in the former case the areas that might need some code review are clearly visible just by rebuilding.
Also, having the three categories of Fatal / Warning / Benign leaves a lot of scope for adding tiny bits of useful information to the Warning category (see C4061) without breaking builds. With only a fatal-or-not distinction, the set of enabled warnings would have to be chosen far more judiciously.
Upvotes: 2
Reputation: 48327
Because sometimes you know better than the compiler.
It doesn't happen often with modern compilers, but there are times when you need to do something slightly outside the spec or be a little tricky with types, and it is safe in this particular case, but not strictly correct. That'll cause a warning, because technically it's usually mostly wrong some of the time, and the compiler is paid to tell you when you might be wrong.
It seems to come down to the compiler usually knowing best but not always seeing the whole picture or knowing quite what you mean. There are just times when a warning is not an error, and shouldn't be treated as one.
As you stray further from standard use, say hooking functions and rewriting code in memory, the inaccurate warnings become more common. Editing import tables or module structure is likely to involve some pointer arithmetic that might look a little funny, and so you get a warning.
Another likely case is when you're using a nonstandard feature that the compiler gives a warning about. For example, in MSVC10, this:
enum TypedEnum : int32_t
{
...
};
will give a non-standard extension warning. It's completely valid code when you're coding to your compiler, but it still triggers a warning (under warning level 4, I believe). A lot of features now in C++11 that were previously implemented as compiler-specific extensions will follow this pattern (totally safe, totally valid, still a warning).
Another safe case that gives a warning is forcing a value to bool, like:
bool FlagSet(FlagType flags) { return (flags & desired); }
This gives a performance warning. If you know you want that, and it doesn't cause a performance hit, the warning is useless but still exists.
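For what it's worth, the usual way to code around it (assuming this is MSVC's C4800) is an explicit comparison, which expresses the same intent without the implicit conversion:
bool FlagSet(FlagType flags) { return (flags & desired) != 0; } // no int-to-bool conversion, no warning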
Now, this one is sketchy as you can easily code around it, but that brings up another point: there may be times when two different methods of doing something have the same results, speed and reliability, but one is less readable and the other is less correct. You may choose the cleaner code over the correct code and cause a warning.
There are other cases where there is a potential problem that may occur, which the warning addresses. For example, MSVC C4683's description literally says "exercise caution when..." This is a warning in the classic sense of the word, something bad could happen. If you know what you're doing, it doesn't apply.
Most of these have some kind of alternate code style or compiler hint to disable the warning, but the ones that don't may need it turned off.
Personally, I've found that turning up the warnings and then fixing them helps get rid of most little bugs (typos, off-by-one, that sort of thing). However, there are spots where the compiler doesn't like something that must be done one particular way, and that's where the warning is wrong.
Upvotes: 8
Reputation: 300259
I've seen 3 reasons:
The legacy code is of course a concern, but it can be dealt with efficiently:
The Great Wall strategy is best used when the tests are as bad as the code or time is scarce. Otherwise, when resources and test confidence allow, I obviously urge you to take an incremental rewrite approach.
On a fresh codebase? No reason at all. In the worst case, if you really need something tricky (type punning, ...), you can always selectively deactivate the warning for the area of code concerned. And what's really great is that it documents that something fishy is going on!
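As an illustration only, here is a minimal sketch of such a selectively-silenced, self-documenting spot using GCC/Clang diagnostic pragmas (MSVC's #pragma warning push/disable/pop works the same way; whether this particular warning actually fires depends on compiler and optimization level):
#include <cstdint>

std::uint32_t floatBits(float f)
{
    // Deliberate type pun: the pragmas both silence the warning and
    // document that something fishy is going on here.
#pragma GCC diagnostic push
#pragma GCC diagnostic ignored "-Wstrict-aliasing"
    return *reinterpret_cast<const std::uint32_t*>(&f);
#pragma GCC diagnostic pop
}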
Upvotes: 4