DMKing

Reputation: 1715

Has TRUE always had a non-zero value?

I have a co-worker who maintains that TRUE used to be defined as 0 and all other values were FALSE. I could swear that in every language I've worked with, if you could even get a numeric value for a boolean at all, the value for FALSE was 0. Was TRUE ever 0? If so, when did we switch?

Upvotes: 22

Views: 16471

Answers (22)

Jin Thakur

Reputation: 2773

The SQL Server Database Engine optimizes storage of bit columns. If there are 8 or fewer bit columns in a table, the columns are stored as 1 byte. If there are from 9 up to 16 bit columns, the columns are stored as 2 bytes, and so on. The string values TRUE and FALSE can be converted to bit values: TRUE is converted to 1 and FALSE is converted to 0. Converting any nonzero value to bit promotes it to 1.

Every language can end up with 0 as true or false, so stop relying on the numbers; use the words TRUE and FALSE, lol. Or 't' and 'f', with 1 byte of storage.

Upvotes: 0

DJClayworth

Reputation: 26856

In the C language, before C++, there was no such thing as a boolean. Conditionals were done by testing ints. Zero meant false and any non-zero meant true. So you could write

if (2) {          /* any non-zero int counts as "true" */
  alwaysDoThis();
} else {
  neverDoThis();  /* only 0 counts as "false" */
}

Fortunately C++ introduced a dedicated boolean type.

Upvotes: 1

Sam Stokes

Reputation: 14807

I worked at a company with a large amount of old C code. Some of the shared headers defined their own values for TRUE and FALSE, and some did indeed have TRUE as 0 and FALSE as 1. This led to "truth wars":

/* like my constants better */
#undef TRUE
#define TRUE 1

#undef FALSE
#define FALSE 0

Upvotes: 13

davenpcj

Reputation: 12684

I have heard of and used older compilers where true > 0, and false <= 0.

That's one reason you don't want to use if(pointer) or if(number) to check for zero; they might evaluate to false unexpectedly.

Similarly, I've worked on systems where NULL wasn't zero.
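
As a minimal C sketch of the explicit style (the function and variable names here are illustrative, not from the answer):

#include <stddef.h>

void handle(int *ptr, int count) {
    if (ptr != NULL) {    /* explicit: no reliance on what if (ptr) means */
        /* use *ptr here */
    }
    if (count != 0) {     /* explicit: no reliance on non-zero being true */
        /* use count here */
    }
}

Spelling out the comparison keeps the intent clear even on the odd systems mentioned above.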

Upvotes: 1

neu242

Reputation: 16575

It's easy to get confused when bash's true/false exit statuses are the other way around:

$ false; echo $?
1
$ true; echo $?
0

Upvotes: 1

unexist

Reputation: 2528

The funny thing is that it depends on the language you are working with. In Lua, true is zero internally, for performance. The same goes for many syscalls in C.

Upvotes: 1

hometoast

Reputation: 11782

General rule:

  1. Shells (DOS included) use "0" as "No Error"... not necessarily true.

  2. Programming languages use non-zero to denote true.

That said, if you're in a language which lets you define TRUE or FALSE, define them and always use the constants.
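
For instance, a minimal pre-C99 C sketch of that advice (the #ifndef guard and the names are just one common idiom, not from the answer):

/* guard against TRUE/FALSE already defined by another header */
#ifndef TRUE
#define TRUE  1
#define FALSE 0
#endif

int main(void) {
    int done = FALSE;   /* 'done' is an illustrative flag */
    done = TRUE;
    return 0;
}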

Upvotes: 2

IanM

Reputation:

For the most part, false is defined as 0, and true is non-zero. Some programming languages use 1, some use -1, and some use any non-zero value.

Unix shells, though, use the opposite convention.

Most commands that run in a Unix shell are actually small programs. They pass back an exit code so that you can determine whether the command was successful (a value of 0), or whether it failed for some reason (1 or more, depending on the type of failure).

This is used in the sh/ksh/bash shell interpreters within the if/while/until commands to check conditions:

if command
then
   # successful
fi

If the command is successful (i.e., returns a zero exit code), the code within the statement is executed. Usually, the command used is the [ command, which is an alias for the test command.
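
As a sketch, here is a tiny C program whose exit status could drive such an if (the program, its name, and the file checked are illustrative, not from the original answer):

#include <stdio.h>
#include <stdlib.h>

/* Exits 0 (success, shell-"true") if the named file can be opened,
   non-zero (failure, shell-"false") otherwise. */
int main(int argc, char *argv[]) {
    if (argc < 2)
        return 2;                  /* usage error */
    FILE *f = fopen(argv[1], "r");
    if (f == NULL)
        return EXIT_FAILURE;       /* non-zero: the shell's if sees "false" */
    fclose(f);
    return EXIT_SUCCESS;           /* 0: the shell's if sees "true" */
}

A script might then use it as: if ./canopen /etc/hosts; then echo readable; fi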

Upvotes: 1

Ray Hayes

Reputation: 15015

DOS and exit codes from applications generally use 0 to mean success and non-zero to mean failure of some type!

DOS error codes are 0-255. When tested using the 'errorlevel' syntax, the test matches anything at or above the specified value, so the following sends 2 and above to the first goto, 1 to the second, and 0 (success) to the final one!

IF errorlevel 2 goto CRS
IF errorlevel 1 goto DLR
IF errorlevel 0 goto STR

Upvotes: 0

Javier

Reputation: 62593

Several functions in the C standard library return an 'error code' integer as their result. Since noErr is defined as 0, a quick check is 'if it's 0, it's OK'. The same convention carried over to a Unix process's 'result code', that is, an integer that gives some indication of how a given process finished.
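
For example, a minimal sketch using remove() from the C standard library, which does return 0 on success (the file name is illustrative):

#include <stdio.h>

int main(void) {
    /* remove() returns 0 on success and non-zero on error */
    if (remove("scratch.tmp") == 0) {
        puts("deleted");
    } else {
        perror("remove");   /* report why it failed */
    }
    return 0;
}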

In Unix shell scripting, the result code of the command just executed is available, and is typically used to signify whether the command 'succeeded' or not, with 0 meaning success and anything else a specific non-success condition.

From that, all test-like constructs in shell scripts use 'success' (that is, a result code of 0) to mean TRUE, and anything else to mean FALSE.

On a totally different plane, digital circuits frequently use 'negative logic'. That is, even if 0 volts is called 'binary 0' and some positive value (commonly +5V or +3.3V, but nowadays it's not rare to use +1.8V) is called 'binary 1', some events are 'asserted' by a given pin going to 0. I think there are some noise-resistance advantages, but I'm not sure about the reasons.

Note, however, that there's nothing 'ancient' about this, and no 'switching time'. Everything I've described is based on old conventions, but those conventions are totally current and relevant today.

Upvotes: 4

Hi I am a troll

Reputation:

I remember that PL/I had no boolean type. You could create a bit and assign it the result of a boolean expression. Then, to use it, you had to remember that 1 was false and 0 was true.

Upvotes: 1

Richard

Reputation: 8920

System calls in the C standard library typically return -1 on error and 0 on success. Also, the Fortran arithmetic IF statement would (and probably still does) jump to one of three line numbers depending on whether the expression evaluates to less than, equal to, or greater than zero.

e.g.: IF (I-15) 10,20,10

would test for the condition I == 15, jumping to line 20 if true (the expression evaluates to zero) and line 10 otherwise.

Sam is right about the problems of relying on specific knowledge of implementation details.

Upvotes: 2

Nick Berardi

Reputation: 54854

In languages like C there was no boolean type, so you had to define your own. Could they have been working with non-standard BOOL definitions?

Upvotes: 0

Nathan Feger

Reputation: 19496

I recall doing some VB programming in an Access form where True was -1 (in two's complement, -1 is all bits set, i.e. the bitwise NOT of 0).

Upvotes: 1

webmat

Reputation: 60586

The 0 / non-0 thing your coworker is confused about probably refers to when people use a numeric return value to indicate success, not truth (i.e. in bash scripts and some styles of C/C++).

Using 0 = success allows for much greater precision in specifying causes of failure (e.g. 1 = missing file, 2 = missing limb, and so on).

As a side note: in Ruby, the only false values are nil and false. 0 is true, but not as opposed to other numbers; 0 is true because it's an object, and every object other than nil and false is truthy.

Upvotes: 31

David Nehme

Reputation: 21572

Even today, in some languages (Ruby, Lisp, ...) 0 is true because everything except nil is true. More often, 1 is true. That's a common gotcha, so it's sometimes considered good practice not to rely on 0 being false but to do an explicit test. Java requires you to do this.

Instead of this

int x;
/* ... */
x = 0;
if (x)  // might be ambiguous
{
}

Make it explicit

if (0 != x)
{
}

Upvotes: 1

Dima

Reputation: 39389

For languages without a built-in boolean type, the only convention I have seen is to define TRUE as 1 and FALSE as 0. For example, in C, the if statement will execute the if clause if the conditional expression evaluates to anything other than 0.

I even once saw a coding guidelines document which specifically said not to redefine TRUE and FALSE. :)

If you are using a language that has a built-in boolean, like C++, then the keywords true and false are part of the language, and you should not rely on how they are actually implemented.

Upvotes: 0

GSerg

Reputation: 78174

I can't recall TRUE being 0. 0 is something a C programmer would return to indicate success, though. This can be confused with TRUE.

It's not always 1 either. It can be -1 or just non-zero.

Upvotes: 0

Sam Erwin

Reputation: 335

I'm not certain, but I can tell you this: tricks relying on the underlying nature of TRUE and FALSE are prone to error because the definition of these values is left up to the implementer of the language (or, at the very least, the specifier).

Upvotes: 3

17 of 26

Reputation: 27382

In any language I've ever worked in (going back to BASIC in the late 70s), false has been considered 0 and true has been non-zero.

Upvotes: 0

stephenbayer

Reputation: 12431

It might be in reference to a result code of 0: in most cases, after a process has run, a result code of 0 meant, "Hey, everything worked fine, no problems here."

Upvotes: 21

zigdon

Reputation: 15063

If nothing else, bash shells still use 0 for true, and 1 for false.

Upvotes: 8
