Why are there no octal literals in C#?

Why didn't the designers of C# include octal literals alongside hexadecimal and binary literals? What might the reason for that decision be?

Upvotes: 4

Views: 1404

Answers (2)

Hans Passant

Reputation: 941635

Octal encoding is a relic from computing in the late 1950s and 60s. Back then it was quite useful: companies built machines with a 6-bit byte, commonly packed into an 18-bit or 36-bit word. Six bits was enough for everybody; teletypes and printers did not yet have lowercase letters, and English was the dominant language.

Octal is a natural fit for such 6-bit bytes: it takes exactly two digits, and every bit is used.

That inevitably petered out. The IBM System/360 was very influential, and it had an 8-bit byte. The PDP-11 of 1970 was important too: an affordable 16-bit machine with an 8-bit byte. By around 1975 the older architectures had acquired dinosaur status and programmers heavily favored hex. Octal is clumsy for encoding 8-bit bytes, needing three digits; hex gave us the two digits back. Microprocessor kits of the era all used hex.
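To make the digit counts concrete, here is a minimal C# sketch (the value 255 is just an illustration) using Convert.ToString with a base argument: an 8-bit value needs three octal digits, but only two hex digits.

using System;

byte b = 255;
Console.WriteLine(Convert.ToString(b, 8)); // prints "377" - three octal digits for 8 bits
Console.WriteLine(b.ToString("X"));        // prints "FF"  - two hex digits, one per 4 bits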

Octal did last a lot longer than it should have. DEC manuals always used octal notation, even for the 8-bitters. The C language carried it into the curly-brace languages: it started life on the GE-635 and the PDP-7, machines with 6-bit characters. It didn't become real C until the PDP-11, but the seed was planted, along with a way to specify octal values that was far too easy: a leading 0 on the literal.

That produced countless bugs and extremely confused programmers. The C# team thoroughly removed such common bug generators from their curly-brace language. They did a terrific job.
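As an aside, since C# has no octal literal at all, a value conventionally written in octal has to be expressed another way. A minimal sketch, assuming a Unix-style permission mask 0755 is wanted:

using System;

int mode = Convert.ToInt32("755", 8); // parse an octal string at run time: 493
int same = 0b1_1110_1101;             // or write the equivalent binary literal (C# 7.0+)
Console.WriteLine(mode == same);      // True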

Upvotes: 11

Minn

Reputation: 6134

A lot of people use 0 as a prefix for visual alignment, and having octal literals is commonly seen as a footgun in programming languages:

var x =  11; // decimal eleven
var y = 011; // in C, C++ or Java this would be octal: the value 9, not 11

This can happen by accident, whereas hex and binary literals require a deliberate alphanumeric prefix, 0x or 0b. Modern languages try to reduce these kinds of footguns.
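For comparison, a minimal sketch of what the deliberate prefixes look like in C#; neither can be produced by accidentally padding a decimal number with zeros:

var flags = 0xFF;        // hex literal: the 0x prefix has to be typed on purpose
var mask  = 0b0010_1010; // binary literal (C# 7.0+) with a digit separator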

Upvotes: 3
