Reputation: 5939
It's a well-known inconsistency in integer division as defined in C and many other programming languages: division by an integer N yields a remainder in the open range -|N|..|N| (taking the sign of the dividend) rather than always in 0..|N|. IMO this negatively affects some applications. For example, if you are displaying an image defined on an integer grid, you'd better put (0,0) outside the image area; otherwise you'll get a visible line at x==0 and another one at y==0 in some image operations.
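Here is a minimal sketch of that artifact, assuming a hypothetical mapping of pixel x-coordinates to 2-pixel-wide cells; none of this is part of the original setup, it just illustrates the seam:

/* Minimal sketch (illustration only): map pixel x-coordinates to
   hypothetical 2-pixel-wide cells using C's truncating division.
   Cells -1 and 1 each cover two pixels, but cell 0 covers three
   (x = -1, 0, 1), which is the kind of visible seam described above. */
#include <stdio.h>

int main(void)
{
    for (int x = -4; x <= 4; ++x)
        printf("x = %2d  ->  cell %2d\n", x, x / 2);  /* truncates toward zero */
    return 0;
}

With floor division the same loop would assign exactly two pixels to every cell, including the one at the origin.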
Could you name a practical example of integer division where truncation toward zero serves the programming intent better than truncation toward negative infinity?
Upvotes: 0
Views: 440
Reputation: 283733
This isn't a question of programming language design, really.
High level programming languages truncate for consistency with low-level languages.
Low level languages truncate because that's what the hardware operation does.
Later generations of hardware chose to be backward compatible with early generations.
To really answer this question you have to go waaaaay back, and the reason could be as subtle as "it made a carry chain shorter".
Please note, though, that other rounding choices break identities like these:
// both hold when integer division truncates toward zero; floor division breaks the first one
-1 * b / c == -1 * (b / c)
-1 * b % c == -1 * (b % c) % c
which are rather nice properties to preserve.
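A small check of those identities (a sketch, not from the original answer; floordiv below is a hand-rolled helper standing in for floor division):

#include <stdio.h>

static int floordiv(int a, int b)  /* quotient rounded toward negative infinity */
{
    int q = a / b;                 /* C's quotient: rounded toward zero */
    if ((a % b != 0) && ((a < 0) != (b < 0)))
        --q;                       /* adjust when the signs differ */
    return q;
}

int main(void)
{
    int b = 7, c = 3;

    /* C's truncation toward zero: both identities hold (-2 == -2, -1 == -1). */
    printf("%d == %d\n", -1 * b / c, -1 * (b / c));
    printf("%d == %d\n", -1 * b % c, -1 * (b % c) % c);

    /* Floor division: the quotient identity breaks (-3 vs -2). */
    printf("%d vs %d\n", floordiv(-1 * b, c), -1 * floordiv(b, c));
    return 0;
}

With b = 7 and c = 3, the floor-division quotient of -7/3 is -3 while -(7/3) is -2, so the first identity no longer holds.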
Upvotes: 4