temporary_user_name

Reputation: 37038

Why should I use ints instead of floats?

I'm preparing for a class lesson (I'm teaching) and I'm trying to predict any possible questions from the students and I ran into one that I can't answer:

If we have floats, why do we ever use ints at all? What's the point?

I know (or at least I think) that floats take more memory because they have more accuracy, but surely the difference is nearly negligible as far as memory usage goes for most non-embedded applications.

And I realize in many cases we actually don't need a float, but honestly, why do we have ints in the first place? What's the point? There's nothing an int can do that a float can't.

So why are they there at all?

Edit: You could argue they're easier to write (3 vs. 3.0) but you could just make all numbers default to float, so 3 would be treated the same as 3.0. Why make it a different type?

Upvotes: 7

Views: 14512

Answers (6)

donny Matteson

Reputation: 1

I asked ChatGPT why there are rounding errors with floating-point numbers. Its answer is interesting.

The potential for rounding issues in floating-point arithmetic arises from the way computers represent real numbers. Most computers use a binary representation for floating-point numbers, and not all decimal numbers can be precisely represented in binary. This can lead to rounding errors when converting between decimal and binary representations.

Consider the following example in base-10 (decimal):

1/3 = 0.333333...

In base-2 (binary), 1/3 is likewise a repeating fraction:

1/3 = 0.0101(01)...

In this case, the binary representation is not exact, and it repeats indefinitely. This repeating fraction cannot be represented precisely in a finite number of binary digits, leading to a loss of precision.

Similarly, some decimal numbers cannot be represented exactly in binary, and when these numbers are involved in calculations, rounding errors can accumulate. For example:

0.1 (decimal) = 0.0001100110011001100110011001100110011001100110011...(binary)

When working with floating-point numbers, especially in repetitive or extensive calculations, these rounding errors can accumulate and lead to discrepancies between the expected and actual results.

While floating-point representations are highly useful and versatile for a broad range of applications, developers need to be aware of these limitations and take precautions when precision is critical. Techniques such as rounding, using fixed-point arithmetic, or employing specialized numerical libraries can help mitigate some of the issues associated with floating-point arithmetic.

#include <stdio.h>

int main() {
  int num1 = 5;
  int num2 = 2;
  /* cast one operand to float so the division isn't integer division */
  float sum = (float) num1 / num2;

  printf("%f", sum);
  return 0;
}
The above works, and so does the below:

#include <stdio.h>

int main() {
  float num1 = 5;
  float num2 = 2;
  float sum =  num1 / num2;

  printf("%f", sum);
  return 0;
}

But I think I will go with what W3 suggests.

Upvotes: -3

Stu

Reputation: 1

I am just learning Python, and I too believe it seems simpler to just use floats instead of both floats and ints, rather than trying to remember which one we saved a value as. Someone said "Don't use what you don't need." But what about the limited storage in my own brain? Isn't that why we use code for computers in the first place? IDK.

Upvotes: 0

Steve Jessop

Reputation: 279245

There are various historical reasons that apply to most languages:

  • A philosophy of "don't use what you don't need". A lot of programs have no need for non-integer values but use integer values a lot, so an integer type reflects the problem domain.

  • Floating point arithmetic used to be far more expensive than integer. It's still somewhat more expensive, but in a lot of cases in Python you'd hardly notice the difference.

  • A 32-bit IEEE float can represent all integers only up to 2**24; beyond that it loses precision. A 16-bit float ("half precision") represents all integers only up to 2048. So for 16- and 32-bit computing, where register sizes impose a serious trade-off between performance and value range, float-for-everything makes that trade-off even more serious (see the check after this list).

  • An 8-bit integer type (or whatever byte size exists on the platform), is very useful for low-level programming because it exactly maps to any data representable in memory. Same goes for a register-sized integer type with some efficiency advantage to working in words rather than bytes. These are the (signed and unsigned) char and int types in C.
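A quick check of the precision claims above, assuming NumPy is available for fixed-width float types (Python's own float is always 64-bit):

import numpy as np

# 2**24 is the largest range in which a 32-bit IEEE float holds every integer;
# one past it rounds back, so a float-based counter silently stops advancing.
x = np.float32(2**24)
print(x + np.float32(1) == x)   # True: 16777217 rounds to 16777216

# Half precision runs out much earlier, at 2048.
h = np.float16(2048)
print(h + np.float16(1) == h)   # True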

There is an additional reason specifically for Python:

  • The int type automatically promotes to long when a computation goes beyond its range, thereby retaining precision; float doesn't get bigger to remain precise. Both behaviours are useful in different circumstances (see the snippet below).
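A short snippet showing both behaviours side by side:

# Python ints grow as needed, so large values stay exact:
a = 2**100
print(a + 1 == a)    # False: ints never silently lose precision

# A float stays 64 bits wide and rounds instead:
b = float(2**100)
print(b + 1 == b)    # True: 1 is smaller than the rounding step at this magnitude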

Note that JavaScript doesn't provide an integer type: the only built-in numbers in JavaScript are 64-bit floating point. So for each reason why an integer type is beneficial, it's instructive to consider how JavaScript gets on without one.

Upvotes: 10

Tim Pietzcker

Reputation: 336128

Floating point numbers are approximations in many cases. Some integers (and decimals) can be exactly represented by a float, but most can't. See Floating Point Arithmetic: Issues and Limitations.

>>> a = 1000000000000000000000000000
>>> a+1 == a
False
>>> a = 1000000000000000000000000000.0
>>> a+1 == a
True

As a result of this approximate nature of floats, some calculations can yield unexpected results (this isn't directly pertinent to the question, but it illustrates the point quite well):

>>> sum(1.1 for _ in range(9))
9.899999999999999

For example, when you're dealing with money calculations, it's better to use integers, or (if speed is not an issue) the decimal module.
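A sketch of both options (the amounts are invented for the example):

from decimal import Decimal

# Nine payments of 1.1 summed as floats accumulate rounding error:
print(sum(1.1 for _ in range(9)))             # 9.899999999999999

# The same sum with the decimal module stays exact:
print(sum(Decimal("1.1") for _ in range(9)))  # 9.9

# Or keep amounts in integer cents and format only for display:
cents = sum(110 for _ in range(9))            # 990
print(f"{cents // 100}.{cents % 100:02d}")    # 9.90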

Upvotes: 15

EyalAr

Reputation: 3170

There are four reasons I can think of right now (and I'm sure there are more):

  1. Memory. Choosing data types wisely can dramatically affect memory requirements (in large databases, for example); see the sketch after this list.
  2. Speed. Hardware implementations of integer arithmetic are much faster (and simpler) than floating-point arithmetic.
  3. Programming practices. Having distinct data types enforces better programming practices, as the programmer must be aware of the kind of data each variable stores. This also allows earlier error detection (at compile time rather than at runtime).
  4. History. Memory used to be expensive (and still is on some systems for some applications).
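A rough illustration of the memory point, assuming NumPy for fixed-width types (the sizes are made up for the example):

import numpy as np

# One million values that are known to fit in a signed byte:
as_int8    = np.zeros(1_000_000, dtype=np.int8)
as_float64 = np.zeros(1_000_000, dtype=np.float64)

print(as_int8.nbytes)     # 1000000  (~1 MB)
print(as_float64.nbytes)  # 8000000  (~8 MB)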

Upvotes: 2

David Heffernan

Reputation: 612884

It's important to use data types that are the best fit for the task they are used for. A data type may fail to fit in different ways. For instance, a single byte is a bad fit for a population count because you cannot count more than 255 individuals. On the other hand, a float is a bad fit because many possible floating-point values have no meaning as a count; 1.5, for example. So using an appropriately sized integer type gives us the best fit, with no need for sanity checks to weed out meaningless values.
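As a sketch of that first kind of misfit, here is a one-byte counter emulated in Python (Python's own ints don't overflow, so the wraparound is simulated with a mask):

# An 8-bit counter wraps around at 256, so counting 300 individuals
# silently produces the wrong answer.
count = 0
for _ in range(300):
    count = (count + 1) & 0xFF
print(count)  # 44, not 300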

Another reason to favour integers over floats is performance and efficiency. Integer arithmetic is faster. And for a given range integers consume less memory because integers don't need to represent non-integer values.

Another reason is to show intent. When a reader of the code sees that you used an integer, that reader can infer that the quantity is only meant to take integer values.

Upvotes: 9
