0.99999... = 1

In mathematics, the recurring decimal 0.999..., which is also written as 0.9̄ (a bar over the 9), 0.9̇ (a dot over the 9) or 0.(9), denotes a real number equal to 1.

This should explain it clearly:
http://en.wikipedia.org/wiki/0.999...
 
I think you are getting confused. 0.99 = 0.9900..., which is different from 0.999...
I meant recurring decimals, which I forgot to mention by mistake. :p

For example, suppose you have the calculation 1/3 * 3, but in decimal form. In that case, if the power of 10 in the scientific notation is very high, it can make a lot of difference.

Kshitiz_Indian added 1 Minute and 39 Seconds later...

Suren, that link generally says that it is assumed to be equal to 1. It can't be 1 though, because real numbers have only one value. At least that's what is meant by the definition of a real number.

The bottom line is, we assume 0.999... to be equal to 1 because our division system cannot completely solve the problem of recurring decimals. :)
 
The bottom line is, we assume 0.999... to be equal to 1 because our division system cannot completely solve the problem of recurring decimals. :)


In certain places, approximations do have to take place. I don't see any harm or confusion in it. In any calculation-heavy application, 0.9999 ~ 1 doesn't make any drastic impact... or does it? I can see even 0.9995 ~ 1 in certain applications. We can always maintain 4 decimals for standard manipulations.

Like, 2/3 = 0.66666666666666666... ~ 0.6667. And 0.9999999999999 ~ 0.9999, or even if you make it 1, I don't see any drastic change in applications because of that 0.0001 difference (which does look negligible). In fact, that's why we have the words "round-off" and "approximation". :)


And you know what, for 1/3 I can even say "0". :what

It's simple,

Code:
int a = 1;
int answer = a / 3;   // integer division truncates toward zero, so this is 0
System.out.println("Solution is: " + answer);

or
Code:
int a = 1;
double answer = a / 3.0;   // floating-point division gives 0.3333...
System.out.println("Solution is: " + Math.round(answer));   // Math.round(0.333...) is 0

:p

It all depends on the way we take the value and manipulate it, and on what our application's calculations demand! That's it.

1) If we need to be very accurate... keep it as 0.9999!
2) If we wish to keep our manipulation simple, make it 1.

But in both 1 and 2, the accuracy is not going to be 100%. So why should we confuse ourselves by making it more complex? ;) We should spend time usefully rather than thinking about this negligible 0.00000000...001 :)
 
1) If we need to be very accurate... keep it as 0.9999!
2) If we wish to keep our manipulation simple, make it 1.

I disagree. If you were going to be accurate, 1/3 would not be 0.33333 in the first place. It would just be 1/3. 0.333333333 is exactly as equal to 1/3 as 0.99999999 is to 1. Writing 0.3333333333333 is rounding off. Rounding 0.99999999999999 off to 1 just rounds it back to what 0.33... x 3 would have been.

I honestly did hope we would've all agreed by now that 0.999... = 1. Those who don't think it does: are you arguing for the sake of arguing? One would hope so.
 
I disagree. If you were going to be accurate, 1/3 would not be 0.33333 in the first place. It would just be 1/3. 0.333333333 is exactly as equal to 1/3 as 0.99999999 is to 1. Writing 0.3333333333333 is rounding off. Rounding 0.99999999999999 off to 1 just rounds it back to what 0.33... x 3 would have been.

I honestly did hope we would've all agreed by now that 0.999... = 1. Those who don't think it does: are you arguing for the sake of arguing? One would hope so.
I'm not saying that I don't approximate it to be 1. I do; I have to when it comes to calculations. It's a well-known fact that 0.999... is taken to be 1, whether you call it round-off or approximation. I'm just saying it's not EQUAL to 1. It's really close, but there's a difference.
 
I disagree. If you were going to be accurate, 1/3 would not be 0.33333 in the first place.

I'm only used to doing fraction calculations (if they have to be expressed in decimals at all) with up to 4 decimal places, and not 0.3333333333..333 or 0.99999..9999 or whatever... This is the standard we prefer here, even in crucial exams.

surendar added 2 Minutes and 29 Seconds later...

It's really close, but there's a difference.

Such a negligible difference hardly matters when it comes to practical applications. Can you point to just one real-world application that shows a drastic impact or abnormal deviations because of this 0.00000000..001? I am also curious to know if there is any!
 
I'm only used to doing fraction calculations (if they have to be expressed in decimals at all) with up to 4 decimal places, and not 0.3333333333..333 or 0.99999..9999 or whatever... This is the standard we prefer here, even in crucial exams.

Such a negligible difference hardly matters when it comes to practical applications. Can you point to just one real-world application that shows a drastic impact or abnormal deviations because of this 0.00000000..001? I am also curious to know if there is any!
It's not that I'm denying the current system or saying it's wrong. I'm merely saying that it's not equal to one; it's really close, so close that we assume it to be 1.

One example: say you have 1/3 mole of some molecules; when you calculate the particles, it comes to 1/3 * 6.022 x 10^23 molecules. Suppose we take three such batches of molecules; it comes to 1/3 * (something) x 10^23 x 3. Now if we keep it as 1/3 there's no problem, but imagine we take it as 0.33333: multiply it by 3 and we get 0.99999, and then, multiplying by the 10^23, we get 99999000000000000000000, whereas we should have got 100000000000000000000000. That's a shortfall of 10^18 particles. I think it's pretty clear what kind of difference it can make. :)


Anyway, I'm not arguing for heaven's sake. :)
 
I still don't get how there are this many posts.

Scientifically, if you are given the number 1/3, you approximate it to 0.3333. 0.3333 is NOT 1/3; it is very close, and that is why you would use it: it is a practical number. Multiply this practical number by 3 and you will get 0.9999. Very close to 1, but the difference won't matter.

Mathematically, however, you would leave the number as 1/3. Therefore, if you multiply 1/3 * 3, you will get 1. Mathematically, 0.999..., if it were to keep going to infinity, would equal 1. Obviously it is not practical or possible to write out infinitely many 9s, but the fact is it is a sequence nearing 1, and with infinitely many digits it is 1.

The main lesson of the day:
Rounding stuff off changes numbers a tiny bit, and if you forget that, then you can come up with all sorts of ridiculous mathematical statements. 1/3 - 0.3333 = 0.000033333333... There is a difference.

tassietiger added 2 Minutes and 42 Seconds later...

Such a negligible difference hardly matters when it comes to practical applications. Can you point to just one real-world application that shows a drastic impact or abnormal deviations because of this 0.00000000..001? I am also curious to know if there is any!

It doesn't matter. You'd never use 0.999... to infinite digits in any practical situation. Mathematically, though, it is 1. Mathematics is more theoretical and will consider numbers like 0.999...

1 = 0.999...

It is as sure as the fact that PC Roulette is rigged.
 
It can never reach infinitely many 9s, because there's no end to it. Every time you add a 9 it gets closer, that's it. :)

And btw, PC roulette is not rigged. I've seen the casino code and it definitely isn't rigged. :p
 
Three points.

1. Two unequal real numbers must have a positive difference.

2. Two unequal real numbers must have a real number in between.

3. Every real number has a unique representation as an INFINITE (non-terminating) decimal. Terminating representations are a luxury that very few real numbers can afford; such representations are unofficial and purely for convenience.
 
It can never reach infinitely many 9s, because there's no end to it. Every time you add a 9 it gets closer, that's it. :)

And btw, PC roulette is not rigged. I've seen the casino code and it definitely isn't rigged. :p
Infinity is not a number. It is a concept: a concept of what would happen if an increasing quantity could actually get to where it is heading.

1/1000 is pretty close to zero. 1/999999 is even closer to zero. But to express the number that actually would be zero, we say 1/infinity. 1/infinity is 0. In the same way, if you put infinitely many 9s after the decimal place, you get 1. You can't actually put infinitely many 9s there; it is merely a concept. If it were possible to put them there, though, the number would finally equal 1.

This infinity concept is one of the main ideas behind sequences and series.
 
Infinity is not a number. It is a concept: a concept of what would happen if an increasing quantity could actually get to where it is heading.

1/1000 is pretty close to zero. 1/999999 is even closer to zero. But to express the number that actually would be zero, we say 1/infinity. 1/infinity is 0. In the same way, if you put infinitely many 9s after the decimal place, you get 1. You can't actually put infinitely many 9s there; it is merely a concept. If it were possible to put them there, though, the number would finally equal 1.

This infinity concept is one of the main ideas behind sequences and series.
I know infinity is a concept, which is why I'm saying that the assumption that 0.999... = 1 is based on a concept. There's no practical proof for it; not that it's possible to prove it practically, though.

1/infinity is assumed to be zero, yes, but 1/infinity and 0.(infinitely many nines) are different. You know, if you keep on adding a 9, there will come a time when you think you are actually at 1 now; then I would come and add another 9 and laugh, and say I'm even closer to 1; and then you again, then I, and so on.

The bottom line is, whether 0.9999... is equal to 1 or not, we just take it as such, partly because we weren't alive when all these concepts were being developed (:p), but mostly because practically it doesn't make a difference in MOST cases. :)

:)
 
