Quick note on this question, which pops up every once in a while. I'm soliciting comments before I bump this to the Library for it to die in peace.
Confusion over whether 0.9999999.... = 1 stems from a fundamental misunderstanding of what real numbers are. So, let's first review natural numbers, integers, and rational numbers (fractions).
Natural numbers

Natural numbers are, well, natural. We all know what the natural numbers are: {1, 2, 3, 4, ...}. Counting numbers are used to represent groups of discrete objects, so they are a generalization of concepts like "number of cattle in my herd" or "number of people in the city." The direct applications of the natural numbers made them one of the first mathematical ideas invented; virtually every primitive civilization created the natural number system. There is also an operation on the natural numbers, that is, a way of putting two numbers together and getting another out: addition. Everybody understands how this works (and a rigorous discussion of defining the natural numbers and addition in terms of the axioms of set theory is, of course, beyond the scope of this document).
Question: what if I have no cattle? What if no transaction occurs? What if nobody enters or leaves a room? Let's invent another number: 0. This innovation was developed in India and spread through the West.
Integers

Natural numbers are fine for simple counting, but for accounting we need a system a little bit more subtle. We need another operation: subtraction. (If I have fifty cattle and I owe you twenty, you add twenty cattle to your herd and I subtract twenty from mine.) What if, however, I have no cattle and I still owe you twenty cattle?
Mathematically, we call this state of affairs "not closed under subtraction." To deal with this, we define another set of numbers and append them to the natural numbers and zero: {..., -3, -2, -1, 0, 1, 2, 3, ...}. Now we can deal with the accounting of discrete objects. Note that this set is closed under addition and subtraction: if you add or subtract any two integers, you get another integer.
There is another operation on the integers: multiplication. This leads us to the next question: what if I have an inheritance, say, of thirty cattle and three sons to whom I wish to leave them? What if I have thirty acres of land and four sons? We can do the first operation by undoing multiplication and dividing: thirty cattle distributed among three sons leaves ten cattle per son. But the discrete number thirty does not divide evenly among four sons.
Rational numbers (fractions)

This is known as the "distribution problem". To solve it, we can invent more numbers! (Mathematically, we are appending an additional set of numbers to the integers in order to make the result closed under division.) Let's say we have an integer to divide (30 acres) and an integer to divide by (4 sons). We can define the number "30/4" as what each son gets. We know from experience that each son gets 7.5 acres. So we define our fractions (rational numbers) as all possible ratios of integers, save those with 0 in the denominator.
Straightforward so far. But there is a hidden subtlety: if there are 15 acres and 2 sons, each son still gets the same amount. What's the problem with this? 15/2 and 30/4 are, under our schema, two different numbers. (Pay attention here; this is an important detail.) But we know that they're the same in practice. So we make the rational numbers smaller: we define two fractions (p/q) and (r/s) to be the same if ps = qr. These, then, are the fractions we know well and love.
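The cross-multiplication rule is easy to check mechanically. A minimal sketch in Python (the function name same_fraction is my own, not anything standard; Python's built-in Fraction type performs the same identification by reducing to lowest terms):

```python
from fractions import Fraction

# Cross-multiplication test: p/q equals r/s exactly when p*s == q*r.
def same_fraction(p, q, r, s):
    return p * s == q * r

print(same_fraction(15, 2, 30, 4))   # True: both sons' shares agree
print(same_fraction(1, 3, 2, 5))     # False: 1/3 and 2/5 differ

# Fraction applies the same identification automatically:
print(Fraction(30, 4) == Fraction(15, 2))  # True; both reduce to 15/2
```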
(For future reference: all fractions can be expressed as decimals, either terminating (e.g., 1.43 = 143/100) or repeating (e.g., 0.333333... = 1/3).)
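The terminating-or-repeating fact falls out of long division: there are only finitely many possible remainders, so either the division comes out even or some remainder recurs and the digits cycle. A sketch (the function expand is my own naming; it assumes 0 < num < den, so for 1.43 we expand the fractional part 43/100):

```python
def expand(num, den):
    """Long-divide num/den (with 0 < num < den): returns the digits before
    the repeat and the repeating cycle ("" if the expansion terminates)."""
    digits, seen, rem = [], {}, num
    while rem and rem not in seen:
        seen[rem] = len(digits)          # remember where this remainder occurred
        rem *= 10
        digits.append(str(rem // den))
        rem %= den
    if rem == 0:                         # division came out even: terminating
        return "".join(digits), ""
    start = seen[rem]                    # a remainder recurred: cycle found
    return "".join(digits[:start]), "".join(digits[start:])

print(expand(43, 100))  # ('43', '')      -> 0.43 terminates
print(expand(1, 3))     # ('', '3')       -> 0.333... repeats
print(expand(1, 7))     # ('', '142857')  -> 0.142857142857...
```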
We're making significant progress. But rational numbers are not adequate for describing things like distance. For instance, the diagonal of a square with sides of length 1 is not rational, i.e., cannot be expressed as the ratio of two integers. (Can you prove this, reader?)
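Not a proof, but a quick numeric illustration: if that diagonal (length the square root of 2) were a fraction p/q, we would have p*p = 2*q*q, and a brute-force search turns up no such pair even among small integers:

```python
# Search for integers with p*p == 2*q*q, i.e., (p/q)^2 == 2.
# The classic parity argument shows no such pair exists for ANY p, q > 0;
# this only spot-checks small values.
hits = [(p, q) for q in range(1, 500) for p in range(1, 1000)
        if p * p == 2 * q * q]
print(hits)  # []
```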
Real numbers

The discovery that there existed quantities which could not be described by fractions created a bit of a rumpus (I've heard it said, variously, that the man who discovered it was killed or committed suicide; he was a devoted Pythagorean). But, to modern people with no ideological attachment to the purity of integers, we need only repeat the process: we define numbers which can describe distances and other continuous quantities, and then append those to the rational numbers.
We can't describe such quantities exactly with fractions, but we can approximate them as closely as we like. For instance, it's well-known (although nontrivial to prove) that the number pi is not rational. But we can approximate it as closely as we might like: {3, 3.1, 3.14, 3.141, 3.1415, ...}. The further out in the sequence we get, the closer we get to where we think pi ought to be.
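That sequence is just the decimal truncations of pi, and each term is a rational number. A quick sketch of how the gap shrinks:

```python
import math

# Successive decimal truncations of pi; each term is rational (e.g. 314/100).
approximations = [math.floor(math.pi * 10**n) / 10**n for n in range(6)]
print(approximations)   # [3.0, 3.1, 3.14, 3.141, 3.1415, 3.14159]

# The error shrinks below any bound you name; that is all "converges" means.
for a in approximations:
    print(math.pi - a)
```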
So why not define pi to be that sequence? In fact, let's just define a whole new set of numbers by the sequences that "converge" to them.
After a bit of mathematical formalism, we get the real numbers. Problem: for every real number, there are a whole boatload of sequences converging to it. For example, {1/3, 1/3, 1/3, 1/3, 1/3, ...} and {0.3, 0.33, 0.333, 0.3333, ...} both go to 1/3, and {3, 3.1, 3.14, 3.141, ...} and {3, 3.2, 3.1, 3.15, 3.145, 3.1415, ...} both converge to pi. This is the same problem we had with fractions: lots of fractions are the same number. Same here: lots of sequences are the same number!
So we just define the sequences which converge to each other to be the same number! This is just like the fractions: when we have "numbers" which ought to be the same, we just define them to be the same and see if the new scheme is self-consistent. After some mathematical gymnastics to prove that it is self-consistent, we get: the real numbers we know and love.
Application: 1 and 0.999...

So let's clear up the confusion about 1 and 0.999... . The decimal 0.999... is a real number, so let's pick a sequence which converges to it: {0, 0.9, 0.99, 0.999, ...}. It converges to 1 if the sequence of differences {1 - 0, 1 - 0.9, 1 - 0.99, 1 - 0.999, ...} converges to zero (that is, if 1 - 0.999... = 0). But that sequence is {1, 0.1, 0.01, 0.001, 0.0001, ...}, which gets smaller and smaller and smaller, so it goes to zero. That means that 0.999... = 1.
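The same argument, sketched with exact rational arithmetic (Python's Fraction type, so there is no floating-point fuzz muddying the picture): the partial decimals 0.9, 0.99, 0.999, ... differ from 1 by exactly 1/10, 1/100, 1/1000, ...

```python
from fractions import Fraction

partial = Fraction(0)
for n in range(1, 8):
    partial += Fraction(9, 10**n)   # append another 9: 0.9, 0.99, 0.999, ...
    print(partial, " difference from 1:", 1 - partial)
# Each difference is exactly 1/10**n, which sinks below any positive bound,
# so the sequence converges to 1 -- i.e., 0.999... = 1.
```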
The big confusion here is that 1 and 0.999... are two different ways of writing the same number. This is just like fractions: 1/2 and 2/4 are the same because they are two ways of writing the same number. The only difference is that the real numbers require a little bit more mathematical sophistication to treat rigorously; however, as you (dear reader) can see, the conceptual construction of the real numbers is certainly not beyond the layman.
"... alas, too many people think consistency the hobgoblin of little minds." -- Publius
Daily Nugget of Wisdom from Goldman Sachs: "I say 'keep the change' purely for my own convenience."
"A space shuttle on the back of an aircraft carrier in New York City is perhaps the most American thing you could have without the help of a deep fryer. I'm surprised anyone in the US opposes it." -- Gandalf
WARNING: May become overexcited by mathematics or monetary policy.
