maybe this is a hot take for some, but in the context of "1 = 0.9999...":
decimals are not numbers, they are syntactic conveniences or representations, they are syntax.
they sometimes represent recursive operation chains, changing meaning based on the context they appear in.
but instead of doing those recursive computations, we have shortcuts we use while dealing with decimals. those shortcuts make us treat them like numbers, but that doesn't make them numbers.
similar to how an operator can carry a different meaning in programming languages, for example 10 + "10" = "1010". when you combine a number with a decimal in an operation, that "+" operator doesn't really mean "add" anymore.
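to make that concrete, here's what JavaScript/TypeScript actually does when "+" gets mixed operand types (this is real language behavior; the variable names are just for illustration):

```typescript
// The "+" operator dispatches on operand types: number + number performs
// addition, but number + string performs string concatenation.
// Same symbol, different underlying operation.
const asNumbers = 10 + 10;   // addition → 20
const mixed = 10 + "10";     // concatenation → "1010"

console.log(asNumbers, mixed);
```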
you can't always use the shortcuts on infinite decimals. the issue is, it looks predictable, just 9s going forever, but when you inline the syntax it's not just 9s going forever, and sometimes it's not even possible to predict the pattern of computations that syntax is going to create recursively forever, so then your shortcuts might also fail.
0.5 by itself doesn't mean anything. it gets meaning when you try to "multiply" it, or "add" it, etc. by itself it's an unexpected end of syntax.
the shortcuts work so much of the time that you start to think they are numbers.
---
people will also say 2*(1/2) = 1
but "2*(1/2)" is not equal to 1 because it's not a number, it's syntax for a computation. when you run it you get "1", so when you inline the result into "2*(1/2) = 1" it means "1 = 1", not "2*(1/2) = 1".
mathematicians get so used to playing around with shortcuts that they think things that are not numbers are in fact numbers. but without realizing it, they don't even handle them like numbers, they treat them differently.
maybe if instead of operators they used function calls like `add(1, 2)`, they would realize that `add(1, 0.5)` works because the `add()` function has an overload that looks like `add(number, decimal) => decimal | number`.
it's not the same addition computation.
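a sketch of that idea, with made-up `Int`/`Dec` tags (not any real library), showing how an `add()` with overloads makes the type dispatch visible:

```typescript
// Hypothetical tagged types, invented for this sketch.
type Int = { kind: "int"; value: number };
type Dec = { kind: "dec"; value: number };

// Overload signatures: the result type depends on the operand types.
function add(a: Int, b: Int): Int;
function add(a: Int, b: Dec): Dec;
function add(a: Int | Dec, b: Int | Dec): Int | Dec {
  // If either operand is a decimal, the result stays a decimal.
  if (a.kind === "dec" || b.kind === "dec") {
    return { kind: "dec", value: a.value + b.value };
  }
  return { kind: "int", value: a.value + b.value };
}

const sum = add({ kind: "int", value: 1 }, { kind: "dec", value: 0.5 });
console.log(sum); // { kind: "dec", value: 1.5 }
```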
being able to "think" something in a human mind is not a good measure for deciding what is a basic operation and what is a number. humans have lookup tables, and they invent shortcuts.
shortcuts are not the underlying operations, they are shortcuts.
similarly, just because i can quickly say 1_000_000_000 * 10 = 10_000_000_000 doesn't make multiplication a basic operation. it's just a shortcut: pattern recognition, learned base-10 arithmetic rules, a cognitive shortcut that bypasses the actual multiplicative operation.
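the "actual multiplicative operation" idea could be sketched as a loop of additions, with no base-10 tricks (a minimal sketch, non-negative counts only):

```typescript
// Multiplication written as the underlying operation the text describes:
// repeated addition, one add per count, no positional-arithmetic shortcut.
function mulByRepeatedAddition(a: number, times: number): number {
  let total = 0;
  for (let i = 0; i < times; i++) {
    total = total + a; // one addition per loop turn
  }
  return total;
}

console.log(mulByRepeatedAddition(1_000_000_000, 10)); // 10000000000
```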
0.99 as a divisor: what does "divide" mean here?
1 / 0.99 = 1.010101...??? that's also a decimal, at least it's bigger than 1, but infinite, because you need resolution, like with pi.
it's not a number, it's still an operation, a computation, waiting to be used.
not a result number. it's waiting for input. you wrote, generated, an operation/code/computation waiting for an input with a big enough resolution, and some of these loop longer as the input gets bigger.
as soon as you reach a decimal in a computation, that means you don't have enough resolution in your unit to give a full output. so what remains is the remaining computation, which we represent next to our number with a nice syntax. and we just have a different set of rules and systems to handle these remaining computations when we continue computing without resolving them. it's almost like stuff in a closure waiting to be resolved. or it will just stack overflow lol.
0.999... = 1 is a shortcut giving a wrong result, not the underlying operation.
ok let's think a little. let's try to do: 1 / 2 = 0.5
0.5 is not a number. it's telling you that your resolution is not enough to show the full result, "but i will add it at the end, so you can use it to continue your computation". like closures in programming, it's waiting on the stack.
so let's say that 1 there is 1cm. i can convert it to 10mm, and now 10 / 2 = 5mm is a result without a remaining computation.
but let's say we wanna keep using this "result",
and multiply it by 20.
0.5 * 20 = 10. we used the remaining computation with our next operation and got 10, which is a number.
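one way to sketch this "result plus remaining computation" idea in code: a made-up tuple of an integer part and a closure holding the unfinished division, which the next multiplication resolves (all names here are invented, this is not real arithmetic machinery):

```typescript
// A decimal modeled as the text describes: an integer part plus a closure
// that still owes a division, parked until there's enough "resolution".
type Pending = { whole: number; remaining: (input: number) => number };

// 1 / 2: not enough resolution, so park the rest of the division.
const half: Pending = { whole: 0, remaining: (x) => x / 2 };

// Multiplying by 20 feeds the closure enough input to finish:
const resolved = half.whole * 20 + half.remaining(20); // 0*20 + 20/2 = 10
console.log(resolved); // 10
```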
now lets think a bit.
what does 0.5 mean while multiplying?
it means divide by 2?
how do we know?
because 1 / 0.5 = 2? yes, but how do we use that here? it's still a decimal.
and what does dividing mean?
well, we can try to scale the units up by 10?
10 / 5, ok this is readable.
but now, how did we multiply 0.5 by 10? one is a number, one is not. different types.
it's simple: 0.5 never meant "0", ".", "5".
it's a syntax.
it just means "divide by 2 one more time".
so, how did we reach 0.5?
we tried to divide 1 by 2!!! (there is your TWO), but we couldn't divide because we didn't have enough resolution. so we carried the same computation over as part of the result, in case the numbers later get big enough to produce one.
let's try something different: 5 / 2 = 2.5
so how did we get 2.5? well, we certainly got the 2, but remember, ".5" is syntax we made up.
the result is "2 (then / 2)". it's just syntax. it's not a number, it's telling us what to do, not what this is.
so how did we find the first 2?
division is not a basic operation, it's a loop/recursion.
it's multiple subtractions.
(similar to decimals, negative numbers are also remaining operations. so we can't go under 0.)
so we subtract the divisor from the dividend and count, until we can't anymore.
that's what dividing really means at its core. dividing is not a basic operation, because it's made out of a basic operation. it's not a primitive.
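that repeated-subtraction loop can be written out directly (a minimal sketch, positive integers only):

```typescript
// Division as the text defines it: repeatedly subtract the divisor from the
// dividend and count until you can't anymore. The leftover `rest` is what
// becomes the "remaining computation".
function divideBySubtraction(dividend: number, divisor: number): { count: number; rest: number } {
  let count = 0;
  let rest = dividend;
  while (rest >= divisor) {
    rest = rest - divisor; // one subtraction per turn of the loop
    count = count + 1;
  }
  return { count, rest };
}

const fiveOverTwo = divideBySubtraction(5, 2);
console.log(fiveOverTwo); // { count: 2, rest: 1 }, i.e. "2 (then 1 / 2)"
```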
so we were able to subtract 2 times, and an operation is still remaining.
so we have "2 (then / 2)". this is just different syntax. you can write it as "2.(10/2)" => "2.5" like you normally would too, but this syntax is just more verbose about what is happening.
so what if we multiply this by 10?
(2 (then / 2)) * 10
= (2 * 10) + (10 / 2)
= 20 + 5
= 25
this is how it works, but in between these operations there are still hidden operations, because dividing and multiplying are still not fully expanded. but i'm not gonna write the same thing 10 times, and that's why we have syntaxes, rules and shortcuts. but shortcuts are shortcuts, not the underlying operation.
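the distribution step above, checked numerically (nothing new here, just the arithmetic from the derivation):

```typescript
// (2 (then / 2)) * 10 expands to (2 * 10) + (10 / 2):
// the parked "/ 2" gets applied to the multiplier.
const wholePart = 2 * 10;   // 20
const pendingPart = 10 / 2; // 5
console.log(wholePart + pendingPart); // 25
console.log(2.5 * 10);                // 25, the usual shortcut agrees here
```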
in math we can get an unresolved "result" and wait to use it somewhere else. but that's not a result.
that's a computation on pause. because in math we can cut a computation in the middle, store it, write it somewhere, and then insert it into another computation later.
so these are not numbers, these are syntax for unresolved operations. we just found ways to handle and work with these operations, and that doesn't mean those ways are always correct in all conditions.
if we draw more parallels with programming: numbers are integers going from 0 to infinity. decimals are a tuple of (a number, and a function) for the remaining operation. based on what the types of the operands on each side are, operator overloads decide what to do.
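that last parallel, sketched end to end: a made-up `Decimal` tuple type plus a `mul()` whose overloads dispatch on the operand kinds (everything here is invented for illustration):

```typescript
// An "integer" is just a number; a "decimal" is a tuple of
// (integer part, remaining operation), per the parallel in the text.
type Integer = number;
type Decimal = { whole: Integer; remaining: (x: Integer) => number };

// Overloads: the operation performed depends on the operand types.
function mul(a: Integer, b: Integer): Integer;
function mul(a: Decimal, b: Integer): number;
function mul(a: Integer | Decimal, b: Integer): number {
  if (typeof a === "number") return a * b;  // integer × integer
  return a.whole * b + a.remaining(b);      // resolve the pending operation
}

const twoAndAHalf: Decimal = { whole: 2, remaining: (x) => x / 2 }; // "2.5"
console.log(mul(3, 4));            // 12
console.log(mul(twoAndAHalf, 10)); // 25
```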