Mathematically, 0.1 + 0.2 is obviously 0.3.
But if you type 0.1 + 0.2 == 0.3 in the Python interpreter, it'll tell you False.
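Here's what that looks like in an actual interpreter session (the exact digits are a consequence of IEEE 754 double precision, which we'll get to):

```python
>>> 0.1 + 0.2            # not quite 0.3
0.30000000000000004
>>> 0.1 + 0.2 == 0.3     # so the comparison fails
False
```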
So why does Python "get it wrong"? Or is it really a mistake at all?
Let's take a look.
Key points: floating-point precision, computer storage, and binary conversion.
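As a preview of those ideas, here's a minimal sketch using Python's standard decimal module to expose the exact binary value that the literal 0.1 is actually stored as:

```python
from decimal import Decimal

# Converting a float to Decimal reveals the exact IEEE 754 double
# that the source literal 0.1 was rounded to during binary conversion.
print(Decimal(0.1))
# 0.1000000000000000055511151231257827021181583404541015625
```

Because 0.1 has no exact binary representation, the nearest representable double is very slightly larger than 0.1, and tiny rounding errors like this are exactly what the rest of this post unpacks.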