In high-frequency systems, math isn't just about addition; it's about precision and overflow. If you add 1 to the largest possible int, C# silently "wraps around" to a huge negative number without telling you. This silent bug can corrupt the data behind financial or healthcare systems.
Choosing the right size matters. A byte (0 to 255) consumes one eighth of the memory of a long (see the sizeof sketch after the table).
| Type | Bytes | Approximate Range |
|---|---|---|
| byte | 1 | 0 to 255 |
| short | 2 | ±32,767 |
| int | 4 | ±2.1 billion |
| long | 8 | ±9.2 quintillion (9.2 × 10^18) |
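To see those sizes directly, the sizeof operator reports the byte count of each built-in integral type; a minimal sketch:

// Prints the in-memory size of each integral type (fixed by the C# spec).
Console.WriteLine($"byte : {sizeof(byte)} byte");   // 1
Console.WriteLine($"short: {sizeof(short)} bytes"); // 2
Console.WriteLine($"int  : {sizeof(int)} bytes");   // 4
Console.WriteLine($"long : {sizeof(long)} bytes");  // 8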
By default, C# integer math is unchecked: overflow wraps around silently. If you want a crash (OverflowException) instead of a silent wrap-around, you must use a checked block.
int val = int.MaxValue;

// ❌ SILENT BUG: wraps around to -2,147,483,648
val = val + 1;

// ✅ THROWS OverflowException: the app fails fast so you can fix the logic
val = int.MaxValue; // reset to the boundary value so the next add actually overflows
checked
{
    val = val + 1;
}
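The same guard also works as a one-line checked expression, and the resulting OverflowException can be caught where a controlled failure beats a crash; a minimal sketch:

int balance = int.MaxValue;
try
{
    // checked(...) applies overflow checking to a single expression.
    balance = checked(balance + 1);
}
catch (OverflowException)
{
    Console.WriteLine("Overflow detected - rejecting the operation.");
}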
NEVER use double for money. Double is binary floating point: values like 0.1 have no exact binary representation, so rounding errors creep in. Always use decimal for precision-critical data.
double d = 0.1 + 0.2; // Results in 0.30000000000000004
decimal m = 0.1m + 0.2m; // Exactly 0.3m
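The gap widens once the error compounds. A rough sketch that adds 0.1 one thousand times with each type; the double total drifts away from 100, while the decimal total stays exact:

double dSum = 0.0;
decimal mSum = 0.0m;

for (int i = 0; i < 1000; i++)
{
    dSum += 0.1;   // binary approximation of 0.1 accumulates error
    mSum += 0.1m;  // decimal stores 0.1 exactly
}

Console.WriteLine(dSum);          // e.g. 99.9999999999986 - not exactly 100
Console.WriteLine(mSum);          // 100.0 exactly
Console.WriteLine(dSum == 100.0); // False
Console.WriteLine(mSum == 100m);  // True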
Q: "Why is decimal slower than double or float if it's more accurate?"
Architect Answer: "float and double are 'native' types. Modern CPUs have physical circuits (FPUs) designed specifically to do binary floating-point math in a handful of clock cycles. `decimal`, on the other hand, is not a native CPU type; it is a 128-bit structure implemented in software by the runtime. Every addition or multiplication on a decimal costs several CPU operations plus scaling and normalization work in the CLR. We trade incredible raw speed for exact decimal precision, which is non-negotiable in financial engineering."
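To make that trade-off visible, here is a rough (non-rigorous) timing sketch using Stopwatch; exact numbers vary by machine, but decimal addition typically lands roughly an order of magnitude behind double:

using System.Diagnostics;

const int Iterations = 10_000_000;

// Time double additions (hardware FPU path).
double dTotal = 0;
var sw = Stopwatch.StartNew();
for (int i = 0; i < Iterations; i++) dTotal += 1.1;
sw.Stop();
Console.WriteLine($"double : {sw.ElapsedMilliseconds} ms");

// Time decimal additions (software-implemented 128-bit arithmetic).
decimal mTotal = 0;
sw.Restart();
for (int i = 0; i < Iterations; i++) mTotal += 1.1m;
sw.Stop();
Console.WriteLine($"decimal: {sw.ElapsedMilliseconds} ms");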