
Beyond JavaScript - Why 0.1 + 0.2 doesn't equal 0.3 in programming

DDD
Published: 2024-09-13 22:17:02
Original
961 views

JavaScript is frequently ridiculed when developers first encounter this seemingly baffling result:

0.1 + 0.2 == 0.30000000000000004

Memes about JavaScript's handling of numbers are widespread, often leading many to believe that this behaviour is unique to the language.


However, this quirk isn't just limited to JavaScript. It is a consequence of how most programming languages handle floating-point arithmetic.

For instance, Java and Go both produce the same kind of result when adding 0.1 and 0.2.


Computers can natively store only integers; they have no built-in notion of fractions. (How could they? All of a computer's arithmetic ultimately comes down to switching circuits on or off, and a circuit can't be "half" on.) They therefore need some scheme for representing floating-point numbers, and since that representation is not perfectly accurate, more often than not 0.1 + 0.2 does not equal 0.3 exactly.

Fractions whose denominators are built only from the prime factors of the number system's base can be expressed cleanly; every other fraction has repeating digits. For example, in base 10, fractions like 1/2, 1/4, 1/5 and 1/10 terminate because their denominators are made up of 2s and 5s, the prime factors of 10. Fractions like 1/3, 1/6 and 1/7, however, all recur.

Similarly, in the binary system, fractions like 1/2, 1/4 and 1/8 terminate, while all other fractions recur. Because only a finite number of bits are stored, these recurring expansions must be rounded off; the rounding error carries through arithmetic and shows up when the computer's binary result is converted back to a human-readable base-10 representation. This is what leads to approximately correct results.

Now that we've established that this problem is not exclusive to JavaScript, let's explore how floating-point numbers are represented and processed under the hood. That requires a look at the IEEE 754 floating-point standard.

The IEEE 754 standard is a widely used specification for representing and performing arithmetic on floating-point numbers in computer systems. It was created to guarantee consistent floating-point behaviour across computing platforms, and most programming languages and hardware implementations (CPUs, GPUs, etc.) adhere to it.

This is how a number is denoted in IEEE 754 format:

(-1)^s × M × 2^E

Here s is the sign bit (0 for positive, 1 for negative), M is the mantissa (which holds the significant digits of the number), and E is the exponent, which determines the scale of the number.

You will not find integer values of M and E that exactly represent numbers like 0.1, 0.2 or 0.3 in this format; we can only pick the values that give the closest result.

Here is a tool you could use to determine the IEEE 754 notations of decimal numbers: https://www.h-schmidt.net/FloatConverter/IEEE754.html

IEEE 754 notation of 0.25:

0 01111101 00000000000000000000000

IEEE 754 notation of 0.1 and 0.2 respectively:

0.1 → 0 01111011 10011001100110011001101 (stored value ≈ 0.100000001490116119384765625)
0.2 → 0 01111100 10011001100110011001101 (stored value ≈ 0.20000000298023223876953125)

Note that the conversion error for 0.25 is zero, while 0.1 and 0.2 both carry non-zero errors.
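Those errors can be computed directly; a sketch in Go, using a round trip through float32 to recover what single precision actually stores:

```go
package main

import "fmt"

func main() {
	for _, d := range []float64{0.25, 0.1, 0.2} {
		stored := float64(float32(d)) // value actually held in single precision
		fmt.Printf("%v: conversion error = %g\n", d, stored-d)
	}
}
```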

IEEE 754 defines the following formats for representing floating-point numbers:

  • Single-precision (32-bit): 1 bit for sign, 8 bits for exponent, 23 bits for mantissa

  • Double-precision (64-bit): 1 bit for sign, 11 bits for exponent, 52 bits for mantissa

For the sake of simplicity, let us consider the single-precision format that uses 32 bits.

The 32-bit representation of 0.1 is:

0 01111011 10011001100110011001101

Here the first bit is the sign (0, meaning positive), the next 8 bits (01111011) are the exponent, and the final 23 bits (10011001100110011001101) are the mantissa.

This is not an exact representation. It represents ≈ 0.100000001490116119384765625

Similarly, the 32-bit representation of 0.2 is:

0 01111100 10011001100110011001101

This is not an exact representation either. It represents ≈ 0.20000000298023223876953125

When added, this results in:

0 01111101 00110011001100110011010

which is ≈ 0.30000001192092896 in decimal representation.

In conclusion, the seemingly perplexing result of 0.1 + 0.2 not yielding 0.3 is not an anomaly specific to JavaScript, but a consequence of the limitations of floating-point arithmetic across programming languages. The roots of this behaviour lie in the binary representation of numbers, which inherently leads to precision errors when handling certain fractions.


Source: dev.to