User talk:BillF4/sandbox


This is a test topic

This is a test topic created just to see how the new section feature works.

Number of significant digits in the 80-bit format

Given that there are 64 bits of precision, the extended precision format can represent only 19 decimal digits of precision. Integer(Log10(2^Number of bits)) is the time-honored formula for computing the number of decimal digits a given number of bits can store. For example, consider a 4-bit value. The representable values are 0, 1, 2, ..., 14, 15. This 4-bit value has only one decimal digit of precision because many two-decimal-digit values (16, 17, ..., 98, 99) cannot be represented. Note that Log10(2^4) = 1.2041. Seven bits are needed to hold a two-decimal-digit value: Log10(2^7) = 2.1072.
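
To make the arithmetic concrete, here is a small C sketch of the same formula; the mantissa widths are just examples (the two toy cases above plus single, double and x86 extended precision). Compile with any C99 compiler and -lm.

 #include <math.h>
 #include <stdio.h>
 
 int main(void)
 {
     /* Example mantissa widths: 4 and 7 bits from the discussion above,
        plus single (24), double (53) and x86 extended (64). */
     const int widths[] = { 4, 7, 24, 53, 64 };
     for (int i = 0; i < (int)(sizeof widths / sizeof widths[0]); i++) {
         double d = widths[i] * log10(2.0);   /* Log10(2^N) = N * Log10(2) */
         printf("%2d bits -> Log10(2^%d) = %.4f -> %d decimal digits\n",
                widths[i], widths[i], d, (int)d);
     }
     return 0;
 }

Running it reproduces the numbers above: 4 bits give 1.2041 (1 digit), 7 bits give 2.1072 (2 digits), and 64 bits give 19.2659 (19 digits).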

I think that Prof. Kahan's paper is being taken a little out of context here. His paper says that the Extended format supports ≥18 to 21 significant digits. This is true because an IEEE 754 double-extended value has 64 or more bits of precision. The x86 extended precision value has exactly 64 bits of precision, so it supports only 19 decimal digits.

Prof. Kahan's statement that Extended supports ≥18 decimal digits is true because 19 is greater than 18. He may have mentioned 18 digits because the x86 contains an instruction that will convert an extended precision value to 18 BCD digits.
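
For what it's worth, C's <float.h> exposes the three figures being discussed here (64 bits, 18 digits, 21 digits) on compilers where long double is the x86 80-bit format, e.g. gcc or clang on x86; on other platforms long double may be a different format and the values will differ.

 #include <float.h>
 #include <stdio.h>
 
 int main(void)
 {
     printf("LDBL_MANT_DIG    = %d\n", LDBL_MANT_DIG);    /* 64 significand bits */
     printf("LDBL_DIG         = %d\n", LDBL_DIG);         /* 18: decimal -> binary -> decimal digits */
 #ifdef LDBL_DECIMAL_DIG
     printf("LDBL_DECIMAL_DIG = %d\n", LDBL_DECIMAL_DIG); /* 21: binary -> decimal -> binary digits (C11) */
 #endif
     return 0;
 }

On an x86 build this prints 64, 18 and 21, which matches the 18 to 21 digit range mentioned in Kahan's paper.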

The following statement should probably be removed from the article: "(if a decimal string with at most 18 significant decimal is converted to 80-bit IEEE 754 double extended precision and then converted back to the same number of significant decimal, then the final string should match the original; and if an IEEE 754 (80-bit) double extended precision is converted to a decimal string with at least 21 significant decimal and then converted back to double extended, then the final number should match the original)."

The clause "...(if a decimal string with at most 18 significant decimal is converted to 80-bit IEEE 754 double extended precision and then converted back to the same number of significant decimal, then the final string should match the original;..." really describes the quality of the software that performs such a conversion, not the extended precision format itself.
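
As a concrete illustration of that first clause, here is a minimal C sketch of the 18-digit round trip, assuming long double is the x86 80-bit format and that the C library's conversions are correctly rounded; the sample value is arbitrary.

 #include <stdio.h>
 #include <stdlib.h>
 #include <string.h>
 
 int main(void)
 {
     const char *original = "1.23456789012345678e+00";  /* 18 significant digits */
     long double x = strtold(original, NULL);           /* decimal string -> 80-bit binary */
     char back[64];
     snprintf(back, sizeof back, "%.17Le", x);          /* back to 18 significant digits */
     printf("original: %s\nback:     %s\nmatch: %s\n",
            original, back, strcmp(original, back) == 0 ? "yes" : "no");
     return 0;
 }

Whether the strings match depends on the library doing both conversions correctly, which is the point: the clause is as much a statement about the conversion software as about the format.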

The clause "...and if an IEEE 754 (80-bit) double extended precision is converted to a decimal string with at least 21 significant decimal and then converted back to double extended, then the final number should match the original" is misleading because the converse is not true. It is true that an 80-bit value can be converted to a 21-decimal-digit value and then converted back to the original 80-bit value. The converted value will have several leading zeros. It is false that every 21-digit decimal value can be converted to an 80-bit binary value and then converted back to the original 21-digit decimal value: a 21-decimal-digit value requires at least 70 bits of precision (21 ÷ Log10(2) ≈ 69.8), and the 80-bit format only has 64 bits.
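
Both directions can be checked with a short C sketch, again assuming an x86 80-bit long double and correctly rounded library conversions. The first part shows an 80-bit value surviving a trip through 21 decimal digits; the loop shows that ten consecutive 21-digit decimals, spaced 10^-20 apart, cannot all survive the reverse trip, because representable 80-bit values near 1.2 are about 1.08 x 10^-19 apart.

 #include <stdio.h>
 #include <stdlib.h>
 #include <string.h>
 
 int main(void)
 {
     /* 80-bit binary -> 21 decimal digits -> 80-bit binary */
     long double x = 1.0L / 3.0L;
     char dec[64];
     snprintf(dec, sizeof dec, "%.20Le", x);        /* 21 significant digits */
     long double y = strtold(dec, NULL);
     printf("binary round trip: %s\n", x == y ? "ok" : "fails");
 
     /* 21-digit decimal -> 80-bit binary -> 21-digit decimal */
     for (int d = 0; d < 10; d++) {
         char s[64], back[64];
         snprintf(s, sizeof s, "1.2345678901234567890%d", d);  /* 21 significant digits */
         long double z = strtold(s, NULL);
         snprintf(back, sizeof back, "%.20Lf", z);
         printf("%s -> %s  (%s)\n", s, back,
                strcmp(s, back) == 0 ? "round trip ok" : "round trip fails");
     }
     return 0;
 }

Most of the ten decimal strings come back changed in their trailing digits, which is the pigeonhole argument in miniature: there are far more 21-digit decimals than there are 64-bit significands to map them onto.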