
Talk:Binary scaling

From Wikipedia, the free encyclopedia

The category should be Computer science, but I do not know the source syntax. ~~

Re-scaling after multiplication: The X's in the example should be replaced with 1's, since that is the number used in the following example (I think) -- 137.248.143.158 07:02, 16 October 2007 (UTC)[reply]




Question


Is there a way to find the original article I wrote? I still use B notation a little and used to use it as a crib sheet. B notation was heavily used in scientific and engineering software up until the end of the 1980s. The STM32 processors now use Q notation for fixed point, which is equivalent: i.e. Q1.15 is B1 (where we are using 16-bit ints).

But could someone point me at how to get the original article? Robin48gx (talk) 11:30, 4 September 2024 (UTC)[reply]



Comparison with Floating Point


"Although floating point has taken over to a large degree, where speed and extra accuracy are required, binary scaling is faster and more accurate."

Binary scaling is faster and more accurate where speed and extra accuracy is required? I don't get it. 213.221.94.52 (talk) 13:45, 7 January 2008 (UTC)[reply]

 A floating point number needs to use some bits to keep its exponent. Also, whenever it is used, the exponent has to be checked and the values shifted before use. A binary scaled number can use the extra bits to provide more accuracy. It's also faster, generally, because it's a MUL and a SHIFT for each multiply operation, instead of variable shifts which can take more time. Robin48gx (talk) 11:03, 10 January 2019 (UTC).[reply]

Agreed, does anyone know of a source that says which is faster? It would make sense that binary scaling is more accurate if you know the range ahead of time, but the resulting speed is not intuitive. I am marking that phrase as confusing. Vickas54 (talk) 00:15, 16 March 2016 (UTC)[reply]

I guess it is confusing to state it's always faster. Some FPUs might be amazing now. But it's less work to handle fixed scalings. Robin48gx (talk) 11:06, 10 January 2019 (UTC).[reply]
I believe that the IBM 360/91 floating point multiplier is faster than the fixed point multiplier. The machine was designed for scientific number crunching. Floating point add/subtract requires prenormalization and postnormalization, which can be slow or take a lot of special hardware. Gah4 (talk) 15:31, 16 January 2020 (UTC)[reply]
External links modified

Hello fellow Wikipedians,

I have just modified 2 external links on Binary scaling. Please take a moment to review my edit. If you have any questions, or need the bot to ignore the links, or the page altogether, please visit this simple FAQ for additional information. I made the following changes:

When you have finished reviewing my changes, please set the checked parameter below to true or failed to let others know (documentation at {{Sourcecheck}}).

This message was posted before February 2018. After February 2018, "External links modified" talk page sections are no longer generated or monitored by InternetArchiveBot. No special action is required regarding these talk page notices, other than regular verification using the archive tool instructions below. Editors have permission to delete these "External links modified" talk page sections if they want to de-clutter talk pages, but see the RfC before doing mass systematic removals. This message is updated dynamically through the template {{source check}} (last update: 5 June 2024).

  • If you have discovered URLs which were erroneously considered dead by the bot, you can report them with this tool.
  • If you found an error with any archives or the URLs themselves, you can fix them with this tool.

Cheers.—InternetArchiveBot (Report bug) 19:35, 2 November 2016 (UTC)[reply]

uncertainty


It seems to me that this article completely misses the point. It makes it sound like scaled fixed point is a poor substitute for floating point. That isn't right. Fixed point (scaled) is used for quantities that have an absolute uncertainty, and floating point for quantities with a relative uncertainty. Financial calculations commonly have an absolute uncertainty: I expect the bank to keep my account to the last cent, when I have $1 or $1,000,000 in it. Many physical quantities have an uncertainty that gets larger when the quantity gets larger. These quantities are appropriate for floating point. Many CS courses don't do a good job of teaching this, and emphasize floating point for anything that is fractional. Gah4 (talk) 15:38, 16 January 2020 (UTC)[reply]

school


It occurs to me that the real problem with this article is that decimal fractions are commonly taught in school, with the decimal point not immediately to the right of the least-significant digit. In the "new math" days that many of us remember, teaching of non-decimal bases was added to school math (maybe about fourth grade), but teaching of non-decimal fractions seems to have been lost in school. Scaled fixed point, in any base, is not a poor man's floating point. It is often used in digital signal processing, though this case isn't completely obvious. In the case of audio, 16 bits is about the range between the quietest sound we can hear and the loudest we can safely hear. Video is commonly done in 8 or 12 bits, which seems to be enough. Floating point is appropriate for quantities that vary over many orders of magnitude, but where the uncertainty (measurement error) increases or decreases with the size of the value. We can measure the distance to some stars to one or two decimal digits. The lattice constant (atomic spacing in a crystal) is known to about 10 digits. Gah4 (talk) 03:27, 17 January 2020 (UTC)[reply]