
User talk:Vincent Lefèvre


IEEE 754 preferred width and double rounding


Someone was adding a bit to the IEEE 754-2008 article about expression evaluation, and a part of the standard there struck me as worth querying.

My reading of the preferred width recommendations in IEEE 754-2008 is that if you have a statement

x=y+z

where x, y, z are all double but the block has preferred width extended, then y and z should be added using extended precision and then assigned to x, so one would have double rounding. Is that correct, do you think? Thanks Dmcq (talk) 23:33, 6 April 2012 (UTC)[reply]

Yes, in this case, one has double rounding. In order to avoid double rounding, one can use the formatOf operations (Section 5.4) instead of the ones affected by preferredWidth. Note that IEEE 754-2008 doesn't define bindings, so that when one writes x=y+z in a language, it is up to the language to specify which kind of operation is used (depending on the context). Vincent Lefèvre (talk) 00:27, 7 April 2012 (UTC)[reply]
Thanks very much. Well setting preferred width to none should do that I believe so that's okay. I guess whatever one does something one doesn't think of will happen! Dmcq (talk) 15:09, 7 April 2012 (UTC)[reply]
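
To make the double rounding concrete, here is a minimal C sketch (my own illustration, not from the discussion above). It uses an x86-style long double, whose 64-bit significand stands in for the extended format; the exact behavior depends on the platform and on FLT_EVAL_METHOD:

 #include <stdio.h>
 
 int main(void)
 {
     double y = 1.0;
     double z = 0x1p-53 + 0x1p-64;  /* exactly representable as a double */
 
     /* One rounding: the exact sum 1 + 2^-53 + 2^-64 is just above the
        halfway point, so it rounds up to 1 + 2^-52. */
     double direct = y + z;
 
     /* Two roundings: to 64-bit extended first (the tie rounds to even,
        giving 1 + 2^-53), then to double (tie again, giving 1.0). */
     double narrowed = (double)((long double)y + (long double)z);
 
     printf("direct:   %a\n", direct);    /* 0x1.0000000000001p+0 */
     printf("narrowed: %a\n", narrowed);  /* 0x1p+0 */
     return 0;
 }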

A barnstar for you!

The Technical Barnstar
I am awarding you this Technical Barnstar for your work on IEEE floating point. Good Job! Guy Macon (talk) 03:21, 13 November 2012 (UTC)[reply]

Double-quad, quad-single, etc.


Could you take a look at Quadruple-precision floating-point format#Double-double arithmetic and perhaps expand it a bit? In particular, many embedded processors have 32-bit floating point arithmetic, and there is a lot of interest in combining two, three or four 32-bit numbers to get extended precision. Yet double-single and quad-single don't seem to be covered anywhere on Wikipedia. Thanks! --Guy Macon (talk) 08:51, 15 November 2012 (UTC)[reply]

"Rounding" article undo (incorrect formulas)


Why are the formulas incorrect? If you mean rounding half-integers always up (like the MS Excel function EVEN() does), or always down, that is not bankers' rounding.

If the fraction part of a number is 0.5 (a half-integer), then the rounded number is the even integer nearest (maybe up, maybe down) to the initial number.

0.5 rounds to 0; 1.5 to 2; 2.5 to 2; 3.5 to 4; 4.5 to 4, and so on.

-0.5 rounds to 0; -1.5 to -2; -2.5 to -2; -3.5 to -4; -4.5 to -4, and so on.

On the other hand, each even number receives two half-integers: 0 receives 0.5 and -0.5; 2 receives 1.5 and 2.5; -2 receives -1.5 and -2.5, and so on.

That is the point of bankers' rounding: unbiased rounding.

Almost the same argument applies to the round-half-to-odd formula.

If you are confused by the multiplier (factor) before the floor brackets, or the addend inside the floor brackets, you should understand that floor or ceiling brackets are not ordinary brackets (parentheses): you can't carry anything in or out, except an integer addend (or subtrahend), as you would with ordinary brackets. Floor and ceiling functions have some unique rules. Anyway, just try my formulas with a few half-integer numbers and say where I am mistaken. :-)

P.S. Sorry for my English.

Borman05 (talk) 16:49, 11 April 2014 (UTC)[reply]

The incorrectness is for non-half-integers. For Round half to even, the formula at y = 1 gives 2 instead of 1. For Round half to odd, the formula at y = 0 gives 1 instead of 0.
Vincent Lefèvre (talk) 17:00, 11 April 2014 (UTC)[reply]
I understand this, but these are not general formulas for all cases. As was said earlier in the article: "Rounding a number y to the nearest integer requires some tie-breaking rule for those cases when y is exactly half-way between two integers — that is, when the fraction part of y is exactly 0.5"
These simple formulas are only for half-integers.
Borman05 (talk) 17:32, 11 April 2014 (UTC)[reply]
No, there is a tie-breaking rule for special cases, but then, from this tie-breaking rule, one can find formulas that are valid for all numbers (those given for Round half up to Round half towards zero). Note that if you wanted formulas for half-integers only, such formulas could be simpler than those given.
Vincent Lefèvre (talk) 21:44, 11 April 2014 (UTC)[reply]
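
For readers who want to experiment: under the default round-to-nearest, ties-to-even mode, the standard C function rint() implements exactly the tie-breaking rule discussed above (a small sketch, not part of the original exchange):

 #include <math.h>
 #include <stdio.h>
 
 int main(void)
 {
     /* rint() honours the current rounding direction; the default mode
        rounds halfway cases to the even integer (bankers' rounding). */
     double xs[] = { 0.5, 1.5, 2.5, 3.5, -0.5, -1.5, -2.5, -3.5 };
     for (int i = 0; i < 8; i++)
         printf("rint(%4.1f) = %4.1f\n", xs[i], rint(xs[i]));
     return 0;
 }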

PWD meaning.


Read the PWD talk page


You have removed my edit on that page without understanding what was posted. Please make sure to READ the relevant talk page for a deeper clarification. Only after you have debunked the claims on that page (if indeed they are wrong) may you remove my post. Talk:Pwd#PWD meaning. — Preceding unsigned comment added by JustToHelp (talkcontribs) 05:38, 22 December 2014 (UTC)[reply]

Vincent Lefèvre doesn't need your permission to remove your edits, so you might as well stop giving orders as if you are in charge. The Wikipedia pages that best explain what behavior is and is not allowed when two editors disagree about the content of a page are WP:BRD and WP:CONSENSUS. --Guy Macon (talk) 01:42, 13 August 2015 (UTC)[reply]

C Data Types


Genepy Quantum (talk) 01:57, 11 November 2015 (UTC)[reply]

Why are you reverting my correct changes? You can calculate the range of data types this way: a short int is 2 bytes, i.e. 16 bits, giving 2^16 = 65536 possibilities. Counting from zero, we have 65536 numbers from 0 to 65535 (including both 0 and 65535). If you split this range according to the sign of the type, you have 32768 numbers from -32768 to -1 and 32768 numbers from 0 to 32767. So for a 2-byte signed data type the range is [-32768, 32767]; for a 4-byte signed data type it is [-2147483648, 2147483647]; etc.
If you still don't want to understand, compile and run this simple C source code, testing it with, for example, -32768, -32769 and +32768. You can also change the type of 'a' to test it further.



 #include <stdio.h>
 
 int main(void)
 {
     short int a;
 
     printf("\nInsert a number: ");
     scanf("%hi", &a);
     printf("\nYour number is: %hi\n\n", a);
 
     return 0;
 }


You're assuming two's complement. But the C standard also allows ones' complement and sign + magnitude, where one loses one value. The minimal ranges are given in Section 5.2.4.2.1 of the standard, e.g. −32767 for SHRT_MIN. Please read this section.
Giving C code makes no sense because you are testing just one implementation. Not all implementations behave in the same way.
Vincent Lefèvre (talk) 02:14, 11 November 2015 (UTC)[reply]
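
A quick way to check what one particular implementation chose, without assuming two's complement, is to print the <limits.h> macros (a sketch; the values are implementation-defined):

 #include <limits.h>
 #include <stdio.h>
 
 int main(void)
 {
     /* The standard only guarantees SHRT_MIN <= -32767, SHRT_MAX >= 32767
        and CHAR_BIT >= 8; the exact values depend on the implementation. */
     printf("short range: [%d, %d]\n", SHRT_MIN, SHRT_MAX);
     printf("CHAR_BIT = %d\n", CHAR_BIT);
     return 0;
 }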

Hi,
You appear to be eligible to vote in the current Arbitration Committee election. The Arbitration Committee is the panel of editors responsible for conducting the Wikipedia arbitration process. It has the authority to enact binding solutions for disputes between editors, primarily related to serious behavioural issues that the community has been unable to resolve. This includes the ability to impose site bans, topic bans, editing restrictions, and other measures needed to maintain our editing environment. The arbitration policy describes the Committee's roles and responsibilities in greater detail. If you wish to participate, you are welcome to review the candidates' statements and submit your choices on the voting page. For the Election committee, MediaWiki message delivery (talk) 12:58, 23 November 2015 (UTC)[reply]

C data types


Here's the way it is now:

Various rules in the C standard make unsigned char the basic type used for arrays suitable to store arbitrary non-bit-field objects: its lack of padding bits and trap representations, the definition of object representation, and the possibility of aliasing.

There are three clauses:

its lack of padding bits and trap representations
the definition of object representation
the possibility of aliasing.

Number 1 is something that is true of unsigned char arrays: they lack padding bits and trap representations. Number 3 is the opposite: there is no possibility of aliasing in unsigned char arrays. You see how that changes sides in the middle?

The first should be changed to "the possibility of padding bits and trap representations" exclusive-or the last should be changed to "the impossibility of aliasing". - Richfife (talk) 02:07, 15 December 2015 (UTC)[reply]

You're wrong. Aliasing is possible with unsigned char arrays. Vincent Lefèvre (talk) 08:04, 15 December 2015 (UTC)[reply]
Interesting.. I thought you were OK with any word-length (or smaller) aligned data type, at least a char/byte.. Am I confusing two issues? (I see now "A translator is free to ignore any or all aliasing implications of uses of restrict" in the standard. I was thinking of the restrict keyword, I guess, or of the CPU level, not the C level.) Maybe you are safe in practice, but not on some weird CPUs.. comp.arch (talk) 12:15, 3 May 2016 (UTC)[reply]
There are two completely different concepts of aliasing. The first one concerns storage in memory and data types: the notion of effective type (in C11, §6.5 Expressions). The second one concerns whether different pointer variables (in general of the same type) can point to the same object (or a part of the same object) or not: hence the keyword restrict (in C11, §6.7.3.1). Vincent Lefèvre (talk) 12:48, 3 May 2016 (UTC)[reply]
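
To illustrate the first kind of aliasing (the effective-type rules), a minimal sketch of my own: an unsigned char pointer may be used to inspect the object representation of any object.

 #include <stdio.h>
 
 int main(void)
 {
     double d = 3.14;
 
     /* Accessing an object through an unsigned char lvalue is always
        allowed (C11, 6.5p7), so this dumps d's object representation. */
     unsigned char *p = (unsigned char *)&d;
     for (size_t i = 0; i < sizeof d; i++)
         printf("%02x", p[i]);
     putchar('\n');
     return 0;
 }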

Hi! You've just reverted an edit on the page C data types. I would like to discuss type ranges. Really, the standard says that, for example, short should go from -32767 to 32767. But in fact every compiler (clang-3.6, gcc, and Microsoft according to MSDN) lets you set a signed short to -32768 without any warnings (-Wall, -Wextra, -Wpedantic, even with -std=iso9899:1990). So I guess we should change the range to from −2^(N−1) to +(2^(N−1) − 1). Yanpas (talk) 20:23, 17 January 2016 (UTC)[reply]

The exact range depends on the implementation. The standard says at least [−32767,32767], and this is also what is said on the WP page. On your implementation, the range is [−32768,32767], which contains the interval [−32767,32767]. So, everything is OK. Note that with a 16-bit short and either sign-magnitude or ones' complement representation (both allowed by the C standard), the value −32768 is not possible and the range is [−32767,32767]. Such implementations existed in the past, and might still exist nowadays. There also exist implementations where a short has more than 16 bits. Vincent Lefèvre (talk) 23:47, 17 January 2016 (UTC)[reply]
"and might still exist nowadays", I hope not and think not.. Julia (programming language) also assumes the 8-bit byte (not 9- (or 6-)bit).. Less portable yes, but not really.. ARM is taking over anyway, and I can't remember any ones complement machine (there might be microcontrollers being phased out?). Ternary computers like the Setun would also screw us all over.. even the C language.. Julia handles signed and unsigned char/byte for C-types (FFI C API), but defaults to sane signed 32- or 64-bit. Hexadecimal floating-point is also not supported (I think though be C code emulation, that may have been wrapped already). My reading of IEEE-754-2008 (WP page): "and might still exist nowadays" says non-binary floating point only is ok..?! Julia has a package[s] for at least decimal64 floating-point format (emulated), binary uses machine registers, is faster. In case a decimal-only floating point would appear, I'm not sure if C would allow (as float and double), Julia might be easier to amend.. comp.arch (talk) 14:59, 26 April 2016 (UTC)[reply]
What matters is that the alternate integer representations have not been removed from the current C standard. There may be some good reason... An IEEE 754-2008 system can provide decimal only. In C, FLT_RADIX is still there. But now, decimal floating point tends to be implemented with _Decimal64, etc. (not yet in the C standard). I'm not sure about the pocket calculators, though. Vincent Lefèvre (talk) 15:32, 26 April 2016 (UTC)[reply]

Where does the C standard say that CHAR_BIT >= 8? — Preceding unsigned comment added by 86.57.146.226 (talk) 09:47, 26 February 2016 (UTC)[reply]

Section 5.2.4.2.1 "Sizes of integer types <limits.h>". It lists "CHAR_BIT 8", and the first paragraph of this section says: "Their implementation-defined values shall be equal or greater in magnitude (absolute value) to those shown, with the same sign." So, this means CHAR_BIT ≥ 8. Vincent Lefèvre (talk) 11:24, 26 February 2016 (UTC)[reply]
As far as I know, Unisys still sells ones' complement machines with a C compiler. The last sign-magnitude (fixed point binary) machine that I know of is the IBM 7094. (S/360 and successors have a sign-magnitude BCD type.) Gah4 (talk) 01:02, 24 May 2020 (UTC)[reply]

Numbers, sign and unums..


I'm OK with your edit. I even kind of like it (how "a bit more" is ambiguous). The sign bit is overhead that does not double.. [The mantissa is/could be larger, usually not double.] Note unums, which do actually have a sign bit in version 1, though he did away with it in "Unums 2.0" (that is, no separate + and − zero). This will make floating point obsolete.. eventually (see his book, "The End of Error"). comp.arch (talk) 14:31, 26 April 2016 (UTC)[reply]

Not sure what you meant concerning the sign bit. The sign is not taken into account in the precision of the floating-point model. So, for IEEE double precision (binary64), the precision is 53 bits. For double-double arithmetic, the precision is variable, but such a format contains the floating-point numbers with a 106-bit precision (that's exactly twice double precision), and even a 107-bit precision if one ignores issues near the overflow threshold. IEEE quadruple precision (binary128) has 113 bits, which is just 7 bits more than twice double precision. Thus, "a bit more" in practice. However, binary256 is excluded as its precision is much more than 106 bits.
Concerning the unums, no, they will not make floating-point obsolete. It is not even clear that they would ever be useful in practice (perhaps except as a storage format). The book "The End of Error" is just like a commercial: it does not talk about the drawbacks and problems that occur in practice. FYI, the idea of having variable fields for the exponent and fraction (trailing significand) is not new; it was initially introduced in: R. Morris (1971). "Tapered Floating Point: A New Floating-Point Representation". IEEE Transactions on Computers. 20: 1578–1579. doi:10.1109/T-C.1971.223174. And it was patented and never implemented. Vincent Lefèvre (talk) 15:21, 26 April 2016 (UTC)[reply]
Interesting to know about the patented 1971 idea. I haven't looked into whether it is similar enough. Anyway, I didn't think through how he intended variable length to work well, but I do not care enough any more to check, as, if I recall, Unum 2.0 is not variable length.
I guess what www.rexcomputing.com is building (and the http://fortune.com/2015/07/21/rex-computing/ article talks about) was based on his former idea. When I looked at https://github.com/REX-Computing/unumjl, 2.0 wasn't out (good description). I'm not sure what is changed here: https://github.com/JuliaComputing/Unums.jl
The source code may help with prototyping (not only for new hardware), but yes, it would be slower (as would the other alternatives there). The pros of unums seem good to me, and while ordinary floating point will not be replaced in practice (everywhere) for a long time, I do not really see drawbacks that mean they shouldn't replace it somewhere and possibly "everywhere" (with a slower fallback to floating point available for legacy software). comp.arch (talk) 22:13, 30 April 2016 (UTC)[reply]
You may know this, but a more recent patent mentions you: [1] [I was aware of MPFR, and of these IS and MR representations, but not of the new ones in the patent, or how your name relates to all this]: "In a comparison between the (M+N, R) representation and the known IS and MR representations the simulation methodology included the use of the C++ MPFR library (see L. Fousse, G. Hanrot, V. Lefevre, P. Pelissier and P. Zimmerman, “MPFR”, ACM Transactions on Mathematical Software, vol. 33, pg. 13-es, June 2007)" comp.arch (talk) 23:20, 30 April 2016 (UTC)[reply]
For the 1971 idea, you can search for "tapered floating point" on Google. The corresponding patent is US3742198. The idea is that for a fixed-size format (e.g. 64 bits), you have one field whose goal is to give the size of the exponent field, the sum of the sizes of the exponent field and the fraction field being a constant. There is the same idea for unums of fixed size. Now, in practice, FPUs work with a fixed number of digits. For instance, if, thanks to the additional field, one may have 51 to 54 bits for the significand (depending on the magnitude of the exponent), then the FPU will be designed for 54 bits, and all the computations could internally be done on 54 bits, whatever the value of the exponent. What unum provides could be seen as some kind of compression (with loss). This has some advantages: results could be slightly more accurate in general, in particular if data needs to go out of the FPU. However, the format is no longer regular, which means that error analysis could be more pessimistic. Moreover, some simple algorithms such as TwoSum and Veltkamp's splitting, thanks to which one can efficiently emulate more precision (see e.g. double-double arithmetic), will no longer work.
Concerning the ubit, it is theoretically a nice idea, but takes one bit, which may be important for a 32-bit or 64-bit format, while most applications would not use it. For instance, with IEEE 754, there's an inexact flag, but almost no-one uses it. Moreover, with the ubit, inexactness information can be lost when variables are used for comparison (the result of a comparison is a boolean, which does not contain a ubit).
For guaranteed results (without error analysis done by the user), interval arithmetic can be used with both floating-point numbers and unums. Both formats are very similar here, with the same issues due to interval arithmetic: intervals get bigger and bigger.
I didn't know about Intel's patent that mentions MPFR. But FYI, if I understand correctly, this (M+N,R) representation is not new at all. It is a particular case of midrad where the midpoint and the radius do not have the same format (in multiple precision, it doesn't make much sense to have a very accurate radius), and more restrictively, this is a particular case of midrad where the midpoint is a floating-point expansion and the radius is a floating-point number, which has been used at least since 2008 (thus before the patent was filed, in 2012). See Rigorous High Precision Interval Arithmetic in COSY INFINITY by Alexander Wittig and Martin Berz. Vincent Lefèvre (talk) 00:03, 1 May 2016 (UTC)[reply]
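As an aside, the TwoSum algorithm mentioned above fits in a few lines; a minimal C sketch (it assumes round-to-nearest binary floating point and no overflow):

 #include <stdio.h>
 
 /* Knuth's TwoSum: returns s = fl(a + b) and sets *err so that
    a + b = s + *err exactly. */
 static double two_sum(double a, double b, double *err)
 {
     double s = a + b;
     double bb = s - a;
     *err = (a - (s - bb)) + (b - bb);
     return s;
 }
 
 int main(void)
 {
     double err;
     double s = two_sum(1.0, 0x1p-60, &err);
     printf("s = %a, err = %a\n", s, err);  /* s = 0x1p+0, err = 0x1p-60 */
     return 0;
 }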
To bug you a bit more: it seems to me you are not aware of the Unum 2.0 [differences]; please look over [2] (the only slides from him I've seen, other than Unum 1.0 ones, until [3] "This presentation is still being tweaked."). Slide 29: "100x the speed of sloppy IEEE floats." [that may be before(?) or after the embarrassing parallelism enabled by unums], and also on slide 29: "all operations take 1 clock! Even x^y", "128 kbytes total for all four basic ops. Another 128 kbytes if we also table x^y." [Note the caveat on slide 34: "Create 32-bit and 64-bit unums with new approach; table look-up still practical?"; I can't see a lookup table working there (nor a double-double trick, but maybe some similar idea?), but I'm also not sure of the need for 32-bit+.] [This is, if I recall, in common with Unum 1.0, at least in part, without the lookup idea.] Slide 33: "Uncertainty grows linearly in general" vs. "exponentially in general" for floating point [I may not understand all the slides or why this is]. See also (unread by me): [4]
My take on this: lookup seems to work [for few bits, which may be enough], and its time has come (back). At least in z/Arch there is latency; I'm not sure any CPU has 1-cycle latency anymore. E.g. the less-used square root already benefits from lookup tables (but also from the Newton-Raphson method).
You can assume I do not know much more on Unum 2.0; I was trying to google for a bit more of what I've read (see [5]:
"I’ve just purchased Mathematica 10.4 so that I can explore unum 2.0 more easily.
For example, I want to explore the fruitful marriage between unum 2.0 and Logarithmic Number Systems (LNS).
Also, with unum 2.0, the number of special cases that one has to consider is much lower than in unum 1.0.
Because of that, I strongly believe that unum 2.0 will require less code than unum 1.0."[6]
"Incidentally, I've been challenged to a public debate, unums versus floats, with the headline "The Great Debate: The End of Error?" as a 90-minute event during ARITH23 in San Jose CA, July 10-13.
My challenger is... Professor William Kahan. This should be quite an interesting discussion!"[7] Just reading this and other posts right now]. comp.arch (talk) 15:38, 2 May 2016 (UTC)[reply]
These new slides bring nothing at all: no formalization of the theory, no proofs, no code-based examples... The "reciprocal closure" just makes things more complex (except at very low precision, since everything can be based on table look-up, so that this is quite arbitrary) without solving anything. For instance, think how you would compute the sum of any two numbers of the system (that's the main problem of LNS).
Slide 29 is actually: "Low-precision rigorous math is possible at 100x the speed of sloppy IEEE floats." That's low precision: 6-bit precision. So, I can believe that it is 100 times as fast as a 53-bit FPU. But it doesn't scale.
Gustafson convinced (almost) no-one at ARITH-22 (BTW, that was my tweet). I doubt that he can do better at ARITH-23. Vincent Lefèvre (talk) 21:39, 2 May 2016 (UTC)[reply]
And if anyone is interested in testing a computation system, I suggest the following sequence by Jean-Michel Muller:
 u[0] = 2
 u[1] = -4
 u[n+1] = 111 - 1130 / u[n] + 3000 / (u[n] * u[n-1])
Vincent Lefèvre (talk) 22:05, 2 May 2016 (UTC)[reply]
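For anyone who wants to try it, a minimal C version of the test (my own sketch): the exact limit of the sequence is 6, but in fixed-precision binary floating-point arithmetic the iterates drift to the attracting fixed point 100.

 #include <stdio.h>
 
 int main(void)
 {
     /* Muller's recurrence: exact arithmetic converges to 6, but
        rounding errors make the computed sequence converge to 100. */
     double u0 = 2.0, u1 = -4.0;
     for (int n = 2; n <= 30; n++) {
         double u2 = 111.0 - 1130.0 / u1 + 3000.0 / (u1 * u0);
         u0 = u1;
         u1 = u2;
         printf("u[%2d] = %.17g\n", n, u1);
     }
     return 0;
 }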
Yes, I quoted less from slide 29 (but did mention the caveat on slide 34, which doesn't seem so worrying now): "Low-precision rigorous math is possible at 100x the speed of sloppy IEEE floats." Note his emphasis. I'm going to quote the new slides[8] (they have good commentary) from now on. Slide 41 (which has some code for you..): "9+ orders of magnitude [..] this is 1.5 times larger than the range for IEEE half-precision floats."
"Besides the advantages of unums listed here, perhaps it deserves mention that the SORNs store decimal numbers whereas the IEEE 16-bit floats are binary and in general make rounding errors converting to and from human-readable form. Also, there are many problems that traditional intervals cannot solve because [something you might want to read] The mathematical rigor claimed for traditional interval arithmetic is actually couched in lots of “gotcha” exceptions that make it even more treacherous to use than floats. Which is why people stick with floats instead of migrating to intervals." Read all of slide/page 43 (and 44–46, where 8-bit unums win over 128-bit intervals) carefully: "Why unums don't have the interval arithmetic problem [..] => bounds grow linearly" [unlike in interval arithmetic], at least for the n-body problem (which I understand is a difficult problem). Maybe he's "lying" to me, implying unums are good for more than they are; I'm unsure whether the linear growth is the general case. Even if it isn't, the error is bounded rigorously [as I assumed with intervals, whose flaws he points out], unlike with floats. Maybe floats are good enough for most/many things, e.g. matrix multiplication, and unums only good/needed where floats are not. Still, I'm pretty convinced that the extra bandwidth is the killer there, if it is really true that he can get away with fewer bits. He also has a parallelism advantage that floats disallow [to a full extent].
Your tweet was prior to Unum 2.0. He has e.g. changed his conclusion from "This is a shortcut to exascale." to "This is a path beyond exascale." I acknowledge that the claimed pros were, I think, already in Unum 1.0 (he seems to say that), but maybe not all (I'm still wrapping my head around "SORNs"; think I'm almost there). At least most of the drawbacks seem to be gone. You (and Kahan) surely know more about this than me.
I see you were also an Acorn/RISC OS user, judging by the program on your web page. I never used Perl there (I noted: "use RISCOS::Filespec;") or anywhere. Python has dropped RISC OS (and Amiga) support. I've wondered how difficult it would be to get Julia to work on RISC OS; not that I need to, just for nostalgia reasons. :) comp.arch (talk) 10:28, 3 May 2016 (UTC)[reply]
I do not see anything rigorous with unums, except when used for interval arithmetic. But interval arithmetic can be implemented on any point arithmetic, such as floating point, and in any case, you'll get the usual issues with interval arithmetic. For instance, with the sequence I gave above, you'll end up with the full set of real numbers (−∞,+∞). About "this is 1.5 times larger than the range for IEEE half-precision floats", no-one cares. FYI, the gain in range has a drawback: a loss of precision somewhere in the considered subset of real numbers (Level 2 in IEEE 754's formalization of point arithmetics). So, for any choice of arithmetic, some compromise has to be chosen between range and precision. Concerning decimal numbers, IEEE 754 specifies decimal floating-point arithmetic too. But even when implemented in hardware, this is slower than binary floating-point arithmetic (with equivalent resources). Except for some specific applications, binary floating-point arithmetic is often preferred because it is faster and people don't care about the rounding errors in conversions, since such conversions typically occur only at the beginning and at the end of computations. And between them, there are already a lot of more significant rounding errors. Gustafson is wrong about interval arithmetic. First, there are no exceptions for interval arithmetic. Then, for the growth of the bounds, ask him for a proof. :) For interval arithmetic, you have the FTIA, i.e. something proved, showing that it is rigorous. For unums, you have nothing. And IMHO, Unum 2.0 is worse than Unum 1.0 (it is too complex and doesn't solve the real problems). I suggest that you try the sequence I've given above.
Re RISC OS, I've stopped working with it for many years, but I'm still in touch with the French community. Vincent Lefèvre (talk) 12:38, 3 May 2016 (UTC)[reply]
"I do not see anything rigorous with unums, except when used for interval arithmetic." That may be true, that he needs what he calls SORNs (that I do not recall from Unum pre-2.0, there he had ubounds, that's been dropped). I may not understand all the details, but so far, it seems like I (and him) do understand enough, and you haven't taken a good enough look. "For instance, with the sequence I gave above, you'll end up with the full set of real numbers", that may be true for this sequence, I haven't checked, but this problem you describe is exactly what he says he's solving over traditional interval arithmetic. It's like you didn't read slide 42 and the text with it (or do not agree):

We can almost compare apples with apples by comparing traditional interval arithmetic, using 16-bit floats, with SORNs restricted to connected sets so that they can be stored in only 32 bits. They both take 32 bits and they both use techniques to rigorously contain the correct mathematical result. SORNs win on every count.

Besides the advantages of unums listed here, perhaps it deserves mention that the SORNs store decimal numbers whereas the IEEE 16-bit floats are binary and in general make rounding errors converting to and from human-readable form. Also, there are many problems that traditional intervals cannot solve because all their intervals are closed at both endpoints. Sometimes it is crucial to know whether the exact endpoint is included, or just approached. You cannot do proper set operations with traditional intervals. Like, if you ask for the set of strictly positive real numbers, you get [0, ∞] which incorrectly includes zero (not strictly positive) and infinity (not a real number). If you ask for the complement of that set, well, the best you can do is [–∞, 0]. How can it be the complement if both sets contain the same number, zero? The mathematical rigor claimed for traditional interval arithmetic is actually couched in lots of “gotcha” exceptions that make it even more treacherous to use than floats.

Which is why people stick with floats instead of migrating to intervals.

"FYI, the gain in range has a drawback: a loss of precision somewhere". He called Unums, universial numbers, because they unified floating-point, interval arithmetic and integers (from memory). One thing that [double] float has, that JavaScript relies on (strangely) is that all integers up to 2^52 if I recall are exact. I'm reading into Unums 2.0, that this is dropped (and other stuff that float has that take up bit-pattern space: "Hardware designers hate gradual underflow because it adds almost thirty percent to the silicon area needed for the floating-point math. Subnormal numbers are such a hotly contested part of the IEEE standard that [..]"), as not important. It would be important for JavaScript yes.. :) but to me (and the whole world outside JS), it seems best (or at least ok) to have countables (integers) separate from measurables (floats or unums). That he has every other number exact and thus some integers and decimal fractions exact, may not be too important. The decimal floating-point spec, while *maybe* useful for engineering (I'm told, by Mike Cowlishaw, engineers are used to decimal numbers), seems at least to me overkill for banking.. that needs exact numbers down to the cent. Unums (at least 2.0, 1.0 has been made also with base-10), are probably not useful, at all, for banking, despite he mentioning "decimal".. and then not universal anymore.. I'm ok with that. You shouldn't read that into "Mathematically superior in every way, as sound as integers". Maybe that is a holdover in his slides from pre-2.0, or he's only talking about say unique zero (in 2.0). "no-one cares", then at least the range is enough. :) I'm just not sure what range is needed, I guess depends on the application. "for the growth of the bounds, ask him for a proof." I did point you to the slides (vs. interval), where he shows that. Are you saying his examples are the exception? "you have the FTIA", what is FTIA? "the real problems" What is the real problem? It seems to me he solved it, and floats do not.. Maybe the sequence is difficult, I'm just curious, does it have any special place to being important? Vs. say the n-body problem: "I’ve listed some workloads that I think would be good tests. William Kahan has suggested that the best acid test is to simulate the orbit of Halley’s Comet, with just the sun and the gas giant planets in the n-body problem. The uncertainty, he says, becomes severe with traditional interval arithmetic, and it is easy to see how that can happen. Can unums do better? We will find out once we get an implementation that is both fast and high-precision." I guess you (and him) are saying the precision isn't high enough. Getting bigger unums is a problem (his old method did allow for variable and no lookup-tables, might have been better..) by going to bigger lookup-tables. He does say "Create 32-bit and 64-bit with new approach; table look-up still practical?" and "we do not yet know where the SORN and the table-lookup approach become intractable. This is ongoing research, but there are some promising ways to deal with that apparent explosion in storage demand". I wander if some idea like the double-double trick (was used to good effect in the PlayStation 3 for matrix multiplication, that didn't have doubles, without losing much speed, as not done all the time) is the key here. It seems to me not exactly the same. comp.arch (talk) 15:00, 3 May 2016 (UTC)[reply]
SORNs have 32 bits, but far too low precision in practice. The problem with SORNs is that they take an exponential amount of memory compared to floating point or interval arithmetic: if you want to add 1 bit of precision, each interval of the SORN is split into two, so that the size is doubled (since you need 1 bit per interval). Concerning unums, on simple problems (e.g. math expressions), floating point, interval arithmetic and integers can already be unified with midrad: the midpoint is just a floating-point computation, and floating-point numbers contain the integers (in some range, of course). On complex problems, they can't really be unified. The issue with subnormals is that they introduce an irregularity; with unums, and in particular with unums 2.0, the irregularity is much worse, to the point that only table look-up can be used in practice, which is OK for very low precision, but not if one needs at least 6 decimal digits for the computations. So, hardware designers will hate them even more than subnormals. Concerning JavaScript, I agree that integers should have been separated from inexact arithmetic, but that's an issue with JavaScript only. Concerning decimal arithmetic, it is useful for banking due to the rounding rules, which are specified on decimal numbers; if binary floating point is used, you get the well-known "double rounding" problem. Engineers don't need decimal arithmetic for internal computations. FTIA = Fundamental Theorem of Interval Arithmetic. This is the basis for interval arithmetic, and what makes it rigorous. The sequence is a bit like chaotic systems: once you get a rounding error, the errors tend to get larger and larger. But AFAIK, this is a bit the same for the n-body problem in the long term. FYI, even double precision is not enough for some problems, for which GNU MPFR has to be used. Vincent Lefèvre (talk) 14:12, 7 May 2016 (UTC)[reply]
Slide 44 actually shows that unums/SORNs are not rigorous. Assume that numbers are represented by intervals that contain them (as in the slide). And consider the operation [2,4] − [2,4] like the first iteration of the example of the slide. The implementation (e.g. processor) doesn't know whether these intervals correspond to the same number or to different numbers (e.g. x = 2.1 and y = 3.4, both represented by [2,4], and one does xy). The implementation only sees the intervals [2,4], not the variables. Mathematically, [2,4] − [2,4] = [−2,2], so that interval arithmetic is correct and SORN arithmetic, which gives (−1,1), is incorrect. Note: at the language level, the language implementation could transform xx to 0 as this is mathematically true (that's out of the scope of the arithmetic itself, just related to language specification; in ISO C, this is enabled with the contraction of expressions, see FP_CONTRACT pragma). Vincent Lefèvre (talk) 14:30, 3 May 2016 (UTC)[reply]
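For comparison, the interval operation used above is a one-liner; a naive C sketch of my own (a real implementation would direct the two roundings outward, e.g. with fesetround):

 #include <stdio.h>
 
 typedef struct { double lo, hi; } interval;
 
 /* [a,b] - [c,d] = [a-d, b-c]: correct even when both operands are the
    same enclosure, since the enclosed numbers may differ. */
 static interval isub(interval x, interval y)
 {
     interval r = { x.lo - y.hi, x.hi - y.lo };
     return r;
 }
 
 int main(void)
 {
     interval x = { 2.0, 4.0 };
     interval d = isub(x, x);
     printf("[%g, %g]\n", d.lo, d.hi);  /* [-2, 2] */
     return 0;
 }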
"Slide 44 actually shows that unums/SORNs are not rigorous." Thanks! It looks like you are right.. :-/ Now I (and maybe he) have to rethink if this is a good idea.. how broken, can it be saved, is it better than floats, just not interval arithmetic (can version 2.0 still be reunified with them, I guess so..)? Did he just make a small mistake? To be fair, he did say "x := x - x", not "x := x - y", and then the answer is 0, but as you say assuming x = y, seems not useful.. When you can't assume that, I think you acknowledge that interval arithmetic is "unstable", but then again, it's the only thing it can do, as in the next step, the new x isn't assumed to have any relation with the previous one..
"With SORNS, the interval [2, 4] gets represented as set of unums. With the 8-bit unums defined a few slides back, it would be the set {2, (2, 2.5), 2.5, (2.5, r10), r10, (r10, 4), 4". So far so good, I've been writing down minor typos, questions etc. and that the "}" to close the set is now the least of his/my worries.. r10 must be sqrt(10) there (some trivia on that in other slides [but "r10" isn't what he must have intended do display there.]) How he thinks this "stable" shrinking range is allowed (vs. intervals) is not clear to me, but it seems at least not worse than floats (without intervals) to me. Maybe he just got a little carried away with showing how much better his idea is or there's a mistake somewhere. His pre-2.0 Unums where supposed to be a superset of floats AND intervals.. comp.arch (talk) 16:27, 3 May 2016 (UTC)[reply]

Just to let you know, the debate with Kahan is over [though I can't find it online..], and I added info on the Unum 2.0 implementation (or a modification called Pnum). I see there is an interview with Gustafson that I missed personally (and slides) [which are however not brand new]; I'm not sure if he has anything new to change your mind. His implementation, Pnum, might be different enough (I just haven't looked too closely; if I recall, it does not implement SORNs and has other changes). comp.arch (talk) 14:58, 13 July 2016 (UTC)[reply]

I've just added a link to the video of the debate on the Unum page. Note that Jim's microphone wasn't working, but except this problem, the video is OK. Vincent Lefèvre (talk) 00:51, 20 July 2016 (UTC)[reply]

ArbCom Elections 2016: Voting now open!


Hello, Vincent Lefèvre. Voting in the 2016 Arbitration Committee elections is open from Monday, 00:00, 21 November through Sunday, 23:59, 4 December to all unblocked users who have registered an account before Wednesday, 00:00, 28 October 2016 and have made at least 150 mainspace edits before Sunday, 00:00, 1 November 2016.

The Arbitration Committee is the panel of editors responsible for conducting the Wikipedia arbitration process. It has the authority to impose binding solutions to disputes between editors, primarily for serious conduct disputes the community has been unable to resolve. This includes the authority to impose site bans, topic bans, editing restrictions, and other measures needed to maintain our editing environment. The arbitration policy describes the Committee's roles and responsibilities in greater detail.

If you wish to participate in the 2016 election, please review the candidates' statements and submit your choices on the voting page. MediaWiki message delivery (talk) 22:08, 21 November 2016 (UTC)[reply]

WP:ANI discussion


There is currently a discussion at Wikipedia:Administrators' noticeboard/Incidents regarding an issue with which you may have been involved. Yamla (talk) 11:38, 28 May 2017 (UTC)[reply]

Rounding


Hi Vincent, I see you reverted my edit on the rounding article. The current phrasing is actually wrong; consider:

>>> x = 2**52 + 1
>>> round(x)
4503599627370497.0
>>> math.trunc(x + 0.5)
4503599627370498

Both should return 2**52 + 1, but adding 0.5 and truncating does not.

Franciscouzo (talk) 09:40, 1 November 2017 (UTC)[reply]
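
The same failure can be reproduced in C; and (an extra case, not from the message above) the trick also breaks just below 0.5, where x + 0.5 rounds up to 1.0:

 #include <math.h>
 #include <stdio.h>
 
 int main(void)
 {
     /* 2^52 + 1 is exactly representable, but x + 0.5 is not: it rounds
        (ties-to-even) to 2^52 + 2, which truncation then keeps. */
     double x = 4503599627370497.0;
     printf("%.0f\n", trunc(x + 0.5));  /* 4503599627370498 */
 
     /* Largest double below 0.5: the exact sum 1 - 2^-54 is a halfway
        case and rounds to 1.0, so the trick answers 1 where rounding
        y to the nearest integer gives 0. */
     double y = 0.49999999999999994;
     printf("%.0f\n", trunc(y + 0.5));  /* 1 */
     return 0;
 }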

Hi, In practice, the data generally don't reach such large numbers, or the users could have other issues due to early loss of precision (I mean, double rounding effect). And if you want to consider the full generality, you need to also take into account that the current rounding mode may be any of the available ones, in which case the trick to add some fixed value then truncate will not work anyway. Perhaps these limitations should be noted in the article. Note also that your solution will not round halfway cases away from 0, which may be a problem as this rule for halfway cases is a common one, and possibly required in some contexts. And another important point is that if a round() function is not available (as assumed here), then nextafter() is probably not available either. Vincent Lefèvre (talk) 15:40, 1 November 2017 (UTC)[reply]

Vincent, you are correct that the three axioms for rounding are already too specific to encompass all types. Thank you for fixing it. On the other hand, it is a pity that you don't like my idea of starting the article with a formal definition of rounding (so with (R1) and (R2)?) and then specializing according to all the types in use. Axiom0 (talk) 13:38, 4 March 2018 (UTC)[reply]

Hi, I reverted not because I didn't like your idea to start the article with a formal definition of rounding, but because it didn't really match the contents of the article. Actually, one problem is that there is no standard definition of rounding, and it is not up to the WP article to invent one, or to decide that one is better than others (Kulisch's article is not authoritative). So one needs to be careful in the presentation. Perhaps this should be discussed first in the Talk:Rounding page. Also, note that "approximating a fraction with periodic decimal expansion by a finite decimal fraction, e.g., 5/3 by 1.6667;", which is currently in the article, may be a problem with a formal definition of rounding. Perhaps this item should be removed (as one could say that it is not the fraction itself that is rounded, but the corresponding real number). Vincent Lefèvre (talk) 21:27, 4 March 2018 (UTC)[reply]
Yes, it is indeed a bit unclear how much of an "axiomatic rounding theory" can already be included in a WP article. It doesn't appear to be settled by research papers. As I am interested in this, I'll transfer the discussion to the Talk:Rounding page. Although it first needs to be investigated by the research community, and it's probably too early to discuss it on WP, it would be great if you joined the discussion, since you are an expert. Axiom0 (talk) 14:21, 7 March 2018 (UTC)[reply]

Please chime in...


Hello Mr. Lefèvre,

This is algoHolic. I stumbled across your Floating-point arithmetic Wikipedia page three days ago while I was refreshing my memory on some of the finer points of floating-point representation in computers. Thank you so much, sir, for taking the time yesterday to make that section a heck of a lot more understandable to "the common man" than it was three days ago.

I'm not a mathematician. Nor am I an electrical engineer. I proudly represent the everyday laymen and laywomen who read Wikipedia to learn new stuff — just for fun.

Every now and then I like to dust the cobwebs off my rusty high school algebra brain cells. So when I saw that summation of pi formula in your Floating-point numbers section, I thought that trying to solve it would be good mental exercise for me. Except, in the state that section was in 3 days ago, the worked equation there was as confusing as Chinese! And the textual explanation read like Greek to me!

My original confusion led me to Math Stack Exchange to ask for clarification from those whose math skills are fresher than mine. My layperson's understanding of summations, plus what I learned from the answers on that math.stackexchange page compelled me to make the changes I made to that one pi conversion sigma notation and its worked equation.

So, in the Wikipedia spirit of the broadest possible inclusiveness, I would like to invite you (and any other Floating-point numbers contributors) to give feedback on some of the questions asked on that math.stackexchange page. Being that you "wrote the book" on the subject, Mr. Lefèvre, I'm sure that if you offered your expert's take on the summation questions there, you could clear up a lot of cobwebs in mine and a lot of other math students' and enthusiasts' heads regarding how mathematical notation is actually used outside of academia.

Please consider chiming in with your answers or comments if you ever have any spare time. I'm looking forward to hearing more from you, sir.

Thanks again,

algoHolic (talk) 20:50, 8 November 2017 (UTC)[reply]

Hi,
Thanks for your message, but note that Floating-point arithmetic#Floating-point_numbers is not my page (and most of it wasn't written by me). I just try to contribute / correct it as much as I can (not obvious due to limited time). I've done some corrections and clarification (I hope) on your latest edits about rounding. Please check. I'll try to look at that math.stackexchange page tomorrow.
Vincent Lefèvre (talk) 00:18, 9 November 2017 (UTC)[reply]
Thank you, sir, for making that rounding paragraph easier to understand. To give due credit to the original contributor of that rounding paragraph, the edits I made to it were to do with notational style or lexical consistency; not technical correctness. I humbly defer to your good self and/or the original contributor of that paragraph on matters regarding the accuracy of the technical details of the subject.
Which reminds me to ask this admittedly naive question about something in that paragraph. There is a binary point to the right of the rightmost bit in the significand being discussed in that paragraph: 11001001 00001111 11011011. Yet, in the paragraph immediately underneath that number, it says, "The significand is assumed to have a binary point to the right of the leftmost bit". I found that to be one of the most confusing things in that section when I first read it three days ago. What is the function of that binary point shown to the right of the rightmost bit in that rounding paragraph's significand, sir?
Thanks in advance for your patient reply, Mr. Lefèvre.
algoHolic (talk) 07:55, 9 November 2017 (UTC)[reply]
No, here, to agree with the formula given after this paragraph, the binary point is on the right of the leftmost bit (I corrected this yesterday). To make it worse, there exist 3 different conventions for the binary point (the other two are: to the left of the leftmost digit, as used in the C language and in GNU MPFR; and to the right of the rightmost bit, so that the significand is an integer, as often used in proofs), so that the choice can be different in a different context. Both the first and the third conventions are used in the IEEE 754-2008 standard. Vincent Lefèvre (talk) 09:09, 9 November 2017 (UTC)[reply]
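
For concreteness, here is the 24-bit significand from the article's π example written under the three conventions (my own rendering, not from the article):

: <math>
\begin{align}
\pi &\approx 1.10010010000111111011011_2 \times 2^{1} \\
    &= 0.110010010000111111011011_2 \times 2^{2} \\
    &= 110010010000111111011011_2 \times 2^{-22}
\end{align}
</math>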


I'm sure that you are right, Mr. Lefèvre, sir. We are probably referring to two different things.
I just did a diff between your latest revision as of 8 November 2017 (the topmost revision on the History page) and contributor Tea2min's revision as of 3 February 2017 (chosen arbitrarily because it is the earliest/bottommost revision on the default page of 50).
The line that I'm referring to is exactly the same in both revisions...
: <math>11001001\ 00001111\ 1101101\underline{1}.</math>
And this is how that markup is rendered by the browser...
I refreshed my browser and then took a screenshot showing what I'm seeing. Hopefully you can see the attached screenshot.
[Screenshot of the Floating-point numbers section of the Floating-point arithmetic Wikipedia page]
I hope this helps.
Many thanks,
algoHolic (talk) 18:03, 9 November 2017 (UTC)[reply]
Thanks for the information. I hadn't noticed this issue. I've just corrected it to:
: <math>11001001\ 00001111\ 1101101\underline{1}</math>.
This is actually the period at the end of the sentence, not the binary point (which is not shown). Vincent Lefèvre (talk) 00:28, 10 November 2017 (UTC)[reply]
Ahh! I see. That explains a lot. Thanks a million times for clearing that up. It really had me confused! I thought the text was referring to that "period" as the binary point.
Is it common to have a period in math markup that is not typeset in-line within a sentence? Why is the block formula style with the period at the end, preferred over the in-line style?
The period struck me as especially confusing in this instance, as the subject matter discusses binary points being embedded within binary numbers. How is the reader expected to differentiate between a sentence period and the mathematical notation for a binary point? They're the exact same glyph after all. Aren't they?
Many thanks,
algoHolic (talk) 01:49, 10 November 2017 (UTC)[reply]
It is the normal rule to have punctuation marks with block style too, even though I don't like that very much (sometimes, it's awkward, e.g. after a sum ∑ ... or a big array). You can see discussions in Periods and commas in mathematical writing on MathOverflow. Note that in the past, the fractional point was written as a centered dot (well, at least some mathematicians did); but now, it is no longer standard and it could be confused with a multiplication. A solution might be to put quotes. For instance, here's π rounded to 0 fractional digits: "3.". Vincent Lefèvre (talk) 10:15, 10 November 2017 (UTC)[reply]

ArbCom 2017 election voter message


Hello, Vincent Lefèvre. Voting in the 2017 Arbitration Committee elections is now open until 23.59 on Sunday, 10 December. All users who registered an account before Saturday, 28 October 2017, made at least 150 mainspace edits before Wednesday, 1 November 2017 and are not currently blocked are eligible to vote. Users with alternate accounts may only vote once.

The Arbitration Committee is the panel of editors responsible for conducting the Wikipedia arbitration process. It has the authority to impose binding solutions to disputes between editors, primarily for serious conduct disputes the community has been unable to resolve. This includes the authority to impose site bans, topic bans, editing restrictions, and other measures needed to maintain our editing environment. The arbitration policy describes the Committee's roles and responsibilities in greater detail.

If you wish to participate in the 2017 election, please review the candidates and submit your choices on the voting page. MediaWiki message delivery (talk) 18:42, 3 December 2017 (UTC)[reply]

Deletion of section "Causes of Floating Point Error" in Floating Point article


In your reason for this massive deletion, you explained "wrong in various ways." Specifically, how is it wrong? This is not a valid criterion for deletion. See WP:DEL-REASON.

When you find errors in Wikipedia, the alternative is to correct the errors with citations. This edit was a good faith edit WP:GF.

Even if it is "badly presented", that is not a reason for deletion. Again, see WP:DEL-REASON.

And finally, "applied only to addition and subtraction (thus cannot be general)." Addition and subtraction are the major causes of floating point error. If you can make cases for adding other functions, such as multiplication, division, etc., then find a resource that backs your positions and add to the article.

I will give you some time to respond, but without substantive justification for your position, I am going to revert your deletion based on the Wikipedia policies cited. The first alternative is to reach a consensus. I am willing to discuss your point of view.

Softtest123 (talk) 20:08, 19 April 2018 (UTC)[reply]

I've answered in the Floating-point arithmetic talk page. Note that I've deleted this section (well, just the first part, not its subsection) mainly because it was incorrect, and there was no point to correct it as everything has already been covered in the article (which might be improved, though). Vincent Lefèvre (talk) 22:29, 19 April 2018 (UTC)[reply]

Hello. When you removed Bill Macy from the Golden Age list, you wrote: "due to the lack of references." What do you mean? I completely agree, the man started acting in films in the 1960s, but what exactly were you referring to by lack of references? I am just curious. :) Radiohist (talk) 00:38, 15 November 2018 (UTC)[reply]

I just meant that in case Bill Macy was added because he started before 1960 (assuming that there were missing credits in the IMDb), there were no references showing that. The user who added Bill Macy wrote "1948" for the debut, so I was wondering. Vincent Lefèvre (talk) 18:02, 15 November 2018 (UTC)[reply]
OK. It's pretty clear there are no sources which suggest that he started acting in 1948. Radiohist (talk) 18:14, 15 November 2018 (UTC)[reply]
I assume that when his Wikipedia page Bill Macy says "1948–2011" for "Years active", that's including his work as a cab driver. But that's a bit unclear and unsourced. Vincent Lefèvre (talk) 18:27, 15 November 2018 (UTC)[reply]

ArbCom 2018 election voter message


Hello, Vincent Lefèvre. Voting in the 2018 Arbitration Committee elections is now open until 23.59 on Sunday, 3 December. All users who registered an account before Sunday, 28 October 2018, made at least 150 mainspace edits before Thursday, 1 November 2018 and are not currently blocked are eligible to vote. Users with alternate accounts may only vote once.

The Arbitration Committee is the panel of editors responsible for conducting the Wikipedia arbitration process. It has the authority to impose binding solutions to disputes between editors, primarily for serious conduct disputes the community has been unable to resolve. This includes the authority to impose site bans, topic bans, editing restrictions, and other measures needed to maintain our editing environment. The arbitration policy describes the Committee's roles and responsibilities in greater detail.

If you wish to participate in the 2018 election, please review the candidates and submit your choices on the voting page. MediaWiki message delivery (talk) 18:42, 19 November 2018 (UTC)[reply]

I have an e-mail from David Hough saying that the drafts should be freely available, with the usual restriction of not changing any copyright notice. For years, it has been well known that the drafts of the Fortran standards (at least recent ones) are available for download, but you have to pay for the approved version. David Hough seems to believe the same holds for IEEE 754, but others here claim WP:COPYLINK. I suppose the delegation to the WG does complicate things. ucbtest.org seems to be owned by 754WG, and is thus convenient for posting them. I could ask David Hough for a signed notarized statement, but I don't think he would be too happy with me for that. Gah4 (talk) 22:53, 8 March 2019 (UTC)[reply]

I confirm that ucbtest.org is David Hough's site (at least, under his control). And he has probably other things to do than sending a signed notarized statement (we have a lot of work with the new IEEE 754 revision). Issues are much more probable with many other WP links to web sites with no copyright information. Vincent Lefèvre (talk) 23:29, 8 March 2019 (UTC)[reply]
I know it is his site. Actually, I didn't know it was him, but I sent e-mail to the address in the SOA for ucbtest.org. (I used to do DNS administration for our group, so I know how the SOA works. Then the reply came from him.) But some seem to believe that he isn't good enough to say. Gah4 (talk) 23:50, 8 March 2019 (UTC)[reply]

It looks to me like the DOI pages have a copyright notice. Links to actual articles have a CCC notice with a dollar amount. I suspect someone is going to say that they should stay, and I think the link to ucbtest.org should stay. Thanks, Gah4 (talk) 22:53, 8 March 2019 (UTC)[reply]

By design, a DOI redirects to the publisher's site. So, DOI links are necessarily safe. The CCC notice with a dollar amount does not mean much: the rules depend on the publisher and the date of the article. For instance, for IEEE articles, in the past, this was the version IEEE was authorizing authors to put on their web pages and institutional websites. Vincent Lefèvre (talk) 23:29, 8 March 2019 (UTC)[reply]
I thought that David Hough's site would be safe, too, but some don't believe that. Gah4 (talk) 23:50, 8 March 2019 (UTC)[reply]

The Goldberg paper has the notice: "Permission to copy without fee all or part of this material is granted provided that the copies are not made or distributed for direct commercial advantage, the ACM copyright notice and the title of the publication and its date appear, and notice is given that copying is by permission of the Association for Computing Machinery. To copy otherwise, or to republish, requires a fee and/or specific permission. © 1991 ACM 0360-0300/91/0300-0005 $01.50". What is the rule about ones like that? Thanks, Gah4 (talk) 23:13, 8 March 2019 (UTC)[reply]

The Goldberg paper on Oracle's web site (previously Sun) has "reprinted by permission". Thus the web page should be assumed to be legal (Oracle being a well-known company, not some unknown individual), and there's nothing wrong to link to a legal web page. Copying the page without permission is not, but this is not what is done on WP. Vincent Lefèvre (talk) 23:29, 8 March 2019 (UTC)[reply]
I might have misread the diff, and so thought you took it out. There is the question about a well-known company that accidentally leaves things on an unprotected site. (There was recent news that U of Washington left some personal medical data open. No SSNs or credit cards, but still things that shouldn't have been out.) So, you believe that the ucbtest.org site is fine? Thanks. Gah4 (talk) 23:50, 8 March 2019 (UTC)[reply]
David Hough is the Working Group Chair and the website 754r.ucbtest.org is the web site he's maintaining for the working group of the revision of an IEEE standard. This website has been there for many years without any complaint from IEEE. Many mail messages of the stds-754 list, hosted on ieee.org, pointed to this website (i.e., as I showed on Talk:IEEE 754, the IEEE website itself had links to the drafts... Let me recall the evidence: https://web.archive.org/web/20070919030800/http://grouper.ieee.org/groups/754/email/msg03554.html). So this is not like IEEE could not know these links, and really, if there were any problem, I suppose that IEEE would have already asked David to remove the documents. And if IEEE decides that they should no longer be there, they would be removed, and any link to them would no longer work as a consequence. Thus I don't see any problem to link to the drafts.
Note also that IEEE is not some random big company. It is an association, and most members of the Working Group are also IEEE members. IEEE would not do something against its members, like taking legal action without notice. Vincent Lefèvre (talk) 00:23, 9 March 2019 (UTC)[reply]
And linking to the drafts would not fall under the conditions listed at Digital Millennium Copyright Act#Linking to infringing content. Vincent Lefèvre (talk) 00:46, 9 March 2019 (UTC)[reply]
It seems to be user:Glrx who needs the most convincing. Thanks. Gah4 (talk) 01:17, 9 March 2019 (UTC)[reply]

ArbCom 2019 election voter message

Hello! Voting in the 2019 Arbitration Committee elections is now open until 23:59 on Monday, 2 December 2019. All eligible users are allowed to vote. Users with alternate accounts may only vote once.

The Arbitration Committee is the panel of editors responsible for conducting the Wikipedia arbitration process. It has the authority to impose binding solutions to disputes between editors, primarily for serious conduct disputes the community has been unable to resolve. This includes the authority to impose site bans, topic bans, editing restrictions, and other measures needed to maintain our editing environment. The arbitration policy describes the Committee's roles and responsibilities in greater detail.

If you wish to participate in the 2019 election, please review the candidates and submit your choices on the voting page. If you no longer wish to receive these messages, you may add {{NoACEMM}} to your user talk page. MediaWiki message delivery (talk) 00:04, 19 November 2019 (UTC)[reply]

Please read MOS:FORMULA (the relevant part of the style guideline) before considering reverting again. Thanks. --JBL (talk) 12:58, 22 November 2019 (UTC)[reply]

which says: "Even for simple formulae the LaTeX markup might be preferred if required for the uniformity through an article." And I think that this is what Rubenkazumov's edit was about, following "Style and formatting should be consistent within an article." from WP:MOS. Vincent Lefèvre (talk) 13:21, 22 November 2019 (UTC)[reply]
Style and formatting was not consistent through the article either before or after the edit. --JBL (talk) 13:35, 22 November 2019 (UTC)[reply]
Indeed, I had not seen that. Sorry. Vincent Lefèvre (talk) 13:37, 22 November 2019 (UTC)[reply]
No harm done. All the best, JBL (talk) 16:49, 24 November 2019 (UTC)[reply]

GNU MPC page

Hi Vincent. I added a page on GNU MPC. Would you look it over when you have some time, please?

Jeffrey Walton (talk) 00:10, 24 May 2020 (UTC)[reply]

@Jeffrey Walton: Thanks. I have changed it to say "INRIA and others", as for GNU MPFR, since the main authors are from INRIA. It seems that projects do not list individual authors. GNU software pages often say "GNU Project". Perhaps this could be added. But in the case of MPFR and MPC, the provenance of the authors is more restricted.
BTW, on the MPFR page, one could list some software that uses it, and this includes GNU MPC. Vincent Lefèvre (talk) 01:24, 24 May 2020 (UTC)[reply]
Ack, sounds good. Jeffrey Walton (talk) 01:35, 24 May 2020 (UTC)[reply]

Multimedia extensions template

Hi Vincent. I am adding SuperH back to the Multimedia extensions template. Please do not undo; this is not a mistake, as I explained on the talk page. If you are still not convinced, please voice your opinion. Dawzab (talk) 20:21, 8 July 2020 (UTC)[reply]

Those who care

the scientific community actually seems to fall into 2 categories: those who do not care and those who want to use the upright style – try to change this on WP, and you'll quickly hear from the third unmentioned category: those who do care and want the italic style, particularly in the WP mathematical community. My favourite complaint is that e^x is used to denote exp(x) without disambiguation in contexts where this is ambiguous, which is pretty much anywhere outside of real analysis, e.g. with complex numbers. —Quondum 22:25, 16 August 2020 (UTC)[reply]

I'm not sure I understand your second sentence. I haven't said anything about exp, just e.g. upright e^x vs italic e^x. FYI, among what I've found about typesetting Euler's number (and other constants):
and all of them seem to favor the upright style. The AMS style guide uses italic e for Euler's number in its examples, but does not give any requirement or recommendation (it says that functions should be in roman, so that they are not confused with variables, but nothing about constants). Journals of the London Mathematical Society: house style and instructions for copy-editors and typesetters says that e should be in italic. Physical Review Style and Notation Guide says that roman is for words and italic is for mathematical symbols, so e is in italic.
Note that in computer arithmetic, when dealing with the floating-point representation, the exponent is generally denoted e, so that for Euler's number, the upright style avoids possible confusion.
FYI, the French and Spanish Wikipedia pages for e (among others) use the upright style.
Vincent Lefèvre (talk) 00:29, 17 August 2020 (UTC)[reply]
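For concreteness, here is the distinction in question in LaTeX notation (a minimal sketch; Wikipedia's <math> mode accepts the same commands):

    % upright constant via \mathrm versus the TeX default math italic
    $\mathrm{e}^{\mathrm{i}\pi} + 1 = 0$   % upright style
    $e^{i\pi} + 1 = 0$                     % italic style (TeX default)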
Please ignore my mention of exp. It was merely meant as a light-hearted interjection for the mathematically inclined, but is irrelevant (except perhaps to show that my tone is intended to be light-hearted).
With regard to roman versus italic, I was only making an amused comment on an apparent misconception in your edit summary, and you are free to ignore it. If you are interested in the position on this in WP, it has been discussed in various places before, e.g. here and here. The MoS is not prescriptive on the topic, aside from discouraging changing the style in an existing article. Outside style guides, standards, etc. seem to carry little weight on WP. —Quondum 01:49, 17 August 2020 (UTC)[reply]
Quondum: Thanks for the pointers to the Wikipedia discussions (BTW, the first comment in this one is from a former colleague of mine, actually the leader of my team 15-19 years ago!). I was a bit surprised by the discussions on pi, as I thought that there was only one variant in TeX, and actually I find that the one obtained in TeX (e.g. Latin Modern Math Italic) looks more like the usual upright π (e.g. from DejaVu Sans, Noto Sans Regular, or Nimbus Roman) than the italic π from Nimbus Roman Italic. So this is not just about upright vs italic, but also about the font that is used. — Vincent Lefèvre (talk) 13:48, 17 August 2020 (UTC)[reply]

Possible review of Microsoft Binary Format description details?

Hi Vincent,

in the Microsoft Binary Format#Technical details article we have a bit-level description of the MBF floating-point number format. Further down, we even have a couple of example values, including their binary representation, derived from a byte-exact 6502 ROM disassembly using this format. And we have pieces of source-code comments from a Borland document on how to carry out conversions to/from IEEE 754. The values and the description in the article, however, do not seem to match up correctly in regard to the binary exponent value ranges and biased exponents, but I think it is important for historical reasons that we provide a bit-level accurate description of this format. It is also possible that I am just temporarily confused about it, therefore, if you have fun and time, I would appreciate a sharp eye on this so we get it right...

Thanks and greetings, --Matthiaspaul (talk) 12:12, 2 September 2020 (UTC)[reply]
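For reference, a rough, untested C sketch of the usual MBF-single to IEEE binary32 conversion. The helper name is hypothetical; it assumes the common byte layout with the exponent byte last and ignores edge cases such as exponent underflow:

    #include <stdint.h>
    #include <string.h>

    /* MBF single: bytes b[0..2] = mantissa (LSB first, implicit leading 1,
       sign in the top bit of b[2]), b[3] = exponent with bias 128.
       Value = (-1)^s * 0.1mmm... * 2^(b[3]-128), so the IEEE biased
       exponent is simply b[3] - 2. */
    static float mbf_to_ieee(const uint8_t b[4]) {
        if (b[3] == 0)                     /* MBF zero: exponent byte is 0 */
            return 0.0f;
        uint32_t sign = (uint32_t)(b[2] & 0x80) << 24;
        uint32_t mant = ((uint32_t)(b[2] & 0x7F) << 16)
                      | ((uint32_t)b[1] << 8) | b[0];
        uint32_t bits = sign | ((uint32_t)(b[3] - 2) << 23) | mant;
        float f;
        memcpy(&f, &bits, sizeof f);       /* reinterpret the bit pattern */
        return f;
    }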

reversion of z/OS addition to the quad precision list in the long double article.

You reverted my change, citing the C/C++ User's Guide. However, the portion of the document that you cited refers to FLOAT(HEX) compilation (https://en.wikipedia.org/wiki/IBM_hexadecimal_floating_point). Extended precision with the FLOAT(HEX) compilation option (the default for 24-bit and 32-bit compilation modes) is also a 128-bit format (not x87), but it has a 7-bit biased exponent (base 16).

In the paragraph after the one you cited as rationale for reverting my change is the relevant text:

"z/OS XL C/C++ also supports IEEE 754 floating-point representation (base-2 or binary floating-point formats). By default, float, double, and long double values are represented in z/Architecture floating-point formats (base-16 floatingpoint formats). However, the IEEE 754 floating-point representation is used if you specify the FLOAT(IEEE) compiler option. For details on this support, see “FLOAT” on page 117."

z/OS floating point is confusing, but I've added an additional reference to substantiate my original edit.

Also see:

- https://en.wikipedia.org/wiki/Quadruple-precision_floating-point_format

- https://en.wikipedia.org/wiki/IBM_hexadecimal_floating_point

(the latter explains the FLOAT(HEX) extended precision format, which is the default long double representation in some compilation modes (but not for 64-bit.))

-- Peeter.joot (talk)

@Peeter.joot: Note that I was not thinking of the base-16 formats. I was just saying that "extended precision" does not necessarily mean binary128 (quadruple precision), which did not exist in IEEE 754-1985, on which the z/OS Version 2 Release 3 XL C/C++ User's Guide is based: page 119, "For more information about the IEEE format, refer to the IEEE 754-1985 IEEE Standard for Binary Floating-Point Arithmetic." The first reference you give just provides the range, which corresponds to the minimal range for "double extended" from IEEE 754-1985. Both the x87 extended format and the binary128 format have this range, and both typically have 128-bit storage on 64-bit machines (the former for alignment purposes, thanks to padding bits; the latter because all 128 bits are used), so this reference does not allow one to conclude. The second reference says that z13 supports quadruple precision, but that's new in z13, so I'm wondering whether the "long double" type is used for this format or another type (such as __float128), as on some Linux platforms. The reason for the latter choice would be to preserve the ABI in case quadruple precision was not used from the beginning. In short, one would need another reference to confirm. — Vincent Lefèvre (talk) 21:10, 10 October 2020 (UTC)[reply]
Well, for one, this doesn't have much to do with z/OS, but with z/ hardware. At least in the later versions, z/ supports hex, binary, and decimal floating point in up to 128 bit representations. That leaves the question of compiler support for the different formats, which I don't know so well. Gah4 (talk) 22:18, 10 October 2020 (UTC)[reply]
Extended (128 bit) precision has been supported by IBM starting with the 360/85 and continuing with S/370. There is software emulation when the hardware isn't available, and for DXR even when it is. (Until late in ESA/390 years when DXR was finally added.) I thought 128 bit was supported from the beginning of BFP, but I didn't try to follow that so closely. The IEEE 754-2008 DFP considers 64 bit and 128 bit as basic formats, where decimal 32 is considered not basic. Again, compiler support is still a question for all formats. Gah4 (talk) 22:24, 10 October 2020 (UTC)[reply]
It is the ABI that defines the format of the native types. The ABI is influenced by the ISA and the original hardware. If the initial ISA had already supported binary128 (though not yet in hardware at that time), then long double would likely correspond to binary128. But this does not seem to be the case: the z/Architecture page says for floating point: "16× 64-bit". So this is unclear. Quadruple-precision support could have been added only with a new "vector long double" type (mentioned in https://www.ibm.com/support/pages/sites/default/files/inline-files/$FILE/vector_optimization.pdf). Whether it is directly based on the format used for "long double" or not, I don't know. Note that "extended precision" can be used for lots of IEEE-conforming formats, not just for IEEE quadruple precision. In the past, two companies started to define quadruple precision (copying from each other) and implemented it in software, which led to binary128: HP and Sun, around 1985-1986. I don't know about IBM, but S/370 was in 1970, so if "long double" uses binary128, it doesn't come from S/370. — Vincent Lefèvre (talk) 22:44, 10 October 2020 (UTC)[reply]
In this e-mail message from Eric Schwarz (2011): "IBM zSeries supports binary128 in hardware as well as binary32, binary64, decimal64 and decimal128." but I don't know the status before 2011. — Vincent Lefèvre (talk) 22:53, 10 October 2020 (UTC)[reply]
This is clarified by Schwarz and Krygowski's article "The S/390 G5 floating-point unit" (1999), which says "Extended" for the future binary128, and it was implemented in hardware. This was shortly before z/Architecture was introduced. This leaves no doubt that IEEE quadruple precision has been used since the beginning. I've added this reference to the long double page. — Vincent Lefèvre (talk) 23:55, 10 October 2020 (UTC)[reply]
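For what it's worth, a quick way to see which format a given compiler maps "long double" to is to print the <float.h> parameters; a minimal sketch, with the usual values noted in the comment:

    #include <float.h>
    #include <stdio.h>

    int main(void) {
        /* typical values: x87 extended 64, binary128 113, double-double 106;
           IBM hex extended would show FLT_RADIX 16 with LDBL_MANT_DIG 28 */
        printf("FLT_RADIX     = %d\n", FLT_RADIX);
        printf("LDBL_MANT_DIG = %d\n", LDBL_MANT_DIG);
        printf("LDBL_MAX_EXP  = %d\n", LDBL_MAX_EXP);
        return 0;
    }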

Response to the revision of the floating point article.

What are the typos? I fixed the error of forgetting to mention the endianness. Here is the recent update: https://commons.wikimedia.org/wiki/File:Single-precision_floating_point_diagram.png — Preceding unsigned comment added by Joeleoj123 (talk • contribs) 20:41, 7 November 2020 (UTC)[reply]

@Joeleoj123: "In normalized notation": this is not a notation but a representation; in this format, the representation is normalized, and thanks to normalization, the first bit 1 of a normal number is not stored (thus it is an implicit bit). The text in blue is unclear. In "Move radix point 3 Places to the right.", there should not be a line break after "3", and it should not be followed by a capital letter. This text is WP:OR anyway (the encoding is normally defined from the (s,m,e) representation, as described in Single-precision floating-point format; imagine the case where the exponent is −100, for instance: you would not do that). Next, "single-precision" should not have a hyphen here, and "offset-binary" is not a well-known term, thus should have an internal link on it; but "biased exponent" is more common in the context of floating-point formats and is the term used in the IEEE 754 standard. Next, "banary" should be "binary". In red, "23 bits of the significand" is incorrect; the significand has 24 bits, and what is stored (i.e., the 23 bits) is now called the "trailing significand field". These are issues with the content. There are also accessibility issues (I suppose that SVG could improve that, but I don't even think that an image is necessary). Moreover, such a description doesn't belong in Floating-point arithmetic, and probably not in IEEE 754 either. Single-precision floating-point format already has this level of detail; there may still be room for improvement in the example there (e.g., the significand written in binary should be added), but see all the remarks I've just made. — Vincent Lefèvre (talk) 22:03, 7 November 2020 (UTC)[reply]
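For reference, a minimal C sketch of decoding the binary32 fields from the encoding side, using 0.15625 = 1.25 * 2^-3 as the example value:

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    int main(void) {
        float x = 0.15625f;                        /* 1.25 * 2^-3 */
        uint32_t bits;
        memcpy(&bits, &x, sizeof bits);            /* well-defined type pun */
        uint32_t sign     = bits >> 31;
        uint32_t biasedE  = (bits >> 23) & 0xFF;   /* biased exponent */
        uint32_t trailing = bits & 0x7FFFFF;       /* trailing significand field */
        /* the 24-bit significand includes the implicit leading 1 (normal numbers) */
        uint32_t signif   = (biasedE ? 0x800000u : 0u) | trailing;
        printf("sign=%u biased_exp=%u (unbiased %d) significand=0x%06X\n",
               sign, biasedE, (int)biasedE - 127, signif);
        return 0;
    }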

ArbCom 2020 Elections voter message

Hello! Voting in the 2020 Arbitration Committee elections is now open until 23:59 (UTC) on Monday, 7 December 2020. All eligible users are allowed to vote. Users with alternate accounts may only vote once.

The Arbitration Committee is the panel of editors responsible for conducting the Wikipedia arbitration process. It has the authority to impose binding solutions to disputes between editors, primarily for serious conduct disputes the community has been unable to resolve. This includes the authority to impose site bans, topic bans, editing restrictions, and other measures needed to maintain our editing environment. The arbitration policy describes the Committee's roles and responsibilities in greater detail.

If you wish to participate in the 2020 election, please review the candidates and submit your choices on the voting page. If you no longer wish to receive these messages, you may add {{NoACEMM}} to your user talk page. MediaWiki message delivery (talk) 01:21, 24 November 2020 (UTC)[reply]

revert at Rounding

I was disappointed that you reverted my edit.[9] The point was to make it clear to a casual reader that they can expect their favourite programming language to do this, after I was surprised to see this behaviour myself. I never claimed GNU C sets the default; I was just giving some common examples. Surely making the connection to actual programming languages improves the article, and so it should be left in (or at least improved rather than deleted), per WP:IAR? Adpete (talk) 00:30, 5 December 2020 (UTC)[reply]

Adpete: Well, you can't say something just because you tested it, because there are many platforms, many compilers, etc. Concerning the C language, the ISO C standard does not give any guarantee. Moreover, there exist processors that do not conform to IEEE 754, so you will not necessarily get rounding-to-nearest-even with any programming language on such processors. — Vincent Lefèvre (talk) 02:33, 5 December 2020 (UTC)[reply]
In the case of GNU C and Python (and some others), it is in the specification, so yes, it is guaranteed. (The Python documentation occasionally says things depend on the underlying architecture, for instance the handling of underflow [10], but not in the case of rounding.) But my real point is that round-to-even is common, and that point belongs in the article, to distinguish it from the other rounding schemes. Adpete (talk) 03:49, 5 December 2020 (UTC)[reply]
You misread the specifications. The ones you cited in the reverted text concern only some round-to-integer functions, not floating-point operations in general, over which the language implementation generally has no control (unless everything is reimplemented in software on non-IEEE machines). The fact that round-to-even is common for the basic floating-point operations is just because this is what the processor provides, to conform to IEEE 754. — Vincent Lefèvre (talk) 10:51, 5 December 2020 (UTC)[reply]
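To illustrate the distinction, a minimal C sketch, assuming an IEEE 754 platform where <fenv.h> behaves as specified (some compilers do not fully support the FENV_ACCESS pragma):

    #include <fenv.h>
    #include <math.h>
    #include <stdio.h>

    int main(void) {
        #pragma STDC FENV_ACCESS ON
        /* rint() honours the current rounding mode (ties-to-even by default);
           round() always rounds halfway cases away from zero */
        printf("%g %g\n", rint(0.5), rint(1.5));    /* 0 2 */
        printf("%g %g\n", round(0.5), round(1.5));  /* 1 2 */
        if (fesetround(FE_UPWARD) == 0)             /* may fail on some systems */
            printf("%g\n", rint(0.5));              /* 1 */
        return 0;
    }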

Thank you

I have noticed that whenever I see an edit by you it really improves the article. I just wanted to drop you a line thanking you for all of the hard work. --Guy Macon (talk) 01:15, 29 March 2021 (UTC)[reply]

Paul Ritter DOB

Hello, I'm confused as to where the 20th of December has come from, as the website and sources such as IMDb have always said he was born on the 5th of March? Maryam01830 (talk) 11:11, 10 April 2021 (UTC)[reply]

Hi Maryam01830. The 20 December 1966 date comes from two reliable sources: The Guardian and The Times (for both of them, see the end of the article). FYI, they are given on wikidata:Q7153247. The date 5 March 1966 was inconsistent with the age of 54 given by most sources (even those giving this DOB). IMDb is not always reliable, even though this has improved for DOBs (they now require a reliable source, but this is not always checked, and one does not know the source they used); I've just requested a correction of the DOB on IMDb, with the 2 sources I've given here.
BTW, I also reverted another of your changes: "the 5th of April 2021". This date format is not accepted on Wikipedia, as said at MOS:BADDATE. See MOS:DATE for complete information on date formats. — Vincent Lefèvre (talk) 12:11, 10 April 2021 (UTC)[reply]

Thankyou so much for the feedback and the correction! Maryam01830 (talk) 12:42, 10 April 2021 (UTC)[reply]

Hello again, I remembered I forgot to ask something: was his birthday updated to 20 December before or after the publication of the Times obituary and the Guardian article? Many thanks. Maryam01830 (talk) 18:09, 10 April 2021 (UTC)[reply]

@Maryam01830: The 20 December date was added on Wikipedia in Special:Diff/1016400905, whose log message says that this date comes from The Times obituary. So The Times could not have copied from Wikipedia. The article from The Guardian was published on the following day, but experience shows that The Guardian is reliable (I have never seen or heard of a wrong date being copied from WP). However, neither says where this date comes from; currently, it could possibly be from the family (AFAIK, after a death, they normally don't lie, since there is no interest in giving a wrong date; they may even restore the truth, as was done for Jean-Pierre Mocky, who had invented a story to make everyone believe that he was younger than he really was, despite official documents). — Vincent Lefèvre (talk) 20:30, 10 April 2021 (UTC)[reply]

Thanks so much for once again clearing that up. I had apprehensions about the Guardian copying, but having heard of your positive experience with them, it seems trustworthy. Oh I see, yes, that could make sense! Especially as the other info that has been released has brought more truth to light. Thanks again for the info. Maryam01830 (talk) 21:15, 10 April 2021 (UTC)[reply]

ARM architecture "license" revert

Hi, I see that you reverted my edit changing "licence" to "license". The American spelling of "license" and its derivatives is used everywhere else in the article. I am confused as to why you reverted it back to the British spelling when the usage of "license" is written in American English. Ordusifer (talk) 21:25, 21 June 2021 (UTC)[reply]

Hi Ordusifer,
This page has a {{Use British English|date=June 2012}} template, thus uses British spelling, not American spelling. The reason is that ARM was originally a British company.
I haven't seen any non-British spelling in this page (note that the spelling of the noun "licence" should not be confused with the one of the verb "license" and of "licensee").
Regards,
Vincent Lefèvre (talk) 22:06, 21 June 2021 (UTC)[reply]
Thank you for the clarification. Now I know what to look for.
Best regards,
Ordusifer (talk) 23:16, 21 June 2021 (UTC)[reply]

Bounded Floating Point

Vincent,

Thank you for your interest in floating point and floating point error.

In my conversations with experts in the field, I have put forward the following explanation:

"*Note on the naming of BFP: We regret choosing the adjective “Bounded” to refer to our floating-point extension. Significant Bits Floating Point may have more accurately identified our work, since we are calculating, monitoring, and storing the number of significant bits available after a calculation."

Though it is possible to derive an interval with BFP (in general much tighter than with interval arithmetic, IA), there are other important distinctions. Representing double-precision intervals requires 128 bits, while in BFP this is accomplished with only 80 bits. BFP detects true zero when the significant bits of a result are all zero. And fundamentally, BFP does not blow up under catastrophic cancellation. We haven't built the hardware yet, but clearly BFP will outperform IA. I would hope that you would read the BFP literature and provide informed criticism.

I look forward to hearing from you.

Alan Softtest123 (talk) 23:55, 19 October 2021 (UTC)[reply]

About "To represent double precision intervals requires 128 bits while BFP this is accomplished with only 80 bits.", this just means that inf-sup double can represent more intervals than BFP, i.e. the objects take more space, but the system is more powerful. However, this is not the only way to represent intervals. With mid-rad intervals, one can have an approximation on 64 bits and an error term on 16 bits, thus a total of 80 bits. It seems that BFP does something like that.
About "BFP detects true zero when the significant bits of a result are all zero.", this is also the case with interval arithmetic: (0,0) is the true zero in both inf-sup and mid-rad representations.
And "BFP does not blow up under catastrophic cancellation" is not clear. Catastrophic cancellation is a notion in floating-point arithmetic (or similar), not interval arithmetic. There are two issues with interval arithmetic: First, dependency issues (e.g. a same variable used twice in the same expression), but ignoring them (like what one of the unum proposals did) could yield incorrect results. Second, even without dependency issues, bounds are pessimistic, but again trying to return smaller intervals could also yield incorrect results (an example is significance arithmetic... but perhaps BFP behaves like that). There are more advanced arithmetics to avoid these issues, like affine arithmetic and Taylor models, but it doesn't seem that BFP goes at this level as objects take much more space (certainly not 80 bits).
Vincent Lefèvre (talk) 02:29, 20 October 2021 (UTC)[reply]
The point of BFP is NOT intervals, but rather the retention of significant bits. That doesn't forgive my ignorance, though. Professor Kahan pointed out that I had a lot more reading to do, but life is short. Most of my knowledge of interval arithmetic comes from Moore's "Interval Analysis", which describes interval operations as standard floating-point operations on pairs of floating-point numbers. How is the number of intervals that can be produced relevant?
Perhaps you could demonstrate for me the results that IA produces for (sqrt(pi^2) - pi) and ((sqrt(pi))^2 - pi). BFP correctly produces 0.
Can you give me a specific example of a dependency relation that I can solve with BFP? Your statement, "Catastrophic cancellation is a notion in floating-point arithmetic (or similar), not interval arithmetic", implies to me that interval arithmetic does not use standard floating point. Is there some superior real-number representation that I am not familiar with? If so, where can I read about it?
Still, the cancellation problem presents itself. My paper, "Exact Floating Point", defines the loss of significant bits due to the subtraction of similar numbers. A ratio M/S of 1/2 can double the amount of lost precision, where M is the minuend and S is the subtrahend. This would hold for any number representation. Where can I find out how IA deals with this issue?
To me, we seem to be having a good discussion about floating point error mitigation. There is no need to clutter your talk page with our discussion. We could exchange emails. My address can be found on our website, but we can continue here, if you prefer.
Softtest123 (talk) 13:18, 20 October 2021 (UTC)[reply]

ArbCom 2021 Elections voter message

Hello! Voting in the 2021 Arbitration Committee elections is now open until 23:59 (UTC) on Monday, 6 December 2021. All eligible users are allowed to vote. Users with alternate accounts may only vote once.

The Arbitration Committee is the panel of editors responsible for conducting the Wikipedia arbitration process. It has the authority to impose binding solutions to disputes between editors, primarily for serious conduct disputes the community has been unable to resolve. This includes the authority to impose site bans, topic bans, editing restrictions, and other measures needed to maintain our editing environment. The arbitration policy describes the Committee's roles and responsibilities in greater detail.

If you wish to participate in the 2021 election, please review the candidates and submit your choices on the voting page. If you no longer wish to receive these messages, you may add {{NoACEMM}} to your user talk page. MediaWiki message delivery (talk) 00:07, 23 November 2021 (UTC)[reply]

Floating point literal syntax revert

Hey you! Yes, you, Mr. Revert Man! I see from your talk page that you love reverts. So I reverted your revert to make even more reverts! Twice as many reverts! A revert bonanza! I thought you'd enjoy it, so just letting you know. --Mathnerd314159 (talk) 19:32, 29 January 2022 (UTC)[reply]

Yes, most reverts are for vandalism/spam or for changes whose only purpose is to break the rules. — Vincent Lefèvre (talk) 00:04, 31 January 2022 (UTC)[reply]

Bounded Floating Point 2

Before you delete my work on Bounded Floating Point, I would think it wise of you to understand BFP. There are many publications on Bounded Floating Point, including the patents. It is a patented device and method for computing and retaining error during floating-point computations. No other method of floating-point calculation performs these functions. For example, it correctly handles the comparison A − B = 0 under all circumstances. Interval arithmetic does not do this, and under certain circumstances it blows up. You do floating-point technology a disservice by deleting my work.

Your recent editing history shows that you are currently engaged in an edit war; that means that you are repeatedly changing content back to how you think it should be, when you have seen that other editors disagree. To resolve the content dispute, please do not revert or change the edits of others when you are reverted. Instead of reverting, please use the talk page to work toward making a version that represents consensus among editors. The best practice at this stage is to discuss, not edit-war. See the bold, revert, discuss cycle for how this is done. If discussions reach an impasse, you can then post a request for help at a relevant noticeboard or seek dispute resolution. In some cases, you may wish to request temporary page protection.

Being involved in an edit war can result in you being blocked from editing—especially if you violate the three-revert rule, which states that an editor must not perform more than three reverts on a single page within a 24-hour period. Undoing another editor's work—whether in whole or in part, whether involving the same or different material each time—counts as a revert. Also keep in mind that while violating the three-revert rule often leads to a block, you can still be blocked for edit warring—even if you do not violate the three-revert rule—should your behavior indicate that you intend to continue reverting repeatedly.

Softtest123 (talk) 04:13, 14 February 2022 (UTC)[reply]

@Softtest123: For the moment, I've tagged the section as Wikipedia:Conflict of interest. Now, concerning the content, this is still very unclear. Currently, this seems like significance arithmetic. Your article Bounded Floating Point: Identifying and Revealing Floating-Point Error contains various misconceptions. First, "these errors are incompatible since rounding error is linear and cancellation is exponential" is an over-simplification. Things can be more complex in practice, with examples that do not involve cancellations, but quickly return completely incorrect results with floating-point arithmetic, exclusively due to initial rounding errors: say, the condition number increases at each iteration of a sequence. Your article doesn't mention the condition number at all. So it seems that you missed the real issues. AFAIK, CADNA had similar issues in the past, before it was improved. BTW, note that a fixed-length format cannot detect a true zero, just because it doesn't store enough information to be able to do that; and values may appear to be equal, while they are not (see Heegner number, for instance). Well, perhaps BFP may improve things for some problems, but it will silently fail on others. — Vincent Lefèvre (talk) 09:38, 14 February 2022 (UTC)[reply]
@Vincent Lefèvre: In a sense, BFP is related to significance arithmetic in that a BFP operation calculates and retains the number of significant bits in a result by calculating and retaining the number of insignificant bits. In this sense, rounding error increases the number of insignificant bits (ulps) in a linear fashion, whereas cancellation, when it occurs, is multiplicative.
This is explained in detail in "Exact Floating Point", CSCE 2020 (The 2020 World Congress in Computer Science, Computer Engineering, and Applied Computing), July 27-30, 2020, Luxor Hotel, 3900 Las Vegas Blvd. South, Las Vegas, 89109, USA, American Council on Science & Education, ISBN 1-60132-512-6. This paper mathematically defines the term "similar" as used in "cancellation error occurs when subtracting similar numbers".
Since BFP knows the number of significant bits in a result, when all of the significant bits of a result are zero, the result is significantly zero. Thus BFP properly implements the equality relation, a feature not available in standard floating point. I have documented a demonstration that BFP detects unstable matrices.
I would find it very useful if you would provide an example where bounded floating point fails. I will include it in my extensive test suite.
Softtest123 (talk) 22:24, 14 February 2022 (UTC)[reply]
This is funny. IBM calls their binary (IEEE) format BFP, to distinguish it from HFP and DFP. Otherwise, the IBM 7030 has a switch to select whether 0 or 1 bits are shifted in during post-normalization. The idea was that one could run a program both ways and compare results. Seems to me that works until you subtract values that have been so treated, but otherwise it seems like an interesting idea. (Though for modern systems, it would be better not to use a front-panel switch.) Gah4 (talk) 05:29, 15 February 2022 (UTC)[reply]
@Softtest123 I could not find a link to the "Exact Floating Point" paper, and if it has been published, you have not provided the DOI. There doesn't seem to be any BFP implementation available to the public either. It is difficult to provide an example without knowing exactly how BFP behaves. However, for the time being, there are generic examples, such as the sequence u[0] = 2; u[1] = -4; u[n+1] = 111 - 1130 / u[n] + 3000 / (u[n] * u[n-1]);. It would be interesting to know how BFP behaves on it. You did not provide any clue to guess what one would get. — Vincent Lefèvre (talk) 21:44, 16 February 2022 (UTC)[reply]
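For readers who want to try it, a minimal binary64 version in C; in exact arithmetic this sequence converges to 6, but any rounding error sends the computed values toward 100:

    #include <stdio.h>

    int main(void) {
        double u_prev = 2.0, u = -4.0;   /* u[0] and u[1] */
        for (int n = 2; n <= 30; n++) {
            double next = 111.0 - 1130.0 / u + 3000.0 / (u * u_prev);
            printf("u[%2d] = %.17g\n", n, next);
            u_prev = u;
            u = next;
        }
        return 0;
    }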


@Vincent Lefèvre:
The paper can be found in:
Advances in Software Engineering, Education, and e-Learning
Print ISBN: 978-3-030-70872-6
Electronic ISBN: 978-3-030-70873-3
@Vincent Lefèvre: This is an interesting recursion. BFP detects a loss of all significance at step three of the recursion.
Softtest123 (talk) 03:09, 17 February 2022 (UTC)[reply]
@Softtest123 OK, the DOI of your paper is doi:10.1007/978-3-030-70873-3_26. Unfortunately, I do not have access to it via my employer's subscriptions. At least the full spec of BFP or an implementation should be publicly available. Currently, this just looks like significance arithmetic. FYI, on the sequence I've given, with correctly rounded interval arithmetic in 53-bit precision (thus corresponding to IEEE 754 binary64, which could be used with the directed rounding modes), there is still 24-bit accuracy at step 8. — Vincent Lefèvre (talk) 09:44, 17 February 2022 (UTC)[reply]
@Vincent Lefèvre:
See patent US 2020/0150959 A1
I erred when I said it lost all significance at step 3 of the recursion. Rather, it loses all significance at step 14.

X[0] = 2.000000000000000
X[1] = -4.000000000000000
X[2] = 18.50000000000007
X[3] = 9.37837837837878
X[4] = 7.801152737756
X[5] = 7.15441448103
X[6] = 6.8067847377
X[7] = 6.592632780
X[8] = 6.44946611
X[9] = 6.348454
X[10] = 6.27448
X[11] = 6.2193
X[12] = 6.187
X[13] = 6.32
X[14] = qNaN.sig

Where qNaN.sig is the non-signalling NaN generated by BFP when all significance is lost.
Output from BFP is restricted to the number of digits that support the number of significant bits available.
Softtest123 (talk) 16:33, 18 February 2022 (UTC)[reply]
@Softtest123 The patent is not very detailed. Anyway, you say "The true, real value is absolutely contained by these bounds." So this is some form of interval arithmetic, and in particular a mid-rad one, since, if I understand correctly, in your representation you have an approximation and an error bound (radius). — Vincent Lefèvre (talk) 17:29, 18 February 2022 (UTC)[reply]
@Vincent Lefèvre: The patent seems sufficiently detailed to construct a hardware device that will perform BFP functions. The name "Bounded Floating Point" was an unfortunate choice, since it is actually a new form of floating point contained in a single word. It extends the format of standard floating point by adding an error field. This error field contains the number of units in the last place (ulps) that are not significant. It also contains the accumulated rounding error in fractions of a ulp. BFP correctly accommodates cancellation error as well as rounding error. We have debated the name selection at length but have not found a suitable replacement. We seem to be stuck with BFP. BFP does, however, provide features that are not available in any other real-number representation, and in particular it provides useful results when other representations do not. Of particular usefulness is BFP's ability to establish a zero value when a result is significantly zero (all significant bits are zero).
Softtest123 (talk) 19:49, 20 February 2022 (UTC)[reply]
@Softtest123 What you implement is just some form of interval arithmetic. There are various ways to represent intervals. One of them, which appears to correspond to BFP, consists in representing an interval by an approximation to the true value (e.g. with a floating-point number, as you do) together with an error bound, and there are several ways to represent the error bound. This is called mid-rad in the IEEE 1788-2015 standard. The term "ball arithmetic" is also used, e.g. by Arb.
When a function is implemented, there are two kinds of errors: truncation errors (which would occur even in infinite precision, e.g. on a real-RAM machine) and rounding errors (due to the limited precision of the formats). With interval arithmetic, the errors on the inputs also need to be taken into account. There is no separate kind of "cancellation error"; if you mean the error due to a cancellation, this is just the errors on the inputs of a subtraction being amplified. Or do you mean that you take cancellation into account in a special way when representing the error bound? (This is not disallowed by interval arithmetic.)
And of course, in interval arithmetic, the exact zero can be represented by the interval [0,0] (in inf-sup) or, in mid-rad, by the approximation 0 with an error bound equal to 0 (thus meaning that the approximated value is actually exact). There is nothing specific to BFP.
Vincent Lefèvre (talk) 21:08, 20 February 2022 (UTC)[reply]
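As a toy illustration of the mid-rad ("ball") representation, a sketch only; a serious implementation such as Arb stores the radius in a dedicated low-precision format and rounds it upward:

    #include <math.h>

    typedef struct { double mid; double rad; } ball;  /* value in [mid-rad, mid+rad] */

    static ball ball_sub(ball a, ball b) {
        ball r;
        r.mid = a.mid - b.mid;
        /* the result's radius must cover both input radii plus the rounding
           error of the midpoint subtraction (crudely overestimated here) */
        r.rad = a.rad + b.rad + ldexp(fabs(r.mid), -52);
        return r;
    }

    /* subtracting a ball from itself gives mid 0 but a positive radius:
       a small interval containing 0, not a claim of an exact zero */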
@Vincent Lefèvre: You seem to have a lot of faith in interval arithmetic. Would you demonstrate for me the interval arithmetic solutions for the following problems: sqrt(pi^2) - pi and (sqrt(pi))^2 - pi?
Softtest123 (talk) 00:15, 21 February 2022 (UTC)[reply]
@Softtest123 With interval arithmetic, in both cases, one gets a small interval containing 0; the width of the interval has the order of the rounding errors. Any arithmetic that satisfies the inclusion property (which you claim to satisfy in your patent: "The true, real value is absolutely contained by these bounds.") will return something similar (assuming that the intermediate values like pi, pi^2, sqrt(pi) cannot be assimilated to exactly representable values). The reason is that once you have done some error (rounding or something else), you cannot retrieve the exact value, because the information is lost. The general idea is that among all the possible expressions with a different real value, some (or many) of them will yield the same datum X (due to the pigeonhole principle); and X − X must not give a true zero (this would be wrong). The only case where X − X may yield a true zero is when you know the exact value associated with X; but such cases will cover only a small subset of your data.
In your examples, the mathematical result is exactly 0. But the behavior is expected to be the same when the mathematical result is not exactly 0. For instance, (pi+2^(−300)) − (pi−2^(−300)) should also yield a small interval containing 0.
Vincent Lefèvre (talk) 01:00, 21 February 2022 (UTC)[reply]
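For concreteness, here is a quick check of what plain binary64 gives for the two expressions; the outputs are zero or on the order of one ulp of pi, depending on the libm:

    #include <math.h>
    #include <stdio.h>

    int main(void) {
        double pi = 3.14159265358979323846;       /* rounds to the nearest double */
        printf("%.3e\n", sqrt(pi * pi) - pi);     /* 0 or ~1e-16 */
        printf("%.3e\n", pow(sqrt(pi), 2) - pi);  /* 0 or ~1e-16 */
        return 0;
    }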
@Vincent Lefèvre: Bounded Floating Point returns zero for both of my example cases. There are no significant bits in the results. I'd have to run a test on your 2^(-300) example, but the result is not zero for sufficient precision.
Softtest123 (talk) 01:29, 21 February 2022 (UTC)[reply]
@Softtest123 I'm asking for the result in double precision. Another example: , also in double precision. — Vincent Lefèvre (talk) 02:26, 21 February 2022 (UTC)[reply]

There is this quote from the ASTESJ paper: "If all of the significant bits are zero, the resulting value must be zero. BFP detects this condition and sets all fields of the BFP result to zero." So it seems that zero detection normalizes to a true/exact zero value. But it is not clear whether this happens on each operation (e.g. each addition) or only at the end of the computation. And the comparison in section 8 seems somewhat misleading, because the interesting value is the value before zero detection normalizes it to true zero. But I guess that would just be what you get with plain floating point. The interesting part of BFP is the interval arithmetic with the C, D, and R fields. --Mathnerd314159 (talk) 06:44, 1 March 2022 (UTC)[reply]

@Mathnerd314159: It seems that for two mid-rad intervals (x1,r1) and (x2,r2) such that x1 and x2 are equal (or very close to each other?), BFP considers that they represent exactly the same value, so that (x1,r1) − (x2,r2) returns a true zero (unlike interval arithmetic). Statistically, for some particular classes of problems, this might be true in most cases. But in general, this may give an incorrect result, contrary to interval arithmetic. I don't know at what threshold (for the "very close to each other") BFP starts to recognize a true zero, but if the threshold is low, it may fail quite often after a sequence of computations; and to avoid this issue, the larger the threshold is, the more BFP will tend to yield incorrect intervals.
BTW, in floating-point arithmetic, the Excel and OpenOffice.org spreadsheets also try/tried to detect zero with their approxEqual function, and this surprised some users when it started to fail on some inputs. There was a long discussion in a French mailing list about that, but the archive has unfortunately gone.
"The interesting part of BFP is the interval arithmetic with the C, D, and R fields." Except that the attempt to detect true zero voids interval arithmetic. BFP is more like significance arithmetic once one has catastrophic cancellation.
Vincent Lefèvre (talk) 09:08, 1 March 2022 (UTC)[reply]
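The classic binary64 illustration of why such fuzzy equality is tempting, and why it surprises users when it fails:

    #include <stdio.h>

    int main(void) {
        double a = 0.1 + 0.2, b = 0.3;
        printf("%d\n", a == b);    /* 0: the two doubles differ by 1 ulp */
        printf("%.17g\n", a);      /* 0.30000000000000004 */
        printf("%.17g\n", b);      /* 0.29999999999999999 */
        return 0;
    }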
Well, what I was trying to capture with the interval arithmetic statement is that (as the ASTESJ paper says) a BFP value represents the interval . Looking at the patent, it seems zero detection/normalization does happen after every operation. So the operations are not the standard interval operators with outward rounding, but conceptually BFP is still working with intervals.
This is a bit hard to verify with no source code or description of how BFP addition works besides the patent diagrams, so I guess taking it out and just leaving the "derivative of unums" sentence would be fine. --Mathnerd314159 (talk) 17:34, 1 March 2022 (UTC)[reply]

MOS:TENSE suggests that descriptions should be in the present tense, except when describing events. Designed, built, sold, and such are actual events. Even when no hardware exists (though in most cases some actually does), the documentation should still exist, so the descriptions are present tense. The recent edits were mostly related to documentation of some older processors. I think the Word article looks fine now. Gah4 (talk) 05:25, 9 August 2022 (UTC)[reply]

@Gah4: Yes, but if you use the present tense for descriptions, as a general rule, you also need to keep the present tense for related events. — Vincent Lefèvre (talk) 11:04, 9 August 2022 (UTC)[reply]

ArbCom 2022 Elections voter message

Hello! Voting in the 2022 Arbitration Committee elections is now open until 23:59 (UTC) on Monday, 12 December 2022. All eligible users are allowed to vote. Users with alternate accounts may only vote once.

The Arbitration Committee is the panel of editors responsible for conducting the Wikipedia arbitration process. It has the authority to impose binding solutions to disputes between editors, primarily for serious conduct disputes the community has been unable to resolve. This includes the authority to impose site bans, topic bans, editing restrictions, and other measures needed to maintain our editing environment. The arbitration policy describes the Committee's roles and responsibilities in greater detail.

If you wish to participate in the 2022 election, please review the candidates and submit your choices on the voting page. If you no longer wish to receive these messages, you may add {{NoACEMM}} to your user talk page. MediaWiki message delivery (talk) 00:27, 29 November 2022 (UTC)[reply]

Please help group the new C2x features. I started some grouping. • Sbmeirow • Talk • 01:49, 24 December 2022 (UTC)[reply]

ArbCom 2023 Elections voter message

Hello! Voting in the 2023 Arbitration Committee elections is now open until 23:59 (UTC) on Monday, 11 December 2023. All eligible users are allowed to vote. Users with alternate accounts may only vote once.

The Arbitration Committee is the panel of editors responsible for conducting the Wikipedia arbitration process. It has the authority to impose binding solutions to disputes between editors, primarily for serious conduct disputes the community has been unable to resolve. This includes the authority to impose site bans, topic bans, editing restrictions, and other measures needed to maintain our editing environment. The arbitration policy describes the Committee's roles and responsibilities in greater detail.

If you wish to participate in the 2023 election, please review the candidates and submit your choices on the voting page. If you no longer wish to receive these messages, you may add {{NoACEMM}} to your user talk page. MediaWiki message delivery (talk) 00:22, 28 November 2023 (UTC)[reply]

reverting 'decimal64' improvements

I don't like this 'block warden mentality'; it hinders progress on Wikipedia. If my edits don't meet your quality standards, then improve them, or give me time to work on them and get better; don't switch back to technically WRONG information. My post was technically and historically much better than what is in there now. Even if you may be formally right in details, IEEE 854 did not define the data format decimal64, but 'standardized the framework to use data types with e.g. base 10'; it is completely stupid to continue to spread misinformation about decimal64 having 'broken significands' and wrong values for the exponents. YOU ARE FRUSTRATING WELL-INTENTIONED PEOPLE!!! Newbie002 (talk) 10:43, 8 December 2023 (UTC)[reply]

@Newbie002: Your edit had too many issues, and I did not see any improvement. So the best thing was to revert it. I do not see anything wrong with the current article, none of the other editors saw anything wrong, and your edit did not mention anything wrong (the summary of your changes was even empty!). If you think that something is wrong, open a discussion on the talk page of the article. — Vincent Lefèvre (talk) 15:00, 9 December 2023 (UTC)[reply]

A new article Computer arithmetic

Hi @Vincent Lefèvre, I wonder if you have any time to spare / would be interested in helping figure out what content should belong at Computer arithmetic (previously a poorly chosen redirect), which was just made as a stub in response to some ongoing discussion at arithmetic. I don't feel like enough of an expert to properly organize or write an article at that title from scratch, but I'm happy to help with smaller tasks like discussing possible high-level organization, hunting for historical references, writing small pieces, ... –jacobolus (t) 18:47, 7 March 2024 (UTC)[reply]

reverting decimal128 improvement

hello @Vincent Lefèvre, reverting those changes shows that you don't know the details of this format. Please learn them and then correct according to your style; as of now, it's misleading.

This was not an improvement. You should look at the IEEE 754 standard. Moreover, your style did not follow the WP conventions. — Vincent Lefèvre (talk) 18:08, 13 March 2024 (UTC)[reply]

CORE-MATH portability

Hi, I've found your issue on the CORE-MATH project (https://gitlab.inria.fr/core-math/core-math/-/issues/27), but as a non-INRIA member I can't comment there. Of the issues you raise, I believe:

  • the __x86_64__ intrinsics all have somewhat reasonable (but probably untested) replacements, so someone just needs to test them;
  • the __int128 thing can be dealt with by saying C-M requires GNU C because of __int128; anyone interested could of course bring their own type and replace all the +-*/ with ugly function calls...
  • the left-shift UB can either be hand-waved away again with GNU C, or some temporary unsigned conversion would work too (it would become "this is implementation-defined; we assume an implementation with two's complement"); see the sketch below.
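An untested sketch of what I mean for the shift (hypothetical helper name):

    #include <stdint.h>

    /* Left-shifting a negative signed value is UB up to C17; doing the
       shift on the unsigned representation is well defined, and converting
       back is implementation-defined (two's complement in practice, and
       mandated since C23). */
    static inline int64_t shl_i64(int64_t x, unsigned k) {
        return (int64_t)((uint64_t)x << k);
    }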

Artoria2e5 🌉 22:46, 28 June 2024 (UTC)[reply]

Hi Artoria2e5,
The GCC manual says that "__int128 is supported for targets which have an integer mode wide enough to hold 128 bits." So it is not supported on every platform. In particular, it is not supported on the 32-bit x86. — Vincent Lefèvre (talk) 00:20, 29 June 2024 (UTC)[reply]
Well... I'm finding it hard to explain to a casual reader what an integer mode is; apparently it's a matter of GCC internals (see specifically the linked manual). I am guessing that GCC might decide to implement the TI mode based on convenience: as in, someone might write the TI-mode code if an architecture has 64-bit general-purpose registers and some sort of mul-hi instruction. That's a lot of words to replace "x86_64". Artoria2e5 🌉 10:03, 4 July 2024 (UTC)[reply]
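For reference, the usual portable fallback when neither __int128 nor a mul-hi intrinsic is available is the schoolbook split into 32-bit halves; an untested sketch:

    #include <stdint.h>

    /* high 64 bits of a 64x64 -> 128-bit unsigned multiply */
    static uint64_t umulh64(uint64_t a, uint64_t b) {
        uint64_t a_lo = (uint32_t)a, a_hi = a >> 32;
        uint64_t b_lo = (uint32_t)b, b_hi = b >> 32;
        uint64_t p0 = a_lo * b_lo;
        uint64_t p1 = a_lo * b_hi;
        uint64_t p2 = a_hi * b_lo;
        uint64_t p3 = a_hi * b_hi;
        /* sum of the bit-32..63 column; its upper half carries into p3 */
        uint64_t mid = (p0 >> 32) + (uint32_t)p1 + (uint32_t)p2;
        return p3 + (p1 >> 32) + (p2 >> 32) + (mid >> 32);
    }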
Artoria2e5: Note also that the code depends on GCC and compatible compilers (Clang, and perhaps Intel's compiler too), also because of the use of various builtins (for some of them, there exist corresponding standard C functions, but they probably want to ensure that they do not use any library... well, except libgcc). — Vincent Lefèvre (talk) 10:41, 4 July 2024 (UTC)[reply]

OK with reversion of revision; but there were early specs as well as implementations

In reverting a recent change, you wrote

    This does not make sense. In the past, there were no specifications
    in general, and some FP implementations could give surprising
    results (in addition to the more general problem of having
    different behaviors). So "implementations" is the right word.

Actually, that edit did make sense: there were specifications as well as implementations -- in particular, IBM System/360 had a floating-point architecture, as did Cray, and DEC with the VAX 11/780. But it's not worth having an edit skirmish over it. WhackTheWiki (talk) 18:33, 18 August 2024 (UTC)[reply]

@WhackTheWiki: Well, I suppose they had specifications for the instructions and the formats, but not for the exact behavior (i.e. the results for each input). For instance, on some Cray machines, the floating-point division 14/7 did not give 2 exactly, but the FP number below 2. — Vincent Lefèvre (talk) 21:03, 18 August 2024 (UTC)[reply]

Decimal32 article needs review

FYI, the decimal32 floating-point format article needs review following extensive changes, similar to those in floating-point arithmetic, decimal64 floating-point format, and decimal128 floating-point format, by an anonymous editor, also in 176.4.0.0/16. I don't have time to do that work right now, unfortunately. Taylor Riastradh Campbell (talk) 16:10, 13 November 2024 (UTC)[reply]

@Taylor Riastradh Campbell: I don't have the time either at the moment. In the meantime, I've added a template to require cleanup as there are many style issues. — Vincent Lefèvre (talk) 01:53, 14 November 2024 (UTC)[reply]

ArbCom 2024 Elections voter message

Hello! Voting in the 2024 Arbitration Committee elections is now open until 23:59 (UTC) on Monday, 2 December 2024. All eligible users are allowed to vote. Users with alternate accounts may only vote once.

The Arbitration Committee is the panel of editors responsible for conducting the Wikipedia arbitration process. It has the authority to impose binding solutions to disputes between editors, primarily for serious conduct disputes the community has been unable to resolve. This includes the authority to impose site bans, topic bans, editing restrictions, and other measures needed to maintain our editing environment. The arbitration policy describes the Committee's roles and responsibilities in greater detail.

If you wish to participate in the 2024 election, please review the candidates and submit your choices on the voting page. If you no longer wish to receive these messages, you may add {{NoACEMM}} to your user talk page. MediaWiki message delivery (talk) 00:08, 19 November 2024 (UTC)[reply]