User talk:Nbarth/Archive 2020
This is an archive of past discussions about User:Nbarth. Do not edit the contents of this page. If you wish to start a new discussion or revive an old one, please do so on the current talk page.
Uniqueness of Euclidean Metric
Hi! I came across the uniqueness claim in the Euclidean distance article that you added here and was hoping you could provide some additional clarification and/or a citation? I had initially thought this meant that the Euclidean metric was (up to scaling) the only metric in one dimension (which led to some confusion on my part about claims made in the Medoid and Median articles), but I realize now that it's more about metrics induced by a norm, and that other metrics in one dimension are possible. At least, I think that's what you were claiming (hence my asking for clarification). For example, there are plenty of (strictly) subadditive metrics in a single dimension, but I'm assuming none of them are induced by a norm? Thanks for any help you can provide here. Wclark (talk) 18:14, 3 January 2020 (UTC)
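For concreteness, one standard textbook example of the kind of one-dimensional metric in question (not taken from this exchange, just an illustration): $d(x,y) = \sqrt{|x-y|}$ is a metric on $\mathbb{R}$, since $\sqrt{a+b} \le \sqrt{a} + \sqrt{b}$ makes it (strictly) subadditive, but it is not homogeneous, because $d(\lambda x, \lambda y) = \sqrt{|\lambda|}\,d(x,y) \neq |\lambda|\,d(x,y)$ in general, so it cannot be induced by any norm.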
- Hi @Wclark:
- Thanks for asking, and sorry if it's unclear!
- I've:
- added a brief explanation and proof of uniqueness of the absolute-value norm in Old revision of Norm (mathematics),
- linked from Old revision of Euclidean distance,
- added a note that there are other metrics (just not other norms) at Old revision of Euclidean distance, and
- Old revision of Euclidean distance
- Do these help?
- In the actual statement I originally added, I'm not sure it's possible to be clearer than:
- In one dimension, there is a single homogeneous, translation-invariant metric
- ...without being distracting; this explicitly states the additional properties of the metric (a brief sketch of why those properties force the usual metric, up to scale, follows below).
- However, the lack of context was confusing, as you note; hope these help a bit.
- —Nils von Barth (nbarth) (talk) 02:56, 5 January 2020 (UTC)
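- A quick sketch of the uniqueness argument referred to above (standard, assuming a metric $d$ on $\mathbb{R}$ that is translation-invariant and homogeneous):
- $d(x,y) = d(x-y,\,0) = d\bigl((x-y)\cdot 1,\ (x-y)\cdot 0\bigr) = |x-y|\,d(1,0),$
- so every homogeneous, translation-invariant metric on $\mathbb{R}$ is a positive constant multiple of $|x-y|$.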
- Thank you! Your changes are perfect. Cheers, Wclark (talk) 04:25, 5 January 2020 (UTC)
Euclidean distance
Thanks for your edits to Euclidean distance. I have been trying to get the article in shape for a Good Article nomination, which in particular means that everything in it must be properly sourced. I added a source for the partition of TSS, but do you have one convenient for your other new material on gradients of sums of squares, and in particular for your claim that it is more convenient to use SED/2 than SED itself? —David Eppstein (talk) 05:37, 9 November 2020 (UTC)
- On second thought (and after searching and failing to find appropriate references) is this material even correct? The gradient of the squared Euclidean NORM is the vector itself. But this is not an article about the Euclidean norm — that is a more advanced topic that it links to other articles for. The gradient of the squared Euclidean distance, a function of two d-dimensional vectors, is a 2d-dimensional thing. I'm becoming unconvinced that this belongs in the article at all. —David Eppstein (talk) 06:13, 9 November 2020 (UTC)
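- For reference, the computation behind that remark (standard multivariable calculus, writing the squared distance as a function of both endpoints $x, y \in \mathbb{R}^d$): if $f(x,y) = \|x-y\|^2$, then $\nabla_x f = 2(x-y)$ and $\nabla_y f = -2(x-y)$, so the full gradient $(\nabla_x f, \nabla_y f)$ lives in $\mathbb{R}^{2d}$. Only with one endpoint held fixed does it reduce to (twice) the displacement vector, and using $\|x-y\|^2/2$ removes the factor of 2.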
- Also, in your material about partition of sums of squares, you explained it entirely in terms of sums of squares, not in terms of Euclidean distances. This is not an article about sums of squares. Is there a description of the partition of sums of squares that is concise enough for this context and involves only distances? Or does it really belong in an article about sums of squares and not in one that is mostly about distances and only a little bit about squared distances? Note for instance the explanation of least squares, already in the article, described as minimizing an average of squared distances between predicted and observed values. In the partition of sums of squares, what pairs of things are you taking squared distances between? —David Eppstein (talk) 06:24, 9 November 2020 (UTC)
- Sorry, that might be too advanced for an elementary article, and making it precise makes it rather lengthy.
- E.g. if you fix a point, the squared Euclidean distance from that point is a function of the varying point alone, and its gradient points from the fixed point to the varying point (sketched below), but that's a bit confusing.
- Similarly, the partition of sums of squares corresponds to orthogonal spaces: the points you're taking distances between are the diagonal point (the overall mean in every coordinate), the model predictions, and the observations, and the fact that these sums of squares add corresponds to these vectors being orthogonal (also sketched below).
- I'm not sure any of these connections to statistics and convex optimization can be stated concisely; SED is an elementary but subtle concept, and in an elementary article just linking to future directions seems the best we can do.
- I've removed the reference to gradient as too confusing as well.
- —Nils von Barth (nbarth) (talk) 03:57, 10 November 2020 (UTC)
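- A compressed version of the two points above (a standard identity, stated here under the assumption of a least-squares fit with an intercept, with observations $y$, fitted values $\hat y$, and overall mean $\bar y$): the decomposition $y - \bar y\mathbf{1} = (\hat y - \bar y\mathbf{1}) + (y - \hat y)$ gives $\|y - \bar y\mathbf{1}\|^2 = \|\hat y - \bar y\mathbf{1}\|^2 + \|y - \hat y\|^2$ exactly because the two vectors on the right are orthogonal, which is the total = explained + residual partition.
- For the gradient remark, with a fixed point $p$, $\nabla_x\bigl(\tfrac12\|x-p\|^2\bigr) = x - p$, the displacement from the fixed point to the varying point.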