
Talk:Go ranks and ratings/Archive 1

Archive 1

Untitled

Use the + tab at top to add new comments at the end. Sign comments with four tildes: ~~~~.

Separate page created

I've created this separate page, on a topic of basic interest, since it was crowding go (game). The existing go rank page has been merged here. Rather clearly, there is a long way ahead of copy editing to bring this all up to standard. Charles Matthews 09:49, 7 Sep 2004 (UTC)

Tommie 11:09, 9 Sep 2004 (UTC): I would like to widen the rank table at the bottom, to show 9p at top left and 30k at bottom right. How should I amend the cell spacings? I do not want to mess it up.

Maybe the table isn't a great format, in fact, if we want to annotate. I just moved it in from the old go rank page. Some sort of graphic, plus a bulleted list, might be better? Charles Matthews 11:54, 9 Sep 2004 (UTC)

Tommie 18:30, 9 Sep 2004 (UTC): Actually I already put a figure at Go & Ranks, but I could not get it displayed, sized smaller, and moved to the right:

In Germany and the Netherlands a "classes" system (German: "Klassen") was established. It comprises a further subdivision of the kyu/dan scale into half-grades, with classes 18 and 17 both corresponding to amateur 1 dan, 17 being the stronger half. It is still in use for club ladders etc., where you are promoted or demoted after a won or lost game. [Image:http://www.gobond.nl/images/dan-kyu2.gif|right|Kyu-Dan classes]

I have made extensive edits on the page today. It needs further work because some of the material is duplicated two or three times. The tables need work. The list of ranks at the end is useless (of course 5k is between 4k and 6k). Hu 22:19, 2004 Dec 24 (UTC). I think the better exposition is early in the article and later sections that have duplicate material should be merged forward and the duplication eliminated. Hu 22:41, 2004 Dec 24 (UTC)

Greater than 9 Dan? [Done]

Does anyone know any reason why there isn't any higher ranking than 9-dan? 70.111.251.203 15:01, 7 February 2006 (UTC)

My understanding is that ranks higher than 9-dan were traditionally reserved for generals and members of the imperial family. Stuartyeates 15:18, 7 February 2006 (UTC)

Pro 10-dan is not reserved for generals and members of the imperial family. Pro 10-dan is a special title which is given to only one person in Japan: the champion of the 10-dan match. It is similar to the way 8d is a special rank in the amateur Go world. For example, Kato Masao once held the pro 10-dan title four straight years (five titles in total). --Wai Wai 14:03, 10 July 2006 (UTC)

I am a go player, and having looked at this site I feel like saying that the suggested correlation of Elo ratings to go ratings is not very accurate at all. If the ratings are reasonable, a difference of 4 ranks in go (assuming the player rankings are accurate) would correspond to a very low chance of the stronger player losing. By the time there is a difference of 6 stones in rating, I would call it a zero probability of losing. Also, a 1-dan pro would probably beat a 6d amateur with a much higher probability than just 80%.

A ranking system is not a system in which we should place absolute faith. That is true even for the strictest ones. Although it is not common, a lower pro dan player can still beat a higher pro dan player. At the extreme, it is possible for a 1p to beat a 9p; for example, Abe Yumiko 1-dan has beaten a 9-dan. --Wai Wai 14:03, 10 July 2006 (UTC)

Should we reverse order of table?

The description in the text goes from kyu to pro dan, but the table goes from pro dan to kyu. This seems a bit inconsistent. What do others think? Can anyone fix the table? --Wai Wai (talk) 03:33, 1 August 2006 (UTC)

Statistical validity of "Winning Percentages" table

Like the commenter above ("...the suggested correlation of elo ratings to go rating is not very accurate...") I'm concerned about the "Winning Percentages" table in the main article, on two grounds:

1) Math

The kyu-dan ranking system in Go is premised on handicap stones; almost every amateur game is handicapped, so a 4 dan gives 3 stones to a 1 dan. Thus, I am concerned that the author does not have game statistics to justify the "Winning Percentages" table, but inferred the result by analogy to chess. That is to say, I doubt that a difference of one standard deviation in the distribution of players ordered by strength from handicap games, gives the same winning percentage in even games that one SD in chess does, and there is no mathematical reason it should.

Since the creation of internet Go servers, this argument holds much less weight. There are at least 10 servers, on which hundreds of even amateur games are played daily, and the results are typically meticulously recorded, since recording game stats is something computers do very well. Translating these results into results recognised by national bodies is another thing, but it should at least be possible to analyse this multitude of results. Stuartyeates 13:57, 13 September 2006 (UTC)
2) Experience

Some of us suspect (merely from personal experience) that winning percentages are higher, in Go, for fixed skill discrepancies; i.e. a two stone handicap might give reliable 50% results for two players, but the stronger one (taking white, giving the two stones) would virtually always win an even game, with a higher percentage than inferred from the table for an (approximately) 0.7 SD Elo rating difference.

Anecdotal example: I myself have never beaten someone more than two stones above me in an even tournament game (or even in serious club games), but I have beaten people more than 0.7 SD above me in tournament chess (over 1 SD), though I have played a great many more tournament chess games. Ironically, I'm about the same strength in both: 2050 USCF and 1 kyu AGA.

N.B. the "Greater than 9 dan?" subtopic is not the best place for this discussion, but I wanted to follow the comment cited, and this is my first attempted contribution to the Wiki. --Peter a.k.a. yoof

P.S.: I was about to link http://en.wikipedia.org/wiki/Arpad_Elo but have just read it for the first time. The statement "Elo was giving US players lower ratings than they deserved" is imo controversial, but not disputed by any chess organization to my knowledge. After the USCF modified the rating system, following Elo's retirement, around 1980, USCF ratings became consistently higher by about 100 points than FIDE ratings for the same players. The first master I beat (about 1983; I had been inactive from '77 to '83 for university, and my own rating jumped 100 points very abruptly when I returned) was 2200 FIDE (international) but 2300 USCF (American), and I began to track the differences (among the minority of players with both ratings) on the tournament crosstables. Some of us concluded that the USCF had introduced inflation for the apparent purpose of encouraging new players (the Fischer boom had ebbed). I note that the chessmaster and professional statistician David Burris disagreed with me on this point (which is why I gave up trying to debate it). Peter H. St.John, M.S. 19:51, 4 August 2006 (UTC)

A Query

The depth assertions are useless without references to other games that use an Elo system. Scrabble, Chess, Backgammon, Shogi, Amazons, TwixT - can we include comparisons with any of these?

It should also be noted, regarding the table borrowed from Ales Cieply's website, that the grades are often awarded by different sources. Since claimed grades are often not equivalent to actual strength, the statistical validity of that entire table has to be questioned. -Zinc Belief

Grammar

  1. I inferred that the clause "K is 30 and 20 for players below 2400 ELO" should read, "K is between 30 and 20 for players below 2400 ELO". Please correct this if I have made a mistake.
  2. Both the terms "go" and "Go" are used extensively throughout this page (presumably by different authors). Is there a consensus on which is correct for Wikipedia, and if not can we reach one? Note that the same question has been asked on the page Talk:Go (board game), and any comments you may have should really be made there (and the consensus applied to this and other go/Go pages if it is reached).

Thanks, Stelio 21:24, 14 September 2006 (UTC)
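As a side note on point 1: the K mentioned there is the step size in the standard Elo update. A minimal sketch in Python; the rating thresholds used here are illustrative values consistent with the quoted clause, not an official FIDE schedule.

```python
# Sketch of a standard Elo update showing where the K-factor enters.
# The thresholds below are illustrative (consistent with "K is between
# 30 and 20 for players below 2400"), not an official schedule.

def expected_score(rating_a: float, rating_b: float) -> float:
    """Expected score of A against B on the logistic Elo curve."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

def k_factor(rating: float) -> float:
    """Larger K means ratings move faster; weaker players get larger K."""
    if rating < 2100:
        return 30.0
    if rating < 2400:
        return 20.0
    return 10.0

def update(rating_a: float, rating_b: float, score_a: float) -> float:
    """A's new rating after scoring score_a (1 = win, 0.5 = draw, 0 = loss)."""
    return rating_a + k_factor(rating_a) * (score_a - expected_score(rating_a, rating_b))

# Beating an equally rated opponent gains K * (1 - 0.5) points:
print(update(2000, 2000, 1.0))  # 2015.0
```

So a sub-2400 player's rating moves in larger steps per game than a top professional's, which is the point of the clause being corrected.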

Go is the current choice to distinguish the game from the verb.--ZincBelief 16:19, 20 September 2006 (UTC)

Rate of improvement

I've seen similar numbers elsewhere regarding the rate of advancement (especially at the low kyu levels), and I must say that I'm highly dubious. At least for adults, these numbers seem to be inflated to the point of discouraging people from continuing to learn the game (although maybe it's more attractive to first-time players, I can't say), once they begin to imagine that they must not be cut out for playing it if they're advancing so slowly.

Are these numbers for people spending 8 hours a day with an instructor? Jdmarshall 03:23, 10 December 2006 (UTC)

I don't know about Go, but I can comment from my experience with Arimaa. The Arimaa server has a range of computer opponents rated from about 1100 to 1900. I have seen complete newcomers enter the system and master the top computer opponent within weeks, i.e. in less than 100 games. By the same token I have seen people play 1000 games over the course of months without mastering the top computers. I am not sure whether to attribute the difference to innate talent or method of study, but in any case it is misleading to say something like, "Newcomers can beat all computer opponents in under a month of study," because it isn't true of all newcomers.
Someone with more Go experience than myself should make the edit, but I believe the minimization of the difference between lower ranks is overblown. I suspect it is true that, "A jump from 30k to 10k could happen within weeks or even days for quick learners," but it is equally true that it could take some newcomers a year or more of play to get to single-digit kyu. It is simply rude for "quick learners" who breezed through introductory material to dismiss low-rank distinctions as insignificant. --Fritzlein 19:32, 10 December 2006 (UTC)

Game Depth

The section "Game Depth" should be deleted; it is (mistaken) original research, and it does not cite its sources; but the interest level is high, and hopefully we can replace it with something statistically professional, if not insightful. I'll quote the whole thing here and critique it line by line.

I agree it is OR, but your critique below contains several errors. HermanHiddema 16:32, 20 September 2007 (UTC)

The ELO rating depth also states something about the depth of the game. In the abstract, the total depth of a game is defined by the number of ranks between a random player and the theoretical best play by an infallible creature.

Actually, no. Elo ratings say nothing whatever about the "depth of a game". They are a percentage measurement based on the Gaussian distribution; about 68% of tiddlywinks players would have an Elo rating between 1000 and 1400, same as in chess and the same in Go (if the mean is taken as 1200 and the standard deviation as 200, which is what Elo uses in chess), and the same in any other competition of skill.

This is false. Elo ratings predict the chances of winning between two players; they say nothing about the distribution of players. HermanHiddema 16:32, 20 September 2007 (UTC)

In practice, the depth of the game is the number of ranks between a beginner and the best player in the world. [Whose definition is this?] The number of ranks in this latter, practical definition is increased by the age and popularity of the game, as a richer literature and greater playing pool both tend to move apart the end points of the scale. Even so, the practical definition allows a rough comparison of Go to chess, as both have an extensive literature and a huge playing population.

A rank in this definition does not necessarily correspond to a traditional Go rank. Using the EGF scale of standard deviation there are about 25 standard deviations between 20 kyu and 9 dan.

If this were true (that go ranks are normally distributed, which is true, and that there are 25 standard deviations from 20k to 9d, which is not accurate) then the mean (average) would be about 6k, and almost sixty-eight percent of all players would be between 7k and 5k. (68% of everything in a normal distribution lies within one standard deviation of its mean, and 99.7% lies within 3 SDs, so 99.7% of us would be ranked between 9k and 3k. The standard deviation for go is closer to 3 stones than to 1.)
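For what it's worth, the one- and three-SD percentages invoked in this critique can be checked with Python's standard library; this verifies only the arithmetic of the normal distribution, not the (disputed) premise that ranks are distributed this way.

```python
# Checking the normal-distribution percentages with the standard library.
# This verifies only the arithmetic, not the premise that go ranks are
# normally distributed (which is disputed in the replies below).
from statistics import NormalDist

z = NormalDist()  # standard normal: mean 0, standard deviation 1
within_1sd = z.cdf(1) - z.cdf(-1)
within_3sd = z.cdf(3) - z.cdf(-3)

print(round(within_1sd, 4))  # 0.6827 -> "almost sixty-eight percent"
print(round(within_3sd, 4))  # 0.9973 -> within 3 SDs is 99.7%
```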

This is false. Again, Elo ratings say nothing about the distribution of players. HermanHiddema 16:32, 20 September 2007 (UTC)

Chess ratings run a range from about 1000 to 2800... No, they don't. The minimum rating is about 200; the standard deviation is about 200 and the mean is about 1500, so the range goes about equally with half the players from 1500 up almost 7 SDs to 2800, and half down almost 7 SDs to about 200. (However, much more attention is paid to the ratings of professionals close to 2800 than to beginners close to 200.) You can get statistics on chess ratings from the USCF's website at uschess.org.

Theoretically, there is no minimum rating in chess, but for practical purposes, organisations may define a cutoff point below which they don't measure. In Go, the EGF does not measure ranks below Elo 100, even though there are plenty (children are often 30-40 kyu, rating -1000 to -2000). Chess organisations can do something similar, which may differ from organisation to organisation. HermanHiddema 16:32, 20 September 2007 (UTC)

... but they are differently scaled. Converting the FIDE formula...to be on the same scale as the EGF formula yields...and (2800-1000)/174 = 10.4. Therefore, when converted to be on a similar standard deviation, we can say that chess has a depth of about 10.4 compared to a depth of about 25 for Go.

This is comparing the precision of one measurement (incorrectly) to the precision of another; it's like saying, my one pound apple is better than your 0.493001 kilogram orange. Both games have averages, so does tiddlywinks if you record results of tournaments; and someone will be 99th percentile, in any game with more than 100 participants. Also, the number 1000 in his formula means nothing at all. I think he's mistakenly using the mean from an old AGA rating alternative.

Given the same Elo formula, there is nothing inherently wrong with comparing two tables of ratings. This is more like saying 'My 2.5 kilograms of oranges weigh more than your 1 kilogram of apples'. There is nothing wrong with such a statement, just as you might equally validly then claim 'Yes, but my 1 kg of apples costs $2, while your 2.5 kg of oranges costs $1, so mine are worth more'. HermanHiddema 16:32, 20 September 2007 (UTC)

It is a bit artificial to cut off chess ratings at 1000, but this is no more artificial than cutting off Go ranks at 20kyu.... There is no such cutoff. For a published example, the rating formula described at http://beta.uschess.org/frontend/section_171.php uses 750 as a starting point for rating beginners who do not have prior ratings.

This cutoff exists in Go. The cutoff in chess was apparently introduced by the author to be at a similar level in chess (i.e., just past the beginner level). HermanHiddema 16:32, 20 September 2007 (UTC)

...The USCF measures chess ratings down to zero... I thought he said they were cut off at 1000?

The author does not state that this is an official cutoff, so it seems to be one he introduced himself. As you quote above, he acknowledges that the USCF does not cut off ratings. HermanHiddema 16:32, 20 September 2007 (UTC)

...usually among kindergarten players, so one could argue that there are as many as 16 ranks of depth in chess. However, the AGA also has no cutoff, and measures ratings below 40 kyu as well as above 9 dan, so if chess can be said to have 16 ranks, then Go can be said to have over 50 ranks.

Well, go ranks are a 2-digit number and chess ratings are 4 digits, so Go has only 40+9 = 49 divisions while chess has 2800-200 = 2600 divisions; so what, is he saying chess must be 50 times deeper than go?

This statement makes no sense. The author has given a definition of a rank in terms of the Elo formula (as used by the EGF) and has done some math to match the chess Elo ratings to go Elo ratings. The basic definition of a rank is given as something like "a stronger player will score 68% in games against a player one rank weaker". HermanHiddema 16:32, 20 September 2007 (UTC)

There's a lot to be said comparing go and chess, and for some reason we all want to. Here are some observations of my own.

  • The complexity of a game can be measured with game, ergodic, complexity, and coding theories. One could bound the computability of determining a winning strategy or of the existence of a winning strategy, calculate the entropy of a board position, etc.
  • I would estimate that the statistical reliability of the outcome of a game of go would be comparable to the reliability of the outcome of a two to four game match of chess. I don't know of anyone measuring this; it might be a good project for statistics students in college. There is something to be said for the aphorism chess is a battle, go is a war; not that war is more complex than battle, but there are differences in pace and scale and statistical reliability (the outcome of a large-scale war is more deterministic than the outcome of a small-scale battle).
  • Both chess and Go are Too Hard. Go may be in some sense more complex than chess, but both games defy human comprehension, and nobody would say that Garry Kasparov must be dumber than Honinbo because Honinbo plays a harder game.
  • It is often said that go must be more complex than chess because computers have mastered chess but not go. There is indeed something to that (programming go is very hard), but not as much as you might think. Research in artificial intelligence began with chess; von Neumann himself built a chess computer (with a small board and only a few pieces) in the process of more or less inventing computer science as the first electronic computers were built. The world chess champion Emanuel Lasker was a mathematician who tried to invent game theory before von Neumann, and another world champion, Mikhail Botvinnik, was an electrical engineer who built a chess computer shortly afterwards. Chess has been integral to the study of artificial intelligence from before the beginning. Contrast this with the situation in Asia: China and Japan were both very busy socioeconomically after World War II. While the West could invent chess computers, the East was rebuilding its commerce and industry. Now, sure, NEC has as much money to throw at Go as IBM has to throw at chess, but it's decades later. Meanwhile, I could give about 15 stones (an absurd handicap, rarely even used to teach children) to Many Faces of Go in the early 90s; now, I can give maybe 6 stones to the best "robots" at KGS. That's an improvement of about 4 standard deviations, similar to chess computers going from about 1300 before Ken Thompson to about 2100 with his computer Belle in the 80s. Go is just behind chronologically. We can enjoy our anthropocentric superiority in go for another decade maybe, the way chess players did in the 70s.
  • NB: I may have found a source for some of this confusion in the discussion of Go Rankings. The Standard Deviation I'm talking about is "sigma" of the Normal distribution of ranked players; that is, 16% of all players are higher ranked than one standard deviation above average, by definition. But for example this PDF from the AGA uses "sigma" to refer to the variability in the probability of one player beating another, based on rating; which is a different thing.

Pete St.John 18:24, 23 March 2007 (UTC)

While I agree that the section on game depth could be deleted as original research, or much improved if it stays, I must say that most of the criticism above is centered on a misunderstanding, which is alluded to only in the final comment, and even there still misunderstood. Elo ratings say nothing whatsoever about the distribution of skill within a playing population. Also they say nothing about the "variability in the probability of one player beating another". The standard deviation that Elo refers to in his book defines how much a single player's performance varies from one game to the next. From this standard deviation and a given rating difference, we can infer a probability of winning, but no variability of the probability is specified, i.e. the inferred probability is assumed to be accurate.
The criticism states that, no matter how we scale our rating system by choosing the average rating and the standard deviation, 68% of the playing population will have a rating within one standard deviation of the mean, by definition. No Elo system, whether for chess, Go, Scrabble, table tennis, or what have you, uses standard deviation in this sense. What a given difference in Elo ratings always measures is a percentage chance of winning. Thus a 68% chance of winning (in the FIDE rating scale) corresponds to a rating difference of 131 points at all levels, whether it is 1700 vs. 1831, or 2700 vs. 2831.
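A quick numeric check of the 131-point figure above, assuming the logistic expected-score curve that FIDE-style implementations use (Elo's book used the normal curve, which gives nearly identical values in this range):

```python
# Numeric check: on the logistic Elo curve, a fixed rating gap yields a
# fixed winning chance at every level; only the gap matters, not the
# absolute ratings.

def expected_score(gap: float) -> float:
    """Winning chance of the stronger player, given the rating gap."""
    return 1.0 / (1.0 + 10 ** (-gap / 400.0))

print(round(expected_score(131), 3))          # ~0.68 for a 131-point gap
print(round(expected_score(2831 - 2700), 3))  # identical: the gap is all that matters
```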
This is why, once we divide by the standard deviation of each individual's performances, the spread of ratings says something about the depth of a game. If you can beat me 68% of the time, and someone else can beat you 68% of the time, and she in turn can be beaten 68% of the time by someone still better, it is reasonable to ask how far such a chain could extend. How many such "levels" are there between a beginner and the world champion? The answer is, roughly, three or four times as many levels for Go as for chess.
I can see how the section as written is very confusing. Perhaps the term "standard deviation" should be entirely excised. The fundamental concept in this method of measuring game depth is not ratings at all, but rather a chain of dominance. Instead of defining a "standard deviation", one could define a "level" as a certain probability of winning. For example, we could say that if A beats B 75% of the time, then A is by definition one level higher than B. Then all of the various Elo scales can be converted into some number of these intuitively understandable levels. Would that be clearer?
--Fritzlein 03:02, 5 June 2007 (UTC)
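The "level" idea sketched in the comment above can be made concrete in a few lines of Python; the 75% threshold and the 1000-2800 chess span are the illustrative figures from this discussion, not measured values.

```python
# Sketch of the "chain of dominance" depth measure proposed above:
# define one level as a 75% winning chance, convert that to a
# logistic-Elo rating gap, and count levels across a rating span.
# The 1000-2800 chess span is the illustrative figure from this thread.
import math

def gap_per_level(p_win: float) -> float:
    """Rating gap at which the stronger player scores p_win."""
    return 400.0 * math.log10(p_win / (1.0 - p_win))

def depth_in_levels(span: float, p_win: float = 0.75) -> float:
    """Number of p_win-levels that fit in the given rating span."""
    return span / gap_per_level(p_win)

print(round(gap_per_level(0.75), 1))           # ~190.8 rating points per level
print(round(depth_in_levels(2800 - 1000), 1))  # ~9.4 levels across the chess span
```

Changing the threshold (say to 68%) rescales the gap per level but leaves the ratio between two games' depths unchanged, which is why the choice of threshold is a matter of convenience.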
The statistics term "standard deviation" (of a normal Gaussian distribution) is unambiguous, referring to a measured population. It is not, however, what you mean. My theory is that the use of the Greek letter "sigma" for the parameter in the USCF article is a source of the confusion; traditionally that letter is also used for the standard deviation. I have a copy of Elo's book, which I have not read since the 70s. If I can find it I will reread it and post a clarification of the rating formula in unambiguous language, so at least we can talk about the same thing. Meanwhile, I agree that Go seems to have a distinctly greater "game depth" in this specific sense of an ordered chain of winning probabilities. Chess is closer to checkers than Go is, in the sense of the ability of a human to learn to play without errors; and because in go you can win by a half-point, or a few points, the outcome is more sensitive to small mistakes, while in chess you can often make several very small mistakes and still draw. Pete St.John 15:43, 5 June 2007 (UTC)
Standard deviation is indeed unambiguous if the distribution to which it applies is unambiguous. However, if it is not clear whether we are talking about the distribution of measured skill among players, or about the distribution of performances of a single player, then it isn't clear which standard deviation is meant. Meanwhile, I agree that the drawishness of chess can conceal small differences in skill which Go reveals. I also suspect, however, that subtle improvements in Go understanding have a greater impact on winning percentage than subtle increases in chess understanding, because chess is a more crudely material-oriented game. There are not so many gradations of vivacity between a dead chess piece and a live one as there are between dead stones and live stones, nor are there as many unclear tradeoffs between material and non-material values in chess. In other words, my intuition is that Go is a deeper game, not just in the chain of dominance sense, but also in the sense of having more ways to tangibly boost performance. That's just a hunch, of course, since I am no good at either game. --Fritzlein 18:31, 5 June 2007 (UTC)
As I reconsider the application of Elo ratings to Go, I am increasingly convinced that the scaling of the Go ratings is invalidated by the predominance of handicap games. Every Elo rating system for Go has some conversion from stones of handicap into rating points (and thus into winning percentage), yet everyone says that a stone of handicap has greater and greater influence on winning percentage as the skill of the players improves. Thus it seems probable that including handicap games in Elo ratings for Go is drastically warping the scale, tightly compressing the dan ratings while spreading out the kyu ratings. This makes the scale impossible to compare to Elo ratings for chess. If we had Elo ratings for Go which were based purely on even games, then we would be able to make valid comparisons to chess ratings, but since we don't have such Go ratings, all bets are off. Handicap games spoil everything. If this section on game depth stays, then I'd like to add a careful explanation of why the results are probably bogus. --Fritzlein 18:48, 5 June 2007 (UTC)
An Elo rating could be calculated only on even games; then the Elos of players who also have AGA ranks (say) could be used to compare the chess and go player pools statistically. But yes, I agree, the handicap system is very powerful but complicates the job of rating players. However, there are lots of mathematicians and statisticians who play go. Maybe somebody can get a paper out of this :-) Pete St.John 15:57, 6 June 2007 (UTC)

I pity the fool who reads this article

There is no universally applied system. The means of awarding each of those ranks and the corresponding levels of strength vary from country to country and among online go servers.

Okay. But are they really so divergent that so little can be said about them in general? Are there no major systems that Wikipedia could describe? At least for the higher levels? I mean, there are regional divergences in the Go rules as well. Wikipedia still did a half-decent job of explaining them.

Ranks between about 10k and 30k have very limited usefulness and meaning since there are few discernible differences in each level. It is not surprising to see a 19k defeat a 15k player, so these ranks are of little significance. They are mainly used in teaching, to mark learning progression.
The requirement of a "rank up" at this stage is very loose.

Sure, but then, they are stricter later? In what way?

The progression from DDK to SDK is a significant turning-point for learners, as in one saying "you are not a Go player if you cannot attain a SDK".

Okay, and why is it significant? What is a bloody 9k anyway?

This page hints at the existence of rank systems, has a really unhelpful rank table down the right side and lots of bizarre statistical tables, and runs a severe risk of exploding the heads of our dear readers. Regards, --anon 192.75.48.150 16:15, 28 March 2007 (UTC)

  • I pity the fools trying to write this article :-) Go is like polo, which gives handicaps in goals and ranks players by those goals; if a professional is a "3 goal" it means his team must give 3 goals to the opposing side in a handicap match. Adding up the goals of all the players determines the net effect. In Go, we give moves; a "three stone handicap" means essentially that the weaker player gets to make 3 moves in a row at the start of the game (usually these moves are placed at standard grid points). This is used to rank players, so a 5 dan gives a 2 stone handicap to a 3 dan. In Japan, the tradition is to start ranks at 9 kyu (in karate this is "white belt"), with the numbers going down as experience increases, up to 1 kyu ("brown belt"). Then comes the first step of the "experienced" practitioner, 1 dan ("first degree black belt"), counting up to 9 dan ("9th degree black belt", usually an elderly and much honored teacher in karate). So a 3 dan in Go would give 4 stones to a 2 kyu, who in turn would give 5 stones to a 7 kyu (the arithmetic across the 1k/1d boundary can be confusing because there is no "zero" kyu). In karate, ranks are not given to people below 9k, as it is assumed they can be brought up to that level before taking any rank tests; this leads to the idea that go players may as well get up to 9k before playing ranked games, but it's not so easy. Matching this traditional, and in Go very practical, ranking system to modern rating systems (particularly Elo) is also not easy. Pete St.John 18:28, 28 March 2007 (UTC)
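The kyu/dan arithmetic in the comment above trips people up because there is no zero kyu; a tiny sketch that linearizes the ranks makes the handicap a simple difference. This is an illustrative convention only, ignoring half-ranks, professional ranks, and rating systems.

```python
# Linearizing amateur kyu/dan ranks so that handicap = rank difference.
# Illustrative convention: 1k maps to 0 and 1d to 1, since there is no
# "zero kyu"; half-ranks, pro ranks, and ratings are ignored.

def rank_value(rank: str) -> int:
    """'7k' -> -6, '1k' -> 0, '1d' -> 1, '5d' -> 5."""
    n, kind = int(rank[:-1]), rank[-1]
    return n if kind == "d" else 1 - n

def handicap(stronger: str, weaker: str) -> int:
    """Stones given, one per rank of difference."""
    return rank_value(stronger) - rank_value(weaker)

print(handicap("5d", "3d"))  # 2, as in "a 5 dan gives 2 stones to a 3 dan"
print(handicap("3d", "2k"))  # 4
print(handicap("2k", "7k"))  # 5
```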
Thanks for that insight about Japan. I had always been taught that rankings start with 9 kyu being assigned to all first beginners and go up from there (8k etc.), and was confused by people in the USA talking about 15 kyu etc. Is it really meaningful for a 28 kyu player to get a 2 stone handicap from a 26 kyu player? 199.125.109.41 22:12, 28 May 2007 (UTC)
The reliability of the handicap goes up with higher rank; for example, there is a fellow who always beats me at 2 stones, but I always win at 3; he's 2d on KGS, I'm 2k. But I wouldn't expect a 26k who always beats a 28k at 2 stones to always lose with a change of just one stone; also, at that level, someone might improve several stones over a weekend by reading a book, while it may take years for an adult 2d to become 3d, if ever. So they are moving targets. Nevertheless, on average, the 2 stone handicap would be right, and would tend to even the chances, even at that level. Pete St.John 14:00, 29 May 2007 (UTC)

What can be done with this article

I am reading over this article and wondering where on earth one starts in order to improve it. Is anyone else of the opinion that it should just be scrapped and started again from scratch? --ZincBelief 09:42, 21 August 2007 (UTC)

I've done a major rewrite of the first part of the article, and I think it is much clearer than the earlier version. I will continue doing this with the rest of the article. In the process, most of the current article may well be completely rewritten or removed. HermanHiddema 14:50, 24 September 2007 (UTC)

Compensation points

This section had two recent changes: one overcomplicated the statement that the extra half point in komi prevents draws (I reverted that); the other was a musing that the score in a game is always a multiple of two points (because "one more point for White is one less point for Black") and that therefore the difference between 6.5 and 7.5 komi has little practical significance. That isn't true at all; just picture the case where in the endgame you have to throw in, to make eyeshape, before all the dame are filled; or maybe you don't. It's bad logic, and since we don't want original research, we can remove it without proving the logic bad. Pete St.John 20:46, 21 August 2007 (UTC)

It is true for area scoring (Chinese rules), that there is little practical difference between 5.5 and 6.5 points komi, which is why China switched from 5.5 to 7.5 immediately. The same is not the case under Japanese rules, which is why Japan switched from 5.5 to 6.5 komi. Such a thing might be mentioned, but one must then take care to specifically mention that this is only for Chinese rules. HermanHiddema 14:54, 24 September 2007 (UTC)

Attribution to "2th century"

I just deleted this line from the History section: "The Go ranks starts in 2th century China,when Handan Chun(chinese:邯郸) rank 9 Pin Zhi (九品制) in his book Classic of Arts (艺经)." First, it's "2nd century", not "2th". More importantly, there are no books explicitly mentioning Go (Wei Chi) that early; if you find one, by all means provide the ISBN, or upload a photograph of the cover, or something. The first chess book is from the fifteenth century. People just didn't write books like that in the 2nd century, on any continent. It may well be that the book mentioned exists, and describes ranks used at court; politics had certainly been invented that early. Please don't confuse undocumented traditions with history. We all know the Chinese invented Go; it isn't necessary to exaggerate. Pete St.John 16:30, 18 September 2007 (UTC)

Your statements are dogmatic (chess is entirely irrelevant). http://babelstone.blogspot.com/2006/03/origins-of-go.html mentions this exact book. Further if you go to the page Classic of Arts and follow the external link, you will find an article by John Fairbairn that shows how much literature there was a couple of thousand years ago that references go. Charles Matthews 18:56, 18 September 2007 (UTC)
I mentioned the Fifteenth Century chess book because it actually exists; but I was off on the date, it was 1512 (see Damiano which has links to books that have photographs of it). Such books can only be found in museums but they exist; the University of Maryland has a modern reprinting of it, see |List of chess books at UMBC. Please show me a Go book from the 2nd Century AD, I'd be delighted. You won't find one. You'll find traditions about games being invented by Emperors. There are such traditions about chess, too. But no books from the 2nd Century. Pete St.John 20:11, 18 September 2007 (UTC)
I'll give you one thing, though, Charles; the first link you give (to Babelstone) says:
The earliest unambiguous references to the game of Go are not found until the Eastern Han dynasty, and the first complete description of the game is in the work Yi Jing 藝經 "Classic of Arts" by Handan Chun 邯鄲淳, who lived during the third century (Three Kingdoms period)
which is a century off (he is less specific about earlier works) but does cite a specific reference, which could be followed. I'd be interested in seeing this 3rd century item (in translation). Pete St.John 20:16, 18 September 2007 (UTC)
Please read the excellent work by John Fairbairn at [1]. If you want to see an actual book, you might try looking at the picture on the Zuo Zhuan page, a 4th century BC book mentioning go. HermanHiddema 14:58, 20 September 2007 (UTC)
it would seem that Go books predate Chess books and I don't want to quibble about 4th century vs 2nd century. However, it would be great if someone could provide a quotation (in English) from one of these books, as "...earliest book to mention Go..." can mean very little. Consider Genji, about 1000 A.D., which has this passage:
She brimmed with good spirits as she placed a stone upon a dead spot to signal the end of the game.... "Just a minute, if you please," said the other very calmly. "It is not quite over. You will see that we have a ko to get out of the way first."
If you think about reading Shakespeare, from the sixteenth century, you can imagine the difficulties translating Japanese from 1000 A.D. The translation of the word "ko" may have presumed the game mentioned was Go; but most people interpret this passage as "filling a dame to signal the end of a game" and indeed "working out a ko", and we think this book really is about Go. But you need context. Merely mentioning a boardgame played with stones may not be Go at all (for example, it could be Gomoku, played on identical equipment, or some game that was a precursor to modern go). Pete St.John 18:06, 20 September 2007 (UTC)

Yes, Go books predate chess books, though there is some evidence that XiangQi (Chinese Chess) predates the Indian Chaturanga (normally thought to be the origin of chess), and that it is mentioned as early as the second or first century BC. eg: Shuo yüan, presented to the throne in 17 BC by Liu Xiang 劉向 (79-8 BC), contains the following passage: (for more info, see [2]) HermanHiddema 09:04, 21 September 2007 (UTC)

…ér chân yú (.) yàn zé dòu Xiàngqì ér wû Zhèng nû (.) 而諂諛(。)燕則鬥象棋而舞鄭女(。) …and flatter (.) If you have leisure, then fight at Xiangqi or dance with the women from Zheng(.)…

Please note that Zuo Zhuan is 4th century BC, whereas Yi Jing is 2nd (or 3rd) century AD. I will provide a quotation below, for a more thorough explanation, read John Fairbairns work. HermanHiddema 09:04, 21 September 2007 (UTC)

The "Zuo Zhuan" text (Duke Xiang, Year 25) reads:

"Duke Xian of Wei ... ... went to speak with Ning Xi. Ning Xi promised [to collaborate with him]. When the Grand Uncle Wen Zi heard about this, he said: "Alas! ... Ning is now dealing with his ruler with less care than if he were playing go. How can he escape disaster? If a go player establishes his groups without making them safe, he will not defeat his opponent. How much worse if he establishes a ruler without making him safe."

The Chinese word used for go here is yi, not the modern weiqi. How do we know it refers to go, then? "Fang Yan" [Dialects] by the Han scholar Yang Xiong (53 BC - 18 AD) says, "Yi refers to weiqi. East of the Hangu Pass in the states of Qi and Lu everyone says yi." HermanHiddema 09:04, 21 September 2007 (UTC)

Excellent, Herman. Since this all started with the ungrammatical line about "2th century", perhaps you would care to compose a paragraph about the origin of Go in China, for the article? Thanks! Pete St.John 16:16, 21 September 2007 (UTC)
The origin of Go in China is described both on the main Go (board game) page and on the History of Go page; this page is not the right place for such a thing. Mentioning Yi Jing in relation to ranks and ratings is appropriate here and could be elaborated upon. This page, however, needs a major rewrite anyway. I will do so if I can find the time; until then, I will simply reinstate the mention of Yi Jing. HermanHiddema 10:03, 24 September 2007 (UTC)

Article mostly rewritten

I have mostly rewritten the article. I think the current version is much clearer to the average reader and have removed the cleanup/confusing template. If anyone disagrees, please let me know. HermanHiddema 17:51, 24 September 2007 (UTC)

Heroic, Herman, thank you. I'm sure I'll find something to kvetch about on a closer read, but that was a big effort and it shows. Pete St.John 18:33, 24 September 2007 (UTC)

Number of moves played

The article says that an average game of go lasts 120 moves ... but I think it's more like 240, or do you mean 120 moves per player (then it should be written there, I guess). —Preceding unsigned comment added by 87.234.92.56 (talk) 18:02, 27 November 2007 (UTC)

Yeah that would be a mistake. Chessplayers count a move as two plays (by White and Black), which is an idiosyncratic distinction; but Go players count moves as plays. I don't know the average but it would be more like 240 than 120. Typical sets accommodate about 360 moves, and it's rare that players have to trade prisoners to make moves beyond that. Pete St.John (talk) 18:50, 28 November 2007 (UTC)

American spelling

An anon contributor just replaced "organisation" with "organization". I'm letting it stand, but it prods me into reminding folks that this is the English Language Wikipedia, not the American wiki. British spellings like "colour" for "color" and terms like "lift" for "elevator" are OK. Spelling and grammar and vocabulary are not laws handed down by the Crown or legislated by the People (except maybe in France); if you communicate effectively then you are doing well. Pete St.John (talk) 18:37, 12 December 2007 (UTC)

Graph or ratings statistics, comparing Go and Chess

The graph comparing Go and Chess ratings by percentile rank was deleted, and I've just reverted. The comment in the deletion included:

1. All beginners should start at the bottom. Maybe, but that is not how the rating systems actually work (my first chess rating was about average for tournament players; I had played the game for years before my first tournament).

2. Graph is meaningless. Not to everyone. Some of us are interested in the statistical distribution of players, and apparently it's not Gaussian for either group. Pete St.John (talk) 20:54, 24 January 2008 (UTC)

3. Chess doesn't belong at all. Sure it does; chess got the first mathematical ratings system (see Elo) and the AGA has tried applying the same system to Go (with mixed success). Percentile rank and handicap given are two different things, and the relationships between them are interesting (to some of us). We can take advantage of the literature and studies of chess ratings to develop and utilize go ranks; for example, we might use Elo's method of estimating an historical chessplayer's rating (e.g. Paul Morphy) to estimate historical go players (e.g. the Honinbo Shusai). Pete St.John (talk)


My original comment on User_talk:Kibiusa:
You removed the image Image:GoRatingComparison.png from Go ranks and ratings with the edit summary "Graph is meaningless; all beginners should start at the bottom; chess doesn't belong at all".
I strongly disagree with this.
  • The graph is not meaningless, it is based on actual data.
  • Because different rating systems are not aligned, all beginners should not start at the bottom; "bottom" is system dependent.
  • The chess data illustrates to anyone familiar with Chess Elo ratings how go ratings compare, which is a much larger group than those already familiar with go ratings.
Furthermore, the image illustrates several points made in the text, amongst others the second half of the "Rating Base" section and most of the "Winning Probabilities" section.
Could you either give a better reason for removal, or refute the points I made? Otherwise, please reinstate the image.
HermanHiddema (talk) 21:10, 24 January 2008 (UTC)

OK here goes: "All beginners start at the bottom": A scale is supposed to cover the entire spectrum being measured, from weakest to strongest, so the chess scale must be able to capture ratings weaker than your first one. The weakest chess player ever rated was not equal to a 15K go player. If you must include chess, start chess beginners with go beginners, at the bottom.

"Meaningless": I accept that the data are meaningful to others, for instance those interested in distribution of players. Although it's hard to know whether the distribution represents "actual" player strength, or artifacts of each measurement process. However, "actual data" does not necessarily lead to meaningful analysis. For instance: the weakest possible player on KGS and in EGF starts at 20K. AGA starts at 30K. So a player with a low skill level could be called 20K in one, and 30K in the other. Your graph assumes that 20K players in KGS are all stronger than >20K players in AGA, but they're equal. So it needs some work to express something valid and significant, but I think we all appreciate the work that must have gone into creating a potentially interesting summary of data.

"Chess": OK, there is a meaningful fact here: chess has fewer levels of difficulty from beginner to top level. I think the math is in the go article, or I can get it for you. This comes back to my point that all beginners should start at the bottom of your graph for it to make sense. kibi (talk) 23:30, 24 January 2008 (UTC)

I think you misunderstand the graph somewhat. I can't say I blame you, it is quite complicated, I will try to explain:
The graph plots percentile vs rating. So for example if we look at the data at the 95th percentile mark, we see that this is AGA 6d, KGS 4d, EGF 2339 and USCF 2124 (I looked those last two numbers up from the original data). So this means that an AGA 6d is stronger than 95% of the total playing population, and the same goes for a KGS 4d, an EGF 2339 rated player or a USCF 2124 rated player. This is how you can use the graph. Suppose you have a USCF rating of 2000, but don't know much about go ratings. Take the graph, see at what percentile USCF 2000 is (90%) and now see what 90% is in go rating systems. So if you have USCF 2000, you now know that this is roughly equivalent to an EGF 2d (2200 rated), a strong KGS 2d (halfway toward 3d) or an AGA 5d. Or the other way around: if a go player tells you he is 3 kyu with the EGF, you can look in the graph at the percentile (about 65) and therefore conclude that EGF 3k is roughly equivalent to someone with a chess rating of 1600.
So, to address your first point, the weakest chess players in the graph are those at the 1% point, with USCF rating 444. This means that according to the USCF, someone with rating 444 is stronger than 1% of all USCF rated chess players (and weaker than the remaining 99%), which puts USCF 444 equivalent to 20k EGF and 31k AGA. These numbers are slightly flawed, because the EGF does not measure below 20k, so more realistically, USCF 444 is roughly equivalent to EGF 25k or AGA 34k.
Your second point is partly valid; the data at the very lowest end of the scale (the weakest 2% of players) is flawed because there is an artificial cutoff there (20k at EGF, 31k at AGA). The graph does not, however, say that 20k KGS players are stronger than 20k AGA players. Quite the contrary. A 20k AGA player is at the 15% point (better than 15% of AGA players), while a KGS 20k is at the 3% point (stronger than 3% of KGS players). Which suggests an AGA 20k is stronger than a KGS 20k, and is more like a KGS 13k. One thing is quite certain however, and that is that AGA 20k and KGS 20k are not equal. That is what the second half of the "Rating Base" section is talking about. Separate playing populations drift away from each other in ratings.
Perhaps a good way to look at this is to consider the left side of the graph as the "bottom" and the right side as the top. All playing populations, independent of rating system used, have a bottom 1st percentile and a top 99th percentile. This graph just shows where that point is in their respective rating systems. Chess having fewer levels from beginner to top player is not important in a percentile system; when it comes to percentiles, all rating systems are equal.
HermanHiddema (talk) 08:51, 25 January 2008 (UTC)

Graphs are supposed to summarize data in clear, simple, accurate, meaningful ways. Your "quite complicated" graph does not achieve this goal and creates a misleading impression.

As you admit, there are fewer levels of difficulty between beginner and top level in chess than in go. Then you say it doesn't matter because of percentiles. You're misusing percentiles as a Procrustean bed to create the false impression that top chess players are as strong as top go players. Let me give you an example of why this thinking is flawed.

Suppose you use the same procedure to look at the heights of daisies vs. the heights of redwoods. You would measure a bunch of daisies, then measure a bunch of redwoods, and plot their percentiles on the same graph. Looking at the graph, a reader would assume that the daisies are as tall as the redwoods. Very, very misleading. The level of difficulty issue is a key fact that is obscured by your graph. Please revise it to show that USCF 2000 and EGF 2000 are NOT equal. At present it is a misleading piece of original research.

kibi (talk) 17:51, 25 January 2008 (UTC)

Your example of daisies and redwoods is exactly what makes percentiles meaningful; a tall daisy (taller than 90% of all daisies) is certainly not the same height as a tall redwood (taller than 90% of all redwoods). The question of "levels of difficulty" is not the same as standard deviations; but the latter may help in analyzing the former, e.g. "does a one stone handicap correspond to a constant percentile range? (no) or to a fraction of a sigma? (closer, probably) or...?" You don't have to appreciate everything in the article yourself Kibi. No article can be all things to all people. Incidentally I don't know what the statement "USCF 2000 = EGF 2000" would mean; although, I'm about 2070 in chess, and 1.5d in AGA, and I tell people that roughly I'm about the same strength in both sports. Pete St.John (talk) 23:28, 25 January 2008 (UTC)

Yes, and a graph that makes them look the same height could be terribly misleading. This graph falsely implies that a beginning chess player is equal to a 15K go player. I know people who have played and studied for years and not achieved that level -- rare but it happens. In view of the acknowledged fact that go has more levels of difficulty than chess, the chart should show the true fact -- chess beginners and go beginners are equal. It would be easy enough to adjust the scales to allow for that. kibi (talk) 14:02, 28 January 2008 (UTC)

yes Kibi I agree about that; the table, particularly, explicitly equating Elo 200 with 20 kyu, should at least say which ranking system. In the AGA, they go down to 30k I think. However, ratings, and ranks, that low, do not mean very much; what is the percentage increase in a beginner's strength when he learns that two eyes live? How many stones handicap does a kid who learned Tuesday, and knows that idea, give to a kid who learned Wednesday, and doesn't? Comparing the two rating systems is useful (IMO) but the data underlying the graph should be specific. I agree that someone with zero knowledge of Go is in some sense comparable to someone with zero knowledge of chess :-) and that the low ends of the table and graph seem odd to me. Pete St.John (talk) 20:36, 28 January 2008 (UTC)

Thanks Pete! So let's hope that the author of the table fixes it, or responds to defend it. You're also correct that ratings at low end are often unstable. If we don't see a response I'm going to yank the misleading chart again. Re: your question about "Tuesday's child' and "Wednesday's child" -- I'd have them play an even game -- give Tuesday White -- and adjust the handicap based on the result until it settles. kibi (talk) 21:15, 28 January 2008 (UTC)

Can we maybe fix the table? by finding specific data, e.g. at senseis.xmp.net? I'd prefer that to deleting the table, because the low end is just bad data; the comparison is not intrinsically bad. As for the hypothetical kids, I think that the result wouldn't settle (or not very soon). That's what's bad about measuring the performance of beginners: they are changing while you are measuring them, so the measurements aren't meaningful. (You might meaningfully measure how fast a beginner learns, but not their current strength, because their current strength changes so fast... but the rate they improve may be stable.) Pete St.John (talk) 21:24, 28 January 2008 (UTC)
btw, if you look above, Herman (with whom I agree that the graph is generally worthwhile) agrees with both of us (I think) that the low end of the graph is flaky. So maybe, Herman, you can fix that up for us? Thanks, Pete St.John (talk) 21:27, 28 January 2008 (UTC)

I agree Pete, the graph can stay but the flakiness at the bottom has to be fixed. I retract that it is totally "meaningless." But we don't seem to be hearing from Herman -- maybe deleting it will get his attention? It's his graph, I'm not into twiddling with it. kibi (talk) 17:08, 29 January 2008 (UTC)

How about tagging the graph (and the table) with a "needs improvement" thing? I tend to prefer big time patience, this is a case where reasonable people are acting reasonably towards improvement, which is in such stark contrast to the contentious pages, which can crop up (!!) anywhere. Erdos Numbers, sheesh. Pete St.John (talk) 18:40, 29 January 2008 (UTC)

Except that the author is ignoring the discussion, while posting on other pages, for example criticizing alleged "dumbshit speculation and philosophizing" on the computer go page. Do you know how to apply a "needs improvement" tag? If so please do. I prefer not to present misleading data, even if we alert readers that something is wrong, they still have to figure it out themselves, but I would settle for a tag. kibi (talk) 13:42, 30 January 2008 (UTC)

Sigh. I am not ignoring this discussion, but my time is quite limited, and I think the discussion requires a thoughtful response. The other edits I made were all very short and required little time. I am currently at work, so I will make a full reply later. Please have some patience. HermanHiddema (talk) 14:56, 30 January 2008 (UTC)

Thanks H. Take your time, just trying to move things forward. kibi (talk) 17:56, 30 January 2008 (UTC)

Yeah some edits take more thought than others. Now I'm curious to go look at Computer Go, but I suspect it would just unmellow my wa. Pete St.John (talk) 20:01, 30 January 2008 (UTC)

Ok, a lot of talk has been going on, so if I fail to address any point, please let me know. First off, apparently the graph is too complicated, because despite my explanation, both of you have failed to understand it fully. This qualifies it for deletion or replacement. To address some specific points:

  • The graph does not equate EGF 2000 to USCF 2000 (it equates EGF 2200 to USCF 2000, roughly). It also does not equate 20k to Elo 200, nor a beginner at chess to a 15 kyu (it equates 500 USCF to 23k KGS and 31k AGA, roughly). Apparently, however, it is more natural for people to assume that values that are at the same height are equivalent than it is to assume that values that are at the same width are equivalent. The only measure of equivalence in the graph is percentiles, so data points that are at the same percentile (i.e., the same width) are roughly equivalent. Perhaps if the graph is turned on its side, this will make sense. As said before: consider the left side of the graph to be the "bottom" and the right side to be the "top".
  • This data is from Sensei's Library, so fixing the table by finding specific data on Sensei's is not a workable suggestion. Also, as the data comes from Sensei's, it is not OR, because someone other than me gathered that data (though I added some data points to it and plotted the graph from it).
  • On the redwoods vs. daisies issue, this comparison doesn't hold, because in that case you are measuring the same thing (height), while this graph measures different things (skill at go and skill at chess). The reason I included chess is because it is more familiar. So a fairer comparison would be to plot in the same graph the height of humans and the height of redwoods. If such a graph showed that a 100m redwood is at the 95th percentile, and that a 2m human is also at the 95th percentile, this would give readers a feel for how exceptional a 100m redwood is.
  • The statement "You're misusing percentiles as a Procrustean bed to create the false impression that top chess players are as strong as top go players." is meaningless. As strong? At what? Top chess players are much better than top go players. At chess. Top go players are much better than top chess players. At go.
  • The bottom of the EGF graph is flaky, yes, because the EGF does not measure grades under 20k. I cannot fix that without making data up, though I can remove the left part of the EGF graph altogether. Alternatively, I could remove all the 1% and 2% data points from all the graphs, effectively cutting that part of the graph off.
  • There seems to be a lot of confusion over the ratings/grades in general. What this graph really needs is 4 Y-axis label sets: one for USCF, one for EGF, one for AGA and one for KGS. But since I only have the left and the right side of the graph to work with, I decided to merge these. USCF ratings used range from 444 to 2789. EGF ratings range from 100 to 2803. Since they are roughly the same range, they have been merged and put on the left. Similarly, KGS ranks range from 30k to 9p and AGA ranks range from 35k to 9d, so these have also been merged and put on the right side. Which is also why the left side bears the label "USCF or EGF Elo rating".

Ok, that should address most issues. As stated at the beginning, the graph is apparently too complicated. Perhaps we should therefore replace it with this table:

% AGA KGS USCF EGF
5% -27.69 -19.20 663 153
10% -23.47 -15.36 793 456
20% -18.54 -11.26 964 953
30% -13.91 -8.94 1122 1200
40% -9.90 -7.18 1269 1387
50% -7.10 -5.65 1411 1557
60% -4.59 -4.19 1538 1709
70% -1.85 -2.73 1667 1884
80% 2.10 -1.28 1807 2039
90% 4.71 2.52 1990 2217
95% 6.12 3.88 2124 2339
98% 7.41 5.29 2265 2460
99% 8.15 6.09 2357 2536
99.5% 8.70 7.20 2470 2604
99.9% 9.64 pro 2643 2747

Or this one with raw ratings replaced with kyu/dan grades:

% AGA KGS USCF EGF
5% 27 kyu 19 kyu 663 19 kyu
10% 23 kyu 15 kyu 793 16 kyu
20% 18 kyu 11 kyu 964 11 kyu
30% 13 kyu 8 kyu 1122 9 kyu
40% 9 kyu 7 kyu 1269 7 kyu
50% 7 kyu 5 kyu 1411 5 kyu
60% 4 kyu 4 kyu 1538 4 kyu
70% 1 kyu 2 kyu 1667 2 kyu
80% 2 dan 1 kyu 1807 1 kyu
90% 4 dan 2 dan 1990 2 dan
95% 6 dan 3 dan 2124 3 dan
98% 7 dan 5 dan 2265 5 dan
99% 8 dan 6 dan 2357 5 dan
99.5% 8 dan 7 dan 2470 6 dan
99.9% 9 dan pro 2643 7 dan

HermanHiddema (talk) 13:10, 31 January 2008 (UTC)
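The percentile-based comparison Herman describes can be sketched in code, using rows from the first table above. This is an illustrative sketch only; the function name and the nearest-row lookup (no interpolation) are my own choices, not anything the rating bodies publish:

```python
# Cross-system comparison via percentiles: find where a USCF rating sits
# in the table, then read off the EGF rating at the same percentile.
import bisect

# (percentile, USCF rating, EGF rating) rows taken from the table above.
ROWS = [
    (5, 663, 153), (10, 793, 456), (20, 964, 953), (30, 1122, 1200),
    (40, 1269, 1387), (50, 1411, 1557), (60, 1538, 1709), (70, 1667, 1884),
    (80, 1807, 2039), (90, 1990, 2217), (95, 2124, 2339), (98, 2265, 2460),
    (99, 2357, 2536),
]

def uscf_to_egf(uscf):
    """Return (percentile, EGF rating) at the nearest tabulated row
    at or above the given USCF rating (no interpolation)."""
    uscf_col = [r[1] for r in ROWS]
    i = min(bisect.bisect_left(uscf_col, uscf), len(ROWS) - 1)
    pct, _, egf = ROWS[i]
    return pct, egf

print(uscf_to_egf(2124))  # (95, 2339): same percentile, different scales
```

The point of the sketch is that the only equivalence in the data is "same percentile of the respective playing population", exactly as the discussion above says.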

OK, it sounds like we're getting somewhere, that's good. I look forward to seeing the revised table which assumes that all players are equally without skill.

I had to laugh at your dismissal of the "daisies vs. redwoods" comparison -- which is perfectly valid because you are measuring skill, as in my comparison one would measure height -- when you went on to make my point. Chess players are good at chess. Go players are good at go. So why are you comparing them?

The most meaningful fact IMO is that chess has fewer degrees of difficulty from beginner to expert than go does. I believe you have acknowledged this. So adjust the table to reflect this -- starting all beginners at the same point and showing that chess tops out below go -- and you resolve my objections. (Actually all the scales should start out equally.) kibi (talk) 14:19, 31 January 2008 (UTC)

I assume that first sentence contains some form of typo? :-)
As to the daisies/redwood go/chess issue: I am not comparing Go and chess players. I am including chess ratings for reference, for those who are unfamiliar with go ratings but familiar with chess ratings. Making a graph that contains daisies and redwoods would be pointless, unless many people were very familiar with the height of daisies. If the statement "A redwood over 100m tall, that's like a daisy over 10cm tall" would serve as a clarification to many people, it would be useful. But it isn't. On the other hand, a statement saying "A redwood over 100m tall, that's like a man over 2m tall" is a useful clarification to many people. It's the kind of statement that will make them think "Ah, so redwoods over 100m are pretty tall then, as redwoods go". The chess data is there for reference only, and does nothing to compare go and chess players.
I think "degrees of difficulty" is a very vague term. As far as I know, there are fewer distinguishable levels in chess, if you define a "level" to mean something like "scores an average of 68% against a player one level lower". This graph, however, is meaningless in that context. Go Elo ratings are not the same as chess Elo ratings. And different Go rating systems (EGF, AGA, KGS) are not the same either. This is because they use different values for several variables in the Elo formula, such as the K-value and the variable a in the section Go_ranks_and_ratings#Elo_Ratings_as_used_in_Go. In this formula, chess uses the denominator 1+10^(D/400), while the EGF uses 1+e^(D/a), so in chess a is constant at 400, while the EGF uses a value that varies with playing level. Unless the same factors are used, comparing the outcomes of these Elo formulas is meaningless. Now if someone wants to go and do the research required, by taking all the results in the European Go Database and putting them through the chess version of the Elo formula, that would give us meaningful results on the issue. Until that is done, however, we cannot make exact statements on this. If you want to take this graph to conclude that there are more levels in Go than there are in chess, you must also conclude that there are more levels in AGA Go than there are in EGF Go, which is clearly wrong.
I can redo the graph so that all the lines start in the lower left corner, but that wouldn't be any more meaningful than the current graph, and it would force the use of 4 different Y-axis scales, which would, IMO, hugely complicate the graph.
I think it is best to use one of the tables suggested by me in the previous comment instead. HermanHiddema (talk) 15:59, 31 January 2008 (UTC)
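Herman's point about the two formulas can be made concrete with a small sketch. Both are logistic curves, but with different bases and divisors, so the same rating difference D implies different winning probabilities. Note the value a=200 below is purely illustrative; as Herman says, the EGF actually varies a with playing level:

```python
# Expected score for the weaker player at rating difference d,
# under the chess Elo formula vs. an EGF-style variant.
import math

def chess_expected(d):
    """Chess Elo: 1 / (1 + 10^(d/400))."""
    return 1.0 / (1.0 + 10 ** (d / 400.0))

def egf_expected(d, a=200.0):
    """EGF-style: 1 / (1 + e^(d/a)); a=200 is an assumed constant here,
    whereas the real EGF lets a vary with playing strength."""
    return 1.0 / (1.0 + math.exp(d / a))

d = 100  # a 100-point rating gap
print(round(chess_expected(d), 3))  # 0.36
print(round(egf_expected(d), 3))    # 0.378
```

Since the two curves disagree already at a 100-point gap, directly comparing rating numbers across the systems is indeed meaningless without first normalizing the parameters, which is the research Herman says has not been done.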

Herman, you say that "degrees of difficulty" is a "vague" term, then you proceed to define it with great precision, so I'm sure you know that there are about 25% more such degrees in go than chess, and there's nothing "vague" about that. You say you are not "comparing Go and chess players," perhaps you're not -- but your table does. You say that you included USCF ratings for people who are already familiar with them . . . still scratching my head over that one.

You seem indifferent to the fact that your tables can easily lead readers to false conclusions about the difficulty of chess vs. go. Do you plan to revise it? It can't stay in its present form.

kibi (talk) 14:27, 4 February 2008 (UTC)

The vague term I was referring to was "degrees of difficulty", which is why I used an alternative term, "distinguishable levels of skill". "Difficulty" is not a good word in this context. Is a 3 dan more difficult than a 2 dan? No, he has a distinguishably higher level of skill. The fact that there are more such levels might say something about whether or not go is a more difficult game than chess, but is certainly not conclusive evidence. I could write quite a lot of text about that, but it'd all be OR. I think, however, that it may still interest people, so if I find the time I will write something about it on this talk page.
As for including USCF ratings for people already familiar with them, this is because there are at least 10 times as many chess players in the west as there are Go players. Including USCF ratings for reference therefore makes the page more accessible, in my opinion.
I am not indifferent to the idea that the graph can lead readers to false conclusions. I had not expected that it would, but the fact that both you and Pete St. John found the graph confusing shows that it can indeed lead to false impressions. This led me to support replacing or deleting the graph, as you mentioned in your recent edit removing it from the page.
HermanHiddema (talk) 12:44, 11 February 2008 (UTC)
First, Herman, thanks for your in-depth analysis. My sense is that you have what it takes to clarify all this, regarding technicals, pedagogy, and effort, all three good ingredients, so we're lucky to have you at this page. Then, I just want to ask about the scales; my understanding is that after the relatively recent scale shift at KGS, 2k in KGS is about 1d in AGA. This makes sense to me, as I'm 2k in KGS and 1d in AGA myself :-) but I'm only one data point of course. But I had seen that in some statistical tables; I thought, at Sensei's. Also, and this is a harder comparison, I'm 2070 USCF and 1.5d AGA (to be precise) and I've felt that these are sorta comparable (just a subjective impression from knowing people at many levels in both groups). I'm very sceptical about being 1700 at go, in some sense. But that doesn't prove anything, it just makes me want to scrutinize a bit more at certain places. Meanwhile, I have to look at that graph again; maybe tilting my head :-) anyway I appreciate your effort here and I expect you to solve our problems for us :-) Pete St.John (talk) 21:35, 5 February 2008 (UTC)
The graph is no longer on the page, but can still be found at http://senseis.xmp.net/?RatingHistogramComparisons for reference. 1d AGA and 2k KGS sounds about right. In the graph 1d AGA is at about the 75%, and looking at the point where the KGS graph crosses 75%, it is between 1k and 2k, so a strong 2k.
2070 USCF on the other hand is at about 93%, which is roughly equivalent to 3d EGF/KGS and 5.5d AGA. In my experience, that sounds about right. For comparison, the highest Elo rating ever achieved in Go was Lee Chang-ho with 3035 (see http://web.archive.org/web/20070218222957/www.goweb.cz/progor/maxim.html for a list). The highest ever in chess is 2851 by Kasparov. So adding about 200 points to your USCF rating is a good estimate for your EGF rating: 2070 + 200 = 2270, which is 3d (2250-2350 is 3d in EGF terms).
HermanHiddema (talk) 12:44, 11 February 2008 (UTC)
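Herman's rule of thumb above can be sketched as a tiny conversion. Everything here is an informal estimate, not any federation's official formula: the "+200" offset is Herman's suggestion, and the grade mapping simply assumes the EGF's 100 points per grade with 2250-2350 as 3 dan, as stated in his comment:

```python
# Rough USCF -> EGF estimate, then EGF rating -> kyu/dan grade.

def uscf_to_egf_estimate(uscf):
    """Herman's rule of thumb: add about 200 points."""
    return uscf + 200

def egf_grade(egf):
    """Map an EGF rating to a grade, 100 points per grade,
    with 2050-2150 as 1 dan and 2250-2350 as 3 dan."""
    n = round((egf - 2100) / 100.0)  # grades above/below the 1-dan midpoint
    if n >= 0:
        return f"{n + 1} dan"
    return f"{-n} kyu"

egf = uscf_to_egf_estimate(2070)
print(egf, egf_grade(egf))  # 2270 3 dan
```

This reproduces the worked example in the comment above (USCF 2070 coming out as EGF 3 dan); as the discussion makes clear, such cross-game comparisons are only rough percentile analogies.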

The author of the graph agreed on 1/30 that it is "too complicated" and qualifies for "deletion or replacement." No word since then, in over a week; so I am deleting the current graph, pending replacement with a better one. kibi (talk) 16:37, 7 February 2008 (UTC)

Ok. Any suggestion for a better graph then? I can't think of anything. Alternatively, how do people feel about including one of the tables I suggested above? HermanHiddema (talk) 12:44, 11 February 2008 (UTC)
We're perfecting the mathematical analysis of Go. That's complicated in itself. We're Go players. That means we are patient. I would hugely prefer marking the graph as under current discussion for improvement, and pointing to this section of the talk page. There are precedents for that. Some students aren't back from winter break yet. Some have exams already now. This is not a real-time environment. Please be patient. Be grateful someone with the technical chops is also taking the time to address it; if this were left to me it might never get done right, I'm spread thin. Pete St.John (talk) 19:57, 7 February 2008 (UTC)

OK since you feel so strongly. It's just that bad aji makes me nervous. 8>} kibi (talk) 20:33, 10 February 2008 (UTC)

Kibi: yeah, I guess I'm too used to bad aji :-) Herman: Several points. First, re: my rank :-), I'm willing to believe that I'm a couple stones stronger at chess; I'm certainly more erudite there, and my subjective impression must have some error interval. Regarding distinguishable levels (you expressed that well, btw), I think the range of go players distinguishes into more measurable levels than chess, because the reliability of the outcome of go is higher than chess; I suspect that one game of go is statistically equivalent to a short match, maybe like two games of chess, in terms of reliability and repeatability. I don't think that means Go is more complex than chess. In terms of test metrics (see e.g. ETS publications), one test has higher reliability than the other. They could be measuring the same thing(s) (e.g. IQ, preparedness for college, or strategic acumen). Go players like to say that Go is better than chess, because 1. It has more levels of stronger players, so the highest level of go must be higher than chess (false), and 2. Computers can't play go as well as chess (true, but computer chess had a huge head start; and even if chess is more amenable to approaches that are easier with computers, it doesn't mean that Go is better). Both games exhaust any human abilities to reason effectively, so both games are sufficient challenges as sports. Go is more cost-effective, I think, than chess in terms of reliable outcomes per hour, but people will play whichever is more fun for them, and both can be tons of fun. I just want to resist some facile assumptions that I hear a lot. Meanwhile, I trust you to make a good presentation of the (possibly modified) graph, I'm confident you are on top of the issues and are responsive to everyone's concerns, so if you have time, great. I'm hoping you do all of the actual work :-) and thanks very much for your patience, effort, thoughtful replies. Pete St.John (talk) 21:02, 11 February 2008 (UTC)

Nonsense

In Go, once a player has achieved an advantage, conservative moves are even more effective than in chess. In chess you must take some risks to avoid a draw, but in modern go a draw is impossible, due to the komi system, so a small advantage in skill often results in victory. Also, an average game of Go lasts for 120 moves, compared to 40 in chess, so there are more opportunities for a weaker player to make sub-optimal moves. The ability to transform a small advantage into a win increases with playing strength. Due to this ability, stronger players are more consistent in their results against weaker players and will generally score a higher percentage of wins against opponents at the same rank distance.[1]

The cited reference does not support the assertion (very bad). If the rank distance is defined by the P(win) then this statement is obviously false.

The cited reference does in fact support the assertion. If you look at the table referenced, you will see that the winning percentages of the weaker player against opponents 1, 2, 3 or 4 ranks stronger drop significantly from about 4 kyu and stronger (downward in the table). Furthermore, ranks are not defined by P(win), they are defined by the number of handicap stones needed to give players an even game. HermanHiddema (talk) 13:27, 31 January 2008 (UTC)
I misread the paragraph, confounding the comparison with chess with the later statements (chess doesn't have 'handicap stones'). So the one paragraph is actually talking about two (or more?) different things. perhaps it should be rewritten. —Preceding unsigned comment added by 99.240.215.79 (talk) 13:46, 31 January 2008 (UTC)
Yes, I can see how that is confusing, thanks for pointing that out! The second part (from "The ability to transform..." onward is indeed no longer in any way related to chess. Making that more obvious would be a good thing, I will ponder this and try to rewrite the paragraph soon. HermanHiddema (talk) 13:58, 31 January 2008 (UTC)

Edits by Milker

I'm sorry, but your edits contains many factual errors, here are some:

  • Kyu means pupil and Dan means master in Japanese
    • Untrue. Kyu means "step", dan means "grade".
  • The difference in strength from one to another rank is insignificant and it is not surprised to see a 25 Kyu player beats a 15 Kyu player
    • Untrue. The EGF estimates that a 20 kyu has only a 7.6% chance to beat a 15 kyu. For a 25 kyu to beat a 15 kyu is exceptional.
  • the handicap system in Go does not apply to Double Digit Kyu
    • Untrue, handicaps are used quite often in the DDK range, and are as valid there as they are in other ranges.
  • Even for serious learners, it would take a few years of playing and experiences to advance from 9k to 1k while they may only take a few weeks to advance from 30k to 10k.
    • Untrue. Serious students in the game have reached 1d within a year from starting, while other players take years to reach 10k

These are just some of the problems with the edit. HermanHiddema (talk) 09:48, 17 June 2008 (UTC)
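For what it's worth, the general shape of win-chance estimates like the 7.6% figure above can be illustrated with the generic Elo expectancy formula, under the common convention of roughly 100 rating points per rank. This is only a sketch: the EGF's actual formula uses a rank-dependent scale factor, which is why its published numbers differ from this simplified model.

```python
def win_chance(rank_gap, points_per_rank=100):
    """Generic Elo win expectancy for the weaker player, given a rank gap.

    Assumes ~100 rating points per rank; the real EGF formula differs.
    """
    return 1 / (1 + 10 ** (rank_gap * points_per_rank / 400))

print(f"{win_chance(5):.1%}")  # weaker player's chance across a five-rank gap
```

With this simple model a five-rank gap gives the weaker player only about a 5% chance, in the same ballpark as the EGF's 7.6% estimate for 20 kyu vs 15 kyu.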

That third point is absolutely absurd. Games are not decided by chance are they? If not then "statistics" has nothing to do with actual games, only *VERY* LARGE numbers of games. —Preceding unsigned comment added by 98.244.81.179 (talk) 16:37, 27 July 2010 (UTC)

Professional Ranks

We need to revisit this section I think. Pro ranks are obtained in various ways; the article doesn't say how. A 9 dan isn't necessarily stronger than a 1 dan. Pro ranks tend to show career achievements, not current strength.--ZincBelief (talk) 13:54, 19 June 2008 (UTC)

Winning Chances

I don't approve of much of the text here. Conservative moves are not always adequate. A draw is not impossible, they can happen even with non integer komi due to triple ko (although this is often called no result instead). Integer komi very much creates the chance of a draw. What this has to do with ranks and ratings is questionable.--ZincBelief (talk) 14:25, 19 June 2008 (UTC)

Eye Candy

There are a lot of graphs knocking around that could be added to show the distribution of ratings in Go player populations. Is it possible to add these in anywhere, or must they be published somewhere first? I think they might make the article look more exciting--ZincBelief (talk) 14:15, 4 December 2008 (UTC)

There was a long discussion of this issue earlier this year, see above. Not all data is meaningful. Let's continue the practice of not cluttering the article with graphs whose real significance is difficult to discern; how about discussing the value of a particular data set here before putting it up? kibi (talk) 14:23, 4 December 2008 (UTC)

That's a pity, Herman's graph is actually quite good in my opinion. Not using a cumulative graph would probably have been a better idea, and just picking 2 datasets would have been easier. I'm sure something like this should be included to illustrate the ability range in the Go playing population. I have graphs like Herman's, but mine were built to show the shift in rating demographics as a function of time in the EGF population.--ZincBelief (talk) 14:34, 4 December 2008 (UTC)

How about this one:

HermanHiddema (talk) 15:29, 4 December 2008 (UTC)

This is a better image, but there are no labels on the axes of the graph. I would prefer a graph that dealt only with ranks or ratings, but this graph (which mixes the two) I could accept. It would need a note explaining [1] that professionals are treated as '7 dan', 7d being the highest amateur grade allowed, [2] that 20 kyu is the lowest rank allowed in the system, and [3] that each rank corresponds to 100 points.--ZincBelief (talk) 15:57, 4 December 2008 (UTC)

Could somebody please be so kind and add a description for the x-axis in the graph shown in the article? What ranking system is used there? Thanks. 212.23.103.65 (talk) 20:44, 9 August 2013 (UTC)

professional dan as p: ping?

Just added a "citation needed" to the Professional ranks section. It's the first time I've heard of rendering the p in professional ranks as "ping". If someone can find the edit that added it, or some sort of evidence for/against, I would appreciate it. 85.138.128.15 (talk) 13:05, 15 September 2012 (UTC)

Removed the statement that the "p" for professional ranks was rendered as "ping", since it was uncited, and I'd never heard of it before. 85.138.128.15 (talk) 13:20, 4 October 2012 (UTC)

  1. ^ Official European Ratings. "Statistics on Even Games".