Bulgaria - Summer of Chess
#11
Thanks for the information on the ban – I wonder what will happen at the end of the four months if the committee finds it has insufficient information to justly ban him. If titled players en masse are refusing to play him, it sounds like finding him guilty will be the easier option; I think they could do without that influence on the verdict!

Ivanov’s results are surprising, but I’m not sure what your analysis adds to them, because if any player is going to somehow score five very sharp tactical victories against GM players, I doubt it could be done without hitting that sort of match (90% plus) with the top three engine moves. I think we are still left with our impressions and explanations. My impression was that the way Ivanov won these games suggests he should hardly ever lose to players below 2000. Hence I think he was ‘probably’ cheating, but I’m not certain.
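For anyone who wants to check figures like these themselves, here is a minimal sketch of how a ‘top three’ match rate is counted. Everything in it is a made-up placeholder, not data from the Ivanov games:

```python
# A minimal sketch of a 'top-3 match' count. It assumes you already
# have, for each of the player's moves, the engine's top three choices
# at some fixed ply; the moves below are hypothetical placeholders.

def top3_match_rate(player_moves, engine_top3):
    """Fraction of the player's moves appearing in the engine's top three."""
    hits = sum(1 for move, top3 in zip(player_moves, engine_top3)
               if move in top3)
    return hits / len(player_moves)

# Hypothetical example: 3 of 4 moves match, giving 75%.
player_moves = ["Nf3", "Qxd5", "Rad1", "h4"]
engine_top3 = [["Nf3", "d4", "c4"],
               ["Qxd5", "exd5", "Nxd5"],
               ["Rfd1", "Rad1", "Qc2"],
               ["a4", "Re1", "Kh1"]]   # "h4" is not in the top three
print(top3_match_rate(player_moves, engine_top3))  # 0.75
```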

You mention courts, which I think is the right way to think. But courts have a process and a standard of evidence, both of which are lacking here. No standard has been set. There is no direct evidence whatsoever. No accuser has taken responsibility for making the accusation. The accusation itself hasn’t even been specified. Was he using the tournament transmission as originally thought, or transmitting his own picture of the board (as Lilov now says he believes)? Is it that the top engine move was transmitted to him, or the top three moves? I ask because people are trying to decide guilt on the basis of presentations like Lilov’s (a slight digression admittedly, as the ‘Kerr presentation’ is better), in which the ‘top three’ is typically used for ‘illustration’, on the basis that we don’t know how much time the computer had, so a ‘top three’ move might later have popped up as the best. Once or twice, when I stopped to look at what was actually on the screen, I found quite a bit of leeway being applied in claims like ‘this is the computer move’ when the move wasn’t the engine’s top choice, or wasn’t even in its first three at that point – possibly cutting corners to keep the thing short.

OK, I know this is not your analysis – and thanks for making the effort to actually count what is being discussed. In your analysis you found a ‘top three’ match of 91.2%, which rose to 96.4% if you excluded games 2 and 8. As I say, on its own I don’t think this adds that much to the spectacle of five duffed-up GMs. I appreciate that you then dug a bit deeper, looking at the moves not in the top 3, of which you say:

“I found that the moves he made were often in fact the top choice for Houdini when viewed at a different ply from my original analysis, in all cases, within just a few ply.”

But did you also re-check your stated percentages when calculated at these different ply levels? If you just checked the moves that were not in the top three to see whether they came close to being included, without also checking the ones that were originally included (to see whether they remained in the top three on the altered basis), you would be biasing your numbers – you’re bound to make the stats look better that way.
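To make that selection effect concrete, here is a toy simulation under made-up probabilities – nothing in it comes from the actual games. Each move is assumed to land in the top three at any given ply independently with probability p; re-checking only the misses at a few extra plies can only add hits, never remove them, so the reported rate inflates even though the true per-ply rate never changes:

```python
import random

random.seed(1)
p = 0.6            # assumed per-ply chance of landing in the top three
n_moves = 40       # moves per game (made up)
extra_plies = 4    # how many alternative plies the misses are checked at
trials = 2000

def one_sided_recheck_rate():
    hits = 0
    for _ in range(n_moves):
        if random.random() < p:      # hit at the original ply:
            hits += 1                # never re-checked, stays a hit
        elif any(random.random() < p for _ in range(extra_plies)):
            hits += 1                # a miss 'rescued' at some other ply
    return hits / n_moves

avg = sum(one_sided_recheck_rate() for _ in range(trials)) / trials
print(f"true per-ply rate: {p:.2f}, reported rate: {avg:.2f}")
# With these numbers the reported rate comes out near 0.99.
```

Checking both directions – hits that drop out as well as misses that drop in – is the only way to keep the percentage honest.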

I’d just like to comment on your other subjective explanations. Carlsen was a poorly chosen example that might mislead people. Strike that from the record, your honour! =) . Regarding game 2, excluded for time pressure issues – what time pressure issues? I thought that was a suggestion of Valeri Lilov’s in his video, but he seemed to be speculating without any information on the matter. In any case, if you are ‘improving’ the data by removing 36% of it, I’d say it needs a lot more justification. You also explain that in that game GM Jovanovic ‘was on to him’ and so played quietly – but then why aren’t all the other GMs ‘on to him’ and playing safely, rather than losing sharp tactical games?

Lilov said the game 2 endgame blunder on move 115 was probably a glitch, so there are two competing theories. This is one subjective explanation I agree with: it does look like a glitch! I doubt even a 2200 player would have missed that Nf4 won the d-pawn. He then exchanged into the obviously lost pawn ending, resigning almost immediately – quite consistent with someone blindly following the engine (choosing, say, a meaningless minus four over a minus five) and then realizing he was lost. Trouble is, if you are playing the evidence-counting game, you still have to count objectively, even if it weakens your case.

It would help for someone to estimate a figure for how often a 2600 player matches Houdini within the thinking times of a real game, especially in tactical games. Even one example might shed some light.
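If anyone fancies attempting that estimate, below is a sketch of one possible approach using the python-chess library and any UCI engine (Houdini would slot in if you have it). The engine path, the PGN file of 2600+ games and the per-move time are all assumptions for illustration:

```python
# Sketch: replay strong players' games and count how often their moves
# land in the engine's top three. Paths and the time limit are made up;
# the time limit should be raised towards real thinking times for a
# fair comparison with over-the-board play.

import chess
import chess.engine
import chess.pgn

ENGINE_PATH = "/usr/bin/stockfish"   # hypothetical; any UCI engine works
PGN_PATH = "gm_games.pgn"            # hypothetical sample of 2600+ games

def game_top3_rate(game, engine, seconds=1.0):
    board = game.board()
    hits = total = 0
    for move in game.mainline_moves():
        infos = engine.analyse(board, chess.engine.Limit(time=seconds),
                               multipv=3)
        top3 = [info["pv"][0] for info in infos if info.get("pv")]
        hits += move in top3
        total += 1
        board.push(move)
    return hits / total if total else 0.0

with chess.engine.SimpleEngine.popen_uci(ENGINE_PATH) as engine:
    with open(PGN_PATH) as pgn:
        game = chess.pgn.read_game(pgn)
        while game is not None:
            print(game.headers.get("White"), "vs",
                  game.headers.get("Black"),
                  f"{game_top3_rate(game, engine):.1%}")
            game = chess.pgn.read_game(pgn)
```

Run over a decent sample of sharp GM games, that would at least give a baseline to compare the 91.2% figure against.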

What standard to apply? ‘Balance of probabilities’ means people could be banned on fifty-fifty hunches – not appropriate. I also think it’s important that any drastic action only follows due process. Trouble is, this could be onerous! But as I say, I think the committee had less drastic options: I doubt he could have kept cheating for long under such scrutiny, especially if they had varied the transmission.

Cheers