MatPlus.Net Forum General Iqbal strikes again (FLAMEBAIT :-)
 
(1) Posted by Hauke Reddmann [Monday, Oct 22, 2018 17:01]; edited by Hauke Reddmann [18-10-23]

Iqbal strikes again (FLAMEBAIT :-)


By chance, I ran into the following article:
A Computer Composes A Fabled Problem: Four Knights vs. Queen,
https://arxiv.org/abs/1709.00931

It's rather obvious that Baldur Kozdon, the master of the bQ miniature,
wouldn't touch the presented problem with a ten-foot pole (the key
goes from an undefended to an immediately attacking position; the
"main" variation declared by Iqbal is QxS, which reduces things to
a trivial KSSS/K), but that is beside the point. The more relevant
question is: (a) will computers never learn esthetics, or (b) is it
merely that we haven't found the right algorithms yet and/or must
throw more MIPS at the problem? After all, when Torres built his
Ajedrecista for KR/K, he probably couldn't imagine that a mere
100 years later Alpha Zero would wipe the floor with us.
If (b), give a time frame; you won't lose karma points if you
are off by a millennium or two :-)

Hauke (b, my lifetime, sue me)

[EDIT: proper name spelling]
 
(2) Posted by Juraj Lörinc [Monday, Oct 22, 2018 23:17]

Monty Python would confirm: It has been incorporated into a computer program called Chesthetica and since around tea time has been used to compose chess problems of various types, which have been published online (automatic composition using earlier methods started around noon).

Seriously now. The main difference between the game and most composition genres is the existence of a clearly defined goal. The game has one; most composing efforts do not. Well, once you are able to define the goal exactly, you can use a naive algorithm: check all possible positions for a solution and, if it is unique, check for the presence of the goal (formal theme, strategic theme...). Obviously, this can sometimes be considerably optimized, as e.g. Torsten Linss has done for his reflexmate tablebases or Vaclav Kotesovec for his echo problems. I have seen Vaclav's program VKSACH, properly set up, producing clear-cut diamonds ready for publication.
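[A minimal sketch of that naive generate-and-test idea for directmate twomovers, assuming the python-chess library; candidate_positions and shows_theme are placeholders the composer would have to supply, and the soundness test below is deliberately crude (it ignores duals after the key and set play):

```python
import chess

def _has_mating_reply(board):
    """True if the side to move can deliver checkmate immediately."""
    for move in list(board.legal_moves):
        board.push(move)
        mate = board.is_checkmate()
        board.pop()
        if mate:
            return True
    return False

def mate_in_two_keys(board):
    """All White first moves after which every Black defence allows a mate.
    Exactly one such key counts as 'sound' in this crude #2 test."""
    keys = []
    for key in list(board.legal_moves):
        board.push(key)
        defences = list(board.legal_moves)
        ok = bool(defences)              # mate in 1 or stalemate: not a #2
        for defence in defences:
            board.push(defence)
            if not _has_mating_reply(board):
                ok = False
            board.pop()
            if not ok:
                break
        board.pop()
        if ok:
            keys.append(key)
    return keys

def compose(candidate_positions, shows_theme):
    """Naive generate-and-test: keep positions that are sound and show the goal."""
    gems = []
    for fen in candidate_positions:      # placeholder position generator
        board = chess.Board(fen)
        keys = mate_in_two_keys(board)
        if len(keys) == 1 and shows_theme(board, keys[0]):
            gems.append((fen, keys[0]))
    return gems
```

The optimizations mentioned above would go into the enumeration of candidate_positions and into pruning inside the soundness test; the skeleton of "solve, check uniqueness, check goal" stays the same.]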
 
 
(3) Posted by Hauke Reddmann [Saturday, Dec 8, 2018 16:05]

Another article on the subject:
ICGA Journal, Vol. 29, No. 1
Fridel Fainshtein, Yaakov HaCohen-Kerner, A Chess Composer of Two-Move Mate Problems
http://homedir.jct.ac.il/~kerner/pdf_docs/ICGA_computer_composer.pdf
Fasten your seatbelts :-)
 
   
(4) Posted by Juraj Lörinc [Saturday, Dec 8, 2018 21:46]

Well, I like this article much more than the previous one. Even if the quality of the "improvements" is debatable, this might be an interesting step forward.
Just imagine that one were able to define a quality function taking into account many more elements (themes, bonuses, penalties) and to calibrate it better.
Then one might actually get real improvements in construction, or added variations, for existing problems.
Then the next step might be, e.g., applying the improvement algorithm with a well-calibrated quality function to some random setup of pieces.
Etc.
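[A minimal sketch of such an improvement loop, again assuming the python-chess library and reusing mate_in_two_keys from the sketch under post (2); the quality() placeholder just counts Black's defences after the key, which is exactly the kind of crude surrogate the calibration described above would have to replace:

```python
import chess

def quality(board):
    """Placeholder quality function: unsound positions are rejected outright;
    otherwise score by the number of Black defences after the key (crude!)."""
    keys = mate_in_two_keys(board)       # helper from the earlier sketch
    if len(keys) != 1:
        return -1.0
    board.push(keys[0])
    score = float(board.legal_moves.count())
    board.pop()
    return score

def neighbours(board):
    """Positions one small edit away: here, one piece shifted to an empty square."""
    for from_sq, piece in list(board.piece_map().items()):
        for to_sq in chess.SQUARES:
            if board.piece_at(to_sq) is None:
                b = board.copy()
                b.remove_piece_at(from_sq)
                b.set_piece_at(to_sq, piece)
                if b.is_valid():
                    yield b

def improve(board, steps=50):
    """Hill climbing: move to the best neighbouring position while it scores higher."""
    best, best_score = board, quality(board)
    for _ in range(steps):
        cand = max(neighbours(best), key=quality, default=None)
        if cand is None or quality(cand) <= best_score:
            return best                  # local optimum reached
        best, best_score = cand, quality(cand)
    return best
```

Applied to a random setup of pieces, improve() is the "next step" mentioned above; everything interesting hides in how quality() and neighbours() are defined.]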

Of course, all of this goes hand in hand with improvements in hardware...

The challenge now is to get a quality function that is complex enough and quick enough...

Also, the continuing support of experts helps a lot, including correcting the notion of "improvement". That can be arranged, I think.

But I do not feel like joking about this article, as I can theoretically imagine it working in the described sense in the not-so-close future. In the 1980s, hardly anybody seriously admitted that AI could become unbeatable by top human players.
 
   
(5) Posted by Kevin Begley [Saturday, Dec 8, 2018 23:01]

Is "Alpha Zero-Position" trademarked yet?

We laugh now -- the way chess players used to laugh at hxg5?? (the toaster took my knight!!) -- but, it's coming.
Err, actually, that day has already dawned -- even the first Babson was computer tested -- it's only a matter of further reducing human involvement (until night falls on human involvement).

It's bad enough that automation can depress labor markets; imagine humanity made obsolete in our most passionate hobbies (the composing of poetry, music, chess problems, art).

Even the software designers are less directly involved today -- create a learning neural network, and the computer does the rest. You needn't be an expert on standards -- merely provide a quantized feedback of success versus failure.
If you want more direct involvement, consider a job promoting composing machines (note: if history is a guide, free-thinkers need never apply).

At some point, even taking your inspired idea to the composing computer would be like some patzer knocking on Team Caruana's door, insisting his vital analysis of the Petroff demands immediate entry.

Imagine young Bobby Fischer never winning a single game against the Soviet machine. NEVER.
Get beyond the bleakness, and ask this: what would be his next move (when passionately determined to be of some consequence in this endeavor)?
 
   
(6) Posted by Marjan Kovačević [Sunday, Dec 9, 2018 02:30]; edited by Marjan Kovačević [18-12-09]

Diagram 3, "The best improvement by CHESS COMPOSER", is obviously misprinted. Also, the statement that it "is considered much better than the original of Diagram 1" is unclear to me. By whom is it considered to be much better?
 
   
(7) Posted by Hauke Reddmann [Sunday, Dec 9, 2018 11:13]; edited by Hauke Reddmann [18-12-09]

Well, by the goodness function of the algorithm,
which was written by the authors of the article
(after discussion with experts, as they state). I now see
that I should have linked to their first article too;
I didn't do the (references) research... Here we go:

EDIT: oops, here we don't, since the link is mangled -
you have to, e.g., enter "HaCohen-Kerner ICP" into
Google Scholar and download "An Improver of Chess Problems"...
let's try another source one last time...

http://www.jct.ac.il/sites/default/files/library/Publications/articles/1997-1999/kerner-1999b.pdf

Here the evaluation function is made explicit. Frankly, to
me this looks like Robovaux :-), but as was already stated
in the thread, let's laugh as long as we still can.

The scientific follow-up seems obvious to me: throw
a million chess problems, together with their eventual awards,
at a neural net and hope something sensible comes out.
This isn't half as far-fetched as it might sound - IF the
approach of the authors is sensible at all, the features
they use look easily learnable by a neural net to me, since
they are implicitly contained in the data.
"Modern" stuff (read: alphabet soup :P) is also formalizable
and learnable.
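[A minimal sketch of that follow-up, assuming each problem has already been reduced to a fixed-length vector of formal features plus an award label; the file names, feature set and label encoding below are made up, and scikit-learn's small MLP stands in for "a neural net":

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Hypothetical inputs: one row per problem with formal features such as
# [white material, black material, number of variations, flight-giving key,
#  changed mates, ...], and one award label per problem
# (e.g. 0 = no award, 1 = mention, 2 = prize).
X = np.load("twomover_features.npy")      # placeholder file names
y = np.load("twomover_awards.npy")

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

# A small MLP as a stand-in for "a neural net"; nothing here is tuned.
net = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=1000, random_state=0)
net.fit(X_train, y_train)

print("held-out accuracy:", net.score(X_test, y_test))
```

Whether the net picks up anything beyond trivial correlates of the award is exactly the open question.]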

Hey, I still have a Master's thesis and a Ph.D. in computer
science to write; this sounds like a fun topic :-)


Hauke
 
 
(8) Posted by Kevin Begley [Sunday, Dec 9, 2018 11:13]; edited by Kevin Begley [18-12-09]

Agreed. Their standards for measuring quality/improvement are misbegotten.

It's unfair to blame the problem consultants they've named in the paper. If you've never been involved with academics, be aware: they want only to name some authorities on a given subject, then proceed without/against good counsel. They went the wrong direction, and I very much doubt their consultants can be happy about being associated with the poorly conceived standards of problem quality.

These researchers are somehow under the false impression that moving a black King toward the center must always constitute a major improvement, and that increasing the number of variations is always favorable -- even when it incurs duals in those variations.
They must wonder how so many composers lazily neglect to retest after clicking the position shift button.
Yeah, we really need a robot to handle the centering of our art in the frame. Thank you, Claude Shannon!

Sigh. They have the right idea for automating the composing process, but they've grossly underestimated the challenge in measuring a problem's quality.

In all honesty, top composers will disagree about more than the fine points of constructional quality. To the extent that this can be considered a science (e.g., legal position, sound problem, key takes no unprovided flight, etc.), the machine should be looking ONLY to reject severely flawed realizations. It's enough to narrow the options, and it's folly to presume a machine can select the version of optimal quality when, again, top composers do not agree on this (that's an art, not a quantifiable science).
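[A sketch of such a reject-only filter, again assuming python-chess and reusing mate_in_two_keys from the sketch under post (2); the individual checks are illustrative stand-ins, not an agreed list of formal flaws:

```python
import chess

def severely_flawed(fen):
    """Reject-only filter: return a reason if the realization is clearly
    unusable, otherwise None.  Everything subtler is left to human judgement."""
    board = chess.Board(fen)
    if not board.is_valid():
        return "illegal position"
    keys = mate_in_two_keys(board)        # soundness check from the earlier sketch
    if len(keys) == 0:
        return "no solution"
    if len(keys) > 1:
        return "cooked (multiple keys)"
    key = keys[0]
    # Crude examples of formal flaws; the real list would be agreed with experts.
    if board.is_capture(key):
        return "key captures a unit"
    if board.gives_check(key):
        return "checking key"
    return None

# Usage: keep only candidates that pass, and let humans pick among the survivors.
# survivors = [fen for fen in candidates if severely_flawed(fen) is None]
```
]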

If we could provide a scientific measure of quality, problem composing would increasingly require computer science credentials.
Automated composer designers should avoid the misguided attempt to improve published problems -- there's little reward there (even when you make a substantial savings/contribution); plus, they are bound to favor a version which was smartly rejected by the composer (for reasons which even experienced composers may need a few lectures to appreciate). They should pan for new treasures (reject what does not sparkle and allow independent consultants to help make the final selection -- wherein they may learn the finer points of artistic considerations and style).

They should focus on realizing thematic tasks. Create a fairy tablebase (using a promising variant), and pan for new Forsberg twinning possibilities, for example. If you get the right kind of Forsberg, maybe a good retro algorithm nets a Babson.

Chase after something consequential. Bring down some white whale. Don't go shifting miniature #2 problems around the damn board, attempting quantum-precision measurements of problem quality (registering fluctuations in the queen-odds-game fabric). Such an absurd endeavor risks calling Jonathan Swift from his grave. We don't want that. He'd accuse us all of talking to horses.
 
   
(9) Posted by Torsten Linß [Monday, Dec 10, 2018 05:35]

About two years ago I had a look at Iqbal's PhD thesis. I was -- to a certain extent -- surprised that a PhD had been awarded for this naive "research". I was even more surprised by the publicity it got. He completely failed to acquaint himself with some of the fundamentals of chess composition. He picked a dozen formal (!!) criteria for assessing the quality of a chess problem, which MOE proved to be insufficient more than 3 decades earlier. Worst of all, he arbitrarily (!!) picked one (!!) variation as the main variation on which to base his program's "judgement", thus completely missing the point that the relation between two equally weighted main variations may be the essence.

With regard to Kevin's [I hope I understood you correctly, or at least to 90%] suggestion that a neural network (which seems to be the solution to almost everything these days) will be able to judge the quality of chess compositions, and subsequently maybe be able to improve upon the construction, I'm very skeptical. Even if you train a neural network with all existing and all honoured chess problems, I doubt the network will be able to make any sense of this. It will only get confused. There is too little consistency in the awards, because too many non-objective aspects are involved.

On the other hand, computer programs and technology have changed the way we compose in the last 40 years. And there is no end to this. There has been a first revolution with the advent of solving/testing programs in the early 1980s. Over the years those programs have enabled composers to realise more and more complex or more subtle ideas in more brilliant and polished settings.

More recently, we have seen Christian Poisson's WinChloe and Viktoras Paliulionis' Helpmate Analyzer -- programs that are able to detect themes. And we have seen the extensive use of tablebases, resulting in discoveries that nobody would have thought possible 20 years ago. And there is more to come!

However, I'm convinced that no matter how powerful computers get, human creativity will always be superior. But in contrast to some modern-day Luddites [there are quite a number among us problemists], one has to use modern technologies and take advantage of them in order to excel, not ban them.

Anyway, I'm looking forward to what lies ahead in CACC (computer aided chess composition) and hope to contribute a little bit too.
 
   
(10) Posted by Hauke Reddmann [Tuesday, Dec 11, 2018 13:21]; edited by Hauke Reddmann [18-12-12]

The neural net was my suggestion. Maybe I'll even use it for my studies
(even if the result is an EPIC FAIL, that would also be a result).
Depends. Don't hold your breath.

In the meantime, I asked Udo Degener (yes, the Albrecht database
would be suitable for science) and he provided me with a first
interesting statistic while I was musing whether...

WinChloe, 195,218 #2
Prize = 20,794 (material 9.33 + 8.35 = 17.69)
Recomm = 14,678 (material 9.12 + 8.13 = 17.26)
Mention = 12,499 (material 8.86 + 7.80 = 16.66)
No prize = the rest (material 8.15 + 6.98 = 15.13)

...i.e., the stonier, the merrier :-) (I didn't compute whether
one piece is significant, but with that large a database, I bet it is.)
(SON OF EDIT: Udo confirmed this tendency prevails in 3# and n#
and blames it on miniatures.)
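[For the record, a sketch of how one might reproduce such a statistic from a database export, assuming a hypothetical CSV with "fen" and "award" columns (column names and award labels are made up; the python-chess library does the piece counting):

```python
import csv
from collections import defaultdict

import chess

# Hypothetical export of the #2 database: one row per problem with columns
# "fen" and "award" (award in {"Prize", "Recomm", "Mention", "NoPrize"}).
stats = defaultdict(lambda: [0.0, 0.0, 0])   # award -> [white units, black units, n]

with open("twomovers.csv", newline="") as f:
    for row in csv.DictReader(f):
        pieces = chess.Board(row["fen"]).piece_map().values()
        white = sum(1 for p in pieces if p.color == chess.WHITE)
        black = sum(1 for p in pieces if p.color == chess.BLACK)
        entry = stats[row["award"]]
        entry[0] += white
        entry[1] += black
        entry[2] += 1

for award, (w, b, n) in stats.items():
    print(f"{award:8s}  white {w / n:.2f}  black {b / n:.2f}  total {(w + b) / n:.2f}")
```
]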

So, the net could detect material instead of prize-y-ness,
like in the famous tank-finder example where the net didn't
detect tanks but sunny weather... :-)

Hauke
 
   
(11) Posted by Geoff Foster [Tuesday, Dec 11, 2018 23:14]

Could a neural network be given all current examples of a certain "letter-pattern" theme, and then become adept at composing problems showing that theme?
 
   
(12) Posted by Kevin Begley [Wednesday, Dec 12, 2018 07:46]; edited by Kevin Begley [18-12-12]

There's no point trying to quantify the value of a chess problem. And yet, we do do that (see the FIDE Album point scores).
That's probably another discussion, and I'll not have it -- I have sworn off any conversation which lands me on an old soapbox.
Suffice it to say, there is a matter of subjective human irrationality to consider (humans can make zero sense, whereas computers are prohibited from this desperate act of humanity).

But, as I stated earlier, an automated composer should focus on realizing unprecedented tasks/themes (where it need never bother about the finer points of artistic style).
If the pursuit is significant, nobody cares whether the construction might have been centered in the frame of the blooming chessboard.
If the pursuit is insignificant, people talk about the centering.

If a composer says he saved the life of an animal, and the judge ponders whether he could have achieved this without the headband, we may conclude the composer has saved a mosquito.

Automated problem composers could today achieve substantial successes absent any human involvement -- though, this strikes me as an academic goal (for the sake of academia); it is enough to greatly reduce the amount of human involvement necessary to produce gems, so that humans may pursue, ummm, the discovery of computer-resistant hobbies, I suppose.

Why ask a machine to subjectively differentiate between two sound, thematically equivalent problems, when you can declare success upon having found one realization? Pursue constraints where the demonstration of validity is a major achievement, and you need never worry about petty constructional tradeoffs.
I have encountered no human who spends time admiring the constructional brilliancy which was required to center a problem on the chessboard.

And, if presented with two sound, thematically equivalent problems, rarely could even the best problemist guess which one was favored by the human/computer.
Even if humans had developed clear, objective rules governing our artistic discernment, we would only be more likely to violate our own rules (for reasons which could only be revealed after a full cup of hot tea).

Most composers will admit their best work was denied a prize, and their worst work was (embarrassingly) celebrated.
It's folly for humans to use awards as a rational basis for evaluating their own works -- imagine what dystopian horror would ensue if computers began using awards as a quality-control measurement.

Imagine grading a child entirely on the basis of having no bubblegum under their desk -- what could be more counterproductive?

At any rate, I'm glad somebody earned a PhD for that research; otherwise, it would have all been for nothing.
 
   
(13) Posted by Hauke Reddmann [Wednesday, Dec 12, 2018 14:04]; edited by Hauke Reddmann [18-12-12]

@Geoff: A neural net can detect ANY pattern. It may just be hopelessly inefficient for this task.
(It might need a trillion chess problems to find the pattern, so I'd prefer to run
WinChloe on the task, or use the annotated Albrecht database.)
(EDIT: Composing would then amount to randomly generating another trillion new problems and
keeping the ones that show the theme and meet some of the composer's wishes. It is obvious that the
knowledge-based approach is still superior.)

Hauke
 
 
