MatPlus.Net Forum :: General :: Notation of C+ problems...
(1) Posted by Kevin Begley [Monday, Oct 28, 2013 17:28]; edited by Kevin Begley [13-10-28]

Notation of C+ problems... Food for thought...
Various preferences have been suggested for a universal "computability" symbol in chess problems.
Personally, I always favored the simple trio: C+/C-/C?...
Without getting into a subjective argument about preferences, I'd like to suggest here that two entangled pieces of information are noticeably absent from the commonly applied standards of our symbolism. That is: the limitations imposed upon solvers, and the composer's intended audience.
Most problems which are C? (read: neither cooked {C-}, nor 100% computer-verified {C+}) are implicitly intended to be solved exclusively by one individual human (who is prohibited even from developing an independent solving algorithm).
The challenge of most composed studies, for example, might diminish rapidly, if solvers are permitted any reliance upon computer resources (except, that is, when I'm the solver, lulled into trusting Fritz's +9 evaluation).
It was simpler, I'm sure, to make challenging problems, prior to computers (back when soundness was the big challenge). But, the same can be said of arithmetic problems (now rendered pointless, when taken beyond the basic complexity of algorithmic demonstration). From that perspective, some clear frontiers of chess problem evolution would seem readily predictable.
But that future cannot fully arrive until there is some recognition that an historical bias persists, due to the implicitly fictional assumption of an audience which will forever be universally constrained by the solving standards of the past.
Surely, problems composed for an audience of "advanced" solvers (a team of human and computer) can be uniquely appreciated as a means to PRESERVE some lost aspect of our full heritage -- adaptation has always been our best tradition.
I personally believe that this class of problems (with no restraints on computer solving) should merit distinct consideration -- and, beyond formalizing the rules for solvers, an explicit recognition of the intended audience might help judges avoid an unconscious bias. I know that sounds like too much imposition upon the politics of title competition; but, I'm actually suggesting nothing beyond a recognition that the composer's intended solving audience merits consideration, with respect to the computability symbol (and only because this conveys important information, which is prerequisite to the appreciation of any problem).
And, perhaps this could be elegantly facilitated within the context of one symbol, already striving for universality.
One other vital piece of information is still missing: the question of whether retro-analysis applies (information which is lost, the moment a problem is removed from the inherent rules of genre groupings).
Genre Classification, obviously, should have no import upon the solver's task -- this information must be encoded in a more intelligent manner (read: non-degradable, and with consistent purpose).
If other information is missing (or lost, or poorly classified, or unduly assumed, or redundant, etc), please feel free to share your views, here. I'm hopeful that a good universal encoding might someday emerge, but it's important to first consider what information is vital.
Until then, no quibbling about subjective preferences.
PS: given the rather high number of chess diagram errors, even from reputable journals, I also wonder whether the old redundancy/error-correction scheme (w + b + n) should be reconsidered. I don't suppose improvements are very useful if nobody (except the frustrated solver) bothers to observe the data.
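To sketch what I mean (purely illustrative -- not any journal's actual scheme, and all names here are my own), a diagram given as a FEN string could be checked against a published (w + b + n) triple along these lines; FEN has no notation for neutral pieces, so that count would have to be supplied separately:

```python
# Purely illustrative: check a diagram against a published (w + b + n) triple.
def count_pieces(fen_board: str) -> tuple[int, int]:
    """Count white and black pieces in the board field of a FEN string."""
    white = sum(1 for ch in fen_board if ch.isalpha() and ch.isupper())
    black = sum(1 for ch in fen_board if ch.isalpha() and ch.islower())
    return white, black

def check_diagram(fen: str, published: tuple[int, int, int], neutrals: int = 0) -> bool:
    """True if the diagram matches the published (white, black, neutral) counts.

    FEN cannot express neutral pieces, so their count is supplied separately
    by whoever transcribes the diagram.
    """
    board = fen.split()[0]            # board field only
    white, black = count_pieces(board)
    return (white, black, neutrals) == published

# Toy example: a caption of "(2+1)" with no neutral pieces.
print(check_diagram("8/8/3k4/8/3K4/2Q5/8/8 w - - 0 1", (2, 1, 0)))   # True
```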
(2) Posted by Ian Shanahan [Tuesday, Oct 29, 2013 18:18]; edited by Ian Shanahan [13-10-29]

If C+ represents 100% computer-verification, then regarding C? problems, I'd like to see a fourth symbol introduced - for problems which have been extensively computer-tested in conjunction with intelligent and apparently exhaustive human decision-making. For example: in the latest issue of "The Problemist", a Ser.S=25 by me has been published. Whilst it is not C+, I did work out every possible stalemating configuration 'by hand' and computer-tested each scenario individually using Popeye's ser.a=>b command. Therefore, it is "almost" C+, the only doubt about soundness arising from the (unlikely?) possibility that I erroneously overlooked some scenario. If I had to quantify it, I'd say that this problem, whilst strictly being C?, is 99% C+!

(3) Posted by Kevin Begley [Wednesday, Oct 30, 2013 14:51]; edited by Kevin Begley [13-10-30]

Good point, Ian -- thanks for raising that issue!
This information certainly deserves explicit consideration -- in fact, I meant to bring it up, myself (before my attention dashed to infinity).
I suspect that validation is nearly complete (somewhere between 95% and 99% confidence) in the majority of the cases you have described; but there are related scenarios which must be considered.
For example, validation based upon assumption techniques.
This is often encountered (or so I have been told) in testing long selfmates, which only come within the computer's horizon when accompanied by an additional (assumed) constraint (e.g., that the King must not be allowed to relocate).
In such cases, confidence of correctness depends entirely upon the value of the external assumption.
And, as you know, there are cases where validation may fall into zones of confidence which become very difficult to classify (or quantify).
It would seem unwise to proceed with encoding the gamut of incomplete validation scenarios, before we have a more complete picture of all the information necessary.
Hopefully, some grouping patterns will emerge, later (which help suggest an economical symbolism).
Until then, it seems wise to gather many opinions about all information which might be perceived necessary.
I hope this thread might help, in that respect.
And, I certainly agree that you -- and others -- have good reason to wish to express partial validation (important information).
So, thus far (beyond the obvious: diagram, stipulation, composer, source, year, etc.), a chess problem would require the following external information:
1) Validation
a) Completely Verified by computer.
b) Completely Invalidated (by computer/solver).
i) Cook.
ii) Major Duals (especially in thematic lines).
iii) Short Solution (demolished).
iv) No Solution (e.g., stipulation cannot be forced / intended solution involves an illegal move / etc.).
c) Partially Verified by computer.
i) Assumption Based Techniques.
ii) Plausible Scenarios Considered (?)
iii) Final ("mating") Position guaranteed (required to achieve the aim/stip).
d) Partially Invalidated (by computer/solver).
i) Minor Duals (non-thematic / on promotion).
2) Retro Considerations
a) Problem has no retro content.
b) Problem requires consideration of previous moves.
c) Problem is retro illegal (ignore retro).
3) Intended Solving Audience
a) Humans only (no computer assistance).
i) Individuals
ii) Teams
b) Advanced Solving Welcomed (computers OK).
i) Individual may use any available resources
ii) Individual may use only a self-developed algorithm (??)
iii) Teams (?)
c) According to Time Constraint (??).
4) Redundancy Check / Error Correction.
If this outline has noticeable vacancies (or might be regrouped to better effect), please share your insights here -- even if the information pertains only to future considerations (chess enthusiasts should always look ahead)!
We can all later discuss the import of various pieces of information.
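To make that concrete (purely as an illustration, not a proposed standard), the outline above might be carried as structured data rather than compressed into a single symbol -- a rough sketch in Python, with names that simply mirror the outline:

```python
# Purely illustrative: the outline above carried as structured metadata,
# rather than compressed into a single C+/C-/C? symbol.
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional

class Validation(Enum):
    VERIFIED = auto()               # 1a) completely verified by computer
    INVALIDATED = auto()            # 1b) cook, major dual, short or no solution
    PARTIALLY_VERIFIED = auto()     # 1c) e.g. assumption-based techniques
    PARTIALLY_INVALIDATED = auto()  # 1d) minor duals only

class Retro(Enum):
    NONE = auto()                   # 2a) no retro content
    REQUIRED = auto()               # 2b) previous moves must be considered
    ILLEGAL_IGNORED = auto()        # 2c) retro-illegal; retro ignored

class Audience(Enum):
    HUMAN_ONLY = auto()             # 3a) no computer assistance
    ADVANCED = auto()               # 3b) computer solving welcomed
    TIMED = auto()                  # 3c) according to time constraint

@dataclass
class SolvabilityRecord:
    validation: Validation
    detail: str = ""                # e.g. "assumption: the King is confined"
    retro: Retro = Retro.NONE
    audience: Audience = Audience.HUMAN_ONLY
    piece_count: Optional[tuple[int, int, int]] = None   # 4) (w + b + n) redundancy

# Ian's Ser.S=25 from post (2) might then be described as:
example = SolvabilityRecord(
    validation=Validation.PARTIALLY_VERIFIED,
    detail="every stalemate scenario tested individually with Popeye",
)
print(example.validation.name)   # PARTIALLY_VERIFIED
```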