  
A current article with an overview and comparison of relevant approaches: https://cs.adelaide.edu.au/~markus/pub/2017joh-ttp.pdf (Springer: http://link.springer.com/article/10.1007%2Fs10732-017-9328-y)


=== Suggestions by Thomas Stuetzle ===

I think SAT and TSP could be candidate problems, but at least for SAT this would very likely lead to duplication: the SAT community has long made regular efforts to evaluate complete SAT solvers and local search solvers in the SAT competitions (I think they are held annually or every two years).

For TSP there was the TSP challenge by David Johnson et al.; I think for TSP it would make sense to try new benchmarks.

> For both concerns I would like to draw your attention to the following:
>
> "These benchmark functions should capture the difficulties of combinatorial optimization problems in practice, but at the same time be comprehensible so that the resulting algorithm behaviors can be understood or interpreted."
>
> and
>
> "The sought benchmark functions should ideally be scalable with the problem size and non-trivial in the black-box optimization sense, i.e., allow for shifting the optimum to any point."

If we do not get instances at least as hard as those in these competitions [SAT, TSP] and algorithms that are competitive with those already used in these competitions, I don't see what particular kind of impact one would make. Obviously, it is interesting to study algorithm behavior and so on, but I think the most interesting thing is to study the behavior of the high-performing algorithms (as is also typically done in the BBOB case!).

I think for each of the benchmarks one should provide some baseline algorithms that are publicly available and that people can use to see whether they can compete (if this is a goal).
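
For concreteness, a minimal sketch of what such a publicly available baseline could look like, here a simple (1+1) EA for maximising a pseudo-Boolean function over bit strings (the function name, parameters, and defaults below are illustrative, not an agreed reference implementation):

<code python>
import random

def one_plus_one_ea(fitness, n, max_evals=10000, seed=0):
    """Baseline (1+1) EA: keep a single bit string, flip each bit independently
    with probability 1/n, and accept the offspring if it is at least as good."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    fx = fitness(x)
    for _ in range(max_evals - 1):
        y = [b ^ 1 if rng.random() < 1.0 / n else b for b in x]
        fy = fitness(y)
        if fy >= fx:  # maximisation; ties broken in favour of the offspring
            x, fx = y, fy
    return x, fx

# Example: run the baseline on OneMax (number of ones in the string).
best, value = one_plus_one_ea(lambda x: sum(x), n=50)
print(value)
</code>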

By the way, when doing such things seriously for combinatorial optimization, one notices that BBOB is so much easier, as there is essentially only one problem with many instances, while to cover combinatorial optimization one would have to cover many problems, each with many instances and often with specialized algorithms.

An added difficulty is that in combinatorial optimization what usually counts is computation time, not the number of function evaluations. So one needs a very good surrogate for computation time, a conversion between computation times, or to evaluate all algorithms on the same platform.

As a side note: I would exclude polynomially solvable problems (at least those for which O(n log n) or O(n^2) algorithms exist), unless one goes to very large instances.

> Quite a few people would also like to have easy problems, to show when some algorithms or parameter settings are really bad.

I think it is fine to do so when this is well motivated (which would be the case in what you mention). However, I would then avoid making a challenge out of solving polynomial problems well if this has no positive effect on the original NP-hard ones, as this can easily degenerate into a pseudo-competition (at least if one does it only experimentally; in some situations I would not exclude that something interesting results).

=== Suggestions by Hernan Aguirre (On Twitter) ===

Kauffman's NK-landscapes for single-objective optimization and their extensions to multi- and many-objective optimization (MNK-landscapes and rhoMNK-landscapes) offer the possibility of testing on sub-classes of problems that require a binary representation. They are a must for serious benchmarking.
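
As an illustration, a minimal sketch of a random NK-landscape: each bit interacts with k randomly chosen other bits, and each bit's contribution is read from a random lookup table over those k+1 bits (the names and the seeding scheme below are illustrative only):

<code python>
import itertools
import random

def make_nk_landscape(n, k, seed=0):
    """Random NK-landscape: bit i interacts with k other bits chosen at random;
    its contribution comes from a random lookup table over those k+1 bits."""
    rng = random.Random(seed)
    neighbours = [rng.sample([j for j in range(n) if j != i], k) for i in range(n)]
    tables = [{key: rng.random() for key in itertools.product((0, 1), repeat=k + 1)}
              for _ in range(n)]

    def fitness(x):  # x is a sequence of n bits (0/1)
        contributions = (tables[i][(x[i],) + tuple(x[j] for j in neighbours[i])]
                         for i in range(n))
        return sum(contributions) / n  # average contribution, the usual normalisation
    return fitness

f = make_nk_landscape(n=20, k=3, seed=42)
rng = random.Random(1)
x = [rng.randint(0, 1) for _ in range(20)]
print(f(x))
</code>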

=== Suggestions by Dirk Sudholt ===

There are a lot of well-studied problems with a property called **bit-flip symmetry**: flipping all bits gives a solution of the same fitness. This means that there are always at least two global optima. Such problems have been popular because search algorithms need to break the symmetry between good solutions [1].

Well-known examples include the function **TwoMax**(x) := \max\{\sum_{i=1}^n x_i, \sum_{i=1}^n (1-x_i)\} [1], which has been used as a challenging test bed in theoretical studies of diversity-preserving mechanisms [2-4].
The function **H-IFF (Hierarchical If and only If)** [5] consists of hierarchical building blocks that need to attain equal values in order to contribute to the fitness. It was studied theoretically in [6,7] and is frequently used in empirical studies, see, e.g., [8,9].
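
A minimal sketch of both functions (H-IFF following the usual hierarchical definition of Watson et al. [5], which assumes the string length is a power of two; blocks of all-equal bits contribute their length at every level, including single bits):

<code python>
def twomax(x):
    """TwoMax(x) = max(#ones, #zeros); the two global optima are 1^n and 0^n."""
    ones = sum(x)
    return max(ones, len(x) - ones)

def hiff(x):
    """H-IFF: at every level, a block contributes its length if all of its bits
    are equal; block sizes are 1, 2, 4, ..., n (len(x) must be a power of two)."""
    n = len(x)
    total, size = 0, 1
    while size <= n:
        for start in range(0, n, size):
            block = x[start:start + size]
            if all(b == block[0] for b in block):
                total += size
        size *= 2
    return total

print(twomax([1, 1, 0, 1]))  # 3
print(hiff([0, 0, 0, 0]))    # 4*1 + 2*2 + 1*4 = 12 (global optimum for n = 4)
</code>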

In terms of classical combinatorial problems, the **Graph Colouring** problem asks for an assignment of colours to vertices such that no two adjacent vertices share the same colour. For two colours, a natural setting is to use a binary encoding for the colours of all vertices and to maximise the number of bichromatic edges (edges with differently coloured end points). A closely related setting is that of simple **Ising models**, where the goal is to minimise the number of bichromatic edges. For bipartite (that is, 2-colourable) graphs this is equivalent to maximising the number of bichromatic edges, as inverting one set of the bipartition turns all monochromatic edges into bichromatic ones and vice versa. An advantage of the Ising model setting is that good quality solutions are easily recognisable: subgraphs with equal colours contribute well to the fitness. So I suggest adding the Ising model to the benchmark suite, while keeping in mind that it is essentially equivalent to the classical Graph Colouring problem.
The following graph classes are of particular interest; all of them are scalable, subject to mild constraints (n being a square number for toroids, or one less than a power of 2 for complete binary trees); a small encoding sketch is given after the list:
  - **Ring/cycle graphs** [10] can be optimised by the (1+1) EA in expected time O(n^3) and by a GA with crossover and fitness sharing in expected time O(n^2) [10]. The landscape contains plateaus that are easy to overcome.
  - **Toroids** with side lengths sqrt(n) x sqrt(n) [11] are more difficult to optimise as they contain difficult local optima. Nevertheless, a Metropolis algorithm succeeds in expected polynomial time [11].
  - **Complete binary trees** [12] contain many local optima, some of which are extremely difficult to escape from (e.g. a root with differently coloured subtrees). While mutation-only EAs need exponential time in expectation, a (2+2) GA with fitness sharing succeeds in expected time O(n^3) [12].
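
A minimal sketch of how such instances could be encoded: a graph is an edge list over vertices 0..n-1, a candidate solution is a bit string assigning a colour (spin) to each vertex, and the Ising-model fitness to be maximised is the number of monochromatic edges (the function names below are illustrative, not a fixed interface):

<code python>
import math

def bichromatic_edges(x, edges):
    """Edges whose endpoints get different colours (maximise for 2-colouring,
    minimise for the Ising model)."""
    return sum(1 for u, v in edges if x[u] != x[v])

def ring_edges(n):
    """Ring/cycle graph on n vertices."""
    return [(i, (i + 1) % n) for i in range(n)]

def torus_edges(n):
    """Two-dimensional torus with side length sqrt(n); assumes n is a square
    number with side length at least 3 (so no duplicate edges arise)."""
    side = math.isqrt(n)
    assert side * side == n and side >= 3
    edges = []
    for r in range(side):
        for c in range(side):
            v = r * side + c
            edges.append((v, r * side + (c + 1) % side))    # right neighbour (wraps around)
            edges.append((v, ((r + 1) % side) * side + c))  # lower neighbour (wraps around)
    return edges

def binary_tree_edges(n):
    """Complete binary tree with n = 2^k - 1 vertices, rooted at vertex 0."""
    return [(v, c) for v in range(n) for c in (2 * v + 1, 2 * v + 2) if c < n]

# Ising model on a ring: maximise the number of monochromatic edges.
edges = ring_edges(8)
x = [0, 0, 0, 0, 1, 1, 1, 1]
print(len(edges) - bichromatic_edges(x, edges))  # 6 of the 8 edges are monochromatic
</code>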

<sub>
[1] Goldberg, D. E., Van Hoyweghen, C., and Naudts, B. (2002). From TwoMax to the Ising model: Easy and hard symmetrical problems. In Proceedings of the Genetic and Evolutionary Computation Conference (GECCO 2002), pages 626–633. Morgan Kaufmann.\\
[2] Oliveto, P. S., Sudholt, D., and Zarges, C. (2018+). On the benefits and risks of using fitness sharing for multimodal optimisation. Theoretical Computer Science. To appear.\\
[3] Covantes Osuna, E. and Sudholt, D. (2018+). On the runtime analysis of the clearing diversity-preserving mechanism. Evolutionary Computation. To appear.\\
[4] Covantes Osuna, E. and Sudholt, D. (2018). Runtime analysis of probabilistic crowding and restricted tournament selection for bimodal optimisation. In Proceedings of the Genetic and Evolutionary Computation Conference (GECCO 2018), pages 929–936. ACM.\\
[5] Watson, R. A., Hornby, G. S., and Pollack, J. B. (1998). Modeling building-block interdependency. In Parallel Problem Solving from Nature – PPSN V, pages 97–106. Springer Berlin Heidelberg.\\
[6] Dietzfelbinger, M., Naudts, B., Van Hoyweghen, C., and Wegener, I. (2003). The analysis of a recombinative hill-climber on H-IFF. IEEE Transactions on Evolutionary Computation, 7(5):417–423.\\
[7] Goldman, B. W. and Sudholt, D. (2016). Runtime analysis for the parameter-less population pyramid. In Proceedings of the Genetic and Evolutionary Computation Conference (GECCO 2016), pages 669–676. ACM.\\
[8] Thierens, D. and Bosman, P. A. N. (2013). Hierarchical problem solving with the linkage tree genetic algorithm. In Proceedings of the Genetic and Evolutionary Computation Conference (GECCO 2013), pages 877–884. ACM.\\
[9] Goldman, B. W. and Punch, W. F. (2015). Fast and efficient black box optimization using the parameter-less population pyramid. Evolutionary Computation, 23(3):451–479.\\
[10] Fischer, S. and Wegener, I. (2005). The one-dimensional Ising model: Mutation versus recombination. Theoretical Computer Science, 344(2–3):208–225.\\
[11] Fischer, S. (2004). A polynomial upper bound for a mutation-based algorithm on the two-dimensional Ising model. In Proceedings of the Genetic and Evolutionary Computation Conference (GECCO 2004), pages 1100–1112. Springer.\\
[12] Sudholt, D. (2005). Crossover is provably essential for the Ising model on trees. In Proceedings of the Genetic and Evolutionary Computation Conference (GECCO 2005), pages 1161–1167. ACM Press.
</sub>
  
==== PART B: Evaluating Algorithm performance ====
=== Recent New Members (Since October 2017) ===
Jörg Lässig <jlaessig@googlemail.com>,
Thomas Weise <tweise@ustc.edu.cn>,
Thomas Stuetzle <stuetzle@ulb.ac.be>.
  
~~DISCUSSION~~