For Participants



How to proceed

The competition essentially consists of two phases: the training phase and the challenge phase. The training phase runs from 21.05. to 30.06.2013 and is intended to warm up the participants. The challenge phase runs from 05.07. to 30.09.2013.

You may proceed as follows:

  • Step 1. Download the problems (Java or C), the properties, and, where given, the traces, which particularly address participants (also) using passive learning methods.
  • Step 2. Select your favourite methods or tools to find out which properties are satisfied. You may use your own tool, other people's tools, or any combination thereof - the challenge is completely free-style. Some prominent tools you may consider are:
    1. SPIN
    2. mCRL2
    3. LTSmin
    4. LearnLib
    If you would like your tool to be listed here, or if you have any other tool suggestions, please contact us!
  • Step 3. Test the strength of your methods or tools on the training problems. We will upload the solutions on 05.07.2013.
  • Step 4. Once you are warmed up from working with the training problems, you will be ready for the actual challenge. Repeat Step 1 to Step 3 for the challenge problems. Good luck!
  • Step 5. Submit your solutions here.
  • Step 6. Receive your evaluation and, hopefully, also your achievements and rewards during the RERS meeting, which in particular also discusses the profile of future RERS problems.

We have set up the challenge problems for Java and C almost identically. The main difference is that input and output symbols are given as strings in the Java setting and as plain numbers in the C setting (an explicit request from that community). Despite this difference, one can proceed in exactly the same fashion, e.g.:

  • For solving the implicit problems, you might analyse the code for error/exception/assertion labels. Each such label occurring in the code defines a reachability problem, which you can solve with the method of your choice. There are no limitations.
  • The explicit problems, even if reminiscent of typical model checking problems, may also be dealt with in any fashion you like, e.g. dataflow analysis, symbolic execution, testing, learning, (statistical) model checking, runtime methods, and so on.
  • The challenge is free-style. You are allowed to patch the code in any way you like, but validity must, of course, be stated with respect to the original problems.
  • You should first concentrate on the problems and properties you can master well. You do not have to give an answer to all problems. Of course, the more problems you can tackle, the more points you may be able to win, but please be aware that wrong answers carry a large penalty!
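
The idea behind the implicit reachability problems can be illustrated with a miniature sketch. The class below is purely hypothetical - it is not the actual RERS problem format - but it shows how each error label embedded in reactive, input-driven code poses one yes/no reachability question.

```java
// Hypothetical miniature in the spirit of an implicit reachability problem
// (NOT the actual RERS problem format): each error label defines one
// question -- does some input sequence trigger it?
public class MiniProblem {
    private int state = 0;

    // Processes one input symbol and returns an output symbol,
    // or hits an error label.
    public String step(String input) {
        if (state == 0 && input.equals("A")) { state = 1; return "X"; }
        if (state == 1 && input.equals("B")) {
            // Reachable: the input sequence A, B leads here.
            throw new IllegalStateException("error_37");
        }
        if (state == 1 && input.equals("A")) { state = 2; return "Y"; }
        if (state == 3) {
            // Unreachable in this sketch: no transition sets state to 3.
            throw new IllegalStateException("error_12");
        }
        return "Z";
    }

    public static void main(String[] args) {
        MiniProblem p = new MiniProblem();
        p.step("A");
        try {
            p.step("B");
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage() + " is reachable");
        }
    }
}
```

Here a simple exploration of input sequences settles both questions; the real challenge problems are, of course, far too large for naive enumeration, which is where the tools listed above come in.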

Concerning the second form of ranking, please write a short summary of your approach, the encountered hurdles, your solutions, and your results. In these summaries honesty, also concerning weaknesses of the employed methods, is important. Our challenge aims at profiling the various approaches and methods, which in particular means that weaknesses need to be identified. Of course, we are also very interested in new ideas and solutions that were motivated by the challenge. Details about the ranking and evaluation method can be found here.

If you have completed all (or some) of the tasks, we kindly request that you submit your solutions in a CSV-style format. Solutions for each of the problems should be specified in the following way:

spec, answer

  • spec is the property specification identifier, where numbers 0 to 99 correspond to the LTL formulae for each problem and numbers 100 to 159 are reserved for the reachability questions concerning the error labels (100 is for error_0 and 159 for error_59). Alternatively, you can directly specify the LTL expression or the name of the error label, respectively.
  • answer expresses whether or not you believe the error label to be reachable, or the respective LTL property to be satisfied. This may be expressed as true/false, yes/no, or 1/0.
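
The spec-id convention above is a simple offset rule, sketched here as a small helper (the function name is illustrative, not part of the challenge):

```java
// Sketch of the spec-id convention: ids 0-99 name the LTL formulae,
// ids 100-159 name the reachability questions for error_0 .. error_59.
public class SpecId {

    // Maps a numeric spec identifier to the property it refers to.
    static String specName(int spec) {
        if (spec >= 0 && spec <= 99) {
            return "LTL formula #" + spec;
        }
        if (spec >= 100 && spec <= 159) {
            return "error_" + (spec - 100);  // 100 -> error_0, 159 -> error_59
        }
        throw new IllegalArgumentException("unknown spec id: " + spec);
    }

    public static void main(String[] args) {
        System.out.println(specName(100));  // error_0
        System.out.println(specName(159));  // error_59
    }
}
```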

Please choose for each of the CSV files a name which clearly identifies the respective problem, along with an id that uniquely identifies the solution as yours, such as "problemX-.csv". Alternatively, you may combine your results into a global CSV with 3 columns per row, the first one designating the number of the problem. Hence, a line in this file for a hypothetical problem 0 might look like this:

0, error_37, true
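
Producing such a combined file is straightforward; the sketch below assumes the three-column layout described above (problem number, spec, answer), while the output file name and the concrete answers are purely illustrative:

```java
// Minimal sketch of emitting solution lines in the combined three-column
// CSV layout (problem number, spec, answer). File name and answers are
// illustrative, not prescribed by the challenge.
import java.io.IOException;
import java.io.PrintWriter;

public class WriteSolutions {

    // One CSV row: problem number, spec identifier, answer.
    static String csvLine(int problem, String spec, boolean answer) {
        return problem + ", " + spec + ", " + answer;
    }

    public static void main(String[] args) throws IOException {
        try (PrintWriter out = new PrintWriter("solutions.csv")) {
            out.println(csvLine(0, "error_37", true)); // error_37 reachable
            out.println(csvLine(0, "42", false));      // LTL formula 42 violated
        }
    }
}
```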

Submission is now closed and all solutions have been uploaded to the respective problem pages.