We conducted a series of traversal experiments with and without dynamic reordering enabled. In all cases a suitable variable order was derived. We used the same initial variable order and the same dynamic reordering parameters in all tools. For each set of benchmarks we present three tables: a general table with information about the benchmarks, and two result tables, one for traversal methods without reordering and one with dynamic reordering enabled.
In Table 3 we give the number of variables needed to encode each benchmark and the number of reachable states. We also give the number of parts (TR P) into which the TR was split when using NuSMV, VIS or TranSyT. The number of parts in TranSyT corresponds exactly to the number of events defined in the system (and, in NuSMV, to the number of processes). We also report the TR sizes for both partitionings, measured as the total number of non-shared BDD nodes over all clusters or events (TR S). The variable order used to gather these statistics was the same for all tools and equal to the order used in the fixed-order traversal. Note that we used several partitioning schemes in VIS, so this tool has two entries. The Hybrid and IWLS methods share the same partitioning, as we used the IWLS method to conjoin in the Hybrid. Recall that the TranSyT partitioning is disjunctive, while in NuSMV and VIS it is conjunctive.
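The difference between the two partitioning styles can be illustrated with a small explicit-state sketch, using plain Python sets in place of BDDs (all names here are illustrative and not taken from any of the tools): under a disjunctive partition the TR is an OR of per-event relations, so the image of a state set is the union of per-event images; under a conjunctive partition the TR is an AND of clusters, so the full relation is the intersection of the cluster relations.

```python
# Explicit-state sketch of disjunctive vs. conjunctive TR partitioning.
# A relation is a set of (state, next_state) pairs; states are just ints here.

def image_disjunctive(states, event_rels):
    """Disjunctive partition (TranSyT-style): TR = OR of event relations,
    so the image is the union of the per-event images."""
    nxt = set()
    for rel in event_rels:
        nxt |= {t for (s, t) in rel if s in states}
    return nxt

def image_conjunctive(states, cluster_rels):
    """Conjunctive partition (NuSMV/VIS-style): TR = AND of clusters.
    Here the clusters are simply intersected before taking the image;
    BDD tools instead interleave conjunction with early quantification."""
    full = set.intersection(*cluster_rels)
    return {t for (s, t) in full if s in states}

# Tiny demo: two events, then two clusters whose conjunction is the same TR.
r_up = {(0, 1)}
r_dn = {(1, 2)}
print(image_disjunctive({0, 1}, [r_up, r_dn]))        # {1, 2}

c1 = {(0, 1), (1, 2), (0, 2)}
c2 = {(0, 1), (1, 2)}
print(image_conjunctive({0}, [c1, c2]))               # {1}
```

This explicit formulation loses the symbolic compactness that makes BDD-based partitioning worthwhile, but it captures why a disjunctive partition lets each event be applied independently, while a conjunctive one requires combining the clusters before (or while) computing an image.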
For the result tables (Table 4 and Table 5), the factors compared were the CPU time needed to complete the reachability analysis and the maximum representation size needed by the tool, in BDD nodes. The latter corresponds to the peak number of live nodes (in thousands) in the BDD manager.
In the fixed-order tables, we also include the number of iterations needed by the algorithm to finish the traversal, but these numbers are only directly comparable between BFS and BFS with chaining, as the other methods have different notions of ``iteration''. For this reason we also include the number of events fired: the number of TR parts that were applied to complete the process. For instance, in NuSMV and VIS this corresponds to the number of iterations multiplied by the number of clusters created. This value is not exact for the Hybrid method, since several conjunctions may be counted as one depending on the number of splits, but we kept the value as a reference.
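The two counters can be made concrete with a minimal explicit-state BFS fixed point over a disjunctively partitioned TR (again plain Python sets standing in for BDDs; the function and variable names are ours, not from any tool): one counter advances per fixed-point step, the other per TR part applied.

```python
def bfs_reach(init, event_rels):
    """BFS reachability over a disjunctively partitioned TR,
    counting both iterations and events fired.

    In a conjunctively partitioned tool each iteration applies every
    cluster, so events_fired would equal iterations * len(clusters)."""
    reached = set(init)
    frontier = set(init)
    iterations = 0
    events_fired = 0
    while frontier:
        iterations += 1
        nxt = set()
        for rel in event_rels:          # apply every part of the TR
            events_fired += 1
            nxt |= {t for (s, t) in rel if s in frontier}
        frontier = nxt - reached        # only genuinely new states
        reached |= frontier
    return reached, iterations, events_fired

# Demo: a 3-state chain reached from state 0 via two events.
reached, its, fired = bfs_reach({0}, [{(0, 1)}, {(1, 2)}])
print(reached, its, fired)              # {0, 1, 2} 3 6
```

The final iteration, which discovers no new states, is still counted, as it is needed to detect that the fixed point has been reached; chaining-style methods would instead apply events one at a time to the freshly updated state set, which is why their iteration counts are not directly comparable.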
For the tables where dynamic reordering was allowed, we also evaluated the impact of reordering on the results. Some parameters are obviously identical with and without reordering (namely, the number of iterations and the number of events fired), so in place of those columns we report the number of reorderings triggered during the traversal as well as the total time consumed by this task.
The experiments were conducted on a Pentium IV with 1 GB of memory, running Linux. Each process was limited to one hour of elapsed time; if the verification process did not finish within this limit, this is marked by a dash in the corresponding entry of the table.
We analyzed two parameterizable benchmarks: a tree arbiter and the well-known Muller C-element pipeline. Both benchmarks are signal transition graphs. The additional benchmarks are briefly described here: