The output consists of

  • An upper bound on the execution time. (*)
  • A trace, which is likely (but not guaranteed) to have an execution time close to the maximum — given as a sequence of blocks to be executed, together with values for the variables.
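
The pair above could be represented roughly as follows. This is only a sketch; the field names (`upper_bound_cycles`, `TraceStep`, etc.) and the example values are invented for illustration, not the tool's actual output format.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class TraceStep:
    block: str                 # basic-block label in the control-flow graph
    valuation: Dict[str, int]  # values of program variables at block entry

@dataclass
class AnalysisResult:
    upper_bound_cycles: int    # (*) the upper bound on execution time
    trace: List[TraceStep]     # candidate near-worst-case path

# A toy instance: a loop that the analysis believes runs 64 times.
result = AnalysisResult(
    upper_bound_cycles=1742,
    trace=[
        TraceStep("entry", {"n": 64}),
        TraceStep("loop_body", {"i": 0}),
        TraceStep("exit", {"i": 64}),
    ],
)
```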

Ideally, we would like to replay the trace in a processor or processor simulator in order to check for its feasibility and count the actual execution time, so that we can get a lower bound on the worst-case execution time. (This is a bit like what Patrice Godefroid's SAGE tool does when it replays traces inside the fuzzer, if I'm correct.)

One needs to extract input values from the abstract trace, stuff them into the execution (e.g. by putting them into memory locations at the start according to debugging maps), run the simulator... and I suspect there are tons of pitfalls.
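
The "stuff them into memory according to debugging maps" step might look something like the sketch below. The debug map layout and addresses are hypothetical; a real simulator (gem5, QEMU, a vendor cycle-accurate model, ...) would have its own API for writing guest memory, and endianness and sizes would come from the target ABI.

```python
def inject_inputs(memory: bytearray, debug_map: dict, inputs: dict) -> None:
    """Write trace input values into simulated memory.

    debug_map maps a variable name to (address, size_in_bytes),
    as would be extracted from DWARF debugging information.
    Little-endian layout is assumed here.
    """
    for name, value in inputs.items():
        addr, size = debug_map[name]
        memory[addr:addr + size] = value.to_bytes(
            size, "little", signed=value < 0
        )

# Hypothetical memory image and variable locations.
memory = bytearray(256)
debug_map = {"n": (0x10, 4), "threshold": (0x20, 4)}

inject_inputs(memory, debug_map, {"n": 64, "threshold": -1})
```

After this, one would point the simulator's program counter at the entry block and let it run, comparing the observed cycle count against the bound.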

I'm no expert on processor simulators and would be interested in collaborations in this respect.


(*) Technically, because we use SMT modulo integer arithmetic and not bitvector theory, we could possibly be unsound on this point (e.g., if the program relies on wraparound overflow). Using SMT modulo bitvector theory instead is possible.
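
The gap between the two theories is easy to exhibit: a fact like x + 1 > x is valid over the mathematical integers but false for machine arithmetic, which wraps. A minimal illustration (32-bit width chosen arbitrarily):

```python
BITS = 32
MASK = (1 << BITS) - 1

def add_int(x: int, y: int) -> int:
    # Addition over the mathematical integers,
    # as seen by an integer-arithmetic SMT solver.
    return x + y

def add_bv(x: int, y: int) -> int:
    # Addition as a 32-bit machine performs it: wraps around.
    return (x + y) & MASK

x = MASK  # 0xFFFFFFFF, the largest 32-bit value

# Over the integers, x + 1 > x always holds...
assert add_int(x, 1) > x
# ...but on 32-bit hardware the sum wraps to 0, so a bound
# derived from the integer model may not hold on the machine.
assert add_bv(x, 1) == 0
```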