user1953221

Reputation: 429

Trace inclusion between program and state machine

I'm trying to show that the semantics of a program and state machine are equivalent.

For some background, say we have the following language with sequencing and print statements.

Inductive prog : Type :=
  | Print : nat -> prog -> prog
  | Skip : prog.
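For reference, a plausible small-step relation pstep for this language (the post refers to it but doesn't show it; the constructor name here is made up) would be:

Inductive pstep : prog -> nat -> prog -> Prop :=
  | PStepPrint : forall n rest, pstep (Print n rest) n rest.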

A state machine is defined like this:

Definition pre := state -> bool.
Definition post := state -> state.

Inductive transition :=
  | Transition : pre -> nat -> post -> transition.

Notation state_machine := (list transition).
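A matching sstep (also referenced but not shown in the post; this is a sketch) would fire any transition whose precondition holds in the current state, emitting its label:

Inductive sstep : state -> transition -> state -> nat -> Prop :=
  | SStepFire : forall (p : pre) (n : nat) (q : post) (s : state),
      p s = true ->
      sstep s (Transition p n q) (q s) n.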

Compilation from a program to a state machine adds an extra pc variable for the control state. The program Print 1 (Print 2 Skip) thus results in these two transitions:

transition 1
pre: pc = 0
post: write 1, set pc to 1

transition 2
pre: pc = 1
post: write 2, set pc to 2

When pc = 2, the state machine terminates as no more preconditions are enabled.
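For concreteness, a compile function along these lines might look like the following sketch, where state is taken to be just the pc (a nat) and the printed value is carried on the transition label rather than in the state; all of these choices are assumptions, not shown in the post:

Fixpoint compile_from (i : nat) (p : prog) : state_machine :=
  match p with
  | Skip => []
  | Print n rest =>
      (* pre: pc = i; label: n; post: pc := i + 1 *)
      Transition (Nat.eqb i) n (fun _ => S i) :: compile_from (S i) rest
  end.

Definition compile (p : prog) : state_machine := compile_from 0 p.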


The problem I'm having is that the state machine does not "get smaller" in the course of execution; preconditions merely get disabled once the corresponding program steps execute, and are never re-enabled.

This causes trouble when I try to prove the following trace inclusion lemma, pretty much following the development in Chapter 10 of FRAP (compile is a function that translates a program to a state machine, pstep and sstep are small-step reduction relations, and traces are lists of nats).

Inductive ptrace : prog -> trace -> Prop :=
| PEnd : forall p, ptrace p []
| PTerminate : ptrace Skip []
| PStep : forall p1 p2 t ts,
  pstep p1 t p2 ->
  ptrace p2 ts ->
  ptrace p1 (t :: ts).

Inductive strace : state_machine -> trace -> Prop :=
| SEnd : forall m, strace m []
| SStep : forall state1 state2 m ms t ts,
  In m ms ->
  sstep state1 m state2 t ->
  strace ms ts ->               (* <- ms does not get smaller, we just choose some transition m to execute *)
  strace ms (t :: ts).

Theorem trace_incl : forall p s t, ptrace p t -> compile p = s -> strace s t.

I end up with the hypotheses compile p1 = s and compile p2 = s, where p1 reduces to p2.

How should I avoid this problem? Is there a better way to model state machines?

The reason I'm using this approach is that I want to eventually add recursion to programs, allowing the same transitions to be repeatedly taken.

Any recommended (introductory) reading on verifying compilers and simulation relations would also be welcome. I've read FRAP and the paper mentioned here, but not much else.

Upvotes: 1

Views: 97

Answers (1)

nadeem

Reputation: 721

For situations like this, in the formal development you would probably introduce an explicit variable that tracks the length of the trace, and then you can proceed by (strong) induction on that. So the statement of the lemma would look more like:

Lemma trace_incl_expl : forall n p s t, n = length t -> ptrace p t -> compile p = s -> strace s t.

This is what's alluded to in the proof idea of Theorem 10.11 of http://adam.chlipala.net/frap/frap_book.pdf, for example. The final statement of the theorem doesn't necessarily include the explicit length value, but within the Coq development it would either be there, or the theorem would simply delegate to a lemma that includes it. The code for the CompilerCorrectness chapter, for example, at one point introduces a predicate that explicitly keeps track of "how many steps it took to generate a trace."
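Concretely, such a step-indexed predicate might look like the following sketch (the names ptraceN, PNEnd, PNStep are made up):

Inductive ptraceN : nat -> prog -> trace -> Prop :=
  | PNEnd : forall p, ptraceN 0 p []
  | PNStep : forall n p1 p2 t ts,
      pstep p1 t p2 ->
      ptraceN n p2 ts ->
      ptraceN (S n) p1 (t :: ts).

Lemma trace_incl_N : forall n p s t,
  ptraceN n p t -> compile p = s -> strace s t.

Strong induction on n then yields an induction hypothesis that applies to the residual program p2, even though compile p2 is not structurally smaller than compile p1.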

Upvotes: 0
