2023.19: More Temporal-Mode Forge

Looking ahead, we'll spend a couple more days on Temporal Forge and Linear Temporal Logic (LTL). After that, we'll start talking technically about how Forge works. Many of you have been curious, and it will set you up well for the upcoming homework where you'll be building your own solver.

Livecode for today is here.

Linear Temporal Logic (LTL)

Formally, the temporal operators Forge provides correspond to a language called Linear Temporal Logic (or LTL). It's temporal because it has temporal operators like always and eventually, and it's linear because it's interpreted over (infinite) linear traces.

LTL is commonly used in industry. And even where it isn't used directly, many other temporal specification languages are either related to LTL (e.g., branching-time logics like CTL) or inspired by it (e.g., Lamport's TLA+ and other more recent languages). There are many industrial model-checking tools out there, using many different languages, but learning LTL will help you build intuition for nearly all of them.

(And on the research side of things, there's been work right here at Brown to use LTL for specifying robot behaviors! For example, this paper.)

How To Read A Temporal Formula

Recall that:

  • time is implicit in temporal mode, and
  • temporal mode only ever finds lasso traces.

When you write a constraint in temporal mode, it's true with respect to an instance (which is always a lasso trace) and a time index into that trace. Thus the init predicate may be true, or not true, depending on which state you're looking at.

Evaluation always starts at the first state of the trace; that's where the constraints in your top-level run command or test are checked. I didn't say "the initial state", because if we've got a predicate that encodes "initial state", it won't be enforced unless we've told Forge to enforce it. This is why, usually, you'll start by putting:

init

(or whatever your initial-state predicate is) in your top-level run.

As soon as temporal operators become involved, however, the "evaluate in the first state" rule no longer tells the whole story: subformulas get evaluated at other time indexes, too.

Moving Forward In Time

You can refer to the next state (relative to the current one, whatever it is) by using the next_state operator. If I wanted to say that the second and third states would also be acceptable as initial states, I'd write:

init
next_state init
next_state next_state init

in the top-level run block. It's rare you'd do something like this in practice, but it's a good first demonstration of the operator.
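If it helps to see the semantics concretely, here's a tiny sketch in Python (this is illustrative only, not Forge; the three-state trace and its "init" flag are invented for the example). A lasso trace is a finite list of states plus a back-edge, and next_state just evaluates its argument one time index later, wrapping at the back-edge:

```python
# A lasso trace, sketched concretely: a finite list of states plus a
# back-edge index. (Illustrative Python, not Forge; the trace and the
# "init" flag are invented.)
states = [{"init": True}, {"init": True}, {"init": False}]
loop = 2  # after the last state, time wraps back to states[2]

def succ(i):
    """The time index that follows time i in the (infinite) lasso."""
    return i + 1 if i + 1 < len(states) else loop

# A formula is a function from a time index to a boolean.
def Init(i):
    return states[i]["init"]

def Next(f):
    """next_state f: f holds one step later."""
    return lambda i: f(succ(i))

print(Init(0))              # True: init holds in the first state
print(Next(Init)(0))        # True: init also holds in the second state
print(Next(Next(Init))(0))  # False: init fails in the third state
```

Notice that nesting next_state is just function composition: each application moves the evaluation index one step forward.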

Why next_state?

The keyword is, admittedly, a little verbose. But it was the best of the various candidates at hand:

  • In LTL, the operator is usually called X, which is not very descriptive.
  • In Forge's parent language, Alloy, the operator is called after. But this invites misreading: A after B looks like a binary operator, and both Forge and Alloy insert an implicit and between newline-separated constraints.
  • I've heard afterward suggested, but that risks confusion with always or eventually.

Quantifying Over Time

What does it mean for something to always be true, or to eventually hold? These terms effectively quantify over time: if something is always true, it's true at all time indexes (starting now). If something is eventually true, it's true at some time index (possibly now).

So if we wanted to say that every state in the trace transitions to its successor in accordance with our move predicate, we'd say:

always move

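Because a lasso trace repeats its cycle forever, this quantification over infinitely many time indexes can be checked over finitely many states. Here's a sketch of that idea in Python (illustrative only, not Forge; the three-state trace and its "move" flag are invented): from time i, the reachable times are i through the end of the list, plus everything from the loop-back index onward.

```python
# always/eventually over a lasso trace (illustrative Python, not Forge;
# the three-state trace and its "move" flag are invented).
states = [{"move": True}, {"move": True}, {"move": True}]
loop = 0
n = len(states)

def future(i):
    """All time indexes reachable from time i (inclusive)."""
    return set(range(i, n)) | set(range(loop, n))

def Always(f):
    """always f: f holds at every reachable time."""
    return lambda i: all(f(j) for j in future(i))

def Eventually(f):
    """eventually f: f holds at some reachable time."""
    return lambda i: any(f(j) for j in future(i))

Move = lambda i: states[i]["move"]
print(Always(Move)(0))                       # True: every state satisfies move
print(Eventually(lambda i: not Move(i))(0))  # False: move never fails
```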
Nesting Operators

Just like you can nest all and some, you can nest always and eventually. We'll need to do this to express properties like non-starvation. In fact, let's think about how to express non-starvation now!

We had informally written non-starvation in our mutex model as something like "once a process becomes interested, it eventually gets access". How would you write this using temporal operators, assuming that interested and access were predicates describing the process becoming interested and getting access respectively?

Think, then click!

We might start with: interested => eventually access. That would be a reasonable start: if the process is interested, it eventually gets access. The problem is that the interest is measured now---that is, at whatever time index Forge is currently evaluating.


Clearly we need to add some sort of temporal operator that prevents the above issue. Here's a possible candidate: (eventually interested) => (eventually access).

Think, then click!

The problem here is that there's no connection between the time at which the left-hand side holds and the time at which the right-hand side holds. To force that relationship (access _after_ interest) we need to nest the two temporal operators.

How about eventually (interested => (eventually access))?

Think, then click!

This constraint isn't strong enough. Imagine a trace where the process never gets access, but is interested only (say) half the time. Then any of those disinterested states will satisfy the subformula `interested => (eventually access)`.

Why? Think about how an implication is satisfied. It can be satisfied if the right-hand side is true, but also if the left-hand side is false---in the case where no obligation needs to be incurred! So the implication above evaluates to true in any state where the process isn't interested. And using eventually means any single such state works...

It seems like we need a different temporal operator...

Think, then click!

We'll say: always {interested => eventually access}. Now, no matter what time it is, if the process is interested, it has to eventually get access.

This sort of always-eventually pattern is good for (contingent) "infinitely often" properties, which is exactly what non-starvation is.
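To make the contrast concrete, here's a small Python check of this formula on two hand-made lasso traces (a sketch of the semantics, not Forge; the traces and their "interested"/"access" flags are invented for illustration):

```python
# Checking "always {interested => eventually access}" on a lasso trace.
# (Illustrative Python, not Forge; the traces below are invented.)
def holds_nonstarvation(states, loop):
    n = len(states)
    future = lambda i: set(range(i, n)) | set(range(loop, n))
    eventually_access = lambda i: any(states[j]["access"] for j in future(i))
    # interested => eventually access, evaluated at time i
    ok_at = lambda i: (not states[i]["interested"]) or eventually_access(i)
    # always {...}: the implication must hold at every reachable time
    return all(ok_at(i) for i in future(0))

# Interest is always eventually followed by access: satisfied.
good = [{"interested": True, "access": False},
        {"interested": False, "access": True}]
# The process stays interested forever but never gets access: violated.
bad  = [{"interested": True, "access": False}]

print(holds_nonstarvation(good, loop=0))  # True
print(holds_nonstarvation(bad, loop=0))   # False
```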

Let's Try It Out!

I'm going to ask you to play the role of Forge. I've listed some temporal constraints below, and would like you to come up with some instances (lasso traces) that satisfy them. Don't use Forge unless you've already tried, and are stumped.

For all examples, you may come up with your own shape of the world. That is, you might pick a University (where a state is a semester, with courses offered and taken) or your experience waiting for an elevator in the CIT, or anything else from your life! I'll use X, Y and Z to denote arbitrary facts that might be true, or not true---your job is to plug in specifics, and then find a satisfying trace!

I'll use a lot of parentheses, to avoid confusion about operator precedence...

eventually (always (X or Y))

Think, then click!

Suppose `X` stands for weekday, and `Y` for weekend. Then the normal progression of time satisfies the constraint: after some point (perhaps right now) it will always be either a weekday or a weekend.

I am probably abstracting out some important details here, like the heat-death of the universe. But that's not really the point. The point is that alternation between X and Y is allowed---it's always either one or the other, or possibly even both.

always (eventually (X and (next_state Y)))

Think, then click!

Suppose `X` stands for "Saturday", and `Y` for "Sunday". Then it's always true that, at _any point_ in time, there is a Saturday-Sunday pair of days in the future.

always ((eventually X) or (always Y))

Think, then click!

Suppose `X` stands for "final exam or demo", and `Y` for "graduated". Then, at _any point_ in time, either there's a final in your future _or_ you've graduated (and stay graduated forever).

Note that this doesn't mean you can't take a final exam after graduating if you want to. Both sides of the or can be true. It just means that, at any point, if you haven't graduated permanently, you must eventually take an exam.
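If you'd like to check your answers mechanically, here's a Python sketch that evaluates all three formulas on small lasso traces encoding the stories above (illustrative only, not Forge; the week trace and the exam/graduation trace are invented):

```python
# Mechanically checking the three formulas on small lasso traces.
# (Illustrative Python, not Forge; the traces encode the stories above.)
def make_ops(n, loop):
    future = lambda i: set(range(i, n)) | set(range(loop, n))
    succ = lambda i: i + 1 if i + 1 < n else loop
    Always = lambda f: (lambda i: all(f(j) for j in future(i)))
    Eventually = lambda f: (lambda i: any(f(j) for j in future(i)))
    Next = lambda f: (lambda i: f(succ(i)))
    return Always, Eventually, Next

# Trace 1: the days of a week, looping forever (0 = Monday ... 6 = Sunday).
Always, Eventually, Next = make_ops(7, loop=0)
weekday = lambda i: i < 5
weekend = lambda i: i >= 5
saturday = lambda i: i == 5
sunday = lambda i: i == 6

# eventually (always (X or Y)), with X = weekday, Y = weekend
print(Eventually(Always(lambda i: weekday(i) or weekend(i)))(0))  # True

# always (eventually (X and (next_state Y))), with X = Saturday, Y = Sunday
print(Always(Eventually(lambda i: saturday(i) and Next(sunday)(i)))(0))  # True

# always ((eventually X) or (always Y)), with X = exam, Y = graduated,
# on a two-state lasso: take a final exam, then stay graduated forever.
A2, E2, _ = make_ops(2, loop=1)
exam = lambda i: i == 0
graduated = lambda i: i == 1
print(A2(lambda i: E2(exam)(i) or A2(graduated)(i))(0))  # True
```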

Fixing Our Lock Model

A deadlock state is one where no outgoing transitions are possible. How can we write a test in temporal mode that tries to find a reachable deadlock state? There are two challenges:

  • How do we phrase the constraint, in terms of the transition predicates we have to work with?
  • How do we even allow Forge to find a deadlock, given that temporal mode only ever finds lasso traces? (A deadlock in a lasso trace is impossible, since a deadlock prevents progress!)

Let's solve the second challenge first, since it's more foundational.

Finding Deadlocks Via Lassos

We could prevent this issue by allowing a doNothing transition from every state. Then from Forge's perspective there's no "deadlock", and a lasso trace can be found.

But this fix causes new problems. If we allow a doNothing transition to happen anywhere, our liveness property is definitely destroyed, even if we were modeling a smarter algorithm. So we need to reduce the power of doNothing somehow.

Put another way: we started with an overconstraint bug: if only lassos can be found, then we can't find a trace exhibiting deadlock. Adding doNothing fixes the overconstraint but adds a new underconstraint, because we'll get a lot of garbage traces where the system can just pause arbitrarily (while the trace continues).

We saw this phenomenon earlier when we were modeling tic-tac-toe, and wanted to work around the fact that the `is linear` annotation forces exact bounds. We can use the same ideas in the fix here.

Finding Deadlock

Let's look at one of our transitions:

pred raise[p: Process] {
    World.loc[p] = Disinterested
    World.loc'[p] = Waiting
    World.flags' = World.flags + p
    all p2: Process - p | World.loc'[p2] = World.loc[p2]
}

Notice it's split (implicitly) into a "guard" and an "action". If all the constraints in the guard are true, the transition can occur. Formally, we say that if all the guard constraints hold, then the transition is enabled. When should doNothing be enabled? When no other transition is.

pred doNothing {
    -- GUARD (nothing else can happen)
    not (some p: Process | enabledRaise[p]) 
    not (some p: Process | enabledEnter[p]) 
    not (some p: Process | enabledLeave[p]) 
    -- ACTION
    World.flags' = World.flags
    World.loc' = World.loc
}

We won't create a separate enabledDoNothing predicate. But we will add doNothing to the set of possible moves:

pred trans {
    some p: Process | 
        raise[p] or
        enter[p] or 
        leave[p] or 
        doNothing 
}

And we'd better create those three enabled predicates, too.
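The guard/action split isn't Forge-specific. Here's the same idea sketched in Python (illustrative only, not the Forge model; the state shape, process names, and the flawed enter guard are assumptions for this example): each enabled predicate is exactly a transition's guard, and doNothing's guard is the negation of all the others.

```python
# Guards as "enabled" checks (illustrative Python, not Forge; the state
# shape and process names are invented). doNothing's guard is exactly
# the negation of every other transition's guard.
DISINTERESTED, WAITING, IN_CS = "Disinterested", "Waiting", "InCS"

def enabled_raise(state, p):
    return state["loc"][p] == DISINTERESTED

def enabled_enter(state, p):
    # The flawed algorithm: enter only when yours is the only flag raised.
    return state["loc"][p] == WAITING and state["flags"] <= {p}

def enabled_leave(state, p):
    return state["loc"][p] == IN_CS

def enabled_do_nothing(state, procs):
    return not any(enabled_raise(state, p) or enabled_enter(state, p)
                   or enabled_leave(state, p) for p in procs)

# Both processes Waiting with both flags raised: nothing else can fire.
stuck = {"loc": {"A": WAITING, "B": WAITING}, "flags": {"A", "B"}}
print(enabled_do_nothing(stuck, ["A", "B"]))  # True: only doNothing is enabled
```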

Finally, we can write a check looking for deadlocks:

test expect {
    noDeadlocks_counterexample: {
        init
        always trans
        not always {
            some p: Process |
                enabledRaise[p] or
                enabledEnter[p] or
                enabledLeave[p] 
        }
    } is sat
}

which passes: Forge finds a satisfying trace, i.e., one that reaches a deadlock. But what does that trace look like?

The counterexample (at least, the one I got) is 3 states long. And in the final state, both processes are Waiting. Success! Or, at least, success in finding the deadlock.
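We can replay that 3-state counterexample by hand in Python (a sketch, not the Forge model; the state representation, process names, and the flawed enter guard are assumptions for illustration): both processes raise their flags, and then neither can enter.

```python
# Replaying the 3-state counterexample by hand (illustrative Python,
# not the Forge model; the state shape is invented).
DISINTERESTED, WAITING, IN_CS = "Disinterested", "Waiting", "InCS"

def do_raise(state, p):
    # Guard: p is Disinterested. Action: p waits with its flag raised.
    assert state["loc"][p] == DISINTERESTED
    return {"loc": {**state["loc"], p: WAITING},
            "flags": state["flags"] | {p}}

def any_enabled(state, procs):
    """Is some raise/enter/leave transition enabled in this state?"""
    def enter_ok(p):  # flawed algorithm: enter only if yours is the only flag
        return state["loc"][p] == WAITING and state["flags"] <= {p}
    return any(state["loc"][p] == DISINTERESTED  # raise enabled
               or enter_ok(p)                    # enter enabled
               or state["loc"][p] == IN_CS       # leave enabled
               for p in procs)

s0 = {"loc": {"A": DISINTERESTED, "B": DISINTERESTED}, "flags": set()}
s1 = do_raise(s0, "A")   # A raises its flag
s2 = do_raise(s1, "B")   # B raises its flag: both now Waiting
print(any_enabled(s2, ["A", "B"]))  # False: deadlock; only doNothing runs
```

Each process is waiting for the other's flag to come down, and neither transition that lowers a flag is enabled.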

But how should we fix the algorithm? And how can we avoid confusion like this in the future?