To demonstrate the versatility of owl, we implemented Safra’s determinization procedure from NBA to DRA. Although this procedure is often described as tedious to implement, it required only roughly 60 lines of Java (plus a few lines for simple data structures). In the following, we present the full implementation, pruned of assertions and logging statements for brevity; the full version can be found in the repository. The integration section provides further insight into how to integrate a new translation into owl.
We first show the complete code block implementing Safra’s construction. In the subsequent section, we present the utility classes Label and Tree that it uses.
public static <S> Automaton<Tree<Label<S>>, RabinAcceptance>
    build(Automaton<S, BuchiAcceptance> nba) {
  int nbaSize = nba.size();
  int pairCount = nbaSize * 2;
  RabinAcceptance acceptance = RabinAcceptance.of(pairCount);
  Tree<Label<S>> initialState = Tree.of(Label.of(Set.copyOf(nba.initialStates()), 0));

  BiFunction<Tree<Label<S>>, BitSet, Edge<Tree<Label<S>>>> successor = (tree, valuation) -> {
    BitSet usedIndices = new BitSet(nbaSize);
    tree.forEach(node -> usedIndices.set(node.index()));
    BitSet edgeAcceptance = new BitSet(nbaSize);

    Tree<Label<S>> successorTree = tree.map((father, children) -> { // Successor
      Set<Edge<S>> fatherEdges = father.states().stream().flatMap(state ->
          nba.edges(state, valuation).stream()).collect(Collectors.toSet());
      if (fatherEdges.isEmpty()) return Tree.of(father.with(Set.of()));
      Label<S> newFather = father.with(Edges.successors(fatherEdges));
      Set<S> newChildStates = fatherEdges.stream().filter(edge -> edge.inSet(0))
          .map(Edge::successor).collect(Collectors.toUnmodifiableSet());
      int index = usedIndices.nextClearBit(0);
      usedIndices.set(index);
      return Tree.of(newFather, Collections3.concat(children,
          List.of(Tree.of(Label.of(newChildStates, index)))));
    }).map((father, children) -> { // Horizontal merge
      Set<S> olderStates = new HashSet<>();
      List<Tree<Label<S>>> prunedChildren = new ArrayList<>();
      for (Tree<Label<S>> child : children) {
        Label<S> prunedLabel = child.label().without(olderStates);
        if (prunedLabel.states().isEmpty()) {
          edgeAcceptance.set(acceptance.pairs().get(prunedLabel.index()).finSet());
          usedIndices.clear(prunedLabel.index());
        } else {
          // Recursive pruning of the child
          prunedChildren.add(child.map((subNode, subChildren) ->
              Tree.of(subNode.without(olderStates), subChildren)));
          olderStates.addAll(prunedLabel.states());
        }
      }
      return Tree.of(father, prunedChildren);
    }).map((father, children) -> {
      List<Tree<Label<S>>> nonEmptyChildren = children.stream().filter(
          child -> !child.label().states().isEmpty()).collect(Collectors.toList());
      if (nonEmptyChildren.isEmpty()) return Tree.of(father);
      Set<S> childStates = nonEmptyChildren.stream().map(Tree::label).map(Label::states)
          .flatMap(Set::stream).collect(Collectors.toUnmodifiableSet());
      // Vertical merge
      if (childStates.equals(father.states())) {
        edgeAcceptance.set(acceptance.pairs().get(father.index()).infSet());
        children.forEach(child -> child.forEach(node -> usedIndices.clear(node.index())));
        return Tree.of(father);
      }
      return Tree.of(father, nonEmptyChildren);
    });

    usedIndices.flip(0, nbaSize);
    BitSets.forEach(usedIndices, index ->
        edgeAcceptance.set(acceptance.pairs().get(index).finSet()));
    return Edge.of(successorTree, edgeAcceptance);
  };

  // Create on-the-fly automaton from initial state and successor function
  return AutomatonFactory.create(initialState, nba.factory(), successor, acceptance);
}
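The construction above recycles node indices through a BitSet: usedIndices.nextClearBit(0) yields the smallest free index for a new child, and the indices of merged-away nodes are cleared so they can be reused. The following standalone sketch isolates this allocation pattern; the class IndexPool and its method names are illustrative only and not part of owl.

```java
import java.util.BitSet;

// Hypothetical standalone illustration of the index bookkeeping used in
// the construction: a BitSet records which indices are live, and
// nextClearBit finds the smallest free one.
public class IndexPool {
  private final BitSet used = new BitSet();

  // Allocate the smallest free index, mirroring
  // usedIndices.nextClearBit(0) followed by usedIndices.set(index).
  public int allocate() {
    int index = used.nextClearBit(0);
    used.set(index);
    return index;
  }

  // Release an index when its node is removed (horizontal/vertical merge),
  // mirroring usedIndices.clear(index).
  public void free(int index) {
    used.clear(index);
  }

  public boolean isUsed(int index) {
    return used.get(index);
  }
}
```

Because freed indices are handed out again before any fresh ones, the set of live indices stays small, which keeps the number of Rabin pairs touched by the construction bounded.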
In the above code, we used the following two data structure classes. The implementations are generated by the Immutables framework.
@Tuple @Value.Immutable
public abstract static class Label<S> {
  abstract Set<S> states();
  abstract int index();

  static <S> Label<S> of(Collection<S> states, int index) {
    return LabelTuple.create(states, index);
  }

  Label<S> without(Set<S> states) { return of(Sets.difference(states(), states), index()); }

  Label<S> with(Collection<S> states) { return of(states, index()); }
}
@HashedTuple @Value.Immutable
public abstract class Tree<L> {
  abstract L label();
  abstract List<Tree<L>> children();

  static <L> Tree<L> of(L label) { return TreeTuple.create(label, List.of()); }

  static <L> Tree<L> of(L label, List<Tree<L>> children) {
    return TreeTuple.create(label, children);
  }

  public Tree<L> with(L label) { return of(label, children()); }

  public Tree<L> map(BiFunction<L, List<Tree<L>>, Tree<L>> function) {
    return function.apply(label(), Lists.transform(children(), child -> child.map(function)));
  }

  public Tree<L> map(Function<L, L> function) {
    return of(function.apply(label()), Lists.transform(children(),
        child -> child.map(function)));
  }

  public void forEach(Consumer<L> action) {
    children().forEach(child -> child.forEach(action));
    action.accept(this.label());
  }

  public void forEach(BiConsumer<L, List<L>> action) {
    action.accept(label(), Lists.transform(children(), Tree::label));
    children().forEach(child -> child.forEach(action));
  }
}
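To make the recursive semantics of Tree concrete without the Immutables machinery, the following self-contained sketch mirrors the label-only map and the post-order forEach used above. The class name MiniTree is hypothetical; the real Tree is generated by Immutables and backed by TreeTuple.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;
import java.util.function.Function;

// Hypothetical plain-Java miniature of Tree<L>, illustrating the
// recursive traversal semantics relied on by the construction.
final class MiniTree<L> {
  final L label;
  final List<MiniTree<L>> children;

  private MiniTree(L label, List<MiniTree<L>> children) {
    this.label = label;
    this.children = List.copyOf(children);
  }

  static <L> MiniTree<L> of(L label) { return new MiniTree<>(label, List.of()); }

  static <L> MiniTree<L> of(L label, List<MiniTree<L>> children) {
    return new MiniTree<>(label, children);
  }

  // Label-only map: transforms every label, keeps the tree shape.
  MiniTree<L> map(Function<L, L> function) {
    List<MiniTree<L>> mapped = new ArrayList<>();
    for (MiniTree<L> child : children) mapped.add(child.map(function));
    return of(function.apply(label), mapped);
  }

  // Post-order traversal: children before the node, as in Tree.forEach.
  void forEach(Consumer<L> action) {
    for (MiniTree<L> child : children) child.forEach(action);
    action.accept(label);
  }
}
```

Note that forEach visits children before their parent; the construction depends on this order so that indices of descendants are processed before the node that may absorb them.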
To integrate the new construction with our pipeline, we only needed to register the following field with the central parser.
public static final TransformerParser CLI = ImmutableTransformerParser.builder()
  .key("safra") // Name of the module in pipelines
  .description("Translates NBA to DRA using Safra's construction")
  .parser(settings -> environment -> (input, context) ->
      SafraBuilder.build(AutomatonUtil.cast(input, BuchiAcceptance.class)))
  .build();
The registration process is as simple as calling OwlModuleRegistry.DEFAULT_REGISTRY.register(CLI). Similarly, a dedicated entry point can be created by
public static void main(String... args) {
  PartialModuleConfiguration config = PartialModuleConfiguration.builder("nba2dra")
    .reader(InputReaders.HOA) // Start the pipeline skeleton with a (preconfigured) reader, expecting HOA input
    .addTransformer(CLI) // Add the yet-to-be-configured CLI parser for our module
    .writer(OutputWriters.HOA) // End with a preconfigured HOA serializer
    .build();
  // Run the created pipeline config with the given args, configuring our module and basic I/O
  PartialConfigurationParser.run(args, config);
}
Adding this construction to the testing framework then only required the following two changes: First, we declare our new construction in tools.json by
"ltl-safra": {
"type": "owl",
"input": "ltl",
"output": "hoa",
"name": "safra",
"pre": [
"simplify-ltl",
[ "ltl2aut-ext", "-t", "ltl2tgba -B" ]
],
"post": [ "minimize-aut" ]
}
Now, ltl-safra refers to a pipeline which reads and simplifies LTL formulas, passes them to Spot’s ltl2tgba to produce a Büchi automaton, and then applies the above construction to it. Specifically, the test utility builds the pipeline
ltl --- simplify-ltl --- ltl2aut-ext -t "ltl2tgba -B" --- safra --- minimize-aut --- hoa
We declare a test case in tests.json by
"safra": {
"tools": "ltl-safra",
"data": "small"
}
This test takes the pipeline ltl-safra declared above and runs it on the small data set. It can be executed manually by running python scripts/util.py test safra, or integrated into the CI pipeline by adding
Safra:
  stage: test
  variables:
    TEST_NAME: "safra"
  <<: *ltlcross_template
to .gitlab-ci.yml, i.e., GitLab’s CI specification file.