Recursion

This concept guide demonstrates how to write and successfully use recursive relations.

Goal

The goal of this concept guide is to demonstrate how to write and successfully use recursive relations (which means relations that depend on themselves) in Rel.

Preliminaries

This concept guide builds on a number of core Rel features.

It is good to know how integrity constraints (ICs) work, but this is not required for understanding this guide.

Introduction

A program or a query is called recursive when a predicate is defined in terms of itself. For many computational problems, recursion helps to reduce complexity and to express solutions in a cleaner and more understandable way. A common side effect is that recursive solutions are often shorter, more compact, and easier to maintain.

Rel supports recursive computations, which is also a key ingredient for making the language Turing complete. This means Rel can be used to perform any computation that any other programming language supports. It does not, by itself, mean that expressions in Rel are more compact or easier to write, even though we believe that is generally the case.

With the capability to express queries recursively, Rel is able to perform a variety of computations that at first glance one would not expect from a database language:

  • Iterative algorithms can be expressed as recursive Rel programs (e.g., Fibonacci, Prim’s minimum spanning tree);

  • Relations requiring a variable recursion depth (aka an unbounded query length) can be expressed more compactly and efficiently (e.g., transitive closure, PageRank, moving averages).

To define recursive computations, we need the following ingredients:

  1. A base case, and
  2. A recursive rule.

The base case (or cases) is the starting point of the recursive computation. The recursive rule (or rules) is the heart of the computation: it is the part that references the relation being defined.
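As a minimal sketch of these two ingredients, here is a recursive definition that computes the numbers 1 through 5 (the condition n <= 5 keeps the relation finite; more on termination below):

query
def numbers = 1 // base case
def numbers(n) = exists(m : numbers(m) and n = m + 1 and n <= 5) // recursive rule

def output = numbers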

Termination of Recursive Computations

Computationally, we need a third ingredient in order to have a successful recursive algorithm: termination conditions, which ensure that the recursive computation doesn’t run forever.

Generally, a recursive computation in Rel terminates when a fixpoint is reached, meaning that the elements in a relation don’t change anymore from one iteration to the next (see Section Behind the Scenes for more details).

More formally, this means the solution to the recursive computation is a relation (or a set of relations) R, which is a fixpoint of the following definition:

def R(x...) = base_case(x...) or recursive_rule[R](x...)

A recursive computation defining an infinite relation (or infinite function), such as the Fibonacci sequence, currently requires a domain restriction for the computation to terminate (see examples below).

For certain recursive computations no explicit domain restriction is needed because the definitions naturally reach a fixpoint. The reachability problem within a graph (see below) is such a problem because the output domain, which is all possible pairs of graph nodes, is finite and the recursion is monotonically increasing.

(We say that the recursion is monotonically increasing if at each iteration we can only add new elements, and never remove any.)

Future releases of Rel will relax this restriction, and compute the desired results in an on-demand fashion, without having to compute the entire (infinite) relation.

Introductory Examples

Recursive Computation on Graphs

One of the most fundamental graph queries asks if there is a path between two nodes. Such questions appear frequently in the real world. In terms of international travel, this question could be

Can I fly from Providence to Honolulu?

Let’s try to answer that question in Rel. First we need a few airports:

install
def airport = {
    "Providence (PVD)"; "New York (JFK)"; "Boston (BOS)";
    "San Francisco (SFO)"; "Los Angeles (LAX)"; "Honolulu (HNL)";
}

and a flight network,

install
def directional_flight = {
    ("Providence (PVD)", "Boston (BOS)"); ("Boston (BOS)", "San Francisco (SFO)");
    ("San Francisco (SFO)", "Honolulu (HNL)"); ("New York (JFK)", "Los Angeles (LAX)");
}

def flight(a, b) = directional_flight(a, b) or directional_flight(b, a)

// integrity constraint
ic flight_between_airports(a, b) = flight(a, b) implies airport(a) and airport(b)

where we introduced directional flights, directional_flight, and bidirectional flights, flight. The integrity constraint checks that flights can only occur between airports.

Now, we can calculate all pairs of airports that are (directly or indirectly) connected with each other,

install
def connected(a, b) = flight(a, b) // the base case
def connected(a, b) = exists(c : connected(a, c) and flight(c, b))

where the second rule is the recursive part of the problem.

The original question now reads:

query
def output =
    if connected("Providence (PVD)", "Honolulu (HNL)") then
        "yes"
    else
        "no"
    end

// This integrity constraint will fail if we don't compute that
// PVD and HNL are connected:
ic {equal(output, "yes")}

Relation: output

"yes"

Let’s ask another question and check if we can fly from Boston (BOS) to Los Angeles (LAX).

query
def output =
    if connected("Boston (BOS)", "Los Angeles (LAX)") then
        "yes"
    else
        "no"
    end

// This integrity constraint will fail if we compute that
// BOS and LAX are connected:
ic {equal(output, "no")}

Relation: output

"no"

The answer is no, because in our toy world LAX and JFK are connected with each other but isolated from all the other airports.

For the mathematical enthusiasts, the problem of computing all pairs of connected nodes in a graph is called the transitive closure problem. In our example, the edges of the graph are defined in the flight relation.
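For instance, we can list all airports reachable from Providence (a quick check; note that Providence also counts as connected to itself here, via the round trip through Boston):

query
def output = connected["Providence (PVD)"]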

Single Recursion

One of the simplest mathematical recursion formulas is the definition of the factorial $n! = n \cdot (n-1) \cdot \ldots \cdot 1$ for $n\in \mathbb{N}_+$, which fulfills the recursive rule, $n! = n \cdot (n-1)!$. In Rel this reads

query
def F[1] = 1 // the base case
def F[n] = n * F[n - 1], n <= 10

def output = F

Relation: output

1, 1
2, 2
3, 6
4, 24
5, 120
6, 720
7, 5040
8, 40320
9, 362880
10, 3628800

There are several interesting aspects worth mentioning about this recursive definition:

Relation F has arity 2, where the first entry is the iteration step n and the second entry is the value of n!.

The definition also contains all the ingredients for a successful recursion:

  • Base case: def F[1] = 1 is the starting point and translates to 1! = 1.
  • Recursive rule: def F[n] = n * F[n - 1] translates to $n! = n\cdot(n-1)!$.
  • Termination condition: n<=10 makes the output domain finite, forces the fixpoint to be reached when $n$ hits 10, and consequently terminates the recursive evaluation. (In the future, once demand transformations are implemented, this explicit statement of the termination condition will not be required anymore.)
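Once the relation is computed, individual values can be looked up directly; for example, 10! from the table above:

query
def F[1] = 1 // the base case
def F[n] = n * F[n - 1], n <= 10

def output = F[10]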

Multiple Recursion

Fibonacci Sequence

Multiple recursion, where the recursive rule contains multiple self-references, can also be easily expressed in Rel.

The most famous example is the Fibonacci sequence $F(n) = F(n-1) + F(n-2)$ with $F(0)=0$ and $F(1)=1$. In Rel this relation reads:

query
def F[0] = 0  // base cases
def F[1] = 1 // base cases
def F[n] = F[n-1] + F[n-2], n<=10

def output = F

Relation: output

0, 0
1, 1
2, 1
3, 2
4, 3
5, 5
6, 8
7, 13
8, 21
9, 34
10, 55

We can easily see that all the ingredients for a successful recursive computation are present.

Connected Nodes in a Graph

In our previous international travel example, we could have defined the connected relation using multiple recursion:

query
def connected(a, b) = flight(a, b) // the base case
def connected(a, b) = connected[a].connected(b) // recursive rule

Here the recursive rule contains two self-references and reads: two airports a and b are connected if there exists a third airport that is connected to both.
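For comparison, the same rule can be spelled out with an explicit existential quantifier:

query
def connected(a, b) = flight(a, b) // the base case
def connected(a, b) = exists(c : connected(a, c) and connected(c, b)) // recursive rule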

Negation in Recursion

Recursive dependencies can be made as complex as needed, and can go well beyond just asking for a single element of a previous iteration (e.g., F[n - 1]).

The Recamán sequence is a nice example to demonstrate a more complex recursive computation. The sequence is defined as:

$$a_n = \left\{ \begin{array}{ll} 0 & \text{if}\ n=0 \newline a_{n-1}-n & \text{if}\ a_{n-1}-n>0\ \text{and the value isn’t already in the sequence} \newline a_{n-1}+n & \text{otherwise} \newline \end{array}\right.$$

The recursive case requires that we check that the potential next value isn’t already in the sequence, and if it is then assign a different value.

The Recamán sequence can be visualized in a very neat way by connecting subsequent values with an arch.

(Figure: the Recamán sequence visualized by connecting subsequent values with arches; CC-licensed image.)

One way to define the Recamán sequence in Rel is as follows.

query
@inline
def reject(n, x) = x < 0 or exists(m : recaman(m, x) and m < n)

def recaman[0] = 0
def recaman[n](x) = recaman[n - 1] - n = x and not reject(n, x) and 0 < n < 20
def recaman[n](x) = recaman[n - 1] + n = x and reject(n, x - 2 * n) and 0 < n < 20

def output = recaman
// This IC verifies the correctness of the results by comparing with hand-computed results.
ic recaman_sequence = equal(
    recaman[_],
    {0; 1; 3; 6; 2; 7; 13; 20; 12; 21; 11; 22; 10; 23; 9; 24; 8; 25; 43; 62}
)

Relation: output

0, 0
1, 1
2, 3
3, 6
4, 2
5, 7
6, 13
7, 20
8, 12
9, 21
10, 11
11, 22
12, 10
13, 23
14, 9
15, 24
16, 8
17, 25
18, 43
19, 62

where we limited ourselves to calculating only the first 20 terms. We used several logical elements here, namely:

  • negation,
  • existential quantification,
  • disjunction,

to express the recursive Recamán rule.

Additionally, we used the @inline functionality to factor out the condition evaluation into a separate relation (reject), which helps keep the main recursive rules compact and readable.

Finally, we added an integrity constraint (IC) to check that the set of computed values is correct. (Note that this IC does not check that the values are in the correct order, but still gives us additional confidence in the results.)
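If we also wanted to check the order, one option (a sketch) is to extend the query above with an IC that asserts a few individual positions taken from the results:

// added to the Recamán query above:
ic recaman_order = recaman[4] = 2 and recaman[17] = 25 and recaman[19] = 62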

Advanced Examples

Rel supports complex recursive rules that may involve multiple relations (aka mutual recursion) that depend on each other and may be recursive themselves.

Before diving into them, let’s look at relations that are not monotonically increasing in cardinality as the iteration advances. This is possible because elements can not only be added but also deleted during an iteration step.

Recursive Rules that Eliminate Elements

For all the examples we have seen so far, the number of elements in the relation (aka its cardinality) is larger after we have successfully exited the recursive evaluation. However, thanks to Rel’s capability to also delete (not just add) elements during an iteration, it is also possible to end up with fewer elements in the relation than we started with.

In the following Rel program, we have two recursive rules for D. The first one adds the elements 1 through 10, if D is empty. The second one keeps only elements that are smaller than or equal to 3, or different from the maximum element in D. The effect is that the largest element is iteratively removed until only the elements 1; 2; 3 remain:

query
def D(n) = range[1, 10, 1](n) and empty(D)
def D(n) = D(n) and (n <= 3 or n != max[D])

def output = D
@inline
def empty(R) = not exists(x... : R(x...))

Relation: output

1
2
3

Notice that we defined only the recursive rules. We did not explicitly state a base case, which means that the base case is, by default, an empty relation D.

The numbers 1 through 10 get added if and only if D is empty. We used the opportunity to define a custom relation empty(D) that checks whether a relation D is empty.

The key difference between the two recursive rules is: the first rule probes the non-existence of elements whereas the second rule depends on an aggregated view of existing elements. Either of these aspects (negation or aggregation) is enough to achieve a non-monotonic recursion. It is even true that in the absence of both features, recursion is always monotonic.
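As a minimal sketch of how negation alone already breaks monotonicity, consider a relation that contains the element 1 exactly when it is empty. Each iteration flips it between empty and non-empty, so it has no fixpoint (this is exactly the kind of never-ending computation discussed in the section Common Pitfalls):

query
// WARNING: no fixpoint; oscillates between {} and {1} forever
def flip(n) = n = 1 and not flip(_)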

Asymptotic Convergence: PageRank

Another class of recursive computations that we can easily implement in Rel is iterations that converge asymptotically (in the infinite limit, $n→∞$). This is in contrast to the example in the section Recursive Computation on Graphs, where we reached convergence after a finite number of steps.

In practice, we define a convergence criterion, where we stop the iteration if the value(s) between consecutive iterations are closer than a (small) user-defined $ϵ$.

$$| a_{n+1} - a_n |^2 < ϵ \quad \text{for}\ n>N_\text{converged} \enspace,$$

where $N_\text{converged}$ is the number of iterations needed to achieve our desired accuracy.

Many machine learning algorithms fall in this category, where we declare the model as trained once the model parameters and/or the accuracy of the model change only marginally between iterations.

Let’s take the PageRank algorithm with damping as an example. It is defined as

$$PR_a = \frac{1-d}{N} + d \sum_{b \in \mathcal{S}_a} \frac{PR_b}{L_b} \enspace ,$$

where $PR_a$ is the page rank of website $a$, $L_b$ is the number of outbound links from site $b$, $\mathcal{S}_a$ is the collection of websites that have a link to website $a$, $N$ is the total number of websites, and $d$ is a damping parameter, which is usually set to around 0.85.

First, we need to define some pages and links between them. Let’s build a toy example, inspired by Wikipedia.

install
module graphA
    def edge = {
        ("N1", "B"); ("N1", "E");
        ("N2", "B"); ("N2", "E");
        ("N3", "B"); ("N3", "E");
        ("N4", "E"); ("N5", "E");
        ("E", "F"); ("F", "E");
        ("E", "B"); ("E", "D");
        ("D", "A"); ("D", "B");
        ("B", "C"); ("C", "B");
        ("F", "B");
    }

    def node = x : edge(x, _) or edge(_, x)
end

(Figure: a visualization of the graph, where the size of a node is proportional to its PageRank.)

The PageRank algorithm assumes that sink nodes, which have no outgoing edges, are connected to all other nodes in the graph. We can derive a new graph from the original as follows:

install
module graphB
    def node = graphA:node
    def edge = graphA:edge
    def sink(n in node) = empty(graphA:edge[n])
    def edge(a in node, b in node) = sink(a) // add edges from every sink to all nodes
end

In Rel, the PageRank algorithm may read:

install
// parameters
def eps = 10.0^(-10)
def damping = 0.85
def node = graphB:node
def edge = graphB:edge

def node_count = count[node]
def outdegree[x] = count[edge[x]]

def MAX_ITER = 100

// a version of sum that gives 0.0 for the empty set,
// needed for nodes with no incoming edges:
@inline
def sum_default[R] = sum[R] <++ 0.0

// pagerank(iteration, site, rank)
def pagerank[0, a in node] = 1.0 / node_count // base case: equal rank
def pagerank[n, a in node] = // recursive rule
    (1 - damping)/node_count + damping * sum_default[
        pagerank[n-1, b]/outdegree[b] for b where edge(b, a)
    ],
    pagerank(n-1, a, _) // grounding `n` and `a`
    and not converged(n-1) // termination condition
    and n <= MAX_ITER // safeguard

// track convergence:
def converged(n) =
    forall(a in node : (pagerank[n, a] - pagerank[n-1, a])^2 < eps)
    and range(1, MAX_ITER, 1, n)

The relation converged stores the iteration numbers for which the convergence criterion is fulfilled. The not converged(n-1) condition in the main recursive definition for pagerank ensures that the iteration stops once convergence is reached. We also added a safeguard condition that stops the recursive computation once 100 iterations have been reached.

Notice that relations pagerank and converged depend on each other and must be solved together.

We use the sum_default relation to ensure that the sum over an empty set returns 0.0 instead of false, which is the default behavior of sum[]. (This is needed if the original graph has no sink nodes but does contain a node with no incoming edges; otherwise, the edges added from sinks in graphB give every node at least one incoming edge.)
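To see the difference in isolation, here is a small sketch (the relation none is a hypothetical helper that is empty by construction):

query
@inline
def sum_default[R] = sum[R] <++ 0.0

def none(x) = range[1, 3, 1](x) and x > 5 // empty by construction

def output:with_default = sum_default[none] // 0.0
def output:plain = sum[none] // empty (false)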

Finally, let’s look at the results. After 62 iterations we have reached the desired accuracy level:

query
def max_iteration = max[n : pagerank(n, _, _)]
def did_converge = "yes", converged = max_iteration

def ranks = pagerank[max_iteration]
def sumranks = sum[pagerank[max_iteration]]

Relation: max_iteration

62

Relation: did_converge

"yes"

Relation: ranks

"A"0.03278149315934399
"B"0.3843965154422865
"C"0.34291471887964714
"D"0.039087092099966095
"E"0.08088569323449775
"F"0.039087092099966095
"N1"0.016169479016858404
"N2"0.016169479016858404
"N3"0.016169479016858404
"N4"0.016169479016858404
"N5"0.016169479016858404

Relation: sumranks

0.9999999999999996

Since PageRank corresponds to a probability distribution over nodes, the ranks should sum to (approximately) 1.0.
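As a sketch, this property could also be guarded with an IC in the query above (assuming the standard library’s abs, and allowing for floating-point error):

// added to the query above:
ic ranks_sum_to_one = abs[sumranks - 1.0] < 10.0^(-9)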

Prim’s Minimum Spanning Tree

Prim’s algorithm is a greedy algorithm that calculates the minimum spanning tree (MST) by performing the following steps:

  1. Initialize a tree with a single vertex, chosen arbitrarily from the graph.
  2. Grow the tree by one edge: of the edges that connect the tree to vertices not yet in the tree, find the minimum-weight edge, and transfer it to the tree.
  3. Repeat step 2 (until all vertices are in the tree).

As a note, it is possible to implement this algorithm with one recursive relation. To demonstrate the interplay of several recursive relations, we will implement the algorithm using two recursive relations that track the nodes that are and aren’t already in the MST.

First, let’s define an undirected weighted graph inspired by www.geeksforgeeks.org.

install
def graph[:vertex] = range[0, 8, 1]

def graph[:edge_weight] = {
    (0, 1, 4); (1, 2, 9); (2, 3, 7); (3, 4, 9);
    (4, 5, 10); (5, 6, 2); (6, 7, 1); (7, 8, 7);
    (7, 0, 8); (1, 7, 11); (5, 2, 4); (5, 3, 14);
    (2, 8, 2); (6, 8, 6);
}

// make edges undirected
def graph[:edge_weight](x, y, w) = graph[:edge_weight](y, x, w)

where graph[:vertex] contains the graph nodes and graph[:edge_weight] contains the edge information, i.e., the two nodes connected by the edge and the edge weight, which is the last element in the tuple.

Below is one way to implement Prim’s algorithm:

query
def mst[:vertex] = 0 // initialize MST with a vertex
// collect all vertices that have been already reached by the MST
def mst[:vertex](x) = mst[:edge](x, _) or mst[:edge](_, x)

// main recursive rule
def mst[:edge](x, y) =
    mst[:vertex](x) and
    unvisited(y) and
    graph[:edge_weight, x, y] = lightest_edge_weight[unvisited, mst[:vertex]]

// don't remove edges that have been added to the MST
def mst[:edge](x, y) = mst[:edge](x, y)

// list of vertices not in the MST yet
def unvisited(x) = graph[:vertex](x) and not mst[:vertex](x)

// weight of the lightest edge
// between visited and unvisited vertices
@inline
def lightest_edge_weight[S, T] = min[v : v=graph[:edge_weight][S, T]]

// check that all vertices are visited:
ic {empty(unvisited)}

The main recursive part of this algorithm involves the three relations mst[:vertex], mst[:edge], and unvisited. At each iteration, mst[:edge] includes all edges we already computed, mst[:edge](x, y) = mst[:edge](x, y), plus one new edge, which is the lightest edge connecting a visited vertex, mst[:vertex](x), to an unvisited vertex, unvisited(y).

The recursive dependency structure is quite non-trivial here because the three relations refer to each other in a cyclical way: unvisited depends on mst[:vertex], which depends on mst[:edge], which in turn depends on mst[:vertex] and unvisited again. The only direct recursive dependence is def mst[:edge](x, y) = mst[:edge](x, y). The purpose of this rule is to ensure that edge entries, once inserted, aren’t removed in a future iteration.

This safeguard is needed because the edge tuple (x, y) in mst[:edge] requires that y is not in mst[:vertex], but y might be in mst[:vertex] in a subsequent iteration. If this were the case, the edge would be removed again from mst[:edge] in a future iteration, and potentially also the vertices attached to this edge.

Then the condition that y should not be in mst[:vertex] might be fulfilled once more, and we would enter a cyclical loop that never ends. (See the section Common Pitfalls for more details on never-ending recursive computations.) This safeguard prevents all that.

Furthermore, we have a nice example in unvisited of a relation that decreases in size (cardinality) as the recursive evaluation proceeds (see also the section Recursive Rules that Eliminate Elements). Initially, unvisited contains all vertices in the graph, but once the recursive evaluation is done, unvisited is actually empty, which we also double-check with an integrity constraint, ic {empty(unvisited)}.

The edge weights of the MST are not stored in mst because we can look them up in graph[:edge_weight]. Let’s visualize the minimum spanning tree we just calculated.
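As a side note, the total weight of the tree can be recovered by joining mst[:edge] back to graph[:edge_weight] (a sketch, to be added to the query above so that the relations are in scope):

// added to the Prim query above:
def tree_edge_weight(x, y, w) = mst[:edge](x, y) and graph[:edge_weight](x, y, w)
def total_weight = sum[tree_edge_weight]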

Logging Recursive Progress

Normally, we can only access the relation after the recursive evaluation has finished (aka converged). It would be great to also see the intermediate progress, to better understand and debug the logic we write.

Let’s, for example, take the example above where elements are successively removed until only three elements are left (see the section Recursive Rules that Eliminate Elements). If we remove the condition n <= 3, we create a recursive algorithm that never converges (see the section Common Pitfalls).

Let’s see if this is actually what is happening by creating a logging relation Log that records the size of D after each iteration. To avoid the endless loop we add a condition that forces the recursive evaluation to terminate after 10 iterations have been reached.

install
// number of iterations
def Niter = 10

// endless loop
def D(n) = n = range[1, 3, 1](n) and not D(_) and continue
def D(n) = D(n) and n != max[D] and continue

// logging
def Log = 0, 0 : not Log(_, _) // dummy initialization
def Log(n, D_size) =
    n = max[first[Log]] + 1 and
    D_size = sum[count[D]; 0] and // count[D], or 0 if D is empty
    continue

def Log(x, y) = Log(x, y) and x > 0 // don't remove entries

// termination condition
def continue = count[Log] < Niter

Note that we also modified the relation a bit: we insert 1;2;3 instead of the 1;2;...;10 from the original version above. Let’s have a look at whether Log really contains the size (cardinality) of relation D after each iteration:

query
def output = Log

Relation: output

1, 3
2, 2
3, 1
4, 0
5, 3
6, 2
7, 1
8, 0
9, 3
10, 2

This confirms that D starts with 3 elements and decreases to 0 (aka empty), at which point the first recursive rule holds again and D is re-initialized with 1;2;3. This cycle would repeat forever, except that in our case we terminate the iteration when the relation continue becomes false, which is the case once our logging relation Log contains Niter=10 entries.

Common Pitfalls

Probably the single most common problem users will face when using recursion is non-convergent computation.

There are two main types of non-convergence:

  • Unbounded domain: Recursive computations where there is no fixpoint, or where the fixpoint takes infinitely many iterations to reach. PageRank is an example where the fixpoint lies at infinity. The Fibonacci Sequence and calculating the Length of All Paths in a Cyclical Graph are examples of computations with no fixpoint.

    This requires restricting the domain to make sure that the recursive computation terminates after a finite number of iterations.

    (This will be addressed in the future with the demand transformation feature — see section Running to Infinity).

  • Oscillations: No fixpoints exist even though the output domain is finite. This can occur if the recursive computation gets stuck in a cycle. This situation is much harder to spot and can be quite subtle as the discussion in the MST example above demonstrated. In the future, the query evaluator will be able to detect this behavior and report a failure after a while, when we repeat a solution we have seen before. (See section Stuck in a Cyclical Loop).

Running to Infinity

Factorial

How can one end up with an infinite loop? One obvious way is, of course, when one forgets to restrict the output domain in a recursive rule that explores a larger and larger domain space as the recursive computation progresses. An example is the factorial, $n!$. Imagine that we write the factorial but don’t state at which $n$ to stop:

query
def F[0] = 1
def F[n] = n * F[n-1]

Since Rel computes fixpoints in a bottom-up fashion, deriving new facts from existing ones, the iteration will go on forever, as $n$ grows larger and larger. In the future, this issue will be solved with on-demand evaluation, which only performs computations needed when a specific set of values are requested.

Length of All Paths in a Cyclical Graph

In graph problems, too, we can run into infinite recursion even though the graph itself is finite. For cyclical graphs, for instance, calculating the lengths of all paths will not terminate, because the cyclical nature of the graph ensures that one can always find a path of arbitrary length by passing through a loop in the graph multiple times.

To demonstrate that point, let’s construct a minimal graph with 3 vertices and one loop between nodes 1 and 2.

install
def cyclical_graph[:vertex] = {1; 2; 3}
def cyclical_graph[:edge] = {(1, 2); (2, 1); (2, 3)}

// path_length(start_node, end_node, distance)
def path_length = cyclical_graph[:edge], 1 // base case
def path_length(a, c, len) =
    path_length(a, b, len1) and
    path_length(b, c, len2) and
    len = len1 + len2 and
    len <= 4 // termination to avoid infinite iteration
    from b, len1, len2

To avoid an endless loop, we insert the condition len <= 4 such that only paths with length not longer than 4 are included. Inspecting the relation path_length shows that between each node pair we have multiple path lengths increasing in steps of 2, which is the length of the loop in our cyclical graph:

query
path_length

Relation: output

1, 1, 2
1, 1, 4
1, 2, 1
1, 2, 3
1, 3, 2
1, 3, 4
2, 1, 1
2, 1, 3
2, 2, 2
2, 2, 4
2, 3, 1
2, 3, 3

Stuck in a Cyclical Loop

There exists another way that one can get stuck in an infinite recursive computation. Elements can be added and removed during the iteration. Hence, it is possible to end up in a cyclical loop that never ends.

Let’s take the example in the section Recursive Rules that Eliminate Elements, where we start with the elements 1 to 10 and iteratively remove the largest one, but only if it is larger than 3. If we take this condition away,

query
def D = n : n = range[1, 10, 1](n) and not D(_)
def D(n) = D(n) and n != max[D]

we will end up in an endless cyclical loop. Why? Because once we remove the last element from D, we again fulfill the initial condition that was used to populate the relation in the first place. Now we are back at the beginning, and the cycle repeats itself forever. Such a recursive relation has no fixpoint and will never terminate.

As mentioned above, this problem will be solved in the future, and Rel will be able to recognize that a solution has already been visited even though the recursive computation has not converged yet.

Behind the Scenes: Recursion and Fixpoints

In this section, we discuss in more detail how recursively defined rules get evaluated in Rel. The content discussed here is not needed to write successful recursive relations. It might be, however, very helpful for writing complex recursive algorithms and understanding why Rel returns what it does.

At a high level, Rel solves a recursive relation $R$ by starting at step 0 with the base case(s), $R_0$, which may be the empty set (in case no explicit base case is defined). The relation $R_0$ represents the starting point of our recursive computation. If $f$ stands for the function that applies the recursive rules once, mapping one relation to the next, then we can express the converged relation $R$ in the following way:

$R = f(R_{N-1}) = (f \circ f)(R_{N-2}) = \underbrace{(f \circ \ldots \circ f)}_{N}(R_{0})$,

where after $N$ iterative applications of $f$ we have converged to a final result by reaching the fixpoint, $f(R) = R$, and the iteration stops since repeated application of $f$ doesn’t change the content of $R$.

Rel solves recursive computations in a bottom-up fashion: we start with the base case(s) and then iteratively apply the recursive rule until we reach a fixpoint. Hence, it is currently difficult to implement a top-down algorithm. Many (naive) implementations of divide-and-conquer algorithms (e.g., mergesort, FFT, and the Tower of Hanoi) work in a top-down fashion.

Besides the Common Pitfalls discussed above, we also need to be aware of scenarios where we might have multiple fixpoints.

Multiple Fixpoints

Mathematical Function

It is possible that a recursive algorithm has multiple fixpoints, $f(x)=x$. Take, for instance, the function $f(x) = -2(x-1)^2+2$. It has the fixpoints $x=0$ and $x=\frac{3}{2}$.

Fixpoints can be found by recursively applying the rule $x_n = f(x_{n-1})$. However, which fixpoint we reach or whether we reach a fixpoint at all depends (for this situation) solely on the initial conditions.

When we start the recursion at $x=1.0$, we will reach the fixpoint $x=0$ within two iterations.

query
@inline
def f[a] = -2*(a-1)^2+2

def x[0] = 1.0
def x[n] = value : y(n-1, value) and y[n-1] != x[n-1]
def y[n] = f[x[n]]

When we start the recursion at $x=0.5$, we will reach the other fixpoint $x=3/2$ within one iteration.

query
@inline
def f[a] = -2*(a-1)^2+2

def x[0] = 0.5
def x[n] = value : y(n-1, value) and y[n-1] != x[n-1]
def y[n] = f[x[n]]

If we don’t keep track of the iteration index, we don’t even need to explicitly restrict the output domain because the fixpoint behavior ensures that the recursive computation terminates by itself.

query
@inline
def f[a] = -2*(a-1)^2+2

def xy = f[x] for x where x = {0.5; 1.0} and not xy(_, _)
def xy = f[x] for x where xy(_, x)

ic fixpoints = equal(xy, {(0.0, 0.0); (1.5, 1.5)})

Because we remove the starting points with the first iteration (via the expression not xy(_, _)), the only elements that survive in xy are the fixpoints.

Note that the starting conditions are hand-picked; for almost any other starting point we would never reach either fixpoint.

Even though this toy situation is highly artificial, it demonstrates very clearly what users should keep in mind when writing recursive rules.

Social Network

A more knowledge-graph-related example is:

query
def man = "Dilbert"
def single(x) = man(x) and not husband(x)
def husband(x) = man(x) and not single(x)

where we have two fixpoints:

  • Dilbert is single and not a husband: single(Dilbert) and not husband(Dilbert) or
  • Dilbert is a husband and not single: husband(Dilbert) and not single(Dilbert).

Both solutions are valid. In the code as written, Dilbert is initially only in the relation man, and in neither single nor husband. This starting point creates a problem and will cause the recursive evaluation to get stuck in an infinite loop. In the first iteration, Dilbert will be added to both relations (single and husband). In the next iteration he will be removed from both, before being added again to both in the subsequent iteration.

As we can see here, this situation is very likely not intended by the person who wrote the code. This example demonstrates that negations and/or aggregations should be used with caution in recursive definitions because either of them can result in non-monotonic recursion, which in subtle ways can lead to unintended behaviors like getting stuck in an infinite loop.
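One way to repair this is to break the symmetry explicitly, for example by asserting one of the two valid solutions as a fact (a sketch):

query
def man = "Dilbert"
def husband = "Dilbert" // symmetry-breaking fact: pick the husband solution
def single(x) = man(x) and not husband(x)
def husband(x) = man(x) and not single(x)

With this base fact, husband stabilizes at "Dilbert", single becomes empty, and the recursive evaluation reaches a fixpoint.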

Symmetry-Induced Fixpoints

Symmetry is a fundamental concept in mathematics and physics, and can give rise to profound properties like the law of conservation of energy. One major cause for the existence of fixpoints is a symmetry in the problem definition. The existence of the multiple fixpoints in the Dilbert example above is, for instance, solely due to the symmetry between the definitions of husband and single.

In more complex situations, the underlying symmetry might not be that easy to spot. For example, in the following program

query
def circle = 1, 0
def circle = -y, x from x, y where circle(x, y)

we will reach the fixpoint after four iterations. Why? Because we will be back at $(1,0)$ after visiting the points $(0,1), (-1,0)$, and $(0,-1)$.

The reason for this cyclical behavior is that we have encoded a rotational symmetry in our problem. More specifically, with each iteration we perform a $90^\circ$ rotation starting at $(1,0)$.

Hence, once we have visited all the points of the rotational group $C_4$ (modulo the starting point), we will have reached the fixpoint and no new elements will be added to circle.
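We can make this orbit explicit with an IC listing exactly the four points:

query
def circle = 1, 0
def circle = -y, x from x, y where circle(x, y)

ic c4_orbit = equal(circle, {(1, 0); (0, 1); (-1, 0); (0, -1)})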

This toy example shows how the underlying symmetry in our algorithm enforces that we stay within a restricted domain and all elements in our relation are elements of that domain. Only symmetry-breaking operations will be able to escape this domain.

If the symmetry is unwanted, or the developer is unaware of it, this restriction may lead to unintended behavior, since not the entire argument domain will be explored. However, this behavior is often beneficial: these restrictions can be exploited and can lead to significant speed-ups in calculations without loss of generality.