Recursion

This concept guide introduces the logic behind recursion and shows you how to successfully write recursively defined relations in Rel. It also describes common challenges you may encounter when using recursion and shows you how to look out for and resolve them.

Introduction

A program or a query is called recursive when a relation is defined in terms of itself. For many computational problems, recursion helps to reduce complexity and to express solutions in a cleaner and more understandable way. A common side effect is that recursive solutions are often shorter, more compact, and easier to maintain.

Rel supports recursive computations, a key ingredient in making the language Turing complete. This means Rel can be used to perform any computation that any other programming language supports.

With the capability to express queries recursively, Rel is able to perform a variety of computations that at first glance you would not expect from a database language:

  • Iterative algorithms can be expressed as recursive Rel programs; for example, Fibonacci numbers or Prim’s minimum spanning tree.

  • Relations requiring a variable recursion depth can be expressed more compactly and efficiently; for example, transitive closure, PageRank, and moving averages.

To define recursive computations, you need the following ingredients:

  1. A base case.
  2. A recursive rule.

The base case is the starting point of the recursive computation. The recursive rule is the heart of the recursive computation, which references itself.

Termination of Recursive Computations

Computationally, you need a third ingredient in order to have a successful recursive algorithm: termination conditions, which ensure that the recursive computation doesn’t run forever.

Generally, a recursive computation in Rel terminates when a fixpoint is reached, meaning that the elements in a relation don’t change anymore from one iteration to the next. See the Behind the Scenes section for more details.

More formally, this means the solution to the recursive computation is a relation (or a set of relations) R, which is a fixpoint of the following definition:

def R(x...) = base_case(x...) or recursive_rule[R](x...)

A recursive computation defining an infinite relation, or infinite function, such as the Fibonacci sequence, currently requires a domain restriction for the computation to terminate. See the examples below.

For certain recursive computations, no explicit domain restriction is needed because the definitions naturally reach a fixpoint. Graph reachability (see below) is one such problem because the output domain, which is all possible pairs of graph nodes, is finite, and the recursion is monotonically increasing.

Recursion is said to be monotonically increasing if at each iteration you can only add new elements and never remove any. The query optimizer will automatically determine whether or not a recursively defined relation is monotonic. If the recursion is found to be monotonic, the system uses a more efficient semi-naive algorithm to find a fixpoint. A more detailed discussion on whether or not a recursion is monotonic can be found in the Recursive Rules That Eliminate Elements section.

Introductory Examples

Recursive Computation on Graphs

One of the most fundamental graph queries asks if there is a path between two nodes. Questions like this appear frequently in the real world. In terms of international travel, this question could be:

Can I fly from Providence to Honolulu?

To answer that question using Rel, you first need a few airports:

// model
 
def airport = {
    "Providence (PVD)"; "New York (JFK)"; "Boston (BOS)";
    "San Francisco (SFO)"; "Los Angeles (LAX)"; "Honolulu (HNL)";
}

You also need a flight network:

// model
 
def directional_flight = {
    ("Providence (PVD)", "Boston (BOS)"); ("Boston (BOS)", "San Francisco (SFO)");
    ("San Francisco (SFO)", "Honolulu (HNL)"); ("New York (JFK)", "Los Angeles (LAX)");
}
 
def flight(a, b) = directional_flight(a, b) or directional_flight(b, a)
 
// Integrity constraint
ic flight_between_airports(a, b) = flight(a, b) implies airport(a) and airport(b)

The network introduces directional flights, directional_flight, and bidirectional flights, flight. The integrity constraint checks that flights can only occur between airports.

Now, you can calculate all pairs of airports that are directly or indirectly connected to each other:

// model
 
def connected(a, b) = flight(a, b) // The base case
def connected(a, b) = exists(c : connected(a, c) and flight(c, b))

The second definition is the recursive part of the problem.

The original question now reads:

// read query
 
def output =
    if connected("Providence (PVD)", "Honolulu (HNL)") then
        "yes"
    else
        "no"
    end
 
// This integrity constraint will fail if you don't compute that
// PVD and HNL are connected:
ic {equal(output, "yes")}

Suppose you want to ask another question and check whether you can fly from Boston (BOS) to Los Angeles (LAX):

// read query
 
def output =
    if connected("Boston (BOS)", "Los Angeles (LAX)") then
        "yes"
    else
        "no"
    end
 
// This integrity constraint will fail if you compute that
// BOS and LAX are connected:
ic {equal(output, "no")}

The answer is no, because in this toy world LAX and JFK are connected to each other but isolated from all the other airports.
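
To inspect that isolated component directly, you could list every airport connected to JFK. This is a sketch, reusing the connected relation defined above:

// read query
 
def output = b : connected("New York (JFK)", b)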

For the mathematical enthusiasts, the problem of computing all pairs of connected nodes in a graph is called the transitive closure problem. In this example, the edges of the graph are defined in the flight relation.

See Recursion with Higher-order Arguments for a more general way to define an instance of the transitive closure problem.

Single Recursion

One of the simplest mathematical recursive formulas is the definition of the factorial, $n! = n \cdot (n-1) \cdot \ldots \cdot 1$ for $n \in \mathbb{N}_+$, which fulfills the recursive rule $n! = n \cdot (n-1)!$. In Rel, this reads:

// read query
 
def F[0] = 1 // The base case
def F[n] = n * F[n - 1], n <= 10
 
def output = F

There are several interesting aspects worth mentioning about this recursively defined relation:

Relation F has arity 2, where the first entry is the iteration step n, and the second entry is the value of n!.

The definition also contains all ingredients for a successful recursion:

  • Base case: def F[0] = 1 is the starting point and translates to 0! = 1.
  • Recursive rule: def F[n] = n * F[n - 1] translates to $n! = n \cdot (n-1)!$.
  • Termination condition: n <= 10 makes the output domain finite, forces the fixpoint to be reached when $n$ hits 10, and consequently terminates the recursive evaluation.
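
To gain confidence in the result, you can append an integrity constraint to the query above, following the IC pattern used elsewhere in this guide. This sketch spot-checks two hand-computed values:

ic { F[5] = 120 and F[10] = 3628800 }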

If you want to compute the factorial of a single number, you can avoid having to update the termination condition (here: n <= 10) by taking advantage of a feature known as demand transformation. See Demand Transformation for more details.

Multiple Recursion

Fibonacci Sequence

Multiple recursion, where the recursive rule contains multiple self-references, can also be easily expressed in Rel.

The most famous example is the Fibonacci sequence, $F(n) = F(n-1) + F(n-2)$ with $F(0) = 0$ and $F(1) = 1$. In Rel, this relation reads:

// read query
 
def F[0] = 0  // Base cases
def F[1] = 1  // Base cases
def F[n] = F[n-1] + F[n-2], n <= 10
 
def output = F

You can see that all the ingredients for a successful recursive computation are present.
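
As a quick sanity check, you could again append an IC that spot-checks a hand-computed value; the tenth Fibonacci number is 55:

ic { F[10] = 55 }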

Connected Nodes in a Graph

In the air-travel example described in Recursive Computation on Graphs, you could have defined the connected relation using multiple recursion:

// query
 
def connected(a, b) = flight(a, b)                                      // The base case
def connected(a, b) = exists (c : connected(a, c) and connected(c, b))  // Recursive rule

Here, the recursive rule contains two self-references and reads:

Two airports, a and b, are connected if there exists some third airport, c, that is connected to both airports.

With recursion support for higher-order relations, you can write this transitive closure more generally, separating it from the underlying dataset. See Recursion with Higher-order Arguments for more details.

Negation in Recursion

Recursive dependencies can be made as complex as needed and can go well beyond just asking for a single element of a previous iteration, such as F[n - 1].

The Recamán sequence demonstrates a more complex recursive computation. The sequence is defined as:

$$
a_n = \begin{cases}
0 & \text{if } n = 0 \\
a_{n-1} - n & \text{if } a_{n-1} - n > 0 \text{ and the value isn't already in the sequence} \\
a_{n-1} + n & \text{otherwise}
\end{cases}
$$

The recursive case requires you to check that the potential next value isn’t already in the sequence, and if it is, to assign a different value.

The Recamán sequence can be visualized in a very neat way by connecting subsequent values with an arch.


One way to define the Recamán sequence in Rel is as follows:

// read query
 
@inline
def reject(n,x) = x < 0 or exists(m: recaman(m, x) and m < n)
 
def recaman[0] = 0
def recaman[n](x) = recaman[n - 1] - n = x and not reject(n, x) and 0 < n < 20
def recaman[n](x) = recaman[n - 1] + n = x and reject(n, x - 2 * n) and 0 < n < 20
 
def output = recaman
// This IC verifies the correctness of the results by comparing them with hand-computed results.
ic recaman_sequence = equal(
    recaman[_],
    {0;1;3;6;2;7;13;20;12;21;11;22;10;23;9;24;8;25;43;62;}
)

Only the first 20 terms are calculated. Several logical elements are used here to express the recursive Recamán rule:

  • Negation.
  • Existential quantification.
  • Disjunction.

Additionally, the @inline annotation factors out the condition evaluation into a separate relation, reject, helping to keep the main recursive rule compact and readable.

Finally, adding an integrity constraint (IC) checks that the set of computed values is correct. Note that this IC does not check that the values are in the correct order, but it still gives additional confidence in the results.
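
To also check the order at a few positions, you could spot-check individual indices against the hand-computed sequence. A sketch:

ic { recaman[4] = 2 and recaman[18] = 43 }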

Advanced Examples

Rel supports complex recursive rules that may involve multiple relations that depend on each other and may be recursive themselves. This is known as mutual recursion.

Before diving into mutual recursion, first look at relations that are not monotonically increasing in cardinality as the iteration advances. This is possible because elements can not only be added but also deleted during an iterative step.

Recursive Rules That Eliminate Elements

For all the examples seen so far, the number of elements (also known as the cardinality) in the recursively defined relation is larger at the end of the computation than at the start. However, Rel can also delete, not just add, elements during an iteration, so it is also possible to finish with fewer elements in the relation than at the start.

The following Rel program has two recursive rules for D. The first adds the elements 1 through 10, if D is empty. The second keeps only elements that are smaller than or equal to 3, or different from the maximum element in D. The effect is that the largest element is iteratively removed until only the elements 1; 2; 3 remain:

// read query
 
def D(n) = range[1, 10, 1](n) and empty(D)
def D(n) = D(n) and (n <= 3 or n != max[D])
 
def output = D
 

Notice that the program contains only the two recursive rules and does not explicitly state a base case. This means that the base case is, by default, an empty relation D. The first rule adds the numbers 1 through 10 if and only if D is empty.
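
Once the computation reaches its fixpoint, you can confirm the final contents with an IC appended to the query, following the pattern used elsewhere in this guide. A sketch:

ic { equal(D, {1; 2; 3}) }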

The key difference between the two recursive rules is that the first probes for the non-existence of elements, whereas the second depends on an aggregated view of the existing elements. Either of these features, negation or aggregation, is enough to make a recursion non-monotonic. In the absence of both, recursion is always monotonic.

Asymptotic Convergence: PageRank

Another class of recursive formulas that can be implemented in Rel is infinite series that converge asymptotically. This is in contrast to the example in the Recursive Computation on Graphs section, where convergence is reached after a finite number of steps.

In practice, you can define a convergence criterion: you stop the iteration once the values change between consecutive iterations by less than a small, user-defined $\epsilon$:

$$
\| a_{n+1} - a_n \|^2 < \epsilon \quad \text{for } n > N_\text{converged},
$$

where $N_\text{converged}$ is the number of iterations needed to achieve the desired accuracy.

Many machine learning algorithms fall into this category, where you declare the model as trained once the model parameters and/or the accuracy of the model change only marginally between iterations.

Take the PageRank algorithm with damping as an example. It is defined as:

$$
PR_a = \frac{1-d}{N} + d \sum_{b \in \mathcal{S}_a} \frac{PR_b}{L_b},
$$

where $PR_a$ is the PageRank of website $a$, $L_b$ is the number of outbound links from site $b$, $\mathcal{S}_a$ is the collection of websites that link to website $a$, $N$ is the total number of websites, and $d$ is a damping parameter, usually set around 0.85.

First, define a directed graph, inspired by an example from Wikipedia:

// model
 
module graphA
    def edge = {
        ("N1", "B"); ("N1", "E");
        ("N2", "B"); ("N2", "E");
        ("N3", "B"); ("N3", "E");
        ("N4", "E"); ("N5", "E");
        ("E", "F"); ("F", "E");
        ("E", "B"); ("E", "D");
        ("D", "A"); ("D", "B");
        ("B", "C"); ("C", "B");
        ("F", "B"); }
 
    def node = x : edge(x, _) or edge(_, x)
end
 

(Figure: a visualization of the graph, where the size of a node is proportional to its PageRank.)

The PageRank algorithm assumes that sink nodes, which have no outgoing edges, are connected to all other nodes in the graph. You can derive a new graph from the original as follows:

// model
 
module graphB
    def node = graphA:node
    def edge = graphA:edge
    def sink(n in node) = empty(graphA:edge[n])
    def edge(a in node, b in node) = sink(a) // Add new edges.
end

In Rel, the PageRank algorithm may read:

// model
 
// parameters
def eps = 10.0^(-10)
def damping = 0.85
def node = graphB:node
def edge = graphB:edge
 
def node_count = count[node]
def outdegree[x] = count[edge[x]]
 
def MAX_ITER = 100
 
// A version of sum that gives 0.0 for the empty set,
// needed for nodes with no incoming edges.
@inline
def sum_default[R] = sum[R] <++ 0.0
 
// pagerank(iteration, site, rank)
def pagerank[0, a in node] = 1.0 / node_count  // Base case: equal rank
def pagerank[n, a in node] = // Recursive rule
    (1-damping)/node_count + damping * sum_default[
        pagerank[n-1, b]/outdegree[b] for b where edge(b, a)
    ],
    pagerank(n-1, a, _) // Grounding `n` and `a`
    and not converged(n-1) // Termination condition
    and n<=MAX_ITER  // Safeguard
 
// Track the convergence.
def converged(n) =
    forall(a in node : (pagerank[n, a] - pagerank[n-1, a])^2 < eps)
    and range(1, MAX_ITER, 1, n)

The relation converged stores the iteration numbers for which the convergence criterion is fulfilled. The not converged(n-1) condition in the main recursive definition for pagerank ensures that the iteration stops once convergence is reached. A safeguard condition is also added that stops the recursive computation if 100 iterations have been reached.

Notice that the relations pagerank and converged depend on each other and must be solved together.

The sum_default relation ensures that the sum over an empty set returns 0.0 instead of false, which is the default behavior of sum[]. This matters for nodes with no incoming edges, which can occur if the original graph has no sink nodes.

Now you can see the results. After 62 iterations, you have reached the desired accuracy level with the following results:

// read query
 
def max_iteration = max[n : pagerank(n, _, _)]
def did_converge = "yes", converged = max_iteration
 
def ranks = pagerank[max_iteration]
def sumranks = sum[pagerank[max_iteration]]
 

(Output: the relations max_iteration, did_converge, ranks, and sumranks.)

Since PageRank corresponds to a probability distribution over nodes, the ranks should sum to approximately 1.0.
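
You can assert this with an IC appended to the query above. This is a sketch; the tolerance 0.0001 is an arbitrary choice, not from the original guide:

ic { (sum[pagerank[max_iteration]] - 1.0)^2 < 0.0001 }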

Prim’s Minimum Spanning Tree

Prim’s algorithm is a greedy algorithm to calculate a graph’s minimum spanning tree (MST):

  1. Initialize a tree with a single node, chosen arbitrarily from the graph.
  2. Grow the tree by one edge: Of the edges that connect the tree to nodes not yet in the tree, find a minimum-weight edge, and add it to the tree.
  3. Repeat step 2, until all nodes are in the tree.

The input to the algorithm is an edge-weighted undirected graph, which is assumed to be connected; that is, it has no disjoint components. This one is inspired by www.geeksforgeeks.org:

// model
 
def graph:edge_weight = {
    (0, 1, 4); (1, 2, 9); (2, 3, 7); (3, 4, 9);
    (4, 5, 10);(5, 6, 2); (6, 7, 1); (7, 8, 7);
    (7, 0, 8); (1, 7, 11); (5, 2, 4);(5, 3, 14);
    (2, 8, 2); (6, 8, 6);
}
 
def graph:node(x) = graph:edge_weight(x,_,_) or graph:edge_weight(_,x,_)
 
// Make the graph undirected by adding reverse edges,
// which makes the code a little simpler.
def graph:edge_weight(x, y, w) = graph:edge_weight(y, x, w)

The relation graph:edge_weight contains the edge information. If graph:edge_weight(x,y,w) holds, there is an edge of weight w from x to y. The relation graph:node is the set of nodes.

It is possible to implement Prim’s algorithm with one recursive definition. To demonstrate the interplay of several recursive definitions, the code below uses three: two of them track the edges and nodes in the MST, and one tracks the nodes not yet in the MST:

// read query
 
def mst:node = 0 // Initialize MST with an arbitrary node.
 
// Collect all nodes that have been already reached by the MST.
def mst:node(x) = mst:edge(x, _) or mst:edge(_, x)
 
// Define the main recursive rule.
def mst:edge = just_one[x, y :
       mst:node(x) and
       unvisited(y) and
       graph:edge_weight[x, y] = lightest_edge_weight[unvisited, mst:node]
    ]
 
@inline
def just_one[R] = (top[1,R])[1]
 
// Don't remove edges that have been added to the MST.
def mst:edge = mst:edge
 
// Define the list of nodes that are not in the MST yet.
def unvisited(x) = graph:node(x) and not mst:node(x)
 
// Define the smallest weight of an edge between visited and unvisited nodes.
@inline
def lightest_edge_weight[S, T] = min[v : v=graph:edge_weight[S, T]]
 
// Check that all the nodes are visited.
ic { empty(unvisited) }
// Check that the number of edges in a spanning tree is the number of nodes minus one.
ic { count[mst:edge] = count[graph:node] - 1 }
 
def output:edge = mst:edge
def output:weight = sum[mst:edge <: graph:edge_weight]
 

The main recursive part of this algorithm involves the three relations mst:node, mst:edge, and unvisited. At each iteration, mst:edge includes all the edges already added (via the rule def mst:edge = mst:edge), plus one new edge, which is the lightest edge connecting a visited node, mst:node(x), to an unvisited node, unvisited(y).

The just_one higher-order inline relation makes sure only one edge is added at a time. In general, there could be two such edges, with the same minimum weight, leading to the same unvisited node; adding both edges would create a cycle, which must be avoided for the result to be a tree.

The recursive dependencies are quite non-trivial here because the three relations refer to each other in a cyclical way: unvisited depends on mst:node, which depends on mst:edge, which in turn depends on mst:node and unvisited again. The only direct recursive dependency is def mst:edge = mst:edge. This rule ensures that edges, once inserted, are not removed in a future iteration.

This safeguard is needed because the edge tuple (x, y) in the other mst:edge rule requires that y is unvisited, that is, not in mst:node. Without the safeguard, new edges would be removed from mst:edge in future iterations, once y is in mst:node.

After such a removal, the condition that y should not be in mst:node might be fulfilled again, and the computation could enter a cycle that never ends. See the Common Pitfalls section for more details on never-ending recursive computations. The safeguard prevents all that.

The relation unvisited is an example of a relation that decreases in size as the recursive evaluation proceeds. See the Recursive Rules That Eliminate Elements section. Initially, unvisited contains all the nodes in the graph, but once the recursive evaluation is done, unvisited is actually empty, which you can also double check with an integrity constraint, ic {not unvisited(_)}.

The edge weights of the MST are not stored in mst. Instead, you can look them up in graph:edge_weight, as sketched below. With the edges and weights in hand, you can also visualize the minimum spanning tree you just calculated.
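
For instance, within the same query that defines mst, the <: operator used above restricts the weighted edges to those in the MST. A minimal sketch:

def output:weighted_edge = mst:edge <: graph:edge_weight

This returns (x, y, w) triples: the MST edges together with their weights.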

Logging Recursive Progress

Normally, you can only access a recursively defined relation after the evaluation has finished, i.e., converged. Being able to see the intermediate progress helps you understand and debug the logic you write.

Take the example above where elements are successively removed until only three elements are left. See the Recursive Rules That Eliminate Elements section. If you remove the condition n <= 3, you create a recursive algorithm that never converges. See the Common Pitfalls section.

To see if this is actually what is happening, you can create a logging relation Log that records the size of D after each iteration. To avoid the endless loop, you can add a condition that forces the recursive evaluation to terminate after 10 iterations have been reached:

// model
 
// Number of iterations
def Niter = 10
 
// Endless loop
def D(n) = n=range[1, 3, 1] and not D(_) and continue
def D(n) = D(n) and n!=max[D] and continue
 
// Logging
def Log = 0,0 : not Log(_, _) // Dummy initialization
def Log(n, d_size) =
    n = max[first[Log]]+1 and
    d_size = sum[count[D]; 0] and
    continue
 
def Log(x, y) = Log(x, y) and x > 0 // Don't remove entries.
 
// Termination condition
@inline
def continue = count[Log] < Niter

Note that the relation D was modified from the original above: it is initialized with 1;2;3 instead of 1 through 10. You can check whether Log really contains the size (cardinality) of relation D after each iteration:

// read query
 
Log

This confirms that D starts with three elements and decreases to zero, i.e., empty, at which point the first recursive rule holds again, and D is again initialized with 1;2;3. This cycle repeats forever, except that in this case you terminate the iteration when the relation continue turns false, which is the case when the logging relation Log contains at least Niter=10 entries.

Recursion With Higher-order Arguments

Once you have identified a general pattern — such as that of computing a transitive closure — you may be able to express it directly in the form of a higher-order relation. This allows you to avoid defining transitive closure over and over again for different graphs.

A higher-order relation is a relation whose arguments are other relations. If you want a higher-order relation to be recursive, you must annotate the definitions of its rules with @outline. You can define a higher-order relation that computes the transitive closure of a binary relation as follows:

// model
 
@outline
def transitive_closure[R] = R
@outline
def transitive_closure[R](x, y) =
    exists(z : R(x, z) and transitive_closure[R](z, y))

Here, R is the parameter that represents the argument of transitive_closure, that is, some relation. The first rule expresses the fact that any relation is a subset of its transitive closure. The second rule is the recursive rule for transitive closure. This rule is very similar to the first-order version discussed in Recursive Computation on Graphs: the relation connected is simply replaced with transitive_closure[R].

Having defined transitive_closure, you can use it to compute the transitive closure of any binary relation. For example:

// read query
 
def edges = (1, 2); (2, 3); (3, 2)
 
def output = transitive_closure[edges]

You can also express connected much more simply than before:

def connected = transitive_closure[flight]

See @outline for a more general definition of the transitive closure relation.

Tips and Tricks

Demand Transformation

If you just want to compute the factorial of a single number, you can omit the explicit statement of the termination condition by taking advantage of the compiler’s demand transformation. You can request the transformation by annotating the definition with @ondemand. Here’s a modified definition of the factorial relation that was introduced in Single Recursion:

// read query
 
@ondemand
def F[0] = 1
def F[n] = n * F[n - 1], n > 0
 
def output = F[12]

The weaker condition n > 0 is still necessary. See @ondemand for more information.

Demand transformation can be used for many forms of recursive logic. It is important to note, though, that the arguments of the relations must be simple values, not relations. Demand transformation cannot be applied to higher-order relations. If you want to know more about recursion with higher-order arguments, see Recursion With Higher-order Arguments.

Common Pitfalls

Probably the single most common problem you will face when using recursion is a nonconvergent computation.

There are two main types of nonconvergence:

  • Unbounded domain: Recursive computations where there is no fixpoint, or where the fixpoint takes infinitely many iterations to reach. PageRank is an example where the fixpoint lies at infinity. The Fibonacci sequence and calculating the lengths of all paths in a cyclical graph are examples of computations with no fixpoint. In such cases you must restrict the domain to make sure that the recursive computation terminates after a finite number of iterations; in some cases, the demand transformation feature can handle this for you. See the Running to Infinity section.

  • Oscillations: No fixpoint exists even though the output domain is finite. This can occur if the recursive computation gets stuck in a cycle. This situation is much harder to spot and can be quite subtle, as the discussion of the MST example above demonstrated. In the future, the query evaluator will be able to detect this behavior and report a failure when a previously seen solution is repeated. See the Stuck in a Cycle section.

Running to Infinity

Factorial

How can you end up with an infinite loop? One obvious way is when you forget to restrict the output domain in a recursive rule that explores a larger and larger domain space as the recursive computation progresses. An example is the factorial $n!$. Imagine that you write the factorial but don’t state at which $n$ to stop:

// query
 
def F[0] = 1
def F[n] = n * F[n-1]

Since Rel computes fixpoints in a bottom-up fashion, deriving new facts from existing ones, the iteration will go on forever as $n$ grows larger and larger. As discussed in the Demand Transformation section, on-demand evaluation addresses this issue by only performing the computations needed when a specific set of values is requested.

Lengths of All Paths in a Cyclical Graph

In graph problems, infinite recursion can occur even though the graph itself is finite. For cyclical graphs, for instance, calculating the lengths of all paths will never terminate: the cyclical nature of the graph means you can always find a longer path by passing through a loop one more time.

To demonstrate that point, you can construct a minimal graph with three nodes and one loop between nodes 1 and 2:

// model
 
def cyclical_graph:node = {1; 2; 3}
def cyclical_graph:edge = {(1, 2); (2, 1); (2, 3)}
 
// path_length(start_node, end_node, distance)
def path_length = cyclical_graph[:edge], 1  // Base case
def path_length(a, c, len) =
    path_length(a, b, len1) and
    path_length(b, c, len2) and
    len = len1 + len2  and
    len <= 4  // Termination to avoid infinite iteration
    from b, len1, len2

To avoid an endless loop, you can insert the condition len <= 4. Inspecting the relation path_length shows that between each node pair you have multiple path lengths increasing in steps of two, which is the length of the loop in the cyclical graph:

// read query
 
path_length
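
As a spot check, the loop between nodes 1 and 2 should produce paths from node 1 to node 2 of lengths 1 and 3. A sketch of an IC you could append to the query:

ic { path_length(1, 2, 1) and path_length(1, 2, 3) }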

Stuck in a Cycle

There is another way that you can get stuck in an infinite recursive computation. Elements can be added and removed during the iteration. Hence, it is possible to end up in a cycle that never ends.

Take the example in the Recursive Rules That Eliminate Elements section where you start with the elements 1 to 10 and iteratively remove the largest element, but only if it is larger than 3. Suppose you take this condition away:

// Query
 
def D = n : n=range[1, 10, 1] and not D(_)
def D(n) = D(n) and n != max[D]

You will end up in an endless cycle. Why? Because once you remove the last element from D, you are again fulfilling the initial condition used to populate the relation in the first place. Now, you are back at the beginning, and the cycle repeats itself forever. This recursive computation has no fixpoint and will never terminate.

As mentioned above, this problem will be solved in the future: Rel will be able to recognize when a solution has already been visited but the recursive computation has not converged.

Behind the Scenes: Recursion and Fixpoints

This section goes into more detail on how recursive rules are evaluated in Rel. You don’t need to understand this to successfully write recursively defined relations. It can, however, be very helpful for writing complex recursive algorithms and for understanding why Rel returns what it does.

At a high level, Rel solves the recursive computation for $R$ by starting at step 0 with the base case(s), $R_0$, which may be the empty set (in case no explicit base case is defined). The relation $R_0$ represents the starting point of the recursive computation. If $f$ stands for a function mapping relation $S$ to relation $T$, then you can express the converged relation $R$ in the following way:

$$
R = f(R_{N-1}) = (f \circ f)(R_{N-2}) = \underbrace{(f \circ \ldots \circ f)}_{N}(R_0),
$$

where after $N$ iterative applications of $f$ you have converged on a final result by reaching the fixpoint, $f(R) = R$. The iteration stops because repeated application of $f$ no longer changes the content of $R$.

Rel solves recursive computations in a bottom-up fashion: you start with the base case(s) and iteratively apply the recursive rule until you reach a fixpoint. Hence, it is currently difficult to implement a top-down algorithm. Many naive implementations of divide-and-conquer algorithms, such as mergesort, FFT, and the Tower of Hanoi, work in a top-down fashion.
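
To make this concrete, consider the factorial relation F from the Single Recursion section. A sketch of how the bottom-up iteration unfolds:

$$
F_0 = \{(0, 1)\}, \quad
F_1 = \{(0, 1), (1, 1)\}, \quad
F_2 = \{(0, 1), (1, 1), (2, 2)\}, \quad \ldots
$$

After ten applications of the recursive rule, $F_{10}$ contains the pairs $(0, 1)$ through $(10, 3628800)$. The condition n <= 10 blocks any further derivations, so $F_{11} = F_{10}$ and the fixpoint is reached.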

Besides the common pitfalls discussed above, you also need to be aware of scenarios where you might have multiple fixpoints.

Multiple Fixpoints

A function $f$ can have multiple fixpoints, that is, more than one value of $x$ where $f(x) = x$. This is easy to see with mathematical functions, such as polynomials. Take, for instance, the function $f(x) = -2(x-1)^2 + 2$. It has the fixpoints $x = 0$ and $x = \frac{3}{2}$, which are the two points where the function intersects the $y = x$ line. Whether iterative methods can find these values may depend on the initial conditions chosen.
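
To verify these fixpoints, solve $f(x) = x$:

$$
-2(x-1)^2 + 2 = x
\;\Longleftrightarrow\;
-2x^2 + 4x = x
\;\Longleftrightarrow\;
x(3 - 2x) = 0,
$$

so $x = 0$ or $x = \frac{3}{2}$.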

This applies to relations in general, and it arises in Rel, particularly when using negation.

Negations and Cycles

Consider the following example, featuring recursion and negation:

def man = "Dilbert"
def single(x) = man(x) and not husband(x)
def husband(x) = man(x) and not single(x)
def output:single = single
def output:husband = husband

There are two different least fixpoints:

  • Dilbert is single and not a husband: single(Dilbert) and not husband(Dilbert)
  • Dilbert is a husband and not single: husband(Dilbert) and not single(Dilbert)

They are incomparable: Neither is smaller than the other. Rel computes one of them, but it can be argued that they are both equally valid.

Note that single depends on husband, which in turn depends on single again, so you have a dependency cycle, with intervening negations.

Models with a cyclical definition that involves negation are non-stratifiable and you should avoid them when possible. Using them can lead to non-terminating logic or multiple fixpoints, creating a situation where the system may:

  • Find an unintended fixpoint.
  • Find different fixpoints each time it runs.
  • Find no fixpoint.

In the example above, having multiple fixpoints leads to a non-terminating recursive evaluation, and the system throws an exception.

In general, negations and aggregations should be used with caution in recursive definitions, since they can result in nonmonotonic recursion and unintended behaviors.

Symmetry-Induced Fixpoints

Symmetry is a fundamental concept in mathematics and physics and is relevant to properties like the law of conservation of energy. Symmetries in the problem definition are a common cause of multiple fixpoints. The multiple fixpoints in the Dilbert example above come from the symmetry between the definitions of husband and single.

In more complex situations, the underlying symmetry might not be so easy to spot. For example:

// Query
 
def circle = 1, 0
def circle = -y, x from x, y where circle(x, y)

The computation reaches the fixpoint after four iterations. This is because the recursion returns to $(1,0)$ after visiting the points $(0,1)$, $(-1,0)$, and $(0,-1)$.

The reason for this cyclical behavior is that you have encoded a rotational symmetry in your problem. With each iteration, you perform a $90^\circ$ rotation starting at $(1,0)$.

Hence, once you visit all the points of the rotational group $C_4$, modulo the starting point, you will have reached the fixpoint, and no new elements will be added to circle.

This toy example shows how the underlying symmetry in the algorithm forces you to stay within a restricted domain, and that all elements in your relation are elements of that domain. Only symmetry-breaking operations will be able to escape this domain.

Symmetries are often beneficial, because this restriction can be exploited and can lead to significant speedups in calculations without loss of generality.

Summary

In summary, a recursive definition is one that references the relation being defined in the body of the definition. To successfully write a recursive algorithm in Rel, you need a base case, a recursive rule, and termination conditions that ensure a fixpoint is reached so the algorithm terminates.

In Rel, you can express recursive computations for simple mathematical formulas, for computations involving several relations with multiple self-references, and for computations involving negation and aggregation. Understanding how recursion works helps you write logic that does not lead to programs that never terminate. It is also useful to know how recursive logic is evaluated in Rel, particularly when writing complex recursive algorithms.

See Also

For further reading on the Rel features used in this concept guide, see the documentation for @inline, @outline, @ondemand, and integrity constraints.
