Distributed minimum spanning tree
The distributed minimum spanning tree (MST) problem involves the construction of a minimum spanning tree by a distributed algorithm, in a network where nodes communicate by message passing. It is radically different from the classical sequential problem, although the most basic approach resembles Borůvka's algorithm. One important application of this problem is to find a tree that can be used for broadcasting. In particular, if the cost for a message to pass through an edge in a graph is significant, an MST can minimize the total cost for a source process to communicate with all the other processes in the network.
The problem was first suggested and solved in O(V log V) time in 1983 by Gallager et al.,[1] where V is the number of vertices in the graph. Later, the solution was improved to O(V)[2] and finally[3][4] to O(√V log* V + D), where D is the network, or graph, diameter. A lower bound of Ω(√V / log V + D) on the time complexity of the solution was eventually shown.[5]
Overview
The input graph is considered to be a network, where vertices are independent computing nodes and edges are communication links. Links are weighted as in the classical problem.
At the beginning of the algorithm, nodes know only the weights of the links which are connected to them. (It is possible to consider models in which they know more, for example their neighbors' links.)
As the output of the algorithm, every node knows which of its links belong to the minimum spanning tree and which do not.
MST in message-passing model
The message-passing model is one of the most commonly used models in distributed computing. In this model, each process is modeled as a node of a graph. Each communication channel between two processes is an edge of the graph.
Two commonly used algorithms for the classical minimum spanning tree problem are Prim's algorithm and Kruskal's algorithm. However, it is difficult to apply these two algorithms in the distributed message-passing model. The main challenges are:
- Both Prim's algorithm and Kruskal's algorithm require processing one node or edge at a time, making them difficult to run in parallel. For example, Kruskal's algorithm processes edges in turn, deciding whether to include an edge in the MST based on whether it would form a cycle with all previously chosen edges.
- Both Prim's algorithm and Kruskal's algorithm require processes to know the state of the whole graph, which is very difficult to discover in the message-passing model.
Due to these difficulties, new techniques were needed for distributed MST algorithms in the message-passing model. Some bear similarities to Borůvka's algorithm for the classical MST problem.
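The fragment-merging idea these algorithms share is easiest to see in a sequential rendering of Borůvka's algorithm. The sketch below is a hypothetical helper, not taken from any of the cited papers: every fragment repeatedly picks its minimum-weight outgoing edge and merges along it.

```python
def boruvka_mst(n, edges):
    """n nodes labeled 0..n-1; edges is a list of (weight, u, v) tuples with
    distinct weights. Returns the set of MST edges."""
    parent = list(range(n))

    def find(x):
        # Union-find representative with path compression.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    mst = set()
    while len(mst) < n - 1:
        # Each fragment selects its minimum-weight outgoing edge (MWOE).
        best = {}
        for w, u, v in edges:
            ru, rv = find(u), find(v)
            if ru == rv:
                continue  # internal edge: a "rejected" edge in GHS terms
            for r in (ru, rv):
                if r not in best or w < best[r][0]:
                    best[r] = (w, u, v)
        # Join every fragment to its MWOE; the fragment count at least halves.
        for w, u, v in best.values():
            ru, rv = find(u), find(v)
            if ru != rv:
                parent[ru] = rv
                mst.add((w, u, v))
    return mst
```

Distributed algorithms such as GHS perform the same fragment growth, but each round is coordinated purely by messages rather than by a loop over a shared edge list.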
GHS algorithm
The GHS algorithm[1] of Gallager, Humblet and Spira is one of the best-known algorithms in distributed computing theory. This algorithm constructs an MST in the asynchronous message-passing model.
Assumptions
The GHS algorithm requires several assumptions.
- The input graph is connected and undirected.
- Each edge of the input graph has a distinct finite weight. This assumption is not needed if there is a consistent method to break ties between edge weights.
- Each node initially knows the weight of each edge incident to that node.
- Initially, each node is in an inactive state. Each node either spontaneously awakens or is awakened by receipt of any message from another node.
- Messages can be transmitted independently in both directions on an edge and arrive after an unpredictable but finite delay, without error.
- Each edge delivers messages in FIFO order.
Properties of MSTs
Define a fragment of an MST T to be a subtree of T, that is, a connected set of nodes and edges of T. MSTs have two important properties in relation to fragments:[1]
- Given a fragment of an MST T, let e be a minimum-weight outgoing edge of the fragment (an edge with exactly one endpoint in the fragment). Then joining e and its adjacent non-fragment node to the fragment yields another fragment of an MST.
- If all the edges of a connected graph have different weights, then the MST of the graph is unique.
These two properties form the basis for proving correctness of the GHS algorithm. In general, the GHS algorithm is a bottom-up algorithm in the sense that it starts by letting each individual node be a fragment, and then joining fragments until a single fragment is left. The above properties imply that the remaining fragment must be an MST.
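On a small graph, these two properties can be checked by brute force. The sketch below is illustrative only, using an arbitrary example graph: it enumerates all spanning trees, confirms the MST is unique under distinct weights, and confirms that a fragment's minimum-weight outgoing edge is an MST edge.

```python
from itertools import combinations

def spanning_trees(n, edges):
    """Yield every spanning tree (as a tuple of edges) of an n-node graph."""
    for cand in combinations(edges, n - 1):
        parent = list(range(n))
        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x
        acyclic = True
        for _, u, v in cand:
            ru, rv = find(u), find(v)
            if ru == rv:
                acyclic = False
                break
            parent[ru] = rv
        if acyclic:  # n-1 edges and no cycle -> a spanning tree
            yield cand

edges = [(1, 0, 1), (2, 1, 2), (3, 2, 3), (4, 0, 3), (5, 0, 2)]
trees = list(spanning_trees(4, edges))
mst = min(trees, key=lambda t: sum(w for w, _, _ in t))
# Property 2: distinct weights imply a unique minimum spanning tree.
mst_weight = sum(w for w, _, _ in mst)
assert sum(1 for t in trees
           if sum(w for w, _, _ in t) == mst_weight) == 1
# Property 1: the MWOE of fragment {0, 1} (cheapest edge with exactly one
# endpoint inside the fragment) is itself an MST edge.
frag = {0, 1}
mwoe = min(e for e in edges if (e[1] in frag) != (e[2] in frag))
assert mwoe in mst
```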
Description of the algorithm
The GHS algorithm assigns a level to each fragment, which is a non-decreasing integer with initial value 0. Furthermore, each fragment with a non-zero level has an ID: the ID of its core edge, which is selected when the fragment is constructed. During the execution of the algorithm, each node can classify each of its incident edges into three categories:[1][6]
- Branch edges are those that have been determined to be part of the MST.
- Rejected edges are those that have been determined not to be part of the MST.
- Basic edges are all edges that are neither branch edges nor rejected edges.
In level-0 fragments, each awakened node will do the following:
- Choose its minimum-weight incident edge and mark that edge as a branch edge.
- Send a message via the branch edge to notify the node on the other side.
- Wait for a message from the other end of the edge.
The edge that is chosen by the two nodes it connects becomes the core edge, and is assigned level 1.
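A minimal sketch of this level-0 step, assuming a centrally visible adjacency map rather than real message passing: every node picks its minimum-weight incident edge, and any edge picked from both sides becomes a core.

```python
# Illustrative example graph: node -> {neighbor: edge weight}, distinct weights.
adj = {
    0: {1: 1, 2: 5},
    1: {0: 1, 2: 2},
    2: {0: 5, 1: 2, 3: 3},
    3: {2: 3},
}

# Each awakened node marks its minimum-weight incident edge as a branch edge.
choice = {u: min(nbrs, key=nbrs.get) for u, nbrs in adj.items()}

# An edge chosen by both of its endpoints becomes a core edge at level 1.
cores = {tuple(sorted((u, v))) for u, v in choice.items() if choice[v] == u}
```

In this example only the edge (0, 1) is chosen from both sides and becomes a core; nodes 2 and 3 join fragments through the edges they chose.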
In fragments of non-zero level, a separate procedure is executed at each level. It can be separated into three stages: broadcast, convergecast, and change core.
Broadcast
The two nodes adjacent to the core broadcast messages to the rest of the nodes in the fragment. The messages are sent via branch edges but not via the core. Each broadcast message contains the ID and level of the fragment. At the end of this stage, each node has received the new fragment ID and level.
Convergecast
In this stage, all nodes in the fragment cooperate to find the minimum-weight outgoing edge of the fragment, i.e., the minimum-weight edge connecting the fragment to another fragment. The messages sent in this stage travel in the opposite direction of the broadcast stage: initiated by the leaves (the nodes that have only one branch edge), a message is sent through the branch edge toward the core. The message contains the minimum weight of the incident outgoing edges found (or infinity if no such edge was found). The way to find the minimum outgoing edge is discussed later. A non-leaf node with n branch edges waits until it has received convergecast messages on n − 1 of them, picks the minimum among those messages and the weights of its own incident outgoing edges, and sends that weight on the branch edge from which it received the broadcast.
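The aggregation performed by the convergecast can be sketched as a recursive walk over the fragment's branch tree. This is a simplification: real GHS nodes act on asynchronous messages, not on a shared tree structure.

```python
import math

def convergecast(node, tree, local_best, parent=None):
    """tree: node -> list of branch-edge neighbors (the fragment's tree);
    local_best: node -> weight of its minimum-weight incident outgoing edge
    (absent means no outgoing edge, treated as infinity).
    Returns the minimum outgoing-edge weight found in node's subtree, i.e.
    the value that node would send up its branch edge toward the core."""
    best = local_best.get(node, math.inf)
    for child in tree[node]:
        if child != parent:
            # Each child reports the minimum found in its own subtree.
            best = min(best, convergecast(child, tree, local_best, node))
    return best
```

For example, on the branch tree `{0: [1, 2], 1: [0], 2: [0, 3], 3: [2]}` with local minima `{1: 7, 3: 4}`, the value reaching node 0 is 4.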
Change core
After the completion of the previous stage, the two nodes connected by the core can exchange the best edges they received and thereby identify the minimum outgoing edge of the entire fragment. A message is then sent from the core to the chosen outgoing edge via a path of branch edges. Finally, a message is sent out via the chosen outgoing edge to request that the two fragments it connects be combined. Depending on the levels of those two fragments, one of two combining operations is performed to form a new fragment; details are discussed below.
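Routing the change-core message along branch edges can be sketched as a simple tree search, again assuming a shared view of the branch tree rather than real message passing.

```python
def branch_path(tree, src, dst, parent=None):
    """tree: node -> list of branch-edge neighbors. Returns the list of nodes
    on the unique branch-edge path from src to dst, or None if unreachable."""
    if src == dst:
        return [src]
    for nxt in tree[src]:
        if nxt != parent:  # never walk back toward where we came from
            rest = branch_path(tree, nxt, dst, src)
            if rest:
                return [src] + rest
    return None
```

On the branch tree `{0: [1, 2], 1: [0], 2: [0, 3], 3: [2]}`, a message from core node 0 to node 3 (the endpoint of the chosen outgoing edge) travels 0 → 2 → 3.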
Finding the minimum-weight incident outgoing edge
As discussed above, every node needs to find its minimum-weight incident outgoing edge after receiving a broadcast message from the core. When a node x receives the broadcast, it picks its minimum-weight basic edge and sends a test message across it carrying x's fragment ID and level. The node y on the other side then decides whether the edge is an outgoing edge and sends back a message to notify x of the result. The decision is made according to the following cases:
- ID(x) = ID(y). Nodes x and y belong to the same fragment, so the edge is not outgoing.
- ID(x) ≠ ID(y) and Level(x) ≤ Level(y). Nodes x and y belong to different fragments, so the edge is outgoing.
- ID(x) ≠ ID(y) and Level(x) > Level(y). No conclusion can be drawn yet. The two nodes may already belong to the same fragment, but node y has not discovered this fact due to the delay of a broadcast message. In this case, the algorithm lets node y postpone its response until its level becomes greater than or equal to the level it received from node x.
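These three cases amount to the following decision rule, sketched here as a single function with illustrative names:

```python
def answer_test(id_x, level_x, id_y, level_y):
    """Node y, with fragment identity (id_y, level_y), answers a test
    message from node x carrying (id_x, level_x)."""
    if id_x == id_y:
        return "reject"   # same fragment: the edge is not outgoing
    if level_y >= level_x:
        return "accept"   # different fragments: the edge is outgoing
    return "delay"        # y may still learn it belongs to x's fragment
```

The "delay" branch is what makes the rule safe under asynchrony: y simply re-evaluates the test once its own level has caught up.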
Combining two fragments
Let F and F' be the two fragments that need to be combined. There are two ways to do this:[1][6]
- Merge: This operation occurs if F and F' share a common minimum-weight outgoing edge and Level(F) = Level(F'). The level of the combined fragment will be Level(F) + 1.
- Absorb: This operation occurs if Level(F) < Level(F'). The combined fragment will have the same level as F'.
Furthermore, when an "Absorb" operation occurs, F must be in the change-core stage, while F' can be in an arbitrary stage. Therefore, "Absorb" operations may be performed differently depending on the state of F'. Let e be the edge via which F and F' are to be combined, and let n and n' be the two endpoints of e in F and F', respectively. There are two cases to consider:
- Node n' has received the broadcast message but has not yet sent a convergecast message back to the core. In this case, fragment F can simply join the broadcast process of F'. Specifically, imagine that F and F' have already been combined to form a new fragment F'', whose minimum-weight outgoing edge must now be found. Node n' can therefore initiate a broadcast into F to update the fragment ID of each node in F and to collect the minimum-weight outgoing edge in F.
- Node n' has already sent a convergecast message back to the core. Before doing so, it must have picked a minimum-weight outgoing edge. As discussed above, n' does that by choosing its minimum-weight basic edge, sending a test message over the chosen edge, and waiting for the response. If e' is the chosen edge, we can conclude the following:
  1. e' ≠ e
  2. weight(e') < weight(e)
The second statement follows if the first one holds. For the first statement, suppose n' had chosen the edge e and sent a test message to n via e. Then node n would delay the response (according to case 3 of "Finding the minimum-weight incident outgoing edge"), making it impossible for n' to have already sent its convergecast message. By conclusions 1 and 2, it is safe to absorb F into F', since e' is still the minimum outgoing edge to report after F is absorbed.
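The level bookkeeping of the two combining operations can be sketched as follows. This is illustrative only; in the real protocol the decision is made via messages exchanged over the edge e.

```python
def combine(level_f, level_f2):
    """Combine fragment F (the initiator, level level_f) with F' (level
    level_f2). Returns the operation performed and the resulting level."""
    if level_f == level_f2:
        # Merge: both fragments chose the same MWOE at equal levels.
        return ("merge", level_f + 1)
    if level_f < level_f2:
        # Absorb: the lower-level fragment is swallowed; level is unchanged.
        return ("absorb", level_f2)
    raise ValueError("a higher-level fragment never initiates a combine")
```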
Maximum number of levels
As mentioned above, fragments are combined by either "Merge" or "Absorb" operations. The "Absorb" operation does not change the maximum level among all fragments, while a "Merge" operation may increase the maximum level by 1. In the worst case, all fragments are combined with "Merge" operations, so the number of fragments halves at each level. Therefore, the maximum number of levels is log₂ N, where N is the number of nodes.
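A quick sanity check of this bound (illustrative): if every round pairs up all fragments and merges them, the level reaches log₂ N exactly.

```python
import math

def levels_to_one_fragment(n):
    """Worst case for the level bound: every fragment merges with exactly one
    partner per round, so the fragment count halves and the level rises by 1.
    Assumes n is a power of two."""
    fragments, level = n, 0
    while fragments > 1:
        fragments //= 2  # pairwise merges halve the number of fragments
        level += 1       # each merge raises the combined fragment's level
    return level

assert levels_to_one_fragment(8) == 3 == int(math.log2(8))
```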
Progress property
The GHS algorithm has the useful property that the lowest-level fragments are never blocked, although some operations in fragments of higher level may be blocked. This property implies that the algorithm eventually terminates with a minimum spanning tree.
Approximation algorithms
An O(log n)-approximation algorithm was developed by Maleq Khan and Gopal Pandurangan.[7] This algorithm runs in Õ(D + L) time, where L is the local shortest path diameter[7] of the graph.
References
- ^ Robert G. Gallager, Pierre A. Humblet, and P. M. Spira, "A distributed algorithm for minimum-weight spanning trees," ACM Transactions on Programming Languages and Systems, vol. 5, no. 1, pp. 66–77, January 1983.
- ^ Baruch Awerbuch, "Optimal Distributed Algorithms for Minimum Weight Spanning Tree, Counting, Leader Election, and Related Problems," Proceedings of the 19th ACM Symposium on Theory of Computing (STOC), New York City, New York, May 1987.
- ^ Juan Garay, Shay Kutten, and David Peleg, "A Sub-Linear Time Distributed Algorithm for Minimum-Weight Spanning Trees (Extended Abstract)," Proceedings of the IEEE Symposium on Foundations of Computer Science (FOCS), 1993.
- ^ Shay Kutten and David Peleg, "Fast Distributed Construction of Small k-Dominating Sets and Applications," Journal of Algorithms, vol. 28, no. 1, July 1998, pp. 40–66.
- ^ David Peleg and Vitaly Rubinovich, "A Near-Tight Lower Bound on the Time Complexity of Distributed Minimum Spanning Tree Construction," SIAM Journal on Computing, 2000; also in IEEE Symposium on Foundations of Computer Science (FOCS), 1999.
- ^ Nancy A. Lynch. Distributed Algorithms. Morgan Kaufmann, 1996.
- ^ Maleq Khan and Gopal Pandurangan, "A Fast Distributed Approximation Algorithm for Minimum Spanning Trees," Distributed Computing, vol. 20, no. 6, pp. 391–402, Apr. 2008.