Control of Rigid Formations

Presenter
June 6, 2014
Keywords:
  • Robots
MSC:
  • 70B15
Abstract
We will review several recent results concerned with the maintenance of formations of mobile autonomous agents (e.g., robots) based on the idea of a rigid framework. We will talk briefly about certain classes of "directed" formations for which there is a moderately complete methodology, and then turn to "undirected" formations, which are the main focus of this presentation. By an undirected rigid formation of mobile autonomous agents we mean a formation based on "graph rigidity" in which each pair of "neighboring" agents i and j is responsible for maintaining the prescribed distance d_ij between them. Recent research by several different groups has led to an elegant potential-function-based theory of formation control which provides gradient laws for asymptotically stabilizing a large class of rigid, undirected formations in two-dimensional space, assuming all agents are described by kinematic point models. This methodology is perhaps the most comprehensive currently in existence for maintaining undirected formations based on graph rigidity.

The main purpose of this talk is to explain what happens if neighboring agents i and j using such gradient controls have slightly different understandings of what the desired distance d_ij between them is supposed to be. The question is relevant because, owing to inevitable imprecision in the physical comparators used to compute positioning errors, no two positioning controls can be expected to move agents to precisely specified positions. The question is also relevant because it is mathematically equivalent to determining what happens if neighboring agents have differing estimates of the actual distance between them. In either case, what one would hope for is a gradual distortion of the formation from its target shape as the discrepancies in desired or sensed distances increase. While this is indeed observed for the gradient laws in question, something else quite unexpected happens at the same time. In this talk we will describe what occurs and explain why.

The robustness issues raised here have broader implications extending well beyond formation maintenance to the entire field of distributed optimization and control. In particular, this research illustrates that when assessing the efficacy of a particular distributed algorithm, one must consider the consequences of distinct agents having slightly different understandings of what the values of the data they share are supposed to be. Without the protection of exponential stability/convergence, such discrepancies are likely to cause significant misbehavior.
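For readers unfamiliar with the gradient laws referred to above, the following is a minimal sketch of one standard form used in the rigid formation control literature; the exact potential, scaling, and sign conventions in the talk may differ. Here x_i in R^2 denotes the position of agent i, E is the edge set of the rigidity graph, and d_ij is the prescribed distance for edge (i, j):

\[
V(x) \;=\; \frac{1}{4}\sum_{(i,j)\in E}\bigl(\|x_i-x_j\|^{2}-d_{ij}^{2}\bigr)^{2},
\qquad
\dot{x}_i \;=\; -\nabla_{x_i} V \;=\; -\sum_{j:\,(i,j)\in E}\bigl(\|x_i-x_j\|^{2}-d_{ij}^{2}\bigr)\,(x_i-x_j).
\]

In this notation, the mismatch question discussed in the talk amounts to asking what happens when agent i runs this law with a perturbed target \(d_{ij}+\mu_{ij}\) while its neighbor j uses \(d_{ij}+\mu_{ji}\) with \(\mu_{ij}\neq\mu_{ji}\), so that the two agents no longer descend a common potential.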