Article

FPT Algorithms for Diverse Collections of Hitting Sets

1 Institute of Optimization and Operations Research, Ulm University, 89081 Ulm, Germany
2 Department of Informatics, University of Bergen, 5008 Bergen, Norway
3 Department of Applied Mathematics of the Faculty of Mathematics and Physics, Charles University, 11800 Prague, Czech Republic
4 Faculty of Mathematics, Informatics and Mechanics, University of Warsaw, 02-097 Warszawa, Poland
5 Chennai Mathematical Institute, Chennai 603103, India
6 International Joint Unit Research Lab in Computer Science (UMI ReLaX), Chennai 603103, India
7 Fachbereich Mathematik und Informatik, Freie Universität Berlin, D-14195 Berlin, Germany
* Authors to whom correspondence should be addressed.
Submission received: 25 October 2019 / Revised: 22 November 2019 / Accepted: 23 November 2019 / Published: 27 November 2019
(This article belongs to the Special Issue New Frontiers in Parameterized Complexity and Algorithms)

Abstract:
In this work, we study the d-Hitting Set and Feedback Vertex Set problems through the paradigm of finding diverse collections of r solutions of size at most k each, which has recently been introduced to the field of parameterized complexity. This paradigm is aimed at addressing the loss of important side information which typically occurs during the abstraction process that models real-world problems as computational problems. We use two measures for the diversity of such a collection: the sum of all pairwise Hamming distances, and the minimum pairwise Hamming distance. We show that both problems are fixed-parameter tractable in k + r for both diversity measures. A key ingredient in our algorithms is a (problem independent) network flow formulation that, given a set of ‘base’ solutions, computes a maximally diverse collection of solutions. We believe that this could be of independent interest.

1. Introduction

The typical approach in modeling a real-world problem as a computational problem has, broadly speaking, two steps: (i) abstracting the problem into a mathematical formulation that captures the crux of the real-world problem, and (ii) asking for a best solution to the mathematical problem.
Consider the following scenario: Dr. O organizes a panel discussion and has a shortlist of candidates to invite. From that shortlist, Dr. O wants to invite as many candidates as possible, such that each of them will bring an individual contribution to the panel. Given two candidates, A and B, it may not be beneficial to invite both A and B, for various reasons: their areas of expertise or opinions may be too similar for both to make a distinguishable contribution, or it may be preferable not to invite more than one person from each institution. It may even be the case that A and B do not see eye-to-eye on some issues which could come up at the discussion, and Dr. O wishes to avoid a confrontation.
A natural mathematical model to resolve Dr. O's dilemma is an instance of the Vertex Cover problem: each candidate on the shortlist corresponds to a vertex, and for each pair of candidates A and B, we add the edge between A and B if it is not beneficial to invite both of them. Removing a smallest vertex cover from the resulting graph leaves a largest possible set of candidates such that each of them may be expected to individually contribute to the appeal of the event.
Formally, a vertex cover of an undirected graph G is any subset S ⊆ V(G) of the vertex set of G such that every edge in G has at least one end-point in S. The Vertex Cover problem asks for a vertex cover of the smallest size:
Vertex Cover
Input: A graph G.
Solution: A vertex cover S of G of the smallest size.
While the above model does provide Dr. O with a set of candidates to invite that is valid in the sense that each invited candidate can be expected to make a unique contribution to the panel, a vast amount of side information about the candidates is lost in the modeling process. This side information could have helped Dr. O to get more out of the panel discussion. For instance, Dr. O may have preferred to invite more well-known or established people over ‘newcomers’, if they wanted the panel to be highly visible and prestigious; or they may have preferred to have more ‘newcomers’ in the panel, if they wanted the panel to have more outreach. Other preferences that Dr. O may have had include: to have people from many different cultural backgrounds, to have equal representation of genders, or preferential representation for affirmative action; to have a variety in the levels of seniority among the attendants, possibly skewed in one way or the other. Other factors, such as the total carbon footprint caused by the participants’ travels, may also be of interest to Dr. O. This list could go on and on.
Now, it is possible to plug some of these factors into the mathematical model, for instance by including weights or labels. Thus, a vertex weight could indicate ‘how well-established’ a candidate is. However, the complexity of the model grows fast with each additional criterion. The classic field of multicriteria optimization [1] addresses the issue of bundling multiple factors into the objective function, but it is seldom possible to arrive at a balance in the various criteria in a way which captures more than a small fraction of all the relevant side information. Moreover, several side criteria may be conflicting or incomparable (or both); consider, in Dr. O's case, ‘maximizing the number of different cultural backgrounds’ vs. ‘minimizing total carbon footprint’.
While Dr. O's story is admittedly a made-up one, the Vertex Cover problem is in fact used to model conflict resolution in far more realistic settings. In each case, there is a conflict graph G whose vertices correspond to entities between which one wishes to avoid a conflict of some kind. There is an edge between two vertices in G if and only if they could be in conflict, and finding and deleting a smallest vertex cover of G yields a largest conflict-free subset of entities. We describe three examples to illustrate the versatility of this model. In each case, it is intuitively clear, just like in Dr. O's problem, that formulating the problem as Vertex Cover results in a lot of significant side information being thrown away, and that while finding a smallest vertex cover in the conflict graph will give a valid solution, it may not really help in finding a best solution, or even a reasonably good solution. We list some side information that is lost in the modeling process; the reader should find it easy to come up with any amount of other side information that would be of interest, in each case.
  • Air traffic control. Conflict graphs are used in the design of decision support tools for aiding Air Traffic Controllers (ATCs) in preventing untoward incidents involving aircraft [2,3]. Each node in the graph G in this instance is an aircraft, and there is an edge between two nodes if the corresponding aircraft are at risk of interfering with each other. A vertex cover of G corresponds to a set of aircraft that can be issued resolution commands which ask them to change course, such that afterwards there is no risk of interference.
    In a situation involving a large number of aircraft, it is unlikely that every choice of ten aircraft to redirect is equally desirable. For instance, in general, it is likely that (i) it is better to ask smaller aircraft to change course in preference to larger craft, and (ii) it is better to ask aircraft which are cruising to change course, in preference to those which are taking off or landing.
  • Wireless spectrum allocation. Conflict graphs are a standard tool in figuring out how to distribute wireless frequency spectrum among a large set of wireless devices so that no two devices whose usage could potentially interfere with each other are allotted the same frequencies [4,5]. Each node in G is a user, and there is an edge between two nodes if (i) the users request the same frequency, and (ii) their usage of the same frequency has the potential to cause interference. A vertex cover of G corresponds to a set of users whose requests can be denied, such that afterwards there is no risk of interference.
    When there is large collection of devices vying for spectrum, it is unlikely that every choice of ten devices to deny the spectrum is equally desirable. For instance, it is likely that denying the spectrum to a remote-controlled toy car on the ground is preferable to denying the spectrum to a drone in flight.
  • Managing inconsistencies in database integration. A database constructed by integrating data from different data sources may end up being inconsistent (that is, violating specified integrity constraints) even if the constituent databases are individually consistent. Handling these inconsistencies is a major challenge in database integration, and conflict graphs are central to various approaches for restoring consistency [6,7,8,9]. Each node in G is a database item, and there is an edge between two nodes if the two items together form an inconsistency. A vertex cover of G corresponds to a set of database items in whose absence the database achieves consistency.
    In a database of large size, it is unlikely that all data are created equal; some database items are likely to be of better relevance or usefulness than others, and so it is unlikely that every choice of ten items to delete is equally desirable.
Getting back to our first example, it seems difficult to help Dr. O with their decision by employing the ‘traditional’ way of modeling computational problems, where one looks for one best solution. If, on the other hand, Dr. O was presented with a small set of good solutions that, in some sense, are far apart, then they might hand-pick the list of candidates that they consider the best choice for the panel and make a more informed decision. Moreover, several forms of side information may only become apparent once Dr. O is presented with some concrete alternatives, and they are more likely to be retrieved from alternatives that look very different. That is, a bunch of good-quality, dissimilar solutions may end up capturing a lot of the “lost” side information. This applies to each of the other three examples as well. In each case, finding one best solution could be of little utility in solving the original problem, whereas finding a small set of solutions, each of good quality, which are not too similar to one another may offer much more help.
To summarize, real-world problems typically have complicated side constraints, and the optimality criterion may not be clear. Therefore, the abstraction to a mathematical formulation is almost always a simplification, omitting important side information. There are at least two obstacles to simply adapting the model by incorporating these secondary criteria into the objective function or taking into account the side constraints: (i) they make the model complicated and unmanageable, and, (ii) more importantly, these criteria and constraints are often not precisely formulated, and are potentially even unknown a priori. There may even be no sharp distinction between optimality criteria and constraints (the so-called “soft constraints”).
One way of dealing with this issue is to present a small number r of good solutions and let the user choose between them, based on all the experience and additional information that the user has and that is ignored in the mathematical model. Such an approach is useful even when the objective can be formulated precisely, but is difficult to optimize: After generating r solutions, each of which is good enough according to some quality criterion, they can be compared and screened in a second phase, evaluating their exact objective function or checking additional side constraints. In this context, it makes little sense to generate solutions that are very similar to each other and differ only in a few features. It is desirable to present a diverse variety of solutions.
It should be clear that the issue is scarcely specific to Vertex Cover. Essentially any computational problem motivated by practical applications likely has the same issue: the modeling process throws out so much relevant side information that any algorithm that finds just one optimal solution to an input instance may not be of much use in solving the original problem in practice. One scenario where the traditional approach to modeling computational problems fails completely is when computational problems may be combined with a human sense of aesthetics or intuition to solve a task, or even to stimulate inspiration. Some early relevant work is on the problem of designing a tool which helps an architect in creating a floor plan which satisfies a specified set of constraints. In general, the number of feasible floor plans—those which satisfy constraints imposed by the plot on which the building has to be erected, various regulations which the building should adhere to, and so on—would be too many for the architect to look at each of them one by one. Furthermore, many of these plans would be very similar to one another, so that it would be pointless for the architect to look at more than one of these for inspiration. As an alternative to optimization for such problems, Galle proposed a “Branch & Sample” algorithm for generating a “limited, representative sample of solutions, uniformly scattered over the entire solution space” [10].
The Diverse X Paradigm. Mike Fellows has proposed the Diverse X Paradigm as a solution for these issues and others [11]. In this paradigm, “X” is a placeholder for an optimization problem, and we study the complexity—specifically, the fixed-parameter tractability—of the problem of finding a few different good quality solutions for X. Contrast this with the traditional approach of looking for just one good quality solution. Let X denote an optimization problem where one looks for a minimum-size subset of some set; Vertex Cover is an example of such a problem. The generic form of X is then:
X
Input: An instance I of X.
Solution: A solution S of I of the smallest size.
Here, the form that a “solution S of I” takes is dictated by the problem X; compare this with the earlier definition of Vertex Cover.
The diverse variant of problem X, as proposed by Fellows, has the form:
Diverse X
Input: An instance I of X, and positive integers k, r, t.
Parameter: (k, r)
Solution: A set S of r solutions of I, each of size at most k, such that a diversity measure of S is at least t.
Note that one can construct diverse variants of other kinds of problems as well, following this model: it doesn’t have to be a minimization problem, nor does the solution have to be a subset of some kind. Indeed, the example about floor plans described above has neither of these properties. What is relevant is that one should have (i) some notion of “good quality” solutions (for X, this equates to a small size) and (ii) some notion of a set of solutions being “diverse”.
Diversity measures. The concept of diversity appears also in other fields, and there are many different ways to measure the diversity of a collection. For example, in ecology, the diversity of a set of species (“biodiversity”) is a topic that has become increasingly important in recent times—see, for example, Solow and Polasky [12].
Another possible viewpoint, in the context of multicriteria optimization, is to require that the sample of solutions should try to represent the whole solution space. This concept can be quantified for example by the geometric volume of the represented space [13,14], or by the discrepancy [15]. See ([16], Section 3) for an overview of diversity measures in multicriteria optimization.
In this paper, we follow the simple possibility of looking for a collection of good solutions that have large distances from each other, in a sense that will be made precise below, see Equations (1) and (2). Direction (2), i.e., taking the pairwise sum of all Hamming distances, has been taken by many practical papers in the area of genetic algorithms—see, e.g., [17,18]. This now classical approach can be traced as far back as 1992 [19]. In [20], it has been boldly stated that this measure (and its variations) is one of the most broadly used measures in describing population diversity within genetic algorithms. One of its advantages is that it can be computed very easily and efficiently unlike many other measures, e.g., some geometry or discrepancy based measures.

Our Problems and Results

In this work, we focus on diverse versions of two minimization problems, d-Hitting Set and Feedback Vertex Set, whose solutions are subsets of a finite set. d-Hitting Set is in fact a class of such problems which includes Vertex Cover, as we describe below. We will consider two natural diversity measures for these problems: the minimum Hamming distance between any two solutions, and the sum of pairwise Hamming distances of all the solutions.
The Hamming distance between two sets S and S′, or the size of their symmetric difference, is
$d_H(S, S') := |(S \setminus S') \cup (S' \setminus S)|.$
We use
$\mathrm{div}_{\min}(S_1, \dots, S_r) := \min_{1 \le i < j \le r} d_H(S_i, S_j) \qquad (1)$
to denote the minimum Hamming distance between any pair of sets in a collection of finite sets, and
$\mathrm{div}_{\mathrm{total}}(S_1, \dots, S_r) := \sum_{1 \le i < j \le r} d_H(S_i, S_j) \qquad (2)$
to denote the sum of all pairwise Hamming distances. (In Section 5, we will discuss some issues with the latter formulation.)
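For concreteness, the two measures translate directly into code. The following snippet is our own illustration (not part of the original work), with solutions represented as Python sets:

```python
from itertools import combinations

def hamming(S, T):
    """Hamming distance of two sets: the size of their symmetric difference."""
    return len(S ^ T)

def div_min(sols):
    """Minimum pairwise Hamming distance, Equation (1)."""
    return min(hamming(A, B) for A, B in combinations(sols, 2))

def div_total(sols):
    """Sum of all pairwise Hamming distances, Equation (2)."""
    return sum(hamming(A, B) for A, B in combinations(sols, 2))

# three vertex covers of a triangle on {1, 2, 3}:
print(div_min([{1, 2}, {2, 3}, {1, 3}]),
      div_total([{1, 2}, {2, 3}, {1, 3}]))       # prints: 2 6
```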
A feedback vertex set of a graph G is any subset S ⊆ V(G) of the vertex set of G such that the graph G − S obtained by deleting the vertices in S is a forest; that is, it contains no cycle.
Feedback Vertex Set
Input: A graph G.
Solution: A feedback vertex set of G of the smallest size.
More generally, a hitting set of a collection F of subsets of a universe U is any subset S ⊆ U such that every set in the family F has a non-empty intersection with S. For a fixed positive integer d, the d-Hitting Set problem asks for a hitting set of the smallest size of a family F of d-sized subsets of a finite universe U:
d-Hitting Set
Input: A finite universe U and a family F of subsets of U, each of size at most d.
Solution: A hitting set S of F of the smallest size.
Observe that both Vertex Cover and Feedback Vertex Set are special cases of finding a smallest hitting set for a family of subsets. Vertex Cover is also an instance of d-Hitting Set, with d = 2: the universe U is the set of vertices of the input graph and the family F consists of all sets {v, w}, where vw is an edge in G. There is no obvious way to model Feedback Vertex Set as a d-Hitting Set instance, however, because the cycles in the input graph are not necessarily of the same size.
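Stated as code, this reduction is immediate (again our own illustration):

```python
def vertex_cover_as_hitting_set(edges):
    """View Vertex Cover as 2-Hitting Set: the universe is the vertex set and
    every edge {v, w} becomes a 2-element set that has to be hit."""
    universe = {v for edge in edges for v in edge}
    family = [frozenset(edge) for edge in edges]
    return universe, family

# the triangle on {1, 2, 3}:
U, F = vertex_cover_as_hitting_set([(1, 2), (2, 3), (1, 3)])
```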
In this work, we consider the following problems in the Diverse X paradigm. Using div_total as the diversity measure, we consider Diverse d-Hitting Set and Diverse Feedback Vertex Set, where X is d-Hitting Set and Feedback Vertex Set, respectively. Using div_min as the diversity measure, we consider Min-Diverse d-Hitting Set and Min-Diverse Feedback Vertex Set, where X is d-Hitting Set and Feedback Vertex Set, respectively.
In each case, we show that the problem is fixed-parameter tractable (FPT), with the following running times:
Theorem 1.
Diverse d-Hitting Set can be solved in time r^2 d^{kr} · |U|^{O(1)}.
Theorem 2.
Diverse Feedback Vertex Set can be solved in time 2^{7kr} · n^{O(1)}.
Theorem 3.
Min-Diverse d-Hitting Set can be solved in time
  • 2^{kr^2} · (kr)^{O(1)} if |U| < kr, and
  • d^{kr} · |U|^{O(1)} otherwise.
Theorem 4.
Min-Diverse Feedback Vertex Set can be solved in time 2^{kr · max(r, 7 + log_2(kr))} · (nr)^{O(1)}.
Defining the diverse versions Diverse Vertex Cover and Min-Diverse Vertex Cover of Vertex Cover in a similar manner as above, we get
Corollary 1.
Diverse Vertex Cover can be solved in time 2^{kr} · n^{O(1)}. Min-Diverse Vertex Cover can be solved in time
  • 2^{kr^2} · (kr)^{O(1)} if n < kr, and
  • 2^{kr} · n^{O(1)} otherwise.
Related Work. The parameterized complexity of finding a diverse collection of good-quality solutions to algorithmic problems seems to be largely unexplored. To the best of our knowledge, the only existing work in this area consists of: (i) a privately circulated manuscript by Fellows [11] which introduces the Diverse X Paradigm and makes a forceful case for its relevance, and (ii) a manuscript by Baste et al. [21] which applies the Diverse X Paradigm to vertex-problems with the treewidth of the input graph as an extra parameter. In this context, a vertex-problem is any problem in which the input contains a graph G and the solution is some subset of the vertex set of G that satisfies some problem-specific properties. Both Vertex Cover and Feedback Vertex Set are vertex-problems in this sense, as are many other graph problems. The treewidth of a graph is, informally put, a measure of how tree-like the graph is. See, e.g., ([22], Chapter 7) for an introduction to the use of the treewidth of a graph as a parameter in designing FPT algorithms. The work by Baste et al. [21] shows how to convert essentially any treewidth-based dynamic programming algorithm for solving a vertex-problem into an algorithm for computing a diverse set of r solutions for the problem, with the diversity measure being the sum div_total of Hamming distances of the solutions. This latter algorithm is FPT in the combined parameter (r, w), where w is the treewidth of the input graph. As a special case, they obtain a running time of O((2^{k+2}(k+1))^r · kr^2 · n) for Diverse Vertex Cover. Furthermore, they show that the r-Diverse versions (i.e., where the diversity measure is div_total) of a handful of problems have polynomial kernels. In particular, they show that Diverse Vertex Cover has a kernel with O(k(k+r)) vertices, and that Diverse d-Hitting Set has a kernel with a universe size of O(k^d + kr).
Organization of the rest of the paper. In Section 2, we list some definitions which we use in the rest of the paper. In Section 3, we describe a generic framework which can be used for computing solution families of maximum diversity for a variety of problems whose solutions form subsets of some finite set. We prove Theorem 1 in Section 3.3 and Theorem 2 in Section 4. In Section 5, we discuss some potential pitfalls in using div_total as a measure of diversity. In Section 6, we prove Theorems 3 and 4. We conclude in Section 7.

2. Preliminaries

Given two integers p and q, we denote by [p, q] the set of all integers r such that p ≤ r ≤ q. Given a graph G, we denote by V(G) (resp. E(G)) the set of vertices (resp. edges) of G. For a subset S ⊆ V(G), we use G[S] to denote the subgraph of G induced by S, and G \ S for the graph G[V(G) \ S]. A set S ⊆ V(G) is a vertex cover (resp. a feedback vertex set) if G \ S has no edge (resp. no cycle). Given a graph G and a vertex v that has exactly two neighbors, say w and w′, contracting v consists of removing the edges {v, w} and {v, w′}, removing v, and adding the edge {w, w′}. Given a graph G and a vertex v ∈ V(G), we denote by δ_G(v) the degree of v in G. For two vertices u, v in a connected graph G, we use dist_G(u, v) to denote the distance between u and v in G, which is the length of a shortest path in G between u and v.
A deepest leaf in a tree T is a vertex v ∈ V(T) such that there exists a root r ∈ V(T) satisfying $\mathrm{dist}_T(r, v) = \max_{u \in V(T)} \mathrm{dist}_T(r, u)$. A deepest leaf in a forest F is a deepest leaf in some connected component of F. A deepest leaf v has the property that there is another leaf in the tree at distance at most 2 from v, unless v is an isolated vertex or v's neighbor has degree 2.
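A deepest leaf with respect to a chosen root can be found by a single breadth-first search; the following is our own sketch, with the tree given as an adjacency dictionary:

```python
from collections import deque

def deepest_leaf(adj, root):
    """Return a vertex at maximum distance from `root` in the tree `adj`;
    BFS visits vertices in order of nondecreasing distance, so the last
    vertex dequeued is a deepest leaf for this root."""
    seen, queue, last = {root}, deque([root]), root
    while queue:
        last = queue.popleft()
        for w in adj[last]:
            if w not in seen:
                seen.add(w)
                queue.append(w)
    return last
```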
The objective function div_total in (2) has an alternative representation in terms of frequencies of occurrence [21]: if y_v is the number of sets of {S_1, …, S_r} in which v appears, then
$\mathrm{div}_{\mathrm{total}}(S_1, \dots, S_r) = \sum_{v \in U} y_v (r - y_v). \qquad (3)$
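A quick numerical check of this identity (our own, for three sets over U = {1, 2, 3}):

```python
from itertools import combinations

sols = [{1, 2}, {2, 3}, {1, 3}]
r, U = len(sols), {1, 2, 3}
y = {v: sum(v in S for S in sols) for v in U}
lhs = sum(len(A ^ B) for A, B in combinations(sols, 2))   # div_total, Equation (2)
rhs = sum(y[v] * (r - y[v]) for v in U)                   # frequency form, Equation (3)
assert lhs == rhs == 6
```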
Auxiliary problems. We define two auxiliary problems that we will use in some of the algorithms presented in Section 3. In the Maximum Cost Flow problem, we are given a directed graph G, a target d ∈ ℝ_+, a source vertex s ∈ V(G), a sink vertex t ∈ V(G), and, for each edge (u, v) ∈ E(G), a capacity c(u, v) > 0 and a cost a(u, v). An (s, t)-flow, or simply flow, in G is a function f : E(G) → ℝ such that f(u, v) ≤ c(u, v) for each (u, v) ∈ E(G), and $\sum_{(u,v) \in E(G)} f(u, v) = \sum_{(v,u) \in E(G)} f(v, u)$ for each vertex v ∈ V(G) \ {s, t}. The value of the flow f is $\sum_{(s,u) \in E(G)} f(s, u)$ and the cost of f is $\sum_{(u,v) \in E(G)} f(u, v) \cdot a(u, v)$. The objective of the Maximum Cost Flow problem is to find a maximum-cost (s, t)-flow of value d.
The second problem is the Maximum Weight b-Matching problem. Here, we are given an undirected edge-weighted graph G and, for each vertex v ∈ V(G), a supply b(v). The goal is to find a set of edges M ⊆ E(G) of maximum total weight such that each vertex v ∈ V(G) is incident with at most b(v) edges in M.

3. A Framework for Maximally Diverse Solutions

In this section, we describe a framework for computing solution families of maximum diversity for a variety of hitting set problems. This framework requires that the solutions form a family of subsets of a ground set U that is upward closed: any superset T ⊇ S of a solution S is also a solution.
The approach is as follows: In a first phase, we enumerate the class 𝒮 of all minimal solutions of size at most k. (A larger class 𝒮 is also fine as long as it is guaranteed to contain all minimal solutions of size at most k.) Then, we form all r-tuples (S_1, …, S_r) ∈ 𝒮^r. For each such family (S_1, …, S_r), we try to augment it to a family (T_1, …, T_r) under the constraints T_i ⊇ S_i and |T_i| ≤ k, for each i ∈ [1, r], in such a way that div_total(T_1, …, T_r) is maximized.
For this augmentation problem, we propose a network flow model that computes an optimal augmentation in polynomial time; see Section 3.1. This has to be repeated for each family, O(|𝒮|^r) times. The first step, the generation of 𝒮, is problem-specific. Section 3.3 shows how to solve it for d-Hitting Set. In Section 4, we will adapt our approach to deal with Feedback Vertex Set.

3.1. Optimal Augmentation

Given a universe U and a set 𝒮 of subsets of U, the problem diverse_{r,k}(𝒮) consists of finding an r-tuple (S_1, …, S_r) that maximizes div_total(S_1, …, S_r), over all r-tuples (S_1, …, S_r) such that, for each i ∈ [1, r], |S_i| ≤ k and there exists S ∈ 𝒮 such that S ⊆ S_i ⊆ U.
Theorem 5.
Let U be a finite universe, r and k be two integers, and 𝒮 be a set of s subsets of U. Then diverse_{r,k}(𝒮) can be solved in time r^2 s^r · |U|^{O(1)}.
Proof. 
The algorithm that proves Theorem 5 starts by enumerating all r-tuples (S_1, S_2, …, S_r) ∈ 𝒮^r of elements from 𝒮. For each of these s^r r-tuples, we try to augment each S_i, using elements of U, in such a way that the diversity div_total of the resulting tuple (T_1, …, T_r) is maximized and such that, for each i ∈ [1, r], S_i ⊆ T_i ⊆ U and |T_i| ≤ k. It is clear that this algorithm will find the solution to diverse_{r,k}(𝒮).
We show how to model this problem as a maximum-cost network flow problem with piecewise linear concave costs. This problem can be solved in polynomial time. (See, for example, [23] for basic notions about network flows).
Without loss of generality, let U = {1, 2, …, n}. We use a variable 0 ≤ x_{ij} ≤ 1 to decide whether element j of U should belong to set T_i. In an optimal flow, these values are integral. Some of these variables are already fixed because T_i must contain S_i:
$x_{ij} = 1 \quad \text{for } j \in S_i. \qquad (4)$
The size of T i must not exceed k:
$\sum_{j=1}^{n} x_{ij} \le k, \quad \text{for } i = 1, \dots, r. \qquad (5)$
Finally, we can express the number y j of sets T i in which an element j occurs:
$y_j = \sum_{i=1}^{r} x_{ij}, \quad \text{for } j = 1, \dots, n. \qquad (6)$
These variables y j are the variables in terms of which the objective function (3) is expressed:
$\text{maximize} \; \sum_{j=1}^{n} y_j (r - y_j). \qquad (7)$
These constraints can be modeled by a network as shown in Figure 1. There are nodes T_i representing the sets T_i and a node V_j for each element j ∈ U. In addition, there is a source s and a sink t. The arcs emanating from s have capacity k. Together with the flow conservation equations at the nodes T_i, this models the constraints (5). Flow conservation at the nodes V_j gives rise to the flow variables y_j on the arcs leading to t, according to (6). The arcs with fixed flow (4) could be eliminated from the network, but, for ease of notation, we leave them in the model. The only arcs that carry a cost are the arcs leading to t, and the costs are given by the concave function (7).
There is now a one-to-one correspondence between integral flows from s to t in the network and solutions (T_1, …, T_r), and the cost of the flow is equal to the diversity (2) or (3). We are thus looking for a flow of maximum cost. The value of the flow (the total flow out of s) can be arbitrary. (It is equal to the sum of the sizes of the sets T_i.)
The concave arc costs (7) on the arcs leading to t can be modeled in a standard way by multiple arcs. Denote the concave cost function by f_y := y(r − y), for y = 0, 1, …, r. Then, each arc (V_j, t) in the last layer is replaced by r parallel arcs of capacity 1 with costs f_1 − f_0, f_2 − f_1, …, f_r − f_{r−1}. This sequence of values f_y − f_{y−1} = r − 2y + 1 is decreasing, starting out with positive values and ending with negative values. If the total flow along such a bundle is y, the maximum-cost way to distribute this flow is to fill the first y arcs to capacity, for a total cost of (f_1 − f_0) + (f_2 − f_1) + ⋯ + (f_y − f_{y−1}) = f_y − f_0 = f_y, as desired.
An easy way to compute a maximum-cost flow is the longest augmenting path method. (Commonly, it is presented as the shortest augmenting path method for the minimum-cost flow). This holds for the classical flow model where the cost on each arc is a linear function of the flow. An augmenting path is a path in the residual network with respect to the current flow, and the cost coefficient of an arc in such a path must be taken with opposite sign if it is traversed in the direction opposite to the original graph.
Proposition 1
(The shortest augmenting path algorithm, cf. [23] (Theorem 8.12)). Suppose a maximum-cost flow among all flows of value v from s to t is given. Let P be a maximum-cost augmenting path from s to t. If we augment the flow along this path, this results in a new flow of some value v′. Then, the new flow is a maximum-cost flow among all flows of value v′ from s to t.
Let us apply this algorithm to our network. We initialize the constrained flow variables x_{ij} according to Equation (4) to 1 and all other variables x_{ij} to 0. This corresponds to the original solution (S_1, S_2, …, S_r), and it is clearly the optimal flow of value $\sum_{i=1}^{r} |S_i|$ because it is the only feasible flow of this value.
We can now start to find augmenting paths. Our graph is bipartite, and augmenting paths have a very simple structure: They start in s, alternate back and forth between the T-nodes and the V-nodes, and finally make a step to t. Moreover, in our network, all costs are zero except in the last layer, and an augmenting path contains precisely one arc from this layer. Therefore, the cost of an augmenting path is simply the cost of the final arc.
The flow variables in the final layer are never decreased. The resulting algorithm therefore has a simple greedy-like structure. Starting from the initial flow, we first try to saturate as many of the arcs of cost f_1 − f_0 as possible. Next, we try to saturate as many of the arcs of cost f_2 − f_1 as possible, and so on. Once the incremental cost f_{y+1} − f_y becomes negative, we stop.
Trying to find an augmenting path whose last arc is one of the arcs of cost f_{y+1} − f_y, for fixed y, is a reachability problem in the residual graph, and it can be solved by graph search in O(nr) time because the network has O(nr) vertices. Every augmentation increases the flow value by one unit. Thus, there are at most kr augmentations, for a total runtime of O(kr^2 n). □
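The augmentation procedure of this proof can be sketched in a few dozen lines of Python. The code below is our own sketch, not the authors' implementation: it works directly on sets instead of an explicit network, assumes that `base` and `universe` contain hashable elements with every base set contained in the universe, and repeatedly augments along a maximum-cost path as in Proposition 1 (the cost of a path being r − 2y_j − 1 for its final element j):

```python
from collections import deque

def augment_diverse(base, universe, k):
    """Grow the base solutions S_1,...,S_r into sets T_i with S_i <= T_i and
    |T_i| <= k so that div_total, the sum of pairwise Hamming distances,
    is maximized (scheme of Section 3.1; our sketch)."""
    r = len(base)
    T = [set(S) for S in base]                    # current solutions T_i
    fixed = [set(S) for S in base]                # elements that must stay in T_i
    y = {v: sum(v in Ti for Ti in T) for v in universe}

    def max_cost_augmenting_path():
        # BFS over the residual network: start at every T_i with free capacity,
        # go forward T_i -> v if v can be added, backward v -> T_i if v can be removed.
        parent, queue = {}, deque()
        for i in range(r):
            if len(T[i]) < k:
                parent[('T', i)] = None
                queue.append(('T', i))
        best = None
        while queue:
            kind, a = queue.popleft()
            if kind == 'T':
                for v in universe:
                    if v not in T[a] and ('V', v) not in parent:
                        parent[('V', v)] = ('T', a)
                        queue.append(('V', v))
                        # the final arc for v has incremental cost r - 2*y_v - 1
                        if 2 * y[v] < r - 1 and (best is None or y[v] < y[best]):
                            best = v
            else:
                for i in range(r):
                    if a in T[i] and a not in fixed[i] and ('T', i) not in parent:
                        parent[('T', i)] = ('V', a)
                        queue.append(('T', i))
        if best is None:
            return None
        path, node = [], ('V', best)
        while node is not None:                   # walk back to the starting T-node
            path.append(node)
            node = parent[node]
        return best, path[::-1]

    while True:
        step = max_cost_augmenting_path()
        if step is None:
            break                                 # no augmenting path of positive cost
        j, path = step
        for (k1, a), (k2, b) in zip(path, path[1:]):
            if k1 == 'T':
                T[a].add(b)                       # forward arc: put element b into T_a
            else:
                T[b].discard(a)                   # backward arc: take element a out of T_b
        y[j] += 1
    return T
```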

3.2. Faster Augmentation

We can obtain faster algorithms by using more advanced network algorithms from the literature. We will derive one such algorithm here. The best choice depends on the relation between n, k, and r. We will apply the following result about b-matchings, which are generalizations of matchings: each node v has a given supply b(v), specifying that v should be incident to at most b(v) edges.
Proposition 2
([24]). A maximum-weight b-matching in a bipartite graph with N_1 + N_2 nodes on the two sides of the bipartition and M edges that have integer weights between 0 and W can be found in time $O\big(N_1 M \log\big(2 + \frac{N_1^2}{M}\log(N_1 W)\big)\big)$.
We will describe below how the network flow problem from above can be converted into a b-matching problem with N_1 = r + 1 plus N_2 = n nodes and M = 2rn edges of weight at most W = 2r. Plugging these values into Proposition 2 gives a running time of $O\big(r^2 n \log\big(2 + \frac{r}{n}\log(r^2)\big)\big) = O\big(r^2 n \max\{1, \log\frac{r \log r}{n}\}\big)$ for finding an optimal augmentation. This improves over the running time O(kr^2 n) from the previous section unless r is extremely large (at least 2^k).
From the network of Figure 1, we keep the two layers of nodes T_i and V_j. Each vertex T_i gets a supply of b(T_i) := k, and each vertex V_j gets a supply of b(V_j) := r. To mimic the piecewise linear costs on the arcs (V_j, t) in the original network, we introduce r parallel slack edges from a new source vertex s to each vertex V_j. The costs are as follows. Let g_1 > g_2 > ⋯ > g_r with g_y = f_y − f_{y−1} denote the costs in the last layer of the original network, and let ĝ := r. Since g_1 = r − 1, this is larger than all costs. Then, every edge (T_i, V_j) from the original network gets a weight of ĝ, and the r new slack edges entering each V_j get positive weights ĝ − g_1, ĝ − g_2, …, ĝ − g_r. We set the supply of the extra source node to b(s) := rn, which imposes no constraint on the number of incident edges.
Now, suppose that we have a solution for the original network in which the total flow into vertex V_j is y. In the corresponding b-matching, we can then use b(V_j) − y = r − y of the slack edges incident to V_j. The r − y maximum-weight slack edges have weights ĝ − g_r, ĝ − g_{r−1}, …, ĝ − g_{y+1}. The total weight of the edges incident to V_j is therefore
$r\hat{g} - g_r - g_{r-1} - \dots - g_{y+1} = r\hat{g} + (g_1 + g_2 + \dots + g_y),$
using the equation g_1 + g_2 + ⋯ + g_r = f_r − f_0 = 0. Thus, up to the addition of the constant nrĝ, the maximum weight of a b-matching agrees with the maximum cost of a flow in the original network.
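The bookkeeping above can be verified with a few lines of code (our own check, for r = 5):

```python
r = 5
f = [y * (r - y) for y in range(r + 1)]               # f_y = y * (r - y)
g = [f[y] - f[y - 1] for y in range(1, r + 1)]        # g_1 > g_2 > ... > g_r
g_hat = r
for y in range(r + 1):
    matched = y * g_hat                               # y original edges of weight g_hat
    slack = sum(g_hat - g[q] for q in range(y, r))    # the r - y heaviest slack edges
    assert matched + slack == r * g_hat + f[y]        # constant r*g_hat plus f_y
```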

3.3. Diverse Hitting Set

In this section, we show how to use the optimal augmentation technique developed in Section 3 to solve the Diverse d-Hitting Set problem. For this, we use the following folklore lemma about minimal hitting sets.
Lemma 1.
Let (U, F) be an instance of d-Hitting Set, and let k be an integer. There are at most d^k inclusion-minimal hitting sets of F of size at most k, and they can all be enumerated in time d^k · |U|^2.
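Lemma 1 follows from the standard branching argument: repeatedly pick an unhit set and branch on which of its at most d elements to add. A compact sketch (ours, not taken from the paper):

```python
def small_hitting_sets(family, k):
    """Return at most d^k hitting sets of size at most k, guaranteed to
    include every inclusion-minimal hitting set of size at most k.
    `family` is a list of frozensets, each of size at most d."""
    found = set()

    def branch(partial):
        unhit = next((F for F in family if not (F & partial)), None)
        if unhit is None:                # every set of the family is hit
            found.add(partial)
            return
        if len(partial) == k:            # budget exhausted: abandon this branch
            return
        for v in unhit:                  # at most d children per node
            branch(partial | {v})

    branch(frozenset())
    return found
```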
Combining Lemma 1 and Theorem 5, we obtain the following result.
Theorem 1.
Diverse d-Hitting Set can be solved in time r^2 d^{kr} · |U|^{O(1)}.
Proof. 
Using Lemma 1, we can construct the set 𝒮 of all inclusion-minimal hitting sets of F, each of size at most k. Note that the size of 𝒮 is bounded by d^k. As every superset of an element of 𝒮 is also a hitting set, the theorem follows directly from Theorem 5. □

4. Diverse Feedback Vertex Set

A feedback vertex set (FVS) (also called a cycle cutset) of a graph G is any subset S ⊆ V(G) of vertices of G such that every cycle in G contains at least one vertex from S. The graph G − S obtained by deleting S from G is thus an acyclic graph. Finding an FVS of small size is an NP-hard problem [25] with a number of applications in Artificial Intelligence, many of which stem from the fact that many hard problems become easy to solve on acyclic graphs. An example of this is the Propositional Model Counting (or #SAT) problem, which asks for the number of satisfying assignments of a given CNF formula and has a number of applications, for instance in planning [26,27] and in probabilistic inference problems such as Bayesian reasoning [28,29,30,31]. A popular approach to solving #SAT consists of first finding a small FVS S of the CNF formula. Assigning values to all the variables in S results in an acyclic instance of CNF. The algorithm assigns all possible sets of values to the variables in S, computes the number of satisfying assignments of the resulting acyclic instances, and returns the sum of these counts [32].
In this section, we focus on the Diverse Feedback Vertex Set problem and prove the following theorem.
Theorem 2.
Diverse Feedback Vertex Set can be solved in time 2^{7kr} · n^{O(1)}.
In order to solve Diverse Feedback Vertex Set, one natural approach would be to generate every feedback vertex set of size at most k and then check which choice of r of them provides the required sum of Hamming distances. Unfortunately, the number of feedback vertex sets of size at most k is not bounded by any function of k alone. Indeed, one can consider a graph consisting of k disjoint cycles of size n/k each, which has (n/k)^k different feedback vertex sets of size k.
We avoid this problem by generating all such small feedback vertex sets only up to an equivalence on degree-two vertices. We obtain an exact and efficient description of all feedback vertex sets of size at most k, which is formally captured by Lemma 2. A class of solutions of a graph G is a pair (S, ℓ) such that S ⊆ V(G) and ℓ : S → 2^{V(G)} is a function such that u ∈ ℓ(u) for each u ∈ S, and ℓ(u) ∩ ℓ(v) = ∅ for all u, v ∈ S with u ≠ v. Given a class of solutions (S, ℓ), we define sol(S, ℓ) = {S′ ⊆ V(G) : |S′| = |S| and |S′ ∩ ℓ(v)| = 1 for each v ∈ S}. A class of FVS solutions is a class of solutions (S, ℓ) such that each S′ ∈ sol(S, ℓ) is a feedback vertex set of G. Moreover, if S′ ∈ sol(S, ℓ) and S′ ⊆ S″ ⊆ V(G), we say that S″ is described by (S, ℓ). Note that S″ is also a feedback vertex set. In a class of FVS solutions (S, ℓ), the meaning of the function ℓ is that, for each cycle C in G, there exists v ∈ S such that every element of ℓ(v) hits C. This allows us to group related solutions into a single set sol(S, ℓ).
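Since the lists ℓ(v) are disjoint and nonempty, a set T is described by (S, ℓ) exactly when T meets every list; in code (our own naming, with `ell` mapping each v ∈ S to its list ℓ(v)):

```python
def is_described_by(T, S, ell):
    """T (a candidate feedback vertex set) is described by the class (S, ell)
    precisely when T contains at least one vertex of every list ell[v], v in S;
    picking one such vertex per (disjoint) list yields a member of sol(S, ell)
    that is contained in T."""
    return all(any(u in T for u in ell[v]) for v in S)
```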
Lemma 2.
Let G be an n-vertex graph. There exists a set 𝒮 of classes of FVS solutions of G, of size at most 2^{7k}, such that each feedback vertex set of size at most k is described by an element of 𝒮. Moreover, 𝒮 can be constructed in time 2^{7k} · n^{O(1)}.
Proof. 
Let G be an n-vertex graph. We start by generating a feedback vertex set F ⊆ V(G) of size at most k. The currently best deterministic algorithm for this, by Kociumaka and Pilipczuk [33], finds such a set in time 3.62^k · n^{O(1)}. In the following, we use ideas from the iterative compression approach [34].
For each subset F′ ⊆ F, we initiate a branching process by setting A := F′, B := F \ F′, and G′ := G. Observe that, initially, as B ⊆ F and |F| ≤ k, the graph G′[B] has at most k components. In the branching process, we will add more vertices to A and B, and we will remove vertices and edges from G′, but we will maintain the property that A ⊆ V(G′) and B ⊆ V(G′). The set C will always denote the vertex set V(G′) \ (A ∪ B). Note that G′[C] is initially a forest; we ensure that it always remains a forest.
We also initialize a function ℓ : V(G′) → 2^{V(G)} by setting ℓ(v) = {v} for each v ∈ V(G′). This function will keep information about vertices that are deleted from G′. While searching for a feedback vertex set, we consider only feedback vertex sets that contain all vertices of A but no vertex of B. Vertices in C are still undecided. The function ℓ will maintain the invariant that, for each v ∈ V(G′), ℓ(v) ∩ V(G′) = {v}, and, for each v ∈ C, all vertices of ℓ(v) hit exactly the same cycles in G \ A. Moreover, for each v ∈ A, the value ℓ(v) is fixed and will not be modified anymore in the branching process. During the branching process, we will progressively increase the size of A, of B, and of the sets ℓ(v), v ∈ V(G′).
By reducing (G′, A, B, ℓ), we mean that we apply the following rules exhaustively.
  • If there is a v ∈ C such that δ_{G′[B ∪ C]}(v) ≤ 1, we delete v from G′.
  • If there is an edge {u, v} ∈ E(G′[C]) such that δ_{G′[B ∪ C]}(u) = δ_{G′[B ∪ C]}(v) = 2, we contract u in G′ and set ℓ(v) := ℓ(v) ∪ ℓ(u).
These are classical preprocessing rules for the Feedback Vertex Set problem; see, for instance, ([22], Section 9.1). Indeed, vertices of degree one cannot appear in a cycle, and consecutive vertices of degree two hit exactly the same cycles. After this preprocessing, there are no adjacent degree-two vertices and no degree-one vertices in C. (Degrees are measured in G′[B ∪ C].)
We now describe the branching procedure. We work on the tuple (G′, A, B, ℓ). After each step, the value |A| − cc(B) will increase, where cc(B) denotes the number of connected components of G′[B].
At each step of the branching, we do the following. If |A| > k or if G′[B] contains a cycle, we immediately stop this branch, as there is no solution to be found in it. If A is a feedback vertex set of size at most k, then (A, ℓ|_A) is a class of FVS solutions; we add it to 𝒮 and stop working on this branch. Otherwise, we reduce (G′, A, B, ℓ), pick a deepest leaf v in G′[C], and apply one of the two following cases, depending on the vertex v:
  • Case 1: The vertex v has at least two neighbors in B (in the graph G′).
    If there is a path in B between two neighbors of v, then we have to put v in A, as otherwise this path together with v will induce a cycle. If there is no such path, we branch on both possibilities, inserting v either into A or into B.
  • Case 2: The vertex v has at most one neighbor in B.
    Since v is a leaf in G′[C], it also has at most one neighbor in C. On the other hand, we know that v has degree at least 2 in G′[B ∪ C]. Thus, v has exactly one neighbor in B and one neighbor in C, for a degree of 2 in G′[B ∪ C]. Let p be the neighbor in C. Again, as we have reduced (G′, A, B, ℓ), the degree of p in G′[B ∪ C] is at least 3. Thus, either p has a neighbor in B, or, as v is a deepest leaf, p has another child, say w, that is also a leaf in G′[C], and w therefore has a neighbor in B. We branch on the at most 2^3 = 8 possibilities to allocate v, p, and w (if considered) between A and B, taking care not to produce a cycle in B.
In both cases, either we put at least one vertex in A, and so |A| increases by one, or all considered vertices are added to B. In the latter case, the considered vertices are connected, at least two of them have a neighbor in B, and no cycles are created; therefore, the number of components of B drops by at least one. Thus, |A| − cc(B) increases by at least one. As −k ≤ |A| − cc(B) ≤ k, there can be at most 2k branching steps.
Since we branch at most 2k times and at each branch we have at most 2^3 possibilities, the branching tree has at most 2^{6k} leaves. Thus, for each of the at most 2^k subsets F′ of F, we add at most 2^{6k} elements to 𝒮.
Every feedback vertex set of size at most k is thus described by one of the classes of FVS solutions in 𝒮, and 𝒮 has size at most 2^k · 2^{6k} = 2^{7k}. □
Proof of Theorem 2.
We generate all 2^{7kr} r-tuples of the classes of solutions given by Lemma 2, with repetition allowed.
We now consider each r-tuple ((S_1, ℓ_1), (S_2, ℓ_2), …, (S_r, ℓ_r)) ∈ 𝒮^r and try to pick an appropriate solution T_i from each class of solutions (S_i, ℓ_i), i ∈ [1, r], in such a way that the diversity of the resulting tuple of feedback vertex sets (T_1, …, T_r) is maximized. The network of Section 3.1 must be adapted to model the constraints resulting from solution classes. Let (S, ℓ) be a solution class, with |S| = b. For our construction, we just need to know the family {ℓ(v) : v ∈ S} = {L_1, L_2, …, L_b} of disjoint nonempty vertex sets. The solutions that are described by this class are all sets that can be obtained by picking at least one vertex from each set L_q. Figure 2 shows the necessary adaptations for one solution T = T_i. In addition to a single node T that is either directly or indirectly connected to all nodes V_1, …, V_n, as in Figure 1, we have additional nodes representing the sets L_q. For each vertex j that appears in one of the sets L_q, there is an additional node U_j in an intermediate layer of the network. The flow from s to L_q is forced to be equal to 1, and this ensures that at least one element of the set L_q is chosen in the solution. Here, it is important that the sets L_q are disjoint.
A similar structure must be built for each set T_1, …, T_r, and all these structures share the vertices s and V_1, …, V_n. The rightmost layer of the network is the same as in Figure 1.
The initial flow is not as straightforward as in Section 3.1, but it is still easy to find. We simply saturate the arc from s to each of the nodes L_q in turn by a shortest augmenting path. Such a path can be found by a simple reachability search in the residual network, in O(rn) time. The total running time of O(kr^2 n) from Section 3.1 remains unchanged. □

5. Modeling Aspects: Discussion of the Objective Function

In Section 3 and Section 4, we have used the sum of the Hamming distances, div_total, as the measure of diversity. While this measure is of natural interest, it appears that, in some specific cases, it may not be a useful choice. We present a simple example where the most diverse solution according to div_total is not what one might expect.
Let r be an even number. We consider the path with 2r − 2 vertices, and we are looking for r vertex covers of size at most r − 1, of maximum diversity.
Figure 3 shows an example with r = 6. The smallest size of a vertex cover is indeed r − 1, and there are r different solutions. One would hope that the “maximally diverse” selection of r solutions would pick all these different solutions. However, this is not the case: the selection that maximizes div_total consists of r/2 copies of just two solutions, the “odd” vertices and the “even” vertices (the first and last solution in Figure 3).
This can be seen as follows. If the selected collection contains in total n_i copies of the first i solutions in the order of Figure 3, then the objective can be written as
$2 n_1 (r - n_1) + 2 n_2 (r - n_2) + \dots + 2 n_{r-1} (r - n_{r-1}).$
Here, each term 2n_i(r − n_i) accounts for the two consecutive vertices 2i − 1 and 2i of the path in the formulation (3). The unique way of maximizing each term individually is to set n_i = r/2 for all i. This corresponds to the selection of r/2 copies of the first solution and r/2 copies of the last solution, as claimed.
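This calculation is easy to confirm numerically; the small self-contained script below (our own, for r = 6 as in Figure 3) compares the two selections:

```python
from itertools import combinations

def div_total(sols):
    return sum(len(A ^ B) for A, B in combinations(sols, 2))

r = 6
n = 2 * r - 2                              # path on vertices 1, 2, ..., 10
# the r minimum vertex covers, ordered as in Figure 3: cover i takes the even
# vertices among the first 2i vertices and the odd vertices after them
covers = [frozenset(list(range(2, 2 * i + 1, 2)) + list(range(2 * i + 1, n, 2)))
          for i in range(r)]
one_of_each = list(covers)                                   # all r distinct optima
two_extremes = [covers[0]] * (r // 2) + [covers[-1]] * (r // 2)
print(div_total(one_of_each), div_total(two_extremes))       # prints: 70 90
```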
In a different setting, namely the distribution of r points inside a square, an analogous phenomenon has been observed ([16], Figure 1): Maximizing the sum of pairwise Euclidean distances places all points at the corners of the square. In fact, it is easy to see that, in this geometric setting, any locally optimal solution must place all points on the boundary of the feasible region. By contrast, for our combinatorial problem, we don’t know whether this pathological behavior is typical or rare in instances that are not specially constructed. Further research is needed. A notion of diversity which is more robust in this respect is the smallest difference between two solutions, which we consider in Section 6.

6. Maximizing the Smallest Hamming Distance

The undesired behavior highlighted in Section 5 is that a collection maximizing the sum of the Hamming distances may use several copies of the same set. In this section, we handle this behavior by changing the measure to the minimum Hamming distance between two sets of the collection, which removes any incentive to select the same solution twice. We show how to solve Min-Diverse d-Hitting Set and Min-Diverse Feedback Vertex Set for this measure.
Theorem 3.
Min-Diverse d-Hitting Set can be solved in time
  • 2^{kr^2} · (kr)^{O(1)} if |U| < kr, and
  • d^{kr} · |U|^{O(1)} otherwise.
Proof. 
Let (U, F, k, r, t) be an instance of Min-Diverse d-Hitting Set, where |U| = n. If n < kr, we solve the problem by complete enumeration: there are trivially at most 2^n hitting sets of size at most k. We form all r-tuples (T_1, …, T_r) of them and select one that maximizes div_min(T_1, …, T_r). The running time is at most O((2^n)^r · r^2 n) = O(2^{kr^2} · kr^3).
We now assume that n ≥ kr. We use the same strategy as in Section 3: we generate all r-tuples (S_1, …, S_r) of minimal solutions and try to augment each one to an r-tuple (T_1, …, T_r) such that, for each i ∈ [1, r], |T_i| ≤ k and S_i ⊆ T_i ⊆ U hold. The difference is that we try to maximize div_min(T_1, …, T_r) instead of div_total(T_1, …, T_r) in the augmentation. Given that we have a large supply of n ≥ kr elements in U, this is easy. To each set S_i, we add k − |S_i| new elements, taking care that we pick different elements for each S_i that are not in any of the other sets S_j. The Hamming distance between two resulting sets is then d_H(T_i, T_j) = d_H(S_i, S_j) + (k − |S_i|) + (k − |S_j|), and it is clear that this is the largest possible distance that two sets T_i ⊇ S_i and T_j ⊇ S_j with |T_i|, |T_j| ≤ k can achieve. Thus, since our choice of augmentation individually maximizes each pairwise Hamming distance, it also maximizes the smallest Hamming distance. This procedure can be carried out in O(kr + n) = O(n) time. In addition, we need O(kr^2) = O(n^2) time to compute the smallest distance.
Using Lemma 1, we construct the set 𝒮 of all minimal solutions of the d-Hitting Set instance (U, F), each of size at most k. We then go through every r-tuple (S_1, …, S_r) ∈ 𝒮^r and augment it optimally, as just described. The running time is d^{kr} · O(n^2). □
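The padding step in the case n ≥ kr can be sketched as follows (our own code, assuming `base` lists the minimal solutions S_1, …, S_r as subsets of `universe` and that n ≥ kr):

```python
def pad_with_fresh_elements(base, universe, k):
    """Extend each S_i by k - |S_i| elements that occur in no base set and are
    never reused, so that every pairwise Hamming distance grows by the largest
    possible amount (augmentation from the proof of Theorem 3)."""
    used = set().union(*base)
    fresh = iter(v for v in universe if v not in used)
    return [set(S) | {next(fresh) for _ in range(k - len(S))} for S in base]
```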
Theorem 4.
Min-Diverse Feedback Vertex Set can be solved in time 2^{kr · max(r, 7 + log_2(kr))} · (nr)^{O(1)}.
Proof. 
Let G be an n-vertex graph. If n < kr, we again solve the problem by complete enumeration: there are trivially at most 2^n feedback vertex sets of size at most k. We form all r-tuples (T_1, …, T_r) of them and select one that maximizes div_min(T_1, …, T_r). The running time is at most O((2^n)^r · r^2 n) = O(2^{kr^2} · r^2 n).
We assume now that n ≥ kr. As in Section 4, we construct a set 𝒮 of at most 2^{7k} classes of FVS solutions of G, using Lemma 2. Then, we go through all (2^{7k})^r r-tuples of classes ((S_1, ℓ_1), …, (S_r, ℓ_r)) ∈ 𝒮^r. For each such r-tuple, we look for the r-tuple (T_1, …, T_r) of feedback vertex sets such that each T_i is described by (S_i, ℓ_i) and the objective value div_min(T_1, …, T_r) is maximized. Thus far, the procedure is completely analogous to the algorithm of Theorem 2 in Section 4 for maximizing div_total(T_1, …, T_r).
Now, in going from a class (S_i, ℓ_i) to T_i, we have to select a vertex from every set ℓ_i(v), for v ∈ S_i, and we may add an arbitrary number of additional vertices, up to size k. We make this selection as follows: whenever |ℓ_i(v)| < kr, we simply try all possibilities of choosing an element of ℓ_i(v) and putting it into T_i. If |ℓ_i(v)| ≥ kr, we defer the choice for later. In this way, we have created at most (kr)^{kr} “partial” feedback vertex sets (T_1^0, …, T_r^0).
For each such (T_1^0, …, T_r^0), we now add the remaining elements. In each deferred list ℓ_i(v), we greedily pick an element that is distinct from all other chosen elements. This is always possible since the list is large enough. Finally, we fill up the sets to size k, again choosing fresh elements each time. Each such choice is optimal because it increases the Hamming distance between the concerned set T_i and every other set T_j by 1, which is the best that one can hope for. As we perform this operation for each of the at most (2^{7k})^r tuples in 𝒮^r, and for each such tuple we create at most (kr)^{kr} r-tuples, we obtain an algorithm running in time 2^{7kr} · (kr)^{kr} · n^{O(1)}. The theorem follows. □

7. Conclusions and Open Problems

In this work, we have considered the paradigm of finding small diverse collections of reasonably good solutions to combinatorial problems, which has recently been introduced to the field of fixed-parameter tractability theory [21].
We have shown that finding diverse collections of d-hitting sets and feedback vertex sets can be done in FPT time. While these problems can be classified as FPT via the kernels and a treewidth-based meta-theorem proved in [21], the methods proposed here are of independent interest. We introduced a method of generating a maximally diverse set of solutions either from a set that contains all minimal solutions of bounded size (d-Hitting Set) or from a collection of structures that in some way describes all solutions of bounded size (Feedback Vertex Set). In both cases, the maximally diverse collection of solutions is obtained via a network flow model, which does not rely on any specific properties of the studied problems. It would be interesting to see if this strategy can be applied to give FPT algorithms for diverse problems that are not covered by the meta-theorem or the kernels presented in [21].
While the problems in [21] as well as the ones in Section 3 and Section 4 seek to maximize the sum of all pairwise Hamming distances, we also studied the variant that asks to maximize the minimum Hamming distance, taken over each pair of solutions. This was motivated by an example where the former measure does not perform as intended (Section 5). We showed that also, under this objective, the diverse variants of d-Hitting Set and Feedback Vertex Set are FPT . It would be interesting to see whether this objective also allows for a (possibly treewidth-based) meta-theorem.
In [21], the authors ask whether there is a problem that is in FPT parameterized by the solution size whose r-diverse variant becomes W[1]-hard upon adding r as another component of the parameter. We restate this question here.
Question 1
(Open Question [21]). Is there a problem Π with solution size k such that Π is FPT parameterized by k, while Diverse Π, asking for r solutions, is W[1]-hard parameterized by k + r?
To the best of our knowledge, this problem is still wide open. We believe that the div_min measure is more promising for obtaining such a result than the div_total measure. A possible way to tackle both measures at once might be a parameterized (and strengthened) analogue of the following approach, which is well studied in classical complexity: Yato and Seta propose a framework [35] to prove NP-completeness of finding a second solution to an NP-complete problem. In other words, there are some problems where, given one solution, it is still NP-hard to determine whether the problem has a different solution.
From a different perspective, one might want to identify problems where obtaining one solution is polynomial-time solvable, but finding a diverse collection of r solutions becomes NP-hard. The targeted running time should then be FPT parameterized by r (and maybe t, the diversity target) only. We conjecture that this is most probably NP-hard or W-hard in general. However, we believe it is interesting to search for well-known problems where this is not the case.

Author Contributions

Conceptualization, J.B., L.J., T.M., G.P. and G.R.; Methodology, J.B., L.J., T.M., G.P. and G.R.; Investigation, J.B., L.J., T.M., G.P. and G.R.; Writing–original draft preparation, J.B., L.J., T.M., G.P. and G.R.; Writing–review and editing, J.B., L.J., T.M., G.P. and G.R.

Funding

Tomáš Masařík received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme Grant Agreement No. 714704, and from Charles University student Grant No. SVV-2017-260452. Lars Jaffke is supported by the Bergen Research Foundation (BFS). Geevarghese Philip received funding from the following sources: the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (Grant No. 819416), the Norwegian Research Council via grants MULTIVAL and CLASSIS, BFS (Bergens Forsknings Stiftelse) “Putting Algorithms Into Practice” Grant No. 810564 and NFR (Norwegian Research Foundation) Grant No. 274526d “Parameterized Complexity for Practical Computing”.

Acknowledgments

The first, second, third and fourth authors would like to thank Mike Fellows for introducing them to the notion of diverse FPT algorithms and sharing the manuscript “The Diverse X Paradigm” [11].

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Ehrgott, M. Multicriteria Optimization; Springer: Berlin/Heidelberg, Germany, 2005; Volume 491.
2. Vela, A.E. Understanding Conflict-Resolution Taskload: Implementing Advisory Conflict-Detection and Resolution Algorithms in an Airspace. Ph.D. Thesis, Georgia Institute of Technology, Atlanta, GA, USA, 2011.
3. Idan, M.; Iosilevskii, G.; Ben-Yishay, L. Efficient air traffic conflict resolution by minimizing the number of affected aircraft. Int. J. Adapt. Control Signal Process. 2010, 24, 867–881.
4. Gandhi, S.; Buragohain, C.; Cao, L.; Zheng, H.; Suri, S. A general framework for wireless spectrum auctions. In Proceedings of the 2nd IEEE International Symposium on New Frontiers in Dynamic Spectrum Access Networks, Dublin, Ireland, 17–20 April 2007; pp. 22–33.
5. Hoefer, M.; Kesselheim, T.; Vöcking, B. Approximation algorithms for secondary spectrum auctions. ACM Trans. Internet Technol. (TOIT) 2014, 14, 16:1–16:24.
6. Chomicki, J.; Marcinkowski, J. Minimal-change integrity maintenance using tuple deletions. Inf. Comput. 2005, 197, 90–121.
7. Arenas, M.; Bertossi, L.; Chomicki, J.; He, X.; Raghavan, V.; Spinrad, J. Scalar aggregation in inconsistent databases. Theor. Comput. Sci. 2003, 296, 405–434.
8. Pema, E.; Kolaitis, P.G.; Tan, W.C. On the tractability and intractability of consistent conjunctive query answering. In Proceedings of the 2011 Joint EDBT/ICDT Ph.D. Workshop, Uppsala, Sweden, 25 March 2011; pp. 38–44.
9. Ioannou, E.; Staworko, S. Management of inconsistencies in data integration. In Data Exchange, Integration, and Streams; Schloss Dagstuhl-Leibniz-Zentrum für Informatik: Dagstuhl, Germany, 2013; Volume 5, pp. 217–225.
10. Galle, P. Branch & sample: A simple strategy for constraint satisfaction. BIT Numer. Math. 1989, 29, 395–408.
11. Fellows, M.R. The Diverse X Paradigm. Unpublished manuscript; University of Bergen: Bergen, Norway, 2018.
12. Solow, A.R.; Polasky, S. Measuring biological diversity. Environ. Ecol. Stat. 1994, 1, 95–103.
13. Bringmann, K.; Cabello, S.; Emmerich, M.T.M. Maximum Volume Subset Selection for Anchored Boxes. In Proceedings of the 33rd International Symposium on Computational Geometry (SoCG 2017), Brisbane, Australia, 4–7 July 2017; Aronov, B., Katz, M.J., Eds.; Schloss Dagstuhl–Leibniz-Zentrum für Informatik: Dagstuhl, Germany, 2017; Volume 77, pp. 22:1–22:15.
14. Kuhn, T.; Fonseca, C.M.; Paquete, L.; Ruzika, S.; Duarte, M.M.; Figueira, J.R. Hypervolume Subset Selection in Two Dimensions: Formulations and Algorithms. Evol. Comput. 2016, 24, 411–425.
15. Neumann, A.; Gao, W.; Doerr, C.; Neumann, F.; Wagner, M. Discrepancy-based Evolutionary Diversity Optimization. In Proceedings of the Genetic and Evolutionary Computation Conference, Kyoto, Japan, 15–19 July 2018; ACM: New York, NY, USA, 2018; pp. 991–998.
16. Ulrich, T.; Bader, J.; Thiele, L. Defining and Optimizing Indicator-Based Diversity Measures in Multiobjective Search. In Parallel Problem Solving from Nature, PPSN XI; Schaefer, R., Cotta, C., Kołodziej, J., Rudolph, G., Eds.; Springer: Berlin/Heidelberg, Germany, 2010; pp. 707–717.
17. Gabor, T.; Belzner, L.; Phan, T.; Schmid, K. Preparing for the Unexpected: Diversity Improves Planning Resilience in Evolutionary Algorithms. In Proceedings of the 2018 IEEE International Conference on Autonomic Computing, ICAC 2018, Trento, Italy, 3–7 September 2018; pp. 131–140.
18. Morrison, R.W.; Jong, K.A.D. Measurement of Population Diversity. In Proceedings of the 5th International Conference, Evolution Artificielle, EA 2001, Le Creusot, France, 29–31 October 2001; pp. 31–41.
19. Louis, S.J.; Rawlins, G.J.E. Syntactic Analysis of Convergence in Genetic Algorithms. In Proceedings of the Second Workshop on Foundations of Genetic Algorithms, Vail, CO, USA, 26–29 July 1992; pp. 141–151.
20. Wineberg, M.; Oppacher, F. The Underlying Similarity of Diversity Measures Used in Evolutionary Computation. In Proceedings of the Genetic and Evolutionary Computation-GECCO 2003, Genetic and Evolutionary Computation Conference, Chicago, IL, USA, 12–16 July 2003; pp. 1493–1504.
21. Baste, J.; Fellows, M.; Jaffke, L.; Masařík, T.; de Oliveira Oliveira, M.; Philip, G.; Rosamond, F. Diversity in Combinatorial Optimization. arXiv 2019, arXiv:1903.07410.
22. Cygan, M.; Fomin, F.; Kowalik, L.; Lokshtanov, D.; Marx, D.; Pilipczuk, M.; Pilipczuk, M.; Saurabh, S. Parameterized Algorithms; Springer: Berlin/Heidelberg, Germany, 2015.
23. Tarjan, R.E. Data Structures and Network Algorithms; SIAM: Philadelphia, PA, USA, 1983.
24. Ahuja, R.K.; Orlin, J.B.; Stein, C.; Tarjan, R.E. Improved algorithms for bipartite network flow. SIAM J. Comput. 1994, 23, 906–933.
25. Karp, R.M. Reducibility among combinatorial problems. In Complexity of Computer Computations; Springer: Berlin/Heidelberg, Germany, 1972; pp. 85–103.
26. Domshlak, C.; Hoffmann, J. Fast probabilistic planning through weighted model counting. In Proceedings of the Sixteenth International Conference on Automated Planning and Scheduling, Cumbria, UK, 6–10 June 2006; pp. 243–252.
27. Palacios, H.; Bonet, B.; Darwiche, A.; Geffner, H. Pruning conformant plans by counting models on compiled d-DNNF representations. In Proceedings of the Fifteenth International Conference on Automated Planning and Scheduling, Menlo Park, CA, USA, 5–10 June 2005; pp. 141–150.
28. Bacchus, F.; Dalmao, S.; Pitassi, T. Algorithms and complexity results for #SAT and Bayesian inference. In Proceedings of the 44th Annual IEEE Symposium on Foundations of Computer Science, Cambridge, MA, USA, 11–14 October 2003; pp. 340–351.
29. Littman, M.L.; Majercik, S.M.; Pitassi, T. Stochastic Boolean satisfiability. J. Autom. Reason. 2001, 27, 251–296.
30. Sang, T.; Beame, P.; Kautz, H. Performing Bayesian inference by weighted model counting. In Proceedings of the 20th National Conference on Artificial Intelligence, Pittsburgh, PA, USA, 9–13 July 2005; pp. 475–481.
31. Apsel, U.; Brafman, R.I. Lifted MEU by weighted model counting. In Proceedings of the Twenty-Sixth AAAI Conference on Artificial Intelligence, Toronto, ON, Canada, 22–26 July 2012; pp. 1861–1867.
32. Dechter, R.; Cohen, D. Constraint Processing; Morgan Kaufmann: Burlington, MA, USA, 2003.
33. Kociumaka, T.; Pilipczuk, M. Faster deterministic Feedback Vertex Set. Inf. Process. Lett. 2014, 114, 556–560.
34. Reed, B.; Smith, K.; Vetta, A. Finding odd cycle transversals. Oper. Res. Lett. 2004, 32, 299–301.
35. Yato, T.; Seta, T. Complexity and completeness of finding another solution and its application to puzzles. IEICE Trans. Fundam. Electron. Commun. Comput. Sci. 2003, 86, 1052–1060.
Figure 1. The network. The middle layer between the vertices T_i and V_j is a complete bipartite graph, but only a few selected arcs are shown. A potential augmenting path is highlighted.
Figure 2. Part of the modified network for a solution T which is specified by b = 3 sets L_1 = {1, 2}, L_2 = {3}, and L_3 = {4, 5, 6}.
Figure 3. The r = 6 different vertex covers of size r − 1 = 5 in a path with 2(r − 1) = 10 vertices.
