On the convergence time of a natural dynamics
for linear programming
V. Bonifaci
ISAAC 2017
[ doi ]
[ abstract ]
We consider a system of nonlinear ordinary differential equations for the
solution of linear programming (LP) problems that was first proposed in the
mathematical biology literature as a model for the foraging behavior of
the acellular slime mold Physarum polycephalum, and more recently considered as a
method to solve LP instances. We study the convergence time of the continuous
Physarum dynamics in the context of the linear programming problem, and
derive a new time bound to approximate optimality that depends on the
relative entropy between projected versions of the optimal point and of the
initial point. The bound scales logarithmically with the LP cost coefficients
and linearly with the inverse of the relative accuracy, establishing the
efficiency of the dynamics for arbitrary LP instances with positive costs.
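For the reader's convenience, the dynamics in question can be stated
compactly: for an LP $\min c^T x$ subject to $Ax = b$, $x \ge 0$ with positive
cost vector $c$, a positive point $x$ evolves as $\dot{x} = q(x) - x$, where
$q(x)$ minimizes the energy $\sum_e c_e q_e^2 / x_e$ over all $q$ with
$Aq = b$. Below is a minimal NumPy sketch of one Euler step of this dynamics;
the function name and step size h are illustrative and not taken from the
paper.

    import numpy as np

    def physarum_lp_step(A, b, c, x, h=0.1):
        # One Euler step of x' = q(x) - x, where q(x) minimizes
        # sum_e c_e q_e^2 / x_e subject to A q = b (weighted least squares).
        # A, b, c, x are NumPy arrays; c and x must be strictly positive.
        W = np.diag(x / c)                        # diagonal "conductance" matrix
        L = A @ W @ A.T
        p = np.linalg.lstsq(L, b, rcond=None)[0]  # node potentials
        q = W @ A.T @ p                           # energy-minimizing point with A q = b
        return x + h * (q - x)

Iterating this step from a positive starting point traces an approximation of
the continuous trajectory whose convergence time the paper bounds.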
A scheduling model inspired by control
theory
S. Baruah, V. Bonifaci, A. Marchetti-Spaccamela, and V. Verdugo
RTNS 2017
[ doi ]
[ abstract ]
Certain control computations may be modeled as periodic tasks with the
correctness requirement that for each task, the fraction of jobs of the task
that complete execution by their respective deadlines be no smaller than a
specified value. This appears to be a correctness requirement that has not
previously been studied in the real-time scheduling theory community; this
paper formulates the problem and proposes some solution strategies.
Algorithms for hierarchical and
semi-partitioned parallel scheduling
V. Bonifaci, G. D'Angelo, and A. Marchetti-Spaccamela
IPDPS 2017
[ doi ]
[ abstract ]
We propose a model for scheduling jobs in a parallel machine setting that takes
into account the cost of migrations by assuming that the processing time of a
job may depend on the specific set of machines among which the job is
migrated. For the makespan minimization objective, the model generalizes
classical scheduling problems such as unrelated parallel machine scheduling,
as well as novel ones such as semi-partitioned and clustered scheduling. In
the case of a hierarchical family of machines, we derive a compact integer
linear programming formulation of the problem and leverage its fractional
relaxation to obtain a polynomial-time 2-approximation algorithm. Extensions
that incorporate memory capacity constraints are also discussed.
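As a point of reference for the formulation mentioned above (this is the
textbook program for unrelated machines, not the compact hierarchical ILP from
the paper), makespan minimization can be written with assignment variables
$x_{ij}$:

    $\min T$ s.t. $\sum_i x_{ij} = 1$ for every job $j$,
    $\sum_j p_{ij} x_{ij} \le T$ for every machine $i$, $x_{ij} \in \{0,1\}$,

where $p_{ij}$ is the processing time of job $j$ on machine $i$. Rounding the
fractional relaxation of a guessed-makespan variant of this program is the
classical route to a 2-approximation for unrelated machines, and the compact
formulation in the paper plays the analogous role for hierarchical machine
families.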
Multiprocessor real-time scheduling with
hierarchical processor affinities
V. Bonifaci, B. Brandenburg, G. D'Angelo, and A. Marchetti-Spaccamela
ECRTS 2016
[ doi ]
[ abstract ]
Many multiprocessor real-time operating systems offer the possibility to
restrict the migrations of any task to a specified subset of processors by
setting affinity masks. A notion of “strong arbitrary processor affinity
scheduling” (strong APA scheduling) has been proposed; this notion avoids
schedulability losses due to overly simple implementations of processor
affinities. Due to potential overheads, strong APA has not been implemented
so far in a real-time operating system. We show that, in the special but
highly relevant case of hierarchical processor affinities (HPA), strong APA
scheduling can be implemented with a vastly improved runtime complexity. In
particular, we present a strong HPA scheduler with a runtime complexity of
O(m) per task arrival and O(log n+m^2) per task departure, where m is the
number of processors and n is the number of tasks, thus improving on the
previous bounds of O(m^2) and O(mn). The improved runtime algorithms allowed
us to implement support for strong hierarchical processor affinities in
LITMUS^RT. We benchmarked this implementation on a 24-core platform and
observed nonnegligible, but still viable runtime overheads. Additionally, in
the case of a bilevel affinity hierarchy and when job priorities are based on
deadlines, we argue that the performance of our strong HPA scheduler,
HPA-EDF, can be related to system optimality in the following way: any
collection of jobs that is schedulable (under any policy) on m unit-speed
processors subject to hierarchical affinity constraints is correctly
scheduled by HPA-EDF on m processors of speed 2.415.
Polynomial-time exact schedulability tests for
harmonic real-time tasks
V. Bonifaci, A. Marchetti-Spaccamela, N. Megow, and A. Wiese
IEEE RTSS 2013
[ doi ]
[ abstract ]
We study the preemptive scheduling of real-time sporadic tasks on a
uniprocessor. We consider both fixed priority (FP) scheduling as well as
dynamic priority scheduling by the Earliest Deadline First (EDF) algorithm.
We investigate the problems of testing schedulability and computing the
response time of tasks. Generally these problems are known to be
computationally intractable for task systems with constrained deadlines. In
this paper, we focus on the particular case of task systems with harmonic
period lengths, meaning that the periods of the tasks pairwise divide each
other. This is a special case of practical relevance. We present provably
efficient exact algorithms for constrained-deadline task systems with
harmonic periods. In particular, we provide an exact polynomial-time
algorithm for computing the response time of a task in a system with an
arbitrary fixed priority order. This also implies an exact FP-schedulability
test. For dynamic priority scheduling, we show how to test EDF-schedulability
in polynomial time. Additionally, we give a very simple EDF-schedulability
test for the simpler case where relative deadlines and periods are jointly
harmonic.
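For context, the response time of a task under fixed-priority preemptive
uniprocessor scheduling is the least fixed point of
$R_i = C_i + \sum_{j \in hp(i)} \lceil R_i / T_j \rceil C_j$, normally found
by iteration, which is only pseudo-polynomial in general; the contribution
above is that for harmonic periods an exact answer can be computed in
polynomial time. A minimal sketch of the textbook iteration (not of the
paper's polynomial-time algorithm) follows; task parameters are illustrative.

    def response_time(C, T, i, limit=100000):
        # Fixed-point iteration for the response time of task i under
        # fixed-priority preemptive scheduling; tasks 0..i-1 have higher
        # priority. C[j] is the WCET and T[j] the period of task j.
        R = C[i]
        for _ in range(limit):
            R_next = C[i] + sum(-(-R // T[j]) * C[j] for j in range(i))  # ceil(R/T[j])
            if R_next == R:
                return R
            R = R_next
        return None  # no fixed point found within the iteration limit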
Physarum can compute shortest paths:
Convergence proofs and complexity bounds
L. Becchetti, V. Bonifaci, M. Dirnberger, A. Karrenbauer, and K. Mehlhorn
ICALP 2013
[ doi ]
[ abstract ]
Physarum polycephalum is a slime mold that is apparently able to solve shortest
path problems. A mathematical model for the slime's behavior in the form of a
coupled system of differential equations was proposed by Tero, Kobayashi and
Nakagaki (2007). We prove that a discretization of the model (Euler
integration) computes a (1 + eps)-approximation of the shortest path in
O(mL(log n + log L)/eps^3) iterations, with arithmetic on numbers of
O(log(nL/eps)) bits; here, n and m are the number of nodes and edges of the
graph, respectively, and L is the largest length of an edge. We also obtain
two results for a directed Physarum model proposed by Ito et al. (2011):
convergence in the general, nonuniform case and convergence and complexity
bounds for the discretization of the uniform case.
[ erratum ]
On the efficiency of restricted tolls in
network routing games
V. Bonifaci, M. Salek, and G. Schaefer
SAGT 2011
[ doi ]
[ abstract ]
An effective means to reduce the inefficiency of Nash flows in non-atomic
network routing games is to impose tolls on the arcs of the network. It is a
well-known fact that marginal cost tolls induce a Nash flow that corresponds
to a minimum cost flow. However, despite their effectiveness, marginal cost
tolls suffer from two major drawbacks, namely (i) that potentially every arc
of the network is tolled, and (ii) that the imposed tolls can be arbitrarily
large. In this paper, we study the restricted network toll problem in which
tolls can be imposed on the arcs of the network but are restricted to not
exceed a predefined threshold for every arc. We show that optimal restricted
tolls can be computed efficiently for parallel-arc networks and affine
latency functions. This generalizes previous work on taxing subnetworks to
arbitrary restrictions. Our algorithm is quite simple, but relies on solving
several convex programs. The key to our approach is a characterization of the
flows that are inducible by restricted tolls for single-commodity networks.
We also derive bounds on the efficiency of restricted tolls for
multi-commodity networks and polynomial latency functions. These bounds are
tight even for parallel-arc networks. Our bounds show that restricted tolls
can significantly reduce the price of anarchy if the restrictions imposed on
arcs with high-degree polynomials are not too severe. Our proof is
constructive. We define tolls respecting the given thresholds and show that
these tolls lead to a reduced price of anarchy by using a
(lambda,mu)-smoothness approach.
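The $(\lambda,\mu)$-smoothness argument invoked at the end is the standard
one: if for all feasible flows $f, f^*$ the (tolled) latencies satisfy
$\sum_a c_a(f_a) f^*_a \le \lambda C(f^*) + \mu C(f)$, then for a Nash flow
$f$ and an optimal flow $f^*$

    $C(f) = \sum_a c_a(f_a) f_a \le \sum_a c_a(f_a) f^*_a
           \le \lambda C(f^*) + \mu C(f)$,

so $C(f) \le \lambda/(1-\mu)\, C(f^*)$ whenever $\mu < 1$, i.e. the price of
anarchy is at most $\lambda/(1-\mu)$.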
A generalized parallel task model for recurrent
real-time processes
V. Bonifaci, A. Wiese, S. K. Baruah, A. Marchetti-Spaccamela, S. Stiller, and
L. Stougie
ACM Transactions on Parallel Computing, 6(1):3, 2019
Preliminary version in RTSS 2012, ECRTS 2013, ECRTS
2015
[ doi ]
[ abstract ]
A model is considered for representing recurrent precedence-constrained tasks
that are to execute on multiprocessor platforms. A recurrent task is
specified as a directed acyclic graph (DAG), a period, and a relative
deadline. Each vertex of the DAG represents a sequential job, while the edges
of the DAG represent precedence constraints between these jobs. All the jobs
of the DAG are released simultaneously and need to complete execution within
the specified relative deadline of their release. Each task may release jobs
in this manner an unbounded number of times, with successive releases
occurring at least the specified period apart. Conditional control structures
are also allowed. The scheduling problem is to determine whether a set of
such recurrent tasks can be scheduled to always meet all deadlines upon a
specified number of identical processors. This problem is shown to be
computationally intractable, but amenable to efficient approximate solutions.
Earliest Deadline First (EDF) and Deadline Monotonic (DM) are shown to be
good approximate global scheduling algorithms. Polynomial and
pseudo-polynomial time schedulability tests, of differing effectiveness, are
presented for determining whether a given task set can be scheduled by EDF or
DM to always meet deadlines on a specified number of processors.
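Two quantities drive most analyses of this model: the volume of a DAG task
(the total worst-case execution time of its vertices) and its critical-path
length (the longest chain of precedence-constrained jobs, which lower-bounds
any response time). A minimal sketch of how both can be computed from a DAG
given as adjacency lists; the representation and names are illustrative, not
from the article.

    from functools import lru_cache

    def dag_parameters(wcet, succ):
        # wcet[v]: worst-case execution time of vertex v
        # succ[v]: list of successors of v in the (acyclic) task graph
        @lru_cache(maxsize=None)
        def longest_from(v):
            return wcet[v] + max((longest_from(u) for u in succ[v]), default=0)
        volume = sum(wcet.values())
        critical_path = max(longest_from(v) for v in wcet)
        return volume, critical_path

For example, dag_parameters({'a': 2, 'b': 3, 'c': 1},
{'a': ['b', 'c'], 'b': [], 'c': []}) returns (6, 5): volume 6 and critical
path a-b of length 5.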
Two results on slime mold computations
R. Becker, V. Bonifaci, A. Karrenbauer, P. Kolev, and K. Mehlhorn
Theoretical Computer Science, 773:79--106, 2019
[ doi ]
[ abstract ]
We present two results on slime mold computations. In wet-lab experiments
(Nature'00) by Nakagaki et al. the slime mold Physarum polycephalum
demonstrated its ability to solve shortest path problems. Biologists proposed
a mathematical model, a system of differential equations, for the slime's
adaptation process (J. Theoretical Biology'07). It was shown that the process
converges to the shortest path (J. Theoretical Biology'12) for all graphs.
We show that the dynamics actually converges for a much wider class of
problems, namely undirected linear programs with a non-negative cost vector.
Combinatorial optimization researchers took the dynamics describing slime
behavior as an inspiration for an optimization method and showed that its
discretization can $\epsilon$-approximately solve linear programs with
positive cost vector (ITCS'16). Their analysis requires a feasible starting
point, a step size depending linearly on $\epsilon$, and a number of steps
with quartic dependence on $\mathrm{opt}/(\epsilon\Phi)$, where $\Phi$ is the
difference between the smallest cost of a non-optimal basic feasible solution
and the optimal cost ($\mathrm{opt}$). We give a refined analysis showing
that the dynamics initialized with any strongly dominating point converges to
the set of optimal solutions. Moreover, we strengthen the convergence rate
bounds and prove that the step size is independent of $\epsilon$, and the
number of steps depends logarithmically on $1/\epsilon$ and quadratically on
$\mathrm{opt}/\Phi$.
ILP models for the allocation of recurrent
workloads upon heterogeneous multiprocessors
S. K. Baruah, V. Bonifaci, R. Bruni, and A. Marchetti-Spaccamela
Journal of Scheduling, 22(2):195--209, 2019
Preliminary version in ECRTS 2016
[ doi ]
[ abstract ]
The problem of partitioning systems of independent constrained-deadline
sporadic tasks upon heterogeneous multiprocessor platforms is considered.
Several different integer linear program (ILP) formulations of this problem,
offering different tradeoffs between effectiveness (as quantified by speedup
bound) and running time efficiency, are presented. One of the formulations is
leveraged to improve the best speedup guarantee known for a polynomial-time
partitioning algorithm, from 12.9 to 7.83. Extensive computational results on
synthetically generated instances are also provided to establish the
effectiveness of the ILP formulations.
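A natural baseline for such formulations (not necessarily one of the ILPs
studied in the paper) uses binary variables $x_{ij}$ indicating that task $i$
is assigned to processor $j$:

    $\sum_j x_{ij} = 1$ for every task $i$, $\quad$
    $\sum_i u_{ij} x_{ij} \le 1$ for every processor $j$, $\quad$
    $x_{ij} \in \{0,1\}$,

where $u_{ij}$ is the utilization of task $i$ when executed on processor $j$.
The formulations in the paper refine this template for constrained-deadline
tasks and trade off how tightly schedulability is captured (hence the speedup
bound) against the size and solution time of the program.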
A revised model of fluid transport optimization
in Physarum polycephalum
V. Bonifaci
Journal of Mathematical Biology, 74:567--581, 2017
[ doi ]
[ abstract ]
Optimization of fluid transport in the slime mold Physarum polycephalum
has been the subject of several modeling efforts in recent literature.
Existing models assume that the tube adaptation mechanism in P.
polycephalum's tubular network is controlled by the sheer amount of fluid
flow through the tubes. We put forward the hypothesis that the controlling
variable may instead be the flow's pressure gradient along the tube. We carry
out the stability analysis of such a revised mathematical model for a
parallel-edge network, proving that the revised model supports the global
flow-optimizing behavior of the slime mold for a substantially wider class of
response functions compared to previous models. Simulations also suggest that
the same conclusion may be valid for arbitrary network topologies.
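To make the hypothesis concrete in the notation of the standard models: under
Poiseuille flow, a tube $e$ of length $L_e$ and conductivity $D_e$ carrying
flow $Q_e$ sustains a pressure drop $\Delta p_e = Q_e L_e / D_e$, so the
pressure gradient along the tube is

    $|\Delta p_e| / L_e = |Q_e| / D_e.$

The revised model thus lets tube adaptation respond to $|Q_e|/D_e$ rather than
to $|Q_e|$ alone; this is a paraphrase of the hypothesis, not the paper's
exact adaptation law.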
Exact response time analysis for fixed
priority memory-processor co-scheduling
A. Melani, M. Bertogna, R. I. Davis, V. Bonifaci, A. Marchetti-Spaccamela,
and G. Buttazzo
IEEE Transactions on Computers, 66(4):631--646, 2017
Preliminary version in RTNS 2015
[ doi ]
[ abstract ]
Recent technological advances have led to an increasing gap between memory and
processor performance, since memory bandwidth is progressing at a much slower
pace than processor bandwidth. Pre-fetching techniques are traditionally used
to bridge this gap and achieve high processor utilization while tolerating
high memory latencies. Following this trend, new computational models have
been proposed to split task execution in two consecutive phases: a memory
phase in which the required instructions and data are pre-fetched to local
memory (M-phase), and an execution phase in which the task is executed with
no memory contention (C-phase). Decoupling memory and execution phases not
only simplifies the timing analysis, but also allows a more efficient (and
predictable) pipelining of memory and execution phases through proper
co-scheduling algorithms. This paper takes a further step towards the design
of smart co-scheduling algorithms for sporadic real-time tasks complying with
the memory-computation (M/C) model, by proposing a theoretical framework
aimed at tightly characterizing the schedulability improvement obtainable
with the adopted M/C task model on single-core systems. In particular, a
critical instant is identified for M/C tasks scheduled with fixed priority
and an exact response time analysis with pseudo-polynomial complexity is
provided. Then, we investigate the problem of priority assignment for M/C
tasks, showing that a necessary condition to achieve optimality is to allow
different priorities for the two phases. Our experiments show that the
proposed techniques provide a significant schedulability improvement with
respect to classic execution models, placing an important building block
towards the design of more efficient partitioned multi-core systems.
Schedulability analysis of conditional parallel
task graphs in multicore systems
A. Melani, M. Bertogna, V. Bonifaci, A. Marchetti-Spaccamela, and G.
Buttazzo
IEEE Transactions on Computers, 66(2):339--353, 2017
Preliminary version in ECRTS 2015
[ doi ]
[ abstract ]
Several task models have been introduced in the literature to describe the
intrinsic parallelism of real-time activities, including fork/join,
synchronous parallel, DAG-based, etc. Although schedulability tests and
resource augmentation bounds have been derived for these task models in the
context of multicore systems, they are still too pessimistic to describe the
execution flow of parallel tasks characterized by multiple (and nested)
conditional statements, where it is hard to decide which execution path to
select for modeling the worst-case scenario. To overcome this problem, this
paper proposes a task model that integrates control flow information by
considering conditional parallel tasks (cp-tasks) represented by DAGs with
both precedence and conditional edges. For this task model, a set of
meaningful parameters is identified and computed by efficient algorithms, and
a response-time analysis is presented for different scheduling policies.
Experimental results are finally reported to evaluate the efficiency of the
proposed schedulability tests and their performance with respect to existing
tests for both conditional and non-conditional task models.
Preemptive uniprocessor scheduling of
mixed-criticality sporadic task systems
S. K. Baruah, V. Bonifaci, G. D'Angelo, H. Li, A. Marchetti-Spaccamela, S.
van der Ster, and L. Stougie
Journal of the ACM, 62(2):14, 2015
Preliminary version in ECRTS 2012, ESA 2011
[ doi ]
[ abstract ]
Systems in many safety-critical application domains are subject to
certification requirements. For any given system, however, it may be the case
that only a subset of its functionality is safety-critical and hence subject
to certification; the rest of the functionality is non-safety-critical and
does not need to be certified, or is certified to lower levels of assurance.
The certification-cognizant runtime scheduling of such mixed-criticality
systems is considered. An algorithm called EDF-VD (for Earliest Deadline
First with Virtual Deadlines) is presented: this algorithm can schedule
systems for which any number of criticality levels are defined. Efficient
implementations of EDF-VD, as well as associated schedulability tests for
determining whether a task system can be correctly scheduled using EDF-VD,
are presented. For up to 13 criticality levels, analyses of EDF-VD, based on
metrics such as processor speedup factor and utilization bounds, are derived,
and conditions under which EDF-VD is optimal with respect to these metrics
are identified. Finally, two extensions of EDF-VD are discussed that enhance
its applicability. The extensions are aimed at scheduling a wider range of
task sets, while preserving the favorable worst-case resource usage
guarantees of the basic algorithm.
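To give the flavor of these tests, here is a sketch of the well-known
utilization-based sufficient test for the dual-criticality special case of
EDF-VD (the article treats an arbitrary number of levels); the function name
and return convention are illustrative.

    def edf_vd_dual_test(u_lo_lo, u_hi_lo, u_hi_hi):
        # u_lo_lo: total LO-level utilization of the LO-criticality tasks
        # u_hi_lo: total LO-level utilization of the HI-criticality tasks
        # u_hi_hi: total HI-level utilization of the HI-criticality tasks
        # Returns a virtual-deadline scaling factor x for the HI tasks,
        # or None if the sufficient test fails.
        if u_lo_lo + u_hi_hi <= 1.0:
            return 1.0                    # no deadline shrinking needed
        if u_lo_lo >= 1.0:
            return None
        x = u_hi_lo / (1.0 - u_lo_lo)     # smallest factor safe before a mode switch
        if x <= 1.0 and x * u_lo_lo + u_hi_hi <= 1.0:
            return x                      # also safe after a switch to HI criticality
        return None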
Physarum can compute shortest paths: A short
proof
V. Bonifaci
Information Processing Letters, 113(1--2):4--7, 2013
[ doi ]
[ abstract ]
The purpose of this note is to give a short proof that a standard model for the
Physarum polycephalum slime mold correctly computes the shortest path in an
undirected weighted graph [V. Bonifaci, K. Mehlhorn, G. Varma, Physarum can
compute shortest paths, in: Proc. of the 23rd ACM-SIAM Symposium on Discrete
Algorithms, SIAM, 2012, pp. 233-240].
Partitioned EDF scheduling on a few types of
unrelated multiprocessors
A. Wiese, V. Bonifaci, and S. Baruah
Real-Time Systems, 49(2):219--238, 2013
[ doi ]
[ abstract ]
A polynomial-time approximation scheme (PTAS) is derived for the partitioned
EDF scheduling of implicit-deadline sporadic task systems upon unrelated
multiprocessor platforms that are comprised of a constant number of distinct
types of processors. This generalizes earlier results showing the existence
of polynomial-time approximation schemes for the partitioned EDF scheduling
of implicit-deadline sporadic task systems on (1) identical multiprocessor
platforms, and (2) unrelated multiprocessor platforms containing a constant
number of processors.
Physarum can compute shortest paths
V. Bonifaci, K. Mehlhorn, and G. Varma
Journal of Theoretical Biology, 309:121--133, 2012
Preliminary version in SODA 2012
[ doi ]
[ abstract ]
Physarum polycephalum is a slime mold that is apparently able to solve shortest
path problems. A mathematical model has been proposed by Tero et al. (Journal
of Theoretical Biology, 244, 2007, pp. 553-564) to describe the feedback
mechanism used by the slime mold to adapt its tubular channels while foraging
two food sources s0 and s1. We prove that, under this model, the mass of the
mold will eventually converge to the shortest s0-s1 path of the network that
the mold lies on, independently of the structure of the network or of the
initial mass distribution. This matches the experimental observations by Tero
et al. and can be seen as an example of a "natural algorithm", that is, an
algorithm developed by evolution over millions of years.
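Concretely, in the model of Tero et al. each edge e has a fixed length $L_e$
and a time-varying conductivity $D_e(t)$; $Q_e(t)$ is the flow on e when one
unit of current is sent from s0 to s1 through the electrical network with
edge conductances $D_e(t)/L_e$, and the conductivities adapt according to

    $\frac{d}{dt} D_e(t) = |Q_e(t)| - D_e(t).$

The convergence result above concerns exactly this coupled system, with
arbitrary positive initial conductivities.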
Algorithms and complexity for periodic
real-time scheduling
V. Bonifaci, H.-L. Chan, A. Marchetti-Spaccamela, and N. Megow
ACM Transactions on Algorithms, 9(1):6, 2012
Preliminary version in SODA 2010
[ doi ]
[ abstract ]
We investigate the preemptive scheduling of periodic tasks with hard deadlines.
We show that, even in the uniprocessor case, no pseudopolynomial-time
algorithm can test the feasibility of a task system within a constant speedup
bound, unless P = NP. This result contrasts with recent results for sporadic
task systems. For two special cases, synchronous task systems and systems
with a constant number of different task types, we provide the first
polynomial-time constant-speedup feasibility tests for multiprocessor
platforms. Furthermore, we show that the problem of testing feasibility is
coNP-hard for synchronous multiprocessor task systems. The complexity of some
of these problems has been open for a long time. We also propose a weight
maximization variant of the feasibility problem, where every task has a
nonnegative weight, and the goal is to find a subset of tasks that can be
scheduled feasibly and has maximum weight. We give the first constant-speed,
constant-approximation algorithm for the case of synchronous task systems,
together with related hardness results.
Scheduling real-time mixed-criticality
jobs
S. K. Baruah, V. Bonifaci, G. D'Angelo, H. Li, A. Marchetti-Spaccamela, N.
Megow, and L. Stougie
IEEE Transactions on Computers, 61(8):1140--1152,
2011
Preliminary version in MFCS 2010
[ doi ]
[ abstract ]
Many safety-critical embedded systems are subject to certification
requirements; some systems may be required to meet multiple sets of
certification requirements, from different certification authorities.
Certification requirements in such "mixed-criticality" systems give rise to
interesting scheduling problems that cannot be satisfactorily addressed
using techniques from conventional scheduling theory. In this paper, we study
a formal model for representing such mixed-criticality workloads. We
demonstrate first the intractability of determining whether a system
specified in this model can be scheduled to meet all its certification
requirements, even for systems subject to merely two sets of certification
requirements. Then we quantify, via the metric of processor speedup factor,
the effectiveness of two techniques, reservation-based scheduling and
priority-based scheduling, that are widely used in scheduling such
mixed-criticality systems, showing that the latter of the two is superior to
the former. We also show that the speedup factors we obtain are tight for
these two techniques.
[ erratum ]
Feasibility analysis of sporadic real-time
multiprocessor task systems
V. Bonifaci and A. Marchetti-Spaccamela
Algorithmica, 63(4):763--780, 2012
Preliminary version in ESA 2010
Co-winner of the ESA Best Paper Award
[ doi ]
[ abstract ]
We give the first algorithm for testing the feasibility of a system of sporadic
real-time tasks on a set of identical processors, solving an open problem in
the area of multiprocessor real-time scheduling (Baruah and Pruhs in Journal
of Scheduling 13(6):577-582, 2009). We also investigate the related notion of
schedulability and a notion that we call online feasibility. Finally, we show
that discrete-time schedules are as powerful as continuous-time schedules,
which answers another open question in the above mentioned survey.
A constant-approximate feasibility test for
multiprocessor real-time scheduling
V. Bonifaci, A. Marchetti-Spaccamela, and S. Stiller
Algorithmica, 62(3--4):1034--1049, 2012
Preliminary version in ESA 2008
[ doi ]
[ abstract ]
We devise an approximate feasibility test for multiprocessor real-time
scheduling in the sporadic task model. We give an algorithm that, given a
task system and eps>0, correctly decides either that the task system can be
scheduled using the Earliest Deadline First algorithm on m speed-(2-1/m+eps)
machines, or that the system is not schedulable by any algorithm on m unit
speed machines. This speedup bound is known to be the best possible for EDF.
The running time of the algorithm is polynomial in the size of the task
system and 1/eps. We also provide a generalized tight bound that trades off
speed with additional machines.
Minimizing flow time in the wireless gathering
problem
V. Bonifaci, P. Korteweg, A. Marchetti-Spaccamela, and L. Stougie
ACM Transactions on Algorithms, 7(3):33, 2011
Preliminary version in STACS 2008, ALGOSENSORS
2008
[ doi ]
[ abstract ]
We address the problem of efficient data gathering in a wireless network
through multi-hop communication. We focus on two objectives related to flow
times, that is, the times spent by data packets in the system: minimization
of the maximum flow time and minimization of the average flow time of the
packets. For both problems we prove that, unless P=NP, no polynomial-time
algorithm can approximate the optimal solution within a factor less than
$\Omega(m^{1-\epsilon})$ for any $0<\epsilon<1$, where $m$ is the number of packets.
We then assess the performance of two natural algorithms by proving that
their cost remains within the optimal cost of the respective problem if we
allow the algorithms to transmit data at a speed 5 times higher than that of
the optimal solutions to which we compare them.
The distributed wireless gathering
problem
V. Bonifaci, P. Korteweg, A. Marchetti-Spaccamela, and L. Stougie
Theoretical Computer Science, 412(8-10):633--641,
2011
[ doi ]
[ abstract ]
We address the problem of data gathering in a wireless network using multi-hop
communication; our main goal is the analysis of simple algorithms suitable
for implementation in realistic scenarios. We study the performance of
distributed algorithms, which do not use any form of local coordination, and
we focus on the objective of minimizing average flow times of data packets.
We prove a lower bound of $\Omega(n)$ on the expected competitive ratio of
any acknowledgment-based distributed algorithm minimizing the maximum flow
time, where $n$ is the number of nodes of the network. Next, we consider a
distributed algorithm which sends packets over shortest paths, and we use
resource augmentation to analyze its performance when the objective is to
minimize the average flow time. If interferences are modeled as in Bar-Yehuda
et al. (J. of Computer and System Sciences, 45(1):104--126, 1992), we prove
that the algorithm is $(1+\epsilon)$-competitive, when the algorithm sends
packets a factor $O(\log (\delta/\epsilon) \log \Delta)$ faster than the
optimal offline solution; here $\delta$ is the radius of the network and
$\Delta$ the maximum degree. We finally extend this result to a more complex
interference model.
Improved multiprocessor global schedulability
analysis
S. K. Baruah, V. Bonifaci, A. Marchetti-Spaccamela, and S. Stiller
Real-Time Systems, 46(1):3--24, 2010
Preliminary version in ECRTS 2009
[ doi ]
[ abstract ]
A new technique was recently introduced by Bonifaci et al. for the analysis of
real-time systems scheduled on multiprocessor platforms by the global
Earliest Deadline First (EDF) scheduling algorithm. In this paper, this
technique is generalized so that it is applicable to the schedulability
analysis of real-time systems scheduled on multiprocessor platforms by any
work-conserving algorithm. The resulting analysis technique is applied to
obtain a new sufficient global Deadline Monotonic (DM) schedulability test.
It is shown that this new test is quantitatively superior to pre-existing DM
schedulability analysis tests; in addition, the degree of its deviation from
any hypothetical optimal scheduler (that may be clairvoyant) is
quantitatively bounded. A new global EDF schedulability test is also proposed
here that builds on the results of Bonifaci et al. This new test is shown to
be less pessimistic and more widely applicable than the earlier result was,
while retaining the strong theoretical properties of the earlier result.
Budgeted matching and budgeted matroid intersection
via the gasoline puzzle
A. Berger, V. Bonifaci, F. Grandoni, and G. Schaefer
Mathematical Programming, 128(1-2):355--372, 2011
Preliminary version in IPCO 2008
[ doi ]
[ abstract ]
Many polynomial-time solvable combinatorial optimization problems become
NP-hard if an additional complicating constraint is added to restrict the set
of feasible solutions. In this paper, we consider two such problems, namely
maximum-weight matching and maximum-weight matroid intersection with one
additional budget constraint. We present the first polynomial-time
approximation schemes for these problems. Similarly to other approaches for
related problems, our schemes compute two solutions to the Lagrangian
relaxation of the problem and patch them together to obtain a near-optimal
solution. However, due to the richer combinatorial structure of the problems
considered here, standard patching techniques do not apply. To circumvent
this problem, we crucially exploit the adjacency relations on the solution
polytope and, surprisingly, the solution to an old combinatorial puzzle.
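The Lagrangian relaxation referred to above dualizes the budget constraint
$c(x) \le B$: for a penalty $\lambda \ge 0$ one solves

    $z(\lambda) = \max \{\, w(x) - \lambda (c(x) - B) : x \in \mathcal{F} \,\},$

where $\mathcal{F}$ is the set of matchings (respectively, common independent
sets of the two matroids). This is again an unconstrained maximum-weight
problem with modified weights $w_e - \lambda c_e$, so it can be solved
exactly; a search over $\lambda$ yields two optimal solutions of the
relaxation, one within and one exceeding the budget, which the patching step
combines into a single feasible, near-optimal solution.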
Stackelberg routing in arbitrary
networks
V. Bonifaci, T. Harks, and G. Schaefer
Mathematics of Operations Research, 35(2):1--17,
2010
Preliminary version in WINE 2008
[ doi ]
[ abstract ]
We investigate the impact of Stackelberg routing on reducing the price of
anarchy in network routing games. In this setting, an $\alpha$ fraction of
the entire demand is first routed centrally according to a predefined
Stackelberg strategy, and the remaining demand is then routed selfishly
by (nonatomic) players. Although several advances have been made recently in
proving that Stackelberg routing can in fact significantly reduce the price
of anarchy for certain network topologies, the central question of whether
this holds true in general is still open. We answer this question negatively
by constructing a family of single-commodity instances such that every
Stackelberg strategy induces a price of anarchy that grows linearly with the
size of the network. Moreover, we prove upper bounds on the price of anarchy
of the Largest-Latency-First (LLF) strategy that only depend on the size of
the network. Among other implications, this rules out the possibility of
constructing constant-size networks to prove an unbounded price of anarchy. In
light of this negative result, we consider bicriteria bounds. We develop an
efficiently computable Stackelberg strategy that induces a flow whose cost is
at most the cost of an optimal flow with respect to demands scaled by a
factor of $1 + \sqrt{1-\alpha}$. Finally, we analyze the effectiveness of an
easy-to-implement Stackelberg strategy, called SCALE. We prove bounds for a
general class of latency functions that includes polynomial latency functions
as a special case. Our analysis is based on an approach which is simple, yet
powerful enough to obtain (almost) tight bounds for SCALE in general
networks.
An approximation algorithm for the wireless
gathering problem
V. Bonifaci, P. Korteweg, A. Marchetti-Spaccamela, and L. Stougie
Operations Research Letters, 36(5):605--608, 2008
Preliminary version in SWAT 2006
[ doi ]
[ abstract ]
The Wireless Gathering Problem is to find an interference-free schedule for
data gathering in a wireless network in minimum time. We present a
4-approximate polynomial-time on-line algorithm for this NP-hard problem. We
show that no shortest path following algorithm can have an approximation
ratio better than 4.
The complexity of uniform Nash equilibria and
related regular subgraph problems
V. Bonifaci, U. Di Iorio, and L. Laura
Theoretical Computer Science, 401(1--3):144--152,
2008
Preliminary version in WINE 2005, FCT 2005
[ doi ]
[ abstract ]
We investigate the complexity of finding Nash equilibria in which the strategy
of each player is uniform on its support set. We show that, even for a
restricted class of win-lose bimatrix games, deciding the existence of such
uniform equilibria is an NP-complete problem. Our proof is graph-theoretical.
Motivated by this result, we also give NP-completeness results for the
problems of finding regular induced subgraphs of large size or regularity,
which can be of independent interest.
The on-line prize-collecting traveling salesman
problem
G. Ausiello, V. Bonifaci, and L. Laura
Information Processing Letters, 107(6):199--204,
2008
[ doi ]
[ abstract ]
We study the online version of the Prize-Collecting Traveling Salesman Problem
(PCTSP), a generalization of the Traveling Salesman Problem (TSP). In the
TSP, the salesman has to visit a set of cities while minimizing the length of
the overall tour. In the PCTSP, each city has a given weight and penalty, and
the goal is to collect a given quota of the weights of the cities while
minimizing the length of the tour plus the penalties of the cities not in the
tour. In the online version, cities are disclosed over time. We give a
7/3-competitive algorithm for the problem, which compares with a lower bound
of 2 on the competitive ratio of any deterministic algorithm. We also show
how our approach can be combined with an approximation algorithm in order to
obtain an O(1)-competitive algorithm that runs in polynomial time.
On the power of lookahead in on-line server
routing problems
L. Allulli, G. Ausiello, V. Bonifaci, and L. Laura
Theoretical Computer Science, 408(2--3):116--128,
2008
[ doi ]
[ abstract ]
We study the usefulness of lookahead in on-line server routing problems: if an
on-line algorithm is not only informed about the requests released so far,
but also has a limited ability to foresee future requests, what is the
improvement that can be achieved in terms of the competitive ratio? We
consider several on-line server routing problems in this setting, such as the
on-line traveling salesman and the on-line traveling repairman problem. We
show that the influence of lookahead can change considerably depending on the
particular objective function and metric space considered.
Online k-server routing problems
V. Bonifaci and L. Stougie
Theory of Computing Systems, 45(3):470--485, 2009
Preliminary version in WAOA 2006
[ doi ]
[ abstract ]
In an online k-server routing problem, a crew of k servers has to visit points
in a metric space as they arrive in real time. Possible objective functions
include minimizing the makespan (k-Traveling Salesman Problem) and minimizing
the sum of completion times (k-Traveling Repairman Problem). We give
competitive algorithms, resource augmentation results and lower bounds for
k-server routing problems in a wide class of metric spaces. In some cases the
competitive ratio is dramatically better than that of the corresponding
single server problem. Namely, we give a 1 + O((log k)/k)-competitive
algorithm for the k-Traveling Salesman Problem and the k-Traveling Repairman
Problem when the underlying metric space is the real line. We also prove that
a similar result cannot hold for the Euclidean plane.
The on-line asymmetric traveling salesman
problem
G. Ausiello, V. Bonifaci, and L. Laura
Journal of Discrete Algorithms, 6(2):290--298, 2008
Preliminary version in WADS 2005
[ doi ]
[ abstract ]
We consider two on-line versions of the asymmetric traveling salesman problem
with triangle inequality. For the homing version, in which the salesman is
required to return to the city where it started, we give a
$(3+\sqrt{5})/2$-competitive algorithm and prove that this is best possible. For
the nomadic version, the on-line analogue of the shortest asymmetric
Hamiltonian path problem, we show that the competitive ratio of any on-line
algorithm depends on the amount of asymmetry of the space in which the
salesman moves. We also give bounds on the competitive ratio of on-line
algorithms that are zealous, that is, in which the salesman cannot stay idle
when some city can be served.
An adversarial queueing model for online server
routing
V. Bonifaci
Theoretical Computer Science, 381(1--3):280--287,
2007
[ doi ]
[ abstract ]
In an online server routing problem, a vehicle or server moves in a network in
order to process incoming requests at the nodes. Online server routing
problems have been thoroughly studied using competitive analysis. We propose
a new model for online server routing, based on adversarial queueing theory.
The model addresses questions such as whether a server routing algorithm is
stable, that is, whether it is such that the number of unserved requests in
the system remains bounded at all times, assuming a bound on the global rate
of requests arrival. This captures a notion of throughput for which
competitive analysis typically does not give any useful result. In this
framework, we consider a number of natural algorithms and we analyze their
stability and performance.
A Java-based system for building animated
presentations over the Web
V. Bonifaci, C. Demetrescu, I. Finocchi, and L. Laura
Science of Computer Programming, 53(1):37--49, 2004
[ doi ]
[ abstract ]
We describe Leonardo Web, a collection of tools for building animated
presentations that can be useful for teaching, disseminating, and e-learning.
Presentations can be created via the combined use of a visual editor and a
Java library. The library makes it possible to generate animations in a batch fashion
directly from Java code according to an imperative specification style.
Batch-generated animations can then be refined and customized using the
editor. Presentations can be finally viewed with a simple Java player, which
ships both as a stand-alone application for off-line deployment and as a Java
applet embedded in a Web page. The player supports step-by-step and
continuous execution, reversibility, speed selection, and smooth animation.