
Abstract

This paper is intended to serve as an overview of, and mostly a tutorial to illustrate, the optimization techniques used in several key design aspects that have been considered in the literature of wireless sensor networks (WSNs). It targets researchers who are new to the mathematical optimization tool and wish to apply it to WSN design problems. We hence divide the paper into two main parts. One part is dedicated to introducing optimization theory and giving an overview of some of its techniques that could be helpful in design problems in WSNs. In the second part, we present a number of design aspects that we came across in the WSN literature in which mathematical optimization methods have been used in the design. For each design aspect, a key paper is selected, and for each we explain the formulation techniques and the solution methods implemented. We also provide in-depth analyses and assessments of the problem formulations, the corresponding solution techniques and the experimental procedures in some of these papers. The analyses and assessments, which are provided in the form of comments, are meant to reflect the points that we believe should be taken into account when using optimization as a tool for design purposes.

Keywords: wireless sensor networks, design, optimization, tutorial

Part I
Introduction

As technology evolves, the wireless sensors manufactured become technically more powerful and economically viable. In a wireless sensor network (WSN), each node basically consists of units for sensing, processing, radio transmission, position finding and sometimes mobilizers [1]. These sensors measure desired conditions in their surroundings and digitize them, processing the received signals to reveal characteristics of the conditions in the surrounding area. A large number of these sensors can be networked in many applications that require unattended operation, hence producing a WSN. In general, the sensor nodes in a WSN sense and gather data from the surrounding environment and transmit it to one or more sinks for more intensive processing. The number of applications for WSNs is large; many of them are in the fields of weather monitoring, surveillance, health care, detecting ambient conditions like temperature, sound and light, security-related aspects, etc. More fields are deploying WSNs as their reliability, performance and capabilities keep improving.

In many applications, replacement of damaged or energy-depleted nodes is not possible. Moreover, planned node placement may not be possible either. Therefore, two of the main requirements for WSNs to operate reliably are to consume the minimum amount of energy, to prolong the network's lifetime, and to be able to self-organize when the network topology changes. Other requirements (e.g., limited delay, good signal-to-noise ratio, etc.) are usually application specific. Moreover, there are differences in the nature of WSNs. For example, there could be WSNs with either rechargeable or non-rechargeable sensor batteries, and either single-sink or multiple-sink WSNs, whose sinks could be immobile or mobile. Depending on these different variants of WSNs, the different types of applications and the traffic types they handle, different design considerations will need to be taken into account. An acoustic WSN, for example, would need to consider propagation delays in any design aspect, which would most likely be ignored in radio-based WSNs. As another example, if voice or sound is to be sensed and transmitted for surveillance applications, then strict delay requirements for routing the packets will have to be enforced.

Besides the application types and the WSN variants, there are many aspects, such as routing, clustering, accurately estimating sensed data, visual target tracking, etc., in which optimal design is needed. In all these design problems, energy is a common limitation for the reasons mentioned earlier. For this paper, we selected key optimization aspects of design problems relating to WSNs, carried out a thorough review of the selected major papers and used that opportunity to present a tutorial on the subject.

The papers selected for this review and tutorial are based on how well they cover the topics of interest and the degree to which they highlight the key design issues for WSNs. In part III of this paper, we expose the reader to the following design problems in WSNs:

  1. routing for multi-hop WSNs with a single immobile sink [2],

  2. routing in a delay-tolerant WSN with a single mobile sink [3],

  3. joint routing, power and bandwidth allocation in FDMA WSNs [4],

  4. joint energy allocation and routing in WSNs with rechargeable batteries [5],

  5. routing in multi-hop single fixed sink with different objectives under distance uncertainties [6],

  6. joint routing and scheduling in WSNs with multiple sinks with different sink location possibilities [7],

  7. delay sensitive routing for underwater WSNs [8],

  8. using mobile radio frequency (RF) power charger to charge the batteries of sensors in a WSN [9],

  9. assignment of data processing tasks across the nodes in WSN [10],

  10. hierarchical clustering in a heterogeneous network [11],

  11. energy efficient co-operative broadcasting at the symbol level [12],

  12. dispatching of mobile sensor nodes in a WSN to sense a particular region of interest [13],

  13. fusion of delay sensitive noise perturbed data sensed by different nodes in a given cluster [14],

  14. energy optimization in wireless visual sensor networks while maintaining image quality [15].

As the taxonomy in Figure 1 shows, there are a number of different types of design problems that involve routing as a common key design aspect, possibly jointly with others like power allocation or scheduling in a subset of them. Many papers in the WSN literature have considered routing as a design aspect, and it is still being considered in recent research works like [16,17,18]. For this reason, in part II of this paper, we focus specifically on linear network problems and their algorithms, which deal with routing of network flows. Figure 1 shows that, other than routing in different variants of WSNs, we have also selected topics that are not necessarily connected, like RF power charging, visual tracking, etc. The main aim here is to illustrate and discuss how optimization is used in a wide range of applications, topologies (single sink, multiple sinks, mobile sinks, etc.) and design aspects of WSNs. There are certainly many more that readers can come across in the literature.

Figure 1

Taxonomy of the wireless sensor network (WSN) design problems considered in this paper.

This paper extends our recently published conference paper [19], which was a shorter version that considered only the 7 routing problems among the above 14 problems. The main approach we take is to identify how each design problem is formulated into an optimization problem. Some of the problems were later reformulated by their authors to make the solution methods more efficient. We discuss both the first formulations and the reformulations, and provide comments about both cases for each problem. We then discuss the advantages and disadvantages associated with each formulation, followed by the solution methods. While we give clear scientific reasons and arguments to support our views about each formulation, most of the comments are based on our opinions and strongly influenced by our understanding of the models as presented by the authors of the papers reviewed. Our goal is to present a tutorial on the subject.

Optimization techniques, which have been in the operations research (OR) literature for almost a century, provide a rich reservoir of different types and classes of optimization problems that have been studied extensively. For these, different solution techniques are available that have been developed over the years until reaching a mature level at which their computational and storage performance have been extensively tested and assessed. Among these are the different variants of Lagrangian relaxation [20], dual decomposition methods [21], column generation [22] and many others. We dedicate part II to explaining linear optimization, since this was the first type of optimization to appear, and many specially structured linear programs have been studied for which efficient algorithms have been developed. Moreover, linear optimization forms the basis of non-linear optimization and other advanced optimization types like mixed integer problems, and a large number of WSN problems can be formulated as linear optimization problems.

In the linear optimization part, we explain classical specially structured problems like the minimum spanning tree and network flow problems, as these seem to us to be highly useful when it comes to WSNs. We give our own examples of how these classical linear specially structured problems can be used to model, and hence solve, design problems in WSNs. Nonlinear programming also receives attention in this tutorial paper: we focus on its theoretical concepts and present some of the different methods available in the OR literature. Finally, for part II of the paper, we discuss decomposition solution schemes, which are suitable for linear or non-linear optimization problems. Decomposition schemes are attractive for WSNs, which are mainly of an ad-hoc nature (i.e., there is no central controller responsible for managing the resources for communication purposes), and have been used in some of the papers we consider in this tutorial. Being able to decompose the optimization problem into a number of smaller ones that can be solved separately by each sensor node is highly desirable to reduce the computational effort compared to centrally solving the problem. Since we target researchers who are new to using optimization techniques, we strongly believe that part II of this paper is necessary to understand some of the techniques used in part III.

Distributed algorithms are highly desirable, since a sensor node has limited processing capabilities, and hence using one node to solve the problem would consume too much processing energy. Moreover, after the problem is solved, the components of the solution specific to each node need to be transmitted to the respective nodes. In a centralized scheme, these transmissions will be from the solving node to all the nodes in the network, while in a distributed scheme they will only be to a subset of nodes (usually the neighbors only). Therefore, the communication overhead of distributed algorithms is lower.

We believe that in order to exploit the full strength of the extensive tools in the optimization literature, a good formulation is necessary to reduce the design problem to one of the classical optimization problems for which those well-studied solution techniques can be used. In this paper, such techniques are illustrated for the different design problems in WSNs mentioned earlier, for each of which we have selected a paper. One section is dedicated to explaining each design problem, further divided into subsections that emphasize the following:

  • the system model and design objectives,

  • problem formulation,

  • any reformulation methods,

  • solution methods,

  • any important results,

  • our comments on some or all of the above.

The purpose of our observations and comments on some of the problems is to shed some light on the points that we believe should be taken into account in the future when using optimization techniques for solving WSN design problems. As much as possible, we used notation and definitions similar to those in the corresponding papers, to make it easy for readers to relate to them if they wish to go back to them for more details.

Part II
Overview on Mathematical Optimization

This part of the paper covers the fundamentals of optimization techniques required for being able to use them in any WSN design, as part III shows. The purpose of this part (part II) is to give a basic tutorial on optimization techniques. Section 1 provides a brief overview of the history of operations research. Section 2 provides an introduction to optimization and the different classes of optimization problems. Section 3 focuses on linear programming, which is the first type of optimization that was considered in the optimization literature and whose concepts form the foundations of more advanced optimization problems. The simplex algorithm, linear duality, Karmarkar's algorithm and the dual simplex, as well as a discussion on the computational efficiency of the simplex and Karmarkar's algorithms, are all provided in Section 3.

Section 4 discusses a special class of linear programs that is very important for WSNs: the network flow problems and the minimum spanning tree (MST) problem. Section 4 also presents examples of how these types of problems and their extremely efficient algorithms can be used in WSN design problems. Section 5 provides an introduction to non-linear programming (NLP), which is a vast field of optimization problems. We discuss fundamental concepts like Lagrange functions and multipliers, KKT conditions, convexity, duality in NLPs, etc. These form the basis of any algorithm designed to solve an NLP. A taxonomy of the different solution methods used in NLP is presented in a summarizing figure; the details of each are beyond the scope of this paper.

Finally, Section 6 presents primal and dual decomposition schemes for solving optimization problems, which enable us to break up problems that are partially decomposable except for a subset of complicating constraints or variables. Hence, a large problem can be broken down into a number of smaller subproblems and a master problem that can be solved separately, in an iterative fashion, in a distributed scheme. Distributing the computations for the optimization problem across a large number of sensors alleviates the computational effort (and corresponding battery energy) that a single sensor node would expend in a centralized scheme.
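The master/subproblem iteration just described can be sketched with a toy example. The problem data below are invented purely for illustration: two nodes minimize the sum of their quadratic costs subject to a single coupling constraint, and a dual price coordinates them through projected subgradient steps.

```python
# Dual decomposition sketch (toy data, for illustration only):
# minimize x1^2 + x2^2 subject to the coupling constraint x1 + x2 >= 4.
# For a given price lam, each node solves min_x x^2 - lam*x independently,
# which has the closed-form minimizer x = lam/2. A master update then
# adjusts lam in proportion to the violation of the coupling constraint.

def local_subproblem(lam):
    # minimizer of x^2 - lam*x over x
    return lam / 2.0

lam = 0.0
for _ in range(200):
    x1 = local_subproblem(lam)
    x2 = local_subproblem(lam)
    residual = 4.0 - (x1 + x2)            # coupling-constraint violation
    lam = max(0.0, lam + 0.1 * residual)  # projected subgradient step

print(round(x1, 4), round(x2, 4), round(lam, 4))  # -> 2.0 2.0 4.0
```

In a WSN setting, each subproblem could be solved by an individual sensor node, with only the scalar price and residual exchanged; this light communication pattern is what makes decomposition attractive for ad-hoc networks.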

1. The History of Optimization

The beginning of the science of optimization techniques, also known by the names Operations Research (OR) and Mathematical Programming, dates back to the early years of World War II. At that time, it was extremely necessary to allocate scarce resources to the different military operations, and to the activities within each operation, in an efficient manner. For that purpose, the British and then the U.S. military management asked a large number of scientists to apply a scientific approach. In other words, they were asked to do research on military operations [23]. These teams of scientists were the first OR teams. By developing effective methods of using the newly invented radar, these teams had an important role in winning the Air Battle of Britain. Through their research on how to better manage convoy and antisubmarine operations, similar efforts made a major contribution to winning the Battle of the North Atlantic.

When the war ended, the success of OR in the military field boosted interest in applying it to other fields too. As industries started to boom after the war, problems caused by the increasing complexity and specialization in organizations arose. These were basically the same problems as the ones that used to be considered by the military, but in the context of a different field. By the early 1950s, researchers had introduced the use of OR to a variety of organizations in business, industry, and government. The rapid spread of OR soon followed.

There were two key aspects that led to the development of optimization methods [23]:

  • Great progress was made early on in improving the techniques of OR. After the war, many of the researchers who had participated in OR teams, or who had heard about this work, were motivated to continue research in that direction, which led to advancements in the state of the art. A leading example is the simplex method for solving linear programming problems, developed by George Dantzig in 1947.

  • The computer revolution, i.e., the development of electronic computers. A large amount of computation is usually required to deal most effectively with the complex problems typically considered by OR. Doing this by hand would be impossible, besides the fact that in most practical problems, deriving a closed-form expression for the solution is not possible either.

2. Introduction to Mathematical Programming

In a mathematical programming problem, the decision maker wishes to choose decision variables to maximize or minimize an objective function, subject to the requirement that the decision variables satisfy certain constraints. In the case of WSNs, the decision maker can be one or more sensors, e.g., cluster-head sensor nodes. The objective function is basically a design criterion that we are trying to achieve, e.g., maximum throughput, minimum delay, maximum energy efficiency, etc. The constraints can be physical restrictions like the amount of available battery power, a subchannel capacity or a maximum allowable transmit duration.

There are four broad categories of optimization problems, where for each category, an extensive literature of different methods to solve certain sub categories have been proposed. These categories are:

  • Linear Programming (LP): This is the first category of optimization problems that was considered by early scientists and researchers in the OR field. Basically, the decision variables are continuous, and the objective function and all the constraints are linear in the decision variables. We recommend the references [23,24,25] as an introduction with many solved examples for LPs.

  • Non-Linear Programming (NLP): In this category, we only have continuous variables, but either the objective function or at least one of the constraints is non-linear in the decision variables. Good references are the textbook by the convex optimization pioneers S. Boyd and L. Vandenberghe [26] and Boyd's notes for his Convex Optimization I (EE364a) and Convex Optimization II (EE364b) courses at Stanford University [27,28].

  • Mixed Integer Linear Programs (MILP): This is a linear program in which a subset of the decision variables are restricted to take only integer values. Again, the three textbooks in [23,24,25] are a good reference for an introduction to this type of optimization problem.

  • Mixed Integer Non-Linear Programs (MINLP): This is a non-linear program in which a subset of the decision variables must take integer values only. A good reference for this type of problem is the monograph in [29], which is a compilation of key MINLP papers published in a number of strong journals in the field of Operations Research. Another is the textbook by the global optimization pioneer C.A. Floudas [30].

In all of the above broad classes of optimization problems, the decision variables should completely describe the decisions to be made. In the case of WSNs in particular, the following are examples of what decision variables can be used to model:

  • The amount of radio power a sensor should transmit at a particular time slot (power management); this is usually modelled using continuous variables.

  • The time instants and/or durations for each sensor node’s transmission (scheduling). Both continuous and integer variables have been used for that purpose.

  • The amount of spectrum bandwidth to be used by each sensor node for transmission. Both continuous and integer variables have been used for that purpose.

  • The set of links to be used in a WSN. Binary variables have been used to model whether a link would be used or not.

  • The modulation scheme and channel coding rate each node can use on any of its links. Integer variables are mostly a suitable choice for modeling this aspect.

  • The data flow rate in bps per link, usually modelled using continuous variables.
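To make the mapping from such decisions to a formal program concrete, the following sketch formulates a deliberately simple, hypothetical routing decision as an LP: a node must forward 10 units of data to a sink over two candidate links with different energy costs and capacities. All numbers are invented for illustration.

```python
# Hypothetical example: choose continuous flow variables f1, f2 (data units
# routed over links 1 and 2) to minimize total transmit energy.
from scipy.optimize import linprog

c = [2, 5]                    # energy cost per data unit on each link
A_eq = [[1, 1]]               # flow conservation: f1 + f2 ...
b_eq = [10]                   # ... must carry all 10 data units
bounds = [(0, 6), (0, 8)]     # link capacity limits

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
print(res.x, res.fun)  # -> [6. 4.] 32.0: fill the cheaper link first
```

Here the flows are the continuous decision variables, the total energy is the objective, and conservation plus capacities are the constraints, exactly the three ingredients described above.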

3. Linear Programming Problems

We strongly believe that it is important to understand linear programming very well if a researcher intends to use optimization as a research tool in WSNs or in any design problem. Concepts in linear programming like linear duality, sensitivity analysis, complementary slackness and shadow prices are needed to understand NLPs and other advanced optimization techniques. Therefore, to have a solid understanding of NLP, a researcher who would like to lay their own foundation in using the tool is strongly advised not to bypass LP.

Linear Programs (LPs) were the first type of optimization problems to be considered and, as mentioned earlier, they date back to the days of World War II. It was for that class of problems that George Dantzig proposed the simplex method to find the optimal solution efficiently. Basically, any LP can be represented in the form:

max cᵀx

(1a)

s.t.

Ax ≤ b,

(1b)

xLB ≤ x ≤ xUB

(1c)

where x ∈ ℝn is the vector of n decision variables, c ∈ ℝn is the vector of costs per unit of the corresponding decision variables, A ∈ ℝm×n is the coefficient matrix of all the constraints, b ∈ ℝm represents the vector of values a constraint must not exceed, xLB and xUB are vectors that hold the lower and upper bounds on each decision variable in x.

Generally speaking, the constraint set given by Equation (1b) either:

  1. geometrically forms a polyhedron P containing infinitely many feasible solutions in the space of the decision variables, i.e., P ⊂ ℝn, or

  2. has no solution at all, in which case we say the LP is infeasible.

An optimal solution of a feasible bounded LP is known to occur at a vertex of the polyhedron representing the feasible region. We can have infinitely many optimal solutions when, for example, two vertices yield optimal solutions; then all points falling on the edge connecting them return the same value of the objective function, and hence all are optimal. The reader would probably guess that a key idea for devising an algorithm that solves general LPs is to move around the vertices of the polyhedron of feasible solutions. In fact, that is a correct guess, as the classic simplex algorithm relies at its core on this idea.
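Before turning to the algorithms themselves, it may help to see an LP in the form of Equation (1) handed to an off-the-shelf solver. The sketch below uses SciPy's linprog with arbitrary placeholder data; since linprog minimizes, the maximization objective is negated.

```python
# Solving an LP in the form of Equation (1) with SciPy (placeholder data).
import numpy as np
from scipy.optimize import linprog

c = np.array([1.0, 2.0])             # objective coefficients (to maximize)
A = np.array([[1.0, 1.0],
              [3.0, 1.0]])           # constraint matrix A
b = np.array([4.0, 6.0])             # right-hand sides: A x <= b
bounds = [(0.0, 3.0), (0.0, 3.0)]    # xLB <= x <= xUB, per variable

# linprog minimizes, so pass -c and negate the optimal value afterwards.
res = linprog(-c, A_ub=A, b_ub=b, bounds=bounds, method="highs")
print(res.status, -res.fun, res.x)  # status 0 = optimal; value 7 at x = (1, 3)
```

Consistent with the vertex result above, the reported optimum x = (1, 3) sits at a corner of the feasible polyhedron, where the bound x2 ≤ 3 and the constraints intersect.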

3.1. Solving LPs

There are two main methods for solving generic LPs (i.e., LPs whose coefficient matrix has low sparsity). The first, invented in 1947, was the simplex algorithm, along with its later variants like the dual simplex and the revised simplex. The second is Karmarkar's algorithm, which falls under the category of interior point methods [31] and appeared in the 1980s. Both are used by today's solvers, such as CPLEX, the optimization package developed by IBM.

3.1.1. Simplex Method

The simplex algorithm first requires the LP in Equation (1) to be reformulated into the standard form before it can be applied. In optimization solvers, this is done automatically at a preprocessing stage, provided you supply the problem to the solver in the form of Equation (1). Basically, the standard form of any LP requires representing all inequality constraints as equality constraints through the introduction of nonnegative slack and excess variables. Any unrestricted variable xi gets replaced by two nonnegative variables xi′ and xi″ such that xi = xi′ − xi″. A simplex tableau is first constructed [24]; for the optimization problem in Equation (2), the initial tableau is given by Table 1.

max z = 60x1 + 30x2 + 20x3

(2a)

s.t.

4x1 + 2x2 + 1.5x3 ≤ 20

(2c)

2x1 + 1.5x2 + 0.5x3 ≤ 8

(2d)

x2 ≤ 5; x1, x2, x3 ≥ 0

(2e)

Table 1

Example of a simplex tableau.
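The conversion to standard form can be sketched mechanically. Below, the two resource constraints of Equation (2), as printed above, are augmented with one slack column per ≤ constraint; purely for illustration, a hypothetically unrestricted variable is also split into a difference of two nonnegative ones.

```python
# Converting inequality constraints A x <= b into equality form
# [A | I] [x; s] = b with nonnegative slack variables s.
import numpy as np

A = np.array([[4.0, 2.0, 1.5],     # constraint (2c)
              [2.0, 1.5, 0.5]])    # constraint (2d)
b = np.array([20.0, 8.0])

m, n = A.shape
A_std = np.hstack([A, np.eye(m)])  # one slack column per <= constraint
# A_std @ [x, s] = b, with all n + m variables required to be >= 0.

# If x1 were unrestricted in sign, it would be replaced by x1' - x1'',
# i.e., its column would be duplicated with the opposite sign:
A_split = np.hstack([A_std, -A_std[:, [0]]])
print(A_std.shape, A_split.shape)  # -> (2, 5) (2, 6)
```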

Then the simplex algorithm needs a basic feasible solution (bfs) to start from, in which all the variables in the standard form LP are non-negative. In case the LP has only (≤) constraints, the slack variables become the set of basic variables and are assigned initial values equal to the right-hand side values of the corresponding constraints, as in Table 1. The rest of the decision variables are initially in the set of non-basic variables, i.e., they take values of zero. If there are any (≥) constraints, then two methods that can be used to find a starting bfs are the Big-M method and the Two-Phase Simplex method [23,24,25].

The simplex algorithm utilizes the Gauss-Jordan algorithm, which originally solves a system of linear equations. Since the system of linear equations of an LP in standard form has many solutions, the simplex method uses the Gauss-Jordan algorithm to move across adjacent bfss as long as the objective function value (z in the tableau) improves. In each iteration, elementary row operations (eros) are performed to obtain the improving bfs. If, in a given iteration, the coefficients of all non-basic variables in the objective function row of the tableau (usually row 0) are nonnegative (assuming we are considering a maximization problem), then the current bfs is optimal. If any variables in row 0 have negative coefficients, the simplex algorithm chooses the variable with the most negative coefficient in row 0 to enter the basis (this variable is called the entering variable), and eros make it a basic variable. The variable that leaves the basis to become non-basic, and the new value of the entering variable, are decided by a ratio test [24].
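As a numerical sanity check, the example LP of Equation (2), using the constraints as printed, can be solved directly with SciPy; since linprog minimizes, the objective is negated.

```python
# Solving the example LP of Equation (2) numerically (constraints as printed).
from scipy.optimize import linprog

c = [-60, -30, -20]          # negated objective: max 60x1 + 30x2 + 20x3
A_ub = [[4, 2, 1.5],         # (2c)
        [2, 1.5, 0.5],       # (2d)
        [0, 1, 0]]           # x2 <= 5 from (2e)
b_ub = [20, 8, 5]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 3, method="highs")
print(-res.fun, res.x)  # -> 280.0 at x = (2, 0, 8)
```

At this optimum the x2 ≤ 5 constraint is slack while the two resource constraints are binding, which matches the tableau reasoning above: x2 never enters the basis.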

3.1.2. Karmarkar’s Algorithm

This method requires putting the LP to be solved in the following form:

min cᵀx

(3a)

s.t.

Ax = 0,

(3b)

1ᵀx = 1, x ≥ 0

(3c)

The starting point x0 is required to be feasible, and the value of the objective function at an optimal solution is required to be zero. Karmarkar's method uses a transformation from projective geometry to create the transformed variable vector y. The transformation, given by a function f(x) = y, always projects the current point into the center of the feasible region defined in terms of y. The algorithm starts in the transformed space by moving from f(x0) in a direction that decreases the objective function value while maintaining feasibility, obtaining a point y1 that is close to the boundary of the feasible region. The new point satisfies f(x1) = y1. This repeats until the objective function value of Equation (3) at xk is close enough to 0. In the transformed space, the algorithm always moves away from the center of the feasible region.

3.1.3. On the Efficiency of Simplex and Karmarkar’s Algorithms

Efficiency is measured in terms of the computational time required by a CPU to obtain an optimal solution. Theoretically, one of the measures for this is the worst-case complexity, usually denoted by the big 'O', which is a function describing how the number of required operations (or iterations) grows with the size of the problem. The size of the problem is described by the number of decision variables n and the number of constraints m.

Theoretically, the worst-case number of iterations required by the simplex algorithm is bounded by the number of possible bfss, which is C(n, m) = n!/(m!(n − m)!). This can be really big for a large problem. Fortunately, practical experience with the simplex method indicates that the optimal solution can usually be found in fewer than 3m bfss. Karmarkar's algorithm, on the other hand, is a polynomial-time algorithm, which means that for an LP of size n, there exist positive numbers a and b such that it can be solved in a time of at most a·nᵇ.

We need to always keep in mind that the theoretical complexity is an upper-bound function (since it is for the worst case) of the solution time required versus the problem size, and it can be a loose bound. Therefore, for an exponential-time algorithm, it may not be a reliable prediction of how the algorithm will perform in practice. For the theoretical time complexity of an algorithm to be reliable, especially if its worst case is exponential, we would need a rigorous analysis of the algorithm as a stochastic process to derive at least the statistical average and variance of its running time. However, this is something we have not come across in the optimization literature, which may indicate that it is not commonly done. Therefore, in our opinion, extensive numerical experiments are needed to evaluate the computational efficiency of an algorithm whose theoretical worst-case complexity is exponential.
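Such numerical experiments can start from something as small as the sketch below, which compares the two algorithm families on one randomly generated dense LP (sizes and data are arbitrary) and checks that they agree on the optimal value; a real study would vary the problem size and record running times over many instances.

```python
# Comparing HiGHS dual simplex vs. HiGHS interior point on a random LP.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
m, n = 40, 60
A = rng.uniform(0.1, 1.0, size=(m, n))
b = A.sum(axis=1)                      # x = 1 is feasible by construction
c = rng.uniform(-1.0, 1.0, size=n)

vals = {}
for method in ("highs-ds", "highs-ipm"):   # dual simplex / interior point
    res = linprog(c, A_ub=A, b_ub=b, bounds=[(0.0, 10.0)] * n, method=method)
    vals[method] = res.fun

# Both families should reach the same optimal value to solver tolerance.
print(vals)
```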

3.2. Linear Duality

Associated with any LP is another LP called its dual. Knowing how the two are related is very important for understanding advanced linear programming and non-linear programming. It also gives insight into another important topic, sensitivity analysis, which studies the effect of changing some of the costs in the objective function, or the bound of a constraint, on the feasibility and optimality of an obtained optimal solution. Sensitivity analysis is extremely important when the parameters of an LP have non-zero uncertainty. In the field of WSNs, this can happen in the measurements of the channel coefficients, which could have some degree of error.

There is a dual variable (or dual price) associated with each constraint in the primal problem. It represents the objective function improvement obtained by relaxing a binding constraint by one unit. A dual variable can be nonzero only when its constraint is binding, and it is zero otherwise. This is known as the complementary slackness property connecting the primal and dual problems. The units of the dual prices are the units of the objective function divided by the units of the constraint. Knowing the units of the dual prices can be useful when trying to interpret what they mean.

The dual of the primal LP given in Equation (1) (with the minor modification of having x ≥ 0) is:

min bᵀy

s.t.

Aᵀy ≥ c,

y ≥ 0,

where y ∈ ℝm is the vector of dual variables, one for each of the m constraints in Ax ≤ b.
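As a numerical illustration of linear duality, one can solve a primal LP of the form max cᵀx subject to Ax ≤ b, x ≥ 0, together with its dual min bᵀy subject to Aᵀy ≥ c, y ≥ 0, and verify that their optimal values coincide (strong duality). The data below reuse the constraints of the example in Equation (2).

```python
# Strong duality check: primal max c^T x s.t. Ax <= b, x >= 0
# versus dual min b^T y s.t. A^T y >= c, y >= 0.
import numpy as np
from scipy.optimize import linprog

c = np.array([60.0, 30.0, 20.0])
A = np.array([[4.0, 2.0, 1.5],
              [2.0, 1.5, 0.5],
              [0.0, 1.0, 0.0]])
b = np.array([20.0, 8.0, 5.0])

# linprog minimizes and uses x >= 0 bounds by default.
primal = linprog(-c, A_ub=A, b_ub=b, method="highs")
dual = linprog(b, A_ub=-A.T, b_ub=-c, method="highs")   # A^T y >= c flipped

print(-primal.fun, dual.fun)  # both equal 280.0: strong duality holds
```

In line with complementary slackness, the binding resource constraints of this example carry nonzero dual prices, while the slack x2 ≤ 5 constraint gets a zero price.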
