## Optimal Control of Constrained Piecewise Affine Systems



The procedure of this on-line optimal control technique is summarized in the following algorithm. Algorithm 3. In contrast, fast systems that need to be sampled and controlled in the range of milli- or microseconds limit the complexity of the overall optimization problem in Step 2 of Algorithm 3. This usually leads either to a setup of the optimization problem where only a very short prediction horizon can be considered, to a simpler prediction model being taken into account, or, in the worst case, to on-line receding horizon control not being applicable to the particular system at all.

The solution to the resulting multi-parametric optimization program, cf. Section 1. Coloring corresponds to Figure 3. The state feedback control law 3. See Figure 3. In order to evaluate the control input one needs to identify the state space region Pi in which the measured state x(t) lies at the sampling instant t, i.e. to solve the point-location problem.
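The point-location step just described can be sketched as a sequential scan over the polyhedral regions of the closed-form solution. The following minimal illustration assumes regions are stored as half-space pairs (H, k) with Pi = {x : H x <= k}; all region data, gains, and the tolerance are hypothetical, not taken from the text:

```python
import numpy as np

def locate_region(x, regions):
    """Sequentially scan polyhedral regions P_i = {x : H_i x <= k_i}
    and return the index of the first region containing x (or None)."""
    for i, (H, k) in enumerate(regions):
        if np.all(H @ x <= k + 1e-9):  # small tolerance for boundary points
            return i
    return None

def evaluate_control(x, regions, gains):
    """Evaluate the piecewise affine feedback u = F_i x + g_i on region P_i."""
    i = locate_region(x, regions)
    if i is None:
        raise ValueError("state outside the feasible set")
    F, g = gains[i]
    return F @ x + g
```

The scan is linear in the number of regions, which motivates the accelerated search structures discussed later in the manuscript.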

The second step in Algorithm 3. Therefore, the optimal lookup table becomes time-varying and thus, in all its generality, might change with time t. For this reason, some advantages and disadvantages of this approach should be mentioned. Advantages: high sampling rates.

For the point-location search, however, this can be guaranteed directly. Cheap implementation. Once the closed-form optimal solution is computed, it can, e.g. Parts of this list are treated in the following chapters. However, most if not all control problems seem to be far less complex, and the worst-case exponential growth scenario is a very conservative upper bound on the overall complexity of the problem that does not seem to appear in practice.

Solution complexity. As the optimization complexity grows, so typically does the solution complexity, i.e. the number of regions of the closed-form solution. Many nice properties of the above optimization problem 3. Among others: the solution is unique; it is similar to the well-known linear quadratic regulator; the controlled system behaves smoothly close to the reference point; and the controller acts strongly if the system is far from the target point.

Compared to the quadratic cost function, the closed-loop system evolution typically results in a slower control action if the system is far from the reference point and becomes very active, e.g. These optimal control problems can most often be solved using linear programs. For instance, in order to avoid outliers of a signal, the choice of a quadratic cost function results in a simple smoothing of the outliers but not in their avoidance. Furthermore, he demonstrates in his example that the set in the (T, R)-parameter space for which the closed-loop controlled system is stable is non-convex and disconnected.

The well known survey paper [MRRS00] is an often used reference for this matter. An exception is the overview by Lazar et al. The result is stated here for completeness without proof. Consider the optimization problem 3. Moreover, the optimization problem 3. Simply speaking, Theorem 3. On the other hand, Theorem 3. The closed-form solution to receding horizon control, mentioned in Section 3. This is partially treated in Part III. This means that small perturbations can keep the closed-loop controlled system inside the feasible set X T but simultaneously far away from the equilibrium point.

A necessary condition for zero robustness in receding horizon control is that the value function is discontinuous at some point in the interior of XT. Although hybrid systems, and in particular PWA systems, are a special class of nonlinear systems, most of nonlinear system and control theory does not apply because it requires certain smoothness assumptions. For the same reason one also cannot simply use linear control theory in some approximate manner to design controllers for PWA systems.

Remark 4. It should be remarked that most algorithms presented in the following also apply to general discontinuous PWA systems and often give reasonable results. The d-th mode, also called the d-th dynamic, of the system is denoted by d. Henceforth, for brevity, as the system evolves with time, d(t) will simply be denoted by d. The following is a standing assumption throughout Part II of the manuscript. Assumption 4. The origin in the extended state-input space is an equilibrium point of the PWA system 4. For simplicity, only a cost that penalizes the deviation of the state and control action from the origin equilibrium point in the extended state-input space is considered.
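To fix ideas, a mode-switched update of this kind can be sketched as follows. The two-mode system below is entirely hypothetical (the matrices, the sign-based guard, and the zero affine terms are illustrative choices that satisfy the standing origin-equilibrium assumption, not data from the text):

```python
import numpy as np

# Hypothetical two-mode PWA system x(t+1) = A_d x(t) + B_d u(t) + f_d,
# where the active mode d(t) is selected by a guard on the state; here
# the guard is simply the sign of the first state component.
A = [np.array([[0.9, 0.1], [0.0, 0.8]]), np.array([[0.7, -0.1], [0.1, 0.9]])]
B = [np.array([[0.0], [1.0]]), np.array([[0.0], [1.0]])]
f = [np.zeros(2), np.zeros(2)]  # zero affine terms: origin is an equilibrium

def mode(x):
    """Return the active dynamic d(t) for the current state."""
    return 0 if x[0] >= 0 else 1

def step(x, u):
    """One step of the PWA dynamics under the active mode."""
    d = mode(x)
    return A[d] @ x + B[d] @ u + f[d]
```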

The optimization problem 1. A list of modeling, analysis, and synthesis tools for the class of hybrid systems is given e.g. The reader is referred to the respective papers, manuals, and websites for further detail. As mentioned above, we here consider the class of discrete-time linear hybrid systems. As recently shown by Borrelli et al. For the class of constrained PWA systems very few and restrictive stability criteria are known, e.g. Unfortunately, the technique presented in [GKBM04] introduces a certain level of suboptimality in the solution.

The algorithms combine a dynamic programming exploration strategy with a multi-parametric linear programming solver and basic polyhedral manipulation. In the case of the CITOC problem, the developed algorithm guarantees convergence to the solution of the Bellman equation if a bounded solution exists, which corresponds to the solution of the CITOC problem, and thus avoids potential pitfalls of other, more conservative approaches.

The problems considered in this part of the manuscript belong to the class of combinatorial problems, which in general have exponential worst-case complexity. Furthermore, unlike the mp-MILP approach, the algorithms implemented here can cope with moderate-size problems, cf. Section 5. The algorithm combines a dynamic programming strategy with a multi-parametric linear program solver.

In the rest of this part of the manuscript the focus lies on the following restriction of the very general CFTOC problem 6. Problem 6. Remark 6. Strictly speaking, one should formulate Problem 6. However, from a practical point of view, if we want to avoid unnecessary controller action while steering the state to the origin, the choice of a full column rank R is a necessity.

Theorem 6. A solution to the optimal control Problem 6.

## Objective function in optimization

Consider the optimal control Problem 6. Refer to Section 1. These issues can be remedied using an mp-LP algorithm based on lexicographic perturbation, as proposed in [Jon05, JKM05]. An appropriate modeling framework for such a class of systems is mixed logical dynamical (MLD) systems [BM99a], refer also to Chapter 4, where the switching behavior as well as the constraints of the system are modeled with inequality conditions. As mentioned above, Heemels et al. The intermediate solution gives an upper bound on the optimal cost.

In order to limit the exploration of the state space, which grows exponentially with the time horizon and the number of possible switching sequences, a branch-and-bound technique is applied. Figure 6. Properties of the solution are given in the following theorem, cf. The solution to the optimal control problem 6. On the other hand, Theorem 6. Since we know that both solutions must be identical (assuming that the optimizer is unique), this implies that there is a connection between the matrices Kk,i and Lk,i in 6.
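The branch-and-bound idea over switching sequences can be caricatured as follows. The per-mode stage-cost bounds and the incumbent upper bound below are made-up numbers; in the actual method each node would require solving an optimization subproblem, and the incumbent comes from an intermediate feasible solution:

```python
# Illustrative branch and bound over switching sequences (d_0, ..., d_{T-1}).
stage_cost = {0: 1.0, 1: 3.0}   # hypothetical per-mode cost lower bounds
T = 4
incumbent = 7.0                 # upper bound from an intermediate solution

def explore(seq, cost, best):
    """Depth-first exploration; returns the number of leaves evaluated."""
    if cost >= best[0]:          # bound: prune this branch
        return 0
    if len(seq) == T:            # leaf: full switching sequence evaluated
        best[0] = min(best[0], cost)
        return 1
    n = 0
    for d in (0, 1):
        n += explore(seq + (d,), cost + stage_cost[d], best)
    return n

best = [incumbent]
leaves = explore((), 0.0, best)  # only 1 of the 2**4 = 16 leaves is reached
```

Most of the exponentially many switching sequences are discarded by the bound before a full-horizon subproblem would ever have to be solved.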

Having this in mind, from this point onwards, when we speak of the solution to the CFTOC problem we consider the solution in the form given in Theorem 6. In Algorithm 6. As indicated before, by inspection of the DP problem 6. After that, by using polyhedral manipulation one has to compare all generated regions, check if they intersect, and remove the redundant ones, before storing a new partition that consists of Nk regions. Thus the polyhedral nature of the feasible state space partition is preserved in each iteration. Algorithm 6. Thus the comparison operation is also performed on the different value function expressions.

The success of one or the other approach is highly dependent on the number of dynamics ND, the prediction horizon T, and the geometrical structure of the problem in intermediate steps, and thus it is impossible to rigorously favor one algorithm over the other. Intermediate solutions: an important advantage of the dynamic programming approach, compared to the approach based on multi-parametric mixed-integer programming, shortly described in Section 6. Thus, after completing T dynamic programming iterations, the solutions to T different CFTOC problems with time horizons varying from 1 to T are simultaneously available and can be used for analysis and control purposes.
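The backward sweep and its by-product of intermediate solutions can be sketched on a grid. The following caricature replaces the exact multi-parametric LP machinery of the text with brute-force enumeration on a 1-D grid; the dynamics, the 1-norm stage costs, and the horizon are placeholders:

```python
import numpy as np

# Gridded caricature of the backward DP sweep: after T iterations the
# value functions J_1, ..., J_T (solutions for every intermediate
# horizon) are all available simultaneously.
xs = np.linspace(-1.0, 1.0, 41)     # state grid
us = np.linspace(-0.5, 0.5, 21)     # input grid
T = 5

def f(x, u):                        # illustrative two-mode PWA dynamics
    return 0.9 * x + u if x >= 0 else 0.7 * x + u

J = [np.zeros_like(xs)]             # J_0 = 0 (terminal cost)
for k in range(T):
    Jk = np.array([min(abs(x) + 0.1 * abs(u) +
                       np.interp(np.clip(f(x, u), -1, 1), xs, J[-1])
                       for u in us)
                   for x in xs])
    J.append(Jk)                    # keep every intermediate horizon
# J[k] approximates the horizon-k value function, k = 0, ..., T
```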

Same color corresponds to the same cost value. Such a case is e.g. Thirdly, due to the possible degeneracy of linear programs it is impossible to provide a unique solution in all cases. However, the obtained number of polyhedral regions in the DP approach for any tested computed solution was always smaller than in the mp-MILP approach. This is due to the recursive structure of the mp-MILP method, where unnecessary slicings of the state space are introduced.

Table 6. More examples are spread throughout the whole manuscript. Unnecessary slicing of the state space was produced by the recursive structure of the mp-MILP algorithm. In Table 6. Whereas the complexity 6. Section 6. The same color corresponds to the same cost.

For a 3-dimensional visualization of the value function see Figure 7. The minimum cost is naturally achieved at the origin. The force F(t) is assumed to be a linear function of the control action u(t), i.e. The coloring corresponds to the open-loop value function JRH. Trajectories indicate the stable closed-loop behavior. The coloring corresponds to the open-loop value function JRH and ranges from dark blue to dark red, where dark blue refers to the value zero, which is naturally obtained at the origin.

Sample trajectories show the stable closed-loop behavior.


The optimal control strategy for initial states in the intervals 2 to 4 with zero initial velocity drives the car to the left slope 4 in order to gain enough speed to cope with the steep slope 2. Chapter 4. Moreover, 6. For the restriction to a linear vector norm based stage cost one gets Problem 7. In order to guarantee closed-loop asymptotic stability, one needs to assume that Q is of full column rank, as will be shown in the following, cf. Lemma 7. Assumption 7. Problem 7. Figure 7. On the other hand, with Assumption 7. Then Lemma 7. It should be pointed out that Lemma 7.

Furthermore, the limit value function of the dynamic program 7. Consider the DP problem 7. It can be discontinuous otherwise. By a similar argument as in Lemma 7. The alternative proof is given in Appendix A. However, due to the analogy with Lemma 7. Moreover, confer also Remark 7. Tk denotes the k-fold composition of the operator T with itself, i.e. The DP procedure 7. According to [Ber01, Sec. Under Assumption 7. Theorem 7. Furthermore, this does not limit the practicality of the proposed algorithms, nor does it alter the properties or results in this chapter.

Remark 7. Due to the monotonicity property of the operator T, cf. By Lemma 7. Moreover, under Assumption 7. Let Assumption 7. The rest follows directly from part b of this theorem. Corollary 7. Let Jk , Xk be solutions to the DP procedure 7. Effectively, Theorem 7.

Here, by convergence it is meant that in two successive iterations of the dynamic program the value function, including its domain, does not change, i.e. Simultaneously a Lyapunov function, Jk, for the controlled system is obtained. Moreover, this result shows that if the value function iteration is started with some J0, confer e.g. Clearly, if one initializes from below with e.g.

Furthermore, analogously to Section 6. Algorithm 7. By inspection of the DP problem 7. After that, by using polyhedral manipulation one has to compare all generated regions, check if they intersect, and remove the redundant ones, before storing a new partition that has Nk regions.

Under Assumptions 4. Example 7. Thus, at the beginning, instead of focusing on the whole feasible state space and the whole domain of the system, one can limit the algorithm to a neighborhood of the origin, and only after attaining convergence does one proceed with the exploration of the rest of the state space. In this way, at each iteration step of the DP, one would on average have to solve a much smaller number of mp-LPs. Note that in general the core C0 is a non-convex polyhedral collection. Any positively invariant set is a valid candidate for the core C0, as long as an associated control strategy is feasible and steers the state to the origin.

The only prerequisite is that for any given initial feasible state, i.e. Thus, obtaining a good suboptimal solution for the core is desirable in order to limit the number of later DP iterations that improve this very part of the state space. In the second algorithm, Algorithm 7. As stated in Section 6.

Thus the polyhedral nature of the feasible state space partition is preserved in each iteration of Algorithm 7. Since we are restricting ourselves to the neighborhood of the origin, the number of regions at each step is likely to remain rather small. After constructing the initial core, C0 , one can proceed with the exploration of the rest of the state space as described in the following Algorithm 7. This can be used in order to decrease or limit the number of iterations needed in the proposed algorithm. Moreover, at the same time an upper bound to the optimal solution can be given. In Algorithm 7.

Now, consider the special case of system 4. J0 of the type described by Equation 7. See Section 8. The same coloring in Figures a to e implies the same cost value. For system 6. The constructed state space partition consists of polyhedral regions.

For system 7. For a fair timing comparison these intermediate comparison computations should be added to the computation times of the example in Section 7. Trajectories indicate the stable closed-loop behavior. Such Lyapunov functions are utilized for a priori imposing closed-loop stability and feasibility for all time when using a receding horizon control policy, cf. Chapter 6. In accordance with energy consumption arguments in a system, the class of quadratic functions is most often used as Lyapunov function candidates. However, as argued in Section 3. However, the detailed construction for continuous-time systems having multiple eigenvalues or for discrete-time systems is left to the reader.

Even though similarities exist, the formulation does not straightforwardly translate to the discrete-time case, as will be shown in the following. Section 8. The weight W of the weighted vector norm computed here is of small size, and all the considered construction steps are elementary and scale well for large systems. Therefore, this algorithm is practically superior to the purely optimization-based search; confer also the examples in Section 8. The toolbox can be downloaded free of charge.

Here, we recapitulate the main existence theorem for the considered linear vector norm based Lyapunov functions for discrete-time systems, proved in [KAS92, Pol95]. Theorem 8. A purely optimization-based search for a real solution pair (W, Z) of the given bilinear matrix equation 8. Remark 8. This is indicated throughout the following text. In the following, the above-mentioned overall problem is decomposed into several simpler problems.
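The matrix conditions behind this existence theorem can be checked numerically for a given candidate pair. The sketch below assumes the standard form of the condition, V(x) = ||W x||_inf with W A = Z W and ||Z||_inf < 1; the matrices A and W are illustrative choices, not taken from the text:

```python
import numpy as np

# Candidate check for V(x) = ||W x||_inf as a Lyapunov function of
# x(t+1) = A x(t): find Z with Z W = W A and verify ||Z||_inf < 1
# (induced infinity norm = maximum absolute row sum).
A = np.array([[0.5, 0.5],
              [-0.5, 0.5]])            # stable, eigenvalues 0.5 +/- 0.5i
c = np.sqrt(2) / 2
W = np.array([[1.0, 0.0],              # rows at angles 0, 45, 90, 135 deg
              [c, c],
              [0.0, 1.0],
              [-c, c]])

# With W of full column rank, Z = W A W^+ is one exact solution of Z W = W A
# (any solution with small enough norm works; the pseudo-inverse gives one).
Z = W @ A @ np.linalg.pinv(W)
residual = np.max(np.abs(Z @ W - W @ A))
z_norm = np.max(np.abs(Z).sum(axis=1))  # ||Z||_inf
is_lyapunov = bool(residual < 1e-9 and z_norm < 1.0)
```

Note that the pseudo-inverse yields only one particular Z; when it fails the norm test, a different solution of Z W = W A may still certify stability.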

Throughout Section 8. This, in turn, leads to purely polyhedral manipulations in further controller computations, see e.g. Part II. If all the eigenvalues of A are real, then one can simply use the formulation in Lemma 8. However, if any of the eigenvalues is complex, the construction of a real W becomes more involved, as shown in the following. Lemma 8. Recall that [X]i denotes the i-th row of some matrix X. Note that the relation 8.

This test can be performed by solving a simple feasibility LP with domain relint Z in 8. As an example, Figure 8. It follows from Lemma 8. Corollary 8. Figure 8. Consider Lemma 8. This is due to the fact that as one increases 8. This is not true for the discrete-time case using the particular formulation for Wm, cf. Thus, for a large set of stable systems the proposed formulation would not lead to a successful computation of Wm, respectively W.

From Corollary 8. In the case of complex eigenvalues with eigenvalue multiplicity, one needs to modify Lemma 8. If this is not the case, one needs to increase m and perform a similar test again. How to derive and perform this test is explained in the following. Moreover, it is shown in Section 8. It should be mentioned that the Jordan decomposition might be numerically problematic, depending on the conditioning and structure of A.

However, this does not necessarily imply that the Jordan decomposition is generally numerically more complex in higher dimensional spaces. So for control problems of practical size, i.e. Confer also the example in Section 8. We point out that in this framework one needs to check the stability of the linear system, which is a trivial task, before starting to compute a Lyapunov function of the considered type. In the case of an unstable system the proposed algorithm would continue to iterate indefinitely. However, the same would happen for the algorithms proposed in [Pol97, Pol95].

However, if A has a spectral radius close to 1, in the worst case one would need a large but bounded number m to approximate the unit circle appropriately, cf. In contrast to the decomposition-based approach of [Pol95] or the algorithm presented here, the algorithm in [Pol97] mainly relies on the use of a sequence of linear programs and scaling operations without explicitly computing the eigenvalues of the system matrix A. This test is performed using m linear programs.
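The circle-approximation idea for a single complex eigenvalue pair can be sketched directly. The construction below is a simplified reading of the decomposition-based approach: for A = r R(theta) (a planar rotation-scaling), take m rows of W at angles i*pi/m and express each rotated row through its two angular neighbors, which yields a sparse Z with Z W = W A exactly and ||Z||_inf <= r / cos(pi/(2m)), so increasing m eventually certifies decay. The values of r and theta are illustrative:

```python
import numpy as np

def build_WZ(r, theta, m):
    """Rows of W sample the half-circle; Z re-expresses each rotated row
    via its two neighbors (the wrap uses w(a + pi) = -w(a))."""
    A = r * np.array([[np.cos(theta), np.sin(theta)],
                      [-np.sin(theta), np.cos(theta)]])
    phi = np.arange(m) * np.pi / m
    W = np.column_stack([np.cos(phi), np.sin(phi)])
    d = np.pi / m
    Z = np.zeros((m, m))
    for i in range(m):
        alpha = (phi[i] + theta) % (2 * np.pi)  # angle of the rotated row
        sign = 1.0
        if alpha >= np.pi:
            alpha -= np.pi
            sign = -1.0
        j = min(int(alpha // d), m - 1)
        p = np.sin((j + 1) * d - alpha) / np.sin(d)
        q = np.sin(alpha - j * d) / np.sin(d)
        Z[i, j] += sign * r * p
        if j + 1 < m:
            Z[i, j + 1] += sign * r * q
        else:                                   # neighbor wraps to -w(0)
            Z[i, 0] -= sign * r * q
    return A, W, Z

# increase m until ||Z||_inf < 1 certifies the Lyapunov decrease
for m in range(3, 40):
    A, W, Z = build_WZ(0.95, 0.5, m)
    if (np.max(np.abs(Z @ W - W @ A)) < 1e-9
            and np.max(np.abs(Z).sum(axis=1)) < 1.0):
        break
```

As the text notes, a spectral radius close to 1 forces a larger (but bounded) m; here r = 0.95 already needs a finer angular grid than a well-damped system would.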


If W0 does not yield a Lyapunov function, an additional scaling operation on W0 is performed. However, the success of this technique is highly dependent on the choice of these two parameters. When choosing W0 as an approximation of the nx-dimensional unit sphere, as also suggested in [Pol97], the algorithm seems to succeed with the same m as needed by the algorithm presented here.


Therefore, the choice of linear vector norm based Lyapunov functions is of practical importance. The discrete-time system 8. We want to note that, in the context of receding horizon control, cf. Section 3. This can be performed with simple LPs. The success of this latter technique is, however, highly dependent on the location of the eigenvalues of A. Having said that, together with Section 8. Note that in most engineering applications this is not a restrictive requirement.

Algorithm 8. Table 8. For each case the CPU time was averaged over random systems, except for the D case. Neither the system dimension nor the location of the eigenvalues of the system seems to limit the computability of a Lyapunov function of the considered class using the proposed algorithm. The approach proposed here found a Lyapunov function in 1. In this chapter an algorithm is presented that, by analyzing the CFTOC solution a posteriori, extracts regions of the state space for which closed-loop stability and feasibility for all time can be guaranteed. Unfortunately, the technique presented in [GKBM05] introduces a certain level of suboptimality in the solution.

In this chapter the focus is on the a posteriori analysis of the CFTOC solution, and the goal is to extract the regions of the state space for which closed-loop stability and feasibility for all time can be guaranteed. A technique to compute the maximal positively invariant set and a Lyapunov stability region based on the linear cost function for constrained PWA systems is presented. The algorithm combines a reachability analysis with some basic polyhedral manipulations.

At the end of the chapter the applicability of the proposed algorithms is illustrated with several numerical examples. Remark 9. Furthermore, the receding horizon control law might drive the system state outside of X. Therefore, guaranteeing closed-loop stability and feasibility for all time for constrained PWA systems on the whole set X is far from a simple task.

Assumption 9. Otherwise it is possible to split the regions Pi further until Assumption 9. Having the receding horizon control law 6. Chapter 7. In many cases the numerical computation of the CITOC solution might not be possible, or might not even be desired due to the complexity of the solution. This behavior is very likely to be expected also for most, if not all, constrained PWA systems with a linear performance index; confer also Figure 9. It is of major importance, however, to know for which subset of the open-loop feasible region X the computed suboptimal controller can guarantee closed-loop stability and feasibility.

The dashed lines depict possible trajectories starting in the respective sets. Note that V is a conservative indicator for the set of initial states with closed-loop asymptotic stability, based on a Lyapunov function candidate V. Figure 9.


In [GLPM03] the authors compute the invariant set in an iterative procedure where at each iteration step a one-step reachability analysis is performed to extract the parts of the state space that remain closed-loop feasible. The algorithm has converged when the feasible state space remains constant. The approach presented here to compute the maximal positively invariant set Omax for a given PWA state feedback control law for constrained PWA systems can be considered as complementary to the algorithm presented in [GLPM03].

The detailed procedure of the maximal positively invariant set computation is given in Algorithm 9. Potential speed-ups of the algorithm are not mentioned here. The algorithm is divided into an initialization part and a main part. A schematic arrangement of the considered regions and sets in iteration step [r] used in Algorithm 9. The dashed arrow denotes that the target set is reached in one step with the given control law.

In the initialization part a one-step reachability analysis is performed and the possible mappings from the polyhedral region Pi to the region Pj are recorded in the mapping matrix M, i.e. See Figure 9. Additionally, regions that lead to infeasibility in one step, denoted by Uk, are computed. The remaining feasible set is the maximal positively invariant set Omax, because all states starting in this remaining set will remain in Omax for all time by construction.
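At region granularity the pruning loop behind this computation can be sketched as a graph algorithm. The mapping data below is hypothetical and stands in for the polyhedral reachability analysis of the text; note the sketch is conservative, since it removes a whole region as soon as any of its one-step successors can be infeasible, whereas the actual algorithm splits regions via set differences:

```python
# M[i] = set of region indices that some state of region i can be mapped
# into in one step under the given control law; index 4 is an infeasible sink.
M = {0: {0}, 1: {0, 4}, 2: {1}, 3: {2, 4}, 4: set()}

def maximal_invariant(M, infeasible):
    """Iteratively discard regions that can leave the current feasible set."""
    feasible = set(M) - {infeasible}
    changed = True
    while changed:
        changed = False
        for i in list(feasible):
            if not M[i] or not M[i] <= feasible:  # empty or escaping successor set
                feasible.discard(i)
                changed = True
    return feasible

Omax_idx = maximal_invariant(M, 4)  # indices of regions surviving the pruning
```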

Therefore, the control law and cost expressions of Pj[r] are the same as for the original region, but the region index of the subsets changes from iteration step to iteration step. The same applies to Uk[r]. Thus it is easy to check in iteration step [r] which regions can lead to feasibility or infeasibility in one step, and which transitions are impossible with the computed control law, e.g. This reduces the number of possible combinations tremendously as the iterations evolve.

In this work the set difference is computed with the procedure presented in [Bao05, Sec. See [Bao05, Sec. Algorithm 9. The algorithm is based on the linear cost function JRH and additionally uses reachability analysis for the computation. Unlike techniques presented in the literature, cf. Here no LMI techniques are needed, and no possible additional conservatism is introduced. Therefore, with standard Lyapunov stability arguments and this Lyapunov function candidate it is not to be expected, especially for PWA systems, that the whole asymptotically stable region of the closed-loop system is obtained, but only a subset of it.

In the unfortunate case when the value function 6. See Remark 9. However, even though not explicitly stated here, the extension to the non-full-dimensional case applies similarly. The detailed description is given in Algorithm 9. The notation is analogous to the notation used in Algorithm 9. Also here, note that for simplicity in the following only operations on possibly non-convex sets and polyhedral regions are mentioned; however, one also needs to keep track of the cost function and state feedback control law as in 6.

Speed-ups of the algorithm are again not discussed, for simplicity. The initialization part of the algorithm is completely analogous to the initialization part of Algorithm 9. The dashed lines depict a possible trajectory starting in the region of attraction. Note that by construction the remaining set V is a Lyapunov stable and positively invariant set, but not necessarily maximal positively invariant. The newly attained regions are denoted by Ai. Confer also Figure 9. It is clear from the construction that the set A[r] is the set of states from which a trajectory is driven with the given control law into the computed Lyapunov stable region V in r steps, i.e.

The union of all A[r] together with the Lyapunov stability set V is the region of attraction A of the origin, which itself is a Lyapunov stability set in the classical sense. Since the regions do not intersect, the union can simply be represented as a P-collection of these regions. See the example in Section 9. As in Algorithm 9. However, under stronger assumptions, e.g. Theorem 9. The closed-loop controlled system 9. The proof follows from the construction of the region of attraction A in Algorithm 9. As mentioned before, the set A is itself a Lyapunov stability region in the classical sense.

Let A be the region of attraction and Omax be the maximal positively invariant set of the closed-loop system 9. The proof of Corollary 9. Let Omax be the maximal positively invariant set of the closed-loop system 9. Same color implies same cost value. Green marked regions lead to infeasibility and do not belong to the set Omax.

This means that no limit cycles or stationary points other than the origin exist. Furthermore, the stability region is a strict subset of the open-loop feasible region X, i.e. Same color corresponds to the same cost value JRH(x). Same color corresponds to the same control law expression.

In Figure 9. For increasing prediction horizon the error vanishes quickly. The same color corresponds to the same cost value. Green marked regions lead to infeasibility. The results are reported in Figures 9. This means that no limit cycles or stationary points other than the origin exist. Furthermore, the stability region is a strict subset of the feasible region, i.e. Red marked trajectories lead to infeasibility. The purple star marks the origin. The results are reported in Table 9. There one sees that during the computation of the maximal positively invariant set 2 resp. By a posteriori analyzing a nominal stabilizing controller and the corresponding Lyapunov function of a general closed-loop nonlinear system one can compute a set, called the stability tube, in the augmented state-input space.

Furthermore, as soon as an approximation of the optimal min-max solution is computed for implementation purposes, the theoretical stability and performance guarantees might be lost. In recent years, a plethora of other robust synthesis techniques for systems with constraints has appeared; two examples are [RM05, MRVK06]. Here we take a different approach, where the inherent robust stability property of a closed-form nominal control law is investigated and the concept of the stability tube is introduced.

By a posteriori analyzing a nominal closed-form stabilizing controller and the corresponding Lyapunov function of the general closed-loop nonlinear system one can compute a set, called a stability tube, in the augmented state-input space. These stability tubes can serve, for example, as a robustness analysis tool for the given nominal controller, as well as a means to obtain an approximation of the nominal control law with lower complexity while preserving closed-loop stability and a certain performance level.

Assumption. Furthermore, note that if the parameters are chosen according to e.g. Theorem 3. Therefore, the following sections introduce the set concept of a stability tube of a given nominal controller. The underlying core idea presented here lies in the inherent freedom of the Lyapunov decay inequality 2. Theorem 2. Simply speaking, if all the prerequisites of Theorem 2. Let V be a Lyapunov function for the nonlinear closed-loop system. Theorem In addition to Theorem Stability tube with performance.
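The freedom in the decay inequality can be made concrete with a membership test: a state-input pair belongs to the stability tube whenever it still satisfies the required Lyapunov decrease, so any selection inside the tube preserves the certified convergence. The linear system, quadratic V, and decay rate below are illustrative placeholders, not the construction from the text:

```python
import numpy as np

# Membership test for a stability tube: (x, u) is admitted if
# V(f(x, u)) - V(x) <= -gamma * V(x) holds for the given Lyapunov function.
A = np.array([[0.9, 0.2], [0.0, 0.8]])
B = np.array([[0.0], [1.0]])
P = np.array([[2.0, 0.0], [0.0, 1.0]])     # V(x) = x' P x
gamma = 0.1                                 # required relative decrease

def V(x):
    return x @ P @ x

def in_stability_tube(x, u):
    x_next = A @ x + B @ u
    return bool(V(x_next) - V(x) <= -gamma * V(x))
```

Any control selection, including a simplified approximation of the nominal law, that stays inside this set inherits the certified decrease of V.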

Figure Let the assumptions of Corollary In addition to Corollary The stability tube allows a direct, possibly conservative, analysis of the inherent robust stability and robust performance. From Theorem But even in an average case the number NP tends to be very large and above the storage limit of most control devices. Several authors have recently addressed the complexity reduction or approximation issue by either modifying the original CFTOC problem, retrieving a suboptimal solution of the CFTOC problem, or post-processing the computed optimal controller, cf.

Here, Theorem The stability tube for this example was obtained in seconds on a Pentium 4, 2. Notice from Figure As indicated in the previous sections, this stability tube can be used for different analysis or synthesis purposes. This procedure is called the point-location problem, and the rate at which it can be solved determines the minimal sampling time of the system. Furthermore, the required off-line preprocessing is low, and so the approach can be applied to very complex controllers.

The point-location problem comes into play on-line when evaluating the control law. One must identify the state space region in which the measured state lies at the current sampling instant. Moreover, control laws that do not form a polyhedral partition, but are composed of a collection of overlapping polytopic sets, are included naturally in the algorithm. This enables the algorithm to work for controller partitions with a large number of regions, which is demonstrated on numerical examples. The toolbox can be downloaded free of charge. A bounding box is the minimum-volume hyper-rectangle that contains the given set.

In order to evaluate the control one needs to identify the state space region Pi in which the measured state x(t) lies at the sampling instant t, i.e. |I| denotes the cardinality of the discrete set I. The reader is referred to Part II for further detail. Note that the closed-form receding horizon control law 6. A few interesting ideas are mentioned in the following. Utilizing standard Voronoi search methods, the search time then reduces to O(log NP). Note that this is more restrictive than the algorithm presented here; confer the introduction of this chapter and Section. Relaxations to a larger number of regions are possible at the cost of data storage and a higher search complexity.
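The bounding-box idea can be sketched as a two-stage test: off-line, each region Pi = {x : H x <= k} is paired with a precomputed axis-aligned bounding box (in the full method these boxes come from solving small LPs per coordinate, which is assumed done here); on-line, the cheap interval test discards most regions before the exact half-space test runs. Region data and boxes below are hypothetical:

```python
import numpy as np

def locate(x, regions, boxes):
    """Bounding-box accelerated point location over (H, k) regions with
    precomputed boxes (lo, hi); returns the index of a containing region."""
    for i, ((H, k), (lo, hi)) in enumerate(zip(regions, boxes)):
        if np.any(x < lo) or np.any(x > hi):
            continue                      # box test: cheap rejection
        if np.all(H @ x <= k + 1e-9):     # exact test only for survivors
            return i
    return None
```

The box test costs two vector comparisons per region, so the expensive half-space evaluation is only performed for the few regions whose boxes contain the measured state; in the worst case (all boxes hit) the procedure degrades to sequential search, matching the complexity statement below.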

This, however, can lead to a worst-case combinatorial number of subdivisions of existing regions and therefore to an additional increase in the number of regions to be considered during the search procedure. The on-line point-location search time is in the best case logarithmic in the number of regions NP, but in the worst case linear in the total number of facets, which makes the procedure equivalent to sequential search in the worst case.