Understanding Complexity

If you know what logarithms are, feel free to skip this section. As a lot of people are unfamiliar with logarithms, or just haven't used them much recently and don't remember them, this section is here as an introduction for them. This text is also for younger students who haven't seen logarithms at school yet. Logarithms are important because they occur a lot when analyzing complexity. A logarithm is an operation applied to a number that makes it considerably smaller, much like the square root of a number.

So if there's one thing you should remember about logarithms, it is that they take a number and make it much smaller than the original (see Figure 4). Now, in the same way that square roots are the inverse operation of squaring something, logarithms are the inverse operation of exponentiating something. This isn't as hard as it sounds. It's better explained with an example. Consider the equation:

2^x = 1024

We now wish to solve this equation for x. So we ask ourselves: what is the number to which we must raise the base 2 so that we get 1024? That number is 10. Logarithms help us denote this problem using new notation: in this case, 10 is the logarithm of 1024, and we write this as log( 1024 ) = 10, which we read as "the logarithm of 1024". Because we're using 2 as a base, these logarithms are called base-2 logarithms. There are logarithms in other bases, but we'll only use base-2 logarithms in this article. If you're a student competing in international competitions and you don't know about logarithms, I highly recommend that you practice your logarithms after completing this article.
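To make the inverse relationship concrete, here is a tiny Python check using the standard math module (the numbers are simply the ones from the example above):

```python
import math

# Raising the base 2 to the 10th power gives 1024...
print(2 ** 10)          # 1024

# ...so the base-2 logarithm of 1024 takes us back to 10.
print(math.log2(1024))  # 10.0
```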

In computer science, base 2 logarithms are much more common than any other types of logarithms. This is because we often only have two different entities: 0 and 1. We also tend to cut down one big problem into halves, of which there are always two. So you only need to know about base-2 logarithms to continue with this article. Solve the equations below. Denote what logarithm you're finding in each case.

Use only base-2 logarithms. Let's now take a look at a recursive function. A recursive function is a function that calls itself. Can we analyze its complexity? The following function, written in Python, evaluates the factorial of a given number. The factorial of a positive integer is found by multiplying it with all the smaller positive integers together; for example, the factorial of 5 is 5 * 4 * 3 * 2 * 1, and we denote it "5!". Let us analyze the complexity of this function. This function doesn't have any loops in it, but its complexity isn't constant either. What we need to do to find out its complexity is again to go about counting instructions.
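A minimal recursive Python version consistent with that description might look like this; treat it as a sketch rather than the original code:

```python
def factorial(n):
    # Base case: the factorial of 1 is 1.
    if n == 1:
        return 1
    # Recursive case: n! = n * (n - 1)!
    return n * factorial(n - 1)

print(factorial(5))  # 120, i.e. 5 * 4 * 3 * 2 * 1
```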

Clearly, if we pass some n to this function, it will execute itself n times. If you're unsure about this fact, remember that you can always find the exact complexity by counting instructions. See Figure 5 for a diagram to help you understand the recursions performed when factorial( 5 ) is called. One famous problem in computer science is that of searching for a value within an array.

We solved this problem earlier for the general case. This problem becomes interesting if we have an array which is sorted and we want to find a given value within it. One method to do that is called binary search. We look at the middle element of our array: If we find it there, we're done. Otherwise, if the value we find there is bigger than the value we're looking for, we know that our element will be on the left part of the array. Otherwise, we know it'll be on the right part of the array. We can keep cutting these smaller arrays in halves until we have a single element to look at.

In pseudocode, the method takes only a few lines; a sketch follows below. This pseudocode is a simplification of the actual implementation. In practice, this method is easier described than implemented, as the programmer needs to take care of some implementation issues: there are off-by-one errors, and the division by 2 may not always produce an integer value, so it's necessary to floor or ceil the value. But we can assume for our purposes that it will always succeed, and we'll assume our actual implementation in fact takes care of the off-by-one errors, as we only want to analyze the complexity of this method.
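Here is a minimal recursive Python sketch of the idea described above; the name binarySearch matches the later discussion, while the lo and hi index parameters and the sample array are my own assumptions:

```python
from math import floor

def binarySearch(A, target, lo, hi):
    # Searches for `target` within the sorted slice A[lo..hi] (inclusive).
    if lo > hi:
        return -1  # the range is empty: the value is not in the array
    middle = floor((lo + hi) / 2)
    if A[middle] == target:
        return middle  # found it: return its position
    if A[middle] > target:
        # The middle value is too big, so the target can only be on the left.
        return binarySearch(A, target, lo, middle - 1)
    # Otherwise the target can only be on the right.
    return binarySearch(A, target, middle + 1, hi)

A = [1, 3, 4, 7, 10, 15]
print(binarySearch(A, 7, 0, len(A) - 1))  # 3 (the index of 7)
print(binarySearch(A, 8, 0, len(A) - 1))  # -1 (8 is not in the array)
```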

If you've never implemented binary search before, you may want to do this in your favourite programming language. It's a truly enlightening endeavor. If you're unsure that this method actually works, take a moment now to run it by hand in a simple example and convince yourself that it actually works.

Let us now attempt to analyze this algorithm. Again, we have a recursive algorithm in this case. Let's assume, for simplicity, that our array has a size that is an exact power of 2. This assumption doesn't change the final result of our complexity analysis; it is a fact that we would normally have to prove if we wanted to be prudent from a mathematical point of view, but practically it is intuitively obvious. The worst-case scenario for this problem would happen when the value we're looking for does not occur in our array at all.

In general, our array is split in half in every call, until we reach 1. So, let's write down the number of elements in our array for every call: n, n / 2, n / 4, n / 8, and, in general, n / 2^i after i halvings. This is because in every iteration we're cutting our array in half, meaning we're dividing its number of elements by two; this translates to multiplying the denominator by 2. Now, this procedure continues, and with every larger i we get a smaller number of elements, until we reach the last call, in which we have only 1 element left.

If we wish to find i to see in what iteration this will take place, we have to solve the following equation:

n / 2^i = 1

This will only be true when we have reached the final call to the binarySearch function, not in the general case.

So solving for i here will help us find in which iteration the recursion will finish. Multiplying both sides by 2^i we get:

n = 2^i

Now, this equation should look familiar if you read the logarithms section above. Solving for i we have:

i = log( n )

This tells us that the number of iterations required to perform a binary search is log( n ), where n is the number of elements in the original array. If you think about it, this makes some sense.

For example, take an array of 32 elements: how many times do we have to cut it in half to get down to only 1 element? Going 32 → 16 → 8 → 4 → 2 → 1, we did this 5 times, which is the logarithm of 32. This last result allows us to compare binary search with linear search, our previous method. Clearly, as log( n ) is much smaller than n, it is reasonable to conclude that binary search is a much faster method to search within an array than linear search, so it may be advisable to keep our arrays sorted if we want to do many searches within them.
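If you want to watch the halving happen, here is a small Python check; the helper name countHalvings is mine, and, like our analysis, it is meant for sizes that are powers of 2:

```python
import math

def countHalvings(n):
    # Counts how many times n must be cut in half before 1 element remains.
    steps = 0
    while n > 1:
        n = n // 2
        steps += 1
    return steps

print(countHalvings(32))    # 5
print(math.log2(32))        # 5.0
print(countHalvings(1024))  # 10
print(math.log2(1024))      # 10.0
```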

Rule of thumb : Improving the asymptotic running time of a program often tremendously increases its performance, much more than any smaller "technical" optimizations such as using a faster programming language. You now know about analyzing the complexity of algorithms, asymptotic behavior of functions and big-O notation.

You also know how to intuitively figure out that the complexity of an algorithm is O( 1 ), O( log( n ) ), O( n ), O( n² ), and so forth. If you've come this far, this tutorial has already served its purpose. This final section is optional. It is a little more involved, so feel free to skip it if you feel overwhelmed by it. It will require you to focus and spend some moments working through the exercises.

However, it will provide you with a very useful and powerful method in algorithm complexity analysis, so it's certainly worth understanding. We looked at a sorting implementation above called selection sort. We mentioned that selection sort is not optimal. An optimal algorithm is an algorithm that solves a problem in the best possible way, meaning that there is no better algorithm for it.

This means that all other algorithms for solving the problem have a worse or equal complexity to that optimal algorithm. There may be many optimal algorithms for a problem that all share the same complexity. The sorting problem can be solved optimally in various ways. We can use the same idea as with binary search to sort quickly. This sorting method is called mergesort. To perform a mergesort, we will first need to build a helper function that we will then use to do the actual sorting.

We will make a merge function which takes two arrays, both already sorted, and merges them together into one big sorted array. This is easily done; a sketch follows below.
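Here is a recursive Python sketch consistent with that description; it relies on the concat helper explained in the next paragraph, and the exact function bodies are my reconstruction rather than the original code:

```python
def concat(head, tail):
    # Builds a new list with `head` as the first element, followed by the
    # elements of `tail`. For example, concat(3, [4, 5, 6]) == [3, 4, 5, 6].
    return [head] + tail

def merge(A, B):
    # Merges two already-sorted lists A and B into one big sorted list.
    if not A:
        return B  # nothing left in A, so the rest is just B
    if not B:
        return A  # nothing left in B, so the rest is just A
    if A[0] < B[0]:
        # The smallest remaining value is at the front of A.
        return concat(A[0], merge(A[1:], B))
    # The smallest remaining value is at the front of B.
    return concat(B[0], merge(A, B[1:]))

print(merge([1, 4, 6], [2, 3, 5]))  # [1, 2, 3, 4, 5, 6]
```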


The concat function takes an item, the "head", and an array, the "tail", and builds up and returns a new array which contains the given "head" item as the first element of the new array and the given "tail" elements as the rest of the elements in the array. For example, concat( 3, [ 4, 5, 6 ] ) returns [ 3, 4, 5, 6 ]. Verify that the above function actually performs a merge. Rewrite it in your favourite programming language in an iterative way, using for loops, instead of using recursion.

Utilizing this function we can build a better sorting algorithm. The idea is the following: we split the array into two parts, sort each of the two parts recursively, and then merge the two sorted arrays into one big array; a sketch follows below. This function is harder to understand than what we've gone through previously, so the following exercise may take you a few minutes.
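Here is a Python sketch following that description; the names mergeSort, leftHalf and rightHalf match the discussion in the exercise below, and the merge helper from the earlier sketch is restated compactly so the block runs on its own:

```python
from math import floor

def merge(A, B):
    # Same merge as in the earlier sketch: combines two sorted lists.
    if not A:
        return B
    if not B:
        return A
    if A[0] < B[0]:
        return [A[0]] + merge(A[1:], B)
    return [B[0]] + merge(A, B[1:])

def mergeSort(A):
    # An array of 0 or 1 elements is already sorted.
    if len(A) <= 1:
        return A
    middle = floor(len(A) / 2)
    leftHalf = A[:middle]    # roughly the first half of the array
    rightHalf = A[middle:]   # the remaining elements
    # Sort each half recursively, then merge the two sorted halves.
    return merge(mergeSort(leftHalf), mergeSort(rightHalf))

print(mergeSort([5, 2, 8, 1, 9, 3]))  # [1, 2, 3, 5, 8, 9]
```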

Verify the correctness of mergeSort. That is, check to see if mergeSort as defined above actually correctly sorts the array it is given. If you're having trouble understanding why it works, try it with a small example array and run it "by hand". When running this function by hand, make sure leftHalf and rightHalf are what you get if you cut the array approximately in the middle; it doesn't have to be exactly in the middle if the array has an odd number of elements (that's what floor above is used for).

As a final example, let us analyze the complexity of mergeSort. In every step of mergeSort , we're splitting the array into two halves of equal size, similarly to binarySearch. However, in this case, we maintain both halves throughout execution. We then apply the algorithm recursively in each half. Let's see what's going on here. Each circle represents a call to the mergeSort function. The number written in the circle indicates the size of the array that is being sorted.

The top blue circle is the original call to mergeSort, where we get to sort an array of size n. The arrows indicate recursive calls made between functions: the original call makes two recursive calls to mergeSort, each on an array of size n / 2, and this is indicated by the two arrows at the top. This diagram is called a recursion tree, because it illustrates how the recursion behaves; it looks like a tree (the root is at the top and the leaves are at the bottom, so in reality it looks like an inverted tree). Notice that at each row in the above diagram, the total number of elements is n.

To see this, take a look at each row individually. The first row contains only one call to mergeSort with an array of size n, so the total number of elements is n. The second row has two calls, each for an array of size n / 2, and n / 2 + n / 2 = n, so again we get n elements. Now notice that at each row in this diagram the caller will have to perform a merge operation on the elements returned by the callees.

At each row in our tree, the total number of elements merged is n: each call merges the two sorted halves returned by its callees, and across a whole row those halves add up to n elements that need to be merged. We know that the number of rows in this diagram, also called the depth of the recursion tree, will be log( n ). The reasoning for this is exactly the same as the one we used when analyzing the complexity of binary search. With log( n ) rows and n elements merged per row, the total work is about n * log( n ), so the complexity of mergeSort is Θ( n * log( n ) ).
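If you would like to check the n * log( n ) figure empirically, one way is to count how many elements get handed to merge across all calls. The instrumented sketch below is my own construction (the counter totalMerged and the iterative merge are not from the original code), and it keeps n a power of 2 as in our analysis:

```python
from math import floor, log2

totalMerged = 0  # counts every element handed to merge(), across all calls

def merge(A, B):
    # Iterative merge of two sorted lists, instrumented with a counter.
    global totalMerged
    totalMerged += len(A) + len(B)
    result = []
    i = j = 0
    while i < len(A) and j < len(B):
        if A[i] < B[j]:
            result.append(A[i])
            i += 1
        else:
            result.append(B[j])
            j += 1
    return result + A[i:] + B[j:]

def mergeSort(A):
    if len(A) <= 1:
        return A
    middle = floor(len(A) / 2)
    return merge(mergeSort(A[:middle]), mergeSort(A[middle:]))

n = 1024  # a power of 2, matching the simplifying assumption in the text
mergeSort(list(range(n, 0, -1)))
print(totalMerged)       # 10240: n elements are merged in every row of the tree
print(int(n * log2(n)))  # 10240: there are log2(n) = 10 rows of n elements each
```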

If this sounds complicated to you, don't worry: it's not easy the first time you see it. Revisit this section and reread the arguments here after you implement mergesort in your favourite programming language and validate that it works. As you saw in this last example, complexity analysis allows us to compare algorithms to see which one is better.

Under these circumstances, we can now be pretty certain that merge sort will outperform selection sort for large arrays. This conclusion would be hard to draw if we didn't have the theoretical background of algorithm analysis that we developed. Notice that we have not proven that these sorting algorithms are optimal. Doing this requires a slightly more involved mathematical argument, but rest assured that they can't get any better from a complexity point of view. Having finished reading this tutorial, the intuition you developed for algorithm complexity analysis should be able to help you design faster programs and focus your optimization efforts on the things that really matter instead of the minor things that don't matter, letting you work more productively.

In addition, the mathematical language and notation developed in this article, such as big-O notation, is helpful in communicating with other software engineers when you want to argue about the running time of algorithms, so hopefully you will be able to do that with your newly acquired knowledge. This article is licensed under Creative Commons 3.0. Although you don't have to, if you base your work on mine, I encourage you to publish your own writings under Creative Commons so that it's easier for others to share and collaborate as well.

In a similar fashion, I have to attribute the work I used here. The nifty icons that you see on this page are the fugue icons.

The beautiful striped pattern that you see in this design was created by Lea Verou. And, more importantly, the algorithms I know, which allowed me to write this article, were taught to me by my professors Nikos Papaspyrou and Dimitris Fotakis. I'm currently a cryptography PhD candidate at the University of Athens. When I wrote this article, I was an Electrical and Computer Engineering undergraduate at the National Technical University of Athens, majoring in software, and a coach at the Greek Competition in Informatics.

Industry-wise, I've worked as a member of the engineering team that built deviantART, a social network for artists; on the security teams of Google and Twitter; and at two start-ups, Zino and Kamibu, where we did social networking and video game development respectively. Follow me on Twitter or on GitHub if you enjoyed this, or mail me if you want to get in touch.

Many young programmers don't have a good knowledge of the English language. E-mail me if you want to translate this article into your own native language so that more people can read it. Thanks for reading. I didn't get paid to write this article, so if you liked it, send me an e-mail to say hello. I enjoy receiving pictures of places around the world, so feel free to attach a picture of yourself in your city!

Figure 1: Artificial intelligence characters in video games use algorithms to avoid obstacles when navigating the virtual world.

After that point it remains larger for ever.

Exercise 2: Systematically analyze the number of instructions the above PHP program needs with respect to n in the worst case to find f( n ), similarly to how we analyzed our first JavaScript program.

Figure 3: A player located at the yellow dot will not see the shadowed areas. Splitting the world into small fragments and sorting them by their distance to the player is one way to solve the visibility problem.

We can achieve O( n ) without altering our program at all.

As n² is worse than n, this is true. As n³ is worse than n², this is true. As 1 is not worse than n, this is false: if a program takes n instructions asymptotically (a linear number of instructions), we can't make it worse and have it take only 1 instruction asymptotically (a constant number of instructions). This is true, as the two complexities are the same. This may or may not be true depending on the algorithm; in the general case it's false. Exercise 5: Determine which of the following bounds are tight bounds and which are not tight bounds. Indeed, a bound of O( n² ) would be a tight one.

So we can write that the algorithm is o( n³ ). A bound of O( 1 ) would be a tight one. So we can point out that the O( n ) bound is not tight by writing it as o( n ). We must have made a mistake in calculating this bound, as it's wrong; remember that O gives an upper bound. This may seem like a bound that is not tight, but this is not actually true.

This bound is in fact tight. Solution: This is a straightforward application of the definitions above. A non-tight O-bound would be O( n ). Recall that O gives us an upper bound. As n is of a larger scale than 1, this is a non-tight bound and we can write it as o( n ) as well. So we'll have to make do with the tight bound. For non-tight bounds we can have O( n ), as n is larger than it and so it is an upper bound for it. As we know this is a non-tight upper bound, we can also write it as o( n ). These are in fact pretty bad bounds, as they are far from the original complexities, but they are still valid using our definitions.

Although these bounds are not tight, they're better than the ones we gave above. Figure 4 : A comparison of the functions n, , and log n. Function n, the linear function, drawn in green at the top, grows much faster than the square root function, drawn in red in the middle, which, in turn, grows much faster than the log n function drawn in blue at the bottom of this plot.

Exercise 7: Solve the equations below. Solution: There is nothing more to this than applying the ideas defined above. Here we notice that ( 2^2 )^x, by the properties of exponents, can be written as 2^( 2x ). This is readily observed from the original equation, as using an exponent of 1 yields the base as a result. Recall that an exponent of 0 yields a result of 1. Here we have a sum and so we can't take the logarithm directly.

Figure 5: The recursion performed by the factorial function.

Figure 6: The recursion performed by binary search.

The A argument for each call is highlighted in black.

While the Description and the Diagnosis are two tangible products of the Discovery component, the processes of arriving at these products are designed to resolve any of the issues related to the behavior pathologies, and also to assist in developing an appropriate object language with which to analyze, describe, and redesign whatever situation is under observation or consideration.

Description (p. 67): best done by groups of 8 to 15, because each member has distinctive knowledge that can be aggregated with the help of Interactive Management. Products of Description: a Type 1 Problematique, a problems field, an attributes field, and a Type 2 (categories) problematique. Diagnosis: best done by an individual who is experienced in using the aids developed for interpreting the structures produced in Interactive Management workshops, especially in interpreting the Problematique; provided the individual then shares the diagnosis in an Interpretation Session with the group that produced the results, both to check the interpretation and to make any needed amendments.

Products of Diagnosis: an analysis and classification of problems from the Type 1 Problematique, based on its structural features; computation of indexes of complexity for comparison with earlier studies; and a comparison of the categories problematique with organizational components to assess who might be doing what in the organization to help resolve the complexity. Resolution can be started once the Discovery work has produced sufficient understanding to make possible the Design (or redesign) that is required in order to resolve the complexity associated with the problematic situation.

Resolution incorporates recognition of the need for resources for the purpose of implementing the design, and that such resources normally are found only in organizations, because of the size and scope of complexity. Design: best done by groups of 8 to 15, because each member has distinctive knowledge that can be aggregated with the help of Interactive Management. Products of Design: an options field with one category for each category in the problems field; at least two independently developed options profiles produced by subgroups; a final options profile selection done in plenary session; and a DELTA Chart showing (a) what tasks will be performed in order to implement the design and (b) who will perform those tasks.

The Options Field shows the wide variety of possible options and the categories in which those options lie (these being the same categories as found earlier in the Problems Field). Implementation: best done by the organization (whatever components are required), following the Interpretation Session with the group that produced the results, both to check the interpretation and to make any needed amendments.

Infrastructure: It is particularly vital, in most applications, that the host organization create an infrastructure that is appropriate as a learning means for persons in the organization who were not present during the IM work, but who need to understand what was done by the participants in order to make their own contributions to implementation. Problematique: a graphical portrayal, constructed to rigorous specifications, of a particular aspect of complexity which arises in a problematic situation.

Thought Leaders

Numerous research studies have shown that several behavioral pathologies interfere with the possible resolution of complexity. Curiously, most of those studies rest in isolation from one another. Perhaps that accounts for the neglect of their findings in many circumstances where their insights are needed. The resolution of complexity relies on an integrated understanding of these pathologies. A careful design of a system for resolving complexity is required to be responsive to the collective pathologies, and to find ways to circumvent their mutually reinforcing effects.

Structural Incompetence: the concept that describes the situation found in an organization by a program manager who is so constrained that he cannot apply his knowledge to resolve an issue, but instead must go along with a decision that inevitably appears to be a consequence of incompetence. Groupthink: (1) the attribute defined carefully by Irving Janis; (2) a slang expression used by the uninformed to represent anything that a group produces which appears to have involved thought.

Symptoms: see page. Language: Language is a vital part of all communication, not merely scientific communication. But the quality of natural language, which evolves constantly and which incorporates many ambiguities, is not adequate for scientific language. So, according to I. Bochenski, Gottfried Leibniz offered the studied point of view that if scientists want to communicate with one another, they will have to design a specialized language in order to do so.

Language Design: see page 8. Killer Assumptions: a condition of unjustified belief which, if held, will greatly inhibit the likelihood that human beings can resolve complexity.

Thought Leader Charles Sanders Peirce: "One singular deception of this sort, which often occurs, is to mistake the sensation produced by our own unclearness of thought for a character of the object we are thinking. Instead of perceiving that the obscurity is purely subjective, we fancy that we contemplate a quality of the object which is essentially mysterious" (Peirce). In summary: complexity is that sensation experienced in the human mind when, in observing or considering a system, frustration arises from lack of comprehension of what is being explored.

Thought Leader Frank Harary: In this work, Harary et al. span several branches of mathematics to present the analytical basis for the mathematics of modeling, the mathematics of structure. Taking that mathematics as a basis, John Warfield augmented it with the corresponding synthesis scheme, so that where Harary's approach analyzes structural models and shows how they are presentable symbolically, Warfield's approach offers the algorithmic basis for model construction.

Thought Leader Aristotle: He insisted on rigorous scientific procedure, and contributed to standard philosophic method by his invention of the syllogism. His innovation in developing categories contributed to the thought process which he fostered.

Thought Leader George Boole: Boole presented a system of logic that encourages the representation of propositions by variables, so that a variable x might represent the proposition "this coin is a dime."

Thought Leader Peter Abelard: According to Bochenski, Peter Abelard first stated the concept of the syllogism in a single statement, in each of the following ways: "Whatever implies the antecedent implies also the consequent."

Thought Leader David Hilbert: Hilbert conceived the idea of "metalanguage", a concept that plays a foundational role in today's computer science, and one that formalized in language the idea of Leibniz that scientists would have to develop languages of their own if they were going to be able to communicate effectively.

Thought Leader Bales: Bales' work shows clearly that individuals can disrupt the kinds of group work that promote success by activating negative behavior in the Social-Emotional area.

Thought Leader Kenneth Boulding: Boulding has described the propensity of leaders (a) to accept and propagate uncritically ideas and concepts that diminish productivity, (b) to allocate importance in spurious ways across possible options, and (c) to suppress, or at least to avoid incorporating, valuable additions into the prevailing culture, defying those who have worked hard to make such additions available.


