Well, we programmers write programs. A program can be written in any programming language one knows, and we write programs in an object-oriented way, a procedural way, or any other paradigm, depending on the language we choose. Although most of us use object-oriented programming languages, only a few actually code in an object-oriented way, because only a few have a good grasp of it. If most of the fellows on the team don't know OOP well, then the code most probably won't be object-oriented, even though you are using a modern OOP-based language such as C#, Java, or C++.
Irrespective of the language one chooses, we write programs, and every program has an algorithm. The algorithm you define determines the performance of the program. When I was an Engineering student I didn't understand Big O, because I was not good at listening and the teacher was bad. Six years after college I am relearning the topic, and I now regret that for the past 5.5 years of programming I coded without giving any thought to the performance of the algorithms I wrote. I have finally started learning about algorithms, and I am putting this down as my notes so that I can refer to them later when I need them or when I forget.
Big O notation is used in Computer Science to describe the performance or complexity of an algorithm. Big O specifically describes the worst-case scenario, and can be used to describe the execution time required or the space used (e.g. in memory or on disk) by an algorithm. If the performance of the system is vital, i.e. it's part of a life-critical system, then we must use the worst case in our design calculations as it represents the best guaranteed performance.
Anyone who’s read Programming Pearls or any other Computer Science books and doesn't have a grounding in Mathematics will have hit a wall when they reached chapters that mention O(N log N) or other seemingly crazy syntax. Hopefully this article will help you gain an understanding of the basics of Big O and Logarithms.
As a programmer first and a mathematician second (or maybe third or fourth) I found the best way to understand Big O thoroughly was to produce some examples in code. So, below are some common orders of growth along with descriptions and examples where possible.
O(1)
O(1) describes an algorithm that will always execute in the same time (or space) regardless of the size of the input data set.
bool IsFirstElementNull(String[] strings)
{
    if(strings[0] == null)
    {
        return true;
    }
    return false;
}
O(N)
O(N) (pronounced "big-Oh-N" or simply "Oh-N") describes an algorithm whose performance will grow linearly and in direct proportion to the size of the input data set. The example below also demonstrates how Big O favors the worst-case performance scenario; a matching string could be found during any iteration of the for loop and the function would return early, but Big O notation will always assume the upper limit where the algorithm will perform the maximum number of iterations.
bool ContainsValue(String[] strings, String value)
{
    for(int i = 0; i < strings.Length; i++)
    {
        if(strings[i] == value)
        {
            return true;
        }
    }
    return false;
}
Another example of an O(N) algorithm is a linear scan of an array.
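For instance, summing an array touches each element exactly once. Here is a minimal sketch of my own (the Sum helper is just for illustration):
int Sum(int[] numbers)
{
    int total = 0;
    // Each element is visited exactly once, so the work grows
    // in direct proportion to the array's length - O(N).
    foreach (int number in numbers)
    {
        total += number;
    }
    return total;
}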
O(N²)
O(N²) represents an algorithm whose performance is directly proportional to the square of the size of the input data set. This is common with algorithms that involve nested iterations over the data set. Deeper nested iterations will result in O(N³), O(N⁴) and so on.
bool ContainsDuplicates(String[] strings)
{
    for(int i = 0; i < strings.Length; i++)
    {
        for(int j = 0; j < strings.Length; j++)
        {
            if(i == j) // Don't compare with self
            {
                continue;
            }
            if(strings[i] == strings[j])
            {
                return true;
            }
        }
    }
    return false;
}
O(2ᴺ)
O(2ᴺ) denotes an algorithm whose growth will double with each additional element in the input data set. The execution time of an O(2ᴺ) function will quickly become very large.
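The classic illustration (my own sketch, not an example from the original article) is the naive recursive Fibonacci: each call branches into two further calls, so the amount of work roughly doubles as N grows.
// Naive recursive Fibonacci - each call spawns two more calls,
// so the running time grows exponentially, roughly O(2^N).
int Fibonacci(int number)
{
    if (number <= 1)
    {
        return number;
    }
    return Fibonacci(number - 1) + Fibonacci(number - 2);
}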
will continue later..
Logarithms
2¹⁰ = 1024
log₂ 1024 = 10, i.e. the logarithm of 1024 to base 2.
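As a quick sanity check in C# (a snippet of my own, using the standard Math.Log overload that takes a base):
Console.WriteLine(Math.Pow(2, 10));   // prints 1024
Console.WriteLine(Math.Log(1024, 2)); // prints 10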
Scalability and Efficiency
An O(1) algorithm scales better than an O(log N) algorithm,
which scales better than an O(N) algorithm,
which scales better than an O(N log N) algorithm,
which scales better than an O(N²) algorithm,
which scales better than an O(2ᴺ) algorithm.
But scalability isn't efficiency. A well-coded O(N²) algorithm can outperform a sloppily coded O(N log N) algorithm, but only for a while. At some point their performance curves will cross. Hopefully, this happens before your code goes into production with a customer who is using a lot more data than you tested with.
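To see how quickly the curves separate, here is a small sketch of my own that prints rough operation counts for a few input sizes (the absolute numbers are meaningless; only the growth from row to row matters):
using System;

class GrowthDemo
{
    static void Main()
    {
        // One column per order of growth, one row per input size.
        Console.WriteLine("{0,8} {1,8} {2,8} {3,10} {4,12}",
                          "N", "log N", "N", "N log N", "N^2");
        foreach (int n in new[] { 10, 100, 1000, 10000 })
        {
            Console.WriteLine("{0,8} {1,8:F0} {2,8} {3,10:F0} {4,12}",
                              n, Math.Log(n, 2), n, n * Math.Log(n, 2), (long)n * n);
        }
    }
}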
O(log N) algorithms
Logarithmic running time, O(log N), essentially means that the running time grows in proportion to the logarithm of the input size. As an example, if 10 items take at most some amount of time x, 100 items take at most, say, 2x, and 10,000 items take at most 4x, then it's looking like an O(log N) time complexity.
Put another way, O(log N) means time goes up linearly while n goes up exponentially. So if it takes 1 second to compute 10 elements, it will take 2 seconds to compute 100 elements, 3 seconds to compute 1,000 elements, and so on.
Example: binary search runs in O(log N) time. Divide & conquer algorithms that discard a constant fraction of the input at each step are also of this order.
Binary Search
A technique for searching an ordered list in which we first check the middle item and - based on that comparison - "discard" half the data. The same procedure is then applied to the remaining half until a match is found or there are no more items left.
That is, to find an item in a sorted array of N items, we first cut the search space from N down to N/2, then to N/4, and so on, until either the item is found or nothing is left.
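A minimal iterative sketch in C# (my own code, following the description above):
int BinarySearch(int[] sorted, int value)
{
    int low = 0;
    int high = sorted.Length - 1;
    while (low <= high)
    {
        int mid = low + (high - low) / 2; // middle item; avoids overflow of (low + high)
        if (sorted[mid] == value)
        {
            return mid;              // match found
        }
        if (sorted[mid] < value)
        {
            low = mid + 1;           // discard the lower half
        }
        else
        {
            high = mid - 1;          // discard the upper half
        }
    }
    return -1;                       // no items left - value is not present
}
Each pass of the loop halves the remaining range, so at most about log₂ N comparisons are needed.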
To remind myself to think about the complexities associated with the many search algorithms, I am listing them in the table given below; I will come back and analyze each one later, Insha Allah.
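For now, the standard textbook figures, filled in as a starting point:
Algorithm                        Average case   Worst case
Linear search (unsorted array)   O(N)           O(N)
Binary search (sorted array)     O(log N)       O(log N)
Hash table lookup                O(1)           O(N)
Binary search tree lookup        O(log N)       O(N)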