Efficient computation (in linear time) of the Failure Function for the KMP string matching algorithm. Using this computed failure function, one can efficiently search for a pattern (the string for which we computed the failure function) in a block of text.
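
As a rough sketch of the idea described above (the slide itself is not reproduced here), a standard linear-time computation of the failure function, followed by the KMP search that uses it, could look like the following. The names failure_function and kmp_search are illustrative, not taken from the slide, and a non-empty pattern is assumed.

```python
def failure_function(pattern):
    """fail[i] = length of the longest proper prefix of pattern[:i+1]
    that is also a suffix of it. Computed in linear time."""
    fail = [0] * len(pattern)
    k = 0  # length of the currently matched prefix
    for i in range(1, len(pattern)):
        # On a mismatch, fall back along previously computed values.
        while k > 0 and pattern[i] != pattern[k]:
            k = fail[k - 1]
        if pattern[i] == pattern[k]:
            k += 1
        fail[i] = k
    return fail


def kmp_search(text, pattern):
    """Return the starting indices of every occurrence of pattern in text."""
    fail = failure_function(pattern)
    matches, k = [], 0
    for i, ch in enumerate(text):
        while k > 0 and ch != pattern[k]:
            k = fail[k - 1]
        if ch == pattern[k]:
            k += 1
        if k == len(pattern):
            matches.append(i - k + 1)
            k = fail[k - 1]  # keep matching for overlapping occurrences
    return matches
```

For example, kmp_search("abababc", "abab") returns [0, 2].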

Computation of the Failure Function for the KMP string matching algorithm. This slide explains the basic idea behind what the failure function represents and does not focus on constructing it efficiently.
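
To make the definition concrete (in the same spirit as the slide, ignoring efficiency), a direct definition-based computation might look like this; failure_function_naive is an illustrative name.

```python
def failure_function_naive(pattern):
    """Definition-based (quadratic) computation: fail[i] is the length of the
    longest proper prefix of pattern[:i+1] that is also a suffix of it."""
    fail = []
    for i in range(len(pattern)):
        p = pattern[:i + 1]
        best = 0
        for k in range(1, len(p)):      # proper prefixes only
            if p[:k] == p[-k:]:
                best = k
        fail.append(best)
    return fail
```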

Median of 2 sorted arrays in O(log n + log m), where 'n' and 'm' are the sizes of the 2 sorted arrays. The same procedure can be used to find the k-th ranked element (for any 1 <= k <= n+m) instead of the median.
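
As one way to realize this bound (a sketch, not necessarily the exact procedure from the slide), a k-th-smallest routine that discards roughly half of the remaining candidates from one of the arrays at each step runs in logarithmic time; kth_smallest and median_two_sorted are illustrative names.

```python
def kth_smallest(a, b, k):
    """k-th smallest (1-based) element of two sorted lists."""
    i = j = 0
    while True:
        if i == len(a):
            return b[j + k - 1]
        if j == len(b):
            return a[i + k - 1]
        if k == 1:
            return min(a[i], b[j])
        # Look about k//2 elements ahead in each list (or as far as possible).
        step = k // 2
        ia = min(i + step, len(a)) - 1
        jb = min(j + step, len(b)) - 1
        if a[ia] <= b[jb]:
            k -= ia - i + 1    # everything up to a[ia] cannot be the answer
            i = ia + 1
        else:
            k -= jb - j + 1
            j = jb + 1


def median_two_sorted(a, b):
    """Median of the union of two sorted lists."""
    n = len(a) + len(b)
    if n % 2:
        return kth_smallest(a, b, n // 2 + 1)
    return (kth_smallest(a, b, n // 2) + kth_smallest(a, b, n // 2 + 1)) / 2
```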

We see the same min-heap insertion procedure on a min-heap represented as an array in memory (instead of a tree with left & right child pointers).
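
A minimal sketch of that array-based insertion (sift-up), assuming a 0-indexed list in which the parent of index i lives at (i - 1) // 2; heap_insert is an illustrative name.

```python
def heap_insert(heap, value):
    """Insert value into a min-heap stored in a plain list (no child pointers)."""
    heap.append(value)
    i = len(heap) - 1
    # Sift the new element up while it is smaller than its parent.
    while i > 0:
        parent = (i - 1) // 2
        if heap[parent] <= heap[i]:
            break
        heap[i], heap[parent] = heap[parent], heap[i]
        i = parent
```

Inserting 5, 3, 8, 1 into an empty list, in that order, leaves the minimum (1) at index 0.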

Linear time array partitioning. This is a very important algorithm that is used as a subroutine in the Linear time Selection algorithm as well as in Quick Sort.
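
The slide's exact partition scheme isn't shown here; as a sketch, one common variant (Lomuto's, using the last element as the pivot) partitions the range in a single linear pass.

```python
def partition(a, lo, hi):
    """Partition a[lo..hi] (inclusive) around the pivot a[hi].
    Returns the pivot's final index after one linear pass."""
    pivot = a[hi]
    i = lo  # next slot for an element <= pivot
    for j in range(lo, hi):
        if a[j] <= pivot:
            a[i], a[j] = a[j], a[i]
            i += 1
    a[i], a[hi] = a[hi], a[i]
    return i
```

Quick sort recurses on the two sides of the returned index, while linear-time selection recurses on only one side.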

Efficient (linear time) construction of the Z-Box array for every array index of the string "Pattern$Text". Most of the work is done while matching characters using the naive "two-finger" string matching algorithm. If a Z-Box completely covers the sub-string of interest, we know the answer in O(1) time. If not, we extend the Z-Box and do some useful work. Since the right end of a Z-Box only ever moves forward, and can move at most 'n' positions in total (n being the length of the string), the algorithm terminates in time O(n).
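
A compact sketch of that construction (0-indexed here, with z[0] left as 0 by convention; z_array and z_search are illustrative names, and '$' is assumed not to occur in either string).

```python
def z_array(s):
    """z[k] = length of the longest substring starting at k that matches a
    prefix of s. Linear time overall."""
    n = len(s)
    z = [0] * n
    l, r = 0, 0                      # [l, r) is the rightmost Z-Box found so far
    for k in range(1, n):
        if k < r:
            # k is covered by a Z-Box: reuse the value computed for k - l.
            z[k] = min(r - k, z[k - l])
        # Extend past r with explicit comparisons ("two fingers").
        while k + z[k] < n and s[z[k]] == s[k + z[k]]:
            z[k] += 1
        if k + z[k] > r:
            l, r = k, k + z[k]
    return z


def z_search(text, pattern):
    """Occurrences of pattern in text, via the Z-array of 'pattern$text'."""
    s = pattern + "$" + text
    z = z_array(s)
    m = len(pattern)
    return [k - m - 1 for k in range(m + 1, len(s)) if z[k] >= m]
```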

The linear time algorithm to merge 2 sorted arrays into a single sorted array. This algorithm requires O(n) extra space, but an in-place version does exist. This forms the backbone of the merge-sort algorithm (much like partition forms the backbone of the quick-sort algorithm).
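
A sketch of the out-of-place version described above (merge is an illustrative name, not a reference to the slide's code).

```python
def merge(a, b):
    """Merge two sorted lists into one sorted list in O(n) time,
    using O(n) extra space for the output."""
    out = []
    i = j = 0
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:
            out.append(a[i]); i += 1
        else:
            out.append(b[j]); j += 1
    out.extend(a[i:])   # at most one of these two tails is non-empty
    out.extend(b[j:])
    return out
```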

An introduction to the Z-algorithm as proposed by Dr. Dan Gusfield. This slide introduces the idea of a Z-Box and explains how to naively construct the Z-Box array for a string. It also shows how the Z-Box values can be used to find a pattern within a text. The Z-Box for an index 'k' (1 < k < |string|+1) is just the length of the longest sub-string starting at index 'k' that is common with a sub-string starting at index '1' of the original string (i.e. a prefix of the string).
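
Following that definition directly (again 0-indexed, with z[0] left as 0), the naive construction simply compares each suffix against the prefix character by character; z_array_naive is an illustrative name.

```python
def z_array_naive(s):
    """Definition-based Z-array: for each k, count how many characters of the
    suffix s[k:] match the prefix of s. Quadratic in the worst case."""
    n = len(s)
    z = [0] * n
    for k in range(1, n):
        while k + z[k] < n and s[z[k]] == s[k + z[k]]:
            z[k] += 1
    return z
```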

Building a Cartesian Tree. A Cartesian Tree is a heap-like structure such that the root element of every sub-tree is not greater than any of the elements in the sub-tree below it. Like the heap, a Cartesian Tree is also a recursive structure. We show a linear time algorithm to build a Cartesian Tree from an input array. The difference between a Cartesian Tree and a heap is that the former preserves the relative order of elements in the input array (its in-order traversal gives back the input) while the latter does not.
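
One standard linear-time construction (not necessarily the slide's exact presentation) maintains the rightmost spine of the tree on a stack; every node is pushed and popped at most once, which gives the linear bound. Node and build_cartesian_tree are illustrative names.

```python
class Node:
    def __init__(self, value):
        self.value = value
        self.left = None
        self.right = None


def build_cartesian_tree(values):
    """Min-Cartesian tree: in-order traversal gives back the input order,
    and every node's value is <= all values in its subtree."""
    stack = []                          # nodes on the rightmost spine
    for v in values:
        node = Node(v)
        last_popped = None
        # Pop larger spine elements; they become the left subtree of the new node.
        while stack and stack[-1].value > v:
            last_popped = stack.pop()
        node.left = last_popped
        if stack:
            stack[-1].right = node      # attach as the new rightmost child
        stack.append(node)
    return stack[0] if stack else None
```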

The classical merge-sort algorithm works by successively merging contiguous sub-arrays of size {1, 2, 4, 8, 16, 32, ...} until the complete input has been merged into a single sorted run. Merge Sort achieves the optimal comparison-sorting bound of O(n log n).
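
A sketch of that bottom-up merging scheme, using heapq.merge for the individual merges so the example is self-contained; merge_sort is an illustrative name.

```python
import heapq


def merge_sort(a):
    """Bottom-up merge sort: repeatedly merge adjacent sorted runs of width
    1, 2, 4, 8, ... until the whole list is one sorted run. O(n log n)."""
    a = list(a)
    width = 1
    while width < len(a):
        for lo in range(0, len(a), 2 * width):
            mid = min(lo + width, len(a))
            hi = min(lo + 2 * width, len(a))
            # Merge the two adjacent sorted runs a[lo:mid] and a[mid:hi].
            a[lo:hi] = list(heapq.merge(a[lo:mid], a[mid:hi]))
        width *= 2
    return a
```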
