
Interesting Challenge… Divide and Conquer

Follows from the original post (here) and its children.

I have worked through some simple O(n^2) algorithms (selection, insertion and bubble sort) to get a handle on sorting, and perhaps find some inspiration for a novel approach to the original question. I am a little further along now, and am beginning to think that sorting algorithms are not really the way forward if I intend to achieve linear running time. But they are a good complement to the “Design and Analysis of Algorithms” course I am doing 🙂 So I will persist. Below are the latest: binary search, merge sort and a function to flatten nested lists.
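For reference, here is a minimal sketch of one of those quadratic sorts (insertion sort); the in-place style and function name are my own choices:

```python
def insertion_sort(p):
    """ sort p in place by growing a sorted prefix one element at a time """
    for i in range(1, len(p)):
        key = p[i]
        j = i - 1
        # shift larger elements of the sorted prefix one slot to the right
        while j >= 0 and p[j] > key:
            p[j + 1] = p[j]
            j -= 1
        p[j + 1] = key
    return p
```

Each element is inserted into its place in the already-sorted prefix, giving the O(n^2) worst case mentioned above.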

Binary Search

I am playing around with sorting algorithms, so why change to binary search? Well, it is the first step along the road towards divide and conquer sorting algorithms. Binary search uses recursion to rapidly reduce the problem size… and here it is:

def binary_search(target, p):
    # base case: empty list, the target is not present
    if not p:
        return "Target {}, not in list.".format(target)

    # arbitrarily pick the mid_value
    mid = len(p) // 2

    # split the problem (size n) into smaller subproblems (size n/2)
    # recursively call on the subproblems
    if target > p[mid]:
        return binary_search(target, p[mid + 1:])
    elif target < p[mid]:
        return binary_search(target, p[:mid])
    return "Target found"

Merge Sort – the canonical divide and conquer sorting algorithm. I was introduced to this algorithm in Stanford’s “Design and Analysis of Algorithms” course that has just started on Coursera. I’m not sure I fully understand merge sort yet… at a conceptual level I get it, but I am finding it really difficult to construct a mental image that I can follow all the way through. Here is my implementation in Python:

def merge_split(p):
    """ recursively split the input list into smaller sub-lists,
    then call merge_sort() to sort the singletons and merge into the full list """

    # base-case, singleton list
    if len(p) <= 1:
        return p

    # divide into subproblems
    middle = len(p) // 2
    left, right = p[:middle], p[middle:]

    # recursive call
    left = merge_split(left)
    right = merge_split(right)

    # conquer
    return merge_sort(left, right)

def merge_sort(left, right):
    """ merge sorted lists back into one output list """

    merged = []
    while left and right:
        # take the smaller of the two front elements
        if left[0] < right[0]:
            merged.append(left.pop(0))
        else:
            merged.append(right.pop(0))

    if left:
        merged += left
    if right:
        merged += right

    return merged
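One caveat with merging via pop(0): removing the front of a Python list is O(n), since the remaining elements shift down. An index-based merge (a common refinement, names my own) keeps each step O(1):

```python
def merge(left, right):
    """ merge two sorted lists using indices instead of pop(0) """
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] < right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    # at most one of the two lists still has a tail; append it
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged
```

The behaviour is the same, and the inputs are left intact, which makes it easier to trace by hand.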

Flattening a nested list – this is a favourite problem on many sites (see the 99 problems for Haskell), and while there are other ways to do this (Python can make good use of recursion and generators to do it in about 5 lines! [and I haven’t done it in Haskell yet, but that will be interesting]), I latched onto the idea that an adaptation of the merge algorithm would do this nicely. And here it is:

def flatten(p):
    """ walk the array and flatten any nested structures """

    # base case: empty list
    if not p:
        return p

    # push nested structures to the back and recurse
    if isinstance(p[0], list):
        return flatten(p[1:] + p[0])

    # base case: singleton list
    if len(p) == 1:
        return p

    # divide into subproblems and recurse
    middle = len(p) // 2
    left = flatten(p[:middle])
    right = flatten(p[middle:])

    return combine(left, right)

def combine(head, tail):
    """ combine the subproblems in ascending order """
    if head > tail:
        return tail + head
    return head + tail
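For comparison, the recursion-and-generators approach mentioned above really is only a handful of lines; a sketch (function name my own), which also preserves the original left-to-right order of the elements:

```python
def flatten_gen(p):
    """ yield the elements of p, descending into nested lists """
    for item in p:
        if isinstance(item, list):
            # recurse into the nested list and yield its leaves
            for sub in flatten_gen(item):
                yield sub
        else:
            yield item
```

For example, `list(flatten_gen([1, [2, [3, 4]], 5]))` gives `[1, 2, 3, 4, 5]`, and being a generator it produces elements lazily rather than building intermediate lists.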

So none of these really get me any closer to finding a novel solution to the original challenge. But who cares?!

