How to find out if a problem is amenable to the principle of optimality:
If we can find an algorithm to solve the problem such that the optimal solution to a "sub-problem" can be found using the same algorithm, then we say that the <Algorithm, Problem> tuple is amenable to the "principle of optimality".
E.g., let's say the problem is: find the shortest path in a graph with no negative edges (but possibly with cycles), and let's choose Dijkstra's algorithm. If we take the shortest path it finds, say path1, and run the same algorithm between any 2 points on path1, the newly found shortest path, say path2, is a sub-path of path1. Thus <Shortest path on a graph with no negative edges, Dijkstra> satisfies the principle of optimality.
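To make that sub-path property concrete, here is a minimal Python sketch (the toy graph and weights are my own, not from any referenced source): Dijkstra finds a shortest path path1, and re-running it between two points on path1 returns a contiguous piece of path1.

```python
# Illustrative sketch of why <shortest path, Dijkstra> has optimal sub-structure.
import heapq

def dijkstra(graph, src, dst):
    """Return the shortest path from src to dst as a list of nodes."""
    dist, prev, visited = {src: 0}, {}, set()
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u in visited:
            continue
        visited.add(u)
        if u == dst:
            break
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    # reconstruct the path by walking the 'prev' pointers backwards
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return path[::-1]

# toy weighted graph (adjacency list: node -> [(neighbour, weight), ...])
g = {"a": [("b", 1), ("c", 5)], "b": [("c", 1), ("d", 4)], "c": [("d", 1)], "d": []}

path1 = dijkstra(g, "a", "d")   # ['a', 'b', 'c', 'd']
path2 = dijkstra(g, "b", "d")   # shortest path between two points lying on path1
print(path1, path2)
# path2 is a contiguous piece of path1 (node names are single chars, so string
# containment is a quick way to check the sub-path relation here)
assert "".join(path2) in "".join(path1)
```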
<Longest path on a graph with no negative edges, Dijkstra>, however, is not amenable to the principle of optimality. For example, imagine the longest (simple) path on a ring-like graph made up of a-b-c-d-e-a. The longest path from a to d is a-b-c-d (and not a-e-d). But the longest path from b to d is not b-c-d, it is b-a-e-d, which is not a sub-path of a-b-c-d. This path cannot be found using Dijkstra. Therefore <Longest path on a graph with no negative edges, Dijkstra> does not satisfy the principle of optimality. Of course <Longest path on a graph with no negative edges, some other algorithm A> may satisfy the principle of optimality! Is there any such algorithm? I do not know.
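And here is a brute-force check of the ring counterexample (illustration only; it exhaustively tries orderings of intermediate nodes, so it is only usable on tiny graphs):

```python
# Brute-force longest simple paths on the ring a-b-c-d-e-a: the longest path
# from b to d is not a piece of the longest path from a to d.
from itertools import permutations

edges = {("a", "b"), ("b", "c"), ("c", "d"), ("d", "e"), ("e", "a")}
edges |= {(v, u) for u, v in edges}          # make the ring undirected
nodes = "abcde"

def longest_simple_path(src, dst):
    """Try every ordering of intermediate nodes; keep the longest valid path."""
    best = []
    others = [n for n in nodes if n not in (src, dst)]
    for k in range(len(others) + 1):
        for mid in permutations(others, k):
            path = [src, *mid, dst]
            if all((u, v) in edges for u, v in zip(path, path[1:])):
                if len(path) > len(best):
                    best = path
    return best

print(longest_simple_path("a", "d"))   # ['a', 'b', 'c', 'd']
print(longest_simple_path("b", "d"))   # ['b', 'a', 'e', 'd'] -- not a sub-path of a-b-c-d
```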
Once we have found a tuple for which the principle of optimality applies, we can try to think through whether the problem necessarily needs a dynamic-programming kind of solution, or whether a greedy one will do.
Have a look at this as a template for proving that greedy algorithms are a good choice for a particular optimization problem, and this as an intro to knowing/finding out when greedy algorithms can work, i.e. for what structure of problems. Apparently (I will revisit this), "most problems for which the greedy approach works can be formulated in terms of weighted matroids".
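As a taste of that pattern, here is a minimal sketch (my own, not from the linked material) of the generic "matroid greedy": sort elements by weight and keep each one that preserves independence. Instantiated with the graphic matroid (edge sets that form no cycle), this is essentially Kruskal's algorithm for a maximum-weight spanning forest.

```python
# Generic matroid greedy, specialised to the graphic matroid (Kruskal-style).
def greedy_matroid(elements, weight, independent):
    """Scan elements in decreasing weight, keep those that preserve independence."""
    chosen = []
    for e in sorted(elements, key=weight, reverse=True):
        if independent(chosen + [e]):
            chosen.append(e)
    return chosen

def acyclic(edges):
    """Graphic-matroid independence test: the edge set forms no cycle (union-find)."""
    parent = {}
    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for u, v, _ in edges:
        ru, rv = find(u), find(v)
        if ru == rv:
            return False          # adding this edge would close a cycle
        parent[ru] = rv
    return True

edges = [("a", "b", 4), ("b", "c", 2), ("a", "c", 5), ("c", "d", 1)]
forest = greedy_matroid(edges, weight=lambda e: e[2], independent=acyclic)
print(forest)   # [('a', 'c', 5), ('a', 'b', 4), ('c', 'd', 1)] -- a max-weight spanning forest
```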
A difficult (for me) topic, but I think this is an important piece of work. I will follow up on this area soon. I want to try to explain the basics from the paper. http://crypto.stanford.edu/craig/craig-thesis.pdf
Nothing new for the networking community, I guess, but this is relatively new for me, so making a note. Sounds very interesting. http://info.iet.unipi.it/~luigi/dummynet/
Let's say the goal is to find out if an error occurred during transmission, and if it did, correct it. Let us look at our codes.
If the code is a binary code, i.e. the symbols are 0 and 1 and every bit pattern is a valid codeword, then a 0 may arrive in error as a 1, and a 1 as a 0. How will the decoder know? It cannot! There is no way to know.
If instead we say that the symbols are 2 bits each (i.e. drawn from 2^2 possibilities), but we allow only 2 valid symbols (00 and 11), then if we receive a 01 or a 10, we say that we have "detected an error".
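A tiny sketch of that repetition idea (illustrative only): each data bit is sent twice, and the receiver can flag, but not fix, a single flipped bit.

```python
# Length-2 repetition code: detects a single bit flip per symbol, corrects nothing.
VALID = {"00", "11"}

def encode(bits):
    return "".join(b * 2 for b in bits)          # 0 -> 00, 1 -> 11

def check(received):
    """Split into 2-bit symbols and report which ones are not valid codewords."""
    symbols = [received[i:i + 2] for i in range(0, len(received), 2)]
    return [i for i, s in enumerate(symbols) if s not in VALID]

sent = encode("101")          # '110011'
corrupted = "110111"          # one bit of the middle symbol flipped
print(check(sent))            # []  -- no error detected
print(check(corrupted))       # [1] -- error detected in symbol 1, but we can't tell if it was a 0 or a 1
```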
Further, if we want to transmit symbols that are, say, 8 (= 2^3) bits wide (e.g. ASCII), how should we transmit them? It turns out that if we want to add, subtract, multiply, and divide elements together, we are essentially talking about a mathematical structure called a field. Why do we want to add, subtract, multiply, divide elements together? The hope is that we can find some clever way to detect or correct errors. Let us investigate where this field-related stuff can take us.
One issue with a finite field is that it can be constructed only with a number of elements that is a power of p, where p is a prime (see this). Therefore one thing to try is to look at a field with 2^k elements and see where addition, multiplication, etc. lead us. We need to find the right k here, of course.
The trouble, it turns out, is that the elements of the integers mod 2^k do not all have multiplicative inverses, so instead we move to polynomial arithmetic (see this for more clarification; I will also touch on this again soon).
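A quick illustration of that point (my own example, not from the source): mod 4, the element 2 has no multiplicative inverse, whereas mod a prime every nonzero element is invertible.

```python
# Why the integers mod 2^k are not a field: some nonzero elements have no inverse.
def inverses(n):
    """Map each nonzero element of Z_n to its multiplicative inverse, if one exists."""
    return {a: next((b for b in range(1, n) if (a * b) % n == 1), None)
            for a in range(1, n)}

print(inverses(4))   # {1: 1, 2: None, 3: 3}    -- 2 has no inverse mod 4
print(inverses(5))   # {1: 1, 2: 3, 3: 2, 4: 4} -- every nonzero element invertible mod 5
```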
A quick and to-the-point intro to the algebra of polynomials and the extension of their coefficients from GF(2) to GF(2^k), based on what is defined as an irreducible polynomial over GF(2) (we look for irreducible polynomials that are also "primitive polynomials" and construct GF(2^k) using a symbol that is a root of this primitive polynomial, etc.): http://www.uotechnology.edu.iq/dep-eee/lectures/4th/communication/information%20theory/8.pdf
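As a small worked example (my own, though the linked notes build the same kind of construction), here is GF(2^3) built from the primitive polynomial x^3 + x + 1: elements are 3-bit integers, addition is XOR, and multiplication is carry-less multiplication reduced modulo the primitive polynomial. The powers of alpha = x run through every nonzero element exactly once, which is what "primitive" buys us.

```python
# GF(2^3) from the primitive polynomial x^3 + x + 1.
PRIM = 0b1011   # x^3 + x + 1

def gf8_mul(a, b):
    """Multiply two elements of GF(2^3) (shift-and-XOR with modular reduction)."""
    result = 0
    while b:
        if b & 1:
            result ^= a        # add (XOR) the current multiple of a
        b >>= 1
        a <<= 1
        if a & 0b1000:         # degree reached 3: reduce by the primitive polynomial
            a ^= PRIM
    return result

# alpha = 0b010 (the polynomial 'x') is a root of the primitive polynomial; its
# powers enumerate all 7 nonzero field elements.
alpha, x = 0b010, 1
for i in range(7):
    print(f"alpha^{i} = {x:03b}")
    x = gf8_mul(x, alpha)
```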
The point is, with GF(2) the symbols 0 and 1 are interpreted as the values that a polynomial's coefficients can take, but with GF(2^k), groups of k bits become the symbols, and based on the properties of GF(2^k) certain error correction can be done. An example is a BCH code, where the message polynomial is first multiplied by x^r, with r the degree of the generator polynomial of the code, before the parity bits are computed (see this, page 13-14 for an example). I will revisit this soon.
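Here is a sketch of that "multiply by x^r, then append the parity remainder" step, using the classic (7,4) code with generator g(x) = x^3 + x + 1 as a worked example (the generator is my choice for illustration; it is not taken from the linked notes):

```python
# Systematic encoding: shift the message polynomial up by r = deg(g) positions,
# divide by g(x) over GF(2), and append the remainder as parity bits.
GEN = 0b1011            # g(x) = x^3 + x + 1, degree r = 3

def poly_mod(dividend, divisor):
    """Remainder of GF(2) polynomial division (polynomials packed into ints)."""
    dlen = divisor.bit_length()
    while dividend.bit_length() >= dlen:
        shift = dividend.bit_length() - dlen
        dividend ^= divisor << shift       # subtract (XOR) the shifted divisor
    return dividend

def encode(msg):
    """4 message bits -> 7-bit codeword."""
    r = GEN.bit_length() - 1
    shifted = msg << r                     # the 'multiply by x^r' step
    return shifted | poly_mod(shifted, GEN)

def check(word):
    """A received word is a codeword iff it is divisible by g(x)."""
    return poly_mod(word, GEN) == 0

cw = encode(0b1101)
print(f"{cw:07b}", check(cw))              # 1101001 True  -- divisible by g(x)
print(check(cw ^ 0b0010000))               # False -- one flipped bit is detected
```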