Recurrence Relations: Introduction, Growth of functions. Combinatorics: Counting Techniques, Polya’s Counting Theory.



Previously, we discussed recursively defined functions such as
(a) Factorial function,
(b) Fibonacci sequence,
(c) Ackermann function.
Here we discuss certain kinds of recursively defined sequences {an} and their solution. We note that a sequence is simply a function whose domain is
N = {1, 2, 3, . . .} or N0 = N ∪ {0} = {0, 1, 2, 3, . . .}
We begin with some examples.


Consider the following sequence which begins with the number 3 and for which each of the following terms is found by multiplying the previous term by 2:
3, 6, 12, 24, 48, . . .
It can be defined recursively by a0 = 3, ak = 2ak−1 for k ≥ 1, or equivalently by a0 = 3, ak+1 = 2ak for k ≥ 0. The second definition is obtained from the first by replacing k with k + 1.
Clearly, the formula an = 3(2^n) gives us the nth term of the sequence without calculating any previous term.
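As a quick check, here is a short sketch comparing the recursive definition with the closed form (the function names are our own, for illustration):

```python
# Sketch: the recursive definition a_0 = 3, a_k = 2*a_{k-1} should agree
# term by term with the closed form a_n = 3 * 2^n.

def a_recursive(n):
    """a_0 = 3, a_k = 2 * a_{k-1} for k >= 1."""
    if n == 0:
        return 3
    return 2 * a_recursive(n - 1)

def a_closed(n):
    """Closed form: a_n = 3 * 2^n."""
    return 3 * 2 ** n

print([a_recursive(n) for n in range(5)])  # [3, 6, 12, 24, 48]
```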

Growth of functions

We will use big-O notation (and some siblings described later) to describe how a function grows. What we are trying to capture is the growth rate of the function, without capturing so many details that our analysis would depend on processor speed, and without worrying about what happens for small inputs, which should always be fast.
For functions f(x) and g(x), we will say that “f(x) is O(g(x))” [pronounced “f(x) is big-oh of g(x)”] if there are positive constants C and k such that
|f(x)| ≤ C|g(x)| for all x > k. The big-O notation gives us an order-of-magnitude way to describe a function's growth (as we will see in the next examples). Roughly speaking, the constant k lets us worry only about large values (or input sizes, when we apply this to algorithms), and C lets us ignore a constant-factor difference (one, two, or ten steps in a loop). One might also say “f(x) is in O(g(x))”, thinking of O(g(x)) as the set of all functions with that property.


The function f(x) = 2x^3 + 10x is O(x^3).
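To make the definition concrete, here is a small sketch that checks one valid choice of witnesses, C = 3 and k = 4, for this example (the constants are our own choice; any larger values also work):

```python
# Sketch: numerically checking |f(x)| <= C|g(x)| for x > k with
# f(x) = 2x^3 + 10x, g(x) = x^3, and the chosen witnesses C = 3, k = 4.
f = lambda x: 2 * x**3 + 10 * x
g = lambda x: x**3
C, k = 3, 4

# 2x^3 + 10x <= 3x^3 is equivalent to x^2 >= 10, which holds once x > k.
assert all(abs(f(x)) <= C * abs(g(x)) for x in range(k + 1, 1000))
print("witnesses hold")
```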

Methods of solving recurrences:-

1) Substitution Method: We make a guess for the solution and then use mathematical induction to prove that the guess is correct.
For example, consider the recurrence T(n) = 2T(n/2) + n.
We guess the solution to be T(n) = O(n log n), and use induction to prove our guess.
We need to show that T(n) ≤ cn log n for some constant c. We may assume the bound holds for all values smaller than n.
T(n) = 2T(n/2) + n
     ≤ 2 · c(n/2) log(n/2) + n
     = cn log(n/2) + n
     = cn log n − cn log 2 + n
     = cn log n − cn + n
     ≤ cn log n   (taking logs to base 2, for any c ≥ 1)
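The induction can be sanity-checked numerically on powers of two, assuming the base case T(1) = 1 (the base case is our assumption; the recurrence as stated leaves it unspecified):

```python
import math

# Sketch: evaluating T(n) = 2T(n/2) + n on powers of two with the
# assumed base case T(1) = 1, then checking the guess T(n) <= c*n*log2(n).
def T(n):
    if n == 1:
        return 1
    return 2 * T(n // 2) + n

c = 2  # one constant that makes the bound hold for n >= 2
for j in range(1, 15):
    n = 2 ** j
    assert T(n) <= c * n * math.log2(n)
print("T(n) = O(n log n) holds on the sample")
```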
2) Recurrence Tree Method: In this method, we draw a recurrence tree and calculate the time taken at every level of the tree. Finally, we sum the work done at all levels. To draw the recurrence tree, we start from the given recurrence and keep drawing until we find a pattern among the levels. The pattern is typically an arithmetic or geometric series.
3) Master Method: The Master Method is a direct way to get the solution. It works only for recurrences of the following form, or for recurrences that can be transformed into this form:
T(n) = aT(n/b) + f(n), where a ≥ 1 and b > 1.
There are three cases:
1. If f(n) = Θ(n^c) where c < log_b(a), then T(n) = Θ(n^(log_b(a))).
2. If f(n) = Θ(n^c) where c = log_b(a), then T(n) = Θ(n^c log n).
3. If f(n) = Θ(n^c) where c > log_b(a), then T(n) = Θ(f(n)).
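The three cases can be packaged into a small helper that reports the solution when f(n) = Θ(n^c). The function below is our own sketch, not a library routine:

```python
import math

# Sketch: deciding which master-theorem case applies when f(n) = Theta(n^c).
# Note: log_b(a) is computed in floating point, so exact equality with c
# is only reliable for simple values of a and b.
def master_case(a, b, c):
    """Return the asymptotic solution of T(n) = a*T(n/b) + Theta(n^c)."""
    crit = math.log(a, b)              # critical exponent log_b(a)
    if c < crit:
        return f"Theta(n^{crit:g})"    # case 1: recursion dominates
    if c == crit:
        return f"Theta(n^{c:g} log n)" # case 2: balanced
    return f"Theta(n^{c:g})"           # case 3: f(n) dominates

print(master_case(2, 2, 1))  # T(n) = 2T(n/2) + n  ->  Theta(n^1 log n)
```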


Defining combinatorics within the larger field of mathematics is not an easy task. Typically, combinatorics deals with finite structures such as graphs, hypergraphs, partitions or partially ordered sets. However, rather than the object of study, what characterizes combinatorics are its methods: counting arguments, induction, inclusion-exclusion, the probabilistic method - in general, surprising applications of relatively elementary tools, rather than gradual development of a sophisticated machinery. That is what makes combinatorics very elegant and accessible, and why combinatorial methods should be in the toolbox of any mainstream mathematician. Let’s start with a few examples where combinatorial ideas play a key role.
1. Ramsey theory. In the 1950s, the Hungarian sociologist S. Szalai studied friendship relationships between children. He observed that in any group of around 20 children, he was able to find four children who were mutual friends, or four children such that no two of them were friends. Before drawing any sociological conclusions, Szalai consulted three eminent mathematicians in Hungary at that time: Erdős, Turán and Sós. A brief discussion revealed that this is indeed a mathematical phenomenon rather than a sociological one: for any symmetric relation R on at least 18 elements, there is a subset S of 4 elements such that R contains either all pairs in S or none of them. This fact is a special case of Ramsey’s theorem, proved in 1930, the foundation of Ramsey theory, which later developed into a rich area of combinatorics.
2. Tournament paradox. Suppose that n basketball teams compete in a tournament where each pair of teams plays exactly one game. The organizers want to award the k best teams. However, whichever k teams they pick, there is always another team that beats them all! Is this possible? It can be proved using a random construction that for any k > 0 there is n > k such that this can indeed happen.
3. Brouwer’s Theorem. In 1911, Luitzen Brouwer published his famous Fixed Point Theorem: every continuous map f : B^n → B^n (where B^n is an n-dimensional ball) has a fixed point, f(x) = x. The special case n = 1 follows easily from the intermediate value theorem. For higher dimensions, however, the original proof was complicated. In 1928, Emanuel Sperner found a simple combinatorial result which implies Brouwer’s fixed point theorem in an elegant way. The proof of Sperner’s lemma is equally elegant, by double counting.
4. Borsuk’s conjecture. In 1933, Karol Borsuk published a paper which contained a proof of a conjecture stated by Stanislaw Ulam: every continuous map f : S^n → R^n sends some pair of antipodal points of the sphere to the same point, f(x) = f(−x).


The Rules of Sum and Product

The Rule of Sum and the Rule of Product are used to decompose difficult counting problems into simpler ones.

The Rule of Sum − If tasks T1, T2, …, Tm can be done in w1, w2, …, wm ways respectively (with the condition that no two tasks can be performed simultaneously), then the number of ways to do one of these tasks is w1 + w2 + ⋯ + wm. If we consider two disjoint tasks A and B (i.e. A ∩ B = ∅), then mathematically |A ∪ B| = |A| + |B|.

The Rule of Product − If tasks T1, T2, …, Tm can be done in w1, w2, …, wm ways respectively, and every task is performed after the occurrence of the previous task, then there are w1 × w2 × ⋯ × wm ways to perform all the tasks. Mathematically, if a task B arrives after a task A, then |A × B| = |A| × |B|.
Question − A boy lives at X and wants to go to School at Z. From his home X he has to first reach Y and then Y to Z. He may go X to Y by either 3 bus routes or 2 train routes. From there, he can either choose 4 bus routes or 5 train routes to reach Z. How many ways are there to go from X to Z?
Solution − From X to Y, he can go in 3 + 2 = 5 ways (Rule of Sum). Thereafter, he can go from Y to Z in 4 + 5 = 9 ways (Rule of Sum). Hence from X to Z he can go in 5 × 9 = 45 ways (Rule of Product).
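The answer can be confirmed by brute-force enumeration (the route names below are illustrative):

```python
from itertools import product

# Sketch: enumerating every X -> Y -> Z route explicitly.
x_to_y = [f"bus{i}" for i in range(1, 4)] + [f"train{i}" for i in range(1, 3)]  # 3 + 2 = 5
y_to_z = [f"bus{i}" for i in range(1, 5)] + [f"train{i}" for i in range(1, 6)]  # 4 + 5 = 9

routes = list(product(x_to_y, y_to_z))
print(len(routes))  # 45 = 5 * 9
```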


A permutation is an arrangement of some elements in which order matters. In other words, a permutation is an ordered combination of elements.
Examples − From the set S = {x, y, z}, taking two at a time, all permutations are:
xy, yx, xz, zx, yz, zy.
We can also form three-digit numbers from the set S = {1, 2, 3}; different numbers are formed as we arrange the digits. The permutations are 123, 132, 213, 231, 312, 321.
Number of Permutations − The number of permutations of n different things taken r at a time is denoted by nPr:
nPr = n!/(n−r)!, where n! = 1·2·3·…·(n−1)·n.
Proof − Let there be n different elements.
There are n ways to fill the first place. After filling the first place, n − 1 elements are left; hence there are n − 1 ways to fill the second place. After filling the first and second places, n − 2 elements are left; hence there are n − 2 ways to fill the third place. Generalizing, the number of ways to fill the r-th place is n − (r − 1) = n − r + 1. So the total number of ways to fill the places from the first up to the r-th is:
nPr = n(n−1)(n−2)…(n−r+1)
    = [n(n−1)(n−2)…(n−r+1)][(n−r)(n−r−1)…3·2·1] / [(n−r)(n−r−1)…3·2·1]
Hence,
nPr = n!/(n−r)!
Some important formulas of permutation − If there are n elements of which a1 are alike of one kind, a2 are alike of another kind, a3 are alike of a third kind, and so on, with ar alike of the r-th kind, where a1 + a2 + ⋯ + ar = n,
then the number of permutations of these n objects is n!/[(a1!)(a2!)…(ar!)].
Number of permutations of n distinct elements taking n at a time: nPn = n!.
The number of permutations of n dissimilar elements taking r at a time, when x particular things always occupy definite places: (n−x)P(r−x).
The number of permutations of n dissimilar elements when r specified things always come together: r!(n−r+1)!.
The number of permutations of n dissimilar elements when r specified things never come together: n! − r!(n−r+1)!.
The number of circular permutations of n different elements taken x at a time: nPx/x.
The number of circular permutations of n different things: nPn/n = (n−1)!.
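These counts are easy to cross-check by enumeration; the sketch below verifies nPr = n!/(n−r)! against itertools (the helper name nPr is our own):

```python
from itertools import permutations
from math import factorial

# Sketch: checking nPr = n!/(n-r)! by brute-force enumeration.
def nPr(n, r):
    return factorial(n) // factorial(n - r)

n = 5
for r in range(n + 1):
    assert len(list(permutations(range(n), r))) == nPr(n, r)

print(nPr(3, 2))  # 6 ordered pairs from a 3-element set
```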


A combination is a selection of some given elements in which order does not matter. The number of all combinations of n things taken r at a time is nCr = n!/[r!(n−r)!].

Pascal's Identity:-

Pascal's identity, first derived by Blaise Pascal in the 17th century, states that the number of ways to choose k elements from n elements equals the sum of the number of ways to choose k − 1 elements from n − 1 elements and the number of ways to choose k elements from n − 1 elements.
Mathematically, for any positive integers k and n: nCk = (n−1)C(k−1) + (n−1)Ck.
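The identity can be verified directly for small values:

```python
from math import comb

# Sketch: verifying Pascal's identity C(n,k) = C(n-1,k-1) + C(n-1,k)
# over a range of small n and k.
for n in range(1, 20):
    for k in range(1, n):
        assert comb(n, k) == comb(n - 1, k - 1) + comb(n - 1, k)

print(comb(5, 2))  # 10
```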

Pigeonhole Principle

In 1834, the German mathematician Peter Gustav Lejeune Dirichlet stated a principle which he called the drawer principle. Today it is known as the pigeonhole principle.
The Pigeonhole Principle states that if there are fewer pigeonholes than pigeons, and each pigeon is put into a pigeonhole, then there must be at least one pigeonhole with more than one pigeon. Equivalently: if n pigeons are put into m pigeonholes with n > m, then some pigeonhole contains more than one pigeon.
Ten men are in a room and they are taking part in handshakes. If each person shakes hands at least once and no man shakes the same man’s hand more than once, then two men must have taken part in the same number of handshakes: each man's count lies between 1 and 9, so the ten counts must share one of only nine possible values.
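The handshake argument can be simulated: in any handshake graph on ten men where everyone shakes at least once, the ten handshake counts fall into the nine possible values 1 through 9, so two counts must coincide. A sketch (the random graphs are our own construction):

```python
import random
from collections import Counter

# Sketch: random handshake graphs on 10 men, each shaking at least once;
# pigeonhole forces two men to have equal handshake counts.
random.seed(0)
for _ in range(100):
    # a cycle of handshakes guarantees everyone shakes at least once
    shakes = {frozenset((m, (m + 1) % 10)) for m in range(10)}
    for a in range(10):
        for b in range(a + 1, 10):
            if random.random() < 0.5:
                shakes.add(frozenset((a, b)))
    counts = Counter(m for pair in shakes for m in pair)
    # 10 counts, but only 9 possible values (1..9): two must coincide
    assert len(set(counts.values())) < 10
print("two equal handshake counts in every trial")
```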
There must be at least two people in a class of 30 whose names start with the same letter.

The Inclusion-Exclusion Principle

The inclusion-exclusion principle computes the cardinality of the union of multiple non-disjoint sets. For two sets A and B, the principle states:
|A ∪ B| = |A| + |B| − |A ∩ B|
For three sets A, B and C, the principle states:
|A ∪ B ∪ C| = |A| + |B| + |C| − |A ∩ B| − |A ∩ C| − |B ∩ C| + |A ∩ B ∩ C|
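Both identities are easy to check on concrete sets:

```python
# Sketch: checking inclusion-exclusion on small example sets.
A, B, C = {1, 2, 3, 4}, {3, 4, 5}, {4, 5, 6, 7}

assert len(A | B) == len(A) + len(B) - len(A & B)
assert len(A | B | C) == (len(A) + len(B) + len(C)
                          - len(A & B) - len(A & C) - len(B & C)
                          + len(A & B & C))
print(len(A | B | C))  # 7
```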

Polya’s Counting Theory

In combinatorics, there are very few formulas that apply comprehensively to all cases of a given problem. Pólya’s Counting Theory is a spectacular tool that allows us to count the number of distinct items given a certain number of colors or other characteristics. Basic questions we might ask are, “How many distinct squares can be made with blue or yellow vertices?” or “How many necklaces with n beads can we create with clear and solid beads?” We will count two objects as ‘the same’ if one can be rotated or flipped to produce the other. While these questions may seem uncomplicated, there is a lot of mathematical machinery behind them: in addition to counting all possible positions for each configuration, we must be sure not to recount a configuration that is actually the same as another. We can use Burnside’s Lemma to enumerate the number of distinct objects. However, sometimes we will also want to know more about the characteristics of these distinct objects. Pólya’s Counting Theory is uniquely useful because it acts as a picture function, actually producing a polynomial that demonstrates what the different configurations are and how many of each exist. As such, it has numerous applications; some that will be explored here include chemical isomer enumeration, graph theory and music theory. This paper will first work through proving and understanding Pólya’s theory, and then move towards surveying applications. Throughout the paper we will work heavily with examples to illuminate the simplicity of the theorem beyond its notation.

2 Burnside’s Lemma
2.1 Group Theory

We will first clarify some basic notation. Let S be a finite set; then |S| denotes the number of its elements. If G is a group, then |G| represents the number of elements in G and is called the order of the group. Finally, if G is a group of permutations of a set S, then |S| is called the degree of the permutation group. Gallian [3] provides the following definitions, necessary for Theorem 1 and Theorem 2.
Definition 1. Coset of H in G − Let G be a group and H a subset of G. For any a ∈ G, the set {ah | h ∈ H} is denoted by aH. Analogously, Ha = {ha | h ∈ H} and aHa⁻¹ = {aha⁻¹ | h ∈ H}. When H is a subgroup of G, the set aH is called the left coset of H in G containing a, whereas Ha is called the right coset of H in G containing a.
Definition 2. Stabilizer of a Point − Let G be a group of permutations of a set S. For each i ∈ S, let stabG(i) = {φ ∈ G | φ(i) = i}. We call stabG(i) the stabilizer of i in G. The set stabG(i) is a subset of G and is also a subgroup of G. We know it is nonempty because the identity element certainly fixes i ∈ S. If φa, φb ∈ stabG(i), then φa(i) = i and φb(i) = i. Thus (φaφb⁻¹)(i) = φa(φb⁻¹(i)) = φa(i) = i, which confirms that φaφb⁻¹ ∈ stabG(i).

Definition 3. Orbit of a Point − Let G be a group of permutations of a set S. For each s ∈ S, let orbG(s) = {φ(s) | φ ∈ G}. The set orbG(s) is a subset of S called the orbit of s under G. All the objects in a given orbit will be considered the same, or isomorphic; for counting purposes, we will be counting the number of orbits. Tucker [10] offers the following definition for the elements fixed by φ.
Definition 4. Elements Fixed by φ − For any group G of permutations on a set S and any φ in G, we let fix(φ) = {i ∈ S | φ(i) = i}. This set is called the elements fixed by φ.
2.2 The Orbit-Stabilizer Theorem

Gallian [3] also proves the following two theorems.

Theorem 1. Lagrange’s Theorem − If G is a finite group and H is a subgroup of G, then |H| divides |G|. Moreover, the number of distinct left cosets of H in G is |G|/|H|.
Proof. Let a1H, a2H, . . ., arH denote the distinct left cosets of H in G. For each a in G, we have aH = aiH for some i, and a ∈ aH. Thus, each member of G belongs to one of the cosets aiH. In symbols, G = a1H ∪ · · · ∪ arH. Since these cosets are disjoint, |G| = |a1H| + |a2H| + · · · + |arH|. Because |aiH| = |H| for each i, we have |G| = r|H|. Here are some basic examples of this theorem.
Example 1. Consider the group Z15 and its subgroup H = {0, 3, 6, 9, 12}. Then |H| = 5, which divides 15, and there are 15/5 = 3 distinct left cosets. They are H, 1 + H = {1, 4, 7, 10, 13}, and 2 + H = {2, 5, 8, 11, 14}.
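The coset computation can be replicated directly in code:

```python
# Sketch: the left cosets of H = {0,3,6,9,12} in Z_15, computed directly.
H = {0, 3, 6, 9, 12}
cosets = {frozenset((a + h) % 15 for h in H) for a in range(15)}

print(len(cosets))  # 3 = |Z_15| / |H|, as Lagrange's Theorem predicts
```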
Example 2. Consider the dihedral group D4. It consists of the elements R0, R90, R180, R270, H, V, D, and D′, where each Ri is the rotation by i degrees, H is a flip with respect to the horizontal line, V is a flip with respect to the vertical line, D is a flip with respect to the diagonal through the lower left and upper right corners, and D′ is a flip with respect to the other diagonal. Consider the subgroup S = {R0, H}. As expected, |S| = 2 divides |D4| = 8, and the subgroup has four distinct left cosets: R0S, R90S, R180S, and R270S.
Theorem 2. Orbit-Stabilizer Theorem − Let G be a finite group of permutations of a set S. Then, for any i from S, |G| = |orbG(i)| |stabG(i)|.

Proof. By Lagrange’s Theorem, |G|/|stabG(i)| is the number of distinct left cosets of stabG(i) in G. Define a correspondence T from these cosets to the orbit of i by T(φ stabG(i)) = φ(i). If T is a one-to-one correspondence between the left cosets of stabG(i) and the elements in the orbit of i, we are done. To show that T is well-defined, we must show that α stabG(i) = β stabG(i) implies α(i) = β(i). But α stabG(i) = β stabG(i) implies α⁻¹β ∈ stabG(i), so that (α⁻¹β)(i) = i and, therefore, β(i) = α(i). Because these arguments can be reversed from the last step to the first, T is one-to-one. To show T is onto orbG(i), let j ∈ orbG(i). Then α(i) = j for some α ∈ G, and T(α stabG(i)) = α(i) = j. Thus T is onto.

2.3 Burnside’s Lemma

Tucker [10] provides the following theorem.
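As a concrete check of the Orbit-Stabilizer Theorem, the following sketch builds D4 as permutations of the square's four vertices (the generators and helper names below are our own encoding) and verifies |G| = |orbG(i)| · |stabG(i)| for every vertex:

```python
# Sketch: D4 as permutations of the vertices {0,1,2,3} of a square,
# with the vertices labeled in cyclic order.

def compose(p, q):
    # (p o q)(i) = p[q[i]]: apply q first, then p
    return tuple(p[q[i]] for i in range(4))

r = (1, 2, 3, 0)  # rotation by 90 degrees
s = (1, 0, 3, 2)  # a reflection, swapping 0<->1 and 2<->3

G = {(0, 1, 2, 3)}            # start with the identity
frontier = [r, s]
while frontier:               # close under composition with the generators
    g = frontier.pop()
    if g not in G:
        G.add(g)
        frontier.extend(compose(g, h) for h in (r, s))

for i in range(4):
    orb = {g[i] for g in G}                  # orbit of vertex i
    stab = [g for g in G if g[i] == i]       # stabilizer of vertex i
    assert len(G) == len(orb) * len(stab)    # |G| = |orb| * |stab|

print(len(G))  # 8
```

Each vertex has orbit size 4 and stabilizer size 2 (the identity and one diagonal reflection), so 8 = 4 · 2, as the theorem predicts.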
Theorem 3. Burnside’s Theorem − If G is a finite group of permutations on a set S, then the number of orbits of G on S is
(1/|G|) Σφ∈G |fix(φ)|.
Burnside’s Lemma can be described as finding the number of distinct orbits by taking the average size of the fixed sets. Gallian [3] provides the proof.

Proof. Let n denote the number of pairs (φ, i), with φ ∈ G, i ∈ S, and φ(i) = i. We begin by counting these pairs in two ways. First, for each particular φ in G, the number of such pairs is exactly |fix(φ)|, as i runs over S. So,
n = Σφ∈G |fix(φ)|. (1)
Second, for each particular i in S, observe that |stabG(i)| is exactly the number of pairs of the form (φ, i), as φ runs over G. So,
n = Σi∈S |stabG(i)|. (2)
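The lemma is easy to test on the earlier example of 2-colored squares under the dihedral group D4 (the vertex permutations below are our own encoding of D4 acting on the square's vertices):

```python
from itertools import product

# Sketch: applying Burnside's lemma to 2-colorings of a square's vertices
# under D4; the lemma predicts (1/|G|) * sum |fix(phi)| distinct colorings.
G = [
    (0, 1, 2, 3), (1, 2, 3, 0), (2, 3, 0, 1), (3, 0, 1, 2),  # rotations
    (1, 0, 3, 2), (3, 2, 1, 0), (0, 3, 2, 1), (2, 1, 0, 3),  # reflections
]
colorings = list(product("BY", repeat=4))  # blue/yellow vertex colorings

def fix(phi):
    # colorings left unchanged when the vertices are permuted by phi
    return [c for c in colorings if all(c[phi[i]] == c[i] for i in range(4))]

orbits = sum(len(fix(phi)) for phi in G) // len(G)
print(orbits)  # 6 distinct 2-colored squares
```

The total of the fixed-set sizes is 48, and 48/8 = 6: up to rotation and flipping there are exactly six blue/yellow squares.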