
Thursday, December 12, 2013

Escape local optimum trap

Optimization is a very common technique in computer science and machine learning to search for the best (or good enough) solution.  Optimization itself is a big topic and involves a wide range of mathematical techniques in different scenarios.


In this post, I will focus on local search, which is a very popular technique for searching for an optimal solution through a series of greedy local moves.  The general setting of local search is as follows ...

1. Define an objective function
2. Pick an initial starting point
3. Repeat
     3.1 Find a neighborhood
     3.2 Locate a subset of neighbors that is a candidate move
     3.3 Select a candidate from the candidate set
     3.4 Move to the candidate

One requirement is that the optimal solution needs to be reachable by a sequence of moves.  Usually this requires a proof that any arbitrary state is reachable from any other arbitrary state through a sequence of moves.

Notice that there are many possible strategies for each of steps 3.1, 3.2 and 3.3.  For example, one strategy can examine all members within the neighborhood, pick the best one (in terms of the objective function) and move to it.  Another strategy can randomly pick a member within the neighborhood, and move to that member if it is better than the current state.

Regardless of the strategy, the general theme is to move towards members that improve the objective function, hence the greedy nature of this algorithm.

One downside of this algorithm is that it can be trapped in a local optimum, which is the best candidate within its neighborhood but not the best candidate in the global sense.

In the following, we'll explore a couple of meta-heuristic techniques that can mitigate the local optimum trap.

Multiple rounds

We basically conduct k rounds of local search, each round producing a local optimum, and then pick the best out of these k local optima.

Simulated Annealing

This strategy involves a dynamic combination of exploitation (moving to a better neighbor) and exploration (a random walk to a worse neighbor).  The algorithm works in the following way (a sketch in code follows the outline) ...

1. Pick an initial starting point
2. Repeat until terminate condition
     2.1 Within neighborhood, pick a random member
     2.2 If neighbor is better than me
           move to the neighbor
         else
           With chance exp(-(myObj - neighborObj)/Temp)
               move to the worse neighbor
     2.3 Temp = alpha * Temp
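
To make this concrete, here is a minimal Python sketch of the loop above; the neighbors() and objective() functions, the starting temperature and the cooling factor are all hypothetical placeholders that you would supply for your own problem.

import math
import random

def simulated_annealing(initial, neighbors, objective, temp=1.0, alpha=0.95, min_temp=1e-3):
    # Maximize objective() by greedy moves plus occasional random walks to worse neighbors.
    current = initial
    while temp > min_temp:
        candidate = random.choice(neighbors(current))      # 2.1 pick a random neighbor
        delta = objective(candidate) - objective(current)
        if delta >= 0:
            current = candidate                            # 2.2 better neighbor: always move
        elif random.random() < math.exp(delta / temp):     # chance exp(-(myObj - neighborObj)/Temp)
            current = candidate                            # worse neighbor: move occasionally
        temp *= alpha                                      # 2.3 cool down the temperature
    return current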

Tabu Search

This strategy maintains a list of previously visited states (called the Tabu list) and makes sure these states will not be re-visited in subsequent exploration.  The search keeps exploring the best move while skipping previously visited nodes, so the algorithm explores paths that haven't been visited before.  The search also remembers the best state obtained so far.  A sketch in code is given after the outline below.

1. Initialization
     1.1 Pick an initial starting point S
     1.2 Initial an empty Tabu list
     1.3 Set the best state to S
     1.4 Put S into the Tabu list
2. Repeat until terminate condition
     2.1 Find a neighborhood
     2.2 Locate a smaller subset that is a candidate move
     2.3 Remove elements that are already in the Tabu list
     2.4 Select the best candidate and move there
     2.5 Add the new state to the Tabu list
     2.6 If the new state is better than the best state
          2.6.1 Set the best state to this state
     2.7 If the Tabu list is too large
          2.7.1 Trim Tabu list by removing old items
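
Here is a minimal Python sketch of this outline, again with hypothetical neighbors() and objective() functions; a real Tabu list is often keyed on a compact signature of the state rather than the full state.

def tabu_search(initial, neighbors, objective, max_iters=1000, tabu_size=100):
    # Greedy local search that refuses to revisit states kept in the Tabu list.
    current = best = initial
    tabu = [initial]                                      # 1.2 / 1.4 Tabu list seeded with the start state
    for _ in range(max_iters):                            # 2. terminate condition: iteration budget
        candidates = [n for n in neighbors(current) if n not in tabu]   # 2.1 - 2.3
        if not candidates:
            break
        current = max(candidates, key=objective)          # 2.4 best candidate not in the Tabu list
        tabu.append(current)                              # 2.5 remember the new state
        if objective(current) > objective(best):          # 2.6 track the best state so far
            best = current
        if len(tabu) > tabu_size:                         # 2.7 trim the oldest entries
            tabu.pop(0)
    return best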

Tuesday, February 12, 2013

Basic Planning Algorithm

You can think of planning as a graph search problem where each node in the graph represents a possible "state" of the reality. A directed edge from nodeA to nodeB represents an "action" that transitions stateA to stateB.

Planning can be thought of as another form of constraint optimization problem, which is quite different from the one I described in my last blog. In this case, the constraint is the goal state we want to achieve, and a sequence of actions needs to be found to meet the constraint. The sequence of actions will incur cost, and our objective is to minimize the cost associated with our chosen actions.

Basic Concepts 

A "domain" defined the structure of the problem.
  • A set of object types.  e.g. ObjectTypeX, ObjectTypeY ... etc.
  • A set of relation types  e.g. [ObjectTypeX RelationTypeA ObjectTypeY] or [ObjectTypeX RelationTypeA ValueTypeY]
A "state" is composed of a set of relation instances,  It can either be a "reality state" or a "required state".

A reality state contains tuples of +ve atoms, e.g. [(personX in locationA), (personX is male)].  Notice that -ve atoms do not exist in a reality state; e.g. if personX is NOT in locationB, such a tuple simply will not show up in the state.

A required state contains both +ve and -ve atoms, e.g. [(personX in locationA), NOT(personX is male)].  The required state is used to check against the reality state.  The required state is reached if all of the following are true.
  • All +ve atoms in the required state are contained in the +ve atoms of the reality state
  • None of the -ve atoms in the required state are contained in the +ve atoms of the reality state
Notice that there can be a huge (or even infinite) number of nodes and edges in the graph if we were to expand the whole graph (with all possible states and possible actions).  Normally we express only a subset of nodes and edges in an analytical way.  Instead of enumerating all possible states, we describe a state as the set of relations we care about; in particular, we describe the initial state of the environment with all the things we observed, and the goal state as what we want to reach.  Similarly, we do not enumerate every possible edge; instead we describe actions with variables, so that one rule can describe the transitions of many different states.


An "action" causes transition from one state to the other.  It is defined as action(variable1, variable2 ...) and contains the following components.
  • Pre-conditions: a required state containing a set of tuples (expressed by variables).  The action is feasible if the current reality state contains all the +ve atoms but not any -ve atoms specified in the pre-conditions.
  • Effects: A set of +ve atoms and -ve atoms (also expressed by variables).  After the action is taken, it removes all the -ve atoms from the current reality state and then insert all the +ve atoms into the current reality state.
  • Cost of executing this actio.
Notice that since actions contains variables but the reality state does not, therefore before an action can be execution, we need to bind the variables in the pre-conditions to a specific value such that it will match the current reality state.  This binding will propagate to the variable in the effects of the actions and new atoms will be insert / removed from the reality state.
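
As a rough illustration, here is how a reality state, a required state check, and one grounded action (variables already bound) could be represented in Python; the atoms and the action itself are made-up examples, not part of any particular planner.

# A reality state is a set of +ve atoms; a required state is a (+ve atoms, -ve atoms) pair.
reality = {("personX", "in", "locationA"), ("personX", "is", "male")}

def satisfies(reality_state, positive, negative):
    # The required state is reached if all +ve atoms hold and none of the -ve atoms hold.
    return positive <= reality_state and not (negative & reality_state)

# A grounded action: pre-conditions, effects and cost, with variables already bound.
move_to_b = {
    "pre_pos": {("personX", "in", "locationA")},    # +ve pre-conditions
    "pre_neg": set(),                               # -ve pre-conditions
    "add":     {("personX", "in", "locationB")},    # +ve effects
    "delete":  {("personX", "in", "locationA")},    # -ve effects
    "cost":    1,
}

def apply_action(reality_state, action):
    # Remove the -ve effects, then insert the +ve effects.
    return (reality_state - action["delete"]) | action["add"]

if satisfies(reality, move_to_b["pre_pos"], move_to_b["pre_neg"]):
    reality = apply_action(reality, move_to_b)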

Planning Algorithm

This can be thought of as a search problem.  Given an initial state and a goal state, our objective is to search for a sequence of actions such that the goal state is reached.



We can perform the search from the initial state, expand all the possible states that can be reached by taking some actions, and check during this process whether the goal state has been reached.  If so, terminate the process and return the path.

Forward planning builds the plan from the initial state.  It works as follows (a sketch in code is given after the list) ...
  1. Put the initial state into the exploration queue, with an empty path.
  2. Pick a state (together with its path from the initial state) from the exploration queue as the current state according to some heuristics.
  3. If this current state is the goal state, then return its path, which contains the sequence of actions, and we are done.  Else move on.
  4. For this current state, explore which actions are possible by checking whether the current state meets the pre-conditions (ie: contains all the +ve and none of the -ve atoms specified in the action's pre-conditions).
  5. If the action is feasible, compute the next reachable state and its path (by adding this action to the original path), and insert the next state into the exploration queue.
  6. Repeat 5 for all feasible actions of the current state.
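
Here is a minimal sketch of this loop in Python, reusing the satisfies() and apply_action() helpers from the earlier sketch; it uses a plain FIFO queue (ie: no heuristic) and assumes the actions have already been grounded and are given as a dictionary of name to action.

from collections import deque

def forward_plan(initial_state, goal_pos, goal_neg, actions):
    # Breadth-first forward search over reality states.
    start = frozenset(initial_state)
    queue = deque([(start, [])])                       # 1. initial state with an empty path
    seen = {start}
    while queue:
        state, path = queue.popleft()                  # 2. pick a state from the queue
        if satisfies(state, goal_pos, goal_neg):       # 3. goal reached: return the plan
            return path
        for name, action in actions.items():           # 4. feasibility check
            if satisfies(state, action["pre_pos"], action["pre_neg"]):
                nxt = frozenset(apply_action(set(state), action))   # 5. next reachable state
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, path + [name])) # extend the path with this action
    return None                                        # no plan found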

Alternatively, we can perform the search from the goal state.  We look at what needs to be accomplished and identify which actions can accomplish it (ie: the effects of the action meet the goal state).  Then we look at whether those actions are feasible (ie: the initial state meets the action's pre-conditions).  If so we can execute the action; otherwise we take the action's pre-conditions as our sub-goal and expand our overall goal state.

Backward planning builds the plan from the goal state.  It works as follows ...
  1. Put the goal state into the exploration queue, with an empty path.
  2. Pick a regression state (a state that can reach the goal state, which can be considered a sub-goal) from the exploration queue according to some heuristics.
  3. If the regression state is contained in the initial state, then we are done and return the path as the plan.  Else move on.
  4. From this regression state, identify all "relevant actions": those actions that have some +ve effect contained in the regression state, none of whose +ve effects overlap with the -ve atoms of the regression state, and none of whose -ve effects overlap with the +ve atoms of the regression state.
  5. If the action is relevant, compute the next regression state by removing the action's effects from the current regression state and adding the action's pre-conditions into the current regression state, and insert the next regression state into the exploration queue.
  6. Repeat 5 for all relevant actions of the current regression state.

Heuristic Function

In the above algorithms, to pick the next candidate from the exploration queue we can employ many strategies.
  • If we pick the oldest element in the queue, this is a breadth-first search
  • If we pick the youngest element in the queue, this is a depth-first search
  • We can pick the best element in the queue based on some value function.
Notice that what is "best" is very subjective and also domain specific. A very popular approach is A* search, whose value function = g(thisState) + h(thisState).

Notice that g(thisState) is the accumulated cost of moving from the initial state to "thisState", while h(thisState) is a domain-specific function that estimates the cost from "thisState" to the goal state.  It can be proved that for A* search to return an optimal solution (ie: the least cost path), the chosen h(state) function must not over-estimate the actual cost of moving from "thisState" to the goal state.

Here is some detail of A* search.

Wednesday, January 2, 2013

MapReduce: Detecting Cycles in Network Graph

I recently received an email from a reader of my blog on Map/Reduce algorithm design, asking how to detect whether a graph is acyclic using Map/Reduce.  I think this is an interesting problem and can imagine a wide range of applications for it.

Although I haven't solved this exact problem in the past, I'd like to sketch out my thoughts on a straightforward approach, which may not be highly optimized.  My goal is to invite other readers who have solved this problem to share their tricks.

To define the problem:  Given a simple directed graph, we want to tell whether it contains any cycles.

Let's say the graph is represented in incidence edge format where each line represents an edge from one node to another node.  The graph can be split into multiple files.

node1, node2
node4, node7
node7, node1
....

Here is a general description of the algorithm.

We'll maintain two types of graph data.
  1. A set of link files where each line represents an edge in the graph.  These files will be static.
  2. A set of path files where each line represents a path from one node to another node.  These files will be updated in each round of map/reduce.
The general idea is to grow the path files until either the paths cannot grow further or a path contains a cycle.  We'll use two global flags to keep track of that: a "change" flag to keep track of whether a new path has been discovered, and a "cycle" flag to keep track of whether a cycle has been detected.

The main driver program will create the initial path files by copying the link files.  And it will set the initial flag condition: set the cycle flag to false and the change flag to true.  In each round, the driver will
  • Check that the cycle flag is still false and the change flag is true; exit if this is not the case
  • Otherwise, clear the change flag and invoke the map/reduce
Within each round, the map function will do the following
  • Emit the tuple [key=toNode, value=fromNode] if it is a path
  • Emit the tuple [key=fromNode, value=toNode] if it is a link
This ensures that a path and all extending links reach the same reducer, which will do the following (a sketch in code follows the list)
  • Check whether the beginning point of the path is the same as the endpoint of the link.  If so, it detects a cycle and sets the cycle flag to true.
  • Otherwise, it computes the product of every path with every link, forming new paths that effectively extend the length by one.
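
Here is a simplified, single-machine Python sketch of one round's map and reduce functions; the record layout and the flags dictionary stand in for whatever your Map/Reduce framework and driver actually provide, and bookkeeping such as carrying forward paths that did not grow is omitted.

def map_fn(record):
    # record is ("path", (fromNode, toNode)) or ("link", (fromNode, toNode)).
    kind, (src, dst) = record
    if kind == "path":
        yield dst, ("path", src)        # key a path by its end node, value = its start node
    else:
        yield src, ("link", dst)        # key a link by its start node, value = its end node

def reduce_fn(node, values, flags):
    # Join every path ending at `node` with every link leaving `node`.
    path_starts = [v for kind, v in values if kind == "path"]
    link_ends   = [v for kind, v in values if kind == "link"]
    for start in path_starts:
        for end in link_ends:
            if start == end:
                flags["cycle"] = True                 # path start == link end: cycle detected
            else:
                flags["change"] = True                # a longer path was discovered
                yield ("path", (start, end))          # the path extended by one edge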

The following diagram illustrates the algorithm ...




Sunday, September 27, 2009

Reinforcement Learning

Reinforcement Learning (RL) is a type of Machine Learning distinct from "supervised learning" (which has a teaching phase that shows the learner the mapping between inputs and correct answers) and "unsupervised learning" (which discovers clusters and outliers from a set of input samples).

In RL, there exists a set of "states" (from the environment) in which the agent makes a decision about which action to take; the action causes the agent to transfer to a different state. A reward is assigned to the agent after this state transition. During the RL process, the agent's goal is to go through a trial and error process to learn what the optimal decision at each state would be such that the reward is maximized.

The hard part of RL is knowing which action has a long term effect on the final outcome. For example, a wrong decision may not have an immediate bad result and may therefore be hidden. RL is about how to assign blame to previous decisions when a bad outcome is detected.

Basic Iteration Approach
There is a reward matrix, where each row represents the from-state and each column represents the to-state. The cell (i, j) represents the "immediate reward" obtained when moving from state i to state j.

The goal is to find an optimal policy which recommends the action that should be taken at each state in order to maximize the sum of reward.

We can use a value vector whose element (i) represents the agent's perception of the overall reward to be gained if it is at state (i). At the beginning, the value vector is initialized arbitrarily (the pseudocode below uses all zeros). We use the following iterative approach to modify the value vector until it converges.

def learn_value_vector
  current_state = initial_state
  set value_vector to all zeros
  repeat until value_vector.converges
    # Need to enumerate all reachable next states
    for each state(j) reachable by current state(i)
      Take action to reach next state(j)
      Collect reward(i, j)
      action_value(j) = reward(i, j) + discount * value_vector(j)
    end
    # Since you will always take the path of max overall reward
    value_vector(i) = max_over_j(action_value(j))
    current_state = state(maxj)
  end
end
After we figure out this value vector, deriving the policy is straightforward. We just need to look across the values of all subsequent next states and pick the one with the highest value.

def learn_policy1
  for each state(i)
    best_transition = nil
    max_value = 0
    for each state(j) reachable from state(i)
      if value(j) > max_value
        best_transition = j
        max_value = value(j)
      end
    end
    policy(i) = best_transition
  end
end

One problem of this approach is that it requires us to try out all possible actions and evaluate all the rewards to the next states. So there is an improved iterative approach described below.

Q-Learning
In Q-Learning, we use a Q matrix instead of the value vector. Instead of estimating the value of each state, we estimate the value of each transition from the current state. In other words, we associate the value with the (state, transition) pair instead of just the state.

Therefore, the cell (i, j) of the Q matrix represents the agent's perceived value of the transition from state(i) to state(j). We use the following iterative approach to modify the Q matrix until it converges.

def learn_q_matrix
  current_state = initial_state
  set Q matrix to all zeros
  repeat until Q matrix converges
    select the next state(j) randomly
    collect reward(i, j)
    value(j) = max Q(j, k) across k
    Q(i, j) = reward(i, j) + discount * value(j)
  end
end
After figuring out the Q matrix, the policy at state (i) is simply to pick the state(j) which has the max Q(i, j) value.

def learn_policy2
  for each state(i)
    best_transition = nil
    max_value = 0
    for each state(j) reachable from state(i)
      if Q(i,j) > max_value
        best_transition = j
        max_value = Q(i,j)
      end
    end
    policy(i) = best_transition
  end
end
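
Here is a small runnable sketch of the same procedure using numpy; the 4-state reward matrix, the discount factor and the episode counts are made-up numbers purely for illustration.

import numpy as np

# Illustrative reward matrix; -1 marks transitions that are not allowed.
R = np.array([[-1,  0, -1,  -1],
              [ 0, -1,  0, 100],
              [-1,  0, -1,  -1],
              [-1,  0, -1, 100]], dtype=float)

Q = np.zeros_like(R)
discount = 0.8
rng = np.random.default_rng(0)

for _ in range(500):
    i = rng.integers(R.shape[0])                     # start each episode from a random state
    for _ in range(20):
        reachable = np.where(R[i] >= 0)[0]           # states reachable from state i
        j = rng.choice(reachable)                    # select the next state randomly
        Q[i, j] = R[i, j] + discount * Q[j].max()    # Q(i,j) = reward(i,j) + discount * value(j)
        i = j

policy = Q.argmax(axis=1)   # at state i, move to the state j with the max Q(i, j)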

The relationship between the action (what the agent does) and the state transition (what new state it ends up in) isn't necessarily deterministic. In real life, an action and its effect are usually probabilistic rather than deterministic (e.g. if you leave your house early, you are more likely to reach your office earlier, but it is not guaranteed). Imagine a probabilistic state transition diagram, where each action has multiple branches leading to each possible next state, with a probability assigned to each branch. Making decisions in this model is called a Markov Decision Process.

The Q-learning approach described above also works for Markov Decision Processes.

For some good articles in RL,
Reinforcement Learning: A tutorial
Q-learning by examples


Friday, May 8, 2009

Machine Learning: Linear Model

Linear Model is a family of model-based learning approaches that assume the output y can be expressed as a linear algebraic relation with the input attributes x1, x2 ...

Here our goal is to learn the parameters of the underlying model, which are the coefficients.

Linear Regression

Here the input and output are both real numbers, related through a simple linear relationship. The learning goal is to figure out the hidden weight value (ie: the W vector).

Given a batch of training data, we want to figure out the weight vector W such that the total sum of error (the difference between the predicted output and the actual output) is minimized.


Instead of using the batch processing approach, a more effective approach is to learn incrementally (updating the weight vector for each input data point) using a gradient descent approach.

Gradient Descent

Gradient descent is a very general technique that we can use to incrementally adjust the parameters of the linear model. The basic idea of "gradient descent" is to adjust each dimension (w0, w1, w2) of the W vector according to its contribution to the square error. That contribution is measured by the gradient along the dimension, which is the differentiation of the square error with respect to w0, w1, w2.

In the case of Linear Regression ...
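
A minimal sketch of the incremental update in Python, under the usual assumption of a squared-error objective; the learning rate, epoch count and the tiny data set are arbitrary illustrations.

import numpy as np

def sgd_linear_regression(X, y, learning_rate=0.01, epochs=50):
    # Adjust each w_i by its gradient of the squared error, one sample at a time.
    X = np.c_[np.ones(len(X)), X]             # prepend x0 = 1 so w0 acts as the bias
    W = np.zeros(X.shape[1])
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            error = xi @ W - yi               # predicted output minus actual output
            W -= learning_rate * error * xi   # gradient of (error ** 2) / 2 with respect to W
    return W

# hypothetical data where y is roughly 2 * x + 1
X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([1.0, 3.1, 4.9, 7.2])
W = sgd_linear_regression(X, y)               # learned weight vector (bias and slope)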


Logistic Regression

Logistic Regression is used when the output y is binary rather than a real number. The first part is the same as linear regression, while a second step applies a sigmoid function to clamp the output value between 0 and 1.

We use the exact same gradient descent approach to determine the weight vector W.

Neural Network

Inspired by how our brain works, a neural network organizes many logistic regression units into layers of perceptrons (each unit has both inputs and outputs in binary form).

Learning in a neural network is to discover all the hidden values of w. In general, we use the same technique as above to adjust the weights using gradient descent, layer by layer. We start from the output layer and move towards the input layer (this technique is called backpropagation). Except for the output layer, we don't exactly know the error at the hidden layers, so we need a way to estimate the error at the hidden layers.

But notice there is a symmetry between the weights and the inputs: we can use the same technique we use to adjust the weights to estimate the error of the hidden layers.



Support Vector Machine

Tuesday, May 5, 2009

Machine Learning: Nearest Neighbor

This is the simplest technique for instance based learning. Basically, find a previously seen data point that is "closest" to the query data point, and then use its output for prediction.

The concept of "close" is defined by a distance function: dist(A, B) gives a quantity which needs to observe the triangle inequality.
ie: dist(A, B) + dist(B, C) >= dist(A, C)

Defining the distance function can be domain specific. One popular generic distance function is to use the Euclidean distance.
dist(A, B) = square_root(sum_over_i(square(xai - xbi)))

In order to give each attribute the same degree of influence, you need to normalize their scales into the same range. On the other hand, you need to figure out a way to compute the difference between categorical values (ie: whether "red" is more similar to "blue" or "green"). A common approach is to see whether "red" and "blue" affect the output value in a similar way. If both colors have similar probability distributions across each output value, then we consider the two colors similar.

Therefore you need to transform the attributes xi.
  • Normalize their scale: transform xi = (xi - mean) / std-deviation
  • Quantify categorical data: If xi is categorical, then (xai - xbi) = sum_over_k(P(class[k] | xai) - P(class[k] | xbi))
Nearest neighbor is sensitive to outliers: if you have a few abnormal data points, query points around these outliers will be wrongly estimated. One solution is to use multiple nearest neighbors and combine their outputs in a certain way. This is known as KNN (k-nearest-neighbor). If the problem is classification, every neighbor casts a vote with a weight inversely proportional to its "distance" from the query point, and the majority wins. If the problem is regression, the weighted average of their outputs is used instead.
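
Here is a minimal weighted KNN classifier in Python that applies the normalization above together with a Euclidean distance; it assumes purely numeric attributes and is only a sketch of the idea.

import numpy as np
from collections import defaultdict

def knn_classify(train_X, train_y, query, k=3):
    # Normalize every attribute to the same scale: (xi - mean) / std-deviation.
    mean, std = train_X.mean(axis=0), train_X.std(axis=0)
    std = np.where(std == 0, 1.0, std)            # guard against constant attributes
    X = (train_X - mean) / std
    q = (query - mean) / std
    dist = np.sqrt(((X - q) ** 2).sum(axis=1))    # Euclidean distance to every stored point
    nearest = dist.argsort()[:k]                  # indices of the k closest points
    votes = defaultdict(float)
    for idx in nearest:
        votes[train_y[idx]] += 1.0 / (dist[idx] + 1e-9)   # weight inversely proportional to distance
    return max(votes, key=votes.get)              # weighted majority vote wins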

Execution Optimization

One problem of instance-based learning is that you need to store all previously seen data and also compute the distance of query point to each of them. Both time and space complexity to serve a single query is O(M * N) where M is the number of dimensions and N is the number of previous data points.

Instead of computing the distance between the query point and each of the existing data points, you can organize the existing points into a KD Tree based on the distance function. The KD Tree has the property that the max distance between two nodes is bounded by the level of their common parent.

Using the KD Tree, you navigate the tree starting at the root node. Basically, you calculate the dist(current_node, query_point)

and each of the child nodes of the current_node
dist(child_j, query_point)

Then find the minimum of them. If the minimum is one of the children, then you navigate down the tree by setting current_node to this child and repeat the process. You terminate if the current_node is the minimum, or when there are no more child nodes.

After terminating at a particular node, this node is pretty close to the query point. You need to explore the surrounding nodes (its siblings, siblings' children, parent's siblings) to locate the K nearest neighbors.

By using a KD Tree, the time complexity depends on the depth of the tree and hence of order O(M * log N)

Note that KD Tree is not effective when the data has high dimensions (> 6).

Another way is to throw away some of the previously seen data if they won't affect the prediction result (this is especially effective for classification problems: you can just keep the data at the boundary between two different output values and throw away the interior points of a cluster of data points that all have the same output value). However, if you are using KNN, then throwing away some points may change the result. So a general approach is to verify that the previously seen data is still correctly predicted after throwing out various combinations of points.

Recommendation Engine

A very popular application of KNN is the recommendation engine of many e-commerce web sites, using a technique called "Collaborative Filtering". E.g. an online user has purchased a book; the web site looks at other "similar" users to see what other books they have bought and recommends those to the current user.

First of all, how do we determine which attributes of the users should be captured? This is a domain-specific question because we want to identify those attributes that are most influential. Maybe we can use static information such as the user's age, gender, city ... etc. But here let's use something more direct ... the implicit transaction information (e.g. if the user has purchased a book online, we know that he likes that book) as well as explicit rating information (e.g. the user rates a book he bought previously so we know whether he/she likes the book or not).

Let's use a simple example to illustrate the idea. Here we have a number of users who rate a set of movies. The ratings are from [0 - 10] where 0 means hates it and 10 means extremely likes it.


The next important thing is to define the distance function. We don't want to use the ratings directly, for the following reasons.

Some nice users give an average rating of 7 while some tough users give an average rating of 5. Also, the range of ratings of some users is wide while that of other users is narrow. However, we don't want these factors to affect our calculation of user similarity. We consider two users to have the same taste as long as they rate the same movies above or below their own averages by the same percentage of their rating range. Two users have different tastes if they rate the movies in opposite directions.

Let's use rating_i_x to denote user_i's rating on movie_x.

We can use the correlation coefficient to capture this.

sum_of_product =
sum_over_x( (rating_i_x - avg(rating_i)) * (rating_j_x - avg(rating_j)) )

If this number is +ve, then user_i and user_j are moving in the same direction. If this number is -ve, then they are moving in opposite direction (negatively correlated). If this number is zero, then they are uncorrelated.

We also need to normalize by the range of each user's ratings, so we compute
root_of_product_square_sum =
square_root(sum_over_x((rating_i_x - avg(rating_i)) ** 2)) * square_root(sum_over_x((rating_j_x - avg(rating_j)) ** 2))

Define Pearson Coefficient = sum_of_product / root_of_product_square_sum

We use the Pearson Coefficient to quantify the "similarity" between two users.
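
A small Python sketch of this similarity, applied to two hypothetical users who rated the same four movies:

import numpy as np

def pearson_similarity(ratings_i, ratings_j):
    # Correlate the two users' deviations from their own average ratings.
    dev_i = ratings_i - ratings_i.mean()
    dev_j = ratings_j - ratings_j.mean()
    sum_of_product = (dev_i * dev_j).sum()
    norm = np.sqrt((dev_i ** 2).sum()) * np.sqrt((dev_j ** 2).sum())
    return sum_of_product / norm if norm else 0.0

user_a = np.array([8.0, 6.0, 9.0, 4.0])   # hypothetical ratings of the same 4 movies
user_b = np.array([7.0, 5.0, 8.0, 3.0])
similarity = pearson_similarity(user_a, user_b)   # close to +1: similar taste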

We may also use negatively correlated users to make recommendations. For example, if user_i and user_j are negatively correlated, then we can recommend the movies that user_j hates to user_i. However, this seems a bit risky so we are not doing it here.

Monday, May 4, 2009

Machine Learning: Probabilistic Model

The probabilistic model is a very popular approach to "model-based learning" based on Bayesian theory. Under this approach, all input attributes are binary (for now) and the output is categorical.

Here, we are given a set of data with structure [x1, x2 …, y] (in this case y is the output). The learning algorithm will learn (from the training set) how to predict the output y for data seen in the future.

We assume there exists a hidden probability distribution from the input attributes to the output. The goal is to learn this hidden distribution and apply it to the input attributes of a later encountered query point to pick the class that has the maximum probability.

Making Prediction

Let's say the possible values of output y are {class_1, class_2, class_3}. Given input [x1, x2, x3, x4], we need to compute the probability of each output class_j, and predict the one which has the highest value.
max {P(class_j | observed_attributes)}

According to Bayes theorem, this value is equal to …
max { P(observed_attributes | class_j) * P(class_j) / P(observed_attributes) }

The denominator is the same for all class_j, so we can ignore it; we just need to find
max { P(observed_attributes | class_j) * P(class_j) }

P(class_j) is easy to find, we just calculate
P(class_j) = (samples_of_class_j / total samples)

Now, let's look at the other term, P(observed_attributes | class_j). From Bayesian theory,
P(x1 ^ x2 ^ x3 ^ x4 | class_j) =
P(x1 | class_j) *
P(x2 | class_j ^ x1) *
P(x3 | class_j ^ x1 ^ x2) *
P(x4 | class_j ^ x1 ^ x2 ^ x3)

Learning the probability distribution

In order to provide all the above terms for the prediction, we need to build the probability distribution model by observing the training data set.

Notice that finding the last term is difficult. Assuming x1, x2 ... are binary, there can be 2 ** (m - 1) possible situations to watch. It is very likely that we haven't seen enough situations in the training data, in which case this term will be zero for all class_j. So a common solution is to start with count = 1 for all possible combinations and increase the count when we see an occurrence in the training data.


Bayesian Network

A Bayesian Network is based on the fact that we know certain attributes are clearly independent. By applying this domain knowledge, we draw a dependency graph between attributes. The probability of occurrence of a particular node only depends on the occurrence of its parent nodes and nothing else. To be more precise, nodeA and nodeB (which are not related by a parent-child relationship) don't need to be completely independent; they just need to be independent given their parents.

In other words, we don't mean P(A|B) = P(A),
we just need P(A | parentsOfA ^ B) = P(A | parentsOfA)
Therefore P(x4 | class_j ^ x1 ^ x2 ^ x3) = P(x4 | class_j ^ parentsOf_x4)

We only need to find 2 ** p occurrence combinations of the parent nodes, where p is the number of parent nodes.


Naive Bayes

Naive Bayes takes this a step further by assuming every attribute is independent of the others given the output class.


Note that x1, x2, x3, x4 can be categorical as well. For example, if x1 represents zip code, then P('95110' | class_j) is the same as P(x1 = '95110' | class_j).

What if x1 is a continuous variable? (say the height of a person). The challenge is that a continuous variable has infinitely many possible values, so the chance of seeing any particular one in the training data is almost zero.

One way to deal with a continuous variable is to discretize it. In other words, we can partition x1 into buckets, each with an associated range, and assign x1 to the corresponding bucket if it falls into that range.

Another way is, for each output class_j, to presume an arbitrary distribution function for x1 (let's say we pick the normal distribution function). Then the purpose of the training phase is to figure out the parameters of this distribution function (in this case, the mean and standard deviation).

In other words, we use the subset of training data whose output class is class_j. Within this subset, we compute the mean and standard deviation of x1 (height of the person). Later on when we want to compute P(x1 = 6.1 feet | class_j), we just apply the distribution function (plug-in the learned mean and standard deviation).

Note that the choice of the form of the distribution function is quite arbitrary here and may simply be wrong. So it is important to analyze how x1 affects the output class in order to pick the right distribution function.
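
As an illustration of the normal-distribution choice, here is a small Python sketch that learns the mean and standard deviation of a continuous attribute from the class_j subset and then evaluates P(x1 = 6.1 feet | class_j); the height values are made up.

import math

def gaussian_pdf(x, mean, std):
    # Normal density used in place of P(x1 = x | class_j).
    return math.exp(-((x - mean) ** 2) / (2 * std ** 2)) / (std * math.sqrt(2 * math.pi))

heights_in_class_j = [5.4, 5.9, 6.0, 6.3, 5.7]     # hypothetical training heights for class_j
mean = sum(heights_in_class_j) / len(heights_in_class_j)
variance = sum((h - mean) ** 2 for h in heights_in_class_j) / len(heights_in_class_j)
std = math.sqrt(variance)

p = gaussian_pdf(6.1, mean, std)    # density used as P(x1 = 6.1 feet | class_j)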


Spam Filter Application

Let's walk through an application of the Naive Bayes approach. Here we want to classify a particular email to determine whether it is spam or not.

The possible value of output y is {spam, nonspam}

Given an email: "Hi, how are you doing ?", we need to find ...
max of P(spam | mail) and P(nonspam | mail), which is same as ...
max of P(mail | spam) * P(spam) and P(mail | nonspam) * P(nonspam)

Let's focus on P(mail | spam) * P(spam)

We can view mail as an array of words [w1, w2, w3, w4, w5]
P(mail | spam) =
P(w1='hi' | spam) *
P(w2='how' | spam ^ w1='hi') *
P(w3='are' | spam ^ w1='hi' ^ w2='how') *
...
We make some naive assumptions here
  • Chance of occurrence is independent of preceding words. In other words, P(w2='how' | spam ^ w1='hi') is the same as P(w2='how' | spam)
  • Chance of occurrence is independent of word position. P(w2='how' | spam) is the same as (number of 'how' in spam mail) / (number of all words in spam mail)
With these assumptions ...
P(mail | spam) =
(hi_count_in_spam / total_words_in_spam) *
(how_count_in_spam / total_words_in_spam) *
(are_count_in_spam / total_words_in_spam) *
...
What if we haven't seen the word "how" in the training data? Then the probability becomes zero. Here we adjust the terms such that a reasonable probability is assigned to unseen words.
P(mail | spam) =
((hi_count_in_spam + 1) / (total_words_in_spam + total_vocabulary)) *
((how_count_in_spam + 1) / (total_words_in_spam + total_vocabulary)) *
((are_count_in_spam + 1) / (total_words_in_spam + total_vocabulary)) *
...

What we need is the word count per word/class combination as well as the word count per class. This can be done by feeding a large number of training sample mails labeled with "spam" or "nonspam" into a learning process.
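
Here is a minimal Python sketch of that counting and scoring, using log probabilities to avoid underflow and the +1 smoothing described above; the toy mails are made up.

from collections import Counter
import math

def train(mails):
    # mails: list of (list_of_words, label); count words per class and mails per class.
    word_count = {"spam": Counter(), "nonspam": Counter()}
    class_count = Counter()
    for words, label in mails:
        word_count[label].update(words)
        class_count[label] += 1
    return word_count, class_count

def log_score(words, label, word_count, class_count, vocab_size):
    # log( P(mail | class) * P(class) ) with +1 smoothing for unseen words.
    total_words = sum(word_count[label].values())
    score = math.log(class_count[label] / sum(class_count.values()))
    for w in words:
        score += math.log((word_count[label][w] + 1) / (total_words + vocab_size))
    return score

mails = [(["win", "money", "now"], "spam"), (["hi", "how", "are", "you"], "nonspam")]
wc, cc = train(mails)
vocab = len(set(w for ws, _ in mails for w in ws))
is_spam = log_score(["hi", "how"], "spam", wc, cc, vocab) > \
          log_score(["hi", "how"], "nonspam", wc, cc, vocab)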

The learning process can also be done in parallel using two rounds of Map/Reduce.


Alternatively, we can also update the counts incrementally as new mail arrives.

Saturday, May 2, 2009

Machine Learning Intuition

As more and more user data is gathered on different web sites (such as e-commerce and social networks), data mining / machine learning techniques become an increasingly important tool to analyze the data and extract useful information from it.

There is a wide variety of machine learning applications, such as …
  • Recommendation: After you buy a book at Amazon or rent a movie from Netflix, these sites recommend other items that you may be interested in
  • Fraud detection: To protect its buyers and sellers, an auction site like EBay detects abnormal patterns to identify fraudulent transactions
  • Market segmentation: Product companies divide their market into segments of similar potential customers and design a specific marketing campaign for each segment.
  • Social network analysis: By analyzing the users' social network profile data, a social networking site like Facebook can categorize its users and personalize their experience
  • Medical research: Analyzing DNA patterns, Cancer research, Diagnose problem from symptoms
However, machine learning theory involves a lot of math, which is non-trivial for people who don't have a rigorous math background. Therefore, I am trying to provide an intuitive perspective behind the math.

General Problem

Each piece of data can be represented as a vector [x1, x2, …] where xi are the attributes of the data.

Such attributes can be numeric or categorical (e.g. age is a numeric attribute and gender is a categorical attribute).

There are basically 3 branches of machine learning ...

Supervised learning
  • The main use of supervised learning is to predict an output based on a set of training data. A set of data with structure [x1, x2 …, y] is presented (in this case y is the output). The learning algorithm will learn (from the training set) how to predict the output y for data seen in the future.
  • When y is numeric, the prediction is called regression. When y is categorical, the prediction is called classification.

Unsupervised learning
  • The main use of unsupervised learning is to discover unknown patterns within data. (e.g. grouping similar data, or detecting outliers).
  • Identifying clusters is a classical scenario of unsupervised learning

Reinforcement learning
  • This is also known as “continuous learning” where the final output is not given. The agent chooses an action based on its current state and is then presented with a reward. The agent learns how to maximize its reward and comes up with a model called the “optimal policy”. A policy is a mapping from “state” to “action” (given I am at a particular state, what action should I take).

Data Warehouse

A data warehouse is not “machine learning”; it is basically a special way to store your data so that it can easily be grouped in many ways for manual analysis.

Typically, data is created from OLTP systems which run the company’s business operations. OLTP captures the “latest state” of the company. Data is periodically snapshotted to the data warehouse for OLAP; in other words, the data warehouse adds a time dimension to the data.

There is an ETL process that extracts data from various sources, cleanses the data, transforms it to the form needed by the data warehouse and then loads it into the data cube.

The data warehouse typically organizes the data as a multi-dimensional data cube based on a "Star schema" (1 Fact table + N Dimension tables). Each cell contains aggregate data along different (combinations of) dimensions.


OLAP processing involves the following operations
  • Rollup: Aggregate the data within a particular dimension (e.g. for the “time” dimension, you can “rollup” the aggregation from “month” into “quarter”)
  • Drilldown: Break down the data within a particular dimension (e.g. for the “time” dimension, you can “drilldown” from “months” into “days”)
  • Slice: Cut a layer out of a particular dimension (e.g. look at all data at “Feb”)
  • Dice: Select a sub data cube (e.g. look at all data at “Jan” and “Feb” as well as product “laptop” and “hard disk”)
The data warehouse can be further diced into specific “data marts” that focus on different areas for further drilldown analysis.

Some Philosophy

To determine the output from a set of input attributes, one way is to study the physics behind them and write a function that transforms the input attributes to the output. However, what if the relationship is unknown? Or the relationship hasn’t been formally specified?

Instead of being based on a sound theoretical model, machine learning tries to make predictions based on previously observed data. There are 2 broad types of learning strategies.

Instance-based learning
  • Also known as lazy learning, the learner remembers all the previously seen examples. When a new piece of input data arrives, it tries to find the best matching data it has previously seen and uses its output to predict the output of the new data. It has an underlying assumption that if two pieces of data are “similar” in their input attributes, their outputs are also similar.
  • Nearest neighbor is a classical approach for instance-based learning

Model-based learning
Model-based learning is eager learning that learns a generalized model upfront, whereas lazy learning learns from seen examples at the time of query. Instead of learning a generic model that fits all observed data, lazy learning can focus its analysis close to the query point. However, getting a comprehensible model from lazy learning is harder, and it also requires a large amount of memory to store all the seen data.

Saturday, July 5, 2008

Branch and Bound Algorithm

Branch and Bound is a tree pruning technique for solving optimization problems using search techniques.

An optimization problem tries to find a path which maximizes (or minimizes) the profit. The solution can be represented as a state-space tree, where each branch represents a possible next step. The goal is to find a path which returns the maximum profit.



Knapsack problem

We have n kinds of items, 1 through n. Each item j has a value pj and a weight wj. The maximum weight that we can carry in the bag is c. How should we decide which items to pick such that the sum of values is maximized (optimization) while the sum of weights is less than c (constraint)?


Minimax algorithm

2 players make moves in turn. It is now Player 1's turn; how should he choose his move in order to minimize his maximum loss when looking ahead N steps?



Brute force approach
One naive approach is to lay out all the possible combinations of steps and calculate the profit of each path. This has exponential complexity.

So a tree pruning mechanism is usually employed to reduce the exponential explosion. Branch and bound is such a mechanism. The basic idea is to have a cheap way to compute an upper bound and a lower bound on the profit of a subtree. If a subtree is known to have its upper bound lower than the lower bound of another subtree, then this subtree can be pruned because its best result is worse than the worst result of that other subtree.
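
To make the pruning idea concrete, here is a small branch and bound solver for the knapsack problem sketched in Python; it uses the standard fractional-knapsack upper bound, and the three items at the end are an arbitrary example.

def knapsack_branch_and_bound(values, weights, capacity):
    # Depth-first search over take/skip decisions, pruned by an optimistic upper bound.
    items = sorted(zip(values, weights), key=lambda vw: vw[0] / vw[1], reverse=True)
    best = 0

    def upper_bound(i, value, room):
        # Optimistic estimate: fill the remaining room with fractions of the densest items.
        for v, w in items[i:]:
            if w <= room:
                value, room = value + v, room - w
            else:
                return value + v * room / w
        return value

    def search(i, value, room):
        nonlocal best
        best = max(best, value)
        if i == len(items) or upper_bound(i, value, room) <= best:
            return                                   # prune: this subtree cannot beat the best so far
        v, w = items[i]
        if w <= room:
            search(i + 1, value + v, room - w)       # branch: take item i
        search(i + 1, value, room)                   # branch: skip item i

    search(0, 0, capacity)
    return best

best_value = knapsack_branch_and_bound([60, 100, 120], [10, 20, 30], 50)   # 220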



Knapsack solution




Minimax solution (Alpha-Beta search)




Thursday, January 31, 2008

Solving algorithmic problem

There are some general techniques that I've found quite useful to come up with a solution for an algorithmic problem.

Understand the problem
  1. Make sure the problem is well defined and understood. Try to rephrase the problem in different ways to double verify the understanding is correct.
  2. Understand the constraints and optimization goals of the expected solution. There will be many conflicting dimensions of a solution (e.g. performance, security, scalability ... etc), knowing what can be traded off is important.
Design the solution
  1. While you are thinking about the solution, try to make it visual. e.g. Sketch the solution on a whiteboard to help you think.
  2. See if you can solve the problem by using a simple-minded, brute force search. Try all the possible combinations of solution to determine which one works. But get a sense of the BigO complexity analysis to judge whether the brute-force approach is feasible.
  3. See if you can translate the problem into a search problem. Then you can utilize existing search algorithms (e.g. hill climbing, A*, BFS, DFS ... etc) to help.
  4. By exploiting the nature of the problem, see if you can define your solution recursively. Assuming that you already have a solution to the problem of a smaller size, can you use that to construct the current solution? e.g. Can you define F(n) based on F(n-1), F(n-2) ... etc.? Also try to see if you can translate the recursive solution into an iterative one.
  5. At this point, you may have found multiple solutions. Do a BigO analysis in both time and space and pick the one that aligns with your optimization goal.
Verify the solution
  1. Walk through some simple examples to test out your solution, and include some boundary cases in the examples to see how your solution handles those corner cases.
  2. Prototype your solution. Define some measurement and instrument your prototype. Run it and compare the result with what you expect.

Friday, January 25, 2008

Recursion vs Iteration

Recursion is a very commonly used technique to express mathematical relationships as well as implement computer algorithms. Defining a function recursively usually results in a very concise form and eases understanding. However, recursive functions are not performance friendly in terms of stack growth and processing time.

Here is a general form of a recursive function ...

def f(n)
  if (n <= j)
    return known_value[n]
  else
    return g(f(n-1), f(n-2), ... f(n-j))

When this function is invoked, the stack size will grow linearly O(n) and the processing time will grow exponentially O(j**n)

However, the above function can be rewritten in an iterative form. The idea is to reverse the order of execution, moving from the known points towards the unknown values.

def f(n)
  if (n <= j)
    return known_value[n]
  for i in 0..j
    cache[i] = known_value[i]
  for i in (j+1) .. n
    cache[i] = g(cache[i-1], cache[i-2], ... cache[i-j])
  return cache[n]

In this iterative form, the stack size will not grow and the processing time grows linearly O(n). Therefore, performance improves significantly when the recursive form is rewritten in the iterative form.
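
Fibonacci (the case j = 2) is a concrete instance of this pattern; here are both forms in Python for comparison.

def fib_recursive(n):
    # Direct recursive form: exponential time, O(n) stack depth.
    if n <= 1:
        return n
    return fib_recursive(n - 1) + fib_recursive(n - 2)

def fib_iterative(n):
    # Iterative form: build up from the known base cases in O(n) time with no stack growth.
    cache = [0, 1] + [0] * max(0, n - 1)
    for i in range(2, n + 1):
        cache[i] = cache[i - 1] + cache[i - 2]
    return cache[n]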

Thursday, January 24, 2008

A* Search

Given a start state S and a Goal state G, find an optimal path S -> A -> B -> ... -> G

We can assign a cost to each arc, so the optimal path is the one with the lowest sum of costs. This problem can be modeled as a weighted graph where nodes represent states and cost-carrying arcs represent the transitions from one state to another.

A* search is one of the most popular search algorithms and it has the following characteristics
  1. If there is a path existing between S to G, it can always find one
  2. It will always find the one that is optimal (least cost)
Here is the description of the algorithm
  • Define a function: f(X) = g(X) + h(X) where ...
  • g(X) = cost of the lowest-cost path found so far from S to X
  • h(X) = a heuristic estimate of the cost from X to G. It is important that the estimated cost h(X) does not exceed the actual cost
  1. The algorithm keeps track of a list of partially explored paths, each with its associated f(X), in a priority queue, as well as a list of explored nodes. Initially the priority queue contains just one path [S] and the explored node list is empty.
  2. Repeat the following ...
  3. Pick the lowest cost path (based on f(X)) from the priority queue to explore
  4. Set current_node to be the last node of the lowest cost path
  5. If current_node is explored already, skip this node and go back to step 3
  6. If current_node is the goal, then done. This is the optimal path
  7. Otherwise, find all the neighbours of the current_node. For each of them, insert the path that leads to it into the priority queue.



def A_Search(S, G)
  visited_nodes = []
  priorityQ.add([] << S)
  while priorityQ.non_empty
    lowest_cost_path = priorityQ.pop   # according to f(X)
    node_to_explore = lowest_cost_path.last_node
    continue if visited_nodes contains node_to_explore
    if node_to_explore == G
      return lowest_cost_path
    else
      add node_to_explore to visited_nodes
      for each v in node_to_explore.neighbors
        new_path = lowest_cost_path << v
        insert new_path to priorityQ
      end
    end
  end
  return nil # No solution
end



Proof: A* Search can always find a path if it exist

This should be obvious because the while loop will only terminate when the Goal is reached or all connected nodes have been explored.

Proof: A* Search will find the path with least cost

Let's say the algorithm finds a path S -> A -> K -> ... -> G which is not the optimal path
S -> A -> B -> ... -> G

That means, in the last extraction from the priorityQ, f(G) is smaller than f(B)

This is not possible because ...
f(G) == cost(S->A->K...G) > cost(S->A->B...G) >= g(B) + h(B) == f(B)

Sunday, December 9, 2007

Search Engine and Ranking

Crawling

Start from an HTML, save the file, crawl its links

def collect_doc(doc)
-- save doc
-- for each link in doc
---- collect_doc(link.href)

Build Word Index

Build a set of tuples

{
word1 => {doc1 => [pos1, pos2, pos3], doc2 => [pos4]}
word2 => {...}
}

def build_index(doc)
-- for each word in doc.extract_words
---- index[word][doc] << word_position

Search

Given a set of words, locate the relevant docs

A simple example ...

def search(query)
-- Find the most selective word within query
-- for each doc in index[most_selective_word].keys
---- if doc.has_all_words(query)
------ result << doc
-- return result


Ranking

There are many scoring functions, and the engine needs to be able to combine these scoring functions in a flexible way

def scoringFunction(query, doc)
-- do something from query and doc
-- return score

A scoring function can be based on word counts, page rank, or the location of words within the doc ... etc.
Scores need to be normalized, say within the same range.
Each weight is between 0 and 1 and the sum of all weights equals 1

def score(query, weighted_functions, docs)
-- scored_docs = {}
-- for each weighted_function in weighted_functions
---- for each doc in docs
------ scored_docs[doc] += (weighted_function[:func](query, doc) * weighted_function[:weight])
-- return scored_docs.sort(:rank)

Monday, November 26, 2007

Distributed UUID Generation

How do we generate a unique ID in a distributed environment, with scalability in mind?

Also, the following feature requirements are given ...
  • The ID must be guaranteed globally unique (this rules out probabilistic approaches)
  • The assigned ID will be used by the client for an unbounded time period (this means that once assigned, the ID is gone forever and cannot be reused for a subsequent assignment)
  • The length of the ID is small (say 64-bit) compared to a machine-unique identifier (this rules out the possibility of using a combination of MAC address + process id + thread id + timestamp to create a unique ID)
Architectural goals ...
  • The solution needs to be scalable with respect to growth of request rates
  • The solution needs to be resilient to component failure

General Solution

Using a general distributed computing scheme, there will be a pool of ID generators (called "workers") residing in many inter-connected, cheap, commodity machines.

Depending on the application, the client (which requests a unique id) may be collocated on the same machine as the worker, or the client may reside on a separate machine. In the latter case, a load balancer will sit between the clients and the workers to make sure the client workload is evenly distributed.

Central Bookkeeper

One straightforward (maybe naive) approach is to have the worker (ie: the ID generator) make a request to a centralized book-keeping server which maintains a counter. Obviously this central book-keeper can be a performance bottleneck as well as a single point of failure.

The performance bottleneck can be mitigated by having the worker request a "number range" from the centralized book-keeper rather than the "id" itself. In this case, id assignment is done locally by the worker within this number range; only when the whole range is exhausted will the book-keeper be contacted again.
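
Here is a sketch of the worker-side logic in Python; request_range() stands for a hypothetical RPC to the central book-keeper (which persists the allocation before replying), and the range size is an arbitrary choice.

import threading

RANGE_SIZE = 1000

class IdWorker:
    # Hands out ids locally; contacts the book-keeper only when the current range is exhausted.
    def __init__(self, bookkeeper):
        self.bookkeeper = bookkeeper     # hypothetical client of the central book-keeper
        self.next_id = 0
        self.range_end = 0               # exclusive upper bound of the current range
        self.lock = threading.Lock()

    def get_id(self):
        with self.lock:
            if self.next_id >= self.range_end:
                start = self.bookkeeper.request_range(RANGE_SIZE)   # hypothetical RPC
                self.next_id, self.range_end = start, start + RANGE_SIZE
            uid = self.next_id
            self.next_id += 1
            return uid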

When the book-keeper receives a request for a number range, it will persist the allocated number range to disk before returning to the client. So if the book-keeper crashes, it knows where to start allocating number ranges after reboot. To prevent the disk itself from being a SPOF, mirrored disks should be used.

The SPOF problem of the book-keeper can be addressed by having a pair of book-keepers (primary/standby). The primary book-keeper needs to synchronize its counter state to the standby via shared disks or counter change replication. The standby will continuously monitor the health of the primary book-keeper and will take over its role when it crashes.


Peer Multicast

Instead of using a central book-keeper to track number range assignments, we can replicate this information among the workers themselves. Under this scheme, every worker keeps track of its current allocated range as well as the highest allocated range. When a worker exhausts its current allocated range, it broadcasts a "range allocation request" to all other workers, waits for all of their acknowledgments, and then updates its current allocated range.

It is possible for multiple workers to make range requests at the same time. This kind of conflict can be resolved by distributed co-ordination algorithms (there are many well-known ones; one of them is the bully algorithm).

For performance reasons, the worker doesn't persist its most recently allocated id. If the worker crashes, it will request a brand-new range after bootup, even if the previous range was not fully utilized.


Distributed Hashtable

By using the "key-based-routing" model of DHT, each worker will create some "DHT Node" (with a randomly generated 64-bit node id) and join a "DHT ring". Under this scheme, the number range is allocated implicitly (between the node id and its immediate neighbor's node id).

Now, we can utilize a number of nice characteristics of the DHT model, such as being able to use a large number of users' machines as workers with O(logN) routing hops. Also, DHT nodes contain replicated data of their neighbors, so if one DHT node crashes, its neighbor will take over its number range immediately.

Now, what if a DHT node has exhausted its implicitly allocated number range? When this happens, the worker will start a new DHT node (which will join at some point in the ring and then have a newly assigned number range).