
Posts

Showing posts from 2017

Expectation maximization

Expectation maximization (EM) is an algorithm applied in many settings; for example, it can be used with Hidden Markov Models (HMMs) or Bayesian models. The algorithm has two basic steps: an Expectation step and a Maximization step. Its main advantage is that it can handle problems with incomplete data or latent variables. In simple terms, the E step makes an assumption, and the M step maximizes under that assumption to produce the parameters for the next E step. The algorithm finishes when the estimates converge. We will talk about the main idea of the algorithm and the math behind it. The most popular example of EM is flipping two coins, A and B. Assume we have two biased coins A and B. We run $m$ trials, flipping a coin $n$ times in each trial. The question is: what are the head probabilities of coin A and coin B, $\theta_{A}$ and $\theta_{B}$, in this experiment? If all the information were provided, namely which coin (A or B) was used in each trial, we could calculate the probabilities eas...
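The E/M loop for the two-coin setup can be sketched in a few lines. This is a minimal illustration, not the post's own code: the heads counts, the number of flips per trial, and the initial guesses for $\theta_{A}$ and $\theta_{B}$ are all made-up assumptions.

```python
import numpy as np

# Hypothetical data: heads observed in m = 5 trials of n = 10 flips each.
heads = np.array([5, 9, 8, 4, 7])
n = 10
theta_A, theta_B = 0.6, 0.5  # arbitrary starting guesses

for _ in range(50):
    # E-step: posterior probability that each trial used coin A,
    # from the binomial likelihoods under the current parameters.
    like_A = theta_A**heads * (1 - theta_A)**(n - heads)
    like_B = theta_B**heads * (1 - theta_B)**(n - heads)
    w_A = like_A / (like_A + like_B)
    w_B = 1 - w_A
    # M-step: re-estimate each coin's head probability from the
    # expected heads/flips attributed to it by the E-step weights.
    theta_A = (w_A @ heads) / (w_A.sum() * n)
    theta_B = (w_B @ heads) / (w_B.sum() * n)
```

Each iteration alternates the two steps until `theta_A` and `theta_B` stop changing, which is the convergence condition mentioned above.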

Introduction to recurrent neural networks

Recurrent neural networks (RNNs) are among the most popular neural networks. If you have heard of LSTM (Long Short-Term Memory), it is one type of RNN. Note that RECURSIVE neural networks are a generalization of recurrent neural networks; the difference between them lies in how weights are shared. In a recursive neural network, shared weights are applied at every node, whereas in a recurrent neural network, shared weights are applied across the sequence. Problem: can you guess which word completes the sentence "I like French ... "? If we represent the sentence numerically, using a dictionary of words, we are facing a sequence problem: predicting the next word given the previous words. RNNs do not only deal with sequence problems; they also build a neural network that can remember, which is exactly what the brain does regularly. Normally, a feedforward neural network only processes information through its layers and forgets information in t...
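The weight sharing across time steps can be illustrated with a toy vanilla-RNN forward pass. This is a sketch under assumptions: the vocabulary size, hidden dimension, random weights, and the word indices for "I like French" are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab, hidden = 5, 8
W_xh = rng.normal(scale=0.1, size=(hidden, vocab))   # input -> hidden
W_hh = rng.normal(scale=0.1, size=(hidden, hidden))  # hidden -> hidden ("memory")
W_hy = rng.normal(scale=0.1, size=(vocab, hidden))   # hidden -> output

def step(x_t, h_prev):
    """One time step: new hidden state and a distribution over the next word."""
    h = np.tanh(W_xh @ x_t + W_hh @ h_prev)
    y = W_hy @ h
    p = np.exp(y) / np.exp(y).sum()  # softmax over the vocabulary
    return h, p

# Hypothetical indices of "I like French" in a toy dictionary.
sentence = [0, 3, 1]
h = np.zeros(hidden)
for idx in sentence:
    x = np.eye(vocab)[idx]        # one-hot encoding of the current word
    h, p = step(x, h)             # the SAME weights are reused at every step

# p is now a distribution over the next word given the whole prefix;
# h is the hidden state that carried information across the sequence.
```

The key point is that `W_xh`, `W_hh`, and `W_hy` are the same objects at every time step; only the hidden state `h` changes, which is how the network "remembers" earlier words.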