[Paper Reading] A Fast Learning Algorithm for Deep Belief Nets


  • A Deep Belief Net (DBN) is an unsupervised probabilistic generative graphical model that learns P(X), whereas discriminative models such as LeNet and AlexNet focus on P(Y|X).
  • The top two layers of the DBN form an undirected bipartite graph called a Restricted Boltzmann Machine (RBM).
  • The lower layers form a directed sigmoid belief network.
  • A DBN can be formed by “stacking” RBMs; in later work, autoencoders were also used as the building block.
  • Greedy, layer-by-layer learning
  • Optionally fine-tuned with gradient descent and backpropagation.
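The greedy layer-by-layer procedure above can be sketched in numpy: train the first RBM on the data with one-step contrastive divergence (CD-1), then feed its hidden activations as "data" to the next RBM, and so on. Layer sizes, learning rate, and epoch count below are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_rbm(data, n_hidden, lr=0.1, epochs=20):
    """Train one binary RBM with CD-1; returns (weights, hidden biases)."""
    n_visible = data.shape[1]
    W = rng.normal(0, 0.01, (n_visible, n_hidden))
    b_v, b_h = np.zeros(n_visible), np.zeros(n_hidden)
    for _ in range(epochs):
        # Positive phase: hidden probabilities driven by the data.
        ph0 = sigmoid(data @ W + b_h)
        h0 = (rng.random(ph0.shape) < ph0).astype(float)
        # Negative phase: one Gibbs step (reconstruct v, then h again).
        pv1 = sigmoid(h0 @ W.T + b_v)
        ph1 = sigmoid(pv1 @ W + b_h)
        # CD-1 approximation to the log-likelihood gradient.
        n = data.shape[0]
        W += lr * (data.T @ ph0 - pv1.T @ ph1) / n
        b_v += lr * (data - pv1).mean(axis=0)
        b_h += lr * (ph0 - ph1).mean(axis=0)
    return W, b_h

def pretrain_dbn(data, layer_sizes):
    """Greedy layer-by-layer pretraining: each RBM is trained on the
    hidden activations produced by the RBM below it."""
    params, x = [], data
    for n_hidden in layer_sizes:
        W, b_h = train_rbm(x, n_hidden)
        params.append((W, b_h))
        x = sigmoid(x @ W + b_h)  # propagate up to feed the next RBM
    return params
```

The stacked weights can then serve as the initialization for fine-tuning with backpropagation.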


  • RBMs are a variant of Boltzmann machines, with the restriction that their neurons must form a bipartite graph.
  • Architecture: an RBM has an input layer (also referred to as the visible layer) and a single hidden layer; connections exist only between the two layers, never within a layer. The connectivity therefore looks like a single fully connected (MLP-style) layer between the visible and hidden units.

