
Unsupervised Deep Learning in Python

Posted By: AlenMiler

Unsupervised Deep Learning in Python: Master Data Science and Machine Learning with Modern Neural Networks written in Python and Theano (Machine Learning in Python) by LazyProgrammer
English | 30 Jun 2016 | ASIN: B01HUA6BOG | 62 Pages | AZW3/MOBI/EPUB/PDF (conv) | 1.26 MB

When we talk about modern deep learning, we are often not talking about vanilla neural networks, but about newer developments, like using autoencoders and Restricted Boltzmann Machines to do unsupervised pre-training.

Deep neural networks suffer from the vanishing gradient problem, and for many years researchers couldn’t get around it, until new unsupervised deep learning methods were invented.

That is what this book aims to teach you.
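
To see the problem concretely, here is a minimal sketch of my own (numpy only, not the book's Theano code): push a gradient back through a stack of sigmoid layers and watch its norm collapse, since every layer multiplies it by a local derivative that is at most 0.25.

```python
# A minimal numpy sketch (mine, not the book's code) of why gradients vanish:
# each sigmoid layer contributes a factor of sigmoid'(a) <= 0.25 to the
# backpropagated gradient, so the signal shrinks exponentially with depth.
import numpy as np

np.random.seed(0)

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

D, n_layers = 50, 20   # layer width and depth (arbitrary for the demo)

x = np.random.randn(D)
grad = np.ones(D)      # stand-in for the gradient arriving from above

for layer in range(1, n_layers + 1):
    W = np.random.randn(D, D) / np.sqrt(D)  # typical random initialization
    z = sigmoid(W.dot(x))
    # chain rule: pass the gradient through diag(z * (1 - z)) and W.T
    grad = W.T.dot(grad * z * (1.0 - z))
    x = z
    if layer % 5 == 0:
        print("after %2d layers: |grad| = %.3e" % (layer, np.linalg.norm(grad)))
```

Run it and the gradient norm drops by orders of magnitude every few layers, which is exactly why the lower layers of a deep network learn almost nothing under plain backpropagation.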

Aside from that, we are also going to look at Principal Components Analysis (PCA) and t-Distributed Stochastic Neighbor Embedding (t-SNE), which are not only related to deep learning mathematically, but often are part of a deep learning or machine learning pipeline.

Mostly I am just ultra frustrated with the way PCA is usually taught! So I’m using this platform to teach you Principal Components Analysis in a clear, logical, and intuitive way without you having to imagine rotating globes and spinning vectors and all that nonsense.
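
To give a taste of that logic, here is a bare-bones sketch (mine, not the book's code) of what PCA actually does: center the data, take the eigenvectors of its covariance matrix, and project onto the top few.

```python
# Bare-bones PCA in numpy (a sketch in the spirit of the book's approach,
# not its actual code): the principal components are just the eigenvectors
# of the data's covariance matrix, sorted by eigenvalue.
import numpy as np

def pca(X, k):
    # 1) center the data
    Xc = X - X.mean(axis=0)
    # 2) covariance matrix of the features
    C = np.cov(Xc, rowvar=False)
    # 3) eigendecomposition; eigh is the right call for symmetric matrices
    eigvals, eigvecs = np.linalg.eigh(C)
    # 4) sort by eigenvalue, largest first
    idx = np.argsort(eigvals)[::-1]
    # 5) project onto the top-k eigenvectors (the principal components)
    return Xc.dot(eigvecs[:, idx[:k]])

X = np.random.randn(500, 10)   # 500 samples, 10 features
Z = pca(X, 2)                  # reduced to 2 dimensions
print(Z.shape)                 # (500, 2)
```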

One major component of unsupervised learning is visualization. We are going to do a lot of that in this book. PCA and t-SNE both help you visualize data from high-dimensional spaces on a flat plane.
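
As a concrete example, here is one way to put a 64-dimensional dataset on a flat plane using scikit-learn's t-SNE (a stand-in toolkit here; the book builds these ideas up itself in Theano):

```python
# Projecting high-dimensional data onto a flat plane for visualization.
import matplotlib.pyplot as plt
from sklearn.datasets import load_digits
from sklearn.manifold import TSNE

digits = load_digits()    # 8x8 digit images, i.e. 64-dimensional points
Z = TSNE(n_components=2, random_state=0).fit_transform(digits.data)

# points of the same digit class should cluster together in the plane
plt.scatter(Z[:, 0], Z[:, 1], c=digits.target, s=5, cmap='tab10')
plt.colorbar(label='digit class')
plt.title('t-SNE projection of 64-dimensional digit images')
plt.show()
```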

Autoencoders and Restricted Boltzmann Machines help you visualize what each hidden node in a neural network has learned. One interesting thing researchers have discovered is that neural networks learn hierarchically. Take images of faces, for example. The first layer of a neural network will learn some basic strokes. The next layer will combine the strokes into combinations of strokes. The next layer might form the pieces of a face, like the eyes, nose, ears, and mouth. It truly is amazing!
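
The standard trick for this kind of visualization is simply to reshape each hidden unit's incoming weight vector back into an image. A sketch, assuming a hypothetical first-layer weight matrix W of shape (784, 100) from a network trained on flattened 28x28 images:

```python
# Visualizing what hidden units have learned: reshape each unit's incoming
# weight vector back into an image. W is a stand-in here; in practice you
# would use the trained first-layer weights of your own network.
import numpy as np
import matplotlib.pyplot as plt

W = np.random.randn(784, 100)  # hypothetical trained weights, 28*28 inputs

fig, axes = plt.subplots(10, 10, figsize=(8, 8))
for i, ax in enumerate(axes.flat):
    ax.imshow(W[:, i].reshape(28, 28), cmap='gray')
    ax.axis('off')
plt.suptitle('What each hidden unit responds to')
plt.show()
```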

Perhaps this might provide insight into how our own brains take simple electrical signals and combine them to perform complex reactions.

We will also see in this book how you can “trick” a neural network after training it! You may think it has learned to recognize all the images in your dataset, but add some intelligently designed noise, and the neural network will think it’s seeing something else, even when the picture looks exactly the same to you!
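
One well-known recipe for designing that noise is the fast gradient sign method: nudge the input in the direction that increases the model's loss. Whether that is the book's exact method I'll leave open; here is a toy version of the idea on a numpy logistic regression rather than a full neural network:

```python
# A toy adversarial attack (fast gradient sign method) on a numpy logistic
# regression. The same idea applies to neural networks: small, deliberate
# perturbations of the input can flip the model's prediction.
import numpy as np

np.random.seed(1)

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

# train a tiny logistic regression on two Gaussian blobs
X = np.vstack([np.random.randn(100, 2) + 2, np.random.randn(100, 2) - 2])
y = np.array([1] * 100 + [0] * 100)
w, b = np.zeros(2), 0.0
for _ in range(200):                       # plain gradient descent
    p = sigmoid(X.dot(w) + b)
    w -= 0.1 * X.T.dot(p - y) / len(y)
    b -= 0.1 * (p - y).mean()

x = X[0]                                   # a correctly classified point (y = 1)
print("prediction before:", sigmoid(x.dot(w) + b))   # close to 1

# FGSM: the gradient of the loss w.r.t. the *input* is (p - y) * w;
# step along its sign to push the model toward a mistake
grad_x = (sigmoid(x.dot(w) + b) - 1) * w
x_adv = x + 2.0 * np.sign(grad_x)
print("prediction after: ", sigmoid(x_adv.dot(w) + b))  # noticeably lower
```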

So if the machines ever end up taking over the world, you’ll at least have some tools to combat them.

Finally, in this book I will show you exactly how to train a deep neural network so that you avoid the vanishing gradient problem, using a method called “greedy layer-wise pretraining”.
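
In outline, the idea is: train the first hidden layer unsupervised on the raw input, freeze it, train the next layer on the first layer's outputs, and so on, before fine-tuning the whole stack with labels. Here is a hypothetical numpy sketch using tied-weight autoencoders as the layer-wise learners (the book also covers RBMs in this role, and implements everything in Theano):

```python
# Greedy layer-wise pretraining in outline (a hypothetical numpy sketch,
# not the book's code): each layer is trained as an autoencoder on the
# previous layer's output, then frozen.
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def train_autoencoder(X, n_hidden, lr=0.1, epochs=50):
    """Train a one-hidden-layer autoencoder with tied weights; return W, b."""
    n, d = X.shape
    W = np.random.randn(d, n_hidden) * 0.01
    b, c = np.zeros(n_hidden), np.zeros(d)
    for _ in range(epochs):
        Z = sigmoid(X.dot(W) + b)          # encode
        Xhat = sigmoid(Z.dot(W.T) + c)     # decode with tied weights
        # squared-error gradients, backpropagated through both halves
        dXhat = (Xhat - X) * Xhat * (1 - Xhat)
        dZ = dXhat.dot(W) * Z * (1 - Z)
        W -= lr * (X.T.dot(dZ) + dXhat.T.dot(Z)) / n
        b -= lr * dZ.mean(axis=0)
        c -= lr * dXhat.mean(axis=0)
    return W, b

# greedy stage: train each layer on the previous layer's output
X = np.random.rand(200, 64)    # stand-in data; use your real inputs here
layers, H = [], X
for n_hidden in [32, 16]:
    W, b = train_autoencoder(H, n_hidden)
    layers.append((W, b))
    H = sigmoid(H.dot(W) + b)  # freeze this layer and feed forward
# ...then add an output layer and fine-tune the whole stack with backprop
```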