Can you explain Auto Encoder for Unsupervised Learning Models
At the most basic level, an autoencoder is an Encoder-Decoder pair. You pass your unlabeled data through the Encoder and then the Decoder, and the training target is the input itself: you train the two networks end to end so the output reconstructs the input as closely as possible, typically by minimizing a reconstruction loss such as mean squared error. If the Encoder has a bottleneck, i.e. the encoded representation is much smaller than the input, then the Encoder has effectively been forced to extract the key features of the data. Those learned features can then be used in all sorts of ways: feed them into a classifier, or perturb them and pass the result through the Decoder to get some fun deformations of the input, and so on.
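To make the idea concrete, here is a minimal sketch in plain NumPy: a linear autoencoder with a 2-dimensional bottleneck, trained by gradient descent to reconstruct its own 8-dimensional input. All dimensions, the synthetic data, and the learning rate are illustrative choices, not anything prescribed above; a real autoencoder would normally use nonlinear layers and a framework like PyTorch.

```python
import numpy as np

rng = np.random.default_rng(0)

n, d_in, d_code = 256, 8, 2  # hypothetical sizes: 8-dim inputs, 2-dim code

# Synthetic unlabeled data that actually lies on a 2-D subspace,
# so a 2-dim bottleneck is enough to reconstruct it well.
latent = rng.normal(size=(n, d_code))
mixing = rng.normal(size=(d_code, d_in))
X = latent @ mixing

# Encoder and Decoder weights (no biases or nonlinearities, for brevity).
W_enc = rng.normal(scale=0.1, size=(d_in, d_code))
W_dec = rng.normal(scale=0.1, size=(d_code, d_in))

lr = 0.01
for step in range(2000):
    code = X @ W_enc       # Encoder: compress input into the bottleneck
    X_hat = code @ W_dec   # Decoder: reconstruct the input from the code
    err = X_hat - X        # reconstruction error -- the target is the input itself

    # Gradients of the mean squared error w.r.t. both weight matrices.
    grad_dec = code.T @ err / n
    grad_enc = X.T @ (err @ W_dec.T) / n
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

mse = float(np.mean((X - (X @ W_enc) @ W_dec) ** 2))
print(f"final reconstruction MSE: {mse:.4f}")

# The 2-dim codes X @ W_enc are the "extracted features": you could feed
# them to a classifier, or perturb them and decode to get deformed outputs.
```

After training, the reconstruction error should be far below the variance of the data, showing that the 2-dim code captures what matters about each 8-dim input.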