Sequence-to-Sequence Model with Attention Mechanism — Encoder-decoder (seq2seq) models are capable of solving many problems such as machine translation, image captioning, and more. So why do we need enhanced sequence-to-sequence models like the attention model? At least once, every one of you has probably experienced translated speech, whether in political meetings or in movies…
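Before looking at why attention helps, it may be useful to see what a plain encoder-decoder pass looks like. The sketch below is a minimal, hypothetical NumPy illustration (toy dimensions, a vanilla RNN cell standing in for the LSTM/GRU cells real translation systems use): the encoder folds the entire source sequence into one fixed-size context vector, and the decoder must generate every output token from that single vector — the bottleneck that attention was designed to relieve.

```python
import numpy as np

# Toy, hypothetical dimensions: input embedding, hidden state, output vocab.
rng = np.random.default_rng(0)
d_in, d_h, d_out = 4, 8, 5

W_xh = rng.normal(size=(d_h, d_in)) * 0.1   # encoder: input -> hidden
W_hh = rng.normal(size=(d_h, d_h)) * 0.1    # encoder: hidden -> hidden
U_hh = rng.normal(size=(d_h, d_h)) * 0.1    # decoder: hidden -> hidden
W_hy = rng.normal(size=(d_out, d_h)) * 0.1  # decoder: hidden -> vocab logits

def encode(source):
    """Fold the whole source sequence into ONE fixed-size context vector."""
    h = np.zeros(d_h)
    for x in source:
        h = np.tanh(W_xh @ x + W_hh @ h)
    return h  # the single "bottleneck" vector the decoder sees

def decode(context, steps):
    """Unroll the decoder from the context, emitting one token id per step."""
    h, out = context, []
    for _ in range(steps):
        h = np.tanh(U_hh @ h)
        logits = W_hy @ h
        out.append(int(np.argmax(logits)))  # greedy pick of the next token
    return out

source = [rng.normal(size=d_in) for _ in range(6)]  # a toy "source sentence"
tokens = decode(encode(source), steps=3)            # three target token ids
print(tokens)
```

Note that no matter how long the source sentence is, `decode` only ever sees the single vector returned by `encode` — with long sentences, early words get squeezed out of that vector, which is exactly the weakness the attention mechanism addresses.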