Using the two major application fields of natural language processing and speech signal processing as its backdrop, this book details the sequence models commonly used in deep learning. Its 12 chapters cover not only fundamentals such as word embeddings, recurrent neural networks, convolutional neural networks, and transformers but also advanced topics like attention mechanisms and sequence-to-sequence problems. It further includes cutting-edge content rarely covered in other books, such as pre-trained language models, generative adversarial networks, reinforcement learning, and flow-based models, to broaden readers’ horizons. The book suits algorithm engineers at internet companies and serves as a reference textbook for senior undergraduate or graduate-level courses in natural language processing and deep learning.