The past decades have seen a virtually unlimited supply of heterogeneous textual resources that can be scraped from the World Wide Web, alongside advancements in hardware that efficiently carries out parallel mathematical operations. In natural language processing (NLP) methodology, this has increasingly led to a paradigm shift away from linguistically motivated, staged pipelines with clearly separated tasks towards statistical, data-driven approaches that model problems end to end. This talk will describe state-of-the-art NLP on a conceptual level, highlighting the possibilities that deep learning enables. To this end, we will present how various application scenarios of interest, such as document summarisation or sentiment analysis, can be formulated as sequence processing tasks, which allows them to be modelled with neural networks. We will then discuss opportunities and challenges that accompany the deep learning surge, such as the uninterpretable black-box nature of neural networks, the lack of quality control and the dependence on large amounts of training data.
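To make the sequence-processing view concrete, here is a minimal, hypothetical sketch (not taken from the talk) of sentiment analysis cast as mapping a token sequence to class probabilities; the vocabulary, randomly initialised parameters, and mean-pooling encoder are illustrative assumptions, standing in for components that a real system would learn from data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy vocabulary and a tokenised input (indices into the vocabulary).
vocab = ["i", "loved", "hated", "this", "film"]
tokens = [0, 1, 3, 4]          # "i loved this film"

# Randomly initialised parameters; in practice these are learned.
embed_dim, num_classes = 8, 2  # classes: negative / positive
E = rng.normal(size=(len(vocab), embed_dim))   # embedding table
W = rng.normal(size=(embed_dim, num_classes))  # classifier weights
b = np.zeros(num_classes)

def classify(token_ids):
    """Map a variable-length token sequence to class probabilities."""
    h = E[token_ids].mean(axis=0)      # encode the sequence into one vector
    logits = h @ W + b                 # linear classification layer
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()             # softmax over the two classes

probs = classify(tokens)
```

The same pattern (embed tokens, encode the sequence, classify or decode) underlies more elaborate architectures; swapping the mean-pooling encoder for a recurrent or attention-based one, and the classifier head for a decoder, yields models for tasks such as summarisation.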