NLP News

NLP News - Resources for learning NLP, advances in automatic speech recognition, language modelling, and MT

Sebastian Ruder
Sep 4, 2017

After the summer lull, we're off to an explosive start again! This newsletter is packed full of awesome resources to kick-start your learning of NLP, whether you're heading back to your job or to university. Not only that, we have a plethora of interesting presentations, awesome blog posts, and companies that are rethinking the way we apply NLP. Common themes in this newsletter are advances in automatic speech recognition (ASR), language modelling, and machine translation.


(A)CL is booming! Submissions at ACL are on the rise year-over-year.

Slides & presentations

Challenges for ACL, Presidential Address ACL 2017 — www.slideshare.net

Joakim Nivre discusses the main challenges for ACL in the era of Deep Learning. These are a) equity and diversity; b) publishing and reviewing; and c) good science.

Architectures for Neural Machine Translation — drive.google.com

Nal Kalchbrenner gives a comprehensive overview of architectures of NMT systems and discusses the trade-offs of using recurrent, convolutional, and attentional encoders & decoders.

Extracting Social Meaning From Language, CVPR 2017 Keynote — www.youtube.com

Dan Jurafsky talks about studying interactions between police and community members in traffic stops, with regard to the role of race, and about modelling the language of scientific papers to better understand scientific innovation.

Blog posts

My Year at Brain — colinraffel.com

Colin Raffel, a Google Brain resident (class of 2016), looks back on his experience at Brain and talks about his research, in particular his work on monotonic alignment, which led to an ICML 2017 paper.

Four deep learning trends from ACL 2017 — www.abigailsee.com

Abigail See summarizes the key research trends at ACL 2017: 1. incorporating linguistic structure; 2. reconsidering word embeddings; 3. making models interpretable; and 4. using attention.

Notes on state-of-the-art techniques for language modeling

Some notes on state-of-the-art techniques for language modeling by Jeremy Howard. A related short blog post by Danijar Hafner is also worth a look.

How I replicated an $86 million project in 57 lines of code — medium.freecodecamp.org

This blog post is not about NLP, but it demonstrates succinctly how effective the combination of open-source software and ML can be. In 57 lines of code, the author creates a working license plate verification system, compared to an $86 million system tested by the local police department.

When (not) to use Deep Learning for NLP — deliprao.com

A short blog post by Delip Rao on when (not) to use Deep Learning for NLP: 1. When a simpler solution exists; 2. when costs matter; 3. when end-to-end is not worthwhile.

NLP Resources

Below you will find resources from CMU, Oxford, and the University of Maryland, all freely available, which can serve both as an introduction to NLP with neural networks and as a reference for more in-depth topics such as sequence-to-sequence models or imitation learning.

Neural Networks for NLP, CMU

Slides of Graham Neubig's Neural Networks for NLP course covering word, sentence, & document models, as well as sequence-to-sequence models, structured prediction, syntactic & semantic parsing, and much more. Bookmark the link and come back later, as the slides get updated.

Deep Natural Language Processing, Oxford — github.com

The Deep NLP course at Oxford organized jointly with DeepMind. Lecture materials are by Phil Blunsom, Chris Dyer, Edward Grefenstette, and others and focus on recent advances in analysing and generating speech and text using recurrent neural networks.

A Course in Machine Learning by Hal Daumé III

A set of introductory materials by Hal Daumé III that covers most major aspects of modern machine learning, as well as more advanced topics such as structured prediction and imitation learning. Daumé is the creator of SEARN, one of the best-known algorithms for solving structured prediction problems.

Machine Translation and Sequence to Sequence Models, CMU — www.phontron.com

Another excellent course by Graham Neubig, this time focusing on MT and sequence-to-sequence models.

21 draft chapters of Speech and Language Processing (3rd ed. draft) — web.stanford.edu

New draft chapters are available for the bible of CL, Jurafsky & Martin's Speech and Language Processing. New chapters include Question Answering as well as Dialog Systems and Chatbots.

RL Resources

Hacks for training RL systems from John Schulman's lecture at Deep RL Bootcamp, August 2017 — github.com

Tricks for training reinforcement learning models, written down by William Falcon while attending the Deep RL Bootcamp at UC Berkeley.

Videos of the Deep Learning and Reinforcement Learning Summer Schools, Montreal 2017 — videolectures.net

Videos of the talks at both summer schools by Deep Learning pioneers such as Hugo Larochelle, Yoshua Bengio, Ian Goodfellow, and many more.

Implementations

Word embeddings for 1017 languages — github.com

Word embeddings for 1017 languages, together with an implementation of Malaviya et al. (EMNLP 2017).

Regularizing and Optimizing LSTM Language Models implementation — github.com

A PyTorch implementation of the ASGD Weight-Dropped LSTM (AWD-LSTM), which regularizes the LSTM by applying DropConnect to its hidden-to-hidden weights and trains it with averaged stochastic gradient descent.
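
To make the weight-drop idea concrete, here is a minimal sketch of a single LSTM layer with DropConnect applied to the recurrent weights, resampled once per sequence. This is an illustrative toy, not the repository's code; the class and argument names are invented for the example.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class WeightDropLSTMCell(nn.Module):
    """Toy LSTM with DropConnect on the hidden-to-hidden weights
    (the 'weight drop' in AWD-LSTM); one mask per sequence."""
    def __init__(self, input_size, hidden_size, weight_drop=0.5):
        super().__init__()
        self.hidden_size = hidden_size
        self.weight_drop = weight_drop
        self.w_ih = nn.Parameter(torch.randn(4 * hidden_size, input_size) * 0.1)
        self.w_hh = nn.Parameter(torch.randn(4 * hidden_size, hidden_size) * 0.1)
        self.bias = nn.Parameter(torch.zeros(4 * hidden_size))

    def forward(self, x_seq, state):
        # x_seq: (seq_len, batch, input_size); state: (h, c), each (batch, hidden)
        h, c = state
        # Sample ONE DropConnect mask for the whole sequence, not per timestep.
        w_hh = F.dropout(self.w_hh, p=self.weight_drop, training=self.training)
        outputs = []
        for x in x_seq.unbind(0):
            gates = x @ self.w_ih.t() + h @ w_hh.t() + self.bias
            i, f, g, o = gates.chunk(4, dim=1)
            c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
            h = torch.sigmoid(o) * torch.tanh(c)
            outputs.append(h)
        return torch.stack(outputs), (h, c)
```

The paper additionally switches from plain SGD to averaged SGD (e.g. torch.optim.ASGD) once validation performance stops improving.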

Conference countdown

WMT Proceedings

If you want to catch up on all the cutting-edge techniques for Machine Translation, the Proceedings of the Second Conference on Machine Translation (WMT17) are now available online. 

NIPS 2017 registration is now open — nips.cc

If the NIPS 2016 attendee numbers are anything to go by, NIPS 2017 will sell out fast. If you intend to attend the conference in Long Beach, register soon.

Industry insights

Website personalization startup LiftIgniter raises $6.4M — techcrunch.com

LiftIgniter aims to enable website personalization not only for the likes of Amazon and Facebook, but for every website using ML & NLP.

DeepL Translator — www.deepl.com

Cologne-based NLP startup DeepL (formerly Linguee) surprised everyone when it announced that its translation system is better than those of Google, Facebook, and Microsoft. We're now waiting for more (technical) details. In the meantime, try out their demo for yourself.

SkipFlag uses your conversations to build a knowledge base — skipflag.com

SkipFlag extracts information from your conversations on Slack and other platforms and uses it to automatically build an enterprise knowledge base.

Doc.ai launches blockchain-based conversational AI platform for health consumers — www.zdnet.com

What is the other hot trend these days besides Deep Learning? Blockchain. Doc.ai uses blockchain and NLP to connect patients with a mobile robo-doctor to discuss their health.

How Grammarly Quietly Grew Its Way to 6.9 Million Daily Users in 9 Years

Grammarly has been around for a while. This article reviews its journey, from building a profitable product sold to universities to the freemium Chrome extension now used by 6.9 million daily active users.

Kaldi now offers TensorFlow integration

Good news for everyone working with automatic speech recognition (ASR)! Kaldi, one of the most popular open-source speech recognition toolkits, can now be integrated with TensorFlow.

Apple Machine Learning Journal Vol. 1: Siri speech synthesis, inverse text normalization, and ASR

Vol. 1 of the Apple Machine Learning Journal is out with articles on how to use Deep Learning to improve Siri's voice, on framing inverse text normalization as a labeling problem, and on improving neural network acoustic models.

Paper picks

A Brief Survey of Deep Reinforcement Learning

Reinforcement Learning for NLP is getting more popular. This survey introduces reinforcement learning, discusses the two main streams of value-based and policy-based methods, and covers central algorithms such as the deep Q-network, trust region policy optimisation, and asynchronous advantage actor-critic.
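
As a taste of the value-based stream, here is a minimal sketch of the temporal-difference target at the heart of the deep Q-network; the function name and batch layout are invented for the illustration.

```python
import torch

def dqn_td_target(reward, next_q, done, gamma=0.99):
    """Compute the DQN target r + gamma * max_a' Q_target(s', a')
    for a batch of transitions; terminal states get no bootstrap term."""
    # next_q: (batch, n_actions) Q-values from the frozen target network
    # reward, done: (batch,) tensors, with done in {0.0, 1.0}
    max_next_q = next_q.max(dim=1).values
    return reward + gamma * (1.0 - done) * max_next_q
```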

Semi-supervised sequence tagging with bidirectional language models (ACL 2017)

Transferring knowledge and applying it to new domains with limited amounts of data is an active research area within NLP. Peters et al. show that besides embeddings pre-trained on a large unlabelled corpus using methods like word2vec, we can additionally use the embeddings obtained from pre-training a language model on a large corpus. Interestingly, the two kinds of embeddings contain complementary information, even though word2vec approximates a language model.
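
The core recipe is simply to concatenate the two kinds of embeddings before the tagger's encoder. Below is a minimal sketch of that idea; all class and argument names are invented, and the frozen bidirectional LM is assumed to be run separately to produce its hidden states.

```python
import torch
import torch.nn as nn

class TagLMSketch(nn.Module):
    """Sequence tagger over pretrained word embeddings (e.g. word2vec)
    concatenated with contextual states from a pretrained, frozen biLM."""
    def __init__(self, word_emb: nn.Embedding, lm_dim: int, n_tags: int,
                 hidden: int = 256):
        super().__init__()
        self.word_emb = word_emb  # pretrained, shape (vocab, d_word)
        d_word = word_emb.embedding_dim
        self.encoder = nn.LSTM(d_word + lm_dim, hidden,
                               bidirectional=True, batch_first=True)
        self.proj = nn.Linear(2 * hidden, n_tags)

    def forward(self, token_ids, lm_states):
        # lm_states: (batch, seq, lm_dim) hidden states from the frozen biLM;
        # detach() keeps the LM frozen during tagger training.
        x = torch.cat([self.word_emb(token_ids), lm_states.detach()], dim=-1)
        h, _ = self.encoder(x)
        return self.proj(h)  # per-token tag logits
```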

Learning to Skim Text (ACL 2017)

As our models become more accurate, runtime and efficiency gain in importance. For reading comprehension, it is still difficult to have an RNN read a book or very long document and answer questions about it. Yu et al. use RL to train a model to learn how far to jump after reading a few words of the input text. By doing this, the model is up to 6x faster at the same accuracy as an LSTM on four different tasks.
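
The mechanics are easy to sketch: read a small window of tokens with an RNN, then sample a discrete jump size from a softmax over the hidden state. The toy below only shows the forward pass; the REINFORCE training step and the paper's exact stopping conventions (a sampled jump of 0 ends reading early) are glossed over, and every name is invented for the illustration.

```python
import torch
import torch.nn as nn

class SkimReader(nn.Module):
    """Read `read_len` tokens, then jump ahead by a sampled number of
    tokens; repeat until the end of the sequence."""
    def __init__(self, emb: nn.Embedding, hidden=128, read_len=3, max_jump=5):
        super().__init__()
        self.emb, self.read_len = emb, read_len
        self.cell = nn.LSTMCell(emb.embedding_dim, hidden)
        self.jump = nn.Linear(hidden, max_jump + 1)  # softmax over jump sizes

    def forward(self, token_ids):
        # token_ids: (seq_len,) — a single, un-batched example for clarity
        h = torch.zeros(1, self.cell.hidden_size)
        c = torch.zeros(1, self.cell.hidden_size)
        i, n = 0, token_ids.size(0)
        while i < n:
            # Read a small window of tokens.
            for t in range(i, min(i + self.read_len, n)):
                h, c = self.cell(self.emb(token_ids[t]).unsqueeze(0), (h, c))
            # Sample how many tokens to skip before reading again.
            probs = torch.softmax(self.jump(h), dim=-1)
            skip = torch.multinomial(probs, 1).item()
            i = min(i + self.read_len, n) + skip
        return h  # final state feeds the downstream classifier
```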

Dataset spotlight

New Speech Commands Dataset — research.googleblog.com

Google Research releases the Speech Commands Dataset for learning speech primitives. The dataset contains 65k one-second utterances of 30 short words such as "yes", "no", "left", "right", "on", and "off" by thousands of different people. While it won't support complex speech recognition applications, it allows anyone to train a simple ASR model in a matter of hours.
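
A keyword-spotting model for this dataset can indeed be very small. Here is a hedged sketch of a tiny convolutional classifier over log-mel spectrograms of the one-second clips; it assumes the features are computed separately (e.g. with librosa), and all names are invented for the example.

```python
import torch
import torch.nn as nn

class KeywordSpotter(nn.Module):
    """Tiny conv net over log-mel spectrograms of one-second clips;
    30 output classes, one per command word in the dataset."""
    def __init__(self, n_classes=30):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),  # pool to one vector per clip
            nn.Linear(32, n_classes),
        )

    def forward(self, log_mel):
        # log_mel: (batch, 1, n_mels, frames), e.g. 40 mel bins x ~100 frames
        return self.net(log_mel)
```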
