Lectures on Language Technology and Machine Learning

Uppsala, June 8, 2018

The computational linguistics group at Uppsala University is delighted to invite you to an afternoon of public lectures on language technology and machine learning by leading experts in the field. The lectures will take place in Room 7-0043, English Park Campus, Uppsala University, on the 8th of June according to the schedule below. Attendance is free for anyone interested.

13.15-14.00 Yue Zhang
Singapore University of Technology and Design
Graph Models for Natural Language Processing
Deep neural network models have achieved strong results in natural language processing (NLP). Recently, structured neural networks have been used to encode syntactic and semantic role structures, adding to the representational power of linear structures for encoding sentences. On the other hand, recurrent structures have been less explored for encoding graph structures, which arise in semantic tasks such as Abstract Meaning Representation (AMR). We investigate graph-structured recurrent networks for sentence encoding and semantic embedding. In particular, a special state transition mechanism is designed to handle cyclic structures in graphs. Results show that our network architecture is useful for both sentence embedding and AMR structure encoding, giving highly competitive accuracies.
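The idea of a recurrent state transition over a graph, including a cyclic one, can be illustrated with a minimal message-passing sketch. This is a hypothetical toy formulation for intuition only, not the speaker's architecture: all node states are updated synchronously each step, so the recurrence stays well-defined even when edges form cycles.

```python
# Toy sketch of a graph-state recurrent update (hypothetical, for illustration).
import numpy as np

def graph_recurrent_encode(adj, features, steps=3, seed=0):
    """Iteratively update node states from neighbor messages.

    Because all node states are updated synchronously per step, the
    recurrence is well-defined even when the graph (e.g. an AMR graph)
    contains cycles.
    """
    rng = np.random.default_rng(seed)
    n, d = features.shape
    W_self = rng.normal(scale=0.1, size=(d, d))  # transform of a node's own state
    W_msg = rng.normal(scale=0.1, size=(d, d))   # transform of aggregated messages
    h = features.copy()
    for _ in range(steps):
        msgs = adj @ h                            # sum incoming neighbor states
        h = np.tanh(h @ W_self + msgs @ W_msg)    # synchronous state transition
    return h.mean(axis=0)                         # pooled graph embedding

# Tiny cyclic graph: 0 -> 1 -> 2 -> 0
adj = np.array([[0, 1, 0], [0, 0, 1], [1, 0, 0]], dtype=float)
feats = np.eye(3)
emb = graph_recurrent_encode(adj, feats)          # one vector for the whole graph
```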
14.00-14.45 Iryna Gurevych
Technische Universität Darmstadt
Cross-Topic Argument Mining
Argument mining is a core technology for automating argument search in large document collections. Despite its usefulness for this task, most current approaches to argument mining are designed for use only with specific text types and fall short when applied to heterogeneous texts. In this talk, we propose a new sentential annotation scheme that is reliably applicable by crowd workers to arbitrary Web texts. We source annotations for over 25,000 instances covering eight controversial topics. We show that integrating topic information into bidirectional long short-term memory networks outperforms vanilla BiLSTMs by more than 3 percentage points in F1 in two- and three-label cross-topic settings. We also show that these results can be further improved by leveraging additional data for topic relevance using multi-task learning.
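One simple way to "integrate topic information" into a BiLSTM, as the abstract describes, is to condition the input: concatenate a topic embedding to every token embedding before the recurrent layer. The sketch below is an assumption about one plausible mechanism, not the authors' exact model; the function and dimension names are invented for illustration.

```python
# Hypothetical sketch: topic-conditioned input to a (Bi)LSTM.
import numpy as np

def add_topic(token_embs, topic_emb):
    """Concatenate a topic embedding to every token embedding.

    The resulting sequence can be fed to a BiLSTM so that every time step
    sees both the token and the topic the sentence is judged against.
    """
    repeated = np.tile(topic_emb, (token_embs.shape[0], 1))  # topic at each step
    return np.concatenate([token_embs, repeated], axis=1)

tokens = np.random.default_rng(0).normal(size=(5, 8))  # 5 tokens, dim 8
topic = np.ones(4)                                     # topic vector, dim 4
x = add_topic(tokens, topic)                           # shape (5, 12)
```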
14.45-15.15 Break
15.15-16.00 Barbara Plank
IT University of Copenhagen
Multi-task Learning in Natural Language Processing
Multi-task learning (MTL) is quickly gaining popularity in natural language processing (NLP). It has been applied to a range of tasks, from syntactic and semantic sequence prediction and discourse parsing to mental-health prediction and machine translation. In this talk I will introduce MTL and show examples of successful use. While the prospects of MTL are appealing, little is known about when MTL works. I will report on recent work towards understanding when MTL might be effective.
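A common MTL setup in NLP is hard parameter sharing: one encoder shared across tasks, with a small task-specific output layer per task. A minimal forward-pass sketch, with invented task names and dimensions purely for illustration:

```python
# Hard parameter sharing: one shared encoder, one output head per task.
import numpy as np

rng = np.random.default_rng(0)
d_in, d_hid = 6, 4
W_shared = rng.normal(size=(d_in, d_hid))       # encoder shared by all tasks
heads = {
    "pos": rng.normal(size=(d_hid, 12)),        # e.g. 12 POS tags (hypothetical)
    "chunk": rng.normal(size=(d_hid, 5)),       # e.g. 5 chunk labels (hypothetical)
}

def forward(x, task):
    h = np.tanh(x @ W_shared)   # shared representation, trained on all tasks
    return h @ heads[task]      # task-specific scores

x = rng.normal(size=(1, d_in))
pos_scores = forward(x, "pos")      # shape (1, 12)
chunk_scores = forward(x, "chunk")  # shape (1, 5)
```

During training, gradients from every task update `W_shared`, which is where the hoped-for transfer between tasks comes from.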
16.00-16.45 Michael Ashcroft
Uppsala University
Reinforcement Learning: What's behind the fuss?
At least since DeepMind sold out to Google for $500 million and demonstrated that an AI system could learn to outperform humans in computer games and beat champions at Go, there has been an abiding public and academic interest in deep reinforcement learning (DRL), as well as a lot of hype and confusion. In this talk, I explain DRL: its foundations, limitations, and potential. DRL is extremely powerful, potentially very general, and certainly very interesting. It is also hard, often unstable, requires substantial setup and preconditions for real-world applications, and is unsuitable for many practical problems. We will look at the mathematics involved and the various alternatives known to be useful in certain situations or for particular problems, discuss what is required to apply these methods in practice, and touch upon the idea that this is a potential advance in general AI. I will discuss these matters with reference to three recent or ongoing projects that made use of DRL techniques (involved organizations in parentheses): the control of autonomous agents in defence training simulations (FOI), of roads in environmentally threatened transport systems (UN), and of logistics in hospitals (B&M). I am interested in discussions of whether and how these techniques might be applied in natural language processing. I note that I have no experience of doing so, though I have worked with various deep neural network applications in some general NLP projects.
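The mathematical core behind much of DRL is the temporal-difference update of a value estimate. A tabular Q-learning sketch shows the idea on a trivial two-state problem; deep RL replaces the table with a neural network, but the bootstrapped update is the same in spirit. The environment here is invented for illustration.

```python
# Tabular Q-learning on a trivial 2-state, 2-action toy problem.
import numpy as np

n_states, n_actions = 2, 2
Q = np.zeros((n_states, n_actions))  # value estimates, initially zero
alpha, gamma = 0.5, 0.9              # learning rate, discount factor

def q_update(s, a, r, s_next):
    """One Q-learning step: move Q(s, a) toward a bootstrapped target."""
    target = r + gamma * Q[s_next].max()   # reward plus discounted best future value
    Q[s, a] += alpha * (target - Q[s, a])  # temporal-difference update

# One observed transition: action 1 in state 0 yields reward 1.0, leads to state 1.
q_update(0, 1, 1.0, 1)
```

The instability the abstract mentions largely enters when `Q` becomes a function approximator: the target then depends on the very parameters being updated.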