Workshop Description

Multilingual representation learning methods have recently proven highly effective at learning features that transfer across languages, showing strong potential for adapting natural language processing (NLP) models to languages and tasks with little to no training resources. At the same time, many aspects of these models call for further development and analysis to establish their applicability in a wider range of contexts. These contexts include different NLP tasks as well as understudied language families, where significant obstacles still stand in the way of practical advances that could improve the state of the art in NLP for low-resource and underrepresented languages.

This workshop aims to bring together researchers studying different aspects of multilingual representation learning, currently the most promising approach to improving NLP in low-resource and underrepresented languages, and to give the rapidly growing community working on the topic a means of communication and an opportunity to present their work and exchange ideas. The main objectives of the workshop are:

   • To construct and present a wide array of multilingual representation learning methods, covering both their theoretical formulation and analysis and practical aspects, such as applying current state-of-the-art transfer learning approaches to different tasks or adapting them to previously under-studied contexts;
   • To provide a better understanding of how language typology may affect the applicability of these methods, and to motivate the development of novel methods that are more generic or competitive across different languages;
   • To promote collaborations on novel software libraries and benchmarks for implementing and evaluating multilingual models, in order to accelerate progress in the field.

By providing a means of communication for research groups working on machine learning, linguistic typology, and real-world NLP applications in various languages to share and discuss their recent findings, our ultimate goal is to support the rapid development of NLP methods and tools applicable to a wider range of languages.


Schedule

December 8, 2022 (GMT+4), Abu Dhabi National Exhibition Centre

All talks take place in Room 9.

The daily schedule is as follows.

09:00 - 09:15 Opening remarks

09:15 - 10:00 Oral Session 1

   • Few-Shot Cross-Lingual Learning for Event Detection
Luis Guzman Nateras, Viet Lai, Franck Dernoncourt and Thien Nguyen

   • Zero-shot Cross-Language Transfer of Monolingual Entity Linking Models
Elliot Schumacher, James Mayfield and Mark Dredze

   • Zero-shot Cross-Lingual Counterfactual Detection via Automatic Extraction and Prediction of Clue Phrases
Asahi Ushio and Danushka Bollegala

10:00 - 10:30 Shared Task Session

   • The MRL 2022 Shared Task on Multilingual Clause-level Morphology
Omer Goldman, Francesco Tinner, Hila Gonen, Benjamin Muller, Victoria Basmov, Shadrack Kirimi, Lydia Nishimwe, Benoît Sagot, Djamé Seddah, Reut Tsarfaty, Duygu Ataman

   • Transformers on Multilingual Clause-level Morphology
Emre Can Açıkgöz, Tilek Chubakov, Müge Kural, Gözde Şahin, Deniz Yüret

10:30 - 11:00 Coffee Break

11:00 - 12:30 Poster Session

   • Entity Retrieval from Multilingual Knowledge Graphs
Saher Esmeir, Arthur Câmara and Edgar Meij

   • Rule-Based Clause-Level Morphology for Multiple Languages
Tillmann Dönicke

   • Comparative Analysis of Cross-lingual Contextualized Word Embeddings
Hossain Shaikh Saadi, Viktor Hangya, Tobias Eder and Alexander Fraser

   • How Language-Dependent is Emotion Detection? Evidence from Multilingual BERT
Luna De Bruyne, Pranaydeep Singh, Orphee De Clercq, Els Lefever and Veronique Hoste

   • Structural Transfer Learning in NL-to-Bash Semantic Parsers
Kyle Duffy, Satwik Bhattamishra and Phil Blunsom

   • MicroBERT: Effective Training of Low-resource Monolingual BERTs through Parameter Reduction and Multitask Learning
Luke Gessler and Amir Zeldes

   • Impact of Sequence Length and Copying on Clause-Level Inflection
Badr Jaidi, Utkarsh Saboo, Xihan Wu, Garrett Nicolai and Miikka Silfverberg

   • Towards Improved Distantly Supervised Multilingual Named-Entity Recognition for Tweets
Ramy Eskander, Shubhanshu Mishra, Sneha Mehta, Sofia Samaniego and Aria Haghighi

   • Average Is Not Enough: Caveats of Multilingual Evaluation
Matúš Pikuliak and Marian Simko

   • Early Guessing for Dialect Identification
Vani Kanjirangat, Tanja Samardzic, Fabio Rinaldi and Ljiljana Dolamic

   • Improving Zero-Shot Cross-lingual Transfer with Entity-Centric Code Switching
Chenxi Whitehouse, Fenia Christopoulou and Ignacio Iacobacci

   • Politeness Evaluation in Nine Typologically Diverse Languages: Zero-Shot Transfer from English Data
Anirudh Srinivasan and Eunsol Choi

   • Improving Bilingual Lexicon Induction with Cross-Encoder Reranking
Yaoyiran Li, Fangyu Liu, Ivan Vulić and Anna Korhonen

   • Human-in-the-Loop Hate Speech Classification in a Multilingual Context
Ana Kotarcic, Dominik Hangartner, Fabrizio Gilardi, Selina Kurer and Karsten Donnay

   • HumSet: Dataset of Multilingual Information Extraction and Classification for Humanitarian Crises Response
Selim Fekih, Nicolò Tamagnone, Benjamin Minixhofer, Ranjan Shrestha, Ximena Contla, Ewan Oglethorpe and Navid Rekabsaz

   • Multilingual Multimodal Learning with Machine Translated Text
Chen Qiu, Dan Oneață, Emanuele Bugliarello, Stella Frank and Desmond Elliott

   • The Effects of Corpus Choice and Morphosyntax on Multilingual Space Induction
Vinit Ravishankar and Joakim Nivre

12:30 - 14:00 Lunch Break

14:00 - 14:45 Invited Talk by Razvan Pascanu, DeepMind

14:45 - 15:30 Oral Session 2

   • Adapters for Enhanced Modeling of Multilingual Knowledge and Text
Yifan Hou, Wenxiang Jiao, Meizhen Liu, Carl Allen, Zhaopeng Tu and Mrinmaya Sachan

   • Model Transfer or Data Transfer? Cross-Lingual Sequence Labeling in Zero-Resource Settings
Iker García-Ferrero, Rodrigo Agerri and German Rigau

   • JamPatoisNLI: A Jamaican Patois Natural Language Inference Dataset
Ruth-Ann Hazel Armstrong, John Hewitt and Christopher D. Manning

15:30 - 16:00 Coffee Break

16:00 - 16:45 Invited Talk by Kyunghyun Cho, NYU

16:45 - 17:00 Mini Break

17:00 - 17:45 Invited Talk by Ev Fedorenko, MIT

17:45 - 18:00 Closing Remarks

Shared Task

MRL 2022 features a new shared task on Multilingual Clause-level Morphology, which aims to provide a new evaluation benchmark for assessing the linguistic and cross-lingual generalization capabilities of multilingual representation learning models. All participants should submit their system descriptions using the common submission template, link, and deadline.

Best Paper Award

This year’s edition received a competitive number of research submissions through our internal evaluation system, as well as through ARR and the Findings of EMNLP. Our program committee devoted great effort to evaluating all papers and selected the following as the best paper and the best paper runners-up.

Best-paper award:

   • Adapters for Enhanced Modeling of Multilingual Knowledge and Text
Yifan Hou, Wenxiang Jiao, Meizhen Liu, Carl Allen, Zhaopeng Tu and Mrinmaya Sachan

Honorable mentions:

   • Few-Shot Cross-Lingual Learning for Event Detection
Luis Guzman Nateras, Viet Lai, Franck Dernoncourt and Thien Nguyen

   • Zero-shot Cross-Language Transfer of Monolingual Entity Linking Models
Elliot Schumacher, James Mayfield and Mark Dredze


Organizers

Duygu Ataman, NYU
Orhan Firat, Google
Hila Gonen, UW and Meta AI
Jamshidbek Mirzakhalov, Salesforce
Kelechi Ogueji, University of Waterloo
Sebastian Ruder, Google
Gözde Gül Şahin, Koç University


