AI Research Trends

8 key trends from AI’s global flagship R&D conference NeurIPS 2021

The most renowned global AI R&D conference, Neural Information Processing Systems (NeurIPS) 2021, is being held this week, bringing a grand finale to an action-packed year in AI. The program spans Machine Learning theory, new algorithms in areas like Self-Supervised Learning, Reinforcement Learning and Graph Neural Networks, and applications in Computer Vision, NLP, self-driving cars, Robotics, Life Sciences, and much more. If you’re invested in keeping up with the latest progress in AI, you are bound to be impacted by the work presented at NeurIPS 2021.

Jakub Zavrel, founder of the Amsterdam-based AI research company Zeta Alpha, has curated 8 key trends that you need to know about.

With 2334 full papers, we see a 20% increase in publication volume over last year. On top of this, there are 60 side workshops, and the organizers expect more than 15,000 online attendees from all over the world. Such a dense landscape of high-quality work is hard to navigate without a good guide and map. Here are our tips to dive in and come out with pearls.

1. Drug discovery using AI is hot

The hottest area applying Machine Learning today is, without question, at the level of proteins and molecules. Drug discovery continues to push the existing boundaries of machine learning techniques, and the huge potential for society and for the pharma business is obvious. Even Microsoft Research is getting into this field by opening a new lab on this topic in Amsterdam, headed by Max Welling. No wonder, since the Graph Neural Nets (GNNs) he helped popularize are the stars of this new field.

A good entry point for discovering the latest advances in AI-based drug design is the paper: A 3D Generative Model for Structure-Based Drug Design. Shitong Luo, from the Chinese biotech company Helixon Research, and his co-authors use GNNs to create molecules that fit into specific protein binding sites. They adapt so-called Masked Language Models, developed in NLP (corrupting a string by masking some elements and learning to reconstruct it), to generate candidate molecules in 3D, atom by atom, until the binding site is filled. This is a promising direction for much faster and cheaper development of new drugs, and the authors achieve a new state-of-the-art on several benchmarks.
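The masked-reconstruction idea borrowed from NLP can be shown with a toy sketch. Everything below is illustrative: the "atom" tokens, masking rate, and the trivial frequency-based fill-in are placeholders for the paper's GNN over 3D atom positions.

```python
import random

# Toy illustration of masked reconstruction: corrupt a sequence by masking
# some elements, then fill the masked positions back in from context.
# The paper reconstructs atoms in 3D with a GNN; here we just use the most
# frequent token as a stand-in "prediction".

MASK = "<mask>"

def mask_sequence(tokens, mask_prob=0.3, rng=None):
    """Replace each token with MASK with probability mask_prob."""
    rng = rng or random.Random(0)
    return [MASK if rng.random() < mask_prob else t for t in tokens]

def reconstruct(masked, vocab_counts):
    """Fill masked positions with the most frequent token seen so far."""
    best = max(vocab_counts, key=vocab_counts.get)
    return [best if t == MASK else t for t in masked]

atoms = ["C", "C", "O", "N", "C", "H", "H"]  # stand-in "tokens"
counts = {}
for a in atoms:
    counts[a] = counts.get(a, 0) + 1

corrupted = mask_sequence(atoms)
restored = reconstruct(corrupted, counts)
print(corrupted)
print(restored)
```

The generative variant used for drug design runs this loop repeatedly, adding one atom at a time until the binding site is filled.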

2. The all-seeing AI thanks to computer vision

The whole Deep Learning revolution in AI started with the advances made in the field of Computer Vision. This year’s NeurIPS shows that the progress has not slowed down at all. A lot of new work is on video, often related to self-driving cars. Our favorite in this area is: The Emergence of Objectness: Learning Zero-shot Segmentation from Videos. Humans can easily track objects of types they have never seen before in moving footage… so machines should be able to do the same! Runtao Liu and his co-authors have built a new zero-shot model for object segmentation by learning from unlabeled videos. Zero-shot learning means that zero training instances of the target category are used during training. They use video data and clever tricks to set up a training approach that leverages how foreground objects and the background tend to behave differently in recordings, without using any labeled data.
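The motion cue this work builds on can be illustrated with a minimal, hypothetical sketch: a synthetic square sliding over a static background is separable by simple frame differencing. The paper, of course, learns this grouping with a neural network rather than a hand-set threshold.

```python
import numpy as np

# Minimal sketch of the intuition that foreground objects and background
# behave differently in video: pixels belonging to a moving square change
# between frames, while the static background does not. All sizes and the
# threshold are illustrative.

def moving_square_frames(n_frames=5, size=32, square=6):
    frames = []
    for t in range(n_frames):
        f = np.zeros((size, size), dtype=float)
        f[10:10 + square, 2 + t:2 + t + square] = 1.0  # square slides right
        frames.append(f)
    return frames

def motion_mask(prev_frame, next_frame, thresh=0.5):
    """Flag pixels that change between consecutive frames as foreground."""
    return np.abs(next_frame - prev_frame) > thresh

frames = moving_square_frames()
mask = motion_mask(frames[0], frames[1])
print("moving pixels:", int(mask.sum()))
```

Only the leading and trailing edges of the square change between two frames, which is exactly the kind of weak, label-free signal the paper bootstraps full object masks from.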

3. International competition is still going strong

NeurIPS is prestigious and competitive, because the standards to get a paper accepted are very high. It always feels a bit like an Olympic medals ranking.

The US still leads by number of publications, with China a strong second and catching up. The Netherlands comes in at a respectable 14th place. Google (incl. DeepMind) is the largest organization in terms of contributed papers. If Google were a country, it would rank fourth after the US, China, and the UK. Does it even matter? Many of the graduate students in the US (and in Europe) are from China, India, and other ‘competing’ countries. AI research remains a global enterprise, an intellectual and scientific dialogue with a great deal of international and industry-academic collaboration, even though a lot is at stake in leading the pack.

4. Together is better: the move to multimodal

The importance of large AI conferences like NeurIPS is that they cut across the boundaries of individual subfields. One of the big emerging themes at NeurIPS is the progress in multi-modal models that combine language and vision. A great paper by Maria Tsimpoukelli and team (DeepMind) that is likely to have an impact is: Multimodal Few-Shot Learning with Frozen Language Models. This paper transfers the abilities of a language model to a multi-modal setting (vision and language) with a simple idea: train a Large Language Model, freeze it (meaning: fix its weights), and then train an image encoder to transform an image into a prompt that makes the frozen language model perform a specific task.

Though not yet dominant in terms of absolute performance, it is interesting to compare a model that is fully finetuned on multimodal data, and hence very expensive to train, with this one, which keeps the Language Model frozen and only trains the visual prompts. Applications of these ideas might soon be found in generating descriptions of images for e-commerce search, or in making multimedia accessible to the visually impaired.
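The freeze-and-prompt recipe can be sketched in a few lines of PyTorch. This is a toy illustration, not the paper's architecture: the modules, shapes, and names below are placeholders.

```python
import torch
import torch.nn as nn

# Sketch of the "frozen" recipe: the language model's weights are fixed,
# and only a small image encoder that maps an image into prompt embeddings
# is trained. All modules and dimensions here are illustrative stand-ins.

d_model, vocab = 64, 100

language_model = nn.Sequential(  # stand-in for a pretrained LM
    nn.Linear(d_model, d_model), nn.ReLU(), nn.Linear(d_model, vocab)
)
for p in language_model.parameters():
    p.requires_grad = False      # freeze: these weights never update

# Trained component: maps an image to an embedding the LM treats as a prompt.
image_encoder = nn.Linear(3 * 8 * 8, d_model)

image = torch.randn(1, 3 * 8 * 8)   # a tiny fake "image"
prompt = image_encoder(image)       # visual prompt for the frozen LM
logits = language_model(prompt)     # LM conditions on the visual prompt

loss = logits.sum()                 # placeholder loss
loss.backward()                     # gradients reach only the image encoder
```

After `backward()`, the frozen LM parameters have no gradients while the image encoder's do, which is exactly the property that makes this approach so much cheaper than full multimodal finetuning.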

Another cool paper in this category is VATT: Transformers for Multimodal Self-Supervised Learning from Raw Video, Audio and Text, by Hassan Akbari and a team of mostly Google researchers. They use a pure Transformer-based pipeline for learning semantic representations from raw video, audio, and text without supervision. It achieves state-of-the-art results on the major Kinetics video benchmarks while avoiding supervised pre-training.

5. Some papers were already famous ahead of the show

In today’s fast pace of AI research, the majority of conference papers are already available on arxiv.org before the conference. So why attend the conference at all? NeurIPS presents a great opportunity to hear the story more directly, and meet world-class AI researchers from your desk at home. Those already making an impact (measured by citations) are likely to be high quality and probably worth tuning in to. Here are the top 5 “Already famous” papers at the conference start.

6. Natural Language Processing and Search Technology

On the language side of things, a lot of bigger-is-better, my-model-is-bigger-than-your-model progress on language models has dominated the headlines this year. This is reflected in a plenary panel discussion on “The Consequences of Massive Scaling in Machine Learning” which opens the conference on Tuesday, as well as a lot of work that tries to make this area more efficient or easier to diagnose and apply. Carried by this, 2021 has brought remarkable progress towards human-level accuracy in Information Retrieval, nowadays also known as Neural or Vector Search. Deep Learning networks are redefining the way we access knowledge and documents. If you are into this, at NeurIPS you should check out: Baleen: Robust Multi-Hop Reasoning at Scale via Condensed Retrieval, End-to-End Training of Multi-Document Reader and Retriever for Open-Domain Question Answering, and One Question Answering Model for Many Languages with Cross-lingual Dense Passage Retrieval.
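The core mechanics of neural or vector search can be sketched in a few lines: embed documents and queries into one vector space, then retrieve by cosine similarity. The character-count "encoder" below is a deterministic stand-in for a trained dense encoder like those in the papers above.

```python
import numpy as np

# Toy sketch of vector search: documents and queries live in one embedding
# space; retrieval is nearest-neighbour search by cosine similarity. The
# placeholder encoder and all dimensions here are illustrative only.

def embed(texts, dim=8):
    """Placeholder encoder: bucket character codes into a unit vector."""
    vecs = []
    for t in texts:
        v = np.zeros(dim)
        for i, ch in enumerate(t.lower()):
            v[i % dim] += ord(ch)
        vecs.append(v / (np.linalg.norm(v) + 1e-9))
    return np.stack(vecs)

docs = ["neural search with dense vectors",
        "classic keyword search with inverted indexes",
        "reinforcement learning for game playing agents"]
doc_vecs = embed(docs)

def search(query, k=2):
    q = embed([query])[0]
    scores = doc_vecs @ q            # cosine similarity (unit-normalised)
    top = np.argsort(-scores)[:k]    # indices of the k best-scoring docs
    return [(docs[i], float(scores[i])) for i in top]

print(search("dense vector search"))
```

Production systems swap the toy encoder for a trained neural model and the brute-force dot product for an approximate nearest-neighbour index, but the retrieval loop is the same.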

Other, more general NLP papers you might like at NeurIPS: COCO-LM: Correcting and Contrasting Text Sequences for Language Model Pretraining and MAUVE: Measuring the Gap Between Neural Text and Human Text using Divergence Frontiers. MAUVE introduces a new quality measure for text generated by large language models, and won one of the best paper awards.

7. Reinforcement learning

Some of the big news in AI, think AlphaGo Zero or solving Rubik’s cube with a robot hand, is always tied to the promise of Reinforcement Learning (RL): training an agent to exhibit complex behavior using only delayed feedback about a reward.
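That setup, acting now and getting rewarded only later, can be illustrated with a minimal tabular Q-learning sketch on a hypothetical 5-cell corridor where the reward arrives only at the final cell. The environment and hyperparameters are illustrative, not drawn from any paper.

```python
import random

# Minimal tabular Q-learning sketch: an agent on a 5-cell corridor receives
# a reward only upon reaching the rightmost cell (delayed feedback) and must
# learn which action to take in each state.

N_STATES, ACTIONS = 5, [0, 1]      # action 0 = step left, 1 = step right
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.3  # learning rate, discount, exploration
rng = random.Random(0)

Q = [[0.0, 0.0] for _ in range(N_STATES)]  # Q[state][action]

def step(state, action):
    """Move left or right; reward 1 only when the final cell is reached."""
    nxt = max(0, min(N_STATES - 1, state + (1 if action else -1)))
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

for _ in range(200):  # episodes
    s, done = 0, False
    while not done:
        if rng.random() < EPS:                       # explore
            a = rng.choice(ACTIONS)
        else:                                        # exploit current Q
            a = max(ACTIONS, key=lambda act: Q[s][act])
        s2, r, done = step(s, a)
        Q[s][a] += ALPHA * (r + GAMMA * max(Q[s2]) - Q[s][a])
        s = s2

policy = [max(ACTIONS, key=lambda act: Q[s][act]) for s in range(N_STATES)]
print(policy)  # greedy action per state after training
```

Even in this tiny example, the agent must propagate the reward signal backwards through several unrewarded steps, which is the crux of what makes RL generalization hard at scale.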

In fact, as recently as October 2021, researchers from DeepMind published a position paper, “Reward is Enough,” stating their belief that RL is the best path to so-called AGI (Artificial General Intelligence). In industry, practical applications, such as control of autonomous vehicles, have remained limited, and RL remains stuck at being awesome at beating humans in computer games after being trained in self-play. This does not stop the researchers at NeurIPS from devoting significant energy to solving the outstanding problems in the area, notably the lack of efficient generalization. About a quarter of the papers mention RL in some form, which means that many of AI’s brightest minds are betting precious time on upcoming breakthroughs in this area.

A good starting point for the latest in RL is a pair of papers by Ben Eysenbach, Russ Salakhutdinov and Sergey Levine: Robust Predictable Control and Replacing Rewards with Examples. These papers propose a compression-inspired method for learning robust policies in RL, and a novel way to specify the reward function for complex problems by using examples. A few other notable RL papers at NeurIPS to explore are: Behavior From the Void: Unsupervised Active Pre-Training, Emergent Discrete Communication in Semantic Spaces, and Near-Optimal Offline Reinforcement Learning via Double Variance Reduction. And as a bonus, here’s a semantic map of the RL space at NeurIPS 2021.

8. AI Ethics and Regulation

With the successes of AI, and the progress in the field, come successful applications, and with the application of new technologies, unforeseen problems inevitably arise. The program of NeurIPS reflects that. With two keynotes, a panel, and numerous tutorials and workshops addressing issues of inclusiveness, fairness, bias, and the technical and operational issues introduced by the application of AI systems, the conference is no longer the bastion of tech-geekiness it once was. As AI has become mainstream, it needs to deal with mainstream acceptance and regulation of its technologies.

Only two years ago, at NeurIPS 2019, there was a widespread feeling that with all the progress in Deep Learning, truly self-driving cars were about to hit the streets in months. Some of that optimism has waned, and they are still taking their time driving out of the lab, but if you want to know where the major industrial and academic labs are today, and where they are likely to be heading next year, you need to pay attention to this conference.

Jakub Zavrel

Founder and CEO of Zeta Alpha, a smarter way to discover and organize knowledge for AI and Data Science teams. Experienced AI researcher, technologist and entrepreneur. Founder of Textkernel, a global market leader in Machine Intelligence for People and Jobs. Degrees in Cognitive Artificial Intelligence and Computational Linguistics from an era when Neural Networks were as fashionable as today, but significantly less powerful.
