Every week, the editors of ai.nl source the most compelling news stories about data science and AI in the Netherlands and abroad. In this edition: we are looking at OpenAI’s next-gen AI image generator DALL-E 2, Google’s use of AI to update outdated business hours, billion-dollar language AI startups, the latest venture from DeepMind alumni, and the potential of artificial evolution.
The pace at which artificial intelligence (AI) is evolving shows the importance of the technology to industry as well as to society. For AI to become truly useful, it first needs to do small things really well. This week’s stories show how AI is getting good at things like generating images, updating business hours, and even deducing whether a person is depressed simply by looking at their Twitter profile.
We are once again featuring the latest AI products from OpenAI and DeepMind, while also looking at how Google is implementing AI within its Maps product. In addition, we are continuing our coverage of robotics with a look at self-assembling robots. Here is a look at some of the biggest AI news from last week that shows the evolution of AI.
DALL-E is an AI-based image generator that creates images from textual descriptions. Named after the Spanish surrealist Salvador Dalí and Pixar’s robot WALL-E, the AI program was created by OpenAI, and the AI lab introduced a new and improved version, DALL-E 2, to the public on Wednesday. This new version is capable of producing high-resolution pictures from text queries that OpenAI says are “more realistic and accurate with 4x greater resolution.”
The early version of DALL-E produced images that looked like paintings, but DALL-E 2 is better thanks to a process called diffusion, which starts with a pattern of random dots and gradually alters that pattern towards an image. With DALL-E 2, OpenAI is ensuring that obscenities, nudity, conspiracy theories, and likenesses of real people are not allowed. For now, the tool is only available to a small batch of testers, and you can sign up for the waitlist here. With Google saying AI-generated content is against its guidelines, it remains to be seen how DALL-E imagery will be treated.
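The denoising idea behind diffusion can be sketched in a few lines. The toy below is purely illustrative and is not OpenAI’s model: a real diffusion system learns the denoising step with a large neural network, whereas here a simple interpolation schedule stands in for it. All names and the schedule are invented for this sketch.

```python
import random

def toy_reverse_diffusion(target, steps=50, seed=0):
    """Toy illustration of diffusion-style generation: start from pure
    random noise and repeatedly nudge it towards a target pattern.
    A real model (like DALL-E 2's decoder) learns each denoising step;
    here, linear interpolation fakes that step."""
    rng = random.Random(seed)
    image = [rng.random() for _ in target]        # begin with random dots
    for t in range(steps):
        alpha = (t + 1) / steps                   # denoising schedule: 0 -> 1
        image = [(1 - alpha) * x + alpha * y for x, y in zip(image, target)]
    return image

# A 4-"pixel" target pattern; the loop converges to it from noise.
target = [0.0, 0.5, 1.0, 0.5]
result = toy_reverse_diffusion(target)
```

The point of the sketch is only the shape of the process: noise in, gradual refinement, image out.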
Google Maps is one of the best tools for learning about a new location and even for checking the operating hours of a business. However, the business hours mentioned on a listing are often outdated. The search giant is now using artificial intelligence and its Duplex technology to keep business hours updated on Google Maps.
The AI first looks at when the business profile was last updated and matches it against the working hours of similar businesses and “popular times” data to estimate how likely it is that the listed hours are incorrect. If the AI determines that the business hours need an update, it looks at additional information such as account details or Street View images. Google Duplex is also used to verify the hours with an actual human. This could be one of the simplest yet most useful applications of AI within Google’s giant suite of products.
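The pipeline described above can be sketched as a simple scoring heuristic. Everything here — the function name, the weights, the five-year cap, and the threshold — is invented for illustration; Google has not published its actual model, which reportedly uses many more signals.

```python
from datetime import date

def hours_look_stale(last_updated, listed_hours, peer_hours, threshold=0.5):
    """Hedged sketch of the staleness check described in the article.
    Combines (a) how long ago the profile was updated with (b) how far
    the listed hours deviate from similar nearby businesses. A score
    above the threshold would trigger deeper checks (account info,
    Street View imagery, or a Duplex call)."""
    years_stale = (date.today() - last_updated).days / 365
    staleness = min(years_stale / 5, 1.0)          # saturate after 5 years
    deviation = sum(1 for h in peer_hours if h != listed_hours) / len(peer_hours)
    score = 0.6 * staleness + 0.4 * deviation      # arbitrary illustrative weights
    return score >= threshold

# A listing untouched since 2015 whose hours disagree with all peers:
flag = hours_look_stale(date(2015, 1, 1), "9-17", ["10-18", "10-18", "10-18"])
```

In the real system, a positive flag leads to escalation rather than an automatic edit, which keeps a human (or a Duplex call) in the loop.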
In a new study, researchers have developed an algorithm capable of recognising depressed Twitter users in roughly 9 out of 10 cases. According to the WHO, 300 million people worldwide suffer from depression, and the researchers note that depressive symptoms are a common problem. To detect depression in time, they turned to Twitter and built an algorithm with roughly 90 per cent accuracy.
The algorithm determines a person’s mental state by analysing 38 different factors from a public Twitter profile. These factors include the content of messages, such as how many positive and negative words or emojis are used. It also takes into account the time at which a message is posted and the number of friends or followers to reach a conclusion.
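As a rough illustration of the kind of features involved — the word lists, feature names, and the night-time cutoff below are invented stand-ins, not the study’s actual 38 factors:

```python
# Invented mini-lexicons; the study's real feature set is far richer.
POSITIVE = {"happy", "great", "love", "😀"}
NEGATIVE = {"sad", "alone", "tired", "😢"}

def extract_features(tweets, followers, friends):
    """Sketch of profile feature extraction. Each tweet is a
    (text, hour_posted) pair; the output dict would feed a
    downstream classifier."""
    words = [w.lower() for text, _ in tweets for w in text.split()]
    return {
        "positive_terms": sum(w in POSITIVE for w in words),
        "negative_terms": sum(w in NEGATIVE for w in words),
        "night_posts": sum(1 for _, hour in tweets if hour < 6 or hour >= 23),
        "followers": followers,
        "friends": friends,
    }

features = extract_features(
    [("so tired and sad", 2), ("great day", 14)], followers=120, friends=80
)
```

A real system would feed such per-profile feature vectors into a trained classifier rather than applying any hand-set rule.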
The algorithm was trained using two databases containing the Twitter histories of thousands of users. Subsequent tests found that the algorithm is accurate 88.39 per cent of the time, and the researchers believe it can be extended to other social media platforms or used for a different purpose altogether, such as criminal investigations.
The field of language AI, also referred to as natural language processing (NLP), has seen unprecedented advancement in the past few years. This advancement can be attributed to the development of self-supervised learning and the deep learning architecture known as the transformer. Forbes writes that the next generation of language AI will make the leap from academic research to widespread real-world adoption.
The report adds that the first set of startups driving this change will be those developing and making available core general-purpose NLP technology for others to apply across industries and use cases. The GPT-3 language model designed by OpenAI fits this category. The second set of startups driving the change will be ones building search engines from the ground up.
Other areas where we will see major progress with language AI include writing assistants, language translation, sales intelligence, chatbot tools and infrastructure, internal employee engagement, conversational voice assistants like the aforementioned Google Duplex, contact centres, content moderation, and healthcare.
The developers behind DeepStack, an AI system built to beat humans at heads-up, no-limit poker, have formed a new startup that aims to take on the stock market. Their new venture, called Equilibre Technologies, will employ algorithms to pick stocks and cryptocurrencies.
While a typical stock-picking AI tries to guess what is going to happen next, the Equilibre team plans to create a far more complex algorithm, one that can solve problems with incomplete information. They plan to combine game theory with artificial intuition to gain a “theoretical advantage over other computer-based or human trading methodologies.” While the stock market is subject to market risk, the creation of Equilibre shows how DeepMind is becoming a hub for grooming AI talent.
AI researchers are shifting their focus from building programs trained on large databases to giving their programs “the kind of knowledge we take for granted.” Computers famously lack the common sense that we humans possess and use to discern information without full context.
The goal of many AI researchers is to build artificial general intelligence, a machine with the ability to learn and reason like a human mind. However, to get there, computers will need to learn common sense first and this New Yorker article explores how AI researchers are making progress on this long-term goal.
In a TED talk published this month, computer scientist Emma Hart explained a radical new technology that allows robots to be created, reproduce and evolve over long periods of time. She says this technology will result in robot design and fabrication becoming a task for machines rather than humans.
Hart explains that such robots won’t be used for straightforward tasks like cleaning your home. These robots will instead be used for things like exploring a faraway place, say the Martian surface. Instead of studying the geography, building a robot, and sending it there in the hope that it survives and works, this new technology would build the robot first and then enable it to keep evolving and adapting to its new surroundings.
We have already seen robots take over factory floors and hospitals, and even enter our own homes as cleaning assistants. Robots are also being offered as a service to industries, but this new technology could be the next frontier for robotics, paving the way for truly intelligent robots that self-assemble and grow more capable over time. Watch the TED Talk here.
What to read next?
- 🌍 How LUMO Labs investor Andy Lürling helps AI-startups and tries to make the world a better place
- 🧠 OpenAI wants to be the first to build artificial general intelligence: a look at its history, key members, and major achievements like GPT-3
- 📖 Top AI universities in the US: best universities and degrees in the field of AI and machine learning