Every week, the editors of ai.nl source the most compelling news stories about data science and AI in The Netherlands and abroad. In this edition: a new algorithm capable of writing Dutch text by itself, an AI to predict the 2022 Oscar winners, the launch of a new AI lab in Utrecht, and the possible weaponisation of AI when the intent is malicious.
Last week proved to be a pivotal one for artificial intelligence (AI), with NVIDIA introducing its H100 GPU based on the Hopper architecture. The GPU will help companies scale their AI data centres and run cloud-based instances that significantly speed up machine learning and help data scientists working with big data. While the immediate benefits are still some time away, the news this week shows that AI will only get better with the availability of more computing power and edge computing.
The advancement in AI processing will also contribute to the betterment of society, allowing for the creation of more labs like the AI lab opened in Utrecht to dive into issues facing government organisations. It will also make deep learning more powerful and enable more applications like the ones in the news. Here is a look at some of the biggest news you need to know about AI.
Meet Lisa – an algorithm that writes Dutch web text itself
AI is already being used to write news and web articles in English. Now, Dutch online marketing agency Epurple has developed an algorithm called Lisa that can write Dutch web texts independently. The algorithm was developed using Natural Language Processing (NLP), and the agency says it can write “high-scoring SEO/SEA web texts in Dutch”.
Lisa is an example of how large language models like OpenAI’s GPT-3 allow people to innovate on the language front. The online marketing agency says that Lisa is built on NLP but also draws on online input data from different languages, including the GPT-3 model. Lisa is being pitched as a tool to write new content based on relevant keywords as well as to improve existing content for better SEO relevance.
Oscar 2022 prediction with Machine Learning
It may come as a surprise that a machine learning model failed to predict the winner of Best Picture at this year’s Oscars. That did happen, but the model managed to predict a number of other categories correctly. BigML, which provides a machine learning platform that anyone can use, built a prediction model using modelling approaches like OptiML, combining the top 10 best-performing OptiML models for each category. Once the models were created, BigML made batch predictions and found that OptiML Fusions and Deepnets agreed on most categories.
However, in categories where the two approaches differed, the company averaged them out for its final predictions and rankings. It was able to build these models thanks to historical data from the Academy Awards and relevant data points on this year’s nominees. BigML is also making its dataset available for anyone to clone and reach their own predictions. Predictive modelling is one of the most profound use cases of AI, and this BigML project shows how it can be used.
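To make the averaging step concrete, here is a minimal sketch of combining two modelling approaches’ per-nominee win probabilities and ranking the result. The nominee names and scores are invented for illustration, and this is plain Python, not BigML’s actual API or pipeline.

```python
# Hypothetical win probabilities for one category from two modelling
# approaches (e.g. an OptiML fusion and a deepnet); values are invented.
fusion_scores = {"Film A": 0.55, "Film B": 0.30, "Film C": 0.15}
deepnet_scores = {"Film A": 0.40, "Film B": 0.45, "Film C": 0.15}

def average_predictions(a, b):
    """Average the two models' probabilities for each nominee."""
    return {name: (a[name] + b[name]) / 2 for name in a}

combined = average_predictions(fusion_scores, deepnet_scores)

# Rank nominees by the averaged score, highest first; the top entry
# becomes the final prediction for the category.
ranking = sorted(combined, key=combined.get, reverse=True)
predicted_winner = ranking[0]
```

When the two approaches agree, averaging changes nothing; when they disagree, it acts as a simple two-model ensemble, which tends to be more robust than trusting either model alone.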
A guide to Kaggle and TensorFlow challenge
Amsterdam Intelligence has published a definitive piece on Kaggle, an online platform that hosts international data science competitions. Most competitions on the platform run for three months, allowing participants to form a team and embark on solving data science problems. The TensorFlow competition selected by Amsterdam Intelligence is also one that can do a lot of good for biodiversity.
The challenge is called TensorFlow – Help Protect the Great Barrier Reef, which aims to protect coral reefs that are under threat from crown-of-thorns starfish (COTS). The goal of the project is to accurately identify starfish in real-time using an object detection model trained on underwater videos of coral reefs. The solution will help the project’s researchers to identify the COTS threatening Australia’s Great Barrier Reef. The team suggests that participants in the Kaggle Competition should always start by understanding the data or the validation strategy.
After grasping the data, it is important to perform an exploratory data analysis. The competition is evaluated using the F2-score at different intersection-over-union (IoU) thresholds. This is followed by selecting the right object detection model, up-scaling the image resolution, relabelling the training data, augmentation, and object tracking. We highly recommend reading the blog in full here, but it is important not to get carried away by the public leaderboard.
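The F2-score mentioned above is the F-beta score with beta = 2, which weights recall more heavily than precision, fitting for a conservation task where missing a starfish is worse than a false alarm. The competition averages this score over several IoU thresholds; the sketch below shows only the per-threshold F2 from detection counts, with the example numbers invented for illustration.

```python
def f2_score(tp, fp, fn):
    """F-beta score with beta=2, which weights recall twice as
    heavily as precision. Algebraically this reduces to:
        F2 = 5*TP / (5*TP + 4*FN + FP)
    """
    denom = 5 * tp + 4 * fn + fp
    return 5 * tp / denom if denom else 0.0

# Example (invented counts): at one IoU threshold, 8 detections match
# a ground-truth starfish (TP), 2 are spurious (FP), and 4 starfish
# are missed entirely (FN).
score = f2_score(tp=8, fp=2, fn=4)
```

Note how the 4x weight on false negatives penalises the 4 missed starfish far more than the 2 spurious detections, which is exactly the behaviour a recall-oriented metric should have here.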
A new AI lab to study government issues
The AI Labs in Utrecht are now open, with each lab focusing on a social theme. While the labs started as an initiative of Utrecht University, they have since grown into a broad knowledge network in the field of AI. The labs focus on social themes such as national security, sustainability, mobility, and media, and will see participation from public and private organisations, students, lecturers, and researchers.
There are four labs: the National Police Lab AI, the oldest, followed by the Mobility Lab, the Sustainability Lab, and the Media Lab. Utrecht University has also announced the AI Lab for public services, which focuses on government issues. With these labs, the goal is to develop more knowledge about the specific themes and use the learnings to design and test solutions. AI is being used for a lot of things, but the AI Lab wants to put it to use for governance by bringing together professional practice and training.
Tesla and Apple among trusted brands to develop autonomous vehicles
Mercedes recently demonstrated its Level 3 autonomous driving tech on an S-Class in the United States. Autonomous tech is evolving fast, and it is only a matter of time before truly autonomous vehicles based on Level 5 autonomy hit the roads. These Level 5 autonomous vehicles will most likely ditch the steering wheel and be fully autonomous. One of the big questions in the industry, however, is who will get there first.
If a study by automotive research company AutoPacific is anything to go by, most people see Tesla becoming the first to introduce such tech. This shouldn’t come as a surprise, considering Tesla already supports Level 2 autonomy on the Model S with its Autopilot system. Tesla came out on top among respondents with 32 per cent of votes, followed by Toyota with 19 per cent and BMW with 18 per cent. The top five was rounded out by Chevrolet and Ford with 16 per cent and 14 per cent of votes, respectively.
The study found that younger customers trust BMW more than Chevrolet or Ford. Apple also managed to enter the top ten with 13 per cent of votes. This trust comes despite Apple not having a vehicle or prototype to show, and amid reports that its autonomous driving team has lost some key members. Japanese tech giant Sony received 5 per cent of votes but did not land a top-ten spot.
An AI experiment generates 40,000 hypothetical bioweapons
The ability of AI to crunch available data and turn it into actionable insights makes it both useful and capable of causing harm. To a large extent, the use of AI depends on the intent of its creator, and new research shows how easily AI can be misused and trained for malicious purposes. This comes at a time when AI can spot diseases early and even help manage chemical reactions.
A trial run shows that, when provided with data, an AI can generate designs for hypothetical bioweapon agents. An existing AI identified 40,000 such candidates in a span of only six hours. “We have spent decades using computers and AI to improve human health – not to degrade it,” the researchers said of their discovery.
An AI system called MegaSyn, which is normally used to detect toxicity in molecules, was used to run the trial for an international security conference. In the experiment, the researchers kept the toxic molecules and trained the model to put them together. Within a span of only six hours, the model delivered 40,000 hypothetical bioweapons. The research shows how AI can be used for good as well as for evil.