Every week the editors of ai.nl source the most compelling news stories about data science and AI in The Netherlands and abroad. In this edition: Andrew Ng calls for a shift to small data sets, Mercedes goes for level 3 autonomous driving, and BrainCreators and Luminext collaborate to make public lighting in Utrecht more efficient.
Last week was another eventful week in the world of artificial intelligence (AI), with inspection and detection becoming the talk of the town. The news did not relate directly to the underlying technology, but it does show how the application of AI can benefit society across areas like inspection, self-driving vehicles, and the next wave of AI.
From the latest developments in the field of training AI faster to NVIDIA CEO Jensen Huang preparing to share details on the future of AI at the GTC conference, here is a look at some of the biggest news from the world of AI that you need to know.
BrainCreators and Luminext collaborate to make public lighting intelligent
Can you bring AI-powered intelligence to public lighting? BrainCreators and Luminext want to make it possible with a major collaboration. BrainCreators is a specialist in digital inspection, while Luminext is a leader in smart lighting. The two companies have announced a collaboration that will lead to the development of an intelligent solution for the management and maintenance of public lighting.
The collaboration combines BrainCreators’ digital road stewarding solution INSPECH, which allows for visual road inspection, with Luminizer, Luminext’s platform for remotely managing, monitoring and controlling municipal public lighting. The project, supported by both the province and the municipality of Utrecht, will enable automated inspection of public lighting, eliminate manual checks and give a better picture of energy consumption. The solution could pave the way for a smart city in which even public lighting is managed and governed by data.
A simple trick to train AI two times faster
One of the problems facing data scientists and AI engineers right now is that AI models keep getting larger. The introduction of GPT-3 demonstrated the significant jump in performance that can be achieved by increasing model size. As a result, the AI industry is building ever larger models, driving up the resources required to train these massive neural networks, along with the cost of the compute and energy they consume.
In order to curb this increase in cost, researchers from Oxford University have outlined a new approach that cuts training time in half. They do so by rewriting backpropagation, the fundamental procedure used to train AI models. A model is built by training a neural network on data relevant to the problem using backpropagation, a process split into two phases: a forward pass and a backward pass.
These forward and backward passes are repeated many times over large amounts of data until the network reaches an optimal configuration. The Oxford researchers have found a way to simplify this repetitive approach: their new training method does away with the backward pass entirely, and their algorithm estimates how the weights need to be altered during the forward pass. The best part is that these estimates are close enough to match the performance of backpropagation.
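To give a sense of how training without a backward pass can work, here is a minimal sketch of the forward-gradient idea in JAX: perturb the weights in a random direction, measure how the loss changes along that direction with forward-mode differentiation, and update the weights accordingly. The toy linear model, loss function and hyperparameters below are illustrative assumptions, not the researchers’ actual setup.

```python
# A minimal sketch of forward-only training ("forward gradients"):
# sample a random direction v, compute the directional derivative of the loss
# along v with forward-mode autodiff (no backward pass), and step the weights
# by that scalar times v. Model, loss and learning rate are illustrative.
import jax
import jax.numpy as jnp

def loss(w, x, y):
    # Toy linear model with squared error; stands in for a full network.
    pred = x @ w
    return jnp.mean((pred - y) ** 2)

def forward_gradient_step(w, x, y, key, lr=0.1):
    # Random perturbation direction with the same shape as the weights.
    v = jax.random.normal(key, w.shape)
    # jax.jvp returns the loss and its directional derivative along v,
    # computed in a single forward pass.
    _, dir_deriv = jax.jvp(lambda w_: loss(w_, x, y), (w,), (v,))
    # dir_deriv * v is an unbiased estimate of the true gradient,
    # so it can be used directly for gradient descent.
    return w - lr * dir_deriv * v

# Tiny usage example on synthetic data.
key, data_key = jax.random.split(jax.random.PRNGKey(0))
x = jax.random.normal(data_key, (32, 4))
w_true = jnp.array([1.0, -2.0, 0.5, 3.0])
y = x @ w_true
w = jnp.zeros(4)
for _ in range(200):
    key, step_key = jax.random.split(key)
    w = forward_gradient_step(w, x, y, step_key)
```

Because no backward pass is needed, each update costs roughly one forward evaluation, which is where the potential speed-up over standard backpropagation comes from.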
Andrew Ng says AI attention should shift to small data
AI is already making its impact felt in a number of industries, but Andrew Ng, one of the most prominent figures in AI, sees a future where the priority shifts from bits to things. He says the focus of AI will inevitably move from big data in the lab to expert knowledge in the field. The visionary co-founder of Coursera and adjunct professor at Stanford University believes there is a need to build tools that empower “customers to build their own models, engineer the data and express their domain knowledge.”
He is already making this possible through Landing AI, and Ng says such a tool needs to fit into a good workflow so that experts can quickly see where they agree. He is also voicing support for multimodal AI, which combines different forms of input such as text and images. In his vision for the next 10 years of AI, Ng says that “attention needs to shift towards small data and data-centric AI.”
Mercedes demos level 3 autonomous driving
In an early sign of automakers eventually getting to true autonomous driving, Mercedes has demonstrated Drive Pilot, a level 3 autonomous driving system, on its S-Class luxury sedan. It is the most advanced “driver assist” technology available on a commercial automobile yet and shows how quickly AI is progressing in the field of self-driving technology. The most common driver assist technology available right now is level 2, which requires drivers to keep their hands on the wheel and eyes on the road.
With Drive Pilot, those driving the S-Class will be able to take their hands off the wheel and focus on something other than the road, such as browsing email or watching YouTube videos on the centre console. For the feature to work, the Mercedes S-Class must be on a freeway and travelling under 40 mph (about 64 km/h). The technology is made possible by fitting the S-Class with three types of sensors, radar, lidar and cameras, plus an onboard computer that acts as the AI-powered brain. We are still far from seeing cars without steering wheels on our roads, but this could be the first significant step in that direction.
NVIDIA CEO Jensen Huang will accelerate AI and computing at GTC
This week will see NVIDIA CEO Jensen Huang back on the virtual stage of GTC, where he will outline new ways the graphics computing giant is looking to accelerate computing of all kinds. Huang has made it a mission not only to introduce new products but also to frame them with stories that put the advances in technology into perspective.
NVIDIA says this year’s GTC, held from March 21 to March 24, won’t be any different. In addition to accelerating computing and demonstrating new AI capabilities made possible by improved compute performance, Huang will host prominent experts in AI and technology and introduce startups working on the Omniverse and quantum computing as part of NVIDIA’s Inception session.