
LightSim: An autonomous driving lighting simulation platform launched at NeurIPS 2023 to achieve a realistic, controllable and scalable simulation experience

Dec 15, 2023 04:22 PM

Recently, researchers from Waabi AI, the University of Toronto, the University of Waterloo, and MIT presented LightSim, a new lighting simulation platform for autonomous driving, at NeurIPS 2023. The authors propose a method for generating paired illumination training data from real-world data, addressing the problems of missing data and model-transfer loss. LightSim combines neural radiance fields (NeRF) with physics-based deep networks to render driving videos, achieving lighting simulation of dynamic scenes on large-scale real-world data for the first time.


  • Project website: https://waabi.ai/lightsim
  • Paper link: https://openreview.net/pdf?id=mcx8IGneYw

Why does autonomous driving need lighting simulation?

Camera simulation is important in robotics, especially for autonomous vehicles perceiving outdoor scenes. However, existing camera perception systems degrade once they encounter outdoor lighting conditions not seen during training. Generating rich datasets of outdoor lighting variation through camera simulation can improve the robustness of autonomous driving systems.

Common camera simulation methods are typically built on physics engines, which render a scene from hand-authored 3D models and specified lighting conditions. The results, however, often lack diversity and realism. Moreover, because high-quality 3D assets are scarce, physically rendered images do not exactly match real-world scenes, so models trained on them generalize poorly to real data.

The alternative is data-driven simulation, which uses neural rendering to reconstruct real-world digital twins that replicate the data observed by sensors. This approach scales to more scenes and improves realism, but existing techniques tend to bake the scene lighting into the 3D model, which prevents editing the digital twin, for example changing lighting conditions or adding and removing objects.

In their NeurIPS 2023 paper, Neural Lighting Simulation for Urban Scenes, researchers from Waabi AI demonstrate LightSim, a lighting simulation system built on a physics engine and neural networks.


Unlike previous work, LightSim achieves:

1. Realistic: for the first time, lighting simulation of large-scale outdoor dynamic scenes, with more accurate shadows, inter-object lighting effects, and more.
2. Controllable: supports editing of dynamic driving scenes (adding and removing objects, changing camera positions and parameters, changing lighting, generating safety-critical scenarios, etc.) to produce more realistic and consistent video, improving system robustness to lighting and edge conditions.
3. Scalable: easy to extend to more scenes and different datasets. A single pass of data collection is enough to reconstruct a scene and run controlled simulation tests on it.


Building a simulation system

Step 1: Build a real-world re-illuminable digital twin

To reconstruct autonomous driving scenes in the digital world, LightSim first separates dynamic objects from the static scene in the collected data. This step uses UniSim to reconstruct the scene, with camera view dependence removed from the network. Marching cubes then extracts the geometry, which is further converted into a mesh with basic materials.
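As a toy stand-in for this geometry-extraction step, the sketch below samples a density field on a regular grid, thresholds it into occupancy, and locates surface voxels where the isosurface crosses between neighbours. A real pipeline would run marching cubes (e.g. `skimage.measure.marching_cubes`) on such a grid to obtain a triangle mesh; the sphere density function here is made up for illustration, not UniSim's learned field.

```python
import numpy as np

def sample_density(res=32, radius=0.5):
    """Sample a synthetic sphere density on a res^3 grid in [-1, 1]^3."""
    xs = np.linspace(-1.0, 1.0, res)
    x, y, z = np.meshgrid(xs, xs, xs, indexing="ij")
    return radius - np.sqrt(x**2 + y**2 + z**2)  # > 0 inside the sphere

def surface_voxels(density, iso=0.0):
    """Mark voxels where the isosurface crosses, axis by axis."""
    inside = density > iso
    surf = np.zeros_like(inside)
    for axis in range(3):
        shifted = np.roll(inside, 1, axis=axis)
        crossing = inside != shifted
        # np.roll wraps around; discard the wrapped first slice
        crossing[(slice(None),) * axis + (0,)] = False
        surf |= crossing
    return surf

density = sample_density()
surf = surface_voxels(density)  # thin shell around the sphere surface
```

Marching cubes would then triangulate exactly these crossing cells, producing the mesh that LightSim subsequently assigns basic materials to.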
Besides materials and geometry, LightSim also estimates outdoor lighting from the sun and the sky, the main light sources in outdoor daytime scenes, producing a high-dynamic-range environment map (HDR sky dome). Using sensor data and the extracted geometry, LightSim estimates an incomplete panoramic image and then completes it to obtain a full 360° view of the sky. This panorama, together with GPS information, is used to generate an HDR environment map that accurately estimates sun intensity, sun direction, and sky appearance.
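One small piece of this step can be sketched concretely: converting a sun direction (azimuth/elevation, e.g. derived from GPS position and capture time) into a pixel location on an equirectangular sky panorama. The angle conventions and resolution below are illustrative assumptions, not LightSim's actual parameterization.

```python
import numpy as np

def sun_direction(azimuth_deg, elevation_deg):
    """Unit vector for the sun: x east, y north, z up."""
    az, el = np.radians(azimuth_deg), np.radians(elevation_deg)
    return np.array([np.cos(el) * np.sin(az),
                     np.cos(el) * np.cos(az),
                     np.sin(el)])

def to_equirect_pixel(direction, width=512, height=256):
    """Map a unit direction to (col, row) on an equirectangular map."""
    x, y, z = direction
    az = np.arctan2(x, y)              # [-pi, pi], 0 = north
    el = np.arcsin(np.clip(z, -1, 1))  # [-pi/2, pi/2]
    col = int((az / (2 * np.pi) + 0.5) * (width - 1))
    row = int((0.5 - el / np.pi) * (height - 1))
    return col, row

# Sun due south at 45 degrees above the horizon (illustrative values)
d = sun_direction(azimuth_deg=180.0, elevation_deg=45.0)
col, row = to_equirect_pixel(d)
```

Writing the sun's radiance into such a pixel neighbourhood, on top of a completed sky panorama, is one simple way an HDR sky dome can encode the estimated sun direction and intensity.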


Step 2: Neural lighting simulation of dynamic urban scenes

Once the digital twin is obtained, it can be further modified, for example by adding or removing objects, changing vehicle trajectories, or changing the lighting, to generate an augmented scene representation. LightSim then performs physically based rendering, producing lighting-related buffers such as base color, depth, normal vectors, and shadows for the modified scene. Using these buffers together with estimates of the scene's source and target lighting conditions, LightSim renders the scene under the new lighting.
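The kind of shading that consumes these buffers can be sketched minimally: a single directional sun plus a constant sky ambient term stands in for the full HDR environment map, and all buffer values below are made up for illustration rather than taken from LightSim.

```python
import numpy as np

h, w = 4, 4
base_color = np.full((h, w, 3), 0.6)                # albedo buffer
normals = np.zeros((h, w, 3)); normals[..., 2] = 1  # flat ground, facing up
shadow = np.ones((h, w)); shadow[:2, :] = 0.0       # top half shadowed

sun_dir = np.array([0.0, 0.3, 0.95])
sun_dir /= np.linalg.norm(sun_dir)
sun_rgb = np.array([1.0, 0.95, 0.85])   # sun radiance (illustrative)
sky_rgb = np.array([0.2, 0.25, 0.35])   # ambient sky term (illustrative)

# Lambertian shading: albedo * (sun * max(0, n.l) * shadow + sky)
n_dot_l = np.clip((normals * sun_dir).sum(-1), 0.0, None)
image = base_color * (sun_rgb * (n_dot_l * shadow)[..., None] + sky_rgb)
# Shadowed pixels keep only the sky term, so they come out darker.
```

Swapping in a different sun direction or sky term relights the same buffers, which is the basic mechanism behind rendering a modified scene under target lighting.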


Although physically based rendering reconstructs the scene's lighting effects reasonably well, imperfect geometry and errors in the material/lighting decomposition mean the results often lack realism, showing blur, implausible surface reflections, and boundary artifacts. The researchers therefore propose neural deferred rendering to enhance realism: an image synthesis network takes the source image and the precomputed lighting-related buffers produced by the rendering engine and generates the final image. The network is also given the environment map as additional lighting context, and the digital twin is used to generate paired images, providing a novel training scheme on paired simulated and real data.

Simulation capability display

Changing scene lighting (Scene Relighting)

LightSim can render the same scene under new lighting conditions in a temporally consistent manner. As shown in the video, new sun positions and sky appearances change the scene's shadows and overall appearance. LightSim can also relight scenes in batches, generating new temporally consistent and 3D-aware lighting variations of the same scene from both estimated and real HDR environment maps.
Shadow Editing

LightSim's lighting representation is editable: the sun direction can be changed, which updates the lighting and shadows that depend on it. LightSim generates the following video by rotating the HDR environment map and passing it to the neural deferred rendering module. Shadow editing can also be performed in batches.
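The environment-map rotation itself is simple: rotating an equirectangular HDR map about the vertical axis is a wrap-around horizontal shift of its columns, since longitude maps linearly to the column index. The map below is a random placeholder standing in for a real HDR sky dome.

```python
import numpy as np

def rotate_envmap(env, degrees):
    """Rotate an equirectangular sky dome about the up axis."""
    width = env.shape[1]
    shift = int(round(degrees / 360.0 * width)) % width
    return np.roll(env, shift, axis=1)  # columns wrap around

env = np.random.rand(64, 128, 3)          # H x W x RGB, HDR values
rotated = rotate_envmap(env, 90.0)        # move the sun a quarter turn
restored = rotate_envmap(rotated, 270.0)  # a full turn restores the map
```

Feeding the rotated map through the rendering pipeline moves the sun, and with it every sun-dependent shadow in the scene.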
Lighting-Aware Actor Insertion

Besides modifying lighting, LightSim can also perform lighting-aware insertion of unusual objects, such as road obstructions. Inserted objects cast shadows consistent with the scene lighting, correctly occlude other objects, and adapt spatially across the entire camera configuration.
Simulation migration (Generalization to nuScenes)

Because LightSim's neural deferred rendering network is trained on multiple driving videos, it generalizes to new scenes. The following video demonstrates LightSim generalizing to driving scenes from nuScenes: LightSim builds a lighting-aware digital twin of each scene and then applies a neural deferred rendering model pre-trained on PandaSet. The transfer performs well, and LightSim can relight these scenes fairly robustly.
Real and controllable camera simulation

Taken together, these capabilities let LightSim perform controllable, diverse, and realistic camera simulation. The following video demonstrates this: a newly inserted roadblock causes a white car to make an emergency lane change into the SDV's lane, creating a brand-new scenario, which LightSim can then render under a variety of lighting conditions. In another example, shown in the video below, a new set of vehicles is added after a road obstruction is inserted; using the simulated lighting built with LightSim, the newly added vehicles blend seamlessly into the scene.
Summary and Outlook

LightSim is a lighting-aware camera simulation platform for large-scale dynamic driving scenes. It builds lighting-aware digital twins from real-world data and modifies them to create new scenes with different object layouts and SDV viewpoints. LightSim can simulate new lighting conditions for a scene, enabling diverse, realistic, and controllable camera simulation that produces temporally and spatially consistent video. Notably, LightSim can also be combined with inverse rendering, weather simulation, and other techniques to further improve simulation performance.
