NVIDIA reveals its next-gen chipset for autonomous vehicles

NVIDIA’s GPU Technology Conference isn’t only about gaming graphics cards. The company had other news up its sleeve, including in the autonomous vehicle space. During the GTC keynote, NVIDIA CEO Jensen Huang announced a system-on-chip (SoC) called Drive Thor. NVIDIA says it designed the chip using the latest advancements in graphics and processing to provide 2,000 teraflops of performance, all while keeping costs down.

NVIDIA says that Drive Thor can unify the various functions of a vehicle — including infotainment, the digital dashboard, sensors, parking and autonomous operation — for greater efficiency. Vehicles with the chipset will be able to run Linux, QNX and Android simultaneously. Given the vast processing power that autonomous vehicle operations require, automakers can even use two Drive Thor chipsets in tandem, employing NVIDIA’s NVLink-C2C chip interconnect technology to have them run a single operating system.

NVIDIA’s Next-Gen Chipsets

In addition, NVIDIA claims that the SoC marks a significant leap forward in “deep neural network accuracy.” The chipset has a transformer engine, a new addition to the NVIDIA GPU Tensor Core. “Transformer networks process video data as a single perception frame, enabling the compute platform to process more data over time,” NVIDIA says. The company noted that the SoC can boost the inference performance of transformer deep neural networks by up to nine times, “which is paramount for supporting the massive and complex AI workloads associated with self-driving.”
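
To make the quoted claim more concrete, the sketch below shows the core operation a transformer applies: self-attention, which lets every element of a sequence (here, one feature vector per camera frame) weigh information from every other element. It is a generic illustration of the mechanism with made-up shapes, random weights and an assumed helper name (self_attention); it is not NVIDIA’s perception software or its transformer engine.

```python
# Generic self-attention over per-frame feature vectors (illustrative only;
# shapes, weights and data are made up -- this is not NVIDIA's software).
import numpy as np

def self_attention(frames, d_k=64, seed=0):
    """frames: (T, D) array with one D-dimensional feature vector per camera frame."""
    rng = np.random.default_rng(seed)
    D = frames.shape[1]
    Wq, Wk, Wv = (rng.standard_normal((D, d_k)) / np.sqrt(D) for _ in range(3))
    q, k, v = frames @ Wq, frames @ Wk, frames @ Wv    # queries, keys, values
    scores = q @ k.T / np.sqrt(d_k)                    # how strongly each frame attends to the others
    scores -= scores.max(axis=1, keepdims=True)        # numerical stability for the softmax
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)      # softmax over the time axis
    return weights @ v                                 # features mixed across frames

frames = np.random.default_rng(1).standard_normal((8, 128))  # 8 frames, 128-dim features each
print(self_attention(frames).shape)                          # -> (8, 64)
```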

 

The SoC follows NVIDIA’s Drive Orin chipset and replaces the previously announced Drive Atlan. It will be used in vehicles that go into production starting in 2025. The first customer NVIDIA has lined up is Geely-owned EV brand Zeekr, which already uses Orin chipsets for Level 3 automation. Meanwhile, NVIDIA has signed up two more Drive Orin partners: Xpeng and QCraft.


Nvidia’s Instant NeRF can turn 2D photos into 3D scenes in the blink of an AI

Nvidia’s latest AI demo is pretty impressive: a tool that quickly turns a “few dozen” 2D snapshots into a 3D-rendered scene. Nvidia’s demo video shows the method in action, with a model dressed like Andy Warhol holding an old-fashioned Polaroid camera. (Don’t overthink the Warhol connection: it’s just a bit of PR scene dressing.)

Instant NeRF

Instant NeRF is a neural rendering model that learns a high-resolution 3D scene in seconds — and can render images of that scene in a few milliseconds. The name refers to “neural radiance fields,” a technique developed by researchers from UC Berkeley, Google Research, and UC San Diego in 2020. In short, the method maps the color and light intensity of different 2D shots, then generates data to connect these images from different vantage points and render a finished 3D scene. In addition to the images themselves, the system requires data about the position of the camera for each shot.
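
To make that pipeline a little more concrete, the sketch below shows the volume-rendering step that NeRF-style methods use: given color and density values sampled along a camera ray (which the trained network would normally predict), it composites them into a single pixel color. This is a minimal NumPy illustration under those assumptions, with a made-up function name and toy inputs; it is not NVIDIA’s implementation.

```python
# Minimal sketch of NeRF-style volume rendering (not NVIDIA's code):
# composite per-sample colors and densities along one camera ray into a pixel.
import numpy as np

def render_ray(colors, densities, deltas):
    """colors: (N, 3) RGB per sample; densities: (N,) sigma; deltas: (N,) spacing between samples."""
    alpha = 1.0 - np.exp(-densities * deltas)                    # opacity contributed by each segment
    transmittance = np.cumprod(1.0 - alpha)                      # light surviving past each sample
    transmittance = np.concatenate(([1.0], transmittance[:-1]))  # nothing blocks the first sample
    weights = alpha * transmittance                              # each sample's contribution
    return (weights[:, None] * colors).sum(axis=0)               # final RGB for this ray/pixel

# Toy usage: 64 made-up samples along a single ray
rng = np.random.default_rng(0)
pixel = render_ray(rng.random((64, 3)), 5.0 * rng.random(64), np.full(64, 0.02))
print(pixel)  # an RGB value in [0, 1]
```

Instant NeRF’s speed-up comes mainly from how quickly the underlying network can be trained and queried at these sample points, not from changing this compositing math.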

Researchers have been improving this sort of 2D-to-3D model for a couple of years now, adding more detail to finished renders and increasing rendering speed. Nvidia says its new Instant NeRF model is one of the fastest yet developed, cutting rendering time from a few minutes down to a process that finishes “almost instantly.”

Quicker and Easier to Implement

As the technique becomes quicker and easier to implement, it could be used for all sorts of tasks, Nvidia says in a blog post describing the work: “Instant NeRF could be used to create avatars or scenes for virtual worlds, to capture video conference participants and their environments in 3D, or to reconstruct scenes for 3D digital maps. The technology could be used to train robots and self-driving cars to understand the size and shape of real-world objects by capturing 2D images or video footage of them. It could also be used in architecture and entertainment to rapidly generate digital representations of real environments that creators can modify and build on.”

In a paper describing the work, Nvidia’s researchers said they were able to export scenes at a resolution of 1920 × 1080 “in tens of milliseconds.” The researchers also shared source code for the project, allowing others to implement their methods. It seems NeRF renders are progressing quickly and could start having a real-world impact in the years to come.

Also Read: Why was AMD absent from the budget CPU market?

Nvidia hackers target Samsung, release 190GB of sensitive data

Rising instances of cyberattacks are affecting the operations of major tech companies throughout the world, and concerns keep growing despite tight security measures. In recent news, Samsung appears to have been the victim of a suspected cyberattack by the same group responsible for the Nvidia hack.

Sensitive Data Leaked by the Nvidia Hackers

According to the latest reports, some of Samsung’s confidential data has leaked in a suspected cyberattack. A few days ago, the South American hacking group Lapsus$ uploaded a trove of data it claims came from the smartphone manufacturer; Bleeping Computer was among the first publications to report on the incident. It is unclear what the timeline of the Samsung breach is or what sort of contact the hackers have had with the company, and there have been no public demands like Lapsus$’s call for Nvidia to open-source its drivers and end the LHR crypto-mining limiter.

Bootloader Source Code

Among other information, the collective says it obtained the bootloader source code for all of Samsung’s recent devices, in addition to code related to highly sensitive features like biometric authentication and on-device encryption.

The leak also allegedly includes confidential data from Qualcomm. The entire database contains approximately 190GB of data and is actively being shared in a torrent. If the contents of the leak are accurate, they could cause significant damage to Samsung. According to The Korea Herald, the company is assessing the situation.

NVIDIA Data Breach

If Lapsus$ sounds familiar, it’s the same group that claimed responsibility for the recent NVIDIA data breach. In that incident, Lapsus$ claims it obtained approximately 1TB of confidential data from the GPU designer, including schematics and driver source code. The collective has demanded that NVIDIA open-source its drivers and remove the cryptocurrency mining limiter from its RTX 30-series GPUs. It’s unclear what demands, if any, Lapsus$ has made of Samsung. The group has previously said its actions haven’t been politically motivated.

Also Read: Meta Introduces ‘Personal Boundary’ for User Safety in the Metaverse