Tag Archive: 3D graphics

Matterport grabs $5M more to accelerate deep learning development for their 3D capture tech


Matterport is picking up new funding as it looks to speed the development of deep learning tech in its capture technology, which brings immersive views of spaces into 360-degree 3D.

The company, which largely specializes in scanning spaces for commercial and real estate purposes, announced today that it has picked up $5 million in funding from Ericsson Ventures. This strategic raise brings the company’s total announced funding to $66 million, according to Crunchbase.

As 3D rendering grows more important thanks to spatial computing platforms like VR and AR, Matterport has one of the biggest libraries of 3D environments, thanks to its loyal and prolific users, who have uploaded over half a million scans of public and private spaces that are already viewable in VR.

A big focus of this new investment is mining those 3D scans for insights through deep learning-based AI development, which will not only help the company understand what’s in a space but also improve the quality of the 3D images themselves.

“Ericsson Ventures saw the tremendous opportunity Matterport has to extend our technology lead by using our massive library of 3D models as a deep learning training dataset to create AI that will be the basis for our next generation products,” Matterport CEO Bill Brown said.

In May the company launched its Pro2 camera, which addressed a big request from existing customers who were excited about the potential of 360-degree 3D room scans but still needed 2D images to put into print materials. The camera retails for $3,995 and is available now.

Huddersfield Designers Bring New Ginetta Racing Car to Life

The in-house design team at the 3M Buckley Innovation Centre (3M BIC) has used 3D technology and augmented reality to help Ginetta fine-tune its latest prototype.


Having already provided a similar service for the launch of its first prototype in 2015, Ginetta approached the 3M BIC design team to animate its £1.3 million LMP1 machine.

This enabled the car manufacturer’s own in-house design team to visualise the car’s development, as well as showcase it to potential buyers at a launch event at Silverstone Circuit.

Ewan Baldry, technical director at Ginetta, said: “3D technology is an important part of our design process and marketing. To see something on a flat CAD screen has a few limitations, so being able to see something you can move around is very helpful.

“The main thing with a project such as this, from a marketing point of view, is to show credibility in the early stages to demonstrate to people the direction you are heading in, therefore having 3D visuals was key.”

The animation for the LMP1 car was created using STL data (used for Computational Fluid Dynamics (CFD) testing and wind tunnel analysis) submitted to the 3M BIC design team by Ginetta.

Some adjustments had to be made to the original model in Autodesk 3ds Max so that it could be re-textured with the corresponding racing livery.

The team then rigged the car for animation and set the lighting for rendering purposes.

Paul Tallon, lead consultant designer at the 3M BIC, said: “3D rendering is a process in which an algorithm calculates the movements of a virtual photon on interaction with a surface of varying qualities.

“With the 3M BIC’s High Performance Computer and the latest V-Ray rendering software, we were able to get the detail to look as real-life as possible in our render. This was particularly important for Ginetta, who was looking for a realistic render to show their clients.”
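To make Tallon’s description concrete, here is a minimal sketch in Python of the idea he is outlining: a renderer estimates a pixel’s brightness by averaging many randomly simulated light paths. The one-surface scene, constants, and probabilities below are invented purely for illustration; this is not how V-Ray or the 3M BIC pipeline is actually implemented.

```python
# Toy Monte Carlo light transport, illustrating the "virtual photon" idea:
# a pixel's brightness is estimated by averaging many random light paths.
# The scene (one diffuse floor lit by a sky) and all constants are invented.
import random

SKY_RADIANCE = 1.0   # light arriving from the sky (assumed value)
FLOOR_ALBEDO = 0.6   # fraction of light the floor reflects (assumed value)
MAX_BOUNCES = 4

def trace(throughput=1.0, bounces=0):
    """Follow one light path and return the radiance it carries back."""
    if bounces == MAX_BOUNCES:
        return 0.0
    # Randomly decide whether this ray escapes to the sky or hits the floor.
    if random.random() < 0.5:
        return throughput * SKY_RADIANCE
    # Hitting the floor absorbs some energy; the surviving light bounces on.
    return trace(throughput * FLOOR_ALBEDO, bounces + 1)

samples = [trace() for _ in range(100_000)]
print(sum(samples) / len(samples))   # estimate converges as samples grow
```

The more paths a renderer averages, the smoother the result, which is why photorealistic output demands the kind of hardware a high-performance computer provides.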

As well as the on-screen render, the design team produced the car in augmented reality (AR) for use with the Microsoft HoloLens, enabling people to walk around a scaled-down holographic version of the car.

A 3D model was also printed in nylon by selective laser sintering (SLS) using the industrial additive manufacturing printer on the 3M BIC’s Innovation Avenue. The render, the AR experience, and the printed model were all showcased at the launch event at Silverstone.

Ewan added: “Having worked with the 3M BIC team previously we knew they’d do the project justice. Again, we were really pleased with the service. We didn’t give them very much time, but they still produced something which was professional and to a high standard.”

Following the launch event, both new and existing customers have expressed significant interest in the LMP1.

The 3M BIC design team is currently working on the next stage of the process, which involves creating a serious gaming experience that allows users, particularly racing drivers, to virtually test the LMP1 car on a track with varying scenery and weather conditions to enhance the driver experience.

Leeds-based Ginetta, the leading British race car manufacturer, was founded in 1958 and acquired by racing driver and businessman Lawrence Tomlinson in 2005.

Since then it has taken the racing industry by storm, selling cars across the world and training some of the brightest stars in motorsport.

Source: bqlive.co.uk

Nvidia Uses AI to Make 3D Graphics That Are Now Better Than an Artist’s

Nvidia spans both gaming graphics and artificial intelligence, and it is showing that with its announcements this week at the Siggraph computer graphics event in Los Angeles.

Those announcements range from providing external graphics processing for content creators to testing AI robotics technology inside a virtual environment known as the Holodeck, named after the virtual reality simulator in the Star Trek series. In fact, Nvidia’s researchers have created a way for AI to create realistic human facial animations in a fraction of the time it takes human artists to do the same thing.

“We are bringing artificial intelligence to computer graphics,” said Greg Estes, vice president of developer marketing at Nvidia, in an interview with GamesBeat. “It’s bringing things full circle. If you look at our history in graphics, we took that into high-performance computing and took that into a dominant position in deep learning and AI. Now we are closing that loop and bringing AI into graphics.”

“Our strategy is to lead with research and break new ground,” he said. “Then we take that lead in research and take it into software development kits for developers.”

Above: Nvidia’s OptiX 5.0 can “de-noise” images by removing graininess.

Image Credit: Nvidia

Nvidia has 10 research papers this year at the Siggraph event, Estes said. Some of that work will be relevant to Nvidia’s developers, who now number about 550,000. About half of those developers are in games, while the rest are in high-performance computing, robotics, and AI.

Among the announcements, one is particularly cool. Estes said that Nvidia will show off its Isaac robots in a new environment. These robots, which are being used to vet AI algorithms, will be brought inside the virtual environment that Nvidia calls Project Holodeck. Project Holodeck is a virtual space for collaboration, where full simulations of things like cars and robots are possible. By putting the Isaac robots inside that world, they can learn how to behave without causing havoc in the real world.

Above: The Project Holodeck demo

Image Credit: Dean Takahashi

“A robot will be able to learn things in VR,” Estes said. “We can train it in a simulated environment.”

Nvidia is providing external Titan X or Quadro graphics cards through an external graphics processing unit (eGPU) chassis. That will boost workflows for people who use their laptop computers for video editing, interactive rendering, VR content creation, AI development, and more, Estes said.

To ensure professionals can enjoy great performance with applications such as Autodesk Maya and Adobe Premiere Pro, Nvidia is releasing a new performance driver for Titan X hardware to make it faster. The Quadro eGPU solutions will be available in September through partners such as Bizon, Sonnet, and One Stop Systems/Magma.

Nvidia also said it was launching its OptiX 5.0 SDK on the Nvidia DGX AI workstation. That will give designers, artists, and other content-creation professionals the rendering capability of 150 standard central processing unit (CPU) servers.

The tech could be used by millions of people, Estes said. And that kind of system would cost $75,000 over three years, compared to $4 million for a CPU-based system, the company said.

OptiX 5.0’s new ray tracing capabilities will speed up the process required to visualize designs or characters, thereby increasing a creative professional’s ability to interact with their content. It features new AI “de-noising” capability to accelerate the removal of graininess from images, and brings GPU-accelerated motion blur for realistic animation effects. It will be available for free in November.

By running Nvidia OptiX 5.0 on a DGX Station, content creators can significantly accelerate training, inference, and rendering (meaning both AI and graphics tasks).
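To make the de-noising idea concrete, here is a hedged sketch in Python (using PyTorch): a small convolutional network learns to map grainy, low-sample renders to clean ones. The architecture and the synthetic training data are stand-ins for illustration, not Nvidia’s actual OptiX model.

```python
# A tiny convolutional de-noiser: it learns to map grainy, low-sample
# renders to clean ones. Architecture and data are invented stand-ins.
import torch
import torch.nn as nn

denoiser = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 3, 3, padding=1),
)
opt = torch.optim.Adam(denoiser.parameters(), lr=1e-3)

for _ in range(100):
    # Stand-in training pair; a real dataset would pair a renderer's
    # few-sample (noisy) and many-sample (clean) outputs of the same frame.
    clean = torch.rand(8, 3, 64, 64)
    noisy = (clean + 0.1 * torch.randn_like(clean)).clamp(0, 1)
    loss = nn.functional.mse_loss(denoiser(noisy), clean)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

At render time, a few samples per pixel plus one pass through such a network stand in for the thousands of samples otherwise needed to average the grain away.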

“AI is transforming industries everywhere,” said Steve May, vice president and chief technology officer of Pixar, in a statement. “We’re excited to see how Nvidia’s new AI technologies will improve the filmmaking process.”

On the research side, Nvidia is showing how it can animate realistic human faces and simulate how light interacts with surfaces. It will tap AI technology to improve the realism of the facial animations. Right now, it takes human artists hundreds of hours to create digital faces that more closely match the faces of human actors.

Nvidia Research partnered with Remedy Entertainment, maker of games such as Quantum Break, Max Payne and Alan Wake, to help game makers produce more realistic faces with less effort and at lower cost.

Above: Nvidia is using AI to create human facial animations.

Image Credit: Nvidia

The parties combined Remedy’s animation data and Nvidia’s deep learning technology to train a neural network to produce facial animations directly from actor videos. The research was done by Samuli Laine, Tero Karras, Timo Aila, and Jaakko Lehtinen. Nvidia’s solution requires only five minutes of training data to generate all the facial animation needed for an entire game from a simple video stream.
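Conceptually, the setup resembles the following sketch: a network ingests actor video frames and outputs facial-rig parameters, supervised by existing animation data. Everything here (the blendshape count, layer sizes, and shapes) is a hypothetical stand-in, not the researchers’ published model.

```python
# Hypothetical video-to-rig network: frames of actor video go in, facial
# animation parameters (e.g. blendshape weights) come out.
import torch
import torch.nn as nn

NUM_BLENDSHAPES = 50  # assumed size of the face rig's control vector

class FaceAnimNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(      # encode each video frame
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(32, NUM_BLENDSHAPES)

    def forward(self, frames):              # frames: (batch, 3, H, W)
        return self.head(self.features(frames))

net = FaceAnimNet()
controls = net(torch.rand(1, 3, 128, 128))  # one frame in, rig controls out
print(controls.shape)                       # torch.Size([1, 50])
```

Training on pairs of (video frame, rig parameters an artist or mocap pipeline produced for that frame) is what lets a short clip of new footage drive a whole game’s worth of animation.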

Antti Herva, lead character technical artist at Remedy, said that over time, the new methods will let the studio build larger, richer game worlds with more characters than are now possible.

Already, the studio is creating high-quality facial animation in much less time than in the past.


“Based on the Nvidia research work we’ve seen in AI-driven facial animation, we’re convinced AI will revolutionize content creation,” said Herva, in a statement. “Complex facial animation for digital doubles like that in Quantum Break can take several man-years to create. After working with Nvidia to build video- and audio-driven deep neural networks for facial animation, we can reduce that time by 80 percent in large scale projects and free our artists to focus on other tasks.”

In another research project, Nvidia trained a system to generate realistic facial animation using only audio. With this tool, game studios will be able to add more supporting game characters, create live animated avatars, and more easily produce games in multiple languages.

Above: AI can smooth out the “jaggies,” or rough edges in 3D graphics.

Image Credit: Nvidia

AI also holds promise for rendering 3D graphics, the process that turns digital worlds into the lifelike images you see on the screen. Filmmakers and designers use a technique called “ray tracing” to simulate light reflecting from surfaces in the virtual scene. Nvidia is using AI to improve both ray tracing and rasterization, a less costly rendering technique used in computer games.

In a related project, Nvidia researchers used AI to tackle a problem in computer game rendering known as aliasing. Like de-noising, anti-aliasing removes artifacts from partially computed images; here the artifacts look like stair-stepped “jaggies.” Nvidia researchers Marco Salvi and Anjul Patney trained a neural network to recognize jaggy artifacts and replace those pixels with smooth, anti-aliased ones. The AI-based solution produces images that are sharper (less blurry) than existing algorithms.
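The artifact itself is easy to reproduce numerically. In the sketch below, point-sampling a shallow edge with one sample per pixel snaps each pixel to 0 or 1, while averaging many sub-pixel samples gives the smooth fractional coverage an anti-aliasing network learns to predict. This toy is purely illustrative, not Salvi and Patney’s code.

```python
# Reproducing "jaggies" numerically along the shallow edge y = 0.3 * x.
import numpy as np

def coverage(x, y, samples):
    """Fraction of sub-pixel samples falling below the edge y = 0.3 * x."""
    pts = np.random.rand(samples, 2) + [x, y]
    return float(np.mean(pts[:, 1] < 0.3 * pts[:, 0]))

np.random.seed(0)
row = 2
aliased = [coverage(x, row, 1) for x in range(12)]       # 1 sample per pixel
smooth = [round(coverage(x, row, 4096), 2) for x in range(12)]
print(aliased)  # hard 0.0/1.0 steps: the stair-stepped artifact
print(smooth)   # fractional values where the edge crosses the pixel row
```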

Nvidia is also developing more efficient methods to trace virtual light rays. Computers sample the paths of many light rays to generate a photorealistic image. The problem is that not all of those light paths contribute to the final image.

Researchers Ken Daum and Alex Keller trained a neural network to guide the choice of light paths. They accomplished this by connecting the math of tracing light rays to the AI concept of reinforcement learning. Their solution taught the neural network to distinguish the paths most likely to connect lights with virtual cameras, from the paths that don’t contribute to the image.

Above: Nvidia uses AI to figure out light sources in 3D graphics.

Image Credit: Nvidia
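To illustrate the reinforcement-learning connection described above, here is a toy sketch: treat “which direction should a ray bounce from here?” as an action-value problem, so sampling gradually concentrates on directions that have historically reached a light. The tiny state and action spaces are invented for illustration and are not the researchers’ actual method.

```python
# Toy reinforcement-learning guide for path sampling: each surface region
# learns a value per outgoing-direction bin, and sampling favors directions
# that have historically reached a light. The 3x4 setup is invented.
import random

N_REGIONS, N_DIRS = 3, 4
Q = [[0.0] * N_DIRS for _ in range(N_REGIONS)]  # learned direction values
ALPHA = 0.1       # learning rate
EPSILON = 0.1     # exploration rate

# Hidden ground truth for the toy: chance that a ray leaving (region, dir)
# reaches a light. The tracer only observes this through sampling.
TRUE_LIGHT = [[0.0, 0.9, 0.1, 0.0],
              [0.8, 0.0, 0.0, 0.2],
              [0.1, 0.1, 0.7, 0.1]]

def sample_direction(region):
    """Epsilon-greedy: usually pick the direction with the highest value."""
    if random.random() < EPSILON:
        return random.randrange(N_DIRS)
    return max(range(N_DIRS), key=lambda d: Q[region][d])

for _ in range(20000):
    r = random.randrange(N_REGIONS)
    d = sample_direction(r)
    found_light = random.random() < TRUE_LIGHT[r][d]    # the "reward"
    Q[r][d] += ALPHA * (found_light - Q[r][d])          # running estimate

print([[round(v, 2) for v in row] for row in Q])  # peaks track TRUE_LIGHT
```

Because the learned values steer rays toward productive directions, fewer wasted paths are traced, which is exactly the efficiency gain the research targets.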

Lastly, Nvidia said it is taking immersive VR to more people by releasing the VRWorks 360 Video SDK, which enables production houses to livestream high-quality, 360-degree stereo video to their audiences.

Normally, it takes a lot of computation time to stitch together images for 360-degree videos. By doing live 360-degree stereo stitching, Nvidia is making life a lot easier for the live-production and live-event industries, said Zvi Greenstein, vice president at Nvidia.

The VRWorks SDK enables production studios, camera makers, and app developers to integrate 360-degree stereo stitching into their existing workflows for live and post production. The Z Cam V1 Pro (made by VR camera firm Z Cam) is the first professional 360-degree VR camera that will fully integrate the VRWorks SDK.

“We have clients across a wide range of industries, from travel through sports, who want high quality, 360 degree video,” said Chris Grainger, CEO of Grainger VR, in a statement. “This allows filmmakers to push the boundaries of live storytelling.”

Source: Venturebeat.com