You may have read that we’re developing generative 3D tools using NVIDIA Edify. Or that, once they’re finished, they’ll be able to generate detailed 3D models in seconds (perfect for concepts and designs). But did you know the first public demo is happening at GTC 2024?
Join us in the Shutterstock booth (G109) in the GenAI Pavilion for a full preview!
You can generate any model you’d like in under a minute and then get a walkthrough of where the tech stands. Each day, HP will also be 3D printing samples of models generated by attendees, so get there early if you want a custom keepsake.
This is all building towards the official launch of our generative model API, which will help companies bring ethical 3D model generation to all your favorite tools and platforms. Whether that’s industry-standard tools like Maya and Blender, game engines like Unity and Unreal, massive worlds like Roblox, or start-ups and tech companies new to the world of 3D generation. If that sounds like you, you’re in luck, as we’ve just opened up Early Access at ai.nvidia.com. Sign up now to try it out.
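If you’re wondering what plugging this into your own tool or pipeline could look like, here’s a minimal sketch of a text-to-3D request over HTTP. Everything in it, including the endpoint URL, request fields, and output format, is an illustrative assumption rather than the published interface; the Early Access documentation at ai.nvidia.com is the place to find the real details.

# Minimal sketch of a hypothetical text-to-3D API call (Python).
# The endpoint, request fields, and response shape are illustrative
# assumptions -- see the Early Access docs for the actual interface.
import requests

API_KEY = "YOUR_EARLY_ACCESS_KEY"                     # issued after signing up
ENDPOINT = "https://example.invalid/v1/3d/generate"   # placeholder, not the real URL

response = requests.post(
    ENDPOINT,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "prompt": "weathered wooden treasure chest, game-ready",
        "format": "glb",  # assumed output format for import into Maya, Blender, Unity, etc.
    },
    timeout=120,
)
response.raise_for_status()

# Save the generated model for import into your DCC tool or game engine.
with open("treasure_chest.glb", "wb") as f:
    f.write(response.content)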
And although we are still in the early days, we’re making great progress!
Generations that used to take 40 minutes are now happening in less than a minute. And the quality keeps getting better. We’ve significantly enhanced and expanded our training dataset, which we’re using to improve the capabilities of our generative model. This will make it easier for people to generate more complexity, more combinations, and more uniqueness in their outputs. It’s also going to help us provide textures that are much sharper and more detailed, so models come out ready for professional use.
Everything you’re hearing about is also being trained on clean data. That means that the 700 million images/videos and 1 million 3D models powering the AI all come from contributors who have been compensated for the role their IP has played in training AI models. Further, artists who don’t want their content used in this way are able to opt out, though to date, fewer than 1 percent have chosen to do so.
It’s great to make something that helps artists move faster, or even opens up a new revenue stream, but only if it’s done the right way. This will do all of the above and give artists a better way to respond to the increasing demands of 3D. We can’t wait to see it in the wild.
Other GTC plans
Besides the demo, we’ll also be speaking at the show.
This includes an industry panel with NVIDIA and HP at the Generative AI Theater at 1:25pm PT, where we’ll be discussing 3D generative workflows, the most common use cases, and how 3D printing factors in.
And if you want to hear how it’s being developed, stop by the “Revolutionizing 3D Design” panel, which we’re hosting with NVIDIA’s Ming-Yu Liu, Vice President of Research; Joel Pennington, Product Manager, Generative AI and 3D Content Creation; and Jingyi Jin, Principal Engineer, Generative AI. It will be held at SJCC 220B (L2) on Wednesday, March 20, from 11:00 to 11:50 AM PT, and virtually via the NVIDIA livestream.
Hope to see you there.
Sign up for early access
Want the power of generative models in your products or company? Talk to us today about early access to our API.