AI In 3D Modelling: Meet Poly
Introduction
Deep learning models like ChatGPT and DALL-E aren’t confined to language modeling. They’re also making strides in domains like computer vision and graphics. For example, BigGAN-deep, an AI model from DeepMind, learned a few years ago to generate more realistic animals and objects than ever before. And just last month, researchers at the University of California, San Diego developed a method for converting black-and-white sketches into photorealistic color images, which could be used to help artists customize their artwork or to reimagine Michelangelo’s works of art as they might’ve appeared with color.
Poly uses AI to generate 3D models and art assets
Poly is essentially a stock asset library along the lines of Adobe Stock and Shutterstock but populated exclusively by AI generations. While platforms like Getty Images have banned AI-generated content for fear of potential legal blowback, Poly is barreling full steam ahead.
Poly’s first tool in its planned web-based suite generates 3D textures with physically-based rendering maps. In modeling, “physically-based rendering” refers to a technique that aims to render images in a way that mimics the flow of light in the real world.
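To make the idea of PBR maps concrete, here is a minimal sketch (not Poly's renderer, and simplified relative to production PBR) of how per-texel map samples such as albedo, a normal, and a roughness value feed a shading equation. It combines a Lambertian diffuse term with a simple half-vector specular highlight whose sharpness is driven by roughness:

```python
# Minimal illustration of physically-based shading inputs. The albedo,
# normal, and roughness arguments stand in for samples from the PBR maps
# a texture tool generates. This is a toy model, not a full BRDF.
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    length = math.sqrt(dot(v, v))
    return tuple(x / length for x in v)

def shade(albedo, normal, roughness, light_dir, view_dir):
    """Return an RGB color for one texel given its PBR map samples."""
    n = normalize(normal)
    l = normalize(light_dir)
    v = normalize(view_dir)
    # Diffuse: light scattered equally in all directions (Lambert's law).
    diffuse = max(0.0, dot(n, l))
    # Specular: a half-vector highlight; rougher surfaces get a broader,
    # dimmer highlight because the exponent shrinks as roughness grows.
    h = normalize(tuple(a + b for a, b in zip(l, v)))
    shininess = max(1.0, (1.0 - roughness) * 128.0)
    specular = max(0.0, dot(n, h)) ** shininess
    return tuple(min(1.0, c * diffuse + 0.04 * specular) for c in albedo)

# A texel facing the light head-on receives full diffuse intensity.
color = shade(albedo=(0.5, 0.4, 0.3), normal=(0, 0, 1),
              roughness=0.8, light_dir=(0, 0, 1), view_dir=(0, 0, 1))
```

Real engines use more elaborate models (e.g., Cook-Torrance with GGX), but the principle is the same: the generated maps supply the per-texel inputs that determine how light interacts with the surface.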
With Poly, designers can describe a texture (e.g., “Tree bark with moss”) and optionally provide a reference image to get generated textures for crafting 3D models.
What was the inspiration for this platform?
You may have noticed that there are no human models on the platform. Why? Because the word “model” usually conjures people posing for photographs, but Poly means it in the 3D sense: what if you could use AI to generate the 3D models and art assets for your games?
The inspiration came from team members working on design-related projects at Poly. They wanted an easier way to create high-quality images without burning time on work that pulled them away from their core tasks, such as building environments or creating characters. So they set about automating steps of the process, cutting down the workflow so everyone could focus more efficiently on what needed to be done next.
The company uses AI to generate 3D models and art assets for the design community. “We started Poly in early 2022 from a shared passion to ‘increase the creative capacity of the world,’” Abhay Agarwal, CEO of Poly, said.
Agarwal joined Y Combinator’s S22 batch after working at Facebook/Instagram and Google.
Agarwal asserts that Poly’s generative AI is superior to most in terms of the quality of assets it produces. The jury’s out on that. But Poly aims to further differentiate itself by expanding its generative AI service across asset types such as illustrations, sprites, sound effects and more. It plans to make money through enterprise partnerships, premium integrations for design tools and by charging a subscription fee for royalty-free access to assets, including commercial and resale rights.
Agarwal claims that “thousands” of developers are currently using Poly’s free service, which generates an unlimited number of assets for noncommercial use, while “hundreds” are paying for Poly’s pro plan. To date, the platform has generated more than two million textures. That momentum drew in investors, including Felicis, Bloomberg Beta, NextView Ventures, Y Combinator, Figma Ventures and the AI Grant, which together contributed $3.9 million in venture capital toward Poly at Y Combinator’s demo day in September. “Poly’s customers range from professionals at Fortune 500 companies to individual freelancers in game design, AR/VR, interior design, architecture and 3D rendering for ecommerce and marketing,” Agarwal said. “Poly has a multi-year runway and can focus on building the best possible technology since a higher-quality product is required to stand out and win in this emerging and highly active space.”
You can also build your own 3D models using Poly’s API, which is available for free.
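The article doesn't document the API's actual shape, so as a purely hypothetical sketch, an integration might assemble a request body from a text prompt and an optional reference image, mirroring the workflow described earlier. The function name, field names, and overall schema below are illustrative assumptions, not Poly's real interface:

```python
# Hypothetical sketch only: the parameter names and JSON schema here are
# assumptions for illustration, not Poly's documented API.
import json

def build_texture_request(prompt, reference_image_url=None):
    """Assemble a JSON body for a texture-generation call: a text
    description plus an optional reference image."""
    payload = {"prompt": prompt}
    if reference_image_url is not None:
        payload["reference_image"] = reference_image_url
    return json.dumps(payload)

body = build_texture_request("Tree bark with moss")
```

A real integration would POST a body like this with an API key; consult Poly's own documentation for the actual endpoint and schema.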
What’s next for Poly? Where do you see this going in the next 5-10 years?
I think Poly is just starting to scratch the surface of what AI technology can do. We’re excited about where it will take us, and about how it will help artists and designers create more amazing content, faster than ever before.
We’re excited to continue exploring new applications and creative possibilities for deep learning models like ChatGPT and DALL-E in the coming years. The future is bright for Poly: even though it has only been around for a few months, the project already has impressive traction. The platform’s AI-generated content can be used in both game development and design applications, and it’s already in the hands of top developers. Just as AI has enabled breakthroughs in other domains, we hope these tools will help artists unlock their full potential.