AI Turns Sketches into Art – Meta's (Facebook's) New Artificial Intelligence (AI) Research Tool Turns Ideas (Sketches) into Art
- July 17, 2022
- Category: Research News
On Thursday, 14 July 2022, Meta (formerly Facebook) announced that people will be able to create digital images using text and simple sketches, thanks to its new artificial intelligence (AI) research concept.
Meta added that the outcomes of its research in this area may open up new avenues for AI-powered artistic expression that put creators and their ideas at the centre of the process. The non-fungible token (NFT) industry stands to benefit as well, since anyone can turn a sketch of their imagination into finished art with little to no artistic skill.
Imagine being able to instantly create narrative graphics or a digital painting without ever picking up a paintbrush. To that end, Meta introduced Make-A-Scene, an exploratory artificial intelligence (AI) research concept that lets users bring their ideas to life.
Make-A-Scene lets you create visuals from text prompts and freeform sketches. Earlier image-generating AI systems typically took only text descriptions as input, and the outcomes could be unpredictable. For instance, the text prompt “a painting of a zebra riding a bike” may not accurately match your vision: the bicycle may be facing the wrong way, or the zebra may be too big or too small.
This is no longer the case, thanks to Meta's Make-A-Scene. It demonstrates how people can use a range of inputs, including text and simple illustrations, to communicate their visions more precisely.
Make-A-Scene captures the scene layout to enable detailed sketches as input. If the creator prefers, it can also generate a layout from text-only prompts. The model focuses on learning the key aspects of the imagery, such as objects or animals, that are most likely to matter to the creator.
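To make the text-plus-sketch idea concrete, the minimal Python sketch below shows how such a pipeline could be wired together in principle: a text prompt and a rough scene layout are each encoded, and both then condition an image generator. All names here (Scene, encode_text, encode_layout, generate_image_tokens) are hypothetical stand-ins for illustration only, not Meta's actual Make-A-Scene code or API.

```python
# Illustrative sketch only: a hypothetical text-plus-layout conditioning pipeline.
# These functions are placeholders showing where each input enters the process.

from dataclasses import dataclass
from typing import List


@dataclass
class Scene:
    prompt: str                # e.g. "a painting of a zebra riding a bike"
    sketch_regions: List[str]  # coarse labels the user sketched, e.g. ["zebra", "bike"]


def encode_text(prompt: str) -> List[int]:
    """Hypothetical tokenizer: map each word of the prompt to an integer id."""
    return [abs(hash(word)) % 10_000 for word in prompt.lower().split()]


def encode_layout(regions: List[str]) -> List[int]:
    """Hypothetical layout encoder: turn sketched region labels into layout tokens."""
    return [abs(hash(region)) % 1_000 for region in regions]


def generate_image_tokens(text_tokens: List[int], layout_tokens: List[int]) -> List[int]:
    """Placeholder for a generator conditioned on both text and layout.

    A real model would predict image tokens step by step; here the two
    conditioning streams are simply combined to show where they feed in.
    """
    conditioning = text_tokens + layout_tokens
    return [(tok * 31) % 8_192 for tok in conditioning]  # dummy "image tokens"


if __name__ == "__main__":
    scene = Scene(
        prompt="a painting of a zebra riding a bike",
        sketch_regions=["zebra", "bike", "grass", "sky"],
    )
    image_tokens = generate_image_tokens(
        encode_text(scene.prompt),
        encode_layout(scene.sketch_regions),
    )
    print(f"Generated {len(image_tokens)} image tokens from text + sketch layout")
```

The point of the sketch is the design choice it illustrates: because the layout is supplied alongside the text, the creator, rather than the model alone, decides where the zebra and the bicycle end up in the final image.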
Make-A-Scene Empowers Creativity for Artists and Non-Artists
Meta stated that it gave AI artists such as Sofia Crespo, Scott Eaton, Alexander Reben, and Refik Anadol access to its Make-A-Scene demo as part of the research and development process.
Crespo, a generative artist who works at the intersection of technology and nature, used Make-A-Scene. She found that its freeform drawing capabilities let her quickly iterate on new ideas.
“As a visual artist, you sometimes just want to be able to create a base composition by hand, to draw a story for the eye to follow, and this allows for just that.” — Sofia Crespo, AI artist
According to Meta, Make-A-Scene is not only meant for artists; it can help everyone express themselves more effectively. Andy Boyatzis, a program manager at Meta, used Make-A-Scene to create artwork with his two- and four-year-old children. They brought their ideas and imaginations to life through playful drawings.
“If they wanted to draw something, I just said, ‘What if…?’ and that led them to creating wild things, like a blue giraffe and a rainbow plane. It just shows the limitlessness of what they could dream up.” — Andy Boyatzis, Program Manager, Meta
Meta is Building the Next Generation of Creative AI Tools
It is not enough for a machine learning system to simply produce content. To fully realize AI's potential to advance creative expression, Meta wants people to be able to shape and control the content a system generates. Such a system should be simple and intuitive to use, so that users can express themselves in whichever way suits them best, whether through speech, text, gestures, eye movements, or even sketching.
Through initiatives like Make-A-Scene, Meta intends to explore the new avenues for artistic expression that AI can open up. Although progress is being made in this area, this is only the beginning. Meta plans to keep pushing the boundaries of what is possible with this new class of generative creative tools in order to develop techniques for richer, more expressive communication in 2D, mixed reality, and virtual worlds.