I have mentioned this here a number of times in reviews: when building expert systems, we worried about this, and even considered a definition of creativity within a domain in order to weigh its value.
Can AI Demonstrate Creativity? By Keith Kirkpatrick
Communications of the ACM, February 2023, Vol. 66, No. 2, Pages 21-23. DOI: 10.1145/3575665
Creativity has been defined as the use of the imagination or original ideas, especially in the production of an artistic work. While the source of the development of those ideas can be debated—does creativity spring from the heart, the brain, the soul, or one's experiences—it has been largely accepted that humans alone possess the capability to truly create.
The emergence of computers and artificial intelligence (AI) has led to systems that, fed a sufficient amount of training data, can mimic the output of a creative writer, artist, or musician, thereby encroaching on humans' monopoly on the creative process. Artificial intelligence techniques can be used to generate new ideas in a few different ways: by producing unique combinations of familiar ideas, by creating new works based on the attributes of previous works, and by offering combinations of attributes and ideas that humans may not have considered while creating a new work.
A notable example of the power of AI to generate a so-called "creative" work came in 2016, when the IBM Watson AI platform was used to create a movie trailer for 20th Century Fox's horror film Morgan. In this first example of a trailer created using AI, Watson analyzed the visuals, sound, and composition of hundreds of existing horror film trailers, then selected scenes from the completed Morgan film for editors to assemble into a trailer. Using AI to comb through scenes and build a trailer in the style of other horror movies reduced the time editors needed to spend on the project from a week to a single day.
How AI Can Mimic Creative Works
The process of using AI to generate creative content is largely based on foundation models or generative adversarial networks. These approaches use deep neural networks, designed to mimic the way the human brain learns, to build associations between specific elements that can be combined into a finished work.
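To make the adversarial idea concrete, the sketch below pairs a generator network with a discriminator network, each learning from the other's output. It is a minimal illustration in PyTorch, not the systems described in the article: the layer sizes are arbitrary, and the "real" images are random placeholder tensors standing in for a corpus of training examples such as digitized artworks.

```python
# Minimal generative adversarial network (GAN) sketch in PyTorch.
# Illustrative only: the "real" images below are random placeholder tensors
# standing in for a real training corpus.
import torch
import torch.nn as nn

LATENT_DIM = 64      # size of the random noise vector fed to the generator
IMAGE_DIM = 28 * 28  # flattened 28x28 grayscale image (an assumed toy size)

# Generator: maps random noise to a synthetic image.
generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 256), nn.ReLU(),
    nn.Linear(256, IMAGE_DIM), nn.Tanh(),
)

# Discriminator: scores whether an image looks real or generated.
discriminator = nn.Sequential(
    nn.Linear(IMAGE_DIM, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1),  # raw logit; BCEWithLogitsLoss applies the sigmoid
)

loss_fn = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

for step in range(200):
    real = torch.rand(32, IMAGE_DIM) * 2 - 1  # placeholder "real" batch in [-1, 1]
    noise = torch.randn(32, LATENT_DIM)
    fake = generator(noise)

    # Discriminator step: learn to tell real examples from generated ones.
    d_loss = (loss_fn(discriminator(real), torch.ones(32, 1)) +
              loss_fn(discriminator(fake.detach()), torch.zeros(32, 1)))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator step: learn to produce images the discriminator accepts as real.
    g_loss = loss_fn(discriminator(fake), torch.ones(32, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```

After enough rounds of this back-and-forth, the generator's outputs come to resemble the distribution of the training examples, which is the mechanism behind AI-generated images in a learned style.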
These neural networks are fed millions or billions of examples of a particular output (which could include images, sound samples, or text passages), which they subject to a sophisticated type of pattern matching to "learn" specific attributes, patterns, or cues. For example, algorithms used to create artwork in the style of the Impressionists would be shown works by Monet, Renoir, Manet, Degas, Cezanne, and Matisse, painters generally considered masters of the style. The neural network examines the works as patterns of pixels and can be trained to identify the specific patterns that define the Impressionist style. This creates a framework of knowledge that can be used to generate a new work based on the learned parameters and attributes. The more "layers" or "depth" the model has, the more complex the resulting patterns and correlations can be.
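As a rough illustration of how added "layers" build up more complex patterns, the toy sketch below stacks a few convolutional blocks into a classifier that could, in principle, be trained to recognize pixel patterns associated with a style label. It is a hedged, illustrative example: the data and labels are random placeholders, the architecture is arbitrary, and real systems of this kind train on millions of labeled images.

```python
# Toy sketch of a "style recognizer": a deep convolutional network trained to
# associate patterns of pixels with a label such as "impressionist".
# Illustrative only -- the images and labels here are random placeholders.
import torch
import torch.nn as nn

# Each added convolutional block is one more "layer" of depth; deeper stacks
# can capture progressively more complex patterns and correlations.
def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(),
        nn.MaxPool2d(2),
    )

model = nn.Sequential(
    conv_block(3, 16),    # low-level patterns: edges, color blobs
    conv_block(16, 32),   # mid-level patterns: brushstroke-like textures
    conv_block(32, 64),   # higher-level patterns: composition cues
    nn.Flatten(),
    nn.Linear(64 * 8 * 8, 2),  # two classes: "impressionist" vs. "other"
)

loss_fn = nn.CrossEntropyLoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

images = torch.rand(16, 3, 64, 64)   # placeholder 64x64 RGB batch
labels = torch.randint(0, 2, (16,))  # placeholder style labels

for epoch in range(5):
    logits = model(images)
    loss = loss_fn(logits, labels)
    opt.zero_grad(); loss.backward(); opt.step()
```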
Large AI companies such as OpenAI (which describes itself on its website as "a research and deployment company" whose mission is "to ensure that artificial general intelligence benefits all of humanity") have created applications such as DALL-E (a name that nods to the artist Salvador Dalí and Pixar's WALL-E). Announced in January 2021, DALL-E generates images from text descriptions and demonstrated that this approach could reproduce and recombine features from existing images in new and aesthetically pleasing ways. The next version, DALL-E 2, released a year later, improved image quality and demonstrated that the system could reproduce a range of artistic styles. ...