tdevane


Artem ex Machina

collaborative creativity and AI’s avant-garde

. . .

Reading many headlines today, you’d be forgiven for thinking that our machine overlords have finally arrived to take all that we hold dear. Artificial intelligence and machine learning have come up and showed out in powerful and public ways in 2023. The existential gasp regarding the perceived threat these technologies represent has been especially loud from the entertainment business and the creative fields that make it go.

The lightning speed and high quality of generative AI output in its current iteration display a new computational creativity that can seem overwhelming. When faced with such immense power, it is a common human response to retreat to the familiar and established, casting the new technology as a potentially destructive force that threatens what came before it. Having debuted to the masses in something of a whirlwind, today’s generative AI has caused collective cognitive dissonance between wonder and worry – a reactive emotional paradox that feels particularly acute as we actively participate in its growth.

The technology’s output and concerns about that output abound. AI alternate endings to classic films saturate TikTok. AI-powered Wes Anderson-style takes on the casts of Harry Potter, Game of Thrones, and the MCU pervade Instagram feeds. Guardrails and carve-outs for AI have become critical topics adding fuel to the WGA and SAG-AFTRA strikes. That the current capabilities of these technologies have provoked an existential gasp from creative industries is not surprising. These are fields of artwork and entertainment – for these purposes film and television, music, and writing – within which the human artist has always been revered as the original, singular intelligence essential to the creative process.

After a long gestation in deep-technology R&D and on the furthest cusps of entrepreneurial innovation, this moment is an inflection point for artificial intelligence and machine learning in entertainment. For these creative fields in particular, it’s a paradigm shift in the making. Better understanding the new AI-ML reality calls for an exploration of both the profound contours and unprecedented capabilities of the current creative moment and the interdependent history of technological innovation and the entertainment industry. In doing so, this essay hopes to provide a more optimistic lens and clearer framing through which to see and engage AI-ML, present and future, as collaborative accelerants distinctively equipped for the rapidly evolving creative environment emerging all around us.

Note 1: I’m not making any claim of certified qualification in these realms of technology or creativity beyond my own evolving points of view, borne out of the continued pursuit of an education in and experiences of these technologies. This essay presents an optimistic perspective focused specifically on the growing presence of AI-ML in creative fields. Even for someone whose career has centered on technology and innovation, this is all so new. Right now, the contours and concerns of an AI-ML paradigm shift are defined by abundant and consequential unknowns. Certain crucial issues regarding AI-ML in other industries and the implications for society writ large fall outside of the parameters of this essay. The concept of inflection points, for technology and creative expression, inspired this essay and helped center much of the research. I have to thank my friend Mike Dempsey for his piece a few years ago going deep On Inflection Points, which sparked said inspiration. He is a curious, brilliant, and thoughtful guy and everyone should read all of his writing.

Note 2: Even with this focused scope, forward consideration for emergent AI-ML issues regarding copyrighted work and real artist performances must be addressed. Using AI to replicate human actors’ likenesses in future content, indefinitely and without proper approval and compensation, is not acceptable. AI-ML creative engines cannot continue to be trained on real artist-created content without consent. Content generated entirely by AI-ML must be identifiable as such. On a cautiously positive note, priority action is being directed to the rapid development of regulations, evolved consideration of copyright laws, and new opt-in compensation models for artists’ past portfolios and future works and performances. The WGA and SAG-AFTRA have both made AI-ML concessions deal-breaker priorities in their ongoing strike negotiations with studios. On the heels of a White House summit regarding AI guardrails in late July, OpenAI, Google, and five other major tech corporations agreed to implement watermarks for all AI-generated content, enhancing security and authenticity awareness. These issues will continue to require deep, perpetual examination, always moving towards mutually acceptable resolutions and paths to sustainability for all willing and active participants.

Note 3: Buried beneath headlines like those at the top of this essay, and more often ignored altogether, is the fact that not all AI is generative. Generative is absolutely the most broadly public-facing example available to most people right now. As such, it may not be surprising that generative is the only one of the four distinct types of machine learning that has its own Wikipedia page. However, there are indeed four distinct types of machine learning and related artificial intelligence: classification, interpretation, prediction, and generation. In terms of real-world application, each is powerful enough to handle an array of complex tasks within its categorical function. Each has discernible, high-potential use cases across a variety of stages in the cinematic, musical, and publishing creative processes. Throughout this essay, when not specifically addressing one of these four categories, I refer to AI-ML as a singular technology for the sake of simple syntax.


Creepy Completeness

A disconcerting trait of generative AI is its ability to return finished products, or what very closely resemble finished products, from simple input prompts. Whether that’s detailed visual imagery from engines like Stable Diffusion or nuanced narrative and comedy writing from ChatGPT, the wholeness and perceived polish of generative AI’s creative output is remarkable, even jaw-dropping, when rendered in the blink of an eye. Putting human margin doodles and stick figures to shame, these results feel excellent, professional. This perception underlies much of the human discomfort right now. In crafting whole creative assets on demand, the demonstrable capabilities of generative AI could eliminate the time and toil of human minds, historically creativity’s essential ingredient. Artists wrestling with the blank canvas or writers staring at the blank page could feel obsolete when said canvas and page need only be blank for the milliseconds of compute time it takes generative AI models to process prompts.

Underneath the very real concern for sustained human working value, there may be a more visceral trigger in human nature that generative AI pulls. Instant AI output of human-created quality provoking an unsettled human reaction rhymes with another techno-centric developmental concept common in humanoid robotics and certain areas of computer animation and virtual reality. The uncanny valley suggests that humanoid objects that nearly yet imperfectly resemble actual human beings elicit uncanny or strangely familiar feelings of uneasiness in observers. [See the movie version of Cats to immediately experience it or the movie Ex Machina for a nuanced exploration of the subject.] While not exact comparisons, similarities are apparent. The uncanny valley reaction is essentially a distorted mirror effect. It’s eerie to be in the presence of precise human physiology rendered through inorganic objects with uncanny accuracy. The creative output of generative AI provokes a similar instinctive discomfort by delivering completed versions of human artistic work of nearly comparable quality. Though not nearly as slick a name as uncanny valley, I’ll call this related concept “creepy completeness” in AI-human creativity.

Two considerations can help assuage creepy completeness and reconcile human-centric creativity within a future creative paradigm pulled forward in part by AI-ML. First, to reiterate Note #3 above: creepy completeness and an array of AI-only creative outputs are products of generative AI and limited to that type. That’s not to diminish what this single category of AI is proving capable of creating. However, it’s important to separate the collaborative potential of all categories of creative AI-ML from the initial disruption of generative AI right now. Second, we remain in the proverbial driver’s seat of these technologies’ development and their presence in creative production. Generative and otherwise, human engineers build and train these sophisticated models, and human creatives determine the developmental inputs and drivers of each creative process. Both groups of humans share an ever more obvious responsibility to delineate the purpose, scope, and function of AI-ML as a cooperative asset in shaping the creative future.
 

Another Tool in the Kit

An antidote to creepy completeness exists in an evolving perspective on AI-ML technology, one delineated along a tactical axis. Where AI products prove uniquely capable as point solutions within a human-centric creative process, their use will be a boon to creators eager to maximize their own creative thinking. As mechanisms of unprecedented functional efficiency, AI-ML can enable human creatives to work at superior levels of abstraction. Seen this way, these innovative tools applied to specific production tasks can elevate human creativity — affording more space for imagination and original thought to initiate and drive those processes. In a recent interview with Variety, director Steven Soderbergh expressed a similar sentiment through a judicious and pragmatic lens:

“I may be the Neville Chamberlain of this subject, but I am not afraid of A.I. in this specific context. It has no life experience. It’s never been hungover. It’s never made a meal for anybody it loved. It’s just another tool. If it helps you finish a first draft of a script, great. But can it finish that thing and make it great on its own? Absolutely not.”

- Steven Soderbergh, interview with Variety, 6.12.23


This point of view is instructive for AI-ML across all artistic mediums. Seeing AI-ML as another tool, one that’s definitely useful but hardly a replacement for human intuition and originality, is thoroughly practical and absolutely sufficient. The imaginations of directors and all creators are limitless — ever seeking to do more, say more, bring more to life in their form. It’s the creative tools that at once enable art to work and define its boundaries. In this way, Soderbergh expresses openness to the unique additive value of a new technology as a tool in service to a greater cinematic vision. It’s an egalitarian perspective on the functional means of innovative technology shared by many of the most highly regarded, groundbreaking filmmakers of all time.

Throughout history, technology has dramatically expanded creative and professional opportunities for artists by providing newer and more powerful tools. The advent of new technologies often causes fears of displacement among traditional artists. In fact, these new tools ultimately enable new artistic styles and inject vitality into art forms that might otherwise grow stale. They also make art accessible to wider sections of society. This dynamic is paramount to the interconnected nature of filmmaking and new technology. The history of film is filled with artist-tinkerers, as well as teams of artists and technologists. Together they’ve advanced the techno-cinematic frontier, often resulting in symbiotic inflection points that accelerated nascent technology and redrew the bounds of cinematic possibility. Walt Disney adopted and advanced the use of the multi-plane camera as well as breakthrough technologies in motion picture color and sound recording. Novel types of camera lenses enabled many of Orson Welles’ groundbreaking cinematographic techniques. The development of cheaper, portable camera and audio hardware facilitated the unbridled experimentation of the New Wave era, heavily influencing landmark directors including Stanley Kubrick, George Lucas, and Francis Ford Coppola. In more recent history, two seismic techno-cinematic inflection points stand out: the arrival of CGI special effects and the shift to digital filming. Both inflection points display distinct characteristics in achieving transformational impact, and a close examination of each can contribute to a better, more informed view of the potential next-order effects to come from AI-ML.

Digital Inflection Points in Cinematic History

Part One — Special Effects: Practical to CGI

In the early 1990s, computer-generated imagery in movies arrived — not with the current thunder of AI-ML, but in no less consequential a manner for the future of filmmaking. Beyond a few notable spot uses in the 70s and 80s, fledgling CGI was a limited afterthought for minor finishing techniques, primarily adding motion blur realism to stop-motion sequences. Then James Cameron and Steven Spielberg turned to the cutting edge of digital computer graphics innovation to realize audacious visions for two landmark films: Terminator 2 and Jurassic Park. In production, these movies actively advanced the dynamic capabilities of nascent CGI technology. The blockbuster theatrical runs of both movies elevated mainstream awareness of the new creative possibilities that CGI made possible. The in-depth stories of the technical breakthroughs achieved in pre-production on these films are fascinating and can be enjoyed here. The abridged version is this: Jurassic Park and T2 would become the first movies to feature 100% digitally rendered main characters, the liquid-metal T-1000 villain and the dinosaurs of Isla Nublar. Both movies opened to box office records; both would remain among the top 30 highest-grossing films of all time for the next 25 years. In an audacious and rare technological two-step, these two films combined to trigger an extraordinary inflection point for the novel technology – initiating the step-function changes which created production-level CGI in Hollywood and instantly winning the global acclaim and approval that cemented its cinematic value. Two years later, Pixar would release Toy Story, the first 100% digitally rendered feature film, debuting a new form of animated movie that would dominate the next 30 years. A year after that, Peter Jackson started pre-production on The Lord of the Rings trilogy while the Wachowskis did the same on The Matrix. CGI continues to redefine filmmaking across genres and production budgets, unleashing ever-expanding creative possibilities. Today, the technology is virtually omnipresent.

CGI’s inflection point in the entertainment industry is a valuable precedent for how creative industries may understand and engage with novel technologies to achieve a progressive creativity that doesn’t sacrifice or subjugate human centrality. In examining the impact of CGI over time, several transformative distinctions in film and TV become clear. These next-order effects can help frame how we perceive the directional purpose of AI-ML anchored to the intent and expectation that the technology assists, enhances, and expands the creative process and its achievable possibilities.

1. Cost-Effective, Time-Efficient: While modern CGI can require significant upfront investment, it often proves more cost-effective in the long run compared to practical effects, which are physical in origin and linear in production. Creating complex sets, building elaborate physical props, and executing dangerous stunts can be time-consuming, expensive, and impractical. CGI allows filmmakers to achieve similar results digitally, reducing production costs and minimizing safety risks. Where the Jurassic Park budget contained two small line items for CGI, the T-Rex animatronics alone took up nearly 25% of the movie’s $56M budget. CGI also eliminated many of the additional production schedules required with traditional effects — offering faster turnaround times and enabling effects edits to occur in post-production alongside live action edits. With practical effects, there are often delays in constructing and setting up physical elements, such as props or models. Additionally, practical effects may require multiple takes or adjustments to achieve the desired result, further extending the production schedule. In contrast, CGI allows for quicker iterations and adjustments, speeding up the overall production timeline with no additional shooting time or budget required.

2. Creative Force Multiplier: CGI unlocked profound functional freedoms and new paradigms of creativity in the cinematography, direction, and editing of visual mediums of entertainment. In terms of imaginative visions fully realized, CGI can create anything the human mind can conceive. Facilitating creative output without the limitations imposed by physical constraints, CGI is a tool of unprecedented power expanding the bounds of storytelling and world building. Once an onerous and exacting process divorced from filming, editing with CGI is instantaneous, malleable, and frictionless. This CGI distinction elevated post-production effects editing to a multidimensional creative complement and companion to the initial production. Advancements in CGI technology have led to highly realistic and visually stunning results. The level of detail, texture, and overall visual quality achievable with CGI has continued to improve, enabling the creation of virtual characters, environments, and effects that are indistinguishable from reality. Without CGI, we wouldn’t have Wall-E or Wakanda, Gollum or Groot, or innumerable other examples, from short to feature length. The finished results — often astounding, at times meh — always reinforce the scale and scope of the new creative paradigm that CGI brought to entertainment.

3. Consummate Co-Creation: CGI in entertainment continues to exemplify a successful and deliberate creative partnership between a technical innovation and incumbent entities and elements. CGI seamlessly integrates with live-action footage, allowing for the combination of real actors, sets, and props with digital elements. This integration enables filmmakers to blend practical and digital effects, enhancing the overall visual experience and expanding storytelling possibilities. CGI can enhance practical effects and vice versa in the same sequence. Terminator 2 and Jurassic Park were the first two films to debut this effects collaboration. Cameron and Spielberg recognized that practical effects and CGI would work best in tandem and completed post-production through creative meritocracy — using whichever technique best achieved each shot for the film. While this isn’t to claim that old and new work together in perfect harmony always and forever, the symbiosis is a lesser known yet notable pattern of past technical inflection points in creative industries. Rather than rendering incumbents outmoded, transformative new technologies have a discernible history of embedding within or alongside existing participants, workflows, products, and systems in collaborative sync. These combined powers tend to deliver creative output more capably and completely than those incumbents did beforehand, or than the new technology could alone. In the case of CGI, it also further fostered creative collaboration between film and TV production teams and technologists and software developers.

4. Accessibility Unlocked: Counter to the immediate perception of disruptive innovations at technical inflection points, CGI technology has been a remarkable boon to aggregate human employment. The availability and accessibility of CGI has increased over time as the tools, software, and FX library databases have standardized. While advanced CGI techniques were once limited to large studios, the democratization of the technology has made CGI more accessible to independent filmmakers and smaller productions. This has unlocked opportunities for a wider range of filmmakers to explore and incorporate CGI into their projects. Similarly, CGI tools’ ease of use and lower cost to learn have reduced specialized barriers, making entertainment careers feasible for many more creative individuals to pursue. For industry incumbents on active productions, CGI has been a multiplier of job creation, as the charts below detail:

Source: StephenFollows.com

In fact, outpacing every other major production department on the top 200 grossing films each year, VFX teams on domestic feature films grew 325% on average from 1997 to 2020. Contrary to a vocal human concern about AI-ML, history would indicate that the zero-sum fear of new technology broadly retiring humans is unfounded. In most cases, the lasting impact beyond the technical inflection point is substantial new job expansion and growth within the new creative paradigm.

Part Two — Shooting Motion Pictures: Film to Digital

Six years after the ascent of CGI, digital technology arrived that could replicate if not supplant the physical medium of the motion picture itself: celluloid film stock. At the turn of the millennium, major improvements in the usability and video quality of professional studio digital cameras unleashed digital cinematography, editing, and distribution. This would turn out to be a monumental turning point for Hollywood, which had spent ninety years beholden to physical film stock as its creative canvas. The entire industry revolved around movies shot on real film and seen in theaters on real film. Where CGI caught fire at a fledgling development stage, early digital cinematography debuted to friction, backlash, and resistance in the late 90s. Prominent directors believed the soul of cinema resided in 35mm film stock. Audio-visual manufacturers sought to protect their lucrative business as suppliers of the specialized, expensive equipment necessary to shoot, process, edit, and distribute physical film. Movie theaters exclusively equipped with traditional film reel projectors were reluctant to consider the upfront cost of adding a digital system or converting entirely to digital projection. Though the first feature-length film shot and edited entirely on digital premiered in 1998, it’s not surprising that The Last Broadcast was a self-financed indie horror movie, filmed on a $900 budget and edited in Adobe Premiere on a home desktop computer. The gradual adoption of digital filming took place in fits and starts over the following ten years. Mainstream approval arrived when the digitally produced Slumdog Millionaire dominated the 2009 Academy Awards — winning 8 Oscars, including Best Picture, Best Director, Best Cinematography, and Best Film Editing. Having delivered on Hollywood’s biggest night, digital filmmaking soared. As the charts below show, in 2009 nearly 90% of the top 200 films were shot on film against less than 20% on digital. Within five years, those percentages would completely invert, and movie theaters quickly followed suit.

Source: StephenFollows.com

As with CGI, the complete, deeply technical story of this cinematic sea change is fascinating and not without auteur controversy. However, in decoding a second, more recent digital inflection point in motion pictures, consequential, high-impact traits akin to those from CGI become apparent. Such data helps establish an impact blueprint that enables pattern matching between these past events — when techno-cinematic convergence produced unprecedented collaborative value — and those of the AI-ML present and future.

1. Budget Obliteration: Removing the physical constraint of the medium, the tangible cost and time savings of digitization were immediate, immense, and nearly universal across every line item and stage of film and TV production. Entire budgets previously required for raw film stock, negative processing, and reel scanning disappeared. Shooting on film requires purchasing and loading multiple reels, processing the footage, and transferring it to a digital format for post-production. These expenses can be substantial, especially for larger productions. In contrast, digital cameras offered reusable memory cards or hard drives, reducing ongoing material costs and enabling more cost-effective workflows. 2002’s Attack of the Clones, the first major studio feature shot entirely on digital, spent $16k to capture 220 hours of digital footage. The equivalent amount of film would have cost nearly 115x more, or $1.8 million. To release theatrically on film, every theater needed a physical print of the movie, running about $1,500 per print. For wide feature releases in, say, 5,000 theaters, this added up to an industry average of $7.5 million to distribute each film. For a digital release in theaters, digital film files sent via mailed hard drives, satellite, or cloud systems could complete wide distribution for 90% less.

2. Dynamic Creative Flexibility: Digital cameras offer a wide range of settings and options, allowing filmmakers to experiment with different looks, frame rates, and resolutions. This flexibility grants creative freedom and the ability to adapt to various shooting conditions. Shooting on film comes with a tangible cost per frame, which can influence decision-making on the number of takes and overall shooting ratio. Digital filmmaking removes this constraint, enabling filmmakers to shoot more footage without worrying about additional expenses. This freedom encourages exploration, improvisation, and capturing multiple angles or performances, leading to richer storytelling and more creative possibilities. The advantages carry into post-production: digital footage can be easily transferred and processed in a digital workflow. Editing, color grading, visual effects, and other post-production processes can be performed more efficiently and cost-effectively compared to working with film. Digital workflows also enable seamless integration with computer-based visual effects and CGI techniques, expanding creative possibilities in post-production.

3. Cross-Discipline Collaboration: Shooting on film was often a plodding process, since it could be several days if not weeks between filming a scene and seeing how the raw footage looked. Practically, drawn-out turnaround times meant lengthy production schedules, frequent reshoots, and inevitable budget strain. These fragmented, piecemeal constraints neutered spontaneous ideation on set, limiting creativity in production workflows where it is essential to success. Furthermore, single copies of eventually reviewable footage went to directors and higher-ups, reinforcing a rigid top-down process that deterred potential iterative creative collaboration between a film’s leadership and everyone else who brings it to life. Digital cameras enabled instant playback of footage on set. Real-time feedback empowered filmmakers to make adjustments to any element of a shot or scene on the fly. The immediacy of digital film review streamlines and powerfully elevates linear film shoots into more dynamic creative flows. The availability of digital footage allows for collaboration between the director, cinematographer, and other departments, enabling them to fine-tune the visuals and storytelling more effectively. The software systems that enable digital filmmaking are designed to be mutually available to all parties, facilitating input across departments and stages of production, all conducive to enhanced collective creativity in action.

4. Availability Amplified: For decades, the cost of raw materials, complex equipment, and specialized training necessary to shoot and edit film stock made film and TV production extremely exclusive. With deep pockets to cover those costs upfront, major studios and broadcast corporations held a true oligopoly over what was made for both big and small screens. The emergence of digital filming shattered these exclusivities by drastically lowering barriers to entry. Digital filmmaking has become more accessible as high-quality digital cameras, widely available editing software, and large-capacity SD cards have become more affordable, empowering many more aspiring filmmakers, independent productions, and small studios to enter the industry. It should be noted as well that this technical inflection point did not bring about the end of shooting films on film, a long-term reality I reference again towards the end of this essay. It remains perfectly acceptable and possible for filmmaking traditionalists with enough resources to make movies on film, and they do. While non-studio independent film releases have exploded in the years since digital filming broke through, the total number of all films — studio, indie, and hybrid — released has remained steady every year as well:

Source: StephenFollows.com


Now, setting two techno-cinematic inflection points in filmmaking side by side with the implications of AI-ML for creative industries today could invite more contrast than comparison. That, of course, is not the intent or purpose of this essay. Technologies emerge at different speeds, their development, adoption, and application curves shaped by wholly unique externalities of their given times. Further, these circumstantial factors often influence how novel technologies are perceived, especially in immediate and shorter-term timeframes. However, the progress of technology’s irrepressible innovation tends to position its phases in an evolutionary sequence rather than independent categories warranting side-by-side evaluation. While cyclical in invention-adoption-installation from one innovative paradigm to the next, the irruption and installation phases of a novel technology often evolve up from the mature technologies that allow it to exist. The pace of AI-ML development has been predicated not just on innovative algorithms, but also on our ability to generate, access, and store massive amounts of data, as well as advances in graphics processing architectures and in-kind hardware to process such enormous amounts of data. Continuing concurrent advances in computing power, storage capacities, and communication technologies (like 5G broadband) will support the embedding of AI processing within and at the edge of the network. Applying a broader lens over time, innovative technological breakthroughs come to resemble interdependent layers of a broad technology stack — each building block enmeshed in both its foundational predecessor and in the future innovative layers it inevitably helps create.

Creative origination often relies on human imagination, emotion, and lived experience to ask abstracted questions and drive new ideas not addressed by constrained learning systems. Conversely, many production tasks within creative output are much more repetitive and predictable, executed in structured workflows and data conformity. Production tasks are also often time-consuming or otherwise costly for humans to complete due to the need to find and retrieve specific information from enormous datasets (i.e. raw footage files, music recording files, digital stock image databases) or otherwise analyze and infer insight from those sprawling datasets. And so, identifying and separating these two types of tasks within the overall creative process provides a much clearer framework to position AI-ML in its most positive and collaborative creative applications. AI-ML works well when there are clearly defined problems that do not depend on external context or require long chains of inference or reasoning in decision making. It also benefits significantly from large amounts of diverse and unbiased data for training. Accordingly, production tasks that require manipulation of immense amounts of data stand out as amenable to integration of AI-ML functions.

Let’s look now at existing examples of creative AI-ML, in the realm of Soderbergh’s tool concept, that fit the function of production tasks: emerging use cases that display the immediate value of AI-ML as a tool for highly specific point solutions along existing planes of artistic output. These creative applications fit into four core categories: information analysis, post-production workflows, information extraction, and data compression. Within a given category, AI-ML delivers extraordinary if not unprecedented efficiency or flexibility in completing specific tasks, often at significantly lower cost than previously required. Think of the dinosaur walk test that became CGI’s first use case on Jurassic Park, or the first digitally shot sequences in the 1996 film Rainbow. As discussed, each successful application examined herein also serves as a mechanism enabling greater human conceptualization and abstraction — freeing up human creatives’ time and energy to focus on creative thinking within the scope of their process that may previously have been muted, ignored, or bogged down by completing these functions themselves. It should be said that these examples and many other applications may quickly come to address several categories in combination. These combinations will likely blossom into broader and more substantial AI-ML adoption in creative processes. However, each represents an initial real-world entry point and anchor that demonstrates the technology as a collaborative enhancement to human-centric creativity.

AI-ML Production Task Point Solutions

Film & TV Post-Production Ops: For motion pictures, post-production is when raw footage is cut and assembled. This editing can include touching up visual elements for clarity and quality, shot matching and colorization for visual and narrative consistency, syncing sound-mixed audio, music, and sound effects, and the addition of digital visual effects. Most relevant to potential applications of AI-ML, many post-production functions revolve around the dynamic manipulation of enormous amounts of uniform data: hundreds of hours of digital film files. RunwayML and Descript are two startups of note in the AI-ML video editing space.

Resolution Quality: Upscaling and super-resolution are algorithmic processes common in post-production that calculate higher pixel counts from low-res frames to provide improved high-res frames for distribution. Such video scaler systems require separate implementation and often separate servers, which can be expensive. Upscaling can also leave nonsense anomalies in frames or frame transitions. AI-ML models known as residual convolutional neural networks have been trained with matched low-res and high-res images to learn the redundancy and differences present in the two images’ pixel data. Subsequently combined with a generative layer, these models can produce high-res images from low-res frames as the only input. Similarly trained models have been built to improve both temporal and spatial motion across frame sequences as images change. AI-ML upscaling has produced sharper textural and motion details, with a crispness and clarity that outperforms traditional image scalers. Animated movie productions have been early adopters. In 2020, computer graphics engineers at Pixar developed a super-resolution system mapping low-res to high-res image creation, trained on Pixar’s animated film archive. This deep learning architecture capably “reconstructed artifact-free images with detail and sharpness indistinguishable from ground truth…consistent even on scenes with depth of field and motion blur.” In addition to this quality, their most recent models can reduce the studio’s render farm costs by approximately 50-75%.
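To make the residual-CNN idea concrete, here is a minimal sketch assuming PyTorch. The layer counts, frame sizes, and the bicubic pre-upsampling step are illustrative choices for a sketch, not Pixar’s or any vendor’s actual architecture: the network only learns the high-frequency detail missing from a cheap classical upscale.

```python
# Minimal sketch of a residual super-resolution CNN: the model predicts only the
# residual (detail) between a bicubically upscaled low-res frame and the true
# high-res frame. Layer counts and frame sizes are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResidualUpscaler(nn.Module):
    def __init__(self, channels=3, features=64, depth=8):
        super().__init__()
        layers = [nn.Conv2d(channels, features, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(features, features, 3, padding=1), nn.ReLU(inplace=True)]
        layers += [nn.Conv2d(features, channels, 3, padding=1)]
        self.body = nn.Sequential(*layers)

    def forward(self, low_res, scale=2):
        # Cheap classical upscale first, then learn only the missing detail.
        upscaled = F.interpolate(low_res, scale_factor=scale, mode="bicubic", align_corners=False)
        return upscaled + self.body(upscaled)

model = ResidualUpscaler()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.L1Loss()

# One training step on a matched (low-res, high-res) frame pair, e.g. from an archive.
low = torch.rand(1, 3, 270, 480)    # stand-in for a low-res frame
high = torch.rand(1, 3, 540, 960)   # stand-in for the matching high-res frame
loss = loss_fn(model(low), high)
loss.backward()
optimizer.step()
```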

De-noising, Color Correction, and Color Grading: Various types of unwanted visual noise can be introduced to film during recording, processing, and broadcast signal acquisition; white “snow” and picture blur are common examples. As such, de-noising nodes are commonplace in post-production workflows. Similar to the AI-ML resolution work above, neural networks trained with matching noisy and clean video clips estimate a residual noise map, currently delivering state-of-the-art performance in removing spatial and temporal noise from raw footage. Color correction, grading, and matching for film and television is an essential part of the late post-edit process. It’s also surprisingly expensive and time intensive: it can take several weeks for feature-length films and generally requires 5-10% of the entire production budget. Motion picture focused machine learning interpretation and classification models can be trained to analyze input footage and apply appropriate color grading adjustments. Automatically matching individual filmmakers’ inputs for color aesthetics from shot to shot, and ensuring tone and palette remain consistent, saves an enormous amount of the time and budget normally allotted to a traditional color correction process. Startups working in this space include Colourlab.ai and VEED.
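For a sense of what automated shot matching does at its simplest, here is a minimal sketch of a classical statistics-transfer baseline in LAB space, the kind of adjustment that learned color-grading models automate and refine. OpenCV and NumPy are assumed, and the file paths are hypothetical; this is not any startup’s product pipeline.

```python
# Minimal sketch of shot-to-shot color matching: transfer the per-channel mean and
# standard deviation of a graded reference frame onto an ungraded target frame in
# LAB space. A classical baseline for what learned color-grading models automate.
import cv2
import numpy as np

def match_color(target_bgr, reference_bgr):
    target = cv2.cvtColor(target_bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
    reference = cv2.cvtColor(reference_bgr, cv2.COLOR_BGR2LAB).astype(np.float32)

    t_mean, t_std = target.mean(axis=(0, 1)), target.std(axis=(0, 1)) + 1e-6
    r_mean, r_std = reference.mean(axis=(0, 1)), reference.std(axis=(0, 1))

    # Normalize the target's statistics, then re-apply the reference's statistics.
    matched = (target - t_mean) / t_std * r_std + r_mean
    matched = np.clip(matched, 0, 255).astype(np.uint8)
    return cv2.cvtColor(matched, cv2.COLOR_LAB2BGR)

reference_frame = cv2.imread("graded_reference.png")   # hypothetical file paths
ungraded_frame = cv2.imread("ungraded_shot.png")
cv2.imwrite("matched_shot.png", match_color(ungraded_frame, reference_frame))
```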

Sound Library Management: Within studios and labels, disorganized sound libraries without classification systems delay new music creation and release every year across genres. Externally, third-party pre-recorded sound libraries for music and sound effects that lack user-friendly organization cause similar delays in production. The advent of certified digitized audio increased the volume of available, lower-cost sound files, while the emergence of cloud databases made whole libraries accessible on demand. However, upon this digital, cloud-enabled base, the absence of reliable search and efficient discovery remains a prominent source of time lost across industries including music recording, film, TV, and other forms of audio including podcasts. With machine learning, classifiers like convolutional neural networks (CNNs), used prominently in image identification and matching models today, provide a basis for sophisticated, large-scale audio classification architecture, which, combined with interpretive and predictive layers, can dynamically tag, categorize, and structurally organize public and private sound libraries – delivering search-friendly design and rapid results from Boolean or otherwise complex queries. OG digital sound library startup Splice has developed machine learning-powered sample search. Other startups building in the space include Cyanite and Pibox.
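A minimal sketch of the CNN-on-spectrogram idea follows, assuming PyTorch and torchaudio. The tag set, layer sizes, and the untrained example clip are illustrative only; a production tagger would be trained on a labeled library and use a far richer taxonomy.

```python
# Minimal sketch of a CNN audio tagger: convert a clip to a log-mel spectrogram
# and score it against library tags. Tag names and layer sizes are illustrative.
import torch
import torch.nn as nn
import torchaudio

TAGS = ["kick", "snare", "ambience", "strings", "vocal", "foley"]

mel = torchaudio.transforms.MelSpectrogram(sample_rate=22050, n_mels=64)
to_db = torchaudio.transforms.AmplitudeToDB()

class AudioTagger(nn.Module):
    def __init__(self, n_tags):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, n_tags)

    def forward(self, waveform):
        spec = to_db(mel(waveform)).unsqueeze(1)   # (batch, 1, mels, frames)
        feats = self.conv(spec).flatten(1)
        return self.head(feats)                    # one logit per tag

model = AudioTagger(len(TAGS))
clip = torch.randn(1, 22050 * 3)                   # stand-in for a 3-second clip
scores = torch.sigmoid(model(clip))
predicted_tags = [t for t, s in zip(TAGS, scores[0]) if s > 0.5]
print(predicted_tags)
```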

Music Copyright Protection and Sample Clearance: Copyright lawsuits have beleaguered the music industry recently, and certain notable past cases have dragged on for years. On the surface, AI seems to be shaping up as a near-future synthetic defendant further complicating this issue, considering the AI-generated Deep Drake track fallout from earlier this year. But there are AI-ML applications that can predict copyright conflict, avoiding later lawsuits and restoring some confidence in artist collaboration, directly or through sampling. As things stand, perceived concern about lawsuits has many recording artists in recoil, abandoning sampling as a potential creative source. This risks limiting certain creativity if recording artists are siloed and absent a fundamental musical inspiration — that which already exists. AI-ML can’t mend every burnt collab bridge, and more powerful neural networks may be needed to finally bring many of the lost musical gems from the sample-heavy mixtape era to streaming (shout out Dedication II and Drought 3). As a point solution, however, AI can analyze each element of complex audio recordings and cross-compare them against copyrighted libraries. Likeness percentage rankings flag any song element above acceptable similarity thresholds. Startups working in this area include Audioshake and StarCoder from HuggingFace.
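As a rough illustration of the element-versus-library comparison, here is a minimal sketch assuming librosa and NumPy. Averaged chroma features with cosine similarity are a deliberately simplified stand-in for the stem separation and audio fingerprinting real clearance systems use, and the threshold and file names are hypothetical.

```python
# Minimal sketch of a sample-clearance style check: summarize each recording as an
# averaged chroma (pitch-class) vector and flag library tracks whose cosine
# similarity to the new song exceeds a threshold. A simplified illustration only.
import librosa
import numpy as np

def chroma_signature(path):
    audio, sr = librosa.load(path, sr=22050, mono=True)
    chroma = librosa.feature.chroma_cqt(y=audio, sr=sr)   # shape (12, frames)
    return chroma.mean(axis=1)                            # average over time

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

SIMILARITY_THRESHOLD = 0.90                                # illustrative cutoff

new_song = chroma_signature("new_track.wav")               # hypothetical paths
library = {"copyrighted_song_a.wav": chroma_signature("copyrighted_song_a.wav")}

for title, signature in library.items():
    score = cosine(new_song, signature)
    if score >= SIMILARITY_THRESHOLD:
        print(f"FLAG: {title} similarity {score:.2f} exceeds threshold")
```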

Audio Mixing and Song Mastering: As with most of the audio recording and production process, final mixing and mastering have seen increased accessibility and relative cost reductions thanks to workflow digitization and user-friendly software tools. However, the entire mastering process can still be frustratingly meticulous to complete. AI-powered audio mixing and mastering tools currently analyze audio tracks, automatically balance levels and EQ frequencies, and apply dynamic processing for optimal sound quality. Certain products offer mix and master by predetermined artist or producer settings as well as genre-trained quality thresholds. Two startups bringing compelling products to market here are LANDR and RoEx.
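One small, automatable step in that pipeline can be sketched concretely: measuring a mix’s integrated loudness and normalizing it to a streaming target. The sketch assumes the open-source pyloudnorm and soundfile packages; the -14 LUFS target is a common streaming convention, not any product’s setting, and full mastering tools also handle EQ, dynamics, and limiting.

```python
# Minimal sketch of one automated-mastering step: loudness normalization.
# File paths are hypothetical; real tools do far more than this single step.
import soundfile as sf
import pyloudnorm as pyln

data, rate = sf.read("final_mix.wav")            # hypothetical input mix

meter = pyln.Meter(rate)                         # ITU-R BS.1770 loudness meter
loudness = meter.integrated_loudness(data)
print(f"Measured loudness: {loudness:.1f} LUFS")

TARGET_LUFS = -14.0                              # common streaming-platform target
normalized = pyln.normalize.loudness(data, loudness, TARGET_LUFS)

sf.write("mastered_mix.wav", normalized, rate)
```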

Script Assistance: Compelling and original narratives are a vital undercurrent that shape many forms of creative art, including film, TV, gaming, and literature. For anyone who writes, there are few things more daunting than a blank Word doc. For writing teams across many creative mediums, the draft edit stage for a screenplay or script is often an excruciating, stop-and-start process, as painful as it is slow. During the earliest phases of film or TV development, revising and punching up a draft script into its final version can be the heaviest lift of an entire production. AI-powered platforms can facilitate real-time collaboration among multiple scriptwriters, enabling seamless co-writing, version control, and feedback integration. Natural language processing models can be deployed to run draft analysis for plot holes, inconsistencies, or logical errors, ensuring that the story maintains plausibility and continuity. As an originality failsafe akin to sample clearance, certain models can compare drafts against huge archives of existing script databases to ensure they are sufficiently unique and distinct from other copyrighted materials. Working to help solve the dreaded blank page, Lex is a startup building a seamless, intelligent writing aid, while Writesonic and Colossyan are two text-focused AI-ML startups that offer products focused on script assistance.
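The originality failsafe can be illustrated with a minimal sketch assuming scikit-learn. TF-IDF cosine similarity is a deliberately simple stand-in for the archive-scale comparison such platforms would run; the archive entries, draft text, and threshold are all hypothetical.

```python
# Minimal sketch of a script-originality check: vectorize a draft and an archive of
# existing scripts with TF-IDF and flag any archive entry with suspiciously high
# cosine similarity. A simplified stand-in for production-scale systems.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

archive = {                                        # hypothetical archive excerpts
    "heist_pilot.txt": "Two estranged siblings plan a casino heist in Reno...",
    "space_western.txt": "A bounty hunter crosses a terraformed frontier...",
}
draft = "Two estranged siblings reunite to plan one last casino heist in Reno..."

vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform(list(archive.values()) + [draft])

draft_vector = matrix[-1]
scores = cosine_similarity(draft_vector, matrix[:-1])[0]

SIMILARITY_THRESHOLD = 0.6                         # illustrative cutoff
for (title, _), score in zip(archive.items(), scores):
    status = "REVIEW" if score >= SIMILARITY_THRESHOLD else "ok"
    print(f"{title}: similarity {score:.2f} [{status}]")
```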

Pre-visualization: Previz is the pre-production process of creating visual representations of scenes and sequences before filming. Less expensive than principal filming, previz utilizes mediums like 2D storyboard sketches, 3D reconstruction, location scouting imagery, and animation to enable filmmakers to plan and refine shots, set designs, and special effects as well as test staging and art direction options. The concept is also utilized in other creative arts including still photography, performing arts, video game design, and narrative animation. Traditionally, previz is often a painstaking, linear process. Transforming a director’s earliest imaginative visions into first-draft visuals can be time-consuming and labor-intensive for the teams involved. Image- and video-specific generative AI-ML models can help filmmakers and their teams create quick visual representations of their ideas and dynamically iterate different lighting, locations, camera angles, character blocking, and movements. Making more efficient use of valuable pre-production time and resources, AI-ML-assisted previz helps bridge any gaps between a director’s creative vision and the technical execution of their production teams. Storia AI and PentoPix are both video-based AI startups bringing fascinating pre-visualization products to market currently.
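A minimal sketch of generative previz using the open-source diffusers library and an off-the-shelf Stable Diffusion checkpoint is below. The model ID, prompts, and output handling are illustrative assumptions, not any previz vendor’s pipeline; real workflows would layer these draft frames into storyboards and 3D blocking.

```python
# Minimal sketch of generative pre-visualization: turn a director's text note into
# quick concept frames with an off-the-shelf Stable Diffusion checkpoint.
# Model ID, prompts, and output names are illustrative; requires a GPU as written.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",            # illustrative checkpoint
    torch_dtype=torch.float16,
).to("cuda")

shot_ideas = [
    "wide establishing shot, abandoned coastal lighthouse at dusk, storm clouds",
    "low-angle close-up, detective under neon rain, 35mm film look",
]

for i, prompt in enumerate(shot_ideas):
    image = pipe(prompt, num_inference_steps=25).images[0]
    image.save(f"previz_frame_{i}.png")          # quick draft for the storyboard
```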


In performing the above functions and many others with precision and fidelity today, AI-ML provides advance data and an early signal of the techno-creative inflection point at hand, through which we can begin to anticipate next-order effects of a kind with those examined in the two techno-cinematic examples earlier.


Efficiency in the Routine: Exponentially Faster & Cheaper

“ML-AI here are about the augmentation of creativity. In the end, it's more about how can you get better efficiencies in human creative production. With filmmaking, 99% of the work is actually very mundane. It's going through hundreds of hours of video in some cases to arrive at the core pieces to use. So there's still a very good reason to use this technology as an assistant here, rather than replace the human in the loop."

- John R. Smith, Discovery Technology Foundations at IBM Watson Research Center


Cost and time savings, the twin hallmarks fundamental to many an innovative technology, drive breakaway new value realization, and current applications of creative AI-ML deliver both in abundance. More so than the other next-order effects explored herein, significantly faster and cheaper task completion is a clear and present value proposition for each of the existing AI-ML point solutions outlined in the previous section. Below are a few more AI-ML time and cost saving statistics to highlight just how powerful and consistent these two value characteristics already are, despite the technology still being in the early stages of development:

What makes these examples of AI-ML performing functions faster and cheaper into a lasting higher-order impact is the scale of the efficiencies delivered. These are not marginal production time improvements or minor savings on a line item or two in budgets. Not unlike the techno-cinematic examples previously, these examples are evidence of a technology that can be orders of magnitude faster at a fraction of the going cost in certain areas of creative production.

Given the concerned rhetoric swirling around generative AI, we’ll tread softly in expounding on the further manifestations of faster and cheaper that these technologies can come to provide. With AI-ML, one could conjure a near future by pulling forward the current speed and low cost in a way that projects the imminent reduction of human contribution in these fields. However, as we’ve seen with the two examples of digital inflection points in filmmaking, the extraordinary cost and time benefits of novel technology innovations do not sentence human creative collaborators to an outmoded junkyard. While not without some friction upon arrival and during their installation phase, these technical inflection points tend to jumpstart an expansive creation of human opportunity as they progress into a paradigm shift. The specific solutions described above are, nevertheless, data-based evidence of these technologies’ unprecedented capabilities and extraordinary value in reducing the time and money costs of many stages of creative production. It wouldn’t be surprising to see certain AI-ML point solutions expand along a horizontal creative plane to similar adjacent production tasks, or vertically specialize around specific content types such as audio, video, or text. Wherever the technology can address creative production tasks whose prior completion was cumbersome, repetitive, and a known drain on time or budget, AI-ML will have the potential to be adopted as a collaborative tool. Crucially, these types of functional AI-ML adoption will translate to increased productivity through each stage, and so more human-made creative output overall.


Imagination Manifest: Freedom To Try

“An essential aspect of creativity is not being afraid to fail.”

- Dr. Edwin Land, Co-Founder, Polaroid, LIFE Magazine October, 1972


One of the most transformative effects of the cinematic digitizations discussed earlier was advancing the ability to experiment, especially in real time on active productions. The creative flexibility, ease of use, and cheaper production costs introduced a new filmmaking paradigm in which trying new ideas, including those that turned out to be bad ones, was no longer prohibitive in cost, equipment complexity, or production time. A spur-of-the-moment idea on set for a different shot angle, or in post-production for a new VFX design, could be brought to life where before those experimental thoughts may not even have been voiced. New ideas are creativity’s foundation, and the technological tools and infrastructure of digitization pulled Hollywood forward such that it could better afford to facilitate their suggestion at many more points in production. Much of the early progress of creative AI-ML suggests that these novel technologies are poised to further expand the freedom to try things across industries of art and entertainment and for modern creativity more broadly.

Stepping out of the weeds of industry for a moment, human confidence to creatively express is a characteristic state of mind that diminishes with age. Abundant in children bound only by available surfaces to draw on, creative confidence gradually encounters external influences – form, rules, trained skills, and, perhaps most significantly, criticism – which tend to subdue that self-belief. Artistic wunderkinds and Peter Pan aside, over time growing up inevitably limits potential creative pursuits for the masses. With creative AI-ML, especially generative, a fundamental and innovative purpose is the capability to take an idea and turn it into something. The low barriers to entry and ease of use of many leading AI-ML products suggest immense potential to foster and restore creative confidence and expression. So, what does this mean for the existing professional fields in creative arts? Machine learning will give creatives more freedom to test out novel ideas in the flow of production at myriad points during the process. Returning nearly instant results at tiny relative costs, the fears of failure, reprimand, or being deemed ‘bad’ essentially vanish. Human creatives will be able to build up new confidence to suggest and try out creative ideas that aren’t necessarily fail-proof, aren’t perfect, or even ultimately just aren’t good, because AI-ML output will be able to give some indication in advance of what a conceptual idea would look, sound, or read like.

AI-ML enabled freedom to try should also enhance and help answer the question “what would x look like?” in the earliest ideation phases of creativity. Substitute the “x” with an idea for: a storyboard, a 3D reconstruction of an environment, a sizzle reel, a character sketch, a POV shot, or a drum machine over a classical music genre. A version of a take on what that would be comes to life immediately at a low relative cost. In terms of process-oriented creativity, these applications of AI-ML, even their generative versions, can then function as spark and springboard, expanding the imaginative possibilities of our own creative inspiration and output. Within this kind of framework, AI-ML can powerfully augment and so encourage the age-old simple ideation process of throwing stuff against the wall to see what sticks. The technologies wouldn’t choose what sticks – that creative necessity to discern, to choose, remains a human responsibility drawn from intuition, emotional resonance, lived experience, and abstract perspective. However, an instantaneous means of palpably visualizing the stuff that’s thrown at the proverbial wall is an extraordinary enhancement, not just for efficiency, but as a means of adjusting our aperture to realize new horizons.

In his autobiography, Mark Twain opined on the fallacy of new ideas. He reasoned that “we simply take a lot of old ideas and put them into a sort of mental kaleidoscope. We give them a turn and they make new and curious combinations. We keep on turning and making new combinations indefinitely; but they are the same old pieces of colored glass that have been in use through all the ages.” Whether one agrees entirely with Mr. Twain or not, there is immense creative power in re-examining all the possibilities that have come before, exploring new combinations of existing artistic elements, or telling an old story in a distinctly novel way. In gathering the same old pieces of colored glass that we put into said kaleidoscope, AI-ML can powerfully enhance the breadth and depth of their sourced domains as well as the speed of their collection, enabling us to explore exponentially more combined choices and compare endless new concepts immediately. Natural language processing (NLP), foundational to AI-ML interpretation, classification, and so search, can draw across the entire spectrum of available knowledge bases, fields of study, and collective histories to deliver exhaustive lists of existing ideas and concepts. In this way, generating Twain’s old ideas can return existing possibilities as broad or specific in their type and origin as the input prompt desires or requires. If certain creativity then depends on the art of human choice, AI-ML applications can deliver the entire realm of the existing to our fingertips in an instant, freeing our minds to focus on attempting combinations, new, curious, or thus far overlooked, that may provide the inspiration that blossoms into an eventual work of art.


Visual Communication: A Collaborative Force Multiplier

"The great enemy of communication is the illusion of it.”

- William H. Whyte, Fortune Magazine, September, 1950


Creativity is an essential means of expression and so of communication. The American jazz great Lionel Hampton said that “all art is communication of artists’ ideas, sounds, thoughts.” If communication underpins the whole purpose of creative expression, then it is when creative production exceeds the capacity of an individual artist that in-process human-to-human communication becomes a necessity, one that can be a functionally time-consuming burden to get right at various stages. For example, a cartoon creator must convey to storyboard sketch artists their vision for characters, scenes, and environment; creator and sketch artists must explain their color palette concepts to colorists; all three must communicate ideas for character expressions, emotions, movements, and interactions to digital animators; and so on. While at each design stage the domain expertise and skills of each new participant and the evolving visual asset assist these co-creator communications, it can be a challenge from drafting phases through to post-production digital edits to extract from spoken vision a mutually satisfying result — certainly on the first few tries. The visual drafting capabilities of AI-ML can be a powerful means of collaborative communication during these crucial interactions. Generative models can take a stated vision at the given production stage as an input prompt and return countless visual concepts in the appropriate medium. Depending on results, the vision input can be adjusted on the fly as creative juices flow in response to instant output. The effect can be to enable these production stages to progress faster, each artist contributing to a dynamic that continuously refines a final product as opposed to burning time drafting preliminary visuals from more linear communication. Not at all a means of replacing the overall creative process or any specific production stage, AI-ML applied in this way should function as a collaborative force multiplier. Assisting the translation of imagination into a visual example will potentially benefit artist-to-artist communication, especially around concepts that are hard to explain accurately in words.


Expanding Human Opportunities: Unprecedented Accessibility

“Anyone can cook.”

- Chef Auguste Gusteau, Ratatouille


Given the availability of consumer and enterprise AI-ML applications and the current state of external, enabling technologies – 5G broadband network saturation, ubiquitous public data availability, high-volume cloud data storage and processing power – the existing stages of these innovations may be more accessible than any other breakthrough technology to date, including internet 1.0 and the mobile revolution. Along with the twin tenets of faster and cheaper, democratized access is a persistent redeeming quality of novel technologies and a strong bellwether of their potential staying power. Within the realm of creative industries, many more potential creators empowered to create – to turn an idea more ably into something – is inevitable progress, especially long term.

This does not suggest that everything these technologies help to create must be good or will be. I am not predicting the immediate, high-volume creation of works of art born out of every new prompt or AI-ML creative utilization. To expect such from a novel technology that’s broadened creative access to individuals who may previously have had limited creative experience is to construct an artificial, frankly weak and snobby barrier to acceptable entry in the face of a technological dynamic that’s breaking such barriers down. Recall, though, the brilliance of Chef Gusteau’s simple coda about his profession from Ratatouille, one of my all-time favorite movie quotes. Aside from access to grocery ingredients and a kitchen, there are no barriers to entry in cooking. Anyone, anyone, can cook. Does he believe everyone should? Perhaps not. Does he mean anyone can go on to earn a Michelin Star? Certainly not. The reality with these and future versions of creative AI-ML is that more people, so motivated, can more easily try. That is a good thing.

The potential human functional boost from AI-ML — operating at superior levels of abstraction in production, with more time to focus on creative thinking — is only an opportunity if it’s recognized, grasped, and embraced. As explored above, AI-ML can be a significant new factor in completing certain functions of creative production, notably the heavy lifts: repetitive, menial, structured, or data-intensive tasks. Engaging these technologies this way does not retire the human; it empowers distinctly human qualities and pushes creatives toward the complex thinking that can have a more consequential impact on what is created. And so, if synthetic computer intelligence can free us up to concentrate creatively, we must recognize that freedom and use it to develop original concepts, tell old stories through new narrative lenses, and inject our unique human empathy, perspective, emotion, and experience into all that we do with this newly available creative time and abstracted mindshare.

Following the pattern of past techno-cinematic inflection points, AI-ML, while able to shift and transform entire creative industries, isn’t predestined to become absolute. Neither CGI-VFX nor digital cinematography became a required tool, and neither eliminated the methods of filmmaking that preceded it. Practical, physical special effects lived on alongside CGI and continue to do so, depending on a director’s preferences and vision. Many of today’s beloved directorial auteurs – Quentin Tarantino, Paul Thomas Anderson, Christopher Nolan, Wes Anderson, and, yes, when they can, Steven Spielberg and Martin Scorsese – are ardent devotees of shooting, editing, and, in some cases, distributing their movies on traditional film stock. Sure, these names can afford the costs of such tradition, but the point is they still can. The explosion of digital cinema, the saturation of digital filming, and, crucially, digital theater distribution did not snuff out traditional techniques. Developing in like fashion, creative applications of AI-ML are unlikely to become definitive prerequisites for every creative production.



Coda: Change, Trust and Evolving Responsibility

Recalling Soderbergh once more: AI-ML technology is a tool whose creative yield depends on the data it’s trained with and the algorithms used. As such, standalone AI-ML art in any medium today is derivative. It exists as a synthesis of the historical data upon which its algorithms are trained. With immense sophistication, this output often mixes elements of that training data in dizzyingly complex and fascinating combinations, but it nonetheless derives from recorded human ideas. Pulling this nuance forward: it may be straightforward for these technologies to produce something novel by random chance, but it is much more challenging for creative AI-ML to come up with something new that is uniquely inspirational, unexpected, and useful to a particular creative flow. Given this, the true potential of AI-ML in unlocking new forms of creativity lies in the symbiotic relationship between human creativity and AI-powered tools, where humans guide and shape the technology to achieve creative objectives. As AI-ML continues to advance, its impact on creativity is likely to evolve. However, its power and consequences come down to how we build it and how we use it.

The enduring impacts of previous techno-cinematic inflection points depended on a crucial precedent that will again be essential to achieving any of the potential next-order effects of AI-ML explored above — the willingness and intent of human creators to develop a proactive, transparent, and trusted partnership with new technologies in the creative process. At the respective technical inflection points for CGI and DIGI, several prescient and ambitious filmmakers embraced their emerging capabilities and future promise, and their peers quickly followed suit. With vigorous, thorough intent, legions of entertainment creatives immersed themselves in understanding the inner workings of these tools and actively tested their limits, fostering collaborative efforts that embedded them in production workflows industry-wide. In so doing, those human partners continually discovered, evolved, expanded, and proliferated the cutting-edge capabilities of those technologies over time. That mutually driven advancement helped transform two groundbreaking new technologies into enduring, dynamic ecosystems that facilitate and encourage audacious human creativity and experimentation in ways that previous techniques, workflows, and systems had not or could not. Many of the AI-ML point-solution examples above feel like they could be v3 evolutions of the v2 digital innovations that preceded them. Being digitally flexible and cloud-redundant, CGI and digital filming debuted paradigms within which mistakes didn’t cost extra or compromise completed work, inspiring the human creative confidence to try new ideas in the moment that might not work. Building upon its digital predecessor, AI-ML can be a paradigm shift that encourages that same human creative confidence, elevated to the scale of transposing entire imaginations. Any scenario, setting, soundscape, or sequence, from minuscule to galactic, can be realized in remarkably complete form instantly and without additional cost. That is a remarkable benefit to expanding creativity. Trying new ideas that might not work may evolve into Trying Everything or even Trying Anything if it comes to mind. Through an array of applications, AI-ML can breathe life into the creative abstracts that live inside all our heads, and anyone can already try it out.

Attaining such an outcome will depend on how we perceive, approach, and sustain a collaborative dynamism with AI-ML through the many future unknowns we’ve yet to encounter. The levels of intent, enthusiasm, trust, openness, and genuine desire with which we engage AI-ML will determine the contours, contributions, characteristics, and capabilities of the technology’s current inflection point and the new paradigm thereafter. How we partner is all the more critical with AI-ML, given that it pairs human creatives, with empathy and every other emotion, on one hand, and a built technology with no inherent concern for human emotion in its functional objectives on the other. Should humans lean into the reactionary fear, zero-sum insecurity, and dismissive angst that rejects this technology, we could manifest a synthetic intelligence that evolves to reflect the very worst in us. An upside of advanced AI-ML, however, is that it can learn, does learn, and can be actively trained to recognize and know the very best, most optimistic, and forward-thinking versions of its human creative collaborators. If creatives embrace this collaborative opportunity with enthusiasm and an eyes-wide-open view toward enabling previous impossibilities, then far from marginalizing human artistic impact, AI-ML will elevate it, broadening the scope of what we can create and democratizing access to who can create it.


Epilogue

It’s worth noting that AI-ML arrives at an inauspicious time for a media and entertainment sector worth $2.47T worldwide. Surging financial stakes have forced actors and writers to strike for fair compensation and driven the industry to codify derivative project production with little appetite for creative risk. As a result, the business of entertainment today operates as a diametric inverse of the bold, roaming nature of artistic imagination. Studios, labels, and publishers are designed to restrict creativity. They mine existing IP for the kind of product that has delivered strong financial returns in the past and lean into imitating or extending that known entity for the bulk of what gets produced going forward. In real time, that has meant an inescapable deluge of sound-alikes, sequels, franchises, spinoffs, and reboots. Of course, the incongruence with artistic creativity is that once something truly groundbreaking is made — a movie, album, book, or show — its repurposed versions are generally made absent originality. By design and expected financial performance, ancillary offshoots are exactly that. Connecting converging dots, a modern entertainment industry prioritizing productions based on repetition may seem ripe for the adoption of AI-ML technology that can produce polished content of a kind with the existing media data that trains it. The current state of the entertainment business may be intensifying the resounding sense of existential vulnerability artists and creators feel regarding AI-ML. That is completely understandable. If, however, creatives choose to utilize this technology for the creatively profound, increased collaborative adoption can give rise to positive next-order effects like those outlined in this essay, whereby the lasting impact of AI-ML on these fields becomes one of transformation, expansion, and improvement. In a recent interview on The Town with Matt Belloni, Rick Nicita, former co-chair of CAA, made clear the reality and importance of this moment in entertainment. “It’s been too much corporatization and not enough innovation,” he said. “Art can only flourish with the new. Studios and financiers have to really realize and digest that what makes hits are fresh takes, not just repetition. The entertainment business will always exist but for it to thrive artistic risk-taking has to be encouraged again.” AI-ML can provide a remarkable boost toward new artistic heights if the entertainment industries once again embrace the creative risk of originality.

We humans are a curious lot. We rapidly develop and, en masse, enthusiastically engage new technology that changes things, all while stoking fear of the worst possible scenarios we can imagine for what that technological change could take from us — even as we use it more and more. Only human, I suppose. Our contradictions give us a certain depth. As the writer David Milch once said: “any good poem or any good human being in any good story spins against the way it drives.” Our collective natural drive is change. In truth, change shapes our reality, and our ability to change defines us. Change is the perpetual motor of the human condition and the fundamental state of our biological existence, all the way down to the atom. It is then only natural that the very essence of creativity is change. Harnessing change within the specificity of an active creative effort requires ingenuity and, more than anything else, originality. Original thought remains a distinctly human quality. Movies like Barbie and Oppenheimer and series like The Bear and Reservation Dogs show us that human originality succeeds. It’s thanks to that originality that artistic works can become works of art.