Pressman Film CEO Sam Pressman is diving into artificial intelligence (AI) to explore the technology’s use in storytelling.
Pressman’s (Daliland) short film In Search Of Time, co-created by Pierre Zandrowicz and Mathew Tierney, premiered at Tribeca Festival on June 8 and is, they claim, the first AI-generated film to play at a major film festival.
It combines imagery from an iPhone with open-source AI platform Stable Diffusion to create a meditation on memory and loss in honour of Pressman’s father Ed Pressman (American Psycho, The Crow, Wall Street), the pioneering independent producer who died in January aged 79.
Pressman and Tierney are also behind immersive experience Human After All: The Surreal Matrix Of AI, Art, And The Motion Picture, a symposium about the intersection of AI, art and cinema which can be seen at The Canvas 3.0 at Oculus NYC through June 12. It presents conversations, events and installations including a roundtable discussion on the potential of AI in cinema involving Pressman, academics and other filmmakers, and a talk about AI, law and intellectual property.
Pressman Film also produced the feature thriller Catching Dust, starring Erin Moriarty and Jai Courtney, which premieres at Tribeca Festival on June 11. The company is in post-production on the reimagining of The Crow directed by Rupert Sanders and starring Bill Skarsgård and FKA Twigs.
What’s been the appeal of AI to you?
Sam Pressman: More than anything it’s wanting to understand. To just completely reject this emerging space to me feels like a huge loss. As an artist it’s really inspired me, and the more that we spoke about AI and its actual technical abilities, the way that it has completely revolutionised the singular creator, [the more we wanted to experiment]. Matt and Pierre made this film that’s the first film made with AI to be welcomed to a major film festival. That alone is indicative of how far into uncharted territory working with AI to make art can take you.
The “why” [of exploring AI] is to see what positive work can be made and to embrace that it is still an artist engaging with the machine, as terrifying as that is. But the only way to really understand it is to play with it.
How did In Search Of Time come about?
Tierney: Sam introduced me to Pierre [Zandrowicz], who was a founder at Atlas V, the French VR, XR and AR company. Atlas V has two other projects at Tribeca Immersive and Pierre’s been in the immersive space for a long time. Sam and I have had many, many long conversations about the potential of AI and cinema for a good year and a half. Sam and Pierre met and after five minutes they said, ‘Let’s make a film’ for no other reason than to do it.
Our intention behind it was to avoid the tropes that people have adopted with AI, which is to explore sci-fi. We wanted to tell the most human story we could, so we decided upon memory and then we made it a story about childhood memory and how we have to grapple with ageing and time disappearing from our lives.
Pressman: My father was in the hospital at the time we started working on it. There was an earlier cut but we ended up with it being from the perspective of the child and not going into ageing. It’s only six minutes, it’s a slice of life, like a poem that you just melt into. But in a lot of ways In Search Of Time opens up another possibility: there’s great potential for doing a series of these projects because shared memories, especially through great cinema, transport us into something that is universal. This technology is a utility that, in the hands of someone’s imagination, can unlock a much more democratic, open form of production.
Tell us about the imagery
Pressman: We use the metaphor at the beginning of the film of a tree stump and how the intricate layers of our memory are like the rings of a tree. And then we have iPhone footage of a boy playing on a tree stump and we used Stable Diffusion and text prompting to create what we wanted. The tree stump becomes a complete, painted forest. The hole in the tree stump becomes this beautiful waterfall, characters start populating it, you see squirrels running around and as soon as you realise that there’s this beautiful painting unfolding, the image is gone, like a memory. It’s little ideas triggering memories. I don’t think that would have been possible six months ago.
What camera did you use?
Tierney: We used an iPhone. We wanted to prove that we could do it so a kid in Oklahoma with a computer and a camera phone who’s willing to learn a little bit can do something similar. That was our goal: to prove that you can make cinema if you have an idea and a few tools.
Did you shoot original footage?
Pressman: We didn’t shoot [content] for the film; it was digital memories. We all have this canvas that has been receding into a netherworld of our clouds, so we wanted to see how we could reanimate it and make it art and give it that spirit.
In doing so it does feel elusive, and it was in a lot of ways a meditation on loss: both loss of the naive, beautiful childhood experience of running on a beach and also the feeling of losing those that we love. A computer and a photograph can bring back the memories of that time.
How does the AI diffusion model work?
Tierney: The tool we used most is called Stable Diffusion and the beauty is it’s open source. It’s built this remarkable community of people sharing everything they’ve learned, everything they’re building. You can pull from different models that different users and creators have built. You can go to the community and say you need this or that tool.
Pressman: It takes each frame and processes it however you want to augment the image. But oftentimes that yields a very disparate frame-to-frame experience, which is inherently very surreal but doesn’t feel super coherent. So it took a lot of work with anti-flicker finishing products. Frames A, B and C are radically different because light fell on the subject in one frame in a totally different way than in the frame before it, and the machine-learned model doesn’t know how to make them consistent, so you reprocess it and, as with a sculpture, you continue to refine it.
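To make the frame-by-frame process Pressman describes concrete, here is a minimal sketch of per-frame img2img stylisation with Stable Diffusion via the open-source diffusers library. The model checkpoint, prompt, strength value and file paths are illustrative assumptions rather than details of the filmmakers’ actual pipeline; what it does show is why denoising each frame independently produces exactly the flicker he mentions.

```python
# Minimal per-frame img2img sketch with Stable Diffusion (diffusers library).
# The checkpoint, prompt, paths and strength below are illustrative
# assumptions, not the filmmakers' actual settings.
from pathlib import Path

import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompt = "a lush painted forest, storybook style"  # hypothetical text prompt
out_dir = Path("stylised")
out_dir.mkdir(exist_ok=True)

for frame_path in sorted(Path("frames").glob("*.png")):
    frame = Image.open(frame_path).convert("RGB").resize((512, 512))
    # Re-seeding per frame keeps the starting noise identical, which tempers
    # (but does not remove) flicker: each frame is still denoised
    # independently, so a lighting change in the source yields a different
    # output -- the inconsistency a deflicker pass must later smooth out.
    generator = torch.Generator("cuda").manual_seed(42)
    result = pipe(
        prompt=prompt,
        image=frame,
        strength=0.5,       # below 1.0, some of the source frame survives
        guidance_scale=7.5,
        generator=generator,
    ).images[0]
    result.save(out_dir / frame_path.name)
```

Lower strength values stay closer to the source footage; higher values hand more of the image over to the text prompt, which is how a tree stump can dissolve into a painted forest.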
And the fascinating thing is how far that’s come in six months, because the first version we made felt beautiful and surreal although it felt like it had the hallucinations of the machine.
What was the idea behind Human After All: The Surreal Matrix Of AI, Art, And The Motion Picture, at Oculus NYC?
Tierney: It was based on all these conversations Sam and myself were having. We realised there are incredible technologists, scientists, material scientists, artists and filmmakers talking about these things. They may be on Twitter, and occasionally conversations overlap, but rarely are they in the same room. Sam and I said the best thing that we’ve done together is when we go out into the world, talk to people and learn from disparate fields. And we thought we should just ragtag it together, invite people we know, see who will come and just put creators and technologists in the same room. I think we’re all just going to learn something from it and a lot of things will get built on these connections that exist in person.
What are your next plans with AI?
Pressman: There are a couple of filmmakers we’re developing projects with who have used [AI programmes and deep learning models] DALL-E and Midjourney to create storyboards and pre-visualisations. I think that’s an immediate utility that’s already showing value. I think people working in VFX and post-production are already dependent on a lot of AI that’s built into editing and post-production software, whether that’s motion tracking or other applications.
There’s so much at stake with how AI will impact Hollywood and creators. It’s a key part of the Hollywood guilds’ contract negotiations, so what do you see as the creator’s roadmap with AI?
Pressman: Where things are undefined is the question of how actors’ likenesses will be appropriated. That’s a very dangerous space and there’s great reason why SAG-AFTRA has a lot of questions about it: because an actor’s likeness can be replicated with such ease, it calls into question the ownership of their own person. So the question is how filmmakers use these various tools while respecting both the artists and the subjects in the films as humans.
Tierney: Our main intention was to say: instead of relying on these tools to save us some time in the writing or to make things for us, let’s do the opposite and write everything ourselves, direct everything ourselves, do the sound design, do the score and then just take one of these tools.
Let’s just take the base thing that everyone has living on their phone, and use the tool to anonymise it and make it a universal story. One of Sam’s ideas was about Coachella [the annual US outdoor arts and music festival]. People have millions of memories that they capture on their phones, and they just live on a drive somewhere, and then they’re lost. That’s an example of how we could take a major festival, take all the footage that’s posted on Instagram, use a tool like Stable Diffusion, and create these beautiful new memories and data maps of what this festival really was, and make art out of all the things that end up on the cutting-room floor, whether that floor is your phone or a studio. Everyone’s capturing memories all day throughout their lives and we have such a cool opportunity to really expand and explore all of that.