New FREE AI Video Generator & Feature Length AI Films!

Theoretically Media
7 Mar 2024 · 13:38

TLDR: The video introduces Haiper, a new AI video generation platform developed by former Google DeepMind alumni, offering free text-to-video and image-to-video conversion. It showcases features like video repainting and a range of templates for video generation. The platform's interface is user-friendly, and while HD generations are limited to roughly two-second clips, workarounds for extending them are suggested. The video also discusses research into full-length AI-generated films and touches on recent tensions between Midjourney and Stability AI.

Takeaways

  • 🚀 A new AI video generator platform called Haiper has been introduced, offering free usage for users to create videos from text or images.
  • 🤖 Haiper is developed by Yishu Miao and Ziyu Wang, former Google DeepMind employees, and has raised over $19 million in funding.
  • 🎬 The platform supports various video generation methods, including text-to-video, image-to-video, and video repainting.
  • 🖌️ Haiper's interface is user-friendly, with options like HD video generation and image animation; video extension is listed as coming soon.
  • 📈 Haiper's HD video generation is limited to around 2 seconds per clip, but there are workarounds to extend the duration using video editing software.
  • 📊 The community feed on Haiper showcases the diverse capabilities of the platform, with outputs ranging from short clips to near Sora-quality scenes.
  • 🧠 Research on generating full-length AI films is underway, with MovieLLM using GPT-4 and text-to-image models to create detailed scripts and visuals.
  • 🎥 MovieLLM's process involves style immobilization to maintain visual consistency throughout the generated film, drawing on large datasets like MovieNet for reference.
  • 💡 The AI video generation field is rapidly evolving, with platforms like Midjourney and Stability AI contributing to advancements and occasional industry drama.
  • 🔥 Midjourney is currently training its video model, while Stability AI is set to release Stable Diffusion 3, indicating continuous innovation in the space.

Q & A

  • What is the new AI video generator mentioned in the transcript?

    -The new AI video generator mentioned is called Haiper, a text-to-video platform developed by Yishu Miao and Ziyu Wang, two former Google DeepMind alumni.

  • How much funding did Haiper raise at its inception?

    -Haiper raised over $19 million in funding, indicating its potential as a significant contender in the AI video generation market.

  • What are the main features of Haiper's video generation capabilities?

    -Haiper offers text-to-video generation, image-to-video animation, and video repainting, allowing users to create animated content from text prompts or images and even restyle existing videos.

  • What is the significance of the 'image to video' feature in Haiper?

    -The 'image to video' feature is significant as it enables users to generate animated videos from static images, adding movement and life to the original content.

  • What are the two video quality options provided by Haiper?

    -Haiper provides two video quality options: Full HD and Standard Definition. The Full HD option generates higher-quality but shorter videos, while the Standard Definition option allows for slightly longer videos with added templates and styles.

  • How can users extend the length of videos generated by Haiper?

    -Users can extend the length of their videos by bringing them into a non-linear editor such as Premiere Pro or DaVinci Resolve and slowing the clip down with optical flow (frame interpolation) enabled, producing a longer video sequence.

  • What is the current limitation of Haiper's Full HD video generation?

    -The current limitation of Haiper's Full HD video generation is that the generated clips tend to be around 2 seconds long, which is quite short for narrative or detailed content.

  • What is the role of GPT-4 in the research paper on AI-generated feature-length movies?

    -In the research paper, GPT-4 is used for the initial breakdown of the film, generating the theme, overview, movie style, frame-level descriptions, and characters. It also assists in generating the scenes and dialogue for each scene.

  • How does the style immobilization process work in the AI-generated movie research?

    -The style immobilization process involves extracting keywords from the chapters, characters, and plot summary, running them through Stable Diffusion to generate consistent scenes, characters, and locations, and then embedding a consistent style throughout the film.

  • What was the reported issue with Midjourney's service?

    -Midjourney experienced a 24-hour outage due to bot-like behavior from paid accounts, which was traced back to Stability AI employees scraping images and text prompts; this led to a ban on all Stability AI employees using the Midjourney service.

  • What is the current status of Midjourney's video model and Stability AI's Stable Diffusion 3?

    -Midjourney is still training its video model, which is reported to be quite good, while Stability AI is expected to release Stable Diffusion 3 to the public imminently.

Outlines

00:00

🎥 Introduction to the Haiper AI Video Generator

The paragraph introduces a new, free AI video generator platform called Haiper, developed by Yishu Miao and Ziyu Wang, former Google DeepMind employees. The platform has raised over $19 million and offers text-to-video and image-to-video generation, as well as video repainting. The interface is user-friendly, with options for HD video generation, image animation, and video extension (coming soon). The author shares their experience using Haiper with a prompt about creepy dolls in a factory, comparing the results with another platform called Pika, and notes that while the HD videos are limited to about 2 seconds, they can be extended using video editing software.
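For anyone who wants to reproduce the slow-down trick outside of Premiere or Resolve, here is a minimal scriptable sketch that uses ffmpeg's motion-interpolation filter from Python; the filenames and exact filter settings are illustrative assumptions, not something shown in the video.

```python
import subprocess

# Hedged sketch: stretch a ~2-second clip to ~4 seconds and let ffmpeg's
# minterpolate filter (motion-compensated interpolation, i.e. an optical-flow
# style fill) synthesize the in-between frames. "haiper_clip.mp4" is a
# placeholder filename.
subprocess.run([
    "ffmpeg", "-i", "haiper_clip.mp4",
    "-vf", "setpts=2.0*PTS,minterpolate=fps=24:mi_mode=mci",  # half speed, then interpolate back to 24 fps
    "haiper_clip_extended.mp4",
], check=True)
```

The same effect comes from Premiere's or Resolve's speed controls with Optical Flow selected as the time-interpolation method; the ffmpeg route is simply easier to batch over many clips.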

05:00

🚀 Advancements in AI Video Generation

This paragraph discusses the latest research in AI video generation, specifically the MovieLLM paper, which leverages GPT-4 and text-to-image models to generate detailed scripts and visuals for movies. The process involves generating a film's plot, style, character descriptions, and scene details, followed by creating dialogue. The style immobilization process ensures consistency throughout the film by using keywords and Stable Diffusion to generate key frames. The paper does not provide video examples, but the author notes the potential for feature-length AI-generated movies in the future. The paragraph also touches on the challenges of training models on long videos and the use of the MovieNet dataset for better film synopses.
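To make the two-stage flow more concrete, below is a heavily simplified Python sketch of the kind of pipeline described: a large language model for the script breakdown, then Stable Diffusion for style-consistent key frames. The function names, prompts, and model IDs are illustrative assumptions, not code from the MovieLLM paper.

```python
from openai import OpenAI
from diffusers import StableDiffusionPipeline

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def breakdown_film(theme: str) -> str:
    """Stage 1: ask GPT-4 for an overview, visual style, chapters, and characters."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{
            "role": "user",
            "content": (
                "Write a movie overview, visual style description, chapter list, "
                f"and main character descriptions for a film about: {theme}"
            ),
        }],
    )
    return response.choices[0].message.content

def immobilize_style(style_keywords: list[str]):
    """Stage 2: render the same style keywords repeatedly so scenes, characters,
    and locations keep a consistent look (approximating the paper's
    'style immobilization' idea with a vanilla Stable Diffusion pipeline)."""
    pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
    prompt = ", ".join(style_keywords) + ", consistent cinematic style"
    return pipe(prompt, num_images_per_prompt=4).images

outline = breakdown_film("a detective story aboard a generation ship")
key_frames = immobilize_style(["noir lighting", "retro-futuristic interiors", "trench-coated detective"])
```

The real paper fixes a style embedding rather than re-prompting, but the staged structure (text breakdown first, image generation keyed to that breakdown second) is the point being illustrated.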

10:02

💡 Midjourney and Stability AI Conflict

The final paragraph covers recent developments at Midjourney and Stability AI. Midjourney's CEO, David Holz, reported a 24-hour outage due to bot-like behavior from paid accounts, which he attributed to Stability AI employees scraping images and text prompts. This led to a ban on Stability AI employees using Midjourney's service. Nick St. Pierre and Stability AI CEO Emad Mostaque then engaged in a public exchange on Twitter, with Mostaque defending Stability's data usage and model performance. The paragraph concludes with a call for a resolution between the two companies, as Midjourney continues to train its video model and Stability AI prepares to release Stable Diffusion 3.

Keywords

💡AI video generator

An AI video generator is a software platform that uses artificial intelligence to create videos from text or image inputs. In the context of the video, it refers to the new free platform called Haiper, developed by Yishu Miao and Ziyu Wang, former Google DeepMind alumni. The generator converts text prompts into video clips and can also animate still images, showcasing the advancement of AI technology.

💡Haiper

Haiper is a new video generation platform that allows users to create videos for free. It was developed by Yishu Miao and Ziyu Wang, who previously worked at Google DeepMind. The platform has raised over $19 million and offers features like text-to-video conversion, image animation, and video repainting. It is seen as a significant contender in the AI video generation space.

💡Video repainting

Video repainting is a process within AI video generation where an existing video is altered visually, giving it a new aesthetic or style. In the context of the video, this feature is showcased by Hyper, where a video of someone pouring a smoothie mixture is transformed into a watercolor koi fish scene, demonstrating the creative potential of AI in video editing and manipulation.

💡Interface

In the context of software and technology, an interface refers to the point of interaction between a user and a computer program or system. A user-friendly interface allows for easy navigation and operation of the software. In the video, the interface of Hyper is described as straightforward, with options for light and dark modes and various settings for video generation.

💡Standard definition and full HD

Standard definition (SD) and full high definition (HD) are terms used to describe the quality of video resolution. SD offers a lower resolution, while HD provides a higher, more detailed image quality. In the video, the author compares the output of the AI video generator in both SD and HD formats, noting that HD videos are of higher quality but are limited to shorter durations, whereas SD videos can be longer but may appear less dynamic.

💡Community Feed

A community feed is a shared space on a platform where users can view and interact with content created by others. It serves as a hub for showcasing user-generated content and discovering new creations. In the context of the video, Haiper's community feed is explored to demonstrate the variety and capabilities of the AI video generator.

💡MovieLLM

MovieLLM (LLM as in large language model) refers to a research project that leverages AI to generate detailed scripts and corresponding visuals for movies based on a given prompt. It uses GPT-4 to break down the film's theme, style, and characters, and then generates scenes, dialogue, and key frames for the movie. The work is an example of how AI can enhance long-video understanding and potentially create feature-length AI-generated movies.

💡Style immobilization

Style immobilization is a process in AI-generated content creation where the visual style of the output is locked into a consistent appearance. This is achieved by extracting keywords from the content and running them through AI models to generate visuals that maintain a uniform style throughout the video. In the context of the video, this process is crucial for creating AI-generated movies that have a consistent look and feel.

💡Midjourney

Midjourney is an AI image generation company mentioned in the video. The company experienced a 24-hour outage caused by bot-like behavior from paid accounts, which was allegedly linked to employees of another company, Stability AI. The incident led to a ban on Stability AI employees using the Midjourney service.

💡Stable Diffusion 3

Stable Diffusion 3 is an upcoming version of the text-to-image model developed by Stability AI. It is expected to offer improved image quality and new generation capabilities, and it is anticipated to be released to the public soon, indicating the ongoing development and advancement of AI technologies for content creation.

💡Creative AI

Creative AI refers to the application of artificial intelligence in the domain of creative content generation, such as producing videos, music, art, and more. The video discusses various platforms and models that fall under this category, showcasing the rapid evolution and potential of AI in transforming creative processes and outputs.

Highlights

A new AI video generator platform called Haiper is introduced, offering free usage for users.

Haiper is developed by Yishu Miao and Ziyu Wang, former Google DeepMind alumni.

The platform has raised over $19 million, indicating its potential as a significant player in the AI video generation field.

Haiper offers text-to-video conversion with smooth and impressive animations.

Image-to-video conversion is also possible, allowing users to generate animations from static images.

A unique feature of Haiper is video repainting, which changes the style of an existing video.

The user interface of Haiper is straightforward and offers a dark mode for user comfort.

Haiper provides various options for video generation, including HD and standard-definition outputs.

The platform will let users extend their video length, a feature that is not yet enabled but is coming soon.

Haiper's community feed showcases the diverse capabilities and creativity enabled by the platform.

The quality of Haiper's AI-generated videos is compared to that of Sora, indicating high visual standards.

The AI video generation field is advancing towards feature-length movies with research into long video understanding.

MovieLLM is a research project that leverages GPT-4 and text-to-image models to generate scripts and visuals for movies.

The style immobilization process in MovieLLM ensures consistent visual themes and characters throughout the generated film.

The MovieNet dataset, consisting of data from over 1,000 movies and 60,000 trailers, is used to enhance the quality of AI-generated film synopses.

Midjourney, another AI platform, experiences a 24-hour outage due to bot-like behavior from paid accounts.

Stability AI is implicated in the outage, leading to a ban on its employees using Midjourney's service.

The drama between Midjourney and Stability AI unfolds publicly on Twitter, with both parties exchanging statements.

Despite the controversy, Midjourney continues to train its video model, while Stability AI prepares to release Stable Diffusion 3.