YouTube Shorts to integrate Veo, Google’s AI video model

The main attraction of YouTube’s Made On YouTube event on Wednesday morning was, you guessed it, artificial intelligence. The company announced that it is integrating Google DeepMind's AI video generation model, Veo, into YouTube Shorts, letting creators generate high-quality backgrounds as well as six-second clips.

At Google's I/O 2024 developer conference, Veo was unveiled as a cutting-edge video generation model. The technology competes directly with OpenAI’s Sora, as well as rival video generation models from companies such as Pika, Runway, and Irreverent Labs. It can create 1080p video clips in various cinematic styles.

Veo in Shorts is meant to be a significant upgrade from YouTube’s AI-powered “Dream Screen” feature, which launched in 2023 to allow creators to generate backgrounds in Shorts using text prompts. YouTube believes the Veo model will enhance the video background generation process even further, enabling creators to produce more impressive clips. One of the key advantages of Veo is its capability to edit and remix previously generated footage.

Additionally, this will be the first time creators can generate six-second-long standalone video clips for Shorts. When creators select "Create" and enter a prompt, Dream Screen will generate four images. They then select one of the images to turn into a video.

The new capability will help creators add filler scenes to their videos, allowing for smoother transitions and tying the overall story together. For example, creators can include scenes such as the New York City skyline at the beginning of a sightseeing video to add more context.

The company will integrate Veo into Dream Screen later this year. Creations in Shorts will be watermarked with DeepMind’s SynthID technology to identify them as AI-generated.

Veo in Dream Screen

In addition to the Veo integration, the company announced a slew of new features coming to YouTube, including "Jewels," digital gifts that viewers can send during livestreams. The feature appears similar to TikTok's "Gifts" and is aimed at giving viewers new ways to interact with creators and actively participate in livestreams. It will start rolling out to vertical livestreams in the U.S.

YouTube also expanded its automatic dubbing tool to support more languages, including French, Italian, Portuguese, and Spanish. Notably, it’s testing "expressive speech," or the ability to transfer a creator’s tone, intonation, and ambient sounds into dubbed audio, creating a more natural experience.

The company is expanding the availability of its Community hubs to more channels, letting creators and followers interact with one another by sharing posts and replying to each other.

It's also introducing its Hype feature to additional markets. YouTube initially tested the tool in Brazil, Turkey, and Taiwan, allowing users to express support for their favorite creators. Videos with the most hype points are showcased on a special leaderboard.

Additionally, the company revealed during today’s event that creators can now use AI to help brainstorm video ideas within YouTube Studio. They can also produce AI-generated thumbnails and respond to followers with new AI-assisted comments.