This guide shows you how to extend an existing video by generating new content that continues from the end of the original. You provide a video URL and a text prompt, and the AI model produces a new video segment that picks up where the original left off. This is useful for building longer narratives, adding follow-up scenes, or expanding short clips.

What You Will Build

Video Extension

Continue an existing video with new AI-generated content

Narrative Building

Add follow-up scenes to develop a story over multiple segments

Seamless Transitions

The AI model maintains visual continuity from the source video

Iterative Creation

Extend videos repeatedly to build longer sequences

Before You Begin

Make sure you have:
  • A Pictory API key (get one here)
  • Node.js or Python installed on your machine
  • The required packages installed
  • A publicly accessible URL of the video you want to extend
npm install axios

Step-by-Step Guide

Step 1: Set Up Your Request

Prepare your API credentials, the source video URL, and a prompt that describes what should happen next in the video.
import axios from "axios";

const API_BASE_URL = "https://api.pictory.ai/pictoryapis";
const API_KEY = "YOUR_API_KEY"; // Replace with your actual API key

// Video extension configuration
const videoRequest = {
  prompt: "The woman sits down on the grass as children run and play around her in a wide shot",
  extendVideoUrl: "https://example.com/videos/woman-walking-in-park.mp4",
  model: "pixverse5.5",
  aspectRatio: "9:16",
  duration: "8s"
};
The extendVideoUrl must point to a publicly accessible video file. This parameter cannot be used together with firstFrameImageUrl or referenceImageUrls.
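These constraints are easy to check before making a network call. A minimal pre-flight sketch (the helper name is hypothetical; the rules mirror the prompt length limit and the mutual exclusion described in this guide):

```javascript
// Hypothetical pre-flight check for an extension request.
// Enforces two rules from this guide: prompt length (5-5,000 characters)
// and the mutual exclusion of extendVideoUrl with image-based inputs.
function validateExtensionRequest(req) {
  if (!req.prompt || req.prompt.length < 5 || req.prompt.length > 5000) {
    throw new Error("prompt must be between 5 and 5,000 characters");
  }
  if (req.extendVideoUrl && (req.firstFrameImageUrl || req.referenceImageUrls)) {
    throw new Error(
      "extendVideoUrl cannot be combined with firstFrameImageUrl or referenceImageUrls"
    );
  }
  return req;
}
```

Calling this before submitting lets you fail fast with a clear message instead of waiting for an API error.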

Step 2: Submit the Video Extension Request

Send the request to the AI Studio video generation endpoint. The API processes the source video and generates a new segment that continues from it.
async function extendVideo() {
  try {
    console.log("Submitting video extension request...");

    const response = await axios.post(
      `${API_BASE_URL}/v1/aistudio/videos`,
      videoRequest,
      {
        headers: {
          "Content-Type": "application/json",
          Authorization: API_KEY,
        },
      }
    );

    const jobId = response.data.data.jobId;
    console.log("Video extension started.");
    console.log("Job ID:", jobId);

    return jobId;
  } catch (error) {
    console.error("Error submitting request:", error.response?.data || error.message);
    throw error;
  }
}

Step 3: Poll for the Result

Check the job status at regular intervals until the extended video is ready.
async function waitForVideo(jobId) {
  console.log("\nPolling for video extension result...");

  while (true) {
    const response = await axios.get(
      `${API_BASE_URL}/v1/jobs/${jobId}`,
      {
        headers: { Authorization: API_KEY },
      }
    );

    const data = response.data;
    const status = data.data.status;
    console.log("Status:", status);

    if (status === "completed") {
      console.log("\nVideo extended successfully!");
      console.log("Video URL:", data.data.url);
      console.log("Duration:", data.data.duration);
      console.log("Dimensions:", data.data.width, "x", data.data.height);
      console.log("AI Credits Used:", data.data.aiCreditsUsed);
      return data;
    }

    if (status === "failed") {
      throw new Error("Video extension failed: " + JSON.stringify(data));
    }

    // Wait 15 seconds before polling again
    await new Promise(resolve => setTimeout(resolve, 15000));
  }
}

// Run the complete workflow
extendVideo()
  .then(jobId => waitForVideo(jobId))
  .then(() => console.log("\nDone!"))
  .catch(error => console.error("Error:", error));
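The loop above polls at a fixed 15-second interval until the job completes or fails. For long-running jobs you may prefer a bounded schedule; here is a minimal sketch (the function names, timing values, and deadline are assumptions, not part of the API):

```javascript
// Hypothetical polling schedule: start at 5s, double per attempt, cap at 60s.
function nextDelayMs(attempt, baseMs = 5000, capMs = 60000) {
  return Math.min(baseMs * 2 ** attempt, capMs);
}

// Hypothetical overall deadline check, so a stuck job does not poll forever.
function withinDeadline(startedAtMs, nowMs, deadlineMs = 10 * 60 * 1000) {
  return nowMs - startedAtMs < deadlineMs;
}
```

In waitForVideo, you could replace the fixed sleep with nextDelayMs(attempt) and exit the loop with an error once withinDeadline returns false.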

Understanding the Parameters

| Parameter | Type | Required | Default | Description |
| --- | --- | --- | --- | --- |
| prompt | string | Yes | — | A text description of what should happen next in the video. Must be between 5 and 5,000 characters. |
| extendVideoUrl | string | No | — | A publicly accessible URL of the video to extend. Must be a valid URI. Cannot be used together with firstFrameImageUrl or referenceImageUrls. |
| model | string | No | pixverse5.5 | The AI model to use for generation. Supported values: veo3.1, veo3.1_fast, pixverse5.5. See Generate Video API for model capabilities and pricing. |
| aspectRatio | string | No | First supported ratio of the selected model | The output aspect ratio. Valid values depend on the model. For example, pixverse5.5 supports 16:9, 9:16, 1:1, 3:4, 4:3, while veo3.1 supports 16:9, 9:16. |
| duration | string | No | First supported duration of the selected model | The duration of the extended segment. Valid values depend on the model. For example, pixverse5.5 supports 5s, 8s, 10s, while veo3.1 supports 4s, 6s, 8s. |
| webhook | string | No | — | A URL to receive a POST notification when the job completes. Must be a valid URI. |
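As an alternative to polling, the webhook parameter delivers a completion notification to a URL you host. A sketch of a request body using it (the callback URL is a placeholder for your own endpoint):

```javascript
// Same request shape as Step 1, plus a webhook so Pictory notifies your
// server when the job finishes, removing the need for the polling loop.
const videoRequestWithWebhook = {
  prompt: "The woman stands up and walks toward the pond",
  extendVideoUrl: "https://example.com/videos/woman-walking-in-park.mp4",
  model: "pixverse5.5",
  aspectRatio: "9:16",
  duration: "8s",
  webhook: "https://your-server.example.com/pictory-callback", // placeholder URL
};
```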

Building Multi-Segment Videos

You can extend videos iteratively to build longer sequences. Each extension generates a new video segment that continues from the previous one. Example workflow:
  1. Generate the first video segment using a text prompt.
  2. Retrieve the video URL from the completed job.
  3. Use that URL as the extendVideoUrl in a new request with a prompt for the next scene.
  4. Repeat to build a multi-segment video.
Each extension request generates a standalone video file, not an appended version of the original. To combine segments into a final video, you will need to concatenate them using a video editing tool or library after all segments are generated.
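The four steps above can be sketched as a loop. This version takes the submit and wait steps as injected functions so the chaining logic stays separate from the HTTP calls: submit would wrap Step 2's request (adapted to accept a request body) and wait is Step 3's poller. The structure is an illustration, not an official client:

```javascript
// Sketch: chain extensions from a list of scene prompts.
// `submit(request)` should return a job ID (Step 2); `wait(jobId)` should
// return the completed job payload (Step 3). Both are passed in.
async function buildSequence(firstVideoUrl, scenePrompts, { submit, wait }) {
  const segmentUrls = [firstVideoUrl];
  let currentUrl = firstVideoUrl;

  for (const prompt of scenePrompts) {
    const jobId = await submit({
      prompt,
      extendVideoUrl: currentUrl, // continue from the previous segment
      model: "pixverse5.5",
      aspectRatio: "9:16",
      duration: "8s",
    });
    const result = await wait(jobId);
    currentUrl = result.data.url; // the next extension starts here
    segmentUrls.push(currentUrl);
  }

  // Each URL is a standalone segment; concatenate them afterwards.
  return segmentUrls;
}
```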

Tips for Extending Videos

  • Describe the transition naturally. Write your prompt as a continuation of the action. For example, if the source video shows someone walking, your prompt might say “She stops and turns to look at the sunset.”
  • Maintain consistency. Use the same model and aspectRatio as the original video for the best visual continuity between segments.
  • Keep prompts grounded. Reference what is happening at the end of the source video. Abrupt changes in subject or setting may produce less coherent results.
  • Plan your narrative. When building multi-segment sequences, outline the full story before generating. This helps you write prompts that flow naturally from one segment to the next.
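One way to apply the consistency tip in code is to derive each new request from the previous one, so model, aspectRatio, and duration carry over unchanged between segments. The helper name is hypothetical:

```javascript
// Hypothetical helper: reuse the previous request's settings so only the
// prompt and source video change between segments.
function nextSegmentRequest(previousRequest, newVideoUrl, newPrompt) {
  return {
    ...previousRequest,
    prompt: newPrompt,
    extendVideoUrl: newVideoUrl,
  };
}
```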

Next Steps