Video Summary Use Case
The Video Summary/Highlights API creates a summary of a long video. For example, if you have a 30-minute video, you can use our AI to condense it into a much shorter video (say, 2 or 5 minutes).
Watch the video on how to generate a video from highlights.
To summarize a video, follow the steps below:
- STEP 1: Call Transcription API: Generate the transcription of the uploaded video by passing the URL at which the video file is hosted in the request body of the /v2/transcription API. The transcription data is sent to the callback (webhook) URL, or it can be fetched from the GET Job API.
curl --location 'https://api.pictory.ai/pictoryapis/v2/transcription' \
--header 'Authorization: YOUR_AUTH_TOKEN' \
--header 'X-Pictory-User-Id: YOUR_USER_ID' \
--header 'Content-Type: application/json' \
--data '{
  "fileUrl": "PUBLIC_STREAMABLE_MP4_URL",
  "mediaType": "video",
  "language": "en-IN",
  "webhook": "<CALLBACK_URL>"
}'
{ "data": { "jobId": "\<JOB_ID>" }, "success": true }
Call the GET Job API to get the transcription data. The transcription data is also sent to the webhook URL:
{
  "success": true,
  "data": {
    "transcript": [
      {
        "uid": "",
        "speakerId": "",
        "word": [
          {
            "uid": "ef4b1282-5b34-45a2-984d-d84e6700756e",
            "word": "Once",
            "start_time": 0,
            "end_time": 0.88,
            "sentence_index": 0,
            "is_pause": true,
            "pause_size": "small",
            "state": "active",
            "speakerId": 1
          },
          {...},
          {...}
        ]
      },
      {...},
      {...}
    ],
    "txt": "Once Again",
    "srt": "1\n00:00:00,880 --> 00:00:02,960\nOnce again",
    "vtt": "WEBVTT\n\n1\n00:00:00.880 --> 00:00:02.960\n- Once again, ",
    "job_id": "4eac6816-9435-49d0-ba61-db226ec5cf0c"
  }
}
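If you prefer polling over the webhook, a minimal Python sketch is shown below. The job-status path used here (/v2/jobs/{jobId}) is an assumption based on the pattern of the other endpoints; confirm the exact URL in the GET Job API reference.
import time

import requests

API_BASE = "https://api.pictory.ai/pictoryapis"
# Assumption: the GET Job endpoint follows the /v2/jobs/{jobId} pattern.
# Check the GET Job API reference for the exact path.
JOB_URL = API_BASE + "/v2/jobs/{job_id}"

HEADERS = {
    "Authorization": "YOUR_AUTH_TOKEN",
    "X-Pictory-User-Id": "YOUR_USER_ID",
}

def wait_for_transcription(job_id, poll_seconds=10):
    """Poll the GET Job API until the transcription data is available."""
    while True:
        response = requests.get(JOB_URL.format(job_id=job_id), headers=HEADERS)
        response.raise_for_status()
        data = response.json().get("data", {})
        if "transcript" in data:  # the transcript appears once the job is complete
            return data
        time.sleep(poll_seconds)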
- STEP 2: Call Highlights API: Highlights for the video are obtained by passing the transcript array from Step 1 to the Highlights API. This API also requires you to pass the duration for which highlights are needed.
curl --location 'https://api.pictory.ai/pictoryapis/v2/transcription' \
--header 'Authorization: YOUR_AUTH_TOKEN' \
--header 'X-Pictory-User-Id: YOUR_USER_ID' \
--header 'Content-Type: application/json' \
--data '{
"transcript": "{{transcript_settings}}", //obtained from GET JOB of transcription API
"highlight_duration": 30,
"webhook": "<CALLBACK_URL>"
}'
Call the GET Job API to get the highlights data (the transcript with the selected words flagged as highlights). The same data is also sent to the webhook URL:
//Sample response
{
  "job_id": "b292f6ca-fbf2-4380-93e1-95d26b5a563e",
  "success": true,
  "data": {
    "transcript": [
      {
        "uid": "6e12aac1-0dab-4229-90aa-238c62846e4c",
        "words": [
          {
            "uid": "d9f11114-5165-42d7-af5f-3b62e53613dc",
            "word": "",
            "start_time": 0,
            "end_time": 0.5,
            "sentence_index": 0,
            "is_pause": true,
            "pause_size": "small",
            "state": "active"
          },
          {
            "uid": "279fbfb5-5af4-4022-b694-65253056323e",
            "word": "",
            "start_time": 0.5,
            "end_time": 0.91,
            "sentence_index": 0,
            "is_pause": true,
            "pause_size": "small",
            "state": "active"
          },
          {
            "uid": "4039e0b1-a830-45d6-bf69-5818a7277779",
            "word": "Once",
            "start_time": 0.91,
            "end_time": 1.24,
            "sentence_index": 0,
            "highlight": true
          }
        ]
      }
    ]
  }
}
- STEP 3: Create Text Sentences: As of now, the Summary API returns speech-to-text output as an array of words (instead of full sentences). Each word object contains the start_time and end_time at which the word should be displayed in the video.
You need to write logic that groups words into sentences and records the start time of each sentence (the start time of its first word) and the end time of each sentence (the end time of its last word). A sample of the response is given above, and a sketch of this sentence-grouping logic is shown after this list.
- Step 4: Call Text to Video APIs: Once sentences are created, follow the Text to Video steps and call the Storyboard API to add subtitles to the original video. To add subtitles to a scene you need to:
- Divide the original video into different scenes.
- Add subtitles to each scene.
- The Storyboard API has a scenes array, and for each scene you need to pass the following parameters (a sketch that assembles this payload appears at the end of this section):
a. backgroundUri: the original video URL (AWS S3)
b. text: the subtitle to be displayed for that particular scene
c. backgroundVideoSegments.start: the start time of the text in the video. Pass the start time of the sentence here.
d. backgroundVideoSegments.end: the end time of the text in the video. Pass the end time of the sentence here.
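The sentence-grouping logic referenced in Step 3 can be sketched as follows. This is a minimal sketch, assuming the highlights response structure shown above (a transcript array whose blocks contain words with sentence_index, start_time, end_time, and an optional highlight flag); keeping only highlighted, non-pause words is an assumption about how the summary should be assembled.
def build_sentences(transcript):
    """Group highlighted transcript words into sentences with start/end times.

    Assumes the response structure shown above. Keeping only highlighted,
    non-pause words is an assumption about how the summary should be built;
    drop that filter if you want subtitles for the full video instead.
    """
    sentences = {}
    for block in transcript:
        for word in block.get("words", []):
            if not word.get("highlight"):
                continue  # keep only words selected by the Highlights API
            if word.get("is_pause") or not word.get("word"):
                continue  # skip pause placeholders with empty text
            idx = word["sentence_index"]
            entry = sentences.setdefault(
                idx, {"words": [], "start": word["start_time"], "end": word["end_time"]}
            )
            entry["words"].append(word["word"])
            entry["start"] = min(entry["start"], word["start_time"])  # start of first word
            entry["end"] = max(entry["end"], word["end_time"])        # end of last word
    # Return sentences in the order they occur in the video.
    return [
        {"text": " ".join(s["words"]), "start": s["start"], "end": s["end"]}
        for _, s in sorted(sentences.items())
    ]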
The Storyboard API returns a job_id in the response. The job status can be checked by calling the GET Job REST API. Once this job is complete, it returns a video preview URL and render_video settings in the response. The video render settings returned by this job can be used to generate the video in .mp4 format.
The video can be generated by calling the Video Render API, which also returns a jobId in the response. The job status can be checked by calling the GET Job REST API. Once this job is complete, the response contains the link to the .mp4 video.
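Putting Steps 3 and 4 together, the sketch below shows one way the scenes payload for the Storyboard API could be assembled from the grouped sentences. The field names mirror the Step 4 parameter list (backgroundUri, text, backgroundVideoSegments.start/end); the exact schema, casing, and the Storyboard and Video Render endpoints are assumptions here and should be confirmed against the Text to Video documentation.
ORIGINAL_VIDEO_URL = "PUBLIC_STREAMABLE_MP4_URL"  # the same S3 URL that was transcribed in Step 1

def build_scenes(sentences, background_uri=ORIGINAL_VIDEO_URL):
    """Turn each sentence into one Storyboard scene.

    Field names mirror the Step 4 parameter list; the exact schema and
    casing are assumptions made for illustration, so confirm them against
    the Storyboard API reference.
    """
    return [
        {
            "backgroundUri": background_uri,
            "text": sentence["text"],
            "backgroundVideoSegments": [
                {"start": sentence["start"], "end": sentence["end"]}
            ],
        }
        for sentence in sentences
    ]

# "sentences" is the output of build_sentences() from the previous sketch.
# storyboard_payload = {"scenes": build_scenes(sentences)}
# POST storyboard_payload to the Storyboard API (see the Text to Video documentation for the
# endpoint), poll GET Job for the preview URL and render settings, then call the Video Render API.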