List Jobs

Overview

Retrieve a paginated list of all jobs associated with your API client. This endpoint returns jobs in descending order by creation date (newest first), allowing you to monitor all asynchronous processing tasks such as video transcriptions, summaries, renders, and template creations.
You need a valid API key to use this endpoint. Get your API key from the API Access page in your Pictory dashboard.

API Endpoint

GET https://api.pictory.ai/pictoryapis/v1/jobs

Request Parameters

Headers

Authorization
string
required
API key for authentication (starts with pictai_)
Authorization: YOUR_API_KEY

Query Parameters

nextPageKey
string
Base64-encoded pagination token returned from a previous request. Use this to retrieve the next page of results. If not provided, the first page of results is returned. Example: "eyJsYXN0RXZhbHVhdGVkS2V5Ijp7fX0="
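
A minimal sketch of passing the token as a query parameter with Python's requests library (the token value and YOUR_API_KEY are placeholders, not real values):

import requests

# Token returned as nextPageKey in a previous response (value shown is illustrative)
next_page_key = "eyJsYXN0RXZhbHVhdGVkS2V5Ijp7fX0="

response = requests.get(
    "https://api.pictory.ai/pictoryapis/v1/jobs",
    headers={"Authorization": "YOUR_API_KEY", "accept": "application/json"},
    params={"nextPageKey": next_page_key},
)
# nextPageKey in this response points to the page after this one, or is null on the last page
print(response.json().get("nextPageKey"))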

Response

Returns a paginated list of job objects in the items array. Each job includes its unique jobId, current status, and job-specific data such as transcripts, highlights, render results, or error information. If more results are available, a nextPageKey is included for pagination.
items
array of objects
required
Array of job objects, sorted by creation date in descending order (newest first)
nextPageKey
string | null
required
Base64-encoded pagination token to retrieve the next page of results. If null, there are no more pages available. Pass this value as the nextPageKey query parameter in the next request to fetch additional results.

Response Examples

{
  "items": [
    {
      "success": true,
      "data": {
        "transcript": [
          {
            "uid": "63773537-fb4c-4f93-8bed-217951b16fb2",
            "speakerId": 1,
            "words": [
              {
                "uid": "49297a70-5f5d-459c-8486-618f7bc983d3",
                "word": "Once",
                "start_time": 0,
                "end_time": 0.48,
                "speakerId": 1,
                "sentence_index": 0
              },
              {
                "uid": "0cd65dd6-1c37-4b1c-8e9e-2772541dc21a",
                "word": "again,",
                "start_time": 0.48,
                "end_time": 0.88,
                "speakerId": 1,
                "sentence_index": 0
              }
            ]
          }
        ],
        "jobId": "bbd75639-c3cb-4add-bf7b-e4e39cffb3b0",
        "status": "completed"
      }
    }
  ],
  "nextPageKey": "eyJsYXN0RXZhbHVhdGVkS2V5Ijp7ImNyZWF0ZWRfYXQiOiIyMDI1LTAxLTAxVDEyOjAwOjAwWiJ9fQ=="
}

Code Examples

Replace YOUR_API_KEY with your actual API key, which starts with pictai_.
curl --request GET \
  --url 'https://api.pictory.ai/pictoryapis/v1/jobs' \
  --header 'Authorization: YOUR_API_KEY' \
  --header 'accept: application/json' | python -m json.tool

Usage Notes

Jobs are returned in reverse chronological order by creation date, with the most recently created jobs appearing first.
Pagination: When you have many jobs, use the nextPageKey value from the response to retrieve subsequent pages. Pass the nextPageKey as a query parameter in your next request.
Job Types: Jobs can represent various async operations including video transcriptions, summaries/highlights, project renders, and template creations. The data structure varies based on the job type.
Rate Limiting: Be mindful of API rate limits when polling for job status. Consider implementing exponential backoff (see the sketch below) or using webhooks for job completion notifications.
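
A minimal polling sketch with exponential backoff, assuming you already hold the jobId returned when the job was created and that "completed" and "failed" are the terminal statuses; it re-fetches the first page of this endpoint and stops once the matching job reports one of them:

import time
import requests

def wait_for_job(api_key, job_id, max_attempts=8):
    """Poll the jobs list until job_id reports a terminal status, backing off exponentially."""
    url = "https://api.pictory.ai/pictoryapis/v1/jobs"
    headers = {"Authorization": api_key, "accept": "application/json"}
    delay = 2  # seconds; doubled after each attempt

    for attempt in range(max_attempts):
        response = requests.get(url, headers=headers)
        for job in response.json().get("items", []):
            data = job.get("data", {})
            # "completed" / "failed" assumed to be the terminal statuses
            if data.get("jobId") == job_id and data.get("status") in ("completed", "failed"):
                return job
        time.sleep(delay)
        delay *= 2  # exponential backoff between polls

    raise TimeoutError(f"Job {job_id} did not finish after {max_attempts} polls")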

Common Use Cases

1. List All Recent Jobs

Retrieve and display all recent jobs:
import requests

url = "https://api.pictory.ai/pictoryapis/v1/jobs"
headers = {
    "Authorization": "YOUR_API_KEY",
    "accept": "application/json"
}

response = requests.get(url, headers=headers)
data = response.json()

print(f"Total jobs: {len(data['items'])}\n")

for job in data['items']:
    print(f"Job Status: {'Success' if job['success'] else 'Failed'}")
    if 'jobId' in job.get('data', {}):
        print(f"  Job ID: {job['data']['jobId']}")
    if 'status' in job.get('data', {}):
        print(f"  Status: {job['data']['status']}")

    # Check for errors
    if not job['success'] and 'error' in job.get('data', {}):
        print(f"  Error: {job['data']['error'].get('message', 'Unknown error')}")

    print("---")

2. Paginate Through All Jobs

Fetch all jobs across multiple pages:
import requests

def fetch_all_jobs(api_key):
    url = "https://api.pictory.ai/pictoryapis/v1/jobs"
    headers = {
        "Authorization": api_key,
        "accept": "application/json"
    }

    all_jobs = []
    next_page_key = None

    while True:
        # Add pagination parameter if we have a next page key
        params = {"nextPageKey": next_page_key} if next_page_key else {}

        response = requests.get(url, headers=headers, params=params)
        data = response.json()

        # Add jobs from this page
        all_jobs.extend(data.get('items', []))

        # Check if there are more pages
        next_page_key = data.get('nextPageKey')
        if not next_page_key:
            break

    return all_jobs

# Usage
all_jobs = fetch_all_jobs("YOUR_API_KEY")
print(f"Total jobs across all pages: {len(all_jobs)}")

# Count by status
completed = sum(1 for job in all_jobs if job['success'])
failed = len(all_jobs) - completed
print(f"Completed: {completed}, Failed: {failed}")

3. Monitor Job Status

Check the status of specific job types:
import requests

url = "https://api.pictory.ai/pictoryapis/v1/jobs"
headers = {
    "Authorization": "YOUR_API_KEY",
    "accept": "application/json"
}

response = requests.get(url, headers=headers)
data = response.json()

# Filter transcription jobs
transcription_jobs = [
    job for job in data['items']
    if 'transcript' in job.get('data', {})
]

print(f"Transcription jobs found: {len(transcription_jobs)}")

for job in transcription_jobs:
    job_id = job.get('data', {}).get('jobId', 'Unknown')
    status = job.get('data', {}).get('status', 'Unknown')
    success = job.get('success', False)

    print(f"Job ID: {job_id}")
    print(f"  Status: {status}")
    print(f"  Success: {success}")

    if 'transcript' in job.get('data', {}):
        transcript_segments = len(job['data']['transcript'])
        print(f"  Transcript Segments: {transcript_segments}")

    print("---")

4. Find Failed Jobs

Identify and log failed jobs for debugging:
import requests

url = "https://api.pictory.ai/pictoryapis/v1/jobs"
headers = {
    "Authorization": "YOUR_API_KEY",
    "accept": "application/json"
}

response = requests.get(url, headers=headers)
data = response.json()

# Find failed jobs
failed_jobs = [job for job in data['items'] if not job['success']]

if failed_jobs:
    print(f"Found {len(failed_jobs)} failed jobs:\n")

    for job in failed_jobs:
        job_id = job.get('data', {}).get('jobId', 'Unknown')
        error_msg = job.get('data', {}).get('error', {}).get('message', 'No error message')
        error_code = job.get('data', {}).get('error', {}).get('code', 'No error code')

        print(f"Job ID: {job_id}")
        print(f"  Error Code: {error_code}")
        print(f"  Error Message: {error_msg}")
        print("---")
else:
    print("No failed jobs found!")

5. Export Jobs to CSV

Export job data for reporting:
import requests
import csv

def export_jobs_to_csv(api_key, filename="jobs_export.csv"):
    url = "https://api.pictory.ai/pictoryapis/v1/jobs"
    headers = {
        "Authorization": api_key,
        "accept": "application/json"
    }

    response = requests.get(url, headers=headers)
    data = response.json()

    # Prepare CSV
    with open(filename, 'w', newline='') as csvfile:
        fieldnames = ['Job ID', 'Status', 'Success', 'Has Transcript', 'Has Highlight', 'Error']
        writer = csv.DictWriter(csvfile, fieldnames=fieldnames)

        writer.writeheader()

        for job in data.get('items', []):
            job_data = job.get('data', {})

            row = {
                'Job ID': job_data.get('jobId', 'N/A'),
                'Status': job_data.get('status', 'N/A'),
                'Success': 'Yes' if job.get('success') else 'No',
                'Has Transcript': 'Yes' if 'transcript' in job_data else 'No',
                'Has Highlight': 'Yes' if 'highlight' in job_data else 'No',
                'Error': job_data.get('error', {}).get('message', 'None')
            }

            writer.writerow(row)

    print(f"Exported {len(data.get('items', []))} jobs to {filename}")

# Usage
export_jobs_to_csv("YOUR_API_KEY")

Job Data Structure

The data field in each job object varies based on the job type:

Transcription Jobs

{
  "jobId": "uuid",
  "status": "completed",
  "transcript": [
    {
      "uid": "segment-uuid",
      "speakerId": 1,
      "words": [
        {
          "uid": "word-uuid",
          "word": "Hello",
          "start_time": 0.0,
          "end_time": 0.5,
          "speakerId": 1,
          "sentence_index": 0
        }
      ]
    }
  ]
}
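
A sketch of how you might reassemble readable sentences from this word-level structure, assuming words within a segment are ordered by time and that sentence_index groups consecutive words of the same sentence:

from itertools import groupby

def sentences_from_transcript(transcript):
    """Join word-level entries back into sentences, grouped by sentence_index within each segment."""
    sentences = []
    for segment in transcript:
        words = segment.get("words", [])
        for _, group in groupby(words, key=lambda w: w["sentence_index"]):
            group = list(group)
            sentences.append({
                "speakerId": segment.get("speakerId"),
                "start_time": group[0]["start_time"],
                "end_time": group[-1]["end_time"],
                "text": " ".join(w["word"] for w in group),
            })
    return sentences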

Summary/Highlight Jobs

{
  "jobId": "uuid",
  "status": "completed",
  "highlight": [
    {
      "start": 0.0,
      "end": 10.5,
      "text": "Summary segment text",
      "importance": 0.95
    }
  ]
}

Failed Jobs

{
  "jobId": "uuid",
  "status": "failed",
  "error": {
    "code": "PROCESSING_ERROR",
    "message": "Failed to process media file"
  }
}
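
When processing mixed results from this endpoint, a small helper can branch on which of these keys the data payload carries. This is a sketch under the assumption that the error, transcript, and highlight keys reliably distinguish the job types shown above:

def describe_job(job):
    """Summarize one item from the jobs list based on the keys present in its data payload."""
    data = job.get("data", {})
    job_id = data.get("jobId", "unknown")

    if "error" in data:
        return f"{job_id}: failed ({data['error'].get('code', 'UNKNOWN')})"
    if "transcript" in data:
        return f"{job_id}: transcription with {len(data['transcript'])} segment(s)"
    if "highlight" in data:
        return f"{job_id}: summary/highlight with {len(data['highlight'])} segment(s)"
    return f"{job_id}: {data.get('status', 'unknown')}"

# Usage with a response from GET /pictoryapis/v1/jobs:
# for job in response.json()["items"]:
#     print(describe_job(job))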