Get Job by ID

GET https://api.pictory.ai/pictoryapis/v1/jobs/{jobid}

Overview

Retrieve the current status and complete results of a specific job using its unique job ID. This endpoint returns detailed information about the job, including its current state (processing, completed, failed, or cleaned), progress information, transcript data, highlights, output URLs, and any error messages if the job failed. Use this endpoint to poll for job completion or to retrieve the final results of transcription, summary, render, and other asynchronous processing tasks.
You need a valid API key to use this endpoint. Get your API key from the API Access page in your Pictory dashboard.

API Endpoint

GET https://api.pictory.ai/pictoryapis/v1/jobs/{jobid}

Request Parameters

Path Parameters

jobid
uuid
required
The unique identifier (UUID) of the job to retrieve. This is returned when the job is initially created. Example: "17684c46-9d14-44ed-8830-ff839713ef8b"

Headers

Authorization
string
required
API key for authentication (starts with pictai_)
Authorization: YOUR_API_KEY

Response

Returns a job object containing the unique job ID, success status, and detailed data including transcripts, highlights, processing status, or error information depending on the job type and current state.
job_id
uuid
required
The unique identifier of the job. This UUID can be used to track and retrieve the job status.
success
boolean
required
Indicates the overall job state: true if the job is processing or completed successfully, false if it failed or encountered an error.
data
object
required
Contains the job-specific data including status, progress, results, or error information. The exact structure depends on the job type and current state.

Response Examples

{
  "success": true,
  "data": {
    "transcript": [
      {
        "uid": "63773537-fb4c-4f93-8bed-217951b16fb2",
        "speakerId": 1,
        "words": [
          {
            "uid": "49297a70-5f5d-459c-8486-618f7bc983d3",
            "word": "Once",
            "start_time": 0,
            "end_time": 0.48,
            "speakerId": 1,
            "sentence_index": 0
          },
          {
            "uid": "0cd65dd6-1c37-4b1c-8e9e-2772541dc21a",
            "word": "again,",
            "start_time": 0.48,
            "end_time": 0.88,
            "speakerId": 1,
            "sentence_index": 0
          }
        ]
      }
    ],
    "status": "completed"
  }
}

Code Examples

Replace YOUR_API_KEY with your actual API key and the example job ID with the ID of the job you want to retrieve.
curl --request GET \
  --url 'https://api.pictory.ai/pictoryapis/v1/jobs/17684c46-9d14-44ed-8830-ff839713ef8b' \
  --header 'Authorization: YOUR_API_KEY' \
  --header 'accept: application/json' | python -m json.tool
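
The same request in Python (a minimal sketch using the requests library):
import requests

url = "https://api.pictory.ai/pictoryapis/v1/jobs/17684c46-9d14-44ed-8830-ff839713ef8b"
headers = {
    "Authorization": "YOUR_API_KEY",  # replace with your actual API key
    "accept": "application/json"
}

response = requests.get(url, headers=headers)
print(response.json())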

Usage Notes

Rate Limiting: When polling for job status, implement exponential backoff to avoid hitting rate limits. Start with an interval of 5-10 seconds and increase it while the job is still processing.
Job Expiration: Jobs may be cleaned or archived after a certain period. Check the documentation for data retention policies.
Webhooks Recommended: Instead of polling, consider using webhooks when initiating jobs. The webhook will automatically notify you when the job completes (a minimal receiver sketch follows).
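
A minimal webhook receiver sketch (assumes Flask; the endpoint path is your choice, and the payload shape, including the job_id field name, is an assumption to verify against Pictory's webhook documentation):
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/pictory-webhook", methods=["POST"])
def pictory_webhook():
    """Receive a job-completion notification."""
    payload = request.get_json(silent=True) or {}
    # Assumption: the notification body carries the job ID under "job_id";
    # verify the actual field name in Pictory's webhook documentation.
    job_id = payload.get("job_id")
    print(f"Job {job_id} finished; fetch its results with GET /jobs/{job_id}")
    return jsonify({"received": True}), 200

if __name__ == "__main__":
    app.run(port=8000)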

Common Use Cases

1. Poll for Job Completion

Check job status repeatedly until it completes:
import requests
import time

def wait_for_job_completion(api_key, job_id, max_wait=300, poll_interval=5):
    """
    Poll for job completion with exponential backoff.

    Args:
        api_key: Your API key
        job_id: The job ID to monitor
        max_wait: Maximum time to wait in seconds (default 5 minutes)
        poll_interval: Initial poll interval in seconds
    """
    url = f"https://api.pictory.ai/pictoryapis/v1/jobs/{job_id}"
    headers = {
        "Authorization": api_key,
        "accept": "application/json"
    }

    start_time = time.time()
    current_interval = poll_interval

    while time.time() - start_time < max_wait:
        response = requests.get(url, headers=headers)
        data = response.json()

        if not data.get("success"):
            return {"error": "Job failed", "data": data}

        status = data.get("data", {}).get("status", "unknown")

        if status == "completed":
            return {"status": "completed", "data": data}
        elif status == "failed":
            return {"status": "failed", "data": data}
        elif status == "processing":
            progress = data.get("data", {}).get("progress", 0)
            print(f"Processing... {progress}%")

            # Exponential backoff
            time.sleep(current_interval)
            current_interval = min(current_interval * 1.5, 30)  # Max 30 seconds
        else:
            print(f"Unknown status: {status}")
            time.sleep(current_interval)

    return {"error": "Timeout", "message": "Job did not complete within the maximum wait time"}

# Usage
job_id = "17684c46-9d14-44ed-8830-ff839713ef8b"
result = wait_for_job_completion("YOUR_API_KEY", job_id)

if result.get("status") == "completed":
    print("Job completed successfully!")
    print(f"Transcript segments: {len(result['data']['data']['transcript'])}")
else:
    print(f"Job did not complete: {result.get('error')}")

2. Extract Transcript from Completed Job

Get the full transcript with timestamps:
import requests

def get_job_transcript(api_key, job_id):
    """Extract full transcript text from a completed job."""
    url = f"https://api.pictory.ai/pictoryapis/v1/jobs/{job_id}"
    headers = {
        "Authorization": api_key,
        "accept": "application/json"
    }

    response = requests.get(url, headers=headers)
    data = response.json()

    if not data.get("success"):
        return None

    transcript_segments = data.get("data", {}).get("transcript", [])

    # Extract full text
    full_text = []
    for segment in transcript_segments:
        words = segment.get("words", [])
        segment_text = " ".join([w["word"] for w in words if not w.get("is_pause")])
        full_text.append(segment_text)

    return {
        "full_text": " ".join(full_text),
        "segments": transcript_segments,
        "total_segments": len(transcript_segments)
    }

# Usage
transcript = get_job_transcript("YOUR_API_KEY", "17684c46-9d14-44ed-8830-ff839713ef8b")

if transcript:
    print(f"Full Transcript ({transcript['total_segments']} segments):")
    print(transcript['full_text'])

3. Generate SRT Subtitle File

Convert transcript to SRT format:
import requests

def generate_srt_from_job(api_key, job_id, output_file="subtitles.srt"):
    """Generate an SRT subtitle file from job transcript."""
    url = f"https://api.pictory.ai/pictoryapis/v1/jobs/{job_id}"
    headers = {
        "Authorization": api_key,
        "accept": "application/json"
    }

    response = requests.get(url, headers=headers)
    data = response.json()

    if not data.get("success"):
        print("Job not successful")
        return False

    transcript_segments = data.get("data", {}).get("transcript", [])

    def format_timestamp(seconds):
        """Convert seconds to SRT timestamp format (HH:MM:SS,mmm)"""
        hours = int(seconds // 3600)
        minutes = int((seconds % 3600) // 60)
        secs = int(seconds % 60)
        millis = int((seconds % 1) * 1000)
        return f"{hours:02d}:{minutes:02d}:{secs:02d},{millis:03d}"

    # Generate SRT content
    srt_content = []
    subtitle_index = 1

    for segment in transcript_segments:
        words = segment.get("words", [])
        if not words:
            continue

        # Filter out pauses
        real_words = [w for w in words if not w.get("is_pause")]
        if not real_words:
            continue

        start_time = real_words[0]["start_time"]
        end_time = real_words[-1]["end_time"]
        text = " ".join([w["word"] for w in real_words])

        # SRT format: index, timestamp, text, blank line
        srt_content.append(f"{subtitle_index}")
        srt_content.append(f"{format_timestamp(start_time)} --> {format_timestamp(end_time)}")
        srt_content.append(text)
        srt_content.append("")  # Blank line

        subtitle_index += 1

    # Write to file
    with open(output_file, 'w', encoding='utf-8') as f:
        f.write("\n".join(srt_content))

    print(f"SRT file generated: {output_file} ({subtitle_index - 1} subtitles)")
    return True

# Usage
generate_srt_from_job("YOUR_API_KEY", "17684c46-9d14-44ed-8830-ff839713ef8b")

4. Check Multiple Jobs in Parallel

Monitor multiple jobs simultaneously:
import requests
from concurrent.futures import ThreadPoolExecutor, as_completed

def check_job_status(api_key, job_id):
    """Check status of a single job."""
    url = f"https://api.pictory.ai/pictoryapis/v1/jobs/{job_id}"
    headers = {
        "Authorization": api_key,
        "accept": "application/json"
    }

    try:
        response = requests.get(url, headers=headers)
        data = response.json()

        return {
            "job_id": job_id,
            "success": data.get("success"),
            "status": data.get("data", {}).get("status", "unknown"),
            "progress": data.get("data", {}).get("progress"),
            "error": data.get("data", {}).get("error")
        }
    except Exception as e:
        return {
            "job_id": job_id,
            "success": False,
            "error": str(e)
        }

def check_multiple_jobs(api_key, job_ids):
    """Check status of multiple jobs in parallel."""
    with ThreadPoolExecutor(max_workers=5) as executor:
        # Submit all jobs
        future_to_job = {
            executor.submit(check_job_status, api_key, job_id): job_id
            for job_id in job_ids
        }

        results = []
        for future in as_completed(future_to_job):
            result = future.result()
            results.append(result)

        return results

# Usage
job_ids = [
    "17684c46-9d14-44ed-8830-ff839713ef8b",
    "bbd75639-c3cb-4add-bf7b-e4e39cffb3b0",
    "another-job-id-here"
]

results = check_multiple_jobs("YOUR_API_KEY", job_ids)

# Print summary
for result in results:
    print(f"Job {result['job_id']}: {result['status']}")
    if result.get('progress'):
        print(f"  Progress: {result['progress']}%")

5. Handle Job Errors Gracefully

Implement robust error handling:
import requests

def get_job_with_error_handling(api_key, job_id):
    """Get job status with comprehensive error handling."""
    url = f"https://api.pictory.ai/pictoryapis/v1/jobs/{job_id}"
    headers = {
        "Authorization": api_key,
        "accept": "application/json"
    }

    try:
        response = requests.get(url, headers=headers, timeout=10)

        # Check HTTP status
        if response.status_code == 401:
            return {"error": "UNAUTHORIZED", "message": "Invalid API key"}
        elif response.status_code == 404:
            return {"error": "NOT_FOUND", "message": "Job not found"}
        elif response.status_code == 500:
            return {"error": "SERVER_ERROR", "message": "Internal server error"}
        elif response.status_code != 200:
            return {"error": "HTTP_ERROR", "message": f"HTTP {response.status_code}"}

        data = response.json()

        # Check job success
        if not data.get("success"):
            error_info = data.get("data", {}).get("error", {})
            return {
                "error": "JOB_FAILED",
                "message": error_info.get("message", "Job failed"),
                "code": error_info.get("code")
            }

        # Check job status
        status = data.get("data", {}).get("status", "unknown")
        if status == "failed":
            error_info = data.get("data", {}).get("error", {})
            return {
                "error": "PROCESSING_FAILED",
                "message": error_info.get("message", "Processing failed"),
                "code": error_info.get("code")
            }

        return {"success": True, "data": data}

    except requests.exceptions.Timeout:
        return {"error": "TIMEOUT", "message": "Request timed out"}
    except requests.exceptions.ConnectionError:
        return {"error": "CONNECTION_ERROR", "message": "Failed to connect to server"}
    except Exception as e:
        return {"error": "UNKNOWN_ERROR", "message": str(e)}

# Usage
result = get_job_with_error_handling("YOUR_API_KEY", "17684c46-9d14-44ed-8830-ff839713ef8b")

if result.get("success"):
    print("Job retrieved successfully!")
    print(f"Status: {result['data']['data']['status']}")
else:
    print(f"Error ({result['error']}): {result['message']}")

Job Status Values

Status      Description
processing  Job is currently being processed. Check the progress field for completion percentage.
completed   Job completed successfully. Results are available in the data field.
failed      Job failed with an error. Check the error field for details.
cleaned     Job data has been cleaned or archived. Results may no longer be available.

Best Practices

  1. Use Webhooks: Instead of polling, configure webhooks when creating jobs to receive automatic notifications when jobs complete.
  2. Implement Exponential Backoff: When polling, use exponential backoff to reduce API calls and avoid rate limiting.
  3. Cache Results: Once a job is completed, cache the results locally instead of repeatedly fetching them (see the sketch after this list).
  4. Handle All Status States: Implement logic to handle processing, completed, failed, and cleaned states appropriately.
  5. Set Reasonable Timeouts: Don’t poll indefinitely. Set a maximum wait time based on expected job duration.
  6. Monitor Progress: For long-running jobs, use the progress field to provide user feedback.
  7. Validate Job IDs: Ensure job IDs are valid UUIDs before making requests to avoid 400 errors (see the sketch after this list).
  8. Handle 404 Errors: Jobs may expire or be deleted. Handle 404 responses gracefully.
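
Minimal sketches for practices 3 and 7 (the fetch_job argument stands in for any request helper above, such as get_job_with_error_handling):
import uuid

def is_valid_job_id(job_id):
    """Practice 7: reject malformed job IDs before calling the API."""
    try:
        uuid.UUID(str(job_id))
        return True
    except ValueError:
        return False

# Practice 3: cache terminal results locally to avoid repeated fetches.
_job_cache = {}

def get_job_cached(api_key, job_id, fetch_job):
    if not is_valid_job_id(job_id):
        return {"error": "INVALID_JOB_ID", "message": "Job ID is not a valid UUID"}
    if job_id in _job_cache:
        return _job_cache[job_id]
    result = fetch_job(api_key, job_id)
    # Only cache completed jobs; processing jobs will change on the next poll.
    if result.get("success") and result["data"]["data"].get("status") == "completed":
        _job_cache[job_id] = result
    return result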