Clean Job

Overview

Remove all job data and associated resources for a specific completed job. This operation permanently deletes all files generated by the job including video outputs, audio files, thumbnails, subtitle files (SRT, VTT, TXT), share URLs, preview URLs, and any uploaded input files. Use this endpoint to free up storage space and remove sensitive data after you’ve downloaded or processed the job results. This operation is only available for jobs that have completed (successfully or failed) and is irreversible.
Irreversible Operation: Once a job is cleaned, all associated data is permanently deleted and cannot be recovered. Make sure you have downloaded any needed outputs before cleaning.
You need a valid API key to use this endpoint. Get your API key from the API Access page in your Pictory dashboard.

API Endpoint

DELETE https://api.pictory.ai/pictoryapis/v1/jobs/{jobid}/clean

Request Parameters

Path Parameters

jobid
uuid
required
The unique identifier (UUID) of the job to clean up. This is returned when the job is initially created. Example: "17684c46-9d14-44ed-8830-ff839713ef8b"

Headers

Authorization
string
required
API key for authentication (starts with pictai_)
Authorization: YOUR_API_KEY

Response

Returns a simple success indicator confirming the job and all associated resources have been cleaned.
success
boolean
required
Indicates whether the cleanup operation was successful: true if the job was successfully cleaned, false otherwise.

Response Examples

{
  "success": true
}

Code Examples

Replace YOUR_API_KEY with your actual API key and the job ID in the URL with the ID of the job you want to clean.
curl --request DELETE \
  --url 'https://api.pictory.ai/pictoryapis/v1/jobs/17684c46-9d14-44ed-8830-ff839713ef8b/clean' \
  --header 'Authorization: YOUR_API_KEY' \
  --header 'accept: application/json' | python -m json.tool
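
The same request in Python with the requests library (a minimal sketch; substitute your own API key and job ID):
import requests

API_KEY = "YOUR_API_KEY"
JOB_ID = "17684c46-9d14-44ed-8830-ff839713ef8b"

# Send the DELETE request to the clean endpoint
response = requests.delete(
    f"https://api.pictory.ai/pictoryapis/v1/jobs/{JOB_ID}/clean",
    headers={"Authorization": API_KEY, "accept": "application/json"}
)

print(response.json())  # Expected: {'success': True}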

Usage Notes

Data Loss: This operation permanently deletes all job data and outputs. Ensure you have downloaded any needed files before cleaning the job.
Job State Requirement: Jobs can only be cleaned after they have completed (either successfully or failed). You cannot clean jobs that are still processing (a polling sketch follows this list).
Storage Management: Regularly clean completed jobs to manage storage usage and costs, especially for high-volume applications.
Idempotent Operation: Calling this endpoint multiple times with the same job ID is safe. Subsequent calls will return success even if the job was already cleaned.
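
As noted above, a job must be in a terminal state before it can be cleaned. A minimal sketch of polling for completion and then cleaning, assuming the same status values ("completed" / "failed") used in the examples below; the polling interval and attempt limit are illustrative:
import time
import requests

def wait_then_clean(api_key, job_id, poll_interval=10, max_attempts=60):
    """Poll the job until it reaches a terminal state, then clean it."""
    base_url = "https://api.pictory.ai/pictoryapis/v1/jobs"
    headers = {"Authorization": api_key, "accept": "application/json"}

    for _ in range(max_attempts):
        job = requests.get(f"{base_url}/{job_id}", headers=headers).json()
        status = job.get("data", {}).get("status")

        if status in ("completed", "failed"):
            # Terminal state reached; the job is now eligible for cleanup
            resp = requests.delete(f"{base_url}/{job_id}/clean", headers=headers)
            return resp.json().get("success", False)

        time.sleep(poll_interval)  # Still processing; wait before polling again

    return False  # Job did not finish within the polling window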

Common Use Cases

1. Clean Job After Downloading Results

Download results and then clean up:
import json
import requests

def download_and_clean_job(api_key, job_id, output_dir="."):
    """
    Download job results and then clean up the job.

    Args:
        api_key: Your API key
        job_id: The job ID to process
        output_dir: Directory to save downloaded files
    """
    base_url = "https://api.pictory.ai/pictoryapis/v1/jobs"
    headers = {
        "Authorization": api_key,
        "accept": "application/json"
    }

    # Step 1: Get job status and results
    get_url = f"{base_url}/{job_id}"
    response = requests.get(get_url, headers=headers)
    data = response.json()

    if not data.get("success"):
        print(f"Job {job_id} not successful")
        return False

    status = data.get("data", {}).get("status")
    if status != "completed":
        print(f"Job not completed yet. Status: {status}")
        return False

    # Step 2: Download outputs (example for video render job)
    output_url = data.get("data", {}).get("outputUrl")
    if output_url:
        print(f"Downloading output from {output_url}")
        output_response = requests.get(output_url)

        output_file = f"{output_dir}/job_{job_id}_output.mp4"
        with open(output_file, 'wb') as f:
            f.write(output_response.content)
        print(f"Saved to {output_file}")

    # Step 3: Save transcript if available
    transcript = data.get("data", {}).get("transcript")
    if transcript:
        transcript_file = f"{output_dir}/job_{job_id}_transcript.json"
        with open(transcript_file, 'w') as f:
            json.dump(transcript, f, indent=2)
        print(f"Saved transcript to {transcript_file}")

    # Step 4: Clean up the job
    clean_url = f"{base_url}/{job_id}/clean"
    clean_response = requests.delete(clean_url, headers=headers)
    clean_data = clean_response.json()

    if clean_data.get("success"):
        print(f"Job {job_id} successfully cleaned")
        return True
    else:
        print(f"Failed to clean job {job_id}")
        return False

# Usage
download_and_clean_job("YOUR_API_KEY", "17684c46-9d14-44ed-8830-ff839713ef8b")

2. Batch Clean Multiple Completed Jobs

Clean multiple jobs at once:
import requests
from concurrent.futures import ThreadPoolExecutor, as_completed

def clean_single_job(api_key, job_id):
    """Clean a single job."""
    url = f"https://api.pictory.ai/pictoryapis/v1/jobs/{job_id}/clean"
    headers = {
        "Authorization": api_key,
        "accept": "application/json"
    }

    try:
        response = requests.delete(url, headers=headers)
        data = response.json()

        return {
            "job_id": job_id,
            "success": data.get("success", False),
            "error": data.get("error") if not data.get("success") else None
        }
    except Exception as e:
        return {
            "job_id": job_id,
            "success": False,
            "error": str(e)
        }

def batch_clean_jobs(api_key, job_ids, max_workers=5):
    """
    Clean multiple jobs in parallel.

    Args:
        api_key: Your API key
        job_ids: List of job IDs to clean
        max_workers: Maximum number of parallel requests
    """
    results = []

    with ThreadPoolExecutor(max_workers=max_workers) as executor:
        future_to_job = {
            executor.submit(clean_single_job, api_key, job_id): job_id
            for job_id in job_ids
        }

        for future in as_completed(future_to_job):
            result = future.result()
            results.append(result)

            if result["success"]:
                print(f"✓ Cleaned job {result['job_id']}")
            else:
                print(f"✗ Failed to clean job {result['job_id']}: {result.get('error')}")

    # Summary
    successful = sum(1 for r in results if r["success"])
    failed = len(results) - successful

    print(f"\nSummary: {successful} cleaned, {failed} failed")
    return results

# Usage
job_ids_to_clean = [
    "17684c46-9d14-44ed-8830-ff839713ef8b",
    "bbd75639-c3cb-4add-bf7b-e4e39cffb3b0",
    "another-job-id-here"
]

batch_clean_jobs("YOUR_API_KEY", job_ids_to_clean)

3. Clean Old Completed Jobs

Automatically clean jobs older than a certain age:
import requests
from datetime import datetime, timedelta

def clean_old_jobs(api_key, days_old=7):
    """
    Clean jobs that completed more than X days ago.

    Args:
        api_key: Your API key
        days_old: Clean jobs older than this many days
    """
    base_url = "https://api.pictory.ai/pictoryapis/v1/jobs"
    headers = {
        "Authorization": api_key,
        "accept": "application/json"
    }

    # Step 1: Get all jobs
    response = requests.get(base_url, headers=headers)
    data = response.json()

    if not data.get("items"):
        print("No jobs found")
        return

    cutoff_date = datetime.now() - timedelta(days=days_old)
    jobs_to_clean = []

    # Step 2: Filter old completed jobs
    for job in data["items"]:
        job_data = job.get("data", {})
        status = job_data.get("status")

        # Only clean completed or failed jobs
        if status not in ["completed", "failed"]:
            continue

        # Check if job has timestamp (you may need to track this separately)
        # For demonstration, we'll clean all completed jobs
        # In production, you'd check the completion timestamp

        jobs_to_clean.append(job_data.get("jobId"))

    print(f"Found {len(jobs_to_clean)} jobs to clean")

    # Step 3: Clean the jobs
    cleaned_count = 0
    for job_id in jobs_to_clean:
        if not job_id:
            continue

        clean_url = f"{base_url}/{job_id}/clean"
        clean_response = requests.delete(clean_url, headers=headers)
        clean_data = clean_response.json()

        if clean_data.get("success"):
            cleaned_count += 1
            print(f"Cleaned job {job_id}")

    print(f"\nCleaned {cleaned_count} out of {len(jobs_to_clean)} jobs")

# Usage
clean_old_jobs("YOUR_API_KEY", days_old=7)
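
The snippet above notes that completion timestamps may need to be tracked separately. A minimal sketch of one way to do that, assuming you record a timestamp in a local JSON file whenever you observe a job finishing (the file name and record layout are illustrative, not part of the API):
import json
import os
from datetime import datetime, timedelta

TIMESTAMP_FILE = "job_timestamps.json"  # local record kept by your application

def record_job_completion(job_id):
    """Record the time at which a job was observed in a terminal state."""
    records = {}
    if os.path.exists(TIMESTAMP_FILE):
        with open(TIMESTAMP_FILE) as f:
            records = json.load(f)
    records[job_id] = datetime.now().isoformat()
    with open(TIMESTAMP_FILE, "w") as f:
        json.dump(records, f, indent=2)

def jobs_older_than(days):
    """Return job IDs whose recorded completion time is older than `days` days."""
    if not os.path.exists(TIMESTAMP_FILE):
        return []
    with open(TIMESTAMP_FILE) as f:
        records = json.load(f)
    cutoff = datetime.now() - timedelta(days=days)
    return [job_id for job_id, ts in records.items()
            if datetime.fromisoformat(ts) < cutoff]

The IDs returned by jobs_older_than() can then be passed to batch_clean_jobs() from the previous example.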

4. Clean with Confirmation

Require user confirmation before cleaning:
import requests

def clean_job_with_confirmation(api_key, job_id):
    """
    Get job details and ask for confirmation before cleaning.

    Args:
        api_key: Your API key
        job_id: The job ID to clean
    """
    base_url = "https://api.pictory.ai/pictoryapis/v1/jobs"
    headers = {
        "Authorization": api_key,
        "accept": "application/json"
    }

    # Step 1: Get job information
    get_url = f"{base_url}/{job_id}"
    response = requests.get(get_url, headers=headers)
    data = response.json()

    if not data.get("success"):
        print(f"Failed to retrieve job {job_id}")
        return False

    job_data = data.get("data", {})
    status = job_data.get("status", "unknown")

    # Step 2: Display job information
    print(f"\nJob ID: {job_id}")
    print(f"Status: {status}")

    if "transcript" in job_data:
        print(f"Has Transcript: Yes ({len(job_data['transcript'])} segments)")

    if "highlight" in job_data:
        print(f"Has Highlights: Yes")

    if "outputUrl" in job_data:
        print(f"Output URL: {job_data['outputUrl']}")

    # Step 3: Ask for confirmation
    print("\n⚠️  WARNING: This will permanently delete all job data and outputs!")
    confirm = input("Are you sure you want to clean this job? (yes/no): ")

    if confirm.lower() != "yes":
        print("Cleanup cancelled")
        return False

    # Step 4: Clean the job
    clean_url = f"{base_url}/{job_id}/clean"
    clean_response = requests.delete(clean_url, headers=headers)
    clean_data = clean_response.json()

    if clean_data.get("success"):
        print(f"✓ Job {job_id} successfully cleaned")
        return True
    else:
        print(f"✗ Failed to clean job {job_id}")
        return False

# Usage
clean_job_with_confirmation("YOUR_API_KEY", "17684c46-9d14-44ed-8830-ff839713ef8b")

5. Conditional Cleanup Based on Job Type

Clean jobs based on type and retention policy:
import requests

def smart_clean_jobs(api_key, retention_policies=None):
    """
    Clean jobs based on type-specific retention policies.

    Args:
        api_key: Your API key
        retention_policies: Dict mapping job types to whether they should be cleaned
    """
    if retention_policies is None:
        retention_policies = {
            "transcription": True,   # Clean transcription jobs
            "highlight": True,        # Clean highlight jobs
            "render": False,          # Keep render jobs
            "template": False         # Keep template jobs
        }

    base_url = "https://api.pictory.ai/pictoryapis/v1/jobs"
    headers = {
        "Authorization": api_key,
        "accept": "application/json"
    }

    # Get all jobs
    response = requests.get(base_url, headers=headers)
    data = response.json()

    cleaned = 0
    skipped = 0

    for job in data.get("items", []):
        job_data = job.get("data", {})
        job_id = job_data.get("jobId")
        status = job_data.get("status")

        if not job_id or status not in ["completed", "failed"]:
            continue

        # Determine job type
        job_type = None
        if "transcript" in job_data:
            job_type = "transcription"
        elif "highlight" in job_data:
            job_type = "highlight"
        elif "outputUrl" in job_data:
            job_type = "render"

        # Check retention policy
        should_clean = retention_policies.get(job_type, False)

        if should_clean:
            clean_url = f"{base_url}/{job_id}/clean"
            clean_response = requests.delete(clean_url, headers=headers)

            if clean_response.json().get("success"):
                print(f"✓ Cleaned {job_type} job {job_id}")
                cleaned += 1
        else:
            print(f"→ Kept {job_type or 'unknown'} job {job_id}")
            skipped += 1

    print(f"\nCleaned: {cleaned}, Kept: {skipped}")

# Usage
smart_clean_jobs("YOUR_API_KEY")

What Gets Deleted

When you clean a job, the following data and resources are permanently removed:
Resource Type   | Description
Video Outputs   | All rendered video files in various formats and resolutions
Audio Files     | Generated audio tracks, voiceovers, and audio exports
Thumbnails      | Preview images and video thumbnails
Subtitle Files  | SRT, VTT, and TXT subtitle/caption files
Transcript Data | Word-level transcript with timing information
Highlight Data  | AI-generated highlight segments and summaries
Share URLs      | Public sharing links and preview URLs
Preview URLs    | Temporary preview and playback URLs
Input Files     | Uploaded source files (videos, audio, images)
Project Data    | Intermediate processing files and temporary data

Best Practices

  1. Download Before Cleaning: Always download and backup any needed outputs before cleaning a job.
  2. Verify Job Completion: Ensure the job has completed (successfully or failed) before attempting to clean. Processing jobs cannot be cleaned.
  3. Archive Important Results: For jobs with important results, save the full job data (transcript, highlights, outputs) to your own storage before cleaning.
  4. Automate Cleanup: Implement automated cleanup policies to regularly clean old jobs and manage storage costs.
  5. Handle Errors Gracefully: Jobs may already be cleaned or deleted. Handle 404 errors appropriately.
  6. Use Batch Operations: When cleaning multiple jobs, use parallel requests with reasonable rate limiting.
  7. Implement Retention Policies: Define clear policies for how long different types of jobs should be retained.
  8. Log Cleanup Operations: Maintain logs of cleaned jobs for audit and recovery purposes (a sketch follows this list).
  9. Confirm Critical Operations: For interactive applications, require user confirmation before cleaning jobs.
  10. Check Storage Limits: Monitor your account’s storage usage and clean jobs proactively to avoid reaching limits.
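
As an illustration of point 8, a minimal sketch that appends every cleanup attempt to a local audit log (the log file name and record fields are assumptions made for this example, not part of the API):
import json
import requests
from datetime import datetime, timezone

AUDIT_LOG = "cleanup_audit.jsonl"  # one JSON record per line

def clean_job_with_audit(api_key, job_id):
    """Clean a job and append the outcome to a local audit log."""
    url = f"https://api.pictory.ai/pictoryapis/v1/jobs/{job_id}/clean"
    headers = {"Authorization": api_key, "accept": "application/json"}

    response = requests.delete(url, headers=headers)
    success = response.status_code == 200 and response.json().get("success", False)

    record = {
        "job_id": job_id,
        "cleaned_at": datetime.now(timezone.utc).isoformat(),
        "http_status": response.status_code,
        "success": success,
    }
    with open(AUDIT_LOG, "a") as f:
        f.write(json.dumps(record) + "\n")

    return success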

Error Handling

Common errors and how to handle them:
Error          | Cause                                    | Solution
INVALID_STATE  | Job is still processing                  | Wait for the job to complete before cleaning
NOT_FOUND      | Job doesn't exist or was already cleaned | Safe to ignore in cleanup scripts
UNAUTHORIZED   | Invalid or expired API key               | Verify your API key is valid
INTERNAL_ERROR | Server-side error                        | Retry with exponential backoff
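
A minimal sketch of handling these cases in a cleanup script. The mapping of error codes to HTTP statuses (404 for NOT_FOUND, 401 for UNAUTHORIZED, 5xx for INTERNAL_ERROR) is an assumption; adjust it to the responses you actually observe:
import time
import requests

def clean_job_robust(api_key, job_id, max_retries=3):
    """Clean a job, tolerating already-cleaned jobs and retrying server errors."""
    url = f"https://api.pictory.ai/pictoryapis/v1/jobs/{job_id}/clean"
    headers = {"Authorization": api_key, "accept": "application/json"}

    for attempt in range(max_retries):
        response = requests.delete(url, headers=headers)

        if response.status_code == 404:
            return True  # Job doesn't exist or was already cleaned; safe to ignore
        if response.status_code == 401:
            raise RuntimeError("Unauthorized: check your API key")
        if response.status_code >= 500:
            time.sleep(2 ** attempt)  # Exponential backoff before retrying
            continue

        # Any other response (including INVALID_STATE for a job that is
        # still processing) is reported through the success flag
        return response.json().get("success", False)

    return False  # Server errors persisted after all retries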