Remove all job data and associated resources for a specific completed job. This operation permanently deletes all files generated by the job, including video outputs, audio files, thumbnails, subtitle files (SRT, VTT, TXT), share URLs, preview URLs, and any uploaded input files. Use this endpoint to free up storage space and remove sensitive data after you’ve downloaded or processed the job results. This operation is only available for jobs that have completed (successfully or failed) and is irreversible.
Irreversible Operation: Once a job is cleaned, all associated data is permanently deleted and cannot be recovered. Make sure you have downloaded any needed outputs before cleaning.
You need a valid API key to use this endpoint. Get your API key from the API Access page in your Pictory dashboard.
The unique identifier (UUID) of the job to clean up. This ID is returned when the job is created. Example: "17684c46-9d14-44ed-8830-ff839713ef8b"
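A minimal sketch of the request itself, assuming the same base URL, headers, and `success` response flag used in the fuller examples further down this page:

```python
import requests

# Minimal cleanup request (same base URL and headers as the fuller examples below)
job_id = "17684c46-9d14-44ed-8830-ff839713ef8b"
url = f"https://api.pictory.ai/pictoryapis/v1/jobs/{job_id}/clean"
headers = {
    "Authorization": "YOUR_API_KEY",
    "accept": "application/json"
}

response = requests.delete(url, headers=headers)
print(response.json())  # expected to include a "success" flag, as in the examples below
```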
Data Loss: This operation permanently deletes all job data and outputs. Ensure you have downloaded any needed files before cleaning the job.
Job State Requirement: Jobs can only be cleaned after they have finished, whether they completed successfully or failed. You cannot clean jobs that are still processing.
Storage Management: Regularly clean completed jobs to manage storage usage and costs, especially for high-volume applications.
Idempotent Operation: Calling this endpoint multiple times with the same job ID is safe. Subsequent calls will return success even if the job was already cleaned.
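Download and archive a job’s results before cleaning it up: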
```python
import json
import requests

def download_and_clean_job(api_key, job_id, output_dir="."):
    """
    Download job results and then clean up the job.

    Args:
        api_key: Your API key
        job_id: The job ID to process
        output_dir: Directory to save downloaded files
    """
    base_url = "https://api.pictory.ai/pictoryapis/v1/jobs"
    headers = {
        "Authorization": api_key,
        "accept": "application/json"
    }

    # Step 1: Get job status and results
    get_url = f"{base_url}/{job_id}"
    response = requests.get(get_url, headers=headers)
    data = response.json()

    if not data.get("success"):
        print(f"Job {job_id} not successful")
        return False

    status = data.get("data", {}).get("status")
    if status != "completed":
        print(f"Job not completed yet. Status: {status}")
        return False

    # Step 2: Download outputs (example for video render job)
    output_url = data.get("data", {}).get("outputUrl")
    if output_url:
        print(f"Downloading output from {output_url}")
        output_response = requests.get(output_url)
        output_file = f"{output_dir}/job_{job_id}_output.mp4"
        with open(output_file, 'wb') as f:
            f.write(output_response.content)
        print(f"Saved to {output_file}")

    # Step 3: Save transcript if available
    transcript = data.get("data", {}).get("transcript")
    if transcript:
        transcript_file = f"{output_dir}/job_{job_id}_transcript.json"
        with open(transcript_file, 'w') as f:
            json.dump(transcript, f, indent=2)
        print(f"Saved transcript to {transcript_file}")

    # Step 4: Clean up the job
    clean_url = f"{base_url}/{job_id}/clean"
    clean_response = requests.delete(clean_url, headers=headers)
    clean_data = clean_response.json()

    if clean_data.get("success"):
        print(f"Job {job_id} successfully cleaned")
        return True
    else:
        print(f"Failed to clean job {job_id}")
        return False

# Usage
download_and_clean_job("YOUR_API_KEY", "17684c46-9d14-44ed-8830-ff839713ef8b")
```
Automatically clean jobs older than a certain age:
```python
import requests
from datetime import datetime, timedelta

def clean_old_jobs(api_key, days_old=7):
    """
    Clean jobs that completed more than X days ago.

    Args:
        api_key: Your API key
        days_old: Clean jobs older than this many days
    """
    base_url = "https://api.pictory.ai/pictoryapis/v1/jobs"
    headers = {
        "Authorization": api_key,
        "accept": "application/json"
    }

    # Step 1: Get all jobs
    response = requests.get(base_url, headers=headers)
    data = response.json()

    if not data.get("items"):
        print("No jobs found")
        return

    cutoff_date = datetime.now() - timedelta(days=days_old)
    jobs_to_clean = []

    # Step 2: Filter old completed jobs
    for job in data["items"]:
        job_data = job.get("data", {})
        status = job_data.get("status")

        # Only clean completed or failed jobs
        if status not in ["completed", "failed"]:
            continue

        # Check if job has a timestamp (you may need to track this separately).
        # For demonstration, we'll clean all completed jobs; in production,
        # you'd compare the completion timestamp against cutoff_date.
        jobs_to_clean.append(job_data.get("jobId"))

    print(f"Found {len(jobs_to_clean)} jobs to clean")

    # Step 3: Clean the jobs
    cleaned_count = 0
    for job_id in jobs_to_clean:
        if not job_id:
            continue

        clean_url = f"{base_url}/{job_id}/clean"
        clean_response = requests.delete(clean_url, headers=headers)
        clean_data = clean_response.json()

        if clean_data.get("success"):
            cleaned_count += 1
            print(f"Cleaned job {job_id}")

    print(f"\nCleaned {cleaned_count} out of {len(jobs_to_clean)} jobs")

# Usage
clean_old_jobs("YOUR_API_KEY", days_old=7)
```
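Display job details and ask for confirmation before cleaning: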
```python
import requests

def clean_job_with_confirmation(api_key, job_id):
    """
    Get job details and ask for confirmation before cleaning.

    Args:
        api_key: Your API key
        job_id: The job ID to clean
    """
    base_url = "https://api.pictory.ai/pictoryapis/v1/jobs"
    headers = {
        "Authorization": api_key,
        "accept": "application/json"
    }

    # Step 1: Get job information
    get_url = f"{base_url}/{job_id}"
    response = requests.get(get_url, headers=headers)
    data = response.json()

    if not data.get("success"):
        print(f"Failed to retrieve job {job_id}")
        return False

    job_data = data.get("data", {})
    status = job_data.get("status", "unknown")

    # Step 2: Display job information
    print(f"\nJob ID: {job_id}")
    print(f"Status: {status}")

    if "transcript" in job_data:
        print(f"Has Transcript: Yes ({len(job_data['transcript'])} segments)")
    if "highlight" in job_data:
        print("Has Highlights: Yes")
    if "outputUrl" in job_data:
        print(f"Output URL: {job_data['outputUrl']}")

    # Step 3: Ask for confirmation
    print("\n⚠️ WARNING: This will permanently delete all job data and outputs!")
    confirm = input("Are you sure you want to clean this job? (yes/no): ")

    if confirm.lower() != "yes":
        print("Cleanup cancelled")
        return False

    # Step 4: Clean the job
    clean_url = f"{base_url}/{job_id}/clean"
    clean_response = requests.delete(clean_url, headers=headers)
    clean_data = clean_response.json()

    if clean_data.get("success"):
        print(f"✓ Job {job_id} successfully cleaned")
        return True
    else:
        print(f"✗ Failed to clean job {job_id}")
        return False

# Usage
clean_job_with_confirmation("YOUR_API_KEY", "17684c46-9d14-44ed-8830-ff839713ef8b")
```
Download Before Cleaning: Always download and backup any needed outputs before cleaning a job.
Verify Job Completion: Ensure the job has completed (successfully or failed) before attempting to clean. Processing jobs cannot be cleaned.
Archive Important Results: For jobs with important results, save the full job data (transcript, highlights, outputs) to your own storage before cleaning.
Automate Cleanup: Implement automated cleanup policies to regularly clean old jobs and manage storage costs.
Handle Errors Gracefully: Jobs may already be cleaned or deleted. Handle 404 errors appropriately; see the sketch after this list.
Use Batch Operations: When cleaning multiple jobs, use parallel requests with reasonable rate limiting.
Implement Retention Policies: Define clear policies for how long different types of jobs should be retained.
Log Cleanup Operations: Maintain logs of cleaned jobs for audit and recovery purposes.
Confirm Critical Operations: For interactive applications, require user confirmation before cleaning jobs.
Check Storage Limits: Monitor your account’s storage usage and clean jobs proactively to avoid reaching limits.
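As a minimal sketch of the error handling described above (treating a 404 as "job not found or already removed" is an assumption based on the notes in this section, not a documented contract), you might wrap the cleanup call like this:

```python
import requests

def clean_job_safely(api_key, job_id):
    """Attempt to clean a job, treating 'already cleaned' as a non-fatal outcome.

    Assumes a 404 means the job no longer exists (already cleaned or deleted);
    the exact status codes returned by the API may differ.
    """
    url = f"https://api.pictory.ai/pictoryapis/v1/jobs/{job_id}/clean"
    headers = {"Authorization": api_key, "accept": "application/json"}

    response = requests.delete(url, headers=headers)

    if response.status_code == 404:
        # Job not found: it may already have been cleaned, so log and move on
        print(f"Job {job_id} not found; it may already be cleaned")
        return True

    try:
        data = response.json()
    except ValueError:
        print(f"Unexpected response for job {job_id}: HTTP {response.status_code}")
        return False

    if data.get("success"):
        print(f"Cleaned job {job_id}")
        return True

    print(f"Failed to clean job {job_id}: {data}")
    return False
```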