Endpoints
GET /health
Health check. No authentication required.
Response (200)
{
"status": "healthy",
"version": "0.3.0"
}
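A connectivity check before authenticating can be as simple as the following sketch (Python with the requests library; the base URL matches the Python example at the end of this page):
import requests

API = "https://api-dev.deepfieldlabs.dev"

resp = requests.get(f"{API}/health")
resp.raise_for_status()
print(resp.json())  # e.g. {"status": "healthy", "version": "0.3.0"}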
POST /upload-url (Auth Required)
Generate a presigned S3 URL for uploading an observation file. For files > 50 MB, returns multipart upload URLs.
Request Body
| Field | Type | Required | Description |
|---|---|---|---|
| filename | string | Yes | Name of the file (must end in .fil or .h5) |
| file_size | integer | No | File size in bytes. If provided and > 50 MB, multipart upload is used. |
Response — Single Upload (file ≤ 50 MB)
{
"upload_url": "https://mitraseti-dev-data.s3.amazonaws.com/uploads/...",
"file_key": "uploads/user-id/uuid/observation.fil",
"expires_in": 3600,
"max_size_mb": 1024,
"multipart": false
}
Response — Multipart Upload (file > 50 MB)
{
"multipart": true,
"upload_id": "abc123...",
"file_key": "uploads/user-id/uuid/large_file.fil",
"parts": [
{ "partNumber": 1, "url": "https://...presigned-url-for-part-1..." },
{ "partNumber": 2, "url": "https://...presigned-url-for-part-2..." }
],
"part_size": 10485760,
"max_size_mb": 1024,
"expires_in": 3600
}
Multipart uploads: For files > 50 MB, the response includes presigned URLs for each 10 MB part. Upload each part with a PUT request, then call POST /complete-upload with the ETags. See Complete Multipart Upload below.
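Example: Single upload with Python
A minimal sketch of the small-file path (≤ 50 MB), assuming the requests library and the x-api-key header used in the Python example at the end of this page:
import os
import requests

API = "https://api-dev.deepfieldlabs.dev"
HEADERS = {"x-api-key": "sk_live_xxxxxxxxxxxx", "Content-Type": "application/json"}

filename = "observation.fil"
resp = requests.post(f"{API}/upload-url",
                     json={"filename": filename, "file_size": os.path.getsize(filename)},
                     headers=HEADERS)
upload = resp.json()

# For files <= 50 MB the response contains a single presigned URL
with open(filename, "rb") as f:
    requests.put(upload["upload_url"], data=f,
                 headers={"Content-Type": "application/octet-stream"})
print("Uploaded as", upload["file_key"])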
Errors
| Code | Reason |
|---|---|
| 400 | Missing filename or unsupported file type |
| 403 | Missing or invalid authentication |
POST /complete-upload (Auth Required)
Complete a multipart upload after all parts have been uploaded. Required for files > 50 MB.
Request Body
| Field | Type | Required | Description |
|---|---|---|---|
| file_key | string | Yes | The file_key from the upload-url response |
| upload_id | string | Yes | The upload_id from the upload-url response |
| parts | array | Yes | Array of {"partNumber": 1, "etag": "\"abc123\""} from each part upload response |
Response (200)
{
"message": "Upload complete",
"file_key": "uploads/user-id/uuid/large_file.fil"
}
Example: Multipart upload with curl
# Upload part 1 (capture the ETag response header; tr strips the trailing carriage return)
ETAG1=$(curl -s -X PUT "$PART1_URL" --data-binary @part1.bin \
  -D - -o /dev/null | grep -i etag | awk '{print $2}' | tr -d '\r')
# Upload part 2
ETAG2=$(curl -s -X PUT "$PART2_URL" --data-binary @part2.bin \
  -D - -o /dev/null | grep -i etag | awk '{print $2}' | tr -d '\r')
# Complete upload
curl -X POST "$API/complete-upload" $AUTH \
-H "Content-Type: application/json" \
-d "{
\"file_key\": \"$FILE_KEY\",
\"upload_id\": \"$UPLOAD_ID\",
\"parts\": [
{\"partNumber\": 1, \"etag\": $ETAG1},
{\"partNumber\": 2, \"etag\": $ETAG2}
]
}"
POST /abort-upload (Auth Required)
Abort an incomplete multipart upload. Call this to clean up if the upload fails or is cancelled.
Request Body
| Field | Type | Required | Description |
|---|---|---|---|
| file_key | string | Yes | The file_key from the upload-url response |
| upload_id | string | Yes | The upload_id from the upload-url response |
Response (200)
{
"message": "Upload aborted"
}
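Example: Abort on failure with Python
One pattern, sketched below under the same names as the Python example at the end of this page (API, HEADERS, and a multipart upload response are assumed to exist), is to wrap the part uploads in try/except and abort if anything fails, so incomplete parts do not linger in S3:
import requests

# Assumes API, HEADERS, and a multipart `upload` response are already defined
parts = []
try:
    with open("large_file.fil", "rb") as f:
        for part in upload["parts"]:
            r = requests.put(part["url"], data=f.read(upload["part_size"]))
            r.raise_for_status()
            parts.append({"partNumber": part["partNumber"], "etag": r.headers["ETag"]})
    requests.post(f"{API}/complete-upload",
                  json={"file_key": upload["file_key"],
                        "upload_id": upload["upload_id"], "parts": parts},
                  headers=HEADERS)
except Exception:
    # Clean up the incomplete multipart upload before re-raising
    requests.post(f"{API}/abort-upload",
                  json={"file_key": upload["file_key"],
                        "upload_id": upload["upload_id"]},
                  headers=HEADERS)
    raise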
POST /analyze (Auth Required)
Start the processing pipeline for an uploaded file. Returns a job ID for status polling.
Request Body
| Field | Type | Required | Description |
|---|---|---|---|
| file_key | string | Yes | S3 key from the /upload-url response |
| filename | string | No | Original filename (used for display) |
| source_name | string | No | Friendly name for the observation (defaults to filename) |
Response (202)
{
"job_id": "a1b2c3d4-e5f6-7890-abcd-ef1234567890",
"status": "submitted",
"message": "Processing started. Poll GET /jobs/{job_id} for status.",
"status_url": "/jobs/a1b2c3d4-e5f6-7890-abcd-ef1234567890"
}
Errors
| Code | Reason |
|---|---|
| 400 | Missing file_key or invalid JSON body |
| 404 | File not found in S3 (upload first) |
| 413 | File exceeds tier size limit |
| 429 | Monthly job limit reached, or too many concurrent jobs (max 5) |
| 500 | Pipeline failed to start |
GET /jobs/{job_id} (Auth Required)
Get job status and results. Full results are included when status is complete.
Response (200) — Completed Job
{
"job_id": "a1b2c3d4-...",
"status": "complete",
"source_name": "Voyager1_trimmed",
"filename": "Voyager1_trimmed.fil",
"file_size": 46137344,
"created_at": 1712444400,
"completed_at": 1712444430,
"results": {
"raw_hits": 543,
"filtered_hits": 163,
"candidates": 20,
"processing_time_seconds": 28.4,
"top_candidates": [
{
"frequency_hz": 1420405751.68,
"drift_rate": 0.15,
"snr": 14.2,
"classification": "NARROWBAND_DRIFTING",
"interestingness": 0.82,
"rfi_source": "",
"source": "matched_filter"
}
]
}
}
Status Values
| Status | Description |
|---|---|
| submitted | Job accepted, pipeline starting |
| processing | Pipeline executing (ingest → de-Doppler → classify → export) |
| complete | Analysis finished, results available |
| failed | Pipeline encountered an error |
Export artifacts timing: When status becomes complete, the core results are ready. However, export files (catalog, FITS, waterfall) are generated asynchronously and may take 10–30 seconds after completion to become available for download.
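Rather than sleeping for a fixed interval, a sketch like the following polls the download endpoint until the 404 ("format not yet available"; see the errors table under GET /jobs/{job_id}/download) clears:
import time
import requests

API = "https://api-dev.deepfieldlabs.dev"
AUTH = {"x-api-key": "sk_live_xxxxxxxxxxxx"}
job_id = "a1b2c3d4-..."  # placeholder job ID

# Poll until the FITS export exists; 404 means it has not been generated yet
for _ in range(12):  # up to about one minute
    resp = requests.get(f"{API}/jobs/{job_id}/download?format=fits", headers=AUTH)
    if resp.status_code == 200:
        print("FITS catalog ready:", resp.json()["download_url"])
        break
    time.sleep(5)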
GET /jobs (Auth Required)
List all jobs for the authenticated user, ordered by most recent first.
Query Parameters
| Param | Type | Default | Description |
|---|---|---|---|
| limit | int | 50 | Maximum number of jobs to return (max 100) |
Response (200)
[
{
"job_id": "a1b2c3d4-...",
"status": "complete",
"source_name": "Voyager1_trimmed.fil",
"filename": "Voyager1_trimmed.fil",
"file_size": 46137344,
"created_at": 1712444400,
"completed_at": 1712444430,
"results": { ... }
},
{
"job_id": "b2c3d4e5-...",
"status": "submitted",
"source_name": "GJ1002.fil",
"created_at": 1712444500
}
]
Job history: Jobs submitted via API also appear in the web dashboard and vice versa. History is limited per tier (Free: 10 jobs, Researcher: 50, Institution: 200, Enterprise: 500). Oldest completed jobs are automatically pruned when the limit is exceeded.
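Example: Listing recent jobs with Python
A short sketch using the limit parameter:
import requests

API = "https://api-dev.deepfieldlabs.dev"
AUTH = {"x-api-key": "sk_live_xxxxxxxxxxxx"}

# Fetch the ten most recent jobs and print a one-line summary of each
jobs = requests.get(f"{API}/jobs", params={"limit": 10}, headers=AUTH).json()
for job in jobs:
    print(job["job_id"][:8], job["status"], job.get("source_name", ""))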
GET /jobs/{job_id}/download (Auth Required)
Download results for a completed job. Returns a presigned URL for binary formats, or inline content for CSV.
Query Parameters
| Param | Type | Default | Description |
|---|---|---|---|
| format | string | json | Output format (see table below) |
Available Formats
| Format | Content | Description |
|---|---|---|
| json | results.json | Full pipeline output with raw hits, filtered hits, candidates, processing time |
| csv | CSV text | Signal catalog as CSV (returned inline, not as a presigned URL) |
| catalog_json | catalog.json | Classified signal entries in JSON format |
| classified | classified.json | ML-classified signals with class labels and confidence |
| fits | catalog.fits | FITS binary table (compatible with TOPCAT, DS9, astropy) |
| waterfall | waterfall.png | Frequency vs time spectrogram visualization |
| waterfall_thumb | waterfall_thumb.png | Thumbnail version of the waterfall spectrogram |
Response — Presigned URL (json, catalog_json, classified, fits, waterfall)
{
"download_url": "https://mitraseti-dev-results.s3.amazonaws.com/jobs/...",
"format": "json",
"filename": "results.json"
}
Response — Inline CSV
source_name,frequency_hz,drift_rate,snr,classification,interestingness,...
Voyager1,1420405751.68,0.15,14.2,NARROWBAND_DRIFTING,0.82,...
Voyager1,1420405889.12,-0.08,8.7,RFI_TERRESTRIAL,0.15,...
...
Example: Download all formats
# Download JSON results
curl -s "$API/jobs/$JOB_ID/download?format=json" $AUTH \
| python3 -c "import sys,json; import urllib.request; \
urllib.request.urlretrieve(json.load(sys.stdin)['download_url'], 'results.json')"
# Download CSV (returned inline)
curl -s "$API/jobs/$JOB_ID/download?format=csv" $AUTH > signals.csv
# Download waterfall spectrogram
curl -s "$API/jobs/$JOB_ID/download?format=waterfall" $AUTH \
| python3 -c "import sys,json; import urllib.request; \
urllib.request.urlretrieve(json.load(sys.stdin)['download_url'], 'waterfall.png')"
# Download FITS catalog
curl -s "$API/jobs/$JOB_ID/download?format=fits" $AUTH \
| python3 -c "import sys,json; import urllib.request; \
urllib.request.urlretrieve(json.load(sys.stdin)['download_url'], 'catalog.fits')"
Errors
| Code | Reason |
|---|---|
| 400 | Invalid format parameter |
| 403 | Job belongs to a different user |
| 404 | Job not found, or requested format not yet available |
GET /usage (Auth Required)
Get current usage statistics and tier quota details.
Response (200)
{
"user_id": "796e24d8-...",
"tier": "researcher",
"usage_this_month": 12,
"limit": 500,
"remaining": 488,
"tier_details": {
"files_per_month": 500,
"max_file_mb": 1024,
"price": "$99 USD/mo",
"retention_days": 90
}
}
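Example: Checking quota before submitting
A sketch that inspects the monthly quota and bails out before /analyze would return 429:
import requests

API = "https://api-dev.deepfieldlabs.dev"
AUTH = {"x-api-key": "sk_live_xxxxxxxxxxxx"}

usage = requests.get(f"{API}/usage", headers=AUTH).json()
print(f"{usage['usage_this_month']}/{usage['limit']} jobs used this month")
if usage["remaining"] == 0:
    raise SystemExit("Monthly job quota exhausted")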
POST /api-keys (Auth Required, Researcher+)
Generate a new API key. The full key is shown only once in the response — store it securely.
Request Body
| Field | Type | Default | Description |
|---|---|---|---|
| name | string | "Default" | Friendly name (e.g. "production-v1", "test-pipeline-2") |
| ttl_days | integer | 90 | Days until expiration (1–365) |
Response (201)
{
"key_id": "3f8a1b2c",
"api_key": "sk_live_a1b2c3d4e5f6...",
"expires_at": 1720272000,
"ttl_days": 90,
"warning": "Store this key securely. It will not be shown again."
}
Important: The api_key value is only returned at creation time. Copy it immediately. If lost, revoke the key and generate a new one.
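Example: Creating and storing a key with Python
A sketch that creates a 30-day key and writes it to a local file with owner-only permissions (the storage path is illustrative, not prescribed by the API):
import os
import requests

API = "https://api-dev.deepfieldlabs.dev"
AUTH = {"x-api-key": "sk_live_xxxxxxxxxxxx"}  # an existing valid key

resp = requests.post(f"{API}/api-keys",
                     json={"name": "ci-pipeline", "ttl_days": 30},
                     headers=AUTH)
key = resp.json()

# The full api_key only appears in this response; persist it immediately
path = os.path.expanduser("~/.mitraseti_key")
with open(path, "w") as f:
    f.write(key["api_key"])
os.chmod(path, 0o600)
print("Stored key", key["key_id"], "expires at", key["expires_at"])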
Errors
| Code | Reason |
|---|---|
| 403 | Free tier cannot create API keys — upgrade required |
GET /api-keys (Auth Required, Researcher+)
List all API keys for the authenticated user (keys are masked, only metadata is shown).
Response (200)
{
"keys": [
{
"key_id": "3f8a1b2c",
"name": "production-v1",
"created_at": 1712444400,
"expires_at": 1720272000,
"active": true,
"expired": false
},
{
"key_id": "9d4e5f6a",
"name": "old-test",
"created_at": 1709852400,
"expires_at": 1717680000,
"active": true,
"expired": true
}
]
}
DELETE /api-keys/{keyId} (Auth Required, Researcher+)
Revoke an API key. The key is marked inactive and can no longer be used for authentication.
Response (200)
{
"message": "API key revoked",
"key_id": "3f8a1b2c"
}
Errors
| Code | Reason |
|---|---|
| 403 | Key belongs to a different user |
| 404 | Key not found |
POST /reports (Auth Required)
Submit an error report or feedback for a specific job.
Request Body
| Field | Type | Required | Description |
|---|---|---|---|
| job_id | string | Yes | The job ID to report about |
| error_type | string | No | pipeline_failure or user_report |
| description | string | No | Description of the issue |
| metadata | object | No | Additional context (browser, screen size, etc.) |
Response (201)
{
"report_id": "rpt-abc123...",
"message": "Report submitted. Thank you."
}
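Example: Submitting a report with Python
A minimal sketch reporting a stuck job (the job ID is a placeholder):
import requests

API = "https://api-dev.deepfieldlabs.dev"
AUTH = {"x-api-key": "sk_live_xxxxxxxxxxxx"}

resp = requests.post(f"{API}/reports",
                     json={"job_id": "a1b2c3d4-...",
                           "error_type": "user_report",
                           "description": "Job stuck in processing for over an hour"},
                     headers=AUTH)
print(resp.json()["report_id"])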
Python Example
Complete Python script that uploads a file, runs analysis, polls for results, and downloads all outputs:
import os
import time

import requests

API = "https://api-dev.deepfieldlabs.dev"
API_KEY = "sk_live_xxxxxxxxxxxx"
HEADERS = {"x-api-key": API_KEY, "Content-Type": "application/json"}
AUTH = {"x-api-key": API_KEY}

FILE_PATH = "observation.fil"
file_size = os.path.getsize(FILE_PATH)

# 1. Get upload URL
resp = requests.post(f"{API}/upload-url",
                     json={"filename": os.path.basename(FILE_PATH), "file_size": file_size},
                     headers=HEADERS)
upload = resp.json()
file_key = upload["file_key"]

# 2. Upload file
if upload.get("multipart"):
    # Multipart upload for large files
    parts = []
    with open(FILE_PATH, "rb") as f:
        for part in upload["parts"]:
            chunk = f.read(upload["part_size"])
            r = requests.put(part["url"], data=chunk)
            parts.append({"partNumber": part["partNumber"], "etag": r.headers["ETag"]})
    requests.post(f"{API}/complete-upload",
                  json={"file_key": file_key, "upload_id": upload["upload_id"], "parts": parts},
                  headers=HEADERS)
    print("Multipart upload complete")
else:
    # Single upload for small files
    with open(FILE_PATH, "rb") as f:
        requests.put(upload["upload_url"], data=f,
                     headers={"Content-Type": "application/octet-stream"})
    print("Upload complete")

# 3. Start analysis
resp = requests.post(f"{API}/analyze",
                     json={"file_key": file_key, "filename": os.path.basename(FILE_PATH)},
                     headers=HEADERS)
job_id = resp.json()["job_id"]
print(f"Job submitted: {job_id}")

# 4. Poll for results (recommended: 10-second intervals)
while True:
    status = requests.get(f"{API}/jobs/{job_id}", headers=AUTH).json()
    print(f"  Status: {status['status']}")
    if status["status"] in ("complete", "failed"):
        break
    time.sleep(10)

# 5. Process results
if status["status"] == "complete":
    results = status["results"]
    print(f"\nResults: {results['raw_hits']} raw hits, "
          f"{results.get('filtered_hits', 0)} filtered, "
          f"{results.get('candidates', 0)} candidates")
    print(f"Processing time: {results.get('processing_time_seconds', 0):.1f}s")
    for c in results.get("top_candidates", [])[:5]:
        freq_mhz = c["frequency_hz"] / 1e6
        print(f"  {freq_mhz:.3f} MHz | SNR={c['snr']:.1f} | {c['classification']}")

    # 6. Download results (wait for exports to be generated)
    time.sleep(15)
    for fmt in ["json", "csv", "fits", "waterfall"]:
        resp = requests.get(f"{API}/jobs/{job_id}/download?format={fmt}", headers=AUTH)
        if fmt == "csv":
            # CSV is returned inline
            with open(f"{job_id[:8]}_signals.csv", "w") as f:
                f.write(resp.text)
            print(f"  Saved {fmt}: {job_id[:8]}_signals.csv")
        else:
            data = resp.json()
            if "download_url" in data:
                file_resp = requests.get(data["download_url"])
                ext = {"json": "json", "fits": "fits", "waterfall": "png"}[fmt]
                filename = f"{job_id[:8]}.{ext}"
                with open(filename, "wb") as f:
                    f.write(file_resp.content)
                print(f"  Saved {fmt}: {filename}")
else:
    print(f"Job failed: {status.get('error', 'Unknown error')}")