# REST API
Trigger scraping tasks and retrieve results via HTTP.
## Base URL

All API requests go to:

    https://scrapespace.com/api

Endpoint paths in this reference already include the /api prefix, so append them to https://scrapespace.com.
## Authentication

Every request requires an API key in the `x-api-key` header. See API Keys.
## Endpoints

### Start an AI agent task

`POST /api/agent/run`

    {
      "prompt": "Get the top 10 trending repos on GitHub with name, description, stars"
    }

Returns `{ jobId, taskId }`. The job starts in pending status.
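As a sketch, the request can be built with Python's standard library; the key value is a placeholder (see API Keys), and the variable names are not part of the API:

```python
import json
import urllib.request

BASE = "https://scrapespace.com"  # endpoint paths already include /api
API_KEY = "your-api-key"          # placeholder; see API Keys

# Every request carries the x-api-key header; POST bodies are JSON.
body = json.dumps({
    "prompt": "Get the top 10 trending repos on GitHub with name, description, stars"
}).encode()

req = urllib.request.Request(
    BASE + "/api/agent/run",
    data=body,
    headers={"x-api-key": API_KEY, "Content-Type": "application/json"},
    method="POST",
)
# Sending it returns the job handle:
# job = json.load(urllib.request.urlopen(req))  # {"jobId": ..., "taskId": ...}
```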
### Stop a running task

`POST /api/agent/stop`

    {
      "jobId": "job-uuid-here"
    }
### Run a scraper (refresh)

`POST /api/jobs`

    {
      "script_id": "script-uuid-here"
    }
### List jobs

`GET /api/jobs`

Returns the 30 most recent jobs by default. Pass `?limit=N` (max 100) to fetch more.
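A GET with the limit query parameter, sketched with the same placeholder key:

```python
import urllib.parse
import urllib.request

BASE = "https://scrapespace.com"
API_KEY = "your-api-key"  # placeholder

# Ask for up to 100 jobs instead of the default 30.
query = urllib.parse.urlencode({"limit": 100})
req = urllib.request.Request(f"{BASE}/api/jobs?{query}", headers={"x-api-key": API_KEY})
# jobs = json.load(urllib.request.urlopen(req))
```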
### Get job details

`GET /api/jobs/{id}`

Returns the job, including status, logs, and metadata. For running jobs, the logs field includes live agent activity.
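Because a new job starts in pending status, callers typically poll this endpoint until the status becomes terminal. A minimal polling sketch; fetch_job stands for any function that performs the GET and returns the parsed job, and the terminal set follows the Job statuses table below:

```python
import time

# Statuses after which a job will not change again (see Job statuses).
TERMINAL = {"success", "failed", "cancelled", "rejected", "blocked", "max_steps"}

def wait_for_job(fetch_job, interval=2.0, timeout=300.0):
    """Poll until fetch_job() returns a job in a terminal status."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        job = fetch_job()
        if job["status"] in TERMINAL:
            return job
        time.sleep(interval)
    raise TimeoutError("job did not reach a terminal status in time")
```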
### Get job results

`GET /api/jobs/{id}/output`

Returns the result data as `{ records, total }`.
### Export job results

`GET /api/jobs/{id}/export?format=csv`

Supported formats: csv, json, txt, md. Text and markdown exports require single-record output.
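Fetching the JSON records and saving a CSV export can be sketched as follows; the job ID and output filename are placeholders:

```python
import urllib.request

BASE = "https://scrapespace.com"
API_KEY = "your-api-key"   # placeholder
job_id = "job-uuid-here"   # placeholder

output_req = urllib.request.Request(
    f"{BASE}/api/jobs/{job_id}/output", headers={"x-api-key": API_KEY}
)
export_req = urllib.request.Request(
    f"{BASE}/api/jobs/{job_id}/export?format=csv", headers={"x-api-key": API_KEY}
)
# data = json.load(urllib.request.urlopen(output_req))  # {"records": [...], "total": N}
# with urllib.request.urlopen(export_req) as resp, open("results.csv", "wb") as f:
#     f.write(resp.read())
```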
### List automations

`GET /api/scripts`

### Get automation details

`GET /api/scripts/{id}`
## Job statuses
| Status | Meaning |
|---|---|
| pending | Queued, waiting for a runner |
| running | Currently executing |
| success | Completed successfully |
| failed | Execution error |
| cancelled | Stopped by user |
| rejected | Prompt rejected (not a scraping task) |
| blocked | Blocked by website (CAPTCHA, access denied) |
| max_steps | Agent hit the step limit |