[{"data":1,"prerenderedAt":587},["ShallowReactive",2],{"\u002Fdocs\u002Fdeveloper\u002Fapi-keys":3,"docs-search":142},{"id":4,"title":5,"body":6,"description":134,"extension":135,"meta":136,"navigation":137,"path":138,"seo":139,"stem":140,"__hash__":141},"docs\u002Fdocs\u002F5.developer\u002Fapi-keys.md","API Keys",{"type":7,"value":8,"toc":128},"minimark",[9,14,37,46,50,57,92,96,99,114,117,121,124],[10,11,13],"h2",{"id":12},"creating-an-api-key","Creating an API key",[15,16,17,25,31,34],"ol",{},[18,19,20,21,24],"li",{},"Go to ",[22,23,5],"strong",{}," in settings",[18,26,27,28],{},"Click ",[22,29,30],{},"Create API Key",[18,32,33],{},"Give it a descriptive name",[18,35,36],{},"Copy the key immediately — it won't be shown again",[38,39,40,41,45],"p",{},"API keys have the format ",[42,43,44],"code",{},"api_{publicId}.{secret}",". The secret portion is hashed and cannot be retrieved after creation.",[10,47,49],{"id":48},"authentication","Authentication",[38,51,52,53,56],{},"Include your API key in the ",[42,54,55],{},"x-api-key"," header:",[58,59,64],"pre",{"className":60,"code":61,"language":62,"meta":63,"style":63},"language-bash shiki shiki-themes github-light github-dark","curl -H \"x-api-key: api_abc123.yoursecrethere\" \\\n  https:\u002F\u002Fscrapespace.com\u002Fapi\u002Fscripts\n","bash","",[42,65,66,86],{"__ignoreMap":63},[67,68,71,75,79,83],"span",{"class":69,"line":70},"line",1,[67,72,74],{"class":73},"sScJk","curl",[67,76,78],{"class":77},"sj4cs"," -H",[67,80,82],{"class":81},"sZZnC"," \"x-api-key: api_abc123.yoursecrethere\"",[67,84,85],{"class":77}," \\\n",[67,87,89],{"class":69,"line":88},2,[67,90,91],{"class":81},"  https:\u002F\u002Fscrapespace.com\u002Fapi\u002Fscripts\n",[10,93,95],{"id":94},"permissions","Permissions",[38,97,98],{},"API keys have the same permissions as your user account. 
They can:",[100,101,102,105,108,111],"ul",{},[18,103,104],{},"Run AI agents and replay existing automations",[18,106,107],{},"List and retrieve jobs, automations, and schedules",[18,109,110],{},"Create and manage schedules",[18,112,113],{},"Access all data within your team",[38,115,116],{},"API access requires Pro, Founding, or Business. Pro \u002F Founding get basic access; Business gets full access.",[10,118,120],{"id":119},"revoking-keys","Revoking keys",[38,122,123],{},"Delete an API key from settings to revoke it immediately. Any requests using that key will be rejected.",[125,126,127],"style",{},"html pre.shiki code .sScJk, html code.shiki .sScJk{--shiki-default:#6F42C1;--shiki-dark:#B392F0}html pre.shiki code .sj4cs, html code.shiki .sj4cs{--shiki-default:#005CC5;--shiki-dark:#79B8FF}html pre.shiki code .sZZnC, html code.shiki .sZZnC{--shiki-default:#032F62;--shiki-dark:#9ECBFF}html .default .shiki span {color: var(--shiki-default);background: var(--shiki-default-bg);font-style: var(--shiki-default-font-style);font-weight: var(--shiki-default-font-weight);text-decoration: var(--shiki-default-text-decoration);}html .shiki span {color: var(--shiki-default);background: var(--shiki-default-bg);font-style: var(--shiki-default-font-style);font-weight: var(--shiki-default-font-weight);text-decoration: var(--shiki-default-text-decoration);}html .dark .shiki span {color: var(--shiki-dark);background: var(--shiki-dark-bg);font-style: var(--shiki-dark-font-style);font-weight: var(--shiki-dark-font-weight);text-decoration: var(--shiki-dark-text-decoration);}html.dark .shiki span {color: var(--shiki-dark);background: var(--shiki-dark-bg);font-style: var(--shiki-dark-font-style);font-weight: var(--shiki-dark-font-weight);text-decoration: 
var(--shiki-dark-text-decoration);}",{"title":63,"searchDepth":88,"depth":88,"links":129},[130,131,132,133],{"id":12,"depth":88,"text":13},{"id":48,"depth":88,"text":49},{"id":94,"depth":88,"text":95},{"id":119,"depth":88,"text":120},"Create API keys to access ScrapeSpace programmatically.","md",{},true,"\u002Fdocs\u002Fdeveloper\u002Fapi-keys",{"title":5,"description":134},"docs\u002F5.developer\u002Fapi-keys","gJo9VQrLLbNqYxTdRMOK3CTE0pHLoZTAIUvMs_LPkio",[143,148,153,159,164,169,174,179,184,189,194,199,204,209,214,219,224,229,234,239,244,249,254,259,264,269,274,279,284,289,294,298,303,308,313,318,323,328,333,337,342,347,352,357,362,367,372,377,382,387,392,397,402,407,412,417,422,427,432,437,442,444,448,452,456,460,465,470,474,478,483,488,493,498,503,508,513,518,523,528,533,538,543,548,552,557,562,567,572,577,582],{"id":144,"title":145,"titles":146,"content":147,"level":70},"\u002Fdocs\u002Fgetting-started\u002Fhow-it-works","How It Works",[],"The agent → scraper → refresh pipeline that powers ScrapeSpace.",{"id":149,"title":150,"titles":151,"content":152,"level":88},"\u002Fdocs\u002Fgetting-started\u002Fhow-it-works#the-three-execution-modes","The three execution modes",[145],"ScrapeSpace has three ways to run a scraping task. Understanding them helps you get the most out of the platform.",{"id":154,"title":155,"titles":156,"content":157,"level":158},"\u002Fdocs\u002Fgetting-started\u002Fhow-it-works#_1-agent-mode","1. Agent mode",[145,150],"When you type a prompt on the Agent page, ScrapeSpace spins up a cloud browser and an AI agent. The agent: Reads your promptTakes a screenshot of the pageAnalyzes the DOM to find interactive elementsDecides what action to take (click, type, scroll, run JavaScript)Repeats until the data is extractedReturns structured results This is the most flexible mode — the agent can handle complex navigation, pagination, login flows, and dynamic content. 
It typically takes 30 seconds to a few minutes depending on complexity.",3,{"id":160,"title":161,"titles":162,"content":163,"level":158},"\u002Fdocs\u002Fgetting-started\u002Fhow-it-works#_2-replay-mode-scraper-refresh","2. Replay mode (scraper refresh)",[145,150],"After a successful agent run, ScrapeSpace evaluates whether the task is repeatable. If it is, the agent's actions are saved as an automation — a deterministic sequence of browser actions. Replays run the saved automation directly in the browser without any AI involvement. They're fast (typically 3–5 seconds) and use replay quota rather than AI run quota.",{"id":165,"title":166,"titles":167,"content":168,"level":158},"\u002Fdocs\u002Fgetting-started\u002Fhow-it-works#_3-heal-mode-coming-soon","3. Heal mode (coming soon)",[145,150],"Websites change. When an automation replay fails because a selector moved or a page restructured, ScrapeSpace's self-healing system kicks in. It analyzes what broke and patches the automation automatically.",{"id":170,"title":171,"titles":172,"content":173,"level":88},"\u002Fdocs\u002Fgetting-started\u002Fhow-it-works#how-automations-work","How automations work",[145],"You describe what data you need (an AI run creates the automation)The automation can be replayed on-demand or on a scheduleEach replay runs the saved extraction — fast, deterministic, no AI neededYour plan determines monthly AI run and replay budgets, schedule frequency, and (Free only) how many automations you can store at once",{"id":175,"title":176,"titles":177,"content":178,"level":70},"\u002Fdocs\u002Fgetting-started","Introduction",[],"ScrapeSpace turns plain English into structured data from any website.",{"id":180,"title":181,"titles":182,"content":183,"level":88},"\u002Fdocs\u002Fgetting-started#what-is-scrapespace","What is ScrapeSpace?",[176],"ScrapeSpace is an AI-powered web scraping platform. 
You describe what data you need in plain English, and our AI agent opens a real browser, navigates the site, extracts the data, and returns structured results. No code, no CSS selectors, no browser extensions. Just tell ScrapeSpace what you want and get clean, structured data back.",{"id":185,"title":186,"titles":187,"content":188,"level":88},"\u002Fdocs\u002Fgetting-started#who-is-it-for","Who is it for?",[176],"ScrapeSpace is built for people who need data from the web but don't want to write extraction code: Marketers pulling competitor pricing, ad copy, or review dataSales teams building prospect lists from directories and LinkedInData analysts gathering public data for research and reportingSmall businesses monitoring prices, inventory, or job postings",{"id":190,"title":191,"titles":192,"content":193,"level":88},"\u002Fdocs\u002Fgetting-started#key-features","Key features",[176],"AI agent — describe your task, the AI handles navigation, pagination, and extraction. Each new build uses one AI run from your monthly quota.Automations — successful agent runs save as reusable automations that replay in seconds without AI costScheduling — replay automations automatically (daily, every 6 hours, or hourly depending on plan)Login support — store credentials securely and let the agent log in on your behalfProxy support — route requests through proxies for geo-targeted extractions and reliable access at scaleREST API — trigger AI runs programmatically and retrieve results via API",{"id":195,"title":196,"titles":197,"content":198,"level":70},"\u002Fdocs\u002Fgetting-started\u002Fquick-start","Quick Start",[],"Run your first AI scraping task in under a minute.",{"id":200,"title":201,"titles":202,"content":203,"level":88},"\u002Fdocs\u002Fgetting-started\u002Fquick-start#run-your-first-task","Run your first task",[196],"Go to the Agent page in your dashboardType a prompt describing what data you want, for example:Get the top 10 trending repositories from 
github.com\u002Ftrending, including the repo name, description, language, and star countClick Run and watch the AI work The agent will open a browser, navigate to the site, and extract the data you asked for. You'll see live activity logs as it works.",{"id":205,"title":206,"titles":207,"content":208,"level":88},"\u002Fdocs\u002Fgetting-started\u002Fquick-start#view-your-results","View your results",[196],"Once the task completes, you'll see structured results — a table of objects with the fields you requested. You can: Copy the data as JSONDownload as CSVView the generated script to understand what the agent did (Pro plan and above)",{"id":210,"title":211,"titles":212,"content":213,"level":88},"\u002Fdocs\u002Fgetting-started\u002Fquick-start#automations","Automations",[196],"If the task is repeatable (e.g., the same data from the same site), ScrapeSpace automatically saves it as an automation. You can: Re-run it anytime with one click (runs in seconds, no AI cost)Schedule it to run on a recurring basisTrigger it via the REST API",{"id":215,"title":216,"titles":217,"content":218,"level":88},"\u002Fdocs\u002Fgetting-started\u002Fquick-start#next-steps","Next steps",[196],"How It Works — understand the agent → scraper → refresh pipelineWriting Prompts — get better results with better promptsREST API — automate everything programmatically",{"id":220,"title":221,"titles":222,"content":223,"level":70},"\u002Fdocs\u002Fagent","AI Agent",[],"How ScrapeSpace's AI agent controls a browser to automate tasks.",{"id":225,"title":226,"titles":227,"content":228,"level":88},"\u002Fdocs\u002Fagent#overview","Overview",[221],"The AI agent is the core of ScrapeSpace. 
It controls a real cloud browser using screenshots and DOM analysis — the same way a human would browse, but faster and more precise.",{"id":230,"title":231,"titles":232,"content":233,"level":88},"\u002Fdocs\u002Fagent#what-the-agent-can-do","What the agent can do",[221],"Navigation Navigate to any URL, go back, reload, and switch between browser tabsSwitch into iframes to interact with embedded contentWait for elements or navigation to complete before continuing Interaction Click, double-click, and hover over any elementType into inputs, search boxes, and formsSelect options from dropdownsPress keyboard keys and shortcuts (Enter, Tab, Escape, Ctrl+A, etc.)Scroll pages, or scroll directly to a specific elementExpand collapsible sections and accordionsLow-level mouse and keyboard control for elements that resist normal interaction Data extraction Extract text or HTML attributes (href, src, etc.) from individual elementsExtract structured data from repeated lists (tables, cards, search results)Run JavaScript directly on the page for complex or bulk extractionMake HTTP requests to a site's API endpointsHandle pagination — next buttons, infinite scroll, page numbers Authentication & security Log in to websites using your stored credentialsGenerate one-time passwords (TOTP) for two-factor authenticationSolve CAPTCHAs automatically — Cloudflare Turnstile, hCaptcha, reCAPTCHA v2\u002Fv3, FunCaptcha, GeeTest v3\u002Fv4, Amazon WAF, and MTCaptchaSave and load cookies to maintain sessions",{"id":235,"title":236,"titles":237,"content":238,"level":88},"\u002Fdocs\u002Fagent#what-the-agent-cannot-do","What the agent cannot do",[221],"The agent controls a real browser, so it can do anything you could do manually. 
However, it will reject prompts that have nothing to do with browser automation: Math problems, trivia, or general knowledge questionsWriting essays, emails, or other textAnything that doesn't involve a website",{"id":240,"title":241,"titles":242,"content":243,"level":88},"\u002Fdocs\u002Fagent#live-activity","Live activity",[221],"While the agent is working, you can watch its progress in real-time via the activity log. Each action is logged with what the agent did and why, so you can understand its decision-making process.",{"id":245,"title":246,"titles":247,"content":248,"level":70},"\u002Fdocs\u002Fagent\u002Fresults","Results & Output",[],"Understanding agent output, data formats, and result handling.",{"id":250,"title":251,"titles":252,"content":253,"level":88},"\u002Fdocs\u002Fagent\u002Fresults#output-format","Output format",[246],"All results are returned as a JSON array of objects. Each object represents one extracted item with named fields. [\n  { \"title\": \"Example Post\", \"url\": \"https:\u002F\u002F...\", \"points\": 142 },\n  { \"title\": \"Another Post\", \"url\": \"https:\u002F\u002F...\", \"points\": 98 }\n]",{"id":255,"title":256,"titles":257,"content":258,"level":88},"\u002Fdocs\u002Fagent\u002Fresults#viewing-results","Viewing results",[246],"After a job completes, results are displayed in the format that best fits the data: Table — the default for most structured dataCards — image + title data like products, listings, or profilesTimeline — date-based content like posts or news articlesArticle — single long-form content with a title and bodyText — single text field resultsStats — numeric data displayed as metric cardsComparison — side-by-side view when there are few records with many fields The view is chosen automatically based on the shape of the data. 
Every result page also includes an activity log showing what the agent did step-by-step.",{"id":260,"title":261,"titles":262,"content":263,"level":88},"\u002Fdocs\u002Fagent\u002Fresults#exporting-data","Exporting data",[246],"From the results page you can export as: CSV or JSON — for any dataText or Markdown — for single-record results only (articles, individual items)",{"id":265,"title":266,"titles":267,"content":268,"level":88},"\u002Fdocs\u002Fagent\u002Fresults#via-the-api","Via the API",[246],"Results are available programmatically: curl -H \"x-api-key: api_xxx.yyy\" \\\n  https:\u002F\u002Fscrapespace.com\u002Fapi\u002Fjobs\u002F{jobId} The response includes output (the result array), status, logs, and metadata.",{"id":270,"title":271,"titles":272,"content":273,"level":88},"\u002Fdocs\u002Fagent\u002Fresults#result-quality","Result quality",[246],"The agent extracts data dynamically from the page DOM — it never hardcodes or fabricates data. If a field can't be found on the page, it will be null in the output rather than made up. 
",{"id":275,"title":276,"titles":277,"content":278,"level":70},"\u002Fdocs\u002Fagent\u002Fruns","Runs",[],"Understanding task executions and their statuses.",{"id":280,"title":281,"titles":282,"content":283,"level":88},"\u002Fdocs\u002Fagent\u002Fruns#what-are-runs","What are runs?",[276],"Every time you start a new agent task or replay an automation, ScrapeSpace creates a run. Runs track the execution from start to finish — including status, duration, and results.",{"id":285,"title":286,"titles":287,"content":288,"level":88},"\u002Fdocs\u002Fagent\u002Fruns#run-types","Run types",[276],"AI runs use the AI agent to navigate a real browser and extract data. A successful AI run can save a reusable automation. 
Replays execute a saved automation directly, without AI. They typically complete in 3–5 seconds and use a separate monthly replay budget. Scheduled runs are replays triggered automatically by a schedule you've configured.",{"id":290,"title":291,"titles":292,"content":293,"level":88},"\u002Fdocs\u002Fagent\u002Fruns#statuses","Statuses",[276],"StatusMeaningPendingQueued, waiting for an available runnerRunningCurrently executingSuccessCompleted successfully — data is readyFailedSomething went wrong during executionBlockedThe target site blocked accessCancelledYou stopped the run manuallyRejectedThe prompt wasn't a valid scraping task",{"id":295,"title":256,"titles":296,"content":297,"level":88},"\u002Fdocs\u002Fagent\u002Fruns#viewing-results",[276],"Successful runs with extracted data show results inline. You can export data as CSV or JSON from the run detail page.",{"id":299,"title":300,"titles":301,"content":302,"level":88},"\u002Fdocs\u002Fagent\u002Fruns#retrying","Retrying",[276],"Failed or blocked runs can be retried from the run detail page. The AI agent may take a different approach on retry.",{"id":304,"title":305,"titles":306,"content":307,"level":70},"\u002Fdocs\u002Fagent\u002Fwriting-prompts","Writing Prompts",[],"How to write effective prompts that get the data you need.",{"id":309,"title":310,"titles":311,"content":312,"level":88},"\u002Fdocs\u002Fagent\u002Fwriting-prompts#be-specific-about-what-you-want","Be specific about what you want",[305],"The agent works best when you tell it exactly what data fields you need. 
Compare: Weak promptStrong promptGet data from Hacker NewsGet the top 30 stories from Hacker News including title, URL, points, and comment countScrape Amazon productsGet the first 20 results for \"wireless mouse\" on Amazon including product name, price, rating, and number of reviews",{"id":314,"title":315,"titles":316,"content":317,"level":88},"\u002Fdocs\u002Fagent\u002Fwriting-prompts#include-the-target-url-when-helpful","Include the target URL when helpful",[305],"If you know the exact page, include it: Go to https:\u002F\u002Fnews.ycombinator.com and get the top 30 stories with title, URL, points, and author If you don't include a URL, the agent will figure out where to go — but being explicit saves time.",{"id":319,"title":320,"titles":321,"content":322,"level":88},"\u002Fdocs\u002Fagent\u002Fwriting-prompts#specify-pagination","Specify pagination",[305],"If you need data across multiple pages, say so: Get all job listings from the first 5 pages of results on Indeed for \"data engineer\" in \"New York\" Without this, the agent may only scrape the first page.",{"id":324,"title":325,"titles":326,"content":327,"level":88},"\u002Fdocs\u002Fagent\u002Fwriting-prompts#describe-the-output-structure","Describe the output structure",[305],"The agent returns JSON arrays. 
If you want specific field names: Return each result as an object with fields: company_name, job_title, salary_range, location",{"id":329,"title":330,"titles":331,"content":332,"level":88},"\u002Fdocs\u002Fagent\u002Fwriting-prompts#tips","Tips",[305],"One task per prompt — don't ask for data from multiple unrelated sitesBe explicit about quantity — \"top 10\", \"first 50\", \"all results on this page\"Mention dynamic content — \"scroll down to load all results\" or \"click 'Show More' until all items are visible\"Reference visual layout — \"the table in the middle of the page\" or \"the sidebar with pricing tiers\"",{"id":334,"title":211,"titles":335,"content":336,"level":70},"\u002Fdocs\u002Fautomations",[],"Reusable data extractions created from successful agent runs.",{"id":338,"title":339,"titles":340,"content":341,"level":88},"\u002Fdocs\u002Fautomations#what-are-automations","What are automations?",[211],"When the AI agent successfully completes a task, ScrapeSpace evaluates whether the extraction can be repeated reliably. If it can, the agent's actions are saved as an automation — a deterministic script that replays the same steps without AI.",{"id":343,"title":344,"titles":345,"content":346,"level":88},"\u002Fdocs\u002Fautomations#how-automations-are-created","How automations are created",[211],"You run an AI agent task (this consumes one AI run from your monthly quota)The agent extracts your data successfullyScrapeSpace assesses whether the task is repeatableIf yes, an automation is saved automatically Not all tasks produce automations. One-off extractions (e.g., a page that requires different navigation each time) won't generate a reusable automation. 
Free-tier users with a full storage cap will get the data delivered as a job result, but the automation isn't saved — upgrade or delete an existing automation to save more.",{"id":348,"title":349,"titles":350,"content":351,"level":88},"\u002Fdocs\u002Fautomations#running-automations","Running automations",[211],"Click Run on any automation to replay it. It executes in the cloud browser using the saved script — no AI involved. Replays typically complete in 3–5 seconds and count against your monthly replay quota.",{"id":353,"title":354,"titles":355,"content":356,"level":88},"\u002Fdocs\u002Fautomations#managing-automations","Managing automations",[211],"Each automation shows: Name — derived from the original promptLast run — when it was last replayedStatus — result of the most recent runSchedules — any recurring schedules attached to it Free-tier accounts can save up to 2 automations. Paid plans have unlimited storage.",{"id":358,"title":359,"titles":360,"content":361,"level":70},"\u002Fdocs\u002Fautomations\u002Fscheduling","Scheduling",[],"Run automations automatically on a recurring schedule.",{"id":363,"title":364,"titles":365,"content":366,"level":88},"\u002Fdocs\u002Fautomations\u002Fscheduling#creating-a-schedule","Creating a schedule",[359],"From an automation's detail page, click Add Schedule to create a recurring run. You can configure: Frequency — how often the automation runsTimezone — schedules execute in your selected timezone",{"id":368,"title":369,"titles":370,"content":371,"level":88},"\u002Fdocs\u002Fautomations\u002Fscheduling#frequency-options","Frequency options",[359],"Schedule cadence is gated by your plan tier: PlanMaximum frequencyFreeNo schedulingStarterOnce per dayPro \u002F FoundingEvery 6 hoursBusinessHourly A Starter user can schedule daily, weekly, or monthly runs but not anything more frequent than once a day. 
A Pro user can schedule down to every-6-hours but not hourly.",{"id":373,"title":374,"titles":375,"content":376,"level":88},"\u002Fdocs\u002Fautomations\u002Fscheduling#cron-expressions","Cron expressions",[359],"All paid plans can use custom cron expressions in advanced mode: 0 9 * * 1-5 — weekdays at 9 AM0 *\u002F6 * * * — every 6 hours0 8,20 * * * — twice daily at 8 AM and 8 PM The cron expression must respect your plan's maximum frequency. A Starter user can write 0 9 * * 1 (Mondays at 9 AM) but not 0 *\u002F6 * * * (every 6 hours).",{"id":378,"title":379,"titles":380,"content":381,"level":88},"\u002Fdocs\u002Fautomations\u002Fscheduling#monitoring-scheduled-runs","Monitoring scheduled runs",[359],"Each schedule shows its execution history — when it ran, whether it succeeded, and the results. Failed runs are flagged so you can investigate.",{"id":383,"title":384,"titles":385,"content":386,"level":88},"\u002Fdocs\u002Fautomations\u002Fscheduling#pausing-and-deleting","Pausing and deleting",[359],"You can pause a schedule to temporarily stop it without losing the configuration, or delete it entirely. Pausing is useful during website maintenance windows or when you need to adjust the automation.",{"id":388,"title":389,"titles":390,"content":391,"level":70},"\u002Fdocs\u002Faccounts-and-proxies\u002Faccounts","Login Accounts",[],"Store website credentials so the agent can log in on your behalf.",{"id":393,"title":394,"titles":395,"content":396,"level":88},"\u002Fdocs\u002Faccounts-and-proxies\u002Faccounts#why-use-login-accounts","Why use login accounts?",[389],"Some websites require authentication to access data. 
Rather than including credentials in every prompt, you can store them once and reference them by name.",{"id":398,"title":399,"titles":400,"content":401,"level":88},"\u002Fdocs\u002Faccounts-and-proxies\u002Faccounts#adding-an-account","Adding an account",[389],"Go to Accounts in the sidebarClick New AccountEnter a label (e.g., \"LinkedIn\"), username, and passwordOptionally, enter an MFA secret (Base32 TOTP key) for two-factor authenticationSave Credentials are encrypted at rest and only decrypted when the agent needs to log in during a task.",{"id":403,"title":404,"titles":405,"content":406,"level":88},"\u002Fdocs\u002Faccounts-and-proxies\u002Faccounts#using-accounts-in-prompts","Using accounts in prompts",[389],"When running an agent task that requires login, open Run settings and select the account from the dropdown. The agent will use those credentials to authenticate during the task.",{"id":408,"title":409,"titles":410,"content":411,"level":88},"\u002Fdocs\u002Faccounts-and-proxies\u002Faccounts#security","Security",[389],"Credentials are encrypted using AES-256-GCM before storageThey are never exposed in logs, results, or generated scriptsOnly your team can access your stored accountsYou can delete an account at any time to remove the stored credentials",{"id":413,"title":414,"titles":415,"content":416,"level":70},"\u002Fdocs\u002Faccounts-and-proxies\u002Fproxies","Proxies",[],"Route scraping requests through proxy servers.",{"id":418,"title":419,"titles":420,"content":421,"level":88},"\u002Fdocs\u002Faccounts-and-proxies\u002Fproxies#why-use-proxies","Why use proxies?",[414],"Proxies help with: Geo-targeting — access region-specific content by routing through a proxy in that countryLoad distribution — spread requests across multiple IPs to stay within target sites' fair-use limitsReliability — keep extractions running smoothly when your home IP is throttled or temporarily 
unreachable",{"id":423,"title":424,"titles":425,"content":426,"level":88},"\u002Fdocs\u002Faccounts-and-proxies\u002Fproxies#adding-a-proxy","Adding a proxy",[414],"Go to Proxies in the sidebarClick New ProxyEnter the proxy details: host, port, username, and passwordSave",{"id":428,"title":429,"titles":430,"content":431,"level":88},"\u002Fdocs\u002Faccounts-and-proxies\u002Fproxies#proxy-protocols","Proxy protocols",[414],"ScrapeSpace connects to proxies over HTTP.",{"id":433,"title":434,"titles":435,"content":436,"level":88},"\u002Fdocs\u002Faccounts-and-proxies\u002Fproxies#using-proxies","Using proxies",[414],"When running an agent task, open Run settings and select a proxy from the dropdown. All browser traffic for that task will be routed through the selected proxy.",{"id":438,"title":439,"titles":440,"content":441,"level":88},"\u002Fdocs\u002Faccounts-and-proxies\u002Fproxies#proxy-health","Proxy health",[414],"If a proxy is unreachable or returns errors during job execution, the job will fail with an error message. Make sure your proxy credentials are correct and the proxy server is accessible before running tasks.",{"id":138,"title":5,"titles":443,"content":134,"level":70},[],{"id":445,"title":13,"titles":446,"content":447,"level":88},"\u002Fdocs\u002Fdeveloper\u002Fapi-keys#creating-an-api-key",[5],"Go to API Keys in settingsClick Create API KeyGive it a descriptive nameCopy the key immediately — it won't be shown again API keys have the format api_{publicId}.{secret}. 
The secret portion is hashed and cannot be retrieved after creation.",{"id":449,"title":49,"titles":450,"content":451,"level":88},"\u002Fdocs\u002Fdeveloper\u002Fapi-keys#authentication",[5],"Include your API key in the x-api-key header: curl -H \"x-api-key: api_abc123.yoursecrethere\" \\\n  https:\u002F\u002Fscrapespace.com\u002Fapi\u002Fscripts",{"id":453,"title":95,"titles":454,"content":455,"level":88},"\u002Fdocs\u002Fdeveloper\u002Fapi-keys#permissions",[5],"API keys have the same permissions as your user account. They can: Run AI agents and replay existing automationsList and retrieve jobs, automations, and schedulesCreate and manage schedulesAccess all data within your team API access requires Pro, Founding, or Business. Pro \u002F Founding get basic access; Business gets full access.",{"id":457,"title":120,"titles":458,"content":459,"level":88},"\u002Fdocs\u002Fdeveloper\u002Fapi-keys#revoking-keys",[5],"Delete an API key from settings to revoke it immediately. Any requests using that key will be rejected. 
",{"id":461,"title":462,"titles":463,"content":464,"level":70},"\u002Fdocs\u002Fdeveloper\u002Frest-api","REST API",[],"Trigger scraping tasks and retrieve results via HTTP.",{"id":466,"title":467,"titles":468,"content":469,"level":88},"\u002Fdocs\u002Fdeveloper\u002Frest-api#base-url","Base URL",[462],"All API requests go to: https:\u002F\u002Fscrapespace.com\u002Fapi",{"id":471,"title":49,"titles":472,"content":473,"level":88},"\u002Fdocs\u002Fdeveloper\u002Frest-api#authentication",[462],"Every request requires an API key in the x-api-key header. 
See API Keys.",{"id":475,"title":476,"titles":477,"content":63,"level":88},"\u002Fdocs\u002Fdeveloper\u002Frest-api#endpoints","Endpoints",[462],{"id":479,"title":480,"titles":481,"content":482,"level":158},"\u002Fdocs\u002Fdeveloper\u002Frest-api#start-an-ai-agent-task","Start an AI agent task",[462,476],"POST \u002Fapi\u002Fagent\u002Frun\n\n{\n  \"prompt\": \"Get the top 10 trending repos on GitHub with name, description, stars\"\n} Returns { jobId, taskId }. The job starts in pending status.",{"id":484,"title":485,"titles":486,"content":487,"level":158},"\u002Fdocs\u002Fdeveloper\u002Frest-api#stop-a-running-task","Stop a running task",[462,476],"POST \u002Fapi\u002Fagent\u002Fstop\n\n{\n  \"jobId\": \"job-uuid-here\"\n}",{"id":489,"title":490,"titles":491,"content":492,"level":158},"\u002Fdocs\u002Fdeveloper\u002Frest-api#run-a-scraper-refresh","Run a scraper (refresh)",[462,476],"POST \u002Fapi\u002Fjobs\n\n{\n  \"script_id\": \"script-uuid-here\"\n}",{"id":494,"title":495,"titles":496,"content":497,"level":158},"\u002Fdocs\u002Fdeveloper\u002Frest-api#list-jobs","List jobs",[462,476],"GET \u002Fapi\u002Fjobs Returns the 30 most recent jobs by default. Pass ?limit=N (max 100) to fetch more.",{"id":499,"title":500,"titles":501,"content":502,"level":158},"\u002Fdocs\u002Fdeveloper\u002Frest-api#get-job-details","Get job details",[462,476],"GET \u002Fapi\u002Fjobs\u002F{id} Returns the job including status, logs, and metadata. 
For running jobs, `logs` includes live agent activity.

### Get job results

```
GET /api/jobs/{id}/output
```

Returns the result data as `{ records, total }`.

### Export job results

```
GET /api/jobs/{id}/export?format=csv
```

Supported formats: `csv`, `json`, `txt`, `md`. Text and markdown exports require single-record output.

### List automations

```
GET /api/scripts
```

### Get automation details

```
GET /api/scripts/{id}
```

## Job statuses

| Status | Meaning |
| --- | --- |
| `pending` | Queued, waiting for a runner |
| `running` | Currently executing |
| `success` | Completed successfully |
| `failed` | Execution error |
| `cancelled` | Stopped by user |
| `rejected` | Prompt rejected (not a scraping task) |
| `blocked` | Blocked by website (CAPTCHA, access denied) |
| `max_steps` | Agent hit the step limit |
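The endpoints above compose into a simple start-then-poll flow. The sketch below is a minimal illustration, not an official client: the base URL, header, and response fields come from this page, while the helper names (`build_request`, `run_and_wait`), the injected `send` callable, and the polling cadence are assumptions for the example.

```python
import json
import time
import urllib.request

BASE_URL = "https://scrapespace.com/api"  # base URL from this page


def build_request(method, path, api_key, body=None):
    """Construct an authenticated urllib Request without sending it."""
    data = json.dumps(body).encode() if body is not None else None
    req = urllib.request.Request(BASE_URL + path, data=data, method=method)
    req.add_header("x-api-key", api_key)  # required on every request
    if data is not None:
        req.add_header("Content-Type", "application/json")
    return req


def run_and_wait(api_key, prompt, send, poll_seconds=5.0):
    """Start an agent task, then poll GET /jobs/{id} until it settles.

    `send` is any callable taking a Request and returning parsed JSON
    (e.g. a thin urlopen wrapper); it is injected here so the flow can
    be exercised offline. Hypothetical helper, not part of the API.
    """
    started = send(build_request("POST", "/agent/run", api_key, {"prompt": prompt}))
    job_id = started["jobId"]
    while True:
        job = send(build_request("GET", f"/jobs/{job_id}", api_key))
        # Any status other than pending/running is terminal (see Job statuses).
        if job["status"] not in ("pending", "running"):
            return job
        time.sleep(poll_seconds)
```

In real use, `send` would wrap `urllib.request.urlopen` (or your HTTP client of choice) and parse the JSON body.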
# Plans & Pricing

ScrapeSpace plan tiers, features, and limits.

## Plans

| Plan | Price | AI runs / mo | Replays / mo | Stored automations | Schedule | Concurrent | Max duration | Anti-blocking | API |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Free | $0/mo | 5 | 200 | 2 | Manual only | 1 | 60 sec | — | — |
| Starter | $79/mo | 20 | 1,000 | Unlimited | Daily | 1 | 15 min | ✓ | — |
| Pro | $199/mo | 60 | 15,000 | Unlimited | Every 6 hours | 3 | 30 min | ✓ | Basic |
| Business | $499/mo | 150 | 150,000 | Unlimited | Hourly | 10 | 60 min | ✓ | Full |

## Founding Member

The Founding Member plan provides Pro-level limits at $79/month (billed annually at $948/year) for existing members. Founding seats are no longer publicly available. Founding Members get the same monthly AI run and replay budgets as Pro, every-6-hour scheduling, 3 concurrent runs, and API access.
Canceling forfeits the seat permanently: the founding price cannot be reclaimed.

## What counts as an AI run?

An AI run is when our agent builds a new automation from your prompt. Replays of saved automations don't count against your AI run quota; they have their own monthly budget.

## Scheduling

Your plan determines the maximum schedule frequency:

- Free: No scheduling (manual runs only)
- Starter: Once per day
- Pro / Founding: Every 6 hours
- Business: Hourly

## Concurrent runs

This is how many automations can execute simultaneously. If you hit the limit, new runs queue until a slot opens.

# Usage & Limits

How AI runs, replays, and storage limits work, and what happens when you hit them.

## Plan limits

Each plan includes three monthly allotments:

- **AI runs**: how many times our agent can build a new automation from your prompt. The most expensive operation; budgeted carefully.
- **Replays**: how many times a saved automation can re-run without using AI. Replays are fast and cheap, so the limits are much higher.
- **Saved automations** (Free only): how many automations you can keep stored. Paid plans are unlimited.
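The per-plan schedule tiers listed under Scheduling can be encoded as a lookup, for example to clamp a requested interval before calling the API. A minimal sketch: the plan names come from this page, but the minute values, table name, and `clamp_interval` helper are illustrative, not part of the product.

```python
# Minimum minutes between scheduled runs per plan (None = no scheduling).
# Values derived from the Scheduling tiers on this page; names are mine.
MIN_INTERVAL_MINUTES = {
    "free": None,       # manual runs only
    "starter": 1440,    # once per day
    "pro": 360,         # every 6 hours
    "founding": 360,    # same as Pro
    "business": 60,     # hourly
}


def clamp_interval(plan, requested_minutes):
    """Clamp a requested schedule interval to the plan's maximum frequency."""
    floor = MIN_INTERVAL_MINUTES[plan]
    if floor is None:
        raise ValueError("This plan does not support scheduling")
    return max(requested_minutes, floor)
```

This mirrors the downgrade behavior described below: an interval tighter than the plan allows is widened to the plan's floor.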
You can see your current usage on the billing page or via the API: `GET /api/billing/usage`.

## What happens at the limit?

- **AI runs exhausted**: new agent prompts are blocked with an upgrade prompt. Replays of existing automations continue to work, so your scheduled extractions keep running.
- **Replays exhausted**: all replay attempts are blocked, including scheduled ones. Affected schedules pause automatically and resume at the start of the next month.
- **Saved automations cap reached (Free)**: AI runs still complete and deliver data, but successful runs aren't saved as reusable automations. Delete one of your stored automations to make room, or upgrade for unlimited storage.

Limits reset at the start of each calendar month. There are no overage charges and no per-run pricing; hitting the wall just means waiting for the reset, or upgrading.

## Scheduling limits

Your plan determines the maximum schedule frequency. If you downgrade, existing schedules are automatically adjusted to the new plan's maximum frequency.

## Paused automations

If a scheduled automation would exceed your monthly replay quota, ScrapeSpace pauses its schedule until the start of the next billing cycle. You'll receive a notification listing affected automations and the reset date. When your quota resets, the schedule resumes automatically; no manual action needed.
Manual replays of a paused automation are blocked while the pause is in effect.

## Monitoring

Check your usage through:

- **Billing page**: current AI runs, replays, and storage counts with reset date
- **In-app banner**: appears when you cross 70% or 90% of any monthly limit
- **API**: `GET /api/billing/usage` returns current counts and limits
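The 70%/90% banner tiers above can also be checked client-side against the usage endpoint's counts. A minimal sketch under stated assumptions: the `{"current": ..., "limit": ...}` shape per metric and the `usage_warnings` helper are illustrative, since the exact response schema of `GET /api/billing/usage` isn't documented here.

```python
def usage_warnings(usage, thresholds=(0.7, 0.9)):
    """Return {metric: highest threshold crossed}, mirroring the banner tiers.

    `usage` maps metric name -> {"current": int, "limit": int}; this shape
    is an assumption for illustration, not the documented response schema.
    """
    flagged = {}
    for name, entry in usage.items():
        limit = entry["limit"]
        if limit <= 0:
            continue  # treat non-positive limits as unlimited: never warn
        ratio = entry["current"] / limit
        crossed = [t for t in sorted(thresholds) if ratio >= t]
        if crossed:
            flagged[name] = crossed[-1]
    return flagged
```

A periodic check like this can page you before a scheduled automation is paused for exhausting its replay quota.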