Why use it as a Scraping API
- One endpoint for any target URL
- API-key auth, no browser orchestration required
- HTML-first response that plugs into existing parsers
- Easy fit for ETL, monitoring, AI ingestion, and data pipelines
Endpoint
GET https://parsing.webunlocker.gologin.com/v1/scrape?url={encoded_url}
Authentication
Send your API key in the request headers.

Request model
| Name | In | Type | Required | Description |
|---|---|---|---|---|
| url | query | string (absolute URL) | yes | Target URL to scrape |
Quickstart
Call the endpoint from any HTTP client (cURL, Python, JavaScript fetch, and so on): pass your API key header and a percent-encoded url query parameter.
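A minimal quickstart sketch in JavaScript (fetch). The API key header name is an assumption here (shown as `X-API-Key`); use the header documented for your account.

```javascript
const BASE = "https://parsing.webunlocker.gologin.com/v1/scrape";

function buildScrapeUrl(targetUrl) {
  // The target URL must be percent-encoded before it goes into the query string.
  return `${BASE}?url=${encodeURIComponent(targetUrl)}`;
}

async function scrape(targetUrl, apiKey) {
  const res = await fetch(buildScrapeUrl(targetUrl), {
    headers: { "X-API-Key": apiKey }, // assumed header name
  });
  if (!res.ok) throw new Error(`Scrape failed: HTTP ${res.status}`);
  return res.text(); // raw rendered HTML/text
}
```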
Response behavior
The current response body is raw rendered HTML/text.
- 200 -> success, body contains page HTML/text
- 401/403 -> invalid or missing API key
- 422 -> invalid or missing url
- 429 -> rate limit
- 500+ -> temporary server-side error
Retry guidance
Retry with exponential backoff for:
- network errors
- 408, 429, 500, 502, 503, 504
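The retry policy above can be sketched as follows; the base delay and retry count are illustrative defaults, not values prescribed by the API.

```javascript
// Status codes worth retrying, per the retry guidance.
const RETRYABLE = new Set([408, 429, 500, 502, 503, 504]);

function backoffDelayMs(attempt, baseMs = 500) {
  // Exponential backoff: attempt 0 -> 500ms, 1 -> 1000ms, 2 -> 2000ms, ...
  return baseMs * 2 ** attempt;
}

async function fetchWithRetry(url, options = {}, maxRetries = 3) {
  for (let attempt = 0; ; attempt++) {
    try {
      const res = await fetch(url, options);
      if (!RETRYABLE.has(res.status) || attempt === maxRetries) return res;
    } catch (err) {
      // Network errors are retryable too, until attempts run out.
      if (attempt === maxRetries) throw err;
    }
    await new Promise((r) => setTimeout(r, backoffDelayMs(attempt)));
  }
}
```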
Scope note
The current public endpoint is HTML-first. Do not assume extra backend endpoints (for example screenshot/crawl/search/usage) unless explicitly documented and enabled for your account.

Official Node.js SDK
Install:

SDK quick start
Constructor options
Core SDK methods
- scrape(url) -> normalized response object
- scrapeRaw(url) -> native Response
- buildScrapeUrl(url) -> full request URL
SDK-side derived methods (from returned HTML)
These are not separate backend endpoints; they are derived from the API's HTML in the SDK:
- scrapeText(url) -> plain text
- scrapeMarkdown(url) -> markdown
- scrapeJSON(url) -> structured metadata (title, meta, links, headings)
- batchScrape(urls, { concurrency }) -> helper for multiple scrape calls
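A sketch of the idea behind these derived methods: everything is computed locally from the HTML the scrape endpoint returns, with no extra backend calls. The helper names and regex-based parsing below are illustrative, not the SDK's actual internals.

```javascript
function extractTitle(html) {
  // Pull the <title> text, if present.
  const m = html.match(/<title[^>]*>([^<]*)<\/title>/i);
  return m ? m[1].trim() : null;
}

function extractLinks(html) {
  // Collect href values from anchor tags.
  const links = [];
  const re = /<a\s[^>]*href="([^"]+)"/gi;
  let m;
  while ((m = re.exec(html)) !== null) links.push(m[1]);
  return links;
}

function deriveMetadata(html) {
  // Analogue of scrapeJSON(url): structured metadata built from raw HTML.
  return { title: extractTitle(html), links: extractLinks(html) };
}
```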
Typed errors
- WebUnlockerError
- AuthenticationError
- RateLimitError
- APIError
- TimeoutError
- NetworkError

Mapping:
- 401/403 -> AuthenticationError
- 429 -> RateLimitError
- 500+ -> APIError
- timeout/abort -> TimeoutError
- fetch/network failure -> NetworkError
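The status-to-error mapping can be sketched like this; the class hierarchy mirrors the typed errors listed above, while the constructor signatures and the `errorForStatus` helper are assumptions for illustration.

```javascript
class WebUnlockerError extends Error {}
class AuthenticationError extends WebUnlockerError {}
class RateLimitError extends WebUnlockerError {}
class APIError extends WebUnlockerError {}

function errorForStatus(status) {
  // Map HTTP status codes to the typed errors; null means no error.
  if (status === 401 || status === 403) return new AuthenticationError(`HTTP ${status}`);
  if (status === 429) return new RateLimitError(`HTTP ${status}`);
  if (status >= 500) return new APIError(`HTTP ${status}`);
  return null;
}
```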
CLI
- scrape (raw HTML/text)
- text (derived text)
- markdown (derived markdown)
- json (derived metadata)
Typical scraping use cases
- price and catalog monitoring
- lead and directory enrichment
- market intelligence pipelines
- content aggregation and indexing
- LLM ingestion and extraction workflows