This guide shows how to scrape your first URL using Web Unlocker in synchronous mode — the simplest path to getting results.

Prerequisites

You'll need a Web Unlocker API key (used in the Authentication step below).
Authentication

All requests require your API key in the X-API-Key header:
X-API-Key: YOUR_API_KEY
The base URL for all API endpoints is:
https://api.webunlocker.gologin.com/api/parsing/v1
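For example, you can attach the header once to a requests.Session so every call through it is authenticated — a minimal sketch using the same requests library as the full example below:

```python
import requests

API_URL = "https://api.webunlocker.gologin.com/api/parsing/v1"
API_KEY = "YOUR_API_KEY"

# Attach the key once; every request made through this session
# carries the X-API-Key header automatically.
session = requests.Session()
session.headers["X-API-Key"] = API_KEY

# e.g. session.post(f"{API_URL}/tasks", json={"url": "https://example.com"})
```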

Make your first request

In synchronous mode, you send a URL and receive the full result in a single response — no polling required.

Python

import requests
import base64

API_URL = "https://api.webunlocker.gologin.com/api/parsing/v1"
API_KEY = "YOUR_API_KEY"

response = requests.post(
    f"{API_URL}/tasks",
    headers={"X-API-Key": API_KEY},
    json={"url": "https://example.com"}
)

data = response.json()

if data["status"] == "completed":
    # Save HTML
    with open("page.html", "w", encoding="utf-8") as f:
        f.write(data["result"]["html"])

    # Save screenshot (base64-encoded PNG)
    screenshot = data["result"]["screenshot"]
    screenshot = screenshot.replace("data:image/png;base64,", "")
    with open("screenshot.png", "wb") as f:
        f.write(base64.b64decode(screenshot))

    print(f"Done. Task ID: {data['task_id']}")
else:
    print(f"Failed: {data.get('error')}")

JavaScript (Node.js 18+)

import fs from "fs/promises";

const API_URL = "https://api.webunlocker.gologin.com/api/parsing/v1";
const API_KEY = "YOUR_API_KEY";

const response = await fetch(`${API_URL}/tasks`, {
  method: "POST",
  headers: {
    "X-API-Key": API_KEY,
    "Content-Type": "application/json",
  },
  body: JSON.stringify({ url: "https://example.com" }),
});

const data = await response.json();

if (data.status === "completed") {
  // Save HTML
  await fs.writeFile("page.html", data.result.html, "utf-8");

  // Save screenshot (base64-encoded PNG)
  const screenshotData = data.result.screenshot
    .replace("data:image/png;base64,", "");
  await fs.writeFile("screenshot.png", Buffer.from(screenshotData, "base64"));

  console.log(`Done. Task ID: ${data.task_id}`);
} else {
  console.error("Failed:", data.error);
}

What the response looks like

A successful sync response looks like this:
{
  "task_id": "123e4567-e89b-12d3-a456-426614174000",
  "status": "completed",
  "result": {
    "html": "<html>...</html>",
    "screenshot": "data:image/png;base64,iVBORw0KGgo...",
    "title": "Example Domain"
  },
  "execution_time": 5.81,
  "created_at": "2025-01-01T12:00:00Z",
  "completed_at": "2025-01-01T12:00:05Z"
}
The result.html field contains the full rendered HTML as a string. The result.screenshot is a base64-encoded PNG wrapped in a data URI. The result.title is the page title extracted from the HTML.
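To illustrate, here is one way to pull those fields out of the response — a sketch against a shortened stand-in payload (the screenshot value here is just the truncated PNG magic bytes, not a real capture):

```python
import base64
from datetime import datetime

# Stand-in for a real sync response; the screenshot payload is truncated.
data = {
    "task_id": "123e4567-e89b-12d3-a456-426614174000",
    "status": "completed",
    "result": {
        "html": "<html>...</html>",
        "screenshot": "data:image/png;base64,iVBORw0KGgo=",
        "title": "Example Domain",
    },
    "created_at": "2025-01-01T12:00:00Z",
    "completed_at": "2025-01-01T12:00:05Z",
}

title = data["result"]["title"]

# Strip the data-URI prefix before base64-decoding the PNG bytes.
b64 = data["result"]["screenshot"].split(",", 1)[1]
png = base64.b64decode(b64)

# The timestamps are ISO 8601; replace the trailing "Z" so
# datetime.fromisoformat accepts them on Python < 3.11.
started = datetime.fromisoformat(data["created_at"].replace("Z", "+00:00"))
finished = datetime.fromisoformat(data["completed_at"].replace("Z", "+00:00"))
elapsed = (finished - started).total_seconds()
```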

Handling failures

Check the status field before reading result. Tasks can finish with three statuses:
Status      Meaning
completed   Success — result contains your data
failed      The page couldn’t be scraped — check error for details
timeout     Sync mode timed out (>295s) — use the returned task_id to poll for the result
if data["status"] == "completed":
    html = data["result"]["html"]
elif data["status"] == "failed":
    print("Error:", data["error"])
elif data["status"] == "timeout":
    # Task is still running — switch to polling
    task_id = data["task_id"]
    # See Async Mode guide
For complex or slow sites, use async mode instead.
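If you do hit a timeout, the follow-up is to poll with the returned task_id. The status endpoint itself is covered in the Async Mode guide; the polling loop is sketched below, with fetch_status standing in for whatever call retrieves the task's current JSON (a hypothetical callable, not part of this API's surface):

```python
import time

def poll_task(fetch_status, task_id, interval=2.0, max_wait=300.0):
    """Poll until the task reaches a terminal status or max_wait elapses.

    fetch_status: callable taking a task_id and returning the task's
    current JSON as a dict (e.g. a GET against the async status
    endpoint described in the Async Mode guide).
    """
    deadline = time.monotonic() + max_wait
    while time.monotonic() < deadline:
        data = fetch_status(task_id)
        # "completed" and "failed" are the terminal statuses documented above.
        if data["status"] in ("completed", "failed"):
            return data
        time.sleep(interval)
    raise TimeoutError(f"task {task_id} did not finish within {max_wait}s")
```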

Dependencies

The Python example uses the third-party requests package; the Node.js example relies on the built-in fetch (Node.js 18+), so it needs no extra packages. If you prefer to avoid third-party dependencies in Python, the standard library’s urllib works fine too — see the examples on GitHub.
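For reference, a stdlib-only version might look like the sketch below — the request is fully built here, and the final urlopen call is left commented so you can drop in your key before sending:

```python
import json
import urllib.request

API_URL = "https://api.webunlocker.gologin.com/api/parsing/v1"
API_KEY = "YOUR_API_KEY"

# Build the same POST /tasks request as the requests-based example,
# using only the standard library.
req = urllib.request.Request(
    f"{API_URL}/tasks",
    data=json.dumps({"url": "https://example.com"}).encode("utf-8"),
    headers={"X-API-Key": API_KEY, "Content-Type": "application/json"},
    method="POST",
)

# Send it (uncomment once API_KEY is set):
# with urllib.request.urlopen(req) as resp:
#     data = json.loads(resp.read())
```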