Frequently Asked Questions

Get the most out of the Screenshot This! service

Get Answers to Common Questions

How do I capture a screenshot or PDF?

Using our service is simple: just fill out the Capture Screenshot form on the homepage, providing:

  • URL to Capture: The webpage you want to screenshot or convert into a PDF.
  • Response Type: Choose Image, PDF, or JSON (if you want raw JSON data).
  • Emulation/Timing Settings: Adjust device presets, viewport sizes, JavaScript or ad-block toggles, etc.

Once you submit the form, the service sends an API request to a headless browser. When the capture finishes, you’ll see your screenshot or PDF preview and can download the file immediately.
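
If you prefer to script the capture instead of using the form, the request might look like the TypeScript sketch below. The endpoint and field names here are hypothetical placeholders for illustration, not our documented API:

    import { writeFileSync } from "node:fs";

    // Hypothetical endpoint and request fields, shown only to illustrate the flow.
    const res = await fetch("https://screenshotthis.example/api/capture", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        url: "https://example.com", // URL to Capture
        response_type: "image",     // Response Type: image | pdf | json
        format: "png",              // image format
        full_page: true,            // one of the emulation/timing settings
      }),
    });

    // For an image response, the body is the binary image itself.
    writeFileSync("capture.png", Buffer.from(await res.arrayBuffer()));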

What output formats are supported?

Currently, our service can return:

  • Images: PNG, JPEG, or WEBP.
  • PDF: Great for archiving or printing webpage content.
  • JSON: A raw JSON response that can include metadata (like status code, final resolved URL) or even base64-encoded images if you want to process them yourself.

For an image response, you can also set a quality level for JPEG/WEBP to reduce file size.

For a PDF response, the service captures a PDF snapshot, which can span multiple pages if the content is long (like a tall, scrollable page).
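
To illustrate the JSON option, here is a minimal TypeScript sketch that saves a base64-encoded image from the response. The field names are assumptions for illustration, not a documented schema:

    import { writeFileSync } from "node:fs";

    // Hypothetical response shape; the real schema may differ.
    interface CaptureResult {
      status_code: number;    // HTTP status returned by the target page
      final_url: string;      // final resolved URL after any redirects
      image_base64?: string;  // present if you requested a base64-encoded image
    }

    const res = await fetch("https://screenshotthis.example/api/capture?format=json");
    const result: CaptureResult = await res.json();

    if (result.image_base64) {
      // Decode the base64 payload back into binary image data and save it.
      writeFileSync("capture.png", Buffer.from(result.image_base64, "base64"));
    }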

Can I capture a full-page screenshot?

Absolutely! If you check Full Page Screenshot, the headless browser scrolls through the entire page, stitching everything into one tall image (or multi-page PDF).

Alternatively, you can disable "Full Page Screenshot" and specify a custom viewport width and height. This mode captures exactly what would be seen in a real browser window of that size.
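
Both modes correspond to standard headless-browser screenshot options. As a rough Playwright-style sketch (illustrative, not necessarily our exact implementation):

    import { chromium } from "playwright";

    const browser = await chromium.launch();
    const page = await browser.newPage();
    await page.goto("https://example.com");

    // Full Page Screenshot: scroll through and stitch the entire page.
    await page.screenshot({ path: "full.png", fullPage: true });

    // Custom viewport: capture exactly what a 1280x800 window would show.
    await page.setViewportSize({ width: 1280, height: 800 });
    await page.screenshot({ path: "viewport.png" });

    await browser.close();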

What do the Ad Block, Hide Ads, and Disable JS options do?

  • Ad Block: Instructs the headless browser to block requests to known advertising and tracking domains. This can reduce clutter and bandwidth usage, but some sites may detect ad blockers.
  • Hide Ads: Injects CSS that visually hides typical ad elements (see the sketch after this list). Unlike Ad Block, this does not stop ads from loading; it only hides them in the screenshot.
  • Disable JS: Completely disables JavaScript from running. This can be useful if you want to see how the site looks in a no-script environment, or if JavaScript is causing a blocking issue.
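
For illustration, the Hide Ads technique boils down to injecting a stylesheet before the capture. This Playwright-style sketch uses a purely illustrative selector list:

    import { chromium } from "playwright";

    const browser = await chromium.launch();
    const page = await browser.newPage();
    await page.goto("https://example.com");

    // Inject CSS that visually hides common ad containers. The selectors
    // below are illustrative, not our production list.
    await page.addStyleTag({
      content: `
        iframe[src*="doubleclick"],
        [id^="ad-"],
        [class*="advert"] { display: none !important; }
      `,
    });

    await page.screenshot({ path: "no-ads.png" });
    await browser.close();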

How do the wait and timing options work?

The wait_for_selector field instructs the browser to wait until a certain DOM element is present (like .container-loaded or #appReady) before taking the screenshot. This ensures the page is in the expected state before capture.

The wait_js_condition lets you provide a small JavaScript expression (e.g., window.isContentLoaded === true) that must be true before capturing.

Delay (ms) adds an extra pause after the page is considered ready, which can be helpful if you suspect further asynchronous content is still loading.

Wait Timeout (ms) is how long the browser will keep trying to meet these conditions (selector or JS) before giving up.
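
Conceptually, these fields map onto standard headless-browser wait primitives. A Playwright-style sketch of the sequence (assumed, not our exact implementation):

    import { chromium } from "playwright";

    const browser = await chromium.launch();
    const page = await browser.newPage();

    // Wait Timeout (ms): the ceiling for every wait call below.
    page.setDefaultTimeout(30_000);

    await page.goto("https://example.com");

    // wait_for_selector: wait until the DOM element exists.
    await page.waitForSelector("#appReady");

    // wait_js_condition: wait until the expression evaluates to true.
    await page.waitForFunction("window.isContentLoaded === true");

    // Delay (ms): a fixed extra pause after the page is considered ready.
    await page.waitForTimeout(500);

    await page.screenshot({ path: "ready.png" });
    await browser.close();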

What do the Device Preset and Network Profile settings do?

The Device Preset dropdown quickly simulates popular devices (e.g., iPhone, Galaxy). This sets viewport sizes, deviceScaleFactor, and sometimes user-agent strings. If you choose “None (custom),” you can manually specify your viewport width/height and user-agent.

Network Profiles (like 3G, 4G, or a deliberately slow connection) artificially throttle the connection to mimic real mobile or low-bandwidth conditions. This is useful for testing how your page behaves on poor networks.
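
For reference, here is how device emulation and network throttling are typically done in a Chromium headless browser. The throttling numbers are an illustrative slow-3G-style profile, not one of our named presets:

    import { chromium, devices } from "playwright";

    const browser = await chromium.launch();

    // Device Preset: viewport, deviceScaleFactor, and user agent in one bundle.
    const context = await browser.newContext({ ...devices["iPhone 12"] });
    const page = await context.newPage();

    // Network Profile: throttle via the Chrome DevTools Protocol.
    const cdp = await context.newCDPSession(page);
    await cdp.send("Network.emulateNetworkConditions", {
      offline: false,
      latency: 400,               // added round-trip latency in ms
      downloadThroughput: 50_000, // bytes per second
      uploadThroughput: 20_000,   // bytes per second
    });

    await page.goto("https://example.com");
    await page.screenshot({ path: "mobile-slow.png" });
    await browser.close();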

What are Record Video and Capture Console Logs?

Record Video captures a short screencast of the page load and any scripted user interactions, letting you see the step-by-step rendering. Note that recordings are typically limited to short durations.

Capture Console Logs saves any messages the page logs to the browser console (console.log, console.error, etc.). This is extremely helpful for debugging JavaScript issues or warnings.
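
Both features correspond to standard headless-browser capabilities. A Playwright-style sketch (illustrative only):

    import { chromium } from "playwright";

    const browser = await chromium.launch();

    // Record Video: a screencast is saved per page into this directory.
    const context = await browser.newContext({ recordVideo: { dir: "videos/" } });
    const page = await context.newPage();

    // Capture Console Logs: subscribe before navigating so nothing is missed.
    page.on("console", (msg) => console.log(`[${msg.type()}] ${msg.text()}`));

    await page.goto("https://example.com");
    await context.close(); // closing the context finalizes the video file
    await browser.close();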

What does Capture Trace do?

When you enable Capture Trace, the headless browser records a performance trace during the page load. This is a detailed timeline of:

  • Network events
  • JavaScript execution steps
  • Layout and rendering stages
  • CPU usage, paint, and composite operations

Once the trace is completed, it’s provided as a file (often .json or .zip) that you can open in Chrome DevTools:

  1. Open Chrome and press F12 to open DevTools.
  2. Go to the “Performance” (or “Timeline”) panel.
  3. Import the trace file or drag-and-drop it in.
  4. Analyze exactly how the page loaded, which scripts blocked the main thread, and potential performance bottlenecks.

This is particularly powerful for diagnosing slow or janky pages, or for checking how your page uses CPU over time.
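
If you’re curious, recording a DevTools-compatible trace typically looks like this in a Chromium headless browser (a sketch of the general technique, not necessarily our pipeline):

    import { chromium } from "playwright";

    const browser = await chromium.launch();
    const page = await browser.newPage();

    // Chromium-only: record a Chrome-DevTools-compatible performance trace.
    await browser.startTracing(page, { path: "trace.json", screenshots: true });
    await page.goto("https://example.com");
    await browser.stopTracing(); // trace.json is now ready to import into DevTools

    await browser.close();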

What is a HAR file, and what does Capture HAR do?

A HAR file (HTTP Archive) logs every network request and response during your page load:

  • URLs, headers, HTTP status codes
  • Timing info (DNS, TCP connect, TTFB, content download time)
  • Cookies, request/response sizes, and more

When you enable “Capture HAR,” the browser automatically records these details into a single `.har` file. After capturing:

  1. Download the HAR file from the result panel.
  2. Open Chrome DevTools (Network tab) and drag the `.har` file in, or use an online HAR viewer.
  3. Review every request to see if anything failed, took too long, or is unexpectedly large.

It’s invaluable for performance auditing, diagnosing slow resources, or verifying if specific endpoints are being requested.
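
Because a HAR file is plain JSON, you can also script your own triage. A small TypeScript sketch that lists the five slowest requests (the log.entries layout is defined by the HAR 1.2 spec):

    import { readFileSync } from "node:fs";

    interface HarEntry {
      request: { url: string };
      response: { status: number };
      time: number; // total elapsed time for the request, in milliseconds
    }

    const har = JSON.parse(readFileSync("capture.har", "utf8")) as {
      log: { entries: HarEntry[] };
    };

    const slowest = [...har.log.entries].sort((a, b) => b.time - a.time).slice(0, 5);

    for (const e of slowest) {
      console.log(`${Math.round(e.time)} ms  [${e.response.status}]  ${e.request.url}`);
    }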

Any tips for getting the best captures?

  • Set a “Wait For” condition if your page is dynamic. This ensures you capture it after key elements have loaded.
  • Use a custom user agent to see how the page behaves for specific browsers.
  • If capturing a mobile view, pick a relevant device preset (e.g., iPhone 12, Galaxy S21). This helps replicate real user experiences.
  • Shorten the capture timeouts if your site loads quickly, so captures complete faster.
  • Enable debug logs if you’re encountering issues (assuming your build includes a DEBUG_MODE or logging feature). This can help reveal unexpected errors.

What does “Ignore HTTPS Errors” do?

By default, the browser blocks navigation to pages with invalid SSL certificates. If you enable “Ignore HTTPS Errors”, the service will proceed even with self-signed or otherwise invalid certificates.

This is particularly useful for testing internal or staging environments that use self-signed certs. Just be aware that you’re bypassing normal TLS/SSL trust checks, so only enable this if you trust the target site.
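
In headless-browser terms, this maps onto a standard context option. A Playwright-style sketch (illustrative):

    import { chromium } from "playwright";

    const browser = await chromium.launch();
    // Accept self-signed or otherwise invalid certificates for this context.
    const context = await browser.newContext({ ignoreHTTPSErrors: true });
    const page = await context.newPage();

    await page.goto("https://staging.internal.example"); // illustrative staging URL
    await page.screenshot({ path: "staging.png" });
    await browser.close();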

Can I capture pages behind HTTP Basic Authentication?

Yes! If a page prompts for a username and password via HTTP Basic Authentication, simply fill in your Basic Auth User and Basic Auth Pass before requesting the screenshot.

The browser will automatically supply these credentials in the “Authorization” header, letting you capture images or PDFs from sites behind this classic form of authentication. Make sure to keep these credentials secure!
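
There is no magic here: Basic Auth is simply a base64-encoded user:pass pair. This TypeScript snippet (with illustrative credentials) shows the header the browser ends up sending:

    // Basic Auth encodes "user:pass" as base64 in the Authorization header.
    const user = "alice";   // illustrative credentials only
    const pass = "s3cret";
    const authorization = `Basic ${Buffer.from(`${user}:${pass}`).toString("base64")}`;

    console.log(authorization); // Basic YWxpY2U6czNjcmV0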

Can I simulate a different location or language?

You can simulate any latitude/longitude by entering values in the “Geolocation” fields (for example, {"latitude": 37.7749, "longitude": -122.4194} for San Francisco). Additionally, you can specify a Locale (like en-US or fr-FR) to mimic how websites appear in different languages/regions.

This is particularly valuable when testing location-based services, map applications, or ensuring language-locale experiences are correct for different regions.
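
These settings correspond to standard headless-browser emulation options. A Playwright-style sketch (illustrative, not necessarily our exact implementation):

    import { chromium } from "playwright";

    const browser = await chromium.launch();
    const context = await browser.newContext({
      geolocation: { latitude: 37.7749, longitude: -122.4194 }, // San Francisco
      permissions: ["geolocation"], // let pages read the simulated position
      locale: "fr-FR",              // affects navigator.language and Accept-Language
    });
    const page = await context.newPage();

    await page.goto("https://example.com");
    await page.screenshot({ path: "sf-french.png" });
    await browser.close();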

Can I send custom headers or cookies with my request?

Yes. Under “Advanced Options,” you can add a headers object and a cookies array:

  • Headers: Add any HTTP headers you want (e.g., X-My-Header, Referer, Authorization). This is handy for testing special conditions or custom tokens.
  • Cookies: Provide name-value pairs (and optional domain/path/expiry) to pre-load in the browser. Great for testing authenticated pages, user sessions, or region-based variants that depend on cookie data.

These headers and cookies will be applied before the initial request, letting you shape the page environment exactly how you need.
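
Under the hood, this maps onto standard headless-browser calls. A Playwright-style sketch with made-up header and cookie values:

    import { chromium } from "playwright";

    const browser = await chromium.launch();
    const context = await browser.newContext();

    // Cookies are registered on the context before any navigation happens.
    await context.addCookies([
      { name: "session_id", value: "abc123", domain: "example.com", path: "/" },
    ]);

    const page = await context.newPage();

    // Extra headers are sent with every request the page makes.
    await page.setExtraHTTPHeaders({
      "X-My-Header": "hello",
      "Referer": "https://example.org/",
    });

    await page.goto("https://example.com");
    await page.screenshot({ path: "with-session.png" });
    await browser.close();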

Can I limit the size of the captured image?

Yes. The Max Image Width and Max Image Height fields let you set an upper bound on the captured screenshot’s dimensions. If the raw screenshot is larger than those dimensions, the service downscales it while preserving the aspect ratio.

This is helpful when you need a screenshot that fits a specific size or want to keep file sizes in check. If the screenshot is smaller than your specified maximums, it won’t be enlarged.
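
The downscaling rule is easy to express directly. A TypeScript sketch of the math (our implementation may differ in rounding details):

    // Shrink to fit within the maximums, preserve aspect ratio, and never
    // enlarge (the scale factor is capped at 1).
    function fitWithin(w: number, h: number, maxW: number, maxH: number) {
      const scale = Math.min(maxW / w, maxH / h, 1);
      return { width: Math.round(w * scale), height: Math.round(h * scale) };
    }

    console.log(fitWithin(2560, 4000, 1280, 10_000)); // { width: 1280, height: 2000 }
    console.log(fitWithin(800, 600, 1280, 1024));     // unchanged: { width: 800, height: 600 }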