Browsertrix

High Fidelity Browser-Based Crawling Automation

Browsertrix is a brand new toolset from the Webrecorder project for automating browsers to perform complex scripted behaviors as well as crawl multiple pages. (The name was originally used for an older project with similar goals).

Browsertrix is a system for orchestrating Docker-based Chrome browsers, crawlers, behavior systems and web archiving capture and replay.

It includes the following features:

  • Crawling via a customizable YAML-based crawl spec
  • High-fidelity browser-based crawlers (controlled via webrecorder/autobrowser)
  • Execution of complex, domain-specific in-page behaviors (provided by webrecorder/behaviors)
  • Capture or replay into designated pywb (https://github.com/webrecorder/pywb) collections
  • Optional screenshot creation of each page.
  • Support for customized browser profiles to minimize capture of private information.

Getting Started

Installing Browsertrix

Browsertrix is currently designed to run with Docker and Docker Compose. The Browsertrix CLI requires local Python 3.6+.

To install, run:

git clone https://github.com/webrecorder/browsertrix
cd browsertrix
python setup.py install
./install-browsers.sh
docker-compose build
docker-compose up -d

The install-browsers.sh script installs additional Docker images necessary for dynamic browser creation. The script can be used to update the images as well.

Installing Browsertrix CLI

The Browsertrix CLI is installed by running python setup.py install and includes full functionality for running crawls and creating browser profiles.

Once installed, browsertrix commands are available via the browsertrix command.

Creating a Crawl

To create a crawl, first define a crawl spec in a YAML file. An example spec, sample-crawls/example.yaml, might look as follows:

crawls:
  - name: example
    crawl_type: all-links
    num_browsers: 1

    coll: example
    mode: record

    seed_urls:
      - https://www.iana.org/

Then, simply run browsertrix crawl create sample-crawls/example.yaml --watch

The --watch param also opens the crawling browser in a new browser window via a VNC connection.

If started successfully, the output will be similar to:

Crawl Created and Started: cf30281efc7a
Status: running
Opening Browser 1 of 1 (CKVEMACNI6YBUKLQI6UKKBLB) for crawl cf30281efc7a

To view all running crawls, simply run browsertrix crawl list which should result in output similar to:

CRAWL ID      NAME          STARTED       STATUS   CRAWL TYPE    COLL              MODE      TO CRAWL  PENDING   SEEN      BROWSERS   TABS  
cf30281efc7a  example       0:00:35 ago   running  all-links     example           record    15        1         25        1          1    

To get more detailed info on the crawl, run browsertrix crawl info --urls <crawl_id> (where <crawl_id> = cf30281efc7a in this example)

Crawling Options

Browsertrix supports a number of options, a key one being the crawl_type, which can be:

  • single-page -- crawl only the specified page
  • all-links -- crawl all links on this page
  • same-domain -- crawl all links within this domain (up to a depth of 100)
  • custom -- supports custom depth and scope rules!

The first three options are designed as a simple way to cover common cases, and more may be added later.

When using custom, the crawl_depth param can specify the crawl depth (hops) from each seed url.

The scopes list can contain one or more urlcanon MatchRules specifying urls that are in scope for the crawl.

See custom-scopes.yaml for an example on how to use the custom option.
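
For illustration only, a custom spec might look roughly like the following sketch (the seed URL and the scope-rule key are placeholder assumptions; custom-scopes.yaml remains the authoritative example):

crawls:
  - name: custom-example
    crawl_type: custom
    crawl_depth: 2
    num_browsers: 1

    coll: custom-example
    mode: record

    seed_urls:
      - https://example.com/

    scopes:
      # hypothetical rule; check custom-scopes.yaml for the exact
      # urlcanon MatchRule fields accepted here
      - domain: example.com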

The coll option specifies the pywb collection to use for crawling, and mode specifies record (default), replay, or live (direct live web connection).

The num_browsers and num_tabs options select the total number of browsers and the number of tabs per browser to use for the crawl.

The seed urls for the crawl should be provided in the seed_urls list.

The cache option controls caching for a crawl, with the following values:

  • always -- strict caching via Cache-Control on almost every resource to limit duplicate URLs in a single browser session (default when omitted)
  • default -- keep the default caching for a page
  • never -- disable all caching for all URLs

All example crawl configs demonstrating these options are available in: sample-crawls
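
As a rough sketch combining the options above (names and values are hypothetical):

crawls:
  - name: options-example
    crawl_type: same-domain
    num_browsers: 2
    num_tabs: 2

    coll: options-example
    mode: record
    cache: default

    seed_urls:
      - https://example.com/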

In-Page Behaviors

For every page, Browsertrix runs a designated behavior before collecting outlinks, (optionally) taking screenshots, and moving on to the next page.

The behaviors are served via a separate behavior API server. The current list of available behaviors is available at: https://github.com/webrecorder/behaviors/tree/master/behaviors

The behaviors are built using a special library of behavior functions (preliminary docs available here: http://blog.webrecorder.io/behaviors/)

If no site-specific behavior is found, the default autoscroll.js behavior is used.

The behavior_max_time crawl option specifies the maximum time a behavior can run (current default is 60 seconds). When crawling sites with infinite scroll, it is recommended to set the behavior_max_time to be much higher.
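
For example, a spec for a site with infinite scroll might raise the limit like this (a minimal sketch with a placeholder seed URL; the value is assumed to be in seconds, matching the 60-second default):

crawls:
  - name: scroll-example
    crawl_type: single-page
    num_browsers: 1

    coll: scroll-example
    mode: record

    # allow the in-page behavior up to 5 minutes per page
    behavior_max_time: 300

    seed_urls:
      - https://example.com/feed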

pywb Collections and Access

All crawled data is placed in the ./webarchive/collections/ directory, which follows the standard pywb directory structure conventions, e.g. a collection named test would be found under ./webarchive/collections/test.

Collections are created automatically on first use and can also be managed via wb-manager with webarchive as the working directory.

The running pywb instance can also be accessed via http://localhost:8180/

Replay Crawling and Screenshots

Browsertrix supports crawling in replay mode, over an existing collection, which may be useful for QA processes, especially when combined with screenshot creation.

By adding the screenshot_coll property to each crawl, Browsertrix will also create a screenshot of each page. Additional screenshot options are to be added soon. (Currently, the screenshot is taken after the behavior is run but this will likely change).
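
A minimal sketch of a spec with screenshots enabled (collection names are placeholders):

crawls:
  - name: screenshot-example
    crawl_type: all-links
    num_browsers: 1

    coll: example
    mode: record

    # pywb collection that will receive the urn:screenshot:<url> records
    screenshot_coll: screenshots-capture

    seed_urls:
      - https://example.com/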

Crawl options can also be overridden via the command line.

For example, given a crawl spec ./my_crawl.yaml, one could first capture with:

browsertrix crawl create ./my_crawl.yaml --screenshot_coll screenshots-capture

and then run:

browsertrix crawl create ./my_crawl.yaml --mode replay --screenshot_coll screenshots-qa

By default, screenshots are saved with a urn:screenshot:<url> prefix. Based on the above crawls, one could then query all capture and QA screenshots in pywb via:

http://localhost:8180/screenshots-capture/*/urn:screenshot:*
http://localhost:8180/screenshots-qa/*/urn:screenshot:*

Sample record and replay configs, social-media.yaml and social-media-replay.yaml, are also available.

(Note: The screenshot functionality will likely change and additional options will be added)

Other Crawl Operations

Other crawl operations include:

  • browsertrix crawl stop for stopping a crawl
  • browsertrix crawl watch <crawl_id> for attaching and watching all the browsers in a given crawl.
  • browsertrix crawl remove-all for stopping and removing all crawls.

See browsertrix crawl -h for a complete reference of available commands.

Browser Profiles

It is often useful to prepare a browser, such as by logging in to social media or other password-protected sites, in order to capture content that is not generally accessible. However, doing so during a crawl is tedious, and worse, may result in passwords being recorded to a WARC.

Browsertrix addresses this problem with the support of browser profiles. A profile can be created by running a base Chrome browser, performing custom actions, and then 'saving' the running browser into a new 'profile' image.

To create a profile:

  1. Run: browsertrix profile create

  2. This should start a new remote browser (Chrome 73 by default) and open it in a new window. You can now interact with the browser and log in to any sites as needed.

  3. The command line should show the following message and a prompt to enter the profile name, e.g. logged-in:

A new browser window should have been opened
You can use the browser to log-in to accounts or otherwise prepare the browser profile
(The content will not be recorded to WARC)
When done, please enter a new name to save the browser profile: 
  4. Once the name is entered, the profile is saved, and you can continue browsing to make a new profile, or select 'no' and close the browser.

    If everything worked, running browsertrix profile list should show:

PROFILE           BASE BROWSER
logged-in         chrome:73
  5. To use the profile, set the profile property in the crawl spec YAML, or simply include --profile in the command line:
browsertrix crawl create ./my_crawl.yaml --profile logged-in

The browsers used for the crawl will be a copy of the browser saved during profile creation.
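
Equivalently, the profile can be set in the crawl spec itself, as in this minimal sketch reusing the logged-in profile saved above (other values are placeholders):

crawls:
  - name: logged-in-example
    crawl_type: all-links
    num_browsers: 1

    coll: example
    mode: record

    # use the saved browser profile instead of the base browser
    profile: logged-in

    seed_urls:
      - https://example.com/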

browsertrix profile remove can be used to remove an unneeded profile.

Note: The profile functionality is brand new and subject to change. At present, a profile is tied to the particular browser Docker image used and extends that image. The system may switch to Docker volumes in the future.

Testing

Browsertrix includes several test suites, which are also run automatically via Travis CI.

Docker Integration Tests

Browsertrix includes a Docker-based test suite that runs crawls over content replayed from a WARC (no live web content is accessed). This test suite requires Python 3.6+.

To run this test suite, run:

bash ./tests/start-test-compose.sh
pip install -U -r test-docker-requirements.txt
py.test --headless ./tests/test_live_crawl.py
bash ./tests/stop-test-compose.sh

The test suite does not perform any live crawling, but instead runs all the crawls in tests/crawl_tests.yaml in replay mode using an existing test WARC downloaded from S3.

Local API Tests

To install and run local tests of the API (without Docker), run the following (Python 3.7+ is required):

pip install -U -r requirements.txt -r test-local-requirements.txt
py.test ./tests/test_api.py

Frontend

Browsertrix also includes a frontend (still under development) which will have the same features as the CLI.

To access the browsertrix frontend, load http://localhost:8000/

The frontend React app is found in ./frontend and can be started via:

yarn run develop

(The develop server is started at http://localhost:8001 to avoid conflict with production)

To build the production bundle, run:

yarn run build-prod

This should update the production server running at http://localhost:8000
