Crawlers that issue simple HTTP requests for HTML files are generally fast. However, they sometimes end up capturing empty bodies, especially when the target websites are built with modern frontend frameworks such as AngularJS, React, and Vue.js.

Powered by Headless Chrome, the crawler provides simple APIs to crawl these dynamic websites with the following features (see the usage sketch after the list):

* Distributed crawling
* Configure concurrency, delay and retry
* Breadth-first search (BFS) to automatically follow links
* Pluggable cache storages such as Redis
* Support CSV and JSON Lines for exporting results
* Pause at the max request and resume at any time
* Insert jQuery automatically for scraping
* Save screenshots for the crawling evidence
* Emulate devices and user agents
* Priority queue for crawling efficiency
* Obey robots.txt
* Follow sitemap.xml
* Promise support
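
As a rough illustration, a minimal crawl might look like the sketch below. It assumes the package is required as `headless-chrome-crawler` and exposes a Puppeteer-style `launch` API with `queue`, `onIdle`, and `close` methods; the option names shown (`evaluatePage`, `onSuccess`, `maxConcurrency`, `retryCount`) are illustrative, so check the API reference for the exact signatures.

```js
const HCCrawler = require('headless-chrome-crawler');

(async () => {
  const crawler = await HCCrawler.launch({
    // Evaluated in the page context; jQuery is assumed to be injected automatically.
    evaluatePage: () => ({
      title: $('title').text(),
    }),
    // Called after each page is crawled successfully.
    onSuccess: result => {
      console.log(result);
    },
    maxConcurrency: 5, // illustrative concurrency limit
    retryCount: 3,     // illustrative retry setting
  });

  await crawler.queue('https://example.com/'); // enqueue a start URL
  await crawler.onIdle();                      // wait until the queue is empty
  await crawler.close();
})();
```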


