SAN FRANCISCO -- The Internet is bigger and better than a mere browser allows. "Webbots, Spiders, and Screen Scrapers: A Guide to Developing Internet Agents with PHP/CURL" (No Starch Press, April 2007, http://www.nostarch.com/webbots.htm) is for programmers and businesspeople who want to take full advantage of the vast resources available on the Web. As author Michael Schrenk demonstrates, there's no reason to let browsers limit the online experience, especially when it's so easy to automate online tasks to suit individual needs.
This new book begins by outlining the deficiencies of browsers, then explains how these deficiencies can be exploited in the design and deployment of task-specific webbots--customized programs that aggregate content from multiple sources, filter it for relevant data, and automate online transactions.
Inside "Webbots, Spiders, and Screen Scrapers," readers learn how to write fault-tolerant webbots and spiders that:
- download entire websites and parse data from web pages
- manage cookies and decode encrypted files
- automate form submissions and send and receive email
- send SMS alerts to cell phones
- unlock password-protected websites
- automatically bid in online auctions
- exchange data with FTP and NNTP servers
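The first two items, downloading a page and parsing data out of it, form the core scrape-and-parse loop of any webbot. The book teaches this with PHP/cURL; the sketch below illustrates the same idea using only Python's standard library, and the helper names `fetch_page` and `extract_links` are hypothetical, not taken from the book.

```python
# Minimal scrape-and-parse sketch (illustrative; the book itself uses PHP/cURL).
from html.parser import HTMLParser
from urllib.request import urlopen


class LinkExtractor(HTMLParser):
    """Collects the href attribute of every <a> tag encountered."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)


def extract_links(html: str) -> list:
    """Parse an HTML document and return the URLs it links to."""
    parser = LinkExtractor()
    parser.feed(html)
    return parser.links


def fetch_page(url: str) -> str:
    """Download a page -- the 'webbot' half of the scrape-and-parse loop."""
    with urlopen(url) as response:
        return response.read().decode("utf-8", errors="replace")


if __name__ == "__main__":
    # Parsing works on any HTML string, fetched or local:
    page = '<html><body><a href="http://example.com/a">A</a></body></html>'
    print(extract_links(page))  # prints ['http://example.com/a']
```

A real webbot would add the fault tolerance the book emphasizes (timeouts, retries, and cookie handling) around this loop.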
Sample projects reinforce these new skills so readers can build simple web applications that track online prices, browse anonymously, archive online data, and more. In addition, the author's website (www.schrenk.com) provides readers with sample scripts and code libraries, as well as a place to test their own webbots.
"It can be difficult to learn how to design, develop, and deploy webbots," said No Starch Press founder Bill Pollock. "Mike Schrenk has been living and breathing this stuff for many years and is the perfect teacher to share his accumulated wisdom."