Bot Code, Norms, and Law

There’s a good post on Dark Reading by Ido Safruti about norms and etiquette for bot code.  According to Imperva’s most recent bot traffic report, bots comprise the majority of Internet traffic.  Many bots are intentionally disruptive or misleading, such as bots that create comment link spam on blogs.  Others are useful, such as bots that allow a search engine to index web pages.  Even useful bots can be disruptive, for instance by using up site capacity, and the robots.txt standard was developed so that site owners can limit or exclude bot traffic.
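For readers who haven’t looked at one, robots.txt is just a plain text file served at the root of a site, listing which bots may fetch which paths.  The snippet below is a hypothetical example (the bot name and paths are placeholders, and the Crawl-delay directive is a common extension rather than part of the original standard, so not every crawler honors it):

```
# Hypothetical robots.txt served at https://example.com/robots.txt
User-agent: ExampleBot
Disallow: /private/
Crawl-delay: 10

User-agent: *
Disallow: /search
```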

Safruti provides the following guidelines for ethical bot code (a rough code sketch follows the list):

1. Declare who you are;
2. Provide a method to accurately identify your bot;
3. Follow robots.txt;
4. Don’t be too aggressive.
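What these guidelines ask of a bot author is modest, and most of it fits in a few lines of code.  The Python sketch below is one illustrative way to honor them, not anything Safruti prescribes; the bot name, contact URL, target site, and default delay are hypothetical placeholders, and a real crawler would add error handling and per-host rate limiting.

```python
import time
import urllib.robotparser
import urllib.request

# Guidelines 1 and 2: a User-Agent that names the bot and points to a page
# where the operator can be identified and contacted (hypothetical values).
USER_AGENT = "ExampleBot/1.0 (+https://example.com/bot-info)"
SITE = "https://example.com"   # hypothetical target site
DEFAULT_DELAY = 5              # seconds between requests if the site sets no Crawl-delay

# Guideline 3: fetch and obey robots.txt before requesting anything else.
robots = urllib.robotparser.RobotFileParser()
robots.set_url(SITE + "/robots.txt")
robots.read()

def fetch(path: str):
    """Fetch a path only if robots.txt permits it for this bot."""
    url = SITE + path
    if not robots.can_fetch(USER_AGENT, url):
        return None  # the site owner has excluded this bot from that path
    req = urllib.request.Request(url, headers={"User-Agent": USER_AGENT})
    with urllib.request.urlopen(req) as resp:
        return resp.read()

# Guideline 4: don't be too aggressive; honor Crawl-delay if the site sets
# one, otherwise fall back to a conservative pause between requests.
delay = robots.crawl_delay(USER_AGENT) or DEFAULT_DELAY
for path in ["/", "/about"]:
    page = fetch(path)
    time.sleep(delay)
```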

These are sound guidelines, but my lawyer Spidey sense wonders how they might translate into legal norms, or whether they should become legal norms.  The most immediate way guidelines like these can become part of legal norms is through a website’s contractual terms of use.  I’m not sure a terms of use would be enforceable, either as a legal or a practical matter, against unwanted bots, not least because the measure of contractual damages would be unclear.  There’s an interesting 2001 case in the First Circuit finding a Computer Fraud and Abuse Act violation for bot use, but the facts are quirky and it seems to me perhaps wrongly decided.  Perhaps guidelines like Safruti’s could provide a standard of care for a tort claim if an unwanted bot causes a business interruption, though in states where the economic loss doctrine applies this would raise a difficult question about whether slowing a website is a kind of compensable property damage.  Guidelines like these could also be incorporated into a regulatory regime, which the Internet community as a whole might not find palatable.