12/6/2017 12:01:00 AM – Bob Barr
The “internet poll” has become a familiar device for soliciting reader feedback and driving engagement on topics from sports and entertainment to law and politics. But with obvious methodological flaws (no random sampling, no representative samples), not to mention vulnerability to fraud, the results of such polls carry little if any scientific value. They are a marketing tool only; except, it seems, when it comes to formulating federal regulations.
Providing a public comment period before federal regulations can be finalized is a legal and long-standing component of federal rulemaking. Typically the window for public comments is 30 to 60 days, during which time anyone – from Joe Six-Pack to high-paid industry consultants – can submit commentary to be considered in the adoption of a proposed rule. Federal regulatory agencies increasingly prefer that public comments be submitted digitally, “so that [people’s] input on a proposed rule or other document is more easily available to the public” and easier to organize for agency review. Therein lies the problem.
Electronic commentary makes it extremely easy to “stuff the ballot box”: armies of online activists can submit canned commentary with the click of a mouse, and millions of computer “bots” can forge the identities of real people – dead or alive. In either case, it is becoming difficult (if not impossible) to seriously consider such feedback, particularly as the Federal Communications Commission seeks to repeal Obama-era regulations involving access to the internet.
The underlying problem is that the use of modern technology in this way has reduced public input on regulatory rule-making to little more than “digital shouting matches” between organizations on one side supporting a particular proposed rule change, and those on the other side opposing the change.
According to reports from a third-party company tapped to process and catalog public commentary submitted electronically to the FCC regarding its proposal to reverse the so-called “Open Internet Order” (more commonly known as “net neutrality”), pushed by the Obama administration, a troubling pattern has emerged: massive surges of comments – in the hundreds of thousands, if not millions – arrive at the agency in a matter of days.
In this case, the comments, while often appearing unique and cogent on the surface, were subsequently determined to have been created by artificial bots using a natural language generator. Worse still, it appears some of these bots borrowed the identities of real people in order to submit comments to the Federal Register – a violation of state law that the New York Attorney General is now investigating.
Overall, Wired.com reports that “over a third of the nearly 22 million comments that poured into the [FCC] . . . included one of seven identical messages,” and “more than half were associated with duplicate or temporary emails.” And, while bot activity appears heavily in responses supporting FCC Chairman Ajit Pai, who is leading the drive to repeal net neutrality, both sides appear to be impacted, undermining the credibility of all comments.
There is also the issue of form-letter commentary, popular among activist organizations that send a call to arms to their many members, each of whom can submit commentary for or against an issue within a few seconds. Even though the responses come from real people and reflect real sentiment, such a lazy way of flooding comment requests is hard to accept as genuine in the spirit of public comment, especially when the scheme is quickly followed by gloating fundraising appeals.
To his credit, Chairman Pai has indicated that the volume of responses is less important to the rulemaking process for reversing the net neutrality rule than the quality of the comments, offering at least some relief from the optics-obsessed previous administration, under which this might have been spun a different way to support another government power grab.
However, the ease with which these bots and activist organizations can flood public commentary, and the sophistication with which bots can mimic human communication, raise difficult questions about the effectiveness of digital comment submissions – not to mention the threat they pose to rulemaking by weaker-willed agencies that are more susceptible to perceived public pressure. There is also the practical cost: the additional taxpayer-financed resources needed to identify, sort, and catalog millions of public comments.
Regardless of whether these digital shenanigans result in actual fraud prosecutions by state or federal authorities, the damage they are causing to an important element of participatory democracy is very real and makes it far easier than it should be for regulatory officials to simply ignore the public altogether and do as they please.