
This article seems to make the most sense. As a hypothetical example, Google might have been manually fulfilling thousands of FISA requests. It sucked up time, took engineering resources, was error-prone, and lacked the legal paper trail to track such requests. In addition, the government wanted something faster, that used fewer agents to submit/collect data, and could more easily get updates.

So one day, an NSA agent negotiates with Google to build fisarequest.supersecret.google.com. NSA agents can directly upload FISA request documents, as easily as submitting expense receipts. In turn, Google's legal department can view each request and decide to comply with the click of a mouse, as easily as approving an expense report. If authorization is granted, the NSA can now view the emails of the requested user, and new emails can simply be viewed with a refresh of the browser. As Schmidt and Drummond say, Google can decline a request if it's improper or overly broad, and ask for additional information -- again, as easily as a manager looking at a questionable minibar expense on a trip report.
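The hypothetical workflow above could be sketched as a tiny request/approval system. Everything here is illustrative assumption -- the class names, statuses, and behavior are invented for this comment, not anything Google is known to have built; the point is only that data becomes visible to the requester solely after a legal reviewer approves.

```python
from dataclasses import dataclass, field
from enum import Enum

class Status(Enum):
    PENDING = "pending"
    APPROVED = "approved"
    DECLINED = "declined"
    NEEDS_INFO = "needs_info"   # legal asks for clarification

@dataclass
class FISARequest:
    request_id: int
    target_account: str
    status: Status = Status.PENDING
    notes: list = field(default_factory=list)

class RequestPortal:
    """Hypothetical portal: agents submit requests; legal reviews each one."""

    def __init__(self):
        self._requests = {}
        self._next_id = 1

    def submit(self, target_account: str) -> int:
        """Agent uploads a request document; returns a tracking id."""
        rid = self._next_id
        self._next_id += 1
        self._requests[rid] = FISARequest(rid, target_account)
        return rid

    def review(self, rid: int, decision: Status, note: str = "") -> None:
        """Legal approves, declines, or asks for more information."""
        req = self._requests[rid]
        if note:
            req.notes.append(note)
        req.status = decision

    def fetch_data(self, rid: int) -> str:
        """Requester can only see data once legal has approved."""
        req = self._requests[rid]
        if req.status is not Status.APPROVED:
            raise PermissionError("request not approved")
        return f"mailbox dump for {req.target_account}"
```

Under this model, a "refresh of the browser" is just another `fetch_data` call gated by the same approval check -- fast for the requester, but every access still passes through the reviewed authorization.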

Said NSA agent does this with multiple companies, internally brands this as PRISM, puts together a PowerPoint deck, and declares victory. It's "direct access" since the data is coming straight from Google, on a Google server, and it's "real-time" in that a request can be authorized quickly and there are no more .zip files with data dumps involved.

Google has never actually heard of PRISM, and only knows that they built a tool to make it easier to do what they were already doing with legal FISA requests. To them, the NSA doesn't have "direct access," which is a loaded term for unfettered superuser access.

The entire program costs only $20mm because the government now requires only a few agents to submit requests and collect data. The cost of building the tools is borne by the companies, who see it as a cheaper way to comply with an existing legal obligation.



This is the best explanation I've seen so far. It requires no one to lie. The Washington Post inferred too much from the evidence it had, but I'm thankful this debate has occurred. The NSA denies that it has broad access, the companies deny it, individuals at those companies also deny it. Obama denies it. All of them speak truthfully.

The EFF publishes a timeline of when companies decided to streamline their own access requests, which may well be perfectly truthful.


This is pretty much what I think, knowing no more than what anyone here knows. It's reasonable to argue that NSA-issued requests are wrong, period, but that's different from saying that Google is actively assisting them in an extralegal process... a distinction that matters not only out of Constitutional concerns, but because building such a backdoor necessarily creates security risks for users, NSA targets or not.

In my layman's opinion, building such a backdoor into an infrastructure as complicated as Google's would require a team as well as an oversight group... someone has to write tests for that "feature," and someone else has to make sure those tests aren't seen by those who don't need to know (i.e., the rest of Google's sizable test department). This kind of arrangement would seemingly have to be known by someone on the executive team.


Google has specifically stated that they don't take this ("drop box") approach:

"We cannot say this more clearly—the government does not have access to Google servers—not directly, or via a back door, or a so-called drop box."


The drop box could be an NSA server and not a "Google server", depending on who manages the hardware.



