In this article, the author argues that the promise of “big data” lies not in the amount of data we can generate, collect, or store, but in the ability to use that data to make informed decisions. He explores both the needs of data analysis projects and how such projects can be applied to real-world circumstances. The article concludes that whatever data systems are ultimately put in place, privacy and civil liberties should be a primary concern rather than an afterthought; the author proposes methods that can be implemented during the design of big data programs to curtail privacy violations.
The author explores the origins of “big data” and how the phenomenon spread to a wider audience through parallel advances in computer hardware and the open source movement. The article focuses specifically on Google’s progress in a series of papers published from 2003 to 2006, which were instrumental in enabling other major tech companies to provide technical support, making the process of deciphering big data more accessible. The paper then examines how this “democratization” has affected the national security community by providing better intelligence at a lower cost.
In his symposium speech, the General Counsel of the National Security Agency, Raj De, attempts to bridge the gap between the public discourse about the NSA and the reality of the legal rules, oversight, and responsibility that currently exist at the agency. De seeks to clarify the NSA’s activities relating to data collection and storage and the legal authorities on which the agency relies. He makes clear that certain limitations are imposed on NSA practices, including minimization and retention procedures within the agency and oversight from both the executive and legislative branches as well as the Foreign Intelligence Surveillance Court (FISC).