Privacy Research

I had nothing to do with Cambridge Analytica. I never worked for them or any similar company. Quite the opposite: it was I who brought Cambridge Analytica's actions to public attention; I was behind the first Guardian article exposing their practices (2015). Long before their inception, I had been warning scholars, policymakers, and the general public about the privacy risks of the algorithms widely used by governments and companies.

I was one of the first scholars to point out the privacy threats posed by the predictive algorithms widely employed by companies and governments. Since the early 2000s, companies and governments have been collecting huge amounts of data generated by internet users, such as emails, web browsing logs, search queries, tweets, Facebook Likes, credit card transactions, and so on. This data is often sold or exchanged. It is then used to learn more about us: to predict our future behavior, consumer choices, political preferences, or even our propensity to commit crimes.
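To make the idea concrete, here is a minimal, purely hypothetical sketch (not code from any of my studies, and using synthetic data) of how such a prediction pipeline typically works: a sparse user-by-Like matrix is fed to an off-the-shelf classifier that learns to predict a single trait, such as a political preference label.

```python
# Hypothetical illustration on synthetic data: predicting a binary trait
# from a sparse user-by-Like matrix with a standard logistic regression.
import numpy as np
from scipy.sparse import csr_matrix
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

n_users, n_likes = 2000, 500
# Synthetic stand-in for a digital-footprint matrix: 1 if a user liked an item.
X = csr_matrix(rng.random((n_users, n_likes)) < 0.05, dtype=float)

# Synthetic binary trait, loosely correlated with a small subset of Likes.
signal = np.asarray(X[:, :20].sum(axis=1)).ravel()
y = (signal + rng.normal(scale=0.5, size=n_users) > signal.mean()).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"AUC on held-out users: {auc:.2f}")
```

The same basic recipe scales to millions of users and thousands of traits, which is precisely why routinely collected digital footprints carry such serious privacy implications.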

In 2012, Facebook patented a technology (later used by Cambridge Analytica) to determine users' personality from their profile data. I published a study showing the accuracy of such predictions. This paper called for, and triggered, an immediate tightening of the privacy policies of Facebook and other online platforms, and informed new EU and U.S. privacy laws.