“A New and Sometimes Awkward Relationship”: Big Data and Human Rights Should Be Further Explored

The AAAS (American Association for the Advancement of Science) Science and Human Rights Coalition organized a two-day conference on ‘Big Data and Human Rights’ on January 15 and 16, 2015, in Washington, DC, where Data-Pop Alliance’s co-founders Emmanuel Letouzé and Patrick Vinck (who also serves on AAAS’s Committee on Scientific Freedom and Responsibility) presented their perspectives on this critical yet still under-researched field.

As the title of the event summary article below states, Big Data and human rights are linked by “a new and sometimes awkward relationship”. On the one hand, Big Data can help detect and fight infringements of human rights. On the other hand, the very use of Big Data can challenge core human rights, notably but not only privacy. Fundamentally, tensions between competing human rights are likely to become increasingly salient in the age of Big Data, as in the recent case of the Ebola epidemic: should mobile-phone data have been shared, as some suggested, to map population movements and the spread of the disease, even though doing so might have infringed on individual and group privacy and perhaps safety?
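
To make that trade-off concrete, the sketch below shows one way, purely illustrative and not a description of any actual Ebola-response system, that call detail records could be aggregated into origin-destination flows while suppressing small counts that might identify individuals or small groups; all names, data, and the threshold are hypothetical.

```python
from collections import Counter

# Hypothetical, made-up call detail records: (anonymized subscriber id, day, area).
cdrs = [
    ("u1", "2014-09-01", "Conakry"), ("u1", "2014-09-02", "Kindia"),
    ("u2", "2014-09-01", "Conakry"), ("u2", "2014-09-02", "Kindia"),
    ("u3", "2014-09-01", "Conakry"), ("u3", "2014-09-02", "Boke"),
]

MIN_CELL_SIZE = 2  # illustrative suppression threshold, not an established standard

def od_flows(records, min_cell=MIN_CELL_SIZE):
    """Aggregate per-person moves into origin-destination counts and
    suppress small cells so that rare movements are not re-identifiable."""
    last_seen = {}
    flows = Counter()
    for user, day, area in sorted(records):
        if user in last_seen and last_seen[user] != area:
            flows[(last_seen[user], area)] += 1
        last_seen[user] = area
    # Only aggregates at or above the threshold are shared; the privacy
    # protection comes precisely at the cost of the information dropped.
    return {od: n for od, n in flows.items() if n >= min_cell}

print(od_flows(cdrs))  # {('Conakry', 'Kindia'): 2}; the single move to Boke is suppressed
```

The suppressed flow is exactly the tension in the question above: the more aggressively small counts are dropped to protect privacy, the less complete the epidemiological picture becomes.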

Two major impediments to our collective paradigmatic and practical progress on these issues are insufficient conceptual depth and clarity on the one hand, and insufficient recognition of the possibility of unintended harms on the other. It is too often assumed that the ends (good intentions) always justify the means, and that what ‘we’ can do and what ‘we’ should do don’t differ widely once ‘we’ have asserted that ethical considerations have been taken seriously (when organizing data science competitions, for example). The issue is not only that there may be tensions between human rights that we fail to recognize or anticipate; it is also that ethics and human rights are fundamentally different disciplines, such that it remains largely unclear what ‘universal’ ethical principles might look like (which may well be a contradiction in terms) and how they might fit within human-rights-based approaches. A central question is also who ‘we’ are.

As fields of research and practice, ethics and human rights share a philosophical underpinning in concern for human dignity, and long histories going back to the Enlightenment and shaped by the traumatic experience of the Second World War. But they are obviously not synonymous and have indeed evolved largely in isolation from each other. Although human rights seek moral validity, they are primarily a legal concept, not a moral one; they are about what is just, not about what is (considered) good, which is the realm of ethics and is highly contextual. The consequences have been a lack of critical thinking at the nexus of the two disciplines, simplistic assumptions that human rights work is by definition ethical, and a conception of ethics often limited to recognizing human rights as inviolable, even though most ethics codes don’t even mention them.

The advent of Big Data is forcing, and will continue to force, ‘us’ as a community of researchers and practitioners to think deeply about these questions. We at Data-Pop Alliance, together with interested partners, intend to dedicate time and effort to these issues in the weeks and months ahead; in recent writings and events, we have attempted to articulate our take as follows.

First, as put forth in a paper on the “Ethics and Politics of Call Data Analytics”, we believe that the Menlo Report, published in 2012 to propose “Ethical Principles Guiding Information and Communication Technology Research”, provides a useful frame and a set of four principles (Respect for Persons, Beneficence, Justice, and Respect for Law and Public Interest) to guide Big Data research and practice. One direct application may be to inform the work of human rights organizations interested in taking advantage of Big Data, notably on the question of what should be done with data that were acquired unethically but have value for advancing human rights. We feel these ethical principles can serve as cautionary yardsticks and stated objectives.

Second, echoing the main tenets of a philosophical tradition perhaps best embodied today by Joshua Cohen and discussed by Kenneth Baynes and Rainer Forst, we propose that a political conceptualization of human rights may be best suited to the expectations and challenges raised by Big Data. Specifically, our stance is that in modern, pluralistic, data-infused societies, the most fundamental human right is political participation: the right and ability of citizens and data producers to weigh in on debates about what constitutes a harm, notably through greater legal and effective control over the rights to, and use of, their data. This perspective highlights the fundamental political nature and requirements of the (Big) Data Revolution, which is about people’s empowerment, not just about the ability of politicians and corporations to collect and use, or misuse, ever more individual data.

The current ways in which personal data are collected, shared, and used seem unsustainable, because ethical and legal frameworks have been unable to keep up with the pace of the Big Data tide. A reversal of the trend is entirely possible, as the history of other innovations has shown: what many corporations do today may someday be deemed too intrusive, such that instead of evolving toward a world with less and less privacy, we will in fact evolve toward a world with much tighter restrictions on the data corporations can collect and what they can do with them. As Data-Pop Alliance Academic Director Alex ‘Sandy’ Pentland argued in a recent HBR interview, faced with repeated data abuses and misuses, we run the risk that citizens, regulators, and society more broadly will decide to shut down the system and lock data away completely.

If we want to avoid that outcome, much more needs to be done to empower people to craft the future of Big Data today.

(We are grateful to Valéry Pratt for useful inputs and discussions).

Big Data and Human Rights, a New and Sometimes Awkward Relationship | AAAS

Despite their exciting potential to uncover human rights abuses, technologies that collect and analyze huge amounts of data can also infringe on other human rights. The AAAS Science and Human Rights Coalition explored the way forward.

28 January 2015 | Kathy Wren

The GDELT project monitors the world’s broadcast, print, and web news in over 100 languages and can identify human-rights-related events before the news appears in mainstream, Western channels. This visualization shows all news events captured from 1979 to 2013. | Gdelt.org

Even as experts at a recent meeting showed how big-data technologies can reveal and document human rights abuses like child sex trafficking, they and others in the audience were considering the implications for privacy, free expression, and other human rights.

“The application of big data in the human rights domain is still really in its infancy,” said Mark Latonero, research director and professor at the USC Annenberg Center on Communication Leadership & Policy and fellow at the Data & Society Research Institute. “The positives and negatives are not always clear and often exist in tension with one another, particularly when involving vulnerable populations.”

Latonero spoke at the 15-16 January meeting of the AAAS Science and Human Rights Coalition, a network of scientific and engineering membership organizations that recognize a role for scientists and engineers in human rights.

“Big data” conventionally refers to the collection, storage, and analysis of huge amounts of data. Although there are many sources of new kinds of digital data, the bulk is created — sometimes intentionally, sometimes not — when people use the Internet and their mobile devices. According to a 2014 White House report, more than 500 million photos are uploaded and shared every day, and more than 200 hours of video are shared every minute. People also leave a trail of “data exhaust” or “digital bread crumbs” when they shop, browse, and interact digitally. This information is collected, stored, and analyzed, sometimes after being sold, for marketing and other purposes including scientific research.

The White House report notes: “Used well, big data analysis can boost economic productivity, drive improved consumer and government services, thwart terrorists, and save lives.” But, these benefits must be balanced against the social and ethical questions these technologies raise, the report continues. These types of tradeoffs were at the forefront at the AAAS meeting, where several participants raised questions about property rights and data ownership.

Speakers including Emmanuel Letouzé, who is also the cartoonist Manu, described big data and petroleum as commodities that can benefit society while also creating power imbalances. | Emmanuel Letouzé

Cell phone, insurance, credit card, and other companies collect personal data about their customers that could be used for altruistic as well as business purposes, but “my take is, it’s not their data to start with,” said Emmanuel Letouzé, cofounder and director of the think tank Data-Pop Alliance. The notion of “data philanthropy” may be flawed at the outset, he said, adding that for citizens in developing countries, any benefits they may receive must be weighed against the risks of providing personal information used in big data analysis projects, especially in regions with a history of political instability and ethnic violence.

Several participants said they didn’t necessarily mind sharing their data, for example with social media companies or Amazon.com. However, they wanted more transparency and a better understanding of how their personal data was being used. Likewise, a recent Pew survey reported that nine out of 10 respondents felt consumers had lost control over how companies collect and use their personal information.

“The most fundamental human right, I think, is being able to weigh in on what constitutes a harm,” said Letouzé, arguing that citizens should have a much greater say in how their data is used. His colleague, MIT professor and Data-Pop Alliance academic director Alex “Sandy” Pentland, has called for a “new deal on data,” a set of workable guarantees that the data needed for public goods are readily available while, at the same time, protecting personal privacy and freedom.

Even in more stable, developed countries, big-data technologies can potentially be used for discrimination and manipulation, argued Jeramie Scott, national security counsel at the Electronic Privacy Information Center (EPIC). He disagreed with the White House report’s recommendation that big-data policies focus primarily on how the data is used: “Data collection can have a chilling effect” on the rights to self-expression and free association, he said. People may censor themselves or their activities online, he proposed; for example, they may fear discrimination by companies evaluating their creditworthiness.

Top: Patrick Vinck, Harvard Humanitarian Initiative; Mark Latonero; Megan Price; Kalev Leetaru, GDELT; Bottom: Emmanuel Letouzé, Samir Goswami, Jeramie Scott | AAAS

Other speakers described projects that use big data in ways that directly support human rights — but even they felt caution was needed.

Latonero showed how analyzing classified ads can reveal patterns suggesting organized child sex trafficking and even support investigations of particular individuals. Corporations such as Western Union, Google, and J.P. Morgan Chase are also analyzing data that can reveal financial transactions or other evidence of human trafficking. When this data is shared with human rights groups and researchers, it raises as-yet-unanswered questions about who has a responsibility to act if a human rights abuse is uncovered, and who has the responsibility to report and monitor that situation, Latonero said.
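
As a toy illustration of what such pattern analysis can involve (this is our own hypothetical sketch, not Latonero’s actual method, and all data are invented), one simple signal is a contact number that recurs across ads posted in many different cities:

```python
from collections import defaultdict

# Invented classified ads: (ad id, city, contact phone number).
ads = [
    ("a1", "Dallas", "555-0101"),
    ("a2", "Houston", "555-0101"),
    ("a3", "Atlanta", "555-0101"),
    ("a4", "Dallas", "555-0199"),
]

def flag_recurring_contacts(ads, min_cities=3):
    """Flag phone numbers that appear in ads across many cities; such
    recurrence is one noisy indicator of coordinated posting, and any
    hit would still require careful human review."""
    cities_by_phone = defaultdict(set)
    for _ad_id, city, phone in ads:
        cities_by_phone[phone].add(city)
    return {p: sorted(c) for p, c in cities_by_phone.items() if len(c) >= min_cities}

print(flag_recurring_contacts(ads))  # {'555-0101': ['Atlanta', 'Dallas', 'Houston']}
```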

Samir Goswami, director of Government Professional Solutions at LexisNexis, described a pilot product called SmartWatch that scans over 26,000 information sources each day and alerts a client company or government entity when it finds indications of societal risks, including human rights risks, somewhere in its supply chain. And the GDELT project, which monitors the world’s broadcast, print, and web news from every country in over 100 languages, can show when human-rights-related events are being reported well before the news makes its way through mainstream, Western channels.

Nonetheless, as researchers who work with human-rights-related evidence already know, even large datasets must be checked for biases, such as the omission of key facts. “Big data, while promising, interesting, and useful, is not synonymous with complete or representative data,” said Megan Price, director of research at the Human Rights Data Analysis Group.
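
One standard technique for probing that incompleteness, which the article does not describe but which is widely used in quantitative human rights work, is two-list capture-recapture: the overlap between two independently compiled lists of documented events is used to estimate how many events neither list caught. A minimal sketch with made-up numbers:

```python
def lincoln_petersen(n_list_a, n_list_b, n_overlap):
    """Two-list capture-recapture (Lincoln-Petersen) estimate of the total
    number of events, including those documented by neither list. Assumes
    the two lists are independent samples, which rarely holds exactly."""
    if n_overlap == 0:
        raise ValueError("No overlap between lists: estimator is undefined.")
    return n_list_a * n_list_b / n_overlap

# Made-up example: two groups document 300 and 200 incidents, 60 in common.
print(lincoln_petersen(300, 200, 60))  # 1000.0, i.e. roughly 560 incidents on neither list
```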

Consent is another complex issue. When Goswami worked at Amnesty International USA, he partnered with DataKind, which convened a group of data scientists to analyze a 30-year archive of Urgent Action bulletins containing information about prisoners of conscience, detainees, and other individuals whose human rights were being threatened. The scientists developed a pilot method to predict human rights risks, and the bulletins will be organized into a publicly searchable database by Purdue University.

Even when data science is harnessed for the public good, Goswami noted, the widespread dissemination of identifying information has implications for informed consent. For example, even if individuals consent to have their data collected for one purpose, they may not be aware at that time of other ways the data might be used in the future. Goswami agreed with an audience member that data scientists could learn from the field of medical ethics and the institutional review boards that oversee issues such as informed consent in clinical research.

Human rights experts and data scientists must continue to talk to each other, all agreed. “The thing that keeps me up at night is data scientists trying to intervene in human rights issues with no context of the human rights issue, and then human rights professionals using big data without examination of the assumptions around that data,” said Latonero.
