Artificial Intelligence - Who Is Helen Nissenbaum?

 



Helen Nissenbaum (1954–), who holds a PhD in philosophy, studies the ethical and political implications of information technology.

She has held positions at Stanford University, Princeton University, New York University, and Cornell Tech, among other institutions.

Nissenbaum has also served as principal investigator on grants from the National Security Agency, the National Science Foundation, the Air Force Office of Scientific Research, the United States Department of Health and Human Services, and the William and Flora Hewlett Foundation, among others.

Nissenbaum's research addresses big data, machine learning, algorithms, models, and the outcomes these systems produce.

Her primary concern, running through all of these themes, is privacy.

Nissenbaum explores these problems in her 2010 book, Privacy in Context: Technology, Policy, and the Integrity of Social Life, through the concept of contextual integrity, which frames privacy in terms of appropriate information flows rather than the blanket prohibition of information flows.

In other words, she is interested in establishing an ethical framework within which data may be collected and used responsibly.
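
To make the idea concrete, here is a small sketch in Python (the context names, norms, and field labels are illustrative inventions, not Nissenbaum's own formalism): an information flow is described by its actors, the type of information, and a transmission principle, and it is judged acceptable only if it matches a norm already established for that context.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Flow:
    """One information flow, described by contextual-integrity parameters."""
    context: str      # social context, e.g. "healthcare"
    sender: str       # who transmits the information
    recipient: str    # who receives it
    subject: str      # whom the information is about
    info_type: str    # what kind of information flows
    principle: str    # transmission principle, e.g. "confidentiality"

# Hypothetical norms: flows considered appropriate in each context.
NORMS = {
    ("healthcare", "patient", "doctor", "patient", "medical_history", "confidentiality"),
    ("commerce", "customer", "retailer", "customer", "shipping_address", "for_delivery"),
}

def respects_contextual_integrity(flow: Flow) -> bool:
    """A flow is acceptable only if it matches an entrenched norm of its context."""
    key = (flow.context, flow.sender, flow.recipient,
           flow.subject, flow.info_type, flow.principle)
    return key in NORMS

# A patient-to-doctor flow is appropriate; reselling the same data to an
# advertiser violates the norms of the healthcare context.
ok = Flow("healthcare", "patient", "doctor", "patient", "medical_history", "confidentiality")
bad = Flow("healthcare", "doctor", "ad_network", "patient", "medical_history", "resale")
print(respects_contextual_integrity(ok))   # True
print(respects_contextual_integrity(bad))  # False
```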

The challenge in developing such a framework, however, is that when many data sources are combined, or aggregated, it becomes possible to learn far more about the people from whom the data was collected than any individual source would reveal on its own.

Such aggregated data is used to profile consumers, allowing credit and insurance companies to make decisions about them based on the resulting profiles.
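
A toy example (with entirely made-up data and field names) shows why aggregation is so powerful: neither dataset below links a person to sensitive searches on its own, but joining the two on shared quasi-identifiers does.

```python
# Two datasets that look harmless in isolation (all values are invented).
loyalty_card = [
    {"name": "A. Jones", "zip": "14850", "birth_year": 1984},
    {"name": "B. Smith", "zip": "10001", "birth_year": 1990},
]
# An "anonymous" ad-network log keyed only by quasi-identifiers.
ad_log = [
    {"zip": "14850", "birth_year": 1984, "searches": ["loan refinancing", "oncologist"]},
]

# Joining on the shared quasi-identifiers re-identifies the "anonymous" record.
for person in loyalty_card:
    for row in ad_log:
        if (row["zip"], row["birth_year"]) == (person["zip"], person["birth_year"]):
            print(person["name"], "->", row["searches"])
# A. Jones -> ['loan refinancing', 'oncologist']
```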

Outdated data regulation regimes further complicate matters.

One major issue is that the distinction between tracking users to construct profiles and targeting advertisements to those profiles is blurry.

To make matters worse, advertisements are often served by third parties rather than by the website the user is actually visiting.

This leads to the ethical problem of many hands: so many parties are involved that it is unclear who is ultimately accountable for a given issue, in this case protecting users' privacy.

Furthermore, because so many organizations may receive this information and use it for a variety of tracking and targeting purposes, it is impossible to adequately inform users about how their data will be used, let alone allow them to meaningfully consent or opt out.

In addition to these issues, the AI systems that use this data are themselves biased.

Nissenbaum argues that this bias is a social problem rather than a computational one, which means that much of the scholarly effort devoted to correcting computational bias has been misplaced.

As an illustration of this bias, Nissenbaum cites Google's behavioral advertising system.

When a search contains a traditionally African American name, Google's advertising algorithm is more likely to show advertisements for background-check services.

This sort of racism is not encoded in the software itself; rather, it emerges through social interaction with the ads, because users who search for traditionally African American names are more likely to click on background-check links.

Correcting these bias-related problems, Nissenbaum argues, would require considerable regulatory reform concerning the ownership and use of big data.

In light of this, and with few data-related legislative changes on the horizon, Nissenbaum has worked to devise measures that can be implemented right now.

The main framework she has used to build these tactics is obfuscation: deliberately adding superfluous information so as to interfere with data collection and monitoring.

She argues that obfuscation is justified by the asymmetric power dynamics that have produced near-total surveillance.

Based on this obfuscation strategy, Nissenbaum and her collaborators have created a number of practical web browser plug-ins.

TrackMeNot was the first of these obfuscating browser add-ons.

This plug-in issues random queries to a number of search engines in an attempt to pollute the collected data stream and prevent search companies from constructing an aggregated profile based on the user's genuine searches.

This plug-in is designed for people who are dissatisfied with existing data regulations and want to take immediate action against companies and governments that aggressively collect information.
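
The decoy-query idea can be sketched in a few lines of Python (this is only a simulation of the concept, not TrackMeNot's actual extension code; the vocabulary and helper functions are placeholders): genuine searches are interleaved with randomly chosen decoy queries so that anyone logging the stream cannot tell which searches reflect the user's real interests.

```python
import random
import time

# Placeholder vocabulary for decoy queries; the real plug-in draws on much
# richer sources than a fixed word list.
DECOY_TERMS = ["weather radar", "banana bread recipe", "used bicycles",
               "jazz festivals", "tax deadline", "hiking trails"]

def send_query(query: str) -> None:
    # Stand-in for issuing a real request to a search engine.
    print(f"searching: {query!r}")

def search_with_decoys(real_query: str, decoys_per_real: int = 3) -> None:
    """Interleave the genuine query with randomly timed decoy queries."""
    queries = [real_query] + random.sample(DECOY_TERMS, decoys_per_real)
    random.shuffle(queries)                   # hide which query is genuine
    for q in queries:
        send_query(q)
        time.sleep(random.uniform(0.1, 0.5))  # avoid an obvious machine-like rhythm

search_with_decoys("helen nissenbaum contextual integrity")
```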

This approach follows the logic of obfuscation: rather than concealing the user's original search terms, it hides them among other search terms, which Nissenbaum refers to as "ghosts."

Adnostic is a prototype plug-in for the Firefox web browser aimed at addressing the privacy problems associated with online behavioral advertising.

Currently, online behavioral advertising works by recording a user's activity across numerous websites and then placing the most relevant advertisements on those sites.

This behavioral data is gathered and aggregated across multiple websites and retained indefinitely.

Adnostic provides a mechanism that allows profiling and targeting to take place entirely on the user's computer, with no data shared with third-party websites.

The user continues to see targeted advertisements, but third parties do not collect or retain behavioral data.
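
That division of labor can be sketched roughly as follows (the ad list, topics, and scores are hypothetical): candidate ads are delivered to everyone, and the choice among them is made against an interest profile that never leaves the user's machine.

```python
# Candidate ads are delivered to everyone; only the *choice* is personalized.
CANDIDATE_ADS = [
    {"id": "ad-1", "topic": "running shoes"},
    {"id": "ad-2", "topic": "home espresso"},
    {"id": "ad-3", "topic": "travel insurance"},
]

# Local behavioral profile (hypothetical), built from the user's own browsing
# and stored only on the user's computer.
local_profile = {"running shoes": 0.9, "home espresso": 0.2}

def pick_ad_locally(ads, profile):
    """Select the best-matching ad on-device; nothing is reported back."""
    return max(ads, key=lambda ad: profile.get(ad["topic"], 0.0))

chosen = pick_ad_locally(CANDIDATE_ADS, local_profile)
print("render locally:", chosen["id"])   # render locally: ad-1
```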

AdNauseam is yet another obfuscation-based plug-in.

The plug-in runs in the background and clicks every advertisement on the pages the user visits.

The stated goal of this activity is to contaminate the data stream, rendering targeting and monitoring ineffective.
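
The core behavior amounts to something like the following simulation (plain Python standing in for the extension's real in-browser code; the URLs are made up): every ad found on a page is "clicked" indiscriminately, so the resulting click data carries no signal about what the user actually cares about.

```python
# Toy model of a page: each ad is represented by its click-through URL.
page_ads = [
    "https://ads.example/click?id=101",
    "https://ads.example/click?id=102",
    "https://ads.example/click?id=103",
]

def simulate_click(url: str) -> None:
    # Stand-in for silently fetching the ad's click-through URL in the background.
    print("clicked:", url)

def click_everything(ads: list[str]) -> None:
    """Click every ad so the click stream says nothing about real interests."""
    for url in ads:
        simulate_click(url)

click_everything(page_ads)
```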

The practice also almost certainly raises advertisers' costs.

This project proved controversial, and in 2017, it was removed from the Chrome Web Store.

Although workarounds exist that allow users to continue installing the plug-in, its removal from the store makes it less accessible to the general public.

Nissenbaum's work examines at great length the ethical challenges surrounding big data and the AI systems built on top of it.

In addition to offering specific legislative recommendations to address troubling privacy issues, she has built practical obfuscation tools that anyone interested can access and use.


~ Jai Krishna Ponnappan






See also: 


Biometric Privacy and Security; Biometric Technology; Robot Ethics.


References & Further Reading:


Barocas, Solon, and Helen Nissenbaum. 2009. “On Notice: The Trouble with Notice and Consent.” In Proceedings of the Engaging Data Forum: The First International Forum on the Application and Management of Personal Electronic Information, n.p. Cambridge, MA: Massachusetts Institute of Technology.

Barocas, Solon, and Helen Nissenbaum. 2014. “Big Data’s End Run around Consent and Anonymity.” In Privacy, Big Data, and the Public Good, edited by Julia Lane, Victoria Stodden, Stefan Bender, and Helen Nissenbaum, 44–75. Cambridge, UK: Cambridge University Press.

Brunton, Finn, and Helen Nissenbaum. 2015. Obfuscation: A User’s Guide for Privacy and Protest. Cambridge, MA: MIT Press.

Lane, Julia, Victoria Stodden, Stefan Bender, and Helen Nissenbaum, eds. 2014. Privacy, Big Data, and the Public Good. New York: Cambridge University Press.

Nissenbaum, Helen. 2010. Privacy in Context: Technology, Policy, and the Integrity of Social Life. Stanford, CA: Stanford University Press.

