Replay at https://www.youtube.com/YouthAILab

Brief Overview:

Fiona J. McEvoy, founder of YouTheData.com and an AI ethics writer, researcher, speaker, and thought leader based in San Francisco, CA, presented the ethical concerns of AI-driven bias, surveillance, and deception. She was named one of the 30 Women Influencing AI in San Francisco by RE•WORK and one of the 100 Brilliant Women in AI Ethics by Lighthouse3 (2019 and 2020), and she is regularly invited to present her ideas at AI conferences in the US and internationally. By introducing common cognitive flaws (Solomon Asch's halo effect, the law of small numbers, and ego depletion), she showed that human reasoning does not always reflect the truth and can change based on arbitrary factors, such as the timing and order of information. AI therefore cannot be assumed to be neutral: it is designed by flawed people and trained on biased data, which can be interpreted differently depending on historical, social, or political context. McEvoy used Amazon's biased AI recruiting tool and deepfake videos (e.g., Boris Johnson's endorsement of Jeremy Corbyn for Prime Minister) as real-life examples of bias and deception. She also pointed out that ethics differs for everyone, shaped by culture, generational values, and socioeconomic status, using the infamous trolley problem as an example. AI brings new responsibilities, so be vigilant.

Full Summary:

As an AI ethics writer and speaker and the founder of YouTheData.com, Fiona McEvoy presented the ethical concerns of AI-driven bias, surveillance, and deception. Because technology continues to change how people live and interact, she stressed the importance of anticipating and mitigating the negative consequences of AI. AI technology, for instance, can alter our perception of reality and influence our psychology.

McEvoy first outlined the cognitive flaws of people through Solomon Asch's halo effect, the law of small numbers, and ego depletion. Respectively, these flaws show how first impressions shape our perception of others even when information arrives in an arbitrary order; how rationalization does not truly reflect the truth (using the example of the Gates Foundation and its research on good schools); and how people, under certain conditions, make default decisions rather than rational ones.

Artificial intelligence, McEvoy said, has taken over in part because of these faults in natural intelligence. She defined AI in two ways: the simulation of intelligent behavior in computers, and a machine's capability to imitate human behavior and intelligence. AI can be sorted into two categories: decision making (like spam filters or social media feeds) and decision guidance (like voice assistants). Now and in the future, AI is expanding into more domains, such as medical diagnosis, self-driving vehicles, hiring, and virtual worlds, which exposes it to more potential issues.

McEvoy explained that people cannot assume technology is neutral, because it is designed by humans who may use flawed algorithms and biased data sets. Drawing on the eight risk zones of AI (from the Ethical Operating System toolkit at ethicalos.org), she gave several examples of algorithmic bias, data control and surveillance, and deception. A selected data set doesn't necessarily represent the world, because things change over time and data can be interpreted differently depending on historical, social, or political context.

She raised the concern that AI algorithms can discriminate by gender, race, or age, citing the Amazon AI recruiting tool affair: the algorithm showed bias against women because it was trained on the previous ten years of hiring data, from a period when most employees were men. Even after identifiers like name and gender were removed, the system could still infer an applicant's gender from proxies elsewhere in the resume and discard it accordingly.
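To make that proxy problem concrete, here is a minimal sketch in Python using synthetic data (an illustration of the general mechanism, not Amazon's actual system; all feature names are invented for the example). A classifier trained on historically biased hiring outcomes learns to penalize a resume feature that merely correlates with gender, even though the gender column itself is never given to the model:

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Hidden attribute: never shown to the model.
is_female = rng.integers(0, 2, n)

# Proxy feature: e.g. a resume mentioning a "women's chess club",
# strongly correlated with the hidden attribute.
mentions_womens_club = ((is_female == 1) & (rng.random(n) < 0.6)).astype(int)
years_experience = rng.normal(5.0, 2.0, n)

# Historical labels reflect past bias: in this toy data, experienced men
# were always hired, experienced women only 30% of the time.
hired = ((years_experience > 4) &
         ((is_female == 0) | (rng.random(n) < 0.3))).astype(int)

# Train WITHOUT the gender column -- only "neutral" features remain.
X = np.column_stack([years_experience, mentions_womens_club])
model = LogisticRegression().fit(X, hired)

print(f"coef on years_experience:     {model.coef_[0][0]:+.2f}")
print(f"coef on women's-club mention: {model.coef_[0][1]:+.2f}")
# The second coefficient comes out strongly negative: the model has
# rediscovered gender through the proxy and penalizes those resumes.

This is why removing explicit identifiers is not enough: as long as other features encode the same information, a model trained on biased outcomes will reproduce the bias.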

Facial recognition, on the other hand, is being used to surveil employees. Reportedly, 80% of medium-to-large companies use technology to monitor their employees, in some cases even accessing cameras to track mood and emotional states. The stated rationale is to accurately assess productivity and happiness levels; however, the line for acceptable use has not been drawn. In China, the military and high-speed rail operators have reportedly installed sensors in helmets to track whether people are happy, depressed, or anxious, raising the question of how far authority figures should be allowed to reach into people's personal lives.

Deepfake videos, created by AI to show events that never occurred, have increasingly surfaced. Such deception and inauthenticity can influence societal views, as in politics with the deepfake video of Boris Johnson endorsing Jeremy Corbyn for Prime Minister. Realistic AI-generated faces also make it hard to tell who is truly human and who is not.

That being said, McEvoy pointed out that ethics differs for everyone, shaped by culture, generational values, and socioeconomic status, so settling on a perfect definition of ethics for AI to follow is near impossible. A thought-provoking scenario is the infamous trolley problem, which relates to AI through its implementation in autonomous vehicles: a runaway trolley is on track to kill five people, but you can pull a lever to divert it onto another track where it will kill only one. Those who pull the lever take a consequentialist approach, reasoning that losing one life is better than losing five, while those who let events take their course don't want the responsibility of pulling the lever and killing a person. There is no right answer.

Ethics also came into play at a Google developers' conference, where Google introduced Duplex, a voice assistant that could make reservations and react and sound like a person. The internet's reaction, however, was far different than expected: Twitter users found it cruel and unfair to deceive others, and expressed the need to acknowledge the dignity of human service workers.

With that, McEvoy wrapped up her presentation with the idea that AI brings new responsibilities for companies, academics, governments, and users of technology: be vigilant by raising awareness of transparency, of risks, and of how people can support a tech-positive environment. The Q&A closed with the idea that AI also has the potential to be an opportunity and to help the economically disadvantaged or people with disabilities, much as technology has already transformed life for the blind community. Look for more resources on McEvoy's blog, in Netflix's "The Social Dilemma" (2020), and from the Center for Humane Technology, Data & Society (New York), and Oxford University's Future of Humanity Institute.

For previous Youth AI talks, visit https://www.youtube.com/YouthAILab