Co-Founder, Amplify Latinx
Eneida Roman is a nationally recognized leader and the Principal of Roman Law, a boutique legal practice she founded in Boston, MA. Over the last 15 years, Eneida has served in prominent leadership roles in national and local organizations. In 2012 she co-founded The Latina Circle (TLC), a Boston-based network advancing Latina leaders into positions of power and influence. In 2017, TLC launched Amplify Latinx, a non-partisan convener of more than 3,000 members that builds economic and political power by significantly increasing Latinx civic engagement and representation in leadership positions across sectors. www.amplifylatinx.com
Algorithms are written by humans, and because we interpret the world through our own life experiences, the algorithms we write inherently carry human bias into artificial intelligence. Forward-thinking organizations already provide implicit bias awareness training, so why not also become aware of (and do something about) how artificial intelligence is fundamentally shaped by human bias? If organizations intentionally build teams of computer scientists that are diverse in gender, race, ethnicity, ideology, and sexual orientation, the algorithms those teams create can represent our society in a more balanced way. Artificial intelligence affects almost every aspect of our lives; it is used daily to make critical decisions in both the private and public sectors. If not properly addressed, artificial intelligence bias can create problems ranging from bad business decisions to injustice, perpetuating the privilege of one class over another.
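A minimal sketch can make the point above concrete: when a model is trained on historically biased decisions, it simply learns the skew as a rule. The groups, counts, and "majority rule" model below are entirely hypothetical, chosen only to illustrate the mechanism.

```python
from collections import Counter

# Hypothetical historical hiring records: (group, was_hired).
# The outcomes are deliberately skewed by group to mimic biased past decisions.
history = ([("A", True)] * 80 + [("A", False)] * 20 +
           [("B", True)] * 30 + [("B", False)] * 70)

def majority_rule(records):
    """A toy 'algorithm': learn the most common outcome for each group."""
    votes = {}
    for group, hired in records:
        votes.setdefault(group, Counter())[hired] += 1
    return {g: c.most_common(1)[0][0] for g, c in votes.items()}

model = majority_rule(history)
print(model)  # {'A': True, 'B': False} -- the skew in the data becomes the rule
```

Nothing in the code is malicious; the bias enters through the data the model is given, which is exactly why the composition of the teams that collect data and design algorithms matters.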
Eneida’s Discussion Questions
- We use the internet every day for work and play, and algorithms learn from our patterns to predict our online behavior. What do you believe is the social impact of this artificial intelligence? Consider, for example, its impact on recent US elections.
- Does it make sense to you that artificial intelligence and algorithms should be governed by principles and standards, or perhaps federal regulations? Why or why not?
- Have you ever asked yourself how diverse the computer science profession is, or how diverse the AI teams are in the organizations you typically interact with? We hold organizations accountable for their marketing and other outward-facing activities, so why not also hold them accountable for the worldview of their artificial intelligence?