Diversity, Inclusion and Leadership in Tech

This is a reading seminar on issues of diversity and inclusion, focusing on how to take leadership in creating more equitable and just communities in tech.

Course Schedule

Week 1: Surveying the Landscape of Diversity in Tech

Readings

Response

Summarize the information in the readings - what is the current landscape of diversity in tech? What seem to be the main bottlenecks for achieving diversity (and what, perhaps, are not actual bottlenecks)? Can you relate any part of your readings to your personal experience in tech as a student of Data Science?

Thinking ahead - What topics do you hope to explore in this seminar? What do you hope to gain from participating in this seminar?

Week 2: How Did Tech Become So Homogeneous: The Social/Cultural/Historical Factors

Readings

Response

Summarize the information in the readings - in what ways do these readings challenge the popular narrative of the history, norms, and values of the tech industry? Can you relate any part of your readings to your personal experiences as a student of Data Science, or to the experiences of those close to you? The readings identify social/cultural factors that contribute to stymying diversity in tech; how can we address these obstacles? In particular, should the solutions be individual/personal/grassroots or institutional (what do the readings from last week and this week suggest)?

Week 3: How Does the Diversity/Homogeneity of Tech Affect the Products We Design and the Ways They Impact the World?

Readings

Response

Summarize the information in the readings - this week’s readings highlight the disparate impacts of technology (not just digital) on marginalized groups. Do you see themes in the ways technology can negatively and unequally impact these groups? Do you see common sources for these disparities (e.g. do they arise from malevolent actors/systems, or are they anomalies/accidents)? Relating this week’s readings to those from previous weeks, can you say to what extent addressing ethical issues in data science requires one to address issues of diversity and inclusion in tech?

Week 4: What Are Our Obligations to Fix the Failings in/of Tech? What Are Pathways to Solutions?

We’ve surveyed the unequal representation in tech and the unequal impacts of tech. This might raise the questions: what can be done, and what should I do? This week’s readings address these questions from philosophical foundations, present practical frameworks, and offer constructive critiques of the solutions being offered.

Response

Summarize the information in the readings - in what ways is an individual data scientist obligated to engage with the ethical implications of their work (on what grounds would you argue that there is any obligation)? What are some ways the tech community has tried to address the broader impact (especially the negative impact) of its products? What are the promises and pitfalls of these proposed fixes?

Week 5: Barriers and Pathways to Inclusion

We’ve been discussing how data science curricula should prepare students to tackle the alignment of technology and human values. Why should we be addressing diversity, ethics, and social justice in a data science classroom? What might be our goals and desired outcomes for our students by addressing these issues? What are some potential difficulties with meaningful integration of these issues into traditional classrooms? What are some ways you (personally) have encountered these issues in the classroom - were these experiences productive, difficult, or frustrating?

Readings

Barriers to full inclusion along multiple axes:

  1. Black and brown tech workers share their experiences of racism on the job
  2. Chapter 3 in Black, Brown and Bruised: How Racialized STEM Education Stifles Innovation
  3. LGBTQ+ voices: Learning from lived experiences
  4. “Those invisible barriers are real”: The Progression of First-Generation Students Through Doctoral Education
  5. International Students in Transition: Voices of Chinese Doctoral Students in a U.S. Research University

Pathways towards inclusion:

  1. Chapter 5 in Black, Brown and Bruised: How Racialized STEM Education Stifles Innovation
  2. Fourteen Recommendations to Create a More Inclusive Environment for LGBTQ+ Individuals in Academic Biology

Response

Summarize the information in the readings – identify barriers to academic/professional success and inclusion for each demographic group, and add to this list the experiences of other demographic groups we’ve read about in previous weeks (e.g. women, Asian Americans). Look for overarching themes: what are the common threads in these experiences, and in what ways are the experiences of these groups different/unique? Based on the narratives collected here, can you say to what extent acknowledging identity is important in professional spaces (what are the benefits to the institution and what are the benefits to the individual)? What are the consequences when these identities are ignored or devalued? Can you relate the readings to your personal experience – in what ways have you balanced your non-professional identities and your professional identity? Have these always been the same for you (i.e. you have not been explicitly aware that you held multiple identities), or have these identities at times been in conflict?

Week 6 & 7: Action Brainstorming for IACS & GAC

Special visitors: Paul Tembo and Yaniv Yacoby

Week 8: How to Build Successful Coalitions/Communities Across Differences

This week we want to gather lessons from the literature on diverse coalition/community building and instantiate them for IACS and the broader SEAS community.

Readings

  1. Coalition-building and the forging of solidarity across difference and inequality
  2. Strength in Numbers: A Guide to Building Community Coalitions
  3. Building Collaborative Capacity in Community Coalitions: A Review and Integrative Framework

Response

Summarize the readings – what are some “best practices” for coalition building? Specifically, what are some best practices for ensuring that our coalitions are inclusive and diverse? How do we maintain the effectiveness and cohesion of the coalition when the diverse identities/lived experiences in our coalition lead to differences in goals/opinions? Instantiate these lessons for IACS. That is: (1) characterize the communities and identities within IACS; (2) identify the unique needs/goals/experiences of each community; (3) describe what a successful & diverse student (or student+staff) coalition would look like and what its purpose would be (concretely, is this coalition the same as the GAC? Is this “coalition” a core group of IACS students/staff who are not necessarily in the GAC but are in charge of setting the tone for the culture of IACS? Is this “coalition” just the entire IACS community?); and (4) explain how you would share decision-making within your definition of the IACS coalition (for example, if your definition of this “coalition” is the entire IACS community, who then makes decisions about the goals and interests of this community – the GAC, the vocal minority, the silent majority?).

Week 9: Development of Codes of Ethics in Professional Societies

This week, we will explore how professional communities develop codes of ethics and how these communities are regulated by these codes. We want to relate this week’s readings to previous readings on the role of ethics in data science.

Readings

  1. An Introduction to Software Engineering Ethics
  2. Historical perspectives on development of the codes of ethics in the legal, medical, and accounting professions
  3. Evolution of the American Society of Civil Engineers Code of Ethics
  4. Professional Self-regulation in North America: the Cases of Law and Accounting

Response

Summarize the information in the readings - how did professional codes of ethics come to be - what motivated/precipitated the establishment of these codes? How do these codes differ and how are they similar across disciplines? What are some relevant take-aways for data science? How do professional societies in the other disciplines self-regulate - how do they socialize new members and enforce these codes of ethics? What lessons can we draw for data science as an emerging profession?

Week 10: Data Biases and Inequities

This week, we’ll start our exploration of algorithmic bias. However, to understand the roots of algorithmic bias and its broader impacts, we need to start with something more fundamental than the actual model or algorithm: we need to consider the data that feeds our ML pipelines, as biases and harms are often already apparent in the data that we (do not) collect and in the ways that we collect it.

Readings

  1. Awareness in Practice: Tensions in Access to Sensitive Attribute Data for Antidiscrimination
  2. New Categories Are Not Enough: Rethinking the Measurement of Sex and Gender in Social Surveys
  3. Why Are Health Studies So White?
  4. Chapter 5 of Data Feminism: Unicorns, Janitors, Ninjas, Wizards and Rock Stars
  5. Introduction and Chapter 1 of Ghost Work

Response

Summarize the information in the readings - algorithmic bias has received much public and academic attention in recent years. We are also well aware that many of the biases in the outputs of our algorithms often follow from biases in the data, as well as biases in the way that we formulate the computational question itself (e.g. the case of predictive policing from previous readings). Given this week’s readings, what are other important ways that our data collection processes can generate bias (and what are the consequences of these types of biases in the data)? What are the ways our data collection processes can directly generate unequal and negative broader social impacts? Relating this week’s readings to those from previous weeks that examine algorithmic/technological biases from other perspectives (e.g. the talk by the author of “Black Software”, “Race After Technology” and “Invisible Women”), what is the broad picture of problems/ethical issues in data collection, curation, and manipulation? Instantiate this at IACS: in what ways have you critically examined the data collection, curation, and manipulation pipeline in your courses? Has the treatment of data in these courses ever fallen into the common pitfalls highlighted in the readings? Do the readings provide insight on how we can improve the way we educate students about data in IACS courses?
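
To make one of these data-collection pitfalls concrete, here is a minimal, hypothetical sketch (written for this syllabus, not drawn from the readings) of how under-sampling one demographic group can concentrate a model’s errors on that group; all groups, numbers, and distributions are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical population: two groups whose label-generating process differs
# slightly. Group B is severely under-sampled during "data collection".
n_a, n_b = 5000, 5000
scores_a = rng.normal(0.0, 1.0, n_a)                   # group A feature
scores_b = rng.normal(0.5, 1.0, n_b)                   # group B feature
labels_a = (scores_a + rng.normal(0, 0.5, n_a)) > 0.0  # A's true labels
labels_b = (scores_b + rng.normal(0, 0.5, n_b)) > 0.5  # B's true labels

# Biased collection: keep all of group A but only 2% of group B.
keep_b = rng.random(n_b) < 0.02
train_x = np.concatenate([scores_a, scores_b[keep_b]])
train_y = np.concatenate([labels_a, labels_b[keep_b]])

# "Model": pick the single decision threshold with best training accuracy.
thresholds = np.linspace(-2, 2, 201)
best_t = max(thresholds, key=lambda t: ((train_x > t) == train_y).mean())

# Evaluate on the FULL population: the error concentrates in group B,
# the group the collection process overlooked.
err_a = ((scores_a > best_t) != labels_a).mean()
err_b = ((scores_b > best_t) != labels_b).mean()
print(f"threshold={best_t:.2f}  error A={err_a:.1%}  error B={err_b:.1%}")
```

The mechanism, not the particular numbers, is the point: a model tuned on a biased sample can look accurate “on average” while failing disproportionately on the under-collected group.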

Week 11: Algorithmic Bias

This week we continue to explore algorithmic bias, focusing on case studies of when an algorithm may create unjust, inequitable effects and why this happens.

Readings

  1. Chapter 2 of The Alignment Problem
  2. Algorithms are Not Neutral: Bias in Collaborative Filtering
  3. Algorithm Justice in Child Protection: Statistical Fairness, Social Justice and the Implications for Practice

Response

Summarize the information in the readings - what is the definition of algorithmic bias? What are the sources of these biases (e.g. can algorithmic biases always be attributed to bias in the data or to malicious intent of the user/developer)? Is it possible to translate human values of fairness and justice into formal, algorithmic properties – what do the readings suggest (if the answer is yes, what are some useful paradigms for doing this)? What are some common pitfalls that engineers/designers can fall into when trying to eliminate algorithmic bias?
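
As one concrete anchor for the question about formal properties, below is a minimal sketch (invented for this syllabus, not taken from the readings) of two common formalizations of group fairness, demographic parity and equalized odds, computed on made-up predictions:

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """|P(yhat=1 | group 0) - P(yhat=1 | group 1)|: gap in positive rates."""
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

def equalized_odds_gap(y_true, y_pred, group):
    """Largest between-group gap in true-positive and false-positive rates."""
    gaps = []
    for y in (0, 1):
        rates = [y_pred[(group == g) & (y_true == y)].mean() for g in (0, 1)]
        gaps.append(abs(rates[0] - rates[1]))
    return max(gaps)

# Hypothetical labels and predictions, for illustration only.
rng = np.random.default_rng(1)
group = rng.integers(0, 2, 1000)
y_true = rng.integers(0, 2, 1000)
y_pred = (rng.random(1000) < np.where(group == 1, 0.6, 0.4)).astype(int)

print("demographic parity gap:", demographic_parity_gap(y_pred, group))
print("equalized odds gap:", equalized_odds_gap(y_true, y_pred, group))
```

A pitfall worth raising in discussion: when base rates differ across groups, criteria like these generally cannot all be satisfied at once, so choosing a formalization is itself a value judgment.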

Week 12: Biases in Socio-Technical Systems

In the past weeks we’ve examined biases in data and in algorithms. This set of readings asks us to examine what happens when biases in algorithms interact with biases in human actors and in social institutions.

Readings

  1. Disparate Interactions: An Algorithm-in-the-Loop Analysis of Fairness in Risk Assessments
  2. Algorithmic Decision-Making and the Control Problem
  3. Humans rely more on algorithms than social influence as a task becomes more difficult
  4. Runaway Feedback Loops in Predictive Policing
  5. The Disparate Equilibria of Algorithmic Decision Making when Individuals Invest Rationally

Response

Putting together the three sets of readings (data bias, algorithmic bias, and bias in socio-technical systems), think about when, where, and how biases can occur in the data science/ML pipeline. Based on your readings, what are some common design pitfalls that may allow bias to creep in? From the perspective of an engineer, what are some best practices that might mitigate the effect of bias? From the perspective of a citizen or end-user, what are some ways you can advocate for your own rights in the presence of algorithmic bias (how would you know you’ve been affected by algorithmic bias)? From the perspective of a law/policy maker, how would you advocate for legislating/regulating the usage of AI in decision-making systems?
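
As a concrete companion to the discussion, here is a toy simulation (written for this syllabus) in the spirit of the “Runaway Feedback Loops in Predictive Policing” reading: two districts have identical true incident rates, but incidents are only observed where patrols are sent, and patrols follow past observations.

```python
import numpy as np

rng = np.random.default_rng(2)

# Two districts with IDENTICAL true incident rates.
true_rate = np.array([0.10, 0.10])
discovered = np.array([1.0, 1.0])  # prior discovered-incident counts

for day in range(30):
    # The "model" sends the patrol to the district with more discoveries so
    # far (the tiny random jitter just breaks the initial tie).
    target = int(np.argmax(discovered + rng.random(2) * 1e-9))
    # Incidents are only DISCOVERED where the patrol actually goes.
    discovered[target] += rng.poisson(100 * true_rate[target])

print("discovered incidents per district:", discovered)
# One district ends up with roughly 300 discoveries while the other stays
# near its prior, even though the underlying rates are identical: the data
# the system collects is shaped by its own past decisions.
```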