This is a reading seminar on issues of diversity and inclusion, focusing on how to take leadership in creating more equitable and just communities in tech.
Readings
Response
Summarize the information in the readings - what is the current landscape of diversity in tech? What do (and do not) seem to be the main bottlenecks for achieving diversity? Can you relate any part of your readings to your personal experience in tech as a student of Data Science?
Thinking ahead - What topics do you hope to explore in this seminar? What do you hope to gain from participating in this seminar?
Readings
Response
Summarize the information in the readings - in what ways do these readings challenge the popular narrative of the history, norms and values of the tech industry? Can you relate any part of your readings to your personal experiences as a student of Data Science, or to the experiences of those close to you? The readings identify social/cultural factors that stymie diversity in tech; how can we address these obstacles? In particular, should the solutions be individual/personal/grassroots or institutional (what do the readings from last week and this week suggest)?
Readings
Response
Summarize the information in the readings - this week’s readings highlight the disparate impacts of technology (not just digital) on marginalized groups. Do you see themes in the ways technology can negatively and unequally impact these groups? Do you see common sources for these disparities (e.g. are these disparities arising from malevolent actors/systems, or are they anomalies/accidents)? Relating this week’s readings to those from previous weeks, can you say to what extent addressing ethical issues in data science requires one to address issues of diversity and inclusion in tech?
We’ve surveyed the unequal representation in tech and the unequal impacts of tech. This might raise the questions: what can be done, and what should I do? This week’s reading addresses these questions from philosophical foundations, presents practical frameworks and offers constructive critiques of proposed solutions.
Readings
Response
Summarize the information in the readings - in what ways is an individual data scientist obligated to engage with the ethical implications of their work (on what grounds would you argue that there is any obligation)? What are some ways the tech community has tried to address the broader impacts (especially negative impacts) of its products? What are the promises and pitfalls of these proposed fixes?
Since we’ve been discussing how data science curricula should prepare a student to tackle the alignment of technology and human values: Why should we be addressing diversity, ethics and social justice in a data science classroom? What might be our goals and desired outcomes for our students by addressing these issues? What are some potential difficulties with meaningful integration of these issues into traditional classrooms? What are some ways you (personally) have encountered these issues in the classroom - were these experiences productive, difficult, or frustrating?
Readings
Barriers to full inclusion along multiple axes:
Pathways towards inclusion:
Response
Summarize the information in the readings – identify barriers to academic/professional success and inclusion for each demographic group, and add to this list the experiences of other demographic groups we’ve read about in previous weeks (e.g. women, Asian Americans). Look for overarching themes: what are the common threads in these experiences, and in what ways are the experiences of these groups different/unique? Based on the narratives collected here, can you say to what extent acknowledging identity is important in professional spaces (what are the benefits to the institution and what are the benefits to the individual)? What are the consequences when these identities are ignored or devalued? Can you relate the readings to your personal experience – in what ways have you balanced your non-professional identities and your professional identity? Have these always been the same for you (i.e. you have not been explicitly aware that you held multiple identities), or have these identities at times been in conflict?
Special visitors: Paul Tembo and Yaniv Yacoby
This week we want to gather lessons from the literature on diverse coalition/community building and instantiate them for IACS and the broader SEAS community.
Readings
Response
Summarize the readings – what are some “best practices” for coalition building? Specifically, what are some best practices for ensuring that our coalitions are inclusive and diverse? How do we maintain the effectiveness and cohesion of the coalition when the diverse identities/lived experiences in our coalition lead to differences in goals/opinions? Then instantiate these lessons for IACS:
(1) Characterize the communities and identities within IACS.
(2) Identify the unique needs/goals/experiences of each community.
(3) What would a successful & diverse student (or student+staff) coalition look like, and what would be the purpose of this coalition? Concretely, is this coalition the same as the GAC? Is this “coalition” a core group of IACS students/staff who are not necessarily in the GAC but are in charge of setting the tone for the culture of IACS? Is this “coalition” just the entire IACS community?
(4) How do you share decision making within your definition of the IACS coalition? For example, if your definition of this “coalition” is the entire IACS community, who then makes decisions about the goals and interests of this community – the GAC, the vocal minority, the silent majority?
This week, we will explore how professional communities develop codes of ethics and how these communities are regulated by these codes. We want to relate this week’s readings to previous readings on the role of ethics in data science.
Readings
Response
Summarize the information in the readings - how did professional codes of ethics come to be; what motivated/precipitated the establishment of these codes? How do these codes differ, and how are they similar, across disciplines? What are some relevant takeaways for data science? How do professional societies in other disciplines self-regulate - how do they socialize new members and enforce these codes of ethics? What lessons can we draw for data science as an emerging profession?
This week, we’ll start our exploration of algorithmic bias. However, in understanding the roots of algorithmic bias and the broader impacts of this bias, we need to start with something more fundamental than the actual model or algorithm: we need to consider the data that feeds our ML pipelines, as often biases and harms are already apparent in the data that we (do not) collect and in the way that we collect them.
Readings
Response
Summarize the information in the readings - Algorithmic bias has received much public and academic attention in recent years. We are also well aware that many of the biases in the outputs of our algorithms often follow from biases in the data, as well as biases in the way that we formulate the computational question itself (e.g. the case of predictive policing from previous readings). Given this week’s readings, what are other important ways that our data collection processes can generate bias (and what are the consequences of these types of biases in the data)? What are the ways our data collection processes can directly generate unequal and negative broader social impacts? Relating this week’s readings to those from previous weeks that examine algorithmic/technological biases from other perspectives (e.g. the talk by the author of “Black Software”, “Race After Technology” and “Invisible Women”), what is the broad picture of problems/ethical issues in data collection, curation and manipulation? Instantiate this at IACS: in what ways have you critically examined the data collection, curation and manipulation pipeline in your courses? Has the treatment of data in these courses ever fallen into the common pitfalls highlighted in the readings? Do the readings provide insight into how we can improve the way we educate students about data in IACS courses?
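To make one of these pitfalls concrete: the sketch below (a toy simulation with invented numbers, not drawn from the readings) shows how undersampling one group skews a naive population estimate, and why reweighting only partially repairs the damage, since it cannot recover information that was never collected.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical population: two equally sized groups, A and B, with
# true outcome rates of 0.30 and 0.60 (invented numbers), so the
# true population rate is 0.5*0.30 + 0.5*0.60 = 0.45.

# Biased collection: group B is harder to reach, so it is badly
# undersampled (100 respondents vs. 900 from group A).
sample_a = rng.binomial(1, 0.30, 900)
sample_b = rng.binomial(1, 0.60, 100)

# A naive estimate that ignores how the sample was collected drifts
# toward group A's rate.
naive = np.concatenate([sample_a, sample_b]).mean()

# Reweighting by the known population shares fixes the headline number,
# but any finer-grained question about group B still rests on only 100
# observations that may themselves be unrepresentative.
weighted = 0.5 * sample_a.mean() + 0.5 * sample_b.mean()

print(f"naive estimate:    {naive:.2f}")    # ~0.33
print(f"weighted estimate: {weighted:.2f}")  # ~0.45, the true rate
```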
This week we continue to explore algorithmic bias, focusing on case studies of when an algorithm may create unjust, inequitable effects and why this happens.
Readings
Response
Summarize the information in the readings - what is the definition of algorithmic bias? What are the sources of these biases (e.g. can algorithmic bias always be attributed to bias in the data or to malicious intent of the user/developer)? Is it possible to translate human values of fairness and justice into formal, algorithmic properties – what do the readings suggest (if the answer is yes, what are some useful paradigms for doing this)? What are some common pitfalls that engineers/designers can fall into when trying to eliminate algorithmic bias?
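For concreteness, here is a minimal sketch (our own illustration, not taken from the readings) of two standard ways fairness is formalized as a measurable property of a classifier’s outputs: demographic parity and equal opportunity. The data is a toy example.

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Difference in positive-prediction rates across groups."""
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)

def equal_opportunity_gap(y_true, y_pred, group):
    """Difference in true-positive rates (recall) across groups."""
    tprs = []
    for g in np.unique(group):
        mask = (group == g) & (y_true == 1)
        tprs.append(y_pred[mask].mean())
    return max(tprs) - min(tprs)

# Toy predictions for two groups (invented data).
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 1, 0, 0])
group = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

print("demographic parity gap:", demographic_parity_gap(y_pred, group))          # 0.2
print("equal opportunity gap: ", equal_opportunity_gap(y_true, y_pred, group))   # ~0.17
```

One reason the translation is fraught: known impossibility results (e.g. Chouldechova 2017; Kleinberg, Mullainathan and Raghavan 2017) show that several such criteria cannot all hold simultaneously except in degenerate cases, so choosing a metric is itself a value judgment.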
In the past weeks we’ve examined biases in data and algorithms. This set of readings asks us to examine what happens when biases in algorithms interact with biases in human actors and in social institutions.
Readings
Response
Putting together the three sets of readings (data bias, algorithmic bias and bias in socio-technical systems), where, when and how can biases occur in the data science/ML pipeline? Based on your readings, what are some common design pitfalls that may allow bias to creep in? From the perspective of an engineer, what are some best practices that might mitigate the effects of bias? From the perspective of a citizen or end-user, what are some ways you can advocate for your own rights in the presence of algorithmic bias (and how would you know you’ve been affected by algorithmic bias)? From the perspective of a law/policy maker, how would you advocate for legislating/regulating the use of AI in decision-making systems?
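As one concrete engineering practice suggested by this pipeline view, an engineer can audit recall per group and, where appropriate, post-process decision thresholds per group. The sketch below is in the spirit of equal-opportunity post-processing (Hardt et al., 2016); the scores, groups and target recall are all invented for illustration, not taken from the readings.

```python
import numpy as np

def tpr_at_threshold(scores, y_true, thr):
    """True-positive rate if everyone scoring at or above thr is flagged."""
    return (scores[y_true == 1] >= thr).mean()

def pick_group_thresholds(scores, y_true, group, target_tpr=0.8):
    """For each group, pick the highest cutoff whose recall meets the target.

    A crude version of post-processing for equal opportunity: instead of
    one global cutoff, each group gets the cutoff that (approximately)
    equalizes true-positive rates across groups.
    """
    thresholds = {}
    for g in np.unique(group):
        mask = group == g
        candidates = np.unique(scores[mask])[::-1]  # high to low
        thresholds[g] = candidates[-1]  # fallback: lowest observed score
        for thr in candidates:
            if tpr_at_threshold(scores[mask], y_true[mask], thr) >= target_tpr:
                thresholds[g] = thr
                break
    return thresholds

# Invented risk scores where the instrument runs systematically low for
# group B (a data/measurement bias feeding the decision system).
rng = np.random.default_rng(1)
group = np.repeat(["A", "B"], 500)
y_true = rng.binomial(1, 0.4, 1000)
scores = rng.normal(loc=1.0 * y_true, scale=1.0)
scores[group == "B"] -= 0.5

print(pick_group_thresholds(scores, y_true, group))  # group B gets a lower cutoff
```

Note that group-specific thresholds are themselves contested: equalizing one error rate can worsen others, and using a protected attribute at decision time raises disparate-treatment questions, which ties the engineer’s “fix” directly back to the law/policy maker’s prompt above.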