Digital Redlining, Friction-Free Racism and Luxury Surveillance in the Academy
For the final episode of our third season, we are joined by Chris Gilliard, a professor and scholar who is highly regarded for his critiques of surveillance technology, privacy, and the invisible but problematic ways that digital technologies intersect with race, social class and marginalized communities.
In particular, Chris’ work highlights the discriminatory practices that algorithmic decision-making enables – especially as these apply in the higher education context.
We discuss the various problems that surveillance technology and AI pose for higher education and the future of research, scholarship and academic publishing.
Listen Now
Transcript
Safa: You are listening to the Unsettling Knowledge Inequities podcast, presented by the Knowledge Equity Lab and SPARC – the Scholarly Publishing and Academic Resources Coalition.
Safa: For the final episode of our third season, we are joined by Chris Gilliard, a professor and scholar who is highly regarded for his critiques of surveillance technology, privacy, and the invisible but problematic ways that digital technologies intersect with race, social class and marginalized communities.
In particular, Chris’ work highlights the discriminatory practices that algorithmic decision-making enables – especially as these apply in the higher education context.
We discuss the various problems that surveillance technology and AI pose for higher education and the future of research, scholarship and academic publishing.
Chris: My name is Chris Gilliard. I am a Just Tech fellow at the Social Science Research Council, and I’m based just outside of Detroit. I grew up in Detroit, and I grew up under the specter of what was a vice unit called STRESS in Detroit in the early seventies.
And so STRESS stood for: Stop The Robberies, Enjoy Safe Streets. And a key element of STRESS was that it was a surveillance and vice unit that was an attempt to have a very severe crackdown on crime. They were active for several years – the notable stat is that in that time, in a maybe two to three year span, they killed 23 people, 21 of whom were black.
And so I’ve always been cognizant of surveillance and law enforcement and how that affects black folks. So I grew up with that in the background. But the other story I tell is I teach at a community college and one of the things I found out about my school is that they instituted a policy of filtering the internet – which is very unusual for colleges.
And what that did was it made it so that there were lots of things that students – and frankly, professors – would try to research that couldn’t come through because the internet was filtered. And so if you’re not familiar with how filters work, they let in a lot of things that whoever’s filtering doesn’t want let in, and they keep out a lot of things that reasonable people would assume you could research at a college.
So for instance, it keeps out lots of poems and quotes from the Bible and things like that. But the specific thing that happened with the students I was teaching is they were doing research on what used to be called “revenge porn”, but is now more often referred to as non-consensual intimate image distribution.
And so when they went to search the term “revenge porn” – I should be very accurate about that, they weren’t looking for actual revenge porn but when they searched the term “revenge porn” – nothing would come up. The searches would just sort of pretend that the word porn didn’t exist.
And so they couldn’t do any research, even though there’s tons of scholarship on it. And that really kind of set me down the road of thinking about, you know, filtering and surveillance and broadband access, and the ways that decisions by our institutions really affected how students could do their research and effectively blocked and walled them off from all kinds of knowledge.
So those are the two background stories I tell that I think really kind of set me up for where I am now.
And part of that scenario I discussed with the students and the research is that I started labeling a lot of these practices as “digital redlining”. And what I mean by that is – I’m from Detroit, as I mentioned, and you know, there’s a strong history of the housing policy of redlining in the city. And it’s still very visible in lots of areas in the city, particularly when you look on both sides of Eight Mile or both sides of Mack Avenue and things like that. But there’s a way that a lot of these decisions that either institutions or tech companies – or in many cases the government – make about technology, about machine learning, about access to broadband, very much disproportionately affect marginalized communities.
And so I started to really pay attention to that. And also the ways that computational tools and machine learning make all kinds of decisions about people often without their knowledge, that wall them off from certain things, that prevent particular opportunities, that pigeonhole people.
I mean the classic example being the Facebook ethnic affinity policy that they had for a long time, that was exposed by ProPublica, which in brief showed that – well, there’s not a great short explanation. So for a time, for a long time, Facebook didn’t let people articulate their own ethnicity on the platform. So for instance, if I were on Facebook, there was no button or category or anything like that where I could say: I identify as a black person, right?
But on the back end, Facebook did have categories like that, which they called ethnic affinity. And because Facebook is an ad targeting mechanism, they allowed people to target based on ethnic affinity. ProPublica figured this out. And so what that meant was that, in a lot of cases, people could explicitly target or exclude certain communities. And so in effect, what that meant is I could advertise for a job or housing and say: I don’t want any black people to see this ad. I don’t want any Jewish people to see this ad, I don’t want any old people to see this ad. Very much illegal in other instances, but allowed for the most part on Facebook.
Repeatedly they’ve claimed to stop doing this. Repeatedly they’ve been uncovered as continuing to do it. I can’t say if they’re doing it on this day, but there’s a long history of them claiming to have fixed that particular thing and not. So that is a kind of a classic example of what I would call digital redlining.
There’s lots of ways that that happens in academic spaces as well. I think one of the clearest examples (of digital redlining) is with remote proctoring. It’s existed for a long time, but it took off during the pandemic, and the basic understanding or claim of remote proctoring is that since people were not sitting in classrooms, there needed to be a way to surveil them to ensure that they’re not cheating.
Okay. And so you can tell, just by the way I’m talking about it, that I don’t really accept this claim. So there are many problems with the way that these systems work. And I should note, I’m not going to name any particular system because these companies are very litigious. Just by naming them, you are at risk of being attacked. So I won’t name any particular system, but these systems as a whole exhibit quite a few problems – chief among them is that most of them, if not all of them, use some kind of facial recognition, or in some cases face detection, which they like to differentiate, meaning they’re not using it to identify a specific person, but just to say whether a face is present. I don’t make those distinctions, but the companies do.
But one of the key aspects of that is that facial recognition and face surveillance are notoriously less effective with black and brown faces. And so lots of students have reported – and this has been reported in The Verge, in the New York Times, the Washington Post, the Wall Street Journal, the Chronicle of Higher Ed, and Vice; it’s been widely reported – that they have had to, for instance, for the duration of a test, shine a bright light on their face. You know, sometimes for 2, 3, 4, 6 hours – or in many cases the system won’t even recognize them as a face or as a human. And so I can’t really think of a clearer example of how a computational tool or system discriminates against people based on their race. I would encourage people to just think about the impact of trying to prove to a system that you’re a human being just so you can take a test.
So there are lots of other examples, but that’s the one of the most vivid and dystopian that always comes to mind for me.
Safa: In addition to digital redlining, Chris has coined two other related terms, namely friction-free racism and luxury surveillance – terms which are helpful in thinking about the equity implications of increasing surveillance in academic settings.
Chris: A lot of tech companies use the term friction to explain the difficulties of interacting with other human beings in your day to day world. So, to give an example, before this talk, before, you know, I spoke with you, I went to get a coffee. And I had to say hello to the person and they asked me how my day was. And you know, I asked them how their day was, and I told them about the drink I had the last time, which was a little too spicy. You know, because it was like a seasonal thing, a Ghost Pepper Mocha, things like that. A tech company calls that friction. It’s anything or anyone who gets in between you and the service that you desire.
Now, I don’t call that friction. I call that being pleasant, or being a human being. You know, like I enjoy those interactions. But a tech company would call that friction. Or if you get into a taxi and the taxi driver says, Oh, are you new to this city? And I say, No, how long have you lived here? Or something like that. Like, Uber calls that friction.
Now the goal of many services, particularly in the gig economy, but there are a wide range of services that tech companies offer, part of the goal for them is to eliminate friction. So anything that gets in between me and my ride from point A to B is considered friction. And again, that includes talking to the person. So, particularly with the gig economy, many of these services are performed by marginalized communities. You know, black and brown folks, sometimes immigrants, and I find it highly problematic that it’s seen as a benefit to be able to ignore the humanity of people. So I’ve come to term that friction-free racism.
And the term luxury surveillance – this initially came from the observation that – I mean, one I consider fairly mundane, but that other people took note of, which is that there are many, many similarities between an ankle monitor and a Fitbit or an Apple watch.
I mean, the sort of joke, right, the gallows humor, is: what’s the difference between an ankle monitor and a Fitbit? And the punchline is that a Fitbit collects more data. I coined the term to describe this. Things like Fitbits, Apple watches, Amazon Halo bands, Ring doorbells.
I used the term luxury surveillance to describe those things, which are basically devices, computational tools that perform a variety of surveillance functions, but are often things that are viewed as luxury items and that have some purported benefit that people are willing to pay for.
So I mean, one of the things that immediately comes to mind to me is the ways in which many campuses are now building kind of luxury accommodations for students, whether that’s, you know, high rises or like gleaming fitness centers and things like that.
But I also think about the ways that, in some instances, colleges and universities have dictated that students wear some kind of biometric tracking device, whether that’s a Fitbit, Apple watch, things like that. There are some colleges who do this and there’s also a really underexplored way in which this is a huge part of college athletics, just to track and trace like the biometrics of college athletes.
I mean, those are, those are a couple examples that I can think of off the top of my head.
Safa: Throughout this past season we have heard how various AI tools are presented to users in academic settings as being there to simplify their workload and make their lives easier – with little to no disclosure about surveillance or the implications for their privacy. In some ways, these users are treated as both “consumers” and sources for data mining – in other words, users become the products.
Chris: So I think the central concern of a lot of these companies and the technologies that they pitch towards everyone is the idea that they can optimize your life, through machine learning and proprietary algorithms. Okay? So I don’t believe this, but this is the pitch.
This is what Amazon tells us. It’s what Google tells us. It’s what Facebook tells us, and on and on and on and on. I mean, another really prevalent example is TikTok. So many people who use TikTok, or even people who haven’t, have heard claims about TikTok’s magic algorithm – that if you’re on the platform even briefly, it starts sensing who you are and delivers you content. It knows you better than you know yourself.
Okay. Again, I don’t believe this, but this is the myth that companies have pitched to us. The connection I make is that I think this is a widespread myth that is deployed not only in social media, but in all aspects of technology that filter into our lives. And this includes scholarly lives and academic lives – the idea that somehow a machine or machine learning or an algorithm is going to be able to find for you the information that you need, or deliver to you things that you want, maybe even before you know you want them. I think the danger in this, or at least one of the dangers, is that – the way I think about scholarship, teaching, and learning is that it’s exploratory.
I think most of the people listening have had the experience either in person, you know, in a library for instance, or a bookstore or online, when you don’t know what you’re looking for, you might be in the stacks, right? You might be on a particular shelf. You might be wandering, and you find things that really speak to either what you are interested in or touch on something that maybe you didn’t know you were interested in.
And so I contrast that with the idea that machine learning is gonna deliver that to you. And part of the problem, and again, I think there are many, but part of the problem is that these technologies serve the companies. They serve Amazon, they serve TikTok, they serve Google. They are primarily invested in understanding us as consumers to whom content can be delivered to maximize our consuming potential.
When I think about that in terms of academics, I think it’s very dangerous, and worrisome, and when I think of all the ways that these systems discriminate, silo, pigeonhole, track, trace, control, particularly marginalized groups, again, I think that’s really dangerous.
One of the key insights I’ve gained from talking to students is how untrue it is that students don’t care about these things, that they don’t care about privacy, that they don’t care about surveillance, that they don’t care about framing their own narrative about who they are and what they’re interested in.
So we talk about that quite a bit. I share a lot of insights with them from people like Safiya Noble – who points out, and I think this has become more and more readily apparent when we talk about Google, the extent to which Google is an advertising service, and everything else it does falls under that. I think when you have that discussion with students, whatever level of students, K through 12 or college, they’ve already started to – they’ve already framed their ideas about this.
And so often I’m just kind of reinforcing them. But, you know, it really affects how we talk about doing research, and about disinformation and misinformation – the difference between kind of finding information and information that comes to you, and thinking about why a specific piece of information has come to you.
So yeah, I think that’s really important to talk to students about.
Safa: Some of the leading academic publishing and research analytics firms seem to be building an end-to-end platform for research—spanning idea generation to publication to evaluation and faculty information systems for universities—infused with the kind of constant user tracking that Chris has described in other settings as “ambient intelligence.” This has serious implications for how we learn and conduct scholarship.
Chris: When I think about what some of these firms are calling “ambient intelligence”, it’s the idea that our environment will be constantly covered in sensors – whether those be cameras, microphones, cameras and microphones, you know, some form of radar or things like that – that track our every move and feed it through some set of algorithmic tools in order to, again, predict and optimize people’s behaviors.
I could go on for a really long time about the problems with that, but I think the biggest problem – again, is that it doesn’t serve the needs of the individual, despite what these companies claim.
And again, so when it comes to scholarship, I think that from the youngest child all the way, you know, to the end of life, I think one of the most important things about learning is inquiry, right?
I mean, I’ll just speak for myself: often I don’t necessarily know. I don’t start with an answer. I start with a question. And as for the notion that a machine or a set of, you know, machine learning tools is going to consistently feed me answers – I think that’s the wrong way to go about it.
And never mind the dangers of being constantly surveilled in your academic pursuits. I mean, there are an increasing number of things and types of research that people do, that put them in danger. Whether that’s like reproductive health, different forms of activism, I mean, we can think about the dangers to trans and queer folks.
And so having systems that are constantly looking at what people do, feeding it through some computational process, and also giving it or selling it to all kinds of other parties, and the government – it’s not that I think it’s dangerous, I mean, it absolutely is dangerous.
I’m kind of stumbling on how to conclude that. But I mean, I think there are a host of problems with the way that these systems are used.
Safa: One example of a tool that has lots of surveillance components is the learning management system that many universities purchase and use. These systems have various student analytics features – including how much time a student spends in a course module, how many pages of a digital paper a student reads, whether they submit their assignments on time, whether a student has likely committed plagiarism, and much more – which can be mined and used to generate predictions on student performance. As universities are the ones purchasing these systems and allowing them to surveil student and staff behavior, they should very carefully consider the implications these systems have for student privacy, the potential for discrimination, and other negative consequences.
Chris: So I think when we talk about learning management systems and all the things that are embedded in them, I think a key aspect of these things is the variety of forms of surveillance. I mean, I’m on record as being wildly anti-surveillance in almost all of its forms, maybe all of them.
But the promise of these things is that with enough surveillance of students, you’ll be delivered some set of insights – how long I was on a website, or how long I was on a page, or how many times I logged in – that are some kind of proxy for learning. They are not.
But it’s a good sell to institutions who are looking for certainty, you know, who value predictions because they believe it will give them some ways to kind of maximize profit or increase persistence, things like that.
But it is a really, really extremely important thing to ask – and I’m actually not gonna give the answer, but a really important thing to ask – whether or not these things actually do the thing that the companies say they do. And we can talk about learning management systems, which you could argue are in some form or another – I mean, I won’t take that on for now. But also proctoring systems.
The question is: does surveillance increase the ability to maximize student potential? And the answer is no. And does surveillance decrease student cheating and increase academic integrity? And the answer is no. The bodies who are telling us yes are the companies who are selling the technology. There is very little, and in some cases no, independent research that asserts that these things do that effectively – do the things that these companies claim they do.
And I think that’s really important because, you know, schools, academic institutions often pride themselves on being data driven institutions. But I mean frankly, many of these companies are selling the equivalent of magic beans and schools have never asked whether or not they work. And you know, I mean it’s tremendously expensive. It’s dangerous, it diverts resources, I think, from some of the things that do work or are more effective.
And it’s really disturbing to me that institutions accept the word of companies whose main goal is just to sell them things. I mean, there’s a recent academic study done about proctoring systems and whether or not they reduce cheating. I mean, their answer was no.
Overwhelmingly the data that says they do work comes from the companies who sell it.
And I hope, I mean, I’m not an optimistic sort, but I do hope that some institutions start to take a look at how much money they’re spending on some of these things and what kind of return they’re getting on their investment.
Safa: In our previous episode we heard from journal editors about the machine learning and automated tools that have been introduced into editorial workflows by their publishing platform – which seem to present a significant risk for what Chris has described as “friction-free racism.”
Chris: Yeah, so I think all of these things that are based either on the notion that you’re gonna be delivered some optimized list or set of people, or things like that, or that high-quality decisions are best made using past data in very specific ways – I think they bring into focus all the things that we’ve talked about in terms of digital redlining, friction-free racism, and luxury surveillance.
And so what I mean by that is, academic institutions, much like the rest of society, have a long history, since their inception, of discriminating against particular folks. You know, women, people of color, trans and queer scholars, on and on and on and on. And so the idea that you’re going to somehow maximize some kind of output based on the very discriminatory history of these institutions often means that the output’s going to look very much like the input.
I mean, there are billions of examples, but I would use the example of Amazon, which famously had a hiring algorithm that they said they never fully put into place, right? That never ultimately was responsible for any particular person being hired. And I’m missing a little bit of the specifics, so maybe the guy’s name was Chad, or maybe it was John or something like that. But ultimately, because what a successful employee at Amazon looked like had been a white male, some of the clearest indicators, according to the algorithm, of who was going to be a successful employee were things like whether or not they played lacrosse in college or something like that. And so it sounds absurd, right? I mean, I don’t know if other people will chuckle when they hear this, but it happened. And there are so many examples like that.
There’s examples from predictive policing. I mean, there’s examples about who gets accepted into college based on some of these same things. And so I’m really highly suspicious of any system or set of systems that claims it’s gonna optimize or maximize based on past data. Because often the institution, and the data they collected, are rife with bias. I mean, bias isn’t like the perfect word for that, but it’s a term often used to describe those things.
And so I think, yeah, we should all be highly skeptical, when companies make these claims.
Safa: Within the structures of higher education, libraries in particular have a professional commitment to privacy. Yet, more and more of the knowledge resources that libraries provide are only accessible through platforms that collect and monetize user data. While libraries have fought to protect patron records from groups like law enforcement – private firms are much less inclined and committed to doing the same – which could have important implications for academic freedom.
Chris: There is going to be or continues to be, I’m not sure which, a real chilling effect on the ways people feel free to do research. For instance, if there are certain things, scholarly pursuits that you have, that are wholly legitimate – you have to be very careful about whether or not you search those things in Google, you know? Because we all know that Google is primarily, again, like an advertising engine, but also heavily reports to the government, and very freely provides data to the government.
I think historically libraries have been a place that was relatively free of that. I mean, there’s obviously been lots of battles over that over time. But the idea that your searches, your research, any of that, any of your activities in the library, will not only be under surveillance, but available to ICE, to Homeland Security, even to, in some cases I would imagine, state and local law enforcement – I mean, that is really super scary to me. And I think it would have and will have a chilling effect on how people seek out knowledge.
So you know, in my school they instituted a policy of filtering the internet, so anyone who searched the term “revenge porn” who was on the network would not be able to access anything that had the word porn in it. And so again, like many scholarly articles that talked about revenge porn had porn in the title, or many articles in popular newspapers, right? Like the Washington Post or New York Times refer to it as revenge porn.
I mean, the other insidious thing that I didn’t point out about this – and it’s connected to what I’ve mentioned about scholarship often being about inquiry – is that when people are doing research, and this ranges again across all levels of research, we’re often looking for things and we don’t quite know what we’re looking for or how to describe them, right?
The proper terms for them or something like that. And so when people don’t find something, often the assumption is that that scholarship doesn’t exist. And so if I go looking for articles about revenge porn and none pop up, and I’m not an expert on it – in this theoretical scenario – then one might assume there’s no scholarship on it.
And so you hit, you know, a brick wall. Again, I think it’s really dangerous – like, I saw firsthand the way it kept people from the pursuits that they were actually interested in.
Safa: Another deep concern with the deployment of AI tools in academic settings is that they are prone to privileging certain popular search topics and terms, and have the effect of driving attention to certain papers and authors while invisibilizing many others.
Chris: I think it’s true with Google, but I also think it’s true of the companies that provide the structure for libraries – and there’s a real conflation of quality with popularity, you know, in the era of social media.
A previous guest you had pointed out the ways that this supports structures that privilege certain people. Like, if you are a white male professor from Harvard versus a black professor at a community college, often those citations are gonna be sent to the top, right? Again, there’s also a way in which popularity is in no way a proxy for quality, and doesn’t necessarily even match the thing that you’re looking for.
I mean, I’m sure we’ve all had the experience where we found some little-publicized paper that was exactly what we needed. And again, that is such an essential element, to me, of how people teach and learn and develop interests – finding those things that have not, like, kind of floated to the top, or been floated to the top.
I think it’s really insidious that, and a previous guest mentioned this too, the systems that we rely on for scholarship are also the systems that are persecuting immigrants, you know, that are used to surveil communities.
And so we have to kind of participate in what is a deeply harmful set of practices and companies and technologies, in order just to kind of do scholarship that oftentimes is invested in, or is an attempt at, dismantling these systems. It’s very insidious.
Safa: Over the past season we have been in conversation with those on the forefront of thinking about and fighting against the encroachment of surveillance publishing and the uncritical deployment of AI and algorithmic decision making in higher education – but it is fair to say that this continues to be a huge problem with far-reaching consequences that can be difficult to fully understand, given the deep information asymmetries between vendors and users.
Chris: A big part of the problem is how opaque these systems are – so opaque that most people who aren’t experts, or who don’t spend a tremendous amount of time digging into them, would have no way of knowing some of these things.
And on top of that, many of these companies, you know, will not, for a variety of reasons, give people access to their data, because they’ll claim it’s some proprietary system, or they don’t want to show people the inner workings because then we’ll realize they don’t really work. Or they don’t want to give people access to these systems because they’ll make explicit the ways that they’re used to harm and target marginalized communities.
And so it’s very difficult to create a high enough level of awareness because it’s so hard to dig into these systems. So I don’t kind of fault people because I mean, I spend most of every day looking at this stuff, right? And often it’s still very opaque and hard to understand.
But I do think there’s a growing awareness. And I do think there has come to be a lot more emphasis on the ways that these systems are harming people. We could look at unionization movements at some of these companies. We can look at the defection of people from companies like Google. We could look at the No Tech for ICE movement. I mean there are a lot of ways that I see pushback that didn’t exist in a widespread way for a long time.
There have been a lot of both individual and collective responses from students, and I think that is really important to highlight. I mean, there’s a recent case in Ohio, I think, where a student sued his institution because he said that the room scan from the proctoring system constituted an illegal search. And the court agreed with him.
There has been a lot of pushback over a variety of surveillance methods, instituted or deployed against students throughout the pandemic. Again, through proctoring systems or other systems that claim to detect COVID and things like that, there’s a growing awareness, I think, by students about the need for privacy, and the dangers that some of these surveillance technologies pose.
I think that’s really important because a lot of this stuff is built on the myth or the claim that it’s for the benefit of students. And so when students push back, articulate their needs, their need for safety, and how these things don’t reflect the kinds of things they understand as safety – I think it’s really important because so much of it is done supposedly in their name.
You have to be really careful, because a lot of these companies have taken a stance, a very aggressive stance towards people who speak out against them, in many cases, even students, right? So even like the sort of “don’t attack students” guideline is off the board. So it’s a very fraught existence. And I don’t think everyone is cut out for that. I mean, I may not be cut out for that. But here I am.
The thing I focus on, and I would ask everyone to focus on, is to demand proof that these systems work. That is not as fraught, right? And what you’ll find is that there’s very little proof of that. Like, demand hard evidence that proctoring systems increase academic integrity and reduce cheating. Demand hardcore proof, independent research, that surveilling students in a learning management system improves student retention or learning outcomes. Or demand proof that how long someone spent on a page of an ebook is a good proxy for how much they learned. You’ll find none, because there is none, because it doesn’t do those things.
And so that, I think, is a thing that most of us are in a position to do – or, you know, sometimes you might have to replace the word “demand” with “ask for”. But I think that is one route to dismantling the reliance on these systems, because there exists very little evidence that the magic beans do the thing that the vendor says they do, right? And in fact there’s lots of evidence that they don’t do that, or that they bring with them some associated harms that are not benefits to the students or to faculty or to the institution.
And so – I am a big proponent of telling people that whoever is currently being targeted by a system, if they’re not in that group, they’re next. So what I mean by that is – and I hate to keep going to this well, but it’s a very rich example – I often point to professors and say, if you think that remote proctoring’s end game is just focused on students, you are very much mistaken. Now we can look at the ways that worker surveillance has been ramped up during the pandemic, not only in arenas that we typically associate with it – say service workers, truck drivers, things like that – but with people who write code, attorneys, people in all kinds of walks of business who have typically, I think, thought themselves immune from that type of surveillance, now being exposed to it and having it deployed against them.
I can point to some of the school surveillance or student surveillance platforms – one in particular that, on their blog, openly stated that one of the things it could be used for would be to curtail or short-circuit unionizing activity by teachers.
These things are very tightly connected. These systems that are deployed against students are eventually going to come for us all. And I think this is true in a variety of arenas, right? I mean, it’s kind of the core of the difference between imposed and luxury surveillance, right?
That the ankle monitor and the Fitbit are, like, very closely aligned with each other for a lot of reasons. Like, they’re both systems of control. And so I think that’s an important inroad to helping people.
Well, let me, let me back up a little bit. I would love to live in a society where I could just say this thing is bad for formerly incarcerated people, and we could stop there – like, stop doing this thing, right? I would love to live in a world where I could say, this thing is bad for students, and then people would realize it’s bad for students and just stop doing it. Unfortunately, that is not where I live. You often have to tell people how a system is going to be used to harm them, or is currently harming them, in order for them to care.
So to academics, I often like to point out that many of these things that are deployed against certain groups of people, you know, or students, are going to come for them in ways that they certainly will not be comfortable with, that they will chafe against, and that will harm them. Particularly too, I think that’s important to stress because often there are people in academia heavily invested in the idea of, like, a meritocracy – that the best research or the smartest people and things like that are going to rise to the top. I mean, you can tell by the way I’m chuckling that I don’t believe that, but a lot of people believe that. When you talk about how machine learning and computational tools are going to take the place of whatever mechanisms we had in place before, and a lot of the inherent biases in that, I think that is an important thing to talk about – something that maybe will help people understand why these systems aren’t a great idea.
Safa: Thank you so much for tuning in to our third season.
If you are provoked by what you heard today, we invite you to join us at the Knowledge Equity Lab. Together we can fundamentally reimagine knowledge systems and build healthier relationships and communities of care that promote and enact equity at multiple levels.
Please visit our website, sign up for our mailing list, follow us on social media and send us a message to get involved!