The Perils of Artificial Intelligence in Academic Publishing
One of the key themes that cuts across all of our episodes this season is the surveillance and the highly extractive, harmful economic practices of big corporations in the academic publishing sector, whose artificial intelligence tools are creating new forms of control and governance over our daily and professional activities.
In this episode, we are joined by Christine Cooper, Yves Gendron, and Jane Andrew – co-editors of the Critical Perspectives on Accounting journal and co-authors of the article: “The perils of artificial intelligence in academic publishing.”
We reflect on how automated decision-making algorithms are deployed in academic publishing, particularly for peer review and related editorial decision-making.
We explore the implications of these technologies on research practices, scholarly expertise and autonomy, and the struggle for control over the future of “sustainability, creativity, and critical values of the academic world.”
Transcript
Safa: You are listening to the Unsettling Knowledge Inequities podcast, presented by the Knowledge Equity Lab and SPARC – the Scholarly Publishing and Academic Resources Coalition.
One of the key themes that cuts across all of our episodes this season is the surveillance and the highly extractive, harmful economic practices of big corporations in the academic publishing sector, whose artificial intelligence tools are creating new forms of control and governance over our daily and professional activities.
In this episode, we are joined by Christine Cooper, Yves Gendron, and Jane Andrew – co-editors of the Critical Perspectives on Accounting journal and co-authors of the article: “The perils of artificial intelligence in academic publishing.”
We reflect on how automated decision-making algorithms are deployed in academic publishing, particularly for peer review and related editorial decision-making.
We explore the implications of these technologies on research practices, scholarly expertise and autonomy, and the struggle for control over the future of “sustainability, creativity, and critical values of the academic world.”
————-
Yves: My name is Yves Gendron. I’m an accounting professor at Université Laval in Quebec City. My research interests relate to three main areas, briefly. The first, I would say, is the backstage of professional service firms. Secondly, the backstage of corporate governance as well – the corporate governance of public companies especially. And finally, epistemological studies, such as: why are we facing a kind of journal ranking mania regarding what counts as an academic contribution? So this is a very brief summary.
Christine: I’m Christine Cooper. I’m a professor of accounting at the University of Edinburgh, in Edinburgh, Scotland. I’m interested in all aspects of accounting that impact on ordinary people’s lives – especially the ways that accounting impacts so negatively on ordinary people’s lives. Lots of people just accept accounting: if someone says, look, we can’t afford to do this, they just turn over and say, okay, we can’t afford to do it. But what I like to do is, I suppose, uncover the truth in that.
Jane: My name’s Jane Andrew and I’m an accounting professor at the University of Sydney in Australia. I’m interested in both Yves and Christine’s work, but I’m also, myself personally very interested in the role that accounting information plays in public policy. So the kinds of public policies that are enabled or obscured as a result of the kind of accounting information that’s mobilized in that context.
But I’m also currently engaged in a really large project on data breaches – what happens when data is breached, what organizations do, how they respond, and their obligations to the community to notify and rectify issues around data-related cyber crimes or accidents.
Christine: So of the three of us, I’m the one who’s been a co-editor-in-chief for the longest period of time. Critical Perspectives on Accounting, I suppose, is part of a new wave of journals that began in the early 1980s, when lots of big academic publishers decided that this was gonna be a very profitable revenue stream for them. Several new journals were set up at that time, and Critical Perspectives was one of those. The two founding co-editors stopped, I think, around 1998, and I was one of the people who took over from them; then Yves joined us, and later on Jane joined. But they probably remember the exact dates.
Yves: I think my date is 2014.
Jane: Yeah. And I joined as an associate editor, I think in 2016 or 2017. And then, as co-editor in 2018.
Christine: So, when I took over from the two founding editors of Critical Perspectives on Accounting, David Cooper and Tony Tinker, they hadn’t yet moved to the online platform. While they were editors, they were actually worried about using online platforms because of the control that they would be ceding to an academic publisher.
So everything was done with paper. Literally, in those days, to submit a paper to the journal, you sent three copies. Either David or Tony would keep a copy, and then – there were always two blind reviewers – they would send copies of the paper through regular post to two reviewers. And there was, I think, quite a lot of administration then, because they would have to log everything very carefully and remember the two reviewers it was sent to. And the way that you would do a review – of course it was word processed, I’m not saying it was all handwritten or anything like that – but often it wouldn’t even go by email in those days. It would be sent, again, through the post. So David and Tony would receive a letter from you, they would wait for both reviews, and then they would write to you, a physical letter as well, with their decision. And so there was a lot of chasing up. I think Tony initiated something where he would send you a postcard if you hadn’t sent your review on time. But the whole process was very, very different – much slower than it is with an online system.
So when we took over, I think part of Elsevier’s requirement was that we then moved to the online platform. And while there were dangers in that, there were a lot of good things in terms of speed. You’d receive a paper electronically, you’d send it to reviewers, and you wouldn’t even need to remember who the reviewers were or when you’d sent it, because the system would automatically log everything and do all of that work for you.
So being on the electronic system, I could completely understand the dangers, but in many ways it made the work of being an editor much, much easier.
Yves: I would like to emphasize two key changes that took place between when I finished my PhD in 1997 and now – and this relates to the topic we are going to discuss. When I finished my PhD, I never heard that it was important to be cited. Even in my first few years as an Assistant Professor, no one was bothered about getting cited. And then, quite suddenly, around – I would say – 2000, 2001, 2002, getting cited became something important.
I then heard about the impact factor. I was not aware of it before. So the impact factor – I think it was Thomson at the time who owned the database – is produced through citations, and these citations are now part of big databases. These impact factors now allow a number of people to control, or to think they can control, the work of academics. So in departments, in faculties, and elsewhere – including publishing houses like Elsevier and others – people rely on what are, when we think about it, very superficial measures to manage the work of academics. So this is a very significant change that took place in the last 20 years.
Also, a very significant change occurred probably around 2005 – I’m not really sure. The first publications I had were processed, I think, in the US, or Canada, or the UK, or the Netherlands. But then, somehow, production was moved to India. So the big publishing houses such as Elsevier, they moved production. Production means that when you submit an article, it’s in Word format, and basically what the publisher’s production staff do is convert this Word file into article format. So now this takes place in India.
What motivated Elsevier to do that, I’m pretty sure, is profitability – short-term profitability – in the sense that wages in India are lower than in a number of Western countries. And what is amazing is that most of the big publishing houses moved their production to India as well. India is now at the center of the way in which artificial intelligence, AI, is being implemented within publishing houses. There are a number of consulting companies, or IT firms, in India which specialize in developing AI capacities at publishing houses. So they help Elsevier and other firms to do that.
Christine: Perhaps I could just add to the first part of what you’ve said. When I first became an academic, we could basically publish in any journal that we wanted to. And then gradually – it happened very early in the UK – we started to get a ranking of journals which, as Yves said, was on completely superficial grounds, but on grounds that could, I suppose, be quantified.
So quality was measured in things like citations, and so on and so forth. But then the UK ranking system decided that unless you were on a special database – the ISI database – the ranking of your journal would automatically go down. So when I became an editor of CPA, we were not an ISI journal, and one of the things we had to do very quickly was get onto this database. Otherwise, our journal would’ve been downgraded.
And the implication for the kind of research that we like to publish was that people would probably not do critical accounting research unless there was an outlet that was going to allow them to build a career and publish in higher-ranked journals. So those things happened. For a while, Thomson Reuters’ journal ranking was the only game in town. And then other big publishers – notably Elsevier – started their own kind of system, through Scopus, gathering data. And eventually what’s happened is that that data is actually very valuable.
So the big publishers actually play lots of games. They decide on the rankings in some senses because – sticking with the UK – they provide the Chartered Association of Business Schools with most of the data that it then uses to rank the journals.
So that is like an offshoot that also becomes really important. So the academic journal publishers help to decide the rankings that then encourage people to publish in the journals that they produce – so that, you know, in some ways, they’re in a very nice, virtuous kind of circle.
Safa: As co-editors, Christine, Yves and Jane have experienced first-hand how AI is being increasingly deployed and interjected into the editorial process – often in very opaque, non-transparent ways.
Jane: It’s never obvious to us. So I think that’s a really important starting point: it’s always obscured. It’s always through a sort of accident that we start to realize that the process has some sort of back office that we are not privy to. Publishers aren’t communicating the shift towards AI to any of the participants, really, beyond the corporate entity. And that makes it tricky for us, because there are lots of things that do appear to make our lives easier, but they also have quite significant consequences.
So one thing we were talking about is how important community building – and actually relationships – are in research. And part of what happens when so much of the platform is digitized and made easy for us is a sort of breaking of that community bond. Because we have a tradition, where we can, of writing to people outside of the platform to say: there’s this paper, it’s really interesting, I think it’s in your area of expertise – would you be interested in reviewing it for us? And in doing that, it’s twofold, right? One is to solicit interest in doing what is basically free work for the academic community, but it’s also about building our connections and relationships.
And I definitely learned that from Yves and Christine when I joined. Had I not had them as role models of editorial work, it would’ve been very easy for me to just join as the co-editor of CPA and use the system – because, you know, you learn from others how to do your work, right? And I can see that stage being quickly forgotten – but it’s so critical, particularly to a community like ours, which is not a mainstream research community. So much of it is about keeping everybody connected to each other.
And there’s a passion and drive – which we don’t have a monopoly on in our field – but certainly AI, I think, has the capacity to really infringe on that core aspect of research communities. So that’s all I wanted to say. It’s just not very obvious to us.
Yves: So research on digital surveillance indicates that since the beginning of the internet, a number of large databases have been developing, some of them being housed in private companies, others in government. So basically everything we do on the internet can be captured somehow through these large databases. So basically it’s the same process which is going on at publishing houses.
When, as an author or a reviewer or an editor, I click on an area of the journal’s editorial website, my click is recorded in some database. For Critical Perspectives on Accounting, it’s at Elsevier. So every click we make is captured. Every piece of information we enter in the editorial system is captured: the names of the reviewers, for instance, the content of our editorial letters, the content of the reviews, our keywords. The authors of a paper are typically required to provide five or six keywords at the beginning, and it’s the same when you want to be a reviewer – the system asks for five or six keywords to describe your expertise.
This information is actually used in order to – well, one of the main areas where the implication of AI is most obvious at Critical Perspectives on Accounting is in selecting reviewers. This function is very important when we think about it: the fate of a paper depends a lot on the two reviewers we select. So when we investigated the capacity of AI to exert influence on an editor’s work – we were already aware that it was possible for an editor to click on a “select reviewer” tab, which was there to help, but we had never used it, because we feel that the decision to select reviewers should not be guided by AI or by computer help. This is so important, and so human, that we prefer to always avoid that.
But when we decided to investigate AI, we clicked on it for the first time, and we watched some training videos that had been produced by Elsevier for editors on how to use the apparently incredible help they could provide us.
And then, well, basically what we found – which was not really a surprise, but when you think about it, it’s quite disturbing – is this: when a paper is submitted and you click on “Help me AI”, what appears on your computer is a listing that begins with 5 names suggested to you, and if that’s not enough, well, you can ask for the other 95 names. So AI produces a listing of 100 potential reviewers. But what determines the ranking of those reviewers?
Well, the ranking works in accordance with the keywords I mentioned before – the matching of keywords – and the H index. So the H index, what does it mean? It relates to Google Scholar: for reasons that are not that obvious to me, Google decided, somehow, at the beginning of the 2000s to track down who cites whom in the academic domain. As a result, a metric was developed, the H index, which basically represents the extent to which one is cited. It’s a little more complex than that, but for the sake of this podcast, I think it’s enough to highlight that the higher your H index, the more highly cited you are, basically.
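For reference: the H index is the largest number h such that a researcher has h papers that have each been cited at least h times. A minimal sketch of the computation in Python (the citation counts below are invented, purely illustrative values):

    def h_index(citations):
        # Sort citation counts from most to least cited.
        ranked = sorted(citations, reverse=True)
        h = 0
        for i, c in enumerate(ranked, start=1):
            # The i-th most-cited paper must have at least i citations.
            if c >= i:
                h = i
            else:
                break
        return h

    # Six papers with these citation counts give an h-index of 4:
    # four papers have at least 4 citations each.
    print(h_index([42, 18, 7, 5, 3, 1]))  # -> 4

An author’s H index rises only as more of their papers accumulate substantial citations, which is why the metric structurally favors established, frequently cited researchers.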
So in the suggestions, what we see at the top are the people with a high H index, which implies basically a power – a power of elites in a way, the power of senior people. When you think about it, as an editor, well, it’s quite safe for me to select someone with a very high H index: I don’t have to ask whether this person has a nice track record or whether she or he is established. So it could be quite easy for me to click on the first name that appears. But when I do that, implicitly, people with strong H indexes will often be the ones solicited to be reviewers, which implies that the gatekeeping that occurs around journal submission and journal reviewing will be deeply impacted by what people with a strong H index think is good work or not-so-good work.
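Nobody outside Elsevier can see how the actual reviewer-suggestion tool is implemented, so purely as an illustration of the kind of ranking Yves describes – keyword matching combined with the H index – a hypothetical sketch might look like this (every name, keyword set, and number here is invented):

    def suggest_reviewers(paper_keywords, candidates, n=5):
        # candidates: (name, expertise_keywords, h_index) tuples.
        def score(candidate):
            name, keywords, h = candidate
            # Rank by keyword overlap first, breaking ties by H index.
            return (len(set(paper_keywords) & set(keywords)), h)
        ranked = sorted(candidates, key=score, reverse=True)
        return [name for name, _, _ in ranked[:n]]

    candidates = [
        ("Senior Scholar", {"corporate governance", "critical theory"}, 60),
        ("Mid-career Scholar", {"corporate governance", "critical theory"}, 25),
        ("New Scholar", {"corporate governance", "critical theory"}, 3),
    ]
    print(suggest_reviewers({"corporate governance", "critical theory"}, candidates, n=2))
    # -> ['Senior Scholar', 'Mid-career Scholar']

Under any ranking of this shape, the new scholar with identical declared expertise but a low H index rarely surfaces near the top of the list – exactly the elite-reproduction effect the editors go on to describe.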
Christine: The publishing platforms do take a lot of the very boring work out of being an editor – logging things, remembering things, and so on and so forth. So I completely agree with Jane’s take on this: they offer the new tool, which is, you know, advice on reviewers, as something to make your life easier. So the possibility of anything sinister lying behind it gets waved away: oh no, this is just designed to help you. And if you look at the training webinars and do the courses, the very first thing you are told is: we know that you have trouble finding good reviewers, so now we’re gonna help you with this. So it’s very much presented as a tool for helping you.
It would be interesting to find out from Elsevier how many journal editors actually use this tool at the moment. We are very, very skeptical of it. But I can imagine that some people just do, because it makes your job very, very fast. You don’t even need to read the paper. You basically click on it, open it, there are the suggested reviewers, you click on them, and then it’s all gone off and you’ve done your job.
So the serious work of being an editor is, you know, receiving a paper – and this is just personally what I do – I’m not really interested in who sent the paper, or their H index, or anything about them. I want to read it; I want to see what they’ve said. And as soon as I start reading it, I’m thinking about two things: is it a good fit for the journal, and who might be the reviewers? So any kind of human understanding and thought, even if the tool is well intentioned in terms of suggesting reviewers, is immediately taken away. And that’s without all the potentially very sinister things that could be going on – I’m not casting any aspersions here, I’m not saying that that’s happening, but the potential for distortion, or for political choices about who should be reviewers and so on, could creep in. And even if it doesn’t, it still takes away human judgment.
Safa: The development and presentation of these AI tools and features is driven by certain values. While one overt value is that of trying to help make the daily jobs of editors easier, it is clear that there are more covert values at play.
Jane: There is a sort of pervasive logic that we’re a ‘contaminant’ to the process – you know, that we bring in bias and subjectivity, and that these tools can strip that out so that papers and authors and knowledge get a ‘fair run’ in a way they can’t while we are somehow inhibiting it. I think that can be a very powerful argument. And of course it’s equally if not more problematic to have AI driving this. But I do see a sort of next generation of academics and colleagues buying into that logic. So it does have an audience.
I was also gonna say, about the AI conversation, that they experiment with all kinds of things temporarily – features come and disappear, and then there’s something new, and we often only pick it up later. So we are talking about reviewers, but, you know, we know that AI can write reviews. That is not being rolled out at CPA yet – and we would be relatively resistant editors; I don’t think we would be the first place where they would start to experiment with those kinds of tools. But, you know, suggesting letters – there are all kinds of things. There’s a phenomenal capacity that is only just beginning, that we are getting a light taste of to some extent. And that is really worrying, I think – when you start having tech write your reviews.
Christine: Just thinking about this – if the reviewers that they choose are people who are highly cited, and they’re the ones who measure citations and can sell and provide the data to people, then at the very least this is reinforcing their logic that quality means high citations. And of course there’s a lot of academic work that questions that, from all different perspectives. Nonetheless – to be honest, we think that we’re critical of the system, that we understand it a little bit, or we’re struggling to try to understand it – but when the metrics come out about the ranking of our journal every year, we heave a sigh of relief that we do well. So even though we don’t want to buy into this system, we are still caught up in it.
Yves: In terms of values, I would differentiate front-stage values and backstage values. On the front stage, Elsevier claims that artificial intelligence will be very useful because it allows the matching of expertise – as long as it’s well captured through keywords – and because it saves time. It allows the editor to save time. It’s really quick: okay, there’s a new submission, you click on the “Help me AI” button, you get 5 names, you click 1, that’s it. It took 20 seconds instead of – well, I know that when I do it without it, it takes me 20, 25, 30 minutes. So it’s a big difference.
So these are the front-stage values – efficiency. But backstage, it’s clear that profitability is involved. Elsevier does not do it because it will expand knowledge development or increase the quality of papers and articles. No – it’s because it will pay off in monetary terms in some way.
Another backstage value, as Christine mentioned, is elite reproduction. The views of elites, of established people, and established logics tend to be reproduced through the system. So this is a hidden value, I would say. And performance, in a way, is another value behind it, in the sense that the H index is widely used and the H index is a measure of performance – your performance at publishing work that gets cited. Basically, some people will benefit from the way AI is used and other people will not. If we think, for instance, about new scholars: given the way AI could be used, if we click on “Help me AI”, what is the likelihood that a new scholar is going to be asked to review a paper?
But this is a very significant stake, epistemologically speaking. At Critical Perspectives on Accounting, what we really try to do in a number of cases is to encourage new scholars by matching a new scholar with a senior one as the two reviewers. The younger one learns along the way. But this will never happen if AI is used the way it is currently programmed.
Safa: The automated decision making functions offered by these online publishing platforms have particular risks and implications for the global academic community and for knowledge equity.
Yves: I would say one significant risk involved with the way AI is used, and the way it is likely to be used in the future, relates to the emergence or development of journals without human editors. When we think about it, an editor basically selects reviewers – AI is currently doing that in a way, by providing recommendations – and makes editorial decisions.
In order to make these decisions, we look at the recommendations made by the reviewers: revise, or reject, or conditionally accept. A computer is able to do that – to look at the reviewers’ recommendations and then, with an algorithm, make a decision accordingly. Though, of course, an editorial decision is more complicated than that.
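Purely as a caricature of the naive rule Yves is warning about – not any publisher’s actual system – such an algorithm could be as crude as a few lines of Python:

    def editorial_decision(recommendations):
        # recommendations: reviewer verdicts, e.g. ["reject", "revise"].
        if recommendations.count("reject") >= 2:
            return "reject"
        if all(r == "accept" for r in recommendations):
            return "accept"
        return "revise"

    print(editorial_decision(["reject", "reject"]))  # -> reject
    print(editorial_decision(["revise", "accept"]))  # -> revise

Everything an editor would actually weigh – the substance of the reviews, the paper’s fit with the journal, the author’s capacity to improve it – is absent from a rule like this.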
When we do it as humans, it involves paying lots of attention to what the reviewers wrote and what the authors wrote, and then trying to think about all of this in order to see: does it make sense to publish this piece of work? If I say to the author, could you please revise, what’s the likelihood that they’re going to be able to improve the paper? So obviously the development of research would be significantly affected the day we see the emergence of journals managed with AI as editor – journals without human beings overseeing the editorial process.
Jane: I actually think the scope for ideas that are radical departures from the frame of what is currently being published – and Yves has touched on this – becomes much narrower. So if I were a younger scholar, that would be something I would want to be aware of. I think the other thing is how much this is increasingly a closed loop. There’s pressure on young scholars to produce and publish, there’s pressure internally to publish more quickly – and so speed becomes an addiction. And everybody is addicted to it: the publisher is addicted to it, and editors become addicted to it, because they’re under internal performance pressures. So it’s a complex combination of pressures on scholarship that I think will intensify this move towards artificial intelligence, because it can speed things up.
But I’ll give you an example of something I did recently, which was to accept a paper that most of us were actually not sure about. I talked to Christine and Yves about it, and, you know, it was a risk to publish it. I wasn’t sure, and the reviewers weren’t sure, but it was good enough and interesting enough for us to put into CPA and allow a community discussion of those ideas, rather than muting them before they become something the community participates in.
And I think those things will become very difficult once AI is invisibly making those decisions, without us really understanding its dynamics whatsoever. That’s not to say that editors themselves always make those decisions well either – it’s tricky to create a fantasy of the perfect editor who, as a human, can do these things. But there is a sensitivity, a tuning into communities and ideas and interests, that comes from doing this work over a long period of time, and from meeting people, seeing people, listening to ideas. I think that is different from an algorithm that we can’t see and whose drivers we don’t really understand.
Christine: Can I just push that a little bit further? I think my fear is, I suppose, what this could turn into – and it definitely hasn’t yet. Once people like us are taken out of the picture, and things are very automated – someone just sits there and clicks buttons, the reviews go out and come back, and, as Yves said, if two reviewers say reject, the paper just gets rejected, or whatever – it’s that things could be built into algorithms to make sure that only specific political perspectives ever get into print.
And so you could do the best research – accounting research is important, but if it’s medical research or really important scientific discoveries, really amazing things could be killed because they don’t suit some drug company interest, or a government interest, or whatever. I mean, yes, we are completely imperfect, I completely agree with that, but we do at least provide some kind of checks and balances, whereas an AI system could be completely taken over by some kind of corporate interest. I think this is my fear of where this is leading. And for that reason it needs to be resisted now.
Jane: I agree about this resistance – because there isn’t a clear mode for it. It’s really hard to have dialogues with the publisher about things that they don’t tell us they’re doing. And when we do have dialogues with them, we’re often ignored. The weird thing about working with multinational publishers is that the actual core of the work is often very far away from their actual interests. We can have conversations about quite trivial things that need to be resolved, and they’re very difficult to resolve – because we aren’t really the front, core interest of our publishers or of our contacts at publishers. We’re kept quite far away. So yeah, resisting is really, really difficult.
You would think we would be in a position to do that. There was an example – a very specific one that came and went very fast – where papers were being published, and I discovered that in these published papers you could move over words that AI had decided were the core words of the paper. A popup screen would appear that linked you to other papers related to that core idea. It might have been something like corporate governance, for instance. You’d go across the page and hover over it and – oh wow, there are all these other corporate governance papers, published of course by the same publisher, that you could buy as well. So there was this other commercial component to it.
But in our case, when you hovered over those AI-chosen keywords, the linked papers were often completely opposite to what the authors had been discussing – a completely different sphere of work that wouldn’t have been referred to, and that was probably not even relevant. Because perhaps corporate governance, according to AI, is very, very mainstream, whereas the work that we published may have been a critique of current practice.
That disappeared quite fast, before we even had a chance to say anything. But it appeared without anyone’s consent – the authors hadn’t consented to it. If that had been my paper and I’d seen these hovering popups directing people to read things about, you know, how to make capital markets work better, I’d be horrified. And that just wasn’t something that was discussed with us or with the authors. It really shifts the control over how your work is read. It did disappear, though, so someone must have complained.
Christine: Also, in terms of resistance: the journals have a very strong vested interest in making sure that they remain the top-ranked journals that everyone wants to publish in – because otherwise we would get new entrants that were not controlled by big publishing houses.
The problem, for an academic, is that you are continually pushed to publish in the top-ranked journals. So why would you spend a year, or two years, or however long it takes of your life to write a really good manuscript and then send it to a journal that didn’t count? That was my point about us actually also being part of the system – because if you’re an academic and you want to resist this and do other things, basically you’re going to kill your career.
And these are jobs, after all, for us. Of course we love our jobs, we’re really committed to them, we think they’re important, and we’re doing some kind of social good – they’re not simply for the money – but nonetheless, we have to eat.
Yves: So Critical Perspectives on Accounting is owned by Elsevier. It’s important to be aware that there are two types of journals: journals owned by academic associations, and journals owned by publishing houses. Critical Perspectives on Accounting is the property of Elsevier. So we could not say to Elsevier: well, we don’t want to be involved in this, we will do our own thing, goodbye. We cannot say that. There is an ownership relationship.
Elsevier has huge resources – huge. They make 1.5 billion euros of profit each year; it’s one of the companies with the highest profitability rates in the world. So the company has lots of money, partially as a result of the academic papers it publishes and of the derivatives, the ancillary products, it sells to others by relying on its databases and other things.
Our limited ability to say no to these publishing houses, given the ownership relationship, is a problem.
As a matter of fact, the extent of influence that these big publishing houses exert on the way knowledge is developed in the world is worrying, I would say – in terms of accountability, for instance. To whom is Elsevier accountable? Is it to academics, or to the ideals of academic work? No – they’re accountable to their stockholders. So there’s a pretty significant tension between the quest for profit on the one hand and, on the other, what academics think knowledge is for, and this is not easy to reconcile. In all this, again, we can resist AI in different ways – but our ability to do so is to some extent constrained by the framework in which our journal operates, within the jurisdiction of Elsevier, a company that processes thousands of academic journals.
So what we are saying about Critical Perspectives on Accounting applies to the thousands of other journals owned by Elsevier, and also to those owned by several of Elsevier’s competitors – Springer, SAGE Publications – which rely on basically the same approaches. They imitate one another a lot.
Safa: In their article – “The perils of artificial intelligence in academic publishing” – Yves, Christine and Jane coin the term “elite spotting” to refer to one of the consequences of automated decision making: the growing inequality between those who are highly cited and those who are made invisible in the databases – which has clear negative consequences for academic freedom, diversity, and equity in academic communities.
Jane: I mean, I think Yves and Christine have both been talking about the ways in which the system drives behavior around certain types of metrics, which then allow you to see only the people who have already succeeded and are successful. And, you know, there are the constraints that that places on things – not only in terms of mentoring early-career academics into all the aspects of being an academic, but also in terms of the narrowness of the ideas that are then allowed to emerge.
And that doesn’t speak to quality either. Just because somebody’s got a high citation count doesn’t mean that they’re an appropriate reviewer, or even that their work is high quality.
So, you know, it strips out all of that nuance, and then just reproduces, and perhaps rewards, behavior that’s very focused on those elements – because then you stay inside these elite loops.
And I think that I was also saying how difficult it potentially becomes to publish work that departs radically from the ideas that are currently being circulated within any research community.
And that would happen not only in the humanities, like us, but also in the sciences, where it is incredibly difficult to challenge established ideas. And that narrows our capacity for knowledge. Because there are actually many ways of coming to understand the world, and they should find space to be discussed.
And the problem with algorithms that sit invisibly, with a lack of transparency underlying them, is that we can imagine we are seeing many ways of coming to know and understand and debate the world, when actually we’re seeing a very narrow, and increasingly narrow, set of ideas.
I think, for me, not all of this is bad news. But I feel very frustrated by the lack of transparency around what is going on, and by the muting of any public debate within academic communities about the effects of this on the kinds of ideas – and on the career trajectories of people who want to participate in academic life going forward, whatever the future is.
Yves: Well, in terms of academic freedom: is academic freedom a kind of illusion when knowledge develops in a world where a few very powerful publishing houses from the private sector own and control the majority of academic journals? We are not free from private interests; the world of research is, to a significant extent, impacted by them.
And when we think about it: as a society, how could we let something like this develop? So there’s an impact on academic freedom, but the impact is broader than that, because it costs university libraries lots of money to subscribe to the journals owned by these publishing houses. But at the same time, who contributes the articles? Academic researchers paid by universities – so basically paid by public money. And yet the libraries of their universities need to pay lots of money to have access to the very journals in which these academics publish. So this is a system which is basically unfair, I would say. And it’s a system which threatens academic freedom, because as academics we have a relatively low capacity to influence the way knowledge is disseminated by these journals. And what these journals do eventually impacts research work, in different ways.
So academic freedom is something which is very important – it’s probably the core value, the essence, of academia. But we need to take care, because it’s currently threatened in a very significant way.
Christine: If you work at a university that’s quite rich, it can afford to pay journals for gold open access. So straight away, your chances of being more highly cited go up. I completely agree with Yves: there’s the issue of even being able to afford access to these journals in your library, and then there’s the next step. Even though our libraries are buying back work that we’ve already created, universities can then use their money to make these things open and to play the game – in terms of the metrics, the algorithms, and all of that – to get higher citations.
So if you’re from an elite university, you’re likely to stay an elite university. And if you’ve never managed to break into that elite system, the chances are you won’t be able to. And of course there are all the issues of geographic differences: in probably the majority of the world, universities can’t even afford to take one step into buying into that system. So yeah, the whole system works against equality and diversity.
I mean, there are initiatives that have begun to try to challenge that, like Plan S and the San Francisco Declaration on Research Assessment, which at least encourage universities not to judge someone just on the number of top-ranked journal publications they’ve got, and encourage us to read people’s work and make our own decisions about quality, and so on and so forth.
But we’re on a bad trajectory in terms of trying to broaden access to, I suppose, the whole publishing world – which has a huge impact on every single thing that goes on in a society.
Jane: Going back to this core idea around AI: if it was being designed and rolled out to support us, we would be in dialogue with the designers. And there is no dialogue with us. There’s no asking us what we need, what might support us.
There’s no explanation to us about the kinds of technologies that are available, the kinds that would allow us to co-create a future. Obviously it’s going to be different – it’s not going to be what it was; it’s not what it was – but if the technology is truly there as an enabler, we would be part of that dialogue.
And that’s very frustrating for us, because we also feel quite responsible for our community, in terms of trying to assist in crafting a relationship between technology, the community, and the publishers that avoids elite spotting and supports the kinds of freedoms which are always curtailed, but which we all share an interest in sustaining into the future. That lack of dialogue around technology is incredibly frustrating. And it happens in the workplace; it happens everywhere.
There was a past in which there was much greater discussion around simple things like the use of cameras in workplaces. In Australia, we had very clear legislation around the disclosure of cameras in workplaces, and people had to consent to a camera. Students had to consent to being filmed, for instance – to being recorded.
This technology has moved so rapidly that those old-school ideas – which actually protected people and encouraged participation in the relationship between technology and social life – have just apparently collapsed. I think that’s partly because of the speed of transformation, and that speed serves power: internally, management power inside organizations, but also power in terms of publishing and ideas.
Safa: Despite the growing hegemony of AI in academic publishing, it is possible for editors and others in the academic community to resist and push back on the encroachment of AI into our community practices and knowledge governance.
Yves: Well, there are many solid institutions, and AI is becoming one of them. And changing institutions is always a challenge. That doesn’t mean it cannot be changed – 20 years ago this AI apparatus did not exist, so it was humans who created this structure, in a way – but changing an institutionalized pattern of technology can be quite a challenge.
What we can do, at least, is what you’re currently doing through this podcast: informing people – other academics, students, citizens – about some of the dangers that come with AI being increasingly used in academia without accountability.
We cannot get outside of this system. If we say to our journal manager or to the publisher that we no longer want them to track our clicks on CPA, the answer is going to be a big no: you cannot, it’s part of the deal. So we are prisoners of this, especially when the journal is owned by a private company.
But we can talk about it, discuss it at conferences, and publish on it in order to voice concerns. So it’s important that these voices keep being raised, to make sure these dangers don’t drop off our radar screen.
Christine: I mean, of course resistance is completely possible. It would just be very difficult, because of the things that Yves said – it has to do with the different institutions. There are so many institutions now that have a stake in the outputs of AI metrics. In probably the most advanced capitalist states, governments have gained a kind of control over universities through their various research assessment exercises.
And, as Yves said, the academic publishers are very highly profitable and very powerful institutions, and they have, of course, a very strong stake in this. Even within universities, a whole machinery has been set up that helps academics in some ways, but also monitors and constrains academics through the bureaucracy surrounding it.
So a lot of employment has been created because of the way publishing is now managed and all the values it carries. To overcome all of those things – of course, academics could stand up and say no. I think it’s gonna be a big ask, but of course we could change things. These things aren’t inevitable, but the resistance would need to be very solid among academics across the globe.
Jane: I agree. If I were to say anything about this, it would be to remind us all – and not just us, but collectively – how important it is to be together: how important it is to have human social interactions as part of the experience of just living, but also, you know, in academic life. We’ve just had a long, three-year period of absence from each other. Reinvigorating lived exchanges is one way we can be constantly reminded of how important that is to our community – broadly, but also to academic life.
The other thing I was gonna say is that one of the things we do, which is a kind of resistance strategy, is make a lot of decisions offline. We take a lot of things off the platform and do them through email, because that is a way of securing a different way of coming to a conclusion about academic work – the system doesn’t foster the kinds of conversations about papers or decisions that are more nuanced or more complicated.
In taking it offline, the system can’t see it. So the publisher starts to think that their system is perfect, because it’s working – they don’t see all the work that has to be undertaken to make the AI work, if that makes sense. So whilst it is a resistance strategy, it always has these other consequences that we are aware of: it becomes very difficult, whilst resisting, to also articulate that resistance in a way that reshapes AI.
Safa: To learn more about these issues, we encourage you all to read Christine, Yves, and Jane’s article on the topic, “The perils of artificial intelligence in academic publishing.”
Yves: By “the perils of artificial intelligence” we mean that AI, although it’s presented as neutral and as an effective way of dealing with tedious tasks, involves behind-the-scenes values – in particular those we spoke about previously.
Through AI, a powerful triangular alliance develops between AI, big public companies, and journal rankings. In different areas now, academics are ranked in accordance with the journals where they publish. So rankings exist – and the combination of AI and journal rankings is a very powerful one. AI perpetuates and amplifies the very big power that journal rankings already had before AI.
With AI selecting reviewers based on the H index – based on the extent to which one is cited – AI contributes to perpetuating and strengthening this logic. In all this, we risk having an academic community which is basically focused on publishing in the same journals, because they’re at the top. And these top journals are indirectly promoted through AI.
And if we all have the same ambitions, it means that we will all rely on the same methods. The methods that big journals like. We will look at the same topics, the topics that big journals like.
So this is going to compromise innovation in research. Where are the new ideas coming from? Are new ideas going to be relegated to peripheral journals which are not on the radar screen of most people – because they don’t have an impact factor, or because they’re not followed by AI or by journal rankings?
These are important issues, and we need to talk about them, especially because, as Jane mentioned previously, AI is quite obscure – it’s only indirectly that we learn about it. As editors, we were in a sense in a good position to learn about this indirectly. But our knowledge of it is imperfect – in a way superficial – because we are not allowed to see what’s going on in India at the consulting houses which provide advice to Elsevier, Springer, and other publishing houses.
So everything basically develops behind walls – which is another key concern.
Safa: Thank you so much for tuning in.
If you are provoked by what you heard today, we invite you to join us at the Knowledge Equity Lab. Together we can fundamentally reimagine knowledge systems and build healthier relationships and communities of care that promote and enact equity at multiple levels.
Please visit our website, sign up for our mailing list, follow us on social media and send us a message to get involved!