
Editor vs. Algorithm: The Need for Human Connections

One of the highlights of Spark was undoubtedly Lorraine Candy, editor-in-chief of Elle, being interviewed by Claire Beale, editor-in-chief of Campaign. The conversation opened with a question about the role of the human editor in an age where the algorithm powers so much online content. Candy’s response that editors are walking algorithms sparked an industry-wide debate. Candy acknowledged that editors work with and value data, but argued that it is the combination of data and editorial judgement that turns magazine media content into a powerful and exciting force. We asked Nick Southgate to give us his take on the importance of human connections in influencing decision making.

Nick Southgate, Behavioural Economist

I watched Claire Beale of Campaign interview Lorraine Candy of Elle at Magnetic’s annual Spark event. Beale asked Candy what role human editors had in an age of data and algorithms. Candy’s answer was simple – “We [Editors] are the walking algorithms”. Should we believe her?

Nick Southgate

In a world where falling in love with computers is made to look plausible by films such as Her and Ex Machina, should we not also believe that algorithms and AI could edit magazines better than, or at least as well as, the current ‘walking algorithms’ do? Is it not a humbler and more achievable task by comparison? To understand the answer, we should turn to one of the staple questions of philosophical AI: the frame problem.

In essence, the issue is this. Computers can now beat all human comers at chess and even at the more complex game of Go. Yet the same computer would not know to leave the room if a fire started – something even the worst chess player in the world would know to do. AI is great at specific intelligence. Humans are very good at general intelligence.

If the problem is one of specific intelligence, then a computer will do better. Computers excel at repetitive tasks that demand great accuracy, or at crunching a vast number of options to find the best one. Where AI really struggles is when it needs to read situations – and most of all when it needs to read people.

The AI editor we all know best is probably Amazon’s book recommendations. I’ve spent a lot of time telling Amazon which books I already own and what I think of them. I’m impressed by its recommendations. It has alerted me to books I didn’t know were coming out and directed me to authors I didn’t know.

However, when I really want a recommendation, I prefer to ask Profs Noys and Price, friends of mine who teach in the English Department at the University of Chichester.

The reason is simple. The Profs can make leaps in recommendations Amazon just can’t. Yes, I trust their personal expertise and deep knowledge – but that is only half the battle, and not the half where the battle is won.

What they also know is my mood and where I am in my life. They know if I want to be educated or entertained, whether my mind needs emptying or filling. More perceptively, they can suggest when my tastes need correcting or challenging. They don’t just want to sell me a book. They want me to be a better reader and a better person.

This is why they are my editors and Amazon is only an algorithm. Editors choose content for human reasons – and this demands the wide and general intelligence human beings are good at. Algorithms choose content for narrower and more purely commercial reasons. Amazon only recommends books that it can sell me, for example.

It is not just that we trust people to build a better product. It is that editors, human editors, know how to make human connections. They know that their magazine brand is not merely a commercial exchange, but a social one. Editors know when to stretch readers, when to leap forward, when to move back. Editors know how to make human leaps. And, for the moment, algorithms don’t. 

This article has been edited for length; read the original.