Google Video and Italy: Is there nothing we won’t watch?
Following Google’s conviction in Italy, Robert Colvile suggests that internet users hold the key to controlling inappropriate content.
By Robert Colvile
Published: 7:57AM GMT 26 Feb 2010
On September 8 2006, a new item was added to Google Video in Italy. It showed an autistic schoolboy in Turin being abused, physically and verbally, by his classmates. On Wednesday, three executives from Google – none of whom had ever worked in Italy or knew of the video’s existence before it was deleted two months later – were found guilty (in absentia) of invading the teenager’s privacy, and given six-month suspended sentences by an Italian court, after charges were brought by a local Down’s syndrome charity.
The outrage was immediate. David Drummond, the company’s chief legal officer, and one of those convicted, claimed the ruling ‘poses a grave danger to the continued freedom and operation of the many internet services that users around the world – including many Italians – have come to rely on’. A coalition of supporters was quickly assembled ahead of the inevitable appeal, including Index on Censorship, Reporters Without Borders, and the US government.
Why the fuss? Everyone agrees that the video was unacceptable and disgusting. The prosecutors argued that Google had a duty to ensure that such videos complied with privacy law before they were made public, that comments beneath the video suggesting that it was inappropriate were ignored, and that it should have been spotted when it made the ‘most viewed’ list on the site.
Google countered that it took the video down within three hours of being alerted by the authorities, that European (and Italian) law states that responsibility for such videos lies with those who post them, and that taking a random set of executives from its hierarchy to court was hardly the way to resolve the issue.
Whatever the merits of the case, there is a broader point. It is not just that this ruling implies that Google – or anyone else who operates a website – is responsible for every offensive video, photo or comment that appears there. It is that this is only the latest of a seemingly endless series of instances in which the internet has been used to spread misery.
In Canada, for example, a video of an obese child miming a lightsaber fight – the so-called ‘Star Wars Kid’ – attracted millions of views and made his life a misery (in this case, the teenager, Ghyslain Raza, sued the classmates who put the video online, rather than the firms who hosted it). In Italy, Facebook groups have been removed for advocating the assassination of Silvio Berlusconi, or the use of children with Down’s syndrome for target practice – ‘an easy and amusing solution’ for disposing of ‘these foul creatures’. And an inquest heard this week that Emma Jones, a British teacher in Abu Dhabi, drank cleaning fluid – either deliberately or accidentally – after becoming panicked that naked photographs of her had been posted online, which could have led to her being condemned as a prostitute.
Faced with this kind of unpleasant material, the layman might ask why it can’t just be blocked before it is uploaded, as the Italian court wants. There are two objections, one philosophical and one practical. The first is whether we want internet companies to have the power to decide what is tasteful or ethical. Everyone would agree that child pornography should be removed from Google’s search database and the perpetrators brought to trial. But what about when Apple decides – as it recently did – that it does not want ‘titillating’ content on its online store? Who decides what meets that vague guideline and what doesn’t?
It is the practical objection, however, that is most significant. YouTube, the video service bought by Google in 2006, receives 20 hours of video every single minute – a torrent of information that no human being could reasonably pre-screen. It is not just video, either. Bill Eggers, the global director of Deloitte Research, points out that it took the Library of Congress more than 200 years to amass a collection of 29 million books and periodicals, 2.4 million audio recordings, 12 million photographs, 4.8 million maps and 57 million manuscripts. The same amount of data is now being added to the internet every five minutes.
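To see why, a back-of-the-envelope calculation helps. The sketch below uses only the article’s own figure of 20 hours of video per minute, and assumes (purely for illustration) that uploads arrive at that rate around the clock and that a screener could watch eight hours of footage per shift:

```python
# Back-of-the-envelope: the pre-screening workload implied by the article's
# figure of 20 hours of video uploaded to YouTube every minute.
UPLOAD_HOURS_PER_MINUTE = 20      # figure quoted in the article
SHIFT_HOURS = 8                   # assumed viewing capacity per screener, per day

hours_per_day = UPLOAD_HOURS_PER_MINUTE * 60 * 24   # 28,800 hours of new footage daily
screeners_needed = hours_per_day / SHIFT_HOURS      # 3,600 people just to watch it once

print(f"{hours_per_day:,} hours of video uploaded per day")
print(f"{screeners_needed:,.0f} full-time screeners needed to view it all")
```

Even on these generous assumptions, thousands of full-time viewers would be needed simply to watch each day’s uploads once, before any judgment about their content had been made.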
Google, of course, has built its success on making sense of this chaos. It is the firm’s proudest boast that no human hand governs the placement of search results on your screen – it is all down to the ‘algorithm’, the impossibly complex formula that runs the site. So committed is Google to this idea that it refused to remove a racist image of Michelle Obama that was ranking highly from its image search function, arguing that the image was perfectly legal and its high placement reflected the reality of what people were linking to and interested in.
This is also why the fact that the European Commission is investigating Google over its search results – as disclosed in The Daily Telegraph this week – is about far more than the threat of billions of dollars in fines. If companies that Google disapproves of, or is threatened by, really are pushed down its rankings, it would destroy the belief – almost an article of faith within the firm – that its algorithm is the most perfect, and most objective, way yet found to make sense of the world’s information.
However, when it comes to screening out inappropriate content, even the mighty Google falls short. After YouTube in particular was criticised for relying on content pirated from elsewhere, such as TV shows or music videos, the firm developed a system called ‘Content ID’. This compares YouTube videos with copyrighted material in Google’s database and lets the rights-holders decide whether to block the pirated material or run their own advertising on it (a far more popular option). But the system is not yet sophisticated enough to tell when a video is displaying inappropriate content, such as pornography or racist diatribes. Nor can Google rely on the ‘tags’ people use to identify their content: a video marked ‘hot sex’ or ‘Britney Spears naked’ can often turn out to be footage of a cat on a piano, mislabelled to drive up traffic.
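Content ID’s internals are proprietary, so the following is only a minimal sketch of the general idea as the article describes it – fingerprint an upload, look it up in a database of reference material supplied by rights-holders, and apply whatever policy the owner has chosen. The hash, function and database names here are illustrative assumptions, not Google’s actual design; a real system uses perceptual fingerprints that survive re-encoding, which a plain hash cannot:

```python
import hashlib

# Illustrative sketch only: Content ID's real matching is proprietary and far
# more robust, using perceptual audio/video fingerprints rather than exact
# hashes. This shows the general shape: fingerprint an upload, look it up in
# a reference database supplied by rights-holders, apply the owner's policy.

def fingerprint(chunk: bytes) -> str:
    """Stand-in for a perceptual fingerprint. A real fingerprint must survive
    re-encoding, cropping and pitch-shifting, which a plain hash cannot."""
    return hashlib.sha256(chunk).hexdigest()

# Reference database: fingerprint -> (rights-holder, policy they chose)
REFERENCE_DB = {
    fingerprint(b"<bytes of a copyrighted clip>"): ("SomeStudio", "monetise"),
}

def check_upload(chunks: list[bytes]) -> str:
    """Return the policy to apply: 'block', 'monetise' (the more popular
    option, per the article) or 'allow' when nothing matches."""
    for chunk in chunks:
        hit = REFERENCE_DB.get(fingerprint(chunk))
        if hit:
            rights_holder, policy = hit
            return policy
    return "allow"

print(check_upload([b"<bytes of a copyrighted clip>"]))      # -> "monetise"
print(check_upload([b"<home video of a cat on a piano>"]))   # -> "allow"
```

The sketch also makes the article’s point obvious: matching of this kind only identifies known works, and says nothing about whether an unfamiliar video is abusive.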
The only practical solution, then, is for users to alert companies such as Google and Facebook to inappropriate material being hosted on their sites. And here is where things become even more disturbing. Think back to that video from Turin. Here was a disabled child being mocked and beaten by other pupils – a repulsive spectacle. Yet it still became one of the most viewed items on the site. And even though some of those thousands who watched it posted comments underneath suggesting it was inappropriate, Google insists that not a single one bothered to click on the button, displayed on every video, that would alert it to the existence of inappropriate content.
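The reporting button the article describes amounts, in effect, to a user-flagging pipeline: flags accumulate against a video until some threshold triggers human review. A minimal sketch, in which the threshold and every name are assumptions for illustration rather than Google’s actual rules:

```python
from collections import defaultdict

# Hypothetical sketch of a flag-and-review pipeline like the button the
# article describes. The threshold and policy are illustrative assumptions.
FLAG_THRESHOLD = 3                     # flags before a clip reaches human review

flag_counts = defaultdict(int)         # video_id -> number of user flags
review_queue = []                      # clips awaiting a human moderator

def report_video(video_id: str) -> None:
    """Record one user flag; escalate to human review at the threshold."""
    flag_counts[video_id] += 1
    if flag_counts[video_id] == FLAG_THRESHOLD:
        review_queue.append(video_id)

# Three users flag the same clip, and it enters the review queue:
for _ in range(3):
    report_video("offensive_clip")
print(review_queue)                    # -> ['offensive_clip']
```

Which is precisely the weakness the Turin case exposed: a queue like this stays empty if nobody clicks.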
In other words, Google, Facebook and the like are at the mercy of human nature. They can act when there have been clear-cut breaches of laws or standards, such as pornography on YouTube, or Facebook groups that advocate the murder of the disabled, or profile pictures that display Nazi regalia (if that is illegal in the relevant country). But they can do less about people’s instinctive tendency towards voyeurism or cruelty – hence the estimates that a fifth of the pupils in Britain’s schools and a seventh of the teachers have become the victims of some form of ‘cyber-bullying’.
‘Wherever like-minded people gather into a mob, you can bid farewell to nuance, empathy or good manners,’ wrote the Sunday Telegraph columnist Jemima Lewis recently, recalling the ‘astonishing rudeness’ with which dissenters from parenting orthodoxy are treated on the Mumsnet website. One mother who expressed dismay that her daughter’s primary school had been discussing lesbianism and civil partnerships was told she was ‘homophobic’, a ‘d—head’ and to ‘Go and live with the Amish if you can’t deal with the real world.’
Executives at Facebook believe this kind of behaviour will dwindle as people come to use their real names online (as they must do on its network), rather than hiding behind anonymity. But Shami Chakrabarti, the director of the human rights group Liberty, thinks they should go further. ‘I understand that it’s very difficult, if you want a quick and speedy and free internet, to say there must be massive obligations on companies to vet things before they go up. But they could go a lot further than they currently are, in terms of taking their responsibilities to educate and warn people more seriously.’
Ultimately, however, it comes down to how we behave. ‘People can be thoughtless, people can be reckless, people can be hurtful, and the web, despite being wonderful in all sorts of ways, is a new – and in some ways more dangerous – medium for that,’ says Chakrabarti.
‘It’s fashionable to bang on about a Big Brother state, but we’re all capable of being pretty nasty little brothers and sisters to each other as well.’