tv Click - Short Edition BBC News December 11, 2021 3:30am-3:46am GMT
this is bbc news. the headlines: the uk's health security agency says the country could have more than a million omicron cases by the end of the month, and that two doses provide little protection against the new variant. a senior government minister has described the situation as "seriously worrying". the white house says president biden is very concerned by a supreme court decision to leave strict new abortion laws in place in texas. the controversial new law bans abortions after six weeks of pregnancy. the legislation is being challenged by abortion providers. the high court in london has ruled that julian assange should be extradited to stand trial in the united states following assurances from washington about the way he'll be treated. the wikileaks founder faces charges linked to the leaking of classified military documents. his supporters say the us could not be trusted. coming up in around 10 minutes' time,
we'll have newswatch. but first, here's click. has there ever been a time when we haven't been at war? battles have been raging down through the ages, over lands, religion, over resources. and every so often, a new technology comes along which gives one side a massive advantage and changes the shape of war forever. throughout the history of warfare, there has been one
common thread, and that is it has been people who have made the decisions on who and how to fight. but now we are having to ask the question — what would happen if you took the human out of the loop? weapons, guided and driven by artificial intelligence, are no longer science fiction. and next week, the un will discuss whether the development and deployment of this kind of technology should be left unrestricted, should be regulated or outright banned. so far, the us and the uk have opposed binding agreements to regulate or ban the use of so—called killer robots. james clayton reports from silicon valley on the dawning reality that ai researchers say we need to start thinking about today. the nuclear bomb totally transformed warfare.
there are those that now say that we are on the cusp of something similar. it is a fast track to — i think 'dystopia' is the right word for it. the nation is still recovering from the incident, which officials are saying is some kind of automated attack, which killed 11 us senators at the capitol building. autonomous weapons combine a confluence of different technologies — drones, facial recognition, artificial intelligence and big data — to create a sort of superweapon that not only detects and destroys, but can make that decision itself, and can be owned not just by states, but potentially by organisations, terrorist groups, anyone. this is the kind of dystopian reality that has been painted by critics — assassinations, private armies of bots, computers deciding whether humans live or die. these types of weapons that could easily be deployed and moved throughout different environments, like a swarm,
the sort of embodiments of the robo—dog with a machine gun, and how easy they can proliferate, how easy they can fall into the hands of not what we think of as traditional militaries. this isn't about prohibiting or banning ai usage in the military, or even in weapon systems. it's about drawing a red line on the specific use case of weapons, which are these smaller types of systems that target people. even the un secretary general is worried. the weaponisation of artificial intelligence is a serious danger. on december 13, a review of conventional weapons is scheduled to be held at the un in geneva where they will be discussing killer robots. campaigners will be looking for an outright ban. but, already, that looks unlikely, with the us reportedly saying it would prefer a non—binding agreement.
the discussion should be more about how we regulate it and how we kind of try to define it and approach it rather than trying to outright ban it, which is not going to happen. russia, china and the us are going to go after these technologies, so they are very keen to avoid being put at a competitive disadvantage against what is increasingly looking like the sort of great power, cold war—type competition over the next 30 to 50 years. but if countries can't ban killer robots, what will that mean for humanity? it is a fast track to — i think 'dystopia' is the right word for it. it's a world in which we've delegated and relegated the decision to take a human life to algorithms, right? but it's not quite as simple as that. others argue that autonomous weapons are often mischaracterised. it's not being given the authority to kind of decide its mission set. no commander in the world would ever want a weapons
system that decided what it wanted to do at a given moment. these would be preprogrammed rules according to preprogrammed rules of engagement that are legally screened to make sure they meet the requirements of law of conflict. the machine may make cleaner decisions on the rules of engagement, which have been preset and preassigned on legal grounds, than a stressed pilot, who's trying to do a million things at once. that may be the case with a sophisticated military, but that's not necessarily what we're talking about here. if anyone has an ability to access a type of weapon that can selectively target a group of people, just lay that framing onto all of the types of conflict that we see today. whether we think about conflicts within country, when we think about rogue states, when we think about terrorist groups, when we think about cartels,
we think about violent crime. now you're giving... powering those types of conflicts with a weapon that can target at scale, right? and to me, that is a very, very scary future. autonomous weapons aren't a distant possibility. much of the tech needed to create them already exists, and some believe that if humans can't get together to ban them, it could be one of humanity's greatest mistakes. that was james clayton, and i've been speaking to professor stuart russell, whose bbc lecture this week warned about the dangers of ai—controlled weaponry. the lecture raised the possibility of children playing with toy guns being accidentally targeted by the killer robots. he was involved in the original slaughterbots short film from 2017, which, in itself, was shockingly realistic. sound of drones applause
did you see that? before it was premiered publicly, i showed it to some of my ai colleagues. when they were watching the ceo of the arms company demonstrating the capabilities of this new technology and the kind of uses that you could put it to, they thought this was a documentary. they didn't think this was fictional at all. your kids probably have one of these, right? when it premiered in geneva, actually, at the negotiations on autonomous weapons, the russian ambassador sort of sneered at this and said, "why are we even discussing this? "this is science fiction. "it won't be possible for even 25 or 30 years." three weeks after we premiered the movie, turkish arms company stm actually announced a weapon and they advertised capabilities for autonomous hits on humans, face recognition, human tracking, all of the things that the ceo talks about.
could those genuine bots exhibit the same kind of intelligence and autonomy that's in the film? i would imagine they'd be manually controlled and flown into things? you might think that, but, actually, no. they are fully autonomous, and the un has a report showing that they were actually used autonomously to hunt down retreating troops in libya in march of 2020. i think there are many different arguments people make. one is a moral one, that it is just morally unacceptable to turn over to a machine the decision to kill a human being. the human is piloting the
drone, a human has to push the button to fire a missile. if you don't need a human to do that, then you can basically launch weapons by the million. enough to kill half a city, the bad half. type in a rough description of the mission, like, you know, "wipe out everyone in this city between the age of 12 and 60." just characterise him, release the swarm and rest easy. so you create this weapon of mass destruction that's more effective than nuclear weapons, cheaper, easier to build, easier to proliferate, and doesn't really leave behind a huge radioactive smoking crater. is the answer just always to keep a human in the loop? and is the problem with that — which human? i think the answer is 'yes'. to disallow attacks
where there's no human supervision, there's no human who's looking at the actual situation and the actual target and saying, "yes, this is ok." even under the assumption that the machine is programmed by someone who has the best legal training and the most humanitarian of aims, even in that situation, we face problems of not being able to make the decisions correctly. the problem is the idea of these slaughterbots, all the bits can be bought in a decent supermarket, probably with the exception of a small bit of explosive. so, what do we do? they're technically already available, and how would you ever ban them? so, we ban many things that are already available. so, biological weapons — it wouldn't be that hard for someone with the knowledge to create a biological weapon, but we still ban them. chemical weapons are widely available industrial products. the companies that make them are required to account
for those products, to check that their customers are real customers and not fake shell companies. companies that receive an order for 5 million quadcopters would need to check who's buying the 5 million quadcopters. we can do this in ways that will not be perfect but will prevent the kinds of weapons of mass destruction that i am most concerned about. that is it. you can keep up with the team on social media. find us on youtube, instagram and twitter. thank you for watching. we will see you soon. bye—bye.
hello, and welcome to newswatch with me, samira ahmed. are details and recordings related to horrific crimes, such as the death of arthur labinjo—hughes, suitable for the audience of an early evening news bulletin? and has the bbc's coverage of that downing street party been obsessive and irresponsible? what exactly did or didn't happen in downing street on 18 december last year remains unclear for now. but that's not for a lack of questions on the part of journalists. here's laura kuenssberg quizzing the prime minister at wednesday's press conference — the purpose of which was to announce new covid measures. how can you stand at that lectern, exactly where some of your team laughed and joked about covid rules, and tell people they must now follow your new instructions? and are you really asking the public to believe that you had no idea what was going
on under your own roof? caroline cockwell got in touch with us to say... and sue williams agreed... that's a story that's not going away anytime soon, and we hope to talk to someone at the bbc about it next week. meanwhile, it was announced on tuesday that an investigation will open next week into the death of six—year—old arthur labinjo—hughes. it's a highly distressing case, and we are being careful
about what we show you and what we say on today's programme. but here's some of phil mackie's report last thursday, the day his stepmother and father were convicted of killing arthur. arthur labinjo—hughes had been a healthy and happy little boy. but he was subjected to months of beatings and punishments by his stepmother, emma tustin, and father, thomas hughes. during the trial, jurors listened to the hundreds of audio recordings that tustin made — all of them extremely distressing. some of those audio recordings and videos were shown in that report — a decision which prompted several viewers to contact us in protest. here's barbara harris — and first, helen walker. the horrific and tragic murder of arthur was an important news story and was rightly included in the news at six. however, it was truly shocking to see film of the little boy in a terrible state of suffering, as recorded by his murderer, followed by a playing of the audio of his cries, used as part of the broadcast.