tv The Stream Al Jazeera October 31, 2014 12:30pm-1:01pm EDT
can we teach robots morality? the u.s. is betting millions on the prospect. later, from catching criminals to reading emotions, mind-blowing advances in facial recognition software are bringing computers frighteningly close to mind-reading.
my producer and co-host is here to bring in all your feedback throughout the program. teaching robots morality. it feels like we're in a science-fiction movie. >> it's awesome. we're paid to geek out today. this is fascinating and terrifying, so i imagine a future metropolis programmed by the Matrix, run by Skynet and patrolled by RoboCop. all of us will be like Stepford wives, because they'll have facial recognition technology that can tell if we're lying. we'll all be boring people, which means Siri will be the most interesting person. we asked our community: in a future of artificial intelligence, could you imagine dating your Siri? we wouldn't last. our communication styles are incompatible. she never listens to me. taylor on facebook says, if it's a robot, i.e., a computer, then reason will be all it has first.
all you have to do is make it so the suffering of beings is a concern to it, and you have the potential for morality in the robot, but also the consequences. >> taylor is thinking. >> it's smart. >> piloting planes, diagnosing illnesses and making ethical decisions on the battlefield, those are duties that require trained and talented individuals. what if i told you that robots are up for the same kind of challenges? >> according to researchers, artificial intelligence is advancing so rapidly that in some cases you can't tell whether you're dealing with a human being or a computer. recently, programmers rocked the artificial intelligence world by creating a supercomputer that passed the Turing test. the test measures the intelligence of a machine by whether it can fool people into believing it's a human, and in this case it fooled a third of the scientists into believing they were talking to a 13-year-old boy, leading some to believe we're on a path where machines eventually achieve the
same level of critical thinking, introspection and even ethical reasoning as human beings. after you hear our guests today, you will not think this is impossible. of course, what are the long-term consequences and implications here? to help us sort it out from london is george, who is an engineer and expert in artificial intelligence. also with us is sebastian, the senior editor for the blog ExtremeTech. the phrase artificial intelligence gets thrown around a lot, with everything from Google to the iPhone's Siri. what does artificial intelligence mean in 2014? >> it means different things than it used to mean several decades ago. several decades ago, when the field started in the late '50s and '60s, it meant developing machines, computers, robots that had the potential to think and self-reflect, that had everything we
recognize as human intelligence. nowadays, because we understand more of the complexity around the human brain and human consciousness, the definition has become, if you like, more humble. it means making machines very intelligent, but not necessarily self-aware. >> what about this computer we just referenced that fooled some scientists into thinking it was human? have we broken through a barrier in artificial intelligence? >> i don't believe so, no. certainly not. this is what is called a Turing test. it was proposed several decades ago by one of the greatest pioneers, if you like, of computer artificial intelligence, Alan Turing, and no one really believes this is solving artificial intelligence. far from it. it's seen more as a p.r. exercise, but it has achieved a very important thing.
we are having a discussion about artificial intelligence, which is a serious business here between you and sebastian. i think that's very good news indeed. >> we're talking about -- >> go ahead. >> as george mentioned, that's one very small act of intelligence, if you can even call it intelligence. you know, we used to have very lofty ideas of intelligence, as george says, but now the focus is on very, very specific parts, and just tricking a few people into thinking they're talking to a boy instead of a computer is a very, very small challenge. it doesn't actually help you have rational thoughts. it doesn't help you drive a car. it doesn't help you do these useful things. it's also worth noting that in this case the judges knew they were talking to a computer, and they knew they were judging a computer. so there's a lot of bias there. it's not a huge breakthrough.
>> if it only tricks a third of the people, aren't there significant implications here for people with nefarious intentions? if a computer can trick 30% of folks into thinking it is real, that could be problematic. >> i would say yes, it's possible. i would also say that the test wasn't done very well. if it took someone off the street, sat them down and said talk to this person, and they actually believed it was a person, i would say there's some applicability here. the fact they knew they were talking to a computer makes it feel iffy. there are not a lot of research groups trying to pass the test. it's an oddity that hung on because it's attached to Alan Turing's name, so it has significance. i don't know of any large ai groups seriously trying to pass it. they're working on other things.
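the "fooled a third" figure maps onto the informal pass bar often used for these events: the program passes if at least 30% of judges label it human. a minimal sketch of that criterion, with invented judge verdicts for illustration:

```python
# Toy sketch of the pass criterion discussed above: the program
# "passes" if at least 30% of judges mistake it for a human.
# The verdict data is made up for illustration.

def passes_event(judge_verdicts, threshold=0.30):
    """Return True if the fraction of judges fooled meets the threshold."""
    fooled = sum(1 for v in judge_verdicts if v == "human")
    return fooled / len(judge_verdicts) >= threshold

# a third of 30 judges fooled, just clearing the 30% bar
verdicts = ["human"] * 10 + ["machine"] * 20
print(passes_event(verdicts))  # True
```

as the guests note, clearing this bar says little about useful intelligence; it only measures how often a small panel is fooled.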
>> sebastian, the government is investing in moral robots. they have invested $7.5 million. should robots be trusted to make moral decisions? lynn said, hell, i don't trust human beings to make moral decisions. he also asked, will they interpret commands like a lawyer would, and whose morals will they follow? religious morals? good question. should we have moral robots of warfare? you heard my geek references: Skynet, the Matrix. is it possible to make these moral robots? your thoughts. >> first of all, it's necessary to think about this possibility. i think it was in 1942 that Isaac Asimov, the science-fiction writer, defined the three laws of robotics, and that was the first effort.
it's a realization that if you have intelligent creatures, machine creatures, moving around in our environment, interacting with us, driving cars or performing operations, then those machines will have to make decisions. sometimes those decisions have to have a moral underpinning. they have to be moral decisions. just imagine a car driving down the road that, to avoid an accident, is given basically two choices: either to kill person a or kill person b. how will that machine make this decision? this is a moral decision, and perhaps a predictable one if we go down this road. these are issues we definitely need to address, and they do not concern purely the military establishment, if you like, or the battlefield. they have to do with what will happen in our everyday lives as well. >> sebastian, george raises an interesting issue here. without getting too technical, how do
you code for moral consequences? how do you code for ethics? >> so this is -- i mean, as the twitter people mentioned, the problem here is we still find it very hard to quantify what human morality or human ethics is, so the starting point for the u.s. military research is actually having to work out how a human makes a moral decision. if you are driving your own car and you have a choice of running over one of two people, which one do you choose to run over? this decision has plagued people for a long time. once you work out what human morality is, in theory you can program it into a computer. it would probably take the form of a huge number of questions, millions of questions, where you take as much data as you can. what does the person look like? every kind of thought that goes through your head to make a decision, and it would try to work out the answer. at the end of the day, the robot is
making that decision, which is very hard for us to get our heads around. >> the hard part for me to get my head around is that it's not just logic, right? feelings come into play, which are very different from actual inputs that require simple logic. >> so this is the thing. i mean, it depends on whether you believe that humans are purely the result of a bunch of chemical reactions in your head that make decisions, or whether there is some kind of other force that is helping you make those decisions. there's a big argument that all the decisions you make are just based on, you know, chemical things in your head making you answer in certain ways. in theory, we should be able to make a robot that makes exactly the same decisions. there's been research into making robots that have the same hormones in the brain, the same endocrine systems, to make a robot behave like a human.
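the "huge number of questions" approach sebastian describes can be caricatured in a few lines: score each available action against weighted harm features and pick the least harmful one. everything here, the feature names, the weights, the driving scenario, is invented for illustration; it is not the method the funded research actually uses.

```python
# Hypothetical sketch of a questionnaire-style moral decision:
# each candidate action carries estimated harm features, and the
# machine picks the action with the lowest weighted-harm score.
# Feature names and weights are invented for illustration.

def moral_choice(actions):
    """Return the action with the lowest expected-harm score."""
    weights = {"expected_deaths": 10.0, "expected_injuries": 3.0,
               "property_damage": 0.1}
    def harm(action):
        # missing features default to zero harm
        return sum(weights[k] * action.get(k, 0) for k in weights)
    return min(actions, key=harm)

# the car-crash dilemma from the discussion: two bad options
swerve_left = {"name": "swerve_left", "expected_deaths": 1}
swerve_right = {"name": "swerve_right", "expected_injuries": 2}
best = moral_choice([swerve_left, swerve_right])
print(best["name"])  # swerve_right: injuries weighted below deaths
```

the uncomfortable part, as the guests say, is that such a machine's choice is only as "moral" as the weights a human assigned to it.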
at some point you would think that they could closely mirror humans. you know, that's -- yeah. >> our community is skeptical. one comment notes that air-to-ground combat has increased casualties. forget the money. what's the human cost of killing machines? christa says these fictional creations seem to channel a lot of anxiety. sebastian, you tweeted in, i welcome our new robot overlords. you'd be a trainer. >> you'll stick around for the next segment. the pictures and text that you share online with friends may seem like a private affair, but when it comes to the nsa, they're fair game to collect and store using facial recognition software in the name of national security. up next, how the same technology is also being used by local law enforcement in catching criminals, and how stores might be the next in line, making your --
welcome back. we're discussing the latest mind-blowing advances in artificial intelligence. facial recognition software has been around for a while, but with recent advances we're seeing effective results of its use in law enforcement and other sectors. this month a chicago man became the first person to be convicted of a crime as a result of the technology. here to talk about the various ways facial recognition software is permeating our lives, on skype from san francisco, is jennifer lynch. she works on transparency and privacy issues in new technology.
from ames, iowa, brian meinke from iowa state university. thank you for joining us. brian, we just mentioned the first man to be convicted using facial recognition software, convicted of robbery. are these recognition programs fool-proof? >> they're not fool-proof, but they work well. facebook recently announced they have an app with a 98% accuracy rate. they have a lot of data, and the more data about you, particularly facial information, the more accurate the program is. >> talk about how the applications may work in the retail sphere. if i walk into a store, how am i recognized by my face, and how does that alter my shopping experience? >> right now we're on the cusp of seeing some apps being looked at by retailers. essentially, what they're experimenting with now is generally referred to as
anonymous video analytics. they try to segment people into categories, and this all stems from a desire to get you to look at things, for example, visual signs. if you put an image on a sign more likely to appeal to someone, they'll look at it. they figure out who you are, but only in a general sense. what we will see next are things more related to, for example, reward-card types of systems, where you kind of walk in and get recognized. >> throughout the show i'm the harbinger of negative consequences and potential bad news. the community asks, will ai be covered under constitutional provisions, and will we re-evaluate our meaning of life? obvious questions. miguel says if people want smarter technology, like personal assistants and other stuff, they have to give up privacy, unfortunately. jennifer, chicago currently has
23,000 surveillance cameras, and the police pay for the technology. where is the balance here between our privacy and security in this brave new world? >> well, i think that's something we really have to talk about as a society. now is the time to put privacy protections in place. right now we're not at the point where facial recognition can automatically identify any face in a crowd, but we will be getting there soon, and as the government builds out databases of millions of images of people, it's something we really need to be worried about for the future. >> jennifer, the nsa is reportedly accessing images available on social media to use for law enforcement. they're publicly available images. do you have concerns about the nsa doing this? >> well, this is something that came out in a "new york times" article a couple of weeks ago about how the nsa is collecting millions of images every day and employing facial recognition
technology to learn who people are in the images. i think what the nsa is doing is combining that facial recognition data with other biographic information and information from social media that explains who people are associated with, and using that information to identify people and create a bigger picture of who they are. >> some more bad news here. we asked the community: what are the drawbacks of using computer software to convict people of crimes? scott says it's dangerous in and of itself to rely on technologies, despite improving algorithms. juries should not predicate their decisions on them, and i'd argue it's a dangerous precedent to do so. sebastian, our resident geek here, what is your feedback to scott's comment? are you scared of the precedent this technology will be setting? >> this is predicated on the idea that computers are better than humans.
i mean, as has been mentioned, facebook has an algorithm better than humans at recognizing faces. if you have seen someone before and the computer has seen him before, and you see that person again in a crowd or a random shop, the computer is better than a human at recognizing that person. so, again, this is an inherent distrust of computers and robots, but, you know, surely computers aren't biased. they don't have prejudices. they don't have all sorts of stuff like that. you could also say having a computer make those decisions is possibly quite a good idea. it's not like the computer is going to make the decisions on its own. a human will check it over time. >> whether we're talking about something like a jury in a court system, eyewitness testimony is not that reliable. you have to think that a computer software program would
be more reliable than eyewitness testimony. >> one would hope so. >> jennifer, where is the line, though? where is the line where you say, that's far enough? that's as much as law enforcement can use this kind of technology, and it has to stop here? where is "here"? >> i wanted to get back to an earlier point about bias and technology, and this idea that computers are not biased. now, of course, the only way that computers get information is if a human enters that information. a lot of times the information that's input into a database, for criminal databases, is based on biased policing, and so, you know, there's that saying: garbage in, garbage out. if you're entering images into a database that are based on racial profiling, then that's your pool of people who you're trying to identify. one thing that we learned from documents we received from the fbi about the fbi's massive next generation identification facial recognition database is that the system isn't all that accurate
in actual fact. the fbi only guarantees accuracy 85% of the time, and that 85% is dependent on the actual candidate being somewhere in the search results that are provided, and that's the top 50 search results. so that means 49 people may be misidentified and become suspects for a crime. >> mills on twitter shares her concerns and says, just don't program it to carry human bias, and don't target based on zip codes. that would be a good start. chief justice on twitter says, don't forget the system can be hacked. a lot of implications, really. >> indeed. thanks to our guests jennifer lynch and brian. still ahead, can computers read our minds? how facial recognition software may even know what you're feeling by understanding your expressions. here's the catch: it's on the brink of becoming easy for anyone to use with the click of a button. you don't want to miss this.
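jennifer lynch's arithmetic can be made concrete: an 85% "accuracy" guarantee only means the true match appears somewhere in a 50-candidate list, so even successful searches surface dozens of innocent people. the sketch below just multiplies out the figures she cites; the function itself is purely illustrative.

```python
# Rough arithmetic on a top-50 candidate list with an 85% hit rate,
# following the FBI figures quoted above.

def candidate_list_stats(hit_rate=0.85, list_size=50, searches=1000):
    hits = round(searches * hit_rate)   # searches where the true match is in the list
    misses = searches - hits            # searches where it is absent entirely
    # every returned candidate except (at most) one true match is a misidentification
    wrong = hits * (list_size - 1) + misses * list_size
    return {"hits": hits, "misses": misses, "wrong_candidates": wrong}

stats = candidate_list_stats()
print(stats)  # per 1,000 searches: 850 hits, 150 misses, 49,150 wrong candidates
```

in other words, the system's "success" metric is compatible with tens of thousands of innocent faces being flagged per thousand searches.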
what if your devices could read your emotions and respond to them? researchers are trying to do just that. it would enable a revolution in device and application personalization. >> welcome back. researchers at ohio state university say they have found a way for computers to decipher 21 distinct facial expressions, including complex emotions like happily disgusted, and software like what you just saw is getting ready to make this kind of technology available to the wider public. how will this change the way we interact, and how will it impact our freedom of expression? joining us on skype out of new york is a cognitive scientist and a lead researcher on the ohio state study i mentioned. just reading about it makes me
feel like someone is invading the space in my head. i'm not sure i want perfect strangers to know my mood. what's the upside of this application? >> there are many upsides. understanding how facial expressions have emerged and how they work is essential for understanding psychological disorders that people might have, and there are many dozens of psychological disorders that have been defined in the literature, and it is very difficult right now to understand the difference between them. we thought we only had six basic emotions that we could all express in the same way: happiness, surprise, anger, disgust, fear and sadness. what happens is it's extremely difficult to differentiate between dozens and dozens of different emotional states.
with these 21 expressions, the hope is we can differentiate between many disorders. so we'll be able to diagnose, to begin with, which is a very difficult task right now for psychiatrists and the medical establishment. it is also going to allow us to better understand what the difference is between the disorders, and hopefully, down the line, find ways to help people become more adaptive to our society and interact with the rest of us. >> we all have friends with a high degree of emotional intelligence. you think you look completely normal, and they're like, what happened? are the computers able to pick up the same kinds of nuances that an emotionally intelligent human being would pick up on? >> the algorithms we have right now are as good as the average human is. so maybe a little better than
the average human. so they're not extremely, extremely good yet at the small changes you were mentioning, but we're getting there. the hope is, yes, in a few years we can detect the small differences. >> all right. amy on twitter, a loyal streamer, shares her concerns: i know this is going to happen. i just can't stand the creeping lack of privacy. if i'm on social media, it's voluntary, but all of this, it isn't. we asked our community about the latest technology. a lie detector app can spot fake emotions. do we have a right to lie? >> we should ask how accurate it is. >> on facebook, james says this kind of sucks. everybody puts up the quote-unquote friendly face when they see somebody they know. i'd like an app to recall what their name is.
white lies. my social interactions are fueled by white lies. when i say i'll see you for lunch, maybe i never want to see that person for lunch. do we have a right to those white lies, and does this technology deprive us of that? how does it impact social interactions? >> the important thing to note here is computers are getting very, very good at this. the way it works is it looks at your face, probably in very, very slow motion, looks for small expressions, and then analyzes those movements. take happily disgusted: we already know these emotions have a human element. when we see someone who is happily disgusted, we know what a person looks like when they're happily disgusted. when you try to tell a white lie, the computer probably has as good a chance as a human of telling that you're telling a white lie. we all know that when someone's eyes don't crinkle up, they're not really smiling.
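the eye-crinkle cue just mentioned is the classic "Duchenne" marker: a felt smile recruits the muscle around the eyes (action unit 6 in the facial action coding system) along with the lip corners (action unit 12). here is a toy rule over hand-fed AU intensities, not a real vision pipeline; the threshold and values are invented for illustration.

```python
# Toy version of the eye-crinkle cue: a genuine ("Duchenne") smile
# activates the eyes as well as the mouth. Real systems estimate
# action-unit (AU) intensities from video; here they are hand-fed.

def smile_is_genuine(au_intensities, threshold=0.5):
    """AU12 = lip corner puller (the smile), AU6 = cheek raiser (eye crinkle)."""
    smiling = au_intensities.get("AU12", 0.0) > threshold
    eyes_crinkled = au_intensities.get("AU6", 0.0) > threshold
    return smiling and eyes_crinkled

print(smile_is_genuine({"AU12": 0.9, "AU6": 0.8}))  # True: mouth and eyes agree
print(smile_is_genuine({"AU12": 0.9, "AU6": 0.1}))  # False: polite smile, eyes flat
```

a real lie-detection system would combine many such micro-expression cues over time, but the principle is the same: the face leaks signals that are hard to fake consciously.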
there are things that are telltale signs. there's a chance a computer can tell better than a human. that's the risk. like a human, you might have seen stage magicians that read your mind by seeing the movements in your lips and that kind of thing. as computers get good enough, they may enable you to point your smartphone at someone and read their mind by looking for this little movement in their lips or whatever. that's probably, you know, a year or two away. there are pretty cool applications there, for sure. >> on that note, we're going to end it. only a year or two away. that's not that comforting, to be honest. >> i will wear makeup for the rest of my life. >> thanks to all of our guests. until next time, we will see you online.