
Amir Husain, The Sentient Machine - CSPAN, January 27, 2018, 8:01am-9:31am EST

8:01 am
victims of america's culture, former president vicente fox, and the leadership of the soviet union in the years leading up to world war ii. that's just a handful of the programs airing on book tv. for a complete schedule go to booktv.org. >> all right. hello, everybody. thank you all for being here today at the book company. we have an interesting night ahead of us, joined by amir
8:02 am
husain. amir explores artificial intelligence from many uncommon angles, part reflection on the existential questions it raises, part survey of its impact on fields from the military to health care and so on as it draws nearer and nearer. he's the founder and ceo of spark cognition, an award-winning company which provides ai-powered solutions for cybersecurity and beyond. he's also a founding member of ibm's advisory board, amir has presented at south by southwest as well as numerous other conferences, and his work has
8:03 am
appeared in many publications including the new york times and wired, and this is his first appearance on book tv. thank you, book tv, for broadcasting the event. feel free to step up to the microphone to your left over here if you have a question in mind. without further ado, please join me in welcoming amir husain. [applause] >> thank you very much for that generous introduction and also for your kindness in hosting me at this lovely bookstore. i came in here and was really enjoying the ambiance; it's a wonderful place to be. what i will start by doing is reading a passage from the book
8:04 am
and then perhaps we can start to explore some of the themes that the book covers. the book goes through its sections by asking some of the existential questions: what ai means for us, and then we talk about some of the fears around ai, one being that ai might kill us, another that if it doesn't kill us, it will take our jobs and lead to mass unemployment and instability. we talk about these things and we try to quantify whether they will only happen at a certain level of artificial intelligence, with agi or otherwise, and whether there is something to, you know, pushing for a ban, whether there is value in pushing for a ban. those sorts of discussions. and the book really separates that
8:05 am
out into multiple areas: what we call hyperwar, the application of ai on the battlefield; health care; the future of spaces and architecture, where intelligence is embedded within buildings; and also mind hacking, which is an ai-powered campaign used to transform and shape thinking and will in a society, for example, potentially to hijack an election. a lot of that is being talked about with the last election, and we will hopefully find out shortly how much of a role some of these technologies played. so, broad impact across all these different areas. i will start perhaps with a reading, and we will get into a few of the chapters of the book, but beyond the book what i want to focus on are some other slides
8:06 am
and ideas that build on what's in the book. the book you can read, but i wanted to take our time today to bring a little bit extra, some ideas that perhaps enrich or expand or otherwise elaborate on what's in the book, and then we will take some time for questions. okay. so for those of you who do have the book, i'm going to read just five minutes' worth of text here from a section that's called decoupling work and purpose, which is on page 159. decoupling work and purpose: what differentiates humans from apes? in his book sapiens, the historian yuval noah harari argues that one of the reasons we are singular and different is that we can tell collective lies. other apes couldn't do this. believing in these collective
8:07 am
fictions allowed us to create forms of mass cooperation, organized religions, tribal affiliations and trade, that became larger than what any other animal or organism could sustain. the combined power of cooperation through fictions provided a means of sustaining and perpetuating our interests and form of life, and it made us dominant over individual organisms that might otherwise have had more power. so what is the essence of this humanity? as we discussed in part one, our current debates over the future of artificial intelligence tend to get stuck on either the loss of our jobs or a fear for our mortality. today, our sense of identity is so tied up in our ability to
8:08 am
produce economic output that we still call ourselves by the last names of our productivity: goldsmith, farmer, and miller. but these identities are not fundamentally human; they evolved over time. when homo sapiens appeared as a species almost 200,000 years ago, we appeared and over time evolved into larger and larger groups, bonded together through religion and affiliations, until we created an organized macro-organism: the human race. when we didn't have any other mechanized devices to perform labor, we enlisted the force of our own people, and we organized them in ways that we now describe as subhuman. the value did not exist in any one individual or another pushing
8:09 am
the blocks; it was contained in the organizational process that transformed people into cogs in the machine. humankind created pyramids, temples, city-states and ultimately entire empires. in the modern era, the age of capitalism, the systematic structure is no different. in the framework of capitalism, most humans perform specific and repeatable tasks, and these culminate in one global macro-process in which the vast majority of humans are cogs. today's fiction, the prevailing cultural belief system of global
8:10 am
capitalism, exhorts us to take pride in this work, whether it's to wake up at 5:00 a.m. and tend the fields or to get to the office at 9:00 a.m. and pull up spreadsheets on a laptop. our faith in the fiction has gotten the better of us. modern society is now contending with the same mythology of capital, as the system continues to progress in a process where the numbers at the top, the top 1%, become smaller and smaller, until it is the top .1% and then the top .01%. in 2016, oxfam reported that the world's 62 richest billionaires had as much wealth as 3.6 billion people, or the bottom 50% of the world's population. in 2017 that number had dropped to the world's eight richest billionaires.
8:11 am
fewer than one dozen people have more wealth than half of the world's poorest population in total. the same report assessed that the annual income of the poorest had increased by less than a single cent every year over the last quarter century. so we see different cultures attempting to adjust their own storytelling in response to the global system. finland is currently experimenting with universal basic income and switzerland is in the midst of considering it. like everything else in culture, political ideas, movements, food choices, the myth that our worth is tied to our productivity is a fluid one.
8:12 am
it exists in the economic context, and this too will change as the planet's population makes such notions completely unsustainable. all of these ideas are in accordance with the times in which we live, and the myth we tell ourselves, that our worth comes from our ability to create value, is no different. whether we are farmers, marketing directors, truck drivers or commodities traders, in the near or far future our work will be completed by some form of artificial intelligence. for our purposes here, i invite all of us to use our cooperative skills to create a new fiction together. imagine decoupling existence from the notions of more conventional employment.
8:13 am
in the real world, this involves policies, politicians and leaders to be successfully achieved, but as a thought experiment, imagine that our social system has embraced the decoupling. this allows us to move beyond the feelings of alarm and fear that arise with the increasing powers of artificial intelligence, the fears that trigger our amygdala. how would we live? i do not wish to denigrate the real-world concerns regarding the rise of ai in our world, but i contend that this is the least interesting place for our discussion to end. since the origin of our species, human values, truths and traits have all changed. there is no such thing as a fixed-state human being.
8:14 am
6 million years ago, when our ancestors first roamed the planet, we were not like what we are today, and even 10,000 years from now we will not maintain our present state. our humanity is in evolution; it is essential to our being. human values and fundamental traits are in permanent flux. just as we evolved from something different, we will evolve into something different, something unrecognizable. we must accept this as a fact of our existence, and from this acceptance we can then identify our greatest purpose.
8:15 am
the start of many sections has a different tone; this one is more philosophical and ends with the question: let us take for granted that machines will perform the labors, the economic labors that we claim today and feel so proud of and even associate with our identity. does that capability, the capability to do these labors in a better way, make us less human? the high quality of execution of what we call labor, was that really what defined us? of course, for me the answer is no. the book starts to explore what is fundamentally human. for example, we get into things like when people say, look, this was very
8:16 am
humane behavior. what does that mean? it means that there was an element of charity in the behavior, an element of love in the behavior: two participants in a relational exchange where something was given from one to another, an act of charity. we find that if you start to think about things in a purely philosophical way, the exchange, the idea of that exchange, exists even if the people don't exist. that exchange as a pattern can be seen in many other places. so what is fundamentally human is a very difficult question, and it's something that we will discover, of course, but one thing that makes human beings very unique is that they are the only form
8:17 am
of creation that is able to perceive the unlimited universe of all possible ideas, and in that perception there is value. however, our society is structured in a way that doesn't allow that value to be recognized in any tangible terms, to give credit or to help with the day-to-day needs of an individual who is surfing and uncovering that idea landscape. those are some of the questions that i don't want to get into any further here; these are the types of things that we talk about in the book. but i also explain what genetic algorithms are, what supervised learning and unsupervised learning are, and what the latest state of the art is in many industries, in autonomous weapons and health
8:18 am
care advancements. any questions or any thoughts that anybody wants to bring up? okay. so this is sort of the bonus material, and obviously, you know, it's my very strong view that a new form of intelligence is coming. i can tell you why i think this is a new form of intelligence, why it is fundamentally different from how we think, and i will explain that to you in detail. but where we are now is close enough that the question doesn't need to be, hey, when can you build data from star trek, the general-purpose intelligence that can go solve all types of problems. these ai systems already have capabilities in areas that are meaningful; the most obvious
8:19 am
example is driving a car, but also maintaining a warehouse, flying a fighter plane, using radar to more accurately determine where enemy aircraft are. are we able to identify packages that go into the hold of a plane or the hold of a large truck more accurately than ten people running around trying to find the packages? there are tasks that these systems can begin to automate in very viable and competent ways, and those niche areas represent livelihoods for human beings. it's not a question of when we will get commander data; it's a question of when we will get to 25 or 30 or 40% unemployment. the reason i want to raise the question is that whenever it is raised in the context of ai, you suddenly are slapped down with some sort
8:20 am
of a random platitude: don't worry, technology always progresses, and of course in the past it progressed, and at that time, you know, we had the industrial revolution, and so other jobs came up, and there will be other jobs in the future that you don't even know about. maybe. but the way that i look at it, there are only two meaningful things that the human race has done or attempted to do so far in its history. one is the replication of physical muscle, in the late 1600s with the steam engine, where we were able to create a muscle that could move things that no animal or naturally occurring muscle could move. we built trains, networks of trains, we built cars and we built steamboats and warships
8:21 am
powered by all of this. and now we have come to the cusp of a period of time where we have a pretty good chance of being able to replicate the mental muscle. if you think about a human being, there's muscle and then there's mind. with these two things replicated, it's very, very difficult to argue that there will be jobs in sufficient quantity that require neither the better muscle nor the better mind. so what does that mean? it means that the ai debate is almost entirely absent of policy, and this is where i want to get to. so what qualifies me to talk about this? why did i write the book? i have been passionate about this space for a long time, but what qualifies me to talk about these
8:22 am
things is that i'm the founder of spark cognition, one of the fastest-growing companies in austin and the u.s.; i'm involved with a computer science department which was named the number one computer science department by u.s. news and world report; and i also serve at a think tank in d.c. as a member of its ai committee. so i bring those three elements together. i do business every day; i work with the largest customers in the world, from boeing to raytheon to lockheed, and all of the significant companies, on a daily basis. i'm also involved with moving the science of ai forward, and i care deeply about the implications that all of this work has in the area of policy.
8:23 am
i go around meeting with the nato leadership, with nato generals, explaining some of the concepts which will need to be integrated, and it looks like they will be, into the nato strategy going forward. i've spent a lot of time at the dod trying to create this impetus, really, to think about ai in a very different way, not just as yet another technology, and i think that's working. so policy, science, and the practical element of the business all coming together, that's what i've been doing, and i feel that that's a good combination. so just very quickly, many of you may be wondering: we have been saying machines think, but what does that even mean, how do machines think? machines can think in various different ways, and this slide is not designed to explain to you
8:24 am
all the various ways in which machines can think, but i wanted to show you something that's pretty simplistic, that one can follow along with in graphical terms. one way that machines can think is that, given a very simple set of rules, they can apply those rules to create large graphs, and the graphs represent all of the states that could exist in the world that the algorithm is modeling. let's take tic-tac-toe. in the world of tic-tac-toe you can have a machine generate all possible states, and a machine can do that very quickly. then you can make a move, and at that point the machine already knows, for that one move that you've made, where it falls in the tree, whether there is a connection between there and a state in which the machine wins, and if so, the next move is
8:25 am
the logical step for the machine to take. this comes from a different kind of thinking: it can precompute the world, and the rest is just search. it already knows which of the paths it has the option to go down, and it'll go down that path. now, the tic-tac-toe example is not very useful by itself; it doesn't need to handle much, and it's not very, you know, broad.
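a minimal sketch of the enumerate-and-search idea described above, assuming a plain negamax walk over the tic-tac-toe tree; the board encoding and function names are illustrative, not taken from the book or the slides:

```python
# exhaustive game-tree search for tic-tac-toe: expand every reachable state,
# then pick the move whose subtree guarantees the best outcome.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

def best_value(board, player):
    """+1 if `player` (to move) can force a win, 0 for a draw, -1 for a loss."""
    w = winner(board)
    if w is not None:
        return 1 if w == player else -1
    if " " not in board:
        return 0                       # tree exhausted with no line completed: draw
    other = "o" if player == "x" else "x"
    return max(-best_value(board[:i] + player + board[i + 1:], other)
               for i, cell in enumerate(board) if cell == " ")

def best_move(board, player):
    other = "o" if player == "x" else "x"
    moves = [i for i, c in enumerate(board) if c == " "]
    return max(moves, key=lambda i: -best_value(board[:i] + player + board[i + 1:], other))

print(best_move("xx oo    ", "x"))     # -> 2: completing the top row wins immediately
```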
8:26 am
reinforcement learning has been able to play most atari games, not just pac-man, and in most cases the ai just wins, with no competition. the way it's done that is interesting: in pac-man there are additional choices and options, where exactly is ms. pac-man, how many gold nuggets are left, where are the enemies coming from, and so on and so forth, a huge number of cases which are not as easily modellable as tic-tac-toe. what reinforcement learning does is say: i'm going to start knowing nothing. i'm only going to look at one factor, which is how far i've gotten, and in games i'm going to use
8:27 am
the score as a proxy for how far i got. and i'm going to play this darn thing at machine speed, and i'm going to keep going on and on and on and on, and every time i play and fail, i will remember what my last score was. i will take my score, say i got 100, i played random nonsense moves and i got to 100. the moves were up, up, left, left, right, left, left, and i died, right? so it looks at that and says, okay, well, i started with completely random moves; i might make somewhat less random moves now, because the value of the very first move i made was 100. the very first move got me to a hundred. the value of the second move is however many points the first move gave me subtracted out; what's left, that's its value. what does that tell you? as you come closer and closer to your
8:28 am
death, the values of the moves close to your death are by definition pretty low. at that point, heck, i don't have much value here, let me start trying different things. and in this way, in a self-directed way, running very, very rapidly at machine speed, it's able to train itself; it goes through these reinforcement learning cycles to rapidly train itself. this is the kind of methodology being used for self-driving cars, the kind of methodology used for the game players, the kind of methodology that was used for alphago, which defeated the number one go player. this is a mechanism that's showing a lot of progress, and it gets around the problem of having to provide these algorithms with all the data, right, because they can generate a lot of their own data.
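a toy version of the trial-and-error loop just described, using tabular q-learning on a made-up six-cell corridor; the environment, the constants and the names are assumptions for the sketch, standing in for the atari and alphago systems he mentions:

```python
# play at machine speed, use the score as the only signal, and push credit for
# the eventual reward back toward the moves that led to it.
import random
from collections import defaultdict

N_CELLS, GOAL = 6, 5              # walk right from cell 0 to cell 5 to "score"
ACTIONS = (-1, +1)                # left, right
q = defaultdict(float)            # q[(state, action)] = learned value of that move
alpha, gamma, epsilon = 0.5, 0.9, 0.2

for episode in range(2000):
    state = 0
    for _ in range(50):
        if random.random() < epsilon:                 # occasional "random nonsense moves"
            action = random.choice(ACTIONS)
        else:                                         # otherwise exploit what we know
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        nxt = min(max(state + action, 0), N_CELLS - 1)
        reward = 1.0 if nxt == GOAL else 0.0
        best_next = max(q[(nxt, a)] for a in ACTIONS)
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = nxt
        if state == GOAL:
            break

print({s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(GOAL)})
# after training, every cell's preferred action is +1: head right toward the goal
```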
8:29 am
go back to the noughts and crosses, the tic-tac-toe tree we were talking about. you can see that the only three things the algorithm knows are these: for every successive generation you can only add one symbol at a time; the win is when you have a line, diagonal, horizontal or vertical; and when you go from one row of the tree to the next you have to alternate symbols, first a cross and then a nought. those are the only rules it needs to know to be the world's best player. i deliberately came back here because
8:30 am
i want to pivot to something. think about it this way: three basic rules, and that's it, plus computation. three basic rules multiplied by computation create this large tree. if i were to draw those three bullets inside the graphical outline of a seed, you would see that that seed plus some computation gave you a fully developed strategy. now, that actually turns out to be a very powerful concept, because it turns out that in the universe, in reality, much of what we see works the same way. in fact, a tree, a physical tree, is encoded for the most part in a seed, and there are processes that then run on the information that's contained in the seed, and resources that are extracted from the outside, and
8:31 am
those processes ultimately create a tree. and this happens in mathematics and in computer science as well. this is something that's called the game of life; how many of you are familiar with the game of life? in the game of life there are incredibly simple rules. for example, if you're a cell which is colored in and you have exactly two or three neighbors, then you live. if you have more than three neighbors, you die of overpopulation, okay, and if you have fewer than two neighbors, you die of underpopulation. and if you have, i think, what is it, exactly three neighbors and you're a dead cell, then you come back to life. that's it.
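those rules fit in a few lines; a minimal sketch of one generation of conway's game of life over a set of live-cell coordinates, with a glider as the test pattern:

```python
from collections import Counter

def step(live):
    """one generation; `live` is a set of (x, y) coordinates of live cells."""
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # birth on exactly 3 neighbours; survival on 2 or 3; everything else dies
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

# a glider: five live cells that re-form one step diagonally every 4 generations
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):
    glider = step(glider)
print(sorted(glider))    # same shape as the start, shifted by (+1, +1)
```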
8:32 am
okay, four rules. any child can color these in based on those rules. so what you see happening here is a complex ecosystem, just based on those four rules and some random starting patterns. there are creatures that get created. there are things called gliders, which glide; they look the same as they glide across this surface. there are things called spinners that alternate between a star shape and then go back to being a circle. this guy over here, you see this? so there is a notion of self-stability in these structures, but constant motion. then you also see parts of this collide with each other, and when that collision happens it's very hard to figure out what's going to happen. sometimes you get a bigger
8:33 am
structure, sometimes you get many structures, and sometimes, from a mass that didn't look anything like a glider, you can get 18 gliders running out of the big mass. okay. what this tells you is that really, really simple rules and really, really simple computation can be the source of incredible amounts of complexity, and this is actually not that complex. if you pick up the book by stephen wolfram, a new kind of science, he belabors this point to a great degree and gives examples of many, many kinds of such systems, where he shows absolutely beautiful nonrepeating patterns starting from a seed which is so simple. the other thing about them is that they are really not predictable. so, i mean, with something like this, it would be incredibly difficult to run it for a few
8:34 am
million generations and figure out whether this cell would be blue by then. there are so many, many interactions happening. now, these aren't the only types of structures that we can extract, that we can take from something very simple and make into something very complex. what you are seeing here, we are diving into, this is a structure where a mathematician is responsible for the tiny equation that generates it, a massive structure. the structure has been generated in a computer to where the dimensions of the structure are now larger than the known universe. the complexity in the structure is no less than what you would see in a picture of a galaxy. think about that. two things: you can take something really, really simple and apply computation and make
8:35 am
it into an emergent system where things start to come alive, they have behavior, and they start interacting with each other. and then, from a tiny little equation, with the power of recursion and iteration, you can get to a point where you can create a virtual space, which by the way is beautiful, a virtual space that's practically infinite; even now, we've generated structures larger than the known universe. you, me, us, we all could spend our entire lifetimes just traversing this entity and we would never, ever get done. we would see an infinitesimal part of it. it doesn't exist anywhere in real life, but then what is real life? so why do i bring this up in
8:36 am
the context of artificial intelligence? because it's important to figure out where a lot of this magic is coming from. a lot of times i hear people say, well, come on now, 27 lines of code and it's doing something intelligent, how the heck is that possible? maybe it's because those 27 lines of code are doing things the way that structure did: taking an equation this small and expanding it out into a completely unique, beautiful structure that is infinitely larger than the known universe. size doesn't matter in these sorts of situations, and in particular you can actually encapsulate a lot of intelligence in very, very little code, because code is something that doesn't just run once; code is something that can continuously improve, and that's where iteration and all of these things make it so powerful.
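as one concrete instance of a tiny equation expanded by iteration into endless structure, here is the standard escape-time loop behind mandelbrot-style fractals; the transcript doesn't identify the exact structure on the slide, so treat this as a generic sketch rather than a reproduction of it:

```python
# the whole structure comes from iterating z -> z*z + c and asking whether z escapes
def escape_time(c, max_iter=100):
    z = 0j
    for n in range(max_iter):
        z = z * z + c            # the entire "seed" of the structure is this line
        if abs(z) > 2.0:
            return n             # escaped: c lies outside the set
    return max_iter              # never escaped: c is (probably) inside

# coarse ascii render; zooming in on any edge region keeps revealing new detail
for row in range(24):
    y = 1.2 - row * 0.1
    print("".join("#" if escape_time(complex(-2.2 + col * 0.05, y)) == 100 else "."
                  for col in range(64)))
```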
8:37 am
so where we are now is that we can easily generate landscapes, simply by applying computation on a generational basis, using algorithms. let's do a quick thought experiment. say we take a grid, graph paper, and somebody says, make a three-dimensional surface that looks like it has mountains on it, but do it one bar at a time; maybe they have given you straws that sit exactly on each grid square and you can cut the straws to different lengths. so you've got to make, you know, a 3d surface; how would you do it? one way would be to just sit in one place, cut the straws at random lengths and randomly put them on the grid. what do you get? you get a structure, but something that you wouldn't really find in nature, which is
8:38 am
also a very interesting thing in itself. in nature, you generally don't find hills that go up, down, up, down, up, down, like the bitcoin price curve. there's a very specific normalization at work. there's a class of algorithms called noise generation, and what this does is take the average of what's around you, and just by doing that, it's simple, just by doing that it can create structures that look very lifelike. and we can make as many as we like: we can create a billion planets and drop mountain ranges of all types on these billion planets, and we can go explore all of them.
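"noise generation" here most likely refers to something like value or perlin noise or midpoint displacement; this sketch only shows the "take the average of what's around you" step he describes, turning random straw heights into hill-like terrain, with illustrative parameters:

```python
import random

def rough_terrain(n, seed=0):
    random.seed(seed)
    return [random.uniform(0.0, 1.0) for _ in range(n)]       # jagged, bitcoin-chart spikes

def smooth(heights, passes=3):
    h = heights[:]
    for _ in range(passes):
        # replace each height with the average of itself and its neighbours
        h = [(h[max(i - 1, 0)] + h[i] + h[min(i + 1, len(h) - 1)]) / 3.0
             for i in range(len(h))]
    return h                                                   # rolling, natural-looking hills

print(" ".join(f"{v:.2f}" for v in smooth(rough_terrain(40))))
```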
8:39 am
so now, beyond that, i've given you some examples of how machines do things differently: they generate a lot and they choose a little bit. they generate a lot of ideas and they choose a few. now why is this the case? because the mind of a machine is fundamentally different from the mind of man. first of all, we are constrained by the ability to consume no more than 20 watts; our brain cannot consume more than 20 watts. we do a heck of a lot with 20 watts, but as machines become faster and faster, matching that with 20 watts would be hard. and so, because the brain has only 20 watts of power to sustain itself, what it has become amazingly
8:40 am
good at is pruning: what we do is ignore stuff, an expert laziness which allows us to be incredibly effective at the things we need to do in order to survive. it's interesting: in the world we have now, we still have to survive, but we've progressed past those levels of the hierarchy of needs to a point where we need to solve problems that, yes, don't have anything to do with evolution but are important to us, because they can yield benefits, new concepts and new social and economic systems. but we weren't evolved to do that. we weren't evolved to think
8:41 am
about string theory and 11 dimensions. none of us can see 11 dimensions. the only people who come close are the mathematicians who can see them through the math, but because we've never had the experience, we can only describe them in loose ways that you can kind of communicate, but communicating that verbal description will never do it for the other person. machines are different. first, perfect recall: machines can see everything and remember it forever, and the importance of this is that as they get better, as they get more mature, the original memory is preserved, and they can go back and reapply their advanced intelligence to the perfectly preserved original memory and come up with a different conclusion. we can't do that. second, it is disembodied: machine intelligence can be in
8:42 am
11 different places at one point in time. machine intelligence has no desire, or no overwhelming desire, to protect a body; it doesn't make any sense. we are all about protecting our body, because our brain hasn't existed outside of the body, to the best of our knowledge, and the only way to keep it alive is to keep the body going too. for a machine there's no such issue; in fact, it can exist in multiple places. then there's the issue of physical size: there is no physical size limitation. you know, if a guy could be 50 times as smart by having a brain that was three times as big, society wouldn't really be accepting of that; the guy would look weird, you know. but with these systems there's no such limitation. and i make a joke about the guy looking weird, but biologically there are lots of challenges to such a being ever being born.
8:43 am
then, faster processing: brains compute at the level they do, and computers are getting there; they are expensive, big, they produce a lot of heat, and it's not the same amount of return, if you will, on a per-wattage basis, but computers are getting there. so these are fundamentally different ways of thinking. now just a couple more slides on what this is doing. software, ai software: how does software eat the world? this is what an engine used to look like before tesla. it used to have valves and a block and a carburetor.
8:44 am
that's what a motor looks like now; everything else is software. the carburetor controlled the fuel that went into the engine to give it power and increase rpm and so on and so forth; now that's all just a solid-state regulator on a board. so all of this has moved to software; the physical parts have been eaten up by software, and this is happening all over the world. so what does that mean? what that means, if you think about where the jobs are going, and i apologize that part of this slide is not very readable, but it's basically from a study of several ai experts that suggests when a specific job will be doable by artificial intelligence. so, for example, in five years ai will be able to assemble any lego, and when we say any lego, really,
8:45 am
we mean assemble anything that's lego-like. ai will be able to do your laundry, that's one of my personal favorites, i'm looking forward to that, in the next five years. in the next ten years, drive commercial trucks; tesla has already got the prototype out, so i'll guess at this stage that that's not too far out, maybe 10, maybe 12 years. assistive technology is another area where ai is making very rapid gains. so when i look at this chart, and then people tell me, some people in policy, some people in politics, that there's nothing to worry about, that this happened when the industrial age came about and people established large automated factories. you know, sure, we had people move from the farms and they got
8:46 am
jobs there at the factories. but when mind is replicated and muscle is replicated, what the hell is left for us? and so let's leverage that, not to say, well, stop the robots; i'm not arguing for a movement to stop them, i'm arguing for the exact opposite, which is to say, if that's happening, good for all of us, let's go and figure out what the right set of policies is that can allow us to live in that world properly. excuse me. and this is not just about china but about many parts of the world. when you make it very difficult for people to make a living, you're asking for instability. i
8:47 am
earlier cited a number: the current mechanism of techno-capitalism and finance capitalism has gotten to the point where eight people in the united states control wealth equivalent to that of the bottom 50%, and at the same time we are automating not the jobs of those eight but the jobs of the bottom 50. there is no social structure being put into place now, there is no renewed social contract that can take care of these people, and that's something that can cause catastrophe. and when that catastrophe, god forbid, comes, it will not be a catastrophe caused by ai; it will be a catastrophe caused by leadership. and that's something that many of us need to start talking about now, because the
8:48 am
reflexive response to a lot of these things is, how about we ban this, how about we limit this? just like in the bush era we had curbs on stem cell research. the next year china established the world's largest genome sequencing facility, and this year, i believe, three trials in live humans are being run in china. we are not anywhere close. we are not the leader in genetics anymore. so with that, let me just leave you with one question and a visual. what the hell do we do if we are going to build a better mind and if we are going to build a better body? if not now, then in ten years, 15 years, 20 years, it'll happen at some stage. so what do we do?
8:49 am
so, this idea that i have of an infinite landscape of ideas: think about it as the plane upon which all ideas that are discoverable, all concepts, all ideas rest, and think about our journeys in life not as journeys in this three-dimensional space, these linear spaces, coming into contact with people and opening the door and going through it and crossing the street, but as a timeline along which we acquire a view of more and more of this idea-scape; we grow. and if you think about it this way, then you see a man and a robot, and you see that the man's bubble, his circle, is tiny, and the
8:50 am
artificial intelligence's circle is growing at a much faster rate. but you know what, you keep zooming out, and you discover that this landscape is infinite. look, there's another computer way in the distance doing its calculations somewhere else in idea space. so what i realized, and the way i put my angst to rest, or at least the way it subsided for me, was that when the race is infinite, speed is irrelevant. it's only perspective that's relevant; it's only where i am in the idea space and what i take away from it that's relevant. the supercomputer, the ai, discovering ideas a million times faster than me on an infinite landscape: what percentage of the infinite
8:51 am
landscape does a million times whatever i have done represent? zero percent. and what does my exploration represent? zero percent. so in that sense, there's a tremendous amount of value in any exploration. and how we incorporate these ideas into society, i have some thoughts on that, but i think the session has gone long, so let me pause here and maybe move to questions and see if there are some elements that you would like to discuss. >> you said there are no limits. >> in physical terms there are limits, because, look, to the extent that the universe isn't infinite, which we don't know, maybe, there's a notion of a
8:52 am
substance where you can convert matter into the most optimal configuration allowing it to carry out the largest number of compute operations. and that, that hypothetical perfect computer, if you could convert everything into it, you'd have a very large computer. >> how large is it? you said you would have a large computer, so how large? >> in terms of what, ram and hard disk size? >> that would be a tough calculation. i mean, it would be powers that would essentially be unfathomable. if you think about just the one human brain, which is not an optimal computer, we have on the order of 10
8:53 am
to the 14 connections and something like 10 to the 11 or 10 to the 12 neurons, and that's an imperfect brain. if you can imagine, even at that efficiency, if we converted the known mass of the entire universe into brains, what would that look like? i mean, it would be the kind of thing that, for all practical purposes, while it would still be a number and technically you couldn't call it infinite, would be just the most massive number, inconceivable. but i'm more curious as to why you're asking the question; you want to know the absolute limit of computation? >> seems like it will eat up the universe that we are in. >> right, so if we wanted to
8:54 am
transform everything, it would use matter from the universe to do that. there are many scenarios that people have thought of where this agi escapes, and it starts to confound people and goes around them, and then it starts to consume matter and make more and more compute systems. of course, this is completely hypothetical; it's exactly as hypothetical as that ai becoming, oh, the wizard of oz, or the ai becoming, you know, my little pony. there's absolutely no precedent for something like this. and as for the assumption that there is a case in which ai can go bad, yeah, there are cases where ai can go bad too, but we are not at the point
8:55 am
where we have achieved agi technology. so the point is, do you stop work on this technology, and if so, how do you enforce stopping work on the technology because you're that scared of an outcome? i can tell you bans don't work. the united states was the first country that created a nuclear weapons capability. they didn't want russia to have it; russia got it. they didn't want china to have it; china got it. most recently they didn't want north korea to get it; north korea got it. and on and on and on. and by the way, all along the way, most of these countries were saying, no, no, we don't have a nuclear weapons program, but we will sign whatever you need, we don't have a nuclear weapons program, because that's how you build a nuclear weapons program when everybody wants you not to build one. that's a situation called the prisoner's dilemma: if we both act dishonestly, we both suffer
8:56 am
equally, but in that environment i can't trust that you will act honestly, because if you act dishonestly while i act honestly, you go scot-free and i pay twice the price. that's the situation these countries find themselves in all of the time. there's no verifiability. so unfortunately these things will continue, and my own focus and my own interest is working in areas referred to as explainable ai, so that by the time we get to a place where agi is a possibility, or really sophisticated ani systems that could be autonomous on the battlefield, and i've written about that, at least there's a level of accountability, of ethics and of control that can be mathematically guaranteed in these systems.
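a tiny payoff table makes the dilemma he describes concrete: without verification, building anyway dominates honoring a ban no matter what the other side does. the numbers are illustrative, not from the book:

```python
payoff = {  # (my_choice, their_choice) -> my payoff
    ("restrain", "restrain"): 3,   # both honor the ban
    ("restrain", "build"):    0,   # i pay the price, they go scot-free
    ("build",    "restrain"): 5,   # i get the advantage
    ("build",    "build"):    1,   # both suffer, but neither could risk trusting
}

for theirs in ("restrain", "build"):
    best = max(("restrain", "build"), key=lambda mine: payoff[(mine, theirs)])
    print(f"if they {theirs}, my best reply is to {best}")
# both lines print "build": defection is the dominant strategy without verifiability
```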
8:57 am
yes. >> to add to her question, i'm not an expert in ai, but it seems from what i read that the question is a little bit loaded, because, and you can confirm this, the power of ai is that it can't be defined by physical space, because it's theoretically everywhere. so, i mean, that's sort of the power of ai; it's not like, you know, one computer. part of ai is that it almost operates like its own neural network, so you can't contain it, can't contain that power. >> that's why i asked. >> yeah, so you do obviously need some sort of computation to run the algorithms of an ai. that may change over time; tomorrow you might invent a
8:58 am
quantum computer, and the quantum computer may become hundreds of times more powerful than today's, but you still require some computation to execute your code on. but i take your point; your point is that there is so much power in systems that are relatively smart, and that's the specific issue that you're worried about. the one question was how big can it be; the problem is we don't know how big the universe is. every day you read a new story about how much dark matter there is; i don't know how heavy the universe is. >> it's like asking how much bitcoin will be worth next month; we don't know. >> a heck of a lot. for all practical purposes, we wouldn't know what to do with that level of computation other than simulate the universe all over
8:59 am
again, which is why some people think the universe is a simulation. i don't personally think that the universe is a simulation, but many of the things that happen in the universe are consequences of computation. >> the precautionary principle, can you expand on that a little bit? you are dealing with people who are in positions of power to apply this technology in ways that will affect us all; you're involved with a lot of researchers who are furthering this technology, which even you admit could potentially be extremely dangerous or extremely beneficial. as a scientist, i would think that you would have a judgment, your own personal judgment, on the precautionary principle, whether it's a valid principle to apply within
9:00 am
this context. >> well, i will tell you my view. first of all, a lot more has been done than we know. the second thing is that the implications of these capabilities, what they will give to the practitioner of those capabilities, those are significant advantages, and human nature, at least my study of human nature, suggests that there is absolutely no principle or anything else that's going to prevent a full-on, programmatic, focused desire to obtain these capabilities.
9:01 am
trust was something human, and yet they showed that in a distributed system trust can very much be made into a mathematical formula. so now you don't have to trust someone, but in a certain way more trust exists between two parties that wouldn't care to trust each other in a human sense than has ever existed even among parties that claim to be ready to die for each other. so there are ways in which we can take these complex concepts and implement them mathematically. one of the things we have done is around using mathematics to
9:02 am
define agent societies. so in many environments where you have a large number of agents, a drone can be turned against you, a drone can make a mistake, a drone can go off kilter or might have the wrong navigation software, and six or eight of them can start destroying the rest of the swarm or they can crash into civilian locations. so what we have designed is a blockchain-based mechanism where the swarm has been converted into a flying democracy with no man at the center.
9:03 am
there are two different aspects to each agent. one is that it can make observations about other agents and report them back. all those reports are stored in a blockchain, so in other words more than half the agents would have to be corrupt for the view being developed to go bad, so you have a pretty good guarantee. and the second thing is that the system decides when the majority view is that a drone is either acting badly, took an action no one knew it was authorized to take, started firing in a location where it was not authorized to fire, or is going slower or faster or exhibiting any sort of noncompliance, and then the agents can, on an out-of-band channel, disable that drone. so we
9:04 am
have a very nice algorithm, and we have implemented the algorithm to show to people. it's hard to show with flying drones, but we do it on a screen with a real program: you have these objects, and you can click on any one of them and say, i have hacked this guy, and instead of moving in this direction and carrying whatever he's carrying, he will now start ruining everyone else's life, travel in the opposite direction, take everything other people are sorting and make a huge mess. so you quickly see the other citizens, the other agents, observe that and think, what the hell is this guy doing? then they report all that, and there is no judge, no jury; there's only federated, blockchain-style agreement on the global perception, and as soon as that happens there is a containment mode, and the actual containment
9:05 am
mode can be different. containment mode comes into place, and they prevent that entity from doing anything else. once containment is guaranteed, everyone else goes on their merry way with no human intervention: the analysis of what's going on in the environment, high security guarantees on that enforcement mechanism not being hackable, because again it uses mathematics and blockchains, and a totally verifiable sense of how you control things like that. i've done this whole thing an injustice because i've tried to verbally describe an entire algorithm, but to me those are the kinds of things that give us hope and where we need to invest. that's where i can contribute. i mean, i'm not a fan of bans; i don't think they get anything done.
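a toy sketch of the swarm-policing logic he outlines: peers report observations into a shared append-only log, a strict majority flags the offender, and containment follows. it deliberately leaves out the actual blockchain machinery and is not spark cognition's implementation, just the voting idea:

```python
from collections import defaultdict

def containment_round(reports, n_agents):
    """reports: list of (observer, observed, compliant) entries for one round."""
    flags = defaultdict(set)
    for observer, observed, compliant in reports:
        if not compliant:
            flags[observed].add(observer)       # reports accumulate, none are deleted
    quorum = n_agents // 2 + 1                  # more than half the agents must agree
    return {agent for agent, who in flags.items() if len(who) >= quorum}

# drone 3 starts "ruining everyone else's life"; the other four all flag it,
# while drone 3 falsely reports drone 0 in an attempt to deflect
reports = [(i, 3, False) for i in (0, 1, 2, 4)] + [(3, 0, False)]
print(containment_round(reports, n_agents=5))   # {3}: only the real offender is contained
```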
9:06 am
>> do you feel this technology should be weaponized? and the reason i ask is because right now the dod's stance on the technology that supports our current troops is that decisions of life and death would be made by human handlers. so at the current state, no, they will not be weaponized, but do you think they should be, or will they ever be weaponized? [inaudible] >> china just said-- >> correct me if i'm wrong, but i don't think the technology is there yet that we can allow that to happen. >> the technology will get a lot better, but i will give you some examples--
9:07 am
>> the answer is no, they're not weaponized; the drones you are talking about are mostly for surveillance. >> every us naval ship has one or two systems that are called ciws. they are cannons, completely independent. they fire on the order of 2,000 rounds a minute, out to more than 2 miles, and they have their own radar with a motorized mount, and it all sits together, okay? totally autonomous. so there is a manual mode and an autonomous mode, and when you put it in autonomous mode, it's designed to take out sea-skimming cruise missiles. why do you think they made it autonomous? because people can't do that job. that's a system that, in a
9:08 am
limited context, with a two-mile range, is designed to take down low-flying aircraft and sea-skimming missiles, but it is autonomous, and it's put on every navy ship. the system can operate in autonomous mode for defense. >> they are probably developing an autonomous submarine as we speak. >> i can only comment on public information, but the chinese and the russians have put out videos and reports on two systems they have just created. >> i think it's pretty evident that ai will be more and more
9:09 am
about the power of persuasion, with ideas and solutions to solve problems, but it's not like god handing down the ten commandments. i would rather see ai come up with new ways of doing things, but you have to be able to persuade. >> absolutely. there's an entire chapter in the book called mind hacking, and that whole chapter is about using ai to automate persuasion. the challenge is you could persuade people either way, because, i mean, the intent is being supplied by you, the human user, and you can choose to swing it one way or the other. there are three elections where we know stuff like this has happened:
9:10 am
the us election and the indian election among them, and in all three there were data science tests done, and that data is shared in the book. so that science of persuasion, using social intelligence to craft messages to persuade people, is absolutely a possibility. if you want to use it for good, great; other people will want to use it for bad. currently there's nothing defending against that, and my concern is that this technology, as it advances, can be used for so many different things that not investing in technology that can keep us safe is ludicrous, as is assuming that the construct of a ban, telling a hundred ambassadors, hey, guys, stop building those autonomous weapons, will work. they are not going to stop building autonomous weapons, so
9:11 am
the work really is about building more technology. i hope that people put the technology to good use, but i also don't want someone to try to put this technology to a bad use and for us to find out that we never thought about that or don't have the defensive capabilities to even deal with it. so in that mind hacking chapter i also talk about the problems that ai-driven mind hacking, this merger of modern psychology and psychoanalysis with natural language processing, can cause, and how we can develop ai shields to protect us from these kinds of threats in the future. >> how significant do you think us leadership is in setting the rules of the road for ai use?
9:12 am
>> i just wrote an article with general allen in foreign policy a month ago, and in the article we laid out exactly what the issues are. but here, the us government spent $1.1 billion on ai in 2014, and in 2015 it spent $1.2 billion on ai. china announced $150 billion on ai, just governmental spend, over five years, and china has announced a national 2030 ai plan where the official goal is to dominate the ai space and be the number one provider of ai technologies. china is putting in immense amounts of money, and china is taking full measures to bring in the best talent from all over the world. for example, the stanford professor andrew ng went and worked at
9:13 am
a chinese company and developed capabilities and came back, and many others are starting to do this. students of chinese origin, the rate at which they used to stay in the us has dropped; us policies have been made so difficult that the smart students who used to stay are now no longer interested. china is also a developed country now; it's not like they are getting peasant wages in china. they are getting good wages in china, so if they are treated well there and they are smart people, why the hell would they stay here? we are so proud of our understanding of market economics; well, it's the same market economics. if you want someone who is bright and a rare commodity, and you treat them like crap, and they can get more money and not be treated like crap somewhere else, what do you think will happen? they have made h-1b visas harder to get, and the h-4 spouse work deal
9:14 am
that obama did has been canceled, so it becomes more expensive to even live in the united states. so it went from, we should take every ai and machine learning student and staple a green card to their graduation documents, to bans, and your wife cannot work, and let me take, you know, sort of very intrusive looks at you and everything that you brought in, and let me figure out where you came from. it's not interesting anymore. what we have to do is become that shining city on the hill again, a place people want to come to, and all of the current political discourse has gone down the path of making this not that shining city on the hill,
9:15 am
not the place where people are looking for the solution. that is the big tragedy, because the biggest advantage america has is that optimism, that world war ii optimism, the volumes of mechanical projects printed in popular mechanics and kids doing stuff in garages that no other kid or country was doing, and we could do it, and a president who went out and said we do these things not because they are easy, but because they are hard. >> any other questions? >> i'm curious if you have any developed thoughts about the idea of basically a cultural war coming into play. hugo de garis, who writes and presents himself in sort of a goofy way, but i find his ideas compelling, not sure if you are familiar with his book, but he predicts
9:16 am
a time in the near future, which a few of us are thinking about now but which, 20 years from now, might be what we are all thinking about, when there will be a certain number of people who really like the idea of ai and a certain number of people who have an allegiance to biological life, and they will come into a conflict which will make the wars between christians and muslims look very moderate. i wonder if you anticipate that, or if you think that's just a paranoid pipe dream. >> you know, i read a dr. seuss book that went something like, the ones with stars on their bellies. do you remember that one? so, it's that sort of thing.
9:17 am
the point is, it's not about technology or about the color of your complexion or where you came from. the reality of the human condition is that you put all of us together here in this room, and you leave us here long enough, and pretty soon we start to figure out where we are all from and what our traditions were, what our religions were or whether we have religion, and we start to group up, and then you take one of those groups into the next room and after three days they will be at each other's throats, and on and on and on, until we look for the ones that don't have stars on their bellies. so this can be yet another form of how people choose to differentiate themselves, but i think we already know that, whether it's this or something else, we will always find ways. the problem is that it's a bug in the human brain that you
9:18 am
can't rewrite and just fix. it's not like an apple update where, once you discover it, you know, next thursday you can fix it. the only way it gets fixed is if you manage around it, and one of the ways, at least in my observation, is to recognize that people descend to being animalistic when they're pressured. so these wars and so on and so forth are, at the end of the day, about some level of putting such pressure on people that they can no longer hold on to this imagined sophistication that they aspire to and feel they have arrived at. i recently read a book, and after i read it i could not do anything for two weeks. it was incredibly depressing subject matter, but do read the book. the book was called "man's
9:19 am
search for meaning." the author was a survivor of auschwitz, and going in he was a psychologist, so he was a mature person, and he describes, and i will not recount it here, what he went through for years. and then he says that, in analyzing my own self, with what little sanity i had left and what little remained of my faculties and my experience as a psychologist, i am telling you that i was no longer a man, and that it took me a long time to become a man again. so, when we pressure people to where they are no longer men: if we take that 57-year-old truck driver who has done an honest
9:20 am
day's job every day of his life, never had an accident, and we tell him, listen, we've decided to replace you with a tesla semi, and here's two weeks' notice, but you can go and join the university of washington, why don't you go learn bioengineering, i mean, i hear that's the next cool thing. he's 57 years old; he's not 18. he might do something there, but it's unfair. that's not what society is supposed to tell someone who has served it well for 57 years. setting simplistic ideas aside, when it makes sense for society to head in a direction that on a macro level is beneficial, as automation and ai are, then those who are
9:21 am
disaffected and paying the price for that, which they could not have foreseen, society owes it to them to take care of them. it is part of the cost of the shift. we keep making these shifts, and orphans of these shifts, and we keep pressuring people, and we expect that there will be no blowback, that there will be no problems. well, there will be problems. as much as the bottom 50-- [inaudible] >> they will not sit together and have tea and crumpets, i can tell you. it will be, i think, ugly. i think you saw some of that in recent days with the race riots, but those groups were small. but if you think about
9:22 am
huge numbers of those disaffected middle-class people who don't consider themselves to have done any wrong, and society has suddenly changed its agreement with them, where will they go? mature societies look at these issues in advance and plan for these issues in advance, and by the way, america used to be a mature society; the g.i. bill was a great example of that kind of maturity. >> so, sitting here now, it's clear that when you talk about the policy implications of ai and what's going to happen to truck drivers and marketers and assistants and even doctors, what countries,
9:23 am
what states are doing policy well? >> the scandinavians for some reason always end up doing policy well. they are experimenting with a variety of different schemes where they have sort of these minimum payments. they have taken so much of the cost of living a high-quality life and kind of made that the state's responsibility. now, people may say that's communism, socialism, and maybe it has elements of communism and socialism, but to me what's important is not socialism or capitalism or communism; to me it's the combination of the best set of things that work for where we are now, even if some elements have to be borrowed. after all, we are not really a purely free market.
9:24 am
it's not a purely free-for-all where whoever has more money can just do whatever the hell he wants. so balance makes sense in every society, and i think they have found different kinds of balance. i've been impressed with how the germans have used automation to create very high-end brands, so they have kept a smaller number of people employed but kept huge exports going over the last many years. i wonder how viable that is, because europe is having trouble, and also maybe one country can do that, but if all countries do that, what's being exported would not be high-end. the other part of this, by the way, is that there are large, highly populous developing countries that were expecting to get a huge demographic dividend.
9:25 am
the assumption was that all of these young people, millions and millions of them, would get educated and enter the workforce, and there would be work for them to do. i don't think that will happen, because by the time that demographic dividend comes about, i think so much automation will have happened, even if not in their country then in the countries that buy from them, that the use of that labor may not be as important, unless they can somehow redirect it to local needs. so one of the things we have talked about often is that as this technology plays out, politics will change. countries that could have been stable won't be; countries that could not have been stable will be; countries that are rich but small can use autonomous weapons in large quantities and go and implement
9:26 am
campaigns that, in the history of man, a country that size could never have implemented. so these are the fundamental things that change how the world works, and there's no parallel in history for this. so let's see. thank you very much. oh, you had a question? i will take this as the last one. >> elon musk has gone on the record saying that the greatest threat to mankind is not pollution or north korea, but ai. do you agree with him? >> no. the greatest threat to man is, was and will always be man.
9:27 am
>> thank you, guys. [applause] thank you, guys. >> book tv has recently covered several books on technology, including talks by a former world chess champion on artificial intelligence, software engineer ellen ullman on her 20-year career, and a tech businessman on the precursor to today's online communities, the plato system. if this is a topic that interests you, visit booktv.org. several programs will appear and can be watched in their entirety online. >> you are watching book tv on c-span2, television for serious
9:28 am
readers. here's our primetime lineup. at 6:15 p.m. eastern, brian clemens, alexander tien and dean rader reflect on gun violence in america. at 7:40 p.m., a tour where we speak with many of the people who are responsible for bringing books from acquisition to publication. at 9:00 p.m., father gregory boyle reflects on his work with gang members. on book tv's afterwords at 10:00 p.m., a republican national committee spokesperson reports on the grassroots populist movement in the us; she's interviewed by daily beast senior columnist matt lewis. we wrap up our primetime programming at 11:00 p.m. with stephen, who examines joseph stalin's leadership of the soviet union in the years leading up to world war ii. that all happens tonight on c-span2's book tv, 48 hours of nonfiction authors and books every weekend, television for serious readers.
9:29 am
>> one of the things that clearly turns off a lot of people about donald trump is that he does seem to have a certain vulgarity. is his populism inevitably vulgar? >> well, today happens to be the anniversary of caesar's crossing of the rubicon in 49 bc. caesar was a populist. was he vulgar? no. i don't think it's linked to populism, although i think those people who wield the term populist as a weapon would like to have us think so. i'm glad you mentioned that, because it seems to me that one of the fundamental objections to donald trump is that affect. he wears the wrong kind of ties.
9:30 am
he likes his steak well done and he puts ketchup on it. these are unpardonable sins. there are other things as well, but i think that a large part of this aspect of the hysteria over populism, over donald trump, is a matter, as i said, of aesthetics; maybe a better word would be snobbery. >> you can watch this and other programs online at booktv.org. >> good evening, everyone, and welcome to the henry hollis--
