
tv   Book Discussion  CSPAN  October 19, 2014 6:30am-8:19am EDT

6:30 am
and then almost all the charges were dismissed, and he came home that summer, tried to re-enroll again as a senior and was told now you're 19, you're too old, sorry. so he became a high school dropout, and then he couldn't pay the $225 in court fees that came due after the case ended, so then he had a warrant out for his arrest. so i think in addition to the pleas, there's all these other things going on with the court system, like how long people are spending without a conviction, and then the court fees are just crippling -- we could just agree to get rid of court fees as a reason to go back to prison or go back to jail. >> i would agree. the public defenders have no interest, they have such an
6:31 am
enormous caseload, they want to move through the cases as quickly as they can, and the judges frankly don't want these things going to trial because they don't have enough room on the docket if all these things went to trial. so if you can't afford an attorney -- lavar was lucky. he got into trouble after his older brother had become an nfl star, and so there was a high-powered attorney hired. there was quick resolution. there was downgrading, and there was still a plea deal, but mostly because the facts of the case were not in dispute. what i find interesting is that texas is now the national leader in prison reform. >> incredible. >> the reason why is because the conservative republican legislature doesn't want to spend money on prisons anymore, so they're using exactly the same arguments for reducing
6:32 am
prisons -- reducing incarceration rates as bleeding-heart liberals were using in the mid-'70s. it's word for word exactly the same reasoning, but the motive is that the budget is out of control and they need to get it under control. >> there's a number of motives from the right. partly you have states and cities being out of money, but you also have evangelicals who believe in redemption. you have the libertarians who think this is government overreach. this is the idea of parsimony: that imprisonment is a last resort, that incarcerating someone is an awesome form of state power that should only be used extremely rarely. and we're seeing this amazing coalition from the right and left, and i think there's an enormous amount of leadership from the right on criminal justice reform and, frankly, for
6:33 am
the liberals, it's a little embarrassing. it's like, where have we been? >> i cover the texas legislature so i can only speak to texas. i can't speak to anyplace else, except to say those are a lot of the same reasons prison reformers were using in the '70s, and it is embarrassing, because liberals were trying to catch up with the tough-on-crime movement, and then as soon as -- >> of '94. >> then as soon as they get tough on crime, the right turns on them and goes in the exact opposite direction. there's another question. >> hopefully this will be a nice -- i think we're almost at the 5:00 point. but i was struck by tomlinson hill as an historical documentation of top-down government policy created in the interests of those with money and power. how can telling these stories promote a modern policy of public service and democracy
6:34 am
with the focus on civic responsibility rather than a system creating advantages, divided by class and race? that's a question for everybody. >> since you mentioned tomlinson hill, i'll start. you know, this is dark stuff, and it's stuff that is hard to accept. it's hard to accept that that really nice-looking old couple in that sepia photo in 1920 -- they were lynching people. that's what i had to face up to. so in that way it's very dark. but when you think about how far we have come and what our potential is, we have raised the
6:35 am
least racist generation in history. all the studies -- in recorded history, at least as far as we have statistics -- back that up. and that's something we need to be proud of. now, if we can take this generation, the millennial generation and younger, and give them a little understanding of how we got here, then i think we might have a magic combination to solve the problems that continue to divide our society. it's one thing to not be racist, and it's another thing to know the history, but those two things together are what we need: a new generation of leaders who will bring about change. and that's kind of what i'm pushing for. i don't think the younger generation is being taught that history. when
6:36 am
the state board of education in texas talks about changing the slave trade to the triangle trade because they're embarrassed by the word "slave," you know we're not teaching our kids right, even if they're not racist. so that's what i think is the answer. >> well, i guess i take some heart from the activists who led the way in anniston. one person was david baker, who was a leader in a group called community against pollution, and there's an intergenerational element to this story. his parents were the first african-americans to arrive to aid the freedom riders on mother's day 1961, when the bus was fire-bombed. he went off to new york and became a labor organizer and came back to anniston and got a job at monsanto, working on the
6:37 am
cleanup. he didn't know how hazardous the chemical he was involved in cleaning up was until after he started the job. he took that knowledge and became active and eventually the leader of the group, community against pollution, that was so instrumental in helping to hold monsanto to account. and he said at one point to me, he said, it was a magnificent struggle, but we are not at the end. >> unfortunately we are out of time. i want to thank everyone for attending the session and asking such great questions, and to thank the panel for participating here. the authors will be signing
6:38 am
upstairs. the books can be purchased at the sales area, which benefits the festival so we can bring all these great authors to the festival. i want to thank everyone for coming out today on such a rainy day and supporting the festival, and hopefully we'll have a great day tomorrow, and come back again. [applause] [inaudible conversations]
best expressed by this random comment on a blog: not because i think this is better for the world -- why should i care about the world when i'm gone? -- but because this increases the chance i have of experiencing a more technologically advanced future. so that is the answer. we have to be clear about what exactly the question is we are trying to answer. if the question is what is best for me, what should i hope for,
7:01 am
faster technological advancement? i think that is basically right. if your hope is that we can realize super long life spans, or something like becoming a planet-sized superintelligence, clearly that is not going to happen in the default course of events. the default is that we are all going to die in a few decades. we are just rotting away. so that is how it's going to happen unless some radical shakeup occurs -- a cure for aging, something totally radical would have to occur. so to maximize the chance of that happening, you would hope things move as quickly as possible. also, aside from this kind of astronomical life span, if you want more gadgets and a higher standard of living,
7:02 am
you should hope for faster technological progress. however, if we ask a different question -- not what is best from an egoistic point of view, but impersonally -- then i think the answer is quite different: something perhaps closer to the attitude expressed by what i call differential technological development, which counsels retarding harmful technologies, especially ones that raise existential risk, and accelerating beneficial ones, especially those that reduce the existential risk posed by nature or by other technologies. so rather than the blank check we are tempted to write to make everything happen sooner, we should think carefully about the desired pacing of different technologies. it may be that the right question is not, for a given hypothetical technology, do we want to develop it or do we not want
7:03 am
to develop it at all. for most forms of technology, the default is that sooner or later we will develop it. the more relevant question instead may be: on the margin, do we have reasons to push to make a particular technology arrive sooner or later? we presumably think we have some ability to influence the exact timing. it's not as though the funding given to technology research is wasted, with no impact on when the technology is developed. so by promoting a field, working in it or funding it, we could move things around -- make something happen a couple of months earlier, or a couple of months later. even though that is, in the grand scheme, a seemingly insignificant change, it might be important if it affects the existential risk we are facing. in many cases, it is quite
7:04 am
possible that the particular sequence in which different key technologies arrive might make a difference. i listed three earlier: technologies like a.i., nanotechnology and synthetic biology. each has significant existential risk associated with it. now you can imagine different orders in which they develop. suppose synthetic biology comes first and, if we get lucky, we get through that -- we survive the risk. then we develop nanotechnology, and somehow we avoid using that to wage world wars or destroy the human race, and we get through that too. and finally we face machine super intelligence, which we may or may not survive. the net effect is the sum of the risks of these different technologies. perhaps if these technologies arrived in a different order, we could avoid some of these existential
7:05 am
risks. suppose super intelligence came first: we would still have to hope we get through that technology transition and its existential risk, but then, if we succeeded in creating beneficial a.i., it could help eliminate or reduce the other risks, from nanotechnology and the rest. in reality, there's a host of additional complicating considerations we have to take into account before we can make an all-things-considered judgment about these issues, but thinking in terms of timing and sequencing might give us more insight than the simple-minded question of whether we should have a technology or not. so we can propose a replacement for the traditional concept of sustainability, which is a static concept. traditionally we think of sustainability in terms of achieving a state
7:06 am
such that you could remain in that state indefinitely: use up resources only at the same rate as they are replenished by nature, pollute only at the rate at which things are taken up and broken down and cleaned out. but what these considerations suggest is that instead of achieving a static state, we should try to pursue a trajectory along which we can continue for a long time and that leads in a desirable direction. it's an open question whether, at any given moment, that is best done by increasing static sustainability or by decreasing it. think of a rocket burning a lot of fuel: in one sense, to maximize its sustainability you would reduce the fuel consumption so it just hovers for as long as possible -- and then it crashes. the dynamic sustainability
7:07 am
concept would instead maximize thrust, so the rocket reaches escape velocity, and then you are in a more sustainable state that you can continue in. so that obviously is a metaphor; it's not meant to imply that we should maximize fossil fuel use. on this graph here i have three axes: technology, meaning the sum of our capabilities; coordination, some measure of the degree to which the world is able to cope with problems such as avoiding wars and arms races and technology races that destroy public goods; and a third dimension, insight, the ability to know how to use technological and other resources. so my view is that the best possible state, where things are optimal, would be to reach the point where we have huge amounts of each of these quantities.
7:08 am
you want some superduper advanced technology to make the world as wonderful as possible, but you also want the coordination to make sure we don't decide to wage war with it, and the insight so that it's not consumed on more mindless entertainment but on something actually worthwhile that has more value. but although the endpoint may be that we want to reach that region, that leaves open the question of where we -- the rocket ship in this picture -- should head now: whether we are better off with faster growth in any one of these. although we want to ultimately have these advanced technological capabilities, we might want to get them only after we have first got our act together on achieving global coordination. so that is the framework for these macrostrategic issues.
7:09 am
now to the specific issue of super intelligence. at some point there will be a transition to an era of super intelligence -- this will be the most important thing that's ever happened in human history, if we condition on avoiding existential catastrophe before then and even having a shot at it. there are a few ways it could in principle come about: through the enhancement of biological cognition, such as occurred over evolutionary timescales, or through advances in machine intelligence. right now machines are inferior to us when we're talking about general intelligence -- general, powerful learning algorithms that we can train to do many different jobs -- but they are improving at a faster rate than biological
7:10 am
cognition. so to some extent it's a question of which of these will achieve super intelligence first. we can think first about ways to enhance biological cognition. the way that will initially become technologically feasible is through genetic selection or genetic engineering. in principle there are other ways, like smart drugs and such, but if there were some simple chemical that boosted intelligence, the body would likely already produce it endogenously. what might become feasible with genetics over the course of a decade or so would be to do selection in the context of in vitro fertilization. this would mainly require us to have more information about the genetic architecture of intelligence, which is not yet known
7:11 am
to a sufficient degree. the reason is that until recently the price of gene sequencing has been high enough to make it impossible to conduct very large-scale studies with very large populations -- hundreds of thousands of individuals or even millions -- but the price is now falling, and such studies are beginning. you need such large studies because the variation in cognitive ability is not due to a few different genes but to many genes that each have a very small effect -- hundreds of genes or thousands of genes. to discover a small effect you need a very large sample, and that is now becoming possible. no miracle technology would be required. generally, in the context of in vitro fertilization, something
7:12 am
like eight to ten embryos are produced in a fertility treatment, and if there is some kind of physical abnormality, you will not select it -- you select one of the healthy-looking ones. you can also screen for certain monogenic disorders, but you can't currently select for complex behavioral traits. that would then become feasible. this technology would be vastly more powerful combined with another technology we don't have, which is the ability to derive gametes from stem cells. if you had that ability, you could produce an initial set of embryos, select the best ones -- the best in the sense of having the highest predicted value of whatever trait you were interested in -- and then derive stem cells from which you
7:13 am
could derive gametes to generate another generation of embryos. what that would achieve, in effect, is to collapse the human generation span from 20 or 30 years to maybe a couple of months, and instead of a eugenicist trying to persuade millions of people to change their breeding patterns for many hundreds of years, you could now have all of this happen over the course of one or two years, without anybody having to change how they live. the effects could be vastly greater. this kind of technology -- deriving gametes from stem cells -- has been demonstrated in mice, though not yet in human populations. so i did some analysis together with my colleague carl shulman on what the effects might be at different levels of selection. if you do one-shot selection among random embryos, picking the most promising ones
7:14 am
gains you a certain number of i.q. points: maybe 11 or 12 for one embryo in 10, maybe 19 for one in 100, and a bit more for one in 1000. so there are steeply diminishing returns -- increasing the population from which you select gives you less and less. but if, instead of doing one-shot selection, you do iterated selection, you don't get the diminishing returns. maybe you could get as many as 65 i.q. points from 10 generations at that level of selection. that might bring us up to a kind of cognitive ability that has never existed in all of human history, beyond whatever phenomenal genius scientists there have been.
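the pattern described here -- large gains from the first few embryos, steeply diminishing returns after that -- follows from the statistics of taking the maximum of several draws from a normal distribution. a minimal monte carlo sketch (an illustration, not the analysis from the talk; the function `expected_gain` and the 7.5-point standard deviation for an embryo's predicted score are assumptions, the latter chosen only because it roughly reproduces the gains quoted above):

```python
import numpy as np

# Assumption: each embryo's predicted genetic IQ score is an independent
# draw from a normal distribution with standard deviation SIGMA points.
# SIGMA = 7.5 is a hypothetical, illustrative value.
SIGMA = 7.5

def expected_gain(n_embryos, trials=10_000, seed=0):
    """Average IQ gain from picking the best of n_embryos embryos."""
    rng = np.random.default_rng(seed)
    scores = rng.standard_normal((trials, n_embryos))  # per-embryo scores
    return SIGMA * scores.max(axis=1).mean()           # best-of-n, averaged

for n in (2, 10, 100, 1000):
    print(n, round(expected_gain(n), 1))  # roughly 4.2, 11.5, 18.8, 24.3
```

the gain grows only like the expected maximum of n normal draws (roughly proportional to the square root of log n), so going from 100 to 1000 embryos buys much less than going from 1 to 10. iterated selection escapes this because each new generation starts from a fresh, already-shifted distribution.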
7:15 am
we don't know how far you could go, because it's uncharted territory, but it does look like you could get some way toward super intelligence with these types of biological enhancements. there is no reason to think the human species is the smartest possible biological species that could exist. rather, we are probably the stupidest possible biological species capable of creating technological civilization: basically, our ancestors got smarter until they reached the minimum level, and then they did it. there's no reason to think there is not a huge space above us. so i think this could create some sort of weak form of super intelligence. ultimately, though, the potential of machines for information processing is vastly greater. the point sometimes made to me is: because it looks really
7:16 am
dangerous to create machines that are far smarter than us, this is a reason to accelerate progress on biological cognitive enhancement, so we can keep up with the machines. i think it works exactly the other way around. if we pursue enhancement, we hasten the point at which machines overtake us, because we will then have more capable scientists making faster progress on artificial intelligence and computer science. it still seems like it might be a beneficial thing to do -- not because it would delay the takeover by machines, but because we might be more competent when we are creating the super intelligent machines. there are other ways we could imagine enhancing cognition besides biological ones -- say, improving our collective rationality through new institutions.
7:17 am
i'm not going to talk about that now. brain-computer interfaces: i don't think that is where the action is. i think it's going to be hard to really add much capability by implanting things in your body. you might think it would be great if you could access google by just thinking about it, but i can access google already, and i don't need surgery to be able to do so. most of the benefits we hope to achieve through cyborg implants we can achieve by having the same device outside the body and then using our natural interfaces, like your eyeballs, which can project on the order of 100 million bits per second into your brain.
7:18 am
it's going to be hard to beat that. in any case, the hard part is not really squeezing sensory data into the brain -- the first thing the brain does with sensory data is try to extract the relevant part and throw most of it away. so implants may make sense for people with disabilities and so forth, but by the time it's possible to enhance the intelligence of a healthy adult this way, my guess is that the electronic part itself will be so smart that that's where the action ends up. that leaves the path of machine intelligence as the main additional path to consider, so i'm just going to quickly run through, for the public at large, where a.i. stands.
7:19 am
a variety of different techniques and methods have been developed. in order to produce super intelligence through the synthetic a.i. approach, we clearly would need something more than we currently have -- one or more fundamental new discoveries or architectural insights are missing. whether we are two or three discoveries away from that, there's just great uncertainty there. all of these techniques -- or most of them -- have been developed in the last 60 years or so, since we have had computers; a couple of them before that. so that's within the lifetime of some people around today. it's not really that long.
7:20 am
if we imagine that continuing for another century, we might have another 10 or 20 of these discoveries. in particular domains where it's easy to measure the performance of machine intelligence against human intelligence, hardware has also made a significant contribution: roughly half of the improvements over the past couple of decades are due to hardware getting faster, and half to software improvements, and this seems to hold across a variety of domains. and there is another path towards machine intelligence, which is to take advantage of the fact that we have an existence proof, the human brain. if it turns out to be too difficult to just figure it all out from scratch by doing computer science and mathematics,
7:21 am
we can draw inspiration from the brain and try to reverse engineer it. that can be done at different levels of ambition. you could learn some general principles about how the brain is organized and patch that up with completely synthetic, artificial algorithms, or you could try to really build a copy of a particular human brain, an approach known as whole brain emulation. you would freeze a brain, feed slices through microscopes to get pictures, use automated image recognition on the stack of pictures to extract the connectivity matrix of the neural network, and then figure out
7:22 am
how the different neuron types function, and run the whole emulation on a sufficiently powerful computer. we are very far away from being able to do this now; the enabling technologies don't yet exist. but it would not require any new conceptual breakthroughs -- no new theory of how thinking works -- to produce machine intelligence this way. in principle you could do it just by understanding how the parts work and then using brute-force technology to build an emulation of the whole. here you can see some cross-sections of neurons. at the bottom there is a block of these pictures stacked on top of one another. in the upper right corner is the state of the art of image recognition, algorithms having
7:23 am
been used to extract the neuronal structure from the images. so we have microscopes that can see at the right resolution. but to image an entire brain at that resolution would take roughly forever, and while the picture up there in the right corner looks very nice, these algorithms make errors, and the errors add up. and we have good computational models of some neuron types, but not all. so there would be a lot of incremental work required to make this kind of thing feasible. we know that it's not just around the corner -- it's going to take time to get the pieces together. maybe a long time, but it's not going to happen next year. with synthetic a.i., by contrast, there's some chance of a radical breakthrough, which means it
7:24 am
could happen sooner. then there's the fact that there are multiple paths, and the fact that if it turns out machine intelligence is just too hard for us to crack, we could then wait until we have smarter humans -- a cognitively enhanced generation -- to tackle the problem. all of that increases the probability that the destination will eventually be reached. for a technical audience like you i don't have to go through this, but people sometimes complain that as soon as something works, it is no longer called a.i. a lot of algorithms and techniques developed in a.i. research are all around the economy today; we just think of them as software. so the field has not at all been at a complete standstill over the last half century -- rather, it has made progress that has not yet gone all the way. these systems do well on narrow tasks, but when will this
7:25 am
point be reached? we did a survey of leading a.i. experts. one of the questions we asked was: by what year do you think there's a 50% chance we will have human-level machine intelligence? the answer varied depending on which group of experts we asked. a caveat, actually: we asked them to conditionalize on there being no collapse of global civilization, so maybe you would have to push the median estimate out by a few years. it seems fairly sensible to me, but there is large uncertainty in both directions -- it could happen much sooner or could take much longer. we also asked by what year do you think there's a 90% probability, and they said 2075. in my mind that's overconfident.
7:26 am
i don't think we should assign probabilities with that much confidence. we also asked: if and when we do reach human-level machine intelligence, how long will it be from then until we get super intelligence? you can see the distribution is different. i'm very uncertain how far away we are from human-level intelligence, but if and when we reach that point, i think there's a high probability that super intelligence follows soon thereafter. the reason it's worth separating these two questions is that they often get rolled into one overall skepticism: the question of how far we are from some rough notion of human equivalence, and the question of, once we get there, how quickly will
7:27 am
machines completely leave us in the dust? some things hinge on this question of how steep the takeoff will be. if we have a fast takeoff -- completed in minutes or hours, days, a couple of weeks -- then it seems like it's not going to really be possible to invent new solutions as we go along. if we are going to get a favorable outcome, it will be because of preparations that took place before the transition. if, on the other hand, it's going to take decades or centuries to slowly implement these expanded capabilities, then we will have ample time to develop global institutions, to develop a new science of this, train students, perhaps whole new fields. so the degree to which you need to worry about getting this right in advance
7:28 am
depends on how fast you think the takeoff will be. also, with a fast takeoff scenario, you're much more likely to get a singleton outcome -- a world order in which there's only one major decision-making agency. basically, if you go from human-level to radically superintelligent within the course of hours or weeks, it is likely there will be one project that has completed the transition before the next-closest one has even started. in technology research you do have competing projects, but it is rare that they are only a few days apart from one another; usually the leader has a few months on the next. so the faster the takeoff, the more likely you will have one system that is maturely and radically super intelligent before any other is even close. in such a world, it is likely
7:29 am
that this first super intelligence could be extremely powerful, for more or less the same reasons we humans are very powerful relative to other animal species. it's not that our muscles are stronger or our teeth are sharper; it's that our brains enabled us to develop technologies and complex social organization, to the point now where the fate of the gorillas -- even though they are much stronger than us -- depends a lot more on what we do than on what the gorillas themselves do. in this scenario we would have a super intelligence that could quickly achieve technological maturity, a telescoping of the future in which what we might otherwise achieve in 20,000 or 30,000 years happens quickly, because the research is done by super intelligence rather than by biological minds. you then potentially have an extremely powerful agent that may shape the future according to its
7:30 am
goals, and a lot may hinge on exactly what those goals are. we can talk more about that in the q&a. but if you have a slow takeoff instead, you will have a multipolar outcome, with many different companies or countries or teams and systems with roughly comparable levels of capability, no one of them so far ahead of the others that it could just lay down the law. in that kind of multipolar world, a different set of concerns comes up -- not just what goals one agent has. not necessarily less serious concerns, though. you can have economic competitive forces come into play, and evolutionary dynamics among digital minds. depending on exactly what assumptions you make, you might have a population explosion of these digital minds, because they can be easily copied.
7:31 am
people would make copies of these digital minds as long as each copy can earn more than the subsistence income for a mind -- basically hardware rental and electricity. but that subsistence level for machine minds is lower than for biological minds, because we are less efficient: we need to eat, we need houses to live in, and so on. so you can create simple models where it becomes impossible for humans to earn a wage income; we would derive income from capital instead. the question then arises whether we would be able to preserve those institutions indefinitely. there's this world with trillions and trillions of digital minds that are gradually becoming much faster and smarter than us, and yet we imagine we would be holding onto a significant share of the capital. the long-term evolutionary outcome of that kind of dynamic
7:32 am
might not contain anything that looks like a human or anything we place value on -- consciousness, song and dance, beauty and so forth. it might turn out that whatever is most efficient at producing the particular kinds of output that economy rewards lacks those things. so there are a lot of uncertainties either way, but it does look like, if there is going to be a big fulcrum on which the future hinges, the arrival of super intelligence is a good candidate for what it might look like. so let's open it up and have a little bit of discussion. thanks. [applause] >> i will just briefly mention the microphones are down there. you can line up in front of them and get ready with your questions. and then nick, do you want to
7:33 am
call on people? >> maybe in the order they come up. >> line up behind this gentleman right there. >> i find it extremely hard to take your presentation seriously. really. this idea of the singularity goes back to the machines of the 1950s. if we go through your three paths right there, the first is genetic. you are proposing, for example, this accelerated selection. if you do it that way, even presuming that we do arrive at the genes -- which we haven't done yet, despite what you say -- >> i didn't say we have. >> okay, we haven't. even if we did do this properly,
7:34 am
you are leaving out epigenetics. this is not going to work. the second one, the neuroscience project you propose, is pretty much exactly the one henry markram is getting a billion euros for in the european union at the moment, and thousands of us have signed a petition saying it's garbage. it's not going to work. it's a waste of money. you can check on the web why we have said so. the third one is interesting. you say that a.i. is the most likely to succeed, so the key advances must have come over a very short period of time -- yet i do not see a single technique developed after 1990. granted, they have different names: the deep learning machine is basically the perceptron, from the 1950s. i truly wonder, are you selling books, or is this a serious academic thesis? >> okay, let me take the three
7:35 am
questions in turn. so with biological genetic selection: if all you're doing is selecting on the measured genetic correlates, it might be that you will only be able to capture part of the heritability. but even by capturing just part of it, you can get some of the way toward enhancing the genetic disposition to intelligence. with more advanced technologies you might understand, say, how the epigenetics works, but even without making any assumption about that, you could still be confident you would get part of the way. as for the second question: i've really no opinion about whether that project is worthwhile or not. i just point out that at some point, with advancing technologies in these different fields, it looks like whole brain emulation should eventually succeed.
7:36 am
that's compatible with it being a complete waste of money to attempt anytime soon. i am not even sure it would be desirable to do it even if there were a prospect of succeeding -- though i might actually be happy if it were more promising. as for whether there have been any more recent discoveries: where you draw the line on what counts as a fundamental discovery is somewhat arbitrary, but deep learning is fairly recent, and convolutional networks and big-data approaches have come to the fore comparatively recently. my impression is that progress has probably continued at more or less a constant rate. it's very difficult to measure, because we don't have a good metric of how far we are from generally
7:37 am
intelligent a.i. systems. but there doesn't seem to be any slacking off in the field in general. there's a lot of enthusiasm, a lot of interesting work, a lot of interest in acquiring companies -- the company we have been working with was just acquired by google for $400 million this last year, after a bidding competition with facebook, really trying to scoop up talent. so i don't perceive any general kind of disillusionment or sense of stagnation in the field of a.i. maybe stuart can fill in more on that later. yes. >> thanks for the talk. my question is about coordination. you had the graphic, i think, with capability, insight and coordination, and then you talked about global coordination. i wondered what you see as the highest possible outcomes, if you could wave your magic wand,
7:38 am
how that might look, how it might be reached, and who would be in control of it -- basically, how would that look? >> there are sort of two separate variables to consider here. one is the abstract structure of a world order in which the coordination problem can be solved, in the sense that there's only one decision point -- the concept i call a singleton -- whether that is a dictator with unchallenged power, or a legal world government, or a superintelligent a.i. that has taken over, or universally shared values. you can imagine many different possible instantiations, and some are much more desirable than others. it's a useful concept to have because you get a different set of possible outcomes: the outcome can be selected by an aggregation of preferences rather than by these
7:39 am
kinds of competitions and zero-sum games that can occur when you have conflict. the long-term outcome is fairly likely to be a singleton because of the possibility of superintelligence arising very locally and then wielding enormous power. but even setting aside all these speculations, there has been a long-term trend toward an increasing scale of political integration. it used to be that a tribe of maybe 60 people was the largest unit of political integration. now we have supranational entities like the e.u. and weak forms of global governance organizations. so if you count these steps as orders of magnitude, we have probably come more than halfway. we need another order of magnitude in the size of the largest political unit and we would have a global governance
7:40 am
system. so i don't know if that answers your question. >> do you see that as a desirable outcome? >> well, obviously, that would depend on what the values are. you could have a singleton that is, with nontrivial probability, an extreme dystopia. nevertheless, there are things we can do to make it easier for different parts of the world to coordinate and collaborate, and that seems robustly good. it would seem to help not just with a.i. -- for example, by making it less likely that you'd have a technology race at the end, where nations have to scale back on safety precautions -- but also with other existential risks: a big source of risk is conflict, and weapon systems built from new technologies might be more destructive. so the greater the ability the great powers would
7:41 am
have to coordinate their efforts, the better the chances would be -- although it might increase the risk of permanent tyranny. on balance, i think it hangs together sufficiently to be worth promoting. >> why did the graphs on your last slide flatten out over time? >> say that again. >> why did the graphs on your last slide flatten out over time? >> let's see, the last slide. yeah, well, because ultimately you reach technological maturity, where you have achieved everything that is physically possible. and then the only way to grow your information processing is by acquiring new material, and
7:42 am
that in the long run can only be done at a polynomial rate: you have the speed-of-light constraint, so you can expand at most in a sphere growing at light speed. whereas in the interim you might have much faster growth, while you figure out better ways to use the existing hardware or better ways of building more powerful hardware. >> what do you think about -- [inaudible] >> yeah, so those kinds of scenarios will depend on your evaluation function -- like, how much worse it is if the future is owned and designed by a small
7:43 am
number of people than owned and designed by all of us together. that is a non-obvious question in moral philosophy. on the one hand, if the people who got control were not dictator types but randomly selected people, they would be quite similar to the rest of us in their values, except they wouldn't care about us. they might not care about us in the same way we care about each other and ourselves, but they might care in similar ways about abstract goods like beauty and happiness and the avoidance of suffering. on the other hand, it would be a narrower conception of values -- just the values of a few people. personally, i think it would be much more desirable if there were a place both for these abstract values and also for everybody
7:44 am
alive today on the planet, including animals. there's just so much out there that you could allocate a solar system to each person alive and it would not visibly diminish the amount of matter optimized for beauty or optimized for scientific discovery. there is room enough to give all these different values a very high degree of realization. that might well be by far the most desirable outcome. as for the suboptimal outcome where a few people decide, i grant that we might have different views on it, and i'm not sure what to think about that. >> i'm just following instructions. -- okay, so i am wondering, to
7:45 am
the extent that this work in artificial intelligence is based on the human form of intelligence as a model, is there any thought as to whether there is an upper limit to the capabilities of that form of intelligence, and whether it might require some different form of intelligence to reach these higher levels? >> i certainly don't think the human architecture is optimal in any broad sense. it is constrained by the way biological neurons work, so there are things it just cannot do. it's also constrained by what evolution has had time to discover. so it might well be that a different form of architecture, a different kind of
7:46 am
cognition, would be associated with information processing that was more optimized for intelligence, or for some specific kind of intellectual task you want performed. if we take the human mind as the starting point -- suppose whole brain emulation succeeded, and perhaps initially we had something exactly like a human mind running on a computer -- then there are a number of things you could do to immediately increase the effective amount of intelligence at your disposal. you could make copies of it. by creating more and more instances of this human mind, you could create a kind of collective superintelligence, with the ability to think about more than any one human could. if at this point you had enough hardware capacity, you could also, by
7:47 am
speeding it up a bit -- if you run the emulated human mind on a computer that's a thousand times faster -- get a much faster version of the same mind. beyond that, you could then try more qualitative improvements. you could try to add neurons to some area in the cortex, or muck around and play around, and there are probably various things you could discover after a while. but ultimately, i think all of that would be surpassed by synthetic machine intelligence -- artificial intelligence designed from the ground up, maybe by this population of emulations doing the computer science research, but ultimately moving away from the biological architecture. it just seems very implausible that what we have, with all our biological constraints, would be the optimal information processing once you remove those constraints.
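The copy-and-speedup arithmetic above can be made concrete with a toy calculation. All numbers here are my own illustrative assumptions, not figures from the talk: a mind run k times faster experiences k subjective years per wall-clock year, and running many copies multiplies the total amount of cognitive work again.

```python
# Illustrative arithmetic only (assumed numbers, not claims from the talk):
# an emulation on faster hardware experiences proportionally more subjective
# time, and parallel copies multiply total mind-years of work per year.
SPEEDUP = 1000          # assumed speed advantage over a biological brain
COPIES = 1_000_000      # assumed number of emulation instances run in parallel

subjective_years_per_year = SPEEDUP            # for a single sped-up mind
total_mind_years_per_year = SPEEDUP * COPIES   # summed across all copies

print(subjective_years_per_year)   # 1000
print(total_mind_years_per_year)   # 1000000000
```

The point of the sketch is only that the two multipliers compound: a billion mind-years of research per calendar year, under these assumed numbers, is what makes "collective" and "speed" superintelligence distinct routes from qualitative improvement.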
7:48 am
>> just a quick follow-up. if you go with the whole brain emulation approach -- at full fidelity or at something less exact -- would you associate that with creating an artificial personality? >> there are different levels of success. at full success you would get an exact copy, intact, with values and memories and personality. but in my view, before you got the ability to create that, you would get something rougher -- something that had the same learning ability as a generic human being. maybe even before that, you would get something that didn't actually work, the sort of thing you could patch up with artificial algorithms -- maybe figure out how a particular cortical module works and then combine that with some more generic a.i. that operates nothing like a person. so all of those are
7:49 am
possible. my own guess is that if we shot for emulation in the hope of getting an a.i. that actually contained all the human values -- one that, if we then enhanced its intelligence, would inherit our values -- we would probably find that before emulation succeeded, some more generic approach would turn out to be easier: cobbling together something vaguely brain-inspired rather than really emulating, and without all the relevant dispositions captured. >> i presume many members of the audience are interested in the far future and want to become researchers or do research for some time. in which areas do you think the value is highest? i'm wondering about superintelligence -- doing research in that area, trying to find things related to
7:50 am
existential risk, or working on getting the order right for the technological developments that separate us from technological maturity. >> yes -- though some research could even have a kind of negative value. working on the control problem seems to be a very important task. that would be one candidate; if you're not working on it yourself, then maybe, by division of labor, working on something else and supporting the people who specialize in it. other possible candidates would be more general analysis of this kind of macro strategy -- either focusing on studying other existential risks and locating new ones, or methodological work on concepts that can help us be more intelligent about how
7:51 am
we approach these kinds of macro-strategic issues. there's very little research being done there, and a lot of misunderstanding; the insight we now have in this area has been produced by a small number of people. there might be important crucial considerations yet to be discovered that could radically change our view. so that is another high-expected-value area of research. the third area would be to build more general capacity -- say, inside the effective altruism community, more effective organizations that can support the aforesaid areas of investigation and other targets of opportunity that might emerge, to enhance biological cognition, or to work on global coordination problems, things like that. >> my question is actually very similar to that. i mean, there's a lot of
7:52 am
uncertainty about what kinds of technology, and in what order, we should try to develop, et cetera. i'm wondering, as an individual, what kind of general strategies one could use to pick the right projects, and how individuals can maximize their influence in reducing this threat. >> i am a believer in the principle of division of labor. for most people in the world, the most efficient way to contribute, if you want to do good with your life, is to make a lot of money and then donate a large fraction of your income to the most effective charities, supporting the people specializing in the work. for certain kinds of people -- say, if you happen to be really good at computer science or mathematics -- it may be better to work directly on the control problem. or if you have very good political skills, or a high
7:53 am
position in an organization, or if you're a successful journalist, you might work on raising public awareness. there are some additional opportunities, but those will be different for different kinds of people. so the baseline, to me, is just donation, and for some people there are additional options. >> there's going to be a decision of which things to work on. it seems possible there are some scenarios where work will actually be harmful, so it matters what you work on. my question is not about deciding what to work on, but what can individuals do to improve how well they make these choices? >> yes, we have only a fairly limited understanding of the possible research directions. i tried to describe in my book, in the last two chapters, some of my thinking about things like cognitive enhancement, emulation,
7:54 am
improving hardware, and there are other areas where we haven't yet thought things through sufficiently to even have a good guess as to which way the arrow points. for most areas, the difference you can make individually will be very small. it may be dominated by whatever difference you make by directly supporting people working in a more focused way. if you donated 1% of your income to the right group of people doing the technical work, that might count for more than which general area of technology you work in -- whether you are working on clean energy for cars or on some other source of electricity. different fields make intrinsically different contributions to the level of existential risk, but even if we could find out what those are, it might be that the elasticity of having one more
7:55 am
researcher in a field is so small that it would be trumped by more targeted interventions. >> okay, i think i get the last one. one more thing: given how much uncertainty there is, do you think it warrants putting resources into a kind of charity that researches this in particular? >> one candidate for the most effective use of resources -- we work quite closely with the centre for effective altruism, which is like an umbrella for a couple of different organizations, including 80,000 hours and giving what we can. their main objective is to grow the effective altruism community, which is people trying to do good in a more efficient way and thinking quite hard about prioritization issues. initially the focus was on how to help the poorest people in the world most cost-effectively, but some of the
7:56 am
key people are very concerned with causes that aim to affect the long-term future and reduce existential risk. so supporting the growth of that movement is a possible candidate for the best use of additional manpower or money. >> hi, this is not a crucial point, but i'm wondering if your calculation of the i.q. gain from embryo selection assumed that the variance of the iqs of embryos from a given couple is the same as the variance of iqs in the population as a whole, when in fact you have a limited number of combinations. >> i am trying to remember what exactly we assumed. carl, do you remember? at some point you run out of selective power -- there are certain
7:57 am
alleles you might not find if you start only from the genes of two people. for example, there are rare alleles where one copy gives you a point but being homozygous costs you, or something like that. so if you start with a particular person or couple that doesn't have -- that is not heterozygous, you would not be able to get those additional -- [inaudible] >> our calculation incorporated that, right? yeah. >> so it seems like we've been kind of dancing around the moral issue here -- in the simplest case, whether we should be worried about existential risk at all -- and i think you've been careful to speak about value
7:58 am
systems in general rather than one in particular. but i'm interested in, sort of -- we haven't spent a lot of time on what your attitude is towards moral realism, and more generally whether you think there is some trade-off between allocating people to research moral philosophy rather than, say, the aversion of risk, and what that trade-off should be. >> yes. i don't have a strong view on the nature of moral truth. a lot of moral philosophers have thought about that and still haven't reached a consensus, and i haven't done much work on it myself. there might be some returns to improvements in moral philosophy, but i don't think they will necessarily be needed in order to, say, get the a.i. transition right.
7:59 am
i didn't have time to talk about this at all, but if you have this particular scenario where one seed a.i. grows into superintelligence and shapes the future, obviously it becomes very critical what goal the a.i. has. the problem of selecting that goal and giving it to the a.i. divides into two parts. on one hand there is a technical problem: how you actually embed a goal in the initial subhuman a.i. such that it is ultimately expressed -- some kind of human value, say. but there's also a philosophical problem, which is which goal to select in the first place. if this goal is really going to shape the future, a lot would depend on making the choice wisely. but there is an idea about how to go about that which to some extent removes the requirement to decide it now, which is indirect normativity. rather than trying to directly specify some set of
8:00 am
features that you want the world to have, you instead specify a process whereby the a.i. would be able to find out what it is that you would have wanted. to make it more concrete: what would we have asked for if we had had 40,000 years to think about this question, and if we had been smarter, and if we had known more? now you have translated the problem into an empirical question. the empirical question is what we would have asked for under those circumstances, and we can leverage the a.i.'s intelligence to get a better estimate of the answer to that empirical question than if we just did it ourselves and made up our own moral philosophy. so the hope is that, as far as the a.i. is concerned, we don't need to find the correct moral theory
8:01 am
.. maybe there are some differences there. it would warrant at least some attention, even if the probability were not very large. so i think some work on it would be useful.
8:02 am
that might not be the most cost-effective thing to do, but compared to most ways of spending money it seems quite desirable. >> is it ethical to control or chain a superintelligence to humanity, in a sense forcing it to support us, when we would be sort of a lesser being at that point? >> if we're going to build it at all, we have to build it one way or another. so it's just a matter of two things: which way we build it, and which goal it will ultimately have. i think that if we approach this from an anthropomorphic point of view, a lot of our intuitions come from what a human would feel. humans don't have one clean goal at the top, with everything else being instrumental values derived from that goal. we are composites -- parts of our brains
8:03 am
want different things. we think that if somebody is a slave, even if he says he's okay with it, some part deep inside himself is not really satisfied. but none of that need be true of an artificial intelligence. there might be no part at all that is frustrated by serving some goal defined by humans. i didn't have time to get to this, but i defend this thesis -- the orthogonality thesis -- which claims that intelligence and values are orthogonal. you could have a superintelligence whose only goal is to maximize the number of paperclips it produces over its lifetime, or one that only wants to calculate digits of pi, or one that wants to realize the highest reaches of human value. all of these are equally
8:04 am
compatible with high levels of intelligence. so i figure we might as well pick one that will result in something we would recognize to be valuable. it is true that there might be morally important entities within the a.i. -- maybe some of its processes are conscious and have moral status -- and then clearly that has to be taken into account. >> wait, were you coming back to me? [laughter] >> go ahead. sorry. hi. so i see it as highly unlikely that an artificial intelligence coming about in this kind of competitive, oligarchic
8:05 am
capitalist society could ever really have an inclusive goal that served all of humanity. and so i was just kind of curious what your views on that are -- like, do we need a shift in our overall global system before we should start gunning for this, or what do you think about that? >> well, my guess is that the larger chunk of the problem is getting anything remotely resembling any human value at all -- that is already most of it. on top of that there's the concern that the values would be selected by a few people, and not selected in the way people would select for the common good. the ideal way to go about this would be to first solve the control problem, before we solve the intelligence problem; then maybe we would wait another generation or two to make sure there was no flaw in the solution, and only then reap the
8:06 am
enormous rewards. after a period of scrutiny we would eventually launch the a.i., which would be safe, and share the benefits. but that presupposes solving the whole global coordination problem, and i don't think that's the way it's going to happen. it's going to happen pretty much as soon as someone figures out how to make it happen. so it is a race between the people trying to figure out a solution to the control problem and the people trying to figure out a solution to the intelligence problem. ultimately, to hark back to this idea of differential technological development: we need solutions to both, but it would be good if they came in the right order -- a solution to the control problem first. there are two different levers: you could try to slow down the development of machine intelligence, or you could try to accelerate work on the control problem. i think there is vastly more leverage in
8:07 am
the latter. there are tens of thousands, if not hundreds of thousands, of people working on improving computers and software and hardware and all of these things around the world. it's very hard for one individual to make much difference to that, whereas on the control problem maybe there are, like, six people in the world working full-time; one extra person would make a visible difference. plus, nobody opposes work on the control problem, whereas people would be your enemies if you tried to slow a.i. down. so if you want to affect the sequence in which the problems are solved, it makes more sense to work on accelerating solutions. if you could also just make society much fairer, that also seems a positive thing to do, but there you face the challenge of finding a really powerful lever -- one where a small organization can make a big difference at the global level.
8:08 am
>> it seems like a lot of people in the mainstream a.i. community, and in the general population as well, are skeptical of superintelligence. why do you think that is? how do we get more people to think critically about it? >> we should have asked stuart -- i think he stepped away right after firing us up. [laughter] well, first one has to check what they are actually skeptical about. i'm skeptical about some claims too -- i'm skeptical about the claim that a.i. is just around the corner, let's say, or that things are progressing on these exponential curves and we can predict with high accuracy exactly when human-level a.i. will occur. so some of these people who like to strike skeptical attitudes, at the end
8:09 am
of the day, wouldn't necessarily disagree that much if you specified a more precise claim with appropriate probabilities attached. if they still disagree -- well, i think there are some who have views about the mind which are non-reductionist: they think there's some weird conscious stuff happening, or some immaterial stuff. some, i think, are just very impressed by the fact that people have predicted a.i. in the past and have been wrong. i think that is some evidence, but it's not very strong evidence against a.i. happening in, like, half a century. so that would be my guess as to why there's this level of skepticism. and one more thing: there's also a tendency, sometimes, for some researchers in labs to overhype what their systems can do.
8:10 am
like, if you're building a robot, you want to make it seem like it's doing some advanced, amazing stuff. other people react against that. they've been duped, or have seen other people being duped, by these claims, and want to pour more cold water on them. that water spills over onto the more abstract claims about what might be feasible at some point. >> i guess, how do we get more people -- [inaudible] >> i'm interested in any great ideas for doing that. [laughter] >> start with the book. >> on one hand it seems probably true, in the sense that the mathematical space of possible superintelligences is so vast that somewhere in there there's one that harnesses its intelligence toward any arbitrary goal you could conceive of. but on the other hand, humans
8:11 am
exhibit this behavior of pursuing one goal until their understanding of the world, or of that goal, reaches a point of conflict, and they say, wait a minute, this goal doesn't actually make sense anymore; i've had some experiences that changed what i think the right thing to do is. and i'm wondering -- there hasn't been much conversation about this property of superintelligence, whether it would actually have a fixed desire, what a goal is and how it relates -- do you think a superintelligence would have this kind of ethical flexibility? >> well, with human minds, i think there's this hodgepodge of different conflicting drives -- metaphorically speaking; this is my personal folk cognitive theory -- different parts with
8:12 am
different sorts of agendas that try to strike some compromise, and maybe depending on what time of day it is and what the social stimuli are, different parts rise to the top. we face these problems of weakness of will, where the nobler parts of our mind try to take control of the baser parts. maybe what happens when humans realize some value doesn't make sense is something analogous to a broader coalition of parts suddenly figuring out a way to propose a new goal that satisfies a wide range of powerful mental constituents -- analogous to overthrowing the government. i'm sure that whatever the true story is, it's a lot more complicated than that, but there's enough complexity in the human brain that you could imagine all kinds of things happening that could account for these apparent shifts of final goal. it might also be a hormonal thing that changes when you're a teenager, and maybe there isn't some big story about how you discovered the meaning
8:13 am
of life -- it's just that you have more testosterone or something. all these things are possible. in the space of artificial minds there would be some artificial minds which also have this kind of process. i'm not sure it would be better if they had it, because it just makes the outcome a lot less predictable. it's hard enough to get this right when you can see exactly what goal the whole thing is trying to pursue. the hope that the right kind of goal emerges from some complicated dynamic of many competing goals seems almost like hoping for a miracle. it happened in humans because we evolved to end up with a particular type of behavioral outcome, but there's no reason to expect that kind of natural, humanly meaningful goal to emerge from a random assortment of different, competing processes, would be my guess. >> hello. so you talked about, kind of,
8:14 am
humans getting better at collaborating on a grand scale. and what i thought of when i heard that was, it sounds like we could make some kind of cultural gains or improvements in that area that would be beneficial in advance of a certain technological improvement. and i was curious what you think some of those might be? >> i can see that that would be a valuable thing to achieve. i'm afraid i don't see any very powerful way to go about achieving it. a lot of people have tried, in many different ways, for many years to make the world more peaceful and just, with marginal success. that success is slow in coming, and it is the effect of, like, millions of people really working at it. so if there were some very easy thing that someone could do that would radically make the world more peaceful,
8:15 am
probably somebody would already have thought of it. but maybe new opportunities will open up; there could be new insights, there could be new intervention points -- like with the internet and things like that, new ways of doing things that couldn't have been done before. or maybe in collective decision-making there is some way to try to affect what choices people make; there could be new areas where there are powerful ways of shifting global values. but these are the kinds of things people like to struggle over. where there is some lever that could affect what people's values are, all the vested interests will try to push their own way, which means it would be hard for an individual to make a big difference. whereas if there's some other issue that almost no one is thinking about at all, there will be a greater chance that you
8:16 am
could walk up to the lever and pull it, and no one would resist. >> can i see if i understood? it seems like it would be valuable if we could improve, like, how much peace there is in the world, or how much -- >> yeah, but before i would start working directly on that, rather than in other, indirect ways, i would want to see a promising idea for how to have a big impact on the level of peace in the world. sure, if you're president obama, or some other very powerful entity, you could meaningfully affect the amount of peace there is in the world. but a small, random individual will have only a small effect on that, i think, unless we find some good new idea for how to go about it. >> hello, and thank you. i'm very interested in trying to reverse-engineer what pleasure and pain are, in order
8:17 am
for us to come up with an information-theoretic definition of them. and i'm wondering if you see this as having any bearing on the control problem, and whether you see it affecting expected utility. >> it seems like an interesting project, if one could come up with a really neat theory that captured the relevant property. there are some questions that seem interesting and relevant even though you can't say exactly how they are relevant, and i think there's some value in those. sometimes they only become relevant when combined with another insight, and then suddenly you get an implication that changes what you should do. a lot of these are like that -- the kind of thing that, having seen it, looks like it's probably going to be relevant for a lot of things, although you don't see exactly how it affects how you should spend your money or time. or
8:18 am
the simulation argument -- another one of those. the question of what pain and pleasure are is in that general area. with more time i could maybe point to more practical questions where it would be important, but it seems like the kind of thing where, if you could find the true answer, it would be worth learning. >> for a computer scientist, what are some of the lowest-hanging fruit in terms of helping the world at large when researching a.i.? >> well, unless you focus narrowly on the control problem and the foundational mathematical work there, you might not be best off focusing on a.i. at all, but on improving the --