
CSIS Discussion on Artificial Intelligence and National Security, CSPAN, November 5, 2018, 10:04am-12:02pm EST

10:04 am
[inaudible conversations]
10:05 am
>> good morning, everybody. congratulations on getting through the rain. i know it's been wet and soggy, and usually when we have rain like that, 90% of the people skip it, so i'm pleased and proud to see you. thank you for coming. my name is john hamre, i'm the president of csis, and my role is largely ornamental: to say welcome to all of you. i do want to run through a safety announcement. we do this when we have groups from outside. briefly, if we do hear an announcement, and we've never had one happen in the five years we've been here, but if we do, andrew will be responsible for you. we're going to exit through that door or this door. they will both take us to the stairs that are right by that door. we will go down to the alley,
10:06 am
take two left-hand turns, and we will go over to national geographic and we will assemble there. they have a great show right now at national geographic, about the titanic. people don't know that discovering the titanic was actually a cover story for a secret navy mission. we lost the scorpion, a submarine. we wanted to find it and learn more about it, so it was a cover story, you can imagine. after they had done all the work on the submarine, they spent one week to find the titanic. it's a great show. i'll pay for the tickets if anything happens. if nothing happens you pay for your own ticket. but do go see it. we're delighted to have everybody here, and i want to say thank you to those who made it possible for us to have this conference with you today. it's about artificial intelligence. these are the most often
10:07 am
spoken buzzwords that nobody knows anything about in washington. we go through phases like this. i remember when big data was the buzzword in washington. right now artificial intelligence is the buzzword. everybody's thinking about it, and there's really not enough intellectual context to understand what we're talking about. that's what the study is about. andrew is going to, in a few minutes, give you kind of a background of the study that we're releasing today, and a video will introduce it. but this is one of those very interesting questions where an enormous amount of momentum is moving all around us, and what are the issues associated with what we're discovering, and how are we going to manage it? these are open ended questions for which we really don't have answers. the purpose of the conversation is to lay out a framework and then to hear from three experts that are going to help us talk about
10:08 am
this more, in a more structured way. before we turn to them let me ask alan to come up on the stage. i want to say thank you again for giving us this opportunity. [applause] >> hello and thank you all for coming out, and thanks, john. we do sincerely appreciate the opportunity to be with you and csis very much. i just have a few minutes, so let me just make a couple points before we get on with the program. we are very proud to be a sponsor of this report and this event. i've had a chance to read an early draft of what you are about to see, and i can tell you it is filled with what we consider very important and very interesting insights into artificial intelligence and its impact. i'd like to use a few minutes here to tell you why thales felt that sponsoring this project was
10:09 am
important, not just to us but to our entire industry. for those of you not familiar with us, we are a very large technology company, european-based but with a global footprint, especially here in the u.s. and north america. we serve five very large vertical markets: space, where we are a large provider of satellites and payloads, including for the space station; aerospace, commercial and military aircraft and air traffic management; ground transportation, and if you ride the new york subway system you have had a chance to experience our products in action; security, both cyber and physical security, such as at airports; and, of course, defense, where we are one of the ten largest defense providers on earth. in each case you can appreciate that we address some of the most challenging and complex problems that are faced, those that really impact critical decisions and those
10:10 am
that occur at the most sensitive times. so in other words, what we're involved with affects lives. now, we recognized a handful of years ago that there are emerging technologies that will disrupt our businesses and our markets. they include areas like the big data analytics that john referred to, but also cybersecurity, the internet of things and especially artificial intelligence. we as a company have made significant investments in each in order to stay in front for our businesses. in the case of artificial intelligence, we're already incorporating this technology to help us solve some important use cases for specific problems. just to give you one example, as a large collector of satellite imagery data, we now apply artificial intelligence to help decipher or make sense of satellite imagery, to detect sensitive or perhaps threatening items among a very large set of
10:11 am
imagery data. in things like airport security, we're doing facial recognition to detect potential threats. given the nature and complexity of the problems we help solve and the nature of the customers we serve, our success is going to depend not just on technology but on other topics. we felt we needed to better understand the role of government in artificial intelligence, and issues such as ai reliability. you'll hear about explainable ai and verification. these are especially important topics and critical for defense applications. none of us wants to apply artificial intelligence to create the next terminator, for example, so these are particularly important topics. we need to better understand the hazards and risks, and most importantly, at thales we want to
10:12 am
help shape the conversation about ai because we think it's important. we want to be part of the collective force for good and ethical applications of ai, to really help address our world's most foundational challenges. and so with those objectives in mind we embarked on this initiative partnering with csis; we felt they would bring and could mobilize the best breadth and depth of strategic thinking and expertise to successfully address this topic at this stage. so from a thales perspective we look forward to this presentation and panel and to the continued engagement with csis and all of you. we thank you for coming, and i thank you for the few minutes here this morning. we are going to see the brief video that john mentioned. i think it's about three and a half minutes, and i think it will provide you a good summary of some of the initial insights you will hear the panel expound upon shortly. so thank you very much.
10:13 am
[applause] ♪ >> artificial intelligence is a uniquely complex field. we had thought about it for centuries, worked on the modern version of it for over 60 years and made significant breakthroughs, especially in machine learning, since 2012. yet compared to what it could someday become, ai has only started. in an emerging field, investors will spread their bets, hoping a few hit big enough to justify the rest. ai will deliver the greatest rewards to those prepared to make long-term investments, and
10:14 am
investing in ai applications alone will not ensure success. the defense capabilities of ai depend on a complex support system, an ai ecosystem. when properly executed, this enables ai to take root, to develop and improve on human performance. but a fully developed ai ecosystem and the kinds of results that justify its expense don't happen overnight, and in many cases they don't happen at all. most public and private organizations, including the department of defense, are woefully underinvested in the support structures that ai depends on. this creates a debt that must be paid upfront to allow for successful ai adoption. until it is tackled, the debt of an underdeveloped ai ecosystem will only grow, undermining long-term success. the smartest investors have begun to understand how this ai startup debt must be dealt with in their initial investments across the field.
10:15 am
since ai emerged as a practical reality in the past decade, the private sector has dominated ai investment. tech companies are perfecting the painstaking process of making these leaps. meanwhile, commercial adopters are starting to think critically about what can make their ai applications worthwhile long-term. many have seen the consequences of ignoring deficits in the ai ecosystem and now allocate resources accordingly. most government ai adopters start out far more underinvested than commercial users. the department of defense exemplifies this issue. if the nation's strategic goals for ai are to be realized, investing in the ai ecosystem must be a top priority. this investment will lay the groundwork for wider government adoption of ai. while they can and do leverage commercial ai for strategic uses, there are some areas where
10:16 am
commercial developers will not invest. the technology required to deliver and verify ai results for national security applications differs somewhat from what is expected of commercial ai: it must operate in specialized, high-risk areas and be extremely secure, assured, reliable and explainable. verification and validation are essential for these systems. the development of this technology is vital to the national interest and must be fast tracked. by doing this the government can also make a critical difference in ai in the wider commercial market. this public sector development could yield breakthroughs in the field. a viable national strategy for artificial intelligence would require investing in the ai ecosystem to pay down debt, especially in the workforce, and spreading bets across the ai field. for the public sector it will be critical to focus on ai reliability. if the government works closely
10:17 am
with the commercial sector to drive the technology forward, the u.s. can leverage ai to achieve its strategic objectives. >> all right. thank you again everyone for coming today. john opened by saying he was ornamental. i may be a little redundant, because we tried to pack as much of our report's major findings as we could into the video which you've just seen. i hope you enjoyed it, and it will be out there on the web, on our website, on youtube and possibly a couple of other outlets for those who want to follow up and watch it again. i am going to briefly run you through the very top level findings of our report. there's a lot more there. it's about a 78-page report, so i encourage everyone to grab a copy on your way out, and if we ran out of those, which we may
10:18 am
have based on the size of the crowd, there will be more available through the website shortly. let me give you that top level overview. i did want to start by thanking thales for the support for this project. i also want to thank lindsay shepherd, who was a lead author on the report and really the engine behind the project, and our two other contributing authors, robert and leonora, who worked very hard to put the report together and make it look good and sound good and put actual thoughts in it. i also want to thank the attendees at our many workshops on this project; we had six workshops. there was a hard core of about 20 folks who came to most of them if not all of them, and another group of ten to 15 who came for some of the sessions. none of the errors in this report are their fault, i hasten to say. they can disown every aspect of it, but the insights were deeply valuable to us as we went through this process.
10:19 am
john touched on the fact that one of the foundational questions when you're going to do an ai study is to start with the question of what you are actually talking about when it comes to artificial intelligence. it can be a meaningless term depending on the level of knowledge of the person talking about it and the problem that they're trying to bring a solution to. there's a lot of good work being done on artificial intelligence, so i want to begin by saying we defined ai specifically for this report not because we want to criticize or critique anyone else's definition, but because we needed to have an understanding of what our scope was to do a useful project. our focus was narrow ai. we didn't try to get into questions of general artificial
10:20 am
intelligence and the issues and problems that causes, largely because our timeframe was relatively near-term focused, the next five to ten years. our judgment is that during that time frame the issues of narrow ai are going to dominate how this field develops and the significance that it has for people trying to implement ai solutions and government actors trying to understand the technology and capitalize on the technology. and so by narrow ai we mean artificial intelligence as a technology that delivers problem-specific, task-dependent solutions to cognitive problems. the way that ai operates is very different in many ways from what we would normally think of as human intelligence; the kind of problem solving that a lot of these algorithms engage in bears little to no resemblance to what we would think of as human cognition. so what this is as we look at it here: the
10:21 am
technologies within the field that were of the greatest concern to us are not things trying to mimic human intelligence but things trying to perform tasks and solve problems in whatever way they can. this study was also a fairly broad look. we looked at issues of ai investment, issues of ai adoption, issues of ai management, and did a bit of a survey of international activity in ai. it was a very broad study, and i think there would be a lot to be gained by going in deeper on each of these topics. what you will see is in some ways the highlights, because of that very broad look. one of the things we tried to do with our first effort was to come up with some kind of conceptual framework for understanding the arc of ai: how does progress in the ai field
10:22 am
happen, how is it likely to proceed. there's a lot of different ways of looking at it. there are increasing degrees of autonomy in ai systems. there are increasing degrees of collaboration in the way ai operates that lead to higher order effects, higher order applications of ai over time, we hope. we tried to capture that in a framework that we can visualize. along the bottom we look at ai capability, starting with the very narrowest possible task, like a telephone switchboard, which is just trying to move communication between two channels or two users in an intelligent and accurate way, connecting the right people at the right time, and then building up towards broader and broader, more general purpose tasks over time. and in some cases you can think of that in terms of the ai acting entirely on its own, becoming
10:23 am
increasingly autonomous. but that's not the only path, and for some defense applications it may not be the most important path, because as you have heard a lot of senior leaders at the department of defense articulate, we're not at a point now where anyone is willing to sign up for completely autonomous execution of many critical defense missions. there needs to be a human in the loop. one of the other dimensions is how does the ai take in the context of the problem, of the world in which it's operating, of other actors, human actors, other ai actors that may be opponents. they may be collaborators in the process. and so the exposure or the ability of the ai to move up this hierarchy of behaviors, becoming more interactive, more collaborative and then ultimately more able to act in a closer approximation of how a human-based intelligent actor would act over time.
10:24 am
there's a lot more discussion in the report, and i encourage you to go into it. in terms of how it informed our work, i would say we tried not to be overly focused on the increasing levels of autonomy as the only dimension of ai progress. what you will see, we have a disconnect on this chart, as you will see. there's a long way to go. if we were to do a non-disconnected chart out to some of the applications that we can only as yet speculate about, you would see we are still very much in the early stages of ai development; there is a long way to go to get to some of the more advanced applications that have been discussed and imagined.
10:25 am
i wanted to revisit this chart on the importance of the ai ecosystem. if there's nothing else we want you to walk out of this room thinking about from this report, it's the importance of the ai ecosystem. this was our biggest takeaway: for all of the importance of the different ways of implementing ai technology through machine learning, computer vision, other elements of the field, there is something fundamental that gets at how this works, how it can be usefully implemented and managed over time. it's this collection of things that we've termed the ai ecosystem. it's people, the workforce, the technical workforce that is developing but also engaging, managing and using the ai technology. as you see in our little symbol, it's the ability to secure the data on
10:26 am
which the ai operates. it's the foundational piece of the tool: being able to secure the data, but also being able to work on gathering data, data quality, and those issues. it's the ability to have a network, having the computing power to process the data and a network to share data, so that the critical applications get the data they need when they need it, are able to do the training, but then are also able to do their mission specific tasks. it's the policies required to actually manage ai, where there's a lot of work to be done. there's a very good report put out earlier this year which looked a lot at policy issues and strategy for ai, and i recommend it to everyone taking on the topic, although they touched on a lot of things we looked at as well. and then last but not least, the ability to verify and validate what ai tools actually do, particularly for government
10:27 am
users and high consequence missions, many of which we see at the department of defense. the ability to validate forms of ai as the field grows and evolves is not only critical but incredibly challenging. in other words, we don't know how to do that at this point in time. i think we'll talk more about that in the panel discussion. we looked a little bit at ai investment, and i want to also give credit: we leaned heavily on data gathered by mckinsey, a very good report on ai investment, some work done by -- to look at ai in the federal government space and where it lies. then there's a great report called ai for the american people that the white house put out that summarizes some of the white house investment. some of the takeaway there is very rapid growth in ai investment starting in about 2010 and really starting to skyrocket in 2012, as the curve
10:28 am
really went vertical, driven a lot by machine learning and the take-up of machine learning as a technique for ai that was delivering very significant results. in 2016 you see companies across the space, inclusive of private equity and companies investing internally, making about $26-39 billion of investment. we also see, in terms of government investment, looking at the broadest possible category, which is networking and information technology research and development, about $4.5 billion per year from 2016 to 2018 on average. a lot of money going into this. i mentioned that that was a very broad category. one of the things you can get wrapped up in with ai is what constitutes real ai.
10:29 am
we have somewhat sidestepped that debate, because our argument is that the importance of the ai ecosystem means an investment in critical computing capability or networking is so foundational and so supportive of what you need within the ai ecosystem that it's worth considering that as an ai related investment. we have not tried to split hairs and say what's specifically ai, what's an algorithmic investment that isn't ai because it doesn't meet some criteria. we have chosen not to get into that kind of fine distinction making, because we think the categorical look is important. we also looked at ai adoption. what does it take, what would a user want to know or need to know or need to have in order to make an effective implementation of ai? we tried to do a bit of a survey of where you see ai being implemented. as i mentioned, in the commercial space, machine learning has taken off in a big way.
10:30 am
we see ai in the financial industry, the insurance industry, and in the advertising industry in a big way, especially online. i should also mention self driving vehicles, another huge area of commercial investment that's driving the field forward. on the government side there's also quite a bit of ai progress, effort and forward momentum going on. we see this with image recognition, with logistics applications, with unmanned applications. project maven is a relatively well-known government effort which has made a lot of progress in recent years and has seen significant investment and leadership interest. the sea hunter unmanned surface vessel has likewise been driving the field forward on the government side, and there's been some pretty innovative work done in the marine corps on the logistics side to try to capitalize on ai, and also within
10:31 am
the f-35 program in the logistics area, some incorporation of ai that we see taking place. for us, one of the big things that jumped out as we talked about ai adoption was the significant startup debts that have to be dealt with. this chart says technical debt at the top, but think of it as technical debt and workforce debt, because we want to emphasize that the debt is at least as much, or possibly more so, on the workforce side as it is on the networking and computer infrastructure and data housing and data collection side. because without the workforce, you really have very little to work with even if you can gather the data. it's not as easy as maybe just saying you can gather the data; there's an immense amount of data, and a lot of work to be done to make sure that it's quality data, useful data, and it's the workforce that has to do that work. but what we heard when we got into ai adoption,
10:32 am
over and over again from experts, is that this issue of the startup debt that people trying to implement ai face is the biggest issue that's out there; the specifics of the technology are not right now what is holding the field back. it is the startup debt. and again, we tried to incorporate that into the idea of the ai ecosystem, that there are debts to be paid across all aspects of the ai ecosystem, and that's what we hope will be the major takeaway of our look at this effort. we also had a session looking specifically at, if we really have ai in widespread use, what are the issues associated with actually managing that usage? people think of ai, and i would say we didn't think of ai as a general intelligence entity, but if you think of ai as a unit of application, to use some dod terminology, within the force, how would you kind of command and control that force? what are the issues that would come
10:33 am
up using that to achieve missions in a real tactical operational context? what we discovered is that there are issues at the tactical level for how you use ai, very much tied to the workforce, but also at the operational level, the middle manager level, where a lack of familiarity or understanding of ai can completely stymie the ability to use these tools and frustrate the tactical folks who are maybe young and innovative and able to embrace and grasp this technology. if they can't get the resources and support from the next level up of management, they can be completely stymied in their ability to execute. then there's the strategic level, broad organizational level policies; in the dod context, dod policies, procedures, guidelines and legal issues. one of the things that came out on that side is that there's a lot of work to be done on understanding intellectual property and the significance and ownership and licensing of intellectual
10:34 am
property in an ai context: when you have algorithms generating intellectual property, how does the management and ownership of that manifest? there's a tremendous amount of work to be done at each of these levels to really make ai useful, especially for high consequence missions. and then trust, reliability and security, something that gets hit hard in the video: for an operational user of ai to truly put that into a high consequence scenario, understanding how it operates, what it is doing, why it's doing what it is doing, and how we know it's going to behave in a way we expect, or at least in a safe way, over time. and lastly, the study looked at some of the international activity in ai. this has gotten a lot of attention. we dig into it in some detail across a survey of countries. i don't want to run you through a death march of all of them;
10:35 am
china obviously stands out as a huge investor, russia as well. one major takeaway with the international survey is that the number of countries that are making significant efforts on ai is vast. there is actually a global competition going on. the chinese are heavily committed, as are others. what these countries are seeking to do with ai varies very much; there are almost as many different ideas about ways of using ai to promote national interests as there are countries engaged in doing it. again, here we return to the force of the ai ecosystem, because from a global competitive aspect, i walk away from this study not so much concerned that there's going to be a specific technology developed in china or in russia that will give them some indomitable advantage that will put them out ahead of the us or others in the world, or vice versa, as
10:36 am
much as the robustness of the ai ecosystem will be what confers advantage over the longer term for countries engaged in work on ai. and then on recommendations, again, i refer you to the report for the full list. we think issues of ai trust and security are critical areas for u.s. government investment. the degree to which this is required for high consequence government missions is in excess, i think, of where the field on its own will be able to go. there's a critical need for investment, particularly, as i said, on verification and validation, where there's a lot of theoretical work to be done to understand how this is even possible. ai challenges some of our traditional methods of doing test and verification and validation. i have highlighted already our point about the workforce: the criticality of developing and nurturing the workforce, having access to the workforce, being able to get the workforce into
10:37 am
the organization. there is a lot the commercial industry is going to do in ai development; they will take the lead. but if we think we can usefully use ai for government missions without ai personnel in the government, we are kidding ourselves. we need this talent organically as well as out there in the private sector, where the bulk of the investment will be happening. then there's the importance of digital capabilities: although there are some strengths in government digital capability, as you see reflected in the early adopters of government ai technology such as the intelligence community, where there has been significant investment over a series of years, there is still an immense amount that needs to be done on the government side. lastly, on policies: being able to, number one, manage and safely use ai in a government context, and in terms of cooperating with the private sector on successful acquisition of software, which ai fundamentally is, there's a lot of work to be done
10:38 am
here. and so i will leave our slide up on the ai ecosystem to try to drive it home. that concludes a broad overview of our findings and recommendations. i want to turn now to our panel, as they join me up here on the podium. i will let you know up front we did have four panelists lined up. unfortunately, the representative of ibm was flying down this morning and his flight was delayed and possibly not even going to take off. unless he comes in dragging his bags behind him and hammers on the glass, i don't think he will be able to join us today, but we do have a fine set of panelists and i'm going to join them now at the table.
10:39 am
>> thank you, ladies and gentlemen, for joining us today for the discussion. i will introduce our panel and then we'll move into it. to my left is ryan lewis, who is a vice president of cosmiq works, which is an in-q-tel lab dedicated to helping u.s. national security agencies, commercial organizations, academia and nonprofits leverage emerging remote sensing capabilities and recent advances in machine learning technologies, particularly computer vision. thank you for joining us, ryan. to his left is erin hawley, who is vice president of public sector at datarobot, which is a company that is in the artificial intelligence business, using artificial intelligence on data. she can tell you more about how that works. she works closely with government customers, and datarobot has a substantial commercial business as well,
10:40 am
using ai to draw insights from large volumes of data. and to her left is david sparrow. david is at the institute for defense analyses; both david and erin were regular attendees at our workshops, so i want to thank them for that. david has a phd in physics from mit. he spent 12 years as an academic physicist and then joined ida in 1996, where his work has covered technology insertion and ground combat platforms, and recently he has done a deep dive on the challenges of autonomous systems, the technological maturity of intelligent machines, and on test and evaluation, verification and validation of autonomous systems driven by artificial intelligence. so thank you for joining us this morning. i want to start by giving each of our panelists an opportunity to give us a few thoughts, first
10:41 am
of all, on your candid perception of ai challenges. if you want to reference the report, that's great. then we'll get to more specific questions, but ryan, why don't we start with you? >> first of all, thanks for the opportunity to speak here today. i had the opportunity to read the report over the weekend, and i loved all 78 pages of it. as mentioned, in-q-tel serves as a strategic investor for the u.s. intelligence community, and in the labs we go one step further and focus on applied research projects. that offers us a unique perspective in terms of what's happening not just within the artificial intelligence market, in terms of both incumbent activities as well as startups, but also in terms of what some would argue is the
10:42 am
leading edge that's coming out of academia or other national laboratories. when we compile all of our experiences into one, perhaps the simplest way to summarize what we're seeing in the market today is simply that ai in its general sense offers a fundamental chance for the intelligence community and military to rethink some of their applications and processes. the key word there is offers. as mentioned, a lot of these technologies are in the very early stages, with some very exciting and tractable results, but there is still so much work to be done. i think for us that opens up the perspective in terms of how we think about implications, both near-term and long-term, of this type of technology. it's important to set the stage, because that comment is in stark contrast with all the hype we
10:43 am
hear around ai in general. by show of hands, how many people here have heard of the computer vision and pattern recognition conference? i know a couple people have. so if you haven't heard of that conference, that conference sold out faster than capitals playoff tickets. and that should be startling to all of us in some ways, because the question is why. why are people so excited about going to a conference that just a few years ago was not well heard of? the reason is that researchers just now have the opportunity to present very tractable results in very niche focus areas that can maybe be expanded beyond that niche. when we think about the implications from a national security perspective, what that means is how can we harness some of that excitement? we highlighted at the macro
10:44 am
level, as is well put in the report, how do we start to design infrastructure for the ecosystem. but more generally, i think for us, as we look at specific applications, one of the core parts is the human-machine interface. what does it look like? in some cases, whether it's an end-user using different analytical tools or a specific robot, or if it is a data scientist building their own models, what are the expectations for those employees, and what do we anticipate the lifecycle of those tools to be? it's a completely different way of thinking about a problem from your workforce perspective. the other piece, from an applications view, is that these technologies are still in experimental stages, which can be at times very frustrating, as my colleagues will tell you. but what isn't frustrating,
10:45 am
what's really cool, is that it allows us now in these early days to begin to figure out which processes we want to change and which ones we think are strong. it may be hard to believe, but deep learning isn't a solution to everything. i know that's sacrilegious in some areas, but it's important to know when these tools should be applied and when they shouldn't. these are some of the questions where we can highlight examples, but the takeaway is that these technologies allow for early experimentation, which could have drastic effects in certain applications or processes and may be only tertiary in others. >> next up is erin. i should say, because i want to make sure we're not misunderstood, we're talking about a broad problem for government in making ai effective and useful. there are companies out there doing really good stuff with ai, and erin works for one. we're not trying to say nobody out there is doing really useful stuff with ai today. they are, but there is this larger issue. erin,
10:46 am
you can lead us to more insights on that. >> thank you so much. we are thrilled to be here today. datarobot is a company founded back in about 2012. our ceo was working in the insurance industry, and he is a data scientist. data scientists who are currently working at facebook and google and amazon are spending weeks to months to really develop some of the strongest algorithms in the world in order to help predict some of the things that could happen in the business place. our ceo took part himself in data science competitions, one called --, where every data scientist or data analyst who was interested in partaking in the contest, whether they were from allstate or netflix, could work on a certain challenge. he and his partner, even though they worked for an insurance company, realized it takes weeks and months to develop a single algorithm. that's just too long. we're going to be beaten day in and
10:47 am
day out by china and russia's advances and what they're trying to accomplish if we don't try to take a step ahead. in 2012, thanks to in-q-tel as well with investment, datarobot was formed, and the idea behind it was that we need to make a software platform where we take the best pieces that data scientists bring to the table, which is a combination of having really strong domain expertise, as well as a really great background in computer science, and the third being very strong mathematics and statistics, and bring it together in a software platform. rather than taking weeks to months to answer a single question, we developed a software platform so that instead of being limited to a single algorithm, we offer the chance for people to put their data in and be able to generate hundreds of different models within a few minutes, versus the weeks or months which we don't have the time for anymore.
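(for illustration, a minimal sketch of the idea behind automated machine learning: fit several candidate model families on the same data and rank them on a leaderboard by cross-validated score, rather than hand-building one algorithm; this is a conceptual stand-in, not datarobot's platform:)

```python
# a minimal automated-machine-learning sketch: try several model families
# on the same data and rank them by cross-validated score. a conceptual
# stand-in for the idea described above, not an actual product.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

candidates = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "gradient_boosting": GradientBoostingClassifier(random_state=0),
}

# score every candidate the same way and print the leaderboard
leaderboard = sorted(
    ((cross_val_score(model, X, y, cv=5).mean(), name)
     for name, model in candidates.items()),
    reverse=True,
)
for score, name in leaderboard:
    print(f"{name}: {score:.3f}")
```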
10:48 am
with the folks that we have in our organization, we took three years. with the investments we had, instead of putting a product to market within six months, the executive team decided we would take those three years and $30 million in order to make sure we built the strongest platform for what we do, which is automated machine learning. under the umbrella of artificial intelligence there's machine learning, natural language processing, deep learning, a variety of different things you'll hear about. our focus is on automating the machine learning process. in doing that, it's fascinating some of the things we've been able to do, especially in the commercial industry. i started the federal public sector team about two years ago, embracing some really nice ways to get started for both the military and the intelligence community, but in commercial it's really outstanding to see what we're able to do to help across a variety of different markets, including banking. money laundering is one of the biggest risks to our
10:49 am
financial and economic future. the fact that we're able to identify money laundering schemes and help stop them before they get started has saved banks, and those of us who are consumers, hundreds of millions of dollars in just a short timeframe. from a datarobot perspective, what we're trying to do is help people understand what you can do. most people in this room, if you ask, especially at a federal agency, how many people are truly data scientists in your organization, you might find one or two within a massive organization who are dealing with volumes of data. if you deploy a capability like ours, we can take those one or two data scientists and allow them to really take all of those massive amounts of data and produce some solid, good answers and solutions for you. that's what we've been able to do in the public sector space, and what we're seeing happening day in and day out in the commercial industry.
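(for illustration, a minimal sketch of one way this kind of transaction monitoring can work, using an unsupervised anomaly detector; the features and contamination rate are hypothetical assumptions, not an actual anti-money-laundering system:)

```python
# flag unusually structured transactions for human review with an
# unsupervised anomaly detector. features and thresholds are illustrative
# assumptions only, not a real anti-money-laundering pipeline.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)
# columns: amount, transfers per day, share sent to new counterparties
normal = rng.normal(loc=[100.0, 2.0, 0.1], scale=[50.0, 1.0, 0.05], size=(1000, 3))
suspicious = rng.normal(loc=[9900.0, 30.0, 0.9], scale=[50.0, 5.0, 0.05], size=(5, 3))
X = np.vstack([normal, suspicious])

detector = IsolationForest(contamination=0.01, random_state=0).fit(X)
flags = detector.predict(X)            # -1 marks likely anomalies
print(f"{(flags == -1).sum()} transactions queued for analyst review")
```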
10:50 am
>> so from the perspective of building these tools and making the investments, this is very much in keeping with the ecosystem approach, and i'm going to bring it way down to, let me call it, clarification. i'm delighted to be here. this is a real treat for me. andrew said something about how i went into this: it's from an evaluations perspective, where usually the ai is an element in a system of some sort, in either technology assessments or test and evaluation and verification. prior to getting the report and reading it, i asked what i would say just sort of on my own, and i will try to compare. one of our mantras or soundbites at our place is that ai is not a thing. andrew's report says it's a
10:51 am
buzzword, but it's not even a technology or set of technologies. it refers to everything from mathematical research and provability of software to aspirations for the good of society. and it's important to keep reminding yourself that you can't do anything with anything that broad. you have to narrow it down, just like they did in the report, to define what you mean in the field, but you also cannot lose sight of the bigger issue, that you actually have to have the ecosystem. the second point i would like to make, which is largely overlooked: we don't have anything like a predictive theory here. we are doing very useful things, and there is very profound work on theoretical underpinnings, but we're not there yet. this has implications across the board, implications for how you want to do development applications.
10:52 am
it has legal and ethical implications. i would class the need for predictive theory as part of the technical debt that has to be paid down. you can do very, very useful things without the predictive theory. part of my background is work on ground combat, work on artillery systems. we had useful artillery systems when we still thought heat was a fluid, and 100 years before the periodic chart of the elements, which told us where our energetic materials came from. there can still be tremendous utility, but there can be much more utility, and i would point out that in the artillery business, before they knew what the periodic chart was, they had a lot of accidents. which gets me to point number three, which is about risk. i like to think about risk in terms of two different types of things that have gotten a lot of attention.
10:53 am
one is the alphago triumph, and the other is self driving cars. alphago can't hurt anybody. there are no severe consequences. there's no downside risk, and that is incredibly freeing for the developers. it simplifies the verification and validation, because if you get something just about right, which is really hard to validate, you don't care. nothing bad is going to happen. it's an absolutely different situation with self driving cars. very bad things can happen. they already have happened. and if catastrophic errors are possible, rare and catastrophic events have to be understood. it's got to be even more severe in the defense regime than in the self driving cars. this links of course to the underlying predictive theory; to go back to the artillery business, if you understand aerodynamics, you know
10:54 am
approximately where the risks are going to be when you do something new. this can focus your development, can focus your test and evaluation. it allows you to have a basis for the legal and ethical and liability issues. the last -- not the last, no danger that i'm done yet. human interaction is one i wanted to talk about. we've been thinking about this with stolen terminology, i forget who it was stolen from, where we are evolving from a human-to-tool relationship to a peer-to-peer relationship. the report talks about shifting workload from the human to the machine, but i think, and this is an aspirational one, for the things we want to do, we want to shift responsibility from the human to the machine.
10:55 am
this is going to be important in very, very many ways, particularly for defense, because what it is going to do is impose the need for experimentation that hasn't been done before. we don't know how it's going to work out as we shift the boundaries of responsibility, and there is going to be a lot of experimentation that has to be done. this is already going on in the self-driving car regime in a certain sense, at least with tesla; what tesla is doing is beta testing the software with real cars that are driving around on the road. to put the scale on this: first of all, recall tesla gets beat up for low production, the fact they have had trouble gearing up for production. but they have made 100,000 of these vehicles and have hit as high as 5,000 in a single day. switch from that to the department of defense. the biggest platform program we
10:56 am
have at the moment is the joint light tactical vehicle. at no point do they expect to make 5,000 in a year. so you are in a completely different regime, a completely different learning regime, in what you're doing with your fielding. you're going to have to front-end load experimentation if you operate in that kind of space, in a way that the commercial sector, which thinks in terms of millions, doesn't have to. the fifth thing i wanted to say a bit about was data, and i promised dr. hamre i would. coming out of the research area in physics, this is an area which has an underlying theory, and you still have to have 90% of your money and 80% of the people doing the data part. however hard you think data is, it's probably harder than that.
10:57 am
-- think data is. what a close with two other comments. the report labels this area as semantically problematic which i don't disagree with, and their two semantic issues to raise. very, very commonly in the community and i think the report lapses into it and a couple of places, is trust is treated as an un-alloyed good. trusting things that are not working is not an alloyed good. the psychology field has the term calibrated trust and it is important to keep reminding yourself that trust going up is not necessarily a good thing here trust going up to exactly the right place where you trust it, you know what missions and what environments the system will perform well in, which also know what once it will not perform in is critical. it is routine, i think a part of human nature, that people assume
10:58 am
the system is going to work and, therefore, what you do is trust it. well, no system works perfectly everywhere. so the idea is calibrated trust as opposed to freestanding trust. the other comment has to do with explainability and transparency, and i would add instrumentation to this. one of the things you'll need to do to build that is to be able to look into the decision-making processes of the ai systems. this also very, very frequently gets treated as, well, you know, once we can explain it, it will work, people will trust it, it will be adopted. but one of my favorite lines is that explainability is not a panacea. there are a lot of things people might explain to you which would cause you to reject rather than endorse their position. so with that, again, i'm delighted to be here, and i think my time is up.
10:59 am
>> thanks, david. i've got a handful of questions i'm going to throw at the panel to try to drive some discussion, and then we'll open it up for audience questions after that. i want to start with what i consider some of the unfinished work of the project; there's still a lot more to be done. we kind of started our project with the perspective that if we looked at how ai was actually being used in reality in the commercial space and government space, it would give us a lot of insight into the areas where progress was going to grow fastest. i think there's a lot to that, but honestly it was frustrating trying to do that. one of my assumptions, before really digging too deeply into this project, was that ai was going to be really good and useful at doing things that humans really struggle with. one thing humans struggle with is making decisions in microseconds. i'm thinking of missile defense and some other areas.
11:00 am
another thing humans struggle with is dealing with absolutely vast, unknowable, millions-of-data-points sets of knowledge. we do see ai making a substantial contribution on that volume question. my own perspective is we don't see ai lend as much assistance yet on these time critical kinds of things. you see them in the commercial sector in terms of the financial industry, but in terms of defense, not so much, because it turns out a lot of these time critical things are also really high consequence things, and we run into this problem that we don't really understand how these algorithms achieve their solutions, and we're not highly confident that they will make the right call. so i'd like to hear from the panel where the contribution of ai
11:01 am
is most likely to be in the near term. >> why don't you start? >> all right. so obviously, we invest across a lot of different areas, and in our labs, we have labs that focus on everything from cybersecurity to data. we focus more towards geospatial applications, mainly focused on computer vision. from our experience so far, we do see in the next five to ten years these sorts of technologies having a fundamental impact on what the industry has called the tcped process: tasking, collection, processing, exploitation and dissemination. what we mean is that it's no longer just a process about finding things in an image and then reporting that out, but
11:02 am
these technologies offer, albeit at a very early stage, a chance to quantify and systematically explore each part of that chain. so going to the beginning part, with the tasking piece: do we know what we're asking for specifically, do we know what has a high enough value? if we think about it from an artificial intelligence perspective, think about a machine learning model. you want to find building footprints, something we focus on a lot. you want to know early on what sort of resolution you need, what sort of spectral coverage you need and also what sort of temporal collection you need. it's one thing to ask a person that who has looked at this particular application for years. it's another to have specific models tuned for those different types of data. that for us is really exciting. one of the ways we try to explore this with industry and
11:03 am
with the government is an initiative that we have launched, in coordination with digitalglobe and with hosting services from amazon web services, called spacenet, modeled after imagenet. the intent there is that we have open sourced a large amount of curated data, and i agree with david's comment, the data curation piece is the most painful part. we host machine learning competitions and also work with others to post open source tools from that data set. i think one of the things that we have been continuously surprised by with every competition and every data set we release is that some of our assumptions are always challenged. what we think makes the most logical sense isn't always the case, depending on the model, or, what's even more exciting and sometimes frustrating, the results will vary greatly between different models. we recently just released a blog
11:04 am
post that highlighted the difference in machine learning performance on essentially the same image but at different nadir angles. so looking at building footprint detection from one angle, then at the exact same nadir angle just on the other side, where you will now have a shadow effect, performance varies greatly. this is very subtle. this is one input, so you are looking for building footprints in one resolution type, and you have two different images of the same area, yet your performance is very different. this extrapolates out with more sources and greater search area. it's these sorts of things that we want to explore across each part of the chain. so when we think about long-term implications in the geospatial domain for something like ai, it's allowing us to say what is most valuable for this specific problem and does it help us answer the question in the most impactful way.
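(for illustration, a minimal sketch of the kind of per-angle evaluation behind that finding, scoring footprint masks with intersection-over-union; the grouping and scoring here are illustrative assumptions, not spacenet's actual pipeline:)

```python
# score the same footprint-detection output separately per nadir angle,
# using intersection-over-union (iou) on binary masks. an illustrative
# sketch of the evaluation idea, not spacenet's actual scoring code.
import numpy as np

def mask_iou(pred: np.ndarray, truth: np.ndarray) -> float:
    """iou between two binary building-footprint masks."""
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return float(inter / union) if union else 1.0

def mean_iou_by_angle(samples):
    """samples: iterable of (nadir_angle_degrees, pred_mask, truth_mask)."""
    scores = {}
    for angle, pred, truth in samples:
        scores.setdefault(angle, []).append(mask_iou(pred, truth))
    # a near-nadir view and an off-nadir (shadowed) view of the same area
    # can now be compared directly through their mean iou
    return {angle: float(np.mean(v)) for angle, v in scores.items()}

# tiny demo with random masks standing in for model output
rng = np.random.default_rng(0)
demo = [(a, rng.random((64, 64)) > 0.5, rng.random((64, 64)) > 0.5)
        for a in (7, 7, 53, 53)]
print(mean_iou_by_angle(demo))
```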
11:05 am
from our view, whether you're a startup or you're an incumbent providing services, what is kind of most compelling right now is still being in this experimentation stage to figure out what's best and what isn't. there are a lot of lessons learned that, whether we know it or not, will probably serve as a foundation for a lot of our decision making going forward. a couple of things we would highlight, though, that are critical to shape that outcome. the first of which, we already mentioned data and i won't belabor that point, but when we think about key applications in the national security environment, think of a specific question that we want answered, whether it's foundational mapping or finding a very specific object. having a strategic and dedicated focus around building a core data set is essential, and ask anyone on our team or anyone on our
11:06 am
investment team, they will tell you that whether you're building a curated set from real life information or trying to use synthetic information, that is a critical first step. one of the other things that we've seen, without getting too much into the nuts and bolts, is focusing on core tools and some standardization of data formats. just being able to search across different file formats to say what is in this image is still a very tough task. if you look at amazon's open data repository, which is really rich in terms of mostly government provided satellite data from nasa and from noaa, and our spacenet data is also hosted there, right now one cannot search across all those different repositories to say i want an image of atlanta, georgia, which is where one of
11:07 am
our competition cities currently is. the fact that an end user can't do that means that right out of the gate, an analyst or end user, regardless of their technical skill, is going to have to step through multiple functions just to put a data set together to then start answering questions, or use tools to then figure out which models. so if we think about what's key, whether it be data sets or tool standardization or just having some basic evaluation metrics that we agree upon for certain questions, the core focus should be around how we can have these fundamental building blocks so that we can start asking more complex questions and then have even more complex analytical techniques as things mature. >> erin? >> thank you. so i would agree on the geospatial side. an example of something we have been able to work with right now: you might think the government is not as far ahead as they are, but
11:08 am
there are specific areas where i think we are seeing really interesting applications. one would be in the geospatial idea. for instance, we have a lot of information about isil holdouts as an example. we have been able to take that data and understand it historically, because machine learning is about two things: it's about training the model, and actually scoring or predicting with whatever model you choose. so in a geospatial example, we were able to take isil holdout locations that we knew about historically in order to help protect the war fighter and to be able to identify future isil holdouts. we might not have been able to do that if we hadn't collected this information and built out machine learning models so that we could more accurately predict where a war fighter may be going that might be a sensitive area. that information is available to us. the government has massive amounts of data available. we need to use that data and build out really strong machine learning models.
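(for illustration, a minimal sketch of those two halves, training on labeled historical data and then scoring new locations; the synthetic features and labels here are hypothetical stand-ins, not the actual data:)

```python
# train on historical, labeled examples, then score new candidates.
# the features and labels below are synthetic stand-ins for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 4))                        # e.g. terrain/activity features
y_train = (X_train[:, 0] + X_train[:, 1] > 1).astype(int)  # 1 = known holdout site

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)                  # half one: training the model

X_new = rng.normal(size=(10, 4))             # locations along a planned route
risk = model.predict_proba(X_new)[:, 1]      # half two: scoring/predicting
print(risk.round(2))
```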
11:09 am
as far as applications that we can see now and in the next five to ten years: i know for me this is a hot button, but the fact that there's a queue of 740,000 people waiting for security clearances is mind-boggling to me. so why is it that indeed.com, which is the world's largest job search engine -- if we were all applying for a job right now, we would probably go to the internet. if you go to indeed, your resume is filtered through, it's quickly mined for the information that's gathered from it, and they quickly identify the organizations that are the best fit for you. they also throw out the resumes that do not make sense for that organization. why is that same application not being used in something as critical as national security and clearances? you could still bring this in; there are three areas you could work in. you could bring in all the applications and immediately, using historical data on those that have and have not been thrown out in the past, get a good indicator among the current
11:10 am
applications coming through of folks where we might want to go ahead and say we don't need to have a senior investigator spend as much time on these. of these 740,000, there are at least 200,000 that are good citizens: they have not been arrested, they are not at risk. then we take that same idea with those applications that do need more time spent and more information identified in them, where your investigators would spend a greater amount of time. we should not have a queue of 740,000 applications when commercial today is able to do what they're doing across the board with machine learning. we also see it within fraud. in the commercial industry, our customers today are shaving off tens of millions of dollars by being able to identify fraudulent claims the minute that they come in the door, because of being able to use machine learning and artificial intelligence in their process. that same idea could be applied inside of a medicare or medicaid environment.
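(a toy illustration of that triage arithmetic, using her queue and fast-track figures; the per-review hour numbers are made-up assumptions:)

```python
# back-of-the-envelope effect of model-assisted triage on the clearance
# queue. the review-time figures are hypothetical, for illustration only.
queue_size = 740_000
fast_track = 200_000            # her estimate of clearly low-risk applicants

hours_full_review = 20          # hypothetical investigator-hours, full review
hours_fast_track = 2            # hypothetical hours, model-assisted review

saved = fast_track * (hours_full_review - hours_fast_track)
print(f"{saved:,.0f} investigator-hours freed for the remaining "
      f"{queue_size - fast_track:,} cases")
```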
11:11 am
lastly, an example that is going on right now is something that we're very proud of with homeland security. they are very definitely addressing how they can bring in artificial intelligence in certain areas, and an example would be better safety and passenger security. there's a program, the global travel assessment system, which we are part of, and being able to identify high risk passengers based on machine learning has been something that we have been working on with them for just the past, i'd say, six to nine months, and the results are pretty outstanding. we are going to be able to share that information with those countries that don't have the capabilities that perhaps the united states government does, so that we can provide those same models and make sure that passenger screening globally more easily -- benefits the world. i think in general, what we're able to see in the next five to ten years is across a variety of spectrums, but i think that there's this big fear at most of
11:12 am
the agencies, they think they have to have all of their data ready to go today. that's just not going to happen. ip ste instead of trial to boil the ocean, take data you historically have information on, build out strong models in minutes instead of weeks and months, then go out and make some good, strong, accurate predictions. it's something that's absolutely relevant and available to do today. that's what we are seeing some of our agencies doing. across commercial, we have thousands of use cases. i think federal, we can see more if we ran across some of those. >> dave? >> so i have a narrower interpretation of the question. i think the obvious area in which the microseconds matter is cybersecurity. i think that's not just a national security issue, it's a national economic security issue and then that feeds back into the national security as well.
11:13 am
industrial espionage is a substantial threat to national wellbeing. i think there is a belief that this kind of rapid time scale thing can also work in combat situations, in electronic warfare. i'm not sure we're quite as ready for that, partly because that goes back to this issue of when we are going to be ready for the human in the loop, and i would add a cautionary remark about that, related to your question about managing artificial intelligence. there's a whole lot of artificial intelligence which is lost because of decisions made by 23-year-old programmers in the middle of the night who have not been in the strategy meetings and are making decisions embedded deeply in
11:14 am
code, often tacit, often based on tacit assumptions. and there's an issue there that goes back to the experimentation issue: how you get coherence from top to bottom, because it's even harder in these software dominated things than in hardware. one of the other issues -- and i will reverse course for a minute -- there are places where there's a lot of data and it's pretty good, or at least plenty good enough, and for the department of defense one of those is the personnel management arena: using machine learning or other techniques on the vast amount of current and past data on the behavior of uniformed military. when do they leave, how often do they leave, what are the predictors -- not necessarily for individuals but for the population as a whole -- and what do they need to send us so you've got the right number of doctors and not too many lawyers and all that sort of thing. that's a field which is ripe for
11:15 am
exploitation, and it's an area in which, for other reasons, we're already investing in the data curation. so those were the two thoughts i had. we have to keep the data good anyway; those are the opportunities i had in mind. >> so we're actually seeing that same thing. i think personnel management and human capital management is the number one use case for us. we thought it would probably be in the cybersecurity space, but that's a lot like saying "artificial intelligence" -- there are so many things there that we won't go down that separate arena. but work force analytics is fascinating to us. the number of agencies asking us to help them identify who is going to retire and when -- there is one agency in particular i will not name that early-retired an entire group of folks, then realized five years later that was unfortunate, because they were the russian
11:16 am
linguists, or whatever it was they happened to know. it wasn't necessarily russian. then they ended up having to go out and hire a number of contractors in order to fill those roles, so it sort of backfired on them. now they are taking the approach of, let's understand what all the factors are. it's really fascinating. in many cases it came down to this: they were losing a lot of folks in this one agency not because of age or because they wanted to retire; it was that they didn't have any flexibility. they weren't allowed to work from home. their commute was too long. or their boss. we looked at the divisions and said, this division doesn't have anybody leaving and this division does, and it ended up coming back to where they were able to put in some environmental changes that helped. we also find in the department of defense, to your point, we have been able to help a group in the military understand who the best individuals are for certain special ops roles. why spend months
11:17 am
and years of that person's life going through something they perhaps might not be strong at? how do we understand who the best candidates for special operations are, versus wasting a lot of time and taxpayer dollars going through those processes? those are examples of current customers we're working with today, and it is because we have troves of historical information that help us pinpoint that the best special ops person looks like x, y and z, so we can better define future requirements. >> i'm going to hit the panel with one more question, then we will open up to the audience after that. i will get to you, i promise. i'm going to ask you to talk about our big thing, the ecosystem. this is something that came up -- the idea that we should talk about an a.i. ecosystem -- in our first session, but it didn't necessarily translate or impact my brain until we got to the fourth or fifth workshop. it ended up becoming an
11:18 am
overarching thing to me that connected our findings on international competitiveness, investment, adoption -- all these issues we were able, i think, to anchor on this idea of the a.i. ecosystem. you don't have to buy into that framing necessarily, but my question is: what needs to happen in the a.i. ecosystem as we defined it -- or if you have a modification to that, feel free to highlight it -- in order for the use of a.i. to become something really compelling for people making decisions, so that they say yes, this is a use case i want to invest in, i want to implement it in my agency, in my command, in my mission area? we talked about this startup debt; where do you see the critical elements of that, or where would you dispute that framing? >> i would say that question kind of brings me back to almost
11:19 am
over four years ago, when we first started standing up the lab, and it comes back to a central question with anyone, whether government or commercial customers, that has a very high consequence mission, which is: what is good enough? more specifically, or put a different way, what are you trying to do? i remember one of our first meetings -- and i'll rephrase it so you can have the same sort of general confusion the end user did. we had just released and open sourced one of our first computer vision models and met with a government end user. we walked up to him and said, what f-1 score is sufficient for you, with an intersection over union threshold between .25 and .5? and the customer looked at us and said, i have no idea what you're talking about. and we sat back and said, we didn't know what to say, either.
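a concrete gloss on the question that baffled that end user: in object detection, a predicted footprint counts as a true positive only if its intersection-over-union (iou) with a ground-truth box clears a chosen threshold, and the f-1 score is the harmonic mean of the resulting precision and recall. a small sketch with made-up boxes, assuming axis-aligned rectangles and greedy matching:

```python
# boxes are (xmin, ymin, xmax, ymax); the sample data is invented.

def iou(a, b):
    """intersection-over-union of two axis-aligned boxes."""
    ix = max(0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def f1_at_threshold(preds, truths, thresh):
    """greedily match predictions to ground truth at an iou cutoff."""
    unmatched = list(truths)
    tp = 0
    for p in preds:
        best = max(unmatched, key=lambda t: iou(p, t), default=None)
        if best is not None and iou(p, best) >= thresh:
            unmatched.remove(best)
            tp += 1
    precision = tp / len(preds) if preds else 0.0
    recall = tp / len(truths) if truths else 0.0
    return 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0

preds  = [(0, 0, 10, 10), (20, 20, 30, 30)]
truths = [(1, 1, 10, 10), (40, 40, 50, 50)]
for t in (0.25, 0.5):   # the same threshold range mentioned above
    print(f"iou >= {t}: f1 = {f1_at_threshold(preds, truths, t):.2f}")
```

the choice of threshold matters: a loose cutoff like .25 credits rough localizations, a strict one like .5 only credits tight ones, which is exactly why the question has no answer until the end user says what the model output is for.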
11:20 am
the reality is, all joking aside, why is that a good story? it's a good story because at the time, and even now, so much of the work that is really compelling in model development is still in what one may call the non-applied realm. if you are going to write a paper or even do early testing, you are deeply involved in your metric -- in this case we were using an f-1 score -- but if you are an end user, particularly in a high consequence mission, maybe you care about that and about the explainability component, but what you care about more is: does it answer my question? the reality is that different questions require very different fidelity in models, and thus in everything associated with that, all the way down to the data set. so for truly compelling examples of value derived from machine learning models, i think that's the first place we always want to start. we have already highlighted some
11:21 am
examples of that occurring, and a really good way to illustrate it is this: if one is just interested in general building counts after a natural disaster -- we are trying to figure out roughly what the level of impact could be, not exactly how many buildings, not the material damage on the buildings, just give me a count -- that problem seems fairly tractable, with some error bars. if something is higher consequence, with a very, very low acceptability for error, then that's something we have to work on. i think perhaps one of the most exciting pieces in the next two to three years is going to be fleshing out the entire workflow for applications that have pretty good models built for them. a really good example of this would be some of the folks we have worked with at a couple of different organizations, including a company called development seed. if you look at what they're doing with
11:22 am
humanitarian openstreetmap, think about integrating model predictions into a workflow that, in this case, just tells mappers which tiles are the most complex to label after a disaster. it's still early days and all of that is still in the prototype stage, but as that workflow matures, it is a great use case highlighting not only how a machine learning model is deployed, in this case all open source, but also how humans interact with it. what's the feedback loop if the severity rating on those chips is wrong, or if it's right? that entire cycle will then lay the groundwork for equally compelling work in more complex scenarios where maybe the acceptability of error is lower. >> erin? >> what we're finding is that the most important thing is to understand, at a high level, that an agency needs to
11:23 am
have senior sponsorship. when you talk about the people that are part of this equation, if you do not have senior sponsorship -- the person at the highest level embracing the fact that you're trying to embark on some sort of journey with a.i. -- you are going to fail. you need that senior leadership. we spend a lot of time doing workshops just to lay the groundwork that a.i. is the big bubble, and within it there's machine learning, there's deep learning, there are neural networks. we are going to try to focus in on what you can do with what we call supervised machine learning. how can you not try to boil the ocean, but take a small subset of something you are trying to do, something you really believe you want the answer to? that senior sponsorship is incredibly important. from there, you need a business owner at the next level who understands the data. we are never going to take the people out of this equation; that's the most important thing. you have to have somebody who has domain expertise and understands the data better than anybody else, and whose
11:24 am
senior leader has their back -- who wants them to go off and try to accomplish something. then you have your technical folks, your data analysts. they are really strong in tableau and in visualization tools, but they do not have degrees in computer science or in math and stats. trying to find that unicorn is incredibly hard. what you do have is people with domain expertise; then you bring in the capabilities, whatever tools you're using, from the data management side all the way through to consumption, where you are actually doing your visualization. what you want to accomplish, in our view, is having that senior sponsorship run through your business level and down to your technical level, and ultimately, when you're building out supervised machine learning models, you need full transparency behind them. because you need to be able to answer: how did you get to this answer? how did you determine that this group of patients is at risk for infection in your hospital because of these factors that
11:25 am
you developed? you need somebody with domain expertise who can read behind the algorithms and the machine learning that's created and really decipher it. that helps you with your people piece and your transparency, and having a full, open communication plan is really important as well, thinking of some of the other ecosystem pieces. of course, you need the policies that enable it; you need to make sure the right people have access to the information, and that the insights they are trying to gather have the right guidance behind them. we find a lot of times, especially in the intelligence community, there's this big fear that if you have a machine do everything, there's going to be this cross-pollination between secret and top secret data. so you need to make sure the policies and the governance factors are in place. i completely support everything in your a.i. ecosystem and what you laid out is really important, but it starts at the people level, i think, for us, then goes through the trust and transparency of what you're creating and what you are
11:26 am
actually going to produce as your results, and having the people tied to it is very important. >> because of the senior sponsorship point, i'm inclined to tell a story from 20 years ago, when john hamre was deputy secretary of defense and i was assigned to one of the organizations. i was one of the advisers on modeling and simulation for jwars at the time, which was an attempt to include logistics in combat modeling. i was at the front of this horseshoe table like i'm important, and these two kids come in to talk about the configuration control of the software. they talk for 20 minutes, the lights come back on and they say, any questions? i look around at the advisory group, and all i'm seeing is deer in the headlights.
11:27 am
and i think to myself, i'm the only one sitting at this table who ever wrote code for a living, and i stopped doing it ten years ago. now, fortunately, the guy who was running the meeting saw the same thing i did, picked up the gavel, banged it down: no questions, we move on to the next speaker. but nothing that i have seen, and none of the people i've shared this story with, indicate that it's gotten any better. so the senior sponsorship is important, but within the department of defense we don't have a mechanism to get people, even with my level of experience -- which is now not ten years out of date but 30 years out of date -- into these positions. and i don't see a solution, and nobody has told me one, but you need people who are informed about this in the same way they are informed about budgeting or about combat or
11:28 am
about aviation issues. in terms of the ecosystem, which is not my natural way to think about the problem, i thought about it in terms of america's historical strength at integration issues, and i think, since we're probably moving into an era of great power competition, we want to think about this in terms of: what is the nature of an ecosystem that would support artificial intelligence, and what are the elements of an ecosystem that would support the liberal democracies? i don't have an answer; i'm not a political scientist. i do physics, i do equations. it's way easier. and i don't know where we have the advantage there. i think it ought to be an intellectual international leadership role that we try to take, and i think within our own nation, our own
11:29 am
community, we have to encourage broader literacy and we have to try to tighten the terminology down so that non-experts can actually grapple with the problems. but i think a critical issue is this one: we want to find an ecosystem in which the liberal democracies are competitive. >> i would inject one more level of complexity into your comment. especially in the computer vision domain, but in machine learning writ large, unlike historical analyses of the defense industrial base, if you look at a lot of the work occurring in the machine learning domain, so much of it, both on the tools and framework side and on the algorithm side, is in the open source, which is a very different environment than what we're used to dealing with historically in terms of how we think about national power and national assets. it's something that we think about --
11:30 am
it comes up continuously, even just within the purview of our lab, in the sense of what makes sense to open source and what does not. and we continuously come back down on the side of being more open, simply because there is still so much early work to be done. at least from our view, it's hard to determine where we have surpassed a foundational capability such that it should move into the realm of the proprietary. i know both of you have to deal with that; i'm just curious about your thoughts on it. >> well, to go back to the international aspect, the openness is natural to our country in ways that it is not natural to others, and there may be a way to capitalize on that and make it a strength rather than a weakness. but again, i'm punching above my weight class talking about the international political sphere.
11:31 am
>> all right. let me turn to audience questions. you have been waiting very patiently. we do have a microphone that will be brought around. please ask one question, keep it brief, make it a question, and tell us who you are. i see one hand here. >> steve winters, independent consultant. i will direct it to david. it's just a minor point, but i think you made a remark comparing self-driving cars, where people could be hurt in an accident, to the case of alpha-go, where nobody's going to be hurt. isn't there an argument to be made that alpha-go is so much more dangerous, because what everybody drew from the hype over it was, my gosh, this is how you win wars. i mean, new tactics were coming
11:32 am
out that the go players hadn't seen in the whole history of humanity. of course, go is a deterministic game, but then you have the a.i. people having a very good result, so can you say something about the danger there -- and maybe the openness is the danger. >> so i accept your suggestion that what i was talking about was physical risk, not intellectual risk. i think there were elements of hype about it. i think go is an intensely digital game with rules, and in fact the rules are the same on both sides, which frequently is not the case in warfare. i work at an institute that to some extent was invented 60 years ago to counter hype, so i'm constantly alert to the dangers. i tend to be self-regulating
11:33 am
over time, but yeah, i think there was a tremendous amount of enthusiasm -- you know, one comment made to people was, all we need is curated data on 30 million wars and we're ready to go. so the scale is very different. that said, it was a very powerful accomplishment, and one that was not expected, even by many in the field. shortly after it happened, my wife and i were driving out in shenandoah, and there was some npr segment we heard for a few minutes. they were talking about this as, the computer beat the world's best player of go. that's one way to look at it, but i think the right way to look at it is that 300 of the best computer scientists, with unlimited budget and unlimited access to computing power, most of whom were decent
11:34 am
go players, were able to pool their resources and beat the best single individual at go. and when you describe it that way, the hype is stamped out. but you made an interesting distinction between intellectual risk and physical risk that i had not made before. thank you. >> okay. i will come here, to the blue blazer, three rows up from me. there you go. thanks. >> jennifer simms. i know virtually nothing about a.i. and i haven't had time to read the report, but i heard the mention that every country has a different purpose in developing a.i., and i have heard that china is more advanced than america in this. so i wonder, what do you think is china's purpose in developing this, where are they now, and how is that going to impact or affect the united states?
11:35 am
thank you. >> well, in the work we did for the report, i would say there is pretty extensive application of a.i. projected in the strategy that china has been discussing, as part of broader efforts toward seizing the technological high ground in a range of industries. so a.i. complements or supports their efforts along a number of dimensions in the plan -- made in china 2025 is one of the documents that describes that; there are others as well. i would say there is tremendous strength there. they have invested in a number of institutes focused on a.i. they have recruited heavily for an a.i. work force, some of it folks who have come to study in the united states, gone back to china or been recruited back, and others generated right there in china. they produce literally hundreds of thousands of engineers every
11:36 am
year out of their graduate schools and universities. so they have some real advantages there. they have advantages in the quantity of data that they gather through constant surveillance of the population, and there are very few limits there on aggregating, sharing and exploiting that data -- limits that we do have here. so there's real strength there. one element that i think can sometimes be overblown is the amount of money they're putting into it. the truth is, we don't really know the amount. there is this $150 billion figure that's out there, but that's a multi-year number, and it's a projection of the size of the a.i. industry that it is their goal to achieve. so it's a little less clear exactly how much real currency is being invested in a.i., but there's little doubt that it's substantial, at least comparable to our investment and perhaps stronger. the way we came down in thinking about it was to focus less on specific dollar
11:37 am
investments and more on the health of the ecosystem, because our view is that what may be applicable to facial recognition at airports, allowing them to monitor people in the population, may not be at all or equally applicable to other warfare applications that we would consider more important in a battlefield scenario. it's not clear that there's a transferrable advantage from one to the other. but we do think there's a transferrable advantage in having a really robust a.i. ecosystem, where you can apply people and infrastructure and policy to multiple different kinds of problems and carry over some advantage. i still think that silicon valley represents the most robust a.i. ecosystem that we see today. that's a good and important advantage for the united states. but it's a perishable advantage, so it's not to be [ inaudible ]. other thoughts from the panel on
11:38 am
that? >> i have one thought, which is that most of these companies think of themselves as international companies. i'm not sure silicon valley is american. it's located here, and that confers some advantages, but it is striking to me that the google employees seem much more squeamish about project maven than they do about the massive surveillance state that's going up in china. so, is that ecosystem on our side? not immediately apparent to me. >> other questions? here in the middle. >> thank you. federation of american scientists. kind of a segue from your last
11:39 am
comment: with regard to artificial intelligence and national security, the talent acquisition problem -- how is that being addressed when the technology is not as secretive as many of the other programs, yet government contracting doesn't account for the fact that the salary figures being paid in silicon valley for top talent are outrageously high? i got this from a recent "new york times" article and discussions with a venture capitalist well known in the valley. how are we going to switch that, to bring that talent into the national security arena? thank you. sorry it's a long question. >> so we see this challenge all the time. i would agree that with silicon valley we are in jeopardy, because there is a massive, massive push from china to gather as much information as they possibly can, whether
11:40 am
they're getting our technology or having their folks, like you said, study in this country and then go back. one of the things that we're looking at is the fact that it is possible -- it's not that farfetched -- to create an environment of citizen data scientists. i'm not a data scientist. i work with four of the top data scientists in the world at data robot. we are a company of about 500 people. we are trying to hire those folks, just like google is, just like amazon and facebook. but the idea is that the technology is one piece of the whole ecosystem; you shouldn't make it so difficult that people like you and me, who are not data scientists, can't leverage the benefits of it. data scientists are that unicorn: so hard to get and so expensive that they are especially not going to be hired into the federal government, because they can get much higher salaries in commercial industry. so what we are trying to do is create an environment of citizen data scientists, where you have
11:41 am
the domain expertise, you understand your data better than anybody, but you don't need the computer science background and the math and stats background to get really actionable intelligence from your data. think about the way you're operating with the internet every day: you're on your phone, and you're not a trained expert in coding. you didn't need to know computer science in order to log into your social media account this morning. that same movement is happening within artificial intelligence. we need to make the tools and the capabilities and this entire ecosystem much easier to understand, through education and through the ability for all of us who understand our data to gather the information and turn out actionable intelligence from it, without having to have these massive degrees and very expensive people within your work force. we call it the citizen data scientist -- bringing the power down to the common people, if you will.
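a toy illustration of that citizen-data-scientist idea: a tool that tries several model families automatically and reports the best by cross-validation, so the domain expert supplies only the data and the target column. real automl platforms do far more (feature engineering, ensembling, explanations); the dataset name and the "outcome" column here are placeholders:

```python
# try several model families automatically and report the best one.
# "agency_cases.csv" and the "outcome" column are hypothetical.
import pandas as pd
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier

df = pd.read_csv("agency_cases.csv")
X, y = df.drop(columns=["outcome"]), df["outcome"]

candidates = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "random forest": RandomForestClassifier(n_estimators=200),
    "gradient boosting": GradientBoostingClassifier(),
}
# 5-fold cross-validation gives each candidate a comparable score
scores = {name: cross_val_score(m, X, y, cv=5).mean() for name, m in candidates.items()}
best = max(scores, key=scores.get)
print(f"best model: {best} (mean accuracy {scores[best]:.3f})")
```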
11:42 am
>> to extend that thought, think about capabilities on a spectrum. erin highlights it very well. if the intent is for the u.s. government to hire individuals who want to build foundational networks from the ground up, then yes, that is a monumental task for anybody, no matter the organization. but what's been particularly compelling for us, both to invest in and to participate in through open source, as well as through the experience of other partners, is that the evolution of tools has been drastic just in the last couple of years. data robot is a great example of this. a step further back -- not fully a product -- what we have seen is entry-level tools that help end users who are perhaps not skilled in building out their own models but still want to learn how to work with a model. a great example of this is if you were to look online at what
11:43 am
aws calls sagemaker. it's one example of a service offering, but essentially these are tools that allow end users to quickly spin up a model and look at some results. it does require some scripting skills, but it is something that we have seen government end users start to work with pretty aggressively. this is compelling because, to erin's point, it starts increasing that literacy drastically. when we started at cosmiq, none of us except one was a geospatial expert. the reason i bring that up is that we started where everyone else started, looking at opencv. this was before tensorflow was open sourced. we learned through experimentation. i think what's so great about a lot of these new tools -- and the reason we and others contribute to open source -- is that it allows those tools to be more robust
11:44 am
and allows entry-level people, or folks who are interested in learning more, to start with experimentation, then maybe become a stronger end user of a tool like data robot, or maybe build their own models as they gain familiarity. >> so one of my colleagues, who actually comes out of the machine learning business and spent some time in the pentagon while the joint artificial intelligence center was being set up, basically came back and said, everybody's worried about how you are going to get the very best a.i. people into the jaic. i said, i don't need the best; second tier is plenty good enough. they need the best contracting officers and the best lawyers in the jaic. and this fits into my mantra, which as a physical scientist i keep reminding myself of: the united states government is primarily a resource allocation
11:45 am
organization. when you think of its role in the ecosystem, that's a big part of it. and that's contracting and law and ethics and those things. i think that's an element of the ecosystem that's worth bearing in mind. it goes to this point that, you know, you don't need your government users to be power users. so i think there's an element there, in terms of building the ecosystem, of deciding which pieces of it the government needs to be the best at and which pieces it can be good enough at. because you're not going to be able to be the best at everything. >> i would just add to that, from the perspective of our report, to tie it to dave's earlier comment: silicon valley may be the most robust a.i. ecosystem, but it's a private sector entity and doesn't necessarily report to any nation per se. that's true.
11:46 am
that's one reason why, in the report, we talk about the government needing an a.i. ecosystem that is organic -- not to compete with or out-strip silicon valley by any stretch of the imagination, but enough of one to be an intelligent user, to push forward military-critical applications, to work with those in the industry who want to take on the burden of security and meet the threshold of trust and explainability to do the kinds of high consequence work the government needs. i will say, from our perspective here at csis -- we do have a data team that works for me in the defense industrial group, and we do a lot of work on contract data, trying to draw policy conclusions and implications from it -- we have seen in really the last two years, i would say, a dramatic increase in the availability of young people coming out of college or graduate programs with really significant, serious data analytic skills. so the academic world is out
11:47 am
there. they are responding to the call. and from what i've seen, there's a pretty robust market for those folks in the private sector. so there is some room for hope. i'm going to make room for one last question, then we will have to stop. i always like to balance the room; let me head to the right side, which i haven't touched yet. >> i have a question about ethics. given the backdrop of a number of private sector companies leading the way on a lot of ethics writing -- i think of deepmind having a group specifically dedicated to that -- what do you think are some of the main a.i. ethics principles for the national security community, specifically the dod? >> i'm going to dodge your question. there was a reference to the national a.i. r & d strategy, and
11:48 am
as it turns out, while you were putting this report together, they actually put out a request for comments on an update that is under way; the comment period just closed a week or so ago. one of our remarks was that this area in particular was one that required additional attention and an even greater u.s. focus for international leadership. i would tie it back to the values of what we described as those of the liberal democracies. i think we need to run our country, and prevail as needed, on that basis. >> just one thing to add on that: model sensitivity and model bias are, regardless of the application, a critical issue for any development or end user team.
11:49 am
just in the geospatial domain, one thing we have to think about is how, whether it's internal work or work in collaboration with our partners through spacenet, we incorporate enough geospatial diversity that the models we release can operate in different domains. it's a really niche example, but arguably it translates to a variety of applications. as data sets grow, whether open source or in-house, and as algorithms become increasingly benchmarked, that's an important factor that should always be kept in mind, from data generation all the way to the deployment of the end use application. >> we look at it the same way. one of the reasons that's so important is that when you look at the models and the transparency behind them, you should be able to see how the result was arrived at.
11:50 am
so for instance, we can tell you that in this jurisdiction of ohio, the opioid crisis looks like it will cause x number of deaths next year, and they ask you, why is that? we're not limiting it to just one model. our platform allows for tensorflow by google, python, it doesn't really matter. what you really want to understand is that your data scientist community, and the folks building out platforms like data robot, look at it this way: you still own your data, and you have to understand what you're providing to the system to go off and build models from. not being limited matters; we had a cio at one of the intelligence agencies say, you all are a lot like switzerland -- you don't go out and pick a particular model because you are the company that developed that model. tensorflow was developed by google, but we have it inside our platform, so when we turn out our models, we are spinning up hundreds of models, and sometimes thousands of combinations of models, which are
11:51 am
ensemble models, and then you can open up the blueprint, which gives complete transparency into every step in the process, so you can see what happened. as far as the ethics and governance behind that, that's really very dependent on the organization you're working with and the experts in the data field there. so with tools like ours, whatever you provide the system to build models from is something that your organization has hopefully vetted and approved before we turn out a result for you. at the end, your data scientist or senior executive or whoever it might be needs to look at the results and the intelligence we provided and say yes or no, good or bad, to that answer. >> i would just add that we have a lot of ethical policy already. we have rules of engagement in a military context; we have requirements that our personnel system generate outcomes relating to diversity or non-bias across a
11:52 am
range of dimensions. so we have a lot of ethical policy in place across the national security world. the question to me is how you translate that into something that the machine can meaningfully comply with -- comply with is probably not the right word -- can meaningfully address. in many ways, right now we are challenged to measure and say, is this algorithmic output complying with our ethical policies? we have to do some translation: what does that mean in the context of what that algorithm, and that machine intelligence, is actually being tasked to do? it gets addressed in that other report i mentioned, on the national strategy for a.i., that came out of our strategic technology program; i would ask you to look there. but this is really one of the central challenges, and it's not so much a lack of ethical policy or guidance as how we translate it into something that the machine intelligence can meaningfully address.
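one hedged sketch of what measuring algorithmic output against ethical policy can look like in practice: compare a model's positive-outcome rates across a protected attribute and flag disparate impact with the common four-fifths rule of thumb. the dataframe, the group labels, and the 0.8 threshold are illustrative assumptions, not a dod standard:

```python
# check a model's decisions for disparate impact across a group label.
import pandas as pd

# scored: one row per case, with the model's decision and a group label
scored = pd.DataFrame({
    "group":    ["a", "a", "a", "b", "b", "b", "b", "b"],
    "approved": [1, 1, 0, 1, 0, 0, 0, 1],
})

rates = scored.groupby("group")["approved"].mean()
ratio = rates.min() / rates.max()
print(rates)
print(f"selection-rate ratio: {ratio:.2f}")
if ratio < 0.8:   # four-fifths rule of thumb borrowed from u.s. employment law
    print("flag: possible disparate impact; review the model before deployment")
```

a check like this is the easy part; the hard part the panel is pointing at is deciding which written policy maps to which measurable property of the model's output.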
11:53 am
and then, to the point that i think david has really brought home for me on a couple of occasions: if we don't know how the a.i. is doing what it's doing, or we aren't able to say, if we see one outcome in one instance, can we assume that with the exact same inputs in two months, after the algorithm has evolved, it will do the same thing -- that is a huge, fundamental challenge in benchmarking and measuring a learning algorithm that we need to do some work on. so i think we're going to have to stop there. i want to really thank our audience for sticking with us for a long but hopefully interesting discussion. i really enjoyed this discussion, especially with the panel. and i want to commend to you, if you didn't get a hard copy or if you are watching online, our report, which is available on the website; you can download it electronically. it will look just like this. or you can order one for yourself. we have the video we showed on
11:54 am
our website. i should have mentioned that there's a second video: we did an earlier video in this project, released a couple of months ago, that is more of an introduction to the concept; the video you saw today summarizes the work of our report. i want to thank thales for making the project possible and alan for joining us. please join me in thanking our panel for a great discussion. [ applause ]
11:55 am
11:56 am
president trump will be on the campaign trail this evening, a day ahead of the midterm elections. c-span will have live coverage at about 6:00 eastern from fort
11:57 am
wayne, indiana. with just one day to go before the midterm elections, c-span is your primary source for campaign 2018. tonight, book tv is in prime time with a look at recent books on politics. juan williams of fox news examines the trump administration's policies on civil rights in his book "what the hell do you have to lose." manhattan institute senior fellow heather mac donald argues that diverse thinking is being challenged at the collegiate level in her book "the diversity delusion," and emory university african-american studies chair carol anderson speaks at the wisconsin book festival in madison about voter suppression. book tv, all this week in prime time on c-span 2. >> which party will control the house and senate? watch c-span's live election night coverage starting tuesday at 8:00 p.m. eastern as the results come in from house, senate and governor races around
11:58 am
the country. hear victory and concession speeches from the candidates. then wednesday morning at 7:00 a.m. eastern, we will get your reaction to the election, taking your phone calls live during "washington journal." c-span, your primary source for campaign 2018. >> the c-span bus is traveling across the country on our 50 capitals tour. during our stop in montpelier, vermont, we asked folks which party should control congress and why. >> an important issue to me is just, you know, social justice issues regarding women's rights. i think we're at a real turning point in u.s. history, and i think there's a really, really good opportunity for lawmakers and those who are being elected to office to really make a big difference and
11:59 am
just kind of turn the tide on what is going on in the country right now regarding that. >> one of the issues that is very important to me, that should be addressed in the next election, is mental health, and having enough money in the budget to deal with all the mental health issues going on in the world right now. >> as far as i see it, in the current atmosphere of the u.s. political scene, i think our two-party system has failed us in a sense. it doesn't represent a vast array of ideologies. if you really compare the legislation and the ideals of the republican and democratic parties, you can see a lot of similarities beyond certain niche social jurisdictions and ideologies. it really doesn't acknowledge and address other political issues. it creates a sort of mundane population that doesn't
12:00 pm
really want to consider the idea of a third party, or consider different ideologies for politics or legislation. it's really created a sort of stagnant environment for politics and ideology in the united states. ... this election is much more important to me, as the last two years under trump have not been ideal for any of the lgbtq community. the issue of the current trans case happening in court right now will greatly affect me, as i myself am going to be
12:01 pm
transitioning to male, and i have several friends who are trans. this new policy -- which i guess is currently just proposed; i don't believe it has been voted on -- is in my eyes just further evidence of why the government has no business being so involved in my life. i really don't think this is something they should have a say in. but they have been voting about a lot of things they shouldn't have a say in. it's important to me that this is resolved, because it takes away more rights from me and my ability to exist. >> voices from the states, part of c-span's 50 capitals tour. >> real clear politics' sean trende joins us for the next hour