Realizing the need for cybersecurity science
NSA has played an active role in system security for over six decades, originally in the area of cryptography for classified communications and later in the development of a wide range of technologies to protect modern computing systems. To maintain its edge, NSA has a tradition of using expert panels for advice and guidance in critical technical areas. In 2008, the Information Security Panel initiated a discussion concerning the scientific underpinning for computer security engineering. Their concern stemmed from the growing use of commercial off-the-shelf technology in critical government systems, and they questioned whether the frequency of high-profile security failures could be attributed to a lack of scientific rigor in security engineering. In contrast, they noted that the science and engineering associated with cryptographic systems, while still imperfect, seemed to result in far fewer catastrophic failures. The panel concluded that NSA's Information Assurance (IA) Research Group should review the state of cybersecurity science and consider establishing an initiative to put cybersecurity engineering on par with other established engineering disciplines.
The panel's concerns and challenge were welcomed as corporate-level acknowledgement of what security researchers at NSA and throughout the community had come to believe—that a new, strategic initiative was needed to advance security from the current patchwork of point solutions and ad hoc approaches and that resources should be shifted to focus on the development of a cohesive and organized body of knowledge as a foundation for the field of cybersecurity. The IA research group was convinced that the Agency's experience developing strong foundations for cryptography provided the model for what might be done in cybersecurity science and that the evolution of NSA's IA mission into the cyber domain provided more than enough motivation for it to take on a leadership role.
Assessing the state of cybersecurity science
Gauging the state of cybersecurity science, or any science, requires some method of determining what work truly qualifies as science. While there are myriad definitions of science that relate to testable hypotheses—for example, the ability to make predictions and the use of methodical procedures—a simplistic definition adopted by the IA research group was "any work that describes the limits of what is possible." A good example of science consistent with this definition is Claude Shannon's seminal work on channel capacity, which established upper bounds on the rate of information transfer through a communications circuit. Shannon's results have provided the foundation upon which much of modern communications engineering is based.
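Shannon's capacity result illustrates the "limits of what is possible" definition concretely. The formula itself is standard and not stated in the article: for a channel of bandwidth $B$ hertz and signal-to-noise power ratio $S/N$, the Shannon-Hartley theorem bounds the achievable rate at

```latex
C = B \log_2\!\left(1 + \frac{S}{N}\right)
```

where $C$ is the channel capacity in bits per second. No coding scheme can communicate reliably above $C$, while any rate below $C$ is achievable with arbitrarily small error probability; it is exactly this kind of sharp possibility/impossibility boundary that the IA research group's definition of science demands.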
This litmus test gave us a straightforward way to distinguish scientific results in our review of security research. We began with a high-level review of research papers presented at prominent security conferences and then surveyed the security curricula of leading academic institutions. We concluded that most security work meeting our definition of science was concentrated in the areas of cryptography, cryptographic protocols, program correctness, fault tolerance, and formal methods. Much of the remaining research in security has been concerned with models of security (e.g., Bell-LaPadula and Biba), heuristic design principles, attack strategies, design and assessment of security components (e.g., firewalls, filters, and virtual private networks), risk assessment, intrusion analysis, and the like. Although this body of research has contributed to the development of more trustworthy systems, it does not contribute to our understanding of the science of cybersecurity.
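Formal security models of the kind mentioned above can at least be stated with mathematical precision. As a minimal illustration (not from the article; the entity names and the four-level lattice are hypothetical, and real Bell-LaPadula also includes categories and trusted subjects), the model's two access rules, no read up and no write down, can be sketched in a few lines:

```python
from dataclasses import dataclass

# Hypothetical clearance lattice: higher number = more sensitive.
LEVELS = {"unclassified": 0, "confidential": 1, "secret": 2, "top_secret": 3}

@dataclass(frozen=True)
class Entity:
    name: str
    level: str  # one of the keys in LEVELS

def can_read(subject: Entity, obj: Entity) -> bool:
    """Simple security property ("no read up"): a subject may only
    read objects at or below its own clearance level."""
    return LEVELS[subject.level] >= LEVELS[obj.level]

def can_write(subject: Entity, obj: Entity) -> bool:
    """*-property ("no write down"): a subject may only write objects
    at or above its own level, preventing leaks to lower levels."""
    return LEVELS[subject.level] <= LEVELS[obj.level]

analyst = Entity("analyst", "secret")
memo = Entity("memo", "confidential")
report = Entity("report", "top_secret")

assert can_read(analyst, memo)        # read down: allowed
assert not can_read(analyst, report)  # read up: denied
assert can_write(analyst, report)     # write up: allowed
assert not can_write(analyst, memo)   # write down: denied
```

The precision of such rules is what made properties of these models provable, even though, as the article notes, the models alone did not add up to a science of whole-system security.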
Overall, we concluded that the results of our review were consistent with the advisory panel's view of cybersecurity science. But an equally important conclusion we reached was that making significant strides in cybersecurity science would require an effort much larger than NSA alone could support. Unlike NSA's authority in the field of cryptography, no single government organization is charged with responsibility for cybersecurity technology and its scientific foundations. We felt that developing a body of science to support our nation's interests in cyberspace would require a large, long-term effort supported by the combined resources of government, industry, and academia. NSA's mission and experience in information assurance, and its six decades of investment in the science of cryptography, place it in a unique position to provide a leadership role for advancing the science of cybersecurity.
A holistic approach to cybersecurity science
To socialize the idea of a broad program focused specifically on science, we consulted with the other government organizations that have traditionally sponsored security research. Those discussions resulted in a decision to sponsor a workshop to explore the topic of cybersecurity science in depth with a broad group of representatives from government, academia, and industry. In November 2008, the Workshop on the Science of Security (i.e., science of cybersecurity), sponsored by the National Science Foundation (NSF), the Intelligence Advanced Research Projects Activity (IARPA), and NSA, was held in Berkeley, California. Attendees included experts from traditional information security fields as well as others from a variety of nontraditional fields including biology, economics, and sociology. The range of topics discussed was equally broad and included such questions as:
Is a science of cybersecurity possible?
What might a science of cybersecurity look like?
How can we reason about problems that seem impossibly hard?
Is it possible to have scientific security metrics?
What lessons can we learn from other disciplines?
Several days of discussions generated a broad and divergent set of ideas concerning the possibility of developing a science of cybersecurity. But there was general agreement on several areas where advances were sorely needed. The first concerned the need to account for human behavior in models of system security. While the difficulty of modeling intelligent adversarial behavior has long been recognized as a shortcoming in security models, it has also become increasingly apparent that a science of cybersecurity should account for human behavior associated with the overall operation and defense of cyber systems. In either case, however, the addition of a human dimension was acknowledged to add enormous complexity to the task of analyzing and designing secure systems.
There was also agreement that producing systems that are secure in the real world requires accounting for important factors beyond the technical aspects of the security mechanisms used; the poor adoption rate and ineffective use of available security technology over the past several decades were viewed as evidence of this. Beyond the role of human behavior, the impact of financial and business constraints on the effectiveness of system security was highlighted.
While no specific plan of action emerged from the workshop, the collection of ideas generated significantly influenced the research programs of numerous funding groups, NSA's in particular. In a significant departure from past NSA research programs, our new cybersecurity science portfolio will seek to include a much more diverse set of disciplines than previously considered, including human perception, psychology, physiology, economics, data analytics, and game theory.
Strategies for advancing science
Recognizing the need to improve the scientific foundations of security was a useful first step, but it didn't provide insight regarding what strategy might best accomplish this goal. One seemingly obvious and straightforward approach was simply to increase funding for security research that specifically targeted science, but it was clear that even sizable increases in current budgets (which weren't likely) would fall far short of producing the advances needed. Before proceeding with any specific strategy, it seemed prudent to investigate why more science hadn't already been produced. Some who have reviewed the broader ecosystem in which research is conducted believe that the current incentives associated with security research are not well suited to producing science. (See Tom Longstaff's article, Barriers to achieving a science of cybersecurity, for more on this subject.) This suggested that we should also consider a strategy aimed at reshaping the incentive system. In the end, since it was not clear that either approach alone would produce the desired results, we decided to adopt a mixed strategy: one that provides direct support for specific science research projects while, at the same time, seeking improvements in the overall conditions for producing science.
On applying strong inference to cybersecurity science
Carl E. Landwehr
In 1964, biophysicist John R. Platt observed that some scientific fields, such as molecular biology and high-energy physics, seem to advance more quickly than others, and he argued that the use of a method he dubbed "strong inference" was responsible [1]. In strong inference, a tree of alternative hypotheses is developed and pruned in response to the results of critical experiments. Platt's paper created quite a stir at the time and has continued to inspire responses over the years. (See [2, 3] for two examples.)
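Platt's loop can be summarized compactly. The following toy sketch (an illustration, not from Platt or the article; the debugging example is hypothetical) flattens his tree of alternative hypotheses into a set that each critical experiment prunes:

```python
# Toy sketch of the strong-inference loop: keep a set of competing
# hypotheses; each critical experiment's observed outcome eliminates
# every hypothesis that predicted something different.

def strong_inference(hypotheses, experiments):
    """Each experiment is a (predict, observed) pair, where predict(h)
    is the outcome hypothesis h implies and observed is what the
    experiment actually showed."""
    surviving = list(hypotheses)
    for predict, observed in experiments:
        surviving = [h for h in surviving if predict(h) == observed]
    return surviving

# Hypothetical example: which input byte value triggers a crash?
hypotheses = list(range(256))
experiments = [
    (lambda h: h < 128, True),      # observed: the value is below 128
    (lambda h: h % 2 == 0, False),  # observed: the value is odd
]
remaining = strong_inference(hypotheses, experiments)
# remaining now holds only the odd values below 128
```

Each well-chosen experiment halves the surviving set here, which is the efficiency Platt attributed to the method; designing such decisive experiments for cybersecurity questions is the hard part the panel debated.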
Could this approach speed the development of a science of cybersecurity? To investigate this question, NSA sponsored a panel at the 2012 Institute of Electrical and Electronics Engineers Symposium on Security and Privacy. Five cybersecurity researchers active in economics, human behavior, systems, formal methods, and cryptography were asked to assess the suitability and actual use of strong inference in their respective fields. As organizer of the panel and moderator of the discussion, which included lively exchanges with the audience, my personal conclusions are that strong inference is not widely used in the field at present and that its potential benefit is strongest in those domains where natural phenomena, including human behavior, must be modeled. Its benefits are less clear in areas like cryptography and formal methods, where mathematics and logic predominate. Nevertheless, in any field, the intellectual rigor required to formulate a proposed research project as a hypothesis-testing exercise can only help.
Experiments in funding science
For decades, government organizations including the Defense Advanced Research Projects Agency (DARPA), NSF, the Air Force Research Laboratory (AFRL), and the Army Research Office (ARO), as well as NSA, have used direct funding for research targeted at specific security topics, so it seemed straightforward to apply the same approach to cybersecurity science. NSA's cybersecurity science initiative is exploring a number of variations of this strategy to assess their effectiveness. One approach, used shortly after the conclusion of the Berkeley workshop, provides supplemental funding to an ongoing security research program (i.e., NSF's Team for Research in Ubiquitous Secure Technology [TRUST] Science and Technology Center) specifically to encourage work in science. A second approach was adapted from industry: it involves funding specific work in science at a small number of academic research groups (referred to as lablets) at highly qualified institutions. The first three lablets, established at Carnegie Mellon University, the University of Illinois, and North Carolina State University, were beneficiaries of funding provided to NSA that was specifically earmarked for cybersecurity science. (See the Pointers article for more information.) While the initial choice of lablets was limited by timing constraints placed on the funding, the number of institutions participating in the program has increased through an outreach requirement placed on each lablet. The last funding approach in our portfolio provides support to specific, high-impact problem areas identified through research reviews conducted across the security community. Composition is one cybersecurity science topic currently being supported, with the goal of understanding how the security properties of a system can be derived from the properties of its component parts.
After several rounds of modest NSA funding supplements to NSF's TRUST Center, increased attention is being devoted to science, and that attention is beginning to influence other work and other researchers. NSA's lablet initiative, formally established in 2012, recently kicked off several dozen projects to explore how effective a multiuniversity, multidisciplinary team approach can be at advancing science and involving nontraditional partners. Early work has focused on identifying core hard problems in science that must be understood in order to deal with the security issues that plague the nation. We have long recognized that security research does not always lead to scientific understanding, and through collaboration with our lablet partners, we are maturing our joint understanding of how to shape research to maximize its contribution to science. Our work funding specific projects in science has just begun, but the quality of the investigators and their previous contributions to science make us confident that these efforts will provide a showcase for cybersecurity science research.
Broadening research participation
A funding strategy that targets specific research projects unavoidably limits participation to a small group of researchers. To significantly broaden participation in cybersecurity science, we are investigating ways to reshape the overall research environment to be more conducive to producing science. One goal is to increase the perceived value of research that advances science, even incrementally, relative to work that merely tracks the latest security trends. If successful, we believe we can accelerate the creation of a science of cybersecurity by leveraging a much larger community of researchers. The downside of such an indirect approach is that specific research outcomes are much less certain and the overall effectiveness of the investment is difficult to assess. While influencing the research environment seems simple notionally, developing a practical strategy to do this is challenging. Some of the approaches we are investigating include challenge problems, competitions, awards for scientific papers, and recognition of researchers' achievements. The strategy we adopt, as in other cases, will include a variety of these techniques.
Our report to NSA's advisory board observed that the scope of the effort needed to develop a science of cybersecurity was well beyond what NSA could accomplish on its own. But we also noted that NSA was in a unique position to lead a community activity to make this happen. One of the key aspects of our science initiative has been enlisting the support of NSA's many research partners including the Air Force Office of Scientific Research, the Department of Homeland Security, NSF, DARPA, IARPA, the federal laboratories, and other groups across the DoD and intelligence community. We have also sought the involvement of our foreign partners, particularly the UK and Canada. Although a government-wide cybersecurity science initiative does not yet exist, we have attempted to coordinate the collection of research projects to provide cohesion and balance.
In the past several years there has been a groundswell of interest in creating more robust scientific foundations for cybersecurity. Today, there are numerous cybersecurity science activities underway, with more being planned, and keeping track of them is becoming increasingly difficult. To deal with this problem and to encourage the development of a community surrounding work on cybersecurity science, NSA has taken a lead role in developing a web-based Science of Security Virtual Organization (SoS VO). This work leverages the Virtual Organization collaboration infrastructure developed by NSF to support its Cyber-Physical Systems (CPS) program. (Visit the CPS Virtual Organization at cps-vo.org.) The goal for the SoS VO is to provide "one stop shopping" for anything related to cybersecurity science. The website will provide information on conference events, research sponsors, current research programs, notices of future initiatives, research tools and data, etc. The research produced by these activities will be made available for review and distribution, and a future goal is to provide video streams of research reviews for wide viewing. The site is also intended to encourage and support collaboration by providing a variety of social networking features including discussion forums, chat, researcher blogs, and lists of challenge problems. (See Advancing the science of cybersecurity with a virtual organization for more information about the SoS VO.)
Transitioning findings to practice
New security systems continue to be developed despite limitations in existing science, so developers must make do with whatever practices are available, however imperfect. Because of this, an important consideration in our initiative is the rapid transition of emerging scientific results into the practice of security engineering. In our cybersecurity science lablet program, for example, we are seeking opportunities to develop courses that capture new science and to augment existing courses with improved scientific foundations. As new material is developed, we intend to leverage relationships with the National Institute of Standards and Technology and NSA's own Centers of Academic Excellence program in order to influence the design of new systems and future generations of developers. (For more information about NSA's Centers of Academic Excellence, visit http://www.nsa.gov/ia/academic_outreach/nat_cae.)
Although the resources currently invested in cybersecurity science are relatively modest compared with other research areas, responsible program managers will still need to track the return on their investment. So, how can progress in cybersecurity science be measured? While breakthrough discoveries and near-term impact are always hoped for, scientific advances are often incremental and produced over periods measured in decades. Expectations for significant results therefore need to be modest and mindful of the many ways in which scientific progress is observed. Types of scientific progress include:
Finding the new—discovering scientific breakthroughs;
Taking a fresh look—developing useful new ways to look at a given set of data;
Finding patterns—discovering and explaining patterns in phenomena across time;
Finding connections—linking theories and explanations across multiple fields of research; and
Influencing others—stimulating further research, including research outside the field, and collaboration across different fields.
In addition, scientific progress may be seen in measures that show rising interest and excitement about a new field, including [4]:
Established scientists begin to work in a new field;
Highly promising junior scientists choose to pursue new concepts, methods, or lines of inquiry;
Students increasingly enroll in courses and programs in a new field;
The rate of publications in the field increases;
Citations to publications in the field increase in both number and range across other scientific fields;
Publications in the new field appear in prominent journals;
New journals or societies appear; and
Ideas from the field are adopted in other fields.
NSA's long-standing investment in cryptographic science and engineering has yielded the most robust encryption technology in the world. But the protection of our nation's cyber systems demands security design and analysis techniques that encompass much more than cryptography, yet are comparably grounded in science. While we do not expect that a science of cybersecurity can guarantee complete protection against cybersecurity threats any more than safety science can guarantee risk-free transportation, it should provide us with greater certainty about the capabilities and limitations of our security mechanisms, allowing us to make well-informed risk decisions. NSA's cybersecurity science initiative is the first step in a long-term endeavor to develop the broad understanding of security that we need to protect our national interests in cyberspace.
About the author
Robert Meushaw is the former technical director of NSA's Information Assurance (IA) Research Laboratory. His current work focuses on developing new strategies and programs for the advancement of a science of cybersecurity. He retired from NSA in 2005 after 33 years of service, including over a decade of work in IA research. Meushaw's career at NSA also included significant stints in both the Production Development Group and the Security Evaluation Group of the IA Directorate. In addition to his technical responsibilities, he served for six years as a technical editor of NSA's Tech Trend Notes and The Next Wave publications. Meushaw holds degrees in electrical engineering from Princeton University and the Johns Hopkins University.
Carl E. Landwehr is an independent consultant in cybersecurity research. Until recently, he was a senior research scientist for the Institute for Systems Research at the University of Maryland, College Park. He received his BS in engineering and applied science from Yale University and his PhD in computer and communication sciences from the University of Michigan. Following a 23-year research career at the Naval Research Laboratory, he has for the past decade developed and managed research programs at the National Science Foundation and the Advanced Research Development Activity/Defense Technology Office/ Intelligence Advanced Research Projects Activity. He is interested in all aspects of trustworthy computing. In December 2010, he completed a four-year term as editor in chief of IEEE Security & Privacy Magazine.
[1] Platt JR. "Strong inference: Certain systematic methods of scientific thinking may produce much more rapid progress than others." Science. 1964;146(3642):347–353. DOI: 10.1126/science.146.3642.347
[2] O'Donohue W, Buchanan JA. "The weakness of strong inference." Behavior and Philosophy. 2001;29:1–20. Available at: http://mres.gmu.edu/pmwiki/uploads/Main/ODonohue2001.pdf
[3] Davis RH. "Strong inference: Rationale or inspiration?" Perspectives in Biology and Medicine. 2006;49(2):238–249. DOI: 10.1353/pbm.2006.0022
[4] Feller I, Stern PC, editors. A Strategy for Assessing Science: Behavioral and Social Research on Aging. Washington (DC): National Academies Press; 2007. Chapter 4, "Progress in science"; p. 67–94.