Country Profile Paper

Country Selected: Iraq

Textbook:

Clark, R. M. (2012). Intelligence analysis: A target-centric approach (4th ed.). Los Angeles, CA: SAGE.

This textbook must be utilized.

Content / Requirements 

To demonstrate intelligence collection and analytical ability, you are required to provide a written intelligence profile paper on a country of your choosing that is not an ally of the United States.

Submission requirements:

Provide a complete review of the selected country’s political and military features, as well as an analysis of the threat that country poses to the United States.

The paper should be at least six pages, double-spaced; this does not include the title or reference pages.

Your research should apply the concepts studied in the course and textbook readings.

Formatting will include:

  • APA style;
  • few to no grammatical or spelling errors; and
  • references.

Introduction

 

The greatest derangement of the mind is to believe in something because one wishes it to be so.

Louis Pasteur

 

We learn more from our failures than from our successes. As noted in the preface to this book, there is much to be learned from what have been called the two major U.S. intelligence failures of this century—the attacks of September 11, 2001, and the miscall on Iraqi WMD. So this book begins with an overview of why we fail.

Why We Fail

As a reminder that intelligence failures are not uniquely a U.S. problem, it is worth recalling some failures of other intelligence services in the last century:

 

  • Operation Barbarossa, 1941. Josef Stalin acted as his own intelligence analyst, and he proved to be a very poor one. He was unprepared for a war with Nazi Germany, so he ignored the mounting body of incoming intelligence indicating that the Germans were preparing a surprise attack. German deserters who told the Russians about the impending attack were considered provocateurs and shot on Stalin’s orders. When the attack, named Operation Barbarossa, came on June 22, 1941, Stalin’s generals were surprised, their forward divisions trapped and destroyed. 1
  • Singapore, 1942. In one of the greatest military defeats that Britain ever suffered, 130,000 well-equipped British, Australian, and Indian troops surrendered to 35,000 weary and ill-equipped Japanese soldiers. On the way to the debacle, British intelligence failed in a series of poor analyses of their Japanese opponent, such as underestimating the capabilities of the Japanese Zero fighter and concluding that the Japanese would not use tanks in the jungle. The Japanese tanks proved highly effective in driving the British out of Malaya and back to Singapore. 2
  • Yom Kippur, 1973. Israel is regarded as having one of the world’s best intelligence services. But in 1973 the intelligence leadership was closely tied to the Israeli cabinet and often served both as policy advocate and information assessor. Furthermore, Israel’s past military successes had led to a certain amount of hubris and belief in inherent Israeli superiority. Israel’s leaders considered their overwhelming military advantage a deterrent to attack. They assumed that Egypt needed to rebuild its air force and forge an alliance with Syria before attacking. In this atmosphere, Israeli intelligence was vulnerable to what became a successful Egyptian deception operation. The Israeli intelligence officer who correctly predicted the impending attack had his report suppressed by his superior, the chief intelligence officer of the Israeli Southern Command. The Israeli Defense Force was caught by surprise when, without a rebuilt air force and having kept their agreement with Syria secret, the Egyptians launched an attack on Yom Kippur, the most important of the Jewish holidays, on October 6, 1973. The attack was ultimately repulsed but only at a high cost in Israeli casualties. 3
  • Falkland Islands, 1982. Argentina wanted Great Britain to hand over the Falkland Islands that Britain had occupied and colonized in 1833. Britain’s tactic was to conduct prolonged diplomatic negotiations without giving up the islands. There was abundant evidence of Argentine intent to invade, including a report of an Argentine naval task force headed for the Falklands with a marine amphibious force. But the British Foreign and Commonwealth Office did not want to face the possibility of an Argentine attack because it would be costly to deter or repulse. Britain’s Latin America Current Intelligence Group (dominated at the time by the Foreign and Commonwealth Office) accordingly concluded, on March 30, 1982, that an invasion was not imminent. On April 2 Argentine marines landed and occupied the Falklands, provoking the British to assemble a naval task force and retake the islands. 4

The common theme of these and many other intelligence failures discussed in this book is not the failure to collect intelligence. In each of these cases, the intelligence had been collected. Three themes are common in intelligence failures.

Failure to Share Information

From Pearl Harbor to 9/11 and the miscall on Iraq’s possession of WMD, the inability or unwillingness of collectors and analysts to share intelligence has been a recurring cause of failure.

Intelligence should be a team sport. Effective teams require cohesion, formal and informal communication, cooperation, shared mental models, and similar knowledge structures—all of which contribute to sharing of information. Without such a common process, any team—especially the interdisciplinary teams that are necessary to deal with complex problems of today—will quickly fall apart. 5

Nevertheless, the Iraqi WMD Commission (the Commission on the Intelligence Capabilities of the United States Regarding Weapons of Mass Destruction, which issued its formal report to President George W. Bush in March 2005) found that collectors and analysts failed to work as a team. 6 They did not effectively share information. And the root causes for the failure to share remain, in the U.S. intelligence community as well as in almost all intelligence services worldwide.

Sharing requires openness. But any organization that requires secrecy to perform its duties will struggle with and often reject openness. 7 Most governmental intelligence organizations, including the U.S. intelligence community, place more emphasis on secrecy than on effectiveness. 8 The penalty for producing poor intelligence usually is modest. The penalty for improperly handling classified information can be career ending. 9 There are legitimate reasons not to share; the U.S. intelligence community has lost many collection assets because details about them were too widely shared. So it comes down to a balancing act between protecting assets and acting effectively in the world. Commercial organizations are more effective at intelligence sharing because they tend to place more emphasis on effectiveness than on secrecy; they also have less risk of losing critical sources from compromises.

Experts on any subject have an information advantage, and they tend to use that advantage to serve their own agendas. 10 Collectors and analysts are no different. At lower levels in the organization, hoarding information may have job security benefits. At senior levels, unique knowledge may help protect the organizational budget. So the natural tendency is to share the minimum necessary to avoid criticism and to protect the really valuable material. Any bureaucracy has a wealth of tools for hoarding information, and this book discusses the most common of them.

Finally, both collectors of intelligence and analysts find it easy to be insular. They are disinclined to draw on resources outside their own organizations. 11 Communication takes time and effort. It has long-term payoffs in access to intelligence from other sources but few short-term benefits.

In summary, collectors, analysts, and intelligence organizations have a number of incentives to conceal information and see few benefits in sharing it. The problem is likely to persist until the incentives to share outweigh the benefits of concealment.

Failure to Analyze Collected Material Objectively

In each of the cases cited at the beginning of this introduction, intelligence analysts or national leaders were locked into a mindset—the consistent thread in analytical failures. Falling into the trap that Louis Pasteur warned about in the observation that I quoted earlier, they believed because, consciously or unconsciously, they wished it to be so. Mindset can manifest itself in the form of many biases and preconceptions, a short list of which would include the following:

 

  • Ethnocentric bias involves projecting one’s own cultural beliefs and expectations on others. It leads to the creation of a mirror-image model, which looks at others as one looks at oneself, and to the assumption that others will act rationally as rationality is defined in one’s own culture. The Yom Kippur attack was not predicted because, from Israel’s point of view, it was irrational for Egypt to attack without extensive preparation.
  • Wishful thinking involves excessive optimism or avoiding unpleasant choices in analysis. The British Foreign Office did not predict an Argentine invasion of the Falklands because, in spite of intelligence evidence that an invasion was imminent, they did not want to deal with it. Josef Stalin made an identical mistake for the same reason prior to Operation Barbarossa.
  • Parochial interests cause organizational loyalties or personal agendas to affect the analysis process.
  • Status quo biases cause analysts to assume that events will proceed along a straight line. The safest weather prediction, after all, is that tomorrow’s weather will be like today’s. An extreme case is the story of the British intelligence officer who, on retiring in 1950 after 47 years of service, reminisced: “Year after year, the worriers and fretters would come to me with awful predictions of the outbreak of war. I denied it each time. I was only wrong twice.” 12 The status quo bias causes analysts to fail to catch a change in the pattern.
  • Premature closure results when analysts make early judgments about the solution to a problem and then, often because of ego, defend the initial judgments tenaciously. This can lead the analyst to select (usually without conscious awareness) subsequent evidence that supports the favored solution and to reject (or dismiss as unimportant) evidence that conflicts with it.

All of these mindsets can lead to poor assumptions and bad intelligence if not challenged. And as the Iraqi WMD Commission report notes, analysts often allow unchallenged assumptions to drive their analysis. 13

Failure of the Customer to Act on Intelligence

In some cases, as in Operation Barbarossa and the Falkland Islands affair, the intelligence customer failed to understand or make use of the available intelligence.

A senior State Department official once remarked, half in jest, “There are no policy failures; there are only policy successes and intelligence failures.” 14 The remark rankles intelligence officers, but it should be read as a call to action. Intelligence analysts should accept partial responsibility when their customer fails to make use of the intelligence provided, and they should also accept the challenge of engaging the customer during the analysis process to ensure that the resulting intelligence is taken into account when the customer must act.

In this book I devote considerable discussion to the vital importance of analysts being able to objectively assess and understand their customers and their customers’ business or field. The first part of the book describes a collaborative, target-centric approach to intelligence analysis that demands a close working relationship among all stakeholders, including the customer, as the means to gain the clearest conception of needs and the most effective results or products. The last chapter of the book discusses ways to ensure that the customer takes the best available intelligence into account when making decisions.

Intelligence analysts have often been reluctant to closely engage one class of customer—the policymakers. In its early years the CIA attempted to remain aloof from its policymaking intelligence customers to avoid losing objectivity in the national intelligence estimates process. 15 The disadvantages of that separation became apparent, as analysis was not addressing the customer’s current interests, and intelligence was becoming less useful to policymaking. During the 1970s CIA senior analysts began to expand contacts with policymakers. As both the Falklands and Yom Kippur examples illustrate, such closeness has its risks. But in many cases analysts have been able to work closely with policymakers and to make intelligence analyses relevant without losing objectivity.

What the Book Is About

This book is for intelligence analysts, and it develops a process for successful analysis—including avoiding those three themes of failure.

Studies have found that no baseline standard analytic method exists in the U.S. intelligence community. Any large intelligence community is made up of a variety of disciplines, each with its own analytic methodology. 16 Furthermore, intelligence analysts routinely generate ad hoc methods to solve specific analytic problems. This individualistic approach to analysis has resulted in a great variety of analytic methods, more than 160 of which have been identified as available to U.S. intelligence analysts. 17

There are good reasons for this proliferation of methods. Methodologies are developed to handle very specific problems, and they are often unique to a discipline, such as economic or scientific and technical (S&T) analysis (which probably has the largest collection of problem-solving methodologies). As an example of how methodologies proliferate, after the Soviet Union collapsed, economists who had spent their entire professional lives analyzing a command economy were suddenly confronted with free market prices and privatization. No model existed anywhere for such an economic transition, and analysts had to devise from scratch methods to, for example, gauge the size of Russia’s private sector. 18

But all intelligence analysis methods derive from a fundamental process. This book is about that process. It develops the idea of creating a model of the intelligence target and extracting useful information from that model. These two steps—the first called synthesis and the second called analysis—make up what is known as intelligence analysis. All analysts naturally do this. The key to avoiding failures is to share the model with collectors of information and customers of intelligence. While there are no universal methods that work for all problems, a basic process does in fact exist.

There also are standard widely used techniques. An analyst must have a repertoire of them to apply in solving intelligence problems. They might include pattern analysis, trend prediction, literature assessment, and statistical analysis. A number of these techniques are presented throughout the book in the form of analysis principles. These analysis techniques together form a problem-solving process that can help to avoid the intelligence blunders discussed earlier.

Sherman Kent noted that an analyst has three wishes: “To know everything. To be believed. And to exercise a positive influence on policy.” 19 This book will not result in an analyst’s being able to know everything—that is why we will continue to have estimates. But chapters 1–15 should help an analyst to learn the tradecraft of analysis, and chapter 16 is intended to help an analyst toward the second and third wishes.

Summary

Intelligence failures have three common themes that have a long history:

 

  • Failure of collectors and analysts to share information. Good intelligence requires teamwork and sharing, but most of the incentives in large intelligence organizations promote concealment rather than sharing of information.
  • Analysts’ failure to analyze the material collected objectively. The consistent thread in these failures is a mindset, primarily biases and preconceptions that hamper objectivity.
  • Failure of customers to act on intelligence. This lack of response is not solely the customer’s fault. Analysts have an obligation to ensure that customers not only receive the intelligence but fully understand it.

This book is about an intelligence process that can reduce such failures. A large intelligence community develops many analytic methods to deal with the variety of issues that it confronts. But the methods all work within a fundamental process: creating a model of the intelligence target (synthesis) and extracting useful information from that model (analysis). Success comes from sharing the target model with collectors and customers.

Notes

1. John Hughes-Wilson, Military Intelligence Blunders (New York: Carroll and Graf, 1999), 38.

2. Ibid., 102.

3. Ibid., 218.

4. Ibid., 260.

5. Rob Johnston, Analytic Culture in the U.S. Intelligence Community (Washington, D.C.: Center for the Study of Intelligence, Central Intelligence Agency, 2005), 70.

6. Report of the Commission on the Intelligence Capabilities of the United States Regarding Weapons of Mass Destruction, March 31, 2005, Overview.

7. Johnston, Analytic Culture, xvi.

8. Ibid., 11.

9. There exists some justification for the harsh penalty placed on improper use of classified information; it can compromise and end a billion-dollar collection program or get people killed.

10. Steven D. Levitt and Stephen J. Dubner, Freakonomics (New York: HarperCollins, 2005), 13.

11. Johnston, Analytic Culture, 29.

12. Amory Lovins and L. Hunter Lovins, “The Fragility of Domestic Energy,” Atlantic Monthly, November 1983, 118.

13. Report of the Commission.

14. William Prillaman and Michael Dempsey, “Mything the Point: What’s Wrong with the Conventional Wisdom About the C.I.A.,” Intelligence and National Security 19, no. 1 (March 2004): 1–28.

15. Harold P. Ford, Estimative Intelligence (Lanham, Md.: University Press of America, 1993), 107.

16. Johnston, Analytic Culture, xvii.

17. Ibid., 72.

18. Center for the Study of Intelligence, Central Intelligence Agency, “Watching the Bear: Essays on CIA’s Analysis of the Soviet Union,” Conference, Princeton University, March 2001, www.cia.gov/cis/books/watchingthebear/article08.html, 8.

19. Ibid., 12.

Chapter 15: Technology and Systems Analysis

 

A weapon has no loyalty but to the one who wields it.

Ancient Chinese proverb

 

The impact of technologies such as computers and telecommunications, nanoengineering, and bioengineering reaches across most fields of intelligence. Advanced technologies are particularly relevant to terrorism intelligence and intelligence about weapons of mass destruction (WMD), such as chemical, biological, and nuclear weapons. Technology assessment and systems analysis are important specialized fields of intelligence analysis that depend on creating models. Systems analysis in particular makes more use of target models, especially simulation models, than does any other intelligence subdiscipline. Both routinely use all of the sources discussed in chapter 6.

The technology intelligence discipline is often called scientific and technical (S&T) intelligence. But scientific developments generally are openly published. They are seldom of high intelligence interest. When you do something with science, however, the result is technology, and it can be of intelligence interest. Science is of intelligence interest only insofar as it has the potential to be implemented as a technology. Analysts often are strongly tempted to investigate an interesting scientific breakthrough that will not become a system or a product for decades.

Technology, in turn, is of interest only insofar as it has the potential to become part of a system that is of intelligence interest, for example, a weapons system or, in business intelligence, a competing product. This is a simple paradigm but a valid one that is often forgotten by S&T and weapons systems analysts.

Technology Assessment

Technology assessment makes extensive use of open source information. No other source contains the technical detail that open source material can provide. No matter how highly classified a foreign project may be, the technology involved in the project eventually appears somewhere in the open literature; scientists and technologists want to publish their results, usually for reasons of professional reputation. This rule holds true even with respect to targets that heavily censor their open publications; one merely has to know where to look. For example, a collector can trace even the most sensitive U.S. defense or intelligence system developments simply by following the right articles over an extended period of time in journals such as Aviation Week and Space Technology.

The key to using open source material in technical intelligence is identifying and analyzing the relationships among programs, persons, technologies, and organizations, and that depends on extensive use of relationship or network analysis (chapter 14).
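
To make the idea concrete, here is a minimal Python sketch that records pairwise links among persons, programs, and organizations mentioned together in open sources and ranks entities by how many links they carry. The names and links are invented for illustration; a real application would use a proper graph library and far richer data.

    # Minimal relationship-network sketch: record pairs of entities
    # (persons, programs, organizations) that appear together in open
    # sources, then rank entities by degree (number of links) to find
    # the hubs that tie a program together. Links are invented.
    from collections import Counter

    links = [("Dr. Orlov", "Program X"), ("Dr. Orlov", "Institute A"),
             ("Institute A", "Program X"), ("Dr. Wei", "Institute A"),
             ("Dr. Wei", "Program Y")]

    degree = Counter()
    for a, b in links:
        degree[a] += 1
        degree[b] += 1
    print(degree.most_common(3))  # the highest-degree entities are the hubs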

Technology helps shape intelligence predictions and is the object of predictions. The three general types of technology predictions have to do with the following:

 

  • The future performance of a technology
  • A forecast of the likelihood of innovation or breakthroughs in a technology
  • The use, transfer, or dissemination of the technology

Future Performance

Pattern analysis is used extensively in making technology estimates based on open literature. Articles published by a research group identify the people working on a particular technology. Patents are an especially fruitful source of information about technology trends. Tracking patent trends over time can reveal whether enthusiasm for a technology is growing or diminishing, what companies are entering or leaving a field, and whether a technology is dominated by a small number of companies. 1 Patent counting by field is a technique used in creating technology indicators; in technology policy assessments; and in corporate, industry, or national technological activity assessments. Corporate technology profiles, based on patents, are used for strategic targeting, competitor analysis, and investment decisions. They are used to create citation network diagrams (which show the patterns of citations to prior relevant research) for identifying markets and forecasting technology.
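
As a concrete illustration of patent counting, the following minimal Python sketch tallies patents per year and per assignee for one field; the file name and column layout (year, assignee, field) are assumptions standing in for a real patent-database export.

    # Count patents per year and per assignee for one technology field.
    # Rising yearly counts suggest growing enthusiasm for the field; a
    # few dominant assignees suggest the field is concentrated.
    import csv
    from collections import Counter

    def patent_trends(path, field_of_interest):
        by_year, by_assignee = Counter(), Counter()
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                if row["field"] == field_of_interest:
                    by_year[int(row["year"])] += 1
                    by_assignee[row["assignee"]] += 1
        for year in sorted(by_year):
            print(year, by_year[year])
        print("Top assignees:", by_assignee.most_common(5))
        return by_year, by_assignee

    patent_trends("patents.csv", "high-power microwave tubes")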

Another publications pattern analysis tool, citation analysis, involves counting the number of citations to a particular report. A high number of citations is a proven indicator of the impact, or quality, of the cited report. 2 Citation analysis indicates relationships and interdependencies of reports, organizations, and researchers. It can indicate whether a country’s research is internally or externally centered and show relationships between basic and applied research. Productivity in research and development has been shown to be highly concentrated in a few key people. 3 Citations identify those people. Commercial publications now routinely track citation counts and publish citation analysis results.
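
Citation counting is just as mechanical. A small sketch follows, with an invented citation list and author mapping; real citation analysis would draw on the commercial citation-tracking services mentioned above.

    # Citation-analysis sketch: from (citing, cited) report pairs, count
    # citations received to flag high-impact reports, then roll counts
    # up to authors to find the key people. All data are invented.
    from collections import Counter

    citations = [("R2", "R1"), ("R3", "R1"), ("R3", "R2"), ("R4", "R1")]
    authors = {"R1": "Ivanova", "R2": "Chen", "R3": "Ivanova", "R4": "Okafor"}

    cited_counts = Counter(cited for _, cited in citations)
    author_impact = Counter()
    for report, n in cited_counts.items():
        author_impact[authors[report]] += n

    print(cited_counts.most_common())   # high counts indicate impact
    print(author_impact.most_common())  # productivity concentrates in a few people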

A technology assessment methodology must correctly characterize the performance of the technology; that is, it must use the correct measures of performance. For example, one useful measure of high-power microwave tube technology is average power as a function of frequency. Second, the methodology must identify the critical supporting technologies (forces) that can make a difference in the target technology’s performance. Third, it must allow comparison of like developments; it is misleading, for example, to compare the performance of a one-of-a-kind laboratory device with the performance of a production-line component or system. Finally, the methodology must take time into account—the time frame for development in the country or organization of interest, not another country’s or organization’s time frame—since the methodology requires a projection into the future.

A five-stage generic target model has been used for describing the development of a technology or product. It is commonly used to predict future development of a technology. The five stages of technology growth are often represented on the S curve introduced in chapter 5. The technology has a slow start, followed by rapid growth and, ultimately, maturity, wherein additional performance improvements are slight. 4 The vertical axis of the curve can represent many things, including the performance of the technology according to some standard, its popularity, or the extent of its use.

Transitions between stages are difficult to establish because there is a natural overlap and blending between stages. The scheme provides for development milestones that can be measured on the basis of how much or how little has been published on the research involved in the technology.

In the S curve that shows the progress of a technology through the five stages of its lifetime, the vertical axis represents the number of patents or publications about the technology. Another type of S curve measures the performance improvement of the technology at some stage of its development, generally at the fourth (production) stage, though it can be drawn for all stages simultaneously. The horizontal axis for both curves is time. A technology is available for industrial use when it reaches the steepest slope on the S curve, as Figure 15-1 illustrates. In this region, the technology is a “hot” item and is being widely publicized.
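
One way to estimate a technology’s position on such a curve is to fit a logistic function to cumulative publication or patent counts. The sketch below does this with invented data; the fitted midpoint t0 marks where the slope is steepest and the technology is “hot.”

    # Fit a logistic (S) curve to cumulative publication counts and
    # estimate the saturation level and the midpoint, where growth is
    # steepest. The counts are invented for illustration.
    import numpy as np
    from scipy.optimize import curve_fit

    def logistic(t, L, k, t0):
        return L / (1.0 + np.exp(-k * (t - t0)))

    years = np.arange(2000, 2016)
    cumulative_pubs = np.array(
        [3, 5, 9, 15, 26, 44, 70, 104, 140, 172, 196, 212, 222, 228, 231, 233])

    (L, k, t0), _ = curve_fit(logistic, years, cumulative_pubs,
                              p0=[250.0, 0.5, 2008.0])
    print(f"saturation ~{L:.0f} publications, steepest growth near {t0:.1f}")
    # A t0 in the past with counts flattening toward L suggests maturity.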

When the S curve for technology use flattens, the technology is mature, and only incremental performance improvements are possible. Generally, at this point, the technology has reached some fundamental limit defined by physical laws. Incremental improvements may flow from clever design techniques, increases in scale size, or improvements in materials, but the changes will bring only modest improvements in the technology’s performance.

The S curve is the fundamental tool in a widely used formal methodology for technology assessment. This methodology, called TRIZ-based technology intelligence, assists technology managers in identifying competing industrial technologies in order to forecast their development and determine their potential. (TRIZ is a Russian acronym for Theory of Inventive Problem Solving.) The TRIZ methodology incorporates a number of techniques for locating the technology’s position on the S curve, including the dates and quality of patent applications and measurements of performance improvements. 5 All of the techniques, though, at best can determine the current position of the technology on the S curve and its technological limits only qualitatively.

Innovation

The general nature of advances in any field of technology can be foreseen through the use of such criteria as current activity in the field, the need for a solution to a particular problem, and the absence of fundamental laws prohibiting such advances. 6 If those criteria have been met, one looks for an innovative development using the techniques described in this section.

Innovation is a divergent, not a convergent, phenomenon; no body of evidence builds up to an inevitable innovation in a technical field. Predicting innovation is an art rather than a science. As noted earlier, the U.S. intelligence community long ago gave up trying to predict a divergent act such as a coup, for good reasons. 7 When we attempt to predict innovation, we are trying to predict a breakthrough, which is of the same nature as a coup. A breakthrough is a discontinuity in the S curve of technology development. We cannot predict when an innovation will happen, but as with a coup we can determine when conditions are right for it, and we can recognize when it starts to develop.

Because technology development follows the S curve, we have some prospects for success. Guidelines based on past experiences with innovation can sometimes help predict who will produce an innovation. We say sometimes because innovation often comes from a completely unexpected source, as noted later in this chapter. Finally, standard evidence-gathering and synthesis/analysis techniques can help determine what the innovation will be, but only very late in the process.

Figure 15-1. The S Curve for Technology Development [figure not reproduced]

Predicting the Timing of Innovation. Predicting when the timing is right for innovation in a technology is a matter of drawing the S curve of performance versus time for the technology. When the curve reaches the saturation level and flattens out, the timing is right. The replacement technology normally will start to take over sometime after that point and will eventually reach a higher level of performance. It will not necessarily start at a higher level. Over time in a given field, we obtain a series of S curves on one graph. In each case the replacement technology begins at a lower performance level but quickly surpasses its predecessor technology.
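
This succession of S curves can be tabulated numerically. The sketch below compares two invented logistic curves and finds the year in which the replacement technology overtakes the incumbent; every parameter value is an assumption for illustration.

    # Two successive S curves: an incumbent near saturation and a
    # replacement that starts lower but has a higher ceiling. Find the
    # crossover year. All parameter values are invented.
    import math

    def logistic(t, L, k, t0):
        return L / (1.0 + math.exp(-k * (t - t0)))

    def incumbent(t):
        return logistic(t, L=100.0, k=0.6, t0=1995.0)

    def replacement(t):
        return logistic(t, L=400.0, k=0.8, t0=2008.0)

    crossover = next(t for t in range(1990, 2030)
                     if replacement(t) > incumbent(t))
    print("replacement overtakes incumbent around", crossover)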

Predicting Sources. Innovation generally comes from an individual or a small organization that is driven by an incentive force and is not held back by restraining forces. Consider DuPont’s innovation record between 1920 and 1950. During that time, only five of 18 major DuPont innovations—neoprene, nylon, Teflon, Orlon, and polymeric color film—originated within the company. The rest came from small companies or individual inventors and were acquired by DuPont. Many came from a U.K. chemical company under a technology exchange agreement. For comparison, five of DuPont’s seven major product and process improvements over the same period originated within DuPont. The DuPont case is one of many examples that suggest that innovation does not typically take place in a large organization, but product and process improvements do.

It is worth asking why DuPont chose to acquire those outside innovations. In the time frame of the study, DuPont maintained one of the best business intelligence operations in the world. Its business intelligence group, which remains largely unpublicized, achieved success by a combination of solid collection (open literature search, human intelligence, and materials acquisition and testing) with good technical analysis to identify promising targets and acquire needed technology effectively and cheaply.

The same generalization held true in the planned economy of the former Soviet Union: Most innovations came from outside the country, and those that came from within were developed by individual scientists or very small groups in Academy of Sciences laboratories. Soviet defense and industrial laboratories did very well on engineering improvements but almost never innovated. And like DuPont’s, Soviet commercial espionage was reasonably effective at acquiring technology. The Soviets, however, were considerably less effective in analyzing, evaluating, and adopting technology than was DuPont.

These examples, along with others discussed later in this chapter, have a common theme: a revolutionary project has a much better chance of success if it is placed under a management structure completely separate from that used for evolutionary developments. In evaluating an organization that is pursuing a potential breakthrough technology, the analyst therefore should assess the organizational structure, as discussed in the preceding chapter, and note how the organization has handled new technologies in the past. Does the organization welcome and aggressively pursue new technologies, as DuPont, 3M, biotech firms, and most Internet companies do? Or does it more closely resemble the former Soviet industrial ministries, which saw new technologies as a hindrance?

Evaluating the Innovation Climate. The following example sets the stage for a discussion of two forces—freedom and demand-driven incentive—that foster innovation and that need to be considered in any force analysis.

Back in 1891, an American named Almon B. Strowger developed one of the most significant innovations in telecommunications history—the step-by-step electromechanical switch. The Strowger switch made the dial telephone possible. Strowger’s innovation is remarkable because he was not an engineer; he was an undertaker. But Strowger had two things going for him that are at the core of all innovation: freedom and incentive. He had freedom because no one required him to be an electrical engineer to develop communications equipment. He did not even have to work for Bell Telephone Company, which at that time had an entrenched monopoly. And he had a special kind of incentive. Strowger was one of two morticians in Kansas City. The other mortician’s wife was one of the city’s telephone switchboard operators. Strowger was convinced that she was directing all telephone calls for mortician services to his competitor. Strowger had a powerful economic incentive to replace her with an objective telephone call director.

The elements of freedom and incentive, which we see in Strowger’s case, appear in most significant innovations throughout history. The incentive does not have to be economic; in fact, we can use the nature of the incentive to define the difference between scientists and engineers or technologists: Science is driven primarily by noneconomic incentives, and engineering or technology is driven primarily by economic incentives. The difference is important: It explains why the Soviet Union produced so many competent scientists and scientific discoveries yet failed so miserably in technological innovation. Soviet scientists, like their U.S. counterparts, had as incentives knowledge, recognition, and prestige. But the engineer, in contrast, tends to depend on economic incentives, and those are notably absent in planned economies such as the Soviet one. The Soviet patent system was almost confiscatory and provided no more financial return to an innovative engineer than the typical U.S. company’s patent-rights agreement does. Patents aside, the Soviet engineer who developed something new would likely see his manager take credit for the innovation.

Freedom is a more subtle factor in the innovation equation, but it is just as important as incentive. In the 1920s, the Soviet Union and the United States were world leaders in genetics. At that time, Soviet agronomist Trofim D. Lysenko proposed the theory that environmentally acquired characteristics of an organism were inheritable. Lysenko rejected the chromosome theory of heredity and denied the existence of genes. Lysenko was dismissed as a charlatan by Russia’s leading geneticists, including N. I. Vavilov. But he had one powerful argument in his favor: Josef Stalin liked his theories. They were compatible with Stalin’s view of the world. If people could pass on to their descendants the behavior patterns they acquired in Soviet society, then the type of state that Stalin sought to establish could become a permanent one. So, with the backing of the Communist Party, the environmental theory became the only acceptable theory of genetics in the Soviet Union. In August 1940, Vavilov was arrested and subsequently died in prison. At least six of Russia’s other top geneticists disappeared. In later years, the Soviets would admit that “Lysenkoism” set back their effort in genetics by about 12 years.

Intelligence analysts sometimes looked on the Lysenko affair as an aberration. It was, in fact, a fairly accurate picture of the research environment of the Soviet Union. Although there were no more cases as dramatic as Lysenko’s, the contamination of Lysenkoism spread to many other scientific fields in the Soviet Union during the 1930s and 1940s. Restrictions on freedom to innovate contributed substantially to the country’s economic decline. Most of the restrictions stemmed from two phenomena—the risk aversion that is common to most large, established organizations and the constraints of central economic planning that are unique to state-run economies such as that of the Soviet Union.

The Soviet Union was organized for central economic planning, and meeting the plan had first priority. That gave Soviet industries a powerful incentive to continue their established lines. Inertial forces, discussed in chapter 13, dominated. If current production declined, a plant manager would strip his research and development organization to maintain production. Furthermore, central planning seems inevitably to imply short supplies. Supplies would become even shorter if a plant manager attempted to innovate, and new types of supplies needed for innovation wouldn’t be available at all. The Soviet plant that needed something out of the ordinary—a special type of test instrument, for example—had to build the device itself at a high cost in resources.

Moreover, Soviet management subscribed, at least in principle, to Marx’s labor theory of value, in which if one device takes twice as much labor to produce as another, it should command twice the price. Such theories provide a powerful deterrent to both innovation and automation.

In sum, the Soviet industrial structure provided severe penalties for risk taking if a project failed. It provided little reward if the project succeeded, and it provided no special penalty for doing nothing. This set of pressures often led the Soviet or East European plant manager to a counterproductive response when he was required to start a new project: Because there were no rewards for success, and severe penalties for failure, the manager would choose an expendable worker—usually someone who was due to retire shortly—and place him in charge of the project. When the project failed—and the cards were always stacked against it—the plant manager could blame his carefully prepared scapegoat and fire or retire him.

A poor innovation climate has on occasion adversely affected U.S. firms. General Electric (GE), in its race with Bell Laboratories to invent the transistor, paid dearly for the inertial constraint of dependence on familiar technology. As Lester Thurow described in his article “Brainpower and the Future of Capitalism,” Bell Laboratories developed the transistor exactly one day ahead of GE. The reason that Bell was able to trump GE, in spite of GE’s large technological edge in the field, was that GE gave the transistor development assignment to its vacuum tube engineers. They spent three years trying to prove that the transistor would not work. Bell Laboratories, in contrast, spent its time trying to prove that the transistor would work. As Thurow so clearly puts it, “There were five companies in America that made vacuum tubes and not a single one of them ever successfully made transistors or semiconductor chips. They could not adjust to the new realities.” Had it spun off a new company based solely on the viability of the transistor, GE might now have all the patents and Nobel prizes and revenues from the transistor that Bell enjoyed. GE would also have been in a superb position to benefit from the revolution in miniaturization that came with the introduction of the transistor. Instead, GE ended up having to buy transistors and semiconductors from suppliers. 8

The lesson of these examples for the analyst is this: In evaluating the ability of an organization or a country to develop a new technology, look at the culture. Is it a risk-accepting or a risk-averse culture? Does it place a high priority on protecting its current product line? What political, economic, organizational, or social constraints does the organization place on development of technology?

Technology Use, Transfer, and Diffusion

Use. In tracking a technology that may have military use, analysts use a generic target model called the bathtub curve—a time-versus-visibility curve shaped like a bathtub. The basic research on a technology is usually visible, because scientists and engineers need to communicate their findings. As this research is applied to a military system or moves toward production, the work becomes more secret and disappears from publication (the bottom of the “bathtub”). Finally, as a weapons system emerges from the research and development stage and enters testing, the program becomes visible again.

Keeping track of a technology on the bathtub curve requires skillful use of a combination of open literature and classified sources. The technique has enabled analysts to determine the capabilities of a technology and the time phases of technical developments and innovations even when the program was at the bottom of the “bathtub.”
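
A crude way to spot the pattern in open sources is to watch yearly publication counts for a drop followed by a rebound. The sketch below does this with invented counts and thresholds; a real application would rest on careful collection and analyst judgment rather than this toy test.

    # Bathtub-curve sketch: yearly publication counts on a technology.
    # Early visibility, a sustained quiet period, then a rebound matches
    # the bathtub pattern of a program moving into secrecy and later
    # into testing. Counts and thresholds are invented.
    counts = {2005: 18, 2006: 22, 2007: 25, 2008: 9, 2009: 4,
              2010: 3, 2011: 5, 2012: 16, 2013: 21}

    years = sorted(counts)
    peak_early = max(counts[y] for y in years[:3])
    quiet = [y for y in years if counts[y] < 0.3 * peak_early]
    if quiet:
        later = [counts[y] for y in years if y > max(quiet)]
        if later and max(later) > 0.7 * peak_early:
            print("quiet period", min(quiet), "to", max(quiet),
                  "followed by a rebound: candidate bathtub pattern")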

Patents are a valuable source for determining what use is being made of a technology. There are several guidelines for working with patents:

 

  • Obtain the names of all coauthors and any institutional affiliations in a patent. In sifting through the great volume of patents, the first search an experienced open literature analyst usually makes is of names or institutions; this preliminary screening allows relevant work to be identified. The second screening is on the specific technology discussed in the patent. (A minimal sketch of this two-stage screen follows the list.)
  • Abstracts are useful for screening and identifying patents of interest, but the full text is essential for technology evaluation. The full text also may contain indicators that a patent is of interest to the military or to a particular company.
  • Some patent literature describes patented devices that have been introduced in industry. These provide a filtered set of the most interesting patents—those that are sufficiently applications oriented to be used by an industry.
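
A minimal version of that two-stage screen, with invented records and watch lists, might look like the following Python sketch:

    # Two-stage patent screening, following the guidelines above: first
    # screen on inventor names and institutions, then on the technology
    # in the abstract and full text. All records are invented.
    patents = [
        {"inventors": ["A. Petrov"], "assignee": "Elektropribor",
         "abstract": "traveling-wave tube amplifier",
         "full_text": "high average power over a wide frequency band ..."},
        {"inventors": ["J. Smith"], "assignee": "Acme Corp",
         "abstract": "garden sprinkler", "full_text": "..."},
    ]
    watch_names = {"A. Petrov"}
    watch_orgs = {"Elektropribor"}
    tech_terms = ("traveling-wave tube", "microwave")

    first_pass = [p for p in patents
                  if set(p["inventors"]) & watch_names
                  or p["assignee"] in watch_orgs]
    of_interest = [p for p in first_pass
                   if any(term in p["abstract"] or term in p["full_text"]
                          for term in tech_terms)]
    print(len(of_interest), "patent(s) flagged for full technical review")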

When an analyst is assessing the usage of a technology, it is easy to become entranced with its performance and promise. Technology does not exist in a vacuum; it is a resource like any other, and it can be well applied or poorly applied. What matters is not just the technology itself but what an organization does with it. Throughout the 1970s and 1980s, Xerox Corporation funded a think tank called the Palo Alto Research Center (PARC). PARC’s staff was perhaps the greatest gathering of computer science talent ever assembled. PARC developed the concept of the desktop computer long before IBM launched its personal computer (PC). It created a prototype graphical user interface of icons and layered screens that evolved into Microsoft Windows. It developed much of the technology underlying the Internet. But despite PARC’s many industry-altering breakthroughs, Xerox repeatedly failed to exploit the financial potential of the achievements (although the return on investment to Xerox from laser printers alone more than paid for PARC). PARC’s culture was well suited to developing new technologies but not so well suited to exploiting them for financial gain.

The various constraints on innovation also may constrain use of a new technology. A technology may carry with it risks of liability or action by government regulatory agencies. Pollution control regulations increasingly shape technology decisions. Possible tort liability or patent infringement suits are risks that might cause a technology to be rejected.

The assessment of regulatory forces (discussed in chapter 13) has become a key decision point for innovations in recent years. Analysts need to ask: Can the product be produced within environmental restrictions on air and water pollution? Does it meet government safety standards? Does it face import barriers in other countries? Can the innovation be protected, either by patents or trade secrets, long enough to obtain a payoff of production costs?

Within the framework of questions such as these, large and small companies have strikingly different approaches to the acquisition of new technology. Within large companies, the rejection rate of new technology is high. Less than one idea in 100 is taken up, and when commercialized, two of three new products fail. Institutional resistance to change is endemic, and new technologies face enormous hurdles—even though a large company’s access to new technologies is typically much greater than that of small companies.

In large companies, a “champion” of a major new technology typically must emerge before the company will accept the new technology. Such champions, also described as “change agents,” are innovators who are willing to risk their personal and professional future for a development of doubtful success. The unpopularity of such champions in large companies is understandable because of two circumstances: the vulnerability of established product lines to the new technology and the high costs of converting the established production line. These factors are particularly important in a large and complex industry such as the automotive industry, where new technology is welcome only if it is incremental and evolutionary.

As a consequence, revolutionary technologies often are brought to the market by outsiders after being rejected by leaders in the relevant industry. Witness Kodak’s rejection of Dr. Edwin H. Land’s instant photography process and its subsequent development by Polaroid; or the refusal of several large corporations to accept the challenge of commercializing the photocopying process that led to the creation of Xerox.

Transfer or Diffusion. It is often important to assess the effectiveness of technology transfer or diffusion—an especially critical topic in the fields of WMD and weapons proliferation. In assessing the effectiveness of technology diffusion, an analyst has to consider several factors:

 

  • Is the technology available, and if so, in what form? In transferring software, for example, a receiving organization can do far more with the source code than with the compiled code.
  • Does the receiving organization reach out for new technologies or resist their introduction? Resistance to change is a severe constraint on technology diffusion. Everyone knows that, but intelligence analysts often ignore it, blithely assuming that instant diffusion exists and forgetting about inertial forces and the “not invented here” factor (see chapter 13).
  • What mechanisms are used to transfer? Some transfer mechanisms are very effective, others less so. Table 15-1 summarizes the effectiveness of various transfer mechanisms.

Technology transfer and diffusion occur rapidly in countries that are open, relatively free from government interference, and technologically advanced. Technology diffusion works extremely well in the United States, with its high mobility of workers and information. The lower mobility of Japan’s workers slows its diffusion mechanism somewhat, but Japan does well at acquiring and assimilating foreign technology because of other unique aspects of its culture. Multinational corporations provide a powerful international technological diffusion mechanism because they use the most effective of the mechanisms enumerated in Table 15-1 to transfer technology among their subsidiaries.

In contrast, technology diffusion was extremely poor in the former Soviet Union because of a tight compartmentation system and Soviet leaders’ insistence on secrecy—both of which were especially severe in defense industries but carried over into the civil industries as well. The situation was aggravated by the low mobility of Soviet workers; so technology did not spread through employees’ relocations either. As a result, a technology that found application in naval weapons systems, for example, would be unknown in the rest of the military and in civil industries.

Table 15-1 Effectiveness of Technology Transfer Mechanisms

High effectiveness:
  • Sale of firms or subsidiaries
  • License with extensive teaching effort
  • Joint ventures
  • Technical exchanges with ongoing contact
  • Training programs in high-technology areas
  • Movement of skilled personnel between companies

Medium effectiveness:
  • Industrial espionage
  • Engineering documents plus technical data
  • Consulting
  • Licenses plus know-how
  • Documented proposals
  • Sale of processing equipment without know-how
  • Commercial visits

Low effectiveness:
  • Licenses without know-how
  • Reverse engineering of the product
  • Undocumented proposals
  • Open trade literature, technical journals
  • Trade exhibits and conferences

Transfer of technology depends also on the motivations of both the transferee and transferor. Organizations that have the best technologies may not have the motivation to transfer. Industry’s approach to technology transfer is evaluative and selective. Industrial transferors of technology expect a return benefit that outweighs the cost of transfer, and they tend to evaluate the feedback of information on benefits, costs, and risks with reasonable objectivity. Among industrial transferors, large firms tend to be more cautious in applying new technologies than do small firms.

When a technology has been developed within a company, the company has incentives to transfer the technology to others. The technology so developed is a time-perishable company asset. If its transfer would result in a return benefit to the company that would more than offset the loss occasioned by the transfer, then the transfer will likely occur.

In making a decision whether or not to transfer, a company must consider many factors that can weigh for or against. The existence of a competing technology; the speed of assimilation and obsolescence of the technology; the vulnerability of established company products; the capability of the company to exploit the technology in-house; the competitiveness of the industry; the capability to protect the technology as a trade secret or under patent laws—all can weigh either way in the decision.

Some factors clearly weigh against a decision to transfer. Most important is the irreversibility of the release of technology. If the company’s analysis of the situation proves wrong, the technology cannot be “recalled.” Second is the advantage of lead time, an advantage that varies greatly according to the nature of the industry.

Other factors weigh for a decision to transfer. In some technologies, a potential return flow of benefits exists because others can build on the disclosed technology; the existence of cross-licensing or technical exchange agreements increases the value of these benefits. In other technologies, transfer is favored for the opposite reason: It makes the transferee dependent on the transferor’s R&D and inhibits the transferee from developing an independent R&D capability. The threat of loss of the technology through industrial espionage or personnel raids is another major factor. Also, it is usually easier to license the technology (for both sides) than it is to contest infringement. Easy transfer is also encouraged by the desire of companies for recognition as leaders in their fields. Furthermore, early release of a technology may result in an industry’s adopting the releasing company’s standards, with consequent competitive advantage to the releasing company.

Systems Analysis

Any entity having the attributes of structure, function, and process can be described and analyzed as a system, as noted in previous chapters. Air defense systems, transportation networks, welfare systems—all of these and many others have been the objects of systems analysis. Many of the formal applications of systems analysis were pioneered in the U.S. Department of Defense during the 1960s.

Much systems analysis is parametric, sensitivity, or “what if” analysis; that is, the analyst must try a relationship between two variables (parameters), run a computer analysis and examine the results, change the input constants, and run the analysis again. Systems analysis must be interactive; the analyst has to see what the results look like and make changes as new ideas surface.
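
A parametric sweep of this kind can be sketched in a few lines. The toy radar-range model and parameter values below are illustrative assumptions, standing in for the validated simulation a real systems analysis would use.

    # Parametric "what if" sweep: vary two parameters of a toy radar
    # range model and inspect how detection range responds. Range here
    # scales as the fourth root of (power * gain^2), with a constant
    # chosen only to anchor the numbers at plausible values.
    def detection_range_km(power_w, antenna_gain):
        return 50.0 * (power_w * antenna_gain**2 / 1e6) ** 0.25

    for power_w in (1e5, 5e5, 1e6):      # transmitter power, watts
        for gain in (100, 300, 1000):    # antenna gain, dimensionless
            r = detection_range_km(power_w, gain)
            print(f"P={power_w:9.0f} W  G={gain:5d}  range={r:6.1f} km")
    # The analyst inspects the grid, changes the input constants, and
    # reruns: the interactive loop described above.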

Analysis of any complex system is, of necessity, multidisciplinary. Systems analysts have had difficulty dealing with the multidisciplinary aspects; they are more comfortable sticking to the technical aspects, primarily performance analysis. In its National Intelligence Estimate on Iraqi weapons of mass destruction, the WMD Commission observed, “The October 2002 NIE contained an extensive technical analysis … but little serious analysis of the socio-political situation in Iraq, or the motives and intentions of the Iraqi leadership…. [T]hose turn out to be the questions that could have led the Intelligence Community closer to the truth.” 9 In fact, too much technical analysis is done in a vacuum, with no consideration of political and economic constraints. The following sections address examples of the need to consider those constraints.

Future Systems

The first step in analyzing future systems, and particularly future weapons systems, is to identify the system(s) under development. Two approaches traditionally have been applied in weapons systems analysis, both based on systems of reasoning drawn from the writings of philosophers: deductive and inductive.

 

  • The deductive approach to prediction is to postulate objectives that are desirable in the eyes of the opponent; identify the system requirements; and then search the incoming intelligence for evidence of work on the weapons systems, subsystems, components, devices, and basic R&D required to reach those objectives.
  • The opposite method, an inductive or synthesis approach, is to begin by looking at the evidence of development work and then synthesize the advances in systems, subsystems, and devices that are likely to follow. 10

A number of writers in the intelligence field have argued that intelligence uses a different system of reasoning: abduction, which seeks to develop the best hypothesis or inference from a given body of evidence. Abduction is much like induction, but its stress is on integrating the analyst’s own thoughts and intuitions into the reasoning process. Abduction has been described as “an instinct for guessing right.” 11 Like induction, it is problematic in that, as Roger George and James Bruce note, “different analysts might arrive at different conclusions from the same set of facts.” So both induction and abduction are inherently probabilistic. 12

The deductive (or abductive) approach can be described as starting from a hypothesis and using evidence to test the hypothesis. The inductive approach is described as evidence-based reasoning to develop a conclusion. 13 Evidence-based reasoning is applied in a number of professions. In medicine, it is known as evidence-based practice—applying a combination of theory and empirical evidence to make medical decisions.

Both (or all three) approaches have advantages and drawbacks. In practice, though, deduction has some advantages over induction or abduction in identifying future systems development. If only one weapons system is being built, it is not too difficult to identify the corresponding R&D pattern and the indicators in the available information, and from that to synthesize the resulting systems development. The problem arises when two or more systems are under development at the same time. Each system will have its R&D process, and it is very difficult to separate the processes out of the mass of incoming raw intelligence. This is the “multiple pathologies” problem that is well known in the medical profession: When two or more pathologies are present in a patient, the symptoms are mixed together, and diagnosing the separate illnesses becomes very difficult. Generally, the deductive technique works better for handling simultaneous developments in future systems assessments.

Experience with predictive techniques in weapons systems development has shown that extrapolation works for minor improvements in a weapons system over periods of five years or fewer. It works poorly after that. Projection works in the five- to 15-year time frame. 14 But whatever predictive technique is used, you should consider the force of inertia (discussed in chapter 13). Institutional momentum probably has resulted in more weapons systems development than new requirements have. 15

Once a system has been identified as in development, analysis proceeds to the second step: answering customers’ questions about the system. At the highest level of national policy, details on how a future weapon system may operate are not as important as are its general characteristics and capabilities and a fairly precise time scale. 16 As the system comes closer to completion, a wider group of customers will want to know what specific targets the system is designed against, in what circumstances it will be used, and what its effectiveness will be. These matters typically require analysis of the following:

 

 
"Looking for a Similar Assignment? Get Expert Help at an Amazing Discount!"