
Artificial Intelligence - What Were The Macy Conferences?

 



The Macy Conferences on Cybernetics, which ran from 1946 to 1960, aimed to provide a framework for the development of interdisciplinary fields such as cybernetics, cognitive psychology, artificial life, and artificial intelligence.

Famous twentieth-century scholars, academics, and researchers took part in the Macy Conferences' freewheeling debates, including psychiatrist W. Ross Ashby, anthropologist Gregory Bateson, ecologist G. Evelyn Hutchinson, psychologist Kurt Lewin, philosopher Donald Marquis, neurophysiologist Warren McCulloch, cultural anthropologist Margaret Mead, economist Oskar Morgenstern, statistician Leonard Savage, and physicist Heinz von Foerster. The two main organizers of the conferences were McCulloch, a neurophysiologist at the Massachusetts Institute of Technology's Research Laboratory for Electronics, and von Foerster, a professor of signal engineering at the University of Illinois at Urbana-Champaign and coeditor, with Mead, of the published Macy Conference proceedings.

All meetings were sponsored by the Josiah Macy Jr. Foundation, a nonprofit organization.

The conferences were started by Macy administrators Frank Fremont-Smith and Lawrence K. Frank, who believed that they would spark multidisciplinary discussion.

The disciplinary isolation of medical research was a major worry for Fremont-Smith and Frank.

The Macy meetings were preceded by a Macy-sponsored symposium on cerebral inhibition in 1942, at which Harvard physiology professor Arturo Rosenblueth gave the first public presentation on cybernetics, titled "Behavior, Purpose, and Teleology." The ten conferences held between 1946 and 1953 focused on circular causation and feedback processes in biological and social systems.

Between 1954 and 1960, five transdisciplinary Group Processes Conferences were held as a result of these sessions.

To foster direct conversation amongst participants, conference organizers avoided formal papers in favor of informal presentations.

The significance of control, communication, and feedback systems in the human nervous system was stressed in the early Macy Conferences.

The contrasts between analog and digital processing, switching circuit design and Boolean logic, game theory, servomechanisms, and communication theory were among the other subjects explored.

These concerns belong under the umbrella of "first-order cybernetics." Several biological issues were also discussed during the conferences, including adrenal cortex function, consciousness, aging, metabolism, nerve impulses, and homeostasis.

The sessions acted as a forum for discussing long-standing issues in what would eventually be referred to as artificial intelligence.

(Mathematician John McCarthy coined the term "artificial intelligence" at Dartmouth College in 1955.) Gregory Bateson, for example, gave a lecture at the inaugural Macy Conference that distinguished between "learning" and "learning to learn," drawing on his anthropological research, and encouraged listeners to consider how a computer might accomplish either task.

At the eighth conference, attendees discussed decision theory research led by Leonard Savage.

At the ninth conference, Ross Ashby introduced the idea of chess-playing automata.

The usefulness of automated computers as logic models for human cognition was discussed more than any other issue during the Macy Conferences.

In 1964, the Macy Conferences gave rise to the American Society for Cybernetics, a professional organization.

The Macy Conferences' early discussions of feedback mechanisms were applied to topics as varied as artillery control, project management, and marital therapy.


~ Jai Krishna Ponnappan

Find Jai on Twitter | LinkedIn | Instagram


You may also want to read more about Artificial Intelligence here.



See also: 


Cybernetics and AI; Dartmouth AI Conference.


References & Further Reading:


Dupuy, Jean-Pierre. 2000. The Mechanization of the Mind: On the Origins of Cognitive Science. Princeton, NJ: Princeton University Press.

Hayles, N. Katherine. 1999. How We Became Posthuman: Virtual Bodies in Cybernetics, Literature, and Informatics. Chicago: University of Chicago Press.

Heims, Steve J. 1988. “Optimism and Faith in Mechanism among Social Scientists at the Macy Conferences on Cybernetics, 1946–1953.” AI & Society 2: 69–78.

Heims, Steve J. 1991. The Cybernetics Group. Cambridge, MA: MIT Press.

Pias, Claus, ed. 2016. The Macy Conferences, 1946–1953: The Complete Transactions. Zürich, Switzerland: Diaphanes.




Artificial Intelligence - Who Was John McCarthy?

 


John McCarthy (1927–2011) was an American computer scientist and mathematician best known for helping to found the field of artificial intelligence in the late 1950s and for promoting the use of formal logic in AI research.

McCarthy was a creative thinker who earned multiple accolades for his contributions to programming languages and operating systems research.

Throughout McCarthy's life, however, artificial intelligence and "formalizing common sense" remained his primary research interest (McCarthy 1990).

McCarthy first encountered the ideas that would lead him to AI as a graduate student, at the 1948 Hixon Symposium on "Cerebral Mechanisms in Behavior."

The symposium took place at the California Institute of Technology, where McCarthy had just finished his undergraduate studies and was enrolled in the graduate mathematics program.

In the United States, machine intelligence had become a subject of substantial academic interest under the broad banner of cybernetics by 1948, and many renowned cyberneticists, notably Princeton mathematician John von Neumann, attended the symposium.

McCarthy moved to Princeton's mathematics department a year later, where he discussed some early ideas inspired by the symposium with von Neumann.

McCarthy never published the work, despite von Neumann's urging, because he believed cybernetics could not answer his questions about human knowledge.

McCarthy finished a PhD on partial differential equations at Princeton.

He stayed at Princeton as an instructor after graduating in 1951, and in the summer of 1952, he had the chance to work at Bell Labs with cyberneticist and inventor of information theory Claude Shannon, whom he persuaded to collaborate on an edited collection of writings on machine intelligence.

The resulting volume, Automata Studies, received contributions from a variety of fields, ranging from pure mathematics to neuroscience.

McCarthy, on the other hand, felt that the published studies did not devote enough attention to the important subject of how to develop intelligent machines.

McCarthy joined the mathematics department at Stanford in 1953 but was let go two years later, perhaps, he later speculated, because he spent too much time thinking about intelligent machines and not enough on his mathematical research.

In 1955, he accepted a position at Dartmouth, just as IBM was preparing to establish the New England Computation Center at MIT.

The New England Computation Center gave Dartmouth access to an IBM computer that was installed at MIT and made accessible to a group of New England colleges.

McCarthy met IBM researcher Nathaniel Rochester through the IBM initiative, and Rochester recruited McCarthy to IBM in the summer of 1955 to work with his research group.

McCarthy persuaded Rochester of the need for more research on machine intelligence, and with Rochester, Shannon, and Marvin Minsky, a graduate student at Princeton, he submitted a proposal to the Rockefeller Foundation for a "Summer Research Project on Artificial Intelligence," which included the first known use of the phrase "artificial intelligence." Although the Dartmouth Project is usually regarded as a watershed moment in the development of AI, the conference did not go as McCarthy had envisioned.

The Rockefeller Foundation funded the proposal at only half the requested budget, since it covered such a novel field of research and its author was a relatively young professor; Shannon's reputation, however, carried substantial weight with the Foundation.

Furthermore, because the event stretched over many weeks in the summer of 1956, only a handful of the invitees were able to attend for the entire period.

As a consequence, the Dartmouth conference was a fluid affair with an ever-changing and unpredictable guest list.

Despite its chaotic implementation, the meeting was crucial in establishing AI as a distinct area of research.

While still at Dartmouth, McCarthy won a Sloan fellowship in 1957 to spend a year at MIT, closer to IBM's New England Computation Center.

In 1958, MIT offered McCarthy a position in its Electrical Engineering department, which he accepted.

Later, he was joined by Minsky, who worked in the mathematics department.

McCarthy and Minsky suggested the construction of an official AI laboratory to Jerome Wiesner, head of MIT's Research Laboratory of Electronics, in 1958.

Wiesner agreed, on the condition that McCarthy and Minsky accept six newly admitted graduate students into the laboratory, and the "artificial intelligence project" began training its first generation of students.

McCarthy published his first article on artificial intelligence in the same year.

In the paper "Programs with Common Sense," he described a computer system, which he called the Advice Taker, that would be capable of accepting and understanding instructions in ordinary natural language from nonexpert users.

McCarthy would later describe the Advice Taker as the start of a research program aimed at "formalizing common sense." McCarthy believed that everyday commonsense notions, such as understanding that if you don't know a phone number you will need to look it up before calling, could be expressed as formal logical statements and fed into a computer, enabling the machine to reach the same conclusions as humans.

Such formalization of common knowledge, McCarthy felt, was the key to artificial intelligence.
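As an illustration (our own rendering, not McCarthy's published notation), the phone-number example might be written as a commonsense rule in first-order logic roughly as follows:

\forall p \,\big(\, \mathrm{Wants}(I, \mathrm{Call}(p)) \,\land\, \neg \mathrm{Knows}(I, \mathrm{Number}(p)) \;\rightarrow\; \mathrm{ShouldDo}(I, \mathrm{LookUp}(\mathrm{Number}(p))) \,\big)

Given a fact such as Wants(I, Call(Mike)) and no fact Knows(I, Number(Mike)), a reasoning program could derive ShouldDo(I, LookUp(Number(Mike))), the kind of everyday inference McCarthy hoped the Advice Taker would make.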

McCarthy's paper, presented at the United Kingdom's National Physical Laboratory's "Symposium on Mechanisation of Thought Processes," helped establish the symbolic program of AI research.

McCarthy's research was focused on AI by the late 1950s, although he was also involved in a range of other computing-related topics.

In 1957, he was assigned to an Association for Computing Machinery group charged with developing the ALGOL programming language, which went on to become the de facto language for describing algorithms in academic research for the next several decades.

He created the LISP programming language for AI research in 1958, and its successors are widely used in business and academia today.

McCarthy contributed to computer operating system research via the construction of time sharing systems, in addition to his work on programming languages.

Early computers were large and costly, and they could only be operated by one person at a time.

From his first interactions with computers at IBM in 1955, McCarthy recognized the need for many users across a large institution, such as a university or hospital, to be able to use the organization's computer systems concurrently from terminals in their own offices.

McCarthy pushed for research on such systems at MIT, serving on a university committee that examined the issue and ultimately assisting in the development of MIT's Compatible Time-Sharing System (CTSS).

Although McCarthy left MIT before the CTSS work was completed, his advocacy with J. C. R. Licklider, a future office director at the Advanced Research Projects Agency (later renamed DARPA), while McCarthy was a consultant at Bolt Beranek and Newman in Cambridge, was instrumental in helping MIT secure significant federal support for computing research.

In 1962, Stanford professor George Forsythe recruited McCarthy to join what would become the second computer science department in the United States, after Purdue's.

McCarthy insisted on coming only as a full professor, which he believed would be too much for Forsythe to arrange for so young a researcher.

Forsythe was nevertheless able to persuade Stanford to grant McCarthy a full professorship, and McCarthy went on to establish the Stanford AI Laboratory in 1965.

Until his retirement in 2000, McCarthy oversaw research at Stanford on AI topics such as robotics, expert systems, and chess.

McCarthy grew up in a family in which both parents were ardent members of the Communist Party, and he had a lifelong interest in Soviet affairs.

He maintained numerous professional relationships with Soviet cybernetics and AI researchers, traveling and lecturing in the Soviet Union in the mid-1960s, and even arranged a 1965 chess match between a Stanford chess program and a Soviet counterpart, which the Soviet program won.

He developed many foundational concepts in symbolic AI theory while at Stanford, such as circumscription, which expresses the idea that a computer must be allowed to make reasonable assumptions about problems presented to it; otherwise, even simple scenarios would have to be specified in such exacting logical detail that the task would be all but impossible.
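In schematic form (a standard textbook statement rather than a quotation from McCarthy's own papers), circumscribing a predicate P in an axiom A(P) asserts that P holds of as few things as possible while A remains true:

\mathrm{Circ}[A; P] \;\equiv\; A(P) \,\land\, \neg\exists p\,\big(A(p) \land p < P\big)

where p < P means that p is strictly stronger than P, that is, everything satisfying p satisfies P but not conversely. Anything the axioms do not force to be abnormal or exceptional is thereby assumed not to be, which lets a reasoner jump to ordinary conclusions without spelling out every exception in advance.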

McCarthy's accomplishments were recognized with numerous prizes, including the 1971 Turing Award, the 1988 Kyoto Prize, election to the National Academy of Sciences in 1989, the 1990 National Medal of Science, and the 2003 Benjamin Franklin Medal.

McCarthy was a brilliant thinker who continually imagined new technologies, such as a space elevator for economically lifting material into orbit and a system of carts suspended from wires to improve urban transportation.

In a 2008 interview, McCarthy was asked what he thought the most important problems in computing were, and he answered without hesitation, "Formalizing common sense," the same endeavor that had inspired him from the start.


~ Jai Krishna Ponnappan

Find Jai on Twitter | LinkedIn | Instagram


You may also want to read more about Artificial Intelligence here.



See also: 


Cybernetics and AI; Expert Systems; Symbolic Logic.


References & Further Reading:


Hayes, Patrick J., and Leora Morgenstern. 2007. “On John McCarthy’s 80th Birthday, in Honor of His Contributions.” AI Magazine 28, no. 4 (Winter): 93–102.

McCarthy, John. 1990. Formalizing Common Sense: Papers, edited by Vladimir Lifschitz. Norwood, NJ: Ablex.

Morgenstern, Leora, and Sheila A. McIlraith. 2011. “John McCarthy’s Legacy.” Artificial Intelligence 175, no. 1 (January): 1–24.

Nilsson, Nils J. 2012. “John McCarthy: A Biographical Memoir.” Biographical Memoirs of the National Academy of Sciences. http://www.nasonline.org/publications/biographical-memoirs/memoir-pdfs/mccarthy-john.pdf.



Artificial Intelligence - How Is AI Contributing To Cybernetics?

 





Cybernetics is the study of communication and control in living organisms and machines.

Although the term is no longer widely used in the United States, cybernetic thinking pervades computer science, engineering, biology, and the social sciences today.

Throughout the last half-century, cybernetic connectionist and artificial neural network approaches to information theory and technology have often clashed, and in some cases hybridized, with symbolic AI methods.

Norbert Wiener (1894–1964), who coined the term "cybernetics" from the Greek word for "steersman," saw the field as a unifying force that brought disparate topics like game theory, operations research, theory of automata, logic, and information theory together and elevated them.

Wiener argued in Cybernetics, or Control and Communication in the Animal and the Machine (1948) that contemporary science had become too much of a specialist's playground as a consequence of tendencies dating back to the early Enlightenment.

Wiener envisioned a period when experts might collaborate "not as minions of some great administrative officer, but united by the desire, indeed by the spiritual imperative, to comprehend the area as a whole, and to give one another the power of that knowledge" (Wiener 1948b, 3).

For Wiener, cybernetics provided researchers with access to many sources of knowledge while maintaining their independence and unbiased detachment.

Wiener also believed that man and machine should be seen as basically interchangeable epistemologically.

The biological sciences and medicine, according to Wiener, would remain semi-exact and dependent on observer subjectivity until these common components were discovered.



Wiener developed his cybernetic theory in the context of World War II (1939–1945).

Interdisciplinary, mathematically rich sciences such as operations research and game theory had already been used to locate German submarines and to devise the best feasible solutions to complex military decision-making problems.

In his role as a military adviser, Wiener committed himself to the task of developing modern cybernetic weapons for use against the Axis powers.

To that end, Wiener focused on deciphering the feedback processes involved in predicting curvilinear flight paths and applying these concepts to the development of advanced fire-control systems for shooting down enemy aircraft.
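The flavor of such a feedback loop can be sketched in a few lines of Python (a toy illustration of negative feedback, not Wiener's actual antiaircraft predictor): the controller extrapolates the target's next position from its recent motion, compares that prediction with where the gun is currently aimed, and corrects a fraction of the error at every time step.

# Toy negative-feedback tracking loop (illustrative only, not Wiener's predictor).
def track(observations, gain=0.5):
    """Return the aim point chosen at each time step for a moving target."""
    aim = observations[0]                 # start aimed at the first observation
    prev = observations[0]
    aims = []
    for observed in observations:
        velocity = observed - prev        # crude estimate of target motion
        predicted = observed + velocity   # extrapolate one step ahead
        error = predicted - aim           # feedback signal: prediction minus current aim
        aim += gain * error               # correct a fraction of the error
        aims.append(aim)
        prev = observed
    return aims

print(track([0.0, 1.0, 2.1, 3.3, 4.2]))   # the aim gradually converges on the target's path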

Claude Shannon, a long-serving Bell Labs researcher, went even further than Wiener in attempting to bring cybernetic ideas to life, most notably in his experiments with Theseus, an electromechanical mouse that used digital relays and a feedback process to learn how to navigate mazes based on previous experience.

Shannon created a slew of other automata that mimicked the behavior of thinking machines.

Shannon's mentees, including AI pioneers John McCarthy and Marvin Minsky, followed in his footsteps and came to regard the mind as a symbolic information processor.

McCarthy, who is often credited with establishing the field of artificial intelligence, studied the mathematical logic underpinning human thought.



Minsky opted to research neural network models as a machine imitation of human vision.

The so-called McCulloch-Pitts neurons were the core components of the cybernetic understanding of human cognitive processing.

Named after Warren McCulloch and Walter Pitts, these neurons were strung together by axon-like connections for communication, forming a network that offered a crude simulation of the wet biology of the brain.

Pitts admired Wiener's straightforward analogy of cerebral tissue to vacuum tube technology, and saw these switching devices as metallic analogues to organic cognitive components.

McCulloch-Pitts neurons were believed to be capable of mimicking basic logical processes required for learning and memory.

In the 1940s, Pitts perceived a close binary equivalence between the electrical discharges produced by these devices and the electrochemical nerve impulses generated in the brain.

In their most basic form, McCulloch-Pitts neurons take inputs that are each either a zero or a one and produce an output that is also a zero or a one.

Each input may be categorized as excitatory or inhibitory.
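A McCulloch-Pitts unit is simple enough to write out in a few lines of Python. The sketch below follows one common textbook reading, in which any active inhibitory input vetoes firing and the excitatory inputs are summed against a fixed threshold; it is an illustration of the idea, not a reconstruction of the 1943 paper's notation.

# Minimal McCulloch-Pitts unit: binary inputs, binary output, fixed threshold.
def mcp_neuron(excitatory, inhibitory, threshold):
    """All inputs and the output are 0 or 1."""
    if any(inhibitory):                       # an active inhibitory input blocks firing
        return 0
    return 1 if sum(excitatory) >= threshold else 0

# With two excitatory inputs, a threshold of 2 behaves like logical AND,
# and a threshold of 1 behaves like logical OR.
print(mcp_neuron([1, 1], [0], threshold=2))   # 1
print(mcp_neuron([1, 0], [0], threshold=2))   # 0
print(mcp_neuron([1, 0], [0], threshold=1))   # 1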

It was therefore merely a short step from artificial to animal memory for Pitts and Wiener.

Donald Hebb, a Canadian neuropsychologist, made even more significant contributions to the research of artificial neurons.

These were detailed in his book The Organization of Behavior, published in 1949.

Hebbian theory explains associative learning as a process in which neurons that fire together strengthen their synaptic connections.

In his study of the artificial "perceptron," a model and algorithm that weighted its inputs so that it could be trained to detect particular kinds of patterns, U.S. Navy researcher Frank Rosenblatt expanded the metaphor.

The eye and cerebral circuitry of the perceptron could approximately discern between pictures of cats and dogs.

The navy saw the perceptron as "the embryo of an electronic computer that it anticipates to be able to walk, speak, see, write, reproduce itself, and be cognizant of its existence," according to a 1958 interview with Rosenblatt (New York Times, July 8, 1958, 25).
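The learning rule behind the perceptron can likewise be sketched briefly. The code below is a toy illustration of Rosenblatt-style training on made-up two-feature data, not the Mark I Perceptron's actual image pipeline: inputs are weighted and summed, the sum is thresholded to produce a decision, and the weights are nudged whenever the decision is wrong.

# Toy perceptron training loop (illustrative data and parameters, not Rosenblatt's hardware).
def train_perceptron(samples, labels, epochs=20, lr=0.1):
    weights = [0.0] * len(samples[0])
    bias = 0.0
    for _ in range(epochs):
        for x, target in zip(samples, labels):
            activation = sum(w * xi for w, xi in zip(weights, x)) + bias
            prediction = 1 if activation > 0 else 0
            error = target - prediction                  # 0 when correct, otherwise +/-1
            weights = [w + lr * error * xi for w, xi in zip(weights, x)]
            bias += lr * error
    return weights, bias

# Hypothetical two-number features standing in for "images": label 1 = dog, 0 = cat.
samples = [[0.9, 0.2], [0.8, 0.1], [0.2, 0.9], [0.1, 0.8]]
labels = [1, 1, 0, 0]
print(train_perceptron(samples, labels))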

Wiener, Shannon, McCulloch, Pitts, and other cyberneticists were nourished by the famed Macy Conferences on Cybernetics in the 1940s and 1950s, which attempted to automate human comprehension of the world and the learning process.

The gatherings also acted as a forum for discussing artificial intelligence issues.

The divide between the fields developed over time, but it was already visible at the 1956 Dartmouth Summer Research Project on Artificial Intelligence.

By 1970, organic cybernetics research was no longer well defined in American scientific practice.

Machine cybernetics, meanwhile, evolved into the computing sciences and technologies.

Cybernetic theories are now on the periphery of social and hard scientific disciplines such as cognitive science, complex systems, robotics, systems theory, and computer science, but they were critical to the information revolution of the twentieth and twenty-first centuries.

In recent studies of artificial neural networks and unsupervised machine learning, Hebbian theory has seen a resurgence of attention.

Cyborgs, beings made up of biological and mechanical parts that augment normal functions, can be regarded as a subset of cybernetics (an area known in the 1960s as "medical cybernetics").


~ Jai Krishna Ponnappan

You may also want to read more about Artificial Intelligence here.



See also: 


Dartmouth AI Conference; Macy Conferences; Warwick, Kevin.


Further Reading


Ashby, W. Ross. 1956. An Introduction to Cybernetics. London: Chapman & Hall.

Galison, Peter. 1994. “The Ontology of the Enemy: Norbert Wiener and the Cybernetic Vision.” Critical Inquiry 21, no. 1 (Autumn): 228–66.

Kline, Ronald R. 2017. The Cybernetics Moment: Or Why We Call Our Age the Information Age. Baltimore, MD: Johns Hopkins University Press.

Mahoney, Michael S. 1990. “Cybernetics and Information Technology.” In Companion to the History of Modern Science, edited by R. C. Olby, G. N. Cantor, J. R. R. Christie, and M. J. S. Hodge, 537–53. London: Routledge.

“New Navy Device Learns by Doing; Psychologist Shows Embryo of Computer Designed to Read and Grow Wiser.” 1958. New York Times, July 8, 25.

Wiener, Norbert. 1948a. “Cybernetics.” Scientific American 179, no. 5 (November): 14–19.

Wiener, Norbert. 1948b. Cybernetics, or Control and Communication in the Animal and the Machine. Cambridge, MA: MIT Press.



Artificial Intelligence - What Is The Advanced Soldier Sensor Information Systems and Technology (ASSIST)?

 



Soldiers are often required to do missions that may take many hours and are quite stressful.

Once a mission is completed, soldiers are asked to write a report detailing the most significant events that occurred.

This report is designed to collect information about the environment and local/foreign people in order to better organize future operations.

Soldiers typically compile this report from memory, still photographs, and GPS data from portable equipment.

Given the severe stress soldiers face, there are likely many cases in which crucial information is missing and therefore unavailable for future mission planning.

The ASSIST (Advanced Soldier Sensor Information Systems and Technology) program addressed this problem by equipping soldiers with sensors worn directly on their uniforms.

Sensors continually recorded what was going on around the troops during the operation.

When the troops returned from a mission, the sensor data was indexed to establish an electronic record of the events that occurred while the ASSIST system was recording.

With this record, soldiers could give more accurate reports instead of relying solely on their memories.

Numerous functions were made possible by AI-based algorithms, including:

• "Capabilities for Image/Video Data Analysis"

• Object Detection/Image Classification—the capacity to detect and identify items (such as automobiles, persons, and license plates) using video, images, and/or other data sources.

• "Audio Data Analysis Capabilities"

• "Arabic Text Translation"—the ability to detect, recognize, and translate written Arabic text (e.g., in imagery data)

• "Change Detection"—the ability to detect changes in related data sources over time (e.g., identify differences in imagery of the same location at different times)

• Sound Recognition/Speech Recognition—the capacity to distinguish speech (e.g., keyword spotting and foreign language recognition) and identify sound events (e.g., explosions, gunfire, and cars) in audio data.

• Shooter Localization/Shooter Classification—the ability to recognize gunshots in the environment (e.g., via audio data processing), as well as the kind of weapon used and the shooter's position.

• "Capabilities for Soldier Activity Data Analysis"

• Soldier State Identification/Soldier Localization—the capacity to recognize a soldier's course of movement in a given area and characterize the soldier's activities (e.g., running, walking, and climbing stairs) To be effective, AI systems like this (also known as autonomous or intelligent systems) must be thoroughly and statistically analyzed to verify that they would work correctly and as intended in a military setting.

The National Institute of Standards and Technology (NIST) was entrusted with assessing these AI systems using three criteria:

1. The precision with which objects, events, and activities are identified and labeled

2. The system's capacity to learn and improve its classification performance

3. The system's utility in improving operational effectiveness

To create its performance measures, NIST devised a two-part test methodology.

Metrics 1 and 2 were assessed using component- and system-level technical performance evaluations, while metric 3 was assessed using system-level utility assessments.

The utility assessments were created to estimate the effect these technologies would have on warfighter performance in a range of missions and job tasks, while the technical performance evaluations were created to ensure the ongoing improvement of ASSIST system technical capabilities.

In defining the precise procedures for each type of evaluation, NIST endeavored to create assessment methods that posed an appropriate degree of difficulty for both system and soldier performance.

At the component level, the ASSIST systems were broken down into components that implemented specific capabilities.

For example, to evaluate a system's Arabic translation capability, it was broken down into an Arabic text identification component, an Arabic text extraction component (to localize individual text characters), and a text translation component.

Each component was evaluated on its own to see how it contributed to the performance of the system.

Each ASSIST system was assessed as a black box at the system level, with the overall performance of the system being evaluated independently of the individual component performance.

The total system received a single score, which indicated the system's ability to complete the overall job.
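The difference between the two levels of scoring can be illustrated with a small Python sketch (our construction, not NIST's actual scoring code, and the label sequences are hypothetical): each component's outputs are scored on their own with a simple fraction-correct measure, a stand-in for metric 1's labeling precision, while the black-box system receives one end-to-end score.

# Illustrative component-level vs. system-level scoring (hypothetical data).
def fraction_correct(predicted, reference):
    """Fraction of predicted labels that match the reference labels."""
    if not predicted:
        return 0.0
    return sum(1 for p, r in zip(predicted, reference) if p == r) / len(predicted)

reference = [1, 1, 1, 1]                      # ground-truth labels for four test items

component_scores = {                          # each component scored in isolation
    "text_identification": fraction_correct([1, 1, 0, 1], reference),
    "text_extraction":     fraction_correct([1, 0, 1, 1], reference),
    "translation":         fraction_correct([1, 1, 1, 0], reference),
}

system_score = fraction_correct([1, 0, 1, 0], reference)   # single end-to-end, black-box score

print(component_scores)
print(system_score)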

A test was also conducted at the system level to determine the system's usefulness in improving operational effectiveness for after-mission reporting.

Because all of the systems reviewed as part of this initiative were in the early phases of development, a formative assessment technique was suitable.

NIST was especially interested in determining the system's value for warfighters.

As a result, the assessment was concerned with the systems' influence on soldiers' processes and products.

User-centered metrics were used to represent this viewpoint.

NIST set out to find measures that would help answer questions such as: What information do infantry soldiers seek and/or require after completing a field mission? From both the soldiers' and the S2's (Staff 2, the intelligence officer) perspectives, how well are those information needs met? What did ASSIST contribute to mission reporting in terms of user-stated information requirements?

This evaluation was carried out at the Aberdeen Test Center Military Operations in Urban Terrain (MOUT) site in Aberdeen, Maryland.

The location was selected for a variety of reasons:

• Ground truth—Aberdeen was able to deliver ground truth to within two centimeters of chosen locations.

This provided a strong standard against which the system output could be compared, enabling the assessment team to get a good depiction of what really transpired in the environment.

• Realism—The MOUT site has around twenty structures built up to resemble an Iraqi town.

• Testing infrastructure—The facility was outfitted with a number of cameras (both indoors and outdoors) to help the assessment team better understand the environment during testing.

• Soldier availability—The site was able to provide a small squad of active-duty soldiers for the assessment.

The MOUT site was augmented with objects, people, and background noises whose locations and behavior were scripted to provide a more operationally meaningful test environment.

The goal was to provide an environment in which the various ASSIST systems could test their capabilities by detecting, identifying, and/or capturing various forms of data.

NIST's utility assessments covered foreign language speech detection and classification; Arabic text detection and recognition; detection of shots fired and vehicle sounds; classification of soldier states and tracking of their locations (both inside and outside buildings); and identification of objects of interest such as vehicles, buildings, and people.

Because the tests required the troops to respond according to their training and experience, the soldiers' actions were not scripted as they progressed through each exercise.


~ Jai Krishna Ponnappan

You may also want to read more about Artificial Intelligence here.




See also: Battlefield AI and Robotics; Cybernetics and AI.

Further Reading

Schlenoff, Craig, Brian Weiss, Micky Steves, Ann Virts, Michael Shneier, and Michael Linegang. 2006. “Overview of the First Advanced Technology Evaluations for ASSIST.” In Proceedings of the Performance Metrics for Intelligence Systems Workshop, 125–32. Gaithersburg, MD: National Institute of Standards and Technology.

Steves, Michelle P. 2006. “Utility Assessments of Soldier-Worn Sensor Systems for ASSIST.” In Proceedings of the Performance Metrics for Intelligence Systems Workshop, 165–71. Gaithersburg, MD: National Institute of Standards and Technology.

Washington, Randolph, Christopher Manteuffel, and Christopher White. 2006. “Using an Ontology to Support Evaluation of Soldier-Worn Sensor Systems for ASSIST.” In Proceedings of the Performance Metrics for Intelligence Systems Workshop, 172–78. Gaithersburg, MD: National Institute of Standards and Technology.

Weiss, Brian A., Craig I. Schlenoff, Michael O. Shneier, and Ann Virts. 2006. “Technology Evaluations and Performance Metrics for Soldier-Worn Sensors for ASSIST.” In Proceedings of the Performance Metrics for Intelligence Systems Workshop, 157–64. Gaithersburg, MD: National Institute of Standards and Technology.



