Challenge

Demonstrate the ability of advanced human-computer interface technologies to dramatically improve and “naturalize” interactions between a human operator and a computerized system. The proposed technologies should support multiple input modalities—including keyboard and mouse as a default, but also considering voice, gesture and/or facial recognition, eye movement inputs, or other more naturalistic and unobtrusive interfaces.

TRISH is soliciting proposals to enhance cognitive and/or behavioral performance through advanced human-computer interfaces (HCI). Preference will be given to systems with multi-modal input capabilities, adaptation to individual users, an ability to respond to high-level queries, and flexible deployment options.

Background

A large proportion of spaceflight vehicle systems are computerized and hence have the potential for interaction via computer interfaces. To date, however, interfaces in spacecraft have been predominantly limited to keyboard, mouse, touch-screen, or mechanical inputs. Moreover, the majority of computer interfaces are static rather than adaptive or flexible in nature. Interfaces rarely adapt to the individual using them, nor do they flexibly automate portions of a procedure based on learned individual preferences. Partially automated interfaces pose the additional challenge of maintaining crew situational awareness: if a system takes over some tasks to perform them automatically—particularly if this is done conditionally, as opposed to all the time—then the tasks performed and the system status need to be communicated to the crew in a reliable and timely fashion to avoid duplication of effort or inappropriate actions based on outdated situational awareness.

While astronauts are highly trained to use all onboard computer systems, these systems can often be obtrusive. In particular, they usually require moving to a computer console, finding and calling up the appropriate program or module, locating the appropriate menu item, entering any necessary information, and submitting commands. This leads to highly inefficient performance, particularly when a person is already occupied yet needs information on where to go or what to do next. Under these and other circumstances, it is not always feasible or reasonable to float to a computer console and use the keyboard and mouse to obtain the needed information.

Of the various approaches available, hands-free technologies are particularly relevant to NASA, since system interaction can then occur with limited disruption of ongoing task performance. For example, natural-language systems can be combined with database searches to significantly improve performance efficiency—something regularly demonstrated by Alexa or Siri in support of hands-free content search. However, such approaches still tend to be brittle—i.e., relatively few “actions” can be understood—and they cannot be used in conjunction with programs of relevance to astronauts. Augmented or mixed reality approaches may also be suitable, as they can help expand the crew member’s capabilities in a heads-up context. In this case, however, ways to improve the efficiency of task performance—as opposed to simply providing more information—remain to be demonstrated.
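As a rough illustration of the hands-free pattern described above, the following Python sketch matches a transcribed voice query against a small set of onboard procedure entries by simple keyword overlap. The procedure names, the entries, and the answer_query function are hypothetical and serve only to make the idea concrete; an operational system would need real speech recognition and far more robust language understanding.

    # Minimal sketch: a transcribed voice query matched against a small
    # "database" of onboard procedures by keyword overlap. Names and
    # entries here are hypothetical; a real system would use proper speech
    # recognition and language understanding rather than keyword counting.

    import re

    PROCEDURES = {
        "abdominal ultrasound": "Procedure 4.2: abdominal ultrasound guidance steps ...",
        "co2 scrubber status": "Telemetry view 7: CO2 scrubbing system status ...",
        "toilet troubleshooting": "Procedure 9.1: waste collection system troubleshooting ...",
    }

    def answer_query(transcribed_text: str) -> str:
        """Return the procedure entry whose key shares the most words with the query."""
        query_words = set(re.findall(r"[a-z0-9]+", transcribed_text.lower()))
        best_key, best_overlap = None, 0
        for key in PROCEDURES:
            overlap = len(query_words & set(key.split()))
            if overlap > best_overlap:
                best_key, best_overlap = key, overlap
        return PROCEDURES[best_key] if best_key else "No matching procedure found."

    if __name__ == "__main__":
        # e.g., the output of a speech-to-text front end
        print(answer_query("Can you guide me through an abdominal ultrasound?"))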

Ideally, an advanced interface would require minimal additional hardware while supporting as many of the following features as possible, with seamless switching between input modalities (see the sketch after this list):

  • Robust voice interaction for hands-free use
  • Gesture recognition for simple inputs/tasks (e.g., swipe left=back up, right=go on; fist=confirm entry, and so on)
  • Facial recognition to enable user-specific adaptations of interfaces and possible additional input modes (e.g., head nodding=confirm; head shaking=reject)
  • Eye movements
  • Non-invasive brain interfaces
  • More covert inputs (e.g., keyboard/mouse) for confidential data
  • An ability to “overlay” these multi-modal input capabilities on any other interface—i.e., deploy the novel interface on a given computer to enable voice control of and interaction with all programs installed on that computer
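One way to think about the overlay and seamless-switching requirements in software is to normalize events from every modality into a single shared command vocabulary, so the rest of the interface never needs to know which modality produced an input. The Python fragment below is a minimal sketch of that pattern; the event structure, the mapping table, and the command names are all invented for illustration.

    # Minimal sketch of multi-modal input dispatch: events from any
    # modality are normalized into a shared set of commands, so the rest
    # of the interface never needs to know which modality was used.
    # All names and mappings here are hypothetical.

    from dataclasses import dataclass

    @dataclass
    class InputEvent:
        modality: str   # "voice", "gesture", "gaze", "keyboard", ...
        value: str      # recognizer output, e.g. "swipe_left" or "confirm"

    # Mapping from (modality, raw recognizer output) to a common command.
    COMMAND_MAP = {
        ("gesture", "swipe_left"): "back",
        ("gesture", "swipe_right"): "next",
        ("gesture", "fist"): "confirm",
        ("voice", "go back"): "back",
        ("voice", "confirm"): "confirm",
        ("keyboard", "Enter"): "confirm",
        ("keyboard", "Backspace"): "back",
    }

    def dispatch(event: InputEvent) -> str:
        """Translate a modality-specific event into a modality-agnostic command."""
        return COMMAND_MAP.get((event.modality, event.value), "ignored")

    if __name__ == "__main__":
        for ev in [InputEvent("gesture", "swipe_left"),
                   InputEvent("voice", "confirm"),
                   InputEvent("keyboard", "Enter")]:
            print(ev.modality, "->", dispatch(ev))

Keeping a single command vocabulary is one simple way to decouple input hardware from application logic, which also makes it easier to add new modalities later.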

TRISH is particularly interested in far-reaching solutions rather than incremental steps in HCI. Thus, research that seeks to develop systems that can understand and act on higher-level commands is preferred. Higher-level commands may require the system to solicit additional input from the user—and hence may require at least minimal “conversational” capabilities—although the context of each individual conversation will generally be significantly limited in topic or scope (see the sketch following the example commands below).

Examples of such high-level commands might include:

  • “Schedule overnight delivery of today’s medical data back to Earth.”
  • “I need to do an abdominal ultrasound. Can you guide me through the procedure?”
  • “What is the status of the CO2 scrubbing system?”
  • “Let’s troubleshoot the broken toilet.”
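To make the limited “conversational” behavior described above concrete, the following sketch shows how a system might handle the first example command by asking about one missing detail before acting (a minimal slot-filling exchange). The task definition, slot names, and prompts are all hypothetical.

    # Minimal sketch of how a high-level command might trigger a short,
    # narrowly scoped clarification dialog before the system acts.
    # The task definition, slots, and prompts are hypothetical.

    TASK = {
        "name": "schedule_data_downlink",
        "slots": {
            "dataset": "today's medical data",   # already stated by the user
            "window": None,                      # still missing -> ask for it
        },
        "prompts": {
            "window": "Which downlink window should I use: 02:00 or 14:30 UTC?",
        },
    }

    def next_action(task: dict) -> str:
        """Ask about the first unfilled slot, or summarize once all slots are known."""
        for slot, value in task["slots"].items():
            if value is None:
                return task["prompts"][slot]
        filled = ", ".join(f"{k}={v}" for k, v in task["slots"].items())
        return f"Scheduling {task['name']} with {filled}. Confirm?"

    if __name__ == "__main__":
        print(next_action(TASK))                # system asks for the missing window
        TASK["slots"]["window"] = "02:00 UTC"   # crew member replies
        print(next_action(TASK))                # system summarizes and asks to confirm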

Importantly, the overarching goal of any proposed system is to improve the overall performance of the joint human-machine system. This needs to be demonstrated by appropriate human testing, where relevant outcome metrics would likely include (at a minimum) the speed and accuracy of task performance, as well as system usability assessments.
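As one illustration of the kind of evidence this implies, the sketch below computes simple summary statistics from hypothetical trial data: mean task completion time, task accuracy, and a System Usability Scale (SUS) score. It is meant only to indicate the general type of outcome data expected, not to prescribe a test protocol.

    # Minimal sketch of the kind of outcome metrics human testing might
    # report: mean task completion time, task accuracy, and a System
    # Usability Scale (SUS) score. The trial data below are hypothetical.

    from statistics import mean

    # Completion time (s) and success flag for each trial of one task.
    trials = [
        {"time_s": 42.0, "success": True},
        {"time_s": 55.5, "success": True},
        {"time_s": 61.2, "success": False},
        {"time_s": 47.8, "success": True},
    ]

    # One participant's responses to the 10 SUS items (1 = strongly disagree,
    # 5 = strongly agree), in standard item order.
    sus_responses = [4, 2, 5, 1, 4, 2, 4, 2, 5, 1]

    def sus_score(responses: list[int]) -> float:
        """Standard SUS scoring: odd items score (r - 1), even items (5 - r),
        summed and scaled to 0-100."""
        total = sum((r - 1) if i % 2 == 0 else (5 - r)
                    for i, r in enumerate(responses))
        return total * 2.5

    if __name__ == "__main__":
        print(f"Mean completion time: {mean(t['time_s'] for t in trials):.1f} s")
        print(f"Accuracy: {mean(t['success'] for t in trials):.0%}")
        print(f"SUS score: {sus_score(sus_responses):.1f}")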

Examples of projects that COULD be considered for funding:

  • Investigation of a prototype HCI that can be used with existing computer programs and which supports multiple input modalities (e.g., voice, gesture, eye movements, keyboard and mouse), including the ability to seamlessly switch between accepting input from each of these modalities.
  • Development and testing of a system that supports adaptive user recognition and configuration—i.e., learning user preferences, providing user-specific help, etc.—to enhance task performance efficiency (see the sketch after this list).
  • A project seeking to experimentally identify and develop new approaches for obtaining and interpreting human user input that demonstrably enhance performance efficiency.
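As a toy illustration of the user-specific adaptation mentioned in the second example above, the sketch below keeps a per-user preference store keyed by however the user is identified (for example, by facial recognition). The user names, preference keys, and defaults are hypothetical.

    # Minimal sketch of user-specific adaptation: the interface remembers
    # per-user preferences (identified, e.g., by facial recognition) and
    # applies them when that user returns. Users and preferences are
    # hypothetical.

    from collections import defaultdict

    # Per-user preference store, keyed by however the user is identified.
    preferences: dict[str, dict[str, str]] = defaultdict(
        lambda: {"units": "SI", "verbosity": "normal"}   # defaults for new users
    )

    def record_preference(user: str, key: str, value: str) -> None:
        """Remember a preference the user expressed (or that was learned from use)."""
        preferences[user][key] = value

    def configure_session(user: str) -> dict[str, str]:
        """Return the settings to apply when this user starts interacting."""
        return dict(preferences[user])

    if __name__ == "__main__":
        record_preference("crew_member_a", "verbosity", "terse")
        print(configure_session("crew_member_a"))  # adapted settings
        print(configure_session("crew_member_b"))  # defaults for a new user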

Examples of projects that WOULD NOT be considered:

  • Any project that fails to include robust human testing of the system for both performance efficiency on multiple tasks and usability.
  • Invasive solutions (e.g., those requiring hardware or sensors implanted in the user).
  • Testing of any self-contained/closed HCI system that does not interface with other computerized systems. To be useful to NASA, any interface must ultimately be able to interact—but not interfere—with existing programs and computer systems.
  • Any project failing to describe a feasible (potentially future) plan for making the system usable for spaceflight. For example, cloud computing will not be available en route to Mars; if the proposed solution requires it, a convincing explanation of how that gap could be overcome is needed, although the workaround does not necessarily have to be implemented during this project.
  • Projects only addressing the topic from a theoretical standpoint, with no software or interface deliverable.

Ideally, solutions would support all of the capabilities described above—or be sufficiently modular to allow for the addition of new capabilities. However, a strong platform that addresses a subset of the topics and can convincingly be expanded to others is acceptable.
