DigitalSpace Commons



DigitalSpace
Final Technical Report
FINAL TECHNICAL PROGRESS REPORT - NUMBER 003

For Contract #NAS2-03134 SBIR 2002

Proposal #: H2.02-8957: “BrahmsVE: Proof of Concept for Human/Agent Intelligent Augmentation”.

Reporting Period: January 14, 2003 – July 14, 2003

Find this report and Phase I Project models at the following Web page:
http://www.digitalspace.com/projects/iss_03

Part 1 Table of Contents
PART      DESCRIPTION

Cover

Project Summary

1 Table of Contents

2 Identification and Significance of the Innovation
2.1. Identifying the Need
2.2. The Innovation

3 Technical Objectives

4 Work Plan
4.1 Technical Approach
4.2 Task Descriptions
4.3 Meeting The Technical Objectives
4.4 Task Labor Categories and Schedules

5 Potential Applications
5.1 Potential NASA Applications
5.2 Potential Non-NASA Commercial Applications

6 Contacts
6.1 Key Contractor Participants
6.2 Key NASA Participants
6.3 NASA and Non-NASA Advisors

7 Technical Activities
7.1 Cumulative Technical Activities
7.2 Future Technical Activities

8 Potential Customer and Commercialization Activities
8.1 Cumulative NASA Potential Customer Activities
8.2 Cumulative Non-NASA Potential Customer Activities
8.3 Other Cumulative Commercialization Activities
8.4 Future Potential Customer and Commercialization Activities

9 Resource Status

10 References

Project Summary

Firm: DigitalSpace Corporation, Contract Number: NAS2-03134

Project Title: BrahmsVE: Proof of Concept for Human/Agent Intelligent Augmentation

Identification and Significance of Innovation: (Limit 200 words or 2,000 characters whichever is less)

Space systems involving people working with machines are becoming increasingly complex to design, test, train for, and support during flight. This complexity affects all aspects of NASA’s programs and impacts mission viability in the critical areas of safety, cost, timeliness and effectiveness. Under development since 2000, BrahmsVE is a model-based agent architecture together with a web-based, multi-user, realistic 3D interactive visualization environment designed to allow NASA, other government agencies and commercial enterprises to manage increasingly complex human-machine environments.

Technical Objectives and Work Plan:

The objectives of BrahmsVE are to facilitate: 1) simulation of complex environments where people interact with machine systems and agents, 2) qualitative and quantitative evaluation of system and mission design, and 3) training of personnel within these complex environments. To achieve these objectives, DigitalSpace divided the Phase I project into eleven major tasks:

  1. Expert interviews to establish parameters for FMARS and ISS 3D modeling
  2. Design and architecture phase for the new logical modules
  3. Implementation of new logical modules in OWorld engine
  4. Creation of 3D models for interior spaces, for PBA and agent-astronaut in FMARS and ISS simulated environments
  5. Implementation of external web-based interfaces and reporting module
  6. Implementation of new activity events from Brahms
  7. Integration of total environment with 3D content
  8. Initial testing of environment (user driven)
  9. Testing of environment via Brahms model
  10. Testing via user and Brahms combined
  11. Project report generation
Technical Accomplishments:

During Phase I, DigitalSpace accomplished the following:

  1. Upgrading of the BrahmsVE platform, including extensions to the OWorld engine, Brahms commands and Brahms communications.
  2. Creation of ISS and PSA virtual world models and eleven simulation iterations to meet several needs at NASA.
  3. Qualitative evaluation of the ISS/PSA simulation, recommendations to NASA requestors, and design of a new reporting interface.
  4. Drafting of a roadmap for the 1.0 production release of BrahmsVE proposed for Phase II.
NASA Application(s):

Some NASA applications of BrahmsVE include: simulation for research of mission hardware and work practices, design of augmentation through agents, just-in-time training, VR for tele-operations, Sim-Station realistic rendering, Mars MSL ’09 and Titan mission design, Virtual Digital Human project support, and education/outreach.

Non-NASA Commercial Application(s): Some non-NASA Federal Government applications of BrahmsVE include: wind farm and alternative energy production modeling for DOD and DOE, mobile agents and human augmented system conceptualization and simulation for military and security applications. Commercial applications for BrahmsVE include: design, training and operation of automated factories, multiplayer robot games, educational spacecraft/colony simulators for student distance team-based learning, museum installations for spacecraft mission simulations, and concept design and test of wireless and mobile devices with human users.

We believe that the successful completion of this Phase I project, coupled with significant interest from NASA and non-NASA customers in the BrahmsVE platform, justifies Phase II continuation.

Name and Address of Principal Investigator: (Name, Organization, Street, City, State, Zip)
Bruce Damer
DigitalSpace Corporation
343 Soquel Ave, Suite 70
Santa Cruz CA 95062-2305
Name and Address of Offeror: (Firm, Street, City, State, Zip)
DigitalSpace Corporation
343 Soquel Ave, Suite 70
Santa Cruz CA 95062-2305


Part 2 Identification and Significance of the Innovation

2.1 Identifying the Need

Space systems involving people working with machines are becoming increasingly complex to design, test, train for, and support during flight. This complexity affects all aspects of NASA’s programs and impacts mission viability in the critical areas of safety, cost, timeliness and effectiveness. We believe that collaborative software systems involving model-based agent architectures together with realistic 3D visualization environments are vital new tools that will allow NASA, other government agencies and commercial enterprises to manage increasingly complex human-machine environments.

Two generations ago, NASA used specially equipped spacecraft doubles as simulators to allow crew and mission control to train, and assist in problem solving during mission operations. The ground-based doubles of the Apollo XIII Command and Lunar Modules were instrumental for mission control to test power, life support and other system survival strategies and bring the crew safely back to Earth.

Today, NASA is building craft that are too large and complex to be able to have operational physical doubles on Earth. In addition, efficient and safe day-to-day operation of longer duration missions requires a deeper understanding of design for human work practice, psychology and teamwork. Add to the mix autonomous agents and there is a real risk that the complexity of operational environments can overwhelm crew and mission control, leading to critical errors.

On August 16, 2002, the following news item appeared on the Associated Press wire [1]:
Whitson and the space station's veteran commander, Valery Korzun, got off to a late start installing the Russian cosmic-debris shields. They evidently forgot to open an oxygen valve in their spacesuits while getting dressed, and the air lock had to be repressurized so they could open their suits and fix the problem.

By the time the spacewalkers finally opened the hatch, 250 miles (400 kilometers) above the South Atlantic, almost two hours had been wasted…

Because of the late start, Russian flight controllers cut the spacewalk short at 4 1/2 hours. The retrieval of a collection tray for measuring jet residue was put off, as was wiping the area for signs of contamination.
The above story illustrates an everyday event aboard the most complex of these vehicles to date, the International Space Station (ISS). In this case, the mistake was easily resolved without danger to the mission. However, a similar error aboard a human mission in transit to Mars might prove fatal, especially if a failed micro-meteor shield needed to be replaced in an emergency.

Virtual environments assume a critical new role in satisfying the need to train for and to manage mission complexity

Virtual environments created with 3D VR technology on computer workstations, or projected in immersive settings such as CAVEs and head-mounted displays, have been used to good effect in mission training and planning for over two decades. Projects in this tradition include early Ames tele-operations training with the Canadarm, the Virtual Shuttle, the modeling of the Mars Pathfinder surface environment, and the virtual training environment used by the crew of STS-61, the 1993 Hubble Space Telescope repair mission [2, 3, 4].

Loftin and Bowen [2] report that in this project approximately 100 members of the NASA HST flight team received over 200 hours of training using a virtual environment (VE). They go on to note that in addition to replicating the physical structure of the HST and the interrelationships of many of its components, the VE also modeled the most critical constraints associated with all major maintenance and repair procedures. Figures 1 and 2 illustrate scenes from this project.
Figure 1: STS-61, the 1993 Hubble Space Telescope (HST) repair mission team using the VR training simulator: Astronaut Nicollier looks at a computer display of the Shuttle's robot arm movements as astronaut Akers looks on (Image courtesy NASA archives). Figure 2: Computer-generated scene depicting the HST capture and EVA repair mission for mission planning (Image courtesy NASA archives)


Loftin and Bowen conclude by stating that for the first time, a VE was integrated with a limited capability Intelligent Computer-Aided Training (ICAT) system and put out the following challenge to future VE systems designers:

The results of this project serve to define the future role of VEs in training within NASA and to provide evidence that VEs can successfully support training in the performance of complex procedural tasks.

In the ensuing years, many new VE environments for training and design/test have been produced, including Transom Jack [6], Steve from USC [5], environments for submersibles and navy operations at the MOVES Institute at the Naval Postgraduate School [7,8,9] and significant projects within NASA including APEX from Ames [10,11] and VR interfaces for remote vehicle control [12].

We believe that no simulation and training environment thus far has represented more than a fraction of true underlying mission complexity, especially when accounting for “humans in the loop”. For example, an ordinary EVA aboard the ISS not only involves hundreds of individual variables, including checklist items and utilized equipment, but all personnel, including mission controllers, PIs and engineering contractors back on Earth, play a role in that EVA activity, working from geographically separated areas over often unreliable and distorting channels of electronic communications. In addition, traditional VEs such as the one employed in the Hubble repair mission used costly hardware and time-consuming modeling processes.

Over the past decade the increasing power of consumer personal computers, the ubiquity of the Internet and the standardization of net-based software components such as languages (Java), web browsers and integrating protocols (XML) have permitted the creation of powerful new VE systems that are based entirely on commercial off-the-shelf (COTS) components. Thus, a revolution is about to take place in collaborative virtual environments for the management of complex human-machine environments.

2.2 The Innovation: BrahmsVE

BrahmsVE is the result of three years of intense work between the Brahms team at RIACS, NASA, and DigitalSpace beginning with an STTR in 2000 [13], continuing with work to model EVA and day to day operations aboard the FMARS/Haughton-Mars Project analogue habitats [14,15] and leading to a specification for the OWorld and Brahms interfaces [16,17].

The existing back-end architecture: Brahms

A virtual environment by itself is of little use without a powerful back-end architecture that can represent the complexity of human-machine systems.

For over a decade, teams at NYNEX, the Institute for Research on Learning, and now Agent iSolutions, working with NASA Ames and RIACS, have been developing an intelligent multi-agent environment used for modeling, simulating and analyzing work practice. The environment is called Brahms [18]. Brahms is a data-driven (forward-chaining) discrete-event environment usable for simulation purposes as well as for agent-based software solutions requiring the use of intelligent agents. Brahms and its applications are described in detail in bibliographic references [19,20,21,22,23]. From the Agent iSolutions Web site [18]:

Brahms allows us to model the work activities of each type of role, and each individual (or artifact) playing that role in an organization. The focus of a Brahms model is on the context of work, meaning, how does the work really happen. One of the essential requirements for Brahms is that we can model collaboration and coordination between people working on one task, as well as that people can work on more than one task at a time, and are interrupted and able to resume their activities where they left off.

Prior to the partnership with DigitalSpace, Brahms models could only be viewed in execution using a timeline bar chart-style interface. It was determined that Brahms could become a much more effective tool if it had an interface that allowed realistic reconstruction of, and interaction with, 3D scenes representing the real-world people and systems being modeled. Figure 3 shows the architecture of BrahmsVE in place as of Fall 2002.


Figure 3: BrahmsVE architecture, Fall 2002


A prime directive for BrahmsVE was that it should enable highly realistic recreations of environments involving people interacting with systems. Figures 4 and 5 below illustrate some of the human figure modeling, gesture representation and scenario reenactment that the development version of BrahmsVE is capable of. These are depictions of BrahmsVE simulations of crew activities aboard the FMARS/Haughton-Mars habitat during the 2001 field season.

Figure 4: Human figure recreation and gesture in BrahmsVE, from EVA suit-up

Figure 5: Planning meeting simulation from the 2002 BrahmsVE project to model a day in the life of the FMARS analogue Mars habitat


Components of the innovation

We believe that BrahmsVE is a uniquely powerful new tool that will offer human-centered computing significant opportunities for advancement. BrahmsVE has the following distinctive properties:
  • The Brahms Java-based PersonalAgent with compiler, virtual machine, builder, IDE and AgentViewer, all of which are currently utilized in a number of NASA projects including Mobile Agents.
  • The Brahms Virtual Environment runs on industry-standard, consumer-grade computing platforms over ordinary Internet connections, with no special hardware required.
  • BrahmsVE employs industry-standard languages and protocols such as Java, JavaScript, SOAP and XML.
  • BrahmsVE’s virtual environment uses industry-leading technology in Adobe Atmosphere, such as Havok physics, Viewpoint models and inverse kinematic animations, all running in an open framework utilizing an open source community server.

For a full specification of the BrahmsVE environment as well as detailed technical documentation on the system, please see the final report filed for this SBIR and additional materials on the Web sites referenced in [14,24].

Additional 2003 BrahmsVE projects completed show promise of wider applications


Figure 6: MER rover modeled for JPL concept presentation

Figure 7: FMARS habitat on terrain generated by a whole-planet modeling exercise for Geoff Briggs

In parallel with the work completed for the Phase I SBIR, BrahmsVE was used for two unfunded exploratory projects at NASA Ames. For one project we designed a model of the Mars Exploration Rover that interacts with a virtual Mars surface utilizing the built-in physics engine. This test was successful and was presented as a concept piece to the Athena science team at JPL in January 2003 (see Figure 6). We believe this shows we will be able to apply BrahmsVE to mobile agent applications. The second project, commissioned by Geoff Briggs at Ames, was to illustrate the Mars habitats on a virtual reconstruction of the entire Mars surface derived from Mars Global Surveyor and other data (see Figure 7).
Part 3 Technical Objectives

Introduction – the solicitation’s challenge
The following statement from the topic for this SBIR called for proposals to produce these tools:

Visualization tools combining "virtual reality" projection with actual objects in the environment, conveying information about object identity, part relationships, and assembly or operational procedures. "Cognitive prostheses" that qualitatively change the capabilities of human perception, pattern analysis, scientific domain modeling, reasoning, and collaborative activity. Such tools could incorporate any of a variety of modeling techniques such as knowledge-based systems and neural networks, and fit tool operations to ongoing human physical interaction, judgment, and collaborative activity.

This statement is a broad-brush challenge to the entire field of visualization, human-centered computing and cognitive science. While the scope of this challenge was large, we set out to achieve its central objectives by producing a proof of concept that is a good first step along the road to this total vision.

Objectives of Phase I

In Phase I, we constructed a simple yet believable cognitive prosthesis in a virtual reality environment that implemented a canonical scenario in the study of human-centered computing: a semi-autonomous mobile agent interacting with a human subject. We referred to this mobile agent as the Personal Bot Assistant, or PBA.
Figure 8: Canonical example of a semi-autonomous agent interacting with a human astronaut

The objective of this project as stated in the Phase I proposal was therefore to produce, in a web-based 3D virtual environment, the canonical example of human/machine augmentation: a semi-autonomous agent assistant interacting with a human inhabitant of a space station or surface habitat. As Figure 8 from the original proposal illustrates, the agent should be able to execute a minimum set of activities, interacting both with its environment (the geometry and physical properties of a virtual space station or habitat) and with an astronaut agent. The astronaut agent in this example could be driven by a simulation engine or directly by a user at a workstation as an “avatar”. The virtual astronaut would in turn have a limited repertoire of commands that can be directed to the PBA.

Our actual implementation of Phase I deliverables

Guidance from QSS, NASA and Brahms teams directed us to produce a simulated work practice utilization of the Personal Satellite Assistant (PSA), which is under development at NASA Ames Research Center, aboard a virtual analogue of the International Space Station (ISS). Design issues surrounding the PSA operating within the ISS environment provide an almost ideal challenge for the development of tools to aid in human/agent intelligent augmentation. Our colleagues who guided this project informed us that in a single simulation the following could be achieved:

  1. Testing design concepts for the PSA in a virtual space prior to next-generation implementation of hardware, in this case the addition of a laser pointing device on the PSA.
  2. Modeling of the interior of the ISS, including obstructions, enabling agent and astronaut movement, tool misplacement and comparison with the physical test fixture at Ames.
  3. We were informed that this kind of simulation environment could evolve into a training simulator for astronauts who will be testing and then utilizing the PSA aboard the ISS in the next several years, supporting both pre-flight and in-flight operations.
  4. In the longer term, we were given the insight that coordinate and state transmitters aboard a flight PSA could report its current position and activity to a 3D simulator used by NASA JSC Space Station Mission Control, permitting controllers to gain a PSA-eye view of the station and to determine the whereabouts and current activity of the PSA.

Empirical results from the Phase I implementation

We were therefore well informed as to the benefits of pursuing this simulation within an upgraded BrahmsVE architecture (the main beneficiary of this Phase I project). Over a six-week period we executed eleven versions of the simulation, each with more complexity and more interface and reporting affordances added. Each of these iterations is available for live execution within the 3D environment or for viewing via QuickTime movies at the project Web site [24]. As a result of these iterations the following empirical results emerged, answering the key questions posed to us by our collaborators:

  1. Is a laser pointer a useful device for a PSA or similar semi-autonomous robotic agent? The simulation suggests it would be, and that the laser pointer would be used not only to permit mission controllers to point to a spot (the currently suggested teamwork application) but also as an indicator to astronauts of what the PSA is doing (where its gaze is currently directed, its direction of travel). Of course, the laser pointer would be used selectively, as it has the ability to temporarily blind an astronaut given direct contact of the beam with the retina. It was suggested that a spotlight (non-coherent light) be substituted for simple indicator/tracking functions.
  2. What other affordances does a mobile robot with human eye-gaze level awareness (microgravity operation) need? The simulation suggested that status indicator lights to tell astronauts what state the PSA is currently in (searching, waiting for cleared obstruction, powering up) would be needed to quickly and reliably inform human occupants of the station of the PSA’s intention or current state.
  3. Can a PSA-like agent successfully avoid a collision with an astronaut while transiting the station, even in close spaces such as interface modules? Empirical tests in the virtual station suggest that the ray-casting ability of the virtual PSA can determine when its path is blocked. Of course, only calibration with the machine vision systems used on the physical PSA would validate such a collision avoidance scheme.
  4. Can such a mobile agent effectively locate a tool or move itself to a known location aboard the station? The use of the laser pointer suggests that plans to apply bar-codes to locales and objects such as tools might greatly aid the PSA’s ability to locate itself in space and find objects.
  5. Can the PSA successfully navigate and course-correct given attitude adjustments of the station, or passing astronauts and blower fans creating an airflow situation that the PSA would have to steer through? Only a single blower fan, imparting a simple force vector on the passing PSA, was implemented in our BrahmsVE build. More complete modeling of the ISS interior and orbital dynamics would have to be carried out to gain understanding of this issue.
The above heuristic results suggest directions for Phase II, where statistical reporting and the Havok physics engine will be added to gain a more quantitative set of results from hypotheses tested with the next-generation models.

Detailed description of the implementation

We will next discuss what was constructed to enable the ISS/PSA BrahmsVE simulation. We will begin with a tour, aided by screen captures of the simulation in action. We will then describe the extensions to the BrahmsVE architecture constructed during Phase I.

3D virtual world models constructed

Utilizing imagery and schematics, the DigitalSpace team modeled the interior of the current configuration of the International Space Station (ISS). Wherever possible, high-resolution imagery was used to enhance the realism of the interiors, and we produced 3D models of significant equipment such as monitors, scientific instruments and hatches. We modeled two astronaut agents with head and body gestures representative of real astronauts aboard the ISS (floating forward, footrest positions, head tracking). Lastly, we created a model of the Personal Satellite Assistant (PSA), complete with instruments and a laser pointing device used as an indicator of status and direction of travel and for ray-casting identification of a target object. The resulting 3D visualization environments are pictured in figures 9-19 below.


Figure 9: BrahmsVE web-browser based components

Figure 9 above illustrates the new interface to the BrahmsVE environment produced in this Phase I project. The 3D window represents the viewpoint of a third participant within the virtual ISS model. In this case we, the viewer, are positioned behind the PSA agent as it establishes a line-of-sight view with the astronaut agent, which has just repositioned across the module. The virtual PSA has presented us with the following iconic views: the top icon represents that the astronaut has observed the PSA, and the bottom icon represents that the PSA is in a wait state and able to respond to a command.

Above and below the 3D view are two text output windows. The top window reports all Brahms action commands and the resultant actions and reports from the agents in the virtual environment. The text output buffer below the 3D window reports internal agent state (such as the PSA remaining at station keeping, as shown in this example). To the right of the screen are two panels for controlling and reporting on the simulation, described below.

Figure 10: BrahmsVE Control Panel
PSA Search – Commands the PSA to search for an object randomly placed aboard the station (wrench, drill or flashlight)

Status Bar – Shows the additional status pane (rightmost controls)

Status Page, Syntax File, Help File, History – Simulation documentation

Camera Views – PSA point of view and three other fixed cameras aboard the virtual ISS

Advanced – Move allows the operator to place the three tools at a specific location within the virtual station

GUI Mode – Toggles the method used for displaying the status icons for the Agents. In GUI Mode, the icons are always the same size and always visible.

Mouse Look – Toggles how the player/actor is controlled. When active, moving the mouse changes the direction the player is looking; when the mouse is near the edges of the window, the player continues to turn in that direction.

Astro Detectable – Idle: the astronaut is not being detected by the PSA; LOS:PSA: the PSA has lost the signal for detecting the astronaut

PSA Detectable – Whether the PSA is being searched for by the astronaut agent

Power – Current power level of the PSA; determines when the PSA must seek a power station

Laser/Beam – Determines whether the laser pointer will be shown and whether it will be seen as a beam or a spot

Fan – Turns a random fan on or off in the station, providing an unexpected course-changing vector for the PSA while in transit

The above layout of the control panel for the ISS/PSA BrahmsVE simulation enables the operator to set up and then execute different scenarios, described next.


Figure 11: Exterior view of US module of the simulated ISS with PSA agent and astronaut agent (Overview 1 camera)

Figure 12: Exterior view of virtual station showing additional attached modules (Overview 2 camera)

Figure 13: PSA laser pointer in “spot” mode illuminating a spot on the station interior

Figure 14: PSA laser pointer spot seen projected from PSA point of view

The scenarios implemented in this BrahmsVE application centered on the ability of the operator or astronaut agent to dispatch the virtual PSA to locate a missing tool (wrench, drill or flashlight) placed randomly aboard the station, and then to return to the dispatching astronaut agent and report the tool found (figures 11-19).


Figure 15: PSA acting on search command, beginning search by exiting US module

Figure 16: PSA transiting interface between modules

Figure 17: PSA executing search for wrench tool within a module

Figure 18: PSA halted to avoid collision with astronaut transiting interface coupler between modules


Later we added a second astronaut traveling randomly (figure 18) and a blower fan to cause the PSA to have to avoid collisions and to make course corrections through force vectors.

Figure 19: Report of successful location of drill tool back to astronaut agent

Status icons

Throughout the simulation, the status of the agents is reported in the text buffers and indicated by convenient “status icons” projected above the active agents (astronaut and PSA). The visual language for these status icons is described in tables 1 and 2 below.

Table 1: Task Icons (Top Icon)

This icon is an indicator of the Action the Agent is currently performing.

PSA & Astronauts:
  • The Agent is looking at, and tracking, the Target.
  • The Agent is moving to a new location.

PSA only:
  • Power Lead – the PSA is seeking a charging station.
  • The PSA is currently scanning its surroundings for its Target.
  • The PSA is reporting the tool’s location.
Table 2: Target Icons (Bottom Icon)

The Target icon represents what object the Agent is currently focused on. This is either the Watching Settable or, in the case of the PSA, the SearchFor Settable.

Icon targets:
  • Drill, Flashlight or Wrench tools
  • PSA is receiving commands (via laptop)
  • Agent sees PSA or Astronaut
  • PSA is affected by a blower fan
  • PSA has identified and is utilizing a power station

Underlying technology enabling the above interaction

In the following section we will describe the underlying architecture that enables the above BrahmsVE implementation of the multi-agent simulation of the ISS with PSA, covering new additions to the architecture made possible by this Phase I support.

Use of industry standard platforms

A primary goal of BrahmsVE is to utilize standard web components that run on ordinary consumer personal computers with no special hardware. Achieving this kind of ubiquity is a longtime dream of industrial design and simulation, and it has been made possible by the advent of the multi-gigahertz processor, high-speed net connectivity and low-cost, high-performance 3D acceleration. BrahmsVE can therefore be used from virtually any net-connected PC in the world, and has the added benefit of a multi-user option allowing distributed researchers to chat and be visible as “avatars” within a shared simulation. Thus, BrahmsVE is evolving into a collaborative virtual environment for distributed team use, a tool for 21st-century work practice engineering.

Behind the scenes: Adobe Atmosphere, the OWorld engine and Brahms interfaces

Figures 20 and 21 depict the current state of the BrahmsVE architecture with the implementation of this Phase I project, described next.



Figure 20: Current flowchart diagram describing interfaces between Brahms, the Web server and the entire BrahmsVE environment



Figure 21: Current flowchart diagram describing the OWorld engine operation with Atmosphere

Adobe Atmosphere

Adobe has partnered with DigitalSpace since 2001 to provide its Atmosphere 3D web plugin to the BrahmsVE effort. Adobe has contributed significantly to the platform (at no cost to DigitalSpace or NASA) based on our requests. Atmosphere is a stable, integral part of the platform and provides the 3D scenegraph, rendering engine (in software and hardware), scripting language, physics engine (from Havok, Inc.) and an object renderer and animator to represent objects and agents and their gestures (from Viewpoint Inc.).

The OWorld engine

The OWorld engine is the heart of the BrahmsVE platform and consists of the following major components (figures 20-21), which execute entirely within the framework of the Internet Explorer web browser and HTTP protocols:
  1. Interface layers that permit two-way communications with Brahms and Web services, implemented utilizing XML/HTTP. In this Phase I build, the full two-way Brahms implementation is simulated by JavaScript. The Brahms team will be building special networking and command processing facilities into the Brahms server to permit the full two-way connection of our engine with Brahms.
  2. A command parser that processes (in JavaScript format) all Brahms commands (interface commands defining settables, detectables and reports).
  3. A series of multi-threaded message queues, one associated with each agent, that interact with the entire OWorld framework and the settable/detectable actions (a sketch of this queue behavior follows this list).
  4. A path manager that reports on valid locations and traversals permitted by the world geometry.
  5. World geometry and Viewpoint objects, which communicate valid path information and execute gestures required by actions.
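
To make the queue behavior of item 3 concrete, below is a minimal sketch in JavaScript (the implementation language of the OWorld engine). All names here (AgentQueue, begin, isDone) are our own illustrative assumptions and are not taken from the OWorld source.

// Minimal sketch of a per-agent command queue. Commands wait until the
// simulation clock reaches their start time and all earlier commands for
// the same agent have finished.
function AgentQueue(agentName) {
  this.agentName = agentName;
  this.pending = [];    // queued commands, in arrival order
  this.current = null;  // command now executing, if any
}

AgentQueue.prototype.enqueue = function (command) {
  this.pending.push(command);
};

// Called once per frame with the current simulation time. The command
// objects are assumed to expose startTime, begin() and isDone().
AgentQueue.prototype.update = function (simTime) {
  if (this.current === null && this.pending.length > 0 &&
      simTime >= this.pending[0].startTime) {
    this.current = this.pending.shift();
    this.current.begin(simTime);  // e.g. start a Walk action or a gesture
  }
  if (this.current !== null && this.current.isDone(simTime)) {
    this.current = null;          // allow the next queued command to start
  }
};

Note that a start time of zero is always satisfied, so a command queued with start time 0 begins as soon as its predecessors finish; this matches the timing “trick” described in the command documentation below.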

Extensions to the OWorld engine

For this project the OWorld engine was extensively upgraded to provide multi-threaded support of multiple astronaut and robotic agents operating in parallel. A first-generation path-finding module was added, as well as the two-way XML dialogue layer for future network communications with Brahms (figures 20, 21). The new Brahms commands are described next.

Brahms commands and new commands implemented

Activity/Interface Commands

All Brahms commands are made up of multiple parts separated by "pipe" characters ( | ). The first part of any command is its "type". There are two recognized types in the OWorld library, "activity" and "interface". The type determines how many other parts the command will have.

Activity

The Activity type command always has at least six parts. The second part (after the Activity type) is its "sub-type". There are four recognized values: "move", "get", "put", and "primitive". The most commonly used sub-types are "move" and "primitive".

The third and fourth parts of the Activity command are the start and end time of the command. The command will not begin until the simulation time is equal to or greater than the start time (and all preceding commands have been completed). The execution of the "Action" (explained later) is accelerated or slowed to ensure the command is completed by the end time.

There are some special values and uses of the start and end times. A common "trick" is to specify zero (0) as the start time. Because the system time will always be greater than or equal to zero, the command will be executed as soon as all preceding commands are completed. This is useful when the length of time taken by a previous command is unknown.

Another "trick" is to specify negative one (-1) as the end time. This will cause the command to take as long as it needs to perform the "Action" without accelerating or slowing the command.

The fifth part of the Activity command line is the Agent. This is the full name of the Agent that will be performing this command.

The sixth part is the name of the "Action" the Agent will execute to fulfill this command. A common combination is the "move" sub-type and the "Walk" Action. In the case of a "primitive" sub-type, this is usually an animation or gesture.

While every Activity command has to have at least six standard parts to be valid, each of the sub-types requires different additional information. For each of the sub-types defined in the OWorld library, there is an additional part that is the "props" of the command. This is the name of the Prop (such as a coffee mug or tool) that the Agent is currently using.

The "move" sub-type requires two additional parameters, from and to. These are the names of the pre-defined Areas that the Agent will be moved between. The OWorld system will produce a path for this movement when the command is parsed (not when it is executed).

The Action performed by the "move" sub-type command is responsible for things such as obstruction detection and path failure. The technique currently used in OWorld is that when the "Walk" Action detects a problem, it defines an Area where the Agent currently is, stops moving, and produces an Interface type Alert for Brahms, informing it of the failure. Currently the response to such an Alert is to produce a new "move" sub-type command, from the failure location to the previous destination. The "primitive" sub-type does not require any additional parameters (other than the props part); a primitive is an Action that is performed "as is". The "get" and "put" sub-types require two additional parameters, from and to, affecting Props.
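
As an illustration of the format just described, the following JavaScript sketch assembles "activity" command strings. The helper makeActivityCommand is our own illustrative assumption, not part of the OWorld library; the field layout follows the description above and the sample commands shown later in this section.

// Illustrative helper assembling an "activity" command string from its
// six standard parts plus the props part and any sub-type parameters.
function makeActivityCommand(subType, startTime, endTime, agent, action, props, extra) {
  var parts = ["activity", subType, startTime, endTime, agent, action, props || ""];
  return parts.concat(extra || []).join("|");
}

// A "move" that starts as soon as preceding commands finish (start time 0)
// and takes as long as the Walk Action needs (end time -1):
var moveCmd = makeActivityCommand("move", 0, -1,
  "projects.issvre.PSA", "Walk", "",
  ["projects.issvre.CentrifugeAccomodation.center", "projects.issvre.USLab.center"]);
// moveCmd is:
// "activity|move|0|-1|projects.issvre.PSA|Walk||projects.issvre.CentrifugeAccomodation.center|projects.issvre.USLab.center"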

Interface

All "interface" type commands require six parameters:
  1. The first is the type, "interface". The second is the sub-type, either "setDetectable" or "setSettable".
  2. The third is time. Interface commands occur almost instantly, lasting for only one cycle (or frame); therefore, they only require a start time. Like the "activity" type of command, they are executed when the simulation time is equal to or greater than the start time. In the same way, the "trick" of using zero (0) can be used to make the command occur as soon as all previous commands are complete.
  3. The Interface command has an extra trick: using minus one (-1) as its time value. This is a specially checked condition, and will cause the command to be placed at the beginning of the queue, to be executed as soon as the current command (either Interface or Activity) has completed.
  4. The fourth part of the Interface command is the Owner. This may be either an Agent or an Interactive (an Interactive is an improvement on the previous concept of Props; it is essentially an Agent with a limited set of available Actions).
  5. The fifth part is the ID. This is the name of the Settable or Detectable being set.
  6. The sixth portion is the Value the Settable or Detectable is being set to. In the case of a Settable, this may be any value (as long as it does not contain the pipe character). In the case of a Detectable, this may be "true", "active", or "false". "true" or "active" will enable the Detectable, causing it to report using an Interface Alert (explained later) when its condition becomes valid. A setting of "false" will disable it.
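
A brief sketch of the same idea for "interface" commands follows; again the helper is our own assumption and not part of the OWorld library. The example values are taken from the sample command log later in this section.

// Illustrative helper assembling the six-part "interface" command.
function makeInterfaceCommand(subType, time, owner, id, value) {
  return ["interface", subType, time, owner, id, value].join("|");
}

// Point the PSA's "Watching" Settable at the flashlight as soon as the
// preceding commands complete (time 0):
var setCmd = makeInterfaceCommand("setSettable", 0,
  "projects.issvre.PSA", "Watching", "projects.issvre.flashlight1");

// Arm the PSA's "Search" Detectable at the head of the queue (time -1):
var armCmd = makeInterfaceCommand("setDetectable", -1,
  "projects.issvre.PSA", "Search", "active");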
Interface Alerts

Interface Alerts are Brahms commands that are sent from the VE to the Brahms server. They use the sub-type "detectable", and their time is the time at which they were triggered. Owner and ID are the same as in the other Interface commands. The Value portion depends on the Detectable that is sending the Alert; this is usually greater detail about the Alert, to be used by the Brahms server in deciding what will occur as a result.

A common example of this is the Idle Alert. While this is not actually a proper Detectable (the Alert is sent by the "idle" Action itself whenever an Agent starts or stops idling), the Value portion of the Alert signifies whether the Agent is starting to idle (Value is "true") or stopping (Value is "false").

Sample Brahms commands explained

The Brahms commands sent into and out of the environment are displayed in the text box at the top of the BrahmsVE window (see Figure 9). A > or < symbol indicates whether the command is being sent from the environment or from Brahms. Several of the commands have been extended to provide the capabilities required for this simulation. The most significant is the addition of the "interface" commands. These relate to “Settables” and “Detectables” (commands that set up the simulation and request reports or trigger actions), and are used in both directions. For example, an interface command will set the value of a Settable, or activate a Detectable, while a similar command in the other direction will notify Brahms of a Detectable being triggered.

Below is an example series of Brahms commands:
<interface|setSettable|0|projects.issvre.PSA|Watching|projects.issvre.flashlight1
activity|move|118.944999694824|-1|projects.issvre.PSA|Walk||projects.issvre.CentrifugeAccomodation.center|projects.issvre.USLab.center
interface|setSettable|0|projects.issvre.PSA|Watching|projects.issvre.Astro
>interface|detectable|108.944999694824|projects.issvre.PSA|Search|true

In this sequence, the virtual PSA agent has reported the triggering of its Detectable "Search". This resulted in the setting of the PSA Settable "Watching", making it watch the flashlight when Idle. It was also told, at time 118..., to move from the Centrifuge Accommodation to the US Lab. The -1 informs it to take as long as it needs (rather than taking a prescribed amount of time, which would result in the PSA moving very quickly). Complete documentation of the Brahms action command set and related material is available at the project website referenced in [24].
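
To show how such logged lines decompose, here is an illustrative parser, again our own sketch rather than OWorld source code. It splits each line on the pipe character and interprets the direction marker as in the log above, where > marks a command sent from the environment to Brahms and < one sent from Brahms to the environment.

// Illustrative parser for logged Brahms command lines.
function parseCommandLine(line) {
  var direction = "unspecified";
  if (line.charAt(0) === ">" || line.charAt(0) === "<") {
    direction = (line.charAt(0) === ">") ? "fromEnvironment" : "fromBrahms";
    line = line.substring(1);
  }
  var parts = line.split("|");
  if (parts[0] === "interface") {
    return { direction: direction, type: "interface", subType: parts[1],
             time: parseFloat(parts[2]), owner: parts[3],
             id: parts[4], value: parts[5] };
  }
  if (parts[0] === "activity") {
    return { direction: direction, type: "activity", subType: parts[1],
             startTime: parseFloat(parts[2]), endTime: parseFloat(parts[3]),
             agent: parts[4], action: parts[5], props: parts[6],
             extra: parts.slice(7) };  // e.g. the from/to Areas of a "move"
  }
  throw new Error("Unrecognized command type: " + parts[0]);
}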

Implementation of a virtual environment inventory and geometry reporting system

The Brahms team requested the implementation of a BrahmsVE export command that would report the geometry and inventory of objects within the VE. A version of this was built and will be completed as part of Phase II. This will allow the Brahms modeler to “query” the VE and gain the starting set of definitions to then use in modeling exercises. This reduces the “double work” of defining the geometry and objects both in Brahms and then again in the VE.

Current Limitations of the BrahmsVE platform

The major limitation of the current BrahmsVE platform is the lack of the return interface to communicate the results of Detectables and Interface Alerts to the Brahms environment running on its server. The implementation of new code in Brahms is being scheduled to handle the syntax of the messages we are now generating. Phase II will see the full synchronous interface implemented and tested in several sample applications.

Limitations with line-of-sight testing

The primary technical limitation relates to the line-of-sight/ray intersection functions. These are a group of functions designed for detecting points of collision along a line (for example, finding out where a laser beam strikes an object, in order to place a reflection object/sprite). A limitation of these functions is related to the way Atmosphere handles loaded models. When performing a ray intersection test on the world geometry, you are required to use the entire geometry, including any models loaded by the script (be they Atmosphere objects or Viewpoint models). As a result, you cannot choose to "see through" objects that are dynamically placed, and you cannot tell the difference between a model and a wall. An example of this is when the PSA attempts to look for a tool. By performing a ray intersection (line-of-sight) test in the direction of the tool, it can determine whether it can see the tool or whether there is something in the way. However, it cannot tell what the blocking object is, and if that object is supposed to be transparent (e.g. the PSA's laser beam), there is no way to ignore it.

A related problem is that the PSA and astronauts have to "look" from a point outside their own body models; otherwise the agent will see its own body and think that something is in the way. This can lead to problems where, if the agent is near a wall, it can often see through it (as the offset applied to avoid seeing itself places the point it is looking from on the other side). A place where this is more of a hazard is in path finding. The path-finding routines perform these line-of-sight tests between pre-defined nodes (waypoints/Areas) to determine which nodes the agent is able to travel to, in an expanding pattern until it finds its destination. However, if something is blocking one of these waypoints (an astronaut moving temporarily through that space, or even the model of the Agent that is attempting to find the path), these line-of-sight tests will fail, and the agent will think there is no way to reach the destination point. In the current implementation this is overcome by continuously retrying if a path isn’t found (hoping the astronaut will move). Often this can last for an extended period of time (especially since the line-of-sight tests are not 100% accurate when working with animated Viewpoint models), causing the simulation to appear stalled.
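
The expanding waypoint search described above can be sketched as a breadth-first traversal over the pre-defined nodes. This is a hypothetical illustration, not the OWorld source: neighborsOf and lineOfSight are assumed callbacks, with lineOfSight standing in for the engine's ray intersection test and returning false whenever anything, including a passing astronaut, blocks the segment between two waypoints.

// Hypothetical sketch of line-of-sight path finding between waypoints.
function findPath(start, goal, neighborsOf, lineOfSight) {
  var frontier = [[start]];           // candidate paths, shortest first
  var visited = {};
  visited[start] = true;
  while (frontier.length > 0) {
    var path = frontier.shift();
    var node = path[path.length - 1];
    if (node === goal) return path;   // ordered list of waypoints to traverse
    var neighbors = neighborsOf(node);
    for (var i = 0; i < neighbors.length; i++) {
      var next = neighbors[i];
      if (!visited[next] && lineOfSight(node, next)) {
        visited[next] = true;
        frontier.push(path.concat([next]));
      }
    }
  }
  return null;  // no clear route; the caller retries, hoping the temporary
                // obstruction (for example, an astronaut) moves on
}

As the text notes, when a temporary obstruction causes every line-of-sight test around a waypoint to fail, this search returns no path, and the engine's only recourse is to retry, which is why the simulation can appear stalled.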

Other implementation Issues

The current implementation does not use the Havok physics engine. This is due to a number of reasons, including inflexibility in the physics modeling (the center of gravity is always the center of geometry), the physics model not being aware of changes in a Viewpoint's geometry due to animation (at last test), manikin collisions (an astronaut's entire body will bounce off a wall, rather than just the arm bending), and speed (particularly during collisions).

All of these limitations and implementation issues will be addressed in Phase II.

Part 4 Work Plan

4.1 Technical Approach

During Phase I our technical approach focused on the following three task sets:
  • Task 1: Research, interviews and model construction
  • Task 2: Creation of OWorld engine components and new Brahms commands
  • Task 3: Build-out of the complete scenarios: astronauts, tools, wayfinding and reporting


4.2 Task Descriptions

Task 1: Research, interviews and model construction

DigitalSpace performed interviews of key advisors and participants, including the Brahms team, the QSS team building the PSA, and others. We then obtained imagery and scale plans for the ISS and proceeded to create the first 3D module (US Habitation), including a Viewpoint model of the PSA in its current revision.

Task 2: Creation of OWorld engine components and new Brahms commands

The next task was to construct all of the new OWorld engine components and implement the new Brahms commands. The Path Finding module was built during this task phase.

Task 3: Build-out of the complete scenarios: astronauts, tools, wayfinding and reporting

The third step was to build all the 3D objects and Brahms simulated models, and then create the scripting and communications dialogue for eleven renditions of the simulation, involving the PSA responding to requests to search for tools aboard the ISS while avoiding obstacles (blower fans and moving astronauts) and then returning to report on the findings. A major effort was undertaken at this time to document the current BrahmsVE architecture and prepare recommendations for Phase II continuation.

4.3 Meeting The Technical Objectives

The Phase I BrahmsVE as specified above met the technical objectives outlined in Part 3 as follows:
  • The ISS/PSA application of BrahmsVE demonstrates that we can provide a world-class, model-based, discrete agent simulation environment that is both fully interactive, representing the simulation as a realistic 3D virtual environment, and collaborative, allowing researchers and operators to utilize it over networks on standard personal computers equipped only with a web browser.
  • Given Phase II continuation and full productization, BrahmsVE can serve as an environment that can provide interactive simulation for:
    • Training, both well in advance of missions and just in time during mission operations, as well as for the commercial applications discussed in Part 8 below.
    • Testing of design concepts for flight hardware and whole vehicles (PSA and FMARS/MDRS) as well as commercial applications (to be discussed in Part 8).
    • Visualization of “day in the life” re-creations of crew/system operations captured on video and reconstructed in a 3D world for researchers to study.

BrahmsVE will meet our expected market/customer objectives in that it will be a low-cost solution, requiring only current basic personal computer models with no added hardware, and it permits content development (3D and scripting) using standard web components (Atmosphere, Viewpoint, JavaScript, XML, HTML).

4.4 Task Labor Categories and Schedules

This section describes the work schedule for the Phase I effort (see Tables 3-4 below). DigitalSpace work is coordinated from its corporate offices located near Santa Cruz, California. DigitalSpace design and testing teams are located at several places around the United States and internationally. This schedule assumes a five-month project duration.

Project Reference Website

The Project Reference Website is the center for ongoing progress and resources surrounding the project, from the interview phase to the prototype test fixture evaluation. The site is available now for review via Web reference [24].

Part 5 Potential Applications

5.1 Potential NASA Applications

Some NASA applications of BrahmsVE include: simulation for research of mission hardware and work practices, design of augmentation through agents, just-in-time training, VR for tele-operations, FMARS/MDRS activities modeling and re-creation, PSA/ISS design simulation, realistic rendering for Sim-Station or Sim-Shuttle, Mars MSL ’09 and Titan mission design, Virtual Digital Human project support, and education/outreach.

See Part 8 for a detailed account of our efforts to create additional NASA BrahmsVE customers.

5.2 Potential Non-NASA Commercial Applications

Some non-NASA Federal Government applications of BrahmsVE include: wind farm and alternative energy production modeling for DOD and DOE, mobile agents and human augmented system conceptualization and simulation for military and security applications. Commercial applications for BrahmsVE include: design, training and operation of automated factories, multiplayer robot games, educational spacecraft/colony simulators for student distance team-based learning, museum installations for spacecraft mission simulations, and concept design and test of wireless and mobile devices with human users. See Part 8 for a detailed account of our efforts to create commercial BrahmsVE customers.

Part 6 Contacts

6.1 Key Contractor Participants

The following brief resumes introduce management/technical staff members for the SBIR Phase I project.
Name: Bruce Damer (PI)

Years of Experience: 22

Position: CEO

Education: Bachelor of Science in Computer Science (University of Victoria, Canada, 1984); MSEE (University of Southern California, 1986)

Assignment: Mr. Damer will be the Principal Investigator for the SBIR effort. He will coordinate all interaction between DigitalSpace, its collaborators, NASA and other participants, and will be responsible for all staffing, technical design, reporting and documentation. Mr. Damer will devote a minimum of 100 hours per month to the NASA SBIR project.

Experience: Mr. Damer is the world's recognized expert on avatars and shared online graphical virtual spaces, having created much of the early literature, conferences and awareness of the medium. Mr. Damer is a visiting scholar at the University of Washington Human Interface Technology Lab and a member of the staff at the San Francisco State Multimedia Studies Program. See http://www.digitalspace.com/papers for a complete bibliography of Mr. Damer's work.

Name: Stuart Gold

Years of Experience: 28

Position: President

Education: Royal Institute of British Architects

Assignment: Stuart Gold will serve as Program Manager for the project. He will structure the technology components and architecture for the BrahmsVE 1.0 release, coordinate the 3D modeling teams, and provide database and real-time community tools infrastructure support for the project as well as the XML-based interfaces with Brahms.

Experience: Mr. Gold is a pioneer of online systems, starting with his work on transaction processing for Prestel in the 1970s and continuing most recently with his leadership in the design and delivery of online virtual worlds, including TheU Virtual University Architecture Competition, the International Health Insurance Virtual Headquarters, and the Avatars98-2001 online events. Mr. Gold is also the chief architect of the DigitalSpace communities platform, implementing XML- and JS-based community tools for use by all DigitalSpace projects. See http://www.digitalspace.com/papers for his recent writings.

Name: Bruce Campbell (Lead SE)

Position: Programmer/Architect, OWorld/Atmosphere

Experience: 5 years of experience at the University of Washington Human Interface Technologies Laboratory and the Department of Oceanography.

Assignment: JavaScript programming, interface with Brahms and 3D content, testing, open source component strategy, university and distance learning user partnerships.

Name: Galen Brandt (Marketing – Phases II and III)

Position: New business development, DigitalSpace

Experience: 25 years including creating market strategies for Dun and Bradstreet, the Franklin Mint, SUNY Fashion Institute of Technology, DoToLearn and others.

Assignment: Market development for Phase II and III.

Name: Dave Rasmussen (SE)

Position: Member of the 3D Design Studio, DigitalSpace

Experience: 8 years experience in virtual world design, skills: 3DS Max, Java, Active Worlds, Adobe Atmosphere, PHP/MySQL database development

Assignment: Directing team performing 3D modeling and animation, testing

Name: Merryn Neilson (Lead CD)

Position: Member of the 3D Design Studio, DigitalSpace

Experience: 8 years experience in virtual world design, skills: 3DS Max, Java, Active Worlds, Adobe Atmosphere

Assignment: Web design on project, 3D worlds, avatar design, testing

Name: Peter Newman (SE & TE)

Position: Developer in C++, JS, PHP, HTML, 3D Design Studio, DigitalSpace

Assignment: Programmer of OWorld engine extensions.

Name: Ryan Norkus (CD & TG)

Position: Graphic artist, 3d modeler and animator, 3D Design Studio, DigitalSpace

Assignment: Focusing on the automation of animated sequences

6.2 Key NASA Participants
  • Dr. William Clancey, COTR, Chief Scientist, Human-Centered Computing
  • Boris Brodsky, Ames, assisted with Brahms/BrahmsVE architecture and testing
  • Dr. Maarten Sierhuis, Senior Scientist, Human-Centered Computing, RIACS
  • Ron van Hoof, Agent iSolutions
  • Natalie Lemar, SBIR Contract Administrator
6.3 NASA and Non-NASA Advisors
  • Charles Neveu, QSS, PSA Team – advising on Phase I PSA/ISS application (implemented)
  • Mike Sims, Ames – advising on rover and surface mission design, assisted with JPL/MER demonstration project
  • Geoff Briggs, Ames – advising on terrain modeling, surface mission design, commissioned whole-planet Mars terrain demonstration project.
  • John Peterson, Arlington Institute – encouraged and hosted DOD/Energy security meeting where 3D wind farm was presented.
  • Tom Furness III, HIT Lab University of Washington – technology transfer advisor
  • Navy Captain Richard O’Neill, Director, Highlands Group

Part 7 Technical Activities

7.1 Cumulative Technical Activities

Task 1: Research, interviews and model construction

During the first two months of the Phase I project, DigitalSpace performed interviews of key advisors and participants, including the Brahms team, the QSS team building the PSA, and others. We then obtained imagery and scale plans for the ISS and proceeded to create the first 3D module (US Habitation), including a Viewpoint model of the PSA in its current revision.

Task 2: Creation of OWorld engine components and new Brahms commands

During the following two months, the new OWorld engine components were constructed and the new Brahms commands implemented. The Path Finding module was built during this period.

Task 3: Build-out of the complete scenarios: astronauts, tools, wayfinding and reporting

During the final two months of the project, all the 3D objects, Brahms simulated models, scripting, and communications dialogue were completed for eleven renditions of the simulation, involving the PSA responding to requests to search for tools aboard the ISS while avoiding obstacles (blower fans and moving astronauts) and then returning to report on the findings.  A major effort was undertaken at this time to document the current BrahmsVE architecture and prepare recommendations for Phase II continuation.

7.2 Future Technical Activities

Task 1: Complete Brahms two-way synchronous communications

The next six months will be dedicated to working with the Brahms team to establish full two-way communications with the Brahms server. Our new specification of Brahms commands and actions will inform the Brahms developers about the additions needed to their architecture.

Task 2: Integrate Database PHP/SQL for object synchronization and script delivery

By early 2004 we hope to be integrating the database back-end, providing PHP and SQL management of object and virtual world synchronization and delivering executable script to agents on the fly. We will also integrate and test the physics engine.

Task 3: Build out sample applications, complete Brahms developer integration

By late 2004 or early 2005 we will be creating a series of sample applications that will provide both a test of the platform and developer examples for the 1.0 production release of BrahmsVE. These sample applications may well come from customers funding their development.

Part 8 Potential Customer and Commercialization Activities

In anticipation of Phase III commercialization and Phase II pre-launch marketing and customer activity, we have identified the following classes of valuable applications for BrahmsVE.

8.1 Cumulative NASA Potential Customer Activities

Numerous projects within NASA, ranging from the PSA at Ames to the Mobile Agents project of interest to several NASA centers (including JSC), could benefit from the addition of “intelligent” agents within a model-based, discrete agent-driven virtual environment. Rapid, iterative design of a mobile agent working in tandem with human participants can yield a body of design feedback at much lower cost than building several iterations of physical models.

Modeling and simulation for Mars Science Laboratory and Titan missions

Geoff Briggs and Michael Sims of Ames inform us that future Mars (Mars Science Laboratory ’09) and Titan missions will include the use of drills, rotorcraft and airplanes. While these are strictly robotic missions, there will be “humans in the loop” in the form of significant science backrooms and mission control.

Sim-station support

Julian Gomez, who is working on early renditions of the Sim-Station project for RIACS at Ames, informs us that there will be a need for realistic 3D representations of systems and people within the ISS to complement the schematic-style representation of station subsystems. Julian has been equipped with a running version of BrahmsVE.

Greater operator effectiveness through improved telepresence interfaces

NASA (JSC, Ames) is developing improved robotics and 3D simulation technologies to provide operational robustness and intelligence, with the goal of improving operator efficiency via advanced displays, controls and telepresence. Tactile-feedback interfaces for collision awareness among workspace objects, avatars and robot structure; force-feedback devices for awareness of manipulator and payload inertia and gripping force; and stereoscopic display systems with spatial tracking of the head, arms, etc. are all considered key to more effective teleoperation. Because BrahmsVE is flexible and built on open Web standards, we expect that it will have a role to play in teleoperation and that we will be able to build interfaces to tactile and force-feedback systems, bringing this key interface modality into BrahmsVE.

Ames: Virtual Digital Human

Based on the new Mission Control Center System (MCCS) architecture framework, integrated support for virtual-digital-human-in-the-loop and teleoperational interfaces is being promoted for flight and ground operations development, analysis, training, and support. The desired result is an interactive system that enhances operator and IVA/EVA task efficiency via teleoperational technologies and distributed collaborative virtual environments.

The implementation of the Virtual Digital Human (VDH) seeks to create anatomical, biomechanical and anthropometric functionality that fully simulates the somatic components and systems of the human body. BrahmsVE, which uses the Adobe Atmosphere Viewpoint technology for procedural (skeletal), skinned human body forms within shared virtual environments, may be able to meet this challenge.

NASA educational outreach and Space Camp

Immersive virtual worlds, virtual digital humans (VDH), and 3D simulation modeling have become a significant vehicle for NASA's effort to generate and communicate knowledge and understanding to K-12 and college/university students on topics such as International Space Station and Space Shuttle/Space Transportation System (STS) operations, robotics, intravehicular/extravehicular activities, Mission Control Center conduct, interplanetary space flight, and microgravity simulation. BrahmsVE can go a long way toward helping NASA enable this kind of outreach and could even become a fixture at NASA Space Camp at several centers.

8.2 Cumulative Non-NASA Potential Customer Activities

K-12 and College, Education and Museums

The current set of NASA BrahmsVE applications could be repurposed into educational course modules for schools. In discussions with Al Globus of UC Santa Cruz and the Planetary Society in Pasadena, California, we have determined that there is a need and a market for spaces in which students can construct space stations or colonies on the Moon or Mars and design all of the subsystems and human/agent activities.

Agent-based virtual environments can also be of great value to museums and science learning centers such as the Exploratorium in San Francisco, where we have been in contact with Technical Director Larry Shaw, who is interested in hosting an event for the MER landing in January 2004 using the BrahmsVE MER modeling we did for JPL.

DOD and DOE – energy security design application

In January 2003 BrahmsVE was presented at a special workshop at the Arlington Institute held for the Office of the Secretary of Defense. A prototype virtual windfarm [27] was developed utilizing Adobe Atmosphere, OWorld and the BrahmsVE engine. The Havok physics engine allowed us to show windfarm kilowatt-hour (kWh) production for different configurations of turbines. Coupling this with a geographical information system provided by GeoFusion Inc. allowed us to show the DOD staffers and other energy experts how sites for windfarm power could be selected and the production output then modeled. We plan to present this work again at a Highlands Forum program for the DOD at the end of 2003 and expect that a DOE presentation will follow.
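
As a back-of-envelope illustration of this kind of production estimate, the JavaScript sketch below applies the standard wind-power formula P = ½ρAv³Cp; all parameter values are illustrative and are not the figures presented at the workshop:

  // Instantaneous output of one turbine in kW, from the standard
  // wind-power formula P = 0.5 * rho * A * v^3 * Cp.
  function turbineOutputKW(rotorDiameterM, windSpeedMS, cp) {
    const rho = 1.225; // air density at sea level, kg/m^3
    const sweptArea = Math.PI * Math.pow(rotorDiameterM / 2, 2); // m^2
    return 0.5 * rho * sweptArea * Math.pow(windSpeedMS, 3) * cp / 1000;
  }

  // Hypothetical configuration: 20 turbines, 60 m rotors, 8 m/s wind.
  const farmKW = 20 * turbineOutputKW(60, 8, 0.35); // roughly 6,200 kW
  // Naive annual energy assuming constant wind; a real estimate would
  // integrate over the site's wind-speed distribution.
  console.log(farmKW.toFixed(0) + " kW, ~" + (farmKW * 8760 / 1000).toFixed(0) + " MWh/year");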

8.3 Other Cumulative Commercialization Activities

Robot games – educational and entertainment applications

Robot “wars” are one of the most popular forms of entertainment in the popular media, and robot game competitions are some of the finest learning events for K-12 and college engineering students and faculty. Ames sponsors such events with CMU students and high schools. We have communicated with the organizers of the Ames events and demonstrated BrahmsVE to them. We plan to partner with them and the local chapter of the Robotics Society of America to develop a kids’ robot design lab and competition space within the virtual spaces made possible by BrahmsVE.

Massive multiplayer online games are experiencing a large amount of investment and commercial interest. BrahmsVE is a competent platform for the creation of a successful multiplayer online game, both as a learning tool and as a pay-per-play tournament environment. We plan to seek support for a commercial, online robot games application; we have secured the trademark “digibots” for this project and are creating a business plan.

Defense design, training and operations applications

The military will be using semi- and fully autonomous agents working closely to support troops and command in surveillance and combat missions throughout the 21st century. We therefore expect a great deal of interest surrounding a product in this space. We are already in contact with the Naval Postgraduate School MOVES Institute about cooperating on, and adopting, a new XML-based standard for simulation communications.

8.4 Future Potential Customer and Commercialization Activities

Industrial design, training and operations applications

From factory-floor automation to security systems, complex environments where humans work in tandem with mobile agents or other autonomous machine systems all need a comprehensive model-based environment with high-fidelity 3D re-creation during the design, training and operations phases. Industrial training is a multi-billion-dollar-per-year industry, and BrahmsVE, running on industry-standard platforms, is uniquely suited to enter this market.

Consumer market research for personal wireless assistants

The emerging era of wireless, wearable personal assistants is picking up momentum with ever more sophisticated cell phones and other handheld devices. In a real sense, each of these devices represents the pairing of humans with machines, all of which the BrahmsVE human/agent augmentation design environment can model for product design purposes.

Part 9 Resource Status

As of the report date, 100% of the work has been completed.

Part 10 References
[1] Associated Press News Report dated 08/16/2002, on the web at: http://www.cnn.com/2002/TECH/space/08/16/station.spacewalk.ap/index.html

[2] Loftin, R.B., and Kenney, P.J., "Training the Hubble Space Telescope Flight Team," IEEE Computer Graphics and Applications, vol. 15, no. 5, pp. 31-37, Sep, 1995.

[3] Engelberg, Mark [Ed.] (September 11, 1994). Hubble Space Telescope Repair Training System [WWW document]. URL http://www.jsc.nasa.gov/cssb/vr/Hubble/hubble.html

[4] Cater, J. P., and Huffman, S. D. Use of Remote Access Virtual Environment Network (RAVEN) for Coordinated IVA-EVA Astronaut Training and Evaluation. Presence: Teleoperators and Virtual Environments, vol. 4, no. 2 (Spring 1995), pp. 103-109. (Training for Hubble Space Telescope repair.)

[5] Jeff Rickel and W. Lewis Johnson. Task-oriented collaboration with embodied agents in virtual worlds. In J. Cassell, J. Sullivan, and S. Prevost, editors, Embodied Conversational Agents. MIT Press, Boston, 2000.

[6] Transom Jack is described on the Web at: http://www.manningaffordability.com/S&tweb/HEResource/Tool/Shrtdesc/Sh_TRANSOM.htm

[7] MOVES Institute on the Web at: http://www.movesinstitute.org/

[8] Zyda, M., Hiles, J., Mayberry, A., Wardynski, C., Capps, M., Osborn, B., Shilling, R., Robaszewski, M., Davis, M., "The MOVES Institute’s Army Game Project: Entertainment R&D for Defense," IEEE Computer Graphics and Applications, January/February 2003

[9] Blais, C., Brutzman, D., Horner, D., and Nicklaus, S., "Web-Based 3D Technology for Scenario Authoring and Visualization: The Savage Project", Proceedings of the 2001 Interservice/Industry Training, Simulation, and Education Conference (I/ITSEC), Orlando, Florida, 2001.

[10] Michael Alan Freed. “APEX, Simulating Human Performance in Complex, Dynamic Environments”, A Dissertation Submitted to the Graduate School in Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy, Field of Computer Science, Northwestern University, Evanston, Illinois, June 1998

[11] Michael A. Freed, Roger W. Remington (2000). Making Human-Machine System Simulation a Practical Engineering Tool: An Apex Overview. In Proceedings of the 2000 International Conference on Cognitive Modeling, Groningen, The Netherlands.

[12] L. A. Nguyen, M. Bualat, L. J. Edwards, L. Flueckinger, C. Neveu, K. Schwehr, M. D. Wagner, E. Zbinden, “Virtual reality interfaces for visualization and control of remote vehicles”, Autonomous Robots, 11:59-68, 2001.

[13] B. Damer, M. Sierhuis, R. van Hoof, B. Campbell, D. Rasmussen, M. Neilson, C. Kaskiris, S. Gold, G. Brandt (2001). Brahms VE: A Collaborative Virtual Environment for Mission Operations, Planning and Scheduling, Final Report for STTR Contract #NAS2-01019, October 8, 2001. URL: http://www.digitalspace.com/reports/sttr-techreport-final2.htm

[14] BrahmsVE/FMARS Project Home Page on the web at: http://www.digitalspace.com/projects/fmars

[15] FMARS/Haughton-Mars Project Home Page on the web at: http://www.marssociety.org/arctic/index.asp and http://www.arctic-mars.org

[16] Ron van Hoof et al, “TM00-0024 Brahms/OWorld XML DTD Specification, Version 1.0 For Review”, 28 November 2000, NASA Ames Research Center.

[17] Boris Brodsky et al, “TM00-0025 BRAHMS OWorld Event Specification Version 1.0 Draft”, August 14, 2002, NASA Ames Research Center.

[18] Brahms is described on the web at http://www.agentisolutions.com and in several papers at: http://www.agentisolutions.com/documentation/papers.htm

[19] Clancey, W. J., Sachs, P., Sierhuis, M., and van Hoof, R. 1998. Brahms: Simulating Practice for Work Systems Design. International Journal of Human-Computer Studies, 49, 831-865.

[20] Sierhuis, M. 2001. Modeling and Simulating Work Practice; Brahms: A multiagent modeling and simulation language for work system analysis and design. Ph.D. thesis, Social Science and Informatics (SWI), University of Amsterdam, SIKS Dissertation Series No. 2001-10, Amsterdam, The Netherlands, ISBN 90-6464-849-2.

[21] Sierhuis, M.; Bradshaw, J. M.; Acquisti, A.; Hoof, R. v.; Jeffers, R.; and Uszok, A. Human-Agent Teamwork and Adjustable Autonomy in Practice, in Proceedings of The 7th International Symposium on Artificial Intelligence, Robotics and Automation in Space (i-SAIRAS), Nara, Japan, 2003.

[22] M. Sierhuis, W. J. Clancey, C. Seah, J. P. Trimble, and M. H. Sims, Modeling and Simulation for Mission Operations Work System Design, Journal of Management Information Systems, vol. 19, pp. 85-129, 2003.

[23] M. Sierhuis and W. J. Clancey, Modeling and Simulating Work Practice: A human-centered method for work systems design, IEEE Intelligent Systems, vol. 17, no. 5, 2002.

[24] BrahmsVE/ISS-PSA SBIR Phase I Project and reports Web page: http://www.digitalspace.com/projects/iss_03

[25] Dowding, John, CommandTalk from SRI is described on the Web at: http://www.ai.sri.com/~lesaf/commandtalk.html

[26] Perlin, Ken, demonstrations and papers on procedural figure animation on the Web at: http://mrl.nyu.edu/perlin

[27] DigitalSpace virtual windfarm and Arlington/OSD presentation described on the Web at: http://www.digitalspace.com/presentations/arlington-energy/

[28] A. Acquisti, M. Sierhuis, W. J. Clancey, J. M. Bradshaw, Agent Based Modeling of Collaboration and Work Practices Onboard the International Space Station. Proceedings of the 11th Conference on Computer-Generated Forces and Behavior Representation, Orlando, FL, May 2002.

[29] M. Sierhuis, A. Acquisti, and W. J. Clancey, Multiagent Plan Execution and Work Practice: Modeling plans and practices onboard the ISS, presented at 3rd International NASA Workshop on Planning and Scheduling for Space, Houston, TX, 2002.

[30] Personal Satellite Assistant (PSA) Test Fixture (Greg Dorais, Yuri Gawdiak, Daniel Andrews, Brian Koss, Mike McIntyre) described on the web at: http://ficworkproducts.arc.nasa.gov/psa_test_fixture/psa_test_fixture.html

[31] Clancey, W. J. 2001. Field science ethnography: Methods for systematic observation on an Arctic expedition. Field Methods, 13(3):223-243, August.

[32] David C. Wilkins, Patricia M. Jones, Roger Bargar, Caroline C. Hayes, Oleksandr Chernychenko: Collaborative Decision Making and Intelligent Reasoning in Judge Advisor Systems. HICSS 1999