DigitalSpace Commons Reports

FINAL REPORT for STTR Contract #NAS2-01019
Proposal 000104, Brahms VE: A Collaborative Virtual Environment for
Mission Operations, Planning and Scheduling
Dated: October 8, 2001
PI/Contact: Bruce Damer, DigitalSpace Corporation, (831) 338 9400
STTR Internal Team Contributors/Reviewers:
Bruce Damer, Maarten Sierhuis, Ron Van Hoof, Bruce Campbell,
Dave Rasmussen, Merryn Neilson, Charis Kaskiris, Stuart Gold, Galen Brandt
Project Summary

The purpose of this research was to provide a feasibility study and test implementation for a three-dimensional, multi-user virtual environment as an interface for the Brahms discrete-event agent-based work practice simulation system. The research carried out included: 1) the specification of an interface between Brahms and the DigitalSpace OWorld (formerly called DSpace1) generic 3D abstraction layer; 2) an initial test implementation of the interface in "scenario 1" (astronaut movement from a habitat vehicle to retrieve a rock) in the OWorld/Java3D environment; 3) the final test implementation in "scenario 2" (two astronauts on rovers traversing a virtual Lowell Canal from the FMARS HMP habitat to retrieve a sample, and returning to the habitat). The 3D platform used for the second implementation was Atmosphere from Adobe Systems Inc., which was judged superior to the Java3D environment from Sun Microsystems Inc.

Having coded and tested the simulation interface to the 3D OWorld/Atmosphere platform, it was determined that a Brahms model could be represented in a series of sophisticated visualizations. A related IS-supported activity enabled the accurate modeling of the interiors of the HMP habitat structure. This virtual model was used in scenario 2 and could be employed in future work to take actual 2001 field season data and visualize the complexity of interactions within the Brahms VE 3D representation. We feel that the successful implementation of scenario 2 and the resulting NASA internal interest in Phase I justify Phase II continuation. Beneficiaries of a proposed Phase II "production grade" implementation of Brahms VE could include a broad range of current NASA projects. Brahms VE projects discussed include: future HMP/MDRS science team support, a Science Back Room for remote teams interacting with Mars 2003/MER surface operations, Mobile Agents (PSA, JSC robotics), ATV studies, in-situ simulation of ISS operation and station mission control support, environments for online education and public outreach, and an important test of the system with the originally planned detailed recreation of the Apollo XVI ALSEP deployment.

Lastly, it is envisioned that numerous projects in government, industry, K-12 education and college/university, museums and science centers all converge on the need for the simulation, design, and operation of complex 3D environments in which people work or interact. We anticipate a broad range of adoption of Brahms VE as a commercial product in Phase III.

Table Of Contents

Major Sections



Initial Brahms and OWorld integration discussion


Final Phase I OWorld Integration Software Architecture


Scenario 2 Simulation Sample Brahms Exports


Scenario 2 Simulation Steps in the JavaScript Writer


Scenario 2 Excerpt from JavaScript Writer Output


Excerpts from: Simulating "Mars on Earth" A Preliminary Report from FMARS, Phase 2


Detailed Description of HMP/FMARS Brahms Model


Online documentation for further reference
1) Introduction and Project Objectives
Our original proposal for Phase I STTR support identified the need for a 3D graphic interface on the Brahms environment. It is useful in the context of this final report to excerpt the original identified need and technical objective, which are summarized here.
The need for an event driven discrete agent simulation environment
With the advent of more capable robots and longer-duration human exploration missions, it becomes necessary to design systems synergistically, so as to take into account the limitations and capabilities of both people and machines. The Brahms multi-agent simulation system represents human practices within a model of the work environment (e.g., habitats, spaceports and terrain), such that system simulations combine human cognitive and social behaviors with mechanisms and formal processes (robots and software agents).
The need for Brahms VE, a simulation environment with a 3D graphical interface
Experiments with Brahms, a currently funded Thinking Systems project, indicated that it would be suitable for modeling surface exploration practices such as the communication and interaction between earth and the astronauts during the Apollo lunar traverses. We (the RIACS team) have successfully modeled the ways in which the astronauts used checklists and relied on Capcom to keep on schedule. However, this methodology is unable to represent graphically in the simulation output the relative sightlines of the astronauts and the TV camera(s). For example, it is not apparent from the simulation whether the astronauts can see each other, how the TV camera is pointed, or what is visible from earth. Consequently, the methods used by Houston during the Apollo traverses for tracking one astronaut on the TV while speaking to the other cannot be represented in the model.
The proposed project is to add a 3D graphic interface by which the geographic layout, rovers, tools, robotic systems, spacecraft and astronauts can be depicted during the simulation. The Brahms language is fully capable of modeling activity in such an environment, and the simulation engine is capable of driving the graphic output. What is required is coupling an existing 3D language to Brahms. We have identified that the VR language and API embodied in the OWorld platform from DigitalSpace Corporation (DSC) is uniquely suited for pairing with Brahms. DSC has developed its OWorld/Atmosphere client VR platform to serve as a powerful scene-graph manager and Java-based, net-distributable, cross-platform component for collaborative virtual environments. Discussions with RIACS staff in 1999-2000 showed that there was a good fit for integrating OWorld with Brahms to provide the needed visualization and simulation environment.

Challenge for the Brahms VE system

The resulting tool, composed of Brahms coupled to a 3D graphic output (which we are terming Brahms VE, for Brahms Virtual Environment), will provide a unique combination of modeling language and 3D visualization output, directly applicable both to modeling and to supporting a variety of other space activities such as flight operations, EVAs, robot planning and control, payload processing, onboard training and virtual science, including fully autonomous missions with multiple, interacting robots. The PSA (personal satellite assistant) and other ISS programs could benefit from the visualization, modeling and work planning capabilities of Brahms VE. We also anticipate that the Brahms VE platform would be applicable to modeling scientific fieldwork on Mars, including the following key aspects:

·         Outdoor geographic modeling (terrain, climate) allowing the computation of sightlines and prediction of shadow casting by objects for power generation and visibility planning

·         Simulation of interplanetary communication time lag to support tele-operator training and improved work planning efficiency

·         Modeling several weeks of time, with unmanned rover ground operations tele-operation or manned team planning and support on the surface

·        Predicting wear and tear on objects, including breakage
2) Originally Stated Technical Objectives
The original Phase I technical objectives were to demonstrate:

(1)   Scoping, design and implementation of a prototype Brahms VE platform to be able to establish technical feasibility of the larger project goals

(2)   Representation of models and simulation content and testing in the Brahms VE prototype using the example of work practices and actual mission logs during the ALSEP deployment on the Apollo XVI mission. Implementation to include showing the sightlines of the astronauts and camera on the lunar surface and analyzing how these sightlines explain difficulties and coordination breakdowns that arose during ALSEP deployment.
3) Work Carried Out 
The work carried out, starting in November 2000, comprises two categories:

a)      Creation of an abstract interface specification between Brahms and the DigitalSpace 3D virtual world API layer, called OWorld (originally named DSpace1 in the Phase I proposal); 

b)      Testing of that layer within 3D virtual world representations and selection of the most competent platform to perform the visualization for Brahms.

An interface between Brahms and the DigitalSpace OWorld generic 3D abstraction API layer was first specified by Ron Van Hoof and Bruce Campbell at a meeting on November 16, 2000 (see Appendix 1: Initial Brahms and OWorld integration discussion) and completed in January 2001 (see Appendix 2: Final Phase I OWorld Integration Software Architecture). The specification resulted in two exported files generated by Brahms:

i) an XML file (actually a Document Type Definition) that defined a linkage between the model and the virtual worlds, with defined areas and object names all mapped to Brahms objects, and

ii) an event file consisting of discrete actions coming from the Brahms simulation which would need representation in the virtual world.

A complete specification of these files and their records was drawn up and used as the basis of the implementation of the initial OWorld loader. The key file for the simulation is the event file, a sample of which is shown in Appendix 3: Scenario 2 Simulation Sample Brahms Exports.
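To make the event-file mechanism concrete, the sketch below parses one record of a Brahms event export into a structured object. This is an illustrative assumption only: the actual Phase I record layout and field delimiter are given in Appendix 3, and the pipe-delimited form and field names used here are hypothetical stand-ins modeled on the "moving to" command described in Appendix 1.

```javascript
// Hypothetical parser for one line of a Brahms event export.
// Assumed format: command|startTime|fromArea|toArea|duration
// (the real Phase I layout is documented in Appendix 3).
function parseEventRecord(line) {
  const [command, startTime, fromArea, toArea, duration] = line.split("|");
  return {
    command: command.trim(),      // e.g. "moving to"
    startTime: Number(startTime), // simulation clock time
    fromArea: fromArea.trim(),    // Brahms area the object leaves
    toArea: toArea.trim(),        // Brahms area the object enters
    duration: Number(duration),   // seconds the move should take
  };
}

const record = parseEventRecord("moving to|120|Habitat_Airlock|RockSample_Area|45");
console.log(record.command, record.duration);
```

A loader built this way can read the whole event file line by line and replay the simulation at its own pace, which matches the batch-mode operation described later in this report.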
HMP/FMARS actual and virtual models
Fig 1: Actual Habitat lower deck in summer 2001 (ref: Bill Clancey) Fig 2: Habitat interior in July 2001
Fig 3: Actual FMARS seen from outside, summer 2000 Fig 4: A model of the FMARS interior by the Mars Society (actual constructed version varied from this plan).
Decision to target simulation models at HMP/FMARS
At this time a decision was made to target our work on the Haughton-Mars Project (HMP) FMARS habitat on Devon Island in the Canadian Arctic. Our original STTR proposal called for us to simulate a past event, that of the Apollo XVI ALSEP deployment. We felt that implementing the Brahms VE feasibility around an actual current project would provide more value from the Phase I investment and demonstrate usability in active work. In addition, Bill Clancey's upcoming field season at HMP would give us a great opportunity to gather real data to create a lifelike model of the FMARS facility as well as apply some of Bill's field data to a model operating within the resulting virtual FMARS. An IS-funded activity was also put into place that allowed the DigitalSpace team to create realistic models of FMARS interiors and exteriors for use in the STTR feasibility study. Views of the FMARS habitat can be seen in Figures 1-4 above.
Initial test with Java3D Environment
The team then moved on to interpret the first sample output for the Brahms model, which we termed "scenario 1". Scenario 1 had an astronaut exit a simple virtual FMARS habitat, move toward a rock sample, retrieve that sample, and return to the habitat. No animation of the models (gestures) or transition between the inside and outside of the habitat was to occur. The model was completed, and our OWorld event importer was written and used to drive a 3D scene in Sun Microsystems' Java3D environment. A simple avatar, representing an astronaut, made the moves within the 3D scene corresponding to the Brahms events, and did indeed retrieve the sample and return to the virtual habitat.

Following this early prototype, the shortcomings of the environment and its likely difficulty to perform desired future simulation were evaluated. The conclusions were that:

1.      Java3D lacked the scene graph resolution and rendering efficiency to display the kind of detailed worlds that would be required.

2.      Java3D also lacked advanced scripting permitting complex motion and actions in the scene.

3.      Java3D had no facility for physics or avatar gesture, which would require us to write these facilities from scratch.

It was therefore determined that a new 3D visualization environment would have to be selected if further feasibility tests were to be carried out by this team.
Selection of Adobe Atmosphere
On March 26, 2001, Adobe Systems Inc. launched the beta program for Adobe Atmosphere, its entry into the marketplace of multi-user 3D virtual environments. As DigitalSpace had a prior relationship with Adobe on the Atmosphere project, we recommended that Atmosphere be used in the next scenario for Brahms VE. Adobe in turn agreed to implement a "JavaScript Spigot" into its standard release, which would allow us to build a complete Brahms simulation into a web page-based environment.
FMARS as modeled virtually in Atmosphere
Fig 5: Lower deck view 1 Fig 6: Lower deck view 2, showing ladder to upper deck
Fig 7: Upper deck view 1 showing conference table Fig 8: Upper deck view 2 showing ladder to water tank area

Atmosphere permitted us to engage in realistic recreations of the FMARS environment so that the IS-related modeling could proceed in parallel with the Brahms VE work. Bill Clancey's 2001 FMARS field season provided over 500 megabytes of image data, panoramas, and architectural layout canvasses, enabling us to produce lifelike recreations of the FMARS habitat interiors. See Figures 5-8 above for the results of this modeling effort.
Definition and Implementation of Scenario 2
Scenario 2 was defined in June; it was decided to feature two astronauts on rovers traversing a virtual Lowell Canal from the FMARS HMP habitat to retrieve a sample, and then returning to the habitat. The technical development cycle then proceeded, with the first phase being the building of the Brahms model and the exporting of the action file. The complete model was specified by Charis Kaskiris (see Appendix 7: Detailed Description of HMP/FMARS Brahms Model), and a sample of its output can be seen in Appendix 3: Scenario 2 Simulation Sample Brahms Exports. It was determined that the XML DTD file was no longer needed, as Brahms could directly associate its objects and areas with actual objects and spaces in the 3D Atmosphere scene by one-to-one name mapping. Bruce Campbell constructed the "JavaScript Writer", which converted the Brahms output to JavaScript suitable to run Atmosphere through the Spigot. The operation of the JavaScript Writer is described in Appendix 4: Scenario 2 Simulation Steps in the JavaScript Writer. A sample of the output of the JavaScript Writer can be seen in Appendix 5: Scenario 2 Excerpt from JavaScript Writer Output.
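The general idea of such a writer can be sketched as follows: for each Brahms event it emits a line of JavaScript that the Atmosphere page executes through the Spigot. This is a hedged illustration only; the `moveObject` call emitted here is a hypothetical stand-in, and the real writer's output format appears in Appendix 5.

```javascript
// Illustrative sketch of a Brahms-to-JavaScript writer step.
// The emitted moveObject(...) call is a hypothetical Atmosphere Spigot
// command; see Appendix 5 for the actual Phase I output.
function writeMoveStep(event) {
  // Produce one line of JavaScript that moves the named object between
  // two scene anchors over the given duration (in seconds).
  return `moveObject("${event.object}", "${event.fromArea}", "${event.toArea}", ${event.duration});`;
}

const js = writeMoveStep({
  object: "Astronaut1",
  fromArea: "HabitatDoor",
  toArea: "CanalEdge",
  duration: 30,
});
console.log(js);
```

Concatenating one such line per Brahms event yields a script that replays the whole simulation inside the web page, which is essentially the batch-mode pipeline described in the conclusions below.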
4) Results Obtained
An initial prototype of Scenario 2 with one rover was built to prove the mapping between objects and areas (anchors in Atmosphere language) and show that a rover could be driven across the virtual canal. Additional models such as the articulated avatars, geologic sample box and other objects were created in this phase. Figures 9-13 below show an actual space suit used at HMP/FMARS and virtual models of articulated space suit figures in the virtual world for FMARS.
Figure 9: Actual spacesuit used at HMP (courtesy Mars Society)
FMARS astronaut models showing articulation
Fig 10: Astronaut model with rover and habitat in background Fig 11: Astronaut demonstrating gesture at margin of virtual Lowell Canal
Fig 12: Astronaut bending down to retrieve sample Fig 13: Astronaut demonstrating the "carrying motion" for the sample box
Putting it all together
Combining the modeling of the FMARS interiors and exteriors, and all objects including avatars, together with a simplified virtual model of the immediate environments of the FMARS habitat, we produced a simulation environment ready to run the Brahms scenario 2 model. Figures 14-17 below show the virtual habitat and areas defined for the simulation, as well as an early example of a rover traverse of the virtual Lowell Canal.
FMARS and virtual models and camera views
Fig 14: Habitat region showing "areas" defined for Brahms model (colored boxes) Fig 15: Rover starting traverse of virtual Lowell Canal
Fig 16: Rover on traverse, following areas across bottom of canal Fig 17: Rover at other side of canal with habitat in the distance
Figures 18-20 below show the combined simulation running the JavaScript described in Appendices 4 and 5 to produce the final visualization of scenario 2: two astronauts, each boarding rovers, one with a geology kit, traversing the canal where one astronaut dismounts and retrieves a sample, whereupon both astronauts return in their rovers to the habitat.

Fig 18: Complete Brahms VE Scenario 2 environment showing
JavaScript controls (below), text chat, bookmarks to other models.
Scene shows two astronauts and two rovers ready for simulated traverse.

Fig 19: Start of simulation showing astronauts boarding rovers
(with communication dishes deployed),
label changing in the world to indicate action and event scripts.

Fig 20: Stage in simulation where rovers arrive at far side of Lowell Canal,
one astronaut prepares to dismount
and retrieve sample.

5) Conclusions: Assessment of Technical Merit and Feasibility
It is our belief that our work in Phase I exceeded all expectations in terms of the production of a high quality, working scenario in a web-ready browser-based environment using industry standard components. We believe that this environment has a high degree of uniqueness and technical merit, being (as far as we know) the first 3D visualization of a complex agent-based simulation that runs on standard computers over the Internet. The technical merit derives from the use of extensible industry standard components such as JavaScript, Java, an open commercial package like Adobe Atmosphere and the customizability and universality of web-based interfaces. We believe that this combination of high performance and standard equipment will lead to affordable yet sophisticated solutions for a number of NASA and industrial applications.
Shortcomings of the first prototype of Brahms VE
We feel that Brahms VE has not yet been tested in truly complex applications. We refer to Bill Clancey's Mars Society report from HMP this summer, partially reproduced in Appendix 6: Excerpts from: Simulating "Mars on Earth" A Preliminary Report from FMARS, Phase 2. This report shows that to realize any meaningful simulation of an environment as complex as FMARS and its inhabitants and daily activities, our model must involve hundreds of objects and thousands of distinct steps over dozens of parallel time lines.

In addition, the operation of the Phase I Brahms VE prototype was strictly in "batch" mode, in which the Brahms engine would run and generate output that would then be visualized later in the virtual environment. By definition, the simulation was also run in a "unidirectional" fashion, with no feedback to Brahms from the virtual environment. It is our goal to move beyond the prototype to create a fully functional, non-batch (real-time), bi-directional simulation environment. Thus it is our challenge to move beyond trivial test cases to real-world examples. This is taken up in the next section of this report.

More background material and documents, including online access to the actual scenario 2 simulation, are listed in Appendix 8: Online documentation for further reference.
6) Potential Applications of the Project Results in Phase III
We believe that the successful implementation of scenario 2 and the resulting NASA internal interest in Phase I justify Phase II continuation and point to a good opportunity for use of Brahms VE in Phase III commercialization.
6.1) NASA Purposes
Beneficiaries of a proposed Phase II "production grade" implementation of Brahms VE could include a broad range of current NASA projects. Brahms VE projects discussed include: future HMP/MDRS science team support, a Science Back Room for remote teams interacting with Mars 2003/MER surface operations, Mobile Agents (PSA, JSC robotics) simulation, ATV studies, in-situ simulation of International Space Station (ISS) operation and station mission control support, environments for online education and public outreach, and an important test of the system with the originally planned detailed recreation of the Apollo XVI ALSEP deployment. It is our long-term vision that simple positional transmitter devices could be utilized in an environment like the ISS to directly drive a Brahms VE model, enabling mission control to have a 3D shared view of the ISS from the inside out. Mission control would therefore be able to use Brahms VE not only to simulate and plan for future work tasks but also to actually view the tasks in action. The term "simulated live" might come to reality in a future commercialized version of Brahms VE.
6.2) Commercial Purposes
It is envisioned that numerous projects in government, industry, K-12 education and college/university, museums and science centers all converge on the need for the simulation, design and operation of complex 3D environments in which people work or interact. We anticipate a broad range of adoption of Brahms VE as a commercial product in Phase III. The DigitalSpace and Brahms teams (USRA/RIACS) have discussed commercialization possibilities on numerous occasions. One could imagine that expensive and complex projects such as the construction of large public spaces could benefit tremendously from Brahms VE.
An Example: Brahms VE in the visualization and construction of a new civic center
For example, a new civic center could first be modeled in Brahms VE to study patterns of movement of people and objects, and aesthetic as well as functional aspects such as security. Next, the same model could be previewed to the public and governmental sponsors. Another Brahms VE model could simulate and help optimize work practice during construction. Lastly, the actual civic center could be wired to transmit its state and its traffic live into the Brahms VE model to test hypotheses against the actual functioning of the finished center. In this process, many firms and agencies, as well as the general public, would be users and beneficiaries of Brahms VE. Environments such as Brahms VE could revolutionize the design, testing, construction and operation of environments ranging from shopping malls to factory floors.
Other target markets for Brahms VE have also been identified:

Business Process Modeling

·         Business process modeling

·         Manpower planning modeling

·         Call center and live customer interaction modeling

·         Factory automation modeling

·         Office automation modeling

·         Network operations modeling

·         E-commerce design in virtual environments

Supply Chain Simulation

·         Strategic supply chain modeling

·         Aircraft sparing logistics modeling

·         Market/consumer patterns to inventory modeling

Transportation Modeling

·         Port simulation modeling

·         Railroad network modeling

Museums and Science Centers

·         Virtual exhibits for online and in-person experience

·         Reenactment of space missions, trainer for space camp

Health Care

·         Healthcare provisioning modeling

·         Hospital design and operation modeling

·         Surgical scheduling with visual interfaces

·         Real-time surgical training simulation

K-12, Colleges and Universities

·         Telepresence "co-laboratories"

·         Virtual field trip learning modules

·         Distance and team-based learning

·         Industrial training

US Government Modeling and Simulation Market

Civilian Agency Modeling

·         NASA: mission modeling, surface operations planning, space station decision support, mission ground operations, astronaut training, capcom training

·         FAA: aircraft/air traffic management modeling

·         DOE: energy production and distribution modeling

Defense Modeling

·         Multi-warfare analysis & research modeling

·         Aircraft readiness modeling

·         All-forces crew/battalion work practices

·         National air and space modeling
Examples and value of Specific NASA missions for Brahms VE
During Phase I a number of internal discussions with NASA were held to identify potential future uses for a Brahms VE product in support of missions. Mars 2003/MER science team support and ISS operations planning and support are thought to be two such areas that could benefit from a production-grade Brahms VE. We are taking the following two potential applications as design challenges for our Phase II product architecture. At the conclusion of Phase II work we would hope that Brahms VE would be up to the following tasks:

1.      Designing Brahms models and virtual environments in support of the upcoming Mars 2003/MER mission vehicles and surface terrain, for prototyping of MER surface operations in coordination with the MER science team. This includes modeling of both the science back room and the 3D Mars surface environment, providing virtual worlds describing a number of alternate landing site selections, and testing the environment with the MER science team by allowing them to enter the Brahms VE environment online from their home institutions to both view a simulation of MER vehicle surface operations and communicate about alternatives for vehicle movement on a simulation of hypothetical landing zones.

2.      Developing a Brahms model and VE for decision support in the operation of the International Space Station, based on recorded data from a real day aboard the ISS when complex operations were involved, including an EVA. This includes demonstrating with the model how mission planners and controllers could use it to better plan for resources and people used on-orbit and in ground-based facilities, reduce the time for people to carry out tasks (reducing wait times, enhancing situational awareness and the availability of tools or timely data), and improve the safety and cost-effectiveness of that day of operations. Use of the bidirectional communication abilities of the new VE will allow this test to show where errors might have occurred on board the station or during ground support, including object collisions, misplacement of tools, periods of interruption of communications or darkness on-orbit, shift changes or other interruptions on the ground, or availability of PIs or contractor experts for interaction. The simulation and planning environment would then be tested with JSC personnel for their reactions.


Initial Brahms and OWorld integration discussion

Date: 16 November 2000
Attendees: Bruce Damer, DigitalSpace Corporation
Stuart Gold, DigitalSpace Corporation
Bruce Campbell, University of Washington
Maarten Sierhuis, RIACS, NASA Ames Research Center
Ron van Hoof, QSS Group, Inc., NASA Ames Research Center

Brahms is a multi-agent, data-driven (forward-chaining), discrete-event simulation environment whose purpose is to support the development and simulation of models representing work practice in organizations, highlighting how work actually gets done or how work is supposed to be done.
OWorld is a collection of components supporting cross 3D cyberspace platform voyaging. OWorld provides an API that can be implemented by 3D cyberspace platform developers to allow those developers to visualize their world(s) in OWorld.
The Brahms team has the desire to visualize Brahms models and simulations in a 3D world. OWorld provides a 3D platform that would allow for such visualizations. Several proposals are underway at NASA Ames that would highly benefit from 3D visualizations of simulations. Some of these proposals are FMARS, CONFIG and PSA. The Brahms team hopes that the work performed under the STTR proposal can be re-used for all of the aforementioned proposals.
To integrate Brahms and OWorld, both require additional development to allow for the visualization of Brahms models and simulation results. The integration of Brahms and OWorld can be accomplished in four different stages, each stage adding a higher level of integration:

Stage I A Brahms model is visualized in OWorld and the Brahms simulation drives the OWorld visualization by translating the relevant events generated by the Brahms Virtual Machine into API commands to be interpreted by OWorld

Stage II Includes Stage I and in addition allows OWorld to have control over the behavior in the simulation. The control is limited to having OWorld control the duration of move activities, using OWorld to detect collisions and feed those collisions back to the simulation, and using OWorld to more accurately define what a Brahms agent can detect in the world using the agent's point of view (an agent will not be able to detect something in an area that is behind it).

Stage III Includes Stage II and in addition will allow for users to be represented as avatars in the simulation and to actively interact with the simulation possibly modifying belief states for agents and/or fact states in the world to modify the behavior in the simulation as the simulation is running.

Stage IV Includes Stage III and in addition allows for both Brahms and OWorld to be used as a real-time system serving as a monitoring tool to visualize work being performed by persons and objects with behavior (rovers) and allowing for what-if scenarios to be performed on the real world to identify possible solutions for problems detected in the real world.

In 2001 we will focus on the Stage I integration of Brahms and OWorld. In the first prototype we will use Brahms and OWorld as two separate entities. Brahms will run a simulation of a model and export both the model and its events in a format interpretable by OWorld. OWorld will interpret the Brahms exported data and visualize the simulation at its own pace allowing for the user to start, pause, resume, fast forward, rewind and stop the visualization of the simulation. In a second prototype we will allow for Brahms to drive the 3D visualization as the simulation is running,  requiring a direct interface between Brahms and OWorld.
OWorld requires several pieces of information to setup a stage before a simulation is started:  information about the scene graph, cameras and camera positions and information about how the Brahms model constructs are to be visualized in the scene. The stage information required by OWorld will be specified in an XML format.
During a simulation, events are generated by the Brahms virtual machine. These events also have to be translated into a format understandable by OWorld. For the first prototype we decided to generate one flat file containing delimited OWorld commands. These commands can be parsed by OWorld and translated into actual API calls, resulting in the visualization of the simulation.

The Stage information consists of:

·         scene graph source (envelope)

·         base terrain

·         backdrop panorama

·         sky color

·         name of the world

·         name of the simulation

·         descriptive message, the welcome message

·         multi-user enabled flag

·         entry bot model

·         entry location

·         initial camera

·         allowable cameras (view points)

·         camera position

·         initial scene object population

·         label types (images)

·         UI choices (skin choice)

·         display technology: software, hardware
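The stage information above could be captured in an XML file along the following lines. This is a hedged sketch only: the element and attribute names are invented for illustration and are not the schema actually used in Phase I.

```xml
<!-- Hypothetical stage description; element and attribute names are
     illustrative only, not the Phase I schema -->
<stage worldName="FMARS" simulationName="Scenario2" multiUser="true">
  <sceneGraph source="fmars_envelope.wrl"/>
  <terrain source="devon_island_base.wrl"/>
  <backdrop panorama="haughton_panorama.jpg" skyColor="#c08040"/>
  <welcome>Welcome to the FMARS Scenario 2 simulation</welcome>
  <entry botModel="astronaut.wrl" location="HabitatDoor"/>
  <camera initial="overview">
    <viewpoint name="overview" position="0 50 120"/>
    <viewpoint name="canal" position="200 10 40"/>
  </camera>
  <ui skin="default"/>
  <display technology="hardware"/>
</stage>
```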

The per agent/object visualization consists of:

·       name (tag)

·       model description/path (.wrl)

Assign model object properties:

·       URLs

·       actions

Assign scene object properties (nodes/waypoints):

·       name (tag)

·       location (X,Y,Z coordinate)

·       visible (true/false)

For the simulation we identified a set of commands that must be supported by OWorld in order to correctly visualize a Brahms simulation:

·      "moving to" <start time>, <from area>, <to area>, <duration>

·      "attach" <start time>, <name>, <relative coordinate> (could be an object or camera)

·      "detach" <start time>, <name>, <relative coordinate> (could be an object or camera)

·      "change label" <start time>, <end time>, <text>, <label type>, <font>, <color>

·      "change bot model" <start time>, <name>, <.wrl>

·      "setVisible" <start time>, <name>, <true/false>

Some issues that were raised and need to be addressed:

1.       How do we handle the moving of objects when walls are present in the direct path between the start and end location?

·         have sets of predefined tracks in the scene

·         have the model builder specify multiple moves

2.       How do we specify where an object is to be located at its destination location (area)?

·       Map areas to a scene object/node. Brahms will translate an area into a specific coordinate.

3.       Can OWorld automatically change the visualization of a bot when changing locations? It might not be desirable for OWorld to perform this operation, as it is domain specific. Brahms knows when area changes take place and can therefore send commands to OWorld to change the bot visualization.


Final Phase I OWorld Integration Software Architecture

Excerpt from internal Technical memorandum, June 21, 2001
By R Van Hoof, B Campbell
The Brahms team and DigitalSpace are collaborating under STTR contract #NAS2-01019. The collaboration is intended to transfer information and knowledge between the two teams. DigitalSpace is a company developing and using the state of the art in virtual worlds to host virtual communities. Brahms is an agent environment used for simulating work practice and is planned for use as an agent-based software development platform. Brahms models of work practice include a geography component to situate the work of people/agents. The collaboration between Brahms and DigitalSpace has two goals:
1.      Visualize models of work practice in a virtual world and allow scientists to view the models of work practice from different viewpoints and collaborate with one another to improve or discuss the models of work practice.

2.      Determine if Brahms could be a candidate as a bot language to populate and control bots in a virtual world.

DigitalSpace is currently developing a virtual world architecture consisting of a variety of components and services that we collectively refer to as OWorld. To accomplish the two goals, Brahms and OWorld have to be integrated through a set of interfaces.

The STTR contract is for a maximum of three years. The first year is intended for a feasibility study of integrating our environments. The second year is intended to develop the integration between the environments and show, as a proof of concept, that the integration functions appropriately and is useful. The third year is intended to commercialize the integrated results.

This document describes the software architecture for the integration of OWorld and Brahms: the architecture as it stands at this moment, after the first year of integration, and the proposed final architecture for year two. This document is to be used as the basis for future development on both the Brahms and OWorld ends.
This document describes the high-level software architecture to integrate both software environments, OWorld and Brahms. This document does not specify any implementation details regarding how the interfaces will be implemented or what suppliers will be used if third-party components are required for the integration. These decisions will be made when the design and implementation cycle is started.
Intended Audience
This document is intended for both DigitalSpace and Brahms to come to a shared understanding of what the software architecture of the integrated components will look like.
This document is intended for the development teams of DigitalSpace and Brahms as the basis for the further requirements analysis, design and implementation of the actual interfaces between the two environments.
Software Architecture Phase I
The first year of the STTR, referred to as Phase I, is intended for a feasibility study. We decided that we wanted to complete a small demo of the integration of OWorld and Brahms to show that the integration of both environments is feasible. This integration had to be kept as simple as possible due to resource constraints in Phase I.
This section gives an overview of the current software architecture used to demo the integration of Brahms and OWorld.

Figure 1: Phase I Brahms OWorld Software Architecture
In this software architecture the two environments operate independently from one another; the interface is file based. Brahms model builders use the Brahms environment to create Brahms models by writing Brahms source code. This Brahms source code is mapped to virtual reality concepts in an XML Cluster File. The Brahms source code is compiled to a Brahms XML-based model, which the Brahms virtual machine then processes and simulates. The virtual machine generates events about state changes happening during the simulation. These events are captured by the OWorld Event Service, translated to OWorld commands, and stored in an event file.
When the model builder is ready to visualize the simulation in a virtual world he/she will load OWorld and tell it to use the XML Cluster File and generated event file as the basis for the visualization. OWorld parses the XML Cluster File to set up the initial scene graph and will parse the event file to start the animation of the simulation in the virtual world. The OWorld server sends the events to the OWorld client side component, which in turn uses JavaScript to trigger the animations in the virtual world visualized by Adobe Atmosphere.
In the current implementation there is no communication back from Adobe Atmosphere to OWorld.


Scenario 2 Simulation Sample Brahms Exports

The sample simulation below was translated to the .mbd file through the AgentViewer.

wf-variable|VAR37|collectall(projects.fmarsdemo.Explorer) explorer|COLLECT-ALL
precondition|PRE134|knownval(current.readyToReturn = true)
consequence|CON58|conclude((current.communicatedReadyToGoExploring = true));
wf-variable|VAR39|collectall(projects.fmarsdemo.Explorer) explorer|COLLECT-ALL
precondition|PRE141|knownval(current.communicatedReadyToGoExploring = true)
precondition|PRE144|knownval(current teammate explorer)
consequence|CON59|conclude((current.receivedReadyToGoExploring = true), fc:0);
initial|IST79|belief(current.completedMission = false);
initial|IST80|belief(current.needRover = false);
initial|IST81|belief(current.communicatedReadyToGoExploring = false);
primitive-activity|PAC8|Starting Rover|0|0|0|false
move|MOV3|Move to Rover Parking|0|0|0|false
communicate-with-agent|COM5|Communicate to ready to return|0|0|0|false|NONE
transfer|TDF5|send(current.readyToReturn = true)
primitive-activity|PAC11|Switch off Rover|0|0|0|false
move|MOV1|Move to Geologic Feature|0|0|0|false
move|MOV2|Move to Building Extension|0|0|0|false
primitive-activity|PAC6|Getting off the Rover|0|0|0|false
Specification and Sample File for Event Export
# This file contains a list of events specific to OWorld as it would
# be generated by Brahms during a simulation run until a simulation
# run completes.
# Before I put the specification in a more official document I will
# just put the documentation in this file for your review.
# Move
#      The move of a bot is triggered by a move action.
#             move|starttime|endtime|bot|startpoint|endpoint
#      The 'starttime' and 'endtime' are specified in simulated seconds
#      and implicitly specify the duration of the move.
#      The 'bot' specifies the name of the bot that is to be moved.
#      The 'startpoint' and 'endpoint' specify the names of the Points
#      where the move starts and where the move is to end.
# Changing the label of a bot:
#   The label of a bot can be changed by using the setLabel action.
#             setLabel|starttime|endtime|labeltype|displaytext|font|fontsize|colorR|colorG|colorB
#      The 'starttime' and 'endtime' specify during what period the label
#      is to be shown (in seconds). When the end time passes, the default label is to be
#      displayed again.
#      The 'labeltype' specifies the name of a Label as specified in
#      the XML file.
#      The 'displaytext' is the text to be displayed in the label.
#      The 'font' attribute specifies the name of the font to use to display the text
#      The 'fontsize' specifies the size of the font to display the text in
#      The 'colorR', 'colorG', 'colorB' together specify the font color of the text
# Attaching a bot to another bot to represent the carrying or picking up of an object:
#      A bot can be moved together with another bot without explicit move
#      actions by attaching it to the carrying bot using the attach action.
#             attach|time|containerbot|attachedbot|relativeX|relativeY|relativeZ
#      The 'time' specifies at what time (in seconds) the attachment is to take place
#      The 'containerbot' specifies the name of the bot to which to attach
#      the 'attachedbot' to.
#      The 'attachedbot' specifies the name of the bot that is to be
#      attached to the 'containerbot'.
#      The 'relativeX', 'relativeY', 'relativeZ' attributes specify the
#      X, Y, Z coordinate at which the attached object is to be shown relative
#      to the 'containerbot'.
# Detaching a bot from another bot to represent the dropping of an object:
#      A bot that is attached to another bot can be detached using the detach
#      action. The detached bot will no longer implicitly move with the
#      container.
#             detach|time|containerbot|attachedbot|relativeX|relativeY|relativeZ
#      The 'time' specifies at what time (in seconds) the detachment is to take place
#      The 'containerbot' specifies the name of the bot from which to detach
#      the 'attachedbot'.
#      The 'attachedbot' specifies the name of the bot that is to be
#      detached from the 'containerbot'.
#      The 'relativeX', 'relativeY', 'relativeZ' attributes specify the
#      X, Y, Z coordinate at which the attached object is to be shown relative
#      to the 'containerbot' after it is detached.
# Making a bot visible or invisible:
#      A bot can be set to be visible or invisible by using the setVisible
#      action. This action can be useful in combination with the attach/detach
#      action if a bot is to be hidden when picked up or be made visible when
#      dropped.
#             setVisible|time|bot|visible
#      The 'time' specifies at what time (in seconds) the visibility of a bot is
#      to be changed.
#      The 'bot' specifies the name of the bot for which the visibility is to be
#      changed.
#      The 'visible' attribute specifies one of the values 'true' or 'false'
#      to indicate whether the bot is to be made visible or not.
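A minimal reader for the event lines specified above could be sketched as follows. The function name parseEventLine is illustrative, only a subset of the commands (move, setVisible, attach, detach) is handled here, and unknown commands are passed through untouched.

```javascript
// Illustrative sketch: parse one pipe-delimited event line into a
// command object, following the field order in the specification above.
function parseEventLine(line) {
    var fields = line.split("|");
    var type = fields[0];
    switch (type) {
        case "move":
            // move|starttime|endtime|bot|startpoint|endpoint
            return { type: type, start: Number(fields[1]), end: Number(fields[2]),
                     bot: fields[3], from: fields[4], to: fields[5] };
        case "setVisible":
            // setVisible|time|bot|visible
            return { type: type, time: Number(fields[1]), bot: fields[2],
                     visible: fields[3] === "true" };
        case "attach":
        case "detach":
            // attach|time|containerbot|attachedbot|relativeX|relativeY|relativeZ
            return { type: type, time: Number(fields[1]), container: fields[2],
                     attached: fields[3],
                     offset: [Number(fields[4]), Number(fields[5]), Number(fields[6])] };
        default:
            return { type: type, raw: fields };
    }
}

var cmd = parseEventLine("move|0|30|Bill|LadderPlatform|RoverParkingSpotA");
// cmd.bot is "Bill"; the move runs from t=0s to t=30s
```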
Sample of the FMARS file exported for OWorld and converted to the JavaScript simulation.


Scenario 2 Simulation Steps in the JavaScript Writer
By Bruce Campbell

The approach we are using for the Scenario 2 output assumes pre-processing by a JavaScript writing process (making it optimal for write-once, run-often use in a Web browser).
First, the JavaScript writer takes the command visualization output from BRAHMS and sets up an array of commands to be run by the simulation engine (see lines 9-78 for Scenario 2 purposes).
Then, the JavaScript sets up arrays of locations by reading anchors from the world model file (note that anchors can also be created by other processes using the pre-processor). These locations are mapped to 'areas' in the BRAHMS simulation (lines 80-135). Next, the JavaScript sets up the label objects for all labels that need to be shown during the sim run (lines 137-193). These labels are cumbersome right now, since each label is a separate Atmosphere subworld. (It would be preferable to just write text to a 'text_output' anchor in the future, as maintaining an Atmosphere world for each label is cumbersome.) Next, the JavaScript sets up the models for the simulation actors (lines 195-216), reading each from an Atmosphere subworld. Next, the JavaScript initializes all the global variables to be used in the simulation engine (lines 218-286).
Next, the JavaScript creates the interactive controls for the Atmosphere Control panel (lines 257-336).
Lines 337 and 338 fill each simulation thread with a command in order from the commands array.
The rest of the file includes all the subroutines that constitute the simulation engine:
1. Function moveTo (line 343) moves an object from one location to another following a linear spline over a time period
2. Function moveToOffset (line 357) moves an object from one location to another immediately with offset (used for attaching one object to another).
3. Function getCommand (line 396) gets the next command from the command array and parses it into the appropriate global variables for that command type. The valid command types so far (scenarios 1 and 2) are those listed in the event export specification above: move, setLabel, attach, detach and setVisible.
4. Function timestep (line 558) processes the commands by timesplicing among the command threads, using the global variables set by the system clock and the command parser. All visualization actions are controlled by the timestep function.
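The linear interpolation behind the moveTo function can be sketched as follows, outside of Atmosphere. The names lerp and positionAt are illustrative, not the actual Writer code: at fraction f of the move's duration, the position is a linear blend of the start and end points.

```javascript
// Illustrative sketch of a linear-spline move: position at time `now`
// is a linear interpolation between start and end points.
function lerp(a, b, f) {
    return a + (b - a) * f;
}

function positionAt(startPos, endPos, startTime, duration, now) {
    var f = (now - startTime) / duration;
    if (f < 0) f = 0;          // move has not started yet
    if (f > 1) f = 1;          // move has completed
    return [
        lerp(startPos[0], endPos[0], f),
        lerp(startPos[1], endPos[1], f),
        lerp(startPos[2], endPos[2], f)
    ];
}

// Halfway through a 30-second move from [0,0,0] to [10,0,20]:
var p = positionAt([0, 0, 0], [10, 0, 20], 0, 30, 15);
// p is [5, 0, 10]
```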
Items left for completion in the Writer
1. Add gravity to the rover movements in a way that does not adversely affect the seated astronaut.
2. Generalize the attach command so any object can be attached to any other object (right now attaches are hard-coded).
3. Play with the command threads to see the optimal number of threads to enact (probably more than the current two).


Scenario 2 Excerpt from JavaScript Writer Output

chat.print("Running Scenario2 fmars.js...\n");
function makeArray(len) {
    for (var i=0; i<len; i++) this[i]=null;
    this.length = len;
}
line = new makeArray(num_commands);

max_anchors = 32;
//find all the anchors
anchor_x = new Array(8);
anchor_y = new Array(8);
anchor_z = new Array(8);
command = new makeArray(num_threads);
text = new makeArray(num_threads);
billRot = 0.0;
ladder = world.find(".../LadderPlatform");
anchor_x[0]= ladder.position[0];
anchor_y[0]= ladder.position[1]+2.5;
anchor_z[0]= ladder.position[2];
anchor_x[1]= ladder.position[0]-5.0;
anchor_y[1]= ladder.position[1]+0.0;
anchor_z[1]= ladder.position[2];
roverm = world.find(".../RoverParkingSpotA");

start = new Array(num_threads);
end = new Array(num_threads);
duration = new Array(num_threads);

//rotate bill
button5 = Button("Rotate 15 degrees").add();
button5.onClick = function() {
    billRot = billRot + .3925;
    bill.orientation = Rotation('Y',billRot);
};
//create a slider for gravity
gravity = -10;
gravitySlider = Slider("Gravity").add();
gravitySlider.range = [-32,0];
gravitySlider.value = gravity;
gravitySlider.integersOnly = true;
gravitySlider.onChange = function(val) {
    gravity = val;
};
//The command parser
function getCommand(idx) {  
    if (counter>num_commands-1) {
    } else {
        current_line = line[counter];
        index = current_line.indexOf("|");
        command[idx] = current_line.substring(0,index);
        if (command[idx]=="move") {
            current_line = current_line.substring(index+1);
            index = current_line.indexOf("|");
            start[idx] = current_line.substring(0,index);
            current_line = current_line.substring(index+1);
            index = current_line.indexOf("|");
            end[idx] = current_line.substring(0,index);
…            current_line = current_line.substring(index+1);
            index = current_line.indexOf("|");
            y_offset[idx] = current_line.substring(0,index);
            z_offset[idx] = current_line.substring(index+1);
            chat.print("Encountered detach command");
        } else if (command[idx]=="setVisible") {
            current_line = current_line.substring(index+1);
            index = current_line.indexOf("|");
            start[idx] = current_line.substring(0,index);
            current_line = current_line.substring(index+1);
            index = current_line.indexOf("|");
            where = current_line.substring(0,index);
            which[idx] = current_line.substring(index+1);
            chat.print("Encountered setVisible " + which[idx] + " command");
        } else {
            chat.print("Encountered unknown command of " + command[idx]);
timestep=function(now, del) {
    if (t>0) {
        for(idx=0;idx<num_threads;idx++) {
            if (command[idx]=="move") {
                if (billmoving==1) {
                    distanceFraction = ((hour - start[idx]) / duration[idx]);
                    if (distanceFraction >= 1.0) {
                        distanceFraction = 1.0;
                        if (what[idx]=="projects.fmarsdemo.Bill") {
if (target[idx]=="parking") {
chat.print("Bill is parking");
billS.orientation = Rotation('Y',0.0);
bill.orientation = Rotation('Y',0.0);
                            } else {
moveTo(billS, anchor_x[anchorPrevious[idx]], anchor_y[anchorPrevious[idx]], anchor_z[anchorPrevious[idx]], anchor_x[anchorCurrent[idx]], anchor_y[anchorCurrent[idx]], anchor_z[anchorCurrent[idx]], 1.0);
moveTo(bill, anchor_x[anchorPrevious[idx]], anchor_y[anchorPrevious[idx]], anchor_z[anchorPrevious[idx]], anchor_x[anchorCurrent[idx]], anchor_y[anchorCurrent[idx]], anchor_z[anchorCurrent[idx]], 1.0);
                        } else if (what[idx]=="projects.fmarsdemo.RoverA") {
moveTo(billS, anchor_x[anchorPrevious[idx]],


Excerpts from: Simulating "Mars on Earth" A Preliminary Report from FMARS, Phase 2
Available in full on the web at:
STATUS REPORT, from Mars Society
Bill Clancey
Institute for Human and Machine Cognition, UWF, Pensacola
NASA/Ames Research Center, Mountain View, CA
July 22, 2001

By now, everyone who's heard of the Haughton-Mars Project knows that we are here on Devon Island to learn how people will live and work on Mars. But how do we learn about Mars operations from what happens in the Arctic? We must document our experience: the traverses, life in the hab, instrument deployment, communications, and so on. Then we must analyze and formally model what happens. In short, while most scientists are studying the crater, other scientists must be studying the expedition itself. That's what I do. I study field science, both as it naturally occurs at Haughton (unconstrained by a "Mars sim") and as a constrained experiment using the Flashline Mars Arctic Research Station.
Over the past week, I lived and worked in the hab as part of the Phase 2 crew of six. Besides participating in all activities, I took many photographs and time-lapse video. The result of my work will be a computer simulation of how we lived and worked in the hab. It won't be a model of particular people or even my own phase per se, but a pastiche that demonstrates (as a proof of concept) that we have appropriate tools for simulating the layout of the hab and the daily routines followed by the group and individual scientists. Activities, how people spend their time, are the focus of my observations for building such a simulation model.
The FMARS Simulation
The FMARS simulation will be constructed using a tool called Brahms, which we are developing at NASA/Ames Research Center. The components of a Brahms model are fairly easy to understand. Why do we want to build such a model? The number of applications may be surprising.
Our immediate interest is to develop Brahms well enough so the various applications can be explored in further research projects. For example, through NASA funding we will integrate the FMARS simulation with an existing simulation of an air recycling system and an artificial intelligence monitoring and control system. The FMARS simulation will place loads on the recycling system, providing a contextual model of hab operations for testing the AI software. Furthermore, the (simulated) crew will interact with the AI software, for example, getting information about resource capacity (e.g., oxygen reserves) needed for planning daily work. In later work, we would like to develop computer programs that use a Brahms model to understand what the crew is doing, so the programs can provide appropriate support.
How do we build the FMARS model? There are two primary methods: Participant observation (learning by being a member of the crew) and photographic documentation (including time lapse). During my week in the hab, I took regular notes about who did what, where, when, and why. Each day I added to this, refining with details, and finally developing hypotheses about why activities unfold in the manner I observed. In short, I need a theory of "what happens next." What determines the next behavior of individuals and the group?
To organize my observations, I created a table in a document, with columns for the name of the activity, the location where it occurred, the time, who participated, and comments. For regular activities, such as EVAs and meetings, I used the table to record when the activity began and ended. By the fourth or fifth day I was able to sort the table more or less chronologically for a typical day and segment it into broader categories (e.g., breakfast, briefing/planning, EVA). Towards the end of the week, I began to refine some activities into subcategories (e.g., reasons for working at a laptop). Finally, after I left the hab, I realized the significance of activities and modes of behaving that I had not thought to write down earlier (e.g., listening to music while working at the computer).
My other notes were kept in an outline organized by topics that emerged during my stay. Also, at various times I wrote down where everyone was in the hab and what they were doing. This provides a snapshot of life in the hab. In retrospect, I should have done this on a regular basis (e.g., once an hour), for it would be a good way of verifying the simulation model. I had intended to follow someone every day, to note their behaviors in some detail, but as a participant in the hab, where group activities dominated (mostly organized around EVAs), this proved impractical. Finally, after I began to understand why activities occurred when they did, I realized I needed statistical information about events (e.g., how often and when we received radio calls from base camp). It requires more than a week to realize all that one might study (especially if psychosocial factors are included). I also believe that several weeks would be necessary to realize what categories are relevant; I am uncertain whether a crew member would ever have sufficient time to make and record all these observations.
What are the results of my observations? I now have a table with about fifty activities, grouped according to broad "times of the day." Here is an initial description of these broad periods during a day in the life of FMARS 2001 Phase 2:
This outline is a broad abstraction, an average of seven days, not a schedule we followed. Nevertheless, the patterns can be striking. For example, on three sequential days the EVA crew stepped into the airlock at 1105, 1106, and 1108. No procedure required that we do this, it was just an emergent product of our intentions, the constraints of getting into suits and fixing radios, and our other habits (such as when we awoke, how long it takes to eat, and time to arrange personal gear). Absolute times will vary each day, but relative times, such as when a debriefing occurs after an EVA, are more regular (in this case, about 30 minutes). This chaining of group activities is a key part of the order of the day (which might be explained as part of individual, psychological processes).
What I have said so far should make clear why it's not reasonable to expect a "human factors" report from the hab every day, providing research results. Unlike the biologists and geologists, I do not collect isolated samples in plastic bags. My daily observations are mostly too mundane to mention (as the pattern itself hardly seems surprising). Also, it takes four to five days until apparent habits are established, and then a few more days before details can be filled in (e.g., what are people doing for so many hours at their computers?).
Section removed for brevity..
Layout of the Hab
An important part of the Brahms simulation of FMARS is a virtual reality depiction of the facility. The data gathered includes extensive photographs of all objects and areas, close-up photographs for color and texture rendering, and a scale drawing of the hab (Figure 4). This drawing shows the layout at a particular time, with the precise arrangement of laptops and chairs. The workstation area is the most obvious area where design requires improvement. The built-in table is not deep enough (about 24 inches) and is too cramped for six laptops plus a large server display (which hogs the most attractive area below the portal and blocks the view). Strikingly, one or two people used the wardroom table for working, echoing the conventional rows of workstations one often finds in mission control centers. The floor area is obviously spacious; another table might be placed at right angles to the first (between VP and the ladder area).
Activity Drivers: What determines what people do next?
The most detailed aspect of the Brahms simulation is a description of each activity as a set of conditional steps or alternative methods. That is, the conditions under which an activity is performed must be specified. Given the table of activities (outlined above), we see that group activities are the main driver of behaviors in the hab, fitting the chronology of the day: Breakfast, Briefing, EVA, Debriefing, Dinner, Movie. That is, during this phase in the hab, individual behavior is constrained most strongly by coordinated group interactions. Furthermore, the daily EVA is the central, pivotal activity of the day, with meetings, preparations, and even meals occurring around it. This implies that the backbone of the simulation will be behaviors individuals inherit (in the Brahms representation) from the "Hab Crew" group. Each behavior in Brahms is represented as a workframe, which is a situation-action rule. In general, the situation (conditional part) of the key workframes for Hab Crew activities will specify either the time of day (e.g., the morning briefing) or previously completed activities (e.g., the post-EVA briefing).
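The workframe idea, a situation-action rule that fires when an agent's beliefs match its condition, can be illustrated with the following sketch. This is not Brahms syntax, just a minimal JavaScript analogy; all names (fireWorkframes, the belief keys, the workframe names) are assumptions for illustration.

```javascript
// Illustrative sketch only (not actual Brahms syntax): a workframe as
// a situation-action rule evaluated against an agent's beliefs.
function fireWorkframes(beliefs, workframes) {
    var fired = [];
    workframes.forEach(function (wf) {
        // The "situation": every condition key must match a current belief
        var matches = Object.keys(wf.when).every(function (k) {
            return beliefs[k] === wf.when[k];
        });
        if (matches) {
            fired.push(wf.name);
            wf.do(beliefs);           // the "action": the activity may change beliefs
        }
    });
    return fired;
}

var beliefs = { timeOfDay: "morning", evaCompleted: false };
var workframes = [
    { name: "MorningBriefing",
      when: { timeOfDay: "morning" },
      do: function (b) { b.briefingDone = true; } },
    { name: "PostEvaDebrief",
      when: { evaCompleted: true },
      do: function (b) { b.debriefDone = true; } }
];
// fireWorkframes(beliefs, workframes) fires only "MorningBriefing" here
```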
Interruptions are a secondary driver of behavior, including radio calls (from base camp) or satellite phone calls (usually pertaining to our communications systems), systems emergencies (toilet, comms), hab maintenance (refilling the water reservoir, refilling the generators), and media interviews (conducted in the lower deck). Frequency information for the radio and phone calls might be determined from the time lapse. I did not have the time (or presence of mind) to systematically gather information about the frequency and timing of these activities.
Individual activities, behaviors that are individually motivated and performed alone, fill the remainder of the day.
end of excerpt
Detailed Description of HMP/FMARS Brahms Model
by Charis Kaskiris
The OWorlds-Brahms demonstration model is a simple model of an extravehicular exploration on Devon Island. The model involves two fictitious astronauts, Alex and Bill, who are a planetary surface explorer and a planetary surface geologist respectively. Once they decide on the mission, they retrieve the appropriate tools, then traverse the Lowell Canal and reach the Fortress using two All Terrain Vehicles (or rovers). At the Fortress the geologist, Bill, gets off the rover and explores the area for rocks. Once he finds and retrieves the rock, he returns to the rover, places the rock in the sample box on the rover, and both astronauts return to the habitat. A screenshot of the model is included in Appendix 7.A, Figure 1.
Geography Model
The geography model used in this demonstration is a close replica of the FMARS geography. The distinctive Devon Island areas involved are the Haughton Crater, the Lowell Canal, the Fortress Area, the Von Braun Planitia, and the Haynes Ridge. Additional areas were defined in collaboration with DigitalSpace, including the rover parking spaces, the rock location and the observation areas where the rovers end up. There are also designated initial locations for all objects. The geography model descriptions are included in Appendix 7.B.
Agent Model
There are two astronauts involved in the mission; both are planetary surface explorers. Astronaut Alex is a planetary observer and Bill is a planetary geologist. Alex gives the preliminary instructions to Bill, who is responsible for getting the SampleBox and retrieving the Rock. The two agents communicate at the beginning of the mission and coordinate with each other as they move asynchronously.
Object Model
There are four objects in this model: two All Terrain Vehicles (ATVs), a SampleBox for collecting rocks, and a Rock. The two astronauts use the two ATVs, RoverA and RoverB, to traverse the Devon geography as specified in the timing model. Originally, RoverA is located in RoverParkingSpotA and RoverB in RoverParkingSpotB. The SampleBox is originally located in the ExternalToolsArea right outside the Habitat and is used by the geologist in his mission. The Rock is originally located in the FortressPerimeterEjecta area close to the Fortress area. The Rock is the object that needs to be transported from its original location back to the Habitat area.
Activity Model
There is currently a set of activities in which the two agents are involved. Both agents are capable of (a) getting on and off the rovers, (b) starting and turning off the rovers, (c) moving to appropriate geographic locations, (d) driving, (e) waiting, and (f) communication activities (described in the next section).
The observer is also capable of performing an observation activity. The geologist has a more elaborate set of activities to perform, which includes (a) picking up and putting down the sample box, (b) putting the box on and taking it off the rover, (c) picking up and putting down the rock, and (d) performing a geologic survey.
These activities are all performed in appropriate workframes with the aid of particular thoughtframes, which are in general triggered after communication activities happen.
Communication Model
The agents communicate at the beginning of the simulation to exchange information about the mission. They then communicate their readiness to move to the location so that they move together. When they are done with the exploration, they communicate to each other that they are ready to return, so that they return together. The last two communication activities are required for coordination purposes.
Timing Model
The model begins at the footsteps of the ladder platform in front of the Habitat. Alex and Bill are standing on the Ladder Platform and the simulation is triggered by Alex explaining to Bill what they have to do. Once Alex finishes, he moves to his rover parking space and prepares his rover. He mounts it, turns it on and then communicates to Bill that he is ready to go. Meanwhile, Bill has moved to the area where the sample box is. He retrieves the box and carries it to the parking space where his rover is. He then places the box on the rover, mounts it, turns it on, and finally communicates to Alex that he is ready to go.
Once both have communicated that they are ready to go, they both move to the Fortress area where Alex reaches an area where he can observe Bill’s actions. Bill reaches the Fortress, gets off the rover and moves to a close location to perform a geologic survey. Once he detects the rock of interest, he picks it up and returns to his rover. He places the rock in the sample box on the rover, and mounts it again. Then he communicates to Alex his readiness to return to base. Alex acknowledges and both return to the Habitat. This concludes the mission.
Appendix 7.A Figure 1: Brahms Screenshot
Appendix 7.B: Modeling Constructs
  1. Geography Locations:
    1. DevonIslandGeography
    2. ExternalLadder
    3. ExternalToolArea
    4. Fortress
    5. FortressPerimeter
    6. FortressPerimeterEjecta
    7. FortressPerimeterRoverObserving
    8. FortressPerimeterRoverParking
    9. Habitat
    10. HaughtonCrater
    11. HaynesRidge
    12. LadderPlatform
    13. LowellCanal
    14. LowellCanalBottom
    15. LowellCanalEdgeNorth
    16. LowellCanalEdgeSouth
    17. LowellCanalFaceNorth
    18. LowellCanalFaceSouth
    19. RoverParkingAreaA
    20. RoverParkingAreaB
    21. VonBrownPlanitia
  2. Agent Model
    1. HabitatAstronaut
    2. Explorer
    3. ExplorerGeologist
      1. Bill
    4. ExplorerObserver
      1. Alex
  3. Class Model
    1. GeologicArtifact
      1. Rock
    2. SpaceSurfaceVehicle
      1. RoverA
    3. RoverUtility
      1. RoverB
    4. Tool
      1. SampleBox


Online documentation for further reference

1) DigitalSpace company page:

and the Oworld and Meet3D specification and examples at:

2) The full STTR phase I proposal may be viewed on the web at:

3) The Brahms specification and engine is described at:

4) The OWorld project and architectural specification can be found at:

5) The FMARS habitat design can be seen at:

6) The NASA HMP project homepage

7) The Mars Society homepage:

8) DigitalSpace's MeetingPage web collaboration environment:

9) Simulating "Mars on Earth" A Preliminary Report from FMARS Phase 2

10) Web location of the Brahms VE Scenario Simulation in Adobe Atmosphere

Location to download and install Adobe Atmosphere

end of report.