Project Proposals by Digital Space

BrahmsVE: Proof of Concept for Human/Agent Intelligent Augmentation
02-2-H2.02-8957 ARC

Part 1 - TABLE OF CONTENTS

1. Table of Contents
2. Identification and Significance of Innovation and Results of the Phase I Proposal
    2.1. Identifying the Need
    2.2. The Innovation
    2.3. Results of Phase I Project
3. Technical Objectives and Work Plan
    3.1. Technical Objectives
    3.2. Work Plan
        3.2.1. Technical Approach
        3.2.2. Task Descriptions
        3.2.3. Meeting the Technical Objectives
        3.2.4. Task Labor Categories and Schedules
4. Company Information
5. Facilities and Equipment
6. Key Personnel and Bibliography of Directly Related Work
    6.1. Management and Technical Staff Members
    6.2. Bibliography of Directly Related Work
    6.3. NASA and Non-NASA Advisors to Project
7. Subcontracts and Consultants
8. Potential Applications
    8.1. Potential NASA Applications
    8.2. Potential Non-NASA Commercial Applications
9. Phase III Efforts, Commercialization and Business Planning
    9.1. Market Feasibility and Competition
    9.2. Strategic Relevance to the Offeror
    9.3. Key Management, Technical Personnel and Organizational Structure
    9.4. Production and Operations
    9.5. Financial Planning
    9.6. Intellectual Property
10. Capital Commitments Supporting Phase II and Phase III
11. Related R/R&D
Proposal Budget
 
Part 2 Identification and Significance of the Innovation and Results of the Phase I Proposal

2.1 Identifying the Need

Space systems involving people working with machines are becoming increasingly complex to design, test, train for and support during flight. This complexity affects all aspects of NASA's programs and impacts mission viability in the critical areas of safety, cost, timeliness and effectiveness. We believe that collaborative software systems involving model-based agent architectures situated in realistic 3D visualization environments are vital new tools that will allow NASA, other government agencies and commercial enterprises to manage increasingly complex human-machine environments.

Two generations ago, NASA used specially equipped spacecraft doubles as simulators, allowing crew and mission control to train and assisting in problem solving during mission operations. The ground-based doubles of the Apollo 13 Command and Lunar Modules were instrumental in allowing mission control to test power, life support and other system survival strategies and bring the crew safely back to Earth.

Today, NASA is building craft that are too large and complex for operational physical doubles to be maintained on Earth. In addition, efficient and safe day-to-day operation of longer duration missions requires a deeper understanding of design for human work practice, psychology and teamwork. Add autonomous agents to the mix, and there is a real risk that the complexity of operational environments will overwhelm crew and mission control, leading to critical errors.

On August 16, 2002, the following news item appeared in the wires of the Associated Press [see bibliographic reference 1 in section 6.2]:

Whitson and the space station's veteran commander, Valery Korzun, got off to a late start installing the Russian cosmic-debris shields. They evidently forgot to open an oxygen valve in their spacesuits while getting dressed, and the air lock had to be repressurized so they could open their suits and fix the problem.

By the time the spacewalkers finally opened the hatch, 250 miles (400 kilometers) above the South Atlantic, almost two hours had been wasted…

Because of the late start, Russian flight controllers cut the spacewalk short at 4 1/2 hours. The retrieval of a collection tray for measuring jet residue was put off, as was wiping the area for signs of contamination.

The above story illustrates a day-in-the-life event aboard the most complex of these vehicles to date, the International Space Station (ISS). In this case, the mistake was easily resolved without danger to the mission. However, a similar error aboard a human mission in transit to Mars might prove fatal, especially if a failed micro-meteor shield needed to be replaced in an emergency. For several years we have been working with teams at NASA and RIACS to provide a virtual environment software platform that will prevent this kind of error in the future.

Virtual environments assume a critical new role in satisfying the need to train for and to manage mission complexity

Virtual environments created with 3D VR technology on computer workstations or projected onto immersive systems such as CAVEs and head-mounted displays have been used to good effect in mission training and planning for over two decades. Projects in this tradition include early Ames tele-operations training with the Canadarm, the Virtual Shuttle, the modeling of the Mars Pathfinder surface environment, and the virtual training environment used by the crew of STS-61, the 1993 Hubble Space Telescope repair mission [2, 3, 4].

Project managers Loftin and Bowen [2] report that in this project approximately 100 members of the NASA HST flight team received over 200 hours of training using a virtual environment (VE). They note that in addition to replicating the physical structure of the HST and the interrelationships of many of its components, the VE also modeled the most critical constraints associated with all major maintenance and repair procedures. Figures 1 and 2 illustrate scenes from this project.

Figure 1: STS-61, the 1993 Hubble Space Telescope (HST) repair mission team using the VR training simulator: Astronaut Nicollier looks at a computer display of the Shuttle's robot arm movements as astronaut Akers looks on (Image courtesy NASA archives)

Figure 2: Computer generated scene depicting the HST capture and EVA repair mission for mission planning (Image courtesy NASA archives)

Loftin and Bowen conclude by stating that, for the first time, a VE was integrated with a limited-capability Intelligent Computer-Aided Training (ICAT) system, and that it performed to excellent effect. They then issued the following challenge to future VE systems designers:

The results of this project serve to define the future role of VEs in training within NASA and to provide evidence that VEs can successfully support training in the performance of complex procedural tasks.

In the ensuing years, many new VE environments for training and design/test have been produced, including Transom Jack [6], Steve from USC [5], environments for submersibles and navy operations at the MOVES Institute at the Naval Postgraduate School [7,8,9] and significant projects within NASA including APEX from Ames [10,11] and VR interfaces for remote vehicle control [12].

We believe that no simulation and training environment thus far has represented more than a fraction of true underlying mission complexity, especially when accounting for "humans in the loop". For example, an ordinary EVA aboard the ISS not only involves hundreds of individual variables, including checklist items and equipment used, but also draws in all personnel back on Earth, including mission controllers, principal investigators and engineering contractors, who play a role in that EVA activity while working from geographically separated areas over often unreliable and distorting channels of electronic communications. In addition, traditional VEs, such as the one employed in the Hubble repair mission, used costly hardware and time-consuming modeling processes.

Over the past decade, the increasing power of consumer personal computers, the ubiquity of the Internet and the standardization of net-based software components such as languages (Java), web browsers and integrating protocols (XML) have permitted the creation of powerful new VE systems based entirely on commercial off-the-shelf (COTS) components. Thus, a revolution is about to take place in collaborative virtual environments for the management of complex human-machine environments, and we are working to provide one of the major new tools of this revolution.

2.2 The Innovation: BrahmsVE

BrahmsVE is the result of three years of intense work among the Brahms teams at RIACS, NASA, and DigitalSpace beginning with an STTR in 2000 [13], continuing with work to model EVA and day to day operations aboard the FMARS/Haughton-Mars Project analogue habitats [14,15] and leading to a specification for the OWorld and Brahms interfaces [16,17].

The existing back-end architecture: Brahms

A virtual environment by itself is of little use without a powerful back-end architecture that can represent the complexity of human-machine systems.

For over a decade, teams at NYNEX, at the Institute for Research on Learning and now at Agent iSolutions, working with NASA Ames and RIACS, have been developing an intelligent multi-agent environment used for modeling, simulating and analyzing work practice. The environment is called Brahms [18]. Brahms is a data-driven (forward chaining) discrete event environment usable for simulation purposes as well as for agent-based software solutions requiring the use of intelligent agents. Brahms and its applications are described in detail in bibliographic references [19,20,21,22,23]. From the Agent iSolutions Web site [18]:

Brahms allows us to model the work activities of each type of role, and each individual (or artifact) playing that role in an organization. The focus of a Brahms model is on the context of work, meaning, how does the work really happen. One of the essential requirements for Brahms is that we can model collaboration and coordination between people working on one task, as well as that people can work on more than one task at a time, and are interrupted and able to resume their activities where they left off.

Prior to the partnership with DigitalSpace, Brahms models could only be viewed in execution using a timeline bar chart-style interface. It was determined that Brahms could become a much more effective tool if it included an interface that allowed realistic reconstruction of, and interaction with, 3D scenes representing the real-world people and systems being modeled. This new product platform is now under development; its current implementation is depicted in figure 3 below and described next.

Figure 3: BrahmsVE architecture, Fall 2002

A prime directive for BrahmsVE was that it should enable highly realistic recreations of environments involving people interacting with systems. Figures 4 and 5 below illustrate some of the human figure, gesture representation and scenario reenactment of which the development version of BrahmsVE is capable. These are depictions of BrahmsVE simulations of crew activities aboard the FMARS/Haughton-Mars habitat during the 2001 field season.


Figure 4: Human figure recreation and gesture in BrahmsVE from EVA suit-up


Figure 5: Planning meeting simulation from the 2002 BrahmsVE project to model a day in the life of the FMARS analogue Mars habitat

Components of the innovation

We believe that BrahmsVE is a uniquely powerful new tool that will offer human-centered computing significant opportunities for advancement. BrahmsVE has the following properties:

  • The Brahms Java-based PersonalAgent with compiler, virtual machine, builder, IDE and AgentViewer, all of which are currently utilized in a number of NASA projects including Mobile Agents.

  • The Brahms Virtual Environment runs on industry-standard consumer-grade computing platforms over ordinary Internet connections, with no special hardware required.

  • BrahmsVE employs industry-standard languages and protocols such as Java, JavaScript, SOAP and XML.

  • BrahmsVE's virtual environment uses industry-leading technology in Adobe Atmosphere, such as Havok physics, Viewpoint models and inverse kinematic animations, all running in an open framework utilizing an open source community server.

For a full specification of the BrahmsVE environment as well as detailed technical documentation on the system, please see the final report filed for this SBIR and additional materials on the Web sites referenced in [14,24].
Additional BrahmsVE projects completed in 2003 show promise for wider applications


Figure 6: MER rover modeled for JPL concept presentation


Figure 7: FMARS habitat on terrain generated by a whole-planet modeling exercise for Geoff Briggs

In parallel with the work completed for the Phase I SBIR, BrahmsVE was used for two non-funded exploratory projects at NASA Ames. For one project, DigitalSpace designed a model of the Mars Exploration Rover, which interacts with a virtual Mars surface utilizing the built-in physics engine. This test was successful and was presented as a concept piece to the Athena science team at JPL in January 2003 (see Figure 6). We believe this shows that we will be able to apply BrahmsVE to mobile agent applications. The second project was to illustrate the Mars habitats on a virtual reconstruction of the entire Mars surface derived from Mars Global Surveyor and other data, a project commissioned by Geoff Briggs at Ames (see Figure 7).
2.3 Results of Phase I Project

Introduction – the solicitation’s challenge
The solicitation topic for this SBIR called for proposals to produce the following tools:

Visualization tools combining "virtual reality" projection with actual objects in the environment, conveying information about object identity, part relationships, and assembly or operational procedures. "Cognitive prostheses" that qualitatively change the capabilities of human perception, pattern analysis, scientific domain modeling, reasoning, and collaborative activity. Such tools could incorporate any of a variety of modeling techniques such as knowledge-based systems and neural networks, and fit tool operations to ongoing human physical interaction, judgment, and collaborative activity.

This statement is a broad-brush challenge to the entire field of visualization, human-centered computing and cognitive science. While the scope of this challenge is large, we set out to achieve its central objectives by producing a proof of concept that is a good first step along the road to this total vision.

Objectives of Phase I

In Phase I, we constructed a simple yet believable cognitive prosthesis in a virtual reality environment that implemented a canonical scenario in the study of human-centered computing: a semi-autonomous mobile agent interacting with a human subject. We referred to this mobile agent as the Personal Bot Assistant, or PBA.


Figure 8: Canonical example of a semi-autonomous agent interacting with a human astronaut

The objective of this project, as stated in the Phase I proposal, was therefore to produce, in a web-based 3D virtual environment, the canonical example of human/machine augmentation: a semi-autonomous agent assistant interacting with a human inhabitant of a space station or surface habitat. As Figure 8 above, taken from the original proposal, illustrates, the agent should be able to execute a minimum set of activities interacting both with its environment (the geometry and physical properties of a virtual space station or habitat) and with an astronaut agent. The astronaut agent in this example could be driven by a simulation engine or directly by a user at a workstation as an "avatar". The virtual astronaut would in turn have a limited repertoire of commands, which can be directed to the PBA.
Our actual implementation of Phase I deliverables

Guidance from the QSS, NASA and Brahms teams directed us to produce a simulated work-practice utilization of the Personal Satellite Assistant (PSA), under development at NASA Ames Research Center, aboard a virtual analogue of the International Space Station (ISS). Design issues surrounding the PSA operating within the ISS environment provide an almost ideal challenge for the development of tools to aid in human/agent intelligent augmentation. Our colleagues who guided this project informed us that a single simulation could achieve the following:

  1. Testing design concepts for the PSA in a virtual space prior to next-generation hardware implementation, in this case the addition of a laser pointing device to the PSA.
  2. Modeling the interior of the ISS, including obstructions, enabling agent and astronaut movement, tool misplacement and comparison with the physical test fixture at Ames.
  3. Evolving this kind of simulation environment into a training simulator for astronauts who will be testing and then utilizing the PSA aboard the ISS over the next several years, supporting both pre-flight and in-flight operations.
  4. In the longer term, equipping a flight PSA with coordinate and state transmitters that report its current position and activity to a 3D simulator, which NASA JSC Space Station Mission Control could use to gain a PSA-eye view of the station and to determine the whereabouts and current activity of the PSA.
Empirical results from the Phase I implementation

We were therefore well informed as to the benefits of pursuing this simulation within an upgraded BrahmsVE architecture (the main beneficiary of this Phase I project). Over a six-week period we executed eleven versions of the simulation, each with more complexity and more interface and reporting affordances added. Each of these iterations is available for live execution within the 3D environment or for viewing via QuickTime movies at the project Web site [24]. As a result of these iterations, the following empirical results emerged, answering the key questions posed to us by our collaborators:

  1. Is a laser pointer a useful device for a PSA or similar semi-autonomous robotic agent? The simulation suggests it would be, and that the laser pointer would be used not only to permit mission controllers to point to a spot (the currently suggested teamwork application) but also as an indicator to astronauts of what the PSA is doing (where its gaze currently rests, its direction of travel). Of course, the laser pointer would be used selectively, as it has the ability to temporarily blind an astronaut given direct contact of the beam with the retina. It was suggested that a (non-coherent) spotlight be substituted for simple indicator/tracking functions.
  2. What other affordances does a mobile robot with human eye-gaze level awareness (microgravity operation) need? The simulation suggested that status indicator lights to tell astronauts what state the PSA is currently in (searching, waiting for cleared obstruction, powering up) would be needed to quickly and reliably inform human occupants of the station of the PSA’s intention or current state.
  3. Can a PSA-like agent successfully avoid a collision with an astronaut while transiting the station, even in close spaces such as interface modules? Empirical tests in the virtual station seem to suggest that the ray-casting ability of the virtual PSA is able to determine when its path is blocked. Of course, only calibration with actual machine vision systems used on the actual PSA would be able to validate such a collision avoidance scheme.
  4. Can such a mobile agent effectively locate a tool or move itself to a known location aboard the station? The use of the laser pointer suggests that plans to apply bar-codes to locales and objects such as tools might greatly aid the PSA's ability to locate itself in space and find objects.
  5. Can the PSA successfully navigate and course-correct given attitude adjustments of the station, or passing astronauts and blower fans creating an airflow that the PSA would have to steer through? Only a single blower fan was implemented in our BrahmsVE implementation, imparting a simple force vector on the passing PSA (a minimal sketch of this effect follows this list). More complete modeling of the ISS interior and orbital dynamics would have to be carried out to gain understanding of this issue.
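As a minimal sketch of the force-vector effect referred to in item 5, the following illustrative JavaScript shows how a fan's constant force might be integrated into the PSA's velocity each simulation step. All of the names here (psa, fanForce, applyForce) are hypothetical and are not part of the actual OWorld code.

// Illustrative only: a blower fan imparting a simple force vector on the
// PSA each simulation step (names are hypothetical, not OWorld API).
var psa = { mass: 5.0, velocity: { x: 0.05, y: 0.0, z: 0.0 } };
var fanForce = { x: 0.0, y: 0.0, z: 0.2 }; // newtons, constant while in the airflow

function applyForce(agent, force, dt) {
  // a = F / m, integrated over the time step dt (seconds)
  agent.velocity.x += (force.x / agent.mass) * dt;
  agent.velocity.y += (force.y / agent.mass) * dt;
  agent.velocity.z += (force.z / agent.mass) * dt;
}

applyForce(psa, fanForce, 0.1); // the PSA drifts off its path and must course-correct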

The above heuristic results suggest directions for Phase II in which statistical reporting and the Havok physics engine will be added to gain a more quantitative set of results from hypotheses that will be tested with the next generation models.

Detailed description of the implementation

We will next discuss what was constructed to enable the ISS/PSA BrahmsVE simulation. We will begin with a tour, aided by visual screen captures of the simulation in action. We will then describe the extensions to the BrahmsVE architecture constructed during Phase I.

3D virtual world models constructed

Utilizing imagery and schematics, the DigitalSpace team modeled the interior of the current configuration of the International Space Station (ISS). Wherever possible, high-resolution imagery was used to enhance realism of the interiors, and we produced 3D models of significant equipment such as monitors, scientific equipment and hatches. We modeled two astronaut agents with head and body gestures representative of real astronauts aboard the ISS (floating forward, footrest positions, head tracking). Lastly we created a model of the Personal Satellite Assistant (PSA) complete with instruments and a laser pointing device which is used as an indicator of status, direction of travel, and ray-casting identification of a target object. The 3D visualization environments that resulted are pictured in figures 9-19 below.

Figure 9: BrahmsVE web-browser based components

Figure 9 above illustrates the new interface to the BrahmsVE environment produced in this Phase I project. The 3D window represents the viewpoint of a third participant within the virtual ISS model. In this case we, the viewer, are positioned behind the PSA agent as it establishes a line-of-sight view with the astronaut agent, which has just repositioned across the module. The virtual PSA has presented us with the following iconic views: the top icon representing that the astronaut has observed the PSA, and the bottom icon representing that the PSA is in a wait state and able to respond to a command.

Above and below the 3D view are two text output windows. The top window reports all Brahms action commands and the resultant actions and reports from the agents in the virtual environment. The text output buffer below the 3D window reports internal agent state (such as the PSA remaining at station-keeping). To the right of the screen are two panels for controlling and reporting on the simulation, described below.

Figure 10: BrahmsVE Control Panel

PSA Search – Commands the PSA to search for an object randomly placed aboard the station (wrench, drill or flashlight)

Status Bar – Shows the additional status panel (rightmost controls)

Status Page, Syntax File, Help File, History – Simulation documentation

Camera Views – PSA point of view and three other fixed cameras aboard the virtual ISS

Advanced – Move allows the operator to place the three tools at a specific location within the virtual station

GUI Mode – This toggle changes the method used for displaying the status icons for the Agents. In GUI Mode, the icons are always the same size, and always visible.

Mouse Look – This toggle changes how the player/actor is controlled. When active, moving the mouse changes the direction the player is looking. When the mouse is near the edges of the window, the player continues to turn in that direction.

Astro Detectable – Reports the astronaut-detection state: "Idle" means the astronaut is not currently being detected by the PSA; "LOS:PSA" means the PSA has lost the signal for detecting the astronaut

PSA Detectable – Whether the PSA is being searched for by the astronaut agent

Power – Current power level of the PSA; determines when the PSA must seek a power station

Laser/Beam – Determines whether the laser pointer will be shown and whether it will be seen as a beam or a spot

Fan – Turns a random fan on or off in the station, providing an unexpected course-changing vector for the PSA while in transit

The above layout of the control panel for the ISS/PSA BrahmsVE simulation enables the operator to set up and then execute different scenarios, described next.

Figure 11: Exterior view of US module of the simulated ISS with PSA agent and astronaut agent (Overview 1 camera)

Figure 12: Exterior view of virtual station showing additional attached modules (Overview 2 camera)

Figure 13: PSA laser pointer in “spot” mode illuminating a spot on the station interior

Figure 14: PSA laser pointer spot seen projected from PSA point of view

The scenarios implemented in this BrahmsVE application centered on the ability of the operator (or astronaut agent) to dispatch the virtual PSA to locate a missing tool (wrench, drill, flashlight) placed randomly aboard the station, and then to return to the dispatching astronaut agent and report the tool found (figures 11-19).

Figure 15: PSA acting on search command, beginning search by exiting US module

Figure 16: PSA transiting interface between modules

Figure 17: PSA executing search for wrench tool within a module

Figure 18: PSA halted to avoid collision with astronaut transiting interface coupler between modules

Later we added a second astronaut traveling randomly (figure 18) and a blower fan, forcing the PSA to avoid collisions and to make course corrections in response to force vectors.

Figure 19: Report of successful location of drill tool back to astronaut agent

Status icons

Throughout the simulation, the status of the agents is reported in the text buffers and indicated by convenient “status icons” projected above the active agents (astronaut and PSA). The visual language for these status icons is described in tables 1 and 2 below.

Table 1: Task Icons (Top Icon)

The top icon indicates the Action the Agent is currently performing.

PSA & Astronauts:
  • The Agent is looking at, and tracking, the Target.
  • The Agent is moving to a new location.

PSA Only:
  • The PSA is currently scanning its surroundings for its Target.
  • Power Lead: The PSA is currently recharging.
  • The PSA is reporting the tool's location.

Table 2: Target Icons (Bottom Icon)

The Target icon represents the object the Agent is currently focused on. This is either the Watching Settable or, in the case of the PSA, the SearchFor Settable.

Target icons indicate:
  • Drill, Flashlight or Wrench tools
  • The PSA is receiving commands (via laptop)
  • The Agent sees the PSA or an Astronaut
  • The PSA is affected by a blower fan
  • The PSA has identified and is utilizing a power station

Underlying technology enabling the above interaction

In the following section we will describe the underlying architecture that enables the above BrahmsVE implementation of the multi-agent simulation of the ISS with PSA, covering new additions to the architecture made possible by this Phase I support.

Use of industry standard platforms

A primary goal of the BrahmsVE is to utilize standard web components that run on ordinary consumer personal computers with no special hardware. Achieving this kind of ubiquity is a longtime dream of industrial design and simulation and has been made possible by the advent of the multi-gigahertz processor, high-speed net connectivity and low-cost, high performance 3D acceleration. BrahmsVE is therefore able to be used from virtually any net-connected PC in the world and has the added benefit that there is a multi-user option allowing distributed researchers to chat and be visible as “avatars” within a shared simulation. Thus, BrahmsVE is evolving into a collaborative virtual environment for distributed team use, a powerful and effective tool for 21st Century work practice engineering.

Behind the scenes: Adobe Atmosphere, the OWorld engine and Brahms interfaces

Figures 20 and 21 depict the current state of the BrahmsVE architecture as implemented in this Phase I project, described next.

Figure 20: Current flowchart diagram describing interfaces between Brahms, the Web server and the entire BrahmsVE environment

Figure 21: Current flowchart diagram describing the OWorld engine operation with Atmosphere

Adobe Atmosphere

Adobe has partnered with DigitalSpace since 2001 to provide its Atmosphere 3D web plugin to the BrahmsVE effort. Adobe has contributed significantly to the platform (at no cost to DigitalSpace or NASA) based on our requests. Atmosphere is a stable, integral part of the platform and provides the 3D scenegraph, rendering engine (in software and hardware), scripting language, physics engine (from Havok, Inc.) and an object renderer and animator to represent objects and agents and their gestures (from Viewpoint Inc.).

The OWorld engine

The OWorld engine is the heart of the BrahmsVE platform and consists of the following major components (figures 20-21) which execute entirely within the framework of the Internet Explorer web browser and HTTP protocols:

1. Interface layers that permit two-way communications with Brahms and Web services, implemented utilizing XML/HTTP. In this Phase I build, the full two-way Brahms connection is simulated by JavaScript. The Brahms team will be building special networking and command processing facilities into the Brahms server to permit the full two-way connection of our engine with Brahms.

2. A command parser that processes (in JavaScript format) all Brahms commands (interface commands defining settables, detectables and reports).

3. A series of multi-threaded message queues, associated with each agent, that interact with the entire OWorld framework and the settable/detectable actions.

4. A path manager that reports on valid locations and traverses permitted by the world geometry.

5. World geometry and Viewpoint objects, which communicate valid path information and execute gestures required by actions.

Extensions to the OWorld engine

For this project the OWorld engine was extensively upgraded to provide multi-threaded support of multiple astronaut and robotic agents operating in parallel. A first-generation path-finding module was added, as well as the two-way XML dialogue layer for future network communications with Brahms (figures 20, 21). The new Brahms commands are described next.

Brahms commands and new commands implemented

Activity/Interface Commands

All Brahms commands are made up of multiple parts, separated by "pipe" characters ( | ). The first part of any command is its "type". There are two recognized types in the OWorld library, "activity" and "interface". The type determines how many other parts the command will have.
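As a hedged illustration of this dispatch, the sketch below splits a command line on the pipe character and routes it by type. The handler names (queueActivity, applyInterface, reportError) are ours for illustration and are not the actual OWorld function names.

// Illustrative sketch of pipe-delimited command dispatch (handler names invented).
function queueActivity(subType, start, end, agent, action, rest) { /* enqueue on the agent's message queue */ }
function applyInterface(subType, time, owner, id, value) { /* set a Settable or arm a Detectable */ }
function reportError(msg) { /* write to the status text buffer */ }

function parseCommand(line) {
  var parts = line.split("|");
  switch (parts[0]) {
    case "activity":
      // parts: type, sub-type, start time, end time, agent, action, props, [from, to]
      return queueActivity(parts[1], parseFloat(parts[2]), parseFloat(parts[3]),
                           parts[4], parts[5], parts.slice(6));
    case "interface":
      // parts: type, sub-type, time, owner, id, value
      return applyInterface(parts[1], parseFloat(parts[2]), parts[3], parts[4], parts[5]);
    default:
      reportError("Unknown command type: " + parts[0]);
  }
}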

Activity

The Activity type command will always have at least six parts. The second part (after the Activity type) is its "sub-type". There are four recognized values: "move", "get", "put", and "primitive". The most commonly used sub-types are move and primitive.

The third and fourth parts of the Activity command are the start and end time of the command. The command will not begin until the simulation time is equal to or greater than the start time (and all preceding commands have been completed). The execution of the "Action" (explained later) is accelerated or slowed to ensure the command is completed by the end time.

There are some special values and uses of the start and end times. A common "trick" is to specify zero (0) as the start time. Because the system time will always be greater than or equal to zero, the command will be executed as soon as all preceding commands are completed. This is useful when the length of time taken by a previous command is unknown.

Another "trick" is to specify negative one (-1) as the end time. This will cause the command to take as long as it needs to perform the "Action" without accelerating or slowing the command.

The fifth part of the Activity command line is the Agent. This is the full name of the Agent that will be performing this command.

The sixth part is the name of the "Action" the Agent will execute to fulfill this command. A common combination is the "move" sub-type and the "Walk" Action. In the case of a "primitive" sub-type, this is usually an animation or gesture.

While every Activity command must have at least six standard parts to be valid, each of the sub-types requires different additional information. For each of the sub-types defined in the OWorld library, there is an additional part that specifies the "props" of the command. This is the name of the Prop (such as a coffee mug or tool) that the Agent is currently using.

The "move" sub-type requires two additional parameters, from and to. These are the names of the pre-defined Areas that the Agent will be moved between. The OWorld system will produce a path for this movement when the command is parsed (not when it is executed).

The Action performed by the "move" sub-type command is responsible for things such as obstruction detection and path failure. The technique currently used in OWorld is that when the "Walk" Action detects a problem, it will define an Area where the Agent currently is, stop moving, and produce an Interface type Alert for Brahms, informing it of the failure. Currently the response to such an Alert is to produce a new "move" sub-type command, from the failure location to the previous destination. The "primitive" sub-type does not require any additional parameters (other than the props part); a primitive is an Action that is performed "as is". The "get" and "put" sub-types require two additional parameters, from and to, affecting Props.
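To make the part layout concrete, here are two illustrative Activity command lines. The first follows the pattern of the logged example shown later in this section (a "move" with an empty props part, followed by the from and to Areas, with the start-time-zero and end-time-minus-one tricks applied); the second is a hypothetical "primitive" carrying only the props part, with an invented gesture name ("Nod") and an invented Area name ("Node1").

activity|move|0|-1|projects.issvre.PSA|Walk||projects.issvre.USLab.center|projects.issvre.Node1.center
activity|primitive|0|-1|projects.issvre.Astro|Nod|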

Interface

All "interface" type commands require six parameters:

  1. The first is the type "interface". The second is the sub-type, either "setDetectable" or "setSettable".
  2. The third is time. Interface commands occur almost instantly, lasting for only one cycle (or frame). Therefore, they only require a start time. Like the "activity" type of command, they are executed when the simulation time is equal to or greater than the start time. In the same way, the "trick" of using zero (0) can be used to make the command occur as soon as all previous commands are complete.
  3. The Interface command has an extra trick, which is using minus one (-1) as its time value. This is a specially checked condition, and will cause the command to be placed at the beginning of the queue, to be executed as soon as the current command (either Interface or Activity) has completed.
  4. The fourth part of the Interface command is Owner. This may be either an Agent or an Interactive (an Interactive is an improvement on the previous concept of Props. They are essentially Agents with a limited set of available Actions).
  5. The fifth part is the ID. This is the name of the Settable or Detectable being set.
  6. The sixth portion is the Value the Settable or Detectable is being set to. In the case of a Settable, this may be any value (as long as it does not contain the pipe character). In the case of a Detectable, this may be "true", "active", or "false". "True" or "active" will enable the Detectable, causing it to report using an Interface Alert (explained later) when its condition becomes valid. A setting of "false" will disable it.

Interface Alerts

Interface Alerts are Brahms Commands that are sent from the VE to the Brahms server. They use the sub-type "detectable"; their time is when they were triggered. Owner and ID are the same as the other Interface commands. The Value portion depends on the Detectable that is sending the Alert. This is usually greater detail about the Alert, to be used by the Brahms server in deciding what will occur as a result.

A common example is the Idle Alert. While this is not actually a proper Detectable, as the Alert is sent by the Action itself whenever an Agent starts or stops its "idle" Action, its Value portion signifies whether the Agent is starting to idle (Value is "true") or stopping (Value is "false").
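The following illustrative lines show the two Interface sub-types and an outgoing Alert. The first line mirrors a logged setSettable command from the sample later in this section; the setDetectable line and the Idle Alert (including its time value) are hypothetical, but follow the six-part format and the direction symbols described above.

interface|setSettable|0|projects.issvre.PSA|Watching|projects.issvre.Astro
interface|setDetectable|-1|projects.issvre.PSA|Search|true
>interface|detectable|42.5|projects.issvre.PSA|Idle|true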

Sample Brahms commands explained

The Brahms commands sent into and out of the environment are displayed in the text box at the top of the BrahmsVE window (see figure 9). A > or < symbol indicates whether the command is being sent from the environment or from Brahms. Several of the commands have been extended to provide the capabilities required for this simulation. The most significant is the addition of the "interface" commands. These relate to "Settables" and "Detectables" (commands that set up the simulation and request reports or trigger actions), and are used in both directions. For example, an interface command will set the value of a Settable, or activate a Detectable, while a similar command in the other direction will notify Brahms of a Detectable being triggered.

Below is an example series of Brahms commands:

<interface|setSettable|0|projects.issvre.PSA|Watching|projects.issvre.flashlight1
activity|move|118.944999694824|-1|projects.issvre.PSA|Walk||projects.issvre.CentrifugeAccomodation.center|projects.issvre.USLab.center
interface|setSettable|0|projects.issvre.PSA|Watching|projects.issvre.Astro
>interface|detectable|108.944999694824|projects.issvre.PSA|Search|true

In this sequence, the virtual PSA agent has reported the triggering of its Detectable "Search". This resulted in the setting of the PSA Settable "Watching", making it watch the flashlight when Idle. It was also told that at time 118... it should move from the Centrifuge Accommodation to the US Lab. The -1 informs it to take as long as it needs (rather than taking a prescribed amount of time, which would result in the PSA moving very quickly). Complete documentation of the Brahms action command set and related materials is available at the project website referenced in [24].

Implementation of a virtual environment inventory and geometry reporting system

The Brahms team requested the implementation of a BrahmsVE export command that would report the geometry and inventory of objects within the VE. A version of this was built and will be completed as part of Phase II. This will allow the Brahms modeler to "query" the VE and obtain the starting set of definitions to use in modeling exercises, eliminating the "double work" of defining the geometry and objects first in Brahms and then again in the VE.
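As a sketch of how such an export might look, the following JavaScript walks an assumed world description and emits XML. The object model (world.areas, world.props) and the element names are illustrative assumptions, not the implemented format.

// Illustrative sketch of a geometry/inventory export (object model invented).
function exportInventory(world) {
  var xml = "<world>";
  for (var i = 0; i < world.areas.length; i++) {
    var a = world.areas[i];
    xml += "<area name=\"" + a.name + "\" x=\"" + a.x +
           "\" y=\"" + a.y + "\" z=\"" + a.z + "\"/>";
  }
  for (var j = 0; j < world.props.length; j++) {
    xml += "<prop name=\"" + world.props[j].name +
           "\" area=\"" + world.props[j].area + "\"/>";
  }
  return xml + "</world>"; // handed to the Brahms modeler as starting definitions
}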

Current limitations of the BrahmsVE platform

The major limitation of the current BrahmsVE platform is the lack of a return interface to communicate the results of Detectables and Interface Alerts to the Brahms environment running on its server. The implementation of new code in Brahms is being scheduled to handle the syntax of the messages we are now generating. Phase II will see the full synchronous interface implemented and tested in several sample applications.

Limitations with line-of-sight testing

The primary technical limitation relates to the line-of-sight/ray intersection functions. These are a group of functions designed for detecting points of collision along a line (for example, finding where a laser beam strikes an object, in order to place a reflection object/sprite). A limitation of these functions is related to the way Atmosphere handles loaded models. When performing a ray intersection test on the world geometry, one is required to use the ENTIRE geometry, including any models loaded by the script (be they Atmosphere objects or Viewpoint). As a result, one cannot choose to "see through" objects that are dynamically placed, and one cannot tell the difference between a model and a wall. An example of this arises when the PSA attempts to look for a tool. By performing a ray intersection (line-of-sight) test in the direction of the tool, it can determine whether it can see the tool or whether something is in the way. However, it cannot tell what the obstructing object is, so an object that is supposed to be transparent (e.g. the PSA's laser beam) cannot be treated as such.

A related problem is that the PSA and astronauts have to "look" from a point outside their own body models; otherwise the agent will see its own body and think that something is in the way. This can lead to problems; for example, if the agent is near a wall, the agent can often see through it (as the offset to avoid seeing itself places the point it is looking from on the other side). This is more of a hazard in path finding. The path-finding routines perform these line-of-sight tests between pre-defined nodes (waypoints/Areas) to determine which it is able to travel to, in an expanding pattern until it finds its destination. However, if something is blocking one of these waypoints (an Astronaut moving temporarily through that space, or even the model of the Agent that is attempting to find the path), these line-of-sight tests will fail, and the agent will think there is no way to reach the destination point. In the current implementation this is overcome by continuously retrying if a path isn't found (hoping the astronaut will move). Often this can last for an extended period of time (especially since the line-of-sight tests are not 100% accurate when working with animated Viewpoints), causing the simulation to appear stalled.
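The sketch below illustrates the offset technique and its hazard. Here rayIntersect() is a stand-in for Atmosphere's ray test (which, as noted, runs against the entire world geometry, including the caller's own body model), and the 0.3-meter offset is an invented value.

// Illustrative sketch of the offset line-of-sight test (names and values invented).
// rayIntersect(origin, target) stands in for Atmosphere's ray test; assume it
// returns the distance to the first hit along the ray, or null for no hit.
function distance(a, b) {
  var dx = b.x - a.x, dy = b.y - a.y, dz = b.z - a.z;
  return Math.sqrt(dx * dx + dy * dy + dz * dz);
}

function canSee(agent, targetPos) {
  // Offset the ray origin past the agent's own body model; too large an
  // offset can cross a nearby wall, producing the false "see through" cases.
  var offset = 0.3; // meters, illustrative
  var origin = {
    x: agent.x + agent.dirX * offset,
    y: agent.y + agent.dirY * offset,
    z: agent.z + agent.dirZ * offset
  };
  var hitDist = rayIntersect(origin, targetPos);
  // Clear line of sight if nothing is hit before the target itself
  return hitDist === null || hitDist >= distance(origin, targetPos) - 0.01;
}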

Other implementation issues

The current implementation does not use the Havok physics engine. This is due to a number of reasons, including inflexibility in the physics modeling (the center of gravity is always the center of geometry), the physics modeling not being aware of changes in a Viewpoint's geometry due to animation (at last test), manikin collisions (an astronaut's entire body will bounce from the wall, rather than just his arm bending), and speed (particularly during collisions).

All of these limitations and implementation issues will be addressed in Phase II, as described in Part 3 below.

Part 3 Technical Objectives and Work Plan

3.1 Technical Objectives

After three years of work on the BrahmsVE platform and the successful completion of the goals set forth in this Phase I SBIR, we have arrived at a key point in the project. We now have a direct roadmap to completing and rolling out BrahmsVE as a production platform for extensive use within NASA, for other government agencies and for commercial users. This section will outline the technical objectives that will be met in Phase II to bring BrahmsVE from its development phase to a 1.0 commercial release.


Figure 22: Future flowchart showing the planned real-time connection to Brahms and the use of PHP both to synchronize virtual environments and to archive changes in a SQL database

Figure 23: Future flowchart showing planned upgrades to the OWorld engine and the employment of Havok physics, procedural animation and a more intelligent Path Manager

Roadmap to release

Figures 22 and 23 above illustrate the final architecture for the BrahmsVE 1.0 release. This is the completion of the vision of the architecture as specified in figures 20 and 21. In addition to addressing the limitations identified in Part 2 (line-of-sight issues and the need for the physics engine), we will endeavor to deliver the following new components in Phase II:

a) Database (for dynamic modular code loading) – shown in figure 22

Currently BrahmsVE is based on a static JavaScript framework; however, the design has always anticipated a back-end database. OWorld is a modular design, so that parts can be added and removed "on the fly". For example, for a given simulation an agent might only be required to perform a small number of actions, so we would employ a database archive to deliver only the code for these actions rather than loading every available action.

An example would be having multiple astronauts in the ISS. While both astronauts would use the same base code (as both are Agents) and each would use some of the same code (moving, watching other Agents), each astronaut would only be provided the code relevant to what it is required to do. So an astronaut that is never required to "open can with fork" will not know HOW to "open can with fork". However, to give that astronaut the ability, the code would simply be loaded dynamically from the database/archive.
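A minimal sketch of this on-demand loading, assuming a fetchActionScript() helper that retrieves JavaScript source text from the PHP/SQL archive; both names, and the action name used below, are hypothetical.

// Illustrative sketch of dynamic action loading (helper and names invented).
function ensureAction(agent, actionName) {
  if (agent.actions[actionName]) return;                // already loaded
  var source = fetchActionScript(actionName);           // e.g. HTTP GET to the code archive
  agent.actions[actionName] = eval("(" + source + ")"); // install the action function
}

// Only an astronaut that actually needs "OpenCanWithFork" ever receives its code:
ensureAction(astronaut2, "OpenCanWithFork");
astronaut2.actions["OpenCanWithFork"](astronaut2);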

b) Database (for synchronization) – shown in figure 22

In BrahmsVE there is currently no synchronization between different instances of the environment, resulting in each visitor to the environment seeing something different on his or her workstation. Using a combination of PHP and a database (likely SQL), it will be possible to provide synchronization between all instances of the environment. This capability will be delivered in Phase II.
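A hedged sketch of what this synchronization loop might look like on the client side, assuming a "sync.php" endpoint and illustrative helper functions (toXML, applyRemoteChanges); the actual Phase II design may differ.

// Illustrative sketch of instance synchronization (endpoint and helpers invented).
function synchronize(localChanges, lastSeenRevision) {
  var req = new ActiveXObject("Microsoft.XMLHTTP"); // IE-era XML/HTTP request object
  // Post our state changes; ask only for changes committed after the last revision seen
  req.open("POST", "sync.php?since=" + lastSeenRevision, false); // synchronous for brevity
  req.send(toXML(localChanges));        // toXML() serializes agent/prop state deltas
  applyRemoteChanges(req.responseXML);  // replay other instances' changes locally
  return req.getResponseHeader("X-Revision"); // new high-water mark (assumed header)
}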

c) Normalization with Brahms Commands – shown in figure 22

Phase II will provide support for the final normalization of Brahms command types with the current "Brahms action" commands we are now processing. At the same time, we will develop a new class of Brahms commands that the Agent iSolutions team will implement. These commands permit interrupting events, such as collisions, to be communicated to Brahms, and allow Brahms to respond (by switching action frames).

d) Brahms real-time synchronous interface – shown in figure 22

The key remaining large technical component of the BrahmsVE architecture is the implementation of a full synchronous interface between Brahms and the OWorld engine. Currently we are parsing Brahms actions generated by real Brahms models, but simulating Brahms interaction once the model begins to execute. Brahms team members are currently planning to implement the facilities to catch and generate real-time messages to the VE. We will complete a two-way dialogue communications system utilizing XML/HTTP and PHP/SQL in Phase II to accomplish this real-time synchronous interface.
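As a sketch of the planned two-way dialogue, assuming a hypothetical "brahms.php" relay endpoint: an Interface Alert in the pipe format is posted to the Brahms side, and any pending Brahms commands come back in the response and are fed to the command parser sketched earlier.

// Illustrative sketch of the two-way XML/HTTP dialogue (endpoint name invented).
function sendAlert(owner, id, value, time) {
  var alert = "interface|detectable|" + time + "|" + owner + "|" + id + "|" + value;
  var req = new ActiveXObject("Microsoft.XMLHTTP");
  req.open("POST", "brahms.php", false); // synchronous for brevity
  req.send("<alert>" + alert + "</alert>");
  // Assume Brahms replies with zero or more pipe-format commands, one per line
  var commands = req.responseText.split("\n");
  for (var i = 0; i < commands.length; i++) {
    if (commands[i].length > 0) parseCommand(commands[i]); // parser sketched above
  }
}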

e) Building VE into Brahms developer environment (not shown in the figures)

The Brahms team has requested that the VE be visible inside the Brahms authoring environment, for a seamless means to test new Brahms models with the VE. We have determined that the Java Native Interface will be able to embed the Internet Explorer instance within the Brahms developer interface and this will be completed under support for Phase II.

f) Implementing voice synthesis and voice recognition to instruct agents
(will be provided as an interface in figure 22)

John Dowding at Ames/RIACS has proposed using CommandTalk [25] and other environments built for military and civilian use to give BrahmsVE a voice recognition capability, such that researchers can give voice commands to the agents within the virtual environment. As an additional project under Phase II support, we will add text-to-speech voice synthesis so that agents within the VE can be instructed to report their activities by voice. Voice synthesis was used to good effect as a training tool within the Steve environment [5].

g) Procedural (skeletal) animations – shown in figure 23

Procedural animation using a skeletal (skinning) model will drastically improve the quality and detail of model animations in BrahmsVE. It will allow for "rag-dolling", creating more fluid motion animation, as well as providing collision detection with a greater degree of granularity. This means that rather than detecting a collision with an object and reacting as an entire static object, portions of the model will be deformed (along bone structures), for example bending at the elbow when an arm brushes a wall. Animations will also become more dynamic. For example, rather than having a "put down" animation that shows the model placing an object always in the same location, the animation would be dynamically changed to show the model placing the object where it will actually go. It would also permit animations to allow for collisions. For example, if while drifting through the ISS an astronaut were to brush a wall, rather than either (a) his arm going through the wall (non-solid object) or (b) his entire body bouncing off the wall (solid object), he would be able to bend his arm out of the way. Ken Perlin of the New York University Media Research Laboratory has offered to provide procedural animation technology [26] for this project.

Maarten Sierhuis of the Brahms team has requested that Phase II deliver a closely coupled integration of an agent-based activity subsumption architecture for modeling human behavior with a low-level gesture generation capability, yielding a high-level behavioral VE bot-language for easier development of the next generation of VE applications. This request is based on NASA's need for uploadable just-in-time mission training software and the need to include models of human behavior and work practice for "life-like" interaction between astronaut avatars and bots in a virtual world training application. Procedural animation will be able to deliver on this need.
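The following is a deliberately simplified sketch of the per-joint collision response described above; the bone record and geometry are invented, and a production implementation would rely on the Perlin procedural animation technology and the Havok integration rather than this arithmetic.

// Illustrative sketch: bend only the colliding limb's joint, rather than
// bouncing the whole body as a rigid object (all values are invented).
var arm = {
  joint: "elbow",
  angle: 0.0,     // radians of current bend
  maxBend: 2.0,   // joint limit, radians
  forearm: 0.3    // forearm length in meters, illustrative
};

function resolveLimbCollision(bone, penetrationDepth) {
  // Rotate the joint just enough for the limb to clear the obstacle, up to its limit
  var needed = Math.atan2(penetrationDepth, bone.forearm);
  bone.angle = Math.min(bone.angle + needed, bone.maxBend);
}

resolveLimbCollision(arm, 0.05); // arm brushes a wall 5 cm deep; only the elbow bends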

h) Improved Path Finding – shown in figure 23

Bill Clancey of Ames has requested the building of autonomous path-finding capability into the VE agents. By combining increased geometric detail with advanced scripting, the path-finding algorithms will be able to use all of 3D space, rather than the current lines-between-defined-nodes method. This will greatly increase the quality of the path finding, including curved paths and movement around obstructions. Clancey's request is that agents in BrahmsVE be able to "learn" new paths and report them back to the Brahms model. This will be an essential facility for the use of BrahmsVE in Mobile Agents and other semi-autonomous vehicle projects.
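For contrast with the planned free-space navigation, here is a sketch of the current lines-between-defined-nodes style: a breadth-first search over waypoint Areas whose edges are validated by the line-of-sight test. The lineOfSightClear() call and the neighbors structure are stand-in names, not the OWorld API.

// Illustrative sketch of waypoint path finding (names invented).
// lineOfSightClear(a, b) stands in for the ray test between two waypoints.
function findPath(start, goal) {
  var queue = [[start]];
  var visited = {};
  visited[start.name] = true;
  while (queue.length > 0) {
    var path = queue.shift();
    var node = path[path.length - 1];
    if (node === goal) return path; // ordered list of Areas to traverse
    for (var i = 0; i < node.neighbors.length; i++) {
      var next = node.neighbors[i];
      // An edge is usable only if its line-of-sight test is unobstructed;
      // a transient blockage here causes the retry behavior noted in Part 2
      if (!visited[next.name] && lineOfSightClear(node, next)) {
        visited[next.name] = true;
        queue.push(path.concat([next]));
      }
    }
  }
  return null; // no route found; the caller retries, hoping the obstruction moves
}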

i) Implementation with Havok physics engine – shown in figure 23

With many of the above enhancements we will come to the point where we can activate the Havok physics engine built into the Atmosphere component of BrahmsVE. Adobe, provider of Atmosphere, has built in callback functions to permit us to use the physics engine for fine-grained operations and this will be fully implemented in Phase II.

j) BrahmsVE Test and Reporting Subsystem – part of database in figure 22

This new module will enable a log of reportable actions and statistical monitors that can provide an effective test and measurement of any BrahmsVE model and will be used in the test and validation phase described below. In addition, the VE object inventory and geometry reporting system will be expanded in this subsystem.

k) Test and validation of Phase II BrahmsVE platform

With the addition of the above components to the BrahmsVE architecture we will enter a test and validation phase for the BrahmsVE 1.0 platform. As in the prior years of work on BrahmsVE, testing and validation will be carried out by building simulation scenarios in the platform and then probing them for both effectiveness and weakness. A significant benefit of this approach is that we obtain sample applications that serve both as developer examples in the packaged BrahmsVE commercial release and as market-opening examples for promoting the platform to industry. We propose the following sample applications for implementation in the test and validation of BrahmsVE during Phase II:

  1. Expanded ISS with next revision of PSA, EVA activity: Fully populate the virtual ISS with the full crew complement of three, plus shuttle crew visitors, add two models of the PSA and test a number of simulation scenarios for teamwork with the ground (JSC Mission Control, see 3). This application will tie in with the recent work by the Brahms team to model work practices aboard the ISS [28,29].
  2. FMARS/MDRS Unified Habitat model: Both floors of the habitat, plus exterior geography and vehicles, all operating in the same virtual environment. A number of models will be tried here, including a complete "water tank filling" exercise, which involves an EVA with a vehicle, upper and lower floor coordination, and feedback between crew and agents.
  3. NASA JSC Mission Control operations or Science Backroom operations: A BrahmsVE model of JSC Mission Control or a Science Backroom (Ames, JPL) to bring the ground support team into the full BrahmsVE modeling capacity. Tie the model of mission control to actual activities aboard 1 or 2 above to create a unified ground/mission model.
  4. Virtual Windfarm and Hydrogen Production Facility for DOD and DOE: Based on the January 2003 presentation to the Office of the Secretary of Defense of the initial windfarm model [27], extend the model to factor in humans/vehicles for maintenance and build in a hydrogen production facility. Use this model to open important customers in DOD, DOE and energy producers.
  5. Lights-out automated factory: A model of an existing fully automated factory setting with occasional human participation in the form of maintenance personnel. Gain understanding of the impact of human agents on uptime, including estimates of response time to fix errors. This sample application should help us present BrahmsVE to commercial customers in the manufacturing field.

Measuring applications on the virtual test fixture

In Phase I, we ran eleven simulation scenarios in the ISS/PSA BrahmsVE application and gathered empirical findings against the real experience of the QSS/PSA team, who are employing a physical test fixture at Ames [30]. These empirical results were reported in section 2.3 above. In the Phase II virtual test fixture, we will use the new BrahmsVE Test and Reporting Subsystem to gain a quantitative analysis of the performance of the platform and the applications. We plan to use this more formalized method to meet the goals from the solicitation set forth in Table 3. In addition, the ethnographic field-science techniques described by Clancey [31], as well as the techniques in collaborative decision-making and intelligent reasoning described by Wilkins, Jones, Bargar, Hayes, and Chernychenko in [32], could be applied to the evaluation of the effectiveness of the BrahmsVE platform.

Table 3: Goals from Solicitation and how BrahmsVE 1.0 test and validation will address them

Goal: On-board systems must aid people in diagnosis and repair and enhance safety
Approach: Use BrahmsVE to permit training in a virtual test fixture "just in time" prior to diagnosis and repair operations. BrahmsVE will be operable on standard computers by the crew of the ISS or another mission while on-orbit. In the future, sensor-equipped agents can support a virtual world model tied to real mission support capability.

Goal: Interface design of mobile or built-in agent-assistants, human perceptual-motor coordination, cognitive operations, and group dynamics
Approach: Use BrahmsVE to train users to navigate the astronaut agent while communicating with agents. Permit several users to enter the scene simultaneously as "avatar astronauts" and interact with a single robotic agent or system, testing group dynamics in a kind of "learning game" environment.

Goal: Permit the testing of procedures where machine systems complement human activities
Approach: Utilize BrahmsVE in a training exercise where agents move in tandem with operator-astronauts to provide camera sightlines to a second astronaut-agent (the viewer), or use a laser pointer for indications (teamwork with CapCom).

Goal: Enable just-in-time training
Approach: BrahmsVE is being developed as a just-in-time training tool that can be run on standard personal computers over low-bandwidth networks.

Goal: Understanding appropriate task separation between human and machine systems
Approach: Use BrahmsVE to test where a mobile agent could play a role and take on human tasks where appropriate. Examine error rates of the agent in complex path navigation, suggesting limits of a mobile agent (as in the fan blowout on the ISS model in Phase I).

Goal: Develop robust control systems and exploration tools that can be understood by people, easily learned, maintained, and directed
Approach: The later addition of "tactile interfaces" on the 3D models within BrahmsVE, as well as web-based interfaces (akin to a control panel), will permit experimentation with learning and operating a mobile agent or space suit, a need identified by Story Musgrave on the HST repair mission.

l) Deliverable end product: BrahmsVE 1.0 Production Release

The completion of the above components of the BrahmsVE architecture followed by the test and validation phase will allow us to proceed, as part of the Work Plan described in 3.2, to the full productization of BrahmsVE. The BrahmsVE 1.0 production release will consist of the following deliverables:

  1. BrahmsVE development environment to be used in conjunction with the Brahms development environment
  2. BrahmsVE runtime environment consisting of OWorld script libraries, executable runtime server, Adobe Atmosphere (customized version for BrahmsVE) and reporting/interfacing modules
  3. BrahmsVE statistical reporting package
  4. BrahmsVE OWorld and event language interpreter
  5. Full documentation, both online and in print
  6. A full set of web-downloadable samples including BrahmsVE OWorld scripts, 3D virtual world models, gestural animations and Web-based user interface templates
  7. Pricing for the BrahmsVE environment both as a shrink-wrap purchase and as a service provided by certified BrahmsVE developers
  8. End user and site license agreements
  9. Web-based BrahmsVE marketing program including partner and key BrahmsVE customer examples

3.2 Work Plan

3.2.1 Technical Approach

In order to achieve the technical objectives described in Part 3 of this proposal, we have divided the project into the following major component tasks:

  1. BrahmsVE OWorld Engine Capabilities: This comprises the fundamental capabilities of the BrahmsVE engine as exposed to the scripting interfaces in OWorld and to the communications dialogue system. The engine is also tied to the capabilities of the Atmosphere 3D world subsystem.
  2. BrahmsVE Communications Dialogue System: This is the system permitting communication with Brahms via the network and XML, with the PHP/SQL database layer proposed in Phase II, and with Web-based elements such as the user interfaces for each BrahmsVE model (see the sketch following this list).
  3. BrahmsVE Test and Reporting Subsystem: This module consists of reportable actions and statistical monitors that can provide effective test and measurement of any BrahmsVE model.
  4. BrahmsVE Developer’s Environment and Brahms Integration: This environment will provide a visual toolset for setting up the 3D models, script environments, and all the affordances and communications for each BrahmsVE application.
  5. BrahmsVE Key Application Samples, Test and Documentation: These samples will be part of a distribution with the Developer’s Environment and will demonstrate all of the capabilities of the BrahmsVE platform. Testing of the samples will provide point-of-ship validation of any given release, and the developer documentation will be derived from and centered around these samples.
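To make the communications dialogue concrete, the following is a minimal JavaScript sketch of how an OWorld event might be wrapped as XML and posted to the proposed PHP/SQL layer. The element names, endpoint path and sendEvent helper are illustrative assumptions only; the actual message formats are governed by the Brahms/OWorld XML DTD and event specifications [16, 17].

    // Illustrative sketch only: wraps an OWorld event as XML and posts it
    // to the proposed PHP/SQL layer over HTTP.
    function sendEvent(agent, eventName, params) {
      var xml = '<oworldEvent agent="' + agent + '" name="' + eventName + '">';
      for (var key in params) {
        xml += '<param name="' + key + '">' + params[key] + '</param>';
      }
      xml += '</oworldEvent>';
      var request = new XMLHttpRequest();
      request.open("POST", "/brahmsve/event.php", true);   // assumed endpoint
      request.setRequestHeader("Content-Type", "text/xml");
      request.send(xml);
    }

    // Example: report that the PSA agent has begun a sensor sweep.
    sendEvent("PSA", "beginActivity", { activity: "sensorSweep", module: "US Lab" });

The same pattern can run in reverse: the PHP/SQL layer queues XML events from Brahms for the OWorld engine to poll and apply to the 3D scene.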
3.2.2 Task Descriptions
During Phase II, our effort will focus on the development of the following component tasks:
BrahmsVE OWorld Engine Capabilities
This component task will implement items a, f, g, h, and i in the Roadmap to release in section 3.1 above.
BrahmsVE Communications Dialogue System
This component task will implement items b, c, d and e in the Roadmap to release in section 3.1 above.
BrahmsVE Test and Reporting Subsystem
This component task will implement item j in the Roadmap to release in section 3.1 above.
BrahmsVE Developer’s Environment and Brahms Integration
This component task will implement item e in the Roadmap to release in section 3.1 above.
BrahmsVE Key Application Samples, Test and Documentation
This component task will implement items k and l in the Roadmap to release in section 3.1 above.

3.2.3 Meeting the Technical Objectives

The BrahmsVE 1.0 production release as specified above will meet the technical objectives outlined in Part 3 as follows:

  • BrahmsVE 1.0 will provide a world-class, model-based, discrete agent simulation environment that is both fully interactive, representing the simulation as a realistic 3D virtual environment, and collaborative, allowing researchers and operators to utilize it over networks on standard personal computers equipped only with a web browser.
  • BrahmsVE 1.0 will serve as an environment that can provide interactive simulation for:
    • Training, both well in advance of missions and just in time during mission operations, as well as for the commercial applications discussed in Part 8 below.
    • Testing of design concepts for flight hardware and whole vehicles (PSA and FMARS/MDRS) as well as for commercial applications (discussed in Part 8).
    • Visualization of “day in the life” re-creations of crew/system operations captured on video and reconstructed in a 3D world for researchers to study.
  • BrahmsVE will be a low-cost solution, as it requires only current standard personal computers with no added hardware and permits content development (3D and scripting) using standard web components (Atmosphere, Viewpoint, JavaScript, XML, HTML).

3.2.4 Task Labor Categories and Schedules

This section describes the work schedule for the Phase II effort in terms of our projected allocation of person-hours by labor category by task (Table 4) and our schedule by task (Table 5). DigitalSpace work is carried out by distributed team members and will be coordinated and assembled at the company’s corporate offices near Santa Cruz, California. This schedule assumes an 18-month project duration (six quarters).

Table 4: Our projected allocation of person-hours by labor category by task.

  TASK  DESCRIPTION                                                  PI    PM  Lead SE  2nd SE  Lead CD  2nd CD   TE
  1     BrahmsVE OWorld Engine Capabilities                         400   400      400     100      100      50    0
  2     BrahmsVE Communications Dialogue System                     300   200      200     100      100      50  100
  3     BrahmsVE Test and Reporting Subsystem                       300   100      200     200      200       0  100
  4     BrahmsVE Developer’s Environment and Brahms Integration     200    50      200     200      100     100  200
  5     BrahmsVE Key Application Samples, Test and Documentation    600    50      200     200      600     300  400
        TOTALS                                                     1800   800     1200     800     1100     500  800

Where: PI = Principal Investigator, PM = Program Manager, SE = Software Engineer, CD = Content Developer, TE = Test Engineer

Total estimated hours: 7000

Table 5: Our projected schedule by task.

                                                                      Yr 1                Yr 2
  TASK                                                                Q1   Q2   Q3   Q4   Q5   Q6
  Complete BrahmsVE OWorld Engine Capabilities                        ¦    d    d*   d    dt   *
  Extend BrahmsVE Communications Dialogue System                      ¦    d    d*   d    dt   *
  Develop BrahmsVE Test and Reporting Subsystem                       ¦    d    d*   d    dt   *
  Complete BrahmsVE Developer’s Environment and Brahms Integration    ¦    d    d*   dt   t    t*
  BrahmsVE Developer’s Environment and Brahms Integration                  ¦    d*   d    dt   t*
  Build BrahmsVE Key Application Samples, Test and Documentation           ¦    d*   d    dt   t*

Where:

¦ = Specification and/or Design and Documentation

d = Software and Content Development

t = Sample Applications and Testing/Validation

* = Interim or Final Report, Product Packaging and Documentation

Project Reference Website

The Project Reference Website will be a center for ongoing progress and resources surrounding the project, from the specification and design phase to the sample applications test and documentation. The site will consist of the following components:

·         Project goals, timeline, team biographies and contact information

·         Documentary results of each phase of the project, from the posting of interviews to architectural and specification documents to code bases and 3D models

·         Listserver for project participants with log of message traffic

·         Executable releases of test fixture environments and related tools and libraries

·         Links to related team resources, other NASA sites and industry sites

 

Part 4 Company Information

DigitalSpace Corporation was incorporated in the state of California on August 24, 1995.  DigitalSpace is a company organized to innovate and commercialize in the multi-user virtual worlds and virtual communities market. The company’s business is based on the following concepts:

•      That the Internet, and especially 3D virtual worlds with voice and text environments, can provide effective meeting places and enable communication, learning and team-based projects. DigitalSpace uses these spaces daily as a proof of concept that a company can base its entire operations on them;

•      That as the need for teleworking, distance learning, virtual communities of interest and visualization grows, so will the capabilities of ordinary consumer personal computers to deliver real-time 3D multi-user experiences. It is the convergence between these needs and the capabilities of consumer computing hardware that will create a large industry producing and hosting virtual worlds and communities in the future.

On these two premises, DigitalSpace has provided solutions for dozens of clients to produce both demonstration and fully functional virtual spaces since 1995. See our web site at http://www.digitalspace.com for a portfolio of projects and clients; a number of them are featured in Part 7 for their relevance to this SBIR proposal.

Part 5 Facilities and Equipment

5.1 Facilities

DigitalSpace is headquartered near Santa Cruz, California, and currently leases office space in a two-story building at 221 Ancient Oaks Way, Boulder Creek, California 95006. Additional DigitalSpace US team members have satellite offices in Phoenix, Arizona and Seattle, Washington.

5.2 Equipment used

All DigitalSpace team members have at least one personal computer connected to the Internet (most have IBM PCs, Pentium 3-4 class) with all needed software for 3D modeling, website design and programming.

Part 6 Key Personnel and Bibliography of Directly Related Work

6.1   Management and technical staff members

The following brief resumes introduce the management/technical staff members for the proposed project. DigitalSpace certifies that Bruce Damer, the Principal Investigator, will have his primary employment at DigitalSpace at the time of award and during the conduct of the project.

Name:                           Bruce Damer (PI)

Years of Experience:    22

Position:                        CEO

Education:                    Bachelor of Science in Computer Science (University of Victoria, Canada, 1984); MSEE (University of Southern California, 1986)

Assignment:                 Mr. Damer will be the Principal Investigator for the SBIR Phase II effort. He will coordinate all interaction among DigitalSpace, its collaborators, NASA and other participants, and will be responsible for all staffing, technical design, reporting and documentation. Mr. Damer will devote a minimum of 100 hours per month to the NASA SBIR project.

Experience:                  Mr. Damer is the world's recognized expert on avatars and shared online graphical virtual spaces having created much of the early literature, conferences and awareness of the medium. Mr. Damer is a visiting scholar at the University of Washington Human Interface Technology Lab and a member of the staff at the San Francisco State Multimedia Studies Program. See http://www.digitalspace.com/papers for a complete bibliography of Mr. Damer's work.

Name:                           Stuart Gold (PM)

Years of Experience:    28

Position:                        Chief Architect (communities platform)       

Education:                    Royal Institute of British Architects

Assignment:                 Stuart Gold will serve as Program Manager for the project; he will structure the technology components and architecture for the BrahmsVE 1.0 release, coordinate the 3D modeling teams, and provide database and real-time community tools infrastructure support on the project as well as the XML-based interfaces with Brahms.

Experience:                  Mr. Gold is a pioneer of online systems, starting with his work on transaction processing for Prestel in the 1970s and most recently leading the design and delivery of online virtual worlds including TheU Virtual University Architecture Competition, the International Health Insurance Virtual Headquarters, and the Avatars98-2001 online events. Mr. Gold is also the chief architect of the DigitalSpace communities platform, implementing XML and JS based community tools for use by all DigitalSpace projects. See http://www.digitalspace.com/papers for his recent writings.

Name:                           Bruce Campbell (Lead SE)

Position:                        Programmer/Architect, OWorld/Atmosphere

Experience:                  5 years of experience at the University of Washington Human Interface Technology Laboratory and the Department of Oceanography.

Assignment:                 JavaScript programming, interface with Brahms and 3D content, testing, open source component strategy, university and distance learning user partnerships.

Name:                           Galen Brandt (Marketing – Phases II and III)

Position:                        New business development, DigitalSpace

Experience:                  25 years including creating market strategies for Dun and Bradstreet, the Franklin Mint, SUNY Fashion Institute of Technology, DoToLearn and others.

Assignment:                 Market development for Phase II and III.

Name:                           Dave Rasmussen (TE)

Position:                      Member of the 3D Design Studio, DigitalSpace

Experience:                 8 years of experience in virtual world design; skills: 3DS Max, Java, Active Worlds, Adobe Atmosphere, PHP/MySQL database development

Assignment:                Directing the team performing 3D modeling and animation; testing

Name:                           Merryn Neilson (Lead CD)

Position:                      Member of the 3D Design Studio, DigitalSpace

Experience:                 8 years of experience in virtual world design; skills: 3DS Max, Java, Active Worlds, Adobe Atmosphere

Assignment:                 Web design on the project, 3D worlds, avatar design, testing

Name:                           Peter Newman (SE)

Position:                      Developer in C++, JS, PHP, HTML, 3D Design Studio, DigitalSpace

Assignment:                 Programmer of OWorld engine extensions.

Name:                           Ryan Norkus (CD)

Position:                      Graphic artist, 3D modeler and animator, 3D Design Studio, DigitalSpace

Assignment:                 Focusing on the automation of animated sequences

6.2 Bibliography of Directly Related Work

[1] Associated Press News Report dated 08/16/2002, on the web at: http://www.cnn.com/2002/TECH/space/08/16/station.spacewalk.ap/index.html

[2] Loftin, R.B., and Kenney, P.J., "Training the Hubble Space Telescope Flight Team," IEEE Computer Graphics and Applications, vol. 15, no. 5, pp. 31-37, Sep, 1995.

[3] Engelberg, Mark [Ed.] (September 11, 1994). Hubble Space Telescope Repair Training System [WWW document]. URL http://www.jsc.nasa.gov/cssb/vr/Hubble/hubble.html

[4] Cater, J. P., and Huffman, S. D. Use of Remote Access Virtual Environment Network (RAVEN) for Coordinated IVA-EVA Astronaut Training and Evaluation. Presence: Teleoperators and Virtual Environments, vol. 4, no. 2 (Spring 1995), pp. 103-109. (Training for Hubble Space Telescope repair.)

[5] Jeff Rickel and W. Lewis Johnson. Task-oriented collaboration with embodied agents in virtual worlds. In J. Cassell, J. Sullivan, and S. Prevost, editors, Embodied Conversational Agents. MIT Press, Boston, 2000.

[6] Transom Jack is described on the Web at: http://www.manningaffordability.com/S&tweb/HEResource/Tool/Shrtdesc/Sh_TRANSOM.htm

[7] MOVES Institute on the Web at: http://www.movesinstitute.org/

[8] Zyda, M., Hiles, J., Mayberry, A., Wardynski, C., Capps, M., Osborn, B., Shilling, R., Robaszewski, M., Davis, M., "The MOVES Institute’s Army Game Project: Entertainment R&D for Defense," IEEE Computer Graphics and Applications, January/February 2003

[9] Blais, C., Brutzman, D., Horner, D., and Nicklaus, S., "Web-Based 3D Technology for Scenario Authoring and Visualization: The Savage Project", Proceedings of the 2001 Interservice/Industry Training, Simulation, and Education Conference (I/ITSEC), Orlando, Florida, 2001.

[10] Michael Alan Freed. “APEX, Simulating Human Performance in Complex, Dynamic Environments”, A Dissertation Submitted to the Graduate School in Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy, Field of Computer Science, Northwestern University, Evanston, Illinois, June 1998

[11] Michael A. Freed, Roger W. Remington (2000). Making Human-Machine System Simulation a Practical Engineering Tool: An Apex Overview. In Proceedings of the 2000 International Conference on Cognitive Modeling, Groningen, Holland.

[12] L. A. Nguyen, M. Bualat, L. J. Edwards, L. Flueckinger, C. Neveu, K. Schwehr, M. D. Wagner, E. Zbinden, “Virtual reality interfaces for visualization and control of remote vehicles”, Autonomous Robots, 11:59-68, 2001.

[13] B. Damer, M. Sierhuis, R. van Hoof, B. Campbell, D. Rasmussen, M. Neilson, C. Kaskiris, S. Gold, G. Brandt (2001). Brahms VE: A Collaborative Virtual Environment for Mission Operations, Planning and Scheduling, Final Report for STTR Contract #NAS2-01019, October 8, 2001. URL: http://www.digitalspace.com/reports/sttr-techreport-final2.htm

[14] BrahmsVE/FMARS Project Home Page on the web at: http://www.digitalspace.com/projects/fmars

[15] FMARS/Haughton-Mars Project Home Page on the web at: http://www.marssociety.org/arctic/index.asp and http://www.arctic-mars.org

[16] Ron van Hoof et al, “TM00-0024 Brahms/OWorld XML DTD Specification, Version 1.0 For Review”, 28 November 2000, NASA Ames Research Center.

[17] Boris Brodsky et al, “TM00-0025 BRAHMS OWorld Event Specification Version 1.0 Draft”, August 14, 2002, NASA Ames Research Center.

[18] Brahms is described on the web at http://www.agentisolutions.com and in several papers at: http://www.agentisolutions.com/documentation/papers.htm

[19] Clancey, W. J., Sachs, P., Sierhuis, M., and van Hoof, R. 1998. Brahms: Simulating Practice for Work Systems Design. International Journal of Human-Computer Studies, 49, 831-865.

[20] Sierhuis, M. 2001. Modeling and Simulating Work Practice; Brahms: A multiagent modeling and simulation language for work system analysis and design. Ph.D. thesis, Social Science and Informatics (SWI), University of Amsterdam, SIKS Dissertation Series No. 2001-10, Amsterdam, The Netherlands, ISBN 90-6464-849-2.

[21] Sierhuis, M.; Bradshaw, J.M.; Acquisti, A.; Hoof, R.v.; Jeffers, R.; and Uszok, A. Human-Agent Teamwork and Adjustable Autonomy in Practice, in Proceedings of the 7th International Symposium on Artificial Intelligence, Robotics and Automation in Space (i-SAIRAS), Nara, Japan, 2003.

[22] M. Sierhuis, W. J. Clancey, C. Seah, J. P. Trimble, and M. H. Sims, Modeling and Simulation for Mission Operations Work System Design, Journal of Management Information Systems, vol. 19, pp. 85-129, 2003.

[23] M. Sierhuis and W. J. Clancey, Modeling and Simulating Work Practice: A human-centered method for work systems design, IEEE Intelligent Systems, vol. 17(5), 2002.

[24] BrahmsVE/ISS-PSA SBIR Phase I Project and reports Web Page: http://www.digitalspace.com/projects/iss_03

[25] Dowding, John, CommandTalk from SRI is described on the Web at: http://www.ai.sri.com/~lesaf/commandtalk.html

[26] Perlin, Ken, demonstrations and papers on procedural figure animation on the Web at: http://mrl.nyu.edu/perlin

[27] Digital Space virtual windfarm and Arlington/OSD presentation described on the Web at: http://www.digitalspace.com/presentations/arlington-energy/

[28] A. Acquisti, M. Sierhuis, W. J. Clancey, J. M. Bradshaw, Agent Based Modeling of Collaboration and Work Practices Onboard the International Space Station. Proceedings of the 11th Conference on Computer-Generated Forces and Behavior Representation, Orlando, FL, May 2002. 

[29] M. Sierhuis, A. Acquisti, and W. J. Clancey, Multiagent Plan Execution and Work Practice: Modeling plans and practices onboard the ISS, presented at 3rd International NASA Workshop on Planning and Scheduling for Space, Houston, TX, 2002.

[30] Personal Satellite Assistant (PSA) Test Fixture (Greg Dorais, Yuri Gawdiak, Daniel Andrews, Brian Koss, Mike McIntyre) described on the web at: http://ficworkproducts.arc.nasa.gov/psa_test_fixture/psa_test_fixture.html

[31] Clancey, W. J. 2001. Field science ethnography: Methods for systematic observation on an Arctic expedition. Field Methods, 13(3):223-243, August.

[32] David C. Wilkins, Patricia M. Jones, Roger Bargar, Caroline C. Hayes, Oleksandr Chernychenko: Collaborative Decision Making and Intelligent Reasoning in Judge Advisor Systems. HICSS 1999

6.3 NASA and Non-NASA Advisors to the project (beyond Brahms team)

Dr. Charles Neveu, QSS, PSA Team – advising on Phase I PSA/ISS application (implemented)

Mike Sims, Ames – advising on rover and surface mission design, assisted with JPL/MER demonstration project

Dr. Geoff Briggs, Scientific Director, Center for Mars Exploration, NASA Ames – advising on terrain modeling, surface mission design, commissioned whole-planet Mars terrain demonstration project.

John Peterson, Arlington Institute – encouraged and hosted DOD/Energy security meeting where 3D wind farm was presented.

Dr. Tom Furness III, HIT Lab University of Washington – technology transfer advisor

Dr. Don Brutzman, Naval Postgraduate School, MOVES institute

Captain Richard O’Neill, Director, Highlands Group


Part 7 Subcontracts and Consultants

7.1 Portfolio of recent projects (separate from BrahmsVE work)

Subcontracts:

ORGANIZATION: Adobe Systems Incorporated, San Jose California

DESCRIPTION:  Support Adobe on all aspects of the architectural specifications, testing, community site development, user input, and market presentation of the Adobe Atmosphere™ product set.

ORGANIZATION: Elixir Technologies Corporation, Ventura California

DESCRIPTION: Creation of a strategy for interactive document presentation and an architecture for a product and services business. Provision of DigitalSpace tools to enable the live interactivity. Training of developer teams to implement the architecture.

ORGANIZATION: American General Financial Group, Houston Texas

DESCRIPTION: Designed and developed a production virtual classroom and campus using 3D java technology. Deployed as a learning community site for 200,000 teachers who are AGFG’s clients for retirement planning services.

ORGANIZATION: Datafusion Inc, San Francisco California

DESCRIPTION: Designed and developed a prototype virtual world for Datafusion's knowledge map product, depicting problem and resolutions graphically in navigable layered 3D spaces.

COMPANY: International Health Insurance, Copenhagen, Denmark

DESCRIPTION: Worked closely with this financial services client to build a virtual headquarters in 3D, complete with help desk functions delivered in five languages by automated agents. This world was also tested with a satellite-based paging system to inform help desk personnel when a client enters the virtual world.

COMPANY: The Contact Consortium, Scotts Valley California

DESCRIPTION: Coordinated the Consortium’s three annual conferences beginning in 1996. Created the program, fund-raising plan, and financial and logistical support, and built a new technology platform so that the Consortium’s 1998 conference could be held entirely inside the Internet in 3D virtual worlds.

Consultants

DigitalSpace Corporation will not use outside consultants on this project. Team members employed in the BrahmsVE project are member-owners of the corporation whose participation is defined by member license and agreement.

Part 8 Potential Applications

8.1 Potential NASA Applications

Numerous projects within NASA, ranging from the PSA at Ames to the Mobile Agents project of interest to several NASA centers (including JSC), could benefit from the addition of “intelligent” agents within a model-based, discrete agent-driven virtual environment. Rapid, iterative design of a mobile agent working in tandem with human participants can yield a body of design feedback at much lower cost than building several iterations of physical models.

Modeling and simulation for Mars Science Laboratory and Titan missions

Geoff Briggs and Michael Sims of Ames inform us that future Mars (Mars Science Laboratory ’09) and Titan missions will include the use of drills, rotorcraft and airplanes. While these are strictly robotic missions, there will be “humans in the loop” in the form of significant science backrooms and mission control.

Sim-station support

Julian Gomez, who is working on early renditions of the Sim-Station project for RIACS at Ames, informs us that there will be a need for realistic 3D representations of systems and people within the ISS to complement the schematic-style representation of station subsystems. Julian has been equipped with a running version of BrahmsVE.

Greater operator effectiveness through improved telepresence interfaces

NASA (JSC, Ames) is developing improved robotics and 3-D simulation technologies to provide operational robustness and intelligence, with the goal of improving operator efficiency via advanced displays, controls and telepresence. Tactile feedback interfaces for collision awareness among workspace objects, avatars and robot structure; force feedback devices for awareness of manipulator and payload inertia and gripping force; and stereoscopic display systems with spatial tracking of the head, arms, etc. are all considered key to more effective teleoperation. Based on its flexibility using open Web standards, we expect that BrahmsVE will have a role to play in teleoperation and that we will be able to build interfaces to tactile and force feedback systems, bringing this key interface modality into BrahmsVE.
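As a sketch of how such an interface could be bridged, the JavaScript fragment below forwards a hypothetical OWorld collision event to a force feedback device. Both the callback and the device interface are assumptions for illustration, not existing BrahmsVE or Atmosphere APIs.

    // Illustrative sketch only: scale a haptic pulse with impact speed when
    // an avatar part collides with scene geometry or robot structure.
    function makeCollisionHandler(hapticDevice) {
      return function (avatarPart, obstacle, impactSpeedMps) {
        var strength = Math.min(1.0, impactSpeedMps / 2.0);  // cap at full strength
        hapticDevice.pulse(avatarPart, strength, 50);        // 50 ms pulse
      };
    }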

Ames: Virtual Digital Human
Based on the new Mission Control Center System (MCCS) Architecture framework, integrated support for virtual-digital-human-in-the-loop and teleoperational interfaces is being promoted for flight and ground operations development, analysis, training, and support. The main result desired is an interactive system that enhances operator and IVA/EVA task efficiency via teleoperational technologies and distributed collaborative virtual environments.

The implementation of the Virtual Digital Human (VDH) seeks to create anatomical, biomechanical and anthropometric functionality to fully simulate the somatic components and systems of the human body. BrahmsVE, utilizing the Adobe Atmosphere and Viewpoint technologies for procedural (skeletal) skinned human body forms within shared virtual environments, may be able to meet this challenge.
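By way of illustration, the per-frame skeletal update a VDH layer would perform might look like the JavaScript sketch below. The skeleton object and its setRotation call are hypothetical stand-ins, not Atmosphere or Viewpoint functions.

    // Illustrative sketch only: procedurally drives a skinned arm through a
    // smooth reaching motion, the kind of joint update a VDH performs each frame.
    function animateReach(skeleton, timeMs) {
      var t = timeMs / 1000.0;
      var shoulderPitch = 40 * Math.sin(t * 0.8);        // degrees
      var elbowFlex = 20 + 15 * Math.sin(t * 0.8 + 0.5);
      skeleton.setRotation("rightShoulder", shoulderPitch, 0, 0);
      skeleton.setRotation("rightElbow", elbowFlex, 0, 0);
    }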

NASA educational outreach and Space Camp
Immersive virtual worlds, the virtual digital human (VDH), and 3D simulation modeling have become significant vehicles for NASA’s effort to generate and communicate knowledge and understanding to K-12 and college/university students on topics such as International Space Station and Space Shuttle/Space Transportation System (STS) operations, robotics, intravehicular/extravehicular activities, Mission Control Center conduct, interplanetary space flight, and microgravity simulation. BrahmsVE can go a long way toward helping NASA enable this kind of outreach and could even become a fixture at NASA Space Camp at several centers.

8.2 Potential Non-NASA Commercial Applications

Applications for other Federal Agencies

DOD and DOE – energy security design application

In January 2003 BrahmsVE was presented at a special workshop at the Arlington Institute held for the Office of the Secretary of Defense. A prototype virtual windfarm [27] was developed utilizing Adobe Atmosphere, OWorld and the BrahmsVE engine. The Havok physics engine allowed us to show how windfarm kilowatt-hour production scales for different configurations of turbines. Coupling this with a geographic information system provided by GeoFusion Inc. allowed us to show the DOD staffers and other energy experts how sites for windfarm power could be selected and the production output modeled. We plan to present this work again at a Highlands Forum program for the DOD at the end of 2003 and expect a DOE presentation to follow.
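For reference, the production scaling behind such a demonstration rests on the standard wind-turbine power relation P = 0.5 x air density x rotor area x wind speed cubed x power coefficient. The JavaScript sketch below applies it with generic physical constants; it is illustrative only, not the project’s Havok-based model.

    // Illustrative sketch only: ideal energy output of a single turbine.
    function turbineOutputKWh(rotorDiameterM, windSpeedMps, hours) {
      var airDensity = 1.225;                            // kg/m^3 at sea level
      var area = Math.PI * Math.pow(rotorDiameterM / 2, 2);
      var cp = 0.35;                                     // typical power coefficient
      var watts = 0.5 * airDensity * area * Math.pow(windSpeedMps, 3) * cp;
      return watts * hours / 1000;                       // kilowatt-hours
    }

    // Example: a 60 m rotor in a steady 8 m/s wind produces roughly 310 kW,
    // or about 7,400 kWh over 24 hours.
    turbineOutputKWh(60, 8, 24);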

Defense design, training and operations applications

The military will be using semi- and fully autonomous agents working closely to support troops and command in surveillance and combat missions throughout the 21st century. We therefore expect a great deal of interest surrounding a product in this space. We are already in contact with the Naval Postgraduate School MOVES Institute about cooperating on and adopting a new XML-based standard for simulation communications.

Applications for the educational and private sector

K-12 and College, Education and Museums

The current set of NASA BrahmsVE applications could be repurposed into educational course modules for schools. In discussions with Al Globus of Ames and the Planetary Society in Pasadena, California, we have determined that there is a need and a market for spaces in which students can construct space stations or colonies on the Moon or Mars and design all of the subsystems and human/agent activities.

Agent-based virtual environments can also be of great value to museums and science learning centers such as the Exploratorium in San Francisco, where we have been in contact with Technical Director Larry Shaw, who is interested in hosting an event for the MER landing in January 2004 using the BrahmsVE MER modeling we did for JPL.

Robot games – educational and entertainment applications

Robot “wars” are one of the most popular forms of entertainment in the popular media, and robot game competitions are some of the finest learning events for K-12 and college engineering students and faculty. Ames sponsors such events with CMU students and high schools. We have communicated with the organizers of the Ames events and demonstrated BrahmsVE to them. We plan to partner with them and the local chapter of the Robotics Society of America to develop a kids’ robot design lab and competition space within the virtual spaces made possible by BrahmsVE. Massively multiplayer online games are experiencing a large amount of investment and commercial interest. BrahmsVE is a competent platform for the creation of a successful multiplayer online game, both as a learning tool and as a pay-per-play tournament environment. We plan to seek support for a commercial online robot games application. We have secured the trademark “digibots” for this project and are creating a business plan.

Industrial design, training and operations applications

From factory-floor automation to security systems, complex environments where humans work in tandem with mobile agents or other autonomous machine systems need a comprehensive model-based environment with high-fidelity 3D re-creation during the design, training and operations phases. Industrial training is a multi-billion-dollar-per-year industry, and BrahmsVE is uniquely suited to enter this market, running on industry-standard platforms.

Consumer market research for personal wireless assistants

The emerging era of wireless, wearable personal assistants is picking up momentum with ever more sophisticated cell phones and other handheld devices. In a real sense, each of these devices represents the pairing of humans with machines, all of which the BrahmsVE human/agent augmentation design environment can model for product design purposes.

Part 9 - Phase III Efforts, Commercialization and Business Planning

Background

DigitalSpace has an eight-year history of profitable operations in our chosen market segment. We have built a business through development, project work and acquisition that now offers a dozen product configurations to several market segments, ranging from government to large and mid-sized enterprises, universities and colleges, and the special interest group and nonprofit sector.

Since 2000 DigitalSpace has made a strategic multi-year commitment to the development of the vision we share with the NASA and USRA/RIACS team members who have made this effort possible. Briefly stated, our joint mission is to create the world’s most comprehensive, graphically realistic, collaborative work practice, mission planning and operations development environment. Support of this SBIR Phase II will allow us to deliver a full 1.0 production packaging of BrahmsVE into a multitude of markets. Multiple NASA and outside customers have already expressed interest in, or commissioned test projects using, the development version of BrahmsVE (as reported here).

At the start of Phase III, DigitalSpace plans to finance its initial operation with customer revenues or venture capital; if no venture capital is obtained, the principals are committed to self-financing the venture during Phase III.

9.1 Market Feasibility and Competition

DigitalSpace’s target markets are divided into several areas: NASA and other federal agencies (contracts, grants, educational programs, software and content development); strategic large and medium-sized company partnerships, including Adobe and Elixir Technologies Corporation (technical and marketing partnering, technology testing, online community support, evangelism); universities and colleges, including the State University of New York and the RedAppleOnline project (educational software, virtual communities for learning, 3D and 2D chat collaborative tools); and special interest groups, including community-based organizations and political campaigns such as the Rainforest Action Network (voice over web, chat, and instant messaging product line).

DigitalSpace’s estimates of the potential market sizes (government and/or non-government) are as follows: NASA (4-5 customers in each of the five identified centers ARC, JPL, JSC, KSC and MSFC, for a total of 25 internal projects, with more likely at ARC); the Federal Government, including DOD, DOE, NSF, NIH, HUD and DOJ (we estimate a further 200 customers); educational institutions (a typical university has at least 4 programs in virtual environments; over 500 customers); and the private sector (design automation, industrial training, workplace reengineering; over 500 customers).

DigitalSpace’s estimates of our market share after the first year of sales and after five years are as follows: NASA (first year 5%, after 5 years 25% of VE/simulation projects); Federal Government (first year 2%, after 5 years 15% of VE projects); educational institutions (first year 4%, after 5 years 15%, with an incentive program including open source releases); and the private sector (first year 5%, after 5 years 12%, benefiting from industry-standard platforms replacing proprietary systems).

DigitalSpace’s main competition after the first year will come from several enterprises:

  • Virtools
  • Sense8
  • EON Reality
  • Superscape

DigitalSpace’s main competition after five years will come from several companies:

  • EDS/Simulation and Training Practice (Solidworks)
  • Silicon Graphics Inc (sgi)

9.2 Strategic Relevance to the Offeror

In the company’s current business plan, the commercial 1.0 release of BrahmsVE and its associated services will represent roughly 35% of the firm’s business in the first twelve to eighteen months. Over the next five years, we plan to have BrahmsVE encompass roughly 60% of our business. The AgentiSolutions team will act as a natural marketing and delivery partner as we service further customers for Brahms and, by extension, BrahmsVE.

9.3 Key Management, Technical Personnel and Organizational Structure

In this section, we describe (a) the skills and experiences of key management and technical personnel in bringing innovative technology to the market, (b) current organizational structure, and (c) plans and timelines for obtaining needed business development expertise and other necessary personnel.

a)       DigitalSpace has the skills and experience of key management and technical personnel needed to bring innovative technology to the market. Bruce Damer’s seven years of experience in bringing the innovations of Xerox PARC to market for Elixir Technologies Corp (1987-94) and eight years of experience as a director of an industry group, the Contact Consortium (1995-present), give him long experience in both productizing invention and forming industrial partnerships. Over 5,000 customers in 120 countries use his product, the Elixir Desktop. Stuart Gold’s experience as both a practicing architect (1979-86) and a database architect (1987-present) gives him a unique framework for the construction and management of 3D spaces on the internet. Galen Brandt’s marketing experience with firms such as Dun and Bradstreet, Corning, and Johnson & Johnson, and her experience promoting VR to the medical community, give her unique skills for marketing BrahmsVE 1.0. Bruce Campbell’s years as a researcher and engineer at the Human Interface Technology Lab of the University of Washington and his PhD work at the Department of Oceanography give him an insightful perspective on the OWorld architecture and the educational uses of BrahmsVE.

b)       The current organizational structure is a matrix rather than a hierarchy. The company is organized under the member-owner “commons” model pioneered by Visa International, with a lines-of-business structure in which members share responsibility across each line:

a.       Virtual Environment Studio (includes NASA work, Adobe, Atmosphere). Members: Damer, Gold, Brandt, Campbell, Rasmussen, Neilson, Newman, Norkus, Miller

b.       Traveler sales: Damer, Turner, Thomasson, Miller

c.       TalkSpace sales and development: Hagerty, Damer, Brandt, Thomasson, Meigs

d.       MeetingPage development and sales: Gold, Damer, Brandt

c)       Our plans and timelines for obtaining needed business development expertise and other necessary personnel include the following ramp-up of promotion of BrahmsVE 1.0 in early 2005 as we approach our 1.0 release:

a.       Marketing partners will likely include AgentiSolutions, Adobe and the MOVES Institute (based on current discussion).

b.       L. Hagerty is developing a sales and support network (distributors) for our TalkSpace voice-over-the-Web product line, and this will be the framework upon which we build our BrahmsVE sales/support network starting in March 2005.

9.4 Production and Operations

DigitalSpace has engaged in the product development of BrahmsVE since the end of 2000. Product development schedules, reports and milestones are all documented on the web at [14, 24]. With the completion of Phase II and additional investment by DigitalSpace (see 9.5 below), we will reach a fully marketable version of BrahmsVE by Q2 to Q3 of 2005. DigitalSpace is investing in our primary delivery mechanism, our co-located server facilities housed at Hurricane Electric in Fremont, California, and has made a further financial commitment to upgrade this facility in 2003/04 for BrahmsVE deployment to NASA and other customers during Phase II and on into Phase III. No other physical assets will be needed, as fulfillment will be done entirely through the Web with no boxed product or inventory required.

DigitalSpace will be adding marketing, customer support and specialist staff in early to mid 2005 for the ramp-up to BrahmsVE sales within and beyond NASA customers.

9.5 Financial Planning

DigitalSpace has committed private financial resources to the development of the BrahmsVE product, including our investment in network infrastructure (XML/HTTP back end) R&D to provide the two-way Brahms interface and multithreaded scripting for OWorld.

9.6 Intellectual Property

DigitalSpace has two patents in process surrounding the unique JavaScript implementation in the XML/HTTP communications layer that is currently used in its MeetingPage product and will be applied to BrahmsVE in Phase II. DigitalSpace has a technology sharing agreement with the Brahms team at AgentiSolutions and an agreement to cross-license with NASA and RIACS all critical interface-level elements of BrahmsVE.

Part 10 Capital Commitments Supporting Phase II and Phase III

As mentioned previously, DigitalSpace has invested in the XML/HTTP framework that will enable BrahmsVE to establish a communications dialogue with Brahms and the Web. DigitalSpace has committed to invest a similar amount in 2004-2005 for parallel development of the PHP/SQL database, continuing into 2005 with market development for BrahmsVE. Current sales of TalkSpace and Traveler, new sales of our MeetingPage collaboration platform in 2004, and our other consulting and database project fulfillment should allow us to fund this entirely internally.

Other NASA and DOD customers are already expressing interest in purchasing BrahmsVE implementations as outlined in Part 8. In addition, we are able to obtain a business line of credit to supplement Phase III activities. Lastly, private venture support may be sought in Phase III for some of the applications listed in Part 8, especially the Robot Games application. In summary, ongoing DigitalSpace revenues and some early sales will supplement development of BrahmsVE in Phase II while sales and a number of funding options will assist us in product launch in Phase III.

In addition, DigitalSpace has secured a commitment from its bank, Bank of America, to extend us a line of credit in the first year of commercialization and an equal or greater commitment in the following year. Documents detailing this commitment can be provided.

Part 11 Related Research/ Research and Development

In extensive research of the field of web-standardized, model-based, discrete agent, collaborative 3D simulation environments for training, design and operations (see Part 2 and the references in 6.2), we have found no equivalent project being pursued under contract to the Federal Government. We also affirm that our company has not already achieved the stated objectives. The closest project to BrahmsVE that we have been able to locate is Dr. Don Brutzman’s work at the MOVES Institute at the Naval Postgraduate School in Monterey, California. We have agreed to cooperate with MOVES to bring their XML-based Extensible Modeling and Simulation Framework (XMSF) standard into the BrahmsVE platform, and they will in turn open up new applications within the Navy and other US Government customers.

end.