Project: NNA06AA31C, SBIR 2004-II
Simulation-Based Lunar Telerobotics Design, Acquisition and Training Platform for Virtual Exploration

Final Technical Report with Source Code and Access Website Included
December 2nd, 2007

Prepared by

Bruce Damer, Galen Brandt
(DigitalSpace, Santa Cruz, California, USA)

Dave Rasmussen, Merryn Neilson, Peter Newman, Ryan Norkus, Andrew Gault
(DM3D Studios, Maffra, Victoria, Australia)

Phase II Final Report - Project Summary

Firm: DigitalSpace Corporation

Contract Number: NNA06AA31C

Project Title: Simulation-Based Lunar Telerobotics Design, Acquisition and Training Platform for Virtual Exploration

Name and Address of Principal Investigator and offeror:

Bruce Damer, DigitalSpace Corporation, 343 Soquel Ave, #70, Santa Cruz CA 95062, phone: (831) 338 9400

The purpose of the research

For several years DigitalSpace Corporation has been building and utilizing an open source real-time 3D collaborative design, engineering and training platform called Digital Spaces (DSS) in support of NASA's new exploration vision. This platform has been deployed at several NASA centers and other institutions to deliver innovative applications in almost every program, ranging from ISS training to Mars exploration. This SBIR Phase II provided us the resources to focus on a series of major improvements to DSS in support of the Robotic Lunar Exploration Program (RLEP2) and other NASA research efforts into spacecraft surface options (including landing and mobility) for the moon, NEOs and Mars. The challenge for this work was to produce physics simulations of vehicles, rigid body dynamics and surface characteristics of high enough fidelity to permit rapid prototyping of real mission concepts.

Brief description of the research carried out

The research carried out permitted elements of a simulation to be assembled and executed in a compact client program. These elements include terrain meshes, 3D vehicle models with rigging, physics properties, and other aspects such as data input and output, user interface controls, communication via HTTP, and databases. The execution of simulations was placed under the control of external scripting languages (XML and Python) at the object, event and heartbeat levels. The entire platform was put to the test delivering several applications for NASA centers, including a lunar rover simulation for RLEP2 (MSFC, ARC), an LSAM version for descent and landing (JSC) and a design study of a crewed mission to a NEO (JSC, ARC). Limitations of the platform were noted and development directions adjusted. A commercial application for the mining industry resulted from the fixing of shortcomings in the scripting interfaces.
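
As an illustration of the object, event and heartbeat control levels described above, the following Python sketch shows the general shape of such a scripting dispatcher. The class and method names are hypothetical, not the actual DSS scripting API.

```python
# Illustrative sketch (not the DSS API): a minimal dispatcher that drives
# scripted callbacks at the heartbeat (per-tick) and event levels.

class ScriptHost:
    def __init__(self):
        self.heartbeat_handlers = []   # called every simulation tick
        self.event_handlers = {}       # event name -> list of callbacks

    def on_heartbeat(self, fn):
        self.heartbeat_handlers.append(fn)
        return fn

    def on_event(self, name, fn):
        self.event_handlers.setdefault(name, []).append(fn)

    def tick(self, dt):
        # Heartbeat level: every handler sees each simulation step.
        for fn in self.heartbeat_handlers:
            fn(dt)

    def fire(self, name, **data):
        # Event level: handlers run only when their event occurs.
        for fn in self.event_handlers.get(name, []):
            fn(**data)

host = ScriptHost()
log = []
host.on_heartbeat(lambda dt: log.append(("tick", dt)))
host.on_event("collision", lambda body: log.append(("hit", body)))
host.tick(0.02)
host.fire("collision", body="rover_wheel")
```

A script registered this way runs once per simulation step (heartbeat) or only in response to named simulation events, which is the distinction the paragraph above draws.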

Research findings and results

This Phase II project successfully developed and tested a new open source 3D platform (Digital Spaces, DSS) that supports 3D real-time simulation. Over the course of the project, several key applications were delivered for NASA centers. In addition, a commercial 3D application (for the mining industry) was developed. Further work on DSS was identified, and Phase III commercialization support is already underway; this includes applications supporting the European Space Program, simulation of excavators, and an NIH-funded application building games for children with special needs. New technology reporting was completed on the DInterfaces innovation, the unique binding of scripting interfaces with callbacks for physics and other high-performance simulation data pathways.

Final Phase 2 Summary Chart

Table of Contents

Cover Page
Project Summary
Final Phase 2 Summary Chart
Table of Contents
The project objectives
The work carried out
The results obtained
An assessment of technical feasibility
The potential applications of the project results in Phase III both for NASA purposes and for commercial purposes
SF 298


A) The project objectives

All research and development in the nation's space program has been aligned around the Vision for Space Exploration (VSE). This project was designed to meet that challenge by proposing a rapid prototyping tool that makes it possible to create and share collaborative, plausible, real-time 3D simulations of missions. Going well beyond static artists' conceptions, this prototyping tool allows distributed teams to iterate designs on a week-to-week basis, crafting operational scenarios with real-time 3D simulations tightly linked to team meetings, teleconferences, and decision support tools. The resulting work product for this SBIR II, an open source 3D platform called Digital Spaces (DSS), has accomplished that goal.


B) The work carried out

In 2004 COTR Mark Shirley identified that surface terrain modeling for landing and mobility was an underserved mission simulation capability at NASA. Therefore the focus of this work has been on developing tools for vehicle/terrain interaction with real-time physics. This work built on our successful modeling of the Colorado School of Mines' lunar bucket wheel excavator during Phase I of this SBIR (see figure 1 below).

Figure 1: Lunar Bucket Wheel Excavator model from SBIR Phase I

Architectural Overview, Examples and Documentation

Figure 2 below shows the current implementation state of the DSS platform. We will now review the major components of the architecture that have been built or extended since the project started in 2005. Extensive documentation on individual modules and APIs may be found in the project delivery web site referenced later in this report. The entire work product is available at no cost under the LGPL (GNU Lesser General Public License) open source framework.

Fig 2: Final architectural overview for the DSS 1.0 release

ODE Physics

While investigating the required functionality for the project, we selected the ODE (Open Dynamics Engine) physics component, as it has proven applications in industry and is available under LGPL licensing. Recent developments in ODE allow mobile meshes and collision between them, in addition to new physics primitives such as cylinders (currently emulated using capsules) and height fields.

Resource Manager

The implementation of the Resource Manager allows all components to use a centralized organization point for accessing resources in a cross platform and storage independent manner. This design was created with future compatibility in mind, meaning that as support for different storage systems is added to the Resource Manager, all modules will gain access to that storage without any alteration.

Currently the Resource Manager provides access to the local file system and to the storage access protocols supported by libCURL. This has been used solely for HTTP to date, yet should support many more protocols (such as FTP and HTTPS) with little or no modification.

The current interfaces used by the Resource Manager purposely allow gradual access to a resource as it becomes available. In future this may be used for progressive loading of resources (such as models, textures and sounds), although it requires that the requesting module tolerate a resource not being entirely available immediately. All the current modules expect complete availability, because loading was previously only from the local file system.
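
The following Python sketch illustrates the two ideas above: dispatch on the storage scheme behind a single access point, and chunked reads so a consumer can process a resource gradually. It is a simplified stand-in using the Python standard library, not the actual Resource Manager code or libCURL.

```python
# Hypothetical sketch of the Resource Manager concept: modules ask one
# object for a resource by URI and need not know where the data lives.
import os
import tempfile
from urllib.parse import urlparse

class ResourceManager:
    def open(self, uri):
        parsed = urlparse(uri)
        if parsed.scheme in ("", "file"):
            # Local file system backend.
            return open(parsed.path or uri, "rb")
        if parsed.scheme in ("http", "https", "ftp"):
            # urllib stands in here for the libCURL-backed transport.
            import urllib.request
            return urllib.request.urlopen(uri)
        raise ValueError("unsupported scheme: " + parsed.scheme)

def read_chunks(stream, size=4096):
    """Gradual access: yield a resource piece by piece as it arrives."""
    while True:
        chunk = stream.read(size)
        if not chunk:
            break
        yield chunk

# Demonstration against the local file system backend:
fd, path = tempfile.mkstemp()
os.write(fd, b"terrain mesh bytes")
os.close(fd)
data = b"".join(read_chunks(ResourceManager().open(path)))
os.remove(path)
```

Because consumers receive chunks rather than a whole file, a future backend that streams over HTTP could feed partially downloaded models or textures to modules written against the same interface.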


CEGUI User Interface

CEGUI 0.5 is the graphical user interface library we are employing for user controls and data reporting. CEGUI widgets are rendered as a step within the scene-graph output buffer, so they are seamlessly presented in the environment in a cross-platform manner.

CEGUI supports window types, menus and the language used to describe windows allows for skinning via an XML interface. We also developed the concept of Window Sets. This is a system for logically grouping windows based on their type or related task. For example, there are the generic sets DEFAULT, SYSTEM and DEBUG. The Vehicle Agent and Vehicle Agent Controller use VEHICLE to group vehicle related windows together. Currently the user can choose which Window Set to display (with DEFAULT displayed by default). The user can also Hide All and Show All.
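
The Window Set grouping described above can be illustrated with a minimal Python sketch. The class and behavior below are illustrative, not the CEGUI or DSS implementation.

```python
# Illustrative sketch of Window Sets: windows are grouped into named sets
# (DEFAULT, SYSTEM, DEBUG, VEHICLE, ...) and shown or hidden as a group.

class WindowManager:
    def __init__(self):
        self.sets = {}          # set name -> list of window names
        self.visible = set()

    def assign(self, window, window_set):
        self.sets.setdefault(window_set, []).append(window)

    def show_set(self, window_set):
        # Display only the windows belonging to the chosen set.
        self.visible = set(self.sets.get(window_set, []))

    def hide_all(self):
        self.visible.clear()

    def show_all(self):
        self.visible = {w for ws in self.sets.values() for w in ws}

wm = WindowManager()
wm.assign("throttle", "VEHICLE")
wm.assign("console", "DEBUG")
wm.show_set("VEHICLE")      # only vehicle-related windows are visible
```

The Vehicle Agent and its Controller would, in this scheme, simply assign their windows to the VEHICLE set, and the user's Hide All / Show All actions map to the last two methods.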

To make Window Sets easier to work with for end users, we also created the Enhanced Frame Window. This uses a normal Frame Window as its basis (a window with a title bar and border) but adds buttons allowing the user to assign it manually to one or more Window Sets. It also adds a "Roll Up" button that reduces the window to just a title bar, similar in concept to "minimize".

From a developer's perspective, the logic handling for the system GUI (the menus) was moved from the DSS Core Application to the GUI module. This simplifies the process of attaching to and handling window events, and removes code from the Core that is dependent on the behavior of a single module. As an extension of this, the bookmark system was revamped to use a menu structure and to allow the user to manipulate bookmarks in-program by adding, renaming and deleting them.


Core Application

During the project, the Core Application was improved to better handle incorrect data and conditions, by reporting loading errors to the user and providing the option to load a default space or exit Digital Spaces. A splash screen was also added, which gives the user visual feedback on the progress of application start-up and, when resources are being pulled via the CURL portion of the Resource Manager, status messages reporting the progress of loading the space.

Scene-graph Management

During the project, a new scene-graph implementation was produced which is significantly more flexible and allows for much greater data encapsulation.

Fig 3: Architecture of the new Scene-graph Manager

This has simplified both the design and execution of the scene-graph management (see Figure 3 above for the functional block diagram of the new scene-graph manager). As a central component to the presentation of the simulation content, these improvements have sped up both execution and development, reduced the potential sources of errors and accelerated future development. Additionally, this change makes the scene file a much more flexible source of information, and will in future allow data to be consolidated from the multiple files and sources used now, into one centralized file and structure.

Render To Texture

With the re-implemented scene-graph manager in place, we were able to add support for the creation of on-card texture render targets. This allows in-scene cameras to be created which render to texture memory rather than the main window. The contents of this texture memory can be extracted via the added API and manipulated at will.

Python scripting extensions

Due to our use of the Python Interpreter for our scripting implementation, Digital Spaces is able to use standard Python extension modules. For our proof of concept testing of the render to texture feature, we extracted the texture contents via a Python script, and use the Python extension module PyMedia to create a movie file, using the render texture contents as the movie content. The Render to Texture implementation is described in Figure 4 below.

Fig 4: New feature to render scenes to movies/texture images

These textures can be used as regular textures (such as on in-scene objects) or can have their contents read for processing. An excellent example is having multiple cameras render to different parts of a single texture and feeding the rendered data into a movie compression codec. This produced a single movie containing multiple points of view of a simulation.

The most impressive part, from a technical point of view, is that all this rendering functionality is controlled through the scripting interpreter. This allows for much greater flexibility and ease of entry for other developers. It also allowed the use of a Python module to produce the movie, greatly reducing the complexity.
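
The multi-camera composite described above can be illustrated with a small Python sketch that tiles several camera pixel buffers into one frame. This is a conceptual stand-in using plain lists, not the actual render-to-texture API or PyMedia code.

```python
# Illustrative sketch: tile equally sized 2D pixel buffers from several
# in-scene cameras side by side into one frame, which could then be handed
# to a movie codec (PyMedia played that role in the proof of concept).

def composite(views):
    """Concatenate each row across all views to build one wide frame."""
    height = len(views[0])
    assert all(len(v) == height for v in views), "views must match in height"
    return [sum((v[row] for v in views), []) for row in range(height)]

cam_a = [[1, 1], [1, 1]]            # 2x2 buffer from camera A
cam_b = [[2, 2], [2, 2]]            # 2x2 buffer from camera B
frame = composite([cam_a, cam_b])   # one 2x4 combined frame
```

In the real pipeline the "pixels" would be texture memory extracted through the render-to-texture API each frame, and the composited frame would be pushed into the movie encoder.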

Human Agent

New development was carried out enabling the user to have human-like "avatars" for interacting with the simulation in a natural manner. It also allowed the simulation of human-like agents within a simulation. While working on the Human Agent implementation, we added several features allowing greater customization, such as walk speed, jump height, and the ability to activate an accelerated flight mode.

C) The results obtained

The two major results of this project were the applications carried out using the platform for NASA and private industry, and the release of the codebase and documentation under the LGPL open source license to the general public, research agencies and universities. We will next review these projects and the 1.0 release of the work product.

1) Projects carried out for NASA and others using DSS during the course of development

Several projects were carried out over the course of the two-year development of DSS under this SBIR Phase II. Three of these projects were vital to the validation and development direction of the platform and satisfied some needs of NASA's Constellation program. Another application was a proof of concept of commercial viability in the mining and construction industry.


A. Development of the Digital Spaces Engine in support of the LSAM application

In Houston in March of 2006 we met John Connolly, head of the human lunar lander team at NASA's JSC. He saw our work on simulation for RLEP2 surface mobility, received a positive recommendation from the MSFC pre-Phase A team, and subsequently invited us to produce simulation sequences for the LSAM (Lunar Surface Access Module) during final descent and landing on a number of simulated lunar terrains. The results of this work can be seen in figures 5-10 below.

Figure 5: LSAM on final entry to simulated lunar surface.

Figure 6: Reaction Control System visual effects as physics thrust applied.

Figure 7: Descent seen from below.

Figure 8: Descent onto rock hazards.

Figure 9: Contact with hazards on landing.

Figure 10: Contact with slope hazard.

The next section details the improvements made to the DSS platform to support the LSAM application.

Development of the Digital Spaces Engine in support of the LSAM application

LSAM Space

We developed a Space demonstrating some sample flight and landing characteristics of a proposed LSAM design.

Waypoint System

The path followed by the LSAM was controlled by the Waypoint System. This Component is a flexible control system allowing rapid development of pre-scripted movement within a scene graph by autonomous Agents. The control logic is stored in a human-readable XML file, and may be manually edited or developed using the in-scene editor interface. It supports all current Agent Implementations, and is forward compatible with all implementations of the Agent Specification.
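
The idea of a human-readable XML waypoint file can be sketched as follows. The XML layout and function names below are hypothetical, since the actual DSS waypoint schema is not reproduced in this report.

```python
# Hypothetical waypoint file layout and loader, illustrating how a
# pre-scripted path could be stored as human-readable, hand-editable XML.
import xml.etree.ElementTree as ET

WAYPOINT_XML = """
<waypoints>
  <point x="0" y="0" z="0" speed="1.0"/>
  <point x="10" y="0" z="2" speed="0.5"/>
</waypoints>
"""

def load_waypoints(text):
    """Parse waypoints into (x, y, z, speed) tuples for an Agent to follow."""
    root = ET.fromstring(text)
    return [(float(p.get("x")), float(p.get("y")), float(p.get("z")),
             float(p.get("speed"))) for p in root.findall("point")]

path = load_waypoints(WAYPOINT_XML)
```

An autonomous Agent would then be handed these waypoints one at a time and asked to move toward each at the given speed; because the format is plain XML, mission designers can edit a path in a text editor as easily as in the in-scene editor.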

Space Vehicle Agent

Each Agent Implementation is responsible for providing the mode of movement through the scene graph. The Space Vehicle Agent autonomously selects appropriate thruster forces to move at the correct speed and direction, as per instructions from another Component, such as the Waypoint System.

The Space Vehicle Agent Implementation also provides orientation logic, which attempts to keep the Space Vehicle upright. This is performed by adjusting thrusters to compensate for rolling and pitching, which can be caused by the main steering thrusters, as well as non-central weight distribution.
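
The upright-keeping behaviour described above can be approximated by a proportional-derivative controller that torques against roll. The sketch below uses a one-axis model and made-up gains; it is not the actual Space Vehicle Agent code.

```python
# Illustrative one-axis sketch of orientation logic: thruster torque is
# chosen from roll angle and roll rate so the vehicle settles upright.
# Gains kp/kd and the unit inertia are example values only.

def upright_torque(roll, roll_rate, kp=4.0, kd=1.5):
    """Corrective torque opposing roll (proportional-derivative control)."""
    return -kp * roll - kd * roll_rate

def simulate(roll, roll_rate, dt=0.01, steps=1000, inertia=1.0):
    """Integrate the roll axis forward; returns the final roll angle."""
    for _ in range(steps):
        torque = upright_torque(roll, roll_rate)
        roll_rate += torque / inertia * dt
        roll += roll_rate * dt
    return roll

final = simulate(roll=0.3, roll_rate=0.0)   # starts tipped 0.3 rad
```

After ten simulated seconds the disturbance has been damped out, which is the qualitative behaviour the orientation logic provides against off-center thrust and non-central weight distribution.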

Suspension Joints

A feature used in the physical modeling of the LSAM is a suspension joint. This is a physical joint that models a spring and shock-absorber system. By specifying the applied forces at each end of the joint's movement range, modelers can simulate the energy-absorbing properties of this type of system.
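
A minimal sketch of the spring and shock-absorber force behind such a joint (generic physics with example coefficients, not the DSS source):

```python
# Spring-damper force for a suspension joint: a spring term restoring the
# joint toward its rest position plus a damping term opposing velocity.
# k and c below are illustrative example values, not LSAM parameters.

def suspension_force(displacement, velocity, k=2.0e4, c=1.5e3):
    """k: spring stiffness (N/m); c: damping coefficient (N*s/m)."""
    return -k * displacement - c * velocity
```

At each physics step the engine would apply this force along the joint axis; the damping term is what absorbs landing energy rather than letting the lander bounce.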

Space Vehicle Controller

Following the design of separating Controllers and Agents, a Space Vehicle Controller was added to Digital Spaces. This read the axes of a user joystick (or a keyboard mapping simulating a joystick) and converted them into target velocities. The Controller then instructed the Space Vehicle Agent implementation to move at the specified velocity, and the autonomous Agent implementation did the actual work of ensuring this happened. This allowed the user to control the LSAM in a highly detailed manner, giving the ability to perform landings at different velocities, as well as the ability to perform lateral movement of the LSAM.
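
The Controller/Agent division can be sketched as a pure mapping from joystick axes to target velocities, which are then handed to the Agent. The function, field names and velocity limits below are illustrative assumptions.

```python
# Illustrative Controller-side mapping: raw joystick axes (normalised to
# -1..1) become target velocities in m/s. The Agent, not shown, is then
# responsible for selecting thruster forces that achieve these targets.

def axes_to_velocity(x_axis, y_axis, throttle,
                     max_lateral=5.0, max_descent=3.0):
    """Map joystick axes to lateral and vertical target velocities."""
    return {
        "vx": x_axis * max_lateral,    # lateral, left/right
        "vz": y_axis * max_lateral,    # lateral, forward/back
        "vy": -throttle * max_descent, # push throttle forward to descend
    }

target = axes_to_velocity(0.5, 0.0, 1.0)
```

Keeping the mapping in the Controller means a keyboard binding, a joystick, or a scripted test harness can all drive the same Agent unchanged.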

DImplementation Macros

A new method of typecasting was provided to each Implementation object by a series of macros. This simplifies development as developers of new objects no longer need to know the intimate details of the type conversion process. Instead, they just specify what DInterfaces their object supports, and the type conversion support is added for them.

Factory Specification and Macros

The previous method of exporting objects from components involved duplication of information, knowledge of DInterface DUIDs, as well as creation of new objects that depended on the reference counting system. This was redesigned to remove the need for reference counting, and also to simplify the process. Now exported objects use a macro based system for specifying what DInterfaces the exported object (referred to as a Factory) supports. This is almost identical to the newly added type conversion support macros.

Common Task Functions

The existing common task functions have been refined, and new ones added. This makes acquiring DInterfaces from other modules simpler, and provides automatic conversion from DInterfaces to common C++ data structures.


B. Architectural Support for RLEP2 Surface Mobility Trades

In late 2005 DigitalSpace was invited to join the NASA RLEP2 (Robotic Lunar Exploration Program) to support the design simulation of "pre-Phase A" rover concepts for a planned 2011 mission to explore cold, dark traps at the lunar south pole. Our earlier work on a real-time simulation of the lunar bucket wheel excavator paved the way for our RLEP2 participation. Figures 11-20 below detail the work done to fully simulate the RLEP2 "option 3" mid-term review rover vehicle in January of 2006.

Figure 11: New moon hazard yard with "V2" rover from RLEP2 mid-term

Figure 12: Vehicle on "lander"

Figure 13: Vehicle traversing toward crater rim (note "dust" effects on wheel/surface contact)

Figure 14: Approaching crater rim

Figure 15: Slant-traversing slope

Figure 16: Mounting "high centering" hazard

Figure 17: Surmounting hazard

Figure 18: Mounting "rock" hazard

Figure 19: Entering "negative" hazard

Figure 20: Emerging from hazard

Status of RLEP2 Support

Following the Mid-term Review, the DigitalSpace team completed a newly rigged "V2" rover and placed it into a moon hazards yard derived from the earlier work with four vehicles in December 2005. The figures above depict testing of this simulation within the new moon hazards yard.

C. Lunar Exploration and Real-Time Soil Mechanics Modeling

This section will describe the use of DSS to create a real-time simulation of four distinct lunar excavation systems in a lunar polar environment. The work was conducted in Spring 2007 for the NASA Office of Space Operations by a team that included DigitalSpace Corporation, National Security Technologies LLC and Los Alamos National Laboratory. To begin the effort, the DigitalSpace team developed a clean room simulation environment, and placed a prototype backhoe excavator within that environment to begin implementation of a real-time lunar soil mechanics model, as well as the extraction of force and torque data across the excavation platform and output into a special data log file. Following the initial calibration studies, the team completed the implementation of the Balovnev soil mechanics model, and placed it into a simulated lunar base South Polar environment with a low sun angle. Figures 30 through 39 below depict operation of this simulation within the newly designed lunar base terrain model, which was intended to test the excavation systems under repeatable conditions by using a waypoint following tool. Note the physics engine input GUI interfaces in various locations of the simulation, which enabled NASA and LANL engineers to adjust model parameters in real-time.

Extensive upgrades to the Digital Spaces (DSS) software platform were required, and were undertaken according to the following task plan:

  • Develop drivable, real-time simulation models for four distinct mobile lunar excavation platforms. Excavator types include a front-end loader, a bucket-wheel excavator, a bucket-ladder excavator, and a clamshell shovel.
  • Develop a tool to capture statistical / engineering data from real-time lunar excavation simulation. Tracked variables can include position, wheel rotation, torque and force.
  • Implement physics model of excavator-soil interaction, including the Balovnev excavation tool force equation to estimate forces based on penetration depth, soil mechanics parameters, and tool geometry.
  • Create a control panel button that can load and then follow a waypoint file. This capability will then extend to other mechanical system elements (e.g., bucket path trajectory).
  • Create and implement a failure model using mean time between failure (MTBF) criteria that disables system components (e.g., the seizure of a wheel bearing, stopping wheel rotation) when a very low value (defined by MTBF criteria) is drawn from a random number generator.
  • Investigate the feasibility of implementing terrain modification (i.e., deforming terrain under the mobility platform and excavation tool) for DSS in anticipation of Phase II work.

  • Implement a power consumption model by outputting force data to a file, then post-processing force and torque data to extract power consumption through time.
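
The MTBF-driven failure model in the task plan above can be sketched as follows. The exponential failure approximation and the component names are illustrative assumptions, not the delivered implementation.

```python
# Hedged sketch of the MTBF failure model: each simulation step, a
# component fails when a uniform random draw falls below its per-step
# failure probability, derived from its mean time between failures.
import random

def failure_probability(mtbf_hours, dt_hours):
    """Per-step failure probability for a simple exponential failure model."""
    return dt_hours / mtbf_hours

def step_components(components, dt_hours, rng):
    """components: name -> MTBF in hours. Returns the set failing this step."""
    failed = set()
    for name, mtbf in components.items():
        if rng.random() < failure_probability(mtbf, dt_hours):
            failed.add(name)   # e.g. seize a wheel bearing, stopping rotation
    return failed

rng = random.Random(42)
# A deliberately unreliable bearing (MTBF far below the step size) so the
# failure path is exercised deterministically in this example:
failed = step_components({"wheel_bearing": 0.001}, dt_hours=1.0, rng=rng)
```

A failed component would then be disabled in the physics rig (for the bearing, locking the wheel joint), letting engineers observe mission impact of randomly timed hardware faults.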

The current simulation also incorporated previous tools developed to enable NASA engineers to:

  • Set and calibrate physics engine parameters including gravitational force, static and dynamic friction, engine speed and torque, and damping coefficients.
  • Drive and steer the vehicle in the simulated lunar environment in real-time.
  • Navigate hazards including crater walls, regolith berms and lunar base equipment.
  • Actively steer the view camera, using both static and "follow the rover" modes.

Fig 21: Vehicle on virtual lunar terrain, excavating volume

Figure 21 above and figures 22-29 below represent a pre-implementation concept visualization of how a simulation of a simple three-phase regolith model, and bucket movement within that model, will work. The simulation concentrated on the discrete element simulation of the bucket traversing a cubic volume of simulated regolith in three layers: a top "fluffy" layer of little resistance, a middle layer of medium density, and a lower layer of higher resistance. Infused in this excavation volume will be hard rock hazards which will make hard contact with the bucket and either cause the simulation to pause, reporting an event, or to continue, if the rock hazard is moved out of the way or captured by the bucket.

Please note that no effort to show the disturbed or otherwise removed volume of granular regolith material was attempted in this phase. All efforts here were focused solely on producing an early low-fidelity simulation of the bucket moving through layers of differing densities, infused with rocks, the resulting output being a stream of data for analysis by project partner George Tompkins of LANL. The content and form of that data is still being discussed. The simulation may be drivable by the end user through arbitrary paths in the excavation volume or may be driven through preset waypoint pathways. These modalities are yet to be determined.

Fig 22: Overhead view of excavation volume

Fig 23: Path (waypoints) of excavation

Fig 24: Initial ground penetration

Fig 25: Digging through top surface layer

Fig 26: Penetration

Fig 27: Contact with rock hazard

Fig 28: Close up view of bucket-rock contact

Fig 29: Rock extracted

Implementation of Bucket Excavation Simulation

For this project we implemented a framework within which we can use various equations to model digging resistance. This has not currently involved any alterations to the DSS architecture. This framework was built independently of the physics simulation implementation, to be a module of DSS rather than of ODE.

We worked with professionals in the field (George Tompkins of LANL, partner in the project, and Lee Johnson, a soil/machine dynamics expert at the Colorado School of Mines) for support in conceptualizing the calculations the framework is to support, as well as to standardize the data types to be exported from the running simulation via the framework.

Detail: Volumetric Friction in Digital Spaces

To make this project possible we modeled the friction encountered by a digging bucket when excavating lunar regolith. In order to model this, we built a framework on top of our existing physics engine specifically to model friction when moving through a volume. We had to model this ourselves as no real-time physics simulator implements volumetric friction, as it is too costly and not useful in general circumstances.

The implemented version uses simple bounding boxes and spheres to detect when a managed shape (the bucket) is intersecting the volume (the regolith). When the bounding shapes overlap, the current velocity of the bucket is compared against the resistance axis of the volume. This resistance axis is to allow resistance in one direction but not in the other, which approximates resistance when digging into the regolith but not when drawing the bucket clear.

If the bucket is moving into the regolith then an appropriate friction is calculated. Currently the calculation used is simply to take the velocity of the bucket and apply a scaled counter force. This doesn't allow for bucket angle or shape, simply treating it as a sphere.

While we are aware that a correct implementation would take the force being applied to the volume by the shape and counter that, this data is not available due to the rapid approximation method used by the real-time physics engine. The only force we are able to accurately counter is gravity, as this is applied as a force by the physics simulation.

Future development will be to calculate the angle of the bucket blade and to vary the resistance based on that. The initial implementation simply scales resistance based on the angular difference between the object's motion and the direction of the digging blade, while later models will use the Balovnev calculation to more accurately calculate resistance.

Additional future development may use a gradated space, meaning greater resistance at the bottom of the volume than at the top, as a means of approximating increasing soil density. Another future development will be to "suspend" a shape in the space (a rock in the regolith) and attempt to have it react appropriately.
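
A minimal sketch of the volumetric friction scheme described in this section, assuming a bounding-sphere bucket, an axis-aligned regolith volume and illustrative coefficients (the real module lives inside DSS and applies its force through ODE):

```python
# Illustrative volumetric friction: detect bucket/regolith overlap with a
# bounding sphere and box, then apply a scaled counter force only when the
# bucket is moving along the resistance axis (digging in, not withdrawing).

def sphere_intersects_box(center, radius, box_min, box_max):
    """Squared distance from sphere centre to nearest box point vs radius."""
    d2 = sum(max(lo - c, 0.0, c - hi) ** 2
             for c, lo, hi in zip(center, box_min, box_max))
    return d2 <= radius ** 2

def digging_resistance(center, radius, velocity, box_min, box_max,
                       resistance_axis=(0.0, -1.0, 0.0), scale=50.0):
    """Counter force opposing motion into the regolith; zero when clear."""
    if not sphere_intersects_box(center, radius, box_min, box_max):
        return (0.0, 0.0, 0.0)
    into = sum(v * a for v, a in zip(velocity, resistance_axis))
    if into <= 0.0:            # drawing the bucket clear: no resistance
        return (0.0, 0.0, 0.0)
    return tuple(-scale * v for v in velocity)

# Bucket just below the top surface (y = 0) of a regolith box, moving down:
force = digging_resistance(center=(0.0, 0.5, 0.0), radius=0.6,
                           velocity=(0.0, -1.0, 0.0),
                           box_min=(-5.0, -2.0, -5.0),
                           box_max=(5.0, 0.0, 5.0))
```

This matches the described first pass, which treats the bucket as a sphere and scales velocity into a counter force; blade-angle scaling and the Balovnev calculation would replace the final line in later models.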

Certain improvements for mobile lunar robotic and ISRU systems modeling are anticipated to be within easy reach, given current progress with upgrades in the DSS platform and the growing library of real-time physics approximations. Near-term improvements could include:

  • Integrated real-time capture of instantaneous, average and peak power, and the output of a mission power utilization profile that reflects the choices made by the "driver" of the simulation during a particular run. It is anticipated that this tool could aid in the selection of the optimal power system for various mobility conditions.
  • Modification of the terrain based on the trajectory path of an excavation tool, wheel or other machine-regolith interaction. Note that solving deformation equations in real time necessitates the use of simplified approximation methods.
  • Improved-fidelity physics approximations for machine-regolith interaction including slip conditions on talus slopes, negative hazards (smaller craters), hardness of rock features, regolith bearing capacity, and potentially even dust regimes.
  • Real-time approximations of thermal conditions for the vehicle based on sun angle to the radiator, including transition to shadowed regions. The thermal model could also potentially include a simple model of dust effects.
  • Low-fidelity physics modeling of drill behavior in variable soil and rock conditions.
  • Physics modeling of more complex lunar systems, including robotic and human vehicles as well as lunar habitation systems. Elements could include robotic lunar construction equipment and further refinement of ISRU systems and subsystems.
  • The development of a multi-user capability, enabling mobile simulation agents or "drivers" to participate via an Internet connection.

Access to the complete Lunar Excavator Simulation project materials, simulation code:

For the results of this work, see figures 30-39 below which show the "digging trials" of three simulated excavators in the analog lunar regolith volume. Common lunar outpost and ISRU hardware concepts are shown as backdrop (source book credit: Moonrush by Dennis Wingo).

Figure 30. DSS Lunar Polar simulation showing three excavators and control windows.

Figure 31. Close-up of clamshell in excavation yard.

Figure 32. Clamshell penetrates into lunar regolith.

Figure 33. Close-up of bucket wheel with front end loader and clamshell in background.

Figure 34. Overview of Polar outpost environment showing landing pads on left.

Figure 35. Close-up of static lunar base elements.

Figure 36. Front end loader penetrates lunar regolith to fill bucket.

Figure 37. Bucket wheel excavator traverses into dump zone. Note log file window in upper left.

Figure 38. Shadow of crane crosses excavation yard. Note statistical input window at bottom, which allows real-time input of failure criteria.

Figure 39. Lunar excavation yard. Note Balovnev soil mechanics model parameters, accessible in real time through window in upper right corner.

D. Development of a "Drill Jumbo" simulation application for the mining industry (detailed description demonstrating scripting)

During the 0.6 release of the Digital Spaces (DSS) Prototyper in late 2006 we delivered a proof of concept Phase III commercial application for the mining industry. This was the first application using the full Python scripting interface. This was a commercial (tech transfer) application produced for the global mining company Xstrata (formerly Falconbridge) of Canada. This project was directed by project expert Brad Blair (figure 40) and Peter Rutherford, senior systems engineer at Xstrata in Canada. The application involved the simulation of a drill jumbo vehicle built by the mining manufacturing firm Atlas Copco of Sweden (figures 41-42). This application is available for download (see below) and will be described in more detail next (including the physics scripting through Python).

Fig 40: Mining economist Brad Blair developing new scripting interfaces to Drill Jumbo simulation

Fig 41: Atlas Copco drill jumbo at work in mine

Fig 42: DSS-prototyper Atlas Copco drill jumbo simulation with Python scripting

Operating the Drill Jumbo application

After you have downloaded Digital Spaces and the Drill Jumbo Space and installed the client application, the following interface elements can be employed to operate the virtual vehicle.

Moving the viewpoint (fig 43)

  • The Forward and Back arrow keys move the camera forward and backward
  • The Left and Right arrow keys rotate the camera left and right
  • PgUp tilts the camera up
  • PgDown tilts the camera down

Fig 43: Drill Jumbo simulation at the drift

Moving the vehicle

W - Forward

A - Turn Left

D - Turn Right

S - Reverse

E - Brake

These keys are laid out in a triangle on the left of your keyboard:

  W E
A S D


Using the GUI

The arrow cursor on the GUI is controlled with the mouse. The GUI uses standard concepts like buttons and lists.

Fig 44: Boom controls

Controlling the arms

Figure 44 above shows a list of the joints making up the arms.

For example:

  Right Boom - Left/Right
  Right Boom - Up/Down
  Right Boom - Extend/Retract
  Right Wrist - Up/Down
  Right Wrist - Left/Right
  Right Wrist - Twist

As you can see, joints form logical groups (such as Right Boom). Below this list there are two buttons, "Assign to vertical axis" and "Assign to horizontal axis". These buttons assign the currently highlighted joint in the list to the corresponding axis.

Below these buttons are five "radio" buttons, arranged in a + pattern.

When you assign a joint to the Vertical axis (the name will be shown to the left of the + arrangement), the buttons in the top and bottom positions will affect this joint.

When you assign a joint to the Horizontal axis (the name will be shown to the right of the + arrangement), the buttons in the left and right positions will affect this joint.

The center position will stop movement.

Assigning a joint to both axes will usually cause it to be locked in place.

This demonstration model is simple and allows you to assign any joint to either axis. To make it easier to use, however, we recommend assigning left/right joints to the horizontal axis and up/down joints to the vertical axis.

Adjusting the shadows

The keys 1, 2, 3 and 4 (not on the number pad) select the type of shadow used in the scene (match to figure 45 below).

1 - Stencil Modulative - Crisp but dark

2 - Stencil Additive - Crisp, more colour accurate than 1

3 - Texture Modulative - Least accurate, but fast

4 - No shadows

Fig 45: Four types of shadow presented in the scene

Drill Jumbo Control Script

This section will detail the underlying Python script that controls the above drill jumbo application.

What does it do?

The Drill Jumbo Control Script allows control of a set of physics-rigged joints. The developer specifies the joint name, a descriptive name, and the motor speed and force. The end user is presented with a GUI that allows them to select joints, assign them to the axes of an imaginary joystick, and move the joints about those axes.

The Script

Below is a section-by-section description of the process involved. The text taken from the script is set in a different font from the main text of this document, and the commentary usually refers to the code above it.




This section checks for the existence of an object "jointControl". If the object does not exist, the script is being run for the first time and needs to perform initialization. This initialization is shown below:
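The guard code itself does not appear in this excerpt. A minimal, hypothetical sketch of the run-once idiom it describes, assuming the engine re-executes the whole script in a persistent namespace each heartbeat (the names and guard expression here are illustrative, not the delivered code):

```python
# Simulate an engine that re-runs the same script every heartbeat in a
# persistent namespace, as DSS does for Python agent scripts.
script = '''
if "jointControl" not in globals():
    # First heartbeat only: perform one-time initialization.
    init_count = globals().get("init_count", 0) + 1
    class JointControlManager:
        def PerformHeartbeat(self):
            pass
    jointControl = JointControlManager()
jointControl.PerformHeartbeat()
'''

namespace = {}              # stands in for the script's persistent globals
exec(script, namespace)     # first heartbeat: initialization runs
exec(script, namespace)     # later heartbeats: guard skips initialization
```

After two simulated heartbeats the initialization has run exactly once, which is the behavior the existence check is designed to provide.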

class JointControlManager:

The "class" of the object. This can be considered a template for how the script will behave.

  agentName = "undgrdmngdrl#1"

As the Agent Manager performs substitution for the agent name when loading its scene graph definition, we store this in a variable for convenience.

  joints = (
    { 'name': agentName + "_jnt_rbj1chas1", 'descname':"Right Boom - Left/Right", 'speed':0.2, 'force':900.0 },
    { 'name': agentName + "_jnt_rb1rbj1", 'descname':"Right Boom - Up/Down", 'speed':0.2, 'force':900.0 },
    { 'name': agentName + "_jnt_rb1rb2", 'descname':"Right Boom - Extend/Retract", 'speed':0.2, 'force':900.0 },
    { 'name': agentName + "_jnt_rbj2rb2", 'descname':"Right Wrist - Up/Down", 'speed':0.2, 'force':900.0 },
    { 'name': agentName + "_jnt_rb3rbj2", 'descname':"Right Wrist - Left/Right", 'speed':0.2, 'force':900.0 },
    { 'name': agentName + "_jnt_rbj3rb3", 'descname':"Right Wrist - Twist", 'speed':0.2, 'force':900.0 },
    { 'name': agentName + "_jnt_rdcrbj3", 'descname':"Right Drill - Up/Down", 'speed':0.2, 'force':900.0 },
    { 'name': agentName + "_jnt_rdr1rdc", 'descname':"Right Drill - Extend/Retract", 'speed':0.2, 'force':900.0 },
    { 'name': agentName + "_jnt_rdr2rdr1", 'descname':"Right Drill - Rail 1", 'speed':0.2, 'force':900.0 },
    { 'name': agentName + "_jnt_rdrdr2", 'descname':"Right Drill - Rail 2", 'speed':0.2, 'force':900.0 },
    { 'name': agentName + "_jnt_cbj1chas1", 'descname':"Centre Boom - Left/Right", 'speed':0.2, 'force':900.0 },
    { 'name': agentName + "_jnt_cb1cbj1", 'descname':"Centre Boom - Up/Down", 'speed':0.2, 'force':900.0 },
    { 'name': agentName + "_jnt_cb2cb1", 'descname':"Centre Boom - Extend/Retract 1", 'speed':0.2, 'force':900.0 },
    { 'name': agentName + "_jnt_cb3cb2", 'descname':"Centre Boom - Extend/Retract 2", 'speed':0.2, 'force':900.0 },
    { 'name': agentName + "_jnt_bskt2cb3", 'descname':"Centre Basket - Up/Down", 'speed':0.2, 'force':900.0 },
    { 'name': agentName + "_jnt_lbk1chas1", 'descname':"Left Boom - Left/Right", 'speed':0.2, 'force':900.0 },
    { 'name': agentName + "_jnt_lb1lbj1", 'descname':"Left Boom - Up/Down", 'speed':0.2, 'force':900.0 },
    { 'name': agentName + "_jnt_lb2lb1", 'descname':"Left Boom - Extend/Retract", 'speed':0.2, 'force':900.0 },
    { 'name': agentName + "_jnt_lbj2lb2", 'descname':"Left Wrist - Up/Down", 'speed':0.2, 'force':900.0 },
    { 'name': agentName + "_jnt_lb3lbj2", 'descname':"Left Wrist - Left/Right", 'speed':0.2, 'force':900.0 },
    { 'name': agentName + "_jnt_lbj3lb3", 'descname':"Left Wrist - Twist", 'speed':0.2, 'force':900.0 },
    { 'name': agentName + "_jnt_ldclb3", 'descname':"Left Drill - Up/Down", 'speed':0.2, 'force':900.0 },
    { 'name': agentName + "_jnt_ldr1ldc", 'descname':"Left Drill - Extend/Retract", 'speed':0.2, 'force':900.0 },
    { 'name': agentName + "_jnt_ldr2ldr1", 'descname':"Left Drill - Rail 1", 'speed':0.2, 'force':900.0 },
    { 'name': agentName + "_jnt_ldldr2", 'descname':"Left Drill - Rail 2", 'speed':0.2, 'force':900.0 },
    { 'name': agentName + "_jnt_ss1sa1", 'descname':"Left Stabiliser - Up/Down", 'speed':0.2, 'force':900.0 },
    { 'name': agentName + "_jnt_ss2sa2", 'descname':"Right Stabiliser - Up/Down", 'speed':0.2, 'force':900.0 },
  )

This data structure is what the script works on. It defines the names, descriptions, speeds and forces used in controlling the joints. Each entry is a Python dictionary, which maps keys to values.

name: The logical name of the joint, as defined in the scene graph. In the scene graph definition file (.scene) there is a substitution for ==agentName==; this is what the agentName variable represents.

descname: A human-readable "pretty" name for the joint.

speed: The speed at which the joint will be moved. For a slider joint (boom, piston) this is in meters per second; for a hinge joint it is in radians per second.

force: The amount of torque used to move the joint. It is measured in Newton-meters.
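Given the heartbeat rate described later (one hundredth of a second), these speed values imply a small per-heartbeat displacement. A quick sanity check, using illustrative arithmetic only:

```python
# A hinge joint driven at 0.2 rad/s with a 1/100 s heartbeat advances
# roughly 0.002 rad (about a tenth of a degree) per heartbeat.
speed = 0.2              # rad/s for a hinge joint (m/s for a slider)
heartbeat = 1.0 / 100.0  # seconds per heartbeat
per_beat = speed * heartbeat
```

This is why the motors can be left running between heartbeats: a single missed update moves a joint only a fraction of a degree.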

  physicsWorlds = list()
  for scenegraphmanager in dss_core.DI3DScenegraphManager1.GetFactoriesWithInterface():
    physicsWorld = dss_core.DIPhysicsWorld.cast( scenegraphmanager )
    if physicsWorld:
      physicsWorlds.append( physicsWorld )

This section collects all the DI3DScenegraphManager1 factories and converts them to physics worlds for working with the joints. It is not possible to use GetFactoriesWithInterface directly for DIPhysicsWorld, as DIPhysicsWorld objects are not exported as Factory objects (they are produced by DIPhysics1 factories). However, the common DI3DScenegraphManager uses the conversion process to provide easy access to the DIPhysicsWorld1 it is using.
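The collect-and-convert pattern can be sketched in plain Python, independent of DSS (all names below are illustrative stand-ins, not dss_core APIs):

```python
def as_physics_world(manager):
    # Mirrors the cast above: yields the physics world when the manager
    # provides one, otherwise None.
    return getattr(manager, "physics_world", None)

class SceneManagerWithPhysics:
    physics_world = "physics world #1"   # illustrative stand-in value

class SceneManagerWithoutPhysics:
    pass                                 # conversion will fail for this one

physicsWorlds = []
for manager in (SceneManagerWithPhysics(), SceneManagerWithoutPhysics()):
    world = as_physics_world(manager)
    if world:
        physicsWorlds.append(world)
```

Managers that cannot be converted are simply skipped, so the resulting list contains only usable physics worlds.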

  ceguiManagers = dss_core.DICEGUI1.GetFactoriesWithInterface()

This section collects all the DICEGUI1 Factories. As these are going to be used directly, there is no conversion process.

  keyboardManagers = dss_core.DIUserInputKeyboard1.GetFactoriesWithInterface()

This section collects all the DIUserInputKeyboard1 Factories.

  ogreSceneManagers = list()
  for visualOgre in dss_core.DI3DVisualsOgre1.GetFactoriesWithInterface():
    sceneManager = dss_core.DIOGRESceneManager.cast( visualOgre )
    if sceneManager:
      ogreSceneManagers.append( sceneManager )

This section performs a similar function to the previous code, whereby a collection of factories is retrieved and then converted. In this case, DI3DVisualsOgre1 factories are being converted to Ogre Scene Managers. (This step should not be necessary but is an artifact from earlier development.)

You may have noticed that in many of the places where the script collects all factories, you would expect only one to be present. There are very few reasonable circumstances in which more than one copy of Ogre, CEGUI, or the Scenegraph Manager would be available. However, the base design of Digital Spaces allows for additional implementations of any DInterface or Factory.

  for joint in joints:
    if 'descname' not in joint:
      joint['descname'] = joint['name']

This step performs some sanitization on the joints array. If a descname property is not present for a joint entry, one is created using the logical name of the joint.

  # We could do this in the class definition rather than in __init__, since it's only used once, but it makes more sense in __init__
  def __init__(self):

This function defines initialization performed every time an instance of this class is created. All the previous initialization is performed when the class is first declared; an approximate C++ equivalent would be static members.
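The distinction can be illustrated with ordinary Python (a generic example, not the drill jumbo code):

```python
class Example:
    # Class-level: evaluated once when the class statement runs, and
    # shared by every instance -- comparable to C++ static members.
    shared = {"joints": 27}

    def __init__(self):
        # Instance-level: evaluated again for each new instance.
        self.own = []

a, b = Example(), Example()
# a.shared and b.shared are the same object; a.own and b.own are not.
```

This is why the joints table and the factory lists above live at class level: they are built once, while __init__ handles per-instance setup such as GUI creation.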

    for CEGUI in self.ceguiManagers:
      jcWindow = CEGUI.getWindow("JointControl")
      if not jcWindow:

This section and those below are responsible for creating the GUI presented by the script. The code above goes through all the previously discovered CEGUI factories and tests for the existence of a named window. If the window does not exist, the code below is executed, creating it.

        jcWindow = CEGUI.createWindow("EnhancedFrameWindow", "JointControl" )
        jcWindow.setProperty("WindowSets", "DEFAULT" )
        CEGUI.getWindow("System/AutoLayout").addChildWindow( jcWindow )
        jcWindow.setSize( ( (0, 275), (0.4, 0) ) )
        jcWindow.setText( "Joint Control" )

An Enhanced Frame Window is created, placed in the DEFAULT window set, and added to the system-defined "System/AutoLayout" grouping window. (This allows the system to position it so as not to overlay other windows.) Then the window's size and text are set. In an Enhanced Frame Window, this text is displayed in the title bar.

        listBox = CEGUI.createWindow("ItemListbox", "JointControlList")
        jcWindow.addChildWindow( listBox )
        listBox.setPosition( ( (0, 4), (0, 20) ) )
        listBox.setSize( ( (1, -8), (1, -120) ) )

This section creates and lays out an Item Listbox, which is displayed on the previously created Enhanced Frame Window.
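CEGUI sizes and positions are given as (scale, offset) pairs: a fraction of the parent extent plus a pixel offset. Assuming the parent window is 275 pixels wide (as set above) and, for illustration, 400 pixels tall, the list box size ((1, -8), (1, -120)) resolves as:

```python
def udim(scale, offset, parent_extent):
    # CEGUI-style unified dimension: fraction of the parent plus pixels.
    return scale * parent_extent + offset

width  = udim(1, -8, 275)    # full parent width minus 8 px  -> 267
height = udim(1, -120, 400)  # full parent height minus 120 px -> 280
```

The 120-pixel margin at the bottom is what leaves room for the buttons, radio cluster and edit boxes created next.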

        for curJoint in self.joints:
          listItem = CEGUI.createWindow("ListboxItem", curJoint['name'] )
          listItem.setText( curJoint['descname'] )
          listItem.setUserString( "name", curJoint['name'] )
          dss_core.DICEGUIItemListBase.cast(listBox).addItem( dss_core.DICEGUIItemEntry.cast(listItem) )

This section populates the created list box. It does this by going through all the joints defined at the beginning of the script and using the CEGUI factory to create a Listbox Item window for each. Each Listbox Item's display text is set to the descriptive name of the joint, and the setUserString function stores the joint's logical name.

The last line of this section inserts the Listbox Item into the Item Listbox. The Listbox Item window is converted to a Listbox Item, the Item Listbox Window is converted to an Item Listbox, and the Listbox Item is added to the Item Listbox.

The reason for all this casting is that the CEGUI factory produces DICEGUIWindow1 objects. This is the base class for all CEGUI windows and provides the most commonly used functionality. To access type-specific functionality (such as adding Listbox Items to an Item Listbox), DICEGUIWindow objects need to be converted to the DInterface that provides that functionality.
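A plain-Python analogy of this cast pattern (the classes here are illustrative, not the dss_core interfaces):

```python
class Window:                      # plays the role of DICEGUIWindow
    def setText(self, text):
        self.text = text

class ItemListbox(Window):         # plays the role of DICEGUIItemListbox
    def __init__(self):
        self.items = []
    def addItem(self, item):
        self.items.append(item)

def cast(obj, interface):
    # Like the dss_core casts: the object itself when it supports the
    # requested interface, otherwise None.
    return obj if isinstance(obj, interface) else None

w = ItemListbox()                  # elsewhere held as a generic Window
lb = cast(w, ItemListbox)          # conversion succeeds
lb.addItem("Right Boom - Twist")   # list-specific functionality now usable
```

A failed cast returns None rather than raising, which is why the script always tests the result before using it.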

        assignVert = CEGUI.createWindow("Button", "JointControl_AssignVert" )
        jcWindow.addChildWindow( assignVert )
        assignVert.setPosition( ( (0, 4), (1, -100) ) )
        assignVert.setSize( ( (0.5, -4), (0, 16) ) )
        assignVert.setFont( "DejaVuSans-8-noscale" )
        assignVert.setText( "Assign to Vertical Axis" )

This section creates a Button, adds it to the Enhanced Frame Window, sets its position, size and font, then assigns it text. Buttons display this text on the button as a label. This button will be used to assign joints to the virtual joystick.

        assignHorz = CEGUI.createWindow("Button", "JointControl_AssignHorz" )
        jcWindow.addChildWindow( assignHorz )
        assignHorz.setPosition( ( (0.5, 0), (1, -100) ) )
        assignHorz.setSize( ( (0.5, -4), (0, 16) ) )
        assignHorz.setFont( "DejaVuSans-8-noscale" )
        assignHorz.setText( "Assign to Horizontal Axis" )

This section creates the matching button in the same way: it is added to the Enhanced Frame Window, positioned, sized, given a font, and labeled. This button will be used to assign joints to the horizontal axis of the virtual joystick.

        directionUp = CEGUI.createWindow( "RadioButton", "JointControl_Up" )
        jcWindow.addChildWindow( directionUp )
        directionUp.setPosition( ( (0.5, -8), (1, -84) ) )
        directionUp.setSize( ( (0, 16), (0, 16) ) )

        directionNone = CEGUI.createWindow( "RadioButton", "JointControl_None" )
        jcWindow.addChildWindow( directionNone )
        directionNone.setPosition( ( (0.5, -8), (1, -52) ) )
        directionNone.setSize( ( (0, 16), (0, 16) ) )
        directionNoneRadio = dss_core.DICEGUIRadioButton.cast( directionNone )
        directionNoneRadio.setSelected( True )

        directionDown = CEGUI.createWindow( "RadioButton", "JointControl_Down" )
        jcWindow.addChildWindow( directionDown )
        directionDown.setPosition( ( (0.5, -8), (1, -20) ) )
        directionDown.setSize( ( (0, 16), (0, 16) ) )

        directionLeft = CEGUI.createWindow( "RadioButton", "JointControl_Left" )
        jcWindow.addChildWindow( directionLeft )
        directionLeft.setPosition( ( (0.5, -40), (1, -52) ) )
        directionLeft.setSize( ( (0, 16), (0, 16) ) )

        directionRight = CEGUI.createWindow( "RadioButton", "JointControl_Right" )
        jcWindow.addChildWindow( directionRight )
        directionRight.setPosition( ( (0.5, 24), (1, -52) ) )
        directionRight.setSize( ( (0, 16), (0, 16) ) )

This section creates the radio buttons that make up the virtual joystick. Each name suggests the function the button provides, with None being the center of the virtual joystick.

        verticalAssigned = CEGUI.createWindow( "MultiLineEditbox", "JointControl_AssignedVert" )
        jcWindow.addChildWindow( verticalAssigned )
        verticalAssigned.setPosition( ( (0, 4), (1, -84) ) )
        verticalAssigned.setSize( ( (0.5, -44), (0, 80) ) )

        horizontalAssigned = CEGUI.createWindow( "MultiLineEditbox", "JointControl_AssignedHorz" )
        jcWindow.addChildWindow( horizontalAssigned )
        horizontalAssigned.setPosition( ( (0.5, 40), (1, -84) ) )
        horizontalAssigned.setSize( ( (0.5, -44), (0, 80) ) )

This section creates two Multi Line Edit boxes. These are used to display which joints are currently assigned to the virtual joystick. Labels could be used, however labels do not wrap text properly.

This is the last of the GUI creation code.

    for curJoint in self.joints:
      for physicsWorld in self.physicsWorlds:
        curPhysicsJoint = physicsWorld.FindJoint( curJoint['name'] )
        if curPhysicsJoint:
          #print "Setting ", curJoint['name'], " to 0"
          curPhysicsJoint.SetMotorParameters( 0, curJoint['force'], 0 )

This section initializes the physics joints. For every physics world, the script searches for each joint by its logical name. If a joint is found, its motor parameters are set to zero speed using the defined joint force. This holds the joints in place until adjusted by the user; if this were not done, all the joints would move freely. The third parameter selects which axis of the joint the force should be applied to; this script assumes the motor force should be applied to axis 0.

    self.curVertJoint = None
    self.curHorzJoint = None

This initializes both axes of the virtual joystick to start unassigned.

This is the end of the initialization section.

  def PerformHeartbeat(self):

The PerformHeartbeat function is designed to be called every heartbeat (one hundredth of a second). It performs the repetitive read-evaluate-assign work of the script.
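A fixed-rate driver loop of this shape might look as follows (a sketch of the calling pattern only, not the engine's actual scheduler):

```python
import time

def heartbeat_loop(tick, beats, hz=100):
    """Call tick() at a fixed rate, sleeping away whatever time remains
    in each period; DSS drives PerformHeartbeat at roughly 100 Hz."""
    period = 1.0 / hz
    for _ in range(beats):
        start = time.perf_counter()
        tick()
        remaining = period - (time.perf_counter() - start)
        if remaining > 0:
            time.sleep(remaining)

calls = []
heartbeat_loop(lambda: calls.append(1), beats=3, hz=1000)
```

Each tick corresponds to one full execution of the script, which is why the guard at the top of the script matters: only the first tick pays the initialization cost.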

    HorzMotion = 0
    VertMotion = 0

These two variables track the directions currently selected on the axes of the virtual joystick.

    for CEGUI in self.ceguiManagers:

For each known GUI, the state of the GUI components needs to be checked. This includes the Assign-to-Axis buttons as well as the radio buttons making up the virtual joystick.

      # Check the assign axis buttons
      assignVertWindow = CEGUI.getWindow("JointControl_AssignVert")
      assignVertButton = dss_core.DICEGUIButton.cast( assignVertWindow )
      if assignVertButton:
        if assignVertButton.isPushed():
          jcListWindow = CEGUI.getWindow("JointControlList")
          jcList = dss_core.DICEGUIItemListbox.cast( jcListWindow )
          selectedItem = jcList.getFirstSelectedItem()
          if selectedItem:
            selectedWindow = dss_core.DICEGUIWindow.cast( selectedItem )
            for curJoint in self.joints:
              jointName = selectedWindow.getUserString("name")
              if curJoint['name'] == jointName:
                self.curVertJoint = curJoint
            assignedVert = CEGUI.getWindow("JointControl_AssignedVert")
            assignedVert.setText( selectedWindow.getText() )

This section retrieves the previously created button as a DICEGUIWindow, converts it to a DICEGUIButton, and, if the conversion was successful, checks whether it is pushed (in the process of being clicked). If so, the script retrieves the Item Listbox (which contains the list of joints) as a DICEGUIWindow, converts it to an Item Listbox, and retrieves the first selected item. If there is a selected item, the script converts the DICEGUIListboxItem back to a DICEGUIWindow and retrieves the logical name of the joint (using getUserString). The script then goes through the list of joints (defined at the start of the script), and if the joint is found (matched by logical name) its information is stored in curVertJoint. The descriptive name of the joint is also displayed in the Multi Line Editbox.

      assignHorzWindow = CEGUI.getWindow("JointControl_AssignHorz")
      assignHorzButton = dss_core.DICEGUIButton.cast( assignHorzWindow )
      if assignHorzButton:
        if assignHorzButton.isPushed():
          self.curHorzJoint = list()
          jcListWindow = CEGUI.getWindow("JointControlList")
          jcList = dss_core.DICEGUIItemListbox.cast( jcListWindow )
          selectedItem = jcList.getFirstSelectedItem()
          if selectedItem:
            selectedWindow = dss_core.DICEGUIWindow.cast( selectedItem )
            for curJoint in self.joints:
              jointName = selectedWindow.getUserString("name")
              if curJoint['name'] == jointName:
                self.curHorzJoint = curJoint
            assignedHorz = CEGUI.getWindow("JointControl_AssignedHorz")
            assignedHorz.setText( selectedWindow.getText() )

This section does the same as the previous one, except that it checks and assigns to the horizontal axis rather than the vertical axis.

      # Check the directional radio buttons
      directionUpWindow = CEGUI.getWindow("JointControl_Up")
      directionUpRadio = dss_core.DICEGUIRadioButton.cast( directionUpWindow )
      if directionUpRadio:
        if directionUpRadio.isSelected():
          VertMotion = -1

      directionDownWindow = CEGUI.getWindow("JointControl_Down")
      directionDownRadio = dss_core.DICEGUIRadioButton.cast( directionDownWindow )
      if directionDownRadio:
        if directionDownRadio.isSelected():
          VertMotion = 1

      directionLeftWindow = CEGUI.getWindow("JointControl_Left")
      directionLeftRadio = dss_core.DICEGUIRadioButton.cast( directionLeftWindow )
      if directionLeftRadio:
        if directionLeftRadio.isSelected():
          HorzMotion = -1

      directionRightWindow = CEGUI.getWindow("JointControl_Right")
      directionRightRadio = dss_core.DICEGUIRadioButton.cast( directionRightWindow )
      if directionRightRadio:
        if directionRightRadio.isSelected():
          HorzMotion = 1

This section checks each of the radio buttons making up the virtual joystick. In each case it uses the CEGUI factory to get a DICEGUIWindow version of the button, then converts it to a DICEGUIRadioButton. If this conversion was successful, it checks whether the button is selected; if so, it sets the multiplier to be used when adjusting the joint currently assigned to the appropriate axis.
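The mapping from radio-button state to the HorzMotion/VertMotion multipliers reduces to the following (a condensed restatement of the logic above, not a replacement for it):

```python
def joystick_multipliers(up, down, left, right):
    # Up and left drive the joint negatively, down and right positively;
    # the centre (None) position leaves both multipliers at zero.
    VertMotion = -1 if up else (1 if down else 0)
    HorzMotion = -1 if left else (1 if right else 0)
    return VertMotion, HorzMotion
```

Because the multipliers are re-derived from the radio buttons on every heartbeat, releasing the joystick back to the centre position stops the joints on the next beat.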

This is the last of the GUI status/event checking.

    # Apply the forces to the physics joints
    for physics in self.physicsWorlds:
      if self.curHorzJoint:
        curHorzJoint = physics.FindJoint( self.curHorzJoint['name'] )
        if curHorzJoint:
          curHorzJoint.SetMotorParameters( self.curHorzJoint['speed'] * HorzMotion, self.curHorzJoint['force'], 0 )
      if self.curVertJoint:
        curVertJoint = physics.FindJoint( self.curVertJoint['name'] )
        if curVertJoint:
          curVertJoint.SetMotorParameters( self.curVertJoint['speed'] * VertMotion, self.curVertJoint['force'], 0 )

This section sets the motor forces applied to the currently assigned joints. It uses the multipliers calculated in the previous section to calculate what speed to apply to the current joints.

    # Check if we are toggling shadows
    for keyboard in self.keyboardManagers:
      if keyboard.IsKeyDown("1"):
        for sceneManager in self.ogreSceneManagers:
          sceneManager.SetShadowTechnique( 1 )
      if keyboard.IsKeyDown("2"):
        for sceneManager in self.ogreSceneManagers:
          sceneManager.SetShadowTechnique( 2 )
      if keyboard.IsKeyDown("3"):
        for sceneManager in self.ogreSceneManagers:
          sceneManager.SetShadowTechnique( 3 )
      if keyboard.IsKeyDown("4"):
        for sceneManager in self.ogreSceneManagers:
          sceneManager.SetShadowTechnique( 0 )

This section is a late addition to the script. It checks the keyboard managers to see whether the keys 1, 2, 3 or 4 are being pressed. If so, it uses the previously collected Scene Managers to set the type of shadow used during rendering.

This is the last section of the Perform Heartbeat function.

jointControl = JointControlManager()

This line creates the jointControl object using the JointControlManager class; at this point the __init__ function is executed. This is the variable checked for at the top of the script, and its existence means that initialization has already been performed.


jointControl.PerformHeartbeat()

This line calls the PerformHeartbeat function on the created jointControl object. Because the entire script is executed linearly every heartbeat, by this point we can be sure the jointControl object has been created and initialized, ready to perform the work defined in the PerformHeartbeat function.


E. Design simulation framework for Human-NEO mission

The final major project of the period of performance included model construction and animation of Constellation hardware for a crewed NEO mission feasibility study in support of teams at NASA ARC and JSC. This work was done with models built in and rendered from 3D Studio Max and has not yet been implemented in DSS. If NASA commissions future work in this area, we hope to be asked to use DSS to develop a full simulation of the soft-contact "berthing" of the Orion+NSAM with the NEO surface. We present some views of this work in figures 46-49 below.

Figure 46. Orion/NSAM vehicle approaching NEO, matching rotation, searching for target surface areas.

Figure 47. Soft airbag and penetrometer sensor contact with NEO surface, determination of secure area, firing harpoon tethers.

Figure 48. EVA and operation of sampling systems

Figure 49. Departure from NEO surface, leaving behind science base station.

Access to the NEO mission work is at:

2) DSS Final Packaging � delivery web site

The Final Packaging (1.0 developer release) of DSS includes documentation and two example applications designed to illustrate the development of 3D scenes and Python scripting functionality.

Find the DSS delivery web site (figure 50 below) at:

Figure 50: DSS application, codebase and documentation delivery site

DSS Final Packaging � Digital Spaces source code

DSS source code and executables are available. Source can be unpacked and built using Microsoft Visual Studio.

DSS Final Packaging � example applications

Views of the "White Room" and "Lunar Playground" demo application spaces are shown in figure 51 below.

Figure 51: Three views of "White Room" and "Lunar Playground" demo applications

DSS Final Packaging � platform documentation

Platform documentation can also be accessed at the above site and includes:

1. Tutorial on installing a DSS space.

2. Default keyboard controls.

3. Delivery Documentation in PDF format.

4. Agent Rigging guide in PDF format.

5. Digital Spaces API.

DSS Final Packaging � developer site including user forums

The DSS platform developer site including user forums is set up in a Wiki and can be accessed at the following site:

D) An assessment of technical feasibility

DSS was implemented using open source components and open protocols to support 3D virtual environment simulations. Several applications were developed to verify that the system was ready for use in real projects, including work with NASA's Constellation program and applications for the mining industry.

The unique or novel features of the DSS Prototyper platform include the direct integration of best-of-breed open source components, powerful scripting (Python), and physics rendered in a high performance 3D client interface using standard APIs and a plug-in architecture. The platform has been a technical success, is now being applied commercially in the mining industry, and could find wide applicability in domains from industrial training to medical treatment modalities. More on these applications is covered in the next sections.

E) The potential applications of the project results in Phase III both for NASA purposes and for commercial purposes

1) NASA and Commercial Applications in Development

Efforts over the two years of DSS development led to the applications for NASA lunar robotics discussed earlier in this report. The work on modeling a bucket ladder excavator led to participation in a successful SBIR Phase II by the Colorado company sysRAND. The mining applications developed with the Colorado School of Mines and Xstrata to produce haul truck and Drill Jumbo simulations remain active efforts within both organizations.

In one additional application, NASA ARC invited DigitalSpace to participate in an effort to define a lightweight lunar lander mission (see figure 52 below); this work was presented at NASA Headquarters in August 2006.

Fig 52: NASA ARC lightweight lander

Additional mining application

In addition to the Drill Jumbo application described earlier in this report, DigitalSpace developed an open-pit mining simulation to interest the mining and construction industries. The results of that project are shown in figures 53-56 below. The use of the DSS platform for real-time simulation applications within the mining industry has great potential. The mine simulation was developed to facilitate NASA technology transfer, an important element of the SBIR program.

Note that the mining simulation below represents an early stage model. Completed elements included vehicle mobility, geometric layout of the open pit and haulage ramp, automation of haul truck traverse using waypoints, and construction of the general environment.

Current capabilities of the mine simulation using DSS platform include:

  • Visualization of both open pit and underground mining operations
  • Virtual models of selected mining equipment, built and assigned physics properties
  • Real-time viewing of pit or underground 3D mine data by managers or engineers, with the ability to steer the camera to any view angle desired
  • Use as an operations planning and training tool with 3D visual feedback
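The waypoint traverse used by the automated haul truck can be illustrated with a minimal 2D sketch (the DSS version also involves vehicle physics and steering; this shows only the point-to-point logic, with illustrative coordinates):

```python
import math

def step_toward(pos, waypoint, speed, dt):
    """Advance pos toward waypoint by speed*dt metres; return the new
    position and whether the waypoint has been reached."""
    dx, dy = waypoint[0] - pos[0], waypoint[1] - pos[1]
    dist = math.hypot(dx, dy)
    if dist <= speed * dt:
        return waypoint, True          # close enough: snap to the waypoint
    scale = speed * dt / dist
    return (pos[0] + dx * scale, pos[1] + dy * scale), False

# Drive from the loading area toward a single waypoint:
pos, arrived = (0.0, 0.0), False
while not arrived:
    pos, arrived = step_toward(pos, (10.0, 0.0), speed=5.0, dt=0.1)
```

Chaining such steps over a list of waypoints produces the traverse shown in figures 53 and 55.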

Fig 53: Plan view of mining haul truck application showing numerous waypoints

Fig 54: Loading area

Fig 55: Traversing between waypoints

Fig 56: Offloading at crusher

2) Further Detailed Analysis of Potential Applications

i) Potential NASA Applications

There are several potential NASA Applications for the Digital Spaces real-time 3D platform. The use of DSS in support of Lunar precursor robotic missions was described earlier in this report. Other potential NASA applications include design simulation for operability and full virtual vehicle lifecycle management (ISS completion, Constellation/CEV), crew health and safety work practices and training, EVA crew refresher training, mobile agents and human/robotic systems design and field testing, telepresence interface development, lunar and Mars base design concepts, virtual astronaut modeling including haptic interfaces, and NASA outreach and K-12 programs (such as Space Camp).

ii) Potential Non-NASA Commercial Applications

Any remotely controlled vehicle in mining, construction, security, hazardous waste handling, military operations and other commercial applications requires virtual environments containing high performance physics interfaces for development of viable vehicle designs, interaction scenarios, and eventually for the training of operators and day to day operations. The needs for tele-operations in the terrestrial mining, construction and manufacturing industries alone could create a multi-billion dollar annual business. We have identified the following applications in design simulation that could be served by the platform:

1. Collaborative engineering of automated excavating, drilling and hauling equipment for deep mining where heat and hazards make ore extraction costly or prohibitive.

2. The design of automated construction systems including cranes and assemblers.

3. Defense and security applications in the design, training and operations for teleoperated vehicles in the battlefield or for facilities surveillance.

4. Planning and training for emergency first responder hazardous area robotics.

5. Industrial design, training and operations applications for robotically-equipped factories.

6. Software games in the "robot wars" genre and for education/outreach in space and engineering education.

Other non-NASA commercial applications

As open source platforms gain ground in industrial simulation, the game industry and education, there is a growing market for a platform like DSS. In 2004, DSS was used in a project teaching safety practices to children with autism in a controlled study funded by the National Institutes of Health at Virtual Reality Aids (VRA) in Raleigh, North Carolina. VRA has included DigitalSpace in another awarded NIH grant proposal. Other commercial applications include use in the construction industry to simulate work packages, the design of factory floors where people work in concert with robots, surgical theaters where physicians and staff need to optimize the use of space, time and equipment, and multiplayer games, especially learning games for robotics.

Potentially competitive companies in this area include Common Point and Multigen-Paradigm Inc.

3) Organizational Commitments Supporting Phase III Commercialization

A number of organizations have made capital commitments to Phase III commercialization, and internal resources from DigitalSpace are available to carry out this effort. Virtual Reality Aids of Raleigh, North Carolina; SysRAND Corp of Parker, Colorado; and Humanspace, a new company that is part of the ESA ESTEC facility in the Netherlands, have all made financial commitments or are negotiating awarded grant proposals with DigitalSpace.

In addition, a number of universities and other institutions are engaging with the DSS open source platform to build extensions and applications for their use. These include:

1. City University of New York in Brooklyn (Entertainment Technology Division) which is actively extending DSS to add a generative sound library for performance arts.

2. The Institute for Advanced Study, Princeton New Jersey, which is considering DSS for its Principedia project.

3. Project Biota, Santa Cruz, CA, which is preparing to invest in building complex adaptive systems within DSS and to extend it to create a "hyper-evolver" environment supporting "artificial life" and research into fundamental biology.


We would like to acknowledge NASA Ames Research Center for funding this work, Barney Pell for instituting the topic area, Bill Clancey, Maarten Sierhuis and the whole Brahms team for providing critical support on previous work on BrahmsVE, Michael Kaplan and Adobe for providing support and the predecessor platform (Atmosphere), and Mark Shirley for giving critical guidance on the work and acting as our COTR.

We would also like to give our special thanks to Tom Cochrane who provided early and ongoing connections that made this work possible, including our partnership with Brad Blair and the Colorado School of Mines who were instrumental in both phases.