BI-MONTHLY TECHNICAL PROGRESS REPORT NUMBER 002
For NAS2-03134 SBIR 2002
Proposal #: H2.02-8957: “BrahmsVE: Proof of Concept for Human/Agent Intelligent Augmentation”.
Reporting Period: March 14, 2003 – May 14, 2003
PI/Contact: Bruce Damer, DigitalSpace Corporation, (831) 338 9400
NASA Program Recipients
NASA Ames Research Center
Accounting Operations Branch, M/S 203-18
Moffett Field, CA 94035-1000
Natalie LeMar MS 241-1
COTR MS 269-3
SBIR Specialist MS 202A-3
SBIR Team Recipients: M Sierhuis (RIACS), R Van Hoof (RIACS), D Rassmussen (DigitalSpace), B Damer (DigitalSpace)
(1) A quantitative description of work performed during the period
The DigitalSpace team completed the second third of work on this Phase I SBIR by adding two interactive agents to our virtual model of the US habitation module interior of the International Space Station (ISS). The first agent is an analogue of the NASA Ames Personal Satellite Assistant (PSA); the second, newly added agent is an astronaut with the ability to follow the PSA agent and move its position to avoid a collision. Cameras were then set up and a user interface built to permit multiple points of view of the simulation (including a camera on the virtual PSA itself). This running model is depicted in Figure 1 below.
Fig 1: Running ISS/PSA BrahmsVE simulation with addition of timing information, output reporting console and new scenario of an Astronaut agent interacting with the PSA agent.
Figure 2 below shows how the camera viewpoint settings work as well as the simulation clock.
Fig 2: Viewpoint of PSA agent about to pass in front of Astronaut agent.
The running example of this model is online at:
A focus of our work during this period was to create scenegraph elements that would permit interaction between these two agents. For example, in real operation of a PSA-like robot aboard the ISS it will be important to know what sightlines allow for the visibility of the vehicle from the point of view of astronauts. To accomplish this we developed a visibility “ray casting” technique that would allow us to train the head of the astronaut model on the PSA as it came within range of view. When the astronaut agent “determines” that the PSA is going to pass close by, the agent moves itself from one side of the habitat to the other, assuming a typical astronaut attitude with feet lodged in a foothold. See figure 3 below for these new functions in practice.
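The visibility test described above can be sketched as a simple view-cone check: the astronaut agent "sees" the PSA when it is within sight range and inside a field-of-view angle. This is an illustrative reconstruction under assumed names and thresholds, not the actual Oworld ray-casting code:

```javascript
// Illustrative sketch of the visibility check (not the actual Oworld code).
// Positions are [x, y, z] arrays; the thresholds are invented for the example.

// Simple 3D vector helpers.
function sub(a, b) { return [a[0] - b[0], a[1] - b[1], a[2] - b[2]]; }
function norm(v) {
  const len = Math.hypot(v[0], v[1], v[2]);
  return [v[0] / len, v[1] / len, v[2] / len];
}
function dot(a, b) { return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]; }

// Returns true when `target` lies within sight range and inside the
// viewer's field of view -- the condition under which the astronaut
// model's head would be trained on the PSA.
function canSee(viewerPos, viewDir, target, fovDeg, range) {
  const offset = sub(target, viewerPos);
  const dist = Math.hypot(offset[0], offset[1], offset[2]);
  if (dist > range) return false;
  const cosAngle = dot(norm(offset), norm(viewDir));
  return cosAngle >= Math.cos((fovDeg / 2) * Math.PI / 180);
}
```

A real scenegraph test would also cast a ray against intervening geometry to handle occlusion; this sketch covers only the range and angle conditions.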
Fig 3: Astronaut agent tracks PSA agent, Astronaut agent will track and then move across the analogue space station module to another foothold position to avoid PSA agent.
In our last phase of work we built an initial XML/HTTP interface (see explanation in Appendix A) such that activities, events and exceptions could be reported via the Internet to the Brahms server. This reporting function was extended in the second phase to report on the astronaut agent activities. The log of this typical “alert” communication dialogue is included below with the new astronaut activities shown in the “incoming” statements:
** Incoming **
** Outgoing **
(2) An indication of any current problems, which may impede performance or impact program schedule or cost, and proposed corrective action
Current problems include possible issues with the physics and collision callbacks that are now being prepared for use. In addition, the formal product launch of the Atmosphere player by Adobe will be completed in this period. Debugging and platform stability are high priorities, and we are testing the existing BrahmsVE applications and reporting issues to Adobe to ensure that no regression of functionality occurs.
(3) A discussion of the work to be performed during the next reporting period
Fig 4: Astronaut agent and PSA agent as seen from coupling interface to analogue US module. In subsequent work the simulation space will be extended to a more extensive reconstruction of the ISS.
During the next period we expect to complete the following:
Development of a syntactical detectibles language and reporting modules
1. Build a syntax/language (extending the current Alert reporting actions) to report much more activity out to XML/HTTP such as smaller inter-location movements, collisions, whether one object can be seen by another (occlusions). We are defining these as "detectibles" and they will be made possible by the setting of Alerts (as currently implemented). In this manner we will achieve a continuous stream of reporting from the world. We will develop a series of reporting buffers on the web page (rather than the popup window).
2. Set up a way for an outside process (via a Brahms action command) to access a broader range of "methods" available within the virtual world. For example, if we have a light that can be turned on and off in the station, allow an outside process to reach it via the network interface (XML/HTTP) and activate it. As a preparation for this kind of access, for a given world we will export all the features that can be activated from outside (a kind of "instruction set"). We will term these "settables"; they are ways for Brahms to actually affect the world state at initialization time or during the simulation.
3. We will develop a function that emits, via the reporting function, the list of areas that the world possesses. In this way, Brahms will be "informed" as to the available addressable geometry of the world. The reason for doing this is that it does not make sense for Brahms developers to labor over writing rules to describe a world in a 3D VE when the 3D designer already independently writes all of that information into the world. So if there is a hierarchy of spaces built into the VE, such as "sink in counter in kitchen in station module", this hierarchy could be reported out as a starting point for Brahms.
Indeed 2 and 3 are related as they both involve "interrogating" the 3D world to have it report out its "settables" and its "geometry".
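The interrogation idea behind items 2 and 3 can be sketched as a world object that exports its "settables" as an instruction set and queues "detectible" events for reporting. All names here (the world state, the `cabinLight` feature) are invented for illustration; this is a minimal sketch of the planned mechanism, not the BrahmsVE implementation:

```javascript
// Minimal sketch, with invented names, of the planned "settables" and
// "detectibles" mechanisms described above.
function makeWorld() {
  const state = { cabinLight: false };   // hypothetical world state
  const detectibles = [];                // pending events to report to Brahms

  return {
    // The "instruction set": every feature an outside process can activate.
    exportSettables() { return Object.keys(state); },

    // Apply a settable (as the XML/HTTP interface would) and record the
    // change as a detectible event for the reporting stream.
    applySettable(name, value) {
      if (!(name in state)) return false;   // unknown settables are rejected
      state[name] = value;
      detectibles.push({ event: "set", name: name, value: value });
      return true;
    },

    // Drain the detectible buffer, as the web-page reporting buffers would.
    drainDetectibles() { return detectibles.splice(0); },
  };
}
```

In the same spirit, the world's area hierarchy ("sink in counter in kitchen in station module") could be exported alongside the settables at initialization time.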
Real world station/PSA example to test the reporting module
Ames PSA team members explained that they are planning to put a laser pointer onto the next PSA prototype build, so that the agent could fulfill a "teamwork" function: a person in mission control could use the PSA to "point out" specific things aboard the ISS to on board crewmembers. Another application they see for the PSA is for it to be able to travel throughout the station looking for tools or parts that are attached to the walls. Sometimes the locations of important objects are not obvious or remembered by the crew. In the future, a flight PSA might be able to scan barcodes and report a position of an object.
We will develop a scenario where the PSA is positioned near a couple of astronaut agents, shining the laser pointer on an area to indicate something in front of them. As the astronauts move about the work area, the PSA would try to keep its distance. This is the reverse case of the work done in phase two, which had the astronaut agent avoiding the PSA agent. At one point, the PSA could be dispatched to go and look for a tool, such as a wrench, that is somewhere in the station, and then return to project the image and coordinates (area hierarchy) of where that tool was found to the working astronaut agents (reporting textually in the output module). As figure 4 above shows, we will have to develop a more extensive model of the ISS to best illustrate this tool-finding exercise.
To implement this scenario, we will set up the "detectibles" that the PSA or station would report based on preset alerts and the "settables" that we can address from outside (i.e., turning the PSA laser pointer on or off), and have the whole simulation report its geometry. Several text buffers within the web page will report out all the detectibles and geometry, while manual text command input or Brahms commands will operate the settables and issue commands to the PSA (the go-find-wrench example).
Change of direction from original SBIR proposal
As a result of NASA team input, we have changed some of the original stated SBIR goals of providing a more autonomous PSA (then called a PBA) agent and instead are focusing on providing a richer data set allowing Brahms to be tied more effectively into the virtual world environments. It was requested by NASA to make this change as both teams felt that the early successful completion of the real time interface between Brahms and the VE makes possible a much more powerful BrahmsVE end product. The richer interaction language will provide the Brahms development team much needed guidance, as they will now be charged with substantially upgrading elements of Brahms to take advantage of the real time connection.
(4) Estimated percentage of physical completion of the contract
We estimate we are approximately sixty-six percent of the way toward completion of the contract.
References: for ongoing project tracking, please refer to the following web resources:
1) The BrahmsVE project homepage is at
2) The full SBIR 2002 phase I proposal may be viewed on the web at:
3) The Brahms specification and engine is described at:
4) ISS/PSA homepage is at
end of report.

Appendix A: Explanation of the XML/HTTP interface
This system uses an implementation of the Oworld library, implementing the PSA as an "agent" in the same way that the NASA agents were used in other uses of this library. The code also sets up some basic information about the ISS environment, such as four "areas", again in the same way as in previous implementations.
The main expansion of this library is the addition of an "alert" event, an addition to the Brahms-defined constants for command strings. This event causes the relevant agent to execute an Alert action. This action uses the spigot to call a function embedded in the containing web page, passing this function the data specified in the "alert" command line. This function uses the XMLHTTP ActiveX interface to make a request of an HTTP-capable server, providing the alert data in the request.
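The shape of this hand-off can be sketched as follows. The real page code calls the ActiveX XMLHTTP object; here the transport is a pluggable callback so the exchange can be shown outside a browser, and the field names are illustrative rather than the actual Brahms wire format:

```javascript
// Sketch of the page-side alert hand-off. The transport callback stands in
// for the ActiveX XMLHTTP request; field names are invented for illustration.
function sendAlert(agent, time, reason, transport) {
  // Encode the alert data as the HTTP request body.
  const body = "agent=" + encodeURIComponent(agent) +
               "&time=" + time +
               "&reason=" + encodeURIComponent(reason);
  // The server's HTTP response carries one or more command strings back.
  return transport(body);
}
```

For example, an idle alert from the PSA at time 10 would produce a request body such as `agent=projects.issvre.PSA&time=10&reason=Idle`.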
Depending on what the server actually is, it will handle this information differently; however, the result will be the server returning data (using the normal HTTP response to a request). This data will be one or more command strings.
On receiving this data, the web-page functions parse the data, breaking it into separate strings and then using the spigot to pass these strings into the Atmosphere plugin, to the Oworld libraries. If the strings are correctly formatted Brahms command strings, they are processed; if not, they are discarded.
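The parse-and-filter step can be sketched as below. The validity check here (a known leading keyword) is a stand-in for the real Oworld parser's rules, and the keyword list is assumed from the command types discussed in this appendix:

```javascript
// Sketch of the parse step: split the HTTP response into individual command
// strings and keep only those that look like Brahms commands. The keyword
// list is an assumption for illustration, not the real command grammar.
function parseCommands(responseText) {
  const known = ["activity", "alert"];   // assumed Brahms command keywords
  return responseText
    .split("\n")
    .map(line => line.trim())
    .filter(line => line.length > 0)
    .filter(line => known.indexOf(line.split(" ")[0]) !== -1);
}
```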
In the current example, the PSA does not create "alert" events on its own. Instead, alert commands are passed to it, through the spigot-server-spigot interface, as the final command in a sequence. Below is an example sequence of commands:
In this example, there are two command strings (ignoring the line wrapping). The first is for an activity, move, from time 0 to time 10, performed by the PSA (projects.issvre.PSA), using the Walk action. Additional data is provided, giving the beginning and end locations for the move command. As stated earlier, Point0, Point1, Point2 and Point3 are pre-defined locations.
The second command string is the alert command, to be performed at (or after) time 10, by the PSA (projects.issvre.PSA), for the reason of "Idle". Additional data provided is the agent's (PSA's) current (expected) location.
The net result of this is to cause the PSA to move from Point0 to Point1, then request additional instructions. If the received instructions include an alert command, more instructions will be requested at that point, thus continuing the process.
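This continuation loop can be sketched as follows, with the server stubbed as a function returning arrays of command strings; the command texts are simplified placeholders, not real Brahms syntax:

```javascript
// Sketch of the continuation loop: the agent works through its command
// queue, and each "alert" command triggers a request to the server for
// further instructions, which keeps the process going until the server
// replies with nothing. Command strings here are simplified placeholders.
function runAgent(initialCommands, server) {
  const queue = initialCommands.slice();
  const executed = [];
  while (queue.length > 0) {
    const cmd = queue.shift();
    executed.push(cmd);
    if (cmd.split(" ")[0] === "alert") {
      // Request additional instructions; an empty reply ends the loop.
      queue.push.apply(queue, server(cmd));
    }
  }
  return executed;
}
```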
In the existing example, the server document is a PHP script; however, it could be anything, as long as it can be accessed through HTTP. This includes a web server running ASP, CGI, or PHP, or a server-side application specifically designed to provide HTTP responses. A likely example of this would be the Brahms server, which is apparently coded in Java.
In this design, it was decided that "alert" commands should be provided by the server, rather than the PSA creating them "on-the-fly". While it is possible for the PSA to create alert events when it goes idle (has no remaining commands), it was simply easier at this point to have them as part of the provided commands.
Similarly, it was suggested that the PSA should be able to create alert events when it detects a collision, or potential collision. However, from looking at the current design of Brahms commands, as well as their current implementation, it appears that the Brahms server will be responsible for preventing collisions, by not issuing commands that would cause one.
Also, in terms of moving from one location to another, the libraries allow the creation of "paths", which are used if available. An example of this is in the FMars simulations, where the agents walked around the table using the defined path.
Another problem is that the Brahms system seems to define locations by name, and collisions would generally happen at unnamed locations. So the system would be unable to tell the Brahms server where it was when the collision happened, and thus the Brahms engine would be unable to tell the system where to move from in moving to a new location. (For example, the example move command shown earlier contained both source and destination information.)
Another feature implemented in this design is the display of all outgoing and incoming information, between the client (web-browser) and the server. This is provided by the transfer functions embedded in the web page (not by the Oworld library), and is used just for clarity.
It should also be noted that the "events" are initiated by the Oworld library (inside Atmosphere). A suggested alternative was to have script on the web page poll the status of a variable stored within Atmosphere; however, in implementation that system would be unwieldy. The explanation is that information cannot be read through the spigot in one command. Rather, the web-page code would have to request the state of the variable (using the spigot, web-page into Atmosphere), and Atmosphere would then have to respond, passing the value out (using the spigot, Atmosphere to web-page). This would result in two distinct events, when it is simpler to merely have the second type of event (Atmosphere to web-page) when there is significant data to be provided.
It is also worth noting that the current system allows for event queuing. For example, if there are multiple agents operating in the environment, and the second one requests information in response to an alert event while the first one is still waiting for a response to its request for information, the second request will be queued, and executed when the first is complete.
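The queuing behaviour can be sketched as a simple FIFO gate: while one agent's request is outstanding, further requests wait and are issued only after the previous response has been handled. Completion is driven manually here instead of by real HTTP callbacks, and all names are illustrative:

```javascript
// Sketch of the event-queuing behaviour described above. `send` stands in
// for issuing the HTTP request; `complete` is called when a response has
// been fully handled, releasing the next queued request.
function makeRequestQueue(send) {
  const pending = [];
  let busy = false;

  function issueNext() {
    if (busy || pending.length === 0) return;
    busy = true;
    send(pending.shift());
  }

  return {
    // Called by an agent's alert event to request information.
    request(data) { pending.push(data); issueNext(); },
    // Called when the server's response has been handled.
    complete() { busy = false; issueNext(); },
  };
}
```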
The current alert event system was created here to fulfill the need for this example. However, there may already be a specification for a similar event in the Brahms format, or there may be a better way of implementing such an event while keeping in the style of the other commands. In the current implementation, the Oworld system only requests additional data when an event occurs, in response to an alert command. A possible future expansion is for the system to periodically create a request to the server, just to check for any additional commands.

Related task: creation of Jserv servlets for the reading of Brahms simulation action statements and distribution to a running version of Oworld for representation in virtual FMARS (the Manifest Manager).
This appendix can be found online at: