To realize our goal of a metaverse with current tools and protocols, we chose a client/server model. All information about the world, including hyperlinks, is stored on one server. Clients request this information, which is transmitted using HTTP and displayed. The client keeps track of the current viewpoint and generates commands in the IVL language that tell the server which parts of the world to transmit. Navigation and rendering are entirely up to the client; the server simply sends scene descriptions (including links and inline information).
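To make the exchange concrete, a minimal sketch of one request/response cycle follows. The URL path, query syntax, and MIME type are illustrative assumptions, not the actual IVL command set:

    GET /metaverse/scene?view=current HTTP/1.0      (hypothetical request)

    HTTP/1.0 200 OK
    Content-type: x-world/x-ivl                     (hypothetical MIME type)

    ...scene description for the visible region, including
    hyperlinks and inline nodes, ready for the client to render...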
If we concede that bandwidth and rendering speed will be problems, we must build features into our world that minimize them, or the world will quickly become unusable. To this end, we must define levels of detail properly, so that objects are not rendered at all when the viewer is too far away (a map gives an overview, so nothing is missed) and are rendered at low detail until the viewer is close, as sketched below. The metaverse application by default wraps all added nodes and URLs in an LOD node which does this. Texture maps are forbidden (stripped out automatically) except at the highest level of detail.
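A minimal sketch of such a wrapper, written against the Open Inventor C++ API and assuming a distance-based SoLOD node is available; the helper name, the distance thresholds, and the use of an empty SoInfo node as the "render nothing" child are our own illustrations, not the application's actual code:

    #include <Inventor/nodes/SoInfo.h>
    #include <Inventor/nodes/SoLOD.h>
    #include <Inventor/nodes/SoNode.h>

    // Hypothetical helper: wrap a newly submitted node so it is drawn at
    // full detail only when the viewer is near, as a coarse stand-in at
    // middle distances, and not at all when far away.
    SoNode *wrapInLOD(SoNode *fullDetail, SoNode *coarse,
                      float nearDist, float farDist)
    {
        SoLOD *lod = new SoLOD;
        lod->range.set1Value(0, nearDist);  // closer than nearDist: child 0
        lod->range.set1Value(1, farDist);   // nearDist..farDist:    child 1
        lod->addChild(fullDetail);          // full detail (textures allowed)
        lod->addChild(coarse);              // low detail, textures stripped
        lod->addChild(new SoInfo);          // beyond farDist: nothing drawn
        return lod;
    }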
Users access the metaverse home page to build permanent structures in the metaverse. A password protection scheme ensures that only the owner of a structure or a link can modify it. Bounding-box checks are performed to make sure structures do not intersect. Since the IVL parser for the metaverse runs continuously, it maintains its current state in a file, written via an SoIdleSensor when CPU time permits. This allows the (dynamic) world to be recovered should the server crash.
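The following sketch shows how these two mechanisms might look in Open Inventor C++. The file name, function names, and the global worldRoot are illustrative assumptions; only SoIdleSensor, SoWriteAction, SoGetBoundingBoxAction, and SbBox3f::intersect come from the toolkit itself:

    #include <Inventor/SbViewportRegion.h>
    #include <Inventor/SoOutput.h>
    #include <Inventor/actions/SoGetBoundingBoxAction.h>
    #include <Inventor/actions/SoWriteAction.h>
    #include <Inventor/nodes/SoSeparator.h>
    #include <Inventor/sensors/SoIdleSensor.h>

    static SoSeparator *worldRoot;   // root of the dynamic world (assumed)

    // Reject a new structure if its bounding box intersects an existing
    // structure's. We test each child separately, not the aggregate box,
    // since the world's overall bounding box quickly covers everything.
    static SbBool intersectsExistingStructures(SoNode *candidate)
    {
        SbViewportRegion vp;             // required by the action; unused here
        SoGetBoundingBoxAction bba(vp);
        bba.apply(candidate);
        SbBox3f candidateBox = bba.getBoundingBox();

        for (int i = 0; i < worldRoot->getNumChildren(); i++) {
            bba.apply(worldRoot->getChild(i));
            if (candidateBox.intersect(bba.getBoundingBox()))
                return TRUE;
        }
        return FALSE;
    }

    // Idle-time checkpoint: write the scene graph to disk so the dynamic
    // world can be recovered after a crash.
    static void checkpointWorld(void *, SoSensor *sensor)
    {
        SoOutput out;
        if (out.openFile("metaverse-state.iv")) {  // file name illustrative
            SoWriteAction writer(&out);
            writer.apply(worldRoot);
            out.closeFile();
        }
        sensor->schedule();   // re-arm: run again at the next idle moment
    }

    // At server startup (assumes the Inventor main loop is running):
    //   (new SoIdleSensor(checkpointWorld, NULL))->schedule();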
An icon, called an avatar, represents the physical location of each client in the metaverse, and the server push feature is used to send each client's current location to all other clients. Avatars are removed when the client exits or when a browser idle time limit is exceeded. Users may link in their own avatar, or one is provided by default. We do not yet provide for interactions between avatars or between avatars and scene objects (other than collision detection); avatars simply share the same space at the same time.
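For reference, the server push mechanism keeps a single HTTP response open and streams a multipart/x-mixed-replace document, each part replacing the previous one; this is what lets the server rebroadcast avatar positions as they change. The boundary string and the update payload format below are illustrative assumptions:

    HTTP/1.0 200 OK
    Content-type: multipart/x-mixed-replace;boundary=AvatarUpdate

    --AvatarUpdate
    Content-type: text/plain

    avatar client-42 moved-to 10.0 0.0 -3.5     (hypothetical payload)
    --AvatarUpdate
    Content-type: text/plain

    avatar client-17 left                       (hypothetical payload)
    --AvatarUpdate--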