The Idea of Universal Network Object Technology (UNO)




General solution
The Idea
Why our own object model?


Before explaining the concepts behind UNO, some problems that occurred in a C++-based development effort need to be presented.

  1. There are a number of base projects (Tools, Streams, Visual Class Library, Framework, etc.). The higher projects, such as the word processor, Calc, etc., use the classes of these base projects. After a change to one of these classes, for example when a new member or virtual method is added, the entire office suite needs to be rebuilt. This takes two days if no problems occur.
  2. The API of the base projects is, with a few exceptions, not well documented. The base projects grow with the requirements of the higher projects.
  3. The projects' dependencies are complicated and difficult to understand. Before making changes to an API, we need to know exactly which projects are affected.
  4. StarOffice has a complex build environment. This makes it very difficult for third parties to write components that can be integrated into the office suite.
  5. StarOffice components could not be used outside of StarOffice.
  6. There was a requirement to integrate components from other object models like CORBA, COM/DCOM, or Java into StarOffice. In addition, it was desirable to have StarOffice components be first class components in other component models.

General solution

There is a general approach to solving the problems above; this section therefore also answers the question of why UNO should be used.

  1. There is a mechanism that enables a new method to be added to an existing class: interface technology. Only interfaces are exposed to other projects. To add a new method, you only have to add a new interface. In this way, new methods can be added to an existing class, and the other projects can use these new features. There is a migration path to the new API.
  2. Use an IDL to describe our interfaces and the functionality of components. Because this happens at an abstract level, the documentation is usually better and the API is not implementation-dependent.
  3. To reduce the build dependencies of a specific component, only interfaces are used to communicate with other components and the base libraries. In this case, the dependencies are flat.
  4. Provide infrastructure to add components to an existing product or to the system.
  5. Reduce the dependencies of components, if possible.
  6. To communicate between different component models we have to create a bridge from UNO to the other component model.

Here is why we are not using an existing component model. First, we cannot use Java Beans because it is not abstract; it is usable only from Java itself. Second, CORBA would not be the right choice because no specification exists for binary compatibility within one process, and communication between two components must be handled through the IIOP protocol. Third, COM/DCOM does not support exception handling, which is necessary to integrate smoothly into languages with native exception handling, such as C++ and Java.

The Idea

First, we need to distinguish between the Universal Component Environment (UCE) and UNO. The UCE defines an environment in which components can be embedded and defines the API that a component must support. Points 5 and 6 above are solved with this technology. So, the UCE is on top of UNO. The construction of the UCE is documented in uce.html.

The ideas of UNO are as follows:

  1. A binary specification of the memory layout of the IDL types. This specification is machine-dependent, so it can be implemented directly in many languages.
  2. Each object lives in an environment. Objects share this environment with other compatible objects, meaning the same compiler version, the same Java virtual machine, or something similar. The only way to access an object from another environment is to generate a proxy in your own environment. This can be accomplished by a bridge.
  3. To reduce the number of bridges, we define one environment called the Binary UNO Environment, and it is recommended to provide a bridge to this environment. When you access an object, you normally go through two bridges: the first from your environment to the Binary UNO Environment, and the second from the Binary UNO Environment to the destination environment. For every new environment, only one bridge needs to be implemented.
    You can implement bridges between any two environments, but do this only for performance reasons.
  4. Provide a UNO runtime library which organizes access to the bridges and the environments. With the UNO runtime library, it is simple to access an object from another environment, presuming that the bridges are installed.
  5. The important concept is that all calls to an object in the Binary UNO Environment are dispatched through one function: the Dynamic Dispatch Function. Every call contains a full description of the method, which means: method name, argument types, return type, exceptions, and additional information.
    This makes it very simple to create a bridge to an interpreter, to a remote environment, or to any environment that has a well-specified API for calling object methods, for example, Java.

How do these ideas solve the problems?

This section explains the specific solution with UNO, following the numbered points from the general solution above.

  1. UNO has one base interface, called XInterface. This interface provides a method called queryInterface, with which you can obtain other interfaces. In this way, it is possible to extend an existing class. In UNO terminology, we speak of components instead of classes. See the document xinterface.html for detailed information about XInterface.
  2. A UNO IDL compiler is provided. The language is similar to CORBA's IDL. One extension of our language is a special construct called service. In a service, you can specify the interfaces, properties, and the interaction between the interfaces of a component.
  3. We implement components in UNO only with dependencies on interfaces, or on our base libraries VOS, SAL, and OSL. We generate the declaration files in the project of the component itself, so there are only dependencies on the base libraries, the binary type repository, and the tool that generates the declarations.
  4. The Universal Component Environment provides an API to get the dependencies from a component. This solution is documented in the UCE document.
  5. The other solutions are nice to have, but this is the fundamental solution in UNO. First, the UNO runtime library provides access to the environment in which the component is written (e.g., uno_getEnvironment(..., “msci”, 0)). A component need not know its environment until it is accessed from another environment. Next, the UNO runtime library provides access to the bridge between two environments (e.g., uno_getMapping(..., “msci”, “java”)). So from the user's side it is very simple, and normally this is done in a loader, such as a shared library loader. The loader is part of the Universal Component Environment.

    On the other side, there is the implementation of the bridge between two environments. The bridge should create a stub very efficiently, and the transformation of calls from one environment to another must be fast. To save disk space and avoid maintaining generated marshaling code, the bridge creates stub objects only from a type repository. The type repository contains all the type information that is necessary, for example the methods of an interface, the members of structures, and so on. Access to the type repository is also provided by the UNO runtime library.

    The point that makes the transformation between two environments fast is the binary UNO specification. The transformation from one environment to the binary UNO specification and on to the target environment should be fast. There are two other ways to speed up the transformation: First, if two components live in the same environment, the communication between them is direct (e.g., a virtual method call in C++) and no translation is necessary. Second, you can provide a direct bridge between two environments, so the transformation to the binary UNO specification is avoided.

Why our own object model?

The main reason is that the other object models don't provide the full functionality. The COM/DCOM model does not provide exception handling.

CORBA is normally used for remote communication and there is no local standard API between objects.

Java RMI is useful only in a Java environment. So we cannot use an existing standard to implement the object model we want. But if you look at our object model, you will see that we use the IIOP protocol in the remote case, and we use reference counting, as COM does, to determine the lifetime of interfaces.

Author: Jörg Budischewski ($Date: 2004/12/05 13:27:08 $)
Copyright 2002 Sun Microsystems, Inc., 901 San Antonio Road, Palo Alto, CA 94303 USA.
