Jakarta HiveMind Project Documentation


HiveMind Services

In HiveMind, a service is simply an object that implements a particular interface, the service interface. You supply the service interface (packaged as part of a module). You supply the core implementation of the interface (in the same module, or in a different module). At runtime, HiveMind puts it all together.

HiveMind uses four service models: primitive, singleton, threaded and pooled. In the primitive and singleton models, each service will ultimately be just a single object instance. In the threaded and pooled models, there may be many instances simultaneously, one for each thread.

Unlike EJBs, there's no concept of location transparency: services are always local to the same JVM. Unlike XML-based web services, there's no concept of language transparency: services are always expressed in terms of Java interfaces. Unlike JMX or Jini, there's no concept of hot-loading of services. HiveMind is kept deliberately simple, yet still very powerful, so that your code is kept simple.

Defining Services

A service definition begins with a Java interface, the service interface. Any interface will do; HiveMind doesn't care, and there's no base HiveMind interface.

A module descriptor may include <service-point> elements to define services. A module may contain any number of services.

Each <service-point> establishes an id for the service and defines the interface for the service. An example is provided later in this document.

HiveMind is responsible for supplying the service implementation as needed; in most cases, the service implementation is an additional Java class which implements the service interface. HiveMind will instantiate the class and configure it as needed. The exact timing is determined from the service's service model:

  • primitive : the service is constructed on first reference
  • singleton : the service is not constructed until a method of the service interface is invoked
  • threaded : invoking a service method constructs and binds an instance of the service to the current thread
  • pooled : as with threaded, but service implementations are stored in a pool when unbound from a thread for future use in other threads.

Additional service models can be defined via the hivemind.ServiceModels configuration point.

HiveMind uses a system of proxies for most of the service models (all except the primitive service model, which primarily exists to bootstrap the core HiveMind services used by other services). Proxies are objects that implement the service interface and take care of details such as constructing the actual implementation of a service on the fly. These lifecycle issues are kept hidden from your code behind the proxies.

A service definition may include service contributions, or may leave that for another module.

Ultimately, a service will consist of a core implementation (a Java object that implements the service interface) and, optionally, any number of interceptors. Interceptors sit between the core implementation and the client, and add functionality to the core implementation such as logging, security, transaction demarcation or performance monitoring. Interceptors are yet more objects that implement the service interface.

Instantiating the core service implementation, configuring it, and wrapping it with any interceptors is referred to as constructing the service. Typically, a service proxy will be created first. The first time that a service method is invoked on the proxy, the service implementation is instantiated and configured, and any interceptors for the service are created.

Extending Services

Any module may contribute to any service extension point. An <implementation> element contains these contributions. Contributions take three forms:

  • <create-instance> to instantiate a class as the core implementation
  • <invoke-factory> to have another service (a factory) create the core implementation
  • <interceptor> to add additional logic to a core implementation
Service Constructors

A service constructor is used to instantiate a Java class as the core implementation instance for the service.

There are two forms of service constructors: instance creators and implementation factories.

An instance creator is represented by a <create-instance> element. It includes a class attribute, the Java class to instantiate.

An implementation factory is represented by an <invoke-factory> element. It includes a service-id attribute, the id of a service implementation factory service (which implements the ServiceImplementationFactory interface). The most common example is the hivemind.BuilderFactory service.

Implementation Factories

An implementation factory is used to create a core implementation for a service at runtime.

Often, the factory will need some additional configuration information. For example, the hivemind.lib.EJBProxyFactory service uses its parameters to identify the JNDI name of the EJB's home interface, as well as the home interface class itself.

Parameters to factory services are the XML elements enclosed by the <invoke-factory> element. Much like a configuration contribution, these parameters are converted from XML into Java objects before being provided to the factory.

The most common service factory is hivemind.BuilderFactory. It is used to construct a service and then set properties of the service implementation object.
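As a minimal sketch (the service and class names are invented for illustration, and this assumes the <construct>, <set> and <set-service> parameter elements described in the BuilderFactory documentation), a BuilderFactory-based service definition might look like this:

<service-point id="Calculator" interface="com.myco.Calculator">
  <invoke-factory service-id="hivemind.BuilderFactory">
    <!-- hypothetical implementation class instantiated by the factory -->
    <construct class="com.myco.impl.CalculatorImpl">
      <!-- set a simple property from a literal value -->
      <set property="precision" value="2"/>
      <!-- set a property to another service (the Adder service defined later in this document) -->
      <set-service property="adder" service-id="com.myco.mypackage.Adder"/>
    </construct>
  </invoke-factory>
</service-point>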

Interceptor Contributions

An interceptor contribution is represented by an <interceptor> element. The service-id attribute identifies a service interceptor factory service: a service that implements the ServiceInterceptorFactory interface.

An interceptor factory knows how to create an object that implements an arbitrary interface (the interface being defined by the service extension point), adding new functionality. For example, the hivemind.LoggingInterceptor factory creates an instance that logs entry and exit to each method.

The factory shouldn't care what the service interface itself is ... it should adapt to whatever interface is defined by the service extension point it will create an instance for.

A service extension point may have any number of interceptor contributions. If the order in which interceptors are applied is important, then the optional before and after attributes can be specified.

A Stack of Interceptors
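
A minimal sketch of such a contribution (the performance and security interceptor factory ids are hypothetical; hivemind.LoggingInterceptor is the standard logging interceptor factory) might look like this:

<implementation service-id="com.myco.MyService">
  <!-- force logging to be the very first interceptor -->
  <interceptor service-id="hivemind.LoggingInterceptor" before="*"/>
  <!-- hypothetical performance-monitoring interceptor factory -->
  <interceptor service-id="com.myco.PerformanceInterceptor"/>
  <!-- force security checks to be the very last interceptor -->
  <interceptor service-id="com.myco.SecurityInterceptor" after="*"/>
</implementation>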

In this example, it was desired that any method logging occur first, before the other interceptors. This ensures that the time taken to log method entry and exit is not included in the performance statistics (gathered by the performance interceptor). To ensure that the logging interceptor is the first, or earliest, interceptor, the special value * (rather than a list of interceptor service ids) is given for its before attribute (within the <interceptor> element). This forces the logging interceptor to the front of the list (however, only a single interceptor may be so designated).

Likewise, the security checks should occur last, after logging and after performance; this is accomplished by setting the after attribute to *. The performance interceptor naturally falls between the two.

This is about as complex as an interceptor stack is likely to grow. However, through the use of explicit dependencies, almost any arrangement of interceptors is possible ... even when different modules contribute the interceptors.

Interceptors implement the toString() method to provide a useful identification for the interceptor, for example:
<Interceptor: hivemind.LoggingInterceptor for com.myco.MyService(com.myco.MyServiceInterface)>

This string identifies the interceptor service factory (hivemind.LoggingInterceptor), the service extension point (com.myco.MyService) and the service interface (com.myco.MyServiceInterface).

Warning
If toString() is part of the service interface (really, a very rare case), then the interceptor does not override the service implementation's method. However, this is not a recommended practice.
A short example

As an example, let's create an interface with a single method, used to add together two numbers.

package com.myco.mypackage;

public interface Adder
{
  public int add(int arg1, int arg2);
}

We could define many methods, and the methods could throw exceptions. Once more, HiveMind doesn't care.

We need to create a module to contain this service. We'll create a simple HiveMind deployment descriptor. This is an XML file, named hivemodule.xml, that must be included in the module's META-INF directory.

<?xml version="1.0"?>      
<module id="com.myco.mypackage" version="1.0.0">
  <service-point id="Adder" interface="com.myco.mypackage.Adder"/>
</module>

The complete id for this service is com.myco.mypackage.Adder, formed from the module id and the service id. Commonly, the service id will exactly match the complete name of the service interface, but this is not required.

Normally, the <service-point> would contain a <create-instance> or <invoke-factory> element, used to create the core implementation. For this example, we'll create a second module that provides the implementation. First we'll define the implementation class.

package com.myco.mypackage.impl;

import com.myco.mypackage.Adder;

public class AdderImpl implements Adder
{
  public int add(int arg1, int arg2)
  {
    return arg1 + arg2;
  }
}

That's what we meant by a POJO. We'll create a second module to provide this implementation.

<?xml version="1.0"?>
<module id="com.myco.mypackage.impl" version="1.0.0">
  <implementation service-id="com.myco.mypackage.Adder">
    <create-instance class="com.myco.mypackage.impl.AdderImpl"/>
  </implementation>
</module>

The runtime code to access the service is very streamlined:

Registry registry = . . .
Adder service = (Adder) registry.getService("com.myco.mypackage.Adder", Adder.class);  
int sum = service.add(4, 7);

Another module may provide an interceptor:

<?xml version="1.0"?>
<module id="com.myco.anotherpackage" version="1.0.0">
  <implementation service-id="com.myco.mypackage.Adder">
    <interceptor service-id="hivemind.LoggingInterceptor"/>
  </implementation>
</module>

Here the Logging interceptor is applied to the service extension point. The interceptor will be inserted between the client code and the core implementation. The client in the code example won't get an instance of the AdderImpl class, it will get an instance of the interceptor, which internally invokes methods on the AdderImpl instance. Because we code against interfaces instead of implementations, the client code neither knows nor cares about this.

Primitive Service Model

The simplest service model is the primitive service model; in this model the service is constructed on first reference. This is appropriate for services such as service factories and interceptor factories, and for several of the basic services provided in the hivemind module.

Singleton Service Model

Constructing a service can be somewhat expensive; it involves instantiating a core service implementation, configuring its properties (some of which may also be services), and building the stack of interceptors for the service. Although HiveMind encourages you to define your application in terms of a large number of small, simple, testable services, it is also desirable to avoid a cascade of unnecessary object creation due to the dependencies between services.

To resolve this, HiveMind defers the actual creation of services by default. This is controlled by the model attribute of the <service-point> element; the default model is singleton.

When a service is first requested a proxy for the service is created. This proxy implements the same service interface as the actual service and, the first time a method of the service interface is invoked, will force the construction of the actual service (with the core service implementation, interceptors, references to other services, and so forth).

In certain cases (including many of the fundamental services provided by HiveMind) this behavior is not desired; in those cases, the primitive service model is specified. In addition, there is rarely a need to defer service implementation factory or service interceptor factory services.

Threaded Service Model

In general, singleton services (using the singleton or primitive service models) should be sufficient. In some cases, the service may need to keep some specific state. State and multithreading don't mix, so the threaded service model constructs, as needed, a service instance for the current thread. Once constructed, the service instance stays bound to the thread until it is discarded. The particular service implementation is exclusive to the thread and is only accessible from that thread.

The threaded service model uses a special proxy class (fabricated at runtime) to support this behavior; the proxy may be shared between threads but methods invoked on the proxy are redirected to the private service implementation bound to the thread. Binding of a service implementation to a thread occurs automatically, the first time a service method is invoked.
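
A minimal sketch of a service point using this model (the service and class names are hypothetical) simply selects the model on the service constructor:

<service-point id="UserState" interface="com.myco.UserState">
  <!-- one instance of UserStateImpl is created per thread, on demand -->
  <create-instance class="com.myco.impl.UserStateImpl" model="threaded"/>
</service-point>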

The service instance is discarded when notified to clean up; this is controlled by the hivemind.ThreadEventNotifier service. If your application has any threaded services, you are responsible for invoking the fireThreadCleanup() method of the service.

A core implementation may implement the Discardable interface. If so, it will receive a notification as the service instance is discarded.

HiveMind includes a servlet filter to take care of creating the Registry and managing the ThreadEventNotifier service.

Pooled Service Model

The pooled service model is very similar to the threaded model, in that a service implementation will be exclusively bound to a particular thread (until the thread is cleaned up). Unlike the threaded model, the service is not discarded; instead it is stored into a pool for later reuse with the same or a different thread.

As with the threaded model, all of this binding and unbinding is hidden behind a dynamically fabricated proxy class.

Core service implementations may implement the RegistryShutdownListener interface to receive a callback for final cleanups (as with the singleton and deferred service models).

In addition, a service may implement the PoolManageable interface to receive callbacks specific to the pooled service. The service is notified when it is activated (bound to a thread) and deactivated (unbound from the thread and returned to the pool).

Service Lifecycle

As discussed, the service model determines when a service is instantiated. In many cases, the service needs to know when it has been created (to perform any final initializations) or when the Registry has been shut down.

A core service implementation may also implement the RegistryShutdownListener interface. When a Registry is shutdown, the registryDidShutdown() method is invoked on all services (and many other objects, such as proxies). The order in which these notifications occur is not defined. A service may release any resources it may hold at this time. It should not invoke methods on other service interfaces.

The threaded service model does not register services for Registry shutdown notification, regardless of whether the core service implementation implements the RegistryShutdownListener interface or not. Instead, the core service implementation should implement the Discardable interface, to be informed when a service bound to a thread is discarded.

It is preferred that, whenever possible, services use the singleton service model (the default) and not the primitive model. All the service models (except for the primitive service model) expose a proxy object (implementing the service interface) to client code (including other services). These proxies are aware of when the Registry is shut down and will throw an exception when a service method is invoked on them.

Services and Events

It is fairly common that some services will produce events and other services will consume events. The use of the hivemind.BuilderFactory to construct a service simplifies this, using the <event-listener> element. The BuilderFactory can register a core service implementation (not the service itself!) as a listener of events produced by some other service.

The producing service must include a matched pair of listener registration methods, i.e., both addFooListener() and removeFooListener(). Note that only the implementation class must implement the listener interface; the service interface does not have to extend the listener interface. The core service implementation is registered directly with the producer service, bypassing any interceptors or proxies.
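
As a sketch (the producer and consumer ids and classes are hypothetical, and this assumes the <construct> and <event-listener> parameter elements of hivemind.BuilderFactory), wiring a consumer to a producer might look like this:

<implementation service-id="com.myco.MailConsumer">
  <invoke-factory service-id="hivemind.BuilderFactory">
    <construct class="com.myco.impl.MailConsumerImpl">
      <!-- register the core implementation as a listener of events
           produced by the (hypothetical) MailProducer service -->
      <event-listener service-id="com.myco.MailProducer"/>
    </construct>
  </invoke-factory>
</implementation>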

Frequently Asked Questions
  • Why do I pass the interface class to getService()?

    This is to add an additional level of error checking and reporting. HiveMind knows, from the module descriptors, the interface provided by the service extension point, but it can't tell if you know that. By passing in the interface you'll cast the returned service to, HiveMind can verify that you won't get a ClassCastException. Instead, it throws an exception with more details (the service extension point id, the actual interface provided, and the interface you passed it).

  • What if no module provides a core implementation of the service?

    HiveMind checks for a service constructor when the registry itself is assembled. If a service extension point has no service constructor, an error is logged (identifying the extension point id). In addition, getService() will throw an ApplicationRuntimeException.

  • What if I need to do some initializations in my service?

    If you have additional initializations that can't occur inside your core service implementation's constructor (for instance, if the initializations are based on properties set after the service implementation object is instantiated), then your class should use the hivemind.BuilderFactory to invoke an initializer method.

  • What if I don't invoke Registry.cleanupThread()?

    Then service implementations bound to the current thread stay bound. When the thread is next used to process a request, the same services, in whatever state they were left in, will be used. This may not be desirable in a servlet or Tapestry application, as some state from a client may be left inside the services, and a different client may be associated with the thread in later executions.

  • What if I want my service to be created early, not just when needed?

    Contribute your service into the hivemind.EagerLoad configuration; this will force HiveMind to instantiate the service on startup. This is often used when developing an application, so that configuration errors are caught early; it may also be useful when a service should be instantiated to listen for events from some other service.

Configuration Points

A central concept in HiveMind is configuration extension points. Once you have a set of services, it's natural to want to configure those services. In HiveMind, a configuration point contains an unordered list of elements. Each element is contributed by a module ... any module may make contributions to any configuration point.

There is no explicit connection between a service and a configuration point, though it is often the case that a service and a configuration point will be similarly named (or even identically named; services and configuration points are in separate namespaces). Any relationship between a service and a configuration point is explicit only in code ... the service may be configured with the elements of a configuration point and operate on those elements in some way.

Defining a Configuration Point

A module may include <configuration-point> elements to define new configuration points. A configuration point may specify the expected, or allowed, number of contributions:

  • Zero or one
  • Zero or more (the default)
  • At least one
  • Exactly one

At runtime, the number of actual contributions is checked against the constraint and an error is reported if the number doesn't match.
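
For example (the ids here are hypothetical), a configuration point that requires at least one contribution might be declared as:

<!-- at least one contribution, in the format of the (hypothetical) DataSource schema, is required -->
<configuration-point id="DataSources" occurs="1..n" schema-id="DataSource"/>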

Defining the Contribution Format

A significant portion of a configuration point is the <schema> element ... this is used to define the format of contributions that may be made inside <contribution> elements. Contributions take the form of XML elements and attributes; the <schema> element identifies which elements and attributes are allowed and provides rules that transform the contributions into Java objects.

This is very important: what gets fed into a configuration point (in the form of contributed <contribution>s) is XML. What comes out on the other side is a list of configured Java objects. Without these XML transformation rules, it would be necessary to write Java code to walk the tree of XML elements and attributes to create the Java objects; instead this is done inside the module deployment descriptor, by specifying a <schema> for the configuration point, and providing rules for processing each contributed element.

If a contribution from a <contribution> is invalid, then a runtime error is logged and the contribution is ignored. The runtime error will identify the exact location (the file, line number and column number) of the contribution so you can go fix it.

The <schema> element contains <element> elements to describe the XML elements that may be contributed. <element>s contain <attribute>s to define the attributes allowed for those elements. <element>s also contain <rules> used to convert the contributed XML into Java objects.

Here's an example from the HiveMind test suite. The Datum object has two properties: key and value.

<configuration-point id="Simple">
  <schema>
    <element name="datum">
      <attribute name="key" required="true"/>
      <attribute name="value" required="true"/>
      
      <conversion class="hivemind.test.config.impl.Datum"/>
    </element>
  </schema>
</configuration-point>

<contribution configuration-id="Simple">
  <datum key="key1" value="value1"/>
  <datum key="key2" value="value2"/>
</contribution>

The <conversion> element creates an instance of the class, and initializes its properties from the attributes of the contributed element (the datum and its key and value attributes). For more complex data, the <map> and <rules> elements add power (and complexity).

This extra work in the module descriptor eliminates a large amount of custom Java code that would otherwise be necessary to walk the XML contributions tree and convert elements and attributes into objects and properties. Yes, you could do this in your own code ... but would you really include all the error checking that HiveMind does? Or the line-precise error reporting? Would you bother to create unit tests for all the failure conditions?

Using HiveMind allows you to write the schema and rules and know that the conversion from XML to Java objects is done uniformly, efficiently and robustly.

The end result of this mechanism is very concise, readable contributions (as shown in the <contribution> in the example).

In addition, it is common for multiple configuration points to share the exact same schema. By assigning an id attribute to a <schema> element, you may reference the same schema for multiple configuration points. For example, the hivemind.FactoryDefaults and hivemind.ApplicationDefaults configuration points use the same schema. The hivemind module deployment descriptor accomplishes this by defining a schema for one configuration point, then referencing it from another:

<schema id="Defaults">
  <element name="default">
  
  . . .
  
  </element>
</schema>

<configuration-point id="FactoryDefaults" schema-id="Defaults"/>

Like service points and configuration points, schemas may be referenced within a single module using an unqualified id, or referenced between modules using a fully qualified id (that is, prefixed with the module's id).

Accessing Configuration Points

Like services, configuration points are meant to be easy to access (the only trick is getting a reference to the registry to start from).

Registry registry = . . .;
List elements = registry.getConfiguration("com.myco.MyConfig");	

int count = elements.size();
for (int i = 0; i < count; i++)
{
  MyElement element = (MyElement) elements.get(i);
  
  . . .
}

Note
Although it is possible to access configurations via the Registry, it is often not a good idea. It is unlikely that you want the information contained in a configuration as an unordered list. A best practice is to always access the configuration through a service, which can organize and validate the data in the configuration.
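
One sketch of this practice (the ids and class are hypothetical, and this assumes the <set-configuration> parameter element of hivemind.BuilderFactory) is to inject the configuration directly into a property of the service implementation:

<implementation service-id="com.myco.MyConfigConsumer">
  <invoke-factory service-id="hivemind.BuilderFactory">
    <construct class="com.myco.impl.MyConfigConsumerImpl">
      <!-- the elements of com.myco.MyConfig are assigned to the List property "elements" -->
      <set-configuration property="elements" configuration-id="com.myco.MyConfig"/>
    </construct>
  </invoke-factory>
</implementation>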

The list of elements is always returned as an unmodifiable list. An empty list may be returned.

The order of the elements in the list is not defined. If order is important, you should create a new (modifiable) list from the returned list and sort it.

Note that the elements in the list are no longer the XML elements and attributes that were contributed; the rules provided in the configuration point's <schema> are used to convert the contributed XML into Java objects.

Lazy Loading

At application startup, all the module deployment descriptors are located and parsed and in-memory objects created. Validations (such as having the correct number of contributions) occur at this stage.

The list of elements for a configuration point is not created until the first call to Registry.getConfiguration() for that configuration point.

In fact, it is not created even then. When the element list for a configuration point is first accessed, what's returned is not really the list of elements; it's a proxy, a stand-in for the real data. The actual elements are not converted until they are actually needed, in much the same way that the creation of services is deferred.

In general, you will never know (or need to know) this; when you access the size() of the list or get() any of its elements, the conversion of contributions into Java objects will be triggered, and those Java objects will be returned in the list.

If there are minor errors in the contribution, then you may see errors logged; if the <contribution> contributions are significantly malformed, HiveMind may be unable to recover and will throw a runtime exception.

Substitution Symbols

The information provided by HiveMind module descriptors is entirely static, but in some cases, some aspects of the configuration should be dynamic. For example, a database URL or an e-mail address may not be known until runtime (a sophisticated application may have an installer which collects this information).

HiveMind supports this notion through substitution symbols. These are references to values that are supplied at runtime. Substitution symbols can appear inside literal values ... both as XML attributes, and as character data inside XML elements.

Example:

<contribution configuration-id="com.myco.MyConfig">
  <value> dir/foo.txt </value>
  <value> ${config.dir}/${config.file} </value>
</contribution>

This example contributes two elements to the com.myco.MyConfig configuration point. The first contribution is simply the text dir/foo.txt. In the second contribution, the content contains substitution symbols (which use a syntax derived from the Ant build tool). Symbol substitution occurs before <schema> rules are executed, so the config.dir and config.file symbols will be converted to strings first, then whatever rules are in place to convert the value element into a Java object will be executed.

Note
If you contribute text that includes symbols that you do not want to be expanded, then you must prefix the symbol with an extra dollar sign. This is to support legacy data that was already using the HiveMind symbol notation for its own, internal purposes. For example, foo $${bar} baz will be expanded into the text foo ${bar} baz.
Symbol Sources

This begs the question: where do symbol values come from? The answer is application dependent. HiveMind itself defines a configuration point for this purpose: hivemind.SymbolSources. Contributions to this configuration point define new objects that can provide values for symbols, and identify the order in which these objects should be consulted.

If at runtime none of the configured SymbolSources provides a value for a given symbol then HiveMind will leave the reference to that symbol as is, including the surrounding ${ and }. Additionally an error will be logged.

Frequently Asked Questions
  • Are there any default implementations of SymbolSource?

    There is now a configuration point for setting factory defaults: hivemind.FactoryDefaults. A second configuration point, for application defaults, overrides the factory defaults: hivemind.ApplicationDefaults.

    SystemPropertiesSymbolSource is a one-line implementation that allows access to system properties as substitution symbols. Note that this configuration is not loaded by default.

    Additional implementations may follow in the future.

  • What's all this about schemas and rules?

    A central goal of HiveMind is to reduce code clutter. If configuration point contributions are just strings (in a .properties file) or just XML, that puts a lot of burden on the developer whose code reads the configuration to then massage it into useful objects. That kind of ad-hoc code is notoriously buggy; in HiveMind it is almost entirely absent. Instead, all the XML parsing occurs inside HiveMind, which uses the schema and rules to validate and convert the XML contributions into Java objects.

    You can omit the schema, in which case the elements are left as XML (instances of Element) and your code is responsible for walking the elements and attributes ... but why bother? Far easier to let HiveMind do the conversions and validations.

  • How do I know if the element list is a proxy or not?

    Basically, you can't, short of performing an instanceof check. There isn't any need to tell the difference between the deferred proxy to the element list and the actual element list; they are both immutable and both behave identically.

HiveDoc

HiveMind includes a tool for documenting the HiveMind registry (the combined information from all modules deployed at runtime). At build time, the deployment descriptors of all the relevant HiveMind modules are parsed, and the results are combined into a single file. This master file (used only for this documentation) is then transformed, via XSLT, into a set of HTML files. The end result is very much like JavaDoc: fully hyperlinked, with all services, configuration points, contributions and schemas clearly identified.

User-supplied descriptions are incorporated into the generated documentation. The <attribute>, <configuration-point>, <element>, <module>, <schema> and <service-point> elements may enclose a character-data description. For example:

<module id="mymodule" version="1.0.0">
  A module for my application with my services, etc.
</module>

The HiveDoc for the HiveMind framework and library is available here.

Warning
Details on building the documentation will be provided at a later date.
HiveMind Module Descriptor

The purpose of the module descriptor is to provide a runtime and compile-time description of each HiveMind module in terms of service and configuration extension points and contributions to those extension points.

The descriptor is named hivemodule.xml and is stored in the META-INF directory of the module.

The root element of the descriptor is the <module> element.

attribute

<attribute> is used to define an attribute within an <element>. Inside a <contribution>, only known attributes are allowed in elements; unknown attributes will be logged as an error and ignored. In addition, some attributes are required; again, errors occur if the contributed element does not provide a value for the attribute.

Attribute Type Required ? Description
name string yes The name of the attribute.
required boolean no If true, the attribute must be provided in the contributed configuration element. The default is false.
unique boolean no If true, the attribute must contain a unique value with respect to all other contributions to the same configuration point. The default is false.
translator string no The translator configuration that is used to convert the attribute into a useable type. By default, the attribute is treated as a single string.
configuration-point

The <configuration-point> element defines a configuration extension point.

Attribute Type Required ? Description
id string yes The simple id of the configuration extension point. The fully qualified id for the extension point is created by prefixing with the module's id (and a dot).
occurs unbounded | 0..1 | 1 | 1..n | none no The number of contributions allowed:
  • unbounded (default): any number
  • 0..1: optional
  • 1 : required
  • 1..n: at least one
  • none: none allowed
Note
none doesn't make sense for contributions to a configuration point, but does occasionally make sense for parameters to a factory.
schema-id string no Used to reference a <schema> (in the same module, or a different one) that defines the format of contributions into the configuration point. This may be omitted, in which case the extension point will contain a list of Element objects.

Contains: <schema>

<configuration-point> only defines a configuration point, it does not supply any data into that point. The <contribution> element is used to provide such data.

contribution

The <contribution> element contributes elements to an existing configuration extension point.

Attribute Type Required ? Description
configuration-id string yes Either the id of a <configuration-point> within the module, or the fully qualified id of a <configuration-point> in another module.

The content of the <contribution> consists of elements. These elements are converted, in accordance with the configuration point's <schema>, into Java objects.

conversion

<conversion> is an alternative to <rules> that is generally simpler and more concise. An <element> should contain <conversion> or <rules> but not both.

<conversion> is geared towards the typical case: a straightforward mapping of the element to an instance of a Java class, and a mapping of the element's attributes to the properties of the Java object.

Attribute Type Required ? Description
class string yes The fully qualified name of a Java class to instantiate.
parent-method string no The name of a method of the parent object used to add the created object to the parent. The default, addElement, is appropriate for top-level <element>s.

Contains: <map>

Each attribute will be mapped to a property. A limited amount of name mangling occurs: if the attribute name contains dashes, they are removed, and the character following is converted to upper case. So, an attribute named "complex-attribute-name" would be mapped to a property named "complexAttributeName". Only attributes identified with an <attribute> element will be mapped; others will be ignored.

create-instance

<create-instance> is used, within <service-point> and <implementation> to create the core service implementation for a service by instantiating a class. This is appropriate for simple services that require no explicit configuration.

Attribute Type Required ? Description
class class name yes Fully qualified class name to instantiate.
model primitive | singleton | threaded | pooled no The model used to construct and manage the service. singleton is the default.

Additional service models can be defined via the hivemind.ServiceModels configuration point.

element

The <element> element is used to define an element in a <schema>. <element> may also be nested within another <element>, to indicate an element that may be enclosed within another element.

Attribute Type Required ? Description
name string yes The name of the element.
content-translator string no The translator configuration that is used to convert the element's content into a useable type. By default, the content is treated as a single string.

Contains: <attribute>, <conversion>, <element>, <rules>

Future enhancements to the HiveMind framework will include greater sophistication in defining configuration content.

implementation

The <implementation> element contributes a core implementation or interceptors to a service extension point.

Attribute Type Required ? Description
service-id string yes The id of the service to extend; this may be a fully qualified id, or the local id of a service within the same module.

Contains: <create-instance>, <interceptor>, <invoke-factory>

interceptor

<interceptor> contributes an interceptor factory to a service extension point. An interceptor factory is a service which implements the ServiceInterceptorFactory interface.

When the service is constructed, each invoked interceptor factory will fabricate an interceptor class to provide additional functionality for the service.

Attribute Type Required ? Description
service-id string yes The id of the service that will create the interceptor for the service. This may be the local id of a service defined within the same module, or a fully qualified id.
before string no A list of interceptors whose behavior should come later in execution than this interceptor.
after string no A list of interceptors whose behavior should come earlier in execution than this interceptor.

Like a service implementation factory, a service interceptor factory may need parameters. As with <invoke-factory>, parameters to the interceptor factory are enclosed by the <interceptor> element. The service interceptor factory will decode the parameters using the schema identified by its parameters-schema-id attribute.

Interceptor ordering is based on dependencies; each interceptor can identify, by interceptor service id, other interceptors. Interceptors in the before list are deferred until after this interceptor. Likewise, this interceptor is deferred until after all interceptors in the after list.

Note
The after dependencies will look familiar to anyone who has used Ant or any version of Make. before dependencies are simply the opposite.

The value for before or after is a list of service ids separated by commas. Service ids may be unqualified if they are within the same module. Alternately, the fixed value * may be used instead of a list: this indicates that the interceptor should be ordered absolutely first or absolutely last.

invoke-factory

The <invoke-factory> element is used to provide a service implementation for a service by invoking another service, a factory service.

Attribute Type Required ? Description
service-id string no The id of the factory service. This may be a simple id for a service within the same module, or a fully qualified service id. The default, if not specified, is hivemind.BuilderFactory.
model primitive | singleton | threaded | pooled no The model used to construct and manage the service. singleton is the default.

A service factory defines its parameters in terms of a schema. The content of the <invoke-factory> is converted, in accordance with the factory's schema, and provided to the factory.

Note
Additional service models can be defined via the hivemind.ServiceModels configuration point.
map

The <map> element appears within <conversion> to override the default mapping from an attribute to a property. By default, the property name is expected to match the attribute name (with the name mangling described in the description of <conversion>); the <map> element is used to handle exceptions to the rule.

Attribute Type Required ? Description
attribute string yes The name of the attribute, which should match a name defined by an <attribute> (of the enclosing <element>).
property string yes The corresponding property (of the Java object specified by the enclosing <conversion>).
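
As a hypothetical sketch (the element, class and property names are illustrative): without the <map>, the login-id attribute would map to a property named loginId; the <map> overrides this.

<element name="user">
  <attribute name="login-id" required="true"/>
  <conversion class="com.myco.impl.User">
    <!-- map the login-id attribute to the "name" property instead of "loginId" -->
    <map attribute="login-id" property="name"/>
  </conversion>
</element>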
module

The <module> element is the root element.

Attribute Type Required ? Description
id string yes The id should be a dotted sequence, like a package name. In general, the module id should be the package name.
version version number yes The version of the module as a dotted sequence of three numbers. Example: "1.0.0"

Contains: <contribution>, <configuration-point>, <implementation> , <service-point>, <schema>, <sub-module>

Warning
The version is not currently used, but a later release of HiveMind will include runtime dependency checking based on version number.
parameters-schema

The <parameters-schema> element is identical to the <schema> element, but only appears inside <service-point>, to define the schema for the parameters for a service implementation factory or service interceptor factory.

rules

<rules> is a container for element and attribute parsing rules within an <element>. These rules are responsible for converting the contributed element and its attributes into objects and object properties. The available rules are documented separately.

schema

The <schema> element is used to describe the format of element contributions to a <configuration-point>, or parameters provided to a service or interceptor factory.

Attribute Type Required ? Description
id string yes Assigns a local id to the schema that may be referenced elsewhere.

Contains: <element>

At a future time, the <schema> element will be extended to provide more options and more precise control over the elements that may be provided in a <contribution>. At this time, a <schema> is simply a list of <element> elements.

Warning
When <schema> appears directly within <configuration-point>, or <parameters-schema> appears directly within <service-point>, then the id attribute is not allowed.
service-point

The <service-point> element defines a service extension point.

Attribute Type Required ? Description
id string yes The simple id of the service extension point. The fully qualified id for the extension point is created by prefixing with the module's id (and a dot).
interface class name yes The fully qualified name of the Java interface supplied by this service extension point.
parameters-schema-id string no Used to reference a <schema> (in the same module, or a different one) that defines parameters used by the service. This is used when the service being defined is a ServiceImplementationFactory or a ServiceInterceptorFactory.
parameters-occurs unbounded | 0..1 | 1 | 1..n | none no The number of parameter element contributions allowed:
  • unbounded: any number
  • 0..1: optional
  • 1 (the default): required
  • 1..n: at least one
  • none: none allowed

Contains: <create-instance>, <interceptor>, <invoke-factory> , <parameters-schema>

sub-module

The <sub-module> element is used to identify an additional HiveMind module deployment descriptor. This is used when a single JAR file contains logically distinct packages, each of which should be treated as an individual HiveMind module. This can also be useful as a way to reduce developer conflict against a single, large, central module descriptor by effectively breaking it into smaller pieces. Sub-modules identified in this way must still have their own unique module id.

Attribute Type Required ? Description
descriptor string yes Location of the module descriptor.

The descriptor should be specified as a relative path, either the name of another module descriptor within the same folder, or within a child folder.
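
For example (the descriptor paths here are illustrative):

<module id="com.myco" version="1.0.0">
  <!-- each referenced descriptor is itself a complete module, with its own unique module id -->
  <sub-module descriptor="impl/hivemodule.xml"/>
  <sub-module descriptor="web/hivemodule.xml"/>
</module>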

Contribution Processing Rules

The concept of performing a rules-directed conversion of elements and attributes into Java objects was pioneered (to my knowledge) in the Jakarta Digester framework (which started inside Tomcat, moved to Struts, and is now available on its own).

The technique is very powerful, even in the limited subset of Digester provided by HiveMind (over time, the number of available rules will increase).

Rules

Rules are attached to <element>s. Each rule object has two methods: the begin() method is invoked when the element is first encountered. The content of the element is then processed recursively (which will involve more rules). Once that completes, the end() method is invoked.

Note: begin() is invoked in the order that rules are defined within the <rules> element. end() is invoked in inverse order. This rarely makes any difference.

Element processing is based on an object stack. Several rules will manipulate the top object on the stack, setting properties based on attributes or content. The <create-object> rule will instantiate a new object at begin() and pop it off the stack at end().

In several cases, rule descriptions reference the parent and child objects. The top object on the stack is the child, the object beneath that is the parent. The <set-parent> and <invoke-parent> rules are useful for creating hierarchies of objects.
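
As a sketch, the Datum example shown earlier with <conversion> could also be expressed with explicit rules (same schema, same result):

<element name="datum">
  <attribute name="key" required="true"/>
  <attribute name="value" required="true"/>
  <rules>
    <!-- push a new Datum onto the object stack -->
    <create-object class="hivemind.test.config.impl.Datum"/>
    <!-- copy attributes into properties of the top (child) object -->
    <read-attribute attribute="key" property="key"/>
    <read-attribute attribute="value" property="value"/>
    <!-- add the fully configured Datum to the contribution's element list -->
    <invoke-parent method="addElement"/>
  </rules>
</element>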

create-object

The <create-object> rule is used to create a new object, which is pushed onto the stack at begin(). The object is popped off the stack at end(). <create-object> is typically paired up with <invoke-parent> to connect the new object (as a child) to a parent object.

Attribute Type Required ? Description
class string yes The complete class name of the object to create. The class must be public, and have a no-arguments public constructor.
custom

The <custom> rule is used to provide a custom implementation of the Rule interface. Note that any such rules must not contain any individual state, as they will be reused, possibly by multiple threads.

Attribute Type Required ? Description
class string yes The complete class name of the class implementing the Rule interface.
invoke-parent

The <invoke-parent> rule is used to connect the child (top object on the stack) to its parent (the next object down). A method of the parent is invoked, passing the child as a parameter. This invocation occurs inside the rule's begin() method; to ensure that the child object is fully configured before being added to the parent, place this rule after all properties of the child object have been configured.

Attribute Type Required ? Description
method string yes The name of the method to invoke on the parent object.
depth number no The depth of the parent object within the object stack. The top object (the child) is at depth 0, and the default depth of the parent is 1.
Warning
Top level elements should include an <invoke-parent> rule, and specify the method as addElement. This adds the created, configured object to the list of contributed objects for the <contribution> (or for service factories, adds the object as a parameter).
push-attribute

The <push-attribute> rule reads an attribute, converts it with a translator, and pushes the result onto the stack. It will typically be combined with an <invoke-parent> to get the pushed value added to the configuration point elements (or to some parent object).

Attribute Type Required ? Description
attribute string yes The name of the attribute to read.
read-attribute

The <read-attribute> rule reads an attribute from the current element, optionally translates it (from a string to some other type), and then assigns the value to a property of the top object on the object stack.

Attribute Type Required ? Description
property string yes The name of the property of the top object on the stack to update.
attribute string yes The name of the attribute to read.
skip-if-null boolean no If "true" (the default), then an omitted attribute will be ignored. If "false", the property will be updated regardless.
translator string no A translator that overrides the attribute's translator.
read-content

The <read-content> rule is similar to <read-attribute>, except it concerns the content of the current element (the text wrapped by its start and end tags).

Attribute Type Required ? Description
property string yes The name of the property of the top object on the stack to update.
set-module

<set-module> is used to set a property of the top object on the stack to the module which made the contribution. This is often used when some other attribute of the contribution is the name of a service or configuration extension point (but it is advantageous to defer access to the service or configuration). The module can be used to resolve names of services or configurations that are local to the contributing module.

Attribute Type Required ? Description
property string yes The name of the property of the top object to update with the contributing module.
set-parent

The <set-parent> rule is used to set a property of the child object to the parent object. This allows for backwards connections from child objects to parent objects.

Attribute Type Required ? Description
property string yes The name of the property of the child object to set.
set-property

The <set-property> rule is used to set a property of the top object to a preset value.

Attribute Type Required ? Description
property string yes The name of the property of the child object to set.
value string yes The value to set the property to. The value is interpreted as with the smart translator, meaning that conversion to normal Java types (boolean, int, etc.) will work as expected.
Translators

Commonly, it is necessary to perform some translation or transformation of a string attribute value to convert the value into some other type, such as boolean, integer or date. This can be accomplished by specifying a translator in the <attribute> element (it also applies to element content, with the content-translator attribute of the <element> element).

A translator is an object implementing the Translator interface. The translator value specified in a rule may be either the complete class name of a class implementing the interface, or one of a number of builtin values.

Translator configurations consist of a translator name and an optional initializer string. The initializer string is separated from the translator id by a comma, e.g., int,min=0 (where min=0 is the initializer string). Initializer strings are generally in the format of key=value[,key=value]* ... but each Translator is free to interpret the initializer string in its own way.

The following sections describe the basic translators provided with the framework. You can add additional translators by contributing to the hivemind.Translators configuration point.
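
As an illustrative sketch (the element, attribute and class names are invented), translators are attached to attributes in the schema:

<element name="connection">
  <attribute name="url" required="true"/>
  <!-- converted to an int, which must be zero or greater -->
  <attribute name="retry-count" translator="int,min=0"/>
  <!-- converted to a boolean; defaults to true when omitted -->
  <attribute name="enabled" translator="boolean,default=true"/>
  <conversion class="com.myco.impl.ConnectionDescriptor"/>
</element>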

bean

The bean translator expects its input to be in the form service-id:locator. The service-id references a service implementing BeanFactory.

Note
This translator is contributed by the hivemind.lib module.
boolean

The boolean translator converts an input string into a boolean value. "true" is translated to true, and "false" to false.

A default value is used when the input is blank. Normally, this default is false, but the "default" key in the initializer can override this (i.e., boolean,default=true).

class

The class translator converts a class name into a Class object. The value must be a fully qualified class name. A null input value returns null.

Note
This translator is hard-coded, and does not appear in the hivemind.Translators configuration point.
configuration

The configuration translator converts an input value into a configuration point id, then obtains the elements for that configuration point as a List. The id may be fully qualified, or a local id within the contributing module.

A blank input value returns null.

double

The double translator converts the input into a double precision floating point value. It recognizes three initializer values:

  • default: the default value (normally 0) to use when the input is blank
  • min: a minimum acceptable value
  • max: a maximum acceptable value
enumeration

The enumeration translator converts input strings into enumerated values. Enumeration requires an initializer string, with a special format:
enumeration,class-name,input=field-name[,input=field-name]*

That is, the initializer begins with the name of the class containing some number of public static fields. Input values are mapped against field names. Example:
enumeration,java.lang.Boolean,yes=TRUE,no=FALSE

If the input is null or empty, then the translator returns null.

id-list

Translates a comma-separated list of ids into a comma-separated list of fully qualified ids (qualified against the contributing module). Alternately, passes the value * through as-is. Id lists are typically used to establish ordering of items within a list, as with <interceptor>.

instance

The instance translator converts a fully qualified class name into an object instance. The class must have a public no-arguments constructor.

int

The int translator converts the input into an integer value. It recognizes three initializer values:

  • default: the default value (normally 0) to use when the input is blank
  • min: a minimum acceptable value
  • max: a maximum acceptable value
Note
This translator is hard-coded, and does not appear in the hivemind.Translators configuration point.
long

The long translator converts the input into a long integer (64-bit) value. It recognizes three initializer values:

  • default: the default value (normally 0) to use when the input is blank
  • min: a minimum acceptable value
  • max: a maximum acceptable value
object

The object translator allows the caller to provide an object value in a multitude of ways. The object translator inverts the normal roles; the caller has all the power in determining how to interpret the value, and the schema takes whatever value shows up. The object translator is driven by the hivemind.ObjectProviders configuration.

qualified-id

Translates an id into a fully qualified id (qualified against the contributing module's id).

resource

The resource translator is used to find a resource packaged with (or near) the module's deployment descriptor. The input value is the relative path to a file. The translator converts the input value to a Resource for that file.

If the file doesn't exist, then an error is logged. If a localization of the file exists, then the Resource for that localization is returned.

service

The service translator is used to look up a service in the registry. The input value is either a local service id from the contributing module, or a fully qualified service id.

Note
This translator is hard-coded, and does not appear in the hivemind.Translators configuration point.
service-point

The service-point translator is used to look up a service point (not a service) in the registry. The input value is either a local service id from the contributing module, or a fully qualified service id.

smart

The smart translator attempts an automatic conversion from a string value (the attribute value or element content) to a particular type. It determines the type from the property to which the value will be assigned. The smart translator makes use of the JavaBeans PropertyEditor class for the conversion, which allows this translator to be easily used with most common primitive types, such as int, short and boolean. See the SmartTranslator documentation for more details.

In general, the smart translator is useful for most ordinary Java type properties, unless you want to specify range constraints.

It recognizes one initializer value:

  • default: the default value to use when the input is blank
Note
This translator is hard-coded, and does not appear in the hivemind.Translators configuration point.
Library Dependencies

HiveMind has a number of dependencies on other open-source frameworks. The Ant build files for HiveMind automatically download the required dependencies from the Maven repository on ibiblio.

Filename Name Notes
commons-logging-1.0.3.jar Commons-Logging
easymock-1.1.jar EasyMock testing framework Used only by HiveMindTestCase; needed only if you write your own tests based on it.
jboss-j2ee-3.2.1.jar JBoss J2EE Server Used by several services in the HiveMind library. There is no dependency on JBoss itself, only on the javax.ejb package.
javassist-2.6.jar Javassist bytecode library
oro-2.0.6.jar ORO Regular Expressions
spring-full-1.0.1.jar Spring Used by the hivemind.lib.SpringLookupFactory service.

Typically, the HiveMind libraries, Javassist, ORO and Commons-Logging are sufficient. Your EJB container will supply the javax.ejb classes. Obviously, you need to include Spring if you use Spring, and EasyMock if you write tests using the HiveMindTestCase base class.

In most cases, HiveMind is built against a "handy" version of each dependency: you can most likely change the versions of the dependency jars to match your existing environment. Just remember to write some tests!

Note
HiveMind is explicitly designed to be used with JDK 1.3 and above. It does not use any JDK 1.4 features that are unavailable in JDK 1.3.

History of Changes


Version 1.0 (Sep 22 2004)
  • fix Ensure that the logging interceptor will work properly when wrapping around JDK dynamic proxies. (HLS) Fixes HIVEMIND-55.
Version 1.0-rc-2 (Sep 11 2004)
  • add Add method getSymbolValue() to RegistryInfrastructure and Module (HLS)
  • fix Fix class loader issues concerning fabricated classes in different modules. (HLS) Fixes HIVEMIND-48.
  • fix Allow symbols to be escaped rather than expanded. (HLS) Fixes HIVEMIND-47.
  • fix The previous fix was incomplete; this should close the remaining synchronization gaps. (HLS) Thanks to James Carman. Fixes HIVEMIND-44.
  • fix Class loading issue inside Tomcat. (HLS) Fixes HIVEMIND-49.
  • fix Tweak HiveMind to work properly in a JavaWebStart application. (HLS) Thanks to James Carman. Fixes HIVEMIND-10.
  • add Add clearCache() method to PropertyUtils. (HLS)
  • update Change the API for ClassFactory to take a ClassLoader, not a Module. (HLS)
  • fix Handle duplicated methods in service interfaces, avoiding "attempt to redefine method" errors. (HLS) Fixes HIVEMIND-52.
Version 1.0-rc-1 (Aug 25 2004)
  • update Remove support for Simple Data Language ... it's all XML again. (HLS)
  • update Re-work part of PipelineFactory to take advantage of object references. (HLS)
  • update Make the service-id of <invoke-factory> optional and default to hivemind.BuilderFactory. (HLS)
  • update Change the hivemind.Startup configuration to take an object reference, not a service id. (HLS)
  • fix SmartTranslator should differentiate between blank strings and null input (HLS) Thanks to Michael Frericks. Fixes HIVEMIND-29.
  • update Improvements to HiveBuild to properly handle changing versions or usages of artifacts. (HLS)
  • add Add ability to mark attributes of an element as unique, such that duplicate values in contributions result in errors. (HLS) Thanks to Johan Lindquist. Fixes HIVEMIND-43.
  • fix Add checks to SchemaProcessorImpl for empty stack conditions (HLS) Fixes HIVEMIND-41.
  • fix Add parameters-occurs attribute to <service-point> element. (HLS) Fixes HIVEMIND-33.
  • fix Specify location in all module deployment descriptor parse exceptions. (HLS) Fixes HIVEMIND-23.
  • add Add Quick Reference Sheet. (HLS) Thanks to Stefan Liebig. Fixes HIVEMIND-42.
  • fix Add getCause() method to ApplicationRuntimeException (HLS) Thanks to Luke Blanshard. Fixes HIVEMIND-16.
  • add Add polling methods to Registry. (HLS) Thanks to Naresh Sikha. Fixes HIVEMIND-37.
  • add Add polling methods to BeanFactory. (HLS) Thanks to Naresh Sikha. Fixes HIVEMIND-36.
  • fix Fix broken synchronization in ThreadedServiceModel and PooledServiceModel that could make them randomly fail when creating a service by invoking a factory. (HLS) Thanks to James Carman. Fixes HIVEMIND-44.
  • fix Check for <sub-module> references that do not exist. (HLS) Thanks to Johan Lindquist. Fixes HIVEMIND-34.
Version 1.0-beta-2 (Aug 1 2004)
  • fix Removed dependency on Werkz. (KW) Fixes HIVEMIND-6.
  • update Added link to the Jakarta mailing lists page. (HLS)
  • fix Modified the build scripts to properly include variable info when compiling. (HLS) Thanks to Achim Hügen. Fixes HIVEMIND-21.
  • update Moved the Ant build scripts to a new directory, hivebuild, in preparation for making hivebuild reusable on new projects. (HLS)
  • update Added protected method constructRegistry() to HiveMindFilter. (HLS)
  • update Renamed existing 'object' translator to 'instance', and created a new 'object' translator with great flexibility. Extend BuilderFactory to add a set-object element that leverages the object translator. (HLS)
  • update Created service-property object translator. (HLS)
  • update Added a version of Registry.getService() that omits the service id (but requires that exactly one service point implements the service interface). (HLS) Thanks to Marcus Brito. Fixes HIVEMIND-20.
  • update Extended the BuilderFactory to autowire services. (HLS) Fixes HIVEMIND-22.
  • add Added a new module that contains HiveMind example code. (HLS)
  • update Fixed some latent bugs related to submodules inside the constructRegistry task. Made some more improvements to the hivebuild scripts. (HLS)
  • update Updated the download location for the Forrest distribution. (HLS)
  • update Added more examples and examples documentation. (HLS)
  • add Added StrictErrorHandler, an implementation of ErrorHandler that always throws an ApplicationRuntimeException. (HLS)
  • update Moved the code for the Grabber Ant task into the tree and improve the build scripts to dynamically compile and use it. (HLS)
  • fix Typo in jar-module.xml causes broken build if junit library is missing (HLS) Thanks to Johan Lindquist. Fixes HIVEMIND-31.
  • fix Made a number of changes to ensure HiveMind compatibility with JDK 1.3. (HLS) Fixes HIVEMIND-35.
  • fix Changed some unit tests to adapt to platform line endings. (HLS) Fixes HIVEMIND-26.
  • fix Fix the HiveDoc XSL to use XML (not SDL) output. (HLS) Thanks to Johan Lindquist. Fixes HIVEMIND-46.
Version 1.0-beta-1 (Jun 26 2004)
  • update Added change log. (HLS)
  • update Refactored ClassFab and related classes for easier reuse outside of HiveMind. Added a new suite of tests related to ClassFab. (HLS)
  • add Created two new services in hivemind-lib for creating default implementations of arbitrary interfaces (DefaultImplementationBuilder) and for using that to create placeholder services (PlaceholderFactory). (HLS)
  • add Created MessageFormatter class as a wrapper around ResourceBundle and an easy way for individual packages to gain access to runtime messages. (HLS)
  • update Modified the read-attribute rule to allow a translator to be specified (overriding the translator for the attribute). (HLS)
  • add Added the qualified-id and id-list translators. (HLS)
  • add Added the hivemind.lib.PipelineFactory and related code, schemas, tests and documentation. (HLS)
  • fix Enhance logging of exceptions when setting a service property to a contribution (HLS) Fixes HIVEMIND-4.
  • add Added service hivemind.lib.BeanFactoryBuilder. (HLS)
  • update Removed the <description> element from the module descriptor format; descriptions are now provided as enclosed text for elements that support descriptions. (HLS)
  • update Changed the MethodMatcher classes to use a MethodSignature rather than a Method. (HLS)
  • update Changed MessageFormatter to automatically convert Throwables into their message or class name. (HLS)
  • add Added FileResource. (HLS)
  • update Extended hivemind.BuilderFactory to be able to set the ClassResolver; for a service implementation, and to autowire common properties (log, messages, serviceId, errorHandler, classResolver) if the properties are writeable and of the correct type. (HLS)
  • update Added methods newControl(), newMock(), addControl(), replayControls() and verifyControls() to HiveMindTestCase to simplify test cases that use multiple EasyMock mock objects. (HLS)
  • update Changed HiveMindFilter to log a message after it stores the registry into the servlet context. (HLS)
  • fix Restore the getConfiguration() and expandSymbols() methods to the Registry interface. (HLS) Fixes HIVEMIND-11.
  • fix SimpleDataLanguageParser calls the ContentHandler with a null namespace argument instead of "". That leads to some problems if you want to use transformers. (HLS) Thanks to Dieter Bogdoll. Fixes HIVEMIND-9.
  • fix Fix how certain translator messages are generated to avoid unit test failures. (HLS) Thanks to Achim Hügen. Fixes HIVEMIND-7.
  • update Modify the build files to enable debugging by default. (HLS) Fixes HIVEMIND-12.
  • update Added validation of id attributes in module deployment descriptors (using ORO regular expressions). (HLS)
  • fix Fix some typos in definition of the hivemind.lib.NameLookup service. (HLS)
  • fix Fix a mistake in the BuilderFactory's set-object element, and add integration tests. (HLS) Thanks to Naresh Sikha. Fixes HIVEMIND-25.

Todo List

Release 1.1
  • [lib] JMX Integration → HLS

Downloading HiveMind

HiveMind distributions are available from the Apache Mirrors. Unlike most other Apache (Jakarta) projects, the main HiveMind distribution contains both binaries and source. The documentation, however, is packaged separately:

  • hivemind-release.tar.gz -- Combined binary/source distribution
  • hivemind-release.zip -- Combined binary/source distribution (twice the size of the .tar.gz)
  • hivemind-release-docs.tar.gz -- The HiveMind documentation (the same content as this site)

Each file has a corresponding MD5 checksum file, which you can use to verify that your download is valid. In addition, the GPG key (.asc) files let you verify that the files have not been tampered with.

Warning
Under Internet Explorer, .tar.gz files are not downloaded with the correct file name. Download the file, rename it to have a .tar.gz extension, then open it with WinZip or a similar tool.

CVS Access

Anonymous CVS access is available as follows:

:pserver:anoncvs@cvs.apache.org:/home/cvspublic/jakarta-hivemind  

The CVS repository can also be browsed online from here.

Tutorials and Information

Bootstrapping the Registry

Before you can access the configuration points and services defined in your application's module deployment descriptors, you need a Registry: this section describes how the Registry is assembled.

The key class here is RegistryBuilder, which contains the code for locating and parsing module deployment descriptors, and for assembling a Registry from that data. The descriptors are all found on the classpath: this includes the descriptors packaged into your application's JARs, as well as HiveMind's own descriptor.

Note
As HiveMind grows in popularity, you may begin to see HiveMind module deployment descriptors bundled inside third-party frameworks; for now, though, it is still early days.

Let's look at how it all comes together. The layout of the project is as follows.

[Project Layout]
Service Interfaces and Implementations

The first step is to define the service interface:

package hivemind.examples;

public interface Adder
{
    public int add(int arg0, int arg1);
} 

Next, we need an implementation for that service:

package hivemind.examples.impl;

import hivemind.examples.Adder;

public class AdderImpl implements Adder
{

  public int add(int arg0, int arg1)
  {
    return arg0 + arg1;
  }

} 

The example includes three more interfaces with corresponding implementations: Subtracter, Multiplier and Divider, plus one final interface, Calculator, that ties them all together:

package hivemind.examples;


public interface Calculator extends Adder, Subtracter, Multiplier, Divider
{

}    
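The Subtracter, Multiplier and Divider interfaces are not shown here; each follows the same single-method pattern as Adder. As a sketch, Subtracter presumably looks like this:

package hivemind.examples;

public interface Subtracter
{
    public int subtract(int arg0, int arg1);
}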

The implementation of Calculator requires some wiring: the other four services (Adder, Subtracter, Multiplier and Divider) are injected into it:

package hivemind.examples.impl;

import hivemind.examples.Adder;
import hivemind.examples.Calculator;
import hivemind.examples.Divider;
import hivemind.examples.Multiplier;
import hivemind.examples.Subtracter;

public class CalculatorImpl implements Calculator
{
  private Adder _adder;
  private Subtracter _subtracter;
  private Multiplier _multiplier;
  private Divider _divider;

  public void setAdder(Adder adder)
  {
    _adder = adder;
  }

  public void setDivider(Divider divider)
  {
    _divider = divider;
  }

  public void setMultiplier(Multiplier multiplier)
  {
    _multiplier = multiplier;
  }

  public void setSubtracter(Subtracter subtracter)
  {
    _subtracter = subtracter;
  }

  public int add(int arg0, int arg1)
  {
    return _adder.add(arg0, arg1);
  }

  public int subtract(int arg0, int arg1)
  {
    return _subtracter.subtract(arg0, arg1);
  }

  public int multiply(int arg0, int arg1)
  {
    return _multiplier.multiply(arg0, arg1);
  }

  public int divide(int arg0, int arg1)
  {
    return _divider.divide(arg0, arg1);
  }
}
Module Deployment Descriptor

Finally, we need the HiveMind module deployment descriptor, hivemodule.xml.

The module descriptor defines each of the services in terms of its interface and implementation.

<?xml version="1.0"?>
<module id="hivemind.examples" version="1.0.0">
  <service-point id="Adder" interface="hivemind.examples.Adder">
    <create-instance class="hivemind.examples.impl.AdderImpl"/>
    <interceptor service-id="hivemind.LoggingInterceptor"/>
  </service-point>
  
  <service-point id="Subtracter" interface="hivemind.examples.impl.SubtracterImpl">
    <create-instance class="hivemind.examples.impl.AdderImpl"/>
    <interceptor service-id="hivemind.LoggingInterceptor"/>
  </service-point>
  
  <service-point id="Multiplier" interface="hivemind.examples.Multipler">
    <create-instance class="hivemind.examples.impl.MultiplierImpl"/>
    <interceptor service-id="hivemind.LoggingInterceptor"/>
  </service-point>
  
  <service-point id="Calculator" interface="hivemind.examples.Calculator">
    <invoke-factory>
      <!-- Autowires service properties based on interface! -->
      <construct class="hivemind.examples.impl.CalculatorImpl"/>
    </invoke-factory>
    <interceptor service-id="hivemind.LoggingInterceptor"/>
  </service-point>  
  
</module>

Here the module id, hivemind.examples, was chosen to match the package name, but that is not an absolute requirement.

The interesting part is the use of hivemind.BuilderFactory to construct the Calculator service and to wire it together with the other four services.

Building the Registry

Before any service (or configuration point) can be accessed, the Registry must be constructed: the Registry is the gateway to all of the services and configurations managed by HiveMind.

package hivemind.examples;

import org.apache.hivemind.Registry;
import org.apache.hivemind.impl.RegistryBuilder;

public class Main
{

  public static void main(String[] args)
  {
    int arg0 = Integer.parseInt(args[0]);
    int arg1 = Integer.parseInt(args[1]);

    Registry registry = RegistryBuilder.constructDefaultRegistry();

    Calculator c =
      (Calculator) registry.getService(Calculator.class);

    System.out.println("Inputs " + arg0 + " and " + arg1);

    System.out.println("Add   : " + c.add(arg0, arg1));
    System.out.println("Subtract: " + c.subtract(arg0, arg1));
    System.out.println("Multiply: " + c.multiply(arg0, arg1));
    System.out.println("Divide  : " + c.divide(arg0, arg1));
  }
}
 

RegistryBuilder contains a static method for constructing a Registry, which is suitable for most situations.

Now that we have the Registry, we can use the Calculator interface as the key for looking up the Calculator implementation. In a real application there will often be multiple services implementing the same interface, in which case a fully qualified service id must be specified as well.

Using the reference to the Calculator service, we can finally invoke the add(), subtract(), multiply() and divide() methods.

Building the Example

Building and running the example using Ant is a piece of cake: the details are all inside build.xml:

<?xml version="1.0"?>

<project name="HiveMind Adder Example" default="jar">

  <property name="java.src.dir" value="src/java"/>
  <property name="test.src.dir" value="src/test"/>
  <property name="conf.dir" value="src/conf"/>
  <property name="descriptor.dir" value="src/descriptor"/>
  <property name="target.dir" value="target"/>
  <property name="classes.dir" value="${target.dir}/classes"/>
  <property name="test.classes.dir" value="${target.dir}/test-classes"/>
  <property name="example.jar" value="${target.dir}/hivemind-examples.jar"/>
  <property name="lib.dir" value="lib"/>
  <property name="junit.temp.dir" value="${target.dir}/junit-temp"/>
  <property name="junit.reports.dir" value="${target.dir}/junit-reports"/>

  <path id="build.class.path">
    <fileset dir="${lib.dir}">
      <include name="*.jar"/>
    </fileset>
  </path>
      
  <path id="test.build.class.path">
    <path refid="build.class.path"/>
    <path location="${classes.dir}"/>
  </path>
        
  <path id="run.class.path">
    <path refid="build.class.path"/>
    <pathelement location="${classes.dir}"/>
    <pathelement location="${descriptor.dir}"/>
    <pathelement location="${conf.dir}"/>
  </path>
  
  <path id="test.run.class.path">
    <path refid="run.class.path"/>
    <path location="${test.classes.dir}"/>
  </path>  
  
  <target name="clean" description="Delete all derived files.">
    <delete dir="${target.dir}" quiet="true"/>
  </target>
  
  <target name="compile" description="Compile all Java code.">  
    <mkdir dir="${classes.dir}"/>    
    <javac srcdir="${java.src.dir}" destdir="${classes.dir}" classpathref="build.class.path"/>
  </target>
  
  <target name="compile-tests" description="Compile test classes." depends="compile">
    <mkdir dir="${test.classes.dir}"/>
    <javac srcdir="${test.src.dir}" destdir="${test.classes.dir}" classpathref="test.build.class.path"/>
  </target>
  
  <target name="run-tests" description="Run unit tests." depends="compile-tests">
  
    <mkdir dir="${junit.temp.dir}"/>
    <mkdir dir="${junit.reports.dir}"/>

    <junit haltonfailure="off" failureproperty="junit-failure" tempdir="${junit.temp.dir}">    
      <classpath refid="test.run.class.path"/>
    
      <formatter type="xml"/>
      <formatter type="plain"/>
      <formatter type="brief" usefile="false"/>
    
      <batchtest todir="${junit.reports.dir}">
        <fileset dir="${test.classes.dir}">
          <include name="**/Test*.class"/>
        </fileset>
      </batchtest>
    </junit>

    <fail if="junit-failure" message="Some tests failed."/>

  </target>
  
  <target name="jar" description="Construct the JAR file." depends="compile,run-tests">
    <jar destfile="${example.jar}">
      <fileset dir="${classes.dir}"/>
    <fileset dir="${descriptor.dir}"/>
    </jar>
  </target>
  
  <target name="run" depends="compile" description="Run the Adder service.">
    <java classname="hivemind.examples.Main" classpathref="run.class.path" fork="true">
      <arg value="11"/>
      <arg value="23"/>
    </java>
  </target>

</project>

The important point is that both the HiveMind module deployment descriptor and the class files are packaged into the JAR.

The only other unusual detail is that src/conf is added to the runtime classpath: this is so that the log4j.properties configuration file is found; otherwise Log4J would write console error messages about being misconfigured.

Running the Example
bash-2.05b$ ant run
Buildfile: build.xml

compile:
    [mkdir] Created dir: C:\workspace\hivemind-example\target\classes
    [javac] Compiling 15 source files to C:\workspace\hivemind-example\target\classes

run:
     [java] Inputs 11 and 23
     [java] Add     : 34
     [java] Subtract: -12
     [java] Multiply: 253
     [java] Divide  : 0



BUILD SUCCESSFUL
Total time: 3 seconds
Inversion of Control

Seems like Inversion of Control is all the rage these days. The Avalon project is completely based around it. Avalon uses detailed assembly descriptions to tie services together ... there's no way an Avalon component can "look up" another component; in Avalon you explicitly connect services together.

That's the basic concept of Inversion of Control; you don't create your objects, you describe how they should be created. You don't directly connect your components and services together in code, you describe which services are needed by which components, and the container is responsible for hooking it all together. The container creates all the objects, wires them together by setting the necessary properties, and determines when methods are invoked.

More recently, this concept has been renamed Dependency Injection.

There are three different implementation pattern types for IoC:

type-1 Services need to implement a dedicated interface through which they are provided with an object from which they can look up dependencies (other services). This is the pattern used by the earlier containers provided by Avalon.
type-2 Dependencies of a service are assigned via JavaBeans properties (setter methods). Both HiveMind and Spring use this approach.
type-3 Dependencies of a service are provided as constructor parameters (and are not exposed as JavaBeans properties). This is the exclusive approach used by PicoContainer, and is also used in HiveMind and Spring.

HiveMind is a much looser system than Avalon. HiveMind doesn't have an explicit assembly stage; it wires together all the modules it can find at runtime. HiveMind is responsible for creating services (including core implementations and interceptors). It is quite possible to create service factories that do very container-like things, including connecting services together. hivemind.BuilderFactory does just that, instantiating an object to act as the core service implementation, then setting properties of the object, some of which are references to services and configuration point element data.

In HiveMind, you are free to mix and match type-2 (property injection) and type-3 (constructor injection), setting some (or all) dependencies via a constructor and some (or all) via JavaBeans properties.

In addition, JavaBeans properties (for dependencies) can be write-only. You only need to provide a setter method. The properties are properties of the core service implementation; there is no need for the accessor methods to be part of the service interface.
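As a rough sketch of mixing the two styles (the Greeter and Formatter interfaces and the class below are invented for illustration, not part of HiveMind), a core implementation might take one dependency through its constructor and another through a write-only setter:

package ex.wiring;

// Invented collaborator interfaces, standing in for real services.
interface Greeter
{
    String greet(String name);
}

interface Formatter
{
    String format(String message);
}

public class GreetingServiceImpl
{
    // Type-3: dependency supplied as a constructor parameter.
    private final Greeter _greeter;

    // Type-2: dependency supplied via a write-only JavaBeans property;
    // the setter does not need to appear in any service interface.
    private Formatter _formatter;

    public GreetingServiceImpl(Greeter greeter)
    {
        _greeter = greeter;
    }

    public void setFormatter(Formatter formatter)
    {
        _formatter = formatter;
    }

    public String greetingFor(String name)
    {
        return _formatter.format(_greeter.greet(name));
    }
}

In the module descriptor, a constructor parameter element would supply the Greeter, while a <set-service> element (or autowiring) would supply the Formatter.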

HiveMind's lifecycle support is much more rudimentary than Avalon's. Your service implementations can receive callbacks when they are first created, and when they are discarded, by implementing certain interfaces.

Purist inversion of control, as in Avalon, may be more appropriate in well-constrained systems containing untrusted code. HiveMind is a layer below that, not an application server, but a microkernel. Although I can see using HiveMind as the infrastructure of an application server, even an Avalon application server, it doesn't directly overlap otherwise.

HiveMind Localization

Every HiveMind module may include a set of messages. Messages are stored alongside the module deployment descriptor, as META-INF/hivemodule.properties (within the module's JAR).

Note
In reality, the name of the properties file is the name of the descriptor, with the extension (".xml") stripped off and a localization code plus ".properties" appended. This is only relevant if you load module deployment descriptors from a non-standard location (possibly via the <sub-module> element).

Services can receive the localized messages for their module as an instance of Messages, which has methods for accessing messages and for formatting messages with arguments.
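For illustration, a service implementation that has been given its module's Messages (for example via the messages-property attribute of hivemind.BuilderFactory, or by autowiring) might use it as sketched below; the class and the message key are invented, and the format() method shown should be checked against the actual Messages interface:

package ex.messages;

import org.apache.hivemind.Messages;

public class WelcomeServiceImpl
{
    private Messages _messages;

    public void setMessages(Messages messages)
    {
        _messages = messages;
    }

    public String welcome(String userName)
    {
        // "welcome-message" is a hypothetical key in hivemodule.properties,
        // e.g.: welcome-message=Welcome, {0}!
        return _messages.format("welcome-message", new Object[] { userName });
    }
}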

Within the module descriptor, in <contribution> and <invoke-factory> elements, you can reference a localized message in an attribute value or in element content simply by prefixing the message key with "%". For example:

  
<contribution configuration-id=...>
  <some-item message="%message.key">
    %other.message.key
  </some-item>
</contribution> 

The two keys (message.key and other.message.key) are searched for in the contributing module's messages.

Conveniently, HiveMind supplies a substitute value for any undefined message. If a message is missing from the properties file, HiveMind uses a placeholder value instead: the key in upper case, enclosed in brackets, e.g. [MESSAGE.KEY]. This keeps the application running (without throwing errors), while making it quite clear that a message is missing.

Localizing a module's messages is simply a matter of providing additional files. For example, adding a second file, META-INF/hivemodule_fr.properties, provides French localizations of the messages. Where a key appears in both files, the more specific file takes precedence.

Setting the Locale

When a Registry is created by RegistryBuilder, a locale is specified. This becomes the locale for the Registry and, by extension, for all of the modules in the Registry. The locale never changes.

HiveMind Multi-Threading

HiveMind is explicitly targeted at J2EE deployment, as WARs or EARs, and particularly as part of Tapestry applications. Of course, J2EE is not a requirement, and HiveMind is quite useful even in simpler, standalone environments.

In the J2EE world, multi-threading is always an issue. HiveMind services are usually singletons and must be prepared to operate in a multi-threaded environment. That means that, just like servlets, services should not hold any client-specific state.

Construction Phase

HiveMind expects that initial processing occurs in a single startup thread. This is the construction phase, during which the module deployment descriptors are located and parsed, and their contents are used to assemble the Registry: the domain of RegistryBuilder.

Construction itself is not thread-safe. This includes the parser and the other code involved (virtually all of which is hidden from your application).

The construction phase ends when RegistryBuilder returns the Registry from the constructRegistry() method.

Runtime Phase

Everything that happens within the Registry and the modules must be thread-safe. Key methods are always synchronized. In particular, the methods that construct services and configuration points are thread-safe. Operations such as building an interceptor stack, instantiating a core service implementation, and converting XML into Java objects all occur in a thread-safe manner. However, different threads may be building different services simultaneously. That means, for example, that a single interceptor service implementation may be asked to add interceptors to two or more different services at the same time, so interceptor service implementations must themselves be thread-safe.

On the other hand, the Java objects constructed from XML <rules> do not need to be thread-safe, because that construction occurs in a properly synchronized manner ... only a single thread will be converting XML into Java objects for any single configuration point.

Managing Service State

When a service simply must maintain state between method invocations, there are several good options:

  • Store the data in an object that is passed to or returned from the service
  • Use the hivemind.ThreadLocalStorage service to store the data in a thread-local map
  • Use the threaded or pooled service models, which allow a service to keep its state while its service methods are being invoked
HiveMind Servlet Filter

HiveMind includes a feature to streamline the use of HiveMind within a web application: a servlet filter that automatically constructs the HiveMind Registry and ensures that end-of-request thread cleanup occurs.

The filter class is HiveMindFilter. When initialized it constructs a standard HiveMind Registry, and it shuts down the Registry when the containing application is undeployed.

Each request ends with a call to the Registry's cleanupThread() method, which cleans up any thread-local values, including service implementations that are bound to the current thread.

The HiveMindFilter class includes a static method for accessing the Registry.
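For example, a servlet running behind the filter might obtain the Registry as sketched below; the getRegistry() accessor name is an assumption based on this description, so verify it against the HiveMindFilter JavaDoc:

package ex.web;

import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import org.apache.hivemind.Registry;
import org.apache.hivemind.servlet.HiveMindFilter;

public class MyServlet extends HttpServlet
{
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
    {
        // Assumed static accessor: the filter exposes the Registry it constructed.
        Registry registry = HiveMindFilter.getRegistry(request);

        // ... look up services from the registry as usual ...
    }
}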

Deployment Descriptor

To make use of the filter, it must be declared inside the web deployment descriptor (web.xml). The filter can be mapped to servlets, to URL patterns, or both. For example:

<filter>
  <filter-name>HiveMindFilter</filter-name>
  <filter-class>org.apache.hivemind.servlet.HiveMindFilter</filter-class>
</filter>

<servlet>
  <servlet-name>MyServlet</servlet-name>
  <servlet-class>myco.servlets.MyServlet</servlet-class>
</servlet>

<filter-mapping>
  <filter-name>HiveMindFilter</filter-name>
  <servlet-name>MyServlet</servlet-name>
</filter-mapping>
 
Overriding a Service

It is not uncommon to want to override an existing service and replace it with a new implementation. This goes beyond simply intercepting the service ... the goal is to replace the original implementation with a new implementation. This occurs frequently in Tapestry, where an existing service is often replaced with a new implementation that handles application-specific cases (and delegates most cases to the default implementation).

Note
Plans are afoot to refactor Tapestry 3.1 to make considerable use of HiveMind. Many of the ideas represented in HiveMind germinated in earlier Tapestry releases.

HiveMind doesn't have an explicit mechanism for accomplishing this ... that's because it's reasonable to replace and wrap existing services just with the mechanisms already available.

Step One: A non-overridable service

To describe this technique, we'll start with an ordinary, everyday service. In fact, for discussion purposes, there will be two services: Consumer and Provider. Ultimately, we'll show how to override Provider. Also for discussion purposes, we'll do all of this in a single module, though (of course) you can just as easily split it up across many modules.

To begin, we'll define the two services, and set Provider as a property of Consumer:

<module id="ex.override" version="1.0.0">
  <service-point id="Provider" interface="ex.override.Provider">
    <create-instance class="ex.override.impl.ProviderImpl"/>
  </service-point>
  
  <service-point id="Consumer" interface="ex.override.Consumer">
    <invoke-factory>
      <construct class="ex.override.impl.Consumer">
        <set-service property="provider" service-id="Provider"/>
    </invoke-factory>
  </service-point>
</module> 
Step Two: Add some indirection

In this step, we still have just the two services ... Consumer and Provider, but they are linked together less explicitly, by using substitution symbols.

<module id="ex.override" version="1.0.0">
  <service-point id="Provider" interface="ex.override.Provider">
    <create-instance class="ex.override.impl.ProviderImpl"/>
  </service-point>
  
  <contribution configuration-id="hivemind.FactoryDefaults">
    <default symbol="ex.override.Provider" value="ex.override.Provider"/>
  </contribution>
  
  <service-point id="Consumer" interface="ex.override.Consumer">
    <invoke-factory>
      <construct class="ex.override.impl.Consumer">
        <set-service property="provider" service-id="${ex.override.Provider}"/>
      </construct>
    </invoke-factory>
  </service-point>
</module>       

The indirection is in the form of the symbol ex.override.Provider, which evaluates to the service id ex.override.Provider and the end result is the same as step one. We needed to use a fully qualified service id because, ultimately, we don't know in which modules the symbol will be referenced.

Step Three: Override!

The final step is to define a second service and slip it into place. For kicks, the OverrideProvider service will get a reference to the original Provider service.

<module id="ex.override" version="1.0.0">
  <service-point id="Provider" interface="ex.override.Provider">
    <create-instance class="ex.override.impl.ProviderImpl"/>
  </service-point>
  
  <contribution configuration-id="hivemind.FactoryDefaults">
    <default symbol="ex.override.Provider" value="ex.override.Provider"/>
  </contribution>
  
  <service-point id="OverrideProvider" interface="ex.override.Provider">
    <invoke-factory>
      <construct class="ex.override.impl.OverrideProviderImpl">
        <set-service property="defaultProvider" service-id="Provider"/>
      </construct>
    </invoke-factory>
  </service-point>
  
  <!-- ApplicationDefaults overrides FactoryDefaults -->
  
  <contribution configuration-id="hivemind.ApplicationDefaults">
    <default symbol="ex.override.Provider" value="ex.override.OverrideProvider"/>
  </contribution>
  
  <!-- Consumer unchanged from step 2 -->
  
  <service-point id="Consumer" interface="ex.override.Consumer">
    <invoke-factory>
      <construct class="ex.override.impl.Consumer">
        <set-service property="provider" service-id="${ex.override.Provider}"/>
      </construct>
    </invoke-factory>
  </service-point>
</module>  

The new service, OverrideProvider, gets a reference to the original service using its real id. It can't use the symbol that the Consumer service uses, because that would end up pointing it at itself. Again, in this example it's all happening in a single module, but it could absolutely be split up, with OverrideProvider and the configuration to hivemind.ApplicationDefaults in an entirely different module.

hivemind.ApplicationDefaults overrides hivemind.FactoryDefaults. This means that the Consumer will be connected to ex.override.OverrideProvider.

Note that the <service-point> for the Consumer doesn't change between steps two and three.

Limitations

The main limitation to this approach is that you can only do it once for a service; there's no way to add an EvenMoreOverridenProvider service that wraps around OverrideProvider (that wraps around Provider). Making multiple contributions to the hivemind.ApplicationDefaults configuration point with the same symbol name will result in a runtime error ... and unpredictable results.

This could be addressed by adding another source to the hivemind.SymbolSources configuration.

To be honest, if this kind of indirection becomes extremely frequent, then HiveMind should change to accommodate the pattern, perhaps adding an <override> element, similar to an <interceptor> element.

Module: hivemind

Services

hivemind.BuilderFactory Service

The BuilderFactory service is a service implementation factory ... a service that is used to construct other services.

The builder factory takes a single parameter element (usually with nested elements):

<construct
    class="..." autowire-services="..." log-property="..." messages-property="..."
    service-id-property="..." initialize-method="..."
    error-handler-property="..." class-resolver-property="...">
    
    <log/>
    <messages/>
    <service-id/>
    <error-handler/>
    <class-resolver/>
    <string>  ... </string>
    <boolean> ... </boolean>
    <configuration> ... </configuration>
    <int> ... </int>
    <long> ... </long>
    <resource> ... </resource>
    <service> ... </service>
    <object> ... </object>
    
    <event-listener service-id="..." event-type-name="..."/>
    <set property="..." value="..."/>
    <set-configuration property="..." configuration-id="..."/>
    <set-resource property="..." path="..."/>
    <set-service property="..." service-id="..."/>
    <set-object property="..." value="..."/>
</construct> 

The attributes of the construct element are used to specify the implementation class and set common service properties. Nested elements supply the constructor parameters and configure other specific properties of the implementation (the set-... elements).

Note
BuilderFactory is a complex tool, with support for both constructor dependency injection and property dependency injection. Many of the options are rarely used; the most general purpose and most frequently used are set, set-object and event-listener (along with autowiring of certain properties).
construct
Attribute Required ? Description
autowire-services no If true (the default), then the BuilderFactory will attempt to automatically wire any services that are not otherwise set. Any property that is writable, and whose type is an interface, will be autowired. For such properties, it is required that there be a single service point that implements the interface. An error will be logged if no service point implements the interface, or if multiple service points implement the interface.
class yes The fully qualified name of the class to instantiate.
class-resolver-property no The property to receive the module's ClassResolver.
error-handler-property no The name of a property to receive the module's ErrorHandler instance (which is used to report recoverable errors).
initialize-method no The name of a method (public, no parameters) to invoke after the service is constructed, to allow it to perform any final initialization before being put into use.
log-property no The name of a property which will be assigned a org.apache.commons.logging.Log instance for the service. The Log is created from the complete service id (not the name of the class). If omitted, no Log will be assigned.
messages-property no Allows the Messages for the module to be assigned to a property of the instance.
service-id-property no Allows the service id of the constructed service to be assigned to a property of the service implementation.

The remaining elements are enclosed by the <construct> element, and are used to supply constructor parameters and configure properties of the constructed service implementation.

Autowiring

BuilderFactory will automatically set certain common properties of the service implementation. By using standard names (and standard types), the need to specify attributes such as log-property and error-handler-property is avoided. Simply having a writable property with the correct name and type is sufficient:

Property name Property Type
classResolver ClassResolver
errorHandler ErrorHandler
log org.apache.commons.logging.Log
messages Messages
serviceId String

In addition, if the initialize-method attribute is not specified, and the service implementation includes a public method initializeService() (no parameters, returns void), then initializeService() will be invoked as the initializer.
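For example, an implementation class along the following lines (the class itself is invented for illustration) would have its log, messages and serviceId properties autowired, and its initializeService() method invoked, without any additional attributes in the descriptor:

package ex.impl;

import org.apache.commons.logging.Log;
import org.apache.hivemind.Messages;

public class ExampleServiceImpl
{
    private Log _log;
    private Messages _messages;
    private String _serviceId;

    // Standard property names and types, so BuilderFactory autowires them.
    public void setLog(Log log)
    {
        _log = log;
    }

    public void setMessages(Messages messages)
    {
        _messages = messages;
    }

    public void setServiceId(String serviceId)
    {
        _serviceId = serviceId;
    }

    // Invoked automatically when no initialize-method attribute is given.
    public void initializeService()
    {
        _log.debug("Initialized service " + _serviceId);
    }
}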

Constructor Parameter Elements

The following table summarizes the elements which can be used to specify constructor parameters for the class to instantiate. These elements can be mixed freely with the properties configuring elements. It is important to know that the number, type, and order of the constructor parameter elements determine the constructor that will be used to instantiate the implementation.

Element Matched Parameter Type Passed Parameter Value
error-handler ErrorHandler The module's ErrorHandler, used to report recoverable errors.
log org.apache.commons.logging.Log The Log is created from the complete service id (not the name of the class) of the created service.
messages org.apache.hivemind.Messages The Messages object of the invoking module.
object variable As determined by the object translator, this is decidedly free-form. See hivemind.ObjectProviders.
service-id java.lang.String The service id of the constructed service.
string java.lang.String This element's content.
boolean boolean This element's content. Must be either "true" or "false".
configuration java.util.List The List of the elements of the configuration specified by this element's content as a configuration id. The id can either be a simple id for a configuration within the same module as the constructed service, or a complete id.
int int This element's content parsed as an integer value.
long long This element's content parsed as a long value.
resource org.apache.hivemind.Resource This element's content parsed as a path to a Resource, which is relative to the contributing module's deployment descriptor. If available, a localized version of the Resource will be selected.
service interface corresponding to specified service The implementation of the service with the id given in this element's content. The id can either be a simple id for a service within the same module as the constructed service, or a complete id.
Service Property Configuring Elements
event-listener
Attribute Description
service-id The service which produces events. The service must provide, in its service interface, the necessary add and remove listener methods.
name The name of an event set to be registered. If not specified, all applicable event sets are used.

If the name attribute is not specified, then BuilderFactory will register for all applicable event sets. For each event set provided by the specified service, BuilderFactory will check to see if the service instance being constructed implements the corresponding listener interface ... if so, the constructed service instance is added as a listener. When the name attribute is specified, the constructed service instance is registered as a listener of just that single type.

Event notifications go directly to the constructed service instance; they don't go through any proxies or interceptors for the service. The service instance must implement the listener interface, the constructed service's service interface does not have to extend the listener interface. In other words, event notifications are "behind the scenes", not part of the public API of the service.

It is perfectly acceptable to include multiple <event-listener> elements for a number of different event producing services.

It is not enough for the event producer service to have an add listener method (i.e., addPropertyChangeListener(PropertyChangeListener)). To be recognized as an event set, there must also be a corresponding remove listener method (i.e., removePropertyChangeListener(PropertyChangeListener)), even though BuilderFactory does not make use of the remove method. This is an offshoot of how the JavaBeans API defines event sets.
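As a sketch, an event-producer service interface that satisfies these rules (the StockTicker name is invented) might look like this, using the standard java.beans listener type:

package ex.events;

import java.beans.PropertyChangeListener;

// The add/remove pair below forms a JavaBeans event set that the
// <event-listener> element can register the constructed service against.
public interface StockTicker
{
    public void addPropertyChangeListener(PropertyChangeListener listener);

    public void removePropertyChangeListener(PropertyChangeListener listener);
}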

set
Attribute Description
property The name of the property to set.
value A value to be assigned to the property. The value will be converted to an appropriate type for the property.
set-configuration
Attribute Description
property The name of the property to set.
configuration-id The id of a configuration, either a simple id for a configuration within the same module as the constructed service, or a complete id. The property will be assigned a List of the elements of the configuration.
set-object
Attribute Description
property The name of the property to set.
value The selector used to find an object value. The selector consists of a prefix (such as "service" or "configuration"), a colon, and a locator whose interpretation is defined by the prefix. For example, service:MyService. See hivemind.ObjectProviders.
set-resource
Attribute Description
property The name of the property to set.
path The path to a Resource, relative to the contributing module's deployment descriptor. If available, a localized version of the Resource will be selected.
set-service
Attribute Description
property The name of the property to set.
service-id The id of a service, either a simple id for a service within the same module as the constructed service, or a complete id. The property will be assigned the service.
hivemind.LoggingInterceptor Service

The LoggingInterceptor service is used to add logging capability to a service, i.e.:

<interceptor service-id="hivemind.LoggingInterceptor">
  <include method="..."/>
  <exclude method="..."/>    
</interceptor>

The service may take parameters (which control which methods will be logged).

The logging interceptor uses a Log derived from the service id (of the service to which logging is being added).

The service logs, at debug level, the following events:

  • Method entry (with parameters)
  • Method exit (with return value, if applicable)
  • Thrown exceptions (checked and runtime)

By default, the interceptor will log all methods. By supplying parameters to the interceptor, you can control exactly which methods should be logged. The include and exclude parameter elements specify methods to be included (logged) and excluded (not logged). The method attribute is a method pattern, a string used to match methods based on name, number of parameters, or type of parameters; see the MethodMatcher class for more details.

A method which does not match any supplied pattern will be logged.

hivemind.ShutdownCoordinator Service

Service implementations that need to perform any special shutdown logic should implement the RegistryShutdownListener interface, and let the hivemind.BuilderFactory register them for notifications.

hivemind.ThreadLocalStorage Service

The ThreadLocalStorage service implements the ThreadLocalStorage interface. This service acts as a kind of Map for temporary data. The map is local to the current thread, and is cleared at the end of the transaction.

It is your responsibility to ensure that keys are unique, typically by prefixing them with a module id or package name.
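A minimal sketch of a class using the service, assuming it has been injected as a property; the get() and put() methods reflect the map-like API described above, so confirm them against the ThreadLocalStorage interface:

package ex.state;

import org.apache.hivemind.service.ThreadLocalStorage;

public class RequestTimer
{
    private ThreadLocalStorage _storage;

    public void setStorage(ThreadLocalStorage storage)
    {
        _storage = storage;
    }

    public void start()
    {
        // The key is prefixed with a package name to keep it unique.
        _storage.put("ex.state.startTime", new Long(System.currentTimeMillis()));
    }

    public long elapsedMillis()
    {
        Long start = (Long) _storage.get("ex.state.startTime");

        return System.currentTimeMillis() - start.longValue();
    }
}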

Configurations

hivemind.ApplicationDefaults Configuration

The ApplicationDefaults configuration is used to set default values for substitution symbols. Application defaults override contributions to hivemind.FactoryDefaults. The contribution format is the same as for FactoryDefaults:

  <default symbol="..." value="..."/>
hivemind.EagerLoad Configuration

The EagerLoad configuration allows services to be constructed when the Registry is first initialized. Normally, HiveMind goes to great lengths to ensure that services are only constructed when they are first needed. Eager loading is appropriate during development (to ensure that services are configured properly), and some services that are event driven may need to be instantiated early, so that they may begin receiving event notifications even before their first service method is invoked.

Care should be taken when eagerly loading services that use the pooled or threaded service models; be sure to invoke the Registry's cleanupThread() method immediately after creating the Registry.

Contributions are as follows:

<load service-id="..."/> 
hivemind.FactoryDefaults Configuration

The FactoryDefaults configuration is used to set default values for substitution symbols. Contributions look like:

<default symbol="..." value="..."/>

Values defined here can be overridden by making a contribution to hivemind.ApplicationDefaults.

hivemind.ObjectProviders Configuration

The ObjectProviders configuration drives the object translator. Contributions define an object provider in terms of a prefix (such as service) and a service that implements the ObjectProvider interface.

Object providers exist to support object references. Object references are a way to contribute objects that may be references to services, or to other objects, or may even be created on the spot. This is often used with the hivemind.BuilderFactory's <set-object> element.

An object reference consists of a prefix and a locator separated by a colon. The interpretation of the locator is different for each provider. The prefix determines which provider will be utilized to interpret the locator.

The contribution format defines the prefix and the providing service for each object provider:

<provider prefix="..." service-id="..."/> 

Prefixes must be unique.

The following default prefixes are available:

Prefix Description Example
bean The locator is a BeanFactory locator, consisting of the id of a BeanFactory service, a colon, and an optional initializer for the bean.
Note
Provided by the HiveMind library.
bean:ValidatorFactory:string,required
configuration The locator is the id of a configuration. configuration:MyConfiguration
instance The locator is a fully qualified class name, which must have a public no-arguments constructor. instance:com.example.MyObject
service The locator is the id of a service. service:MyService
service-property The locator provides a service id and a property name (provided by that service), separated by a colon. service-property:MyService:activeRequest
hivemind.ServiceModels Configuration

The ServiceModels configuration defines the available service models. Service models control the lifecycle of services: when they are created and when they are destroyed (often tied to the current thread's activity).

The contribution format defines the name and class for each service model:

<service-model name="..." class="..."/> 

An instance of the specified class will be instantiated. The class must implement the ServiceModelFactory interface (which creates an instance of the actual service model for a particular service extension point).

Names of service models must be unique; it is not possible to override the built-in service model factories.

hivemind.SymbolSources Configuration

The SymbolSources configuration is used to define new SymbolSources (providers of values for substitution symbols).

Contributions are of the form:

<source name="..." before="..." after="..." class="..." service-id="..."/>	

Sources are ordered based on the name, before and after attributes. before and after may be comma-separated lists of other sources, may be the simple value *, or may be omitted.

Only one of the class and service-id attributes should be specified. The former is the complete name of a class (implementing the SymbolSource interface); the latter is used to contribute a service (which must also implement the interface).

hivemind.Translators Configuration

The Translators configuration defines the translators that may be used with XML conversion rules.

The contribution format defines the name and class for each translator:

<translator name="..." class="..."/> 

An instance of the specified class will be instantiated. The class must implement the Translator interface. It should have a no-args and/or single String constructor.

Names of translators must be unique; it is not possible to override existing translators. A single translator, class, is hard-coded into HiveMind; the others appear as ordinary contributions.

Ant Tasks

ConstructRegistry Ant Task

Reads some number of HiveMind module descriptors and assembles a single registry file from them. The output registry consists of a <registry> element which contains one <module> element for each module descriptor read. This registry is useful for generating documentation.

The registry XML is only updated if it does not exist, or if any of the module deployment descriptors are newer.

This task is implemented as org.apache.hivemind.ant.ConstructRegistry.

Parameters
Attribute Description Required
output The file to write the registry to. Yes
Parameters specified as nested elements
descriptors

A path-like structure, used to identify which HiveMind module descriptors (hivemodule.xml) should be included.

Each path element should either be a module deployment descriptor, or be a JAR containing a deployment descriptor (in the META-INF folder).

Examples

Create target/registry.xml from all hivemodule.xml descriptors found inside the src directory.

<constructregistry output="target/registry.xml">
  <descriptors>
    <fileset dir="src">
      <include name="**/hivemodule.xml"/>
    </fileset>
  </descriptors>
</constructregistry> 
ManifestClassPath Ant Task

Converts a classpath into a space-separated list of items used to set the Manifest Class-Path attribute.

This is highly useful when modules are packaged together inside an Enterprise Application Archive (EAR). Library modules may be deployed inside an EAR, but (in the current J2EE specs) there's no way for such modules to be added to the classpath in the deployment descriptor; instead, each JAR is expected to have a Manifest Class-Path attribute identifying the exact list of JARs that should be in the classpath. This task is used to generate that list.

This task is implemented as org.apache.hivemind.ant.ManifestClassPath.

Parameters
Attribute Description Required
property The name of a property to set as a result of executing the task. Yes
directory If specified, then the directory attribute does two things:
  • It acts as a filter, limiting the results to just those elements that are within the directory
  • It strips off the directory as a prefix (plus the separator), creating results that are relative to the directory.
No
Parameters specified as nested elements
classpath

A path-like structure, used to identify what the classpath should be.

Examples

Generate a list of JARs inside the ${target} directory as relative paths and use it to set the Class-Path manifest attribute.

<manifestclasspath directory="${target}" property="manifest.class.path">
  <classpath refid="build.class.path"/>
</manifestclasspath>

<jar . . .>
  <manifest>
    <attribute name="Class-Path" value="${manifest.class.path}"/>
    . . .
  </manifest>
</jar>

 

Module: hivemind.lib

Services

hivemind.lib.BeanFactoryBuilder Service

The BeanFactoryBuilder service is used to construct a BeanFactory instance. A BeanFactory will vend out instances of classes. A logical name is mapped to a particular Java class to be instantiated.

Client code can retrieve beans via the factory's get() method. Beans are retrieved using a locator, which consists of a name and an optional initializer separated by a comma. The initializer is provided to the bean via an alternate constructor that takes a single string parameter. Initializers are used, typically, to initialize properties of the bean, but the actual implementation is internal to the bean class.
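For illustration, client code might look up a BeanFactory service and retrieve a bean using a locator with an initializer; the service id here is hypothetical:

package ex.beans;

import org.apache.hivemind.Registry;
import org.apache.hivemind.lib.BeanFactory;

public class ValidatorClient
{
    public Object createRequiredStringValidator(Registry registry)
    {
        // "ex.beans.ValidatorFactory" is a hypothetical BeanFactory service.
        BeanFactory factory = (BeanFactory)
            registry.getService("ex.beans.ValidatorFactory", BeanFactory.class);

        // Locator: a bean name, a comma, and an optional initializer.
        return factory.get("string,required");
    }
}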

Usage

The service takes a single parameter element:

<factory vend-class="..." configuration-id="..." default-cacheable="..."/>

The vend-class attribute is the name of a class all vended objects must be assignable to (as a class or interface). This is used to validate contributed bean definitions. By default it is java.lang.Object.

The configuration-id is the id of the companion configuration (used to define object classes).

The optional default-cacheable attribute sets the default for whether instantiated beans should be cached for reuse. By default this is true, which is appropriate for most use cases where the vended objects are immutable.

Configuration

Each BeanFactory service must have a configuration, into which beans are contributed:

<configuration-point id="..." schema-id="hivemind.lib.BeanFactoryContribution"/>
         

Contributions into the configuration are used to specify the bean classes to instantiate, as:

<bean name="..." class="..." cacheable="..."/>  

name is a unique name used to reference an instance of the class.

class is the Java class to instantiate.

cacheable determines whether instances of the class are cacheable (that is, have immutable internal state and should be reused), or non-cacheable (presumably, because of mutable internal state).

hivemind.lib.DefaultImplementationBuilder Service

The DefaultImplementationBuilder service is used to create default implementations of interfaces. As described in the service interface JavaDoc, methods return null, 0 or false (depending on return type) and otherwise do nothing.

hivemind.lib.EJBProxyFactory Service

The EJBProxyFactory service is used to construct a HiveMind service that delegates to an EJB stateless session bean. The EJB's remote interface is the service interface. When the first service method is invoked, the fabricated proxy will perform a JNDI lookup (using the NameLookup service) and invoke create() on the returned home interface.

The single service instance will be shared by all threads.

The service expects a single parameter element:

<construct home-interface="..." jndi-name="..." name-lookup-service="..."/> 

The home-interface attribute is the complete class name for the home interface, and is required.

The jndi-name attribute is the JNDI name of the EJB's home interface; it is also required.

The name-lookup-service attribute is optional and rarely used; it specifies an alternate service (implementing the NameLookup interface) to be used for JNDI lookups.

hivemind.lib.NameLookup Service

The NameLookup service is a thin wrapper around JNDI lookup. It is used by the EJBProxyFactory service to locate EJBs.

The implementation makes use of three symbols (all of whose values default to null):

  • java.naming.factory.initial
  • java.naming.factory.url.pkgs
  • java.naming.provider.url

By supplying overrides of these values, it is possible to configure how the NameLookup service generates the InitialContext used for performing the JNDI lookup.

hivemind.lib.PlaceholderFactory Service

The PlaceholderFactory service is a service implementation factory that uses the DefaultImplementationBuilder service to create placeholder implementations for services. Placeholders do nothing at all.

hivemind.lib.PipelineFactory Service

The PipelineFactory service is used to construct a pipeline consisting of a series of filters. The filters implement an interface related to the service interface.

Each method of the service interface has a corresponding method in the filter interface with an identical signature, except that an additional parameter, whose type matches the service interface, has been added.

For example, a service interface for transforming a string:

package mypackage; 

public interface StringTransformService
{
  public String transform(String inputValue); 
} 

The corresponding filter interface:

package mypackage;

public interface StringTransformFilter
{
  public String transform(String inputValue, StringTransformService service);
}

The service parameter may appear at any point in the parameter list, though the convention of listing it last is recommended.
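To make the calling convention concrete, here is a sketch of a filter implementation for the interfaces above (the upper-casing behavior is just an example); it does its own work, then delegates to the rest of the pipeline through the service parameter:

package mypackage;

public class UpcaseTransformFilter implements StringTransformFilter
{
    public String transform(String inputValue, StringTransformService service)
    {
        // Do this filter's own work, then pass control down the pipeline.
        return service.transform(inputValue.toUpperCase());
    }
}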

The filters in a pipeline are chained together as follows:

Pipeline Calling Sequence

The bridge objects implement the service interface (and are created dynamically at runtime). The terminator at the end also implements the service interface; it is supplied as an object reference (it can be an object or a service). If no terminator is specified, a default implementation is created and used. Only a single terminator is allowed.

A pipeline is always created in terms of a service and a configuration. The service defines the service interface and identifies a configuration. The configuration conforms to the hivemind.lib.Pipeline schema and is used to specify filters and the terminator. Filters may be ordered much like <interceptor>s, using before and after attributes. This allows different modules to contribute filters into the service's pipeline.

Usage

The factory expects a single parameter element:

<create-pipeline filter-interface="..." configuration-id="..." terminator="..."/>

The filter-interface attribute is the complete class name of the filter interface.

The configuration-id is the id of the companion configuration (used to define filters).

The optional terminator attribute is used to specify an object reference. A terminator may also be contributed into the pipeline configuration.

Configuration

Each pipeline service must have a configuration, into which filters are contributed:

<configuration-point id="..." schema-id="hivemind.lib.Pipeline"/>
Contributions

Contributions into the configuration are used to specify the filters and the terminator.

filter
<filter name="..." before="..." after="..." object="..."/> 

Contributes a filter. The optional before and after attributes are lists of the ids of other filters in the pipeline, used to set the ordering of the filters. They may be comma-separated lists of filter ids (or filter names), or simply * to indicate absolute positioning.

The object attribute is the filter object itself, an object reference to an object implementing the filter interface.

terminator
<terminator object="..."/>

Specifies the terminator for the pipeline, as an object reference to an object implementing the service interface. Only a single terminator may be specified, and the terminator service provided in the factory parameters takes precedence over a terminator in the configuration.

hivemind.lib.RemoteExceptionCoordinator Service

The RemoteExceptionCoordinator is used to propagate notifications of remote exceptions throughout the HiveMind registry. When any individual service encounters a remote exception, it notifies all listeners, who release all remote object proxies.

The service interface, RemoteExceptionCoordinator, allows objects that implement the RemoteExceptionListener interface to be registered for notification, and includes a method for firing notifications.

hivemind.lib.ServicePropertyFactory Service

The ServicePropertyFactory exposes a property of a service as a new service. The property's type must be the same as (or assignable to) the service interface.

On each invocation of a service method, the property is re-acquired from the property source service, and the method reinvoked on the active value. This is useful when the value of the property can change at different times ... by using this factory, and not the service-property object provider, your code will always access the current value.

This can be invaluable when a small number of services use the threaded or pooled service models. Other services can access information in those services transparently, without themselves having to be threaded or pooled.

A single parameter element is expected:

<construct service-id="..." property="..."/>      
      

Both attributes are required.

hivemind.lib.SpringLookupFactory Service

The SpringLookupFactory supports integration with the Spring framework, another open-source lightweight container. SpringLookupFactory is a service constructor that obtains a core service implementation from a Spring BeanFactory.

By default, the BeanFactory is obtained from the DefaultSpringBeanFactoryHolder. Part of your application startup code requires that you start a Spring instance and inform the DefaultSpringBeanFactoryHolder about it.

The SpringLookupFactory expects exactly one parameter element:

<lookup-bean name="..." source-service-id="..."/> 

The name attribute is the name of the bean to look for inside the Spring BeanFactory.

The optional source-service-id attribute allows an alternate service to be used to obtain the Spring BeanFactory. The identified service must implement the SpringBeanFactorySource interface.

Example Code

Calculator

The calculator example demonstrates the most basic concepts of HiveMind: the difference between <create-instance> and <invoke-factory>, the fact that services are, by default, created only as needed, and the ability of hivemind.BuilderFactory to automatically wire services together. It also demonstrates the behavior of the hivemind.LoggingInterceptor.

After compiling the examples, you can use Ant to run them:

bash-2.05b$ ant run-calculator
Buildfile: build.xml

run-calculator:
     [java] Calculator [DEBUG] Creating SingletonProxy for service examples.Calculator
     [java] Inputs:   28.0 and 4.75
     [java] Calculator [DEBUG] Constructing core service implementation for service examples.Calculator
     [java] Subtracter [DEBUG] Creating SingletonProxy for service examples.Subtracter
     [java] Calculator [DEBUG] Autowired service property subtracter to <SingletonProxy for examples.Subtracter(org.apache.hivemind.examples.Subtracter)>
     [java] Divider [DEBUG] Creating SingletonProxy for service examples.Divider
     [java] Calculator [DEBUG] Autowired service property divider to <SingletonProxy for examples.Divider(org.apache.hivemind.examples.Divider)>
     [java] Multiplier [DEBUG] Creating SingletonProxy for service examples.Multiplier
     [java] Calculator [DEBUG] Autowired service property multiplier to <SingletonProxy for examples.Multiplier(org.apache.hivemind.examples.Multiplier)>
     [java] Adder [DEBUG] Creating SingletonProxy for service examples.Adder
     [java] Calculator [DEBUG] Autowired service property adder to <SingletonProxy for examples.Adder(org.apache.hivemind.examples.Adder)>
     [java] Calculator [DEBUG] Applying interceptor factory hivemind.LoggingInterceptor
     [java] Calculator [DEBUG] BEGIN add(28.0, 4.75)
     [java] Adder [DEBUG] Constructing core service implementation for service examples.Adder
     [java] Adder [DEBUG] Applying interceptor factory hivemind.LoggingInterceptor
     [java] Adder [DEBUG] BEGIN add(28.0, 4.75)
     [java] Adder [DEBUG] END add() [32.75]
     [java] Calculator [DEBUG] END add() [32.75]
     [java] Add:      32.75
     [java] Calculator [DEBUG] BEGIN subtract(28.0, 4.75)
     [java] Subtracter [DEBUG] Constructing core service implementation for service examples.Subtracter
     [java] Subtracter [DEBUG] Applying interceptor factory hivemind.LoggingInterceptor
     [java] Subtracter [DEBUG] BEGIN subtract(28.0, 4.75)
     [java] Subtracter [DEBUG] END subtract() [23.25]
     [java] Calculator [DEBUG] END subtract() [23.25]
     [java] Subtract: 23.25
     [java] Calculator [DEBUG] BEGIN multiply(28.0, 4.75)
     [java] Multiplier [DEBUG] Constructing core service implementation for service examples.Multiplier
     [java] Multiplier [DEBUG] Applying interceptor factory hivemind.LoggingInterceptor
     [java] Multiplier [DEBUG] BEGIN multiply(28.0, 4.75)
     [java] Multiplier [DEBUG] END multiply() [133.0]
     [java] Calculator [DEBUG] END multiply() [133.0]
     [java] Multiply: 133.0
     [java] Calculator [DEBUG] BEGIN divide(28.0, 4.75)
     [java] Divider [DEBUG] Constructing core service implementation for service examples.Divider
     [java] Divider [DEBUG] Applying interceptor factory hivemind.LoggingInterceptor
     [java] Divider [DEBUG] BEGIN divide(28.0, 4.75)
     [java] Divider [DEBUG] END divide() [5.894736842105263]
     [java] Calculator [DEBUG] END divide() [5.894736842105263]
     [java] Divide:   5.894736842105263

BUILD SUCCESSFUL
Total time: 3 seconds

The logging configuration enables logging for the hivemind logger; that, plus the logging interceptors, produces quite a bit of output. You can see that a proxy is created for each service initially, and that the "core service implementation" for the service is created later ... the full service consists of the core service implementation (an instance of the service's POJO class), wrapped with any interceptors (the logging interceptor, in this case).

The Registry is built from the following module deployment descriptor:

<?xml version="1.0"?>
<module id="examples" version="1.0.0">
  <service-point id="Adder" interface="org.apache.hivemind.examples.Adder">
    <create-instance class="org.apache.hivemind.examples.impl.AdderImpl"/>
    <interceptor service-id="hivemind.LoggingInterceptor"/>
  </service-point>
  <service-point id="Subtracter" interface="org.apache.hivemind.examples.Subtracter">
    <create-instance class="org.apache.hivemind.examples.impl.SubtracterImpl"/>
    <interceptor service-id="hivemind.LoggingInterceptor"/>
  </service-point>
  <service-point id="Multiplier" interface="org.apache.hivemind.examples.Multiplier">
    <create-instance class="org.apache.hivemind.examples.impl.MultiplerImpl"/>
    <interceptor service-id="hivemind.LoggingInterceptor"/>
  </service-point>
  <service-point id="Divider" interface="org.apache.hivemind.examples.Divider">
    <create-instance class="org.apache.hivemind.examples.impl.DividerImpl"/>
    <interceptor service-id="hivemind.LoggingInterceptor"/>
  </service-point>
  <service-point id="Calculator" interface="org.apache.hivemind.examples.Calculator">
    <invoke-factory>
      <construct class="org.apache.hivemind.examples.impl.CalculatorImpl"/>
    </invoke-factory>
    <interceptor service-id="hivemind.LoggingInterceptor"/>
  </service-point>
</module>

The service-point for the Calculator service is very simple: the BuilderFactory is capable of locating the other services (Adder, Subtracter, etc.) by their interface, rather than requiring set-service elements to connect properties to services (using the target services' ids). These properties of the Calculator implementation are autowired to the matching services. Autowiring works only because just a single service within the entire Registry implements each specific interface. You would see errors if no service implemented the interface, or if more than one did.
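
A minimal sketch of what the autowired implementation might look like; the property and method names are inferred from the log output above, and the actual example source may differ:

package org.apache.hivemind.examples.impl;

import org.apache.hivemind.examples.Adder;
import org.apache.hivemind.examples.Calculator;
import org.apache.hivemind.examples.Divider;
import org.apache.hivemind.examples.Multiplier;
import org.apache.hivemind.examples.Subtracter;

public class CalculatorImpl implements Calculator
{
    private Adder _adder;
    private Subtracter _subtracter;
    private Multiplier _multiplier;
    private Divider _divider;

    // Writable properties whose types are service interfaces; the BuilderFactory
    // autowires each one to the single service implementing that interface.
    public void setAdder(Adder adder) { _adder = adder; }
    public void setSubtracter(Subtracter subtracter) { _subtracter = subtracter; }
    public void setMultiplier(Multiplier multiplier) { _multiplier = multiplier; }
    public void setDivider(Divider divider) { _divider = divider; }

    // Each operation simply delegates to the corresponding wired service.
    public double add(double left, double right) { return _adder.add(left, right); }
    public double subtract(double left, double right) { return _subtracter.subtract(left, right); }
    public double multiply(double left, double right) { return _multiplier.multiply(left, right); }
    public double divide(double left, double right) { return _divider.divide(left, right); }
}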

Panorama Startup

Panorama is a disguised version of WebCT's Vista application. Vista is a truly massive web application, consisting of thousands of Java classes and JSPs and hundreds of EJBs. Vista is organized as a large number of somewhat interrelated tools with an underlying substrate of services. In fact, HiveMind was originally created to manage the complexity of Vista.

Note
The reality is that Vista, a commercial project, has continued with an older version of HiveMind. Panorama is based on original code in Vista, but has been altered to take advantage of many features available in more recent versions of HiveMind. Keeping the names separate keeps us honest about the differences between a product actually in production (Vista) versus an idealized version used for demonstration and tutorial purposes (Panorama).

With all these interrelated tools and services, the simple act of starting up the application was complex. Many tools and services have startup operations, things that need to occur when the application first starts up within the application server. For example, the help service reads and caches help text stored within the database. The mail service creates periodic jobs to perform database garbage collection of deleted mail items. All told, Vista had over 40 different tasks to perform at startup ... many with subtle dependencies (such as the mail tool needing the job scheduler service to be up and running).

The legacy version of Vista startup consisted of a WebLogic startup class that invoked a central stateless session EJB. The startup EJB was responsible for performing all 40+ startup tasks ... typically by invoking a public static method of a class related to the tool.

This was problematic for several reasons. It created a dependency on WebLogic to manage startup (really, a minor consideration, but one nonetheless). More importantly, it created an unnecessary binding between the startup EJB and all the other code in all the other tools. These unwanted dependencies created ripple effects throughout the code base that impacted refactoring efforts, and caused deployment problems that complicated the build (requiring the duplication of many common classes inside the startup EJB's JAR, to resolve runtime classloader dependencies).

Note
It's all about class loaders. The class loader that loaded the startup EJB didn't have visibility to the contents of the other EJB JARs deployed within the Vista EAR. To satisfy WebLogic's ejbc command (EJB JAR packaging tool), and to successfully locate the classes at runtime, it was necessary to duplicate many classes from the other EJB JARs into the startup EJB JAR. With HiveMind, this issue goes away, since the module deployment descriptors store the class name, and the servlet thread's context class loader is used to resolve that name ... and it has visibility to all the classes in all the EJB JARs.
Enter HiveMind

HiveMind's ultimate purpose was to simplify all aspects of Vista development and create a simpler, faster, more robust development environment. The first step on this journey, a trial step, was to rationalize the startup process.

Each startup task would be given a unique id, a title and a set of dependencies (on other tasks). How the task actually operated was left quite abstract ... with careful support for the existing legacy approach (public static methods). What would change would be how these tasks were executed.

The advantage of HiveMind is that each module can contribute as many or as few startup tasks as needed into the Startup configuration point. This allows the startup logic to be properly encapsulated in the module. The startup logic can be easily changed without affecting other modules, and without having to change any single contentious resource (such as the legacy approach's startup EJB).

Startup task schema

The schema for startup task contributions must support the explicit ordering of execution based on dependencies. With HiveMind, there's no telling in what order modules will be processed, and so no telling in what order contributions will appear within a configuration point ... so it is necessary to make ordering explicit by giving each task a unique id, and listing dependencies (the ids of tasks that must precede, or must follow, any single task).

Special consideration was given to supporting legacy startup code in the tools and services; code that stays in the form of a public static method. As HiveMind is adopted, these static methods will go away, and be replaced with either HiveMind services or simple objects. In the very long term, much of this startup logic will become unnecessary, as more of the system will be implemented using HiveMind services, which will lazily initialize just as needed.

The schema definition (with descriptions removed, for compactness) follows:

<schema id="Tasks">
  <element name="task">
    <attribute name="title" required="true"/>
    <attribute name="id" required="true"/>
    <attribute name="before"/>
    <attribute name="after"/>      
    <attribute name="executable" required="true" translator="object"/>
      
    <conversion class="com.panorama.startup.impl.Task"/>
  </element>
  
  <element name="static-task">
    <attribute name="title" required="true"/>
    <attribute name="id" required="true"/>
    <attribute name="before"/>
    <attribute name="after"/>           
    <attribute name="class" translator="class" required="true"/>
    <attribute name="method"/>
      
    <rules>
      <create-object class="com.panorama.startup.impl.Task"/>
      <invoke-parent method="addElement"/>
        
      <read-attribute attribute="id" property="id"/>
      <read-attribute attribute="title" property="title"/>
      <read-attribute attribute="before" property="before"/>
      <read-attribute attribute="after" property="after"/>
        
      <create-object class="com.panorama.startup.impl.ExecuteStatic"/>
      <invoke-parent method="setExecutable"/>
        
      <read-attribute attribute="class" property="targetClass"/>
      <read-attribute attribute="method" property="methodName"/>       
    </rules>
  </element>
</schema>
Note
For more details, see the HiveDoc for the Tasks schema.

This schema supports contributions in two formats. The first format allows an arbitrary object or service to be contributed:

<task id="mail" title="Mail" executable="service:MailStartup"/>

The executable attribute is converted into an object or service; here the service: prefix indicates that the rest of the string, MailStartup, is a service id (other prefixes are defined by the hivemind.ObjectProviders configuration). If this task has dependencies, the before and after attributes can be specified as well.

To support legacy code, a second option, static-task, is provided:

<static-task id="discussions" title="Discussions" after="mail" class="com.panorama.discussions.DiscussionsStartup"/>

The static-task element duplicates the id, title, before and after attributes, but replaces executable with class (the name of the class containing the method) and method (the name of the method to invoke, defaulting to "init").
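
For instance (a hypothetical contribution, exercising the method attribute defined by the schema above), a legacy class whose entry point is not named init() could be contributed as:

<static-task id="help" title="Help" after="mail" class="com.panorama.help.HelpStartup" method="loadCache"/>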

Startup Service

The schema just defines what contributions look like and how they are converted to objects; we need to define a Startup configuration point using the schema, and a Startup service that uses the configuration point.

<configuration-point id="Startup" schema-id="Tasks"/>

<service-point id="Startup" interface="java.lang.Runnable">
  <invoke-factory>
    <construct class="com.panorama.startup.impl.TaskExecutor">
      <set-configuration property="tasks" configuration-id="Startup"/>
    </construct>
  </invoke-factory>
</service-point>

<contribution id="hivemind.Startup">
  <startup object="service:Startup"/>
</contribution>

The hivemind.Startup configuration point is used to ensure that the Panorama Startup service is executed when the Registry itself is constructed.

Implementation

All that remains are the implementations of the service and task classes.

Executable.java
package com.panorama.startup;

/**
 * Much like {@link java.lang.Runnable}, but allows the caller
 * to handle any exceptions thrown.
 *
 * @author Howard Lewis Ship
 */
public interface Executable
{
    public void execute() throws Exception;
}

The Executable interface is implemented by tasks, and by services or other objects that need to be executed. It throws Exception so that exception catching and reporting can be centralized inside the Startup service.
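
As a hedged illustration (the class and what it does are hypothetical), a startup task implemented directly as an Executable might look like:

package com.panorama.help;

import com.panorama.startup.Executable;

public class HelpStartup implements Executable
{
    /**
     * Reads help text from the database and caches it. Any exception is allowed
     * to propagate, so the Startup service can catch and report it centrally.
     */
    public void execute() throws Exception
    {
        // ... load and cache help topics here ...
    }
}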

Task.java
package com.panorama.startup.impl;

import org.apache.hivemind.impl.BaseLocatable;

import com.panorama.startup.Executable;

/**
 * An operation that may be executed. A Task exists to wrap
 * an {@link com.panorama.startup.Executable} object with
 * a title and ordering information (id, after, before).
 *
 * @author Howard Lewis Ship
 */
public class Task extends BaseLocatable implements Executable
{
    private String _id;
    private String _title;
    private String _after;
    private String _before;
    private Executable _executable;

    public String getBefore()
    {
        return _before;
    }

    public String getId()
    {
        return _id;
    }

    public String getAfter()
    {
        return _after;
    }

    public String getTitle()
    {
        return _title;
    }

    public void setExecutable(Executable executable)
    {
        _executable = executable;
    }

    public void setBefore(String string)
    {
        _before = string;
    }

    public void setId(String string)
    {
        _id = string;
    }

    public void setAfter(String string)
    {
        _after = string;
    }

    public void setTitle(String string)
    {
        _title = string;
    }

    /**
     * Delegates to the {@link #setExecutable(Executable) executable} object.
     */
    public void execute() throws Exception
    {
        _executable.execute();
    }

}

The Task class is a wrapper around an Executable object, whether that's a service, some arbitrary object, or an ExecuteStatic instance (shown next).

ExecuteStatic.java
package com.panorama.startup.impl;

import java.lang.reflect.Method;

import com.panorama.startup.Executable;

/**
 * Used to access the legacy startup code that is in the form
 * of a public static method (usually <code>init()</code>) on some
 * class.
 *
 * @author Howard Lewis Ship
 */
public class ExecuteStatic implements Executable
{
    private String _methodName = "init";
    private Class _targetClass;

    public void execute() throws Exception
    {
        Method m = _targetClass.getMethod(_methodName, null);

        m.invoke(null, null);
    }

    /**
     * Sets the name of the method to invoke; if not set, the default is <code>init</code>.
     * The target class must have a public static method with that name taking no
     * parameters.
     */
    public void setMethodName(String string)
    {
        _methodName = string;
    }

    /**
     * Sets the class to invoke the method on.
     */
    public void setTargetClass(Class targetClass)
    {
        _targetClass = targetClass;
    }
}

ExecuteStatic uses Java reflection to invoke a public static method of a particular class.
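
To make that concrete, here is a hedged sketch of the kind of legacy class a <static-task> contribution (such as the DiscussionsStartup example above) might point at; the body is an assumption:

package com.panorama.discussions;

public class DiscussionsStartup
{
    /**
     * Invoked reflectively by ExecuteStatic; "init" is the default method name,
     * and the method must be public, static and take no parameters.
     */
    public static void init() throws Exception
    {
        // ... legacy startup logic for the discussions tool ...
    }
}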

TaskExecutor.java
package com.panorama.startup.impl;

import java.util.Iterator;
import java.util.List;

import org.apache.commons.logging.Log;
import org.apache.hivemind.ErrorHandler;
import org.apache.hivemind.Messages;
import org.apache.hivemind.order.Orderer;

/**
 * A service that executes a series of {@link com.panorama.startup.impl.Task}s. Tasks have
 * an ordering based on pre- and post-requisites.
 *
 * @author Howard Lewis Ship
 */
public class TaskExecutor implements Runnable
{
    private ErrorHandler _errorHandler;
    private Log _log;
    private List _tasks;
    private Messages _messages;

    /**
     * Orders the {@link #setTasks(List) tasks} into an execution order, and executes
     * each in turn.  Logs the elapsed time, number of tasks, and the number of failures (if any).
     */
    public void run()
    {
        long startTime = System.currentTimeMillis();
 
        Orderer orderer = new Orderer(_log, _errorHandler, task());

        Iterator i = _tasks.iterator();
        while (i.hasNext())
        {
            Task t = (Task) i.next();

            orderer.add(t, t.getId(), t.getAfter(), t.getBefore());
        }

        List orderedTasks = orderer.getOrderedObjects();

        int failures = 0;

        i = orderedTasks.iterator();
        while (i.hasNext())
        {
            Task t = (Task) i.next();

            if (!execute(t))
                failures++;
        }

        long elapsedTime = System.currentTimeMillis() - startTime;

        if (failures == 0)
            _log.info(success(orderedTasks.size(), elapsedTime));
        else
            _log.info(failure(failures, orderedTasks.size(), elapsedTime));
    }

    /**
     * Execute a single task.
     * 
     * @return true on success, false on failure
     */
    private boolean execute(Task t)
    {
        _log.info(executingTask(t));

        try
        {
            t.execute();

            return true;
        }
        catch (Exception ex)
        {
            _errorHandler.error(_log, exceptionInTask(t, ex), t.getLocation(), ex);

            return false;
        }
    }

    private String task()
    {
        return _messages.getMessage("task");
    }

    private String executingTask(Task t)
    {
        return _messages.format("executing-task", t.getTitle());
    }

    private String exceptionInTask(Task t, Throwable cause)
    {
        return _messages.format("exception-in-task", t.getTitle(), cause);
    }

    private String success(int count, long elapsedTimeMillis)
    {
        return _messages.format("success", new Integer(count), new Long(elapsedTimeMillis));
    }

    private String failure(int failureCount, int totalCount, long elapsedTimeMillis)
    {
        return _messages.format(
            "failure",
            new Integer(failureCount),
            new Integer(totalCount),
            new Long(elapsedTimeMillis));
    }

    public void setErrorHandler(ErrorHandler handler)
    {
        _errorHandler = handler;
    }

    public void setLog(Log log)
    {
        _log = log;
    }

    public void setMessages(Messages messages)
    {
        _messages = messages;
    }

    public void setTasks(List list)
    {
        _tasks = list;
    }

}

This class is where it all comes together; it is the core service implementation for the panorama.startup.Startup service. It is constructed by the hivemind.BuilderFactory, which autowires the errorHandler, log and messages properties, and sets the tasks property (which is explicitly configured in the module deployment descriptor).

Most of the run() method is concerned with ordering the contributed tasks into execution order and reporting the results.

Unit Testing

Unit testing in HiveMind is accomplished by acting like the container; that is, your code is responsible for instantiating the core service implementation and setting its properties. In many cases, you will set the properties to mock objects ... HiveMind uses EasyMock extensively, and provides a base class, HiveMindTestCase, that contains much support for creating Mock controls and objects.

TestTaskExecutor.java
package com.panorama.startup.impl;

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Locale;

import org.apache.commons.logging.Log;
import org.apache.hivemind.ApplicationRuntimeException;
import org.apache.hivemind.ErrorHandler;
import org.apache.hivemind.Messages;
import org.apache.hivemind.Resource;
import org.apache.hivemind.impl.MessagesImpl;
import org.apache.hivemind.test.ExceptionAwareArgumentsMatcher;
import org.apache.hivemind.test.HiveMindTestCase;
import org.apache.hivemind.test.RegexpArgumentsMatcher;
import org.apache.hivemind.util.FileResource;
import org.easymock.MockControl;

import com.panorama.startup.Executable;

/**
 * Tests for the {@link com.panorama.startup.impl.TaskExecutor} service.
 *
 * @author Howard Lewis Ship
 */
public class TestTaskExecutor extends HiveMindTestCase
{
    private static List _tokens = new ArrayList();

    protected void setUp()
    {
        _tokens.clear();
    }

    protected void tearDown()
    {
        _tokens.clear();
    }

    public static void addToken(String token)
    {
        _tokens.add(token);
    }

    public Messages getMessages()
    {
      . . .
    }

    public void testSuccess()
    {
        ExecutableFixture f1 = new ExecutableFixture("f1");

        Task t1 = new Task();

        t1.setExecutable(f1);
        t1.setId("first");
        t1.setAfter("second");
        t1.setTitle("Fixture #1");

        ExecutableFixture f2 = new ExecutableFixture("f2");

        Task t2 = new Task();
        t2.setExecutable(f2);
        t2.setId("second");
        t2.setTitle("Fixture #2");

        List tasks = new ArrayList();
        tasks.add(t1);
        tasks.add(t2);

        MockControl logControl = newControl(Log.class);
        Log log = (Log) logControl.getMock();

        TaskExecutor e = new TaskExecutor();

        ErrorHandler errorHandler = (ErrorHandler) newMock(ErrorHandler.class);

        e.setErrorHandler(errorHandler);
        e.setLog(log);
        e.setMessages(getMessages());
        e.setTasks(tasks);

        // Note the ordering; explicitly set, to check that ordering does
        // take place.
        log.info("Executing task Fixture #2.");
        log.info("Executing task Fixture #1.");
        log.info("Executed 2 tasks \\(in \\d+ milliseconds\\)\\.");
        logControl.setMatcher(new RegexpArgumentsMatcher());

        replayControls();

        e.run();

        assertListsEqual(new String[] { "f2", "f1" }, _tokens);

        verifyControls();
    }

}

In this listing (which is a pared down version of the real class), you can see how mock objects, including EasyMock objects, are used. The ExecutableFixture classes will invoke the addToken() method; the point is to provide, in the tasks List, those fixtures wrapped in Task objects and see that they are invoked in the correct order.

We create a Mock Log object, and check that the correct messages are logged in the correct order. Once we have set the expectations for all the EasyMock controls, we invoke replayControls() and continue with our test. The verifyControls() method ensures that all mock objects have had all expected methods invoked on them.

That's just unit testing; you always want to supplement that with integration testing ... to ensure, at the very least, that your schema is valid, the conversion rules work, and the contributions are correct. However, as the code coverage report shows, you can reach very high levels of code coverage (and code confidence) using unit tests.

Creating a Logging Interceptor

One of the most powerful features of HiveMind is the ability to create interceptors for services. Interceptors provide additional behavior to a service, often a cross-cutting concern such as logging or transaction management. Interceptors can be thought of as "aspect oriented programming lite".

This example shows how easy it can be to create an interceptor; it creates a simplified version of the standard hivemind.LoggingInterceptor.

The real logging interceptor uses the Javassist bytecode enhancement framework to create a new class at runtime. This has some minor advantages in terms of runtime performance, but is much more complicated to implement and test than this example, which uses JDK Dynamic Proxies.

The Interceptor Factory

Interceptors are created by interceptor factories, which are themselves HiveMind services. Interceptor factories implement the ServiceInterceptorFactory interface.

Our implementation is very simple:

package org.apache.hivemind.examples.impl;

import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;
import java.util.List;

import org.apache.commons.logging.Log;
import org.apache.hivemind.InterceptorStack;
import org.apache.hivemind.ServiceInterceptorFactory;
import org.apache.hivemind.internal.Module;

public class ProxyLoggingInterceptorFactory implements ServiceInterceptorFactory
{

    public void createInterceptor(InterceptorStack stack, Module invokingModule, List parameters)
    {
        Log log = stack.getServiceLog();

        InvocationHandler handler = new ProxyLoggingInvocationHandler(log, stack.peek());

        Object interceptor =
            Proxy.newProxyInstance(
                invokingModule.getClassResolver().getClassLoader(),
                new Class[] { stack.getServiceInterface()},
                handler);

        stack.push(interceptor);
    }
}

The createInterceptor() method is passed the InterceptorStack, the invoking Module (the module containing the service being constructed), and any parameters passed to the interceptor (from inside the <interceptor> element). This example does not make use of parameters, but the real logging interceptor uses parameters to control which methods are, and are not, logged.

An interceptor factory's job is to peek() at the top object on the stack and create a new object, wrapped around the top object, that provides new behavior. The top object on the stack may be the core service implementation, or it may be another interceptor ... all that's known for sure is that it implements the service interface defined by the <service-point>.

The interceptor in this case is a dynamic proxy, provided by Proxy.newProxyInstance(). The key here is the invocation handler, an object (described shortly) that is notified any time a method on the interceptor proxy is invoked.

Once the interceptor is created, it is pushed onto the stack. More interceptors may build upon it, adding yet more behavior.

In HiveMind, a single Log instance is used when constructing a service as well as by any interceptors created for the service. In other words, by enabling logging for a particular service id, you will see log events for every aspect of the construction of that particular service. If you add a logging interceptor, you'll also see method invocations. To ensure that logging takes place using that single instance, neither the invocation handler nor the interceptor factory is responsible for creating the logging instance ... that's the responsibility of HiveMind. The logging instance to use is provided by the getServiceLog() method of the InterceptorStack instance provided to the interceptor factory.

Invocation Handler

The invocation handler is where the intercepting really takes place; it is invoked every time a method of the proxy object is invoked and has a chance to add behavior before, and after (or even instead of!) invoking a method on the next object in the stack. What is the "next object"? It's the next object in the interceptor stack, and may be another interceptor instance, or may be the core service implementation.

package org.apache.hivemind.examples.impl;

import java.lang.reflect.InvocationHandler;
import java.lang.reflect.InvocationTargetException;
import java.lang.reflect.Method;

import org.apache.commons.logging.Log;
import org.apache.hivemind.service.impl.LoggingUtils;

public class ProxyLoggingInvocationHandler implements InvocationHandler
{
    private Log _log;
    private Object _inner;

    public ProxyLoggingInvocationHandler(Log log, Object inner)
    {
        _log = log;
        _inner = inner;
    }

    public Object invoke(Object proxy, Method method, Object[] args) throws Throwable
    {
        boolean debug = _log.isDebugEnabled();

        if (debug)
            LoggingUtils.entry(_log, method.getName(), args);

        try
        {
            Object result = method.invoke(_inner, args);

            if (debug)
            {
                if (method.getReturnType() == void.class)
                    LoggingUtils.voidExit(_log, method.getName());
                else
                    LoggingUtils.exit(_log, method.getName(), result);
            }

            return result;
        }
        catch (InvocationTargetException ex)
        {
            Throwable targetException = ex.getTargetException();

            if (debug)
                LoggingUtils.exception(_log, method.getName(), targetException);

            throw targetException;
        }
    }

}

The invoke() method is the key; it uses the remembered Log instance and the remembered inner object (the next object down the interceptor stack ... the object that was peek()-ed when the interceptor was created). The code for actually generating the logging output is inside static methods of the LoggingUtils utility class -- in this way the output from this interceptor is identical to the output when using hivemind.LoggingInterceptor.

The invoke() method is invoked for all methods that can be invoked on the proxy ... this includes the methods of the service interface, but also includes java.lang.Object methods such as hashCode() or toString().

Note
The hivemind.LoggingInterceptor will typically add its own implementation of toString(), to assist in debugging (it clearly identifies itself as an interceptor object, and identifies the service id and service interface). This proxy-based implementation does not, so invoking toString() on the proxy will end up invoking the method on the next object.
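
If that behavior were desired here, the invocation handler could special-case toString() itself; a minimal sketch (not part of the example above) that would go at the top of invoke():

        // Answer toString() directly so the proxy identifies itself, rather than
        // delegating to the next object in the interceptor stack.
        if (method.getName().equals("toString") && method.getParameterTypes().length == 0)
            return "<ProxyLoggingInterceptor for " + _inner + ">";
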
Declaring the interceptor factory

Like any other service, a service interceptor factory must appear inside a HiveMind module deployment descriptor:

<service-point id="ProxyLoggingInterceptor" interface="org.apache.hivemind.ServiceInterceptorFactory">
  <create-instance class="org.apache.hivemind.examples.impl.ProxyLoggingInterceptorFactory"/>
</service-point>
Using the interceptor

Using the interceptor is the same as using any other interceptor; the <interceptor> element simply has to point at the correct service:

<service-point id="Target" interface="org.apache.hivemind.examples.TargetService">
  <create-instance class="org.apache.hivemind.examples.impl.TargetServiceImpl"/>
  
  <interceptor service-id="ProxyLoggingInterceptor"/>
  
</service-point>

The TargetService interface defines three methods used to demonstrate the logging interceptor:

package org.apache.hivemind.examples;

import java.util.List;

public interface TargetService
{
    public void voidMethod(String string);

    public List buildList(String string, int count);

    public void exceptionThrower();
}

The implementation class is equally straightforward:

package org.apache.hivemind.examples.impl;

import java.util.ArrayList;
import java.util.List;

import org.apache.hivemind.ApplicationRuntimeException;
import org.apache.hivemind.examples.TargetService;

public class TargetServiceImpl implements TargetService
{

    public void voidMethod(String string)
    {

    }

    public List buildList(String string, int count)
    {
        List result = new ArrayList();
        
        for (int i = 0; i < count; i++)
            result.add(string);

        return result;
    }

    public void exceptionThrower()
    {
        throw new ApplicationRuntimeException("Some application exception.");
    }
}
Running the examples

From the examples directory, run ant compile, then run ant -emacs run-logging:

bash-2.05b$ ant -emacs run-logging
Buildfile: build.xml

run-logging:
Target [DEBUG] Creating SingletonProxy for service examples.Target

*** Void method (no return value):

Target [DEBUG] Constructing core service implementation for service examples.Target
Target [DEBUG] Applying interceptor factory examples.ProxyLoggingInterceptor
ProxyLoggingInterceptor [DEBUG] Creating SingletonProxy for service examples.ProxyLoggingInterceptor
ProxyLoggingInterceptor [DEBUG] Constructing core service implementation for service examples.ProxyLoggingInterceptor
Target [DEBUG] BEGIN voidMethod(Hello)
Target [DEBUG] END voidMethod()

*** Ordinary method (returns a List):

Target [DEBUG] BEGIN buildList(HiveMind, 4)
Target [DEBUG] END buildList() [[HiveMind, HiveMind, HiveMind, HiveMind]]

*** Exception method (throws an exception):

Target [DEBUG] BEGIN exceptionThrower()
Target [DEBUG] EXCEPTION exceptionThrower() -- org.apache.hivemind.ApplicationRuntimeException
org.apache.hivemind.ApplicationRuntimeException: Some application exception.
        at org.apache.hivemind.examples.impl.TargetServiceImpl.exceptionThrower(TargetServiceImpl.java:35)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:324)
        at org.apache.hivemind.examples.impl.ProxyLoggingInvocationHandler.invoke(ProxyLoggingInvocationHandler.java:38)
        at $Proxy0.exceptionThrower(Unknown Source)
        at $SingletonProxy_fe67f7e0ae_12.exceptionThrower($SingletonProxy_fe67f7e0ae_12.java)
        at org.apache.hivemind.examples.LoggingMain.main(LoggingMain.java:27)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:324)
        at org.apache.tools.ant.taskdefs.ExecuteJava.run(ExecuteJava.java:193)
        at org.apache.tools.ant.taskdefs.ExecuteJava.execute(ExecuteJava.java:130)
        at org.apache.tools.ant.taskdefs.Java.run(Java.java:705)
        at org.apache.tools.ant.taskdefs.Java.executeJava(Java.java:177)
        at org.apache.tools.ant.taskdefs.Java.execute(Java.java:83)
        at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:275)
        at org.apache.tools.ant.Task.perform(Task.java:364)
        at org.apache.tools.ant.Target.execute(Target.java:341)
        at org.apache.tools.ant.Target.performTasks(Target.java:369)
        at org.apache.tools.ant.Project.executeTarget(Project.java:1214)
        at org.apache.tools.ant.Project.executeTargets(Project.java:1062)
        at org.apache.tools.ant.Main.runBuild(Main.java:673)
        at org.apache.tools.ant.Main.startAnt(Main.java:188)
        at org.apache.tools.ant.launch.Launcher.run(Launcher.java:196)
        at org.apache.tools.ant.launch.Launcher.main(Launcher.java:55)

BUILD SUCCESSFUL
Total time: 3 seconds

The log4j.properties file for the examples has enabled debug logging for the entire module; thus we see some output about the construction of the ProxyLoggingInterceptor service as it is employed to construct the interceptor for the Target service.

Conclusion

Implementing a basic interceptor using HiveMind is very simple when using JDK Dynamic Proxies. You can easily provide code that slips right into the calling sequence for the methods of your services with surprisingly little code. In addition, the APIs do not force you to use any single approach; you can use JDK proxies as here, use Javassist, or use any approach that works for you. And because interceptor factories are themselves HiveMind services, you have access to the entire HiveMind environment to implement your interceptor factories.
