Loom
Table of Contents
Introduction
The Bootstrapper
The Core Plugin
The Ant Plugin
The Drools Plugin
The Java Plugin (or: How a Rule-Based Build System Works)
Introduction
Loom is a build system based on the Drools rule engine. It follows a very modular concept: there is just a relatively small bootstrapper and a set of plugins. HiveMind is used as the plugin system, although the bootstrapper modifies it somewhat by using some of the more internal APIs (nothing undocumented, though, so it should still be future-proof).
The idea of Loom is to create a build system that builds intelligently while keeping full flexibility. A simple project without any dependencies should be buildable without any configuration. Of course, no build system can do entirely without configuration, but it should be kept as simple as possible.
The core plugins of Loom are:
- core – The core of the build system. It manages configurations and multi-project builds.
- ant – The interface to Ant. Ant is used as the workhorse of the build system.
- drools – The Drools integration. Builds the rule base and working memory, and evaluates the rules.
The build system cannot work when any of these plugins is missing. Thanks to HiveMind, they can be replaced by different implementations. This is currently not advisable, however, as they are tightly hooked together.
Existing non-core plugins:
- java – Compile Java code.
Non-core plugins planned while still in the proof-of-concept phase:
- jar – Create JAR files (in the near future).
- classpath – Build the classpath for compiling (in the near future). This is only a temporary plugin and will be kept very simple. It will be obsolete as soon as the deps plugin is functional.
- deps – Manage dependencies and the classpath (possibly using Ivy?). This still requires a lot of thought, partly (but not only) because it should be suitable for OSGi projects.
- junit – Run JUnit tests. This depends on the classpath or deps plugin.
The Bootstrapper
The bootstrapper reads the boot configuration (i.e. the command line, though it is designed so that it can also be used from within an IDE, for example), sets up the 'hive' and the class loaders, etc.
Plugins
Plugins are discovered by scanning several file-system paths. By default, the bootstrapper looks for plugins in the Loom installation directory, under lib/plugins. Most plugins will contain at least one META-INF/hivemodule.xml somewhere (usually in the private section).
Plugins are organised in directories. The bootstrapper knows two subdirectories: exported and private. exported contains JAR files that are provided to other plugins; private contains the JAR files with classes visible to the plugin only (see the section on class loaders below).
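A plugin tree following this layout might look like the sketch below (the plugin and JAR names are purely illustrative; only the exported/private split and the lib/plugins location come from the description above):

```
loom/
  lib/
    plugins/
      java/                    # hypothetical plugin directory
        exported/
          java-api.jar         # JARs provided to other plugins
        private/
          java-impl.jar        # JARs visible to this plugin only, usually
                               # containing META-INF/hivemodule.xml
```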
The bootstrapper also sets up the PluginManager as a HiveMind service, which allows other plugins to run their own queries on the contents of the plugin directories (e.g. the Drools plugin scans a rules subdirectory for DRL files).
Class Loaders
The specification requires that the context class loader is always set to the class loader of the plugin that the current object/service belongs to. In HiveMind, the bootstrapper ensures this by applying the ContextClassLoader interceptor to every service specified in any hivemodule.xml. This class loader is of the type LoomClassLoader. Originally, this existed only for debugging purposes (its toString() method), but this class loader now also holds a reference to the PluginInfo object of the plugin it belongs to. This allows utility methods to find out which plugin they were called from (used e.g. in AntUtils in ant-api and ContextClassLoaderWorkingMemoryBuilder in drools-impl for logging).
This may be replaced by something less 'hackish'.
Other plugins (notably the Drools plugin, class ContextClassLoaderWorkingMemoryBuilder) may need to make sure they follow this specification themselves.
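The lookup pattern this enables can be sketched as follows. Note that LoomClassLoader and PluginInfo are stubbed out here, and the getPluginInfo()/getName() accessors are assumed names, not documented APIs:

```java
// Sketch only: LoomClassLoader and PluginInfo are minimal stand-ins, and the
// accessor names are assumptions; the real Loom classes are not shown here.
class PluginInfo {
    private final String name;
    PluginInfo(String name) { this.name = name; }
    String getName() { return name; }
}

class LoomClassLoader extends ClassLoader {
    private final PluginInfo plugin;
    LoomClassLoader(ClassLoader parent, PluginInfo plugin) {
        super(parent);
        this.plugin = plugin;
    }
    PluginInfo getPluginInfo() { return plugin; }
    // originally the main point of the class: readable debug output
    @Override public String toString() {
        return "LoomClassLoader[" + plugin.getName() + "]";
    }
}

public class CallingPluginExample {
    // Utility-method pattern: find out which plugin we were called from
    // by inspecting the context class loader.
    static String callingPlugin() {
        ClassLoader cl = Thread.currentThread().getContextClassLoader();
        if (cl instanceof LoomClassLoader) {
            return ((LoomClassLoader) cl).getPluginInfo().getName();
        }
        return "<no plugin>";
    }

    public static void main(String[] args) {
        ClassLoader parent = CallingPluginExample.class.getClassLoader();
        Thread.currentThread().setContextClassLoader(
                new LoomClassLoader(parent, new PluginInfo("ant")));
        System.out.println(callingPlugin()); // prints "ant"
    }
}
```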
New Bootstrapper on the Way
Shortly, there will be an all-new bootstrapper, developed from scratch and independently of Loom. It will add better class-loader management (package exports instead of JAR exports, package versioning, explicit imports; basically the things OSGi provides), more structured access to plugins and their resources, a distinction between boot modules (plain old HiveMind modules) and so-called AppModules (plugins), and a simple VFS.
The VFS
As for the VFS, it comes in two parts. First, the file-system layout of plugins has been made pluggable (interface AppModuleLayout, which is responsible for recognising plugins, finding hivemodule.xml and scanning the classpath of a plugin). Together with the "virtual" VFS provider, this allows developers to work with the IDE build without having to set up a specific file structure on each build (e.g. by calling Ant) just to be able to run it, which eases and speeds up development greatly. Second, the virtual VFS together with the development AppModuleLayout provider allows the user to "mount" resources at some point in the VFS using hiveapp-mount.properties. This way, you can simply scan the working directory tree; the system won't notice that things actually aren't where they seem to be:
```
# Avoid clutter of the plugin root by mounting nothing as the root
/: null:/add/anything/here/eg/the/plugin/name/for/logging

# mount src/descriptor/hivemodule.xml into the root
/hivemodule.xml: src/descriptor/hivemodule.xml

# The '!' means: mount as top-level container; there's no way to distinguish e.g. between
# a directory and a JAR file, both are containers. Therefore, the VFS has the concept
# of top-level containers, i.e. containers that mark the beginning of something new in
# the container hierarchy. This allows the classpath scanner to recognise JAR files (or
# directories marked as top-level) as a classpath root.
/lib/classes: !target/classes

# mount our lib directory (populated by Ivy) into our /lib
/lib: lib

# Mount our DRL files for the Drools plugin
/rules: src/rules
```
The class loader also uses this VFS to load classes. I leave it to your imagination what more could be done with this concept. Load plugins from the network? Trigger a sub-Loom to build a plugin before starting the main Loom?
This new bootstrapper will clean up many things I wasn't really happy with ...
The ClassLoader
The new class loader supports explicit package-level exports and imports with versioning. Packages are exported and imported by contributing to the configuration points hiveapp.Exports and hiveapp.Imports. Furthermore, it can operate in two modes: first try to import and then look locally (the recommended default, as it reduces the danger of running into class-loader conflicts), or first look locally and only import if nothing is found.
```xml
<contribution configuration-id="hiveapp.Exports">
    <!-- a plain export -->
    <package name="my.package" version="1.1"/>

    <!-- the recommended way of exporting: export some version but try to import
         first, preferring newer versions. Instead of writing accept=..., you
         could also just add <package name="another.package" version="3.2.2+"/>
         to the imports. -->
    <package name="another.package" version="3.2.2" accept="3.2.2+"/>

    <!-- export recursively; careful with that -->
    <package name="yet.another.package" version="2.0" recursive="true"/>
</contribution>

<contribution configuration-id="hiveapp.Imports">
    <!-- import a specific version -->
    <package name="my.package" version="1.2"/>

    <!-- import some version or newer -->
    <package name="my.package" version="1.2+"/>

    <!-- import some release of a package, ignoring any minor release:
         accepts 1.2.3 or 1.2.1; rejects 1.1 and 1.3 -->
    <package name="my.package" version="1.2?"/>

    <!-- import any version -->
    <package name="my.package" version="*"/>
</contribution>
```
If more than one exported version matches an import, the latest one is used. Each ClassLoader instance registers itself as an MBean so that, in the event of "strange" ClassCastExceptions, you can query which class was loaded from where.
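The version specifiers above can be read as four matching rules. The following is only a sketch of how such matching might work, inferred from the XML comments; the actual matching code is not part of this document:

```java
import java.util.Arrays;

// Sketch of the hiveapp version-specifier semantics described above:
//   "1.2"  exact version, "1.2+" that version or newer,
//   "1.2?" any release sharing the prefix 1.2, "*" any version.
// The exact semantics are an assumption based on the documented examples.
public class VersionMatcher {
    static int[] parse(String v) {
        return Arrays.stream(v.split("\\.")).mapToInt(Integer::parseInt).toArray();
    }

    // compare dotted versions component-wise, missing components count as 0
    static int compare(int[] a, int[] b) {
        for (int i = 0; i < Math.max(a.length, b.length); i++) {
            int x = i < a.length ? a[i] : 0;
            int y = i < b.length ? b[i] : 0;
            if (x != y) return Integer.compare(x, y);
        }
        return 0;
    }

    /** Does the exported version satisfy the import specifier? */
    static boolean matches(String spec, String exported) {
        if (spec.equals("*")) return true;                     // any version
        if (spec.endsWith("+"))                                // that version or newer
            return compare(parse(exported),
                           parse(spec.substring(0, spec.length() - 1))) >= 0;
        if (spec.endsWith("?")) {                              // same release prefix
            int[] prefix = parse(spec.substring(0, spec.length() - 1));
            int[] v = parse(exported);
            if (v.length < prefix.length) return false;
            for (int i = 0; i < prefix.length; i++)
                if (v[i] != prefix[i]) return false;
            return true;
        }
        return compare(parse(spec), parse(exported)) == 0;     // exact match
    }

    public static void main(String[] args) {
        System.out.println(matches("1.2?", "1.2.3")); // true
        System.out.println(matches("1.2?", "1.3"));   // false
        System.out.println(matches("1.2+", "1.3"));   // true
    }
}
```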
The Core Plugin
The core plugin coordinates multi-project builds and loads and manages the configuration of the project.
Configuration
The configuration is read in several steps: first, a system-wide configuration is read; then a user-specific configuration; finally, the project configuration is applied (see below).
The configuration mechanism is kept rather simple. Configuration files are XML files with the root element project; all direct subelements of it are handled by handlers contributed by the plugins. The core itself contributes two handlers, info and general, which read meta information (organisation, name, version) and general settings (currently there are none, but there certainly will be).
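A minimal project file following this structure might look like the sketch below. Note that the element names inside info are guesses; the document only lists the meta information (organisation, name, version), not the exact XML shape:

```xml
<?xml version="1.0"?>
<project>
    <!-- handled by the core's 'info' handler; element names are assumed -->
    <info>
        <organisation>example.org</organisation>
        <name>my-project</name>
        <version>0.1</version>
    </info>

    <!-- handled by the core's 'general' handler (no settings exist yet) -->
    <general/>
</project>
```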
These handlers are passed an org.w3c.dom.Node, and it is up to them to handle its contents. They are called once per configuration scope, from the most general to the most specific:
- Initialise the project
- Handle the node from the system configuration
- Handle the node from the user configuration
- Handle the node from the meta project(s), if any
- Handle the node from the project file
Multi-Project Builds and Meta Projects
Don't worry, the concept of meta projects is only slightly different from the 'classical' hierarchical project/module concept.
The idea is to focus on the (sub-)project being built and to place it in context using a meta project. That is, a project may be part of a meta project, which contains configuration common to all sub-projects. The multi-project build itself, however, is actually part of the dependency resolution.
Let's look at an example: we have projects A, B and C. A can be built on its own; it depends on no other project. B and C both depend on A; C additionally depends on a project D, which is again standalone. The final build sequence depends on the project Loom was called from:
- Build A => Only A will be built.
- Build B => First A, then B will be built.
- Build C => A, then D and finally C will be built.
That's all there is to it. Basically, it's a top-down approach to multi-project builds instead of the usual bottom-up approach.
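The build sequences above fall out of a plain depth-first walk of the dependency graph. The Map-based graph representation below is invented for illustration; Loom's actual dependency resolution is not shown in this document:

```java
import java.util.*;

// Sketch: deriving the build order for the A/B/C/D example above.
// Dependencies are built first; each project is scheduled at most once.
public class BuildOrder {
    static List<String> buildOrder(String start, Map<String, List<String>> deps) {
        List<String> order = new ArrayList<>();
        visit(start, deps, order, new HashSet<>());
        return order;
    }

    private static void visit(String project, Map<String, List<String>> deps,
                              List<String> order, Set<String> seen) {
        if (!seen.add(project)) return;              // already scheduled
        for (String dep : deps.getOrDefault(project, List.of()))
            visit(dep, deps, order, seen);           // build dependencies first
        order.add(project);
    }

    public static void main(String[] args) {
        Map<String, List<String>> deps = Map.of(
                "A", List.of(),
                "B", List.of("A"),
                "C", List.of("A", "D"),
                "D", List.of());
        System.out.println(buildOrder("A", deps)); // [A]
        System.out.println(buildOrder("B", deps)); // [A, B]
        System.out.println(buildOrder("C", deps)); // [A, D, C]
    }
}
```

The sketch does not detect dependency cycles; a real resolver would have to report them instead of silently skipping already-visited projects.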
The Ant Plugin
TODO
The Drools Plugin
The Rule Base
TODO
The Working Memory
TODO
The Java Plugin (or: How a Rule-Based Build System Works)
TODO