IDE Version

Add instant code structure visualization to the Eclipse IDE.

Reverse-engineered class diagrams, tightly integrated with the Eclipse IDE, delivering powerful, custom-filterable class diagrams ideal for agile Java development.

Buy Now
Full Version

Be the first to bring browser accessible class diagrams to your whole team.

With the full force of the IDE version plus a batch generator intelligently building HTML5-hosted class diagrams interactively browsable by the whole team.

Buy Now

Ten Reasons Why CASE Tool Programming is Harder Than, say, Game Programming

Looking at the graphical output of a CASE tool like AgileJ StructureViews, the 2d picture it creates, while informative, is somewhat tame compared with the rich graphical 3d worlds created by computer games. Simple lines, text and a few icons appear dry and lifeless compared with the varied textures and landscapes of a game, not to mention the effects and behaviors like liquids or explosions which are routine for games and motion graphics software. Ideally, we would like software structure visualization to be as vivid as playing a game or watching a movie, but CASE tool development is weighed down by a number of additional concerns which leave reduced development resources to devote to the graphical layer. This article lists those additional concerns.

  1. Platforms. Many games run on a single platform, and this frees the programmer from having to continually consider platform portability issues. If the product works in development then it will probably work the same way in deployment. Java does help us a lot with developing for the three main platforms, MS Windows, Mac OS X and Linux, but there is still a lot of effort required to test across supported platforms. It is surprising how the quality of graphical content in Java varies between these platforms. The speed and clarity of graphical content rendering are impacted by the operating system and the hardware, and what looks fine and renders quickly on one platform often lacks resolution and speed on another.
  2. Java versions. Thankfully, the Java language is settling and maturing well now. But changes between Java versions (Java 1.4 to Java 1.5 comes immediately to mind) introduced a great many new constructs which had to be supported. In particular the introduction of parameterized types had an impact on how class diagrams are displayed. Not only do the parameterized types have to be displayed in methods and fields, they have an impact on containment relationships too. It makes sense, for example, to show a relationship between two classes using the contained type rather than the container.

    List<SomeOtherClass> oneToManyField;

    Hence supporting a shifting programming language demands model changes in the CASE tool.

  3. Eclipse versions. Eclipse takes care to avoid having new versions break old plugins. There is a rule in the use of Eclipse packages that if the package name excludes the word 'internal' then it is part of the platform API and will remain stable from release to release. However, if a package does contain the word 'internal' then it is fair game for change between Eclipse versions. Even so, the possibility remains that subtle differences in the implementation of the API can have unexpected effects across versions. Furthermore Eclipse, being an SWT application and therefore using native graphics where possible, varies in its behavior across platforms. In particular we have found, and had to create workarounds for, the different ways native image components operate and the image size constraints they impose.
  4. Undo and redo. Support for undo and redo adds whole layers of complexity on its own. The real world doesn't support undo and redo (although we sometimes wish it did), and games don't generally have to worry about undo and redo either. However, that policy is not acceptable for a CASE tool. We expect Ctrl+Z to take us back to the previous state. While that sounds simple enough, anyone who has implemented undo/redo buffering will tell you that it is complex. Firstly, the application must constantly keep a record of actions taken and buffer them ready to be undone. Secondly, it is not sufficient to just restore a previous model state: the user must be shown that state restore happening, otherwise, without feedback, the user will doubt that the undo action has had any effect. Hence along with recording each action, the display parameters must also be recorded so the action can be 'played back'. Thirdly, the undo and redo buffers can lose integrity if a further action is executed after restoring a previous state. Fourthly, actions happen in clusters. For example, selecting then repositioning multiple classes, while it may be implemented as many separate actions, must be undone or redone in a single step.
  5. The file system. Games rarely involve the file system - or, more specifically, user interaction with the file system - beyond the usual backslash versus forward slash issues. It's even harder when you consider that the Eclipse IDE contains a workspace with many projects, each with its own source folders and buildpath entries. The same goes for clipboard and drag-and-drop support. These are not features which the finished product can boast about, and most users don't even notice they are there - but if they were missing it would seem strange.
  6. The combinations of possibilities in the Java model. Inner classes, parameterized types, sourcecode/bytecode, code which doesn't compile and refactoring tools all make it complex to model object oriented Java software under construction. To be fair, the Eclipse Java model mostly performs exceedingly well - building inheritance trees, presenting code completion options and indexing the model according to multiple facets - but there are some Java model issues, such as dependencies, for which we have to create additional Java modeling code. This additional modeling code takes time and the development of a lot of test cases. The point is that the Java model is made up of multiple 2d graphs: the dependency graph, the inheritance graph, the packages/bundles graph, the call graph and the composition graph. Hence the Java model far exceeds in complexity the familiar 3d world most games have to model.
  7. Integrating with the IDE. Early CASE tools were programming language independent and were stand-alone applications. This made things simple as the application could be built from the ground up without worrying about interaction and integration with other software development tools. But the main limitation of the stand-alone approach was that effort was required by the user to keep the CASE model in line with the implementation in a real programming language like Java or C++ happening in a separate IDE. Fortunately, IDEs like Eclipse offer excellent facilities for third-party tool vendors to create plugins which integrate with the programming environment they contain. This affords an excellent opportunity to tap into high-quality models in the IDE, but the downside is that it takes time to understand the APIs offered by the IDE. Most games are written for a platform, such as Android, rather than for a framework. Yes, you can write mods for Minecraft, but the mods are more like tweaks to an existing game as opposed to whole games being written by plugging into a generalized gaming framework.
  8. Text/fonts. Games rarely need to render large volumes of detailed text, whereas CASE tools are required to present large amounts of it, and badly scaled, fuzzy text is not acceptable. The entire diagram zooming mechanism in AgileJ is therefore built around letting the underlying platform graphics layers render text at fixed point sizes. The layout of the class diagram is built up backwards from the size occupied by each piece of text. Text rendering is a highly time-consuming aspect of CASE tool development.
  9. Printing and exporting. Games generally never print anything or export graphical content ready to be opened by another application. However, printing and exporting are significant functions for CASE tools. People like to print class diagrams for reference, or export the content so that it can be embedded in other documents. Unspectacular as this is, printing is complex because of the page breaks, labelling each page and giving a preview of how pages will be printed.
  10. Filtering. Finally, there is a need within CASE tools for filtering down the volume of information which they present. This is akin to switching on or off a number of options within an online map application - things like terrain, roads, traffic information, place names, boundaries. In a CASE tool such as AgileJ StructureViews, the filtering controls the content which is displayed and sifts through rules configured for each Java element to determine what to display and how to display it. This filtering layer lies between the Java model and the presentation layer. The tools which allow the user to specify the filtering, and the filtering mechanism itself are important for the usefulness of the product as they simplify the graphical content and reduce the detail which is presented - not something you ever want a computer game to do.
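
To make point 4 concrete, the undo/redo buffering described there can be sketched as two stacks of command objects. This is a minimal illustration only - the class and method names are invented for the example, not taken from AgileJ:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical command interface: each user action knows how to apply
// and reverse itself.
interface Command {
    void execute();
    void undo();
}

// Minimal undo/redo buffering: one stack of performed actions, one of
// undone actions ready to be replayed.
class UndoManager {
    private final Deque<Command> undoStack = new ArrayDeque<>();
    private final Deque<Command> redoStack = new ArrayDeque<>();

    void perform(Command c) {
        c.execute();
        undoStack.push(c);
        redoStack.clear(); // a fresh action invalidates the redo buffer (point 4, thirdly)
    }

    void undo() {
        if (!undoStack.isEmpty()) {
            Command c = undoStack.pop();
            c.undo();
            redoStack.push(c);
        }
    }

    void redo() {
        if (!redoStack.isEmpty()) {
            Command c = redoStack.pop();
            c.execute();
            undoStack.push(c);
        }
    }
}

public class UndoDemo {
    static int x = 0;

    public static void main(String[] args) {
        UndoManager mgr = new UndoManager();
        Command increment = new Command() {
            public void execute() { x++; }
            public void undo() { x--; }
        };
        mgr.perform(increment);
        mgr.perform(increment);
        mgr.undo();
        System.out.println(x); // two increments, one undo
    }
}
```

A real implementation must additionally record display parameters with each command so the restore can be 'played back' visibly, and must group clustered actions into a single undoable step.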

Working on a modestly graphical application like a CASE tool is probably not a fair comparison with the intense graphical content of a game. Games are built first and foremost around how they look and behave - and this puts games programmers in the enviable position of creating smooth, detailed worlds. In the future, code visualization will head nearer to a comparably well rendered, interactive world in which software under construction will be navigated more like a 3d world than the primitive 2d world it currently occupies. CASE tool development has a lot of catching up to do.

8 Ways UML is Holding Back Java Code Structure Visualization

April 1st, 2013


The human brain is very good at handling pictures, and graphs and diagrams are a quick way of grasping structure. Java source code is full of structure which we programmers have to understand, but the leading visual language we have is UML, which was built for hand-drawn diagrams and results in cluttered and unreadable diagrams when reverse engineered. What would you change about UML to make the pictures do a better job of helping you to understand the code? In answer to that question, here are some reasons UML diagrams do a poor job of Java code visualization, along with some suggestions as to how code visualization could be made to better serve the Java programming community.
1. Language neutrality
When UML was conceived there was a lot in common between the emerging OO programming languages - and there still is. But the reason we have different programming languages as opposed to having one standard programming language is because ideas shift over time as to what makes a language efficient and expressive. And for the same reason ideas about how to visualize the structure in that language should be allowed to shift with the language. In other words, the modeling language should serve the programming language in each case as needed, and we should stop worrying about trying to unify across different programming languages.
2. Based on hand drawing
If a software development methodology prescribes up-front design with the project stakeholders gathered in one room, then hand drawn diagrams are indeed a rapid and expressive way to communicate and document design. However, if the methodology favors working code over documentation then the design is expressed directly as code by the programmers. Advances in IDEs have helped to make this possible through wizards and refactoring capabilities. So while UML serves its purpose as a visual language for hand drawing this in no way qualifies it as a suitable visual language for reverse engineering.
3. Symbols
The visibility symbols of UML are plus (+) for public, minus (-) for private, and hash (#) for protected. If this was an intuitive nomenclature then it would have been adopted by IDE makers. However, the reason for this choice of symbols in UML was, again, ease of hand drawing. Yet it fails on an aesthetic level when glancing down a list of class members especially if displayed with the same font as the member names. Furthermore, IDEs use richer symbols which convey more than just class and member accessibility.
4. Associations between classes
To Java programmers, used to making references between classes using only one mechanism (field members), the notions of composition, aggregation and association take a little more thinking about. Most can quickly appreciate the differences between these three once having realized the roles that object ownership and object lifecycle have in determining the type of association. However, composition is really only a concern in the absence of automated garbage collection: destruction of the owner mandates destruction of its parts. Clues to aggregation and composition in Java are expressed through things like naming conventions, inner classes, the package structure and design patterns. Because the programming language makes no distinction between association types, then it is difficult for reverse engineering tools to make the distinction and determine whether to put a diamond on the tail of the association line or not, and whether to color it in or not.
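
Because all three association types collapse to the same Java construct - a field - a reverse engineering tool has little to go on when deciding which diamond, if any, to draw. A minimal sketch (the Car and Engine classes are hypothetical):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical classes: conceptually one field is composition and the
// other is a looser association, but the language expresses both as
// plain object references, so the code alone cannot tell a tool which
// UML diamond to draw.
class Engine {}

class Car {
    private final Engine engine = new Engine();        // conceptually composition
    private final List<Car> convoy = new ArrayList<>(); // conceptually association
}

public class AssociationDemo {
    public static void main(String[] args) {
        // Both relationships are represented by the same construct: a field.
        System.out.println(Car.class.getDeclaredFields().length);
    }
}
```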
5. Cardinality
From the point of view of static structure analysis we care whether a reference in the code is a reference to a single object or to multiple objects. Java represents this as either a field pointing to a single instance of a given type or a field holding an array or collection of instances of that type. In other words, relationships are either one-to-one or one-to-many, with no other cardinalities built into the language. Enforcement of other cardinalities normally happens dynamically, through checking for null in the case of single objects and through range checking in the case of multiple objects. From a reverse engineering perspective it is a headache to try to determine cardinality ranges, and most likely not worth the effort given that cardinality ranges of, say, 3..17 are rarely hard coded anyway.
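
As a sketch of the point above, the only cardinalities Java states directly in a field declaration are one-to-one and one-to-many; anything finer is a runtime check. The class and field names here are hypothetical:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical domain classes illustrating the two cardinalities Java
// can express statically in a field declaration.
class Customer {}
class LineItem {}

class Order {
    Customer customer = new Customer();         // one-to-one
    List<LineItem> items = new ArrayList<>();   // one-to-many
    // A range like 3..17 is not expressible in the type; it would have
    // to be enforced dynamically, e.g. by a range check when adding items.
}

public class CardinalityDemo {
    public static void main(String[] args) {
        Order order = new Order();
        order.items.add(new LineItem());
        order.items.add(new LineItem());
        System.out.println(order.items.size());
    }
}
```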
6. Java conventions
Java programmers care about a range of conventions which help to make code easy to read and the structure easy to understand, but these are simply absent from UML diagrams. For example:
  - Bean properties - the naming convention and the presence of get and set methods amount to an association with the type being set or got.
  - Checked exceptions - love them or hate them, they are part of the language, and it is useful to be able to see them as they are part of the method signature and occupy a dimension of their own.
  - Serialization - ask anyone who has serialized a graph of Java objects and they will tell you how important it is to maintain an overview of the boundaries of what is included in the serialization operation.
  - Synchronization - not center stage most of the time, except when deadlocks occur and the need arises to trace through the model looking for possible causes.
  - Collections - the cornerstone of object modeling in Java, representing one-to-many relationships most of the time, yet not inherently viewed as such by UML.
Java programmers find little value in visual representations of their code when the diagrams are ignorant of the constructs and conventions of the programming language.
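
As an illustration of the bean properties convention, a get/set pair implies an association that a Java-aware tool could detect from the method names alone. The bean below is hypothetical:

```java
import java.lang.reflect.Method;

// Hypothetical bean: the getAddress/setAddress pair implies an
// association with Address, purely by naming convention - something a
// Java-aware visualization could draw but plain UML has no notion of.
class Address {}

class CustomerBean {
    private Address address;

    public Address getAddress() { return address; }
    public void setAddress(Address address) { this.address = address; }
}

public class BeanDemo {
    public static void main(String[] args) {
        // Detect the bean property the way a tool might: by naming convention.
        boolean hasAddressProperty = false;
        for (Method m : CustomerBean.class.getMethods()) {
            if (m.getName().equals("getAddress")) {
                hasAddressProperty = true;
            }
        }
        System.out.println(hasAddressProperty);
    }
}
```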
7. Naming and labelling of associations
UML supports labeling of associations on the canvas next to the association line. On a freeform hand-drawn diagram you might well label lines by adding free text parallel to the line, but at the same time you would limit the amount of text, write only in empty space and position the text so that it is clear which line it refers to. Automated class diagram generation struggles to do the same, leaving a smattering of text around the lines. In any case, Java does not need labels on associations, as the field name is effectively the name of the association.
8. The assumption of paper
Finally, and linking back again to the theme of hand drawing, UML diagrams contain no notion of tooltips, folding, filtering, hyperlinking or any other degree of interactivity which is now the norm for electronic documents. Instead, reverse engineered UML diagrams remain constrained as if paper based. Online maps allow the switching on and off of place names, traffic information and aerial photographs, and contain links to local services and points of interest. UML in particular, and code structure visualization in general, needs to start presenting the wealth of structural information interactively, filtering out most of the information to make the diagram easy to navigate and revealing more detail about one element at a time as the user shows an interest in it.

8 Myths About Software Modeling Tools and Modeling Languages

March 28th, 2013


These eight myths about modeling tools and modeling languages might sound manifestly ridiculous given what we now know about how best to go about developing software in ways which ensure delivery of business benefit and minimize the scope for defects to go undetected. Yet a decade or so ago, belief in the validity of the ideas below drove a global market in heavyweight software modeling tools. This was at a time when the software industry was awash with well funded software projects all drawing from a relatively small pool of skilled programmers, who took the opportunity to command strong hourly rates for their efforts. High programmer rates in turn made it easy for modeling tool vendors to enchant project managers with stories of programmer requirements being slashed through the use of sophisticated modeling tools. Remember that at the top of the price range it was typical to be charged USD 10,000 for the tool, plus the same amount again per developer for training.

Myth 1. Programmers spend a long time typing.

Generating skeleton code will therefore save a lot of expensive programmer time as there will be a lot less typing to do. Skeleton code is source code without executable statements - in other words the class declarations, fields and method signatures with stubbed out method bodies. Generation of skeleton code is the centerpiece of the forward engineering feature set, the idea being that the business-specific design work is best performed by an analyst/designer pictorially, using a modeling tool. The skeleton code can then be fleshed out by programmers who need only focus on the detail of one method at a time without having to worry about the larger model as a whole.
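
For illustration, a generated skeleton might look like the fragment below - declarations and stubbed bodies only. The class and method names are invented for the example, and the main method is added here only to exercise a stub:

```java
// Hypothetical generated skeleton: class declarations, fields and method
// signatures are present, but every body is a stub awaiting a programmer.
class Invoice {}
class InvoiceRepository {}

public class InvoiceService {

    private InvoiceRepository repository; // declared by the generator, never wired up

    /** Stub: the programmer fills in the body later. */
    public Invoice createInvoice(String customerId) {
        // TODO: implement
        return null;
    }

    /** Stub: the programmer fills in the body later. */
    public void sendInvoice(Invoice invoice) {
        // TODO: implement
    }

    public static void main(String[] args) {
        // The skeleton compiles and runs, but does nothing useful yet.
        System.out.println(new InvoiceService().createInvoice("C42"));
    }
}
```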

Why is this a myth?

While the skeleton can account for, say, 20% or so of the final source code, it would be naive to think that this therefore equates to 20% of the whole software coding effort. The actual saving in development cost due to the generation of skeleton code is likely to have been negligible for the following reasons: 1. Even the slowest single-fingered programmer spends much more time thinking about algorithms, looking up examples of how to use third-party libraries, thinking of descriptive variable names, writing tests and fixing defects. In other words, even if all the time spent typing could be eliminated, programming would still take about the same length of time. 2. Even if typing were the rate-determining step, the names of all the classes, fields and methods still have to be typed into the modeling tool, so the programmer's typing effort has simply been shifted elsewhere.

Myth 2. Programmers spend a long time deciphering hand-drawn design diagrams due to the absence of a single unified standard meaning for the symbols.

Having arrived via separate routes, leading authors had prescribed parallel object oriented modeling languages comprised of symbols to represent constructs like class, inheritance, composition and information hiding. Standardizing the nomenclature saves time spent switching between competing standards. By using a modeling tool then the tool can enforce the modeling language standards.

Why is this a myth?

People work with non-unified language all the time because there is a trade-off between uniformity and freedom of expression. Here is an example to illustrate this point. If you are asked by a visitor to your office for directions to the train station, you might respond "wait a second, I will draw you a map". But which pictorial language should you adopt, and will the visitor understand it? The map you draw will likely contain a clear start and finish point, with direction arrows, left and right turns, road junctions and significant landmarks. Would the communication be helped by the strict application of a modeling language in this instance? Or would, as is far more likely, a language created on the fly with unimpeded expression be fast, expressive and comprehensible? Obviously there are differences between specifying object oriented models and describing a short journey across a few streets, but there are also many similarities. Relevance of the information displayed matters more to the communication than strict adherence to an agreed language. Think how much of a pain an application for drawing ad-hoc street direction maps would be compared with a quick sketch on a piece of scrap paper.

Myth 3. After training, customers and analyst/programmers can both understand and validate software design diagrams sitting together around a table. The diagrams can work as a middle ground that both customers and implementers can understand.

Using a requirements capture method, the functions of the new system can be described, firstly in English, and then onwards to greater abstractions detailing the static and dynamic behavior of the model.

Why is this a myth?

In practice customers relate well to use cases, but usually get lost somewhere en route to state transition diagrams and class diagrams. Even some developers fail to grasp the differences between composition and aggregation as these are notions which exist only in the model and not in the code. These days we show working software to customers, not diagrams of what the software will be.

Myth 4. Left to their own devices, programmers write software without caring whether the finished product will be used. Giving programmers a blueprint of the required system means that the goals of the system as a whole are guaranteed to be met.

A detailed up-front design adds predictability to the software creation process and assists planning the implementation effort. The more detail in the design, the greater the certainty that the end result will meet the requirements.

Why is this a myth?

Only a minority of programmers would churn out code without caring if the finished product is of use to the customer. Arguably, constraining programmers to work on small functions one at a time creates detachment from the bigger picture. Instead of the goal being to satisfy the customer, the goal is to implement a given number of methods within a given timescale. It should also be noted that this approach assumes that the design is correct. The design may be incorrect, or there may be no such thing as a correct design if there are conceptual flaws in the requirements as stated by the customer.

Myth 5. Expensive modeling tools pay for themselves in the long term.

Software programmers are expensive, so it stands to reason that reducing the need for programmer time is going to save a lot of money across the duration of the project lifecycle.

Why is this a myth?

Bearing in mind that, as stated above, this could stretch to USD 10,000 a seat plus nearly as much again for training, this is quite a claim. Granted, the cost of the tool and training buys no more than a few weeks of programmer time, but even so the tool would be unlikely to save the equivalent cost, because the premise upon which the claim is based assumes the project contains a high proportion of repetitive programming work which can be condensed down to a simpler modeling task. In reality, programmers are adept at avoiding repetition, using domain specific languages and other code generators, aspects, templates, inheritance and frameworks like Spring and Hibernate, all to keep down the volume of verbose, hand-written code.

Myth 6. Project teams often change their mind about which programming language to use multiple times throughout the implementation phase of the project lifecycle.

Model Driven Architecture (MDA) makes the target programming language swappable. Unlike the generation of skeleton code, for MDA the result of translation of the model to an implementation (programming) language is an executable system.

Why is this a myth?

The effectiveness of MDA as an overall development approach is not the question here - rather whether the flexibility on target implementation language is that useful an option. The question boils down to: have you ever got part way through a project and wished you'd picked an alternative programming language, and furthermore wished you could distance yourself from the details of the programming language? This is unlikely for two reasons. Firstly, you pick a programming language before you start development based on availabilities of libraries, tools, workforce and community knowledge. Secondly, when bugs need fixing, you need to focus in on the detail of the action of a single line of code, where platform and runtime version suddenly start to matter, not step back from the detail.

Myth 7. It is a good thing to keep the code separate from the documentation.

The code has the primary purpose of running correctly. If it fails in this purpose, the customer will not pay for the product. Usually, the code contains some work-arounds, fudges, fixes and boilerplate code, all of which are ugly and obscure the picture of the business object model which project stakeholders want to see. A clean model of the software, separate from the detail of the implementation, can provide a more meaningful view, and it is worth keeping the model up to date as the code is developed.

Why is this a myth?

It is not a myth that a model is useful. It is a myth that it is worth the effort of maintaining the model, because that effort is considerable, and has a knock-on effect. The effort arises because programmers move the codebase forward rapidly, but in ways which respond to the discovery of limitations in the design, and which exploit better implementations as they are discovered. For example, a programmer may find that the design contains some repetition, which makes sense at the business object level, but would be a wasteful pattern to follow in the construction. However, this causes a headache at the model level. Should the model reflect the requirements as captured, or be updated to reflect the implementation as optimized? And what happens when it transpires that the design was inadequate - which we will look at next.

Myth 8. The design can be got right before the programming starts.

This is the most dangerous of the myths because it sounds the most plausible. The construction industry, for example, executes projects with a detailed up-front plan every time. The design is modeled, visualized and reviewed thoroughly before construction work commences. It therefore does not seem unreasonable that software can also be designed and planned to fit together seamlessly before the programming starts, making the programming somewhat of a formality with a predictable timeline. Therefore anything which assists the production of clear, unambiguous design takes software development nearer to that goal of assured, timely delivery of a product fit for purpose. Software modeling tools offer the promise of a complete, detailed design, with a traceable history from requirements capture through to each facet of the final design.

Why is this a myth?

It is a myth because the software industry and the construction industry are different. Here are the differences:
  1. The purpose of a construction project is normally obvious. While the details of a bridge or a building require a lot of thought, the overall function can be summarized in a few sentences. This is not so with most software, which has to satisfy a purpose that is hard to specify.
  2. The materials and components of construction change slowly. This is not to say that new materials do not come into use in construction, but in software the platforms, tools, libraries and frameworks change far more rapidly, making it more difficult to specify up-front how easy the implementation will be and how well the finished product will perform.
  3. The complexity of the requirements is much higher for software. There are simply more conditions, rules, exceptions and special cases in the definition of a software system.
  4. Software can be adapted and changed, and it makes sense to exploit that flexibility.

Closing thoughts

The software industry has come a long way in the past ten years, but the vestigial influence of modeling tools remains to this day in a couple of key places. Software tool catalogs and awards competition categories in particular still adhere to this antiquated classification of development tools, without waking up to the fact that the industry has long since grown skeptical of wild claims about programmer productivity from modeling tools and moved on to better approaches more solidly grounded in measurable productivity.

Using XText to make Software Easy to Configure

The meaning of Configuration

Imagine a piece of software which can be run out of the box without needing any instance-specific information - without any configuration. Not that the software runs with overridable default values if you do not supply any; this software was developed perfectly, foreseeing all operating needs and circumstances and never needing to be configured. Of course, this rarely happens except for bespoke software created to be installed in just one static situation. Normally, there are many behavior-defining details which are unknowable as we develop our software because the required behavior varies somewhat from user to user or installation to installation. The variation arises from things like differing requirements according to line-of-business or integration between ours and other systems whose credentials are installation-specific. For this reason we anticipate the variation, declare those details configurable and build our software around them expecting them to be resolved to concrete values at runtime. Industrial scale systems typically require extensive configuration to tailor the system for any one customer's needs and circumstances - a specialist's task.

The configuration is often read at system startup, but this is not always the case. For example in the case of the Caucho Resin web server, the configuration file web.xml, in which website page redirections can be specified, may be modified and saved; this action immediately updates Resin's web-serving behavior without it needing to be restarted.


Before we investigate the different ways of representing configuration information let us have a working definition of configuration:

The consolidation of information input to a system which: is unknowable at build time; is read as the system is initialized or as changes are detected; accommodates installation-specific details; and, once interpreted, may from that point onwards direct the behavior of all other execution.

This definition covers what the configuration amounts to from the viewpoint of the system, but bear in mind we are also interested for this topic in the ease with which the user can edit the configuration. Yes, the user supplies values, but it helps if the process of supplying those values is guided by options and confirmed to form a meaningful set by validation.


configuration interaction


Input Types

Configuring software often involves a mixture of input types: command line arguments, .ini files, XML files (for example, deployment descriptors or bean wiring), environment variables, admin screens and setup wizards, not to mention plugins for advanced configuration and customization. Or the configuration settings could live in rows of a database table. So why propose yet another way to configure software? Well, if we look at some of the pros and cons of these conventional ways first, then we will be in a position to judge how using a Domain Specific Language (DSL) measures up as an alternative.

Java frowns on environment variables due to their platform dependence, while .ini files and the command line offer only limited expression of a handful of configuration values. If our software needs any magnitude of structured, iterative, self-referencing, type-validated data to direct its execution, then the choice boils down to either grabbing values from an XML file, building user interface screens to guide the user through the process or declaring a plugin API.

Example: Logistics System

Suppose we are building a logistics monitoring system which polls a number of services, one of which is called the Cargo Booking Service, and notifies specified members of staff whenever there is a change to the services' availabilities. In reality, such a change in availability may well have to trigger a range of different types of actions and make a variety of co-ordinating calls to other systems, but for the sake of simplicity let us assume the response is simply to send an email to an individual. This logistics system will be used as the example against which we will try different approaches to configuration.


The XML to configure our logistics system might look something like this:

<staff id="bsikes" name="Bill Sikes" email="bsikes@example.com"/>
<staff id="jdawkins" name="Jack Dawkins" email="jdawkins@example.com"/>
<notification source="cargo-booking" event="unavailable">
	<notify staff-ref="bsikes"/>
</notification>

Two members of staff are declared, each with a unique id, real name for addressing them formally at the start of any message and their email address. These are followed by an instruction to notify the first staff member upon the Cargo Booking Service becoming unavailable.

Using XML for configuration in this case is fairly quick and convenient to develop and test. A third-party library does all the hard work of parsing and validating the XML for us and serves it up as a structure from which our software can pluck the values it needs wherever in the code it needs them; the Document Object Model (DOM) is one way of doing this. We can quickly lash together the first config file in XML, letting its format firm up alongside the software that depends upon it, adding more tags and attributes as needed.

But as well as making things easy for us programmers, XML also offers help in various forms to the user doing the configuring (the stick man in the diagram above). XML is plain text, and plain text has advantages over a Graphical User Interface (GUI). Plain text nurtures collective knowledge because snippets can easily be dropped into an email or pasted to a bulletin board. Plain text is search engine friendly, so anyone looking for examples of how to configure our software can search on any XML tag and view the examples which are thrown up. Furthermore, comparison between configuration file versions is simple with plain text: version control tools display who changed what, shedding light on why the system has started spamming some members of staff with cargo booking event notifications they do not care about. Finally, to give more guidance during configuration, an XML schema can be declared which directs an XML editor to enforce data types and apply structural constraints.
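As a concrete sketch of the DOM approach, here is a minimal Java reader that plucks the staff declarations out of the XML above. The class and method names are illustrative only, not part of any real API:

```java
// A minimal reader for the staff declarations, using the standard
// javax.xml DOM API. Error handling is trimmed for brevity.
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import java.util.LinkedHashMap;
import java.util.Map;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

public class StaffConfigReader {

    // Parses the configuration XML and returns a map of staff id -> name.
    public static Map<String, String> readStaff(String xml) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));
        Map<String, String> staff = new LinkedHashMap<>();
        NodeList nodes = doc.getElementsByTagName("staff");
        for (int i = 0; i < nodes.getLength(); i++) {
            Element e = (Element) nodes.item(i);
            staff.put(e.getAttribute("id"), e.getAttribute("name"));
        }
        return staff;
    }

    public static void main(String[] args) throws Exception {
        String xml = "<config>"
                + "<staff id=\"bsikes\" name=\"Bill Sikes\" email=\"\"/>"
                + "<staff id=\"jdawkins\" name=\"Jack Dawkins\" email=\"\"/>"
                + "</config>";
        System.out.println(readStaff(xml)); // {bsikes=Bill Sikes, jdawkins=Jack Dawkins}
    }
}
```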

However, despite these advantages over a GUI, asking users to edit raw XML is only appropriate if they are already used to XML's syntax of tags and attributes and if the configuration mainly contains small, unconnected structures.


The GUI to configure our logistics system could be a separate, dedicated application just for editing a single configuration file, or could be part of the same application with its configuration state persisted by some persistence provider. For a GUI we would probably hide the configuration model and storage behind a facade. The point is that a GUI to guide the user through the process of setting up our software is more appropriate where constraints must be met as the configuration settings are entered.

Let us consider what the GUI for configuring our logistics system might look like.


configuration gui


Now we can assist editing of the configuration by offering a set of legal options, such as the Notify list above being populated from previously created users. We can also validate the configuration by forcing the selection of one or more users before enabling the Create button. The GUI warns when deleting a member of staff would leave no-one notified when the Cargo Booking Service becomes unavailable.
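The deletion check can be sketched in a few lines. The data structures here are hypothetical stand-ins for the application's real configuration model, just to show the shape of the validation:

```java
// A sketch of the validation behind the GUI's warning: refuse (or warn on)
// deleting a staff member if doing so would leave a notification with no
// recipients. Notifications are modeled as source -> list of staff ids.
import java.util.List;
import java.util.Map;

public class ConfigValidator {

    // Returns true if removing staffId would leave some notification
    // with an empty recipient list.
    public static boolean deletionOrphansNotification(
            String staffId, Map<String, List<String>> notifications) {
        for (List<String> recipients : notifications.values()) {
            if (recipients.size() == 1 && recipients.contains(staffId)) {
                return true; // staffId is the sole recipient
            }
        }
        return false;
    }

    public static void main(String[] args) {
        Map<String, List<String>> notifications =
                Map.of("cargo-booking", List.of("bsikes"));
        System.out.println(deletionOrphansNotification("bsikes", notifications));   // true
        System.out.println(deletionOrphansNotification("jdawkins", notifications)); // false
    }
}
```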

But developing GUIs is slow and hinders agility; GUIs take time to test. Even with GUI testing tools, effort is still required to cover all permutations of actions. Bear in mind also that whenever the underlying software which uses the configuration is updated or refactored, the GUI needs corresponding amendments to keep it and its tests in step; and all the screenshots in the documentation need revising too.

While a well crafted GUI can make light work of configuration for even the first-time user of our software, a poor GUI can make the same task irksome for all. Bleating error messages, poor explanations as to why an input is unacceptable and controls disabled for no intuitive reason will quickly frustrate any user. The worst trait of a poor GUI is low visibility of the impact of a change. In the absence of any indication of how many outstanding issues need to be resolved before the configuration will reach an acceptable state, the GUI offers no guidance as to whether it is best to bash on with a particular configuration change or revert to a previous state and pursue a different strategy.

General Purpose Languages

We could expose an API to our logistics system in a General Purpose Language (GPL) such as Java or C#. The staff and notification objects would be expressed in the GPL in a plugin library which the user writes, compiles and places in a location discoverable by our system at run time.


java configuration


ILogisticsConfigurationPlugin is an interface which we declare in our API; it is not shown here, but can be assumed to declare the signatures of the two public methods. User and Notification are immutable classes, also exposed by the API, with constructors as called.
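A sketch of what such a user-written plugin might look like. Since the real API is not shown, the interface and the User and Notification types below are hypothetical stand-ins, written so the example compiles on its own:

```java
// Hypothetical reconstruction of the plugin approach. The interface and the
// two value types stand in for those exposed by the logistics system's API.
import java.util.List;

public class PluginSketch {

    interface ILogisticsConfigurationPlugin {
        List<User> getUsers();
        List<Notification> getNotifications();
    }

    // Immutable value types standing in for the classes exposed by the API.
    record User(String id, String name, String email) {}
    record Notification(String source, String event, User recipient) {}

    // The class the user writes, compiles and places somewhere discoverable
    // by the system at run time.
    static class MyLogisticsConfiguration implements ILogisticsConfigurationPlugin {

        private final User bsikes =
                new User("bsikes", "Bill Sikes", "bsikes@example.com");

        @Override
        public List<User> getUsers() {
            return List.of(bsikes,
                    new User("jdawkins", "Jack Dawkins", "jdawkins@example.com"));
        }

        @Override
        public List<Notification> getNotifications() {
            return List.of(new Notification("cargo-booking", "unavailable", bsikes));
        }
    }
}
```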

This is patently a heavyweight means of supplying configurability, effectively leaving a chunk of the coding to be completed at each installation. Yet it offers the ultimate in flexibility as the plugin is free to obtain the information programmatically from whomever, wherever and however it pleases; not necessarily hard-coded as in the example above.

The GPL is more than capable of expressing the configuration information our logistics system requires and the compiler checks that the contract of the interface is met and the statements are executable; but the compiler knows nothing of the notion that the Notification constructor's User parameter must be an existing User and not a User constructed in-line.

If there really is no need for the degree of flexibility offered by a plugin, then using a full-blown programming language just to express data values and structure invites problems. The configuration plugin writer could legally add a System.exit(), return null, or perform any of a hundred other undesirable actions in this plugin code. In addition, we have placed a condition on our software that those configuring it be proficient in the GPL.


Domain Specific Languages

The XText tool allows you to create your own programming language, a Domain Specific Language (DSL). You do this by specifying your language's grammar, then asking XText to generate a supporting model and editor.

The advantage of a DSL over a GPL is the higher degree of abstraction, or closeness of the language to the problem space. Information expressed in the DSL is more concise and readable than the same information expressed in a GPL. If the DSL's grammar is designed thoughtfully, anyone familiar with the problem space will stand a fair chance of understanding a script written in the DSL on first sight. Furthermore, a DSL restricts the scope of what can be expressed to just constructs relevant to the domain space. This is a good thing. Narrowed down options make the language easier to learn and minimize the possibility of unexpected or harmful behavior.


configuration dsl


So there it is. A readable script written in a language dedicated to the configuration of our logistics system. But there is more. The editor used to enter this script is not a plain text editor; the editor is dedicated to the DSL, generated from the language grammar definition, and this is where the power of using a DSL really kicks in.

DSL Grammar

We define the syntax of our language (in this case using the XText language development framework) using the grammar definition language in four lines as follows:


dsl grammar


A full description of how grammars are declared in XText is outside of the remit of this article, but to help make sense of this grammar its meaning can be paraphrased as follows:

(Line 1) The model consists of a list of config elements and (Line 2) a config element is either a Staff or a Notification declaration. (Line 3) Staff declarations start with the literals Staff id followed by an identifier, then name, then a string, then email, then another string. (Line 4) Notification declarations start with Notify, followed by a staff id, then when, then either cargo or security, then becomes, then either unavailable or restricted.
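For readers without the grammar image, the four rules sketched from the paraphrase above might look like the following in XText's grammar language. This is a reconstruction, so the exact rule and attribute names in the original may differ:

```
Model:
    elements+=ConfigElement*;

ConfigElement:
    Staff | Notification;

Staff:
    'Staff' 'id' name=ID 'name' fullName=STRING 'email' email=STRING;

Notification:
    'Notify' staff=[Staff] 'when' source=('cargo'|'security')
        'becomes' event=('unavailable'|'restricted');
```

Note that staff=[Staff] declares a cross-reference, which is what lets the generated editor check that a notification refers to a previously declared staff member.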

This grammar is all that is needed to generate an editor which enforces the basic structural validation we need (in practice you would probably include validation of single fields such as the email address in the grammar). From the grammar definition the editor deduces how to validate the configuration, issuing warnings and errors.

The red underline, the error marker indication and the tooltip come for free. The generated editor also offers code assist:


completion suggestions


Not bad for four lines of grammar code; and hence our software development stays fantastically agile. We can quickly drop in a new construct or a new validation rule as the need arises without burning up weeks of work re-jigging, re-testing and re-documenting a GUI. But not only does the user benefit from the text editor being aware of the structure of what is being entered, the software development benefits from the grammar definition forming the central definition of all configuration data and its constraints. Our software is then free to focus on its main line of business without concerning itself with validating configuration values.

Along with the editor, an Eclipse Modeling Framework (EMF) model is derived from the grammar. This EMF model loosely equates to the DOM in the XML case above. It is via this EMF model that our DSL-configured software accesses its configuration values.

Summary of Configuration Input Types and their Features


summary of configuration input types



We want to make our software configurable in a way which is easy for the users to configure, encourages community knowledge about its configuration, guides the user through the configuration process as much as possible, allows easy comparison between versions, gives an indication of the impact of a change, and all without harming agility. Using a DSL for configuration yields all of these benefits and with a little care in grammar design can result in a configuration script which reads close to English.


Beck, K. (2000) Extreme Programming Explained, Addison-Wesley.

Steinberg, D. et al. EMF: Eclipse Modeling Framework, Addison-Wesley.

5 Ways Objects Can Communicate With Each Other, Heading Towards Decoupling

Way 1. Simple method call
class diagram for hotel


Object A calls a method on object B. This is clearly the simplest type of communication between two objects but is also the way which results in the highest coupling. Object A's class has a dependency upon object B's class. Wherever you try to take object A's class, object B's class (and all of its dependencies) are coming with it.
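As a minimal sketch of Way 1 (the Hotel and BookingService names are invented for illustration):

```java
// Way 1: object A holds a direct reference to object B's class and calls it.
public class Way1 {

    static class BookingService { // object B's class
        String book(String room) { return "booked " + room; }
    }

    static class Hotel { // object A's class: depends directly on BookingService
        private final BookingService service = new BookingService();

        String reserve(String room) {
            return service.book(room); // simple method call
        }
    }

    public static void main(String[] args) {
        System.out.println(new Hotel().reserve("101")); // booked 101
    }
}
```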

Way 2. Decouple the callee from the caller
class diagram for hotel


Object A's class declares an interface and calls a method on that interface. Object B's class implements that interface. This is a step in the right direction as object A's class has no dependency on object B's class. However, something else has to create object B and introduce it to object A for it to call. So we have created the need for an additional class which has a dependency upon object B's class. We have also created a dependency from B to A. However, these can be a small price to pay if we are serious about taking object A's class off to other projects.
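Way 2 as a sketch, with the same invented names. The wiring in main is the "something else" the text mentions: it alone depends on both classes.

```java
// Way 2: A declares the interface it needs; B implements it; a third piece
// of code wires them together.
public class Way2 {

    interface Bookable { // declared alongside object A's class
        String book(String room);
    }

    static class Hotel { // object A: knows only the interface
        private final Bookable service;

        Hotel(Bookable service) { this.service = service; }

        String reserve(String room) { return service.book(room); }
    }

    static class BookingService implements Bookable { // object B: depends on A's interface
        public String book(String room) { return "booked " + room; }
    }

    public static void main(String[] args) {
        // The wiring code: the only place depending on both classes.
        Hotel hotel = new Hotel(new BookingService());
        System.out.println(hotel.reserve("101")); // booked 101
    }
}
```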

Way 3. Use an Adaptor
class diagram for hotel


Object A's class declares an interface and calls a method on that interface. An adaptor class implements the interface and wraps object B, forwarding calls to it. This frees up object B's class from being dependent on object A's class. Now we are getting closer to some real decoupling. This is particularly useful if object B's class is a third-party class which we have no control over.
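Way 3 as a sketch; here ThirdPartyBooking stands in for a class we cannot modify, and the adaptor forwards calls to it:

```java
// Way 3: an adaptor implements A's interface and wraps B, so neither A nor B
// depends on the other.
public class Way3 {

    interface Bookable {
        String book(String room);
    }

    static class Hotel { // object A
        private final Bookable service;
        Hotel(Bookable service) { this.service = service; }
        String reserve(String room) { return service.book(room); }
    }

    static class ThirdPartyBooking { // object B: no knowledge of Bookable
        String makeReservation(String roomNumber) { return "booked " + roomNumber; }
    }

    static class BookingAdaptor implements Bookable { // wraps B, forwards calls
        private final ThirdPartyBooking wrapped = new ThirdPartyBooking();
        public String book(String room) { return wrapped.makeReservation(room); }
    }

    public static void main(String[] args) {
        System.out.println(new Hotel(new BookingAdaptor()).reserve("101")); // booked 101
    }
}
```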

Way 4. Dependency Injection

Dependency injection is used to find, create and call object B. This amounts to deferring until runtime the decision of how object A will talk to object B. This way certainly feels like it has the lowest coupling, but in reality it just shifts the coupling problem into the wiring realm. At least before, we could rely on the compiler to ensure that there was a concrete object on the other end of each call, and furthermore we had the convenience of the development tools to help us unpick the interaction between objects.
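A hand-rolled sketch of the trade-off (not any particular DI container): the binding is resolved from a string at runtime, which is exactly what the compiler can no longer check for us.

```java
// Way 4: a minimal stand-in for a DI container. The implementation class is
// resolved from a string the way a container resolves wiring from
// configuration; a typo in that string only fails at runtime.
public class Way4 {

    interface Bookable {
        String book(String room);
    }

    public static class BookingService implements Bookable {
        public String book(String room) { return "booked " + room; }
    }

    static class Hotel { // object A: receives its dependency, never creates it
        private final Bookable service;
        Hotel(Bookable service) { this.service = service; } // constructor injection
        String reserve(String room) { return service.book(room); }
    }

    // Stand-in for the container: resolve a Bookable from a class name.
    static Bookable resolve(String className) throws Exception {
        return (Bookable) Class.forName(className)
                .getDeclaredConstructor().newInstance();
    }

    public static void main(String[] args) throws Exception {
        // In a real container this string would come from configuration.
        Hotel hotel = new Hotel(resolve(BookingService.class.getName()));
        System.out.println(hotel.reserve("101")); // booked 101
    }
}
```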

Way 5. Chain of command pattern

The chain of command pattern allows object A effectively to ask, "Does anyone know how to handle this call?" Object B, which is listening out for these cries for help, picks up the message and figures out for itself whether it is able to respond. This approach does mean that object A has to be ready for the outcome that nobody is able to respond; however, it buys us great flexibility in how the responder is implemented.


class diagram for hotel


Chain of command, way 5, is the decoupling winner and here's an example to help explain why. Let object A be a raster image file viewer, with responsibilities for allowing the user to pick the file to open, and zoom in and out on the image as it is displayed. Let object B be a loader which has the responsibility of opening a gif file and returning an array of colored pixels. Our aim is to avoid tying object A's class to object B's class because object B's class uses a third party library. Additionally, object A doesn't want to know about how the image file is interpreted, or even if it is a gif, jpg, png or whatever. In this example object B, or more likely a wrapper of object B, will declare a method which equips it to respond to any requests to open an image file. The method will respond with an array of pixels if the file is of a format it recognizes, or respond with null if it does not recognize the format. The framework then simply asks handlers in turn until one provides a non-null response.
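The image-loader example can be sketched as a chain of handlers. GifLoader and PngLoader are hypothetical; each returns pixels if it recognizes the file and null otherwise, and the framework asks each in turn:

```java
// Way 5: chain of command. The "framework" asks handlers in turn until one
// provides a non-null response; the caller must cope with no-one responding.
import java.util.List;

public class ImageLoaderChain {

    interface ImageLoader {
        int[] load(String fileName); // null means "not my format"
    }

    static class GifLoader implements ImageLoader {
        public int[] load(String fileName) {
            return fileName.endsWith(".gif") ? new int[] {0xFF0000} : null;
        }
    }

    static class PngLoader implements ImageLoader {
        public int[] load(String fileName) {
            return fileName.endsWith(".png") ? new int[] {0x00FF00} : null;
        }
    }

    // Ask each handler in turn until one recognizes the format.
    static int[] open(List<ImageLoader> loaders, String fileName) {
        for (ImageLoader loader : loaders) {
            int[] pixels = loader.load(fileName);
            if (pixels != null) return pixels;
        }
        return null; // nobody responded
    }

    public static void main(String[] args) {
        List<ImageLoader> chain = List.of(new GifLoader(), new PngLoader());
        System.out.println(open(chain, "cat.png") != null);  // true
        System.out.println(open(chain, "cat.tiff") != null); // false
    }
}
```

Adding support for another format is then just one more handler class appended to the chain, with no change to the viewer.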


With this framework in place we are now free to slide in more image loaders with the addition of just one more handler class. And furthermore, on the source end of the call, we can add other classes to not just view the images, but print them, edit them or manipulate them in any other way we choose.


In conclusion, we can see that decoupling can be achieved and yield flexibility, but this does not mean it is appropriate for every call from one object to another. The best thing to do is start with straight method calls, but keep cohesion in mind. Then if at a later stage it becomes necessary to swap in and out different objects it won't be too hard to extract an interface and put in place a decoupling mechanism.


P: +44 20 8123 2318


AgileJ Ltd 2nd Floor St James House 9-15 St James Road Surbiton Surrey KT6 4QH United Kingdom