9.11 Guidance for successful Release Management
What follows is a set of practical suggestions to make Release Management easier to implement and control. The suggestions are divided according to the ITIL process that is most affected; this demonstrates the close dependency of one process on another.
The best piece of advice, however, is to use the principles of Release Management in the implementation of Release Management itself, i.e. to make all Release Management procedures subject to controlled Change, to keep them under Configuration Management, and to bundle Changes to the procedures into planned 'Releases' with accompanying training and support documentation.
- Automate software and hardware auditing, and compare the reality found with the information in the Configuration Management Database (see the audit sketch after this list).
- Enforce strict record-keeping and controls on software-protection devices ('dongles') because they are valuable assets and often irreplaceable. Keep spares in case of emergencies.
- Minimise the number of variants (both hardware and software) that exist in the live environment. This makes support much easier and more reliable. Keep an officially approved hardware and software list.
- Provide mechanisms for Users to determine which software versions are installed and what hardware is present. In other words, set up a 'Help About' function for the workstation as a whole, not just for individual applications.
- Check critical files for their integrity at start-up, for example using checksums (see the checksum sketch after this list).
- Avoid continuing to pay maintenance on hardware or software that has been made redundant by a subsequent rollout. In addition, it may be possible to use the audit services of Configuration Management to check that no illegal copies of software are installed; this can be performed both as part of a distribution and via ad-hoc audits, and requires a fair degree of automation to do effectively.
- Enforce strict desktop control via policies within the operating system to reduce the chance of end Users changing the target platforms. Note that software developers typically need more open access, or even two workstations, to do their work.
- Reduce the number of officially supported variations of target platform within an organisation.
- Limit, through software controls, the Changes that Users can make to the configuration of their PCs.
- Hold a sequence of cutover planning meetings with all parties involved in the development, implementation and support of proposed Changes.
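As an illustration of the automated auditing suggestion above, the following sketch (in Python) compares the software actually found on a workstation with the record held for it in the CMDB. It is only an outline under assumed conditions: the CMDB export format, the audit file format and the file paths are illustrative, and a real implementation would use the organisation's own audit and CMDB tooling.

"""Sketch: compare audited software on a workstation with its CMDB record.

Assumptions (not from the source text): the CMDB record is available as a
JSON file mapping package name to version, and the local audit is a simple
'name version' listing produced by an inventory tool.
"""
import json
import sys


def load_cmdb_record(path):
    """Load the expected software list for this workstation from a CMDB export."""
    with open(path) as f:
        return json.load(f)  # e.g. {"office-suite": "5.2", "antivirus": "9.1"}


def load_audit(path):
    """Load the audited reality: one 'name version' pair per line."""
    found = {}
    with open(path) as f:
        for line in f:
            name, _, version = line.strip().partition(" ")
            if name:
                found[name] = version
    return found


def compare(expected, found):
    """Report discrepancies between the CMDB record and the audited workstation."""
    for name, version in expected.items():
        if name not in found:
            print(f"MISSING : {name} {version} recorded in CMDB but not installed")
        elif found[name] != version:
            print(f"MISMATCH: {name} CMDB={version} installed={found[name]}")
    for name in found:
        if name not in expected:
            print(f"EXTRA   : {name} installed but not recorded in CMDB")


if __name__ == "__main__":
    compare(load_cmdb_record(sys.argv[1]), load_audit(sys.argv[2]))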
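The earlier suggestion to check critical files at start-up could be implemented along the lines of the sketch below: a manifest of expected checksums is produced at build time and verified before the application (or workstation) continues. The manifest format, file names and choice of SHA-256 are assumptions for illustration.

"""Sketch: verify critical files against a checksum manifest at start-up.

The manifest format ('<sha256>  <path>' per line) and the choice of SHA-256
are illustrative assumptions, not taken from the source text.
"""
import hashlib
import sys


def sha256_of(path):
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()


def verify(manifest_path):
    """Return True only if every file listed in the manifest matches its checksum."""
    ok = True
    with open(manifest_path) as manifest:
        for line in manifest:
            expected, _, path = line.strip().partition("  ")
            if not path:
                continue
            try:
                actual = sha256_of(path)
            except OSError:
                print(f"MISSING  {path}")
                ok = False
                continue
            if actual != expected:
                print(f"CORRUPT  {path}")
                ok = False
    return ok


if __name__ == "__main__":
    sys.exit(0 if verify(sys.argv[1]) else 1)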
9.11.3 Release Management
- Ensure that the software delivered to the live environment can be maintained from the saved source code, using controlled build management procedures with automated sequences of actions and strict version control of software source files (see the build sketch after this list).
- Pilot new Changes with a 'model office' in the live environment - undertaken by a small number of real Users for a week or two after their training.
- Automatically detect the need for software updates at start-up, and initiate them as required; this should eliminate the risk of running out-of-date versions of software (see the update sketch after this list).
- Automate the building, distribution and implementation of most or all of the software installed on workstations. Ideally, this should be driven from a centrally held 'model' to give the exact specification of the software to install.
- Have permanent build-machines for specific platforms that can be allocated to projects and groups of similar applications.
- Have representative and appropriate test environments that replicate the live hardware and software environments as closely as possible.
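To make the controlled build suggestion above more concrete, the sketch below rebuilds a Release from an exact tagged revision held in version control and records the checksum of the artefact produced, so that what is live can always be traced back to, and rebuilt from, known sources. The use of Git, the './build.sh' script and the file names are assumptions for illustration; any version control system and build tool could be substituted.

"""Sketch: a controlled, repeatable build driven from version control.

Assumptions (illustrative only): sources live in a Git repository, the build
itself is performed by a './build.sh' script that produces 'dist/app.tar.gz',
and a plain text file serves as the build record.
"""
import hashlib
import subprocess
from datetime import datetime, timezone


def run(cmd, cwd=None):
    """Run a command, echoing it first, and fail loudly if it does not succeed."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, cwd=cwd, check=True)


def controlled_build(repo_url, tag):
    """Check out an exact tagged revision, build it, and record what was built."""
    workdir = f"build-{tag}"
    run(["git", "clone", "--branch", tag, "--depth", "1", repo_url, workdir])
    run(["./build.sh"], cwd=workdir)              # assumed build script
    artefact = f"{workdir}/dist/app.tar.gz"       # assumed build output
    with open(artefact, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    with open("build-record.txt", "a") as record:  # simple audit trail
        stamp = datetime.now(timezone.utc).isoformat()
        record.write(f"{stamp} {repo_url} {tag} {digest}\n")
    return artefact, digest


if __name__ == "__main__":
    controlled_build("https://example.org/app.git", "release-1.4.0")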
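The two suggestions above on detecting updates at start-up and driving installation from a centrally held 'model' could be combined along the lines of the following sketch. The model location, its JSON format and the installer command are illustrative assumptions; in practice a software distribution tool would normally fill this role.

"""Sketch: at start-up, compare locally installed versions with a central
'model' and initiate any updates that are required.

The model format (JSON mapping package to required version), its location on
a central file server and the install command are assumptions for illustration.
"""
import json
import subprocess

MODEL_PATH = r"\\fileserver\release\workstation-model.json"  # assumed central model
LOCAL_STATE = "installed-versions.json"                      # assumed local record


def load(path):
    with open(path) as f:
        return json.load(f)


def install(package, version):
    """Placeholder: hand the real work to the organisation's installer tooling."""
    subprocess.run(["installer", package, version], check=True)  # assumed command


def update_if_required():
    """Bring the workstation into line with the centrally held model."""
    model = load(MODEL_PATH)
    installed = load(LOCAL_STATE)
    for package, required in model.items():
        if installed.get(package) != required:
            print(f"updating {package}: {installed.get(package)} -> {required}")
            install(package, required)
            installed[package] = required
    with open(LOCAL_STATE, "w") as f:
        json.dump(installed, f, indent=2)


if __name__ == "__main__":
    update_if_required()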
9.11.4 Application design issues
The design of the applications themselves can affect the task of Release Management. The following suggestions should be considered when reviewing an application's design:
- The positioning of the application software, locally or centrally (see the next section), needs to be controlled.
- If it is important that software updates are installed on time, it may be acceptable to build 'time-outs' into the application that prevent its use beyond a certain date. This is particularly useful for software that is released into an uncontrolled environment (e.g. outside the organisation); a sketch follows this list.
- Where parts of the application are split between different computers (e.g. a client-server arrangement, or an n-tier approach), it is important that all software components be consistent. It may be possible to build runtime checks into the application to verify that interfaces are compatible, e.g. raising an error if an out-of-date version of a component on another platform is detected (see the version-check sketch after this list).
- Consider all options for the positioning of data, for example distributed data versus distributed applications. It may be necessary, for instance, to distribute and/or replicate centrally held data and possibly receive Changes back. A simpler scenario would be the distribution of static control data - e.g. look-up tables or a postcode database.
- Where an application must be suitable for international use (covering different languages, locations and time zones), it may be possible to design it so that most of the files to be distributed are common and only a few differ for each country, the ideal being that the country-specific element is reduced to a small number of configuration settings.
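As a minimal sketch of the 'time-out' idea above, an application could refuse to start once a date fixed at build time has passed. The cut-off date and the wording are illustrative; a production version would normally allow a grace period and point Users towards the current Release.

"""Sketch: a build-time expiry date that stops an out-of-date Release being used.

The cut-off date and the message wording are illustrative assumptions.
"""
import sys
from datetime import date

# Set when the Release is built; after this date this version must not be used.
EXPIRY_DATE = date(2026, 6, 30)


def enforce_time_out():
    """Refuse to run if this version of the application has expired."""
    if date.today() > EXPIRY_DATE:
        print("This version has expired; please install the current Release.")
        sys.exit(1)


if __name__ == "__main__":
    enforce_time_out()
    print("Application starting...")  # normal start-up continues from here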
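The runtime compatibility check suggested above might, for a simple client-server split, look like the sketch below: at start-up the client asks the server component for its interface version and refuses to continue on a mismatch. The server URL, the '/version' endpoint and the plain-text response are assumptions for illustration.

"""Sketch: a client verifying at start-up that the server component exposes a
compatible interface version, and raising an error otherwise.

The server URL, the '/version' endpoint and the plain-text response format are
illustrative assumptions.
"""
from urllib.request import urlopen

# The interface version this client was built against (assumed numbering scheme).
CLIENT_INTERFACE_VERSION = "3.2"
SERVER_VERSION_URL = "http://appserver.example.org/version"  # assumed endpoint


class InterfaceMismatchError(RuntimeError):
    """Raised when client and server components are not at compatible versions."""


def check_interface_compatibility():
    """Compare the server's reported interface version with the client's own."""
    with urlopen(SERVER_VERSION_URL, timeout=5) as response:
        server_version = response.read().decode().strip()
    if server_version != CLIENT_INTERFACE_VERSION:
        raise InterfaceMismatchError(
            f"client expects interface {CLIENT_INTERFACE_VERSION}, "
            f"server reports {server_version}"
        )


if __name__ == "__main__":
    check_interface_compatibility()  # fail fast before the application proceeds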
9.11.5 The positioning of software: what to put where
Consider minimising the amount of software installed on client workstations by deploying the executable software on servers instead. This will reduce the effort required to distribute software changes; however, it may increase the network traffic as a result. An extreme example of minimising the amount of software installed on client workstations is the 'thin client' model described in Paragraph 9.10.2.
Many applications require some files to be installed on every client PC, typically in shared directories. These runtime support files need to be managed very carefully, as they may well need to be updated along with new versions of the applications that use them. This can be particularly hard to manage where they are shared by several applications.
There is a line of thought that states that, given the rapidly decreasing prices of hard disks and their increasing capacity, you might as well install everything locally on each workstation (assuming, of course, that you have tools for remote distribution and workstation management). However, some organisations successfully operate in such a way that any member of staff can use any PC. This has many benefits; for example, it makes the provision of 'hot-swap' equipment for faulty workstations very simple (as they are all identical). This approach can be achieved through a combination of techniques, for example:
- no data being stored locally on workstations, only on central fileservers
- dynamically loading as much software as possible over the LAN
- the use of clever 'caching' to reduce the demand on the LAN.
Even where sophisticated software distribution utilities are deployed, it may be simpler to keep a relatively small number of code servers up-to-date than to refresh code on thousands of PCs in a synchronised manner. There is no blanket answer to this problem, but it is recommended that organisations consider keeping the more volatile application code on their central servers. Although you may not be able to change a purchased application, many do provide for being installed in this way, and software developers should be strongly encouraged to consider these issues for any new applications.
The 'bottom line' is that an organisation should consider its approach and state that clearly as part of its Release policy.