
4.4  Implementing the Process

4.4.1 Produce a Service Catalogue
4.4.2 Expectation Management
4.4.3 Plan the SLA structure
4.4.4 Establish Service Level Requirements and Draft SLA
4.4.5 Wording of SLAs
4.4.6 Seek agreement
4.4.7 Establish monitoring capabilities
4.4.8 Review Underpinning contracts and Operational Level Agreements
4.4.9 Define Reporting and Review Procedures
4.4.10 Publicise the existence of SLAs


When planning has been completed, the following activities must be undertaken to implement SLM.

 

4.4.1  Produce a Service Catalogue

Over the years, organisations' IT Infrastructures have grown and developed, and there may not be a clear picture of all the Services currently being provided and the Customers of each. In order to establish an accurate picture, it is recommended that an IT Service Catalogue is produced.

Such a catalogue should list all of the services being provided, a summary of their characteristics and details of the Customers and maintainers of each. A degree of 'detective work' may be needed to compile this list and agree it with the Customers (sifting through old documentation, searching program libraries, talking with IT staff and Customers, looking at procurement records, and talking with suppliers and contractors, etc.). If a CMDB or any sort of Asset database exists, these may be valuable sources of information.

Hint

Service Desk Incident records are a good pointer to those old systems that everyone but the Users has forgotten about.

What is a service?

This question is not as easy to answer as it may first appear, and many organisations have failed to come up with a clear definition in an IT context. IT staff often confuse a 'service' as perceived by the Customer with an IT system. In many cases one 'service' can be made up of other 'services' (and so on), which are themselves made up of one or more IT systems within an overall Infrastructure including operations, networks, applications, etc. A good starting point is often to ask Customers which IT services they use and how those services map onto their business processes. Customers often have greater clarity about what they believe a service to be.

One possible definition may be: 'One or more IT systems which enable a business process'.

To avoid confusion, it may be a good idea to define a hierarchy of services within the Service Catalogue, by qualifying exactly what type of service is meant e.g. business service (that which is seen by the Customer), Infrastructure services, network service, application service (all invisible to the Customer - but essential to the delivery of Customer services).

When completed, the catalogue may initially consist of a matrix, table or spreadsheet. Some organisations integrate and maintain their Service Catalogue as part of their Configuration Management Database (CMDB). By defining each service as a Configuration item (CI) and, where appropriate, relating these to form a service hierarchy, the organisation is able to relate events such as Incidents and RFCs to the services affected, thus providing the basis for service monitoring via an integrated tool (e.g. 'list or give the number of Incidents affecting this particular service'). This can work well and is recommended.
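The idea of holding each service as a CI and querying Incidents against the service hierarchy can be sketched in a few lines. This is a minimal illustration, not any particular CMDB product; the class names, service names and Incident records are all invented for the example.

```python
# A sketch of a Service Catalogue held as Configuration Items (CIs), with
# parent/child links forming a service hierarchy, so that Incidents logged
# against any underpinning service can be related to the business service.
from dataclasses import dataclass, field

@dataclass
class ServiceCI:
    name: str
    service_type: str                      # e.g. "business", "application", "network"
    children: list = field(default_factory=list)

    def all_services(self):
        """Yield this service and every service beneath it in the hierarchy."""
        yield self
        for child in self.children:
            yield from child.all_services()

def incidents_affecting(service, incidents):
    """Return the Incidents logged against a service or any service it depends on."""
    names = {s.name for s in service.all_services()}
    return [i for i in incidents if i["service"] in names]

# Hypothetical hierarchy: a business service built on an application and a network CI
email = ServiceCI("E-mail", "business", children=[
    ServiceCI("Mail application", "application"),
    ServiceCI("Campus LAN", "network"),
])

incidents = [
    {"id": 1, "service": "Campus LAN"},
    {"id": 2, "service": "Mail application"},
    {"id": 3, "service": "Payroll"},       # unrelated service
]

print(len(incidents_affecting(email, incidents)))  # 2
```

With such relationships in place, the query 'give the number of Incidents affecting this particular service' becomes a simple traversal of the hierarchy.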

The Service Catalogue can also be used for other Service Management purposes (e.g. for performing a Business Impact Analysis (BIA) as part of IT Service Continuity Planning, or as a starting place for Workload Management, part of Capacity Management). The cost and effort of producing the catalogue are therefore easily justifiable. If done in conjunction with prioritisation of the BIA, it is possible to ensure that the most important services are covered first. An example of a simple Service Catalogue that can be used as a starting point is given in Annex 4B.

4.4.2  Expectation Management

From the outset, it is wise to try to manage Customers' expectations. This means setting proper expectations in the first place, and putting a systematic process in place to manage expectations going forward, as satisfaction = perception - expectation. SLAs are just documents and in themselves do not materially alter the quality of service being provided (though they may affect behaviour and help engender an appropriate service culture, which can have an immediate beneficial effect, and make longer-term improvements possible). A degree of patience is therefore needed and should be built into expectations.

Where charges are being made for the services provided, this should modify Customer demands (Customers can have whatever they can cost justify - providing it fits within agreed corporate strategy - and have authorised budget for, but no more!). Where direct charges are not made, the support of senior business managers should be enlisted to ensure that excessive or unrealistic demands are not placed upon the IT provider by any individual Customer group.

4.4.3  Plan the SLA structure

Using the catalogue as an aid, Service Level Management must plan the most appropriate SLA structure to ensure that all services and all Customers are covered in a manner best suited to the organisation's needs. There are a number of potential options, including:

Service based

Where an SLA covers one service, for all the Customers of that service. For example an SLA may be established for an organisation's E-mail service, covering all of the Customers of that service.

This may appear fairly straightforward. However, difficulties may arise if the specific requirements of different Customers vary for the same service, or if characteristics of the IT Infrastructure mean that different service levels are inevitable (e.g. head office staff may be connected via a high-speed LAN while local offices may have to use a lower speed leased line). In such cases, separate targets may be needed within the one agreement. Difficulties may also arise in determining who should be the signatories to such an agreement.

Customer based

An agreement with an individual Customer group, covering all the services they use. For example, agreements may be reached with an organisation's Finance Department covering, say, the Finance System, the Accounting System, the Payroll System, the Billing System, the Procurement System and any other IT systems that they use.

Customers often prefer such an agreement, as all of their requirements are covered in a single document. Only one signatory is normally required, which simplifies this issue.

Hints and tips

A combination of these structures might be appropriate, provided all services and Customers are covered, with no overlap or duplication.

Multi-level SLAs

Some organisations have chosen to adopt a multi-level SLA structure. For example, a three-layer structure as follows:

1)  Corporate Level: covering all the generic SLM issues appropriate to every Customer throughout the organisation. These issues are likely to be less volatile and so updates are less frequently required.

2)  Customer Level: covering all SLM issues relevant to the particular Customer group, regardless of the service being used.

3)  Service Level: covering all SLM issues relevant to the specific service, in relation to this specific Customer group (one for each service covered by the SLA).

Figure 4.3 - Multi-level SLAs

As shown in Figure 4.3, such a structure allows SLAs to be kept to a manageable size, avoids unnecessary duplication, and reduces the need for frequent updates.
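One way to model the three-layer structure is as a set of layered documents, with the more specific layers adding to (or, where they address the same issue, taking precedence over) the more generic ones. The sketch below is illustrative only; the keys, values and layer contents are invented.

```python
# A sketch of multi-level SLA resolution: corporate defaults are refined by
# customer-level entries, which are refined in turn by service-level entries.
corporate = {"service_hours": "08:00-18:00", "incident_response": "4h"}
customer_finance = {"service_hours": "07:00-19:00"}     # Customer-level variation
service_payroll = {"incident_response": "1h"}           # Service-level variation

def effective_sla(corporate, customer, service):
    """Merge the three layers, most specific last."""
    merged = dict(corporate)
    merged.update(customer)
    merged.update(service)
    return merged

sla = effective_sla(corporate, customer_finance, service_payroll)
print(sla)  # {'service_hours': '07:00-19:00', 'incident_response': '1h'}
```

Because each layer is held once, a change to a corporate-level issue is made in one place rather than in every SLA, which is the duplication saving the structure is intended to deliver.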

Example

The UK Employment Service successfully used a three-level SLA format. They used the titles: Framework, Standard, Specific to describe their 3 levels - but the concept was basically the same as described above.

4.4.4  Establish Service Level Requirements and Draft SLA

It is difficult to give guidance on which of these should come first, since it is often an iterative process. Once the SLA structure has been agreed, a first SLA must be drafted. It is advisable to involve Customers from the outset but, rather than starting with a blank sheet, it may be better to produce a first outline draft as a starting point for more detailed and in-depth discussion. Be careful, though, not to go too far and appear to be presenting the Customer with a fait accompli.

It can be difficult to draw out requirements, as the business may not know what it wants - especially if it has not been asked before - and may need help in understanding and defining its needs. Be aware that the requirements initially expressed may not be those ultimately agreed - they are particularly likely to change where Charging is in place. Several iterations of negotiation may be required before a balance is struck between what is sought and what is achievable and affordable.

Many organisations have found it valuable to produce a pro-forma that can be used as a starting point for all SLAs. The pro-forma can often be developed alongside the pilot SLA. Guidance on the items to be included in an SLA is given in Section 4.6.

Hints and tips

Make roles and responsibilities a part of the SLA. Consider three perspectives: the IT provider, the IT Customer and the actual Users.

4.4.5  Wording of SLAs

The wording of SLAs should be clear and concise and leave no room for ambiguity. There is normally no need for agreements to be couched in legal terminology, and plain language aids a common understanding. It is often helpful to have an independent person, who has not been involved with the drafting, to do a final read-through. This often throws up potential ambiguities and difficulties that can then be addressed and clarified.

It is also worth remembering that SLAs may have to cover services offered internationally. In such cases the SLA may have to be translated into several languages. Remember also that an SLA drafted in a single language may have to be reviewed for suitability in several different parts of the world (i.e. a Version drafted in Australia may have to be reviewed for suitability in the USA or the UK - and differences in terminology, style and culture must be taken into account).

4.4.6  Seek agreement

Using the draft agreement as a basis, negotiations must be held with the Customer(s), or Customer representatives to finalise the contents of the SLA and the initial service level targets, and with the Service providers to ensure that these are achievable. Guidance on general negotiating techniques is included in the ITIL Business and Management Skills book.

One problem that might be encountered is identifying a suitable Customer with whom to negotiate. Who 'owns' the service? In some cases this may be obvious, and a single Customer manager is willing to act as the signatory to the agreement. In other cases, it might take quite a bit of negotiating or cajoling to find a representative 'volunteer' (beware that volunteers often want to express their own personal view rather than represent a general consensus), or it may be necessary to get all Customers to sign (messy!).

If Customer representatives exist who are able genuinely to represent the views of the Customer community, because they frequently meet with a wide selection of Customers, this is ideal. Unfortunately, all too often so-called representatives are head-office based and seldom come into contact with genuine service Customers. In the worst case, the Service Level Manager may have to undertake his or her own programme of discussions and meetings with Customers to ensure that true requirements are identified.

Anecdote

When negotiating the service and support hours for a large system, an organisation found a discrepancy in the required time of usage between Head Office and the field-office Customers. Head Office (with a limited User population) wanted Service Hours covering 8am to 6pm, whereas the field (with at least 20 times the User population) stated that starting an hour earlier would be better - but all offices closed to the public by 4pm at the latest and so wouldn't require a service much beyond this. Head Office won the 'political' argument and so the 8am to 6pm band was set. When the service came to be used (and hence monitored) it was found that service extensions were usually asked for by the field to cover the extra hour in the morning, while actual usage figures showed that the system had not been accessed after 5pm, except on very rare occasions. The Service Level Manager was blamed by the IT staff for having to cover a late shift, and by the Customer Representative for charging for a service that was not used (i.e. staff and running costs).

Hints and tips

Care should be taken when opening discussions on service levels for the first time, as it is likely that 'current issues' (the failure that occurred yesterday!) or long-standing grievances (that old printer that we have been trying to get replaced for ages!) are likely to be aired at the outset. Important though these may be, they must not be allowed to get in the way of establishing the longer-term requirements. Be aware however that it may well be necessary to address any issues raised at the outset before gaining any credibility to progress further.

If there has been no previous experience of SLM, then it is advisable to start with a pilot SLA. A decision should be made on which services/Customers to be used for the pilot. It is helpful if the selected Customer is enthusiastic and wishes to participate - perhaps because they are anxious to see improvements in service quality. The results of the initial Customer perception survey may give pointers to a suitable pilot.

Hints and tips

Don't pick an area where large Problems exist as the pilot. Try to pick an area that is likely to show some quick benefits and develop the SLM process. Nothing breeds acceptance of a new idea quicker than success.

One difficulty sometimes encountered is that staff at different levels within the Customer community may have different objectives and perceptions. For example, a senior manager may rarely use a service and may be more interested in issues such as value for money and output, whereas a junior member of staff may use the service throughout the day and may be more interested in issues such as responsiveness, usability and reliability. It is important that all of the appropriate and relevant Customer requirements, at all levels, are identified and incorporated in SLAs.

Some organisations have formed focus groups from different levels from within the Customer community to assist in successfully ensuring that all issues have been correctly addressed. This takes additional Resources, but can be well worth the effort.

The other group that has to be consulted throughout this process is the appropriate representatives from within the IT provider side (whether internal or from a third-party supplier). They need to agree that targets are realistic, achievable and affordable. If they are not, further negotiations are needed until a compromise acceptable to all parties is agreed. The views of suppliers should also be sought, and any contractual implications should be taken into account during the negotiation stages.

Where no past monitored data is available, it is advisable to leave the agreement in draft format for an initial period, until monitoring can confirm that initial targets are achievable. Targets may have to be re-negotiated in some cases. When targets have been confirmed, the SLAs must be signed.

Once the pilot has been completed and any initial difficulties overcome, then move on and gradually introduce SLAs for other services/Customers. If it is decided from the outset to go for a multi-level structure, it is likely that the corporate-level issues have to be covered for all Customers at the time of the initial pilot. It is also worth trialling the corporate issues during this pilot phase.

Hints and tips

Don't go for easy targets at the Corporate level; they may be easy to achieve but have no value in improving service quality. If the targets are set at a high enough level, however, the Corporate SLA can be used as the standard that all new services should reach.

One point to ensure is that, at the end of the drafting and negotiating process, the SLA is actually signed by the appropriate managers on the Customer and IT provider sides. This gives a firm commitment that both parties will make every attempt to meet the agreement. Generally speaking, the more senior the signatories are within their respective organisations, the stronger the message of commitment. Once an SLA is agreed, wide publicity is needed to ensure that Customers and IT providers alike are aware of its existence, and of its key targets.

It is important that the Service Desk staff are committed to the SLM process and become proactive ambassadors for SLAs, embracing the necessary service culture, as they are the first contact point for Customers' Incidents, complaints and queries. If the Service Desk Staff are not fully aware of SLAs in place and do not act upon them then Customers very quickly lose faith in SLAs.

4.4.7  Establish monitoring capabilities

Nothing should be included in an SLA unless it can be effectively monitored and measured at a commonly agreed point. The importance of this cannot be overstressed, as the inclusion of items that cannot be effectively monitored almost always results in disputes and eventual loss of faith in the SLM process. Many organisations have discovered this the 'hard way' and, as a consequence, have absorbed heavy costs, both financially and in terms of negative impacts on their culture.

Anecdote

A global network provider agreed Availability targets for the provision of a managed network service. These Availability targets were agreed at the point where the service entered the Customer's premises. However, the global network provider could only monitor and measure Availability at the point the connection left its premises. The network links were provided by a number of different national Telecommunications service providers, with widely varying Availability levels. The result was a complete mis-match between the Availability figures produced by the network provider and the Customer, with correspondingly prolonged and heated debate and argument.

Existing monitoring capabilities should be reviewed and upgraded as necessary. Ideally this should be done ahead of, or in parallel with, the drafting of SLAs, so that monitoring can be in place to assist with the validation of proposed targets.

It is essential that monitoring matches the Customer's true perception of the service. Unfortunately this is often very difficult to achieve. For example, monitoring of individual components, such as the network or server, does not guarantee that the service will be available so far as the Customer is concerned - a desktop or application failure may mean that the service cannot be used by the Customer. Without monitoring all components in the end-to-end service (which may be very difficult and costly to achieve) a true picture cannot be gained. Similarly, Users must be aware that they should report Incidents immediately to aid diagnostics, especially if performance related.

Where multiple services are delivered to a single workstation, it is probably more effective to record downtime only against the service the User was trying to access at the time (though this needs to be agreed with the Customers). Customer perception is often that, although a failure might affect more than one service, all they are bothered about is the service they cannot access at the time of the reported Incident - though this is not always true, so caution is needed.

A considerable number of organisations use their Service Desk, linked to a comprehensive CMDB, to monitor the Customer's perception of Availability. This may involve making specific Changes to Incident/Problem logging screens and require stringent compliance with Incident logging procedures. All of this needs discussion and agreement with the Availability Management function. Chapter 8 gives guidance and examples of the formulae that might be used to determine service Availability levels, and the amendments that may be needed to capture the required data.
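The kind of Availability formula Chapter 8 discusses can be illustrated with a simple percentage-of-agreed-service-time calculation, driven by downtime captured via Service Desk Incident records. The figures below are invented for the example; real agreed service times and downtime would come from the SLA and the Incident data.

```python
# A sketch of a basic service Availability calculation: the percentage of
# agreed service time during which the service was actually available,
# derived from downtime recorded against Incidents at the Service Desk.
def availability_pct(agreed_minutes, downtime_minutes):
    """Availability as a percentage of the agreed service time."""
    return 100.0 * (agreed_minutes - downtime_minutes) / agreed_minutes

# e.g. 20 working days of 10 agreed service hours, with 300 minutes lost
agreed = 20 * 10 * 60                              # 12,000 minutes agreed
print(round(availability_pct(agreed, 300), 2))     # 97.5
```

Note that this only reflects the Customer's perception if the downtime recorded is downtime of the service as the Customer sees it, which is exactly why the text stresses Incident logging discipline.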

The Service Desk is also used to monitor Incident response times and Resolution times, but once again the logging screen might need amendment to accommodate data capture, and call logging procedures may need tightening and must be strictly followed. If support is being provided by a third-party, this monitoring may also underpin supplier management.

It is essential to ensure that any Incident/Problem handling targets included in SLAs are the same as those included in Service Desk tools and used for escalation and monitoring purposes. Where organisations have failed to recognise this, and perhaps used defaults provided by the tool supplier, they have ended up monitoring something different from what was agreed in the SLAs, and are therefore unable to say whether SLA targets have been met without considerable effort to massage the data.
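A simple reconciliation of the kind argued for here can be automated: compare the targets agreed in the SLA with the escalation thresholds actually configured in the Service Desk tool, and flag any divergence. All of the Priority codes and minute values below are made up for illustration.

```python
# A consistency check: the escalation thresholds configured in the Service
# Desk tool should equal the resolution targets agreed in the SLA, not the
# tool supplier's defaults.
sla_targets = {"P1": 60, "P2": 240, "P3": 480}        # agreed targets, minutes
tool_thresholds = {"P1": 60, "P2": 240, "P3": 960}    # as configured in the tool

mismatches = {p: (sla_targets[p], tool_thresholds.get(p))
              for p in sla_targets
              if tool_thresholds.get(p) != sla_targets[p]}

print(mismatches)  # {'P3': (480, 960)} - the tool is monitoring something other than the SLA
```

Running such a check whenever either the SLA or the tool configuration changes keeps the two from silently drifting apart.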

Some amendments may be needed to support tools, to include the necessary fields so that relevant data can be captured.

Another notoriously difficult area to monitor is transaction response times (the time between sending a screen and receiving a response). Often end-to-end response times are technically very difficult to monitor. In such cases it may be appropriate to deal with this as follows:

a)  Include a statement in the SLA along the following lines 'The services covered by this SLA are designed for high-speed response and no significant delays should be encountered. If a response time delay of more than x seconds is experienced for more than y minutes this should be reported immediately to the Service Desk'.

b)  Agree and include in the SLA an acceptable target for the number of such Incidents that can be tolerated in the reporting period.

c)  Create an Incident Category 'poor response' (or similar) and ensure that any such Incidents are logged accurately and that they are related to the appropriate service.

d)  Produce regular reports of occasions where SLA transaction response time targets have been breached, and instigate investigations via Problem Management to correct the situation.

This approach not only overcomes the technical difficulties of monitoring, but also ensures that incidences of poor response are reported at the time they are occurring. This is very important as poor response is often caused by a number of interacting events, which can only be detected if they are investigated immediately (the 'smoking gun' syndrome!).
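Steps b) to d) above can be sketched as a small reporting routine: count the 'poor response' Incidents logged against a service in the reporting period and compare them with the tolerated number agreed in the SLA. The service name, category label and tolerance figure are all assumptions for the example.

```python
# A sketch of steps b)-d): Incidents categorised 'poor response' are related
# to the affected service, and the count per reporting period is compared
# with the tolerated number agreed in the SLA.
TOLERATED_PER_PERIOD = 10      # the 'y' incidents tolerated per reporting period

incidents = [
    {"category": "poor response", "service": "Order entry"},
    {"category": "poor response", "service": "Order entry"},
    {"category": "hardware",      "service": "Order entry"},
]

poor_response = [i for i in incidents
                 if i["category"] == "poor response"
                 and i["service"] == "Order entry"]

breached = len(poor_response) > TOLERATED_PER_PERIOD
print(len(poor_response), breached)  # 2 False - within the tolerated number
```

Any period in which `breached` is true would then be passed to Problem Management for investigation, as step d) describes.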

The preferred method, however, is to implement some form of automated client/server response time monitoring. Such tools are becoming increasingly available and increasingly cost-effective, and provide the ability to measure or sample actual response times, or ones very similar to those being experienced by a variety of Users.

Hints and tips

Some organisations have found that in reality 'poor response' is sometimes a problem of User perception. The User, having become used to a particular level of response over a period of time, starts complaining as soon as this is slower. Take the view that 'if the User thinks the system is slow - then it is'.

If the SLA includes targets for assessing and implementing Requests for Change (RFCs), the monitoring of targets relating to Change Management should ideally be carried out using whatever Change Management tool is in use (preferably part of an integrated Service Management support tool), and Change logging screens and escalation processes should support this.

There are a number of important 'soft' issues that cannot be monitored by mechanistic or procedural means, such as Customers' overall feelings (these need not necessarily match the 'hard' monitoring). For example, even when there have been a number of reported service failures the Customers may still feel positive about things, because they may feel satisfied that appropriate actions are being taken to improve things. Of course, the opposite may apply and Customers may feel dissatisfied with some issues (e.g. the manner of some staff on the Service Desk) when few or no SLA targets have been broken.

It is therefore recommended that attempts are made to monitor Customer perception on these soft issues. Methods of doing this include periodic questionnaires and Customer surveys, perception surveys on Incident closure, and User group meetings.

Where possible, targets should be set for these and monitored as part of the SLA (e.g. an average score of 3.5 should be achieved by the service provider on results given, based on a scoring system of 1 to 5, where 1 is poor performance and 5 is excellent). Ensure that if Users provide feedback they receive some return and demonstrate to them that their comments have been incorporated in an action plan, perhaps a Service Improvement Programme.
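The scoring target described above can be illustrated with a few lines of arithmetic: take the returned survey scores on the 1 to 5 scale and compare the average with the agreed target of 3.5. The individual scores below are invented for the example.

```python
# A sketch of a soft-issue SLA target: average Customer survey score on a
# 1 (poor) to 5 (excellent) scale, measured against an agreed target.
TARGET = 3.5
scores = [4, 3, 5, 3, 4, 2, 4]      # returned Customer questionnaires

average = sum(scores) / len(scores)
print(round(average, 2), average >= TARGET)  # 3.57 True - target met this period
```

Publishing the result alongside the 'hard' SLA figures, together with the actions taken in response, is one way of demonstrating to Users that their feedback has been incorporated into an improvement plan.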

4.4.8  Review Underpinning contracts and Operational Level Agreements

Most IT Service Providers are dependent to some extent on their own suppliers (both internal and/or external). They cannot commit to meeting SLA targets unless their own suppliers' performances underpin these targets. Contracts with external suppliers are mandatory, but many organisations have also identified the benefits of having simple agreements with internal support groups, usually referred to as OLAs. Figure 4.4 illustrates this.

Figure 4.4 - SLA Support Structure

OLAs need not be very complicated, but should set out specific back-to-back targets for support groups that underpin the targets included in SLAs. For example, if the SLA includes overall time-to-respond and time-to-fix targets for Incidents (varying by Priority level), then the OLAs should include targets for each of the elements in the support chain (targets for the Service Desk to answer calls and escalate, targets for Network Support to start to investigate and to resolve network-related errors assigned to them). In addition, overall support hours should be stipulated for all groups that underpin the required service Availability times in the SLA. If special procedures exist for contacting staff (e.g. out-of-hours telephone support), these must also be documented.

It must be understood, however, that the Incident resolution targets included in SLAs should not normally match those included in contracts or OLAs with suppliers. This is because the SLA target must include an element for every stage in the support cycle (e.g. detection time, Service Desk logging time, escalation time, referral time between groups, Service Desk review and closure time - as well as the actual time spent fixing the failure). The SLA target should cover all of this.
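This decomposition can be shown with a small worked example: the SLA resolution target has to be at least the sum of every stage in the support cycle, not just the fix target the support group signs up to in its OLA. The stage names and minute values below are illustrative only.

```python
# A worked illustration of why an SLA resolution target must exceed the bare
# fix target in an underpinning OLA or contract: it has to absorb every stage
# of the support cycle, not just the time spent fixing the failure.
stages = {
    "detection_and_logging": 10,   # minutes at the Service Desk
    "escalation_and_referral": 15,
    "fix_time_ola": 120,           # the target the support group signs up to
    "review_and_closure": 15,
}

sla_minimum = sum(stages.values())
print(sla_minimum)  # 160 - the SLA target must allow at least this, not just 120
```

Agreeing only the 120-minute fix target in the SLA would guarantee apparent breaches even when the support group meets its OLA every time.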

Before committing to SLAs, it is therefore important that existing contractual arrangements are investigated and where necessary, upgraded. This is likely to incur additional costs, which must either be absorbed by IT, or passed on to the Customer. In the latter case the Customer must agree to this, or the more relaxed targets in existing contracts should be agreed for inclusion in SLAs.

OLAs should be monitored against these targets and feedback given to the Managers of the support groups. This highlights potential problem areas, which may need to be addressed internally or by a further review of the SLA. Serious consideration should be given to introducing formal OLAs where they do not already exist.

4.4.9  Define Reporting and Review Procedures

The SLA reporting mechanisms, intervals and report formats must be defined and agreed with the Customers. The frequency and format of Service Review Meetings must also be agreed with the Customers. Regular intervals are recommended. Periodic reports should fit in with the reviewing cycle.

The SLAs themselves must be reviewed periodically (e.g. annually in line with financial cycle) to ensure that they are still current and indeed still relevant - does the SLA still fit the needs of the business and the capabilities of IT? All SLAs should be under strict Change Management control and any Changes should be reflected in an update to the Service Catalogue, if needed.

4.4.10  Publicise the existence of SLAs

Steps must be taken to advertise the existence of the new SLAs amongst the Service Desk and other support groups, with details of when they become operational. It may be helpful to extract key targets from the SLAs into tables that can be displayed in support areas, so that staff are always aware of the targets to which they are working. If support tools allow it, these targets should be included as thresholds, with automatic alerts raised when a target is threatened or actually breached. SLAs and the targets they contain must also be publicised amongst the User community, so that Users are aware of what they can expect from the services they use, and know at what point to start to express dissatisfaction.
