Friday, August 25, 2006

Making Things Work - Part the Fourth - Do things the customer's way

One of the fastest ways to alienate a customer while trying to deliver an integration project is to fail to ask how they like things done.

Unless you are dealing with a completely new site, with virtually no existing infrastructure, there will be an accepted "right way" to do things such as labelling cables and managing backups which is widely understood by the customer's IT staff. They may not have any of their normal procedures and practices documented, but they will be deeply offended if you don't follow their standards. There is little as depressing as having a customer representative come by the racks of equipment that you are busily installing in their data centre, looking disapproving and making some helpful remark such as "the management cables should all be green". Particularly if you have already installed all the cables. If the customer has a cabling standard, they will expect you to comply with it, and failure to do so may hold up acceptance and sign off. And "we didn't know" will not fly as an excuse, so don't try it.

Involve your customer in the planning of the installation, and get their commitment on the details (in writing!). This will save arguments and rework, and rework is to be avoided at all costs: rework tends to introduce errors, and is of itself expensive. If they insist on something that you know is wrong, stupid or hazardous, make sure that you explain to them in writing why whatever they want will cause a problem. If they still insist, make sure the project manager has a record of who said what and why, in the sure and certain knowledge that you will need this information when the project reaches the "allocation of blame" phase, and then do what they want. I have never seen a project stopped because a customer wanted something stupid done, though I've seen a few that should have been.

So what should you expect to need to know about the customer's preferences?

First, if you want a reference for good data centre design, try Rob Snevely's "Enterprise Data Center Design and Methodology". There are other books on the subject, but that one has always worked for me. However, whatever references you may have read, please temper their application with a big dose of common sense: no customer has a data centre that actually looks like the ones in the references.

Cabling standards. Do they use particular colours for particular purposes? How do they label cables? How do they do cable management? In my experience, if the customer has a regular contractor who does their cabling, it can be quicker and cheaper to subcontract that party to do whatever is needed for the project.
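If you do get the customer's conventions in writing, it pays to mechanise them so every label comes out the same. Here is a minimal sketch; the label format, the colour codes and the purpose names are all invented for illustration, not any real standard:

```python
# Sketch: generating cable labels under a hypothetical customer convention.
# Assumed format "<colour>-R<rack>-U<u-position>-<sequence>", e.g. "GRN-R12-U04-003".
# The format and the colour codes below are illustrative only.

COLOUR_CODES = {
    "management": "GRN",   # "the management cables should all be green"
    "production": "BLU",
    "backup": "YEL",
}

def cable_label(purpose: str, rack: int, u_position: int, sequence: int) -> str:
    """Build one label string; raises KeyError for an unknown purpose."""
    code = COLOUR_CODES[purpose]
    return f"{code}-R{rack:02d}-U{u_position:02d}-{sequence:03d}"

print(cable_label("management", 12, 4, 3))   # GRN-R12-U04-003
```

Even something this small gives you a run sheet you can hand to the cabling contractor, and a document the customer can sign off before anyone pulls a cable.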

What is the company policy for operating system installations? For firewalls and routers? For the use of encryption? If they have nothing (and that is still very common), and request that you apply "best practice", make absolutely certain that they are fully briefed about what your interpretation of best practice is before you start installing. I have wasted days, and in some extreme cases weeks, negotiating the minefield that is "security best practice" on customer sites. It's a subject everyone thinks they know about, usually because they read something in an inflight magazine or some other similarly authoritative publication. When time permits, I'll do a blog entry on idiotic security practices I have seen. For now, try to avoid getting hung up on this particular reef.

Is there a standard administration account name that they use?

Unless the backup facilities are part of your deployment, how are they going to back up the systems you are installing now? What connectivity do you need to allow for in your build? I was once handed a "design" for a large, N-tier network installation. On close inspection, I realised that there was no way to connect the new equipment to the customer's existing infrastructure: the design had connection points for the internet, and for the customer's partners, but not for their own admin staff. It had a single "console" (actually a small desktop machine which had a screen), from which everything in the build was supposed to be controlled. I went to the manager responsible for the job and said "there's no way to connect this lot to the customer's network". And he said "that feature wasn't listed in their Request For Proposal", and took the position that, since they hadn't asked for it, it wasn't our problem to provide the functionality.

I could see that this was going to be a problem, but reasoned argument made no headway at the time, so the infrastructure engineering team went ahead and installed everything according to the "design" (which had numerous other defects, which we had to fix as we went along). A few months later (it was a big and complex build), the equipment was on the floor of the customer's data centre, and they realised two things. The first was that the only way to administer the new equipment was to walk into the data centre operator's room, sit down at the "console" and work from there: it was unreachable from anywhere else. And the second thing was that they couldn't print any reports from the new system, because it had no connectivity to their printers.

The technically correct solution in my view was to redesign the whole thing to integrate tidily and securely into the existing customer infrastructure, but no one (including the customer) was prepared to contemplate that option. They had a deadline to get the new systems into production, and the project was already running late. Instead, we had to design and build an ugly chunk of bridging network to connect the new systems to their network. This cost the customer more money (it was unquestionably a variation to the scope of works), complicated the build, extended the test phase and looked like what it was: a belated afterthought. One phone call at the beginning of the project could have headed this off, if anyone had been prepared to tell the customer that they were making a mistake. Even if they chose not to address the problem at the time, at least we as the vendor would have retained a little credibility.

So if you can see problems in the design, try and make someone pay attention early on, while there is still time to fix things. Even if all you achieve is an email trail that documents recognition of a potential problem, that may be enough to make you look less foolish if the wheels fall off further into the build.

Double check what they are planning to do about backups: this may be the point where you discover that nobody quoted them the extra backup client licences that they will need for whatever product they use. Be particularly careful if you have to deal with applications that require special backup clients: I've seen jobs where the salesman only quoted operating system backup clients, and missed the licences needed for Oracle. In the worst case, the customer may need to upgrade a tape silo or SAN to get sufficient capacity for backups.
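A quick way to catch the missing-licence problem early is to tally what each host actually needs, application agents included, and compare that against the quote. A sketch, with made-up host names and licence labels:

```python
# Sketch: tallying backup-client licences per host, including application
# agents (e.g. a database agent) that are easy to miss in a quote.
# The hosts, OS names and licence keys below are invented for illustration.

hosts = [
    {"name": "db01", "os": "solaris", "apps": ["oracle"]},
    {"name": "db02", "os": "solaris", "apps": ["oracle"]},
    {"name": "web01", "os": "linux", "apps": []},
]

def licence_counts(hosts):
    """Count one OS backup client per host, plus one agent per application."""
    counts = {}
    for h in hosts:
        counts["os-client"] = counts.get("os-client", 0) + 1
        for app in h["apps"]:
            key = f"{app}-agent"
            counts[key] = counts.get(key, 0) + 1
    return counts

print(licence_counts(hosts))  # {'os-client': 3, 'oracle-agent': 2}
```

Five minutes with a list like this, held up against the sales quote, is a lot cheaper than discovering the gap at install time.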

What are the customer's expectations of user acceptance testing (UAT)? On a large project with a lot of software, there is a tendency for the testing phase to focus on the applications and their functionality: "does this stuff do what we wanted?" testing. And I have seen projects where the entire test plan was written by the software development team: they were astounded when they found that the customer expected to see test cases for the hardware as well. Find out what your customer wants, and get ready to deliver it, because without a completed UAT, you will not get sign off. Be extremely wary of contracts that contain vague statements about "mutually agreed test plans". That phrase just means that the people who drafted the contract had no idea what testing could or should be done. Chances are the contract was worked out by business people, not IT staff. But you can bet that the UAT will have to be acceptable to the IT staff, and they may insist on all sorts of time-consuming tests.

The whole issue of testing can blow your schedule right out of the water, and it can add unanticipated extra work to the build. Take a fairly common case: you build a set of infrastructure that is supposed to be firewalled off from the internet on one side, and connected to the customer's existing network on the other. When you come to test it, the customer insists that you conduct tests to prove that the infrastructure is secure before they will let you connect it to either their network or the internet. You essentially need to perform a penetration test, but in an isolated environment. A common requirement is that you facilitate access for a neutral third party to conduct the penetration test (and this is quite reasonable: the people who build secure systems should not be the same people who test them). I've seen projects that had to install additional equipment just to provide adequate facilities for testing to take place in a manner that the customer was pleased to accept. The equipment had to be sourced, installed and configured. It all takes time.
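One piece of that isolated testing is mechanical enough to sketch: comparing the ports a scan actually finds open against the ports the design says should be open. The port numbers here are invented for illustration; the point is that the comparison is trivial to automate once both lists exist in writing:

```python
# Sketch: checking scan results against the design's allowed-ports list.
# The "expected" and "observed" sets below are made up; in practice the
# observed set would come from a scan run inside the isolated environment.

def port_deltas(expected: set, observed: set):
    """Return (unexpectedly open ports, expected-but-closed ports)."""
    return observed - expected, expected - observed

expected = {22, 443}
observed = {22, 443, 8080}     # e.g. a forgotten admin interface
open_extra, closed_missing = port_deltas(expected, observed)
print(open_extra, closed_missing)   # {8080} set()
```

Handing the third-party testers a written expected-ports list up front also saves a round of "is this open on purpose?" questions during the test window.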

My strongest recommendation on testing is that you start writing the test plan before you unpack a single piece of equipment. Write an outline, parcel out the work, make sure the project manager realises that this is a non-trivial task, get regular customer feedback. If nothing else, the work of writing test cases can be used to keep the team busy if some other part of the project stalls. And stall it will...
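The outline itself can be kept machine-readable from day one, which makes parcelling out the work and spotting unowned cases trivial. A minimal sketch; the case IDs, areas and wording are all illustrative:

```python
# Sketch: a minimal machine-readable UAT outline, started before any kit
# is unpacked. The categories and test cases below are invented examples.

test_plan = [
    {"id": "HW-001", "area": "hardware", "case": "PSUs redundant under load", "owner": None},
    {"id": "HW-002", "area": "hardware", "case": "Cable labels match customer standard", "owner": None},
    {"id": "APP-001", "area": "application", "case": "Reports print to customer printers", "owner": None},
]

def unassigned(plan):
    """Return the IDs of cases not yet parcelled out to an owner."""
    return [t["id"] for t in plan if t["owner"] is None]

test_plan[0]["owner"] = "infra team"
print(unassigned(test_plan))  # ['HW-002', 'APP-001']
```

A list like this, reviewed with the customer early, also surfaces the hardware-versus-application coverage argument while it is still cheap to resolve.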
