Tuesday, 3 December 2013

What the Heck are Non-Functional Requirements?

If functional requirements are for the end-users (customers), then who are the non-functional requirements for?

Non-functional requirements are for those that install and maintain operational code, i.e. the help desk and operators

Every developer needs to be aware of what those non-functional requirements are and why operations and help desk personnel are indirect customers that really matter. When these people are unhappy, their problems come straight back into development as urgent issues!

Functional Requirements

Functional requirements are baked into the code that developers deliver (interpreted or compiled).  Events from input devices (network, keyboard, devices) trigger functions that convert input into output -- all functions have the form:

output = f(input)

This is true whether you use an object-oriented language or not.

Non-functional requirements involve everything that surrounds a functional code unit.  Non-functional requirements concern things that involve time, memory, access, and location:

Performance
  • Server must be able to handle 100 transactions a second
  • End user must have less than 1 second response
Availability
  • The service has to be available 99.99% of the time during the service hours
Capacity
  • During service hours the server must support 700 simultaneous users
Continuity
  • Service is resilient to disk, machine, and operational center failure
Security
  • Ensure that only people authorized to access the service can
  • Ensure that data is never corrupted due to illegal activity or machine failure

I won't spend any time on performance because this is the non-functional requirement that everyone understands.

Non-functional requirements are slightly different between desktop applications and services; this article is focused on non-functional requirements for services.

If you have any knowledge of ITIL you will recognize that the items above deal with the warranty of a service.  In fact, the functional requirements involve the utility of a service, while the non-functional requirements involve the warranty of a service.

Availability

Availability is about making sure that a service is accessible when it is supposed to be.  It is recorded as a Configuration Item (CI) in the operations center environment that specifies how the code is accessed.  Availability is decided independently of the code and is, at best, part of the Service Design Package (SDP) that is delivered to the operations department.

Developers need to be aware of single points of failure (e.g. services hard-coded to a specific IP), which cause fits in operations centers that are not running virtual machines (VMs) with virtual IPs.  The requirement to create code that is not reliant on static IPs or specific machines is a non-functional requirement (see the sketch below).  Availability is simplified in operations if the code is resilient enough to allow itself to easily move (or be replicated) among servers.

Availability non-functional requirements include:
  • Code to verify that customers are in their user windows
  • Automatic installation of CI or mechanisms
  • Ability to detect and prevent manual errors for a CI
  • Ability to easily move code between servers
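
To make that last point concrete, here is a minimal sketch (in Python, with hypothetical variable names and defaults) of what "not reliant on static IPs" can look like: the downstream service is resolved from configuration rather than a hard-coded address, so operations can move or replicate it without a code change.

import os

# Hypothetical endpoint configuration; the environment variable names and the
# default host are illustrative only.
SERVICE_HOST = os.environ.get("BILLING_SERVICE_HOST", "billing.internal.example.com")
SERVICE_PORT = int(os.environ.get("BILLING_SERVICE_PORT", "8080"))

def service_url(path):
    """Build the service URL from configuration rather than a static IP."""
    return "http://{0}:{1}{2}".format(SERVICE_HOST, SERVICE_PORT, path)

print(service_url("/transactions"))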

Capacity

Capacity is about delivering enough functionality when required.  If you ask a web service to supply 1,000 requests a second when that server is only capable of 100 requests a second then something will fail.

This may look like an availability issue, but it is caused by the inability to handle the requested capacity.

Internet services can't provide enough capacity with a single machine, so operations personnel need to be able to run multiple servers with the same software to meet capacity requirements.  The ability to run multiple servers without conflicts is a non-functional requirement, as is the ability to take a failing node and restart it on another machine or VM.

Capacity non-functional requirements include:
  • Ability to run multiple instances of code easily
  • Ability to easily move a running code instance to another server
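
To make the "run multiple instances" idea concrete, here is a hedged Python sketch of the underlying principle: keep the process free of instance-local state so that operations can run as many copies as capacity demands. The store and function names are hypothetical.

# Per-user state lives in a shared store (faked here with a dict; in practice
# a database or cache), so any instance can serve any request and a failed
# instance can be restarted elsewhere without losing data.
shared_store = {}  # stand-in for an external store

def handle_deposit(user_id, amount):
    # Read and write through the shared store, never instance-local memory.
    balance = shared_store.get(user_id, 0) + amount
    shared_store[user_id] = balance
    return balance

print(handle_deposit("alice", 50))   # 50
print(handle_deposit("alice", 25))   # 75, regardless of which instance ran it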

Continuity

Continuity is about being robust against major interruptions to a service; these include power outages, floods or fires in an operational center, or any other disaster that can disrupt the network or physical machines.

Where availability and capacity often involve redundancy inside a single operation center, continuity involves geographic and network redundancy.  Continuity at best involves having multiple servers that can work in geographically distributed operation centers.  At worst, you need to be able to have a master-slave fail over model with the ability to journal transactions and eventually bring the master back up.

Continuity non-functional requirements include:
  • Resilience of the code base to network outages, e.g. the ability to retry transactions or find a new server (see the sketch after this list)
  • Making sure that correct error messages are returned when physical failures are encountered, e.g. if the network is unavailable then don't give the end user the message "Customer record has errors, please correct".
  • Ability to recognize inconsistent data and not continue to corrupt data inside the database.
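
Here is a minimal Python sketch of the first two items, assuming a hypothetical server list and transport function: retry with back-off, fail over to another server, and report an operational error rather than a misleading data error.

import time

# Hypothetical server list, delays, and transport; the point is the retry and
# fail-over structure, not the specific networking code.
SERVERS = ["primary.example.com", "standby.example.com"]

def post(server, payload):
    """Stand-in for a real network call; replace with an HTTP client."""
    raise ConnectionError("simulated outage on %s" % server)

def send_transaction(payload, attempts_per_server=3, base_delay=1.0):
    for server in SERVERS:                      # fail over to the next server
        for attempt in range(attempts_per_server):
            try:
                return post(server, payload)
            except ConnectionError:
                time.sleep(base_delay * (attempt + 1))  # back off, then retry
    # Report an operational failure instead of a misleading data error.
    raise RuntimeError("Service unavailable: no server could be reached")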


Security
Security non-functional requirements concern who has access to which functions and how the integrity of data is protected from corruption.

Where access is concerned, how difficult will it be for operations personnel or help desks to set up security for users?

Developers build different levels of access into their applications without considering how difficult it will be for a 3rd party (help desk or operations) to set up end users.

Data integrity is another non-functional requirement.  Developers need to consider how their applications will behave if the program encounters data corrupted by machine or network failures.  This is less of an issue in environments using RAID or redundant databases.

Security non-functional requirements include:
  • Ease of a help desk to set up a new user on an application
  • Ability to configure a user's rights to enable them only to access the functions that they have a right to
  • Ensuring through data redundancy or consistency algorithms that data is not corrupted
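
As a sketch of the second item, here is a hypothetical role-to-permission mapping in Python; in a real service the mapping would live in configuration or a database that the help desk can edit without a code change.

ROLE_PERMISSIONS = {
    "teller":     {"view_account", "post_transaction"},
    "supervisor": {"view_account", "post_transaction", "reverse_transaction"},
}

def is_allowed(role, action):
    """Allow an action only if the role explicitly grants it."""
    return action in ROLE_PERMISSIONS.get(role, set())

print(is_allowed("teller", "reverse_transaction"))      # False
print(is_allowed("supervisor", "reverse_transaction"))  # True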


When You Forget Non-Functional Requirements...

Commonly, start-ups are so busy setting up their services that they put non-functional requirements on the back burner.  The problem is that some non-functional requirements need to be designed into the architecture when the software is created.

For example, it is easy to be fooled into building software that is tied to a single machine, however, this will not scale in operations and cause problems later on.  One of the start-ups I was with built a server for processing credit/debit card transactions without considering non-functional requirements (capacity, continuity).

 It cost more to add the non-functional requirements than it cost to develop the software!

A non-functional requirement that is not thought through at the inception of a project will often represent significant work to add later on.  Every such retrofit is a zero-function-point project that comes with a decidedly non-zero cost!

Generally, availability, capacity, and continuity are not a problem for services developed with cloud computing in mind.  However, there are thousands of legacy services that were developed before cloud computing was even possible.

If you are developing a new service then make sure it is cloud enabled!

Operations People are People Too

Make no mistake, operations and help desk personnel are fairly resourceful and have learned how to manage software where non-functional requirements are not addressed by the developers.

Hardware and OS solutions exist for making up for poorly written software that assumes single machines or does not take into account the environment that the code is running in, but that can come at a fairly steep cost in infrastructure.

The world has moved to services, and it is no longer possible for developers to ignore the non-functional requirements of the code they are developing.  Developers who think through the non-functional requirements can dramatically reduce costs and improve the quality of the service being delivered.

The guys that run  operational centers and help desks are customers that are only slightly less important than the end-user.  Early consideration of the non-functional requirements makes their lives easier and makes it much easier to sell your software/services. It is no longer possible for competent developers to be unaware of non-functional requirements.



Other Articles
  • No Experience Necessary: counter-intuitive evidence that years of experience do not make developers more productive
  • Shift Happens: why scope shift on development projects is inevitable and why not capturing requirements at the start of a project can doom it to failure
  • Inspections are not Optional: software inspections are intensive, but evidence shows that each hour of inspection can reduce QA effort by 4 hours!

Tuesday, 19 November 2013

Who should set defect priority?

Surprisingly, defect priority should not be set by QA.  QA are generally the owners of the defect tracking system and control it, but this is one attribute that they should not control.  The defect tracker is a shared resource between QA, engineering, engineering management, and product management, and it is a coordinating mechanism for all of these parties.

Commonly people mix up priority and severity, for example, there may be a severe defect that causes the software either not to install or to cease functioning.

It is common for new releases to have various installation problems when they first get to QA. This blocks QA, so they mark the defects with a high severity and high priority.  This issue has a high severity and needs to be addressed right away, but remember bug tracking systems are append only -- once this defect gets into the system, it will never get out.  This kind of issue should be escalated to the engineers and engineering management because it makes little sense to clog the defect tracking system with it.

Now, there may be intermittent issues that cause the software to fail.  You might assume such a defect would be high priority, but if it occurs very rarely and would cost too much to fix, then it may be a low priority.  Once again, the priority of an intermittent severe issue cannot be determined by QA.

Similarly, there may be many cosmetic or minor defects where fixing them might make a huge difference in the user experience and reduce support calls.  Even though these defects are minor, they may be easy to fix and save you serious money.  Once again, this can not be decided by QA.

Therefore, QA should reserve the right to set the initial severity, and may have an internal field for QA priority, but the priority of a defect should be determined by a product manager (PM) who has a more complete understanding of the overall context of the product.  The priority of a defect is a business issue, not an engineering or QA issue.

Ideally, the product manager will set the priority of a defect during a defect triage session where representatives from engineering, engineering management, and QA are present. As you go through each new defect, each person can present their reasoning for what the priority should be.  The PM should then set the priority of the defect and assign the release it should be fixed in (i.e. this release, minor release, major release, won't fix).  Ideally there are extra fields in the defect record for both QA and development to record their suggested priorities.

It is important that the main status field of a defect not include any of the following or your defect tracker will go to hell (i.e. this information should be in other fields):
  • Priority
  • Severity
  • Fix version
It is very hard to generate meaningful reports when these attributes creep into the defect status.  You know this has happened when you need a multi-page training manual for entering defects into the system.  Some of the statuses that make sense initially but turn the defect system into a nightmare are:
  • FAD (functions as designed)
  • WontFix
  • NextVersion
  • CantReproduce
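
One way to picture this is a defect record where the workflow status stays simple and everything else gets its own field. The field names below are hypothetical (Python used just for notation) and are not taken from any particular tracker.

defect = {
    "id": 1234,
    "status": "open",          # open / in progress / resolved / closed only
    "severity": "high",        # set by QA
    "qa_priority": "high",     # QA's internal opinion
    "priority": "medium",      # set by the product manager at triage
    "fix_version": "2.4",      # release the fix is scheduled for
    "resolution": None,        # later: fixed / functions-as-designed / cannot-reproduce
}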
Of course, if you don't have regular bug triage sessions with all the parties mentioned above then you are probably sand-bagged in fire-fighting.


Wednesday, 30 October 2013

Don't manage enhancements in the bug tracker

As development progresses we inevitably run into functionality gaps that end up being deemed enhancements.

These issues often get captured by QA in the bug tracker and assigned to a developer that will be unable to resolve the issue.

Assigning issues to the wrong people will cause the defect to bounce around like a hot potato and waste everyone's time.

Enhancements are requirements defects that cannot be resolved by either QA or the developer.

The life cycle of a defect and the life cycle of an enhancement are two entirely different things.  A defect is a difference between a stated requirement and the code. If there is no documented requirement then there is no code defect (see It's not a bug, it's...).

Most enhancements will eventually be coded by some developer; they just should not be managed from the bug tracker. They need to go through the requirements process to determine whether or not the change will be made.  Even when changes are made they can be very different from what QA or the developer expects.

Defect Life-cycle

The defect life-cycle is well known:
  • Defect is identified as a departure from the requirements
  • Defect is assigned to a developer
  • Defect is corrected
  • The correction is verified
    • If not corrected, re-open and re-assign to a developer
  • The defect is closed

This is the correct way to manage a defect.

This is the incorrect way to manage enhancements. When QA discovers functionality not covered by the requirements then we have an issue; however, it is unlikely that it will be resolved by a developer.

Therefore assigning enhancements to developers is simply going to waste time because neither QA nor developer should be deciding requirements.

Non-Code Defect Life-cycle

There are defects that are not coding defects, this includes:
  1. Insufficient requirements
  2. Correct requirements but incorrect test plans
Enhancements are really requirements defects. They should be logged as such in the bug tracker and assigned to the person in charge of requirements (business analyst or product manager).  That person is then responsible for tracking down how these issues should be handled.

If the requirements are correct but the test plans are defective, then the issue should be logged as a test defect. This is tricky because QA often controls the bug tracker and may be reluctant to log errors that they themselves committed.  Such issues are typically reported in QA with a functions-as-designed status.

At a minimum, the implementation of requirements and test defects in the bug tracker can do several positive things for you:
  1. It removes the responsibility to find a solution from development.
  2. It makes it clear how many defects are in the requirements or test plans.
  3. It reduces stress; no developer wants to be blamed for an issue that is not his.
  4. Many enhancements call for updated project plans and pushing back the deadline.

Put Responsibility Where it Belongs

The creation of requirements and test defects in the bug tracker goes a long way to cleaning up the bug tracker.  In fact, requirements and test defects represent about 1 out of 4 of defects in most systems.

The percentages break down as follows:
  • Requirements defects: 9.58%
  • Testing defects: 15.42%
The creation of requirement and test defects in the bug tracker alleviates pressure on the engineering department and redirects it to either the product manager or QA. Eventually enough data will accumulate in the bug tracker to get management's attention.

At a minimum, these defect categories should help reduce the amount of fire-fighting at the end of a project.




Wednesday, 23 October 2013

It's not a bug, it's...

When does a bug become a bug?
Who decides that it is a bug?

How many legs does a lamb have if I say the tail is a leg?  The answer is 4, just because I say the tail is a leg does not make it a leg!

Bugs should be obvious, but we say It's not a bug, it's a feature because often it isn't obvious.  Watts Humphrey felt that we should use the term defect rather than bug because most people don't take bugs seriously, so let's use the term defect instead.

So when does a defect become a defect?
  • When quality assurance tells you that you have a defect?
  • When product management says that it is a defect?
  • When the customer says that it is a defect?
The answer is: none of the above. Now it might turn out that there is a problem and that code needs to change, but a defect only exists if:

code behaves differently than the requirements specification

This is important because most systems are under-specified (if they are specified at all :-) ). Code can only be considered defective if it differs from the specification.  We call such defects undocumented features because we know that the real problem is that the requirements were never written.

Incomplete and Inconsistent Requirements

Many organizations do not create sufficiently complete requirements before starting development, either because they don't know how to capture requirements properly or because they don't have resources capable of capturing complete requirements.

Incomplete (and inconsistent) requirements and unrealistic deadlines often force developers into making decisions about how to implement features.  The end result is that developers are regularly told that they have defects in their code.

While this process is common, it is destructive.  When requirements are under-specified and inconsistent, developers end up needing to perform serious rework. The rework can require dramatic changes that will impact the architecture of the code.

The time required to find a workaround (if one is possible) is rarely included in the project plan. Complicating matters, the organizations that are reluctant to spend time creating requirements also tend to underestimate their projects.  This puts tremendous pressure on the engineering department to deliver, and it promotes the 5 worst practices in software development (see Stop It! No… really stop it.)

When poorly documented or undocumented systems require changes that were never specified, we should call them change requests rather than defects.

Only 54% of Issues are Resolved by Engineers

The attitude that all defects must be resolved by the engineering department is severely misguided.  Analysis by Capers Jones of over 18,000 projects shows that only about 54% of all defects can be resolved by the engineers (only 3 of the rows in the table below)!

Defect Category                  Frequency    Role
Requirements defect              9.58%        BA/Product Management
Architecture or design defect    14.58%       Architect
Code defect                      16.67%       Developer
Testing defect                   15.42%       Quality Assurance
Documentation defect             6.25%        Technical Writer
Database defect                  22.92%       Database Administrator
Website defect                   14.58%       Operations/Webmaster

This means that precious time will be wasted assigning issues to developers who cannot resolve them.  The time necessary to redirect the issue to the correct person is a major contributing factor to fire-fighting.

Getting Control of the Defect Process

For most organizations fixing the defect process involves understanding and categorizing defects correctly. Organizations that are not tracking the different sources of defects probably have a bug tracker that has gone to hell.  Here is how you can fix that problem, see Bug Tracker Hell and How To Get Out!

At a minimum you need to implement the requirements defect, once you identify issues that are caused by poor requirements it will shine the white hot light of shame onto the resources that are capturing your requirements.

Once you realize how many requirements defects exist in your system you can begin to inform senior management about the requirements problem.

Reducing Fire-Fighting

The best way to reduce fire fighting is to start writing better requirements (or writing requirements :-) ).  To do so you need to figure out which of the following are broken:
  1. Not enough time is allocated to the requirements phase
  2. Unskilled people are capturing your requirements
In all likelihood both of these issues need to be fixed in your organization.  When requirements are incomplete and inconsistent you will have endless fire-fighting meetings involving everyone (see Root cause of 'Fire-Fighting' in Software Projects)

Stand your ground if someone tells you that you have coded a defect when there is no documentation for the requirement.

Friday, 11 October 2013

Stupid is as stupid does

Senior management does not set out to have failed projects, yet 7 out of 10 projects fail and you wonder why they keep doing things the same way.

As Forrest Gump's mother stated, "stupid is as stupid does", that is, smart people sometimes do stupid things.  Most senior managers are pretty smart, however, managers end up doing some very stupid things.

Now, from a human perspective, something very interesting is going on here.  For example, suppose you were given the following choices:
  1. An 80% chance of making $100
  2. A guaranteed $50
Choosing the 1st alternative has an expected value of $80, so people might be expected to choose this because $80 is more than $50. However, because humans are risk averse, almost everyone will choose the second alternative with guaranteed money.

Now out of 10 software projects:
  • 3 will succeed
  • 4 will be challenged
  • 3 will outright fail
To put that in perspective, if you were watching people cross the street at an intersection:
  • 3 cross the street successfully
  • 4 get maimed
  • 3 get killed
How interested would you be in crossing that street?

Stupid is as stupid does...
You can Google "software project failure rates" to see that this has been demonstrated reliably over many industries over 50 years.  Challenged projects are over budget and under deliver but senior management manages to sell them as victories.  I do not consider challenged projects to be successful; hence the statement that 7 out of 10 projects fail.

So if human beings are risk averse and the odds of project success are so low then:

Why does senior management ignore risks on software projects?

It seems likely that senior management can't conceive of their projects failing. They must believe that every software project that they initiate will be successful, that other people fail but that they are in the 3 out of 10 that succeed.

This inability to understand the base rate of failure in software development is systemic. There are so many software projects that are started by senior management where the technical team knows that the chance of success is 0% from the start.

Senior management is human (seriously, they are :-) ) and is risk averse, you just need to find a way to remind them of this. One way to get senior management to think twice about projects is to make sure that there is a meeting before launching the project where management is asked the following question:

Assume that this project will fail, why would it have failed? What will the consequences be?

This exercise (if done seriously) may have the effect of causing senior management to realize that the project can indeed fail. With luck, the normal risk aversion that every human being is endowed with will kick in and the project may get re-evaluated.

Stupid is as stupid does...


Bibliography

Kahneman, Daniel. Thinking, Fast and Slow.

Kahneman explains why it is human nature to ignore base rates of failure and why we over-estimate our chances of success in any venture.

Monday, 30 September 2013

Size Matters


Some say that software development is challenging because of complexity. This might be true, but this definition does not help us find solutions to reduce complexity. We need a better way to explain complexity to non-technical people.

The reality is that when coding a project size matters.  Size is measured in the number of pathways through a code base, not by the number of lines of code. Size is proportional to the number of function points in a project.

There are many IT people that succeed with programs of a certain size and then fail miserably when taking on programs that are more sophisticated.  Complexity increases with size because the number of pathways increase exponentially in large programs.

Virtually anyone (even CEOs :-) ) can build a hello, world! application: an application with only a single pathway through it, about as simple as a program can get.  Some CEOs write the simple hello, world! program and incorrectly convince themselves that development is easy.

#include <stdio.h>

int main(void) {
     printf( "hello, world\n" );
     return 0;
}

If you have an executive that can't even complete hello,world then you should take away his computer :-)

Complexity Defined

As programs get more sophisticated, the number of decisions that have to be made increase and the depth of the call tree increases.  Every non-trivial routine will have multiple pathways through it.

If your average call depth is 10 with an average of 4 pathways through each routine, then this represents over 1 million pathways.  If the average call depth is 15, it represents over 1 billion pathways. Increasingly sophisticated programs have greater call depth than ever, and distributed applications increase the call depth even further because the call depths of cooperating systems are additive. This is what we mean by complexity; it is impossible for us to test all of the different pathways in a black-box fashion.
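
The arithmetic behind those numbers is easy to check if you treat the pathway count as roughly the branching factor raised to the power of the call depth (a deliberate simplification):

# Rough pathway count: b pathways per routine and call depth d gives about
# b ** d end-to-end pathways; an upper bound, since some combinations are
# unreachable.
def pathways(branches_per_routine, call_depth):
    return branches_per_routine ** call_depth

print(pathways(4, 10))   # 1,048,576      -> "over 1 million"
print(pathways(4, 15))   # 1,073,741,824  -> over 1 billion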

Now in reality every combination of pathways is not possible, but you only have to leave holes in a few routines and you will have hundreds, if not thousands, of pathways where calculations and decisions can go wrong.

In addition, incorrect calculations or decisions higher up in the call tree can lead to difficult to find defects that may blow up much further away from the source of the problem.

What are Defects?

Software defects occur for very simple reasons, an incorrect calculation is performed that causes an output value to be incorrect.  Sometimes there is no calculation at all because input data is not validated to be consistent and that data is either stored incorrectly or goes on to cause incorrect calculations to be performed.

We only recognize that we have a defect when we see an output value and recognize that it is incorrect. More likely QA sees it and tells us that we are incorrect.

Basically, we follow a pathway that is correct through nodes 1, 2, 3, 4, and 5.  At node 6 we make a miscalculation, we then carry incorrect values through nodes 7 and 8, and we finally discover the problem at node 9. Once we have made a miscalculation, we will either continue to make incorrect calculations or make incorrect decisions and go down the wrong pathways (where we will then make more incorrect calculations).

Not all Defects are Equal

It is clear that the greater the distance between a miscalculation and its discovery, the harder the defect is to detect.  The greater the call depth, the greater the chance of a large distance between origin and detection; in other words:

Size Matters

Today we build sophisticated systems of many cooperating applications, and the call depth grows with the size of the system, so the number of pathways grows exponentially.  This is what we mean by complexity in software.

Reducing Complexity

Complexity is reduced for every function where:
  • You can identify when inconsistent parameters are passed to a function
  • All calculations inside of a function are done correctly
  • All decisions through the code are taken correctly
The best way to solve all 3 issues is through formal planning and development. Two methodologies that focus directly on planning at the personal and team level are the Personal Software Process (PSP) and the Team Software Process (TSP), invented by Watts Humphrey.

Identifying inconsistent parameters is easiest when you use Design by Contract (DbC), a technique that was pioneered by the Eiffel programming language. It is important to use DbC on all functions that are in the core pathways of an application.
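
Python is not Eiffel, but the flavour of DbC can be sketched with plain assertions; this is a minimal, hypothetical example (real DbC libraries and decorators do this more cleanly):

def apply_payment(balance, payment):
    # Preconditions: reject inconsistent parameters at the function boundary.
    assert balance >= 0, "precondition: balance must be non-negative"
    assert payment > 0, "precondition: payment must be positive"

    new_balance = balance - payment

    # Postcondition: state the guarantee the caller can rely on.
    assert new_balance < balance, "postcondition: balance must decrease"
    return new_balance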

Using Test Driven Development is a sure way to make sure that all calculations inside of a function are done correctly, but only if you write tests for every pathway through a function.
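
For example, a function with two pathways needs at least two tests, one per pathway; here is a small sketch with Python's unittest, using made-up business rules:

import unittest

def shipping_fee(order_total):
    # Two pathways: free shipping at or above 100, a flat fee below it.
    return 0 if order_total >= 100 else 10

class ShippingFeeTests(unittest.TestCase):
    def test_free_shipping_pathway(self):
        self.assertEqual(shipping_fee(150), 0)

    def test_flat_fee_pathway(self):
        self.assertEqual(shipping_fee(40), 10)

if __name__ == "__main__":
    unittest.main()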

Making sure that all calculations are done correctly inside a function, and that correct decisions are made through the code, is best done through code inspections (see Inspections are not Optional and Software Professionals do Inspections).

All techniques that can be used to reduce complexity and prove the correctness of your program are covered in Debuggers are for Losers.  N.B. Debuggers as the only formalism will only work well for systems with low call depth and low branching.

Conclusion

Therefore, complexity in software development is about making sure that all the code pathways are accounted for.  In increasingly sophisticated software systems the number of code pathways increases exponentially with the call depth. Using formal methods is the only way to account for all the pathways in a sophisticated program; otherwise the number of defects will multiply exponentially and cause your project to fail.

Only projects with low complexity (i.e. small call depth) can afford to be informal and only use debuggers to get control of the system pathways. As a system gets larger only the use of formal mechanisms can reduce complexity and develop sophisticated systems. Those formal mechanisms include:
  • Personal Software Process and Team Software Process
  • Design by Contract (via Aspect Oriented Programming)
  • Test Driven Development
  • Code and Design Inspections

Wednesday, 18 September 2013

Are We There Yet?

We associate "are we there yet?" with kids asking incessantly if a long trip is almost over.  It is generally funny, however, it is less funny on a project that should be complete; it is even less funny if it is your project.

Projects follow distinct phases:
  1. Basic requirements are collected
  2. Project plan and end date are established
  3. Development starts
  4. Projects track to the project plan
Often, tasks start off well until they all level off at 90-95% complete and get stuck.  Management was satisfied with progress until the project stalled; now they see frantic activity picking up and they start asking "are we there yet?"

Like a family vacation, this trip is sometimes not even close to being finished.  You  expect that if 9,000 hours have been spent on a project estimated at 10,000 hours that you would be 90% done.

Surprisingly this is not the case for 7 projects out of 10 -- have you ever worked on a project where a 90% complete project plan meant the project was 90% done?

Project Plans can give the Illusion of Control


 Estimating project completion using the project plan is valid if there is a direct correlation between the project goal and the project plan. You often discover that the goal and the plan differ late in a project. Project plans and results differ because:
  • Requirements and tasks are missing
  • The project is incorrectly estimated
  • Work is performed on tasks that do not advance the project

Requirements and Tasks are Missing

Clearly missing requirements mean that more work will be necessary to get to the goal, but the time for this work is rarely added to the project deadline.

A relative of missing requirements is missing tasks, this occurs when the work breakdown structure is incomplete and more subtasks are necessary to complete a task than estimated.

In both cases, if there are 2,000 hours of missing requirements and tasks, then a project initially forecast at 10,000 hours should move its deadline to 12,000 hours.  Therefore, if 9,000 hours are done, you are only 75% complete.

Project plans that show 90% complete when there are 20% of the tasks and requirements missing are really 75% complete.


Unfortunately, weak IT leadership, internal politics, and embarrassment over  poor estimates will not move the deadline and teams will have pressure put on them by overbearing senior executives to get to the original deadline even though that is not possible.

The Project is Incorrectly Estimated

There is much literature showing that accurate estimates are both possible and necessary for successful projects. Weak and uninformed IT leadership will cave in to senior management demands for project deadlines without formal estimates.

A typical CEO and VP Engineering interaction looks as follows:

CEO: We need feature X. How much will it cost and how long will it take to build?

VP Engineering: Well, we need to define feature X properly, see how it will be implemented, determine if we have the necessary skill sets, and see what the impact on our other operations will be.  It will take time to do this work.

CEO: We don't have time for formal estimates.  How hard can it be to add feature X?  By next Friday, I will need a ballpark estimate for time and cost.

VP Engineering: I'll see what I can come up with for next week.

Weak IT executives allow themselves to be bullied all the time by other executives that have no idea what is involved in IT projects.  The end result is an underspecified project that will be underestimated in time and cost (see Why Executive Declared Deadlines lead to Disaster)

The more inaccurate the requirements the more extra work there is to do to get to the target.  That is why short requirements processes lead to strongly shifting requirements and canceled projects (see Shift Happens).  In fact, the degree of requirements shift is equal to the chance of a project being canceled.

Work is Performed that Does Not Advance the Project

Even if a project is correctly specified, there are several activities that will be performed that will not advance the project:
  • Some requirements can not be implemented as specified and time will be spent researching and implementing work arounds
  • Some requirements will be ambiguously specified and be implemented incorrectly and need to be redone
  • Some requirements will be inconsistent and require time and analysis to establish consistent requirements
  • Infrastructure might need to be refactored when you discover that it will not support the code created later
Work executed on these activities will not advance your project and should not be counted in the total of completed hours.

So if 2,000 of the 9,000 hours logged on a 10,000-hour project were spent on activities that don't advance the project, then you have really done only 7,000 hours of the 10,000-hour project and you are only 70% done.

Project plans that show 90% complete when  20% has been spent on unproductive tasks are really 70% complete.

Unfortunately...

On projects you will have both missing tasks and unproductive activities.  So if 2,000 hours are unproductive and 2,000 hours are missing on a 10,000-hour project where 9,000 hours have been logged, then you have only done 7,000 hours of a 12,000-hour project, and you are only 58.3% done.

Project plans that show 90% complete when 20% of the work is missing and 20% of the time has been spent on unproductive tasks are really only 58% complete.
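
The arithmetic is simple enough to write down, which also makes it harder to argue away under deadline pressure; a small sketch:

# Effective completion: productive hours over the true total, where the true
# total includes the work missing from the original plan.
def real_completion(hours_logged, unproductive_hours, estimated_hours, missing_hours):
    productive = hours_logged - unproductive_hours
    true_total = estimated_hours + missing_hours
    return productive / float(true_total)

print(real_completion(9000, 0, 10000, 0))        # 0.90 - what the plan shows
print(real_completion(9000, 0, 10000, 2000))     # 0.75 - missing tasks
print(real_completion(9000, 2000, 10000, 0))     # 0.70 - unproductive work
print(real_completion(9000, 2000, 10000, 2000))  # ~0.58 - both at once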

Many of you have been in this situation before, you know that the project plan states that you are 90% done but you know that you are not even close to finishing the project.

 Not Changing the Deadline Leads to Worst Practices

Whether a project is off course because of missing activities or non-productive activities, not changing the project end date will lead to schedule pressure as the project advances and people slowly start to realize that you are not going to make it.


When it becomes clear that a project can't make its original deadline, many organizations will start common but deadly practices.  Excessive schedule pressure often leads to the following bad practices (see Stop It! No... Really stop it.):
  • Friction within the team
  • Friction amongst the managers
  • Inadequate communication with  stakeholders
  • Layoffs of key personnel

Solutions

There are only a few cures for the 'Are we there yet?' problem:
  1. IT management with the intestinal fortitude to hold out for the creation of proper work breakdown structures and formal estimates
  2. Proper requirement processes that yield complete, consistent, and concise requirements
  3. Proper change management processes to alter the project deadline when missing tasks and non-productive activities are encountered
Failure to comply with these 3 principles means that you will continue to be subject to chaotic environments where 7 out of 10 projects fail (see Executives: Understanding your Chances of a Successful Project)

Wednesday, 5 June 2013

Let the Machines Take Over

Whether you believe that the machines will take over or not will always be up for debate (until they actually take over :-). What is not up for debate is that many processes for software creation have been automated and can help improve productivity and quality in every phase of a software project.

Here are some statistics about how useful automation can be; they should give you some ammunition when you confront management about the need for these tools.

These results were generated by Capers Jones, who has been collecting data on over 15,000 projects for about 40 years.  Full results are shown in the paper in the references.

Requirements Analysis

Tools for tracing requirements have been available for quite some time.  Generally only companies that are forced to provide requirements traceability use them, but doing so makes sense on all large projects:

Automated requirements tracing can raise productivity by 12.89% and quality by 17.8%

There are tools that can help size programs in terms of function points, which then allows you to get relatively accurate estimates of a project's cost and time before the project is executed.

Automated sizing in function points can raise productivity by 16.5% and quality by 23.7%

Development

There are several tools that are useful during development, but by far the biggest bang for your buck comes in the form of automated static analysis.

Automated static analysis can raise productivity by 20.9% and quality by 30.9%

There are now static analysis tools available for virtually all current languages that are cost effective and can help to locate those hard to find occasional defects.  You can find a list here of automated tools for C/C++, Java, JavaScript, Objective-C, Perl, PHP, and Python.

Then getting cyclomatic complexity counts for your code can be done in an automated fashion.  Studies have shown that files with a count of 74+ were 98% likely to have defects.

Automated cyclomatic complexity analysis can raise productivity by 14.5% and quality by 19.5%
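
As a reminder of what those tools are counting: cyclomatic complexity is essentially one plus the number of independent decision points in a routine. A small illustrative function (the business rules are made up):

def discount_rate(amount, is_member, country):
    # Three decision points below, so the cyclomatic complexity is 4; static
    # analysis tools compute the same count automatically across a code base.
    if amount > 1000:        # decision 1
        rate = 0.20
    elif is_member:          # decision 2
        rate = 0.10
    else:
        rate = 0.0
    if country == "CA":      # decision 3
        rate += 0.05
    return rate

print(discount_rate(1500, False, "CA"))  # 0.25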

Performance is a tricky thing, we will talk about it often but rarely do we actually do something about it. Automated performance analysis allows you to determine if there will be production problems before deploying new code.

Automated performance analysis can raise productivity by 12.5% and quality by 19.5%

There are tools for restructuring code automatically, many of these are built directly into today's IDEs.  There are many stand alone tools for automated restructuring, some are shown here.

Automated restructuring can raise productivity by 8.0% and quality by 11.7%

Automated unit testing is the easiest way to detect defects early.  If your developers are using Test Driven Development (TDD) and a continuous integration tool (e.g. Hudson or Jenkins), then you can find out with each build whether a defect has crept into the code.

Automated unit testing can raise productivity by 16.5% and quality by 23.8%

Testing

Believe it or not I was working with a team that was not using version control or defect tracking last year. So there are clearly still some organizations not using defect tracking.

Automated defect tracking can raise productivity by 17.5% and quality by 25.9%

Measuring the percentage of the application pathways that are actually exercised by your tests is critical.  Most of your defects will lie in the pathways that you cannot test.

Automated test coverage analysis can raise productivity by 15.5% and quality by 21.2%

Test cases can be generated automatically in some situations.

Automated test case generation can raise productivity by 15.0% and quality by 20.0%

Deployment

The next two items are not about technology that you acquire; rather, they are about building tools to support critical deployment functions.  When the pressure is on, it is tough to think about creating applications to manage configuration in an automated way.  When management tells you that it would be a luxury to build these tools, show them these results.

Some organizations rely on manually updating configurations, but this can be error prone when there are many values to update.

Automated configuration control can raise productivity by 16.4% and quality by 23.5%
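
A hedged sketch of what automated configuration control can look like in practice: every environment's configuration is generated from one template plus per-environment overrides, instead of being hand-edited on each server. The file and key names are invented for illustration.

import json

# One template, per-environment overrides, generated files: no hand edits.
TEMPLATE = {"db_host": None, "db_port": 5432, "log_level": "INFO"}

ENVIRONMENTS = {
    "staging":    {"db_host": "db.staging.example.com", "log_level": "DEBUG"},
    "production": {"db_host": "db.prod.example.com"},
}

def render_config(environment):
    config = dict(TEMPLATE)
    config.update(ENVIRONMENTS[environment])
    return config

for env in ENVIRONMENTS:
    with open("config.%s.json" % env, "w") as handle:
        json.dump(render_config(env), handle, indent=2)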

Also, when automated configuration tools don't exist, you often don't have automated deployment tools either.

Automated deployment support can raise productivity by 14.6% and quality by 19.6%

Conclusion

One of the most cost effective ways to improve productivity and quality in your software development is to automate any of the issues above.  In most cases you can find a vendor that will provide you support; for deployment you can make the case that building automated tools is the way to go.


References
N.B. All productivity and quality percentages were derived from over 15,000 actual projects.


Articles in the "Loser" series
Moo?

Want to see sacred cows get tipped? Check out:
Make no mistake, I am the biggest "Loser" of them all.  I believe that I have made every mistake in the book at least once :-)

Monday, 3 June 2013

To be, or not to be... Formal

What the heck is a formal process?  Is it in the category of you know it when you see it?

A formal process is a methodical way of tackling any repetitive process so that it can be handled in a standardized way.  The idea of formality is to reduce variation in output and minimize the chance of forgetting something.  Formal processes include the ideas of studying, planning, measurement, and process refinement; they cost more and take longer than just jumping in and getting things done.

Imagine if they tried to make cars in an informal way

Informal (not formal) processes are faster and cheaper because you get rid of the overhead of a formal process.  It makes sense to be informal when the variation of the process output is not very sensitive to studying, planning, measurement, or refinement.

We are all familiar with attempts to use formality where it makes no sense; you simply end up with process for process's sake and waste time and money.

In software, formality has become synonymous with documentation, and frankly we all hate documentation. We don't like to produce it and we don't like to read it.  I've instructed hundreds of engineers and only a few people bother to read the requirements brick.  The way education is going right now I'm not even sure that university graduates can read anyways, but I digress... :-)

For example, documentation for requirements, design, and testing is the end result of a formal process, it is not the formal process itself.  It is smart not to produce more documentation than you need, but that does not mean that the formal processes that creates the documentation are optional.

Formal processes in software development cluster around two very simple ideas:
  • getting the correct requirements
  • elimination of defects
The problem is that formal processes only work when you do enough of them to make a difference.  That is, not doing enough of a formal process is just as bad as doing too much.  For example, UML class diagrams will not only help you to plan code but also educate new developers on the team.

Producing UML diagrams is a formal practice; however, if you insist on producing UML diagrams for everything then you will end up spending a significant amount of time creating diagrams for a diminishing benefit.  Not having any UML diagrams will lead to code being developed more than once, simply because no one has a good overview of the system.
Selective UML diagrams can really accelerate your development

Visualizing Formality

As you increase any formal practice, the effect of that practice increases; for example, the more formal you are with UML diagrams, the fewer defects you are likely to have.  However, as you increase formality, the cost of keeping track of all your artifacts also goes up.
When you put the two effects together, you see that there is a cost to not having enough formality: no UML diagrams means that more code gets developed, and less reuse means more defects.  Producing too many UML diagrams, on the other hand, carries the cost of creating the diagrams and keeping them updated.  The reality is that for any formal practice there is a sweet spot where a minimal amount of formality gives the lowest cost.

What that means is that productivity will be maximized in the sweet spot.

Remember that most formal practices are hygiene practices, these are things that no one wants to do but are really necessary to be highly productive and produce high quality code (see here for more on hygiene practices)

Statistics Behind Formal / Informal Practices

Formal practices can be applied to many things, but Capers Jones has measured that the following formal practices (when properly executed) can have a positive effect on a project (15,000+ projects):

Formal risk management can raise productivity by 17.8% and quality by 26.4%
Formal measurement programs can raise productivity by 20.0% and quality by 30.0%
Formal test plans can raise productivity by 16.6% and quality by 24.1%
Formal requirements analysis can raise productivity by 16.3% and quality by 23.2%
Formal SQA teams can raise productivity by 15.2% and quality by 20.5%
Formal scope management can raise productivity by 13.5% and quality by 18.5%
Formal project office can raise productivity by 11.8% and quality by 16.8%

A special category of formal practices are inspections:
Formal code inspections can raise productivity by 20.8% and quality by 30.9%
Formal requirements inspections can raise productivity by 18.2% and quality by 27.0%
Formal design inspections can raise productivity by 16.9% and quality by 24.7%
Formal inspection of test materials can raise productivity by 15.1% and quality by 20.2%

More information on inspections can be found here.
Not being formal enough has its own costs, and it is rarely neutral.  Here are some of the costs of not being formal enough:
Informal progress tracking can lower productivity by 0.5% and quality by 1.0%
Informal requirements gathering can lower  productivity by 5.7% and quality by 8.4%
Inadequate risk analysis can lower  productivity by 12.5% and quality by 17.5%
Inadequate testing can lower  productivity by 13.1% and quality by 18.1%
Inadequate measurement of quality can lower  productivity by 13.5% and quality by 18.5%
Inadequate defect tracking can lower  productivity by 15.3% and quality by 20.9%
Inadequate inspections can lower  productivity by 16.0% and quality by 22.1%
Inadequate progress tracking can lower  productivity by 16.0% and quality by 22.5%

N.B. progress tracking is on the list twice.  Informal progress tracking is fairly neutral, but inadequate progress tracking is deadly.

Conclusion

In development practices it is not a matter of formal vs. informal.  It is a matter of seeing what kind of variation you are exposed to and recognizing where formality can make a difference in that area.  Then it is a matter of adding only as much formality as you need to get maximum productivity.

Formal practices must be monitored: you must have some way of testing whether a formal practice is having an effect on your development.  You want to make sure that you are executing all formal practices properly and not just going through the motions with no effect.  You must monitor the effects of formality to understand when you are not doing enough and when you are doing too much!


Articles in the "Loser" series
Moo?

Want to see sacred cows get tipped? Check out:
Make no mistake, I am the biggest "Loser" of them all.  I believe that I have made every mistake in the book at least once :-)