FPA - Function Point Analysis
References
Function Point Analysis is a means to measure the size of software. A good book on FPA is "Measuring the Software Process: A Practical Guide to Functional Measurements" by David Garmus and David Herron.
FPA is maintained by IFPUG - the International Function Point Users Group (www.ifpug.org).
Function Point Analysis Introduction
FPA is an accepted standard for the measurement of software size. It is a normalizing factor for software comparison, similar to other standard units of measure such as meters or ohms.
Function points can then be applied to costing and estimation. But just as 100 square meters in a city downtown costs more than 100 square meters in a village, the cost and estimate depend on other factors as well. The key ingredient, however, is function points. In other words, function points are the "pure" measurement of the functionality as perceived by the users, independent of the technology and other such system characteristics.
The function point analysis process consists of the following steps -
- First determine the type of function point count to perform
- Then identify the system boundary
- Identify the data functions and their complexity
- Identify the transaction functions and their complexity
- Determine the UFPC - unadjusted function point count
- Determine the VAF - value adjustment factor, which depends on the general system characteristics
- Determine the FPC - the final (adjusted) function point count, which is the UFPC multiplied by the VAF
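The steps above culminate in a simple calculation. As a sketch, using the standard IFPUG formula where the VAF is 0.65 plus 0.01 times the total degrees of influence across the 14 general system characteristics (the input values below are hypothetical):

```python
def value_adjustment_factor(degrees_of_influence):
    """VAF per IFPUG: 0.65 + 0.01 * sum of the 14 GSC ratings (each rated 0-5)."""
    assert len(degrees_of_influence) == 14
    return 0.65 + 0.01 * sum(degrees_of_influence)

def adjusted_function_points(ufpc, degrees_of_influence):
    """Final count: FPC = UFPC * VAF."""
    return ufpc * value_adjustment_factor(degrees_of_influence)

# Hypothetical example: a UFPC of 120 with all 14 characteristics rated 3
# gives VAF = 0.65 + 0.42 = 1.07, so FPC is about 128.4.
print(adjusted_function_points(120, [3] * 14))
```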
There are three types of function points -
- Development project function points
- Enhancement project function points
- Application function points
Enhancement projects can include additions of new functionality, changes, or deletions. Conversion functionality may also be implemented. The function points for such changes are then reflected in the application function point count.
An application function point count measures an installed application and provides the function point count of the functionality provided to the user.
Defining the system boundary
This is the boundary between the system being developed and the external systems and user domain. IFPUG gives the following rules -
- The boundary should be user perceivable such that the user should be able to define the scope.
- When defining boundaries between related systems, it should be based only on business functionality and not on other factors such as technology.
- As enhancements are done to the application, the application boundary changes with added or deleted functionality.
Data functions relate to user-identifiable, logically grouped/related data or control information that is stored and available for update and retrieval. There are two kinds of data functions -
- Internal logical file (ILF) - maintained within the boundary of the application
- External interface file (EIF) - read-only references to data that is maintained outside the application boundary
Elementary process - the smallest unit of activity that is meaningful to the user. Such an activity may perform sub-activities, but those sub-activities on their own are not useful to the user.
User identifiable - part of the requirements that can be understood and defined by an experienced user.
Logically related data - a logical group of data that fits the description, typically entities in second or third normal form. If data is split for some implementation reason, it should be merged back for counting.
Control information - information that in some way influences an elementary process. For example, the last login time, used to implement timeout semantics.
Internal logical files (ILF)
It is -
- user identifiable
- group of logically related data or control information
- maintained within the application boundary
- by an elementary process of the application
External Interface Files (EIF)
It is -
- User identifiable
- Logically related data or control information
- referenced by an elementary process in the application boundary
- but maintained by another application
Complexity of a data function is calculated using two concepts -
- Data element type (DET)
- Record element type (RET)
Data element type (DET)
These are the
- user recognizable
- unique, non-recursive fields/attributes (including any foreign key attributes) maintained in an ILF or EIF.
Record element type
These are the
- user recognizable
- subgroups (optional or mandatory) of data elements contained within an ILF or EIF. Subgroups are typically represented in an ERD as entity subtypes or attributive entities, commonly called parent-child relationships.
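As an illustration of counting DETs and RETs, consider a hypothetical Employee ILF (this data model is not from the source) with a base group of employee fields and an optional Dependent subgroup:

```python
# Hypothetical Employee ILF: the base Employee group plus an optional
# Dependent subgroup, each represented here as a list of its fields.
employee_ilf = {
    "Employee": ["employee_id", "name", "hire_date", "department_code"],
    "Dependent": ["dependent_name", "date_of_birth", "relationship"],
}

# Each subgroup counts as one RET; each unique, user-recognizable,
# non-recursive field counts as one DET.
ret_count = len(employee_ilf)                                      # 2 RETs
det_count = sum(len(fields) for fields in employee_ilf.values())   # 7 DETs
print(ret_count, det_count)
```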
Calculating complexity of data functions
Depending on the number of RETs and DETs identified for each data function, complexity is calculated using the complexity matrix. This matrix is the same for both ILF and EIF -
- If the number of RET is 1, then for DET 1-19 complexity is "L", for DET 20-50 it is "L", and for 51+ it is "A"
- If the number of RET is 2-5, then for DET 1-19 complexity is "L", for DET 20-50 it is "A", and for 51+ it is "H"
- If the number of RET is >5, then for DET 1-19 complexity is "A", for DET 20-50 it is "H", and for 51+ it is "H"
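The complexity matrix can be expressed directly as a lookup function, shared by ILF and EIF:

```python
def data_function_complexity(ret, det):
    """ILF/EIF complexity ('L', 'A', or 'H') from RET and DET counts,
    following the IFPUG data function complexity matrix."""
    if ret == 1:
        return "L" if det <= 50 else "A"
    if 2 <= ret <= 5:
        if det <= 19:
            return "L"
        return "A" if det <= 50 else "H"
    # ret > 5
    return "A" if det <= 19 else "H"

print(data_function_complexity(1, 10))   # "L"
print(data_function_complexity(3, 25))   # "A"
print(data_function_complexity(6, 60))   # "H"
```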
For ILF
- If complexity is "L", then unadjusted function point is number of ILF multiplied by 7
- If complexity is "A", then unadjusted function point is number of ILF multiplied by 10
- If complexity is "H", then unadjusted function point is number of ILF multiplied by 15
For EIF
- If complexity is "L", then unadjusted function point is number of EIF multiplied by 5
- If complexity is "A", then unadjusted function point is number of EIF multiplied by 7
- If complexity is "H", then unadjusted function point is number of EIF multiplied by 10
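Combining these weight tables, the unadjusted contribution of the data functions can be sketched as follows (the example counts at the end are hypothetical):

```python
# Unadjusted function point weights per data function, by complexity.
ILF_WEIGHTS = {"L": 7, "A": 10, "H": 15}
EIF_WEIGHTS = {"L": 5, "A": 7, "H": 10}

def data_function_ufp(ilf_counts, eif_counts):
    """Each argument maps a complexity rating ('L'/'A'/'H') to the number
    of data functions with that rating; returns their total UFP."""
    ufp = sum(ILF_WEIGHTS[c] * n for c, n in ilf_counts.items())
    ufp += sum(EIF_WEIGHTS[c] * n for c, n in eif_counts.items())
    return ufp

# Hypothetical example: 3 low and 1 average ILF, plus 2 low EIFs.
print(data_function_ufp({"L": 3, "A": 1}, {"L": 2}))  # 3*7 + 10 + 2*5 = 41
```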
Transaction function points
Information systems are usually developed with the intent that certain manual tasks can be accomplished more economically and effectively. It is these tasks that end up being identified as transactional functions.
There are three types of transaction functions -
- External input (EI) which maintains ILF