Sunday, April 18, 2010

Software Architecture Assessment Outline

The primary goal of this software architecture assessment outline is to identify opportunities to increase software quality while reducing overall maintenance needs, from both software and hardware perspectives, and to recommend how to do so. To perform the assessment, several major categories are reviewed, each with a weighted measurement factor, and the results are rolled up into one final summary.

General Software Architecture Assessment Categories

This outline defines each of the software assessment categories evaluated. Each section contains an explanation of the category as it applies to this document, a list of design/architectural principles evaluated, measurement factors, and a score summary.

This outline applies only a high-level software architecture assessment. The measurement factors would most likely be evaluated as part of a more detailed architectural analysis; for the purpose of this assessment they were considered, but not formally evaluated across all code modules.

Performance

The performance of an application generally describes how well it responds under load, including how it handles simultaneous requests or events. Performance can also be viewed as how quickly the application responds within a given interval of time, typically measured by throughput and response time.
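
As a brief, hypothetical sketch of the caching principle listed below, the following C# fragment caches the result of an expensive lookup so repeated requests avoid redundant work; the ProductCatalog class and its database call are placeholders, not part of the assessed system.

```csharp
using System;
using System.Collections.Concurrent;

// Hypothetical example: caches expensive lookups in memory so that
// repeated requests for the same key avoid a round-trip to the database.
public class ProductCatalog
{
    private readonly ConcurrentDictionary<int, string> _cache =
        new ConcurrentDictionary<int, string>();

    public string GetProductName(int productId)
    {
        // GetOrAdd only invokes the loader when the key is not cached yet.
        return _cache.GetOrAdd(productId, LoadProductNameFromDatabase);
    }

    private static string LoadProductNameFromDatabase(int productId)
    {
        // Placeholder for a real (slow) database call.
        return "Product " + productId;
    }
}
```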

Design/Architectural Principles Evaluated

• Connection Pooling

• Load Balancing

• Distributed Processing

• Caching

• Object Instantiation

• Transaction concurrency

• Process Isolation

• Replication of Data

Measurement Factors

• Transactions per unit time

• Amount of time it takes to complete a transaction

Reliability

Application reliability represents how well the system continues to operate over time in the presence of application and system errors, as well as unexpected or incorrect usage. Reliability can also be viewed as how predictably the system behaves under a known set of operating conditions.
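
A minimal sketch of the database-rollback measure listed below, assuming a hypothetical TransferService with placeholder data-access calls: both operations are wrapped in a System.Transactions scope so that a failure in either rolls both back.

```csharp
using System.Transactions;

public class TransferService
{
    // Hypothetical example: both operations commit together or not at all.
    public void Transfer(int fromAccount, int toAccount, decimal amount)
    {
        using (var scope = new TransactionScope())
        {
            DebitAccount(fromAccount, amount);   // placeholder data-access call
            CreditAccount(toAccount, amount);    // placeholder data-access call

            // If an exception is thrown before this point, the scope is
            // disposed without Complete() and the transaction rolls back.
            scope.Complete();
        }
    }

    private void DebitAccount(int account, decimal amount) { /* ... */ }
    private void CreditAccount(int account, decimal amount) { /* ... */ }
}
```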

Design/Architectural Principles Evaluated

• Using preventive measures
o Recycling of server processes in IIS 6.0 ASP.NET / COM+ 1.5
o Containment - COM+ server process isolation
o Database transaction logs (rollback)

Measurement Factors

• Mean-time to failure

Availability

Availability refers to the ability of the user community to access the system, whether to submit new work, update or alter existing work, or collect the results of previous work. If a user cannot access the system, it is said to be unavailable. Generally, the term downtime is used to refer to periods when a system is unavailable.
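
As a simple, hypothetical illustration of the fail-over principle below, the client tries a primary endpoint and falls back to a secondary one when the first call fails; the endpoint addresses and the CallService method are assumptions made for the sketch.

```csharp
using System;

public class OrderServiceClient
{
    // Hypothetical endpoints; in practice these would come from configuration.
    private static readonly string[] Endpoints =
    {
        "http://primary.example.com/orders",
        "http://standby.example.com/orders"
    };

    public string GetOrderStatus(int orderId)
    {
        Exception lastError = null;
        foreach (var endpoint in Endpoints)
        {
            try
            {
                return CallService(endpoint, orderId); // placeholder remote call
            }
            catch (Exception ex)
            {
                lastError = ex; // remember the failure and try the next endpoint
            }
        }
        throw new InvalidOperationException("All endpoints failed.", lastError);
    }

    private string CallService(string endpoint, int orderId)
    {
        // Placeholder for an actual remote call (e.g., a web-service proxy).
        throw new NotImplementedException();
    }
}
```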

Design/Architectural Principles Evaluated

• Fail-over

• Transaction Manager

• Stateless Design

Measurement Factors

• Length of time between failures

• How quickly the system is able to resume operation in the event of failure.

Security

Application security encompasses measures taken to prevent exceptions in the security policy of an application or the underlying system (vulnerabilities) through flaws in the design, development, or deployment of the application.

Applications control only the use of resources granted to them, not which resources are granted to them. Application security, in turn, determines how users of the application may use those resources.
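
A short sketch of the authorization principle below using standard .NET role-based security; the service class and role names are hypothetical.

```csharp
using System.Security.Permissions;
using System.Threading;

public class PayrollService
{
    // Declarative role-based authorization: the call is rejected unless the
    // current principal is in the "PayrollAdministrators" role (hypothetical role name).
    [PrincipalPermission(SecurityAction.Demand, Role = "PayrollAdministrators")]
    public void ApprovePayRun(int payRunId)
    {
        // ...
    }

    public bool CanViewReports()
    {
        // Imperative check against the same principal (hypothetical role name).
        return Thread.CurrentPrincipal != null &&
               Thread.CurrentPrincipal.IsInRole("PayrollReaders");
    }
}
```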

Design/Architectural Principles Evaluated

• Authorization

• Authentication

• Auditing

• Integrity

• Confidentiality

• Denial-of-service

• Data Isolation

Measurement Factors

• N/A

Portability

Portability is one of the key concepts of high-level programming. It is the ability of a codebase to be reused, rather than rewritten, when software is moved from one environment to another. The prerequisite for portability is a generalized abstraction between the application logic and the system interfaces. When the same application targets several platforms, portability is a key factor in reducing development cost.
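
As a hypothetical sketch of that abstraction, application logic can depend on an interface such as the IDocumentStore below, while each target environment supplies its own implementation.

```csharp
using System.IO;

// Application logic depends only on this abstraction, not on a specific
// file system or platform API (hypothetical interface for illustration).
public interface IDocumentStore
{
    void Save(string name, byte[] content);
    byte[] Load(string name);
}

// One environment-specific implementation; another environment could supply
// a database- or cloud-backed implementation without changing callers.
public class LocalDiskDocumentStore : IDocumentStore
{
    private readonly string _rootPath;

    public LocalDiskDocumentStore(string rootPath)
    {
        _rootPath = rootPath;
    }

    public void Save(string name, byte[] content)
    {
        File.WriteAllBytes(Path.Combine(_rootPath, name), content);
    }

    public byte[] Load(string name)
    {
        return File.ReadAllBytes(Path.Combine(_rootPath, name));
    }
}
```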

Design/Architectural Principles Evaluated

• Virtual machines

• Functionality

Measurement Factors

• Number of change requests

Change Management

In systems engineering, change management is the process of requesting changes to a system, determining their attainability, and planning, implementing, and evaluating them. Change management has two main goals: supporting the processing of changes and enabling the traceability of changes, both of which should be possible through proper execution of the process.
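
A minimal sketch, assuming a hypothetical ITaxCalculator interface, of how keeping the interface independent of its implementation and discovering the implementation at runtime allows a change to be made through configuration rather than code.

```csharp
using System;
using System.Configuration;

public interface ITaxCalculator
{
    decimal Calculate(decimal amount);
}

public static class TaxCalculatorFactory
{
    // The implementing type is named in app.config / web.config, e.g.
    //   <add key="TaxCalculatorType" value="MyCompany.Taxes.UkTaxCalculator, MyCompany.Taxes" />
    // (hypothetical key and type), so swapping implementations is a
    // configuration change, not a code change.
    public static ITaxCalculator Create()
    {
        string typeName = ConfigurationManager.AppSettings["TaxCalculatorType"];
        Type type = Type.GetType(typeName, true); // throw if the type cannot be found
        return (ITaxCalculator)Activator.CreateInstance(type);
    }
}
```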

Design/Architectural Principles Evaluated

• Client-Server

• Independence of interface from implementation

• Strategy Separation

• Encoding function into data (meta-data) and language interpreters

• Runtime Discovery

Measurement Factors

• Using specific changes as benchmarks and recording how expensive those changes are to implement

Extensibility

In software engineering, extensibility (sometimes confused with forward compatibility) is a system design principle where the implementation takes into consideration future growth. It is a systemic measure of the ability to extend a system and the level of effort required to implement the extension. Extensions can be through the addition of new functionality or through modification of existing functionality. The central theme is to provide for change while minimizing impact to existing system functions.

In systems architecture, extensibility means the system is designed to include hooks and mechanisms for expanding/enhancing the system with new capabilities without having to make major changes to the system infrastructure. A good architecture provides the design principles to ensure this—a roadmap for that portion of the road yet to be built. Note that this usually means that capabilities and mechanisms must be built into the final delivery, which will not be used in that delivery and, indeed, may never be used. These excess capabilities are not frills, but are necessary for maintainability and for avoiding early obsolescence.
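
A minimal, hypothetical sketch of such a hook: an extension point defined as an interface plus a registration mechanism, so new processing steps can be added without modifying the existing pipeline.

```csharp
using System.Collections.Generic;

// Hypothetical extension point: new steps can be added to the pipeline
// without modifying existing ones.
public interface IOrderProcessingStep
{
    void Execute(Order order);
}

public class Order { /* ... */ }

public class OrderPipeline
{
    private readonly List<IOrderProcessingStep> _steps = new List<IOrderProcessingStep>();

    // New capabilities register themselves here (or via configuration/reflection).
    public void RegisterStep(IOrderProcessingStep step)
    {
        _steps.Add(step);
    }

    public void Process(Order order)
    {
        foreach (var step in _steps)
        {
            step.Execute(order);
        }
    }
}
```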

Design/Architectural Principles Evaluated

• Easy incremental additions of functionality

• Coupling/cohesion

• Conceptual Integrity

Measurement Factors

• N/A

Interoperability

Interoperability is a property referring to the ability of diverse systems and organizations to work together (inter-operate). The term is often used in a technical systems engineering sense, or alternatively in a broad sense, taking into account social, political, and organizational factors that impact system-to-system performance.
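
As a brief sketch of the Web Services / Windows Communication Foundation items below, a hypothetical WCF contract built from simple data types and explicit data contracts keeps the wire format consumable by non-.NET systems.

```csharp
using System.Runtime.Serialization;
using System.ServiceModel;

// Hypothetical contract: simple data types and explicit data contracts keep
// the wire format easy for non-.NET clients to consume.
[DataContract]
public class CustomerSummary
{
    [DataMember] public int Id { get; set; }
    [DataMember] public string Name { get; set; }
}

[ServiceContract]
public interface ICustomerService
{
    [OperationContract]
    CustomerSummary GetCustomer(int id);
}
```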

Design/Architectural Principles Evaluated

• Simple data-types

• XML

• RSS

• Web Services

• Windows Communication Foundation

• .Net Remoting

Measurement Factors

• General overview of service oriented architecture

Usability and Standards

Usability and software standards enable software to interoperate seamlessly and cohesively. Many conventions are somewhat arbitrary, so what matters most is that everyone within an organization agrees on what they are and what they represent. Consistent usability and software standards remain one of the unsolved problems in software engineering.

The key factor evaluated is the incorrect or inconsistent implementation of standards or specifications. In many organizations this results in implementation-specific code and special-case exceptions as a necessity for cross-platform interoperability; notable modern examples include web browser compatibility and web-services interoperability. The arbitrariness of many software conventions stems from historical hardware and software implementations, the lack of common worldwide standards, and economic pressures.

Design/Architectural Principles Evaluated

• User Interface Standards

• Coding Standards

• Deployment Standards

• Security Standards

• Database Standards

• Service Oriented Architecture Standards

Measurement Factors

• Number of errors made by a user familiar with prior releases or other members of the product line

Maintainability

Maintainability, based on the definition given in ISO 9126, is the ease with which a software product can be modified in order to correct defects, meet new requirements, make future maintenance easier, or cope with a changed environment.
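
A small sketch of the localization/globalization principles below, assuming a hypothetical MyApp.Strings resource file: user-facing text is looked up through a ResourceManager, so wording changes and new languages require no code change.

```csharp
using System.Globalization;
using System.Reflection;
using System.Resources;

public static class Messages
{
    // Hypothetical resource base name; strings live in .resx files per culture
    // (Strings.resx, Strings.fr-FR.resx, ...), so adding a language needs no code change.
    private static readonly ResourceManager Resources =
        new ResourceManager("MyApp.Strings", Assembly.GetExecutingAssembly());

    public static string Get(string key)
    {
        return Resources.GetString(key, CultureInfo.CurrentUICulture);
    }
}
```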

Design/Architectural Principles Evaluated

• Localization

• Globalization

• Effects of change

Measurement Factors

• N/A

Efficiency

In software engineering, the efficiency of an application describes how well it uses system resources (CPU, memory, connections, network bandwidth) relative to the work it performs. Efficiency is commonly addressed through careful resource management, for example acquiring expensive resources only when they are needed and releasing them as soon as they are no longer required.
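
The "acquire late, release early" principle below, sketched against a hypothetical Orders table: the database connection is opened just before it is needed and disposed as soon as the query completes.

```csharp
using System.Data.SqlClient;

public class OrderCounter
{
    // "Acquire late, release early": the connection is opened just before it is
    // needed and disposed as soon as the query completes.
    public int CountOpenOrders(string connectionString)
    {
        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(
            "SELECT COUNT(*) FROM Orders WHERE Status = 'Open'", connection))
        {
            connection.Open();
            return (int)command.ExecuteScalar();
        }
    }
}
```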

Design/Architectural Principles Evaluated

• Acquire late, release early

• Reducing round-trips

• Lowering traffic throughput

Measurement Factors

• N/A

Testability

The testability of an application refers to how easy it is to test and validate the code as a unit, sub-system, or complete application. The availability of key tools, such as automated unit-testing frameworks, and the ability to apply white-box and black-box testing are also considered when evaluating the overall testability of a particular system.

The concept comes from hardware manufacturing, where tests are applied at several steps in the manufacturing flow and, for certain products, may also be used for maintenance in the customer's environment. Those tests are generally driven by test programs that execute in Automatic Test Equipment (ATE) or, in the case of system maintenance, inside the assembled system itself. In addition to indicating the presence of defects (i.e., a test fails), tests may log diagnostic information about the nature of the failure, which can then be used to locate its source. The same ideas carry over to software: automated test suites detect defects, and the diagnostic output they produce helps locate the source of a failure.

Testability therefore plays an important role in the development of test programs and as an interface for test application and diagnostics. In hardware, automatic test pattern generation (ATPG) is much easier if appropriate testability rules and suggestions have been implemented; in software, writing automated tests is likewise much easier if the code follows testability principles such as those evaluated below.
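
A minimal sketch of interface-based programming and dependency injection from the list below: the class under test receives a hypothetical IClock abstraction through its constructor, so a unit test can substitute a deterministic fake (shown here NUnit-style; any unit-test framework would work).

```csharp
using System;

// The class under test depends on an abstraction, injected via the constructor.
public interface IClock
{
    DateTime UtcNow { get; }
}

public class InvoiceService
{
    private readonly IClock _clock;

    public InvoiceService(IClock clock)
    {
        _clock = clock;
    }

    public bool IsOverdue(DateTime dueDateUtc)
    {
        return _clock.UtcNow > dueDateUtc;
    }
}

// A hand-written fake makes the behavior deterministic in a unit test.
public class FakeClock : IClock
{
    public DateTime UtcNow { get; set; }
}

[NUnit.Framework.TestFixture]
public class InvoiceServiceTests
{
    [NUnit.Framework.Test]
    public void Invoice_past_due_date_is_overdue()
    {
        var service = new InvoiceService(new FakeClock { UtcNow = new DateTime(2010, 4, 18) });
        NUnit.Framework.Assert.IsTrue(service.IsOverdue(new DateTime(2010, 4, 1)));
    }
}
```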

Design/Architectural Principles Evaluated

• Test plans

• Code implemented unit test scripts

• Build server automation

• Interface-based programming

• Inversion of control/Dependency injection

• Classes with well defined responsibilities

Measurement Factors

• N/A

Reusability

In computer science and software engineering, reusability is the likelihood that a segment of source code can be used again to add new functionality with little or no modification. Reusable modules and classes reduce implementation time, increase the likelihood that prior testing and use have eliminated bugs, and localize code modifications when a change in implementation is required.

Subroutines or functions are the simplest form of reuse. Code is commonly organized into layers using modules or namespaces. Proponents claim that objects and software components offer a more advanced form of reusability, although it has been difficult to objectively measure and define levels or scores of reusability.

The ability to reuse relies in an essential way on the ability to build larger things from smaller parts, and being able to identify commonalities among those parts. Reusability is often a required characteristic of platform software.

Reusability implies some explicit management of build, packaging, distribution, installation, configuration, deployment, and maintenance and upgrade issues. Software reusability more specifically refers to design features of a software element (or collection of software elements) that enhance its suitability for reuse.
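
As a hypothetical sketch of a reusable building block, a single generic repository interface can serve every business object, so the data-access plumbing is written once and reused.

```csharp
using System.Collections.Generic;

// Hypothetical reusable abstraction: the same interface serves every business
// entity, so data-access plumbing is written once and reused.
public interface IRepository<T>
{
    T GetById(int id);
    IEnumerable<T> GetAll();
    void Add(T item);
    void Remove(T item);
}

// A concrete implementation (ADO.NET, Enterprise Library, an ORM, ...) could be
// reused unchanged by Customer, Order, Product, and other business objects, e.g.:
//   IRepository<Customer> customers = new SqlRepository<Customer>(connectionString);
// (SqlRepository is a hypothetical class named here only for illustration.)
```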

Design/Architectural Principles Evaluated

• Code components are reusable

• Use Enterprise Libraries

• Use stored procedures

• Reuse of User Controls

• Reuse of Web User Controls

• Use of common services

• Use of business objects

Measurement Factors

• N/A


Ease of deployment

Software deployment comprises all of the activities that make a software system available for use. The general deployment process consists of several interrelated activities with possible transitions between them; these activities can occur at the producer site, at the consumer site, or both. Because every software system is unique, the precise processes or procedures within each activity are difficult to define in general. "Ease of deployment" therefore refers to how readily this general process can be customized to the specific requirements and characteristics of the software being deployed.

Design/Architectural Principles Evaluated

• Deployment mechanism

• Installation programs

• Automated updates

• Hot-fix deployment

Measurement Factors

• Can be measured by the time and resources required to install the product and/or distribute a new unit of functionality

Ease of administration

The ease of administration refers to the infrastructure, tools, and staff of administrators and technicians needed to maintain the health of the application. It includes, for example, the ability to change the physical location of services with minimal impact on the rest of the system.
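
A small sketch, assuming a hypothetical ReportingServiceUrl setting: reading service locations from configuration lets administrators relocate a service by editing a configuration file instead of recompiling code.

```csharp
using System;
using System.Configuration;

public static class ServiceLocations
{
    // Hypothetical setting, e.g. in web.config:
    //   <add key="ReportingServiceUrl" value="http://reports01.internal/reporting.svc" />
    // Administrators can move the service to another host by editing configuration only.
    public static Uri ReportingService
    {
        get { return new Uri(ConfigurationManager.AppSettings["ReportingServiceUrl"]); }
    }
}
```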

Design/Architectural Principles Evaluated

• N/A

Measurement Factors

• Decreased Support Cost: can be measured by comparing number of help desk calls for a standard period of time

Scalability

In software engineering scalability is a desirable property of a system, a network, or a process, which indicates its ability to handle growing amounts of work in a graceful manner, or to be readily enlarged. For example, it can refer to the capability of a system to increase total throughput under an increased load when resources (typically hardware) are added.

Scalability, as a property of systems, is generally difficult to define; in any particular case it is necessary to define the specific requirements for scalability along the dimensions that are deemed important. An algorithm, design, networking protocol, program, or other system is said to scale if it is suitably efficient and practical when applied to large situations (e.g. a large input data set or a large number of participating nodes in the case of a distributed system). If the design fails when the quantity increases, then it does not scale.

A system that can support more users, increased demand, and greater business complexity while maintaining the same level of performance is considered scalable. The system must also be able to extend beyond its minimum hardware configuration, using additional hardware to support increased workloads.
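
A brief sketch of the optimistic concurrency principle below, assuming a hypothetical Customers table with a Version column: the update succeeds only if the row is unchanged since it was read, so no locks are held between read and write.

```csharp
using System.Data.SqlClient;

public class CustomerWriter
{
    // Optimistic concurrency: the UPDATE succeeds only if the row still has the
    // version the caller originally read; otherwise another user changed it first.
    public bool UpdateEmail(string connectionString, int customerId, string email, int expectedVersion)
    {
        const string sql =
            "UPDATE Customers SET Email = @email, Version = Version + 1 " +
            "WHERE Id = @id AND Version = @version";

        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(sql, connection))
        {
            command.Parameters.AddWithValue("@email", email);
            command.Parameters.AddWithValue("@id", customerId);
            command.Parameters.AddWithValue("@version", expectedVersion);

            connection.Open();
            return command.ExecuteNonQuery() == 1; // false => concurrency conflict
        }
    }
}
```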

Design/Architectural Principles Evaluated

• Stateless design

• Load-balancing

• Concurrency (optimistic)

• Serialization

Measurement Factors

• N/A

Debug-ability / Monitoring

Debug-ability and monitoring refer to design techniques that build diagnostic visibility into an application. The premise of the added features is that they make it easier to detect, locate, and correct defects and to verify that the system in production is functioning correctly; examples include tracing, logging within exception handling, and alerting or notification when failures occur.
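
A minimal sketch of the tracing and exception-logging principles below, using System.Diagnostics.Trace around a hypothetical import job.

```csharp
using System;
using System.Diagnostics;

public class ImportJob
{
    public void Run(string fileName)
    {
        Trace.TraceInformation("Import started for {0}", fileName);
        try
        {
            ProcessFile(fileName); // placeholder for the real work
            Trace.TraceInformation("Import finished for {0}", fileName);
        }
        catch (Exception ex)
        {
            // Log enough context to locate the failure, then rethrow so the
            // caller (or an alerting mechanism) can react.
            Trace.TraceError("Import failed for {0}: {1}", fileName, ex);
            throw;
        }
    }

    private void ProcessFile(string fileName) { /* ... */ }
}
```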

Design/Architectural Principles Evaluated

• Tracing support

• Logging in exception handling mechanism

• Alerting/notification mechanism
