Good Software Development

Back in March 2014 I wrote a post on the key points of good software development. I decided to keep improving the document and to make it available as a separate page for ease of use. So here you will find the latest version of my own little project, in which I have studied and gathered information on good software development. I use this personally as a “quick reference”.


Good software development
This document describes good practices and methods for good software development. The information here is gathered from different sources, and I, the writer of this document, have tried to make the references to the original sources as accurate as possible. I hope this helps someone; if there are problems or errors in this document, you can report them through my blog and I will do my best to fix them. Use this information as you see fit, at your own discretion.



Good software development

Key Points of good software development

Things to take into consideration with Design

Best coding practices

Software quality

Code-based analysis

Reliability

Efficiency

Security

Maintainability

Size

Identifying critical programming errors

Prerequisites

Life cycle

Requirements

Architecture

Design

Choice of programming language(s)

Coding standards

Commenting

Naming conventions

Keep the code simple

Portability

Code development

Code building

Testing

Debugging the code and correcting errors

Guidelines in brief

Deployment

Programming methods and concepts

Objects

Loose and Tight Coupling

Loose Coupling

Tight Coupling

Encapsulation

Delegation

Abstraction

Interface

Encapsulation

Change

Cohesive (or Single Responsibility Principle)

Collections

Inheritance

Summary

Model View Controller – MVC (Design Pattern)

View

Controllers

Models

Design Patterns

Creational Design Patterns

Behavioral Design Patterns

Structural Design Patterns

Game Programming Patterns

Design Patterns

Sequencing Patterns

Behavioral Patterns

Decoupling Patterns

Optimization Patterns

Anti Patterns

Software Development AntiPatterns

Software Architecture AntiPatterns

Software Project Management AntiPatterns

Refactoring

Composing Methods

Extract Method

Inline Method

Inline Temp

Replace Temp with Query

Moving Features Between Objects

Organizing Data

Simplifying Conditional Expressions

Making Method Calls Simpler

Dealing with Generalization

Big Refactorings

Design, Analysis and Architecture

Key points of software design

Design principles

Principle #1: The Open-Closed Principle (OCP)

Principle #2: The Don’t Repeat Yourself Principle (DRY)

Principle #3: The Single Responsibility Principle (SRP)

Principle #4: The Liskov Substitution Principle (LSP)

Alternatives to inheritance

Delegation

Composition

Aggregation

Alternatives to inheritance – Summary

Principle #5: Interface Segregation Principle (ISP)

Principle #6: Dependency Inversion Principle (DIP)

Summary

Functionalities, requirements and analysis

Key points of requirements

What is a requirement?

Requirements list

Use case

Summary – Use Cases

Requirements change

Alternate Paths and Scenarios

Key points to requirement changes

Analysis

Key points to analysis

Textual analysis

Summary

Class Diagrams

Big applications and problems

Solving big problems

Two things to take into consideration with big problems

Things to find out in your system

Features

Use case diagrams

Use cases reflect usage, features reflect functionality

Domain Analysis

Summary

Architecture

First step – Functionality

Questions to ask

Summary

Iterating, Testing and Contracts

Writing test scenarios

What makes a good test case

Anatomy of a Test Case

Types Of Test cases

Software Testing Techniques and Methods

Types of performance testing

A sample testing cycle

Programming by contract

Summary

The Lifecycle of software development

Resources

Other programming resources

Database related

Object-relational impedance mismatch




Good software development


Key Points of good software development

  • Does what the customer wants it to do; the application is stable and upgradable
  • The software is stable and works no matter what the customer might do
  • No code duplication
  • Each object controls its own behavior
  • Extendable code with a solid and flexible design
  • Usage of design patterns and principles
  • Keeping objects loosely coupled
  • Code that is open for extension but closed for modification; this makes code reusable
  • Object oriented analysis and design (OOA&D) provides a way to produce well-designed applications that satisfy both the customer and the programmer.
  • Find the parts of your application that change often, and try to separate them from the parts of your application that don’t change.
  • Building an application that works well but is poorly designed satisfies the customer but will leave you with possible problems that will take time and energy to fix.
  • One of the best ways to see if software is well-designed is to try to change it.
  • If your software is hard to change, there’s probably something you can improve about the design.



  • Software must satisfy the customer. The software must do what the customer wants it to do.
  • Software is well-designed, well-coded, and easy to maintain, reuse, and extend.
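The “open for extension but closed for modification” and loose-coupling points above can be sketched in code. This is a minimal illustration, not from the original text; the `Exporter` classes and `save_report` function are invented names:

```python
import json
from abc import ABC, abstractmethod

class Exporter(ABC):
    """Abstraction the calling code depends on, keeping coupling loose."""
    @abstractmethod
    def export(self, data: dict) -> str: ...

class JsonExporter(Exporter):
    def export(self, data: dict) -> str:
        return json.dumps(data)

class CsvLineExporter(Exporter):
    def export(self, data: dict) -> str:
        return ",".join(f"{k}={v}" for k, v in data.items())

def save_report(data: dict, exporter: Exporter) -> str:
    # Closed for modification: supporting a new format means adding a new
    # Exporter subclass, not editing this function.
    return exporter.export(data)

print(save_report({"a": 1}, JsonExporter()))     # {"a": 1}
print(save_report({"a": 1}, CsvLineExporter()))  # a=1
```

The part that changes often (the output format) is separated behind an interface from the part that doesn’t (the report-saving logic).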

Things to take into consideration with Design

  • Make sure that the application works like it should before you dive into applying design patterns or trying to do any real restructuring of how the application is put together. Too much design before basic functionality is done can be wasted effort, since a lot of the design may change as you add new functionality.
  • A functional and flexible design allows you to employ design patterns to improve your design further and make your application easier to reuse.
  • Begin a project by figuring out what the customer wants.
  • Once basic functionalities are in place, work on refining the design so it’s flexible.
  • Use Object Oriented principles like encapsulation and delegation to build applications that are flexible.
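The delegation principle mentioned above can be sketched as follows; the `Car`/`Engine` names are a hypothetical example, not from the text:

```python
class Engine:
    def start(self) -> str:
        return "engine running"

class Car:
    """Car does not inherit from Engine; it delegates to one.

    Behavior can be changed by handing Car a different engine object,
    which keeps the two classes loosely coupled.
    """
    def __init__(self, engine: Engine):
        self._engine = engine  # encapsulated: callers never touch it directly

    def start(self) -> str:
        return self._engine.start()  # delegation

print(Car(Engine()).start())  # engine running
```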

Best coding practices

Best coding practices are a set of informal rules that the software development community has learned over time which can help improve the quality of software.[1]

Many computer programs remain in use for far longer than the original authors ever envisaged (sometimes 40 years or more),[2] so any rules need to facilitate both initial development and subsequent maintenance and enhancement by people other than the original authors.

In the ninety-ninety rule, Tom Cargill is credited with this explanation as to why programming projects often run late: “The first 90% of the code accounts for the first 90% of the development time. The remaining 10% of the code accounts for the other 90% of the development time.” Any guidance which can redress this lack of foresight is worth considering.

The size of a project or program has a significant effect on error rates, programmer productivity, and the amount of management needed.[3]

Software quality

Main article: Software quality

As listed below, there are many attributes associated with good software. Some of these can be mutually contradictory (e.g. very fast versus full error checking), and different customers and participants may have different priorities. Weinberg provides an example of how different goals can have a dramatic effect on both effort required and efficiency.[4] Furthermore, he notes that programmers will generally aim to achieve any explicit goals which may be set, probably at the expense of any other quality attributes.

Sommerville has identified four generalised attributes which are not concerned with what a program does, but how well the program does it:[5]

  • Maintainability
  • Dependability
  • Efficiency
  • Usability

Weinberg has identified four targets which a good program should meet:[6]

  • Does a program meet its specification; “correct output for each possible input”?
  • Is the program produced on schedule (and within budget)?
  • How adaptable is the program to cope with changing requirements?
  • Is the program efficient enough for the environment in which it is used?

Hoare has identified seventeen objectives related to software quality, including:[7]

  • Clear definition of purpose.
  • Simplicity of use.
  • Ruggedness (difficult to misuse, kind to errors).
  • Early availability (delivered on time when needed).
  • Extensibility in the light of experience.
  • Efficiency (fast enough for the purpose to which it is put).
  • Minimum cost to develop.
  • Conformity to any relevant standards.
  • Clear, accurate, and precise user documents.

Software quality measurement is about quantifying to what extent a system or software possesses desirable characteristics. This can be performed through qualitative or quantitative means, or a mix of both. In either case, for each desirable characteristic there is a set of measurable attributes whose presence in a piece of software or system tends to be correlated and associated with that characteristic. For example, an attribute associated with portability is the number of target-dependent statements in a program. More precisely, using the Quality Function Deployment approach, these measurable attributes are the “hows” that need to be enforced to enable the “whats” in the Software Quality definition above.

The structure, classification and terminology of attributes and metrics applicable to software quality management have been derived or extracted from the ISO 9126-3 and the subsequent ISO/IEC 25000:2005 quality model. The main focus is on internal structural quality. Subcategories have been created to handle specific areas like business application architecture and technical characteristics such as data access and manipulation or the notion of transactions.

There is a dependence tree between software quality characteristics and their measurable attributes: each of the five characteristics that matter for the user or owner of the business system depends on measurable attributes such as:

  • Application Architecture Practices
  • Coding Practices
  • Application Complexity
  • Documentation
  • Portability
  • Technical & Functional Volume

One of the founding members of the Consortium for IT Software Quality, the OMG (Object Management Group), has published an article on “How to Deliver Resilient, Secure, Efficient, and Easily Changed IT Systems in Line with CISQ Recommendations”, which states that correlations between programming errors and production defects show that basic code errors account for 92% of the total errors in the source code. These numerous code-level issues eventually count for only 10% of the defects in production. Bad software engineering practices at the architecture level account for only 8% of total defects, but consume over half the effort spent on fixing problems, and lead to 90% of the serious reliability, security, and efficiency issues in production.[24]

Code-based analysis

Many of the existing software measures count structural elements of the application that result from parsing the source code, such as individual instructions (Park, 1992),[25] tokens (Halstead, 1977),[26] control structures (McCabe, 1976), and objects (Chidamber & Kemerer, 1994).[27]

Software quality measurement is about quantifying to what extent a system or software rates along these dimensions. The analysis can be performed using a qualitative or quantitative approach or a mix of both to provide an aggregate view [using for example weighted average(s) that reflect relative importance between the factors being measured].
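As a toy sketch of such a weighted aggregate view (the factor names, scores, and weights below are invented purely for illustration):

```python
# Hypothetical per-factor quality scores (0-100) and relative weights
# reflecting the importance of each factor to this particular business.
scores = {"reliability": 80, "security": 60, "maintainability": 70}
weights = {"reliability": 0.5, "security": 0.3, "maintainability": 0.2}

# Weighted average: each factor contributes in proportion to its weight.
aggregate = sum(scores[f] * weights[f] for f in scores) / sum(weights.values())
print(round(aggregate, 1))  # 72.0
```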

This view of software quality on a linear continuum has to be supplemented by the identification of discrete Critical Programming Errors. These vulnerabilities may not fail a test case, but they are the result of bad practices that under specific circumstances can lead to catastrophic outages, performance degradations, security breaches, corrupted data, and myriad other problems (Nygard, 2007)[28] that make a given system de facto unsuitable for use regardless of its rating based on aggregated measurements. A well-known example of vulnerability is the Common Weakness Enumeration (Martin, 2001),[29] a repository of vulnerabilities in the source code that make applications exposed to security breaches.

The measurement of critical application characteristics involves measuring structural attributes of the application’s architecture, coding, and in-line documentation. Each characteristic is thus affected by attributes at numerous levels of abstraction in the application, and all of them must be included in calculating the characteristic’s measure if it is to be a valuable predictor of quality outcomes that affect the business. This layered approach to calculating characteristic measures was first proposed by Boehm and his colleagues at TRW (Boehm, 1978)[30] and is the approach taken in the ISO 9126 and 25000 series standards. These attributes can be measured from the parsed results of a static analysis of the application source code. Even dynamic characteristics of applications such as reliability and performance efficiency have their causal roots in the static structure of the application.

Structural quality analysis and measurement is performed through the analysis of the source code, architecture, software frameworks, and database schema, in relation to principles and standards that together define the conceptual and logical architecture of a system. This is distinct from the basic, local, component-level code analysis typically performed by development tools, which are mostly concerned with implementation considerations and are crucial during debugging and testing activities.


Reliability

The root causes of poor reliability are found in a combination of non-compliance with good architectural and coding practices. This non-compliance can be detected by measuring the static quality attributes of an application. Assessing the static attributes underlying an application’s reliability provides an estimate of the level of business risk and the likelihood of potential application failures and defects the application will experience when placed in operation.

Assessing reliability requires checks of at least the following software engineering best practices and technical attributes:

  • Application Architecture Practices
  • Coding Practices
  • Complexity of algorithms
  • Complexity of programming practices
  • Compliance with Object-Oriented and Structured Programming best practices (when applicable)
  • Component or pattern re-use ratio
  • Dirty programming
  • Error & Exception handling (for all layers – GUI, Logic & Data)
  • Multi-layer design compliance
  • Resource bounds management
  • Software avoids patterns that will lead to unexpected behaviors
  • Software manages data integrity and consistency
  • Transaction complexity level

Depending on the application architecture and the third-party components used (such as external libraries or frameworks), custom checks should be defined along the lines drawn by the above list of best practices to ensure a better assessment of the reliability of the delivered software.


Efficiency

As with Reliability, the causes of performance inefficiency are often found in violations of good architectural and coding practice which can be detected by measuring the static quality attributes of an application. These static attributes predict potential operational performance bottlenecks and future scalability problems, especially for applications requiring high execution speed for handling complex algorithms or huge volumes of data.

Assessing performance efficiency requires checking at least the following software engineering best practices and technical attributes:

  • Application Architecture Practices
  • Appropriate interactions with expensive and/or remote resources
  • Data access performance and data management
  • Memory, network and disk space management
  • Coding Practices
  • Compliance with Object-Oriented and Structured Programming best practices (as appropriate)
  • Compliance with SQL programming best practices


Security

Most security vulnerabilities result from poor coding and architectural practices such as SQL injection or cross-site scripting. These are well documented in lists maintained by CWE,[31] and the SEI/Computer Emergency Center (CERT) at Carnegie Mellon University.

Assessing security requires at least checking the following software engineering best practices and technical attributes:

  • Application Architecture Practices
  • Multi-layer design compliance
  • Security best practices (Input Validation, SQL Injection, Cross-Site Scripting, etc.[32] )
  • Programming Practices (code level)
  • Error & Exception handling
  • Security best practices (system functions access, access control to programs)


Maintainability

Maintainability includes concepts of modularity, understandability, changeability, testability, reusability, and transferability from one development team to another. These do not take the form of critical issues at the code level. Rather, poor maintainability is typically the result of thousands of minor violations of best practices in documentation, complexity avoidance strategy, and basic programming practices that make the difference between clean and easy-to-read code vs. unorganized and difficult-to-read code.[33]

Assessing maintainability requires checking the following software engineering best practices and technical attributes:

  • Application Architecture Practices
  • Architecture, Programs and Code documentation embedded in source code
  • Code readability
  • Complexity level of transactions
  • Complexity of algorithms
  • Complexity of programming practices
  • Compliance with Object-Oriented and Structured Programming best practices (when applicable)
  • Component or pattern re-use ratio
  • Controlled level of dynamic coding
  • Coupling ratio
  • Dirty programming
  • Documentation
  • Hardware, OS, middleware, software components and database independence
  • Multi-layer design compliance
  • Portability
  • Programming Practices (code level)
  • Reduced duplicated code and functions
  • Source code file organization cleanliness

Maintainability is closely related to Ward Cunningham’s concept of technical debt, which is an expression of the costs resulting from a lack of maintainability. Reasons why maintainability is low can be classified as reckless vs. prudent and deliberate vs. inadvertent,[34] and often have their origin in developers’ lack of ability, time, or goals, in carelessness, and in discrepancies between the cost of creating and the benefits of documentation and, in particular, maintainable source code.[35]


Size

Measuring software size requires that the whole source code be correctly gathered, including database structure scripts, data manipulation source code, component headers, configuration files, etc. There are essentially two types of software size to be measured: the technical size (footprint) and the functional size:

  • There are several software technical sizing methods that have been widely described. The most common technical sizing method is number of Lines Of Code (#LOC) per technology, number of files, functions, classes, tables, etc., from which backfiring Function Points can be computed;
  • The most common method for measuring functional size is Function Point Analysis, which measures the size of the software deliverable from a user’s perspective. Function Point sizing is done based on user requirements; it provides an accurate representation of both size for the developer/estimator and value (functionality to be delivered), and reflects the business functionality being delivered to the customer. The method includes the identification and weighting of user-recognizable inputs, outputs, and data stores. The size value is then available for use in conjunction with numerous measures to quantify and evaluate software delivery and performance (e.g. Development Cost per Function Point, Delivered Defects per Function Point, Function Points per Staff Month).
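An unadjusted Function Point count along these lines can be sketched as follows. The per-component weights are the commonly quoted IFPUG average weights; the counts are invented for illustration, and a real count would also grade each component by complexity and apply a value adjustment factor:

```python
# Commonly quoted IFPUG average weights per component type.
AVG_WEIGHTS = {
    "external_inputs": 4,
    "external_outputs": 5,
    "external_inquiries": 4,
    "internal_logical_files": 10,
    "external_interface_files": 7,
}

def unadjusted_fp(counts: dict) -> int:
    """Sum each component count multiplied by its average weight."""
    return sum(AVG_WEIGHTS[kind] * n for kind, n in counts.items())

# Hypothetical small system.
counts = {"external_inputs": 3, "external_outputs": 2,
          "external_inquiries": 1, "internal_logical_files": 2,
          "external_interface_files": 0}
print(unadjusted_fp(counts))  # 46
```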

The Function Point Analysis sizing standard is supported by the International Function Point Users Group (IFPUG). It can be applied early in the software development life-cycle and it is not dependent on lines of code like the somewhat inaccurate Backfiring method. The method is technology agnostic and can be used for comparative analysis across organizations and across industries.

Since the inception of Function Point Analysis, several variations have evolved and the family of functional sizing techniques has broadened to include such sizing measures as COSMIC, NESMA, Use Case Points, FP Lite, Early and Quick FPs, and most recently Story Points. However, Function Points has a history of statistical accuracy, and has been used as a common unit of work measurement in numerous application development management (ADM) or outsourcing engagements, serving as the “currency” by which services are delivered and performance is measured.

One common limitation of the Function Point methodology is that it is a manual process, and therefore can be labor-intensive and costly in large-scale initiatives such as application development or outsourcing engagements. This negative aspect of applying the methodology may be what motivated industry IT leaders to form the Consortium for IT Software Quality, focused on introducing a computable metrics standard for automating the measurement of software size, while the IFPUG keeps promoting a manual approach, as most of its activity relies on FP counter certifications.

CISQ announced the availability of its first metric standard, Automated Function Points, to the CISQ membership, in CISQ Technical. These recommendations have been developed in OMG’s Request for Comment format and submitted to OMG’s process for standardization.

Identifying critical programming errors

Critical Programming Errors are specific architectural and/or coding bad practices that result in the highest, immediate or long term, business disruption risk.

These are quite often technology-related and depend heavily on the context, business objectives, and risks. Some may consider respect for naming conventions a minor issue, while others – those preparing the ground for a knowledge transfer, for example – will consider it absolutely critical.

Critical Programming Errors can also be classified per CISQ Characteristics. Basic example below:

  • Reliability
    • Avoid software patterns that will lead to unexpected behavior (Uninitialized variable, null pointers, etc.)
    • Methods, procedures and functions doing Insert, Update, Delete, Create Table or Select must include error management
    • Multi-thread functions should be made thread safe, for instance servlets or struts action classes must not have instance/non-final static fields
  • Efficiency
    • Ensure centralization of client requests (incoming and data) to reduce network traffic
    • Avoid SQL queries that don’t use an index against large tables in a loop
  • Security
    • Avoid fields in servlet classes that are not final static
    • Avoid data access without including error management
    • Check control return codes and implement error handling mechanisms
    • Ensure input validation to avoid cross-site scripting flaws or SQL injections flaws
  • Maintainability
    • Deep inheritance trees and nesting should be avoided to improve comprehensibility
    • Modules should be loosely coupled (fan-out, intermediaries) to avoid propagation of modifications
    • Enforce homogeneous naming conventions
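The input-validation and SQL-injection points above can be illustrated with a parameterized query; the table and data below are hypothetical. Concatenating user input into the SQL string is exactly the pattern these checks are meant to catch:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user(name: str):
    # Unsafe (injectable): f"SELECT ... WHERE name = '{name}'"
    # Safe: the driver binds the value, so input cannot alter the SQL.
    return conn.execute(
        "SELECT name, role FROM users WHERE name = ?", (name,)
    ).fetchall()

print(find_user("alice"))        # [('alice', 'admin')]
print(find_user("' OR '1'='1"))  # [] - the injection attempt finds nothing
```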



Prerequisites

Before coding starts, it is important to ensure that all necessary prerequisites have been completed (or have at least progressed far enough to provide a solid foundation for coding). If the various prerequisites are not satisfied, then the software is likely to be unsatisfactory, even if it is completed.

From Meek & Heath: “What happens before one gets to the coding stage is often of crucial importance to the success of the project.”[8]

The prerequisites outlined below cover such matters as:

  • how is development structured? (life cycle)
  • what is the software meant to do? (requirements)
  • the overall structure of the software system (architecture)
  • more detailed design of individual components (design)
  • choice of programming language(s)

For small simple projects involving only one person, it may be feasible to combine architecture with design and adopt a very simple life cycle.

Life cycle

A software development methodology is a framework that is used to structure, plan, and control the life cycle of a software product. Common methodologies include waterfall, prototyping, iterative and incremental development, spiral development, agile software development, rapid application development, and extreme programming.

The waterfall model is a sequential development approach; in particular, it assumes that the requirements can be completely defined at the start of a project. However, McConnell quotes three studies which indicate that, on average, requirements change by around 25% during a project.[9] The other methodologies mentioned above all attempt to reduce the impact of such requirement changes, often by some form of step-wise, incremental, or iterative approach. Different methodologies may be appropriate for different development environments.


Requirements

McConnell states: “The first prerequisite you need to fulfil before beginning construction is a clear statement of the problem the system is supposed to solve.”[10]

Meek and Heath emphasise that a clear, complete, precise, and unambiguous written specification is the target to aim for.[11] Note that it may not be possible to achieve this target, and the target is likely to change anyway (as mentioned in the previous section).

Sommerville distinguishes between less detailed user requirements and more detailed system requirements.[12] He also distinguishes between functional requirements (e.g. update a record) and non-functional requirements (e.g. response time must be less than 1 second).


Architecture

Hoare points out: “there are two ways of constructing a software design: one way is to make it so simple that there are obviously no deficiencies; the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult.”[13]

Software architecture is concerned with deciding what has to be done, and which program component is going to do it (how something is done is left to the detailed design phase, below). This is particularly important when a software system contains more than one program, since it effectively defines the interface between these various programs. It should include some consideration of any user interfaces as well, without going into excessive detail.

Any non-functional system requirements (response time, reliability, maintainability, etc.) need to be considered at this stage.[14]

The software architecture is also of interest to various stakeholders (sponsors, end-users, etc.) since it gives them a chance to check that their requirements can be met.


Design

The main purpose of design is to fill in the details which have been glossed over in the architectural design. The intention is that the design should be detailed enough to provide a good guide for actual coding, including details of any particular algorithms to be used. For example, at the architectural level it may have been noted that some data has to be sorted, while at the design level it is necessary to decide which sorting algorithm is to be used. As a further example, if an object-oriented approach is being used, then the details of the objects must be determined (attributes and methods).


Software design is both a process and a model. The design process is a sequence of steps that enable the designer to describe all aspects of the software to be built. It is important to note, however, that the design process is not simply a cookbook. Creative skill, past experience, a sense of what makes “good” software, and an overall commitment to quality are critical success factors for a competent design.

The design model is the equivalent of an architect’s plans for a house. It begins by representing the totality of the thing to be built (e.g., a three-dimensional rendering of the house) and slowly refines the thing to provide guidance for constructing each detail (e.g., the plumbing layout). Similarly, the design model that is created for software provides a variety of different views of the computer software.

Basic design principles enable the software engineer to navigate the design process. Davis [DAV95] suggests a set of principles for software design, which have been adapted and extended in the following list:

  • The design process should not suffer from “tunnel vision.” A good designer should consider alternative approaches, judging each based on the requirements of the problem and the resources available to do the job.
  • The design should be traceable to the analysis model. Because a single element of the design model often traces to multiple requirements, it is necessary to have a means for tracking how requirements have been satisfied by the design model.
  • The design should not reinvent the wheel. Systems are constructed using a set of design patterns, many of which have likely been encountered before. These patterns should always be chosen as an alternative to reinvention. Time is short and resources are limited! Design time should be invested in representing truly new ideas and integrating those patterns that already exist.
  • The design should “minimize the intellectual distance” between the software and the problem as it exists in the real world. That is, the structure of the software design should (whenever possible) mimic the structure of the problem domain.
  • The design should exhibit uniformity and integration. A design is uniform if it appears that one person developed the entire thing. Rules of style and format should be defined for a design team before design work begins. A design is integrated if care is taken in defining interfaces between design components.
  • The design should be structured to accommodate change. The design concepts discussed in the next section enable a design to achieve this principle.
  • The design should be structured to degrade gently, even when aberrant data, events, or operating conditions are encountered. Well-designed software should never “bomb.” It should be designed to accommodate unusual circumstances, and if it must terminate processing, do so in a graceful manner.
  • Design is not coding, coding is not design. Even when detailed procedural designs are created for program components, the level of abstraction of the design model is higher than source code. The only design decisions made at the coding level address the small implementation details that enable the procedural design to be coded.
  • The design should be assessed for quality as it is being created, not after the fact. A variety of design concepts and design measures are available to assist the designer in assessing quality.
  • The design should be reviewed to minimize conceptual (semantic) errors. There is sometimes a tendency to focus on minutiae when the design is reviewed, missing the forest for the trees. A design team should ensure that major conceptual elements of the design (omissions, ambiguity, inconsistency) have been addressed before worrying about the syntax of the design model.
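The “degrade gently” principle above can be sketched as follows; the age-parsing scenario is invented for illustration:

```python
def parse_age(raw):
    """Handle aberrant input gracefully instead of letting the program bomb."""
    try:
        age = int(raw)
    except (TypeError, ValueError):
        return None  # degrade gently: signal failure, do not crash
    return age if 0 <= age <= 150 else None  # reject absurd values too

print(parse_age("42"))    # 42
print(parse_age("oops"))  # None
print(parse_age("999"))   # None
```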
Design Concepts

The design concepts provide the software designer with a foundation from which more sophisticated methods can be applied. A set of fundamental design concepts has evolved. They are:

  1. Abstraction – Abstraction is the process or result of generalization by reducing the information content of a concept or an observable phenomenon, typically in order to retain only information which is relevant for a particular purpose.
  2. Refinement – It is the process of elaboration. A hierarchy is developed by decomposing a macroscopic statement of function in a step-wise fashion until programming language statements are reached. In each step, one or several instructions of a given program are decomposed into more detailed instructions. Abstraction and Refinement are complementary concepts.
  3. Modularity – Software architecture is divided into components called modules.
  4. Software Architecture – It refers to the overall structure of the software and the ways in which that structure provides conceptual integrity for a system. A good software architecture will yield a good return on investment with respect to the desired outcome of the project, e.g. in terms of performance, quality, schedule and cost.
  5. Control Hierarchy – A program structure that represents the organization of a program component and implies a hierarchy of control.
  6. Structural Partitioning – The program structure can be divided both horizontally and vertically. Horizontal partitions define separate branches of modular hierarchy for each major program function. Vertical partitioning suggests that control and work should be distributed top down in the program structure.
  7. Data Structure – It is a representation of the logical relationship among individual elements of data.
  8. Software Procedure – It focuses on the processing of each module individually.
  9. Information Hiding – Modules should be specified and designed so that information contained within a module is inaccessible to other modules that have no need for such information
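As a minimal sketch of information hiding (concept 9), the hypothetical Counter class below keeps its state private, so other modules can only use the public operations and never touch the count directly:

```java
public class Counter {
    private int count = 0;        // hidden state -- inaccessible to other modules

    public void increment() {     // the only way to change the state
        count++;
    }

    public int value() {          // the only way to observe the state
        return count;
    }

    public static void main(String[] args) {
        Counter c = new Counter();
        c.increment();
        c.increment();
        System.out.println(c.value()); // 2
    }
}
```

Because the internal representation is hidden, it could later change (say, to a long) without affecting any caller.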
Design considerations

There are many aspects to consider in the design of a piece of software. The importance of each should reflect the goals the software is trying to achieve. Some of these aspects are:

  • Compatibility – The software is able to operate with other products that are designed for interoperability. For example, a piece of software may be backward-compatible with an older version of itself.
  • Extensibility – New capabilities can be added to the software without major changes to the underlying architecture.
  • Fault-tolerance – The software is resistant to and able to recover from component failure.
  • Maintainability – A measure of how easily bug fixes or functional modifications can be accomplished. High maintainability can be the product of modularity and extensibility.
  • Modularity – The resulting software comprises well-defined, independent components, which leads to better maintainability. The components can then be implemented and tested in isolation before being integrated to form the desired software system. This allows division of work in a software development project.
  • Reliability – The software is able to perform a required function under stated conditions for a specified period of time.
  • Reusability – Components of the software can be used in other applications with little or no modification.
  • Robustness – The software is able to operate under stress or tolerate unpredictable or invalid input. For example, it can be designed with a resilience to low memory conditions.
  • Security – The software is able to withstand hostile acts and influences.
  • Usability – The software user interface must be usable for its target user/audience. Default values for the parameters must be chosen so that they are a good choice for the majority of the users.[3]
  • Performance – The software performs its tasks within a user-acceptable time. The software does not consume too much memory.
  • Portability – The usability of the same software in different environments.
  • Scalability – The software adapts well to increasing data or number of users.
Modeling language

A modeling language is any artificial language that can be used to express information, knowledge, or systems in a structure that is defined by a consistent set of rules. The rules are used to interpret the meaning of components in the structure. A modeling language can be graphical or textual. A well-known example of a graphical modeling language for software design is the Unified Modeling Language (UML).


Choice of programming language(s)

Mayer states: “No programming language is perfect. There is not even a single best language; there are only languages well suited or perhaps poorly suited for particular purposes. Understanding the problem and associated programming requirements is necessary for choosing the language best suited for the solution.”[15]

From Meek & Heath: “The essence of the art of choosing a language is to start with the problem, decide what its requirements are, and their relative importance, since it will probably be impossible to satisfy them all equally well. The available languages should then be measured against the list of requirements, and the most suitable (or least unsatisfactory) chosen.”[16]

It is possible that different programming languages may be appropriate for different aspects of the problem. If the languages or their compilers permit, it may be feasible to mix routines written in different languages within the same program.

Even if there is no choice as to which programming language is to be used, McConnell provides some advice: “Every programming language has strengths and weaknesses. Be aware of the specific strengths and weaknesses of the language you’re using.”[17]

Coding standards

This section is also really a prerequisite to coding, as McConnell points out: “Establish programming conventions before you begin programming. It’s nearly impossible to change code to match them later.”[18]

As listed near the end of Coding conventions, there are different conventions for different programming languages, so it may be counterproductive to apply the same conventions across different languages.

The use of coding conventions is particularly important when a project involves more than one programmer (there have been projects with thousands of programmers). It is much easier for a programmer to read code written by someone else if all code follows the same conventions.

For some examples of bad coding conventions, Roedy Green provides a lengthy (tongue-in-cheek) article on how to produce unmaintainable code.[19]


Commenting

Due to time restrictions, or enthusiastic programmers who want immediate results for their code, the commenting of code often takes a back seat. Programmers working in teams have found it better to leave comments behind, since coding usually follows cycles and more than one person may work on a particular module. Such commenting can decrease the cost of knowledge transfer between developers working on the same module.

In the early days of computing, one commenting practice was to leave a brief description of the following:

  1. Name of the module.
  2. Purpose of the Module.
  3. Description of the Module (In brief).
  4. Original Author
  5. Modifications
  6. Authors who modified code with a description on why it was modified.

However, the last two items have largely been made obsolete by the advent of revision control systems.

Where complicated logic is used, it is also good practice to leave a comment block so that another programmer can understand exactly what is happening.

Unit testing can be another way to show how code is intended to be used. Modifications and authorship can be reliably tracked using a source-code revision control system, rather than using comments.
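As a sketch of both practices, the hypothetical PriceCalculator below carries a brief header comment, and its main method doubles as a small test that documents how the code is intended to be used:

```java
/**
 * Module: PriceCalculator (hypothetical example)
 * Purpose: applies a percentage discount to a price.
 */
public class PriceCalculator {

    /** Returns the price after applying the given discount percentage. */
    public static double discounted(double price, double percent) {
        return price - price * percent / 100.0;
    }

    // A small test doubles as documentation of the intended usage,
    // so a separate "how to call this" comment never goes stale.
    public static void main(String[] args) {
        System.out.println(discounted(200.0, 10.0)); // 180.0
    }
}
```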

Naming conventions

Use of proper naming conventions is considered good practice. Sometimes programmers tend to use X1, Y1, etc. as variables and forget to replace them with meaningful ones, causing confusion.

To prevent this wasted time, it is usually considered good practice to use descriptive names in the code, since the code deals with real data.

Example: A variable holding a truck’s weight, taken in as a parameter, could be named TrkWeight or TruckWeight, with TruckWeight being preferable, since it is instantly recognisable. See CamelCase naming of variables.
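A minimal sketch of the convention, using a hypothetical Truck class:

```java
public class Truck {
    // Descriptive camelCase name -- instantly recognisable,
    // unlike a throwaway name such as "x1".
    private final double truckWeightKg;

    public Truck(double truckWeightKg) {
        this.truckWeightKg = truckWeightKg;
    }

    public double getTruckWeightKg() {
        return truckWeightKg;
    }

    public static void main(String[] args) {
        Truck truck = new Truck(4200.0);
        System.out.println(truck.getTruckWeightKg()); // 4200.0
    }
}
```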

Keep the code simple

The code that a programmer writes should be simple. Complicated logic for achieving a simple thing should be kept to a minimum since the code might be modified by another programmer in the future. The logic one programmer implemented may not make perfect sense to another. So, always keep the code as simple as possible.[20]


Portability

Program code should never contain “hard-coded”, i.e. literal, values referring to environmental parameters, such as absolute file paths, file names, user names, host names, IP addresses, URLs, or UDP/TCP ports. Otherwise the application will not run on a host with a different configuration than anticipated. Such values should be parametrized and configured for the hosting environment outside of the application proper (e.g. in property files, on the application server, or even in a database).

As an extension, resources such as XML files should also contain variables rather than literal values, otherwise the application will not be portable to another environment without editing the XML files. For example with J2EE applications running in an application server, such environmental parameters can be defined in the scope of the JVM and the application should get the values from there.
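A sketch of this idea in Java, assuming the environmental values arrive through a standard java.util.Properties object (in practice loaded from a property file or supplied by the application server) rather than as literals buried in the code; the db.url key and the URLs are illustrative:

```java
import java.util.Properties;

public class Config {
    // The database URL comes from configuration, never from a literal
    // hard-coded inside application logic.
    public static String databaseUrl(Properties props) {
        // Even the fallback is an explicit, visible default, not a value
        // scattered through the code.
        return props.getProperty("db.url", "jdbc:postgresql://localhost/app");
    }

    public static void main(String[] args) {
        Properties props = new Properties();
        props.setProperty("db.url", "jdbc:postgresql://prod-host/app");
        System.out.println(databaseUrl(props));
    }
}
```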

Code development

Code building

A best practice for building code involves daily builds and testing, or better still continuous integration, or even continuous delivery.


Testing

Testing is an integral part of software development that needs to be planned. It is also important that testing is done proactively, meaning that test cases are planned before coding starts and are developed while the application is being designed and coded.

Debugging the code and correcting errors

Programmers tend to write the complete code and then begin debugging and checking for errors. Though this approach can save time in smaller projects, bigger and more complex ones tend to have too many variables and functions that need attention. Therefore, it is better to debug each module as soon as it is done, rather than the entire program at the end. This saves time in the long run, so that one does not end up wasting a lot of time figuring out what is wrong. Unit tests for individual modules, and/or functional tests for web services and web applications, can help with this.

Guidelines in brief

A general overview of all of the above:

  1. Know what the code block must perform.
  2. Indicate a brief description of what a variable is for (reference to commenting).
  3. Correct errors as they occur.
  4. Keep your code simple.
  5. Maintain naming conventions that are uniform throughout.


Deployment

Deployment is the final stage of releasing an application to its users.


This section is from:

Programming methods and concepts



Objects

“In computer science, an object is a location in memory having a value and referenced by an identifier. An object can be a variable, a function, or a data structure. In the object-oriented programming paradigm, “object” refers to a particular instance of a class, where the object can be a combination of variables, functions, and data structures. In relational database management, an object can be a table or column, or an association between data and a database entity (such as relating a person’s age to a specific person).” –

Objects need to be particular about their jobs. Each object must do its job, and only its job, to the best of its ability. An object should not be used to do something that isn’t its true purpose.


  1. Objects should do what their names indicate.
    1. An object should have tasks specific to its intended application. Example: a bus can move and stop, but it should not handle passenger requests (well, unless it’s a very smart bus with an AI :)).
  2. Each object should represent a single concept.
    1. Example: a bus object does not also need to represent a fast bus or a flying bus.
  3. Avoid unused properties.
    1. If you have empty or null properties, consider whether they would serve a better purpose somewhere else.

Loose and Tight Coupling

Loose coupling means reducing the direct dependencies between classes. In tight coupling, classes and objects are dependent on one another. In general, tight coupling is bad because it reduces the flexibility and re-usability of code, makes changes much more difficult, and impedes testability.

Loose Coupling

“Loose coupling is a design goal that seeks to reduce the inter-dependencies between components of a system, with the goal of reducing the risk that changes in one component will require changes in any other component. Loose coupling is a much more generic concept intended to increase the flexibility of a system, make it more maintainable, and make the entire framework more “stable”. “

Tight Coupling

“A tightly coupled object is an object that needs to know quite a bit about other objects, and is usually highly dependent on other objects’ interfaces. Changing one object in a tightly coupled application often requires changes to a number of other objects. In a small application we can easily identify the changes and there is less chance of missing anything. But in large applications these inter-dependencies are not always known by every programmer, and there is a chance of overlooking changes. Sets of loosely coupled objects, by contrast, are not dependent on each other.”
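The difference can be sketched in a few lines of Java; all class names here are hypothetical. Report depends only on the Printer interface, so a different printer can be swapped in without touching Report at all:

```java
// The abstraction Report depends on -- not a concrete class.
interface Printer {
    void print(String text);
}

// One concrete implementation among many possible ones.
class ConsolePrinter implements Printer {
    public void print(String text) { System.out.println(text); }
}

class Report {
    private final Printer printer;   // dependency on the abstraction only

    Report(Printer printer) { this.printer = printer; }

    void publish() { printer.print("report body"); }
}

public class CouplingDemo {
    public static void main(String[] args) {
        // Swapping in a different Printer needs no change to Report.
        new Report(new ConsolePrinter()).publish();
    }
}
```

This is also what makes the code testable: a fake Printer can be handed to Report in a test, with no console involved.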




Encapsulation

Encapsulation allows you to hide the inner workings of your application’s parts while making it clearer what each part does.

Any time you see duplicate code, look for a way to encapsulate it. Encapsulate the parts of your application that might vary away from the parts that will stay the same. By breaking the different parts of your application apart, you can change one part without having to change all the other parts.

Places to apply:

  • Entire set of properties
    • Protecting data in your class from other parts of the application
  • Behaviors
    • When you break a behavior out from a class, you can change the behavior without the class having to change as well. So if you changed how properties were stored, you wouldn’t have to change your class at all, because the properties are encapsulated away from the class.

class Program {

    public class Account {

        private decimal accountBalance = 500.00m;

        public decimal CheckBalance() {
            return accountBalance;
        }
    }

    static void Main() {
        Account myAccount = new Account();
        decimal myBalance = myAccount.CheckBalance();

        /* This Main method can check the balance via the public
         * "CheckBalance" method provided by the "Account" class
         * but it cannot manipulate the value of "accountBalance" */
    }
}

” –



Delegation

The act of one object forwarding an operation to another object, to be performed on behalf of the first object.

Delegation is when an object needs to perform a certain task, and instead of doing that task directly, it asks another object to handle the task (or sometimes just a part of the task).

Delegation makes your code more reusable. It also lets each object worry about its own functionality, rather than spreading the code that handles a single object’s behavior all throughout your application

Delegation lets each object worry about equality (or some other task) on its own. This means your objects are more independent of each other, or more loosely coupled. Loosely coupled objects can be taken from one app and easily reused in another, because they’re not tightly tied to other objects’ code.

Loosely coupled is when the objects in your application each have a specific job to do, and they do only that job. So the functionality of your app is spread out over lots of well-defined objects, which each do a single task really well.

Loosely coupled applications are usually more flexible, and easy to change. Since each object is pretty independent of the other objects, you can make a change to one object’s behavior without having to change all the rest of your objects. So adding new features or functionality becomes a lot easier.

Sample in Java:

class A {

    void foo() {
        // "this" is also known as "current", "me" and "self" in other languages
        this.bar();
    }

    void bar() {
        System.out.println("a.bar");
    }
}

class B {

    private A a; // delegation link

    public B(A a) {
        this.a = a;
    }

    void foo() {
        a.foo(); // call foo() on the a-instance
    }

    void bar() {
        System.out.println("b.bar");
    }
}

A a = new A();
B b = new B(a); // establish delegation between two objects

” –

“Singlecast” delegates (C#)

delegate void Notifier(string sender);  // Normal method signature with the keyword delegate

Notifier greetMe;                       // Delegate variable

void HowAreYou(string sender) {
    Console.WriteLine("How are you, " + sender + '?');
}

greetMe = new Notifier(HowAreYou);

” –

Multicast delegates (C#)

void HowAreYou(string sender) {
    Console.WriteLine("How are you, " + sender + '?');
}

void HowAreYouToday(string sender) {
    Console.WriteLine("How are you today, " + sender + '?');
}

Notifier greetMe;

greetMe = new Notifier(HowAreYou);
greetMe += new Notifier(HowAreYouToday);

greetMe("Leonardo");                    // "How are you, Leonardo?"
                                        // "How are you today, Leonardo?"

greetMe -= new Notifier(HowAreYou);

greetMe("Pereira");                     // "How are you today, Pereira?"

” –



Abstraction

Abstract classes are placeholders for actual implementation classes. The abstract class defines behavior, and the subclasses implement that behavior.

An abstract class defines some basic behavior, but it’s really the subclasses of the abstract class that add the implementation of those behaviors.

Whenever you find common behavior in two or more places, look to abstract that behavior into a class, and then reuse that behavior in the common classes. Helps to avoid duplicate code.

public class Animal extends LivingThing
{
    private Location loc;
    private double energyReserves;

    public boolean isHungry() {
        return energyReserves < 2.5;
    }

    public void eat(Food food) {
        // Consume food
        energyReserves += food.getCalories();
    }

    public void moveTo(Location location) {
        // Move to new location
        this.loc = location;
    }
}

thePig = new Animal();
theCow = new Animal();

if (thePig.isHungry()) {
    thePig.eat(tableScraps);
}
if (theCow.isHungry()) {
    theCow.eat(grass);
}

” –



Interface

An interface contains definitions for a group of related functionalities that a class or a struct can implement.

By using interfaces, you can, for example, include behavior from multiple sources in a class.

Interfaces can contain methods, properties, events, indexers, or any combination of those four member types.

You can write code that interacts directly with a subclass, or you can write code that interacts with the interface. When you run into a choice like this, you should always favor coding to the interface, not the implementation. It makes your software easier to extend. By coding to an interface, your code will work with all of the interface’s subclasses—even ones that haven’t been created yet.

This adds flexibility to your app. Instead of your code working with only one specific subclass, it can work with the more generic interface. That means your code will work with any subclass of the interface, even subclasses that haven’t been designed yet.

interface ISampleInterface
{
    void SampleMethod();
}

class ImplementationClass : ISampleInterface
{
    // Explicit interface member implementation:
    void ISampleInterface.SampleMethod()
    {
        // Method implementation.
    }

    static void Main()
    {
        // Declare an interface instance.
        ISampleInterface obj = new ImplementationClass();

        // Call the member.
        obj.SampleMethod();
    }
}

” –


Encapsulation

A main goal of encapsulation is to prevent duplicate code. Encapsulation also helps you protect your classes from unnecessary changes.

Anytime you have behavior in an application that you think is likely to change, you want to move that behavior away from parts of your application that probably won’t change very frequently. In other words, you should always try to encapsulate what varies.

“class Program {

    public class Account {

        private decimal accountBalance = 500.00m;

        public decimal CheckBalance() {
            return accountBalance;
        }
    }

    static void Main() {
        Account myAccount = new Account();
        decimal myBalance = myAccount.CheckBalance();

        /* This Main method can check the balance via the public
         * "CheckBalance" method provided by the "Account" class
         * but it cannot manipulate the value of "accountBalance" */
    }
}

” –


Change

A constant in software development is CHANGE. Software that isn’t well designed falls apart at the first sign of change, but great software can change easily.

The easiest way to make your software resilient to change is to make sure each class has only one reason to change. In other words, you’re minimizing the chances that a class is going to have to change by reducing the number of things in that class that can cause it to change.

When you see a class that has more than one reason to change, it is probably trying to do too many things. See if you can break up the functionality into multiple classes, where each individual class does only one thing—and therefore has only one reason to change.
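A minimal sketch of such a split, with illustrative names: Invoice only computes totals, while formatting lives in a separate class, so each class has exactly one reason to change:

```java
import java.util.List;

// Invoice changes only when the total-computation rules change.
class Invoice {
    private final List<Double> lineAmounts;

    Invoice(List<Double> lineAmounts) { this.lineAmounts = lineAmounts; }

    double total() {
        return lineAmounts.stream().mapToDouble(Double::doubleValue).sum();
    }
}

// InvoicePrinter changes only when the presentation changes.
class InvoicePrinter {
    String format(Invoice invoice) {
        return "Total: " + invoice.total();
    }
}

public class SrpDemo {
    public static void main(String[] args) {
        Invoice invoice = new Invoice(List.of(10.0, 15.5));
        System.out.println(new InvoicePrinter().format(invoice)); // Total: 25.5
    }
}
```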

Cohesive (or Single Responsibility Principle)

A cohesive class does one thing really well and does not try to do or be something else.

The more cohesive your classes are, the higher the cohesion of your software.

Look through the methods of your classes: do they all relate to the name of your class? If you have a method that looks out of place, it might belong in another class. Cohesive classes are focused on specific tasks.

Cohesion measures the degree of connectivity among the elements of a single module, class, or object. The higher the cohesion of your software is, the more well-defined and related the responsibilities of each individual class in your application. Each class has a very specific set of closely related actions it performs.

In other words, cohesion is a measure of how easy it is to change your application.

Cohesion focuses on how you’ve constructed each individual class, object, and package of your software. If each class does just a few things that are all grouped together, then it’s probably a highly cohesive piece of software. But if you have one class doing all sorts of things that aren’t that closely related, you’ve probably got low cohesion.

Highly cohesive software is loosely coupled

In almost every situation, the more cohesive your software is, the looser the coupling between classes.

The higher the cohesion in your application, the better defined each object’s job is. And the better defined an object (and its job) is, the easier it is to pull that object out of one context, and have the object do the same job in another context.


Collections

Collections provide a more flexible way to work with groups of objects. Unlike arrays, the group of objects you work with can grow and shrink dynamically as the needs of the application change. For some collections, you can assign a key to any object that you put into the collection so that you can quickly retrieve the object by using the key.

When you have a set of properties that vary across your objects, use a collection, like a Map, to store those properties dynamically.

You’ll remove lots of methods from your classes, and avoid having to change your code when new properties are added to your app.
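A sketch of this approach, using a hypothetical PropertyBag backed by a HashMap: varying properties live in the map rather than in a fixed set of fields, so adding a new property needs no new code:

```java
import java.util.HashMap;
import java.util.Map;

public class PropertyBag {
    // Each property is stored under a key, so the set of properties
    // can grow and shrink at runtime.
    private final Map<String, Object> properties = new HashMap<>();

    public void set(String name, Object value) {
        properties.put(name, value);
    }

    public Object get(String name) {
        return properties.get(name);   // null if the property is absent
    }

    public static void main(String[] args) {
        PropertyBag bus = new PropertyBag();
        bus.set("color", "red");
        bus.set("seats", 42);
        System.out.println(bus.get("color")); // red
    }
}
```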


Inheritance

“Inheritance enables you to create new classes that reuse, extend, and modify the behavior that is defined in other classes. The class whose members are inherited is called the base class, and the class that inherits those members is called the derived class. Inheritance is transitive. If ClassC is derived from ClassB, and ClassB is derived from ClassA, ClassC inherits the members declared in ClassB and ClassA.

A derived class is a specialization of the base class. For example, if you have a base class Animal, you might have one derived class that is named Mammal and another derived class that is named Reptile. A Mammal is an Animal, and a Reptile is an Animal, but each derived class represents a different specialization of the base class. ”
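The Animal/Mammal relationship described in the quote can be sketched as:

```java
class Animal {
    String describe() { return "an animal"; }
    boolean breathes() { return true; }          // inherited by all subclasses
}

class Mammal extends Animal {
    @Override
    String describe() { return "a mammal"; }     // specialization of the base class
}

public class InheritanceDemo {
    public static void main(String[] args) {
        Animal pet = new Mammal();               // a Mammal *is an* Animal
        System.out.println(pet.describe());      // a mammal
        System.out.println(pet.breathes());      // true -- reused from Animal
    }
}
```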


Summary

  • Encapsulate what varies.
  • Code to an interface rather than to an implementation.
  • Each class in your application should have only one reason to change.
  • Classes are about behavior and functionality.

Model View Controller – MVC (Design Pattern)

“Remember you’re technically minded and close to the code. MVC to you is as clear as day, but saying ‘Model, View, Controller’ to the business could give them the impression that you are suffering from some form of Tourette syndrome. MVC won’t mean much to the business even after you define it in relation to the code. Getting the business to understand why this is the answer, and least of all what it is, can be more of a task than expected in my experience. Even some fellow developers have difficulty understanding this on occasion.

To get the listener to understand what MVC is and why it works, what I have tried in the past is to apply MVC to different industries where the listeners have had more involvement. An example that has worked for me in the past is a comparison to property or even vehicles. Most people have had dealings with builders, carpenters, plumbers or electricians, or have watched the flood of property shows on TV. This experience is a good platform to use to explain why separation such as MVC works. I know you’re probably thinking that won’t work, as it’s not the same as in software, but remember you’re not trying to train the business to become developers or have an in-depth understanding of MVC; you’re simply explaining to them that separation in production is required, and that’s what an MVC structure offers.

To give an example of how you could describe this, I have very briefly explained how separation works in property. Keep in mind this is focused on using the system, not developing it, which could be a completely different angle of explanation.


View

The view in MVC is the presentation layer. This is what the end user of a product will see and interact with. A system can have multiple views of all different types, ranging from command-line output to rendered HTML. The view doesn’t contain business logic in most clean designs. The interface is fit for purpose and is the area of interaction. Therefore you could simply output HTML for consumers to interact with, or output SOAP/XML for businesses to interact with. Both use the same business logic behind the system, otherwise known as the models and controllers.

In the world of property you could think of the view as the interior of a property, or the outer layer of a property that the inhabitants interact with. The interior can be customised for purpose, and the same property can have many different types of tenants. For example, a property of a particular design could contain residential dwellings. The same internal space could easily be used as office space, which, although in the same property, has a different purpose. However, the property structure is the same. Therefore the environment in which the users interact does not interfere with the structure of the building.


Controller

The controller is where the magic happens and defines the business application logic. This could be where the user has sent a response from the view; this response is then used to process the internal workings of the request and to process the response back to the user. Take a typical request where a user has asked to buy a book. The controller has the user id, payment details, shipping address and item choice. These elements are then processed through the business logic to complete a purchase. The data is passed through the system into the model layer, and eventually, after the entire request satisfies the business definitions, the order is constructed and the user receives their item.

If we compare this to a property, we could compare the ordering of a book online to turning on a light switch. A tenant will flick the switch on, just like ordering a book. The switch itself is an element in the view layer which sends the request to the controller, just like clicking a checkout button on a web site. The business logic in this case is what the electrician installed, embedded within the property’s design. The switch is flicked, which completes the circuit. Electricity runs through all the wires, including the fuse box, straight through to the light bulb. Just like the user receiving a book, in this case the tenant receives light. The whole process behind the scenes involving the electricity cabling is not visible to the tenant. They simply interact with the switch, and from there the controller handles the request.


Model

The models in MVC are the bottom-most layer and handle the core logic of the system. In most cases this could be seen as the layer that interacts with the data source. In systems using MVC, the controller will pass information to the model in order to store and retrieve data. Following on from the controller definition above, this is where the order details are stored. Additional data, such as stock levels and the physical location of the book, amongst many other things, are stored here. If that was the last book in stock ordered, the next request for this item may check whether it’s available and disallow the order, as the item is no longer available.

Sticking with our example of turning on a light switch, this level in our structure could be the electricity supply. When the tenant flicks the switch, the internal circuit must request electricity to power the request, which is similar to the user requesting data from the database: data is needed to process the request. If the dwelling isn’t connected to an electricity supply, it cannot complete the process.

Business benefits from using MVC

After you get the message across explaining what MVC is, you will then have to show what benefits can be obtained from it. I’m not going to go into a huge amount of detail here, as I’m sure you can apply the benefits more accurately to your actual situation. To list just some of the common benefits of an MVC-based system, here are a few examples:

  • Different skill levels can work on different system levels. For example, designers can work on the interface (View) with very little development knowledge, and developers can work on the business logic (Controller) with very little concern for the design level. They then simply integrate their work on completion.
  • As a result of the above separation, projects can be managed more easily and quickly. The designer can start the interfaces before the developer and vice versa. This development process can be parallel as opposed to sequential, therefore reducing development time.
  • It is easy to have multiple view types using the same business logic.
  • There is a clear route through the system. You clearly know where the different levels of the system are. With a clear route through the system, logic can be shared and improved. This has added security benefits, as you clearly know the permitted route from the data to the user and can place clear security checks along that route.
  • Each layer is responsible for itself (relates to the first point). This means you can have a clean file structure, which can be maintained and managed much more easily and quickly than a tightly coupled system where you may have lots of duplicate logic.
  • Having a clear structure means development will be more transparent, which should result in reduced development time, fewer maintenance problems, and shorter release cycles if applied properly.
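The light-switch analogy above can be reduced to a deliberately tiny MVC sketch; all class names are hypothetical. The model holds the state, the view only renders it, and the controller connects the user action to a model change:

```java
class LightModel {                       // Model: core state
    private boolean on = false;
    void setOn(boolean on) { this.on = on; }
    boolean isOn() { return on; }
}

class LightView {                        // View: presentation only, no business logic
    String render(LightModel model) {
        return model.isOn() ? "The light is ON" : "The light is OFF";
    }
}

class LightController {                  // Controller: handles the "switch" action
    private final LightModel model;
    LightController(LightModel model) { this.model = model; }
    void flickSwitch() { model.setOn(!model.isOn()); }
}

public class MvcDemo {
    public static void main(String[] args) {
        LightModel model = new LightModel();
        LightView view = new LightView();
        LightController controller = new LightController(model);

        controller.flickSwitch();                 // tenant flicks the switch
        System.out.println(view.render(model));   // The light is ON
    }
}
```

Because the view never touches the model's internals directly, a second view (say, SOAP/XML output) could render the same model without any change to the controller.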

Design Patterns

Design patterns help you recognize and implement GOOD solutions to common problems. Design patterns are proven solutions to particular types of problems, and help you structure your own applications in ways that are easier to understand, more maintainable, and more flexible.

Creational Design Patterns

“In software engineering, creational design patterns are design patterns that deal with object creation mechanisms, trying to create objects in a manner suitable to the situation. The basic form of object creation could result in design problems or added complexity to the design. Creational design patterns solve this problem by somehow controlling this object creation.

Creational design patterns are composed of two dominant ideas. One is encapsulating knowledge about which concrete classes the system uses. Another is hiding how instances of these concrete classes are created and combined.[1]

Creational design patterns are further categorized into object-creational patterns and class-creational patterns, where object-creational patterns deal with object creation and class-creational patterns deal with class instantiation. In greater detail, object-creational patterns defer part of the object creation to another object, while class-creational patterns defer object creation to subclasses.[2]

Five well-known design patterns that are part of the creational patterns are:

  • Abstract factory pattern, which provides an interface for creating related or dependent objects without specifying the objects’ concrete classes.[3]
  • Builder pattern, which separates the construction of a complex object from its representation so that the same construction process can create different representations.
  • Factory method pattern, which allows a class to defer instantiation to subclasses.[4]
  • Prototype pattern, which specifies the kind of object to create using a prototypical instance, and creates new objects by cloning this prototype.
  • Singleton pattern, which ensures that a class only has one instance, and provides a global point of access to it.[5]

Consider applying creational patterns when:

  • A system should be independent of how its objects and products are created.
  • A set of related objects is designed to be used together.
  • Hiding the implementations of a class library of products, revealing only their interfaces.
  • Constructing different representations of independent complex objects.
  • A class wants its subclasses to implement the objects it creates.
  • The class instantiations are specified at run-time.
  • There must be a single instance, and clients can access this instance at all times.
  • Instances should be extensible without being modified.

“This design pattern is all about class instantiation. This pattern can be further divided into class-creation patterns and object-creational patterns. While class-creation patterns use inheritance effectively in the instantiation process, object-creation patterns use delegation effectively to get the job done.”

  • Abstract Factory
    Creates an instance of several families of classes
  • Builder
    Separates object construction from its representation
  • Factory Method
    Creates an instance of several derived classes
  • Object Pool
    Avoid expensive acquisition and release of resources by recycling objects that are no longer in use
  • Prototype
    A fully initialized instance to be copied or cloned
  • Singleton
    A class of which only a single instance can exist
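As a quick illustration of the last entry, here is a minimal Singleton sketch in Java (the Configuration name is invented for illustration):

```java
// Singleton: the class guards its single instance and
// exposes one global point of access to it.
class Configuration {
    private static final Configuration INSTANCE = new Configuration();

    // Private constructor prevents instantiation from outside the class.
    private Configuration() { }

    public static Configuration getInstance() {
        return INSTANCE;
    }
}
```

Eager initialization like this is thread-safe in Java because class initialization itself is guaranteed to happen only once.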


Behavioral Design Patterns

“In software engineering, behavioral design patterns are design patterns that identify common communication patterns between objects and realize these patterns. By doing so, these patterns increase flexibility in carrying out this communication.”


“This design pattern is all about communication between classes’ objects. Behavioral patterns are those patterns that are most specifically concerned with communication between objects.”

  • Chain of responsibility
    A way of passing a request between a chain of objects
  • Command
    Encapsulate a command request as an object
  • Interpreter
    A way to include language elements in a program
  • Iterator
    Sequentially access the elements of a collection
  • Mediator
    Defines simplified communication between classes
  • Memento
    Capture and restore an object’s internal state
  • Null Object
    Designed to act as a default value of an object
  • Observer
    A way of notifying change to a number of classes
  • State
    Alter an object’s behavior when its state changes
  • Strategy
    Encapsulates an algorithm inside a class
  • Template method
    Defer the exact steps of an algorithm to a subclass
  • Visitor
    Defines a new operation to a class without change
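As one example from the list above, the Strategy pattern encapsulates an algorithm inside a class so callers can swap it without changing the calling code. A minimal sketch in Java (the discount names are invented for illustration):

```java
// Strategy: the algorithm lives behind an interface.
interface DiscountStrategy {
    double apply(double price);
}

class NoDiscount implements DiscountStrategy {
    public double apply(double price) { return price; }
}

class TenPercentOff implements DiscountStrategy {
    public double apply(double price) { return price * 0.9; }
}

// The context delegates to whichever strategy it was given.
class Checkout {
    private final DiscountStrategy strategy;
    public Checkout(DiscountStrategy strategy) { this.strategy = strategy; }
    public double total(double price) { return strategy.apply(price); }
}
```

Adding a new discount rule means adding a new class, not editing Checkout.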



Structural Design Patterns

“In software engineering, structural design patterns are design patterns that ease the design by identifying a simple way to realize relationships between entities.”

“This design pattern is all about class and object composition. Structural class-creation patterns use inheritance to compose interfaces. Structural object-patterns define ways to compose objects to obtain new functionality.”

  • Adapter
    Match interfaces of different classes
  • Bridge
    Separates an object’s interface from its implementation
  • Composite
    A tree structure of simple and composite objects
  • Decorator
    Add responsibilities to objects dynamically
  • Facade
    A single class that represents an entire subsystem
  • Flyweight
    A fine-grained instance used for efficient sharing
  • Private Class Data
    Restricts accessor/mutator access
  • Proxy
    An object representing another object
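As one example from the list above, the Adapter pattern matches an existing class to the interface clients expect, without modifying it. A minimal sketch in Java (the sensor classes are invented for illustration):

```java
// The interface clients want to use.
interface Celsius {
    double temperatureC();
}

// Existing class with an incompatible interface.
class FahrenheitSensor {
    public double readF() { return 212.0; }
}

// Adapter: wraps the existing class and converts its readings.
class FahrenheitAdapter implements Celsius {
    private final FahrenheitSensor sensor;
    public FahrenheitAdapter(FahrenheitSensor sensor) { this.sensor = sensor; }
    public double temperatureC() {
        return (sensor.readF() - 32.0) * 5.0 / 9.0;
    }
}
```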

Game Programming Patterns


Design Patterns

Sequencing Patterns

“Videogames are exciting in large part because they take us somewhere else. For a few minutes (or, let’s be honest with ourselves, much longer), we become inhabitants of a virtual world. Creating these worlds is one of the supreme delights of being a game programmer.

One aspect that most of these game worlds feature is time — the artificial world lives and breathes at its own cadence. As world builders, we must invent time and craft the gears that drive our game’s great clock.

The patterns in this section are tools for doing just that. A Game Loop is the central axle that the clock spins on. Objects hear its ticking through Update Methods. We can hide the computer’s sequential nature behind a facade of snapshots of moments in time using Double Buffering so that the world appears to update simultaneously.
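A rough sketch of such a Game Loop with a fixed update timestep, in Java. This is simplified for illustration: it runs a fixed number of frames with a simulated frame time instead of reading a real clock, and rendering is only indicated by a comment.

```java
// Fixed-timestep game loop: real time is accumulated into "lag" and
// consumed in fixed STEP-sized updates, so game logic runs at a
// steady rate even if rendering does not.
class GameLoop {
    static final double STEP = 1.0 / 60.0;  // update interval in seconds
    double elapsed;                          // simulated game-world time

    void update() { elapsed += STEP; }       // advance the world one tick

    void run(int frames, double frameTime) {
        double lag = 0.0;
        for (int i = 0; i < frames; i++) {
            lag += frameTime;                // time produced by the "clock"
            while (lag >= STEP) {            // consume it in fixed steps
                update();
                lag -= STEP;
            }
            // render() would draw the current state here
        }
    }
}
```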


Behavioral Patterns

“Once you’ve built your game’s set and festooned it with actors and props, all that remains is to start the scene. For this, you need behavior — the screenplay that tells each entity in your game what to do.

Of course all code is “behavior”, and all software is defining behavior, but what’s different about games is often the breadth of it that you have to implement. While your word processor may have a long list of features, it pales in comparison with the number of inhabitants, items, and quests in your average role-playing game.

The patterns in this chapter help to quickly define and refine a large quantity of maintainable behavior. Type Objects create categories of behavior without the rigidity of defining an actual class. A Subclass Sandbox gives you a safe set of primitives you can use to define a variety of behaviors. The most advanced option is Bytecode, which moves behavior out of code entirely and into data.

Decoupling Patterns

“Once you get the hang of a programming language, writing code to do what you want is actually pretty easy. What’s hard is writing code that’s easy to adapt when your requirements change. Rarely do we have the luxury of a perfect feature set before we’ve fired up our editor.

A powerful tool we have for making change easier is decoupling. When we say two pieces of code are “decoupled”, we mean a change in one usually doesn’t require a change in the other. When you change some feature in your game, the fewer places in code you have to touch, the easier it is.

Components decouple different domains in your game from each other within a single entity that has aspects of all of them. Event Queues decouple two objects communicating with each other, both statically and in time. Service Locators let code access a facility without being bound to the code that provides it.


Optimization Patterns

“While the rising tide of faster and faster hardware has lifted most software above worrying about performance, games are one of the few remaining exceptions. Players always want richer, more realistic and exciting experiences. Screens are crowded with games vying for a player’s attention — and cash! — and the game that pushes the hardware the furthest often wins.

Optimizing for performance is a deep art that touches all aspects of software. Low-level coders master the myriad idiosyncrasies of hardware architectures. Meanwhile, algorithms researchers compete to prove mathematically whose procedure is the most efficient.

Here, I touch on a few mid-level patterns that are often used to speed up a game. Data Locality introduces you to the modern computer’s memory hierarchy and how you can use it to your advantage. The Dirty Flag pattern helps you avoid unnecessary computation while Object Pools help you avoid unnecessary allocation. Spatial Partitioning speeds up the virtual world and its inhabitants’ arrangement in space.
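As a rough sketch of one of these, here is an Object Pool in Java that recycles objects instead of reallocating them. Particle stands in for any expensive-to-create object; the class names are invented for illustration.

```java
import java.util.ArrayDeque;

class Particle {
    boolean inUse;
}

// Object Pool: released objects go on a free list and are handed
// back out on the next acquire, avoiding repeated allocation.
class ParticlePool {
    private final ArrayDeque<Particle> free = new ArrayDeque<>();
    int created;  // how many objects were actually allocated

    Particle acquire() {
        Particle p = free.poll();
        if (p == null) {          // pool empty: allocate a new one
            p = new Particle();
            created++;
        }
        p.inUse = true;
        return p;
    }

    void release(Particle p) {    // return the object for reuse
        p.inUse = false;
        free.push(p);
    }
}
```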


Anti Patterns

Anti patterns are about recognizing and avoiding BAD solutions to common problems. Anti-patterns are the reverse of design patterns: they are common BAD solutions to problems. These dangerous pitfalls should be recognized and avoided.

“What Is an AntiPattern?

AntiPatterns, like their design pattern counterparts, define an industry vocabulary for the common defective processes and implementations within organizations. A higher-level vocabulary simplifies communication between software practitioners and enables concise description of higher-level concepts.

An AntiPattern is a literary form that describes a commonly occurring solution to a problem that generates decidedly negative consequences. The AntiPattern may be the result of a manager or developer not knowing any better, not having sufficient knowledge or experience in solving a particular type of problem, or having applied a perfectly good pattern in the wrong context.

AntiPatterns provide real-world experience in recognizing recurring problems in the software industry and provide a detailed remedy for the most common predicaments. AntiPatterns highlight the most common problems that face the software industry and provide the tools to enable you to recognize these problems and to determine their underlying causes.

Furthermore, AntiPatterns present a detailed plan for reversing these underlying causes and implementing productive solutions. AntiPatterns effectively describe the measures that can be taken at several levels to improve the developing of applications, the designing of software systems, and the effective management of software projects. “

Software Development AntiPatterns


“A key goal of development AntiPatterns is to describe useful forms of software refactoring. Software refactoring is a form of code modification, used to improve the software structure in support of subsequent extension and long-term maintenance. In most cases, the goal is to transform code without impacting correctness.

Development AntiPatterns utilize various formal and informal refactoring approaches. The following summaries provide an overview of the Development AntiPatterns found in this chapter and focus on the development AntiPattern problem. Included are descriptions of both development and mini-AntiPatterns. The refactored solutions appear in the appropriate AntiPattern templates that follow the summaries.

  • The Blob
    Procedural-style design leads to one object with a lion’s share of the responsibilities, while most other objects only hold data or execute simple processes. The solution includes refactoring the design to distribute responsibilities more uniformly and isolating the effect of changes.
  • Continuous Obsolescence
    Technology is changing so rapidly that developers often have trouble keeping up with current versions of software and finding combinations of product releases that work together. Given that every commercial product line evolves through new releases, the situation is becoming more difficult for developers to cope with. Finding compatible releases of products that successfully interoperate is even harder.
  • Lava Flow
    Dead code and forgotten design information is frozen in an ever-changing design. This is analogous to a Lava Flow with hardening globules of rocky material. The refactored solution includes a configuration management process that eliminates dead code and evolves or refactors design toward increasing quality.
  • Ambiguous Viewpoint
    Object-oriented analysis and design (OOA&D) models are often presented without clarifying the viewpoint represented by the model. By default, OOA&D models denote an implementation viewpoint that is potentially the least useful. Mixed viewpoints don’t allow the fundamental separation of interfaces from implementation details, which is one of the primary benefits of the object-oriented paradigm.
  • Functional Decomposition
    This AntiPattern is the output of experienced, nonobject-oriented developers who design and implement an application in an object-oriented language. The resulting code resembles a structural language (Pascal, FORTRAN) in class structure. It can be incredibly complex as smart procedural developers devise very “clever” ways to replicate their time-tested methods in an object-oriented architecture.
  • Poltergeists
    Poltergeists are classes with very limited roles and effective life cycles. They often start processes for other objects. The refactored solution includes a reallocation of responsibilities to longer-lived objects that eliminate the Poltergeists.
  • Boat Anchor
    A Boat Anchor is a piece of software or hardware that serves no useful purpose on the current project. Often, the Boat Anchor is a costly acquisition, which makes the purchase even more ironic.
  • Golden Hammer
    A Golden Hammer is a familiar technology or concept applied obsessively to many software problems. The solution involves expanding the knowledge of developers through education, training, and book study groups to expose developers to alternative technologies and approaches.
  • Dead End
    A Dead End is reached by modifying a reusable component if the modified component is no longer maintained and supported by the supplier. When these modifications are made, the support burden transfers to the application system developers and maintainers. Improvements in the reusable component are not easily integrated, and support problems can be blamed upon the modification.
  • Spaghetti Code
    Ad hoc software structure makes it difficult to extend and optimize code. Frequent code refactoring can improve software structure, support software maintenance, and enable iterative development.
  • Input Kludge
    Software that fails straightforward behavioral tests may be an example of an input kludge, which occurs when ad hoc algorithms are employed for handling program input.
  • Walking through a Minefield
    Using today’s software technology is analogous to walking through a high-tech mine field. Numerous bugs are found in released software products; in fact, experts estimate that original source code contains two to five bugs per line of code.
  • Cut-and-Paste Programming
    Code reused by copying source statements leads to significant maintenance problems. Alternative forms of reuse, including black-box reuse, reduce maintenance issues by having common source code, testing, and documentation.
  • Mushroom Management
    In some architecture and management circles, there is an explicit policy to keep system developers isolated from the system’s end users. Requirements are passed second-hand through intermediaries, including architects, managers, or requirements analysts.


Software Architecture AntiPatterns


“Architecture AntiPatterns focus on the system-level and enterprise-level structure of applications and components. Although the engineering discipline of software architecture is relatively immature, what has been determined repeatedly by software research and experience is the overarching importance of architecture in software development.

The following AntiPatterns focus on some common problems and mistakes in the creation, implementation, and management of architecture.

  • Autogenerated Stovepipe
    This AntiPattern occurs when migrating an existing software system to a distributed infrastructure. An Autogenerated Stovepipe arises when converting the existing software interfaces to distributed interfaces. If the same design is used for distributed computing, a number of problems emerge.
  • Stovepipe Enterprise
    A Stovepipe System is characterized by a software structure that inhibits change. The refactored solution describes how to abstract subsystem and components to achieve an improved system structure. The Stovepipe Enterprise AntiPattern is characterized by a lack of coordination and planning across a set of systems.
  • Jumble
    When horizontal and vertical design elements are intermixed, an unstable architecture results. The intermingling of horizontal and vertical design elements limits the reusability and robustness of the architecture and the system software components.
  • Stovepipe System
    Subsystems are integrated in an ad hoc manner using multiple integration strategies and mechanisms, and all are integrated point to point. The integration approach for each pair of subsystems is not easily leveraged toward that of other subsystems. The Stovepipe System AntiPattern is the single-system analogy of Stovepipe Enterprise, and is concerned with how the subsystems are coordinated within a single system.
  • Cover Your Assets
    Document-driven software processes often produce less-than-useful requirements and specifications because the authors evade making important decisions. In order to avoid making a mistake, the authors take a safer course and elaborate upon alternatives.
  • Vendor Lock-In
    Vendor Lock-In occurs in systems that are highly dependent upon proprietary architectures. The use of architectural isolation layers can provide independence from vendor-specific solutions.
  • Wolf Ticket
    A Wolf Ticket is a product that claims openness and conformance to standards that have no enforceable meaning. The products are delivered with proprietary interfaces that may vary significantly from the published standard.
  • Architecture by Implication
    Management of risk in follow-on system development is often overlooked due to overconfidence and recent system successes. A general architecture approach that is tailored to each application system can help identify unique requirements and risk areas.
  • Warm Bodies
    Software projects are often staffed with programmers with widely varying skills and productivity levels. Many of these people may be assigned to meet staff size objectives (so-called “warm bodies”). Skilled programmers are essential to the success of a software project. So-called heroic programmers are exceptionally productive, but as few as 1 in 20 have this talent. They produce an order of magnitude more working software than an average programmer.
  • Design by Committee
    The classic AntiPattern from standards bodies, Design by Committee creates overly complex architectures that lack coherence. Clarification of architectural roles and improved process facilitation can refactor bad meeting processes into highly productive events.
  • Swiss Army Knife
    A Swiss Army Knife is an excessively complex class interface. The designer attempts to provide for all possible uses of the class. In the attempt, he or she adds a large number of interface signatures in a futile attempt to meet all possible needs.
  • Reinvent the Wheel
    The pervasive lack of technology transfer between software projects leads to substantial reinvention. Design knowledge buried in legacy assets can be leveraged to reduce time-to-market, cost, and risk.
  • The Grand Old Duke of York
    Egalitarian software processes often ignore people’s talents to the detriment of the project. Programming skill does not equate to skill in defining abstractions. There appear to be two distinct groups involved in software development: abstractionists and their counterparts the implementationists.


Software Project Management AntiPatterns

“In the modern engineering profession, more than half of the job involves human communication and resolving people issues. The management AntiPatterns identify some of the key scenarios in which these issues are destructive to software processes.

  • Blowhard Jamboree
    The opinions of so-called industry experts often influence technology decisions. Controversial reports that criticize particular technologies frequently appear in popular media and private publications. In addition to technical responsibilities, developers spend too much time answering the concerns of managers and decision makers arising from these reports.
  • Analysis Paralysis
    Striving for perfection and completeness in the analysis phase often leads to project gridlock and excessive thrashing of requirements/models. The refactored solution includes a description of incremental, iterative development processes that defer detailed analysis until the knowledge is needed.
  • Viewgraph Engineering
On some projects, developers become stuck preparing viewgraphs and documents instead of developing software. Management never obtains the proper development tools, and engineers have no alternative but to use office automation software to produce pseudo-technical diagrams and papers.
  • Death by Planning
    Excessive planning for software projects leads to complex schedules that cause downstream problems. We explain how to plan a reasonable software development process that includes incorporating known facts and incremental replanning.
  • Fear of Success
    An interesting phenomenon often occurs when people and projects are on the brink of success. Some people begin to worry obsessively about the kinds of things that can go wrong. Insecurities about professional competence come to the surface.
  • Corncob
    Difficult people frequently obstruct and divert the software development process. Corncobs can be dealt with by addressing their agendas through various tactical, operational, and strategic organizational actions.
  • Intellectual Violence
    Intellectual violence occurs when someone who understands a theory, technology, or buzzword uses this knowledge to intimidate others in a meeting situation.
  • Irrational Management
    Habitual indecisiveness and other bad management habits lead to de facto decisions and chronic development crises. We explain how to utilize rational management decision-making techniques to improve project resolution and for keeping managers on track.
  • Smoke and Mirrors
    Demonstration systems are important sales tools, as they are often interpreted by end users as representational of production-quality capabilities. A management team, eager for new business, sometimes (inadvertently) encourages these misperceptions and makes commitments beyond the capabilities of the organization to deliver operational technology.
  • Project Mismanagement
    Inattention to the management of software development processes can cause directionlessness and other symptoms. Proper monitoring and control of software projects is necessary to successful development activities. Running a product development is as complex an activity as creating the project plan, and developing software is as complex as building skyscrapers, involving as many steps and processes, including checks and balances. Often, key activities are overlooked or minimized.
  • Throw It over the Wall
    Object-oriented methods, design patterns, and implementation plans intended as flexible guidelines are too often taken literally by the downstream managers and object-oriented developers. As guidelines progress through approval and publication processes, they often are attributed with unfulfilled qualities of completeness, prescriptiveness, and mandated implementation.
  • Fire Drill
    Airline pilots describe flying as “hours of boredom followed by 15 seconds of sheer terror.” Many software projects resemble this situation: “Months of boredom followed by demands for immediate delivery.” The months of boredom may include protracted requirements analysis, replanning, waiting for funding, waiting for approval, or any number of technopolitical reasons.
  • The Feud
    Personality conflicts between managers can dramatically affect the work environment. The employees reporting to these managers often suffer the consequences of their supervisors’ disagreements. Animosity between managers is reflected in the attitudes and actions of their employees.
  • E-mail Is Dangerous
    E-mail is an important communication medium for software managers. Unfortunately, it is an inappropriate medium for many topics and sensitive communications.



Refactoring changes the internal structure of your code WITHOUT affecting your code’s behavior. Refactoring is done to increase the cleanness, flexibility, and extensibility of your code, and usually is related to a specific improvement in your design.


Composing Methods

“A large part of refactoring is composing methods to package code properly. Almost all the time the problems come from methods that are too long. Long methods are troublesome because they often contain lots of information, which gets buried by the complex logic that usually gets dragged in.

  • Extract Method
  • Inline Method
  • Inline Temp
  • Introduce Explaining Variable
  • Remove Assignments to Parameters
  • Replace Method with Method Object
  • Replace Temp with Query
  • Split Temporary Variable
  • Substitute Algorithm


Extract Method

“You have a code fragment that can be grouped together.

Turn the fragment into a method whose name explains the purpose of the method.”

void printOwing(double amount) {

    //print details
    System.out.println("name:" + _name);
    System.out.println("amount" + amount);
}

void printOwing(double amount) {
    printDetails(amount);
}

void printDetails(double amount) {
    System.out.println("name:" + _name);
    System.out.println("amount" + amount);
}

Inline Method

“A method’s body is just as clear as its name.

Put the method’s body into the body of its callers and remove the method.

int getRating() {
    return (moreThanFiveLateDeliveries()) ? 2 : 1;
}

boolean moreThanFiveLateDeliveries() {
    return _numberOfLateDeliveries > 5;
}

int getRating() {
    return (_numberOfLateDeliveries > 5) ? 2 : 1;
}


Inline Temp

“You have a temp that is assigned to once with a simple expression, and the temp is getting in the way of other refactorings.

Replace all references to that temp with the expression.

double basePrice = anOrder.basePrice();
return (basePrice > 1000);

return (anOrder.basePrice() > 1000);

Replace Temp with Query

You are using a temporary variable to hold the result of an expression.

Extract the expression into a method. Replace all references to the temp with the expression. The new method can then be used in other methods.

double basePrice = _quantity * _itemPrice;
if (basePrice > 1000)
    return basePrice * 0.95;
else
    return basePrice * 0.98;

if (basePrice() > 1000)
    return basePrice() * 0.95;
else
    return basePrice() * 0.98;

double basePrice() {
    return _quantity * _itemPrice;
}


Moving Features Between Objects

“One of the most fundamental, if not the fundamental, decisions in object design is deciding where to put responsibilities. So, this set of refactorings is all about objects’ responsibilities.

  • Extract Class
  • Hide Delegate
  • Inline Class
  • Introduce Foreign Method
  • Introduce Local Extension
  • Move Field
  • Move Method
  • Remove Middle Man

Organizing Data

“In this chapter we’ll discuss several refactorings that make working with data easier.

  • Change Bidirectional Association to Unidirectional
  • Change Reference to Value
  • Change Unidirectional Association to Bidirectional
  • Change Value to Reference
  • Duplicate Observed Data
  • Encapsulate Collection
  • Encapsulate Field
  • Replace Array with Object
  • Replace Data Value with Object
  • Replace Magic Number with Symbolic Constant
  • Replace Record with Data Class
  • Replace Subclass with Fields
  • Replace Type Code with Class
  • Replace Type Code with State/Strategy
  • Replace Type Code with Subclasses
  • Self Encapsulate Field

Simplifying Conditional Expressions

“Conditional logic has a way of getting tricky, so here are a number of refactorings you can use to simplify it.

  • Consolidate Conditional Expression
  • Consolidate Duplicate Conditional Fragments
  • Decompose Conditional
  • Introduce Assertion
  • Introduce Null Object
  • Remove Control Flag
  • Replace Conditional with Polymorphism
  • Replace Nested Conditional with Guard Clauses
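To illustrate one entry from the list, Replace Nested Conditional with Guard Clauses flattens deep nesting by letting each special case return immediately. A sketch in Java, loosely following the classic payroll example (the amounts are invented for illustration):

```java
class Payroll {
    double deadAmount() { return 0.0; }
    double separatedAmount() { return 10.0; }
    double retiredAmount() { return 20.0; }
    double normalPayAmount() { return 100.0; }

    // Nested form: the normal path is buried at the deepest level.
    double payNested(boolean dead, boolean separated, boolean retired) {
        double result;
        if (dead) result = deadAmount();
        else {
            if (separated) result = separatedAmount();
            else {
                if (retired) result = retiredAmount();
                else result = normalPayAmount();
            }
        }
        return result;
    }

    // Guard-clause form: each special case exits immediately,
    // leaving the normal case as the unindented last line.
    double payGuarded(boolean dead, boolean separated, boolean retired) {
        if (dead) return deadAmount();
        if (separated) return separatedAmount();
        if (retired) return retiredAmount();
        return normalPayAmount();
    }
}
```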

Making Method Calls Simpler

“Objects are all about interfaces. Coming up with interfaces that are easy to understand and use is a key skill in developing good object-oriented software. This chapter explores refactorings that make interfaces more straightforward.

  • Add Parameter
  • Encapsulate Downcast
  • Hide Method
  • Introduce Parameter Object
  • Parameterize Method
  • Preserve Whole Object
  • Remove Parameter
  • Remove Setting Method
  • Rename Method
  • Replace Constructor with Factory Method
  • Replace Error Code with Exception
  • Replace Exception with Test
  • Replace Parameter with Explicit Methods
  • Replace Parameter with Method
  • Separate Query from Modifier

Dealing with Generalization

“Generalization produces its own batch of refactorings, mostly dealing with moving methods around a hierarchy of inheritance.

Big Refactorings

“The preceding chapters present the individual “moves” of refactoring. What is missing is a sense of the whole “game.” You are refactoring to some purpose, not just to avoid making progress (at least usually you are refactoring to some purpose). What does the whole game look like?

  • Convert Procedural Design to Objects
  • Extract Hierarchy
  • Separate Domain from Presentation
  • Tease Apart Inheritance
  • The Nature of the Game

Design, Analysis and Architecture


Key points of software design

  • Mistakes are bound to happen; learn to let go and look ahead to how to improve the design
  • Do not be afraid to let bad design die
  • Be ready to look at your code more than once, even after it is ready
  • Design is iterative and you have to be willing to change your own designs, as well as those that you inherit from other programmers.
  • Most good designs come from analysis of bad designs. Never be afraid to make mistakes and then change things around.
  • Great software is usually about being good enough. In other words, there is no need to try to write the “perfect software”. Be realistic about the circumstances that limit or free your development and design decisions. If your software works, the customer is happy, and you’ve done your best to make sure things are designed well, then it just might be time to move on to the next project.

Design principles

A design principle is a basic tool or technique that can be applied to designing or writing code to make that code more maintainable, flexible, or extensible.

Principle #1: The Open-Closed Principle (OCP)

Open-Closed Principle is about allowing change, but doing it without requiring you to modify existing code: Classes should be open for extension, and closed for modification.

Closed for modification:

Make sure that nobody can change your class’s code, and you’ve made that particular piece of behavior closed for modification. In other words, nobody can change the behavior, because you’ve locked it up in a class that you’re sure won’t change.

Open for extension:

Let others subclass your class, and then they can override your method to work like they want it to. So even though they didn’t mess with your working code, you still left your class open for extension.

OCP is about flexibility, and goes beyond just inheritance.

OCP is not just about inheritance and what it offers you. It is also about the fact that any time you write working code, you want to do your best to make sure that code stays working… and that means not letting other people change that code. When you do want to allow change, rather than diving into your code and making a bunch of modifications, the OCP lets you extend your working code without changing it. Composition is another way of achieving this.

Another way to use the OCP is to provide access to functionality in your class’s private methods. You’re extending the behavior of the private methods, without changing them. So anytime your code is closed for modification but open for extension, you’re using the OCP.

public class Logger
{
    IMessageLogger _messageLogger;

    public Logger(IMessageLogger messageLogger)
    {
        _messageLogger = messageLogger;
    }

    public void Log(string message)
    {
        _messageLogger.Log(message);
    }
}

public interface IMessageLogger
{
    void Log(string message);
}

public class ConsoleLogger : IMessageLogger
{
    public void Log(string message)
    {
        Console.WriteLine(message);
    }
}

public class PrinterLogger : IMessageLogger
{
    public void Log(string message)
    {
        // Code to send message to printer
    }
}



Principle #2: The Don’t Repeat Yourself Principle (DRY)

Don’t Repeat Yourself: Avoid duplicate code by abstracting out things that are common and placing those things in a single location. DRY is really about ONE requirement in ONE place. When you’re trying to avoid duplicate code, you’re really trying to make sure that you only implement each feature and requirement in your application one single time.

DRY is about avoiding duplicate code, but it’s also about doing it in a way that won’t create more problems down the line. Rather than just tossing code that appears more than once into a single class, you need to make sure each piece of information and behavior in your system has a single, clear place where it exists. That way, your system always knows exactly where to go when it needs that information or behavior.

Whether you’re writing requirements, developing use cases, or coding, you want to be sure that you don’t duplicate things in your system. A requirement should be implemented one time, use cases shouldn’t have overlap, and your code shouldn’t repeat itself.
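As a small sketch of this idea (the names here are hypothetical, not from the original text), the “valid email” rule below is implemented in exactly one place, and both callers delegate to it instead of duplicating the check:

```java
// Hypothetical DRY sketch: ONE requirement ("what counts as a valid email")
// lives in ONE place, so every caller agrees and changes happen once.
class EmailRules {
    // The single home of the "valid email" requirement.
    static boolean isValidEmail(String address) {
        return address != null && address.contains("@") && address.indexOf('@') > 0;
    }
}

class RegistrationForm {
    boolean accept(String email) {
        return EmailRules.isValidEmail(email); // delegate, don't duplicate
    }
}

class NewsletterSignup {
    boolean accept(String email) {
        return EmailRules.isValidEmail(email); // same rule, same place
    }
}
```

If the rule ever changes, only `EmailRules` needs to be edited; both forms pick up the change automatically.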

Principle #3: The Single Responsibility Principle (SRP)

The SRP is all about responsibility, and which objects in your system do what. You want each object that you design to have just one responsibility to focus on—and when something about that responsibility changes, you’ll know exactly where to look to make those changes in your code. Each of your objects has only one reason to change.

DRY and SRP are related, and often appear together. DRY is about putting a piece of functionality in a single place, such as a class; SRP is about making sure that a class does only one thing, and that it does it well.

In good applications, one class does one thing, and does it well, and no other classes share that behavior. Cohesion is actually just another name for the SRP. If you’re writing highly cohesive software, then that means that you’re correctly applying the SRP.

public class OxygenMeter
{
    public double OxygenSaturation { get; set; }

    public void ReadOxygenLevel()
    {
        using (MeterStream ms = new MeterStream("O2"))
        {
            int raw = ms.ReadByte();
            OxygenSaturation = (double)raw / 255 * 100;
        }
    }
}

public class OxygenSaturationChecker
{
    public bool OxygenLow(OxygenMeter meter)
    {
        return meter.OxygenSaturation <= 75;
    }
}

public class OxygenAlerter
{
    public void ShowLowOxygenAlert(OxygenMeter meter)
    {
        Console.WriteLine("Oxygen low ({0:F1}%)", meter.OxygenSaturation);
    }
}



Principle #4: The Liskov Substitution Principle (LSP)

Subtypes must be substitutable for their base types.

The LSP is all about well-designed inheritance. When you inherit from a base class, you must be able to substitute your subclass for that base class without things going wrong. Otherwise, you’ve used inheritance incorrectly! Make sure that new derived classes are extending the base classes without changing their behavior.


“The LSP applies to inheritance hierarchies. It specifies that you should design your classes so that client dependencies can be substituted with subclasses without the client knowing about the change. All subclasses must, therefore, operate in the same manner as their base classes. The specific functionality of the subclass may be different but must conform to the expected behaviour of the base class. To be a true behavioural subtype, the subclass must not only implement the base class’s methods and properties but also conform to its implied behaviour. This requires compliance with several rules.


The first rule is that there should be contravariance between parameters of the base class’s methods and the matching parameters in subclasses. This means that the parameters in subclasses must either be the same types as those in the base class or must be less restrictive. Similarly, there must be covariance between method return values in the base class and its subclasses. This specifies that the subclass’ return types must be the same as, or more restrictive than, the base class’ return types.

The next rule concerns preconditions and postconditions. A precondition of a class is a rule that must be in place before an action can be taken. For example, before calling a method that reads from a database you may need to satisfy the precondition that the database connection is open. Postconditions describe the state of objects after a process is completed. For example, it may be assumed that the database connection is closed after executing a SQL statement. The LSP states that the preconditions of a base class must not be strengthened by a subclass and that postconditions cannot be weakened in subclasses.

Next the LSP considers invariants. An invariant describes a condition of a process that is true before the process begins and remains true afterwards. For example, a class may include a method that reads text from a file. If the method handles the opening and closing of the file, an invariant may be that the file is not open before the call or afterwards. To comply with the LSP, the invariants of a base class must not be changed by a subclass.

The next rule is the history constraint. By their nature, subclasses include all of the methods and properties of their superclasses. They may also add further members. The history constraint says that new or modified members should not modify the state of an object in a manner that would not be permitted by the base class. For example, if the base class represents an object with a fixed size, the subclass should not permit this size to be modified.

The final LSP rule specifies that a subclass should not throw exceptions that are not thrown by the base class unless they are subtypes of exceptions that may be thrown by the base class.

The above rules cannot be controlled by the compiler or limited by object-oriented programming languages. Instead, you must carefully consider the design of class hierarchies and of types that may be subclassed in the future. Failing to do so risks the creation of subclasses that break rules and create bugs in types that are dependent upon them.

One common indication of non-compliance with the LSP is when a client class checks the type of its dependencies. This may be by reading a property of an object that artificially describes its type or by using reflection to obtain the type. Often a switch statement will be used to perform a different action according to the type of the dependency. This additional complexity also violates the Open / Closed Principle (OCP), as the client class will need to be modified as further subclasses are introduced. ” –
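A minimal sketch of that smell (hypothetical classes, not from the quoted source): the first `print` method inspects the concrete type of its dependency, so it has to be edited whenever a new subclass appears; the second trusts any subtype to behave like its base type, which is what the LSP asks for.

```java
// Hypothetical LSP/OCP sketch.
class Document {
    String render() { return "plain"; }
}

class PdfDocument extends Document {
    @Override
    String render() { return "pdf"; }
}

class Printer {
    // Smell: the client checks the concrete type, so it must be
    // modified every time a further subclass is introduced.
    String print(Document doc) {
        if (doc instanceof PdfDocument) {
            return "printing pdf";
        }
        return "printing plain";
    }

    // Compliant: any substitutable subtype works without the
    // client knowing which one it received.
    String printSubstitutable(Document doc) {
        return "printing " + doc.render();
    }
}
```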

public class Project
{
    public Collection<ProjectFile> AllFiles { get; set; }

    public Collection<WriteableFile> WriteableFiles { get; set; }

    public void LoadAllFiles()
    {
        foreach (ProjectFile file in AllFiles)
        {
            file.LoadFileData();
        }
    }

    public void SaveAllWriteableFiles()
    {
        foreach (WriteableFile file in WriteableFiles)
        {
            file.SaveFileData();
        }
    }
}

public class ProjectFile
{
    public string FilePath { get; set; }

    public byte[] FileData { get; set; }

    public void LoadFileData()
    {
        // Retrieve FileData from disk
    }
}

public class WriteableFile : ProjectFile
{
    public void SaveFileData()
    {
        // Write FileData to disk
    }
}



Alternatives to inheritance


Delegating functionality to another class. Delegation is when you hand over the responsibility for a particular task to another class or method. Delegation is best used when you want to use another class’s functionality, as is, without changing that behavior at all. If you need to use functionality in another class, but you don’t want to change that functionality, consider using delegation instead of inheritance.

In delegation, the behavior of the object you’re delegating behavior to never changes.

Delegation is the simple yet powerful concept of handing a task over to another part of the program. In object-oriented programming it is used to describe the situation where one object assigns a task to another object, known as the delegate. This mechanism is sometimes referred to as aggregation, consultation or forwarding (when a wrapper object doesn’t pass itself to the wrapped object[4]).


“Delegation is dependent upon dynamic binding, as it requires that a given method call can invoke different segments of code at runtime. It is used throughout Mac OS X (and its predecessor NeXTStep) as a means of customizing the behavior of program components.[5] It enables implementations such as making use of a single OS-provided class to manage windows because the class takes a delegate that is program-specific and can override default behavior as needed. For instance, when the user clicks the close box, the window manager sends the delegate a windowShouldClose: call, and the delegate can delay the closing of the window if there is unsaved data represented by the window’s contents.


It has been argued that delegation may in some cases be preferred for inheritance to make program code more readable and understandable.” –

class A {
    void foo() {
        // "this" is also known under the names "current", "me" and "self" in other languages
        this.bar();
    }

    void bar() {
        print("a.bar");
    }
}

class B {
    private A a; // delegation link

    public B(A a) {
        this.a = a;
    }

    void foo() {
        a.foo(); // call foo() on the a-instance
    }

    void bar() {
        print("b.bar");
    }
}

a = new A();
b = new B(a); // establish delegation between two objects


Assemble behaviors from other classes when you need more than one behavior to choose from. Composition is most powerful when you want to use behavior defined in an interface, and then choose from a variety of implementations of that interface, at both compile time and run time.

In composition, the object composed of other behaviors owns those behaviors. When the object is destroyed, so are all of its behaviors. The behaviors in a composition do not exist outside of the composition itself.

// Composition
class Car
{
    // Car is the owner of carburetor.
    // Carburetor is created when Car is created,
    // it is destroyed when Car is destroyed.
    Carburetor carb;
};





Aggregation is when one class is used as part of another class, but still exists outside of that other class.

An aggregate object is one which contains other objects. For example, an Airplane class would contain Engine, Wing, Tail, Crew objects. Sometimes the class aggregation corresponds to physical containment in the model (like the airplane). But sometimes it is more abstract (e.g. Club and Members).

When should you use composition, and when should you use aggregation? Ask: does the object whose behavior I want to use exist outside of the object that uses its behavior? If the object makes sense existing on its own, then you should use aggregation; if not, then go with composition.

// Aggregation
class Pond
{
    std::vector<Duck*> ducks;
};


“Aggregation differs from ordinary composition in that it does not imply ownership. In composition, when the owning object is destroyed, so are the contained objects. In aggregation, this is not necessarily true. For example, a university owns various departments (e.g., chemistry), and each department has a number of professors. If the university closes, the departments will no longer exist, but the professors in those departments will continue to exist. Therefore, a University can be seen as a composition of departments, whereas departments have an aggregation of professors. In addition, a Professor could work in more than one department, but a department could not be part of more than one university.” –

class Professor;

class Department
{
private:
    // Aggregation
    Professor* members[5];
};

class University
{
private:
    std::vector<Department> faculty;

    void create_dept()
    {
        // Composition (must limit to 20)
        faculty.push_back(Department(…));
        faculty.push_back(Department(…));
    }
};


Alternatives to inheritance – Summary


Delegate behavior to another class when you don’t want to change the behavior, but it’s not your object’s responsibility to implement that behavior on its own.


You can reuse behavior from one or more classes, and in particular from a family of classes, with composition. Your object completely owns the composed objects, and they do not exist outside of their usage in your object.


When you want the benefits of composition, but you’re using behavior from an object that does exist outside of your object, use aggregation.

If you favor delegation, composition, and aggregation over inheritance, your software will usually be more flexible, and easier to maintain, extend, and reuse.

Principle #5: Interface Segregation Principle (ISP)


“The Interface Segregation Principle (ISP) states that clients should not be forced to depend upon interfaces that they do not use. When we have non-cohesive interfaces, the ISP guides us to create multiple, smaller, cohesive interfaces. The original class implements each such interface. Client code can then refer to the class using the smaller interface without knowing that other members exist.

When you apply the ISP, classes and their dependencies communicate using tightly-focussed interfaces, minimising dependencies on unused members and reducing coupling accordingly. Smaller interfaces are easier to implement, improving flexibility and the possibility of reuse. As fewer classes share interfaces, the number of changes that are required in response to an interface modification is lowered. This increases robustness.” –

public interface IEmailable
{
    string Name { get; set; }
    string EmailAddress { get; set; }
}

public interface IDiallable
{
    string Telephone { get; set; }
}

public class Contact : IEmailable, IDiallable
{
    public string Name { get; set; }
    public string Address { get; set; }
    public string EmailAddress { get; set; }
    public string Telephone { get; set; }
}

public class MobileEngineer : IDiallable
{
    public string Name { get; set; }
    public string Vehicle { get; set; }
    public string Telephone { get; set; }
}

public class Emailer
{
    public void SendMessage(IEmailable target, string subject, string body)
    {
        // Code to send email, using target's email address and name
    }
}

public class Dialler
{
    public void MakeCall(IDiallable target)
    {
        // Code to dial telephone number of target
    }
}


Principle #6: Dependency Inversion Principle (DIP)

“The Dependency Inversion Principle (DIP) states that high level modules should not depend upon low level modules. Both should depend upon abstractions. Secondly, abstractions should not depend upon details. Details should depend upon abstractions.

The idea of high level and low level modules categorizes classes in a hierarchical manner. High level modules or classes are those that deal with larger sets of functionality. At the highest level they are the classes that implement business rules within the overall design of a solution. Low level modules deal with more detailed operations. At the lowest level they may deal with writing information to databases or passing messages to the operating system. Of course, there are many levels between the highest and the lowest. The DIP applies to any area where dependencies between classes exist.

Applying the DIP resolves these problems by removing direct dependencies between classes. Instead, higher level classes refer to their dependencies using abstractions, such as interfaces or abstract classes. The lower level classes implement the interfaces, or inherit from the abstract classes. This allows new dependencies to be substituted without impact. Furthermore, changes to lower levels should not cascade upwards as long as they do not involve changing the abstraction.

The effect of the DIP is that classes are loosely coupled. This increases the robustness of the software and improves flexibility. The separation of high level classes from their dependencies raises the possibility of reuse of these larger areas of functionality. Without the DIP, only the lowest level classes may be easily reusable.” –

public interface ITransferSource
{
    void RemoveFunds(decimal value);
}

public interface ITransferDestination
{
    void AddFunds(decimal value);
}

public class BankAccount : ITransferSource, ITransferDestination
{
    public string AccountNumber { get; set; }

    public decimal Balance { get; set; }

    public void AddFunds(decimal value)
    {
        Balance += value;
    }

    public void RemoveFunds(decimal value)
    {
        Balance -= value;
    }
}

public class TransferManager
{
    public ITransferSource Source { get; set; }

    public ITransferDestination Destination { get; set; }

    public decimal Value { get; set; }

    public void Transfer()
    {
        Source.RemoveFunds(Value);
        Destination.AddFunds(Value);
    }
}



  • When you find code that violates the LSP, consider using delegation, composition, or aggregation to use behavior from other classes without resorting to inheritance.
  • If you need behavior from another class but don’t need to change or modify that behavior, you can simply delegate to that class to use the desired behavior.
  • Composition lets you choose a behavior from a family of behaviors, often via several implementations of an interface.
  • When you use composition, the composing object owns the behaviors it uses, and they stop existing as soon as the composing object does.
  • Aggregation allows you to use behaviors from another class without limiting the lifetime of those behaviors.
  • Aggregated behaviors continue to exist even after the aggregating object is destroyed.


  • Program to Interface Not Implementation.
  • Don’t Repeat Yourself.
  • Encapsulate What Varies.
  • Depend on Abstractions, Not Concrete classes.
  • Least Knowledge Principle.
  • Favor Composition over Inheritance.
  • Hollywood Principle.
  • Apply Design Pattern wherever possible.
  • Strive for Loosely Coupled System.
  • Keep it Simple and Sweet / Stupid.



The primary tasks in object-oriented analysis (OOA) are:

  • Find the objects
  • Organize the objects
  • Describe how the objects interact
  • Define the behavior of the objects
  • Define the internals of the objects



Functionalities, requirements and analysis

Key points of requirements

  • Gather requirements about the functionality
  • Figure out what the functionality should really do (think beyond what the client told you; look at things differently and into the future)
  • Get additional information from the client, and make sure you have understood the client correctly by walking through what you have understood. You may have to do this more than once, perhaps several times.
  • Build the right functionality. Make sure that the client is on the same page about the needed functionality and requirements.

What is a requirement?

A requirement is usually a single thing, and you can test that thing to make sure you’ve actually fulfilled the requirement. It’s a specific thing your system has to do to work correctly. A requirement is a singular need detailing what a particular product or service should be or do.

A “system” is the complete app or project you’re working on. A system has a lot of things that it needs to do. What a client comes up with is part of what the system “does”. Remember, the customer decides when a system works correctly. So if you leave out a requirement, or even if they forget to mention something to you, the system isn’t working correctly!

Pay attention to what the system needs to do; you can figure out how the system will do those things later.

Requirements list

When creating a requirements list you need to understand how things will be used.

You’ve got to ask the customer questions to figure out what they want before you can determine exactly what the system should do. Begin by finding out what your customer wants and expects, and what they think the system you’re building for them should do. Then, you can begin to think beyond what your customers asked for and anticipate their needs, even before they realize they have a problem.

Remember, most people expect things to work even if problems occur. So you’ve got to anticipate what might go wrong, and add requirements to take care of those problems as well. A good set of requirements goes beyond just what your customers tell you, and makes sure that the system works, even in unusual or unexpected circumstances.


The system is everything needed to meet a customer’s goals.

You’ve got to make sure your application works like the customer wants it to—even if that’s not how you would use the application. That means you’ve got to really understand what the system has to do, and how your customers are going to use it.

In fact, the only way to ensure you deliver a working, successful application to your client is to know the system even better than they do, and to understand exactly what it needs to do. You can then anticipate problems, and hopefully solve them before your client ever knows something could have gone wrong.

Use case

A use case describes what your system does to accomplish a particular customer goal: the steps that a system takes to make something happen.

“In software and systems engineering, a use case is a list of steps, typically defining interactions between a role (known in Unified Modeling Language (UML) as an “actor”) and a system, to achieve a goal. The actor can be a human or an external system.” –

Use cases are all about the “what”: what a particular goal needs in order to be accomplished, not how, for the moment.

A single use case focuses on a single goal: what needs to be done for a single goal to successfully complete. In other words, avoid having more than one outcome for a single use case.

The customer goal is the point of the use case: what do all these steps need to make happen? The focus is on the client. The system has to help that customer accomplish their goal.

A use case is a technique for capturing the potential requirements of a new system or software change. Each use case provides one or more scenarios that convey how the system should interact with the end user or another system to achieve a specific goal.

Key points for good Use Cases

  • Every use case must have a clear value to the system. If the use case doesn’t help the customer achieve their goal, then the use case isn’t of much use.
  • Every use case must have a definite starting and stopping point. Something must begin the process, and then there must be a condition that indicates that the process is complete.
  • Every use case is started off by an external initiator, outside of the system. Sometimes that initiator is a person, but it could be anything outside of the system.

One of the key points about a use case is that it is focused on accomplishing one particular goal. If your system does more than one thing then you’ll need more than one use case.

Use cases are meant to help you understand what a system should do—and often to explain the system to others (like the customer or your boss). If your use case focuses on specific code-level details, it’s not going to be useful to anyone but a programmer. As a general rule, your use cases should use simple, everyday language. If you’re using lots of programming terms, or technical jargon, your use case is probably getting too detailed to be that useful. You will never write great software if you can’t deliver an app that does what the customer wants it to do. Use cases are a tool to help you figure that out—and then you’re ready to write code to actually implement the system your use case describes.

Summary – Use Cases

  • Your system must work in the real world, plan and test for when things go wrong.
  • Requirements are things your system must do to work correctly.
  • Good requirements ensure your system works like your customers expect.
  • Make sure your requirements cover all the steps in the use cases for your system.
  • Use your use cases to find out about things your customers forgot to tell you.
  • Your use cases will reveal any incomplete or missing requirements that you might have to add to your system.
  • Your initial requirements usually come from your customer.
  • To make sure you have a good set of requirements, you should develop use cases for your system.
  • Use cases detail exactly what your system should do.
  • A use case has a single goal, but can have multiple paths to reach that goal.
  • A good use case has a starting and stopping condition, an external initiator, and clear value to the user.
  • A use case is simply a story about how your system works.
  • You will have at least one use case for each goal that your system must accomplish.
  • After your use cases are complete, you can refine and add to your requirements.
  • A requirements list that makes all your use cases possible is a good set of requirements.
  • Your system must work in the real world, not just when everything goes as you expect it to.
  • When things go wrong, your system must have alternate paths to reach the system’s goals.

Requirements change

Requirements always change. With good use cases you can usually change your software quickly to adjust to those new requirements.

If a use case is confusing to you, you can simply rewrite it. There are tons of different ways that people write use cases, but the important thing is that it makes sense to you, your team, and the people you have to explain it to.

Alternate Paths and Scenarios

An alternate path is one or more steps in a use case that are optional, or that provide alternate ways to work through the use case. Alternate paths can be additional steps added to the main path, or steps that allow you to get to the goal in a totally different way than parts of the main path.

You can have alternate paths that provide additional steps, and multiple ways to get from the starting condition to the ending condition.

A complete path through a use case, from the first step to the last, is called a scenario. Most use cases have several different scenarios, but they always share the same user goal.

Any time you change your use case, you need to go back and check your requirements. Remember to keep things as simple as you can; there’s no need to add complexity if you don’t need it.

Key points to requirement changes

  • Requirements will always change as a project progresses.
  • When requirements change, your system has to evolve to handle the new requirements.
  • When your system needs to work in a new or different way, begin by updating your use case.
  • A scenario is a single path through a use case, from start to finish.
  • A single use case can have multiple scenarios, as long as each scenario has the same customer goal.
  • Alternate paths can be steps that occur only some of the time, or provide completely different paths through parts of a use case.
  • If a step is optional in how a system works, or a step provides an alternate path through a system, use numbered sub-steps, like 3.1, 4.1, and 5.1, or 2.1.1, 2.2.1, and 2.3.1.
  • Try to avoid duplicate code. It’s a maintenance nightmare, and usually points to problems in how you’ve designed your system.


Analysis helps you ensure your system works in a real-world context.

Key points to analysis

  • Figuring out potential problems.
  • Plan a solution.

There is usually more than one solution to a problem, and usually more than one way to write that solution in a use case.

Write your use cases in a way that makes sense to you, your boss, and your customers. Analysis and your use cases let you show customers, managers, and other developers how your system works in a real world context.

Each use case should detail one particular user goal.

Looking at the nouns (and verbs) in your use case to figure out classes and methods is called textual analysis.

Remember, you need classes only for the parts of the system you have to represent.

A good use case clearly and accurately explains what a system does, in language that’s easily understood. With a good use case complete, textual analysis is a quick and easy way to figure out the classes in your system.

Textual analysis

Textual analysis tells you what to focus on, not just what classes you should create.

Think about how the classes you do have can support the behavior your use case describes.

Nouns and Verbs

The point is that the nouns are what you should focus on.

The verbs in your use case are (usually) the methods of the objects in your system.

You’ve already seen how the nouns in your use case usually are a good starting point for figuring out what classes you might need in your system. If you look at the verbs in your use case, you can usually figure out what methods you’ll need for the objects that those classes represent: nouns are candidates for classes… not every noun will be a class.
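For example (a hypothetical sketch, not taken from the original text), textual analysis of the use case step “The customer places an order” suggests the nouns customer and order as candidate classes, and the verb places as a candidate method:

```java
// Hypothetical textual-analysis sketch:
// nouns -> candidate classes, verbs -> candidate methods.

// Noun: "order"
class Order {
    final String item;
    Order(String item) { this.item = item; }
}

// Noun: "customer"
class Customer {
    // Verb: "places an order" -> placeOrder()
    Order placeOrder(String item) {
        return new Order(item);
    }
}
```

Remember that these are only candidates: not every noun earns a class, and not every verb earns a method.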


  • Analysis helps you ensure that your software works in the real world context, and not just in a perfect environment.
  • Use cases are meant to be understood by you, your managers, your customers, and other programmers.
  • You should write your use cases in whatever format makes them most usable to you and the other people who are looking at them.
  • A good use case precisely lays out what a system does, but does not indicate how the system accomplishes that task.
  • Each use case should focus on only one customer goal. If you have multiple goals, you will need to write multiple use cases.

Class Diagrams

Class diagrams give you an easy way to show your system and its code constructs as an overall “big picture”. A class diagram describes the structure of a system by showing the system’s classes, their attributes, operations (or methods), and the relationships among objects.

  • The attributes in a class diagram usually map to the member variables of your classes.
  • The operations in a class diagram usually represent the methods of your classes.

Class diagrams leave lots of detail out, such as class constructors, some type information, and the purpose of operations on your classes.

Textual analysis helps you translate a use case into code-level classes, attributes, and operations.

The nouns of a use case are candidates for classes in your system, and the verbs are candidates for methods on your system’s classes.

Big applications and problems

Solving big problems

You solve big problems the same way you solve small problems.

The best way to look at a big problem is to see it as lots of individual pieces of functionality. You can treat each of those pieces as an individual problem to solve, and apply the things you already know.

You can solve a big problem by breaking it into lots of functional pieces, and then working on each of those pieces individually.

Two things to take into consideration with big problems are:

  • Encapsulation: The more you encapsulate things, the easier it will be for you to break a large app up into different pieces of functionality.
  • Interfaces: By coding to an interface, you reduce dependencies between different parts of your application.
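The second point can be sketched briefly. Below, the checkout function depends only on an abstract PaymentProcessor interface (all names here are hypothetical), so concrete processors can be added or swapped without touching it:

```python
from abc import ABC, abstractmethod

class PaymentProcessor(ABC):
    """The interface that client code depends on."""
    @abstractmethod
    def pay(self, amount):
        ...

class CardProcessor(PaymentProcessor):
    def pay(self, amount):
        return f"charged {amount} to card"

class InvoiceProcessor(PaymentProcessor):
    def pay(self, amount):
        return f"invoiced {amount}"

def checkout(processor: PaymentProcessor, amount):
    # Coded to the interface: checkout() knows nothing about
    # which concrete processor it is given.
    return processor.pay(amount)

print(checkout(CardProcessor(), 10))     # charged 10 to card
print(checkout(InvoiceProcessor(), 5))   # invoiced 5
```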

The best way to get good requirements is to understand what a system is supposed to do.

  • If you know what each small piece of your app’s functionality should do, then it’s easy to combine those parts into a big app that does what it’s supposed to do.

Analysis helps you ensure your system works in a real-world context.

  • Analysis is even more important with large software. In most cases, you start by analyzing individual pieces of functionality, and then analyzing the interaction of those pieces.

Great software is easy to change and extend, and does what the customer wants it to do.

  • This doesn’t change with bigger problems. In fact, the higher the cohesion of your app, the more independent each piece of functionality is, and the easier it is to work on those pieces one at a time.

Things to find out in your system:

  • Commonality: What things are similar? Commonality helps you determine what is important in your system: the things you need to worry about.
  • Variability: What things are different? Variability helps you determine what your system is not like, and tells you what NOT to worry about.


A feature is just a high-level description of something a system needs to do. You usually get features from talking to your customers (or listening in on their conversations).

You can take one feature, and come up with several different requirements that you can use to satisfy that feature. Figuring out a system’s features is a great way to start to get a handle on your requirements.

Get features from the customer, and then figure out the requirements you need to implement those features.

Use case diagrams

Use cases don’t always help you see the big picture. When you start to write use cases, you’re really getting into a lot of detail about what the system should do. The problem is that this can cause you to lose sight of the big picture.

So even though you could start writing use cases, that probably won’t help you figure out exactly what you’re trying to build from the big-picture point of view. When you’re working on a system, it’s a good idea to defer details as long as you can; that way, you won’t get caught up in the little things when you should be working on the big things.

Even though use cases might be a little too focused on the details for where we are in designing the system right now, you still need to have a good understanding of what your system needs to do. So you need a way to focus on the big picture, and figure out what your system should do, while still avoiding getting into too much detail.

When you need to know what a system does, but don’t want to get into all the detail that use cases require, you can use use case diagrams.

Use case diagrams are the blueprints for your system.

The focus here is on the big picture. A use case diagram may seem vague, but it helps you keep your eye on the fundamental things that your system must do.

Use your feature list to make sure your use case diagram is complete. Once you have your features and a use case diagram, you can make sure you’re building a system that will do everything it needs to. Take your use case diagram, and make sure that all the use cases you listed will cover all the features you got from the customer. Then you’ll know that your diagram—the blueprints for your system—is complete, and you can start building the system.

Use a feature or requirement list to capture the BIG THINGS that your system needs to do.

Once you’ve got your features and requirements mapped out, you need to get a basic idea of how the system is going to be put together. Use cases are often too detailed, so a use case diagram can help you see what a system is like from far above, a blueprint for your application.

Draw a use case diagram to show what your system IS without getting into unnecessary detail.

Use cases reflect usage, features reflect functionality.

When writing use cases, we’re dealing with just the interactions between actors and a system. We’re just talking about the ways that your system is used (which is where the term “use case” came from).

The features in your system reflect your system’s functionality. Your system must do those things in order for the use cases to actually work, even though the functionality isn’t always an explicit part of any particular use case.

A use case may depend upon a feature to function, but the feature may not actually be part of the steps in the use case itself.

Use cases are requirements for how people and things (actors) interact with your system, and features are requirements about things that your system must do. They’re related, but they are not the same. Still, to implement a system’s use cases, you’re going to need the functionality in the system’s features. That’s why you should always be able to map your features to the use cases that they enable and are used by.

Your customer—and her customers—only interact with your system through the use cases. So if a feature doesn’t at least indirectly make a use case possible, your customer really isn’t going to see a benefit. If you think you’ve got a feature that doesn’t really affect how your system is used or performs, talk it over with the customer, but don’t be afraid to cut something if it’s not going to improve your system.

The features in your system are what the system does, and are not always reflected in your use cases, which show how the system is used. Features and use cases work together, but they are not the same thing.

Domain Analysis

Domain analysis lets you check your designs while still speaking the customer’s language: it simply means describing a problem using terms the customer will understand.

More formally, domain analysis is the process of identifying, collecting, organizing, and representing the relevant information of a domain, based upon the study of existing systems and their development histories, knowledge captured from domain experts, underlying theory, and emerging technology within a domain.

Domain analysis helps you avoid building parts of a system that aren’t your job to build.


  • The best way to look at a big problem is to view it as a collection of smaller problems.
  • Just like in small projects, start working on big projects by gathering features and requirements.
  • Features are usually “big” things that a system does, although the term is also often used interchangeably with “requirement”.
  • Use cases are detail-oriented; use case diagrams are focused more on the big picture.
  • Your use case diagram should account for all the features in your system.
  • Domain analysis is representing a system in language that the customer will understand.
  • Commonality and variability give you points of comparison between a new system and things you already know about.
  • An actor is anything that interacts with your system, but isn’t part of the system.


It’s not enough to just figure out the individual pieces of a big problem. You also need to know about how those pieces fit together, and which ones might be more important than others; that way, you’ll know what you should work on first.

Architecture is your design structure, and highlights the most important parts of your app, and the relationships between those parts.

Architecture is the organizational structure of a system, including its decomposition into parts, their connectivity, interaction mechanisms, and the guiding principles and decisions that you use in the design of a system.

First step – Functionality

The first step is to make sure an application does what it’s supposed to do. In small projects, we use a requirements list to write down functionality; in big projects, we use a feature list to figure those things out. Features are about functionality: they focus on what the system has to do, not on what principles or patterns you use to build the system.

We need to figure out which pieces are the most important. Those are the pieces we want to focus on first.

What are the most important features? It’s your job to figure out which features you think are the most important, and then in what order you’d work on those things.

The things in your application that are really important are architecturally significant, and you should focus on them FIRST.

Architecture isn’t just about the relationships between parts of your app; it’s also about figuring out which parts are the most important, so you can start building those parts first.

Questions to ask

  • Is it (the feature) part of the essence of the system?
    • Is the feature really core to what the system actually is? Can you imagine the system without that feature? If not, then you’ve probably found a feature that is part of the essence of the system. When you’re looking at a feature, ask yourself: “If this feature wasn’t implemented, would the system still really be what it’s supposed to be?” If the answer is no, you’ve found yourself an “essence feature.” The essence of a system is what it is at its most basic level. In other words, if you stripped away all the bells and whistles, all the “neat” things that marketing threw in, and all the cool ideas you had, what would the system really be about? That’s the essence of the system.
  • What does it mean?
    • If you’re not sure what the description of a particular feature really means, it’s probably pretty important that you pay attention to that feature. Anytime you’re unsure about what something is, it could take lots of time, or create problems with the rest of the system. Spend time on these features early, rather than late.
  • How do I do it?
    • Another place to focus your attention early on is on features that seem really hard to implement, or are totally new programming tasks for you. If you have no idea how you’re going to tackle a particular problem, you better spend some time up front looking at that feature, so it doesn’t create lots of problems down the road.

If you do not know what something means, simply go back to the customer and find out what is going on: get more details and information.

The reason that these features are architecturally significant is that they all introduce risk to your project. It doesn’t matter which one you start with—as long as you are working towards reducing risk. The point here is to reduce risk, not to argue over which key feature you should start with first. You can start with ANY of these, as long as you’re focused on building what you’re supposed to be building.

Focus on one feature at a time to reduce risk in your project. Don’t get distracted by features that won’t help reduce risk.

Find the key features and minimize risks; then you can go back and add more detail to your planning, such as use cases.

Good design will always reduce risk.

OOA&D is all about code—it’s about writing great software, every time. But the way you get to good code isn’t always by sitting down and writing it right away. Sometimes the best way to write great code is to hold off on writing code as long as you can. Plan, organize, architect, understand requirements, reduce risks… all these make the job of actually writing your code very simple.

Customers don’t pay you for great code, they pay you for great software. Great software is more than just great code.

Great code is well-designed, and generally functions like it’s supposed to. But great software not only is well-designed, it comes in on time and does what the customer really wants it to do.

That’s what architecture is about: reducing the risks of you delivering your software late, or having it not work like the customer wants it to. Our key feature list, class diagrams, and those partially done classes all help make sure we’re not just developing great code, but that we’re developing great software.


  • Architecture helps you turn all your diagrams, plans, and feature lists into a well-ordered application.
  • The features in your system that are most important to the project are architecturally significant.
  • Focus first on features that are the essence of your system, features whose meaning you’re unsure about, and features you’re unclear about how to implement.
  • Everything you do in the architectural stages of a project should reduce the risks of your project failing.
  • If you don’t need all the detail of a use case, writing a scenario detailing how your software could be used can help you gather requirements quickly.
  • When you’re not sure what a feature is, you should ask the customer, and then try to generalize the answers you get into a good understanding of the feature.
  • Use commonality analysis to build software solutions that are flexible.
  • Customers are a lot more interested in software that does what they want, and comes in on time, than they are in code that you think is cool/great.

Iterating, Testing and Contracts


You can write good software iteratively. Work on the big picture, and then iterate over pieces of the application until it’s complete.

How do you choose which piece to focus on?

You can choose to focus on specific features of the application. This approach is all about taking one piece of functionality that the customer wants, and working on that functionality until it’s complete.

  • Feature driven development: pick a specific feature in your app, and plan, analyze, and develop that feature to completion.
    • When you’re using feature driven development, you work on a single feature at a time, and then iterate, knocking off features one at a time until you’ve finished up the functionality of an application.
    • Feature driven development is more granular.
      • Works well when you have a lot of different features that don’t interconnect a whole lot.
      • Allows you to show the customer working code faster.
      • Is very functionality-driven. You’re not going to forget about any features using feature driven development.
      • Works particularly well on systems with lots of disconnected pieces of functionality.
  • Use case driven development: pick a scenario through a use case, and write code to support that complete scenario through the use case.
    • With use case driven development, you work on completing a single scenario through a use case. Then you take another scenario and work through it, until all of the use case’s scenarios are complete. Then you iterate to the next use case, until all your use cases are working. With use case driven development, you work from the use case diagram, which lists the different use cases in your system.
    • Use case driven development is more “big picture”.
      • Works well when your app has lots of processes and scenarios rather than individual pieces of functionality.
      • Allows you to show the customer bigger pieces of functionality at each stage of development.
      • Is very user-centric. You’ll code for all the different ways a user can use your system with use case driven development.
      • Works particularly well on transactional systems, where the system is largely defined by lengthy, complicated processes.


Both approaches to iterating are driven by good requirements. Because requirements come from the customer, both approaches focus on delivering what the customer wants.

Your customers want to see something that makes sense to them. Feature driven development delivers results faster, so your customer sees something concrete sooner. Test scenarios are also a way to show the customer how things work.

Writing test scenarios

Test cases don’t have to be very complex; they just provide a way to show your customer that the functionality in your classes is working correctly.

You should test your software for every possible usage you can think of. Be creative! Don’t forget to test for incorrect usage of the software, too. You’ll catch errors early, and make your customers very happy.

Good software development is about mixing a lot of different methods and ways of doing things based on the situation and what is required of the software. You might start with a use case (use case driven development), and then choose just a small feature in that use case to start working on (which is really a form of feature driven development). Finally, you might use tests to figure out how to implement that feature (test driven development).

You want to keep your tests simple, and have them test just a small piece of functionality at a time. If you start testing multiple things at once, it’s hard to tell what might have caused a particular test to fail. You may need a lot more tests, but keep each one focused on a very specific piece of functionality. Each test really focuses on a single piece of functionality; that might involve one method, or several methods.
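A small, atomic test might look like this (the Inventory class is invented for illustration; Python’s unittest is used here):

```python
import unittest

# A tiny class under test.
class Inventory:
    def __init__(self):
        self._items = {}

    def add(self, name, qty):
        self._items[name] = self._items.get(name, 0) + qty

    def count(self, name):
        return self._items.get(name, 0)


class TestInventory(unittest.TestCase):
    # Each test is atomic: it exercises one piece of functionality,
    # so a failure points directly at what broke.
    def test_add_single_item(self):
        inv = Inventory()
        inv.add("widget", 3)
        self.assertEqual(inv.count("widget"), 3)

    def test_count_missing_item_is_zero(self):
        self.assertEqual(Inventory().count("gadget"), 0)
```

You can run the tests with `python -m unittest`.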

Test driven development focuses on getting the behavior of your classes right.

Design decisions are always a tradeoff. Design choices have both positive and negative effects. The smart thing to do is to reevaluate your design decisions and make sure they are working. Be courageous in changing your design and implementation for the best possible end result.

Iteration is really the key point here. Lots of design decisions look great at one stage of your development, but then turn out to be a problem as you get deeper into a particular part of your app. So once you make a decision, stick with it, and iterate deeper into your application. As long as your design is working, and you’re able to use good OO principles and apply design patterns, you’re in good shape. If you start running into trouble with a decision, though, don’t ever be afraid to change designs and rework things.

You always have to make a choice, even if you’re not 100% sure if it’s the right one. It’s always better to take your best guess, and see how things work out, rather than spend endless hours debating one choice or another. That’s called analysis paralysis, and it’s a sure way to not get anything done. It’s much better to start down one path, even if you’re not totally sure it’s the right one, and get some work done, than to not make a choice at all.

Good software is built iteratively. Analyze, design, and then iterate again, working on smaller and smaller parts of your app. Each time you iterate, reevaluate your design decisions, and don’t be afraid to CHANGE something if it makes sense for your design.

What makes a good test case

  • Each test case should have an ID and a name
    • The names of your test cases should describe what is being tested. Test names with nothing but a number at the end aren’t nearly as helpful as names like testProperty() or testCreation().
  • Each test case should have one specific thing that it tests.
    • Each of your test cases should be atomic: each should test only one piece of functionality at a time. This allows you to isolate exactly what piece of functionality might not be working in your application.
  • Each test case should have an input you supply.
    • You’re going to give the test case a value, or a set of values, that it uses as the test data. This data is usually then used to execute some specific piece of functionality or behavior.
  • Each test case should have an output that you expect.
    • Given your input, what should the program, class, or method output? You’ll compare the actual output of the program with your expected output, and if they match, then you’ve got a successful test, and your software works.
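These four ingredients can be sketched as a simple data-driven check (the add function and case ids are made up for illustration):

```python
# Each test case below carries the ingredients listed above:
# an id, a descriptive name, the input you supply, and the
# output you expect.
def add(a, b):                  # function under test
    return a + b

test_cases = [
    # (id,      name,                input,    expected)
    ("TC-01",  "test_add_positive", (2, 3),   5),
    ("TC-02",  "test_add_zero",     (0, 7),   7),
    ("TC-03",  "test_add_negative", (-4, 1),  -3),
]

for case_id, name, args, expected in test_cases:
    actual = add(*args)                      # run the functionality
    status = "Pass" if actual == expected else "Fail"
    print(f"{case_id} {name}: {status}")     # all three print "Pass"
```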

Anatomy of a Test Case

Fields in test cases:

  • Test case id
  • Requirement # / Section
  • Objective: What is to be verified?
  • Assumptions & Prerequisites
  • Test procedure: Steps to be executed
  • Test data: Variables and their values
  • Expected result
  • Actual result
  • Status: Pass/Fail
  • Comments

Types of Test Cases

  • Functional Test Cases
  • Performance Test Cases
  • Security Test Cases
  • Integration Test Cases
  • Positive Test Cases
  • Negative Test Cases
  • Database Test Cases
  • Acceptance Test Cases
  • Usability Test Cases

Software Testing Techniques and Methods

White box testing:

White box testing is done by the developers. It requires knowledge of the internal coding of the software.

White box testing is concerned with testing the implementation of the program. The intent of this testing is not to exercise all the different input or output conditions, but to exercise different programming structures and data structures used in the program. It is commonly called structural testing.

White box testing is mainly applicable to the lower levels of testing: unit testing and integration testing.

Implementation knowledge is required for white box testing.

Black box testing:

Black box testing is done by a professional testing team. It does not require knowledge of the internal coding of the application: the application is tested against its functionality, without knowledge of the software’s internals.

In Black box testing the structure of the program is not considered. Test cases are decided solely on the basis of the requirements or specification of the program or module.

Black box testing is mainly applicable to higher levels of testing: acceptance testing and system testing.

Implementation knowledge is not required for black box testing.

Gray box testing:

A combination of the black box and white box testing methodologies: testing a piece of software against its specification, but using some knowledge of its internal workings. It can be performed by either development or testing teams. In gray box testing, the tester applies a limited number of test cases to the internal workings of the software under test. For the remaining part, the tester takes a black box approach, applying inputs to the software under test and observing the outputs.

Unit testing:

The first test in the development process is the unit test: testing of individual software components or modules. It is typically done by the programmer and not by testers, as it requires detailed knowledge of the internal program design and code. Unit testing depends on the language in which the project is developed.

Integration testing:

Testing in which modules are combined and tested as a group. Modules are typically code modules, individual applications, client and server applications on a network, etc. Integration Testing follows unit testing and precedes system testing.

Regression testing:

Testing the application as a whole after a modification to any module or functionality. Such testing ensures that reported product defects have been corrected for each new release, and that no new quality problems were introduced in the maintenance process.

Usability testing:

The application flow is tested: can a new user understand the application easily, and is proper help documented wherever a user might get stuck? Basically, system navigation is checked in this testing.

Performance testing:

Functional testing conducted to evaluate the compliance of a system or component with specified performance requirements. It is usually conducted by a performance engineer. Performance testing determines whether the software meets speed, scalability, and stability requirements under the expected workload.

Types of performance testing:

Load testing – Load testing is the simplest form of performance testing; it is a generic term covering performance testing and stress testing. It is a testing technique that puts demand on a system or device and measures its response. It is usually conducted by performance engineers.

Stress testing – Stress testing is normally used to understand the upper limits of capacity within a system. It is a testing technique which evaluates a system or component at or beyond the limits of its specified requirements, to determine the load under which it fails and how. It is usually conducted by the performance engineer.

Stress testing involves testing an application under extreme workloads to see how it handles high traffic or data processing. The objective is to identify the breaking point of an application.

Volume testing – Testing which confirms that any values that may become large over time (such as accumulated counts, logs, and data files) can be accommodated by the program, and will not cause the program to stop working or degrade its operation in any manner. It is usually conducted by the performance engineer.

Endurance testing – Also known as soak testing, this type of testing checks for memory leaks or other problems that may occur with prolonged execution. It is usually performed by performance engineers.

Scalability testing – The objective of scalability testing is to determine the software application’s effectiveness in “scaling up” to support an increase in user load. It helps you plan capacity additions to your software system.

Spike testing – Tests the software’s reaction to sudden large spikes in the load generated by users. Spike testing is done by suddenly increasing the number of users, or the load they generate, by a very large amount and observing the behavior of the system. The goal is to determine whether performance will suffer, the system will fail, or it will be able to handle dramatic changes in load.

Recovery testing:

Testing technique which evaluates how well a system recovers from crashes, hardware failures, or other catastrophic problems. It is performed by the testing teams.

Security testing:

A process to determine that an information system protects data and maintains functionality as intended. It can be performed by testing teams or by specialized security-testing companies. Security testing checks how well the system protects against unauthorized internal or external access, and whether the system and database are safe from external attacks.

Conformance testing:

Verifying implementation conformance to industry standards. Producing tests for the behavior of an implementation to be sure it provides the portability, interoperability, and/or compatibility a standard defines.

Smoke testing:

Testing technique which examines all the basic components of a software system to ensure that they work properly. Typically, smoke testing is conducted by the testing team, immediately after a software build is made.

Compatibility testing:

Testing to ensure compatibility of an application or Web site with different browsers, OSs, and hardware platforms. Compatibility testing can be performed manually or can be driven by an automated functional or regression test suite.

System testing:

Testing conducted on a complete, integrated system to evaluate the system’s compliance with its specified requirements. System testing falls within the scope of black box testing, and as such, should require no knowledge of the inner design of the code or logic.

Alpha testing:

A type of testing of a software product or system conducted at the developer’s site, usually performed by end users. It is done towards the end of development.

Beta testing:

Testing typically done by end users or others, as the final testing before releasing the application for commercial purposes.

Acceptance testing:

Testing to verify a product meets customer specified requirements. A customer usually does this type of testing on a product that is developed externally.

Comparison testing:

Comparison of product strengths and weaknesses with previous versions or other similar products.

Sanity testing:

Testing to determine if a new software version is performing well enough to accept it for a major testing effort. If the application crashes on initial use, the system is not stable enough for further testing, and the build is assigned back to be fixed.

Ad-hoc Testing:

Testing performed without planning and documentation – the tester tries to ‘break’ the system by randomly trying the system’s functionality. It is performed by the testing teams.


Install/uninstall testing – Testing of full, partial, or upgrade install/uninstall processes, on different operating systems and under different hardware and software environments.

Internationalization and localization testing:

The general ability of software to be internationalized and localized can be automatically tested without actual translation, by using pseudolocalization. This verifies that the application still works, even after it has been translated into a new language or adapted for a new culture (such as different currencies or time zones).
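A toy sketch of the idea behind pseudolocalization (the character mapping and bracket markers here are invented for illustration):

```python
# Pseudolocalization: wrap strings in markers and swap letters for
# accented look-alikes, so untranslated or clipped text stands out
# in the UI without a real translation being needed.
MAP = str.maketrans("aeiou", "àéîöü")

def pseudolocalize(s):
    return "[" + s.translate(MAP) + "]"

print(pseudolocalize("Open file"))   # [Opén fîlé]
```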


A sample testing cycle


Although variations exist between organizations, there is a typical cycle for testing. The sample below is common among organizations employing the waterfall development model. The same practices are commonly found in other development models, but might not be as clear or explicit.

  • Requirements analysis: Testing should begin in the requirements phase of the software development life cycle. During the design phase, testers work to determine what aspects of a design are testable and with what parameters those tests work.
  • Test planning: Test strategy, test plan, testbed creation. Since many activities will be carried out during testing, a plan is needed.
  • Test development: Test procedures, test scenarios, test cases, test datasets, test scripts to use in testing software.
  • Test execution: Testers execute the software based on the plans and test documents, then report any errors found to the development team.
  • Test reporting: Once testing is completed, testers generate metrics and make final reports on their test effort and whether or not the software tested is ready for release.
  • Test result analysis: Or Defect Analysis, is done by the development team usually along with the client, in order to decide what defects should be assigned, fixed, rejected (i.e. found software working properly) or deferred to be dealt with later.
  • Defect Retesting: Once a defect has been dealt with by the development team, it is retested by the testing team. AKA Resolution testing.
  • Regression testing: It is common to have a small test program built of a subset of tests, for each integration of new, modified, or fixed software, in order to ensure that the latest delivery has not ruined anything, and that the software product as a whole is still working correctly.
  • Test Closure: Once the test meets the exit criteria, activities such as capturing the key outputs, lessons learned, results, logs, and documents related to the project are archived and used as a reference for future projects.


Programming by contract

When you’re writing software, you’re also creating a contract between that software and the people that use it. The contract details how the software will work when certain actions are taken—like requesting a non-existent property in an object.

If the customer wants an action to result in different behavior, then you’re changing the contract. So if your framework should throw an exception when a non-existent property is queried, that’s fine; it just means that the contract between the framework’s users and the framework has changed.

When you program by contract, you and your software’s users are agreeing that your software will behave in a certain way. Programming by contract is really all about trust.

When you return null, you’re trusting programmers to be able to deal with null return values. Programmers are basically saying that they’ve coded things well enough that they won’t ask for non-existent properties, so their code just doesn’t worry about getting null values back from an object.

But what happens if you don’t think your code will be used correctly? Or if you think that certain actions are such a bad idea that you don’t want to let users deal with them in their own way? In these cases, you may want to consider defensive programming. Instead of returning null, you throw an exception that forces the user of your code to deal with the situation.

When you are programming by contract, you’re working with client code to agree on how you’ll handle problem situations. When you’re programming defensively, you’re making sure the client gets a “safe” response, no matter what the client wants to have happen.
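As a sketch of the difference, here is a hypothetical Java property store (all class and method names are invented for illustration) with one accessor written by contract and one written defensively:

```java
import java.util.HashMap;
import java.util.Map;

public class PropertyStore {
    private final Map<String, String> props = new HashMap<>();

    public void set(String key, String value) {
        props.put(key, value);
    }

    // Programming by contract: the contract says "unknown keys yield null",
    // and callers are trusted to handle a null return value.
    public String getByContract(String key) {
        return props.get(key); // may be null
    }

    // Defensive programming: a missing key is treated as an error the caller
    // is forced to confront, here via a checked exception.
    public String getDefensively(String key) throws MissingPropertyException {
        String value = props.get(key);
        if (value == null) {
            throw new MissingPropertyException("No property named: " + key);
        }
        return value;
    }

    public static class MissingPropertyException extends Exception {
        public MissingPropertyException(String message) {
            super(message);
        }
    }
}
```

Note that the compiler makes callers of `getDefensively` catch or declare the checked exception; that is the “safe response, no matter what” in code form.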



  • The first step in writing good software is to make sure your application works like the customer expects and wants it to.
  • Customers don’t usually care about diagrams and lists; they want to see your software actually do something.
  • Use case driven development focuses on one scenario in a use case in your application at a time.
  • In use case driven development, you focus on a single scenario at a time, but you usually code all the scenarios in one use case before moving on to scenarios in other use cases.
  • Feature driven development allows you to code a complete feature before moving on to anything else.
  • You can choose to work on either big or small features in feature-driven development, as long as you take each feature one at a time.
  • Software development is always iterative. You look at the big picture, and then iterate down to smaller pieces of functionality.
  • You have to do analysis and design at each step of your development cycle, including when you start working on a new feature or use case.
  • Tests allow you to make sure your software doesn’t have any bugs, and let you prove to your customer that your software works.
  • A good test case only tests one specific piece of functionality.
  • Test cases may involve only one, or several, methods in a single class, or may involve multiple classes.
  • Test driven development is based on the idea that you write your tests first, and then develop software that passes those tests. The result is fully functional, working software.
  • Programming by contract assumes both sides in a transaction understand what actions generate what behavior, and will abide by that contract.
  • Methods usually return null or unchecked exceptions when errors occur in programming by contract environments.
  • Defensive programming looks for things to go wrong, and tests extensively to avoid problem situations.
  • Methods usually return “empty” objects or throw checked exceptions in defensive programming environments.
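The test-first idea in the bullets above can be sketched in Java. This hypothetical Inventory class (all names invented for illustration) was grown to satisfy the two small tests in main, each of which checks one specific piece of functionality:

```java
import java.util.ArrayList;
import java.util.List;

public class Inventory {
    private final List<String> items = new ArrayList<>();

    public void addItem(String name) { items.add(name); }

    public boolean contains(String name) { return items.contains(name); }

    // Run with assertions enabled: java -ea Inventory
    public static void main(String[] args) {
        // Test 1: adding an item makes it findable (one piece of functionality).
        Inventory inv = new Inventory();
        inv.addItem("guitar");
        assert inv.contains("guitar") : "added item should be found";

        // Test 2: a new, empty inventory finds nothing (a separate test).
        assert !new Inventory().contains("guitar") : "empty inventory has no items";
    }
}
```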

The Lifecycle of software development


Feature List

Figure out what your app is supposed to do at a high level. Feature lists are all about understanding what your software is supposed to do.

Use Case Diagrams

Nail down the big processes that your app performs, and any external forces that are involved. Use case diagrams let you start thinking about how your software will be used, without getting into a bunch of unnecessary details.

Break Up the Problem

Break your application up into modules of functionality, and then decide on an order in which to tackle each of your modules.


Requirements

Figure out the individual requirements for each module, and make sure those fit in with the big picture.

Domain Analysis

Figure out how your use cases map to objects in your app, and make sure your customer is on the same page as you are.

Preliminary Design

Fill in details about your objects, define relationships between the objects, and apply principles and patterns.


Implementation

Write code, test it, and make sure it works. Do this for each behavior, each feature, each use case, each problem, until you’re done.


Delivery

You’re done! Release your software.

OOA&D is about having lots of options. There is never one right way to solve a problem, so the more options you have, the better chance you’ll find a good solution to every problem.




Head First Object-Oriented Analysis & Design – Book





Other programming resources

Database related


Object-relational impedance mismatch

From Wikipedia, the free encyclopedia


The object-relational impedance mismatch is a set of conceptual and technical difficulties that are often encountered when a relational database management system (RDBMS) is being used by a program written in an object-oriented programming language or style; particularly when objects or class definitions are mapped in a straightforward way to database tables or relational schema.

The term object-relational impedance mismatch is derived from the electrical engineering term impedance matching.




Object-oriented concepts


Object-oriented programs are designed with techniques that result in encapsulated objects whose representation is hidden. In an object-oriented framework, the underlying properties of a given object are expected to be unexposed to any interface outside of the one implemented alongside the object. However, object-relational mapping necessarily exposes the underlying content of an object to interaction with an interface that the object implementation cannot specify. Hence, object-relational mapping violates the encapsulation of the object.


In relational thinking, “private” versus “public” access is relative to need rather than being an absolute characteristic of the data’s state, as in the OO model. The relational and OO models often have conflicts over relativity versus absolutism of classifications and characteristics.

Interface, class, inheritance and polymorphism

Under an object-oriented paradigm, objects have interfaces that together provide the only access to the internals of that object. The relational model, on the other hand, utilizes derived relation variables (views) to provide varying perspectives and constraints to ensure integrity. Similarly, essential OOP concepts such as classes of objects, inheritance, and polymorphism are not supported by relational database systems.

Mapping to relational concepts

A proper mapping between relational concepts and object-oriented concepts can be made if relational database tables are linked to associations found in object-oriented analysis.

Data type differences

A major mismatch between existing relational and OO languages is the type system differences. The relational model strictly prohibits by-reference attributes (or pointers), whereas OO languages embrace and expect by-reference behavior. Scalar types and their operator semantics also often differ, subtly or vastly, between the models, causing problems in mapping.

For example, most SQL systems support string types with varying collations and constrained maximum lengths (open-ended text types tend to hinder performance), while most OO languages consider collation only as an argument to sort routines and strings are intrinsically sized to available memory. A more subtle, but related example is that SQL systems often ignore trailing white space in a string for the purposes of comparison, whereas OO string libraries do not. It is typically not possible to construct new data types as a matter of constraining the possible values of other primitive types in an OO language.
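The trailing-whitespace point can be seen from the OO side in a few lines of Java. (The SQL behavior described above varies by engine and column type, so treat the comments as a sketch rather than a guarantee for any particular database.)

```java
public class TrailingSpaceMismatch {
    public static void main(String[] args) {
        // In many SQL systems, CHAR comparison pads or ignores trailing
        // spaces, so 'abc   ' = 'abc' evaluates to TRUE there.
        String fromDb = "abc   "; // e.g. a value read from a CHAR(6) column
        String inApp  = "abc";

        // Java's String comparison does no such thing:
        System.out.println(fromDb.equals(inApp));        // false
        // Mapping layers often compensate by trimming explicitly:
        System.out.println(fromDb.trim().equals(inApp)); // true
    }
}
```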

Structural and integrity differences

Another mismatch has to do with the differences in the structural and integrity aspects of the contrasted models. In OO languages, objects can be composed of other objects—often to a high degree—or specialize from a more general definition. This may make the mapping to relational schemas less straightforward. This is because relational data tends to be represented in a named set of global, unnested relation variables. Relations themselves, being sets of tuples all conforming to the same header, do not have an ideal counterpart in OO languages. Constraints in OO languages are generally not declared as such, but are manifested as exception raising protection logic surrounding code that operates on encapsulated internal data. The relational model, on the other hand, calls for declarative constraints on scalar types, attributes, relation variables, and the database as a whole.
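A minimal Java sketch of the compositional difference (class and table names are invented for illustration): the object model nests line items directly inside an order, while a straightforward relational mapping flattens them into two tables linked by a foreign key, as the comments show.

```java
import java.util.List;

public class Order {                          // table: orders(id, customer)
    public final int id;
    public final String customer;
    public final List<LineItem> items;        // table: line_items(order_id, product, qty)

    public Order(int id, String customer, List<LineItem> items) {
        this.id = id;
        this.customer = customer;
        // This nesting of one object inside another has no direct
        // counterpart in a set of flat, unnested relation variables.
        this.items = items;
    }

    public static class LineItem {
        public final String product;
        public final int qty;

        public LineItem(String product, int qty) {
            this.product = product;
            this.qty = qty;
        }
    }
}
```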

Manipulative differences

The semantic differences are especially apparent in the manipulative aspects of the contrasted models, however. The relational model has an intrinsic, relatively small and well defined set of primitive operators for usage in the query and manipulation of data, whereas OO languages generally handle query and manipulation through custom-built or lower-level, case and physical access path specific imperative operations. Some OO languages do have support for declarative query sub-languages, but because OO languages typically deal with lists and perhaps hash-tables, the manipulative primitives are necessarily distinct from the set-based operations of the relational model.
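To illustrate, the same query can be written declaratively in SQL or as a traversal of an in-memory collection; the Java stream version below is a sketch with invented names:

```java
import java.util.List;
import java.util.stream.Collectors;

public class QueryComparison {

    public static class Employee {
        public final String name;
        public final int salary;

        public Employee(String name, int salary) {
            this.name = name;
            this.salary = salary;
        }
    }

    // Relational, declarative form (one set-based operation):
    //   SELECT name FROM employees WHERE salary > 50000
    // OO form: an explicit traversal of an in-memory list.
    public static List<String> wellPaidNames(List<Employee> employees) {
        return employees.stream()
                .filter(e -> e.salary > 50_000)   // WHERE salary > 50000
                .map(e -> e.name)                 // SELECT name
                .collect(Collectors.toList());
    }
}
```

Even the stream version, though it reads declaratively, operates on a list rather than a set and is compiled to imperative iteration, which is the distinction the paragraph above describes.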

Transactional differences

The concurrency and transaction aspects are significantly different as well. In particular, relational database transactions, as the smallest unit of work performed by databases, are much larger than any operations performed by classes in OO languages. Transactions in relational databases are dynamically bounded sets of arbitrary data manipulations, whereas the granularity of transactions in OO languages is typically individual assignments of primitive-typed fields. OO languages typically have no analogue of isolation or durability either, and atomicity and consistency are ensured only for writes of primitive-typed fields.

Solving impedance mismatch

Solving the impedance mismatch problem for object-oriented programs starts with recognition of the differences in the specific logic systems being employed, then either the minimization or compensation of the mismatch.


There have been some attempts at building object-oriented database management systems (OODBMS) that would avoid the impedance mismatch problem. They have been less successful in practice than relational databases, however, partly due to the limitations of OO principles as a basis for a data model.[1] There has been research performed in extending the database-like capabilities of OO languages through such notions as transactional memory.

One common solution to the impedance mismatch problem is to layer the domain and framework logic. In this scheme, the OO language is used to model certain relational aspects at runtime rather than attempt the more static mapping. Frameworks which employ this method will typically have an analogue for a tuple, usually as a “row” in a “dataset” component or as a generic “entity instance” class, as well as an analogue for a relation. Advantages of this approach may include:

  • Straightforward paths to build frameworks and automation around transport, presentation, and validation of domain data.
  • Smaller code size; faster compile and load times.
  • Ability for the schema to change dynamically.
  • Avoids the name-space and semantic mismatch issues.
  • Expressive constraint checking
  • No complex mapping necessary

Disadvantages may include:

  • Lack of static type “safety” checks. Typed accessors are sometimes utilized as one way to mitigate this.
  • Possible performance cost of runtime construction and access.
  • Inability to natively utilize uniquely OO aspects, such as polymorphism.
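A minimal sketch of such a generic “entity instance” class in Java, including the typed-accessor mitigation mentioned above (all names are invented for illustration):

```java
import java.util.HashMap;
import java.util.Map;

// One class models any row at runtime, instead of a static per-table mapping.
public class EntityInstance {
    private final Map<String, Object> fields = new HashMap<>();
    private boolean modified = false; // framework state, alongside domain data

    public void set(String column, Object value) {
        fields.put(column, value);
        modified = true;
    }

    // Untyped access: no compile-time check at all.
    public Object get(String column) {
        return fields.get(column);
    }

    // Typed accessor: still fails only at runtime if the type is wrong,
    // but at least it fails loudly and close to the misuse.
    public <T> T get(String column, Class<T> type) {
        return type.cast(fields.get(column));
    }

    public boolean isModified() {
        return modified;
    }
}
```

The `isModified` flag shows the anomaly noted later in the text: framework infrastructure properties end up living next to domain data in the same object.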

Alternative architectures

The rise of XML databases and XML client structures has motivated other alternative architectures to get around the impedance mismatch challenges. These architectures use XML technology in the client (such as XForms) and native XML databases on the server that use the XQuery language for data selection. This allows a single data model and a single data selection language (XPath) to be used in the client, in the rules engines and on the persistence server.[2]


The mixing of levels of discourse within OO application code presents problems, but there are some common mechanisms used to compensate. The biggest challenge is to provide framework support, automation of data manipulation and presentation patterns, within the level of discourse in which the domain data is being modeled. To address this, reflection and/or code generation are utilized. Reflection allows code (classes) to be addressed as data and thus provide automation of the transport, presentation, integrity, etc. of the data. Generation addresses the problem through addressing the entity structures as data inputs for code generation tools or meta-programming languages, which produce the classes and supporting infrastructure en masse. Both of these schemes may still be subject to certain anomalies where these levels of discourse merge. For instance, generated entity classes will typically have properties which map to the domain (e.g. Name, Address) as well as properties which provide state management and other framework infrastructure (e.g. IsModified).


It has been argued, by Christopher J. Date and others, that a truly relational DBMS would pose no such problem,[3][4][5] as domains and classes are essentially one and the same thing. On this view, a naïve mapping between classes and relational schemata is a fundamental design mistake, and individual tuples within a database table (relation) ought to be viewed as establishing relationships between entities, not as representations of complex entities themselves. However, this view tends to diminish the influence and role of object oriented programming, using it as little more than a field type management system.

There is also an impedance mismatch between the domain objects and the user interface. Sophisticated user interfaces, which allow operators, managers, and other non-programmers to access and manipulate the records in the database, often require intimate knowledge about the nature of the various database attributes (beyond name and type). In particular, it’s considered good practice (from an end-user productivity point of view) to design user interfaces so that the UI prevents illegal transactions (those which would cause a database constraint to be violated) from being entered; doing so requires much of the logic present in the relational schemata to be duplicated in the code.

Certain code-development frameworks can leverage certain forms of logic that are represented in the database’s schema (such as referential integrity constraints), so that such issues are handled in a generic and standard fashion through library routines rather than ad hoc code written on a case-by-case basis.

It has been argued that SQL, due to its very limited set of domain types (and other alleged flaws), makes proper object and domain modelling difficult, and that SQL constitutes a very lossy and inefficient interface between a DBMS and an application program (whether written in an object-oriented style or not). However, SQL is currently the only widely accepted common database language in the marketplace; use of vendor-specific query languages is seen as a bad practice when avoidable. Other database languages such as Business System 12 and Tutorial D have been proposed; but none of these has been widely adopted by DBMS vendors.

In current versions of mainstream “object-relational” DBMSs like Oracle and Microsoft SQL Server, the above point may be a non-issue. With these engines, the functionality of a given database can be arbitrarily extended through stored code (functions and procedures) written in a modern OO language (Java for Oracle, and a Microsoft .NET language for SQL Server), and these functions can in turn be invoked in SQL statements in a transparent fashion: that is, the user neither knows nor cares that these functions/procedures were not originally part of the database engine. Modern software-development paradigms are fully supported: thus, one can create a set of library routines that can be re-used across multiple database schemas.

These vendors decided to support OO-language integration at the DBMS back-end because they realized that, despite the attempts of the ISO SQL-99 committee to add procedural constructs to SQL, SQL will never have the rich set of libraries and data structures that today’s application programmers take for granted, and it is reasonable to leverage these as directly as possible rather than attempting to extend the core SQL language. Consequently, the difference between “application programming” and “database administration” is now blurred: robust implementation of features such as constraints and triggers may often require an individual with dual DBA/OO-programming skills, or a partnership between individuals who combine these skills. This fact also bears on the “division of responsibility” issue below.

Some, however, would point out that this contention is moot due to the fact that: (1) RDBMSes were never intended to facilitate object modelling, and (2) SQL generally should only be seen as a “lossy” or “inefficient” interface language when one is trying to achieve a solution for which RDBMSes were not designed. SQL is very efficient at doing what it was designed to do, namely, to query, sort, filter, and store large sets of data. Some would additionally point out that the inclusion of OO language functionality in the back-end simply facilitates bad architectural practice, as it admits high-level application logic into the data tier, antithetical to the RDBMS.

A related question is where the “canonical” copy of state is located. The database model generally assumes that the database management system is the only authoritative repository of state concerning the enterprise; any copies of such state held by an application program are just that — temporary copies (which may be out of date, if the underlying database record was subsequently modified by a transaction). Many object-oriented programmers prefer to view the in-memory representations of objects themselves as the canonical data, and view the database as a backing store and persistence mechanism.

Another issue is the proper division of responsibility between application programmers and database administrators (DBAs). It is often the case that needed changes to application code (in order to implement a requested new feature or functionality) require corresponding changes in the database definition; in most organizations, the database definition is the responsibility of the DBA. Because production database systems often must be maintained 24 hours a day, many DBAs are reluctant to make changes to database schemata that they deem gratuitous or superfluous, and in some cases outright refuse to do so. Use of developmental databases (apart from production systems) can help somewhat, but when the newly developed application “goes live”, the DBA will need to approve any changes. Some programmers view this as intransigence; however, the DBA is frequently held responsible if any changes to the database definition cause a loss of service in a production system—as a result, many DBAs prefer to contain design changes to application code, where design defects are far less likely to have catastrophic consequences.

In organizations where the relationship between DBAs and application programmers is not dysfunctional, the above point is a non-issue, and the decision as to whether to change a schema or not is driven by business needs. If new functionality is mandated, and it requires the capture of information that must be persisted, schema enhancements to achieve persistence are the logical first step. Certain schema modifications (including addition of indexes) will also be acceptable if they dramatically improve performance of a critical application. But schema modifications that might serve no purpose beyond making a programmer’s life modestly easier would be vetoed if they result in unacceptable design decisions such as denormalization of transactional schemas.

Philosophical differences

Key philosophical differences between the OO and relational models can be summarized as follows:

  • Declarative vs. imperative interfaces — Relational thinking tends to use data as interfaces, not behavior as interfaces. It thus has a declarative tilt in design philosophy in contrast to OO’s behavioral tilt. (Some relational proponents propose using triggers, stored procedures, etc. to provide complex behavior, but this is not a common viewpoint.)
  • Schema bound — Objects do not have to follow a “parent schema” for which attributes or accessors an object has, while table rows must follow the entity’s schema. A given row must belong to one and only one entity. The closest thing in OO is inheritance, but it is generally tree-shaped and optional. A dynamic form of relational tools that allows ad hoc columns may relax schema bound-ness, but such tools are currently rare.
  • Access rules — In relational databases, attributes are accessed and altered through predefined relational operators, while OO allows each class to create its own state alteration interface and practices. The “self-handling noun” viewpoint of OO gives independence to each object that the relational model does not permit. This is a “standards versus local freedom” debate. OO tends to argue that relational standards limit expressiveness, while relational proponents suggest the rule adherence allows more abstract math-like reasoning, integrity, and design consistency.
  • Relationship between nouns and actions — OO encourages a tight association between operations (actions) and the nouns (entities) that the operations operate on. The resulting tightly bound entity containing both nouns and the operations is usually called a class, or in OO analysis, a concept. Relational designs generally do not assume there is anything natural or logical about such tight associations (outside of relational operators).
  • Uniqueness observation — Row identities (keys) generally have a text-representable form, but objects do not require an externally viewable unique identifier.
  • Object identity — Objects (other than immutable ones) are generally considered to have a unique identity; two objects which happen to have the same state at a given point in time are not considered to be identical. Relations, on the other hand, have no inherent concept of this kind of identity. That said, it is a common practice to fabricate “identity” for records in a database through use of globally unique candidate keys; though many consider this a poor practice for any database record which does not have a one-to-one correspondence with a real world entity. (Relational, like objects, can use domain keys if they exist in the external world for identification purposes). Relational systems in practice strive for and support “permanent” and inspect-able identification techniques, whereas object identification techniques tend to be transient or situational.
  • Normalization — Relational normalization practices are often ignored by OO designs. However, this may just be a bad habit instead of a native feature of OO. An alternate view is that a collection of objects, interlinked via pointers of some sort, is equivalent to a network database; which in turn can be viewed as an extremely denormalized relational database.
  • Schema inheritance — Most relational databases do not support schema inheritance. Although such a feature could be added in theory to reduce the conflict with OOP, relational proponents are less likely to believe in the utility of hierarchical taxonomies and sub-typing because they tend to view set-based taxonomies or classification systems as more powerful and flexible than trees. OO advocates point out that inheritance/subtyping models need not be limited to trees (though this is a limitation in many popular OO languages such as Java), but non-tree OO solutions are seen as more difficult to formulate than the set-based variation-on-a-theme management techniques preferred by relational proponents. At the least, they differ from techniques commonly used in relational algebra.
  • Structure vs. behaviour — OO primarily focuses on ensuring that the structure of the program is reasonable (maintainable, understandable, extensible, reusable, safe), whereas relational systems focus on what kind of behaviour the resulting run-time system has (efficiency, adaptability, fault-tolerance, liveness, logical integrity, etc.). Object-oriented methods generally assume that the primary user of the object-oriented code and its interfaces are the application developers. In relational systems, the end-users’ view of the behaviour of the system is sometimes considered to be more important. However, relational queries and “views” are common techniques to re-represent information in application- or task-specific configurations. Further, relational does not prohibit local or application-specific structures or tables from being created, although many common development tools do not directly provide such a feature, assuming objects will be used instead. This makes it difficult to know whether the stated non-developer perspective of relational is inherent to relational, or merely a product of current practice and tool implementation assumptions.
  • Set vs. graph relationships — The relationship between different items (objects or records) tend to be handled differently between the paradigms. Relational relationships are usually based on idioms taken from set theory, while object relationships lean toward idioms adopted from graph theory (including trees). While each can represent the same information as the other, the approaches they provide to access and manage information differ.
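The object identity point above can be shown in a few lines of Java: two objects with the same state compare equal, yet remain distinct objects, whereas two identical tuples in a relation would simply be the same tuple.

```java
public class IdentityDemo {

    public static class Point {
        public final int x, y;

        public Point(int x, int y) {
            this.x = x;
            this.y = y;
        }

        @Override
        public boolean equals(Object o) {
            if (!(o instanceof Point)) return false;
            Point p = (Point) o;
            return p.x == x && p.y == y;
        }

        @Override
        public int hashCode() {
            return 31 * x + y;
        }
    }

    // Run with assertions enabled: java -ea IdentityDemo
    public static void main(String[] args) {
        Point a = new Point(1, 2);
        Point b = new Point(1, 2);
        assert a.equals(b); // same state: value equality holds
        assert a != b;      // but distinct identities: separate objects
    }
}
```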

As a result of the object-relational impedance mismatch, it is often argued by partisans on both sides of the debate that the other technology ought to be abandoned or reduced in scope.[6] Some database advocates view traditional “procedural” languages as more compatible with an RDBMS than many OO languages; or suggest that a less OO-style ought to be used. (In particular, it is argued that long-lived domain objects in application code ought not to exist; any such objects that do exist should be created when a query is made and disposed of when a transaction or task is complete). On the other hand, many OO advocates argue that more OO-friendly persistence mechanisms, such as OODBMS, ought to be developed and used, and that relational technology ought to be phased out. Of course, it should be pointed out that many (if not most) programmers and DBAs do not hold either of these viewpoints; and view the object-relational impedance mismatch as a mere fact of life that information technology has to deal with.

It is also argued that O/R mapping pays off in some situations but is probably oversold: it has drawbacks as well as advantages. Skeptics point out that it is worth thinking carefully before using it, as it will add little value in some cases.[7]



