Charlie Alfred’s Weblog

Is Architecture? Is Not Architecture

Encapsulation is a long-standing, “tried and true” principle of architecture. It dates back over 35 years to David Parnas’ work on the importance of information hiding, which was a cornerstone of function-based and structured programming. Twenty years later, in the middle of the object-oriented revolution, Gamma, Helm, Johnson, and Vlissides (the “gang of four”) amplified the importance of this concept with their advice to “encapsulate the concept that varies.”

As you might imagine, encapsulation depends heavily on “separation of concerns” and “clearly drawn boundaries.” All three concepts are enshrined in the “Architecture Hall of Fame.”

A few years ago, I was working at a software outsourcing company, and we were doing a project to write the control software for an excimer laser engine (i.e. the component that powers many applications, such as laser printers or laser-beam photomask generators for semiconductor chips). While trying to pin down requirements and scope, one of my colleagues engaged the client in an exercise of “is a laser?” vs. “is not a laser?” Somebody would mention a capability, and the group would debate the question: “Is this capability the responsibility of the laser, or not? Is it in scope (technically) or out of scope?” This was a simple, effective exercise for getting the entire team (including product marketing, development, and QA people) to reach a common understanding of the system’s scope.

Today, I work at a foreign-language learning company, building a repository to manage language content assets. We played a variant of the “is a laser” game.

We determined early on that the repository would hold “application-neutral language content”. This gave us two specific dimensions to consider when addressing the “is the content repository” vs. “is not the content repository” question:

o  “language content vs. non-language content” (e.g. words and translations vs. users, learning activities and outcomes)

o  “application-neutral” vs. “application-specific” (e.g. words and translations vs. containers and lessons)
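The two dimensions above amount to a simple in-scope/out-of-scope test. Here is a minimal sketch of that test; the asset fields and example names are illustrative assumptions, not the real repository’s model.

```python
from dataclasses import dataclass

# Hypothetical sketch of the "is the content repository?" test.
# An asset belongs in the repository only if it passes BOTH dimensions.

@dataclass
class Asset:
    name: str
    is_language_content: bool      # words, translations vs. users, outcomes
    is_application_neutral: bool   # vs. app-specific containers and lessons

def belongs_in_repository(asset: Asset) -> bool:
    """In scope only if language content AND application-neutral."""
    return asset.is_language_content and asset.is_application_neutral

word = Asset("translation pair", is_language_content=True, is_application_neutral=True)
lesson = Asset("lesson plan", is_language_content=True, is_application_neutral=False)

assert belongs_in_repository(word)
assert not belongs_in_repository(lesson)
```

The point of the sketch is that the scoping debate collapses into two yes/no questions per asset, which is what made the exercise so effective with a mixed audience.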

Excimer lasers and language content are both fascinating subjects, but here, their purpose is to illustrate encapsulation, as a launching pad for a slightly different experiment.


What Does Architecture Encapsulate?

Architecture is both a process and a result. As a process, architecture should be able to encapsulate a set of discovery and formulation activities that yield an effective solution design from a variety of inputs (deployment contexts, stakeholder expectations, technology opportunities, etc.). As a result, architecture seeks to identify “architecturally-significant decisions” and gather them into a pile, so that they all can be considered as a whole. Clements, Kazman, and Klein of the SEI write:

“The software architecture of a program or computing system is the structure or structures of the system, which comprise software components, the externally visible properties of those components, and the relationships among them.”

“By externally-visible properties, we are referring to those assumptions other components are allowed to make of a component, like its provided services, performance characteristics, fault handling, shared resource usage, …”

“Our criteria for something to be architectural is [that it] needs to be externally visible in order to reason about the system’s ability to meet its quality requirements or to support decomposition of the system into independently implementable pieces.”

Please pardon the oversimplification, but architecture encapsulates the reasoning behind:

o  how a system is organized,

o  how the parts collaborate,

o  how it adapts to significant changes between contexts (variation in situation),

o  how it evolves gracefully over time

To slightly extend an observation made by Gerry Weinberg, these four things represent a system’s being, behaving, balancing, and becoming.

So that’s it. We’re done, right? That wasn’t so bad.

Not so fast, amigo.


What Should Architecture Encapsulate?

I am a firm believer in everything that has been said so far. But something is gnawing at me, hinting that the story cannot end here. What’s missing?

If I don’t believe that the definition of architecture, or what it encapsulates, is fundamentally flawed, then perhaps my concern lies in how we answer the question: “what is the system?” Before considering this question, let’s digress for a moment and consider what Russell Ackoff has to say about systems.

After all, if architecture is about a system’s being, behaving, balancing, and becoming, we should be clear about “what is the system?” and “what isn’t the system?”

Ackoff asserts “A system is a collection of interdependent elements; each is related, directly or indirectly to every other.” Further, “a purposive system is a system that has two or more goals, related by a common purpose, and is able to choose the means to achieve them” while a purposeful system “adds the ability to choose its own goals.”

In other words, a person driving a car is a purposeful system (it can choose destination, route, speed, etc.). A car alone is merely purposive (it can accelerate, brake, absorb shocks), unless of course it is “My Mother the Car” (which, by the way, is one of the few TV shows from the 1960s and 1970s that hasn’t been made into a feature movie).

But the key insight here is Ackoff’s observation about “interdependent elements, each related to one another.” What this means is that if we encapsulate at the wrong place, we might push certain elements “outside” of the system that really belong “inside” it.

This is a troubling thought, because it means that our system scope could easily grow without bound.  Consider the following example:

o  A business needs a system – an excimer laser, or a language content repository, or thousands of other possibilities

o  They form a project team to work on it

o  The project team leaders discover and clarify the requirements (some may be mandated)

o  The project team creates some development processes (e.g. agile vs. waterfall, continuous vs. periodic integration, outsource vs. internal)

o  They define the architecture for the system (organization, component interfaces, technology choices, policies)

o  They staff their team, define the work needed, break it down into tasks and dependencies

o  They develop and test the system and prepare to deploy it

o  The business users prepare for the new system (training, workflow changes, migration plan, etc.)

o  The system is deployed, and everybody lives happily ever after (not counting a middle of the night crisis or three)

How many systems can you find in this simple description? Just find the ones that are mentioned. Don’t even bother to look for the ones that may be implied.

The following figure shows at least 6 interdependent systems which combine to make at least 3 others.

The 2 layers in this diagram (the development project and the deployment environment) are systems in their own right.  And each of them contains (at least) 3 systems:

o  the technology system,

o  the social (or team) system,

o  and the project (or business process) system.

So, 3 x 2 = 6, plus 2 = 8, plus the business that contains the development and deployment environments, plus, plus…

System and Architecture Revisited

“Project architecture? Social architecture? Who are you trying to kid? These things have absolutely nothing to do with software architecture. “

The fact is that how the project is organized and how the teams are formed have everything to do with software architecture. Conway’s Law, first published by Melvin Conway in 1968, concludes:

“The basic thesis of this article is that organizations which design systems (in the broad sense used here) are constrained to produce designs which are copies of the communication structures of these organizations. We have seen that this fact has important implications for the management of system design. Primarily, we have found a criterion for the structuring of design organizations: a design effort should be organized according to the need for communication. “

Surely what goes on in the project team, among the team members, or in the deployment environment is unrelated?

Au contraire, mon frère. Conway’s Law directly addresses communication. However, it is also closely linked to physical location, organization structure, and project team structure. To verify this, imagine the differences in working with:

o  another team down the hall,

o  one located in the same building two floors away,

o  one in another building across town, and

o  an offshore outsourcing partner, 10,000 miles and 10 time zones away.

The effects of Conway’s Law also extend to the choice of project process. An agile method like Scrum or XP chooses to focus on short iterations, refactoring, and adaptability over longer-range planning. A waterfall or spiral approach focuses more on up-front planning and coordination of effort. Is it possible that this choice affects the software architecture and the system being built? You think?

The reach of Conway’s Law also extends to cultural norms about decision-making authority, peer reviews, raising and resolving objections, and abiding by unpopular decisions.

Technical architecture is too complex as it is. We’ll get buried under this avalanche.

What if I need to architect a system today? If my system is linked to the deployment environment, what happens when the company brings in a new CEO who reorganizes the whole operation? What happens if a competitor releases an unexpected product that redefines the market? What happens if .com mania strikes and the three leaders of my design team leave to form a startup? We’ll never decide whether to use Java or C#, Oracle or MySQL, SOAP or RESTful Web Services if we need to worry about these things. Just wake me up when the nightmare is over.

Back in the 1980s, Fram Oil Filters ran a popular TV ad that showed an auto mechanic finishing an expensive repair job on a car engine. The mechanic concluded, “You can pay me now. Or… you can pay me later.” You can spend a little every 3,000 miles to change your oil and filter, or you can press your luck and overhaul your engine. The choice is yours.

So, what if I choose to focus inwardly on my software architecture problem? What if I ignore what’s going on with the formation of my project team? What if I push for a new technology like Ruby on Rails without considering whether the developers have the right skills? What if I ignore competitive trends in my industry, or how it is regulated?

Ackoff said that all elements are interrelated, but he didn’t say how tightly. If a spider captures an insect in Montana, it’s not necessarily going to result in a bug in my Web Server.

Synthesis, Risk Management and Hypotheses

There are three very powerful tools that every architect needs to have in his toolbox, and know how to use: synthesis, risk management, and hypotheses.


Ackoff wrote extensively on the distinction between analysis and synthesis. Both concepts date back to the ancient Greeks, and are mirror images of each other.

Analysis starts with a thing and looks inward. It partitions the thing into its component parts, and tries to explain how the parts work (or will work). Architects and designers are very familiar with analysis. It is what they do on a daily basis. Scientists do it, too – biological classification, the periodic table of elements, and geological classifications are examples.

By contrast, synthesis starts with a thing and looks outward. Synthesis asks about the neighboring and containing systems. It seeks to understand what the role of this thing is (or should be) in its environment. Synthesis focuses on the context, environment, and role/purpose of a thing.

Synthesis is extremely important to an architect, because it provides a balanced perspective on the big picture. Synthesis that is performed well provides an architect with an ability to anticipate how things might change or develop. Good anticipation is not uninformed guesswork. It is grounded in a solid understanding of higher-level patterns, much like the way that capable meteorologists use movements of warm/cold fronts, temperature, wind, and high/low pressure systems to forecast the weather.

Synthesis cannot be performed effectively in a vacuum. Typically, there is far too much “conceptual distance” between the neighboring and containing domains and the specifics of the system being developed. This is where it is essential for the architect to identify experts and enlist their assistance. The architect’s job is to:

o  learn enough about the subject matter to be conversant,

o  provide context about the system being developed,

o  ask enough questions to guide the subject matter expert, then

o  ensure that the architect understands the potential implications of the responses.

Risk Management:

From time to time (or, it seems, more frequently), the system we are seeking to develop will be affected by events in a related system. These events can vary along three dimensions:

o  impact: between nonexistent and critical,

o  frequency: between one-time and continuous,

o  likelihood: between highly unlikely and certain.

This is a classic risk management question, and it is important to combine these three dimensions to separate the things that aren’t significant from the ones that are worth worrying about. The marginal risks are the ones we might elect to leave alone and accept. The critical risks are the ones that jeopardize our critical success factors: key capabilities, quality, schedule, or cost. We assess and develop contingency plans for the critical risks. Sometimes these contingency plans only require incremental adjustments. Often, however, they can cause us to rethink some of our fundamental approaches. It is also worth noting that risks change over time, and threats need to be reviewed periodically.
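One way to picture the triage described above is to score each risk on the three dimensions and separate critical from marginal by combined exposure. The scales, scoring formula, and threshold below are assumptions for illustration, not a standard method.

```python
from dataclasses import dataclass

# Illustrative sketch: score risks on impact, frequency, and likelihood,
# then split the marginal ones (accept) from the critical ones (plan for).

@dataclass
class Risk:
    name: str
    impact: float      # 0.0 (nonexistent) .. 1.0 (critical)
    frequency: float   # 0.0 (one-time)    .. 1.0 (continuous)
    likelihood: float  # 0.0 (unlikely)    .. 1.0 (certain)

    def exposure(self) -> float:
        # Assumed combination rule: simple product of the three dimensions.
        return self.impact * self.frequency * self.likelihood

def triage(risks, threshold=0.2):
    """Split risks into (critical, marginal) by exposure."""
    critical = [r for r in risks if r.exposure() >= threshold]
    marginal = [r for r in risks if r.exposure() < threshold]
    return critical, marginal

risks = [
    Risk("key vendor drops product line", impact=0.9, frequency=0.5, likelihood=0.6),
    Risk("spider eats insect in Montana", impact=0.1, frequency=0.1, likelihood=0.9),
]
critical, marginal = triage(risks)
```

Because the dimensions and threshold shift over time, the same triage would be rerun periodically, which is exactly the periodic review the paragraph above calls for.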


As we all learned in high school, hypotheses are the foundation of scientific methods. Hypotheses are testable assertions, frequently used when uncertainty is present. Because they are testable, experiments can be conducted to confirm or refute hypotheses.

Hypotheses can be formed about many things:

o  interdependencies,

o  failure conditions (timing, frequency, impact),

o  effectiveness or suitability of design approaches.

The key thing about hypotheses is that they are statements about a system that can be recorded and traced to other things. Consider a systems engineering environment for a jet engine, such as one to power an Airbus A320.

There are several large, powerful external systems that interact here:

o  The airlines who buy the planes interact with the airframe manufacturer (Boeing, Airbus) to specify the system requirements for the airplane.

o  The FAA (and other international air travel regulating bodies) interact with both the airlines and airframe manufacturers to add requirements, by mandating safety regulations for both outcomes and processes.

o  The airframe manufacturer interacts with the jet engine manufacturers (GE, Rolls Royce).  System engineering for the airplane as a whole (a containing system) creates derived requirements for the engine.

Note that these derived requirements may be made before anything concrete is known about the jet engine, or whether these requirements are even feasible.

o  The process is repeated when the jet engine manufacturer performs its system engineering, and creates derived requirements for the electronics control board, control software, or turbine hydraulics.

More hypotheses need to be made about the ability of these underlying components to come together to satisfy the requirements that have been imposed on them.

The important point to emphasize here is the traceability of hypotheses. Safety-regulated systems engineering processes mandate that requirements are recorded and traced to each other. This provides the audit trail that helps ensure: a) that all upper-level requirements are backed up by lower-level requirements, and b) that there are no stray lower-level requirements that can’t be traced to one or more upper-level ones.

Hypotheses can be treated the same way.  They can be linked to risks, requirements, or architectural approaches.  Further, the results of experiments meant to verify the hypotheses can be linked to the hypotheses themselves.
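That linkage can be sketched as a tiny data structure: a hypothesis carries links to the risks or requirements it addresses, and experiment results link back to it. The requirement id and experiment name below are hypothetical, invented purely for illustration.

```python
from dataclasses import dataclass, field

# Sketch of hypothesis traceability: each hypothesis traces upward to
# risks/requirements, and experiment results trace back to the hypothesis.

@dataclass
class Hypothesis:
    statement: str
    linked_to: list = field(default_factory=list)   # risk or requirement ids
    results: list = field(default_factory=list)     # (experiment, confirmed)

    def record(self, experiment: str, confirmed: bool):
        self.results.append((experiment, confirmed))

h = Hypothesis(
    "Derived thrust requirement is feasible for the turbine design",
    linked_to=["REQ-ENG-042"],   # hypothetical upper-level requirement id
)
h.record("bench test at rated load", confirmed=True)

# The audit trail: nothing dangles in either direction.
assert h.linked_to and h.results
```

The same two-way check mirrors the requirements audit trail above: every hypothesis is backed by something above it, and every experiment result attaches to a hypothesis.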


In conclusion, let’s take a quick look at how synthesis, risk management, and hypotheses are combined. Suppose that we are trying to architect a large, complex system. The following set of steps makes sense:

Step 1: Synthesis. We look outward at containing and neighboring systems and ask, “how does this system support them?” or “how is this system influenced by them?”

Step 2: Analysis. We identify related systems and assess the nature of our relationships with each.

Step 3: Risk Management. Based on this assessment, we identify the major threats and opportunities.

Step 4: Synthesis. We use our synthesized understanding of the surrounding systems to prioritize them according to impact, frequency, and likelihood.

Step 5: Hypothesis. We make and record testable hypotheses about these threats and opportunities. Note that failing to capitalize on an opportunity is a form of risk.

Step 6: Analysis. We prioritize the hypotheses and determine how testable each is. Identify risks you will accept.

Step 7: Verify. Specify the experiments and perform the most important ones that can be tested.
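The middle of that loop (Steps 3 through 7) can be compressed into a few lines: triage the risks, form a testable hypothesis for each critical one, accept the rest, and run experiments. Everything here (the scores, the threshold, the experiment stub) is an assumption for illustration only.

```python
# Compressed, illustrative walk through Steps 3-7 above.

risks = {
    "new regulation changes data retention rules": 0.6,   # assessed exposure
    "desk chairs delivered late": 0.01,
}

THRESHOLD = 0.2  # assumed cut line between critical and accepted
critical = {name: score for name, score in risks.items() if score >= THRESHOLD}
accepted = {name: score for name, score in risks.items() if score < THRESHOLD}

# Step 5: one testable hypothesis per critical risk (wording is illustrative).
hypotheses = [f"The architecture can absorb: {name}" for name in critical]

# Step 7: an 'experiment' stub standing in for a prototype, benchmark, or review.
def run_experiment(hypothesis: str) -> bool:
    return True  # placeholder result; a real experiment would confirm or refute

results = {h: run_experiment(h) for h in hypotheses}
```

What the real versions of `run_experiment` look like is exactly what the situation-dependent questions below are about.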

What action to take next depends on the situation:

o  how do the results of the experiment affect the hypotheses?

o  how do the hypotheses affect the risks?

o  how do the risks affect the requirements and/or approaches they are linked to?

o  how much of a chain reaction is created in the system and its architecture?

While these questions cannot be answered in the general case, certain facts remain:

o  You must do a good job of synthesis to understand “the big picture” for the system being created. Ignoring the neighboring and containing systems is inviting trouble.

o  You don’t have to specify everything in order to build anything.

o  You must have a way to identify what you don’t know, and be able to assess how much danger it creates.

o  When a critical requirement or approach must be decided, and it carries sufficiently large risk, you must form testable hypotheses so that you can verify them later.

Many people believe that risk management is the responsibility of the project manager.  I have mixed feelings about this.

First, risk management has to live wherever there is knowledge or awareness of risks.  The project manager might be aware of schedule risks or people behavior risks, but might not have a deep enough technical knowledge to know about technical risks.

Second, as we’ve discussed above, a project is a big system that encapsulates schedule, resources, social behavior, and technology concerns.  In many large projects, these areas are further divided into responsibility areas like:

o  project management,

o  product (or business operation) management, and

o  technical architecture.

The trouble is that the activities and results are intertwined, and don’t stay as nicely organized. This is a major risk. For a complex system, the job is frequently too big for one individual. However, divided authority and responsibility can be an impediment to communication.

What are we to do?  Why not make some hypotheses?

o  Functional specialties (like project management, product management, and architecture) give the specialists different responsibilities, perspectives, values, and comprehension.

o  These differences between specialists are a result of “conceptual distance” and create a barrier to communication and shared understanding.

o  Unless we can reduce the complexity or shrink the size of the problem, the only solution is to be more effective at reducing conceptual distance.

o  Reducing conceptual distance requires domain experts to work closely together, educate each other, and focus on how smaller decisions impact the big picture.
